Merge 6.12.25 into android16-6.12-lts
GKI (arm64) relevant 33 out of 218 changes, affecting 50 files +373/-248

5ec9039702 driver core: bus: add irq_get_affinity callback to bus_type [1 file, +3/-0]
fe2bdefe86 blk-mq: introduce blk_mq_map_hw_queues [2 files, +39/-0]
6ad0acb56b Bluetooth: hci_event: Fix sending MGMT_EV_DEVICE_FOUND for invalid address [1 file, +3/-2]
d49798ecd2 Bluetooth: l2cap: Check encryption key size on incoming connection [1 file, +2/-1]
b02c2ac2f3 ipv6: add exception routes to GC list in rt6_insert_exception [1 file, +1/-0]
61765e1b41 ethtool: cmis_cdb: use correct rpl size in ethtool_cmis_module_poll() [1 file, +1/-1]
41e43134dd block: fix resource leak in blk_register_queue() error path [1 file, +2/-0]
0175902f6e loop: aio inherit the ioprio of original request [1 file, +1/-1]
78253d44e9 loop: stop using vfs_iter_{read,write} for buffered I/O [1 file, +17/-95]
28da4dd840 writeback: fix false warning in inode_to_wb() [1 file, +1/-0]
f2e2926e9e Revert "PCI: Avoid reset when disabled via sysfs" [1 file, +0/-4]
569bbe2fc7 Bluetooth: l2cap: Process valid commands in too long frame [1 file, +17/-1]
694521cb3f loop: properly send KOBJ_CHANGED uevent for disk device [1 file, +2/-2]
c45ba83935 loop: LOOP_SET_FD: send uevents for partitions [1 file, +2/-1]
4f34d6f979 mm/compaction: fix bug in hugetlb handling pathway [1 file, +3/-3]
b609a60e31 mm/gup: fix wrongly calculated returned value in fault_in_safe_writeable() [1 file, +2/-2]
8338e0723f mm: fix filemap_get_folios_contig returning batches of identical folios [1 file, +1/-0]
029458063e mm: fix apply_to_existing_page_range() [1 file, +2/-2]
b9e3579213 ovl: don't allow datadir only [1 file, +5/-0]
8baa747193 slab: ensure slab->obj_exts is clear in a newly allocated slab page [1 file, +10/-0]
5f878db827 string: Add load_unaligned_zeropad() code path to sized_strscpy() [1 file, +10/-3]
5683eaf4ee tracing: Fix filter string testing [1 file, +2/-2]
c3e31d6139 virtiofs: add filesystem context source name check [1 file, +3/-0]
c1a485c46c cpufreq: Reference count policy in cpufreq_update_limits() [1 file, +8/-0]
5b34f40cda block: remove rq_list_move [1 file, +0/-17]
2ad0f19a4e block: add a rq_list type [11 files, +104/-88]
7e2d224939 block: don't reorder requests in blk_add_rq_to_plug [3 files, +4/-4]
b906c1ad25 mm/vma: add give_up_on_oom option on modify/merge, use in uffd release [3 files, +53/-7]
d30b9c5950 bpf: add find_containing_subprog() utility function [1 file, +24/-4]
1d572c6048 bpf: track changes_pkt_data property for global functions [2 files, +32/-1]
3846e2bea5 bpf: check changes_pkt_data property for extension programs [2 files, +13/-4]
f0946dcccb bpf: fix null dereference when computing changes_pkt_data of prog w/o subprogs [1 file, +5/-2]
f78507c1ef block: make struct rq_list available for !CONFIG_BLOCK [1 file, +1/-1]

Changes in 6.12.25
	scsi: hisi_sas: Enable force phy when SATA disk directly connected
	wifi: at76c50x: fix use after free access in at76_disconnect
	wifi: mac80211: Update skb's control block key in ieee80211_tx_dequeue()
	wifi: mac80211: Purge vif txq in ieee80211_do_stop()
	wifi: wl1251: fix memory leak in wl1251_tx_work
	scsi: iscsi: Fix missing scsi_host_put() in error path
	driver core: bus: add irq_get_affinity callback to bus_type
	blk-mq: introduce blk_mq_map_hw_queues
	scsi: replace blk_mq_pci_map_queues with blk_mq_map_hw_queues
	scsi: smartpqi: Use is_kdump_kernel() to check for kdump
	md/raid10: fix missing discard IO accounting
	md/md-bitmap: fix stats collection for external bitmaps
	ASoC: dwc: always enable/disable i2s irqs
	ASoC: Intel: avs: Fix null-ptr-deref in avs_component_probe()
	crypto: tegra - remove redundant error check on ret
	crypto: tegra - Do not use fixed size buffers
	crypto: tegra - Fix IV usage for AES ECB
	ovl: remove unused forward declaration
	RDMA/usnic: Fix passing zero to PTR_ERR in usnic_ib_pci_probe()
	RDMA/hns: Fix wrong maximum DMA segment size
	ALSA: hda/cirrus_scodec_test: Don't select dependencies
	ALSA: hda: improve bass speaker support for ASUS Zenbook UM5606WA
	ALSA: hda/realtek: Workaround for resume on Dell Venue 11 Pro 7130
	ALSA: hda/realtek - Fixed ASUS platform headset Mic issue
	ASoC: cs42l43: Reset clamp override on jack removal
	RDMA/core: Silence oversized kvmalloc() warning
	Bluetooth: hci_event: Fix sending MGMT_EV_DEVICE_FOUND for invalid address
	Bluetooth: btrtl: Prevent potential NULL dereference
	Bluetooth: l2cap: Check encryption key size on incoming connection
	ipv6: add exception routes to GC list in rt6_insert_exception
	xen: fix multicall debug feature
	Revert "wifi: mac80211: Update skb's control block key in ieee80211_tx_dequeue()"
	igc: fix PTM cycle trigger logic
	igc: increase wait time before retrying PTM
	igc: move ktime snapshot into PTM retry loop
	igc: handle the IGC_PTP_ENABLED flag correctly
	igc: cleanup PTP module if probe fails
	igc: add lock preventing multiple simultaneous PTM transactions
	dt-bindings: soc: fsl: fsl,ls1028a-reset: Fix maintainer entry
	smc: Fix lockdep false-positive for IPPROTO_SMC.
	test suite: use %zu to print size_t
	pds_core: fix memory leak in pdsc_debugfs_add_qcq()
	ethtool: cmis_cdb: use correct rpl size in ethtool_cmis_module_poll()
	net: mctp: Set SOCK_RCU_FREE
	block: fix resource leak in blk_register_queue() error path
	netlink: specs: ovs_vport: align with C codegen capabilities
	net: openvswitch: fix nested key length validation in the set() action
	can: rockchip_canfd: fix broken quirks checks
	net: ngbe: fix memory leak in ngbe_probe() error path
	net: ethernet: ti: am65-cpsw: fix port_np reference counting
	eth: bnxt: fix missing ring index trim on error path
	loop: aio inherit the ioprio of original request
	loop: stop using vfs_iter_{read,write} for buffered I/O
	ata: libata-sata: Save all fields from sense data descriptor
	cxgb4: fix memory leak in cxgb4_init_ethtool_filters() error path
	netlink: specs: rt-link: add an attr layer around alt-ifname
	netlink: specs: rt-link: adjust mctp attribute naming
	net: b53: enable BPDU reception for management port
	net: bridge: switchdev: do not notify new brentries as changed
	net: txgbe: fix memory leak in txgbe_probe() error path
	net: dsa: mv88e6xxx: avoid unregistering devlink regions which were never registered
	net: dsa: mv88e6xxx: fix -ENOENT when deleting VLANs and MST is unsupported
	net: dsa: clean up FDB, MDB, VLAN entries on unbind
	net: dsa: free routing table on probe failure
	net: dsa: avoid refcount warnings when ds->ops->tag_8021q_vlan_del() fails
	ptp: ocp: fix start time alignment in ptp_ocp_signal_set
	net: ti: icss-iep: Add pwidth configuration for perout signal
	net: ti: icss-iep: Add phase offset configuration for perout signal
	net: ti: icss-iep: Fix possible NULL pointer dereference for perout request
	net: ethernet: mtk_eth_soc: reapply mdc divider on reset
	net: ethernet: mtk_eth_soc: correct the max weight of the queue limit for 100Mbps
	net: ethernet: mtk_eth_soc: revise QDMA packet scheduler settings
	riscv: Use kvmalloc_array on relocation_hashtable
	riscv: Properly export reserved regions in /proc/iomem
	riscv: module: Fix out-of-bounds relocation access
	riscv: module: Allocate PLT entries for R_RISCV_PLT32
	kunit: qemu_configs: SH: Respect kunit cmdline
	riscv: KGDB: Do not inline arch_kgdb_breakpoint()
	riscv: KGDB: Remove ".option norvc/.option rvc" for kgdb_compiled_break
	cpufreq/sched: Fix the usage of CPUFREQ_NEED_UPDATE_LIMITS
	objtool/rust: add one more `noreturn` Rust function for Rust 1.86.0
	rust: kasan/kbuild: fix missing flags on first build
	rust: disable `clippy::needless_continue`
	rust: kbuild: use `pound` to support GNU Make < 4.3
	writeback: fix false warning in inode_to_wb()
	Revert "PCI: Avoid reset when disabled via sysfs"
	ASoC: fsl: fsl_qmc_audio: Reset audio data pointers on TRIGGER_START event
	ASoC: codecs:lpass-wsa-macro: Fix vi feedback rate
	ASoC: codecs:lpass-wsa-macro: Fix logic of enabling vi channels
	ASoC: Intel: sof_sdw: Add quirk for Asus Zenbook S16
	ASoC: qcom: Fix sc7280 lpass potential buffer overflow
	asus-laptop: Fix an uninitialized variable
	block: integrity: Do not call set_page_dirty_lock()
	drm/v3d: Fix Indirect Dispatch configuration for V3D 7.1.6 and later
	dma-buf/sw_sync: Decrement refcount on error in sw_sync_ioctl_get_deadline()
	nfs: add missing selections of CONFIG_CRC32
	nfsd: decrease sc_count directly if fail to queue dl_recall
	i2c: atr: Fix wrong include
	ftrace: fix incorrect hash size in register_ftrace_direct()
	drm/msm/a6xx+: Don't let IB_SIZE overflow
	Bluetooth: l2cap: Process valid commands in too long frame
	Bluetooth: vhci: Avoid needless snprintf() calls
	btrfs: correctly escape subvol in btrfs_show_options()
	cpufreq/sched: Explicitly synchronize limits_changed flag handling
	crypto: caam/qi - Fix drv_ctx refcount bug
	hfs/hfsplus: fix slab-out-of-bounds in hfs_bnode_read_key
	i2c: cros-ec-tunnel: defer probe if parent EC is not present
	isofs: Prevent the use of too small fid
	loop: properly send KOBJ_CHANGED uevent for disk device
	loop: LOOP_SET_FD: send uevents for partitions
	mm/compaction: fix bug in hugetlb handling pathway
	mm/gup: fix wrongly calculated returned value in fault_in_safe_writeable()
	mm: fix filemap_get_folios_contig returning batches of identical folios
	mm: fix apply_to_existing_page_range()
	ovl: don't allow datadir only
	ksmbd: Fix dangling pointer in krb_authenticate
	ksmbd: fix use-after-free in smb_break_all_levII_oplock()
	ksmbd: Prevent integer overflow in calculation of deadtime
	ksmbd: fix the warning from __kernel_write_iter
	Revert "smb: client: Fix netns refcount imbalance causing leaks and use-after-free"
	Revert "smb: client: fix TCP timers deadlock after rmmod"
	riscv: Avoid fortify warning in syscall_get_arguments()
	selftests/mm: generate a temporary mountpoint for cgroup filesystem
	slab: ensure slab->obj_exts is clear in a newly allocated slab page
	smb3 client: fix open hardlink on deferred close file error
	string: Add load_unaligned_zeropad() code path to sized_strscpy()
	tracing: Fix filter string testing
	virtiofs: add filesystem context source name check
	x86/microcode/AMD: Extend the SHA check to Zen5, block loading of any unreleased standalone Zen5 microcode patches
	x86/cpu/amd: Fix workaround for erratum 1054
	x86/boot/sev: Avoid shared GHCB page for early memory acceptance
	scsi: megaraid_sas: Block zero-length ATA VPD inquiry
	scsi: ufs: exynos: Ensure consistent phy reference counts
	RDMA/cma: Fix workqueue crash in cma_netevent_work_handler
	RAS/AMD/ATL: Include row[13] bit in row retirement
	RAS/AMD/FMPM: Get masked address
	platform/x86: amd: pmf: Fix STT limits
	perf/x86/intel: Allow to update user space GPRs from PEBS records
	perf/x86/intel/uncore: Fix the scale of IIO free running counters on SNR
	perf/x86/intel/uncore: Fix the scale of IIO free running counters on ICX
	perf/x86/intel/uncore: Fix the scale of IIO free running counters on SPR
	drm/repaper: fix integer overflows in repeat functions
	drm/ast: Fix ast_dp connection status
	drm/msm/dsi: Add check for devm_kstrdup()
	drm/msm/a6xx: Fix stale rpmh votes from GPU
	drm/amdgpu: Prefer shadow rom when available
	drm/amd/display: prevent hang on link training fail
	drm/amd: Handle being compiled without SI or CIK support better
	drm/amd/display: Actually do immediate vblank disable
	drm/amd/display: Increase vblank offdelay for PSR panels
	drm/amd/pm: Prevent division by zero
	drm/amd/pm/powerplay: Prevent division by zero
	drm/amd/pm/smu11: Prevent division by zero
	drm/amd/pm/powerplay/hwmgr/smu7_thermal: Prevent division by zero
	drm/amd/pm/swsmu/smu13/smu_v13_0: Prevent division by zero
	drm/amd/pm/powerplay/hwmgr/vega20_thermal: Prevent division by zero
	drm/amdgpu/mes12: optimize MES pipe FW version fetching
	drm/i915/vrr: Add vrr.vsync_{start, end} in vrr_params_changed
	drm/xe: Use local fence in error path of xe_migrate_clear
	drm/amd/display: Add HP Elitebook 645 to the quirk list for eDP on DP1
	drm/amd/display: Protect FPU in dml2_validate()/dml21_validate()
	drm/amd/display: Protect FPU in dml21_copy()
	drm/amdgpu/mes11: optimize MES pipe FW version fetching
	drm/amdgpu/dma_buf: fix page_link check
	drm/nouveau: prime: fix ttm_bo_delayed_delete oops
	drm/imagination: fix firmware memory leaks
	drm/imagination: take paired job reference
	drm/sti: remove duplicate object names
	drm/xe: Fix an out-of-bounds shift when invalidating TLB
	drm/i915/gvt: fix unterminated-string-initialization warning
	drm/amdgpu: immediately use GTT for new allocations
	drm/amd/display: Do not enable Replay and PSR while VRR is on in amdgpu_dm_commit_planes()
	drm/amd/display: Protect FPU in dml2_init()/dml21_init()
	drm/amd/display: Add HP Probook 445 and 465 to the quirk list for eDP on DP1
	drm/xe/dma_buf: stop relying on placement in unmap
	drm/xe/userptr: fix notifier vs folio deadlock
	drm/xe: Set LRC addresses before guc load
	drm/amdgpu: fix warning of drm_mm_clean
	drm/mgag200: Fix value in <VBLKSTR> register
	arm64/sysreg: Update register fields for ID_AA64MMFR0_EL1
	arm64/sysreg: Add register fields for HDFGRTR2_EL2
	arm64/sysreg: Add register fields for HDFGWTR2_EL2
	arm64/sysreg: Add register fields for HFGITR2_EL2
	arm64/sysreg: Add register fields for HFGRTR2_EL2
	arm64/sysreg: Add register fields for HFGWTR2_EL2
	arm64/boot: Enable EL2 requirements for FEAT_PMUv3p9
	cpufreq: Reference count policy in cpufreq_update_limits()
	scripts: generate_rust_analyzer: Add ffi crate
	kbuild: Add '-fno-builtin-wcslen'
	platform/x86: msi-wmi-platform: Rename "data" variable
	platform/x86: msi-wmi-platform: Workaround a ACPI firmware bug
	md: fix mddev uaf while iterating all_mddevs list
	selftests/bpf: Fix raw_tp null handling test
	misc: pci_endpoint_test: Avoid issue of interrupts remaining after request_irq error
	misc: pci_endpoint_test: Fix 'irq_type' to convey the correct type
	efi/libstub: Bump up EFI_MMAP_NR_SLACK_SLOTS to 32
	LoongArch: Eliminate superfluous get_numa_distances_cnt()
	drm/amd/display: Temporarily disable hostvm on DCN31
	nvmet-fc: Remove unused functions
	block: remove rq_list_move
	block: add a rq_list type
	block: don't reorder requests in blk_add_rq_to_plug
	mm/vma: add give_up_on_oom option on modify/merge, use in uffd release
	Revert "wifi: ath12k: Fix invalid entry fetch in ath12k_dp_mon_srng_process"
	MIPS: dec: Declare which_prom() as static
	MIPS: cevt-ds1287: Add missing ds1287.h include
	MIPS: ds1287: Match ds1287_set_base_clock() function types
	wifi: ath12k: Fix invalid entry fetch in ath12k_dp_mon_srng_process
	bpf: add find_containing_subprog() utility function
	bpf: track changes_pkt_data property for global functions
	selftests/bpf: test for changing packet data from global functions
	bpf: check changes_pkt_data property for extension programs
	selftests/bpf: freplace tests for tracking of changes_packet_data
	selftests/bpf: validate that tail call invalidates packet pointers
	bpf: fix null dereference when computing changes_pkt_data of prog w/o subprogs
	selftests/bpf: extend changes_pkt_data with cases w/o subprograms
	block: make struct rq_list available for !CONFIG_BLOCK
	Linux 6.12.25

Change-Id: Ib99b782fabf924c599a3c66bcac37febef9d422e
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -285,6 +285,12 @@ Before jumping into the kernel, the following conditions must be met:

- SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.

For CPUs with the Fine Grained Traps 2 (FEAT_FGT2) extension present:

- If EL3 is present and the kernel is entered at EL2:

- SCR_EL3.FGTEn2 (bit 59) must be initialised to 0b1.

For CPUs with support for HCRX_EL2 (FEAT_HCX) present:

- If EL3 is present and the kernel is entered at EL2:
@@ -379,6 +385,22 @@ Before jumping into the kernel, the following conditions must be met:

- SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1.

For CPUs with the Performance Monitors Extension (FEAT_PMUv3p9):

- If EL3 is present:

- MDCR_EL3.EnPM2 (bit 7) must be initialised to 0b1.

- If the kernel is entered at EL1 and EL2 is present:

- HDFGRTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1.
- HDFGRTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1.
- HDFGRTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1.

- HDFGWTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1.
- HDFGWTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1.
- HDFGWTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1.

For CPUs with Memory Copy and Memory Set instructions (FEAT_MOPS):

- If the kernel is entered at EL1 and EL2 is present:

@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
title: Freescale Layerscape Reset Registers Module

maintainers:
- Frank Li
- Frank Li <Frank.Li@nxp.com>

description:
Reset Module includes chip reset, service processor control and Reset Control

@@ -123,12 +123,12 @@ attribute-sets:

operations:
name-prefix: ovs-vport-cmd-
fixed-header: ovs-header
list:
-
name: new
doc: Create a new OVS vport
attribute-set: vport
fixed-header: ovs-header
do:
request:
attributes:
@@ -141,7 +141,6 @@ operations:
name: del
doc: Delete existing OVS vport from a data path
attribute-set: vport
fixed-header: ovs-header
do:
request:
attributes:
@@ -152,7 +151,6 @@ operations:
name: get
doc: Get / dump OVS vport configuration and state
attribute-set: vport
fixed-header: ovs-header
do: &vport-get-op
request:
attributes:

@@ -1094,11 +1094,10 @@ attribute-sets:
-
name: prop-list
type: nest
nested-attributes: link-attrs
nested-attributes: prop-list-link-attrs
-
name: alt-ifname
type: string
multi-attr: true
-
name: perm-address
type: binary
@@ -1137,6 +1136,13 @@ attribute-sets:
name: dpll-pin
type: nest
nested-attributes: link-dpll-pin-attrs
-
name: prop-list-link-attrs
subset-of: link-attrs
attributes:
-
name: alt-ifname
multi-attr: true
-
name: af-spec-attrs
attributes:
@@ -2071,9 +2077,10 @@ attribute-sets:
type: u32
-
name: mctp-attrs
name-prefix: ifla-mctp-
attributes:
-
name: mctp-net
name: net
type: u32
-
name: stats-attrs
@@ -2319,7 +2326,6 @@ operations:
- min-mtu
- max-mtu
- prop-list
- alt-ifname
- perm-address
- proto-down-reason
- parent-dev-name

@@ -138,6 +138,10 @@ input data, the meaning of which depends on the subfeature being accessed.
The output buffer contains a single byte which signals success or failure (``0x00`` on failure)
and 31 bytes of output data, the meaning if which depends on the subfeature being accessed.

.. note::
The ACPI control method responsible for handling the WMI method calls is not thread-safe.
This is a firmware bug that needs to be handled inside the driver itself.

WMI method Get_EC()
-------------------

Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 6
PATCHLEVEL = 12
SUBLEVEL = 24
SUBLEVEL = 25
EXTRAVERSION =
NAME = Baby Opossum Posse

@@ -473,7 +473,6 @@ export rust_common_flags := --edition=2021 \
-Wclippy::ignored_unit_patterns \
-Wclippy::mut_mut \
-Wclippy::needless_bitwise_bool \
-Wclippy::needless_continue \
-Aclippy::needless_lifetimes \
-Wclippy::no_mangle_with_rust_abi \
-Wclippy::undocumented_unsafe_blocks \
@@ -1048,6 +1047,9 @@ endif
# Ensure compilers do not transform certain loops into calls to wcslen()
KBUILD_CFLAGS += -fno-builtin-wcslen

# Ensure compilers do not transform certain loops into calls to wcslen()
KBUILD_CFLAGS += -fno-builtin-wcslen

# change __FILE__ to the relative path from the srctree
KBUILD_CPPFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)

@@ -215,6 +215,30 @@
.Lskip_fgt_\@:
.endm

.macro __init_el2_fgt2
mrs x1, id_aa64mmfr0_el1
ubfx x1, x1, #ID_AA64MMFR0_EL1_FGT_SHIFT, #4
cmp x1, #ID_AA64MMFR0_EL1_FGT_FGT2
b.lt .Lskip_fgt2_\@

mov x0, xzr
mrs x1, id_aa64dfr0_el1
ubfx x1, x1, #ID_AA64DFR0_EL1_PMUVer_SHIFT, #4
cmp x1, #ID_AA64DFR0_EL1_PMUVer_V3P9
b.lt .Lskip_pmuv3p9_\@

orr x0, x0, #HDFGRTR2_EL2_nPMICNTR_EL0
orr x0, x0, #HDFGRTR2_EL2_nPMICFILTR_EL0
orr x0, x0, #HDFGRTR2_EL2_nPMUACR_EL1
.Lskip_pmuv3p9_\@:
msr_s SYS_HDFGRTR2_EL2, x0
msr_s SYS_HDFGWTR2_EL2, x0
msr_s SYS_HFGRTR2_EL2, xzr
msr_s SYS_HFGWTR2_EL2, xzr
msr_s SYS_HFGITR2_EL2, xzr
.Lskip_fgt2_\@:
.endm

.macro __init_el2_nvhe_prepare_eret
mov x0, #INIT_PSTATE_EL1
msr spsr_el2, x0
@@ -256,6 +280,7 @@
__init_el2_nvhe_idregs
__init_el2_cptr
__init_el2_fgt
__init_el2_fgt2
.endm

#ifndef __KVM_NVHE_HYPERVISOR__

@@ -1238,6 +1238,7 @@ UnsignedEnum 11:8 PMUVer
0b0110 V3P5
0b0111 V3P7
0b1000 V3P8
0b1001 V3P9
0b1111 IMP_DEF
EndEnum
UnsignedEnum 7:4 TraceVer
@@ -1556,6 +1557,7 @@ EndEnum
UnsignedEnum 59:56 FGT
0b0000 NI
0b0001 IMP
0b0010 FGT2
EndEnum
Res0 55:48
UnsignedEnum 47:44 EXS
@@ -1617,6 +1619,7 @@ Enum 3:0 PARANGE
0b0100 44
0b0101 48
0b0110 52
0b0111 56
EndEnum
EndSysreg

@@ -2463,6 +2466,101 @@ Field 1 ICIALLU
Field 0 ICIALLUIS
EndSysreg

Sysreg HDFGRTR2_EL2 3 4 3 1 0
Res0 63:25
Field 24 nPMBMAR_EL1
Field 23 nMDSTEPOP_EL1
Field 22 nTRBMPAM_EL1
Res0 21
Field 20 nTRCITECR_EL1
Field 19 nPMSDSFR_EL1
Field 18 nSPMDEVAFF_EL1
Field 17 nSPMID
Field 16 nSPMSCR_EL1
Field 15 nSPMACCESSR_EL1
Field 14 nSPMCR_EL0
Field 13 nSPMOVS
Field 12 nSPMINTEN
Field 11 nSPMCNTEN
Field 10 nSPMSELR_EL0
Field 9 nSPMEVTYPERn_EL0
Field 8 nSPMEVCNTRn_EL0
Field 7 nPMSSCR_EL1
Field 6 nPMSSDATA
Field 5 nMDSELR_EL1
Field 4 nPMUACR_EL1
Field 3 nPMICFILTR_EL0
Field 2 nPMICNTR_EL0
Field 1 nPMIAR_EL1
Field 0 nPMECR_EL1
EndSysreg

Sysreg HDFGWTR2_EL2 3 4 3 1 1
Res0 63:25
Field 24 nPMBMAR_EL1
Field 23 nMDSTEPOP_EL1
Field 22 nTRBMPAM_EL1
Field 21 nPMZR_EL0
Field 20 nTRCITECR_EL1
Field 19 nPMSDSFR_EL1
Res0 18:17
Field 16 nSPMSCR_EL1
Field 15 nSPMACCESSR_EL1
Field 14 nSPMCR_EL0
Field 13 nSPMOVS
Field 12 nSPMINTEN
Field 11 nSPMCNTEN
Field 10 nSPMSELR_EL0
Field 9 nSPMEVTYPERn_EL0
Field 8 nSPMEVCNTRn_EL0
Field 7 nPMSSCR_EL1
Res0 6
Field 5 nMDSELR_EL1
Field 4 nPMUACR_EL1
Field 3 nPMICFILTR_EL0
Field 2 nPMICNTR_EL0
Field 1 nPMIAR_EL1
Field 0 nPMECR_EL1
EndSysreg

Sysreg HFGRTR2_EL2 3 4 3 1 2
Res0 63:15
Field 14 nACTLRALIAS_EL1
Field 13 nACTLRMASK_EL1
Field 12 nTCR2ALIAS_EL1
Field 11 nTCRALIAS_EL1
Field 10 nSCTLRALIAS2_EL1
Field 9 nSCTLRALIAS_EL1
Field 8 nCPACRALIAS_EL1
Field 7 nTCR2MASK_EL1
Field 6 nTCRMASK_EL1
Field 5 nSCTLR2MASK_EL1
Field 4 nSCTLRMASK_EL1
Field 3 nCPACRMASK_EL1
Field 2 nRCWSMASK_EL1
Field 1 nERXGSR_EL1
Field 0 nPFAR_EL1
EndSysreg

Sysreg HFGWTR2_EL2 3 4 3 1 3
Res0 63:15
Field 14 nACTLRALIAS_EL1
Field 13 nACTLRMASK_EL1
Field 12 nTCR2ALIAS_EL1
Field 11 nTCRALIAS_EL1
Field 10 nSCTLRALIAS2_EL1
Field 9 nSCTLRALIAS_EL1
Field 8 nCPACRALIAS_EL1
Field 7 nTCR2MASK_EL1
Field 6 nTCRMASK_EL1
Field 5 nSCTLR2MASK_EL1
Field 4 nSCTLRMASK_EL1
Field 3 nCPACRMASK_EL1
Field 2 nRCWSMASK_EL1
Res0 1
Field 0 nPFAR_EL1
EndSysreg

Sysreg HDFGRTR_EL2 3 4 3 1 4
Field 63 PMBIDR_EL1
Field 62 nPMSNEVFR_EL1
@@ -2635,6 +2733,12 @@ Field 1 AMEVCNTR00_EL0
Field 0 AMCNTEN0
EndSysreg

Sysreg HFGITR2_EL2 3 4 3 1 7
Res0 63:2
Field 1 nDCCIVAPS
Field 0 TSBCSYNC
EndSysreg

Sysreg ZCR_EL2 3 4 1 2 0
Fields ZCR_ELx
EndSysreg

@@ -249,18 +249,6 @@ static __init int setup_node(int pxm)
return acpi_map_pxm_to_node(pxm);
}

/*
* Callback for SLIT parsing. pxm_to_node() returns NUMA_NO_NODE for
* I/O localities since SRAT does not list them. I/O localities are
* not supported at this point.
*/
unsigned int numa_distance_cnt;

static inline unsigned int get_numa_distances_cnt(struct acpi_table_slit *slit)
{
return slit->locality_count;
}

void __init numa_set_distance(int from, int to, int distance)
{
if ((u8)distance != distance || (from == to && distance != LOCAL_DISTANCE)) {

@@ -42,7 +42,7 @@ int (*__pmax_close)(int);
* Detect which PROM the DECSTATION has, and set the callback vectors
* appropriately.
*/
void __init which_prom(s32 magic, s32 *prom_vec)
static void __init which_prom(s32 magic, s32 *prom_vec)
{
/*
* No sign of the REX PROM's magic number means we assume a non-REX

@@ -8,7 +8,7 @@
#define __ASM_DS1287_H

extern int ds1287_timer_state(void);
extern void ds1287_set_base_clock(unsigned int clock);
extern int ds1287_set_base_clock(unsigned int hz);
extern int ds1287_clockevent_init(int irq);

#endif

@@ -10,6 +10,7 @@
#include <linux/mc146818rtc.h>
#include <linux/irq.h>

#include <asm/ds1287.h>
#include <asm/time.h>

int ds1287_timer_state(void)

@@ -19,16 +19,9 @@

#ifndef __ASSEMBLY__

void arch_kgdb_breakpoint(void);
extern unsigned long kgdb_compiled_break;

static inline void arch_kgdb_breakpoint(void)
{
asm(".global kgdb_compiled_break\n"
".option norvc\n"
"kgdb_compiled_break: ebreak\n"
".option rvc\n");
}

#endif /* !__ASSEMBLY__ */

#define DBG_REG_ZERO "zero"

@@ -62,8 +62,11 @@ static inline void syscall_get_arguments(struct task_struct *task,
unsigned long *args)
{
args[0] = regs->orig_a0;
args++;
memcpy(args, &regs->a1, 5 * sizeof(args[0]));
args[1] = regs->a1;
args[2] = regs->a2;
args[3] = regs->a3;
args[4] = regs->a4;
args[5] = regs->a5;
}

static inline int syscall_get_arch(struct task_struct *task)

@@ -254,6 +254,12 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long pc)
regs->epc = pc;
}

noinline void arch_kgdb_breakpoint(void)
{
asm(".global kgdb_compiled_break\n"
"kgdb_compiled_break: ebreak\n");
}

void kgdb_arch_handle_qxfer_pkt(char *remcom_in_buffer,
char *remcom_out_buffer)
{

@@ -73,16 +73,17 @@ static bool duplicate_rela(const Elf_Rela *rela, int idx)
static void count_max_entries(Elf_Rela *relas, int num,
unsigned int *plts, unsigned int *gots)
{
unsigned int type, i;

for (i = 0; i < num; i++) {
type = ELF_RISCV_R_TYPE(relas[i].r_info);
if (type == R_RISCV_CALL_PLT) {
for (int i = 0; i < num; i++) {
switch (ELF_R_TYPE(relas[i].r_info)) {
case R_RISCV_CALL_PLT:
case R_RISCV_PLT32:
if (!duplicate_rela(relas, i))
(*plts)++;
} else if (type == R_RISCV_GOT_HI20) {
break;
case R_RISCV_GOT_HI20:
if (!duplicate_rela(relas, i))
(*gots)++;
break;
}
}
}

@@ -648,7 +648,7 @@ process_accumulated_relocations(struct module *me,
kfree(bucket_iter);
}

kfree(*relocation_hashtable);
kvfree(*relocation_hashtable);
}

static int add_relocation_to_accumulate(struct module *me, int type,
@@ -752,9 +752,10 @@ initialize_relocation_hashtable(unsigned int num_relocations,

hashtable_size <<= should_double_size;

*relocation_hashtable = kmalloc_array(hashtable_size,
sizeof(**relocation_hashtable),
GFP_KERNEL);
/* Number of relocations may be large, so kvmalloc it */
*relocation_hashtable = kvmalloc_array(hashtable_size,
sizeof(**relocation_hashtable),
GFP_KERNEL);
if (!*relocation_hashtable)
return 0;

@@ -859,7 +860,7 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
}

j++;
if (j > sechdrs[relsec].sh_size / sizeof(*rel))
if (j == num_relocations)
j = 0;

} while (j_idx != j);

@@ -66,6 +66,9 @@ static struct resource bss_res = { .name = "Kernel bss", };
static struct resource elfcorehdr_res = { .name = "ELF Core hdr", };
#endif

static int num_standard_resources;
static struct resource *standard_resources;

static int __init add_resource(struct resource *parent,
struct resource *res)
{
@@ -139,7 +142,7 @@ static void __init init_resources(void)
struct resource *res = NULL;
struct resource *mem_res = NULL;
size_t mem_res_sz = 0;
int num_resources = 0, res_idx = 0;
int num_resources = 0, res_idx = 0, non_resv_res = 0;
int ret = 0;

/* + 1 as memblock_alloc() might increase memblock.reserved.cnt */
@@ -195,6 +198,7 @@ static void __init init_resources(void)
/* Add /memory regions to the resource tree */
for_each_mem_region(region) {
res = &mem_res[res_idx--];
non_resv_res++;

if (unlikely(memblock_is_nomap(region))) {
res->name = "Reserved";
@@ -212,6 +216,9 @@ static void __init init_resources(void)
goto error;
}

num_standard_resources = non_resv_res;
standard_resources = &mem_res[res_idx + 1];

/* Clean-up any unused pre-allocated resources */
if (res_idx >= 0)
memblock_free(mem_res, (res_idx + 1) * sizeof(*mem_res));
@@ -223,6 +230,33 @@ static void __init init_resources(void)
memblock_free(mem_res, mem_res_sz);
}

static int __init reserve_memblock_reserved_regions(void)
{
u64 i, j;

for (i = 0; i < num_standard_resources; i++) {
struct resource *mem = &standard_resources[i];
phys_addr_t r_start, r_end, mem_size = resource_size(mem);

if (!memblock_is_region_reserved(mem->start, mem_size))
continue;

for_each_reserved_mem_range(j, &r_start, &r_end) {
resource_size_t start, end;

start = max(PFN_PHYS(PFN_DOWN(r_start)), mem->start);
end = min(PFN_PHYS(PFN_UP(r_end)) - 1, mem->end);

if (start > mem->end || end < mem->start)
continue;

reserve_region_with_split(mem, start, end, "Reserved");
}
}

return 0;
}
arch_initcall(reserve_memblock_reserved_regions);

static void __init parse_dtb(void)
{

@@ -34,11 +34,14 @@ static bool early_is_tdx_guest(void)

void arch_accept_memory(phys_addr_t start, phys_addr_t end)
{
    static bool sevsnp;

    /* Platform-specific memory-acceptance call goes here */
    if (early_is_tdx_guest()) {
        if (!tdx_accept_memory(start, end))
            panic("TDX: Failed to accept memory\n");
    } else if (sev_snp_enabled()) {
    } else if (sevsnp || (sev_get_status() & MSR_AMD64_SEV_SNP_ENABLED)) {
        sevsnp = true;
        snp_accept_memory(start, end);
    } else {
        error("Cannot accept memory: unknown platform\n");

@@ -164,10 +164,7 @@ bool sev_snp_enabled(void)

static void __page_state_change(unsigned long paddr, enum psc_op op)
{
    u64 val;

    if (!sev_snp_enabled())
        return;
    u64 val, msr;

    /*
     * If private -> shared then invalidate the page before requesting the
@@ -176,6 +173,9 @@ static void __page_state_change(unsigned long paddr, enum psc_op op)
    if (op == SNP_PAGE_STATE_SHARED)
        pvalidate_4k_page(paddr, paddr, false);

    /* Save the current GHCB MSR value */
    msr = sev_es_rd_ghcb_msr();

    /* Issue VMGEXIT to change the page state in RMP table. */
    sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
    VMGEXIT();
@@ -185,6 +185,9 @@ static void __page_state_change(unsigned long paddr, enum psc_op op)
    if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
        sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);

    /* Restore the GHCB MSR value */
    sev_es_wr_ghcb_msr(msr);

    /*
     * Now that page state is changed in the RMP table, validate it so that it is
     * consistent with the RMP entry.
@@ -195,11 +198,17 @@ static void __page_state_change(unsigned long paddr, enum psc_op op)

void snp_set_page_private(unsigned long paddr)
{
    if (!sev_snp_enabled())
        return;

    __page_state_change(paddr, SNP_PAGE_STATE_PRIVATE);
}

void snp_set_page_shared(unsigned long paddr)
{
    if (!sev_snp_enabled())
        return;

    __page_state_change(paddr, SNP_PAGE_STATE_SHARED);
}

@@ -223,56 +232,10 @@ static bool early_setup_ghcb(void)
    return true;
}

static phys_addr_t __snp_accept_memory(struct snp_psc_desc *desc,
                                       phys_addr_t pa, phys_addr_t pa_end)
{
    struct psc_hdr *hdr;
    struct psc_entry *e;
    unsigned int i;

    hdr = &desc->hdr;
    memset(hdr, 0, sizeof(*hdr));

    e = desc->entries;

    i = 0;
    while (pa < pa_end && i < VMGEXIT_PSC_MAX_ENTRY) {
        hdr->end_entry = i;

        e->gfn = pa >> PAGE_SHIFT;
        e->operation = SNP_PAGE_STATE_PRIVATE;
        if (IS_ALIGNED(pa, PMD_SIZE) && (pa_end - pa) >= PMD_SIZE) {
            e->pagesize = RMP_PG_SIZE_2M;
            pa += PMD_SIZE;
        } else {
            e->pagesize = RMP_PG_SIZE_4K;
            pa += PAGE_SIZE;
        }

        e++;
        i++;
    }

    if (vmgexit_psc(boot_ghcb, desc))
        sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);

    pvalidate_pages(desc);

    return pa;
}

void snp_accept_memory(phys_addr_t start, phys_addr_t end)
{
    struct snp_psc_desc desc = {};
    unsigned int i;
    phys_addr_t pa;

    if (!boot_ghcb && !early_setup_ghcb())
        sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);

    pa = start;
    while (pa < end)
        pa = __snp_accept_memory(&desc, pa, end);
    for (phys_addr_t pa = start; pa < end; pa += PAGE_SIZE)
        __page_state_change(pa, SNP_PAGE_STATE_PRIVATE);
}

void sev_es_shutdown_ghcb(void)

@@ -12,11 +12,13 @@

bool sev_snp_enabled(void);
void snp_accept_memory(phys_addr_t start, phys_addr_t end);
u64 sev_get_status(void);

#else

static inline bool sev_snp_enabled(void) { return false; }
static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
static inline u64 sev_get_status(void) { return 0; }

#endif


@@ -1317,8 +1317,10 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
     * + precise_ip < 2 for the non event IP
     * + For RTM TSX weight we need GPRs for the abort code.
     */
    gprs = (sample_type & PERF_SAMPLE_REGS_INTR) &&
           (attr->sample_regs_intr & PEBS_GP_REGS);
    gprs = ((sample_type & PERF_SAMPLE_REGS_INTR) &&
            (attr->sample_regs_intr & PEBS_GP_REGS)) ||
           ((sample_type & PERF_SAMPLE_REGS_USER) &&
            (attr->sample_regs_user & PEBS_GP_REGS));

    tsx_weight = (sample_type & PERF_SAMPLE_WEIGHT_TYPE) &&
                 ((attr->config & INTEL_ARCH_EVENT_MASK) ==
@@ -1970,7 +1972,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
        regs->flags &= ~PERF_EFLAGS_EXACT;
    }

    if (sample_type & PERF_SAMPLE_REGS_INTR)
    if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))
        adaptive_pebs_save_regs(regs, gprs);
}


@@ -4891,28 +4891,28 @@ static struct uncore_event_desc snr_uncore_iio_freerunning_events[] = {
    INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"),
    /* Free-Running IIO BANDWIDTH IN Counters */
    INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.0517578125e-5"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.0517578125e-5"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.0517578125e-5"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.0517578125e-5"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.0517578125e-5"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.0517578125e-5"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.0517578125e-5"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.0517578125e-5"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"),
    { /* end: all zeroes */ },
};
@@ -5485,37 +5485,6 @@ static struct freerunning_counters icx_iio_freerunning[] = {
    [ICX_IIO_MSR_BW_IN] = { 0xaa0, 0x1, 0x10, 8, 48, icx_iio_bw_freerunning_box_offsets },
};

static struct uncore_event_desc icx_uncore_iio_freerunning_events[] = {
    /* Free-Running IIO CLOCKS Counter */
    INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"),
    /* Free-Running IIO BANDWIDTH IN Counters */
    INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"),
    { /* end: all zeroes */ },
};

static struct intel_uncore_type icx_uncore_iio_free_running = {
    .name = "iio_free_running",
    .num_counters = 9,
@@ -5523,7 +5492,7 @@ static struct intel_uncore_type icx_uncore_iio_free_running = {
    .num_freerunning_types = ICX_IIO_FREERUNNING_TYPE_MAX,
    .freerunning = icx_iio_freerunning,
    .ops = &skx_uncore_iio_freerunning_ops,
    .event_descs = icx_uncore_iio_freerunning_events,
    .event_descs = snr_uncore_iio_freerunning_events,
    .format_group = &skx_uncore_iio_freerunning_format_group,
};

@@ -6320,69 +6289,13 @@ static struct freerunning_counters spr_iio_freerunning[] = {
    [SPR_IIO_MSR_BW_OUT] = { 0x3808, 0x1, 0x10, 8, 48 },
};

static struct uncore_event_desc spr_uncore_iio_freerunning_events[] = {
    /* Free-Running IIO CLOCKS Counter */
    INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"),
    /* Free-Running IIO BANDWIDTH IN Counters */
    INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"),
    /* Free-Running IIO BANDWIDTH OUT Counters */
    INTEL_UNCORE_EVENT_DESC(bw_out_port0, "event=0xff,umask=0x30"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port0.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port0.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port1, "event=0xff,umask=0x31"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port1.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port1.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port2, "event=0xff,umask=0x32"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port2.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port2.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port3, "event=0xff,umask=0x33"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port3.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port3.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port4, "event=0xff,umask=0x34"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port4.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port4.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port5, "event=0xff,umask=0x35"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port5.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port5.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port6, "event=0xff,umask=0x36"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port6.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port6.unit, "MiB"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port7, "event=0xff,umask=0x37"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port7.scale, "3.814697266e-6"),
    INTEL_UNCORE_EVENT_DESC(bw_out_port7.unit, "MiB"),
    { /* end: all zeroes */ },
};

static struct intel_uncore_type spr_uncore_iio_free_running = {
    .name = "iio_free_running",
    .num_counters = 17,
    .num_freerunning_types = SPR_IIO_FREERUNNING_TYPE_MAX,
    .freerunning = spr_iio_freerunning,
    .ops = &skx_uncore_iio_freerunning_ops,
    .event_descs = spr_uncore_iio_freerunning_events,
    .event_descs = snr_uncore_iio_freerunning_events,
    .format_group = &skx_uncore_iio_freerunning_format_group,
};


@@ -862,6 +862,16 @@ static void init_amd_zen1(struct cpuinfo_x86 *c)

    pr_notice_once("AMD Zen1 DIV0 bug detected. Disable SMT for full protection.\n");
    setup_force_cpu_bug(X86_BUG_DIV0);

    /*
     * Turn off the Instructions Retired free counter on machines that are
     * susceptible to erratum #1054 "Instructions Retired Performance
     * Counter May Be Inaccurate".
     */
    if (c->x86_model < 0x30) {
        msr_clear_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
        clear_cpu_cap(c, X86_FEATURE_IRPERF);
    }
}

static bool cpu_has_zenbleed_microcode(void)
@@ -1045,13 +1055,8 @@ static void init_amd(struct cpuinfo_x86 *c)
    if (!cpu_feature_enabled(X86_FEATURE_XENPV))
        set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);

    /*
     * Turn on the Instructions Retired free counter on machines not
     * susceptible to erratum #1054 "Instructions Retired Performance
     * Counter May Be Inaccurate".
     */
    if (cpu_has(c, X86_FEATURE_IRPERF) &&
        (boot_cpu_has(X86_FEATURE_ZEN1) && c->x86_model > 0x2f))
    /* Enable the Instructions Retired free counter */
    if (cpu_has(c, X86_FEATURE_IRPERF))
        msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);

    check_null_seg_clears_base(c);

@@ -199,6 +199,12 @@ static bool need_sha_check(u32 cur_rev)
    case 0xa70c0: return cur_rev <= 0xa70C009; break;
    case 0xaa001: return cur_rev <= 0xaa00116; break;
    case 0xaa002: return cur_rev <= 0xaa00218; break;
    case 0xb0021: return cur_rev <= 0xb002146; break;
    case 0xb1010: return cur_rev <= 0xb101046; break;
    case 0xb2040: return cur_rev <= 0xb204031; break;
    case 0xb4040: return cur_rev <= 0xb404031; break;
    case 0xb6000: return cur_rev <= 0xb600031; break;
    case 0xb7000: return cur_rev <= 0xb700031; break;
    default: break;
    }

@@ -214,8 +220,7 @@ static bool verify_sha256_digest(u32 patch_id, u32 cur_rev, const u8 *data, unsi
    struct sha256_state s;
    int i;

    if (x86_family(bsp_cpuid_1_eax) < 0x17 ||
        x86_family(bsp_cpuid_1_eax) > 0x19)
    if (x86_family(bsp_cpuid_1_eax) < 0x17)
        return true;

    if (!need_sha_check(cur_rev))

@@ -54,14 +54,20 @@ struct mc_debug_data {

static DEFINE_PER_CPU(struct mc_buffer, mc_buffer);
static struct mc_debug_data mc_debug_data_early __initdata;
static DEFINE_PER_CPU(struct mc_debug_data *, mc_debug_data) =
    &mc_debug_data_early;
static struct mc_debug_data __percpu *mc_debug_data_ptr;
DEFINE_PER_CPU(unsigned long, xen_mc_irq_flags);

static struct static_key mc_debug __ro_after_init;
static bool mc_debug_enabled __initdata;

static struct mc_debug_data * __ref get_mc_debug(void)
{
    if (!mc_debug_data_ptr)
        return &mc_debug_data_early;

    return this_cpu_ptr(mc_debug_data_ptr);
}

static int __init xen_parse_mc_debug(char *arg)
{
    mc_debug_enabled = true;
@@ -71,20 +77,16 @@ static int __init xen_parse_mc_debug(char *arg)
}
early_param("xen_mc_debug", xen_parse_mc_debug);

void mc_percpu_init(unsigned int cpu)
{
    per_cpu(mc_debug_data, cpu) = per_cpu_ptr(mc_debug_data_ptr, cpu);
}

static int __init mc_debug_enable(void)
{
    unsigned long flags;
    struct mc_debug_data __percpu *mcdb;

    if (!mc_debug_enabled)
        return 0;

    mc_debug_data_ptr = alloc_percpu(struct mc_debug_data);
    if (!mc_debug_data_ptr) {
    mcdb = alloc_percpu(struct mc_debug_data);
    if (!mcdb) {
        pr_err("xen_mc_debug inactive\n");
        static_key_slow_dec(&mc_debug);
        return -ENOMEM;
@@ -93,7 +95,7 @@ static int __init mc_debug_enable(void)
    /* Be careful when switching to percpu debug data. */
    local_irq_save(flags);
    xen_mc_flush();
    mc_percpu_init(0);
    mc_debug_data_ptr = mcdb;
    local_irq_restore(flags);

    pr_info("xen_mc_debug active\n");
@@ -155,7 +157,7 @@ void xen_mc_flush(void)
    trace_xen_mc_flush(b->mcidx, b->argidx, b->cbidx);

    if (static_key_false(&mc_debug)) {
        mcdb = __this_cpu_read(mc_debug_data);
        mcdb = get_mc_debug();
        memcpy(mcdb->entries, b->entries,
               b->mcidx * sizeof(struct multicall_entry));
    }
@@ -235,7 +237,7 @@ struct multicall_space __xen_mc_entry(size_t args)

    ret.mc = &b->entries[b->mcidx];
    if (static_key_false(&mc_debug)) {
        struct mc_debug_data *mcdb = __this_cpu_read(mc_debug_data);
        struct mc_debug_data *mcdb = get_mc_debug();

        mcdb->caller[b->mcidx] = __builtin_return_address(0);
        mcdb->argsz[b->mcidx] = args;

@@ -305,7 +305,6 @@ static int xen_pv_kick_ap(unsigned int cpu, struct task_struct *idle)
        return rc;

    xen_pmu_init(cpu);
    mc_percpu_init(cpu);

    /*
     * Why is this a BUG? If the hypercall fails then everything can be

@@ -261,9 +261,6 @@ void xen_mc_callback(void (*fn)(void *), void *data);
 */
struct multicall_space xen_mc_extend_args(unsigned long op, size_t arg_size);

/* Do percpu data initialization for multicalls. */
void mc_percpu_init(unsigned int cpu);

extern bool is_xen_pmu;

irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id);

@@ -104,16 +104,12 @@ err:
}
EXPORT_SYMBOL(bio_integrity_alloc);

static void bio_integrity_unpin_bvec(struct bio_vec *bv, int nr_vecs,
                                     bool dirty)
static void bio_integrity_unpin_bvec(struct bio_vec *bv, int nr_vecs)
{
    int i;

    for (i = 0; i < nr_vecs; i++) {
        if (dirty && !PageCompound(bv[i].bv_page))
            set_page_dirty_lock(bv[i].bv_page);
    for (i = 0; i < nr_vecs; i++)
        unpin_user_page(bv[i].bv_page);
    }
}

static void bio_integrity_uncopy_user(struct bio_integrity_payload *bip)
@@ -129,7 +125,7 @@ static void bio_integrity_uncopy_user(struct bio_integrity_payload *bip)
    ret = copy_to_iter(bvec_virt(bounce_bvec), bytes, &orig_iter);
    WARN_ON_ONCE(ret != bytes);

    bio_integrity_unpin_bvec(orig_bvecs, orig_nr_vecs, true);
    bio_integrity_unpin_bvec(orig_bvecs, orig_nr_vecs);
}

/**
@@ -149,8 +145,7 @@ void bio_integrity_unmap_user(struct bio *bio)
        return;
    }

    bio_integrity_unpin_bvec(bip->bip_vec, bip->bip_max_vcnt,
                             bio_data_dir(bio) == READ);
    bio_integrity_unpin_bvec(bip->bip_vec, bip->bip_max_vcnt);
}

/**
@@ -236,7 +231,7 @@ static int bio_integrity_copy_user(struct bio *bio, struct bio_vec *bvec,
    }

    if (write)
        bio_integrity_unpin_bvec(bvec, nr_vecs, false);
        bio_integrity_unpin_bvec(bvec, nr_vecs);
    else
        memcpy(&bip->bip_vec[1], bvec, nr_vecs * sizeof(*bvec));

@@ -362,7 +357,7 @@ int bio_integrity_map_user(struct bio *bio, void __user *ubuf, ssize_t bytes,
    return 0;

release_pages:
    bio_integrity_unpin_bvec(bvec, nr_bvecs, false);
    bio_integrity_unpin_bvec(bvec, nr_bvecs);
free_bvec:
    if (bvec != stack_vec)
        kfree(bvec);

@@ -11,6 +11,7 @@
#include <linux/smp.h>
#include <linux/cpu.h>
#include <linux/group_cpus.h>
#include <linux/device/bus.h>

#include "blk.h"
#include "blk-mq.h"
@@ -54,3 +55,39 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index)

    return NUMA_NO_NODE;
}

/**
 * blk_mq_map_hw_queues - Create CPU to hardware queue mapping
 * @qmap: CPU to hardware queue map
 * @dev: The device to map queues
 * @offset: Queue offset to use for the device
 *
 * Create a CPU to hardware queue mapping in @qmap. The struct bus_type
 * irq_get_affinity callback will be used to retrieve the affinity.
 */
void blk_mq_map_hw_queues(struct blk_mq_queue_map *qmap,
                          struct device *dev, unsigned int offset)
{
    const struct cpumask *mask;
    unsigned int queue, cpu;

    if (!dev->bus->irq_get_affinity)
        goto fallback;

    for (queue = 0; queue < qmap->nr_queues; queue++) {
        mask = dev->bus->irq_get_affinity(dev, queue + offset);
        if (!mask)
            goto fallback;

        for_each_cpu(cpu, mask)
            qmap->mq_map[cpu] = qmap->queue_offset + queue;
    }

    return;

fallback:
    WARN_ON_ONCE(qmap->nr_queues > 1);
    blk_mq_clear_mq_map(qmap);
}
EXPORT_SYMBOL_GPL(blk_mq_map_hw_queues);

@@ -815,6 +815,8 @@ out_unregister_ia_ranges:
out_debugfs_remove:
    blk_debugfs_remove(disk);
    mutex_unlock(&q->sysfs_lock);
    if (queue_is_mq(q))
        blk_mq_sysfs_unregister(disk);
out_put_queue_kobj:
    kobject_put(&disk->queue_kobj);
    mutex_unlock(&q->sysfs_dir_lock);

@@ -1510,6 +1510,8 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link)
    unsigned int err_mask, tag;
    u8 *sense, sk = 0, asc = 0, ascq = 0;
    u64 sense_valid, val;
    u16 extended_sense;
    bool aux_icc_valid;
    int ret = 0;

    err_mask = ata_read_log_page(dev, ATA_LOG_SENSE_NCQ, 0, buf, 2);
@@ -1529,6 +1531,8 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link)

    sense_valid = (u64)buf[8] | ((u64)buf[9] << 8) |
                  ((u64)buf[10] << 16) | ((u64)buf[11] << 24);
    extended_sense = get_unaligned_le16(&buf[14]);
    aux_icc_valid = extended_sense & BIT(15);

    ata_qc_for_each_raw(ap, qc, tag) {
        if (!(qc->flags & ATA_QCFLAG_EH) ||
@@ -1556,6 +1560,17 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link)
            continue;
        }

        qc->result_tf.nsect = sense[6];
        qc->result_tf.hob_nsect = sense[7];
        qc->result_tf.lbal = sense[8];
        qc->result_tf.lbam = sense[9];
        qc->result_tf.lbah = sense[10];
        qc->result_tf.hob_lbal = sense[11];
        qc->result_tf.hob_lbam = sense[12];
        qc->result_tf.hob_lbah = sense[13];
        if (aux_icc_valid)
            qc->result_tf.auxiliary = get_unaligned_le32(&sense[16]);

        /* Set sense without also setting scsicmd->result */
        scsi_build_sense_buffer(dev->flags & ATA_DFLAG_D_SENSE,
                                qc->scsicmd->sense_buffer, sk,

@@ -233,72 +233,6 @@ static void loop_set_size(struct loop_device *lo, loff_t size)
    kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);
}

static int lo_write_bvec(struct file *file, struct bio_vec *bvec, loff_t *ppos)
{
    struct iov_iter i;
    ssize_t bw;

    iov_iter_bvec(&i, ITER_SOURCE, bvec, 1, bvec->bv_len);

    bw = vfs_iter_write(file, &i, ppos, 0);

    if (likely(bw == bvec->bv_len))
        return 0;

    printk_ratelimited(KERN_ERR
        "loop: Write error at byte offset %llu, length %i.\n",
        (unsigned long long)*ppos, bvec->bv_len);
    if (bw >= 0)
        bw = -EIO;
    return bw;
}

static int lo_write_simple(struct loop_device *lo, struct request *rq,
                           loff_t pos)
{
    struct bio_vec bvec;
    struct req_iterator iter;
    int ret = 0;

    rq_for_each_segment(bvec, rq, iter) {
        ret = lo_write_bvec(lo->lo_backing_file, &bvec, &pos);
        if (ret < 0)
            break;
        cond_resched();
    }

    return ret;
}

static int lo_read_simple(struct loop_device *lo, struct request *rq,
                          loff_t pos)
{
    struct bio_vec bvec;
    struct req_iterator iter;
    struct iov_iter i;
    ssize_t len;

    rq_for_each_segment(bvec, rq, iter) {
        iov_iter_bvec(&i, ITER_DEST, &bvec, 1, bvec.bv_len);
        len = vfs_iter_read(lo->lo_backing_file, &i, &pos, 0);
        if (len < 0)
            return len;

        flush_dcache_page(bvec.bv_page);

        if (len != bvec.bv_len) {
            struct bio *bio;

            __rq_for_each_bio(bio, rq)
                zero_fill_bio(bio);
            break;
        }
        cond_resched();
    }

    return 0;
}

static void loop_clear_limits(struct loop_device *lo, int mode)
{
    struct queue_limits lim = queue_limits_start_update(lo->lo_queue);
@@ -364,7 +298,7 @@ static void lo_complete_rq(struct request *rq)
    struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
    blk_status_t ret = BLK_STS_OK;

    if (!cmd->use_aio || cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) ||
    if (cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) ||
        req_op(rq) != REQ_OP_READ) {
        if (cmd->ret < 0)
            ret = errno_to_blk_status(cmd->ret);
@@ -380,14 +314,13 @@ static void lo_complete_rq(struct request *rq)
        cmd->ret = 0;
        blk_mq_requeue_request(rq, true);
    } else {
        if (cmd->use_aio) {
            struct bio *bio = rq->bio;
        struct bio *bio = rq->bio;

            while (bio) {
                zero_fill_bio(bio);
                bio = bio->bi_next;
            }
        while (bio) {
            zero_fill_bio(bio);
            bio = bio->bi_next;
        }

        ret = BLK_STS_IOERR;
end_io:
        blk_mq_end_request(rq, ret);
@@ -467,9 +400,14 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,

    cmd->iocb.ki_pos = pos;
    cmd->iocb.ki_filp = file;
    cmd->iocb.ki_complete = lo_rw_aio_complete;
    cmd->iocb.ki_flags = IOCB_DIRECT;
    cmd->iocb.ki_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
    cmd->iocb.ki_ioprio = req_get_ioprio(rq);
    if (cmd->use_aio) {
        cmd->iocb.ki_complete = lo_rw_aio_complete;
        cmd->iocb.ki_flags = IOCB_DIRECT;
    } else {
        cmd->iocb.ki_complete = NULL;
        cmd->iocb.ki_flags = 0;
    }

    if (rw == ITER_SOURCE)
        ret = file->f_op->write_iter(&cmd->iocb, &iter);
@@ -480,7 +418,7 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,

    if (ret != -EIOCBQUEUED)
        lo_rw_aio_complete(&cmd->iocb, ret);
    return 0;
    return -EIOCBQUEUED;
}

static int do_req_filebacked(struct loop_device *lo, struct request *rq)
@@ -488,15 +426,6 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
    struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
    loff_t pos = ((loff_t) blk_rq_pos(rq) << 9) + lo->lo_offset;

    /*
     * lo_write_simple and lo_read_simple should have been covered
     * by io submit style function like lo_rw_aio(), one blocker
     * is that lo_read_simple() need to call flush_dcache_page after
     * the page is written from kernel, and it isn't easy to handle
     * this in io submit style function which submits all segments
     * of the req at one time. And direct read IO doesn't need to
     * run flush_dcache_page().
     */
    switch (req_op(rq)) {
    case REQ_OP_FLUSH:
        return lo_req_flush(lo, rq);
@@ -512,15 +441,9 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
    case REQ_OP_DISCARD:
        return lo_fallocate(lo, rq, pos, FALLOC_FL_PUNCH_HOLE);
    case REQ_OP_WRITE:
        if (cmd->use_aio)
            return lo_rw_aio(lo, cmd, pos, ITER_SOURCE);
        else
            return lo_write_simple(lo, rq, pos);
        return lo_rw_aio(lo, cmd, pos, ITER_SOURCE);
    case REQ_OP_READ:
        if (cmd->use_aio)
            return lo_rw_aio(lo, cmd, pos, ITER_DEST);
        else
            return lo_read_simple(lo, rq, pos);
        return lo_rw_aio(lo, cmd, pos, ITER_DEST);
    default:
        WARN_ON_ONCE(1);
        return -EIO;
@@ -652,19 +575,20 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
     * dependency.
     */
    fput(old_file);
    dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
    if (partscan)
        loop_reread_partitions(lo);

    error = 0;
done:
    /* enable and uncork uevent now that we are done */
    dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
    kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);
    return error;

out_err:
    loop_global_unlock(lo, is_loop);
out_putf:
    fput(file);
    dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
    goto done;
}

@@ -1119,8 +1043,8 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
    if (partscan)
        clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);

    /* enable and uncork uevent now that we are done */
    dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
    kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);

    loop_global_unlock(lo, is_loop);
    if (partscan)
@@ -1906,7 +1830,6 @@ static void loop_handle_cmd(struct loop_cmd *cmd)
    struct loop_device *lo = rq->q->queuedata;
    int ret = 0;
    struct mem_cgroup *old_memcg = NULL;
    const bool use_aio = cmd->use_aio;

    if (write && (lo->lo_flags & LO_FLAGS_READ_ONLY)) {
        ret = -EIO;
@@ -1936,7 +1859,7 @@ static void loop_handle_cmd(struct loop_cmd *cmd)
    }
failed:
    /* complete non-aio request */
    if (!use_aio || ret) {
    if (ret != -EIOCBQUEUED) {
        if (ret == -EOPNOTSUPP)
            cmd->ret = ret;
        else

@@ -1215,6 +1215,8 @@ next:
rtl_dev_err(hdev, "mandatory config file %s not found",
            btrtl_dev->ic_info->cfg_name);
ret = btrtl_dev->cfg_len;
if (!ret)
    ret = -EINVAL;
goto err_free;
}
}
@@ -289,18 +289,18 @@ static void vhci_coredump(struct hci_dev *hdev)

static void vhci_coredump_hdr(struct hci_dev *hdev, struct sk_buff *skb)
{
    char buf[80];
    const char *buf;

    snprintf(buf, sizeof(buf), "Controller Name: vhci_ctrl\n");
    buf = "Controller Name: vhci_ctrl\n";
    skb_put_data(skb, buf, strlen(buf));

    snprintf(buf, sizeof(buf), "Firmware Version: vhci_fw\n");
    buf = "Firmware Version: vhci_fw\n";
    skb_put_data(skb, buf, strlen(buf));

    snprintf(buf, sizeof(buf), "Driver: vhci_drv\n");
    buf = "Driver: vhci_drv\n";
    skb_put_data(skb, buf, strlen(buf));

    snprintf(buf, sizeof(buf), "Vendor: vhci\n");
    buf = "Vendor: vhci\n";
    skb_put_data(skb, buf, strlen(buf));
}
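The vhci hunk drops the `snprintf()` into a scratch array in favor of pointing `buf` at string literals, since the header lines are fixed text. A minimal sketch of the same idea (all names here are hypothetical, not the kernel API): append fixed lines to a byte buffer the way `skb_put_data(skb, buf, strlen(buf))` does, with no formatting step.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Append one fixed line to dst at offset off; a string literal already
 * carries its own length, so no snprintf() into a scratch array is needed. */
static size_t put_line(char *dst, size_t off, const char *line)
{
    size_t len = strlen(line);
    memcpy(dst + off, line, len);
    return off + len;
}

/* Hypothetical stand-in for vhci_coredump_hdr(): emit the four header
 * lines back to back and return the total number of bytes written. */
static size_t build_coredump_hdr(char *dst)
{
    size_t off = 0;

    off = put_line(dst, off, "Controller Name: vhci_ctrl\n");
    off = put_line(dst, off, "Firmware Version: vhci_fw\n");
    off = put_line(dst, off, "Driver: vhci_drv\n");
    off = put_line(dst, off, "Vendor: vhci\n");
    return off;
}
```

The original code also had a latent mismatch once `buf` became `const char *`: `snprintf(buf, sizeof(buf), ...)` would write through a const pointer and `sizeof` a pointer, which is why the literals are assigned directly.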
@@ -2771,10 +2771,18 @@ EXPORT_SYMBOL(cpufreq_update_policy);
 */
void cpufreq_update_limits(unsigned int cpu)
{
    struct cpufreq_policy *policy;

    policy = cpufreq_cpu_get(cpu);
    if (!policy)
        return;

    if (cpufreq_driver->update_limits)
        cpufreq_driver->update_limits(cpu);
    else
        cpufreq_update_policy(cpu);

    cpufreq_cpu_put(policy);
}
EXPORT_SYMBOL_GPL(cpufreq_update_limits);
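The cpufreq hunk wraps the limits update in `cpufreq_cpu_get()`/`cpufreq_cpu_put()` so the policy cannot be freed while the callback runs. A minimal sketch of that get/put discipline, with hypothetical types standing in for `struct cpufreq_policy`:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical refcounted object standing in for a cpufreq policy. */
struct policy {
    int refcnt;
    int freed;   /* set once teardown has run */
};

/* Take a reference, failing if the object is already gone. */
static struct policy *policy_get(struct policy *p)
{
    if (!p || p->freed)
        return NULL;
    p->refcnt++;
    return p;
}

/* Drop the reference; the last put would trigger the real free. */
static void policy_put(struct policy *p)
{
    p->refcnt--;
}

/* Mirrors cpufreq_update_limits(): bail out when no policy is available,
 * otherwise hold a reference across the callback. *calls stands in for
 * the driver's ->update_limits(cpu) hook. */
static int update_limits(struct policy *p, int *calls)
{
    struct policy *ref = policy_get(p);

    if (!ref)
        return -1;
    (*calls)++;
    policy_put(ref);
    return 0;
}
```

The point of the patch is the bail-out path: before it, the callback could run against a policy that was concurrently being torn down.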
@@ -122,12 +122,12 @@ int caam_qi_enqueue(struct device *qidev, struct caam_drv_req *req)
qm_fd_addr_set64(&fd, addr);

do {
    refcount_inc(&req->drv_ctx->refcnt);
    ret = qman_enqueue(req->drv_ctx->req_fq, &fd);
    if (likely(!ret)) {
        refcount_inc(&req->drv_ctx->refcnt);
    if (likely(!ret))
        return 0;
    }

    refcount_dec(&req->drv_ctx->refcnt);
    if (ret != -EBUSY)
        break;
    num_retries++;
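The caam hunk moves `refcount_inc()` in front of `qman_enqueue()`: once the frame is queued, the completion path may run immediately and drop its reference, so taking the reference afterwards races with that drop. A small sketch of the fixed ordering, using a fake enqueue whose completion runs synchronously (everything here is a hypothetical harness, not the caam API):

```c
#include <assert.h>

/* Hypothetical context; min_seen records the lowest refcount observed. */
struct drv_ctx {
    int refcnt;
    int min_seen;
};

static void track(struct drv_ctx *c)
{
    if (c->refcnt < c->min_seen)
        c->min_seen = c->refcnt;
}

/* Fake qman_enqueue(): the completion side runs synchronously and drops
 * one reference, modeling the worst-case race window. */
static int fake_enqueue(struct drv_ctx *c)
{
    c->refcnt--;
    track(c);
    return 0;
}

/* Fixed ordering: take the reference before handing the frame over,
 * and undo it if the enqueue itself fails, as the patch does. */
static int safe_enqueue(struct drv_ctx *c)
{
    int ret;

    c->refcnt++;            /* refcount_inc() first ... */
    ret = fake_enqueue(c);  /* ... then qman_enqueue() */
    if (ret)
        c->refcnt--;
    track(c);
    return ret;
}
```

With the old ordering the completion's decrement could land before the producer's increment, letting the count hit zero while the context was still in use.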
@@ -263,13 +263,7 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq)
unsigned int cmdlen;
int ret;

rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_AES_BUFLEN,
                                      &rctx->datbuf.addr, GFP_KERNEL);
if (!rctx->datbuf.buf)
    return -ENOMEM;

rctx->datbuf.size = SE_AES_BUFLEN;
rctx->iv = (u32 *)req->iv;
rctx->iv = (ctx->alg == SE_ALG_ECB) ? NULL : (u32 *)req->iv;
rctx->len = req->cryptlen;

/* Pad input to AES Block size */

@@ -278,6 +272,12 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq)
    rctx->len += AES_BLOCK_SIZE - (rctx->len % AES_BLOCK_SIZE);
}

rctx->datbuf.size = rctx->len;
rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->datbuf.size,
                                      &rctx->datbuf.addr, GFP_KERNEL);
if (!rctx->datbuf.buf)
    return -ENOMEM;

scatterwalk_map_and_copy(rctx->datbuf.buf, req->src, 0, req->cryptlen, 0);

/* Prepare the command and submit for execution */

@@ -289,7 +289,7 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq)
scatterwalk_map_and_copy(rctx->datbuf.buf, req->dst, 0, req->cryptlen, 1);

/* Free the buffer */
dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
dma_free_coherent(ctx->se->dev, rctx->datbuf.size,
                  rctx->datbuf.buf, rctx->datbuf.addr);

crypto_finalize_skcipher_request(se->engine, req, ret);

@@ -443,9 +443,6 @@ static int tegra_aes_crypt(struct skcipher_request *req, bool encrypt)
if (!req->cryptlen)
    return 0;

if (ctx->alg == SE_ALG_ECB)
    req->iv = NULL;

rctx->encrypt = encrypt;
rctx->config = tegra234_aes_cfg(ctx->alg, encrypt);
rctx->crypto_config = tegra234_aes_crypto_cfg(ctx->alg, encrypt);
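Across these tegra-se hunks the fixed `SE_AES_BUFLEN` (0x8000) allocation is replaced by sizing the DMA buffer to the padded request length, which both stops wasting coherent memory and removes the implicit cap on request size. The padding arithmetic the driver uses can be sketched as:

```c
#include <assert.h>
#include <stddef.h>

#define AES_BLOCK_SIZE 16

/* Round a request length up to the AES block size, matching
 *   rctx->len += AES_BLOCK_SIZE - (rctx->len % AES_BLOCK_SIZE);
 * guarded by (len % AES_BLOCK_SIZE) in the hunk above. The result is
 * what the patch then passes to dma_alloc_coherent() as the exact
 * buffer size instead of a fixed SE_AES_BUFLEN. */
static size_t aes_padded_len(size_t cryptlen)
{
    if (cryptlen % AES_BLOCK_SIZE)
        cryptlen += AES_BLOCK_SIZE - (cryptlen % AES_BLOCK_SIZE);
    return cryptlen;
}
```

Note the free side must use the same recorded size (`rctx->datbuf.size`) that was allocated, which is why the hunks also thread the size through to `dma_free_coherent()`.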
@@ -1120,6 +1117,11 @@ static int tegra_ccm_crypt_init(struct aead_request *req, struct tegra_se *se,
rctx->assoclen = req->assoclen;
rctx->authsize = crypto_aead_authsize(tfm);

if (rctx->encrypt)
    rctx->cryptlen = req->cryptlen;
else
    rctx->cryptlen = req->cryptlen - rctx->authsize;

memcpy(iv, req->iv, 16);

ret = tegra_ccm_check_iv(iv);

@@ -1148,30 +1150,26 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq)
struct tegra_se *se = ctx->se;
int ret;

ret = tegra_ccm_crypt_init(req, se, rctx);
if (ret)
    return ret;

/* Allocate buffers required */
rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
rctx->inbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen + 100;
rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->inbuf.size,
                                     &rctx->inbuf.addr, GFP_KERNEL);
if (!rctx->inbuf.buf)
    return -ENOMEM;

rctx->inbuf.size = SE_AES_BUFLEN;

rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
rctx->outbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen + 100;
rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->outbuf.size,
                                      &rctx->outbuf.addr, GFP_KERNEL);
if (!rctx->outbuf.buf) {
    ret = -ENOMEM;
    goto outbuf_err;
}

rctx->outbuf.size = SE_AES_BUFLEN;

ret = tegra_ccm_crypt_init(req, se, rctx);
if (ret)
    goto out;

if (rctx->encrypt) {
    rctx->cryptlen = req->cryptlen;

    /* CBC MAC Operation */
    ret = tegra_ccm_compute_auth(ctx, rctx);
    if (ret)

@@ -1182,10 +1180,6 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq)
    if (ret)
        goto out;
} else {
    rctx->cryptlen = req->cryptlen - ctx->authsize;
    if (ret)
        goto out;

    /* CTR operation */
    ret = tegra_ccm_do_ctr(ctx, rctx);
    if (ret)

@@ -1198,11 +1192,11 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq)
}

out:
dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
dma_free_coherent(ctx->se->dev, rctx->inbuf.size,
                  rctx->outbuf.buf, rctx->outbuf.addr);

outbuf_err:
dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
dma_free_coherent(ctx->se->dev, rctx->outbuf.size,
                  rctx->inbuf.buf, rctx->inbuf.addr);

crypto_finalize_aead_request(ctx->se->engine, req, ret);
@@ -1218,23 +1212,6 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq)
struct tegra_aead_reqctx *rctx = aead_request_ctx(req);
int ret;

/* Allocate buffers required */
rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
                                     &rctx->inbuf.addr, GFP_KERNEL);
if (!rctx->inbuf.buf)
    return -ENOMEM;

rctx->inbuf.size = SE_AES_BUFLEN;

rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
                                      &rctx->outbuf.addr, GFP_KERNEL);
if (!rctx->outbuf.buf) {
    ret = -ENOMEM;
    goto outbuf_err;
}

rctx->outbuf.size = SE_AES_BUFLEN;

rctx->src_sg = req->src;
rctx->dst_sg = req->dst;
rctx->assoclen = req->assoclen;

@@ -1248,6 +1225,21 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq)
memcpy(rctx->iv, req->iv, GCM_AES_IV_SIZE);
rctx->iv[3] = (1 << 24);

/* Allocate buffers required */
rctx->inbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen;
rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->inbuf.size,
                                     &rctx->inbuf.addr, GFP_KERNEL);
if (!rctx->inbuf.buf)
    return -ENOMEM;

rctx->outbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen;
rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->outbuf.size,
                                      &rctx->outbuf.addr, GFP_KERNEL);
if (!rctx->outbuf.buf) {
    ret = -ENOMEM;
    goto outbuf_err;
}

/* If there is associated data perform GMAC operation */
if (rctx->assoclen) {
    ret = tegra_gcm_do_gmac(ctx, rctx);

@@ -1271,11 +1263,11 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq)
ret = tegra_gcm_do_verify(ctx->se, rctx);

out:
dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
dma_free_coherent(ctx->se->dev, rctx->outbuf.size,
                  rctx->outbuf.buf, rctx->outbuf.addr);

outbuf_err:
dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
dma_free_coherent(ctx->se->dev, rctx->inbuf.size,
                  rctx->inbuf.buf, rctx->inbuf.addr);

/* Finalize the request if there are no errors */
@@ -1502,6 +1494,11 @@ static int tegra_cmac_do_update(struct ahash_request *req)
    return 0;
}

rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->datbuf.size,
                                      &rctx->datbuf.addr, GFP_KERNEL);
if (!rctx->datbuf.buf)
    return -ENOMEM;

/* Copy the previous residue first */
if (rctx->residue.size)
    memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);

@@ -1527,6 +1524,9 @@ static int tegra_cmac_do_update(struct ahash_request *req)

tegra_cmac_copy_result(ctx->se, rctx);

dma_free_coherent(ctx->se->dev, rctx->datbuf.size,
                  rctx->datbuf.buf, rctx->datbuf.addr);

return ret;
}

@@ -1541,10 +1541,20 @@ static int tegra_cmac_do_final(struct ahash_request *req)

if (!req->nbytes && !rctx->total_len && ctx->fallback_tfm) {
    return crypto_shash_tfm_digest(ctx->fallback_tfm,
                                   rctx->datbuf.buf, 0, req->result);
                                   NULL, 0, req->result);
}

if (rctx->residue.size) {
    rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->residue.size,
                                          &rctx->datbuf.addr, GFP_KERNEL);
    if (!rctx->datbuf.buf) {
        ret = -ENOMEM;
        goto out_free;
    }

    memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
}

memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
rctx->datbuf.size = rctx->residue.size;
rctx->total_len += rctx->residue.size;
rctx->config = tegra234_aes_cfg(SE_ALG_CMAC, 0);

@@ -1570,8 +1580,10 @@ static int tegra_cmac_do_final(struct ahash_request *req)
    writel(0, se->base + se->hw->regs->result + (i * 4));

out:
dma_free_coherent(se->dev, SE_SHA_BUFLEN,
                  rctx->datbuf.buf, rctx->datbuf.addr);
if (rctx->residue.size)
    dma_free_coherent(se->dev, rctx->datbuf.size,
                      rctx->datbuf.buf, rctx->datbuf.addr);
out_free:
dma_free_coherent(se->dev, crypto_ahash_blocksize(tfm) * 2,
                  rctx->residue.buf, rctx->residue.addr);
return ret;

@@ -1683,28 +1695,15 @@ static int tegra_cmac_init(struct ahash_request *req)
rctx->residue.buf = dma_alloc_coherent(se->dev, rctx->blk_size * 2,
                                       &rctx->residue.addr, GFP_KERNEL);
if (!rctx->residue.buf)
    goto resbuf_fail;
    return -ENOMEM;

rctx->residue.size = 0;

rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_SHA_BUFLEN,
                                      &rctx->datbuf.addr, GFP_KERNEL);
if (!rctx->datbuf.buf)
    goto datbuf_fail;

rctx->datbuf.size = 0;

/* Clear any previous result */
for (i = 0; i < CMAC_RESULT_REG_COUNT; i++)
    writel(0, se->base + se->hw->regs->result + (i * 4));

return 0;

datbuf_fail:
dma_free_coherent(se->dev, rctx->blk_size, rctx->residue.buf,
                  rctx->residue.addr);
resbuf_fail:
return -ENOMEM;
}

static int tegra_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
@@ -332,6 +332,11 @@ static int tegra_sha_do_update(struct ahash_request *req)
    return 0;
}

rctx->datbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->datbuf.size,
                                      &rctx->datbuf.addr, GFP_KERNEL);
if (!rctx->datbuf.buf)
    return -ENOMEM;

/* Copy the previous residue first */
if (rctx->residue.size)
    memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);

@@ -368,6 +373,9 @@ static int tegra_sha_do_update(struct ahash_request *req)
if (!(rctx->task & SHA_FINAL))
    tegra_sha_copy_hash_result(se, rctx);

dma_free_coherent(ctx->se->dev, rctx->datbuf.size,
                  rctx->datbuf.buf, rctx->datbuf.addr);

return ret;
}

@@ -380,7 +388,17 @@ static int tegra_sha_do_final(struct ahash_request *req)
u32 *cpuvaddr = se->cmdbuf->addr;
int size, ret = 0;

memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
if (rctx->residue.size) {
    rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->residue.size,
                                          &rctx->datbuf.addr, GFP_KERNEL);
    if (!rctx->datbuf.buf) {
        ret = -ENOMEM;
        goto out_free;
    }

    memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
}

rctx->datbuf.size = rctx->residue.size;
rctx->total_len += rctx->residue.size;

@@ -397,8 +415,10 @@ static int tegra_sha_do_final(struct ahash_request *req)
memcpy(req->result, rctx->digest.buf, rctx->digest.size);

out:
dma_free_coherent(se->dev, SE_SHA_BUFLEN,
                  rctx->datbuf.buf, rctx->datbuf.addr);
if (rctx->residue.size)
    dma_free_coherent(se->dev, rctx->datbuf.size,
                      rctx->datbuf.buf, rctx->datbuf.addr);
out_free:
dma_free_coherent(se->dev, crypto_ahash_blocksize(tfm),
                  rctx->residue.buf, rctx->residue.addr);
dma_free_coherent(se->dev, rctx->digest.size, rctx->digest.buf,

@@ -534,19 +554,11 @@ static int tegra_sha_init(struct ahash_request *req)
if (!rctx->residue.buf)
    goto resbuf_fail;

rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_SHA_BUFLEN,
                                      &rctx->datbuf.addr, GFP_KERNEL);
if (!rctx->datbuf.buf)
    goto datbuf_fail;

return 0;

datbuf_fail:
dma_free_coherent(se->dev, rctx->blk_size, rctx->residue.buf,
                  rctx->residue.addr);
resbuf_fail:
dma_free_coherent(se->dev, SE_SHA_BUFLEN, rctx->datbuf.buf,
                  rctx->datbuf.addr);
dma_free_coherent(se->dev, rctx->digest.size, rctx->digest.buf,
                  rctx->digest.addr);
digbuf_fail:
return -ENOMEM;
}
@@ -340,8 +340,6 @@
#define SE_CRYPTO_CTR_REG_COUNT 4
#define SE_MAX_KEYSLOT 15
#define SE_MAX_MEM_ALLOC SZ_4M
#define SE_AES_BUFLEN 0x8000
#define SE_SHA_BUFLEN 0x2000

#define SHA_FIRST BIT(0)
#define SHA_UPDATE BIT(1)
@@ -444,15 +444,17 @@ static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long a
    return -EINVAL;

pt = dma_fence_to_sync_pt(fence);
if (!pt)
    return -EINVAL;
if (!pt) {
    ret = -EINVAL;
    goto put_fence;
}

spin_lock_irqsave(fence->lock, flags);
if (test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) {
    data.deadline_ns = ktime_to_ns(pt->deadline);
} else {
if (!test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) {
    ret = -ENOENT;
    goto unlock;
}
data.deadline_ns = ktime_to_ns(pt->deadline);
spin_unlock_irqrestore(fence->lock, flags);

dma_fence_put(fence);

@@ -464,6 +466,13 @@ static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long a
    return -EFAULT;

return 0;

unlock:
spin_unlock_irqrestore(fence->lock, flags);
put_fence:
dma_fence_put(fence);

return ret;
}

static long sw_sync_ioctl(struct file *file, unsigned int cmd,
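The sw_sync hunk converts bare early returns into `goto unlock`/`goto put_fence` unwinding so the fence reference taken earlier is dropped on every error path, with the unlock ordered before the put. A compact sketch of that label-stacked cleanup (hypothetical objects, not the dma-fence API):

```c
#include <assert.h>

/* Hypothetical fence tracking a reference count and a lock state. */
struct fence {
    int refs;
    int locked;
};

/* Mirrors the sw_sync_ioctl_get_deadline() control flow: every exit
 * after the get travels through put_fence, and unlock precedes put. */
static int get_deadline(struct fence *f, int have_pt, int have_deadline)
{
    int ret = 0;

    f->refs++;                      /* dma_fence_get() */
    if (!have_pt) {
        ret = -22;                  /* -EINVAL */
        goto put_fence;
    }

    f->locked = 1;                  /* spin_lock_irqsave() */
    if (!have_deadline) {
        ret = -2;                   /* -ENOENT */
        goto unlock;
    }
    /* success: read pt->deadline here, then fall through */

unlock:
    f->locked = 0;                  /* spin_unlock_irqrestore() */
put_fence:
    f->refs--;                      /* dma_fence_put() on every path */
    return ret;
}
```

Before the fix, the `!pt` and no-deadline paths returned directly, leaking the fence reference (and, in the original `else` branch, potentially exiting with the spinlock held).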
@@ -171,7 +171,7 @@ void efi_set_u64_split(u64 data, u32 *lo, u32 *hi)
 * the EFI memory map. Other related structures, e.g. x86 e820ext, need
 * to factor in this headroom requirement as well.
 */
#define EFI_MMAP_NR_SLACK_SLOTS 8
#define EFI_MMAP_NR_SLACK_SLOTS 32

typedef struct efi_generic_dev_path efi_device_path_protocol_t;
@@ -437,6 +437,13 @@ success:
    return true;
}

static bool amdgpu_prefer_rom_resource(struct amdgpu_device *adev)
{
    struct resource *res = &adev->pdev->resource[PCI_ROM_RESOURCE];

    return (res->flags & IORESOURCE_ROM_SHADOW);
}

static bool amdgpu_get_bios_dgpu(struct amdgpu_device *adev)
{
    if (amdgpu_atrm_get_bios(adev)) {

@@ -455,14 +462,27 @@ static bool amdgpu_get_bios_dgpu(struct amdgpu_device *adev)
    goto success;
}

if (amdgpu_read_platform_bios(adev)) {
    dev_info(adev->dev, "Fetched VBIOS from platform\n");
    goto success;
}
if (amdgpu_prefer_rom_resource(adev)) {
    if (amdgpu_read_bios(adev)) {
        dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n");
        goto success;
    }

if (amdgpu_read_bios(adev)) {
    dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n");
    goto success;
    if (amdgpu_read_platform_bios(adev)) {
        dev_info(adev->dev, "Fetched VBIOS from platform\n");
        goto success;
    }

} else {
    if (amdgpu_read_platform_bios(adev)) {
        dev_info(adev->dev, "Fetched VBIOS from platform\n");
        goto success;
    }

    if (amdgpu_read_bios(adev)) {
        dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n");
        goto success;
    }
}

if (amdgpu_read_bios_from_rom(adev)) {

@@ -3322,6 +3322,7 @@ static int amdgpu_device_ip_fini(struct amdgpu_device *adev)
    amdgpu_device_mem_scratch_fini(adev);
    amdgpu_ib_pool_fini(adev);
    amdgpu_seq64_fini(adev);
    amdgpu_doorbell_fini(adev);
}

r = adev->ip_blocks[i].version->funcs->sw_fini((void *)adev);

@@ -4670,7 +4671,6 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)

iounmap(adev->rmmio);
adev->rmmio = NULL;
amdgpu_doorbell_fini(adev);
drm_dev_exit(idx);
}
@@ -181,7 +181,7 @@ static void amdgpu_dma_buf_unmap(struct dma_buf_attachment *attach,
                                 struct sg_table *sgt,
                                 enum dma_data_direction dir)
{
    if (sgt->sgl->page_link) {
    if (sg_page(sgt->sgl)) {
        dma_unmap_sgtable(attach->dev, sgt, dir, 0);
        sg_free_table(sgt);
        kfree(sgt);

@@ -1795,7 +1795,6 @@ static const u16 amdgpu_unsupported_pciidlist[] = {
};

static const struct pci_device_id pciidlist[] = {
#ifdef CONFIG_DRM_AMDGPU_SI
{0x1002, 0x6780, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
{0x1002, 0x6784, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
{0x1002, 0x6788, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},

@@ -1868,8 +1867,6 @@ static const struct pci_device_id pciidlist[] = {
{0x1002, 0x6665, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY},
{0x1002, 0x6667, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY},
{0x1002, 0x666F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY},
#endif
#ifdef CONFIG_DRM_AMDGPU_CIK
/* Kaveri */
{0x1002, 0x1304, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|AMD_IS_MOBILITY|AMD_IS_APU},
{0x1002, 0x1305, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|AMD_IS_APU},

@@ -1952,7 +1949,6 @@ static const struct pci_device_id pciidlist[] = {
{0x1002, 0x985D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU},
{0x1002, 0x985E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU},
{0x1002, 0x985F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU},
#endif
/* topaz */
{0x1002, 0x6900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TOPAZ},
{0x1002, 0x6901, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TOPAZ},

@@ -2284,14 +2280,14 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
    return -ENOTSUPP;
}

switch (flags & AMD_ASIC_MASK) {
case CHIP_TAHITI:
case CHIP_PITCAIRN:
case CHIP_VERDE:
case CHIP_OLAND:
case CHIP_HAINAN:
#ifdef CONFIG_DRM_AMDGPU_SI
if (!amdgpu_si_support) {
switch (flags & AMD_ASIC_MASK) {
case CHIP_TAHITI:
case CHIP_PITCAIRN:
case CHIP_VERDE:
case CHIP_OLAND:
case CHIP_HAINAN:
    if (!amdgpu_si_support) {
        dev_info(&pdev->dev,
                 "SI support provided by radeon.\n");
        dev_info(&pdev->dev,

@@ -2299,16 +2295,18 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
        );
        return -ENODEV;
    }
}
break;
#else
dev_info(&pdev->dev, "amdgpu is built without SI support.\n");
return -ENODEV;
#endif
case CHIP_KAVERI:
case CHIP_BONAIRE:
case CHIP_HAWAII:
case CHIP_KABINI:
case CHIP_MULLINS:
#ifdef CONFIG_DRM_AMDGPU_CIK
if (!amdgpu_cik_support) {
switch (flags & AMD_ASIC_MASK) {
case CHIP_KAVERI:
case CHIP_BONAIRE:
case CHIP_HAWAII:
case CHIP_KABINI:
case CHIP_MULLINS:
    if (!amdgpu_cik_support) {
        dev_info(&pdev->dev,
                 "CIK support provided by radeon.\n");
        dev_info(&pdev->dev,

@@ -2316,8 +2314,14 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
        );
        return -ENODEV;
    }
}
break;
#else
dev_info(&pdev->dev, "amdgpu is built without CIK support.\n");
return -ENODEV;
#endif
default:
    break;
}

adev = devm_drm_dev_alloc(&pdev->dev, &amdgpu_kms_driver, typeof(*adev), ddev);
if (IS_ERR(adev))
@@ -161,8 +161,8 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
 * When GTT is just an alternative to VRAM make sure that we
 * only use it as fallback and still try to fill up VRAM first.
 */
if (domain & abo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM &&
    !(adev->flags & AMD_IS_APU))
if (abo->tbo.resource && !(adev->flags & AMD_IS_APU) &&
    domain & abo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM)
    places[c].flags |= TTM_PL_FLAG_FALLBACK;
c++;
}

@@ -859,6 +859,10 @@ static void mes_v11_0_get_fw_version(struct amdgpu_device *adev)
{
    int pipe;

    /* return early if we have already fetched these */
    if (adev->mes.sched_version && adev->mes.kiq_version)
        return;

    /* get MES scheduler/KIQ versions */
    mutex_lock(&adev->srbm_mutex);

@@ -1225,17 +1225,20 @@ static int mes_v12_0_queue_init(struct amdgpu_device *adev,
    mes_v12_0_queue_init_register(ring);
}

/* get MES scheduler/KIQ versions */
mutex_lock(&adev->srbm_mutex);
soc21_grbm_select(adev, 3, pipe, 0, 0);
if (((pipe == AMDGPU_MES_SCHED_PIPE) && !adev->mes.sched_version) ||
    ((pipe == AMDGPU_MES_KIQ_PIPE) && !adev->mes.kiq_version)) {
    /* get MES scheduler/KIQ versions */
    mutex_lock(&adev->srbm_mutex);
    soc21_grbm_select(adev, 3, pipe, 0, 0);

    if (pipe == AMDGPU_MES_SCHED_PIPE)
        adev->mes.sched_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
    else if (pipe == AMDGPU_MES_KIQ_PIPE && adev->enable_mes_kiq)
        adev->mes.kiq_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
    if (pipe == AMDGPU_MES_SCHED_PIPE)
        adev->mes.sched_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
    else if (pipe == AMDGPU_MES_KIQ_PIPE && adev->enable_mes_kiq)
        adev->mes.kiq_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);

    soc21_grbm_select(adev, 0, 0, 0, 0);
    mutex_unlock(&adev->srbm_mutex);
    soc21_grbm_select(adev, 0, 0, 0, 0);
    mutex_unlock(&adev->srbm_mutex);
}

return 0;
}
@@ -1690,6 +1690,13 @@ static const struct dmi_system_id dmi_quirk_table[] = {
        DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite mt645 G8 Mobile Thin Client"),
    },
},
{
    .callback = edp0_on_dp1_callback,
    .matches = {
        DMI_MATCH(DMI_SYS_VENDOR, "HP"),
        DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 645 14 inch G11 Notebook PC"),
    },
},
{
    .callback = edp0_on_dp1_callback,
    .matches = {

@@ -1697,6 +1704,20 @@ static const struct dmi_system_id dmi_quirk_table[] = {
        DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 665 16 inch G11 Notebook PC"),
    },
},
{
    .callback = edp0_on_dp1_callback,
    .matches = {
        DMI_MATCH(DMI_SYS_VENDOR, "HP"),
        DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 445 14 inch G11 Notebook PC"),
    },
},
{
    .callback = edp0_on_dp1_callback,
    .matches = {
        DMI_MATCH(DMI_SYS_VENDOR, "HP"),
        DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 465 16 inch G11 Notebook PC"),
    },
},
{}
/* TODO: refactor this from a fixed table to a dynamic option */
};

@@ -8458,14 +8479,39 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
int offdelay;

if (acrtc_state) {
    if (amdgpu_ip_version(adev, DCE_HWIP, 0) <
        IP_VERSION(3, 5, 0) ||
        acrtc_state->stream->link->psr_settings.psr_version <
        DC_PSR_VERSION_UNSUPPORTED ||
        !(adev->flags & AMD_IS_APU)) {
        timing = &acrtc_state->stream->timing;
    timing = &acrtc_state->stream->timing;

    /* at least 2 frames */
    /*
     * Depending on when the HW latching event of double-buffered
     * registers happen relative to the PSR SDP deadline, and how
     * bad the Panel clock has drifted since the last ALPM off
     * event, there can be up to 3 frames of delay between sending
     * the PSR exit cmd to DMUB fw, and when the panel starts
     * displaying live frames.
     *
     * We can set:
     *
     * 20/100 * offdelay_ms = 3_frames_ms
     * => offdelay_ms = 5 * 3_frames_ms
     *
     * This ensures that `3_frames_ms` will only be experienced as a
     * 20% delay on top how long the display has been static, and
     * thus make the delay less perceivable.
     */
    if (acrtc_state->stream->link->psr_settings.psr_version <
        DC_PSR_VERSION_UNSUPPORTED) {
        offdelay = DIV64_U64_ROUND_UP((u64)5 * 3 * 10 *
                                      timing->v_total *
                                      timing->h_total,
                                      timing->pix_clk_100hz);
        config.offdelay_ms = offdelay ?: 30;
    } else if (amdgpu_ip_version(adev, DCE_HWIP, 0) <
               IP_VERSION(3, 5, 0) ||
               !(adev->flags & AMD_IS_APU)) {
        /*
         * Older HW and DGPU have issues with instant off;
         * use a 2 frame offdelay.
         */
        offdelay = DIV64_U64_ROUND_UP((u64)20 *
                                      timing->v_total *
                                      timing->h_total,

@@ -8473,6 +8519,8 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,

        config.offdelay_ms = offdelay ?: 30;
    } else {
        /* offdelay_ms = 0 will never disable vblank */
        config.offdelay_ms = 1;
        config.disable_immediate = true;
    }
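The offdelay arithmetic in the hunk above can be checked numerically: one frame lasts `v_total * h_total * 10 / pix_clk_100hz` milliseconds (the factor of 10 converts from 100 Hz units), and the PSR case scales 3 frames by 5 per the 20%-delay derivation in the comment. A small sketch reproducing the `DIV64_U64_ROUND_UP(...)` / `offdelay ?: 30` computation:

```c
#include <assert.h>
#include <stdint.h>

/* Equivalent of the kernel's DIV64_U64_ROUND_UP() for this sketch. */
static uint64_t div_round_up_u64(uint64_t n, uint64_t d)
{
    return (n + d - 1) / d;
}

/* offdelay_ms = 5 * 3 frames, where one frame is
 * v_total * h_total * 10 / pix_clk_100hz milliseconds; falls back to
 * 30 ms when the timing would compute to zero, matching
 * config.offdelay_ms = offdelay ?: 30 in the hunk above. */
static uint64_t psr_offdelay_ms(uint64_t v_total, uint64_t h_total,
                                uint64_t pix_clk_100hz)
{
    uint64_t ms = div_round_up_u64(5ull * 3 * 10 * v_total * h_total,
                                   pix_clk_100hz);

    return ms ? ms : 30;
}
```

For a standard 1080p60 timing (v_total 1125, h_total 2200, pixel clock 148.5 MHz, i.e. 1485000 in 100 Hz units) one frame is about 16.7 ms, so 15 frames come out to 250 ms.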
@@ -113,6 +113,7 @@ bool amdgpu_dm_crtc_vrr_active(const struct dm_crtc_state *dm_state)
 *
 * Panel Replay and PSR SU
 * - Enable when:
 * - VRR is disabled
 * - vblank counter is disabled
 * - entry is allowed: usermode demonstrates an adequate number of fast
 *   commits)

@@ -131,19 +132,20 @@ static void amdgpu_dm_crtc_set_panel_sr_feature(
bool is_sr_active = (link->replay_settings.replay_allow_active ||
                     link->psr_settings.psr_allow_active);
bool is_crc_window_active = false;
bool vrr_active = amdgpu_dm_crtc_vrr_active_irq(vblank_work->acrtc);

#ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
is_crc_window_active =
    amdgpu_dm_crc_window_is_activated(&vblank_work->acrtc->base);
#endif

if (link->replay_settings.replay_feature_enabled &&
if (link->replay_settings.replay_feature_enabled && !vrr_active &&
    allow_sr_entry && !is_sr_active && !is_crc_window_active) {
    amdgpu_dm_replay_enable(vblank_work->stream, true);
} else if (vblank_enabled) {
    if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1 && is_sr_active)
        amdgpu_dm_psr_disable(vblank_work->stream, false);
} else if (link->psr_settings.psr_feature_enabled &&
} else if (link->psr_settings.psr_feature_enabled && !vrr_active &&
           allow_sr_entry && !is_sr_active && !is_crc_window_active) {

    struct amdgpu_dm_connector *aconn =
@@ -87,6 +87,8 @@ static void dml21_init(const struct dc *in_dc, struct dml2_context **dml_ctx, co
|
||||
/* Store configuration options */
|
||||
(*dml_ctx)->config = *config;
|
||||
|
||||
DC_FP_START();
|
||||
|
||||
/*Initialize SOCBB and DCNIP params */
|
||||
dml21_initialize_soc_bb_params(&(*dml_ctx)->v21.dml_init, config, in_dc);
|
||||
dml21_initialize_ip_params(&(*dml_ctx)->v21.dml_init, config, in_dc);
|
||||
@@ -97,6 +99,8 @@ static void dml21_init(const struct dc *in_dc, struct dml2_context **dml_ctx, co
|
||||
|
||||
/*Initialize DML21 instance */
|
||||
dml2_initialize_instance(&(*dml_ctx)->v21.dml_init);
|
||||
|
||||
DC_FP_END();
|
||||
}
|
||||
|
||||
bool dml21_create(const struct dc *in_dc, struct dml2_context **dml_ctx, const struct dml2_configuration_options *config)
|
||||
@@ -277,11 +281,16 @@ bool dml21_validate(const struct dc *in_dc, struct dc_state *context, struct dml
|
||||
{
|
||||
bool out = false;
|
||||
|
||||
DC_FP_START();
|
||||
|
||||
/* Use dml_validate_only for fast_validate path */
|
||||
if (fast_validate) {
|
||||
if (fast_validate)
|
||||
out = dml21_check_mode_support(in_dc, context, dml_ctx);
|
||||
} else
|
||||
else
|
||||
out = dml21_mode_check_and_programming(in_dc, context, dml_ctx);
|
||||
|
||||
DC_FP_END();
|
||||
|
||||
return out;
|
||||
}
|
||||
|
||||
@@ -420,8 +429,12 @@ void dml21_copy(struct dml2_context *dst_dml_ctx,
|
||||
|
||||
dst_dml_ctx->v21.mode_programming.programming = dst_dml2_programming;
|
||||
|
||||
DC_FP_START();
|
||||
|
||||
/* need to initialize copied instance for internal references to be correct */
|
||||
dml2_initialize_instance(&dst_dml_ctx->v21.dml_init);
|
||||
|
||||
DC_FP_END();
|
||||
}
|
||||
|
||||
bool dml21_create_copy(struct dml2_context **dst_dml_ctx,
|
||||
|
||||
@@ -734,11 +734,16 @@ bool dml2_validate(const struct dc *in_dc, struct dc_state *context, struct dml2
 		return out;
 	}
 
+	DC_FP_START();
+
 	/* Use dml_validate_only for fast_validate path */
 	if (fast_validate)
 		out = dml2_validate_only(context);
 	else
 		out = dml2_validate_and_build_resource(in_dc, context);
 
+	DC_FP_END();
+
 	return out;
 }
@@ -779,11 +784,15 @@ static void dml2_init(const struct dc *in_dc, const struct dml2_configuration_op
 		break;
 	}
 
+	DC_FP_START();
+
 	initialize_dml2_ip_params(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.ip);
 
 	initialize_dml2_soc_bbox(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.soc);
 
 	initialize_dml2_soc_states(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.soc, &(*dml2)->v20.dml_core_ctx.states);
+
+	DC_FP_END();
 }
 
 bool dml2_create(const struct dc *in_dc, const struct dml2_configuration_options *config, struct dml2_context **dml2)
@@ -3003,7 +3003,11 @@ void dcn20_enable_stream(struct pipe_ctx *pipe_ctx)
 		dccg->funcs->set_dpstreamclk(dccg, DTBCLK0, tg->inst, dp_hpo_inst);
 
 		phyd32clk = get_phyd32clk_src(link);
-		dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
+		if (link->cur_link_settings.link_rate == LINK_RATE_UNKNOWN) {
+			dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst);
+		} else {
+			dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
+		}
 	} else {
 		if (dccg->funcs->enable_symclk_se)
 			dccg->funcs->enable_symclk_se(dccg, stream_enc->stream_enc_inst,
@@ -1001,8 +1001,11 @@ void dcn401_enable_stream(struct pipe_ctx *pipe_ctx)
 	if (dc_is_dp_signal(pipe_ctx->stream->signal) || dc_is_virtual_signal(pipe_ctx->stream->signal)) {
 		if (dc->link_srv->dp_is_128b_132b_signal(pipe_ctx)) {
 			dccg->funcs->set_dpstreamclk(dccg, DPREFCLK, tg->inst, dp_hpo_inst);
 
-			dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
+			if (link->cur_link_settings.link_rate == LINK_RATE_UNKNOWN) {
+				dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst);
+			} else {
+				dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
+			}
 		} else {
 			/* need to set DTBCLK_P source to DPREFCLK for DP8B10B */
 			dccg->funcs->set_dtbclk_p_src(dccg, DPREFCLK, tg->inst);
@@ -891,7 +891,7 @@ static const struct dc_debug_options debug_defaults_drv = {
 	.disable_z10 = true,
 	.enable_legacy_fast_update = true,
 	.enable_z9_disable_interface = true, /* Allow support for the PMFW interface for disable Z9*/
-	.dml_hostvm_override = DML_HOSTVM_NO_OVERRIDE,
+	.dml_hostvm_override = DML_HOSTVM_OVERRIDE_FALSE,
 	.using_dml2 = false,
 };
@@ -267,10 +267,10 @@ int smu7_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed)
 	if (hwmgr->thermal_controller.fanInfo.bNoFan ||
 	    (hwmgr->thermal_controller.fanInfo.
 			ucTachometerPulsesPerRevolution == 0) ||
-	    speed == 0 ||
+	    (!speed || speed > UINT_MAX/8) ||
 	    (speed < hwmgr->thermal_controller.fanInfo.ulMinRPM) ||
 	    (speed > hwmgr->thermal_controller.fanInfo.ulMaxRPM))
-		return 0;
+		return -EINVAL;
 
 	if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl))
 		smu7_fan_ctrl_stop_smc_fan_control(hwmgr);
@@ -307,10 +307,10 @@ int vega10_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed)
 	int result = 0;
 
 	if (hwmgr->thermal_controller.fanInfo.bNoFan ||
-	    speed == 0 ||
+	    (!speed || speed > UINT_MAX/8) ||
 	    (speed < hwmgr->thermal_controller.fanInfo.ulMinRPM) ||
 	    (speed > hwmgr->thermal_controller.fanInfo.ulMaxRPM))
-		return -1;
+		return -EINVAL;
 
 	if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl))
 		result = vega10_fan_ctrl_stop_smc_fan_control(hwmgr);
@@ -191,7 +191,7 @@ int vega20_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed)
 	uint32_t tach_period, crystal_clock_freq;
 	int result = 0;
 
-	if (!speed)
+	if (!speed || speed > UINT_MAX/8)
 		return -EINVAL;
 
 	if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl)) {
@@ -1267,6 +1267,9 @@ static int arcturus_set_fan_speed_rpm(struct smu_context *smu,
 	uint32_t crystal_clock_freq = 2500;
 	uint32_t tach_period;
 
+	if (!speed || speed > UINT_MAX/8)
+		return -EINVAL;
+
 	tach_period = 60 * crystal_clock_freq * 10000 / (8 * speed);
 	WREG32_SOC15(THM, 0, mmCG_TACH_CTRL_ARCT,
 		     REG_SET_FIELD(RREG32_SOC15(THM, 0, mmCG_TACH_CTRL_ARCT),
@@ -1199,7 +1199,7 @@ int smu_v11_0_set_fan_speed_rpm(struct smu_context *smu,
 	uint32_t crystal_clock_freq = 2500;
 	uint32_t tach_period;
 
-	if (speed == 0)
+	if (!speed || speed > UINT_MAX/8)
 		return -EINVAL;
 	/*
 	 * To prevent from possible overheat, some ASICs may have requirement
@@ -1228,7 +1228,7 @@ int smu_v13_0_set_fan_speed_rpm(struct smu_context *smu,
 	uint32_t tach_period;
 	int ret;
 
-	if (!speed)
+	if (!speed || speed > UINT_MAX/8)
 		return -EINVAL;
 
 	ret = smu_v13_0_auto_fan_control(smu, 0);
@@ -17,6 +17,12 @@ static bool ast_astdp_is_connected(struct ast_device *ast)
 {
 	if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, AST_IO_VGACRDF_HPD))
 		return false;
+	/*
+	 * HPD might be set even if no monitor is connected, so also check that
+	 * the link training was successful.
+	 */
+	if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDC, AST_IO_VGACRDC_LINK_SUCCESS))
+		return false;
 	return true;
 }
@@ -1006,7 +1006,9 @@ static bool vrr_params_changed(const struct intel_crtc_state *old_crtc_state,
 		old_crtc_state->vrr.vmin != new_crtc_state->vrr.vmin ||
 		old_crtc_state->vrr.vmax != new_crtc_state->vrr.vmax ||
 		old_crtc_state->vrr.guardband != new_crtc_state->vrr.guardband ||
-		old_crtc_state->vrr.pipeline_full != new_crtc_state->vrr.pipeline_full;
+		old_crtc_state->vrr.pipeline_full != new_crtc_state->vrr.pipeline_full ||
+		old_crtc_state->vrr.vsync_start != new_crtc_state->vrr.vsync_start ||
+		old_crtc_state->vrr.vsync_end != new_crtc_state->vrr.vsync_end;
 }
 
 static bool cmrr_params_changed(const struct intel_crtc_state *old_crtc_state,
@@ -222,7 +222,6 @@ int intel_vgpu_init_opregion(struct intel_vgpu *vgpu)
 	u8 *buf;
 	struct opregion_header *header;
 	struct vbt v;
-	const char opregion_signature[16] = OPREGION_SIGNATURE;
 
 	gvt_dbg_core("init vgpu%d opregion\n", vgpu->id);
 	vgpu_opregion(vgpu)->va = (void *)__get_free_pages(GFP_KERNEL |
@@ -236,8 +235,10 @@ int intel_vgpu_init_opregion(struct intel_vgpu *vgpu)
 	/* emulated opregion with VBT mailbox only */
 	buf = (u8 *)vgpu_opregion(vgpu)->va;
 	header = (struct opregion_header *)buf;
-	memcpy(header->signature, opregion_signature,
-	       sizeof(opregion_signature));
+
+	static_assert(sizeof(header->signature) == sizeof(OPREGION_SIGNATURE) - 1);
+	memcpy(header->signature, OPREGION_SIGNATURE, sizeof(header->signature));
+
 	header->size = 0x8;
 	header->opregion_ver = 0x02000000;
 	header->mboxes = MBOX_VBT;
@@ -732,7 +732,7 @@ pvr_fw_process(struct pvr_device *pvr_dev)
 			       fw_mem->core_data, fw_mem->core_code_alloc_size);
 
 	if (err)
-		goto err_free_fw_core_data_obj;
+		goto err_free_kdata;
 
 	memcpy(fw_code_ptr, fw_mem->code, fw_mem->code_alloc_size);
 	memcpy(fw_data_ptr, fw_mem->data, fw_mem->data_alloc_size);
@@ -742,10 +742,14 @@ pvr_fw_process(struct pvr_device *pvr_dev)
 		memcpy(fw_core_data_ptr, fw_mem->core_data, fw_mem->core_data_alloc_size);
 
 	/* We're finished with the firmware section memory on the CPU, unmap. */
-	if (fw_core_data_ptr)
+	if (fw_core_data_ptr) {
 		pvr_fw_object_vunmap(fw_mem->core_data_obj);
-	if (fw_core_code_ptr)
+		fw_core_data_ptr = NULL;
+	}
+	if (fw_core_code_ptr) {
 		pvr_fw_object_vunmap(fw_mem->core_code_obj);
+		fw_core_code_ptr = NULL;
+	}
 	pvr_fw_object_vunmap(fw_mem->data_obj);
 	fw_data_ptr = NULL;
 	pvr_fw_object_vunmap(fw_mem->code_obj);
@@ -753,7 +757,7 @@ pvr_fw_process(struct pvr_device *pvr_dev)
 
 	err = pvr_fw_create_fwif_connection_ctl(pvr_dev);
 	if (err)
-		goto err_free_fw_core_data_obj;
+		goto err_free_kdata;
 
 	return 0;
 
@@ -763,13 +767,16 @@ err_free_kdata:
 	kfree(fw_mem->data);
 	kfree(fw_mem->code);
 
-err_free_fw_core_data_obj:
-	if (fw_core_data_ptr)
-		pvr_fw_object_unmap_and_destroy(fw_mem->core_data_obj);
+	if (fw_core_data_ptr)
+		pvr_fw_object_vunmap(fw_mem->core_data_obj);
+	if (fw_mem->core_data_obj)
+		pvr_fw_object_destroy(fw_mem->core_data_obj);
 
-err_free_fw_core_code_obj:
-	if (fw_core_code_ptr)
-		pvr_fw_object_unmap_and_destroy(fw_mem->core_code_obj);
+	if (fw_core_code_ptr)
+		pvr_fw_object_vunmap(fw_mem->core_code_obj);
+	if (fw_mem->core_code_obj)
+		pvr_fw_object_destroy(fw_mem->core_code_obj);
 
-err_free_fw_data_obj:
 	if (fw_data_ptr)
@@ -836,6 +843,12 @@ pvr_fw_cleanup(struct pvr_device *pvr_dev)
 	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
 
 	pvr_fw_fini_fwif_connection_ctl(pvr_dev);
+
+	kfree(fw_mem->core_data);
+	kfree(fw_mem->core_code);
+	kfree(fw_mem->data);
+	kfree(fw_mem->code);
+
 	if (fw_mem->core_code_obj)
 		pvr_fw_object_destroy(fw_mem->core_code_obj);
 	if (fw_mem->core_data_obj)
@@ -684,6 +684,13 @@ pvr_jobs_link_geom_frag(struct pvr_job_data *job_data, u32 *job_count)
 		geom_job->paired_job = frag_job;
 		frag_job->paired_job = geom_job;
 
+		/* The geometry job pvr_job structure is used when the fragment
+		 * job is being prepared by the GPU scheduler. Have the fragment
+		 * job hold a reference on the geometry job to prevent it being
+		 * freed until the fragment job has finished with it.
+		 */
+		pvr_job_get(geom_job);
+
 		/* Skip the fragment job we just paired to the geometry job. */
 		i++;
 	}
@@ -866,6 +866,10 @@ static void pvr_queue_free_job(struct drm_sched_job *sched_job)
 	struct pvr_job *job = container_of(sched_job, struct pvr_job, base);
 
 	drm_sched_job_cleanup(sched_job);
+
+	if (job->type == DRM_PVR_JOB_TYPE_FRAGMENT && job->paired_job)
+		pvr_job_put(job->paired_job);
+
 	job->paired_job = NULL;
 	pvr_job_put(job);
 }
@@ -223,7 +223,7 @@ void mgag200_set_mode_regs(struct mga_device *mdev, const struct drm_display_mod
 	vsyncstr = mode->crtc_vsync_start - 1;
 	vsyncend = mode->crtc_vsync_end - 1;
 	vtotal = mode->crtc_vtotal - 2;
-	vblkstr = mode->crtc_vblank_start;
+	vblkstr = mode->crtc_vblank_start - 1;
 	vblkend = vtotal + 1;
 
 	linecomp = vdispend;
@@ -1126,50 +1126,51 @@ static void a6xx_gmu_shutdown(struct a6xx_gmu *gmu)
 	struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	u32 val;
+	int ret;
 
 	/*
-	 * The GMU may still be in slumber unless the GPU started so check and
-	 * skip putting it back into slumber if so
+	 * GMU firmware's internal power state gets messed up if we send "prepare_slumber" hfi when
+	 * oob_gpu handshake wasn't done after the last wake up. So do a dummy handshake here when
+	 * required
 	 */
-	val = gmu_read(gmu, REG_A6XX_GPU_GMU_CX_GMU_RPMH_POWER_STATE);
+	if (adreno_gpu->base.needs_hw_init) {
+		if (a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET))
+			goto force_off;
 
-	if (val != 0xf) {
-		int ret = a6xx_gmu_wait_for_idle(gmu);
-
-		/* If the GMU isn't responding assume it is hung */
-		if (ret) {
-			a6xx_gmu_force_off(gmu);
-			return;
-		}
-
-		a6xx_bus_clear_pending_transactions(adreno_gpu, a6xx_gpu->hung);
-
-		/* tell the GMU we want to slumber */
-		ret = a6xx_gmu_notify_slumber(gmu);
-		if (ret) {
-			a6xx_gmu_force_off(gmu);
-			return;
-		}
-
-		ret = gmu_poll_timeout(gmu,
-		       REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS, val,
-		       !(val & A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS_GPUBUSYIGNAHB),
-		       100, 10000);
-
-		/*
-		 * Let the user know we failed to slumber but don't worry too
-		 * much because we are powering down anyway
-		 */
-
-		if (ret)
-			DRM_DEV_ERROR(gmu->dev,
-				      "Unable to slumber GMU: status = 0%x/0%x\n",
-				      gmu_read(gmu,
-					       REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS),
-				      gmu_read(gmu,
-					       REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS2));
+		a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET);
 	}
 
+	ret = a6xx_gmu_wait_for_idle(gmu);
+
+	/* If the GMU isn't responding assume it is hung */
+	if (ret)
+		goto force_off;
+
+	a6xx_bus_clear_pending_transactions(adreno_gpu, a6xx_gpu->hung);
+
+	/* tell the GMU we want to slumber */
+	ret = a6xx_gmu_notify_slumber(gmu);
+	if (ret)
+		goto force_off;
+
+	ret = gmu_poll_timeout(gmu,
+	       REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS, val,
+	       !(val & A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS_GPUBUSYIGNAHB),
+	       100, 10000);
+
+	/*
+	 * Let the user know we failed to slumber but don't worry too
+	 * much because we are powering down anyway
+	 */
+
+	if (ret)
+		DRM_DEV_ERROR(gmu->dev,
+			      "Unable to slumber GMU: status = 0%x/0%x\n",
+			      gmu_read(gmu,
+				       REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS),
+			      gmu_read(gmu,
+				       REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS2));
+
 	/* Turn off HFI */
 	a6xx_hfi_stop(gmu);
@@ -1178,6 +1179,11 @@ static void a6xx_gmu_shutdown(struct a6xx_gmu *gmu)
 
 	/* Tell RPMh to power off the GPU */
 	a6xx_rpmh_stop(gmu);
+
+	return;
+
+force_off:
+	a6xx_gmu_force_off(gmu);
 }
@@ -233,10 +233,10 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 				break;
 			fallthrough;
 		case MSM_SUBMIT_CMD_BUF:
-			OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3);
+			OUT_PKT7(ring, CP_INDIRECT_BUFFER, 3);
 			OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
 			OUT_RING(ring, upper_32_bits(submit->cmd[i].iova));
-			OUT_RING(ring, submit->cmd[i].size);
+			OUT_RING(ring, A5XX_CP_INDIRECT_BUFFER_2_IB_SIZE(submit->cmd[i].size));
 			ibs++;
 			break;
 		}
@@ -319,10 +319,10 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 				break;
 			fallthrough;
 		case MSM_SUBMIT_CMD_BUF:
-			OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3);
+			OUT_PKT7(ring, CP_INDIRECT_BUFFER, 3);
 			OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
 			OUT_RING(ring, upper_32_bits(submit->cmd[i].iova));
-			OUT_RING(ring, submit->cmd[i].size);
+			OUT_RING(ring, A5XX_CP_INDIRECT_BUFFER_2_IB_SIZE(submit->cmd[i].size));
 			ibs++;
 			break;
 		}
@@ -1827,8 +1827,15 @@ static int dsi_host_parse_dt(struct msm_dsi_host *msm_host)
 			 __func__, ret);
 		goto err;
 	}
-	if (!ret)
+	if (!ret) {
 		msm_dsi->te_source = devm_kstrdup(dev, te_source, GFP_KERNEL);
+		if (!msm_dsi->te_source) {
+			DRM_DEV_ERROR(dev, "%s: failed to allocate te_source\n",
+				      __func__);
+			ret = -ENOMEM;
+			goto err;
+		}
+	}
 	ret = 0;
 
 	if (of_property_read_bool(np, "syscon-sfpb")) {
@@ -2264,5 +2264,12 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
 	</reg32>
 </domain>
 
+<domain name="CP_INDIRECT_BUFFER" width="32" varset="chip" prefix="chip" variants="A5XX-">
+	<reg64 offset="0" name="IB_BASE" type="address"/>
+	<reg32 offset="2" name="2">
+		<bitfield name="IB_SIZE" low="0" high="19"/>
+	</reg32>
+</domain>
+
 </database>
@@ -144,6 +144,9 @@ nouveau_bo_del_ttm(struct ttm_buffer_object *bo)
 	nouveau_bo_del_io_reserve_lru(bo);
 	nv10_bo_put_tile_region(dev, nvbo->tile, NULL);
 
+	if (bo->base.import_attach)
+		drm_prime_gem_destroy(&bo->base, bo->sg);
+
 	/*
 	 * If nouveau_bo_new() allocated this buffer, the GEM object was never
 	 * initialized, so don't attempt to release it.
@@ -87,9 +87,6 @@ nouveau_gem_object_del(struct drm_gem_object *gem)
 		return;
 	}
 
-	if (gem->import_attach)
-		drm_prime_gem_destroy(gem, nvbo->bo.sg);
-
 	ttm_bo_put(&nvbo->bo);
 
 	pm_runtime_mark_last_busy(dev);
@@ -7,8 +7,6 @@ sti-drm-y := \
 	sti_compositor.o \
 	sti_crtc.o \
 	sti_plane.o \
-	sti_crtc.o \
-	sti_plane.o \
 	sti_hdmi.o \
 	sti_hdmi_tx3g4c28phy.o \
 	sti_dvo.o \
@@ -455,7 +455,7 @@ static void repaper_frame_fixed_repeat(struct repaper_epd *epd, u8 fixed_value,
 				       enum repaper_stage stage)
 {
 	u64 start = local_clock();
-	u64 end = start + (epd->factored_stage_time * 1000 * 1000);
+	u64 end = start + ((u64)epd->factored_stage_time * 1000 * 1000);
 
 	do {
 		repaper_frame_fixed(epd, fixed_value, stage);
@@ -466,7 +466,7 @@ static void repaper_frame_data_repeat(struct repaper_epd *epd, const u8 *image,
 				      const u8 *mask, enum repaper_stage stage)
 {
 	u64 start = local_clock();
-	u64 end = start + (epd->factored_stage_time * 1000 * 1000);
+	u64 end = start + ((u64)epd->factored_stage_time * 1000 * 1000);
 
 	do {
 		repaper_frame_data(epd, image, mask, stage);
@@ -410,7 +410,8 @@ v3d_rewrite_csd_job_wg_counts_from_indirect(struct v3d_cpu_job *job)
 	struct v3d_bo *bo = to_v3d_bo(job->base.bo[0]);
 	struct v3d_bo *indirect = to_v3d_bo(indirect_csd->indirect);
 	struct drm_v3d_submit_csd *args = &indirect_csd->job->args;
-	u32 *wg_counts;
+	struct v3d_dev *v3d = job->base.v3d;
+	u32 num_batches, *wg_counts;
 
 	v3d_get_bo_vaddr(bo);
 	v3d_get_bo_vaddr(indirect);
@@ -423,8 +424,17 @@ v3d_rewrite_csd_job_wg_counts_from_indirect(struct v3d_cpu_job *job)
 	args->cfg[0] = wg_counts[0] << V3D_CSD_CFG012_WG_COUNT_SHIFT;
 	args->cfg[1] = wg_counts[1] << V3D_CSD_CFG012_WG_COUNT_SHIFT;
 	args->cfg[2] = wg_counts[2] << V3D_CSD_CFG012_WG_COUNT_SHIFT;
-	args->cfg[4] = DIV_ROUND_UP(indirect_csd->wg_size, 16) *
-		       (wg_counts[0] * wg_counts[1] * wg_counts[2]) - 1;
+
+	num_batches = DIV_ROUND_UP(indirect_csd->wg_size, 16) *
+		      (wg_counts[0] * wg_counts[1] * wg_counts[2]);
+
+	/* V3D 7.1.6 and later don't subtract 1 from the number of batches */
+	if (v3d->ver < 71 || (v3d->ver == 71 && v3d->rev < 6))
+		args->cfg[4] = num_batches - 1;
+	else
+		args->cfg[4] = num_batches;
+
+	WARN_ON(args->cfg[4] == ~0);
 
 	for (int i = 0; i < 3; i++) {
 		/* 0xffffffff indicates that the uniform rewrite is not needed */
@@ -145,10 +145,7 @@ static void xe_dma_buf_unmap(struct dma_buf_attachment *attach,
 			     struct sg_table *sgt,
 			     enum dma_data_direction dir)
 {
-	struct dma_buf *dma_buf = attach->dmabuf;
-	struct xe_bo *bo = gem_to_xe_bo(dma_buf->priv);
-
-	if (!xe_bo_is_vram(bo)) {
+	if (sg_page(sgt->sgl)) {
 		dma_unmap_sgtable(attach->dev, sgt, dir, 0);
 		sg_free_table(sgt);
 		kfree(sgt);
@@ -310,6 +310,13 @@ int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt)
 	return 0;
 }
 
+/*
+ * Ensure that roundup_pow_of_two(length) doesn't overflow.
+ * Note that roundup_pow_of_two() operates on unsigned long,
+ * not on u64.
+ */
+#define MAX_RANGE_TLB_INVALIDATION_LENGTH (rounddown_pow_of_two(ULONG_MAX))
+
 /**
  * xe_gt_tlb_invalidation_range - Issue a TLB invalidation on this GT for an
  * address range
@@ -334,6 +341,7 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
 	struct xe_device *xe = gt_to_xe(gt);
 #define MAX_TLB_INVALIDATION_LEN 7
 	u32 action[MAX_TLB_INVALIDATION_LEN];
+	u64 length = end - start;
 	int len = 0;
 
 	xe_gt_assert(gt, fence);
@@ -346,11 +354,11 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
 
 	action[len++] = XE_GUC_ACTION_TLB_INVALIDATION;
 	action[len++] = 0; /* seqno, replaced in send_tlb_invalidation */
-	if (!xe->info.has_range_tlb_invalidation) {
+	if (!xe->info.has_range_tlb_invalidation ||
+	    length > MAX_RANGE_TLB_INVALIDATION_LENGTH) {
 		action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
 	} else {
 		u64 orig_start = start;
-		u64 length = end - start;
 		u64 align;
 
 		if (length < SZ_4K)
@@ -483,24 +483,52 @@ static void fill_engine_enable_masks(struct xe_gt *gt,
 		       engine_enable_mask(gt, XE_ENGINE_CLASS_OTHER));
 }
 
-static void guc_prep_golden_lrc_null(struct xe_guc_ads *ads)
+/*
+ * Write the offsets corresponding to the golden LRCs. The actual data is
+ * populated later by guc_golden_lrc_populate()
+ */
+static void guc_golden_lrc_init(struct xe_guc_ads *ads)
 {
 	struct xe_device *xe = ads_to_xe(ads);
+	struct xe_gt *gt = ads_to_gt(ads);
 	struct iosys_map info_map = IOSYS_MAP_INIT_OFFSET(ads_to_map(ads),
 			offsetof(struct __guc_ads_blob, system_info));
-	u8 guc_class;
+	size_t alloc_size, real_size;
+	u32 addr_ggtt, offset;
+	int class;
+
+	offset = guc_ads_golden_lrc_offset(ads);
+	addr_ggtt = xe_bo_ggtt_addr(ads->bo) + offset;
+
+	for (class = 0; class < XE_ENGINE_CLASS_MAX; ++class) {
+		u8 guc_class;
+
+		guc_class = xe_engine_class_to_guc_class(class);
 
-	for (guc_class = 0; guc_class <= GUC_MAX_ENGINE_CLASSES; ++guc_class) {
 		if (!info_map_read(xe, &info_map,
 				   engine_enabled_masks[guc_class]))
 			continue;
 
+		real_size = xe_gt_lrc_size(gt, class);
+		alloc_size = PAGE_ALIGN(real_size);
+
+		/*
+		 * This interface is slightly confusing. We need to pass the
+		 * base address of the full golden context and the size of just
+		 * the engine state, which is the section of the context image
+		 * that starts after the execlists LRC registers. This is
+		 * required to allow the GuC to restore just the engine state
+		 * when a watchdog reset occurs.
+		 * We calculate the engine state size by removing the size of
+		 * what comes before it in the context image (which is identical
+		 * on all engines).
+		 */
 		ads_blob_write(ads, ads.eng_state_size[guc_class],
-			       guc_ads_golden_lrc_size(ads) -
-			       xe_lrc_skip_size(xe));
+			       real_size - xe_lrc_skip_size(xe));
 		ads_blob_write(ads, ads.golden_context_lrca[guc_class],
-			       xe_bo_ggtt_addr(ads->bo) +
-			       guc_ads_golden_lrc_offset(ads));
+			       addr_ggtt);
+
+		addr_ggtt += alloc_size;
 	}
 }
@@ -710,7 +738,7 @@ void xe_guc_ads_populate_minimal(struct xe_guc_ads *ads)
 
 	xe_map_memset(ads_to_xe(ads), ads_to_map(ads), 0, 0, ads->bo->size);
 	guc_policies_init(ads);
-	guc_prep_golden_lrc_null(ads);
+	guc_golden_lrc_init(ads);
 	guc_mapping_table_init_invalid(gt, &info_map);
 	guc_doorbell_init(ads);
 
@@ -736,7 +764,7 @@ void xe_guc_ads_populate(struct xe_guc_ads *ads)
 	guc_policies_init(ads);
 	fill_engine_enable_masks(gt, &info_map);
 	guc_mmio_reg_state_init(ads);
-	guc_prep_golden_lrc_null(ads);
+	guc_golden_lrc_init(ads);
 	guc_mapping_table_init(gt, &info_map);
 	guc_capture_list_init(ads);
 	guc_doorbell_init(ads);
@@ -756,18 +784,22 @@ void xe_guc_ads_populate(struct xe_guc_ads *ads)
 		       guc_ads_private_data_offset(ads));
 }
 
-static void guc_populate_golden_lrc(struct xe_guc_ads *ads)
+/*
+ * After the golden LRC's are recorded for each engine class by the first
+ * submission, copy them to the ADS, as initialized earlier by
+ * guc_golden_lrc_init().
+ */
+static void guc_golden_lrc_populate(struct xe_guc_ads *ads)
 {
 	struct xe_device *xe = ads_to_xe(ads);
 	struct xe_gt *gt = ads_to_gt(ads);
 	struct iosys_map info_map = IOSYS_MAP_INIT_OFFSET(ads_to_map(ads),
 			offsetof(struct __guc_ads_blob, system_info));
 	size_t total_size = 0, alloc_size, real_size;
-	u32 addr_ggtt, offset;
+	u32 offset;
 	int class;
 
 	offset = guc_ads_golden_lrc_offset(ads);
-	addr_ggtt = xe_bo_ggtt_addr(ads->bo) + offset;
 
 	for (class = 0; class < XE_ENGINE_CLASS_MAX; ++class) {
 		u8 guc_class;
@@ -784,26 +816,9 @@ static void guc_populate_golden_lrc(struct xe_guc_ads *ads)
 		alloc_size = PAGE_ALIGN(real_size);
 		total_size += alloc_size;
 
-		/*
-		 * This interface is slightly confusing. We need to pass the
-		 * base address of the full golden context and the size of just
-		 * the engine state, which is the section of the context image
-		 * that starts after the execlists LRC registers. This is
-		 * required to allow the GuC to restore just the engine state
-		 * when a watchdog reset occurs.
-		 * We calculate the engine state size by removing the size of
-		 * what comes before it in the context image (which is identical
-		 * on all engines).
-		 */
-		ads_blob_write(ads, ads.eng_state_size[guc_class],
-			       real_size - xe_lrc_skip_size(xe));
-		ads_blob_write(ads, ads.golden_context_lrca[guc_class],
-			       addr_ggtt);
-
 		xe_map_memcpy_to(xe, ads_to_map(ads), offset,
 				 gt->default_lrc[class], real_size);
 
-		addr_ggtt += alloc_size;
 		offset += alloc_size;
 	}
 
@@ -812,7 +827,7 @@ static void guc_populate_golden_lrc(struct xe_guc_ads *ads)
 
 void xe_guc_ads_populate_post_load(struct xe_guc_ads *ads)
 {
-	guc_populate_golden_lrc(ads);
+	guc_golden_lrc_populate(ads);
 }
 
 static int guc_ads_action_update_policies(struct xe_guc_ads *ads, u32 policy_offset)
@@ -19,29 +19,6 @@ static u64 xe_npages_in_range(unsigned long start, unsigned long end)
 	return (end - start) >> PAGE_SHIFT;
 }
 
-/**
- * xe_mark_range_accessed() - mark a range is accessed, so core mm
- * have such information for memory eviction or write back to
- * hard disk
- * @range: the range to mark
- * @write: if write to this range, we mark pages in this range
- * as dirty
- */
-static void xe_mark_range_accessed(struct hmm_range *range, bool write)
-{
-	struct page *page;
-	u64 i, npages;
-
-	npages = xe_npages_in_range(range->start, range->end);
-	for (i = 0; i < npages; i++) {
-		page = hmm_pfn_to_page(range->hmm_pfns[i]);
-		if (write)
-			set_page_dirty_lock(page);
-
-		mark_page_accessed(page);
-	}
-}
-
 static int xe_alloc_sg(struct xe_device *xe, struct sg_table *st,
 		       struct hmm_range *range, struct rw_semaphore *notifier_sem)
 {
@@ -331,7 +308,6 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
 	if (ret)
 		goto out_unlock;
 
-	xe_mark_range_accessed(&hmm_range, write);
 	userptr->sg = &userptr->sgt;
 	xe_hmm_userptr_set_mapped(uvma);
 	userptr->notifier_seq = hmm_range.notifier_seq;
@@ -1177,7 +1177,7 @@ err:
 err_sync:
 	/* Sync partial copies if any. FIXME: job_mutex? */
 	if (fence) {
-		dma_fence_wait(m->fence, false);
+		dma_fence_wait(fence, false);
 		dma_fence_put(fence);
 	}
@@ -247,6 +247,9 @@ static int ec_i2c_probe(struct platform_device *pdev)
 	u32 remote_bus;
 	int err;
 
+	if (!ec)
+		return dev_err_probe(dev, -EPROBE_DEFER, "couldn't find parent EC device\n");
+
 	if (!ec->cmd_xfer) {
 		dev_err(dev, "Missing sendrecv\n");
 		return -EINVAL;
@@ -8,12 +8,12 @@
  * Originally based on i2c-mux.c
  */
 
-#include <linux/fwnode.h>
 #include <linux/i2c-atr.h>
 #include <linux/i2c.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
+#include <linux/property.h>
 #include <linux/slab.h>
 #include <linux/spinlock.h>
@@ -72,6 +72,8 @@ static const char * const cma_events[] = {
 static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid,
 			      enum ib_gid_type gid_type);
 
+static void cma_netevent_work_handler(struct work_struct *_work);
+
 const char *__attribute_const__ rdma_event_msg(enum rdma_cm_event_type event)
 {
 	size_t index = event;
@@ -1033,6 +1035,7 @@ __rdma_create_id(struct net *net, rdma_cm_event_handler event_handler,
 	get_random_bytes(&id_priv->seq_num, sizeof id_priv->seq_num);
 	id_priv->id.route.addr.dev_addr.net = get_net(net);
 	id_priv->seq_num &= 0x00ffffff;
+	INIT_WORK(&id_priv->id.net_work, cma_netevent_work_handler);
 
 	rdma_restrack_new(&id_priv->res, RDMA_RESTRACK_CM_ID);
 	if (parent)
@@ -5227,7 +5230,6 @@ static int cma_netevent_callback(struct notifier_block *self,
 		if (!memcmp(current_id->id.route.addr.dev_addr.dst_dev_addr,
 			    neigh->ha, ETH_ALEN))
 			continue;
-		INIT_WORK(&current_id->id.net_work, cma_netevent_work_handler);
 		cma_id_get(current_id);
 		queue_work(cma_wq, &current_id->id.net_work);
 	}
@@ -76,12 +76,14 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
 
 	npfns = (end - start) >> PAGE_SHIFT;
 	umem_odp->pfn_list = kvcalloc(
-		npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL);
+		npfns, sizeof(*umem_odp->pfn_list),
+		GFP_KERNEL | __GFP_NOWARN);
 	if (!umem_odp->pfn_list)
 		return -ENOMEM;
 
 	umem_odp->dma_list = kvcalloc(
-		ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL);
+		ndmas, sizeof(*umem_odp->dma_list),
+		GFP_KERNEL | __GFP_NOWARN);
 	if (!umem_odp->dma_list) {
 		ret = -ENOMEM;
 		goto out_pfn_list;
@@ -763,7 +763,7 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
 		if (ret)
 			return ret;
 	}
-	dma_set_max_seg_size(dev, UINT_MAX);
+	dma_set_max_seg_size(dev, SZ_2G);
 	ret = ib_register_device(ib_dev, "hns_%d", dev);
 	if (ret) {
 		dev_err(dev, "ib_register_device failed!\n");
@@ -380,7 +380,7 @@ static void *usnic_ib_device_add(struct pci_dev *dev)
|
||||
if (!us_ibdev) {
|
||||
usnic_err("Device %s context alloc failed\n",
|
||||
netdev_name(pci_get_drvdata(dev)));
|
||||
return ERR_PTR(-EFAULT);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
us_ibdev->ufdev = usnic_fwd_dev_alloc(dev);
|
||||
@@ -500,8 +500,8 @@ static struct usnic_ib_dev *usnic_ib_discover_pf(struct usnic_vnic *vnic)
 	}

 	us_ibdev = usnic_ib_device_add(parent_pci);
-	if (IS_ERR_OR_NULL(us_ibdev)) {
-		us_ibdev = us_ibdev ? us_ibdev : ERR_PTR(-EFAULT);
+	if (!us_ibdev) {
+		us_ibdev = ERR_PTR(-EFAULT);
 		goto out;
 	}

@@ -569,10 +569,10 @@ static int usnic_ib_pci_probe(struct pci_dev *pdev,
 	}

 	pf = usnic_ib_discover_pf(vf->vnic);
-	if (IS_ERR_OR_NULL(pf)) {
-		usnic_err("Failed to discover pf of vnic %s with err%ld\n",
-			  pci_name(pdev), PTR_ERR(pf));
-		err = pf ? PTR_ERR(pf) : -EFAULT;
+	if (IS_ERR(pf)) {
+		err = PTR_ERR(pf);
+		usnic_err("Failed to discover pf of vnic %s with err%d\n",
+			  pci_name(pdev), err);
 		goto out_clean_vnic;
 	}

@@ -2355,9 +2355,8 @@ static int bitmap_get_stats(void *data, struct md_bitmap_stats *stats)

 	if (!bitmap)
 		return -ENOENT;
-	if (bitmap->mddev->bitmap_info.external)
-		return -ENOENT;
-	if (!bitmap->storage.sb_page) /* no superblock */
+	if (!bitmap->mddev->bitmap_info.external &&
+	    !bitmap->storage.sb_page)
 		return -EINVAL;
 	sb = kmap_local_page(bitmap->storage.sb_page);
 	stats->sync_size = le64_to_cpu(sb->sync_size);

@@ -629,6 +629,12 @@ static void __mddev_put(struct mddev *mddev)
 	queue_work(md_misc_wq, &mddev->del_work);
 }

+static void mddev_put_locked(struct mddev *mddev)
+{
+	if (atomic_dec_and_test(&mddev->active))
+		__mddev_put(mddev);
+}
+
 void mddev_put(struct mddev *mddev)
 {
 	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))

@@ -8461,9 +8467,7 @@ static int md_seq_show(struct seq_file *seq, void *v)
 	if (mddev == list_last_entry(&all_mddevs, struct mddev, all_mddevs))
 		status_unused(seq);

-	if (atomic_dec_and_test(&mddev->active))
-		__mddev_put(mddev);
-
+	mddev_put_locked(mddev);
 	return 0;
 }

@@ -9886,11 +9890,11 @@ EXPORT_SYMBOL_GPL(rdev_clear_badblocks);
 static int md_notify_reboot(struct notifier_block *this,
 			    unsigned long code, void *x)
 {
-	struct mddev *mddev, *n;
+	struct mddev *mddev;
 	int need_delay = 0;

 	spin_lock(&all_mddevs_lock);
-	list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
+	list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
 		if (!mddev_get(mddev))
 			continue;
 		spin_unlock(&all_mddevs_lock);
@@ -9902,8 +9906,8 @@ static int md_notify_reboot(struct notifier_block *this,
 			mddev_unlock(mddev);
 		}
 		need_delay = 1;
-		mddev_put(mddev);
 		spin_lock(&all_mddevs_lock);
+		mddev_put_locked(mddev);
 	}
 	spin_unlock(&all_mddevs_lock);

@@ -10236,7 +10240,7 @@ void md_autostart_arrays(int part)

 static __exit void md_exit(void)
 {
-	struct mddev *mddev, *n;
+	struct mddev *mddev;
 	int delay = 1;

 	unregister_blkdev(MD_MAJOR,"md");
@@ -10257,7 +10261,7 @@ static __exit void md_exit(void)
 	remove_proc_entry("mdstat", NULL);

 	spin_lock(&all_mddevs_lock);
-	list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
+	list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
 		if (!mddev_get(mddev))
 			continue;
 		spin_unlock(&all_mddevs_lock);
@@ -10269,8 +10273,8 @@ static __exit void md_exit(void)
 		 * the mddev for destruction by a workqueue, and the
 		 * destroy_workqueue() below will wait for that to complete.
 		 */
-		mddev_put(mddev);
 		spin_lock(&all_mddevs_lock);
+		mddev_put_locked(mddev);
 	}
 	spin_unlock(&all_mddevs_lock);

@@ -1687,6 +1687,7 @@ retry_discard:
 	 * The discard bio returns only first r10bio finishes
 	 */
 	if (first_copy) {
+		md_account_bio(mddev, &bio);
 		r10_bio->master_bio = bio;
 		set_bit(R10BIO_Discard, &r10_bio->state);
 		first_copy = false;

@@ -251,6 +251,9 @@ fail:
 		break;
 	}

+	test->num_irqs = i;
+	pci_endpoint_test_release_irq(test);
+
 	return false;
 }

@@ -738,6 +741,7 @@ static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
 	if (!pci_endpoint_test_request_irq(test))
 		goto err;

+	irq_type = test->irq_type;
 	return true;

 err:

@@ -907,15 +907,16 @@ static int rkcanfd_probe(struct platform_device *pdev)
 	priv->can.data_bittiming_const = &rkcanfd_data_bittiming_const;
 	priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
 		CAN_CTRLMODE_BERR_REPORTING;
-	if (!(priv->devtype_data.quirks & RKCANFD_QUIRK_CANFD_BROKEN))
-		priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD;
 	priv->can.do_set_mode = rkcanfd_set_mode;
 	priv->can.do_get_berr_counter = rkcanfd_get_berr_counter;
 	priv->ndev = ndev;

 	match = device_get_match_data(&pdev->dev);
-	if (match)
+	if (match) {
 		priv->devtype_data = *(struct rkcanfd_devtype_data *)match;
+		if (!(priv->devtype_data.quirks & RKCANFD_QUIRK_CANFD_BROKEN))
+			priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD;
+	}

 	err = can_rx_offload_add_manual(ndev, &priv->offload,
 					RKCANFD_NAPI_WEIGHT);

@@ -737,6 +737,15 @@ static void b53_enable_mib(struct b53_device *dev)
 	b53_write8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, gc);
 }

+static void b53_enable_stp(struct b53_device *dev)
+{
+	u8 gc;
+
+	b53_read8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, &gc);
+	gc |= GC_RX_BPDU_EN;
+	b53_write8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, gc);
+}
+
 static u16 b53_default_pvid(struct b53_device *dev)
 {
 	if (is5325(dev) || is5365(dev))
@@ -876,6 +885,7 @@ static int b53_switch_reset(struct b53_device *dev)
 	}

 	b53_enable_mib(dev);
+	b53_enable_stp(dev);

 	return b53_flush_arl(dev, FAST_AGE_STATIC);
 }

@@ -1878,6 +1878,8 @@ static int mv88e6xxx_vtu_get(struct mv88e6xxx_chip *chip, u16 vid,
 	if (!chip->info->ops->vtu_getnext)
 		return -EOPNOTSUPP;

+	memset(entry, 0, sizeof(*entry));
+
 	entry->vid = vid ? vid - 1 : mv88e6xxx_max_vid(chip);
 	entry->valid = false;

@@ -2013,7 +2015,16 @@ static int mv88e6xxx_mst_put(struct mv88e6xxx_chip *chip, u8 sid)
 	struct mv88e6xxx_mst *mst, *tmp;
 	int err;

-	if (!sid)
+	/* If the SID is zero, it is for a VLAN mapped to the default MSTI,
+	 * and mv88e6xxx_stu_setup() made sure it is always present, and thus,
+	 * should not be removed here.
+	 *
+	 * If the chip lacks STU support, numerically the "sid" variable will
+	 * happen to also be zero, but we don't want to rely on that fact, so
+	 * we explicitly test that first. In that case, there is also nothing
+	 * to do here.
+	 */
+	if (!mv88e6xxx_has_stu(chip) || !sid)
 		return 0;

 	list_for_each_entry_safe(mst, tmp, &chip->msts, node) {

@@ -743,7 +743,8 @@ void mv88e6xxx_teardown_devlink_regions_global(struct dsa_switch *ds)
 	int i;

 	for (i = 0; i < ARRAY_SIZE(mv88e6xxx_regions); i++)
-		dsa_devlink_region_destroy(chip->regions[i]);
+		if (chip->regions[i])
+			dsa_devlink_region_destroy(chip->regions[i]);
 }

 void mv88e6xxx_teardown_devlink_regions_port(struct dsa_switch *ds, int port)

@@ -154,8 +154,9 @@ void pdsc_debugfs_add_qcq(struct pdsc *pdsc, struct pdsc_qcq *qcq)
 	debugfs_create_u32("index", 0400, intr_dentry, &intr->index);
 	debugfs_create_u32("vector", 0400, intr_dentry, &intr->vector);

-	intr_ctrl_regset = kzalloc(sizeof(*intr_ctrl_regset),
-				   GFP_KERNEL);
+	intr_ctrl_regset = devm_kzalloc(pdsc->dev,
+					sizeof(*intr_ctrl_regset),
+					GFP_KERNEL);
 	if (!intr_ctrl_regset)
 		return;
 	intr_ctrl_regset->regs = intr_ctrl_regs;

@@ -758,7 +758,7 @@ tx_free:
 	dev_kfree_skb_any(skb);
tx_kick_pending:
 	if (BNXT_TX_PTP_IS_SET(lflags)) {
-		txr->tx_buf_ring[txr->tx_prod].is_ts_pkt = 0;
+		txr->tx_buf_ring[RING_TX(bp, txr->tx_prod)].is_ts_pkt = 0;
 		atomic64_inc(&bp->ptp_cfg->stats.ts_err);
 		if (!(bp->fw_cap & BNXT_FW_CAP_TX_TS_CMP))
 			/* set SKB to err so PTP worker will clean up */
@@ -766,7 +766,7 @@ tx_kick_pending:
 	}
 	if (txr->kick_pending)
 		bnxt_txr_db_kick(bp, txr, txr->tx_prod);
-	txr->tx_buf_ring[txr->tx_prod].skb = NULL;
+	txr->tx_buf_ring[RING_TX(bp, txr->tx_prod)].skb = NULL;
 	dev_core_stats_tx_dropped_inc(dev);
 	return NETDEV_TX_OK;
 }

@@ -2270,6 +2270,7 @@ int cxgb4_init_ethtool_filters(struct adapter *adap)
 		eth_filter->port[i].bmap = bitmap_zalloc(nentries, GFP_KERNEL);
 		if (!eth_filter->port[i].bmap) {
 			ret = -ENOMEM;
+			kvfree(eth_filter->port[i].loc_array);
 			goto free_eth_finfo;
 		}
 	}

Some files were not shown because too many files have changed in this diff.