Merge 6.12.37 into android16-6.12-lts
GKI (arm64) relevant 35 out of 230 changes, affecting 67 files +612/-427

ec9be081c5 Revert "mmc: sdhci: Disable SD card clock before changing parameters" [1 file, +2/-7]
0698a2eb7d Bluetooth: HCI: Set extended advertising data synchronously [2 files, +130/-113]
3672fe9d1e Bluetooth: hci_sync: revert some mesh modifications [1 file, +4/-12]
44bb1e13b4 Bluetooth: MGMT: set_mesh: update LE scan interval and window [1 file, +22/-0]
a99f80c88a Bluetooth: MGMT: mesh_send: check instances prior disabling advertising [1 file, +2/-1]
5581e694d3 usb: typec: altmodes/displayport: do not index invalid pin_assignments [2 files, +2/-1]
b1abc5ab47 scsi: sd: Fix VPD page 0xb7 length check [1 file, +1/-1]
381c1c1219 Bluetooth: Prevent unintended pause by checking if advertising is active [1 file, +4/-0]
f0fee863a7 nvme: Fix incorrect cdw15 value in passthru error logging [1 file, +1/-1]
50c86c0945 netfs: Fix i_size updating [2 files, +8/-2]
a553afd91f net/sched: Always pass notifications when child class becomes empty [1 file, +5/-14]
d78f79a2c1 spinlock: extend guard with spinlock_bh variants [1 file, +13/-0]
0cc4721a71 sched/fair: Rename h_nr_running into h_nr_queued [5 files, +53/-53]
a2562bdd35 sched/fair: Fixup wake_up_sync() vs DELAYED_DEQUEUE [1 file, +11/-2]
5833026221 f2fs: decrease spare area for pinned files for zoned devices [3 files, +5/-2]
8912b139a8 f2fs: zone: fix to calculate first_zoned_segno correctly [4 files, +69/-20]
c5474a7b04 bpf: use common instruction history across all states [2 files, +63/-63]
4265682c29 bpf: Do not include stack ptr register in precision backtracking bookkeeping [2 files, +24/-6]
e0fefe9bc0 netfs: Fix oops in write-retry from mis-resetting the subreq iterator [1 file, +3/-2]
acf9ab15ec selinux: change security_compute_sid to return the ssid or tsid on match [1 file, +11/-5]
42c5a4b47d rcu: Return early if callback is not specified [1 file, +4/-0]
e036efbe58 add a string-to-qstr constructor [10 files, +13/-23]
f94c422157 fs: export anon_inode_make_secure_inode() and fix secretmem LSM bypass [3 files, +21/-13]
8caccd2eac usb: xhci: Skip xhci_reset in xhci_resume if xhci is being removed [1 file, +4/-1]
9f75893189 Revert "usb: xhci: Implement xhci_handshake_check_state() helper" [3 files, +3/-30]
fbebc2254a usb: xhci: quirk for data loss in ISOC transfers [3 files, +30/-0]
195597e0be xhci: Disable stream for xHC controller with XHCI_BROKEN_STREAMS [1 file, +2/-1]
dbdd2a2320 Input: xpad - support Acer NGR 200 Controller [1 file, +2/-0]
3b1407caac usb: dwc3: Abort suspend on soft disconnect failure [2 files, +16/-15]
7cb8750160 usb: acpi: fix device link removal [3 files, +8/-1]
c745744a82 dma-buf: fix timeout handling in dma_resv_wait_timeout v2 [1 file, +7/-5]
ccdc472b4d Logitech C-270 even more broken [1 file, +2/-1]
c782f98eef usb: typec: displayport: Fix potential deadlock [1 file, +1/-2]
ead91de35d mm/vmalloc: fix data race in show_numa_info() [1 file, +35/-28]
4c443046d8 mm: userfaultfd: fix race of userfaultfd_move and swap cache [1 file, +31/-2]

Changes in 6.12.37
	rtc: pcf2127: add missing semicolon after statement
	rtc: pcf2127: fix SPI command byte for PCF2131
	rtc: cmos: use spin_lock_irqsave in cmos_interrupt
	virtio-net: xsk: rx: fix the frame's length check
	virtio-net: ensure the received length does not exceed allocated size
	s390/pci: Fix stale function handles in error handling
	s390/pci: Do not try re-enabling load/store if device is disabled
	net: txgbe: request MISC IRQ in ndo_open
	vsock/vmci: Clear the vmci transport packet properly when initializing it
	net: libwx: fix the incorrect display of the queue number
	mmc: sdhci: Add a helper function for dump register in dynamic debug mode
	Revert "mmc: sdhci: Disable SD card clock before changing parameters"
	mmc: core: sd: Apply BROKEN_SD_DISCARD quirk earlier
	Bluetooth: HCI: Set extended advertising data synchronously
	Bluetooth: hci_sync: revert some mesh modifications
	Bluetooth: MGMT: set_mesh: update LE scan interval and window
	Bluetooth: MGMT: mesh_send: check instances prior disabling advertising
	iommufd/selftest: Fix iommufd_dirty_tracking with large hugepage sizes
	regulator: gpio: Fix the out-of-bounds access to drvdata::gpiods
	Input: cs40l50-vibra - fix potential NULL dereference in cs40l50_upload_owt()
	usb: typec: altmodes/displayport: do not index invalid pin_assignments
	mtk-sd: Fix a pagefault in dma_unmap_sg() for not prepared data
	mtk-sd: Prevent memory corruption from DMA map failure
	mtk-sd: reset host->mrq on prepare_data() error
	drm/v3d: Disable interrupts before resetting the GPU
	firmware: arm_ffa: Fix memory leak by freeing notifier callback node
	firmware: arm_ffa: Move memory allocation outside the mutex locking
	firmware: arm_ffa: Replace mutex with rwlock to avoid sleep in atomic context
	arm64: dts: apple: t8103: Fix PCIe BCM4377 nodename
	platform/mellanox: mlxbf-tmfifo: fix vring_desc.len assignment
	RDMA/mlx5: Fix unsafe xarray access in implicit ODP handling
	RDMA/mlx5: Initialize obj_event->obj_sub_list before xa_insert
	nfs: Clean up /proc/net/rpc/nfs when nfs_fs_proc_net_init() fails.
	NFSv4/pNFS: Fix a race to wake on NFS_LAYOUT_DRAIN
	scsi: qla2xxx: Fix DMA mapping test in qla24xx_get_port_database()
	scsi: qla4xxx: Fix missing DMA mapping error in qla4xxx_alloc_pdu()
	scsi: sd: Fix VPD page 0xb7 length check
	scsi: ufs: core: Fix spelling of a sysfs attribute name
	RDMA/mlx5: Fix HW counters query for non-representor devices
	RDMA/mlx5: Fix CC counters query for MPV
	RDMA/mlx5: Fix vport loopback for MPV device
	platform/mellanox: mlxbf-pmc: Fix duplicate event ID for CACHE_DATA1
	platform/mellanox: nvsw-sn2201: Fix bus number in adapter error message
	Bluetooth: Prevent unintended pause by checking if advertising is active
	btrfs: fix missing error handling when searching for inode refs during log replay
	btrfs: fix iteration of extrefs during log replay
	btrfs: return a btrfs_inode from btrfs_iget_logging()
	btrfs: return a btrfs_inode from read_one_inode()
	btrfs: fix invalid inode pointer dereferences during log replay
	btrfs: fix inode lookup error handling during log replay
	btrfs: record new subvolume in parent dir earlier to avoid dir logging races
	btrfs: propagate last_unlink_trans earlier when doing a rmdir
	btrfs: use btrfs_record_snapshot_destroy() during rmdir
	ethernet: atl1: Add missing DMA mapping error checks and count errors
	dpaa2-eth: fix xdp_rxq_info leak
	drm/exynos: fimd: Guard display clock control with runtime PM calls
	spi: spi-fsl-dspi: Clear completion counter before initiating transfer
	drm/i915/selftests: Change mock_request() to return error pointers
	nvme: Fix incorrect cdw15 value in passthru error logging
	nvmet: fix memory leak of bio integrity
	platform/x86: dell-wmi-sysman: Fix WMI data block retrieval in sysfs callbacks
	platform/x86: hp-bioscfg: Directly use firmware_attributes_class
	platform/x86: hp-bioscfg: Fix class device unregistration
	platform/x86: firmware_attributes_class: Move include linux/device/class.h
	platform/x86: firmware_attributes_class: Simplify API
	platform/x86: think-lmi: Directly use firmware_attributes_class
	platform/x86: think-lmi: Fix class device unregistration
	platform/x86: dell-sysman: Directly use firmware_attributes_class
	platform/x86: dell-wmi-sysman: Fix class device unregistration
	platform/mellanox: mlxreg-lc: Fix logic error in power state check
	drm/bridge: aux-hpd-bridge: fix assignment of the of_node
	smb: client: fix warning when reconnecting channel
	net: usb: lan78xx: fix WARN in __netif_napi_del_locked on disconnect
	drm/i915/gt: Fix timeline left held on VMA alloc error
	drm/i915/gsc: mei interrupt top half should be in irq disabled context
	idpf: return 0 size for RSS key if not supported
	idpf: convert control queue mutex to a spinlock
	igc: disable L1.2 PCI-E link substate to avoid performance issue
	smb: client: set missing retry flag in smb2_writev_callback()
	smb: client: set missing retry flag in cifs_readv_callback()
	smb: client: set missing retry flag in cifs_writev_callback()
	netfs: Fix i_size updating
	lib: test_objagg: Set error message in check_expect_hints_stats()
	amd-xgbe: align CL37 AN sequence as per databook
	enic: fix incorrect MTU comparison in enic_change_mtu()
	rose: fix dangling neighbour pointers in rose_rt_device_down()
	nui: Fix dma_mapping_error() check
	net/sched: Always pass notifications when child class becomes empty
	amd-xgbe: do not double read link status
	smb: client: fix race condition in negotiate timeout by using more precise timing
	arm64: dts: rockchip: fix internal USB hub instability on RK3399 Puma
	crypto: iaa - Remove dst_null support
	crypto: iaa - Do not clobber req->base.data
	spinlock: extend guard with spinlock_bh variants
	crypto: zynqmp-sha - Add locking
	kunit: qemu_configs: sparc: use Zilog console
	kunit: qemu_configs: sparc: Explicitly enable CONFIG_SPARC32=y
	kunit: qemu_configs: Disable faulting tests on 32-bit SPARC
	gfs2: Initialize gl_no_formal_ino earlier
	gfs2: Rename GIF_{DEFERRED -> DEFER}_DELETE
	gfs2: Rename dinode_demise to evict_behavior
	gfs2: Prevent inode creation race
	gfs2: Decode missing glock flags in tracepoints
	gfs2: Add GLF_PENDING_REPLY flag
	gfs2: Replace GIF_DEFER_DELETE with GLF_DEFER_DELETE
	gfs2: Move gfs2_dinode_dealloc
	gfs2: Move GIF_ALLOC_FAILED check out of gfs2_ea_dealloc
	gfs2: deallocate inodes in gfs2_create_inode
	btrfs: prepare btrfs_page_mkwrite() for large folios
	btrfs: fix wrong start offset for delalloc space release during mmap write
	sched/fair: Rename h_nr_running into h_nr_queued
	sched/fair: Add new cfs_rq.h_nr_runnable
	sched/fair: Fixup wake_up_sync() vs DELAYED_DEQUEUE
	gfs2: Move gfs2_trans_add_databufs
	gfs2: Don't start unnecessary transactions during log flush
	ASoC: tas2764: Extend driver to SN012776
	ASoC: tas2764: Reinit cache on part reset
	ACPI: thermal: Fix stale comment regarding trip points
	ACPI: thermal: Execute _SCP before reading trip points
	bonding: Mark active offloaded xfrm_states
	wifi: ath12k: fix skb_ext_desc leak in ath12k_dp_tx() error path
	wifi: ath12k: Handle error cases during extended skb allocation
	wifi: ath12k: fix wrong handling of CCMP256 and GCMP ciphers
	RDMA/rxe: Fix "trying to register non-static key in rxe_qp_do_cleanup" bug
	iommu: ipmmu-vmsa: avoid Wformat-security warning
	f2fs: decrease spare area for pinned files for zoned devices
	f2fs: zone: introduce first_zoned_segno in f2fs_sb_info
	f2fs: zone: fix to calculate first_zoned_segno correctly
	scsi: lpfc: Remove NLP_RELEASE_RPI flag from nodelist structure
	scsi: lpfc: Change lpfc_nodelist nlp_flag member into a bitmask
	scsi: lpfc: Avoid potential ndlp use-after-free in dev_loss_tmo_callbk
	hisi_acc_vfio_pci: bugfix cache write-back issue
	hisi_acc_vfio_pci: bugfix the problem of uninstalling driver
	bpf: use common instruction history across all states
	bpf: Do not include stack ptr register in precision backtracking bookkeeping
	arm64: dts: qcom: sm8650: change labels to lower-case
	arm64: dts: qcom: sm8650: Fix domain-idle-state for CPU2
	arm64: dts: renesas: Use interrupts-extended for Ethernet PHYs
	arm64: dts: renesas: Factor out White Hawk Single board support
	arm64: dts: renesas: white-hawk-single: Improve Ethernet TSN description
	arm64: dts: qcom: sm8650: add the missing l2 cache node
	ubsan: integer-overflow: depend on BROKEN to keep this out of CI
	remoteproc: k3: Call of_node_put(rmem_np) only once in three functions
	remoteproc: k3-r5: Add devm action to release reserved memory
	remoteproc: k3-r5: Use devm_kcalloc() helper
	remoteproc: k3-r5: Use devm_ioremap_wc() helper
	remoteproc: k3-r5: Use devm_rproc_add() helper
	remoteproc: k3-r5: Refactor sequential core power up/down operations
	netfs: Fix oops in write-retry from mis-resetting the subreq iterator
	mfd: exynos-lpass: Fix another error handling path in exynos_lpass_probe()
	drm/xe: Fix DSB buffer coherency
	drm/xe: Move DSB l2 flush to a more sensible place
	drm/xe: add interface to request physical alignment for buffer objects
	drm/xe: Allow bo mapping on multiple ggtts
	drm/xe: move DPT l2 flush to a more sensible place
	drm/xe: Replace double space with single space after comma
	drm/xe/guc: Dead CT helper
	drm/xe/guc: Explicitly exit CT safe mode on unwind
	selinux: change security_compute_sid to return the ssid or tsid on match
	drm/simpledrm: Do not upcast in release helpers
	drm/amdgpu: VCN v5_0_1 to prevent FW checking RB during DPG pause
	drm/i915/dp_mst: Work around Thunderbolt sink disconnect after SINK_COUNT_ESI read
	drm/amdgpu: add kicker fws loading for gfx11/smu13/psp13
	drm/amd/display: Add more checks for DSC / HUBP ONO guarantees
	arm64: dts: qcom: x1e80100-crd: mark l12b and l15b always-on
	drm/amdgpu/mes: add missing locking in helper functions
	sched_ext: Make scx_group_set_weight() always update tg->scx.weight
	scsi: lpfc: Restore clearing of NLP_UNREG_INP in ndlp->nlp_flag
	drm/msm: Fix a fence leak in submit error path
	drm/msm: Fix another leak in the submit error path
	ALSA: sb: Don't allow changing the DMA mode during operations
	ALSA: sb: Force to disable DMAs once when DMA mode is changed
	ata: libata-acpi: Do not assume 40 wire cable if no devices are enabled
	ata: pata_cs5536: fix build on 32-bit UML
	ASoC: amd: yc: Add quirk for MSI Bravo 17 D7VF internal mic
	platform/x86/amd/pmc: Add PCSpecialist Lafite Pro V 14M to 8042 quirks list
	genirq/irq_sim: Initialize work context pointers properly
	powerpc: Fix struct termio related ioctl macros
	ASoC: amd: yc: update quirk data for HP Victus
	regulator: fan53555: add enable_time support and soft-start times
	scsi: target: Fix NULL pointer dereference in core_scsi3_decode_spec_i_port()
	aoe: defer rexmit timer downdev work to workqueue
	wifi: mac80211: drop invalid source address OCB frames
	wifi: ath6kl: remove WARN on bad firmware input
	ACPICA: Refuse to evaluate a method if arguments are missing
	mtd: spinand: fix memory leak of ECC engine conf
	rcu: Return early if callback is not specified
	add a string-to-qstr constructor
	module: Provide EXPORT_SYMBOL_GPL_FOR_MODULES() helper
	fs: export anon_inode_make_secure_inode() and fix secretmem LSM bypass
	RDMA/mlx5: Fix cache entry update on dereg error
	IB/mlx5: Fix potential deadlock in MR deregistration
	drm/xe/bmg: Update Wa_22019338487
	drm/xe: Allow dropping kunit dependency as built-in
	NFSv4/flexfiles: Fix handling of NFS level errors in I/O
	usb: xhci: Skip xhci_reset in xhci_resume if xhci is being removed
	Revert "usb: xhci: Implement xhci_handshake_check_state() helper"
	usb: xhci: quirk for data loss in ISOC transfers
	xhci: dbctty: disable ECHO flag by default
	xhci: dbc: Flush queued requests before stopping dbc
	xhci: Disable stream for xHC controller with XHCI_BROKEN_STREAMS
	Input: xpad - support Acer NGR 200 Controller
	Input: iqs7222 - explicitly define number of external channels
	usb: cdnsp: do not disable slot for disabled slot
	usb: cdnsp: Fix issue with CV Bad Descriptor test
	usb: dwc3: Abort suspend on soft disconnect failure
	usb: chipidea: udc: disconnect/reconnect from host when do suspend/resume
	usb: acpi: fix device link removal
	smb: client: fix readdir returning wrong type with POSIX extensions
	cifs: all initializations for tcon should happen in tcon_info_alloc
	dma-buf: fix timeout handling in dma_resv_wait_timeout v2
	i2c/designware: Fix an initialization issue
	Logitech C-270 even more broken
	optee: ffa: fix sleep in atomic context
	iommu/rockchip: prevent iommus dead loop when two masters share one IOMMU
	powercap: intel_rapl: Do not change CLAMPING bit if ENABLE bit cannot be changed
	riscv: cpu_ops_sbi: Use static array for boot_data
	platform/x86: think-lmi: Create ksets consecutively
	platform/x86: think-lmi: Fix kobject cleanup
	platform/x86: think-lmi: Fix sysfs group cleanup
	usb: typec: displayport: Fix potential deadlock
	powerpc/kernel: Fix ppc_save_regs inclusion in build
	mm/vmalloc: fix data race in show_numa_info()
	mm: userfaultfd: fix race of userfaultfd_move and swap cache
	x86/bugs: Rename MDS machinery to something more generic
	x86/bugs: Add a Transient Scheduler Attacks mitigation
	KVM: SVM: Advertise TSA CPUID bits to guests
	x86/microcode/AMD: Add TSA microcode SHAs
	x86/process: Move the buffer clearing before MONITOR
	Linux 6.12.37

Change-Id: If1d8d0f83e11df1540bebaf0fb136fe340f25dcb
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -523,6 +523,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
 		/sys/devices/system/cpu/vulnerabilities/spectre_v1
 		/sys/devices/system/cpu/vulnerabilities/spectre_v2
 		/sys/devices/system/cpu/vulnerabilities/srbds
+		/sys/devices/system/cpu/vulnerabilities/tsa
 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
 Date:		January 2018
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
@@ -711,7 +711,7 @@ Description:	This file shows the thin provisioning type. This is one of
 		The file is read only.
 
-What:		/sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resourse_count
+What:		/sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resource_count
 Date:		February 2018
 Contact:	Stanislav Nijnikov <stanislav.nijnikov@wdc.com>
 Description:	This file shows the total physical memory resources. This is
@@ -157,9 +157,7 @@ This is achieved by using the otherwise unused and obsolete VERW instruction in
 combination with a microcode update. The microcode clears the affected CPU
 buffers when the VERW instruction is executed.
 
-Kernel reuses the MDS function to invoke the buffer clearing:
-
-	mds_clear_cpu_buffers()
+Kernel does the buffer clearing with x86_clear_cpu_buffers().
 
 On MDS affected CPUs, the kernel already invokes CPU buffer clear on
 kernel/userspace, hypervisor/guest and C-state (idle) transitions. No
@@ -7054,6 +7054,19 @@
 			having this key zero'ed is acceptable. E.g. in testing
 			scenarios.
 
+	tsa=		[X86] Control mitigation for Transient Scheduler
+			Attacks on AMD CPUs. Search the following in your
+			favourite search engine for more details:
+
+			"Technical guidance for mitigating transient scheduler
+			attacks".
+
+			off		- disable the mitigation
+			on		- enable the mitigation (default)
+			user		- mitigate only user/kernel transitions
+			vm		- mitigate only guest/host transitions
+
+
 	tsc=		Disable clocksource stability checks for TSC.
 			Format: <string>
 			[x86] reliable: mark tsc clocksource as reliable, this
@@ -93,7 +93,7 @@ enters a C-state.
 
 The kernel provides a function to invoke the buffer clearing:
 
-	mds_clear_cpu_buffers()
+	x86_clear_cpu_buffers()
 
 Also macro CLEAR_CPU_BUFFERS can be used in ASM late in exit-to-user path.
 Other than CFLAGS.ZF, this macro doesn't clobber any registers.
@@ -185,9 +185,9 @@ Mitigation points
    idle clearing would be a window dressing exercise and is therefore not
    activated.
 
-   The invocation is controlled by the static key mds_idle_clear which is
-   switched depending on the chosen mitigation mode and the SMT state of
-   the system.
+   The invocation is controlled by the static key cpu_buf_idle_clear which is
+   switched depending on the chosen mitigation mode and the SMT state of the
+   system.
 
    The buffer clear is only invoked before entering the C-State to prevent
    that stale data from the idling CPU from spilling to the Hyper-Thread
@@ -28,6 +28,9 @@ kernel. As of today, modules that make use of symbols exported into namespaces,
 are required to import the namespace. Otherwise the kernel will, depending on
 its configuration, reject loading the module or warn about a missing import.
 
+Additionally, it is possible to put symbols into a module namespace, strictly
+limiting which modules are allowed to use these symbols.
+
 2. How to define Symbol Namespaces
 ==================================
 
@@ -84,6 +87,22 @@ unit as preprocessor statement. The above example would then read::
 within the corresponding compilation unit before any EXPORT_SYMBOL macro is
 used.
 
+2.3 Using the EXPORT_SYMBOL_GPL_FOR_MODULES() macro
+===================================================
+
+Symbols exported using this macro are put into a module namespace. This
+namespace cannot be imported.
+
+The macro takes a comma separated list of module names, allowing only those
+modules to access this symbol. Simple tail-globs are supported.
+
+For example:
+
+  EXPORT_SYMBOL_GPL_FOR_MODULES(preempt_notifier_inc, "kvm,kvm-*")
+
+will limit usage of this symbol to modules whose name matches the given
+patterns.
+
 3. How to use Symbols exported in Namespaces
 ============================================
 
@@ -155,3 +174,6 @@ in-tree modules::
 You can also run nsdeps for external module builds. A typical usage is::
 
 	$ make -C <path_to_kernel_src> M=$PWD nsdeps
+
+Note: it will happily generate an import statement for the module namespace;
+which will not work and generates build and runtime failures.
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 12
-SUBLEVEL = 36
+SUBLEVEL = 37
 EXTRAVERSION =
 NAME = Baby Opossum Posse
@@ -71,7 +71,7 @@
  */
 &port00 {
 	bus-range = <1 1>;
-	wifi0: network@0,0 {
+	wifi0: wifi@0,0 {
 		compatible = "pci14e4,4425";
 		reg = <0x10000 0x0 0x0 0x0 0x0>;
 		/* To be filled by the loader */
@@ -68,18 +68,18 @@
 		#address-cells = <2>;
 		#size-cells = <0>;
 
-		CPU0: cpu@0 {
+		cpu0: cpu@0 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a520";
 			reg = <0 0>;
 
 			clocks = <&cpufreq_hw 0>;
 
-			power-domains = <&CPU_PD0>;
+			power-domains = <&cpu_pd0>;
 			power-domain-names = "psci";
 
 			enable-method = "psci";
-			next-level-cache = <&L2_0>;
+			next-level-cache = <&l2_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 
@@ -87,13 +87,13 @@
 
 			#cooling-cells = <2>;
 
-			L2_0: l2-cache {
+			l2_0: l2-cache {
 				compatible = "cache";
 				cache-level = <2>;
 				cache-unified;
-				next-level-cache = <&L3_0>;
+				next-level-cache = <&l3_0>;
 
-				L3_0: l3-cache {
+				l3_0: l3-cache {
 					compatible = "cache";
 					cache-level = <3>;
 					cache-unified;
@@ -101,18 +101,18 @@
 			};
 		};
 
-		CPU1: cpu@100 {
+		cpu1: cpu@100 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a520";
 			reg = <0 0x100>;
 
 			clocks = <&cpufreq_hw 0>;
 
-			power-domains = <&CPU_PD1>;
+			power-domains = <&cpu_pd1>;
 			power-domain-names = "psci";
 
 			enable-method = "psci";
-			next-level-cache = <&L2_0>;
+			next-level-cache = <&l2_0>;
 			capacity-dmips-mhz = <1024>;
 			dynamic-power-coefficient = <100>;
 
@@ -121,18 +121,18 @@
 			#cooling-cells = <2>;
 		};
 
-		CPU2: cpu@200 {
+		cpu2: cpu@200 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a720";
 			reg = <0 0x200>;
 
 			clocks = <&cpufreq_hw 3>;
 
-			power-domains = <&CPU_PD2>;
+			power-domains = <&cpu_pd2>;
 			power-domain-names = "psci";
 
 			enable-method = "psci";
-			next-level-cache = <&L2_200>;
+			next-level-cache = <&l2_200>;
 			capacity-dmips-mhz = <1792>;
 			dynamic-power-coefficient = <238>;
 
@@ -140,46 +140,53 @@
 
 			#cooling-cells = <2>;
 
-			L2_200: l2-cache {
+			l2_200: l2-cache {
 				compatible = "cache";
 				cache-level = <2>;
 				cache-unified;
-				next-level-cache = <&L3_0>;
+				next-level-cache = <&l3_0>;
 			};
 		};
 
-		CPU3: cpu@300 {
+		cpu3: cpu@300 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a720";
 			reg = <0 0x300>;
 
 			clocks = <&cpufreq_hw 3>;
 
-			power-domains = <&CPU_PD3>;
+			power-domains = <&cpu_pd3>;
 			power-domain-names = "psci";
 
 			enable-method = "psci";
-			next-level-cache = <&L2_200>;
+			next-level-cache = <&l2_300>;
 			capacity-dmips-mhz = <1792>;
 			dynamic-power-coefficient = <238>;
 
 			qcom,freq-domain = <&cpufreq_hw 3>;
 
 			#cooling-cells = <2>;
 
+			l2_300: l2-cache {
+				compatible = "cache";
+				cache-level = <2>;
+				cache-unified;
+				next-level-cache = <&l3_0>;
+			};
 		};
 
-		CPU4: cpu@400 {
+		cpu4: cpu@400 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a720";
 			reg = <0 0x400>;
 
 			clocks = <&cpufreq_hw 3>;
 
-			power-domains = <&CPU_PD4>;
+			power-domains = <&cpu_pd4>;
 			power-domain-names = "psci";
 
 			enable-method = "psci";
-			next-level-cache = <&L2_400>;
+			next-level-cache = <&l2_400>;
 			capacity-dmips-mhz = <1792>;
 			dynamic-power-coefficient = <238>;
 
@@ -187,26 +194,26 @@
 
 			#cooling-cells = <2>;
 
-			L2_400: l2-cache {
+			l2_400: l2-cache {
 				compatible = "cache";
 				cache-level = <2>;
 				cache-unified;
-				next-level-cache = <&L3_0>;
+				next-level-cache = <&l3_0>;
 			};
 		};
 
-		CPU5: cpu@500 {
+		cpu5: cpu@500 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a720";
 			reg = <0 0x500>;
 
 			clocks = <&cpufreq_hw 1>;
 
-			power-domains = <&CPU_PD5>;
+			power-domains = <&cpu_pd5>;
 			power-domain-names = "psci";
 
 			enable-method = "psci";
-			next-level-cache = <&L2_500>;
+			next-level-cache = <&l2_500>;
 			capacity-dmips-mhz = <1792>;
 			dynamic-power-coefficient = <238>;
 
@@ -214,26 +221,26 @@
 
 			#cooling-cells = <2>;
 
-			L2_500: l2-cache {
+			l2_500: l2-cache {
 				compatible = "cache";
 				cache-level = <2>;
 				cache-unified;
-				next-level-cache = <&L3_0>;
+				next-level-cache = <&l3_0>;
 			};
 		};
 
-		CPU6: cpu@600 {
+		cpu6: cpu@600 {
 			device_type = "cpu";
 			compatible = "arm,cortex-a720";
 			reg = <0 0x600>;
 
 			clocks = <&cpufreq_hw 1>;
 
-			power-domains = <&CPU_PD6>;
+			power-domains = <&cpu_pd6>;
 			power-domain-names = "psci";
 
 			enable-method = "psci";
-			next-level-cache = <&L2_600>;
+			next-level-cache = <&l2_600>;
 			capacity-dmips-mhz = <1792>;
 			dynamic-power-coefficient = <238>;
 
@@ -241,26 +248,26 @@
 
 			#cooling-cells = <2>;
 
-			L2_600: l2-cache {
+			l2_600: l2-cache {
 				compatible = "cache";
 				cache-level = <2>;
 				cache-unified;
-				next-level-cache = <&L3_0>;
+				next-level-cache = <&l3_0>;
 			};
 		};
 
-		CPU7: cpu@700 {
+		cpu7: cpu@700 {
 			device_type = "cpu";
 			compatible = "arm,cortex-x4";
 			reg = <0 0x700>;
 
 			clocks = <&cpufreq_hw 2>;
 
-			power-domains = <&CPU_PD7>;
+			power-domains = <&cpu_pd7>;
 			power-domain-names = "psci";
 
 			enable-method = "psci";
-			next-level-cache = <&L2_700>;
+			next-level-cache = <&l2_700>;
 			capacity-dmips-mhz = <1894>;
 			dynamic-power-coefficient = <588>;
 
@@ -268,46 +275,46 @@
 
 			#cooling-cells = <2>;
 
-			L2_700: l2-cache {
+			l2_700: l2-cache {
 				compatible = "cache";
 				cache-level = <2>;
 				cache-unified;
-				next-level-cache = <&L3_0>;
+				next-level-cache = <&l3_0>;
 			};
 		};
 
 		cpu-map {
 			cluster0 {
 				core0 {
-					cpu = <&CPU0>;
+					cpu = <&cpu0>;
 				};
 
 				core1 {
-					cpu = <&CPU1>;
+					cpu = <&cpu1>;
 				};
 
 				core2 {
-					cpu = <&CPU2>;
+					cpu = <&cpu2>;
 				};
 
 				core3 {
-					cpu = <&CPU3>;
+					cpu = <&cpu3>;
 				};
 
 				core4 {
-					cpu = <&CPU4>;
+					cpu = <&cpu4>;
 				};
 
 				core5 {
-					cpu = <&CPU5>;
+					cpu = <&cpu5>;
 				};
 
 				core6 {
-					cpu = <&CPU6>;
+					cpu = <&cpu6>;
 				};
 
 				core7 {
-					cpu = <&CPU7>;
+					cpu = <&cpu7>;
 				};
 			};
 		};
@@ -315,7 +322,7 @@
 	idle-states {
 		entry-method = "psci";
 
-		SILVER_CPU_SLEEP_0: cpu-sleep-0-0 {
+		silver_cpu_sleep_0: cpu-sleep-0-0 {
 			compatible = "arm,idle-state";
 			idle-state-name = "silver-rail-power-collapse";
 			arm,psci-suspend-param = <0x40000004>;
@@ -325,7 +332,7 @@
 			local-timer-stop;
 		};
 
-		GOLD_CPU_SLEEP_0: cpu-sleep-1-0 {
+		gold_cpu_sleep_0: cpu-sleep-1-0 {
 			compatible = "arm,idle-state";
 			idle-state-name = "gold-rail-power-collapse";
 			arm,psci-suspend-param = <0x40000004>;
@@ -335,7 +342,7 @@
 			local-timer-stop;
 		};
 
-		GOLD_PLUS_CPU_SLEEP_0: cpu-sleep-2-0 {
+		gold_plus_cpu_sleep_0: cpu-sleep-2-0 {
 			compatible = "arm,idle-state";
 			idle-state-name = "gold-plus-rail-power-collapse";
 			arm,psci-suspend-param = <0x40000004>;
@@ -347,7 +354,7 @@
 	};
 
 	domain-idle-states {
-		CLUSTER_SLEEP_0: cluster-sleep-0 {
+		cluster_sleep_0: cluster-sleep-0 {
 			compatible = "domain-idle-state";
 			arm,psci-suspend-param = <0x41000044>;
 			entry-latency-us = <750>;
@@ -355,7 +362,7 @@
 			min-residency-us = <9144>;
 		};
 
-		CLUSTER_SLEEP_1: cluster-sleep-1 {
+		cluster_sleep_1: cluster-sleep-1 {
 			compatible = "domain-idle-state";
 			arm,psci-suspend-param = <0x4100c344>;
 			entry-latency-us = <2800>;
@@ -411,58 +418,58 @@
 		compatible = "arm,psci-1.0";
 		method = "smc";
 
-		CPU_PD0: power-domain-cpu0 {
+		cpu_pd0: power-domain-cpu0 {
 			#power-domain-cells = <0>;
-			power-domains = <&CLUSTER_PD>;
-			domain-idle-states = <&SILVER_CPU_SLEEP_0>;
+			power-domains = <&cluster_pd>;
+			domain-idle-states = <&silver_cpu_sleep_0>;
 		};
 
-		CPU_PD1: power-domain-cpu1 {
+		cpu_pd1: power-domain-cpu1 {
 			#power-domain-cells = <0>;
-			power-domains = <&CLUSTER_PD>;
-			domain-idle-states = <&SILVER_CPU_SLEEP_0>;
+			power-domains = <&cluster_pd>;
+			domain-idle-states = <&silver_cpu_sleep_0>;
 		};
 
-		CPU_PD2: power-domain-cpu2 {
+		cpu_pd2: power-domain-cpu2 {
 			#power-domain-cells = <0>;
-			power-domains = <&CLUSTER_PD>;
-			domain-idle-states = <&SILVER_CPU_SLEEP_0>;
+			power-domains = <&cluster_pd>;
+			domain-idle-states = <&gold_cpu_sleep_0>;
 		};
 
-		CPU_PD3: power-domain-cpu3 {
+		cpu_pd3: power-domain-cpu3 {
 			#power-domain-cells = <0>;
-			power-domains = <&CLUSTER_PD>;
-			domain-idle-states = <&GOLD_CPU_SLEEP_0>;
+			power-domains = <&cluster_pd>;
+			domain-idle-states = <&gold_cpu_sleep_0>;
 		};
 
-		CPU_PD4: power-domain-cpu4 {
+		cpu_pd4: power-domain-cpu4 {
 			#power-domain-cells = <0>;
-			power-domains = <&CLUSTER_PD>;
-			domain-idle-states = <&GOLD_CPU_SLEEP_0>;
+			power-domains = <&cluster_pd>;
+			domain-idle-states = <&gold_cpu_sleep_0>;
 		};
 
-		CPU_PD5: power-domain-cpu5 {
+		cpu_pd5: power-domain-cpu5 {
 			#power-domain-cells = <0>;
-			power-domains = <&CLUSTER_PD>;
-			domain-idle-states = <&GOLD_CPU_SLEEP_0>;
+			power-domains = <&cluster_pd>;
+			domain-idle-states = <&gold_cpu_sleep_0>;
 		};
 
-		CPU_PD6: power-domain-cpu6 {
+		cpu_pd6: power-domain-cpu6 {
 			#power-domain-cells = <0>;
-			power-domains = <&CLUSTER_PD>;
-			domain-idle-states = <&GOLD_CPU_SLEEP_0>;
+			power-domains = <&cluster_pd>;
+			domain-idle-states = <&gold_cpu_sleep_0>;
 		};
 
-		CPU_PD7: power-domain-cpu7 {
+		cpu_pd7: power-domain-cpu7 {
 			#power-domain-cells = <0>;
-			power-domains = <&CLUSTER_PD>;
-			domain-idle-states = <&GOLD_PLUS_CPU_SLEEP_0>;
+			power-domains = <&cluster_pd>;
+			domain-idle-states = <&gold_plus_cpu_sleep_0>;
 		};
 
-		CLUSTER_PD: power-domain-cluster {
+		cluster_pd: power-domain-cluster {
 			#power-domain-cells = <0>;
-			domain-idle-states = <&CLUSTER_SLEEP_0>,
-					     <&CLUSTER_SLEEP_1>;
+			domain-idle-states = <&cluster_sleep_0>,
+					     <&cluster_sleep_1>;
 		};
 	};
 
@@ -5233,7 +5240,7 @@
 	<GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>,
 	<GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;

-	power-domains = <&CLUSTER_PD>;
+	power-domains = <&cluster_pd>;

 	qcom,tcs-offset = <0xd00>;
 	qcom,drv-id = <2>;
@@ -419,6 +419,7 @@
 	regulator-min-microvolt = <1200000>;
 	regulator-max-microvolt = <1200000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+	regulator-always-on;
 };

 vreg_l13b_3p0: ldo13 {

@@ -440,6 +441,7 @@
 	regulator-min-microvolt = <1800000>;
 	regulator-max-microvolt = <1800000>;
 	regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+	regulator-always-on;
 };

 vreg_l16b_2p9: ldo16 {
@@ -62,8 +62,7 @@
 	compatible = "ethernet-phy-id0022.1640",
		     "ethernet-phy-ieee802.3-c22";
 	reg = <0>;
-	interrupt-parent = <&gpio2>;
-	interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio2 11 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio2 10 GPIO_ACTIVE_LOW>;
 };
 };

@@ -25,8 +25,7 @@
 	compatible = "ethernet-phy-id001c.c915",
		     "ethernet-phy-ieee802.3-c22";
 	reg = <0>;
-	interrupt-parent = <&gpio2>;
-	interrupts = <21 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio2 21 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio1 20 GPIO_ACTIVE_LOW>;
 };
 };

@@ -166,8 +166,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio4>;
-	interrupts = <23 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio4 23 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio4 22 GPIO_ACTIVE_LOW>;
 };
 };

@@ -247,8 +247,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio5>;
-	interrupts = <19 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio5 19 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio5 18 GPIO_ACTIVE_LOW>;
 	/*
	 * TX clock internal delay mode is required for reliable

@@ -314,8 +314,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio2>;
-	interrupts = <21 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio2 21 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio1 20 GPIO_ACTIVE_LOW>;
 	/*
	 * TX clock internal delay mode is required for reliable

@@ -27,8 +27,7 @@
 	compatible = "ethernet-phy-id001c.c915",
		     "ethernet-phy-ieee802.3-c22";
 	reg = <0>;
-	interrupt-parent = <&gpio2>;
-	interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio2 11 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio2 10 GPIO_ACTIVE_LOW>;
 };
 };

@@ -111,8 +111,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio1>;
-	interrupts = <17 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio1 17 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio1 16 GPIO_ACTIVE_LOW>;
 };
 };

@@ -117,8 +117,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio1>;
-	interrupts = <17 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio1 17 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio1 16 GPIO_ACTIVE_LOW>;
 };
 };

@@ -124,8 +124,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio4>;
-	interrupts = <23 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio4 23 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio4 22 GPIO_ACTIVE_LOW>;
 };
 };

@@ -31,8 +31,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio4>;
-	interrupts = <16 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio4 16 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>;
 };
 };

@@ -60,8 +60,7 @@
 u101: ethernet-phy@1 {
 	reg = <1>;
 	compatible = "ethernet-phy-ieee802.3-c45";
-	interrupt-parent = <&gpio3>;
-	interrupts = <10 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio3 10 IRQ_TYPE_LEVEL_LOW>;
 };
 };
 };
@@ -78,8 +77,7 @@
 u201: ethernet-phy@2 {
 	reg = <2>;
 	compatible = "ethernet-phy-ieee802.3-c45";
-	interrupt-parent = <&gpio3>;
-	interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio3 11 IRQ_TYPE_LEVEL_LOW>;
 };
 };
 };
@@ -96,8 +94,7 @@
 u301: ethernet-phy@3 {
 	reg = <3>;
 	compatible = "ethernet-phy-ieee802.3-c45";
-	interrupt-parent = <&gpio3>;
-	interrupts = <9 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio3 9 IRQ_TYPE_LEVEL_LOW>;
 };
 };
 };

@@ -197,8 +197,7 @@
 ic99: ethernet-phy@1 {
 	reg = <1>;
 	compatible = "ethernet-phy-ieee802.3-c45";
-	interrupt-parent = <&gpio3>;
-	interrupts = <10 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio3 10 IRQ_TYPE_LEVEL_LOW>;
 };
 };
 };
@@ -216,8 +215,7 @@
 ic102: ethernet-phy@2 {
 	reg = <2>;
 	compatible = "ethernet-phy-ieee802.3-c45";
-	interrupt-parent = <&gpio3>;
-	interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio3 11 IRQ_TYPE_LEVEL_LOW>;
 };
 };
 };
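The recurring two-lines-to-one conversion in the hunks above is purely syntactic: `interrupts-extended` folds the `interrupt-parent` phandle into each interrupt specifier, so the separate parent property becomes redundant. A minimal sketch of the equivalence (node name and controller are illustrative, not from any one board file here):

```dts
ethernet-phy@0 {
	reg = <0>;

	/* Old style: parent phandle and specifier as separate properties */
	/* interrupt-parent = <&gpio2>;                 */
	/* interrupts = <11 IRQ_TYPE_LEVEL_LOW>;        */

	/* Equivalent single property: <parent specifier...> per entry */
	interrupts-extended = <&gpio2 11 IRQ_TYPE_LEVEL_LOW>;
};
```

The extended form also allows different entries of one node to reference different interrupt controllers, which the split form cannot express.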
@@ -7,71 +7,10 @@

 /dts-v1/;
 #include "r8a779g2.dtsi"
-#include "white-hawk-cpu-common.dtsi"
-#include "white-hawk-common.dtsi"
+#include "white-hawk-single.dtsi"

 / {
 	model = "Renesas White Hawk Single board based on r8a779g2";
 	compatible = "renesas,white-hawk-single", "renesas,r8a779g2",
		     "renesas,r8a779g0";
 };
-
-&hscif0 {
-	uart-has-rtscts;
-};
-
-&hscif0_pins {
-	groups = "hscif0_data", "hscif0_ctrl";
-	function = "hscif0";
-};
-
-&pfc {
-	tsn0_pins: tsn0 {
-		mux {
-			groups = "tsn0_link", "tsn0_mdio", "tsn0_rgmii",
-				 "tsn0_txcrefclk";
-			function = "tsn0";
-		};
-
-		link {
-			groups = "tsn0_link";
-			bias-disable;
-		};
-
-		mdio {
-			groups = "tsn0_mdio";
-			drive-strength = <24>;
-			bias-disable;
-		};
-
-		rgmii {
-			groups = "tsn0_rgmii";
-			drive-strength = <24>;
-			bias-disable;
-		};
-	};
-};
-
-&tsn0 {
-	pinctrl-0 = <&tsn0_pins>;
-	pinctrl-names = "default";
-	phy-mode = "rgmii";
-	phy-handle = <&phy3>;
-	status = "okay";
-
-	mdio {
-		#address-cells = <1>;
-		#size-cells = <0>;
-
-		reset-gpios = <&gpio1 23 GPIO_ACTIVE_LOW>;
-		reset-post-delay-us = <4000>;
-
-		phy3: ethernet-phy@0 {
-			compatible = "ethernet-phy-id002b.0980",
-				     "ethernet-phy-ieee802.3-c22";
-			reg = <0>;
-			interrupt-parent = <&gpio4>;
-			interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
-		};
-	};
-};
@@ -175,8 +175,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio7>;
-	interrupts = <5 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio7 5 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio7 10 GPIO_ACTIVE_LOW>;
 };
 };

@@ -102,8 +102,7 @@
 	compatible = "ethernet-phy-id0022.1640",
		     "ethernet-phy-ieee802.3-c22";
 	reg = <7>;
-	interrupt-parent = <&irqc>;
-	interrupts = <RZG2L_IRQ2 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&irqc RZG2L_IRQ2 IRQ_TYPE_LEVEL_LOW>;
 	rxc-skew-psec = <2400>;
 	txc-skew-psec = <2400>;
 	rxdv-skew-psec = <0>;
@@ -130,8 +129,7 @@
 	compatible = "ethernet-phy-id0022.1640",
		     "ethernet-phy-ieee802.3-c22";
 	reg = <7>;
-	interrupt-parent = <&irqc>;
-	interrupts = <RZG2L_IRQ3 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&irqc RZG2L_IRQ3 IRQ_TYPE_LEVEL_LOW>;
 	rxc-skew-psec = <2400>;
 	txc-skew-psec = <2400>;
 	rxdv-skew-psec = <0>;

@@ -82,8 +82,7 @@
 	compatible = "ethernet-phy-id0022.1640",
		     "ethernet-phy-ieee802.3-c22";
 	reg = <7>;
-	interrupt-parent = <&irqc>;
-	interrupts = <RZG2L_IRQ0 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&irqc RZG2L_IRQ0 IRQ_TYPE_LEVEL_LOW>;
 	rxc-skew-psec = <2400>;
 	txc-skew-psec = <2400>;
 	rxdv-skew-psec = <0>;

@@ -78,8 +78,7 @@
 	compatible = "ethernet-phy-id0022.1640",
		     "ethernet-phy-ieee802.3-c22";
 	reg = <7>;
-	interrupt-parent = <&irqc>;
-	interrupts = <RZG2L_IRQ2 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&irqc RZG2L_IRQ2 IRQ_TYPE_LEVEL_LOW>;
 	rxc-skew-psec = <2400>;
 	txc-skew-psec = <2400>;
 	rxdv-skew-psec = <0>;
@@ -107,8 +106,7 @@
 	compatible = "ethernet-phy-id0022.1640",
		     "ethernet-phy-ieee802.3-c22";
 	reg = <7>;
-	interrupt-parent = <&irqc>;
-	interrupts = <RZG2L_IRQ7 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&irqc RZG2L_IRQ7 IRQ_TYPE_LEVEL_LOW>;
 	rxc-skew-psec = <2400>;
 	txc-skew-psec = <2400>;
 	rxdv-skew-psec = <0>;

@@ -98,8 +98,7 @@

 phy0: ethernet-phy@7 {
 	reg = <7>;
-	interrupt-parent = <&pinctrl>;
-	interrupts = <RZG2L_GPIO(12, 0) IRQ_TYPE_EDGE_FALLING>;
+	interrupts-extended = <&pinctrl RZG2L_GPIO(12, 0) IRQ_TYPE_EDGE_FALLING>;
 	rxc-skew-psec = <0>;
 	txc-skew-psec = <0>;
 	rxdv-skew-psec = <0>;
@@ -124,8 +123,7 @@

 phy1: ethernet-phy@7 {
 	reg = <7>;
-	interrupt-parent = <&pinctrl>;
-	interrupts = <RZG2L_GPIO(12, 1) IRQ_TYPE_EDGE_FALLING>;
+	interrupts-extended = <&pinctrl RZG2L_GPIO(12, 1) IRQ_TYPE_EDGE_FALLING>;
 	rxc-skew-psec = <0>;
 	txc-skew-psec = <0>;
 	rxdv-skew-psec = <0>;

@@ -353,8 +353,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio2>;
-	interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio2 11 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio2 10 GPIO_ACTIVE_LOW>;
 };
 };

@@ -150,8 +150,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio2>;
-	interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio2 11 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio2 10 GPIO_ACTIVE_LOW>;
 };
 };

@@ -167,8 +167,7 @@
		     "ethernet-phy-ieee802.3-c22";
 	rxc-skew-ps = <1500>;
 	reg = <0>;
-	interrupt-parent = <&gpio7>;
-	interrupts = <5 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio7 5 IRQ_TYPE_LEVEL_LOW>;
 	reset-gpios = <&gpio7 10 GPIO_ACTIVE_LOW>;
 };
 };

@@ -29,8 +29,7 @@
 avb1_phy: ethernet-phy@0 {
 	compatible = "ethernet-phy-ieee802.3-c45";
 	reg = <0>;
-	interrupt-parent = <&gpio6>;
-	interrupts = <3 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio6 3 IRQ_TYPE_LEVEL_LOW>;
 };
 };
 };
@@ -51,8 +50,7 @@
 avb2_phy: ethernet-phy@0 {
 	compatible = "ethernet-phy-ieee802.3-c45";
 	reg = <0>;
-	interrupt-parent = <&gpio5>;
-	interrupts = <4 IRQ_TYPE_LEVEL_LOW>;
+	interrupts-extended = <&gpio5 4 IRQ_TYPE_LEVEL_LOW>;
 };
 };
 };
77	arch/arm64/boot/dts/renesas/white-hawk-single.dtsi	Normal file
@@ -0,0 +1,77 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+/*
+ * Device Tree Source for the White Hawk Single board
+ *
+ * Copyright (C) 2023-2024 Glider bv
+ */
+
+#include "white-hawk-cpu-common.dtsi"
+#include "white-hawk-common.dtsi"
+
+/ {
+	model = "Renesas White Hawk Single board";
+	compatible = "renesas,white-hawk-single";
+
+	aliases {
+		ethernet3 = &tsn0;
+	};
+};
+
+&hscif0 {
+	uart-has-rtscts;
+};
+
+&hscif0_pins {
+	groups = "hscif0_data", "hscif0_ctrl";
+	function = "hscif0";
+};
+
+&pfc {
+	tsn0_pins: tsn0 {
+		mux {
+			groups = "tsn0_link", "tsn0_mdio", "tsn0_rgmii",
+				 "tsn0_txcrefclk";
+			function = "tsn0";
+		};
+
+		link {
+			groups = "tsn0_link";
+			bias-disable;
+		};
+
+		mdio {
+			groups = "tsn0_mdio";
+			drive-strength = <24>;
+			bias-disable;
+		};
+
+		rgmii {
+			groups = "tsn0_rgmii";
+			drive-strength = <24>;
+			bias-disable;
+		};
+	};
+};
+
+&tsn0 {
+	pinctrl-0 = <&tsn0_pins>;
+	pinctrl-names = "default";
+	phy-mode = "rgmii";
+	phy-handle = <&tsn0_phy>;
+	status = "okay";
+
+	mdio {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		reset-gpios = <&gpio1 23 GPIO_ACTIVE_LOW>;
+		reset-post-delay-us = <4000>;
+
+		tsn0_phy: ethernet-phy@0 {
+			compatible = "ethernet-phy-id002b.0980",
+				     "ethernet-phy-ieee802.3-c22";
+			reg = <0>;
+			interrupts-extended = <&gpio4 3 IRQ_TYPE_LEVEL_LOW>;
+		};
+	};
+};
@@ -59,17 +59,7 @@
 	vin-supply = <&vcc5v0_sys>;
 };

-vcc5v0_host: vcc5v0-host-regulator {
-	compatible = "regulator-fixed";
-	gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;
-	pinctrl-names = "default";
-	pinctrl-0 = <&vcc5v0_host_en>;
-	regulator-name = "vcc5v0_host";
-	regulator-always-on;
-	vin-supply = <&vcc5v0_sys>;
-};
-
-vcc5v0_sys: vcc5v0-sys {
+vcc5v0_sys: regulator-vcc5v0-sys {
 	compatible = "regulator-fixed";
 	regulator-name = "vcc5v0_sys";
 	regulator-always-on;

@@ -509,10 +499,10 @@
 	};
 };

-usb2 {
-	vcc5v0_host_en: vcc5v0-host-en {
+usb {
+	cy3304_reset: cy3304-reset {
 		rockchip,pins =
-			<4 RK_PA3 RK_FUNC_GPIO &pcfg_pull_none>;
+			<4 RK_PA3 RK_FUNC_GPIO &pcfg_output_high>;
 	};
 };

@@ -579,7 +569,6 @@
 };

 u2phy1_host: host-port {
-	phy-supply = <&vcc5v0_host>;
 	status = "okay";
 };
 };
@@ -591,6 +580,29 @@
 &usbdrd_dwc3_1 {
 	status = "okay";
 	dr_mode = "host";
+	pinctrl-names = "default";
+	pinctrl-0 = <&cy3304_reset>;
+	#address-cells = <1>;
+	#size-cells = <0>;
+
+	hub_2_0: hub@1 {
+		compatible = "usb4b4,6502", "usb4b4,6506";
+		reg = <1>;
+		peer-hub = <&hub_3_0>;
+		reset-gpios = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>;
+		vdd-supply = <&vcc1v2_phy>;
+		vdd2-supply = <&vcc3v3_sys>;
+
+	};
+
+	hub_3_0: hub@2 {
+		compatible = "usb4b4,6500", "usb4b4,6504";
+		reg = <2>;
+		peer-hub = <&hub_2_0>;
+		reset-gpios = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>;
+		vdd-supply = <&vcc1v2_phy>;
+		vdd2-supply = <&vcc3v3_sys>;
+	};
 };

 &usb_host1_ehci {
@@ -23,10 +23,10 @@
 #define TCSETSW		_IOW('t', 21, struct termios)
 #define TCSETSF		_IOW('t', 22, struct termios)

-#define TCGETA		_IOR('t', 23, struct termio)
-#define TCSETA		_IOW('t', 24, struct termio)
-#define TCSETAW		_IOW('t', 25, struct termio)
-#define TCSETAF		_IOW('t', 28, struct termio)
+#define TCGETA		0x40147417 /* _IOR('t', 23, struct termio) */
+#define TCSETA		0x80147418 /* _IOW('t', 24, struct termio) */
+#define TCSETAW		0x80147419 /* _IOW('t', 25, struct termio) */
+#define TCSETAF		0x8014741c /* _IOW('t', 28, struct termio) */

 #define TCSBRK		_IO('t', 29)
 #define TCXONC		_IO('t', 30)
@@ -162,9 +162,7 @@ endif

 obj64-$(CONFIG_PPC_TRANSACTIONAL_MEM)	+= tm.o

-ifneq ($(CONFIG_XMON)$(CONFIG_KEXEC_CORE)$(CONFIG_PPC_BOOK3S),)
 obj-y				+= ppc_save_regs.o
-endif

 obj-$(CONFIG_EPAPR_PARAVIRT)	+= epapr_paravirt.o epapr_hcalls.o
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvm_emul.o
@@ -18,10 +18,10 @@ const struct cpu_operations cpu_ops_sbi;

 /*
  * Ordered booting via HSM brings one cpu at a time. However, cpu hotplug can
- * be invoked from multiple threads in parallel. Define a per cpu data
+ * be invoked from multiple threads in parallel. Define an array of boot data
  * to handle that.
  */
-static DEFINE_PER_CPU(struct sbi_hart_boot_data, boot_data);
+static struct sbi_hart_boot_data boot_data[NR_CPUS];

 static int sbi_hsm_hart_start(unsigned long hartid, unsigned long saddr,
			      unsigned long priv)
@@ -67,7 +67,7 @@ static int sbi_cpu_start(unsigned int cpuid, struct task_struct *tidle)
 	unsigned long boot_addr = __pa_symbol(secondary_start_sbi);
 	unsigned long hartid = cpuid_to_hartid_map(cpuid);
 	unsigned long hsm_data;
-	struct sbi_hart_boot_data *bdata = &per_cpu(boot_data, cpuid);
+	struct sbi_hart_boot_data *bdata = &boot_data[cpuid];

 	/* Make sure tidle is updated */
 	smp_mb();
@@ -105,6 +105,10 @@ static pci_ers_result_t zpci_event_do_error_state_clear(struct pci_dev *pdev,
 	struct zpci_dev *zdev = to_zpci(pdev);
 	int rc;

+	/* The underlying device may have been disabled by the event */
+	if (!zdev_enabled(zdev))
+		return PCI_ERS_RESULT_NEED_RESET;
+
 	pr_info("%s: Unblocking device access for examination\n", pci_name(pdev));
 	rc = zpci_reset_load_store_blocked(zdev);
 	if (rc) {

@@ -260,6 +264,8 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)
 	struct zpci_dev *zdev = get_zdev_by_fid(ccdf->fid);
 	struct pci_dev *pdev = NULL;
 	pci_ers_result_t ers_res;
+	u32 fh = 0;
+	int rc;

 	zpci_dbg(3, "err fid:%x, fh:%x, pec:%x\n",
		 ccdf->fid, ccdf->fh, ccdf->pec);
@@ -268,6 +274,15 @@ static void __zpci_event_error(struct zpci_ccdf_err *ccdf)

 	if (zdev) {
 		mutex_lock(&zdev->state_lock);
+		rc = clp_refresh_fh(zdev->fid, &fh);
+		if (rc)
+			goto no_pdev;
+		if (!fh || ccdf->fh != fh) {
+			/* Ignore events with stale handles */
+			zpci_dbg(3, "err fid:%x, fh:%x (stale %x)\n",
+				 ccdf->fid, fh, ccdf->fh);
+			goto no_pdev;
+		}
+		zpci_update_fh(zdev, ccdf->fh);
 		if (zdev->zbus->bus)
 			pdev = pci_get_slot(zdev->zbus->bus, zdev->devfn);
@@ -2761,6 +2761,15 @@ config MITIGATION_ITS
	  disabled, mitigation cannot be enabled via cmdline.
	  See <file:Documentation/admin-guide/hw-vuln/indirect-target-selection.rst>

+config MITIGATION_TSA
+	bool "Mitigate Transient Scheduler Attacks"
+	depends on CPU_SUP_AMD
+	default y
+	help
+	  Enable mitigation for Transient Scheduler Attacks. TSA is a hardware
+	  security vulnerability on AMD CPUs which can lead to forwarding of
+	  invalid info to subsequent instructions and thus can affect their
+	  timing and thereby cause a leakage.
 endif

 config ARCH_HAS_ADD_PAGES
@@ -33,20 +33,20 @@ EXPORT_SYMBOL_GPL(entry_ibpb);

 /*
 * Define the VERW operand that is disguised as entry code so that
- * it can be referenced with KPTI enabled. This ensure VERW can be
+ * it can be referenced with KPTI enabled. This ensures VERW can be
 * used late in exit-to-user path after page tables are switched.
 */
 .pushsection .entry.text, "ax"

 .align L1_CACHE_BYTES, 0xcc
-SYM_CODE_START_NOALIGN(mds_verw_sel)
+SYM_CODE_START_NOALIGN(x86_verw_sel)
	UNWIND_HINT_UNDEFINED
	ANNOTATE_NOENDBR
	.word __KERNEL_DS
 .align L1_CACHE_BYTES, 0xcc
-SYM_CODE_END(mds_verw_sel);
+SYM_CODE_END(x86_verw_sel);
 /* For KVM */
-EXPORT_SYMBOL_GPL(mds_verw_sel);
+EXPORT_SYMBOL_GPL(x86_verw_sel);

 .popsection
@@ -69,4 +69,16 @@ int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);

 extern struct cpumask cpus_stop_mask;

+union zen_patch_rev {
+	struct {
+		__u32 rev	 : 8,
+		      stepping	 : 4,
+		      model	 : 4,
+		      __reserved : 4,
+		      ext_model	 : 4,
+		      ext_fam	 : 8;
+	};
+	__u32 ucode_rev;
+};
+
 #endif /* _ASM_X86_CPU_H */
@@ -455,6 +455,7 @@
 #define X86_FEATURE_NO_NESTED_DATA_BP	(20*32+ 0) /* No Nested Data Breakpoints */
 #define X86_FEATURE_WRMSR_XX_BASE_NS	(20*32+ 1) /* WRMSR to {FS,GS,KERNEL_GS}_BASE is non-serializing */
 #define X86_FEATURE_LFENCE_RDTSC	(20*32+ 2) /* LFENCE always serializing / synchronizes RDTSC */
+#define X86_FEATURE_VERW_CLEAR		(20*32+ 5) /* The memory form of VERW mitigates TSA */
 #define X86_FEATURE_NULL_SEL_CLR_BASE	(20*32+ 6) /* Null Selector Clears Base */
 #define X86_FEATURE_AUTOIBRS		(20*32+ 8) /* Automatic IBRS */
 #define X86_FEATURE_NO_SMM_CTL_MSR	(20*32+ 9) /* SMM_CTL MSR is not present */
@@ -477,6 +478,10 @@
 #define X86_FEATURE_FAST_CPPC		(21*32 + 5) /* AMD Fast CPPC */
 #define X86_FEATURE_INDIRECT_THUNK_ITS	(21*32 + 6) /* Use thunk for indirect branches in lower half of cacheline */

+#define X86_FEATURE_TSA_SQ_NO		(21*32+11) /* AMD CPU not vulnerable to TSA-SQ */
+#define X86_FEATURE_TSA_L1_NO		(21*32+12) /* AMD CPU not vulnerable to TSA-L1 */
+#define X86_FEATURE_CLEAR_CPU_BUF_VM	(21*32+13) /* Clear CPU buffers using VERW before VMRUN */
+
 /*
 * BUG word(s)
 */
@@ -529,4 +534,5 @@
 #define X86_BUG_IBPB_NO_RET		X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
 #define X86_BUG_ITS			X86_BUG(1*32 + 5) /* "its" CPU is affected by Indirect Target Selection */
 #define X86_BUG_ITS_NATIVE_ONLY		X86_BUG(1*32 + 6) /* "its_native_only" CPU is affected by ITS, VMX is not affected */
+#define X86_BUG_TSA			X86_BUG( 1*32+ 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */
 #endif /* _ASM_X86_CPUFEATURES_H */
@@ -44,13 +44,13 @@ static __always_inline void native_irq_enable(void)

 static __always_inline void native_safe_halt(void)
 {
-	mds_idle_clear_cpu_buffers();
+	x86_idle_clear_cpu_buffers();
	asm volatile("sti; hlt": : :"memory");
 }

 static __always_inline void native_halt(void)
 {
-	mds_idle_clear_cpu_buffers();
+	x86_idle_clear_cpu_buffers();
	asm volatile("hlt": : :"memory");
 }
@@ -44,8 +44,6 @@ static __always_inline void __monitorx(const void *eax, unsigned long ecx,

 static __always_inline void __mwait(unsigned long eax, unsigned long ecx)
 {
-	mds_idle_clear_cpu_buffers();
-
	/* "mwait %eax, %ecx;" */
	asm volatile(".byte 0x0f, 0x01, 0xc9;"
		     :: "a" (eax), "c" (ecx));
@@ -80,7 +78,7 @@ static __always_inline void __mwait(unsigned long eax, unsigned long ecx)
 static __always_inline void __mwaitx(unsigned long eax, unsigned long ebx,
				     unsigned long ecx)
 {
-	/* No MDS buffer clear as this is AMD/HYGON only */
+	/* No need for TSA buffer clearing on AMD */

	/* "mwaitx %eax, %ebx, %ecx;" */
	asm volatile(".byte 0x0f, 0x01, 0xfb;"
@@ -98,7 +96,7 @@ static __always_inline void __mwaitx(unsigned long eax, unsigned long ebx,
 */
 static __always_inline void __sti_mwait(unsigned long eax, unsigned long ecx)
 {
-	mds_idle_clear_cpu_buffers();
-
	/* "mwait %eax, %ecx;" */
	asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
		     :: "a" (eax), "c" (ecx));
@@ -116,21 +114,29 @@ static __always_inline void __sti_mwait(unsigned long eax, unsigned long ecx)
 */
 static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
 {
	if (need_resched())
		return;

+	x86_idle_clear_cpu_buffers();
+
	if (static_cpu_has_bug(X86_BUG_MONITOR) || !current_set_polling_and_test()) {
		const void *addr = &current_thread_info()->flags;

		alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr));
		__monitor(addr, 0, 0);

-		if (!need_resched()) {
-			if (ecx & 1) {
-				__mwait(eax, ecx);
-			} else {
-				__sti_mwait(eax, ecx);
-				raw_local_irq_disable();
-			}
-		}
+		if (need_resched())
+			goto out;
+
+		if (ecx & 1) {
+			__mwait(eax, ecx);
+		} else {
+			__sti_mwait(eax, ecx);
+			raw_local_irq_disable();
+		}
	}

+out:
	current_clr_polling();
 }
@@ -315,25 +315,31 @@
 .endm

 /*
- * Macro to execute VERW instruction that mitigate transient data sampling
- * attacks such as MDS. On affected systems a microcode update overloaded VERW
- * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
+ * Macro to execute VERW insns that mitigate transient data sampling
+ * attacks such as MDS or TSA. On affected systems a microcode update
+ * overloaded VERW insns to also clear the CPU buffers. VERW clobbers
+ * CFLAGS.ZF.
 *
 * Note: Only the memory operand variant of VERW clears the CPU buffers.
 */
-.macro CLEAR_CPU_BUFFERS
+.macro __CLEAR_CPU_BUFFERS feature
 #ifdef CONFIG_X86_64
-	ALTERNATIVE "", "verw mds_verw_sel(%rip)", X86_FEATURE_CLEAR_CPU_BUF
+	ALTERNATIVE "", "verw x86_verw_sel(%rip)", \feature
 #else
	/*
	 * In 32bit mode, the memory operand must be a %cs reference. The data
	 * segments may not be usable (vm86 mode), and the stack segment may not
	 * be flat (ESPFIX32).
	 */
-	ALTERNATIVE "", "verw %cs:mds_verw_sel", X86_FEATURE_CLEAR_CPU_BUF
+	ALTERNATIVE "", "verw %cs:x86_verw_sel", \feature
 #endif
 .endm

+#define CLEAR_CPU_BUFFERS \
+	__CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF
+
+#define VM_CLEAR_CPU_BUFFERS \
+	__CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF_VM
+
 #ifdef CONFIG_X86_64
 .macro CLEAR_BRANCH_HISTORY
	ALTERNATIVE "", "call clear_bhb_loop", X86_FEATURE_CLEAR_BHB_LOOP
@@ -582,24 +588,24 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);

-DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
+DECLARE_STATIC_KEY_FALSE(cpu_buf_idle_clear);

 DECLARE_STATIC_KEY_FALSE(switch_mm_cond_l1d_flush);

 DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);

-extern u16 mds_verw_sel;
+extern u16 x86_verw_sel;

 #include <asm/segment.h>

 /**
- * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
+ * x86_clear_cpu_buffers - Buffer clearing support for different x86 CPU vulns
 *
 * This uses the otherwise unused and obsolete VERW instruction in
 * combination with microcode which triggers a CPU buffer flush when the
 * instruction is executed.
 */
-static __always_inline void mds_clear_cpu_buffers(void)
+static __always_inline void x86_clear_cpu_buffers(void)
 {
	static const u16 ds = __KERNEL_DS;

@@ -616,14 +622,15 @@ static __always_inline void mds_clear_cpu_buffers(void)
 }

 /**
- * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
+ * x86_idle_clear_cpu_buffers - Buffer clearing support in idle for the MDS
+ *				and TSA vulnerabilities.
 *
 * Clear CPU buffers if the corresponding static key is enabled
 */
-static __always_inline void mds_idle_clear_cpu_buffers(void)
+static __always_inline void x86_idle_clear_cpu_buffers(void)
 {
-	if (static_branch_likely(&mds_idle_clear))
-		mds_clear_cpu_buffers();
+	if (static_branch_likely(&cpu_buf_idle_clear))
+		x86_clear_cpu_buffers();
 }

 #endif /* __ASSEMBLY__ */
@@ -368,6 +368,63 @@ static void bsp_determine_snp(struct cpuinfo_x86 *c)
 #endif
 }

+static bool amd_check_tsa_microcode(void)
+{
+	struct cpuinfo_x86 *c = &boot_cpu_data;
+	union zen_patch_rev p;
+	u32 min_rev = 0;
+
+	p.ext_fam	= c->x86 - 0xf;
+	p.model		= c->x86_model;
+	p.stepping	= c->x86_stepping;
+
+	if (cpu_has(c, X86_FEATURE_ZEN3) ||
+	    cpu_has(c, X86_FEATURE_ZEN4)) {
+		switch (p.ucode_rev >> 8) {
+		case 0xa0011: min_rev = 0x0a0011d7; break;
+		case 0xa0012: min_rev = 0x0a00123b; break;
+		case 0xa0082: min_rev = 0x0a00820d; break;
+		case 0xa1011: min_rev = 0x0a10114c; break;
+		case 0xa1012: min_rev = 0x0a10124c; break;
+		case 0xa1081: min_rev = 0x0a108109; break;
+		case 0xa2010: min_rev = 0x0a20102e; break;
+		case 0xa2012: min_rev = 0x0a201211; break;
+		case 0xa4041: min_rev = 0x0a404108; break;
+		case 0xa5000: min_rev = 0x0a500012; break;
+		case 0xa6012: min_rev = 0x0a60120a; break;
+		case 0xa7041: min_rev = 0x0a704108; break;
+		case 0xa7052: min_rev = 0x0a705208; break;
+		case 0xa7080: min_rev = 0x0a708008; break;
+		case 0xa70c0: min_rev = 0x0a70c008; break;
+		case 0xaa002: min_rev = 0x0aa00216; break;
+		default:
+			pr_debug("%s: ucode_rev: 0x%x, current revision: 0x%x\n",
+				 __func__, p.ucode_rev, c->microcode);
+			return false;
+		}
+	}
+
+	if (!min_rev)
+		return false;
+
+	return c->microcode >= min_rev;
+}
+
+static void tsa_init(struct cpuinfo_x86 *c)
+{
+	if (cpu_has(c, X86_FEATURE_HYPERVISOR))
+		return;
+
+	if (cpu_has(c, X86_FEATURE_ZEN3) ||
+	    cpu_has(c, X86_FEATURE_ZEN4)) {
+		if (amd_check_tsa_microcode())
+			setup_force_cpu_cap(X86_FEATURE_VERW_CLEAR);
+	} else {
+		setup_force_cpu_cap(X86_FEATURE_TSA_SQ_NO);
+		setup_force_cpu_cap(X86_FEATURE_TSA_L1_NO);
+	}
+}
+
 static void bsp_init_amd(struct cpuinfo_x86 *c)
 {
	if (cpu_has(c, X86_FEATURE_CONSTANT_TSC)) {
@@ -475,6 +532,9 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
 	}

	bsp_determine_snp(c);
+
+	tsa_init(c);
+
	return;

 warn:
@@ -50,6 +50,7 @@ static void __init l1d_flush_select_mitigation(void);
 static void __init srso_select_mitigation(void);
 static void __init gds_select_mitigation(void);
 static void __init its_select_mitigation(void);
+static void __init tsa_select_mitigation(void);
 
 /* The base value of the SPEC_CTRL MSR without task-specific bits set */
 u64 x86_spec_ctrl_base;
@@ -122,9 +123,9 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
 /* Control unconditional IBPB in switch_mm() */
 DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
 
-/* Control MDS CPU buffer clear before idling (halt, mwait) */
-DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
-EXPORT_SYMBOL_GPL(mds_idle_clear);
+/* Control CPU buffer clear before idling (halt, mwait) */
+DEFINE_STATIC_KEY_FALSE(cpu_buf_idle_clear);
+EXPORT_SYMBOL_GPL(cpu_buf_idle_clear);
 
 /*
  * Controls whether l1d flush based mitigations are enabled,
@@ -185,6 +186,7 @@ void __init cpu_select_mitigations(void)
 	srso_select_mitigation();
 	gds_select_mitigation();
 	its_select_mitigation();
+	tsa_select_mitigation();
 }
 
 /*
@@ -448,7 +450,7 @@ static void __init mmio_select_mitigation(void)
 	 * is required irrespective of SMT state.
 	 */
 	if (!(x86_arch_cap_msr & ARCH_CAP_FBSDP_NO))
-		static_branch_enable(&mds_idle_clear);
+		static_branch_enable(&cpu_buf_idle_clear);
 
 	/*
 	 * Check if the system has the right microcode.
@@ -2092,10 +2094,10 @@ static void update_mds_branch_idle(void)
 		return;
 
 	if (sched_smt_active()) {
-		static_branch_enable(&mds_idle_clear);
+		static_branch_enable(&cpu_buf_idle_clear);
 	} else if (mmio_mitigation == MMIO_MITIGATION_OFF ||
 		   (x86_arch_cap_msr & ARCH_CAP_FBSDP_NO)) {
-		static_branch_disable(&mds_idle_clear);
+		static_branch_disable(&cpu_buf_idle_clear);
 	}
 }
@@ -2103,6 +2105,94 @@ static void update_mds_branch_idle(void)
 #define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
 #define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"Transient Scheduler Attacks: " fmt
+
+enum tsa_mitigations {
+	TSA_MITIGATION_NONE,
+	TSA_MITIGATION_UCODE_NEEDED,
+	TSA_MITIGATION_USER_KERNEL,
+	TSA_MITIGATION_VM,
+	TSA_MITIGATION_FULL,
+};
+
+static const char * const tsa_strings[] = {
+	[TSA_MITIGATION_NONE]		= "Vulnerable",
+	[TSA_MITIGATION_UCODE_NEEDED]	= "Vulnerable: Clear CPU buffers attempted, no microcode",
+	[TSA_MITIGATION_USER_KERNEL]	= "Mitigation: Clear CPU buffers: user/kernel boundary",
+	[TSA_MITIGATION_VM]		= "Mitigation: Clear CPU buffers: VM",
+	[TSA_MITIGATION_FULL]		= "Mitigation: Clear CPU buffers",
+};
+
+static enum tsa_mitigations tsa_mitigation __ro_after_init =
+	IS_ENABLED(CONFIG_MITIGATION_TSA) ? TSA_MITIGATION_FULL : TSA_MITIGATION_NONE;
+
+static int __init tsa_parse_cmdline(char *str)
+{
+	if (!str)
+		return -EINVAL;
+
+	if (!strcmp(str, "off"))
+		tsa_mitigation = TSA_MITIGATION_NONE;
+	else if (!strcmp(str, "on"))
+		tsa_mitigation = TSA_MITIGATION_FULL;
+	else if (!strcmp(str, "user"))
+		tsa_mitigation = TSA_MITIGATION_USER_KERNEL;
+	else if (!strcmp(str, "vm"))
+		tsa_mitigation = TSA_MITIGATION_VM;
+	else
+		pr_err("Ignoring unknown tsa=%s option.\n", str);
+
+	return 0;
+}
+early_param("tsa", tsa_parse_cmdline);
+
+static void __init tsa_select_mitigation(void)
+{
+	if (tsa_mitigation == TSA_MITIGATION_NONE)
+		return;
+
+	if (cpu_mitigations_off() || !boot_cpu_has_bug(X86_BUG_TSA)) {
+		tsa_mitigation = TSA_MITIGATION_NONE;
+		return;
+	}
+
+	if (!boot_cpu_has(X86_FEATURE_VERW_CLEAR))
+		tsa_mitigation = TSA_MITIGATION_UCODE_NEEDED;
+
+	switch (tsa_mitigation) {
+	case TSA_MITIGATION_USER_KERNEL:
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+		break;
+
+	case TSA_MITIGATION_VM:
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
+		break;
+
+	case TSA_MITIGATION_UCODE_NEEDED:
+		if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
+			goto out;
+
+		pr_notice("Forcing mitigation on in a VM\n");
+
+		/*
+		 * On the off-chance that microcode has been updated
+		 * on the host, enable the mitigation in the guest just
+		 * in case.
+		 */
+		fallthrough;
+	case TSA_MITIGATION_FULL:
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
+		setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
+		break;
+	default:
+		break;
+	}
+
+out:
+	pr_info("%s\n", tsa_strings[tsa_mitigation]);
+}
+
 void cpu_bugs_smt_update(void)
 {
 	mutex_lock(&spec_ctrl_mutex);
@@ -2156,6 +2246,24 @@ void cpu_bugs_smt_update(void)
 		break;
 	}
 
+	switch (tsa_mitigation) {
+	case TSA_MITIGATION_USER_KERNEL:
+	case TSA_MITIGATION_VM:
+	case TSA_MITIGATION_FULL:
+	case TSA_MITIGATION_UCODE_NEEDED:
+		/*
+		 * TSA-SQ can potentially lead to info leakage between
+		 * SMT threads.
+		 */
+		if (sched_smt_active())
+			static_branch_enable(&cpu_buf_idle_clear);
+		else
+			static_branch_disable(&cpu_buf_idle_clear);
+		break;
+	case TSA_MITIGATION_NONE:
+		break;
+	}
+
 	mutex_unlock(&spec_ctrl_mutex);
 }
@@ -3084,6 +3192,11 @@ static ssize_t gds_show_state(char *buf)
 	return sysfs_emit(buf, "%s\n", gds_strings[gds_mitigation]);
 }
 
+static ssize_t tsa_show_state(char *buf)
+{
+	return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]);
+}
+
 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
 			       char *buf, unsigned int bug)
 {
@@ -3145,6 +3258,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 	case X86_BUG_ITS:
 		return its_show_state(buf);
 
+	case X86_BUG_TSA:
+		return tsa_show_state(buf);
+
 	default:
 		break;
 	}
@@ -3229,6 +3345,11 @@ ssize_t cpu_show_indirect_target_selection(struct device *dev, struct device_att
 {
 	return cpu_show_common(dev, attr, buf, X86_BUG_ITS);
 }
 
+ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf)
+{
+	return cpu_show_common(dev, attr, buf, X86_BUG_TSA);
+}
+
 #endif
 
 void __warn_thunk(void)
@@ -1233,6 +1233,8 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
 #define ITS		BIT(8)
 /* CPU is affected by Indirect Target Selection, but guest-host isolation is not affected */
 #define ITS_NATIVE_ONLY	BIT(9)
+/* CPU is affected by Transient Scheduler Attacks */
+#define TSA		BIT(10)
 
 static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
 	VULNBL_INTEL_STEPPINGS(INTEL_IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
@@ -1280,7 +1282,7 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
 	VULNBL_AMD(0x16, RETBLEED),
 	VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO),
 	VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO),
-	VULNBL_AMD(0x19, SRSO),
+	VULNBL_AMD(0x19, SRSO | TSA),
 	{}
 };
 
@@ -1490,6 +1492,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
 		setup_force_cpu_bug(X86_BUG_ITS_NATIVE_ONLY);
 	}
 
+	if (c->x86_vendor == X86_VENDOR_AMD) {
+		if (!cpu_has(c, X86_FEATURE_TSA_SQ_NO) ||
+		    !cpu_has(c, X86_FEATURE_TSA_L1_NO)) {
+			if (cpu_matches(cpu_vuln_blacklist, TSA) ||
+			    /* Enable bug on Zen guests to allow for live migration. */
+			    (cpu_has(c, X86_FEATURE_HYPERVISOR) && cpu_has(c, X86_FEATURE_ZEN)))
+				setup_force_cpu_bug(X86_BUG_TSA);
+		}
+	}
+
 	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
 		return;
@@ -94,18 +94,6 @@ static struct equiv_cpu_table {
 	struct equiv_cpu_entry *entry;
 } equiv_table;
 
-union zen_patch_rev {
-	struct {
-		__u32 rev	 : 8,
-		      stepping	 : 4,
-		      model	 : 4,
-		      __reserved : 4,
-		      ext_model	 : 4,
-		      ext_fam	 : 8;
-	};
-	__u32 ucode_rev;
-};
-
 union cpuid_1_eax {
 	struct {
 		__u32 stepping : 4,
@@ -231,6 +231,13 @@ static const struct patch_digest phashes[] = {
 		0x0d,0x5b,0x65,0x34,0x69,0xb2,0x62,0x21,
 	}
 },
+{ 0xa0011d7, {
+		0x35,0x07,0xcd,0x40,0x94,0xbc,0x81,0x6b,
+		0xfc,0x61,0x56,0x1a,0xe2,0xdb,0x96,0x12,
+		0x1c,0x1c,0x31,0xb1,0x02,0x6f,0xe5,0xd2,
+		0xfe,0x1b,0x04,0x03,0x2c,0x8f,0x4c,0x36,
+	}
+},
 { 0xa001223, {
 		0xfb,0x32,0x5f,0xc6,0x83,0x4f,0x8c,0xb8,
 		0xa4,0x05,0xf9,0x71,0x53,0x01,0x16,0xc4,
@@ -294,6 +301,13 @@ static const struct patch_digest phashes[] = {
 		0xc0,0xcd,0x33,0xf2,0x8d,0xf9,0xef,0x59,
 	}
 },
+{ 0xa00123b, {
+		0xef,0xa1,0x1e,0x71,0xf1,0xc3,0x2c,0xe2,
+		0xc3,0xef,0x69,0x41,0x7a,0x54,0xca,0xc3,
+		0x8f,0x62,0x84,0xee,0xc2,0x39,0xd9,0x28,
+		0x95,0xa7,0x12,0x49,0x1e,0x30,0x71,0x72,
+	}
+},
 { 0xa00820c, {
 		0xa8,0x0c,0x81,0xc0,0xa6,0x00,0xe7,0xf3,
 		0x5f,0x65,0xd3,0xb9,0x6f,0xea,0x93,0x63,
@@ -301,6 +315,13 @@ static const struct patch_digest phashes[] = {
 		0xe1,0x3b,0x8d,0xb2,0xf8,0x22,0x03,0xe2,
 	}
 },
+{ 0xa00820d, {
+		0xf9,0x2a,0xc0,0xf4,0x9e,0xa4,0x87,0xa4,
+		0x7d,0x87,0x00,0xfd,0xab,0xda,0x19,0xca,
+		0x26,0x51,0x32,0xc1,0x57,0x91,0xdf,0xc1,
+		0x05,0xeb,0x01,0x7c,0x5a,0x95,0x21,0xb7,
+	}
+},
 { 0xa10113e, {
 		0x05,0x3c,0x66,0xd7,0xa9,0x5a,0x33,0x10,
 		0x1b,0xf8,0x9c,0x8f,0xed,0xfc,0xa7,0xa0,
@@ -322,6 +343,13 @@ static const struct patch_digest phashes[] = {
 		0xf1,0x5e,0xb0,0xde,0xb4,0x98,0xae,0xc4,
 	}
 },
+{ 0xa10114c, {
+		0x9e,0xb6,0xa2,0xd9,0x87,0x38,0xc5,0x64,
+		0xd8,0x88,0xfa,0x78,0x98,0xf9,0x6f,0x74,
+		0x39,0x90,0x1b,0xa5,0xcf,0x5e,0xb4,0x2a,
+		0x02,0xff,0xd4,0x8c,0x71,0x8b,0xe2,0xc0,
+	}
+},
 { 0xa10123e, {
 		0x03,0xb9,0x2c,0x76,0x48,0x93,0xc9,0x18,
 		0xfb,0x56,0xfd,0xf7,0xe2,0x1d,0xca,0x4d,
@@ -343,6 +371,13 @@ static const struct patch_digest phashes[] = {
 		0x1b,0x7d,0x64,0x9d,0x4b,0x53,0x13,0x75,
 	}
 },
+{ 0xa10124c, {
+		0x29,0xea,0xf1,0x2c,0xb2,0xe4,0xef,0x90,
+		0xa4,0xcd,0x1d,0x86,0x97,0x17,0x61,0x46,
+		0xfc,0x22,0xcb,0x57,0x75,0x19,0xc8,0xcc,
+		0x0c,0xf5,0xbc,0xac,0x81,0x9d,0x9a,0xd2,
+	}
+},
 { 0xa108108, {
 		0xed,0xc2,0xec,0xa1,0x15,0xc6,0x65,0xe9,
 		0xd0,0xef,0x39,0xaa,0x7f,0x55,0x06,0xc6,
@@ -350,6 +385,13 @@ static const struct patch_digest phashes[] = {
 		0x28,0x1e,0x9c,0x59,0x69,0x99,0x4d,0x16,
 	}
 },
+{ 0xa108109, {
+		0x85,0xb4,0xbd,0x7c,0x49,0xa7,0xbd,0xfa,
+		0x49,0x36,0x80,0x81,0xc5,0xb7,0x39,0x1b,
+		0x9a,0xaa,0x50,0xde,0x9b,0xe9,0x32,0x35,
+		0x42,0x7e,0x51,0x4f,0x52,0x2c,0x28,0x59,
+	}
+},
 { 0xa20102d, {
 		0xf9,0x6e,0xf2,0x32,0xd3,0x0f,0x5f,0x11,
 		0x59,0xa1,0xfe,0xcc,0xcd,0x9b,0x42,0x89,
@@ -357,6 +399,13 @@ static const struct patch_digest phashes[] = {
 		0x8c,0xe9,0x19,0x3e,0xcc,0x3f,0x7b,0xb4,
 	}
 },
+{ 0xa20102e, {
+		0xbe,0x1f,0x32,0x04,0x0d,0x3c,0x9c,0xdd,
+		0xe1,0xa4,0xbf,0x76,0x3a,0xec,0xc2,0xf6,
+		0x11,0x00,0xa7,0xaf,0x0f,0xe5,0x02,0xc5,
+		0x54,0x3a,0x1f,0x8c,0x16,0xb5,0xff,0xbe,
+	}
+},
 { 0xa201210, {
 		0xe8,0x6d,0x51,0x6a,0x8e,0x72,0xf3,0xfe,
 		0x6e,0x16,0xbc,0x62,0x59,0x40,0x17,0xe9,
@@ -364,6 +413,13 @@ static const struct patch_digest phashes[] = {
 		0xf7,0x55,0xf0,0x13,0xbb,0x22,0xf6,0x41,
 	}
 },
+{ 0xa201211, {
+		0x69,0xa1,0x17,0xec,0xd0,0xf6,0x6c,0x95,
+		0xe2,0x1e,0xc5,0x59,0x1a,0x52,0x0a,0x27,
+		0xc4,0xed,0xd5,0x59,0x1f,0xbf,0x00,0xff,
+		0x08,0x88,0xb5,0xe1,0x12,0xb6,0xcc,0x27,
+	}
+},
 { 0xa404107, {
 		0xbb,0x04,0x4e,0x47,0xdd,0x5e,0x26,0x45,
 		0x1a,0xc9,0x56,0x24,0xa4,0x4c,0x82,0xb0,
@@ -371,6 +427,13 @@ static const struct patch_digest phashes[] = {
 		0x13,0xbc,0xc5,0x25,0xe4,0xc5,0xc3,0x99,
 	}
 },
+{ 0xa404108, {
+		0x69,0x67,0x43,0x06,0xf8,0x0c,0x62,0xdc,
+		0xa4,0x21,0x30,0x4f,0x0f,0x21,0x2c,0xcb,
+		0xcc,0x37,0xf1,0x1c,0xc3,0xf8,0x2f,0x19,
+		0xdf,0x53,0x53,0x46,0xb1,0x15,0xea,0x00,
+	}
+},
 { 0xa500011, {
 		0x23,0x3d,0x70,0x7d,0x03,0xc3,0xc4,0xf4,
 		0x2b,0x82,0xc6,0x05,0xda,0x80,0x0a,0xf1,
@@ -378,6 +441,13 @@ static const struct patch_digest phashes[] = {
 		0x11,0x5e,0x96,0x7e,0x71,0xe9,0xfc,0x74,
 	}
 },
+{ 0xa500012, {
+		0xeb,0x74,0x0d,0x47,0xa1,0x8e,0x09,0xe4,
+		0x93,0x4c,0xad,0x03,0x32,0x4c,0x38,0x16,
+		0x10,0x39,0xdd,0x06,0xaa,0xce,0xd6,0x0f,
+		0x62,0x83,0x9d,0x8e,0x64,0x55,0xbe,0x63,
+	}
+},
 { 0xa601209, {
 		0x66,0x48,0xd4,0x09,0x05,0xcb,0x29,0x32,
 		0x66,0xb7,0x9a,0x76,0xcd,0x11,0xf3,0x30,
@@ -385,6 +455,13 @@ static const struct patch_digest phashes[] = {
 		0xe8,0x73,0xe2,0xd6,0xdb,0xd2,0x77,0x1d,
 	}
 },
+{ 0xa60120a, {
+		0x0c,0x8b,0x3d,0xfd,0x52,0x52,0x85,0x7d,
+		0x20,0x3a,0xe1,0x7e,0xa4,0x21,0x3b,0x7b,
+		0x17,0x86,0xae,0xac,0x13,0xb8,0x63,0x9d,
+		0x06,0x01,0xd0,0xa0,0x51,0x9a,0x91,0x2c,
+	}
+},
 { 0xa704107, {
 		0xf3,0xc6,0x58,0x26,0xee,0xac,0x3f,0xd6,
 		0xce,0xa1,0x72,0x47,0x3b,0xba,0x2b,0x93,
@@ -392,6 +469,13 @@ static const struct patch_digest phashes[] = {
 		0x64,0x39,0x71,0x8c,0xce,0xe7,0x41,0x39,
 	}
 },
+{ 0xa704108, {
+		0xd7,0x55,0x15,0x2b,0xfe,0xc4,0xbc,0x93,
+		0xec,0x91,0xa0,0xae,0x45,0xb7,0xc3,0x98,
+		0x4e,0xff,0x61,0x77,0x88,0xc2,0x70,0x49,
+		0xe0,0x3a,0x1d,0x84,0x38,0x52,0xbf,0x5a,
+	}
+},
 { 0xa705206, {
 		0x8d,0xc0,0x76,0xbd,0x58,0x9f,0x8f,0xa4,
 		0x12,0x9d,0x21,0xfb,0x48,0x21,0xbc,0xe7,
@@ -399,6 +483,13 @@ static const struct patch_digest phashes[] = {
 		0x03,0x35,0xe9,0xbe,0xfb,0x06,0xdf,0xfc,
 	}
 },
+{ 0xa705208, {
+		0x30,0x1d,0x55,0x24,0xbc,0x6b,0x5a,0x19,
+		0x0c,0x7d,0x1d,0x74,0xaa,0xd1,0xeb,0xd2,
+		0x16,0x62,0xf7,0x5b,0xe1,0x1f,0x18,0x11,
+		0x5c,0xf0,0x94,0x90,0x26,0xec,0x69,0xff,
+	}
+},
 { 0xa708007, {
 		0x6b,0x76,0xcc,0x78,0xc5,0x8a,0xa3,0xe3,
 		0x32,0x2d,0x79,0xe4,0xc3,0x80,0xdb,0xb2,
@@ -406,6 +497,13 @@ static const struct patch_digest phashes[] = {
 		0xdf,0x92,0x73,0x84,0x87,0x3c,0x73,0x93,
 	}
 },
+{ 0xa708008, {
+		0x08,0x6e,0xf0,0x22,0x4b,0x8e,0xc4,0x46,
+		0x58,0x34,0xe6,0x47,0xa2,0x28,0xfd,0xab,
+		0x22,0x3d,0xdd,0xd8,0x52,0x9e,0x1d,0x16,
+		0xfa,0x01,0x68,0x14,0x79,0x3e,0xe8,0x6b,
+	}
+},
 { 0xa70c005, {
 		0x88,0x5d,0xfb,0x79,0x64,0xd8,0x46,0x3b,
 		0x4a,0x83,0x8e,0x77,0x7e,0xcf,0xb3,0x0f,
@@ -413,6 +511,13 @@ static const struct patch_digest phashes[] = {
 		0xee,0x49,0xac,0xe1,0x8b,0x13,0xc5,0x13,
 	}
 },
+{ 0xa70c008, {
+		0x0f,0xdb,0x37,0xa1,0x10,0xaf,0xd4,0x21,
+		0x94,0x0d,0xa4,0xa2,0xe9,0x86,0x6c,0x0e,
+		0x85,0x7c,0x36,0x30,0xa3,0x3a,0x78,0x66,
+		0x18,0x10,0x60,0x0d,0x78,0x3d,0x44,0xd0,
+	}
+},
 { 0xaa00116, {
 		0xe8,0x4c,0x2c,0x88,0xa1,0xac,0x24,0x63,
 		0x65,0xe5,0xaa,0x2d,0x16,0xa9,0xc3,0xf5,
@@ -441,4 +546,11 @@ static const struct patch_digest phashes[] = {
 		0x68,0x2f,0x46,0xee,0xfe,0xc6,0x6d,0xef,
 	}
 },
+{ 0xaa00216, {
+		0x79,0xfb,0x5b,0x9f,0xb6,0xe6,0xa8,0xf5,
+		0x4e,0x7c,0x4f,0x8e,0x1d,0xad,0xd0,0x08,
+		0xc2,0x43,0x7c,0x8b,0xe6,0xdb,0xd0,0xd2,
+		0xe8,0x39,0x26,0xc1,0xe5,0x5a,0x48,0xf1,
+	}
+},
 };
@@ -49,6 +49,8 @@ static const struct cpuid_bit cpuid_bits[] = {
 	{ X86_FEATURE_MBA,		CPUID_EBX,  6, 0x80000008, 0 },
 	{ X86_FEATURE_SMBA,		CPUID_EBX,  2, 0x80000020, 0 },
 	{ X86_FEATURE_BMEC,		CPUID_EBX,  3, 0x80000020, 0 },
+	{ X86_FEATURE_TSA_SQ_NO,	CPUID_ECX,  1, 0x80000021, 0 },
+	{ X86_FEATURE_TSA_L1_NO,	CPUID_ECX,  2, 0x80000021, 0 },
 	{ X86_FEATURE_PERFMON_V2,	CPUID_EAX,  0, 0x80000022, 0 },
 	{ X86_FEATURE_AMD_LBR_V2,	CPUID_EAX,  1, 0x80000022, 0 },
 	{ X86_FEATURE_AMD_LBR_PMC_FREEZE,	CPUID_EAX,  2, 0x80000022, 0 },
@@ -912,16 +912,24 @@ static __init bool prefer_mwait_c1_over_halt(void)
  */
 static __cpuidle void mwait_idle(void)
 {
+	if (need_resched())
+		return;
+
+	x86_idle_clear_cpu_buffers();
+
 	if (!current_set_polling_and_test()) {
 		const void *addr = &current_thread_info()->flags;
 
 		alternative_input("", "clflush (%[addr])", X86_BUG_CLFLUSH_MONITOR, [addr] "a" (addr));
 		__monitor(addr, 0, 0);
-		if (!need_resched()) {
-			__sti_mwait(0, 0);
-			raw_local_irq_disable();
-		}
+		if (need_resched())
+			goto out;
+
+		__sti_mwait(0, 0);
+		raw_local_irq_disable();
 	}
+
+out:
 	__current_clr_polling();
 }
@@ -814,6 +814,7 @@ void kvm_set_cpu_caps(void)
 
 	kvm_cpu_cap_mask(CPUID_8000_0021_EAX,
 		F(NO_NESTED_DATA_BP) | F(LFENCE_RDTSC) | 0 /* SmmPgCfgLock */ |
+		F(VERW_CLEAR) |
 		F(NULL_SEL_CLR_BASE) | F(AUTOIBRS) | 0 /* PrefetchCtlMsr */ |
 		F(WRMSR_XX_BASE_NS)
 	);
@@ -826,6 +827,10 @@ void kvm_set_cpu_caps(void)
 		F(PERFMON_V2)
 	);
 
+	kvm_cpu_cap_init_kvm_defined(CPUID_8000_0021_ECX,
+		F(TSA_SQ_NO) | F(TSA_L1_NO)
+	);
+
 	/*
 	 * Synthesize "LFENCE is serializing" into the AMD-defined entry in
 	 * KVM's supported CPUID if the feature is reported as supported by the
@@ -1376,8 +1381,9 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
 		entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
 		break;
 	case 0x80000021:
-		entry->ebx = entry->ecx = entry->edx = 0;
+		entry->ebx = entry->edx = 0;
 		cpuid_entry_override(entry, CPUID_8000_0021_EAX);
+		cpuid_entry_override(entry, CPUID_8000_0021_ECX);
 		break;
 	/* AMD Extended Performance Monitoring and Debug */
 	case 0x80000022: {
@@ -18,6 +18,7 @@ enum kvm_only_cpuid_leafs {
 	CPUID_8000_0022_EAX,
 	CPUID_7_2_EDX,
 	CPUID_24_0_EBX,
+	CPUID_8000_0021_ECX,
 	NR_KVM_CPU_CAPS,
 
 	NKVMCAPINTS = NR_KVM_CPU_CAPS - NCAPINTS,
@@ -68,6 +69,10 @@ enum kvm_only_cpuid_leafs {
 /* CPUID level 0x80000022 (EAX) */
 #define KVM_X86_FEATURE_PERFMON_V2	KVM_X86_FEATURE(CPUID_8000_0022_EAX, 0)
 
+/* CPUID level 0x80000021 (ECX) */
+#define KVM_X86_FEATURE_TSA_SQ_NO	KVM_X86_FEATURE(CPUID_8000_0021_ECX, 1)
+#define KVM_X86_FEATURE_TSA_L1_NO	KVM_X86_FEATURE(CPUID_8000_0021_ECX, 2)
+
 struct cpuid_reg {
 	u32 function;
 	u32 index;
@@ -98,6 +103,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
 	[CPUID_8000_0022_EAX] = {0x80000022, 0, CPUID_EAX},
 	[CPUID_7_2_EDX]       = {         7, 2, CPUID_EDX},
 	[CPUID_24_0_EBX]      = {      0x24, 0, CPUID_EBX},
+	[CPUID_8000_0021_ECX] = {0x80000021, 0, CPUID_ECX},
 };
 
 /*
@@ -137,6 +143,8 @@ static __always_inline u32 __feature_translate(int x86_feature)
 	KVM_X86_TRANSLATE_FEATURE(PERFMON_V2);
 	KVM_X86_TRANSLATE_FEATURE(RRSBA_CTRL);
 	KVM_X86_TRANSLATE_FEATURE(BHI_CTRL);
+	KVM_X86_TRANSLATE_FEATURE(TSA_SQ_NO);
+	KVM_X86_TRANSLATE_FEATURE(TSA_L1_NO);
 	default:
 		return x86_feature;
 	}
@@ -169,6 +169,9 @@ SYM_FUNC_START(__svm_vcpu_run)
 #endif
 	mov VCPU_RDI(%_ASM_DI), %_ASM_DI
 
+	/* Clobbers EFLAGS.ZF */
+	VM_CLEAR_CPU_BUFFERS
+
 	/* Enter guest mode */
 3:	vmrun %_ASM_AX
 4:
@@ -335,6 +338,9 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
 	mov SVM_current_vmcb(%rdi), %rax
 	mov KVM_VMCB_pa(%rax), %rax
 
+	/* Clobbers EFLAGS.ZF */
+	VM_CLEAR_CPU_BUFFERS
+
 	/* Enter guest mode */
 1:	vmrun %rax
 2:
@@ -7313,7 +7313,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 		vmx_l1d_flush(vcpu);
 	else if (static_branch_unlikely(&mmio_stale_data_clear) &&
 		 kvm_arch_has_assigned_device(vcpu->kvm))
-		mds_clear_cpu_buffers();
+		x86_clear_cpu_buffers();
 
 	vmx_disable_fb_clear(vmx);
@@ -483,6 +483,13 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
 		return_ACPI_STATUS(AE_NULL_OBJECT);
 	}
 
+	if (this_walk_state->num_operands < obj_desc->method.param_count) {
+		ACPI_ERROR((AE_INFO, "Missing argument for method [%4.4s]",
+			    acpi_ut_get_node_name(method_node)));
+
+		return_ACPI_STATUS(AE_AML_UNINITIALIZED_ARG);
+	}
+
 	/* Init for new method, possibly wait on method mutex */
 
 	status =
@@ -803,7 +803,13 @@ static int acpi_thermal_add(struct acpi_device *device)
 
 	acpi_thermal_aml_dependency_fix(tz);
 
-	/* Get trip points [_CRT, _PSV, etc.] (required). */
+	/*
+	 * Set the cooling mode [_SCP] to active cooling. This needs to happen before
+	 * we retrieve the trip point values.
+	 */
+	acpi_execute_simple_method(tz->device->handle, "_SCP", ACPI_THERMAL_MODE_ACTIVE);
+
+	/* Get trip points [_ACi, _PSV, etc.] (required). */
 	acpi_thermal_get_trip_points(tz);
 
 	crit_temp = acpi_thermal_get_critical_trip(tz);
@@ -814,10 +820,6 @@ static int acpi_thermal_add(struct acpi_device *device)
 	if (result)
 		goto free_memory;
 
-	/* Set the cooling mode [_SCP] to active cooling. */
-	acpi_execute_simple_method(tz->device->handle, "_SCP",
-				   ACPI_THERMAL_MODE_ACTIVE);
-
 	/* Determine the default polling frequency [_TZP]. */
 	if (tzp)
 		tz->polling_frequency = tzp;
@@ -514,15 +514,19 @@ unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev,
 EXPORT_SYMBOL_GPL(ata_acpi_gtm_xfermask);
 
 /**
- * ata_acpi_cbl_80wire - Check for 80 wire cable
+ * ata_acpi_cbl_pata_type - Return PATA cable type
  * @ap: Port to check
- * @gtm: GTM data to use
  *
- * Return 1 if the @gtm indicates the BIOS selected an 80wire mode.
+ * Return ATA_CBL_PATA* according to the transfer mode selected by BIOS
  */
-int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)
+int ata_acpi_cbl_pata_type(struct ata_port *ap)
 {
 	struct ata_device *dev;
+	int ret = ATA_CBL_PATA_UNK;
+	const struct ata_acpi_gtm *gtm = ata_acpi_init_gtm(ap);
+
+	if (!gtm)
+		return ATA_CBL_PATA40;
 
 	ata_for_each_dev(dev, &ap->link, ENABLED) {
 		unsigned int xfer_mask, udma_mask;
@@ -530,13 +534,17 @@ int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)
 		xfer_mask = ata_acpi_gtm_xfermask(dev, gtm);
 		ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask);
 
-		if (udma_mask & ~ATA_UDMA_MASK_40C)
-			return 1;
+		ret = ATA_CBL_PATA40;
+
+		if (udma_mask & ~ATA_UDMA_MASK_40C) {
+			ret = ATA_CBL_PATA80;
+			break;
+		}
 	}
 
-	return 0;
+	return ret;
 }
-EXPORT_SYMBOL_GPL(ata_acpi_cbl_80wire);
+EXPORT_SYMBOL_GPL(ata_acpi_cbl_pata_type);
 
 static void ata_acpi_gtf_to_tf(struct ata_device *dev,
 				const struct ata_acpi_gtf *gtf,
@@ -27,7 +27,7 @@
 #include <scsi/scsi_host.h>
 #include <linux/dmi.h>
 
-#ifdef CONFIG_X86_32
+#if defined(CONFIG_X86) && defined(CONFIG_X86_32)
 #include <asm/msr.h>
 static int use_msr;
 module_param_named(msr, use_msr, int, 0644);
@@ -201,11 +201,9 @@ static int via_cable_detect(struct ata_port *ap) {
 	   two drives */
 	if (ata66 & (0x10100000 >> (16 * ap->port_no)))
 		return ATA_CBL_PATA80;
 	/* Check with ACPI so we can spot BIOS reported SATA bridges */
-	if (ata_acpi_init_gtm(ap) &&
-	    ata_acpi_cbl_80wire(ap, ata_acpi_init_gtm(ap)))
-		return ATA_CBL_PATA80;
-	return ATA_CBL_PATA40;
+	return ata_acpi_cbl_pata_type(ap);
 }
 
 static int via_pre_reset(struct ata_link *link, unsigned long deadline)
@@ -600,6 +600,7 @@ CPU_SHOW_VULN_FALLBACK(spec_rstack_overflow);
 CPU_SHOW_VULN_FALLBACK(gds);
 CPU_SHOW_VULN_FALLBACK(reg_file_data_sampling);
 CPU_SHOW_VULN_FALLBACK(indirect_target_selection);
+CPU_SHOW_VULN_FALLBACK(tsa);
 
 static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
 static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
@@ -616,6 +617,7 @@ static DEVICE_ATTR(spec_rstack_overflow, 0444, cpu_show_spec_rstack_overflow, NU
 static DEVICE_ATTR(gather_data_sampling, 0444, cpu_show_gds, NULL);
 static DEVICE_ATTR(reg_file_data_sampling, 0444, cpu_show_reg_file_data_sampling, NULL);
 static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL);
+static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL);
 
 static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_meltdown.attr,
@@ -633,6 +635,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
 	&dev_attr_gather_data_sampling.attr,
 	&dev_attr_reg_file_data_sampling.attr,
 	&dev_attr_indirect_target_selection.attr,
+	&dev_attr_tsa.attr,
 	NULL
 };
@@ -80,6 +80,7 @@ enum {
 	DEVFL_NEWSIZE = (1<<6),	/* need to update dev size in block layer */
 	DEVFL_FREEING = (1<<7),	/* set when device is being cleaned up */
 	DEVFL_FREED = (1<<8),	/* device has been cleaned up */
+	DEVFL_DEAD = (1<<9),	/* device has timed out of aoe_deadsecs */
 };
 
 enum {
@@ -754,7 +754,7 @@ rexmit_timer(struct timer_list *timer)
 
 	utgts = count_targets(d, NULL);
 
-	if (d->flags & DEVFL_TKILL) {
+	if (d->flags & (DEVFL_TKILL | DEVFL_DEAD)) {
 		spin_unlock_irqrestore(&d->lock, flags);
 		return;
 	}
@@ -786,7 +786,8 @@ rexmit_timer(struct timer_list *timer)
 	 * to clean up.
 	 */
 	list_splice(&flist, &d->factive[0]);
-	aoedev_downdev(d);
+	d->flags |= DEVFL_DEAD;
 	queue_work(aoe_wq, &d->work);
 	goto out;
 }
@@ -898,6 +899,9 @@ aoecmd_sleepwork(struct work_struct *work)
 {
 	struct aoedev *d = container_of(work, struct aoedev, work);
 
+	if (d->flags & DEVFL_DEAD)
+		aoedev_downdev(d);
+
 	if (d->flags & DEVFL_GDALLOC)
 		aoeblk_gdalloc(d);
 
@@ -200,8 +200,11 @@ aoedev_downdev(struct aoedev *d)
 	struct list_head *head, *pos, *nx;
 	struct request *rq, *rqnext;
 	int i;
+	unsigned long flags;
 
-	d->flags &= ~DEVFL_UP;
+	spin_lock_irqsave(&d->lock, flags);
+	d->flags &= ~(DEVFL_UP | DEVFL_DEAD);
+	spin_unlock_irqrestore(&d->lock, flags);
 
 	/* clean out active and to-be-retransmitted buffers */
 	for (i = 0; i < NFACTIVE; i++) {
@@ -1126,8 +1126,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 			struct idxd_wq *wq,
 			dma_addr_t src_addr, unsigned int slen,
 			dma_addr_t dst_addr, unsigned int *dlen,
-			u32 *compression_crc,
-			bool disable_async)
+			u32 *compression_crc)
 {
 	struct iaa_device_compression_mode *active_compression_mode;
 	struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm);
@@ -1170,7 +1169,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 	desc->src2_size = sizeof(struct aecs_comp_table_record);
 	desc->completion_addr = idxd_desc->compl_dma;
 
-	if (ctx->use_irq && !disable_async) {
+	if (ctx->use_irq) {
 		desc->flags |= IDXD_OP_FLAG_RCI;
 
 		idxd_desc->crypto.req = req;
@@ -1183,8 +1182,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 			" src_addr %llx, dst_addr %llx\n", __func__,
 			active_compression_mode->name,
 			src_addr, dst_addr);
-	} else if (ctx->async_mode && !disable_async)
-		req->base.data = idxd_desc;
+	}
 
 	dev_dbg(dev, "%s: compression mode %s,"
 		" desc->src1_addr %llx, desc->src1_size %d,"
@@ -1204,7 +1202,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 	update_total_comp_calls();
 	update_wq_comp_calls(wq);
 
-	if (ctx->async_mode && !disable_async) {
+	if (ctx->async_mode) {
 		ret = -EINPROGRESS;
 		dev_dbg(dev, "%s: returning -EINPROGRESS\n", __func__);
 		goto out;
@@ -1224,7 +1222,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req,
 
 	*compression_crc = idxd_desc->iax_completion->crc;
 
-	if (!ctx->async_mode || disable_async)
+	if (!ctx->async_mode)
 		idxd_free_desc(wq, idxd_desc);
 out:
 	return ret;
@@ -1421,8 +1419,7 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req,
 			" src_addr %llx, dst_addr %llx\n", __func__,
 			active_compression_mode->name,
 			src_addr, dst_addr);
-	} else if (ctx->async_mode && !disable_async)
-		req->base.data = idxd_desc;
+	}
 
 	dev_dbg(dev, "%s: decompression mode %s,"
 		" desc->src1_addr %llx, desc->src1_size %d,"
@@ -1490,13 +1487,11 @@ static int iaa_comp_acompress(struct acomp_req *req)
 	struct iaa_compression_ctx *compression_ctx;
 	struct crypto_tfm *tfm = req->base.tfm;
 	dma_addr_t src_addr, dst_addr;
-	bool disable_async = false;
 	int nr_sgs, cpu, ret = 0;
 	struct iaa_wq *iaa_wq;
 	u32 compression_crc;
 	struct idxd_wq *wq;
 	struct device *dev;
-	int order = -1;
 
 	compression_ctx = crypto_tfm_ctx(tfm);
 
@@ -1526,21 +1521,6 @@ static int iaa_comp_acompress(struct acomp_req *req)
 
 	iaa_wq = idxd_wq_get_private(wq);
 
-	if (!req->dst) {
-		gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
-
-		/* incompressible data will always be < 2 * slen */
-		req->dlen = 2 * req->slen;
-		order = order_base_2(round_up(req->dlen, PAGE_SIZE) / PAGE_SIZE);
-		req->dst = sgl_alloc_order(req->dlen, order, false, flags, NULL);
-		if (!req->dst) {
-			ret = -ENOMEM;
-			order = -1;
-			goto out;
-		}
-		disable_async = true;
-	}
-
 	dev = &wq->idxd->pdev->dev;
 
 	nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
@@ -1570,7 +1550,7 @@ static int iaa_comp_acompress(struct acomp_req *req)
 		req->dst, req->dlen, sg_dma_len(req->dst));
 
 	ret = iaa_compress(tfm, req, wq, src_addr, req->slen, dst_addr,
-			   &req->dlen, &compression_crc, disable_async);
+			   &req->dlen, &compression_crc);
 	if (ret == -EINPROGRESS)
 		return ret;
 
@@ -1601,100 +1581,6 @@ err_map_dst:
 out:
 	iaa_wq_put(wq);
 
-	if (order >= 0)
-		sgl_free_order(req->dst, order);
-
 	return ret;
 }
 
-static int iaa_comp_adecompress_alloc_dest(struct acomp_req *req)
-{
-	gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
-		      GFP_KERNEL : GFP_ATOMIC;
-	struct crypto_tfm *tfm = req->base.tfm;
-	dma_addr_t src_addr, dst_addr;
-	int nr_sgs, cpu, ret = 0;
-	struct iaa_wq *iaa_wq;
-	struct device *dev;
-	struct idxd_wq *wq;
-	int order = -1;
-
-	cpu = get_cpu();
-	wq = wq_table_next_wq(cpu);
-	put_cpu();
-	if (!wq) {
-		pr_debug("no wq configured for cpu=%d\n", cpu);
-		return -ENODEV;
-	}
-
-	ret = iaa_wq_get(wq);
-	if (ret) {
-		pr_debug("no wq available for cpu=%d\n", cpu);
-		return -ENODEV;
-	}
-
-	iaa_wq = idxd_wq_get_private(wq);
-
-	dev = &wq->idxd->pdev->dev;
-
-	nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
-	if (nr_sgs <= 0 || nr_sgs > 1) {
-		dev_dbg(dev, "couldn't map src sg for iaa device %d,"
-			" wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
-			iaa_wq->wq->id, ret);
-		ret = -EIO;
goto out;
|
||||
}
|
||||
src_addr = sg_dma_address(req->src);
|
||||
dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p,"
|
||||
" req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs,
|
||||
req->src, req->slen, sg_dma_len(req->src));
|
||||
|
||||
req->dlen = 4 * req->slen; /* start with ~avg comp rato */
|
||||
alloc_dest:
|
||||
order = order_base_2(round_up(req->dlen, PAGE_SIZE) / PAGE_SIZE);
|
||||
req->dst = sgl_alloc_order(req->dlen, order, false, flags, NULL);
|
||||
if (!req->dst) {
|
||||
ret = -ENOMEM;
|
||||
order = -1;
|
||||
goto out;
|
||||
}
|
||||
|
||||
nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
|
||||
if (nr_sgs <= 0 || nr_sgs > 1) {
|
||||
dev_dbg(dev, "couldn't map dst sg for iaa device %d,"
|
||||
" wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id,
|
||||
iaa_wq->wq->id, ret);
|
||||
ret = -EIO;
|
||||
goto err_map_dst;
|
||||
}
|
||||
|
||||
dst_addr = sg_dma_address(req->dst);
|
||||
dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p,"
|
||||
" req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs,
|
||||
req->dst, req->dlen, sg_dma_len(req->dst));
|
||||
ret = iaa_decompress(tfm, req, wq, src_addr, req->slen,
|
||||
dst_addr, &req->dlen, true);
|
||||
if (ret == -EOVERFLOW) {
|
||||
dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
|
||||
req->dlen *= 2;
|
||||
if (req->dlen > CRYPTO_ACOMP_DST_MAX)
|
||||
goto err_map_dst;
|
||||
goto alloc_dest;
|
||||
}
|
||||
|
||||
if (ret != 0)
|
||||
dev_dbg(dev, "asynchronous decompress failed ret=%d\n", ret);
|
||||
|
||||
dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE);
|
||||
err_map_dst:
|
||||
dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
|
||||
out:
|
||||
iaa_wq_put(wq);
|
||||
|
||||
if (order >= 0)
|
||||
sgl_free_order(req->dst, order);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -1717,9 +1603,6 @@ static int iaa_comp_adecompress(struct acomp_req *req)
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
if (!req->dst)
|
||||
return iaa_comp_adecompress_alloc_dest(req);
|
||||
|
||||
cpu = get_cpu();
|
||||
wq = wq_table_next_wq(cpu);
|
||||
put_cpu();
|
||||
@@ -1800,19 +1683,10 @@ static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void dst_free(struct scatterlist *sgl)
|
||||
{
|
||||
/*
|
||||
* Called for req->dst = NULL cases but we free elsewhere
|
||||
* using sgl_free_order().
|
||||
*/
|
||||
}
|
||||
|
||||
static struct acomp_alg iaa_acomp_fixed_deflate = {
|
||||
.init = iaa_comp_init_fixed,
|
||||
.compress = iaa_comp_acompress,
|
||||
.decompress = iaa_comp_adecompress,
|
||||
.dst_free = dst_free,
|
||||
.base = {
|
||||
.cra_name = "deflate",
|
||||
.cra_driver_name = "deflate-iaa",
|
||||
|
||||
@@ -3,18 +3,19 @@
 * Xilinx ZynqMP SHA Driver.
 * Copyright (c) 2022 Xilinx Inc.
 */
#include <linux/cacheflush.h>
#include <crypto/hash.h>
#include <crypto/internal/hash.h>
#include <crypto/sha3.h>
#include <linux/crypto.h>
#include <linux/cacheflush.h>
#include <linux/cleanup.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/firmware/xlnx-zynqmp.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/platform_device.h>

#define ZYNQMP_DMA_BIT_MASK 32U
@@ -43,6 +44,8 @@ struct zynqmp_sha_desc_ctx {
static dma_addr_t update_dma_addr, final_dma_addr;
static char *ubuf, *fbuf;

static DEFINE_SPINLOCK(zynqmp_sha_lock);

static int zynqmp_sha_init_tfm(struct crypto_shash *hash)
{
const char *fallback_driver_name = crypto_shash_alg_name(hash);
@@ -124,7 +127,8 @@ static int zynqmp_sha_export(struct shash_desc *desc, void *out)
return crypto_shash_export(&dctx->fbk_req, out);
}

static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out)
static int __zynqmp_sha_digest(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
unsigned int remaining_len = len;
int update_size;
@@ -159,6 +163,12 @@ static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned i
return ret;
}

static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out)
{
scoped_guard(spinlock_bh, &zynqmp_sha_lock)
return __zynqmp_sha_digest(desc, data, len, out);
}

static struct zynqmp_sha_drv_ctx sha3_drv_ctx = {
.sha3_384 = {
.init = zynqmp_sha_init,

@@ -685,11 +685,13 @@ long dma_resv_wait_timeout(struct dma_resv *obj, enum dma_resv_usage usage,
dma_resv_iter_begin(&cursor, obj, usage);
dma_resv_for_each_fence_unlocked(&cursor, fence) {

ret = dma_fence_wait_timeout(fence, intr, ret);
if (ret <= 0) {
dma_resv_iter_end(&cursor);
return ret;
}
ret = dma_fence_wait_timeout(fence, intr, timeout);
if (ret <= 0)
break;

/* Even for zero timeout the return value is 1 */
if (timeout)
timeout = ret;
}
dma_resv_iter_end(&cursor);
@@ -86,7 +86,7 @@ struct ffa_drv_info {
struct work_struct sched_recv_irq_work;
struct xarray partition_info;
DECLARE_HASHTABLE(notifier_hash, ilog2(FFA_MAX_NOTIFICATIONS));
struct mutex notify_lock; /* lock to protect notifier hashtable */
rwlock_t notify_lock; /* lock to protect notifier hashtable */
};

static struct ffa_drv_info *drv_info;
@@ -1115,12 +1115,11 @@ notifier_hash_node_get(u16 notify_id, enum notify_type type)
return NULL;
}

static int
update_notifier_cb(int notify_id, enum notify_type type, ffa_notifier_cb cb,
void *cb_data, bool is_registration)
static int update_notifier_cb(int notify_id, enum notify_type type,
struct notifier_cb_info *cb)
{
struct notifier_cb_info *cb_info = NULL;
bool cb_found;
bool cb_found, is_registration = !!cb;

cb_info = notifier_hash_node_get(notify_id, type);
cb_found = !!cb_info;
@@ -1129,17 +1128,10 @@ update_notifier_cb(int notify_id, enum notify_type type, ffa_notifier_cb cb,
return -EINVAL;

if (is_registration) {
cb_info = kzalloc(sizeof(*cb_info), GFP_KERNEL);
if (!cb_info)
return -ENOMEM;

cb_info->type = type;
cb_info->cb = cb;
cb_info->cb_data = cb_data;

hash_add(drv_info->notifier_hash, &cb_info->hnode, notify_id);
hash_add(drv_info->notifier_hash, &cb->hnode, notify_id);
} else {
hash_del(&cb_info->hnode);
kfree(cb_info);
}

return 0;
@@ -1164,18 +1156,18 @@ static int ffa_notify_relinquish(struct ffa_device *dev, int notify_id)
if (notify_id >= FFA_MAX_NOTIFICATIONS)
return -EINVAL;

mutex_lock(&drv_info->notify_lock);
write_lock(&drv_info->notify_lock);

rc = update_notifier_cb(notify_id, type, NULL, NULL, false);
rc = update_notifier_cb(notify_id, type, NULL);
if (rc) {
pr_err("Could not unregister notification callback\n");
mutex_unlock(&drv_info->notify_lock);
write_unlock(&drv_info->notify_lock);
return rc;
}

rc = ffa_notification_unbind(dev->vm_id, BIT(notify_id));

mutex_unlock(&drv_info->notify_lock);
write_unlock(&drv_info->notify_lock);

return rc;
}
@@ -1185,6 +1177,7 @@ static int ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu,
{
int rc;
u32 flags = 0;
struct notifier_cb_info *cb_info = NULL;
enum notify_type type = ffa_notify_type_get(dev->vm_id);

if (ffa_notifications_disabled())
@@ -1193,24 +1186,34 @@ static int ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu,
if (notify_id >= FFA_MAX_NOTIFICATIONS)
return -EINVAL;

mutex_lock(&drv_info->notify_lock);
cb_info = kzalloc(sizeof(*cb_info), GFP_KERNEL);
if (!cb_info)
return -ENOMEM;

cb_info->type = type;
cb_info->cb_data = cb_data;
cb_info->cb = cb;

write_lock(&drv_info->notify_lock);

if (is_per_vcpu)
flags = PER_VCPU_NOTIFICATION_FLAG;

rc = ffa_notification_bind(dev->vm_id, BIT(notify_id), flags);
if (rc) {
mutex_unlock(&drv_info->notify_lock);
return rc;
}
if (rc)
goto out_unlock_free;

rc = update_notifier_cb(notify_id, type, cb, cb_data, true);
rc = update_notifier_cb(notify_id, type, cb_info);
if (rc) {
pr_err("Failed to register callback for %d - %d\n",
notify_id, rc);
ffa_notification_unbind(dev->vm_id, BIT(notify_id));
}
mutex_unlock(&drv_info->notify_lock);

out_unlock_free:
write_unlock(&drv_info->notify_lock);
if (rc)
kfree(cb_info);

return rc;
}
@@ -1240,9 +1243,9 @@ static void handle_notif_callbacks(u64 bitmap, enum notify_type type)
if (!(bitmap & 1))
continue;

mutex_lock(&drv_info->notify_lock);
read_lock(&drv_info->notify_lock);
cb_info = notifier_hash_node_get(notify_id, type);
mutex_unlock(&drv_info->notify_lock);
read_unlock(&drv_info->notify_lock);

if (cb_info && cb_info->cb)
cb_info->cb(notify_id, cb_info->cb_data);
@@ -1692,7 +1695,7 @@ static void ffa_notifications_setup(void)
goto cleanup;

hash_init(drv_info->notifier_hash);
mutex_init(&drv_info->notify_lock);
rwlock_init(&drv_info->notify_lock);

drv_info->notif_enabled = true;
return;
@@ -860,7 +860,9 @@ int amdgpu_mes_map_legacy_queue(struct amdgpu_device *adev,
queue_input.mqd_addr = amdgpu_bo_gpu_offset(ring->mqd_obj);
queue_input.wptr_addr = ring->wptr_gpu_addr;

amdgpu_mes_lock(&adev->mes);
r = adev->mes.funcs->map_legacy_queue(&adev->mes, &queue_input);
amdgpu_mes_unlock(&adev->mes);
if (r)
DRM_ERROR("failed to map legacy queue\n");

@@ -883,7 +885,9 @@ int amdgpu_mes_unmap_legacy_queue(struct amdgpu_device *adev,
queue_input.trail_fence_addr = gpu_addr;
queue_input.trail_fence_data = seq;

amdgpu_mes_lock(&adev->mes);
r = adev->mes.funcs->unmap_legacy_queue(&adev->mes, &queue_input);
amdgpu_mes_unlock(&adev->mes);
if (r)
DRM_ERROR("failed to unmap legacy queue\n");

@@ -910,7 +914,9 @@ int amdgpu_mes_reset_legacy_queue(struct amdgpu_device *adev,
queue_input.vmid = vmid;
queue_input.use_mmio = use_mmio;

amdgpu_mes_lock(&adev->mes);
r = adev->mes.funcs->reset_legacy_queue(&adev->mes, &queue_input);
amdgpu_mes_unlock(&adev->mes);
if (r)
DRM_ERROR("failed to reset legacy queue\n");

@@ -931,7 +937,9 @@ uint32_t amdgpu_mes_rreg(struct amdgpu_device *adev, uint32_t reg)
goto error;
}

amdgpu_mes_lock(&adev->mes);
r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
amdgpu_mes_unlock(&adev->mes);
if (r)
DRM_ERROR("failed to read reg (0x%x)\n", reg);
else
@@ -957,7 +965,9 @@ int amdgpu_mes_wreg(struct amdgpu_device *adev,
goto error;
}

amdgpu_mes_lock(&adev->mes);
r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
amdgpu_mes_unlock(&adev->mes);
if (r)
DRM_ERROR("failed to write reg (0x%x)\n", reg);

@@ -984,7 +994,9 @@ int amdgpu_mes_reg_write_reg_wait(struct amdgpu_device *adev,
goto error;
}

amdgpu_mes_lock(&adev->mes);
r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
amdgpu_mes_unlock(&adev->mes);
if (r)
DRM_ERROR("failed to reg_write_reg_wait\n");

@@ -1009,7 +1021,9 @@ int amdgpu_mes_reg_wait(struct amdgpu_device *adev, uint32_t reg,
goto error;
}

amdgpu_mes_lock(&adev->mes);
r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
amdgpu_mes_unlock(&adev->mes);
if (r)
DRM_ERROR("failed to reg_write_reg_wait\n");
@@ -3430,7 +3430,10 @@ int psp_init_sos_microcode(struct psp_context *psp, const char *chip_name)
uint8_t *ucode_array_start_addr;
int err = 0;

err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, "amdgpu/%s_sos.bin", chip_name);
if (amdgpu_is_kicker_fw(adev))
err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, "amdgpu/%s_sos_kicker.bin", chip_name);
else
err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, "amdgpu/%s_sos.bin", chip_name);
if (err)
goto out;

@@ -3672,7 +3675,10 @@ int psp_init_ta_microcode(struct psp_context *psp, const char *chip_name)
struct amdgpu_device *adev = psp->adev;
int err;

err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, "amdgpu/%s_ta.bin", chip_name);
if (amdgpu_is_kicker_fw(adev))
err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, "amdgpu/%s_ta_kicker.bin", chip_name);
else
err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, "amdgpu/%s_ta.bin", chip_name);
if (err)
return err;

@@ -84,6 +84,7 @@ MODULE_FIRMWARE("amdgpu/gc_11_0_0_pfp.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_me.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_mec.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_kicker.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_1.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_toc.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_1_pfp.bin");
@@ -734,6 +735,9 @@ static int gfx_v11_0_init_microcode(struct amdgpu_device *adev)
adev->pdev->revision == 0xCE)
err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
"amdgpu/gc_11_0_0_rlc_1.bin");
else if (amdgpu_is_kicker_fw(adev))
err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
"amdgpu/%s_rlc_kicker.bin", ucode_prefix);
else
err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw,
"amdgpu/%s_rlc.bin", ucode_prefix);

@@ -32,6 +32,7 @@
#include "gc/gc_11_0_0_sh_mask.h"

MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu_kicker.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_1_imu.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_2_imu.bin");
MODULE_FIRMWARE("amdgpu/gc_11_0_3_imu.bin");
@@ -50,7 +51,10 @@ static int imu_v11_0_init_microcode(struct amdgpu_device *adev)
DRM_DEBUG("\n");

amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix));
err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, "amdgpu/%s_imu.bin", ucode_prefix);
if (amdgpu_is_kicker_fw(adev))
err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, "amdgpu/%s_imu_kicker.bin", ucode_prefix);
else
err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, "amdgpu/%s_imu.bin", ucode_prefix);
if (err)
goto out;

@@ -42,7 +42,9 @@ MODULE_FIRMWARE("amdgpu/psp_13_0_5_ta.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_8_toc.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_8_ta.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos_kicker.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta_kicker.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_7_sos.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_7_ta.bin");
MODULE_FIRMWARE("amdgpu/psp_13_0_10_sos.bin");
1624
drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c
Normal file
File diff suppressed because it is too large
@@ -1019,8 +1019,22 @@ void dcn35_calc_blocks_to_gate(struct dc *dc, struct dc_state *context,
if (pipe_ctx->plane_res.dpp || pipe_ctx->stream_res.opp)
update_state->pg_pipe_res_update[PG_MPCC][pipe_ctx->plane_res.mpcc_inst] = false;

if (pipe_ctx->stream_res.dsc)
if (pipe_ctx->stream_res.dsc) {
update_state->pg_pipe_res_update[PG_DSC][pipe_ctx->stream_res.dsc->inst] = false;
if (dc->caps.sequential_ono) {
update_state->pg_pipe_res_update[PG_HUBP][pipe_ctx->stream_res.dsc->inst] = false;
update_state->pg_pipe_res_update[PG_DPP][pipe_ctx->stream_res.dsc->inst] = false;

/* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */
if (!pipe_ctx->top_pipe && pipe_ctx->plane_res.hubp &&
pipe_ctx->plane_res.hubp->inst != pipe_ctx->stream_res.dsc->inst) {
for (j = 0; j < dc->res_pool->pipe_count; ++j) {
update_state->pg_pipe_res_update[PG_HUBP][j] = false;
update_state->pg_pipe_res_update[PG_DPP][j] = false;
}
}
}
}

if (pipe_ctx->stream_res.opp)
update_state->pg_pipe_res_update[PG_OPP][pipe_ctx->stream_res.opp->inst] = false;
@@ -1165,6 +1179,25 @@ void dcn35_calc_blocks_to_ungate(struct dc *dc, struct dc_state *context,
update_state->pg_pipe_res_update[PG_HDMISTREAM][0] = true;

if (dc->caps.sequential_ono) {
for (i = 0; i < dc->res_pool->pipe_count; i++) {
struct pipe_ctx *new_pipe = &context->res_ctx.pipe_ctx[i];

if (new_pipe->stream_res.dsc && !new_pipe->top_pipe &&
update_state->pg_pipe_res_update[PG_DSC][new_pipe->stream_res.dsc->inst]) {
update_state->pg_pipe_res_update[PG_HUBP][new_pipe->stream_res.dsc->inst] = true;
update_state->pg_pipe_res_update[PG_DPP][new_pipe->stream_res.dsc->inst] = true;

/* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */
if (new_pipe->plane_res.hubp &&
new_pipe->plane_res.hubp->inst != new_pipe->stream_res.dsc->inst) {
for (j = 0; j < dc->res_pool->pipe_count; ++j) {
update_state->pg_pipe_res_update[PG_HUBP][j] = true;
update_state->pg_pipe_res_update[PG_DPP][j] = true;
}
}
}
}

for (i = dc->res_pool->pipe_count - 1; i >= 0; i--) {
if (update_state->pg_pipe_res_update[PG_HUBP][i] &&
update_state->pg_pipe_res_update[PG_DPP][i]) {

@@ -58,6 +58,7 @@

MODULE_FIRMWARE("amdgpu/aldebaran_smc.bin");
MODULE_FIRMWARE("amdgpu/smu_13_0_0.bin");
MODULE_FIRMWARE("amdgpu/smu_13_0_0_kicker.bin");
MODULE_FIRMWARE("amdgpu/smu_13_0_7.bin");
MODULE_FIRMWARE("amdgpu/smu_13_0_10.bin");

@@ -92,7 +93,7 @@ const int pmfw_decoded_link_width[7] = {0, 1, 2, 4, 8, 12, 16};
int smu_v13_0_init_microcode(struct smu_context *smu)
{
struct amdgpu_device *adev = smu->adev;
char ucode_prefix[15];
char ucode_prefix[30];
int err = 0;
const struct smc_firmware_header_v1_0 *hdr;
const struct common_firmware_header *header;
@@ -103,7 +104,10 @@ int smu_v13_0_init_microcode(struct smu_context *smu)
return 0;

amdgpu_ucode_ip_version_decode(adev, MP1_HWIP, ucode_prefix, sizeof(ucode_prefix));
err = amdgpu_ucode_request(adev, &adev->pm.fw, "amdgpu/%s.bin", ucode_prefix);
if (amdgpu_is_kicker_fw(adev))
err = amdgpu_ucode_request(adev, &adev->pm.fw, "amdgpu/%s_kicker.bin", ucode_prefix);
else
err = amdgpu_ucode_request(adev, &adev->pm.fw, "amdgpu/%s.bin", ucode_prefix);
if (err)
goto out;
@@ -64,10 +64,11 @@ struct auxiliary_device *devm_drm_dp_hpd_bridge_alloc(struct device *parent, str
adev->id = ret;
adev->name = "dp_hpd_bridge";
adev->dev.parent = parent;
adev->dev.of_node = of_node_get(parent->of_node);
adev->dev.release = drm_aux_hpd_bridge_release;
adev->dev.platform_data = of_node_get(np);

device_set_of_node_from_dev(&adev->dev, parent);

ret = auxiliary_device_init(adev);
if (ret) {
of_node_put(adev->dev.platform_data);

@@ -187,6 +187,7 @@ struct fimd_context {
u32 i80ifcon;
bool i80_if;
bool suspended;
bool dp_clk_enabled;
wait_queue_head_t wait_vsync_queue;
atomic_t wait_vsync_event;
atomic_t win_updated;
@@ -1047,7 +1048,18 @@ static void fimd_dp_clock_enable(struct exynos_drm_clk *clk, bool enable)
struct fimd_context *ctx = container_of(clk, struct fimd_context,
dp_clk);
u32 val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;

if (enable == ctx->dp_clk_enabled)
return;

if (enable)
pm_runtime_resume_and_get(ctx->dev);

ctx->dp_clk_enabled = enable;
writel(val, ctx->regs + DP_MIE_CLKCON);

if (!enable)
pm_runtime_put(ctx->dev);
}

static const struct exynos_drm_crtc_ops fimd_crtc_ops = {
@@ -4300,6 +4300,24 @@ intel_dp_mst_disconnect(struct intel_dp *intel_dp)
static bool
intel_dp_get_sink_irq_esi(struct intel_dp *intel_dp, u8 *esi)
{
struct intel_display *display = to_intel_display(intel_dp);
struct drm_i915_private *i915 = dp_to_i915(intel_dp);

/*
 * Display WA for HSD #13013007775: mtl/arl/lnl
 * Read the sink count and link service IRQ registers in separate
 * transactions to prevent disconnecting the sink on a TBT link
 * inadvertently.
 */
if (IS_DISPLAY_VER(display, 14, 20) && !IS_BATTLEMAGE(i915)) {
if (drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT_ESI, esi, 3) != 3)
return false;

/* DP_SINK_COUNT_ESI + 3 == DP_LINK_SERVICE_IRQ_VECTOR_ESI0 */
return drm_dp_dpcd_readb(&intel_dp->aux, DP_LINK_SERVICE_IRQ_VECTOR_ESI0,
&esi[3]) == 1;
}

return drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT_ESI, esi, 4) == 4;
}

@@ -284,7 +284,7 @@ static void gsc_irq_handler(struct intel_gt *gt, unsigned int intf_id)
if (gt->gsc.intf[intf_id].irq < 0)
return;

ret = generic_handle_irq(gt->gsc.intf[intf_id].irq);
ret = generic_handle_irq_safe(gt->gsc.intf[intf_id].irq);
if (ret)
gt_err_ratelimited(gt, "error handling GSC irq: %d\n", ret);
}

@@ -575,7 +575,6 @@ static int ring_context_alloc(struct intel_context *ce)
/* One ringbuffer to rule them all */
GEM_BUG_ON(!engine->legacy.ring);
ce->ring = engine->legacy.ring;
ce->timeline = intel_timeline_get(engine->legacy.timeline);

GEM_BUG_ON(ce->state);
if (engine->context_size) {
@@ -588,6 +587,8 @@ static int ring_context_alloc(struct intel_context *ce)
ce->state = vma;
}

ce->timeline = intel_timeline_get(engine->legacy.timeline);

return 0;
}
@@ -73,8 +73,8 @@ static int igt_add_request(void *arg)
/* Basic preliminary test to create a request and let it loose! */

request = mock_request(rcs0(i915)->kernel_context, HZ / 10);
if (!request)
return -ENOMEM;
if (IS_ERR(request))
return PTR_ERR(request);

i915_request_add(request);

@@ -91,8 +91,8 @@ static int igt_wait_request(void *arg)
/* Submit a request, then wait upon it */

request = mock_request(rcs0(i915)->kernel_context, T);
if (!request)
return -ENOMEM;
if (IS_ERR(request))
return PTR_ERR(request);

i915_request_get(request);

@@ -160,8 +160,8 @@ static int igt_fence_wait(void *arg)
/* Submit a request, treat it as a fence and wait upon it */

request = mock_request(rcs0(i915)->kernel_context, T);
if (!request)
return -ENOMEM;
if (IS_ERR(request))
return PTR_ERR(request);

if (dma_fence_wait_timeout(&request->fence, false, T) != -ETIME) {
pr_err("fence wait success before submit (expected timeout)!\n");
@@ -219,8 +219,8 @@ static int igt_request_rewind(void *arg)
GEM_BUG_ON(IS_ERR(ce));
request = mock_request(ce, 2 * HZ);
intel_context_put(ce);
if (!request) {
err = -ENOMEM;
if (IS_ERR(request)) {
err = PTR_ERR(request);
goto err_context_0;
}

@@ -237,8 +237,8 @@ static int igt_request_rewind(void *arg)
GEM_BUG_ON(IS_ERR(ce));
vip = mock_request(ce, 0);
intel_context_put(ce);
if (!vip) {
err = -ENOMEM;
if (IS_ERR(vip)) {
err = PTR_ERR(vip);
goto err_context_1;
}

@@ -35,7 +35,7 @@ mock_request(struct intel_context *ce, unsigned long delay)
/* NB the i915->requests slab cache is enlarged to fit mock_request */
request = intel_context_create_request(ce);
if (IS_ERR(request))
return NULL;
return request;

request->mock.delay = delay;
return request;
@@ -85,6 +85,15 @@ void __msm_gem_submit_destroy(struct kref *kref)
container_of(kref, struct msm_gem_submit, ref);
unsigned i;

/*
 * In error paths, we could unref the submit without calling
 * drm_sched_entity_push_job(), so msm_job_free() will never
 * get called. Since drm_sched_job_cleanup() will NULL out
 * s_fence, we can use that to detect this case.
 */
if (submit->base.s_fence)
drm_sched_job_cleanup(&submit->base);

if (submit->fence_id) {
spin_lock(&submit->queue->idr_lock);
idr_remove(&submit->queue->fence_idr, submit->fence_id);
@@ -658,6 +667,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
struct msm_ringbuffer *ring;
struct msm_submit_post_dep *post_deps = NULL;
struct drm_syncobj **syncobjs_to_reset = NULL;
struct sync_file *sync_file = NULL;
int out_fence_fd = -1;
unsigned i;
int ret;
@@ -868,7 +878,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
}

if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
struct sync_file *sync_file = sync_file_create(submit->user_fence);
sync_file = sync_file_create(submit->user_fence);
if (!sync_file) {
ret = -ENOMEM;
} else {
@@ -902,8 +912,11 @@ out:
out_unlock:
mutex_unlock(&queue->lock);
out_post_unlock:
if (ret && (out_fence_fd >= 0))
if (ret && (out_fence_fd >= 0)) {
put_unused_fd(out_fence_fd);
if (sync_file)
fput(sync_file->file);
}

if (!IS_ERR_OR_NULL(submit)) {
msm_gem_submit_put(submit);

@@ -284,7 +284,7 @@ static struct simpledrm_device *simpledrm_device_of_dev(struct drm_device *dev)

static void simpledrm_device_release_clocks(void *res)
{
struct simpledrm_device *sdev = simpledrm_device_of_dev(res);
struct simpledrm_device *sdev = res;
unsigned int i;

for (i = 0; i < sdev->clk_count; ++i) {
@@ -382,7 +382,7 @@ static int simpledrm_device_init_clocks(struct simpledrm_device *sdev)

static void simpledrm_device_release_regulators(void *res)
{
struct simpledrm_device *sdev = simpledrm_device_of_dev(res);
struct simpledrm_device *sdev = res;
unsigned int i;

for (i = 0; i < sdev->regulator_count; ++i) {
@@ -95,6 +95,12 @@ struct v3d_perfmon {
|
||||
u64 values[] __counted_by(ncounters);
|
||||
};
|
||||
|
||||
enum v3d_irq {
|
||||
V3D_CORE_IRQ,
|
||||
V3D_HUB_IRQ,
|
||||
V3D_MAX_IRQS,
|
||||
};
|
||||
|
||||
struct v3d_dev {
|
||||
struct drm_device drm;
|
||||
|
||||
@@ -106,6 +112,8 @@ struct v3d_dev {
|
||||
|
||||
bool single_irq_line;
|
||||
|
||||
int irq[V3D_MAX_IRQS];
|
||||
|
||||
struct v3d_perfmon_info perfmon_info;
|
||||
|
||||
void __iomem *hub_regs;
|
||||
|
||||
@@ -118,6 +118,8 @@ v3d_reset(struct v3d_dev *v3d)
|
||||
if (false)
|
||||
v3d_idle_axi(v3d, 0);
|
||||
|
||||
v3d_irq_disable(v3d);
|
||||
|
||||
v3d_idle_gca(v3d);
|
||||
v3d_reset_v3d(v3d);
|
||||
|
||||
|
||||
@@ -228,7 +228,7 @@ v3d_hub_irq(int irq, void *arg)
 int
 v3d_irq_init(struct v3d_dev *v3d)
 {
-	int irq1, ret, core;
+	int irq, ret, core;
 
 	INIT_WORK(&v3d->overflow_mem_work, v3d_overflow_mem_work);
 
@@ -239,17 +239,24 @@ v3d_irq_init(struct v3d_dev *v3d)
 		V3D_CORE_WRITE(core, V3D_CTL_INT_CLR, V3D_CORE_IRQS(v3d->ver));
 	V3D_WRITE(V3D_HUB_INT_CLR, V3D_HUB_IRQS(v3d->ver));
 
-	irq1 = platform_get_irq_optional(v3d_to_pdev(v3d), 1);
-	if (irq1 == -EPROBE_DEFER)
-		return irq1;
-	if (irq1 > 0) {
-		ret = devm_request_irq(v3d->drm.dev, irq1,
+	irq = platform_get_irq_optional(v3d_to_pdev(v3d), 1);
+	if (irq == -EPROBE_DEFER)
+		return irq;
+	if (irq > 0) {
+		v3d->irq[V3D_CORE_IRQ] = irq;
+
+		ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_CORE_IRQ],
 				       v3d_irq, IRQF_SHARED,
 				       "v3d_core0", v3d);
 		if (ret)
 			goto fail;
-		ret = devm_request_irq(v3d->drm.dev,
-				       platform_get_irq(v3d_to_pdev(v3d), 0),
+
+		irq = platform_get_irq(v3d_to_pdev(v3d), 0);
+		if (irq < 0)
+			return irq;
+		v3d->irq[V3D_HUB_IRQ] = irq;
+
+		ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_HUB_IRQ],
 				       v3d_hub_irq, IRQF_SHARED,
 				       "v3d_hub", v3d);
 		if (ret)
@@ -257,8 +264,12 @@ v3d_irq_init(struct v3d_dev *v3d)
 	} else {
 		v3d->single_irq_line = true;
 
-		ret = devm_request_irq(v3d->drm.dev,
-				       platform_get_irq(v3d_to_pdev(v3d), 0),
+		irq = platform_get_irq(v3d_to_pdev(v3d), 0);
+		if (irq < 0)
+			return irq;
+		v3d->irq[V3D_CORE_IRQ] = irq;
+
+		ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_CORE_IRQ],
 				       v3d_irq, IRQF_SHARED,
 				       "v3d", v3d);
 		if (ret)
@@ -299,6 +310,12 @@ v3d_irq_disable(struct v3d_dev *v3d)
 		V3D_CORE_WRITE(core, V3D_CTL_INT_MSK_SET, ~0);
 	V3D_WRITE(V3D_HUB_INT_MSK_SET, ~0);
 
+	/* Finish any interrupt handler still in flight. */
+	for (int i = 0; i < V3D_MAX_IRQS; i++) {
+		if (v3d->irq[i])
+			synchronize_irq(v3d->irq[i]);
+	}
+
 	/* Clear any pending interrupts we might have left. */
 	for (core = 0; core < v3d->cores; core++)
 		V3D_CORE_WRITE(core, V3D_CTL_INT_CLR, V3D_CORE_IRQS(v3d->ver));
@@ -1,7 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0-only
 config DRM_XE
 	tristate "Intel Xe Graphics"
-	depends on DRM && PCI && MMU && (m || (y && KUNIT=y))
+	depends on DRM && PCI && MMU
+	depends on KUNIT || !KUNIT
 	select INTERVAL_TREE
 	# we need shmfs for the swappable backing store, and in particular
 	# the shmem_readpage() which depends upon tmpfs
@@ -52,6 +52,7 @@ struct guc_ct_buffer_desc {
 #define GUC_CTB_STATUS_OVERFLOW			(1 << 0)
 #define GUC_CTB_STATUS_UNDERFLOW		(1 << 1)
 #define GUC_CTB_STATUS_MISMATCH			(1 << 2)
+#define GUC_CTB_STATUS_DISABLED			(1 << 3)
 	u32 reserved[13];
 } __packed;
 static_assert(sizeof(struct guc_ct_buffer_desc) == 64);
@@ -29,7 +29,7 @@ static inline int i915_gem_stolen_insert_node_in_range(struct xe_device *xe,
 
 	bo = xe_bo_create_locked_range(xe, xe_device_get_root_tile(xe),
 				       NULL, size, start, end,
-				       ttm_bo_type_kernel, flags);
+				       ttm_bo_type_kernel, flags, 0);
 	if (IS_ERR(bo)) {
 		err = PTR_ERR(bo);
 		bo = NULL;
@@ -17,10 +17,7 @@ u32 intel_dsb_buffer_ggtt_offset(struct intel_dsb_buffer *dsb_buf)
 
 void intel_dsb_buffer_write(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val)
 {
-	struct xe_device *xe = dsb_buf->vma->bo->tile->xe;
-
 	iosys_map_wr(&dsb_buf->vma->bo->vmap, idx * 4, u32, val);
-	xe_device_l2_flush(xe);
 }
 
 u32 intel_dsb_buffer_read(struct intel_dsb_buffer *dsb_buf, u32 idx)
@@ -30,12 +27,9 @@ u32 intel_dsb_buffer_read(struct intel_dsb_buffer *dsb_buf, u32 idx)
 
 void intel_dsb_buffer_memset(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val, size_t size)
 {
-	struct xe_device *xe = dsb_buf->vma->bo->tile->xe;
-
 	WARN_ON(idx > (dsb_buf->buf_size - size) / sizeof(*dsb_buf->cmd_buf));
 
 	iosys_map_memset(&dsb_buf->vma->bo->vmap, idx * 4, val, size);
-	xe_device_l2_flush(xe);
 }
 
 bool intel_dsb_buffer_create(struct intel_crtc *crtc, struct intel_dsb_buffer *dsb_buf, size_t size)
@@ -48,11 +42,12 @@ bool intel_dsb_buffer_create(struct intel_crtc *crtc, struct intel_dsb_buffer *d
 	if (!vma)
 		return false;
 
+	/* Set scanout flag for WC mapping */
 	obj = xe_bo_create_pin_map(xe, xe_device_get_root_tile(xe),
 				   NULL, PAGE_ALIGN(size),
 				   ttm_bo_type_kernel,
 				   XE_BO_FLAG_VRAM_IF_DGFX(xe_device_get_root_tile(xe)) |
-				   XE_BO_FLAG_GGTT);
+				   XE_BO_FLAG_SCANOUT | XE_BO_FLAG_GGTT);
 	if (IS_ERR(obj)) {
 		kfree(vma);
 		return false;
@@ -73,5 +68,12 @@ void intel_dsb_buffer_cleanup(struct intel_dsb_buffer *dsb_buf)
 
 void intel_dsb_buffer_flush_map(struct intel_dsb_buffer *dsb_buf)
 {
-	/* TODO: add xe specific flush_map() for dsb buffer object. */
+	struct xe_device *xe = dsb_buf->vma->bo->tile->xe;
+
+	/*
+	 * The memory barrier here is to ensure coherency of DSB vs MMIO,
+	 * both for weak ordering archs and discrete cards.
+	 */
+	xe_device_wmb(xe);
+	xe_device_l2_flush(xe);
 }
@@ -153,7 +153,10 @@ static int __xe_pin_fb_vma_dpt(const struct intel_framebuffer *fb,
 	}
 
 	vma->dpt = dpt;
-	vma->node = dpt->ggtt_node;
+	vma->node = dpt->ggtt_node[tile0->id];
+
+	/* Ensure DPT writes are flushed */
+	xe_device_l2_flush(xe);
 	return 0;
 }
 
@@ -203,8 +206,8 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb,
 	if (xe_bo_is_vram(bo) && ggtt->flags & XE_GGTT_FLAGS_64K)
 		align = max_t(u32, align, SZ_64K);
 
-	if (bo->ggtt_node && view->type == I915_GTT_VIEW_NORMAL) {
-		vma->node = bo->ggtt_node;
+	if (bo->ggtt_node[ggtt->tile->id] && view->type == I915_GTT_VIEW_NORMAL) {
+		vma->node = bo->ggtt_node[ggtt->tile->id];
 	} else if (view->type == I915_GTT_VIEW_NORMAL) {
 		u32 x, size = bo->ttm.base.size;
 
@@ -318,8 +321,6 @@ static struct i915_vma *__xe_pin_fb_vma(const struct intel_framebuffer *fb,
 	if (ret)
 		goto err_unpin;
 
-	/* Ensure DPT writes are flushed */
-	xe_device_l2_flush(xe);
 	return vma;
 
 err_unpin:
@@ -333,10 +334,12 @@ err:
 
 static void __xe_unpin_fb_vma(struct i915_vma *vma)
 {
+	u8 tile_id = vma->node->ggtt->tile->id;
+
 	if (vma->dpt)
 		xe_bo_unpin_map_no_vm(vma->dpt);
-	else if (!xe_ggtt_node_allocated(vma->bo->ggtt_node) ||
-		 vma->bo->ggtt_node->base.start != vma->node->base.start)
+	else if (!xe_ggtt_node_allocated(vma->bo->ggtt_node[tile_id]) ||
+		 vma->bo->ggtt_node[tile_id]->base.start != vma->node->base.start)
 		xe_ggtt_node_remove(vma->node, false);
 
 	ttm_bo_reserve(&vma->bo->ttm, false, false, NULL);
@@ -128,7 +128,7 @@ struct xe_reg_mcr {
  * options.
  */
 #define XE_REG_MCR(r_, ...)	((const struct xe_reg_mcr){		\
-		 .__reg = XE_REG_INITIALIZER(r_, ##__VA_ARGS__, .mcr = 1) \
+		.__reg = XE_REG_INITIALIZER(r_, ##__VA_ARGS__, .mcr = 1) \
 		})
 
 static inline bool xe_reg_is_valid(struct xe_reg r)
@@ -1130,6 +1130,8 @@ static void xe_ttm_bo_destroy(struct ttm_buffer_object *ttm_bo)
 {
 	struct xe_bo *bo = ttm_to_xe_bo(ttm_bo);
 	struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev);
+	struct xe_tile *tile;
+	u8 id;
 
 	if (bo->ttm.base.import_attach)
 		drm_prime_gem_destroy(&bo->ttm.base, NULL);
@@ -1137,8 +1139,9 @@ static void xe_ttm_bo_destroy(struct ttm_buffer_object *ttm_bo)
 
 	xe_assert(xe, list_empty(&ttm_bo->base.gpuva.list));
 
-	if (bo->ggtt_node && bo->ggtt_node->base.size)
-		xe_ggtt_remove_bo(bo->tile->mem.ggtt, bo);
+	for_each_tile(tile, xe, id)
+		if (bo->ggtt_node[id] && bo->ggtt_node[id]->base.size)
+			xe_ggtt_remove_bo(tile->mem.ggtt, bo);
 
 #ifdef CONFIG_PROC_FS
 	if (bo->client)
@@ -1308,6 +1311,10 @@ struct xe_bo *___xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo,
 		return ERR_PTR(-EINVAL);
 	}
 
+	/* XE_BO_FLAG_GGTTx requires XE_BO_FLAG_GGTT also be set */
+	if ((flags & XE_BO_FLAG_GGTT_ALL) && !(flags & XE_BO_FLAG_GGTT))
+		return ERR_PTR(-EINVAL);
+
 	if (flags & (XE_BO_FLAG_VRAM_MASK | XE_BO_FLAG_STOLEN) &&
 	    !(flags & XE_BO_FLAG_IGNORE_MIN_PAGE_SIZE) &&
 	    ((xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K) ||
@@ -1454,7 +1461,8 @@ static struct xe_bo *
 __xe_bo_create_locked(struct xe_device *xe,
 		      struct xe_tile *tile, struct xe_vm *vm,
 		      size_t size, u64 start, u64 end,
-		      u16 cpu_caching, enum ttm_bo_type type, u32 flags)
+		      u16 cpu_caching, enum ttm_bo_type type, u32 flags,
+		      u64 alignment)
 {
 	struct xe_bo *bo = NULL;
 	int err;
@@ -1483,6 +1491,8 @@ __xe_bo_create_locked(struct xe_device *xe,
 	if (IS_ERR(bo))
 		return bo;
 
+	bo->min_align = alignment;
+
 	/*
 	 * Note that instead of taking a reference no the drm_gpuvm_resv_bo(),
 	 * to ensure the shared resv doesn't disappear under the bo, the bo
@@ -1495,19 +1505,29 @@ __xe_bo_create_locked(struct xe_device *xe,
 	bo->vm = vm;
 
 	if (bo->flags & XE_BO_FLAG_GGTT) {
-		if (!tile && flags & XE_BO_FLAG_STOLEN)
-			tile = xe_device_get_root_tile(xe);
+		struct xe_tile *t;
+		u8 id;
 
-		xe_assert(xe, tile);
+		if (!(bo->flags & XE_BO_FLAG_GGTT_ALL)) {
+			if (!tile && flags & XE_BO_FLAG_STOLEN)
+				tile = xe_device_get_root_tile(xe);
 
-		if (flags & XE_BO_FLAG_FIXED_PLACEMENT) {
-			err = xe_ggtt_insert_bo_at(tile->mem.ggtt, bo,
-						   start + bo->size, U64_MAX);
-		} else {
-			err = xe_ggtt_insert_bo(tile->mem.ggtt, bo);
+			xe_assert(xe, tile);
+		}
+
+		for_each_tile(t, xe, id) {
+			if (t != tile && !(bo->flags & XE_BO_FLAG_GGTTx(t)))
+				continue;
+
+			if (flags & XE_BO_FLAG_FIXED_PLACEMENT) {
+				err = xe_ggtt_insert_bo_at(t->mem.ggtt, bo,
+							   start + bo->size, U64_MAX);
+			} else {
+				err = xe_ggtt_insert_bo(t->mem.ggtt, bo);
+			}
+			if (err)
+				goto err_unlock_put_bo;
 		}
-		if (err)
-			goto err_unlock_put_bo;
 	}
 
 	return bo;
@@ -1523,16 +1543,18 @@ struct xe_bo *
 xe_bo_create_locked_range(struct xe_device *xe,
 			  struct xe_tile *tile, struct xe_vm *vm,
 			  size_t size, u64 start, u64 end,
-			  enum ttm_bo_type type, u32 flags)
+			  enum ttm_bo_type type, u32 flags, u64 alignment)
 {
-	return __xe_bo_create_locked(xe, tile, vm, size, start, end, 0, type, flags);
+	return __xe_bo_create_locked(xe, tile, vm, size, start, end, 0, type,
+				     flags, alignment);
 }
 
 struct xe_bo *xe_bo_create_locked(struct xe_device *xe, struct xe_tile *tile,
 				  struct xe_vm *vm, size_t size,
 				  enum ttm_bo_type type, u32 flags)
 {
-	return __xe_bo_create_locked(xe, tile, vm, size, 0, ~0ULL, 0, type, flags);
+	return __xe_bo_create_locked(xe, tile, vm, size, 0, ~0ULL, 0, type,
+				     flags, 0);
 }
 
 struct xe_bo *xe_bo_create_user(struct xe_device *xe, struct xe_tile *tile,
@@ -1542,7 +1564,7 @@ struct xe_bo *xe_bo_create_user(struct xe_device *xe, struct xe_tile *tile,
 {
 	struct xe_bo *bo = __xe_bo_create_locked(xe, tile, vm, size, 0, ~0ULL,
 						 cpu_caching, ttm_bo_type_device,
-						 flags | XE_BO_FLAG_USER);
+						 flags | XE_BO_FLAG_USER, 0);
 	if (!IS_ERR(bo))
 		xe_bo_unlock_vm_held(bo);
 
@@ -1565,6 +1587,17 @@ struct xe_bo *xe_bo_create_pin_map_at(struct xe_device *xe, struct xe_tile *tile
 				      struct xe_vm *vm,
 				      size_t size, u64 offset,
 				      enum ttm_bo_type type, u32 flags)
+{
+	return xe_bo_create_pin_map_at_aligned(xe, tile, vm, size, offset,
+					       type, flags, 0);
+}
+
+struct xe_bo *xe_bo_create_pin_map_at_aligned(struct xe_device *xe,
+					      struct xe_tile *tile,
+					      struct xe_vm *vm,
+					      size_t size, u64 offset,
+					      enum ttm_bo_type type, u32 flags,
+					      u64 alignment)
 {
 	struct xe_bo *bo;
 	int err;
@@ -1576,7 +1609,8 @@ struct xe_bo *xe_bo_create_pin_map_at(struct xe_device *xe, struct xe_tile *tile
 	flags |= XE_BO_FLAG_GGTT;
 
 	bo = xe_bo_create_locked_range(xe, tile, vm, size, start, end, type,
-				       flags | XE_BO_FLAG_NEEDS_CPU_ACCESS);
+				       flags | XE_BO_FLAG_NEEDS_CPU_ACCESS,
+				       alignment);
 	if (IS_ERR(bo))
 		return bo;
 
@@ -2355,14 +2389,18 @@ void xe_bo_put_commit(struct llist_head *deferred)
 
 void xe_bo_put(struct xe_bo *bo)
 {
+	struct xe_tile *tile;
+	u8 id;
+
 	might_sleep();
 	if (bo) {
 #ifdef CONFIG_PROC_FS
 		if (bo->client)
 			might_lock(&bo->client->bos_lock);
 #endif
-		if (bo->ggtt_node && bo->ggtt_node->ggtt)
-			might_lock(&bo->ggtt_node->ggtt->lock);
+		for_each_tile(tile, xe_bo_device(bo), id)
+			if (bo->ggtt_node[id] && bo->ggtt_node[id]->ggtt)
+				might_lock(&bo->ggtt_node[id]->ggtt->lock);
 		drm_gem_object_put(&bo->ttm.base);
 	}
 }
@@ -39,10 +39,22 @@
 #define XE_BO_FLAG_NEEDS_64K		BIT(15)
 #define XE_BO_FLAG_NEEDS_2M		BIT(16)
 #define XE_BO_FLAG_GGTT_INVALIDATE	BIT(17)
+#define XE_BO_FLAG_GGTT0		BIT(18)
+#define XE_BO_FLAG_GGTT1		BIT(19)
+#define XE_BO_FLAG_GGTT2		BIT(20)
+#define XE_BO_FLAG_GGTT3		BIT(21)
+#define XE_BO_FLAG_GGTT_ALL		(XE_BO_FLAG_GGTT0 | \
+					 XE_BO_FLAG_GGTT1 | \
+					 XE_BO_FLAG_GGTT2 | \
+					 XE_BO_FLAG_GGTT3)
+
 /* this one is trigger internally only */
 #define XE_BO_FLAG_INTERNAL_TEST	BIT(30)
 #define XE_BO_FLAG_INTERNAL_64K		BIT(31)
 
+#define XE_BO_FLAG_GGTTx(tile) \
+	(XE_BO_FLAG_GGTT0 << (tile)->id)
+
 #define XE_PTE_SHIFT			12
 #define XE_PAGE_SIZE			(1 << XE_PTE_SHIFT)
 #define XE_PTE_MASK			(XE_PAGE_SIZE - 1)
@@ -77,7 +89,7 @@ struct xe_bo *
 xe_bo_create_locked_range(struct xe_device *xe,
 			  struct xe_tile *tile, struct xe_vm *vm,
 			  size_t size, u64 start, u64 end,
-			  enum ttm_bo_type type, u32 flags);
+			  enum ttm_bo_type type, u32 flags, u64 alignment);
 struct xe_bo *xe_bo_create_locked(struct xe_device *xe, struct xe_tile *tile,
 				  struct xe_vm *vm, size_t size,
 				  enum ttm_bo_type type, u32 flags);
@@ -94,6 +106,12 @@ struct xe_bo *xe_bo_create_pin_map(struct xe_device *xe, struct xe_tile *tile,
 struct xe_bo *xe_bo_create_pin_map_at(struct xe_device *xe, struct xe_tile *tile,
 				      struct xe_vm *vm, size_t size, u64 offset,
 				      enum ttm_bo_type type, u32 flags);
+struct xe_bo *xe_bo_create_pin_map_at_aligned(struct xe_device *xe,
+					      struct xe_tile *tile,
+					      struct xe_vm *vm,
+					      size_t size, u64 offset,
+					      enum ttm_bo_type type, u32 flags,
+					      u64 alignment);
 struct xe_bo *xe_bo_create_from_data(struct xe_device *xe, struct xe_tile *tile,
 				     const void *data, size_t size,
 				     enum ttm_bo_type type, u32 flags);
@@ -188,14 +206,24 @@ xe_bo_main_addr(struct xe_bo *bo, size_t page_size)
 }
 
 static inline u32
-xe_bo_ggtt_addr(struct xe_bo *bo)
+__xe_bo_ggtt_addr(struct xe_bo *bo, u8 tile_id)
 {
-	if (XE_WARN_ON(!bo->ggtt_node))
+	struct xe_ggtt_node *ggtt_node = bo->ggtt_node[tile_id];
+
+	if (XE_WARN_ON(!ggtt_node))
 		return 0;
 
-	XE_WARN_ON(bo->ggtt_node->base.size > bo->size);
-	XE_WARN_ON(bo->ggtt_node->base.start + bo->ggtt_node->base.size > (1ull << 32));
-	return bo->ggtt_node->base.start;
+	XE_WARN_ON(ggtt_node->base.size > bo->size);
+	XE_WARN_ON(ggtt_node->base.start + ggtt_node->base.size > (1ull << 32));
+	return ggtt_node->base.start;
+}
+
+static inline u32
+xe_bo_ggtt_addr(struct xe_bo *bo)
+{
+	xe_assert(xe_bo_device(bo), bo->tile);
+
+	return __xe_bo_ggtt_addr(bo, bo->tile->id);
 }
 
 int xe_bo_vmap(struct xe_bo *bo);
@@ -152,11 +152,17 @@ int xe_bo_restore_kernel(struct xe_device *xe)
 		}
 
 		if (bo->flags & XE_BO_FLAG_GGTT) {
-			struct xe_tile *tile = bo->tile;
+			struct xe_tile *tile;
+			u8 id;
 
-			mutex_lock(&tile->mem.ggtt->lock);
-			xe_ggtt_map_bo(tile->mem.ggtt, bo);
-			mutex_unlock(&tile->mem.ggtt->lock);
+			for_each_tile(tile, xe, id) {
+				if (tile != bo->tile && !(bo->flags & XE_BO_FLAG_GGTTx(tile)))
+					continue;
+
+				mutex_lock(&tile->mem.ggtt->lock);
+				xe_ggtt_map_bo(tile->mem.ggtt, bo);
+				mutex_unlock(&tile->mem.ggtt->lock);
+			}
 		}
 
 		/*
@@ -13,6 +13,7 @@
 #include <drm/ttm/ttm_execbuf_util.h>
 #include <drm/ttm/ttm_placement.h>
 
+#include "xe_device_types.h"
 #include "xe_ggtt_types.h"
 
 struct xe_device;
@@ -39,8 +40,8 @@ struct xe_bo {
 	struct ttm_place placements[XE_BO_MAX_PLACEMENTS];
 	/** @placement: current placement for this BO */
 	struct ttm_placement placement;
-	/** @ggtt_node: GGTT node if this BO is mapped in the GGTT */
-	struct xe_ggtt_node *ggtt_node;
+	/** @ggtt_node: Array of GGTT nodes if this BO is mapped in the GGTTs */
+	struct xe_ggtt_node *ggtt_node[XE_MAX_TILES_PER_DEVICE];
 	/** @vmap: iosys map of this buffer */
 	struct iosys_map vmap;
 	/** @ttm_kmap: TTM bo kmap object for internal use only. Keep off. */
@@ -76,6 +77,11 @@ struct xe_bo {
 
 	/** @vram_userfault_link: Link into @mem_access.vram_userfault.list */
 	struct list_head vram_userfault_link;
+
+	/** @min_align: minimum alignment needed for this BO if different
+	 * from default
+	 */
+	u64 min_align;
 };
 
 #define intel_bo_to_drm_bo(bo) (&(bo)->ttm.base)
@@ -37,6 +37,7 @@
 #include "xe_gt_printk.h"
 #include "xe_gt_sriov_vf.h"
 #include "xe_guc.h"
+#include "xe_guc_pc.h"
 #include "xe_hw_engine_group.h"
 #include "xe_hwmon.h"
 #include "xe_irq.h"
@@ -871,31 +872,37 @@ void xe_device_td_flush(struct xe_device *xe)
 	if (!IS_DGFX(xe) || GRAPHICS_VER(xe) < 20)
 		return;
 
-	if (XE_WA(xe_root_mmio_gt(xe), 16023588340)) {
+	gt = xe_root_mmio_gt(xe);
+	if (XE_WA(gt, 16023588340)) {
 		/* A transient flush is not sufficient: flush the L2 */
 		xe_device_l2_flush(xe);
-		return;
-	}
+	} else {
+		xe_guc_pc_apply_flush_freq_limit(&gt->uc.guc.pc);
 
-	/* Execute TDF flush on all graphics GTs */
-	for_each_gt(gt, xe, id) {
-		if (xe_gt_is_media_type(gt))
-			continue;
+		/* Execute TDF flush on all graphics GTs */
+		for_each_gt(gt, xe, id) {
+			if (xe_gt_is_media_type(gt))
+				continue;
 
-		if (xe_force_wake_get(gt_to_fw(gt), XE_FW_GT))
-			return;
+			if (xe_force_wake_get(gt_to_fw(gt), XE_FW_GT))
+				return;
 
-		xe_mmio_write32(gt, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST);
-		/*
-		 * FIXME: We can likely do better here with our choice of
-		 * timeout. Currently we just assume the worst case, i.e. 150us,
-		 * which is believed to be sufficient to cover the worst case
-		 * scenario on current platforms if all cache entries are
-		 * transient and need to be flushed..
-		 */
-		if (xe_mmio_wait32(gt, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST, 0,
-				   150, NULL, false))
-			xe_gt_err_once(gt, "TD flush timeout\n");
+			xe_mmio_write32(gt, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST);
+			/*
+			 * FIXME: We can likely do better here with our choice of
+			 * timeout. Currently we just assume the worst case, i.e. 150us,
+			 * which is believed to be sufficient to cover the worst case
+			 * scenario on current platforms if all cache entries are
+			 * transient and need to be flushed..
+			 */
+			if (xe_mmio_wait32(gt, XE2_TDF_CTRL, TRANSIENT_FLUSH_REQUEST, 0,
+					   150, NULL, false))
+				xe_gt_err_once(gt, "TD flush timeout\n");
 
-		xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
+			xe_force_wake_put(gt_to_fw(gt), XE_FW_GT);
+		}
+
+		xe_guc_pc_remove_flush_freq_limit(&xe_root_mmio_gt(xe)->uc.guc.pc);
 	}
 }
Some files were not shown because too many files have changed in this diff.