Merge 6.12.19 into android16-6.12
GKI (arm64) relevant 48 out of 271 changes, affecting 92 files +576/-223

5b414ed3bb Revert "of: reserved-memory: Fix using wrong number of cells to get property 'alignment'" [1 file, +2/-2]
48a934fc47 Revert "mm/page_alloc.c: don't show protection in zone's ->lowmem_reserve[] for empty zone" [1 file, +1/-2]
88310caff6 Bluetooth: Add check for mgmt_alloc_skb() in mgmt_remote_name() [1 file, +2/-0]
7841180342 Bluetooth: Add check for mgmt_alloc_skb() in mgmt_device_connected() [1 file, +3/-0]
2d448dbd47 userfaultfd: do not block on locking a large folio with raised refcount [1 file, +16/-1]
f57e89c1cb block: fix conversion of GPT partition name to 7-bit [1 file, +1/-1]
9426f38372 mm/page_alloc: fix uninitialized variable [1 file, +1/-0]
79636d2981 mm: abort vma_modify() on merge out of memory failure [1 file, +8/-4]
605f53f13b mm: don't skip arch_sync_kernel_mappings() in error paths [2 files, +6/-4]
9ed33c7bac mm: fix finish_fault() handling for large folios [1 file, +10/-5]
576a2f4c43 hwpoison, memory_hotplug: lock folio before unmap hwpoisoned folio [1 file, +4/-1]
2e66d69941 mm: memory-hotplug: check folio ref count first in do_migrate_range [1 file, +7/-13]
3c63fb6ef7 nvme-pci: use sgls for all user requests if possible [2 files, +13/-4]
9dedafd86e nvme-ioctl: fix leaked requests on mapping error [1 file, +8/-4]
084819b0d8 net: gso: fix ownership in __udp_gso_segment [1 file, +6/-2]
1688acf477 perf/core: Fix pmus_lock vs. pmus_srcu ordering [1 file, +2/-2]
a899adf706 HID: hid-steam: Fix use-after-free when detaching device [1 file, +1/-1]
8aa8a40c76 ppp: Fix KMSAN uninit-value warning with bpf [1 file, +19/-9]
b71cd95764 ethtool: linkstate: migrate linkstate functions to support multi-PHY setups [1 file, +15/-8]
9c1d09cdbc net: ethtool: plumb PHY stats to PHY drivers [7 files, +167/-2]
639c703529 net: ethtool: netlink: Allow NULL nlattrs when getting a phy_device [9 files, +19/-18]
30e8aee778 vlan: enforce underlying device type [1 file, +2/-1]
5d609f0d2f exfat: fix just enough dentries but allocate a new cluster to dir [1 file, +1/-1]
c897b8ec46 exfat: fix soft lockup in exfat_clear_bitmap [3 files, +16/-7]
611015122d exfat: short-circuit zero-byte writes in exfat_file_write_iter [1 file, +1/-1]
2b484789e9 net-timestamp: support TCP GSO case for a few missing flags [1 file, +7/-4]
b08e290324 ublk: set_params: properly check if parameters can be applied [1 file, +5/-2]
b5741e4b9e sched/fair: Fix potential memory corruption in child_cfs_rq_on_list [1 file, +4/-2]
39c2b2767e xhci: Restrict USB4 tunnel detection for USB3 devices to Intel hosts [1 file, +8/-0]
4ea3319f3e usb: hub: lack of clearing xHC resources [1 file, +33/-0]
0cab185c73 usb: quirks: Add DELAY_INIT and NO_LPM for Prolific Mass Storage Card Reader [1 file, +4/-0]
079a3e52f3 usb: typec: ucsi: Fix NULL pointer access [1 file, +7/-6]
840afbea3f usb: gadget: u_ether: Set is_suspend flag if remote wakeup fails [1 file, +2/-2]
ced69d88eb usb: dwc3: Set SUSPENDENABLE soon after phy init [3 files, +45/-30]
35db1f1829 usb: dwc3: gadget: Prevent irq storm when TH re-executes [2 files, +13/-13]
b387312527 usb: typec: ucsi: increase timeout for PPM reset operations [1 file, +1/-1]
4bf6c57a89 usb: gadget: Set self-powered based on MaxPower and bmAttributes [1 file, +11/-5]
dcd7ffdefb usb: gadget: Fix setting self-powered state on suspend [1 file, +2/-1]
395011ee82 usb: gadget: Check bmAttributes only if configuration is valid [1 file, +1/-1]
012b98cdb5 acpi: typec: ucsi: Introduce a ->poll_cci method [7 files, +25/-12]
d7015bb3c5 xhci: pci: Fix indentation in the PCI device ID definitions [1 file, +4/-4]
ea39f99864 usb: xhci: Enable the TRB overfetch quirk on VIA VL805 [3 files, +10/-5]
4e8df56636 char: misc: deallocate static minor in error path [1 file, +1/-1]
b50e18791f drivers: core: fix device leak in __fw_devlink_relax_cycles() [1 file, +1/-0]
a684bad77e mm: hugetlb: Add huge page size param to huge_ptep_get_and_clear() [16 files, +46/-28]
6ad9643aa5 fs/netfs/read_pgpriv2: skip folio queues without `marks3` [1 file, +3/-2]
5bc6e5b10f fs/netfs/read_collect: fix crash due to uninitialized `prev` variable [1 file, +11/-10]
86b7ebddab uprobes: Fix race in uprobe_free_utask [1 file, +1/-1]

Changes in 6.12.19
	x86/amd_nb: Use rdmsr_safe() in amd_get_mmconfig_range()
	rust: block: fix formatting in GenDisk doc
	drm/i915/dsi: convert to struct intel_display
	drm/i915/dsi: Use TRANS_DDI_FUNC_CTL's own port width macro
	gpio: vf610: use generic device_get_match_data()
	gpio: vf610: add locking to gpio direction functions
	cifs: Remove symlink member from cifs_open_info_data union
	smb311: failure to open files of length 1040 when mounting with SMB3.1.1 POSIX extensions
	btrfs: fix data overwriting bug during buffered write when block size < page size
	x86/microcode/AMD: Add some forgotten models to the SHA check
	loongarch: Use ASM_REACHABLE
	rust: workqueue: remove unneeded `#[allow(clippy::new_ret_no_self)]`
	rust: sort global Rust flags
	rust: types: avoid repetition in `{As,From}Bytes` impls
	rust: enable `clippy::undocumented_unsafe_blocks` lint
	rust: enable `clippy::unnecessary_safety_comment` lint
	rust: enable `clippy::unnecessary_safety_doc` lint
	rust: enable `clippy::ignored_unit_patterns` lint
	rust: enable `rustdoc::unescaped_backticks` lint
	rust: init: remove unneeded `#[allow(clippy::disallowed_names)]`
	rust: sync: remove unneeded `#[allow(clippy::non_send_fields_in_send_ty)]`
	rust: introduce `.clippy.toml`
	rust: replace `clippy::dbg_macro` with `disallowed_macros`
	rust: provide proper code documentation titles
	rust: enable Clippy's `check-private-items`
	Documentation: rust: add coding guidelines on lints
	rust: start using the `#[expect(...)]` attribute
	Documentation: rust: discuss `#[expect(...)]` in the guidelines
	rust: error: make conversion functions public
	rust: error: optimize error type to use nonzero
	rust: alloc: add `Allocator` trait
	rust: alloc: separate `aligned_size` from `krealloc_aligned`
	rust: alloc: rename `KernelAllocator` to `Kmalloc`
	rust: alloc: implement `ReallocFunc`
	rust: alloc: make `allocator` module public
	rust: alloc: implement `Allocator` for `Kmalloc`
	rust: alloc: add module `allocator_test`
	rust: alloc: implement `Vmalloc` allocator
	rust: alloc: implement `KVmalloc` allocator
	rust: alloc: add __GFP_NOWARN to `Flags`
	rust: alloc: implement kernel `Box`
	rust: treewide: switch to our kernel `Box` type
	rust: alloc: remove extension of std's `Box`
	rust: alloc: add `Box` to prelude
	rust: alloc: introduce `ArrayLayout`
	rust: alloc: implement kernel `Vec` type
	rust: alloc: implement `IntoIterator` for `Vec`
	rust: alloc: implement `collect` for `IntoIter`
	rust: treewide: switch to the kernel `Vec` type
	rust: alloc: remove `VecExt` extension
	rust: alloc: add `Vec` to prelude
	rust: error: use `core::alloc::LayoutError`
	rust: error: check for config `test` in `Error::name`
	rust: alloc: implement `contains` for `Flags`
	rust: alloc: implement `Cmalloc` in module allocator_test
	rust: str: test: replace `alloc::format`
	rust: alloc: update module comment of alloc.rs
	kbuild: rust: remove the `alloc` crate and `GlobalAlloc`
	MAINTAINERS: add entry for the Rust `alloc` module
	drm/panic: avoid reimplementing Iterator::find
	drm/panic: remove unnecessary borrow in alignment_pattern
	drm/panic: prefer eliding lifetimes
	drm/panic: remove redundant field when assigning value
	drm/panic: correctly indent continuation of line in list item
	drm/panic: allow verbose boolean for clarity
	drm/panic: allow verbose version check
	rust: kbuild: expand rusttest target for macros
	rust: fix size_t in bindgen prototypes of C builtins
	rust: map `__kernel_size_t` and friends also to usize/isize
	rust: use custom FFI integer types
	rust: alloc: Fix `ArrayLayout` allocations
	Revert "of: reserved-memory: Fix using wrong number of cells to get property 'alignment'"
	tracing: tprobe-events: Fix a memory leak when tprobe with $retval
	tracing: tprobe-events: Reject invalid tracepoint name
	stmmac: loongson: Pass correct arg to PCI function
	LoongArch: Convert unreachable() to BUG()
	LoongArch: Use polling play_dead() when resuming from hibernation
	LoongArch: Set max_pfn with the PFN of the last page
	LoongArch: KVM: Add interrupt checking for AVEC
	LoongArch: KVM: Reload guest CSR registers after sleep
	LoongArch: KVM: Fix GPA size issue about VM
	HID: appleir: Fix potential NULL dereference at raw event handle
	ksmbd: fix type confusion via race condition when using ipc_msg_send_request
	ksmbd: fix out-of-bounds in parse_sec_desc()
	ksmbd: fix use-after-free in smb2_lock
	ksmbd: fix bug on trap in smb2_lock
	gpio: rcar: Use raw_spinlock to protect register access
	gpio: aggregator: protect driver attr handlers against module unload
	ALSA: seq: Avoid module auto-load handling at event delivery
	ALSA: hda: intel: Add Dell ALC3271 to power_save denylist
	ALSA: hda/realtek - add supported Mic Mute LED for Lenovo platform
	ALSA: hda/realtek: update ALC222 depop optimize
	btrfs: fix a leaked chunk map issue in read_one_chunk()
	hwmon: (peci/dimmtemp) Do not provide fake thresholds data
	drm/amd/display: Fix null check for pipe_ctx->plane_state in resource_build_scaling_params
	drm/amdkfd: Fix NULL Pointer Dereference in KFD queue
	drm/amd/pm: always allow ih interrupt from fw
	drm/imagination: avoid deadlock on fence release
	drm/imagination: Hold drm_gem_gpuva lock for unmap
	drm/imagination: only init job done fences once
	drm/radeon: Fix rs400_gpu_init for ATI mobility radeon Xpress 200M
	Revert "mm/page_alloc.c: don't show protection in zone's ->lowmem_reserve[] for empty zone"
	Revert "selftests/mm: remove local __NR_* definitions"
	platform/x86: thinkpad_acpi: Add battery quirk for ThinkPad X131e
	x86/boot: Sanitize boot params before parsing command line
	x86/cacheinfo: Validate CPUID leaf 0x2 EDX output
	x86/cpu: Validate CPUID leaf 0x2 EDX output
	x86/cpu: Properly parse CPUID leaf 0x2 TLB descriptor 0x63
	drm/xe: Add staging tree for VM binds
	drm/xe/hmm: Style- and include fixes
	drm/xe/hmm: Don't dereference struct page pointers without notifier lock
	drm/xe/vm: Fix a misplaced #endif
	drm/xe/vm: Validate userptr during gpu vma prefetching
	mptcp: fix 'scheduling while atomic' in mptcp_pm_nl_append_new_local_addr
	drm/xe: Fix GT "for each engine" workarounds
	drm/xe: Fix fault mode invalidation with unbind
	drm/xe/userptr: properly setup pfn_flags_mask
	drm/xe/userptr: Unmap userptrs in the mmu notifier
	Bluetooth: Add check for mgmt_alloc_skb() in mgmt_remote_name()
	Bluetooth: Add check for mgmt_alloc_skb() in mgmt_device_connected()
	wifi: cfg80211: regulatory: improve invalid hints checking
	wifi: nl80211: reject cooked mode if it is set along with other flags
	selftests/damon/damos_quota_goal: handle minimum quota that cannot be further reduced
	selftests/damon/damos_quota: make real expectation of quota exceeds
	selftests/damon/damon_nr_regions: set ops update for merge results check to 100ms
	selftests/damon/damon_nr_regions: sort collected regiosn before checking with min/max boundaries
	rapidio: add check for rio_add_net() in rio_scan_alloc_net()
	rapidio: fix an API misues when rio_add_net() fails
	dma: kmsan: export kmsan_handle_dma() for modules
	s390/traps: Fix test_monitor_call() inline assembly
	NFS: fix nfs_release_folio() to not deadlock via kcompactd writeback
	userfaultfd: do not block on locking a large folio with raised refcount
	block: fix conversion of GPT partition name to 7-bit
	mm/page_alloc: fix uninitialized variable
	mm: abort vma_modify() on merge out of memory failure
	mm: memory-failure: update ttu flag inside unmap_poisoned_folio
	mm: don't skip arch_sync_kernel_mappings() in error paths
	mm: fix finish_fault() handling for large folios
	hwpoison, memory_hotplug: lock folio before unmap hwpoisoned folio
	mm: memory-hotplug: check folio ref count first in do_migrate_range
	wifi: iwlwifi: mvm: clean up ROC on failure
	wifi: iwlwifi: mvm: don't try to talk to a dead firmware
	wifi: iwlwifi: limit printed string from FW file
	wifi: iwlwifi: Free pages allocated when failing to build A-MSDU
	wifi: iwlwifi: Fix A-MSDU TSO preparation
	HID: google: fix unused variable warning under !CONFIG_ACPI
	HID: intel-ish-hid: Fix use-after-free issue in hid_ishtp_cl_remove()
	HID: intel-ish-hid: Fix use-after-free issue in ishtp_hid_remove()
	coredump: Only sort VMAs when core_sort_vma sysctl is set
	nvme-pci: add support for sgl metadata
	nvme-pci: use sgls for all user requests if possible
	nvme-ioctl: fix leaked requests on mapping error
	wifi: mac80211: Support parsing EPCS ML element
	wifi: mac80211: fix MLE non-inheritance parsing
	wifi: mac80211: fix vendor-specific inheritance
	drm/fbdev-helper: Move color-mode lookup into 4CC format helper
	drm/fbdev: Add memory-agnostic fbdev client
	drm: Add client-agnostic setup helper
	drm/fbdev-ttm: Support struct drm_driver.fbdev_probe
	drm/nouveau: Run DRM default client setup
	drm/nouveau: select FW caching
	bluetooth: btusb: Initialize .owner field of force_poll_sync_fops
	nvme-tcp: add basic support for the C2HTermReq PDU
	nvme-tcp: fix potential memory corruption in nvme_tcp_recv_pdu()
	nvmet-tcp: Fix a possible sporadic response drops in weakly ordered arch
	ALSA: hda/realtek: Remove (revert) duplicate Ally X config
	net: gso: fix ownership in __udp_gso_segment
	caif_virtio: fix wrong pointer check in cfv_probe()
	perf/core: Fix pmus_lock vs. pmus_srcu ordering
	hwmon: (pmbus) Initialise page count in pmbus_identify()
	hwmon: (ntc_thermistor) Fix the ncpXXxh103 sensor table
	hwmon: (ad7314) Validate leading zero bits and return error
	tracing: probe-events: Remove unused MAX_ARG_BUF_LEN macro
	drm/imagination: Fix timestamps in firmware traces
	ALSA: usx2y: validate nrpacks module parameter on probe
	llc: do not use skb_get() before dev_queue_xmit()
	hwmon: fix a NULL vs IS_ERR_OR_NULL() check in xgene_hwmon_probe()
	drm/sched: Fix preprocessor guard
	be2net: fix sleeping while atomic bugs in be_ndo_bridge_getlink
	net: hns3: make sure ptp clock is unregister and freed if hclge_ptp_get_cycle returns an error
	drm/i915/color: Extract intel_color_modeset()
	drm/i915: Plumb 'dsb' all way to the plane hooks
	drm/xe: Remove double pageflip
	HID: hid-steam: Fix use-after-free when detaching device
	net: ipa: Fix v4.7 resource group names
	net: ipa: Fix QSB data for v4.7
	net: ipa: Enable checksum for IPA_ENDPOINT_AP_MODEM_{RX,TX} for v4.7
	ppp: Fix KMSAN uninit-value warning with bpf
	ethtool: linkstate: migrate linkstate functions to support multi-PHY setups
	net: ethtool: plumb PHY stats to PHY drivers
	net: ethtool: netlink: Allow NULL nlattrs when getting a phy_device
	vlan: enforce underlying device type
	x86/sgx: Fix size overflows in sgx_encl_create()
	exfat: fix just enough dentries but allocate a new cluster to dir
	exfat: fix soft lockup in exfat_clear_bitmap
	exfat: short-circuit zero-byte writes in exfat_file_write_iter
	net-timestamp: support TCP GSO case for a few missing flags
	ublk: set_params: properly check if parameters can be applied
	sched/fair: Fix potential memory corruption in child_cfs_rq_on_list
	nvme-tcp: fix signedness bug in nvme_tcp_init_connection()
	net: dsa: mt7530: Fix traffic flooding for MMIO devices
	mctp i3c: handle NULL header address
	net: ipv6: fix dst ref loop in ila lwtunnel
	net: ipv6: fix missing dst ref drop in ila lwtunnel
	gpio: rcar: Fix missing of_node_put() call
	Revert "drivers/card_reader/rtsx_usb: Restore interrupt based detection"
	usb: renesas_usbhs: Call clk_put()
	xhci: Restrict USB4 tunnel detection for USB3 devices to Intel hosts
	usb: renesas_usbhs: Use devm_usb_get_phy()
	usb: hub: lack of clearing xHC resources
	usb: quirks: Add DELAY_INIT and NO_LPM for Prolific Mass Storage Card Reader
	usb: typec: ucsi: Fix NULL pointer access
	usb: renesas_usbhs: Flush the notify_hotplug_work
	usb: gadget: u_ether: Set is_suspend flag if remote wakeup fails
	usb: atm: cxacru: fix a flaw in existing endpoint checks
	usb: dwc3: Set SUSPENDENABLE soon after phy init
	usb: dwc3: gadget: Prevent irq storm when TH re-executes
	usb: typec: ucsi: increase timeout for PPM reset operations
	usb: typec: tcpci_rt1711h: Unmask alert interrupts to fix functionality
	usb: gadget: Set self-powered based on MaxPower and bmAttributes
	usb: gadget: Fix setting self-powered state on suspend
	usb: gadget: Check bmAttributes only if configuration is valid
	kbuild: userprogs: use correct lld when linking through clang
	acpi: typec: ucsi: Introduce a ->poll_cci method
	rust: finish using custom FFI integer types
	rust: map `long` to `isize` and `char` to `u8`
	xhci: pci: Fix indentation in the PCI device ID definitions
	usb: xhci: Enable the TRB overfetch quirk on VIA VL805
	KVM: SVM: Set RFLAGS.IF=1 in C code, to get VMRUN out of the STI shadow
	KVM: SVM: Save host DR masks on CPUs with DebugSwap
	KVM: SVM: Drop DEBUGCTL[5:2] from guest's effective value
	KVM: SVM: Suppress DEBUGCTL.BTF on AMD
	KVM: x86: Snapshot the host's DEBUGCTL in common x86
	KVM: SVM: Manually context switch DEBUGCTL if LBR virtualization is disabled
	KVM: x86: Snapshot the host's DEBUGCTL after disabling IRQs
	KVM: x86: Explicitly zero EAX and EBX when PERFMON_V2 isn't supported by KVM
	cdx: Fix possible UAF error in driver_override_show()
	mei: me: add panther lake P DID
	mei: vsc: Use "wakeuphostint" when getting the host wakeup GPIO
	intel_th: pci: Add Arrow Lake support
	intel_th: pci: Add Panther Lake-H support
	intel_th: pci: Add Panther Lake-P/U support
	char: misc: deallocate static minor in error path
	drivers: core: fix device leak in __fw_devlink_relax_cycles()
	slimbus: messaging: Free transaction ID in delayed interrupt scenario
	bus: mhi: host: pci_generic: Use pci_try_reset_function() to avoid deadlock
	eeprom: digsy_mtc: Make GPIO lookup table match the device
	drivers: virt: acrn: hsm: Use kzalloc to avoid info leak in pmcmd_ioctl
	iio: filter: admv8818: Force initialization of SDO
	iio: light: apds9306: fix max_scale_nano values
	iio: dac: ad3552r: clear reset status flag
	iio: adc: ad7192: fix channel select
	iio: adc: at91-sama5d2_adc: fix sama7g5 realbits value
	mm: hugetlb: Add huge page size param to huge_ptep_get_and_clear()
	arm64: hugetlb: Fix huge_ptep_get_and_clear() for non-present ptes
	fs/netfs/read_pgpriv2: skip folio queues without `marks3`
	fs/netfs/read_collect: fix crash due to uninitialized `prev` variable
	kbuild: hdrcheck: fix cross build with clang
	ALSA: hda: realtek: fix incorrect IS_REACHABLE() usage
	nvme-tcp: Fix a C2HTermReq error message
	docs: rust: remove spurious item in `expect` list
	Revert "KVM: e500: always restore irqs"
	Revert "KVM: PPC: e500: Use __kvm_faultin_pfn() to handle page faults"
	Revert "KVM: PPC: e500: Mark "struct page" pfn accessed before dropping mmu_lock"
	Revert "KVM: PPC: e500: Mark "struct page" dirty in kvmppc_e500_shadow_map()"
	KVM: e500: always restore irqs
	uprobes: Fix race in uprobe_free_utask
	selftests/bpf: Clean up open-coded gettid syscall invocations
	x86/mm: Don't disable PCID when INVLPG has been fixed by microcode
	wifi: iwlwifi: pcie: Fix TSO preparation
	Linux 6.12.19

Change-Id: Ia0c2b2c6a95b53a66e21505ed6ba756c6b0a2388
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
@@ -212,6 +212,17 @@ pid>/``).
 
 This value defaults to 0.
 
+core_sort_vma
+=============
+
+The default coredump writes VMAs in address order. By setting
+``core_sort_vma`` to 1, VMAs will be written from smallest size
+to largest size. This is known to break at least elfutils, but
+can be handy when dealing with very large (and truncated)
+coredumps where the more useful debugging details are included
+in the smaller VMAs.
+
+This value defaults to 0.
+
 core_uses_pid
 =============
 
@@ -296,9 +296,7 @@ may happen in several situations, e.g.:
 It also increases the visibility of the remaining ``allow``\ s and reduces the
 chance of misapplying one.
 
-Thus prefer ``except`` over ``allow`` unless:
-
-- The lint attribute is intended to be temporary, e.g. while developing.
+Thus prefer ``expect`` over ``allow`` unless:
 
 - Conditional compilation triggers the warning in some cases but not others.
 
@@ -20243,6 +20243,13 @@ F:	scripts/*rust*
 F:	tools/testing/selftests/rust/
 K:	\b(?i:rust)\b
 
+RUST [ALLOC]
+M:	Danilo Krummrich <dakr@kernel.org>
+L:	rust-for-linux@vger.kernel.org
+S:	Maintained
+F:	rust/kernel/alloc.rs
+F:	rust/kernel/alloc/
+
 RXRPC SOCKETS (AF_RXRPC)
 M:	David Howells <dhowells@redhat.com>
 M:	Marc Dionne <marc.dionne@auristor.com>
Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 6
 PATCHLEVEL = 12
-SUBLEVEL = 18
+SUBLEVEL = 19
 EXTRAVERSION =
 NAME = Baby Opossum Posse
 
@@ -1100,6 +1100,11 @@ endif
 KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
 KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
 
+# userspace programs are linked via the compiler, use the correct linker
+ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_LD_IS_LLD),yy)
+KBUILD_USERLDFLAGS += --ld-path=$(LD)
+endif
+
 # make the checker run with the right architecture
 CHECKFLAGS += --arch=$(ARCH)
 
@@ -34,8 +34,8 @@ extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 				      unsigned long addr, pte_t *ptep,
 				      pte_t pte, int dirty);
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
-extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-				     unsigned long addr, pte_t *ptep);
+extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+				     pte_t *ptep, unsigned long sz);
 #define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
 extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
 				    unsigned long addr, pte_t *ptep);
@@ -100,20 +100,11 @@ static int find_num_contig(struct mm_struct *mm, unsigned long addr,
 
 static inline int num_contig_ptes(unsigned long size, size_t *pgsize)
 {
-	int contig_ptes = 0;
+	int contig_ptes = 1;
 
 	*pgsize = size;
 
 	switch (size) {
-#ifndef __PAGETABLE_PMD_FOLDED
-	case PUD_SIZE:
-		if (pud_sect_supported())
-			contig_ptes = 1;
-		break;
-#endif
-	case PMD_SIZE:
-		contig_ptes = 1;
-		break;
 	case CONT_PMD_SIZE:
 		*pgsize = PMD_SIZE;
 		contig_ptes = CONT_PMDS;
@@ -122,6 +113,8 @@ static inline int num_contig_ptes(unsigned long size, size_t *pgsize)
 		*pgsize = PAGE_SIZE;
 		contig_ptes = CONT_PTES;
 		break;
+	default:
+		WARN_ON(!__hugetlb_valid_size(size));
 	}
 
 	return contig_ptes;
@@ -163,24 +156,23 @@ static pte_t get_clear_contig(struct mm_struct *mm,
 			      unsigned long pgsize,
 			      unsigned long ncontig)
 {
-	pte_t orig_pte = __ptep_get(ptep);
-	unsigned long i;
+	pte_t pte, tmp_pte;
+	bool present;
 
-	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
-		pte_t pte = __ptep_get_and_clear(mm, addr, ptep);
-
-		/*
-		 * If HW_AFDBM is enabled, then the HW could turn on
-		 * the dirty or accessed bit for any page in the set,
-		 * so check them all.
-		 */
-		if (pte_dirty(pte))
-			orig_pte = pte_mkdirty(orig_pte);
-
-		if (pte_young(pte))
-			orig_pte = pte_mkyoung(orig_pte);
+	pte = __ptep_get_and_clear(mm, addr, ptep);
+	present = pte_present(pte);
+	while (--ncontig) {
+		ptep++;
+		addr += pgsize;
+		tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
+		if (present) {
+			if (pte_dirty(tmp_pte))
+				pte = pte_mkdirty(pte);
+			if (pte_young(tmp_pte))
+				pte = pte_mkyoung(pte);
+		}
 	}
-	return orig_pte;
+	return pte;
 }
 
 static pte_t get_clear_contig_flush(struct mm_struct *mm,
@@ -385,18 +377,13 @@ void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 		__pte_clear(mm, addr, ptep);
 }
 
-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-			      unsigned long addr, pte_t *ptep)
+pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+			      pte_t *ptep, unsigned long sz)
 {
 	int ncontig;
 	size_t pgsize;
-	pte_t orig_pte = __ptep_get(ptep);
-
-	if (!pte_cont(orig_pte))
-		return __ptep_get_and_clear(mm, addr, ptep);
-
-	ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 
+	ncontig = num_contig_ptes(sz, &pgsize);
 	return get_clear_contig(mm, addr, ptep, pgsize, ncontig);
 }
 
@@ -538,6 +525,8 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
 
 pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
 {
+	unsigned long psize = huge_page_size(hstate_vma(vma));
+
 	if (alternative_has_cap_unlikely(ARM64_WORKAROUND_2645198)) {
 		/*
 		 * Break-before-make (BBM) is required for all user space mappings
@@ -547,7 +536,7 @@ pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr
 		if (pte_user_exec(__ptep_get(ptep)))
 			return huge_ptep_clear_flush(vma, addr, ptep);
 	}
-	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, psize);
 }
 
 void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
@@ -41,7 +41,8 @@ static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-					    unsigned long addr, pte_t *ptep)
+					    unsigned long addr, pte_t *ptep,
+					    unsigned long sz)
 {
 	pte_t clear;
 	pte_t pte = ptep_get(ptep);
@@ -56,8 +57,9 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	pte_t pte;
+	unsigned long sz = huge_page_size(hstate_vma(vma));
 
-	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);
 	flush_tlb_page(vma, addr);
 	return pte;
 }
@@ -126,14 +126,14 @@ void kexec_reboot(void)
 	/* All secondary cpus go to kexec_smp_wait */
 	if (smp_processor_id() > 0) {
 		relocated_kexec_smp_wait(NULL);
-		unreachable();
+		BUG();
 	}
 #endif
 
 	do_kexec = (void *)reboot_code_buffer;
 	do_kexec(efi_boot, cmdline_ptr, systable_ptr, start_addr, first_ind_entry);
 
-	unreachable();
+	BUG();
 }
 
@@ -387,6 +387,9 @@ static void __init check_kernel_sections_mem(void)
  */
 static void __init arch_mem_init(char **cmdline_p)
 {
+	/* Recalculate max_low_pfn for "mem=xxx" */
+	max_pfn = max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());
+
 	if (usermem)
 		pr_info("User-defined physical RAM map overwrite\n");
 
@@ -19,6 +19,7 @@
 #include <linux/smp.h>
 #include <linux/threads.h>
 #include <linux/export.h>
+#include <linux/suspend.h>
 #include <linux/syscore_ops.h>
 #include <linux/time.h>
 #include <linux/tracepoint.h>
@@ -423,7 +424,7 @@ void loongson_cpu_die(unsigned int cpu)
 	mb();
 }
 
-void __noreturn arch_cpu_idle_dead(void)
+static void __noreturn idle_play_dead(void)
 {
 	register uint64_t addr;
 	register void (*init_fn)(void);
@@ -447,6 +448,50 @@ void __noreturn arch_cpu_idle_dead(void)
 	BUG();
 }
 
+#ifdef CONFIG_HIBERNATION
+static void __noreturn poll_play_dead(void)
+{
+	register uint64_t addr;
+	register void (*init_fn)(void);
+
+	idle_task_exit();
+	__this_cpu_write(cpu_state, CPU_DEAD);
+
+	__smp_mb();
+	do {
+		__asm__ __volatile__("nop\n\t");
+		addr = iocsr_read64(LOONGARCH_IOCSR_MBUF0);
+	} while (addr == 0);
+
+	init_fn = (void *)TO_CACHE(addr);
+	iocsr_write32(0xffffffff, LOONGARCH_IOCSR_IPI_CLEAR);
+
+	init_fn();
+	BUG();
+}
+#endif
+
+static void (*play_dead)(void) = idle_play_dead;
+
+void __noreturn arch_cpu_idle_dead(void)
+{
+	play_dead();
+	BUG(); /* play_dead() doesn't return */
+}
+
+#ifdef CONFIG_HIBERNATION
+int hibernate_resume_nonboot_cpu_disable(void)
+{
+	int ret;
+
+	play_dead = poll_play_dead;
+	ret = suspend_disable_secondary_cpus();
+	play_dead = idle_play_dead;
+
+	return ret;
+}
+#endif
+
 #endif
 
 /*
@@ -624,6 +624,12 @@ static int kvm_handle_rdwr_fault(struct kvm_vcpu *vcpu, bool write)
 	struct kvm_run *run = vcpu->run;
 	unsigned long badv = vcpu->arch.badv;
 
+	/* Inject ADE exception if exceed max GPA size */
+	if (unlikely(badv >= vcpu->kvm->arch.gpa_size)) {
+		kvm_queue_exception(vcpu, EXCCODE_ADE, EXSUBCODE_ADEM);
+		return RESUME_GUEST;
+	}
+
 	ret = kvm_handle_mm_fault(vcpu, badv, write);
 	if (ret) {
 		/* Treat as MMIO */
@@ -297,6 +297,13 @@ int kvm_arch_enable_virtualization_cpu(void)
 	kvm_debug("GCFG:%lx GSTAT:%lx GINTC:%lx GTLBC:%lx",
 		  read_csr_gcfg(), read_csr_gstat(), read_csr_gintc(), read_csr_gtlbc());
 
+	/*
+	 * HW Guest CSR registers are lost after CPU suspend and resume.
+	 * Clear last_vcpu so that Guest CSR registers forced to reload
+	 * from vCPU SW state.
+	 */
+	this_cpu_ptr(vmcs)->last_vcpu = NULL;
+
 	return 0;
 }
 
@@ -311,7 +311,7 @@ static int kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	int ret = RESUME_GUEST;
 	unsigned long estat = vcpu->arch.host_estat;
-	u32 intr = estat & 0x1fff; /* Ignore NMI */
+	u32 intr = estat & CSR_ESTAT_IS;
 	u32 ecode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
 
 	vcpu->mode = OUTSIDE_GUEST_MODE;
@@ -46,7 +46,11 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 	if (kvm_pvtime_supported())
 		kvm->arch.pv_features |= BIT(KVM_FEATURE_STEAL_TIME);
 
-	kvm->arch.gpa_size = BIT(cpu_vabits - 1);
+	/*
+	 * cpu_vabits means user address space only (a half of total).
+	 * GPA size of VM is the same with the size of user address space.
+	 */
+	kvm->arch.gpa_size = BIT(cpu_vabits);
 	kvm->arch.root_level = CONFIG_PGTABLE_LEVELS - 1;
 	kvm->arch.invalid_ptes[0] = 0;
 	kvm->arch.invalid_ptes[1] = (unsigned long)invalid_pte_table;
@@ -32,7 +32,8 @@ static inline int prepare_hugepage_range(struct file *file,
 
 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-					    unsigned long addr, pte_t *ptep)
+					    unsigned long addr, pte_t *ptep,
+					    unsigned long sz)
 {
 	pte_t clear;
 	pte_t pte = *ptep;
@@ -47,13 +48,14 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	pte_t pte;
+	unsigned long sz = huge_page_size(hstate_vma(vma));
 
 	/*
 	 * clear the huge pte entry firstly, so that the other smp threads will
 	 * not get old pte entry after finishing flush_tlb_page and before
 	 * setting new huge pte entry
 	 */
-	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);
 	flush_tlb_page(vma, addr);
 	return pte;
 }
@@ -10,7 +10,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,

 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep);
+			      pte_t *ptep, unsigned long sz);

 /*
  * If the arch doesn't supply something else, assume that hugepage
@@ -147,7 +147,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,


 pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
+			      pte_t *ptep, unsigned long sz)
 {
 	pte_t entry;

@@ -45,7 +45,8 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,

 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-					    unsigned long addr, pte_t *ptep)
+					    unsigned long addr, pte_t *ptep,
+					    unsigned long sz)
 {
 	return __pte(pte_update(mm, addr, ptep, ~0UL, 0, 1));
 }
@@ -55,8 +56,9 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 					  unsigned long addr, pte_t *ptep)
 {
 	pte_t pte;
+	unsigned long sz = huge_page_size(hstate_vma(vma));

-	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+	pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);
 	flush_hugetlb_page(vma, addr);
 	return pte;
 }
@@ -242,7 +242,7 @@ static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
 	return tlbe->mas7_3 & (MAS3_SW|MAS3_UW);
 }

-static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref,
+static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
 					 struct kvm_book3e_206_tlb_entry *gtlbe,
 					 kvm_pfn_t pfn, unsigned int wimg)
 {
@@ -252,7 +252,11 @@ static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref,
 	/* Use guest supplied MAS2_G and MAS2_E */
 	ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;

-	return tlbe_is_writable(gtlbe);
+	/* Mark the page accessed */
+	kvm_set_pfn_accessed(pfn);
+
+	if (tlbe_is_writable(gtlbe))
+		kvm_set_pfn_dirty(pfn);
 }

 static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref)
@@ -322,7 +326,6 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 {
 	struct kvm_memory_slot *slot;
 	unsigned long pfn = 0; /* silence GCC warning */
-	struct page *page = NULL;
 	unsigned long hva;
 	int pfnmap = 0;
 	int tsize = BOOK3E_PAGESZ_4K;
@@ -334,7 +337,6 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	unsigned int wimg = 0;
 	pgd_t *pgdir;
 	unsigned long flags;
-	bool writable = false;

 	/* used to check for invalidations in progress */
 	mmu_seq = kvm->mmu_invalidate_seq;
@@ -444,7 +446,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,

 	if (likely(!pfnmap)) {
 		tsize_pages = 1UL << (tsize + 10 - PAGE_SHIFT);
-		pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page);
+		pfn = gfn_to_pfn_memslot(slot, gfn);
 		if (is_error_noslot_pfn(pfn)) {
 			if (printk_ratelimit())
 				pr_err("%s: real page not found for gfn %lx\n",
@@ -489,7 +491,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	}
 	local_irq_restore(flags);

-	writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
+	kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
 	kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
 				ref, gvaddr, stlbe);

@@ -497,8 +499,11 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
 	kvmppc_mmu_flush_icache(pfn);

 out:
-	kvm_release_faultin_page(kvm, page, !!ret, writable);
 	spin_unlock(&kvm->mmu_lock);

+	/* Drop refcount on page, so that mmu notifiers can clear it */
+	kvm_release_pfn_clean(pfn);
+
 	return ret;
 }
@@ -28,7 +28,8 @@ void set_huge_pte_at(struct mm_struct *mm,

 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-			      unsigned long addr, pte_t *ptep);
+			      unsigned long addr, pte_t *ptep,
+			      unsigned long sz);

 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
 pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
@@ -293,7 +293,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,

 pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
 			      unsigned long addr,
-			      pte_t *ptep)
+			      pte_t *ptep, unsigned long sz)
 {
 	pte_t orig_pte = ptep_get(ptep);
 	int pte_num;
@@ -20,8 +20,15 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 		       pte_t *ptep, pte_t pte);
 pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-			      unsigned long addr, pte_t *ptep);
+pte_t __huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+				pte_t *ptep);
+
+static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+					    unsigned long addr, pte_t *ptep,
+					    unsigned long sz)
+{
+	return __huge_ptep_get_and_clear(mm, addr, ptep);
+}

 /*
  * If the arch doesn't supply something else, assume that hugepage
@@ -57,7 +64,7 @@ static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
 static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 					  unsigned long address, pte_t *ptep)
 {
-	return huge_ptep_get_and_clear(vma->vm_mm, address, ptep);
+	return __huge_ptep_get_and_clear(vma->vm_mm, address, ptep);
 }

 static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
@@ -66,7 +73,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 {
 	int changed = !pte_same(huge_ptep_get(vma->vm_mm, addr, ptep), pte);
 	if (changed) {
-		huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+		__huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
 		__set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
 	}
 	return changed;
@@ -75,7 +82,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
 static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
 					   unsigned long addr, pte_t *ptep)
 {
-	pte_t pte = huge_ptep_get_and_clear(mm, addr, ptep);
+	pte_t pte = __huge_ptep_get_and_clear(mm, addr, ptep);
 	__set_huge_pte_at(mm, addr, ptep, pte_wrprotect(pte));
 }
@@ -284,10 +284,10 @@ static void __init test_monitor_call(void)
 		return;
 	asm volatile(
 		"	mc	0,0\n"
-		"0:	xgr	%0,%0\n"
+		"0:	lhi	%[val],0\n"
 		"1:\n"
-		EX_TABLE(0b,1b)
-		: "+d" (val));
+		EX_TABLE(0b, 1b)
+		: [val] "+d" (val));
 	if (!val)
 		panic("Monitor call doesn't work!\n");
 }
@@ -174,8 +174,8 @@ pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 	return __rste_to_pte(pte_val(*ptep));
 }

-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
-			      unsigned long addr, pte_t *ptep)
+pte_t __huge_ptep_get_and_clear(struct mm_struct *mm,
+				unsigned long addr, pte_t *ptep)
 {
 	pte_t pte = huge_ptep_get(mm, addr, ptep);
 	pmd_t *pmdp = (pmd_t *) ptep;
@@ -20,7 +20,7 @@ void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,

 #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
 pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep);
+			      pte_t *ptep, unsigned long sz);

 #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
 static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
@@ -368,7 +368,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
 }

 pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
-			      pte_t *ptep)
+			      pte_t *ptep, unsigned long sz)
 {
 	unsigned int i, nptes, orig_shift, shift;
 	unsigned long size;
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 #include "misc.h"
 #include <asm/bootparam.h>
+#include <asm/bootparam_utils.h>
 #include <asm/e820/types.h>
 #include <asm/processor.h>
 #include "pgtable.h"
@@ -107,6 +108,7 @@ asmlinkage void configure_5level_paging(struct boot_params *bp, void *pgtable)
 	bool l5_required = false;

 	/* Initialize boot_params. Required for cmdline_find_option_bool(). */
+	sanitize_boot_params(bp);
 	boot_params_ptr = bp;

 	/*
@@ -761,6 +761,7 @@ struct kvm_vcpu_arch {
 	u32 pkru;
 	u32 hflags;
 	u64 efer;
+	u64 host_debugctl;
 	u64 apic_base;
 	struct kvm_lapic *apic; /* kernel irqchip context */
 	bool load_eoi_exitmap_pending;
@@ -808,7 +808,7 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
 		cpuid(2, &regs[0], &regs[1], &regs[2], &regs[3]);

 		/* If bit 31 is set, this is an unknown format */
-		for (j = 0 ; j < 3 ; j++)
+		for (j = 0 ; j < 4 ; j++)
 			if (regs[j] & (1 << 31))
 				regs[j] = 0;
@@ -672,26 +672,37 @@ static unsigned int intel_size_cache(struct cpuinfo_x86 *c, unsigned int size)
 }
 #endif

-#define TLB_INST_4K	0x01
-#define TLB_INST_4M	0x02
-#define TLB_INST_2M_4M	0x03
+#define TLB_INST_4K		0x01
+#define TLB_INST_4M		0x02
+#define TLB_INST_2M_4M		0x03

-#define TLB_INST_ALL	0x05
-#define TLB_INST_1G	0x06
+#define TLB_INST_ALL		0x05
+#define TLB_INST_1G		0x06

-#define TLB_DATA_4K	0x11
-#define TLB_DATA_4M	0x12
-#define TLB_DATA_2M_4M	0x13
-#define TLB_DATA_4K_4M	0x14
+#define TLB_DATA_4K		0x11
+#define TLB_DATA_4M		0x12
+#define TLB_DATA_2M_4M		0x13
+#define TLB_DATA_4K_4M		0x14

-#define TLB_DATA_1G	0x16
+#define TLB_DATA_1G		0x16
+#define TLB_DATA_1G_2M_4M	0x17

-#define TLB_DATA0_4K	0x21
-#define TLB_DATA0_4M	0x22
-#define TLB_DATA0_2M_4M	0x23
+#define TLB_DATA0_4K		0x21
+#define TLB_DATA0_4M		0x22
+#define TLB_DATA0_2M_4M		0x23

-#define STLB_4K		0x41
-#define STLB_4K_2M	0x42
+#define STLB_4K			0x41
+#define STLB_4K_2M		0x42
+
+/*
+ * All of leaf 0x2's one-byte TLB descriptors implies the same number of
+ * entries for their respective TLB types. The 0x63 descriptor is an
+ * exception: it implies 4 dTLB entries for 1GB pages and 32 dTLB entries
+ * for 2MB or 4MB pages. Encode descriptor 0x63 dTLB entry count for
+ * 2MB/4MB pages here, as its count for dTLB 1GB pages is already at the
+ * intel_tlb_table[] mapping.
+ */
+#define TLB_0x63_2M_4M_ENTRIES	32

 static const struct _tlb_table intel_tlb_table[] = {
 	{ 0x01, TLB_INST_4K, 32, " TLB_INST 4 KByte pages, 4-way set associative" },
@@ -713,7 +724,8 @@ static const struct _tlb_table intel_tlb_table[] = {
 	{ 0x5c, TLB_DATA_4K_4M, 128, " TLB_DATA 4 KByte and 4 MByte pages" },
 	{ 0x5d, TLB_DATA_4K_4M, 256, " TLB_DATA 4 KByte and 4 MByte pages" },
 	{ 0x61, TLB_INST_4K, 48, " TLB_INST 4 KByte pages, full associative" },
-	{ 0x63, TLB_DATA_1G, 4, " TLB_DATA 1 GByte pages, 4-way set associative" },
+	{ 0x63, TLB_DATA_1G_2M_4M, 4, " TLB_DATA 1 GByte pages, 4-way set associative"
+	  " (plus 32 entries TLB_DATA 2 MByte or 4 MByte pages, not encoded here)" },
 	{ 0x6b, TLB_DATA_4K, 256, " TLB_DATA 4 KByte pages, 8-way associative" },
 	{ 0x6c, TLB_DATA_2M_4M, 128, " TLB_DATA 2 MByte or 4 MByte pages, 8-way associative" },
 	{ 0x6d, TLB_DATA_1G, 16, " TLB_DATA 1 GByte pages, fully associative" },
@@ -813,6 +825,12 @@ static void intel_tlb_lookup(const unsigned char desc)
 		if (tlb_lld_4m[ENTRIES] < intel_tlb_table[k].entries)
 			tlb_lld_4m[ENTRIES] = intel_tlb_table[k].entries;
 		break;
+	case TLB_DATA_1G_2M_4M:
+		if (tlb_lld_2m[ENTRIES] < TLB_0x63_2M_4M_ENTRIES)
+			tlb_lld_2m[ENTRIES] = TLB_0x63_2M_4M_ENTRIES;
+		if (tlb_lld_4m[ENTRIES] < TLB_0x63_2M_4M_ENTRIES)
+			tlb_lld_4m[ENTRIES] = TLB_0x63_2M_4M_ENTRIES;
+		fallthrough;
 	case TLB_DATA_1G:
 		if (tlb_lld_1g[ENTRIES] < intel_tlb_table[k].entries)
 			tlb_lld_1g[ENTRIES] = intel_tlb_table[k].entries;
@@ -836,7 +854,7 @@ static void intel_detect_tlb(struct cpuinfo_x86 *c)
 		cpuid(2, &regs[0], &regs[1], &regs[2], &regs[3]);

 		/* If bit 31 is set, this is an unknown format */
-		for (j = 0 ; j < 3 ; j++)
+		for (j = 0 ; j < 4 ; j++)
 			if (regs[j] & (1 << 31))
 				regs[j] = 0;
@@ -64,6 +64,13 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
 	struct file *backing;
 	long ret;

+	/*
+	 * ECREATE would detect this too, but checking here also ensures
+	 * that the 'encl_size' calculations below can never overflow.
+	 */
+	if (!is_power_of_2(secs->size))
+		return -EINVAL;
+
 	va_page = sgx_encl_grow(encl, true);
 	if (IS_ERR(va_page))
 		return PTR_ERR(va_page);
@@ -1387,7 +1387,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)

 		entry->ecx = entry->edx = 0;
 		if (!enable_pmu || !kvm_cpu_cap_has(X86_FEATURE_PERFMON_V2)) {
-			entry->eax = entry->ebx;
+			entry->eax = entry->ebx = 0;
 			break;
 		}
@@ -4579,6 +4579,8 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm)

 void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa)
 {
+	struct kvm *kvm = svm->vcpu.kvm;
+
 	/*
 	 * All host state for SEV-ES guests is categorized into three swap types
 	 * based on how it is handled by hardware during a world switch:
@@ -4602,10 +4604,15 @@ void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_are

 	/*
 	 * If DebugSwap is enabled, debug registers are loaded but NOT saved by
-	 * the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU both
-	 * saves and loads debug registers (Type-A).
+	 * the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU does
+	 * not save or load debug registers. Sadly, on CPUs without
+	 * ALLOWED_SEV_FEATURES, KVM can't prevent SNP guests from enabling
+	 * DebugSwap on secondary vCPUs without KVM's knowledge via "AP Create".
+	 * Save all registers if DebugSwap is supported to prevent host state
+	 * from being clobbered by a misbehaving guest.
 	 */
-	if (sev_vcpu_has_debug_swap(svm)) {
+	if (sev_vcpu_has_debug_swap(svm) ||
+	    (sev_snp_guest(kvm) && cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP))) {
 		hostsa->dr0 = native_get_debugreg(0);
 		hostsa->dr1 = native_get_debugreg(1);
 		hostsa->dr2 = native_get_debugreg(2);
@@ -3167,6 +3167,27 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 			kvm_pr_unimpl_wrmsr(vcpu, ecx, data);
 			break;
 		}
+
+		/*
+		 * AMD changed the architectural behavior of bits 5:2. On CPUs
+		 * without BusLockTrap, bits 5:2 control "external pins", but
+		 * on CPUs that support BusLockDetect, bit 2 enables BusLockTrap
+		 * and bits 5:3 are reserved-to-zero. Sadly, old KVM allowed
+		 * the guest to set bits 5:2 despite not actually virtualizing
+		 * Performance-Monitoring/Breakpoint external pins. Drop bits
+		 * 5:2 for backwards compatibility.
+		 */
+		data &= ~GENMASK(5, 2);
+
+		/*
+		 * Suppress BTF as KVM doesn't virtualize BTF, but there's no
+		 * way to communicate lack of support to the guest.
+		 */
+		if (data & DEBUGCTLMSR_BTF) {
+			kvm_pr_unimpl_wrmsr(vcpu, MSR_IA32_DEBUGCTLMSR, data);
+			data &= ~DEBUGCTLMSR_BTF;
+		}
+
 		if (data & DEBUGCTL_RESERVED_BITS)
 			return 1;

@@ -4176,6 +4197,18 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in

 	guest_state_enter_irqoff();

+	/*
+	 * Set RFLAGS.IF prior to VMRUN, as the host's RFLAGS.IF at the time of
+	 * VMRUN controls whether or not physical IRQs are masked (KVM always
+	 * runs with V_INTR_MASKING_MASK). Toggle RFLAGS.IF here to avoid the
+	 * temptation to do STI+VMRUN+CLI, as AMD CPUs bleed the STI shadow
+	 * into guest state if delivery of an event during VMRUN triggers a
+	 * #VMEXIT, and the guest_state transitions already tell lockdep that
+	 * IRQs are being enabled/disabled. Note! GIF=0 for the entirety of
+	 * this path, so IRQs aren't actually unmasked while running host code.
+	 */
+	raw_local_irq_enable();
+
 	amd_clear_divider();

 	if (sev_es_guest(vcpu->kvm))
@@ -4184,6 +4217,8 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
 	else
 		__svm_vcpu_run(svm, spec_ctrl_intercepted);

+	raw_local_irq_disable();
+
 	guest_state_exit_irqoff();
 }
@@ -4240,6 +4275,16 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
 	clgi();
 	kvm_load_guest_xsave_state(vcpu);

+	/*
+	 * Hardware only context switches DEBUGCTL if LBR virtualization is
+	 * enabled. Manually load DEBUGCTL if necessary (and restore it after
+	 * VM-Exit), as running with the host's DEBUGCTL can negatively affect
+	 * guest state and can even be fatal, e.g. due to Bus Lock Detect.
+	 */
+	if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) &&
+	    vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl)
+		update_debugctlmsr(svm->vmcb->save.dbgctl);
+
 	kvm_wait_lapic_expire(vcpu);

 	/*
@@ -4267,6 +4312,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
 	if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
 		kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);

+	if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) &&
+	    vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl)
+		update_debugctlmsr(vcpu->arch.host_debugctl);
+
 	kvm_load_host_xsave_state(vcpu);
 	stgi();
@@ -591,7 +591,7 @@ static inline bool is_vnmi_enabled(struct vcpu_svm *svm)
 /* svm.c */
 #define MSR_INVALID				0xffffffffU

-#define DEBUGCTL_RESERVED_BITS (~(0x3fULL))
+#define DEBUGCTL_RESERVED_BITS (~DEBUGCTLMSR_LBR)

 extern bool dump_invalid_vmcb;
@@ -170,12 +170,8 @@ SYM_FUNC_START(__svm_vcpu_run)
 	mov VCPU_RDI(%_ASM_DI), %_ASM_DI

 	/* Enter guest mode */
-	sti
-
 3:	vmrun %_ASM_AX
 4:
-	cli
-
 	/* Pop @svm to RAX while it's the only available register. */
 	pop %_ASM_AX

@@ -340,12 +336,8 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
 	mov KVM_VMCB_pa(%rax), %rax

 	/* Enter guest mode */
-	sti
-
 1:	vmrun %rax
-
-2:	cli
-
+2:
 	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
 	FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
@@ -1515,16 +1515,12 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
  */
 void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 {
-	struct vcpu_vmx *vmx = to_vmx(vcpu);
-
 	if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
 		shrink_ple_window(vcpu);

 	vmx_vcpu_load_vmcs(vcpu, cpu, NULL);

 	vmx_vcpu_pi_load(vcpu, cpu);
-
-	vmx->host_debugctlmsr = get_debugctlmsr();
 }

 void vmx_vcpu_put(struct kvm_vcpu *vcpu)
@@ -7454,8 +7450,8 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
 	}

 	/* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
-	if (vmx->host_debugctlmsr)
-		update_debugctlmsr(vmx->host_debugctlmsr);
+	if (vcpu->arch.host_debugctl)
+		update_debugctlmsr(vcpu->arch.host_debugctl);

 #ifndef CONFIG_X86_64
 	/*
@@ -339,8 +339,6 @@ struct vcpu_vmx {
 	/* apic deadline value in host tsc */
 	u64 hv_deadline_tsc;

-	unsigned long host_debugctlmsr;
-
 	/*
 	 * Only bits masked by msr_ia32_feature_control_valid_bits can be set in
 	 * msr_ia32_feature_control. FEAT_CTL_LOCKED is always included
@@ -10964,6 +10964,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		set_debugreg(0, 7);
 	}

+	vcpu->arch.host_debugctl = get_debugctlmsr();
+
 	guest_timing_enter_irqoff();

 	for (;;) {
@@ -269,28 +269,33 @@ static void __init probe_page_size_mask(void)
 }

 /*
- * INVLPG may not properly flush Global entries
- * on these CPUs when PCIDs are enabled.
+ * INVLPG may not properly flush Global entries on
+ * these CPUs. New microcode fixes the issue.
 */
 static const struct x86_cpu_id invlpg_miss_ids[] = {
-	X86_MATCH_VFM(INTEL_ALDERLAKE,		0),
-	X86_MATCH_VFM(INTEL_ALDERLAKE_L,	0),
-	X86_MATCH_VFM(INTEL_ATOM_GRACEMONT,	0),
-	X86_MATCH_VFM(INTEL_RAPTORLAKE,		0),
-	X86_MATCH_VFM(INTEL_RAPTORLAKE_P,	0),
-	X86_MATCH_VFM(INTEL_RAPTORLAKE_S,	0),
+	X86_MATCH_VFM(INTEL_ALDERLAKE,		0x2e),
+	X86_MATCH_VFM(INTEL_ALDERLAKE_L,	0x42c),
+	X86_MATCH_VFM(INTEL_ATOM_GRACEMONT,	0x11),
+	X86_MATCH_VFM(INTEL_RAPTORLAKE,		0x118),
+	X86_MATCH_VFM(INTEL_RAPTORLAKE_P,	0x4117),
+	X86_MATCH_VFM(INTEL_RAPTORLAKE_S,	0x2e),
 	{}
 };

 static void setup_pcid(void)
 {
+	const struct x86_cpu_id *invlpg_miss_match;
+
 	if (!IS_ENABLED(CONFIG_X86_64))
 		return;

 	if (!boot_cpu_has(X86_FEATURE_PCID))
 		return;

-	if (x86_match_cpu(invlpg_miss_ids)) {
+	invlpg_miss_match = x86_match_cpu(invlpg_miss_ids);
+
+	if (invlpg_miss_match &&
+	    boot_cpu_data.microcode < invlpg_miss_match->driver_data) {
 		pr_info("Incomplete global flushes, disabling PCID");
 		setup_clear_cpu_cap(X86_FEATURE_PCID);
 		return;
@@ -682,7 +682,7 @@ static void utf16_le_to_7bit(const __le16 *in, unsigned int size, u8 *out)
 	out[size] = 0;

 	while (i < size) {
-		u8 c = le16_to_cpu(in[i]) & 0xff;
+		u8 c = le16_to_cpu(in[i]) & 0x7f;

 		if (c && !isprint(c))
 			c = '!';
@@ -2665,9 +2665,12 @@ static int ublk_ctrl_set_params(struct ublk_device *ub,
 	if (ph.len > sizeof(struct ublk_params))
 		ph.len = sizeof(struct ublk_params);

-	/* parameters can only be changed when device isn't live */
 	mutex_lock(&ub->mutex);
-	if (ub->dev_info.state == UBLK_S_DEV_LIVE) {
+	if (test_bit(UB_STATE_USED, &ub->state)) {
+		/*
+		 * Parameters can only be changed when device hasn't
+		 * been started yet
+		 */
 		ret = -EACCES;
 	} else if (copy_from_user(&ub->params, argp, ph.len)) {
 		ret = -EFAULT;
@@ -3672,6 +3672,7 @@ static ssize_t force_poll_sync_write(struct file *file,
 }

 static const struct file_operations force_poll_sync_fops = {
+	.owner		= THIS_MODULE,
 	.open		= simple_open,
 	.read		= force_poll_sync_read,
 	.write		= force_poll_sync_write,
@@ -1040,8 +1040,9 @@ static void mhi_pci_recovery_work(struct work_struct *work)
 err_unprepare:
 	mhi_unprepare_after_power_down(mhi_cntrl);
 err_try_reset:
-	if (pci_reset_function(pdev))
-		dev_err(&pdev->dev, "Recovery failed\n");
+	err = pci_try_reset_function(pdev);
+	if (err)
+		dev_err(&pdev->dev, "Recovery failed: %d\n", err);
 }

 static void health_check(struct timer_list *t)
@@ -470,8 +470,12 @@ static ssize_t driver_override_show(struct device *dev,
 				    struct device_attribute *attr, char *buf)
 {
 	struct cdx_device *cdx_dev = to_cdx_device(dev);
+	ssize_t len;

-	return sysfs_emit(buf, "%s\n", cdx_dev->driver_override);
+	device_lock(dev);
+	len = sysfs_emit(buf, "%s\n", cdx_dev->driver_override);
+	device_unlock(dev);
+	return len;
 }
 static DEVICE_ATTR_RW(driver_override);
@@ -264,8 +264,8 @@ int misc_register(struct miscdevice *misc)
 		device_create_with_groups(&misc_class, misc->parent, dev,
 					  misc, misc->groups, "%s", misc->name);
 	if (IS_ERR(misc->this_device)) {
-		misc_minor_free(misc->minor);
 		if (is_dynamic) {
+			misc_minor_free(misc->minor);
 			misc->minor = MISC_DYNAMIC_MINOR;
 		}
 		err = PTR_ERR(misc->this_device);
@@ -121,10 +121,15 @@ static ssize_t new_device_store(struct device_driver *driver, const char *buf,
 	struct platform_device *pdev;
 	int res, id;

+	if (!try_module_get(THIS_MODULE))
+		return -ENOENT;
+
 	/* kernfs guarantees string termination, so count + 1 is safe */
 	aggr = kzalloc(sizeof(*aggr) + count + 1, GFP_KERNEL);
-	if (!aggr)
-		return -ENOMEM;
+	if (!aggr) {
+		res = -ENOMEM;
+		goto put_module;
+	}

 	memcpy(aggr->args, buf, count + 1);

@@ -163,6 +168,7 @@ static ssize_t new_device_store(struct device_driver *driver, const char *buf,
 	}

 	aggr->pdev = pdev;
+	module_put(THIS_MODULE);
 	return count;

 remove_table:
@@ -177,6 +183,8 @@ free_table:
 	kfree(aggr->lookups);
 free_ga:
 	kfree(aggr);
+put_module:
+	module_put(THIS_MODULE);
 	return res;
 }

@@ -205,13 +213,19 @@ static ssize_t delete_device_store(struct device_driver *driver,
 	if (error)
 		return error;

+	if (!try_module_get(THIS_MODULE))
+		return -ENOENT;
+
 	mutex_lock(&gpio_aggregator_lock);
 	aggr = idr_remove(&gpio_aggregator_idr, id);
 	mutex_unlock(&gpio_aggregator_lock);
-	if (!aggr)
+	if (!aggr) {
+		module_put(THIS_MODULE);
 		return -ENOENT;
+	}

 	gpio_aggregator_free(aggr);
+	module_put(THIS_MODULE);
 	return count;
 }
 static DRIVER_ATTR_WO(delete_device);
@@ -40,7 +40,7 @@ struct gpio_rcar_info {

 struct gpio_rcar_priv {
 	void __iomem *base;
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	struct device *dev;
 	struct gpio_chip gpio_chip;
 	unsigned int irq_parent;
@@ -123,7 +123,7 @@ static void gpio_rcar_config_interrupt_input_mode(struct gpio_rcar_priv *p,
 	 * "Setting Level-Sensitive Interrupt Input Mode"
 	 */

-	spin_lock_irqsave(&p->lock, flags);
+	raw_spin_lock_irqsave(&p->lock, flags);

 	/* Configure positive or negative logic in POSNEG */
 	gpio_rcar_modify_bit(p, POSNEG, hwirq, !active_high_rising_edge);
@@ -142,7 +142,7 @@ static void gpio_rcar_config_interrupt_input_mode(struct gpio_rcar_priv *p,
 	if (!level_trigger)
 		gpio_rcar_write(p, INTCLR, BIT(hwirq));

-	spin_unlock_irqrestore(&p->lock, flags);
+	raw_spin_unlock_irqrestore(&p->lock, flags);
 }

 static int gpio_rcar_irq_set_type(struct irq_data *d, unsigned int type)
@@ -246,7 +246,7 @@ static void gpio_rcar_config_general_input_output_mode(struct gpio_chip *chip,
 	 * "Setting General Input Mode"
 	 */

-	spin_lock_irqsave(&p->lock, flags);
+	raw_spin_lock_irqsave(&p->lock, flags);

 	/* Configure positive logic in POSNEG */
 	gpio_rcar_modify_bit(p, POSNEG, gpio, false);
@@ -261,7 +261,7 @@ static void gpio_rcar_config_general_input_output_mode(struct gpio_chip *chip,
 	if (p->info.has_outdtsel && output)
 		gpio_rcar_modify_bit(p, OUTDTSEL, gpio, false);

-	spin_unlock_irqrestore(&p->lock, flags);
+	raw_spin_unlock_irqrestore(&p->lock, flags);
 }

 static int gpio_rcar_request(struct gpio_chip *chip, unsigned offset)
@@ -347,7 +347,7 @@ static int gpio_rcar_get_multiple(struct gpio_chip *chip, unsigned long *mask,
 		return 0;
 	}

-	spin_lock_irqsave(&p->lock, flags);
+	raw_spin_lock_irqsave(&p->lock, flags);
 	outputs = gpio_rcar_read(p, INOUTSEL);
 	m = outputs & bankmask;
 	if (m)
@@ -356,7 +356,7 @@ static int gpio_rcar_get_multiple(struct gpio_chip *chip, unsigned long *mask,
 	m = ~outputs & bankmask;
 	if (m)
 		val |= gpio_rcar_read(p, INDT) & m;
-	spin_unlock_irqrestore(&p->lock, flags);
+	raw_spin_unlock_irqrestore(&p->lock, flags);

 	bits[0] = val;
 	return 0;
@@ -367,9 +367,9 @@ static void gpio_rcar_set(struct gpio_chip *chip, unsigned offset, int value)
 	struct gpio_rcar_priv *p = gpiochip_get_data(chip);
 	unsigned long flags;

-	spin_lock_irqsave(&p->lock, flags);
+	raw_spin_lock_irqsave(&p->lock, flags);
 	gpio_rcar_modify_bit(p, OUTDT, offset, value);
-	spin_unlock_irqrestore(&p->lock, flags);
+	raw_spin_unlock_irqrestore(&p->lock, flags);
 }

 static void gpio_rcar_set_multiple(struct gpio_chip *chip, unsigned long *mask,
@@ -386,12 +386,12 @@ static void gpio_rcar_set_multiple(struct gpio_chip *chip, unsigned long *mask,
 	if (!bankmask)
 		return;

-	spin_lock_irqsave(&p->lock, flags);
+	raw_spin_lock_irqsave(&p->lock, flags);
 	val = gpio_rcar_read(p, OUTDT);
 	val &= ~bankmask;
 	val |= (bankmask & bits[0]);
 	gpio_rcar_write(p, OUTDT, val);
-	spin_unlock_irqrestore(&p->lock, flags);
+	raw_spin_unlock_irqrestore(&p->lock, flags);
 }

 static int gpio_rcar_direction_output(struct gpio_chip *chip, unsigned offset,
@@ -468,7 +468,12 @@ static int gpio_rcar_parse_dt(struct gpio_rcar_priv *p, unsigned int *npins)
 	p->info = *info;

 	ret = of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, 0, &args);
-	*npins = ret == 0 ? args.args[2] : RCAR_MAX_GPIO_PER_BANK;
+	if (ret) {
+		*npins = RCAR_MAX_GPIO_PER_BANK;
+	} else {
+		*npins = args.args[2];
+		of_node_put(args.np);
+	}

 	if (*npins == 0 || *npins > RCAR_MAX_GPIO_PER_BANK) {
 		dev_warn(p->dev, "Invalid number of gpio lines %u, using %u\n",
@@ -505,7 +510,7 @@ static int gpio_rcar_probe(struct platform_device *pdev)
 		return -ENOMEM;

 	p->dev = dev;
-	spin_lock_init(&p->lock);
+	raw_spin_lock_init(&p->lock);

 	/* Get device configuration from DT node */
 	ret = gpio_rcar_parse_dt(p, &npins);
@@ -211,6 +211,18 @@ config DRM_DEBUG_MODESET_LOCK

	  If in doubt, say "N".

+config DRM_CLIENT_SELECTION
+	bool
+	depends on DRM
+	select DRM_CLIENT_SETUP if DRM_FBDEV_EMULATION
+	help
+	  Drivers that support in-kernel DRM clients have to select this
+	  option.
+
+config DRM_CLIENT_SETUP
+	bool
+	depends on DRM_CLIENT_SELECTION
+
 config DRM_FBDEV_EMULATION
	bool "Enable legacy fbdev support for your modesetting driver"
	depends on DRM
@@ -144,8 +144,12 @@ drm_kms_helper-y := \
 	drm_rect.o \
 	drm_self_refresh_helper.o \
 	drm_simple_kms_helper.o
+drm_kms_helper-$(CONFIG_DRM_CLIENT_SETUP) += \
+	drm_client_setup.o
 drm_kms_helper-$(CONFIG_DRM_PANEL_BRIDGE) += bridge/panel.o
-drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fb_helper.o
+drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += \
+	drm_fbdev_client.o \
+	drm_fb_helper.o
 obj-$(CONFIG_DRM_KMS_HELPER) += drm_kms_helper.o

 #
@@ -266,8 +266,8 @@ int kfd_queue_acquire_buffers(struct kfd_process_device *pdd, struct queue_prope
 	/* EOP buffer is not required for all ASICs */
 	if (properties->eop_ring_buffer_address) {
 		if (properties->eop_ring_buffer_size != topo_dev->node_props.eop_buffer_size) {
-			pr_debug("queue eop bo size 0x%lx not equal to node eop buf size 0x%x\n",
-				properties->eop_buf_bo->tbo.base.size,
+			pr_debug("queue eop bo size 0x%x not equal to node eop buf size 0x%x\n",
+				properties->eop_ring_buffer_size,
 				topo_dev->node_props.eop_buffer_size);
 			err = -EINVAL;
 			goto out_err_unreserve;
@@ -1455,7 +1455,8 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
 	DC_LOGGER_INIT(pipe_ctx->stream->ctx->logger);

 	/* Invalid input */
-	if (!plane_state->dst_rect.width ||
+	if (!plane_state ||
+	    !plane_state->dst_rect.width ||
 	    !plane_state->dst_rect.height ||
 	    !plane_state->src_rect.width ||
 	    !plane_state->src_rect.height) {
@@ -1883,16 +1883,6 @@ static int smu_v14_0_allow_ih_interrupt(struct smu_context *smu)
 					       NULL);
 }

-static int smu_v14_0_process_pending_interrupt(struct smu_context *smu)
-{
-	int ret = 0;
-
-	if (smu_cmn_feature_is_enabled(smu, SMU_FEATURE_ACDC_BIT))
-		ret = smu_v14_0_allow_ih_interrupt(smu);
-
-	return ret;
-}
-
 int smu_v14_0_enable_thermal_alert(struct smu_context *smu)
 {
 	int ret = 0;
@@ -1904,7 +1894,7 @@ int smu_v14_0_enable_thermal_alert(struct smu_context *smu)
 	if (ret)
 		return ret;

-	return smu_v14_0_process_pending_interrupt(smu);
+	return smu_v14_0_allow_ih_interrupt(smu);
 }

 int smu_v14_0_disable_thermal_alert(struct smu_context *smu)
drivers/gpu/drm/drm_client_setup.c (new file, 66 lines)
@@ -0,0 +1,66 @@
+// SPDX-License-Identifier: MIT
+
+#include <drm/drm_client_setup.h>
+#include <drm/drm_device.h>
+#include <drm/drm_fbdev_client.h>
+#include <drm/drm_fourcc.h>
+#include <drm/drm_print.h>
+
+/**
+ * drm_client_setup() - Setup in-kernel DRM clients
+ * @dev: DRM device
+ * @format: Preferred pixel format for the device. Use NULL, unless
+ *          there is clearly a driver-preferred format.
+ *
+ * This function sets up the in-kernel DRM clients. Restore, hotplug
+ * events and teardown are all taken care of.
+ *
+ * Drivers should call drm_client_setup() after registering the new
+ * DRM device with drm_dev_register(). This function is safe to call
+ * even when there are no connectors present. Setup will be retried
+ * on the next hotplug event.
+ *
+ * The clients are destroyed by drm_dev_unregister().
+ */
+void drm_client_setup(struct drm_device *dev, const struct drm_format_info *format)
+{
+	int ret;
+
+	ret = drm_fbdev_client_setup(dev, format);
+	if (ret)
+		drm_warn(dev, "Failed to set up DRM client; error %d\n", ret);
+}
+EXPORT_SYMBOL(drm_client_setup);
+
+/**
+ * drm_client_setup_with_fourcc() - Setup in-kernel DRM clients for color mode
+ * @dev: DRM device
+ * @fourcc: Preferred pixel format as 4CC code for the device
+ *
+ * This function sets up the in-kernel DRM clients. It is equivalent
+ * to drm_client_setup(), but expects a 4CC code as second argument.
+ */
+void drm_client_setup_with_fourcc(struct drm_device *dev, u32 fourcc)
+{
+	drm_client_setup(dev, drm_format_info(fourcc));
+}
+EXPORT_SYMBOL(drm_client_setup_with_fourcc);
+
+/**
+ * drm_client_setup_with_color_mode() - Setup in-kernel DRM clients for color mode
+ * @dev: DRM device
+ * @color_mode: Preferred color mode for the device
+ *
+ * This function sets up the in-kernel DRM clients. It is equivalent
+ * to drm_client_setup(), but expects a color mode as second argument.
+ *
+ * Do not use this function in new drivers. Prefer drm_client_setup() with a
+ * format of NULL.
+ */
+void drm_client_setup_with_color_mode(struct drm_device *dev, unsigned int color_mode)
+{
+	u32 fourcc = drm_driver_color_mode_format(dev, color_mode);
+
+	drm_client_setup_with_fourcc(dev, fourcc);
+}
+EXPORT_SYMBOL(drm_client_setup_with_color_mode);
@@ -492,8 +492,8 @@ EXPORT_SYMBOL(drm_fb_helper_init);
  * @fb_helper: driver-allocated fbdev helper
  *
  * A helper to alloc fb_info and the member cmap. Called by the driver
- * within the fb_probe fb_helper callback function. Drivers do not
- * need to release the allocated fb_info structure themselves, this is
+ * within the struct &drm_driver.fbdev_probe callback function. Drivers do
+ * not need to release the allocated fb_info structure themselves, this is
  * automatically done when calling drm_fb_helper_fini().
  *
  * RETURNS:
@@ -1443,67 +1443,27 @@ unlock:
 EXPORT_SYMBOL(drm_fb_helper_pan_display);

 static uint32_t drm_fb_helper_find_format(struct drm_fb_helper *fb_helper, const uint32_t *formats,
-					  size_t format_count, uint32_t bpp, uint32_t depth)
+					  size_t format_count, unsigned int color_mode)
 {
 	struct drm_device *dev = fb_helper->dev;
 	uint32_t format;
 	size_t i;

-	/*
-	 * Do not consider YUV or other complicated formats
-	 * for framebuffers. This means only legacy formats
-	 * are supported (fmt->depth is a legacy field), but
-	 * the framebuffer emulation can only deal with such
-	 * formats, specifically RGB/BGA formats.
-	 */
-	format = drm_mode_legacy_fb_format(bpp, depth);
-	if (!format)
-		goto err;
+	format = drm_driver_color_mode_format(dev, color_mode);
+	if (!format) {
+		drm_info(dev, "unsupported color mode of %d\n", color_mode);
+		return DRM_FORMAT_INVALID;
+	}

 	for (i = 0; i < format_count; ++i) {
 		if (formats[i] == format)
 			return format;
 	}

-err:
 	/* We found nothing. */
-	drm_warn(dev, "bpp/depth value of %u/%u not supported\n", bpp, depth);
+	drm_warn(dev, "format %p4cc not supported\n", &format);

 	return DRM_FORMAT_INVALID;
 }

-static uint32_t drm_fb_helper_find_color_mode_format(struct drm_fb_helper *fb_helper,
-						     const uint32_t *formats, size_t format_count,
-						     unsigned int color_mode)
-{
-	struct drm_device *dev = fb_helper->dev;
-	uint32_t bpp, depth;
-
-	switch (color_mode) {
-	case 1:
-	case 2:
-	case 4:
-	case 8:
-	case 16:
-	case 24:
-		bpp = depth = color_mode;
-		break;
-	case 15:
-		bpp = 16;
-		depth = 15;
-		break;
-	case 32:
-		bpp = 32;
-		depth = 24;
-		break;
-	default:
-		drm_info(dev, "unsupported color mode of %d\n", color_mode);
-		return DRM_FORMAT_INVALID;
-	}
-
-	return drm_fb_helper_find_format(fb_helper, formats, format_count, bpp, depth);
-}
-
 static int __drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper,
 				      struct drm_fb_helper_surface_size *sizes)
 {
@@ -1533,10 +1493,10 @@ static int __drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper,
 		if (!cmdline_mode->bpp_specified)
 			continue;

-		surface_format = drm_fb_helper_find_color_mode_format(fb_helper,
-								      plane->format_types,
-								      plane->format_count,
-								      cmdline_mode->bpp);
+		surface_format = drm_fb_helper_find_format(fb_helper,
+							   plane->format_types,
+							   plane->format_count,
+							   cmdline_mode->bpp);
 		if (surface_format != DRM_FORMAT_INVALID)
 			break; /* found supported format */
 	}
@@ -1546,10 +1506,10 @@ static int __drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper,
 			break; /* found supported format */

 		/* try preferred color mode */
-		surface_format = drm_fb_helper_find_color_mode_format(fb_helper,
-								      plane->format_types,
-								      plane->format_count,
-								      fb_helper->preferred_bpp);
+		surface_format = drm_fb_helper_find_format(fb_helper,
+							   plane->format_types,
+							   plane->format_count,
+							   fb_helper->preferred_bpp);
 		if (surface_format != DRM_FORMAT_INVALID)
 			break; /* found supported format */
 	}
@@ -1650,7 +1610,7 @@ static int drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper,

 /*
  * Allocates the backing storage and sets up the fbdev info structure through
- * the ->fb_probe callback.
+ * the ->fbdev_probe callback.
  */
 static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper)
 {
@@ -1668,7 +1628,10 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper)
 	}

 	/* push down into drivers */
-	ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
+	if (dev->driver->fbdev_probe)
+		ret = dev->driver->fbdev_probe(fb_helper, &sizes);
+	else if (fb_helper->funcs)
+		ret = fb_helper->funcs->fb_probe(fb_helper, &sizes);
 	if (ret < 0)
 		return ret;
@@ -1740,7 +1703,7 @@ static void drm_fb_helper_fill_var(struct fb_info *info,
  * instance and the drm framebuffer allocated in &drm_fb_helper.fb.
  *
  * Drivers should call this (or their equivalent setup code) from their
- * &drm_fb_helper_funcs.fb_probe callback after having allocated the fbdev
+ * &drm_driver.fbdev_probe callback after having allocated the fbdev
  * backing storage framebuffer.
  */
 void drm_fb_helper_fill_info(struct fb_info *info,
@@ -1896,7 +1859,7 @@ __drm_fb_helper_initial_config_and_unlock(struct drm_fb_helper *fb_helper)
  * Note that this also registers the fbdev and so allows userspace to call into
  * the driver through the fbdev interfaces.
  *
- * This function will call down into the &drm_fb_helper_funcs.fb_probe callback
+ * This function will call down into the &drm_driver.fbdev_probe callback
  * to let the driver allocate and initialize the fbdev info structure and the
  * drm framebuffer used to back the fbdev. drm_fb_helper_fill_info() is provided
  * as a helper to setup simple default values for the fbdev info structure.
drivers/gpu/drm/drm_fbdev_client.c (new file, 141 lines)
@@ -0,0 +1,141 @@
+// SPDX-License-Identifier: MIT
+
+#include <drm/drm_client.h>
+#include <drm/drm_crtc_helper.h>
+#include <drm/drm_drv.h>
+#include <drm/drm_fbdev_client.h>
+#include <drm/drm_fb_helper.h>
+#include <drm/drm_fourcc.h>
+#include <drm/drm_print.h>
+
+/*
+ * struct drm_client_funcs
+ */
+
+static void drm_fbdev_client_unregister(struct drm_client_dev *client)
+{
+	struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);
+
+	if (fb_helper->info) {
+		drm_fb_helper_unregister_info(fb_helper);
+	} else {
+		drm_client_release(&fb_helper->client);
+		drm_fb_helper_unprepare(fb_helper);
+		kfree(fb_helper);
+	}
+}
+
+static int drm_fbdev_client_restore(struct drm_client_dev *client)
+{
+	drm_fb_helper_lastclose(client->dev);
+
+	return 0;
+}
+
+static int drm_fbdev_client_hotplug(struct drm_client_dev *client)
+{
+	struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);
+	struct drm_device *dev = client->dev;
+	int ret;
+
+	if (dev->fb_helper)
+		return drm_fb_helper_hotplug_event(dev->fb_helper);
+
+	ret = drm_fb_helper_init(dev, fb_helper);
+	if (ret)
+		goto err_drm_err;
+
+	if (!drm_drv_uses_atomic_modeset(dev))
+		drm_helper_disable_unused_functions(dev);
+
+	ret = drm_fb_helper_initial_config(fb_helper);
+	if (ret)
+		goto err_drm_fb_helper_fini;
+
+	return 0;
+
+err_drm_fb_helper_fini:
+	drm_fb_helper_fini(fb_helper);
+err_drm_err:
+	drm_err(dev, "fbdev: Failed to setup emulation (ret=%d)\n", ret);
+	return ret;
+}
+
+static const struct drm_client_funcs drm_fbdev_client_funcs = {
+	.owner		= THIS_MODULE,
+	.unregister	= drm_fbdev_client_unregister,
+	.restore	= drm_fbdev_client_restore,
+	.hotplug	= drm_fbdev_client_hotplug,
+};
+
+/**
+ * drm_fbdev_client_setup() - Setup fbdev emulation
+ * @dev: DRM device
+ * @format: Preferred color format for the device. DRM_FORMAT_XRGB8888
+ *          is used if this is zero.
+ *
+ * This function sets up fbdev emulation. Restore, hotplug events and
+ * teardown are all taken care of. Drivers that do suspend/resume need
+ * to call drm_fb_helper_set_suspend_unlocked() themselves. Simple
+ * drivers might use drm_mode_config_helper_suspend().
+ *
+ * This function is safe to call even when there are no connectors present.
+ * Setup will be retried on the next hotplug event.
+ *
+ * The fbdev client is destroyed by drm_dev_unregister().
+ *
+ * Returns:
+ * 0 on success, or a negative errno code otherwise.
+ */
+int drm_fbdev_client_setup(struct drm_device *dev, const struct drm_format_info *format)
+{
+	struct drm_fb_helper *fb_helper;
+	unsigned int color_mode;
+	int ret;
+
+	/* TODO: Use format info throughout DRM */
+	if (format) {
+		unsigned int bpp = drm_format_info_bpp(format, 0);
+
+		switch (bpp) {
+		case 16:
+			color_mode = format->depth; // could also be 15
+			break;
+		default:
+			color_mode = bpp;
+		}
+	} else {
+		switch (dev->mode_config.preferred_depth) {
+		case 0:
+		case 24:
+			color_mode = 32;
+			break;
+		default:
+			color_mode = dev->mode_config.preferred_depth;
+		}
+	}
+
+	drm_WARN(dev, !dev->registered, "Device has not been registered.\n");
+	drm_WARN(dev, dev->fb_helper, "fb_helper is already set!\n");
+
+	fb_helper = kzalloc(sizeof(*fb_helper), GFP_KERNEL);
+	if (!fb_helper)
+		return -ENOMEM;
+	drm_fb_helper_prepare(dev, fb_helper, color_mode, NULL);
+
+	ret = drm_client_init(dev, &fb_helper->client, "fbdev", &drm_fbdev_client_funcs);
+	if (ret) {
+		drm_err(dev, "Failed to register client: %d\n", ret);
+		goto err_drm_client_init;
+	}
+
+	drm_client_register(&fb_helper->client);
+
+	return 0;
+
+err_drm_client_init:
+	drm_fb_helper_unprepare(fb_helper);
+	kfree(fb_helper);
+	return ret;
+}
+EXPORT_SYMBOL(drm_fbdev_client_setup);
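When no preferred format is passed, drm_fbdev_client_setup() above derives a legacy color mode from the device's preferred depth, treating 0 (unset) and 24 as color mode 32; with a format, a 16-bpp format contributes its depth (which may be 15) and anything else its bpp. A standalone sketch of the fallback mapping (illustrative helper, not the kernel function):

```c
#include <assert.h>

/* Illustrative helper mirroring the no-format branch of
 * drm_fbdev_client_setup(): a preferred depth of 0 (unset) or 24
 * maps to color mode 32; any other depth is used as-is. */
static unsigned int pick_color_mode(unsigned int preferred_depth)
{
        switch (preferred_depth) {
        case 0:
        case 24:
                return 32;
        default:
                return preferred_depth;
        }
}
```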
@@ -71,71 +71,7 @@ static const struct fb_ops drm_fbdev_ttm_fb_ops = {
 static int drm_fbdev_ttm_helper_fb_probe(struct drm_fb_helper *fb_helper,
 					 struct drm_fb_helper_surface_size *sizes)
 {
-	struct drm_client_dev *client = &fb_helper->client;
-	struct drm_device *dev = fb_helper->dev;
-	struct drm_client_buffer *buffer;
-	struct fb_info *info;
-	size_t screen_size;
-	void *screen_buffer;
-	u32 format;
-	int ret;
-
-	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
-		    sizes->surface_width, sizes->surface_height,
-		    sizes->surface_bpp);
-
-	format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp,
-					     sizes->surface_depth);
-	buffer = drm_client_framebuffer_create(client, sizes->surface_width,
-					       sizes->surface_height, format);
-	if (IS_ERR(buffer))
-		return PTR_ERR(buffer);
-
-	fb_helper->buffer = buffer;
-	fb_helper->fb = buffer->fb;
-
-	screen_size = buffer->gem->size;
-	screen_buffer = vzalloc(screen_size);
-	if (!screen_buffer) {
-		ret = -ENOMEM;
-		goto err_drm_client_framebuffer_delete;
-	}
-
-	info = drm_fb_helper_alloc_info(fb_helper);
-	if (IS_ERR(info)) {
-		ret = PTR_ERR(info);
-		goto err_vfree;
-	}
-
-	drm_fb_helper_fill_info(info, fb_helper, sizes);
-
-	info->fbops = &drm_fbdev_ttm_fb_ops;
-
-	/* screen */
-	info->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST;
-	info->screen_buffer = screen_buffer;
-	info->fix.smem_len = screen_size;
-
-	/* deferred I/O */
-	fb_helper->fbdefio.delay = HZ / 20;
-	fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io;
-
-	info->fbdefio = &fb_helper->fbdefio;
-	ret = fb_deferred_io_init(info);
-	if (ret)
-		goto err_drm_fb_helper_release_info;
-
-	return 0;
-
-err_drm_fb_helper_release_info:
-	drm_fb_helper_release_info(fb_helper);
-err_vfree:
-	vfree(screen_buffer);
-err_drm_client_framebuffer_delete:
-	fb_helper->fb = NULL;
-	fb_helper->buffer = NULL;
-	drm_client_framebuffer_delete(buffer);
-	return ret;
+	return drm_fbdev_ttm_driver_fbdev_probe(fb_helper, sizes);
 }

 static void drm_fbdev_ttm_damage_blit_real(struct drm_fb_helper *fb_helper,
@@ -240,6 +176,82 @@ static const struct drm_fb_helper_funcs drm_fbdev_ttm_helper_funcs = {
 	.fb_dirty = drm_fbdev_ttm_helper_fb_dirty,
 };

+/*
+ * struct drm_driver
+ */
+
+int drm_fbdev_ttm_driver_fbdev_probe(struct drm_fb_helper *fb_helper,
+				     struct drm_fb_helper_surface_size *sizes)
+{
+	struct drm_client_dev *client = &fb_helper->client;
+	struct drm_device *dev = fb_helper->dev;
+	struct drm_client_buffer *buffer;
+	struct fb_info *info;
+	size_t screen_size;
+	void *screen_buffer;
+	u32 format;
+	int ret;
+
+	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
+		    sizes->surface_width, sizes->surface_height,
+		    sizes->surface_bpp);
+
+	format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp,
+					     sizes->surface_depth);
+	buffer = drm_client_framebuffer_create(client, sizes->surface_width,
+					       sizes->surface_height, format);
+	if (IS_ERR(buffer))
+		return PTR_ERR(buffer);
+
+	fb_helper->funcs = &drm_fbdev_ttm_helper_funcs;
+	fb_helper->buffer = buffer;
+	fb_helper->fb = buffer->fb;
+
+	screen_size = buffer->gem->size;
+	screen_buffer = vzalloc(screen_size);
+	if (!screen_buffer) {
+		ret = -ENOMEM;
+		goto err_drm_client_framebuffer_delete;
+	}
+
+	info = drm_fb_helper_alloc_info(fb_helper);
+	if (IS_ERR(info)) {
+		ret = PTR_ERR(info);
+		goto err_vfree;
+	}
+
+	drm_fb_helper_fill_info(info, fb_helper, sizes);
+
+	info->fbops = &drm_fbdev_ttm_fb_ops;
+
+	/* screen */
+	info->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST;
+	info->screen_buffer = screen_buffer;
+	info->fix.smem_len = screen_size;
+
+	/* deferred I/O */
+	fb_helper->fbdefio.delay = HZ / 20;
+	fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io;
+
+	info->fbdefio = &fb_helper->fbdefio;
+	ret = fb_deferred_io_init(info);
+	if (ret)
+		goto err_drm_fb_helper_release_info;
+
+	return 0;
+
+err_drm_fb_helper_release_info:
+	drm_fb_helper_release_info(fb_helper);
+err_vfree:
+	vfree(screen_buffer);
+err_drm_client_framebuffer_delete:
+	fb_helper->fb = NULL;
+	fb_helper->buffer = NULL;
+	drm_client_framebuffer_delete(buffer);
+	return ret;
+}
+EXPORT_SYMBOL(drm_fbdev_ttm_driver_fbdev_probe);
+
 static void drm_fbdev_ttm_client_unregister(struct drm_client_dev *client)
 {
 	struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);
@@ -36,7 +36,6 @@
  * @depth: bit depth per pixel
  *
  * Computes a drm fourcc pixel format code for the given @bpp/@depth values.
- * Useful in fbdev emulation code, since that deals in those values.
  */
 uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth)
 {
@@ -140,6 +139,35 @@ uint32_t drm_driver_legacy_fb_format(struct drm_device *dev,
 }
 EXPORT_SYMBOL(drm_driver_legacy_fb_format);

+/**
+ * drm_driver_color_mode_format - Compute DRM 4CC code from color mode
+ * @dev: DRM device
+ * @color_mode: command-line color mode
+ *
+ * Computes a DRM 4CC pixel format code for the given color mode using
+ * drm_driver_legacy_fb_format(). The color mode is in the format used on
+ * the kernel command line. It specifies the number of bits per pixel
+ * and color depth in a single value.
+ *
+ * Useful in fbdev emulation code, since that deals in those values. The
+ * helper does not consider YUV or other complicated formats. This means
+ * only legacy formats are supported (fmt->depth is a legacy field), but
+ * the framebuffer emulation can only deal with such formats, specifically
+ * RGB/BGA formats.
+ */
+uint32_t drm_driver_color_mode_format(struct drm_device *dev, unsigned int color_mode)
+{
+	switch (color_mode) {
+	case 15:
+		return drm_driver_legacy_fb_format(dev, 16, 15);
+	case 32:
+		return drm_driver_legacy_fb_format(dev, 32, 24);
+	default:
+		return drm_driver_legacy_fb_format(dev, color_mode, color_mode);
+	}
+}
+EXPORT_SYMBOL(drm_driver_color_mode_format);

 /*
  * Internal function to query information for a given format. See
  * drm_format_info() for the public API.
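drm_driver_color_mode_format() above folds bits-per-pixel and color depth into one command-line style value; 15 (bpp 16, depth 15) and 32 (bpp 32, depth 24) are the only modes where the two differ. A standalone sketch of that mapping (illustrative helper that returns the bpp/depth pair instead of a 4CC code):

```c
#include <assert.h>

struct bpp_depth { unsigned int bpp, depth; };

/* Illustrative helper: the color-mode to bpp/depth mapping used by
 * drm_driver_color_mode_format(); every other mode maps to itself. */
static struct bpp_depth color_mode_to_bpp_depth(unsigned int color_mode)
{
        struct bpp_depth r;

        switch (color_mode) {
        case 15:
                r.bpp = 16; r.depth = 15;
                break;
        case 32:
                r.bpp = 32; r.depth = 24;
                break;
        default:
                r.bpp = r.depth = color_mode;
        }
        return r;
}
```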
@@ -209,12 +209,9 @@ const FORMAT_INFOS_QR_L: [u16; 8] = [
 impl Version {
     /// Returns the smallest QR version that can hold these segments.
     fn from_segments(segments: &[&Segment<'_>]) -> Option<Version> {
-        for v in (1..=40).map(|k| Version(k)) {
-            if v.max_data() * 8 >= segments.iter().map(|s| s.total_size_bits(v)).sum() {
-                return Some(v);
-            }
-        }
-        None
+        (1..=40)
+            .map(Version)
+            .find(|&v| v.max_data() * 8 >= segments.iter().map(|s| s.total_size_bits(v)).sum())
     }

     fn width(&self) -> u8 {
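Version::from_segments() above is a linear search for the first QR version (1 through 40) whose data capacity covers the encoded payload; the refactor only swaps an explicit loop for an iterator `find`. The same search, sketched in C against a caller-supplied capacity table (the capacities in the test are illustrative, not the real QR tables):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative helper: scan versions in ascending order and return
 * the first (1-based) whose capacity in bits fits the payload,
 * or 0 when nothing fits -- mirroring Option<Version>. */
static int smallest_version(const unsigned int *capacity_bits, size_t n,
                            unsigned int needed_bits)
{
        for (size_t v = 0; v < n; v++) {
                if (capacity_bits[v] >= needed_bits)
                        return (int)(v + 1);
        }
        return 0;
}
```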
@@ -242,7 +239,7 @@ impl Version {
     }

     fn alignment_pattern(&self) -> &'static [u8] {
-        &ALIGNMENT_PATTERNS[self.0 - 1]
+        ALIGNMENT_PATTERNS[self.0 - 1]
     }

     fn poly(&self) -> &'static [u8] {
@@ -479,7 +476,7 @@ struct EncodedMsg<'a> {
 /// Data to be put in the QR code, with correct segment encoding, padding, and
 /// Error Code Correction.
 impl EncodedMsg<'_> {
-    fn new<'a, 'b>(segments: &[&Segment<'b>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> {
+    fn new<'a>(segments: &[&Segment<'_>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> {
         let version = Version::from_segments(segments)?;
         let ec_size = version.ec_size();
         let g1_blocks = version.g1_blocks();
@@ -492,7 +489,7 @@ impl EncodedMsg<'_> {
         data.fill(0);

         let mut em = EncodedMsg {
-            data: data,
+            data,
             ec_size,
             g1_blocks,
             g2_blocks,
@@ -722,7 +719,10 @@ impl QrImage<'_> {

     fn is_finder(&self, x: u8, y: u8) -> bool {
         let end = self.width - 8;
-        (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8)
+        #[expect(clippy::nonminimal_bool)]
+        {
+            (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8)
+        }
     }

     // Alignment pattern: 5x5 squares in a grid.
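The is_finder() predicate above marks the three 8x8 finder-pattern corners of the symbol (top-left, top-right, bottom-left); the change only wraps the unchanged expression in a clippy `expect` block. An equivalent standalone C sketch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative port of QrImage::is_finder(): width is the module
 * count of the symbol; the bottom-right corner is deliberately
 * not a finder pattern. */
static bool is_finder(uint8_t width, uint8_t x, uint8_t y)
{
        uint8_t end = width - 8;

        return (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8);
}
```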
@@ -931,7 +931,7 @@ impl QrImage<'_> {
 /// They must remain valid for the duration of the function call.
 #[no_mangle]
 pub unsafe extern "C" fn drm_panic_qr_generate(
-    url: *const i8,
+    url: *const kernel::ffi::c_char,
     data: *mut u8,
     data_len: usize,
     data_size: usize,
@@ -978,10 +978,11 @@ pub unsafe extern "C" fn drm_panic_qr_generate(
 /// * `url_len`: Length of the URL.
 ///
 /// * If `url_len` > 0, remove the 2 segments header/length and also count the
-/// conversion to numeric segments.
+///   conversion to numeric segments.
 /// * If `url_len` = 0, only removes 3 bytes for 1 binary segment.
 #[no_mangle]
 pub extern "C" fn drm_panic_qr_max_data_size(version: u8, url_len: usize) -> usize {
+    #[expect(clippy::manual_range_contains)]
     if version < 1 || version > 40 {
         return 0;
     }
@@ -416,7 +416,8 @@ static int i9xx_plane_min_cdclk(const struct intel_crtc_state *crtc_state,
 	return DIV_ROUND_UP(pixel_rate * num, den);
 }

-static void i9xx_plane_update_noarm(struct intel_plane *plane,
+static void i9xx_plane_update_noarm(struct intel_dsb *dsb,
+				    struct intel_plane *plane,
 				    const struct intel_crtc_state *crtc_state,
 				    const struct intel_plane_state *plane_state)
 {
@@ -444,7 +445,8 @@ static void i9xx_plane_update_noarm(struct intel_plane *plane,
 	}
 }

-static void i9xx_plane_update_arm(struct intel_plane *plane,
+static void i9xx_plane_update_arm(struct intel_dsb *dsb,
+				  struct intel_plane *plane,
 				  const struct intel_crtc_state *crtc_state,
 				  const struct intel_plane_state *plane_state)
 {
@@ -507,7 +509,8 @@ static void i9xx_plane_update_arm(struct intel_plane *plane,
 		  intel_plane_ggtt_offset(plane_state) + dspaddr_offset);
 }

-static void i830_plane_update_arm(struct intel_plane *plane,
+static void i830_plane_update_arm(struct intel_dsb *dsb,
+				  struct intel_plane *plane,
 				  const struct intel_crtc_state *crtc_state,
 				  const struct intel_plane_state *plane_state)
 {
@@ -517,11 +520,12 @@ static void i830_plane_update_arm(struct intel_plane *plane,
 	 * Additional breakage on i830 causes register reads to return
 	 * the last latched value instead of the last written value [ALM026].
 	 */
-	i9xx_plane_update_noarm(plane, crtc_state, plane_state);
-	i9xx_plane_update_arm(plane, crtc_state, plane_state);
+	i9xx_plane_update_noarm(dsb, plane, crtc_state, plane_state);
+	i9xx_plane_update_arm(dsb, plane, crtc_state, plane_state);
 }

-static void i9xx_plane_disable_arm(struct intel_plane *plane,
+static void i9xx_plane_disable_arm(struct intel_dsb *dsb,
+				   struct intel_plane *plane,
 				   const struct intel_crtc_state *crtc_state)
 {
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
@@ -549,7 +553,8 @@ static void i9xx_plane_disable_arm(struct intel_plane *plane,
 }

 static void
-g4x_primary_async_flip(struct intel_plane *plane,
+g4x_primary_async_flip(struct intel_dsb *dsb,
+		       struct intel_plane *plane,
 		       const struct intel_crtc_state *crtc_state,
 		       const struct intel_plane_state *plane_state,
 		       bool async_flip)
@@ -569,7 +574,8 @@ g4x_primary_async_flip(struct intel_plane *plane,
 }

 static void
-vlv_primary_async_flip(struct intel_plane *plane,
+vlv_primary_async_flip(struct intel_dsb *dsb,
+		       struct intel_plane *plane,
 		       const struct intel_crtc_state *crtc_state,
 		       const struct intel_plane_state *plane_state,
 		       bool async_flip)
@@ -790,7 +790,8 @@ skl_next_plane_to_commit(struct intel_atomic_state *state,
 	return NULL;
 }

-void intel_plane_update_noarm(struct intel_plane *plane,
+void intel_plane_update_noarm(struct intel_dsb *dsb,
+			      struct intel_plane *plane,
 			      const struct intel_crtc_state *crtc_state,
 			      const struct intel_plane_state *plane_state)
 {
@@ -799,10 +800,11 @@ void intel_plane_update_noarm(struct intel_plane *plane,
 	trace_intel_plane_update_noarm(plane, crtc);

 	if (plane->update_noarm)
-		plane->update_noarm(plane, crtc_state, plane_state);
+		plane->update_noarm(dsb, plane, crtc_state, plane_state);
 }

-void intel_plane_async_flip(struct intel_plane *plane,
+void intel_plane_async_flip(struct intel_dsb *dsb,
+			    struct intel_plane *plane,
 			    const struct intel_crtc_state *crtc_state,
 			    const struct intel_plane_state *plane_state,
 			    bool async_flip)
@@ -810,34 +812,37 @@ void intel_plane_async_flip(struct intel_plane *plane,
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);

 	trace_intel_plane_async_flip(plane, crtc, async_flip);
-	plane->async_flip(plane, crtc_state, plane_state, async_flip);
+	plane->async_flip(dsb, plane, crtc_state, plane_state, async_flip);
 }

-void intel_plane_update_arm(struct intel_plane *plane,
+void intel_plane_update_arm(struct intel_dsb *dsb,
+			    struct intel_plane *plane,
 			    const struct intel_crtc_state *crtc_state,
 			    const struct intel_plane_state *plane_state)
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);

 	if (crtc_state->do_async_flip && plane->async_flip) {
-		intel_plane_async_flip(plane, crtc_state, plane_state, true);
+		intel_plane_async_flip(dsb, plane, crtc_state, plane_state, true);
 		return;
 	}

 	trace_intel_plane_update_arm(plane, crtc);
-	plane->update_arm(plane, crtc_state, plane_state);
+	plane->update_arm(dsb, plane, crtc_state, plane_state);
 }

-void intel_plane_disable_arm(struct intel_plane *plane,
+void intel_plane_disable_arm(struct intel_dsb *dsb,
+			     struct intel_plane *plane,
 			     const struct intel_crtc_state *crtc_state)
 {
 	struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);

 	trace_intel_plane_disable_arm(plane, crtc);
-	plane->disable_arm(plane, crtc_state);
+	plane->disable_arm(dsb, plane, crtc_state);
 }

-void intel_crtc_planes_update_noarm(struct intel_atomic_state *state,
+void intel_crtc_planes_update_noarm(struct intel_dsb *dsb,
+				    struct intel_atomic_state *state,
 				    struct intel_crtc *crtc)
 {
 	struct intel_crtc_state *new_crtc_state =
@@ -862,11 +867,13 @@ void intel_crtc_planes_update_noarm(struct intel_atomic_state *state,
 		/* TODO: for mailbox updates this should be skipped */
 		if (new_plane_state->uapi.visible ||
 		    new_plane_state->planar_slave)
-			intel_plane_update_noarm(plane, new_crtc_state, new_plane_state);
+			intel_plane_update_noarm(dsb, plane,
+						 new_crtc_state, new_plane_state);
 	}
 }

-static void skl_crtc_planes_update_arm(struct intel_atomic_state *state,
+static void skl_crtc_planes_update_arm(struct intel_dsb *dsb,
+				       struct intel_atomic_state *state,
 				       struct intel_crtc *crtc)
 {
 	struct intel_crtc_state *old_crtc_state =
@@ -893,13 +900,14 @@ static void skl_crtc_planes_update_arm(struct intel_atomic_state *state,
 		 */
 		if (new_plane_state->uapi.visible ||
 		    new_plane_state->planar_slave)
-			intel_plane_update_arm(plane, new_crtc_state, new_plane_state);
+			intel_plane_update_arm(dsb, plane, new_crtc_state, new_plane_state);
 		else
-			intel_plane_disable_arm(plane, new_crtc_state);
+			intel_plane_disable_arm(dsb, plane, new_crtc_state);
 	}
 }

-static void i9xx_crtc_planes_update_arm(struct intel_atomic_state *state,
+static void i9xx_crtc_planes_update_arm(struct intel_dsb *dsb,
+					struct intel_atomic_state *state,
 					struct intel_crtc *crtc)
 {
 	struct intel_crtc_state *new_crtc_state =
@@ -919,21 +927,22 @@ static void i9xx_crtc_planes_update_arm(struct intel_atomic_state *state,
 		 * would have to be called here as well.
 		 */
 		if (new_plane_state->uapi.visible)
-			intel_plane_update_arm(plane, new_crtc_state, new_plane_state);
+			intel_plane_update_arm(dsb, plane, new_crtc_state, new_plane_state);
 		else
-			intel_plane_disable_arm(plane, new_crtc_state);
+			intel_plane_disable_arm(dsb, plane, new_crtc_state);
 	}
 }

-void intel_crtc_planes_update_arm(struct intel_atomic_state *state,
+void intel_crtc_planes_update_arm(struct intel_dsb *dsb,
+				  struct intel_atomic_state *state,
 				  struct intel_crtc *crtc)
 {
 	struct drm_i915_private *i915 = to_i915(state->base.dev);
|
||||
|
||||
if (DISPLAY_VER(i915) >= 9)
|
||||
skl_crtc_planes_update_arm(state, crtc);
|
||||
skl_crtc_planes_update_arm(dsb, state, crtc);
|
||||
else
|
||||
i9xx_crtc_planes_update_arm(state, crtc);
|
||||
i9xx_crtc_planes_update_arm(dsb, state, crtc);
|
||||
}
|
||||
|
||||
int intel_atomic_plane_check_clipping(struct intel_plane_state *plane_state,
|
||||
|
||||
@@ -14,6 +14,7 @@ struct drm_rect;
 struct intel_atomic_state;
 struct intel_crtc;
 struct intel_crtc_state;
+struct intel_dsb;
 struct intel_plane;
 struct intel_plane_state;
 enum plane_id;
@@ -32,26 +33,32 @@ void intel_plane_copy_uapi_to_hw_state(struct intel_plane_state *plane_state,
 				       struct intel_crtc *crtc);
 void intel_plane_copy_hw_state(struct intel_plane_state *plane_state,
 			       const struct intel_plane_state *from_plane_state);
-void intel_plane_async_flip(struct intel_plane *plane,
+void intel_plane_async_flip(struct intel_dsb *dsb,
+			    struct intel_plane *plane,
 			    const struct intel_crtc_state *crtc_state,
 			    const struct intel_plane_state *plane_state,
 			    bool async_flip);
-void intel_plane_update_noarm(struct intel_plane *plane,
+void intel_plane_update_noarm(struct intel_dsb *dsb,
+			      struct intel_plane *plane,
 			      const struct intel_crtc_state *crtc_state,
 			      const struct intel_plane_state *plane_state);
-void intel_plane_update_arm(struct intel_plane *plane,
+void intel_plane_update_arm(struct intel_dsb *dsb,
+			    struct intel_plane *plane,
 			    const struct intel_crtc_state *crtc_state,
 			    const struct intel_plane_state *plane_state);
-void intel_plane_disable_arm(struct intel_plane *plane,
+void intel_plane_disable_arm(struct intel_dsb *dsb,
+			     struct intel_plane *plane,
 			     const struct intel_crtc_state *crtc_state);
 struct intel_plane *intel_plane_alloc(void);
 void intel_plane_free(struct intel_plane *plane);
 struct drm_plane_state *intel_plane_duplicate_state(struct drm_plane *plane);
 void intel_plane_destroy_state(struct drm_plane *plane,
 			       struct drm_plane_state *state);
-void intel_crtc_planes_update_noarm(struct intel_atomic_state *state,
+void intel_crtc_planes_update_noarm(struct intel_dsb *dsb,
+				    struct intel_atomic_state *state,
 				    struct intel_crtc *crtc);
-void intel_crtc_planes_update_arm(struct intel_atomic_state *state,
+void intel_crtc_planes_update_arm(struct intel_dsb *dsb,
+				  struct intel_atomic_state *state,
 				  struct intel_crtc *crtc);
 int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_state,
 					struct intel_crtc_state *crtc_state,
@@ -1912,6 +1912,23 @@ void intel_color_post_update(const struct intel_crtc_state *crtc_state)
 		i915->display.funcs.color->color_post_update(crtc_state);
 }
 
+void intel_color_modeset(const struct intel_crtc_state *crtc_state)
+{
+	struct intel_display *display = to_intel_display(crtc_state);
+
+	intel_color_load_luts(crtc_state);
+	intel_color_commit_noarm(crtc_state);
+	intel_color_commit_arm(crtc_state);
+
+	if (DISPLAY_VER(display) < 9) {
+		struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+		struct intel_plane *plane = to_intel_plane(crtc->base.primary);
+
+		/* update DSPCNTR to configure gamma/csc for pipe bottom color */
+		plane->disable_arm(NULL, plane, crtc_state);
+	}
+}
+
 void intel_color_prepare_commit(struct intel_atomic_state *state,
 				struct intel_crtc *crtc)
 {
@@ -28,6 +28,7 @@ void intel_color_commit_noarm(const struct intel_crtc_state *crtc_state);
 void intel_color_commit_arm(const struct intel_crtc_state *crtc_state);
 void intel_color_post_update(const struct intel_crtc_state *crtc_state);
 void intel_color_load_luts(const struct intel_crtc_state *crtc_state);
+void intel_color_modeset(const struct intel_crtc_state *crtc_state);
 void intel_color_get_config(struct intel_crtc_state *crtc_state);
 bool intel_color_lut_equal(const struct intel_crtc_state *crtc_state,
 			   const struct drm_property_blob *blob1,
@@ -275,7 +275,8 @@ static int i845_check_cursor(struct intel_crtc_state *crtc_state,
 }
 
 /* TODO: split into noarm+arm pair */
-static void i845_cursor_update_arm(struct intel_plane *plane,
+static void i845_cursor_update_arm(struct intel_dsb *dsb,
+				   struct intel_plane *plane,
 				   const struct intel_crtc_state *crtc_state,
 				   const struct intel_plane_state *plane_state)
 {
@@ -315,10 +316,11 @@ static void i845_cursor_update_arm(struct intel_plane *plane,
 	}
 }
 
-static void i845_cursor_disable_arm(struct intel_plane *plane,
+static void i845_cursor_disable_arm(struct intel_dsb *dsb,
+				    struct intel_plane *plane,
 				    const struct intel_crtc_state *crtc_state)
 {
-	i845_cursor_update_arm(plane, crtc_state, NULL);
+	i845_cursor_update_arm(dsb, plane, crtc_state, NULL);
 }
 
 static bool i845_cursor_get_hw_state(struct intel_plane *plane,
@@ -527,22 +529,25 @@ static int i9xx_check_cursor(struct intel_crtc_state *crtc_state,
 	return 0;
 }
 
-static void i9xx_cursor_disable_sel_fetch_arm(struct intel_plane *plane,
+static void i9xx_cursor_disable_sel_fetch_arm(struct intel_dsb *dsb,
+					      struct intel_plane *plane,
 					      const struct intel_crtc_state *crtc_state)
 {
-	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	enum pipe pipe = plane->pipe;
 
 	if (!crtc_state->enable_psr2_sel_fetch)
 		return;
 
-	intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe), 0);
+	intel_de_write_dsb(display, dsb, SEL_FETCH_CUR_CTL(pipe), 0);
 }
 
-static void wa_16021440873(struct intel_plane *plane,
+static void wa_16021440873(struct intel_dsb *dsb,
+			   struct intel_plane *plane,
 			   const struct intel_crtc_state *crtc_state,
 			   const struct intel_plane_state *plane_state)
 {
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
 	u32 ctl = plane_state->ctl;
 	int et_y_position = drm_rect_height(&crtc_state->pipe_src) + 1;
@@ -551,16 +556,18 @@ static void wa_16021440873(struct intel_plane *plane,
 	ctl &= ~MCURSOR_MODE_MASK;
 	ctl |= MCURSOR_MODE_64_2B;
 
-	intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe), ctl);
+	intel_de_write_dsb(display, dsb, SEL_FETCH_CUR_CTL(pipe), ctl);
 
-	intel_de_write(dev_priv, CURPOS_ERLY_TPT(dev_priv, pipe),
-		       CURSOR_POS_Y(et_y_position));
+	intel_de_write_dsb(display, dsb, CURPOS_ERLY_TPT(dev_priv, pipe),
+			   CURSOR_POS_Y(et_y_position));
 }
 
-static void i9xx_cursor_update_sel_fetch_arm(struct intel_plane *plane,
+static void i9xx_cursor_update_sel_fetch_arm(struct intel_dsb *dsb,
+					     struct intel_plane *plane,
 					     const struct intel_crtc_state *crtc_state,
 					     const struct intel_plane_state *plane_state)
 {
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
 	enum pipe pipe = plane->pipe;
 
@@ -571,19 +578,17 @@ static void i9xx_cursor_update_sel_fetch_arm(struct intel_plane *plane,
 		if (crtc_state->enable_psr2_su_region_et) {
 			u32 val = intel_cursor_position(crtc_state, plane_state,
 							true);
-			intel_de_write_fw(dev_priv,
-					  CURPOS_ERLY_TPT(dev_priv, pipe),
-					  val);
+
+			intel_de_write_dsb(display, dsb, CURPOS_ERLY_TPT(dev_priv, pipe), val);
 		}
 
-		intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe),
-				  plane_state->ctl);
+		intel_de_write_dsb(display, dsb, SEL_FETCH_CUR_CTL(pipe), plane_state->ctl);
 	} else {
 		/* Wa_16021440873 */
 		if (crtc_state->enable_psr2_su_region_et)
-			wa_16021440873(plane, crtc_state, plane_state);
+			wa_16021440873(dsb, plane, crtc_state, plane_state);
 		else
-			i9xx_cursor_disable_sel_fetch_arm(plane, crtc_state);
+			i9xx_cursor_disable_sel_fetch_arm(dsb, plane, crtc_state);
 	}
 }
 
@@ -610,9 +615,11 @@ static u32 skl_cursor_wm_reg_val(const struct skl_wm_level *level)
 	return val;
 }
 
-static void skl_write_cursor_wm(struct intel_plane *plane,
+static void skl_write_cursor_wm(struct intel_dsb *dsb,
+				struct intel_plane *plane,
 				const struct intel_crtc_state *crtc_state)
 {
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	struct drm_i915_private *i915 = to_i915(plane->base.dev);
 	enum plane_id plane_id = plane->id;
 	enum pipe pipe = plane->pipe;
@@ -622,30 +629,32 @@ static void skl_write_cursor_wm(struct intel_plane *plane,
 	int level;
 
 	for (level = 0; level < i915->display.wm.num_levels; level++)
-		intel_de_write_fw(i915, CUR_WM(pipe, level),
-				  skl_cursor_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
+		intel_de_write_dsb(display, dsb, CUR_WM(pipe, level),
+				   skl_cursor_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
 
-	intel_de_write_fw(i915, CUR_WM_TRANS(pipe),
-			  skl_cursor_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
+	intel_de_write_dsb(display, dsb, CUR_WM_TRANS(pipe),
+			   skl_cursor_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
 
 	if (HAS_HW_SAGV_WM(i915)) {
 		const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
 
-		intel_de_write_fw(i915, CUR_WM_SAGV(pipe),
-				  skl_cursor_wm_reg_val(&wm->sagv.wm0));
-		intel_de_write_fw(i915, CUR_WM_SAGV_TRANS(pipe),
-				  skl_cursor_wm_reg_val(&wm->sagv.trans_wm));
+		intel_de_write_dsb(display, dsb, CUR_WM_SAGV(pipe),
+				   skl_cursor_wm_reg_val(&wm->sagv.wm0));
+		intel_de_write_dsb(display, dsb, CUR_WM_SAGV_TRANS(pipe),
+				   skl_cursor_wm_reg_val(&wm->sagv.trans_wm));
 	}
 
-	intel_de_write_fw(i915, CUR_BUF_CFG(pipe),
-			  skl_cursor_ddb_reg_val(ddb));
+	intel_de_write_dsb(display, dsb, CUR_BUF_CFG(pipe),
+			   skl_cursor_ddb_reg_val(ddb));
 }
 
 /* TODO: split into noarm+arm pair */
-static void i9xx_cursor_update_arm(struct intel_plane *plane,
+static void i9xx_cursor_update_arm(struct intel_dsb *dsb,
+				   struct intel_plane *plane,
 				   const struct intel_crtc_state *crtc_state,
 				   const struct intel_plane_state *plane_state)
 {
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
 	enum pipe pipe = plane->pipe;
 	u32 cntl = 0, base = 0, pos = 0, fbc_ctl = 0;
@@ -685,38 +694,36 @@ static void i9xx_cursor_update_arm(struct intel_plane *plane,
 	 */
 
 	if (DISPLAY_VER(dev_priv) >= 9)
-		skl_write_cursor_wm(plane, crtc_state);
+		skl_write_cursor_wm(dsb, plane, crtc_state);
 
 	if (plane_state)
-		i9xx_cursor_update_sel_fetch_arm(plane, crtc_state,
-						 plane_state);
+		i9xx_cursor_update_sel_fetch_arm(dsb, plane, crtc_state, plane_state);
 	else
-		i9xx_cursor_disable_sel_fetch_arm(plane, crtc_state);
+		i9xx_cursor_disable_sel_fetch_arm(dsb, plane, crtc_state);
 
 	if (plane->cursor.base != base ||
 	    plane->cursor.size != fbc_ctl ||
 	    plane->cursor.cntl != cntl) {
 		if (HAS_CUR_FBC(dev_priv))
-			intel_de_write_fw(dev_priv,
-					  CUR_FBC_CTL(dev_priv, pipe),
-					  fbc_ctl);
-		intel_de_write_fw(dev_priv, CURCNTR(dev_priv, pipe), cntl);
-		intel_de_write_fw(dev_priv, CURPOS(dev_priv, pipe), pos);
-		intel_de_write_fw(dev_priv, CURBASE(dev_priv, pipe), base);
+			intel_de_write_dsb(display, dsb, CUR_FBC_CTL(dev_priv, pipe), fbc_ctl);
+		intel_de_write_dsb(display, dsb, CURCNTR(dev_priv, pipe), cntl);
+		intel_de_write_dsb(display, dsb, CURPOS(dev_priv, pipe), pos);
+		intel_de_write_dsb(display, dsb, CURBASE(dev_priv, pipe), base);
 
 		plane->cursor.base = base;
 		plane->cursor.size = fbc_ctl;
 		plane->cursor.cntl = cntl;
 	} else {
-		intel_de_write_fw(dev_priv, CURPOS(dev_priv, pipe), pos);
-		intel_de_write_fw(dev_priv, CURBASE(dev_priv, pipe), base);
+		intel_de_write_dsb(display, dsb, CURPOS(dev_priv, pipe), pos);
+		intel_de_write_dsb(display, dsb, CURBASE(dev_priv, pipe), base);
 	}
 }
 
-static void i9xx_cursor_disable_arm(struct intel_plane *plane,
+static void i9xx_cursor_disable_arm(struct intel_dsb *dsb,
+				    struct intel_plane *plane,
 				    const struct intel_crtc_state *crtc_state)
 {
-	i9xx_cursor_update_arm(plane, crtc_state, NULL);
+	i9xx_cursor_update_arm(dsb, plane, crtc_state, NULL);
}
 
 static bool i9xx_cursor_get_hw_state(struct intel_plane *plane,
@@ -905,10 +912,10 @@ intel_legacy_cursor_update(struct drm_plane *_plane,
 	}
 
 	if (new_plane_state->uapi.visible) {
-		intel_plane_update_noarm(plane, crtc_state, new_plane_state);
-		intel_plane_update_arm(plane, crtc_state, new_plane_state);
+		intel_plane_update_noarm(NULL, plane, crtc_state, new_plane_state);
+		intel_plane_update_arm(NULL, plane, crtc_state, new_plane_state);
 	} else {
-		intel_plane_disable_arm(plane, crtc_state);
+		intel_plane_disable_arm(NULL, plane, crtc_state);
 	}
 
 	local_irq_enable();
@@ -8,6 +8,7 @@
 
 #include "i915_drv.h"
 #include "i915_trace.h"
+#include "intel_dsb.h"
 #include "intel_uncore.h"
 
 static inline struct intel_uncore *__to_uncore(struct intel_display *display)
@@ -233,4 +234,14 @@ __intel_de_write_notrace(struct intel_display *display, i915_reg_t reg,
 }
 #define intel_de_write_notrace(p,...) __intel_de_write_notrace(__to_intel_display(p), __VA_ARGS__)
 
+static __always_inline void
+intel_de_write_dsb(struct intel_display *display, struct intel_dsb *dsb,
+		   i915_reg_t reg, u32 val)
+{
+	if (dsb)
+		intel_dsb_reg_write(dsb, reg, val);
+	else
+		intel_de_write_fw(display, reg, val);
+}
+
 #endif /* __INTEL_DE_H__ */
@@ -135,7 +135,8 @@
|
||||
static void intel_set_transcoder_timings(const struct intel_crtc_state *crtc_state);
|
||||
static void intel_set_pipe_src_size(const struct intel_crtc_state *crtc_state);
|
||||
static void hsw_set_transconf(const struct intel_crtc_state *crtc_state);
|
||||
static void bdw_set_pipe_misc(const struct intel_crtc_state *crtc_state);
|
||||
static void bdw_set_pipe_misc(struct intel_dsb *dsb,
|
||||
const struct intel_crtc_state *crtc_state);
|
||||
|
||||
/* returns HPLL frequency in kHz */
|
||||
int vlv_get_hpll_vco(struct drm_i915_private *dev_priv)
|
||||
@@ -715,7 +716,7 @@ void intel_plane_disable_noatomic(struct intel_crtc *crtc,
|
||||
if (DISPLAY_VER(dev_priv) == 2 && !crtc_state->active_planes)
|
||||
intel_set_cpu_fifo_underrun_reporting(dev_priv, crtc->pipe, false);
|
||||
|
||||
intel_plane_disable_arm(plane, crtc_state);
|
||||
intel_plane_disable_arm(NULL, plane, crtc_state);
|
||||
intel_crtc_wait_for_next_vblank(crtc);
|
||||
}
|
||||
|
||||
@@ -1172,8 +1173,8 @@ static void intel_crtc_async_flip_disable_wa(struct intel_atomic_state *state,
|
||||
* Apart from the async flip bit we want to
|
||||
* preserve the old state for the plane.
|
||||
*/
|
||||
intel_plane_async_flip(plane, old_crtc_state,
|
||||
old_plane_state, false);
|
||||
intel_plane_async_flip(NULL, plane,
|
||||
old_crtc_state, old_plane_state, false);
|
||||
need_vbl_wait = true;
|
||||
}
|
||||
}
|
||||
@@ -1315,7 +1316,7 @@ static void intel_crtc_disable_planes(struct intel_atomic_state *state,
|
||||
!(update_mask & BIT(plane->id)))
|
||||
continue;
|
||||
|
||||
intel_plane_disable_arm(plane, new_crtc_state);
|
||||
intel_plane_disable_arm(NULL, plane, new_crtc_state);
|
||||
|
||||
if (old_plane_state->uapi.visible)
|
||||
fb_bits |= plane->frontbuffer_bit;
|
||||
@@ -1502,14 +1503,6 @@ static void intel_encoders_update_pipe(struct intel_atomic_state *state,
|
||||
}
|
||||
}
|
||||
|
||||
static void intel_disable_primary_plane(const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
struct intel_plane *plane = to_intel_plane(crtc->base.primary);
|
||||
|
||||
plane->disable_arm(plane, crtc_state);
|
||||
}
|
||||
|
||||
static void ilk_configure_cpu_transcoder(const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
@@ -1575,11 +1568,7 @@ static void ilk_crtc_enable(struct intel_atomic_state *state,
|
||||
* On ILK+ LUT must be loaded before the pipe is running but with
|
||||
* clocks enabled
|
||||
*/
|
||||
intel_color_load_luts(new_crtc_state);
|
||||
intel_color_commit_noarm(new_crtc_state);
|
||||
intel_color_commit_arm(new_crtc_state);
|
||||
/* update DSPCNTR to configure gamma for pipe bottom color */
|
||||
intel_disable_primary_plane(new_crtc_state);
|
||||
intel_color_modeset(new_crtc_state);
|
||||
|
||||
intel_initial_watermarks(state, crtc);
|
||||
intel_enable_transcoder(new_crtc_state);
|
||||
@@ -1716,7 +1705,7 @@ static void hsw_crtc_enable(struct intel_atomic_state *state,
|
||||
intel_set_pipe_src_size(pipe_crtc_state);
|
||||
|
||||
if (DISPLAY_VER(dev_priv) >= 9 || IS_BROADWELL(dev_priv))
|
||||
bdw_set_pipe_misc(pipe_crtc_state);
|
||||
bdw_set_pipe_misc(NULL, pipe_crtc_state);
|
||||
}
|
||||
|
||||
if (!transcoder_is_dsi(cpu_transcoder))
|
||||
@@ -1741,12 +1730,7 @@ static void hsw_crtc_enable(struct intel_atomic_state *state,
|
||||
* On ILK+ LUT must be loaded before the pipe is running but with
|
||||
* clocks enabled
|
||||
*/
|
||||
intel_color_load_luts(pipe_crtc_state);
|
||||
intel_color_commit_noarm(pipe_crtc_state);
|
||||
intel_color_commit_arm(pipe_crtc_state);
|
||||
/* update DSPCNTR to configure gamma/csc for pipe bottom color */
|
||||
if (DISPLAY_VER(dev_priv) < 9)
|
||||
intel_disable_primary_plane(pipe_crtc_state);
|
||||
intel_color_modeset(pipe_crtc_state);
|
||||
|
||||
hsw_set_linetime_wm(pipe_crtc_state);
|
||||
|
||||
@@ -2147,11 +2131,7 @@ static void valleyview_crtc_enable(struct intel_atomic_state *state,
|
||||
|
||||
i9xx_pfit_enable(new_crtc_state);
|
||||
|
||||
intel_color_load_luts(new_crtc_state);
|
||||
intel_color_commit_noarm(new_crtc_state);
|
||||
intel_color_commit_arm(new_crtc_state);
|
||||
/* update DSPCNTR to configure gamma for pipe bottom color */
|
||||
intel_disable_primary_plane(new_crtc_state);
|
||||
intel_color_modeset(new_crtc_state);
|
||||
|
||||
intel_initial_watermarks(state, crtc);
|
||||
intel_enable_transcoder(new_crtc_state);
|
||||
@@ -2187,11 +2167,7 @@ static void i9xx_crtc_enable(struct intel_atomic_state *state,
|
||||
|
||||
i9xx_pfit_enable(new_crtc_state);
|
||||
|
||||
intel_color_load_luts(new_crtc_state);
|
||||
intel_color_commit_noarm(new_crtc_state);
|
||||
intel_color_commit_arm(new_crtc_state);
|
||||
/* update DSPCNTR to configure gamma for pipe bottom color */
|
||||
intel_disable_primary_plane(new_crtc_state);
|
||||
intel_color_modeset(new_crtc_state);
|
||||
|
||||
if (!intel_initial_watermarks(state, crtc))
|
||||
intel_update_watermarks(dev_priv);
|
||||
@@ -3246,9 +3222,11 @@ static void hsw_set_transconf(const struct intel_crtc_state *crtc_state)
|
||||
intel_de_posting_read(dev_priv, TRANSCONF(dev_priv, cpu_transcoder));
|
||||
}
|
||||
|
||||
static void bdw_set_pipe_misc(const struct intel_crtc_state *crtc_state)
|
||||
static void bdw_set_pipe_misc(struct intel_dsb *dsb,
|
||||
const struct intel_crtc_state *crtc_state)
|
||||
{
|
||||
struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
|
||||
struct intel_display *display = to_intel_display(crtc->base.dev);
|
||||
struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
|
||||
u32 val = 0;
|
||||
|
||||
@@ -3293,7 +3271,7 @@ static void bdw_set_pipe_misc(const struct intel_crtc_state *crtc_state)
|
||||
if (IS_BROADWELL(dev_priv))
|
||||
val |= PIPE_MISC_PSR_MASK_SPRITE_ENABLE;
|
||||
|
||||
intel_de_write(dev_priv, PIPE_MISC(crtc->pipe), val);
|
||||
intel_de_write_dsb(display, dsb, PIPE_MISC(crtc->pipe), val);
|
||||
}
|
||||
|
||||
int bdw_get_pipe_misc_bpp(struct intel_crtc *crtc)
|
||||
@@ -6846,7 +6824,7 @@ static void commit_pipe_pre_planes(struct intel_atomic_state *state,
|
||||
intel_color_commit_arm(new_crtc_state);
|
||||
|
||||
if (DISPLAY_VER(dev_priv) >= 9 || IS_BROADWELL(dev_priv))
|
||||
bdw_set_pipe_misc(new_crtc_state);
|
||||
bdw_set_pipe_misc(NULL, new_crtc_state);
|
||||
|
||||
if (intel_crtc_needs_fastset(new_crtc_state))
|
||||
intel_pipe_fastset(old_crtc_state, new_crtc_state);
|
||||
@@ -6946,7 +6924,7 @@ static void intel_pre_update_crtc(struct intel_atomic_state *state,
|
||||
intel_crtc_needs_color_update(new_crtc_state))
|
||||
intel_color_commit_noarm(new_crtc_state);
|
||||
|
||||
intel_crtc_planes_update_noarm(state, crtc);
|
||||
intel_crtc_planes_update_noarm(NULL, state, crtc);
|
||||
}
|
||||
|
||||
static void intel_update_crtc(struct intel_atomic_state *state,
|
||||
@@ -6962,7 +6940,7 @@ static void intel_update_crtc(struct intel_atomic_state *state,
|
||||
|
||||
commit_pipe_pre_planes(state, crtc);
|
||||
|
||||
intel_crtc_planes_update_arm(state, crtc);
|
||||
intel_crtc_planes_update_arm(NULL, state, crtc);
|
||||
|
||||
commit_pipe_post_planes(state, crtc);
|
||||
|
||||
|
||||
@@ -1036,6 +1036,10 @@ struct intel_csc_matrix {
 	u16 postoff[3];
 };
 
+void intel_io_mmio_fw_write(void *ctx, i915_reg_t reg, u32 val);
+
+typedef void (*intel_io_reg_write)(void *ctx, i915_reg_t reg, u32 val);
+
 struct intel_crtc_state {
 	/*
 	 * uapi (drm) state. This is the software state shown to userspace.
@@ -1578,22 +1582,26 @@ struct intel_plane {
 			       u32 pixel_format, u64 modifier,
 			       unsigned int rotation);
 	/* Write all non-self arming plane registers */
-	void (*update_noarm)(struct intel_plane *plane,
+	void (*update_noarm)(struct intel_dsb *dsb,
+			     struct intel_plane *plane,
 			     const struct intel_crtc_state *crtc_state,
 			     const struct intel_plane_state *plane_state);
 	/* Write all self-arming plane registers */
-	void (*update_arm)(struct intel_plane *plane,
+	void (*update_arm)(struct intel_dsb *dsb,
+			   struct intel_plane *plane,
 			   const struct intel_crtc_state *crtc_state,
 			   const struct intel_plane_state *plane_state);
 	/* Disable the plane, must arm */
-	void (*disable_arm)(struct intel_plane *plane,
+	void (*disable_arm)(struct intel_dsb *dsb,
+			    struct intel_plane *plane,
 			    const struct intel_crtc_state *crtc_state);
 	bool (*get_hw_state)(struct intel_plane *plane, enum pipe *pipe);
 	int (*check_plane)(struct intel_crtc_state *crtc_state,
 			   struct intel_plane_state *plane_state);
 	int (*min_cdclk)(const struct intel_crtc_state *crtc_state,
 			 const struct intel_plane_state *plane_state);
-	void (*async_flip)(struct intel_plane *plane,
+	void (*async_flip)(struct intel_dsb *dsb,
+			   struct intel_plane *plane,
 			   const struct intel_crtc_state *crtc_state,
 			   const struct intel_plane_state *plane_state,
 			   bool async_flip);
@@ -378,7 +378,8 @@ static void vlv_sprite_update_gamma(const struct intel_plane_state *plane_state)
 }
 
 static void
-vlv_sprite_update_noarm(struct intel_plane *plane,
+vlv_sprite_update_noarm(struct intel_dsb *dsb,
+			struct intel_plane *plane,
 			const struct intel_crtc_state *crtc_state,
 			const struct intel_plane_state *plane_state)
 {
@@ -399,7 +400,8 @@ vlv_sprite_update_noarm(struct intel_plane *plane,
 }
 
 static void
-vlv_sprite_update_arm(struct intel_plane *plane,
+vlv_sprite_update_arm(struct intel_dsb *dsb,
+		      struct intel_plane *plane,
 		      const struct intel_crtc_state *crtc_state,
 		      const struct intel_plane_state *plane_state)
 {
@@ -449,7 +451,8 @@ vlv_sprite_update_arm(struct intel_plane *plane,
 }
 
 static void
-vlv_sprite_disable_arm(struct intel_plane *plane,
+vlv_sprite_disable_arm(struct intel_dsb *dsb,
+		       struct intel_plane *plane,
 		       const struct intel_crtc_state *crtc_state)
 {
 	struct intel_display *display = to_intel_display(plane->base.dev);
@@ -795,7 +798,8 @@ static void ivb_sprite_update_gamma(const struct intel_plane_state *plane_state)
 }
 
 static void
-ivb_sprite_update_noarm(struct intel_plane *plane,
+ivb_sprite_update_noarm(struct intel_dsb *dsb,
+			struct intel_plane *plane,
 			const struct intel_crtc_state *crtc_state,
 			const struct intel_plane_state *plane_state)
 {
@@ -826,7 +830,8 @@ ivb_sprite_update_noarm(struct intel_plane *plane,
 }
 
 static void
-ivb_sprite_update_arm(struct intel_plane *plane,
+ivb_sprite_update_arm(struct intel_dsb *dsb,
+		      struct intel_plane *plane,
 		      const struct intel_crtc_state *crtc_state,
 		      const struct intel_plane_state *plane_state)
 {
@@ -874,7 +879,8 @@ ivb_sprite_update_arm(struct intel_plane *plane,
 }
 
 static void
-ivb_sprite_disable_arm(struct intel_plane *plane,
+ivb_sprite_disable_arm(struct intel_dsb *dsb,
+		       struct intel_plane *plane,
 		       const struct intel_crtc_state *crtc_state)
 {
 	struct intel_display *display = to_intel_display(plane->base.dev);
@@ -1133,7 +1139,8 @@ static void ilk_sprite_update_gamma(const struct intel_plane_state *plane_state)
 }
 
 static void
-g4x_sprite_update_noarm(struct intel_plane *plane,
+g4x_sprite_update_noarm(struct intel_dsb *dsb,
+			struct intel_plane *plane,
 			const struct intel_crtc_state *crtc_state,
 			const struct intel_plane_state *plane_state)
 {
@@ -1162,7 +1169,8 @@ g4x_sprite_update_noarm(struct intel_plane *plane,
 }
 
 static void
-g4x_sprite_update_arm(struct intel_plane *plane,
+g4x_sprite_update_arm(struct intel_dsb *dsb,
+		      struct intel_plane *plane,
 		      const struct intel_crtc_state *crtc_state,
 		      const struct intel_plane_state *plane_state)
 {
@@ -1206,7 +1214,8 @@ g4x_sprite_update_arm(struct intel_plane *plane,
 }
 
 static void
-g4x_sprite_disable_arm(struct intel_plane *plane,
+g4x_sprite_disable_arm(struct intel_dsb *dsb,
+		       struct intel_plane *plane,
 		       const struct intel_crtc_state *crtc_state)
 {
 	struct intel_display *display = to_intel_display(plane->base.dev);
@@ -589,11 +589,11 @@ static u32 skl_plane_min_alignment(struct intel_plane *plane,
|
||||
* in full-range YCbCr.
|
||||
*/
|
||||
static void
|
||||
icl_program_input_csc(struct intel_plane *plane,
|
||||
const struct intel_crtc_state *crtc_state,
|
||||
icl_program_input_csc(struct intel_dsb *dsb,
|
||||
struct intel_plane *plane,
|
||||
const struct intel_plane_state *plane_state)
|
||||
{
|
||||
struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
|
||||
struct intel_display *display = to_intel_display(plane->base.dev);
|
||||
enum pipe pipe = plane->pipe;
|
||||
 	enum plane_id plane_id = plane->id;
 
@@ -637,31 +637,31 @@ icl_program_input_csc(struct intel_plane *plane,
 	};
 	const u16 *csc = input_csc_matrix[plane_state->hw.color_encoding];
 
-	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0),
-			  ROFF(csc[0]) | GOFF(csc[1]));
-	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 1),
-			  BOFF(csc[2]));
-	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 2),
-			  ROFF(csc[3]) | GOFF(csc[4]));
-	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 3),
-			  BOFF(csc[5]));
-	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 4),
-			  ROFF(csc[6]) | GOFF(csc[7]));
-	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 5),
-			  BOFF(csc[8]));
+	intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0),
+			   ROFF(csc[0]) | GOFF(csc[1]));
+	intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 1),
+			   BOFF(csc[2]));
+	intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 2),
+			   ROFF(csc[3]) | GOFF(csc[4]));
+	intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 3),
+			   BOFF(csc[5]));
+	intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 4),
+			   ROFF(csc[6]) | GOFF(csc[7]));
+	intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 5),
+			   BOFF(csc[8]));
 
-	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0),
-			  PREOFF_YUV_TO_RGB_HI);
-	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
-			  PREOFF_YUV_TO_RGB_ME);
-	intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2),
-			  PREOFF_YUV_TO_RGB_LO);
-	intel_de_write_fw(dev_priv,
-			  PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 0), 0x0);
-	intel_de_write_fw(dev_priv,
-			  PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 1), 0x0);
-	intel_de_write_fw(dev_priv,
-			  PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 2), 0x0);
+	intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0),
+			   PREOFF_YUV_TO_RGB_HI);
+	intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
+			   PREOFF_YUV_TO_RGB_ME);
+	intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2),
+			   PREOFF_YUV_TO_RGB_LO);
+	intel_de_write_dsb(display, dsb,
+			   PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 0), 0x0);
+	intel_de_write_dsb(display, dsb,
+			   PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 1), 0x0);
+	intel_de_write_dsb(display, dsb,
+			   PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 2), 0x0);
 }
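For readers unfamiliar with the registers being programmed above: PLANE_INPUT_CSC applies a 3x3 matrix with per-channel pre- and post-offsets to each pixel. A rough, self-contained float model of that pipeline (the fixed-point ROFF/GOFF/BOFF register packing is omitted; the function name and the plain-float representation are illustrative only, not the hardware encoding):

```c
#include <assert.h>

/* Hypothetical float model of the plane input CSC:
 * out = M * (in + preoff) + postoff, with M a row-major 3x3 matrix. */
static void csc_apply(const float m[9], const float pre[3],
		      const float post[3], const float in[3], float out[3])
{
	float t[3];

	for (int i = 0; i < 3; i++)
		t[i] = in[i] + pre[i];
	for (int i = 0; i < 3; i++)
		out[i] = m[3 * i] * t[0] + m[3 * i + 1] * t[1] +
			 m[3 * i + 2] * t[2] + post[i];
}
```

With an identity matrix and zero offsets the pixel passes through unchanged, which is essentially what icl_plane_csc_load_black later inverts by loading all-zero coefficients to force black output.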
 
 static unsigned int skl_plane_stride_mult(const struct drm_framebuffer *fb,
@@ -715,9 +715,11 @@ static u32 skl_plane_wm_reg_val(const struct skl_wm_level *level)
 	return val;
 }
 
-static void skl_write_plane_wm(struct intel_plane *plane,
+static void skl_write_plane_wm(struct intel_dsb *dsb,
+			       struct intel_plane *plane,
 			       const struct intel_crtc_state *crtc_state)
 {
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	struct drm_i915_private *i915 = to_i915(plane->base.dev);
 	enum plane_id plane_id = plane->id;
 	enum pipe pipe = plane->pipe;
@@ -729,71 +731,75 @@ static void skl_write_plane_wm(struct intel_plane *plane,
 	int level;
 
 	for (level = 0; level < i915->display.wm.num_levels; level++)
-		intel_de_write_fw(i915, PLANE_WM(pipe, plane_id, level),
-				  skl_plane_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
+		intel_de_write_dsb(display, dsb, PLANE_WM(pipe, plane_id, level),
+				   skl_plane_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
 
-	intel_de_write_fw(i915, PLANE_WM_TRANS(pipe, plane_id),
-			  skl_plane_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
+	intel_de_write_dsb(display, dsb, PLANE_WM_TRANS(pipe, plane_id),
+			   skl_plane_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
 
 	if (HAS_HW_SAGV_WM(i915)) {
 		const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
 
-		intel_de_write_fw(i915, PLANE_WM_SAGV(pipe, plane_id),
-				  skl_plane_wm_reg_val(&wm->sagv.wm0));
-		intel_de_write_fw(i915, PLANE_WM_SAGV_TRANS(pipe, plane_id),
-				  skl_plane_wm_reg_val(&wm->sagv.trans_wm));
+		intel_de_write_dsb(display, dsb, PLANE_WM_SAGV(pipe, plane_id),
+				   skl_plane_wm_reg_val(&wm->sagv.wm0));
+		intel_de_write_dsb(display, dsb, PLANE_WM_SAGV_TRANS(pipe, plane_id),
+				   skl_plane_wm_reg_val(&wm->sagv.trans_wm));
 	}
 
-	intel_de_write_fw(i915, PLANE_BUF_CFG(pipe, plane_id),
-			  skl_plane_ddb_reg_val(ddb));
+	intel_de_write_dsb(display, dsb, PLANE_BUF_CFG(pipe, plane_id),
+			   skl_plane_ddb_reg_val(ddb));
 
 	if (DISPLAY_VER(i915) < 11)
-		intel_de_write_fw(i915, PLANE_NV12_BUF_CFG(pipe, plane_id),
-				  skl_plane_ddb_reg_val(ddb_y));
+		intel_de_write_dsb(display, dsb, PLANE_NV12_BUF_CFG(pipe, plane_id),
+				   skl_plane_ddb_reg_val(ddb_y));
 }
 
 static void
-skl_plane_disable_arm(struct intel_plane *plane,
+skl_plane_disable_arm(struct intel_dsb *dsb,
+		      struct intel_plane *plane,
 		      const struct intel_crtc_state *crtc_state)
 {
-	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	enum plane_id plane_id = plane->id;
 	enum pipe pipe = plane->pipe;
 
-	skl_write_plane_wm(plane, crtc_state);
+	skl_write_plane_wm(dsb, plane, crtc_state);
 
-	intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), 0);
-	intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id), 0);
+	intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id), 0);
 }
 
-static void icl_plane_disable_sel_fetch_arm(struct intel_plane *plane,
+static void icl_plane_disable_sel_fetch_arm(struct intel_dsb *dsb,
+					    struct intel_plane *plane,
 					    const struct intel_crtc_state *crtc_state)
 {
-	struct drm_i915_private *i915 = to_i915(plane->base.dev);
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	enum pipe pipe = plane->pipe;
 
 	if (!crtc_state->enable_psr2_sel_fetch)
 		return;
 
-	intel_de_write_fw(i915, SEL_FETCH_PLANE_CTL(pipe, plane->id), 0);
+	intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_CTL(pipe, plane->id), 0);
 }
 
 static void
-icl_plane_disable_arm(struct intel_plane *plane,
+icl_plane_disable_arm(struct intel_dsb *dsb,
+		      struct intel_plane *plane,
 		      const struct intel_crtc_state *crtc_state)
 {
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
 	enum plane_id plane_id = plane->id;
 	enum pipe pipe = plane->pipe;
 
 	if (icl_is_hdr_plane(dev_priv, plane_id))
-		intel_de_write_fw(dev_priv, PLANE_CUS_CTL(pipe, plane_id), 0);
+		intel_de_write_dsb(display, dsb, PLANE_CUS_CTL(pipe, plane_id), 0);
 
-	skl_write_plane_wm(plane, crtc_state);
+	skl_write_plane_wm(dsb, plane, crtc_state);
 
-	icl_plane_disable_sel_fetch_arm(plane, crtc_state);
-	intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), 0);
-	intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), 0);
+	icl_plane_disable_sel_fetch_arm(dsb, plane, crtc_state);
+	intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id), 0);
+	intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id), 0);
 }
 
 static bool
@@ -1230,28 +1236,30 @@ static u32 skl_plane_keymsk(const struct intel_plane_state *plane_state)
 	return keymsk;
 }
 
-static void icl_plane_csc_load_black(struct intel_plane *plane)
+static void icl_plane_csc_load_black(struct intel_dsb *dsb,
+				     struct intel_plane *plane,
+				     const struct intel_crtc_state *crtc_state)
 {
-	struct drm_i915_private *i915 = to_i915(plane->base.dev);
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	enum plane_id plane_id = plane->id;
 	enum pipe pipe = plane->pipe;
 
-	intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 0), 0);
-	intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 1), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 0), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 1), 0);
 
-	intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 2), 0);
-	intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 3), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 2), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 3), 0);
 
-	intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 4), 0);
-	intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 5), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 4), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 5), 0);
 
-	intel_de_write_fw(i915, PLANE_CSC_PREOFF(pipe, plane_id, 0), 0);
-	intel_de_write_fw(i915, PLANE_CSC_PREOFF(pipe, plane_id, 1), 0);
-	intel_de_write_fw(i915, PLANE_CSC_PREOFF(pipe, plane_id, 2), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_PREOFF(pipe, plane_id, 0), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_PREOFF(pipe, plane_id, 1), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_PREOFF(pipe, plane_id, 2), 0);
 
-	intel_de_write_fw(i915, PLANE_CSC_POSTOFF(pipe, plane_id, 0), 0);
-	intel_de_write_fw(i915, PLANE_CSC_POSTOFF(pipe, plane_id, 1), 0);
-	intel_de_write_fw(i915, PLANE_CSC_POSTOFF(pipe, plane_id, 2), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_POSTOFF(pipe, plane_id, 0), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_POSTOFF(pipe, plane_id, 1), 0);
+	intel_de_write_dsb(display, dsb, PLANE_CSC_POSTOFF(pipe, plane_id, 2), 0);
 }
 
 static int icl_plane_color_plane(const struct intel_plane_state *plane_state)
@@ -1264,11 +1272,12 @@ static int icl_plane_color_plane(const struct intel_plane_state *plane_state)
 }
 
 static void
-skl_plane_update_noarm(struct intel_plane *plane,
+skl_plane_update_noarm(struct intel_dsb *dsb,
+		       struct intel_plane *plane,
 		       const struct intel_crtc_state *crtc_state,
 		       const struct intel_plane_state *plane_state)
 {
-	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	enum plane_id plane_id = plane->id;
 	enum pipe pipe = plane->pipe;
 	u32 stride = skl_plane_stride(plane_state, 0);
@@ -1283,21 +1292,23 @@ skl_plane_update_noarm(struct intel_plane *plane,
 		crtc_y = 0;
 	}
 
-	intel_de_write_fw(dev_priv, PLANE_STRIDE(pipe, plane_id),
-			  PLANE_STRIDE_(stride));
-	intel_de_write_fw(dev_priv, PLANE_POS(pipe, plane_id),
-			  PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x));
-	intel_de_write_fw(dev_priv, PLANE_SIZE(pipe, plane_id),
-			  PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1));
+	intel_de_write_dsb(display, dsb, PLANE_STRIDE(pipe, plane_id),
+			   PLANE_STRIDE_(stride));
+	intel_de_write_dsb(display, dsb, PLANE_POS(pipe, plane_id),
+			   PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x));
+	intel_de_write_dsb(display, dsb, PLANE_SIZE(pipe, plane_id),
+			   PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1));
 
-	skl_write_plane_wm(plane, crtc_state);
+	skl_write_plane_wm(dsb, plane, crtc_state);
 }
 
 static void
-skl_plane_update_arm(struct intel_plane *plane,
+skl_plane_update_arm(struct intel_dsb *dsb,
+		     struct intel_plane *plane,
 		     const struct intel_crtc_state *crtc_state,
 		     const struct intel_plane_state *plane_state)
 {
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
 	enum plane_id plane_id = plane->id;
 	enum pipe pipe = plane->pipe;
@@ -1317,22 +1328,26 @@ skl_plane_update_arm(struct intel_plane *plane,
 	plane_color_ctl = plane_state->color_ctl |
 		glk_plane_color_ctl_crtc(crtc_state);
 
-	intel_de_write_fw(dev_priv, PLANE_KEYVAL(pipe, plane_id), skl_plane_keyval(plane_state));
-	intel_de_write_fw(dev_priv, PLANE_KEYMSK(pipe, plane_id), skl_plane_keymsk(plane_state));
-	intel_de_write_fw(dev_priv, PLANE_KEYMAX(pipe, plane_id), skl_plane_keymax(plane_state));
+	intel_de_write_dsb(display, dsb, PLANE_KEYVAL(pipe, plane_id),
+			   skl_plane_keyval(plane_state));
+	intel_de_write_dsb(display, dsb, PLANE_KEYMSK(pipe, plane_id),
+			   skl_plane_keymsk(plane_state));
+	intel_de_write_dsb(display, dsb, PLANE_KEYMAX(pipe, plane_id),
+			   skl_plane_keymax(plane_state));
 
-	intel_de_write_fw(dev_priv, PLANE_OFFSET(pipe, plane_id),
-			  PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x));
+	intel_de_write_dsb(display, dsb, PLANE_OFFSET(pipe, plane_id),
+			   PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x));
 
-	intel_de_write_fw(dev_priv, PLANE_AUX_DIST(pipe, plane_id),
-			  skl_plane_aux_dist(plane_state, 0));
+	intel_de_write_dsb(display, dsb, PLANE_AUX_DIST(pipe, plane_id),
+			   skl_plane_aux_dist(plane_state, 0));
 
-	intel_de_write_fw(dev_priv, PLANE_AUX_OFFSET(pipe, plane_id),
-			  PLANE_OFFSET_Y(plane_state->view.color_plane[1].y) |
-			  PLANE_OFFSET_X(plane_state->view.color_plane[1].x));
+	intel_de_write_dsb(display, dsb, PLANE_AUX_OFFSET(pipe, plane_id),
+			   PLANE_OFFSET_Y(plane_state->view.color_plane[1].y) |
+			   PLANE_OFFSET_X(plane_state->view.color_plane[1].x));
 
 	if (DISPLAY_VER(dev_priv) >= 10)
-		intel_de_write_fw(dev_priv, PLANE_COLOR_CTL(pipe, plane_id), plane_color_ctl);
+		intel_de_write_dsb(display, dsb, PLANE_COLOR_CTL(pipe, plane_id),
+				   plane_color_ctl);
 
 	/*
 	 * Enable the scaler before the plane so that we don't
@@ -1349,17 +1364,19 @@ skl_plane_update_arm(struct intel_plane *plane,
 	 * disabled. Try to make the plane enable atomic by writing
 	 * the control register just before the surface register.
 	 */
-	intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl);
-	intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id),
-			  skl_plane_surf(plane_state, 0));
+	intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id),
+			   plane_ctl);
+	intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id),
+			   skl_plane_surf(plane_state, 0));
 }
 
-static void icl_plane_update_sel_fetch_noarm(struct intel_plane *plane,
+static void icl_plane_update_sel_fetch_noarm(struct intel_dsb *dsb,
+					     struct intel_plane *plane,
 					     const struct intel_crtc_state *crtc_state,
 					     const struct intel_plane_state *plane_state,
 					     int color_plane)
 {
-	struct drm_i915_private *i915 = to_i915(plane->base.dev);
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	enum pipe pipe = plane->pipe;
 	const struct drm_rect *clip;
 	u32 val;
@@ -1376,7 +1393,7 @@ static void icl_plane_update_sel_fetch_noarm(struct intel_plane *plane,
 	y = (clip->y1 + plane_state->uapi.dst.y1);
 	val = y << 16;
 	val |= plane_state->uapi.dst.x1;
-	intel_de_write_fw(i915, SEL_FETCH_PLANE_POS(pipe, plane->id), val);
+	intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_POS(pipe, plane->id), val);
 
 	x = plane_state->view.color_plane[color_plane].x;
 
@@ -1391,20 +1408,21 @@ static void icl_plane_update_sel_fetch_noarm(struct intel_plane *plane,
 
 	val = y << 16 | x;
 
-	intel_de_write_fw(i915, SEL_FETCH_PLANE_OFFSET(pipe, plane->id),
-			  val);
+	intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_OFFSET(pipe, plane->id), val);
 
 	/* Sizes are 0 based */
 	val = (drm_rect_height(clip) - 1) << 16;
 	val |= (drm_rect_width(&plane_state->uapi.src) >> 16) - 1;
-	intel_de_write_fw(i915, SEL_FETCH_PLANE_SIZE(pipe, plane->id), val);
+	intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_SIZE(pipe, plane->id), val);
 }
 
 static void
-icl_plane_update_noarm(struct intel_plane *plane,
+icl_plane_update_noarm(struct intel_dsb *dsb,
+		       struct intel_plane *plane,
 		       const struct intel_crtc_state *crtc_state,
 		       const struct intel_plane_state *plane_state)
 {
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
 	enum plane_id plane_id = plane->id;
 	enum pipe pipe = plane->pipe;
@@ -1428,76 +1446,82 @@ icl_plane_update_noarm(struct intel_plane *plane,
 		crtc_y = 0;
 	}
 
-	intel_de_write_fw(dev_priv, PLANE_STRIDE(pipe, plane_id),
-			  PLANE_STRIDE_(stride));
-	intel_de_write_fw(dev_priv, PLANE_POS(pipe, plane_id),
-			  PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x));
-	intel_de_write_fw(dev_priv, PLANE_SIZE(pipe, plane_id),
-			  PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1));
+	intel_de_write_dsb(display, dsb, PLANE_STRIDE(pipe, plane_id),
+			   PLANE_STRIDE_(stride));
+	intel_de_write_dsb(display, dsb, PLANE_POS(pipe, plane_id),
+			   PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x));
+	intel_de_write_dsb(display, dsb, PLANE_SIZE(pipe, plane_id),
+			   PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1));
 
-	intel_de_write_fw(dev_priv, PLANE_KEYVAL(pipe, plane_id), skl_plane_keyval(plane_state));
-	intel_de_write_fw(dev_priv, PLANE_KEYMSK(pipe, plane_id), skl_plane_keymsk(plane_state));
-	intel_de_write_fw(dev_priv, PLANE_KEYMAX(pipe, plane_id), skl_plane_keymax(plane_state));
+	intel_de_write_dsb(display, dsb, PLANE_KEYVAL(pipe, plane_id),
+			   skl_plane_keyval(plane_state));
+	intel_de_write_dsb(display, dsb, PLANE_KEYMSK(pipe, plane_id),
+			   skl_plane_keymsk(plane_state));
+	intel_de_write_dsb(display, dsb, PLANE_KEYMAX(pipe, plane_id),
+			   skl_plane_keymax(plane_state));
 
-	intel_de_write_fw(dev_priv, PLANE_OFFSET(pipe, plane_id),
-			  PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x));
+	intel_de_write_dsb(display, dsb, PLANE_OFFSET(pipe, plane_id),
+			   PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x));
 
 	if (intel_fb_is_rc_ccs_cc_modifier(fb->modifier)) {
-		intel_de_write_fw(dev_priv, PLANE_CC_VAL(pipe, plane_id, 0),
-				  lower_32_bits(plane_state->ccval));
-		intel_de_write_fw(dev_priv, PLANE_CC_VAL(pipe, plane_id, 1),
-				  upper_32_bits(plane_state->ccval));
+		intel_de_write_dsb(display, dsb, PLANE_CC_VAL(pipe, plane_id, 0),
+				   lower_32_bits(plane_state->ccval));
+		intel_de_write_dsb(display, dsb, PLANE_CC_VAL(pipe, plane_id, 1),
+				   upper_32_bits(plane_state->ccval));
 	}
 
 	/* FLAT CCS doesn't need to program AUX_DIST */
 	if (!HAS_FLAT_CCS(dev_priv) && DISPLAY_VER(dev_priv) < 20)
-		intel_de_write_fw(dev_priv, PLANE_AUX_DIST(pipe, plane_id),
-				  skl_plane_aux_dist(plane_state, color_plane));
+		intel_de_write_dsb(display, dsb, PLANE_AUX_DIST(pipe, plane_id),
+				   skl_plane_aux_dist(plane_state, color_plane));
 
 	if (icl_is_hdr_plane(dev_priv, plane_id))
-		intel_de_write_fw(dev_priv, PLANE_CUS_CTL(pipe, plane_id),
-				  plane_state->cus_ctl);
+		intel_de_write_dsb(display, dsb, PLANE_CUS_CTL(pipe, plane_id),
+				   plane_state->cus_ctl);
 
-	intel_de_write_fw(dev_priv, PLANE_COLOR_CTL(pipe, plane_id), plane_color_ctl);
+	intel_de_write_dsb(display, dsb, PLANE_COLOR_CTL(pipe, plane_id),
+			   plane_color_ctl);
 
 	if (fb->format->is_yuv && icl_is_hdr_plane(dev_priv, plane_id))
-		icl_program_input_csc(plane, crtc_state, plane_state);
+		icl_program_input_csc(dsb, plane, plane_state);
 
-	skl_write_plane_wm(plane, crtc_state);
+	skl_write_plane_wm(dsb, plane, crtc_state);
 
 	/*
 	 * FIXME: pxp session invalidation can hit any time even at time of commit
 	 * or after the commit, display content will be garbage.
	 */
 	if (plane_state->force_black)
-		icl_plane_csc_load_black(plane);
+		icl_plane_csc_load_black(dsb, plane, crtc_state);
 
-	icl_plane_update_sel_fetch_noarm(plane, crtc_state, plane_state, color_plane);
+	icl_plane_update_sel_fetch_noarm(dsb, plane, crtc_state, plane_state, color_plane);
 }
 
-static void icl_plane_update_sel_fetch_arm(struct intel_plane *plane,
+static void icl_plane_update_sel_fetch_arm(struct intel_dsb *dsb,
+					   struct intel_plane *plane,
 					   const struct intel_crtc_state *crtc_state,
 					   const struct intel_plane_state *plane_state)
 {
-	struct drm_i915_private *i915 = to_i915(plane->base.dev);
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	enum pipe pipe = plane->pipe;
 
 	if (!crtc_state->enable_psr2_sel_fetch)
 		return;
 
 	if (drm_rect_height(&plane_state->psr2_sel_fetch_area) > 0)
-		intel_de_write_fw(i915, SEL_FETCH_PLANE_CTL(pipe, plane->id),
-				  SEL_FETCH_PLANE_CTL_ENABLE);
+		intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_CTL(pipe, plane->id),
+				   SEL_FETCH_PLANE_CTL_ENABLE);
 	else
-		icl_plane_disable_sel_fetch_arm(plane, crtc_state);
+		icl_plane_disable_sel_fetch_arm(dsb, plane, crtc_state);
 }
 
 static void
-icl_plane_update_arm(struct intel_plane *plane,
+icl_plane_update_arm(struct intel_dsb *dsb,
+		     struct intel_plane *plane,
 		     const struct intel_crtc_state *crtc_state,
 		     const struct intel_plane_state *plane_state)
 {
-	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	enum plane_id plane_id = plane->id;
 	enum pipe pipe = plane->pipe;
 	int color_plane = icl_plane_color_plane(plane_state);
@@ -1516,25 +1540,27 @@ icl_plane_update_arm(struct intel_plane *plane,
 	if (plane_state->scaler_id >= 0)
 		skl_program_plane_scaler(plane, crtc_state, plane_state);
 
-	icl_plane_update_sel_fetch_arm(plane, crtc_state, plane_state);
+	icl_plane_update_sel_fetch_arm(dsb, plane, crtc_state, plane_state);
 
 	/*
 	 * The control register self-arms if the plane was previously
 	 * disabled. Try to make the plane enable atomic by writing
 	 * the control register just before the surface register.
 	 */
-	intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl);
-	intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id),
-			  skl_plane_surf(plane_state, color_plane));
+	intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id),
+			   plane_ctl);
+	intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id),
+			   skl_plane_surf(plane_state, color_plane));
 }
 
 static void
-skl_plane_async_flip(struct intel_plane *plane,
+skl_plane_async_flip(struct intel_dsb *dsb,
+		     struct intel_plane *plane,
 		     const struct intel_crtc_state *crtc_state,
 		     const struct intel_plane_state *plane_state,
 		     bool async_flip)
 {
-	struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+	struct intel_display *display = to_intel_display(plane->base.dev);
 	enum plane_id plane_id = plane->id;
 	enum pipe pipe = plane->pipe;
 	u32 plane_ctl = plane_state->ctl;
@@ -1544,9 +1570,10 @@ skl_plane_async_flip(struct intel_plane *plane,
 	if (async_flip)
 		plane_ctl |= PLANE_CTL_ASYNC_FLIP;
 
-	intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl);
-	intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id),
-			  skl_plane_surf(plane_state, 0));
+	intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id),
+			   plane_ctl);
+	intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id),
+			   skl_plane_surf(plane_state, 0));
 }
 
 static bool intel_format_is_p01x(u32 format)
 
@@ -527,8 +527,10 @@ pvr_meta_vm_map(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj)
 static void
 pvr_meta_vm_unmap(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj)
 {
-	pvr_vm_unmap(pvr_dev->kernel_vm_ctx, fw_obj->fw_mm_node.start,
-		     fw_obj->fw_mm_node.size);
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+
+	pvr_vm_unmap_obj(pvr_dev->kernel_vm_ctx, pvr_obj,
+			 fw_obj->fw_mm_node.start, fw_obj->fw_mm_node.size);
 }
 
 static bool
 
@@ -333,8 +333,8 @@ static int fw_trace_seq_show(struct seq_file *s, void *v)
 	if (sf_id == ROGUE_FW_SF_LAST)
 		return -EINVAL;
 
-	timestamp = read_fw_trace(trace_seq_data, 1) |
-		    ((u64)read_fw_trace(trace_seq_data, 2) << 32);
+	timestamp = ((u64)read_fw_trace(trace_seq_data, 1) << 32) |
+		    read_fw_trace(trace_seq_data, 2);
 	timestamp = (timestamp & ~ROGUE_FWT_TIMESTAMP_TIME_CLRMSK) >>
 		    ROGUE_FWT_TIMESTAMP_TIME_SHIFT;
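The fw_trace fix above only swaps which 32-bit trace word lands in the high half of the 64-bit timestamp. A minimal userspace sketch of the corrected assembly (the function name is invented; the word order, with index 1 as the high half, follows the hunk):

```c
#include <assert.h>
#include <stdint.h>

/* Combine two 32-bit firmware trace words into one 64-bit timestamp:
 * word1 becomes the high 32 bits, word2 the low 32 bits. Before the fix
 * the halves were reversed, producing nonsense timestamps. */
static uint64_t fw_timestamp(uint32_t word1, uint32_t word2)
{
	return ((uint64_t)word1 << 32) | word2;
}
```

Note the cast before shifting: `word1 << 32` on a plain 32-bit operand would be undefined behavior, which is why both the fix and this sketch widen to 64 bits first.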
 
@@ -109,12 +109,20 @@ pvr_queue_fence_get_driver_name(struct dma_fence *f)
 	return PVR_DRIVER_NAME;
 }
 
+static void pvr_queue_fence_release_work(struct work_struct *w)
+{
+	struct pvr_queue_fence *fence = container_of(w, struct pvr_queue_fence, release_work);
+
+	pvr_context_put(fence->queue->ctx);
+	dma_fence_free(&fence->base);
+}
+
 static void pvr_queue_fence_release(struct dma_fence *f)
 {
 	struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base);
+	struct pvr_device *pvr_dev = fence->queue->ctx->pvr_dev;
 
-	pvr_context_put(fence->queue->ctx);
-	dma_fence_free(f);
+	queue_work(pvr_dev->sched_wq, &fence->release_work);
 }
 
 static const char *
@@ -268,6 +276,7 @@ pvr_queue_fence_init(struct dma_fence *f,
 
 	pvr_context_get(queue->ctx);
 	fence->queue = queue;
+	INIT_WORK(&fence->release_work, pvr_queue_fence_release_work);
 	dma_fence_init(&fence->base, fence_ops,
 		       &fence_ctx->lock, fence_ctx->id,
 		       atomic_inc_return(&fence_ctx->seqno));
@@ -304,8 +313,9 @@ pvr_queue_cccb_fence_init(struct dma_fence *fence, struct pvr_queue *queue)
 static void
 pvr_queue_job_fence_init(struct dma_fence *fence, struct pvr_queue *queue)
 {
-	pvr_queue_fence_init(fence, queue, &pvr_queue_job_fence_ops,
-			     &queue->job_fence_ctx);
+	if (!fence->ops)
+		pvr_queue_fence_init(fence, queue, &pvr_queue_job_fence_ops,
+				     &queue->job_fence_ctx);
 }
 
 /**
 
@@ -5,6 +5,7 @@
 #define PVR_QUEUE_H
 
 #include <drm/gpu_scheduler.h>
+#include <linux/workqueue.h>
 
 #include "pvr_cccb.h"
 #include "pvr_device.h"
@@ -63,6 +64,9 @@ struct pvr_queue_fence {
 
 	/** @queue: Queue that created this fence. */
 	struct pvr_queue *queue;
+
+	/** @release_work: Fence release work structure. */
+	struct work_struct release_work;
 };
 
 /**
 
@@ -293,8 +293,9 @@ err_bind_op_fini:
 
 static int
 pvr_vm_bind_op_unmap_init(struct pvr_vm_bind_op *bind_op,
-			  struct pvr_vm_context *vm_ctx, u64 device_addr,
-			  u64 size)
+			  struct pvr_vm_context *vm_ctx,
+			  struct pvr_gem_object *pvr_obj,
+			  u64 device_addr, u64 size)
 {
 	int err;
 
@@ -318,6 +319,7 @@ pvr_vm_bind_op_unmap_init(struct pvr_vm_bind_op *bind_op,
 		goto err_bind_op_fini;
 	}
 
+	bind_op->pvr_obj = pvr_obj;
 	bind_op->vm_ctx = vm_ctx;
 	bind_op->device_addr = device_addr;
 	bind_op->size = size;
@@ -597,20 +599,6 @@ err_free:
 	return ERR_PTR(err);
 }
 
-/**
- * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context.
- * @vm_ctx: Target VM context.
- *
- * This function ensures that no mappings are left dangling by unmapping them
- * all in order of ascending device-virtual address.
- */
-void
-pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx)
-{
-	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuvm_mgr.mm_start,
-			     vm_ctx->gpuvm_mgr.mm_range));
-}
-
 /**
  * pvr_vm_context_release() - Teardown a VM context.
  * @ref_count: Pointer to reference counter of the VM context.
@@ -703,11 +691,7 @@ pvr_vm_lock_extra(struct drm_gpuvm_exec *vm_exec)
 	struct pvr_vm_bind_op *bind_op = vm_exec->extra.priv;
 	struct pvr_gem_object *pvr_obj = bind_op->pvr_obj;
 
-	/* Unmap operations don't have an object to lock. */
-	if (!pvr_obj)
-		return 0;
-
-	/* Acquire lock on the GEM being mapped. */
+	/* Acquire lock on the GEM object being mapped/unmapped. */
 	return drm_exec_lock_obj(&vm_exec->exec, gem_from_pvr_gem(pvr_obj));
 }
 
@@ -772,8 +756,10 @@ err_cleanup:
 }
 
 /**
- * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
+ * pvr_vm_unmap_obj_locked() - Unmap an already mapped section of device-virtual
+ * memory.
  * @vm_ctx: Target VM context.
+ * @pvr_obj: Target PowerVR memory object.
  * @device_addr: Virtual device address at the start of the target mapping.
  * @size: Size of the target mapping.
  *
@@ -784,9 +770,13 @@ err_cleanup:
  *  * Any error encountered while performing internal operations required to
  *    destroy the mapping (returned from pvr_vm_gpuva_unmap or
  *    pvr_vm_gpuva_remap).
+ *
+ * The vm_ctx->lock must be held when calling this function.
  */
-int
-pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
+static int
+pvr_vm_unmap_obj_locked(struct pvr_vm_context *vm_ctx,
+			struct pvr_gem_object *pvr_obj,
+			u64 device_addr, u64 size)
 {
 	struct pvr_vm_bind_op bind_op = {0};
 	struct drm_gpuvm_exec vm_exec = {
@@ -799,11 +789,13 @@ pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
 		},
 	};
 
-	int err = pvr_vm_bind_op_unmap_init(&bind_op, vm_ctx, device_addr,
-					    size);
+	int err = pvr_vm_bind_op_unmap_init(&bind_op, vm_ctx, pvr_obj,
+					    device_addr, size);
 	if (err)
 		return err;
 
+	pvr_gem_object_get(pvr_obj);
+
 	err = drm_gpuvm_exec_lock(&vm_exec);
 	if (err)
 		goto err_cleanup;
@@ -818,6 +810,96 @@ err_cleanup:
 	return err;
 }
 
+/**
+ * pvr_vm_unmap_obj() - Unmap an already mapped section of device-virtual
+ * memory.
+ * @vm_ctx: Target VM context.
+ * @pvr_obj: Target PowerVR memory object.
+ * @device_addr: Virtual device address at the start of the target mapping.
+ * @size: Size of the target mapping.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error encountered by pvr_vm_unmap_obj_locked.
+ */
+int
+pvr_vm_unmap_obj(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj,
+		 u64 device_addr, u64 size)
+{
+	int err;
+
+	mutex_lock(&vm_ctx->lock);
+	err = pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj, device_addr, size);
+	mutex_unlock(&vm_ctx->lock);
+
+	return err;
+}
+
+/**
+ * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
+ * @vm_ctx: Target VM context.
+ * @device_addr: Virtual device address at the start of the target mapping.
+ * @size: Size of the target mapping.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error encountered by drm_gpuva_find,
+ *  * Any error encountered by pvr_vm_unmap_obj_locked.
+ */
+int
+pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
+{
+	struct pvr_gem_object *pvr_obj;
+	struct drm_gpuva *va;
+	int err;
+
+	mutex_lock(&vm_ctx->lock);
+
+	va = drm_gpuva_find(&vm_ctx->gpuvm_mgr, device_addr, size);
+	if (va) {
+		pvr_obj = gem_to_pvr_gem(va->gem.obj);
+		err = pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj,
+					      va->va.addr, va->va.range);
+	} else {
+		err = -ENOENT;
+	}
+
+	mutex_unlock(&vm_ctx->lock);
+
+	return err;
+}
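The reworked pvr_vm_unmap above looks the VA up and tears it down while holding vm_ctx->lock for the whole operation, so the mapping cannot change between lookup and unmap, and a missing mapping reports -ENOENT. A rough userspace sketch of that find-then-remove-under-one-lock pattern (the table, names, and sizes here are invented for illustration; the real driver walks a drm_gpuvm tree):

```c
#include <errno.h>
#include <pthread.h>
#include <stdint.h>

/* Hypothetical mapping table standing in for the GPU VA tree. */
struct mapping { uint64_t addr, size; int live; };

static struct mapping table[4];
static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;

/* Find a live mapping by exact addr/size and remove it atomically;
 * the single lock covers both the lookup and the removal. */
static int vm_unmap(uint64_t addr, uint64_t size)
{
	int err = -ENOENT;

	pthread_mutex_lock(&table_lock);
	for (int i = 0; i < 4; i++) {
		if (table[i].live && table[i].addr == addr &&
		    table[i].size == size) {
			table[i].live = 0;
			err = 0;
			break;
		}
	}
	pthread_mutex_unlock(&table_lock);
	return err;
}
```

Splitting this into a locked lookup followed by a separately locked unmap would reintroduce the race the patch closes: another thread could remap the range in between.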
|
||||
|
||||
/**
|
||||
* pvr_vm_unmap_all() - Unmap all mappings associated with a VM context.
|
||||
* @vm_ctx: Target VM context.
|
||||
*
|
||||
* This function ensures that no mappings are left dangling by unmapping them
|
||||
* all in order of ascending device-virtual address.
|
||||
*/
|
||||
void
|
||||
pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx)
|
||||
{
|
||||
mutex_lock(&vm_ctx->lock);
|
||||
|
||||
for (;;) {
|
||||
struct pvr_gem_object *pvr_obj;
|
||||
struct drm_gpuva *va;
|
||||
|
||||
va = drm_gpuva_find_first(&vm_ctx->gpuvm_mgr,
|
||||
vm_ctx->gpuvm_mgr.mm_start,
|
||||
vm_ctx->gpuvm_mgr.mm_range);
|
||||
if (!va)
|
||||
break;
|
||||
|
||||
pvr_obj = gem_to_pvr_gem(va->gem.obj);
|
||||
|
||||
WARN_ON(pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj,
|
||||
va->va.addr, va->va.range));
|
||||
}
|
||||
|
||||
mutex_unlock(&vm_ctx->lock);
|
||||
}
/* Static data areas are determined by firmware. */
static const struct drm_pvr_static_data_area static_data_areas[] = {
	{
@@ -38,6 +38,9 @@ struct pvr_vm_context *pvr_vm_create_context(struct pvr_device *pvr_dev,
 int pvr_vm_map(struct pvr_vm_context *vm_ctx,
 	       struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
 	       u64 device_addr, u64 size);
+int pvr_vm_unmap_obj(struct pvr_vm_context *vm_ctx,
+		     struct pvr_gem_object *pvr_obj,
+		     u64 device_addr, u64 size);
 int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
 void pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx);

@@ -4,6 +4,8 @@ config DRM_NOUVEAU
 	depends on DRM && PCI && MMU
 	select IOMMU_API
 	select FW_LOADER
 	select FW_CACHE if PM_SLEEP
+	select DRM_CLIENT_SELECTION
 	select DRM_DISPLAY_DP_HELPER
 	select DRM_DISPLAY_HDMI_HELPER
 	select DRM_DISPLAY_HELPER

@@ -31,6 +31,7 @@
 #include <linux/dynamic_debug.h>

 #include <drm/drm_aperture.h>
+#include <drm/drm_client_setup.h>
 #include <drm/drm_drv.h>
 #include <drm/drm_fbdev_ttm.h>
 #include <drm/drm_gem_ttm_helper.h>
@@ -836,6 +837,7 @@ static int nouveau_drm_probe(struct pci_dev *pdev,
 {
 	struct nvkm_device *device;
 	struct nouveau_drm *drm;
+	const struct drm_format_info *format;
 	int ret;

 	if (vga_switcheroo_client_probe_defer(pdev))
@@ -873,9 +875,11 @@ static int nouveau_drm_probe(struct pci_dev *pdev,
 		goto fail_pci;

 	if (drm->client.device.info.ram_size <= 32 * 1024 * 1024)
-		drm_fbdev_ttm_setup(drm->dev, 8);
+		format = drm_format_info(DRM_FORMAT_C8);
 	else
-		drm_fbdev_ttm_setup(drm->dev, 32);
+		format = NULL;
+
+	drm_client_setup(drm->dev, format);

 	quirk_broken_nv_runpm(pdev);
 	return 0;
@@ -1318,6 +1322,8 @@ driver_stub = {
 	.dumb_create = nouveau_display_dumb_create,
 	.dumb_map_offset = drm_gem_ttm_dumb_map_offset,

+	DRM_FBDEV_TTM_DRIVER_OPS,
+
 	.name = DRIVER_NAME,
 	.desc = DRIVER_DESC,
 #ifdef GIT_REVISION

@@ -359,7 +359,8 @@ int r300_mc_wait_for_idle(struct radeon_device *rdev)
 	return -1;
 }

-static void r300_gpu_init(struct radeon_device *rdev)
+/* rs400_gpu_init also calls this! */
+void r300_gpu_init(struct radeon_device *rdev)
 {
 	uint32_t gb_tile_config, tmp;

@@ -165,6 +165,7 @@ void r200_set_safe_registers(struct radeon_device *rdev);
  */
 extern int r300_init(struct radeon_device *rdev);
 extern void r300_fini(struct radeon_device *rdev);
+extern void r300_gpu_init(struct radeon_device *rdev);
 extern int r300_suspend(struct radeon_device *rdev);
 extern int r300_resume(struct radeon_device *rdev);
 extern int r300_asic_reset(struct radeon_device *rdev, bool hard);

@@ -256,8 +256,22 @@ int rs400_mc_wait_for_idle(struct radeon_device *rdev)

 static void rs400_gpu_init(struct radeon_device *rdev)
 {
-	/* FIXME: is this correct ? */
-	r420_pipes_init(rdev);
+	/* Earlier code was calling r420_pipes_init and then
+	 * rs400_mc_wait_for_idle(rdev). The problem is that
+	 * at least on my Mobility Radeon Xpress 200M RC410 card
+	 * that ends up in this code path ends up num_gb_pipes == 3
+	 * while the card seems to have only one pipe. With the
+	 * r420 pipe initialization method.
+	 *
+	 * Problems shown up as HyperZ glitches, see:
+	 * https://bugs.freedesktop.org/show_bug.cgi?id=110897
+	 *
+	 * Delegating initialization to r300 code seems to work
+	 * and results in proper pipe numbers. The rs400 cards
+	 * are said to be not r400, but r300 kind of cards.
+	 */
+	r300_gpu_init(rdev);

 	if (rs400_mc_wait_for_idle(rdev)) {
 		pr_warn("rs400: Failed to wait MC idle while programming pipes. Bad things might happen. %08x\n",
 			RREG32(RADEON_MC_STATUS));

@@ -21,7 +21,7 @@
  *
  */

-#if !defined(_GPU_SCHED_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
+#if !defined(_GPU_SCHED_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
 #define _GPU_SCHED_TRACE_H_

 #include <linux/stringify.h>
@@ -106,7 +106,7 @@ TRACE_EVENT(drm_sched_job_wait_dep,
 		      __entry->seqno)
 );

-#endif
+#endif /* _GPU_SCHED_TRACE_H_ */

 /* This part must be outside protection */
 #undef TRACE_INCLUDE_PATH

@@ -194,8 +194,6 @@ intel_find_initial_plane_obj(struct intel_crtc *crtc,
 		to_intel_plane(crtc->base.primary);
 	struct intel_plane_state *plane_state =
 		to_intel_plane_state(plane->base.state);
-	struct intel_crtc_state *crtc_state =
-		to_intel_crtc_state(crtc->base.state);
 	struct drm_framebuffer *fb;
 	struct i915_vma *vma;

@@ -241,14 +239,6 @@ intel_find_initial_plane_obj(struct intel_crtc *crtc,
 	atomic_or(plane->frontbuffer_bit, &to_intel_frontbuffer(fb)->bits);

 	plane_config->vma = vma;
-
-	/*
-	 * Flip to the newly created mapping ASAP, so we can re-use the
-	 * first part of GGTT for WOPCM, prevent flickering, and prevent
-	 * the lookup of sysmem scratch pages.
-	 */
-	plane->check_plane(crtc_state, plane_state);
-	plane->async_flip(plane, crtc_state, plane_state, true);
 	return;

 nofb:

@@ -379,9 +379,7 @@ int xe_gt_init_early(struct xe_gt *gt)
 	if (err)
 		return err;

-	xe_wa_process_gt(gt);
 	xe_wa_process_oob(gt);
-	xe_tuning_process_gt(gt);

 	xe_force_wake_init_gt(gt, gt_to_fw(gt));
 	spin_lock_init(&gt->global_invl_lock);
@@ -469,6 +467,8 @@ static int all_fw_domain_init(struct xe_gt *gt)
 		goto err_hw_fence_irq;

 	xe_gt_mcr_set_implicit_defaults(gt);
+	xe_wa_process_gt(gt);
+	xe_tuning_process_gt(gt);
 	xe_reg_sr_apply_mmio(&gt->reg_sr, gt);

 	err = xe_gt_clock_init(gt);

@@ -19,11 +19,10 @@ static u64 xe_npages_in_range(unsigned long start, unsigned long end)
 	return (end - start) >> PAGE_SHIFT;
 }

-/*
+/**
  * xe_mark_range_accessed() - mark a range is accessed, so core mm
  * have such information for memory eviction or write back to
  * hard disk
  *
  * @range: the range to mark
  * @write: if write to this range, we mark pages in this range
  * as dirty
@@ -43,15 +42,51 @@ static void xe_mark_range_accessed(struct hmm_range *range, bool write)
 	}
 }

-/*
+static int xe_alloc_sg(struct xe_device *xe, struct sg_table *st,
+		       struct hmm_range *range, struct rw_semaphore *notifier_sem)
+{
+	unsigned long i, npages, hmm_pfn;
+	unsigned long num_chunks = 0;
+	int ret;
+
+	/* HMM docs says this is needed. */
+	ret = down_read_interruptible(notifier_sem);
+	if (ret)
+		return ret;
+
+	if (mmu_interval_read_retry(range->notifier, range->notifier_seq)) {
+		up_read(notifier_sem);
+		return -EAGAIN;
+	}
+
+	npages = xe_npages_in_range(range->start, range->end);
+	for (i = 0; i < npages;) {
+		unsigned long len;
+
+		hmm_pfn = range->hmm_pfns[i];
+		xe_assert(xe, hmm_pfn & HMM_PFN_VALID);
+
+		len = 1UL << hmm_pfn_to_map_order(hmm_pfn);
+
+		/* If order > 0 the page may extend beyond range->start */
+		len -= (hmm_pfn & ~HMM_PFN_FLAGS) & (len - 1);
+		i += len;
+		num_chunks++;
+	}
+	up_read(notifier_sem);
+
+	return sg_alloc_table(st, num_chunks, GFP_KERNEL);
+}
+/**
  * xe_build_sg() - build a scatter gather table for all the physical pages/pfn
  * in a hmm_range. dma-map pages if necessary. dma-address is save in sg table
  * and will be used to program GPU page table later.
  *
  * @xe: the xe device who will access the dma-address in sg table
  * @range: the hmm range that we build the sg table from. range->hmm_pfns[]
  * has the pfn numbers of pages that back up this hmm address range.
  * @st: pointer to the sg table.
+ * @notifier_sem: The xe notifier lock.
  * @write: whether we write to this range. This decides dma map direction
  * for system pages. If write we map it bi-diretional; otherwise
  * DMA_TO_DEVICE
@@ -78,43 +113,84 @@ static void xe_mark_range_accessed(struct hmm_range *range, bool write)
  * Returns 0 if successful; -ENOMEM if fails to allocate memory
  */
 static int xe_build_sg(struct xe_device *xe, struct hmm_range *range,
-		       struct sg_table *st, bool write)
+		       struct sg_table *st,
+		       struct rw_semaphore *notifier_sem,
+		       bool write)
 {
+	unsigned long npages = xe_npages_in_range(range->start, range->end);
 	struct device *dev = xe->drm.dev;
-	struct page **pages;
-	u64 i, npages;
-	int ret;
+	struct scatterlist *sgl;
+	struct page *page;
+	unsigned long i, j;

-	npages = xe_npages_in_range(range->start, range->end);
-	pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
-	if (!pages)
-		return -ENOMEM;
+	lockdep_assert_held(notifier_sem);

-	for (i = 0; i < npages; i++) {
-		pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]);
-		xe_assert(xe, !is_device_private_page(pages[i]));
+	i = 0;
+	for_each_sg(st->sgl, sgl, st->nents, j) {
+		unsigned long hmm_pfn, size;
+
+		hmm_pfn = range->hmm_pfns[i];
+		page = hmm_pfn_to_page(hmm_pfn);
+		xe_assert(xe, !is_device_private_page(page));
+
+		size = 1UL << hmm_pfn_to_map_order(hmm_pfn);
+		size -= page_to_pfn(page) & (size - 1);
+		i += size;
+
+		if (unlikely(j == st->nents - 1)) {
+			if (i > npages)
+				size -= (i - npages);
+			sg_mark_end(sgl);
+		}
+		sg_set_page(sgl, page, size << PAGE_SHIFT, 0);
 	}
+	xe_assert(xe, i == npages);

-	ret = sg_alloc_table_from_pages_segment(st, pages, npages, 0, npages << PAGE_SHIFT,
-						xe_sg_segment_size(dev), GFP_KERNEL);
-	if (ret)
-		goto free_pages;
-
-	ret = dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE,
-			      DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
-	if (ret) {
-		sg_free_table(st);
-		st = NULL;
-	}
-
-free_pages:
-	kvfree(pages);
-	return ret;
+	return dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE,
+			       DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
 }
-/*
+static void xe_hmm_userptr_set_mapped(struct xe_userptr_vma *uvma)
+{
+	struct xe_userptr *userptr = &uvma->userptr;
+	struct xe_vm *vm = xe_vma_vm(&uvma->vma);
+
+	lockdep_assert_held_write(&vm->lock);
+	lockdep_assert_held(&vm->userptr.notifier_lock);
+
+	mutex_lock(&userptr->unmap_mutex);
+	xe_assert(vm->xe, !userptr->mapped);
+	userptr->mapped = true;
+	mutex_unlock(&userptr->unmap_mutex);
+}
+
+void xe_hmm_userptr_unmap(struct xe_userptr_vma *uvma)
+{
+	struct xe_userptr *userptr = &uvma->userptr;
+	struct xe_vma *vma = &uvma->vma;
+	bool write = !xe_vma_read_only(vma);
+	struct xe_vm *vm = xe_vma_vm(vma);
+	struct xe_device *xe = vm->xe;
+
+	if (!lockdep_is_held_type(&vm->userptr.notifier_lock, 0) &&
+	    !lockdep_is_held_type(&vm->lock, 0) &&
+	    !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
+		/* Don't unmap in exec critical section. */
+		xe_vm_assert_held(vm);
+		/* Don't unmap while mapping the sg. */
+		lockdep_assert_held(&vm->lock);
+	}
+
+	mutex_lock(&userptr->unmap_mutex);
+	if (userptr->sg && userptr->mapped)
+		dma_unmap_sgtable(xe->drm.dev, userptr->sg,
+				  write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 0);
+	userptr->mapped = false;
+	mutex_unlock(&userptr->unmap_mutex);
+}
+
+/**
  * xe_hmm_userptr_free_sg() - Free the scatter gather table of userptr
  *
  * @uvma: the userptr vma which hold the scatter gather table
  *
  * With function xe_userptr_populate_range, we allocate storage of
@@ -124,16 +200,9 @@ free_pages:
 void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma)
 {
 	struct xe_userptr *userptr = &uvma->userptr;
-	struct xe_vma *vma = &uvma->vma;
-	bool write = !xe_vma_read_only(vma);
-	struct xe_vm *vm = xe_vma_vm(vma);
-	struct xe_device *xe = vm->xe;
-	struct device *dev = xe->drm.dev;
-
-	xe_assert(xe, userptr->sg);
-	dma_unmap_sgtable(dev, userptr->sg,
-			  write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 0);

+	xe_assert(xe_vma_vm(&uvma->vma)->xe, userptr->sg);
+	xe_hmm_userptr_unmap(uvma);
 	sg_free_table(userptr->sg);
 	userptr->sg = NULL;
 }
@@ -166,13 +235,20 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
 {
 	unsigned long timeout =
 		jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
-	unsigned long *pfns, flags = HMM_PFN_REQ_FAULT;
+	unsigned long *pfns;
 	struct xe_userptr *userptr;
 	struct xe_vma *vma = &uvma->vma;
 	u64 userptr_start = xe_vma_userptr(vma);
 	u64 userptr_end = userptr_start + xe_vma_size(vma);
 	struct xe_vm *vm = xe_vma_vm(vma);
-	struct hmm_range hmm_range;
+	struct hmm_range hmm_range = {
+		.pfn_flags_mask = 0, /* ignore pfns */
+		.default_flags = HMM_PFN_REQ_FAULT,
+		.start = userptr_start,
+		.end = userptr_end,
+		.notifier = &uvma->userptr.notifier,
+		.dev_private_owner = vm->xe,
+	};
 	bool write = !xe_vma_read_only(vma);
 	unsigned long notifier_seq;
 	u64 npages;
@@ -199,19 +275,14 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
 		return -ENOMEM;

 	if (write)
-		flags |= HMM_PFN_REQ_WRITE;
+		hmm_range.default_flags |= HMM_PFN_REQ_WRITE;

 	if (!mmget_not_zero(userptr->notifier.mm)) {
 		ret = -EFAULT;
 		goto free_pfns;
 	}

-	hmm_range.default_flags = flags;
 	hmm_range.hmm_pfns = pfns;
-	hmm_range.notifier = &userptr->notifier;
-	hmm_range.start = userptr_start;
-	hmm_range.end = userptr_end;
-	hmm_range.dev_private_owner = vm->xe;

 	while (true) {
 		hmm_range.notifier_seq = mmu_interval_read_begin(&userptr->notifier);
@@ -238,16 +309,37 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
 	if (ret)
 		goto free_pfns;

-	ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt, write);
+	ret = xe_alloc_sg(vm->xe, &userptr->sgt, &hmm_range, &vm->userptr.notifier_lock);
 	if (ret)
 		goto free_pfns;

+	ret = down_read_interruptible(&vm->userptr.notifier_lock);
+	if (ret)
+		goto free_st;
+
+	if (mmu_interval_read_retry(hmm_range.notifier, hmm_range.notifier_seq)) {
+		ret = -EAGAIN;
+		goto out_unlock;
+	}
+
+	ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt,
+			  &vm->userptr.notifier_lock, write);
+	if (ret)
+		goto out_unlock;
+
 	xe_mark_range_accessed(&hmm_range, write);
 	userptr->sg = &userptr->sgt;
+	xe_hmm_userptr_set_mapped(uvma);
 	userptr->notifier_seq = hmm_range.notifier_seq;
+	up_read(&vm->userptr.notifier_lock);
 	kvfree(pfns);
 	return 0;

+out_unlock:
+	up_read(&vm->userptr.notifier_lock);
+free_st:
+	sg_free_table(&userptr->sgt);
 free_pfns:
 	kvfree(pfns);
 	return ret;
 }
@@ -3,9 +3,16 @@
  * Copyright © 2024 Intel Corporation
  */

+#ifndef _XE_HMM_H_
+#define _XE_HMM_H_
+
 #include <linux/types.h>

 struct xe_userptr_vma;

 int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma, bool is_mm_mmap_locked);
+
 void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma);
+
+void xe_hmm_userptr_unmap(struct xe_userptr_vma *uvma);
+#endif

@@ -28,6 +28,8 @@ struct xe_pt_dir {
 	struct xe_pt pt;
 	/** @children: Array of page-table child nodes */
 	struct xe_ptw *children[XE_PDES];
+	/** @staging: Array of page-table staging nodes */
+	struct xe_ptw *staging[XE_PDES];
 };

 #if IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM)
@@ -48,9 +50,10 @@ static struct xe_pt_dir *as_xe_pt_dir(struct xe_pt *pt)
 	return container_of(pt, struct xe_pt_dir, pt);
 }

-static struct xe_pt *xe_pt_entry(struct xe_pt_dir *pt_dir, unsigned int index)
+static struct xe_pt *
+xe_pt_entry_staging(struct xe_pt_dir *pt_dir, unsigned int index)
 {
-	return container_of(pt_dir->children[index], struct xe_pt, base);
+	return container_of(pt_dir->staging[index], struct xe_pt, base);
 }

 static u64 __xe_pt_empty_pte(struct xe_tile *tile, struct xe_vm *vm,
@@ -125,6 +128,7 @@ struct xe_pt *xe_pt_create(struct xe_vm *vm, struct xe_tile *tile,
 	}
 	pt->bo = bo;
 	pt->base.children = level ? as_xe_pt_dir(pt)->children : NULL;
+	pt->base.staging = level ? as_xe_pt_dir(pt)->staging : NULL;

 	if (vm->xef)
 		xe_drm_client_add_bo(vm->xef->client, pt->bo);
@@ -205,8 +209,8 @@ void xe_pt_destroy(struct xe_pt *pt, u32 flags, struct llist_head *deferred)
 		struct xe_pt_dir *pt_dir = as_xe_pt_dir(pt);

 		for (i = 0; i < XE_PDES; i++) {
-			if (xe_pt_entry(pt_dir, i))
-				xe_pt_destroy(xe_pt_entry(pt_dir, i), flags,
+			if (xe_pt_entry_staging(pt_dir, i))
+				xe_pt_destroy(xe_pt_entry_staging(pt_dir, i), flags,
 					      deferred);
 		}
 	}
@@ -375,8 +379,10 @@ xe_pt_insert_entry(struct xe_pt_stage_bind_walk *xe_walk, struct xe_pt *parent,
 		/* Continue building a non-connected subtree. */
 		struct iosys_map *map = &parent->bo->vmap;

-		if (unlikely(xe_child))
+		if (unlikely(xe_child)) {
 			parent->base.children[offset] = &xe_child->base;
+			parent->base.staging[offset] = &xe_child->base;
+		}

 		xe_pt_write(xe_walk->vm->xe, map, offset, pte);
 		parent->num_live++;
@@ -613,6 +619,7 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
 			.ops = &xe_pt_stage_bind_ops,
 			.shifts = xe_normal_pt_shifts,
 			.max_level = XE_PT_HIGHEST_LEVEL,
+			.staging = true,
 		},
 		.vm = xe_vma_vm(vma),
 		.tile = tile,
@@ -872,7 +879,7 @@ static void xe_pt_cancel_bind(struct xe_vma *vma,
 	}
 }

-static void xe_pt_commit_locks_assert(struct xe_vma *vma)
+static void xe_pt_commit_prepare_locks_assert(struct xe_vma *vma)
 {
 	struct xe_vm *vm = xe_vma_vm(vma);

@@ -884,6 +891,16 @@ static void xe_pt_commit_locks_assert(struct xe_vma *vma)
 	xe_vm_assert_held(vm);
 }

+static void xe_pt_commit_locks_assert(struct xe_vma *vma)
+{
+	struct xe_vm *vm = xe_vma_vm(vma);
+
+	xe_pt_commit_prepare_locks_assert(vma);
+
+	if (xe_vma_is_userptr(vma))
+		lockdep_assert_held_read(&vm->userptr.notifier_lock);
+}
+
 static void xe_pt_commit(struct xe_vma *vma,
 			 struct xe_vm_pgtable_update *entries,
 			 u32 num_entries, struct llist_head *deferred)
@@ -894,13 +911,17 @@ static void xe_pt_commit(struct xe_vma *vma,

 	for (i = 0; i < num_entries; i++) {
 		struct xe_pt *pt = entries[i].pt;
+		struct xe_pt_dir *pt_dir;

 		if (!pt->level)
 			continue;

+		pt_dir = as_xe_pt_dir(pt);
 		for (j = 0; j < entries[i].qwords; j++) {
 			struct xe_pt *oldpte = entries[i].pt_entries[j].pt;
+			int j_ = j + entries[i].ofs;

+			pt_dir->children[j_] = pt_dir->staging[j_];
 			xe_pt_destroy(oldpte, xe_vma_vm(vma)->flags, deferred);
 		}
 	}
@@ -912,7 +933,7 @@ static void xe_pt_abort_bind(struct xe_vma *vma,
 {
 	int i, j;

-	xe_pt_commit_locks_assert(vma);
+	xe_pt_commit_prepare_locks_assert(vma);

 	for (i = num_entries - 1; i >= 0; --i) {
 		struct xe_pt *pt = entries[i].pt;
@@ -927,10 +948,10 @@ static void xe_pt_abort_bind(struct xe_vma *vma,
 		pt_dir = as_xe_pt_dir(pt);
 		for (j = 0; j < entries[i].qwords; j++) {
 			u32 j_ = j + entries[i].ofs;
-			struct xe_pt *newpte = xe_pt_entry(pt_dir, j_);
+			struct xe_pt *newpte = xe_pt_entry_staging(pt_dir, j_);
 			struct xe_pt *oldpte = entries[i].pt_entries[j].pt;

-			pt_dir->children[j_] = oldpte ? &oldpte->base : 0;
+			pt_dir->staging[j_] = oldpte ? &oldpte->base : 0;
 			xe_pt_destroy(newpte, xe_vma_vm(vma)->flags, NULL);
 		}
 	}
@@ -942,7 +963,7 @@ static void xe_pt_commit_prepare_bind(struct xe_vma *vma,
 {
 	u32 i, j;

-	xe_pt_commit_locks_assert(vma);
+	xe_pt_commit_prepare_locks_assert(vma);

 	for (i = 0; i < num_entries; i++) {
 		struct xe_pt *pt = entries[i].pt;
@@ -960,10 +981,10 @@ static void xe_pt_commit_prepare_bind(struct xe_vma *vma,
 			struct xe_pt *newpte = entries[i].pt_entries[j].pt;
 			struct xe_pt *oldpte = NULL;

-			if (xe_pt_entry(pt_dir, j_))
-				oldpte = xe_pt_entry(pt_dir, j_);
+			if (xe_pt_entry_staging(pt_dir, j_))
+				oldpte = xe_pt_entry_staging(pt_dir, j_);

-			pt_dir->children[j_] = &newpte->base;
+			pt_dir->staging[j_] = &newpte->base;
 			entries[i].pt_entries[j].pt = oldpte;
 		}
 	}
@@ -1212,42 +1233,22 @@ static int vma_check_userptr(struct xe_vm *vm, struct xe_vma *vma,
 		return 0;

 	uvma = to_userptr_vma(vma);
+	if (xe_pt_userptr_inject_eagain(uvma))
+		xe_vma_userptr_force_invalidate(uvma);
+
 	notifier_seq = uvma->userptr.notifier_seq;

 	if (uvma->userptr.initial_bind && !xe_vm_in_fault_mode(vm))
 		return 0;

 	if (!mmu_interval_read_retry(&uvma->userptr.notifier,
-				     notifier_seq) &&
-	    !xe_pt_userptr_inject_eagain(uvma))
+				     notifier_seq))
 		return 0;

-	if (xe_vm_in_fault_mode(vm)) {
+	if (xe_vm_in_fault_mode(vm))
 		return -EAGAIN;
-	} else {
-		spin_lock(&vm->userptr.invalidated_lock);
-		list_move_tail(&uvma->userptr.invalidate_link,
-			       &vm->userptr.invalidated);
-		spin_unlock(&vm->userptr.invalidated_lock);
-
-		if (xe_vm_in_preempt_fence_mode(vm)) {
-			struct dma_resv_iter cursor;
-			struct dma_fence *fence;
-			long err;
-
-			dma_resv_iter_begin(&cursor, xe_vm_resv(vm),
-					    DMA_RESV_USAGE_BOOKKEEP);
-			dma_resv_for_each_fence_unlocked(&cursor, fence)
-				dma_fence_enable_sw_signaling(fence);
-			dma_resv_iter_end(&cursor);
-
-			err = dma_resv_wait_timeout(xe_vm_resv(vm),
-						    DMA_RESV_USAGE_BOOKKEEP,
-						    false, MAX_SCHEDULE_TIMEOUT);
-			XE_WARN_ON(err <= 0);
-		}
-	}

+	/*
+	 * Just continue the operation since exec or rebind worker
+	 * will take care of rebinding.
+	 */
 	return 0;
 }

@@ -1513,6 +1514,7 @@ static unsigned int xe_pt_stage_unbind(struct xe_tile *tile, struct xe_vma *vma,
 			.ops = &xe_pt_stage_unbind_ops,
 			.shifts = xe_normal_pt_shifts,
 			.max_level = XE_PT_HIGHEST_LEVEL,
+			.staging = true,
 		},
 		.tile = tile,
 		.modified_start = xe_vma_start(vma),
@@ -1554,7 +1556,7 @@ static void xe_pt_abort_unbind(struct xe_vma *vma,
 {
 	int i, j;

-	xe_pt_commit_locks_assert(vma);
+	xe_pt_commit_prepare_locks_assert(vma);

 	for (i = num_entries - 1; i >= 0; --i) {
 		struct xe_vm_pgtable_update *entry = &entries[i];
@@ -1567,7 +1569,7 @@ static void xe_pt_abort_unbind(struct xe_vma *vma,
 			continue;

 		for (j = entry->ofs; j < entry->ofs + entry->qwords; j++)
-			pt_dir->children[j] =
+			pt_dir->staging[j] =
 				entries[i].pt_entries[j - entry->ofs].pt ?
 				&entries[i].pt_entries[j - entry->ofs].pt->base : NULL;
 	}
@@ -1580,7 +1582,7 @@ xe_pt_commit_prepare_unbind(struct xe_vma *vma,
 {
 	int i, j;

-	xe_pt_commit_locks_assert(vma);
+	xe_pt_commit_prepare_locks_assert(vma);

 	for (i = 0; i < num_entries; ++i) {
 		struct xe_vm_pgtable_update *entry = &entries[i];
@@ -1594,8 +1596,8 @@ xe_pt_commit_prepare_unbind(struct xe_vma *vma,
 		pt_dir = as_xe_pt_dir(pt);
 		for (j = entry->ofs; j < entry->ofs + entry->qwords; j++) {
 			entry->pt_entries[j - entry->ofs].pt =
-				xe_pt_entry(pt_dir, j);
-			pt_dir->children[j] = NULL;
+				xe_pt_entry_staging(pt_dir, j);
+			pt_dir->staging[j] = NULL;
 		}
 	}
 }

@@ -74,7 +74,8 @@ int xe_pt_walk_range(struct xe_ptw *parent, unsigned int level,
 		     u64 addr, u64 end, struct xe_pt_walk *walk)
 {
 	pgoff_t offset = xe_pt_offset(addr, level, walk);
-	struct xe_ptw **entries = parent->children ? parent->children : NULL;
+	struct xe_ptw **entries = walk->staging ? (parent->staging ?: NULL) :
+		(parent->children ?: NULL);
 	const struct xe_pt_walk_ops *ops = walk->ops;
 	enum page_walk_action action;
 	struct xe_ptw *child;

@@ -11,12 +11,14 @@
 /**
  * struct xe_ptw - base class for driver pagetable subclassing.
  * @children: Pointer to an array of children if any.
+ * @staging: Pointer to an array of staging if any.
  *
  * Drivers could subclass this, and if it's a page-directory, typically
  * embed an array of xe_ptw pointers.
  */
 struct xe_ptw {
 	struct xe_ptw **children;
+	struct xe_ptw **staging;
 };

 /**
@@ -41,6 +43,8 @@ struct xe_pt_walk {
 	 * as shared pagetables.
 	 */
 	bool shared_pt_mode;
+	/** @staging: Walk staging PT structure */
+	bool staging;
 };

 /**

@@ -580,51 +580,26 @@ out_unlock_outer:
 	trace_xe_vm_rebind_worker_exit(vm);
 }

-static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
-				   const struct mmu_notifier_range *range,
-				   unsigned long cur_seq)
+static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uvma)
 {
-	struct xe_userptr *userptr = container_of(mni, typeof(*userptr), notifier);
-	struct xe_userptr_vma *uvma = container_of(userptr, typeof(*uvma), userptr);
+	struct xe_userptr *userptr = &uvma->userptr;
 	struct xe_vma *vma = &uvma->vma;
-	struct xe_vm *vm = xe_vma_vm(vma);
 	struct dma_resv_iter cursor;
 	struct dma_fence *fence;
 	long err;

-	xe_assert(vm->xe, xe_vma_is_userptr(vma));
-	trace_xe_vma_userptr_invalidate(vma);
-
-	if (!mmu_notifier_range_blockable(range))
-		return false;
-
-	vm_dbg(&xe_vma_vm(vma)->xe->drm,
-	       "NOTIFIER: addr=0x%016llx, range=0x%016llx",
-	       xe_vma_start(vma), xe_vma_size(vma));
-
-	down_write(&vm->userptr.notifier_lock);
-	mmu_interval_set_seq(mni, cur_seq);
-
-	/* No need to stop gpu access if the userptr is not yet bound. */
-	if (!userptr->initial_bind) {
-		up_write(&vm->userptr.notifier_lock);
-		return true;
-	}
-
 	/*
 	 * Tell exec and rebind worker they need to repin and rebind this
 	 * userptr.
 	 */
 	if (!xe_vm_in_fault_mode(vm) &&
-	    !(vma->gpuva.flags & XE_VMA_DESTROYED) && vma->tile_present) {
+	    !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
 		spin_lock(&vm->userptr.invalidated_lock);
 		list_move_tail(&userptr->invalidate_link,
 			       &vm->userptr.invalidated);
 		spin_unlock(&vm->userptr.invalidated_lock);
 	}

-	up_write(&vm->userptr.notifier_lock);
-
 	/*
 	 * Preempt fences turn into schedule disables, pipeline these.
 	 * Note that even in fault mode, we need to wait for binds and
@@ -642,11 +617,37 @@ static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
 					    false, MAX_SCHEDULE_TIMEOUT);
 	XE_WARN_ON(err <= 0);

-	if (xe_vm_in_fault_mode(vm)) {
+	if (xe_vm_in_fault_mode(vm) && userptr->initial_bind) {
 		err = xe_vm_invalidate_vma(vma);
 		XE_WARN_ON(err);
 	}

+	xe_hmm_userptr_unmap(uvma);
+}
+
+static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
+				   const struct mmu_notifier_range *range,
+				   unsigned long cur_seq)
+{
+	struct xe_userptr_vma *uvma = container_of(mni, typeof(*uvma), userptr.notifier);
+	struct xe_vma *vma = &uvma->vma;
+	struct xe_vm *vm = xe_vma_vm(vma);
+
+	xe_assert(vm->xe, xe_vma_is_userptr(vma));
+	trace_xe_vma_userptr_invalidate(vma);
+
+	if (!mmu_notifier_range_blockable(range))
+		return false;
+
+	vm_dbg(&xe_vma_vm(vma)->xe->drm,
+	       "NOTIFIER: addr=0x%016llx, range=0x%016llx",
+	       xe_vma_start(vma), xe_vma_size(vma));
+
+	down_write(&vm->userptr.notifier_lock);
+	mmu_interval_set_seq(mni, cur_seq);
+
+	__vma_userptr_invalidate(vm, uvma);
 	up_write(&vm->userptr.notifier_lock);
 	trace_xe_vma_userptr_invalidate_complete(vma);

 	return true;
@@ -656,6 +657,34 @@ static const struct mmu_interval_notifier_ops vma_userptr_notifier_ops = {
 	.invalidate = vma_userptr_invalidate,
 };

+#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
+/**
+ * xe_vma_userptr_force_invalidate() - force invalidate a userptr
+ * @uvma: The userptr vma to invalidate
+ *
+ * Perform a forced userptr invalidation for testing purposes.
+ */
+void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
+{
+	struct xe_vm *vm = xe_vma_vm(&uvma->vma);
+
+	/* Protect against concurrent userptr pinning */
+	lockdep_assert_held(&vm->lock);
+	/* Protect against concurrent notifiers */
+	lockdep_assert_held(&vm->userptr.notifier_lock);
+	/*
+	 * Protect against concurrent instances of this function and
+	 * the critical exec sections
+	 */
+	xe_vm_assert_held(vm);
+
+	if (!mmu_interval_read_retry(&uvma->userptr.notifier,
+				     uvma->userptr.notifier_seq))
+		uvma->userptr.notifier_seq -= 2;
+	__vma_userptr_invalidate(vm, uvma);
+}
+#endif
+
 int xe_vm_userptr_pin(struct xe_vm *vm)
 {
 	struct xe_userptr_vma *uvma, *next;
@@ -1012,6 +1041,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
 			INIT_LIST_HEAD(&userptr->invalidate_link);
 			INIT_LIST_HEAD(&userptr->repin_link);
 			vma->gpuva.gem.offset = bo_offset_or_userptr;
+			mutex_init(&userptr->unmap_mutex);
 
 			err = mmu_interval_notifier_insert(&userptr->notifier,
 							   current->mm,
@@ -1053,6 +1083,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
 		 * them anymore
 		 */
 		mmu_interval_notifier_remove(&userptr->notifier);
+		mutex_destroy(&userptr->unmap_mutex);
 		xe_vm_put(vm);
 	} else if (xe_vma_is_null(vma)) {
 		xe_vm_put(vm);
@@ -2284,8 +2315,17 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
 			break;
 		}
 	case DRM_GPUVA_OP_UNMAP:
 		xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 		break;
 	case DRM_GPUVA_OP_PREFETCH:
+		/* FIXME: Need to skip some prefetch ops */
 		vma = gpuva_to_vma(op->base.prefetch.va);
+
+		if (xe_vma_is_userptr(vma)) {
+			err = xe_vma_userptr_pin_pages(to_userptr_vma(vma));
+			if (err)
+				return err;
+		}
+
 		xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
 		break;
 	default:
@@ -275,9 +275,17 @@ static inline void vm_dbg(const struct drm_device *dev,
 			  const char *format, ...)
 { /* noop */ }
 #endif
 #endif
 
 struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm);
 void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
 void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
 void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
 
+#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
+void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma);
+#else
+static inline void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
+{
+}
+#endif
+
 #endif
@@ -59,12 +59,16 @@ struct xe_userptr {
 	struct sg_table *sg;
 	/** @notifier_seq: notifier sequence number */
 	unsigned long notifier_seq;
+	/** @unmap_mutex: Mutex protecting dma-unmapping */
+	struct mutex unmap_mutex;
 	/**
 	 * @initial_bind: user pointer has been bound at least once.
 	 * write: vm->userptr.notifier_lock in read mode and vm->resv held.
 	 * read: vm->userptr.notifier_lock in write mode or vm->resv held.
 	 */
 	bool initial_bind;
+	/** @mapped: Whether the @sgt sg-table is dma-mapped. Protected by @unmap_mutex. */
+	bool mapped;
 #if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
 	u32 divisor;
 #endif
@@ -227,8 +231,8 @@ struct xe_vm {
 		 * up for revalidation. Protected from access with the
 		 * @invalidated_lock. Removing items from the list
 		 * additionally requires @lock in write mode, and adding
-		 * items to the list requires the @userptr.notifer_lock in
-		 * write mode.
+		 * items to the list requires either the @userptr.notifer_lock in
+		 * write mode, OR @lock in write mode.
 		 */
 		struct list_head invalidated;
 	} userptr;
@@ -188,7 +188,7 @@ static int appleir_raw_event(struct hid_device *hid, struct hid_report *report,
 	static const u8 flatbattery[] = { 0x25, 0x87, 0xe0 };
 	unsigned long flags;
 
-	if (len != 5)
+	if (len != 5 || !(hid->claimed & HID_CLAIMED_INPUT))
 		goto out;
 
 	if (!memcmp(data, keydown, sizeof(keydown))) {
@@ -268,11 +268,13 @@ static void cbas_ec_remove(struct platform_device *pdev)
 	mutex_unlock(&cbas_ec_reglock);
 }
 
+#ifdef CONFIG_ACPI
 static const struct acpi_device_id cbas_ec_acpi_ids[] = {
 	{ "GOOG000B", 0 },
 	{ }
 };
 MODULE_DEVICE_TABLE(acpi, cbas_ec_acpi_ids);
+#endif
 
 #ifdef CONFIG_OF
 static const struct of_device_id cbas_ec_of_match[] = {
@@ -1327,11 +1327,11 @@ static void steam_remove(struct hid_device *hdev)
 		return;
 	}
 
+	hid_destroy_device(steam->client_hdev);
 	cancel_delayed_work_sync(&steam->mode_switch);
 	cancel_work_sync(&steam->work_connect);
 	cancel_work_sync(&steam->rumble_work);
 	cancel_work_sync(&steam->unregister_work);
-	hid_destroy_device(steam->client_hdev);
 	steam->client_hdev = NULL;
 	steam->client_opened = 0;
 	if (steam->quirks & STEAM_QUIRK_WIRELESS) {
@@ -833,9 +833,9 @@ static void hid_ishtp_cl_remove(struct ishtp_cl_device *cl_device)
 						hid_ishtp_cl);
 
 	dev_dbg(ishtp_device(cl_device), "%s\n", __func__);
+	hid_ishtp_cl_deinit(hid_ishtp_cl);
 	ishtp_put_device(cl_device);
 	ishtp_hid_remove(client_data);
-	hid_ishtp_cl_deinit(hid_ishtp_cl);
 
 	hid_ishtp_cl = NULL;
@@ -261,12 +261,14 @@ err_hid_data:
  */
 void ishtp_hid_remove(struct ishtp_cl_data *client_data)
 {
+	void *data;
 	int i;
 
 	for (i = 0; i < client_data->num_hid_devices; ++i) {
 		if (client_data->hid_sensor_hubs[i]) {
-			kfree(client_data->hid_sensor_hubs[i]->driver_data);
+			data = client_data->hid_sensor_hubs[i]->driver_data;
 			hid_destroy_device(client_data->hid_sensor_hubs[i]);
+			kfree(data);
 			client_data->hid_sensor_hubs[i] = NULL;
 		}
 	}
@@ -22,11 +22,13 @@
  */
 #define AD7314_TEMP_MASK		0x7FE0
 #define AD7314_TEMP_SHIFT		5
+#define AD7314_LEADING_ZEROS_MASK	BIT(15)
 
 /*
  * ADT7301 and ADT7302 temperature masks
  */
 #define ADT7301_TEMP_MASK		0x3FFF
+#define ADT7301_LEADING_ZEROS_MASK	(BIT(15) | BIT(14))
 
 enum ad7314_variant {
 	adt7301,
@@ -65,12 +67,20 @@ static ssize_t ad7314_temperature_show(struct device *dev,
 		return ret;
 	switch (spi_get_device_id(chip->spi_dev)->driver_data) {
 	case ad7314:
+		if (ret & AD7314_LEADING_ZEROS_MASK) {
+			/* Invalid read-out, leading zero part is missing */
+			return -EIO;
+		}
 		data = (ret & AD7314_TEMP_MASK) >> AD7314_TEMP_SHIFT;
 		data = sign_extend32(data, 9);
 
 		return sprintf(buf, "%d\n", 250 * data);
 	case adt7301:
 	case adt7302:
+		if (ret & ADT7301_LEADING_ZEROS_MASK) {
+			/* Invalid read-out, leading zero part is missing */
+			return -EIO;
+		}
 		/*
 		 * Documented as a 13 bit twos complement register
 		 * with a sign bit - which is a 14 bit 2's complement
@@ -181,40 +181,40 @@ static const struct ntc_compensation ncpXXwf104[] = {
 };
 
 static const struct ntc_compensation ncpXXxh103[] = {
-	{ .temp_c	= -40, .ohm	= 247565 },
-	{ .temp_c	= -35, .ohm	= 181742 },
-	{ .temp_c	= -30, .ohm	= 135128 },
-	{ .temp_c	= -25, .ohm	= 101678 },
-	{ .temp_c	= -20, .ohm	= 77373 },
-	{ .temp_c	= -15, .ohm	= 59504 },
-	{ .temp_c	= -10, .ohm	= 46222 },
-	{ .temp_c	= -5, .ohm	= 36244 },
-	{ .temp_c	= 0, .ohm	= 28674 },
-	{ .temp_c	= 5, .ohm	= 22878 },
-	{ .temp_c	= 10, .ohm	= 18399 },
-	{ .temp_c	= 15, .ohm	= 14910 },
-	{ .temp_c	= 20, .ohm	= 12169 },
+	{ .temp_c	= -40, .ohm	= 195652 },
+	{ .temp_c	= -35, .ohm	= 148171 },
+	{ .temp_c	= -30, .ohm	= 113347 },
+	{ .temp_c	= -25, .ohm	= 87559 },
+	{ .temp_c	= -20, .ohm	= 68237 },
+	{ .temp_c	= -15, .ohm	= 53650 },
+	{ .temp_c	= -10, .ohm	= 42506 },
+	{ .temp_c	= -5, .ohm	= 33892 },
+	{ .temp_c	= 0, .ohm	= 27219 },
+	{ .temp_c	= 5, .ohm	= 22021 },
+	{ .temp_c	= 10, .ohm	= 17926 },
+	{ .temp_c	= 15, .ohm	= 14674 },
+	{ .temp_c	= 20, .ohm	= 12081 },
 	{ .temp_c	= 25, .ohm	= 10000 },
-	{ .temp_c	= 30, .ohm	= 8271 },
-	{ .temp_c	= 35, .ohm	= 6883 },
-	{ .temp_c	= 40, .ohm	= 5762 },
-	{ .temp_c	= 45, .ohm	= 4851 },
-	{ .temp_c	= 50, .ohm	= 4105 },
-	{ .temp_c	= 55, .ohm	= 3492 },
-	{ .temp_c	= 60, .ohm	= 2985 },
-	{ .temp_c	= 65, .ohm	= 2563 },
-	{ .temp_c	= 70, .ohm	= 2211 },
-	{ .temp_c	= 75, .ohm	= 1915 },
-	{ .temp_c	= 80, .ohm	= 1666 },
-	{ .temp_c	= 85, .ohm	= 1454 },
-	{ .temp_c	= 90, .ohm	= 1275 },
-	{ .temp_c	= 95, .ohm	= 1121 },
-	{ .temp_c	= 100, .ohm	= 990 },
-	{ .temp_c	= 105, .ohm	= 876 },
-	{ .temp_c	= 110, .ohm	= 779 },
-	{ .temp_c	= 115, .ohm	= 694 },
-	{ .temp_c	= 120, .ohm	= 620 },
-	{ .temp_c	= 125, .ohm	= 556 },
+	{ .temp_c	= 30, .ohm	= 8315 },
+	{ .temp_c	= 35, .ohm	= 6948 },
+	{ .temp_c	= 40, .ohm	= 5834 },
+	{ .temp_c	= 45, .ohm	= 4917 },
+	{ .temp_c	= 50, .ohm	= 4161 },
+	{ .temp_c	= 55, .ohm	= 3535 },
+	{ .temp_c	= 60, .ohm	= 3014 },
+	{ .temp_c	= 65, .ohm	= 2586 },
+	{ .temp_c	= 70, .ohm	= 2228 },
+	{ .temp_c	= 75, .ohm	= 1925 },
+	{ .temp_c	= 80, .ohm	= 1669 },
+	{ .temp_c	= 85, .ohm	= 1452 },
+	{ .temp_c	= 90, .ohm	= 1268 },
+	{ .temp_c	= 95, .ohm	= 1110 },
+	{ .temp_c	= 100, .ohm	= 974 },
+	{ .temp_c	= 105, .ohm	= 858 },
+	{ .temp_c	= 110, .ohm	= 758 },
+	{ .temp_c	= 115, .ohm	= 672 },
+	{ .temp_c	= 120, .ohm	= 596 },
+	{ .temp_c	= 125, .ohm	= 531 },
 };
 
 /*
@@ -127,8 +127,6 @@ static int update_thresholds(struct peci_dimmtemp *priv, int dimm_no)
 		return 0;
 
 	ret = priv->gen_info->read_thresholds(priv, dimm_order, chan_rank, &data);
-	if (ret == -ENODATA) /* Use default or previous value */
-		return 0;
 	if (ret)
 		return ret;
 
@@ -509,11 +507,11 @@ read_thresholds_icx(struct peci_dimmtemp *priv, int dimm_order, int chan_rank, u
 
 	ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd4, &reg_val);
 	if (ret || !(reg_val & BIT(31)))
-		return -ENODATA; /* Use default or previous value */
+		return -ENODATA;
 
 	ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd0, &reg_val);
 	if (ret)
-		return -ENODATA; /* Use default or previous value */
+		return -ENODATA;
 
 	/*
 	 * Device 26, Offset 224e0: IMC 0 channel 0 -> rank 0
@@ -546,11 +544,11 @@ read_thresholds_spr(struct peci_dimmtemp *priv, int dimm_order, int chan_rank, u
 
 	ret = peci_ep_pci_local_read(priv->peci_dev, 0, 30, 0, 2, 0xd4, &reg_val);
 	if (ret || !(reg_val & BIT(31)))
-		return -ENODATA; /* Use default or previous value */
+		return -ENODATA;
 
 	ret = peci_ep_pci_local_read(priv->peci_dev, 0, 30, 0, 2, 0xd0, &reg_val);
 	if (ret)
-		return -ENODATA; /* Use default or previous value */
+		return -ENODATA;
 
 	/*
 	 * Device 26, Offset 219a8: IMC 0 channel 0 -> rank 0