9 Commits

Author SHA1 Message Date
Eric Funsten
14898fb272 NVIDIA: SAUCE: perf: arm_cspmu: NVIDIA T264 PMU leakage workaround
The NVIDIA Tegra T264 SOC has a HW issue where events captured in a
prior experiment can corrupt the current experiment. This adds a
workaround with the following steps (a condensed sketch follows the
list):
1. When the first experiment ends, disable PMCR.E as we normally do
2. Clear PMCNTEN for all counters
3. Enable PMCR.E
4. Disable PMCR.E
5. Re-enable PMCNTEN for the counters cleared in step 2
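A condensed C sketch of the sequence, using the PMCR/PMCNTENSET/PMCNTENCLR
offsets and arm_cspmu fields from the driver diff further down (the real
implementation there is ucf_pmu_stop_counters_leakage(); this is only an
illustration of the steps above):

/* Hypothetical condensed form of the T264 leakage workaround. */
static void t264_leakage_workaround(struct arm_cspmu *cspmu, u32 *saved)
{
	int i;

	/* Step 1: the experiment has ended, PMCR.E is disabled */
	writel(0, cspmu->base0 + PMCR);
	/* Step 2: save and clear PMCNTEN for every bank of counters */
	for (i = 0; i < cspmu->num_set_clr_reg; i++) {
		saved[i] = readl(cspmu->base0 + PMCNTENCLR + i * sizeof(u32));
		writel(saved[i], cspmu->base0 + PMCNTENCLR + i * sizeof(u32));
	}
	/* Steps 3 and 4: dummy enable/disable cycle to flush internal state */
	writel(PMCR_E, cspmu->base0 + PMCR);
	writel(0, cspmu->base0 + PMCR);
	/* Step 5: restore PMCNTEN for the counters cleared in step 2 */
	for (i = 0; i < cspmu->num_set_clr_reg; i++)
		writel(saved[i], cspmu->base0 + PMCNTENSET + i * sizeof(u32));
}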

Bug 5524939

Change-Id: Ie5885b9bb9495aa0cfb1844a88cbdc7e0509ce67
Signed-off-by: Eric Funsten <efunsten@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3459618
Reviewed-by: Besar Wicaksono <bwicaksono@nvidia.com>
Tested-by: Ryan Bissell <rbissell@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
2025-09-30 04:15:35 -07:00
Besar Wicaksono
bb1aeb903a NVIDIA: SAUCE: perf: arm_cspmu: nvidia: add T264 support
Adds PMU support for the following IPs in the NVIDIA Tegra T264 SOC:
- Unified Coherency Fabric (UCF)
- Vision
- Display
- High-speed IO
- UCF GPU

Bug 5524939

Change-Id: I595dc746e3b45b9f40c5f4343212c37f42f0faa1
Signed-off-by: Besar Wicaksono <bwicaksono@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3459617
Tested-by: Ryan Bissell <rbissell@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
2025-09-30 04:15:30 -07:00
Brad Griffis
b837fb9d0d NVIDIA: SAUCE: arm64: defconfig: more Calico modules
Configure the following in order to get Calico operational:

CONFIG_NET_IPIP=m
CONFIG_IP_SET_MAX=512
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPMARK=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_IPMAC=m
CONFIG_IP_SET_HASH_MAC=m
CONFIG_IP_SET_HASH_NETPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETNET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m

Bug 5438065

Change-Id: I35d5510f9a572baf66944ad3489d1e050499d25e
Signed-off-by: Brad Griffis <bgriffis@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3449304
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Tested-by: Yongchang Liu <yongchangl@nvidia.com>
Reviewed-by: Yongchang Liu <yongchangl@nvidia.com>
2025-09-26 21:28:53 -07:00
Rahul Bedarkar
f29136b334 NVIDIA: SAUCE: tegra-epl: add plausibility checks and improve error handling
Add comprehensive plausibility checks and validation throughout the EPL driver:

- Add timestamp validation with configurable resolution for TEGRA234/TEGRA264
- Implement timestamp overflow handling and range validation (90ms limit)
- Add input parameter validation in device file ioctl operations
- Improve error handling in HSP mailbox communication with proper channel validation
- Add register mapping validation before write operations
- Enhance power management state handling with proper error checking
- Replace generic print statements with device-specific debug messages
- Add proper error handling in probe, suspend, resume, and shutdown functions
- Improve handshake retry logic with better error reporting
- Add validation for state parameters in PM notification functions

The changes improve driver robustness by adding proper validation
for all critical operations and providing better error reporting
for debugging purposes.
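A minimal sketch of the timestamp plausibility check described above, built
from the constants this patch adds (TIMESTAMP_CNT_PERIOD,
TIMESTAMP_VALID_RANGE); the helper name and its parameters are illustrative,
the actual check lives inline in epl_report_error() in the diff below:

/* Reject reports whose timestamp is older than ~90 ms, accounting for
 * wrap-around of the 32-bit SoC TSC value carried in the report. */
static bool epl_timestamp_plausible(uint32_t reported, uint32_t now,
				    uint32_t resolution_ns)
{
	uint64_t now64 = now;
	uint64_t rep64 = reported;

	if (now64 < rep64)			/* counter wrapped since the report */
		now64 += TIMESTAMP_CNT_PERIOD;	/* add 2^32 ticks */

	/* 90 ms converted to counter ticks for this SoC's resolution */
	return (now64 - rep64) <= (TIMESTAMP_VALID_RANGE / resolution_ns);
}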

Bug 5320023

Change-Id: I8095416921ec8d229e2cdab47d1b3c3e50fa1bbf
Signed-off-by: Rahul Bedarkar <rabedarkar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3437098
(cherry picked from commit a7415c43437d37f2732d334cfe90ef2f4c7d7575)
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3459572
Tested-by: Hiteshkumar Patel <hiteshkumarg@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Dipen Patel <dipenp@nvidia.com>
Reviewed-by: Brad Griffis <bgriffis@nvidia.com>
2025-09-26 19:27:52 -07:00
Shubham Jain
a385828614 NVIDIA: SAUCE: tegra-epl: allow tegra-epl to be built as module
- Update the Kconfig file to allow tegra-epl to
  be built as a module.
- Update the Misc EC SW generic error index offset
  depending on Orin or Thor chip (see the sketch below).
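For reference, a sketch of the per-chip offset selection, keyed off the
device-tree compatible strings and values that appear in the driver diff
below; the assumption here is that Orin-class parts use the tegra234
compatible and Thor-class parts the tegra264 one:

/* Sketch: choose the Misc EC SW generic error index offset per SoC. */
static uint32_t error_index_offset = 3;

static void epl_select_error_index_offset(const struct device_node *np)
{
	if (of_device_is_compatible(np, "nvidia,tegra234-epl-client"))
		error_index_offset = 24;	/* Orin-class */
	else if (of_device_is_compatible(np, "nvidia,tegra264-epl-client"))
		error_index_offset = 3;		/* Thor-class */
}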

Bug 5142445
Bug 5119438
Bug 5405209
Bug 5415787

Change-Id: Iea589710e1a90856550623543f9ac342854c2a2c
Signed-off-by: Shubham Jain <shubhamj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3430974
(cherry picked from commit 2a56160a9c270b5b411a88f9e79865e5442581d3)
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3459571
Tested-by: Hiteshkumar Patel <hiteshkumarg@nvidia.com>
Reviewed-by: Dipen Patel <dipenp@nvidia.com>
Reviewed-by: Brad Griffis <bgriffis@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2025-09-26 19:27:47 -07:00
Rahul Bedarkar
d684657795 NVIDIA: SAUCE: tegra-epl: Map mission status reg if only required
The mission status register is only required if any of the MISC EC
registers is mapped.
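In C terms, the probe path then looks roughly like the hunk further down:
map the register only after at least one MISC EC register has been mapped
(is_misc_ec_mapped is the flag this change introduces).

/* Sketch: skip the mapping when no MISC EC register is in use. */
if (is_misc_ec_mapped) {
	mission_err_status_va =
		devm_platform_ioremap_resource(pdev, NUM_SW_GENERIC_ERR * 2);
	if (IS_ERR(mission_err_status_va)) {
		isAddrMappOk = false;
		dev_err(&pdev->dev, "error in mapping mission error status register\n");
		return PTR_ERR(mission_err_status_va);
	}
}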

Bug 5100266
Bug 5415787
Bug 5405209

Change-Id: Idf9c64050d3d106ac1b78b7c1e0d64257f9195d3
Signed-off-by: Rahul Bedarkar <rabedarkar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3430973
(cherry picked from commit c28c7e50c460912f60a1b4e265d37a88fc522136)
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3459570
Tested-by: Hiteshkumar Patel <hiteshkumarg@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Brad Griffis <bgriffis@nvidia.com>
Reviewed-by: Dipen Patel <dipenp@nvidia.com>
2025-09-26 19:27:42 -07:00
Petlozu Pravareshwar
072b8487aa NVIDIA: SAUCE: soc/tegra: pmc: Remove reset status sysfs nodes
Reset-status-related sysfs nodes are no longer supported on T264 for
security reasons. This change accordingly deletes these sysfs nodes.

Bug 5245235

Signed-off-by: Petlozu Pravareshwar <petlozup@nvidia.com>
Change-Id: Ia96872c083d23ca7f3bfc774ca9f614bcb3c63bd
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3419584
(cherry picked from commit 25870226d1aef86ea221d10f6376cc694122d196)
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3446575
Tested-by: Brad Griffis <bgriffis@nvidia.com>
Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
Reviewed-by: svcacv <svcacv@nvidia.com>
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Brad Griffis <bgriffis@nvidia.com>
2025-09-04 12:56:56 -07:00
Kartik Rajput
094a99a56d NVIDIA: SAUCE: serial: amba-pl011: Do not use IBRD
Tegra UART controllers do not support the FBRD register, which prevents
them from supporting various standard baud rates that the HW supports.

Use clk_set_rate() to program the UART clock rate instead.
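A condensed sketch of the resulting logic in pl011_set_termios()
(enable_car is the vendor flag the patch checks; the full change is in the
amba-pl011 diff below):

/* On Tegra, bypass the IBRD/FBRD divisor and program the requested
 * baud rate directly through the clock framework. */
if (uap->vendor->enable_car) {
	baud = tty_termios_baud_rate(termios);
	clk_set_rate(uap->clk, baud * clkdiv);	/* clkdiv is 16, or 8 on oversampling variants */
} else {
	baud = uart_get_baud_rate(port, termios, old, 0,
				  port->uartclk / clkdiv);
	/* the divisor is then written to REG_FBRD/REG_IBRD as before */
}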

Bug 5406304

Change-Id: I6fcf14b0186e54a6f418287791d80e12d17600a0
Signed-off-by: Kartik Rajput <kkartik@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3441884
Reviewed-by: Laxman Dewangan <ldewangan@nvidia.com>
(cherry picked from commit daa72589899560fb3570b165163bd8e35cf704e1)
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3445145
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
2025-09-04 01:56:57 -07:00
Oleg Nesterov
6768ecea00 posix-cpu-timers: fix race between handle_posix_cpu_timers() and posix_cpu_timer_del()
If an exiting non-autoreaping task has already passed exit_notify() and
calls handle_posix_cpu_timers() from IRQ, it can be reaped by its parent
or debugger right after unlock_task_sighand().

If a concurrent posix_cpu_timer_del() runs at that moment, it won't be
able to detect timer->it.cpu.firing != 0: cpu_timer_task_rcu() and/or
lock_task_sighand() will fail.

Add the tsk->exit_state check into run_posix_cpu_timers() to fix this.

This fix is not needed if CONFIG_POSIX_CPU_TIMERS_TASK_WORK=y, because
exit_task_work() is called before exit_notify(). But the check still
makes sense, task_work_add(&tsk->posix_cputimers_work.work) will fail
anyway in this case.
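The fix itself is the single guard added to run_posix_cpu_timers() in the
kernel/time diff at the end of this change set:

/* Ensure release_task(tsk) cannot run while handle_posix_cpu_timers()
 * is in flight; otherwise a concurrent posix_cpu_timer_del() may fail
 * lock_task_sighand() and miss timer->it.cpu.firing != 0. */
if (tsk->exit_state)
	return;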

Cc: stable@vger.kernel.org
Reported-by: Benoît Sevens <bsevens@google.com>
Fixes: 0bdd2ed413 ("sched: run_posix_cpu_timers: Don't check ->exit_state, use lock_task_sighand()")
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit f90fff1e152dedf52b932240ebbd670d83330eca)

Bug 5341153

CVE-2025-38352
Change-Id: I2146869cc8f9684d4e4d56eaa247f54ed0225e1e
Signed-off-by: Brad Griffis <bgriffis@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/c/3rdparty/canonical/linux-noble/+/3435903
GVS: buildbot_gerritrpt <buildbot_gerritrpt@nvidia.com>
Reviewed-by: Paritosh Dixit <paritoshd@nvidia.com>
2025-08-20 09:12:13 -07:00
9 changed files with 627 additions and 176 deletions

View File

@@ -10,6 +10,11 @@ metrics like memory bandwidth, latency, and utilization:
* NVLink-C2C1
* CNVLink
* PCIE
* Unified Coherency Fabric (UCF)
* Vision
* Display
* High-speed IO
* UCF-GPU
PMU Driver
----------
@@ -183,6 +188,159 @@ Example usage:
perf stat -a -e nvidia_pcie_pmu_1/event=0x0,root_port=0x3/
UCF PMU
-------
The UCF PMU monitors system level cache events and DRAM traffic that flows
through UCF.
The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_sources/devices/nvidia_ucf_pmu_<socket-id>.
User can configure the PMU to capture events from specific source and destination.
The source/destination filter is described in
/sys/bus/event_sources/devices/nvidia_ucf_pmu_<socket-id>/format/. By default
traffic from all sources and destinations will be captured if no source/destination
is specified.
Example usage:
* Count event id 0x0 from any source/destination of socket 0::
perf stat -a -e nvidia_ucf_pmu_0/event=0x0/
* Count event id 0x1 from socket 0's CPUs to socket 0's DRAM::
perf stat -a -e nvidia_ucf_pmu_0/event=0x1,src_loc_cpu=0x1,dst_loc=0x1/
* Count event id 0x1 from remote source of socket 0 to local and remote DRAM::
perf stat -a -e nvidia_ucf_pmu_0/event=0x1,src_rem=0x1,dst_loc=0x1,dst_rem=0x1/
* Count event id 0x2 from any source/destination of socket 1::
perf stat -a -e nvidia_ucf_pmu_1/event=0x2/
* Count event id 0x3 from socket 1's CPUs to socket 1's DRAM::
perf stat -a -e nvidia_ucf_pmu_1/event=0x3,src_loc_cpu=0x1,dst_loc=0x1/
Vision PMU
------------
The vision PMU monitors memory traffic from the multimedia IPs in the SOC.
The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_sources/devices/nvidia_vision_pmu_<socket-id>.
User can configure the PMU to capture events from specific IPs.
/sys/bus/event_sources/devices/nvidia_vision_pmu_<socket-id>/format/ contains
the filter attribute name of each multimedia IP. This filter attribute is a
bitmask to select the AXI/hub interface of the IP to monitor. By default traffic
from all interfaces of all IPs will be captured if no IPs are specified.
Example usage:
* Count event id 0x0 from all multimedia IPs in socket 0::
perf stat -a -e nvidia_vision_pmu_0/event=0x0/
* Count event id 0x1 from AXI/hub interface 0 in VI-0 of socket 0::
perf stat -a -e nvidia_vision_pmu_0/event=0x1,vi_0=0x1/
* Count event id 0x1 from AXI/hub interface 0 and 1 in VI-0 of socket 0::
perf stat -a -e nvidia_vision_pmu_0/event=0x1,vi_0=0x3/
* Count event id 0x2 from all multimedia IPs in socket 1::
perf stat -a -e nvidia_vision_pmu_1/event=0x2/
* Count event id 0x3 from AXI/hub interface 0 in VI-0 and PVA of socket 1::
perf stat -a -e nvidia_vision_pmu_1/event=0x3,vi_0=0x1,pva=0x1/
Display PMU
------------
The display PMU monitors memory traffic from the display IP in the SOC.
The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_sources/devices/nvidia_display_pmu_<socket-id>.
Example usage:
* Count event id 0x0 in socket 0::
perf stat -a -e nvidia_display_pmu_0/event=0x0/
* Count event id 0x0 in socket 1::
perf stat -a -e nvidia_display_pmu_1/event=0x0/
High-speed I/O PMU
-------------------
The high-speed I/O PMU monitors memory traffic from the high speed I/O devices
in the SOC.
The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_sources/devices/nvidia_uphy_pmu_<socket-id>.
User can configure the PMU to capture events from specific I/Os.
/sys/bus/event_sources/devices/nvidia_uphy_pmu_<socket-id>/format/ contains
the filter attribute name of each I/O. This filter attribute is a
bitmask to select the AXI/hub interface of the I/O to monitor. By default
traffic from all interfaces of all I/Os will be captured if no I/Os are
specified.
Example usage:
* Count event id 0x0 from all I/Os in socket 0::
perf stat -a -e nvidia_uphy_pmu_0/event=0x0/
* Count event id 0x1 from PCIE Root Port 1 of socket 0::
perf stat -a -e nvidia_uphy_pmu_0/event=0x1,pcie_rp_1=0x1/
* Count event id 0x1 from PCIE Root Port 1 and Root Port 2 of socket 0::
perf stat -a -e nvidia_uphy_pmu_0/event=0x1,pcie_rp_1=0x1,pcie_rp_2=0x1/
* Count event id 0x2 from all IPs in socket 1::
perf stat -a -e nvidia_uphy_pmu_1/event=0x2/
* Count event id 0x3 from PCIE Root Port 3 and UFS of socket 1::
perf stat -a -e nvidia_uphy_pmu_1/event=0x3,pcie_rp_3=0x1,ufs=0x1/
UCF-GPU PMU
------------
The UCF-GPU PMU monitors integrated GPU physical address traffic flowing through
UCF.
The events and configuration options of this PMU device are described in sysfs,
see /sys/bus/event_sources/devices/nvidia_ucf_gpu_pmu_<socket-id>.
Example usage:
* Count event id 0x0 in socket 0::
perf stat -a -e nvidia_ucf_gpu_pmu_0/event=0x0/
* Count event id 0x0 in socket 1::
perf stat -a -e nvidia_ucf_gpu_pmu_1/event=0x0/
.. _NVIDIA_Uncore_PMU_Traffic_Coverage_Section:
Traffic Coverage

View File

@@ -151,6 +151,7 @@ CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_NET_IPIP=m
CONFIG_SYN_COOKIES=y
CONFIG_IPV6=m
CONFIG_NETFILTER=y
@@ -181,6 +182,23 @@ CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_U32=m
CONFIG_IP_SET=m
CONFIG_IP_SET_MAX=512
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPMARK=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_IPMAC=m
CONFIG_IP_SET_HASH_MAC=m
CONFIG_IP_SET_HASH_NETPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETNET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m
CONFIG_IP_VS=m
CONFIG_NF_TABLES_ARP=y
CONFIG_IP_NF_IPTABLES=m

View File

@@ -11,6 +11,12 @@
#include "arm_cspmu.h"
#define PMCNTENSET 0xC00
#define PMCNTENCLR 0xC20
#define PMCR 0xE04
#define PMCR_E BIT(0)
#define NV_PCIE_PORT_COUNT 10ULL
#define NV_PCIE_FILTER_ID_MASK GENMASK_ULL(NV_PCIE_PORT_COUNT - 1, 0)
@@ -20,6 +26,16 @@
#define NV_CNVL_PORT_COUNT 4ULL
#define NV_CNVL_FILTER_ID_MASK GENMASK_ULL(NV_CNVL_PORT_COUNT - 1, 0)
#define NV_UCF_FILTER_ID_MASK GENMASK_ULL(4, 0)
#define NV_UPHY_FILTER_ID_MASK GENMASK_ULL(16, 0)
#define NV_VISION_FILTER_ID_MASK GENMASK_ULL(19, 0)
#define NV_DISPLAY_FILTER_ID_MASK BIT(0)
#define NV_UCF_GPU_FILTER_ID_MASK BIT(0)
#define NV_GENERIC_FILTER_ID_MASK GENMASK_ULL(31, 0)
#define NV_PRODID_MASK (ARM_CSPMU_PMIIDR_PRODUCTID | \
@@ -45,6 +61,7 @@ struct nv_cspmu_ctx {
u32 filter_default_val;
struct attribute **event_attr;
struct attribute **format_attr;
u32 *pmcnten;
};
static struct attribute *scf_pmu_event_attrs[] = {
@@ -178,6 +195,72 @@ static struct attribute *mcf_pmu_event_attrs[] = {
NULL,
};
static struct attribute *ucf_pmu_event_attrs[] = {
ARM_CSPMU_EVENT_ATTR(slc_allocate, 0xf0),
ARM_CSPMU_EVENT_ATTR(slc_refill, 0xf1),
ARM_CSPMU_EVENT_ATTR(slc_access, 0xf2),
ARM_CSPMU_EVENT_ATTR(slc_wb, 0xf3),
ARM_CSPMU_EVENT_ATTR(slc_hit, 0x118),
ARM_CSPMU_EVENT_ATTR(slc_access_wr, 0x112),
ARM_CSPMU_EVENT_ATTR(slc_access_rd, 0x111),
ARM_CSPMU_EVENT_ATTR(slc_refill_wr, 0x10a),
ARM_CSPMU_EVENT_ATTR(slc_refill_rd, 0x109),
ARM_CSPMU_EVENT_ATTR(slc_hit_wr, 0x11a),
ARM_CSPMU_EVENT_ATTR(slc_hit_rd, 0x119),
ARM_CSPMU_EVENT_ATTR(slc_access_dataless, 0x183),
ARM_CSPMU_EVENT_ATTR(slc_access_atomic, 0x184),
ARM_CSPMU_EVENT_ATTR(local_snoop, 0x180),
ARM_CSPMU_EVENT_ATTR(ext_snp_access, 0x181),
ARM_CSPMU_EVENT_ATTR(ext_snp_evict, 0x182),
ARM_CSPMU_EVENT_ATTR(ucf_bus_cycles, 0x1d),
ARM_CSPMU_EVENT_ATTR(any_access_wr, 0x112),
ARM_CSPMU_EVENT_ATTR(any_access_rd, 0x111),
ARM_CSPMU_EVENT_ATTR(any_byte_wr, 0x114),
ARM_CSPMU_EVENT_ATTR(any_byte_rd, 0x113),
ARM_CSPMU_EVENT_ATTR(any_outstanding_rd, 0x115),
ARM_CSPMU_EVENT_ATTR(local_dram_access_wr, 0x122),
ARM_CSPMU_EVENT_ATTR(local_dram_access_rd, 0x121),
ARM_CSPMU_EVENT_ATTR(local_dram_byte_wr, 0x124),
ARM_CSPMU_EVENT_ATTR(local_dram_byte_rd, 0x123),
ARM_CSPMU_EVENT_ATTR(mmio_access_wr, 0x132),
ARM_CSPMU_EVENT_ATTR(mmio_access_rd, 0x131),
ARM_CSPMU_EVENT_ATTR(mmio_byte_wr, 0x134),
ARM_CSPMU_EVENT_ATTR(mmio_byte_rd, 0x133),
ARM_CSPMU_EVENT_ATTR(mmio_outstanding_rd, 0x135),
ARM_CSPMU_EVENT_ATTR(cycles, ARM_CSPMU_EVT_CYCLES_DEFAULT),
NULL,
};
static struct attribute *display_pmu_event_attrs[] = {
ARM_CSPMU_EVENT_ATTR(rd_bytes_loc, 0x0),
ARM_CSPMU_EVENT_ATTR(rd_req_loc, 0x6),
ARM_CSPMU_EVENT_ATTR(rd_cum_outs_loc, 0xc),
ARM_CSPMU_EVENT_ATTR(cycles, ARM_CSPMU_EVT_CYCLES_DEFAULT),
NULL,
};
static struct attribute *ucf_gpu_pmu_event_attrs[] = {
ARM_CSPMU_EVENT_ATTR(rd_bytes_loc_rem, 0x0),
ARM_CSPMU_EVENT_ATTR(wr_bytes_loc, 0x2),
ARM_CSPMU_EVENT_ATTR(wr_bytes_rem, 0x3),
ARM_CSPMU_EVENT_ATTR(rd_req_loc_rem, 0x6),
ARM_CSPMU_EVENT_ATTR(wr_req_loc, 0x8),
ARM_CSPMU_EVENT_ATTR(wr_req_rem, 0x9),
ARM_CSPMU_EVENT_ATTR(rd_cum_outs_loc_rem, 0xc),
ARM_CSPMU_EVENT_ATTR(cycles, ARM_CSPMU_EVT_CYCLES_DEFAULT),
NULL,
};
static struct attribute *generic_pmu_event_attrs[] = {
ARM_CSPMU_EVENT_ATTR(cycles, ARM_CSPMU_EVT_CYCLES_DEFAULT),
NULL,
@@ -205,6 +288,54 @@ static struct attribute *cnvlink_pmu_format_attrs[] = {
NULL,
};
static struct attribute *ucf_pmu_format_attrs[] = {
ARM_CSPMU_FORMAT_EVENT_ATTR,
ARM_CSPMU_FORMAT_ATTR(src_loc_noncpu, "config1:0"),
ARM_CSPMU_FORMAT_ATTR(src_loc_cpu, "config1:1"),
ARM_CSPMU_FORMAT_ATTR(src_rem, "config1:2"),
ARM_CSPMU_FORMAT_ATTR(dst_loc, "config1:3"),
ARM_CSPMU_FORMAT_ATTR(dst_rem, "config1:4"),
NULL,
};
static struct attribute *display_pmu_format_attrs[] = {
ARM_CSPMU_FORMAT_EVENT_ATTR,
NULL,
};
static struct attribute *ucf_gpu_pmu_format_attrs[] = {
ARM_CSPMU_FORMAT_EVENT_ATTR,
NULL,
};
static struct attribute *uphy_pmu_format_attrs[] = {
ARM_CSPMU_FORMAT_EVENT_ATTR,
ARM_CSPMU_FORMAT_ATTR(pcie_rp_1, "config1:0"),
ARM_CSPMU_FORMAT_ATTR(pcie_rp_2, "config1:1"),
ARM_CSPMU_FORMAT_ATTR(pcie_rp_3, "config1:2"),
ARM_CSPMU_FORMAT_ATTR(pcie_rp_4, "config1:3"),
ARM_CSPMU_FORMAT_ATTR(pcie_rp_5, "config1:4"),
ARM_CSPMU_FORMAT_ATTR(xusb, "config1:5-10"),
ARM_CSPMU_FORMAT_ATTR(mgbe_0, "config1:11"),
ARM_CSPMU_FORMAT_ATTR(mgbe_1, "config1:12"),
ARM_CSPMU_FORMAT_ATTR(mgbe_2, "config1:13"),
ARM_CSPMU_FORMAT_ATTR(mgbe_3, "config1:14"),
ARM_CSPMU_FORMAT_ATTR(eqos, "config1:15"),
ARM_CSPMU_FORMAT_ATTR(ufs, "config1:16"),
NULL,
};
static struct attribute *vision_pmu_format_attrs[] = {
ARM_CSPMU_FORMAT_EVENT_ATTR,
ARM_CSPMU_FORMAT_ATTR(vi_0, "config1:0-1"),
ARM_CSPMU_FORMAT_ATTR(vi_1, "config1:2-3"),
ARM_CSPMU_FORMAT_ATTR(isp_0, "config1:4-7"),
ARM_CSPMU_FORMAT_ATTR(isp_1, "config1:8-11"),
ARM_CSPMU_FORMAT_ATTR(vic, "config1:12-13"),
ARM_CSPMU_FORMAT_ATTR(pva, "config1:14-19"),
NULL,
};
static struct attribute *generic_pmu_format_attrs[] = {
ARM_CSPMU_FORMAT_EVENT_ATTR,
ARM_CSPMU_FORMAT_FILTER_ATTR,
@@ -246,6 +377,43 @@ static u32 nv_cspmu_event_filter(const struct perf_event *event)
return event->attr.config1 & ctx->filter_mask;
}
/*
* UCF leakage workaround:
* Disables PMCR and PMCNTEN for each counter before running a
* dummy experiment. This clears the internal state and prevents
* event leakage from the previous experiment. PMCNTEN is then
* re-enabled.
*/
static void ucf_pmu_stop_counters_leakage(struct arm_cspmu *cspmu)
{
int reg_id;
u32 cntenclr_offset = PMCNTENCLR;
u32 cntenset_offset = PMCNTENSET;
struct nv_cspmu_ctx *ctx = to_nv_cspmu_ctx(cspmu);
/* Step 1: Disable PMCR.E */
writel(0, cspmu->base0 + PMCR);
/* Step 2: Clear PMCNTEN for all counters */
for (reg_id = 0; reg_id < cspmu->num_set_clr_reg; ++reg_id) {
ctx->pmcnten[reg_id] = readl(cspmu->base0 + cntenclr_offset);
writel(ctx->pmcnten[reg_id], cspmu->base0 + cntenclr_offset);
cntenclr_offset += sizeof(u32);
}
/* Step 3: Enable PMCR.E */
writel(PMCR_E, cspmu->base0 + PMCR);
/* Step 4: Disable PMCR.E */
writel(0, cspmu->base0 + PMCR);
/* Step 5: Enable back PMCNTEN for counters cleared in step 2 */
for (reg_id = 0; reg_id < cspmu->num_set_clr_reg; ++reg_id) {
writel(ctx->pmcnten[reg_id], cspmu->base0 + cntenset_offset);
cntenset_offset += sizeof(u32);
}
}
enum nv_cspmu_name_fmt {
NAME_FMT_GENERIC,
NAME_FMT_SOCKET
@@ -260,6 +428,7 @@ struct nv_cspmu_match {
enum nv_cspmu_name_fmt name_fmt;
struct attribute **event_attr;
struct attribute **format_attr;
void (*stop_counters)(struct arm_cspmu *cspmu);
};
static const struct nv_cspmu_match nv_cspmu_match[] = {
@@ -313,6 +482,57 @@ static const struct nv_cspmu_match nv_cspmu_match[] = {
.event_attr = scf_pmu_event_attrs,
.format_attr = scf_pmu_format_attrs
},
{
.prodid = 0x2CF10000,
.prodid_mask = NV_PRODID_MASK,
.filter_mask = NV_UCF_FILTER_ID_MASK,
.filter_default_val = NV_UCF_FILTER_ID_MASK,
.name_pattern = "nvidia_ucf_pmu_%u",
.name_fmt = NAME_FMT_SOCKET,
.event_attr = ucf_pmu_event_attrs,
.format_attr = ucf_pmu_format_attrs,
.stop_counters = ucf_pmu_stop_counters_leakage
},
{
.prodid = 0x10800000,
.prodid_mask = NV_PRODID_MASK,
.filter_mask = NV_UPHY_FILTER_ID_MASK,
.filter_default_val = NV_UPHY_FILTER_ID_MASK,
.name_pattern = "nvidia_uphy_pmu_%u",
.name_fmt = NAME_FMT_SOCKET,
.event_attr = mcf_pmu_event_attrs,
.format_attr = uphy_pmu_format_attrs
},
{
.prodid = 0x10a00000,
.prodid_mask = NV_PRODID_MASK,
.filter_mask = 0,
.filter_default_val = NV_UCF_GPU_FILTER_ID_MASK,
.name_pattern = "nvidia_ucf_gpu_pmu_%u",
.name_fmt = NAME_FMT_SOCKET,
.event_attr = ucf_gpu_pmu_event_attrs,
.format_attr = ucf_gpu_pmu_format_attrs
},
{
.prodid = 0x10d00000,
.prodid_mask = NV_PRODID_MASK,
.filter_mask = 0,
.filter_default_val = NV_DISPLAY_FILTER_ID_MASK,
.name_pattern = "nvidia_display_pmu_%u",
.name_fmt = NAME_FMT_SOCKET,
.event_attr = display_pmu_event_attrs,
.format_attr = display_pmu_format_attrs
},
{
.prodid = 0x10e00000,
.prodid_mask = NV_PRODID_MASK,
.filter_mask = NV_VISION_FILTER_ID_MASK,
.filter_default_val = NV_VISION_FILTER_ID_MASK,
.name_pattern = "nvidia_vision_pmu_%u",
.name_fmt = NAME_FMT_SOCKET,
.event_attr = mcf_pmu_event_attrs,
.format_attr = vision_pmu_format_attrs
},
{
.prodid = 0,
.prodid_mask = 0,
@@ -389,6 +609,13 @@ static int nv_cspmu_init_ops(struct arm_cspmu *cspmu)
impl_ops->get_event_attrs = nv_cspmu_get_event_attrs;
impl_ops->get_format_attrs = nv_cspmu_get_format_attrs;
impl_ops->get_name = nv_cspmu_get_name;
if (match->stop_counters != NULL) {
ctx->pmcnten = devm_kzalloc(dev, cspmu->num_set_clr_reg *
sizeof(u32), GFP_KERNEL);
if (!ctx->pmcnten)
return -ENOMEM;
impl_ops->stop_counters = match->stop_counters;
}
return 0;
}

View File

@@ -16,7 +16,7 @@ menuconfig TEGRA_PLATFORM_DEVICES
if TEGRA_PLATFORM_DEVICES
config TEGRA_EPL
bool "Tegra Error Propagation Layer Driver"
tristate "Tegra Error Propagation Layer Driver"
depends on MAILBOX
help
The tegra-epl driver provides interface for reporting software detected

View File

@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
// Copyright (c) 2021-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
// Copyright (c) 2021-2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#include <linux/module.h>
#include <linux/fs.h>
@@ -22,12 +22,19 @@
/* Macro indicating total number of Misc Sw generic errors in Misc EC */
#define NUM_SW_GENERIC_ERR 5U
/* Error index offset in mission status register */
#define ERROR_INDEX_OFFSET 24U
/* signature code for HSP pm notify data */
#define PM_STATE_UNI_CODE 0xFDEF
/* Timestamp validation constants */
#define TIMESTAMP_CNT_PERIOD 0x100000000ULL /* 32-bit SoC TSC counter period (2^32) */
/* This value is derived from the DOS FDTI (100ms) - EPL propagation delay (10ms) */
#define TIMESTAMP_VALID_RANGE 90000000ULL /* 90ms in nanoseconds */
/* Timestamp resolution constants (in nanoseconds) */
#define TEGRA234_TIMESTAMP_RESOLUTION_NS 32U
#define TEGRA264_TIMESTAMP_RESOLUTION_NS 1U
/* State Management */
#define EPS_DOS_INIT 0U
#define EPS_DOS_SUSPEND 3U
@@ -60,6 +67,12 @@ struct epl_misc_sw_err_cfg {
const char *dev_configured;
};
/* Error index offset in mission status register */
static uint32_t error_index_offset = 3;
/* Timestamp resolution for current SoC (in nanoseconds) */
static uint32_t timestamp_resolution_ns = TEGRA264_TIMESTAMP_RESOLUTION_NS;
static int device_file_major_number;
static const char device_name[] = "epdaemon";
@@ -80,6 +93,21 @@ static uint32_t handshake_retry_count;
static bool enable_deinit_notify;
/* Helper function to read the SoC TSC timestamp */
static inline uint32_t epl_get_current_timestamp(void)
{
uint32_t timestamp;
asm volatile("mrs %0, cntvct_el0" : "=r" (timestamp));
return timestamp;
}
/* Helper function to convert SoC TSC timestamp ticks to nanoseconds */
static inline uint64_t epl_ticks_to_ns(uint64_t ticks)
{
return ticks * timestamp_resolution_ns;
}
static void tegra_hsp_tx_empty_notify(struct mbox_client *cl,
void *data, int empty_value)
{
@@ -113,21 +141,21 @@ static int tegra_hsp_mb_init(struct device *dev)
static ssize_t device_file_ioctl(
struct file *fp, unsigned int cmd, unsigned long arg)
{
uint32_t lData[MAX_LEN];
struct epl_error_report_frame error_frame;
int ret;
if (copy_from_user(lData, (void __user *)arg,
MAX_LEN * sizeof(uint32_t)))
/* Validate input parameters */
if (!arg)
return -EINVAL;
if (copy_from_user(&error_frame, (void __user *)arg,
sizeof(error_frame)))
return -EACCES;
switch (cmd) {
case EPL_REPORT_ERROR_CMD:
if (hs_state == HANDSHAKE_DONE)
ret = mbox_send_message(epl_hsp_v->tx.chan, (void *) lData);
else
ret = -ENODEV;
ret = epl_report_error(error_frame);
break;
default:
return -EINVAL;
@@ -148,12 +176,16 @@ int epl_get_misc_ec_err_status(struct device *dev, uint8_t err_number, bool *sta
if (miscerr_cfg[err_number].dev_configured == NULL || isAddrMappOk == false)
return -ENODEV;
/* Validate mission error status register mapping */
if (!mission_err_status_va)
return -ENODEV;
dev_str = dev_driver_string(dev);
if (strcmp(dev_str, miscerr_cfg[err_number].dev_configured) != 0)
return -EACCES;
mask = (1U << ((ERROR_INDEX_OFFSET + err_number) % 32U));
mask = (1U << ((error_index_offset + err_number) % 32U));
mission_err_status = readl(mission_err_status_va);
if ((mission_err_status & mask) != 0U)
@@ -182,6 +214,10 @@ int epl_report_misc_ec_error(struct device *dev, uint8_t err_number,
if (status == false)
return -EAGAIN;
/* Validate register mappings before writing */
if (!miscerr_cfg[err_number].err_code_va || !miscerr_cfg[err_number].err_assert_va)
return -ENODEV;
/* Updating error code */
writel(sw_error_code, miscerr_cfg[err_number].err_code_va);
@@ -195,9 +231,39 @@ EXPORT_SYMBOL_GPL(epl_report_misc_ec_error);
int epl_report_error(struct epl_error_report_frame error_report)
{
int ret = -EINVAL;
uint64_t current_timestamp_64;
uint64_t reported_timestamp_64;
if (epl_hsp_v == NULL || hs_state != HANDSHAKE_DONE) {
/* Validate input parameters */
if (epl_hsp_v == NULL || hs_state != HANDSHAKE_DONE)
return -ENODEV;
/* Validate HSP channel */
if (!epl_hsp_v->tx.chan)
return -ENODEV;
/* Plausibility check for timestamp - only if timestamp is not zero */
if (error_report.timestamp != 0) {
/* Get current timestamp (32-bit LSB) and convert to 64-bit for calculations */
current_timestamp_64 = (uint64_t)epl_get_current_timestamp();
reported_timestamp_64 = (uint64_t)error_report.timestamp;
/* Check for timestamp overflow */
/* If current timestamp is less than reported timestamp, assume overflow occurred */
if (current_timestamp_64 < reported_timestamp_64)
current_timestamp_64 += TIMESTAMP_CNT_PERIOD;
/* Validate timestamp range - reject if difference is more than ~90ms */
/* Convert 90ms to counter ticks based on current resolution */
uint64_t valid_range_ticks = TIMESTAMP_VALID_RANGE / timestamp_resolution_ns;
if ((current_timestamp_64 - reported_timestamp_64) > valid_range_ticks) {
dev_warn(&epl_hsp_v->dev, "epl_report_error: Invalid timestamp - difference %llu ticks (%llu ns) exceeds valid range (%llu ticks)\n",
current_timestamp_64 - reported_timestamp_64,
epl_ticks_to_ns(current_timestamp_64 - reported_timestamp_64),
valid_range_ticks);
return -EINVAL;
}
}
ret = mbox_send_message(epl_hsp_v->tx.chan, (void *)&error_report);
@@ -211,12 +277,16 @@ static int epl_client_fsi_pm_notify(u32 state)
int ret;
u32 pdata[4];
/* Validate state parameter */
if (state > EPS_DOS_UNKNOWN)
return -EINVAL;
pdata[0] = PM_STATE_UNI_CODE;
pdata[1] = state;
pdata[2] = state;
pdata[3] = PM_STATE_UNI_CODE;
if (hs_state == HANDSHAKE_DONE)
if (hs_state == HANDSHAKE_DONE && epl_hsp_v && epl_hsp_v->tx.chan)
ret = mbox_send_message(epl_hsp_v->tx.chan, (void *) pdata);
else
ret = -ENODEV;
@@ -228,7 +298,7 @@ static int epl_client_fsi_handshake(void *arg)
{
uint8_t count = 0;
if (epl_hsp_v) {
if (epl_hsp_v && epl_hsp_v->tx.chan) {
int ret;
const uint32_t handshake_data[] = {0x45504C48, 0x414E4453, 0x48414B45,
0x44415441};
@@ -244,12 +314,15 @@ static int epl_client_fsi_handshake(void *arg)
break;
}
} while (count < handshake_retry_count);
} else {
hs_state = HANDSHAKE_FAILED;
dev_warn(&pdev_local->dev, "epl_client: handshake failed - no valid HSP channel\n");
}
if (hs_state == HANDSHAKE_FAILED)
pr_warn("epl_client: handshake with FSI failed\n");
dev_warn(&pdev_local->dev, "epl_client: handshake with FSI failed\n");
else
pr_info("epl_client: handshake done with FSI, try %u\n", count);
dev_info(&pdev_local->dev, "epl_client: handshake done with FSI, try %u\n", count);
return 0;
}
@@ -257,10 +330,14 @@ static int epl_client_fsi_handshake(void *arg)
static int __maybe_unused epl_client_suspend(struct device *dev)
{
int ret = 0;
pr_debug("tegra-epl: suspend called\n");
if (enable_deinit_notify)
dev_dbg(dev, "tegra-epl: suspend called\n");
if (enable_deinit_notify) {
ret = epl_client_fsi_pm_notify(EPS_DOS_SUSPEND);
if (ret < 0)
dev_warn(dev, "tegra-epl: suspend notification failed: %d\n", ret);
}
hs_state = HANDSHAKE_PENDING;
return ret;
@@ -268,15 +345,32 @@ static int __maybe_unused epl_client_suspend(struct device *dev)
static int __maybe_unused epl_client_resume(struct device *dev)
{
pr_debug("tegra-epl: resume called\n");
int ret;
(void)epl_client_fsi_handshake(NULL);
return epl_client_fsi_pm_notify(EPS_DOS_RESUME);
dev_dbg(dev, "tegra-epl: resume called\n");
ret = epl_client_fsi_handshake(NULL);
if (ret < 0) {
dev_warn(dev, "tegra-epl: handshake failed during resume: %d\n", ret);
return ret;
}
/* Only send PM notify if handshake was successful */
if (hs_state == HANDSHAKE_DONE) {
ret = epl_client_fsi_pm_notify(EPS_DOS_RESUME);
if (ret < 0)
dev_warn(dev, "tegra-epl: resume notification failed: %d\n", ret);
} else {
dev_warn(dev, "tegra-epl: skipping resume notification - handshake not successful\n");
}
return ret;
}
static SIMPLE_DEV_PM_OPS(epl_client_pm, epl_client_suspend, epl_client_resume);
static const struct of_device_id epl_client_dt_match[] = {
{ .compatible = "nvidia,tegra234-epl-client"},
{ .compatible = "nvidia,tegra264-epl-client"},
{}
};
@@ -299,6 +393,7 @@ static int epl_register_device(void)
return result;
}
device_file_major_number = result;
dev_class = class_create(device_name);
if (dev_class == NULL) {
pr_err("%s> Could not create class for device\n", device_name);
@@ -333,18 +428,30 @@ static int epl_client_probe(struct platform_device *pdev)
const struct device_node *np = dev->of_node;
int iterator = 0;
char name[32] = "client-misc-sw-generic-err";
bool is_misc_ec_mapped = false;
hs_state = HANDSHAKE_PENDING;
epl_register_device();
ret = epl_register_device();
if (ret < 0) {
dev_err(dev, "Failed to register device: %d\n", ret);
return ret;
}
ret = tegra_hsp_mb_init(dev);
if (ret < 0) {
dev_err(dev, "Failed to initialize HSP mailbox: %d\n", ret);
epl_unregister_device();
return ret;
}
pdev_local = pdev;
for (iterator = 0; iterator < NUM_SW_GENERIC_ERR; iterator++) {
name[26] = (char)(iterator+48U);
name[27] = '\0';
if (of_property_read_string(np, name, &miscerr_cfg[iterator].dev_configured) == 0) {
pr_info("Misc Sw Generic Err #%d configured to client %s\n",
dev_info(dev, "Misc Sw Generic Err #%d configured to client %s\n",
iterator, miscerr_cfg[iterator].dev_configured);
/* Mapping registers to process address space */
@@ -359,9 +466,12 @@ static int epl_client_probe(struct platform_device *pdev)
ret = -1;
dev_err(&pdev->dev, "error in mapping misc err register for err #%d\n",
iterator);
} else {
is_misc_ec_mapped = true;
}
} else {
pr_info("Misc Sw Generic Err %d not configured for any client\n", iterator);
dev_info(dev, "Misc Sw Generic Err %d not configured for any client\n",
iterator);
}
}
@@ -374,16 +484,41 @@ static int epl_client_probe(struct platform_device *pdev)
dev_info(dev, "handshake-retry-count %u\n", handshake_retry_count);
mission_err_status_va = devm_platform_ioremap_resource(pdev, NUM_SW_GENERIC_ERR * 2);
if (IS_ERR(mission_err_status_va)) {
isAddrMappOk = false;
dev_err(&pdev->dev, "error in mapping mission error status register\n");
return PTR_ERR(mission_err_status_va);
if (of_device_is_compatible(np, "nvidia,tegra234-epl-client")) {
error_index_offset = 24;
timestamp_resolution_ns = TEGRA234_TIMESTAMP_RESOLUTION_NS;
} else if (of_device_is_compatible(np, "nvidia,tegra264-epl-client")) {
error_index_offset = 3;
timestamp_resolution_ns = TEGRA264_TIMESTAMP_RESOLUTION_NS;
} else {
dev_err(dev, "tegra-epl: valid dt compatible string not found\n");
ret = -1;
}
if (is_misc_ec_mapped == true) {
mission_err_status_va = devm_platform_ioremap_resource(pdev, NUM_SW_GENERIC_ERR * 2);
if (IS_ERR(mission_err_status_va)) {
isAddrMappOk = false;
dev_err(&pdev->dev, "error in mapping mission error status register\n");
return PTR_ERR(mission_err_status_va);
}
}
if (ret == 0) {
(void) epl_client_fsi_handshake(NULL);
return epl_client_fsi_pm_notify(EPS_DOS_INIT);
ret = epl_client_fsi_handshake(NULL);
if (ret < 0) {
dev_warn(dev, "tegra-epl: handshake failed during probe: %d\n", ret);
return ret;
}
/* Only send PM notify if handshake was successful */
if (hs_state == HANDSHAKE_DONE) {
ret = epl_client_fsi_pm_notify(EPS_DOS_INIT);
if (ret < 0)
dev_warn(dev, "tegra-epl: init notification failed: %d\n", ret);
} else {
dev_warn(dev, "tegra-epl: skipping init notification - handshake not successful\n");
}
}
return ret;
@@ -391,11 +526,15 @@ static int epl_client_probe(struct platform_device *pdev)
static void epl_client_shutdown(struct platform_device *pdev)
{
pr_debug("tegra-epl: shutdown called\n");
int ret;
if (enable_deinit_notify)
if (epl_client_fsi_pm_notify(EPS_DOS_DEINIT) < 0)
pr_err("Unable to send notification to fsi\n");
dev_dbg(&pdev->dev, "tegra-epl: shutdown called\n");
if (enable_deinit_notify) {
ret = epl_client_fsi_pm_notify(EPS_DOS_DEINIT);
if (ret < 0)
dev_err(&pdev->dev, "Unable to send notification to fsi: %d\n", ret);
}
hs_state = HANDSHAKE_PENDING;

View File

@@ -160,15 +160,13 @@ static int tegra_pmc_probe(struct platform_device *pdev)
return err;
}
tegra_pmc_reset_sysfs_init(pmc);
err = tegra_pmc_pinctrl_init(pmc);
if (err)
goto cleanup_sysfs;
return err;
err = tegra_pmc_irq_init(pmc);
if (err < 0)
goto cleanup_sysfs;
return err;
/* Some wakes require specific filter configuration */
if (pmc->soc->set_wake_filters)
@@ -177,11 +175,6 @@ static int tegra_pmc_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, pmc);
return 0;
cleanup_sysfs:
tegra_pmc_reset_sysfs_remove(pmc);
return err;
}
static int __maybe_unused tegra_pmc_resume(struct device *dev)
@@ -204,10 +197,6 @@ static const struct dev_pm_ops tegra_pmc_pm_ops = {
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(tegra_pmc_suspend, tegra_pmc_resume)
};
static const char * const tegra264_reset_levels[] = {
"L0", "L1", "L2", "WARM"
};
#define TEGRA264_IO_PAD(_id, _dpd, _request, _status, _has_int_reg, _e_reg06, _e_reg18, _voltage, _e_33v_ctl, _name) \
((struct tegra_io_pad_soc) { \
.id = (_id), \
@@ -262,11 +251,6 @@ static const struct pinctrl_pin_desc tegra264_pin_descs[] = {
static const struct tegra_pmc_regs tegra264_pmc_regs = {
.scratch0 = 0x684,
.rst_status = 0x4,
.rst_source_shift = 0x2,
.rst_source_mask = 0x1fc,
.rst_level_shift = 0x0,
.rst_level_mask = 0x3,
.aowake_cntrl = 0x0,
.aowake_mask_w = 0x200,
.aowake_status_w = 0x410,
@@ -278,97 +262,6 @@ static const struct tegra_pmc_regs tegra264_pmc_regs = {
.aowake_ctrl = 0x68c,
};
static const char * const tegra264_reset_sources[] = {
"SYS_RESET_N", /* 0 */
"CSDC_RTC_XTAL",
"VREFRO_POWER_BAD",
"SCPM_SOC_XTAL",
"SCPM_RTC_XTAL",
"FMON_32K",
"FMON_OSC",
"POD_RTC",
"POD_IO",
"POD_PLUS_IO_SPLL",
"POD_PLUS_SOC", /* 10 */
"VMON_PLUS_UV",
"VMON_PLUS_OV",
"FUSECRC_FAULT",
"OSC_FAULT",
"BPMP_BOOT_FAULT",
"SCPM_BPMP_CORE_CLK",
"SCPM_PSC_SE_CLK",
"VMON_SOC_MIN",
"VMON_SOC_MAX",
"VMON_MSS_MIN", /* 20 */
"VMON_MSS_MAX",
"POD_PLUS_IO_U4_TSENSE",
"SOC_THERM_FAULT",
"FSI_THERM_FAULT",
"PSC_TURTLE_MODE",
"SCPM_OESP_SE_CLK",
"SCPM_SB_SE_CLK",
"POD_CPU",
"POD_GPU",
"DCLS_GPU", /* 30 */
"POD_MSS",
"FSI_FMON",
"VMON_FSI_MIN",
"VMON_FSI_MAX",
"VMON_CPU_MIN",
"VMON_CPU_MAX",
"NVJTAG_SEL_MONITOR",
"BPMP_FMON",
"AO_WDT_POR",
"BPMP_WDT_POR", /* 40 */
"AO_TKE_WDT_POR",
"RCE0_WDT_POR",
"RCE1_WDT_POR",
"DCE_WDT_POR",
"PVA_0_WDT_POR",
"FSI_R5_WDT_POR",
"FSI_R52_0_WDT_POR",
"FSI_R52_1_WDT_POR",
"FSI_R52_2_WDT_POR",
"FSI_R52_3_WDT_POR", /* 50 */
"TOP_0_WDT_POR",
"TOP_1_WDT_POR",
"TOP_2_WDT_POR",
"APE_C0_WDT_POR",
"APE_C1_WDT_POR",
"GPU_TKE_WDT_POR",
"OESP_WDT_POR",
"SB_WDT_POR",
"PSC_WDT_POR",
"SW_MAIN", /* 60 */
"L0L1_RST_OUT_N",
"FSI_HSM",
"CSITE_SW",
"AO_WDT_DBG",
"BPMP_WDT_DBG",
"AO_TKE_WDT_DBG",
"RCE0_WDT_DBG",
"RCE1_WDT_DBG",
"DCE_WDT_DBG",
"PVA_0_WDT_DBG", /* 70 */
"FSI_R5_WDT_DBG",
"FSI_R52_0_WDT_DBG",
"FSI_R52_1_WDT_DBG",
"FSI_R52_2_WDT_DBG",
"FSI_R52_3_WDT_DBG",
"TOP_0_WDT_DBG",
"TOP_1_WDT_DBG",
"TOP_2_WDT_DBG",
"APE_C0_WDT_DBG",
"APE_C1_WDT_DBG", /* 80 */
"SB_WDT_DBG",
"OESP_WDT_DBG",
"PSC_WDT_DBG",
"TSC_0_WDT_DBG",
"TSC_1_WDT_DBG",
"L2_RST_OUT_N",
"SC7", /* 87 */
};
static const struct tegra_wake_event tegra264_wake_events[] = {
TEGRA_WAKE_IRQ("pmu", 0, 727),
TEGRA_WAKE_IRQ("rtc", 65, 548),
@@ -403,10 +296,10 @@ static const struct tegra_pmc_soc tegra264_pmc_soc = {
.set_wake_filters = tegra186_pmc_set_wake_filters,
.irq_set_wake = tegra186_pmc_irq_set_wake,
.irq_set_type = tegra186_pmc_irq_set_type,
.reset_sources = tegra264_reset_sources,
.num_reset_sources = ARRAY_SIZE(tegra264_reset_sources),
.reset_levels = tegra264_reset_levels,
.num_reset_levels = ARRAY_SIZE(tegra264_reset_levels),
.reset_sources = NULL,
.num_reset_sources = 0,
.reset_levels = NULL,
.num_reset_levels = 0,
.num_wake_events = ARRAY_SIZE(tegra264_wake_events),
.wake_events = tegra264_wake_events,
.max_wake_events = 128,

View File

@@ -2173,11 +2173,15 @@ pl011_set_termios(struct uart_port *port, struct ktermios *termios,
else
clkdiv = 16;
/*
* Ask the core to calculate the divisor for us.
*/
baud = uart_get_baud_rate(port, termios, old, 0,
port->uartclk / clkdiv);
if (uap->vendor->enable_car) {
baud = tty_termios_baud_rate(termios);
clk_set_rate(uap->clk, baud * clkdiv);
}
else {
baud = uart_get_baud_rate(port, termios, old, 0,
port->uartclk / clkdiv);
}
#ifdef CONFIG_DMA_ENGINE
/*
* Adjust RX DMA polling rate with baud rate if not specified.
@@ -2186,10 +2190,12 @@ pl011_set_termios(struct uart_port *port, struct ktermios *termios,
uap->dmarx.poll_rate = DIV_ROUND_UP(10000000, baud);
#endif
if (baud > port->uartclk / 16)
quot = DIV_ROUND_CLOSEST(port->uartclk * 8, baud);
else
quot = DIV_ROUND_CLOSEST(port->uartclk * 4, baud);
if (!uap->vendor->enable_car) {
if (baud > port->uartclk / 16)
quot = DIV_ROUND_CLOSEST(port->uartclk * 8, baud);
else
quot = DIV_ROUND_CLOSEST(port->uartclk * 4, baud);
}
switch (termios->c_cflag & CSIZE) {
case CS5:
@@ -2261,22 +2267,23 @@ pl011_set_termios(struct uart_port *port, struct ktermios *termios,
old_cr &= ~ST_UART011_CR_OVSFACT;
}
/*
* Workaround for the ST Micro oversampling variants to
* increase the bitrate slightly, by lowering the divisor,
* to avoid delayed sampling of start bit at high speeds,
* else we see data corruption.
*/
if (uap->vendor->oversampling) {
if (baud >= 3000000 && baud < 3250000 && quot > 1)
quot -= 1;
else if (baud > 3250000 && quot > 2)
quot -= 2;
if (!uap->vendor->enable_car) {
/*
* Workaround for the ST Micro oversampling variants to
* increase the bitrate slightly, by lowering the divisor,
* to avoid delayed sampling of start bit at high speeds,
* else we see data corruption.
*/
if (uap->vendor->oversampling) {
if (baud >= 3000000 && baud < 3250000 && quot > 1)
quot -= 1;
else if (baud > 3250000 && quot > 2)
quot -= 2;
}
/* Set baud rate */
pl011_write(quot & 0x3f, uap, REG_FBRD);
pl011_write(quot >> 6, uap, REG_IBRD);
}
/* Set baud rate */
pl011_write(quot & 0x3f, uap, REG_FBRD);
pl011_write(quot >> 6, uap, REG_IBRD);
/*
* ----------v----------v----------v----------v-----
* NOTE: REG_LCRH_TX and REG_LCRH_RX MUST BE WRITTEN AFTER

View File

@@ -25,7 +25,7 @@ struct epl_error_report_frame {
uint16_t reporter_id;
};
#ifdef CONFIG_TEGRA_EPL
#if IS_ENABLED(CONFIG_TEGRA_EPL)
/**
* @brief API to check if SW error can be reported via Misc EC
* by reading and checking Misc EC error status register value.

View File

@@ -1437,6 +1437,15 @@ void run_posix_cpu_timers(void)
lockdep_assert_irqs_disabled();
/*
* Ensure that release_task(tsk) can't happen while
* handle_posix_cpu_timers() is running. Otherwise, a concurrent
* posix_cpu_timer_del() may fail to lock_task_sighand(tsk) and
* miss timer->it.cpu.firing != 0.
*/
if (tsk->exit_state)
return;
/*
* If the actual expiry is deferred to task work context and the
* work is already scheduled there is no point to do anything here.