Merge android16-6.12 into android16-6.12-lts

This merges the android16-6.12 branch into the -lts branch, bringing
it up to date with the latest changes there.

It contains the following commits:

* 030e00a2d7 ANDROID: 16K: Use vma_area slab cache for pad VMA
* 2078b86505 ANDROID: ABI: Update pixel symbol list
* b4b7821275 ANDROID: cgroup: Add android_rvh_cgroup_force_kthread_migration
* 94ce385c22 ANDROID: ashmem_rust: return EINVAL on offset > size
* d7b077d5e1 ANDROID: GKI: update symbols for xiaomi.
* 5fdebcb05d ANDROID: Update symbols list for imx
* 9d80e32548 ANDROID: GKI: add allowed list for Exynosauto SoC
* b6652492f9 ANDROID: rust_binder: enforce that overflow in ptr_align is checked
* dc4eb8482b ANDROID: GKI: update the ABI symbol list
* 19151c7f81 ANDROID: freezer: Add vendor hook to freezer for GKI purpose.
* 56508b8d26 ANDROID: freezer: export the freezer_cgrp_subsys for GKI purpose.
* 275bcc2e06 ANDROID: GKI: Update symbol list for xiaomi
* 159dbf7174 ANDROID: vendor_hooks: export cgroup_threadgroup_rwsem
* 764e54b8e9 ANDROID: GKI: update the ABI symbol list
* 5111c39916 ANDROID: cgroup: Add vendor hook for cpuset.
* 478166f75c ANDROID: GKI: Update symbol list for vivo
* 6eca3e3fc3 ANDROID: mm: Reset unused page flag bits on free
* 52e6e0403d ANDROID: ashmem: fix overflow on __page_align
* a25c0fcfff ANDROID: Align x86-64 microdroid cgroup support with aarch64 microdroid
* 986d344a47 ANDROID: Enable memory controller for microdroid
* 1d4f4d446d ANDROID: ABI: Update pixel symbol list
* 1195c63bae ANDROID: vendor_hook: add trace_android_rvh_setscheduler_prio
* 41cfa0c6c7 FROMGIT: wifi: cfg80211: Add support for link reconfiguration negotiation offload to driver
* ada89d7892 FROMGIT: wifi: cfg80211: Improve the documentation for NL80211_CMD_ASSOC_MLO_RECONF
* 5bf4b91e33 Merge tag 'android16-6.12.30_r00' into android16-6.12
* d9fd901baa UPSTREAM: scsi: ufs: core: Don't perform UFS clkscaling during host async scan
* 6c1c18fcb8 ANDROID: GKI: net: add vendor hooks net qos for gki purpose
* 54f2463845 ANDROID: Update symbols list for imx
* f9fbc66f84 ANDROID: GKI: Update symbol list for xiaomi
* d0b1c84377 ANDROID: mm: export mem_cgroup_move_account
* ac2a74af54 ANDROID: mm: add vendor hook to trace shrink_node
* 1da7155f07 ANDROID: mm: add vendor hook to add folio to specific memcg
* 72ccaf358b ANDROID: rust_binder: allow duplicated freeze listener cookies
* ace4b8298e ANDROID: GKI: Update symbol list for xiaomi
* 9651bcfa2a ANDROID: vendor_hook: Add hook is to optimize the time consumption of shrink slab.
* 30b14cdad4 ANDROID: vendor_hooks: add symbols for lazy preemption
* 10fb4d1471 ANDROID: GKI: Add zap_page_range_single to symbol list for qcom
* ac5b13d8cf ANDROID: GKI: Add unisoc modules symbols
* 0697e85aef FROMGIT: genirq/cpuhotplug: Restore affinity even for suspended IRQ
* a5e745236b FROMGIT: genirq/cpuhotplug: Rebalance managed interrupts across multi-CPU hotplug
* 0df9d7574c ANDROID: GKI: Add symbol to symbol list for vivo.
* 19a7e0717d ANDROID: vendor_hooks: add hooks in prctl_set_vma
* 9197be6cf9 ANDROID: GKI: Update symbol list for Amlogic
* 85da719ace ANDROID: GKI: Update symbol list for vivo
* c4de34084f ANDROID: KVM: Don't release the VM memory after it is given to the hyp
* df79f04f71 ANDROID: rust_binder: add Process::lock_with_nodes()
* d9be16a90c ANDROID: rust_binder: freeze notifications
* 882b6c0267 ANDROID: tvgki: disabling CONFIG_DEBUG_INFO_BTF in tvgki
* 272df8fc87 ANDROID: tvgki: disabling CONFIG_DEBUG_FS in tvgki
* c1a3f22e06 ANDROID: CRC / ABI fixups for sched_dl_entity bitfield addition
* 738664c527 FROMGIT: sched/deadline: Less agressive dl_server handling
* 35c421e883 UPSTREAM: gendwarfksyms: Fix structure type overrides
* 511b97bbd3 Revert "FROMLIST: gendwarfksyms: Fix structure type overrides"
* f4feb433ec BACKPORT: erofs: lazily initialize per-CPU workers and CPU hotplug hooks
* d05f5863e8 FROMGIT: scsi: ufs: mcq: Delete ufshcd_release_scsi_cmd() in ufshcd_mcq_abort()
* 3d0eadbedd ANDROID: GKI: vivo add symbols to symbol list
* 0b0551f1f0 BACKPORT: FROMGIT: arm64: Add override for MPAM
* 2de467102e ANDROID: tvgki: disabling CONFIG_KALLSYMS_ALL in tvgki
* 9450d0b07b ANDROID: tvgki: disabling CONFIG_KVM in tvgki
* e328f87bab ANDROID: tvgki: disabling CONFIG_BLK_DEV_NVME in tvgki
* 3031d665e5 ANDROID: tvgki: disabling CONFIG_ANDROID_VENDOR_OEM_DATA in tvgki
* c7ee6ca674 ANDROID: tvgki: disabling CONFIG_ANDROID_KABI_RESERVE in tvgki
* 75547173ff ANDROID: vendor_hooks: add one hook for lazy preemption
* 31e7de7400 ANDROID: disable KABI macros for VDSO and EFI libstub
* 52cc64794a ANDROID: virt: gunyah: Add a gunyah hcall for ioremap
* a7c066a30e ANDROID: rust_binder: access lru list through private data
* cf1ef61244 FROMLIST: KVM: arm64: Restrict FF-A host version renegotiation
* e97af2a2f5 ANDROID: more KABI macros for gendwarfksyms
* 2c69381349 FROMGIT: block: don't use submit_bio_noacct_nocheck in blk_zone_wplug_bio_work
* b1605a0473 ANDROID: Revert "block: Fix a deadlock related freezing zoned storage devices"
* 9538ac3daa FROMLIST: gendwarfksyms: Fix structure type overrides
* 7ef7fa4d24 UPSTREAM: Documentation/kbuild: Add new gendwarfksyms kABI rules
* ce025a6921 UPSTREAM: Documentation/kbuild: Drop section numbers
* b0ac9a4ddf UPSTREAM: gendwarfksyms: Add a kABI rule to override type strings
* 7001a993fa UPSTREAM: gendwarfksyms: Add a kABI rule to override byte_size attributes
* 8b73dde524 UPSTREAM: gendwarfksyms: Clean up kABI rule look-ups
* ede5aa33f4 FROMGIT: platform/chrome: cros_ec_typec: Defer probe on missing EC parent
* c40a59df7f ANDROID: Avoid ABI Breakage
* 5161413c60 UPSTREAM: f2fs: fix to correct check conditions in f2fs_cross_rename
* 2244c336a0 UPSTREAM: f2fs: use d_inode(dentry) cleanup dentry->d_inode
* 63352a68f1 UPSTREAM: f2fs: fix to skip f2fs_balance_fs() if checkpoint is disabled
* 9822518231 UPSTREAM: f2fs: add ckpt_valid_blocks to the section entry
* 75dfdc1ec0 UPSTREAM: f2fs: add a method for calculating the remaining blocks in the current segment in LFS mode.
* b3f29a679d UPSTREAM: f2fs: use vmalloc instead of kvmalloc in .init_{,de}compress_ctx
* b1222fd642 UPSTREAM: f2fs: don't over-report free space or inodes in statvfs
* 5f553da18d UPSTREAM: f2fs: handle error cases of memory donation
* afb757a936 UPSTREAM: f2fs: fix to bail out in get_new_segment()
* f2afd01447 UPSTREAM: f2fs: sysfs: export linear_lookup in features directory
* ff061af575 UPSTREAM: f2fs: sysfs: add encoding_flags entry
* 5c1983aa96 UPSTREAM: f2fs: zone: fix to calculate first_zoned_segno correctly
* 44c66f24dd UPSTREAM: f2fs: fix to do sanity check on sit_bitmap_size
* 03dc067975 UPSTREAM: f2fs: fix to detect gcing page in f2fs_is_cp_guaranteed()
* 7221743bb4 UPSTREAM: f2fs: clean up w/ fscrypt_is_bounce_page()
* 4205e8fd7a UPSTREAM: f2fs: prevent kernel warning due to negative i_nlink from corrupted image
* 74d87b5fd4 UPSTREAM: f2fs: fix to do sanity check on sbi->total_valid_block_count
* 3a1ee29cd6 BACKPORT: f2fs: support to disable linear lookup fallback
* 370cb16c93 UPSTREAM: f2fs: prevent the current section from being selected as a victim during GC
* db2ef83d50 UPSTREAM: f2fs: clean up unnecessary indentation
* 1040597527 UPSTREAM: f2fs: fix to do sanity check on ino and xnid
* f6e898257b UPSTREAM: f2fs: add a fast path in finish_preallocate_blocks()
* 0973ac072c UPSTREAM: f2fs: zone: fix to avoid inconsistence in between SIT and SSA
* 70bc670f4b UPSTREAM: f2fs: fix to set atomic write status more clear
* 9af46e52d6 UPSTREAM: f2fs: remove redundant assignment to variable err
* 8e7a03dcb9 ANDROID: iommu/pkvm-iommu: Fix locking in maple tree usage
* e6f1cbbab1 ANDROID: KVM: arm64: Fix hyp_alloc(0)
* e365f69928 ANDROID: KVM: Send VM availability FF-A direct messages to Trustzone
* 4a5a2ac2af UPSTREAM: PCI/pwrctrl: Cancel outstanding rescan work when unregistering
* 174f671de0 BACKPORT: binder: Create safe versions of binder log files
* d513ac52bc UPSTREAM: binder: Refactor binder_node print synchronization
* 71516273b0 ANDROID: arm64: KVM: iommu: Fix range check for MMIO
* 413ed6ba22 ANDROID: iommu/arm-smmu-v3-kvm: Fix accidental domain ID freeing in free()
* f152c4b68d ANDROID: create initial empty aarch64 allowed breaks file
* 32d79853e4 FROMGIT: scsi: core: ufs: Fix a hang in the error handler
* e29203fd6b ANDROID: GKI: update symbol list for xiaomi
* 4f8031582a ANDROID: GKI: Update symbol list for qcom
* 53e8841d90 ANDROID: Export the symbol of ext_sched_class
* 0f41effe61 ANDROID: GKI: add allowed list for Exynosauto SoC
* e9d50375f7 ANDROID: CONFIG_CRYPTO_SHA1_ARM64_CE=y to GKI and Microdroid kernel
* b4abcf44e1 ANDROID: qcom: Sort the ABI symbol list
* 9fc45b0aca ANDROID: KVM: arm64: Deprecate lazy pte mappings for hyp modules
* 5d45bc0cb2 ANDROID: KVM: arm64: Unmap host stage-2 memory on FF-A lend
* 7368dfbdb8 ANDROID: KVM: arm64: Use the correct handle during ff-a transfer
* 66182cb20f ANDROID: ABI: Update pixel symbol list
* dc7c02e143 ANDROID: Export symbols for vendor hooks
* 5b56ab949a ANDROID: sched: Add vendor hook for util_fits_cpu
* 401be11d93 ANDROID: sched: Add trace_android_rvh_set_user_nice_locked
* 2ab1628371 ANDROID: topology: Add vendor hook for use_amu_fie
* 124897124a ANDROID: Add new hook to enable overriding uclamp_validate()
* 390c2b429d ANDROID: sched: Add vendor hooks for override sugov behavior
* 3e15db3d1b ANDROID: sched: Add vendor hook for rt util update
* c6e1897112 ANDROID: sched/rt: fix rt balance push
* bdad3fc9ed ANDROID: qcom: Update the ABI symbol list
* 04458d9907 ANDROID: KVM: arm64: Advertise support for FFA_RX_RELEASE
* 5498b5fec4 FROMGIT: mm: add CONFIG_PAGE_BLOCK_ORDER to select page block order
* 47d6161bd4 ANDROID: GKI: update rtktv symbol
* 64307f8895 FROMGIT: drm: writeback: Fix use after free in drm_writeback_connector_cleanup()
* 3405224680 ANDROID: Update symbols list for imx
* 1c8ad988ac UPSTREAM: rust: kbuild: do not export generated KASAN ODR symbols
* 30844cb972 ANDROID: Prune default dependencies for kernel_build
* bfd0a3a315 ANDROID: Allow spinlock, trap_handler and iommu for DDK pKVM modules
* cefce17ac4 UPSTREAM: kcov: rust: add flags for KCOV with Rust

Change-Id: I5e77eb9d2a3d0cd5e9a360f0d44ca7fc15ae786c
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Greg Kroah-Hartman, 2025-06-25 09:46:12 +00:00
139 changed files with 7046 additions and 912 deletions


@@ -169,6 +169,9 @@ filegroup(
# cscope files
"cscope.*",
"ncscope.*",
# ABI and symbol list files
"gki/**",
],
),
visibility = ["//visibility:public"],
@@ -1615,11 +1618,15 @@ ddk_headers(
hdrs = [
"arch/arm64/kvm/hyp/include/module/nvhe/define_events.h",
"arch/arm64/kvm/hyp/include/module/nvhe/trace.h",
"arch/arm64/kvm/hyp/include/nvhe/iommu.h",
"arch/arm64/kvm/hyp/include/nvhe/spinlock.h",
"arch/arm64/kvm/hyp/include/nvhe/trap_handler.h",
],
includes = [
# LINT.IfChange(pkvm_includes)
"arch/arm64/kvm/hyp/include/module",
# LINT.ThenChange(/arch/arm64/kvm/hyp/nvhe/Makefile.module:includes)
"arch/arm64/kvm/hyp/include",
],
visibility = ["//visibility:private"],
)


@@ -270,7 +270,7 @@ Description: Shows all enabled kernel features.
inode_checksum, flexible_inline_xattr, quota_ino,
inode_crtime, lost_found, verity, sb_checksum,
casefold, readonly, compression, test_dummy_encryption_v2,
atomic_write, pin_file, encrypted_casefold.
atomic_write, pin_file, encrypted_casefold, linear_lookup.
What: /sys/fs/f2fs/<disk>/inject_rate
Date: May 2016
@@ -846,3 +846,16 @@ Description: For several zoned storage devices, vendors will provide extra space
reserved_blocks. However, it is not enough, since this extra space should
not be shown to users. So, with this new sysfs node, we can hide the space
by subtracting reserved_blocks from total bytes.
What: /sys/fs/f2fs/<disk>/encoding_flags
Date: April 2025
Contact: "Chao Yu" <chao@kernel.org>
Description: This is a read-only entry to show the value of sb.s_encoding_flags, the
value is hexadecimal.
============================ ==========
Flag_Name Flag_Value
============================ ==========
SB_ENC_STRICT_MODE_FL 0x00000001
SB_ENC_NO_COMPAT_FALLBACK_FL 0x00000002
============================ ==========


@@ -449,6 +449,9 @@
arm64.nomops [ARM64] Unconditionally disable Memory Copy and Memory
Set instructions support
arm64.nompam [ARM64] Unconditionally disable Memory Partitioning And
Monitoring support
arm64.nomte [ARM64] Unconditionally disable Memory Tagging Extension
support


@@ -2,8 +2,8 @@
DWARF module versioning
=======================
1. Introduction
===============
Introduction
============
When CONFIG_MODVERSIONS is enabled, symbol versions for modules
are typically calculated from preprocessed source code using the
@@ -14,8 +14,8 @@ selected, **gendwarfksyms** is used instead to calculate symbol versions
from the DWARF debugging information, which contains the necessary
details about the final module ABI.
1.1. Usage
==========
Usage
-----
gendwarfksyms accepts a list of object files on the command line, and a
list of symbol names (one per line) in standard input::
@@ -33,8 +33,8 @@ list of symbol names (one per line) in standard input::
-h, --help Print this message
2. Type information availability
================================
Type information availability
=============================
While symbols are typically exported in the same translation unit (TU)
where they're defined, it's also perfectly fine for a TU to export
@@ -56,8 +56,8 @@ type for calculating symbol versions even if the symbol is defined
elsewhere. The name of the symbol pointer is expected to start with
`__gendwarfksyms_ptr_`, followed by the name of the exported symbol.
3. Symtypes output format
=========================
Symtypes output format
======================
Similarly to genksyms, gendwarfksyms supports writing a symtypes
file for each processed object that contain types for exported
@@ -85,8 +85,8 @@ produces C-style type strings, gendwarfksyms uses the same simple parsed
DWARF format produced by **--dump-dies**, but with type references
instead of fully expanded strings.
4. Maintaining a stable kABI
============================
Maintaining a stable kABI
=========================
Distribution maintainers often need the ability to make ABI compatible
changes to kernel data structures due to LTS updates or backports. Using
@@ -104,8 +104,8 @@ for source code annotation. Note that as these features are only used to
transform the inputs for symbol versioning, the user is responsible for
ensuring that their changes actually won't break the ABI.
4.1. kABI rules
===============
kABI rules
----------
kABI rules allow distributions to fine-tune certain parts
of gendwarfksyms output and thus control how symbol
@@ -125,22 +125,25 @@ the rules. The fields are as follows:
qualified name of the DWARF Debugging Information Entry (DIE).
- `value`: Provides rule-specific data.
The following helper macro, for example, can be used to specify rules
The following helper macros, for example, can be used to specify rules
in the source code::
#define __KABI_RULE(hint, target, value) \
static const char __PASTE(__gendwarfksyms_rule_, \
#define ___KABI_RULE(hint, target, value) \
static const char __PASTE(__gendwarfksyms_rule_, \
__COUNTER__)[] __used __aligned(1) \
__section(".discard.gendwarfksyms.kabi_rules") = \
"1\0" #hint "\0" #target "\0" #value
"1\0" #hint "\0" target "\0" value
#define __KABI_RULE(hint, target, value) \
___KABI_RULE(hint, #target, #value)
Currently, only the rules discussed in this section are supported, but
the format is extensible enough to allow further rules to be added as
need arises.
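To make the encoding concrete, the following is a minimal plain-C sketch of the NUL-separated rule string that `___KABI_RULE` emits into the `.discard.gendwarfksyms.kabi_rules` section: a version byte followed by the hint, target and value fields. The `RULE_ENCODE` and `rule_field` names are illustrative, not part of the kernel macros.

```c
#include <assert.h>
#include <string.h>

/* Sketch of the string ___KABI_RULE builds: a "1" version marker
 * followed by NUL-separated hint, target and value fields. Unlike
 * __KABI_RULE, the arguments here are already string literals. */
#define RULE_ENCODE(hint, target, value) \
    "1\0" hint "\0" target "\0" value

static const char example_rule[] = RULE_ENCODE("declonly", "s", "");

/* Walk the NUL-separated fields the way a rule consumer would. */
static const char *rule_field(const char *rule, int n)
{
    const char *p = rule;

    while (n--)
        p += strlen(p) + 1;
    return p;
}
```

Reading the fields back shows how the parser recovers the version marker, hint, target and (possibly empty) value from one flat byte string.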
4.1.1. Managing definition visibility
=====================================
Managing definition visibility
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A declaration can change into a full definition when additional includes
are pulled into the translation unit. This changes the versions of any
@@ -168,8 +171,8 @@ Example usage::
KABI_DECLONLY(s);
4.1.2. Adding enumerators
=========================
Adding enumerators
~~~~~~~~~~~~~~~~~~
For enums, all enumerators and their values are included in calculating
symbol versions, which becomes a problem if we later need to add more
@@ -223,8 +226,89 @@ Example usage::
KABI_ENUMERATOR_IGNORE(e, C);
KABI_ENUMERATOR_VALUE(e, LAST, 2);
4.3. Adding structure members
=============================
Managing structure size changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A data structure can be partially opaque to modules if its allocation is
handled by the core kernel, and modules only need to access some of its
members. In this situation, it's possible to append new members to the
structure without breaking the ABI, as long as the layout for the original
members remains unchanged.
To append new members, we can hide them from symbol versioning as
described in section :ref:`Hiding members <hiding_members>`, but we can't
hide the increase in structure size. The `byte_size` rule allows us to
override the structure size used for symbol versioning.
The rule fields are expected to be as follows:
- `type`: "byte_size"
- `target`: The fully qualified name of the target data structure
(as shown in **--dump-dies** output).
- `value`: A positive decimal number indicating the structure size
in bytes.
Using the `__KABI_RULE` macro, this rule can be defined as::
#define KABI_BYTE_SIZE(fqn, value) \
__KABI_RULE(byte_size, fqn, value)
Example usage::
struct s {
/* Unchanged original members */
unsigned long a;
void *p;
/* Appended new members */
KABI_IGNORE(0, unsigned long n);
};
KABI_BYTE_SIZE(s, 16);
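The reason the `byte_size` override works can be sketched in plain C: appending a member leaves the offsets of the original members untouched, so only the grown `sizeof` needs to be masked from versioning. The struct names below are illustrative.

```c
#include <assert.h>
#include <stddef.h>

/* Original ABI layout; KABI_BYTE_SIZE(s, 16) keeps versioning pinned
 * to this size on an LP64 target. */
struct s_v1 {
    unsigned long a;
    void *p;
};

/* Updated layout with an appended member: the original members keep
 * their offsets, only the total size grows. */
struct s_v2 {
    unsigned long a;
    void *p;
    unsigned long n; /* appended, hidden from versioning */
};
```

Because only trailing growth is involved, core-kernel code that allocates `struct s_v2` can hand pointers to modules that were built against the `struct s_v1` layout.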
Overriding type strings
~~~~~~~~~~~~~~~~~~~~~~~
In rare situations where distributions must make significant changes to
otherwise opaque data structures that have inadvertently been included
in the published ABI, keeping symbol versions stable using the more
targeted kABI rules can become tedious. The `type_string` rule allows us
to override the full type string for a type or a symbol, and even add
types for versioning that no longer exist in the kernel.
The rule fields are expected to be as follows:
- `type`: "type_string"
- `target`: The fully qualified name of the target data structure
(as shown in **--dump-dies** output) or symbol.
- `value`: A valid type string (as shown in **--symtypes** output)
to use instead of the real type.
Using the `__KABI_RULE` macro, this rule can be defined as::
#define KABI_TYPE_STRING(type, str) \
___KABI_RULE("type_string", type, str)
Example usage::
/* Override type for a structure */
KABI_TYPE_STRING("s#s",
"structure_type s { "
"member base_type int byte_size(4) "
"encoding(5) n "
"data_member_location(0) "
"} byte_size(8)");
/* Override type for a symbol */
KABI_TYPE_STRING("my_symbol", "variable s#s");
The `type_string` rule should be used only as a last resort if maintaining
stable symbol versions cannot be reasonably achieved using other
means. Overriding a type string increases the risk of actual ABI breakages
going unnoticed as it hides all changes to the type.
Adding structure members
------------------------
Perhaps the most common ABI compatible change is adding a member to a
kernel data structure. When changes to a structure are anticipated,
@@ -237,8 +321,8 @@ natural method. This section describes gendwarfksyms support for using
reserved space in data structures and hiding members that don't change
the ABI when calculating symbol versions.
4.3.1. Reserving space and replacing members
============================================
Reserving space and replacing members
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Space is typically reserved for later use by appending integer types, or
arrays, to the end of the data structure, but any type can be used. Each
@@ -276,8 +360,10 @@ The examples include `KABI_(RESERVE|USE|REPLACE)*` macros that help
simplify the process and also ensure the replacement member is correctly
aligned and its size won't exceed the reserved space.
4.3.2. Hiding members
=====================
.. _hiding_members:
Hiding members
~~~~~~~~~~~~~~
Predicting which structures will require changes during the support
timeframe isn't always possible, in which case one might have to resort
@@ -305,4 +391,5 @@ member to a union where one of the fields has a name starting with
unsigned long b;
};
With **--stable**, both versions produce the same symbol version.
With **--stable**, both versions produce the same symbol version. The
examples include a `KABI_IGNORE` macro to simplify the code.
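The union trick behind `KABI_IGNORE` can be sketched in standalone C: the new member shares a union with a placeholder whose name carries the `__kabi_ignored` prefix that gendwarfksyms keys on, so the original member offsets and the overall alignment are preserved. The struct and member names here are illustrative.

```c
#include <assert.h>
#include <stddef.h>

/* Layout before the change. */
struct s_orig {
    unsigned long a;
};

/* Layout after hiding a new member: the union keeps the placement of
 * `a` intact while `b` occupies the slot that --stable ignores via the
 * __kabi_ignored-prefixed placeholder. */
struct s_hidden {
    unsigned long a;
    union {
        unsigned long b;       /* new member */
        char __kabi_ignored0;  /* marker name gendwarfksyms matches */
    };
};
```

With **--stable**, the real kernel macro makes both layouts version identically; the asserts below only demonstrate that the in-memory layout of the original member is unchanged.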


@@ -762,6 +762,7 @@ CONFIG_CRYPTO_LZ4=y
CONFIG_CRYPTO_ZSTD=y
CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_GHASH_ARM64_CE=y
CONFIG_CRYPTO_SHA1_ARM64_CE=y
CONFIG_CRYPTO_SHA2_ARM64_CE=y
CONFIG_CRYPTO_SHA512_ARM64_CE=y
CONFIG_CRYPTO_POLYVAL_ARM64_CE=y


@@ -13,6 +13,8 @@ CONFIG_IKCONFIG_PROC=y
# CONFIG_TIME_NS is not set
# CONFIG_PID_NS is not set
# CONFIG_NET_NS is not set
CONFIG_CGROUPS=y
CONFIG_MEMCG=y
# CONFIG_RD_GZIP is not set
# CONFIG_RD_BZIP2 is not set
# CONFIG_RD_LZMA is not set
@@ -146,6 +148,7 @@ CONFIG_SECURITY_SELINUX=y
CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
CONFIG_CRYPTO_HCTR2=y
CONFIG_CRYPTO_LZO=y
CONFIG_CRYPTO_SHA1_ARM64_CE=y
CONFIG_CRYPTO_SHA2_ARM64_CE=y
CONFIG_CRYPTO_POLYVAL_ARM64_CE=y
CONFIG_CRYPTO_AES_ARM64_CE_BLK=y


@@ -0,0 +1,9 @@
# CONFIG_ANDROID_KABI_RESERVE is not set
# CONFIG_ANDROID_VENDOR_OEM_DATA is not set
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_KVM is not set
# CONFIG_KALLSYMS_ALL is not set
# CONFIG_PAGE_OWNER is not set
# CONFIG_PAGE_PINNER is not set
# CONFIG_DEBUG_FS is not set
# CONFIG_DEBUG_INFO_BTF is not set


@@ -44,6 +44,7 @@ EXPORT_SYMBOL_GPL(arch_is_gunyah_guest);
#define GUNYAH_HYPERCALL_MSGQ_RECV GUNYAH_HYPERCALL(0x801C)
#define GUNYAH_HYPERCALL_ADDRSPACE_MAP GUNYAH_HYPERCALL(0x802B)
#define GUNYAH_HYPERCALL_ADDRSPACE_UNMAP GUNYAH_HYPERCALL(0x802C)
#define GUNYAH_HYPERCALL_ADDRSPACE_CONFIG_VMMIO_RANGE GUNYAH_HYPERCALL(0x8060)
#define GUNYAH_HYPERCALL_MEMEXTENT_DONATE GUNYAH_HYPERCALL(0x8061)
#define GUNYAH_HYPERCALL_VCPU_RUN GUNYAH_HYPERCALL(0x8065)
#define GUNYAH_HYPERCALL_ADDRSPC_MODIFY_PAGES GUNYAH_HYPERCALL(0x8069)
@@ -81,6 +82,34 @@ enum gunyah_error gunyah_hypercall_addrspc_modify_pages(u64 capid, u64 addr,
}
EXPORT_SYMBOL_GPL(gunyah_hypercall_addrspc_modify_pages);
/**
* gunyah_hypercall_addrspc_configure_vmmio_range() - Configure virtual MMIO device regions for
* the address space.
* @capid: Address space capability ID
* @base: Base guest address of MMIO region
* @size: Size of the MMIO region
* @op: Map or Unmap
*/
enum gunyah_error gunyah_hypercall_addrspc_configure_vmmio_range(u64 capid, u64 base,
u64 size, u64 op)
{
struct arm_smccc_1_2_regs args = {
.a0 = GUNYAH_HYPERCALL_ADDRSPACE_CONFIG_VMMIO_RANGE,
.a1 = capid,
.a2 = base,
.a3 = size,
.a4 = op,
/* Reserved. Must be 0 */
.a5 = 0,
};
struct arm_smccc_1_2_regs res;
arm_smccc_1_2_hvc(&args, &res);
return res.a0;
}
EXPORT_SYMBOL_GPL(gunyah_hypercall_addrspc_configure_vmmio_range);
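The argument marshalling above follows the SMCCC 1.2 convention: parameters go into successive registers a1..a4 with a0 holding the function ID and a5 reserved as zero. A self-contained sketch of that packing (with a simplified regs struct, and the raw 0x8060 function number rather than the full `GUNYAH_HYPERCALL()` encoding):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for struct arm_smccc_1_2_regs. */
struct smccc_regs {
    uint64_t a0, a1, a2, a3, a4, a5;
};

/* Function number from the diff; the real code wraps this in
 * GUNYAH_HYPERCALL(), which ORs in the call-convention base bits. */
#define HYPERCALL_CONFIG_VMMIO_RANGE 0x8060u

static struct smccc_regs pack_vmmio_range(uint64_t capid, uint64_t base,
                                          uint64_t size, uint64_t op)
{
    return (struct smccc_regs){
        .a0 = HYPERCALL_CONFIG_VMMIO_RANGE,
        .a1 = capid,
        .a2 = base,
        .a3 = size,
        .a4 = op,
        .a5 = 0, /* reserved, must be zero */
    };
}
```

The real call then hands this register block to `arm_smccc_1_2_hvc()` and reads the result back from a0.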
/**
* gunyah_hypercall_bell_send() - Assert a gunyah doorbell
* @capid: capability ID of the doorbell


@@ -244,21 +244,6 @@
msr spsr_el2, x0
.endm
.macro __init_el2_mpam
/* Memory Partitioning And Monitoring: disable EL2 traps */
mrs x1, id_aa64pfr0_el1
ubfx x0, x1, #ID_AA64PFR0_EL1_MPAM_SHIFT, #4
cbz x0, .Lskip_mpam_\@ // skip if no MPAM
mov_q x0, MPAM2_HOST_FLAGS
msr_s SYS_MPAM2_EL2, x0 // use the default partition
// and disable lower traps
// don't trap access to MPAMSM_EL1
mrs_s x0, SYS_MPAMIDR_EL1
tbz x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@ // skip if no MPAMHCR reg
msr_s SYS_MPAMHCR_EL2, xzr // clear TRAP_MPAMIDR_EL1 -> EL2
.Lskip_mpam_\@:
.endm
/**
* Initialize EL2 registers to sane values. This should be called early on all
* cores that were booted in EL2. Note that everything gets initialised as
@@ -276,7 +261,6 @@
__init_el2_stage2
__init_el2_gicv3
__init_el2_hstr
__init_el2_mpam
__init_el2_nvhe_idregs
__init_el2_cptr
__init_el2_fgt
@@ -322,6 +306,18 @@
#endif
.macro finalise_el2_state
check_override id_aa64pfr0, ID_AA64PFR0_EL1_MPAM_SHIFT, .Linit_mpam_\@, .Lskip_mpam_\@, x1, x2
.Linit_mpam_\@:
mov_q x0, MPAM2_HOST_FLAGS
msr_s SYS_MPAM2_EL2, x0 // use the default partition
// and disable lower traps
// don't trap access to MPAMSM_EL1
mrs_s x0, SYS_MPAMIDR_EL1
tbz x0, #MPAMIDR_EL1_HAS_HCR_SHIFT, .Lskip_mpam_\@ // skip if no MPAMHCR reg
msr_s SYS_MPAMHCR_EL2, xzr // clear TRAP_MPAMIDR_EL1 -> EL2
.Lskip_mpam_\@:
check_override id_aa64pfr0, ID_AA64PFR0_EL1_SVE_SHIFT, .Linit_sve_\@, .Lskip_sve_\@, x1, x2
.Linit_sve_\@: /* SVE register access */


@@ -100,6 +100,7 @@ enum __kvm_host_smccc_func {
__KVM_HOST_SMCCC_FUNC___pkvm_finalize_teardown_vm,
__KVM_HOST_SMCCC_FUNC___pkvm_reclaim_dying_guest_page,
__KVM_HOST_SMCCC_FUNC___pkvm_reclaim_dying_guest_ffa_resources,
__KVM_HOST_SMCCC_FUNC___pkvm_notify_guest_vm_avail,
__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_sync_state,


@@ -941,27 +941,4 @@ enum kvm_pgtable_prot kvm_pgtable_hyp_pte_prot(kvm_pte_t pte);
*/
void kvm_tlb_flush_vmid_range(struct kvm_s2_mmu *mmu,
phys_addr_t addr, size_t size);
/**
* kvm_pgtable_stage2_get_pages() - Raise the refcount for each entry and unmap them.
*
* @pgt: Page-table structure initialised by kvm_pgtable_*_init()
* or a similar initialiser.
* @addr: Input address for the start of the walk.
* @size: Size of the range.
* @mc: Cache of pre-allocated and zeroed memory from which to allocate
* page-table pages.
*/
int kvm_pgtable_stage2_get_pages(struct kvm_pgtable *pgt, u64 addr, u64 size, void *mc);
/**
* kvm_pgtable_stage2_put_pages() - Drop the refcount for each entry. This is the
* opposite of kvm_pgtable_get_pages().
*
* @pgt: Page-table structure initialised by kvm_pgtable_*_init()
* or a similar initialiser.
* @addr: Input address for the start of the walk.
* @size: Size of the range.
*/
int kvm_pgtable_stage2_put_pages(struct kvm_pgtable *pgt, u64 addr, u64 size);
#endif /* __ARM64_KVM_PGTABLE_H__ */


@@ -413,7 +413,8 @@ static inline unsigned long pkvm_selftest_pages(void) { return 32; }
static inline unsigned long pkvm_selftest_pages(void) { return 0; }
#endif
#define KVM_FFA_MBOX_NR_PAGES 1
#define KVM_FFA_MBOX_NR_PAGES 1
#define KVM_FFA_SPM_HANDLE_NR_PAGES 2
/*
* Maximum number of constituents allowed in a descriptor. This number is
@@ -424,6 +425,7 @@ static inline unsigned long pkvm_selftest_pages(void) { return 0; }
static inline unsigned long hyp_ffa_proxy_pages(void)
{
size_t desc_max;
unsigned long num_pages;
/*
* SG_MAX_SEGMENTS is supposed to bound the number of elements in an
@@ -446,7 +448,9 @@ static inline unsigned long hyp_ffa_proxy_pages(void)
KVM_FFA_MAX_NR_CONSTITUENTS * sizeof(struct ffa_mem_region_addr_range);
/* Plus a page each for the hypervisor's RX and TX mailboxes. */
return (2 * KVM_FFA_MBOX_NR_PAGES) + DIV_ROUND_UP(desc_max, PAGE_SIZE);
num_pages = (2 * KVM_FFA_MBOX_NR_PAGES) + DIV_ROUND_UP(desc_max, PAGE_SIZE);
return num_pages;
}
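The refactored return path above just names the intermediate result; the arithmetic itself is two mailbox pages plus however many pages the largest descriptor needs. A hedged standalone sketch of that calculation, with illustrative values instead of the real `KVM_FFA_MBOX_NR_PAGES`, `desc_max` and `PAGE_SIZE`:

```c
#include <assert.h>
#include <stddef.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Pages needed by the FF-A proxy: one page each for the hypervisor's
 * RX and TX mailboxes, plus enough pages to hold the largest
 * memory-region descriptor. */
static size_t ffa_proxy_pages(size_t mbox_pages, size_t desc_max,
                              size_t page_size)
{
    return 2 * mbox_pages + DIV_ROUND_UP(desc_max, page_size);
}
```

For example, a descriptor that just fits one 4 KiB page gives three pages total, and one byte more rounds up to four.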
static inline size_t pkvm_host_sve_state_size(void)


@@ -95,12 +95,14 @@ struct pkvm_sglist_page {
* allows to apply this prot on a range of
* contiguous memory.
* @host_stage2_enable_lazy_pte:
* DEPRECATED
* Unmap a range of memory from the host stage-2,
* leaving the pages host ownership intact. The
* pages will be remapped lazily (subject to the
* usual ownership checks) in response to a
* faulting access from the host.
* @host_stage2_disable_lazy_pte:
* DEPRECATED
* This is the opposite function of
* host_stage2_enable_lazy_pte. Must be called once
* the module is done with the region.


@@ -1177,8 +1177,10 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
cpacr_restore(cpacr);
}
if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0))
if (id_aa64pfr0_mpam(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
info->reg_mpamidr = read_cpuid(MPAMIDR_EL1);
init_cpu_ftr_reg(SYS_MPAMIDR_EL1, info->reg_mpamidr);
}
if (id_aa64pfr1_mte(info->reg_id_aa64pfr1))
init_cpu_ftr_reg(SYS_GMID_EL1, info->reg_gmid);
@@ -1429,7 +1431,8 @@ void update_cpu_features(int cpu,
cpacr_restore(cpacr);
}
if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0)) {
if (id_aa64pfr0_mpam(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
info->reg_mpamidr = read_cpuid(MPAMIDR_EL1);
taint |= check_update_ftr_reg(SYS_MPAMIDR_EL1, cpu,
info->reg_mpamidr, boot->reg_mpamidr);
}


@@ -482,6 +482,12 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
__cpuinfo_store_cpu_32bit(&info->aarch32);
/*
* info->reg_mpamidr deferred to {init,update}_cpu_features because we
* don't want to read it (and trigger a trap on buggy firmware) if
* using an aa64pfr0_el1 override to unconditionally disable MPAM.
*/
if (IS_ENABLED(CONFIG_ARM64_SME) &&
id_aa64pfr1_sme(info->reg_id_aa64pfr1)) {
/*
@@ -492,9 +498,6 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
}
if (id_aa64pfr0_mpam(info->reg_id_aa64pfr0))
info->reg_mpamidr = read_cpuid(MPAMIDR_EL1);
cpuinfo_detect_icache_policy(info);
}


@@ -149,6 +149,7 @@ KVM_NVHE_ALIAS(__hyp_patchable_function_entries_end);
/* pKVM static key */
KVM_NVHE_ALIAS(kvm_protected_mode_initialized);
KVM_NVHE_ALIAS(kvm_ffa_unmap_on_lend);
#endif /* CONFIG_KVM */
#ifdef CONFIG_EFI_ZBOOT


@@ -118,6 +118,7 @@ static const struct ftr_set_desc pfr0 __prel64_initconst = {
.fields = {
FIELD("sve", ID_AA64PFR0_EL1_SVE_SHIFT, pfr0_sve_filter),
FIELD("el0", ID_AA64PFR0_EL1_EL0_SHIFT, NULL),
FIELD("mpam", ID_AA64PFR0_EL1_MPAM_SHIFT, NULL),
{}
},
};
@@ -144,6 +145,7 @@ static const struct ftr_set_desc pfr1 __prel64_initconst = {
FIELD("bt", ID_AA64PFR1_EL1_BT_SHIFT, NULL ),
FIELD("mte", ID_AA64PFR1_EL1_MTE_SHIFT, NULL),
FIELD("sme", ID_AA64PFR1_EL1_SME_SHIFT, pfr1_sme_filter),
FIELD("mpam_frac", ID_AA64PFR1_EL1_MPAM_frac_SHIFT, NULL),
{}
},
};
@@ -234,6 +236,7 @@ static const struct {
{ "rodata=off", "arm64_sw.rodataoff=1" },
{ "arm64.nolva", "id_aa64mmfr2.varange=0" },
{ "arm64.no32bit_el0", "id_aa64pfr0.el0=1" },
{ "arm64.nompam", "id_aa64pfr0.mpam=0 id_aa64pfr1.mpam_frac=0" },
};
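The `arm64.nompam` alias above works by forcing the 4-bit MPAM field of `id_aa64pfr0` (and the `mpam_frac` field of `id_aa64pfr1`) to zero, so later `ubfx`/`check_override` tests see the feature as absent. A standalone sketch of that field masking, assuming the architectural shift of 40 for ID_AA64PFR0_EL1.MPAM:

```c
#include <assert.h>
#include <stdint.h>

/* Bit position assumed for illustration (ID_AA64PFR0_EL1.MPAM, bits 43:40). */
#define MPAM_SHIFT 40

/* Extract a 4-bit id-register field, as ubfx does in the init code. */
static uint64_t field_get(uint64_t reg, unsigned int shift)
{
    return (reg >> shift) & 0xf;
}

/* Apply an override forcing the field to a given value, as the idreg
 * override machinery does for "id_aa64pfr0.mpam=0". */
static uint64_t field_set(uint64_t reg, unsigned int shift, uint64_t val)
{
    return (reg & ~(0xfull << shift)) | ((val & 0xf) << shift);
}
```

With the override applied, `field_get()` returns zero and the MPAM init path is skipped regardless of what the hardware reports.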
static int __init parse_hexdigit(const char *p, u64 *v)


@@ -22,6 +22,8 @@
#include <asm/cputype.h>
#include <asm/topology.h>
#include <trace/hooks/topology.h>
#ifdef CONFIG_ACPI
static bool __init acpi_cpu_is_threaded(int cpu)
{
@@ -154,6 +156,11 @@ static void amu_scale_freq_tick(void)
{
u64 prev_core_cnt, prev_const_cnt;
u64 core_cnt, const_cnt, scale;
bool use_amu_fie = true;
trace_android_vh_use_amu_fie(&use_amu_fie);
if (!use_amu_fie)
return;
prev_const_cnt = this_cpu_read(arch_const_cycles_prev);
prev_core_cnt = this_cpu_read(arch_core_cycles_prev);
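The hook added above follows the usual Android vendor-hook shape: the core path assumes the feature is on and lets a registered hook veto it through a bool out-parameter. A simplified userspace sketch of that pattern (the names here stand in for the real `trace_android_vh_use_amu_fie` machinery):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Vendor-hook shape: the hook may flip the out-parameter. */
typedef void (*use_amu_fie_hook_t)(bool *use_amu_fie);

static use_amu_fie_hook_t registered_hook; /* NULL when no vendor hook */

static void trace_use_amu_fie(bool *use)
{
    if (registered_hook)
        registered_hook(use);
}

/* Mirrors the early-return logic added to amu_scale_freq_tick(). */
static bool amu_tick_would_run(void)
{
    bool use_amu_fie = true;

    trace_use_amu_fie(&use_amu_fie);
    return use_amu_fie; /* false: the tick returns before touching counters */
}

/* Example vendor hook that disables AMU-based frequency invariance. */
static void vendor_disable_amu(bool *use)
{
    *use = false;
}
```

Without a registered hook the default behaviour is unchanged; once a vendor module registers one, it can turn the AMU-based tick off per its own policy.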


@@ -12,6 +12,8 @@
#define FFA_MIN_FUNC_NUM 0x60
#define FFA_MAX_FUNC_NUM 0xFF
#define FFA_INVALID_HANDLE (-1LL)
/*
* "ID value 0 must be returned at the Non-secure physical FF-A instance"
* We share this ID with the host.
@@ -29,6 +31,7 @@ bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id);
bool kvm_guest_ffa_handler(struct pkvm_hyp_vcpu *hyp_vcpu, u64 *exit_code);
struct ffa_mem_transfer *find_transfer_by_handle(u64 ffa_handle, struct kvm_ffa_buffers *buf);
int kvm_dying_guest_reclaim_ffa_resources(struct pkvm_hyp_vm *vm);
int kvm_guest_notify_availability(u32 ffa_handle, struct kvm_ffa_buffers *ffa_buf, bool is_dying);
u32 ffa_get_hypervisor_version(void);
static inline bool is_ffa_call(u64 func_id)


@@ -65,6 +65,8 @@ int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm, u64 size);
int __pkvm_host_test_clear_young_guest(u64 gfn, u64 size, bool mkold, struct pkvm_hyp_vm *vm);
kvm_pte_t __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
int __pkvm_host_split_guest(u64 gfn, u64 size, struct pkvm_hyp_vcpu *vcpu);
int __pkvm_host_donate_ffa(u64 pfn, u64 nr_pages);
int __pkvm_host_reclaim_ffa(u64 pfn, u64 nr_pages);
int __pkvm_guest_share_host(struct pkvm_hyp_vcpu *hyp_vcpu, u64 ipa,
u64 nr_pages, u64 *nr_shared);
int __pkvm_guest_unshare_host(struct pkvm_hyp_vcpu *hyp_vcpu, u64 ipa,
@@ -76,7 +78,6 @@ int __pkvm_guest_relinquish_to_host(struct pkvm_hyp_vcpu *vcpu,
u64 ipa, u64 *ppa);
int __pkvm_use_dma(u64 phys_addr, size_t size, struct pkvm_hyp_vcpu *hyp_vcpu);
int __pkvm_unuse_dma(u64 phys_addr, size_t size, struct pkvm_hyp_vcpu *hyp_vcpu);
int __pkvm_host_lazy_pte(u64 pfn, u64 nr_pages, bool enable);
u64 __pkvm_ptdump_get_config(pkvm_handle_t handle, enum pkvm_ptdump_ops op);
u64 __pkvm_ptdump_walk_range(pkvm_handle_t handle, struct pkvm_ptdump_log_hdr *log_hva);


@@ -48,6 +48,8 @@ struct kvm_ffa_buffers {
void *rx;
u64 rx_ipa;
struct list_head xfer_list;
u64 vm_avail_bitmap;
u64 vm_creating_bitmap;
};
/*
@@ -122,6 +124,7 @@ int __pkvm_start_teardown_vm(pkvm_handle_t handle);
int __pkvm_finalize_teardown_vm(pkvm_handle_t handle);
int __pkvm_reclaim_dying_guest_page(pkvm_handle_t handle, u64 pfn, u64 gfn, u8 order);
int __pkvm_reclaim_dying_guest_ffa_resources(pkvm_handle_t handle);
int __pkvm_notify_guest_vm_avail(pkvm_handle_t handle);
struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
unsigned int vcpu_idx);
@@ -218,5 +221,6 @@ int pkvm_device_register_reset(u64 phys, void *cookie,
int (*cb)(void *cookie, bool host_to_guest));
int pkvm_handle_empty_memcache(struct pkvm_hyp_vcpu *hyp_vcpu, u64 *exit_code);
u32 hyp_vcpu_to_ffa_handle(struct pkvm_hyp_vcpu *hyp_vcpu);
u32 vm_handle_to_ffa_handle(pkvm_handle_t vm_handle);
#endif /* __ARM64_KVM_NVHE_PKVM_H__ */


@@ -551,7 +551,7 @@ void *hyp_alloc(size_t size)
unsigned long chunk_addr;
int missing_map, ret = 0;
size = ALIGN(size, MIN_ALLOC);
size = ALIGN(size ?: MIN_ALLOC, MIN_ALLOC);
hyp_spin_lock(&allocator->lock);


@@ -30,6 +30,7 @@
#include <asm/kvm_hypevents.h>
#include <asm/kvm_pkvm.h>
#include <kvm/arm_hypercalls.h>
#include <asm/virt.h>
#include <nvhe/arm-smccc.h>
#include <nvhe/alloc.h>
@@ -41,6 +42,16 @@
#include <nvhe/spinlock.h>
#define VM_FFA_SUPPORTED(vcpu) ((vcpu)->kvm->arch.pkvm.ffa_support)
#define FFA_INVALID_SPM_HANDLE (BIT(63) - 1)
/* The maximum number of secure partitions that can register for VM availability */
#define FFA_MAX_VM_AVAIL_SPS (8)
#define FFA_VM_AVAIL_SPS_OOM (-2)
#define FFA_PART_VM_AVAIL_MASK (FFA_PARTITION_DIRECT_RECV |\
FFA_PARTITION_HYP_CREATE_VM |\
FFA_PARTITION_HYP_DESTROY_VM)
#define FFA_PART_SUPPORTS_VM_AVAIL (FFA_PART_VM_AVAIL_MASK)
/*
* A buffer to hold the maximum descriptor size we can see from the host,
@@ -60,6 +71,11 @@ struct ffa_translation {
phys_addr_t pa;
};
struct ffa_handle {
u64 handle: 63;
u64 is_lend: 1;
};
/*
* Note that we don't currently lock these buffers explicitly, instead
* relying on the locking of the hyp FFA buffers.
@@ -68,10 +84,25 @@ static struct kvm_ffa_buffers hyp_buffers;
static struct kvm_ffa_buffers host_buffers;
static u32 hyp_ffa_version;
static bool has_version_negotiated;
static bool has_hyp_ffa_buffer_mapped;
static bool has_host_signalled;
static struct ffa_handle *spm_handles, *spm_free_handle;
static u32 num_spm_handles;
static DEFINE_HYP_SPINLOCK(version_lock);
static DEFINE_HYP_SPINLOCK(kvm_ffa_hyp_lock);
/* Secure partitions that can receive VM availability messages */
struct kvm_ffa_vm_avail_sp {
u16 sp_id;
bool wants_create;
bool wants_destroy;
};
static struct kvm_ffa_vm_avail_sp vm_avail_sps[FFA_MAX_VM_AVAIL_SPS];
static int num_vm_avail_sps = -1;
static struct kvm_ffa_buffers *ffa_get_buffers(struct pkvm_hyp_vcpu *hyp_vcpu)
{
if (!hyp_vcpu)
@@ -80,6 +111,57 @@ static struct kvm_ffa_buffers *ffa_get_buffers(struct pkvm_hyp_vcpu *hyp_vcpu)
return &pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu)->ffa_buf;
}
DECLARE_STATIC_KEY_FALSE(kvm_ffa_unmap_on_lend);
static int ffa_host_store_handle(u64 ffa_handle, bool is_lend)
{
u32 i;
struct ffa_handle *free_handle = NULL;
if (!static_branch_unlikely(&kvm_ffa_unmap_on_lend))
return 0;
if (spm_free_handle >= spm_handles &&
spm_free_handle < (spm_handles + num_spm_handles)) {
free_handle = spm_free_handle;
} else {
for (i = 0; i < num_spm_handles; i++)
if (spm_handles[i].handle == FFA_INVALID_SPM_HANDLE)
break;
if (i == num_spm_handles)
return -ENOSPC;
free_handle = &spm_handles[i];
}
free_handle->handle = ffa_handle;
free_handle->is_lend = is_lend;
return 0;
}
static struct ffa_handle *ffa_host_get_handle(u64 ffa_handle)
{
u32 i;
for (i = 0; i < num_spm_handles; i++)
if (spm_handles[i].handle == ffa_handle)
return &spm_handles[i];
return NULL;
}
static int ffa_host_clear_handle(u64 ffa_handle)
{
struct ffa_handle *entry = ffa_host_get_handle(ffa_handle);
if (!entry)
return -EINVAL;
entry->handle = FFA_INVALID_SPM_HANDLE;
spm_free_handle = entry;
return 0;
}
static void ffa_to_smccc_error(struct arm_smccc_res *res, u64 ffa_errno)
{
*res = (struct arm_smccc_res) {
@@ -116,12 +198,27 @@ static int ffa_map_hyp_buffers(u64 ffa_page_count)
{
struct arm_smccc_res res;
/*
* Acquire pairing with the release below: if we observe the flag
* set, the buffer mapping that preceded it is visible too.
*/
if (smp_load_acquire(&has_hyp_ffa_buffer_mapped))
return 0;
arm_smccc_1_1_smc(FFA_FN64_RXTX_MAP,
hyp_virt_to_phys(hyp_buffers.tx),
hyp_virt_to_phys(hyp_buffers.rx),
ffa_page_count,
0, 0, 0, 0,
&res);
if (res.a0 != FFA_SUCCESS)
return res.a2;
/*
* Release pairing with the acquire above: publish the flag only
* after the buffer mapping is visible to other CPUs.
*/
smp_store_release(&has_hyp_ffa_buffer_mapped, true);
return res.a0 == FFA_SUCCESS ? FFA_RET_SUCCESS : res.a2;
}
@@ -130,12 +227,27 @@ static int ffa_unmap_hyp_buffers(void)
{
struct arm_smccc_res res;
/*
* Acquire pairing with the release below: only proceed with the
* unmap if we observe the buffers as mapped.
*/
if (!smp_load_acquire(&has_hyp_ffa_buffer_mapped))
return 0;
arm_smccc_1_1_smc(FFA_RXTX_UNMAP,
HOST_FFA_ID,
0, 0, 0, 0, 0, 0,
&res);
if (res.a0 != FFA_SUCCESS)
return res.a2;
return res.a0 == FFA_SUCCESS ? FFA_RET_SUCCESS : res.a2;
/*
* Release pairing with the acquire above: publish the unmapped
* state only after the RXTX unmap has completed.
*/
smp_store_release(&has_hyp_ffa_buffer_mapped, false);
return FFA_RET_SUCCESS;
}
static void ffa_mem_frag_tx(struct arm_smccc_res *res, u32 handle_lo,
@@ -189,6 +301,156 @@ static void ffa_rx_release(struct arm_smccc_res *res)
res);
}
static int parse_vm_availability_resp(u32 partition_sz, u32 count)
{
struct ffa_partition_info *part;
u32 i, j, off;
bool supports_direct_recv, wants_create, wants_destroy;
if (num_vm_avail_sps >= 0)
return FFA_RET_SUCCESS;
if (num_vm_avail_sps == FFA_VM_AVAIL_SPS_OOM)
return FFA_RET_NO_MEMORY;
num_vm_avail_sps = 0;
for (i = 0; i < count; i++) {
if (check_mul_overflow(i, partition_sz, &off))
return FFA_RET_INVALID_PARAMETERS;
if (off >= KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE)
return FFA_RET_INVALID_PARAMETERS;
part = hyp_buffers.rx + off;
supports_direct_recv = part->properties & FFA_PARTITION_DIRECT_RECV;
wants_create = part->properties & FFA_PARTITION_HYP_CREATE_VM;
wants_destroy = part->properties & FFA_PARTITION_HYP_DESTROY_VM;
if (supports_direct_recv && (wants_create || wants_destroy)) {
/* Check for duplicate SP IDs */
for (j = 0; j < num_vm_avail_sps; j++)
if (vm_avail_sps[j].sp_id == part->id)
break;
if (j == num_vm_avail_sps) {
if (num_vm_avail_sps >= FFA_MAX_VM_AVAIL_SPS) {
/* We ran out of space in the array */
num_vm_avail_sps = FFA_VM_AVAIL_SPS_OOM;
return FFA_RET_NO_MEMORY;
}
vm_avail_sps[num_vm_avail_sps].sp_id = part->id;
vm_avail_sps[num_vm_avail_sps].wants_create = wants_create;
vm_avail_sps[num_vm_avail_sps].wants_destroy = wants_destroy;
num_vm_avail_sps++;
}
}
}
return FFA_RET_SUCCESS;
}
static int kvm_notify_vm_availability(uint16_t vm_handle, struct kvm_ffa_buffers *ffa_buf,
u32 availability_msg)
{
int i;
struct arm_smccc_res res;
u64 avail_bit = availability_msg != FFA_VM_DESTRUCTION_MSG;
for (i = 0; i < num_vm_avail_sps; i++) {
u64 sp_mask = 1UL << i;
u64 avail_value = avail_bit << i;
uint32_t dest = ((uint32_t)vm_avail_sps[i].sp_id << 16) | hyp_smp_processor_id();
if ((ffa_buf->vm_avail_bitmap & sp_mask) == avail_value &&
!(ffa_buf->vm_creating_bitmap & sp_mask))
continue;
if (avail_bit && !vm_avail_sps[i].wants_create) {
/*
* The SP did not ask for creation messages,
* so just mark this VM as available and
* continue
*/
ffa_buf->vm_avail_bitmap |= avail_value;
continue;
} else if (!avail_bit && !vm_avail_sps[i].wants_destroy) {
/*
* The SP did not ask for destruction messages,
* so just mark this VM as not available and
* continue
*/
ffa_buf->vm_avail_bitmap &= ~sp_mask;
continue;
}
/*
* Give the SP some cycles in advance,
* in case it got interrupted the last time.
*
* Some TEEs return NOT_SUPPORTED instead.
* If that happens, ignore the error and continue.
*/
arm_smccc_1_1_smc(FFA_RUN, dest, 0, 0, 0, 0, 0, 0, &res);
if (res.a0 == FFA_ERROR && (int)res.a2 != FFA_RET_NOT_SUPPORTED)
return ffa_to_linux_errno(res.a2);
else if (res.a0 == FFA_INTERRUPT)
return -EINTR;
if (availability_msg == FFA_VM_DESTRUCTION_MSG &&
(ffa_buf->vm_creating_bitmap & sp_mask)) {
/*
* If we sent the initial creation message for this VM
* but never got the success response from the TEE, we
* need to keep trying to create it until it works.
* Otherwise we cannot destroy it.
*
* TODO: this is not triggered for SPs that requested only
* creation messages (but not destruction). In that case,
* we will never retry the creation message, and the SP
* will probably leak its state for the pending VM.
*/
arm_smccc_1_1_smc(FFA_MSG_SEND_DIRECT_REQ, vm_avail_sps[i].sp_id,
FFA_VM_CREATION_MSG, HANDLE_LOW(FFA_INVALID_HANDLE),
HANDLE_HIGH(FFA_INVALID_HANDLE), vm_handle, 0, 0,
&res);
if (res.a0 != FFA_MSG_SEND_DIRECT_RESP)
return -EINVAL;
if (res.a3 != FFA_RET_SUCCESS)
return ffa_to_linux_errno(res.a3);
/* Creation completed successfully, clear the flag */
ffa_buf->vm_creating_bitmap &= ~sp_mask;
}
arm_smccc_1_1_smc(FFA_MSG_SEND_DIRECT_REQ, vm_avail_sps[i].sp_id,
availability_msg, HANDLE_LOW(FFA_INVALID_HANDLE),
HANDLE_HIGH(FFA_INVALID_HANDLE), vm_handle, 0, 0,
&res);
if (res.a0 != FFA_MSG_SEND_DIRECT_RESP)
return -EINVAL;
switch ((int)res.a3) {
case FFA_RET_SUCCESS:
ffa_buf->vm_avail_bitmap &= ~sp_mask;
ffa_buf->vm_avail_bitmap |= avail_value;
ffa_buf->vm_creating_bitmap &= ~sp_mask;
break;
case FFA_RET_INTERRUPTED:
case FFA_RET_RETRY:
if (availability_msg == FFA_VM_CREATION_MSG)
ffa_buf->vm_creating_bitmap |= sp_mask;
fallthrough;
default:
return ffa_to_linux_errno(res.a3);
}
}
return 0;
}
static void do_ffa_rxtx_map(struct arm_smccc_res *res,
struct kvm_cpu_context *ctxt,
struct pkvm_hyp_vcpu *hyp_vcpu)
@@ -372,9 +634,10 @@ out:
}
static u32 __ffa_host_share_ranges(struct ffa_mem_region_addr_range *ranges,
u32 nranges)
u32 nranges, bool is_lend)
{
u32 i;
int ret;
for (i = 0; i < nranges; ++i) {
struct ffa_mem_region_addr_range *range = &ranges[i];
@@ -384,17 +647,27 @@ static u32 __ffa_host_share_ranges(struct ffa_mem_region_addr_range *ranges,
if (!PAGE_ALIGNED(sz))
break;
if (__pkvm_host_share_ffa(pfn, sz / PAGE_SIZE))
if (static_branch_unlikely(&kvm_ffa_unmap_on_lend) && is_lend)
ret = __pkvm_host_donate_ffa(pfn, sz / PAGE_SIZE);
else
ret = __pkvm_host_share_ffa(pfn, sz / PAGE_SIZE);
if (ret)
break;
}
return i;
}
/*
* Verify whether the pages were lent or shared and unshare them with
* FF-A. On success, return the number of *unshared* pages; the is_lend
* argument says whether the range was lent rather than shared.
*/
static u32 __ffa_host_unshare_ranges(struct ffa_mem_region_addr_range *ranges,
u32 nranges)
u32 nranges, bool is_lend)
{
u32 i;
int ret;
for (i = 0; i < nranges; ++i) {
struct ffa_mem_region_addr_range *range = &ranges[i];
@@ -404,7 +677,12 @@ static u32 __ffa_host_unshare_ranges(struct ffa_mem_region_addr_range *ranges,
if (!PAGE_ALIGNED(sz))
break;
if (__pkvm_host_unshare_ffa(pfn, sz / PAGE_SIZE))
if (static_branch_unlikely(&kvm_ffa_unmap_on_lend) && is_lend)
ret = __pkvm_host_reclaim_ffa(pfn, sz / PAGE_SIZE);
else
ret = __pkvm_host_unshare_ffa(pfn, sz / PAGE_SIZE);
if (ret)
break;
}
@@ -489,13 +767,13 @@ unshare:
}
static int ffa_host_share_ranges(struct ffa_mem_region_addr_range *ranges,
u32 nranges)
u32 nranges, bool is_lend)
{
u32 nshared = __ffa_host_share_ranges(ranges, nranges);
u32 nshared = __ffa_host_share_ranges(ranges, nranges, is_lend);
int ret = 0;
if (nshared != nranges) {
WARN_ON(__ffa_host_unshare_ranges(ranges, nshared) != nshared);
WARN_ON(__ffa_host_unshare_ranges(ranges, nshared, is_lend) != nshared);
ret = -EACCES;
}
@@ -503,13 +781,13 @@ static int ffa_host_share_ranges(struct ffa_mem_region_addr_range *ranges,
}
static int ffa_host_unshare_ranges(struct ffa_mem_region_addr_range *ranges,
u32 nranges)
u32 nranges, bool is_lend)
{
u32 nunshared = __ffa_host_unshare_ranges(ranges, nranges);
int ret = 0;
u32 nunshared = __ffa_host_unshare_ranges(ranges, nranges, is_lend);
if (nunshared != nranges) {
WARN_ON(__ffa_host_share_ranges(ranges, nunshared) != nunshared);
WARN_ON(__ffa_host_share_ranges(ranges, nunshared, is_lend) != nunshared);
ret = -EACCES;
}
@@ -528,6 +806,9 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_res *res,
int ret = FFA_RET_INVALID_PARAMETERS;
u32 nr_ranges;
struct kvm_ffa_buffers *ffa_buf;
bool is_lend = false;
u64 host_handle = PACK_HANDLE(handle_lo, handle_hi);
struct ffa_handle *entry;
if (fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE)
goto out;
@@ -544,7 +825,17 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_res *res,
memcpy(buf, ffa_buf->tx, fraglen);
nr_ranges = fraglen / sizeof(*buf);
ret = ffa_host_share_ranges(buf, nr_ranges);
if (static_branch_unlikely(&kvm_ffa_unmap_on_lend)) {
entry = ffa_host_get_handle(host_handle);
if (!entry) {
ffa_to_smccc_error(res, FFA_RET_INVALID_PARAMETERS);
goto out_unlock;
}
is_lend = entry->is_lend;
}
ret = ffa_host_share_ranges(buf, nr_ranges, is_lend);
if (ret) {
/*
* We're effectively aborting the transaction, so we need
@@ -558,7 +849,7 @@ static void do_ffa_mem_frag_tx(struct arm_smccc_res *res,
ffa_mem_frag_tx(res, handle_lo, handle_hi, fraglen, endpoint_id);
if (res->a0 != FFA_SUCCESS && res->a0 != FFA_MEM_FRAG_RX)
WARN_ON(ffa_host_unshare_ranges(buf, nr_ranges));
WARN_ON(ffa_host_unshare_ranges(buf, nr_ranges, is_lend));
out_unlock:
hyp_spin_unlock(&kvm_ffa_hyp_lock);
@@ -609,6 +900,8 @@ static int __do_ffa_mem_xfer(const u64 func_id,
u32 offset, nr_ranges;
int ret = 0;
struct ffa_mem_transfer *transfer = NULL;
u64 ffa_handle;
bool is_lend = func_id == FFA_FN64_MEM_LEND;
if (addr_mbz || npages_mbz || fraglen > len ||
fraglen > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
@@ -705,7 +998,7 @@ static int __do_ffa_mem_xfer(const u64 func_id,
temp_reg->addr_range_cnt * sizeof(struct ffa_mem_region_addr_range));
}
} else
ret = ffa_host_share_ranges(reg->constituents, nr_ranges);
ret = ffa_host_share_ranges(reg->constituents, nr_ranges, is_lend);
if (ret)
goto out_unlock;
@@ -716,15 +1009,22 @@ static int __do_ffa_mem_xfer(const u64 func_id,
if (res->a3 != fraglen)
goto err_unshare;
} else if (res->a0 != FFA_SUCCESS) {
ffa_handle = PACK_HANDLE(res->a1, res->a2);
} else if (res->a0 == FFA_SUCCESS) {
ffa_handle = PACK_HANDLE(res->a2, res->a3);
} else {
goto err_unshare;
}
if (hyp_vcpu && transfer) {
transfer->ffa_handle = PACK_HANDLE(res->a2, res->a3);
transfer->ffa_handle = ffa_handle;
list_add(&transfer->node, &ffa_buf->xfer_list);
} else if (!hyp_vcpu) {
ret = ffa_host_store_handle(ffa_handle, is_lend);
if (ret)
goto err_unshare;
}
hyp_spin_unlock(&kvm_ffa_hyp_lock);
return 0;
out_unlock:
@@ -738,7 +1038,7 @@ err_unshare:
if (hyp_vcpu)
ffa_guest_unshare_ranges(hyp_vcpu, transfer);
else
WARN_ON(ffa_host_unshare_ranges(reg->constituents, nr_ranges));
WARN_ON(ffa_host_unshare_ranges(reg->constituents, nr_ranges, is_lend));
goto out_unlock;
}
@@ -773,6 +1073,8 @@ static void do_ffa_mem_reclaim(struct arm_smccc_res *res,
u64 handle;
struct ffa_mem_transfer *transfer = NULL;
struct kvm_ffa_buffers *ffa_buf;
struct ffa_handle *entry;
bool is_lend = false;
handle = PACK_HANDLE(handle_lo, handle_hi);
@@ -791,6 +1093,16 @@ static void do_ffa_mem_reclaim(struct arm_smccc_res *res,
/* Prevent the host from replicating a transfer handle used by the guest */
WARN_ON(transfer);
if (static_branch_unlikely(&kvm_ffa_unmap_on_lend)) {
entry = ffa_host_get_handle(handle);
if (!entry) {
ret = FFA_RET_INVALID_PARAMETERS;
goto out_unlock;
}
is_lend = entry->is_lend;
}
}
buf = hyp_buffers.tx;
@@ -854,7 +1166,9 @@ out_reclaim:
else {
reg = (void *)buf + offset;
WARN_ON(ffa_host_unshare_ranges(reg->constituents,
reg->addr_range_cnt));
reg->addr_range_cnt, is_lend));
if (static_branch_unlikely(&kvm_ffa_unmap_on_lend))
ffa_host_clear_handle(handle);
}
if (transfer) {
@@ -938,6 +1252,7 @@ static void do_ffa_guest_features(struct arm_smccc_res *res, struct kvm_cpu_cont
case FFA_FN64_MEM_SHARE:
case FFA_MEM_LEND:
case FFA_FN64_MEM_LEND:
case FFA_RX_RELEASE:
ret = FFA_RET_SUCCESS;
goto out_handled;
case FFA_RXTX_MAP:
@@ -961,6 +1276,48 @@ out_handled:
ffa_to_smccc_res_prop(res, ret, prop);
}
static void do_ffa_part_get_response(struct arm_smccc_res *res,
u32 uuid0, u32 uuid1, u32 uuid2,
u32 uuid3, u32 flags, struct kvm_ffa_buffers *ffa_buf)
{
int ret;
u32 count, partition_sz, copy_sz;
arm_smccc_1_1_smc(FFA_PARTITION_INFO_GET, uuid0, uuid1,
uuid2, uuid3, flags, 0, 0,
res);
if (res->a0 != FFA_SUCCESS)
return;
count = res->a2;
if (!count)
return;
if (hyp_ffa_version > FFA_VERSION_1_0) {
/* Get the number of partitions deployed in the system */
if (flags & 0x1)
return;
partition_sz = res->a3;
} else
/* FFA_VERSION_1_0 lacks the size in the response */
partition_sz = FFA_1_0_PARTITON_INFO_SZ;
copy_sz = partition_sz * count;
if (copy_sz > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
ffa_to_smccc_res(res, FFA_RET_ABORTED);
return;
}
ret = parse_vm_availability_resp(partition_sz, count);
if (ret)
ffa_to_smccc_res(res, ret);
if (ffa_buf)
memcpy(ffa_buf->rx, hyp_buffers.rx, copy_sz);
}
static int hyp_ffa_post_init(void)
{
size_t min_rxtx_sz;
@@ -1010,7 +1367,10 @@ static void do_ffa_version(struct arm_smccc_res *res,
hyp_spin_lock(&version_lock);
if (has_version_negotiated) {
res->a0 = hyp_ffa_version;
if (FFA_MINOR_VERSION(ffa_req_version) < FFA_MINOR_VERSION(hyp_ffa_version))
res->a0 = FFA_RET_NOT_SUPPORTED;
else
res->a0 = hyp_ffa_version;
goto unlock;
}
@@ -1031,6 +1391,10 @@ static void do_ffa_version(struct arm_smccc_res *res,
if (hyp_ffa_post_init()) {
res->a0 = FFA_RET_NOT_SUPPORTED;
} else {
/*
* Release ordering: make the negotiated version visible to
* other CPUs before publishing `has_version_negotiated`.
*/
smp_store_release(&has_version_negotiated, true);
res->a0 = hyp_ffa_version;
}
@@ -1065,7 +1429,6 @@ static void do_ffa_part_get(struct arm_smccc_res *res,
DECLARE_REG(u32, uuid2, ctxt, 3);
DECLARE_REG(u32, uuid3, ctxt, 4);
DECLARE_REG(u32, flags, ctxt, 5);
u32 count, partition_sz, copy_sz;
struct kvm_ffa_buffers *ffa_buf;
hyp_spin_lock(&kvm_ffa_hyp_lock);
@@ -1075,35 +1438,7 @@ static void do_ffa_part_get(struct arm_smccc_res *res,
goto out_unlock;
}
arm_smccc_1_1_smc(FFA_PARTITION_INFO_GET, uuid0, uuid1,
uuid2, uuid3, flags, 0, 0,
res);
if (res->a0 != FFA_SUCCESS)
goto out_unlock;
count = res->a2;
if (!count)
goto out_unlock;
if (hyp_ffa_version > FFA_VERSION_1_0) {
/* Get the number of partitions deployed in the system */
if (flags & 0x1)
goto out_unlock;
partition_sz = res->a3;
} else {
/* FFA_VERSION_1_0 lacks the size in the response */
partition_sz = FFA_1_0_PARTITON_INFO_SZ;
}
copy_sz = partition_sz * count;
if (copy_sz > KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) {
ffa_to_smccc_res(res, FFA_RET_ABORTED);
goto out_unlock;
}
memcpy(ffa_buf->rx, hyp_buffers.rx, copy_sz);
do_ffa_part_get_response(res, uuid0, uuid1, uuid2, uuid3, flags, ffa_buf);
out_unlock:
hyp_spin_unlock(&kvm_ffa_hyp_lock);
}
@@ -1128,9 +1463,32 @@ static void do_ffa_direct_msg(struct kvm_cpu_context *ctxt,
__hyp_enter();
}
static int kvm_host_ffa_signal_availability(void)
{
int ret;
struct arm_smccc_res res;
/*
* Map our hypervisor buffers into the SPMD before mapping and
* pinning the host buffers in our own address space.
*/
ret = ffa_map_hyp_buffers((KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE) / FFA_PAGE_SIZE);
if (ret)
return ffa_to_linux_errno(ret);
do_ffa_part_get_response(&res, 0, 0, 0, 0, 0, NULL);
if (res.a0 != FFA_SUCCESS)
return ffa_to_linux_errno(res.a2);
ffa_rx_release(&res);
return kvm_notify_vm_availability(HOST_FFA_ID, &host_buffers, FFA_VM_CREATION_MSG);
}
bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id)
{
struct arm_smccc_res res;
int ret;
/*
* There's no way we can tell what a non-standard SMC call might
@@ -1154,6 +1512,34 @@ bool kvm_host_ffa_handler(struct kvm_cpu_context *host_ctxt, u32 func_id)
goto out_handled;
}
/*
* Notify TZ of host VM creation immediately
* before handling the first non-version SMC/HVC
*/
if (func_id != FFA_VERSION && !has_host_signalled) {
ret = kvm_host_ffa_signal_availability();
if (!ret)
/*
* Remember that the host was signalled so we
* don't notify TZ again.
*/
has_host_signalled = true;
else if (ret == -EAGAIN || ret == -EINTR) {
/*
* Don't retry with interrupts masked as we will spin
* forever.
*/
if (host_ctxt->regs.pstate & PSR_I_BIT) {
ffa_to_smccc_error(&res, FFA_RET_DENIED);
goto out_handled;
}
/* Go back to the host and replay the last instruction */
write_sysreg_el2(read_sysreg_el2(SYS_ELR) - 4, SYS_ELR);
return true;
}
}
switch (func_id) {
case FFA_FEATURES:
if (!do_ffa_features(&res, host_ctxt))
@@ -1370,6 +1756,18 @@ unlock:
return ret;
}
int kvm_guest_notify_availability(u32 ffa_handle, struct kvm_ffa_buffers *ffa_buf, bool is_dying)
{
int ret;
hyp_spin_lock(&kvm_ffa_hyp_lock);
ret = kvm_notify_vm_availability(ffa_handle, ffa_buf,
is_dying ? FFA_VM_DESTRUCTION_MSG : FFA_VM_CREATION_MSG);
hyp_spin_unlock(&kvm_ffa_hyp_lock);
return ret;
}
u32 ffa_get_hypervisor_version(void)
{
u32 version = 0;
@@ -1420,6 +1818,14 @@ int hyp_ffa_init(void *pages)
rx = pages;
pages += KVM_FFA_MBOX_NR_PAGES * PAGE_SIZE;
if (static_branch_unlikely(&kvm_ffa_unmap_on_lend)) {
spm_handles = pages;
pages += KVM_FFA_SPM_HANDLE_NR_PAGES * PAGE_SIZE;
num_spm_handles = KVM_FFA_SPM_HANDLE_NR_PAGES * PAGE_SIZE /
sizeof(struct ffa_handle);
memset(spm_handles, -1, KVM_FFA_SPM_HANDLE_NR_PAGES * PAGE_SIZE);
}
ffa_desc_buf = (struct kvm_ffa_descriptor_buffer) {
.buf = pages,
.len = PAGE_SIZE *


@@ -1469,6 +1469,13 @@ static void handle___pkvm_reclaim_dying_guest_ffa_resources(struct kvm_cpu_conte
cpu_reg(host_ctxt, 1) = __pkvm_reclaim_dying_guest_ffa_resources(handle);
}
static void handle___pkvm_notify_guest_vm_avail(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
cpu_reg(host_ctxt, 1) = __pkvm_notify_guest_vm_avail(handle);
}
static void handle___pkvm_create_private_mapping(struct kvm_cpu_context *host_ctxt)
{
DECLARE_REG(phys_addr_t, phys, host_ctxt, 1);
@@ -1947,6 +1954,7 @@ static const hcall_t host_hcall[] = {
HANDLE_FUNC(__pkvm_finalize_teardown_vm),
HANDLE_FUNC(__pkvm_reclaim_dying_guest_page),
HANDLE_FUNC(__pkvm_reclaim_dying_guest_ffa_resources),
HANDLE_FUNC(__pkvm_notify_guest_vm_avail),
HANDLE_FUNC(__pkvm_vcpu_load),
HANDLE_FUNC(__pkvm_vcpu_put),
HANDLE_FUNC(__pkvm_vcpu_sync_state),


@@ -1638,6 +1638,60 @@ unlock:
return ret;
}
int __pkvm_host_donate_ffa(u64 pfn, u64 nr_pages)
{
u64 size, phys = hyp_pfn_to_phys(pfn), end;
struct kvm_mem_range range;
struct memblock_region *reg;
int ret;
if (check_shl_overflow(nr_pages, PAGE_SHIFT, &size) ||
check_add_overflow(phys, size, &end))
return -EINVAL;
reg = find_mem_range(phys, &range);
if (!reg || !is_in_mem_range(end - 1, &range))
return -EPERM;
host_lock_component();
ret = __host_check_page_state_range(phys, size, PKVM_PAGE_OWNED);
if (ret)
goto unlock;
WARN_ON(host_stage2_set_owner_locked(phys, size, PKVM_ID_FFA));
unlock:
host_unlock_component();
return ret;
}
int __pkvm_host_reclaim_ffa(u64 pfn, u64 nr_pages)
{
u64 size, phys = hyp_pfn_to_phys(pfn), end;
struct memblock_region *reg;
struct kvm_mem_range range;
int ret;
if (check_shl_overflow(nr_pages, PAGE_SHIFT, &size) ||
check_add_overflow(phys, size, &end))
return -EINVAL;
reg = find_mem_range(phys, &range);
if (!reg || !is_in_mem_range(end - 1, &range))
return -EPERM;
host_lock_component();
ret = __host_check_page_state_range(phys, size, PKVM_NOPAGE);
if (ret)
goto unlock;
WARN_ON(host_stage2_set_owner_locked(phys, size, PKVM_ID_HOST));
unlock:
host_unlock_component();
return ret;
}
#define MODULE_PROT_ALLOWLIST (KVM_PGTABLE_PROT_RWX | \
KVM_PGTABLE_PROT_DEVICE | \
KVM_PGTABLE_PROT_NORMAL_NC | \
@@ -1725,45 +1779,6 @@ unlock:
return ret;
}
int __pkvm_host_lazy_pte(u64 pfn, u64 nr_pages, bool enable)
{
u64 size, end, addr = hyp_pfn_to_phys(pfn);
struct memblock_region *reg;
struct kvm_mem_range range;
int ret;
if (check_shl_overflow(nr_pages, PAGE_SHIFT, &size) ||
check_add_overflow(addr, size, &end))
return -EINVAL;
/* Reject MMIO regions */
reg = find_mem_range(addr, &range);
if (!reg || !is_in_mem_range(end - 1, &range))
return -EPERM;
host_lock_component();
ret = ___host_check_page_state_range(addr, size, PKVM_PAGE_OWNED, reg, true);
if (ret)
goto unlock;
if (enable) {
ret = kvm_pgtable_stage2_get_pages(&host_mmu.pgt, addr, size,
&host_s2_pool);
} else {
ret = kvm_pgtable_stage2_put_pages(&host_mmu.pgt, addr, size);
if (ret)
goto unlock;
WARN_ON(host_stage2_idmap_locked(addr, size, PKVM_HOST_MEM_PROT, false));
}
unlock:
host_unlock_component();
return ret;
}
int hyp_pin_shared_mem(void *from, void *to)
{
u64 cur, start = ALIGN_DOWN((u64)from, PAGE_SIZE);
@@ -1888,18 +1903,23 @@ static int __pkvm_use_dma_locked(phys_addr_t phys_addr, size_t size,
if (hyp_vcpu)
return -EINVAL;
ret = ___host_check_page_state_range(phys_addr, size,
PKVM_PAGE_TAINTED,
reg, false);
if (!ret)
return ret;
ret = ___host_check_page_state_range(phys_addr, size,
PKVM_PAGE_OWNED,
reg, false);
if (ret)
return ret;
for (i = 0; i < nr_pages; i++) {
u64 addr = phys_addr + i * PAGE_SIZE;
ret = ___host_check_page_state_range(addr, PAGE_SIZE,
PKVM_PAGE_TAINTED,
reg, false);
/* Page already tainted */
if (!ret)
continue;
ret = ___host_check_page_state_range(addr, PAGE_SIZE,
PKVM_PAGE_OWNED,
reg, false);
if (ret)
return ret;
}
prot = pkvm_mkstate(PKVM_HOST_MMIO_PROT, PKVM_PAGE_TAINTED);
ret = host_stage2_idmap_locked(phys_addr, size, prot, false);
WARN_ON(host_stage2_idmap_locked(phys_addr, size, prot, false));
} else {
/* For VMs, we know if we reach this point the VM has access to the page. */
if (!hyp_vcpu) {


@@ -116,12 +116,20 @@ static void tracing_mod_hyp_printk(u8 fmt_id, u64 a, u64 b, u64 c, u64 d)
static int host_stage2_enable_lazy_pte(u64 pfn, u64 nr_pages)
{
return __pkvm_host_lazy_pte(pfn, nr_pages, true);
/*
* The lazy PTE functionality is deprecated now that the
* host can unmap on FF-A lend.
*/
WARN_ON(1);
return -EPERM;
}
static int host_stage2_disable_lazy_pte(u64 pfn, u64 nr_pages)
{
return __pkvm_host_lazy_pte(pfn, nr_pages, false);
WARN_ON(1);
return -EPERM;
}
static int __hyp_smp_processor_id(void)


@@ -396,6 +396,25 @@ int __pkvm_reclaim_dying_guest_ffa_resources(pkvm_handle_t handle)
return ret;
}
int __pkvm_notify_guest_vm_avail(pkvm_handle_t handle)
{
struct pkvm_hyp_vm *hyp_vm;
int ret = 0;
hyp_read_lock(&vm_table_lock);
hyp_vm = get_vm_by_handle(handle);
if (!hyp_vm || !hyp_vm->kvm.arch.pkvm.ffa_support) {
ret = -EBUSY;
goto unlock;
}
ret = kvm_guest_notify_availability(vm_handle_to_ffa_handle(handle), &hyp_vm->ffa_buf,
hyp_vm->is_dying);
unlock:
hyp_read_unlock(&vm_table_lock);
return ret;
}
struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
unsigned int vcpu_idx)
{
@@ -761,6 +780,18 @@ static void remove_vm_table_entry(pkvm_handle_t handle)
hyp_assert_write_lock_held(&vm_table_lock);
hyp_vm = vm_table[vm_handle_to_idx(handle)];
/*
* If we didn't send the destruction message, leak the vmid to
* prevent others from using it.
*/
if (hyp_vm->kvm.arch.pkvm.ffa_support &&
hyp_vm->ffa_buf.vm_avail_bitmap) {
vm_table[vm_handle_to_idx(handle)] = (void *)0xdeadbeef;
list_del(&hyp_vm->vm_list);
return;
}
vm_table[vm_handle_to_idx(handle)] = NULL;
list_del(&hyp_vm->vm_list);
}
@@ -1804,6 +1835,14 @@ bool kvm_handle_pvm_hvc64(struct kvm_vcpu *vcpu, u64 *exit_code)
return true;
}
u32 vm_handle_to_ffa_handle(pkvm_handle_t vm_handle)
{
if (!vm_handle)
return HOST_FFA_ID;
else
return vm_handle_to_idx(vm_handle) + 1;
}
u32 hyp_vcpu_to_ffa_handle(struct pkvm_hyp_vcpu *hyp_vcpu)
{
pkvm_handle_t vm_handle;
@@ -1812,5 +1851,5 @@ u32 hyp_vcpu_to_ffa_handle(struct pkvm_hyp_vcpu *hyp_vcpu)
return HOST_FFA_ID;
vm_handle = hyp_vcpu->vcpu.kvm->arch.pkvm.handle;
return vm_handle_to_idx(vm_handle) + 1;
return vm_handle_to_ffa_handle(vm_handle);
}


@@ -1263,66 +1263,6 @@ int kvm_pgtable_stage2_annotate(struct kvm_pgtable *pgt, u64 addr, u64 size,
return ret;
}
static int stage2_get_pages_walker(const struct kvm_pgtable_visit_ctx *ctx,
enum kvm_pgtable_walk_flags visit)
{
struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
struct stage2_map_data *data = ctx->arg;
int ret;
ret = stage2_map_walk_leaf(ctx, data);
if (ret)
return ret;
if (ctx->level == KVM_PGTABLE_LAST_LEVEL)
mm_ops->get_page(ctx->ptep);
return 0;
}
int kvm_pgtable_stage2_get_pages(struct kvm_pgtable *pgt, u64 addr, u64 size,
void *mc)
{
struct stage2_map_data map_data = {
.phys = KVM_PHYS_INVALID,
.mmu = pgt->mmu,
.memcache = mc,
.force_pte = true,
};
struct kvm_pgtable_walker walker = {
.cb = stage2_get_pages_walker,
.flags = KVM_PGTABLE_WALK_LEAF,
.arg = &map_data,
};
return kvm_pgtable_walk(pgt, addr, size, &walker);
}
static int stage2_put_pages_walker(const struct kvm_pgtable_visit_ctx *ctx,
enum kvm_pgtable_walk_flags visit)
{
struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
/* get_pages has force_pte */
if (WARN_ON(ctx->level != KVM_PGTABLE_LAST_LEVEL))
return -EINVAL;
mm_ops->put_page(ctx->ptep);
return 0;
}
int kvm_pgtable_stage2_put_pages(struct kvm_pgtable *pgt, u64 addr, u64 size)
{
struct kvm_pgtable_walker walker = {
.cb = stage2_put_pages_walker,
.flags = KVM_PGTABLE_WALK_LEAF,
.arg = pgt->mmu,
};
return kvm_pgtable_walk(pgt, addr, size, &walker);
}
static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
enum kvm_pgtable_walk_flags visit)
{


@@ -4,6 +4,8 @@
* Author: Quentin Perret <qperret@google.com>
*/
#include <linux/arm_ffa.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/initrd.h>
#include <linux/interval_tree_generic.h>
@@ -39,6 +41,15 @@
#define PKVM_DEVICE_ASSIGN_COMPAT "pkvm,device-assignment"
/*
* Retry the VM creation message for the host a maximum total
* number of times, with sleeps in between. For the first few attempts,
* do a faster reschedule instead of a full sleep.
*/
#define VM_AVAILABILITY_FAST_RETRIES 5
#define VM_AVAILABILITY_TOTAL_RETRIES 500
#define VM_AVAILABILITY_RETRY_SLEEP_MS 10
DEFINE_STATIC_KEY_FALSE(kvm_protected_mode_initialized);
static phys_addr_t pvmfw_base;
@@ -194,6 +205,8 @@ static int __init early_hyp_lm_size_mb_cfg(char *arg)
}
early_param("kvm-arm.hyp_lm_size_mb", early_hyp_lm_size_mb_cfg);
DEFINE_STATIC_KEY_FALSE(kvm_ffa_unmap_on_lend);
void __init kvm_hyp_reserve(void)
{
u64 hyp_mem_pages = 0;
@@ -225,6 +238,10 @@ void __init kvm_hyp_reserve(void)
hyp_mem_pages += hyp_vmemmap_pages(STRUCT_HYP_PAGE_SIZE);
hyp_mem_pages += pkvm_selftest_pages();
hyp_mem_pages += hyp_ffa_proxy_pages();
if (static_branch_unlikely(&kvm_ffa_unmap_on_lend))
hyp_mem_pages += KVM_FFA_SPM_HANDLE_NR_PAGES;
hyp_mem_pages++; /* hyp_ppages */
/*
@@ -361,6 +378,52 @@ static int __reclaim_dying_guest_page_call(u64 pfn, u64 gfn, u8 order, void *arg
pfn, gfn, order);
}
/* __pkvm_notify_guest_vm_avail_retry - notify the secure world of the VM state change
* @host_kvm: the kvm structure
* @availability_msg: the VM state that will be notified
*
* Returns: 0 when the notification is sent successfully; -EINTR or -EAGAIN
* if the destruction notification is interrupted and the retries are
* exhausted; or a positive value (the remaining jiffies) when the creation
* notification is sent but interrupted.
*/
static int __pkvm_notify_guest_vm_avail_retry(struct kvm *host_kvm, u32 availability_msg)
{
int ret, retries;
long timeout;
if (!host_kvm->arch.pkvm.ffa_support)
return 0;
for (retries = 0; retries < VM_AVAILABILITY_TOTAL_RETRIES; retries++) {
ret = kvm_call_hyp_nvhe(__pkvm_notify_guest_vm_avail,
host_kvm->arch.pkvm.handle);
if (!ret)
return 0;
else if (ret != -EINTR && ret != -EAGAIN)
return ret;
if (retries < VM_AVAILABILITY_FAST_RETRIES) {
cond_resched();
} else if (availability_msg == FFA_VM_DESTRUCTION_MSG) {
msleep(VM_AVAILABILITY_RETRY_SLEEP_MS);
} else {
timeout = msecs_to_jiffies(VM_AVAILABILITY_RETRY_SLEEP_MS);
timeout = schedule_timeout_killable(timeout);
if (timeout) {
/*
 * The timer did not expire, most likely
 * because the process was killed.
 */
return ret;
}
}
}
return ret;
}
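The retry policy above can be sketched in plain userspace Rust. The error values and the absence of real sleeping are illustrative simplifications, not the kernel's behavior:

```rust
// Sketch of the bounded retry policy: a few fast retries, then slower
// sleeps, up to a total budget. EINTR/EAGAIN stand in for errno codes.
const FAST_RETRIES: u32 = 5;
const TOTAL_RETRIES: u32 = 500;
const EINTR: i32 = -4;
const EAGAIN: i32 = -11;

fn notify_with_retry<F: FnMut() -> Result<(), i32>>(mut call: F) -> Result<(), i32> {
    let mut last = Err(EAGAIN);
    for attempt in 0..TOTAL_RETRIES {
        match call() {
            Ok(()) => return Ok(()),
            // Only "interrupted" and "try again" results are retried.
            Err(e) if e == EINTR || e == EAGAIN => {
                last = Err(e);
                if attempt >= FAST_RETRIES {
                    // The kernel sleeps here (msleep() for destruction,
                    // schedule_timeout_killable() for creation).
                }
            }
            // Anything else is fatal and returned immediately.
            Err(e) => return Err(e),
        }
    }
    last
}

fn main() {
    let mut n = 0;
    // Succeeds on the third attempt.
    assert_eq!(
        notify_with_retry(|| {
            n += 1;
            if n < 3 { Err(EAGAIN) } else { Ok(()) }
        }),
        Ok(())
    );
    // A non-retryable error is not retried.
    assert_eq!(notify_with_retry(|| Err(-22)), Err(-22));
    println!("ok");
}
```

The split between a fast `cond_resched()` phase and a sleeping phase keeps the common case (the secure side responds within a few calls) cheap while still bounding the worst case.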
static void __pkvm_destroy_hyp_vm(struct kvm *host_kvm)
{
struct mm_struct *mm = current->mm;
@@ -369,7 +432,7 @@ static void __pkvm_destroy_hyp_vm(struct kvm *host_kvm)
unsigned long nr_busy;
unsigned long pages;
unsigned long idx;
int ret;
int ret, notify_status;
if (!pkvm_is_hyp_created(host_kvm))
goto out_free;
@@ -406,11 +469,16 @@ retry:
account_locked_vm(mm, pages, false);
notify_status = __pkvm_notify_guest_vm_avail_retry(host_kvm, FFA_VM_DESTRUCTION_MSG);
if (nr_busy) {
do {
ret = kvm_call_hyp_nvhe(__pkvm_reclaim_dying_guest_ffa_resources,
host_kvm->arch.pkvm.handle);
WARN_ON(ret && ret != -EAGAIN);
if (notify_status == -EINTR || notify_status == -EAGAIN)
notify_status = __pkvm_notify_guest_vm_avail_retry(
host_kvm, FFA_VM_DESTRUCTION_MSG);
cond_resched();
} while (ret == -EAGAIN);
goto retry;
@@ -483,7 +551,7 @@ static int __pkvm_create_hyp_vm(struct kvm *host_kvm)
kvm_account_pgtable_pages(pgd, pgd_sz >> PAGE_SHIFT);
return 0;
return __pkvm_notify_guest_vm_avail_retry(host_kvm, FFA_VM_CREATION_MSG);
free_pgd:
free_pages_exact(pgd, pgd_sz);
atomic64_sub(pgd_sz, &host_kvm->stat.protected_hyp_mem);
@@ -1846,3 +1914,11 @@ int pkvm_pgtable_stage2_split(struct kvm_pgtable *pgt, u64 addr, u64 size, void
WARN_ON_ONCE(1);
return -EINVAL;
}
static int early_ffa_unmap_on_lend_cfg(char *arg)
{
static_branch_enable(&kvm_ffa_unmap_on_lend);
return 0;
}
early_param("kvm-arm.ffa-unmap-on-lend", early_ffa_unmap_on_lend_cfg);


@@ -672,6 +672,7 @@ CONFIG_CRYPTO_ZSTD=y
CONFIG_CRYPTO_ANSI_CPRNG=y
CONFIG_CRYPTO_AES_NI_INTEL=y
CONFIG_CRYPTO_POLYVAL_CLMUL_NI=y
CONFIG_CRYPTO_SHA1_SSSE3=y
CONFIG_CRYPTO_SHA256_SSSE3=y
CONFIG_CRYPTO_SHA512_SSSE3=y
CONFIG_CRC_CCITT=y


@@ -15,12 +15,6 @@ CONFIG_UCLAMP_TASK=y
CONFIG_UCLAMP_BUCKETS_COUNT=20
CONFIG_CGROUPS=y
CONFIG_MEMCG=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_SCHED=y
CONFIG_UCLAMP_TASK_GROUP=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
# CONFIG_UTS_NS is not set
# CONFIG_TIME_NS is not set
# CONFIG_PID_NS is not set
@@ -51,11 +45,9 @@ CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_JUMP_LABEL=y
CONFIG_BLK_DEV_ZONED=y
CONFIG_BLK_CGROUP_IOCOST=y
CONFIG_PARTITION_ADVANCED=y
# CONFIG_MSDOS_PARTITION is not set
CONFIG_IOSCHED_BFQ=y
CONFIG_BFQ_GROUP_IOSCHED=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_MISC=y
# CONFIG_SLAB_MERGE_DEFAULT is not set
@@ -215,6 +207,7 @@ CONFIG_CRYPTO_HCTR2=y
CONFIG_CRYPTO_LZO=y
CONFIG_CRYPTO_AES_NI_INTEL=y
CONFIG_CRYPTO_POLYVAL_CLMUL_NI=y
CONFIG_CRYPTO_SHA1_SSSE3=y
CONFIG_CRYPTO_SHA256_SSSE3=y
CONFIG_CRYPTO_SHA512_SSSE3=y
CONFIG_PRINTK_TIME=y


@@ -621,13 +621,6 @@ static inline blk_status_t blk_check_zone_append(struct request_queue *q,
return BLK_STS_OK;
}
/*
* Do not call bio_queue_enter() if the BIO_ZONE_WRITE_PLUGGING flag has been
* set because this causes blk_mq_freeze_queue() to deadlock if
* blk_zone_wplug_bio_work() submits a bio. Calling bio_queue_enter() for bios
* on the plug list is not necessary since a q_usage_counter reference is held
* while a bio is on the plug list.
*/
static void __submit_bio(struct bio *bio)
{
/* If plug is not used, add new plug here to cache nsecs time. */
@@ -640,12 +633,8 @@ static void __submit_bio(struct bio *bio)
if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
blk_mq_submit_bio(bio);
} else {
} else if (likely(bio_queue_enter(bio) == 0)) {
struct gendisk *disk = bio->bi_bdev->bd_disk;
bool zwp = bio_zone_write_plugging(bio);
if (unlikely(!zwp && bio_queue_enter(bio) != 0))
goto finish_plug;
if ((bio->bi_opf & REQ_POLLED) &&
!(disk->queue->limits.features & BLK_FEAT_POLL)) {
@@ -654,12 +643,9 @@ static void __submit_bio(struct bio *bio)
} else {
disk->fops->submit_bio(bio);
}
if (!zwp)
blk_queue_exit(disk->queue);
blk_queue_exit(disk->queue);
}
finish_plug:
blk_finish_plug(&plug);
}


@@ -1318,7 +1318,6 @@ again:
spin_unlock_irqrestore(&zwplug->lock, flags);
bdev = bio->bi_bdev;
submit_bio_noacct_nocheck(bio);
/*
* blk-mq devices will reuse the extra reference on the request queue
@@ -1326,8 +1325,12 @@ again:
* path for BIO-based devices will not do that. So drop this extra
* reference here.
*/
if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO))
if (bdev_test_flag(bdev, BD_HAS_SUBMIT_BIO)) {
bdev->bd_disk->fops->submit_bio(bio);
blk_queue_exit(bdev->bd_disk->queue);
} else {
blk_mq_submit_bio(bio);
}
put_zwplug:
/* Drop the reference we took in disk_zone_wplug_schedule_bio_work(). */


@@ -6645,10 +6645,10 @@ static void print_binder_transaction_ilocked(struct seq_file *m,
}
static void print_binder_work_ilocked(struct seq_file *m,
struct binder_proc *proc,
const char *prefix,
const char *transaction_prefix,
struct binder_work *w)
struct binder_proc *proc,
const char *prefix,
const char *transaction_prefix,
struct binder_work *w, bool hash_ptrs)
{
struct binder_node *node;
struct binder_transaction *t;
@@ -6671,9 +6671,15 @@ static void print_binder_work_ilocked(struct seq_file *m,
break;
case BINDER_WORK_NODE:
node = container_of(w, struct binder_node, work);
seq_printf(m, "%snode work %d: u%016llx c%016llx\n",
prefix, node->debug_id,
(u64)node->ptr, (u64)node->cookie);
if (hash_ptrs)
seq_printf(m, "%snode work %d: u%p c%p\n",
prefix, node->debug_id,
(void *)(long)node->ptr,
(void *)(long)node->cookie);
else
seq_printf(m, "%snode work %d: u%016llx c%016llx\n",
prefix, node->debug_id,
(u64)node->ptr, (u64)node->cookie);
break;
case BINDER_WORK_DEAD_BINDER:
seq_printf(m, "%shas dead binder\n", prefix);
@@ -6698,7 +6704,7 @@ static void print_binder_work_ilocked(struct seq_file *m,
static void print_binder_thread_ilocked(struct seq_file *m,
struct binder_thread *thread,
int print_always)
bool print_always, bool hash_ptrs)
{
struct binder_transaction *t;
struct binder_work *w;
@@ -6728,14 +6734,16 @@ static void print_binder_thread_ilocked(struct seq_file *m,
}
list_for_each_entry(w, &thread->todo, entry) {
print_binder_work_ilocked(m, thread->proc, " ",
" pending transaction", w);
" pending transaction",
w, hash_ptrs);
}
if (!print_always && m->count == header_pos)
m->count = start_pos;
}
static void print_binder_node_nilocked(struct seq_file *m,
struct binder_node *node)
struct binder_node *node,
bool hash_ptrs)
{
struct binder_ref *ref;
struct binder_work *w;
@@ -6743,8 +6751,13 @@ static void print_binder_node_nilocked(struct seq_file *m,
count = hlist_count_nodes(&node->refs);
seq_printf(m, " node %d: u%016llx c%016llx pri %d:%d hs %d hw %d ls %d lw %d is %d iw %d tr %d",
node->debug_id, (u64)node->ptr, (u64)node->cookie,
if (hash_ptrs)
seq_printf(m, " node %d: u%p c%p", node->debug_id,
(void *)(long)node->ptr, (void *)(long)node->cookie);
else
seq_printf(m, " node %d: u%016llx c%016llx", node->debug_id,
(u64)node->ptr, (u64)node->cookie);
seq_printf(m, " pri %d:%d hs %d hw %d ls %d lw %d is %d iw %d tr %d",
node->sched_policy, node->min_priority,
node->has_strong_ref, node->has_weak_ref,
node->local_strong_refs, node->local_weak_refs,
@@ -6758,7 +6771,8 @@ static void print_binder_node_nilocked(struct seq_file *m,
if (node->proc) {
list_for_each_entry(w, &node->async_todo, entry)
print_binder_work_ilocked(m, node->proc, " ",
" pending async transaction", w);
" pending async transaction",
w, hash_ptrs);
}
}
@@ -6774,8 +6788,54 @@ static void print_binder_ref_olocked(struct seq_file *m,
binder_node_unlock(ref->node);
}
static void print_binder_proc(struct seq_file *m,
struct binder_proc *proc, int print_all)
/**
* print_next_binder_node_ilocked() - Print binder_node from a locked list
* @m: struct seq_file for output via seq_printf()
* @proc: struct binder_proc we hold the inner_proc_lock to (if any)
* @node: struct binder_node to print fields of
* @prev_node: struct binder_node we hold a temporary reference to (if any)
* @hash_ptrs: whether to hash @node's binder_uintptr_t fields
*
* Helper function to handle synchronization around printing a struct
* binder_node while iterating through @proc->nodes or the dead nodes list.
* Caller must hold either @proc->inner_lock (for live nodes) or
* binder_dead_nodes_lock. This lock will be released during the body of this
* function, but it will be reacquired before returning to the caller.
*
* Return: pointer to the struct binder_node we hold a tmpref on
*/
static struct binder_node *
print_next_binder_node_ilocked(struct seq_file *m, struct binder_proc *proc,
struct binder_node *node,
struct binder_node *prev_node, bool hash_ptrs)
{
/*
* Take a temporary reference on the node so that it isn't freed while
* we print it.
*/
binder_inc_node_tmpref_ilocked(node);
/*
* Live nodes need to drop the inner proc lock and dead nodes need to
* drop the binder_dead_nodes_lock before trying to take the node lock.
*/
if (proc)
binder_inner_proc_unlock(proc);
else
spin_unlock(&binder_dead_nodes_lock);
if (prev_node)
binder_put_node(prev_node);
binder_node_inner_lock(node);
print_binder_node_nilocked(m, node, hash_ptrs);
binder_node_inner_unlock(node);
if (proc)
binder_inner_proc_lock(proc);
else
spin_lock(&binder_dead_nodes_lock);
return node;
}
static void print_binder_proc(struct seq_file *m, struct binder_proc *proc,
bool print_all, bool hash_ptrs)
{
struct binder_work *w;
struct rb_node *n;
@@ -6789,31 +6849,19 @@ static void print_binder_proc(struct seq_file *m,
header_pos = m->count;
binder_inner_proc_lock(proc);
for (n = rb_first(&proc->threads); n != NULL; n = rb_next(n))
for (n = rb_first(&proc->threads); n; n = rb_next(n))
print_binder_thread_ilocked(m, rb_entry(n, struct binder_thread,
rb_node), print_all);
rb_node), print_all, hash_ptrs);
for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n)) {
for (n = rb_first(&proc->nodes); n; n = rb_next(n)) {
struct binder_node *node = rb_entry(n, struct binder_node,
rb_node);
if (!print_all && !node->has_async_transaction)
continue;
/*
* take a temporary reference on the node so it
* survives and isn't removed from the tree
* while we print it.
*/
binder_inc_node_tmpref_ilocked(node);
/* Need to drop inner lock to take node lock */
binder_inner_proc_unlock(proc);
if (last_node)
binder_put_node(last_node);
binder_node_inner_lock(node);
print_binder_node_nilocked(m, node);
binder_node_inner_unlock(node);
last_node = node;
binder_inner_proc_lock(proc);
last_node = print_next_binder_node_ilocked(m, proc, node,
last_node,
hash_ptrs);
}
binder_inner_proc_unlock(proc);
if (last_node)
@@ -6821,24 +6869,24 @@ static void print_binder_proc(struct seq_file *m,
if (print_all) {
binder_proc_lock(proc);
for (n = rb_first(&proc->refs_by_desc);
n != NULL;
n = rb_next(n))
for (n = rb_first(&proc->refs_by_desc); n; n = rb_next(n))
print_binder_ref_olocked(m, rb_entry(n,
struct binder_ref,
rb_node_desc));
struct binder_ref,
rb_node_desc));
binder_proc_unlock(proc);
}
binder_alloc_print_allocated(m, &proc->alloc);
binder_inner_proc_lock(proc);
list_for_each_entry(w, &proc->todo, entry)
print_binder_work_ilocked(m, proc, " ",
" pending transaction", w);
" pending transaction", w,
hash_ptrs);
trace_android_vh_binder_check_special_work(proc, &special_list);
if (special_list) {
list_for_each_entry(w, special_list, entry)
print_binder_work_ilocked(m, proc, " ",
" special pending transaction", w);
" special pending transaction", w,
hash_ptrs);
}
list_for_each_entry(w, &proc->delivered_death, entry) {
seq_puts(m, " has delivered dead binder\n");
@@ -6972,7 +7020,7 @@ static void print_binder_proc_stats(struct seq_file *m,
count = 0;
ready_threads = 0;
binder_inner_proc_lock(proc);
for (n = rb_first(&proc->threads); n != NULL; n = rb_next(n))
for (n = rb_first(&proc->threads); n; n = rb_next(n))
count++;
list_for_each_entry(thread, &proc->waiting_threads, waiting_thread_node)
@@ -6986,7 +7034,7 @@ static void print_binder_proc_stats(struct seq_file *m,
ready_threads,
free_async_space);
count = 0;
for (n = rb_first(&proc->nodes); n != NULL; n = rb_next(n))
for (n = rb_first(&proc->nodes); n; n = rb_next(n))
count++;
binder_inner_proc_unlock(proc);
seq_printf(m, " nodes: %d\n", count);
@@ -6994,7 +7042,7 @@ static void print_binder_proc_stats(struct seq_file *m,
strong = 0;
weak = 0;
binder_proc_lock(proc);
for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
for (n = rb_first(&proc->refs_by_desc); n; n = rb_next(n)) {
struct binder_ref *ref = rb_entry(n, struct binder_ref,
rb_node_desc);
count++;
@@ -7021,7 +7069,7 @@ static void print_binder_proc_stats(struct seq_file *m,
print_binder_stats(m, " ", &proc->stats);
}
static int state_show(struct seq_file *m, void *unused)
static void print_binder_state(struct seq_file *m, bool hash_ptrs)
{
struct binder_proc *proc;
struct binder_node *node;
@@ -7032,31 +7080,40 @@ static int state_show(struct seq_file *m, void *unused)
spin_lock(&binder_dead_nodes_lock);
if (!hlist_empty(&binder_dead_nodes))
seq_puts(m, "dead nodes:\n");
hlist_for_each_entry(node, &binder_dead_nodes, dead_node) {
/*
* take a temporary reference on the node so it
* survives and isn't removed from the list
* while we print it.
*/
node->tmp_refs++;
spin_unlock(&binder_dead_nodes_lock);
if (last_node)
binder_put_node(last_node);
binder_node_lock(node);
print_binder_node_nilocked(m, node);
binder_node_unlock(node);
last_node = node;
spin_lock(&binder_dead_nodes_lock);
}
hlist_for_each_entry(node, &binder_dead_nodes, dead_node)
last_node = print_next_binder_node_ilocked(m, NULL, node,
last_node,
hash_ptrs);
spin_unlock(&binder_dead_nodes_lock);
if (last_node)
binder_put_node(last_node);
mutex_lock(&binder_procs_lock);
hlist_for_each_entry(proc, &binder_procs, proc_node)
print_binder_proc(m, proc, 1);
print_binder_proc(m, proc, true, hash_ptrs);
mutex_unlock(&binder_procs_lock);
}
static void print_binder_transactions(struct seq_file *m, bool hash_ptrs)
{
struct binder_proc *proc;
seq_puts(m, "binder transactions:\n");
mutex_lock(&binder_procs_lock);
hlist_for_each_entry(proc, &binder_procs, proc_node)
print_binder_proc(m, proc, false, hash_ptrs);
mutex_unlock(&binder_procs_lock);
}
static int state_show(struct seq_file *m, void *unused)
{
print_binder_state(m, false);
return 0;
}
static int state_hashed_show(struct seq_file *m, void *unused)
{
print_binder_state(m, true);
return 0;
}
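The effect of the `hash_ptrs` flag can be illustrated in plain Rust. The mixer below is an arbitrary placeholder (the kernel's `%p` uses a boot-time-keyed siphash), and all names here are hypothetical:

```rust
// Format a node line either with full 64-bit values (state) or with
// hashed values (state_hashed) so raw addresses are not exposed.
fn format_node(debug_id: u32, ptr: u64, cookie: u64, hash_ptrs: bool) -> String {
    // Placeholder mixer, NOT the kernel's pointer hash.
    let h = |v: u64| v.wrapping_mul(0x9E37_79B9_7F4A_7C15).rotate_left(17);
    if hash_ptrs {
        format!("node {}: u{:x} c{:x}", debug_id, h(ptr), h(cookie))
    } else {
        format!("node {}: u{:016x} c{:016x}", debug_id, ptr, cookie)
    }
}

fn main() {
    let raw = format_node(1, 0xdead_beef, 0xf00d, false);
    let hashed = format_node(1, 0xdead_beef, 0xf00d, true);
    assert!(raw.contains("deadbeef"));
    // The hashed variant renders different values.
    assert_ne!(raw, hashed);
    println!("ok");
}
```

Keeping both formatting paths in one helper, selected by a flag, is what lets `state` and `state_hashed` share all of the iteration and locking code above.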
@@ -7078,14 +7135,13 @@ static int stats_show(struct seq_file *m, void *unused)
static int transactions_show(struct seq_file *m, void *unused)
{
struct binder_proc *proc;
seq_puts(m, "binder transactions:\n");
mutex_lock(&binder_procs_lock);
hlist_for_each_entry(proc, &binder_procs, proc_node)
print_binder_proc(m, proc, 0);
mutex_unlock(&binder_procs_lock);
print_binder_transactions(m, false);
return 0;
}
static int transactions_hashed_show(struct seq_file *m, void *unused)
{
print_binder_transactions(m, true);
return 0;
}
@@ -7098,7 +7154,7 @@ static int proc_show(struct seq_file *m, void *unused)
hlist_for_each_entry(itr, &binder_procs, proc_node) {
if (itr->pid == pid) {
seq_puts(m, "binder proc state:\n");
print_binder_proc(m, itr, 1);
print_binder_proc(m, itr, true, false);
}
}
mutex_unlock(&binder_procs_lock);
@@ -7165,8 +7221,10 @@ const struct file_operations binder_fops = {
};
DEFINE_SHOW_ATTRIBUTE(state);
DEFINE_SHOW_ATTRIBUTE(state_hashed);
DEFINE_SHOW_ATTRIBUTE(stats);
DEFINE_SHOW_ATTRIBUTE(transactions);
DEFINE_SHOW_ATTRIBUTE(transactions_hashed);
DEFINE_SHOW_ATTRIBUTE(transaction_log);
const struct binder_debugfs_entry binder_debugfs_entries[] = {
@@ -7176,6 +7234,12 @@ const struct binder_debugfs_entry binder_debugfs_entries[] = {
.fops = &state_fops,
.data = NULL,
},
{
.name = "state_hashed",
.mode = 0444,
.fops = &state_hashed_fops,
.data = NULL,
},
{
.name = "stats",
.mode = 0444,
@@ -7188,6 +7252,12 @@ const struct binder_debugfs_entry binder_debugfs_entries[] = {
.fops = &transactions_fops,
.data = NULL,
},
{
.name = "transactions_hashed",
.mode = 0444,
.fops = &transactions_hashed_fops,
.data = NULL,
},
{
.name = "transaction_log",
.mode = 0444,


@@ -36,6 +36,8 @@ pub_no_prefix!(
BR_DECREFS,
BR_DEAD_BINDER,
BR_CLEAR_DEATH_NOTIFICATION_DONE,
BR_FROZEN_BINDER,
BR_CLEAR_FREEZE_NOTIFICATION_DONE,
);
pub_no_prefix!(
@@ -57,6 +59,9 @@ pub_no_prefix!(
BC_REQUEST_DEATH_NOTIFICATION,
BC_CLEAR_DEATH_NOTIFICATION,
BC_DEAD_BINDER_DONE,
BC_REQUEST_FREEZE_NOTIFICATION,
BC_CLEAR_FREEZE_NOTIFICATION,
BC_FREEZE_NOTIFICATION_DONE,
);
pub_no_prefix!(
@@ -141,6 +146,8 @@ decl_wrapper!(BinderWriteRead, uapi::binder_write_read);
decl_wrapper!(BinderVersion, uapi::binder_version);
decl_wrapper!(BinderFrozenStatusInfo, uapi::binder_frozen_status_info);
decl_wrapper!(BinderFreezeInfo, uapi::binder_freeze_info);
decl_wrapper!(BinderFrozenStateInfo, uapi::binder_frozen_state_info);
decl_wrapper!(BinderHandleCookie, uapi::binder_handle_cookie);
decl_wrapper!(ExtendedError, uapi::binder_extended_error);
impl BinderVersion {


@@ -0,0 +1,388 @@
// SPDX-License-Identifier: GPL-2.0
// Copyright (C) 2024 Google LLC.
use kernel::{
alloc::AllocError,
list::ListArc,
prelude::*,
rbtree::{self, RBTreeNodeReservation},
seq_file::SeqFile,
seq_print,
sync::{Arc, UniqueArc},
uaccess::UserSliceReader,
};
use crate::{
defs::*, node::Node, process::Process, thread::Thread, BinderReturnWriter, DArc, DLArc,
DTRWrap, DeliverToRead,
};
#[derive(Clone, Copy, Eq, PartialEq, Ord, PartialOrd)]
pub(crate) struct FreezeCookie(u64);
/// Represents a listener for changes to the frozen state of a process.
pub(crate) struct FreezeListener {
/// The node we are listening for.
pub(crate) node: DArc<Node>,
/// The cookie of this freeze listener.
cookie: FreezeCookie,
/// What value of `is_frozen` did we most recently tell userspace about?
last_is_frozen: Option<bool>,
/// We sent a `BR_FROZEN_BINDER` and we are waiting for `BC_FREEZE_NOTIFICATION_DONE` before
/// sending any other commands.
is_pending: bool,
/// Userspace sent `BC_CLEAR_FREEZE_NOTIFICATION` and we need to reply with
/// `BR_CLEAR_FREEZE_NOTIFICATION_DONE` as soon as possible. If `is_pending` is set, then we
/// must wait for it to be unset before we can reply.
is_clearing: bool,
/// Number of cleared duplicates that can't be deleted until userspace sends
/// `BC_FREEZE_NOTIFICATION_DONE`.
num_pending_duplicates: u64,
/// Number of cleared duplicates that can be deleted.
num_cleared_duplicates: u64,
}
impl FreezeListener {
/// Is it okay to create a new listener with the same cookie as this one for the provided node?
///
/// Under some scenarios, userspace may delete a freeze listener and immediately recreate it
/// with the same cookie. This results in duplicate listeners. To avoid issues with ambiguity,
/// we allow this only if the new listener is for the same node, and we also require that the
/// old listener has already been cleared.
fn allow_duplicate(&self, node: &DArc<Node>) -> bool {
Arc::ptr_eq(&self.node, node) && self.is_clearing
}
}
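The duplicate-cookie rule can be demonstrated with std stand-ins; `BTreeMap` and `Rc` replace the driver's red-black tree and `Arc`, and all names are illustrative:

```rust
use std::collections::BTreeMap;
use std::rc::Rc;

// Minimal stand-ins for the driver types (illustrative only).
struct Node;
struct Listener { node: Rc<Node>, is_clearing: bool }

impl Listener {
    // Reuse of a cookie is only allowed for the same node, and only
    // once the old listener has been cleared by userspace.
    fn allow_duplicate(&self, node: &Rc<Node>) -> bool {
        Rc::ptr_eq(&self.node, node) && self.is_clearing
    }
}

fn try_register(
    map: &mut BTreeMap<u64, Listener>,
    cookie: u64,
    node: Rc<Node>,
) -> Result<(), &'static str> {
    if let Some(existing) = map.get(&cookie) {
        if !existing.allow_duplicate(&node) {
            return Err("duplicate cookie");
        }
    }
    map.insert(cookie, Listener { node, is_clearing: false });
    Ok(())
}

fn main() {
    let node = Rc::new(Node);
    let other = Rc::new(Node);
    let mut map = BTreeMap::new();
    assert!(try_register(&mut map, 7, node.clone()).is_ok());
    // Same cookie, old listener not yet clearing: rejected.
    assert!(try_register(&mut map, 7, node.clone()).is_err());
    map.get_mut(&7).unwrap().is_clearing = true;
    // Cleared but for a different node: still rejected.
    assert!(try_register(&mut map, 7, other.clone()).is_err());
    // Cleared and same node: the replacement is allowed.
    assert!(try_register(&mut map, 7, node.clone()).is_ok());
    println!("ok");
}
```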
type UninitFM = UniqueArc<core::mem::MaybeUninit<DTRWrap<FreezeMessage>>>;
/// Represents a notification that the freeze state has changed.
pub(crate) struct FreezeMessage {
cookie: FreezeCookie,
}
kernel::list::impl_list_arc_safe! {
impl ListArcSafe<0> for FreezeMessage {
untracked;
}
}
impl FreezeMessage {
fn new(flags: kernel::alloc::Flags) -> Result<UninitFM, AllocError> {
UniqueArc::new_uninit(flags)
}
fn init(ua: UninitFM, cookie: FreezeCookie) -> DLArc<FreezeMessage> {
match ua.pin_init_with(DTRWrap::new(FreezeMessage { cookie })) {
Ok(msg) => ListArc::from(msg),
Err(err) => match err {},
}
}
}
impl DeliverToRead for FreezeMessage {
fn do_work(
self: DArc<Self>,
thread: &Thread,
writer: &mut BinderReturnWriter<'_>,
) -> Result<bool> {
let _removed_listener;
let mut node_refs = thread.process.node_refs.lock();
let Some(mut freeze_entry) = node_refs.freeze_listeners.find_mut(&self.cookie) else {
return Ok(true);
};
let freeze = freeze_entry.get_mut();
if freeze.num_cleared_duplicates > 0 {
freeze.num_cleared_duplicates -= 1;
drop(node_refs);
writer.write_code(BR_CLEAR_FREEZE_NOTIFICATION_DONE)?;
writer.write_payload(&self.cookie.0)?;
return Ok(true);
}
if freeze.is_pending {
return Ok(true);
}
if freeze.is_clearing {
_removed_listener = freeze_entry.remove_node();
drop(node_refs);
writer.write_code(BR_CLEAR_FREEZE_NOTIFICATION_DONE)?;
writer.write_payload(&self.cookie.0)?;
Ok(true)
} else {
let is_frozen = freeze.node.owner.inner.lock().is_frozen;
if freeze.last_is_frozen == Some(is_frozen) {
return Ok(true);
}
let mut state_info = BinderFrozenStateInfo::default();
state_info.is_frozen = is_frozen as u32;
state_info.cookie = freeze.cookie.0;
freeze.is_pending = true;
freeze.last_is_frozen = Some(is_frozen);
drop(node_refs);
writer.write_code(BR_FROZEN_BINDER)?;
writer.write_payload(&state_info)?;
// BR_FROZEN_BINDER notifications can cause transactions
Ok(false)
}
}
fn cancel(self: DArc<Self>) {}
fn on_thread_selected(&self, _thread: &Thread) {}
fn should_sync_wakeup(&self) -> bool {
false
}
#[inline(never)]
fn debug_print(&self, m: &SeqFile, prefix: &str, _tprefix: &str) -> Result<()> {
seq_print!(m, "{}has frozen binder\n", prefix);
Ok(())
}
}
impl FreezeListener {
pub(crate) fn on_process_exit(&self, proc: &Arc<Process>) {
if !self.is_clearing {
self.node.remove_freeze_listener(proc);
}
}
}
impl Process {
pub(crate) fn request_freeze_notif(
self: &Arc<Self>,
reader: &mut UserSliceReader,
) -> Result<()> {
let hc = reader.read::<BinderHandleCookie>()?;
let handle = hc.handle;
let cookie = FreezeCookie(hc.cookie);
let msg = FreezeMessage::new(GFP_KERNEL)?;
let alloc = RBTreeNodeReservation::new(GFP_KERNEL)?;
let mut node_refs_guard = self.node_refs.lock();
let node_refs = &mut *node_refs_guard;
let Some(info) = node_refs.by_handle.get_mut(&handle) else {
pr_warn!("BC_REQUEST_FREEZE_NOTIFICATION invalid ref {}\n", handle);
return Err(EINVAL);
};
if info.freeze().is_some() {
pr_warn!("BC_REQUEST_FREEZE_NOTIFICATION already set\n");
return Err(EINVAL);
}
let node_ref = info.node_ref();
let freeze_entry = node_refs.freeze_listeners.entry(cookie);
if let rbtree::Entry::Occupied(ref dupe) = freeze_entry {
if !dupe.get().allow_duplicate(&node_ref.node) {
pr_warn!("BC_REQUEST_FREEZE_NOTIFICATION duplicate cookie\n");
return Err(EINVAL);
}
}
// All failure paths must come before this call, and all modifications must come after this
// call.
node_ref.node.add_freeze_listener(self, GFP_KERNEL)?;
match freeze_entry {
rbtree::Entry::Vacant(entry) => {
entry.insert(
FreezeListener {
cookie,
node: node_ref.node.clone(),
last_is_frozen: None,
is_pending: false,
is_clearing: false,
num_pending_duplicates: 0,
num_cleared_duplicates: 0,
},
alloc,
);
}
rbtree::Entry::Occupied(mut dupe) => {
let dupe = dupe.get_mut();
if dupe.is_pending {
dupe.num_pending_duplicates += 1;
} else {
dupe.num_cleared_duplicates += 1;
}
dupe.last_is_frozen = None;
dupe.is_pending = false;
dupe.is_clearing = false;
}
}
*info.freeze() = Some(cookie);
let msg = FreezeMessage::init(msg, cookie);
drop(node_refs_guard);
let _ = self.push_work(msg);
Ok(())
}
pub(crate) fn freeze_notif_done(self: &Arc<Self>, reader: &mut UserSliceReader) -> Result<()> {
let cookie = FreezeCookie(reader.read()?);
let alloc = FreezeMessage::new(GFP_KERNEL)?;
let mut node_refs_guard = self.node_refs.lock();
let node_refs = &mut *node_refs_guard;
let Some(freeze) = node_refs.freeze_listeners.get_mut(&cookie) else {
pr_warn!("BC_FREEZE_NOTIFICATION_DONE {:016x} not found\n", cookie.0);
return Err(EINVAL);
};
let mut clear_msg = None;
if freeze.num_pending_duplicates > 0 {
clear_msg = Some(FreezeMessage::init(alloc, cookie));
freeze.num_pending_duplicates -= 1;
freeze.num_cleared_duplicates += 1;
} else {
if !freeze.is_pending {
pr_warn!(
"BC_FREEZE_NOTIFICATION_DONE {:016x} not pending\n",
cookie.0
);
return Err(EINVAL);
}
if freeze.is_clearing {
// Immediately send another FreezeMessage for BR_CLEAR_FREEZE_NOTIFICATION_DONE.
clear_msg = Some(FreezeMessage::init(alloc, cookie));
}
freeze.is_pending = false;
}
drop(node_refs_guard);
if let Some(clear_msg) = clear_msg {
let _ = self.push_work(clear_msg);
}
Ok(())
}
pub(crate) fn clear_freeze_notif(self: &Arc<Self>, reader: &mut UserSliceReader) -> Result<()> {
let hc = reader.read::<BinderHandleCookie>()?;
let handle = hc.handle;
let cookie = FreezeCookie(hc.cookie);
let alloc = FreezeMessage::new(GFP_KERNEL)?;
let mut node_refs_guard = self.node_refs.lock();
let node_refs = &mut *node_refs_guard;
let Some(info) = node_refs.by_handle.get_mut(&handle) else {
pr_warn!("BC_CLEAR_FREEZE_NOTIFICATION invalid ref {}\n", handle);
return Err(EINVAL);
};
let Some(info_cookie) = info.freeze() else {
pr_warn!("BC_CLEAR_FREEZE_NOTIFICATION freeze notification not active\n");
return Err(EINVAL);
};
if *info_cookie != cookie {
pr_warn!("BC_CLEAR_FREEZE_NOTIFICATION freeze notification cookie mismatch\n");
return Err(EINVAL);
}
let Some(listener) = node_refs.freeze_listeners.get_mut(&cookie) else {
pr_warn!("BC_CLEAR_FREEZE_NOTIFICATION invalid cookie {:016x}\n", cookie.0);
return Err(EINVAL);
};
listener.is_clearing = true;
listener.node.remove_freeze_listener(self);
*info.freeze() = None;
let mut msg = None;
if !listener.is_pending {
msg = Some(FreezeMessage::init(alloc, cookie));
}
drop(node_refs_guard);
if let Some(msg) = msg {
let _ = self.push_work(msg);
}
Ok(())
}
fn get_freeze_cookie(&self, node: &DArc<Node>) -> Option<FreezeCookie> {
let node_refs = &mut *self.node_refs.lock();
let handle = node_refs.by_node.get(&node.global_id())?;
let node_ref = node_refs.by_handle.get_mut(handle)?;
*node_ref.freeze()
}
/// Creates a vector of every freeze listener on this process.
///
/// Returns pairs of the remote process listening for notifications and the local node it is
/// listening on.
fn find_freeze_recipients(&self) -> Result<KVVec<(DArc<Node>, Arc<Process>)>, AllocError> {
// Defined before `inner` to drop after releasing spinlock if `push_within_capacity` fails.
let mut node_proc_pair;
// We pre-allocate space for up to 8 recipients before we take the spinlock. However, if
// the allocation fails, use a vector with a capacity of zero instead of failing. After
// all, there might not be any freeze listeners, in which case this operation could still
// succeed.
let mut recipients =
KVVec::with_capacity(8, GFP_KERNEL).unwrap_or_else(|_err| KVVec::new());
let mut inner = self.lock_with_nodes();
let mut curr = inner.nodes.cursor_front();
while let Some(cursor) = curr {
let (key, node) = cursor.current();
let key = *key;
let list = node.freeze_list(&inner.inner);
let len = list.len();
if recipients.spare_capacity_mut().len() < len {
drop(inner);
recipients.reserve(len, GFP_KERNEL)?;
inner = self.lock_with_nodes();
// Find the node we were looking at and try again. If the set of nodes was changed,
// then just proceed to the next node. This is ok because we don't guarantee the
// inclusion of nodes that are added or removed in parallel with this operation.
curr = inner.nodes.cursor_lower_bound(&key);
continue;
}
for proc in list {
node_proc_pair = (node.clone(), proc.clone());
recipients
.push_within_capacity(node_proc_pair)
.map_err(|_| {
pr_err!(
"push_within_capacity failed even though we checked the capacity\n"
);
AllocError
})?;
}
curr = cursor.move_next();
}
Ok(recipients)
}
/// Prepare allocations for sending freeze messages.
pub(crate) fn prepare_freeze_messages(&self) -> Result<FreezeMessages, AllocError> {
let recipients = self.find_freeze_recipients()?;
let mut batch = KVVec::with_capacity(recipients.len(), GFP_KERNEL)?;
for (node, proc) in recipients {
let Some(cookie) = proc.get_freeze_cookie(&node) else {
// If the freeze listener was removed in the meantime, just discard the
// notification.
continue;
};
let msg_alloc = FreezeMessage::new(GFP_KERNEL)?;
let msg = FreezeMessage::init(msg_alloc, cookie);
batch.push((proc, msg), GFP_KERNEL)?;
}
Ok(FreezeMessages { batch })
}
}
pub(crate) struct FreezeMessages {
batch: KVVec<(Arc<Process>, DLArc<FreezeMessage>)>,
}
impl FreezeMessages {
pub(crate) fn send_messages(self) {
for (proc, msg) in self.batch {
let _ = proc.push_work(msg);
}
}
}


@@ -21,6 +21,8 @@ use crate::{
BinderReturnWriter, DArc, DLArc, DTRWrap, DeliverToRead,
};
use core::mem;
mod wrapper;
pub(crate) use self::wrapper::CritIncrWrapper;
@@ -165,6 +167,8 @@ struct NodeInner {
/// List of processes to deliver a notification to when this node is destroyed (usually due to
/// the process dying).
death_list: List<DTRWrap<NodeDeath>, 1>,
/// List of processes to deliver freeze notifications to.
freeze_list: KVVec<Arc<Process>>,
/// The number of active BR_INCREFS or BR_ACQUIRE operations. (should be maximum two)
///
/// If this is non-zero, then we postpone any BR_RELEASE or BR_DECREFS notifications until the
@@ -175,8 +179,8 @@ struct NodeInner {
refs: List<NodeRefInfo, { NodeRefInfo::LIST_NODE }>,
}
use core::mem::offset_of;
use kernel::bindings::rb_node_layout;
use mem::offset_of;
pub(crate) const NODE_LAYOUT: rb_node_layout = rb_node_layout {
arc_offset: Arc::<Node>::DATA_OFFSET + offset_of!(DTRWrap<Node>, wrapped),
debug_id: offset_of!(Node, debug_id),
@@ -187,7 +191,7 @@ pub(crate) const NODE_LAYOUT: rb_node_layout = rb_node_layout {
pub(crate) struct Node {
pub(crate) debug_id: usize,
ptr: u64,
cookie: u64,
pub(crate) cookie: u64,
pub(crate) flags: u32,
pub(crate) owner: Arc<Process>,
inner: LockedBy<NodeInner, ProcessInner>,
@@ -232,6 +236,7 @@ impl Node {
},
death_list: List::new(),
oneway_todo: List::new(),
freeze_list: KVVec::new(),
has_oneway_transaction: false,
active_inc_refs: 0,
refs: List::new(),
@@ -680,6 +685,55 @@ impl Node {
Ok(true)
}
pub(crate) fn add_freeze_listener(
&self,
process: &Arc<Process>,
flags: kernel::alloc::Flags,
) -> Result {
let mut vec_alloc = KVVec::<Arc<Process>>::new();
loop {
let mut guard = self.owner.inner.lock();
// Do not check for `guard.dead`. The `dead` flag that matters here is that of
// the owner of the listener, not the target.
let inner = self.inner.access_mut(&mut guard);
let len = inner.freeze_list.len();
if len >= inner.freeze_list.capacity() {
if len >= vec_alloc.capacity() {
drop(guard);
vec_alloc = KVVec::with_capacity((1 + len).next_power_of_two(), flags)?;
continue;
}
mem::swap(&mut inner.freeze_list, &mut vec_alloc);
for elem in vec_alloc.drain_all() {
inner.freeze_list.push_within_capacity(elem)?;
}
}
inner.freeze_list.push_within_capacity(process.clone())?;
return Ok(());
}
}
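The allocate-outside-the-lock pattern in `add_freeze_listener()` can be sketched with std types; here a `Mutex<Vec>` stands in for the `LockedBy`-protected list, and the capacity policy mirrors the code above:

```rust
use std::sync::Mutex;

// Push to a lock-protected Vec without allocating while the lock is
// held: grow a spare buffer outside the lock, then move the elements.
fn push_no_alloc_under_lock<T>(list: &Mutex<Vec<T>>, item: T) {
    let mut spare: Vec<T> = Vec::new();
    loop {
        let mut guard = list.lock().unwrap();
        let len = guard.len();
        if len >= guard.capacity() {
            if len >= spare.capacity() {
                // Spare buffer too small: drop the lock and allocate.
                // The length is rechecked on the next iteration, since
                // it may have changed while the lock was released.
                drop(guard);
                spare = Vec::with_capacity((len + 1).next_power_of_two());
                continue;
            }
            // Move the existing elements into the larger spare buffer,
            // then install it as the live list.
            spare.append(&mut *guard);
            std::mem::swap(&mut *guard, &mut spare);
        }
        // Guaranteed not to reallocate: capacity was checked above.
        guard.push(item);
        return;
    }
}

fn main() {
    let list = Mutex::new(Vec::new());
    for i in 0..100 {
        push_no_alloc_under_lock(&list, i);
    }
    let v = list.lock().unwrap();
    assert_eq!(v.len(), 100);
    assert_eq!(v[99], 99);
    println!("ok");
}
```

In the kernel version the same dance matters because the lock is a spinlock, so the `flags`-controlled allocation must never happen while it is held.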
pub(crate) fn remove_freeze_listener(&self, p: &Arc<Process>) {
let _unused_capacity;
let mut guard = self.owner.inner.lock();
let inner = self.inner.access_mut(&mut guard);
let len = inner.freeze_list.len();
inner.freeze_list.retain(|proc| !Arc::ptr_eq(proc, p));
if len == inner.freeze_list.len() {
pr_warn!(
"Could not remove freeze listener for {}\n",
p.task.pid_in_current_ns()
);
}
if inner.freeze_list.is_empty() {
_unused_capacity = mem::replace(&mut inner.freeze_list, KVVec::new());
}
}
pub(crate) fn freeze_list<'a>(&'a self, guard: &'a ProcessInner) -> &'a [Arc<Process>] {
&self.inner.access(guard).freeze_list
}
}
impl DeliverToRead for Node {


@@ -93,6 +93,7 @@ impl Shrinker {
unsafe {
ptr::addr_of_mut!((*shrinker).count_objects).write(Some(rust_shrink_count));
ptr::addr_of_mut!((*shrinker).scan_objects).write(Some(rust_shrink_scan));
ptr::addr_of_mut!((*shrinker).private_data).write(self.list_lru.get().cast());
}
// SAFETY: The new shrinker has been fully initialized, so we can register it.
@@ -655,11 +656,10 @@ unsafe extern "C" fn rust_shrink_count(
shrink: *mut bindings::shrinker,
_sc: *mut bindings::shrink_control,
) -> c_ulong {
// SAFETY: This method is only used with the `Shrinker` type, and the cast is valid since
// `shrinker` is the first field of a #[repr(C)] struct.
let shrinker = unsafe { &*shrink.cast::<Shrinker>() };
// SAFETY: We can access our own private data.
let list_lru = unsafe { (*shrink).private_data.cast::<bindings::list_lru>() };
// SAFETY: Accessing the lru list is okay. Just an FFI call.
unsafe { bindings::list_lru_count(shrinker.list_lru.get()) }
unsafe { bindings::list_lru_count(list_lru) }
}
#[no_mangle]
@@ -667,9 +667,8 @@ unsafe extern "C" fn rust_shrink_scan(
shrink: *mut bindings::shrinker,
sc: *mut bindings::shrink_control,
) -> c_ulong {
// SAFETY: This method is only used with the `Shrinker` type, and the cast is valid since
// `shrinker` is the first field of a #[repr(C)] struct.
let shrinker = unsafe { &*shrink.cast::<Shrinker>() };
// SAFETY: We can access our own private data.
let list_lru = unsafe { (*shrink).private_data.cast::<bindings::list_lru>() };
// SAFETY: Caller guarantees that it is safe to read this field.
let nr_to_scan = unsafe { (*sc).nr_to_scan };
// SAFETY: Accessing the lru list is okay. Just an FFI call.
@@ -684,7 +683,7 @@ unsafe extern "C" fn rust_shrink_scan(
}
bindings::list_lru_walk(
shrinker.list_lru.get(),
list_lru,
Some(rust_shrink_free_page_wrap),
ptr::null_mut(),
nr_to_scan,


@@ -27,7 +27,8 @@ use kernel::{
seq_print,
sync::poll::PollTable,
sync::{
lock::Guard, Arc, ArcBorrow, CondVar, CondVarTimeoutResult, Mutex, SpinLock, UniqueArc,
lock::{spinlock::SpinLockBackend, Guard},
Arc, ArcBorrow, CondVar, CondVarTimeoutResult, Mutex, SpinLock, UniqueArc,
},
task::Task,
types::{ARef, Either},
@@ -50,6 +51,10 @@ use crate::{
BinderfsProcFile, DArc, DLArc, DTRWrap, DeliverToRead,
};
#[path = "freeze.rs"]
mod freeze;
use self::freeze::{FreezeCookie, FreezeListener};
struct Mapping {
address: usize,
alloc: RangeAllocator<AllocationInfo>,
@@ -315,6 +320,8 @@ pub(crate) struct NodeRefInfo {
/// The refcount that this process owns to the node.
node_ref: ListArcField<NodeRef, { Self::LIST_PROC }>,
death: ListArcField<Option<DArc<NodeDeath>>, { Self::LIST_PROC }>,
/// Cookie of the active freeze listener for this node.
freeze: ListArcField<Option<FreezeCookie>, { Self::LIST_PROC }>,
/// Used to store this `NodeRefInfo` in the node's `refs` list.
#[pin]
links: ListLinks<{ Self::LIST_NODE }>,
@@ -335,6 +342,7 @@ impl NodeRefInfo {
debug_id: super::next_debug_id(),
node_ref: ListArcField::new(node_ref),
death: ListArcField::new(None),
freeze: ListArcField::new(None),
links <- ListLinks::new(),
handle,
process,
@@ -343,6 +351,7 @@ impl NodeRefInfo {
kernel::list::define_list_arc_field_getter! {
pub(crate) fn death(&mut self<{Self::LIST_PROC}>) -> &mut Option<DArc<NodeDeath>> { death }
pub(crate) fn freeze(&mut self<{Self::LIST_PROC}>) -> &mut Option<FreezeCookie> { freeze }
pub(crate) fn node_ref(&mut self<{Self::LIST_PROC}>) -> &mut NodeRef { node_ref }
pub(crate) fn node_ref2(&self<{Self::LIST_PROC}>) -> &NodeRef { node_ref }
}
@@ -372,6 +381,11 @@ struct ProcessNodeRefs {
/// Used to look up nodes without knowing their local 32-bit id. The usize is the address of
/// the underlying `Node` struct as returned by `Node::global_id`.
by_node: RBTree<usize, u32>,
/// Used to look up a `FreezeListener` by cookie.
///
/// There might be multiple freeze listeners for the same node, but at most one of them is
/// active.
freeze_listeners: RBTree<FreezeCookie, FreezeListener>,
}
impl ProcessNodeRefs {
@@ -379,6 +393,7 @@ impl ProcessNodeRefs {
Self {
by_handle: RBTree::new(),
by_node: RBTree::new(),
freeze_listeners: RBTree::new(),
}
}
}
@@ -1232,6 +1247,18 @@ impl Process {
}
}
/// Locks the spinlock and moves the `nodes` rbtree out.
///
/// This allows you to iterate through `nodes` while also allowing you to give other parts of
/// the codebase exclusive access to `ProcessInner`.
pub(crate) fn lock_with_nodes(&self) -> WithNodes<'_> {
let mut inner = self.inner.lock();
WithNodes {
nodes: take(&mut inner.nodes),
inner,
}
}
fn deferred_flush(&self) {
let inner = self.inner.lock();
for thread in inner.threads.values() {
@@ -1260,12 +1287,10 @@ impl Process {
// Move oneway_todo into the process todolist.
{
let mut inner = self.inner.lock();
let nodes = take(&mut inner.nodes);
for node in nodes.values() {
node.release(&mut inner);
let mut inner = self.lock_with_nodes();
for node in inner.nodes.values() {
node.release(&mut inner.inner);
}
inner.nodes = nodes;
}
// Cancel all pending work items.
@@ -1294,6 +1319,7 @@ impl Process {
// while holding the lock.
let mut refs = self.node_refs.lock();
let mut node_refs = take(&mut refs.by_handle);
let freeze_listeners = take(&mut refs.freeze_listeners);
drop(refs);
for info in node_refs.values_mut() {
// SAFETY: We are removing the `NodeRefInfo` from the right node.
@@ -1308,6 +1334,10 @@ impl Process {
death.set_cleared(false);
}
drop(node_refs);
for listener in freeze_listeners.values() {
listener.on_process_exit(&self);
}
drop(freeze_listeners);
// Do similar dance for the state lock.
let mut inner = self.inner.lock();
@@ -1354,10 +1384,13 @@ impl Process {
pub(crate) fn ioctl_freeze(&self, info: &BinderFreezeInfo) -> Result {
if info.enable == 0 {
let msgs = self.prepare_freeze_messages()?;
let mut inner = self.inner.lock();
inner.sync_recv = false;
inner.async_recv = false;
inner.is_frozen = false;
drop(inner);
msgs.send_messages();
return Ok(());
}
@@ -1395,7 +1428,17 @@ impl Process {
inner.is_frozen = false;
Err(EAGAIN)
} else {
Ok(())
drop(inner);
match self.prepare_freeze_messages() {
Ok(batch) => {
batch.send_messages();
Ok(())
}
Err(kernel::alloc::AllocError) => {
self.inner.lock().is_frozen = false;
Err(ENOMEM)
}
}
}
}
}
@@ -1610,10 +1653,7 @@ pub(crate) struct Registration<'a> {
}
impl<'a> Registration<'a> {
fn new(
thread: &'a Arc<Thread>,
guard: &mut Guard<'_, ProcessInner, kernel::sync::lock::spinlock::SpinLockBackend>,
) -> Self {
fn new(thread: &'a Arc<Thread>, guard: &mut Guard<'_, ProcessInner, SpinLockBackend>) -> Self {
assert!(core::ptr::eq(&thread.process.inner, guard.lock_ref()));
// INVARIANT: We are pushing this thread to the right `ready_threads` list.
if let Ok(list_arc) = ListArc::try_from_arc(thread.clone()) {
@@ -1637,3 +1677,17 @@ impl Drop for Registration<'_> {
unsafe { inner.ready_threads.remove(self.thread) };
}
}
pub(crate) struct WithNodes<'a> {
pub(crate) inner: Guard<'a, ProcessInner, SpinLockBackend>,
pub(crate) nodes: RBTree<u64, DArc<Node>>,
}
impl Drop for WithNodes<'_> {
fn drop(&mut self) {
core::mem::swap(&mut self.nodes, &mut self.inner.nodes);
if self.nodes.iter().next().is_some() {
pr_err!("nodes array was modified while using lock_with_nodes\n");
}
}
}
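The `WithNodes` guard above temporarily moves the `nodes` tree out of `ProcessInner` and swaps it back in `Drop`, complaining if anything was inserted into the (empty) original in the meantime. A minimal userspace sketch of the same idea, with hypothetical `State`/`lock_with_nodes` names and a `BTreeMap` standing in for the rbtree:

```rust
use std::collections::BTreeMap;

struct State {
    nodes: BTreeMap<u64, &'static str>,
}

struct WithNodes<'a> {
    state: &'a mut State,
    nodes: BTreeMap<u64, &'static str>,
}

impl Drop for WithNodes<'_> {
    fn drop(&mut self) {
        // Put the tree back; warn if something landed in the original
        // while it was borrowed out.
        std::mem::swap(&mut self.nodes, &mut self.state.nodes);
        if self.nodes.iter().next().is_some() {
            eprintln!("nodes map was modified while borrowed out");
        }
    }
}

fn lock_with_nodes(state: &mut State) -> WithNodes<'_> {
    let nodes = std::mem::take(&mut state.nodes);
    WithNodes { state, nodes }
}

fn main() {
    let mut st = State {
        nodes: BTreeMap::from([(1, "a"), (2, "b")]),
    };
    {
        let with = lock_with_nodes(&mut st);
        // Iterate the moved-out tree while `state` stays borrowed.
        assert_eq!(with.nodes.len(), 2);
        assert!(with.state.nodes.is_empty());
    }
    assert_eq!(st.nodes.len(), 2); // restored on drop
}
```

The payoff, as in the kernel version, is that callers can iterate the tree while still handing out exclusive access to the rest of the inner state.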


@@ -141,7 +141,10 @@ impl<T> ArrayRangeAllocator<T> {
state: DescriptorState::new(is_oneway, debug_id, pid),
};
// Insert the value at the given index to keep the array sorted.
self.ranges.insert_within_capacity(insert_at_idx, new_range).ok().unwrap();
self.ranges
.insert_within_capacity(insert_at_idx, new_range)
.ok()
.unwrap();
Ok(insert_at_offset)
}


@@ -283,9 +283,9 @@ impl DeliverToRead for DeliverCode {
}
}
const fn ptr_align(value: usize) -> usize {
fn ptr_align(value: usize) -> Option<usize> {
let size = core::mem::size_of::<usize>() - 1;
(value + size) & !size
Some(value.checked_add(size)? & !size)
}
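The `ptr_align` change swaps a wrapping round-up for a checked one: the old `const fn` silently wrapped for values near `usize::MAX`, while the new version surfaces the overflow as `None`. A standalone sketch of the checked version:

```rust
// Round `value` up to the next multiple of size_of::<usize>(),
// returning None instead of wrapping on overflow.
fn ptr_align(value: usize) -> Option<usize> {
    let mask = core::mem::size_of::<usize>() - 1;
    Some(value.checked_add(mask)? & !mask)
}

fn main() {
    assert_eq!(ptr_align(0), Some(0));
    assert_eq!(ptr_align(1), Some(core::mem::size_of::<usize>()));
    // The old wrapping version returned 0 here; the checked one refuses.
    assert_eq!(ptr_align(usize::MAX), None);
}
```

Every caller then decides how to map `None` to an error — in the thread and transaction paths below, `.ok_or(EINVAL)?`.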
// SAFETY: We call register in `init`.


@@ -67,6 +67,7 @@ enum binderfs_stats_mode {
struct binder_features {
bool oneway_spam_detection;
bool extended_error;
bool freeze_notification;
};
static const struct constant_table binderfs_param_stats[] = {
@@ -83,6 +84,7 @@ static const struct fs_parameter_spec binderfs_fs_parameters[] = {
static struct binder_features binder_features = {
.oneway_spam_detection = true,
.extended_error = true,
.freeze_notification = true,
};
static inline struct binderfs_info *BINDERFS_SB(const struct super_block *sb)
@@ -622,6 +624,12 @@ static int init_binder_features(struct super_block *sb)
if (IS_ERR(dentry))
return PTR_ERR(dentry);
dentry = rust_binderfs_create_file(dir, "freeze_notification",
&binder_features_fops,
&binder_features.freeze_notification);
if (IS_ERR(dentry))
return PTR_ERR(dentry);
return 0;
}


@@ -205,7 +205,7 @@ impl UnusedBufferSpace {
/// into the buffer is returned.
fn claim_next(&mut self, size: usize) -> Result<usize> {
// We require every chunk to be aligned.
let size = ptr_align(size);
let size = ptr_align(size).ok_or(EINVAL)?;
let new_offset = self.offset.checked_add(size).ok_or(EINVAL)?;
if new_offset <= self.limit {
@@ -1066,15 +1066,15 @@ impl Thread {
};
let data_size = trd.data_size.try_into().map_err(|_| EINVAL)?;
let aligned_data_size = ptr_align(data_size);
let aligned_data_size = ptr_align(data_size).ok_or(EINVAL)?;
let offsets_size = trd.offsets_size.try_into().map_err(|_| EINVAL)?;
let aligned_offsets_size = ptr_align(offsets_size);
let aligned_offsets_size = ptr_align(offsets_size).ok_or(EINVAL)?;
let buffers_size = tr.buffers_size.try_into().map_err(|_| EINVAL)?;
let aligned_buffers_size = ptr_align(buffers_size);
let aligned_secctx_size = secctx
.as_ref()
.map(|(_, ctx)| ptr_align(ctx.len()))
.unwrap_or(0);
let aligned_buffers_size = ptr_align(buffers_size).ok_or(EINVAL)?;
let aligned_secctx_size = match secctx.as_ref() {
Some((_offset, ctx)) => ptr_align(ctx.len()).ok_or(EINVAL)?,
None => 0,
};
// This guarantees that at least `sizeof(usize)` bytes will be allocated.
let len = usize::max(
@@ -1482,6 +1482,9 @@ impl Thread {
}
BC_ENTER_LOOPER => self.inner.lock().looper_enter(),
BC_EXIT_LOOPER => self.inner.lock().looper_exit(),
BC_REQUEST_FREEZE_NOTIFICATION => self.process.request_freeze_notif(&mut reader)?,
BC_CLEAR_FREEZE_NOTIFICATION => self.process.clear_freeze_notif(&mut reader)?,
BC_FREEZE_NOTIFICATION_DONE => self.process.freeze_notif_done(&mut reader)?,
// Fail if given an unknown error code.
// BC_ATTEMPT_ACQUIRE and BC_ACQUIRE_RESULT are no longer supported.


@@ -423,7 +423,7 @@ impl DeliverToRead for Transaction {
tr.data.ptr.buffer = self.data_address as _;
tr.offsets_size = self.offsets_size as _;
if tr.offsets_size > 0 {
tr.data.ptr.offsets = (self.data_address + ptr_align(self.data_size)) as _;
tr.data.ptr.offsets = (self.data_address + ptr_align(self.data_size).unwrap()) as _;
}
tr.sender_euid = self.sender_euid.into_uid_in_current_ns();
tr.sender_pid = 0;


@@ -78,6 +78,7 @@
* Export tracepoints that act as a bare tracehook (ie: have no trace event
* associated with them) to allow external modules to probe them.
*/
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_refrigerator);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_do_send_sig_info);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_killed_process);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_arch_set_freq_scale);
@@ -139,6 +140,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_update_sysfs);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_send_command);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_compl_command);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_cgroup_set_task);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_cgroup_force_kthread_migration);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_syscall_prctl_finished);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_send_uic_command);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ufs_send_tm_command);
@@ -200,6 +202,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_ra_tuning_max_page);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_tune_mmap_readaround);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_hw_protection_shutdown);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_shrink_slab_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_do_shrink_slab_ex);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_drain_all_pages_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_security_audit_log_setid);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_selinux_avc_insert);
@@ -311,6 +314,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_tcp_select_window);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_inet_sock_create);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_inet_sock_release);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_bpf_skb_load_bytes);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_tcp_rcv_spurious_retrans);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_tcp_rtt_estimator);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_udp_enqueue_schedule_skb);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_build_skb_around);
@@ -386,6 +390,7 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_fuse_request_end);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_lruvec_add_folio);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_lruvec_del_folio);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_do_async_mmap_readahead);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mm_free_page);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_tcp_sock_error);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_tcp_fastsyn);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_tcp_state_change);
@@ -558,3 +563,10 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_try_to_unmap_one);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_resume_begin);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_resume_end);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_early_resume_begin);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_use_amu_fie);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_set_tsk_need_resched_lazy);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_pr_set_vma_name_bypass);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_mem_cgroup_charge);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_filemap_add_folio);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_vh_shrink_node);
EXPORT_TRACEPOINT_SYMBOL_GPL(android_rvh_cpuset_fork);


@@ -37,6 +37,12 @@ EXPORT_PER_CPU_SYMBOL_GPL(capacity_freq_ref);
static bool supports_scale_freq_counters(const struct cpumask *cpus)
{
bool use_amu_fie = true;
trace_android_vh_use_amu_fie(&use_amu_fie);
if (!use_amu_fie)
return false;
return cpumask_subset(cpus, &scale_freq_counters_mask);
}


@@ -366,8 +366,8 @@ static void drm_writeback_connector_cleanup(struct drm_device *dev,
spin_lock_irqsave(&wb_connector->job_lock, flags);
list_for_each_entry_safe(pos, n, &wb_connector->job_queue, list_entry) {
drm_writeback_cleanup_job(pos);
list_del(&pos->list_entry);
drm_writeback_cleanup_job(pos);
}
spin_unlock_irqrestore(&wb_connector->job_lock, flags);
}


@@ -158,15 +158,13 @@ static int kvm_arm_smmu_domain_finalize(struct kvm_arm_smmu_domain *kvm_smmu_dom
if (kvm_smmu_domain->smmu)
return 0;
kvm_smmu_domain->smmu = smmu;
if (kvm_smmu_domain->domain.type == IOMMU_DOMAIN_IDENTITY) {
kvm_smmu_domain->id = KVM_IOMMU_DOMAIN_IDMAP_ID;
/*
* Identity domains don't use the DMA API, so no need to
* set the domain aperture.
*/
return 0;
goto out;
}
/* Default to stage-1. */
@@ -224,7 +222,9 @@ static int kvm_arm_smmu_domain_finalize(struct kvm_arm_smmu_domain *kvm_smmu_dom
return ret;
}
return 0;
out:
kvm_smmu_domain->smmu = smmu;
return ret;
}
static void kvm_arm_smmu_domain_free(struct iommu_domain *domain)


@@ -92,21 +92,24 @@ static void pviommu_domain_remove_map(struct pviommu_domain *pv_domain,
/* Range can cover multiple entries. */
while (start < end) {
MA_STATE(mas, &pv_domain->mappings, start, end);
u64 entry = xa_to_value(mas_find(&mas, start));
u64 entry;
u64 old_start, old_end;
mtree_lock(mas.tree);
entry = xa_to_value(mas_find(&mas, start));
old_start = mas.index;
old_end = mas.last;
mas_erase(&mas);
/* Insert the rest if not removed. */
if (start > old_start)
mtree_store_range(&pv_domain->mappings, old_start, start - 1,
xa_mk_value(entry), GFP_KERNEL);
if (old_end > end)
mtree_store_range(&pv_domain->mappings, end + 1, old_end,
xa_mk_value(entry + end - old_start + 1), GFP_KERNEL);
if (start > old_start) {
MA_STATE(mas_border, &pv_domain->mappings, old_start, start - 1);
WARN_ON(mas_store_gfp(&mas_border, xa_mk_value(entry), GFP_ATOMIC));
}
if (old_end > end) {
MA_STATE(mas_border, &pv_domain->mappings, end + 1, old_end);
WARN_ON(mas_store_gfp(&mas_border, xa_mk_value(entry + end - old_start + 1),
GFP_ATOMIC));
}
mtree_unlock(mas.tree);
start = old_end + 1;
}
}
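The range-removal logic in `pviommu_domain_remove_map` erases a stored range that may only partially overlap `[start, end]`, then re-inserts the left and right remainders, with the right remainder's value offset past the erased span (`entry + end - old_start + 1`). A hedged Rust model of that arithmetic — a plain `Vec` of `(start, end, value)` triples stands in for the maple tree, and `remove_range` is a hypothetical name:

```rust
// Erase [start, end] from every overlapping (old_start, old_end, val)
// range, keeping the remainders with values that mirror the C code.
fn remove_range(ranges: &mut Vec<(u64, u64, u64)>, start: u64, end: u64) {
    let mut out = Vec::new();
    for &(old_start, old_end, val) in ranges.iter() {
        if old_end < start || old_start > end {
            out.push((old_start, old_end, val)); // no overlap, untouched
            continue;
        }
        if start > old_start {
            // Left remainder keeps the original base value.
            out.push((old_start, start - 1, val));
        }
        if old_end > end {
            // Right remainder's value is offset past the erased span,
            // matching `entry + end - old_start + 1`.
            out.push((end + 1, old_end, val + end - old_start + 1));
        }
    }
    *ranges = out;
}

fn main() {
    let mut ranges = vec![(0u64, 9, 100)];
    remove_range(&mut ranges, 3, 5);
    assert_eq!(ranges, vec![(0, 2, 100), (6, 9, 106)]);
}
```

The kernel version additionally has to do this under `mtree_lock` with `GFP_ATOMIC` stores, which is exactly what the diff above fixes.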
@@ -114,8 +117,11 @@ static void pviommu_domain_remove_map(struct pviommu_domain *pv_domain,
static u64 pviommu_domain_find(struct pviommu_domain *pv_domain, u64 key)
{
MA_STATE(mas, &pv_domain->mappings, key, key);
void *entry = mas_find(&mas, key);
void *entry;
mtree_lock(mas.tree);
entry = mas_find(&mas, key);
mtree_unlock(mas.tree);
/* No entry. */
if (!xa_is_value(entry))
return 0;


@@ -101,6 +101,8 @@ EXPORT_SYMBOL_GPL(pci_pwrctrl_device_set_ready);
*/
void pci_pwrctrl_device_unset_ready(struct pci_pwrctrl *pwrctrl)
{
cancel_work_sync(&pwrctrl->work);
/*
* We don't have to delete the link here. Typically, this function
* is only called when the power control device is being detached. If


@@ -1226,8 +1226,8 @@ static int cros_typec_probe(struct platform_device *pdev)
typec->ec = dev_get_drvdata(pdev->dev.parent);
if (!typec->ec) {
dev_err(dev, "couldn't find parent EC device\n");
return -ENODEV;
dev_warn(dev, "couldn't find parent EC device\n");
return -EPROBE_DEFER;
}
platform_set_drvdata(pdev, typec);


@@ -37,6 +37,8 @@ const ASHMEM_FULL_NAME_LEN: usize = bindings::ASHMEM_FULL_NAME_LEN as usize;
const ASHMEM_NAME_PREFIX_LEN: usize = bindings::ASHMEM_NAME_PREFIX_LEN as usize;
const ASHMEM_NAME_PREFIX: [u8; ASHMEM_NAME_PREFIX_LEN] = *b"dev/ashmem/";
const ASHMEM_MAX_SIZE: usize = usize::MAX >> 1;
const PROT_READ: usize = bindings::PROT_READ as usize;
const PROT_EXEC: usize = bindings::PROT_EXEC as usize;
const PROT_WRITE: usize = bindings::PROT_WRITE as usize;
@@ -157,7 +159,7 @@ impl MiscDevice for Ashmem {
let asma = &mut *me.inner.lock();
// User needs to SET_SIZE before mapping.
if asma.size == 0 {
if asma.size == 0 || asma.size >= ASHMEM_MAX_SIZE {
return Err(EINVAL);
}
@@ -413,18 +415,17 @@ impl Ashmem {
None => return Err(EINVAL),
};
let max_size = page_align(asma.size);
let remaining = max_size.checked_sub(offset).ok_or(EINVAL)?;
// Per custom, you can pass zero for len to mean "everything onward".
let len = if cmd_len == 0 {
page_align(asma.size) - offset
} else {
cmd_len
};
let len = if cmd_len == 0 { remaining } else { cmd_len };
if (offset | len) & !PAGE_MASK != 0 {
return Err(EINVAL);
}
let len_plus_offset = offset.checked_add(len).ok_or(EINVAL)?;
if page_align(asma.size) < len_plus_offset {
if max_size < len_plus_offset {
return Err(EINVAL);
}
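The ashmem fix replaces `page_align(asma.size) - offset`, which underflows when `offset > size`, with a `checked_sub` that maps the underflow to `EINVAL`. A self-contained sketch of the hardened check, with hypothetical `check_range` naming and `Err(())` standing in for `EINVAL`:

```rust
const PAGE_SIZE: usize = 4096;
const PAGE_MASK: usize = !(PAGE_SIZE - 1);

fn page_align(v: usize) -> usize {
    (v + PAGE_SIZE - 1) & PAGE_MASK
}

fn check_range(size: usize, offset: usize, cmd_len: usize) -> Result<usize, ()> {
    let max_size = page_align(size);
    // The fix: offset > size now fails instead of wrapping.
    let remaining = max_size.checked_sub(offset).ok_or(())?;
    // Passing zero for len means "everything onward".
    let len = if cmd_len == 0 { remaining } else { cmd_len };
    if (offset | len) & !PAGE_MASK != 0 {
        return Err(()); // not page-aligned
    }
    let end = offset.checked_add(len).ok_or(())?;
    if max_size < end {
        return Err(());
    }
    Ok(len)
}

fn main() {
    assert_eq!(check_range(8192, 4096, 0), Ok(4096));
    assert_eq!(check_range(4096, 8192, 0), Err(())); // offset past size
}
```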


@@ -670,7 +670,6 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
int tag = scsi_cmd_to_rq(cmd)->tag;
struct ufshcd_lrb *lrbp = &hba->lrb[tag];
struct ufs_hw_queue *hwq;
unsigned long flags;
int err;
/* Skip task abort in case previous aborts failed and report failure */
@@ -709,10 +708,5 @@ int ufshcd_mcq_abort(struct scsi_cmnd *cmd)
return FAILED;
}
spin_lock_irqsave(&hwq->cq_lock, flags);
if (ufshcd_cmd_inflight(lrbp->cmd))
ufshcd_release_scsi_cmd(hba, lrbp);
spin_unlock_irqrestore(&hwq->cq_lock, flags);
return SUCCESS;
}


@@ -1435,6 +1435,7 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba, u64 timeout_us)
* make sure that there are no outstanding requests when
* clock scaling is in progress
*/
mutex_lock(&hba->host->scan_mutex);
blk_mq_quiesce_tagset(&hba->host->tag_set);
mutex_lock(&hba->wb_mutex);
down_write(&hba->clk_scaling_lock);
@@ -1445,6 +1446,7 @@ static int ufshcd_clock_scaling_prepare(struct ufs_hba *hba, u64 timeout_us)
up_write(&hba->clk_scaling_lock);
mutex_unlock(&hba->wb_mutex);
blk_mq_unquiesce_tagset(&hba->host->tag_set);
mutex_unlock(&hba->host->scan_mutex);
goto out;
}
@@ -1466,6 +1468,7 @@ static void ufshcd_clock_scaling_unprepare(struct ufs_hba *hba, int err)
mutex_unlock(&hba->wb_mutex);
blk_mq_unquiesce_tagset(&hba->host->tag_set);
mutex_unlock(&hba->host->scan_mutex);
ufshcd_release(hba);
}
@@ -6730,9 +6733,14 @@ static void ufshcd_err_handler(struct work_struct *work)
up(&hba->host_sem);
return;
}
spin_unlock_irqrestore(hba->host->host_lock, flags);
ufshcd_err_handling_prepare(hba);
spin_lock_irqsave(hba->host->host_lock, flags);
ufshcd_set_eh_in_progress(hba);
spin_unlock_irqrestore(hba->host->host_lock, flags);
ufshcd_err_handling_prepare(hba);
/* Complete requests that have door-bell cleared by h/w */
ufshcd_complete_requests(hba, false);
spin_lock_irqsave(hba->host->host_lock, flags);


@@ -1,8 +1,10 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2025 Qualcomm Innovation Center, Inc. All rights reserved. */
/* Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries. */
#include <linux/gunyah.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/pgtable.h>
#include <linux/virtio_balloon.h>
#include <asm/hypervisor.h>
@@ -16,6 +18,28 @@ struct addrspace_info_area_rootvm_addrspace_cap {
static u64 our_addrspace_capid;
static int gunyah_mmio_guard_ioremap_hook(phys_addr_t phys, size_t size, pgprot_t *prot)
{
pteval_t protval = pgprot_val(*prot);
int ret;
/*
* We only expect MMIO emulation for regions mapped with device
* attributes.
*/
if (protval != PROT_DEVICE_nGnRE && protval != PROT_DEVICE_nGnRnE)
return 0;
ret = gunyah_hypercall_addrspc_configure_vmmio_range(our_addrspace_capid,
phys, size, GUNYAH_ADDRSPACE_VMMIO_CONFIGURE_OP_ADD_RANGE);
if (ret == GUNYAH_ERROR_UNIMPLEMENTED || ret == GUNYAH_ERROR_BUSY)
/* Gunyah would have configured VMMIO via DT */
ret = GUNYAH_ERROR_OK;
return gunyah_error_remap(ret);
}
#ifdef CONFIG_VIRTIO_BALLOON_HYP_OPS
static void gunyah_page_relinquish(struct page *page, unsigned int nr)
{
@@ -71,6 +95,7 @@ static int __init gunyah_guest_init(void)
our_addrspace_capid = info->addrspace_cap;
arm64_ioremap_prot_hook_register(&gunyah_mmio_guard_ioremap_hook);
#ifdef CONFIG_VIRTIO_BALLOON_HYP_OPS
virtio_balloon_hyp_ops = &gunyah_virtio_balloon_hyp_ops;
#endif


@@ -299,6 +299,7 @@ static struct workqueue_struct *z_erofs_workqueue __read_mostly;
#ifdef CONFIG_EROFS_FS_PCPU_KTHREAD
static struct kthread_worker __rcu **z_erofs_pcpu_workers;
static atomic_t erofs_percpu_workers_initialized = ATOMIC_INIT(0);
static void erofs_destroy_percpu_workers(void)
{
@@ -344,12 +345,8 @@ static int erofs_init_percpu_workers(void)
}
return 0;
}
#else
static inline void erofs_destroy_percpu_workers(void) {}
static inline int erofs_init_percpu_workers(void) { return 0; }
#endif
#if defined(CONFIG_HOTPLUG_CPU) && defined(CONFIG_EROFS_FS_PCPU_KTHREAD)
#ifdef CONFIG_HOTPLUG_CPU
static DEFINE_SPINLOCK(z_erofs_pcpu_worker_lock);
static enum cpuhp_state erofs_cpuhp_state;
@@ -406,15 +403,53 @@ static void erofs_cpu_hotplug_destroy(void)
if (erofs_cpuhp_state)
cpuhp_remove_state_nocalls(erofs_cpuhp_state);
}
#else /* !CONFIG_HOTPLUG_CPU || !CONFIG_EROFS_FS_PCPU_KTHREAD */
#else /* !CONFIG_HOTPLUG_CPU */
static inline int erofs_cpu_hotplug_init(void) { return 0; }
static inline void erofs_cpu_hotplug_destroy(void) {}
#endif
#endif/* CONFIG_HOTPLUG_CPU */
static int z_erofs_init_pcpu_workers(struct super_block *sb)
{
int err;
if (atomic_xchg(&erofs_percpu_workers_initialized, 1))
return 0;
err = erofs_init_percpu_workers();
if (err) {
erofs_err(sb, "per-cpu workers: failed to allocate.");
goto err_init_percpu_workers;
}
err = erofs_cpu_hotplug_init();
if (err < 0) {
erofs_err(sb, "per-cpu workers: failed CPU hotplug init.");
goto err_cpuhp_init;
}
erofs_info(sb, "initialized per-cpu workers successfully.");
return err;
err_cpuhp_init:
erofs_destroy_percpu_workers();
err_init_percpu_workers:
atomic_set(&erofs_percpu_workers_initialized, 0);
return err;
}
static void z_erofs_destroy_pcpu_workers(void)
{
if (!atomic_xchg(&erofs_percpu_workers_initialized, 0))
return;
erofs_cpu_hotplug_destroy();
erofs_destroy_percpu_workers();
}
#else /* !CONFIG_EROFS_FS_PCPU_KTHREAD */
static inline int z_erofs_init_pcpu_workers(struct super_block *sb) { return 0; }
static inline void z_erofs_destroy_pcpu_workers(void) {}
#endif/* CONFIG_EROFS_FS_PCPU_KTHREAD */
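`z_erofs_init_pcpu_workers` above guards lazy, mount-time initialization with `atomic_xchg`: the first caller flips the flag and runs the init, later callers return immediately, and a failed init rolls the flag back so a later mount can retry. A hedged Rust sketch of that once-guard (same caveat as the C code: a second caller racing a still-failing first init sees success):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

static INITIALIZED: AtomicBool = AtomicBool::new(false);

// Run `do_init` at most once; reset the flag on failure so the next
// caller retries, mirroring the atomic_xchg pattern in the diff.
fn init_once(do_init: impl Fn() -> Result<(), ()>) -> Result<(), ()> {
    if INITIALIZED.swap(true, Ordering::SeqCst) {
        return Ok(()); // someone else already initialized
    }
    if do_init().is_err() {
        INITIALIZED.store(false, Ordering::SeqCst);
        return Err(());
    }
    Ok(())
}

fn main() {
    use std::cell::Cell;
    let runs = Cell::new(0);
    let init = || {
        runs.set(runs.get() + 1);
        Ok(())
    };
    assert_eq!(init_once(&init), Ok(()));
    assert_eq!(init_once(&init), Ok(()));
    assert_eq!(runs.get(), 1); // the body ran only once
}
```

The teardown side uses the same swap in reverse: `atomic_xchg(&..., 0)` returning zero means nothing was initialized and there is nothing to destroy.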
void z_erofs_exit_subsystem(void)
{
erofs_cpu_hotplug_destroy();
erofs_destroy_percpu_workers();
z_erofs_destroy_pcpu_workers();
destroy_workqueue(z_erofs_workqueue);
z_erofs_destroy_pcluster_pool();
z_erofs_exit_decompressor();
@@ -438,19 +473,8 @@ int __init z_erofs_init_subsystem(void)
goto err_workqueue_init;
}
err = erofs_init_percpu_workers();
if (err)
goto err_pcpu_worker;
err = erofs_cpu_hotplug_init();
if (err < 0)
goto err_cpuhp_init;
return err;
err_cpuhp_init:
erofs_destroy_percpu_workers();
err_pcpu_worker:
destroy_workqueue(z_erofs_workqueue);
err_workqueue_init:
z_erofs_destroy_pcluster_pool();
err_pcluster_pool:
@@ -665,8 +689,14 @@ static const struct address_space_operations z_erofs_cache_aops = {
int erofs_init_managed_cache(struct super_block *sb)
{
struct inode *const inode = new_inode(sb);
struct inode *inode;
int err;
err = z_erofs_init_pcpu_workers(sb);
if (err)
return err;
inode = new_inode(sb);
if (!inode)
return -ENOMEM;


@@ -178,8 +178,7 @@ void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct folio *folio)
#ifdef CONFIG_F2FS_FS_LZO
static int lzo_init_compress_ctx(struct compress_ctx *cc)
{
cc->private = f2fs_kvmalloc(F2FS_I_SB(cc->inode),
LZO1X_MEM_COMPRESS, GFP_NOFS);
cc->private = f2fs_vmalloc(LZO1X_MEM_COMPRESS);
if (!cc->private)
return -ENOMEM;
@@ -189,7 +188,7 @@ static int lzo_init_compress_ctx(struct compress_ctx *cc)
static void lzo_destroy_compress_ctx(struct compress_ctx *cc)
{
kvfree(cc->private);
vfree(cc->private);
cc->private = NULL;
}
@@ -246,7 +245,7 @@ static int lz4_init_compress_ctx(struct compress_ctx *cc)
size = LZ4HC_MEM_COMPRESS;
#endif
cc->private = f2fs_kvmalloc(F2FS_I_SB(cc->inode), size, GFP_NOFS);
cc->private = f2fs_vmalloc(size);
if (!cc->private)
return -ENOMEM;
@@ -261,7 +260,7 @@ static int lz4_init_compress_ctx(struct compress_ctx *cc)
static void lz4_destroy_compress_ctx(struct compress_ctx *cc)
{
kvfree(cc->private);
vfree(cc->private);
cc->private = NULL;
}
@@ -342,8 +341,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
params = zstd_get_params(level, cc->rlen);
workspace_size = zstd_cstream_workspace_bound(&params.cParams);
workspace = f2fs_kvmalloc(F2FS_I_SB(cc->inode),
workspace_size, GFP_NOFS);
workspace = f2fs_vmalloc(workspace_size);
if (!workspace)
return -ENOMEM;
@@ -351,7 +349,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
if (!stream) {
f2fs_err_ratelimited(F2FS_I_SB(cc->inode),
"%s zstd_init_cstream failed", __func__);
kvfree(workspace);
vfree(workspace);
return -EIO;
}
@@ -364,7 +362,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
static void zstd_destroy_compress_ctx(struct compress_ctx *cc)
{
kvfree(cc->private);
vfree(cc->private);
cc->private = NULL;
cc->private2 = NULL;
}
@@ -423,8 +421,7 @@ static int zstd_init_decompress_ctx(struct decompress_io_ctx *dic)
workspace_size = zstd_dstream_workspace_bound(max_window_size);
workspace = f2fs_kvmalloc(F2FS_I_SB(dic->inode),
workspace_size, GFP_NOFS);
workspace = f2fs_vmalloc(workspace_size);
if (!workspace)
return -ENOMEM;
@@ -432,7 +429,7 @@ static int zstd_init_decompress_ctx(struct decompress_io_ctx *dic)
if (!stream) {
f2fs_err_ratelimited(F2FS_I_SB(dic->inode),
"%s zstd_init_dstream failed", __func__);
kvfree(workspace);
vfree(workspace);
return -EIO;
}
@@ -444,7 +441,7 @@ static int zstd_init_decompress_ctx(struct decompress_io_ctx *dic)
static void zstd_destroy_decompress_ctx(struct decompress_io_ctx *dic)
{
kvfree(dic->private);
vfree(dic->private);
dic->private = NULL;
dic->private2 = NULL;
}


@@ -58,8 +58,8 @@ bool f2fs_is_cp_guaranteed(struct page *page)
struct inode *inode;
struct f2fs_sb_info *sbi;
if (!mapping)
return false;
if (fscrypt_is_bounce_page(page))
return page_private_gcing(fscrypt_pagecache_page(page));
inode = mapping->host;
sbi = F2FS_I_SB(inode);
@@ -3991,7 +3991,7 @@ retry:
if ((pblock - SM_I(sbi)->main_blkaddr) % blks_per_sec ||
nr_pblocks % blks_per_sec ||
!f2fs_valid_pinned_area(sbi, pblock)) {
f2fs_is_sequential_zone_area(sbi, pblock)) {
bool last_extent = false;
not_aligned++;


@@ -366,7 +366,8 @@ start_find_entry:
out:
#if IS_ENABLED(CONFIG_UNICODE)
if (IS_CASEFOLDED(dir) && !de && use_hash) {
if (!sb_no_casefold_compat_fallback(dir->i_sb) &&
IS_CASEFOLDED(dir) && !de && use_hash) {
use_hash = false;
goto start_find_entry;
}


@@ -822,6 +822,7 @@ enum {
FI_ATOMIC_DIRTIED, /* indicate atomic file is dirtied */
FI_ATOMIC_REPLACE, /* indicate atomic replace */
FI_OPENED_FILE, /* indicate file has been opened */
FI_DONATE_FINISHED, /* indicate page donation of file has been finished */
FI_MAX, /* max flag, never be used */
};
@@ -1781,7 +1782,7 @@ struct f2fs_sb_info {
unsigned int dirty_device; /* for checkpoint data flush */
spinlock_t dev_lock; /* protect dirty_device */
bool aligned_blksize; /* all devices has the same logical blksize */
unsigned int first_zoned_segno; /* first zoned segno */
unsigned int first_zoned_segno; /* first segno in sequential zone */
/* For write statistics */
u64 sectors_written_start;
@@ -2543,8 +2544,14 @@ static inline void dec_valid_block_count(struct f2fs_sb_info *sbi,
blkcnt_t sectors = count << F2FS_LOG_SECTORS_PER_BLOCK;
spin_lock(&sbi->stat_lock);
f2fs_bug_on(sbi, sbi->total_valid_block_count < (block_t) count);
sbi->total_valid_block_count -= (block_t)count;
if (unlikely(sbi->total_valid_block_count < count)) {
f2fs_warn(sbi, "Inconsistent total_valid_block_count:%u, ino:%lu, count:%u",
sbi->total_valid_block_count, inode->i_ino, count);
sbi->total_valid_block_count = 0;
set_sbi_flag(sbi, SBI_NEED_FSCK);
} else {
sbi->total_valid_block_count -= count;
}
if (sbi->reserved_blocks &&
sbi->current_reserved_blocks < sbi->reserved_blocks)
sbi->current_reserved_blocks = min(sbi->reserved_blocks,
@@ -3546,6 +3553,11 @@ static inline void *f2fs_kvzalloc(struct f2fs_sb_info *sbi,
return f2fs_kvmalloc(sbi, size, flags | __GFP_ZERO);
}
static inline void *f2fs_vmalloc(size_t size)
{
return vmalloc(size);
}
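The underflow guard added to `dec_valid_block_count()` above stops trusting the counter: instead of a `bug_on` followed by an unconditional subtraction, an inconsistent count is clamped to zero and the filesystem is flagged for fsck. A hedged Rust sketch of that policy (hypothetical `dec_valid_blocks` name; the `false` return stands in for the caller setting `SBI_NEED_FSCK` and emitting the warning):

```rust
// Clamp-on-underflow instead of wrapping: returns false when the
// counter was inconsistent and recovery action is needed.
fn dec_valid_blocks(total: &mut u32, count: u32) -> bool {
    match total.checked_sub(count) {
        Some(v) => {
            *total = v;
            true
        }
        None => {
            *total = 0; // clamp; caller flags the fs for fsck
            false
        }
    }
}

fn main() {
    let mut total = 10;
    assert!(dec_valid_blocks(&mut total, 3));
    assert_eq!(total, 7);
    assert!(!dec_valid_blocks(&mut total, 20)); // would underflow
    assert_eq!(total, 0);
}
```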
static inline int get_extra_isize(struct inode *inode)
{
return F2FS_I(inode)->i_extra_isize / sizeof(__le32);
@@ -4647,12 +4659,16 @@ F2FS_FEATURE_FUNCS(readonly, RO);
F2FS_FEATURE_FUNCS(device_alias, DEVICE_ALIAS);
#ifdef CONFIG_BLK_DEV_ZONED
static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi,
block_t blkaddr)
static inline bool f2fs_zone_is_seq(struct f2fs_sb_info *sbi, int devi,
unsigned int zone)
{
unsigned int zno = blkaddr / sbi->blocks_per_blkz;
return test_bit(zone, FDEV(devi).blkz_seq);
}
return test_bit(zno, FDEV(devi).blkz_seq);
static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi,
block_t blkaddr)
{
return f2fs_zone_is_seq(sbi, devi, blkaddr / sbi->blocks_per_blkz);
}
#endif
@@ -4724,15 +4740,31 @@ static inline bool f2fs_lfs_mode(struct f2fs_sb_info *sbi)
return F2FS_OPTION(sbi).fs_mode == FS_MODE_LFS;
}
static inline bool f2fs_valid_pinned_area(struct f2fs_sb_info *sbi,
static inline bool f2fs_is_sequential_zone_area(struct f2fs_sb_info *sbi,
block_t blkaddr)
{
if (f2fs_sb_has_blkzoned(sbi)) {
#ifdef CONFIG_BLK_DEV_ZONED
int devi = f2fs_target_device_index(sbi, blkaddr);
return !bdev_is_zoned(FDEV(devi).bdev);
if (!bdev_is_zoned(FDEV(devi).bdev))
return false;
if (f2fs_is_multi_device(sbi)) {
if (blkaddr < FDEV(devi).start_blk ||
blkaddr > FDEV(devi).end_blk) {
f2fs_err(sbi, "Invalid block %x", blkaddr);
return false;
}
blkaddr -= FDEV(devi).start_blk;
}
return f2fs_blkz_is_seq(sbi, devi, blkaddr);
#else
return false;
#endif
}
return true;
return false;
}
static inline bool f2fs_low_mem_mode(struct f2fs_sb_info *sbi)


@@ -557,19 +557,21 @@ static int f2fs_file_mmap(struct file *file, struct vm_area_struct *vma)
 static int finish_preallocate_blocks(struct inode *inode)
 {
-	int ret;
+	int ret = 0;
+	bool opened;
+
+	f2fs_down_read(&F2FS_I(inode)->i_sem);
+	opened = is_inode_flag_set(inode, FI_OPENED_FILE);
+	f2fs_up_read(&F2FS_I(inode)->i_sem);
+	if (opened)
+		return 0;
 
 	inode_lock(inode);
-	if (is_inode_flag_set(inode, FI_OPENED_FILE)) {
-		inode_unlock(inode);
-		return 0;
-	}
+	if (is_inode_flag_set(inode, FI_OPENED_FILE))
+		goto out_unlock;
 
-	if (!file_should_truncate(inode)) {
-		set_inode_flag(inode, FI_OPENED_FILE);
-		inode_unlock(inode);
-		return 0;
-	}
+	if (!file_should_truncate(inode))
+		goto out_update;
 
 	f2fs_down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 	filemap_invalidate_lock(inode->i_mapping);
@@ -579,16 +581,17 @@ static int finish_preallocate_blocks(struct inode *inode)
 	filemap_invalidate_unlock(inode->i_mapping);
 	f2fs_up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]);
 
-	if (!ret)
-		set_inode_flag(inode, FI_OPENED_FILE);
-
-	inode_unlock(inode);
 	if (ret)
-		return ret;
+		goto out_unlock;
 
 	file_dont_truncate(inode);
-	return 0;
+out_update:
+	f2fs_down_write(&F2FS_I(inode)->i_sem);
+	set_inode_flag(inode, FI_OPENED_FILE);
+	f2fs_up_write(&F2FS_I(inode)->i_sem);
+out_unlock:
+	inode_unlock(inode);
+	return ret;
 }
 
 static int f2fs_file_open(struct inode *inode, struct file *filp)
@@ -2469,19 +2472,20 @@ static int f2fs_ioc_shutdown(struct file *filp, unsigned long arg)
 	return ret;
 }
 
-static void f2fs_keep_noreuse_range(struct inode *inode,
+static int f2fs_keep_noreuse_range(struct inode *inode,
 				loff_t offset, loff_t len)
 {
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
 	u64 max_bytes = F2FS_BLK_TO_BYTES(max_file_blocks(inode));
 	u64 start, end;
+	int ret = 0;
 
 	if (!S_ISREG(inode->i_mode))
-		return;
+		return 0;
 
 	if (offset >= max_bytes || len > max_bytes ||
 	    (offset + len) > max_bytes)
-		return;
+		return 0;
 
 	start = offset >> PAGE_SHIFT;
 	end = DIV_ROUND_UP(offset + len, PAGE_SIZE);
@@ -2489,7 +2493,7 @@ static void f2fs_keep_noreuse_range(struct inode *inode,
 	inode_lock(inode);
 	if (f2fs_is_atomic_file(inode)) {
 		inode_unlock(inode);
-		return;
+		return 0;
 	}
 
 	spin_lock(&sbi->inode_lock[DONATE_INODE]);
@@ -2498,7 +2502,12 @@ static void f2fs_keep_noreuse_range(struct inode *inode,
 		if (!list_empty(&F2FS_I(inode)->gdonate_list)) {
 			list_del_init(&F2FS_I(inode)->gdonate_list);
 			sbi->donate_files--;
-		}
+			if (is_inode_flag_set(inode, FI_DONATE_FINISHED))
+				ret = -EALREADY;
+			else
+				set_inode_flag(inode, FI_DONATE_FINISHED);
+		} else
+			ret = -ENOENT;
 	} else {
 		if (list_empty(&F2FS_I(inode)->gdonate_list)) {
 			list_add_tail(&F2FS_I(inode)->gdonate_list,
@@ -2510,9 +2519,12 @@ static void f2fs_keep_noreuse_range(struct inode *inode,
 		}
 		F2FS_I(inode)->donate_start = start;
 		F2FS_I(inode)->donate_end = end - 1;
+		clear_inode_flag(inode, FI_DONATE_FINISHED);
 	}
 	spin_unlock(&sbi->inode_lock[DONATE_INODE]);
 	inode_unlock(inode);
+
+	return ret;
 }
 
 static int f2fs_ioc_fitrim(struct file *filp, unsigned long arg)
@@ -5246,8 +5258,8 @@ static int f2fs_file_fadvise(struct file *filp, loff_t offset, loff_t len,
 			f2fs_compressed_file(inode)))
 		f2fs_invalidate_compress_pages(F2FS_I_SB(inode), inode->i_ino);
 	else if (advice == POSIX_FADV_NOREUSE)
-		f2fs_keep_noreuse_range(inode, offset, len);
+		err = f2fs_keep_noreuse_range(inode, offset, len);
 
-	return 0;
+	return err;
 }
#ifdef CONFIG_COMPAT


@@ -2066,6 +2066,9 @@ int f2fs_gc_range(struct f2fs_sb_info *sbi,
 			.iroot = RADIX_TREE_INIT(gc_list.iroot, GFP_NOFS),
 		};
 
+		if (IS_CURSEC(sbi, GET_SEC_FROM_SEG(sbi, segno)))
+			continue;
+
 		do_garbage_collect(sbi, segno, &gc_list, FG_GC, true, false);
 
 		put_gc_inode(&gc_list);


@@ -34,7 +34,9 @@ void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync)
 	if (f2fs_inode_dirtied(inode, sync))
 		return;
 
-	if (f2fs_is_atomic_file(inode))
+	/* only atomic file w/ FI_ATOMIC_COMMITTED can be set vfs dirty */
+	if (f2fs_is_atomic_file(inode) &&
+			!is_inode_flag_set(inode, FI_ATOMIC_COMMITTED))
 		return;
 
 	mark_inode_dirty_sync(inode);
@@ -286,6 +288,12 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page)
 		return false;
 	}
 
+	if (ino_of_node(node_page) == fi->i_xattr_nid) {
+		f2fs_warn(sbi, "%s: corrupted inode i_ino=%lx, xnid=%x, run fsck to fix.",
+			  __func__, inode->i_ino, fi->i_xattr_nid);
+		return false;
+	}
+
 	if (f2fs_has_extra_attr(inode)) {
 		if (!f2fs_sb_has_extra_attr(sbi)) {
 			f2fs_warn(sbi, "%s: inode (ino=%lx) is with extra_attr, but extra_attr feature is off",


@@ -418,7 +418,7 @@ static int f2fs_link(struct dentry *old_dentry, struct inode *dir,
 	if (is_inode_flag_set(dir, FI_PROJ_INHERIT) &&
 	    (!projid_eq(F2FS_I(dir)->i_projid,
-			F2FS_I(old_dentry->d_inode)->i_projid)))
+			F2FS_I(inode)->i_projid)))
 		return -EXDEV;
 
 	err = f2fs_dquot_initialize(dir);
@@ -573,6 +573,15 @@ static int f2fs_unlink(struct inode *dir, struct dentry *dentry)
 		goto fail;
 	}
 
+	if (unlikely(inode->i_nlink == 0)) {
+		f2fs_warn(F2FS_I_SB(inode), "%s: inode (ino=%lx) has zero i_nlink",
+			  __func__, inode->i_ino);
+		err = -EFSCORRUPTED;
+		set_sbi_flag(F2FS_I_SB(inode), SBI_NEED_FSCK);
+		f2fs_put_page(page, 0);
+		goto fail;
+	}
+
 	f2fs_balance_fs(sbi, true);
 
 	f2fs_lock_op(sbi);
@@ -918,7 +927,7 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
 	if (is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
 	    (!projid_eq(F2FS_I(new_dir)->i_projid,
-			F2FS_I(old_dentry->d_inode)->i_projid)))
+			F2FS_I(old_inode)->i_projid)))
 		return -EXDEV;
 
 	/*
@@ -1111,10 +1120,10 @@ static int f2fs_cross_rename(struct inode *old_dir, struct dentry *old_dentry,
 	if ((is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
 	     !projid_eq(F2FS_I(new_dir)->i_projid,
-			F2FS_I(old_dentry->d_inode)->i_projid)) ||
-	    (is_inode_flag_set(new_dir, FI_PROJ_INHERIT) &&
+			F2FS_I(old_inode)->i_projid)) ||
+	    (is_inode_flag_set(old_dir, FI_PROJ_INHERIT) &&
 	     !projid_eq(F2FS_I(old_dir)->i_projid,
-			F2FS_I(new_dentry->d_inode)->i_projid)))
+			F2FS_I(new_inode)->i_projid)))
 		return -EXDEV;
 
 	err = f2fs_dquot_initialize(old_dir);


@@ -1494,12 +1494,10 @@ repeat:
 		return folio;
 
 	err = read_node_page(&folio->page, 0);
-	if (err < 0) {
+	if (err < 0)
 		goto out_put_err;
-	} else if (err == LOCKED_PAGE) {
-		err = 0;
+	if (err == LOCKED_PAGE)
 		goto page_hit;
-	}
 
 	if (parent)
 		f2fs_ra_node_pages(parent, start + 1, MAX_RA_NODE);


@@ -376,7 +376,13 @@ out:
 	} else {
 		sbi->committed_atomic_block += fi->atomic_write_cnt;
 		set_inode_flag(inode, FI_ATOMIC_COMMITTED);
+
+		/*
+		 * inode may has no FI_ATOMIC_DIRTIED flag due to no write
+		 * before commit.
+		 */
 		if (is_inode_flag_set(inode, FI_ATOMIC_DIRTIED)) {
 			/* clear atomic dirty status and set vfs dirty status */
 			clear_inode_flag(inode, FI_ATOMIC_DIRTIED);
 			f2fs_mark_inode_dirty_sync(inode, true);
 		}
@@ -424,7 +430,7 @@ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need)
 	if (need && excess_cached_nats(sbi))
 		f2fs_balance_fs_bg(sbi, false);
 
-	if (!f2fs_is_checkpoint_ready(sbi))
+	if (unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED)))
 		return;
 
 	/*
@@ -2438,7 +2444,7 @@ static void update_segment_mtime(struct f2fs_sb_info *sbi, block_t blkaddr,
  * that the consecutive input blocks belong to the same segment.
  */
 static int update_sit_entry_for_release(struct f2fs_sb_info *sbi, struct seg_entry *se,
-		block_t blkaddr, unsigned int offset, int del)
+		unsigned int segno, block_t blkaddr, unsigned int offset, int del)
 {
 	bool exist;
 #ifdef CONFIG_F2FS_CHECK_FS
@@ -2483,15 +2489,22 @@ static int update_sit_entry_for_release(struct f2fs_sb_info *sbi, struct seg_ent
 		    f2fs_test_and_clear_bit(offset + i, se->discard_map))
 			sbi->discard_blks++;
 
-		if (!f2fs_test_bit(offset + i, se->ckpt_valid_map))
+		if (!f2fs_test_bit(offset + i, se->ckpt_valid_map)) {
 			se->ckpt_valid_blocks -= 1;
+			if (__is_large_section(sbi))
+				android_get_sec_entry(sbi, segno)->
+						ckpt_valid_blocks -= 1;
+		}
 	}
 
+	if (__is_large_section(sbi))
+		sanity_check_valid_blocks(sbi, segno);
+
 	return del;
 }
 static int update_sit_entry_for_alloc(struct f2fs_sb_info *sbi, struct seg_entry *se,
-		block_t blkaddr, unsigned int offset, int del)
+		unsigned int segno, block_t blkaddr, unsigned int offset, int del)
 {
 	bool exist;
 #ifdef CONFIG_F2FS_CHECK_FS
@@ -2524,12 +2537,23 @@ static int update_sit_entry_for_alloc(struct f2fs_sb_info *sbi, struct seg_entry
 	 * or newly invalidated.
 	 */
 	if (!is_sbi_flag_set(sbi, SBI_CP_DISABLED)) {
-		if (!f2fs_test_and_set_bit(offset, se->ckpt_valid_map))
+		if (!f2fs_test_and_set_bit(offset, se->ckpt_valid_map)) {
 			se->ckpt_valid_blocks++;
+			if (__is_large_section(sbi))
+				android_get_sec_entry(sbi, segno)->
+						ckpt_valid_blocks++;
+		}
 	}
 
-	if (!f2fs_test_bit(offset, se->ckpt_valid_map))
+	if (!f2fs_test_bit(offset, se->ckpt_valid_map)) {
 		se->ckpt_valid_blocks += del;
+		if (__is_large_section(sbi))
+			android_get_sec_entry(sbi, segno)->
+					ckpt_valid_blocks += del;
+	}
+
+	if (__is_large_section(sbi))
+		sanity_check_valid_blocks(sbi, segno);
 
 	return del;
 }
@@ -2560,9 +2584,9 @@ static void update_sit_entry(struct f2fs_sb_info *sbi, block_t blkaddr, int del)
 
 	/* Update valid block bitmap */
 	if (del > 0) {
-		del = update_sit_entry_for_alloc(sbi, se, blkaddr, offset, del);
+		del = update_sit_entry_for_alloc(sbi, se, segno, blkaddr, offset, del);
 	} else {
-		del = update_sit_entry_for_release(sbi, se, blkaddr, offset, del);
+		del = update_sit_entry_for_release(sbi, se, segno, blkaddr, offset, del);
 	}
 
 	__mark_sit_entry_dirty(sbi, segno);
@@ -2836,11 +2860,15 @@ find_other_zone:
 	}
 got_it:
 	/* set it as dirty segment in free segmap */
-	f2fs_bug_on(sbi, test_bit(segno, free_i->free_segmap));
+	if (test_bit(segno, free_i->free_segmap)) {
+		ret = -EFSCORRUPTED;
+		f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_CORRUPTED_FREE_BITMAP);
+		goto out_unlock;
+	}
 
-	/* no free section in conventional zone */
+	/* no free section in conventional device or conventional zone */
 	if (new_sec && pinning &&
-	    !f2fs_valid_pinned_area(sbi, START_BLOCK(sbi, segno))) {
+	    f2fs_is_sequential_zone_area(sbi, START_BLOCK(sbi, segno))) {
 		ret = -EAGAIN;
 		goto out_unlock;
 	}
@@ -3311,7 +3339,7 @@ retry:
 	if (f2fs_sb_has_blkzoned(sbi) && err == -EAGAIN && gc_required) {
 		f2fs_down_write(&sbi->gc_lock);
-		err = f2fs_gc_range(sbi, 0, GET_SEGNO(sbi, FDEV(0).end_blk),
+		err = f2fs_gc_range(sbi, 0, sbi->first_zoned_segno - 1,
 				true, ZONED_PIN_SEC_REQUIRED_COUNT);
 		f2fs_up_write(&sbi->gc_lock);
@@ -4696,6 +4724,12 @@ void f2fs_flush_sit_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
 					&raw_sit->entries[sit_offset]);
 		}
 
+		/* update ckpt_valid_block */
+		if (__is_large_section(sbi)) {
+			set_ckpt_valid_blocks(sbi, segno);
+			sanity_check_valid_blocks(sbi, segno);
+		}
+
 		__clear_bit(segno, bitmap);
 		sit_i->dirty_sentries--;
 		ses->entry_cnt--;
@@ -4795,6 +4829,14 @@ static int build_sit_info(struct f2fs_sb_info *sbi)
 							GFP_KERNEL);
 		if (!sit_i->sec_entries)
 			return -ENOMEM;
+
+		f2fs_bug_on(sbi, android_sec_entries);
+		android_sec_entries =
+			f2fs_kvzalloc(sbi, array_size(sizeof(struct android_sec_entry),
+						      MAIN_SECS(sbi)),
+				      GFP_KERNEL);
+		if (!android_sec_entries)
+			return -ENOMEM;
 	}
 
 	/* get information related with SIT */
@@ -5017,6 +5059,16 @@ init_discard_map_done:
 	}
 	up_read(&curseg->journal_rwsem);
 
+	/* update ckpt_valid_block */
+	if (__is_large_section(sbi)) {
+		unsigned int segno;
+
+		for (segno = 0; segno < MAIN_SEGS(sbi); segno += SEGS_PER_SEC(sbi)) {
+			set_ckpt_valid_blocks(sbi, segno);
+			sanity_check_valid_blocks(sbi, segno);
+		}
+	}
+
 	if (err)
 		return err;
@@ -5759,6 +5811,10 @@ static void destroy_sit_info(struct f2fs_sb_info *sbi)
 	kfree(sit_i->tmp_map);
 
 	kvfree(sit_i->sentries);
+	if (__is_large_section(sbi)) {
+		kvfree(android_sec_entries);
+		android_sec_entries = NULL;
+	}
 	kvfree(sit_i->sec_entries);
 	kvfree(sit_i->dirty_sentries_bitmap);


@@ -102,6 +102,8 @@ static inline void sanity_check_seg_type(struct f2fs_sb_info *sbi,
 #define CAP_SEGS_PER_SEC(sbi)					\
 	(SEGS_PER_SEC(sbi) -					\
 	BLKS_TO_SEGS(sbi, (sbi)->unusable_blocks_per_sec))
+#define GET_START_SEG_FROM_SEC(sbi, segno)			\
+	(rounddown(segno, SEGS_PER_SEC(sbi)))
 #define GET_SEC_FROM_SEG(sbi, segno)				\
 	(((segno) == -1) ? -1 : (segno) / SEGS_PER_SEC(sbi))
 #define GET_SEG_FROM_SEC(sbi, secno)				\
@@ -211,6 +213,16 @@ struct sec_entry {
 	unsigned int valid_blocks;	/* # of valid blocks in a section */
 };
 
+/*
+ * This is supposed to be in the above struct sec_entry from the below
+ * patch merged in 6.16-rc1, but applied to avoid ABI breakages in Android.
+ *
+ * deecd282bc39 "f2fs: add ckpt_valid_blocks to the section entry" in 6.16+
+ */
+struct android_sec_entry {
+	unsigned int ckpt_valid_blocks;	/* # of valid blocks last cp in a section */
+};
+
 #define MAX_SKIP_GC_COUNT			16
 
 struct revoke_entry {
@@ -329,6 +341,22 @@ static inline struct sec_entry *get_sec_entry(struct f2fs_sb_info *sbi,
 	return &sit_i->sec_entries[GET_SEC_FROM_SEG(sbi, segno)];
 }
 
+/*
+ * This is shared with all other mounts, but ensure only /data will
+ * get memory allocated when a large section is defined.
+ * Note, the below android_* are only applied to android16-6.12, since
+ * the original patch [1] adds ckpt_valid_blocks in struct sec_entry,
+ * which breaks ABI.
+ *
+ * [1] deecd282bc39 "f2fs: add ckpt_valid_blocks to the section entry" in 6.16+
+ */
+static struct android_sec_entry *android_sec_entries = NULL;
+
+static inline struct android_sec_entry *android_get_sec_entry(
+			struct f2fs_sb_info *sbi, unsigned int segno)
+{
+	return &android_sec_entries[GET_SEC_FROM_SEG(sbi, segno)];
+}
+
 static inline unsigned int get_valid_blocks(struct f2fs_sb_info *sbi,
 				unsigned int segno, bool use_section)
 {
@@ -345,22 +373,59 @@ static inline unsigned int get_valid_blocks(struct f2fs_sb_info *sbi,
 static inline unsigned int get_ckpt_valid_blocks(struct f2fs_sb_info *sbi,
 				unsigned int segno, bool use_section)
 {
-	if (use_section && __is_large_section(sbi)) {
-		unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
-		unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
-		unsigned int blocks = 0;
-		int i;
-
-		for (i = 0; i < SEGS_PER_SEC(sbi); i++, start_segno++) {
-			struct seg_entry *se = get_seg_entry(sbi, start_segno);
-
-			blocks += se->ckpt_valid_blocks;
-		}
-		return blocks;
-	}
-	return get_seg_entry(sbi, segno)->ckpt_valid_blocks;
+	if (use_section && __is_large_section(sbi))
+		return android_get_sec_entry(sbi, segno)->ckpt_valid_blocks;
+	else
+		return get_seg_entry(sbi, segno)->ckpt_valid_blocks;
 }
+
+static inline void set_ckpt_valid_blocks(struct f2fs_sb_info *sbi,
+				unsigned int segno)
+{
+	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
+	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
+	unsigned int blocks = 0;
+	int i;
+
+	for (i = 0; i < SEGS_PER_SEC(sbi); i++, start_segno++) {
+		struct seg_entry *se = get_seg_entry(sbi, start_segno);
+
+		blocks += se->ckpt_valid_blocks;
+	}
+	android_get_sec_entry(sbi, segno)->ckpt_valid_blocks = blocks;
+}
+
+#ifdef CONFIG_F2FS_CHECK_FS
+static inline void sanity_check_valid_blocks(struct f2fs_sb_info *sbi,
+				unsigned int segno)
+{
+	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
+	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
+	unsigned int blocks = 0;
+	int i;
+
+	for (i = 0; i < SEGS_PER_SEC(sbi); i++, start_segno++) {
+		struct seg_entry *se = get_seg_entry(sbi, start_segno);
+
+		blocks += se->ckpt_valid_blocks;
+	}
+
+	if (blocks != android_get_sec_entry(sbi, segno)->ckpt_valid_blocks) {
+		f2fs_err(sbi,
+			"Inconsistent ckpt valid blocks: "
+			"seg entry(%d) vs sec entry(%d) at secno %d",
+			blocks,
+			android_get_sec_entry(sbi, segno)->ckpt_valid_blocks,
+			secno);
+		f2fs_bug_on(sbi, 1);
+	}
+}
+#else
+static inline void sanity_check_valid_blocks(struct f2fs_sb_info *sbi,
+				unsigned int segno)
+{
+}
+#endif
 
 static inline void seg_info_from_raw_sit(struct seg_entry *se,
 					struct f2fs_sit_entry *rs)
 {
@@ -429,7 +494,6 @@ static inline void __set_free(struct f2fs_sb_info *sbi, unsigned int segno)
 	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
 	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
 	unsigned int next;
-	unsigned int usable_segs = f2fs_usable_segs_in_sec(sbi);
 
 	spin_lock(&free_i->segmap_lock);
 	clear_bit(segno, free_i->free_segmap);
@@ -437,7 +501,7 @@ static inline void __set_free(struct f2fs_sb_info *sbi, unsigned int segno)
 
 	next = find_next_bit(free_i->free_segmap,
 			start_segno + SEGS_PER_SEC(sbi), start_segno);
-	if (next >= start_segno + usable_segs) {
+	if (next >= start_segno + f2fs_usable_segs_in_sec(sbi)) {
 		clear_bit(secno, free_i->free_secmap);
 		free_i->free_sections++;
 	}
@@ -463,22 +527,36 @@ static inline void __set_test_and_free(struct f2fs_sb_info *sbi,
 	unsigned int secno = GET_SEC_FROM_SEG(sbi, segno);
 	unsigned int start_segno = GET_SEG_FROM_SEC(sbi, secno);
 	unsigned int next;
-	unsigned int usable_segs = f2fs_usable_segs_in_sec(sbi);
+	bool ret;
 
 	spin_lock(&free_i->segmap_lock);
-	if (test_and_clear_bit(segno, free_i->free_segmap)) {
-		free_i->free_segments++;
+	ret = test_and_clear_bit(segno, free_i->free_segmap);
+	if (!ret)
+		goto unlock_out;
 
-		if (!inmem && IS_CURSEC(sbi, secno))
-			goto skip_free;
-		next = find_next_bit(free_i->free_segmap,
-				start_segno + SEGS_PER_SEC(sbi), start_segno);
-		if (next >= start_segno + usable_segs) {
-			if (test_and_clear_bit(secno, free_i->free_secmap))
-				free_i->free_sections++;
-		}
-	}
-skip_free:
+	free_i->free_segments++;
+
+	if (!inmem && IS_CURSEC(sbi, secno))
+		goto unlock_out;
+
+	/* check large section */
+	next = find_next_bit(free_i->free_segmap,
+			start_segno + SEGS_PER_SEC(sbi), start_segno);
+	if (next < start_segno + f2fs_usable_segs_in_sec(sbi))
+		goto unlock_out;
+
+	ret = test_and_clear_bit(secno, free_i->free_secmap);
+	if (!ret)
+		goto unlock_out;
+
+	free_i->free_sections++;
+
+	if (GET_SEC_FROM_SEG(sbi, sbi->next_victim_seg[BG_GC]) == secno)
+		sbi->next_victim_seg[BG_GC] = NULL_SEGNO;
+	if (GET_SEC_FROM_SEG(sbi, sbi->next_victim_seg[FG_GC]) == secno)
+		sbi->next_victim_seg[FG_GC] = NULL_SEGNO;
+
+unlock_out:
 	spin_unlock(&free_i->segmap_lock);
 }
@@ -569,8 +647,14 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
 		if (unlikely(segno == NULL_SEGNO))
 			return false;
 
-		left_blocks = CAP_BLKS_PER_SEC(sbi) -
-				get_ckpt_valid_blocks(sbi, segno, true);
+		if (f2fs_lfs_mode(sbi) && __is_large_section(sbi)) {
+			left_blocks = CAP_BLKS_PER_SEC(sbi) -
+				SEGS_TO_BLKS(sbi, (segno - GET_START_SEG_FROM_SEC(sbi, segno))) -
+				CURSEG_I(sbi, i)->next_blkoff;
+		} else {
+			left_blocks = CAP_BLKS_PER_SEC(sbi) -
+				get_ckpt_valid_blocks(sbi, segno, true);
+		}
 
 		blocks = i <= CURSEG_COLD_DATA ? data_blocks : node_blocks;
 		if (blocks > left_blocks)
@@ -583,8 +667,15 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
 	if (unlikely(segno == NULL_SEGNO))
 		return false;
 
-	left_blocks = CAP_BLKS_PER_SEC(sbi) -
-			get_ckpt_valid_blocks(sbi, segno, true);
+	if (f2fs_lfs_mode(sbi) && __is_large_section(sbi)) {
+		left_blocks = CAP_BLKS_PER_SEC(sbi) -
+			SEGS_TO_BLKS(sbi, (segno - GET_START_SEG_FROM_SEC(sbi, segno))) -
+			CURSEG_I(sbi, CURSEG_HOT_DATA)->next_blkoff;
+	} else {
+		left_blocks = CAP_BLKS_PER_SEC(sbi) -
+			get_ckpt_valid_blocks(sbi, segno, true);
+	}
+
 	if (dent_blocks > left_blocks)
 		return false;
 	return true;


@@ -184,10 +184,17 @@ static unsigned int do_reclaim_caches(struct f2fs_sb_info *sbi,
 		if (!inode)
 			continue;
 
-		len = fi->donate_end - fi->donate_start + 1;
-		npages = npages < len ? 0 : npages - len;
-		invalidate_inode_pages2_range(inode->i_mapping,
+		inode_lock(inode);
+		if (!is_inode_flag_set(inode, FI_DONATE_FINISHED)) {
+			len = fi->donate_end - fi->donate_start + 1;
+			npages = npages < len ? 0 : npages - len;
+			invalidate_inode_pages2_range(inode->i_mapping,
 				fi->donate_start, fi->donate_end);
+			set_inode_flag(inode, FI_DONATE_FINISHED);
+		}
+		inode_unlock(inode);
+
 		iput(inode);
 		cond_resched();
 	}


@@ -1535,7 +1535,9 @@ int f2fs_inode_dirtied(struct inode *inode, bool sync)
 	}
 	spin_unlock(&sbi->inode_lock[DIRTY_META]);
 
-	if (!ret && f2fs_is_atomic_file(inode))
+	/* if atomic write is not committed, set inode w/ atomic dirty */
+	if (!ret && f2fs_is_atomic_file(inode) &&
+	    !is_inode_flag_set(inode, FI_ATOMIC_COMMITTED))
 		set_inode_flag(inode, FI_ATOMIC_DIRTIED);
 
 	return ret;
@@ -1810,26 +1812,32 @@ static int f2fs_statfs_project(struct super_block *sb,
 	limit = min_not_zero(dquot->dq_dqb.dqb_bsoftlimit,
 					dquot->dq_dqb.dqb_bhardlimit);
-	if (limit)
-		limit >>= sb->s_blocksize_bits;
+	limit >>= sb->s_blocksize_bits;
+
+	if (limit) {
+		uint64_t remaining = 0;
 
-	if (limit && buf->f_blocks > limit) {
 		curblock = (dquot->dq_dqb.dqb_curspace +
 			    dquot->dq_dqb.dqb_rsvspace) >> sb->s_blocksize_bits;
-		buf->f_blocks = limit;
-		buf->f_bfree = buf->f_bavail =
-			(buf->f_blocks > curblock) ?
-			 (buf->f_blocks - curblock) : 0;
+		if (limit > curblock)
+			remaining = limit - curblock;
+
+		buf->f_blocks = min(buf->f_blocks, limit);
+		buf->f_bfree = min(buf->f_bfree, remaining);
+		buf->f_bavail = min(buf->f_bavail, remaining);
 	}
 
 	limit = min_not_zero(dquot->dq_dqb.dqb_isoftlimit,
 			     dquot->dq_dqb.dqb_ihardlimit);
-	if (limit && buf->f_files > limit) {
-		buf->f_files = limit;
-		buf->f_ffree =
-			(buf->f_files > dquot->dq_dqb.dqb_curinodes) ?
-			 (buf->f_files - dquot->dq_dqb.dqb_curinodes) : 0;
+
+	if (limit) {
+		uint64_t remaining = 0;
+
+		if (limit > dquot->dq_dqb.dqb_curinodes)
+			remaining = limit - dquot->dq_dqb.dqb_curinodes;
+
+		buf->f_files = min(buf->f_files, limit);
+		buf->f_ffree = min(buf->f_ffree, remaining);
 	}
 
 	spin_unlock(&dquot->dq_dqb_lock);
@@ -1888,9 +1896,9 @@ static int f2fs_statfs(struct dentry *dentry, struct kstatfs *buf)
 	buf->f_fsid    = u64_to_fsid(id);
 
 #ifdef CONFIG_QUOTA
-	if (is_inode_flag_set(dentry->d_inode, FI_PROJ_INHERIT) &&
+	if (is_inode_flag_set(d_inode(dentry), FI_PROJ_INHERIT) &&
 			sb_has_quota_limits_enabled(sb, PRJQUOTA)) {
-		f2fs_statfs_project(sb, F2FS_I(dentry->d_inode)->i_projid, buf);
+		f2fs_statfs_project(sb, F2FS_I(d_inode(dentry))->i_projid, buf);
 	}
 #endif
 	return 0;
@@ -3723,6 +3731,7 @@ int f2fs_sanity_check_ckpt(struct f2fs_sb_info *sbi)
 	block_t user_block_count, valid_user_blocks;
 	block_t avail_node_count, valid_node_count;
 	unsigned int nat_blocks, nat_bits_bytes, nat_bits_blocks;
+	unsigned int sit_blk_cnt;
 	int i, j;
 
 	total = le32_to_cpu(raw_super->segment_count);
@@ -3834,6 +3843,13 @@ skip_cross:
 		return 1;
 	}
 
+	sit_blk_cnt = DIV_ROUND_UP(main_segs, SIT_ENTRY_PER_BLOCK);
+	if (sit_bitmap_size * 8 < sit_blk_cnt) {
+		f2fs_err(sbi, "Wrong bitmap size: sit: %u, sit_blk_cnt:%u",
+			 sit_bitmap_size, sit_blk_cnt);
+		return 1;
+	}
+
 	cp_pack_start_sum = __start_sum_addr(sbi);
 	cp_payload = __cp_payload(sbi);
 	if (cp_pack_start_sum < cp_payload + 1 ||
@@ -4310,14 +4326,35 @@ static void f2fs_record_error_work(struct work_struct *work)
 	f2fs_record_stop_reason(sbi);
 }
 
-static inline unsigned int get_first_zoned_segno(struct f2fs_sb_info *sbi)
+static inline unsigned int get_first_seq_zone_segno(struct f2fs_sb_info *sbi)
 {
 #ifdef CONFIG_BLK_DEV_ZONED
+	unsigned int zoneno, total_zones;
 	int devi;
 
-	for (devi = 0; devi < sbi->s_ndevs; devi++)
-		if (bdev_is_zoned(FDEV(devi).bdev))
-			return GET_SEGNO(sbi, FDEV(devi).start_blk);
+	if (!f2fs_sb_has_blkzoned(sbi))
+		return NULL_SEGNO;
+
+	for (devi = 0; devi < sbi->s_ndevs; devi++) {
+		if (!bdev_is_zoned(FDEV(devi).bdev))
+			continue;
+
+		total_zones = GET_ZONE_FROM_SEG(sbi, FDEV(devi).total_segments);
+
+		for (zoneno = 0; zoneno < total_zones; zoneno++) {
+			unsigned int segs, blks;
+
+			if (!f2fs_zone_is_seq(sbi, devi, zoneno))
+				continue;
+
+			segs = GET_SEG_FROM_SEC(sbi,
+					zoneno * sbi->secs_per_zone);
+			blks = SEGS_TO_BLKS(sbi, segs);
+			return GET_SEGNO(sbi, FDEV(devi).start_blk + blks);
+		}
+	}
 #endif
-	return 0;
+	return NULL_SEGNO;
 }
 
 static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
@@ -4354,6 +4391,14 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
 #endif
 
 	for (i = 0; i < max_devices; i++) {
+		if (max_devices == 1) {
+			FDEV(i).total_segments =
+				le32_to_cpu(raw_super->segment_count_main);
+			FDEV(i).start_blk = 0;
+			FDEV(i).end_blk = FDEV(i).total_segments *
+						BLKS_PER_SEG(sbi);
+		}
+
 		if (i == 0)
 			FDEV(0).bdev_file = sbi->sb->s_bdev_file;
 		else if (!RDEV(i).path[0])
@@ -4733,7 +4778,7 @@ try_onemore:
 	sbi->sectors_written_start = f2fs_get_sectors_written(sbi);
 
 	/* get segno of first zoned block device */
-	sbi->first_zoned_segno = get_first_zoned_segno(sbi);
+	sbi->first_zoned_segno = get_first_seq_zone_segno(sbi);
 
 	/* Read accumulated write IO statistics if exists */
 	seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE);


@@ -274,6 +274,13 @@ static ssize_t encoding_show(struct f2fs_attr *a,
 	return sysfs_emit(buf, "(none)\n");
 }
 
+static ssize_t encoding_flags_show(struct f2fs_attr *a,
+		struct f2fs_sb_info *sbi, char *buf)
+{
+	return sysfs_emit(buf, "%x\n",
+		le16_to_cpu(F2FS_RAW_SUPER(sbi)->s_encoding_flags));
+}
+
 static ssize_t mounted_time_sec_show(struct f2fs_attr *a,
 		struct f2fs_sb_info *sbi, char *buf)
 {
@@ -1158,6 +1165,7 @@ F2FS_GENERAL_RO_ATTR(features);
 F2FS_GENERAL_RO_ATTR(current_reserved_blocks);
 F2FS_GENERAL_RO_ATTR(unusable);
 F2FS_GENERAL_RO_ATTR(encoding);
+F2FS_GENERAL_RO_ATTR(encoding_flags);
 F2FS_GENERAL_RO_ATTR(mounted_time_sec);
 F2FS_GENERAL_RO_ATTR(main_blkaddr);
 F2FS_GENERAL_RO_ATTR(pending_discard);
@@ -1199,6 +1207,9 @@ F2FS_FEATURE_RO_ATTR(readonly);
 F2FS_FEATURE_RO_ATTR(compression);
 #endif
 F2FS_FEATURE_RO_ATTR(pin_file);
+#ifdef CONFIG_UNICODE
+F2FS_FEATURE_RO_ATTR(linear_lookup);
+#endif
 
 #define ATTR_LIST(name) (&f2fs_attr_##name.attr)
 static struct attribute *f2fs_attrs[] = {
@@ -1270,6 +1281,7 @@ static struct attribute *f2fs_attrs[] = {
 	ATTR_LIST(reserved_blocks),
 	ATTR_LIST(current_reserved_blocks),
 	ATTR_LIST(encoding),
+	ATTR_LIST(encoding_flags),
 	ATTR_LIST(mounted_time_sec),
 #ifdef CONFIG_F2FS_STAT_FS
 	ATTR_LIST(cp_foreground_calls),
@@ -1347,6 +1359,9 @@ static struct attribute *f2fs_feat_attrs[] = {
 	BASE_ATTR_LIST(compression),
 #endif
 	BASE_ATTR_LIST(pin_file),
+#ifdef CONFIG_UNICODE
+	BASE_ATTR_LIST(linear_lookup),
+#endif
 	NULL,
 };
 ATTRIBUTE_GROUPS(f2fs_feat);

(file diff suppressed because it is too large)


@@ -3,3 +3,7 @@
type 'enum prs_errcode' changed
enumerator 'PERR_REMOTE' (10) was added
type 'struct sched_dl_entity' changed
member 'unsigned int dl_server_idle:1' was added


@@ -710,6 +710,7 @@
drm_kms_helper_poll_enable
drm_kms_helper_poll_fini
drm_kms_helper_poll_init
drm_match_cea_mode
drmm_mode_config_init
drm_mode_config_cleanup
drm_mode_config_reset
@@ -1008,6 +1009,9 @@
hex2bin
hex_asc
hex_dump_to_buffer
hid_hw_start
hid_hw_stop
hidinput_calc_abs_res
__hid_register_driver
hid_unregister_driver
high_memory
@@ -1422,6 +1426,8 @@
napi_gro_receive
__napi_schedule
napi_schedule_prep
nbcon_device_release
nbcon_device_try_acquire
__ndelay
netdev_alert
__netdev_alloc_skb
@@ -2103,6 +2109,8 @@
smpboot_register_percpu_thread
smp_call_function
smp_call_function_single
snd_card_disconnect
snd_card_free_when_closed
snd_card_ref
snd_card_register
snd_card_rw_proc_new
@@ -2155,6 +2163,7 @@
snd_soc_dai_set_tdm_slot
snd_soc_dapm_get_enum_double
snd_soc_dapm_put_enum_double
snd_soc_get_dai_name
snd_soc_get_volsw
snd_soc_get_volsw_range
snd_soc_info_enum_double
@@ -2169,6 +2178,7 @@
snd_soc_of_parse_audio_simple_widgets
snd_soc_of_parse_card_name
snd_soc_of_parse_tdm_slot
snd_soc_of_put_dai_link_codecs
snd_soc_pm_ops
snd_soc_put_volsw
snd_soc_put_volsw_range
@@ -2183,12 +2193,16 @@
snd_timer_stop
snprintf
__sock_create
sock_kfree_s
sock_kmalloc
sock_kzfree_s
sock_release
sock_wfree
sort
spi_add_device
__spi_alloc_controller
spi_alloc_device
spi_bus_type
spi_controller_resume
spi_controller_suspend
spi_finalize_current_message

(file diff suppressed because it is too large)


@@ -82,6 +82,8 @@
__tracepoint_android_vh_binder_data_preset
__traceiter_android_rvh_init_binder_logs
__tracepoint_android_rvh_init_binder_logs
__traceiter_android_vh_set_tsk_need_resched_lazy
__tracepoint_android_vh_set_tsk_need_resched_lazy
__traceiter_android_rvh_vfree_bypass
__traceiter_android_rvh_vmalloc_node_bypass
__traceiter_android_vh_adjust_alloc_flags


@@ -210,6 +210,7 @@
config_item_set_name
console_lock
console_suspend_enabled
console_trylock
console_unlock
__const_udelay
consume_skb
@@ -261,6 +262,7 @@
csum_partial
csum_tcpudp_nofold
_ctype
current_work
deactivate_locked_super
debugfs_attr_read
debugfs_attr_write
@@ -571,6 +573,7 @@
dmaenginem_async_device_register
dma_fence_add_callback
dma_fence_array_ops
dma_fence_chain_init
dma_fence_context_alloc
dma_fence_default_wait
dma_fence_free
@@ -578,6 +581,7 @@
dma_fence_init
dma_fence_release
dma_fence_remove_callback
dma_fence_set_deadline
dma_fence_signal
dma_fence_signal_locked
dma_fence_wait_timeout
@@ -607,6 +611,7 @@
dma_release_channel
dma_request_chan
__dma_request_channel
dma_resv_get_singleton
dma_set_coherent_mask
dma_set_mask
__dma_sync_sg_for_cpu
@@ -638,15 +643,25 @@
driver_set_override
driver_unregister
drm_add_edid_modes
drm_add_modes_noedid
drm_analog_tv_mode
drm_any_plane_has_format
drm_atomic_add_affected_connectors
drm_atomic_add_affected_planes
drm_atomic_add_encoder_bridges
drm_atomic_bridge_chain_check
drm_atomic_bridge_chain_disable
drm_atomic_bridge_chain_enable
drm_atomic_bridge_chain_post_disable
drm_atomic_bridge_chain_pre_enable
drm_atomic_commit
drm_atomic_get_connector_state
drm_atomic_get_crtc_state
drm_atomic_get_new_bridge_state
drm_atomic_get_new_connector_for_encoder
drm_atomic_get_new_crtc_for_encoder
drm_atomic_get_new_private_obj_state
drm_atomic_get_old_crtc_for_encoder
drm_atomic_get_old_private_obj_state
drm_atomic_get_plane_state
drm_atomic_get_private_obj_state
@@ -676,6 +691,7 @@
drm_atomic_helper_crtc_reset
drm_atomic_helper_damage_iter_init
drm_atomic_helper_damage_iter_next
__drm_atomic_helper_disable_plane
drm_atomic_helper_disable_plane
drm_atomic_helper_disable_planes_on_crtc
drm_atomic_helper_page_flip
@@ -686,18 +702,26 @@
__drm_atomic_helper_plane_reset
drm_atomic_helper_plane_reset
__drm_atomic_helper_private_obj_duplicate_state
__drm_atomic_helper_set_config
drm_atomic_helper_set_config
drm_atomic_helper_shutdown
drm_atomic_helper_update_plane
drm_atomic_helper_wait_for_vblanks
drm_atomic_nonblocking_commit
drm_atomic_normalize_zpos
drm_atomic_private_obj_fini
drm_atomic_private_obj_init
drm_atomic_set_crtc_for_connector
drm_atomic_set_crtc_for_plane
drm_atomic_set_fb_for_plane
drm_atomic_set_mode_prop_for_crtc
drm_atomic_state_alloc
drm_atomic_state_clear
__drm_atomic_state_free
drm_bridge_add
drm_bridge_attach
drm_bridge_chain_mode_set
drm_bridge_chain_mode_valid
drm_bridge_detect
drm_bridge_edid_read
drm_bridge_hpd_disable
@@ -706,6 +730,20 @@
drm_bridge_is_panel
drm_bridge_remove
drm_bus_flags_from_videomode
drm_calc_timestamping_constants
drm_client_buffer_vmap
drm_client_buffer_vunmap
drm_client_dev_hotplug
drm_client_framebuffer_create
drm_client_framebuffer_delete
drm_client_init
drm_client_modeset_commit
drm_client_modeset_commit_locked
drm_client_modeset_dpms
drm_client_modeset_probe
drm_client_register
drm_client_release
drm_client_rotation
drm_compat_ioctl
drm_connector_atomic_hdr_metadata_equal
drm_connector_attach_colorspace_property
@@ -718,7 +756,9 @@
drm_connector_list_iter_begin
drm_connector_list_iter_end
drm_connector_list_iter_next
drm_connector_list_update
drm_connector_register
drm_connector_set_orientation_from_panel
drm_connector_set_panel_orientation
drm_connector_unregister
drm_connector_update_edid_property
@@ -730,6 +770,7 @@
drm_crtc_commit_wait
drm_crtc_handle_vblank
drm_crtc_init_with_planes
drm_crtc_next_vblank_start
drm_crtc_send_vblank_event
drm_crtc_vblank_count
drm_crtc_vblank_get
@@ -737,6 +778,7 @@
drm_crtc_vblank_off
drm_crtc_vblank_on
drm_crtc_vblank_put
drm_crtc_vblank_reset
drm_crtc_wait_one_vblank
__drm_debug
drm_default_rgb_quant_range
@@ -744,6 +786,7 @@
drm_detect_monitor_audio
drm_dev_alloc
__drm_dev_dbg
drm_dev_has_vblank
drm_dev_printk
drm_dev_put
drm_dev_register
@@ -752,13 +795,17 @@
drm_display_mode_from_cea_vic
drm_display_mode_from_videomode
drm_display_mode_to_videomode
drm_driver_color_mode_format
drm_driver_legacy_fb_format
drm_edid_connector_add_modes
drm_edid_connector_update
drm_edid_dup
drm_edid_duplicate
drm_edid_free
drm_edid_get_monitor_name
drm_edid_override_connector_update
drm_edid_raw
drm_edid_read
drm_edid_read_custom
drm_edid_read_ddc
drm_edid_valid
@@ -768,6 +815,10 @@
drm_format_info
drm_format_info_block_height
drm_format_info_block_width
drm_format_info_bpp
drm_format_info_min_pitch
drm_framebuffer_cleanup
drm_framebuffer_init
drm_gem_create_mmap_offset
drm_gem_fb_create
drm_gem_fb_get_obj
@@ -779,9 +830,12 @@
drm_gem_object_release
drm_gem_plane_helper_prepare_fb
drm_gem_prime_import
drm_gem_prime_mmap
drm_gem_private_object_init
drm_gem_vmap_unlocked
drm_gem_vm_close
drm_gem_vm_open
drm_gem_vunmap_unlocked
drm_get_connector_status_name
drm_get_edid
drm_get_format_info
@@ -796,9 +850,13 @@
drm_kms_helper_hotplug_event
drm_kms_helper_poll_fini
drm_kms_helper_poll_init
__drmm_add_action_or_reset
drm_master_internal_acquire
drm_master_internal_release
drm_match_cea_mode
drmm_connector_hdmi_init
drmm_connector_init
__drmm_encoder_alloc
drmm_kmalloc
drmm_mode_config_init
drm_mode_config_cleanup
@@ -807,27 +865,42 @@
drm_mode_config_reset
drm_mode_copy
drm_mode_create
drm_mode_create_from_cmdline_mode
drm_mode_create_hdmi_colorspace_property
drm_mode_debug_printmodeline
drm_mode_destroy
drm_mode_duplicate
drm_mode_equal
drm_mode_find_dmt
drm_mode_get_hv_timing
drm_mode_init
drm_mode_is_420_also
drm_mode_is_420_only
drm_mode_object_get
drm_mode_object_put
drm_mode_probed_add
drm_mode_prune_invalid
drm_modeset_acquire_fini
drm_modeset_acquire_init
drm_modeset_backoff
drm_mode_set_config_internal
drm_mode_set_crtcinfo
drm_modeset_drop_locks
drm_modeset_lock
drm_modeset_lock_all
drm_modeset_lock_all_ctx
drm_modeset_lock_single_interruptible
drm_mode_set_name
drm_modeset_unlock
drm_modeset_unlock_all
drm_mode_sort
drm_mode_validate_driver
drm_mode_validate_size
drm_mode_validate_ycbcr420
drm_mode_vrefresh
__drmm_simple_encoder_alloc
drm_object_attach_property
drm_object_property_get_default_value
drm_object_property_set_value
drm_of_component_match_add
drm_of_encoder_active_endpoint
@@ -858,24 +931,37 @@
drm_plane_create_scaling_filter_property
drm_plane_create_zpos_immutable_property
drm_plane_create_zpos_property
drm_plane_get_damage_clips
drm_plane_get_damage_clips_count
drm_poll
drm_prime_gem_destroy
drm_prime_get_contiguous_size
drm_printf
__drm_printfn_dbg
drm_probe_ddc
drm_property_blob_get
drm_property_blob_put
drm_property_create_blob
drm_property_create_enum
drm_property_create_range
drm_property_destroy
drm_property_replace_blob
drm_read
drm_rect_intersect
drm_release
drm_self_refresh_helper_cleanup
drm_self_refresh_helper_init
drm_set_preferred_mode
drm_simple_encoder_init
drm_sysfs_connector_hotplug_event
drm_sysfs_connector_property_event
drm_sysfs_hotplug_event
__drm_universal_plane_alloc
drm_universal_plane_init
drm_vblank_init
drm_warn_on_modeset_not_all_locked
drm_writeback_cleanup_job
drm_writeback_prepare_job
drop_reasons_register_subsys
drop_reasons_unregister_subsys
dump_stack
@@ -926,6 +1012,8 @@
extcon_set_state_sync
fd_install
fget
file_update_time
file_write_and_wait_range
filp_close
_find_first_bit
_find_first_zero_bit
@@ -951,7 +1039,9 @@
flush_work
__flush_workqueue
__folio_lock
folio_mkclean
__folio_put
folio_unlock
follow_pfnmap_end
follow_pfnmap_start
__fortify_panic
@@ -974,6 +1064,7 @@
fwnode_get_phy_node
fwnode_graph_get_endpoint_by_id
fwnode_graph_get_next_endpoint
fwnode_graph_get_remote_endpoint
fwnode_handle_get
fwnode_irq_get_byname
fwnode_mdio_find_device
@@ -1103,6 +1194,8 @@
__hw_addr_init
__hw_addr_sync
__hw_addr_unsync
hwrng_register
hwrng_unregister
i2c_adapter_type
i2c_add_adapter
i2c_add_numbered_adapter
@@ -1120,6 +1213,7 @@
i2c_get_match_data
i2c_match_id
i2c_new_ancillary_device
i2c_new_client_device
i2c_new_dummy_device
i2c_put_adapter
i2c_put_dma_safe_msg_buf
@@ -1407,6 +1501,7 @@
media_devnode_create
media_devnode_remove
media_entity_find_link
media_entity_get_fwnode_pad
media_entity_pads_init
media_entity_pipeline
media_entity_remove_links
@@ -1430,6 +1525,7 @@
memremap
mem_section
memset
memset64
__memset_io
memstart_addr
memunmap
@@ -1559,6 +1655,7 @@
nla_strscpy
__nla_validate
nonseekable_open
noop_dirty_folio
noop_llseek
noop_qdisc
nr_cpu_ids
@@ -2058,6 +2155,7 @@
register_restart_handler
__register_rpmsg_driver
register_syscore_ops
register_sysrq_key
register_tcf_proto_ops
register_virtio_device
__register_virtio_driver
@@ -2380,6 +2478,7 @@
snd_soc_daifmt_clock_provider_from_bitmap
snd_soc_daifmt_parse_clock_provider_raw
snd_soc_daifmt_parse_format
snd_soc_dai_is_dummy
snd_soc_dai_name_get
snd_soc_dai_set_bclk_ratio
snd_soc_dai_set_fmt
@@ -2664,6 +2763,7 @@
unregister_qdisc
unregister_reboot_notifier
unregister_rpmsg_driver
unregister_sysrq_key
unregister_tcf_proto_ops
unregister_virtio_device
unregister_virtio_driver
@@ -2744,10 +2844,14 @@
v4l2_ctrl_new_std_menu_items
v4l2_ctrl_poll
__v4l2_ctrl_s_ctrl
__v4l2_ctrl_s_ctrl_compound
__v4l2_ctrl_s_ctrl_int64
v4l2_ctrl_subdev_log_status
v4l2_ctrl_subdev_subscribe_event
v4l2_ctrl_subscribe_event
v4l2_ctrl_type_op_equal
v4l2_ctrl_type_op_init
v4l2_ctrl_type_op_validate
v4l2_device_register
__v4l2_device_register_subdev
__v4l2_device_register_subdev_nodes
@@ -2894,6 +2998,7 @@
video_device_release
video_device_release_empty
video_firmware_drivers_only
video_get_options
video_ioctl2
videomode_from_timing
__video_register_device


@@ -1,4 +1,5 @@
[abi_symbol_list]
activate_task
add_cpu
add_timer
add_timer_on
@@ -34,6 +35,7 @@
arc4_setkey
__arch_copy_from_user
__arch_copy_to_user
arch_freq_scale
arch_timer_read_counter
argv_free
argv_split
@@ -46,7 +48,9 @@
atomic_notifier_chain_register
atomic_notifier_chain_unregister
autoremove_wake_function
available_idle_cpu
backlight_device_set_brightness
balance_push_callback
base64_decode
bcmp
bin2hex
@@ -59,6 +63,7 @@
__bitmap_or
bitmap_parse
bitmap_parselist
bitmap_parse_user
bitmap_print_to_pagebuf
__bitmap_set
bitmap_to_arr32
@@ -80,6 +85,7 @@
bpf_trace_run1
bpf_trace_run10
bpf_trace_run11
bpf_trace_run12
bpf_trace_run2
bpf_trace_run3
bpf_trace_run4
@@ -188,12 +194,20 @@
_copy_from_iter
__copy_overflow
_copy_to_iter
__cpu_active_mask
cpu_all_bits
cpu_bit_bitmap
cpu_busy_with_softirqs
cpufreq_add_update_util_hook
cpufreq_cpu_get
cpufreq_cpu_get_raw
cpufreq_cpu_put
cpufreq_disable_fast_switch
cpufreq_driver_fast_switch
cpufreq_driver_resolve_freq
__cpufreq_driver_target
cpufreq_driver_target
cpufreq_enable_fast_switch
cpufreq_freq_attr_scaling_available_freqs
cpufreq_freq_transition_begin
cpufreq_freq_transition_end
@@ -203,13 +217,18 @@
cpufreq_get
cpufreq_get_driver_data
cpufreq_get_policy
cpufreq_policy_transition_delay_us
cpufreq_quick_get
cpufreq_quick_get_max
cpufreq_register_driver
cpufreq_register_governor
cpufreq_register_notifier
cpufreq_remove_update_util_hook
cpufreq_table_index_unsorted
cpufreq_this_cpu_can_update
cpufreq_unregister_driver
cpufreq_update_limits
cpufreq_update_util_data
cpu_hotplug_disable
cpu_hotplug_enable
__cpuhp_remove_state
@@ -228,7 +247,9 @@
cpu_pm_unregister_notifier
__cpu_possible_mask
__cpu_present_mask
cpupri_find_fitness
cpu_scale
cpuset_cpus_allowed
cpus_read_lock
cpus_read_unlock
cpu_subsys
@@ -262,6 +283,7 @@
csum_tcpudp_nofold
_ctype
datagram_poll
deactivate_task
debugfs_attr_read
debugfs_attr_write
debugfs_attr_write_signed
@@ -1028,11 +1050,17 @@
get_reclaim_params
get_sg_io_hdr
__get_task_comm
get_task_cred
get_task_mm
get_unused_fd_flags
get_user_pages
get_user_pages_fast
get_vaddr_frames
glob_match
gov_attr_set_get
gov_attr_set_init
gov_attr_set_put
governor_sysfs_ops
gpiochip_generic_config
gpiochip_generic_free
gpiochip_generic_request
@@ -1074,6 +1102,7 @@
handle_simple_irq
handle_sysrq
hashlen_string
have_governor_per_policy
hdmi_avi_infoframe_pack
hdmi_drm_infoframe_init
hdmi_infoframe_pack
@@ -1092,6 +1121,7 @@
__hw_addr_init
__hw_addr_sync
__hw_addr_unsync
hw_pressure
hwrng_register
hwrng_unregister
i2c_adapter_type
@@ -1152,6 +1182,12 @@
ida_alloc_range
ida_destroy
ida_free
idle_inject_get_duration
idle_inject_register
idle_inject_set_duration
idle_inject_set_latency
idle_inject_start
idle_inject_stop
idr_alloc
idr_alloc_cyclic
idr_alloc_u32
@@ -1282,7 +1318,9 @@
irq_set_irq_type
irq_set_irq_wake
irq_to_desc
irq_work_queue
irq_work_run
irq_work_sync
is_vmalloc_addr
jiffies
jiffies_to_msecs
@@ -1301,6 +1339,7 @@
kernel_sendmsg
kernfs_find_and_get_ns
kernfs_notify
kernfs_path_from_node
kernfs_put
key_put
keyring_alloc
@@ -1349,6 +1388,7 @@
kobj_sysfs_ops
krealloc_noprof
ksize
ksoftirqd
kstat
kstrdup
kstrndup
@@ -1364,9 +1404,11 @@
kstrtou8_from_user
kstrtouint
kstrtouint_from_user
kstrtoul_from_user
kstrtoull
kstrtoull_from_user
kthread_bind
kthread_bind_mask
kthread_cancel_delayed_work_sync
kthread_cancel_work_sync
kthread_complete_and_exit
@@ -1432,6 +1474,8 @@
lru_cache_disable
lru_disable_count
mac_pton
mas_find
max_load_balance_interval
mbox_chan_received_data
mbox_chan_txdone
mbox_controller_register
@@ -1494,6 +1538,7 @@
__mmap_lock_do_trace_released
__mmap_lock_do_trace_start_locking
__mmdrop
mmput
mod_delayed_work_on
mod_node_page_state
mod_timer
@@ -1564,6 +1609,7 @@
noop_llseek
nr_cpu_ids
nr_irqs
ns_capable
ns_capable_noaudit
nsecs_to_jiffies
nsec_to_clock_t
@@ -1880,6 +1926,7 @@
prepare_to_wait_event
print_hex_dump
_printk
_printk_deferred
probe_irq_off
probe_irq_on
proc_create
@@ -1887,6 +1934,7 @@
proc_create_single_data
proc_dointvec
proc_dostring
proc_douintvec_minmax
proc_mkdir
proc_mkdir_data
proc_remove
@@ -1896,6 +1944,7 @@
pskb_expand_head
__pskb_pull_tail
___pskb_trim
push_cpu_stop
__put_cred
put_device
put_iova_domain
@@ -1929,6 +1978,8 @@
_raw_spin_lock_bh
_raw_spin_lock_irq
_raw_spin_lock_irqsave
raw_spin_rq_lock_nested
raw_spin_rq_unlock
_raw_spin_trylock
_raw_spin_unlock
_raw_spin_unlock_bh
@@ -2045,6 +2096,7 @@
__request_percpu_irq
__request_region
request_threaded_irq
resched_curr
reserve_iova
reset_control_assert
reset_control_bulk_assert
@@ -2065,6 +2117,7 @@
__rht_bucket_nested
rht_bucket_nested
rht_bucket_nested_insert
root_task_group
round_jiffies
round_jiffies_relative
round_jiffies_up
@@ -2085,14 +2138,19 @@
rtnl_lock
rtnl_trylock
rtnl_unlock
runqueues
sbitmap_weight
sched_clock
sched_feat_keys
sched_prio_to_weight
sched_prio_to_wmult
sched_setattr_nocheck
sched_set_fifo
sched_set_normal
sched_setscheduler
sched_setscheduler_nocheck
sched_show_task
sched_uclamp_used
schedule
schedule_timeout
schedule_timeout_idle
@@ -2135,6 +2193,7 @@
set_page_dirty_lock
__SetPageMovable
set_reclaim_params
set_task_cpu
set_user_nice
sg_alloc_table
sg_alloc_table_from_pages_segment
@@ -2332,9 +2391,13 @@
__srcu_read_unlock
sscanf
__stack_chk_fail
static_key_count
static_key_disable
static_key_enable
static_key_slow_dec
static_key_slow_inc
stop_machine
stop_one_cpu_nowait
strcasecmp
strchr
strchrnul
@@ -2375,6 +2438,8 @@
syscon_regmap_lookup_by_compatible
syscon_regmap_lookup_by_phandle
syscon_regmap_lookup_by_phandle_args
sysctl_sched_base_slice
sysctl_sched_features
sysfs_add_file_to_group
sysfs_add_link_to_group
sysfs_chmod_file
@@ -2413,6 +2478,8 @@
tasklet_setup
tasklet_unlock_wait
__task_pid_nr_ns
__task_rq_lock
task_rq_lock
tcpci_get_tcpm_port
tcpci_irq
tcpci_register_port
@@ -2445,6 +2512,7 @@
thermal_zone_get_zone_by_name
this_cpu_has_cap
thread_group_cputime_adjusted
tick_nohz_get_idle_calls_cpu
time64_to_tm
timer_delete
timer_delete_sync
@@ -2461,25 +2529,67 @@
trace_event_raw_init
trace_event_reg
trace_handle_return
__traceiter_android_rvh_attach_entity_load_avg
__traceiter_android_rvh_can_migrate_task
__traceiter_android_rvh_cgroup_force_kthread_migration
__traceiter_android_rvh_check_preempt_wakeup_fair
__traceiter_android_rvh_cpu_overutilized
__traceiter_android_rvh_dequeue_task
__traceiter_android_rvh_dequeue_task_fair
__traceiter_android_rvh_detach_entity_load_avg
__traceiter_android_rvh_do_read_fault
__traceiter_android_rvh_enqueue_task
__traceiter_android_rvh_enqueue_task_fair
__traceiter_android_rvh_find_lowest_rq
__traceiter_android_rvh_hw_protection_shutdown
__traceiter_android_rvh_iommu_alloc_insert_iova
__traceiter_android_rvh_iommu_iovad_init_alloc_algo
__traceiter_android_rvh_iommu_limit_align_shift
__traceiter_android_rvh_irqs_disable
__traceiter_android_rvh_irqs_enable
__traceiter_android_rvh_madvise_pageout_begin
__traceiter_android_rvh_madvise_pageout_end
__traceiter_android_rvh_mapping_shrinkable
__traceiter_android_rvh_meminfo_proc_show
__traceiter_android_rvh_post_init_entity_util_avg
__traceiter_android_rvh_preempt_disable
__traceiter_android_rvh_preempt_enable
__traceiter_android_rvh_reclaim_folio_list
__traceiter_android_rvh_remove_entity_load_avg
__traceiter_android_rvh_rtmutex_prepare_setprio
__traceiter_android_rvh_sched_newidle_balance
__traceiter_android_rvh_sched_setaffinity
__traceiter_android_rvh_select_task_rq_fair
__traceiter_android_rvh_select_task_rq_rt
__traceiter_android_rvh_set_cpus_allowed_by_task
__traceiter_android_rvh_set_iowait
__traceiter_android_rvh_setscheduler
__traceiter_android_rvh_setscheduler_prio
__traceiter_android_rvh_set_task_comm
__traceiter_android_rvh_set_task_cpu
__traceiter_android_rvh_set_user_nice_locked
__traceiter_android_rvh_try_to_wake_up_success
__traceiter_android_rvh_uclamp_eff_get
__traceiter_android_rvh_ufs_complete_init
__traceiter_android_rvh_ufs_reprogram_all_keys
__traceiter_android_rvh_update_blocked_fair
__traceiter_android_rvh_update_load_avg
__traceiter_android_rvh_update_rt_rq_load_avg
__traceiter_android_rvh_util_est_update
__traceiter_android_rvh_util_fits_cpu
__traceiter_android_rvh_vmscan_kswapd_done
__traceiter_android_rvh_vmscan_kswapd_wake
__traceiter_android_trigger_vendor_lmk_kill
__traceiter_android_vh_arch_set_freq_scale
__traceiter_android_vh_binder_proc_transaction_finish
__traceiter_android_vh_binder_restore_priority
__traceiter_android_vh_binder_set_priority
__traceiter_android_vh_calculate_totalreserve_pages
__traceiter_android_vh_check_new_page
__traceiter_android_vh_cpu_idle_enter
__traceiter_android_vh_cpu_idle_exit
__traceiter_android_vh_dump_throttled_rt_tasks
__traceiter_android_vh_dup_task_struct
__traceiter_android_vh_early_resume_begin
__traceiter_android_vh_enable_thermal_genl_check
__traceiter_android_vh_filemap_get_folio
@@ -2489,12 +2599,16 @@
__traceiter_android_vh_mm_compaction_end
__traceiter_android_vh_mm_kcompactd_cpu_online
__traceiter_android_vh_post_alloc_hook
__traceiter_android_vh_prio_inheritance
__traceiter_android_vh_prio_restore
__traceiter_android_vh_resume_end
__traceiter_android_vh_rmqueue
__traceiter_android_vh_scheduler_tick
__traceiter_android_vh_setscheduler_uclamp
__traceiter_android_vh_si_meminfo_adjust
__traceiter_android_vh_sysrq_crash
__traceiter_android_vh_tune_swappiness
__traceiter_android_vh_uclamp_validate
__traceiter_android_vh_ufs_check_int_errors
__traceiter_android_vh_ufs_compl_command
__traceiter_android_vh_ufs_fill_prdt
@@ -2504,6 +2618,7 @@
__traceiter_android_vh_ufs_send_uic_command
__traceiter_android_vh_ufs_update_sdev
__traceiter_android_vh_ufs_update_sysfs
__traceiter_android_vh_use_amu_fie
__traceiter_clock_set_rate
__traceiter_cma_alloc_finish
__traceiter_cma_alloc_start
@@ -2518,7 +2633,16 @@
__traceiter_mmap_lock_start_locking
__traceiter_mm_vmscan_direct_reclaim_begin
__traceiter_mm_vmscan_direct_reclaim_end
__traceiter_pelt_cfs_tp
__traceiter_pelt_dl_tp
__traceiter_pelt_irq_tp
__traceiter_pelt_rt_tp
__traceiter_pelt_se_tp
__traceiter_sched_cpu_capacity_tp
__traceiter_sched_overutilized_tp
__traceiter_sched_switch
__traceiter_sched_util_est_cfs_tp
__traceiter_sched_util_est_se_tp
__traceiter_sched_wakeup
__traceiter_softirq_entry
__traceiter_softirq_exit
@@ -2526,25 +2650,67 @@
__traceiter_workqueue_execute_end
__traceiter_workqueue_execute_start
trace_output_call
__tracepoint_android_rvh_attach_entity_load_avg
__tracepoint_android_rvh_can_migrate_task
__tracepoint_android_rvh_cgroup_force_kthread_migration
__tracepoint_android_rvh_check_preempt_wakeup_fair
__tracepoint_android_rvh_cpu_overutilized
__tracepoint_android_rvh_dequeue_task
__tracepoint_android_rvh_dequeue_task_fair
__tracepoint_android_rvh_detach_entity_load_avg
__tracepoint_android_rvh_do_read_fault
__tracepoint_android_rvh_enqueue_task
__tracepoint_android_rvh_enqueue_task_fair
__tracepoint_android_rvh_find_lowest_rq
__tracepoint_android_rvh_hw_protection_shutdown
__tracepoint_android_rvh_iommu_alloc_insert_iova
__tracepoint_android_rvh_iommu_iovad_init_alloc_algo
__tracepoint_android_rvh_iommu_limit_align_shift
__tracepoint_android_rvh_irqs_disable
__tracepoint_android_rvh_irqs_enable
__tracepoint_android_rvh_madvise_pageout_begin
__tracepoint_android_rvh_madvise_pageout_end
__tracepoint_android_rvh_mapping_shrinkable
__tracepoint_android_rvh_meminfo_proc_show
__tracepoint_android_rvh_post_init_entity_util_avg
__tracepoint_android_rvh_preempt_disable
__tracepoint_android_rvh_preempt_enable
__tracepoint_android_rvh_reclaim_folio_list
__tracepoint_android_rvh_remove_entity_load_avg
__tracepoint_android_rvh_rtmutex_prepare_setprio
__tracepoint_android_rvh_sched_newidle_balance
__tracepoint_android_rvh_sched_setaffinity
__tracepoint_android_rvh_select_task_rq_fair
__tracepoint_android_rvh_select_task_rq_rt
__tracepoint_android_rvh_set_cpus_allowed_by_task
__tracepoint_android_rvh_set_iowait
__tracepoint_android_rvh_setscheduler
__tracepoint_android_rvh_setscheduler_prio
__tracepoint_android_rvh_set_task_comm
__tracepoint_android_rvh_set_task_cpu
__tracepoint_android_rvh_set_user_nice_locked
__tracepoint_android_rvh_try_to_wake_up_success
__tracepoint_android_rvh_uclamp_eff_get
__tracepoint_android_rvh_ufs_complete_init
__tracepoint_android_rvh_ufs_reprogram_all_keys
__tracepoint_android_rvh_update_blocked_fair
__tracepoint_android_rvh_update_load_avg
__tracepoint_android_rvh_update_rt_rq_load_avg
__tracepoint_android_rvh_util_est_update
__tracepoint_android_rvh_util_fits_cpu
__tracepoint_android_rvh_vmscan_kswapd_done
__tracepoint_android_rvh_vmscan_kswapd_wake
__tracepoint_android_trigger_vendor_lmk_kill
__tracepoint_android_vh_arch_set_freq_scale
__tracepoint_android_vh_binder_proc_transaction_finish
__tracepoint_android_vh_binder_restore_priority
__tracepoint_android_vh_binder_set_priority
__tracepoint_android_vh_calculate_totalreserve_pages
__tracepoint_android_vh_check_new_page
__tracepoint_android_vh_cpu_idle_enter
__tracepoint_android_vh_cpu_idle_exit
__tracepoint_android_vh_dump_throttled_rt_tasks
__tracepoint_android_vh_dup_task_struct
__tracepoint_android_vh_early_resume_begin
__tracepoint_android_vh_enable_thermal_genl_check
__tracepoint_android_vh_filemap_get_folio
@@ -2554,12 +2720,16 @@
__tracepoint_android_vh_mm_compaction_end
__tracepoint_android_vh_mm_kcompactd_cpu_online
__tracepoint_android_vh_post_alloc_hook
__tracepoint_android_vh_prio_inheritance
__tracepoint_android_vh_prio_restore
__tracepoint_android_vh_resume_end
__tracepoint_android_vh_rmqueue
__tracepoint_android_vh_scheduler_tick
__tracepoint_android_vh_setscheduler_uclamp
__tracepoint_android_vh_si_meminfo_adjust
__tracepoint_android_vh_sysrq_crash
__tracepoint_android_vh_tune_swappiness
__tracepoint_android_vh_uclamp_validate
__tracepoint_android_vh_ufs_check_int_errors
__tracepoint_android_vh_ufs_compl_command
__tracepoint_android_vh_ufs_fill_prdt
@@ -2569,6 +2739,7 @@
__tracepoint_android_vh_ufs_send_uic_command
__tracepoint_android_vh_ufs_update_sdev
__tracepoint_android_vh_ufs_update_sysfs
__tracepoint_android_vh_use_amu_fie
__tracepoint_clock_set_rate
__tracepoint_cma_alloc_finish
__tracepoint_cma_alloc_start
@@ -2583,9 +2754,18 @@
__tracepoint_mmap_lock_start_locking
__tracepoint_mm_vmscan_direct_reclaim_begin
__tracepoint_mm_vmscan_direct_reclaim_end
__tracepoint_pelt_cfs_tp
__tracepoint_pelt_dl_tp
__tracepoint_pelt_irq_tp
__tracepoint_pelt_rt_tp
__tracepoint_pelt_se_tp
tracepoint_probe_register
tracepoint_probe_unregister
__tracepoint_sched_cpu_capacity_tp
__tracepoint_sched_overutilized_tp
__tracepoint_sched_switch
__tracepoint_sched_util_est_cfs_tp
__tracepoint_sched_util_est_se_tp
__tracepoint_sched_wakeup
__tracepoint_softirq_entry
__tracepoint_softirq_exit
@@ -2638,6 +2818,7 @@
uart_update_timeout
uart_write_wakeup
uart_xchar_out
uclamp_eff_value
__udelay
udp4_hwcsum
ufshcd_auto_hibern8_update
@@ -2688,6 +2869,9 @@
unregister_virtio_driver
up
update_devfreq
___update_load_sum
update_misfit_status
update_rq_clock
up_read
up_write
usb_add_function
@@ -2845,6 +3029,7 @@
wait_woken
__wake_up
__wake_up_locked
wakeup_preempt
wake_up_process
wakeup_source_add
wakeup_source_create


@@ -1118,6 +1118,7 @@
extcon_set_state
extcon_set_state_sync
extcon_unregister_notifier
ext_sched_class
fasync_helper
fd_install
fget
@@ -1614,9 +1615,9 @@
irq_create_fwspec_mapping
irq_create_mapping_affinity
irq_dispose_mapping
__irq_domain_alloc_fwnode
__irq_domain_alloc_irqs
irq_domain_alloc_irqs_parent
__irq_domain_alloc_fwnode
irq_domain_create_hierarchy
irq_domain_disconnect_hierarchy
irq_domain_free_fwnode
@@ -1651,9 +1652,9 @@
irq_work_queue
irq_work_queue_on
irq_work_sync
is_vmalloc_addr
isolate_and_split_free_page
isolate_anon_lru_page
is_vmalloc_addr
iterate_fd
jiffies
jiffies_to_msecs
@@ -2321,10 +2322,6 @@
phy_init
phy_init_eee
phy_init_hw
phy_pm_runtime_get
phy_pm_runtime_get_sync
phy_pm_runtime_put
phy_pm_runtime_put_sync
phylink_connect_phy
phylink_create
phylink_destroy
@@ -2355,6 +2352,10 @@
phy_mac_interrupt
phy_modify
phy_modify_mmd
phy_pm_runtime_get
phy_pm_runtime_get_sync
phy_pm_runtime_put
phy_pm_runtime_put_sync
phy_power_off
phy_power_on
phy_print_status
@@ -2509,12 +2510,12 @@
ptp_clock_register
ptp_clock_unregister
ptp_parse_header
putback_movable_pages
put_cmsg
__put_cred
put_device
put_disk
put_iova_domain
putback_movable_pages
__put_net
put_pid
put_sg_io_hdr
@@ -2846,6 +2847,8 @@
scsi_normalize_sense
__scsi_print_sense
scsi_register_interface
__scx_ops_enabled
__scx_switched_all
sdei_event_disable
sdei_event_enable
sdei_event_register
@@ -3402,6 +3405,9 @@
__traceiter_android_vh_dump_throttled_rt_tasks
__traceiter_android_vh_encrypt_page
__traceiter_android_vh_free_task
__traceiter_android_vh_freq_qos_add_request
__traceiter_android_vh_freq_qos_remove_request
__traceiter_android_vh_freq_qos_update_request
__traceiter_android_vh_ftrace_dump_buffer
__traceiter_android_vh_ftrace_format_check
__traceiter_android_vh_ftrace_oops_enter
@@ -3538,9 +3544,12 @@
__tracepoint_android_vh_cpuidle_psci_enter
__tracepoint_android_vh_cpuidle_psci_exit
__tracepoint_android_vh_do_wake_up_sync
__tracepoint_android_vh_encrypt_page
__tracepoint_android_vh_dump_throttled_rt_tasks
__tracepoint_android_vh_encrypt_page
__tracepoint_android_vh_free_task
__tracepoint_android_vh_freq_qos_add_request
__tracepoint_android_vh_freq_qos_remove_request
__tracepoint_android_vh_freq_qos_update_request
__tracepoint_android_vh_ftrace_dump_buffer
__tracepoint_android_vh_ftrace_format_check
__tracepoint_android_vh_ftrace_oops_enter
@@ -4089,6 +4098,7 @@
xsk_tx_peek_desc
xsk_tx_release
xsk_uses_need_wakeup
zap_page_range_single
zap_vma_ptes
zlib_deflate
zlib_deflateEnd


@@ -27,3 +27,4 @@
remove_memory_subsection
send_sig_mceerr
smpboot_unregister_percpu_thread
vmap_pfn


@@ -287,6 +287,9 @@
mmc_cqe_post_req
mmc_put_card
# required mmdvfs.ko
devfreq_event_get_event
# required pinctrl_sprd.ko
pinctrl_register
pinctrl_unregister
@@ -303,6 +306,13 @@
__traceiter_android_vh_regmap_update
__tracepoint_android_vh_regmap_update
# required sc2731_charger.ko
power_supply_get_battery_info
power_supply_put_battery_info
# required sc27xx-vibra.ko
input_ff_create_memless
# required sc8546_charger.ko
__regmap_init_i2c
@@ -314,6 +324,7 @@
mmc_get_ext_csd
mmc_regulator_disable_vqmmc
mmc_regulator_enable_vqmmc
mmc_sd_switch
mmc_send_status
sdhci_enable_v4_mode
sdhci_request
@@ -328,9 +339,18 @@
# required sipa_core.ko
alarm_start
# required spi-sprd-adi.ko
devm_register_restart_handler
# required sprd_ase_driver.ko
alarm_forward
# required sprd_bt_tty.ko
tty_port_link_device
# required sprd_camera.ko
of_irq_to_resource
# required sprd_charger_manager.ko
alarm_expires_remaining
@@ -344,6 +364,9 @@
mipi_dsi_set_maximum_return_packet_size
of_get_drm_display_mode
# required sprd-iommu.ko
generic_single_device_group
# required sprd_pmic_adc.ko
nvmem_cell_read_u16


@@ -80,6 +80,7 @@
__traceiter_android_rvh_alloc_pages_reclaim_start
__traceiter_android_rvh_bpf_skb_load_bytes
__traceiter_android_vh_throttle_direct_reclaim_bypass
__traceiter_android_vh_count_workingset_refault
__traceiter_android_rvh_cpufreq_transition
__traceiter_android_rvh_create_worker
__traceiter_android_rvh_dequeue_task_fair
@@ -99,6 +100,7 @@
__traceiter_android_rvh_udpv6_recvmsg
__traceiter_android_rvh_udpv6_sendmsg
__traceiter_android_rvh_percpu_rwsem_wait_complete
__traceiter_android_rvh_pr_set_vma_name_bypass
__traceiter_android_vh_alter_mutex_list_add
__traceiter_android_vh_alter_rwsem_list_add
__traceiter_android_vh_bd_link_disk_holder
@@ -134,6 +136,7 @@
__traceiter_android_vh_lruvec_add_folio
__traceiter_android_vh_lruvec_del_folio
__traceiter_android_vh_mglru_aging_bypass
__traceiter_android_vh_mm_free_page
__traceiter_android_vh_mmap_region
__traceiter_android_vh_mutex_unlock_slowpath
__traceiter_android_vh_mutex_unlock_slowpath_before_wakeq
@@ -156,6 +159,7 @@
__traceiter_android_vh_shrink_folio_list
__traceiter_android_vh_shrink_node_memcgs
__traceiter_android_vh_shrink_node_memcgs_bypass
__traceiter_android_vh_free_unref_folios_to_pcp_bypass
__traceiter_android_vh_sk_alloc
__traceiter_android_vh_sk_free
__traceiter_android_vh_swapmem_gather_add_bypass
@@ -189,12 +193,14 @@
__tracepoint_android_rvh_alloc_pages_reclaim_start
__tracepoint_android_rvh_bpf_skb_load_bytes
__tracepoint_android_vh_throttle_direct_reclaim_bypass
__tracepoint_android_vh_count_workingset_refault
__tracepoint_android_rvh_cpufreq_transition
__tracepoint_android_rvh_create_worker
__tracepoint_android_rvh_dequeue_task_fair
__tracepoint_android_rvh_enqueue_task_fair
__tracepoint_android_rvh_inet_sock_create
__tracepoint_android_rvh_inet_sock_release
__tracepoint_android_rvh_pr_set_vma_name_bypass
__tracepoint_android_rvh_replace_next_task_fair
__tracepoint_android_rvh_set_user_nice
__tracepoint_android_rvh_set_gfp_zone_flags
@@ -243,6 +249,7 @@
__tracepoint_android_vh_lruvec_add_folio
__tracepoint_android_vh_lruvec_del_folio
__tracepoint_android_vh_mglru_aging_bypass
__tracepoint_android_vh_mm_free_page
__tracepoint_android_vh_mmap_region
__tracepoint_android_vh_mutex_unlock_slowpath
__tracepoint_android_vh_mutex_unlock_slowpath_before_wakeq
@@ -265,6 +272,7 @@
__tracepoint_android_vh_shrink_folio_list
__tracepoint_android_vh_shrink_node_memcgs
__tracepoint_android_vh_shrink_node_memcgs_bypass
__tracepoint_android_vh_free_unref_folios_to_pcp_bypass
__tracepoint_android_vh_sk_alloc
__tracepoint_android_vh_sk_free
__tracepoint_android_vh_swapmem_gather_add_bypass


@@ -8,6 +8,12 @@
#required by perf_helper.ko
__traceiter_android_rvh_dequeue_entity_delayed
__tracepoint_android_rvh_dequeue_entity_delayed
__traceiter_f2fs_submit_folio_write
__tracepoint_f2fs_submit_folio_write
__traceiter_android_vh_fuse_request_send
__tracepoint_android_vh_fuse_request_send
__traceiter_android_vh_fuse_request_end
__tracepoint_android_vh_fuse_request_end
# commonly used symbols
__traceiter_android_vh_logbuf
@@ -28,6 +34,8 @@
# required by mi_mem_engine.ko
__traceiter_android_vh_tune_swappiness
__tracepoint_android_vh_tune_swappiness
__traceiter_android_vh_do_shrink_slab_ex
__tracepoint_android_vh_do_shrink_slab_ex
# required by SAGT module
__traceiter_android_rvh_before_do_sched_yield
@@ -74,6 +82,10 @@
scsi_device_set_state
blk_mq_quiesce_tagset
blk_mq_unquiesce_tagset
dev_pm_opp_find_freq_floor_indexed
blk_mq_alloc_queue
scsi_host_busy
dev_pm_opp_find_freq_ceil_indexed
#required by stability
sock_from_file
@@ -101,6 +113,7 @@
__traceiter_android_vh_rwsem_write_wait_start
__traceiter_android_vh_mutex_wait_start
__traceiter_android_vh_alter_mutex_list_add
__traceiter_android_rvh_cpuset_fork
__traceiter_android_vh_sched_setaffinity_early
__traceiter_android_rvh_set_cpus_allowed_comm
__traceiter_android_rvh_dequeue_task
@@ -108,10 +121,18 @@
__tracepoint_android_vh_rwsem_write_wait_start
__tracepoint_android_vh_mutex_wait_start
__tracepoint_android_vh_alter_mutex_list_add
__tracepoint_android_rvh_cpuset_fork
__tracepoint_android_vh_sched_setaffinity_early
__tracepoint_android_rvh_set_cpus_allowed_comm
__tracepoint_android_rvh_dequeue_task
cpuset_cpus_allowed
cpufreq_update_policy
cgroup_threadgroup_rwsem
#required by millet.ko
__traceiter_android_rvh_refrigerator
__tracepoint_android_rvh_refrigerator
freezer_cgrp_subsys
# required by trace_hook.ko
android_rvh_probe_register
@@ -385,6 +406,13 @@
filp_open_block
blk_crypto_start_using_key
cgroup_rm_cftypes
mem_cgroup_move_account
__traceiter_android_vh_mem_cgroup_charge
__traceiter_android_vh_filemap_add_folio
__traceiter_android_vh_shrink_node
__tracepoint_android_vh_mem_cgroup_charge
__tracepoint_android_vh_filemap_add_folio
__tracepoint_android_vh_shrink_node
#required by mem_reclaim_ctl.ko
__traceiter_android_vh_page_should_be_protected


@@ -42,11 +42,18 @@
* Worker macros, don't use these, use the ones without a leading '_'
*/
#define _ANDROID_KABI_RULE(hint, target, value) \
#if defined(BUILD_VDSO) || defined(__DISABLE_EXPORTS)
#define __ANDROID_KABI_RULE(hint, target, value)
#else
#define __ANDROID_KABI_RULE(hint, target, value)			\
	static const char CONCATENATE(__gendwarfksyms_rule_,		\
				      __COUNTER__)[] __used __aligned(1) \
		__section(".discard.gendwarfksyms.kabi_rules") =	\
			"1\0" #hint "\0" #target "\0" #value
			"1\0" #hint "\0" target "\0" value
#endif

#define _ANDROID_KABI_RULE(hint, target, value) \
	__ANDROID_KABI_RULE(hint, #target, #value)
#define _ANDROID_KABI_NORMAL_SIZE_ALIGN(_orig, _new) \
union { \
@@ -63,6 +70,9 @@
__stringify(_new)); \
}
#if defined(BUILD_VDSO) || defined(__DISABLE_EXPORTS)
#define _ANDROID_KABI_REPLACE(_orig, _new) _new
#else
#define _ANDROID_KABI_REPLACE(_orig, _new) \
union { \
_new; \
@@ -71,6 +81,7 @@
}; \
_ANDROID_KABI_NORMAL_SIZE_ALIGN(_orig, _new); \
}
#endif
/*
@@ -120,6 +131,22 @@
#define ANDROID_KABI_ENUMERATOR_VALUE(fqn, field, value) \
_ANDROID_KABI_RULE(enumerator_value, fqn field, value)
/*
* ANDROID_KABI_BYTE_SIZE(fqn, value)
* Set the byte_size attribute for the struct/union/enum fqn to
* value bytes.
*/
#define ANDROID_KABI_BYTE_SIZE(fqn, value) \
_ANDROID_KABI_RULE(byte_size, fqn, value)
/*
* ANDROID_KABI_TYPE_STRING(type, str)
* For the given type, override the type string used in symtypes
* output and version calculation with str.
*/
#define ANDROID_KABI_TYPE_STRING(type, str) \
__ANDROID_KABI_RULE(type_string, type, str)
/*
* ANDROID_KABI_IGNORE
* Add a new field that's ignored in versioning.


@@ -239,12 +239,19 @@ struct ffa_partition_info {
#define FFA_PARTITION_INDIRECT_MSG BIT(2)
/* partition can receive notifications */
#define FFA_PARTITION_NOTIFICATION_RECV BIT(3)
/* partition must be informed about each VM that is created by the Hypervisor */
#define FFA_PARTITION_HYP_CREATE_VM BIT(6)
/* partition must be informed about each VM that is destroyed by the Hypervisor */
#define FFA_PARTITION_HYP_DESTROY_VM BIT(7)
/* partition runs in the AArch64 execution state. */
#define FFA_PARTITION_AARCH64_EXEC BIT(8)
	u32 properties;
	u32 uuid[4];
};
#define FFA_VM_CREATION_MSG (BIT(31) | (BIT(2)))
#define FFA_VM_DESTRUCTION_MSG (FFA_VM_CREATION_MSG | BIT(1))
static inline
bool ffa_partition_check_property(struct ffa_device *dev, u32 property)
{


@@ -78,6 +78,7 @@ enum stop_cp_reason {
STOP_CP_REASON_UPDATE_INODE,
STOP_CP_REASON_FLUSH_FAIL,
STOP_CP_REASON_NO_SEGMENT,
STOP_CP_REASON_CORRUPTED_FREE_BITMAP,
STOP_CP_REASON_MAX,
};


@@ -1200,11 +1200,19 @@ extern int send_sigurg(struct file *file);
#define SB_NOUSER BIT(31)
/* These flags relate to encoding and casefolding */
#define SB_ENC_STRICT_MODE_FL (1 << 0)
#define SB_ENC_NO_COMPAT_FALLBACK_FL (1 << 1)
#define sb_has_strict_encoding(sb) \
	(sb->s_encoding_flags & SB_ENC_STRICT_MODE_FL)
#if IS_ENABLED(CONFIG_UNICODE)
#define sb_no_casefold_compat_fallback(sb) \
	(sb->s_encoding_flags & SB_ENC_NO_COMPAT_FALLBACK_FL)
#else
#define sb_no_casefold_compat_fallback(sb) (1)
#endif
/*
* Umount options
*/


@@ -600,8 +600,9 @@ gunyah_hypercall_vcpu_run(u64 capid, unsigned long *resume_data,
#define GUNYAH_ADDRSPC_MODIFY_FLAG_SANITIZE_BIT 1
enum gunyah_error
gunyah_hypercall_addrspc_modify_pages(u64 capid, u64 addr, u64 size, u64 flags);
enum gunyah_error
gunyah_hypercall_addrspace_find_info_area(unsigned long *ipa, unsigned long *size);
#define GUNYAH_ADDRSPACE_VMMIO_CONFIGURE_OP_ADD_RANGE 0

enum gunyah_error
gunyah_hypercall_addrspc_configure_vmmio_range(u64 capid, u64 base, u64 size, u64 op);
#endif


@@ -376,6 +376,11 @@ void do_traversal_all_lruvec(int (*callback)(struct mem_cgroup *memcg,
void *private),
void *private);
int mem_cgroup_move_account(struct folio *folio,
			    bool compound,
			    struct mem_cgroup *from,
			    struct mem_cgroup *to);
/*
* After the initialization objcg->memcg is always pointing at
* a valid memcg, but can be atomically swapped to the parent memcg.
@@ -1146,6 +1151,14 @@ static inline bool PageMemcgKmem(struct page *page)
return false;
}
static inline int mem_cgroup_move_account(struct folio *folio,
					  bool compound,
					  struct mem_cgroup *from,
					  struct mem_cgroup *to)
{
	return 0;
}
static inline bool mem_cgroup_is_root(struct mem_cgroup *memcg)
{
return true;


@@ -689,6 +689,7 @@ struct sched_dl_entity {
unsigned int dl_defer : 1;
unsigned int dl_defer_armed : 1;
unsigned int dl_defer_running : 1;
unsigned int dl_server_idle : 1;
/*
* Bandwidth enforcement timer. Each -deadline task has its
@@ -728,6 +729,34 @@ struct sched_dl_entity {
#endif
};
ANDROID_KABI_TYPE_STRING("s#sched_dl_entity", "structure_type sched_dl_entity "
"{ member s#rb_node rb_node data_member_location(0) , member t#u64 "
"dl_runtime data_member_location(24) , member t#u64 dl_deadline "
"data_member_location(32) , member t#u64 dl_period data_member_location(40) "
", member t#u64 dl_bw data_member_location(48) , member t#u64 dl_density "
"data_member_location(56) , member t#s64 runtime data_member_location(64) , "
"member t#u64 deadline data_member_location(72) , member base_type unsigned "
"int byte_size(4) encoding(7) flags data_member_location(80) , member base_type "
"unsigned int byte_size(4) encoding(7) dl_throttled bit_size(1) "
"data_bit_offset(672) , member base_type unsigned int byte_size(4) encoding(7) "
"dl_yielded bit_size(1) data_bit_offset(673) , member base_type unsigned int "
"byte_size(4) encoding(7) dl_non_contending bit_size(1) data_bit_offset(674) , "
"member base_type unsigned int byte_size(4) encoding(7) dl_overrun bit_size(1) "
"data_bit_offset(675) , member base_type unsigned int byte_size(4) encoding(7) "
"dl_server bit_size(1) data_bit_offset(676) , member base_type unsigned int "
"byte_size(4) encoding(7) dl_server_active bit_size(1) data_bit_offset(677) , "
"member base_type unsigned int byte_size(4) encoding(7) dl_defer bit_size(1) "
"data_bit_offset(678) , member base_type unsigned int byte_size(4) encoding(7) "
"dl_defer_armed bit_size(1) data_bit_offset(679) , member base_type unsigned "
"int byte_size(4) encoding(7) dl_defer_running bit_size(1) "
"data_bit_offset(680) , member s#hrtimer dl_timer data_member_location(88) , "
"member s#hrtimer inactive_timer data_member_location(152) , member "
"pointer_type { s#rq } rq data_member_location(216) , member "
"t#dl_server_has_tasks_f server_has_tasks data_member_location(224) , member "
"t#dl_server_pick_f server_pick_task data_member_location(232) , member "
"pointer_type { s#sched_dl_entity } pi_se data_member_location(240) } "
"byte_size(248)");
#ifdef CONFIG_UCLAMP_TASK
/* Number of utilization clamp buckets (shorter alias) */
#define UCLAMP_BUCKETS CONFIG_UCLAMP_BUCKETS_COUNT


@@ -9765,6 +9765,11 @@ void cfg80211_links_removed(struct net_device *dev, u16 link_mask);
* struct cfg80211_mlo_reconf_done_data - MLO reconfiguration data
* @buf: MLO Reconfiguration Response frame (header + body)
* @len: length of the frame data
* @driver_initiated: Indicates whether the add-links request was initiated by
* the driver. Set to true when the driver initiated the link
* reconfiguration request while handling an offloaded AP link
* recommendation, e.g. a BTM (BSS Transition Management) request.
* @added_links: BIT mask of links successfully added to the association
* @links: per-link information indexed by link ID
* @links.bss: the BSS that MLO reconfiguration was requested for, ownership of
@@ -9777,6 +9782,7 @@ void cfg80211_links_removed(struct net_device *dev, u16 link_mask);
struct cfg80211_mlo_reconf_done_data {
const u8 *buf;
size_t len;
bool driver_initiated;
u16 added_links;
struct {
struct cfg80211_bss *bss;


@@ -15,10 +15,22 @@ DECLARE_HOOK(android_vh_cgroup_set_task,
TP_PROTO(int ret, struct cgroup *cgrp, struct task_struct *task, bool threadgroup),
TP_ARGS(ret, cgrp, task, threadgroup));
DECLARE_RESTRICTED_HOOK(android_rvh_refrigerator,
TP_PROTO(bool f),
TP_ARGS(f), 1);
DECLARE_HOOK(android_vh_cgroup_attach,
TP_PROTO(struct cgroup_subsys *ss, struct cgroup_taskset *tset),
TP_ARGS(ss, tset));
DECLARE_RESTRICTED_HOOK(android_rvh_cgroup_force_kthread_migration,
TP_PROTO(struct task_struct *tsk, struct cgroup *dst_cgrp, bool *force_migration),
TP_ARGS(tsk, dst_cgrp, force_migration), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_cpuset_fork,
TP_PROTO(struct task_struct *p, bool *inherit_cpus),
TP_ARGS(p, inherit_cpus), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_cpu_cgroup_attach,
TP_PROTO(struct cgroup_taskset *tset),
TP_ARGS(tset), 1);


@@ -150,6 +150,10 @@ DECLARE_HOOK(android_vh_exit_check,
DECLARE_RESTRICTED_HOOK(android_rvh_dpm_prepare,
TP_PROTO(int flag),
TP_ARGS(flag), 1);
DECLARE_HOOK(android_vh_set_tsk_need_resched_lazy,
TP_PROTO(struct task_struct *p, struct rq *rq, int *need_lazy),
TP_ARGS(p, rq, need_lazy));
#endif /* _TRACE_HOOK_DTASK_H */
/* This part must be outside protection */


@@ -322,6 +322,9 @@ DECLARE_HOOK(android_vh_add_lazyfree_bypass,
DECLARE_HOOK(android_vh_do_async_mmap_readahead,
TP_PROTO(struct vm_fault *vmf, struct folio *folio, bool *skip),
TP_ARGS(vmf, folio, skip));
DECLARE_HOOK(android_vh_mm_free_page,
TP_PROTO(struct page *page),
TP_ARGS(page));
DECLARE_HOOK(android_vh_alloc_contig_range_not_isolated,
TP_PROTO(unsigned long start, unsigned end),
@@ -523,6 +526,13 @@ DECLARE_HOOK(android_vh_try_to_unmap_one,
TP_PROTO(struct folio *folio, struct vm_area_struct *vma,
unsigned long addr, void *arg, bool ret),
TP_ARGS(folio, vma, addr, arg, ret));
DECLARE_HOOK(android_vh_mem_cgroup_charge,
TP_PROTO(struct folio *folio, struct mem_cgroup **memcg),
TP_ARGS(folio, memcg));
DECLARE_HOOK(android_vh_filemap_add_folio,
TP_PROTO(struct address_space *mapping, struct folio *folio,
pgoff_t index),
TP_ARGS(mapping, folio, index));
#endif /* _TRACE_HOOK_MM_H */
/* This part must be outside protection */


@@ -123,6 +123,8 @@ DECLARE_RESTRICTED_HOOK(android_rvh_bpf_skb_load_bytes,
TP_PROTO(const struct sk_buff *skb, u32 offset, void *to, u32 len,
int *handled, int *err),
TP_ARGS(skb, offset, to, len, handled, err), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_tcp_rcv_spurious_retrans,
TP_PROTO(struct sock *sk), TP_ARGS(sk), 1);
DECLARE_HOOK(android_vh_tcp_rtt_estimator,
TP_PROTO(struct sock *sk, long mrtt_us), TP_ARGS(sk, mrtt_us));
DECLARE_HOOK(android_vh_udp_enqueue_schedule_skb,


@@ -80,6 +80,10 @@ DECLARE_RESTRICTED_HOOK(android_rvh_setscheduler,
TP_PROTO(struct task_struct *p),
TP_ARGS(p), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_setscheduler_prio,
TP_PROTO(struct task_struct *p),
TP_ARGS(p), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_replace_next_task_fair,
TP_PROTO(struct rq *rq, struct task_struct **p, struct task_struct *prev),
TP_ARGS(rq, p, prev), 1);
@@ -322,6 +326,11 @@ DECLARE_HOOK(android_vh_setscheduler_uclamp,
TP_PROTO(struct task_struct *tsk, int clamp_id, unsigned int value),
TP_ARGS(tsk, clamp_id, value));
DECLARE_HOOK(android_vh_uclamp_validate,
TP_PROTO(struct task_struct *p, const struct sched_attr *attr,
int *ret, bool *done),
TP_ARGS(p, attr, ret, done));
DECLARE_HOOK(android_vh_update_topology_flags_workfn,
TP_PROTO(void *unused),
TP_ARGS(unused));
@@ -494,6 +503,25 @@ DECLARE_HOOK(android_vh_prio_restore,
TP_PROTO(int saved_prio),
TP_ARGS(saved_prio));
DECLARE_RESTRICTED_HOOK(android_rvh_update_rt_rq_load_avg,
TP_PROTO(u64 now, struct rq *rq, struct task_struct *tsk, int running),
TP_ARGS(now, rq, tsk, running), 1);
struct sched_attr;
DECLARE_HOOK(android_vh_set_sugov_sched_attr,
TP_PROTO(struct sched_attr *attr),
TP_ARGS(attr));
DECLARE_RESTRICTED_HOOK(android_rvh_set_iowait,
TP_PROTO(struct task_struct *p, struct rq *rq, int *should_iowait_boost),
TP_ARGS(p, rq, should_iowait_boost), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_util_fits_cpu,
TP_PROTO(unsigned long util, unsigned long uclamp_min, unsigned long uclamp_max,
int cpu, bool *fits, bool *done),
TP_ARGS(util, uclamp_min, uclamp_max, cpu, fits, done), 1);
/* macro versions of hooks are no longer required */
#endif /* _TRACE_HOOK_SCHED_H */


@@ -14,6 +14,11 @@ DECLARE_HOOK(android_vh_syscall_prctl_finished,
DECLARE_HOOK(android_vh_security_audit_log_setid,
TP_PROTO(u32 type, u32 old_id, u32 new_id),
TP_ARGS(type, old_id, new_id));
DECLARE_RESTRICTED_HOOK(android_rvh_pr_set_vma_name_bypass,
TP_PROTO(struct mm_struct *mm, unsigned long addr, unsigned long size,
struct anon_vma_name *anon_name, int *error, bool *bypass),
TP_ARGS(mm, addr, size, anon_name, error, bypass), 1);
#endif
#include <trace/define_trace.h>


@@ -20,6 +20,10 @@ DECLARE_RESTRICTED_HOOK(android_rvh_cpu_capacity_show,
TP_PROTO(unsigned long *capacity, int cpu),
TP_ARGS(capacity, cpu), 1);
DECLARE_HOOK(android_vh_use_amu_fie,
TP_PROTO(bool *use_amu_fie),
TP_ARGS(use_amu_fie));
#endif /* _TRACE_HOOK_TOPOLOGY_H */
/* This part must be outside protection */
#include <trace/define_trace.h>


@@ -62,6 +62,10 @@ DECLARE_HOOK(android_vh_should_memcg_bypass,
DECLARE_HOOK(android_vh_do_shrink_slab,
TP_PROTO(struct shrinker *shrinker, long *freeable),
TP_ARGS(shrinker, freeable));
DECLARE_HOOK(android_vh_do_shrink_slab_ex,
TP_PROTO(struct shrink_control *shrinkctl, struct shrinker *shrinker,
long *freeable, int priority),
TP_ARGS(shrinkctl, shrinker, freeable, priority));
DECLARE_HOOK(android_vh_rebalance_anon_lru_bypass,
TP_PROTO(bool *bypass),
TP_ARGS(bypass));
@@ -83,14 +87,14 @@ DECLARE_HOOK(android_vh_shrink_slab_bypass,
TP_ARGS(gfp_mask, nid, memcg, priority, bypass));
DECLARE_HOOK(android_vh_vmscan_kswapd_done,
TP_PROTO(int node_id, unsigned int highest_zoneidx, unsigned int alloc_order,
unsigned int reclaim_order),
TP_ARGS(node_id, highest_zoneidx, alloc_order, reclaim_order));
DECLARE_RESTRICTED_HOOK(android_rvh_vmscan_kswapd_wake,
TP_PROTO(int node_id, unsigned int highest_zoneidx, unsigned int alloc_order),
TP_ARGS(node_id, highest_zoneidx, alloc_order), 1);
DECLARE_RESTRICTED_HOOK(android_rvh_vmscan_kswapd_done,
TP_PROTO(int node_id, unsigned int highest_zoneidx, unsigned int alloc_order,
unsigned int reclaim_order),
TP_ARGS(node_id, highest_zoneidx, alloc_order, reclaim_order), 1);
DECLARE_HOOK(android_vh_direct_reclaim_begin,
TP_PROTO(int *prio),
@@ -101,6 +105,9 @@ DECLARE_HOOK(android_vh_direct_reclaim_end,
DECLARE_HOOK(android_vh_throttle_direct_reclaim_bypass,
TP_PROTO(bool *bypass),
TP_ARGS(bypass));
DECLARE_HOOK(android_vh_shrink_node,
TP_PROTO(pg_data_t *pgdat, struct mem_cgroup *memcg),
TP_ARGS(pgdat, memcg));
DECLARE_HOOK(android_vh_shrink_node_memcgs,
TP_PROTO(struct mem_cgroup *memcg, bool *skip),
TP_ARGS(memcg, skip));


@@ -1330,7 +1330,15 @@
* TID to Link mapping for downlink/uplink traffic.
*
* @NL80211_CMD_ASSOC_MLO_RECONF: For a non-AP MLD station, request to
* add/remove links to/from the association.
* add/remove links to/from the association. This command is also used
* as an event to notify userspace of the result of a link
* reconfiguration request, i.e. information about the links that were
* added. Removed links are reported via the existing
* %NL80211_CMD_LINKS_REMOVED command. The event is additionally used to
* notify userspace about links newly added to the current connection in
* the case of AP-initiated link recommendations, received via a BTM
* (BSS Transition Management) request or a link reconfig notify frame,
* when the driver handles the link recommendation offload.
*
* @NL80211_CMD_MAX: highest used command number
* @__NL80211_CMD_AFTER_LAST: internal use


@@ -251,7 +251,8 @@ int cgroup_attach_task(struct cgroup *dst_cgrp, struct task_struct *leader,
void cgroup_attach_lock(bool lock_threadgroup);
void cgroup_attach_unlock(bool lock_threadgroup);
struct task_struct *cgroup_procs_write_start(char *buf, bool threadgroup,
bool *locked,
struct cgroup *dst_cgrp)
__acquires(&cgroup_threadgroup_rwsem);
void cgroup_procs_write_finish(struct task_struct *task, bool locked)
__releases(&cgroup_threadgroup_rwsem);

Some files were not shown because too many files have changed in this diff.