Merge tag 'mm-nonmm-stable-2024-09-21-07-52' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull non-MM updates from Andrew Morton:
"Many singleton patches - please see the various changelogs for
details.
Quite a lot of nilfs2 work this time around.
Notable patch series in this pull request are:
- "mul_u64_u64_div_u64: new implementation" by Nicolas Pitre, with
assistance from Uwe Kleine-König. Reimplement mul_u64_u64_div_u64()
to provide (much) more accurate results. The current implementation
was causing Uwe some issues in the PWM drivers.
- "xz: Updates to license, filters, and compression options" from
Lasse Collin. Miscellaneous maintenance and minor feature work to
the xz decompressor.
- "Fix some GDB command error and add some GDB commands" from
Kuan-Ying Lee. Fixes and enhancements to the gdb scripts.
- "treewide: add missing MODULE_DESCRIPTION() macros" from Jeff
Johnson. Adds lots of MODULE_DESCRIPTIONs, thus fixing lots of
warnings about this.
- "nilfs2: add support for some common ioctls" from Ryusuke Konishi.
Adds various commonly-available ioctls to nilfs2.
- "This series fixes a number of formatting issues in kernel doc
comments" from Ryusuke Konishi does that.
- "nilfs2: prevent unexpected ENOENT propagation" from Ryusuke
Konishi. Fix issues where -ENOENT was being unintentionally and
inappropriately returned to userspace.
- "nilfs2: assorted cleanups" from Huang Xiaojia.
- "nilfs2: fix potential issues with empty b-tree nodes" from Ryusuke
Konishi fixes some issues which can occur on corrupted nilfs2
filesystems.
- "scripts/decode_stacktrace.sh: improve error reporting and
usability" from Luca Ceresoli does those things"
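
The precision point behind the mul_u64_u64_div_u64() series can be sketched outside the kernel. This is a hypothetical Python illustration (arbitrary-precision integers stand in for the 128-bit intermediate product), not the kernel's C implementation:

```python
# Hypothetical illustration (not the kernel's C code): Python integers are
# arbitrary precision, so the exact wide intermediate product is free.
# A naive routine that divides first, to keep everything within 64 bits,
# silently drops the remainder of b / c and under-reports the quotient.

def exact_mul_div(a: int, b: int, c: int) -> int:
    """Floor of (a * b) / c computed with a full-width intermediate."""
    return (a * b) // c

def naive_mul_div(a: int, b: int, c: int) -> int:
    """Overflow-avoiding but lossy: rounds b / c down before multiplying."""
    return a * (b // c)

a, b, c = 10**10, 10**10, 3
exact = exact_mul_div(a, b, c)
naive = naive_mul_div(a, b, c)
# The naive result is noticeably smaller; the error scales with a.
assert naive < exact
```

A more accurate mul_u64_u64_div_u64() closes exactly this gap for callers, such as PWM period math, that multiply two large 64-bit values before dividing.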
* tag 'mm-nonmm-stable-2024-09-21-07-52' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (103 commits)
list: test: increase coverage of list_test_list_replace*()
list: test: fix tests for list_cut_position()
proc: use __auto_type more
treewide: correct the typo 'retun'
ocfs2: cleanup return value and mlog in ocfs2_global_read_info()
nilfs2: remove duplicate 'unlikely()' usage
nilfs2: fix potential oob read in nilfs_btree_check_delete()
nilfs2: determine empty node blocks as corrupted
nilfs2: fix potential null-ptr-deref in nilfs_btree_insert()
user_namespace: use kmemdup_array() instead of kmemdup() for multiple allocation
tools/mm: rm thp_swap_allocator_test when make clean
squashfs: fix percpu address space issues in decompressor_multi_percpu.c
lib: glob.c: added null check for character class
nilfs2: refactor nilfs_segctor_thread()
nilfs2: use kthread_create and kthread_stop for the log writer thread
nilfs2: remove sc_timer_task
nilfs2: do not repair reserved inode bitmap in nilfs_new_inode()
nilfs2: eliminate the shared counter and spinlock for i_generation
nilfs2: separate inode type information from i_state field
nilfs2: use the BITS_PER_LONG macro
...
@@ -115,6 +115,6 @@ What:		/sys/devices/system/memory/crash_hotplug
 Date:		Aug 2023
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 Description:
-		(RO) indicates whether or not the kernel directly supports
-		modifying the crash elfcorehdr for memory hot un/plug and/or
-		on/offline changes.
+		(RO) indicates whether or not the kernel updates relevant kexec
+		segments on memory hot un/plug and/or on/offline events, avoiding the
+		need to reload kdump kernel.
@@ -704,9 +704,9 @@ What:		/sys/devices/system/cpu/crash_hotplug
 Date:		Aug 2023
 Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
 Description:
-		(RO) indicates whether or not the kernel directly supports
-		modifying the crash elfcorehdr for CPU hot un/plug and/or
-		on/offline changes.
+		(RO) indicates whether or not the kernel updates relevant kexec
+		segments on memory hot un/plug and/or on/offline events, avoiding the
+		need to reload kdump kernel.

 What:		/sys/devices/system/cpu/enabled
 Date:		Nov 2022
@@ -294,8 +294,9 @@ The following files are currently defined:
 ``crash_hotplug``	read-only: when changes to the system memory map
 			occur due to hot un/plug of memory, this file contains
 			'1' if the kernel updates the kdump capture kernel memory
-			map itself (via elfcorehdr), or '0' if userspace must update
-			the kdump capture kernel memory map.
+			map itself (via elfcorehdr and other relevant kexec
+			segments), or '0' if userspace must update the kdump
+			capture kernel memory map.

 			Availability depends on the CONFIG_MEMORY_HOTPLUG kernel
 			configuration option.
@@ -737,8 +737,9 @@ can process the event further.

 When changes to the CPUs in the system occur, the sysfs file
 /sys/devices/system/cpu/crash_hotplug contains '1' if the kernel
-updates the kdump capture kernel list of CPUs itself (via elfcorehdr),
-or '0' if userspace must update the kdump capture kernel list of CPUs.
+updates the kdump capture kernel list of CPUs itself (via elfcorehdr and
+other relevant kexec segment), or '0' if userspace must update the kdump
+capture kernel list of CPUs.

 The availability depends on the CONFIG_HOTPLUG_CPU kernel configuration
 option.
@@ -750,8 +751,9 @@ file can be used in a udev rule as follows:
     SUBSYSTEM=="cpu", ATTRS{crash_hotplug}=="1", GOTO="kdump_reload_end"

 For a CPU hot un/plug event, if the architecture supports kernel updates
-of the elfcorehdr (which contains the list of CPUs), then the rule skips
-the unload-then-reload of the kdump capture kernel.
+of the elfcorehdr (which contains the list of CPUs) and other relevant
+kexec segments, then the rule skips the unload-then-reload of the kdump
+capture kernel.

 Kernel Inline Documentations Reference
 ======================================
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: 0BSD
+
 ============================
 XZ data compression in Linux
 ============================
@@ -6,62 +8,55 @@ Introduction
 ============

 XZ is a general purpose data compression format with high compression
-ratio and relatively fast decompression. The primary compression
-algorithm (filter) is LZMA2. Additional filters can be used to improve
-compression ratio even further. E.g. Branch/Call/Jump (BCJ) filters
-improve compression ratio of executable data.
+ratio. The XZ decompressor in Linux is called XZ Embedded. It supports
+the LZMA2 filter and optionally also Branch/Call/Jump (BCJ) filters
+for executable code. CRC32 is supported for integrity checking.

-The XZ decompressor in Linux is called XZ Embedded. It supports
-the LZMA2 filter and optionally also BCJ filters. CRC32 is supported
-for integrity checking. The home page of XZ Embedded is at
-<https://tukaani.org/xz/embedded.html>, where you can find the
-latest version and also information about using the code outside
-the Linux kernel.
+See the `XZ Embedded`_ home page for the latest version which includes
+a few optional extra features that aren't required in the Linux kernel
+and information about using the code outside the Linux kernel.

-For userspace, XZ Utils provide a zlib-like compression library
-and a gzip-like command line tool. XZ Utils can be downloaded from
-<https://tukaani.org/xz/>.
+For userspace, `XZ Utils`_ provide a zlib-like compression library
+and a gzip-like command line tool.
+
+.. _XZ Embedded: https://tukaani.org/xz/embedded.html
+.. _XZ Utils: https://tukaani.org/xz/
 XZ related components in the kernel
 ===================================

 The xz_dec module provides XZ decompressor with single-call (buffer
-to buffer) and multi-call (stateful) APIs. The usage of the xz_dec
-module is documented in include/linux/xz.h.
-
-The xz_dec_test module is for testing xz_dec. xz_dec_test is not
-useful unless you are hacking the XZ decompressor. xz_dec_test
-allocates a char device major dynamically to which one can write
-.xz files from userspace. The decompressed output is thrown away.
-Keep an eye on dmesg to see diagnostics printed by xz_dec_test.
-See the xz_dec_test source code for the details.
+to buffer) and multi-call (stateful) APIs in include/linux/xz.h.

 For decompressing the kernel image, initramfs, and initrd, there
 is a wrapper function in lib/decompress_unxz.c. Its API is the
 same as in other decompress_*.c files, which is defined in
 include/linux/decompress/generic.h.

-scripts/xz_wrap.sh is a wrapper for the xz command line tool found
-from XZ Utils. The wrapper sets compression options to values suitable
-for compressing the kernel image.
+For kernel makefiles, three commands are provided for use with
+``$(call if_changed)``. They require the xz tool from XZ Utils.

-For kernel makefiles, two commands are provided for use with
-$(call if_needed). The kernel image should be compressed with
-$(call if_needed,xzkern) which will use a BCJ filter and a big LZMA2
-dictionary. It will also append a four-byte trailer containing the
-uncompressed size of the file, which is needed by the boot code.
-Other things should be compressed with $(call if_needed,xzmisc)
-which will use no BCJ filter and 1 MiB LZMA2 dictionary.
+- ``$(call if_changed,xzkern)`` is for compressing the kernel image.
+  It runs the script scripts/xz_wrap.sh which uses arch-optimized
+  options and a big LZMA2 dictionary.
+
+- ``$(call if_changed,xzkern_with_size)`` is like ``xzkern`` above but
+  this also appends a four-byte trailer containing the uncompressed size
+  of the file. The trailer is needed by the boot code on some archs.
+
+- Other things can be compressed with ``$(call if_changed,xzmisc)``
+  which will use no BCJ filter and 1 MiB LZMA2 dictionary.

 Notes on compression options
 ============================

-Since the XZ Embedded supports only streams with no integrity check or
-CRC32, make sure that you don't use some other integrity check type
-when encoding files that are supposed to be decoded by the kernel. With
-liblzma, you need to use either LZMA_CHECK_NONE or LZMA_CHECK_CRC32
-when encoding. With the xz command line tool, use --check=none or
---check=crc32.
+Since the XZ Embedded supports only streams with CRC32 or no integrity
+check, make sure that you don't use some other integrity check type
+when encoding files that are supposed to be decoded by the kernel.
+With liblzma from XZ Utils, you need to use either ``LZMA_CHECK_CRC32``
+or ``LZMA_CHECK_NONE`` when encoding. With the ``xz`` command line tool,
+use ``--check=crc32`` or ``--check=none`` to override the default
+``--check=crc64``.

 Using CRC32 is strongly recommended unless there is some other layer
 which will verify the integrity of the uncompressed data anyway.
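
The check-type constraint described in the hunk above can be exercised with Python's standard `lzma` module, which wraps liblzma. This is a userspace sketch for illustration, not part of the kernel tree:

```python
# Sketch using Python's stdlib lzma bindings to liblzma: produce an .xz
# stream with the CRC32 integrity check, matching `xz --check=crc32`.
# liblzma's default for .xz is CRC64, which XZ Embedded rejects.
import lzma

data = b"kernel-bound payload " * 64
blob = lzma.compress(data, format=lzma.FORMAT_XZ, check=lzma.CHECK_CRC32)

# Round-trips with any full xz decoder; a CRC32-or-none decoder such as
# the kernel's accepts this stream too.
assert lzma.decompress(blob) == data
```

The chosen check type is recorded in the stream header, so a mismatched file fails immediately at decode time rather than producing silent corruption.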
@@ -71,57 +66,33 @@ by the decoder; you can only change the integrity check type (or
 disable it) for the actual uncompressed data.

 In userspace, LZMA2 is typically used with dictionary sizes of several
-megabytes. The decoder needs to have the dictionary in RAM, thus big
-dictionaries cannot be used for files that are intended to be decoded
-by the kernel. 1 MiB is probably the maximum reasonable dictionary
-size for in-kernel use (maybe more is OK for initramfs). The presets
-in XZ Utils may not be optimal when creating files for the kernel,
-so don't hesitate to use custom settings. Example::
+megabytes. The decoder needs to have the dictionary in RAM:

-   xz --check=crc32 --lzma2=dict=512KiB inputfile
+- In multi-call mode the dictionary is allocated as part of the
+  decoder state. The reasonable maximum dictionary size for in-kernel
+  use will depend on the target hardware: a few megabytes is fine for
+  desktop systems while 64 KiB to 1 MiB might be more appropriate on
+  some embedded systems.

-An exception to above dictionary size limitation is when the decoder
-is used in single-call mode. Decompressing the kernel itself is an
-example of this situation. In single-call mode, the memory usage
-doesn't depend on the dictionary size, and it is perfectly fine to
-use a big dictionary: for maximum compression, the dictionary should
-be at least as big as the uncompressed data itself.
+- In single-call mode the output buffer is used as the dictionary
+  buffer. That is, the size of the dictionary doesn't affect the
+  decompressor memory usage at all. Only the base data structures
+  are allocated which take a little less than 30 KiB of memory.
+  For the best compression, the dictionary should be at least
+  as big as the uncompressed data. A notable example of single-call
+  mode is decompressing the kernel itself (except on PowerPC).

-Future plans
-============
+The compression presets in XZ Utils may not be optimal when creating
+files for the kernel, so don't hesitate to use custom settings to,
+for example, set the dictionary size. Also, xz may produce a smaller
+file in single-threaded mode so setting that explicitly is recommended.
+Example::

-Creating a limited XZ encoder may be considered if people think it is
-useful. LZMA2 is slower to compress than e.g. Deflate or LZO even at
-the fastest settings, so it isn't clear if LZMA2 encoder is wanted
-into the kernel.
+   xz --threads=1 --check=crc32 --lzma2=dict=512KiB inputfile

-Support for limited random-access reading is planned for the
-decompression code. I don't know if it could have any use in the
-kernel, but I know that it would be useful in some embedded projects
-outside the Linux kernel.
+xz_dec API
+==========

-Conformance to the .xz file format specification
-================================================
+This is available with ``#include <linux/xz.h>``.

-There are a couple of corner cases where things have been simplified
-at expense of detecting errors as early as possible. These should not
-matter in practice all, since they don't cause security issues. But
-it is good to know this if testing the code e.g. with the test files
-from XZ Utils.
-
-Reporting bugs
-==============
-
-Before reporting a bug, please check that it's not fixed already
-at upstream. See <https://tukaani.org/xz/embedded.html> to get the
-latest code.
-
-Report bugs to <lasse.collin@tukaani.org> or visit #tukaani on
-Freenode and talk to Larhzu. I don't actively read LKML or other
-kernel-related mailing lists, so if there's something I should know,
-you should email to me personally or use IRC.
-
-Don't bother Igor Pavlov with questions about the XZ implementation
-in the kernel or about XZ Utils. While these two implementations
-include essential code that is directly based on Igor Pavlov's code,
-these implementations aren't maintained nor supported by him.
+.. kernel-doc:: include/linux/xz.h
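
The custom-settings advice in the hunk above (small dictionary, CRC32, explicit single-threaded encoding) can also be expressed through Python's stdlib `lzma` instead of the `xz` CLI. This is an illustrative sketch, not kernel code:

```python
# Equivalent of `xz --check=crc32 --lzma2=dict=512KiB` via liblzma's
# Python bindings: a custom LZMA2 filter chain with a 512 KiB dictionary,
# which keeps multi-call (in-kernel) decompressor memory use modest.
import lzma

filters = [{"id": lzma.FILTER_LZMA2, "dict_size": 512 * 1024}]
payload = b"initramfs-like input " * 256
small_dict_xz = lzma.compress(
    payload,
    format=lzma.FORMAT_XZ,
    check=lzma.CHECK_CRC32,
    filters=filters,
)
assert lzma.decompress(small_dict_xz) == payload
```

A bigger `dict_size` generally compresses better but raises the decoder's RAM requirement in multi-call mode, which is exactly the trade-off the documentation describes.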
23
LICENSES/deprecated/0BSD
Normal file
@@ -0,0 +1,23 @@
+Valid-License-Identifier: 0BSD
+SPDX-URL: https://spdx.org/licenses/0BSD.html
+Usage-Guide:
+  To use the BSD Zero Clause License put the following SPDX tag/value
+  pair into a comment according to the placement guidelines in the
+  licensing rules documentation:
+    SPDX-License-Identifier: 0BSD
+License-Text:
+
+BSD Zero Clause License
+
+Copyright (c) <year> <copyright holders>
+
+Permission to use, copy, modify, and/or distribute this software for any
+purpose with or without fee is hereby granted.
+
+THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
+SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION
+OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
+CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
14
MAINTAINERS
@@ -8612,6 +8612,7 @@ M:	Akinobu Mita <akinobu.mita@gmail.com>
 S:	Supported
 F:	Documentation/fault-injection/
 F:	lib/fault-inject.c
+F:	tools/testing/fault-injection/

 FBTFT Framebuffer drivers
 L:	dri-devel@lists.freedesktop.org
@@ -25459,6 +25460,19 @@ S:	Maintained
 F:	drivers/spi/spi-xtensa-xtfpga.c
 F:	sound/soc/xtensa/xtfpga-i2s.c

+XZ EMBEDDED
+M:	Lasse Collin <lasse.collin@tukaani.org>
+S:	Maintained
+W:	https://tukaani.org/xz/embedded.html
+B:	https://github.com/tukaani-project/xz-embedded/issues
+C:	irc://irc.libera.chat/tukaani
+F:	Documentation/staging/xz.rst
+F:	include/linux/decompress/unxz.h
+F:	include/linux/xz.h
+F:	lib/decompress_unxz.c
+F:	lib/xz/
+F:	scripts/xz_wrap.sh
+
 YAM DRIVER FOR AX.25
 M:	Jean-Paul Roubelat <jpr@f6fbb.org>
 L:	linux-hams@vger.kernel.org
@@ -8,6 +8,7 @@
 #include <linux/raid/xor.h>
 #include <linux/module.h>

+MODULE_DESCRIPTION("NEON accelerated XOR implementation");
 MODULE_LICENSE("GPL");

 #ifndef __ARM_NEON__
@@ -333,7 +333,7 @@ int omap4_hotplug_cpu(unsigned int cpu, unsigned int power_state)
 	omap_pm_ops.scu_prepare(cpu, power_state);

 	/*
-	 * CPU never retuns back if targeted power state is OFF mode.
+	 * CPU never returns back if targeted power state is OFF mode.
 	 * CPU ONLINE follows normal CPU ONLINE ptah via
 	 * omap4_secondary_startup().
 	 */
@@ -17,7 +17,7 @@
 OBJCOPYFLAGS_Image :=-O binary -R .note -R .note.gnu.build-id -R .comment -S

 targets := Image Image.bz2 Image.gz Image.lz4 Image.lzma Image.lzo \
-	   Image.zst image.fit
+	   Image.zst Image.xz image.fit

 $(obj)/Image: vmlinux FORCE
 	$(call if_changed,objcopy)
@@ -40,6 +40,9 @@ $(obj)/Image.lzo: $(obj)/Image FORCE
 $(obj)/Image.zst: $(obj)/Image FORCE
 	$(call if_changed,zstd)

+$(obj)/Image.xz: $(obj)/Image FORCE
+	$(call if_changed,xzkern)
+
 $(obj)/image.fit: $(obj)/Image $(obj)/dts/dtbs-list FORCE
 	$(call if_changed,fit)
@@ -50,11 +50,8 @@ static inline void put_unaligned_be32(u32 val, void *p)
 /* prevent the inclusion of the xz-preboot MM headers */
 #define DECOMPR_MM_H
 #define memmove memmove
-#define XZ_EXTERN static

 /* xz.h needs to be included directly since we need enum xz_mode */
 #include "../../../include/linux/xz.h"

-#undef XZ_EXTERN
-
 #endif
@@ -158,6 +158,7 @@ config RISCV
	select HAVE_KERNEL_LZO if !XIP_KERNEL && !EFI_ZBOOT
	select HAVE_KERNEL_UNCOMPRESSED if !XIP_KERNEL && !EFI_ZBOOT
	select HAVE_KERNEL_ZSTD if !XIP_KERNEL && !EFI_ZBOOT
+	select HAVE_KERNEL_XZ if !XIP_KERNEL && !EFI_ZBOOT
	select HAVE_KPROBES if !XIP_KERNEL
	select HAVE_KRETPROBES if !XIP_KERNEL
	# https://github.com/ClangBuiltLinux/linux/issues/1881
@@ -159,6 +159,7 @@ boot-image-$(CONFIG_KERNEL_LZ4) := Image.lz4
 boot-image-$(CONFIG_KERNEL_LZMA) := Image.lzma
 boot-image-$(CONFIG_KERNEL_LZO) := Image.lzo
 boot-image-$(CONFIG_KERNEL_ZSTD) := Image.zst
+boot-image-$(CONFIG_KERNEL_XZ) := Image.xz
 ifdef CONFIG_RISCV_M_MODE
 boot-image-$(CONFIG_ARCH_CANAAN) := loader.bin
 endif
@@ -183,12 +184,12 @@ endif
 vdso-install-y += arch/riscv/kernel/vdso/vdso.so.dbg
 vdso-install-$(CONFIG_COMPAT) += arch/riscv/kernel/compat_vdso/compat_vdso.so.dbg

-BOOT_TARGETS := Image Image.gz Image.bz2 Image.lz4 Image.lzma Image.lzo Image.zst loader loader.bin xipImage vmlinuz.efi
+BOOT_TARGETS := Image Image.gz Image.bz2 Image.lz4 Image.lzma Image.lzo Image.zst Image.xz loader loader.bin xipImage vmlinuz.efi

 all: $(notdir $(KBUILD_IMAGE))

 loader.bin: loader
-Image.gz Image.bz2 Image.lz4 Image.lzma Image.lzo Image.zst loader xipImage vmlinuz.efi: Image
+Image.gz Image.bz2 Image.lz4 Image.lzma Image.lzo Image.zst Image.xz loader xipImage vmlinuz.efi: Image

 $(BOOT_TARGETS): vmlinux
 	$(Q)$(MAKE) $(build)=$(boot) $(boot)/$@
@@ -225,6 +226,7 @@ define archhelp
	echo '  Image.lzma - Compressed kernel image (arch/riscv/boot/Image.lzma)'
	echo '  Image.lzo - Compressed kernel image (arch/riscv/boot/Image.lzo)'
	echo '  Image.zst - Compressed kernel image (arch/riscv/boot/Image.zst)'
+	echo '  Image.xz - Compressed kernel image (arch/riscv/boot/Image.xz)'
	echo '  vmlinuz.efi - Compressed EFI kernel image (arch/riscv/boot/vmlinuz.efi)'
	echo '                Default when CONFIG_EFI_ZBOOT=y'
	echo '  xipImage - Execute-in-place kernel image (arch/riscv/boot/xipImage)'
@@ -64,6 +64,9 @@ $(obj)/Image.lzo: $(obj)/Image FORCE
 $(obj)/Image.zst: $(obj)/Image FORCE
 	$(call if_changed,zstd)

+$(obj)/Image.xz: $(obj)/Image FORCE
+	$(call if_changed,xzkern)
+
 $(obj)/loader.bin: $(obj)/loader FORCE
 	$(call if_changed,objcopy)
@@ -144,3 +144,4 @@ static void __exit cleanup(void)
 module_init(init);
 module_exit(cleanup);
 MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Test module for mmiotrace");
@@ -231,7 +231,7 @@ struct dpu_crtc_state {
	container_of(x, struct dpu_crtc_state, base)

 /**
- * dpu_crtc_frame_pending - retun the number of pending frames
+ * dpu_crtc_frame_pending - return the number of pending frames
  * @crtc: Pointer to drm crtc object
  */
 static inline int dpu_crtc_frame_pending(struct drm_crtc *crtc)
@@ -357,12 +357,10 @@ void msm_debugfs_init(struct drm_minor *minor)
	if (priv->kms && priv->kms->funcs->debugfs_init)
		priv->kms->funcs->debugfs_init(priv->kms, minor);

-#ifdef CONFIG_FAULT_INJECTION
	fault_create_debugfs_attr("fail_gem_alloc", minor->debugfs_root,
				  &fail_gem_alloc);
	fault_create_debugfs_attr("fail_gem_iova", minor->debugfs_root,
				  &fail_gem_iova);
-#endif
 }
 #endif
@@ -7,6 +7,7 @@

 #include <linux/dma-mapping.h>
+#include <linux/fault-inject.h>
 #include <linux/debugfs.h>
 #include <linux/of_address.h>
 #include <linux/uaccess.h>
@@ -58,10 +59,8 @@ static bool modeset = true;
 MODULE_PARM_DESC(modeset, "Use kernel modesetting [KMS] (1=on (default), 0=disable)");
 module_param(modeset, bool, 0600);

-#ifdef CONFIG_FAULT_INJECTION
 DECLARE_FAULT_ATTR(fail_gem_alloc);
 DECLARE_FAULT_ATTR(fail_gem_iova);
-#endif

 static int msm_drm_uninit(struct device *dev)
 {
@@ -33,12 +33,8 @@
 #include <drm/msm_drm.h>
 #include <drm/drm_gem.h>

-#ifdef CONFIG_FAULT_INJECTION
 extern struct fault_attr fail_gem_alloc;
 extern struct fault_attr fail_gem_iova;
-#else
-#  define should_fail(attr, size) 0
-#endif

 struct msm_kms;
 struct msm_gpu;
@@ -6,6 +6,7 @@
 #include "xe_debugfs.h"

 #include <linux/debugfs.h>
+#include <linux/fault-inject.h>
 #include <linux/string_helpers.h>

 #include <drm/drm_debugfs.h>
@@ -26,10 +27,7 @@
 #include "xe_vm.h"
 #endif

-#ifdef CONFIG_FAULT_INJECTION
-#include <linux/fault-inject.h> /* XXX: fault-inject.h is broken */
 DECLARE_FAULT_ATTR(gt_reset_failure);
-#endif

 static struct xe_device *node_to_xe(struct drm_info_node *node)
 {
@@ -213,8 +211,5 @@ void xe_debugfs_register(struct xe_device *xe)
	for_each_gt(gt, xe, id)
		xe_gt_debugfs_register(gt);

-#ifdef CONFIG_FAULT_INJECTION
	fault_create_debugfs_attr("fail_gt_reset", root, &gt_reset_failure);
-#endif
-
 }
@@ -6,6 +6,8 @@
 #ifndef _XE_GT_H_
 #define _XE_GT_H_

+#include <linux/fault-inject.h>
+
 #include <drm/drm_util.h>

 #include "xe_device.h"
@@ -19,19 +21,11 @@

 #define CCS_MASK(gt) (((gt)->info.engine_mask & XE_HW_ENGINE_CCS_MASK) >> XE_HW_ENGINE_CCS0)

-#ifdef CONFIG_FAULT_INJECTION
-#include <linux/fault-inject.h> /* XXX: fault-inject.h is broken */
 extern struct fault_attr gt_reset_failure;
 static inline bool xe_fault_inject_gt_reset(void)
 {
	return should_fail(&gt_reset_failure, 1);
 }
-#else
-static inline bool xe_fault_inject_gt_reset(void)
-{
-	return false;
-}
-#endif

 struct xe_gt *xe_gt_alloc(struct xe_tile *tile);
 int xe_gt_init_hwconfig(struct xe_gt *gt);
@@ -1420,7 +1420,7 @@ enum opa_pr_supported {
 /*
  * opa_pr_query_possible - Check if current PR query can be an OPA query.
  *
- * Retuns PR_NOT_SUPPORTED if a path record query is not
+ * Returns PR_NOT_SUPPORTED if a path record query is not
  * possible, PR_OPA_SUPPORTED if an OPA path record query
  * is possible and PR_IB_SUPPORTED if an IB path record
  * query is possible.
@@ -1075,7 +1075,7 @@ static void wistron_led_init(struct device *parent)
	}

	if (leds_present & FE_MAIL_LED) {
-		/* bios_get_default_setting(MAIL) always retuns 0, so just turn the led off */
+		/* bios_get_default_setting(MAIL) always returns 0, so just turn the led off */
		wistron_mail_led.brightness = LED_OFF;
		if (led_classdev_register(parent, &wistron_mail_led))
			leds_present &= ~FE_MAIL_LED;
@@ -7,6 +7,7 @@
 #include <linux/iommu.h>
 #include <linux/xarray.h>
 #include <linux/file.h>
+#include <linux/debugfs.h>
 #include <linux/anon_inodes.h>
 #include <linux/fault-inject.h>
 #include <linux/platform_device.h>
@@ -12,6 +12,7 @@
 #include <asm/xilinx_mb_manager.h>
 #include <linux/module.h>
 #include <linux/of.h>
+#include <linux/debugfs.h>
 #include <linux/platform_device.h>
 #include <linux/fault-inject.h>
@@ -1381,7 +1381,7 @@ static inline union ns_mem *NS_GET_PAGE(struct nandsim *ns)
 }

 /*
- * Retuns a pointer to the current byte, within the current page.
+ * Returns a pointer to the current byte, within the current page.
 */
 static inline u_char *NS_PAGE_BYTE_OFF(struct nandsim *ns)
 {
@@ -6,6 +6,7 @@
 */

 #include <linux/moduleparam.h>
+#include <linux/debugfs.h>
 #include "nvme.h"

 static DECLARE_FAULT_ATTR(fail_default_attr);
@@ -1431,7 +1431,7 @@ bfa_cb_lps_flogo_comp(void *bfad, void *uarg)
 * param[in] vf_id - VF_ID
 *
 * return
- *   If lookup succeeds, retuns fcs vf object, otherwise returns NULL
+ *   If lookup succeeds, returns fcs vf object, otherwise returns NULL
 */
bfa_fcs_vf_t *
bfa_fcs_vf_lookup(struct bfa_fcs_s *fcs, u16 vf_id)
@@ -4009,7 +4009,7 @@ static void pmcraid_tasklet_function(unsigned long instance)
 * This routine un-registers registered interrupt handler and
 * also frees irqs/vectors.
 *
- * Retun Value
+ * Return Value
 *	None
 */
static
@@ -3,6 +3,7 @@
 #include <linux/kconfig.h>
 #include <linux/types.h>
 #include <linux/fault-inject.h>
+#include <linux/debugfs.h>
 #include <linux/module.h>
 #include <ufs/ufshcd.h>
 #include "ufs-fault-injection.h"
@@ -3447,7 +3447,7 @@ static int decode_attr_link_support(struct xdr_stream *xdr, uint32_t *bitmap, ui
		*res = be32_to_cpup(p);
		bitmap[0] &= ~FATTR4_WORD0_LINK_SUPPORT;
	}
-	dprintk("%s: link support=%s\n", __func__, *res == 0 ? "false" : "true");
+	dprintk("%s: link support=%s\n", __func__, str_false_true(*res == 0));
	return 0;
 }
@@ -3465,7 +3465,7 @@ static int decode_attr_symlink_support(struct xdr_stream *xdr, uint32_t *bitmap,
		*res = be32_to_cpup(p);
		bitmap[0] &= ~FATTR4_WORD0_SYMLINK_SUPPORT;
	}
-	dprintk("%s: symlink support=%s\n", __func__, *res == 0 ? "false" : "true");
+	dprintk("%s: symlink support=%s\n", __func__, str_false_true(*res == 0));
	return 0;
 }
@@ -3607,7 +3607,7 @@ static int decode_attr_case_insensitive(struct xdr_stream *xdr, uint32_t *bitmap
		*res = be32_to_cpup(p);
		bitmap[0] &= ~FATTR4_WORD0_CASE_INSENSITIVE;
	}
-	dprintk("%s: case_insensitive=%s\n", __func__, *res == 0 ? "false" : "true");
+	dprintk("%s: case_insensitive=%s\n", __func__, str_false_true(*res == 0));
	return 0;
 }
@@ -3625,7 +3625,7 @@ static int decode_attr_case_preserving(struct xdr_stream *xdr, uint32_t *bitmap,
		*res = be32_to_cpup(p);
		bitmap[0] &= ~FATTR4_WORD0_CASE_PRESERVING;
	}
-	dprintk("%s: case_preserving=%s\n", __func__, *res == 0 ? "false" : "true");
+	dprintk("%s: case_preserving=%s\n", __func__, str_false_true(*res == 0));
	return 0;
 }
@@ -4333,8 +4333,7 @@ static int decode_attr_xattrsupport(struct xdr_stream *xdr, uint32_t *bitmap,
		*res = be32_to_cpup(p);
		bitmap[2] &= ~FATTR4_WORD2_XATTR_SUPPORT;
	}
-	dprintk("%s: XATTR support=%s\n", __func__,
-		*res == 0 ? "false" : "true");
+	dprintk("%s: XATTR support=%s\n", __func__, str_false_true(*res == 0));
	return 0;
 }
@@ -37,7 +37,7 @@ void *nilfs_palloc_block_get_entry(const struct inode *, __u64,
 int nilfs_palloc_count_max_entries(struct inode *, u64, u64 *);

 /**
- * nilfs_palloc_req - persistent allocator request and reply
+ * struct nilfs_palloc_req - persistent allocator request and reply
  * @pr_entry_nr: entry number (vblocknr or inode number)
  * @pr_desc_bh: buffer head of the buffer containing block group descriptors
  * @pr_bitmap_bh: buffer head of the buffer containing a block group bitmap
@@ -349,7 +349,7 @@ int nilfs_bmap_propagate(struct nilfs_bmap *bmap, struct buffer_head *bh)
 }

 /**
- * nilfs_bmap_lookup_dirty_buffers -
+ * nilfs_bmap_lookup_dirty_buffers - collect dirty block buffers
  * @bmap: bmap
  * @listp: pointer to buffer head list
  */
@@ -44,6 +44,19 @@ struct nilfs_bmap_stats {

 /**
  * struct nilfs_bmap_operations - bmap operation table
+ * @bop_lookup: single block search operation
+ * @bop_lookup_contig: consecutive block search operation
+ * @bop_insert: block insertion operation
+ * @bop_delete: block delete operation
+ * @bop_clear: block mapping resource release operation
+ * @bop_propagate: operation to propagate dirty state towards the
+ *                 mapping root
+ * @bop_lookup_dirty_buffers: operation to collect dirty block buffers
+ * @bop_assign: disk block address assignment operation
+ * @bop_mark: operation to mark in-use blocks as dirty for
+ *            relocation by GC
+ * @bop_seek_key: find valid block key operation
+ * @bop_last_key: find last valid block key operation
  */
 struct nilfs_bmap_operations {
	int (*bop_lookup)(const struct nilfs_bmap *, __u64, int, __u64 *);
@@ -66,7 +79,7 @@ struct nilfs_bmap_operations {
|
||||
int (*bop_seek_key)(const struct nilfs_bmap *, __u64, __u64 *);
|
||||
int (*bop_last_key)(const struct nilfs_bmap *, __u64 *);
|
||||
|
||||
/* The following functions are internal use only. */
|
||||
/* private: internal use only */
|
||||
int (*bop_check_insert)(const struct nilfs_bmap *, __u64);
|
||||
int (*bop_check_delete)(struct nilfs_bmap *, __u64);
|
||||
int (*bop_gather_data)(struct nilfs_bmap *, __u64 *, __u64 *, int);
|
||||
@@ -74,9 +87,8 @@ struct nilfs_bmap_operations {
|
||||
|
||||
|
||||
#define NILFS_BMAP_SIZE (NILFS_INODE_BMAP_SIZE * sizeof(__le64))
|
||||
#define NILFS_BMAP_KEY_BIT (sizeof(unsigned long) * 8 /* CHAR_BIT */)
|
||||
#define NILFS_BMAP_NEW_PTR_INIT \
|
||||
(1UL << (sizeof(unsigned long) * 8 /* CHAR_BIT */ - 1))
|
||||
#define NILFS_BMAP_KEY_BIT BITS_PER_LONG
|
||||
#define NILFS_BMAP_NEW_PTR_INIT (1UL << (BITS_PER_LONG - 1))
|
||||
|
||||
static inline int nilfs_bmap_is_new_ptr(unsigned long ptr)
|
||||
{
|
||||
|
||||
@@ -179,11 +179,32 @@ void nilfs_btnode_delete(struct buffer_head *bh)
 }

 /**
- * nilfs_btnode_prepare_change_key
- *  prepare to move contents of the block for old key to one of new key.
- *  the old buffer will not be removed, but might be reused for new buffer.
- *  it might return -ENOMEM because of memory allocation errors,
- *  and might return -EIO because of disk read errors.
+ * nilfs_btnode_prepare_change_key - prepare to change the search key of a
+ *                                   b-tree node block
+ * @btnc: page cache in which the b-tree node block is buffered
+ * @ctxt: structure for exchanging context information for key change
+ *
+ * nilfs_btnode_prepare_change_key() prepares to move the contents of the
+ * b-tree node block of the old key given in the "oldkey" member of @ctxt to
+ * the position of the new key given in the "newkey" member of @ctxt in the
+ * page cache @btnc. Here, the key of the block is an index in units of
+ * blocks, and if the page and block sizes match, it matches the page index
+ * in the page cache.
+ *
+ * If the page size and block size match, this function attempts to move the
+ * entire folio, and in preparation for this, inserts the original folio into
+ * the new index of the cache. If this insertion fails or if the page size
+ * and block size are different, it falls back to a copy preparation using
+ * nilfs_btnode_create_block(), inserts a new block at the position
+ * corresponding to "newkey", and stores the buffer head pointer in the
+ * "newbh" member of @ctxt.
+ *
+ * Note that the current implementation does not support folio sizes larger
+ * than the page size.
+ *
+ * Return: 0 on success, or the following negative error code on failure.
+ * * %-EIO	- I/O error (metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_btnode_prepare_change_key(struct address_space *btnc,
 				    struct nilfs_btnode_chkey_ctxt *ctxt)

@@ -245,8 +266,21 @@ retry:
 }

 /**
- * nilfs_btnode_commit_change_key
- *	commit the change_key operation prepared by prepare_change_key().
+ * nilfs_btnode_commit_change_key - commit the change of the search key of
+ *                                  a b-tree node block
+ * @btnc: page cache in which the b-tree node block is buffered
+ * @ctxt: structure for exchanging context information for key change
+ *
+ * nilfs_btnode_commit_change_key() executes the key change based on the
+ * context @ctxt prepared by nilfs_btnode_prepare_change_key(). If no valid
+ * block buffer is prepared in "newbh" of @ctxt (i.e., a full folio move),
+ * this function removes the folio from the old index and completes the move.
+ * Otherwise, it copies the block data and inherited flag states of "oldbh"
+ * to "newbh" and clears the "oldbh" from the cache. In either case, the
+ * relocated buffer is marked as dirty.
+ *
+ * As with nilfs_btnode_prepare_change_key(), the current implementation does
+ * not support folio sizes larger than the page size.
  */
 void nilfs_btnode_commit_change_key(struct address_space *btnc,
 				    struct nilfs_btnode_chkey_ctxt *ctxt)

@@ -285,8 +319,19 @@ void nilfs_btnode_commit_change_key(struct address_space *btnc,
 }

 /**
- * nilfs_btnode_abort_change_key
- *	abort the change_key operation prepared by prepare_change_key().
+ * nilfs_btnode_abort_change_key - abort the change of the search key of a
+ *                                 b-tree node block
+ * @btnc: page cache in which the b-tree node block is buffered
+ * @ctxt: structure for exchanging context information for key change
+ *
+ * nilfs_btnode_abort_change_key() cancels the key change associated with the
+ * context @ctxt prepared via nilfs_btnode_prepare_change_key() and performs
+ * any necessary cleanup. If no valid block buffer is prepared in "newbh" of
+ * @ctxt, this function removes the folio from the destination index and aborts
+ * the move. Otherwise, it clears "newbh" from the cache.
+ *
+ * As with nilfs_btnode_prepare_change_key(), the current implementation does
+ * not support folio sizes larger than the page size.
  */
 void nilfs_btnode_abort_change_key(struct address_space *btnc,
 				   struct nilfs_btnode_chkey_ctxt *ctxt)

@@ -350,7 +350,7 @@ static int nilfs_btree_node_broken(const struct nilfs_btree_node *node,
 	if (unlikely(level < NILFS_BTREE_LEVEL_NODE_MIN ||
 		     level >= NILFS_BTREE_LEVEL_MAX ||
 		     (flags & NILFS_BTREE_NODE_ROOT) ||
-		     nchildren < 0 ||
+		     nchildren <= 0 ||
 		     nchildren > NILFS_BTREE_NODE_NCHILDREN_MAX(size))) {
 		nilfs_crit(inode->i_sb,
 			   "bad btree node (ino=%lu, blocknr=%llu): level = %d, flags = 0x%x, nchildren = %d",

@@ -381,7 +381,8 @@ static int nilfs_btree_root_broken(const struct nilfs_btree_node *node,
 	if (unlikely(level < NILFS_BTREE_LEVEL_NODE_MIN ||
 		     level >= NILFS_BTREE_LEVEL_MAX ||
 		     nchildren < 0 ||
-		     nchildren > NILFS_BTREE_ROOT_NCHILDREN_MAX)) {
+		     nchildren > NILFS_BTREE_ROOT_NCHILDREN_MAX ||
+		     (nchildren == 0 && level > NILFS_BTREE_LEVEL_NODE_MIN))) {
 		nilfs_crit(inode->i_sb,
 			   "bad btree root (ino=%lu): level = %d, flags = 0x%x, nchildren = %d",
 			   inode->i_ino, level, flags, nchildren);

@@ -1658,13 +1659,16 @@ static int nilfs_btree_check_delete(struct nilfs_bmap *btree, __u64 key)
 	int nchildren, ret;

 	root = nilfs_btree_get_root(btree);
+	nchildren = nilfs_btree_node_get_nchildren(root);
+	if (unlikely(nchildren == 0))
+		return 0;
+
 	switch (nilfs_btree_height(btree)) {
 	case 2:
 		bh = NULL;
 		node = root;
 		break;
 	case 3:
-		nchildren = nilfs_btree_node_get_nchildren(root);
 		if (nchildren > 1)
 			return 0;
 		ptr = nilfs_btree_node_get_ptr(root, nchildren - 1,

@@ -1673,12 +1677,12 @@ static int nilfs_btree_check_delete(struct nilfs_bmap *btree, __u64 key)
 		if (ret < 0)
 			return ret;
 		node = (struct nilfs_btree_node *)bh->b_data;
-		nchildren = nilfs_btree_node_get_nchildren(node);
 		break;
 	default:
 		return 0;
 	}

+	nchildren = nilfs_btree_node_get_nchildren(node);
 	maxkey = nilfs_btree_node_get_key(node, nchildren - 1);
 	nextmaxkey = (nchildren > 1) ?
 		nilfs_btree_node_get_key(node, nchildren - 2) : 0;

@@ -24,6 +24,7 @@
  * @bp_index: index of child node
  * @bp_oldreq: ptr end request for old ptr
  * @bp_newreq: ptr alloc request for new ptr
+ * @bp_ctxt: context information for changing the key of a b-tree node block
  * @bp_op: rebalance operation
  */
 struct nilfs_btree_path {

@@ -125,10 +125,17 @@ static void nilfs_cpfile_block_init(struct inode *cpfile,
 	}
 }

-static inline int nilfs_cpfile_get_header_block(struct inode *cpfile,
-						struct buffer_head **bhp)
+static int nilfs_cpfile_get_header_block(struct inode *cpfile,
+					 struct buffer_head **bhp)
 {
-	return nilfs_mdt_get_block(cpfile, 0, 0, NULL, bhp);
+	int err = nilfs_mdt_get_block(cpfile, 0, 0, NULL, bhp);
+
+	if (unlikely(err == -ENOENT)) {
+		nilfs_error(cpfile->i_sb,
+			    "missing header block in checkpoint metadata");
+		err = -EIO;
+	}
+	return err;
 }

 static inline int nilfs_cpfile_get_checkpoint_block(struct inode *cpfile,

@@ -283,14 +290,9 @@ int nilfs_cpfile_create_checkpoint(struct inode *cpfile, __u64 cno)

 	down_write(&NILFS_MDT(cpfile)->mi_sem);
 	ret = nilfs_cpfile_get_header_block(cpfile, &header_bh);
-	if (unlikely(ret < 0)) {
-		if (ret == -ENOENT) {
-			nilfs_error(cpfile->i_sb,
-				    "checkpoint creation failed due to metadata corruption.");
-			ret = -EIO;
-		}
+	if (unlikely(ret < 0))
 		goto out_sem;
-	}

 	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, 1, &cp_bh);
 	if (unlikely(ret < 0))
 		goto out_header;

@@ -704,9 +706,15 @@ ssize_t nilfs_cpfile_get_cpinfo(struct inode *cpfile, __u64 *cnop, int mode,
 }

 /**
- * nilfs_cpfile_delete_checkpoint -
- * @cpfile:
- * @cno:
+ * nilfs_cpfile_delete_checkpoint - delete a checkpoint
+ * @cpfile: checkpoint file inode
+ * @cno: checkpoint number to delete
+ *
+ * Return: 0 on success, or the following negative error code on failure.
+ * * %-EBUSY	- Checkpoint in use (snapshot specified).
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOENT	- No valid checkpoint found.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_cpfile_delete_checkpoint(struct inode *cpfile, __u64 cno)
 {

@@ -968,21 +976,15 @@ static int nilfs_cpfile_clear_snapshot(struct inode *cpfile, __u64 cno)
 }

 /**
- * nilfs_cpfile_is_snapshot -
+ * nilfs_cpfile_is_snapshot - determine if checkpoint is a snapshot
  * @cpfile: inode of checkpoint file
  * @cno: checkpoint number
  *
- * Description:
- *
- * Return Value: On success, 1 is returned if the checkpoint specified by
- * @cno is a snapshot, or 0 if not. On error, one of the following negative
- * error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-ENOENT - No such checkpoint.
+ * Return: 1 if the checkpoint specified by @cno is a snapshot, 0 if not, or
+ * the following negative error code on failure.
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOENT	- No such checkpoint.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_cpfile_is_snapshot(struct inode *cpfile, __u64 cno)
 {

@@ -271,18 +271,15 @@ void nilfs_dat_abort_update(struct inode *dat,
 }

 /**
- * nilfs_dat_mark_dirty -
+ * nilfs_dat_mark_dirty - mark the DAT block buffer containing the specified
+ *                        virtual block address entry as dirty
  * @dat: DAT file inode
  * @vblocknr: virtual block number
  *
- * Description:
- *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
+ * Return: 0 on success, or the following negative error code on failure.
+ * * %-EINVAL	- Invalid DAT entry (internal code).
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_dat_mark_dirty(struct inode *dat, __u64 vblocknr)
 {

@@ -231,37 +231,6 @@ static struct nilfs_dir_entry *nilfs_next_entry(struct nilfs_dir_entry *p)
 				  nilfs_rec_len_from_disk(p->rec_len));
 }

-static unsigned char
-nilfs_filetype_table[NILFS_FT_MAX] = {
-	[NILFS_FT_UNKNOWN]	= DT_UNKNOWN,
-	[NILFS_FT_REG_FILE]	= DT_REG,
-	[NILFS_FT_DIR]		= DT_DIR,
-	[NILFS_FT_CHRDEV]	= DT_CHR,
-	[NILFS_FT_BLKDEV]	= DT_BLK,
-	[NILFS_FT_FIFO]		= DT_FIFO,
-	[NILFS_FT_SOCK]		= DT_SOCK,
-	[NILFS_FT_SYMLINK]	= DT_LNK,
-};
-
-#define S_SHIFT 12
-static unsigned char
-nilfs_type_by_mode[(S_IFMT >> S_SHIFT) + 1] = {
-	[S_IFREG >> S_SHIFT]	= NILFS_FT_REG_FILE,
-	[S_IFDIR >> S_SHIFT]	= NILFS_FT_DIR,
-	[S_IFCHR >> S_SHIFT]	= NILFS_FT_CHRDEV,
-	[S_IFBLK >> S_SHIFT]	= NILFS_FT_BLKDEV,
-	[S_IFIFO >> S_SHIFT]	= NILFS_FT_FIFO,
-	[S_IFSOCK >> S_SHIFT]	= NILFS_FT_SOCK,
-	[S_IFLNK >> S_SHIFT]	= NILFS_FT_SYMLINK,
-};
-
-static void nilfs_set_de_type(struct nilfs_dir_entry *de, struct inode *inode)
-{
-	umode_t mode = inode->i_mode;
-
-	de->file_type = nilfs_type_by_mode[(mode & S_IFMT)>>S_SHIFT];
-}
-
 static int nilfs_readdir(struct file *file, struct dir_context *ctx)
 {
 	loff_t pos = ctx->pos;

@@ -297,10 +266,7 @@ static int nilfs_readdir(struct file *file, struct dir_context *ctx)
 		if (de->inode) {
 			unsigned char t;

-			if (de->file_type < NILFS_FT_MAX)
-				t = nilfs_filetype_table[de->file_type];
-			else
-				t = DT_UNKNOWN;
+			t = fs_ftype_to_dtype(de->file_type);

 			if (!dir_emit(ctx, de->name, de->name_len,
 				      le64_to_cpu(de->inode), t)) {

@@ -444,7 +410,7 @@ void nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
 	err = nilfs_prepare_chunk(folio, from, to);
 	BUG_ON(err);
 	de->inode = cpu_to_le64(inode->i_ino);
-	nilfs_set_de_type(de, inode);
+	de->file_type = fs_umode_to_ftype(inode->i_mode);
 	nilfs_commit_chunk(folio, mapping, from, to);
 	inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir));
 }

@@ -531,7 +497,7 @@ got_it:
 	de->name_len = namelen;
 	memcpy(de->name, name, namelen);
 	de->inode = cpu_to_le64(inode->i_ino);
-	nilfs_set_de_type(de, inode);
+	de->file_type = fs_umode_to_ftype(inode->i_mode);
 	nilfs_commit_chunk(folio, folio->mapping, from, to);
 	inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir));
 	nilfs_mark_inode_dirty(dir);

@@ -612,14 +578,14 @@ int nilfs_make_empty(struct inode *inode, struct inode *parent)
 	de->rec_len = nilfs_rec_len_to_disk(NILFS_DIR_REC_LEN(1));
 	memcpy(de->name, ".\0\0", 4);
 	de->inode = cpu_to_le64(inode->i_ino);
-	nilfs_set_de_type(de, inode);
+	de->file_type = fs_umode_to_ftype(inode->i_mode);

 	de = (struct nilfs_dir_entry *)(kaddr + NILFS_DIR_REC_LEN(1));
 	de->name_len = 2;
 	de->rec_len = nilfs_rec_len_to_disk(chunk_size - NILFS_DIR_REC_LEN(1));
 	de->inode = cpu_to_le64(parent->i_ino);
 	memcpy(de->name, "..\0", 4);
-	nilfs_set_de_type(de, inode);
+	de->file_type = fs_umode_to_ftype(inode->i_mode);
 	kunmap_local(kaddr);
 	nilfs_commit_chunk(folio, mapping, 0, chunk_size);
 fail:

@@ -15,6 +15,7 @@
 #include <linux/writeback.h>
 #include <linux/uio.h>
 #include <linux/fiemap.h>
+#include <linux/random.h>
 #include "nilfs.h"
 #include "btnode.h"
 #include "segment.h"

@@ -28,17 +29,13 @@
  * @ino: inode number
  * @cno: checkpoint number
  * @root: pointer on NILFS root object (mounted checkpoint)
- * @for_gc: inode for GC flag
- * @for_btnc: inode for B-tree node cache flag
- * @for_shadow: inode for shadowed page cache flag
+ * @type: inode type
  */
 struct nilfs_iget_args {
 	u64 ino;
 	__u64 cno;
 	struct nilfs_root *root;
-	bool for_gc;
-	bool for_btnc;
-	bool for_shadow;
+	unsigned int type;
 };

 static int nilfs_iget_test(struct inode *inode, void *opaque);

@@ -162,7 +159,7 @@ static int nilfs_writepages(struct address_space *mapping,
 	int err = 0;

 	if (sb_rdonly(inode->i_sb)) {
-		nilfs_clear_dirty_pages(mapping, false);
+		nilfs_clear_dirty_pages(mapping);
 		return -EROFS;
 	}

@@ -186,7 +183,7 @@ static int nilfs_writepage(struct page *page, struct writeback_control *wbc)
 	 * have dirty pages that try to be flushed in background.
 	 * So, here we simply discard this dirty page.
 	 */
-	nilfs_clear_folio_dirty(folio, false);
+	nilfs_clear_folio_dirty(folio);
 	folio_unlock(folio);
 	return -EROFS;
 }

@@ -315,8 +312,7 @@ static int nilfs_insert_inode_locked(struct inode *inode,
 				     unsigned long ino)
 {
 	struct nilfs_iget_args args = {
-		.ino = ino, .root = root, .cno = 0, .for_gc = false,
-		.for_btnc = false, .for_shadow = false
+		.ino = ino, .root = root, .cno = 0, .type = NILFS_I_TYPE_NORMAL
 	};

 	return insert_inode_locked4(inode, ino, nilfs_iget_test, &args);

@@ -325,7 +321,6 @@ static int nilfs_insert_inode_locked(struct inode *inode,
 struct inode *nilfs_new_inode(struct inode *dir, umode_t mode)
 {
 	struct super_block *sb = dir->i_sb;
-	struct the_nilfs *nilfs = sb->s_fs_info;
 	struct inode *inode;
 	struct nilfs_inode_info *ii;
 	struct nilfs_root *root;

@@ -343,25 +338,13 @@ struct inode *nilfs_new_inode(struct inode *dir, umode_t mode)
 	root = NILFS_I(dir)->i_root;
 	ii = NILFS_I(inode);
 	ii->i_state = BIT(NILFS_I_NEW);
+	ii->i_type = NILFS_I_TYPE_NORMAL;
 	ii->i_root = root;

 	err = nilfs_ifile_create_inode(root->ifile, &ino, &bh);
 	if (unlikely(err))
 		goto failed_ifile_create_inode;
 	/* reference count of i_bh inherits from nilfs_mdt_read_block() */
-
-	if (unlikely(ino < NILFS_USER_INO)) {
-		nilfs_warn(sb,
-			   "inode bitmap is inconsistent for reserved inodes");
-		do {
-			brelse(bh);
-			err = nilfs_ifile_create_inode(root->ifile, &ino, &bh);
-			if (unlikely(err))
-				goto failed_ifile_create_inode;
-		} while (ino < NILFS_USER_INO);
-
-		nilfs_info(sb, "repaired inode bitmap for reserved inodes");
-	}
 	ii->i_bh = bh;

 	atomic64_inc(&root->inodes_count);

@@ -385,9 +368,7 @@ struct inode *nilfs_new_inode(struct inode *dir, umode_t mode)
 	/* ii->i_dir_acl = 0; */
 	ii->i_dir_start_lookup = 0;
 	nilfs_set_inode_flags(inode);
-	spin_lock(&nilfs->ns_next_gen_lock);
-	inode->i_generation = nilfs->ns_next_generation++;
-	spin_unlock(&nilfs->ns_next_gen_lock);
+	inode->i_generation = get_random_u32();
 	if (nilfs_insert_inode_locked(inode, root, ino) < 0) {
 		err = -EIO;
 		goto failed_after_creation;

@@ -546,23 +527,10 @@ static int nilfs_iget_test(struct inode *inode, void *opaque)
 		return 0;

 	ii = NILFS_I(inode);
-	if (test_bit(NILFS_I_BTNC, &ii->i_state)) {
-		if (!args->for_btnc)
-			return 0;
-	} else if (args->for_btnc) {
+	if (ii->i_type != args->type)
 		return 0;
-	}
-	if (test_bit(NILFS_I_SHADOW, &ii->i_state)) {
-		if (!args->for_shadow)
-			return 0;
-	} else if (args->for_shadow) {
-		return 0;
-	}

-	if (!test_bit(NILFS_I_GCINODE, &ii->i_state))
-		return !args->for_gc;
-
-	return args->for_gc && args->cno == ii->i_cno;
+	return !(args->type & NILFS_I_TYPE_GC) || args->cno == ii->i_cno;
 }

 static int nilfs_iget_set(struct inode *inode, void *opaque)

@@ -572,15 +540,9 @@ static int nilfs_iget_set(struct inode *inode, void *opaque)
 	inode->i_ino = args->ino;
 	NILFS_I(inode)->i_cno = args->cno;
 	NILFS_I(inode)->i_root = args->root;
+	NILFS_I(inode)->i_type = args->type;
 	if (args->root && args->ino == NILFS_ROOT_INO)
 		nilfs_get_root(args->root);
-
-	if (args->for_gc)
-		NILFS_I(inode)->i_state = BIT(NILFS_I_GCINODE);
-	if (args->for_btnc)
-		NILFS_I(inode)->i_state |= BIT(NILFS_I_BTNC);
-	if (args->for_shadow)
-		NILFS_I(inode)->i_state |= BIT(NILFS_I_SHADOW);
 	return 0;
 }

@@ -588,8 +550,7 @@ struct inode *nilfs_ilookup(struct super_block *sb, struct nilfs_root *root,
 			    unsigned long ino)
 {
 	struct nilfs_iget_args args = {
-		.ino = ino, .root = root, .cno = 0, .for_gc = false,
-		.for_btnc = false, .for_shadow = false
+		.ino = ino, .root = root, .cno = 0, .type = NILFS_I_TYPE_NORMAL
 	};

 	return ilookup5(sb, ino, nilfs_iget_test, &args);

@@ -599,8 +560,7 @@ struct inode *nilfs_iget_locked(struct super_block *sb, struct nilfs_root *root,
 				unsigned long ino)
 {
 	struct nilfs_iget_args args = {
-		.ino = ino, .root = root, .cno = 0, .for_gc = false,
-		.for_btnc = false, .for_shadow = false
+		.ino = ino, .root = root, .cno = 0, .type = NILFS_I_TYPE_NORMAL
 	};

 	return iget5_locked(sb, ino, nilfs_iget_test, nilfs_iget_set, &args);

@@ -631,8 +591,7 @@ struct inode *nilfs_iget_for_gc(struct super_block *sb, unsigned long ino,
 				__u64 cno)
 {
 	struct nilfs_iget_args args = {
-		.ino = ino, .root = NULL, .cno = cno, .for_gc = true,
-		.for_btnc = false, .for_shadow = false
+		.ino = ino, .root = NULL, .cno = cno, .type = NILFS_I_TYPE_GC
 	};
 	struct inode *inode;
 	int err;

@@ -677,9 +636,7 @@ int nilfs_attach_btree_node_cache(struct inode *inode)
 	args.ino = inode->i_ino;
 	args.root = ii->i_root;
 	args.cno = ii->i_cno;
-	args.for_gc = test_bit(NILFS_I_GCINODE, &ii->i_state) != 0;
-	args.for_btnc = true;
-	args.for_shadow = test_bit(NILFS_I_SHADOW, &ii->i_state) != 0;
+	args.type = ii->i_type | NILFS_I_TYPE_BTNC;

 	btnc_inode = iget5_locked(inode->i_sb, inode->i_ino, nilfs_iget_test,
 				  nilfs_iget_set, &args);

@@ -733,8 +690,8 @@ void nilfs_detach_btree_node_cache(struct inode *inode)
 struct inode *nilfs_iget_for_shadow(struct inode *inode)
 {
 	struct nilfs_iget_args args = {
-		.ino = inode->i_ino, .root = NULL, .cno = 0, .for_gc = false,
-		.for_btnc = false, .for_shadow = true
+		.ino = inode->i_ino, .root = NULL, .cno = 0,
+		.type = NILFS_I_TYPE_SHADOW
 	};
 	struct inode *s_inode;
 	int err;

@@ -900,7 +857,7 @@ static void nilfs_clear_inode(struct inode *inode)
 	if (test_bit(NILFS_I_BMAP, &ii->i_state))
 		nilfs_bmap_clear(ii->i_bmap);

-	if (!test_bit(NILFS_I_BTNC, &ii->i_state))
+	if (!(ii->i_type & NILFS_I_TYPE_BTNC))
 		nilfs_detach_btree_node_cache(inode);

 	if (ii->i_root && inode->i_ino == NILFS_ROOT_INO)

@@ -17,6 +17,7 @@
 #include <linux/mount.h>	/* mnt_want_write_file(), mnt_drop_write_file() */
 #include <linux/buffer_head.h>
 #include <linux/fileattr.h>
+#include <linux/string.h>
 #include "nilfs.h"
 #include "segment.h"
 #include "bmap.h"

@@ -114,7 +115,11 @@ static int nilfs_ioctl_wrap_copy(struct the_nilfs *nilfs,
 }

 /**
- * nilfs_fileattr_get - ioctl to support lsattr
+ * nilfs_fileattr_get - retrieve miscellaneous file attributes
  * @dentry: the object to retrieve from
  * @fa: fileattr pointer
+ *
+ * Return: always 0 as success.
  */
 int nilfs_fileattr_get(struct dentry *dentry, struct fileattr *fa)
 {

@@ -126,7 +131,12 @@ int nilfs_fileattr_get(struct dentry *dentry, struct fileattr *fa)
 }

 /**
- * nilfs_fileattr_set - ioctl to support chattr
+ * nilfs_fileattr_set - change miscellaneous file attributes
  * @idmap: idmap of the mount
  * @dentry: the object to change
  * @fa: fileattr pointer
+ *
+ * Return: 0 on success, or a negative error code on failure.
  */
 int nilfs_fileattr_set(struct mnt_idmap *idmap,
 		       struct dentry *dentry, struct fileattr *fa)

@@ -159,6 +169,10 @@ int nilfs_fileattr_set(struct mnt_idmap *idmap,

 /**
  * nilfs_ioctl_getversion - get info about a file's version (generation number)
  * @inode: inode object
+ * @argp: userspace memory where the generation number of @inode is stored
+ *
+ * Return: 0 on success, or %-EFAULT on error.
  */
 static int nilfs_ioctl_getversion(struct inode *inode, void __user *argp)
 {

@@ -1266,6 +1280,91 @@ out:
 	return ret;
 }

+/**
+ * nilfs_ioctl_get_fslabel - get the volume name of the file system
+ * @sb: super block instance
+ * @argp: pointer to userspace memory where the volume name should be stored
+ *
+ * Return: 0 on success, %-EFAULT if copying to userspace memory fails.
+ */
+static int nilfs_ioctl_get_fslabel(struct super_block *sb, void __user *argp)
+{
+	struct the_nilfs *nilfs = sb->s_fs_info;
+	char label[NILFS_MAX_VOLUME_NAME + 1];
+
+	BUILD_BUG_ON(NILFS_MAX_VOLUME_NAME >= FSLABEL_MAX);
+
+	down_read(&nilfs->ns_sem);
+	memtostr_pad(label, nilfs->ns_sbp[0]->s_volume_name);
+	up_read(&nilfs->ns_sem);
+
+	if (copy_to_user(argp, label, sizeof(label)))
+		return -EFAULT;
+	return 0;
+}
+
+/**
+ * nilfs_ioctl_set_fslabel - set the volume name of the file system
+ * @sb: super block instance
+ * @filp: file object
+ * @argp: pointer to userspace memory that contains the volume name
+ *
+ * Return: 0 on success, or the following negative error code on failure.
+ * * %-EFAULT	- Error copying input data.
+ * * %-EINVAL	- Label length exceeds record size in superblock.
+ * * %-EIO	- I/O error.
+ * * %-EPERM	- Operation not permitted (insufficient permissions).
+ * * %-EROFS	- Read only file system.
+ */
+static int nilfs_ioctl_set_fslabel(struct super_block *sb, struct file *filp,
+				   void __user *argp)
+{
+	char label[NILFS_MAX_VOLUME_NAME + 1];
+	struct the_nilfs *nilfs = sb->s_fs_info;
+	struct nilfs_super_block **sbp;
+	size_t len;
+	int ret;
+
+	if (!capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	ret = mnt_want_write_file(filp);
+	if (ret)
+		return ret;
+
+	if (copy_from_user(label, argp, NILFS_MAX_VOLUME_NAME + 1)) {
+		ret = -EFAULT;
+		goto out_drop_write;
+	}
+
+	len = strnlen(label, NILFS_MAX_VOLUME_NAME + 1);
+	if (len > NILFS_MAX_VOLUME_NAME) {
+		nilfs_err(sb, "unable to set label with more than %zu bytes",
+			  NILFS_MAX_VOLUME_NAME);
+		ret = -EINVAL;
+		goto out_drop_write;
+	}
+
+	down_write(&nilfs->ns_sem);
+	sbp = nilfs_prepare_super(sb, false);
+	if (unlikely(!sbp)) {
+		ret = -EIO;
+		goto out_unlock;
+	}
+
+	strtomem_pad(sbp[0]->s_volume_name, label, 0);
+	if (sbp[1])
+		strtomem_pad(sbp[1]->s_volume_name, label, 0);
+
+	ret = nilfs_commit_super(sb, NILFS_SB_COMMIT_ALL);
+
+out_unlock:
+	up_write(&nilfs->ns_sem);
+out_drop_write:
+	mnt_drop_write_file(filp);
+	return ret;
+}
+
 long nilfs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 {
 	struct inode *inode = file_inode(filp);

@@ -1308,6 +1407,10 @@ long nilfs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 		return nilfs_ioctl_set_alloc_range(inode, argp);
 	case FITRIM:
 		return nilfs_ioctl_trim_fs(inode, argp);
+	case FS_IOC_GETFSLABEL:
+		return nilfs_ioctl_get_fslabel(inode->i_sb, argp);
+	case FS_IOC_SETFSLABEL:
+		return nilfs_ioctl_set_fslabel(inode->i_sb, filp, argp);
 	default:
 		return -ENOTTY;
 	}

@@ -1334,6 +1437,8 @@ long nilfs_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	case NILFS_IOCTL_RESIZE:
 	case NILFS_IOCTL_SET_ALLOC_RANGE:
 	case FITRIM:
+	case FS_IOC_GETFSLABEL:
+	case FS_IOC_SETFSLABEL:
 		break;
 	default:
 		return -ENOIOCTLCMD;

@@ -411,7 +411,7 @@ nilfs_mdt_write_page(struct page *page, struct writeback_control *wbc)
 	 * have dirty folios that try to be flushed in background.
 	 * So, here we simply discard this dirty folio.
 	 */
-	nilfs_clear_folio_dirty(folio, false);
+	nilfs_clear_folio_dirty(folio);
 	folio_unlock(folio);
 	return -EROFS;
 }

@@ -638,10 +638,10 @@ void nilfs_mdt_restore_from_shadow_map(struct inode *inode)
 	if (mi->mi_palloc_cache)
 		nilfs_palloc_clear_cache(inode);

-	nilfs_clear_dirty_pages(inode->i_mapping, true);
+	nilfs_clear_dirty_pages(inode->i_mapping);
 	nilfs_copy_back_pages(inode->i_mapping, shadow->inode->i_mapping);

-	nilfs_clear_dirty_pages(ii->i_assoc_inode->i_mapping, true);
+	nilfs_clear_dirty_pages(ii->i_assoc_inode->i_mapping);
 	nilfs_copy_back_pages(ii->i_assoc_inode->i_mapping,
 			      NILFS_I(shadow->inode)->i_assoc_inode->i_mapping);

@@ -22,6 +22,7 @@
 /**
  * struct nilfs_inode_info - nilfs inode data in memory
  * @i_flags: inode flags
+ * @i_type: inode type (combination of flags that indicate usage)
  * @i_state: dynamic state flags
  * @i_bmap: pointer on i_bmap_data
  * @i_bmap_data: raw block mapping

@@ -37,6 +38,7 @@
  */
 struct nilfs_inode_info {
 	__u32 i_flags;
+	unsigned int i_type;
 	unsigned long i_state;		/* Dynamic state flags */
 	struct nilfs_bmap *i_bmap;
 	struct nilfs_bmap i_bmap_data;

@@ -90,9 +92,16 @@ enum {
 	NILFS_I_UPDATED,		/* The file has been written back */
 	NILFS_I_INODE_SYNC,		/* dsync is not allowed for inode */
 	NILFS_I_BMAP,			/* has bmap and btnode_cache */
 	NILFS_I_GCINODE,		/* inode for GC, on memory only */
-	NILFS_I_BTNC,			/* inode for btree node cache */
-	NILFS_I_SHADOW,			/* inode for shadowed page cache */
 };

+/*
+ * Flags to identify the usage of on-memory inodes (i_type)
+ */
+enum {
+	NILFS_I_TYPE_NORMAL =	0,
+	NILFS_I_TYPE_GC =	0x0001,	/* For data caching during GC */
+	NILFS_I_TYPE_BTNC =	0x0002,	/* For btree node cache */
+	NILFS_I_TYPE_SHADOW =	0x0004,	/* For shadowed page cache */
+};
+
 /*

@@ -103,6 +112,18 @@ enum {
 	NILFS_SB_COMMIT_ALL		/* Commit both super blocks */
 };

+/**
+ * define NILFS_MAX_VOLUME_NAME - maximum number of characters (bytes) in a
+ *                                file system volume name
+ *
+ * Defined by the size of the volume name field in the on-disk superblocks.
+ * This volume name does not include the terminating NULL byte if the string
+ * length matches the field size, so use (NILFS_MAX_VOLUME_NAME + 1) for the
+ * size of the buffer that requires a NULL byte termination.
+ */
+#define NILFS_MAX_VOLUME_NAME	\
+	sizeof_field(struct nilfs_super_block, s_volume_name)
+
 /*
  * Macros to check inode numbers
  */

@@ -262,7 +262,7 @@ repeat:
|
||||
NILFS_FOLIO_BUG(folio, "inconsistent dirty state");
|
||||
|
||||
dfolio = filemap_grab_folio(dmap, folio->index);
|
||||
if (unlikely(IS_ERR(dfolio))) {
|
||||
if (IS_ERR(dfolio)) {
|
||||
/* No empty page is added to the page cache */
|
||||
folio_unlock(folio);
|
||||
err = PTR_ERR(dfolio);
|
||||
@@ -357,9 +357,8 @@ repeat:
|
||||
/**
|
||||
* nilfs_clear_dirty_pages - discard dirty pages in address space
|
||||
* @mapping: address space with dirty pages for discarding
|
||||
* @silent: suppress [true] or print [false] warning messages
|
||||
*/
|
||||
void nilfs_clear_dirty_pages(struct address_space *mapping, bool silent)
|
||||
void nilfs_clear_dirty_pages(struct address_space *mapping)
|
||||
{
|
||||
struct folio_batch fbatch;
|
||||
unsigned int i;
|
||||
@@ -380,7 +379,7 @@ void nilfs_clear_dirty_pages(struct address_space *mapping, bool silent)
|
||||
* was acquired. Skip processing in that case.
|
||||
*/
|
||||
if (likely(folio->mapping == mapping))
|
||||
nilfs_clear_folio_dirty(folio, silent);
|
||||
nilfs_clear_folio_dirty(folio);
|
||||
|
||||
folio_unlock(folio);
|
||||
}
|
||||
@@ -392,20 +391,13 @@ void nilfs_clear_dirty_pages(struct address_space *mapping, bool silent)
|
||||
/**
|
||||
* nilfs_clear_folio_dirty - discard dirty folio
|
||||
* @folio: dirty folio that will be discarded
|
||||
* @silent: suppress [true] or print [false] warning messages
|
||||
*/
|
||||
void nilfs_clear_folio_dirty(struct folio *folio, bool silent)
|
||||
void nilfs_clear_folio_dirty(struct folio *folio)
|
||||
{
|
||||
struct inode *inode = folio->mapping->host;
|
||||
struct super_block *sb = inode->i_sb;
|
||||
struct buffer_head *bh, *head;
|
||||
|
||||
BUG_ON(!folio_test_locked(folio));
|
||||
|
||||
if (!silent)
|
||||
nilfs_warn(sb, "discard dirty page: offset=%lld, ino=%lu",
|
||||
folio_pos(folio), inode->i_ino);
|
||||
|
||||
folio_clear_uptodate(folio);
|
||||
folio_clear_mappedtodisk(folio);
|
||||
|
||||
@@ -419,11 +411,6 @@ void nilfs_clear_folio_dirty(struct folio *folio, bool silent)
|
||||
bh = head;
|
||||
do {
|
||||
lock_buffer(bh);
|
||||
if (!silent)
|
||||
nilfs_warn(sb,
|
||||
"discard dirty block: blocknr=%llu, size=%zu",
|
||||
(u64)bh->b_blocknr, bh->b_size);
|
||||
|
||||
set_mask_bits(&bh->b_state, clear_bits, 0);
|
||||
unlock_buffer(bh);
|
||||
} while (bh = bh->b_this_page, bh != head);
|
||||
|
||||
@@ -41,8 +41,8 @@ void nilfs_folio_bug(struct folio *);
|
||||
|
||||
int nilfs_copy_dirty_pages(struct address_space *, struct address_space *);
|
||||
void nilfs_copy_back_pages(struct address_space *, struct address_space *);
|
||||
void nilfs_clear_folio_dirty(struct folio *, bool);
|
||||
void nilfs_clear_dirty_pages(struct address_space *, bool);
|
||||
void nilfs_clear_folio_dirty(struct folio *folio);
|
||||
void nilfs_clear_dirty_pages(struct address_space *mapping);
|
||||
unsigned int nilfs_page_count_clean_buffers(struct page *, unsigned int,
|
||||
unsigned int);
|
||||
unsigned long nilfs_find_uncommitted_extent(struct inode *inode,
|
||||
|
||||
@@ -433,8 +433,17 @@ static int nilfs_prepare_segment_for_recovery(struct the_nilfs *nilfs,
|
||||
* The next segment is invalidated by this recovery.
|
||||
*/
|
||||
err = nilfs_sufile_free(sufile, segnum[1]);
|
||||
if (unlikely(err))
|
||||
if (unlikely(err)) {
|
||||
if (err == -ENOENT) {
|
||||
nilfs_err(sb,
|
||||
"checkpoint log inconsistency at block %llu (segment %llu): next segment %llu is unallocated",
|
||||
(unsigned long long)nilfs->ns_last_pseg,
|
||||
(unsigned long long)nilfs->ns_segnum,
|
||||
(unsigned long long)segnum[1]);
|
||||
err = -EINVAL;
|
||||
}
|
||||
goto failed;
|
||||
}
|
||||
|
||||
for (i = 1; i < 4; i++) {
|
||||
err = nilfs_segment_list_add(head, segnum[i]);
|
||||
|
||||
@@ -519,7 +519,7 @@ static void nilfs_segctor_end_finfo(struct nilfs_sc_info *sci,
|
||||
|
||||
ii = NILFS_I(inode);
|
||||
|
||||
if (test_bit(NILFS_I_GCINODE, &ii->i_state))
|
||||
if (ii->i_type & NILFS_I_TYPE_GC)
|
||||
cno = ii->i_cno;
|
||||
else if (NILFS_ROOT_METADATA_FILE(inode->i_ino))
|
||||
cno = 0;
|
||||
@@ -1102,12 +1102,64 @@ static int nilfs_segctor_scan_file_dsync(struct nilfs_sc_info *sci,
|
||||
return err;
|
||||
}
|
||||
|
||||
/**
|
||||
* nilfs_free_segments - free the segments given by an array of segment numbers
|
||||
* @nilfs: nilfs object
|
||||
* @segnumv: array of segment numbers to be freed
|
||||
* @nsegs: number of segments to be freed in @segnumv
|
||||
*
|
||||
* nilfs_free_segments() wraps nilfs_sufile_freev() and
|
||||
* nilfs_sufile_cancel_freev(), and edits the segment usage metadata file
|
||||
* (sufile) to free all segments given by @segnumv and @nsegs at once. If
|
||||
* it fails midway, it cancels the changes so that none of the segments are
|
||||
* freed. If @nsegs is 0, this function does nothing.
|
||||
*
|
||||
* The freeing of segments is not finalized until the writing of a log with
|
||||
* a super root block containing this sufile change is complete, and it can
|
||||
* be canceled with nilfs_sufile_cancel_freev() until then.
|
||||
*
|
||||
* Return: 0 on success, or the following negative error code on failure.
|
||||
* * %-EINVAL - Invalid segment number.
|
||||
* * %-EIO - I/O error (including metadata corruption).
|
||||
* * %-ENOMEM - Insufficient memory available.
|
||||
*/
|
||||
static int nilfs_free_segments(struct the_nilfs *nilfs, __u64 *segnumv,
|
||||
size_t nsegs)
|
||||
{
|
||||
size_t ndone;
|
||||
int ret;
|
||||
|
||||
if (!nsegs)
|
||||
return 0;
|
||||
|
||||
ret = nilfs_sufile_freev(nilfs->ns_sufile, segnumv, nsegs, &ndone);
|
||||
if (unlikely(ret)) {
|
||||
nilfs_sufile_cancel_freev(nilfs->ns_sufile, segnumv, ndone,
|
||||
NULL);
|
||||
/*
|
||||
* If a segment usage of the segments to be freed is in a
|
||||
* hole block, nilfs_sufile_freev() will return -ENOENT.
|
||||
* In this case, -EINVAL should be returned to the caller
|
||||
* since there is something wrong with the given segment
|
||||
* number array. This error can only occur during GC, so
|
||||
* there is no need to worry about it propagating to other
|
||||
* callers (such as fsync).
|
||||
*/
|
||||
if (ret == -ENOENT) {
|
||||
nilfs_err(nilfs->ns_sb,
|
||||
"The segment usage entry %llu to be freed is invalid (in a hole)",
|
||||
(unsigned long long)segnumv[ndone]);
|
||||
ret = -EINVAL;
|
||||
}
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int nilfs_segctor_collect_blocks(struct nilfs_sc_info *sci, int mode)
|
||||
{
|
||||
struct the_nilfs *nilfs = sci->sc_super->s_fs_info;
|
||||
struct list_head *head;
|
||||
struct nilfs_inode_info *ii;
|
||||
size_t ndone;
|
||||
int err = 0;
|
||||
|
||||
switch (nilfs_sc_cstage_get(sci)) {
|
||||
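The all-or-nothing shape that nilfs_free_segments() wraps above — free as many entries as possible, roll the partial work back on failure, and remap the hole-block error — can be sketched outside the kernel. All names below are invented for illustration; they are not nilfs2 APIs:

```c
#include <assert.h>
#include <stddef.h>

enum { SEG_IN_USE, SEG_FREE, SEG_HOLE };

/* Toy stand-in for nilfs_sufile_freev(): frees entries until it hits a
 * hole, reporting via *ndone how many were freed before the failure. */
static int toy_freev(int *state, const size_t *segv, size_t n, size_t *ndone)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (state[segv[i]] == SEG_HOLE) {
			*ndone = i;
			return -2;	/* stands in for -ENOENT */
		}
		state[segv[i]] = SEG_FREE;
	}
	*ndone = n;
	return 0;
}

/* Toy stand-in for nilfs_sufile_cancel_freev(): rolls back the first
 * @ndone frees so the whole operation is all-or-nothing. */
static void toy_cancel_freev(int *state, const size_t *segv, size_t ndone)
{
	size_t i;

	for (i = 0; i < ndone; i++)
		state[segv[i]] = SEG_IN_USE;
}

int toy_free_segments(int *state, const size_t *segv, size_t n)
{
	size_t ndone;
	int ret;

	if (!n)
		return 0;

	ret = toy_freev(state, segv, n, &ndone);
	if (ret) {
		toy_cancel_freev(state, segv, ndone);
		ret = -1;	/* stands in for the -EINVAL remapping */
	}
	return ret;
}
```

Either every segment in the array ends up freed, or none does — which is what lets the caller treat the error as a bad input array rather than a half-applied update.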
@@ -1201,14 +1253,10 @@ static int nilfs_segctor_collect_blocks(struct nilfs_sc_info *sci, int mode)
 		nilfs_sc_cstage_inc(sci);
 		fallthrough;
 	case NILFS_ST_SUFILE:
-		err = nilfs_sufile_freev(nilfs->ns_sufile, sci->sc_freesegs,
-					 sci->sc_nfreesegs, &ndone);
-		if (unlikely(err)) {
-			nilfs_sufile_cancel_freev(nilfs->ns_sufile,
-						  sci->sc_freesegs, ndone,
-						  NULL);
+		err = nilfs_free_segments(nilfs, sci->sc_freesegs,
+					  sci->sc_nfreesegs);
+		if (unlikely(err))
 			break;
-		}
 		sci->sc_stage.flags |= NILFS_CF_SUFREED;

 		err = nilfs_segctor_scan_file(sci, nilfs->ns_sufile,
@@ -2456,7 +2504,7 @@ static void nilfs_construction_timeout(struct timer_list *t)
 {
 	struct nilfs_sc_info *sci = from_timer(sci, t, sc_timer);

-	wake_up_process(sci->sc_timer_task);
+	wake_up_process(sci->sc_task);
 }

 static void
@@ -2582,123 +2630,85 @@ static int nilfs_segctor_flush_mode(struct nilfs_sc_info *sci)
 }

 /**
- * nilfs_segctor_thread - main loop of the segment constructor thread.
+ * nilfs_log_write_required - determine whether log writing is required
+ * @sci: nilfs_sc_info struct
+ * @modep: location for storing log writing mode
+ *
+ * Return: true if log writing is required, false otherwise.  If log writing
+ * is required, the mode is stored in the location pointed to by @modep.
+ */
+static bool nilfs_log_write_required(struct nilfs_sc_info *sci, int *modep)
+{
+	bool timedout, ret = true;
+
+	spin_lock(&sci->sc_state_lock);
+	timedout = ((sci->sc_state & NILFS_SEGCTOR_COMMIT) &&
+		    time_after_eq(jiffies, sci->sc_timer.expires));
+	if (timedout || sci->sc_seq_request != sci->sc_seq_done)
+		*modep = SC_LSEG_SR;
+	else if (sci->sc_flush_request)
+		*modep = nilfs_segctor_flush_mode(sci);
+	else
+		ret = false;
+
+	spin_unlock(&sci->sc_state_lock);
+	return ret;
+}
+
+/**
+ * nilfs_segctor_thread - main loop of the log writer thread
  * @arg: pointer to a struct nilfs_sc_info.
  *
- * nilfs_segctor_thread() initializes a timer and serves as a daemon
- * to execute segment constructions.
+ * nilfs_segctor_thread() is the main loop function of the log writer kernel
+ * thread, which determines whether log writing is necessary, and if so,
+ * performs the log write in the background, or waits if not.  It is also
+ * used to decide the background writeback of the superblock.
+ *
+ * Return: Always 0.
  */
 static int nilfs_segctor_thread(void *arg)
 {
 	struct nilfs_sc_info *sci = (struct nilfs_sc_info *)arg;
 	struct the_nilfs *nilfs = sci->sc_super->s_fs_info;
-	int timeout = 0;
-
-	sci->sc_timer_task = current;
-	timer_setup(&sci->sc_timer, nilfs_construction_timeout, 0);
-
-	/* start sync. */
-	sci->sc_task = current;
-	wake_up(&sci->sc_wait_task); /* for nilfs_segctor_start_thread() */
 	nilfs_info(sci->sc_super,
		   "segctord starting. Construction interval = %lu seconds, CP frequency < %lu seconds",
 		   sci->sc_interval / HZ, sci->sc_mjcp_freq / HZ);

 	set_freezable();
-	spin_lock(&sci->sc_state_lock);
- loop:
-	for (;;) {
-		int mode;
-
-		if (sci->sc_state & NILFS_SEGCTOR_QUIT)
-			goto end_thread;

-		if (timeout || sci->sc_seq_request != sci->sc_seq_done)
-			mode = SC_LSEG_SR;
-		else if (sci->sc_flush_request)
-			mode = nilfs_segctor_flush_mode(sci);
-		else
-			break;
+	while (!kthread_should_stop()) {
+		DEFINE_WAIT(wait);
+		bool should_write;
		int mode;

-		spin_unlock(&sci->sc_state_lock);
-		nilfs_segctor_thread_construct(sci, mode);
-		spin_lock(&sci->sc_state_lock);
-		timeout = 0;
-	}
-
-
-	if (freezing(current)) {
-		spin_unlock(&sci->sc_state_lock);
-		try_to_freeze();
-		spin_lock(&sci->sc_state_lock);
-	} else {
-		DEFINE_WAIT(wait);
-		int should_sleep = 1;
+		if (freezing(current)) {
+			try_to_freeze();
+			continue;
+		}

 		prepare_to_wait(&sci->sc_wait_daemon, &wait,
 				TASK_INTERRUPTIBLE);
-
-		if (sci->sc_seq_request != sci->sc_seq_done)
-			should_sleep = 0;
-		else if (sci->sc_flush_request)
-			should_sleep = 0;
-		else if (sci->sc_state & NILFS_SEGCTOR_COMMIT)
-			should_sleep = time_before(jiffies,
-						   sci->sc_timer.expires);
-
-		if (should_sleep) {
-			spin_unlock(&sci->sc_state_lock);
+		should_write = nilfs_log_write_required(sci, &mode);
+		if (!should_write)
 			schedule();
-			spin_lock(&sci->sc_state_lock);
-		}
 		finish_wait(&sci->sc_wait_daemon, &wait);
-		timeout = ((sci->sc_state & NILFS_SEGCTOR_COMMIT) &&
-			   time_after_eq(jiffies, sci->sc_timer.expires));

 		if (nilfs_sb_dirty(nilfs) && nilfs_sb_need_update(nilfs))
 			set_nilfs_discontinued(nilfs);
-	}
-	goto loop;

- end_thread:
+		if (should_write)
+			nilfs_segctor_thread_construct(sci, mode);
+	}
+
 	/* end sync. */
+	spin_lock(&sci->sc_state_lock);
 	sci->sc_task = NULL;
 	timer_shutdown_sync(&sci->sc_timer);
-	wake_up(&sci->sc_wait_task); /* for nilfs_segctor_kill_thread() */
 	spin_unlock(&sci->sc_state_lock);
 	return 0;
 }
-
-static int nilfs_segctor_start_thread(struct nilfs_sc_info *sci)
-{
-	struct task_struct *t;
-
-	t = kthread_run(nilfs_segctor_thread, sci, "segctord");
-	if (IS_ERR(t)) {
-		int err = PTR_ERR(t);
-
-		nilfs_err(sci->sc_super, "error %d creating segctord thread",
-			  err);
-		return err;
-	}
-	wait_event(sci->sc_wait_task, sci->sc_task != NULL);
-	return 0;
-}
-
-static void nilfs_segctor_kill_thread(struct nilfs_sc_info *sci)
-	__acquires(&sci->sc_state_lock)
-	__releases(&sci->sc_state_lock)
-{
-	sci->sc_state |= NILFS_SEGCTOR_QUIT;
-
-	while (sci->sc_task) {
-		wake_up(&sci->sc_wait_daemon);
-		spin_unlock(&sci->sc_state_lock);
-		wait_event(sci->sc_wait_task, sci->sc_task == NULL);
-		spin_lock(&sci->sc_state_lock);
-	}
-}

 /*
  * Setup & clean-up functions
  */
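The decision made by nilfs_log_write_required() above — full construction on timeout or pending requests, otherwise a flush if one was asked for, otherwise sleep — can be restated as a small pure function. The boolean parameters below replace the real sci fields and locking; the names are illustrative only:

```c
#include <assert.h>
#include <stdbool.h>

enum { MODE_NONE = 0, MODE_LSEG_SR = 1, MODE_FLUSH = 2 };

/* Toy restatement of the log-write decision: a timed-out commit or a
 * pending sync request forces a full construction with super root
 * (MODE_LSEG_SR); otherwise a requested flush runs in flush mode; and
 * with nothing to do the caller is told it may sleep. */
bool log_write_required(bool timedout, bool requests_pending,
			bool flush_requested, int *modep)
{
	if (timedout || requests_pending) {
		*modep = MODE_LSEG_SR;
		return true;
	}
	if (flush_requested) {
		*modep = MODE_FLUSH;
		return true;
	}
	return false;	/* nothing to write; *modep is left untouched */
}
```

Note the precedence: a timeout wins over a flush request, which is what makes the checkpoint-frequency guarantee hold even under a steady stream of flushes.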
@@ -2719,7 +2729,6 @@ static struct nilfs_sc_info *nilfs_segctor_new(struct super_block *sb,

 	init_waitqueue_head(&sci->sc_wait_request);
 	init_waitqueue_head(&sci->sc_wait_daemon);
-	init_waitqueue_head(&sci->sc_wait_task);
 	spin_lock_init(&sci->sc_state_lock);
 	INIT_LIST_HEAD(&sci->sc_dirty_files);
 	INIT_LIST_HEAD(&sci->sc_segbufs);
@@ -2774,8 +2783,12 @@ static void nilfs_segctor_destroy(struct nilfs_sc_info *sci)

 	up_write(&nilfs->ns_segctor_sem);

+	if (sci->sc_task) {
+		wake_up(&sci->sc_wait_daemon);
+		kthread_stop(sci->sc_task);
+	}
+
 	spin_lock(&sci->sc_state_lock);
-	nilfs_segctor_kill_thread(sci);
 	flag = ((sci->sc_state & NILFS_SEGCTOR_COMMIT) || sci->sc_flush_request
 		|| sci->sc_seq_request != sci->sc_seq_done);
 	spin_unlock(&sci->sc_state_lock);
@@ -2823,14 +2836,15 @@ static void nilfs_segctor_destroy(struct nilfs_sc_info *sci)
  * This allocates a log writer object, initializes it, and starts the
  * log writer.
  *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error code is returned.
- *
- * %-ENOMEM - Insufficient memory available.
+ * Return: 0 on success, or the following negative error code on failure.
+ * * %-EINTR	- Log writer thread creation failed due to interruption.
+ * * %-ENOMEM	- Insufficient memory available.
  */
 int nilfs_attach_log_writer(struct super_block *sb, struct nilfs_root *root)
 {
 	struct the_nilfs *nilfs = sb->s_fs_info;
+	struct nilfs_sc_info *sci;
+	struct task_struct *t;
 	int err;

 	if (nilfs->ns_writer) {
@@ -2843,15 +2857,23 @@ int nilfs_attach_log_writer(struct super_block *sb, struct nilfs_root *root)
 		return 0;
 	}

-	nilfs->ns_writer = nilfs_segctor_new(sb, root);
-	if (!nilfs->ns_writer)
+	sci = nilfs_segctor_new(sb, root);
+	if (unlikely(!sci))
 		return -ENOMEM;

-	err = nilfs_segctor_start_thread(nilfs->ns_writer);
-	if (unlikely(err))
+	nilfs->ns_writer = sci;
+	t = kthread_create(nilfs_segctor_thread, sci, "segctord");
+	if (IS_ERR(t)) {
+		err = PTR_ERR(t);
+		nilfs_err(sb, "error %d creating segctord thread", err);
 		nilfs_detach_log_writer(sb);
+		return err;
+	}
+	sci->sc_task = t;
+	timer_setup(&sci->sc_timer, nilfs_construction_timeout, 0);

-	return err;
+	wake_up_process(sci->sc_task);
+	return 0;
 }

 /**

@@ -22,10 +22,10 @@ struct nilfs_root;
  * struct nilfs_recovery_info - Recovery information
  * @ri_need_recovery: Recovery status
  * @ri_super_root: Block number of the last super root
- * @ri_ri_cno: Number of the last checkpoint
+ * @ri_cno: Number of the last checkpoint
  * @ri_lsegs_start: Region for roll-forwarding (start block number)
  * @ri_lsegs_end: Region for roll-forwarding (end block number)
- * @ri_lseg_start_seq: Sequence value of the segment at ri_lsegs_start
+ * @ri_lsegs_start_seq: Sequence value of the segment at ri_lsegs_start
  * @ri_used_segments: List of segments to be mark active
  * @ri_pseg_start: Block number of the last partial segment
  * @ri_seq: Sequence number on the last partial segment
@@ -105,9 +105,8 @@ struct nilfs_segsum_pointer {
  * @sc_flush_request: inode bitmap of metadata files to be flushed
  * @sc_wait_request: Client request queue
  * @sc_wait_daemon: Daemon wait queue
- * @sc_wait_task: Start/end wait queue to control segctord task
  * @sc_seq_request: Request counter
- * @sc_seq_accept: Accepted request count
+ * @sc_seq_accepted: Accepted request count
  * @sc_seq_done: Completion counter
  * @sc_sync: Request of explicit sync operation
  * @sc_interval: Timeout value of background construction
@@ -158,7 +157,6 @@ struct nilfs_sc_info {

 	wait_queue_head_t	sc_wait_request;
 	wait_queue_head_t	sc_wait_daemon;
-	wait_queue_head_t	sc_wait_task;

 	__u32			sc_seq_request;
 	__u32			sc_seq_accepted;
@@ -171,7 +169,6 @@ struct nilfs_sc_info {
 	unsigned long		sc_watermark;

 	struct timer_list	sc_timer;
-	struct task_struct     *sc_timer_task;
 	struct task_struct     *sc_task;
 };
@@ -192,7 +189,6 @@ enum {
 };

 /* sc_state */
-#define NILFS_SEGCTOR_QUIT	0x0001	/* segctord is being destroyed */
 #define NILFS_SEGCTOR_COMMIT	0x0004	/* committed transaction exists */

 /*

@@ -79,10 +79,17 @@ nilfs_sufile_block_get_segment_usage(const struct inode *sufile, __u64 segnum,
 		NILFS_MDT(sufile)->mi_entry_size;
 }

-static inline int nilfs_sufile_get_header_block(struct inode *sufile,
-						struct buffer_head **bhp)
+static int nilfs_sufile_get_header_block(struct inode *sufile,
+					 struct buffer_head **bhp)
 {
-	return nilfs_mdt_get_block(sufile, 0, 0, NULL, bhp);
+	int err = nilfs_mdt_get_block(sufile, 0, 0, NULL, bhp);
+
+	if (unlikely(err == -ENOENT)) {
+		nilfs_error(sufile->i_sb,
+			    "missing header block in segment usage metadata");
+		err = -EIO;
+	}
+	return err;
 }

 static inline int
@@ -506,8 +513,15 @@ int nilfs_sufile_mark_dirty(struct inode *sufile, __u64 segnum)

 	down_write(&NILFS_MDT(sufile)->mi_sem);
 	ret = nilfs_sufile_get_segment_usage_block(sufile, segnum, 0, &bh);
-	if (ret)
+	if (unlikely(ret)) {
+		if (ret == -ENOENT) {
+			nilfs_error(sufile->i_sb,
+				    "segment usage for segment %llu is unreadable due to a hole block",
+				    (unsigned long long)segnum);
+			ret = -EIO;
+		}
 		goto out_sem;
+	}

 	kaddr = kmap_local_page(bh->b_page);
 	su = nilfs_sufile_block_get_segment_usage(sufile, segnum, bh, kaddr);
@@ -840,21 +854,17 @@ out:
 }

 /**
- * nilfs_sufile_get_suinfo -
+ * nilfs_sufile_get_suinfo - get segment usage information
  * @sufile: inode of segment usage file
  * @segnum: segment number to start looking
- * @buf: array of suinfo
- * @sisz: byte size of suinfo
- * @nsi: size of suinfo array
+ * @buf: array of suinfo
+ * @sisz: byte size of suinfo
+ * @nsi: size of suinfo array
  *
- * Description:
- *
- * Return Value: On success, 0 is returned and .... On error, one of the
- * following negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
+ * Return: Count of segment usage info items stored in the output buffer on
+ * success, or the following negative error code on failure.
+ * * %-EIO	- I/O error (including metadata corruption).
+ * * %-ENOMEM	- Insufficient memory available.
 */
 ssize_t nilfs_sufile_get_suinfo(struct inode *sufile, __u64 segnum, void *buf,
 				unsigned int sisz, size_t nsi)
@@ -1241,9 +1251,15 @@ int nilfs_sufile_read(struct super_block *sb, size_t susize,
 	if (err)
 		goto failed;

-	err = nilfs_sufile_get_header_block(sufile, &header_bh);
-	if (err)
+	err = nilfs_mdt_get_block(sufile, 0, 0, NULL, &header_bh);
+	if (unlikely(err)) {
+		if (err == -ENOENT) {
+			nilfs_err(sb,
+				  "missing header block in segment usage metadata");
+			err = -EINVAL;
+		}
 		goto failed;
+	}

 	sui = NILFS_SUI(sufile);
 	kaddr = kmap_local_page(header_bh->b_page);
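The sufile hunks above all follow one pattern: a lookup that legitimately reports -ENOENT for a hole block is translated before it escapes, because to the caller a missing metadata block means corruption (-EIO) or a bad superblock (-EINVAL), not "no such entry". A minimal sketch of that translation, with an invented stand-in for the block lookup:

```c
#include <assert.h>
#include <errno.h>

/* Invented stand-in for a metadata block lookup that reports -ENOENT
 * when the block falls in a hole (i.e. was never written). */
static int get_block(int present)
{
	return present ? 0 : -ENOENT;
}

/* Remap -ENOENT at the boundary: a hole where a mandatory header block
 * should be is metadata corruption from the caller's point of view, so
 * it must not propagate as "no such file or directory". */
int get_header_block(int present)
{
	int err = get_block(present);

	if (err == -ENOENT)
		err = -EIO;
	return err;
}
```

Doing the remap inside the helper, rather than at every call site, is what keeps the unexpected -ENOENT from leaking to userspace through paths such as ioctl handlers.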
@@ -105,6 +105,10 @@ static void nilfs_set_error(struct super_block *sb)

 /**
  * __nilfs_error() - report failure condition on a filesystem
+ * @sb: super block instance
+ * @function: name of calling function
+ * @fmt: format string for message to be output
+ * @...: optional arguments to @fmt
  *
  * __nilfs_error() sets an ERROR_FS flag on the superblock as well as
  * reporting an error message. This function should be called when
@@ -156,6 +160,7 @@ struct inode *nilfs_alloc_inode(struct super_block *sb)
 		return NULL;
 	ii->i_bh = NULL;
 	ii->i_state = 0;
+	ii->i_type = 0;
 	ii->i_cno = 0;
 	ii->i_assoc_inode = NULL;
 	ii->i_bmap = &ii->i_bmap_data;
@@ -1063,6 +1068,10 @@ nilfs_fill_super(struct super_block *sb, struct fs_context *fc)
 	if (err)
 		goto failed_nilfs;

+	super_set_uuid(sb, nilfs->ns_sbp[0]->s_uuid,
+		       sizeof(nilfs->ns_sbp[0]->s_uuid));
+	super_set_sysfs_name_bdev(sb);
+
 	cno = nilfs_last_cno(nilfs);
 	err = nilfs_attach_checkpoint(sb, cno, true, &fsroot);
 	if (err) {

@@ -12,7 +12,6 @@
 #include <linux/slab.h>
 #include <linux/blkdev.h>
 #include <linux/backing-dev.h>
-#include <linux/random.h>
 #include <linux/log2.h>
 #include <linux/crc32.h>
 #include "nilfs.h"
@@ -69,7 +68,6 @@ struct the_nilfs *alloc_nilfs(struct super_block *sb)
 	INIT_LIST_HEAD(&nilfs->ns_dirty_files);
 	INIT_LIST_HEAD(&nilfs->ns_gc_inodes);
 	spin_lock_init(&nilfs->ns_inode_lock);
-	spin_lock_init(&nilfs->ns_next_gen_lock);
 	spin_lock_init(&nilfs->ns_last_segment_lock);
 	nilfs->ns_cptree = RB_ROOT;
 	spin_lock_init(&nilfs->ns_cptree_lock);
@@ -754,9 +752,6 @@ int init_nilfs(struct the_nilfs *nilfs, struct super_block *sb)
 	nilfs->ns_blocksize_bits = sb->s_blocksize_bits;
 	nilfs->ns_blocksize = blocksize;

-	get_random_bytes(&nilfs->ns_next_generation,
-			 sizeof(nilfs->ns_next_generation));
-
 	err = nilfs_store_disk_layout(nilfs, sbp);
 	if (err)
 		goto failed_sbh;

@@ -71,8 +71,6 @@ enum {
  * @ns_dirty_files: list of dirty files
  * @ns_inode_lock: lock protecting @ns_dirty_files
  * @ns_gc_inodes: dummy inodes to keep live blocks
- * @ns_next_generation: next generation number for inodes
- * @ns_next_gen_lock: lock protecting @ns_next_generation
  * @ns_mount_opt: mount options
  * @ns_resuid: uid for reserved blocks
  * @ns_resgid: gid for reserved blocks
@@ -161,10 +159,6 @@ struct the_nilfs {
 	/* GC inode list */
 	struct list_head	ns_gc_inodes;

-	/* Inode allocator */
-	u32			ns_next_generation;
-	spinlock_t		ns_next_gen_lock;
-
 	/* Mount options */
 	unsigned long		ns_mount_opt;

@@ -1187,7 +1187,7 @@ static int ocfs2_write_cluster(struct address_space *mapping,

 		/* This is the direct io target page. */
 		if (wc->w_pages[i] == NULL) {
-			p_blkno++;
+			p_blkno += (1 << (PAGE_SHIFT - inode->i_sb->s_blocksize_bits));
 			continue;
 		}

@@ -3512,16 +3512,6 @@ static int dx_leaf_sort_cmp(const void *a, const void *b)
 	return 0;
 }

-static void dx_leaf_sort_swap(void *a, void *b, int size)
-{
-	struct ocfs2_dx_entry *entry1 = a;
-	struct ocfs2_dx_entry *entry2 = b;
-
-	BUG_ON(size != sizeof(*entry1));
-
-	swap(*entry1, *entry2);
-}
-
 static int ocfs2_dx_leaf_same_major(struct ocfs2_dx_leaf *dx_leaf)
 {
 	struct ocfs2_dx_entry_list *dl_list = &dx_leaf->dl_list;
@@ -3782,7 +3772,7 @@ static int ocfs2_dx_dir_rebalance(struct ocfs2_super *osb, struct inode *dir,
 	 */
 	sort(dx_leaf->dl_list.de_entries, num_used,
 	     sizeof(struct ocfs2_dx_entry), dx_leaf_sort_cmp,
-	     dx_leaf_sort_swap);
+	     NULL);

 	ocfs2_journal_dirty(handle, dx_leaf_bh);

@@ -3151,11 +3151,8 @@ static int ocfs2_dlm_seq_show(struct seq_file *m, void *v)
 #ifdef CONFIG_OCFS2_FS_STATS
 	if (!lockres->l_lock_wait && dlm_debug->d_filter_secs) {
 		now = ktime_to_us(ktime_get_real());
-		if (lockres->l_lock_prmode.ls_last >
-		    lockres->l_lock_exmode.ls_last)
-			last = lockres->l_lock_prmode.ls_last;
-		else
-			last = lockres->l_lock_exmode.ls_last;
+		last = max(lockres->l_lock_prmode.ls_last,
+			   lockres->l_lock_exmode.ls_last);
 		/*
 		 * Use d_filter_secs field to filter lock resources dump,
 		 * the default d_filter_secs(0) value filters nothing,

@@ -1002,6 +1002,25 @@ static int ocfs2_sync_local_to_main(struct ocfs2_super *osb,
 		start = bit_off + 1;
 	}

+	/* clear the contiguous bits until the end boundary */
+	if (count) {
+		blkno = la_start_blk +
+			ocfs2_clusters_to_blocks(osb->sb,
+						 start - count);
+
+		trace_ocfs2_sync_local_to_main_free(
+				count, start - count,
+				(unsigned long long)la_start_blk,
+				(unsigned long long)blkno);
+
+		status = ocfs2_release_clusters(handle,
+						main_bm_inode,
+						main_bm_bh, blkno,
+						count);
+		if (status < 0)
+			mlog_errno(status);
+	}
+
 bail:
 	if (status)
 		mlog_errno(status);

@@ -371,12 +371,16 @@ int ocfs2_global_read_info(struct super_block *sb, int type)

 	status = ocfs2_extent_map_get_blocks(oinfo->dqi_gqinode, 0, &oinfo->dqi_giblk,
 					     &pcount, NULL);
-	if (status < 0)
+	if (status < 0) {
+		mlog_errno(status);
 		goto out_unlock;
+	}

 	status = ocfs2_qinfo_lock(oinfo, 0);
-	if (status < 0)
+	if (status < 0) {
+		mlog_errno(status);
 		goto out_unlock;
+	}
 	status = sb->s_op->quota_read(sb, type, (char *)&dinfo,
 				      sizeof(struct ocfs2_global_disk_dqinfo),
 				      OCFS2_GLOBAL_INFO_OFF);
@@ -404,12 +408,11 @@
 	schedule_delayed_work(&oinfo->dqi_sync_work,
 			      msecs_to_jiffies(oinfo->dqi_syncms));

-out_err:
-	return status;
+	return 0;
 out_unlock:
 	ocfs2_unlock_global_qf(oinfo, 0);
-	mlog_errno(status);
-	goto out_err;
+out_err:
+	return status;
 }

 /* Write information to global quota file. Expects exclusive lock on quota

@@ -1392,13 +1392,6 @@ static int cmp_refcount_rec_by_cpos(const void *a, const void *b)
 	return 0;
 }

-static void swap_refcount_rec(void *a, void *b, int size)
-{
-	struct ocfs2_refcount_rec *l = a, *r = b;
-
-	swap(*l, *r);
-}
-
 /*
  * The refcount cpos are ordered by their 64bit cpos,
  * But we will use the low 32 bit to be the e_cpos in the b-tree.
@@ -1474,7 +1467,7 @@ static int ocfs2_divide_leaf_refcount_block(struct buffer_head *ref_leaf_bh,
 	 */
 	sort(&rl->rl_recs, le16_to_cpu(rl->rl_used),
 	     sizeof(struct ocfs2_refcount_rec),
-	     cmp_refcount_rec_by_low_cpos, swap_refcount_rec);
+	     cmp_refcount_rec_by_low_cpos, NULL);

 	ret = ocfs2_find_refcount_split_pos(rl, &cpos, &split_index);
 	if (ret) {
@@ -1499,11 +1492,11 @@ static int ocfs2_divide_leaf_refcount_block(struct buffer_head *ref_leaf_bh,

 	sort(&rl->rl_recs, le16_to_cpu(rl->rl_used),
 	     sizeof(struct ocfs2_refcount_rec),
-	     cmp_refcount_rec_by_cpos, swap_refcount_rec);
+	     cmp_refcount_rec_by_cpos, NULL);

 	sort(&new_rl->rl_recs, le16_to_cpu(new_rl->rl_used),
 	     sizeof(struct ocfs2_refcount_rec),
-	     cmp_refcount_rec_by_cpos, swap_refcount_rec);
+	     cmp_refcount_rec_by_cpos, NULL);

 	*split_cpos = cpos;
 	return 0;

@@ -2357,8 +2357,8 @@ static int ocfs2_verify_volume(struct ocfs2_dinode *di,
 			    (unsigned long long)bh->b_blocknr);
 	} else if (le32_to_cpu(di->id2.i_super.s_clustersize_bits) < 12 ||
 		   le32_to_cpu(di->id2.i_super.s_clustersize_bits) > 20) {
-		mlog(ML_ERROR, "bad cluster size found: %u\n",
-		     1 << le32_to_cpu(di->id2.i_super.s_clustersize_bits));
+		mlog(ML_ERROR, "bad cluster size bit found: %u\n",
+		     le32_to_cpu(di->id2.i_super.s_clustersize_bits));
 	} else if (!le64_to_cpu(di->id2.i_super.s_root_blkno)) {
 		mlog(ML_ERROR, "bad root_blkno: 0\n");
 	} else if (!le64_to_cpu(di->id2.i_super.s_system_dir_blkno)) {

@@ -4167,15 +4167,6 @@ static int cmp_xe(const void *a, const void *b)
 	return 0;
 }

-static void swap_xe(void *a, void *b, int size)
-{
-	struct ocfs2_xattr_entry *l = a, *r = b, tmp;
-
-	tmp = *l;
-	memcpy(l, r, sizeof(struct ocfs2_xattr_entry));
-	memcpy(r, &tmp, sizeof(struct ocfs2_xattr_entry));
-}
-
 /*
  * When the ocfs2_xattr_block is filled up, new bucket will be created
  * and all the xattr entries will be moved to the new bucket.
@@ -4241,7 +4232,7 @@ static void ocfs2_cp_xattr_block_to_bucket(struct inode *inode,
 	trace_ocfs2_cp_xattr_block_to_bucket_end(offset, size, off_change);

 	sort(target + offset, count, sizeof(struct ocfs2_xattr_entry),
-	     cmp_xe, swap_xe);
+	     cmp_xe, NULL);
 }

 /*
@@ -4436,7 +4427,7 @@ static int ocfs2_defrag_xattr_bucket(struct inode *inode,
 	 */
 	sort(entries, le16_to_cpu(xh->xh_count),
 	     sizeof(struct ocfs2_xattr_entry),
-	     cmp_xe_offset, swap_xe);
+	     cmp_xe_offset, NULL);

 	/* Move all name/values to the end of the bucket. */
 	xe = xh->xh_entries;
@@ -4478,7 +4469,7 @@
 	/* sort the entries by their name_hash. */
 	sort(entries, le16_to_cpu(xh->xh_count),
 	     sizeof(struct ocfs2_xattr_entry),
-	     cmp_xe, swap_xe);
+	     cmp_xe, NULL);

 	buf = bucket_buf;
 	for (i = 0; i < bucket->bu_blocks; i++, buf += blocksize)

@@ -303,9 +303,7 @@ static ssize_t proc_reg_read_iter(struct kiocb *iocb, struct iov_iter *iter)
 
 static ssize_t pde_read(struct proc_dir_entry *pde, struct file *file, char __user *buf, size_t count, loff_t *ppos)
 {
-	typeof_member(struct proc_ops, proc_read) read;
-
-	read = pde->proc_ops->proc_read;
+	__auto_type read = pde->proc_ops->proc_read;
 	if (read)
 		return read(file, buf, count, ppos);
 	return -EIO;
@@ -327,9 +325,7 @@ static ssize_t proc_reg_read(struct file *file, char __user *buf, size_t count,
 
 static ssize_t pde_write(struct proc_dir_entry *pde, struct file *file, const char __user *buf, size_t count, loff_t *ppos)
 {
-	typeof_member(struct proc_ops, proc_write) write;
-
-	write = pde->proc_ops->proc_write;
+	__auto_type write = pde->proc_ops->proc_write;
 	if (write)
 		return write(file, buf, count, ppos);
 	return -EIO;

@@ -351,9 +347,7 @@ static ssize_t proc_reg_write(struct file *file, const char __user *buf, size_t
 
 static __poll_t pde_poll(struct proc_dir_entry *pde, struct file *file, struct poll_table_struct *pts)
 {
-	typeof_member(struct proc_ops, proc_poll) poll;
-
-	poll = pde->proc_ops->proc_poll;
+	__auto_type poll = pde->proc_ops->proc_poll;
 	if (poll)
 		return poll(file, pts);
 	return DEFAULT_POLLMASK;

@@ -375,9 +369,7 @@ static __poll_t proc_reg_poll(struct file *file, struct poll_table_struct *pts)
 
 static long pde_ioctl(struct proc_dir_entry *pde, struct file *file, unsigned int cmd, unsigned long arg)
 {
-	typeof_member(struct proc_ops, proc_ioctl) ioctl;
-
-	ioctl = pde->proc_ops->proc_ioctl;
+	__auto_type ioctl = pde->proc_ops->proc_ioctl;
 	if (ioctl)
 		return ioctl(file, cmd, arg);
 	return -ENOTTY;

@@ -400,9 +392,7 @@ static long proc_reg_unlocked_ioctl(struct file *file, unsigned int cmd, unsigne
 #ifdef CONFIG_COMPAT
 static long pde_compat_ioctl(struct proc_dir_entry *pde, struct file *file, unsigned int cmd, unsigned long arg)
 {
-	typeof_member(struct proc_ops, proc_compat_ioctl) compat_ioctl;
-
-	compat_ioctl = pde->proc_ops->proc_compat_ioctl;
+	__auto_type compat_ioctl = pde->proc_ops->proc_compat_ioctl;
 	if (compat_ioctl)
 		return compat_ioctl(file, cmd, arg);
 	return -ENOTTY;

@@ -424,9 +414,7 @@ static long proc_reg_compat_ioctl(struct file *file, unsigned int cmd, unsigned
 
 static int pde_mmap(struct proc_dir_entry *pde, struct file *file, struct vm_area_struct *vma)
 {
-	typeof_member(struct proc_ops, proc_mmap) mmap;
-
-	mmap = pde->proc_ops->proc_mmap;
+	__auto_type mmap = pde->proc_ops->proc_mmap;
 	if (mmap)
 		return mmap(file, vma);
 	return -EIO;

@@ -483,7 +471,6 @@ static int proc_reg_open(struct inode *inode, struct file *file)
 	struct proc_dir_entry *pde = PDE(inode);
 	int rv = 0;
 	typeof_member(struct proc_ops, proc_open) open;
-	typeof_member(struct proc_ops, proc_release) release;
 	struct pde_opener *pdeo;
 
 	if (!pde->proc_ops->proc_lseek)

@@ -510,7 +497,7 @@ static int proc_reg_open(struct inode *inode, struct file *file)
 	if (!use_pde(pde))
 		return -ENOENT;
 
-	release = pde->proc_ops->proc_release;
+	__auto_type release = pde->proc_ops->proc_release;
 	if (release) {
 		pdeo = kmem_cache_alloc(pde_opener_cache, GFP_KERNEL);
 		if (!pdeo) {

@@ -547,9 +534,7 @@ static int proc_reg_release(struct inode *inode, struct file *file)
 	struct pde_opener *pdeo;
 
 	if (pde_is_permanent(pde)) {
-		typeof_member(struct proc_ops, proc_release) release;
-
-		release = pde->proc_ops->proc_release;
+		__auto_type release = pde->proc_ops->proc_release;
 		if (release) {
 			return release(inode, file);
 		}
@@ -543,21 +543,6 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
 		}
 	}
 
-	if (karg.build_id_size) {
-		__u32 build_id_sz;
-
-		err = build_id_parse(vma, build_id_buf, &build_id_sz);
-		if (err) {
-			karg.build_id_size = 0;
-		} else {
-			if (karg.build_id_size < build_id_sz) {
-				err = -ENAMETOOLONG;
-				goto out;
-			}
-			karg.build_id_size = build_id_sz;
-		}
-	}
-
 	if (karg.vma_name_size) {
 		size_t name_buf_sz = min_t(size_t, PATH_MAX, karg.vma_name_size);
 		const struct path *path;
@@ -46,7 +46,7 @@ static void *squashfs_decompressor_create(struct squashfs_sb_info *msblk,
 	}
 
 	kfree(comp_opts);
-	return (__force void *) percpu;
+	return (void *)(__force unsigned long) percpu;
 
 out:
 	for_each_possible_cpu(cpu) {

@@ -61,7 +61,7 @@ out:
 static void squashfs_decompressor_destroy(struct squashfs_sb_info *msblk)
 {
 	struct squashfs_stream __percpu *percpu =
-			(struct squashfs_stream __percpu *) msblk->stream;
+			(void __percpu *)(unsigned long) msblk->stream;
 	struct squashfs_stream *stream;
 	int cpu;
 

@@ -79,7 +79,7 @@ static int squashfs_decompress(struct squashfs_sb_info *msblk, struct bio *bio,
 {
 	struct squashfs_stream *stream;
 	struct squashfs_stream __percpu *percpu =
-			(struct squashfs_stream __percpu *) msblk->stream;
+			(void __percpu *)(unsigned long) msblk->stream;
 	int res;
 
 	local_lock(&percpu->lock);
@@ -1,10 +1,9 @@
+/* SPDX-License-Identifier: 0BSD */
+
 /*
  * Wrapper for decompressing XZ-compressed kernel, initramfs, and initrd
  *
  * Author: Lasse Collin <lasse.collin@tukaani.org>
- *
- * This file has been put into the public domain.
- * You can do whatever you want with this file.
  */
 
 #ifndef DECOMPRESS_UNXZ_H
@@ -2,13 +2,17 @@
 #ifndef _LINUX_FAULT_INJECT_H
 #define _LINUX_FAULT_INJECT_H
 
+#include <linux/err.h>
+#include <linux/types.h>
+
+struct dentry;
+struct kmem_cache;
+
 #ifdef CONFIG_FAULT_INJECTION
 
-#include <linux/types.h>
 #include <linux/debugfs.h>
 #include <linux/atomic.h>
 #include <linux/configfs.h>
 #include <linux/ratelimit.h>
-#include <linux/atomic.h>
 
 /*
  * For explanation of the elements of this struct, see

@@ -51,6 +55,28 @@ int setup_fault_attr(struct fault_attr *attr, char *str);
 bool should_fail_ex(struct fault_attr *attr, ssize_t size, int flags);
 bool should_fail(struct fault_attr *attr, ssize_t size);
 
+#else /* CONFIG_FAULT_INJECTION */
+
+struct fault_attr {
+};
+
+#define DECLARE_FAULT_ATTR(name) struct fault_attr name = {}
+
+static inline int setup_fault_attr(struct fault_attr *attr, char *str)
+{
+	return 0; /* Note: 0 means error for __setup() handlers! */
+}
+static inline bool should_fail_ex(struct fault_attr *attr, ssize_t size, int flags)
+{
+	return false;
+}
+static inline bool should_fail(struct fault_attr *attr, ssize_t size)
+{
+	return false;
+}
+
+#endif /* CONFIG_FAULT_INJECTION */
+
 #ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
 
 struct dentry *fault_create_debugfs_attr(const char *name,

@@ -87,10 +113,6 @@ static inline void fault_config_init(struct fault_config *config,
 
 #endif /* CONFIG_FAULT_INJECTION_CONFIGFS */
 
-#endif /* CONFIG_FAULT_INJECTION */
-
-struct kmem_cache;
-
 #ifdef CONFIG_FAIL_PAGE_ALLOC
 bool should_fail_alloc_page(gfp_t gfp_mask, unsigned int order);
 #else
@@ -119,7 +119,7 @@ write intent log information, three of which are mentioned here.
 */
 
 /* this defines an element in a tracked set
- * .colision is for hash table lookup.
+ * .collision is for hash table lookup.
 * When we process a new IO request, we know its sector, thus can deduce the
 * region number (label) easily. To do the label -> object lookup without a
 * full list walk, we use a simple hash table.

@@ -145,7 +145,7 @@ write intent log information, three of which are mentioned here.
 * But it avoids high order page allocations in kmalloc.
 */
 struct lc_element {
-	struct hlist_node colision;
+	struct hlist_node collision;
 	struct list_head list;		/* LRU list or free list */
 	unsigned refcnt;
 	/* back "pointer" into lc_cache->element[index],
@@ -10,6 +10,7 @@
 #include <linux/sched.h>
 #include <linux/device.h>
 #include <linux/fault-inject.h>
+#include <linux/debugfs.h>
 
 #include <linux/mmc/core.h>
 #include <linux/mmc/card.h>

@@ -19,8 +19,8 @@ struct ratelimit_state {
 	int		burst;
 	int		printed;
 	int		missed;
-	unsigned int	flags;
 	unsigned long	begin;
+	unsigned long	flags;
 };
 
 #define RATELIMIT_STATE_INIT_FLAGS(name, interval_init, burst_init, flags_init) { \
@@ -1,11 +1,10 @@
+/* SPDX-License-Identifier: 0BSD */
+
 /*
  * XZ decompressor
  *
  * Authors: Lasse Collin <lasse.collin@tukaani.org>
  *          Igor Pavlov <https://7-zip.org/>
- *
- * This file has been put into the public domain.
- * You can do whatever you want with this file.
 */
 
 #ifndef XZ_H

@@ -19,11 +18,6 @@
 #	include <stdint.h>
 #endif
 
-/* In Linux, this is used to make extern functions static when needed. */
-#ifndef XZ_EXTERN
-#	define XZ_EXTERN extern
-#endif
-
 /**
  * enum xz_mode - Operation mode
  *

@@ -143,7 +137,7 @@ struct xz_buf {
 	size_t out_size;
 };
 
-/**
+/*
  * struct xz_dec - Opaque type to hold the XZ decoder state
 */
 struct xz_dec;

@@ -191,7 +185,7 @@ struct xz_dec;
 * ready to be used with xz_dec_run(). If memory allocation fails,
 * xz_dec_init() returns NULL.
 */
-XZ_EXTERN struct xz_dec *xz_dec_init(enum xz_mode mode, uint32_t dict_max);
+struct xz_dec *xz_dec_init(enum xz_mode mode, uint32_t dict_max);
 
 /**
 * xz_dec_run() - Run the XZ decoder

@@ -211,7 +205,7 @@ XZ_EXTERN struct xz_dec *xz_dec_init(enum xz_mode mode, uint32_t dict_max);
 * get that amount valid data from the beginning of the stream. You must use
 * the multi-call decoder if you don't want to uncompress the whole stream.
 */
-XZ_EXTERN enum xz_ret xz_dec_run(struct xz_dec *s, struct xz_buf *b);
+enum xz_ret xz_dec_run(struct xz_dec *s, struct xz_buf *b);
 
 /**
 * xz_dec_reset() - Reset an already allocated decoder state
@@ -224,32 +218,38 @@ XZ_EXTERN enum xz_ret xz_dec_run(struct xz_dec *s, struct xz_buf *b);
 * xz_dec_run(). Thus, explicit call to xz_dec_reset() is useful only in
 * multi-call mode.
 */
-XZ_EXTERN void xz_dec_reset(struct xz_dec *s);
+void xz_dec_reset(struct xz_dec *s);
 
 /**
 * xz_dec_end() - Free the memory allocated for the decoder state
 * @s:          Decoder state allocated using xz_dec_init(). If s is NULL,
 *              this function does nothing.
 */
-XZ_EXTERN void xz_dec_end(struct xz_dec *s);
+void xz_dec_end(struct xz_dec *s);
 
-/*
- * Decompressor for MicroLZMA, an LZMA variant with a very minimal header.
- * See xz_dec_microlzma_alloc() below for details.
- *
- * These functions aren't used or available in preboot code and thus aren't
- * marked with XZ_EXTERN. This avoids warnings about static functions that
- * are never defined.
- */
+/**
+ * DOC: MicroLZMA decompressor
+ *
+ * This MicroLZMA header format was created for use in EROFS but may be used
+ * by others too. **In most cases one needs the XZ APIs above instead.**
+ *
+ * The compressed format supported by this decoder is a raw LZMA stream
+ * whose first byte (always 0x00) has been replaced with bitwise-negation
+ * of the LZMA properties (lc/lp/pb) byte. For example, if lc/lp/pb is
+ * 3/0/2, the first byte is 0xA2. This way the first byte can never be 0x00.
+ * Just like with LZMA2, lc + lp <= 4 must be true. The LZMA end-of-stream
+ * marker must not be used. The unused values are reserved for future use.
+ */
+
 /**
 * struct xz_dec_microlzma - Opaque type to hold the MicroLZMA decoder state
 */
 struct xz_dec_microlzma;
 
 /**
 * xz_dec_microlzma_alloc() - Allocate memory for the MicroLZMA decoder
- * @mode        XZ_SINGLE or XZ_PREALLOC
- * @dict_size   LZMA dictionary size. This must be at least 4 KiB and
+ * @mode:       XZ_SINGLE or XZ_PREALLOC
+ * @dict_size:  LZMA dictionary size. This must be at least 4 KiB and
 *              at most 3 GiB.
 *
 * In contrast to xz_dec_init(), this function only allocates the memory
@@ -262,40 +262,30 @@ struct xz_dec_microlzma;
 * On success, xz_dec_microlzma_alloc() returns a pointer to
 * struct xz_dec_microlzma. If memory allocation fails or
 * dict_size is invalid, NULL is returned.
 *
- * The compressed format supported by this decoder is a raw LZMA stream
- * whose first byte (always 0x00) has been replaced with bitwise-negation
- * of the LZMA properties (lc/lp/pb) byte. For example, if lc/lp/pb is
- * 3/0/2, the first byte is 0xA2. This way the first byte can never be 0x00.
- * Just like with LZMA2, lc + lp <= 4 must be true. The LZMA end-of-stream
- * marker must not be used. The unused values are reserved for future use.
+ * This MicroLZMA header format was created for use in EROFS but may be used
+ * by others too.
 */
-extern struct xz_dec_microlzma *xz_dec_microlzma_alloc(enum xz_mode mode,
-						       uint32_t dict_size);
+struct xz_dec_microlzma *xz_dec_microlzma_alloc(enum xz_mode mode,
+						uint32_t dict_size);
 
 /**
 * xz_dec_microlzma_reset() - Reset the MicroLZMA decoder state
- * @s           Decoder state allocated using xz_dec_microlzma_alloc()
- * @comp_size   Compressed size of the input stream
- * @uncomp_size Uncompressed size of the input stream. A value smaller
+ * @s:          Decoder state allocated using xz_dec_microlzma_alloc()
+ * @comp_size:  Compressed size of the input stream
+ * @uncomp_size: Uncompressed size of the input stream. A value smaller
 *              than the real uncompressed size of the input stream can
 *              be specified if uncomp_size_is_exact is set to false.
 *              uncomp_size can never be set to a value larger than the
 *              expected real uncompressed size because it would eventually
 *              result in XZ_DATA_ERROR.
- * @uncomp_size_is_exact This is an int instead of bool to avoid
+ * @uncomp_size_is_exact: This is an int instead of bool to avoid
 *              requiring stdbool.h. This should normally be set to true.
 *              When this is set to false, error detection is weaker.
 */
-extern void xz_dec_microlzma_reset(struct xz_dec_microlzma *s,
-				   uint32_t comp_size, uint32_t uncomp_size,
-				   int uncomp_size_is_exact);
+void xz_dec_microlzma_reset(struct xz_dec_microlzma *s, uint32_t comp_size,
+			    uint32_t uncomp_size, int uncomp_size_is_exact);
 
 /**
 * xz_dec_microlzma_run() - Run the MicroLZMA decoder
- * @s           Decoder state initialized using xz_dec_microlzma_reset()
+ * @s:          Decoder state initialized using xz_dec_microlzma_reset()
 * @b:          Input and output buffers
 *
 * This works similarly to xz_dec_run() with a few important differences.

@@ -329,15 +319,14 @@ extern void xz_dec_microlzma_reset(struct xz_dec_microlzma *s,
 * may be changed normally like with XZ_PREALLOC. This way input data can be
 * provided from non-contiguous memory.
 */
-extern enum xz_ret xz_dec_microlzma_run(struct xz_dec_microlzma *s,
-					struct xz_buf *b);
+enum xz_ret xz_dec_microlzma_run(struct xz_dec_microlzma *s, struct xz_buf *b);
 
 /**
 * xz_dec_microlzma_end() - Free the memory allocated for the decoder state
 * @s:          Decoder state allocated using xz_dec_microlzma_alloc().
 *              If s is NULL, this function does nothing.
 */
-extern void xz_dec_microlzma_end(struct xz_dec_microlzma *s);
+void xz_dec_microlzma_end(struct xz_dec_microlzma *s);
 
 /*
 * Standalone build (userspace build or in-kernel build for boot time use)

@@ -358,13 +347,13 @@ extern void xz_dec_microlzma_end(struct xz_dec_microlzma *s);
 * This must be called before any other xz_* function to initialize
 * the CRC32 lookup table.
 */
-XZ_EXTERN void xz_crc32_init(void);
+void xz_crc32_init(void);
 
 /*
 * Update CRC32 value using the polynomial from IEEE-802.3. To start a new
 * calculation, the third argument must be zero. To continue the calculation,
 * the previously returned value is passed as the third argument.
 */
-XZ_EXTERN uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc);
+uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc);
 #endif
 #endif
@@ -17,6 +17,7 @@
 #include <linux/blk-mq.h>
 #include <linux/devfreq.h>
 #include <linux/fault-inject.h>
+#include <linux/debugfs.h>
 #include <linux/msi.h>
 #include <linux/pm_runtime.h>
 #include <linux/dma-direction.h>

@@ -310,8 +310,9 @@ config KERNEL_XZ
 	  BCJ filters which can improve compression ratio of executable
 	  code. The size of the kernel is about 30% smaller with XZ in
 	  comparison to gzip. On architectures for which there is a BCJ
-	  filter (i386, x86_64, ARM, IA-64, PowerPC, and SPARC), XZ
-	  will create a few percent smaller kernel than plain LZMA.
+	  filter (i386, x86_64, ARM, ARM64, RISC-V, big endian PowerPC,
+	  and SPARC), XZ will create a few percent smaller kernel than
+	  plain LZMA.
 
 	  The speed is about the same as with LZMA: The decompression
 	  speed of XZ is better than that of bzip2 but worse than gzip
@@ -505,7 +505,7 @@ int crash_check_hotplug_support(void)
 	crash_hotplug_lock();
 	/* Obtain lock while reading crash information */
 	if (!kexec_trylock()) {
-		pr_info("kexec_trylock() failed, elfcorehdr may be inaccurate\n");
+		pr_info("kexec_trylock() failed, kdump image may be inaccurate\n");
 		crash_hotplug_unlock();
 		return 0;
 	}

@@ -520,18 +520,25 @@ int crash_check_hotplug_support(void)
 }
 
 /*
- * To accurately reflect hot un/plug changes of cpu and memory resources
- * (including onling and offlining of those resources), the elfcorehdr
- * (which is passed to the crash kernel via the elfcorehdr= parameter)
- * must be updated with the new list of CPUs and memories.
+ * To accurately reflect hot un/plug changes of CPU and Memory resources
+ * (including onling and offlining of those resources), the relevant
+ * kexec segments must be updated with latest CPU and Memory resources.
 *
- * In order to make changes to elfcorehdr, two conditions are needed:
- * First, the segment containing the elfcorehdr must be large enough
- * to permit a growing number of resources; the elfcorehdr memory size
- * is based on NR_CPUS_DEFAULT and CRASH_MAX_MEMORY_RANGES.
- * Second, purgatory must explicitly exclude the elfcorehdr from the
- * list of segments it checks (since the elfcorehdr changes and thus
- * would require an update to purgatory itself to update the digest).
+ * Architectures must ensure two things for all segments that need
+ * updating during hotplug events:
+ *
+ * 1. Segments must be large enough to accommodate a growing number of
+ *    resources.
+ * 2. Exclude the segments from SHA verification.
+ *
+ * For example, on most architectures, the elfcorehdr (which is passed
+ * to the crash kernel via the elfcorehdr= parameter) must include the
+ * new list of CPUs and memory. To make changes to the elfcorehdr, it
+ * should be large enough to permit a growing number of CPU and Memory
+ * resources. One can estimate the elfcorehdr memory size based on
+ * NR_CPUS_DEFAULT and CRASH_MAX_MEMORY_RANGES. The elfcorehdr is
+ * excluded from SHA verification by default if the architecture
+ * supports crash hotplug.
 */
 static void crash_handle_hotplug_event(unsigned int hp_action, unsigned int cpu, void *arg)
 {

@@ -540,7 +547,7 @@ static void crash_handle_hotplug_event(unsigned int hp_action, unsigned int cpu,
 	crash_hotplug_lock();
 	/* Obtain lock while changing crash information */
 	if (!kexec_trylock()) {
-		pr_info("kexec_trylock() failed, elfcorehdr may be inaccurate\n");
+		pr_info("kexec_trylock() failed, kdump image may be inaccurate\n");
 		crash_hotplug_unlock();
 		return;
 	}

@@ -335,6 +335,9 @@ int __init parse_crashkernel(char *cmdline,
 	if (!*crash_size)
 		ret = -EINVAL;
 
+	if (*crash_size >= system_ram)
+		ret = -EINVAL;
+
 	return ret;
 }
 
@@ -34,6 +34,7 @@
 #include <linux/compat.h>
 #include <linux/jhash.h>
 #include <linux/pagemap.h>
+#include <linux/debugfs.h>
 #include <linux/plist.h>
 #include <linux/memblock.h>
 #include <linux/fault-inject.h>
 
@@ -23,7 +23,8 @@ int kimage_is_destination_range(struct kimage *image,
 extern atomic_t __kexec_lock;
 static inline bool kexec_trylock(void)
 {
-	return atomic_cmpxchg_acquire(&__kexec_lock, 0, 1) == 0;
+	int old = 0;
+	return atomic_try_cmpxchg_acquire(&__kexec_lock, &old, 1);
 }
 static inline void kexec_unlock(void)
 {
@@ -697,3 +697,4 @@ module_exit(test_ww_mutex_exit);
 
 MODULE_LICENSE("GPL");
 MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION("API test facility for ww_mutexes");

@@ -853,9 +853,8 @@ static int sort_idmaps(struct uid_gid_map *map)
 	     cmp_extents_forward, NULL);
 
 	/* Only copy the memory from forward we actually need. */
-	map->reverse = kmemdup(map->forward,
-			       map->nr_extents * sizeof(struct uid_gid_extent),
-			       GFP_KERNEL);
+	map->reverse = kmemdup_array(map->forward, map->nr_extents,
+				     sizeof(struct uid_gid_extent), GFP_KERNEL);
 	if (!map->reverse)
 		return -ENOMEM;
 
@@ -1203,7 +1203,10 @@ static void __init lockup_detector_delay_init(struct work_struct *work)
 
 	ret = watchdog_hardlockup_probe();
 	if (ret) {
-		pr_info("Delayed init of the lockup detector failed: %d\n", ret);
+		if (ret == -ENODEV)
+			pr_info("NMI not fully supported\n");
+		else
+			pr_info("Delayed init of the lockup detector failed: %d\n", ret);
 		pr_info("Hard watchdog permanently disabled\n");
 		return;
 	}

@@ -1515,7 +1515,7 @@ config LOCKDEP_BITS
 config LOCKDEP_CHAINS_BITS
 	int "Bitsize for MAX_LOCKDEP_CHAINS"
 	depends on LOCKDEP && !LOCKDEP_SMALL
-	range 10 30
+	range 10 21
 	default 16
 	help
 	  Try increasing this value if you hit "BUG: MAX_LOCKDEP_CHAINS too low!" message.
@@ -2289,6 +2289,16 @@ config TEST_DIV64
 
 	  If unsure, say N.
 
+config TEST_MULDIV64
+	tristate "mul_u64_u64_div_u64() test"
+	depends on DEBUG_KERNEL || m
+	help
+	  Enable this to turn on 'mul_u64_u64_div_u64()' function test.
+	  This test is executed only once during system boot (so affects
+	  only boot time), or at module load time.
+
+	  If unsure, say N.
+
 config TEST_IOV_ITER
 	tristate "Test iov_iter operation" if !KUNIT_ALL_TESTS
 	depends on KUNIT

@@ -14,6 +14,7 @@ KCOV_INSTRUMENT_list_debug.o := n
 KCOV_INSTRUMENT_debugobjects.o := n
 KCOV_INSTRUMENT_dynamic_debug.o := n
+KCOV_INSTRUMENT_fault-inject.o := n
 KCOV_INSTRUMENT_find_bit.o := n
 
 # string.o implements standard library functions like memset/memcpy etc.
 # Use -ffreestanding to ensure that the compiler does not try to "optimize"
@@ -10,6 +10,8 @@ EXPORT_SYMBOL(_bcd2bin);
 
 unsigned char _bin2bcd(unsigned val)
 {
-	return ((val / 10) << 4) + val % 10;
+	const unsigned int t = (val * 103) >> 10;
+
+	return (t << 4) | (val - t * 10);
 }
 EXPORT_SYMBOL(_bin2bcd);
 
@@ -468,12 +468,9 @@ static __wsum to_wsum(u32 x)
 
 static void assert_setup_correct(struct kunit *test)
 {
-	CHECK_EQ(sizeof(random_buf) / sizeof(random_buf[0]), MAX_LEN);
-	CHECK_EQ(sizeof(expected_results) / sizeof(expected_results[0]),
-		 MAX_LEN);
-	CHECK_EQ(sizeof(init_sums_no_overflow) /
-		 sizeof(init_sums_no_overflow[0]),
-		 MAX_LEN);
+	CHECK_EQ(ARRAY_SIZE(random_buf), MAX_LEN);
+	CHECK_EQ(ARRAY_SIZE(expected_results), MAX_LEN);
+	CHECK_EQ(ARRAY_SIZE(init_sums_no_overflow), MAX_LEN);
 }
 
 /*
@@ -278,7 +278,7 @@ static int debug_show(struct seq_file *f, void *data)
 			seq_printf(f, " W %pS\n",
 				   (void *) cl->waiting_on);
 
-		seq_puts(f, "\n");
+		seq_putc(f, '\n');
 	}
 
 	spin_unlock_irq(&closure_list_lock);
@@ -1,10 +1,9 @@
+// SPDX-License-Identifier: 0BSD
+
 /*
  * Wrapper for decompressing XZ-compressed kernel, initramfs, and initrd
 *
 * Author: Lasse Collin <lasse.collin@tukaani.org>
- *
- * This file has been put into the public domain.
- * You can do whatever you want with this file.
 */
 
 /*

@@ -103,12 +102,11 @@
 #ifdef STATIC
 # define XZ_PREBOOT
 #else
-#include <linux/decompress/unxz.h>
+# include <linux/decompress/unxz.h>
 #endif
 #ifdef __KERNEL__
 # include <linux/decompress/mm.h>
 #endif
-#define XZ_EXTERN STATIC
 
 #ifndef XZ_PREBOOT
 # include <linux/slab.h>

@@ -127,11 +125,21 @@
 #ifdef CONFIG_X86
 # define XZ_DEC_X86
 #endif
-#ifdef CONFIG_PPC
+#if defined(CONFIG_PPC) && defined(CONFIG_CPU_BIG_ENDIAN)
 # define XZ_DEC_POWERPC
 #endif
 #ifdef CONFIG_ARM
-# define XZ_DEC_ARM
+# ifdef CONFIG_THUMB2_KERNEL
+#  define XZ_DEC_ARMTHUMB
+# else
+#  define XZ_DEC_ARM
+# endif
 #endif
+#ifdef CONFIG_ARM64
+# define XZ_DEC_ARM64
+#endif
+#ifdef CONFIG_RISCV
+# define XZ_DEC_RISCV
+#endif
 #ifdef CONFIG_SPARC
 # define XZ_DEC_SPARC

@@ -220,7 +228,7 @@ void *memmove(void *dest, const void *src, size_t size)
 #endif
 
 /*
- * Since we need memmove anyway, would use it as memcpy too.
+ * Since we need memmove anyway, we could use it as memcpy too.
 * Commented out for now to avoid breaking things.
 */
 /*

@@ -390,17 +398,17 @@ error_alloc_state:
 }
 
 /*
- * This macro is used by architecture-specific files to decompress
+ * This function is used by architecture-specific files to decompress
 * the kernel image.
 */
 #ifdef XZ_PREBOOT
-STATIC int INIT __decompress(unsigned char *buf, long len,
-			     long (*fill)(void*, unsigned long),
-			     long (*flush)(void*, unsigned long),
-			     unsigned char *out_buf, long olen,
-			     long *pos,
-			     void (*error)(char *x))
+STATIC int INIT __decompress(unsigned char *in, long in_size,
+			     long (*fill)(void *dest, unsigned long size),
+			     long (*flush)(void *src, unsigned long size),
+			     unsigned char *out, long out_size,
+			     long *in_used,
+			     void (*error)(char *x))
 {
-	return unxz(buf, len, fill, flush, out_buf, pos, error);
+	return unxz(in, in_size, fill, flush, out, in_used, error);
 }
 #endif
@@ -4,4 +4,4 @@
 
 obj-$(CONFIG_DIMLIB) += dimlib.o
 
-dimlib-objs := dim.o net_dim.o rdma_dim.o
+dimlib-y := dim.o net_dim.o rdma_dim.o

@@ -1147,7 +1147,7 @@ static int ddebug_proc_show(struct seq_file *m, void *p)
 		   iter->table->mod_name, dp->function,
 		   ddebug_describe_flags(dp->flags, &flags));
 	seq_escape_str(m, dp->format, ESCAPE_SPACE, "\t\r\n\"");
-	seq_puts(m, "\"");
+	seq_putc(m, '"');
 
 	if (dp->class_id != _DPRINTK_CLASS_DFLT) {
 		class = ddebug_class_name(iter, dp);

@@ -1156,7 +1156,7 @@ static int ddebug_proc_show(struct seq_file *m, void *p)
 		else
 			seq_printf(m, " class unknown, _id:%d", dp->class_id);
 	}
-	seq_puts(m, "\n");
+	seq_putc(m, '\n');
 
 	return 0;
 }
@@ -2,6 +2,7 @@
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/random.h>
+#include <linux/debugfs.h>
 #include <linux/sched.h>
 #include <linux/stat.h>
 #include <linux/types.h>

@@ -68,6 +68,8 @@ bool __pure glob_match(char const *pat, char const *str)
 			back_str = --str;	/* Allow zero-length match */
 			break;
 		case '[': {	/* Character class */
+			if (c == '\0')	/* No possible match */
+				return false;
 			bool match = false, inverted = (*pat == '!');
 			char const *class = pat + inverted;
 			unsigned char a = *class++;
@@ -102,6 +102,8 @@ static void list_test_list_replace(struct kunit *test)
 	/* now: [list] -> a_new -> b */
 	KUNIT_EXPECT_PTR_EQ(test, list.next, &a_new);
 	KUNIT_EXPECT_PTR_EQ(test, b.prev, &a_new);
+	KUNIT_EXPECT_PTR_EQ(test, a_new.next, &b);
+	KUNIT_EXPECT_PTR_EQ(test, a_new.prev, &list);
 }
 
 static void list_test_list_replace_init(struct kunit *test)

@@ -118,6 +120,8 @@ static void list_test_list_replace_init(struct kunit *test)
 	/* now: [list] -> a_new -> b */
 	KUNIT_EXPECT_PTR_EQ(test, list.next, &a_new);
 	KUNIT_EXPECT_PTR_EQ(test, b.prev, &a_new);
+	KUNIT_EXPECT_PTR_EQ(test, a_new.next, &b);
+	KUNIT_EXPECT_PTR_EQ(test, a_new.prev, &list);
 
 	/* check a_old is empty (initialized) */
 	KUNIT_EXPECT_TRUE(test, list_empty_careful(&a_old));

@@ -404,10 +408,13 @@ static void list_test_list_cut_position(struct kunit *test)
 
 	KUNIT_EXPECT_EQ(test, i, 2);
 
+	i = 0;
+	list_for_each(cur, &list1) {
+		KUNIT_EXPECT_PTR_EQ(test, cur, &entries[i]);
+		i++;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, 1);
 }
 
 static void list_test_list_cut_before(struct kunit *test)

@@ -432,10 +439,13 @@ static void list_test_list_cut_before(struct kunit *test)
 
 	KUNIT_EXPECT_EQ(test, i, 1);
 
+	i = 0;
+	list_for_each(cur, &list1) {
+		KUNIT_EXPECT_PTR_EQ(test, cur, &entries[i]);
+		i++;
+	}
+
+	KUNIT_EXPECT_EQ(test, i, 2);
 }
 
 static void list_test_list_splice(struct kunit *test)
@@ -243,7 +243,7 @@ static struct lc_element *__lc_find(struct lru_cache *lc, unsigned int enr,

 	BUG_ON(!lc);
 	BUG_ON(!lc->nr_elements);
-	hlist_for_each_entry(e, lc_hash_slot(lc, enr), colision) {
+	hlist_for_each_entry(e, lc_hash_slot(lc, enr), collision) {
 		/* "about to be changed" elements, pending transaction commit,
 		 * are hashed by their "new number". "Normal" elements have
 		 * lc_number == lc_new_number. */
@@ -303,7 +303,7 @@ void lc_del(struct lru_cache *lc, struct lc_element *e)
 	BUG_ON(e->refcnt);

 	e->lc_number = e->lc_new_number = LC_FREE;
-	hlist_del_init(&e->colision);
+	hlist_del_init(&e->collision);
 	list_move(&e->list, &lc->free);
 	RETURN();
 }
@@ -324,9 +324,9 @@ static struct lc_element *lc_prepare_for_change(struct lru_cache *lc, unsigned n
 	PARANOIA_LC_ELEMENT(lc, e);

 	e->lc_new_number = new_number;
-	if (!hlist_unhashed(&e->colision))
-		__hlist_del(&e->colision);
-	hlist_add_head(&e->colision, lc_hash_slot(lc, new_number));
+	if (!hlist_unhashed(&e->collision))
+		__hlist_del(&e->collision);
+	hlist_add_head(&e->collision, lc_hash_slot(lc, new_number));
 	list_move(&e->list, &lc->to_be_changed);

 	return e;
@@ -7,4 +7,5 @@ obj-$(CONFIG_RATIONAL) += rational.o

 obj-$(CONFIG_INT_POW_TEST) += tests/int_pow_kunit.o
 obj-$(CONFIG_TEST_DIV64) += test_div64.o
+obj-$(CONFIG_TEST_MULDIV64) += test_mul_u64_u64_div_u64.o
 obj-$(CONFIG_RATIONAL_KUNIT_TEST) += rational-test.o

lib/math/div64.c
@@ -186,55 +186,84 @@ EXPORT_SYMBOL(iter_div_u64_rem);
 #ifndef mul_u64_u64_div_u64
 u64 mul_u64_u64_div_u64(u64 a, u64 b, u64 c)
 {
-	u64 res = 0, div, rem;
-	int shift;
+	if (ilog2(a) + ilog2(b) <= 62)
+		return div64_u64(a * b, c);

-	/* can a * b overflow ? */
-	if (ilog2(a) + ilog2(b) > 62) {
+#if defined(__SIZEOF_INT128__)
+
+	/* native 64x64=128 bits multiplication */
+	u128 prod = (u128)a * b;
+	u64 n_lo = prod, n_hi = prod >> 64;
+
+#else
+
+	/* perform a 64x64=128 bits multiplication manually */
+	u32 a_lo = a, a_hi = a >> 32, b_lo = b, b_hi = b >> 32;
+	u64 x, y, z;
+
+	x = (u64)a_lo * b_lo;
+	y = (u64)a_lo * b_hi + (u32)(x >> 32);
+	z = (u64)a_hi * b_hi + (u32)(y >> 32);
+	y = (u64)a_hi * b_lo + (u32)y;
+	z += (u32)(y >> 32);
+	x = (y << 32) + (u32)x;
+
+	u64 n_lo = x, n_hi = z;
+
+#endif
+
+	/* make sure c is not zero, trigger exception otherwise */
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wdiv-by-zero"
+	if (unlikely(c == 0))
+		return 1/0;
+#pragma GCC diagnostic pop
+
+	int shift = __builtin_ctzll(c);
+
+	/* try reducing the fraction in case the dividend becomes <= 64 bits */
+	if ((n_hi >> shift) == 0) {
+		u64 n = shift ? (n_lo >> shift) | (n_hi << (64 - shift)) : n_lo;
+
+		return div64_u64(n, c >> shift);
 		/*
-		 * Note that the algorithm after the if block below might lose
-		 * some precision and the result is more exact for b > a. So
-		 * exchange a and b if a is bigger than b.
-		 *
-		 * For example with a = 43980465100800, b = 100000000, c = 1000000000
-		 * the below calculation doesn't modify b at all because div == 0
-		 * and then shift becomes 45 + 26 - 62 = 9 and so the result
-		 * becomes 4398035251080. However with a and b swapped the exact
-		 * result is calculated (i.e. 4398046510080).
+		 * The remainder value if needed would be:
+		 *  res = div64_u64_rem(n, c >> shift, &rem);
+		 *  rem = (rem << shift) + (n_lo - (n << shift));
 		 */
-		if (a > b)
-			swap(a, b);
+	}

-		/*
-		 * (b * a) / c is equal to
-		 *
-		 *      (b / c) * a +
-		 *      (b % c) * a / c
-		 *
-		 * if nothing overflows. Can the 1st multiplication
-		 * overflow? Yes, but we do not care: this can only
-		 * happen if the end result can't fit in u64 anyway.
-		 *
-		 * So the code below does
-		 *
-		 *      res = (b / c) * a;
-		 *      b = b % c;
-		 */
-		div = div64_u64_rem(b, c, &rem);
-		res = div * a;
-		b = rem;
+	if (n_hi >= c) {
+		/* overflow: result is unrepresentable in a u64 */
+		return -1;
+	}

-		shift = ilog2(a) + ilog2(b) - 62;
-		if (shift > 0) {
-			/* drop precision */
-			b >>= shift;
-			c >>= shift;
-			if (!c)
-				return res;
-		}
-	}
+	/* Do the full 128 by 64 bits division */

-	return res + div64_u64(a * b, c);
+	shift = __builtin_clzll(c);
+	c <<= shift;
+
+	int p = 64 + shift;
+	u64 res = 0;
+	bool carry;
+
+	do {
+		carry = n_hi >> 63;
+		shift = carry ? 1 : __builtin_clzll(n_hi);
+		if (p < shift)
+			break;
+		p -= shift;
+		n_hi <<= shift;
+		n_hi |= n_lo >> (64 - shift);
+		n_lo <<= shift;
+		if (carry || (n_hi >= c)) {
+			n_hi -= c;
+			res |= 1ULL << p;
+		}
+	} while (n_hi);
+	/* The remainder value if needed would be n_hi << p */
+
+	return res;
 }
 EXPORT_SYMBOL(mul_u64_u64_div_u64);
 #endif
lib/math/test_mul_u64_u64_div_u64.c (new file)
@@ -0,0 +1,99 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (C) 2024 BayLibre SAS
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/math64.h>

typedef struct { u64 a; u64 b; u64 c; u64 result; } test_params;

static test_params test_values[] = {
/* this contains many edge values followed by a couple random values */
{ 0xb, 0x7, 0x3, 0x19 },
{ 0xffff0000, 0xffff0000, 0xf, 0x1110eeef00000000 },
{ 0xffffffff, 0xffffffff, 0x1, 0xfffffffe00000001 },
{ 0xffffffff, 0xffffffff, 0x2, 0x7fffffff00000000 },
{ 0x1ffffffff, 0xffffffff, 0x2, 0xfffffffe80000000 },
{ 0x1ffffffff, 0xffffffff, 0x3, 0xaaaaaaa9aaaaaaab },
{ 0x1ffffffff, 0x1ffffffff, 0x4, 0xffffffff00000000 },
{ 0xffff000000000000, 0xffff000000000000, 0xffff000000000001, 0xfffeffffffffffff },
{ 0x3333333333333333, 0x3333333333333333, 0x5555555555555555, 0x1eb851eb851eb851 },
{ 0x7fffffffffffffff, 0x2, 0x3, 0x5555555555555554 },
{ 0xffffffffffffffff, 0x2, 0x8000000000000000, 0x3 },
{ 0xffffffffffffffff, 0x2, 0xc000000000000000, 0x2 },
{ 0xffffffffffffffff, 0x4000000000000004, 0x8000000000000000, 0x8000000000000007 },
{ 0xffffffffffffffff, 0x4000000000000001, 0x8000000000000000, 0x8000000000000001 },
{ 0xffffffffffffffff, 0x8000000000000001, 0xffffffffffffffff, 0x8000000000000001 },
{ 0xfffffffffffffffe, 0x8000000000000001, 0xffffffffffffffff, 0x8000000000000000 },
{ 0xffffffffffffffff, 0x8000000000000001, 0xfffffffffffffffe, 0x8000000000000001 },
{ 0xffffffffffffffff, 0x8000000000000001, 0xfffffffffffffffd, 0x8000000000000002 },
{ 0x7fffffffffffffff, 0xffffffffffffffff, 0xc000000000000000, 0xaaaaaaaaaaaaaaa8 },
{ 0xffffffffffffffff, 0x7fffffffffffffff, 0xa000000000000000, 0xccccccccccccccca },
{ 0xffffffffffffffff, 0x7fffffffffffffff, 0x9000000000000000, 0xe38e38e38e38e38b },
{ 0x7fffffffffffffff, 0x7fffffffffffffff, 0x5000000000000000, 0xccccccccccccccc9 },
{ 0xffffffffffffffff, 0xfffffffffffffffe, 0xffffffffffffffff, 0xfffffffffffffffe },
{ 0xe6102d256d7ea3ae, 0x70a77d0be4c31201, 0xd63ec35ab3220357, 0x78f8bf8cc86c6e18 },
{ 0xf53bae05cb86c6e1, 0x3847b32d2f8d32e0, 0xcfd4f55a647f403c, 0x42687f79d8998d35 },
{ 0x9951c5498f941092, 0x1f8c8bfdf287a251, 0xa3c8dc5f81ea3fe2, 0x1d887cb25900091f },
{ 0x374fee9daa1bb2bb, 0x0d0bfbff7b8ae3ef, 0xc169337bd42d5179, 0x03bb2dbaffcbb961 },
{ 0xeac0d03ac10eeaf0, 0x89be05dfa162ed9b, 0x92bb1679a41f0e4b, 0xdc5f5cc9e270d216 },
};

/*
 * The above table can be verified with the following shell script:
 *
 * #!/bin/sh
 * sed -ne 's/^{ \+\(.*\), \+\(.*\), \+\(.*\), \+\(.*\) },$/\1 \2 \3 \4/p' \
 *     lib/math/test_mul_u64_u64_div_u64.c |
 * while read a b c r; do
 *   expected=$( printf "obase=16; ibase=16; %X * %X / %X\n" $a $b $c | bc )
 *   given=$( printf "%X\n" $r )
 *   if [ "$expected" = "$given" ]; then
 *     echo "$a * $b / $c = $r OK"
 *   else
 *     echo "$a * $b / $c = $r is wrong" >&2
 *     echo "should be equivalent to 0x$expected" >&2
 *     exit 1
 *   fi
 * done
 */

static int __init test_init(void)
{
	int i;

	pr_info("Starting mul_u64_u64_div_u64() test\n");

	for (i = 0; i < ARRAY_SIZE(test_values); i++) {
		u64 a = test_values[i].a;
		u64 b = test_values[i].b;
		u64 c = test_values[i].c;
		u64 expected_result = test_values[i].result;
		u64 result = mul_u64_u64_div_u64(a, b, c);

		if (result != expected_result) {
			pr_err("ERROR: 0x%016llx * 0x%016llx / 0x%016llx\n", a, b, c);
			pr_err("ERROR: expected result: %016llx\n", expected_result);
			pr_err("ERROR: obtained result: %016llx\n", result);
		}
	}

	pr_info("Completed mul_u64_u64_div_u64() test\n");
	return 0;
}

static void __exit test_exit(void)
{
}

module_init(test_init);
module_exit(test_exit);

MODULE_AUTHOR("Nicolas Pitre");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("mul_u64_u64_div_u64() test module");
@@ -209,7 +209,7 @@ int __percpu_counter_init_many(struct percpu_counter *fbc, s64 amount,
 		INIT_LIST_HEAD(&fbc[i].list);
 #endif
 		fbc[i].count = amount;
-		fbc[i].counters = (void *)counters + (i * counter_size);
+		fbc[i].counters = (void __percpu *)counters + i * counter_size;

 		debug_percpu_counter_activate(&fbc[i]);
 	}
@@ -189,7 +189,7 @@ static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,

 	size = nbuckets;

-	if (tbl == NULL && (gfp & ~__GFP_NOFAIL) != GFP_KERNEL) {
+	if (tbl == NULL && !gfpflags_allow_blocking(gfp)) {
 		tbl = nested_bucket_table_alloc(ht, nbuckets, gfp);
 		nbuckets = 0;
 	}
@@ -42,7 +42,7 @@ static int __init test_fpu_init(void)
 		return -EINVAL;

 	selftest_dir = debugfs_create_dir("selftest_helpers", NULL);
-	if (!selftest_dir)
+	if (IS_ERR(selftest_dir))
 		return -ENOMEM;

 	debugfs_create_file_unsafe("test_fpu", 0444, selftest_dir, NULL,
@@ -687,4 +687,5 @@ static void __exit ot_mod_exit(void)
 module_init(ot_mod_init);
 module_exit(ot_mod_exit);

-MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Test module for lockless object pool");
+MODULE_LICENSE("GPL");
@@ -5,7 +5,8 @@ config XZ_DEC
 	help
 	  LZMA2 compression algorithm and BCJ filters are supported using
 	  the .xz file format as the container. For integrity checking,
-	  CRC32 is supported. See Documentation/staging/xz.rst for more information.
+	  CRC32 is supported. See Documentation/staging/xz.rst for more
+	  information.

 if XZ_DEC

@@ -29,11 +30,21 @@ config XZ_DEC_ARMTHUMB
 	default y
 	select XZ_DEC_BCJ

 config XZ_DEC_ARM64
 	bool "ARM64 BCJ filter decoder" if EXPERT
 	default y
 	select XZ_DEC_BCJ

 config XZ_DEC_SPARC
 	bool "SPARC BCJ filter decoder" if EXPERT
 	default y
 	select XZ_DEC_BCJ

 config XZ_DEC_RISCV
 	bool "RISC-V BCJ filter decoder" if EXPERT
 	default y
 	select XZ_DEC_BCJ

 config XZ_DEC_MICROLZMA
 	bool "MicroLZMA decoder"
 	default n
@@ -1,11 +1,10 @@
+// SPDX-License-Identifier: 0BSD

 /*
  * CRC32 using the polynomial from IEEE-802.3
  *
  * Authors: Lasse Collin <lasse.collin@tukaani.org>
  *          Igor Pavlov <https://7-zip.org/>
- *
- * This file has been put into the public domain.
- * You can do whatever you want with this file.
  */

 /*
@@ -27,9 +26,9 @@

 STATIC_RW_DATA uint32_t xz_crc32_table[256];

-XZ_EXTERN void xz_crc32_init(void)
+void xz_crc32_init(void)
 {
-	const uint32_t poly = CRC32_POLY_LE;
+	const uint32_t poly = 0xEDB88320;

 	uint32_t i;
 	uint32_t j;
@@ -46,7 +45,7 @@ XZ_EXTERN void xz_crc32_init(void)
 	return;
 }

-XZ_EXTERN uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc)
+uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc)
 {
 	crc = ~crc;
Some files were not shown because too many files have changed in this diff.