[RFC PATCH 0/5] EFI Special Purpose Memory Support
by Dan Williams
The EFI 2.8 Specification [1] introduces the EFI_MEMORY_SP ("special
purpose") memory attribute. This attribute bit replaces the deprecated
"reservation hint" that was introduced in ACPI 6.2 and removed in ACPI
6.3.
Given the increasing diversity of memory types that might be advertised
to the operating system, there is a need for platform firmware to hint
which memory ranges are free for the OS to use as general purpose memory
and which ranges are intended for application specific usage. For
example, an application with prior knowledge of the platform may expect
to be able to exclusively allocate a precious / limited pool of high
bandwidth memory. Alternatively, for the general purpose case, the
operating system may want to make the memory available on a best-effort
basis as a unique numa-node whose performance properties are described
by the new CONFIG_HMEM_REPORTING [2] facility.
In support of allowing for both exclusive and core-kernel-mm managed
access to differentiated memory, claim EFI_MEMORY_SP ranges for exposure
as device-dax instances by default. Those instances can be directly
owned / mapped by a platform-topology-aware application. However, with
the new kmem facility [3], the administrator has the option to instead
designate that those memory ranges be hot-added to the core-kernel-mm as
a unique memory numa-node. In short, allow for the decision about what
software agent manages special purpose memory to be made at runtime.
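As a rough sketch of the detection step (not taken from these patches; the
reserve_soft_memory() helper is hypothetical, and the attribute value follows
the UEFI 2.8 definition), the firmware memory-map walk amounts to testing the
new attribute bit and keeping matching ranges out of the general allocator:
#include <linux/efi.h>
/* EFI_MEMORY_SP is bit 18 per the UEFI 2.8 spec; shown here for illustration */
#define EFI_MEMORY_SP ((u64)1 << 18)
static void __init efi_mark_special_purpose(void)
{
	efi_memory_desc_t *md;
	for_each_efi_memory_desc(md) {
		if (!(md->attribute & EFI_MEMORY_SP))
			continue;
		/* Keep the range out of the page allocator; a device-dax
		 * instance claims it later, and the administrator may still
		 * hand it to the core-mm via the kmem driver.
		 * reserve_soft_memory() is a stand-in for that step. */
		reserve_soft_memory(md->phys_addr,
				    md->num_pages << EFI_PAGE_SHIFT);
	}
}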
The patches are based on v8 of Keith's "HMEM" series currently in Greg's
driver-core-testing branch [4], and have not been tested. This is an RFC
proposal on how to handle the new EFI memory attribute.
[1]: https://uefi.org/sites/default/files/resources/UEFI_Spec_2_8_final.pdf
[2]: https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git/co...
[3]: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit...
[4]: https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git/lo...
---
Dan Williams (5):
efi: Detect UEFI 2.8 Special Purpose Memory
lib/memregion: Uplevel the pmem "region" ida to a global allocator
acpi/hmat: Track target address ranges
acpi/hmat: Register special purpose memory as a device
device-dax: Add a driver for "hmem" devices
arch/x86/Kconfig | 18 +++++
arch/x86/boot/compressed/eboot.c | 5 +
arch/x86/boot/compressed/kaslr.c | 2 -
arch/x86/include/asm/e820/types.h | 9 ++
arch/x86/kernel/e820.c | 9 ++
arch/x86/platform/efi/efi.c | 10 ++-
drivers/acpi/hmat/Kconfig | 1
drivers/acpi/hmat/hmat.c | 140 +++++++++++++++++++++++++++++++------
drivers/dax/Kconfig | 26 ++++++-
drivers/dax/Makefile | 2 +
drivers/dax/hmem.c | 58 +++++++++++++++
drivers/nvdimm/Kconfig | 1
drivers/nvdimm/core.c | 1
drivers/nvdimm/nd-core.h | 1
drivers/nvdimm/region_devs.c | 13 +--
include/linux/efi.h | 14 ++++
include/linux/ioport.h | 1
include/linux/memregion.h | 9 ++
lib/Kconfig | 6 ++
lib/Makefile | 1
lib/memregion.c | 22 ++++++
21 files changed, 304 insertions(+), 45 deletions(-)
create mode 100644 drivers/dax/hmem.c
create mode 100644 include/linux/memregion.h
create mode 100644 lib/memregion.c
3 years, 4 months
[PATCH] acpi/nfit: ensure that intel passphrase length is not larger than nvdimm
by Li RongQing
Both sizes are the same today, but if the nvdimm passphrase length were
ever made smaller than the Intel passphrase length, an out-of-bounds (OOB)
access would result.
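For illustration only (not part of the patch), the BUILD_BUG_ON() turns that
size relationship into a compile-time check; the structures and values below
are placeholders for the real definitions in libnvdimm and nfit/intel.h:
#include <linux/bug.h>
#include <linux/string.h>
#define NVDIMM_PASSPHRASE_LEN    32  /* placeholder: size of the key material */
#define ND_INTEL_PASSPHRASE_SIZE 32  /* placeholder: size of the DSM payload  */
struct key_payload { unsigned char data[NVDIMM_PASSPHRASE_LEN]; };
struct intel_cmd   { unsigned char passphrase[ND_INTEL_PASSPHRASE_SIZE]; };
static void fill_cmd(struct intel_cmd *cmd, const struct key_payload *key)
{
	/* memcpy() reads ND_INTEL_PASSPHRASE_SIZE bytes from key->data, so
	 * the build must fail if the source could ever be smaller than that */
	BUILD_BUG_ON(NVDIMM_PASSPHRASE_LEN < ND_INTEL_PASSPHRASE_SIZE);
	memcpy(cmd->passphrase, key->data, sizeof(cmd->passphrase));
}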
Signed-off-by: Li RongQing <lirongqing(a)baidu.com>
---
drivers/acpi/nfit/intel.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/acpi/nfit/intel.c b/drivers/acpi/nfit/intel.c
index f70de71f79d6..9517fb0fd8b9 100644
--- a/drivers/acpi/nfit/intel.c
+++ b/drivers/acpi/nfit/intel.c
@@ -248,6 +248,8 @@ static int __maybe_unused intel_security_erase(struct nvdimm *nvdimm,
},
};
+ BUILD_BUG_ON(NVDIMM_PASSPHRASE_LEN < ND_INTEL_PASSPHRASE_SIZE);
+
if (!test_bit(cmd, &nfit_mem->dsm_mask))
return -ENOTTY;
--
2.16.2
3 years, 4 months
[ndctl PATCH] ndctl: add a 'clear-errors' command
by Vishal Verma
Add a new command, ndctl-clear-errors, to clear any errors (badblocks)
on a namespace. This is in preparation for a 'system-ram' mode for
devdax devices using the kernel's 'kmem' facility. Since the device is
being used as volatile RAM, we can take the opportunity to clear any
badblocks on the device before reconfiguration, so that the user doesn't
come across one unexpectedly.
Make this error clearing facility generic to all namespace types (i.e.
devdax, fsdax, and raw; sector mode namespaces are not supported). To
clear errors, use the "Clear Uncorrectable Errors" ACPI DSM command via
the helpers provided by libndctl.
Cc: Dan Williams <dan.j.williams(a)intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma(a)intel.com>
---
Documentation/ndctl/Makefile.am | 1 +
Documentation/ndctl/ndctl-clear-errors.txt | 106 ++++++++
contrib/ndctl | 3 +
ndctl/action.h | 1 +
ndctl/builtin.h | 1 +
ndctl/namespace.c | 279 +++++++++++++++++++++
ndctl/ndctl.c | 1 +
7 files changed, 392 insertions(+)
create mode 100644 Documentation/ndctl/ndctl-clear-errors.txt
diff --git a/Documentation/ndctl/Makefile.am b/Documentation/ndctl/Makefile.am
index 2593dbd..fb46d7c 100644
--- a/Documentation/ndctl/Makefile.am
+++ b/Documentation/ndctl/Makefile.am
@@ -43,6 +43,7 @@ man1_MANS = \
ndctl-create-namespace.1 \
ndctl-destroy-namespace.1 \
ndctl-check-namespace.1 \
+ ndctl-clear-errors.1 \
ndctl-inject-error.1 \
ndctl-inject-smart.1 \
ndctl-update-firmware.1 \
diff --git a/Documentation/ndctl/ndctl-clear-errors.txt b/Documentation/ndctl/ndctl-clear-errors.txt
new file mode 100644
index 0000000..206ab58
--- /dev/null
+++ b/Documentation/ndctl/ndctl-clear-errors.txt
@@ -0,0 +1,106 @@
+// SPDX-License-Identifier: GPL-2.0
+
+ndctl-clear-errors(1)
+=====================
+
+NAME
+----
+ndctl-clear-errors - clear all errors (badblocks) on the given namespace
+
+SYNOPSIS
+--------
+[verse]
+'ndctl clear-errors' <namespace> [<options>]
+
+DESCRIPTION
+-----------
+
+A namespace may have one or more 'media errors', either known to the kernel
+or in a latent state. These error locations, or 'badblocks' can cause poison
+consumption events if read in an unsafe manner.
+
+Moreover, these badblocks also indicate that due to media corruption, any data
+that may have been in these locations has been unrecoverably lost.
+
+Normally, in the presence of such errors, the administrator is expected to
+recover the data from out of band means (such as backups), destroy the
+namespace, recreate it, and then restore the data. When the data is re-written,
+the writes will allow any errors to be cleared as they are encountered. In such
+a workflow, one should *never* need to use the 'clear-errors' command.
+
+However, there may be special use cases, where the data currently on the
+namespace does not matter - for example, if a 'devdax' mode namespace is being
+prepared for use as 'system-ram'. In such cases, it may be desirable to clear
+any errors on the namespace prior to switching its mode to prevent disruptive
+machine checks due to poison consumption.
+
+NOTE: *Only* use this command when the data on the namespace is immaterial.
+For any blocks that are cleared via this command, any data on the blocks in
+question will be lost, and replaced with content that is implementation
+(platform) defined, and unpredictable.
+
+WARNING: This is a DANGEROUS command, and should only be used after fully
+understanding its implications and consequences. This WILL erase your data.
+
+For namespaces in one of 'fsdax' or 'devdax' modes, this command will
+only consider the 'data' area for error clearing. Namespace metadata, such as
+info-blocks, will not be touched. For namespaces in 'raw' mode, the full
+available capacity of the namespace is considered for error clearing.
+Namespaces that are in 'sector' mode are not supported, and will be skipped.
+
+NOTE: It is expected that the command is run with the namespace 'enabled'.
+A namespace in the 'disabled' state will appear as, and will be treated as a
+'raw' namespace, and error clearing will be performed for the full available
+capacity of the namespace, including any potential metadata areas. If there
+happen to be errors in the metadata area, clearing them may result in
+unpredictable outcomes. You have been warned!
+
+Known errors are ones that the kernel has encountered before, either via a
+previous scrub, or by an attempted read from those locations. These can be
+listed by running 'ndctl list --media-errors' for a given namespace. Latent
+errors, as the name indicates, are unknown to the kernel. These can be found
+by running a scrub operation on the NVDIMMs in question. By default, the
+ndctl-clear-errors command only clears known errors. This can be overridden
+using the '--scrub' option to clear *all* errors.
+
+NOTE: If a scrub is in progress when the command is called, it will
+unconditionally wait for it to complete.
+
+EXAMPLES
+--------
+
+Clear errors on namespace 0.0
+[verse]
+ ndctl clear-errors namespace0.0
+
+Clear errors on all namespaces belonging to region1, including scrubbing for
+latent errors
+[verse]
+ ndctl clear-errors --scrub --region=region1 all
+
+OPTIONS
+-------
+
+-s::
+--scrub::
+ Perform a 'scrub' on the bus prior to clearing errors. This allows
+ for the clearing of any latent media errors in addition to errors
+ the kernel already knows about.
+
+NOTE: This will cause the command to start and wait for a full scrub, and this
+can potentially be a very long-running operation.
+
+-v::
+--verbose::
+ Emit debug messages.
+
+-r::
+--region=::
+include::xable-region-options.txt[]
+
+include::../copyright.txt[]
+
+SEE ALSO
+--------
+linkndctl:ndctl-start-scrub[1],
+linkndctl:ndctl-list[1]
diff --git a/contrib/ndctl b/contrib/ndctl
index e17fb0b..396a344 100755
--- a/contrib/ndctl
+++ b/contrib/ndctl
@@ -328,6 +328,9 @@ __ndctl_comp_non_option_args()
check-namespace)
opts="$(__ndctl_get_ns -i) all"
;;
+ clear-errors)
+ opts="$(__ndctl_get_ns) all"
+ ;;
enable-region)
opts="$(__ndctl_get_regions -i) all"
;;
diff --git a/ndctl/action.h b/ndctl/action.h
index 1ecad49..50da010 100644
--- a/ndctl/action.h
+++ b/ndctl/action.h
@@ -13,5 +13,6 @@ enum device_action {
ACTION_CHECK,
ACTION_WAIT,
ACTION_START,
+ ACTION_CLEAR,
};
#endif /* __NDCTL_ACTION_H__ */
diff --git a/ndctl/builtin.h b/ndctl/builtin.h
index 681a69f..94ab317 100644
--- a/ndctl/builtin.h
+++ b/ndctl/builtin.h
@@ -10,6 +10,7 @@ int cmd_create_namespace(int argc, const char **argv, struct ndctl_ctx *ctx);
int cmd_destroy_namespace(int argc, const char **argv, struct ndctl_ctx *ctx);
int cmd_disable_namespace(int argc, const char **argv, struct ndctl_ctx *ctx);
int cmd_check_namespace(int argc, const char **argv, struct ndctl_ctx *ctx);
+int cmd_clear_errors(int argc, const char **argv, struct ndctl_ctx *ctx);
int cmd_enable_region(int argc, const char **argv, struct ndctl_ctx *ctx);
int cmd_disable_region(int argc, const char **argv, struct ndctl_ctx *ctx);
int cmd_enable_dimm(int argc, const char **argv, struct ndctl_ctx *ctx);
diff --git a/ndctl/namespace.c b/ndctl/namespace.c
index 03d805a..c7abcbf 100644
--- a/ndctl/namespace.c
+++ b/ndctl/namespace.c
@@ -36,6 +36,7 @@ static bool verbose;
static bool force;
static bool repair;
static bool logfix;
+static bool scrub;
static struct parameters {
bool do_scan;
bool mode_default;
@@ -120,6 +121,9 @@ OPT_BOOLEAN('R', "repair", &repair, "perform metadata repairs"), \
OPT_BOOLEAN('L', "rewrite-log", &logfix, "regenerate the log"), \
OPT_BOOLEAN('f', "force", &force, "check namespace even if currently active")
+#define CLEAR_OPTIONS() \
+OPT_BOOLEAN('s', "scrub", &scrub, "run a scrub to find latent errors")
+
static const struct option base_options[] = {
BASE_OPTIONS(),
OPT_END(),
@@ -144,6 +148,12 @@ static const struct option check_options[] = {
OPT_END(),
};
+static const struct option clear_options[] = {
+ BASE_OPTIONS(),
+ CLEAR_OPTIONS(),
+ OPT_END(),
+};
+
static int set_defaults(enum device_action mode)
{
int rc = 0;
@@ -285,6 +295,9 @@ static const char *parse_namespace_options(int argc, const char **argv,
case ACTION_CHECK:
action_string = "check";
break;
+ case ACTION_CLEAR:
+ action_string = "clear errors for";
+ break;
default:
action_string = "<>";
break;
@@ -1051,6 +1064,251 @@ static int namespace_reconfig(struct ndctl_region *region,
int namespace_check(struct ndctl_namespace *ndns, bool verbose, bool force,
bool repair, bool logfix);
+static int bus_send_clear(struct ndctl_bus *bus, unsigned long long start,
+ unsigned long long size)
+{
+ const char *busname = ndctl_bus_get_provider(bus);
+ struct ndctl_cmd *cmd_cap, *cmd_clear;
+ unsigned long long cleared;
+ struct ndctl_range range;
+ int rc;
+
+ /* get ars_cap */
+ cmd_cap = ndctl_bus_cmd_new_ars_cap(bus, start, size);
+ if (!cmd_cap) {
+ debug("bus: %s failed to create cmd\n", busname);
+ return -ENOTTY;
+ }
+
+ rc = ndctl_cmd_submit_xlat(cmd_cap);
+ if (rc < 0) {
+ debug("bus: %s failed to submit cmd: %d\n", busname, rc);
+ ndctl_cmd_unref(cmd_cap);
+ return rc;
+ }
+
+ /* send clear_error */
+ if (ndctl_cmd_ars_cap_get_range(cmd_cap, &range)) {
+ debug("bus: %s failed to get ars_cap range\n", busname);
+ return -ENXIO;
+ }
+
+ cmd_clear = ndctl_bus_cmd_new_clear_error(range.address,
+ range.length, cmd_cap);
+ if (!cmd_clear) {
+ debug("bus: %s failed to create cmd\n", busname);
+ return -ENOTTY;
+ }
+
+ rc = ndctl_cmd_submit_xlat(cmd_clear);
+ if (rc < 0) {
+ debug("bus: %s failed to submit cmd: %d\n", busname, rc);
+ ndctl_cmd_unref(cmd_clear);
+ return rc;
+ }
+
+ cleared = ndctl_cmd_clear_error_get_cleared(cmd_clear);
+ if (cleared != range.length) {
+ debug("bus: %s expected to clear: %lld actual: %lld\n",
+ busname, range.length, cleared);
+ return -ENXIO;
+ }
+
+ ndctl_cmd_unref(cmd_cap);
+ ndctl_cmd_unref(cmd_clear);
+ return 0;
+}
+
+static int nstype_clear_badblocks(struct ndctl_namespace *ndns,
+ const char *devname, unsigned long long dev_begin,
+ unsigned long long dev_size)
+{
+ struct ndctl_region *region = ndctl_namespace_get_region(ndns);
+ struct ndctl_bus *bus = ndctl_region_get_bus(region);
+ unsigned long long region_begin, dev_end;
+ unsigned int cleared = 0;
+ struct badblock *bb;
+ int rc = 0;
+
+ region_begin = ndctl_region_get_resource(region);
+ if (region_begin == ULLONG_MAX) {
+ ndctl_namespace_enable(ndns);
+ return -errno;
+ }
+
+ dev_end = dev_begin + dev_size - 1;
+
+ ndctl_region_badblock_foreach(region, bb) {
+ unsigned long long bb_begin, bb_end, bb_len;
+
+ bb_begin = region_begin + (bb->offset << 9);
+ bb_len = bb->len << 9;
+ bb_end = bb_begin + bb_len - 1;
+
+ /* bb is not fully contained in the usable area */
+ if (bb_begin < dev_begin || bb_end > dev_end)
+ continue;
+
+ rc = bus_send_clear(bus, bb_begin, bb_len);
+ if (rc) {
+ error("%s: failed to clear badblock at {%lld, %u}\n",
+ devname, bb->offset, bb->len);
+ break;
+ }
+ cleared += bb->len;
+ }
+ debug("%s: cleared %u badblocks\n", devname, cleared);
+
+ rc = ndctl_namespace_enable(ndns);
+ if (rc < 0)
+ return rc;
+ return 0;
+}
+
+static int dax_clear_badblocks(struct ndctl_dax *dax)
+{
+ struct ndctl_namespace *ndns = ndctl_dax_get_namespace(dax);
+ const char *devname = ndctl_dax_get_devname(dax);
+ unsigned long long begin, size;
+ int rc;
+
+ begin = ndctl_dax_get_resource(dax);
+ if (begin == ULLONG_MAX)
+ return -ENXIO;
+
+ size = ndctl_dax_get_size(dax);
+ if (size == ULLONG_MAX)
+ return -ENXIO;
+
+ rc = ndctl_namespace_disable_safe(ndns);
+ if (rc) {
+ error("%s: unable to disable namespace: %s\n", devname,
+ strerror(-rc));
+ return rc;
+ }
+ return nstype_clear_badblocks(ndns, devname, begin, size);
+}
+
+static int pfn_clear_badblocks(struct ndctl_pfn *pfn)
+{
+ struct ndctl_namespace *ndns = ndctl_pfn_get_namespace(pfn);
+ const char *devname = ndctl_pfn_get_devname(pfn);
+ unsigned long long begin, size;
+ int rc;
+
+ begin = ndctl_pfn_get_resource(pfn);
+ if (begin == ULLONG_MAX)
+ return -ENXIO;
+
+ size = ndctl_pfn_get_size(pfn);
+ if (size == ULLONG_MAX)
+ return -ENXIO;
+
+ rc = ndctl_namespace_disable_safe(ndns);
+ if (rc) {
+ error("%s: unable to disable namespace: %s\n", devname,
+ strerror(-rc));
+ return rc;
+ }
+ return nstype_clear_badblocks(ndns, devname, begin, size);
+}
+
+static int raw_clear_badblocks(struct ndctl_namespace *ndns)
+{
+ const char *devname = ndctl_namespace_get_devname(ndns);
+ unsigned long long begin, size;
+ int rc;
+
+ begin = ndctl_namespace_get_resource(ndns);
+ if (begin == ULLONG_MAX)
+ return -ENXIO;
+
+ size = ndctl_namespace_get_size(ndns);
+ if (size == ULLONG_MAX)
+ return -ENXIO;
+
+ rc = ndctl_namespace_disable_safe(ndns);
+ if (rc) {
+ error("%s: unable to disable namespace: %s\n", devname,
+ strerror(-rc));
+ return rc;
+ }
+ return nstype_clear_badblocks(ndns, devname, begin, size);
+}
+
+static int namespace_wait_scrub(struct ndctl_namespace *ndns)
+{
+ const char *devname = ndctl_namespace_get_devname(ndns);
+ struct ndctl_bus *bus = ndctl_namespace_get_bus(ndns);
+ int in_progress, rc;
+
+ in_progress = ndctl_bus_get_scrub_state(bus);
+ if (in_progress < 0) {
+ error("%s: Unable to determine scrub state: %s\n", devname,
+ strerror(-in_progress));
+ return in_progress;
+ }
+
+ /* start a scrub if asked and if one isn't in progress */
+ if (scrub && (!in_progress)) {
+ rc = ndctl_bus_start_scrub(bus);
+ if (rc) {
+ error("%s: Unable to start scrub: %s\n", devname,
+ strerror(-rc));
+ return rc;
+ }
+ }
+
+ /*
+ * wait for any in-progress scrub, whether started above, or
+ * started automatically at boot time
+ */
+ rc = ndctl_bus_wait_for_scrub_completion(bus);
+ if (rc) {
+ error("%s: Error waiting for scrub: %s\n", devname,
+ strerror(-rc));
+ return rc;
+ }
+
+ return 0;
+}
+
+static int namespace_clear_bb(struct ndctl_namespace *ndns)
+{
+ struct ndctl_pfn *pfn = ndctl_namespace_get_pfn(ndns);
+ struct ndctl_dax *dax = ndctl_namespace_get_dax(ndns);
+ struct ndctl_btt *btt = ndctl_namespace_get_btt(ndns);
+ struct json_object *jndns;
+ int rc;
+
+ if (btt) {
+ /* skip btt error clearing for now */
+ debug("%s: skip error clearing for btt\n",
+ ndctl_btt_get_devname(btt));
+ return 1;
+ }
+
+ rc = namespace_wait_scrub(ndns);
+ if (rc)
+ return rc;
+
+ if (dax)
+ rc = dax_clear_badblocks(dax);
+ else if (pfn)
+ rc = pfn_clear_badblocks(pfn);
+ else
+ rc = raw_clear_badblocks(ndns);
+
+ if (rc)
+ return rc;
+
+ jndns = util_namespace_to_json(ndns, UTIL_JSON_MEDIA_ERRORS);
+ if (jndns)
+ printf("%s\n", json_object_to_json_string_ext(jndns,
+ JSON_C_TO_STRING_PRETTY));
+ return 0;
+}
+
static int do_xaction_namespace(const char *namespace,
enum device_action action, struct ndctl_ctx *ctx,
int *processed)
@@ -1131,6 +1389,11 @@ static int do_xaction_namespace(const char *namespace,
if (rc == 0)
(*processed)++;
break;
+ case ACTION_CLEAR:
+ rc = namespace_clear_bb(ndns);
+ if (rc == 0)
+ (*processed)++;
+ break;
case ACTION_CREATE:
rc = namespace_reconfig(region, ndns);
if (rc == 0)
@@ -1240,3 +1503,19 @@ int cmd_check_namespace(int argc , const char **argv, struct ndctl_ctx *ctx)
checked == 1 ? "" : "s");
return rc;
}
+
+int cmd_clear_errors(int argc , const char **argv, struct ndctl_ctx *ctx)
+{
+ char *xable_usage = "ndctl clear-errors <namespace> [<options>]";
+ const char *namespace = parse_namespace_options(argc, argv,
+ ACTION_CLEAR, clear_options, xable_usage);
+ int cleared, rc;
+
+ rc = do_xaction_namespace(namespace, ACTION_CLEAR, ctx, &cleared);
+ if (rc < 0)
+ fprintf(stderr, "error clearing namespaces: %s\n",
+ strerror(-rc));
+ fprintf(stderr, "cleared %d namespace%s\n", cleared,
+ cleared == 1 ? "" : "s");
+ return rc;
+}
diff --git a/ndctl/ndctl.c b/ndctl/ndctl.c
index bd333b2..6c4975c 100644
--- a/ndctl/ndctl.c
+++ b/ndctl/ndctl.c
@@ -74,6 +74,7 @@ static struct cmd_struct commands[] = {
{ "create-namespace", { cmd_create_namespace } },
{ "destroy-namespace", { cmd_destroy_namespace } },
{ "check-namespace", { cmd_check_namespace } },
+ { "clear-errors", { cmd_clear_errors } },
{ "enable-region", { cmd_enable_region } },
{ "disable-region", { cmd_disable_region } },
{ "enable-dimm", { cmd_enable_dimm } },
--
2.20.1
3 years, 4 months
[PATCH v5 00/10] mm: Sub-section memory hotplug support
by Dan Williams
Changes since v4 [1]:
- Given v4 was from March of 2017 the bulk of the changes result from
rebasing the patch set from a v4.11-rc2 baseline to v5.1-rc1.
- A unit test is added to ndctl to exercise the creation and dax
mounting of multiple independent namespaces in a single 128M section.
[1]: https://lwn.net/Articles/717383/
---
Quote patch7:
"The libnvdimm sub-system has suffered a series of hacks and broken
workarounds for the memory-hotplug implementation's awkward
section-aligned (128MB) granularity. For example the following backtrace
is emitted when attempting arch_add_memory() with physical address
ranges that intersect 'System RAM' (RAM) with 'Persistent Memory' (PMEM)
within a given section:
WARNING: CPU: 0 PID: 558 at kernel/memremap.c:300 devm_memremap_pages+0x3b5/0x4c0
devm_memremap_pages attempted on mixed region [mem 0x200000000-0x2fbffffff flags 0x200]
[..]
Call Trace:
dump_stack+0x86/0xc3
__warn+0xcb/0xf0
warn_slowpath_fmt+0x5f/0x80
devm_memremap_pages+0x3b5/0x4c0
__wrap_devm_memremap_pages+0x58/0x70 [nfit_test_iomap]
pmem_attach_disk+0x19a/0x440 [nd_pmem]
Recently it was discovered that the problem goes beyond RAM vs PMEM
collisions as some platform produce PMEM vs PMEM collisions within a
given section. The libnvdimm workaround for that case revealed that the
libnvdimm section-alignment-padding implementation has been broken for a
long while. A fix for that long-standing breakage introduces as many
problems as it solves as it would require a backward-incompatible change
to the namespace metadata interpretation. Instead of that dubious route
[2], address the root problem in the memory-hotplug implementation."
The approach taken is to observe that each section already maintains
an array of 'unsigned long' values to hold the pageblock_flags. A single
additional 'unsigned long' is added to house a 'sub-section active'
bitmask. Each bit tracks the mapped state of one sub-section's worth of
capacity which is SECTION_SIZE / BITS_PER_LONG, or 2MB on x86-64.
The implication of allowing sections to be piecemeal mapped/unmapped is
that the valid_section() helper is no longer authoritative to determine
if a section is fully mapped. Instead pfn_valid() is updated to consult
the section-active bitmask. Given that typical memory hotplug still has
deep "section" dependencies the sub-section capability is limited to
'want_memblock=false' invocations of arch_add_memory(), effectively only
devm_memremap_pages() users for now.
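A rough sketch of that layout (field and helper names are illustrative and
may differ from the actual patches):
#include <linux/mmzone.h>
#define SUBSECTIONS_PER_SECTION  BITS_PER_LONG   /* 64 on x86-64 */
#define PAGES_PER_SUBSECTION (PAGES_PER_SECTION / SUBSECTIONS_PER_SECTION)
struct mem_section_usage {
	unsigned long subsection_map[1];  /* 1 bit per 2MB sub-section */
	unsigned long pageblock_flags[];  /* unchanged from today */
};
/* pfn_valid() consults the bitmap rather than trusting valid_section() */
static inline bool pfn_subsection_valid(struct mem_section_usage *usage,
					unsigned long pfn)
{
	unsigned long idx = (pfn & (PAGES_PER_SECTION - 1)) /
			    PAGES_PER_SUBSECTION;
	return test_bit(idx, usage->subsection_map);
}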
With this in place the hacks in the libnvdimm sub-system can be
dropped, and other devm_memremap_pages() users need no longer be
constrained to 128MB mapping granularity.
[2]: https://lore.kernel.org/r/155000671719.348031.2347363160141119237.stgit@d...
---
Dan Williams (10):
mm/sparsemem: Introduce struct mem_section_usage
mm/sparsemem: Introduce common definitions for the size and mask of a section
mm/sparsemem: Add helpers track active portions of a section at boot
mm/hotplug: Prepare shrink_{zone,pgdat}_span for sub-section removal
mm/sparsemem: Convert kmalloc_section_memmap() to populate_section_memmap()
mm/sparsemem: Prepare for sub-section ranges
mm/sparsemem: Support sub-section hotplug
mm/devm_memremap_pages: Enable sub-section remap
libnvdimm/pfn: Fix fsdax-mode namespace info-block zero-fields
libnvdimm/pfn: Stop padding pmem namespaces to section alignment
arch/x86/mm/init_64.c | 15 +-
drivers/nvdimm/dax_devs.c | 2
drivers/nvdimm/pfn.h | 12 -
drivers/nvdimm/pfn_devs.c | 93 +++-------
include/linux/memory_hotplug.h | 7 -
include/linux/mm.h | 4
include/linux/mmzone.h | 60 ++++++
kernel/memremap.c | 57 ++----
mm/hmm.c | 2
mm/memory_hotplug.c | 119 +++++++-----
mm/page_alloc.c | 6 -
mm/sparse-vmemmap.c | 21 +-
mm/sparse.c | 382 ++++++++++++++++++++++++++++------------
13 files changed, 476 insertions(+), 304 deletions(-)
3 years, 4 months
[PATCH RFC tip/core/rcu 0/4] Forbid static SRCU use in modules
by Paul E. McKenney
Hello!
This series prohibits use of DEFINE_SRCU() and DEFINE_STATIC_SRCU()
by loadable modules. The reason for this prohibition is the fact
that using these two macros within modules requires that the size of
the reserved region be increased, which is not something we want to
be doing all that often. Instead, loadable modules should define an
srcu_struct and invoke init_srcu_struct() from their module_init function
and cleanup_srcu_struct() from their module_exit function. Note that
modules using call_srcu() will also need to invoke srcu_barrier() from
their module_exit function.
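A minimal sketch of the pattern a module is expected to follow instead (the
module names are illustrative; the SRCU APIs are the existing ones named
above):
#include <linux/module.h>
#include <linux/srcu.h>
static struct srcu_struct my_srcu;
static int __init my_mod_init(void)
{
	return init_srcu_struct(&my_srcu);
}
static void __exit my_mod_exit(void)
{
	srcu_barrier(&my_srcu);     /* only needed if call_srcu() is used */
	cleanup_srcu_struct(&my_srcu);
}
module_init(my_mod_init);
module_exit(my_mod_exit);
MODULE_LICENSE("GPL");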
This series consists of the following:
1. Dynamically allocate dax_srcu.
2. Dynamically allocate drm_unplug_srcu.
3. Dynamically allocate kfd_processes_srcu.
These build and have been subjected to 0day testing, but might also need
testing by someone having the requisite hardware.
Thanx, Paul
------------------------------------------------------------------------
drivers/dax/super.c | 10 +++++-
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c | 5 +++
drivers/gpu/drm/amd/amdkfd/kfd_process.c | 2 -
drivers/gpu/drm/drm_drv.c | 8 ++++
include/linux/srcutree.h | 19 +++++++++--
kernel/rcu/rcuperf.c | 40 +++++++++++++++++++-----
kernel/rcu/rcutorture.c | 48 +++++++++++++++++++++--------
7 files changed, 105 insertions(+), 27 deletions(-)
3 years, 4 months
[PATCH v2 1/1] treewide: Switch printk users from %pf and %pF to %ps and %pS, respectively
by Sakari Ailus
%pF and %pf are functionally equivalent to %pS and %ps conversion
specifiers. The former are deprecated, therefore switch the current users
to use the preferred variant.
The changes have been produced by the following command:
git grep -l '%p[fF]' | grep -v '^\(tools\|Documentation\)/' | \
while read i; do perl -i -pe 's/%pf/%ps/g; s/%pF/%pS/g;' $i; done
And verifying the result.
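For reference, a minimal example of the substitution (the symbolic output in
the comments is representative, not taken from this patch):
#include <linux/kernel.h>
static void report_handler(void (*fn)(void))
{
	pr_info("handler=%pf\n", fn);  /* deprecated: symbol name only        */
	pr_info("handler=%pF\n", fn);  /* deprecated: symbol+offset/size      */
	pr_info("handler=%ps\n", fn);  /* preferred,  e.g. "do_foo"           */
	pr_info("handler=%pS\n", fn);  /* preferred,  e.g. "do_foo+0x0/0x40"  */
}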
Signed-off-by: Sakari Ailus <sakari.ailus(a)linux.intel.com>
Acked-by: David Sterba <dsterba(a)suse.com> (for btrfs)
Acked-by: Mike Rapoport <rppt(a)linux.ibm.com> (for mm/memblock.c)
Acked-by: Rafael J. Wysocki <rafael.j.wysocki(a)intel.com>
---
I split this off from the set because there is a change in
include/trace/events/timer.h that conflicts with v1 of this patch; if the
second patch were applied without that change, it would result in an invalid
use of %pf. To address the matter safely, without conflicts or printing
invalid pointer conversions, this patch and the other one changing
include/trace/events/timer.h must be merged before the second patch in v1 of
this set can go in.
since v1:
- Drop such changes to include/trace/events/timer.h where %pf has already
been converted to %ps in linux-next master.
arch/alpha/kernel/pci_iommu.c | 20 ++++++++++----------
arch/arm/mach-imx/pm-imx6.c | 2 +-
arch/arm/mm/alignment.c | 2 +-
arch/arm/nwfpe/fpmodule.c | 2 +-
arch/microblaze/mm/pgtable.c | 2 +-
arch/sparc/kernel/ds.c | 2 +-
arch/um/kernel/sysrq.c | 2 +-
arch/x86/include/asm/trace/exceptions.h | 2 +-
arch/x86/kernel/irq_64.c | 2 +-
arch/x86/mm/extable.c | 4 ++--
arch/x86/xen/multicalls.c | 2 +-
drivers/acpi/device_pm.c | 2 +-
drivers/base/power/main.c | 6 +++---
drivers/base/syscore.c | 12 ++++++------
drivers/block/drbd/drbd_receiver.c | 2 +-
drivers/block/floppy.c | 10 +++++-----
drivers/cpufreq/cpufreq.c | 2 +-
drivers/mmc/core/quirks.h | 2 +-
drivers/nvdimm/bus.c | 2 +-
drivers/nvdimm/dimm_devs.c | 2 +-
drivers/pci/pci-driver.c | 14 +++++++-------
drivers/pci/quirks.c | 4 ++--
drivers/pnp/quirks.c | 2 +-
drivers/scsi/esp_scsi.c | 2 +-
fs/btrfs/tests/free-space-tree-tests.c | 4 ++--
fs/f2fs/f2fs.h | 2 +-
fs/pstore/inode.c | 2 +-
include/trace/events/btrfs.h | 2 +-
include/trace/events/cpuhp.h | 4 ++--
include/trace/events/preemptirq.h | 2 +-
include/trace/events/rcu.h | 4 ++--
include/trace/events/sunrpc.h | 2 +-
include/trace/events/vmscan.h | 4 ++--
include/trace/events/workqueue.h | 4 ++--
include/trace/events/xen.h | 2 +-
init/main.c | 6 +++---
kernel/async.c | 4 ++--
kernel/events/uprobes.c | 2 +-
kernel/fail_function.c | 2 +-
kernel/irq/debugfs.c | 2 +-
kernel/irq/handle.c | 2 +-
kernel/irq/manage.c | 2 +-
kernel/irq/spurious.c | 4 ++--
kernel/rcu/tree.c | 2 +-
kernel/stop_machine.c | 2 +-
kernel/time/sched_clock.c | 2 +-
kernel/time/timer.c | 2 +-
kernel/workqueue.c | 12 ++++++------
lib/error-inject.c | 2 +-
lib/percpu-refcount.c | 4 ++--
mm/memblock.c | 12 ++++++------
mm/memory.c | 2 +-
mm/vmscan.c | 2 +-
net/ceph/osd_client.c | 2 +-
net/core/net-procfs.c | 2 +-
net/core/netpoll.c | 4 ++--
56 files changed, 105 insertions(+), 105 deletions(-)
diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index 3034d6d936d2..242108439f42 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -249,7 +249,7 @@ static int pci_dac_dma_supported(struct pci_dev *dev, u64 mask)
ok = 0;
/* If both conditions above are met, we are fine. */
- DBGA("pci_dac_dma_supported %s from %pf\n",
+ DBGA("pci_dac_dma_supported %s from %ps\n",
ok ? "yes" : "no", __builtin_return_address(0));
return ok;
@@ -281,7 +281,7 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
&& paddr + size <= __direct_map_size) {
ret = paddr + __direct_map_base;
- DBGA2("pci_map_single: [%p,%zx] -> direct %llx from %pf\n",
+ DBGA2("pci_map_single: [%p,%zx] -> direct %llx from %ps\n",
cpu_addr, size, ret, __builtin_return_address(0));
return ret;
@@ -292,7 +292,7 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
if (dac_allowed) {
ret = paddr + alpha_mv.pci_dac_offset;
- DBGA2("pci_map_single: [%p,%zx] -> DAC %llx from %pf\n",
+ DBGA2("pci_map_single: [%p,%zx] -> DAC %llx from %ps\n",
cpu_addr, size, ret, __builtin_return_address(0));
return ret;
@@ -329,7 +329,7 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
ret = arena->dma_base + dma_ofs * PAGE_SIZE;
ret += (unsigned long)cpu_addr & ~PAGE_MASK;
- DBGA2("pci_map_single: [%p,%zx] np %ld -> sg %llx from %pf\n",
+ DBGA2("pci_map_single: [%p,%zx] np %ld -> sg %llx from %ps\n",
cpu_addr, size, npages, ret, __builtin_return_address(0));
return ret;
@@ -396,14 +396,14 @@ static void alpha_pci_unmap_page(struct device *dev, dma_addr_t dma_addr,
&& dma_addr < __direct_map_base + __direct_map_size) {
/* Nothing to do. */
- DBGA2("pci_unmap_single: direct [%llx,%zx] from %pf\n",
+ DBGA2("pci_unmap_single: direct [%llx,%zx] from %ps\n",
dma_addr, size, __builtin_return_address(0));
return;
}
if (dma_addr > 0xffffffff) {
- DBGA2("pci64_unmap_single: DAC [%llx,%zx] from %pf\n",
+ DBGA2("pci64_unmap_single: DAC [%llx,%zx] from %ps\n",
dma_addr, size, __builtin_return_address(0));
return;
}
@@ -435,7 +435,7 @@ static void alpha_pci_unmap_page(struct device *dev, dma_addr_t dma_addr,
spin_unlock_irqrestore(&arena->lock, flags);
- DBGA2("pci_unmap_single: sg [%llx,%zx] np %ld from %pf\n",
+ DBGA2("pci_unmap_single: sg [%llx,%zx] np %ld from %ps\n",
dma_addr, size, npages, __builtin_return_address(0));
}
@@ -458,7 +458,7 @@ static void *alpha_pci_alloc_coherent(struct device *dev, size_t size,
cpu_addr = (void *)__get_free_pages(gfp | __GFP_ZERO, order);
if (! cpu_addr) {
printk(KERN_INFO "pci_alloc_consistent: "
- "get_free_pages failed from %pf\n",
+ "get_free_pages failed from %ps\n",
__builtin_return_address(0));
/* ??? Really atomic allocation? Otherwise we could play
with vmalloc and sg if we can't find contiguous memory. */
@@ -477,7 +477,7 @@ static void *alpha_pci_alloc_coherent(struct device *dev, size_t size,
goto try_again;
}
- DBGA2("pci_alloc_consistent: %zx -> [%p,%llx] from %pf\n",
+ DBGA2("pci_alloc_consistent: %zx -> [%p,%llx] from %ps\n",
size, cpu_addr, *dma_addrp, __builtin_return_address(0));
return cpu_addr;
@@ -497,7 +497,7 @@ static void alpha_pci_free_coherent(struct device *dev, size_t size,
pci_unmap_single(pdev, dma_addr, size, PCI_DMA_BIDIRECTIONAL);
free_pages((unsigned long)cpu_addr, get_order(size));
- DBGA2("pci_free_consistent: [%llx,%zx] from %pf\n",
+ DBGA2("pci_free_consistent: [%llx,%zx] from %ps\n",
dma_addr, size, __builtin_return_address(0));
}
diff --git a/arch/arm/mach-imx/pm-imx6.c b/arch/arm/mach-imx/pm-imx6.c
index 54add0178b96..e527532f6931 100644
--- a/arch/arm/mach-imx/pm-imx6.c
+++ b/arch/arm/mach-imx/pm-imx6.c
@@ -633,7 +633,7 @@ static void imx6_pm_stby_poweroff(void)
static int imx6_pm_stby_poweroff_probe(void)
{
if (pm_power_off) {
- pr_warn("%s: pm_power_off already claimed %p %pf!\n",
+ pr_warn("%s: pm_power_off already claimed %p %ps!\n",
__func__, pm_power_off, pm_power_off);
return -EBUSY;
}
diff --git a/arch/arm/mm/alignment.c b/arch/arm/mm/alignment.c
index b54f8f8def36..e376883ab35b 100644
--- a/arch/arm/mm/alignment.c
+++ b/arch/arm/mm/alignment.c
@@ -133,7 +133,7 @@ static const char *usermode_action[] = {
static int alignment_proc_show(struct seq_file *m, void *v)
{
seq_printf(m, "User:\t\t%lu\n", ai_user);
- seq_printf(m, "System:\t\t%lu (%pF)\n", ai_sys, ai_sys_last_pc);
+ seq_printf(m, "System:\t\t%lu (%pS)\n", ai_sys, ai_sys_last_pc);
seq_printf(m, "Skipped:\t%lu\n", ai_skipped);
seq_printf(m, "Half:\t\t%lu\n", ai_half);
seq_printf(m, "Word:\t\t%lu\n", ai_word);
diff --git a/arch/arm/nwfpe/fpmodule.c b/arch/arm/nwfpe/fpmodule.c
index 1365e8650843..ee34c76e6624 100644
--- a/arch/arm/nwfpe/fpmodule.c
+++ b/arch/arm/nwfpe/fpmodule.c
@@ -147,7 +147,7 @@ void float_raise(signed char flags)
#ifdef CONFIG_DEBUG_USER
if (flags & debug)
printk(KERN_DEBUG
- "NWFPE: %s[%d] takes exception %08x at %pf from %08lx\n",
+ "NWFPE: %s[%d] takes exception %08x at %ps from %08lx\n",
current->comm, current->pid, flags,
__builtin_return_address(0), GET_USERREG()->ARM_pc);
#endif
diff --git a/arch/microblaze/mm/pgtable.c b/arch/microblaze/mm/pgtable.c
index c2ce1e42b888..8fe54fda31dc 100644
--- a/arch/microblaze/mm/pgtable.c
+++ b/arch/microblaze/mm/pgtable.c
@@ -75,7 +75,7 @@ static void __iomem *__ioremap(phys_addr_t addr, unsigned long size,
p >= memory_start && p < virt_to_phys(high_memory) &&
!(p >= __virt_to_phys((phys_addr_t)__bss_stop) &&
p < __virt_to_phys((phys_addr_t)__bss_stop))) {
- pr_warn("__ioremap(): phys addr "PTE_FMT" is RAM lr %pf\n",
+ pr_warn("__ioremap(): phys addr "PTE_FMT" is RAM lr %ps\n",
(unsigned long)p, __builtin_return_address(0));
return NULL;
}
diff --git a/arch/sparc/kernel/ds.c b/arch/sparc/kernel/ds.c
index f87265afb175..cad08ccce625 100644
--- a/arch/sparc/kernel/ds.c
+++ b/arch/sparc/kernel/ds.c
@@ -876,7 +876,7 @@ void ldom_power_off(void)
static void ds_conn_reset(struct ds_info *dp)
{
- printk(KERN_ERR "ds-%llu: ds_conn_reset() from %pf\n",
+ printk(KERN_ERR "ds-%llu: ds_conn_reset() from %ps\n",
dp->id, __builtin_return_address(0));
}
diff --git a/arch/um/kernel/sysrq.c b/arch/um/kernel/sysrq.c
index 6b995e870d55..05585eef11d9 100644
--- a/arch/um/kernel/sysrq.c
+++ b/arch/um/kernel/sysrq.c
@@ -20,7 +20,7 @@
static void _print_addr(void *data, unsigned long address, int reliable)
{
- pr_info(" [<%08lx>] %s%pF\n", address, reliable ? "" : "? ",
+ pr_info(" [<%08lx>] %s%pS\n", address, reliable ? "" : "? ",
(void *)address);
}
diff --git a/arch/x86/include/asm/trace/exceptions.h b/arch/x86/include/asm/trace/exceptions.h
index e0e6d7f21399..6b1e87194809 100644
--- a/arch/x86/include/asm/trace/exceptions.h
+++ b/arch/x86/include/asm/trace/exceptions.h
@@ -30,7 +30,7 @@ DECLARE_EVENT_CLASS(x86_exceptions,
__entry->error_code = error_code;
),
- TP_printk("address=%pf ip=%pf error_code=0x%lx",
+ TP_printk("address=%ps ip=%ps error_code=0x%lx",
(void *)__entry->address, (void *)__entry->ip,
__entry->error_code) );
diff --git a/arch/x86/kernel/irq_64.c b/arch/x86/kernel/irq_64.c
index 0469cd078db1..4dff56658427 100644
--- a/arch/x86/kernel/irq_64.c
+++ b/arch/x86/kernel/irq_64.c
@@ -58,7 +58,7 @@ static inline void stack_overflow_check(struct pt_regs *regs)
if (regs->sp >= estack_top && regs->sp <= estack_bottom)
return;
- WARN_ONCE(1, "do_IRQ(): %s has overflown the kernel stack (cur:%Lx,sp:%lx,irq stk top-bottom:%Lx-%Lx,exception stk top-bottom:%Lx-%Lx,ip:%pF)\n",
+ WARN_ONCE(1, "do_IRQ(): %s has overflown the kernel stack (cur:%Lx,sp:%lx,irq stk top-bottom:%Lx-%Lx,exception stk top-bottom:%Lx-%Lx,ip:%pS)\n",
current->comm, curbase, regs->sp,
irq_stack_top, irq_stack_bottom,
estack_top, estack_bottom, (void *)regs->ip);
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 3c4568f8fb28..b0a2de8d2f9e 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -145,7 +145,7 @@ __visible bool ex_handler_rdmsr_unsafe(const struct exception_table_entry *fixup
unsigned long error_code,
unsigned long fault_addr)
{
- if (pr_warn_once("unchecked MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pF)\n",
+ if (pr_warn_once("unchecked MSR access error: RDMSR from 0x%x at rIP: 0x%lx (%pS)\n",
(unsigned int)regs->cx, regs->ip, (void *)regs->ip))
show_stack_regs(regs);
@@ -162,7 +162,7 @@ __visible bool ex_handler_wrmsr_unsafe(const struct exception_table_entry *fixup
unsigned long error_code,
unsigned long fault_addr)
{
- if (pr_warn_once("unchecked MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x) at rIP: 0x%lx (%pF)\n",
+ if (pr_warn_once("unchecked MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x) at rIP: 0x%lx (%pS)\n",
(unsigned int)regs->cx, (unsigned int)regs->dx,
(unsigned int)regs->ax, regs->ip, (void *)regs->ip))
show_stack_regs(regs);
diff --git a/arch/x86/xen/multicalls.c b/arch/x86/xen/multicalls.c
index 0766a08bdf45..07054572297f 100644
--- a/arch/x86/xen/multicalls.c
+++ b/arch/x86/xen/multicalls.c
@@ -105,7 +105,7 @@ void xen_mc_flush(void)
for (i = 0; i < b->mcidx; i++) {
if (b->entries[i].result < 0) {
#if MC_DEBUG
- pr_err(" call %2d: op=%lu arg=[%lx] result=%ld\t%pF\n",
+ pr_err(" call %2d: op=%lu arg=[%lx] result=%ld\t%pS\n",
i + 1,
b->debug[i].op,
b->debug[i].args[0],
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index 824ae985ad93..1aa0d014dc34 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -414,7 +414,7 @@ static void acpi_pm_notify_handler(acpi_handle handle, u32 val, void *not_used)
if (adev->wakeup.flags.notifier_present) {
pm_wakeup_ws_event(adev->wakeup.ws, 0, acpi_s2idle_wakeup());
if (adev->wakeup.context.func) {
- acpi_handle_debug(handle, "Running %pF for %s\n",
+ acpi_handle_debug(handle, "Running %pS for %s\n",
adev->wakeup.context.func,
dev_name(adev->wakeup.context.dev));
adev->wakeup.context.func(&adev->wakeup.context);
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index f80d298de3fa..a619be025056 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -207,7 +207,7 @@ static ktime_t initcall_debug_start(struct device *dev, void *cb)
if (!pm_print_times_enabled)
return 0;
- dev_info(dev, "calling %pF @ %i, parent: %s\n", cb,
+ dev_info(dev, "calling %pS @ %i, parent: %s\n", cb,
task_pid_nr(current),
dev->parent ? dev_name(dev->parent) : "none");
return ktime_get();
@@ -225,7 +225,7 @@ static void initcall_debug_report(struct device *dev, ktime_t calltime,
rettime = ktime_get();
nsecs = (s64) ktime_to_ns(ktime_sub(rettime, calltime));
- dev_info(dev, "%pF returned %d after %Ld usecs\n", cb, error,
+ dev_info(dev, "%pS returned %d after %Ld usecs\n", cb, error,
(unsigned long long)nsecs >> 10);
}
@@ -2063,7 +2063,7 @@ EXPORT_SYMBOL_GPL(dpm_suspend_start);
void __suspend_report_result(const char *function, void *fn, int ret)
{
if (ret)
- pr_err("%s(): %pF returns %d\n", function, fn, ret);
+ pr_err("%s(): %pS returns %d\n", function, fn, ret);
}
EXPORT_SYMBOL_GPL(__suspend_report_result);
diff --git a/drivers/base/syscore.c b/drivers/base/syscore.c
index 6e076f359dcc..0d346a307140 100644
--- a/drivers/base/syscore.c
+++ b/drivers/base/syscore.c
@@ -62,19 +62,19 @@ int syscore_suspend(void)
list_for_each_entry_reverse(ops, &syscore_ops_list, node)
if (ops->suspend) {
if (initcall_debug)
- pr_info("PM: Calling %pF\n", ops->suspend);
+ pr_info("PM: Calling %pS\n", ops->suspend);
ret = ops->suspend();
if (ret)
goto err_out;
WARN_ONCE(!irqs_disabled(),
- "Interrupts enabled after %pF\n", ops->suspend);
+ "Interrupts enabled after %pS\n", ops->suspend);
}
trace_suspend_resume(TPS("syscore_suspend"), 0, false);
return 0;
err_out:
- pr_err("PM: System core suspend callback %pF failed.\n", ops->suspend);
+ pr_err("PM: System core suspend callback %pS failed.\n", ops->suspend);
list_for_each_entry_continue(ops, &syscore_ops_list, node)
if (ops->resume)
@@ -100,10 +100,10 @@ void syscore_resume(void)
list_for_each_entry(ops, &syscore_ops_list, node)
if (ops->resume) {
if (initcall_debug)
- pr_info("PM: Calling %pF\n", ops->resume);
+ pr_info("PM: Calling %pS\n", ops->resume);
ops->resume();
WARN_ONCE(!irqs_disabled(),
- "Interrupts enabled after %pF\n", ops->resume);
+ "Interrupts enabled after %pS\n", ops->resume);
}
trace_suspend_resume(TPS("syscore_resume"), 0, false);
}
@@ -122,7 +122,7 @@ void syscore_shutdown(void)
list_for_each_entry_reverse(ops, &syscore_ops_list, node)
if (ops->shutdown) {
if (initcall_debug)
- pr_info("PM: Calling %pF\n", ops->shutdown);
+ pr_info("PM: Calling %pS\n", ops->shutdown);
ops->shutdown();
}
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index c7ad88d91a09..3e5fd97a3b4d 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -6116,7 +6116,7 @@ int drbd_ack_receiver(struct drbd_thread *thi)
err = cmd->fn(connection, &pi);
if (err) {
- drbd_err(connection, "%pf failed\n", cmd->fn);
+ drbd_err(connection, "%ps failed\n", cmd->fn);
goto reconnect;
}
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 95f608d1a098..49f89db0766f 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -1693,7 +1693,7 @@ irqreturn_t floppy_interrupt(int irq, void *dev_id)
/* we don't even know which FDC is the culprit */
pr_info("DOR0=%x\n", fdc_state[0].dor);
pr_info("floppy interrupt on bizarre fdc %d\n", fdc);
- pr_info("handler=%pf\n", handler);
+ pr_info("handler=%ps\n", handler);
is_alive(__func__, "bizarre fdc");
return IRQ_NONE;
}
@@ -1752,7 +1752,7 @@ static void reset_interrupt(void)
debugt(__func__, "");
result(); /* get the status ready for set_fdc */
if (FDCS->reset) {
- pr_info("reset set in interrupt, calling %pf\n", cont->error);
+ pr_info("reset set in interrupt, calling %ps\n", cont->error);
cont->error(); /* a reset just after a reset. BAD! */
}
cont->redo();
@@ -1793,7 +1793,7 @@ static void show_floppy(void)
pr_info("\n");
pr_info("floppy driver state\n");
pr_info("-------------------\n");
- pr_info("now=%lu last interrupt=%lu diff=%lu last called handler=%pf\n",
+ pr_info("now=%lu last interrupt=%lu diff=%lu last called handler=%ps\n",
jiffies, interruptjiffies, jiffies - interruptjiffies,
lasthandler);
@@ -1812,9 +1812,9 @@ static void show_floppy(void)
pr_info("status=%x\n", fd_inb(FD_STATUS));
pr_info("fdc_busy=%lu\n", fdc_busy);
if (do_floppy)
- pr_info("do_floppy=%pf\n", do_floppy);
+ pr_info("do_floppy=%ps\n", do_floppy);
if (work_pending(&floppy_work))
- pr_info("floppy_work.func=%pf\n", floppy_work.func);
+ pr_info("floppy_work.func=%ps\n", floppy_work.func);
if (delayed_work_pending(&fd_timer))
pr_info("delayed work.function=%p expires=%ld\n",
fd_timer.work.func,
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index e10922709d13..bf78a3d9e0e9 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -426,7 +426,7 @@ static void cpufreq_list_transition_notifiers(void)
mutex_lock(&cpufreq_transition_notifier_list.mutex);
for (nb = cpufreq_transition_notifier_list.head; nb; nb = nb->next)
- pr_info("%pF\n", nb->notifier_call);
+ pr_info("%pS\n", nb->notifier_call);
mutex_unlock(&cpufreq_transition_notifier_list.mutex);
}
diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
index dd2f73af8f2c..2d2d9ea8be4f 100644
--- a/drivers/mmc/core/quirks.h
+++ b/drivers/mmc/core/quirks.h
@@ -159,7 +159,7 @@ static inline void mmc_fixup_device(struct mmc_card *card,
(f->ext_csd_rev == EXT_CSD_REV_ANY ||
f->ext_csd_rev == card->ext_csd.rev) &&
rev >= f->rev_start && rev <= f->rev_end) {
- dev_dbg(&card->dev, "calling %pf\n", f->vendor_fixup);
+ dev_dbg(&card->dev, "calling %ps\n", f->vendor_fixup);
f->vendor_fixup(card, f->data);
}
}
diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
index 7bbff0af29b2..7ff684159f29 100644
--- a/drivers/nvdimm/bus.c
+++ b/drivers/nvdimm/bus.c
@@ -581,7 +581,7 @@ int __nd_driver_register(struct nd_device_driver *nd_drv, struct module *owner,
struct device_driver *drv = &nd_drv->drv;
if (!nd_drv->type) {
- pr_debug("driver type bitmask not set (%pf)\n",
+ pr_debug("driver type bitmask not set (%ps)\n",
__builtin_return_address(0));
return -EINVAL;
}
diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c
index 91b9abbf689c..ecbab2d66e38 100644
--- a/drivers/nvdimm/dimm_devs.c
+++ b/drivers/nvdimm/dimm_devs.c
@@ -58,7 +58,7 @@ static int validate_dimm(struct nvdimm_drvdata *ndd)
rc = nvdimm_check_config_data(ndd->dev);
if (rc)
- dev_dbg(ndd->dev, "%pf: %s error: %d\n",
+ dev_dbg(ndd->dev, "%ps: %s error: %d\n",
__builtin_return_address(0), __func__, rc);
return rc;
}
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index 71853befd435..cae630fe6387 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -578,7 +578,7 @@ static int pci_legacy_suspend(struct device *dev, pm_message_t state)
if (!pci_dev->state_saved && pci_dev->current_state != PCI_D0
&& pci_dev->current_state != PCI_UNKNOWN) {
WARN_ONCE(pci_dev->current_state != prev,
- "PCI PM: Device state not saved by %pF\n",
+ "PCI PM: Device state not saved by %pS\n",
drv->suspend);
}
}
@@ -605,7 +605,7 @@ static int pci_legacy_suspend_late(struct device *dev, pm_message_t state)
if (!pci_dev->state_saved && pci_dev->current_state != PCI_D0
&& pci_dev->current_state != PCI_UNKNOWN) {
WARN_ONCE(pci_dev->current_state != prev,
- "PCI PM: Device state not saved by %pF\n",
+ "PCI PM: Device state not saved by %pS\n",
drv->suspend_late);
goto Fixup;
}
@@ -773,7 +773,7 @@ static int pci_pm_suspend(struct device *dev)
if (!pci_dev->state_saved && pci_dev->current_state != PCI_D0
&& pci_dev->current_state != PCI_UNKNOWN) {
WARN_ONCE(pci_dev->current_state != prev,
- "PCI PM: State of device not saved by %pF\n",
+ "PCI PM: State of device not saved by %pS\n",
pm->suspend);
}
}
@@ -821,7 +821,7 @@ static int pci_pm_suspend_noirq(struct device *dev)
if (!pci_dev->state_saved && pci_dev->current_state != PCI_D0
&& pci_dev->current_state != PCI_UNKNOWN) {
WARN_ONCE(pci_dev->current_state != prev,
- "PCI PM: State of device not saved by %pF\n",
+ "PCI PM: State of device not saved by %pS\n",
pm->suspend_noirq);
goto Fixup;
}
@@ -1260,11 +1260,11 @@ static int pci_pm_runtime_suspend(struct device *dev)
* log level.
*/
if (error == -EBUSY || error == -EAGAIN) {
- dev_dbg(dev, "can't suspend now (%pf returned %d)\n",
+ dev_dbg(dev, "can't suspend now (%ps returned %d)\n",
pm->runtime_suspend, error);
return error;
} else if (error) {
- dev_err(dev, "can't suspend (%pf returned %d)\n",
+ dev_err(dev, "can't suspend (%ps returned %d)\n",
pm->runtime_suspend, error);
return error;
}
@@ -1276,7 +1276,7 @@ static int pci_pm_runtime_suspend(struct device *dev)
&& !pci_dev->state_saved && pci_dev->current_state != PCI_D0
&& pci_dev->current_state != PCI_UNKNOWN) {
WARN_ONCE(pci_dev->current_state != prev,
- "PCI PM: State of device not saved by %pF\n",
+ "PCI PM: State of device not saved by %pS\n",
pm->runtime_suspend);
return 0;
}
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index a59ad09ce911..b56c2a75d42f 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -36,7 +36,7 @@ static ktime_t fixup_debug_start(struct pci_dev *dev,
void (*fn)(struct pci_dev *dev))
{
if (initcall_debug)
- pci_info(dev, "calling %pF @ %i\n", fn, task_pid_nr(current));
+ pci_info(dev, "calling %pS @ %i\n", fn, task_pid_nr(current));
return ktime_get();
}
@@ -51,7 +51,7 @@ static void fixup_debug_report(struct pci_dev *dev, ktime_t calltime,
delta = ktime_sub(rettime, calltime);
duration = (unsigned long long) ktime_to_ns(delta) >> 10;
if (initcall_debug || duration > 10000)
- pci_info(dev, "%pF took %lld usecs\n", fn, duration);
+ pci_info(dev, "%pS took %lld usecs\n", fn, duration);
}
static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
diff --git a/drivers/pnp/quirks.c b/drivers/pnp/quirks.c
index 803666ae3635..de99f371d362 100644
--- a/drivers/pnp/quirks.c
+++ b/drivers/pnp/quirks.c
@@ -458,7 +458,7 @@ void pnp_fixup_device(struct pnp_dev *dev)
for (f = pnp_fixups; *f->id; f++) {
if (!compare_pnp_id(dev->id, f->id))
continue;
- pnp_dbg(&dev->dev, "%s: calling %pF\n", f->id,
+ pnp_dbg(&dev->dev, "%s: calling %pS\n", f->id,
f->quirk_function);
f->quirk_function(dev);
}
diff --git a/drivers/scsi/esp_scsi.c b/drivers/scsi/esp_scsi.c
index 465df475f753..76fd02ccbf49 100644
--- a/drivers/scsi/esp_scsi.c
+++ b/drivers/scsi/esp_scsi.c
@@ -1031,7 +1031,7 @@ static int esp_check_spur_intr(struct esp *esp)
static void esp_schedule_reset(struct esp *esp)
{
- esp_log_reset("esp_schedule_reset() from %pf\n",
+ esp_log_reset("esp_schedule_reset() from %ps\n",
__builtin_return_address(0));
esp->flags |= ESP_FLAG_RESETTING;
esp_event(esp, ESP_EVENT_RESET);
diff --git a/fs/btrfs/tests/free-space-tree-tests.c b/fs/btrfs/tests/free-space-tree-tests.c
index 09c27628e305..201fcd45fc23 100644
--- a/fs/btrfs/tests/free-space-tree-tests.c
+++ b/fs/btrfs/tests/free-space-tree-tests.c
@@ -539,7 +539,7 @@ static int run_test_both_formats(test_func_t test_func, u32 sectorsize,
ret = run_test(test_func, 0, sectorsize, nodesize, alignment);
if (ret) {
test_err(
- "%pf failed with extents, sectorsize=%u, nodesize=%u, alignment=%u",
+ "%ps failed with extents, sectorsize=%u, nodesize=%u, alignment=%u",
test_func, sectorsize, nodesize, alignment);
test_ret = ret;
}
@@ -547,7 +547,7 @@ static int run_test_both_formats(test_func_t test_func, u32 sectorsize,
ret = run_test(test_func, 1, sectorsize, nodesize, alignment);
if (ret) {
test_err(
- "%pf failed with bitmaps, sectorsize=%u, nodesize=%u, alignment=%u",
+ "%ps failed with bitmaps, sectorsize=%u, nodesize=%u, alignment=%u",
test_func, sectorsize, nodesize, alignment);
test_ret = ret;
}
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 5bc7b99fb9c1..41584c961d5c 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -1338,7 +1338,7 @@ struct f2fs_private_dio {
#ifdef CONFIG_F2FS_FAULT_INJECTION
#define f2fs_show_injection_info(type) \
- printk_ratelimited("%sF2FS-fs : inject %s in %s of %pF\n", \
+ printk_ratelimited("%sF2FS-fs : inject %s in %s of %pS\n", \
KERN_INFO, f2fs_fault_name[type], \
__func__, __builtin_return_address(0))
static inline bool time_to_inject(struct f2fs_sb_info *sbi, int type)
diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
index c60ee46f3e39..29e94e0b6d73 100644
--- a/fs/pstore/inode.c
+++ b/fs/pstore/inode.c
@@ -115,7 +115,7 @@ static int pstore_ftrace_seq_show(struct seq_file *s, void *v)
rec = (struct pstore_ftrace_record *)(ps->record->buf + data->off);
- seq_printf(s, "CPU:%d ts:%llu %08lx %08lx %pf <- %pF\n",
+ seq_printf(s, "CPU:%d ts:%llu %08lx %08lx %ps <- %pS\n",
pstore_ftrace_decode_cpu(rec),
pstore_ftrace_read_timestamp(rec),
rec->ip, rec->parent_ip, (void *)rec->ip,
diff --git a/include/trace/events/btrfs.h b/include/trace/events/btrfs.h
index 8b12753fee78..9621498d42e9 100644
--- a/include/trace/events/btrfs.h
+++ b/include/trace/events/btrfs.h
@@ -1380,7 +1380,7 @@ DECLARE_EVENT_CLASS(btrfs__work,
__entry->normal_work = &work->normal_work;
),
- TP_printk_btrfs("work=%p (normal_work=%p) wq=%p func=%pf ordered_func=%p "
+ TP_printk_btrfs("work=%p (normal_work=%p) wq=%p func=%ps ordered_func=%p "
"ordered_free=%p",
__entry->work, __entry->normal_work, __entry->wq,
__entry->func, __entry->ordered_func, __entry->ordered_free)
diff --git a/include/trace/events/cpuhp.h b/include/trace/events/cpuhp.h
index fe1d6e8cd99d..ad16f77310c6 100644
--- a/include/trace/events/cpuhp.h
+++ b/include/trace/events/cpuhp.h
@@ -30,7 +30,7 @@ TRACE_EVENT(cpuhp_enter,
__entry->fun = fun;
),
- TP_printk("cpu: %04u target: %3d step: %3d (%pf)",
+ TP_printk("cpu: %04u target: %3d step: %3d (%ps)",
__entry->cpu, __entry->target, __entry->idx, __entry->fun)
);
@@ -58,7 +58,7 @@ TRACE_EVENT(cpuhp_multi_enter,
__entry->fun = fun;
),
- TP_printk("cpu: %04u target: %3d step: %3d (%pf)",
+ TP_printk("cpu: %04u target: %3d step: %3d (%ps)",
__entry->cpu, __entry->target, __entry->idx, __entry->fun)
);
diff --git a/include/trace/events/preemptirq.h b/include/trace/events/preemptirq.h
index 9a0d4ceeb166..95fba0471e5b 100644
--- a/include/trace/events/preemptirq.h
+++ b/include/trace/events/preemptirq.h
@@ -27,7 +27,7 @@ DECLARE_EVENT_CLASS(preemptirq_template,
__entry->parent_offs = (u32)(parent_ip - (unsigned long)_stext);
),
- TP_printk("caller=%pF parent=%pF",
+ TP_printk("caller=%pS parent=%pS",
(void *)((unsigned long)(_stext) + __entry->caller_offs),
(void *)((unsigned long)(_stext) + __entry->parent_offs))
);
diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h
index f0c4d10e614b..80339fd14c1c 100644
--- a/include/trace/events/rcu.h
+++ b/include/trace/events/rcu.h
@@ -491,7 +491,7 @@ TRACE_EVENT(rcu_callback,
__entry->qlen = qlen;
),
- TP_printk("%s rhp=%p func=%pf %ld/%ld",
+ TP_printk("%s rhp=%p func=%ps %ld/%ld",
__entry->rcuname, __entry->rhp, __entry->func,
__entry->qlen_lazy, __entry->qlen)
);
@@ -587,7 +587,7 @@ TRACE_EVENT(rcu_invoke_callback,
__entry->func = rhp->func;
),
- TP_printk("%s rhp=%p func=%pf",
+ TP_printk("%s rhp=%p func=%ps",
__entry->rcuname, __entry->rhp, __entry->func)
);
diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
index 7e899e635d33..f0a6f0c5549c 100644
--- a/include/trace/events/sunrpc.h
+++ b/include/trace/events/sunrpc.h
@@ -146,7 +146,7 @@ DECLARE_EVENT_CLASS(rpc_task_running,
__entry->flags = task->tk_flags;
),
- TP_printk("task:%u@%d flags=%s runstate=%s status=%d action=%pf",
+ TP_printk("task:%u@%d flags=%s runstate=%s status=%d action=%ps",
__entry->task_id, __entry->client_id,
rpc_show_task_flags(__entry->flags),
rpc_show_runstate(__entry->runstate),
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index a1cb91342231..252327dbfa51 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -226,7 +226,7 @@ TRACE_EVENT(mm_shrink_slab_start,
__entry->priority = priority;
),
- TP_printk("%pF %p: nid: %d objects to shrink %ld gfp_flags %s cache items %ld delta %lld total_scan %ld priority %d",
+ TP_printk("%pS %p: nid: %d objects to shrink %ld gfp_flags %s cache items %ld delta %lld total_scan %ld priority %d",
__entry->shrink,
__entry->shr,
__entry->nid,
@@ -265,7 +265,7 @@ TRACE_EVENT(mm_shrink_slab_end,
__entry->total_scan = total_scan;
),
- TP_printk("%pF %p: nid: %d unused scan count %ld new scan count %ld total_scan %ld last shrinker return val %d",
+ TP_printk("%pS %p: nid: %d unused scan count %ld new scan count %ld total_scan %ld last shrinker return val %d",
__entry->shrink,
__entry->shr,
__entry->nid,
diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
index 9a761bc6a251..e172549283be 100644
--- a/include/trace/events/workqueue.h
+++ b/include/trace/events/workqueue.h
@@ -60,7 +60,7 @@ TRACE_EVENT(workqueue_queue_work,
__entry->cpu = pwq->pool->cpu;
),
- TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
+ TP_printk("work struct=%p function=%ps workqueue=%p req_cpu=%u cpu=%u",
__entry->work, __entry->function, __entry->workqueue,
__entry->req_cpu, __entry->cpu)
);
@@ -102,7 +102,7 @@ TRACE_EVENT(workqueue_execute_start,
__entry->function = work->func;
),
- TP_printk("work struct %p: function %pf", __entry->work, __entry->function)
+ TP_printk("work struct %p: function %ps", __entry->work, __entry->function)
);
/**
diff --git a/include/trace/events/xen.h b/include/trace/events/xen.h
index fdcf88bcf0ea..9a0e8af21310 100644
--- a/include/trace/events/xen.h
+++ b/include/trace/events/xen.h
@@ -73,7 +73,7 @@ TRACE_EVENT(xen_mc_callback,
__entry->fn = fn;
__entry->data = data;
),
- TP_printk("callback %pf, data %p",
+ TP_printk("callback %ps, data %p",
__entry->fn, __entry->data)
);
diff --git a/init/main.c b/init/main.c
index 598e278b46f7..204e87ec3419 100644
--- a/init/main.c
+++ b/init/main.c
@@ -840,7 +840,7 @@ trace_initcall_start_cb(void *data, initcall_t fn)
{
ktime_t *calltime = (ktime_t *)data;
- printk(KERN_DEBUG "calling %pF @ %i\n", fn, task_pid_nr(current));
+ printk(KERN_DEBUG "calling %pS @ %i\n", fn, task_pid_nr(current));
*calltime = ktime_get();
}
@@ -854,7 +854,7 @@ trace_initcall_finish_cb(void *data, initcall_t fn, int ret)
rettime = ktime_get();
delta = ktime_sub(rettime, *calltime);
duration = (unsigned long long) ktime_to_ns(delta) >> 10;
- printk(KERN_DEBUG "initcall %pF returned %d after %lld usecs\n",
+ printk(KERN_DEBUG "initcall %pS returned %d after %lld usecs\n",
fn, ret, duration);
}
@@ -911,7 +911,7 @@ int __init_or_module do_one_initcall(initcall_t fn)
strlcat(msgbuf, "disabled interrupts ", sizeof(msgbuf));
local_irq_enable();
}
- WARN(msgbuf[0], "initcall %pF returned with %s\n", fn, msgbuf);
+ WARN(msgbuf[0], "initcall %pS returned with %s\n", fn, msgbuf);
add_latent_entropy();
return ret;
diff --git a/kernel/async.c b/kernel/async.c
index f6bd0d9885e1..12c332e4e13e 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -119,7 +119,7 @@ static void async_run_entry_fn(struct work_struct *work)
/* 1) run (and print duration) */
if (initcall_debug && system_state < SYSTEM_RUNNING) {
- pr_debug("calling %lli_%pF @ %i\n",
+ pr_debug("calling %lli_%pS @ %i\n",
(long long)entry->cookie,
entry->func, task_pid_nr(current));
calltime = ktime_get();
@@ -128,7 +128,7 @@ static void async_run_entry_fn(struct work_struct *work)
if (initcall_debug && system_state < SYSTEM_RUNNING) {
rettime = ktime_get();
delta = ktime_sub(rettime, calltime);
- pr_debug("initcall %lli_%pF returned 0 after %lld usecs\n",
+ pr_debug("initcall %lli_%pS returned 0 after %lld usecs\n",
(long long)entry->cookie,
entry->func,
(long long)ktime_to_ns(delta) >> 10);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index c5cde87329c7..4a1ef880253c 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -2028,7 +2028,7 @@ static void handler_chain(struct uprobe *uprobe, struct pt_regs *regs)
if (uc->handler) {
rc = uc->handler(uc, regs);
WARN(rc & ~UPROBE_HANDLER_MASK,
- "bad rc=0x%x from %pf()\n", rc, uc->handler);
+ "bad rc=0x%x from %ps()\n", rc, uc->handler);
}
if (uc->ret_handler)
diff --git a/kernel/fail_function.c b/kernel/fail_function.c
index 17f75b545f66..feb80712b913 100644
--- a/kernel/fail_function.c
+++ b/kernel/fail_function.c
@@ -210,7 +210,7 @@ static int fei_seq_show(struct seq_file *m, void *v)
{
struct fei_attr *attr = list_entry(v, struct fei_attr, list);
- seq_printf(m, "%pf\n", attr->kp.addr);
+ seq_printf(m, "%ps\n", attr->kp.addr);
return 0;
}
diff --git a/kernel/irq/debugfs.c b/kernel/irq/debugfs.c
index 516c00a5e867..c1eccd4f6520 100644
--- a/kernel/irq/debugfs.c
+++ b/kernel/irq/debugfs.c
@@ -152,7 +152,7 @@ static int irq_debug_show(struct seq_file *m, void *p)
raw_spin_lock_irq(&desc->lock);
data = irq_desc_get_irq_data(desc);
- seq_printf(m, "handler: %pf\n", desc->handle_irq);
+ seq_printf(m, "handler: %ps\n", desc->handle_irq);
seq_printf(m, "device: %s\n", desc->dev_name);
seq_printf(m, "status: 0x%08x\n", desc->status_use_accessors);
irq_debug_show_bits(m, 0, desc->status_use_accessors, irqdesc_states,
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index 6df5ddfdb0f8..a4ace611f47f 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -149,7 +149,7 @@ irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags
res = action->handler(irq, action->dev_id);
trace_irq_handler_exit(irq, action, res);
- if (WARN_ONCE(!irqs_disabled(),"irq %u handler %pF enabled interrupts\n",
+ if (WARN_ONCE(!irqs_disabled(),"irq %u handler %pS enabled interrupts\n",
irq, action->handler))
local_irq_disable();
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 53a081392115..78f3ddeb7fe4 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -781,7 +781,7 @@ int __irq_set_trigger(struct irq_desc *desc, unsigned long flags)
ret = 0;
break;
default:
- pr_err("Setting trigger mode %lu for irq %u failed (%pF)\n",
+ pr_err("Setting trigger mode %lu for irq %u failed (%pS)\n",
flags, irq_desc_get_irq(desc), chip->irq_set_type);
}
if (unmask)
diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c
index 6d2fa6914b30..2ed97a7c9b2a 100644
--- a/kernel/irq/spurious.c
+++ b/kernel/irq/spurious.c
@@ -212,9 +212,9 @@ static void __report_bad_irq(struct irq_desc *desc, irqreturn_t action_ret)
*/
raw_spin_lock_irqsave(&desc->lock, flags);
for_each_action_of_desc(desc, action) {
- printk(KERN_ERR "[<%p>] %pf", action->handler, action->handler);
+ printk(KERN_ERR "[<%p>] %ps", action->handler, action->handler);
if (action->thread_fn)
- printk(KERN_CONT " threaded [<%p>] %pf",
+ printk(KERN_CONT " threaded [<%p>] %ps",
action->thread_fn, action->thread_fn);
printk(KERN_CONT "\n");
}
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index acd6ccf56faf..8eee921b384d 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2870,7 +2870,7 @@ __call_rcu(struct rcu_head *head, rcu_callback_t func, int cpu, bool lazy)
* Use rcu:rcu_callback trace event to find the previous
* time callback was passed to __call_rcu().
*/
- WARN_ONCE(1, "__call_rcu(): Double-freed CB %p->%pF()!!!\n",
+ WARN_ONCE(1, "__call_rcu(): Double-freed CB %p->%pS()!!!\n",
head, head->func);
WRITE_ONCE(head->func, rcu_leak_callback);
return;
diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index 067cb83f37ea..7231fb5953fc 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -513,7 +513,7 @@ static void cpu_stopper_thread(unsigned int cpu)
}
preempt_count_dec();
WARN_ONCE(preempt_count(),
- "cpu_stop: %pf(%p) leaked preempt count\n", fn, arg);
+ "cpu_stop: %ps(%p) leaked preempt count\n", fn, arg);
goto repeat;
}
}
diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
index 16b80c2b4fe8..6be81ef90ee5 100644
--- a/kernel/time/sched_clock.c
+++ b/kernel/time/sched_clock.c
@@ -231,7 +231,7 @@ sched_clock_register(u64 (*read)(void), int bits, unsigned long rate)
if (irqtime > 0 || (irqtime == -1 && rate >= 1000000))
enable_sched_clock_irqtime();
- pr_debug("Registered %pF as sched_clock source\n", read);
+ pr_debug("Registered %pS as sched_clock source\n", read);
}
void __init generic_sched_clock_init(void)
diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index a9b1bbc2d88d..343c7ba33b1c 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1325,7 +1325,7 @@ static void call_timer_fn(struct timer_list *timer,
lock_map_release(&lockdep_map);
if (count != preempt_count()) {
- WARN_ONCE(1, "timer: %pF preempt leak: %08x -> %08x\n",
+ WARN_ONCE(1, "timer: %pS preempt leak: %08x -> %08x\n",
fn, count, preempt_count());
/*
* Restore the preempt count. That gives us a decent
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 6bc7b180fdf6..2d896f574323 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2278,7 +2278,7 @@ __acquires(&pool->lock)
if (unlikely(in_atomic() || lockdep_depth(current) > 0)) {
pr_err("BUG: workqueue leaked lock or atomic: %s/0x%08x/%d\n"
- " last function: %pf\n",
+ " last function: %ps\n",
current->comm, preempt_count(), task_pid_nr(current),
worker->current_func);
debug_show_held_locks(current);
@@ -2597,11 +2597,11 @@ static void check_flush_dependency(struct workqueue_struct *target_wq,
worker = current_wq_worker();
WARN_ONCE(current->flags & PF_MEMALLOC,
- "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%pf",
+ "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%ps",
current->pid, current->comm, target_wq->name, target_func);
WARN_ONCE(worker && ((worker->current_pwq->wq->flags &
(WQ_MEM_RECLAIM | __WQ_LEGACY)) == WQ_MEM_RECLAIM),
- "workqueue: WQ_MEM_RECLAIM %s:%pf is flushing !WQ_MEM_RECLAIM %s:%pf",
+ "workqueue: WQ_MEM_RECLAIM %s:%ps is flushing !WQ_MEM_RECLAIM %s:%ps",
worker->current_pwq->wq->name, worker->current_func,
target_wq->name, target_func);
}
@@ -4589,7 +4589,7 @@ void print_worker_info(const char *log_lvl, struct task_struct *task)
probe_kernel_read(desc, worker->desc, sizeof(desc) - 1);
if (fn || name[0] || desc[0]) {
- printk("%sWorkqueue: %s %pf", log_lvl, name, fn);
+ printk("%sWorkqueue: %s %ps", log_lvl, name, fn);
if (strcmp(name, desc))
pr_cont(" (%s)", desc);
pr_cont("\n");
@@ -4614,7 +4614,7 @@ static void pr_cont_work(bool comma, struct work_struct *work)
pr_cont("%s BAR(%d)", comma ? "," : "",
task_pid_nr(barr->task));
} else {
- pr_cont("%s %pf", comma ? "," : "", work->func);
+ pr_cont("%s %ps", comma ? "," : "", work->func);
}
}
@@ -4646,7 +4646,7 @@ static void show_pwq(struct pool_workqueue *pwq)
if (worker->current_pwq != pwq)
continue;
- pr_cont("%s %d%s:%pf", comma ? "," : "",
+ pr_cont("%s %d%s:%ps", comma ? "," : "",
task_pid_nr(worker->task),
worker == pwq->wq->rescuer ? "(RESCUER)" : "",
worker->current_func);
diff --git a/lib/error-inject.c b/lib/error-inject.c
index c0d4600f4896..aa63751c916f 100644
--- a/lib/error-inject.c
+++ b/lib/error-inject.c
@@ -189,7 +189,7 @@ static int ei_seq_show(struct seq_file *m, void *v)
{
struct ei_entry *ent = list_entry(v, struct ei_entry, list);
- seq_printf(m, "%pf\t%s\n", (void *)ent->start_addr,
+ seq_printf(m, "%ps\t%s\n", (void *)ent->start_addr,
error_type_string(ent->etype));
return 0;
}
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 9877682e49c7..da54318d3b55 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -151,7 +151,7 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
atomic_long_add((long)count - PERCPU_COUNT_BIAS, &ref->count);
WARN_ONCE(atomic_long_read(&ref->count) <= 0,
- "percpu ref (%pf) <= 0 (%ld) after switching to atomic",
+ "percpu ref (%ps) <= 0 (%ld) after switching to atomic",
ref->release, atomic_long_read(&ref->count));
/* @ref is viewed as dead on all CPUs, send out switch confirmation */
@@ -333,7 +333,7 @@ void percpu_ref_kill_and_confirm(struct percpu_ref *ref,
spin_lock_irqsave(&percpu_ref_switch_lock, flags);
WARN_ONCE(ref->percpu_count_ptr & __PERCPU_REF_DEAD,
- "%s called more than once on %pf!", __func__, ref->release);
+ "%s called more than once on %ps!", __func__, ref->release);
ref->percpu_count_ptr |= __PERCPU_REF_DEAD;
__percpu_ref_switch_mode(ref, confirm_kill);
diff --git a/mm/memblock.c b/mm/memblock.c
index 28fa8926d9f8..f315eca9f4a1 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -702,7 +702,7 @@ int __init_memblock memblock_add(phys_addr_t base, phys_addr_t size)
{
phys_addr_t end = base + size - 1;
- memblock_dbg("memblock_add: [%pa-%pa] %pF\n",
+ memblock_dbg("memblock_add: [%pa-%pa] %pS\n",
&base, &end, (void *)_RET_IP_);
return memblock_add_range(&memblock.memory, base, size, MAX_NUMNODES, 0);
@@ -821,7 +821,7 @@ int __init_memblock memblock_free(phys_addr_t base, phys_addr_t size)
{
phys_addr_t end = base + size - 1;
- memblock_dbg(" memblock_free: [%pa-%pa] %pF\n",
+ memblock_dbg(" memblock_free: [%pa-%pa] %pS\n",
&base, &end, (void *)_RET_IP_);
kmemleak_free_part_phys(base, size);
@@ -832,7 +832,7 @@ int __init_memblock memblock_reserve(phys_addr_t base, phys_addr_t size)
{
phys_addr_t end = base + size - 1;
- memblock_dbg("memblock_reserve: [%pa-%pa] %pF\n",
+ memblock_dbg("memblock_reserve: [%pa-%pa] %pS\n",
&base, &end, (void *)_RET_IP_);
return memblock_add_range(&memblock.reserved, base, size, MAX_NUMNODES, 0);
@@ -1511,7 +1511,7 @@ void * __init memblock_alloc_try_nid_raw(
{
void *ptr;
- memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=%pa max_addr=%pa %pF\n",
+ memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=%pa max_addr=%pa %pS\n",
__func__, (u64)size, (u64)align, nid, &min_addr,
&max_addr, (void *)_RET_IP_);
@@ -1547,7 +1547,7 @@ void * __init memblock_alloc_try_nid(
{
void *ptr;
- memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=%pa max_addr=%pa %pF\n",
+ memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=%pa max_addr=%pa %pS\n",
__func__, (u64)size, (u64)align, nid, &min_addr,
&max_addr, (void *)_RET_IP_);
ptr = memblock_alloc_internal(size, align,
@@ -1572,7 +1572,7 @@ void __init __memblock_free_late(phys_addr_t base, phys_addr_t size)
phys_addr_t cursor, end;
end = base + size - 1;
- memblock_dbg("%s: [%pa-%pa] %pF\n",
+ memblock_dbg("%s: [%pa-%pa] %pS\n",
__func__, &base, &end, (void *)_RET_IP_);
kmemleak_free_part_phys(base, size);
cursor = PFN_UP(base);
diff --git a/mm/memory.c b/mm/memory.c
index c0391a9f18b8..42c156db12d6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -519,7 +519,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
dump_page(page, "bad pte");
pr_alert("addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
(void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
- pr_alert("file:%pD fault:%pf mmap:%pf readpage:%pf\n",
+ pr_alert("file:%pD fault:%ps mmap:%ps readpage:%ps\n",
vma->vm_file,
vma->vm_ops ? vma->vm_ops->fault : NULL,
vma->vm_file ? vma->vm_file->f_op->mmap : NULL,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 07f74e9507b6..7ec5785d7715 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -493,7 +493,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
total_scan += delta;
if (total_scan < 0) {
- pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n",
+ pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
shrinker->scan_objects, total_scan);
total_scan = freeable;
next_deferred = nr;
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index fa9530dd876e..6f739de28918 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -2398,7 +2398,7 @@ static void finish_request(struct ceph_osd_request *req)
static void __complete_request(struct ceph_osd_request *req)
{
- dout("%s req %p tid %llu cb %pf result %d\n", __func__, req,
+ dout("%s req %p tid %llu cb %ps result %d\n", __func__, req,
req->r_tid, req->r_callback, req->r_result);
if (req->r_callback)
diff --git a/net/core/net-procfs.c b/net/core/net-procfs.c
index 63881f72ef71..36347933ec3a 100644
--- a/net/core/net-procfs.c
+++ b/net/core/net-procfs.c
@@ -258,7 +258,7 @@ static int ptype_seq_show(struct seq_file *seq, void *v)
else
seq_printf(seq, "%04x", ntohs(pt->type));
- seq_printf(seq, " %-8s %pf\n",
+ seq_printf(seq, " %-8s %ps\n",
pt->dev ? pt->dev->name : "", pt->func);
}
diff --git a/net/core/netpoll.c b/net/core/netpoll.c
index e365e8fb1c40..a0f05416657b 100644
--- a/net/core/netpoll.c
+++ b/net/core/netpoll.c
@@ -149,7 +149,7 @@ static void poll_one_napi(struct napi_struct *napi)
* indicate that we are clearing the Tx path only.
*/
work = napi->poll(napi, 0);
- WARN_ONCE(work, "%pF exceeded budget in poll\n", napi->poll);
+ WARN_ONCE(work, "%pS exceeded budget in poll\n", napi->poll);
trace_napi_poll(napi, work, 0);
clear_bit(NAPI_STATE_NPSVC, &napi->state);
@@ -346,7 +346,7 @@ void netpoll_send_skb_on_dev(struct netpoll *np, struct sk_buff *skb,
}
WARN_ONCE(!irqs_disabled(),
- "netpoll_send_skb_on_dev(): %s enabled interrupts in poll (%pF)\n",
+ "netpoll_send_skb_on_dev(): %s enabled interrupts in poll (%pS)\n",
dev->name, dev->netdev_ops->ndo_start_xmit);
}
--
2.11.0
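[Editorial note, not part of the patch: %ps and %pS print the symbol that an
address resolves to, without and with the symbol offset respectively, and the
kernel's symbol lookup now dereferences function descriptors itself on the
architectures that use them, so the deprecated %pf/%pF variants add nothing
and the conversion above should be purely mechanical. A minimal, hypothetical
illustration of the two specifiers (module-style code, not taken from the
series):]

	#include <linux/kernel.h>
	#include <linux/sched.h>

	/* Illustrative only: printing a function pointer symbolically. */
	static void show_pointer_formats(void)
	{
		/* "%ps" prints e.g. "schedule"; "%pS" prints e.g. "schedule+0x0/0x..." */
		pr_info("ps: %ps\n", (void *)schedule);
		pr_info("pS: %pS\n", (void *)schedule);
	}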
Re: [GIT PULL] tpmdd fixes for Linux v5.1
by Dan Williams
On Fri, Mar 29, 2019 at 11:42 AM James Morris <jmorris(a)namei.org> wrote:
>
> On Fri, 29 Mar 2019, Jarkko Sakkinen wrote:
>
> > Hi James,
> >
> > These are critical fixes for v5.1. The pull also contains a couple of new
> > selftests for v5.1 features (partial reads in /dev/tpm0). I hope these can
> > still make the release. Thanks.
>
> Applied to
> git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security.git next-tpm
Hi James,
Friendly ping: when might this go to Linus?
I was hoping that 3d0b1a381f6e ("KEYS: trusted: allow trusted.ko to
initialize w/o a TPM") would have hit -rc4. The NVDIMM subsystem has
been broken since -rc1.
[PATCH v2] fs/dax: deposit pagetable even when installing zero page
by Aneesh Kumar K.V
Architectures like ppc64 use the deposited page table to store hardware
page table slot information. Make sure we deposit a page table when
using zero page at the pmd level for hash.
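[Editorial note, not part of the patch: the pattern the fix follows is
sketched below in simplified form. The function name is invented for
illustration; the pgtable helpers are the same ones used in the real change
to dax_pmd_load_hole() in the diff further down.]

	/*
	 * Simplified sketch of depositing a page table while installing a
	 * PMD-sized zero page, so that hash-style MMUs (e.g. ppc64) have a
	 * place to keep per-PTE hardware slot information if the huge
	 * mapping is later split.
	 */
	static vm_fault_t install_pmd_zero_page(struct vm_fault *vmf,
						struct page *zero_page)
	{
		struct vm_area_struct *vma = vmf->vma;
		unsigned long pmd_addr = vmf->address & PMD_MASK;
		pgtable_t pgtable = NULL;
		spinlock_t *ptl;
		pmd_t entry;

		if (arch_needs_pgtable_deposit()) {
			/* Allocate the PTE page before taking the PMD lock */
			pgtable = pte_alloc_one(vma->vm_mm);
			if (!pgtable)
				return VM_FAULT_OOM;
		}

		ptl = pmd_lock(vma->vm_mm, vmf->pmd);
		if (!pmd_none(*vmf->pmd)) {
			spin_unlock(ptl);
			if (pgtable)
				pte_free(vma->vm_mm, pgtable);
			return VM_FAULT_FALLBACK;
		}

		if (pgtable) {
			/* Stash the PTE page for a later withdraw on split */
			pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
			mm_inc_nr_ptes(vma->vm_mm);
		}

		entry = pmd_mkhuge(mk_pmd(zero_page, vma->vm_page_prot));
		set_pmd_at(vma->vm_mm, pmd_addr, vmf->pmd, entry);
		spin_unlock(ptl);
		return VM_FAULT_NOPAGE;
	}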
Without this we hit
Unable to handle kernel paging request for data at address 0x00000000
Faulting instruction address: 0xc000000000082a74
Oops: Kernel access of bad area, sig: 11 [#1]
....
NIP [c000000000082a74] __hash_page_thp+0x224/0x5b0
LR [c0000000000829a4] __hash_page_thp+0x154/0x5b0
Call Trace:
hash_page_mm+0x43c/0x740
do_hash_page+0x2c/0x3c
copy_from_iter_flushcache+0xa4/0x4a0
pmem_copy_from_iter+0x2c/0x50 [nd_pmem]
dax_copy_from_iter+0x40/0x70
dax_iomap_actor+0x134/0x360
iomap_apply+0xfc/0x1b0
dax_iomap_rw+0xac/0x130
ext4_file_write_iter+0x254/0x460 [ext4]
__vfs_write+0x120/0x1e0
vfs_write+0xd8/0x220
SyS_write+0x6c/0x110
system_call+0x3c/0x130
Fixes: b5beae5e224f ("powerpc/pseries: Add driver for PAPR SCM regions")
Reviewed-by: Jan Kara <jack(a)suse.cz>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar(a)linux.ibm.com>
---
Changes from v1:
* Add reviewed-by:
* Add Fixes:
fs/dax.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)
diff --git a/fs/dax.c b/fs/dax.c
index 6959837cc465..01bfb2ac34f9 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -33,6 +33,7 @@
#include <linux/sizes.h>
#include <linux/mmu_notifier.h>
#include <linux/iomap.h>
+#include <asm/pgalloc.h>
#include "internal.h"
#define CREATE_TRACE_POINTS
@@ -1410,7 +1411,9 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
{
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
unsigned long pmd_addr = vmf->address & PMD_MASK;
+ struct vm_area_struct *vma = vmf->vma;
struct inode *inode = mapping->host;
+ pgtable_t pgtable = NULL;
struct page *zero_page;
spinlock_t *ptl;
pmd_t pmd_entry;
@@ -1425,12 +1428,22 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn,
DAX_PMD | DAX_ZERO_PAGE, false);
+ if (arch_needs_pgtable_deposit()) {
+ pgtable = pte_alloc_one(vma->vm_mm);
+ if (!pgtable)
+ return VM_FAULT_OOM;
+ }
+
ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
if (!pmd_none(*(vmf->pmd))) {
spin_unlock(ptl);
goto fallback;
}
+ if (pgtable) {
+ pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
+ mm_inc_nr_ptes(vma->vm_mm);
+ }
pmd_entry = mk_pmd(zero_page, vmf->vma->vm_page_prot);
pmd_entry = pmd_mkhuge(pmd_entry);
set_pmd_at(vmf->vma->vm_mm, pmd_addr, vmf->pmd, pmd_entry);
@@ -1439,6 +1452,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
return VM_FAULT_NOPAGE;
fallback:
+ if (pgtable)
+ pte_free(vma->vm_mm, pgtable);
trace_dax_pmd_load_hole_fallback(inode, vmf, zero_page, *entry);
return VM_FAULT_FALLBACK;
}
--
2.20.1
[mm PATCH v7 0/4] Deferred page init improvements
by Alexander Duyck
This patchset is essentially a refactor of the page initialization logic
that improves code reuse while significantly speeding up deferred page
initialization.
In my testing on an x86_64 system with 384GB of RAM I have seen the
following. In the case of regular memory initialization the deferred init
time decreased from 3.75s to 1.38s on average, i.e. (3.75 - 1.38) / 1.38,
roughly a 172% improvement (about a 2.7x speedup) in deferred memory
initialization performance.
I have called out the improvement observed with each patch.
v1->v2:
Fixed build issue on PowerPC due to page struct size being 56
Added new patch that removed __SetPageReserved call for hotplug
v2->v3:
Rebased on latest linux-next
Removed patch that had removed __SetPageReserved call from init
Added patch that folded __SetPageReserved into set_page_links
Tweaked __init_pageblock to use start_pfn to get section_nr instead of pfn
v3->v4:
Updated patch description and comments for mm_zero_struct_page patch
Replaced "default" with "case 64"
Removed #ifndef mm_zero_struct_page
Fixed typo in comment that omitted "_from" in kerneldoc for iterator
Added Reviewed-by for patches reviewed by Pavel
Added Acked-by from Michal Hocko
Added deferred init times for patches that affect init performance
Swapped patches 5 & 6, pulled some code/comments from 4 into 5
v4->v5:
Updated Acks/Reviewed-by
Rebased on latest linux-next
Split core bits of zone iterator patch from MAX_ORDER_NR_PAGES init
v5->v6:
Rebased on linux-next with previous v5 reverted
Dropped the "This patch" or "This change" wording from patch descriptions
Cleaned up patch descriptions for patches 3 & 4
Fixed kerneldoc for __next_mem_pfn_range_in_zone
Updated several Reviewed-by, and incorporated suggestions from Pavel
Added __init_single_page_nolru to patch 5 to consolidate code
Refactored iterator in patch 7 and fixed several issues
v6->v7:
Updated MAX_ORDER_NR_PAGES patch to stop on section aligned boundaries
Dropped patches 5-7
Will follow up later with the reserved bit rework before resubmitting
---
Alexander Duyck (4):
mm: Use mm_zero_struct_page from SPARC on all 64b architectures
mm: Drop meminit_pfn_in_nid as it is redundant
mm: Implement new zone specific memblock iterator
mm: Initialize MAX_ORDER_NR_PAGES at a time instead of doing larger sections
arch/sparc/include/asm/pgtable_64.h | 30 -----
include/linux/memblock.h | 41 +++++++
include/linux/mm.h | 41 ++++++-
mm/memblock.c | 64 ++++++++++
mm/page_alloc.c | 218 ++++++++++++++++++++++-------------
5 files changed, 277 insertions(+), 117 deletions(-)
--