Mark Salter | 28 Jul 16:32 2015

[PATCH 0/2] arm64: support initrd outside of mapped RAM

When booting an arm64 kernel with an initrd using UEFI/grub, use of mem= will
likely cut off part or all of the initrd. This leaves it outside the kernel
linear map, which leads to a failure when unpacking. The x86 code has a similar
need to relocate an initrd outside of mapped memory in some cases.

The current x86 code uses early_memremap() to copy the original initrd from
unmapped to mapped RAM. This patchset creates a generic copy_from_early_mem()
utility based on that x86 code and has arm64 use it to relocate the initrd
if necessary.
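For reference, the chunked copy on which the new helper is based can be
sketched in userspace C. This is a hedged sketch, not the patch itself:
early_memremap()/early_memunmap() are mocked with an identity mapping, and
MAX_MAP_CHUNK stands in for the kernel's NR_FIX_BTMAPS << PAGE_SHIFT fixmap
window:

```c
#include <stdint.h>
#include <string.h>
#include <assert.h>

#define PAGE_SIZE     4096UL
#define MAX_MAP_CHUNK (4 * PAGE_SIZE)     /* mock of the fixmap window size */

static char fake_ram[8 * PAGE_SIZE];      /* pretend physical memory */

/* mock: the real early_memremap() maps a window via the early fixmap */
static void *early_memremap(uintptr_t phys, size_t size)
{
	(void)size;
	return fake_ram + phys;
}

static void early_memunmap(void *addr, size_t size)
{
	(void)addr; (void)size;           /* mock: nothing to tear down */
}

/* copy from (possibly unmapped) "physical" src one bounded window at a time */
static void copy_from_early_mem(void *dest, uintptr_t src, size_t size)
{
	size_t slop, clen;
	char *p, *d = dest;

	while (size) {
		slop = src % PAGE_SIZE;   /* offset within the first page */
		clen = size;
		if (clen > MAX_MAP_CHUNK - slop)
			clen = MAX_MAP_CHUNK - slop;
		p = early_memremap(src - slop, clen + slop);
		memcpy(d, p + slop, clen);
		early_memunmap(p, clen + slop);
		d += clen;
		src += clen;
		size -= clen;
	}
}
```

Per the diffstat below, the real helper lands in mm/early_ioremap.c; it must
copy through a bounded window because the unmapped source can be larger than
the early fixmap area.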

Mark Salter (2):
  mm: add utility for early copy from unmapped ram
  arm64: support initrd outside kernel linear map

 arch/arm64/kernel/setup.c           | 55 +++++++++++++++++++++++++++++++++++++
 include/asm-generic/early_ioremap.h |  6 ++++
 mm/early_ioremap.c                  | 22 +++++++++++++++
 3 files changed, 83 insertions(+)




Dan Williams | 25 Jul 04:37 2015

[PATCH v2 00/25] replace ioremap_{cache|wt} with memremap

Changes since v1 [1]:

1/ Drop the attempt at unifying ioremap() prototypes, just focus on
   converting ioremap_cache and ioremap_wt over to memremap (Christoph)

2/ Drop the unrelated cleanups to use %pa in __ioremap_caller (Thomas)

3/ Add support for memremap() attempts on "System RAM" to simply return
   the kernel virtual address for that range.  ARM depends on this
   functionality in ioremap_cache() and ACPI was open coding a similar
   solution. (Mark)

4/ Split the conversions of ioremap_{cache|wt} into separate patches per
   driver / arch.

5/ Fix bisection breakage and other reports from 0day-kbuild

While developing the pmem driver we noticed that the __iomem annotation
on the return value from ioremap_cache() was being mishandled by several
callers.  We also observed that all of the call sites expected to be
able to treat the return value from ioremap_cache() as a normal
(non-__iomem) pointer to memory.

This patchset takes the opportunity to clean up the above confusion as
well as a few issues with the ioremap_{cache|wt} interface, including:

1/ Eliminating the possibility of function prototypes differing between
   architectures by defining a central memremap() prototype that takes
   flags to determine the mapping type.
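As a rough illustration of item 1/, a flags-based dispatch might look like the
following userspace mock. MEMREMAP_WB/MEMREMAP_WT mirror the proposed
mapping-type flags; the helper functions and return values are illustrative
stand-ins, not the kernel implementation:

```c
#include <stddef.h>
#include <assert.h>

#define MEMREMAP_WB (1UL << 0)    /* write-back: regular cached memory */
#define MEMREMAP_WT (1UL << 1)    /* write-through */

/* mock per-type mapping helpers; sentinel returns for illustration only */
static void *mock_remap_wb(unsigned long long offset, size_t size)
{
	(void)offset; (void)size;
	return (void *)0x1;
}

static void *mock_remap_wt(unsigned long long offset, size_t size)
{
	(void)offset; (void)size;
	return (void *)0x2;
}

/* one central prototype; the mapping type is a flag, not part of the name */
static void *memremap_sketch(unsigned long long offset, size_t size,
                             unsigned long flags)
{
	if (flags & MEMREMAP_WB)
		return mock_remap_wb(offset, size);
	if (flags & MEMREMAP_WT)
		return mock_remap_wt(offset, size);
	return NULL;              /* unknown mapping type */
}
```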
(Continue reading)

Paul E. McKenney | 24 Jul 17:29 2015

perf_mmap__write_tail() and control dependencies

Hello, Peter,

The ring-buffer code uses control dependencies, and the shiny new
READ_ONCE_CTRL() is now in mainline.  I was idly curious about whether
the write side could use smp_store_release(), and I found this:

static inline void perf_mmap__write_tail(struct perf_mmap *md, u64 tail)
{
	struct perf_event_mmap_page *pc = md->base;

	/*
	 * ensure all reads are done before we write the tail out.
	 */
	mb();
	pc->data_tail = tail;
}

I see mb() rather than smp_mb().  Did I find the correct code for the
write side?  If so, why mb() rather than smp_mb()?  To serialize against
MMIO interactions with hardware counters or some such?
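For reference, a userspace C11 analogue of the smp_store_release() formulation
being asked about. This is a hedged sketch: fake_mmap_page stands in for the
real perf_event_mmap_page, and the release store here orders all prior
ring-buffer reads before the tail update, which is what the full barrier in
the tools code achieves:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <assert.h>

struct fake_mmap_page { _Atomic uint64_t data_tail; };

static void write_tail_release(struct fake_mmap_page *pc, uint64_t tail)
{
	/* release: all earlier reads/writes are ordered before this store */
	atomic_store_explicit(&pc->data_tail, tail, memory_order_release);
}
```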

							Thanx, Paul

Konstantin Khlebnikov | 24 Jul 14:55 2015

[PATCH] modules: don't print out loaded modules too often if nothing changed

The list of loaded modules is useful, but printing it at each splat
adds too much noise. This patch prints the place-holder "<unchanged>"
instead if the kernel is not panicking, nothing has changed, and less
than 15 minutes have passed since the last print. The first warning is
guaranteed to carry the full list.
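The intended throttle condition can be sketched in userspace C. This is an
illustrative sketch only: the names are made up, and the real patch works in
jiffies with time_after() rather than plain seconds:

```c
#include <stdbool.h>
#include <assert.h>

#define PRINT_INTERVAL_SECS (15 * 60)

static bool print_full_module_list(bool in_panic, bool list_changed,
                                   long now, long last_full_print)
{
	/* always print the full list on panic or when the list changed */
	if (in_panic || list_changed)
		return true;
	/* otherwise at most every 15 minutes; print "<unchanged>" in between */
	return (now - last_full_print) >= PRINT_INTERVAL_SECS;
}
```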

Signed-off-by: Konstantin Khlebnikov <khlebnikov <at>>
---
 kernel/module.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/kernel/module.c b/kernel/module.c
index 4d2b82e610e2..e185b8d53205 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -101,6 +101,7 @@
 static LIST_HEAD(modules);
+static unsigned long modules_printed;


@@ -1005,6 +1006,7 @@ SYSCALL_DEFINE2(delete_module, const char __user *, name_user,

 	/* Store the name of the last unloaded module for diagnostic purposes */
 	strlcpy(last_unloaded_module, mod->name, sizeof(last_unloaded_module));
+	modules_printed = 0;

 	return 0;
(Continue reading)

Dan Williams | 20 Jul 02:17 2015

[PATCH 00/10] unify ioremap definitions and introduce memremap

While developing the pmem driver it became clear that not all
architectures implement all the various ioremap types, and when they do
implement an ioremap instance the declaration is inconsistent.

In addition to ioremap prototype confusion, it was also noticed that
several usages of ioremap_cache() were ignoring the __iomem annotation
on returned pointers.  The common theme of these call sites is treating
the return value from ioremap() as an unannotated pointer to normal
memory.  Introduce memremap() as a method to treat a given resource as
memory (safe to speculatively read, pre-fetch, write-combine, etc).

Note that Christoph proposed, in the longer term, changing the calling
convention of ioremap() to take a mapping and prot flags.  It seems that
outside of the _nocache usages of ioremap_*() most instances should be
converted to some form of memremap().  For this reason memremap() takes
a mapping type 'flags' argument rather than following the
ioremap_<type>() naming pattern.

This series also folds in, with a few fixups, Toshi's fixes for
region_is_ram().  The memremap() implementation needs a functional
region_is_ram() to block attempts to remap system memory.

The series applies against latest 4.2-rc and is targeted for -tip, but
I'm open to carrying a branch, or, if proposed, a better alternative to
handle this cross-tree thrash.  This content has passed a cycle through
the 0day-kbuild infrastructure.


Dan Williams (7):
(Continue reading)

Arnaldo Carvalho de Melo | 15 Jul 17:20 2015

[GIT PULL 0/2] perf/urgent fixes

Hi Ingo,

	Please consider pulling,

- Arnaldo

The following changes since commit 65ea03e31e5ab47f784b1a701419264af97d3205:

  Merge tag 'perf-urgent-for-mingo' of git://
into perf/urgent (2015-07-15 13:31:21 +0200)

are available in the git repository at:

  git:// tags/perf-urgent-for-mingo

for you to fetch changes up to 3c71ba3f80bbd476bbfb2a008da9b676031cbd32:

  perf tools: Really allow to specify custom CC, AR or LD (2015-07-15 11:57:28 -0300)

perf/urgent fixes:

User visible:

- Fix misplaced check for HAVE_SYNC_COMPARE_AND_SWAP_SUPPORT in
  the auxtrace code, which made 'perf record' fail straight away
  in some architectures, even when auxtrace wasn't involved. (Adrian Hunter)

Developer stuff:

(Continue reading)

Baolin Wang | 15 Jul 07:47 2015

[PATCH 0/6] Introduce 64bit accessors and structures required to address y2038 issues in the posix_clock subsystem

This patch series changes the 32-bit time types (timespec/itimerspec) to
the 64-bit types (timespec64/itimerspec64) and adds new 64-bit accessor
functions, which are required in order to avoid y2038 issues in the
posix_clock subsystem.
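A hedged userspace sketch of the 64-bit time type this series builds on. The
real definitions live in include/linux/time64.h; "timespec32" here is an
illustrative stand-in for the legacy 32-bit struct timespec, not a kernel
name:

```c
#include <stdint.h>
#include <assert.h>

struct timespec32 { int32_t tv_sec; long tv_nsec; };  /* overflows in 2038 */
struct timespec64 { int64_t tv_sec; long tv_nsec; };  /* y2038-safe */

/* widening conversion: the seconds field grows from 32 to 64 bits */
static struct timespec64 to_timespec64(struct timespec32 ts)
{
	struct timespec64 ret = {
		.tv_sec  = ts.tv_sec,
		.tv_nsec = ts.tv_nsec,
	};
	return ret;
}
```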

In order to avoid spamming people too much, I'm only sending the first
few patches of the series and leaving the other patches for later.

And if you are interested in the whole patch series, see:

Thoughts and feedback would be appreciated.

Baolin Wang (6):
  time: Introduce struct itimerspec64
  timekeeping: Introduce current_kernel_time64()
  security: Introduce security_settime64()
  time: Introduce do_sys_settimeofday64()
  time: Introduce timespec64_to_jiffies()/jiffies_to_timespec64()
  cputime: Introduce cputime_to_timespec64()/timespec64_to_cputime()

 arch/powerpc/include/asm/cputime.h    |    6 +++---
 arch/s390/include/asm/cputime.h       |    8 ++++----
 include/asm-generic/cputime_jiffies.h |   10 +++++-----
 include/asm-generic/cputime_nsecs.h   |    6 +++---
 include/linux/cputime.h               |   16 +++++++++++++++
 include/linux/jiffies.h               |   22 ++++++++++++++++++---
 include/linux/lsm_hooks.h             |    5 +++--
 include/linux/security.h              |   20 ++++++++++++++++---
 include/linux/time64.h                |   35 +++++++++++++++++++++++++++++++++
(Continue reading)

Alexey Brodkin | 14 Jul 14:12 2015

"perf record" if BITS_PER_LONG != 64 && !defined(HAVE_SYNC_COMPARE_AND_SWAP_SUPPORT)

Hi Adrian,

Just noticed that starting from Linux v4.2-rc1 "perf record"
doesn't work on ARC. That's what I see:
 # perf record ls -la
 Cannot use AUX area tracing mmaps
 failed to mmap with 38 (Function not implemented)

I believe that happens because auxtrace is enabled by default
(NO_AUXTRACE=0), so auxtrace_mmap__mmap() from
"tools/perf/util/auxtrace.c" gets compiled in and the following
check fails:
 	pr_err("Cannot use AUX area tracing mmaps\n");
 	return -1;

Unfortunately our current toolchain does not provide
__sync_val_compare_and_swap(), and ARC as of today is a 32-bit
architecture.
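For context, the GCC builtin in question behaves as below. This compiles and
runs on hosts whose toolchain provides the 64-bit variant; it is exactly this
8-byte form that a 32-bit target without native 64-bit atomics may lack:

```c
#include <assert.h>

/* __sync_val_compare_and_swap(ptr, old, new): atomically store new at *ptr
 * iff *ptr == old, and return the value *ptr held beforehand. */
static unsigned long long cas_u64(unsigned long long *p,
                                  unsigned long long old_v,
                                  unsigned long long new_v)
{
	return __sync_val_compare_and_swap(p, old_v, new_v);
}
```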

Now if I pass NO_AUXTRACE=1 in perf building command then everything
works as expected.

So I'm wondering what would be the best way to get perf properly
working for ARC (at least) and probably other architecture/toolchain
(Continue reading)

Will Deacon | 13 Jul 14:15 2015

[RFC PATCH v2] memory-barriers: remove smp_mb__after_unlock_lock()

smp_mb__after_unlock_lock is used to promote an UNLOCK + LOCK sequence
into a full memory barrier.

However:
  - This ordering guarantee is already provided without the barrier on
    all architectures apart from PowerPC

  - The barrier only applies to UNLOCK + LOCK, not general
    RELEASE + ACQUIRE operations

  - Locks are generally assumed to offer SC ordering semantics, so
    having this additional barrier is error-prone and complicates the
    callers of LOCK/UNLOCK primitives

  - The barrier is not well used outside of RCU and, because it was
    retrofitted into the kernel, it's not clear whether other areas of
    the kernel are incorrectly relying on UNLOCK + LOCK implying a full
    barrier

This patch removes the barrier and instead requires architectures to
provide full barrier semantics for an UNLOCK + LOCK sequence.

Cc: Benjamin Herrenschmidt <benh <at>>
Cc: Paul McKenney <paulmck <at>>
Cc: Peter Zijlstra <peterz <at>>
Signed-off-by: Will Deacon <will.deacon <at>>

This didn't go anywhere last time I posted it, but here it is again.
(Continue reading)

Alexey Brodkin | 13 Jul 13:10 2015

[PATCH] Revert "perf tools: Allow to specify custom linker command"

This reverts commit 5ef7bbb09f7b
("perf tools: Allow to specify custom linker command").

LD is a pre-defined variable in GNU Make, i.e. it is always defined.
This means there is no point in using "LD ?= ...", because the
conditional assignment will never take effect. LD will therefore be
either the value explicitly passed to make, as in
 make LD=path_to_my_ld ...
or the default value, which is the host's "ld".

The latter breaks cross-linking, because the host's "ld" is used
instead of the cross linker "$(CROSS_COMPILE)ld".

As for the commit being reverted here:
 [1] Selecting a non-default CPU core flavour or option is usually done
     with linker flags such as "-mtune=xxx" or "-mMyCPUType".

 [2] To allow an "ld" that differs from "$(CROSS_COMPILE)ld", one would
     need to add a new makefile variable such as TARGET_LD, and fall
     back to "$(CROSS_COMPILE)ld" whenever $(TARGET_LD) is not
     specified on the make invocation.

But for now to fix cross-building of perf this revert is enough.

Signed-off-by: Alexey Brodkin <abrodkin <at>>
Cc: Vineet Gupta <vgupta <at>>
Cc: Aaro Koskinen <aaro.koskinen <at>>
Cc: Jiri Olsa <jolsa <at>>
(Continue reading)