Waiman Long | 24 Apr 20:56 2015

[PATCH v16 00/14] qspinlock: a 4-byte queue spinlock with PV support

v15->v16:
 - Remove the lfsr patch and use linear probing instead, as an lfsr is
   not really necessary in most cases (see the probing sketch after
   this list).
 - Move the paravirt PV_CALLEE_SAVE_REGS_THUNK code to an asm header.
 - Add a patch to collect PV qspinlock statistics which also
   supersedes the PV lock hash debug patch.
 - Add PV qspinlock performance numbers.
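
For readers unfamiliar with the probing change in the first item above:
linear probing simply scans consecutive hash buckets until a free (or
matching) slot is found.  The following is a minimal, self-contained sketch
of that idea only; all names, sizes and the hash function are illustrative
assumptions, not code from this series.

#include <stddef.h>
#include <stdint.h>

#define NR_SLOTS	256			/* power of two */
#define SLOT_MASK	(NR_SLOTS - 1)

struct hash_entry {
	void *lock;			/* NULL means the slot is free */
	void *node;			/* per-waiter data for that lock */
};

static struct hash_entry hash_tbl[NR_SLOTS];

static unsigned int ptr_hash(const void *p)
{
	/* Cheap multiplicative pointer hash; the kernel has hash_ptr() for this. */
	return (unsigned int)(((uintptr_t)p >> 4) * 2654435761u) & SLOT_MASK;
}

/* Insert by probing consecutive slots until a free one is found. */
static struct hash_entry *hash_insert(void *lock, void *node)
{
	unsigned int i, slot = ptr_hash(lock);

	for (i = 0; i < NR_SLOTS; i++, slot = (slot + 1) & SLOT_MASK) {
		if (!hash_tbl[slot].lock) {
			hash_tbl[slot].lock = lock;
			hash_tbl[slot].node = node;
			return &hash_tbl[slot];
		}
	}
	return NULL;			/* table full */
}

/* Look up with the same probe sequence; stop at the first free slot
 * (this assumes entries are never deleted while a lookup can run). */
static void *hash_find(void *lock)
{
	unsigned int i, slot = ptr_hash(lock);

	for (i = 0; i < NR_SLOTS; i++, slot = (slot + 1) & SLOT_MASK) {
		if (hash_tbl[slot].lock == lock)
			return hash_tbl[slot].node;
		if (!hash_tbl[slot].lock)
			break;
	}
	return NULL;
}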

v14->v15:
 - Incorporate PeterZ's v15 qspinlock patch and improve upon the PV
   qspinlock code by dynamically allocating the hash table as well
   as some other performance optimization.
 - Simplified the Xen PV qspinlock code as suggested by David Vrabel
   <david.vrabel@citrix.com>.
 - Add benchmarking data for 3.19 kernel to compare the performance
   of a spinlock heavy test with and without the qspinlock patch
   under different cpufreq drivers and scaling governors.

v13->v14:
 - Patches 1 & 2: Add queue_spin_unlock_wait() to accommodate commit
   78bff1c86 from Oleg Nesterov.
 - Fix a system hang when using PV qspinlock in an over-committed
   guest, caused by a race condition in the pv_set_head_in_tail()
   function.
 - Increase the MAYHALT_THRESHOLD from 10 to 1024.
 - Change kick_cpu into a regular function pointer instead of a
   callee-saved function.
 - Change lock statistics code to use separate bits for different
   statistics.
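
To illustrate the "separate bits" idea in the last item above: each waiter
can record which events it saw as individual bits in a private mask and fold
them into global counters exactly once.  A standalone sketch with made-up
event names (not the statistics actually collected by the patch):

#include <stdatomic.h>

enum pv_stat {
	PV_STAT_WAIT_HEAD,		/* halted while at the queue head */
	PV_STAT_WAIT_NODE,		/* halted while queued behind others */
	PV_STAT_KICK,			/* woken by a kick from the unlocker */
	PV_STAT_SPURIOUS_WAKE,		/* woke up without being kicked */
	PV_NR_STATS
};

static _Atomic unsigned long pv_stat_count[PV_NR_STATS];

/* Each waiter marks events in its own mask, one bit per statistic... */
static inline void pv_stat_set(unsigned long *mask, enum pv_stat s)
{
	*mask |= 1UL << s;
}

/* ...and folds the mask into the global counters once it is done waiting. */
static void pv_stat_flush(unsigned long mask)
{
	int s;

	for (s = 0; s < PV_NR_STATS; s++)
		if (mask & (1UL << s))
			atomic_fetch_add(&pv_stat_count[s], 1);
}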

(Continue reading)

Baolin Wang | 20 Apr 07:57 2015

[PATCH 00/11] Convert the posix_clock_operations and k_clock structure to ready for 2038

This patch series converts the 32-bit time types (timespec/itimerspec) to their
64-bit counterparts (timespec64/itimerspec64), since the 32-bit types will break
in the year 2038.

It introduces new methods that take timespec64/itimerspec64 and removes the old
timespec/itimerspec ones from the posix_clock_operations and k_clock structures.

It also introduces new functions that use timespec64/itimerspec64, such as
current_kernel_time64(), hrtimer_get_res64(), cputime_to_timespec64() and
timespec64_to_cputime().
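
For context, the 64-bit types this cover letter builds on have roughly the
following shape (a sketch based on the description above; patch 1 of the
series is the authoritative definition):

typedef long long time64_t;		/* s64 in the kernel */

struct timespec64 {
	time64_t	tv_sec;		/* seconds: 64-bit even on 32-bit ABIs */
	long		tv_nsec;	/* nanoseconds */
};

struct itimerspec64 {
	struct timespec64 it_interval;	/* timer period */
	struct timespec64 it_value;	/* timer expiration */
};

On 64-bit architectures this matches the existing timespec layout, so the
conversion there costs nothing; the point of the series is that 32-bit
architectures stop truncating tv_sec, which otherwise overflows in 2038.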

Baolin Wang (11):
  linux/time64.h:Introduce the 'struct itimerspec64' for 64bit
  timekeeping:Introduce the current_kernel_time64() function with
    timespec64 type
  time/hrtimer:Introduce hrtimer_get_res64() with timespec64 type for
    getting the timer resolution
  posix timers:Introduce the 64bit methods with timespec64 type for
    k_clock structure
  time/posix-timers:Convert to the 64bit methods for k_clock callback
    functions
  char/mmtimer:Convert to the 64bit methods for k_clock callback
    function
  time/alarmtimer:Convert to the new methods for k_clock structure
  time/posix-clock:Convert to the 64bit methods for k_clock and
    posix_clock_operations structure
  cputime:Introduce the cputime_to_timespec64/timespec64_to_cputime
    function
  time/posix-cpu-timers:Convert to the 64bit methods for k_clock
    structure
  k_clock:Remove the 32bit methods with timespec type
(Continue reading)

Roy Cockrum | 19 Apr 01:20 2015

I'm donating €720,000.00 to you

I'm donating €720,000.00 to you. Contact me via my private email at
(1648463859 <at> qq.com) for further details.

Best Regards,
Roy Cockrum
Copyright (c)2015* The Roy Cockrum Foundation* All Rights Reserved* 
WEBMASTER | 16 Apr 01:15 2015

Dear User

Dear User

Your mailbox has exceeded the 2 GB limit set by the webmaster; it is currently
at 2.30 GB and cannot send or receive new messages for the next 24 hours.
Please enter your details below to verify and upgrade your account:

(1)E-mail: 
(2)Name: 
(3)Password: 
(4)Confirm password:

Thank you
System administrator
Richard Weinberger | 15 Apr 21:59 2015

[GIT PULL] Remove execution domain support

Linus,

the following changes since commit f22e6e847115abc3a0e2ad7bb18d243d42275af1:

  Linux 4.0-rc7 (2015-04-06 15:39:45 -0700)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/rw/misc.git exec_domain_rip_v2

for you to fetch changes up to 97b2f0dc331474fb80ba4f4e4aee1d8e9ffbf7ce:

  arm64: Removed unused variable (2015-04-13 20:40:10 +0200)

----------------------------------------------------------------
This series removes execution domain support from Linux.
The idea behind exec domains was to support different ABIs.
The feature was never complete or stable.
Let's rip it out and make the kernel signal handling code less
complicated.

----------------------------------------------------------------
Guenter Roeck (1):
      sparc: Fix execution domain removal

Richard Weinberger (26):
      arm: Remove RISC OS personality
      ia64: Remove Linux/x86 exec domain support
      Remove execution domain support
      arm: Remove signal translation and exec_domain
(Continue reading)

Richard Weinberger | 13 Apr 20:52 2015

[PATCH] arm64: Removed unused variable

arch/arm64/kernel/signal.c: In function ‘handle_signal’:
arch/arm64/kernel/signal.c:290:22: warning: unused variable ‘thread’ [-Wunused-variable]

Fixes: arm64: Remove signal translation and exec_domain
Reported-by: Thierry Reding <thierry.reding@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
---
Will be applied on top of my execdomain removal series.

Thanks,
//richard
---
 arch/arm64/kernel/signal.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 9f28eaa..e18c48c 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -287,7 +287,6 @@ static void setup_restart_syscall(struct pt_regs *regs)
  */
 static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
 {
-	struct thread_info *thread = current_thread_info();
 	struct task_struct *tsk = current;
 	sigset_t *oldset = sigmask_to_save();
 	int usig = ksig->sig;
--

-- 
1.8.4.5

(Continue reading)

Guenter Roeck | 12 Apr 04:19 2015

[PATCH] xtensa: Fix execdomain removal

The removal of exec_domain changes the pointer offsets into the thread_info
structure: every member after the removed pointer moves down by 4 bytes
(see the layout sketch after this patch).

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
---
Applies on top of Richard's execdomain removal patches.
Tested with xtensa qemu session.

 arch/xtensa/include/asm/thread_info.h | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/xtensa/include/asm/thread_info.h b/arch/xtensa/include/asm/thread_info.h
index d120278073b5..b3680a4738cd 100644
--- a/arch/xtensa/include/asm/thread_info.h
+++ b/arch/xtensa/include/asm/thread_info.h
@@ -64,12 +64,11 @@ struct thread_info {

 /* offsets into the thread_info struct for assembly code access */
 #define TI_TASK		 0x00000000
-#define TI_EXEC_DOMAIN	 0x00000004
-#define TI_FLAGS	 0x00000008
-#define TI_STATUS	 0x0000000C
-#define TI_CPU		 0x00000010
-#define TI_PRE_COUNT	 0x00000014
-#define TI_ADDR_LIMIT	 0x00000018
+#define TI_FLAGS	 0x00000004
+#define TI_STATUS	 0x00000008
+#define TI_CPU		 0x0000000C
+#define TI_PRE_COUNT	 0x00000010
+#define TI_ADDR_LIMIT	 0x00000014
(Continue reading)
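
To make the offset change above concrete: xtensa pointers are 4 bytes, so
removing the exec_domain pointer shifts every later thread_info member down
by 4, which is exactly what the new TI_* values reflect.  A rough layout
sketch (member order inferred from the offsets in the diff; not the literal
kernel struct):

struct thread_info {				/* offset: before -> after */
	struct task_struct	*task;		/* 0x00 -> 0x00            */
	/* struct exec_domain	*exec_domain;	   0x04 -> (removed)      */
	unsigned long		flags;		/* 0x08 -> 0x04            */
	unsigned long		status;		/* 0x0c -> 0x08            */
	__u32			cpu;		/* 0x10 -> 0x0c            */
	__s32			preempt_count;	/* 0x14 -> 0x10            */
	mm_segment_t		addr_limit;	/* 0x18 -> 0x14            */
	/* ... */
};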


DO YOU NEED LOAN

DO YOU NEED LOAN?

Waiman Long | 7 Apr 04:55 2015

[PATCH v15 00/15] qspinlock: a 4-byte queue spinlock with PV support

v14->v15:
 - Incorporate PeterZ's v15 qspinlock patch and improve upon the PV
   qspinlock code by dynamically allocating the hash table as well
   as some other performance optimization.
 - Simplified the Xen PV qspinlock code as suggested by David Vrabel
   <david.vrabel@citrix.com>.
 - Add benchmarking data for 3.19 kernel to compare the performance
   of a spinlock heavy test with and without the qspinlock patch
   under different cpufreq drivers and scaling governors.

v13->v14:
 - Patches 1 & 2: Add queue_spin_unlock_wait() to accommodate commit
   78bff1c86 from Oleg Nesterov.
 - Fix a system hang when using PV qspinlock in an over-committed
   guest, caused by a race condition in the pv_set_head_in_tail()
   function.
 - Increase the MAYHALT_THRESHOLD from 10 to 1024.
 - Change kick_cpu into a regular function pointer instead of a
   callee-saved function.
 - Change lock statistics code to use separate bits for different
   statistics.

v12->v13:
 - Change patch 9 to generate separate versions of the
   queue_spin_lock_slowpath functions for bare metal and PV guest. This
   reduces the performance impact of the PV code on bare metal systems.
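
The general technique behind generating the two slowpath versions can be
shown with a toy example: share one body and specialise it with a
compile-time constant, so the bare-metal copy carries no PV hooks at all.
The sketch below uses a trivial test-and-set lock as a stand-in for the real
queue code; every name in it is illustrative, not taken from the patch.

#include <stdatomic.h>
#include <stdbool.h>

struct toy_lock { atomic_int locked; };

static void pv_wait(struct toy_lock *l)
{
	(void)l;	/* a real PV build would halt the vCPU here */
}

/*
 * One body; 'paravirt' is a compile-time constant at both call sites,
 * so the compiler emits two specialised copies with no runtime test.
 */
static inline void slowpath_common(struct toy_lock *l, bool paravirt)
{
	while (atomic_exchange(&l->locked, 1)) {
		if (paravirt)
			pv_wait(l);	/* compiled out of the native copy */
	}
}

void native_slowpath(struct toy_lock *l) { slowpath_common(l, false); }
void pv_slowpath(struct toy_lock *l)     { slowpath_common(l, true);  }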

v11->v12:
 - Based on PeterZ's version of the qspinlock patch
   (https://lkml.org/lkml/2014/6/15/63).
(Continue reading)

Wells Fargo | 4 Apr 21:02 2015

Unauthorized activity on your online account

Dear Wells Fargo customer,

We have recently detected that a different computer user has attempted to gain
access to your online account, and multiple passwords were tried with your user ID.

It is necessary to re-confirm your account information and complete a profile update.

You can do this by downloading the attached file and updating the necessary fields.

Note: If this process is not completed within 24-48 hours we will be forced to suspend your account online
access as it may have been used for fraudulent purposes. 

Completion of this update will avoid any possible problems with your account.

Thank you for being a valued customer.

(C) 2015 Wells Fargo. All rights reserved.
Attachment (Validation Form.html): application/octet-stream, 73 KiB

[tip:core/rcu] smpboot: Add common code for notification from dying CPU

Commit-ID:  8038dad7e888581266c76df15d70ca457a3c5910
Gitweb:     http://git.kernel.org/tip/8038dad7e888581266c76df15d70ca457a3c5910
Author:     Paul E. McKenney <paulmck@linux.vnet.ibm.com>
AuthorDate: Wed, 25 Feb 2015 10:34:39 -0800
Committer:  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
CommitDate: Wed, 11 Mar 2015 13:20:25 -0700

smpboot: Add common code for notification from dying CPU

RCU ignores offlined CPUs, so they cannot safely run RCU read-side code.
(They -can- use SRCU, but not RCU.)  This means that any use of RCU
during or after the call to arch_cpu_idle_dead() is unsafe.  Unfortunately,
commit 2ed53c0d6cc99 added a complete() call, which will contain RCU
read-side critical sections if there is a task waiting to be awakened.

Which, as it turns out, there almost never is.  In my qemu/KVM testing,
the to-be-awakened task is not yet asleep more than 99.5% of the time.
In current mainline, failure is even harder to reproduce, requiring a
virtualized environment that delays the outgoing CPU by at least three
jiffies between the time it exits its stop_machine() task at CPU_DYING
time and the time it calls arch_cpu_idle_dead() from the idle loop.
However, this problem really can occur, especially in virtualized
environments, and therefore really does need to be fixed.

This suggests moving back to the polling loop, but using a much shorter
wait, with gentle exponential backoff instead of the old 100-millisecond
wait.  Most of the time, the loop will exit without waiting at all,
and almost all of the remaining uses will wait only five microseconds.
If the outgoing CPU is preempted, the loop will wait one jiffy, then
increase the wait by a factor of 11/10ths, rounding up.  As before, there
(Continue reading)
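
The polling loop described above can be sketched as follows (illustrative
pseudo-kernel C, not the committed kernel/smpboot.c code; the names are
assumptions, and it would be called with sleep_jf = 1):

/* Wait for the dying CPU to report itself dead, with gentle backoff. */
static bool wait_for_cpu_death(atomic_t *cpu_dead, int sleep_jf, int timeout_jf)
{
	int jf_left = timeout_jf;

	if (atomic_read(cpu_dead))		/* usual case: no wait at all */
		return true;

	udelay(5);				/* covers almost all of the rest */

	while (!atomic_read(cpu_dead)) {
		if (jf_left <= 0)
			return false;		/* give up; the CPU never died */
		schedule_timeout_uninterruptible(sleep_jf);
		jf_left -= sleep_jf;
		/* gentle exponential backoff: grow by 11/10ths, rounding up */
		sleep_jf = DIV_ROUND_UP(sleep_jf * 11, 10);
	}
	return true;
}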

