Andy Lutomirski | 29 Oct 18:36 2014

[PATCH v2] all arches, signal: Move restart_block to struct task_struct

If an attacker can cause a controlled kernel stack overflow,
overwriting the restart block is a very juicy exploit target.
Moving the restart block to struct task_struct prevents this
exploit.

Note that there are other fields in thread_info that are also easy
targets, at least on some architectures.

It's also a decent simplification, since the restart code is more or
less identical on all architectures.

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
---

Changes from v1:
 - Fixed sparc issues and similar issues on other architectures, I think.
   (Sam Ravnborg)
 - Improved changelog message (hopefully addressing Al Viro's concerns)

 arch/alpha/include/asm/thread_info.h      |  5 -----
 arch/alpha/kernel/signal.c                |  2 +-
 arch/arc/include/asm/thread_info.h        |  4 ----
 arch/arc/kernel/signal.c                  |  2 +-
 arch/arm/include/asm/thread_info.h        |  4 ----
 arch/arm/kernel/signal.c                  |  4 ++--
 arch/arm/kernel/traps.c                   |  2 +-
 arch/arm64/include/asm/thread_info.h      |  4 ----
 arch/arm64/kernel/signal.c                |  2 +-
 arch/arm64/kernel/signal32.c              |  4 ++--
 arch/avr32/include/asm/thread_info.h      |  4 ----

David Drysdale | 22 Oct 13:44 2014

[PATCHv5 0/3] syscalls,x86: Add execveat() system call

This patch set adds execveat(2) for x86, and is derived from Meredydd
Luff's patch from Sept 2012 (https://lkml.org/lkml/2012/9/11/528).

The primary aim of adding an execveat syscall is to allow an
implementation of fexecve(3) that does not rely on the /proc
filesystem.  The current glibc version of fexecve(3) is implemented
via /proc, which causes problems in sandboxed or otherwise restricted
environments.

Given the desire for a /proc-free fexecve() implementation, HPA
suggested (https://lkml.org/lkml/2006/7/11/556) that an execveat(2)
syscall would be an appropriate generalization.

Also, having a new syscall means that it can take a flags argument
without backward-compatibility concerns.  The current implementation just
defines the AT_EMPTY_PATH and AT_SYMLINK_NOFOLLOW flags, but other
flags could be added in future -- for example, flags for new namespaces
(as suggested at https://lkml.org/lkml/2006/7/11/474).

Related history:
 - https://lkml.org/lkml/2006/12/27/123 is an example of someone
   realizing that fexecve() is likely to fail in a chroot environment.
 - http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=514043 covered
   documenting the /proc requirement of fexecve(3) in its manpage, to
   "prevent other people from wasting their time".
 - https://bugzilla.kernel.org/show_bug.cgi?id=74481 documented that
   it's not possible to fexecve() a file descriptor for a script with
   close-on-exec set (which is possible with the implementation here).
 - https://bugzilla.redhat.com/show_bug.cgi?id=241609 described a
   problem where a process that did setuid() could not fexecve()

Aneesh Kumar K.V | 15 Oct 18:34 2014

[PATCH 1/2] mm: Update generic gup implementation to handle hugepage directory

Update the generic gup implementation to handle the powerpc-specific
details: on powerpc, a pmd-level entry can be a hugepte, a normal pmd
pointer, or a pointer to the hugepage directory.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 include/linux/hugetlb.h |   1 +
 include/linux/mm.h      |  26 +++++++++++
 mm/gup.c                | 113 +++++++++++++++++++++++-------------------------
 3 files changed, 81 insertions(+), 59 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 6e6d338641fe..65e12a24ce1d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -138,6 +138,7 @@ static inline void hugetlb_show_meminfo(void)
 #define prepare_hugepage_range(file, addr, len)	(-EINVAL)
 #define pmd_huge(x)	0
 #define pud_huge(x)	0
+#define pgd_huge(x)	0
 #define is_hugepage_only_range(mm, addr, len)	0
 #define hugetlb_free_pgd_range(tlb, addr, end, floor, ceiling) ({BUG(); 0; })
 #define hugetlb_fault(mm, vma, addr, flags)	({ BUG(); 0; })
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 02d11ee7f19d..f97732412cb4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1219,6 +1219,32 @@ long get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 		    struct vm_area_struct **vmas);
 int get_user_pages_fast(unsigned long start, int nr_pages, int write,

Aneesh Kumar K.V | 14 Oct 12:57 2014

[PATCH 1/2] mm: Introduce a general RCU get_user_pages_fast.

get_user_pages_fast attempts to pin user pages by walking the page
tables directly and avoids taking locks. Thus the walker needs to be
protected from page table pages being freed from under it, and needs
to block any THP splits.

One way to achieve this is to have the walker disable interrupts, and
rely on IPIs from the TLB flushing code blocking before the page table
pages are freed.

On some platforms we have hardware broadcast of TLB invalidations, thus
the TLB flushing code doesn't necessarily need to broadcast IPIs; and
spuriously broadcasting IPIs can hurt system performance if done too
often.

This problem has been solved on PowerPC and Sparc by batching up page
table pages belonging to more than one mm_user, then scheduling an
rcu_sched callback to free the pages. This RCU page table free logic
has been promoted to core code and is activated when one enables
HAVE_RCU_TABLE_FREE. Unfortunately, these architectures implement
their own get_user_pages_fast routines.

The RCU page table free logic coupled with an IPI broadcast on THP
split (which is a rare event), allows one to protect a page table
walker by merely disabling the interrupts during the walk.

This patch provides a general RCU implementation of get_user_pages_fast
that can be used by architectures that perform hardware broadcast of
TLB invalidations.

It is based heavily on the PowerPC implementation.
(Continue reading)



Will Deacon | 24 Sep 19:17 2014

[PATCH v3 00/17] Cross-architecture definitions of relaxed MMIO accessors

Hello everybody,

This is version three of the series I've originally posted here:

  v1: https://lkml.org/lkml/2014/4/17/269
  v2: https://lkml.org/lkml/2014/5/22/468

This is basically just a rebase on top of 3.17-rc6, minus the alpha patch
(which was merged into mainline).

I looked at reworking the non-relaxed accessors to imply mmiowb, but it
quickly got messy as some architectures (e.g. mips) deliberately keep
mmiowb and readX/writeX separate whilst others (e.g. powerpc) don't trust
drivers to get mmiowb correct, and so add barriers to both. Given that
arm/arm64/x86 don't care about mmiowb, I've left that as an exercise for
an architecture that does care.

In order to get this lot merged, we probably want to merge the asm-generic
patch (1/17) first, so Acks would be much appreciated on the architecture
bits.

As before, I've included the original cover letter below, as that describes
what I'm trying to do in more detail.

Thanks,

Will

--->8


Daniel Thompson | 9 Sep 14:12 2014

[PATCH] asm-generic/io.h: Implement read[bwlq]_relaxed()

Currently the read[bwlq]_relaxed() family is implemented on every
architecture except blackfin, m68k[1], metag, openrisc, s390[2] and
score. Increasingly, drivers are being optimized to exploit relaxed
reads, putting these architectures at risk of compilation failures
for shared drivers.

This patch addresses this by providing implementations of
read[bwlq]_relaxed() that are identical to the equivalent read[bwlq]().
All the above architectures include asm-generic/io.h.

Note that currently only eight architectures (alpha, arm, arm64, avr32,
hexagon, microblaze, mips and sh) implement write[bwlq]_relaxed() meaning
these functions are deliberately not included in this patch.

[1] m68k includes the relaxed family only when configured *without* MMU.
[2] s390 requires CONFIG_PCI to include the relaxed family.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arch@vger.kernel.org
---
 include/asm-generic/io.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h
index 975e1cc..85ea117 100644
--- a/include/asm-generic/io.h
+++ b/include/asm-generic/io.h
@@ -66,6 +66,16 @@ static inline u32 readl(const volatile void __iomem *addr)

