Rohan Desai | 5 Feb 22:09 2016

Synchronizing if_xennet_xenbus detach with event-handling

I'm getting a panic due to a race between the detach path (xennet_xenbus_detach) and event handling (xennet_handler called from a triggered event channel). xennet_xenbus_detach frees the tx ring while xennet_handler tries to access it. I'm not sure which event the event channel is being poked for (tx complete, rx ready, other). Does anybody know how these two code paths are supposed to be synchronized?
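A common pattern for this kind of teardown race -- a minimal userland sketch only, not the actual if_xennet code; the `sc_dying` flag and the unbind step are assumptions -- is to mark the softc as dying and tear down the event-channel binding before freeing the ring, with the handler bailing out once it sees the flag:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical softc, modeling only the fields relevant to the race. */
struct xennet_softc {
	bool  sc_dying;    /* set before the tx ring is freed */
	int  *sc_tx_ring;  /* stands in for the real tx ring */
};

/* Event handler: refuses to touch the ring once detach has begun. */
static int
xennet_handler(struct xennet_softc *sc)
{
	if (sc->sc_dying)
		return 0;              /* detach in progress: ignore event */
	return sc->sc_tx_ring[0];      /* normal event processing */
}

/* Detach: set the flag first; in the real driver, unbind the event
 * channel and wait for in-flight handlers to drain before freeing. */
static void
xennet_detach(struct xennet_softc *sc)
{
	sc->sc_dying = true;
	/* real code: hypervisor unbind + synchronization barrier here */
	free(sc->sc_tx_ring);
	sc->sc_tx_ring = NULL;
}
```

The key ordering is flag, then unbind/drain, then free; without the drain step the flag alone only narrows the window rather than closing it.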
Edgar Fuss | 3 Feb 16:42 2016

4.0 userland on 6.1 kernel: sa_register failed

I tried updating a 4.0.1 machine (amd64) to 6.1 by first updating the kernel 
and I get a ``libpthread: sa_register failed: Invalid argument'' (when trying 
to start nslcd). Any hints?

Julian Coleman | 2 Feb 22:55 2016

envsys design questions

Hi,

I've been looking at some of the sensors using the envsys framework, and
adding the ability to get and set the hardware (sensor chip) limits.  Two
aspects of envsys, related to initial values, seem strange to me:

  1) we call setlimits immediately after getlimits
  2) we call getlimits before reading sensor values

1) is entertaining because if we read a spurious value, the driver could
then program it back to the hardware.  (I have had this happen - a reading
of 0xff (-1 degC) was written back and the machine powered off instantly!).
The workaround was to ignore 0xff in this driver, but it doesn't handle the
general case of spurious readings, and if the hardware does a power off,
you can't tell what happens without debugging in the driver [#].  It also
seems pointless to me, because even if everything is read correctly, we
needlessly write those same values back to the hardware.

2) is awkward for chips that might not have sensors attached.  For example,
if the fan reading is 0 at boot, we can assume that no fan is attached and
not set limits for that sensor.  However, because envsys sets the limits
first, we have to read the values in the driver before setting the limits
to detect this.  Altering the envsys framework to read the sensors before
the limits seems more sensible to me.
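A sketch of that proposed order -- purely hypothetical structure and field names, not the envsys API -- reads the value first and only touches limits for sensors that appear to be attached:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical attach-time logic: read the sensor first; a fan reading
 * of 0 at boot is taken to mean "no fan attached". */
struct sensor {
	int  value;
	int  warn_min;
	bool limits_valid;
};

static void
sensor_attach(struct sensor *s, int hw_reading, int hw_limit)
{
	s->value = hw_reading;          /* 1: read the value first */
	if (s->value == 0) {            /* 2: nothing attached */
		s->limits_valid = false;
		return;
	}
	s->warn_min = hw_limit;         /* 3: getlimits only now */
	s->limits_valid = true;
	/* note: no setlimits here -- nothing is written back, so a
	 * spurious reading cannot be programmed into the chip */
}
```

Because the value is read before the limits, the "no fan" case never acquires limits, and because nothing is written back at attach time, the 0xff failure mode above cannot occur.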

Is there a reason that envsys works like this?  Does anyone know of any
adverse impact if I:

  1) don't call setlimits at boot time, and only call it if requested
     (e.g. by processing either sysctl.conf or envsys.conf)
  2) read the sensor values at boot before calling getlimits

?  Alternatively, is there a better way that I've missed?

Regards,

J

[#] This means that your machine will just shut down part way through boot.
As soon as interrupts are enabled and envsys runs, the limits that were
read in getlimits will be written back to the chip in setlimits.

-- 
   My other computer runs NetBSD too   -         http://www.netbsd.org/

Jeff Rizzo | 1 Feb 23:01 2016

DTrace "sched" provider - reviews?

I've taken a stab at implementing the DTrace "sched" provider, for most 
of the probes listed here which made sense: 
http://dtrace.org/guide/chp-sched.html#tbl-sched

Please note that this was my first foray into the scheduling code, as 
well as an early effort at implementing a dtrace provider; it's entirely 
possible I got big honking things flat-out wrong.

This has been tested to a limited degree: making sure the probes fire 
(though I'm not sure I've tested all of them), and that they seem to 
make some amount of sense.  I haven't yet put them through any great 
paces.

Any and all feedback welcome.

Attachment (sched.diff): text/x-patch, 8 KiB
Edgar Fuß | 30 Jan 20:01 2016

dbcool, envsys, powerd shutting down my machine

I don't know whether this is a userland or kernel issue or a layer 8 problem.

After running a customized kernel, I found a server powered down.
The culprit turned out to be dbcool->envsys->powerd deciding that some 
temperature had risen above its limits.

envstat -d dbcool0 says:
           Current  CritMax  WarnMax  WarnMin  CritMin  Unit
[...]
r2_temp:    53.250   54.000                     45.000 degC
[...]

sysctl hw.dbcool0 says:
[...]
hw.dbcool0.r2_temp.Tmin = 44
hw.dbcool0.r2_temp.Ttherm = 57
hw.dbcool0.r2_temp.Thyst = 2

If I read that correctly, it means that at 54 degC it's time for an emergency 
shut-down, while only at 57 degC do the fans have to run at full speed.
(Also, it seems to be threatening to the hardware if that temp falls below 45.)

I have no clue where that magic value of 54 degC comes from. It's not in any 
config file I can find, I don't find such a value in sys/dev/i2c/dbcool.c. 
Is it the BIOS writing that value into the IC? Is it a chip manufacturer 
default?

The board is a Tyan S2882-D, in case that matters.
(Btw., does anyone know what r2_temp on that board is?)

I turned off powerd for now.

Thanks for any hints.

Michael McConville | 27 Jan 18:37 2016

Undefined int shift in ifwatchd

I think my analysis here applies to this instance as well:

https://marc.info/?l=openbsd-tech&m=145377854103866&w=2

I also changed the chained condition to a switch statement because I
find that more readable.
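For illustration (my gloss on the linked analysis, not the ifwatchd loop itself): walking a bitmask with a signed `int` eventually shifts a 1 into the sign bit, which is undefined behaviour; with `unsigned int` every shift is well-defined and the loop terminates cleanly at zero.

```c
#include <assert.h>

/* Count which bits of a mask are set by walking a single-bit cursor
 * across the word.  With signed int, `i <<= 1` is UB once the cursor
 * reaches the sign bit; with unsigned int it simply becomes 0. */
static int
count_bits_matched(unsigned int addrs)
{
	unsigned int i;
	int n = 0;

	for (i = 1; i != 0; i <<= 1)
		if (i & addrs)
			n++;
	return n;
}
```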

Thanks for your time,
Michael

Index: ifwatchd.c
===================================================================
RCS file: /cvsroot/src/usr.sbin/ifwatchd/ifwatchd.c,v
retrieving revision 1.26
diff -u -p -r1.26 ifwatchd.c
--- ifwatchd.c	30 Aug 2011 18:57:38 -0000	1.26
+++ ifwatchd.c	27 Jan 2016 17:34:12 -0000
 <at>  <at>  -292,7 +292,8  <at>  <at>  check_addrs(char *cp, int addrs, enum ev
 	struct sockaddr *sa, *ifa = NULL, *brd = NULL;
 	char ifname_buf[IFNAMSIZ];
 	const char *ifname;
-	int ifndx = 0, i;
+	int ifndx = 0;
+	unsigned int i;

 	if (addrs == 0)
 		return;
 <at>  <at>  -300,7 +301,8  <at>  <at>  check_addrs(char *cp, int addrs, enum ev
 		if ((i & addrs) == 0)
 			continue;
 		sa = (struct sockaddr *)cp;
-		if (i == RTA_IFP) {
+		switch (i) {
+		case RTA_IFP:
 			struct sockaddr_dl * li = (struct sockaddr_dl*)sa;
 			ifndx = li->sdl_index;
 			if (!find_interface(ifndx)) {
 <at>  <at>  -308,10 +310,16  <at>  <at>  check_addrs(char *cp, int addrs, enum ev
 					printf("ignoring change on interface #%d\n", ifndx);
 				return;
 			}
-		} else if (i == RTA_IFA)
+			break;
+		case RTA_IFA:
 			ifa = sa;
-		else if (i == RTA_BRD)
+			break;
+		case RTA_BRD:
 			brd = sa;
+			break;
+		default:
+			break;
+		}
 		RT_ADVANCE(cp, sa);
 	}
 	if (ifa != NULL) {

Christos Zoulas | 25 Jan 20:31 2016

cookies and kmem_alloc


Hi,

The directory functions pass around ap_cookies and ap_ncookies, but
if one uses kmem_alloc() instead of malloc(), there is no way to kmem_free()
the buffer, since we don't pass the size.  I suggest that we add a new
field called ap_acookies, which holds the size we allocated with kmem_alloc(),
or 0 for compatibility with code that allocated the cookie buffer with
malloc().  Any better ideas?
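A userland model of the proposal -- the struct name and free helper are assumptions, only ap_cookies/ap_ncookies/ap_acookies come from the mail, and kmem_alloc/kmem_free are modeled with malloc/free:

```c
#include <assert.h>
#include <stdlib.h>

/* ap_acookies carries the size passed to kmem_alloc(), or 0 to mean
 * "allocated with malloc()", so the free path can pick the right
 * deallocator without changing existing callers. */
struct readdir_args {
	void   *ap_cookies;
	int     ap_ncookies;
	size_t  ap_acookies;	/* proposed field: kmem_alloc size, or 0 */
};

static void
cookies_free(struct readdir_args *ap)
{
	if (ap->ap_acookies != 0) {
		/* real code: kmem_free(ap->ap_cookies, ap->ap_acookies) */
		free(ap->ap_cookies);
	} else {
		/* real code: the legacy malloc()-based free */
		free(ap->ap_cookies);
	}
	ap->ap_cookies = NULL;
	ap->ap_ncookies = 0;
	ap->ap_acookies = 0;
}
```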

christos

Taylor R Campbell | 24 Jan 20:10 2016

passive references

To make the network stack scale well to multiple cores, the packet-
processing path needs to share resources such as routes, tunnels,
pcbs, &c., between cores without incurring much interprocessor
synchronization.

It would be nice to use pserialize(9) for this, but many of these
resources are held by code paths during packet processing that may
sleep, which is not allowed in a pserialize read section.  The two
obvious ways to resolve this are:

- Change all of these paths so that they don't sleep and can be run
  inside a pserialize read section.  This is a major engineering
  effort, because the network stack is such a complex interdependent
  beast.

- Add a reference count to each route, tunnel, pcb, &c.  This would
  work to make the network stack *safe* to run on multiple cores, but
  it incurs interprocessor synchronization for each use and hence
  fails to make the network stack *scalable* to multiple cores.

Prompted by discussion with rmind <at>  and dyoung <at> , I threw together a
sketch for an abstraction rmind called `passive references' which can
be held across sleeps on a single CPU -- e.g., in a softint LWP or
CPU-bound kthread -- but which incur no interprocessor synchronization
to acquire and release.  This would serve as an intermediary between
the two options so that we can incrementally adapt the network stack.

The idea is that acquiring a reference puts an entry on a CPU-local
list, which can be done inside a pserialize read section.  Releasing
the reference removes the entry.  When an object is about to be
destroyed -- e.g., you are unconfiguring a tunnel -- then you mark it
as unusable so that nobody can acquire new references, and wait until
there are no references on any CPU's list.

The attached file contains a summary of the design, an example of use,
and a sketch of an implementation, with input and proof-reading from
riz <at> .

Thoughts?

A variant of this approach which dyoung <at>  has used in the past is to
count the number of references, instead of putting them on a list, on
each CPU.  I first wrote a sketch with a count instead of a list,
thinking mainly of using this just for ip_encap tunnels, of which
there are likely relatively few, and not for routes or pcbs.

However, if there are many more objects than references -- as I expect
to be the case for most kinds of packet flow, of which the
packet-processing path will handle only one or two at a time -- it
would waste a lot of space to have one count on each CPU for each
object, whereas the list of all references on each CPU (to any object)
would be relatively short.
/*
 * Passive references
 *
 *	Passive references are references to objects that guarantee the
 *	object will not be destroyed until the reference is released.
 *
 *	Passive references require no interprocessor synchronization to
 *	acquire or release.  However, destroying the target of passive
 *	references requires expensive interprocessor synchronization --
 *	xcalls to determine on which CPUs the object is still in use.
 *
 *	Passive references may be held only on a single CPU and by a
 *	single LWP.  They require the caller to allocate a little stack
 *	space, a struct psref object.  Sleeping while a passive
 *	reference is held is allowed, provided that the owner's LWP is
 *	bound to a CPU -- e.g., the owner is a softint or a bound
 *	kthread.  However, sleeping should be kept to a short duration,
 *	e.g. sleeping on an adaptive lock.
 *
 *	Passive references serve as an intermediate stage between
 *	reference counting and passive serialization (pserialize(9)):
 *
 *	- If you need references to transfer from CPU to CPU or LWP to
 *	  LWP, or if you need long-term references, you must use
 *	  reference counting, e.g. with atomic operations or locks,
 *	  which incurs interprocessor synchronization for every use --
 *	  cheaper than an xcall, but not scalable.
 *
 *	- If all users *guarantee* that they will not sleep, then it is
 *	  not necessary to use passive references: you may as well just
 *	  use the even cheaper pserialize(9), because you have
 *	  satisfied the requirements of a pserialize read section.
 */

#if EXAMPLE
struct frotz {
	struct psref_target	frotz_psref;
	LIST_ENTRY(frotz)	frotz_entry;
	...
};

static struct {
	kmutex_t		lock;
	pserialize_t		psz;
	LIST_HEAD(, frotz)	head;
	struct psref_class	*class;
} frobbotzim __cacheline_aligned;

static int
frobbotzim_init(void)
{

	mutex_init(&frobbotzim.lock, MUTEX_DEFAULT, IPL_NONE);
	frobbotzim.psz = pserialize_create();
	if (frobbotzim.psz == NULL)
		goto fail0;
	LIST_INIT(&frobbotzim.head);
	frobbotzim.class = psref_class_create("frotz", IPL_SOFTNET);
	if (frobbotzim.class == NULL)
		goto fail1;

	return 0;

fail2: __unused
	psref_class_destroy(frobbotzim.class);
fail1:	pserialize_destroy(frobbotzim.psz);
fail0:	mutex_destroy(&frobbotzim.lock);
	return ENOMEM;
}

static void
frobbotzim_exit(void)
{

	KASSERT(LIST_EMPTY(&frobbotzim.head));

	psref_class_destroy(frobbotzim.class);
	pserialize_destroy(frobbotzim.psz);
	mutex_destroy(&frobbotzim.lock);
}

static struct frotz *
frotz_create(...)
{
	struct frotz *frotz;

	frotz = kmem_alloc(sizeof(*frotz), KM_SLEEP);
	if (frotz == NULL)
		return NULL;

	psref_target_init(&frotz->frotz_psref, frobbotzim.class);
	...initialize fields...;

	mutex_enter(&frobbotzim.lock);
	LIST_INSERT_HEAD(&frobbotzim.head, frotz, frotz_entry);
	mutex_exit(&frobbotzim.lock);

	return frotz;
}

static void
frotz_destroy(struct frotz *frotz)
{

	psref_target_drain(&frotz->frotz_psref, frobbotzim.class);
	mutex_enter(&frobbotzim.lock);
	LIST_REMOVE(frotz, frotz_entry);
	pserialize_perform(frobbotzim.psz);
	mutex_exit(&frobbotzim.lock);

	...destroy fields...;

	kmem_free(frotz, sizeof(*frotz));
}

static struct frotz *
frotz_lookup(uint64_t key, struct psref *psref)
{
	struct frotz *frotz;
	int s;

	s = pserialize_read_enter();
	LIST_FOREACH(frotz, &frobbotzim.head, frotz_entry) {
		membar_datadep_consumer();
		if (!match(frotz, key))
			continue;
		if (psref_acquire(psref, &frotz->frotz_psref, frobbotzim.class)
		    != 0)
			continue;
		break;
	}
	pserialize_read_exit(s);

	return frotz;
}

static void
frotz_input(struct mbuf *m, ...)
{
	struct frotz *frotz;
	struct psref psref;

	...parse m...;
	frotz = frotz_lookup(key, &psref);
	if (frotz == NULL) {
		/* Drop packet.  */
		m_freem(m);
		return;
	}

	(*frotz->frotz_input)(m, ...);
	psref_release(&psref, &frotz->frotz_psref, frobbotzim.class);
}
#endif

#define	PSREF_DEBUG	0

/*
 * struct psref_target
 *
 *	Bookkeeping for an object to which users can acquire passive
 *	references.  This is compact so that it can easily be embedded
 *	into large numbers of objects, e.g. IP packet flows.
 */
struct psref_target {
	bool			prt_draining;
#if PSREF_DEBUG
	struct psref_class	*prt_class;
#endif
};

/*
 * struct psref
 *
 *	Bookkeeping for a single passive reference.  There should only
 *	be a few of these per CPU in the system at once, no matter how
 *	many targets are stored, so these are a bit larger than struct
 *	psref_target.
 */
struct psref {
	LIST_ENTRY(psref)	psref_entry;
	struct psref_target	*psref_target;
#if PSREF_DEBUG
	struct lwp		*psref_lwp;
	struct cpu_info		*psref_cpu;
#endif
};

/*
 * struct psref_class
 *
 *	Private global state for a class of passive reference targets.
 *	Opaque to callers.
 */
struct psref_class {
	kmutex_t		prc_lock;
	kcondvar_t		prc_cv;
	struct percpu		*prc_percpu; /* struct psref_cpu */
	ipl_cookie_t		prc_iplcookie;
};

/*
 * struct psref_cpu
 *
 *	Private per-CPU state for a class of passive reference targets.
 *	Not exposed by the API.
 */
struct psref_cpu {
	LIST_HEAD(, psref)		pcpu_head;
};

/*
 * psref_class_create(name, ipl)
 *
 *	Create a new passive reference class, with the given wchan name
 *	and ipl.
 */
struct psref_class *
psref_class_create(const char *name, int ipl)
{
	struct psref_class *class;

	class = kmem_alloc(sizeof(*class), KM_SLEEP);
	if (class == NULL)
		goto fail0;

	class->prc_percpu = percpu_alloc(sizeof(struct psref_cpu));
	if (class->prc_percpu == NULL)
		goto fail1;

	mutex_init(&class->prc_lock, MUTEX_DEFAULT, ipl);
	cv_init(&class->prc_cv, name);
	class->prc_iplcookie = makeiplcookie(ipl);

	return class;

fail1:	kmem_free(class, sizeof(*class));
fail0:	return NULL;
}

/*
 * psref_class_destroy(class)
 *
 *	Destroy a passive reference class and free memory associated
 *	with it.  All targets in this class must have been drained and
 *	destroyed already.
 */
void
psref_class_destroy(struct psref_class *class)
{

	cv_destroy(&class->prc_cv);
	mutex_destroy(&class->prc_lock);
	percpu_free(class->prc_percpu, sizeof(struct psref_cpu));
	kmem_free(class, sizeof(*class));
}

/*
 * psref_target_init(target, class)
 *
 *	Initialize a passive reference target in the specified class.
 *	The caller is responsible for issuing a membar_producer before
 *	exposing a pointer to the target to other CPUs.
 */
void
psref_target_init(struct psref_target *target, struct psref_class *class)
{

	target->prt_draining = false;
#if PSREF_DEBUG
	target->prt_class = class;
#endif
}

/*
 * psref_target_destroy(target, class)
 *
 *	Destroy a passive reference target.  It must have previously
 *	been drained.
 */
void
psref_target_destroy(struct psref_target *target, struct psref_class *class)
{

	KASSERT(target->prt_draining);
#if PSREF_DEBUG
	KASSERT(target->prt_class == class);
	target->prt_class = NULL;
#endif
}

/*
 * psref_acquire(psref, target, class)
 *
 *	Try to acquire a passive reference to the specified target,
 *	which must be in the specified class.  On success, returns
 *	zero; on failure, returns a nonzero error code.  If the target
 *	is draining, returns ENOENT.
 *
 *	The caller must guarantee that it will not switch CPUs before
 *	releasing the passive reference, either by disabling
 *	kpreemption and avoiding sleeps, or by being in a softint or in
 *	an LWP bound to a CPU.
 */
int
psref_acquire(struct psref *psref, struct psref_target *target,
    struct psref_class *class)
{
	struct psref_cpu *pcpu;
	int s, error;

	KASSERTMSG((kpreempt_disabled() || cpu_softintr_p() ||
		ISSET(curlwp->l_pflag, LP_BOUND)),
	    "passive references are CPU-local,"
	    " but preemption is enabled and the caller is not"
	    " in a softint or CPU-bound LWP");

#if PSREF_DEBUG
	KASSERT(target->prt_class == class);
#endif

	/* Block interrupts and acquire the current CPU's reference list.  */
	s = splraiseipl(class->prc_iplcookie);
	pcpu = percpu_getref(class->prc_percpu);

	/* Is this target going away?  */
	if (__predict_false(target->prt_draining)) {
		/* Yes: fail.  */
		error = ENOENT;
	} else {
		/* No: record our reference.  */
		LIST_INSERT_HEAD(&pcpu->pcpu_head, psref, psref_entry);
		psref->psref_target = target;
#if PSREF_DEBUG
		psref->psref_lwp = curlwp;
		psref->psref_cpu = curcpu();
#endif
		error = 0;
	}

	/* Release the CPU list and restore interrupts.  */
	percpu_putref(class->prc_percpu);
	splx(s);

	return error;
}

/*
 * psref_release(psref, target, class)
 *
 *	Release a passive reference to the specified target, which must
 *	be in the specified class.
 *
 *	The caller must not have switched CPUs or LWPs since acquiring
 *	the passive reference.
 */
void
psref_release(struct psref *psref, struct psref_target *target,
    struct psref_class *class)
{
	int s;

	KASSERTMSG((kpreempt_disabled() || cpu_softintr_p() ||
		ISSET(curlwp->l_pflag, LP_BOUND)),
	    "passive references are CPU-local,"
	    " but preemption is enabled and the caller is not"
	    " in a softint or CPU-bound LWP");

	KASSERT(psref->psref_target == target);
#if PSREF_DEBUG
	KASSERT(target->prt_class == class);
	KASSERTMSG((psref->psref_lwp == curlwp),
	    "passive reference transferred from lwp %p to lwp %p",
	    psref->psref_lwp, curlwp);
	KASSERTMSG((psref->psref_cpu == curcpu()),
	    "passive reference transferred from CPU %u to CPU %u",
	    cpu_index(psref->psref_cpu), cpu_index(curcpu()));
#endif

	/*
	 * Block interrupts and remove the psref from the current CPU's
	 * list.  No need to percpu_getref or get the head of the list,
	 * and the caller guarantees that we are bound to a CPU anyway
	 * (as does blocking interrupts).
	 */
	s = splraiseipl(class->prc_iplcookie);
	LIST_REMOVE(psref, psref_entry);
	splx(s);

	/* If someone is waiting for users to drain, notify 'em.  */
	if (__predict_false(target->prt_draining))
		cv_broadcast(&class->prc_cv);
}

struct psreffed {
	struct psref_class	*class;
	struct psref_target	*target;
	bool			ret;
};

static void
psreffed_p_xc(void *cookie0, void *cookie1 __unused)
{
	struct psreffed *P = cookie0;
	struct psref_class *class = P->class;
	struct psref_target *target = P->target;
	struct psref_cpu *pcpu;
	struct psref *psref;
	int s;

	/* Block interrupts and acquire the current CPU's reference list.  */
	s = splraiseipl(class->prc_iplcookie);
	pcpu = percpu_getref(class->prc_percpu);

	/*
	 * Check the CPU's reference list for any references to this
	 * target.  This loop shouldn't take very long because any
	 * single CPU should hold only a small number of references at
	 * any given time unless there is a bug.
	 */
	LIST_FOREACH(psref, &pcpu->pcpu_head, psref_entry) {
		if (psref->psref_target == target) {
			/*
			 * No need to lock anything here: every write
			 * transitions from false to true, so as long
			 * as any write goes through we're good.  No
			 * need for a memory barrier because this is
			 * read only after xc_wait, which has already
			 * issued any necessary memory barriers.
			 */
			P->ret = true;
			break;
		}
	}

	/* Release the CPU list and restore interrupts.  */
	percpu_putref(class->prc_percpu);
	splx(s);
}

static bool
psreffed_p(struct psref_target *target, struct psref_class *class)
{
	struct psreffed P = {
		.class = class,
		.target = target,
		.ret = false,
	};

	xc_wait(xc_broadcast(0, &psreffed_p_xc, &P, NULL));

	return P.ret;
}

/*
 * psref_target_drain(target, class)
 *
 *	Prevent new references to target and wait for existing ones to
 *	drain.  May sleep.
 */
void
psref_target_drain(struct psref_target *target, struct psref_class *class)
{

#if PSREF_DEBUG
	KASSERT(target->prt_class == class);
#endif

	KASSERT(!target->prt_draining);
	target->prt_draining = true;

	/* Wait until there are no more references on any CPU.  */
	while (psreffed_p(target, class)) {
		/*
		 * This enter/wait/exit business looks wrong, but it is
		 * both necessary, because psreffed_p performs a
		 * low-priority xcall and hence cannot run while a
		 * mutex is locked, and OK, because the wait is timed
		 * -- explicit wakeups are only an optimization.
		 */
		mutex_enter(&class->prc_lock);
		(void)cv_timedwait(&class->prc_cv, &class->prc_lock, hz);
		mutex_exit(&class->prc_lock);
	}
}
Frank Zerangue | 20 Jan 17:16 2016

Re: arm/arm32/pmap.c assert

Thanks Nick,

Here are the pertinent options in my kernel config:
	options 	CPU_ARM1136	# Support the ARM1136 core
	options 	CPU_ARM11	# Support the ARM11 core
	options 	FPU_VFP		# Support the ARM1136 VFP

Build options are:
	makeoptions	CPUFLAGS="-march=armv6 -mtune=arm1136j-s -mfpu=vfp -mfloat-abi=hard"

The IMX31 has an ARM11 r0p4 core which does not support the armv6k extensions, i.e. LDREX(B,H,D),
STREX(B,H,D), CLREX.  To proceed, I implemented atomic ops for 1-byte, 2-byte, and 4-byte operations using
LDREX and STREX, and do not support 8-byte atomic ops.

I've discovered so far that pmap.c assumes PAGE_SIZE = 8192 in pmap_alloc_l1() and pmap_delete_l1(), as
they use PAGE_SIZE to allocate the L1 half table for user space.  Since I specify PAGE_SIZE = 4096, what happens
is that L1[0..1023] = 0 and L1[1024..2047] = 0xffffffff, thus the failed assert.  When I change the
allocation and deallocation to L1_TABLE_SIZE/2, pmap_enter() succeeds but I get data aborts.

Frank

> On Jan 20, 2016, at 1:39 AM, Nick Hudson <skrll <at> netbsd.org> wrote:
> 
> On 01/19/16 17:45, Frank Zerangue wrote:
>> Can anyone help me with this?
> 
> Maybe :)
> 
>> 
>> This is a private arm port for the IMX31 application processor. I began development with NetBSD-5.0.1
>> and have been able to migrate through the generations of NetBSD with ease up until NetBSD-7.0 from 6.1.5.
>> 
>> I now get as far as "init: copying out path `/sbin/init' 11" but fail an assert in pmap.c:
>> 	 "panic: kernel diagnostic assertion "*pdep == 0" failed:
>> 		file "/.../NetBSD-7.0/usr/src/sys/arch/arm/arm32/pmap.c".
>> 
>> My configuration defines ARM_MMU_V6N and ARM_MMU_EXTENDED, and uses EABI.
> 
> Can you share more details?
> 
> Your CPU option should define the options you state above, cf.
> 
>        http://nxr.netbsd.org/xref/src/sys/arch/evbarm/conf/RPI#12
> 
> I have RPI working with ARM11_COMPAT_MMU removed, but I've not switched as
> it breaks an emulator package that seems to rely on XN=0.
> 
> What's the backtrace of the panic?
> 
> Changing that KASSERT to
> 
>        KASSERTMSG(*pdep == 0, "va %lx pdep %p *pdep %x npde %x\n", va, pdep, *pdep, npde);
> 
> will also help
> 
> 
>> 
>> Thank you,
>> Frank Zerangue
>> 
> 
> Nick

Frank Zerangue | 19 Jan 18:45 2016

arm/arm32/pmap.c assert

Can anyone help me with this? 

This is a private arm port for the IMX31 application processor. I began development with NetBSD-5.0.1 and
have been able to migrate through the generations of NetBSD with ease up until NetBSD-7.0 from 6.1.5.

I now get as far as "init: copying out path `/sbin/init' 11" but fail an assert in pmap.c:
	 "panic: kernel diagnostic assertion "*pdep == 0" failed: 
		file "/.../NetBSD-7.0/usr/src/sys/arch/arm/arm32/pmap.c".

My configuration defines ARM_MMU_V6N and ARM_MMU_EXTENDED, and uses EABI.

Thank you,
Frank Zerangue

Martin Husemann | 17 Jan 19:02 2016

Longer timeout for getting usb descriptors

Hi folks,

I have a strange usb<->ide thingy that only spins up the ata disk behind
it after being asked for a config descriptor - and does not return the
descriptor before the disk has identified.

This causes a timeout in our code, printing "device problem, disabling port .."
and makes the device unusable.

The patch below works around that. OK to commit?

Martin
Index: usbdi.h
===================================================================
RCS file: /cvsroot/src/sys/dev/usb/usbdi.h,v
retrieving revision 1.90
diff -u -p -r1.90 usbdi.h
--- usbdi.h	17 Jul 2014 18:42:37 -0000	1.90
+++ usbdi.h	17 Jan 2016 17:59:03 -0000
 <at>  <at>  -84,9 +84,10  <at>  <at>  typedef void (*usbd_callback)(usbd_xfer_
 #define USBD_SYNCHRONOUS_SIG	0x10	/* if waiting for completion,
 					 * also take signals */

-#define USBD_NO_TIMEOUT 0
-#define USBD_DEFAULT_TIMEOUT 5000 /* ms = 5 s */
-#define	USBD_CONFIG_TIMEOUT  (3*USBD_DEFAULT_TIMEOUT)
+#define USBD_NO_TIMEOUT		0
+#define USBD_DEFAULT_TIMEOUT	5000 /* ms = 5 s */
+#define USBD_DESC_TIMEOUT	(2*USBD_DEFAULT_TIMEOUT)
+#define	USBD_CONFIG_TIMEOUT	(3*USBD_DEFAULT_TIMEOUT)

 #define DEVINFOSIZE 1024

Index: usbdi_util.c
===================================================================
RCS file: /cvsroot/src/sys/dev/usb/usbdi_util.c,v
retrieving revision 1.64
diff -u -p -r1.64 usbdi_util.c
--- usbdi_util.c	27 Mar 2015 12:46:51 -0000	1.64
+++ usbdi_util.c	17 Jan 2016 17:59:03 -0000
 <at>  <at>  -74,7 +74,8  <at>  <at>  usbd_get_desc(usbd_device_handle dev, in
 	USETW2(req.wValue, type, index);
 	USETW(req.wIndex, 0);
 	USETW(req.wLength, len);
-	return (usbd_do_request(dev, &req, desc));
+	return (usbd_do_request_flags(dev, &req, desc, 0, 0,
+	    USBD_DESC_TIMEOUT));
 }

 usbd_status
