Peter Postma | 1 Dec 01:03 2004

clonable lo(4)

Hi all,

I posted this diff a few months ago, but now I'd like to commit it.
The diff just makes the device clonable, removes the loif array, and adds
a lo0ifp pointer which points to the lo0 ifnet structure.
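
For anyone who hasn't looked at an interface cloner before, the change
follows the usual if_clone(9) pattern; roughly like the sketch below
(the shape only, not the literal diff -- details as in other cloners):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <sys/errno.h>
    #include <sys/socket.h>
    #include <net/if.h>

    struct ifnet *lo0ifp;               /* replaces the static loif[] array */

    static int  lo_clone_create(struct if_clone *, int);
    static int  lo_clone_destroy(struct ifnet *);

    static struct if_clone lo_cloner =
        IF_CLONE_INITIALIZER("lo", lo_clone_create, lo_clone_destroy);

    void
    loopattach(int n)
    {
        (void)lo_clone_create(&lo_cloner, 0);   /* lo0 always exists */
        if_clone_attach(&lo_cloner);
    }

    static int
    lo_clone_create(struct if_clone *ifc, int unit)
    {
        struct ifnet *ifp;

        ifp = malloc(sizeof(*ifp), M_DEVBUF, M_WAITOK | M_ZERO);
        snprintf(ifp->if_xname, sizeof(ifp->if_xname), "%s%d",
            ifc->ifc_name, unit);
        /* ... set IFF_LOOPBACK, if_output, if_mtu, etc. ... */
        if_attach(ifp);
        if (unit == 0)
            lo0ifp = ifp;               /* the new pointer */
        return (0);
    }

    static int
    lo_clone_destroy(struct ifnet *ifp)
    {
        if (ifp == lo0ifp)
            return (EPERM);             /* never destroy lo0 */
        if_detach(ifp);
        free(ifp, M_DEVBUF);
        return (0);
    }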

I'll commit it at the end of the week if there are no complaints.

-- 
Peter Postma
Index: conf/files
===================================================================
RCS file: /cvsroot/src/sys/conf/files,v
retrieving revision 1.700
diff -u -r1.700 files
--- conf/files	30 Nov 2004 04:28:43 -0000	1.700
+++ conf/files	30 Nov 2004 23:59:56 -0000
@@ -1296,7 +1296,7 @@
 file	net/if_gre.c			gre			needs-flag
 file	net/if_hippisubr.c		hippi			needs-flag
 file	net/if_ieee1394subr.c		ieee1394
-file	net/if_loop.c			loop			needs-count
+file	net/if_loop.c			loop			needs-flag
 file	net/if_media.c
 file	net/if_ppp.c			ppp			needs-count
 file	net/if_stf.c			stf & inet & inet6	needs-flag
Index: net/if.c
===================================================================

Greg A. Woods | 1 Dec 02:04 2004

Re: wm(4) with i82544EI on the AlphaServer ES40...

[ On Monday, November 29, 2004 at 20:57:01 (-0500), Thor Lancelot Simon wrote: ]
> Subject: Re: wm(4) with i82544EI on the AlphaServer ES40...
>
> I've repeatedly seen strange, unexpectedly low throughput numbers from
> wm cards hooked up back-to-back (including with the Intel-supplied driver
> on Linux).  I was able to reproduce this with Marvell cards that supposedly
> use the same PHY block as well, which makes me think that there may be
> something about the combination of that PHY and the use of another such
> PHY (instead of a switch) as the link partner that causes poor performance.

Weird.  It does seem to come up with the right link status on both ends
and I never see any errors on the interfaces at either end, but I guess
that doesn't mean much...  :-)

> Can you throw a cheap (but modern) switch in the middle and see if you
> get the same or different results?

I think I have a decent switch in there now (at least that's what my
on-site eyes and hands tell me they've done :-), but it made no
difference.

I have also managed to convince them to "borrow" a 2-port fibre card
without any PHY (i82546GB) from another not-quite-production box and
sadly it's no better either.  It's hooked back-to-back with an identical
card in the dual-Opteron FreeBSD box I've been testing against.  I can
just barely push 36 MB/s through it with ideal(?) TTCP parameters and
with all the hardware checksum features turned on, and about 10% less
without them on.
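
(For scale: 36 MB/s is about 290 Mbit/s, i.e. less than a third of the
roughly 940 Mbit/s of TCP payload a gigabit link can carry.)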

If I had not seen the bge0 card do well over 50 MB/s under Tru64 I'd be ...

Jonathan Stone | 1 Dec 02:00 2004

Re: clonable lo(4)


In message <20041201000301.GA90340@gateway.pointless.nl>,
Peter Postma writes:
>
>Hi all,
>
>I posted this diff a few months ago, but now I'd like to commit it.

Why? What does it buy us?  Multiple local addresses for protocols
which (unlike IP) don't allow multiple addresses on a single interface?
Different MTUs on different instances of a local-loopback interface?

If you're sure you're not just looking for a bikeshed to spraypaint
your name on, maybe you could update lo(4) to document the change and
the new usage(s) that it enables?

Peter Postma | 1 Dec 12:32 2004

Re: clonable lo(4)

On Tue, Nov 30, 2004 at 05:00:02PM -0800, Jonathan Stone wrote:
> 
> In message <20041201000301.GA90340@gateway.pointless.nl>,
> Peter Postma writes:
> >
> >Hi all,
> >
> >I posted this diff a few months ago, but now I'd like to commit it.
> 
> Why? What does it buy us?  Multiple local addresses for protocols
> which (unlike IP) don't allow multiple addresses on a single interface?
> Different MTUs on different instances of a local-loopback interface?
> 

The reasons I want this are:
1) no more static limits
2) the regression tests for pfctl(8) use it

I'm not sure whether there are real-world examples where this could be
useful, but I'm not looking for a discussion on that...
The point here is that currently you need to rebuild the kernel to add new
devices and this removes that limit. How people are going to use the multiple
devices is beyond the scope of this thread.
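
(With the cloner in place, instances are created and destroyed at
runtime with ifconfig(8), e.g. "ifconfig lo1 create" and
"ifconfig lo1 destroy".)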

> If you're sure you're not just looking for a bikeshed to spraypaint
> your name on, maybe you could update lo(4) to document the change and
> the new usage(s) that it enables?
> 

Sure, I'll update it...

Hubert Feyrer | 1 Dec 12:37 2004

Re: clonable lo(4)


On Wed, 1 Dec 2004, Peter Postma wrote:
> and this removes that limit. How people are going to use the multiple
> devices is beyond the scope of this thread.

No, I think it is exactly the question being asked - why would we want 
this?

  - Hubert

-- 
NetBSD - Free AND Open!      (And of course secure, portable, yadda yadda)

Christos Zoulas | 1 Dec 23:23 2004

Re: clonable lo(4)

In article <Pine.GSO.4.61.0412011235160.13890@rfhpc8317>,
Hubert Feyrer <hubert@feyrer.de> wrote:
>
>On Wed, 1 Dec 2004, Peter Postma wrote:
>> and this removes that limit. How people are going to use the multiple
>> devices is beyond the scope of this thread.
>
>No, I think it is exactly the question being asked - why would we want 
>this?

Because it is pointless to have needs-count devices. There are very
few left (most of them are isdn and ppp/sl related). We can make
all devices able to have an arbitrary number of instances. For the
most part you either have allocated too many that you'll never use,
wasting kva, or you've allocated too few and you need to recompile
a new kernel. In most cases the code diffs to do this actually save kva!
Also, in a dynamically loaded driver environment you'd like to
keep static allocations to a minimum.
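
To make the kva point concrete, the before/after shape of such a
conversion is roughly the following (schematic only, with a made-up
"xx" driver rather than any specific diff):

    /*
     * Before (needs-count): a static, config-time array.  The kva
     * is consumed whether or not the units are ever used.
     */
    #include "xx.h"                     /* config(1) generates NXX */
    struct xx_softc xx_softcs[NXX];

    /*
     * After (cloner): allocate a softc only when an instance is
     * actually created, and free it again when it is destroyed.
     */
    static int
    xx_clone_create(struct if_clone *ifc, int unit)
    {
        struct xx_softc *sc;

        sc = malloc(sizeof(*sc), M_DEVBUF, M_WAITOK | M_ZERO);
        /* ... initialize sc, if_attach() ... */
        return (0);
    }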

For this particular case I can envision cisco-like "LoopbackN"
functionality where you create loopback devices to aggregate multiple
physical interfaces, or a virtual interface carrying the highest IP
address of the router, used by protocols such as OSPF that choose
the highest-numbered address.

christos

Christos Zoulas | 1 Dec 23:40 2004

Re: clonable lo(4)

On Dec 1,  5:35pm, erplefoo@gmail.com (Sean Davis) wrote:
-- Subject: Re: clonable lo(4)

| I wouldn't mind bpf being something that you didn't need to specify a
| number for - I have, on numerous occasions, built a kernel for a
| machine with, say, 3 bpf devices, then later on discovered I need
| more.

Too late, I've already committed the changes for that in head. Bpf is a cloner
now.
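
For reference, the userland idiom this helps is the classic hunt for a
free unit; roughly like this (plain user code, nothing cloner-specific
is needed on the application side):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>

    /*
     * Probe /dev/bpf0, /dev/bpf1, ... until an open succeeds.  EBUSY
     * means the unit is taken, so try the next one; with the static
     * NBPFILTER limit gone, the kernel no longer runs out of
     * instances before the application does.
     */
    int
    open_bpf(void)
    {
        char dev[sizeof "/dev/bpf99999"];
        int fd, i;

        for (i = 0; i < 10000; i++) {
            (void)snprintf(dev, sizeof(dev), "/dev/bpf%d", i);
            fd = open(dev, O_RDWR);
            if (fd >= 0)
                return (fd);
            if (errno != EBUSY)
                break;                  /* ENOENT etc.: no more units */
        }
        return (-1);
    }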

christos

Sean Davis | 1 Dec 23:35 2004

Re: clonable lo(4)

On Wed, 1 Dec 2004 22:23:32 GMT, Christos Zoulas <christos@zoulas.com> wrote:
> In article <Pine.GSO.4.61.0412011235160.13890@rfhpc8317>,
> Hubert Feyrer <hubert@feyrer.de> wrote:
> >
> >On Wed, 1 Dec 2004, Peter Postma wrote:
> >> and this removes that limit. How people are going to use the multiple
> >> devices is beyond the scope of this thread.
> >
> >No, I think it is exactly the question being asked - why would we want
> >this?
> 
> Because it is pointless to have needs-count devices. There are very
> few left (most of them are isdn and ppp/sl related). We can make
> all devices able to have an arbitrary number of instances. For the
> most part you either have allocated too many that you'll never use,
> wasting kva, or you've allocated too few and you need to recompile
> a new kernel. In most cases the code diffs to do this actually save kva!
> Also, in a dynamically loaded driver environment you'd like to
> keep static allocations to a minimum.
> 
> For this particular case I can envision cisco-like "LoopbackN"
> functionality where you create loopback devices to aggregate multiple
> physical interfaces, or a virtual interface carrying the highest IP
> address of the router, used by protocols such as OSPF that choose
> the highest-numbered address.
> 
> christos
> 

I wouldn't mind bpf being something that you didn't need to specify a
number for - I have, on numerous occasions, built a kernel for a
machine with, say, 3 bpf devices, then later on discovered I need
more.

Mihai CHELARU | 2 Dec 11:13 2004

NetBSD and large pps

Hello,

I've observed that NetBSD is not able to route more than 30kpps. When it
reaches this limit it usually stays there for a while and then the machine
suddenly reboots. I don't use NAPI NICs, so I get around 30k IRQs/sec. Is
this a known issue? What can I do about it?

-- 
Thanks,
Mihai

Mihai CHELARU | 2 Dec 12:08 2004

Re: NetBSD and large pps

Martin Husemann wrote:
> On Thu, Dec 02, 2004 at 12:13:04PM +0200, Mihai CHELARU wrote:
> 
>>I don't use NAPI NICs, so I get around 30k IRQs/sec. Is this a
>>known issue? What can I do about it?
> 
> 
> What is NAPI?
> What NICs do you use?
> 
> Martin
> 

NAPI is also known as RX polling. From what I understand, it's a way of
buffering ingress packets at layer 2 instead of generating an IRQ for
every packet received. It's available for the Intel PRO/1000 (epro 1000).
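
Schematically the control flow is like the sketch below (a user-space
simulation of the idea only, not driver code; BUDGET and the ring are
made up):

    #include <stdbool.h>
    #include <stdio.h>

    #define BUDGET 64                   /* max packets per poll pass */

    static int ring_depth = 200;        /* simulated RX ring backlog */

    static bool ring_empty(void) { return ring_depth == 0; }
    static void deliver_packet(void) { ring_depth--; }
    static void mask_rx_irq(void) { puts("rx irq masked"); }
    static void unmask_rx_irq(void) { puts("rx irq unmasked"); }

    /*
     * One RX interrupt masks further RX interrupts; the ring is then
     * drained from a polling loop, so a burst of packets costs one
     * IRQ instead of one IRQ per packet.
     */
    static int
    rx_poll(void)
    {
        int done = 0;

        while (done < BUDGET && !ring_empty()) {
            deliver_packet();
            done++;
        }
        if (ring_empty())
            unmask_rx_irq();            /* quiet: back to interrupts */
        return done;                    /* == BUDGET: poll again */
    }

    int
    main(void)
    {
        mask_rx_irq();                  /* what the real ISR would do */
        while (rx_poll() == BUDGET)
            continue;
        return 0;
    }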

I use Broadcom PCI-X cards (bge), BCM5702X.

From what I see, there is a problem with an IRQ limit in NetBSD. On another
machine I have a pgsql that also sometimes runs at around 30k IRQs/sec and
also suddenly reboots (don't ask me what that pgsql is doing there; some
strange half-page select is driving that multi-proc machine nuts - 30k
TLB-shootdown IRQs).

-- 
Mihai

