Brian Pane | 1 Sep 01:54 2002

Bucket management strategies for async MPMs?

I've been thinking about strategies for building a
multiple-connection-per-thread MPM for 2.0.  It's
conceptually easy to do this:

  * Start with worker.

  * Keep the model of one worker thread per request,
    so that blocking or CPU-intensive modules don't
    need to be rewritten as state machines.

  * In the core output filter, instead of doing
    actual socket writes, hand off the output
    brigades to a "writer thread."

  * As soon as the worker thread has sent an EOS
    to the writer thread, let the worker thread
    move on to the next request.

  * In the writer thread, use a big event loop
    (with /dev/poll or RT signals or kqueue, depending
    on platform) to do nonblocking writes for all
    open connections.

This would allow us to use a much smaller number of
worker threads for the same amount of traffic
(at least for typical workloads in which the network
write time constitutes the majority of each request's
duration).
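
To make the handoff concrete, here is a rough sketch of what the core
output filter could do instead of calling the socket write directly.
Everything here (writer_item, hand_off_to_writer, the queue globals) is
invented for illustration, not code from the tree:

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"
#include "apr_thread_mutex.h"
#include "apr_thread_cond.h"

/* One queue entry per brigade handed to the writer thread. */
typedef struct writer_item {
    conn_rec           *c;     /* connection the brigade belongs to    */
    apr_bucket_brigade *bb;    /* data to write; final brigade has EOS */
    struct writer_item *next;
} writer_item;

/* Queue shared by all worker threads and the single writer thread. */
static apr_thread_mutex_t *writer_lock;
static apr_thread_cond_t  *writer_cond;
static writer_item        *queue_head, *queue_tail;

/* Replacement for the blocking write in the core output filter:
 * enqueue the brigade, wake the writer, and return at once so the
 * worker thread is free to pick up the next request. */
static apr_status_t hand_off_to_writer(ap_filter_t *f, apr_bucket_brigade *bb)
{
    writer_item *item = apr_palloc(f->c->pool, sizeof(*item));

    item->c    = f->c;
    item->bb   = bb;
    item->next = NULL;

    apr_thread_mutex_lock(writer_lock);
    if (queue_tail) {
        queue_tail->next = item;
    }
    else {
        queue_head = item;
    }
    queue_tail = item;
    apr_thread_cond_signal(writer_cond); /* or write to a wakeup pipe that
                                          * the writer's poll loop watches */
    apr_thread_mutex_unlock(writer_lock);

    return APR_SUCCESS;
}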

The problem, though, is that passing brigades between
(Continue reading)

Cliff Woolley | 1 Sep 01:57 2002

Re: Bucket management strategies for async MPMs?

On Sat, 31 Aug 2002, Brian Pane wrote:

>   * The bucket allocator alloc/free code isn't
>     thread-safe, so bad things will happen if the
>     writer thread tries to free a bucket (that's
>     just been written to the client) at the same
>     time that a worker thread is allocating a new
>     bucket for a subsequent request on the same
>     connection.

We designed with this in mind... basically what's supposed to happen is
that rather than having a bucket allocator per thread you have a group of
available bucket allocators, and you assign one to each new connection.
Since each connection will be processed by at most one thread at a time,
you're safe.  When the connection is closed, the allocator is placed back
into the list of available allocators for reuse on future connections.
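
In code, that checkout could look something like this (the free-list
structure and function name are made up for illustration; only
apr_bucket_alloc_create() is real):

#include "apr_buckets.h"
#include "apr_thread_mutex.h"

/* A small free list of idle bucket allocators, one handed out per
 * connection. */
typedef struct alloc_node {
    apr_bucket_alloc_t *alloc;
    struct alloc_node  *next;
} alloc_node;

static apr_thread_mutex_t *alloc_list_lock;
static alloc_node         *free_allocators;

/* Called when a connection is accepted: reuse an idle allocator if
 * there is one, otherwise create a new one. */
static apr_bucket_alloc_t *allocator_get(apr_pool_t *pconf)
{
    apr_bucket_alloc_t *a;

    apr_thread_mutex_lock(alloc_list_lock);
    if (free_allocators) {
        a = free_allocators->alloc;
        free_allocators = free_allocators->next;
    }
    else {
        /* created under the lock, since pconf isn't thread-safe either */
        a = apr_bucket_alloc_create(pconf);
    }
    apr_thread_mutex_unlock(alloc_list_lock);

    return a;
}

The matching put on connection close would push the allocator back onto
the free list instead of destroying it.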

--Cliff

Brian Pane | 1 Sep 02:08 2002

Re: Bucket management strategies for async MPMs?

Cliff Woolley wrote:

>On Sat, 31 Aug 2002, Brian Pane wrote:
>
>  
>
>>  * The bucket allocator alloc/free code isn't
>>    thread-safe, so bad things will happen if the
>>    writer thread tries to free a bucket (that's
>>    just been written to the client) at the same
>>    time that a worker thread is allocating a new
>>    bucket for a subsequent request on the same
>>    connection.
>>    
>>
>
>We designed with this in mind... basically what's supposed to happen is
>that rather than having a bucket allocator per thread you have a group of
>available bucket allocators, and you assign one to each new connection.
>Since each connection will be processed by at most one thread at a time,
>you're safe.  When the connection is closed, the allocator is placed back
>into the list of available allocators for reuse on future connections.
>  
>

I don't think we can count on the assumption that each conn will
only be processed by one thread at a time.  For example, this race
condition can happen on a keepalive connection with pipelined
requests:

(Continue reading)

Cliff Woolley | 1 Sep 02:10 2002

Re: Bucket management strategies for async MPMs?

On Sat, 31 Aug 2002, Brian Pane wrote:

> I don't think we can count on the assumption that each conn will
> only be processed by one thread at a time.  For example, this race

Then we have to at least guarantee that each request can only be processed
by one thread at a time, I think.  *None* of the buckets code is
threadsafe, and it's done that way intentionally.  A brigade (and its
allocator) can exist in exactly one thread at a time.

--Cliff

Brian Pane | 1 Sep 02:18 2002

Re: Bucket management strategies for async MPMs?

Cliff Woolley wrote:

>On Sat, 31 Aug 2002, Brian Pane wrote:
>
>  
>
>>I don't think we can count on the assumption that each conn will
>>only be processed by one thread at a time.  For example, this race
>>    
>>
>
>Then we have to at least guarantee that each request can only be processed
>by one thread at a time, I think.  *None* of the buckets code is
>threadsafe, and it's done that way intentionally.  A brigade (and its
>allocator) can exist in exactly one thread at a time.
>  
>

Wouldn't it be sufficient to guarantee that:
 * each *bucket* can only be processed by one thread at a time, and
 * allocating/freeing buckets is thread-safe?

Brian

Cliff Woolley | 1 Sep 02:21 2002

Re: Bucket management strategies for async MPMs?

On Sat, 31 Aug 2002, Brian Pane wrote:

> Wouldn't it be sufficient to guarantee that:
>  * each *bucket* can only be processed by one thread at a time, and
>  * allocating/freeing buckets is thread-safe?

No.  You'd need to also guarantee that all of the buckets sharing a
private data structure (copies or splits of a single bucket) were, as a
group, processed by only one thread at a time (and those buckets can exist
across multiple brigades even).  You'd also have to guarantee that no
buckets are added/removed from a given brigade by more than one thread at
a time.  When you add up the implications of all these things, it
basically ends up with the whole request being in one thread at a time.
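
Here's the kind of sharing I mean, in a rough sketch (the point is only
that the copy and the original end up pointing at one private structure
whose refcount is a plain, unprotected int):

#include "apr_buckets.h"

void copy_example(apr_bucket_alloc_t *list, apr_bucket_brigade *bb1,
                  apr_bucket_brigade *bb2)
{
    apr_bucket *orig = apr_bucket_heap_create("hello", 5, NULL, list);
    apr_bucket *copy;

    apr_bucket_copy(orig, &copy);      /* bumps a plain int refcount */

    APR_BRIGADE_INSERT_TAIL(bb1, orig);
    APR_BRIGADE_INSERT_TAIL(bb2, copy);

    /* If bb1 and bb2 are later cleaned up by two different threads,
     * both destroys decrement the same shared refcount with no
     * locking -- that's the race. */
}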

Brian Pane | 1 Sep 02:56 2002

Re: Bucket management strategies for async MPMs?

Cliff Woolley wrote:

>On Sat, 31 Aug 2002, Brian Pane wrote:
>
>  
>
>>Wouldn't it be sufficient to guarantee that:
>> * each *bucket* can only be processed by one thread at a time, and
>> * allocating/freeing buckets is thread-safe?
>>    
>>
>
>No.  You'd need to also guarantee that all of the buckets sharing a
>private data structure (copies or splits of a single bucket) were, as a
>group, processed by only one thread at a time (and those buckets can exist
>across multiple brigades even).
>

I *think* this one can be solved by making the increment/decrement
of the bucket refcount atomic.
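
Something like this, using APR's atomic operations (the names
apr_atomic_inc32/apr_atomic_dec32 should be treated as assumptions;
this is only a sketch of the idea, not a patch):

#include "apr_atomic.h"

/* Hypothetical atomically-refcounted replacement for the plain int
 * in apr_bucket_refcount. */
typedef struct {
    volatile apr_uint32_t refcount;
} atomic_bucket_refcount;

static void shared_bucket_ref(atomic_bucket_refcount *r)
{
    apr_atomic_inc32(&r->refcount);
}

/* Returns non-zero if this was the last reference, i.e. the caller
 * should free the shared data. */
static int shared_bucket_unref(atomic_bucket_refcount *r)
{
    /* apr_atomic_dec32() returns zero once the count reaches zero */
    return apr_atomic_dec32(&r->refcount) == 0;
}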

>You'd also have to guarantee that no
>buckets are added/removed from a given brigade by more than one thread at
>a time.
>

This part is easy to guarantee.  When the worker thread passes
buckets to the writer thread, it hands off a whole brigade at once,
so that ownership of the brigade passes from one thread to another.

(Continue reading)

Graham Leggett | 1 Sep 18:03 2002

Re: Segmentation fault when downloading large files

Peter Van Biesen wrote:

> I now have a reproducible error, an httpd which I can recompile (it's
> still a 2.0.39), so if anyone wants me to test something, shoot! Btw,
> I've seen in the code of ap_proxy_http_request that the variable e is
> used many times but I can't seem to find a free anywhere ...

This may be part of the problem. In APR, memory is allocated from a pool
and then freed in one go. In this case, there is one pool per request,
which is only freed when the request is complete. But during the request,
100MB of data is transferred, resulting in buckets that are allocated but
not freed (yet). The machine runs out of memory and that process
segfaults.
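
For anyone not familiar with the pool model, the lifetime looks roughly
like this (illustrative only):

#include "apr_pools.h"

/* Simplified per-request lifetime: everything allocated from the
 * request pool stays allocated until the request is over. */
void request_lifetime_example(apr_pool_t *parent)
{
    apr_pool_t *rpool;
    char *buf;

    apr_pool_create(&rpool, parent);

    /* every apr_palloc(rpool, ...) made while handling the request
     * accumulates here; none of it is returned before the end */
    buf = apr_palloc(rpool, 8192);
    (void)buf;

    apr_pool_destroy(rpool);  /* everything is released in one go */
}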

Regards,
Graham
-- 
-----------------------------------------
minfrin <at> sharp.fm 
	"There's a moon
					over Bourbon Street
						tonight..."

Justin Clift | 1 Sep 20:11 2002

Build breaking on fresh install of FreeBSD 4.6.2 with make, but not gmake

Hi everyone,

Just tried installing httpd-2.0 + apr + apr-util, all straight out of
cvs HEAD, and it breaks when using the FreeBSD 4.6.2 make, but not when
using gmake.

Not sure if anyone else has come across this.  The error message is:

***********

/usr/local/bin/bash /usr/install/httpd-2.0/srclib/apr/libtool --silent --mode=compile gcc -g -O2 -D_REENTRANT -D_THREAD_SAFE -DAP_HAVE_DESIGNATED_INITIALIZER -I/usr/install/httpd-2.0/srclib/apr/include -I/usr/install/httpd-2.0/srclib/apr-util/include -I/usr/local/include -I. -I/usr/install/httpd-2.0/os/unix -I/usr/install/httpd-2.0/server/mpm/prefork -I/usr/install/httpd-2.0/modules/http -I/usr/install/httpd-2.0/modules/filters -I/usr/install/httpd-2.0/modules/proxy -I/usr/install/httpd-2.0/include -I/usr/include/openssl -I/usr/install/httpd-2.0/modules/dav/main -c /usr/install/httpd-2.0/server/util_filter.c && touch util_filter.lo
make: don't know how to make /usr/install/httpd-2.0/server/exports.c. Stop
*** Error code 1

Stop in /usr/install/httpd-2.0/server.
*** Error code 1
(Continue reading)

Brian Pane | 1 Sep 20:33 2002

Re: Segmentation fault when downloading large files

Graham Leggett wrote:

> Peter Van Biesen wrote:
>
>> I now have a reproducible error, an httpd which I can recompile (it's
>> still a 2.0.39), so if anyone wants me to test something, shoot! Btw,
>> I've seen in the code of ap_proxy_http_request that the variable e is
>> used many times but I can't seem to find a free anywhere ...
>
>
> This may be part of the problem. In APR, memory is allocated from a
> pool and then freed in one go. In this case, there is one pool per
> request, which is only freed when the request is complete. But during
> the request, 100MB of data is transferred, resulting in buckets that
> are allocated but not freed (yet). The machine runs out of memory and
> that process segfaults.

But the memory involved here ought to be in buckets (which can
be freed long before the entire request is done).

In 2.0.39 and 2.0.40, the content-length filter's habit of
buffering the entire response would keep the httpd from freeing
buckets incrementally during the request.  That particular
problem is gone in the latest 2.0.41-dev CVS head.  If the
segfault problem still exists in 2.0.41-dev, we need to take
a look at whether there's any buffering in the proxy code that
can be similarly fixed.
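
If the memory really is in buckets, the network-facing code can release
it as it goes. A minimal sketch of the incremental pattern (not the
actual core_output_filter code; partial writes are ignored for brevity):

#include "apr_buckets.h"
#include "apr_network_io.h"

static apr_status_t write_and_free(apr_socket_t *sock, apr_bucket_brigade *bb)
{
    while (!APR_BRIGADE_EMPTY(bb)) {
        apr_bucket *e = APR_BRIGADE_FIRST(bb);
        const char *data;
        apr_size_t len;
        apr_status_t rv;

        if (APR_BUCKET_IS_EOS(e)) {
            break;
        }
        rv = apr_bucket_read(e, &data, &len, APR_BLOCK_READ);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        rv = apr_socket_send(sock, data, &len);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        apr_bucket_delete(e);  /* free this chunk before reading the next */
    }
    return APR_SUCCESS;
}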

Brian
