Andrius Semionovas | 6 Feb 13:23 2009

Re: How to block specific IPs with nginx

Maybe:

deny   xxx.xxx.xxx.xxx;
allow  all;
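
A deny/allow pair is honoured at http, server, or location level, so a
site-wide block list can also live in one included file. A minimal sketch
(the path and addresses are assumptions):

        # /etc/nginx/blocklist.conf -- include this at http or server level
        deny   192.0.2.15;         # a single address
        deny   203.0.113.0/24;     # a whole network
        allow  all;                # everyone else gets through

The rules are checked in order until the first match, so the final
"allow all" only applies to clients that matched no deny line above it.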

On Fri, 06 Feb 2009 13:37:52 +0200, Asif Ali <azifali@...> wrote:

> Dear Group,
>
> How do I create a block list? I tried deny, but that does not work
> well and looks like it is only for blocking access to specific folders.
>
> Please advise
>
> regards
>
> Asif Ali

Igor Sysoev | 6 Feb 13:22 2009

Re: The two slashes issue - or how to append a constant to a parameter.

On Fri, Feb 06, 2009 at 11:42:13AM +0000, Ian Hobson wrote:

> Hi
> 
> I have a situation that I have had to fix using set...
> 
>        # handle directory names by adding /index.htm
>        location ~ ^(.*)/$ {
>                # matches directories
>                set $path index.htm;
>                fastcgi_param QUERY_STRING f=$request_uri$path;
>                fastcgi_param DOCUMENT_ROOT $document_root;
>                fastcgi_param SCRIPT_FILENAME /var/www/builder/parser.php;
>                fastcgi_pass 127.0.0.1:9000;
>        }
> 
> The issue was that with 
> 
> fastcgi_param QUERY_STRING f=$request_uri/index.htm;
> 
> I was getting two slashes - one added by nginx and one on the line 
> above, even if the entered URI contained none.

> Is there a better way of adding a constant after a parameter?

Yes:

        location ~ ^(.*)/$ {
                # matches directories
-               set $path index.htm;
(Continue reading)

Igor Sysoev | 6 Feb 13:39 2009

Re: The two slashes issue - or how to append a constant to a parameter.

On Fri, Feb 06, 2009 at 11:42:13AM +0000, Ian Hobson wrote:

> Hi
> 
> I have a situation that I have had to fix using set...
> 
>        # handle directory names by adding /index.htm
>        location ~ ^(.*)/$ {
>                # matches directories
>                set $path index.htm;
>                fastcgi_param QUERY_STRING f=$request_uri$path;
>                fastcgi_param DOCUMENT_ROOT $document_root;
>                fastcgi_param SCRIPT_FILENAME /var/www/builder/parser.php;
>                fastcgi_pass 127.0.0.1:9000;
>        }
> 
> The issue was that with 
> 
> fastcgi_param QUERY_STRING f=$request_uri/index.htm;
> 
> I was getting two slashes - one added by nginx and one on the line 
> above, even if the entered URI contained none.
> 
> Is there a better way of adding a constant after a parameter?

I did not read carefully; you may use:

- fastcgi_param QUERY_STRING f=$request_uri/index.htm;
+ fastcgi_param QUERY_STRING f=${request_uri}index.htm;
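
The braces delimit the variable name so that literal text can follow it
directly, and since $request_uri for a directory request already ends in a
slash, no separator is needed. A sketch of the resulting location, assuming
the rest of the original block stays as posted:

        location ~ ^(.*)/$ {
                # matches directories; $request_uri already ends in "/",
                # so appending the literal "index.htm" gives a single slash
                fastcgi_param QUERY_STRING f=${request_uri}index.htm;
                fastcgi_param DOCUMENT_ROOT $document_root;
                fastcgi_param SCRIPT_FILENAME /var/www/builder/parser.php;
                fastcgi_pass 127.0.0.1:9000;
        }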

(Continue reading)

Atif Ghaffar | 6 Feb 15:11 2009

Re: Server optimizations for php



On Fri, Jan 23, 2009 at 4:46 PM, Marlon de Boer <marlon <at> hyves.nl> wrote:
Jure Pečar wrote:
> On Fri, 23 Jan 2009 15:24:01 +0100
> Atif Ghaffar <atif.ghaffar <at> gmail.com> wrote:
>
>> Hi. Jure,
>>
>> Thanks for the valuable advice.
>> I will look into the cool-thread servers from Sun. We are usually buying
>> from Sun, but mostly the x64 servers.

I tested a cool-thread t2250 with 64 threads from Sun a couple of weeks
ago. My conclusion was that, for our php application, one thread wasn't
powerful enough to serve a php page fast enough. So in our case we would
end up with a lot of parallel but slower processes. Our current x86_64
hardware could deliver the pages about 2 secs faster per php-cgi process.

Marlon,
I have just finished testing on the T5210 with 64 threads and have come to the same conclusion as you.
Thanks for the correct advice. I had to try it out myself, though.

best regards
 

Tomáš Hála | 6 Feb 15:15 2009

bugreport - connection broke on slow clients in proxy mode

Hello,
we use nginx as a reverse proxy to an apache server, and if we use the
proxy_max_temp_file_size directive to limit the size of file buffering,
downloading larger files over a slow connection always breaks and it
is necessary to start the download again.

For example, to replicate the problem:
use "proxy_max_temp_file_size 10M" in the proxy configuration, generate
an approximately 40MB binary file in the document root of the proxied
apache (or possibly another webserver as well) and try to download it
with the speed limited to 100k/sec, for example with wget:
wget -t 1 --limit-rate=100k http://server/file
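
For reference, the proxy side of such a setup might look like this (the
backend address and port are assumptions):

        server {
                listen 80;
                location / {
                        proxy_pass http://127.0.0.1:8080;  # the proxied apache
                        # spool at most 10M of read-ahead into a temp file
                        proxy_max_temp_file_size 10m;
                }
        }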

Downloading will fail at approximately 12MB. If you download it at full
speed (10M in my case), there is no problem. If you download it
directly from the apache server running on a different tcp port, there is
also no problem. The problem appears on the latest stable (0.6.35)
version as well as on the latest development (0.7.33) version.
Feel free to ask me for more details.
Best regards, Tomas Hala

Maxim Dounin | 6 Feb 15:53 2009

Re: bugreport - connection broke on slow clients in proxy mode

Hello!

On Fri, Feb 06, 2009 at 03:15:26PM +0100, Tomáš Hála wrote:

> Hello,
> we use nginx as a reverse proxy to an apache server, and if we use the
> proxy_max_temp_file_size directive to limit the size of file buffering,
> downloading larger files over a slow connection always breaks and it
> is necessary to start the download again.
>
> For example, to replicate the problem:
> use "proxy_max_temp_file_size 10M" in the proxy configuration, generate
> an approximately 40MB binary file in the document root of the proxied
> apache (or possibly another webserver as well) and try to download it
> with the speed limited to 100k/sec, for example with wget:
> wget -t 1 --limit-rate=100k http://server/file
>
> Downloading will fail at approximately 12MB. If you download it at full
> speed (10M in my case), there is no problem. If you download it
> directly from the apache server running on a different tcp port, there is
> also no problem. The problem appears on the latest stable (0.6.35)
> version as well as on the latest development (0.7.33) version.
> Feel free to ask me for more details.

I've seen a similar problem caused by client timeouts in Apache: 
from Apache's point of view the client downloads about 10M (+ 
nginx proxy memory buffers) and then stops downloading for a 
relatively long time (the time needed for the client to download at 
least one memory buffer from nginx).
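
If it is the stock Timeout directive that fires, one way to test the
theory would be to raise it on the proxied Apache (the value here is only
an assumption for illustration):

        # httpd.conf -- Apache must tolerate the pause while the client
        # drains nginx's 10M temp file plus the memory buffers
        Timeout 1800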

Maxim Dounin

Tomáš Hála | 6 Feb 16:35 2009

Re: bugreport - connection broke on slow clients in proxy mode

Maxim Dounin wrote:
> Hello!
> 
> On Fri, Feb 06, 2009 at 03:15:26PM +0100, Tomáš Hála wrote:
> 
>> Hello,
>> we use nginx as a reverse proxy to an apache server, and if we use the
>> proxy_max_temp_file_size directive to limit the size of file buffering,
>> downloading larger files over a slow connection always breaks and it
>> is necessary to start the download again.
>>
>> For example, to replicate the problem:
>> use "proxy_max_temp_file_size 10M" in the proxy configuration, generate
>> an approximately 40MB binary file in the document root of the proxied
>> apache (or possibly another webserver as well) and try to download it
>> with the speed limited to 100k/sec, for example with wget:
>> wget -t 1 --limit-rate=100k http://server/file
>>
>> Downloading will fail at approximately 12MB. If you download it at full
>> speed (10M in my case), there is no problem. If you download it
>> directly from the apache server running on a different tcp port, there is
>> also no problem. The problem appears on the latest stable (0.6.35)
>> version as well as on the latest development (0.7.33) version.
>> Feel free to ask me for more details.
> 
> I've seen a similar problem caused by client timeouts in Apache: 
> from Apache's point of view the client downloads about 10M (+ 
> nginx proxy memory buffers) and then stops downloading for a 
> relatively long time (the time needed for the client to download at 
> least one memory buffer from nginx).
> 
> Maxim Dounin
> 
> 

Hello,
that makes sense. It was probably a problem with our understanding of the 
meaning of the proxy_max_temp_file_size directive. Based on the 
documentation (wiki), we thought that if the file is larger than the 
limit, it would be served synchronously. When I strace the apache process 
serving this file, it seems that nginx first downloads up to the size 
given by proxy_max_temp_file_size, and after the client reaches this 
point it starts transferring synchronously. So the documentation is a 
little bit misleading. But I think I understand why it is implemented 
like this: it is easier to wait until the buffer fills than to detect 
the size of the served file beforehand.
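
As an aside, if synchronous transfer from the very start is what is
wanted, setting the directive to zero disables the temp file entirely
(a sketch, with the same assumed backend as above):

        location / {
                proxy_pass http://127.0.0.1:8080;
                # 0 = no temp file: nginx reads from the backend no faster
                # than the client reads from nginx, apart from memory buffers
                proxy_max_temp_file_size 0;
        }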
Thanks for your hint.
BR Tomas Hala

Petite Abeille | 6 Feb 18:45 2009

Re: Mail module: auth cram-md5 does not work


On Feb 6, 2009, at 1:01 PM, Maxim Dounin wrote:

> But actually I recommend avoiding both CRAM-MD5 and APOP, since
> they require plaintext passwords to be stored on the server. It's
> much better to use plain authentication with security added by the
> SSL layer.

Yes, if you can afford it, STARTTLS and AUTH PLAIN is the way to go.
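
For the record, a minimal sketch of that setup with the nginx mail module
(hostname, certificate paths and the auth backend URL are assumptions):

        mail {
                server_name         mail.example.com;
                auth_http           http://127.0.0.1:9000/auth;

                ssl_certificate     /etc/nginx/mail.crt;
                ssl_certificate_key /etc/nginx/mail.key;

                server {
                        listen     25;
                        protocol   smtp;
                        starttls   on;           # offer STARTTLS on the plain port
                        smtp_auth  plain login;  # plaintext mechanisms, behind TLS
                }
        }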

Cheers,

--
PA.
http://alt.textdrive.com/nanoki/

Petite Abeille | 6 Feb 18:52 2009

Re: Mail module: auth cram-md5 does not work


On Feb 5, 2009, at 11:41 PM, Miguel Beccari wrote:

> I have the username and the challenge (7c74db51a3adfc16a65a47aca136d518).
> Could I go back to the password?

No.

This is how it goes:

(1) Use the username to retrieve the password
(2) Use that password to HMAC-MD5 the challenge
(3) Compare the HMAC to the digest
(4) If digest and HMAC match, the authentication has succeeded
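
In code, the check amounts to something like this (a Python sketch; the
function and variable names are mine):

        import hmac, hashlib

        def verify_cram_md5(challenge: bytes, password: bytes, client_digest: str) -> bool:
            # step 2: HMAC-MD5 the challenge, keyed with the password
            # retrieved in step 1
            expected = hmac.new(password, challenge, hashlib.md5).hexdigest()
            # steps 3 and 4: compare with the digest the client sent
            return hmac.compare_digest(expected, client_digest)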

HTH.

Cheers,

--
PA.
http://alt.textdrive.com/nanoki/

Stefan Scott | 6 Feb 19:11 2009

Re: Problem using Magento with nginx

TYPO:
> When I point my browser at http://mysubdom.dom.net, I get the message:

SHOULD SAY:
When I point my browser at http://mysubdom.MYdom.net, I get the message:

-- 
Posted via http://www.ruby-forum.com/.

