Alex V. Koval | 3 May 22:01 2016

uwsgi hangs on SIGHUP

Hi All,

I have a problem with uWSGI (2.0.12 on Ubuntu 14.04): since some change
in my Python code (a few thousand lines were changed, so it is
difficult to tell now what caused the problem), uWSGI no longer
reloads gracefully on SIGHUP as it did before. Instead, the process
is detached only on 'harakiri', which is quite slow, and all client
connections hang in the meantime:

...gracefully killing workers...
Gracefully killing worker 1 (pid: 2017)...
Tue May  3 18:59:11 2016 - worker 1 (pid: 2017) is taking too much time to die...NO MERCY !!!
worker 1 buried after 1 seconds
binary reloading uWSGI...
chdir() to /var/www/my_project/my_project
closing all non-uwsgi socket fds > 2 (max_fd = 1024)...
found fd 3 mapped to socket 0 (192.168.128.65:9991)
running /var/www/my_project/my_project/env/bin/uwsgi
[uWSGI] getting INI configuration from uwsgi-test.ini
*** Starting uWSGI 2.0.12 (64bit) on [Tue May  3 18:59:12 2016] ***
compiled with version: 4.8.4 on 03 May 2016 18:42:43
os: Linux-4.5.0-x86_64-linode65 #2 SMP Mon Mar 14 18:01:58 EDT 2016
nodename: lin-yp0-py2
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 4
current working directory: /var/www/my_project/my_project
detected binary path: /var/www/my_project/my_project/env/bin/uwsgi
chdir() to /var/www/my_project/my_project/
(Continue reading)
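If the immediate goal is just to bound how long a blocked worker can hold
up the reload, the mercy timeouts are the usual knobs. This is only a
sketch of the relevant options with example values, not a diagnosis of
why the workers stopped dying cleanly:

[uwsgi]
; maximum time (seconds) a single worker may take to shut down on reload
worker-reload-mercy = 5
; maximum time the master waits for all processes during reload/shutdown
reload-mercy = 10
; hard per-request timeout, so a stuck request cannot block a worker forever
harakiri = 30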

Konstantin Ryabitsev | 22 Apr 20:59 2016

STDIN never closed with CGI and POST

Hi, all:

I'm trying to set up uwsgi with the CGI plugin and git-http-backend, to
see if we get better performance with uwsgi than with fastcgi (HTTP git
performance is fairly important to us for git.kernel.org).

However, all my attempts have been in vain so far, because attempting
to clone always hangs -- GETs work just fine, but when we try to do a
POST, git-http-backend enters an endless read() and then things time out.
Apparently, this is because STDIN with the POST data is never closed.

Here's my configuration:

nginx (version 1.8.1):

-------
server {
    root /var/www/html;
    server_name git.example.com;

    location / {
        include uwsgi_params;

        uwsgi_modifier1 9;
        uwsgi_buffering off;
        uwsgi_param GIT_HTTP_EXPORT_ALL "1";
        uwsgi_param GIT_PROJECT_ROOT    /var/lib/git/repos;
        uwsgi_param PATH_INFO           $1;
        uwsgi_pass 127.0.0.1:10002;
    }
(Continue reading)
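For reference, the uwsgi side of a setup like this is usually just the
CGI plugin pointed at git-http-backend. The sketch below is an assumption
about that half of the configuration (plugin name, socket and backend
path are illustrative, not taken from the original post), and it does
not address the STDIN issue itself:

[uwsgi]
plugins = cgi
; must match the uwsgi_pass address in the nginx config above
socket = 127.0.0.1:10002
; hand every request to git-http-backend (path is an assumption;
; check where your distribution installs it)
cgi = /usr/libexec/git-core/git-http-backend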

Daniel Nicoletti | 14 Apr 13:04 2016

Fork memory usage

Hi,

Yesterday I ran an experiment to see how fast uWSGI could spawn 1000
workers with my app loaded. It took ~2s but 1.5 GB of RAM. Then,
instead of 1000 processes, I started 1 process with 1000 threads
(my plugin instantiates 1000 QThreads), and memory usage was 300 MB.
This surprised me, so I wrote an application that loads my app the way
uwsgi does and forks 1000 times; granted, this app doesn't do any
protocol handling, and forking 1000 times used 300 MB, the same as
using threads did.

The uwsgi process here is about 800 KB in size and my test app is
about 100 KB, which is approximately the difference.

Now I wonder: why is uwsgi so big? Could its protocols be split into
plugins loaded at runtime to reduce this? It also seemed that shared
libraries didn't add to the total, thanks to their shared nature;
maybe it would be useful if uwsgi provided a shared library, to
possibly share it among processes. Or is there something different
about how uwsgi fork()s, perhaps explicitly sharing less, if that's
even possible?

Note that I don't know which build options Debian used for uwsgi;
maybe it's possible to compile it with a reduced size, or maybe it
would just be better if its code were split into plugins.
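One thing worth checking before comparing the two models is how the
memory is being measured: plain RSS counts copy-on-write pages once per
forked worker, while PSS splits shared pages across the processes that
map them, so 1000 forked workers can look far heavier than they really
are. A minimal sketch for comparing the two (pass whatever worker PIDs
your setup reports on the command line):

#!/usr/bin/env python3
# Compare RSS (counts shared/COW pages in every process) with PSS
# (splits shared pages evenly) for a list of PIDs, using /proc/<pid>/smaps.
import sys

def total_kb(pid, field):
    total = 0
    with open('/proc/%s/smaps' % pid) as f:
        for line in f:
            if line.startswith(field + ':'):
                total += int(line.split()[1])  # values are reported in kB
    return total

for pid in sys.argv[1:]:
    print('%s  Rss=%d kB  Pss=%d kB'
          % (pid, total_kb(pid, 'Rss'), total_kb(pid, 'Pss')))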

Best,

--

-- 
Daniel Nicoletti

KDE Developer - http://dantti.wordpress.com
(Continue reading)

Akash Shende | 13 Apr 15:59 2016

uwsgi readline() warning

Hey guys,

I'm using web.py + uWSGI (2.0.10) on CentOS 6. Recently the uwsgi log has been filling up with the following line:

----
[uwsgi-warning] you are using readline() on request body allocating over than 8 MB, that is really bad and can be avoided..
----

I changed "body-read-warning" to 1 (1 MB) on my development setup, but the issue didn't appear there.

Where does "readline" actually get used? Could somebody please explain this?
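For what it's worth, the warning is about reading the request body line
by line rather than in chunks; in a web.py app it is usually reached
indirectly through form/body parsing rather than from your own code
(that call path is an assumption, not something verified here). A minimal
WSGI sketch of the difference:

# hypothetical WSGI app: read the body in fixed-size chunks instead of
# calling readline() on a multi-megabyte request body, which is what
# triggers the [uwsgi-warning] above
def application(environ, start_response):
    body = environ['wsgi.input']
    total = 0
    while True:
        chunk = body.read(65536)   # chunked read: no large line buffer needed
        if not chunk:
            break
        total += len(chunk)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [('read %d bytes\n' % total).encode()]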

Thanks.




Dmitry Sivachenko | 12 Apr 10:37 2016

mod_proxy_uwsgi broken with recent apache update

Hello,

it seems mod_proxy_uwsgi does not compile against the recent apache-2.4.20:

(cd /wrkdirs/usr/ports/www/mod_proxy_uwsgi/work/uwsgi-2.0.12 && /usr/local/sbin/apxs -c  -o
mod_proxy_uwsgi.la apache2/mod_proxy_uwsgi.c)
/usr/local/share/apr/build-1/libtool --silent --mode=compile cc -prefer-pic -O2 -pipe
-I/usr/include -DLIBICONV_PLUG -fstack-protector -fno-strict-aliasing   
-I/usr/local/include/apache24  -I/usr/local/include/apr-1   -I/usr/local/include/apr-1
-I/usr/include -I/usr/local/include -I/usr/local/include/db5  -c -o apache2/mod_proxy_uwsgi.lo
apache2/mod_proxy_uwsgi.c && touch apache2/mod_proxy_uwsgi.slo
apache2/mod_proxy_uwsgi.c:262:21: error: static declaration of
'ap_proxy_buckets_lifetime_transform' follows non-static declaration
static apr_status_t ap_proxy_buckets_lifetime_transform(request_rec *r,
                   ^
/usr/local/include/apache24/mod_proxy.h:1069:29: note: previous declaration is here
PROXY_DECLARE(apr_status_t) ap_proxy_buckets_lifetime_transform(request_rec *r,
                           ^
1 error generated.
apxs:Error: Command failed with rc=65536
Eugene M. Zheganin | 7 Apr 20:08 2016

uwsgi performance

Hi,

I'm trying to use uwsgi with nginx to serve some CGI scripts. The thing 
is, no matter what I do, I'm stuck at 53 rps in ab testing (50% user 
CPU load, 45% system CPU load), while the old setup with Apache gives 
83 rps (83% user CPU load, 15% system CPU load). I have read the CGI 
chapter in the docs, and the FAQ, but what bothers me is that no matter 
what I do, the rps stays stuck at 53.

I tried the default blank config, a config with a non-default number of 
concurrent processes, and the async/ugreen approach, but nothing seems 
to change. Yes, when I set the number of processes close to 1, 
performance degrades, but nothing can push it above 53 rps, and from my 
point of view something should, right? Considering that the CGI script 
under test is always the same, maybe it's just me doing something wrong?

The config is now as follows:

[uwsgi]
plugins = /usr/local/lib/uwsgi/cgi_plugin.so,/usr/local/lib/uwsgi/python_plugin.so,/usr/local/lib/uwsgi/ugreen_plugin.so

async = 200
ugreen = true

cgi = /cgi-bin=/var/www/zakaz/cgi-bin
stats = /var/tmp/uwsgi-ftp-stats.sock
listen = 1024

I tried adding this, instead of async/ugreen:

processes = 8
threads = 20

and this:

rpc-max = 128
lock-engine = ipcsem
ftok = /tmp/uwsgiftp

Nothing changes.
I would be glad to hear any ideas or explanations.
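As a point of comparison only (not a tuning recommendation, since the
bottleneck here hasn't been identified), a plain multiprocess setup
without any async engine is often the simplest baseline to benchmark
against; each CGI request spawns a new process by design, and the
async/ugreen engine does not remove that cost. A sketch, reusing the
paths from the config above:

[uwsgi]
plugins = /usr/local/lib/uwsgi/cgi_plugin.so
processes = 8
cgi = /cgi-bin=/var/www/zakaz/cgi-bin
; reduce per-request logging overhead while benchmarking
disable-logging = true
listen = 1024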

Thanks.
Eugene.
Pinakee Biswas | 1 Apr 08:11 2016

Uwsgi with Gevent

Hi,

We are running an e-commerce site with Django. We are using uWSGI behind nginx.

I was planning to use gevent on our platform for scalability, but I am a bit confused by the documentation on the subject. We are using an ini file for the uWSGI configuration. Is there a standard configuration that could be used with gevent?

I found a couple of options:
  • gevent - number of async cores
  • gevent-monkey-patch

Can the above two options be used together?

Is there a standard formula for the number of gevent async cores for optimal performance?

If I configure gevent-monkey-patch, do I need to call monkey.patch_all() in my Django application? We are using MySQL as the DB.

The uWSGI documentation mentions that with gevent enabled, Django threads can still be used - does that mean I can have "enable-threads" set?
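For reference, the gevent-related options usually appear together in the ini file roughly as in the sketch below. The values and the plugin line are illustrative assumptions (the plugin line only matters on modular builds), not a tuned recommendation:

[uwsgi]
plugins = python,gevent
; number of async cores, i.e. concurrent greenlets per worker
gevent = 100
; apply gevent monkey patching before the application is loaded
gevent-monkey-patch = true
; allow ordinary Python threads alongside gevent if the app needs them
enable-threads = true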

I am sorry for the plethora of queries; I am new to uwsgi.

I would appreciate any clarifications on the above.

Thanks,
Pinakee Biswas


Craig Bruce | 1 Apr 01:09 2016

Capturing SQL logs from Django

Hi,

 

I run my Django server with uWSGI. In my logging configuration I write to syslog, and in the uWSGI config I also send its logs to syslog. When Django DEBUG=True I would expect to see the SQL logs in syslog as well, as I do when I run manage.py runserver with the same Django settings. However, my syslog never has any SQL logs. I altered my configuration to write to the console, a file, or syslog, but I can never get the SQL logs to appear. I've searched extensively and can't find anything specific about missing SQL logs when running via uWSGI. Is this a known issue, or am I missing some configuration value?
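Not a diagnosis, but for comparison this is the kind of explicit logger entry that is sometimes needed so the django.db.backends output goes somewhere visible regardless of how the process was started (the handler address and levels are assumptions, and SQL logging still requires DEBUG=True):

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'syslog': {
            'class': 'logging.handlers.SysLogHandler',
            'address': '/dev/log',   # assumption: local syslog socket
        },
    },
    'loggers': {
        'django.db.backends': {
            'handlers': ['syslog'],
            'level': 'DEBUG',
            'propagate': False,
        },
    },
}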

 

Any advice would be greatly appreciated.

 

Cheers

Craig

Davide Setti | 29 Mar 17:19 2016

call a "streaming" LUA RPC from python

Hi,
I have a Python web service that makes an HTTP request to Elasticsearch and generates JSON with all the results. The problem is that the result set can be quite big, so we switched to ijson for streaming JSON decoding and to chunked encoding for the output.

But now it's slow, because ijson is slow: the whole thing is now CPU-bound.

One possible solution would be to write the JSON parsing in C, but I can't write proper C code anymore... :(

Another solution would be to call a Lua RPC, but we would have memory problems, because uWSGI RPC only supports strings as an exchange format - a single string, I think. Are there tricks to easily dump to a file/Unix pipe or something?
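One way around the single-string limit, if the Lua route is pursued, is to keep the RPC payload tiny by passing only a path and letting the other side read the data itself. A sketch under the assumption that a Lua RPC function named "parse_json" has been registered on the Lua side (the function name and the temp-file approach are illustrative, not tested):

import tempfile
import uwsgi  # only importable when running under uWSGI

def parse_with_lua(raw_json_bytes):
    # Write the large payload to a temporary file and pass only its path
    # through the RPC call, so the exchanged string stays small.
    with tempfile.NamedTemporaryFile(suffix='.json', delete=False) as tmp:
        tmp.write(raw_json_bytes)
        path = tmp.name
    # uwsgi.call() invokes an RPC function registered in this instance;
    # "parse_json" is a hypothetical Lua-side function. Depending on the
    # Python version, the arguments may need to be bytes.
    return uwsgi.call('parse_json', path)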

Regards.
--

Davide Setti
code: http://github.com/vad
Srikanth Bemineni | 29 Mar 03:27 2016

import uwsgi outside uwsgi server

Hi,

I am using uWSGI for a Pyramid project with the Cassandra driver from DataStax. uWSGI works fine when run as a server:

uwsgi --ini-paste development.ini

But Pyramid also has some initialization scripts which load the app before performing their actions, like setting up a database, etc.

How can I import uwsgi while running these initialization scripts?

I have been importing the uwsgi module in my app's main __init__ to deal with the forking issue Cassandra hits once the server launches:

import uwsgi 

from uwsgidecorators import *
@postfork
def connect_cassandra_client():
    CaSession.connect(['127.0.0.1'], certificate='/path/here')
    print("connection to cassandra made")


After adding the uwsgi imports, none of the initialization scripts work any more, since they can't import uwsgi, which is available only when running under a uwsgi server.

Traceback (most recent call last):
  File "/home/izero/devel/xxxxxxxxxx/xxxxxxxxxx_env/bin/initialize_yyyyyyyyy_db", line 9, in <module>
    load_entry_point('yyyyyyyyy==0.0', 'console_scripts', 'initialize_yyyyyyyyy_db')()
  File "/home/izero/devel/xxxxxxxxxx/xxxxxxxxxx_env/lib/python3.4/site-packages/pkg_resources.py", line 351, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/home/izero/devel/xxxxxxxxxx/xxxxxxxxxx_env/lib/python3.4/site-packages/pkg_resources.py", line 2363, in load_entry_point
    return ep.load()
  File "/home/izero/devel/xxxxxxxxxx/xxxxxxxxxx_env/lib/python3.4/site-packages/pkg_resources.py", line 2088, in load
    entry = __import__(self.module_name, globals(),globals(), ['__name__'])
  File "/home/izero/devel/xxxxxxxxxx/yyyyyyyyy/yyyyyyyyy/__init__.py", line 26, in <module>
    import uwsgi    
ImportError: No module named 'uwsgi'

Does anyone know how to deal with this issue?
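A common workaround (a sketch, not specific to Pyramid or this project) is to make the uwsgi import optional, so the same module can be loaded both under uWSGI and from plain command-line scripts:

# at the top of __init__.py: fall back gracefully when not under uWSGI
try:
    import uwsgi
    from uwsgidecorators import postfork
except ImportError:
    uwsgi = None

    def postfork(func):
        # outside uWSGI there is no fork to hook, so leave the function as-is
        return func

The @postfork decorator can then be applied to connect_cassandra_client() exactly as before; under uWSGI it registers the hook, and in the initialization scripts it is a no-op.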

Srikanth
Theodor-Iulian Ciobanu | 21 Mar 21:56 2016

custom routing/workers dispatching

Hello,

I'm a uWSGI newbie trying to get a Flask app with a small peculiarity
running: I want certain requests to be handled by certain workers only,
e.g. locations starting with /1 should be sent to worker #1, those
starting with /2 to the second worker, and so on, and if none of the
patterns match, fall back to a normal balancing scheme.

Searching the docs, I found multiple ways of implementing a reverse
proxy that would forward requests to different backends that, if I
understood correctly, would actually be considered separate
applications, listening on different sockets, and as such requiring
multiple instances of uWSGI (or just one running in Emperor mode, I
guess). But this sounds like a configuration nightmare, since
adding/removing workers would actually mean adding/removing single-worker
applications.

The subscription model would help with this, but it seems to cover
only domains, not locations on the virtual host. The closest I could
find was a page (that I'd link to but can't find again) that
mentioned routing based on the request URI, but taking into account
directories only, i.e. I could specify a policy for /foobar and /foobaz
to be treated differently, while what I'm actually looking for is a
way to, e.g.:
- send /foo.* to worker #1
- send /ba[rz].* to worker #2
- send /qux.* to worker #3
- send everything else to anyone of the workers

(I'm aware this could theoretically cause an uneven load on the workers,
but in my particular case it actually shouldn't)

So, is there any way to implement this in uWSGI or do I need to go the
reverse proxy way? If the latter, what would be the best approach to it?
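For what it's worth, uWSGI's internal routing can forward by URI regexp,
but only to separate addresses (i.e. separate instances or worker
groups), not to an individual worker of the same instance, so it ends up
closer to the reverse-proxy approach than to true per-worker dispatch.
A hedged sketch of that direction, with illustrative addresses, a
hypothetical "myapp" module, and the assumption that the build includes
internal routing (PCRE) and the HTTP router:

[uwsgi]
http-socket = :8080
route = ^/foo http:127.0.0.1:9001
route = ^/ba[rz] http:127.0.0.1:9002
route = ^/qux http:127.0.0.1:9003
; requests matching none of the routes are handled by this instance's
; own workers (the normal balancing case)
module = myapp:app
processes = 4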

Thank you in advance and kind regards,

--

-- 
Theo
