Roberto De Ioris | 26 Aug 17:20 2014

[ANNOUNCE] datadog plugin


https://github.com/unbit/uwsgi-datadog

It currently supports the metrics subsystem; expect alarm integration
pretty soon.

--
Roberto De Ioris
http://unbit.it
Francois Gaudin | 19 Aug 02:35 2014

When the backlog is full, is uwsgi supposed to accept connections?

Hi,

We ran into an issue a few weeks ago, and I don't know whether it's expected behavior or not.

Short version: uwsgi still accepts connections even when the backlog is full. Is that normal? If it is, is there a way to refuse connections once the backlog is full?

Long version: we have several instances on Heroku serving our python app with uwsgi. Their load-balancer has the following routing algorithm:

  1. Accept a new request for the app
  2. Look up the list of web dynos (instance name on Heroku) for the app
  3. Randomly select a dyno from that list
  4. Attempt to open a connection to that dyno's IP and port
  5. If the connection was successful, proxy the request to the dyno, and proxy the response back to the client
  6. If it takes more than 30 seconds, the request is killed.

If a WSGI worker gets stale, the backlog quickly fills up, and uwsgi should then stop accepting connections so the router knows to route requests to another dyno. The problem is that it doesn't, and all traffic going to a stale dyno/worker eventually gets killed.

I've managed to reproduce the behavior locally.
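
For reference, a minimal sketch of the settings involved (addresses and values are examples only, not our real config):

[uwsgi]
socket = 0.0.0.0:3031
processes = 4
# listen sets the kernel backlog for the socket (uWSGI's default is 100);
# it is capped by the kernel's net.core.somaxconn
listen = 100
# note: on Linux, when the accept queue is full the kernel normally drops new
# SYNs silently (the client just retries) rather than refusing the connection,
# unless net.ipv4.tcp_abort_on_overflow is set to 1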

Thanks

--
Francois
André Cruz | 18 Aug 18:16 2014

uWSGI as SSL terminator

Hello.

I'm trying to use uWSGI as an SSL terminator, but I can only get it to speak the uwsgi protocol to the
backend servers. Am I missing some option here? I would like to forward the HTTP request itself; I'm using the http-to option.

Also, I would like to include an X-Forwarded-For header when passing the request to backend servers. Is
this possible at the moment?
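
For context, this is roughly the kind of setup I have in mind (the cert paths and addresses below are placeholders, not my real config):

[uwsgi]
# terminate SSL in uWSGI's http router and forward plain HTTP to a backend
https = 0.0.0.0:443,server.crt,server.key
http-to = 127.0.0.1:8080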

Thank you and best regards,
André
mlus | 16 Aug 16:49 2014

unix domain socket problem

Hello, I'm a uwsgi newbie.

uwsgi --version: 1.9.17.1
nginx --version: 1.4.7-3.9

I want to use uwsgi over a unix domain socket with nginx.

This is what I tried:

== nginx.conf  ==

http {
  include       mime.types;
  default_type  application/octet-stream;
  sendfile        on;
  keepalive_timeout  65;
  server {
      listen       80;
      server_name  localhost;
#        location / {
#            root   /srv/www/htdocs/;
#            index  index.html index.htm;
#        }
      location / {
        root   /srv/www/htdocs/;
        index  index.html index.htm;
        include uwsgi_params;
        uwsgi_pass unix:///tmp/aaa.sock;
       #uwsgi_pass 127.0.0.1:3031;
      }
  }
}
==========================

== uwsgi.ini ==
[uwsgi]
plugins = python
#socket=127.0.0.1:3031
socket=/tmp/aaa.sock
wsgi-file = /home/aaa/uwsgistatus.py
processes = 10
uid = nginx
gid = nginx
master = true
===========================

Connecting to "http://localhost" with a browser raises an error:
---- /var/log/nginx/error.log  -----
connect() to unix:///tmp/aaa.sock failed (2: No such file or
directory) while connecting to upstream, client: 127.0.0.1, server:
localhost, request: "GET / HTTP/1.1", upstream:
"uwsgi://unix:///tmp/aaa.sock:", host: "localhost"
------------------------------------------

With socket = 127.0.0.1:3031 there is never an error.

How can I use a unix domain socket?
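
For completeness, here is the kind of socket setup I was expecting to end up with (the path and permissions are just an example, not what I currently have):

[uwsgi]
plugins = python
# put the socket somewhere both uwsgi and nginx can reach, and make it
# writable by the nginx user
socket = /run/uwsgi/aaa.sock
chmod-socket = 664
chown-socket = nginx:nginx
master = true

with the matching nginx directive uwsgi_pass unix:/run/uwsgi/aaa.sock; (a single slash after "unix:" is the documented form).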

Thank you
Igor Katson | 11 Aug 20:50 2014

Reload on fixed syntax errors in development

Hi,

With py-autoreload enabled, if uwsgi encounters a syntax error or some other import-time failure, it stops trying to reload and has to be relaunched.

Is there any workaround for that?
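
For reference, a sketch of the kind of development config I mean (the interval and the trigger-file path are just examples):

[uwsgi]
# rescan python modules every 2 seconds and reload on changes
py-autoreload = 2
# an explicit trigger file is another option I have been considering:
#touch-reload = /tmp/reload-me
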
Alfredo Benés | 11 Aug 14:23 2014

Question about balancing the load across all CPUs

We have a Django application with a heavy workload.

Currently we have a system with 3 servers, each with 8 cores.

Current deployment:
- Django is running through Twisted.
- We have launched 16 Twisted processes per server.
- We use NGINX as a load balancer across all 48 processes.
- Our application is a REST API; we don't use Django for the user interface.

This way we can distribute the workload across all processors, and I can confirm it works well when activity increases: I see all processors busy and %CPU growing. This solution works on Linux and Windows (swapping NGINX for Apache for load balancing).

I would like to do the same with uwsgi, but I cannot get uwsgi to use more than one core. I have tried running uwsgi with 8 and 16 processes, and I see activity on only one processor. I have tried cpu-affinity, processes, workers, ... but I never manage to use more than one CPU.

Our processes do intensive Python computation that must run synchronously.

I would like to know whether uWSGI can manage multiple processes and distribute the workload across all of the system's CPUs.
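
For reference, this is roughly what I have been trying (addresses and counts are examples only):

[uwsgi]
master = true
# 16 workers, with no cpu-affinity set so the kernel is free to schedule
# them across all 8 cores
processes = 16
socket = 127.0.0.1:3031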

Thanks in advance

Jeff Potter | 9 Aug 23:15 2014

graceful shutdown?


Hi Uwsgi List,

Is there a way to gracefully stop uwsgi? As in: stop accepting new requests, but finish any existing
in-flight requests.

I see an option for a graceful reload, but that won't work for our situation. Both SIGTERM and SIGINT are
instant stops; any in-flight requests get a 503 response (at least, that's what I'm seeing in our setup).
Running another process in front of uwsgi is not a good solution for us either; we already have Varnish
talking to our backend nodes.
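
One thing I have been looking at (not tested yet, so treat it as a sketch) is the master FIFO; as I understand the docs, writing the character 'q' to it asks the master to shut the instance down gracefully:

[uwsgi]
master = true
# expose a control FIFO; the path is a placeholder
master-fifo = /tmp/uwsgi-master.fifo

# then, from a shell: echo q > /tmp/uwsgi-master.fifo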

Thanks!

-Jeff
Desktop Ready | 6 Aug 23:08 2014

First request slow after idle time

Hello,

I am building my website with the following approach:
Nginx -> uWSGI -> Django -> PostgreSQL

Everything works well, but I have one odd problem: it seems that the
first request after a long idle time (more than 10-20 minutes) is always very slow
(the page loads in 30 seconds instead of 1 second).

Strangely, it doesn't happen if I make a request, restart uWSGI and make
another request, so it is not caused by Django's lazy loading/imports.

After some debugging I have ruled out Nginx and PostgreSQL, so the
problem lies somewhere in between.

Perhaps some overly aggressive memory freeing? I'm on Debian stable with
uWSGI 1.2.3, running on a VPS (OpenVZ).

Can someone suggest a way to identify the problem and solve it?

Here are my uWSGI params:

uwsgi_param  QUERY_STRING       $query_string;
uwsgi_param  REQUEST_METHOD     $request_method;
uwsgi_param  CONTENT_TYPE       $content_type;
uwsgi_param  CONTENT_LENGTH     $content_length;

uwsgi_param  REQUEST_URI        $request_uri;
uwsgi_param  PATH_INFO          $document_uri;
uwsgi_param  DOCUMENT_ROOT      $document_root;
uwsgi_param  SERVER_PROTOCOL    $server_protocol;
uwsgi_param  HTTPS              $https if_not_empty;

uwsgi_param  REMOTE_ADDR        $remote_addr;
uwsgi_param  REMOTE_PORT        $remote_port;
uwsgi_param  SERVER_PORT        $server_port;
uwsgi_param  SERVER_NAME        $server_name;

And here is uWSGI configuration:

[uwsgi]

chdir = /home/user/website
module = website.wsgi
home = /home/user/.virtualenvs/website
plugin          = python
master          = true
processes       = 4
cpu-affinity    = 1
socket          = /var/uwsgi/website.sock
chmod-socket    = 664
vacuum          = true
env = HTTPS=on
post-buffering  = 4096
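
One thing I am considering, to test the "idle" theory, is having uWSGI ping itself periodically via its cron facility; a rough sketch (the curl command and interval are just an example):

[uwsgi]
# run every minute (-1 means "any" for all five time fields)
cron = -1 -1 -1 -1 -1 curl -s -o /dev/null http://localhost/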

Any hints are welcome!

Thanks
Mikko Ohtamaa | 5 Aug 10:25 2014

Rationale of choosing uWSGI + Nginx over plain uWSGI?

Hi,

With the current feature set of uWSGI, what could be the reasons to use Nginx + uWSGI combo over standalone uWSGI?

Why I am asking this: most of the tutorials I find assume you want to set up a uWSGI + Nginx stack (without explaining why). The current incarnation of uWSGI can do both SSL and static file serving just fine. It even produces a compatible access.log (with some tinkering). I assume these tutorials were written back when uWSGI did not have as many features as it does today.
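
As an illustration, the kind of standalone setup I have in mind is roughly this (module, paths and addresses are placeholders):

[uwsgi]
module = mysite.wsgi
master = true
processes = 4
http = 0.0.0.0:80
https = 0.0.0.0:443,server.crt,server.key
static-map = /static=/srv/mysite/static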

If you want to keep your stack simple (as we all do), would you recommend setting up standalone uWSGI unless there is some specific reason to go for Nginx?

Here is my pet project (SSL, static files, etc.) running on plain uWSGI directly on ports 80 and 443. I have not encountered any problems yet, but I'll let you know if I hit something:

https://github.com/miohtama/LibertyMusicStore

Cheers from Finland,
Mikko
laada | 30 Jul 16:43 2014

harakiri and spooler-harakiri issue

Hello,
I have set up harakiri and spooler-harakiri in my uwsgi config:

harakiri = 5
spooler-harakiri = 600

But tasks running in the spooler are never allowed to run past the harakiri limit (5).

Is it possible to avoid this behavior somehow?
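
For context, the relevant part of the config looks roughly like this (the spooler directory is a placeholder):

[uwsgi]
master = true
spooler = /var/spool/uwsgi
# request timeout for the web workers
harakiri = 5
# timeout I expected to apply to spooler tasks instead
spooler-harakiri = 600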

All the best
thnx
laada
Andrew Knapp | 22 Jul 00:54 2014

Hung worker processes after restarting under emperor mode

We are having a problem when restarting our app, which runs under emperor mode. Sometimes, when we reload the config (an ini file), one or two workers will not die; they start consuming 100% of a CPU and then die off ~60 seconds later. This happens sporadically no matter how many workers we spawn.

We are running Django under uwsgi (version 2.0.5) on Ubuntu 14.04 on Amazon EC2.

Configs, logs and strace output (for one of the workers that hung) are below. Has anyone seen or experienced this problem before? My assumption is that the 60-second delay is the harakiri timeout, though I'm not 100% sure of that.

Here's the emperor log when a worker was hung:
Mon Jul 21 22:39:41 2014 - [emperor] reload the uwsgi instance <app>
Mon Jul 21 22:40:44 2014 - [emperor] vassal <app> is ready to accept requests

Here's our app ini config (some info removed, but every option from the config is listed):
[uwsgi]
uid = <uid>
gid = <gid>
socket = 127.0.0.1:<port>
listen = 16384
workers = 4
threads = 2
thunder-lock = true
max-requests = 20000
harakiri = 60
harakiri-verbose = true
master = true
single-interpreter = true
virtualenv = <virtualenv>
pythonpath = <pythonpath>
env = DJANGO_SETTINGS_MODULE=<module>
module = <wsgi_file>
pidfile2 = <pidfile>
logto2 = <logfile>
logfile-chmod = 644
stats = 127.0.0.1:<stats_port>
post-buffering = 65536
buffer-size = 32768
disable-logging = true
chdir = <dir>

I was able to get an strace of one of the hung workers, and this is what I got (starting from when it received the signal to reload):
close(4)                                = 0
futex(0x7f3a15c37000, FUTEX_LOCK_PI, 1) = ? ERESTARTNOINTR (To be restarted)
--- SIGHUP {si_signo=SIGHUP, si_code=SI_USER, si_pid=1660, si_uid=601} ---
write(2, "Gracefully killing worker 6 (pid"..., 44) = -1 EPIPE (Broken pipe)
open("/usr/lib/libgcc_s.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 4
fstat(4, {st_mode=S_IFREG|0644, st_size=46184, ...}) = 0
mmap(NULL, 46184, PROT_READ, MAP_PRIVATE, 4, 0) = 0x7f3a15b3d000
close(4)                                = 0
access("/etc/ld.so.nohwcap", F_OK)      = -1 ENOENT (No such file or directory)
open("/lib/x86_64-linux-gnu/libgcc_s.so.1", O_RDONLY|O_CLOEXEC) = 4
read(4, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\260*\0\0\0\0\0\0"..., 832) = 832
fstat(4, {st_mode=S_IFREG|0644, st_size=90080, ...}) = 0
mmap(NULL, 2185952, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 4, 0) = 0x7f3a09947000
mprotect(0x7f3a0995d000, 2093056, PROT_NONE) = 0
mmap(0x7f3a09b5c000, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 4, 0x15000) = 0x7f3a09b5c000
close(4)                                = 0
munmap(0x7f3a15b3d000, 46184)           = 0
tgkill(16665, 16668, SIGRTMIN)          = 0
rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x7f3a15826340}, {0x460790, [], SA_RESTORER, 0x7f3a15826340}, 8) = 0
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3a09907000
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3a098c7000
munmap(0x7f3a098c7000, 262144)          = 0
[... the same 262144-byte mmap/munmap pair repeats about 34 more times ...]
mmap(NULL, 262144, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f3a098c7000
+++ killed by SIGKILL +++

Any help would be appreciated. If anyone wants any other info, just let me know and I'll supply it.

Thanks,
Andy

