Salvatore Iovene | 1 Nov 12:49 2011

Run a certain task serially, everything else in parallel. Possible?

Hello,

I have a Django app with several celery tasks. One of these tasks, I'd like to always run serially (one at a time), so that I never have two of the same task running at the same time.

I could use only one worker and set CELERYD_CONCURRENCY=1, but that would make all the tasks serial.

Is there a way to do what I want (assuming I explained it well)?

Thanks!
  Salvatore Iovene.

Alexander Koval | 1 Nov 14:11 2011

Re: Run a certain task serially, everything else in parallel. Possible?

You can manually manage locks:
http://ask.github.com/celery/cookbook/tasks.html#ensuring-a-task-is-only-executed-one-at-a-time
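
Roughly, the pattern from the cookbook looks like this (a sketch using Django's cache; the task name and the do_work() body are illustrative, see the link for the full version):

    from celery.task import task
    from django.core.cache import cache

    LOCK_EXPIRE = 60 * 5  # let the lock expire in case the worker dies

    @task
    def singleton_task():
        lock_id = "singleton_task-lock"
        # cache.add() is atomic: it only succeeds if the key doesn't exist,
        # so at most one worker can hold the lock at a time.
        if cache.add(lock_id, "locked", LOCK_EXPIRE):
            try:
                do_work()  # hypothetical: the actual task body
            finally:
                cache.delete(lock_id)
        # else: another instance is already running, so skip (or retry later)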

Or create two queues (start a worker for one of them with concurrency 1)
and configure routes to send that task to the needed queue.

On Nov 1, 13:49, Salvatore Iovene <salvatore.iov...@...m> wrote:
> Hello,
> I have a Django app with several celery tasks. One of these tasks, I'd
> like to always run serially (one at a time), so that I never have two of
> the same task running at the same time.
>
> I could use only one worker and set CELERYD_CONCURRENCY=1, but that would
> make *all* the tasks serial.
>
> Is there a way to do what I want (assuming I explained it well)?
>
> Thanks!
>   Salvatore Iovene.


Ask Solem | 1 Nov 16:08 2011

Re: Can anyone help w/ this error?


On 31 Oct 2011, at 23:56, Mike wrote:

> Please, can anyone give me a clue how to fix this error? It's the only
> thing preventing me from deploying my site.  I need to find a solution
> or I'll have to go back to cron.  :(
> 
> (This task works fine if I run the function from an interactive python
> session.  As do all my other tasks.)
> 
> I've updated to all new versions of the software using Ubuntu Oneiric.  It
> hasn't helped, but I'm getting a slightly different error now:
> 
> /usr/bin/python manage_prod.py celeryd --beat --loglevel info
> 

Does it happen with celerybeat standalone?

And there is nothing in the PostgreSQL logs?

-- 
Ask Solem
twitter.com/asksol | +44 (0)7713357179

Ask Solem | 1 Nov 16:12 2011

Re: Internal error <type 'exceptions.TypeError'>: coercing to Unicode: need string or buffer, NoneType found


On 30 Oct 2011, at 16:22, Salvatore Iovene wrote:

> Hi,
> I'm running django-celery 2.3.3 and I noticed the following error when I
> restart celeryd after there were some failed tasks:
> 
> [..]

>   File "/home/astrobin/Code/astrobin/venv/lib/python2.7/site-packages/celery/concurrency/base.py", line 84, in apply_async
>     target, args, kwargs))
>   File "/home/astrobin/Code/astrobin/venv/lib/python2.7/site-packages/django/db/models/base.py", line 370, in __repr__
>     u = unicode(self)
> TypeError: coercing to Unicode: need string or buffer, NoneType found
> 
> Can anyone help?
> 
> Thanks,
>  Salvatore Iovene

I think this is a bug that is fixed in master,
could you try running that?

pip install -U https://github.com/ask/celery/tarball/master#egg=celery

master is feature-final, just awaiting more testing before being released as 2.4.0.
-- 
Ask Solem
twitter.com/asksol | +44 (0)7713357179

Ask Solem | 1 Nov 16:14 2011

Re: how to debug messages stuck in a rabbit queue


On 30 Oct 2011, at 10:48, danj wrote:

> Hi,
> 
> I'm using RabbitMQ and Celery with a few workers configured. I
> constantly monitor the number of messages in each queue.
> The prefetch is set to 1. Some of my tasks have a delayed eta - I think
> this affects the prefetch?

Yeah, the prefetch count is incremented for every eta message received,
and decremented once the message has been processed.

> 
> Occasionally, the number of unacked messages in the queue starts
> growing and seems to get stuck at a certain level (e.g. 20),
> while all the workers are idle. After restarting all the workers those
> messages go back to the queue, and then get processed as normal.
> 

What is the output of:

    $ celeryctl inspect scheduled
    $ celeryctl inspect reserved

?

> So I'm hoping to get some pointers on how to debug what is going on,
> and figure out why some of the messages seem to get stuck in the
> queue.

-- 
Ask Solem
twitter.com/asksol | +44 (0)7713357179

Ask Solem | 1 Nov 16:15 2011

Re: Remote worker gets stuck after processing a few messages


On 30 Oct 2011, at 10:20, Fed wrote:

> Hi,
> 
> I'm new to celery and may be doing something wrong, but I've already
> spent a lot of time trying to figure out how to configure celery
> correctly.
> 
> So, in my environment I have 2 remote servers; one is the main server (it
> has a public IP address and runs most of the stack: the database server,
> rabbitmq server and the web server running my web application) and the
> other is used for specific tasks which I want to invoke asynchronously
> from the main server using celery.
> 
> I was planning to use RabbitMQ as the broker and as the result backend.
> Celery config is very basic:
> 
> CELERY_IMPORTS = ("main.tasks", )
> 
> BROKER_HOST = "Public IP of my main server"
> BROKER_PORT = 5672
> BROKER_USER = "user"
> BROKER_PASSWORD = "password"
> BROKER_VHOST = "/"
> 
> CELERY_RESULT_BACKEND = "amqp"
> 
> When I'm running a worker on the main server, tasks are executed just
> fine, but when I'm running it on the remote server, only a few tasks
> are executed and then the worker gets stuck, unable to execute any
> task. When I restart the worker it executes a few more tasks and gets
> stuck again. There is nothing special inside the task and I even tried
> a test task that just adds 2 numbers. I tried to run the worker
> differently (daemonized and not, with different concurrency settings,
> and using celeryd_multi), but nothing really helped.
> 
> What could be the reason? Did I miss something? Do I have to run
> something on the main server other than the broker (RabbitMQ)? Or is
> it a bug in celery (I tried a few versions: 2.2.4, 2.3.3 and dev,
> but none of them worked)?
> 
> P.S. I've just reproduced the same problem on a local worker, so I
> don't really know what it is... Is it required to restart the celery
> worker after every N tasks executed?
> 
> Any help will be very much appreciated :)

What version of RabbitMQ is this?

-- 
Ask Solem
twitter.com/asksol | +44 (0)7713357179

Fed | 1 Nov 19:13 2011

Re: Remote worker gets stuck after processing a few messages

Ask, thank you so much for the response.

It is 1.7.2, installed from the standard package for 64-bit Ubuntu 10.04
(Lucid).

Is it too old? I just noticed that recent Ubuntu has 2.6.1.

BR,
Fedor

On Nov 1, 5:15 pm, Ask Solem <a...@...> wrote:
> On 30 Oct 2011, at 10:20, Fed wrote:
>
> > [... original message snipped ...]
>
> What version of RabbitMQ is this?


Salvatore Iovene | 2 Nov 09:24 2011

Re: Run a certain task serially, everything else in parallel. Possible?

On Tuesday, November 1, 2011 3:11:50 PM UTC+2, Alexander Koval wrote:
> You can manually manage locks:
> http://ask.github.com/celery/cookbook/tasks.html#ensuring-a-task-is-only-executed-one-at-a-time

Thanks, that did it for me.

  Salvatore. 

Fed | 2 Nov 13:32 2011

Re: Remote worker gets stuck after processing a few messages

I upgraded RabbitMQ to 2.6.1, but the problem remained.

On Nov 1, 5:15 pm, Ask Solem <a...@...> wrote:
> On 30 Oct 2011, at 10:20, Fed wrote:
>
> > [... original message snipped ...]
>
> What version of RabbitMQ is this?


Salvatore Iovene | 2 Nov 17:11 2011

Re: Run a certain task serially, everything else in parallel. Possible?

Hi again,

On Tuesday, November 1, 2011 3:11:50 PM UTC+2, Alexander Koval wrote:
> You can manually manage locks:
> http://ask.github.com/celery/cookbook/tasks.html#ensuring-a-task-is-only-executed-one-at-a-time

I have found that this doesn't really work for me. The reason is that I have some really fast tasks and one task that is always very time-consuming. I have modified the code you linked so that there is only one lock, and when a task starts it sleep()s until it is able to acquire the lock.

I was satisfied with that until I realized that I can have the following situation:

Worker 1: really time consuming task
Worker 2: really time consuming task, sleeping waiting to acquire lock
Worker 3: really time consuming task, sleeping waiting to acquire lock
Worker 4: really time consuming task, sleeping waiting to acquire lock
 
From the point of view of celery, those four tasks are all running: celery doesn't know that workers 2, 3 and 4 are just calling time.sleep(5) repeatedly until the lock is free. The problem with this solution is that all my fast-running tasks are starved.
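
A minimal sketch of that blocking variant (the lock id, timeout and polling interval here are illustrative, not my actual code):

    import time
    from celery.task import task
    from django.core.cache import cache

    @task
    def solve_image(image_id):
        lock_id = "solve_image-lock"
        # Busy-wait until the single lock is free. cache.add() is atomic:
        # it fails if the key already exists. The worker slot stays occupied
        # the whole time we sleep, which is what starves the fast tasks.
        while not cache.add(lock_id, "locked", 60 * 60):
            time.sleep(5)
        try:
            do_solve(image_id)  # hypothetical stand-in for the slow work
        finally:
            cache.delete(lock_id)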

So I need something else.

> Or create two queues (start a worker for one of them with concurrency 1)
> and configure routes to send that task to the needed queue.

I have tried this, but I'm having some trouble with the configuration. I have added the following to my Django settings:

CELERY_QUEUES = {
    "default": {"exchange": "default", "binding_key": "default"},
    "plate_solve": {"exchange": "plate_solve", "binding_key": "plate_solve"},
}
CELERY_DEFAULT_QUEUE = "default"
CELERY_ROUTES = {
    "astrobin.tasks.solve_image": {"queue": "plate_solve",
                                   "routing_key": "solve_image"},
}

The two queues are created, as I can read from the log, but the task astrobin.tasks.solve_image, destined for the plate_solve queue, is never executed.

I have tried changing the "routing_key" to "plate_solve" (same value as the binding_key) and all the tasks worked that way, but I was back at the problem above: the fast tasks were being starved by the slow ones.
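
(For reference: with celery's default direct-exchange routing, a task is only delivered when its routing key matches a queue's binding key, which would explain why the task only ran once both were "plate_solve". A sketch of that config, with one worker per queue so the slow queue can't starve the fast one; the worker names and concurrency are illustrative:)

    # settings.py (sketch): the routing key matches the queue's binding key
    CELERY_QUEUES = {
        "default": {"exchange": "default", "binding_key": "default"},
        "plate_solve": {"exchange": "plate_solve",
                        "binding_key": "plate_solve"},
    }
    CELERY_DEFAULT_QUEUE = "default"
    CELERY_ROUTES = {
        "astrobin.tasks.solve_image": {"queue": "plate_solve",
                                       "routing_key": "plate_solve"},
    }

    # Then start one worker per queue:
    #   python manage.py celeryd -Q default -n fast_worker
    #   python manage.py celeryd -Q plate_solve -c 1 -n slow_worker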

Can you tell me where my config is wrong, and what else I might be doing wrong?

Thanks!
  Salvatore.

