Sugita Shinsuke | 24 Jan 09:46 2015

How to run celery command in background?

Hi there

I'd like to run Celery in the background.

But celery doesn't seem to have an option for that, so I run it like this:
---
nohup celery -A apps worker -l info &
---

Is this the best way?

Thank you.
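For what it's worth, `nohup` does work, but Celery also ships a detaching helper, `celery multi`, and the docs recommend a proper init system (systemd, supervisord) for production. A sketch, where the pid/log paths are only examples:

```shell
# Start a detached worker named w1 (%n expands to the node name).
celery multi start w1 -A apps -l info \
    --pidfile=/var/run/celery/%n.pid \
    --logfile=/var/log/celery/%n.log

# Stop it again, waiting for running tasks to finish:
celery multi stopwait w1 --pidfile=/var/run/celery/%n.pid
```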

--
You received this message because you are subscribed to the Google Groups "celery-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to celery-users+unsubscribe-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org.
To post to this group, send email to celery-users-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org.
Visit this group at http://groups.google.com/group/celery-users.
For more options, visit https://groups.google.com/d/optout.
Anna Kostikova | 22 Jan 21:25 2015

Celery fails for 3+ simultaneous tasks

We have celery 3.1.17 (Cipater) installed on our server. It is running with the following settings:
celery --workdir=/var/www/project -A tasks worker --autoscale=10,2 --autoreload --pool=eventlet
--logfile=/var/log/celery.log --concurrency=4

The problem is that if we simultaneously run 3 or more tasks, then Celery returns an error:
[2015-01-11 16:52:35,623: ERROR/MainProcess] Unrecoverable error: AttributeError("'TaskPool'
object has no attribute 'grow'",)
How can we fix it?

Thanks a lot,
Anna
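The `'TaskPool' object has no attribute 'grow'` error suggests that `--autoscale` is being applied to a pool that cannot resize: autoscaling is implemented for the prefork pool, and the eventlet `TaskPool` apparently has no `grow` method. Note also that `--autoscale=10,2` and `--concurrency=4` conflict with each other. A sketch of the command using one option or the other (paths as in the original):

```shell
# Either: eventlet pool with a fixed number of greenlets
celery --workdir=/var/www/project -A tasks worker \
    --pool=eventlet --concurrency=100 \
    --logfile=/var/log/celery.log

# Or: default prefork pool with autoscaling (drop --pool=eventlet)
celery --workdir=/var/www/project -A tasks worker \
    --autoscale=10,2 --logfile=/var/log/celery.log
```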

Greg Svitak | 22 Jan 18:24 2015

How to share connection variable(s) across workers

Hello,

I am trying to optimize the time it takes to connect to my backend stores (rackspace files and postgres) when executing a celery task. I would like to preload the connections at the time the workers boot up and share the connection objects across all tasks. 

What is the best method of achieving this goal?

Thanks,
Greg

Lauris | 21 Jan 15:36 2015

Task stops retrying after random number of tries

Hi all,

  As the subject says, tasks that used to retry until they reached the 
defined "max_retries" count now sometimes stop doing so after a random 
number of tries :/. Sometimes they stop retrying after a couple of 
hundred tries, sometimes after just a few.

What I noticed is that if I restart the "celery beat" process, then 
after a few minutes some tasks that were quiet for hours start retrying 
again as they should.

Can't pinpoint precisely when it started happening, but it might be 
after upgrade of Celery, RabbitMQ or Django.

Does anybody have an idea why this is happening?

I'm running:
   Django: 1.7.3
   RabbitMQ: 3.4.2
   celery:3.1.17
   kombu:3.0.24
   billiard:3.3.0.19
   python:2.7.3
   py-amqp:1.4.6

Celery settings:
   CELERY_ACKS_LATE            = True
   CELERY_SEND_EVENTS          = True
   CELERY_TRACK_STARTED        = True
   CELERY_DISABLE_RATE_LIMITS  = True
   CELERYD_PREFETCH_MULTIPLIER = 1
   CELERY_SEND_TASK_SENT_EVENT = True

Task code looks something like this:

class ABCTask(AbortableTask):
     ignore_result = False
     max_retries = 288*5

     def run(self):
         try:
             [...]
         except NoAvailableDevices as e:
             try:
                 self.retry(exc=e)
             except MaxRetriesExceededError as e:
                 [...]

Thanks,
Lauris

Sugita Shinsuke | 20 Jan 12:32 2015

How to check a background process

Hi there

I started a background process via Celery.
I want to check which processes are currently running.
Celery only tells me whether the executed program succeeded or not,
but I can't check whether the process is still running.

I would appreciate it if you would give me some advice.
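To see what a worker is currently executing you can use the remote-control inspect interface, either from Python (`app.control.inspect().active()`) or from the CLI. A sketch, assuming the app module is called `apps`:

```shell
# List tasks currently being executed by each worker
celery -A apps inspect active

# List tasks received by workers but not yet started
celery -A apps inspect reserved
```

Note this reports Celery tasks only: if the task itself spawns an OS subprocess, Celery does not track that child, so you would have to record its PID yourself (e.g. store `proc.pid` somewhere) and check it later.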

Philippe David | 19 Jan 10:30 2015

Questions on chains

Hi,
I'm trying to solve this problem using Celery: my backend has to make multiple calls to an API and respect per-user rate limits. To respect the limit, I chain all tasks for a user to prevent multiple parallel calls.
* So I create all the tasks, build a chain from them, and if one of them reaches the limit it calls retry with the delay the API asks me to wait.
* Second point: each task may discover other tasks to do. To respect the limit properly, I would like to add these new tasks to the chain, right after the current task or at the end.

Can someone tell me:
* whether calling retry works as expected in a chain?
* whether adding a task is possible while the chain is running?
* the overall chain may take a long time; what happens if the worker stops in the middle? I guess this is related to how callbacks are implemented. I am using RabbitMQ as the broker.

Thanks,
Philippe

Ami | 16 Jan 12:17 2015

breaking tasks into several projects

Dear Community,

I've tried separating Celery tasks into a few Python files, treating each of them as a separate (logical) project. However, it seems that when I run one project it runs the others as well. I'm confused about how the Celery worker start-up process works, and I was not able to find details in the documentation. I would much appreciate being pointed to the right documentation (and the best practices people use).

Thank you so much,

A
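A likely cause is that the worker imports every task module its app is configured with, so if both "projects" share one app (or consume the same queue), one worker picks up both. One way to keep them separate is to give each project its own Celery app, its own `include` list, and its own queue. A sketch with hypothetical module and queue names, using the newer lowercase settings:

```python
# project_a/celery_app.py  (module and queue names are hypothetical)
from celery import Celery

app = Celery(
    "project_a",
    broker="redis://localhost:6379/0",
    include=["project_a.tasks"],   # only this project's task modules
)
# Route this project's tasks to a dedicated queue.
app.conf.task_routes = {"project_a.tasks.*": {"queue": "project_a"}}
```

Then start one worker per project, consuming only that project's queue: `celery -A project_a.celery_app worker -Q project_a`.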

Robert Eveleigh | 15 Jan 18:02 2015

Understanding error output

Hi,

I'm having trouble pinpointing this error. Could you please shed some light on it? Thanks in advance.

 -------------- celery@insilico-vb v3.1.17 (Cipater)
---- **** -----
--- * ***  * -- Linux-3.13.0-44-generic-x86_64-with-Ubuntu-14.04-trusty
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         kvasir:0x7fe645c1c5d0
- ** ---------- .> transport:   redis://localhost:6379/0
- ** ---------- .> results:     redis
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ----
--- ***** ----- [queues]
 -------------- .> celery           exchange=celery(direct) key=celery
            

[2015-01-15 11:48:42,880: WARNING/MainProcess] celery@insilico-vb ready.
[2015-01-15 11:49:37,550: ERROR/Worker-1] Process Worker-1
Traceback (most recent call last):
  File "/home/insilico/Downloads/kvasir/alpha/local/lib/python2.7/site-packages/billiard/process.py", line 292, in _bootstrap
    self.run()
  File "/home/insilico/Downloads/kvasir/alpha/local/lib/python2.7/site-packages/billiard/pool.py", line 294, in run
    self._do_exit(pid, _exitcode[0], None)
  File "/home/insilico/Downloads/kvasir/alpha/local/lib/python2.7/site-packages/billiard/pool.py", line 308, in _do_exit
    os._exit(exitcode)
TypeError: an integer is required
[2015-01-15 11:49:37,887: ERROR/MainProcess] Process 'Worker-1' pid:11013 exited with 'exitcode 1'
[2015-01-15 11:49:37,900: ERROR/MainProcess] Task kvasir.kmethods.run_gemini_query[f20402a1-20c3-4d42-8029-04662a0af5fd] raised unexpected: WorkerLostError('Worker exited prematurely: exitcode 1.',)
Traceback (most recent call last):
  File "/home/insilico/Downloads/kvasir/alpha/local/lib/python2.7/site-packages/billiard/pool.py", line 1169, in mark_as_worker_lost
    human_status(exitcode)),
WorkerLostError: Worker exited prematurely: exitcode 1.

Sugita Shinsuke | 10 Jan 07:52 2015

How to get return value from shared_task?

Hi there

I started to use Celery in a Django 1.6 / Python 2.7 project.

I call a shared_task from my view:

import subprocess

@shared_task
def my_process():
    cmd = "python hoge.py"
    proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

hoge.py is a long-running background program.

I want to know whether its result is a failure or not,
but if I add this:

 ret = proc.wait()

This program runs forever....

And I also tried this:

(out, err) = proc.communicate()

This program runs forever too.

When you use Celery, how do you get the result value from the program?

Thank you
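`Popen.wait()` and `communicate()` both block until the child process exits, so if hoge.py is designed to run forever, the task will block forever too: to get a return code, the child has to terminate. A minimal sketch of the blocking pattern, using `echo` as a stand-in for a script that does finish:

```python
import subprocess

def run_script(cmd):
    """Run a shell command, wait for it to exit, return (rc, output)."""
    proc = subprocess.Popen(
        cmd, shell=True,
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
    )
    out, _ = proc.communicate()  # blocks until the child exits
    return proc.returncode, out

rc, out = run_script("echo hello")  # rc is 0 on success
```

Inside a @shared_task you can simply `return proc.returncode`, and the caller reads it from the task's AsyncResult. If the child must keep running forever, don't wait on it; record `proc.pid` instead and check on it later.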

Chris Withers | 7 Jan 18:27 2015

monitoring queue lengths from python

Hi All,

I know about these docs:

http://docs.celeryproject.org/en/2.2/userguide/monitoring.html#inspecting-queues

...but I'm surprised that there aren't any Python APIs to get:

- the queues available
- the length of those queues

Surely the worker must need to know these?
Even if not, a Python API that abstracts away from the implementation would be really useful, something like:

for queue in app.configured_queues():
    print queue.name, queue.length()

How would I go about doing that?

cheers,

Chris
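Part of the answer is that the worker genuinely doesn't know queue lengths; it only sees the messages the broker delivers to it, and the length lives in the broker. With the Redis broker, queues are plain Redis lists, so the length is one command away; with RabbitMQ you'd ask the management API or `rabbitmqctl` instead. A sketch, assuming the default queue name:

```shell
# Redis broker: length of the default "celery" queue on DB 0
redis-cli -n 0 llen celery

# RabbitMQ broker: message counts per queue
rabbitmqctl list_queues name messages
```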

miticojo | 6 Jan 19:30 2015

Handle expired task

Hi All,
I've written my task (as a class) and I'm managing backend results through my own custom flow with the ORM... My only remaining problem is catching expired jobs and updating the related DB object's state to "EXPIRED".

Could you help me?
I've already tried "after_return" and "on_failure" but didn't find a solution for this.

Thanks
Jo

