Rob Linton | 26 Aug 18:36 2014

Sequential execution celery / redis

Hi,

I have a problem where I want all tasks pertaining to a certain ID to execute sequentially, i.e., never more than one executing at once. There are multiple IDs, multiple tasks, and multiple celery workers.

Right now to accomplish this, I'm creating a queue named for the ID, assigning the queue to a consumer (with add_consumer), assigning any tasks associated with the ID to the appropriate queue, then using cancel_consumer once the processing tasks have finished.

So... first off, is there a better way to accomplish this?  If so we can cut the rest of this message short.  :)

If this approach is reasonable... I'm having a problem where my queue IDs are accumulating in Redis. It feels like the queues are never removed. That makes sense to me, since add_consumer would reasonably create a queue if it didn't exist, and cancel_consumer *wouldn't* up and delete the queue just because it happens to have no more consumers. It's not an urgent problem, since Redis will store a huge number of keys... but it feels like a bad result that over time will bite me with lower performance and eventually crashes. In addition, these queues are preserved between celeryd restarts, so they really never go away.
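One alternative worth considering: instead of a dynamically created queue per ID, keep a per-ID lock in Redis itself (`SET key value NX EX ttl`) and have a worker that can't take the lock hand the task back with a delay. Expiring locks sidestep the accumulation problem, since nothing per-ID outlives the TTL. Below is a minimal sketch of the pattern with an in-memory dict standing in for Redis; `LockTable`, `run_serialized`, and `requeue` are illustrative names, not Celery or redis-py API:

```python
import time


class LockTable:
    """In-memory stand-in for Redis ``SET key value NX EX ttl``.

    In a real deployment you'd call redis-py's
    ``client.set(name, token, nx=True, ex=ttl)`` instead.
    """

    def __init__(self):
        self._locks = {}  # lock_id -> expiry timestamp

    def acquire(self, lock_id, ttl=60):
        """Take the per-ID lock; return False if another holder has it."""
        now = time.time()
        expiry = self._locks.get(lock_id)
        if expiry is not None and expiry > now:
            return False  # someone else is processing this ID
        self._locks[lock_id] = now + ttl
        return True

    def release(self, lock_id):
        self._locks.pop(lock_id, None)


def run_serialized(locks, lock_id, work, requeue):
    """Run ``work`` only if no other task for ``lock_id`` is running;
    otherwise hand the task back (e.g. retry with a countdown)."""
    if not locks.acquire(lock_id):
        requeue()
        return None
    try:
        return work()
    finally:
        locks.release(lock_id)
```

In a Celery task this maps naturally onto a `bind=True` task that calls `self.retry(countdown=...)` when the lock is busy, so the message simply comes around again once the current task for that ID has finished.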

It should also be noted (if not already clear) that I'm pretty new to Celery and may well be overlooking an entirely obvious solution.

Thanks very much for any advice you may have!

Rob


 

--
You received this message because you are subscribed to the Google Groups "celery-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to celery-users+unsubscribe-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org.
To post to this group, send email to celery-users-/JYPxA39Uh5TLH3MbocFFw@public.gmane.org.
Visit this group at http://groups.google.com/group/celery-users.
For more options, visit https://groups.google.com/d/optout.
Miki Tebeka | 26 Aug 12:31 2014

Parent id in signal?

Greetings,

I remember someone (Ask?) saying there's work on including parent id in the signals. This way users will be able to construct the call graph themselves (for canvas items).

Do I remember right? If so, what is the status of this work?

Thanks,
--
Miki

tapan pandita | 25 Aug 23:23 2014

Celery becomes unresponsive

Hi,

We've been running celery workers on heroku using redis as a queue. Over the last few weeks, a couple of times celery has become unresponsive. It stops processing any of the jobs on the queue. When we restart the celery dyno on heroku, it starts executing the (backlog) tasks from the queue again and becomes responsive. We use the following command to start celery:
celery worker --app=app_name --pool=prefork --autoscale=16,8 --workdir=dirname

And these are the config values that we've set in celeryconfig.py:
BROKER_URL = 'redis://user:pass@host:port'
CELERY_RESULT_BACKEND = 'redis://user:pass@host:port'
CELERY_TASK_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_REDIS_MAX_CONNECTIONS = 6
CELERY_SEND_TASK_ERROR_EMAILS = True
CELERYD_MAX_TASKS_PER_CHILD = 50
CELERY_TIMEZONE = 'America/Los_Angeles'
CELERY_SEND_EVENTS = True
CELERY_SEND_TASK_SENT_EVENT = True


Any ideas why this could be happening? Thanks for the help!

Andres Riancho | 25 Aug 23:09 2014

Rejecting / re-queuing tasks

List,

    I have a special case in which I would like to reject a task and
re-queue it so that another worker can handle it. I've read about the
Reject exception [0], but Task.acks_late doesn't fit my
architecture.

    Is there any way of doing this? What I expect is for the task code
to be able to do something like:

def some_task(param1, param2):
    if check_internal_state():
        raise RejectRequeue

    process(param1, param2)

    Where Celery would put the task back in the queue and set it to a
state where another worker can read it.

[0] http://celery.readthedocs.org/en/latest/userguide/tasks.html#reject
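If acks_late is off the table, a bound task that retries itself gives much the same "put it back for another worker" effect: `raise self.retry(countdown=...)` re-publishes the message without counting it as a failure. Here is a broker-free sketch of that control flow; `work`, `should_accept`, and `requeue` are placeholder callables, not Celery API:

```python
def run_or_requeue(work, should_accept, requeue):
    """Run ``work`` if this worker's internal state allows it; otherwise
    hand the message back so another consumer can take it.

    In Celery terms the ``requeue()`` branch corresponds to
    ``raise self.retry(countdown=...)`` in a ``bind=True`` task, or to
    ``raise Reject(requeue=True)`` when acks_late is acceptable.
    """
    if not should_accept():
        requeue()
        return None
    return work()
```

The retry route keeps default (early) acknowledgement semantics, at the cost of the task going to the back of the queue rather than being offered immediately to the next consumer.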

Regards,
-- 
Andrés Riancho
Project Leader at w3af - http://w3af.org/
Web Application Attack and Audit Framework
Twitter: @w3af
GPG: 0x93C344F3


xeon Mailinglist | 25 Aug 17:35 2014

set a timeout in celery requests

I want to set a time limit on a request so that, if the queue is down, the client doesn't have to wait a long time. I tried with time_limit and soft_time_limit:

task = clusterWorking.apply_async(queue=q, soft_time_limit=2, time_limit=5)


@task(name='manager.pingdaemon.clusterWorking')
def clusterWorking():
    return "up"


How do I set a timeout on how long a request waits for a response before it fails?
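Note that `time_limit`/`soft_time_limit` bound how long the task may *run on the worker*; they don't bound how long the *client waits*. For the client side, `AsyncResult.get(timeout=...)` raises a timeout error when no result arrives in time. The polling idea behind it looks roughly like this pure-Python sketch, where `poll` is a placeholder for "ask the result backend":

```python
import time


def wait_for_result(poll, timeout=5.0, interval=0.1):
    """Poll for a result until ``timeout`` seconds pass, then give up.

    ``poll`` returns the result, or None while it is not ready yet.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = poll()
        if result is not None:
            return result
        time.sleep(interval)  # don't hammer the backend
    raise TimeoutError("no response within %.1f seconds" % timeout)
```

With Celery itself the equivalent would be something like `clusterWorking.apply_async(queue=q).get(timeout=5)`, which fails fast on the client even when the queue is down.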

Zoltan Szalai | 25 Aug 12:27 2014

sensitive task kwargs in error emails

Hi,

Is there any way to replace/hide sensitive args/kwargs in celery error 
emails?

Thanks,
Zoltan

Victor Poluksht | 25 Aug 11:32 2014

celery init.d + virtualenv

Hi all.

I have virtualenv (with celery) installed at /opt/venv
My celery tasks are at /opt/tasks

I'm trying to run the celery worker with the init.d script provided at https://github.com/celery/celery/blob/3.1/extra/generic-init.d/celeryd

My /etc/default/celeryd file is:

CELERYD_NODES="worker"
CELERY_BIN="/opt/venv/bin/celery"
CELERY_APP="tasks.tasks:capp"
CELERYD_CHDIR="/opt/tasks"
CELERYD_OPTS="--time-limit=300 --concurrency=40"
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
CELERYD_LOG_LEVEL="DEBUG"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_CREATE_DIRS=1

When I run /opt/venv/bin/celery from the console, it runs as expected.

When I try to start /etc/init.d/celeryd I get this error message:

celery init v10.1.
Using config script: /etc/default/celeryd
celery multi v3.1.13 (Cipater)
> Starting nodes...
/usr/bin/python3.4: No module named celery
> worker@tasks: * Child terminated with errorcode 1
FAILED

Any ideas?

Miki Tebeka | 21 Aug 18:40 2014

chain/chord with failed task get stuck in celery.chord_unlock

Hello,

Some background: we have several sensor web servers. For each sensor we scp the log file every hour, process it, and upload it to an hourly database table. When logs from all sensors are in the hourly table, we copy the hourly table to the main fact table.

The flow I'm trying to build in Celery is a chain of (scp -> process -> upload) for each sensor; all of these chains are in a group, and when the group is done I call the copy from hourly to main. I tried using both a chain and a chord to create the group -> copy-to-fact dependency. Both seem to work fine if there's no error, but if there is an error I see

[2014-08-21 14:31:55,425: INFO/MainProcess] Received task: celery.chord_unlock[076bb3d7-f934-4c8a-a7f5-c86e2239ae60] eta:[2014-08-21 11:31:56.424264+00:00]

repeated every second.

Any advice? The code is here and example of running is here (we schedule the run from cron and *wait* for the flow to be done).

Thanks,
--
Miki

sandeep G Kurdagi | 21 Aug 13:36 2014

Restarting Celery beat Service.

I am running daily jobs using the celery beat service. I want to stop and restart the service. Will the previously scheduled jobs be loaded back, and if so, from where? Please guide; I couldn't find any documentation on this.
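As far as I know, stock celery beat (3.x) takes its schedule entries from CELERYBEAT_SCHEDULE in your config and persists only last-run timestamps in the celerybeat-schedule file (the path given by `-s`/`--schedule`). So on restart the jobs are loaded back from your config plus that file, not from the broker. A minimal config sketch, where `tasks.daily_job` is a placeholder task name:

```python
# celeryconfig.py sketch; 'tasks.daily_job' is a hypothetical task name
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'daily-job': {
        'task': 'tasks.daily_job',
        'schedule': timedelta(days=1),  # run once a day
    },
}
```

If you use a database-backed scheduler (e.g. djcelery's), entries live in the database instead and likewise survive restarts.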


Thanks

Nicolas Bouliane | 21 Aug 04:14 2014

Worker starts, then shuts down. No errors.

I've spent the whole night trying to get Celery to work, with the exact same setup as the #1 tutorial and the official tutorial, to no avail. My tasks are found, but nothing really happens.

When I start a celery worker, here's what I get:

(env)nicolas:source$ python manage.py celeryd -B -l DEBUG
[2014-08-21 02:13:05,627: DEBUG/MainProcess] | Worker: Preparing bootsteps.
[2014-08-21 02:13:05,629: DEBUG/MainProcess] | Worker: Building graph...
[2014-08-21 02:13:05,630: DEBUG/MainProcess] | Worker: New boot order: {Beat, StateDB, Timer, Hub, Queues (intra), Pool, Autoscaler, Autoreloader, Consumer}
[2014-08-21 02:13:05,637: DEBUG/MainProcess] | Consumer: Preparing bootsteps.
[2014-08-21 02:13:05,637: DEBUG/MainProcess] | Consumer: Building graph...
[2014-08-21 02:13:05,648: DEBUG/MainProcess] | Consumer: New boot order: {Connection, Agent, Events, Mingle, Tasks, Control, Gossip, Heart, event loop}
 
 -------------- celery@macbook v3.1.13 (Cipater)
---- **** ----- 
--- * ***  * -- Darwin-13.3.0-x86_64-i386-64bit
-- * - **** --- 
- ** ---------- [config]
- ** ---------- .> app:         default:0x108465250 (djcelery.loaders.DjangoLoader)
- ** ---------- .> transport:   django://localhost//
- ** ---------- .> results:     ('djcelery.backends.database:DatabaseBackend',)
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- 
--- ***** ----- [queues]
 -------------- .> celery           exchange=celery(direct) key=celery
                

[tasks]
  . celery.backend_cleanup
  . celery.chain
  . celery.chord
  . celery.chord_unlock
  . celery.chunks
  . celery.group
  . celery.map
  . celery.starmap
  . status.tasks.test

[2014-08-21 02:13:05,660: DEBUG/MainProcess] | Worker: Starting Beat
[2014-08-21 02:13:05,662: DEBUG/MainProcess] ^-- substep ok
[2014-08-21 02:13:05,664: DEBUG/MainProcess] | Worker: Starting Pool
[2014-08-21 02:13:05,667: INFO/Beat] beat: Starting...
[2014-08-21 02:13:05,684: DEBUG/MainProcess] ^-- substep ok
[2014-08-21 02:13:05,685: DEBUG/MainProcess] | Worker: Starting Consumer
[2014-08-21 02:13:05,685: DEBUG/MainProcess] | Consumer: Starting Connection
[2014-08-21 02:13:05,695: INFO/MainProcess] Connected to django://localhost//
[2014-08-21 02:13:05,695: DEBUG/MainProcess] ^-- substep ok
[2014-08-21 02:13:05,695: DEBUG/MainProcess] | Consumer: Starting Events
[2014-08-21 02:13:05,701: DEBUG/MainProcess] ^-- substep ok
[2014-08-21 02:13:05,701: DEBUG/MainProcess] | Consumer: Starting Tasks
[2014-08-21 02:13:05,702: DEBUG/MainProcess] | Worker: Closing Beat...
[2014-08-21 02:13:05,702: DEBUG/MainProcess] | Worker: Closing Pool...
[2014-08-21 02:13:06,483: DEBUG/MainProcess] | Worker: Closing Consumer...
[2014-08-21 02:13:06,484: DEBUG/MainProcess] | Worker: Stopping Consumer...
[2014-08-21 02:13:06,484: DEBUG/MainProcess] | Worker: Stopping Pool...
[2014-08-21 02:13:06,484: DEBUG/MainProcess] | Worker: Stopping Beat...
[2014-08-21 02:13:06,485: INFO/MainProcess] beat: Shutting down...
[2014-08-21 02:13:06,485: DEBUG/MainProcess] | Consumer: Shutdown Heart...
[2014-08-21 02:13:06,485: DEBUG/MainProcess] | Consumer: Shutdown Control...
[2014-08-21 02:13:06,485: DEBUG/MainProcess] | Consumer: Shutdown Tasks...
[2014-08-21 02:13:06,485: DEBUG/MainProcess] | Consumer: Shutdown Events...
[2014-08-21 02:13:06,485: DEBUG/MainProcess] | Consumer: Shutdown Connection...

sandeep G Kurdagi | 19 Aug 16:27 2014

Received unregistered task for celery


I am getting an unregistered-task error when I run a worker to take jobs from a queue. This is how I am doing it:

celery -A Tasks beat

The above command schedules a job at a specific time; after that, the task is added to the default queue. Then I run a celery worker in another terminal as below:

celery worker -Q default

But I get the following error:

[2014-08-19 19:34:02,466: ERROR/MainProcess] Received unregistered task of type 'TasksReg.vodafone_v2'.
The message has been ignored and discarded.

Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.

The full contents of the message body was:
{'utc': False, 'chord': None, 'args': [[u'Kerala,Karnataka']], 'retries': 0, 'expires': None, 'task': 'TasksReg.vodafone_v2', 'callbacks': None, 'errbacks': None, 'timelimit': (None, None), 'taskset': None, 'kwargs': {}, 'eta': None, 'id': 'd4390336-9110-4e47-9e3a-017017cb509c'} (244b)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 455, in on_task_received
    strategies[name](message, body,
KeyError: 'TasksReg.vodafone_v2'
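That KeyError means the worker never imported the module that registers `TasksReg.vodafone_v2`: the bare `celery worker -Q default` command starts a worker with no app, so no tasks are registered. Point the worker at the same app as beat (e.g. `celery -A TasksReg worker -Q default`), or list the module in your config. A config-fragment sketch, with the module name taken from the error message:

```python
# celeryconfig.py sketch: make the worker import the task module on startup
CELERY_IMPORTS = ('TasksReg',)
```

Either way, the worker's startup banner lists the registered tasks, so you can confirm `TasksReg.vodafone_v2` appears there before sending jobs.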
