nl fox | 2 May 02:54 2015

How to customize my own method to deal with the result

Help... In Celery, can I customize my own method for handling the result returned by a worker? I need to check whether the same task has already been carried out previously; if it has, I want to update the existing record instead of inserting a new one.
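
(A sketch of one way to do this, not the only one: hook the task's on_success() callback, which the worker calls after the task body returns, and upsert into your own table. The SQLAlchemy model, the SQLite URL, and keying by task id below are illustrative assumptions; key the lookup however you define "same task".)

```python
# Sketch: upsert task results from on_success(). `Result`, the SQLite URL
# and keying by task_id are assumptions for illustration only.
from celery import Celery, Task
from sqlalchemy import Column, String, Text, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Result(Base):
    __tablename__ = 'task_results'
    task_id = Column(String(255), primary_key=True)
    value = Column(Text)

engine = create_engine('sqlite:///results.db')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

app = Celery('myapp', broker='amqp://localhost//')

class UpsertResultTask(Task):
    abstract = True  # base class only; not registered as a task itself

    def on_success(self, retval, task_id, args, kwargs):
        # Called by the worker after the task body returns successfully.
        session = Session()
        row = session.query(Result).get(task_id)
        if row is not None:
            row.value = repr(retval)   # same task seen before: update it
        else:
            session.add(Result(task_id=task_id, value=repr(retval)))
        session.commit()

@app.task(base=UpsertResultTask)
def add(x, y):
    return x + y
```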

minusf | 1 May 14:21 2015

mysql deadlocks and myisam engine

hello,

seems like a no-brainer, but i was wondering:
the FAQ regarding MySQL deadlocks applies
only to InnoDB and not to the MyISAM engine, right?

Mike | 2 May 18:32 2015

Switched from prefork to greenlet, getting new transient errors

I'm running Celery on Solaris (SunOS 5.10). I found that the prefork workers seem to grow in size over time (even with max tasks per child set); it's as if they inherit a lot of memory from the main process as time goes on. I can start another thread just on this topic if anyone would like to help me dig into it. For now, I switched the pool to eventlet.
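
(For reference, the switch amounts to starting the worker with the eventlet pool; the app name and concurrency below are assumed:)

```
celery -A proj worker -P eventlet -c 1000
```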

Using: 

celery (3.1.16)
eventlet (0.16.0.dev)
billiard (3.3.0.18)
amqp (1.4.6)
greenlet (0.4.4)
librabbitmq (1.5.2)


Since switching over, though, I get the following three transient errors which shut the worker down.  Any ideas what might cause this?



[2015-05-02 06:13:37,452: WARNING/MainProcess] /opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/app/trace.py:364:
RuntimeWarning: Exception raised outside body: SystemError('error return without exception set',):
Traceback (most recent call last):
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/app/trace.py", line 283, in trace_task
    uuid, retval, SUCCESS, request=task_request,
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/backends/base.py", line 248, in store_result
    request=request, **kwargs)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/backends/base.py", line 481, in _store_result
    self.set(self.get_key_for_task(task_id), self.encode(meta))
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/backends/cache.py", line 126, in set
    return self.client.set(key, value, self.expires)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/memcache.py", line 584, in set
    return self._set("set", key, val, time, min_compress_len)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/memcache.py", line 835, in _set
    return _unsafe_set()
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/memcache.py", line 827, in _unsafe_set
    return(server.expect("STORED", raise_exception=True)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/memcache.py", line 1196, in expect
    line = self.readline(raise_exception)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/memcache.py", line 1182, in readline
    data = recv(4096)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/eventlet-0.16.0.dev-py2.7.egg/eventlet/greenio.py", line 325, in recv
    timeout_exc=socket.timeout("timed out"))
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/eventlet-0.16.0.dev-py2.7.egg/eventlet/greenio.py", line 200, in _trampoline
    mark_as_closed=self._mark_as_closed)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/eventlet-0.16.0.dev-py2.7.egg/eventlet/hubs/__init__.py", line 159, in trampoline
    return hub.switch()
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/eventlet-0.16.0.dev-py2.7.egg/eventlet/hubs/hub.py", line 293, in switch
    return self.greenlet.switch()
SystemError: error return without exception set





Traceback (most recent call last):
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/worker/__init__.py", line 227, in _process_task
    req.execute_using_pool(self.pool)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/worker/job.py", line 263, in execute_using_pool
    correlation_id=uuid,
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/concurrency/base.py", line 156, in apply_async
    **options)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/concurrency/eventlet.py", line 144, in on_apply
    self.getpid)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/eventlet-0.16.0.dev-py2.7.egg/eventlet/greenpool.py", line 106, in spawn_n
    self.sem.acquire()
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/eventlet-0.16.0.dev-py2.7.egg/eventlet/semaphore.py", line 96, in acquire
    hubs.get_hub().switch()
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/eventlet-0.16.0.dev-py2.7.egg/eventlet/hubs/hub.py", line 293, in switch
    return self.greenlet.switch()
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/eventlet-0.16.0.dev-py2.7.egg/eventlet/greenpool.py", line 93, in _spawn_n_impl
    self._spawn_done(coro)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/eventlet-0.16.0.dev-py2.7.egg/eventlet/greenpool.py", line 125, in _spawn_done
    self.coroutines_running.remove(coro)
KeyError: <greenlet.greenlet object at 0xc759e0>





[2015-05-02 08:41:48,786: WARNING/MainProcess] /opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/app/trace.py:364:
RuntimeWarning: Exception raised outside body: TypeError('sequence item 1: expected string, NoneType found',):
Traceback (most recent call last):
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/app/trace.py", line 253, in trace_task
    I, R, state, retval = on_error(task_request, exc, uuid)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/app/trace.py", line 201, in on_error
    R = I.handle_error_state(task, eager=eager)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/app/trace.py", line 85, in handle_error_state
    }[self.state](task, store_errors=store_errors)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/app/trace.py", line 118, in handle_failure
    req.id, exc, einfo.traceback, request=req,
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/backends/base.py", line 121, in mark_as_failure
    traceback=traceback, request=request)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/backends/base.py", line 248, in store_result
    request=request, **kwargs)
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/backends/base.py", line 481, in _store_result
    self.set(self.get_key_for_task(task_id), self.encode(meta))
  File "/opt/app/thisapp/software/python/lib/python2.7/site-packages/celery-3.1.16-py2.7.egg/celery/backends/base.py", line 406, in get_key_for_task
    self.task_keyprefix, key_t(task_id), key_t(key),
TypeError: sequence item 1: expected string, NoneType found





Mike | 1 May 17:24 2015

Memcached filling up, advice on usage?


I'm trying to understand better how memcached works as a backend for celery. 

I'm using memcached in two ways: 

1. As a celery backend
2. As a place to take distributed locks that prevent two tasks from executing simultaneously. (My database cannot do this, unfortunately.)

So I have tasks... and some of the tasks feed results into other tasks in a chain. For this situation, do I need to store results in the backend, or can I set the ignore_result flag for those tasks?

I also have a chord where multiple tasks execute in parallel and then a final task runs to inventory status, etc. Am I able to ignore results on this?

Last question: how do I set backend items to expire when using memcached? Because I use it for distributed locking, I have to be pretty careful about allowing dropped entries, so I have eviction disabled. I believe the celery backend entries are filling up my memcached, so I'm looking for the right config for my situation. Perhaps I need more than one memcached server: one for the backend that allows dropping, and another for locks. Thoughts?
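
(For what it's worth, a sketch of the relevant knobs, using Celery 3.1 setting names; the URLs are illustrative. In a chain, each link's return value travels inside the next task's message, so chain links can usually set ignore_result; a chord's callback, by contrast, is built from the header tasks' stored results, so those header tasks generally cannot. The cache backend passes the expiry straight to memcached's set() as a TTL.)

```python
# Sketch of per-task and global result settings (Celery 3.1 names).
from celery import Celery

app = Celery('myapp',
             broker='amqp://localhost//',
             backend='cache+memcached://127.0.0.1:11211/')

# Let stored results expire after an hour instead of living forever.
app.conf.CELERY_TASK_RESULT_EXPIRES = 3600

@app.task(ignore_result=True)
def fire_and_forget():
    # No result entry is written to memcached for this task.
    pass
```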

tristan | 29 Apr 21:35 2015

Celery + Eventlet pool does not improve execution speed of asynchronous web requests

As mentioned in the celery docs, the eventlet pool should be faster than the prefork pool for evented I/O such as asynchronous HTTP requests.

They even mention that

"In an informal test with a feed hub system the Eventlet pool could fetch and process hundreds of feeds every second, while the prefork pool spent 14 seconds processing 100 feeds."

However, we are unable to reproduce any results similar to this. Running the example tasks, urlopen and crawl, exactly as described and opening thousands of URLs, the prefork pool almost always performs better.

We tested with all sorts of concurrency levels (prefork with concurrency 200; eventlet with concurrencies of 200, 2000, and 5000). In all of these cases the tasks complete in fewer seconds using the prefork pool. The test machine is a 2014 MacBook Pro running a RabbitMQ server.

We are looking to make thousands of asynchronous HTTP requests at once and are wondering whether the eventlet pool is even worth implementing. If it is, what are we missing?
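
(For reference, a rough sketch of the kind of task eventlet targets; names and the broker URL are assumed, not the exact docs example. The pool only wins if the socket I/O is green: starting the worker with -P eventlet is expected to monkey-patch the stdlib, making urllib2 below cooperative instead of blocking.)

```python
# tasks.py -- a rough sketch of the I/O-bound case the eventlet pool targets.
import urllib2

from celery import Celery

app = Celery('tasks', broker='amqp://localhost//')

@app.task
def urlopen(url):
    try:
        # Under -P eventlet this read yields to other greenlets while waiting.
        return len(urllib2.urlopen(url).read())
    except Exception as exc:
        print('URL {0} gave error: {1!r}'.format(url, exc))
```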

willian coelho | 29 Apr 21:22 2015

Problem when importing a task on a flask blueprint

Hi folks,

I'm working on a web app (Flask) where I'm using Celery to distribute some tasks. I'm using the factory pattern described at http://flask.pocoo.org/docs/0.10/patterns/celery/
and getting the following error when running:

celery worker -A portautos.tasks -l info

error:
Traceback (most recent call last):
  File "/Users/macbookair2011/Documents/envs/portautosapp/bin/celery", line 11, in <module>
    sys.exit(main())
  File "/Users/macbookair2011/Documents/envs/portautosapp/lib/python2.7/site-packages/celery/__main__.py", line 30, in main
    main()
  File "/Users/macbookair2011/Documents/envs/portautosapp/lib/python2.7/site-packages/celery/bin/celery.py", line 81, in main
    cmd.execute_from_commandline(argv)
  File "/Users/macbookair2011/Documents/envs/portautosapp/lib/python2.7/site-packages/celery/bin/celery.py", line 769, in execute_from_commandline
    super(CeleryCommand, self).execute_from_commandline(argv)))
  File "/Users/macbookair2011/Documents/envs/portautosapp/lib/python2.7/site-packages/celery/bin/base.py", line 305, in execute_from_commandline
    argv = self.setup_app_from_commandline(argv)
  File "/Users/macbookair2011/Documents/envs/portautosapp/lib/python2.7/site-packages/celery/bin/base.py", line 465, in setup_app_from_commandline
    self.app = self.find_app(app)
  File "/Users/macbookair2011/Documents/envs/portautosapp/lib/python2.7/site-packages/celery/bin/base.py", line 485, in find_app
    return find_app(app, symbol_by_name=self.symbol_by_name)
  File "/Users/macbookair2011/Documents/envs/portautosapp/lib/python2.7/site-packages/celery/app/utils.py", line 232, in find_app
    sym = imp(app)
  File "/Users/macbookair2011/Documents/envs/portautosapp/lib/python2.7/site-packages/celery/utils/imports.py", line 101, in import_from_cwd
    return imp(module, package=package)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/Users/macbookair2011/Desktop/projects/portautos/portautos/tasks.py", line 15, in <module>
    celery = create_celery_app()
  File "/Users/macbookair2011/Desktop/projects/portautos/portautos/app.py", line 46, in create_celery_app
    app = app or create_app()
  File "/Users/macbookair2011/Desktop/projects/portautos/portautos/app.py", line 35, in create_app
    from .apps.accounts import module as accounts
  File "/Users/macbookair2011/Desktop/projects/portautos/portautos/apps/accounts/__init__.py", line 6, in <module>
    from .views import LoginView, RegisterView, LogoutView, ConfirmView
  File "/Users/macbookair2011/Desktop/projects/portautos/portautos/apps/accounts/views.py", line 10, in <module>
    from ...models import db, User
  File "/Users/macbookair2011/Desktop/projects/portautos/portautos/models.py", line 10, in <module>
    from .tasks import send_mail
ImportError: cannot import name send_mail


As you can see, the error is raised when I import the tasks in order to call them. All my tasks are in tasks.py at the application root.

├── README.md
├── bower.json
├── gulpfile.js
├── manage.py
├── migrations
├── package.json
├── portautos
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── app.py
│   ├── apps
│   │   ├── __init__.py
│   │   ├── accounts
│   │   ├── aquisition
│   │   └── home
│   ├── core
│   │   ├── __init__.py
│   │   ├── exceptions.py
│   │   └── views.py
│   ├── extensions.py
│   ├── models.py
│   ├── settings
│   │   ├── __init__.py
│   │   ├── local_settings.py
│   │   └── test_settings.py
│   ├── static
│   ├── tasks.py
│   ├── tasks.pyc
│   ├── templates

My tasks.py:

# coding:utf-8
"""tasks.py
"""
from celery.utils.log import get_task_logger

from flask import render_template as render
from flask.ext.mail import Message

from .extensions import mail
from .app import create_celery_app

# logging
logger = get_task_logger(__name__)

celery = create_celery_app()


@celery.task
def send_mail(to, subject, template, **kwargs):
    """Send an email rendered from the given template via Flask-Mail."""
    logger.info('Sending email to {}.'.format(to))
    try:
        msg = Message(sender=celery.conf['FLASK_MAIL_SENDER'],
                      recipients=[to],
                      subject=subject)
        msg.body = render(template + '.txt', **kwargs)
        msg.html = render(template + '.html', **kwargs)
        mail.send(msg)
    except Exception as e:
        logger.error(e)
    logger.info('An email has been sent to {}.'.format(to))

I've realized that when I remove the import statement and run celery again, it starts without errors. But I need to import it to call the tasks :-).
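
(One common way to break such an import cycle, sketched here under the assumption that models.py only needs the task at call time, is to defer the import into the function that uses it:)

```python
# models.py -- sketch: import the task lazily so that importing models.py
# no longer pulls in tasks.py (and through it app.py) at module load time.
def send_confirmation(user_email):
    from .tasks import send_mail  # deferred import; runs only when called
    send_mail.delay(user_email, 'Confirm your account', 'confirm')
```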


Can you help me?

Thanks in advance,
Michael Coelho.

Rhett Garber | 28 Apr 21:26 2015

Celery and Upstart

We use Upstart to manage our celeryd workers. It's a pretty simple script that establishes our virtualenv and executes the worker.

I just started digging into a long-ignored failure during our deployment process, where we receive:

    WorkerLostError('Worker exited prematurely: signal 15 (SIGTERM).',)

While digging around in the upstart documentation (http://upstart.ubuntu.com/cookbook/) I found this little gem:

    "The signal specified by the kill signal stanza is sent to the process group of the main process. (such that all processes belonging to the jobs main process are killed). By default this signal is SIGTERM."

So does this sound like a plausible underlying cause? If Upstart sends SIGTERM to all the child processes directly, there is likely a race condition where the workers exit before the master is ready for them to start exiting.

I noticed that of all the project-provided scripts for running celery in production, upstart is not one of them. Is that because it's not possible to run under upstart without errors like this?
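
(For reference, a minimal upstart job of the kind described, with paths, user, and app name assumed; note the cookbook caveat quoted above — `kill signal` goes to the whole process group, so the prefork children receive the SIGTERM directly too:)

```
# /etc/init/celeryd.conf -- sketch only
description "celery worker"

start on runlevel [2345]
stop on runlevel [!2345]

kill timeout 300   # allow a slow warm shutdown before SIGKILL
setuid celery
chdir /opt/app

exec /opt/app/venv/bin/celery worker -A proj --loglevel=INFO
```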

Marc Aymerich | 27 Apr 18:35 2015

celerybeat exception on detach mode

Hi, 
I had celerybeat working nicely on my project, but I recently upgraded from Postgres 9.1 to 9.4 and also upgraded Celery to the latest stable version, so I'm not sure which change caused the problem. I've noticed that celerybeat now refuses to start with the --detach option (it works normally otherwise).

Celery beat logs are

[2015-04-27 16:02:34,545: CRITICAL/MainProcess] beat raised exception <class 'django.db.utils.DatabaseError'>: DatabaseError('SSL SYSCALL error: EOF detected\n',)
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/kombu/utils/__init__.py", line 320, in __get__
    return obj.__dict__[self.__name__]
KeyError: 'scheduler'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
psycopg2.DatabaseError: SSL SYSCALL error: EOF detected

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/celery/apps/beat.py", line 112, in start_scheduler
    beat.start()
  File "/usr/local/lib/python3.4/dist-packages/celery/beat.py", line 454, in start
    humanize_seconds(self.scheduler.max_interval))
  File "/usr/local/lib/python3.4/dist-packages/kombu/utils/__init__.py", line 322, in __get__
    value = obj.__dict__[self.__name__] = self.__get(obj)
  File "/usr/local/lib/python3.4/dist-packages/celery/beat.py", line 494, in scheduler
    return self.get_scheduler()
  File "/usr/local/lib/python3.4/dist-packages/celery/beat.py", line 489, in get_scheduler
    lazy=lazy)
  File "/usr/local/lib/python3.4/dist-packages/celery/utils/imports.py", line 53, in instantiate
    return symbol_by_name(name)(*args, **kwargs)
  File "/usr/local/lib/python3.4/dist-packages/djcelery/schedulers.py", line 151, in __init__
    Scheduler.__init__(self, *args, **kwargs)
  File "/usr/local/lib/python3.4/dist-packages/celery/beat.py", line 185, in __init__
    self.setup_schedule()
  File "/usr/local/lib/python3.4/dist-packages/djcelery/schedulers.py", line 158, in setup_schedule
    self.install_default_entries(self.schedule)
  File "/usr/local/lib/python3.4/dist-packages/djcelery/schedulers.py", line 251, in schedule
    self._schedule = self.all_as_schedule()
  File "/usr/local/lib/python3.4/dist-packages/djcelery/schedulers.py", line 164, in all_as_schedule
    for model in self.Model.objects.enabled():
  File "/usr/local/lib/python3.4/dist-packages/django/db/models/query.py", line 162, in __iter__
    self._fetch_all()
  File "/usr/local/lib/python3.4/dist-packages/django/db/models/query.py", line 965, in _fetch_all
    self._result_cache = list(self.iterator())
  File "/usr/local/lib/python3.4/dist-packages/django/db/models/query.py", line 238, in iterator
    results = compiler.execute_sql()
  File "/usr/local/lib/python3.4/dist-packages/django/db/models/sql/compiler.py", line 829, in execute_sql
    cursor.execute(sql, params)
  File "/usr/local/lib/python3.4/dist-packages/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
  File "/usr/local/lib/python3.4/dist-packages/django/db/utils.py", line 97, in __exit__
    six.reraise(dj_exc_type, dj_exc_value, traceback)
  File "/usr/local/lib/python3.4/dist-packages/django/utils/six.py", line 658, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.4/dist-packages/django/db/backends/utils.py", line 64, in execute
    return self.cursor.execute(sql, params)
django.db.utils.DatabaseError: SSL SYSCALL error: EOF detected


Postgres logs the following

2015-04-27 16:26:40 UTC [3275-1] orchestra@orchestra LOG:  could not receive data from client: Connection reset by peer


Any ideas? Maybe a celery bug?
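
(One guess, offered without certainty: --detach forks after Django has already opened its SSL connection to Postgres, and the inherited socket then fails in the child with exactly this kind of EOF. If that is the cause, forcing Django to drop its connections before the scheduler starts would make it reconnect cleanly in the detached process:)

```python
# Sketch (Django >= 1.8 API; on older versions use connection.close()).
from django.db import connections

connections.close_all()  # next query opens a fresh connection post-fork
```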


--
Marc

xeon Mailinglist | 27 Apr 16:40 2015

how much time i wait for a task to finish

I have 4 hosts spread around the world, and they must exchange messages with each other. E.g., I have one host running in each of the following locations: Singapore, California, New York, and London. The RabbitMQ server is running in NY. All 4 hosts are connected to the RabbitMQ server using celery, and each host has a queue where messages are published.
Also, the connection is sometimes cut, because the hosts are far away from each other.

The client sends a message to the RabbitMQ server with the queue name, and the message is delivered to the corresponding host. These messages are invocations to run programs, and the client must wait for the output at the end of execution. The problem is that sometimes I don't know how long a client must wait for the task to finish.

I use both of the examples below, [1] and [2], but I don't know if they are best practice. In [1], I simply wait indefinitely. In [2], I ping the remote host the whole time to see if it is alive, and sometimes I get the exception because the network is cut.

What I want is to keep the client running properly even when there are problems with the network. Is there a best practice for this use case?


[1]

```
called_task.wait(timeout=None, interval=5)  # timeout=None: blocks forever until the result arrives

```

[2]
```

import logging
import time

from celery.exceptions import TimeoutError


def waiting(cluster, f):
    """Wait for task result `f`, pinging the remote cluster so that a
    dead network is detected instead of blocking forever."""
    error_counter = 0
    while not f.ready():
        try:
            # my_apply_async() and ping are this project's own helpers.
            f1 = my_apply_async(ping, queue=cluster)
            time.sleep(5)
            f1.get(timeout=5)
            error_counter = 0  # the cluster answered; reset the counter
        except TimeoutError:
            error_counter += 1
            if error_counter >= 3:
                logging.warning("WARNING: Cluster %s is down" % cluster)
                setRunningClusters()  # project helper: refresh cluster state
                raise Exception("Job execution failed")

    return f


```

Michael Nachbar | 24 Apr 17:06 2015

Some Tasks Never Make It to RabbitMQ, Stuck in Pending

Hi everyone,

I'm having an issue where about 33% of my celery tasks never run and stay permanently in PENDING status. The tasks that never run never show up in the RabbitMQ logs. The other 66% of tasks do show up in the RabbitMQ logs and end up being executed.

Does anyone know a reason why certain tasks would disappear and never reach RabbitMQ? I confess I am pretty lost here.

For some background, my client is a webpage served by Gunicorn. My backend, where the tasks are executed, is a separate machine. For the BROKER_URL in the Celery settings I specify the backend machine's IP address (full Celery settings pasted below).

Thanks,

Mike

Below are the settings from my Celery settings.py file:

BROKER_URL = "amqp://guest:guest <at> backend_ip:5672//"
BROKER_HEARTBEAT = 0
CELERY_SEND_TASK_ERROR_EMAILS = True
CELERYD_CONCURRENCY = 4
CELERY_TASK_RESULT_EXPIRES = None

CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"

CELERY_DEFAULT_QUEUE = 'default'

CELERY_DEFAULT_EXCHANGE_TYPE = 'direct'
CELERY_DEFAULT_ROUTING_KEY = 'default'
CELERY_QUEUES = {
    "default": {
        "exchange": "default",
        "binding_key": "default",
    },
    "urgent": {
        "exchange": "amq.rabbitmq.trace",
        "exchange_type": "topic",
        "binding_key": "urgent",
    },
    }

# Listing the tasks which should be routed to the "urgent" queue. All others will default to the "default" queue.
CELERY_ROUTES={
    "mytask": {"queue": "urgent",}
}
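
(A quick sanity check for anyone debugging something similar, offered as a suggestion rather than a diagnosis: confirm on the broker host whether the "missing" third of the messages are ever enqueued at all. The queue names match the config above.)

```
# Run on the RabbitMQ host; shows per-queue backlog and unacked counts.
rabbitmqctl list_queues name messages_ready messages_unacknowledged
```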

francois | 23 Apr 11:01 2015

Multiple tasks with same ID on a single worker process

Hello all,

We've recently run into a small issue with Celery on one of our servers, and while I am pretty sure I know what went wrong, I would like to know a bit more about how Celery handles this case.

Tasks handled by our workers can take a long time (anywhere from a few minutes to a few hours), and we recently found a task that ran multiple times on the same worker process even though it was only delayed once.

The logs looked similar to this :

  [MainProcess 17:00:00] Task [7afe8c...] received
  [MainProcess 18:10:00] Task [7afe8c...] received
  [MainProcess 19:25:00] Task [7afe8c...] received
  [Worker7 22:00:00] Task [7afe8c...] run
  [Worker7 22:01:00] Task [7afe8c...] run
  [Worker7 22:02:00] Task [7afe8c...] run

After going through the documentation, I figured out that we should have set a longer visibility timeout. This should solve our issue.
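
(For reference, the setting in question looks like this — the option applies to Redis and SQS transports, and the six-hour value is just an example:)

```python
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 21600}  # 6 hours, in seconds
```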

However, is there a reason why a task with the same ID is accepted and run multiple times by the exact same process? Shouldn't tasks with the same ID be merged in that case?

Thanks for your help!

François-Xavier

