Serafeim Zanikolas | 1 Apr 19:13 2012

Re: Beanstalkd 1.5 .deb package

On Sun, Mar 18, 2012 at 10:20:21PM +0100, Serafeim Zanikolas wrote:
> On Thu, Mar 15, 2012 at 02:12:49AM -0700, Keith Rarick wrote:
> > On Sat, Mar 3, 2012 at 4:05 PM, Serafeim Zanikolas
<serzan@...> wrote:
> > > one of the tests fails
> > 
> > That test (pause-tube) has a race that causes it to fail
> > sometimes, and I neglected to fix it before the release.
> > 
> > The output you posted is consistent with correct beanstalkd
> > behavior.
> 
> FYI uploaded to debian unstable, with the test disabled.

more test failures (on amd64 & kfreebsd-i386 respectively):

On line 47 sh-tests/binlog-sizelimit.sh
  first binlog wrong size
  FAIL: sh-tests/binlog-sizelimit.sh

On line 80 sh-tests/binlog-diskfull-delete.sh
  Second binlog file is missing
  FAIL: sh-tests/binlog-diskfull-delete.sh

These tests previously (prior to 1.5) passed on all Debian-supported
architectures.

https://buildd.debian.org/status/package.php?p=beanstalkd

sez

Keith Rarick | 2 Apr 00:20 2012

Re: Beanstalkd 1.5 .deb package

On Sun, Apr 1, 2012 at 10:13 AM, Serafeim Zanikolas <serzan@...> wrote:
> more test failures (on amd64 & kfreebsd-i386 respectively):

Thanks!

https://github.com/kr/beanstalkd/issues/108


Roy Smith | 2 Apr 15:38 2012

NotFound exception in delete() -- is beanstalk thread safe?

We're using pybeanstalk 0.11.1, python 2.6.5, Ubuntu 10.04.3 LTS.  The main loop of our multi-threaded beanstalk client looks like:


    while True:
        m = mq.reserve()
        jid = m['jid']
        try:
            # process job
        except Exception:
            logger.error("skipped job %d" % jid)
        finally:
            logger.info("deleted job %d" % jid)
            mq.delete(jid)

Every once in a while (yesterday it happened once in 60k jobs processed), the delete() call in the finally block raises a NotFound exception.  I've included a stack dump below.

2012-04-02T04:56:57+00:00 cluster1.songza.com 2012-04-02 04:56:57,497 - scrobble_mill - Caught NotFound('Server returned: NOT_FOUND',) in <Thread(Thread-9, started 139969929115392)>
Traceback (most recent call last):
  File "/home/songza/deploy/current/scrobble/scrobble_mill.py", line 98, in listen_wrapper
    listen(host, port)
  File "/home/songza/deploy/current/scrobble/scrobble_mill.py", line 162, in listen
    mq.delete(jid)
  File "/usr/lib/pymodules/python2.6/beanstalk/serverconn.py", line 94, in caller
    *getattr(protohandler, 'process_%s' % attr)(*args, **kw))
  File "/usr/lib/pymodules/python2.6/beanstalk/serverconn.py", line 68, in _do_interaction
    return self._get_response(handler)
  File "/usr/lib/pymodules/python2.6/beanstalk/serverconn.py", line 58, in _get_response
    res = handler(recv)
  File "/usr/lib/pymodules/python2.6/beanstalk/protohandler.py", line 56, in __call__
    return self.__h(val)
  File "/usr/lib/pymodules/python2.6/beanstalk/protohandler.py", line 68, in handler
    checkError(response)
  File "/usr/lib/pymodules/python2.6/beanstalk/errors.py", line 35, in checkError
    raise err
NotFound: Server returned: NOT_FOUND

Looking over the logs, it appears that two different threads got the same job in their reserve() calls.  Is this a bug?  I was assuming that beanstalk was thread safe, but maybe not?


Jurian Sluiman | 2 Apr 11:46 2012

Peek at complete list of jobs

I've seen similar requests on this ML before, but I'm not aware of any
progress since then. What I am looking for is a method to peek all
jobs, like peek and peek-* currently do for a single job. I know it
might have drawbacks for heavily used or very fast queues, but I think
that trade-off is up to the developer: if a peek-all command is
load-dependent, developers can decide for themselves whether to use it
in their own situation.

On the ML others have shown some good use cases where it would come in
handy, and I have something similar myself. My jobs can succeed (and
will be deleted) or not. If not, a job might be retried later (e.g. a
request to a 3rd-party service timed out) or buried (something else
happened). I want to use bury because it gives a user the option to
delete or kick the job; in my case it is often a human who decides
whether a job should be retried or deleted.

Such a system requires an interface in which the user can select one
or more jobs to retry and one or more jobs to delete. For UX reasons I
cannot make the user process the buried jobs one by one.

--
Jurian Sluiman


Roy Smith | 2 Apr 18:19 2012

Re: NotFound exception in delete() -- is beanstalk thread safe?

I think we've got this figured out. What probably happened is that the
job's TTR expired while it was still being processed, so beanstalkd
released it back onto the ready queue and another thread reserved it;
by the time the first thread called delete(), it no longer owned the
job, hence NOT_FOUND.
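
Here's a defensive variant of the loop from my first message: treat
NOT_FOUND on delete as a lost reservation rather than a fatal error.
(A sketch only: process() is a hypothetical stand-in for the real
work, and the NotFound class is the one from beanstalk/errors.py in
the traceback.)

    from beanstalk import errors

    while True:
        m = mq.reserve()
        jid = m['jid']
        try:
            process(m)  # hypothetical stand-in for the real job handler
        except Exception:
            logger.error("skipped job %d" % jid)
        finally:
            try:
                mq.delete(jid)
                logger.info("deleted job %d" % jid)
            except errors.NotFound:
                # our TTR lapsed and another thread re-reserved the job
                logger.warning("lost job %d to TTR timeout" % jid)

The real fix, though, is to reserve with a TTR comfortably above the
worst-case processing time, or to touch the job periodically while
working on it.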




Keith Rarick | 3 Apr 10:39 2012

Re: Peek at complete list of jobs

If you're not concerned with performance, you can
reserve all the jobs in the tube, then release
them all again. This will give you a chance to
inspect the entire contents.
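
Roughly, with a beanstalkc-style client (method names differ in
pybeanstalk and other libraries; 'mytube' and localhost are
placeholders):

    import beanstalkc

    conn = beanstalkc.Connection(host='localhost', port=11300)
    conn.watch('mytube')
    conn.ignore('default')

    jobs = []
    while True:
        job = conn.reserve(timeout=0)  # None once the tube is drained
        if job is None:
            break
        jobs.append(job)

    for job in jobs:
        print job.jid, job.body        # inspect the full contents here
        job.release()                  # put each job back as ready

Keep in mind the TTR clock runs on every job you're holding, so on a
big tube either touch them periodically or use a generous TTR, or some
will time out and reappear as ready before you release them.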

kr

On Mon, Apr 2, 2012 at 2:46 AM, Jurian Sluiman <jurian@...> wrote:
> What I am looking for is a method to peek all
> jobs, like peek and peek-* currently do for a single job.
> [...]


Jurian Sluiman | 3 Apr 12:13 2012

Re: Peek at complete list of jobs

Hi Keith,

On Apr 3, 10:39 am, Keith Rarick <k...@...> wrote:
> If you're not concerned with performance, you can
> reserve all the jobs in the tube, then release
> them all again. This will give you a chance to
> inspect the entire contents.
>
> kr

That's a valid option if you want a list of all ready jobs. But since
I want to give users the chance to kick or delete buried (i.e., in my
case, failed) jobs: is there a way to "reserve" buried jobs without
kicking them? If I kick them, they will interfere with the normal
jobs, which should be passed through to the regular workers.

--
Jurian Sluiman


Keith Rarick | 4 Apr 06:24 2012

Re: Peek at complete list of jobs

On Tue, Apr 3, 2012 at 3:13 AM, Jurian Sluiman <jurian@...> wrote:
> is there an option to "reserve" failed jobs
> without kicking them?

No, but you can copy each job into another place
(in memory, into a database, or into another tube
in beanstalkd) and delete the original.
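
A sketch of that, again with beanstalkc-style calls (adjust names for
your client; 'mytube' and 'reviews' are hypothetical tube names):

    import beanstalkc

    conn = beanstalkc.Connection(host='localhost', port=11300)

    conn.use('mytube')               # the tube holding the buried jobs
    while True:
        job = conn.peek_buried()     # oldest buried job, or None
        if job is None:
            break
        body = job.body
        job.delete()                 # delete is allowed on buried jobs
        conn.use('reviews')          # hypothetical holding tube
        conn.put(body)
        conn.use('mytube')

Note the copy keeps only the body; carry over priority and so on from
job.stats() if you need them.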

kr


Serafeim Zanikolas | 6 Apr 14:45 2012

Re: Beanstalkd 1.5 .deb package

On Sun, Apr 01, 2012 at 03:20:38PM -0700, Keith Rarick wrote:
> On Sun, Apr 1, 2012 at 10:13 AM, Serafeim Zanikolas
<serzan@...> wrote:
> > more test failures (on amd64 & kfreebsd-i386 respectively):
> 
> Thanks!
> 
> https://github.com/kr/beanstalkd/issues/108

FWIW, tests that previously passed and now fail are considered
release-critical bugs in Debian.


Deepak Prasanna | 12 Apr 13:16 2012

Exception Stalker::JobTimeout

Hi guys,

First of all, a big thanks to the beanstalk team.
I am using Stalker to manage my background jobs. Some of my jobs run
for more than 119s, and those raise Stalker::JobTimeout. I know I can
manually specify the TTR, like below.

Stalker.enqueue("video.webm", {:id => @video.id}, {:ttr => 10})

But the problem is that I really don't know in advance how long a
process will take; it may be minutes or hours. The ideal approach
would be to use "touch", but I can't find any resources online on how
to make Stalker touch the job to tell beanstalkd it's still alive.

job "run.query" do |args|
   <at> query_event = QueryEvent.find(args["id"])
   <at> query_event.execute_query!
end

My job looks something like the above. Help is appreciated.
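
For reference, this is the shape of what I'm after, sketched in Python
with beanstalkc only because I couldn't find the Stalker equivalent
(the protocol command is "touch <id>"; run_long_query is made up):

    import threading
    import beanstalkc

    conn = beanstalkc.Connection(host='localhost', port=11300)
    job = conn.reserve()
    stop = threading.Event()

    def keepalive(interval=30):
        # refresh the reservation well inside the job's TTR
        while not stop.is_set():
            stop.wait(interval)
            if not stop.is_set():
                job.touch()

    t = threading.Thread(target=keepalive)
    t.start()
    try:
        run_long_query(job.body)     # stand-in for the real work
    finally:
        stop.set()
        t.join()
        job.delete()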

Thanks,
Deepak.


