Johan Tibell | 1 May 10:39 2011

Re: Incrementally consuming the eventlog

On Thu, Apr 28, 2011 at 11:53 PM, Donnie Jones <donnie <at> darthik.com> wrote:
> Anyway, from your description, I don't understand how a listener would
> consume the eventlog incrementally?

I simply meant that I want to be able to register listeners for events
instead of having to parse the eventlog file after the fact.

> I do think it would be useful to register listeners for events.  I do
> not think the invocation of a callback would be too much overhead,
> rather the action the callback performs could be a very significant
> overhead, such as sending eventlog data over a network connection.
> But, if you are willing to accept the performance loss from the
> callback's action to gain the event data then it seems worthwhile to
> me.

A typical use of the callback would be to update some internal data
structure of the program itself, thereby making the program
self-monitoring. I've been toying with introducing log levels to the
eventlog command line API so the consumer of the event log can specify
the number of events it would like to receive. We could do something
similar for the API e.g.

registerEventListener (schedEvents .|. ioManagerEvents) (\ e -> ...)
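
To make that concrete, here is a rough sketch of what such an API might
look like. All of these names (EventClass, schedEvents, ioManagerEvents,
Event, registerEventListener) are hypothetical placeholders, not
anything that exists in GHC today:

    -- Hypothetical sketch of the proposed listener API; none of these
    -- names exist in GHC today.
    import qualified Data.Bits as Bits
    import Data.Word (Word32, Word64)

    -- An event class is a bit mask, so classes can be combined.
    newtype EventClass = EventClass Word32

    schedEvents, ioManagerEvents, gcEvents :: EventClass
    schedEvents     = EventClass 0x1
    ioManagerEvents = EventClass 0x2
    gcEvents        = EventClass 0x4

    -- Combine two event classes, as in the example above.
    (.|.) :: EventClass -> EventClass -> EventClass
    EventClass a .|. EventClass b = EventClass (a Bits..|. b)

    -- Placeholder for whatever event type the RTS would hand out.
    data Event = Event { evTime :: Word64, evDescription :: String }

    -- Register a callback that only sees events in the given classes;
    -- a real implementation would need RTS support.
    registerEventListener :: EventClass -> (Event -> IO ()) -> IO ()
    registerEventListener _classes _callback =
      error "sketch only: requires RTS support"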

Johan
Johan Tibell | 1 May 10:44 2011

Re: Incrementally consuming the eventlog

On Fri, Apr 29, 2011 at 12:00 AM, Don Stewart <dons00 <at> gmail.com> wrote:
> I'm very interested in what the best way to get incremental event data
> from a running GHC process would be.
>
> Looking at the code, we flush the event buffer fairly regularly, but
> the event parser is currently strict.
>
> So we'd need a lazy (or incremental) parser, that'll return a list of
> successful event parses, then block. I suspect this mode would be
> supported.
>
> *My evil plan is to write a little monitoring web app that just
> attaches to the event stream and renders it in a useful "heartbeat"
> format* , but I need incremental parsing.

A less general solution might be to have the program itself start a
little web server on some port and use the API I proposed to serve
JSON data with the aggregate statistics you care about. Example:

    main = do
      eventData <- newIORef initialEventData  -- some empty initial aggregate
      server <- serveOn 8080 $ \ _req ->
        readIORef eventData >>= sendResponse
      registerEventListener $ \ ev -> updateEventData eventData ev
      runNormalProgram

You can wrap the creation of the webserver in a little helper function
and make any program "monitorable" simply by doing:

    main = withMonitoring runApp
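
A sketch of what that helper could look like, assuming hypothetical
stand-ins (serveOn, registerEventListener, EventData and so on) that
are stubbed out below purely so the sketch type-checks; none of this
is an existing API:

    import Data.IORef (newIORef, readIORef, atomicModifyIORef)

    data Event     = Event                             -- placeholder RTS event
    data EventData = EventData { eventCount :: Int }   -- aggregate statistics

    serveOn :: Int -> IO String -> IO ()               -- hypothetical web server
    serveOn _port _respond = return ()

    registerEventListener :: (Event -> IO ()) -> IO () -- hypothetical RTS hook
    registerEventListener _callback = return ()

    withMonitoring :: IO () -> IO ()
    withMonitoring app = do
      stats <- newIORef (EventData 0)
      -- Serve the current aggregate statistics on some port.
      serveOn 8080 (fmap (show . eventCount) (readIORef stats))
      -- Fold every event into the aggregate statistics.
      registerEventListener $ \_ev ->
        atomicModifyIORef stats $ \s ->
          (s { eventCount = eventCount s + 1 }, ())
      app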

Don Stewart | 1 May 19:22 2011

Re: Incrementally consuming the eventlog

I've put a library for incremental parsing of the event log here:

    http://code.haskell.org/~dons/code/ghc-events-stream/

The goal is to implement something like:

   http://www.erlang.org/doc/man/heart.html

On Sun, May 1, 2011 at 1:44 AM, Johan Tibell <johan.tibell <at> gmail.com> wrote:
> On Fri, Apr 29, 2011 at 12:00 AM, Don Stewart <dons00 <at> gmail.com> wrote:
>> I'm very interested in what the best way to get incremental event data
>> from a running GHC process would be.
>>
>> Looking at the code, we flush the event buffer fairly regularly, but
>> the event parser is currently strict.
>>
>> So we'd need a lazy (or incremental) parser, that'll return a list of
>> successful event parses, then block. I suspect this mode would be
>> supported.
>>
>> *My evil plan is to write a little monitoring web app that just
>> attaches to the event stream and renders it in a useful "heartbeat"
>> format* , but I need incremental parsing.
>
> A less general solution might be to have the program itself start a
> little web server on some port and use the API I proposed to serve
> JSON data with the aggregate statistics you care about. Example:
>
>    main = do
>      eventData <- newIORef

Duncan Coutts | 1 May 22:51 2011

Re: Incrementally consuming the eventlog

On Thu, 2011-04-28 at 23:31 +0200, Johan Tibell wrote:

> The RTS would invoke listeners every time a new event is written. This
> design has many benefits:
> 
> - We don't need to introduce the serialization, deserialization, and
> I/O overhead of first writing the eventlog to file and then parsing it
> again.

The events are basically generated in serialised form (via C code that
writes them directly into the event buffer). They never exist as Haskell
data structures, or even C structures.

> - Programs could monitor themselves and provide debug output (e.g. via
> some UI component).
> - Users could write code that redirects the output elsewhere e.g. to a
> socket for remote monitoring.
> 
> Would invoking a callback on each event add too big of an overhead?

Yes, by orders of magnitude. In fact it's impossible because the act of
invoking the callback would generate more events... :-)

> How about invoking the callback once every time the event buffer is
> full?

That's much more realistic. Still, do we need the generality of pushing
the event buffers through the Haskell code? For some reason it makes me
slightly nervous. How about just setting which output FD the event
buffers get written to?
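
Something along those lines might look like the sketch below. Only
createPipe (from the unix package) is real; setEventLogFd is a purely
hypothetical RTS hook that does not exist today:

    import System.Posix.IO (createPipe)
    import System.Posix.Types (Fd)

    -- Hypothetical RTS hook; no such function exists today.
    setEventLogFd :: Fd -> IO ()
    setEventLogFd _fd = error "sketch only: requires RTS support"

    -- Redirect the RTS's event buffers into a pipe and hand the read
    -- end to whatever consumer we like (a parser thread, a socket
    -- forwarder, ...).
    redirectEventLog :: IO Fd
    redirectEventLog = do
      (readEnd, writeEnd) <- createPipe
      setEventLogFd writeEnd
      return readEnd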

Duncan Coutts | 1 May 22:56 2011

Re: Package management

On Tue, 2011-04-26 at 14:05 -0700, Brandon Moore wrote:
> Based on my own misadventures and Albert Y. C. Lai's SICP 
> (http://www.vex.net/~trebla/haskell/sicp.xhtml)
> it seems that the root of all install problems is that reinstalling a
> particular version of a particular package deletes any other existing
> builds of that version, even if other packages already depend on them.
> 
> Deleting perfectly good versions seems to be the root of all package
> management problems.

Yes.

> There are already hashes to keep incompatible builds of a package separate. 
> Would anything break if existing packages were left alone when a new
> version was installed? (perhaps preferring the most recent if a
> package flag specifies version but not hash).

That is the nix solution. It is also my favoured long term solution.

> The obvious difficulty is a little more trouble to manually specify packages. 
> Are there any other problems with this idea?

See nix and how it handles the configuration and policy issues thrown up
by allowing multiple instances of the same version of each package. For
example, they introduce the notion of a package environment which is a
subset of the universe of installed packages.

Duncan
Don Stewart | 2 May 00:34 2011

Re: Incrementally consuming the eventlog

I've got a proof of concept event-log monitoring server and
incremental parser for event streams:

 * http://code.haskell.org/~dons/code/ghc-events-stream/

 * http://code.haskell.org/~dons/code/ghc-monitor/

Little screen shot of the snap server running, watching a Haskell
process' eventlog fifo:

 * http://i.imgur.com/Xfr9I.png

The main issue at the moment is that GHC schedules flushing of the
event log stream irregularly, so it might be hours or days before
you see any activity. This isn't useful for heartbeat style
monitoring.

Also, we need to break out a bit of ThreadScope to give access to its
analytics (e.g. rendering time series).

-- Don

On Sun, May 1, 2011 at 1:51 PM, Duncan Coutts
<duncan.coutts <at> googlemail.com> wrote:
> On Thu, 2011-04-28 at 23:31 +0200, Johan Tibell wrote:
>
>> The RTS would invoke listeners every time a new event is written. This
>> design has many benefits:
>>
>> - We don't need to introduce the serialization, deserialization, and

Bryan O'Sullivan | 2 May 04:59 2011

Re: Incrementally consuming the eventlog

On Thu, Apr 28, 2011 at 3:00 PM, Don Stewart <dons00 <at> gmail.com> wrote:
> So we'd need a lazy (or incremental) parser, that'll return a list of
> successful event parses, then block. I suspect this mode would be
> supported.

A while ago, I hacked something together on top of the current eventlog
parser that would consume an event at a time, and record the seek
offset of each successful parse. If parsing failed (due to unflushed
data), it would try again later. I think I might even claim that this
is a somewhat sensible and parsimonious approach, but I'm drinking wine
right now, so my judgment might be impaired.
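
A rough sketch of that seek-offset bookkeeping is below. The Event type
and parseEvent function are hypothetical stand-ins (the real ghc-events
parser has a different interface); the point is remembering the offset
of the last complete parse and retrying from there once more data has
been flushed:

    import qualified Data.ByteString as B
    import System.IO

    data Event = Event   -- placeholder

    -- Hypothetical: parse one event from the front of the buffer,
    -- returning the event and the number of bytes consumed, or
    -- Nothing if the data is incomplete (not yet flushed).
    parseEvent :: B.ByteString -> Maybe (Event, Int)
    parseEvent _ = Nothing

    -- Read from the given offset, returning the events that parsed
    -- completely and the new offset to resume from next time round.
    readNewEvents :: Handle -> Integer -> IO ([Event], Integer)
    readNewEvents h offset = do
      hSeek h AbsoluteSeek offset
      buf <- B.hGetNonBlocking h 65536
      let go bs consumed evs = case parseEvent bs of
            Just (ev, n) -> go (B.drop n bs) (consumed + n) (ev : evs)
            Nothing      -> (reverse evs, offset + fromIntegral consumed)
      return (go buf 0 [])
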
Don Stewart | 2 May 05:11 2011

Re: Incrementally consuming the eventlog

I managed to build one on top of attoparsec's lazy parser that "seems
to work" -- but I'd like ghc to flush a bit more regularly so I could
test it better.
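
For reference, attoparsec's incremental Result interface makes that
loop fairly direct (module and function names as in later attoparsec
releases). In the sketch below, eventParser is only a stub standing in
for a real grammar for eventlog records; the driving loop is the
interesting part:

    import qualified Data.Attoparsec.ByteString as A
    import qualified Data.ByteString as B
    import System.IO

    data Event = Event   -- placeholder

    eventParser :: A.Parser Event
    eventParser = fail "stub: the real eventlog grammar goes here"

    -- Repeatedly parse events from a handle, waiting for more input
    -- whenever the RTS has not yet flushed a complete record.
    streamEvents :: Handle -> (Event -> IO ()) -> IO ()
    streamEvents h onEvent = loop (A.parse eventParser B.empty)
      where
        loop (A.Done rest ev) = do
          onEvent ev
          loop (A.parse eventParser rest)
        loop (A.Partial cont) = do
          chunk <- B.hGetSome h 4096   -- may block until more is flushed
          loop (cont chunk)
        loop (A.Fail _ _ err) =
          hPutStrLn stderr ("parse error: " ++ err)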

-- Don

On Sun, May 1, 2011 at 7:59 PM, Bryan O'Sullivan <bos <at> serpentine.com> wrote:
> On Thu, Apr 28, 2011 at 3:00 PM, Don Stewart <dons00 <at> gmail.com> wrote:
>>
>> So we'd need a lazy (or incremental) parser, that'll return a list of
>> successful event parses, then block. I suspect this mode would be
>> supported.
>
> A while ago, I hacked something together on top of the current eventlog
> parser that would consume an event at a time, and record the seek offset of
> each successful parse. If parsing failed (due to unflushed data), it would
> try again later. I think I might even claim that this is a somewhat sensible
> and parsimonious approach, but I'm drinking wine right now, so my judgment
> might be impaired.
C Rodrigues | 2 May 23:20 2011

Elimination of absurd patterns

I was experimenting with using GADTs for subtyping when I found something interesting.  Hopefully someone can satisfy my curiosity.

Here are two equivalent GADTs. My understanding was that GHC would
translate "Foo" and "Bar" into isomorphic data types. However, GHC
6.12.3 generates better code for 'fooName' than for 'barName'. In
'fooName', there is no pattern match against 'FooExtra'. In 'barName',
there is a pattern match against 'BarExtra'. What makes these data
types different?


{-# LANGUAGE GADTs, EmptyDataDecls #-}

import Data.IORef (IORef)

data Tag
data TagExtra

--------

data Foo a where
  Foo :: String -> Foo a
  FooExtra :: IORef String -> Foo TagExtra

-- The cmm code for fooName does not match against 'FooExtra'
fooName :: Foo Tag -> String
fooName (Foo s) = s

--------

data Bar a where
  Bar :: String -> Bar a
  BarExtra :: a ~ TagExtra => IORef String -> Bar a

-- The cmm code for barName will try to pattern-match against 'BarExtra'
barName :: Bar Tag -> String
barName (Bar s) = s

