Markus Ongyerth | 7 Oct 09:49 2015

Handling multiple fds with GHC


Over the last few days, I have been trying to get an IO event system running with GHC, i.e. trigger an IO action when there is data to read from an fd.
I looked at a few different implementations, but all of them have some downside.

 * using select package
   - This uses the select(2) syscall, which is rather limited (fd numbers must stay below FD_SETSIZE, typically 1024)

 * using GHC.Event
   - GHC.Event is broken in 7.10.1 (unless unsafeCoerce and a hacky trick are used)
   - GHC.Event is GHC internal according to hackage
   - Both network libraries I looked at (network (Network.Socket) and socket (System.Socket)) crash the application when used with GHC.Event
   - With 7.8+ I didn't see a way to create your own EventManager, so it only works with -threaded

 * using forkIO and threadWaitRead for each fd in a loop
   - needs some kind of custom control structure around it
   - uses a separate thread for each fd
   - might become pretty awkward to handle multiple events

 * using poll package
   - blocks in a safe foreign call
   - needs some kind of wrapper

From the above list, GHC.Event isn't usable (for me) right now; it would require some work for my use case.
The select option is usable, but it suffers from the same problems as poll plus the fd limitation mentioned above, so it is strictly worse.

This leaves me with two options: poll and forkIO + blocking.

Those are based on two completely different approaches to event handling.

poll can be used in a rather classic event handling system with a main loop that blocks until an event occurs (or a timeout triggers) and handles the event in the loop.
forkIO + blocking is closer to registering an action later that should be triggered by an event.
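
For the forkIO variant, the custom control structure can be as small as a Chan that every watcher thread writes into, so the main loop still looks like a classic event loop. A rough sketch (MyEvent, watchFd and mainLoop are made-up names for illustration):

import Control.Concurrent (forkIO, threadWaitRead)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Monad (forever, void)
import System.Posix.Types (Fd)

-- Events delivered to the main loop; only readability for now.
data MyEvent = Readable Fd

-- One lightweight thread per fd; each blocks in threadWaitRead and
-- reports readiness over the shared Chan.
watchFd :: Chan MyEvent -> Fd -> IO ()
watchFd chan fd = void $ forkIO $ forever $ do
    threadWaitRead fd
    writeChan chan (Readable fd)

mainLoop :: Chan MyEvent -> IO ()
mainLoop chan = forever $ do
    Readable fd <- readChan chan
    -- dispatch the IO action registered for this fd here
    -- (a real handler would read the data before looping)
    putStrLn ("fd readable: " ++ show fd)

main :: IO ()
main = do
    chan <- newChan
    watchFd chan 0   -- watch stdin as a stand-in for a real fd
    mainLoop chan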

My main questions right now are:
1. How bad is it for the (non-threaded) runtime to be blocking in a foreign call most of the time?
2. How significant will the overhead be for the forkIO version?
3. Is there a *good* way to use something like threadWaitRead that allows waking up on other events as well?
4. Is there a better way to handle multiple fds that may get readable data at any time, in Haskell/with GHC right now?

Thanks in advance,
Evan Laforge | 2 Oct 04:02 2015

using nmitchell's space leak detection technique

Neil Mitchell wrote an article about finding space leaks by limiting
the stack size:

I'm giving it a try, but the results don't make any sense to me.  My
understanding is that the too-deep stack indicates that someone
created too many thunks, so when they get evaluated you have to
descend too many times.  And if the stack trace seems to be a
reasonable size (say 100 stack frames somehow consuming 1mb of stack)
then it means there is a recursive call in there somewhere that ghc is
helpfully eliding.  And lastly, that the thing about "Stack space
overflow: current size 33568 bytes." always showing 33568 bytes is due
to a ghc bug and it should actually be whatever limit I gave it.  Is
all this accurate?

The stack trace jumps around a lot, to places which are not lexically
present in the caller.  I assume this is just lazy evaluation, so e.g.
maybe 'f' doesn't call 'g', but if 'f' forces a value returned from
'g' which has not yet been forced, the stack will go to 'g'.  Also
sometimes things which seem like they should be run consecutively wind
up as caller and callee... but perhaps this is due to using a monad
with a success continuation, and without the SCCs would be eliminated
since they are tail calls... or maybe they are always eliminated and I
only see them in the stack when the elimination failed for some

At first I got a stack trace that ended in a function which is
recursive, but should be productive guarded recursion:

events_of :: [LEvent d] -> [d]
events_of [] = []
events_of (Event e : rest) = e : events_of rest
events_of (Log _ : rest) = events_of rest

It's my understanding that while this isn't tail recursive, it should
require only constant space.  Of course the consumer could retain it
all, but that wouldn't show up as too-deep recursion on that
particular function, would it?

Since ghc elides self-calls from the stack, the culprit could be one
of the functions above, and the one immediately above was part of
transforming a large list, which could well be insufficiently strict.

So I tried to break that out into its own call without the distraction
of the continuation monad etc., just a function from a list to a list.
The result is even more confusing:

% build/profile/RunProfile-SpaceLeak +RTS -K2905K -xc -RTS
force - *** Exception (reporting due to +RTS -xc): (THUNK_STATIC), stack trace:
  called from Derive.DeriveTest.mkevent_scale,
  called from Derive.DeriveTest.mkevent,
  called from Derive.SpaceLeak_profile.profile_cancel.make,
  called from Derive.SpaceLeak_profile.profile_cancel.f,
  called from Derive.SpaceLeak_profile.profile_cancel,
  called from Derive.SpaceLeak_profile.CAF

Somehow these 7 calls are consuming almost 3mb of stack, so there must
be a ton of elided recursion hiding in one of them.  The confusing
thing is that the profile_cancel function is basically just:

rnf $ suspicious_function $ map make [0 .. 1024*50]

So the stack is going into the 'make' function which is just a bunch
of constructors to produce the values that suspicious_function will
transform.  As far as I can tell, the only recursion is in 'map', and
the thunks from 'mkevent' down to 'PSignal.signal' only go 3 deep.
But suspicious_function is definitely suspicious, because if I remove
it, I no longer get an exception.  But how does it manage to not be in
the stack?

If suspicious_function is not a good consumer and is accumulating
thunks, how can I get 3 stack entries consuming 3mb?  If I rnf the
result of 'make' before running suspicious_function, I get a
completely different stack trace, and this one with '--> evaluated
by:' entries, and I don't really know what those mean.  But this one
at least has some recursive calls within suspicious_function.

I can keep debugging via trial and error and successive rnf's as Neil
describes in another article, but it would be nice to know what's
going on with the stack.  The stack seems fairly straightforward in
strict languages; is there some documentation on how to understand the
GHC version?
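
For reference, the successive-rnf bisection boils down to forcing each pipeline stage separately and seeing which force blows the stack. A sketch with stand-in definitions ('make' and 'suspicious_function' here are toy versions, not the real ones):

import Control.DeepSeq (force)
import Control.Exception (evaluate)

-- Toy stand-ins for the real functions from the profile.
make :: Int -> [Int]
make n = [n, n + 1, n + 2]

suspicious_function :: [[Int]] -> [[Int]]
suspicious_function = id

main :: IO ()
main = do
    -- Force stage 1 on its own first ...
    xs <- evaluate (force (map make [0 .. 1024 * 50]))
    -- ... then stage 2; whichever 'evaluate (force ...)' overflows the
    -- limited stack is the stage whose thunks nest too deeply.
    ys <- evaluate (force (suspicious_function xs))
    print (length ys)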

Any clues appreciated!
Sven Panne | 30 Sep 20:04 2015

Typing pattern synonyms

The type of a pattern synonym like

   pattern FOO = 1234

seems to be '(Eq a, Num a) => a', which partially makes sense, although it's not immediately clear to me where the 'Eq a' part comes from. But probably that would be clear if I read the desugaring rules closely enough. ;-) My real question is about:

   pattern FOO = 1234 :: Int

This doesn't compile out of the box; GHC seems to require ScopedTypeVariables, too:

    Illegal type signature: `Int'
      Perhaps you intended to use ScopedTypeVariables
    In a pattern type-signature

Why is this the case? From a user perspective, the error is totally bogus: there are no visible type variables at all. Can GHC be fixed to avoid requiring ScopedTypeVariables here?
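
For what it's worth, giving the synonym a standalone signature sidesteps the inline pattern type-signature entirely; a minimal sketch (I believe 7.10 accepts this signature form, but that is an assumption):

{-# LANGUAGE PatternSynonyms #-}

-- The standalone signature fixes the type without writing ':: Int'
-- inside the pattern itself.
pattern FOO :: Int
pattern FOO = 1234

describe :: Int -> String
describe FOO = "matched FOO"
describe _   = "something else"

main :: IO ()
main = putStrLn (describe 1234)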


Moritz Drexl | 29 Sep 17:39 2015

HEADS UP: Need 7.10.3?

Hi all,

For me, a 7.10.3 release is also urgently needed.

The showstopper is the problem with invoking GCC under Windows;
response files are an already-implemented solution. See:

I am unable to build a working GHC from the patched 7.10 branch.
I have exhausted all other avenues and am unable to make releases for
customers using Windows.


Moritz Drexl
Will Sewell | 29 Sep 11:03 2015

Fwd: Removing latency spikes. Garbage collector related?

Thanks for the reply, Greg. I have already tried tweaking these values
a bit, and this is what I found:

* I first tried -A256k because the L2 cache is that size (Simon Marlow
mentioned this can lead to good performance).
* I then tried a value of -A2048k because he also said "using a very
large young generation size might outweigh the cache benefits". I
don't exactly know what he meant by "a very large young generation
size", so I guessed at this value. Is it in the right ballpark?
* With -H, I tried values of -H8m, -H32m, -H128m, -H512m, -H1024m

But all of them led to worse performance than the defaults (and -H didn't
really have much effect at all).

I will try your suggestion of setting -A to the L3 cache size.

Are there any other values I should try setting these at?

As for your final point, I have run space profiling, and it looks like
>90% of the memory is used for our message index, which is a temporary
store of messages that have gone through the system. These messages
are stored in aligned chunks in memory that are merged together. I
initially thought this was causing the spikes, but they were still
there even after I removed the component. I will try to run space
profiling in the build with the message index.

Thanks again.

On 28 September 2015 at 19:02, Gregory Collins <greg <at>> wrote:
> On Mon, Sep 28, 2015 at 9:08 AM, Will Sewell <me <at>> wrote:
>> If it is the GC, then is there anything that can be done about it?
> Increase value of -A (the default is too small) -- best value for this is L3
> cache size of the chip
> Increase value of -H (total heap size) -- this will use more ram but you'll
> run GC less often
> This will sound flip, but: generate less garbage. Frequency of GC runs is
> proportional to the amount of garbage being produced, so if you can lower
> mutator allocation rate then you will also increase net productivity.
> Built-up thunks can transparently hide a lot of allocation so fire up the
> profiler and tighten those up (there's an 80-20 rule here). Reuse output
> buffers if you aren't already, etc.
> G
> --
> Gregory Collins <greg <at>>
Will Sewell | 28 Sep 18:08 2015

Removing latency spikes. Garbage collector related?

Hi, I was told in the #haskell IRC channel that this would be a good
place to ask this question, so here goes!

We’re writing a low-latency messaging system. The problem is we are
getting a lot of latency spikes. See this image: (yellow to red is the 90th percentile),
which shows end-to-end latency of messages through the system.

I have tried to eliminate the problem by removing parts of the system
that I suspected to be expensive, but the spikes are still there.

I’m now thinking that it’s the GC. As you can see in this output from
ghc-events-analyze, work on the GC thread (red) seems to be blocking
work on the main program thread (green)
(x axis is time, darkness of buckets is % CPU time).

*Note: the graphs are not of the same run, but are typical*

Do you think the GC is the most likely culprit?
Is there anything I can do to confirm this hypothesis? (I looked into
turning off the GC, but this seems tricky)
If it is the GC, then is there anything that can be done about it?
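
One way to check from inside the program would be to sample GC statistics around a batch of work; a sketch (assuming the GHC.Stats field names in 7.10 and that the binary is run with "+RTS -T"):

import GHC.Stats (GCStats(..), getGCStats)

-- Compare GC statistics before and after one batch of work to see how
-- much wall-clock time the collector accounted for in that interval.
-- Requires running the program with "+RTS -T".
main :: IO ()
main = do
    before <- getGCStats
    -- ... push one batch of messages through the system here ...
    after <- getGCStats
    putStrLn $ "GCs in interval: " ++ show (numGcs after - numGcs before)
    putStrLn $ "GC wall seconds: "
            ++ show (gcWallSeconds after - gcWallSeconds before)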

Herbert Valerio Riedel | 23 Sep 13:17 2015

ANN: CfN for new Haskell Prime language committee

Dear Haskell Community,

In short, it's time to assemble a new Haskell Prime language
committee. Please refer to the CfN at

for more details.



PGP fingerprint: 427C B69A AC9D 00F2 A43C  AF1C BA3C BA3F FE22 B574
Austin Seipp | 14 Sep 15:53 2015

HEADS UP: Need 7.10.3?

Hi *,

(This is an email primarily aimed at users reading this list and
developers who have any interest).

As some of you may know, there's currently a 7.10.3 milestone and
status page on our wiki:

The basic summary is best captured on the above page:

"We have not yet decided when, or even whether, to release GHC 7.10.3.
We will do so if (but only if!) we have documented cases of
"show-stoppers" in 7.10.2. Namely, cases from users where

  - You are unable to use 7.10.2 because of some bug
  - There is no reasonable workaround, so you are truly stuck
  - We know how to fix it
  - The fix is not too disruptive; i.e. does not risk introducing a
raft of new bugs"

That is, we're currently not fully sold on the need for a release.
However, the milestone and issue page serve as a useful guide, and
also make it easier to keep track of smaller, point-release-worthy changes.

So in the wake of the 8.0 roadmap I just sent: If you *need* 7.10.3
because the 7.10.x series has a major regression or problem you can't
work around, let us know!

  - Find or file a bug in Trac
  - Make sure it's highest priority
  - Assign it to the 7.10.3 milestone
  - Follow up on this email if possible, or edit the status page text
above - it would be nice to get some public feedback in one place
about what everyone needs.

Currently we have two bugs (possibly the same bug) in the 'show stopper'
category on the listed page, which I believe is a deal-breaker for HERMIT.
Knowing of anything else would be very useful.

Thanks all!



Austin Seipp, Haskell Consultant
Well-Typed LLP,
Austin Seipp | 14 Sep 15:47 2015

HEADS UP (devs, users): 8.0.1 Roadmap

Hi *,

I've returned from vacation, and last week Simon, Simon and I met up
again after a long break, and talked a bit about the upcoming release.

The good news is that it is going to be an exciting one! The flip side
is, there's a lot of work to be done!

The current plan we'd roughly like to follow is...

  - November: Fork the new `ghc-8.0` STABLE branch
    - At this point, `master` development will probably slow as we fix bugs.
    - This gives us 2 months or so until the branch, from today.
    - This is nice as the branch is close to the first RC.
  - December: First release candidate
  - Mid/late February: Final release.

Here's our current feature roadmap (in basically the same place as all
our previous pages):

As time goes on, this page will be updated to reflect Reality™ and
track it as closely as possible. So keep an eye on it! It's got the
roadmap (near top) and large bug list (bottom).

Now, there are some things we need, so depending on who you are, please...

  - *Users*: please look over the bug list! If there's a bug you need
fixed that isn't there, set it to the 8.0.1 milestone (updated in
Trac). If this bug is critical to you, please let us know! You can
bump the priority (if we disagree, or it's workaround-able, it might
get bumped down). We just need a way to see what you need, so please
let us know somehow!

As a reminder, our regular policy is this: if a bug is NOT marked
highest or high priority, it is essentially 100% the case we will not
look at it. So please make sure this is accurate. Or if you can, write
a patch yourself!

  - *Developers*: double check the roadmap list, _and if you're
responsible for something, make sure it is accurate!_

There are some great things planned to land in HEAD, but we'll have to
work for it. Onward!

  - A better LLVM backend for Tier-1 platforms
  - Types are kinds and kind equality
  - Overloaded record fields!
  - Enhancements to DWARF debugging
  - ApplicativeDo
  - ... and many more...

Thanks everyone!



Austin Seipp, Haskell Consultant
Well-Typed LLP,
Evan Laforge | 31 Jul 20:10 2015

simultaneous ghc versions

The recent release of ghc 7.10.2 reminded me of something I meant to
ask about a long time ago.  Most of the binaries ghc installs are
versioned (x linked to x-7.10.2), with some exceptions (hpc and
hsc2hs).  Shouldn't they all be versioned?  Also, 'haddock' is
inconsistent with all the rest, in that it's haddock linked to

I've long used a few shell scripts (recently upgraded to python) to
manage ghc installs.  A 'set' which creates symlinks to make a
particular version current, and an 'rm' to remove all traces of a
version.  But due to the inconsistency, I have to remember to run
"fix" first, which moves the unversioned binaries to versioned names.

As an aside, I have three scripts I use all the time: set version,
remove version, and remove library.  Come to think of it, shouldn't
ghc include this, instead of everyone creating their own shell scripts
by hand?
Christian Maeder | 24 Jul 09:59 2015

broken source link


When trying to look up the original definition for
Data.List.transpose in
I found that the source link
does not work.

Could this be fixed? Or should I look elsewhere for the sources?

Cheers Christian

P.S. my looking up transpose was inspired by