David Miller | 1 Oct 07:24 2009

Re: ClojureCLR and CLR structs


Should be fixed in the latest commit.
Any of the following will work.

(System.Reflection.Assembly/Load "WindowsBase, Version=3.0.0.0,
Culture=neutral, PublicKeyToken=31bf3856ad364e35")
(import '(System.Windows.Media Matrix))
(defn b [m] (doto m (.Scale 2.0 3.0)))
(defn a1 [] (b (Matrix.)))
(defn a2 [] (doto (Matrix.) (.Scale 2.0 3.0)))
(defn a3 [] (let [m (Matrix.)] (doto m (.Scale 2.0 3.0))))
(defn a4 [] (let [m (Matrix.)] (doto m (.Scale 2.0 3.0)) m))
(defn a5 [] (let [m (Matrix.)] (.Scale m 2.0 3.0) m))
(defn a6 [] (let [m (Matrix.)] (. m (Scale 2.0 3.0)) m))
(defn a7 [] (let [#^Matrix m (Matrix.)] (. m (Scale 2.0 3.0)) m))
(def a (Matrix.))
(defn a8 [] (let [ m a] (. m (Scale 2.0 3.0)) m))
(defn a9 [] (let [#^Matrix m a] (. m (Scale 2.0 3.0)) m))

-David
samppi | 1 Oct 08:07 2009

On Google App Engine and reference types


I've heard a lot that Google App Engine's use of many JVMs makes
Clojure refs, etc. less useful than they are on one JVM. I'm curious:
what are the consequences of using a ref normally on Google App
Engine? How would it break? Would each JVM invisibly store a
different value for the ref at any given time?
Ze maria | 1 Oct 02:06 2009

Debugging questions

Hello guys,

I've read Stuart Halloway's book, worked through it, and I've been programming in Clojure for the last few weeks.
I'm using Clojure 1.0.1-alpha-SNAPSHOT with emacs + slime + swank.

Until now everything went smoothly, but as my code grew bigger it naturally became harder to debug. When I have a bug, say in a file called example.clj:

; trying to get the first element of an unordered set
(nth (clojure.set/select #(= 0 (mod % 2)) #{1 2 5 10}) 0)

When I loaded the file and ran the code, I was expecting the same answer I would get by running the same line in the REPL:

java.lang.UnsupportedOperationException: nth not supported on this type: PersistentHashSet (NO_SOURCE_FILE:0)

However, the message I got was this one:
java.lang.RuntimeException: java.lang.ClassCastException: clojure.lang.LazySeq (NO_SOURCE_FILE:0)

Why do I get different messages when running from the REPL versus loading from a .clj file?
Shouldn't the returned message include the line where the error occurred, and the stack trace? Without the line number, things are way harder :)

At some point (I have no idea why) I even got the following message
from the REPL (while not interacting with it):

Invalid memory access of location 0x4 eip=0x13a9be5
Process inferior-lisp bus error

Any clues ?
Thanks in advance,
Jose

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure <at> googlegroups.com
Note that posts from new members are moderated - please be patient with your first post.
To unsubscribe from this group, send email to
clojure+unsubscribe <at> googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
-~----------~----~----~----~------~----~------~--~---

Fredrik Ohrstrom | 1 Oct 09:47 2009

Re: Timing, JIT, Heisen-code


You are trusting nanoTime to actually return values that are useful!
They might be completely useless for your timing purposes. :-)
It all depends on the hardware you are running on.

Read the spec:
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/System.html#nanoTime()

The reason is that the granularity of the system timer can be 1 ms:
if you are running a Linux kernel that ticks 1000 times per second,
then you will get nanoTime samples spaced 1 ms apart.
(On even older systems you have 100 ticks per second.)

Of course, modern CPUs usually have a hardware counter that updates
on every clock tick, and this can usually be recalculated into
nano-time. But there is no guarantee that your code is actually
executing at any given moment; the OS might schedule something else.
The CPU might also enter a low-power mode and stall the clock, and
suddenly you have bad resolution again.

To be on the safe side, run the inner loop many, many times so that
the measured execution time spans several seconds, and use
System.currentTimeMillis to do the measuring. In more complex tests
that actually work with the disk, you also have to run the test once
first to warm up the disk caches.
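The loop-based measurement described above might be sketched like this in Java (the class, method, and field names are illustrative, not code from the thread):

```java
// Illustrative sketch: time many iterations with the coarse
// currentTimeMillis clock and divide once, so timer granularity is
// amortized over the whole run.
public class CoarseTimer {
    static long sink; // consumed result, keeps the loop from being trivially dead

    static double avgMillisPerIteration(int iterations) {
        long t0 = System.currentTimeMillis();
        for (int i = 0; i < iterations; i++) {
            sink += i ^ (i << 1); // stand-in for the expression under test
        }
        long t1 = System.currentTimeMillis();
        return (t1 - t0) / (double) iterations;
    }

    public static void main(String[] args) {
        // Pick a count large enough that the total run spans seconds.
        System.out.println("avg ms/iteration: " + avgMillisPerIteration(100_000_000));
    }
}
```

With enough iterations the millisecond-level error at the two endpoints is spread over every iteration, at the cost of a long-running benchmark.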

//Fredrik

2009/9/30 Matt Brown <mrbrown23 <at> gmail.com>:
>
> Hi.
>
> Thanks all, for your comments.
>
>> You need to use a bigger number than 1000 for these results to be meaningful.
>
> Out of curiosity, why is this?
> Does the JIT do something different after 1000 iterations?
> Or is the concern simply that the variance of the mean estimate is too
> high with 1000 vs. 10^6 iterations (due to OS background processes,
> etc.)? I originally reported ranges for average execution times (over
> about a dozen runs) to address this particular concern.
>
>> FWIW, I've run both on my Toshiba dual core laptop with ubuntu, and
>> they return approximately the same values.
>>
>> (and there is some JIT trickery going on, as I got:
>> user=> (myavgtime (+ 1 2 3) 1000 mytime1)
>> (myavgtime (+ 1 2 3) 1000 mytime1)
>> 0.0029811580000000306
>> user=> (myavgtime (+ 1 2 3) 100000 mytime1)
>> (myavgtime (+ 1 2 3) 100000 mytime1)
>> 0.0017426112899993846
>> user=> (myavgtime (+ 1 2 3) 1e8 mytime1)
>> (myavgtime (+ 1 2 3) 1e8 mytime1)
>> 0.0015456479935035251
>>
>> Although the last one ran for quite a bit longer than ,0015)
>
> Thanks for posting this!
> One explanation for a decreasing mean execution time with an
> increasing number of iterations is this: The first iteration's
> execution time is relatively large because it's not JIT optimized
> (0.018 msec on my system). Increasing the number of iterations means
> you're averaging a larger number of small, JIT optimized individual
> execution times (reported as 0.000-0.001 msec on my system) into that
> initial larger value. The mean therefore becomes asymptotically
> smaller with larger numbers of iterations. Is there something else
> going on here as well though (eg: JIT stuff)?
>
> Using a million iterations, I got:
> (myavgtime (+ 1 2 3) 1e6 mytime1) -> 0.00027 - 0.00029 msec (over a
> dozen repeats)
> (myavgtime (+ 1 2 3) 1e6 mytime2) -> 0.00068 msec (single run,
> printing 10^6 lines takes a long time, I was too impatient for
> repeats)
>
> So, using mytime1 is still just over 2x faster than mytime2 with 10^6
> iterations.
>
> cheers
> Matt
>
>>
>> On Sep 29, 12:08 pm, Matt <mrbrow... <at> gmail.com> wrote:
>>
>> > Hi. I'm getting a three-fold difference in timing results when I add a
>> > seemingly trivial println to observe what's going on. Consider:
>>
>> > (defmacro mytime1
>> >   "Returns execution time of expr in millisecs"
>> >   [expr]
>> >   `(let [time0# (. System nanoTime)
>> >          exprval# ~expr
>> >          time1# (/ (double (- (. System nanoTime) time0#)) 1000000.0)]
>> >     time1#))
>>
>> > (defmacro mytime2
>> >   "Prints out execution time of expr in millisecs and returns it"
>> >   [expr]
>> >   `(let [time0# (. System nanoTime)
>> >          exprval# ~expr
>> >          time1# (/ (double (- (. System nanoTime) time0#)) 1000000.0)]
>> >     (println "elapsed time (msec): " time1#)
>> >     time1#))
>>
>> > Timing macros mytime1 and mytime2 differ only in that mytime2 has the
>> > println expression in the second last line. The println in mytime2
>> > comes after time1# is assigned, so the println expression's execution
>> > time shouldn't be counted. I confirmed this assumption by testing.
>> > (mytime1 (+ 1 2 3)) and (mytime2 (+ 1 2 3)) both return values in the
>> > 0.05 to 0.08 msec range (on a single call, i.e. without Hotspot
>> > optimization).
>>
>> > (defmacro myavgtime
>> >   "Calls timefunc on expr n times and returns average of the n
>> > execution times"
>> >   [expr n timefunc]
>> >   `(loop [cumsum# 0 i# ~n]
>> >     (if (<= i# 0)
>> >       (/ cumsum# ~n )
>> >       (recur (+ cumsum# (~timefunc ~expr)) (dec i#) ))))
>>
>> > Results:
>> > (myavgtime (+ 1 2 3) 1000 mytime1) returns average execution times in
>> > the 0.0005 - 0.0008 msec range.
>>
>> > (myavgtime (+ 1 2 3) 1000 mytime2) returns average execution times in
>> > the 0.0014 - 0.0018 msec range, after printing 1000 lines:
>> > elapsed time (msec):  0.0870
>> > elapsed time (msec):  0.0010
>> > elapsed time (msec):  0.0020
>> > elapsed time (msec):  0.0010
>> > elapsed time (msec):  0.0010
>> > ...
>> > <990 similar output lines not shown>
>> > ...
>> > elapsed time (msec):  0.0010
>> > elapsed time (msec):  0.0010
>> > elapsed time (msec):  0.0010
>> > elapsed time (msec):  0.0010
>> > elapsed time (msec):  0.0010
>>
>> > So, using mytime2 with the myavgtime macro gives average execution
>> > times for the expression (+ 1 2 3) of 2 to 3 times longer than when
>> > using mytime1. Why is this? Does the JIT optimize differently with all
>> > those println's when using mytime2? (Kind of "quantum mechanics-y" -
>> > observing what's going on changes it.)
>>
>> > thanks for any insight here!
>> > Matt
>>
>> > System specs:
>> > MacBook Pro, Core2Duo 2.33GHz, 2GB RAM
>> > OSX 10.5.8 Leopard
>> > Clojure 1.1.0-alpha-SNAPSHOT
>> > java version "1.6.0_15"
>> > Java(TM) SE Runtime Environment (build 1.6.0_15-b03-226)
>> > Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02-92, mixed mode)
> >
>

Fredrik Ohrstrom | 1 Oct 10:13 2009

Re: Timing, JIT, Heisen-code


Oh, and by the way: the expression (+ 1 2 3) is indeed a constant.
Be very wary of tests that measure the execution speed of constants,
since it is definitely a goal of the JVM developers to compile these
into a single constant mov in the machine code! Such code is so fast
to execute that any changes in timing depend on completely unrelated
things.

Not only that, you do not use the result of the expression! This
means that the JVM might dead-code-eliminate the whole expression,
and you will be timing... nothing...

The JVM might not always succeed with these two optimizations, but I
think you would be surprised how often it does! To be on the safe
side, the expression should in some way rely on a value from a static
(non-final) object field, and you should store the result of the
expression into some global static field.

This usually guarantees that the expression cannot be compiled into
a constant or dead-code eliminated.
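As a sketch of this static-field trick (illustrative names, not code from the thread):

```java
// Illustrative sketch of defeating constant folding and dead-code
// elimination: the input is read from a non-final static field (so the
// JIT cannot fold the expression into a constant) and the result is
// stored back into a static field (so the computation cannot be
// eliminated as dead code).
public class NoFold {
    static int input = 3;  // non-final: must be loaded on each use
    static long result;    // published: the JIT must keep the computation

    static void run(int iterations) {
        for (int i = 0; i < iterations; i++) {
            result = input + input + input; // analogue of (+ 1 2 3), but not foldable
        }
    }
}
```

Timing NoFold.run then measures the additions themselves rather than a precomputed constant.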

//Fredrik

2009/10/1 Fredrik Ohrstrom <oehrstroem <at> gmail.com>:
> <earlier message and the rest of the thread quoted in full; not shown>

DomS | 1 Oct 14:29 2009

REPL integration in an existing Java project


Hello all,

I'm trying to integrate a REPL into an existing Java project, but I
don't know how it works or where to start.

At first it should only have the functionality of an ordinary clj
REPL. Later we want to extend it with some functions and macros to
control our software via a "console".

Help would be appreciated.

Thanks,
Dominik

Shawn Hoover | 1 Oct 16:24 2009

Re: ClojureCLR and CLR structs

Works for me. Thanks!

On Thu, Oct 1, 2009 at 1:24 AM, David Miller <dmiller2718 <at> gmail.com> wrote:

<David's message quoted in full; not shown>




Hadley Wickham | 1 Oct 16:49 2009

Citing Clojure


Hi all,

Are there any preferences for citing Clojure in academic publications?

Thanks,

Hadley

Jonathan Smith | 1 Oct 18:08 2009

Re: Timing, JIT, Heisen-code


On Sep 30, 1:18 pm, Matt Brown <mrbrow... <at> gmail.com> wrote:
> Hi.
>
> Thanks all, for your comments.
>
> > You need to use a bigger number than 1000 for these results to be meaningful.
>
> Out of curiousity, why is this?
> Does the JIT do something different after 1000 iterations?
> Or is the concern simply that the variance of the mean estimate is too
> high with 1000 vs. 10^6 iterations (due to OS background processes,
> etc.)? I originally reported ranges for average execution times (over
> about a dozen runs) to address this particular concern.
>

Generally you get some error introduced by the GC and OS background
processes.

Doing it over a dozen repeats is similar to doing it 10^5 times, but
that still runs for less than a second total and is still open to
interpretation.

But anyway, running it for different lengths of time also lets you
see that there is something else going on, and leads to more
meaningful questions: why do I run this ten million times and get the
same (close enough) result as running it 1000 times?

> > FWIW, I've run both on my Toshiba dual core laptop with ubuntu, and
> > they return approximately the same values.
>
> > (and there is some JIT trickery going on, as I got:
> > user=> (myavgtime (+ 1 2 3) 1000 mytime1)
> > (myavgtime (+ 1 2 3) 1000 mytime1)
> > 0.0029811580000000306
> > user=> (myavgtime (+ 1 2 3) 100000 mytime1)
> > (myavgtime (+ 1 2 3) 100000 mytime1)
> > 0.0017426112899993846
> > user=> (myavgtime (+ 1 2 3) 1e8 mytime1)
> > (myavgtime (+ 1 2 3) 1e8 mytime1)
> > 0.0015456479935035251
>
> > Although the last one ran for quite a bit longer than ,0015)
>
> Thanks for posting this!
> One explanation for a decreasing mean execution time with an
> increasing number of iterations is this: The first iteration's
> execution time is relatively large because it's not JIT optimized
> (0.018 msec on my system). Increasing the number of iterations means
> you're averaging a larger number of small, JIT optimized individual
> execution times (reported as 0.000-0.001 msec on my system) into that
> initial larger value. The mean therefore becomes asymptotically
> smaller with larger numbers of iterations. Is there something else
> going on here as well though (eg: JIT stuff)?
>

Most likely it is because the JIT compiler has completely eliminated
the operation: you are measuring the time elapsed between
System/nanoTime calls.

System/nanoTime has some granularity, so it could be that doing a
print forces each measurement to take at least one tick of
System/nanoTime, whereas without the print multiple measurements can
somehow fall within the same tick.

In other words, the counter behind System/nanoTime is not necessarily
synchronized with the calls that read it, but doing the print forces
each measurement onto at least one separate tick.
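The granularity being speculated about here can be probed directly; a small illustrative Java sketch (names are mine, results vary by OS and hardware):

```java
// Read nanoTime in a tight loop and record the smallest non-zero step
// the clock actually takes; that is an estimate of its effective
// granularity on this machine.
public class NanoGranularity {
    static long smallestStep(int samples) {
        long min = Long.MAX_VALUE;
        long prev = System.nanoTime();
        for (int i = 0; i < samples; i++) {
            long now = System.nanoTime();
            long step = now - prev;
            if (step > 0 && step < min) min = step;
            prev = now;
        }
        return min;
    }

    public static void main(String[] args) {
        System.out.println("smallest observed step (ns): " + smallestStep(1_000_000));
    }
}
```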

> Using a million iterations, I got:
> (myavgtime (+ 1 2 3) 1e6 mytime1) -> 0.00027 - 0.00029 msec (over a
> dozen repeats)
> (myavgtime (+ 1 2 3) 1e6 mytime2) -> 0.00068 msec (single run,
> printing 10^6 lines takes a long time, I was too impatient for
> repeats)
>
> So, using mytime1 is still just over 2x faster than mytime2 with 10^6
> iterations.
>
> cheers
> Matt
>
Perhaps you could retry with some other side-effecting operation,
like making a Java array of one boolean and flipping the boolean back
and forth? (And also something with a longer timeframe.)

For measurements like these, I think it makes more sense to time the
loop itself than to time the individual calls and sum them.

Looping overhead is small, and with a non-trivial operation it should
be outweighed by whatever else is going on.
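Timing the loop as a whole, as suggested, might look like this (illustrative Java, not code from the thread):

```java
// Illustrative sketch: read the clock twice for the entire loop instead
// of once per iteration, so per-call overhead and clock granularity are
// spread across every iteration.
public class LoopTimer {
    static long sink; // keeps the loop body observable to the JIT

    static double avgNanosPerIteration(int iterations) {
        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += 1 + 2 + 3 + i; // expression under test, kept live via sink
        }
        long t1 = System.nanoTime();
        return (t1 - t0) / (double) iterations;
    }
}
```

Two clock reads total means the nanoTime cost itself is no longer part of every sample, unlike the per-call mytime1/mytime2 approach.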

> > <original message quoted in full; not shown>
Emeka | 1 Oct 18:24 2009

Re: Re: "Schema" for data structures

Artyom,

(provide/contract
   [interp  (-> AE? number?)])

;; interpret an arithmetical expression yielding a number
(define (interp exp)
  ;; type-case is very much like a "case ... of" in Haskell/ML
  (type-case AE exp
    (num (n) n)
    (plus (l r) (+ (interp l) (interp r)))
    (sub (l r) (- (interp l) (interp r)))))

It also looks like Clojure's condp.

Emeka

Contracts work only between module boundaries though.

Cheers,
Artyom Shalkhakov.





