Re: jsr166y.forkjoin API comments
2008-02-02 00:46:39 GMT
I thought there was a relaxation in one of the more recent JDKs that allowed you to say, "just use the hardware FP for this operation".
Extremetech's podcast this week talks with someone from the NVIDIA CUDA group. I think this stuff is probably closer than I anticipated. They can do single precision now, and double precision is a priority for them. If you're going to bend the ParallelArray API enough to compete with other languages on numerical processing, it has to be 'bent' far enough to support this sort of hardware acceleration; otherwise JNI is the correct choice, not ParallelArray. If you can't support hardware acceleration, then the implementation should merely be clear and concise, because a 4-fold or 10-fold improvement in throughput will mean nothing if JNI offers a 100-fold improvement.
In my JVM utopia, the JVM would be able to detect that far too much auto-boxing or auto-unboxing is going on in a particular call graph, and it would generate specializations of all of the methods involved, with the boxing reversed from whatever the person implemented (Integer in place of int, or vice versa). The tricky bit would be that the new methods would have to be added without colliding with the namespace that already exists. I suspect that could be quite a pain for debuggers, profilers, and stack traces.
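As a purely hypothetical illustration (the class and method names here are invented, not part of any JDK), this is the kind of rewrite such a specializing JVM would perform behind the scenes: the boxed reduction on the left of the transformation becomes the primitive one on the right, once the VM proves the boxes are never used as objects.

```java
// Sketch only: the same reduction written against boxed and primitive types.
// A specializing JVM would transform sumBoxed into something like sumPrimitive
// when it can prove nullability, identity, and wrapper methods are unused.
public class SpecializationSketch {
    static Integer sumBoxed(Integer[] a) {
        Integer s = 0;              // every += unboxes, adds, and re-boxes
        for (Integer v : a) s += v;
        return s;
    }

    static int sumPrimitive(int[] a) {
        int s = 0;                  // stays in a register the whole loop
        for (int v : a) s += v;
        return s;
    }

    public static void main(String[] args) {
        Integer[] boxed = {1, 2, 3, 4};
        int[] prim = {1, 2, 3, 4};
        System.out.println(sumBoxed(boxed) + " == " + sumPrimitive(prim));
    }
}
```

Both produce the same value; the difference is purely in the allocation and indirection the boxed version drags along.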
Besides computational overhead, the only things that separate Integer from int or Float from float are:
1) Nullability (this one is tough, since Java can't easily determine that a reference can never be null)
2) Object identity (== compares references, and the wrapper can be used as a lock)
3) Convenience methods
If you could prove that none of the 3 occurs (and in numerical code, none of those had better be occurring), and that the computation would benefit from being transformed into using primitives, then ParallelArray wouldn't need specialization at the API level, because the VM could make the same optimizations.
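A small sketch of those three differences in practice (note that the JLS only guarantees the Integer cache for -128..127, so == on larger boxed values is unspecified):

```java
// Demonstrates the observable differences between Integer and int:
// nullability, object identity, and wrapper convenience methods.
public class BoxedVsPrimitive {
    public static void main(String[] args) {
        // 1) Nullability: an Integer reference may be null; unboxing it throws
        Integer maybe = null;
        try {
            int x = maybe;                      // auto-unboxing a null
        } catch (NullPointerException e) {
            System.out.println("unboxing null throws NPE");
        }

        // 2) Identity: == compares references, not values.
        Integer a = 127, b = 127;               // cached, so a == b is true
        Integer c = 1000, d = 1000;             // c == d is unspecified by the JLS
        System.out.println(a == b);             // true
        System.out.println(c.equals(d));        // true; use equals, not ==

        // 3) Convenience methods live on the wrapper class
        System.out.println(Integer.toBinaryString(42));  // 101010
    }
}
```

None of these three behaviors should ever appear in a hot numerical loop, which is exactly what would let a specializing VM strip the boxes away.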
jason marshall wrote:
> The IEEE incompatibilities may make that a non-starter for Java.
> For floats, the prevailing winds suggest you're going to use GPGPU for
> SIMD. IEEE incompatibilities notwithstanding.
Currently Java can't even use instructions like the fused multiply-accumulate present on some processors. JSR-84 (Floating Point Extensions) was withdrawn.
_______________________________________________
Concurrency-interest mailing list
Concurrency-interest <at> altair.cs.oswego.edu
http://altair.cs.oswego.edu/mailman/listinfo/concurrency-interest