Nathan Cain | 2 Mar 19:15 2009

FPGA targets?

Hello,

I am wondering what work is being done/planned for targeting FPGA platforms.  My interests overlap with this field, and I would like to contribute towards such an effort.  I have experience with quite a few "non-traditional" HDLs, and would like to help OMeta to grow into the first "meta" design language.

Any information about the intended direction would be great.

Thanks,
--Nate

Ian Piumarta | 2 Mar 22:46 2009

Re: [Ometa] FPGA targets?

Nate,

> I am wondering what work is being done/planned for targeting FPGA  
> platforms.

We have lots of desires in this direction, but no concrete plans.

Several people in our group are interested in FPGAs and own small  
experimenter's boards.  We keep an eye on developments (notably the  
BEE3) and spend some time tinkering with our smaller boards, but lack  
expertise and manpower to do anything serious.  If we did have  
expertise and resources we'd probably be looking at new hardware  
architectures that are optimised for the structures at the lowest  
level of our software architecture and at targeting FPGAs the same  
way we target off-the-shelf CPUs.

Alan once said that hardware is just software crystalised early (or  
something similar) and I think that echoes the instincts of many  
people in this group.

> My interests overlap with this field, and I would like to  
> contribute towards such an effort.

I'd love to see somebody figure out how to dynamically generate bit  
files from an intermediate representation (Jolt ASTs, for example) to  
allow reprogramming of the hardware on the fly.  (My memory is  
terrible but I think Hans-Martin Mosner was interested in this and  
maybe even making headway.)  It would be a lot of fun to use this to  
make self-modifying hardware.  Chuck Thacker has designed a very  
simple CPU (three pages of Verilog) that could be a 'traditional'  
target CPU for our ASTs, letting us bring up the dynamic netlist  
generation and hence a fully self-modifying environment on FPGA.   
Again, we lack expertise in this area and don't really even know how  
feasible it is for these devices to address their own LUTs and  
reprogram themselves selectively on the fly.

> I have experience with quite a few "non traditional" HDLs, and  
> would like to help OMeta to grow into the first "meta" design  
> language.

We would love to see this happen too, and can offer encouragement and  
consultation as and when you need them.

Regards,
Ian

Luke Breuer | 3 Mar 05:42 2009

Re: FPGA targets?

On Mon, Mar 2, 2009 at 10:15 AM, Nathan Cain <nate@inverse-engineering.com> wrote:
> I am wondering what work is being done/planned for targeting FPGA platforms.  My interests overlap with this field, and I would like to contribute towards such an effort.  I have experience with quite a few "non-traditional" HDLs, and would like to help OMeta to grow into the first "meta" design language.

I would like to be kept in the loop.  I started looking into higher level HDLs and took some notes [1], but did not get very far.  I will be refreshing my VHDL knowledge within the next month for a project at school and would love to also discuss how to take things "higher level".  I wonder if IS (see STEPS [2]) could be made to target FPGAs -- in fact, this might almost be an ideal situation if I understand IS correctly.  (I am not sure how OMeta compares to IS.) 

I think developing a higher level HDL is a fantastic idea and almost required to research what "mega-core" processors should look like.  People who are trying to figure out parallel programming "languages" without understanding HDLs are, well, an enigma to me.  The "orthogonalization" that the VPRI folks are working on (see [2]) seems like it will also be crucial.  I have not found anyone else trying to break programs down in a truly interesting way.

Luke


[1] http://luke.breuer.com/time/item/Lukes_FPGA_work/327.aspx
[2] http://www.vpri.org/pdf/tr2007008_steps.pdf
Gerardo Richarte | 3 Mar 12:14 2009

Re: Re: [Ometa] FPGA targets?

Hi Ian,

Ian Piumarta wrote:
> I'd love to see somebody figure out how to dynamically generate bit
> files from an intermediate representation (Jolt ASTs, for example) to
> allow reprogramming of the hardware on the fly.
Take a look at project Madeo
(http://www.esug.org/Conferences/2008/Innovation+Technology+Awards/Submissions)
If I recall correctly from their ESUG presentation, they could generate
bit files for some specific FPGAs, and all their work was Smalltalk-based.

    richie

Jecel Assumpcao Jr | 3 Mar 20:34 2009

Re: Re: [Ometa] FPGA targets?

Gerardo Richarte wrote on Tue, 03 Mar 2009 09:14:50 -0200
> Hi Ian,
> 
> Ian Piumarta wrote:
> > I'd love to see somebody figure out how to dynamically generate bit
> > files from an intermediate representation (Jolt ASTs, for example) to
> > allow reprogramming of the hardware on the fly.
> Take a look at project Madeo
> (http://www.esug.org/Conferences/2008/Innovation+Technology+Awards/Submissions)
> If I recall correctly from their ESUG presentation, they could generate
> bit files for some specific FPGAs, and all their work was Smalltalk-based.

This is a very impressive project! A more limited one, also implemented
in VisualWorks Smalltalk, is the "Interactive Design and Simulation
System"

http://www.xs4all.nl/~averschu/idass/

This lets you describe and simulate digital systems with a combination
of graphical and simple text notations. When you are done, you can use
the "alien" translator to generate Verilog files that can be used, for
example, to program an FPGA development board. The rules-based
text-to-text translator used for this could probably be done far more
efficiently in OMeta.
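As a rough illustration (a Python sketch with an invented gate-list input syntax, not IDaSS's actual notation), such a rules-based translator boils down to an ordered list of pattern/template pairs tried in sequence -- exactly the shape OMeta's ordered choice expresses directly:

import re

# Hypothetical toy rules: each maps one line of a made-up structural
# notation to one line of Verilog. The first matching rule wins, as in
# an OMeta ordered choice.
RULES = [
    (re.compile(r"^and (\w+) (\w+) (\w+)$"), "assign {0} = {1} & {2};"),
    (re.compile(r"^or (\w+) (\w+) (\w+)$"),  "assign {0} = {1} | {2};"),
    (re.compile(r"^not (\w+) (\w+)$"),       "assign {0} = ~{1};"),
]

def translate(line):
    for pattern, template in RULES:
        m = pattern.match(line.strip())
        if m:
            return template.format(*m.groups())
    raise ValueError("no rule matches: " + line)

print(translate("and y a b"))   # assign y = a & b;
print(translate("not q y"))     # assign q = ~y;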

A friend of mine, Reinaldo Silveira, was very interested in the
parallels between hardware blocks wired together and software objects
sending messages to each other. He initially investigated Java for his
PhD thesis, then played around with Squeak and did some complete system
simulations and then finally decided to extend Self into a SelfHDL.
Unfortunately, all of his work is in Portuguese. The pictures in his
thesis or on page 6 of this paper can give even those who can't read his
texts an idea of how he extended Morphic:

http://www.iberchip.org/IX/Articles/PAP-046.pdf

He was particularly fascinated with how all the process-related stuff in
Self is implemented in the language with a single primitive (TWAINS -
transfer and wait for interrupt or signal), so that he could integrate
the operation of "normal" code with his simulation framework.

My own master's project is called "Adaptive Compilation for
Reconfigurable Computers in Mobile Robotics" and is also based on the
idea that hardware and objects can be made to look the same. The initial
hardware configuration will be six SiliconSqueak processors running a
software-only implementation of the application. Note that this is not a
sequential Smalltalk-80 program, but is divided into parts that
communicate with each other using a given protocol (see stream based
programming). After the app is running for a while, the most critical
parts of the code can be recompiled (using type feedback) to increase
their performance. If some part of even the optimized code remains a
major hotspot, then it would be recompiled a second time but now into
dedicated hardware which would replace one of the six SiliconSqueak
cores (this is a Virtex-4 FPGA which can be partially reconfigured while
the rest continues to operate normally). Eventually this hardware might
get changed back to a general processor or else a second processor might
get replaced with a different dedicated hardware block (leaving just
four processors for the software part), depending on the current needs
of the application.
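To make the stream-based part concrete, here is a minimal sketch (plain Python with invented block names, nothing from the actual SiliconSqueak code) of parts that communicate only through streams; because each part touches nothing but its input and output queues, it can later be remapped to another core, or to a hardware block, unchanged:

import threading, queue

def producer(out):
    for i in range(5):
        out.put(i)
    out.put(None)                  # end-of-stream marker

def doubler(inp, out):
    while (item := inp.get()) is not None:
        out.put(item * 2)
    out.put(None)

def consumer(inp):
    while (item := inp.get()) is not None:
        print("got", item)

a, b = queue.Queue(), queue.Queue()
for fn, args in [(producer, (a,)), (doubler, (a, b)), (consumer, (b,))]:
    threading.Thread(target=fn, args=args).start()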

Sadly all my texts (except for the robot vision parts) are also in
Portuguese. There are two obvious problems with compiling software
blocks into hardware: it would take a very long time and the actual bits
for the FPGA can only be generated with closed tools running on a PC
(and not on the FPGA hardware itself). So I will have to cheat and
pre-compile a library of key blocks which the FPGA will then load as
needed. In this case it isn't really adaptive compilation anymore but
more like the hardware "executables" as in Borph -

http://www.eee.hku.hk/~hso/publications.html

The key thing is to have some model of parallelism which is used by the
source code. If you try to automatically extract parallelism from
"normal" application code you will just make things needlessly
complicated for yourself.

-- Jecel

Nathan Cain | 3 Mar 21:43 2009

Re: FPGA targets?



On Mon, Mar 2, 2009 at 11:42 PM, Luke Breuer <labreuer@...> wrote:
> On Mon, Mar 2, 2009 at 10:15 AM, Nathan Cain <nate@...> wrote:
>> I am wondering what work is being done/planned for targeting FPGA platforms.  My interests overlap with this field, and I would like to contribute towards such an effort.  I have experience with quite a few "non-traditional" HDLs, and would like to help OMeta to grow into the first "meta" design language.
>
> I would like to be kept in the loop.  I started looking into higher level HDLs and took some notes [1], but did not get very far.  I will be refreshing my VHDL knowledge within the next month for a project at school and would love to also discuss how to take things "higher level".  I wonder if IS (see STEPS [2]) could be made to target FPGAs -- in fact, this might almost be an ideal situation if I understand IS correctly.  (I am not sure how OMeta compares to IS.)

I notice that you link to Confluence and Atom.  Confluence is one of the approaches that I've tinkered with rather extensively (having written the JHDL generator), but I've never gone beyond toy examples in Atom.  I've done very little with VHDL, as I'm a Verilog designer by preference.  What little VHDL experience I do have is mostly in integrating or gluing together others' IP.

The Progress Report mentions (p. 29) a small CPU implemented in Verilog as a potential target.  However, where I see real potential in OMeta/IS/idst as a hardware platform is precisely in eliminating the traditional FSM CPU model.  As I see it, the fact that the very core of the transformation engine (sequence or parallel-choice pattern match) maps directly onto the nature of FPGA architecture (routing through sequential clocked registers or parallel combinatorial logic) hints at some very strong potential performance gains from moving to a finer granularity.  Also, in my experience, it is precisely in this area that projects such as Atom et al., MyHDL, and SystemC (shudder) fall very short, especially when it comes to verification time.  Hardware may be "software crystallized early"... but our control over, and confidence in, that crystallization process currently feel rudimentary at best.
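To make that mapping concrete, here is a back-of-the-envelope sketch (Python emitting toy Verilog; the lowering and the signal names are invented for illustration, not anything OMeta actually does today): a matcher sequence becomes a chain of clocked registers, while an ordered choice becomes parallel combinational comparators with a priority convention.

def lower_sequence(tokens):
    # A sequence threads a "match so far" bit through one clocked
    # register per expected token: route through sequential registers.
    lines = ["reg [%d:0] ok;" % (len(tokens) - 1)]
    for i, tok in enumerate(tokens):
        prev = "1'b1" if i == 0 else "ok[%d]" % (i - 1)
        lines.append("always @(posedge clk) ok[%d] <= %s & (in == \"%s\");"
                     % (i, prev, tok))
    return "\n".join(lines)

def lower_choice(tokens):
    # An ordered choice tests every alternative in the same cycle:
    # parallel combinational logic, with priority standing in for "order".
    hits = ", ".join('in == "%s"' % tok for tok in reversed(tokens))
    return ("wire [%d:0] hit = {%s};\n"
            "wire ok = |hit;  // downstream, the lowest-index hit wins"
            % (len(tokens) - 1, hits))

print(lower_sequence(["a", "b", "c"]))
print(lower_choice(["x", "y"]))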

The ideal would be, as Ian mentioned, to go directly to bitstream.  Unfortunately, with most netlist formats being proprietary and closed, we start to reach the boundaries of uncharted territory very quickly.  Also, self-reprogramming is still a ways out, and partially reprogrammable FPGAs (that could be coupled in pairs to reprogram each other at run time, for example) are only just beginning to see daylight, and the waters are murky at best.  For the time being, work will most likely have to be done in some awkward master/slave relationship with a host PC, using EDIF as a surrogate bitstream format and vendor synthesis tools.  I've often wondered whether the whole vendor problem could be side-stepped somehow, but have yet to see an answer materialize.  A proper bootstrap will certainly be a grand task, but not an insurmountable one.


> I think developing a higher level HDL is a fantastic idea and almost required to research what "mega-core" processors should look like.  People who are trying to figure out parallel programming "languages" without understanding HDLs are, well, an enigma to me.  The "orthogonalization" that the VPRI folks are working on (see [2]) seems like it will also be crucial.  I have not found anyone else trying to break programs down in a truly interesting way.

I couldn't agree more.  If a clean morphism can be found between the OMeta/IS abstractions and physical gates, then in a way the "mega-core" processors already exist, sitting in source control, today!  There is also no reason these "processor specs" couldn't be "spun on the fly"... your mega-core processor specification might just be some ECMAScript code, or a CSS stylesheet.  It might be pulled from a URL or typed into a REPL.  Taking this a step further, one could consider x86 machine code (or, more practically now, JVM bytecode or the like) as a "frontend language" on this sort of fine-grained, self-programming (and presumably self-optimizing, to a point) hardware architecture.  Could existing, real-world, possibly even binary-only applications be "virtualized" to run as a "sea of gates" sitting "below the metal"?  Is this worth exploring?

--Nate


Nathan Cain | 3 Mar 22:08 2009

Re: Re: [Ometa] FPGA targets?



On Tue, Mar 3, 2009 at 2:34 PM, Jecel Assumpcao Jr <jecel@...> wrote:
> The key thing is to have some model of parallelism which is used by the
> source code. If you try to automatically extract parallelism from
> "normal" application code you will just make things needlessly
> complicated for yourself.
>
> -- Jecel


Precisely.  You've hit the nail on the head.  Most of the world is trying to shoehorn imperative specification into a mixed imperative/combinatorial environment, usually a heterogeneous one at that (CPU+FPGA or CPU+GPU or even CPU+OtherArchCPU), and wondering why they are running into a semantics mismatch.  The only real win here so far has been CPU+CPU (same architecture), where the combinatorial nature is self-evident and maps naturally to "simple" abstractions (SIMD, etc.).

However, FONC stands out as somewhat "blessed" here, as the base fundamental abstraction (IS/OMeta) on which everything else is to be built (and the gist seems to be that "everything" here is end to end, from the CPU to the OS to userspace to the network and beyond...) already contains a natural separation of the sequential and the parallel... it even relies on it; in fact, it is the "whole point," so to speak.

Traditional HDLs do their "best" by composing abstractions of both behavioral (imperative/sequential) and structural (combinatorial/parallel) specification, but to the best of my knowledge this project is unique in having a single abstraction mechanism covering both domains in tandem.  Atom, Bluespec, et al. come close by hiding the sequential abstractions behind a rule scheduler, effectively letting the human designer adopt the mindset that sequencing is not a design concern.  I suppose it is debatable which is the better approach, but I know which camp I'd be in...
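For those who haven't used them, a toy model of the rule-scheduler idea (plain Python, an invented example rather than Atom or Bluespec syntax): the designer writes guarded atomic actions, and the scheduler, not the designer, decides what fires each cycle.

state = {"count": 0, "fifo": []}

rules = [
    # (guard, action) pairs; the scheduler fires the first whose guard holds.
    (lambda s: s["count"] < 3,  lambda s: s.__setitem__("count", s["count"] + 1)),
    (lambda s: s["count"] == 3, lambda s: (s["fifo"].append(s["count"]),
                                           s.__setitem__("count", 0))),
]

for cycle in range(8):             # one rule fires per "clock cycle"
    for guard, action in rules:
        if guard(state):
            action(state)
            break

print(state)                       # {'count': 0, 'fifo': [3, 3]}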

--Nate
Chris Warburton | 4 Mar 02:36 2009

Distributed COLA

Hello, I've just joined this list but I've been following the posts for a 
while, along with the whitepapers and software on the VPRI site. I've also 
written http://en.wikipedia.org/wiki/COLA_(software_architecture) (and was 
given a much-appreciated thank-you from Ian Piumarta), but now that I've 
taught myself the basics of Smalltalk/COLA I'm looking for something useful to 
hack on to teach myself through doing.

I'm a student physicist and computer scientist, and my current computing 
project is on "distributed" computing using Java's Remote Method Invocation 
system. This has given me an idea for something to extend the COLA system 
with, which I'll elaborate on below.

The RMI system allows an object to be accessed over a network through the use 
of stub objects on each machine in a setup like the following:

obj.method1() -> stub.method1() -> TCP -> stub.method1() -> obj.method1()

So that calling method1 on the object obj on the left (on one machine) 
actually calls it on the object obj on the right (on a different machine), 
encapsulating the networking through the stub objects.

This all works through Java interfaces, ensuring that the two objects and 
their stubs all have the same API. To me a much more elegant solution would be 
to use an Id-style method lookup mechanism, then simply subclass/clone a 
VTable which performs its lookups over TCP (i.e., doing the job of the stub 
objects). I'm researching how easy it would be to do such a thing in Sun's 
JVM, but all of my solutions are inelegant hacks to get around Java (as most 
of my Java programs are, TBH).
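Here is a rough Python analogue of that stub idea (hypothetical: a real COLA version would hang this off the vtable lookup itself, and all error handling is omitted). Every attribute access on the stub becomes a (selector, args) message shipped over TCP and dispatched by ordinary lookup on the far side:

import json, socket, threading

class RemoteStub:
    # Client side: "method lookup" returns a network send, not local code.
    def __init__(self, host, port):
        self._addr = (host, port)
    def __getattr__(self, selector):
        def send(*args):
            with socket.create_connection(self._addr) as s:
                s.sendall(json.dumps({"selector": selector,
                                      "args": list(args)}).encode())
                s.shutdown(socket.SHUT_WR)       # end of request
                return json.loads(s.makefile().read())
        return send

def serve(obj, port):
    # Server side: receive (selector, args), dispatch with plain getattr.
    srv = socket.create_server(("127.0.0.1", port))
    def loop():
        while True:
            conn, _ = srv.accept()
            with conn:
                msg = json.loads(conn.makefile().read())
                result = getattr(obj, msg["selector"])(*msg["args"])
                conn.sendall(json.dumps(result).encode())
    threading.Thread(target=loop, daemon=True).start()

class Calculator:
    def add(self, a, b):
        return a + b

serve(Calculator(), 9901)
print(RemoteStub("127.0.0.1", 9901).add(2, 3))   # -> 5, computed remotely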

I'd like to implement this in COLA, firstly since I would like to see if this 
elegant way actually works, and also since I know there's been work on using 
TCP within COLA (in the examples).

A couple of questions: Is this worth doing now, or waiting a bit? I've heard 
from Ian that the COLA implementation is getting a facelift? Also, are there 
any recommendations on where to begin with such a thing? Are the TCP examples 
robust enough to use or should they be rewritten? From a quick perusal it seems 
to have large chunks of OMeta cluttering the code, and from other messages it 
seems that COLA's OMeta is out-of-date.

Also, on a slightly related topic, are there plans for restricting access to 
state in the COLA system, since from reading the ALBERT paper it seems like 
direct state access is a cause of some headaches. With state modified by 
messages (presumably in the/a VTable) it would turn the object system into an 
actor system, which is inherently concurrent. This, combined with transparent 
access to remote objects/actors, would make the COLA system parallel and 
distributed. I think such a thing is worth investigating, and having remote 
access to objects is a prerequisite I think I could take on.
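A minimal sketch of that restriction (invented Python, not COLA): when the state sits behind a mailbox and only the actor's own thread touches it, every read and write is a message send, which is what makes the combination with remote proxies safe:

import queue, threading

class CounterActor:
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0                    # private: only _run touches it
        threading.Thread(target=self._run, daemon=True).start()
    def _run(self):
        while True:
            selector, reply = self._mailbox.get()
            if selector == "increment":
                self._count += 1
            elif selector == "value":
                reply.put(self._count)     # even reads go via messages
    def send(self, selector):
        reply = queue.Queue()
        self._mailbox.put((selector, reply))
        return reply

a = CounterActor()
a.send("increment"); a.send("increment")
print(a.send("value").get())               # -> 2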

Regards,
Chris Warburton

Faré | 4 Mar 06:27 2009

Re: Re: [Ometa] FPGA targets?

>>: Jecel Assumpcao Jr <jecel@...>
>> The key thing is to have some model of parallelism which is used by the
>> source code. If you try to automatically extract parallelism from
>> "normal" application code you will just make things needlessly
>> complicated for yourself.

>: Nathan Cain <nate@...>
>
> Precisely.  You've hit the nail on the head.  Most of the world is trying to
> shoehorn imperative specification into a mixed imperative/combinatorial
> environment, usually a heterogeneous one at that (CPU+FPGA or CPU+GPU or
> even CPU+OtherArchCPU), and wondering why they are running into a semantics
> mismatch.

AFAIK, the paradigms that fit are reactive and synchronous programming.
In the last few decades, there have been plenty of successful
reactive and/or synchronous programming systems with backends to
C, VHDL, model checkers and theorem provers.
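For a taste of the style (a hand-written, Lustre-like toy in Python; real synchronous languages add clocks and causality checking on top): the program is a function from input streams to output streams, advanced one tick at a time, with explicit state standing in for registers -- which is precisely what makes such programs compilable to both C and VHDL:

def edge_detector(inputs):
    prev = False                       # the one piece of state = one register
    for x in inputs:                   # each loop iteration = one clock tick
        yield bool(x) and not prev     # pure combinational function of (x, prev)
        prev = bool(x)

print(list(edge_detector([0, 1, 1, 0, 1])))   # [False, True, False, False, True]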

[ François-René ÐVB Rideau | Reflection&Cybernethics | http://fare.tunes.org ]
A cuddle a day keeps the shrink away

Michael Haupt | 4 Mar 22:13 2009

Re: "type constructors" and unquote splicing

Hi,

no one responded, so I had to figure this out myself. ;-)

My original definition of LIST was obviously flawed:

(syntax LIST (lambda (node compiler)
     `[OrderedCollection withAll: ,@[node copyFrom: '1]]))

But this works as intended:

(syntax LIST (lambda (node compiler)
     (let ((o [OrderedCollection new]) (i '1))
         (while [i < [node size]]
             [o add: [[node at: i] _eval]]
             (set i [i + '1]))
         `'(,@o))))

Hooray to #_eval - that made my day. Please let me know if it can be  
optimised. ;-)
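A loose Python analogue (not Jolt) of why the first version failed: ,@ splices the elements into the surrounding form, so #withAll: ends up with several arguments where it expects a single collection, while the fixed version first rebuilds one collection from the spliced elements and passes that.

items = [23, 42]

def with_all(collection):          # stands in for #withAll: -- takes ONE argument
    return list(collection)

# with_all(*items) is the "splice straight into the message" shape:
# it passes 23 and 42 as two separate arguments -> TypeError.
# Splicing into a fresh list first and passing that list is the
# `'(,@o) shape -- a single collection argument:
print(with_all([*items]))          # [23, 42]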

Best,

Michael

Am 05.01.2009 um 16:18 schrieb Michael Haupt:

> Dear list,
>
> having solved the trouble with shared libraries, I wrote some
> "constructor" functions in Jolt to be able to build muy objects more
> conveniently (i.e., avoiding the talkative keyword message syntax).
> They look like this, for example:
>
> (TYPE '23)
>
> (assuming that TYPE is some type parameterised with a number) with
> this implementation:
>
> (syntax TYPE (lambda (node compiler) `[MyType with: ,[node at: '1]]))
>
> This works fine unless I want to build a list of such "tagged type
> constructions". What I would like to have is a LIST syntax definition
> that accepts an arbitrary number of type constructor applications and
> effectively returns an AST with all the expanded type constructions.
>
> (LIST (TYPE '23) (TYPE '42))
>
> Another usage example: (ANOTHER-TYPE (LIST (TYPE '23) (TYPE '42)))
>
> My initial thought was that quasiquotation would help. I've been
> experimenting like mad but could not come up with a solution. Here's
> the (non-working) quasiquotation thing:
>
> (syntax LIST (lambda (node compiler)
>     `[OrderedCollection withAll: ,@[node copyFrom: '1]]))
>
> The trouble seems to be that ,@ expands all collection items in place,
> instead of providing a collection again. #withAll:, however, obviously
> expects a collection.
>
> How should the syntax definition look?
>
> Best,
>
> Michael
>
> -- 
> Dr.-Ing. Michael Haupt                michael.haupt@...
> Software Architecture Group           Phone:  ++49 (0) 331-5509-542
> Hasso Plattner Institute for          Fax:    ++49 (0) 331-5509-229
> Software Systems Engineering          http://www.hpi.uni-potsdam.de/swa/
> Prof.-Dr.-Helmert-Str. 2-3, D-14482 Potsdam, Germany
>
> Hasso-Plattner-Institut für Softwaresystemtechnik GmbH, Potsdam
> Amtsgericht Potsdam, HRB 12184
> Geschäftsführung: Prof. Dr. Christoph Meinel

-- 
Dr.-Ing. Michael Haupt                michael.haupt@...
Software Architecture Group           Phone:  ++49 (0) 331-5509-542
Hasso Plattner Institute for          Fax:    ++49 (0) 331-5509-229
Software Systems Engineering          http://www.hpi.uni-potsdam.de/swa/
Prof.-Dr.-Helmert-Str. 2-3, D-14482 Potsdam, Germany

Hasso-Plattner-Institut für Softwaresystemtechnik GmbH, Potsdam
Amtsgericht Potsdam, HRB 12184
Geschäftsführung: Prof. Dr. Christoph Meinel
