Mitchell Timin | 16 Oct 03:51 2007

Welcome Coz

We got a new subscriber yesterday.

Welcome, Coz!

I should mention that the group is rather inactive at the moment.  There are 
lots of archives you can read, and files that you can download.  And you can 
post here, and people will read your post, and maybe reply.

There is also a lot of information on the website, annevolve.sf.net.

m


Mitchell Timin | 5 Oct 00:48 2007

Welcome to 2 newcomers

We have had two new subscribers in the last two days.  I suppose it's due to 
the beginning of the school year.

Although there is a lot of downloadable software, and a lot of mail in the 
archives, there is no development going on at the moment.  I'm still willing to 
answer questions about the existing software.

m


Mitchell Timin | 21 Aug 00:14 2007

LinkedIn and Facebook

I recently started an account on each of those free services.  Probably everyone 
has heard of Facebook.  LinkedIn is somewhat similar, but it is oriented to 
professional or business contacts.  If anyone here has an account on either of 
those, I would like to make a connection to you.

My name on LinkedIn is my real name, Mitchell Timin.
My name on Facebook is Gaia Biosphere.

m


rex | 14 May 20:19 2007

trans: a new biologically inspired computer language

This might be of interest to some of you. Jeff Wunderlich is the author
of Aurora, my all-time favorite editor.

-rex

http://www.transmuter.org/intro.html

What is Trans?

The Transmuter Programming Language, or Trans for short, is a new
prototype-based dynamic programming language that has been under careful
design and development for several years, and is currently in a testing
phase. Trans is a biologically inspired language, providing a framework
for experimenting with naturally evolving systems of objects over the
net, and for exploring new ideas about recombinant software, code
morphing, and evolutionary programming in general. The Trans model is an
ambitious attempt to fuse modern programming language paradigms with
novel evolutionary programming techniques. It is a modern
object-oriented dynamic language with a built-in capacity for
evolutionary transformation.

Since modern programming language paradigms are well-known and widely
used, Trans may have the potential to foster increased interest,
development, and acceptance of evolutionary programming. But Trans is
more than just an experimental language. A major long-term goal is to
provide a practical approach to developing and maintaining real-world
systems using new self-evolving, self-organizing methods.

Trans can also be used as a general-purpose dynamic programming
language. It's fast, flexible, compact, object-oriented, highly
(Continue reading)

rex | 20 Apr 22:12 2007

GA program: Pikaia

I was ho-hum about this until I saw the problem it solved in 50
generations with default settings. I'll attach it, but it may get
stripped. You can see it at the URL. Also, there's a Python interface
that has recently been released by another author.

-rex

http://www.hao.ucar.edu/Public/models/pikaia/pikaia.html

Consider an optimization problem that consists of maximizing a function
of two variables f(x,y), with x and y bounded between 0 and 1. The
function defines a 2-D landscape, in which one is seeking the highest
elevation point. If the landscape is smooth and simple this problem is
readily treated with conventional hill-climbing methods. However a
landscape such as the following would be a much harder task:

This is a surface plot of the function f(x,y), and the inset in the
upper right is a color-coded version of the same function. The global
maximum (indicated by the arrow and located at (x,y)=(0.5,0.5), where
f=1) is surrounded by concentric rings of secondary maxima, where a
simple hill-climbing method would most likely get stuck. This problem is
easily solved with PIKAIA. An individual is an (x,y) pair, and fitness
can be directly defined as the altitude in the 2-D landscape, i.e., the
value returned by f(x,y). Examination of the corresponding fitness
function and driver code shows how simple the use of PIKAIA is for such
a problem. The following animation illustrates the evolution of the
population's distribution in parameter space. Each individual is shown
as a solid green dot, and the best of each generation as a larger,
yellow dot. Observe how the best solution remains stuck, for a little
while, on the innermost ring of secondary extrema, but eventually
(Continue reading)
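
For readers who want to see that encoding in code, here is a minimal C++ sketch.
It is not PIKAIA itself, and the ring-shaped test function and GA settings below
are illustrative assumptions rather than the exact ones on that page; the point
is only that an individual is just an (x,y) pair in [0,1]x[0,1] and its fitness
is the value of f(x,y).

// A minimal real-coded GA sketch (not PIKAIA itself): maximize a 2-D "ring"
// landscape.  The test function, parameters, and GA settings are illustrative
// assumptions only.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Individual { double x, y, fit; };

// Ring-shaped landscape: global maximum f=1 at (0.5,0.5), surrounded by
// concentric rings of secondary maxima that trap simple hill-climbers.
double landscape(double x, double y) {
    const double PI = 3.14159265358979323846;
    double r = std::sqrt((x - 0.5) * (x - 0.5) + (y - 0.5) * (y - 0.5));
    double c = std::cos(9.0 * PI * r);
    return c * c * std::exp(-r * r / (2.0 * 0.15 * 0.15));
}

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    std::normal_distribution<double> gauss(0.0, 0.05);
    const int popSize = 100, generations = 50;

    std::vector<Individual> pop(popSize);
    for (Individual& p : pop) { p.x = uni(rng); p.y = uni(rng); p.fit = landscape(p.x, p.y); }

    auto fitter  = [](const Individual& a, const Individual& b) { return a.fit < b.fit; };
    auto clamp01 = [](double v) { return std::min(1.0, std::max(0.0, v)); };
    auto tournament = [&]() {            // pick the better of two random individuals
        const Individual& a = pop[rng() % popSize];
        const Individual& b = pop[rng() % popSize];
        return a.fit > b.fit ? a : b;
    };

    for (int g = 0; g < generations; ++g) {
        std::vector<Individual> next;
        // Elitism: carry the current best individual over unchanged.
        next.push_back(*std::max_element(pop.begin(), pop.end(), fitter));
        while ((int)next.size() < popSize) {
            Individual p1 = tournament(), p2 = tournament(), child;
            double w = uni(rng);                                          // blend crossover
            child.x = clamp01(w * p1.x + (1.0 - w) * p2.x + gauss(rng));  // plus Gaussian mutation
            child.y = clamp01(w * p1.y + (1.0 - w) * p2.y + gauss(rng));
            child.fit = landscape(child.x, child.y);
            next.push_back(child);
        }
        pop.swap(next);
    }
    Individual best = *std::max_element(pop.begin(), pop.end(), fitter);
    std::printf("best: x=%.3f  y=%.3f  f=%.4f\n", best.x, best.y, best.fit);
    return 0;
}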

larryl | 25 Feb 12:57 2007

ping

hi


rex | 28 Oct 06:08 2006

Controlling chaotic systems with evolution + NN

Paper + software available

http://chaos.utexas.edu/research/dsane/slog.html

-rex


Mitchell Timin | 18 Sep 19:37 2006

Re: C++ and NEAT

Gangadhar NPK wrote:
> Hi Mitchell,
> I saw your mail just now. I am comfortable with C++, but I need to
> take a look at the NEAT software. Right now I can't say that I can
> work on this fulltime (with a fulltime job it kind of becomes
> difficult). But, let me look at the NEAT toolkit and get back to you.
> In case no-one else picks up the mantle, then I will take it up. And
> if someone else does, I will be more than willing to help them.
> What I want to know is what are the features that you want to use from
> NEAT ? I haven't worked on NEAT, so I think the above is not an
> invalid question. Also, what is it that is missing in ANNEvolve that
> we need to pick from NEAT ?
I'm not thinking of combining NEAT and ANNEvolve.  They will remain as
separate software systems.
What we need to do with NEAT is first to get it to compile, and then to
run it and understand how to use it.
At first, it can be run with one or more of the fitness functions that
come with it.  NEAT, just as ANNEvolve does, requires some fitness
function in order to run.  (I'm not sure if they use the same phrase,
"fitness function".)
After demonstrating that it works with the supplied fitness functions,
then we need to try one of our own.
A good simple one for testing and comparison is a common multiplication
function, similar to the ANNEvolve mult project.  The ANN has two inputs
and one output.  The goal is to have the output equal the product of the
two inputs.
The fitness function is the mean square error between the actual product
and the ANN output for all input combinations from 1 through 9.
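
For concreteness, here is a minimal C++ sketch of such a fitness function.  The
ANN itself is stubbed out as a plain callable, and the normalization (mean over
the 81 cases) and the convention that lower is better are assumptions, not
something taken from the mult project or from NEAT.

// Sketch of the multiplication fitness test described above.  mse_fitness()
// works for any candidate "net" given as a callable taking two inputs and
// returning one output; the net in main() is only a stand-in.
#include <cstdio>
#include <functional>

// Mean square error between the net's output and the true product,
// over all input combinations 1..9 x 1..9 (81 cases).
double mse_fitness(const std::function<double(double, double)>& net) {
    double sumSq = 0.0;
    int cases = 0;
    for (int a = 1; a <= 9; ++a) {
        for (int b = 1; b <= 9; ++b) {
            double err = net(a, b) - a * b;
            sumSq += err * err;
            ++cases;
        }
    }
    return sumSq / cases;   // lower is better; an evolver would minimize this
}

int main() {
    // Stand-in "net": pretends the product is a + b, so the error is large.
    auto dummyNet = [](double a, double b) { return a + b; };
    std::printf("MSE of dummy net: %.2f\n", mse_fitness(dummyNet));
    return 0;
}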

Later, we can try the fitness function from EvSail.  It should not have
(Continue reading)

Mitchell Timin | 17 Sep 20:47 2006

C++ and NEAT

In order to get ANNEvolve moving again, we need some active involvement 
by more than one person.   Nothing that has been proposed recently has 
accomplished that.  So here is a more radical proposal.  Perhaps I'm 
thinking of the old saying: "If you can't beat 'em, join 'em."

My suggestion is that some of us learn how to use the NEAT software, and 
then apply it to some of the problems that ANNEvolve has tackled in the 
past.  If we can show that it is superior to ANNEvolve, then we use NEAT 
for future projects.   If we can show that ANNEvolve is superior, then 
we can publicize that and get lots of attention.  Actually, we will get 
a lot of attention either way.

NEAT exists now for several languages, but the C++ version is the 
original, and since C++ is fast in execution, that seems to me to be the 
obvious choice.  I have some experience with C++, but I'm not confident 
nor comfortable with it.  Some of our readers are, however, experienced 
and capable with C++.  (I wrote the original RARS in Borland C++ for DOS 
in 1995.)

Is anyone interested in working on this?

m

-- 
I'm proud of http://ANNEvolve.sourceforge.net.  If you want to write software, or articles, or do testing
or research for ANNEvolve, let me know.  


larryl | 10 Sep 20:17 2006

xor

Hi
I've made a program that uses a GA-ANN to find XOR.
This one is quite stable now. I've made it so I can explore some GA and 
ANN techniques.
At this stage it is still an educational program with clean code.
If you would like to try it, fetch it from the repository: 
annevolve/trunk/users/liimatainen/gann

/larry


nataraj chakraborty | 29 Aug 19:52 2006

SCRA Analysis: Initial results!

Hi ANNEvolve,
 
This is about SCRA, which I have set aside for quite some time.  The original definition of SCRA can be found in the attached SCRA.txt file by Mitchell Timin.
The other attached file, Nets_5_30.piz (which has to be renamed to Nets_5_30.zip), contains a text file listing the SCRA topologies for networks of size 5 to 30.  As can be seen, there are many networks which reduce down to FCBA, and many in which a few of the output nodes are left unconnected (pruned).  The aim of this analysis is to find a formula that tells us, before constructing the network, whether the resulting network will be FCBA, pruned, or a perfect, desirable network.
So I call upon the maths people for some help.
The code for this is a single C++ file, not yet updated.  This program also writes two other files containing statistics of the nets.  They need some bug fixing, so I have not yet uploaded them to SVN.  I guess I'll need SVN help pretty soon.
Nataraj Chakraborty

			SCRA - Sparsely Connected Recurrent ANN
			by M. Timin, April 2006

The SCRA does not have to be a binary ANN; the same concept works for an ANN
using neurons with any activation function, but for our present purpose we
will assume that the activation function is the unit step function, hence we
will have a binary ANN, with all outputs being 0 or 1.

The diagram, Fig. 1, shows an example with six neurons, but the actual neuron
count is unlimited.  Several dozen would be a typical number for many of the
more difficult applications, although simple functions can be done with as few
as 2 neurons. (illustrated in VizANN, downloadable from
http://sourceforge.net/projects/annevolve)


The same diagram, if we added all possible connections, would then illustrate
the FCBA (Fully Connected Binary ANN).  There would then be 36 connection
lines in the diagram, as compared to the 12 lines in this SCRA example.

The motivation for the SCRA is to avoid the weight explosion that occurs with
the FCBA when the neuron count is increased.  The FCBA has at least N*N
weights, whereas the SCRA may have a small fraction of that.  (N = neuron
count)  For either the SCRA or FCBA, N is also the number of bits of active
memory, AKA scratchpad memory, where the net can store and retrieve data
items.  For applications requiring many bits of scratchpad RAM, the SCRA can
provide them with far fewer weights.  This is important because the weight
count is the primary factor influencing computation time, as well as storage
for the array that describes the ANN.  

Notice in the example, beginning with the first connection, from the output of
neuron 0 to the input of neuron 0, that 2 possible connections are skipped
before the next connection, to the input of neuron 3.  The 2 here is a
parameter of the net, i.e., a SCRA can be made with a skip count of 2, or 3 or
whatever.  If the skip count is 0 then we would have an FCBA, so the FCBA is a
special case of the SCRA.  A skip count of 1 would connect each neuron to
every other input.  So skip count, in addition to N, is a parameter that is
needed to specify a particular net.  The process of skipping some possible
inputs wraps around to the beginning and continues until all neuron inputs
have been considered.

Notice that in the example neuron 0 connects to neuron 0, which is itself, but
neuron 1 connects first to neuron 2.  We have advanced the relative position of
the first connection by 1.  This 1 is another parameter of the net.
Similarly, the first connection of neuron 2 connects to neuron 4.  Why do we
need this parameter?  Suppose it were zero; if that were the case then each
neuron would input to itself.  We have no reason to believe that every neuron
should have direct feedback from itself, so the "advancement" parameter needs
to be at least 1.

Let's call the three parameters N, S & A, for Neurons, Skip, and Advance.

I am not sure if there is benefit to A being larger than 1, but intuitively
I'm guessing that there is.  If we first build a system where N, S & A can
evolve, then we will find out.  If the best nets all have an A of 1, then we
can simplify the code to have that value built into it.
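
To make this concrete, here is a short C++ sketch of one possible reading of the
connection rule: neuron i's output goes first to neuron (i + i*A) mod N, and then
to every (S+1)-th input after that, once around the ring.  With N=6, S=2 and A=1
this reproduces 12 connections, with direct feedback only at neurons 0 and 3,
which matches the example; the exact indexing is an assumption and should be
checked against Fig. 1.

// Sketch of one possible SCRA connection rule (a reading of the text above):
// neuron i's output connects first to neuron (i + i*A) mod N, then to every
// (S+1)-th input after that, going once around the ring of N neurons.
#include <cstdio>
#include <vector>

// Returns, for each source neuron, the list of destination neurons it feeds.
std::vector<std::vector<int>> scraTargets(int N, int S, int A) {
    std::vector<std::vector<int>> targets(N);
    for (int i = 0; i < N; ++i) {
        int first = (i + i * A) % N;                        // "advance" shifts the first target
        for (int offset = 0; offset < N; offset += S + 1)   // skip S candidate inputs each step
            targets[i].push_back((first + offset) % N);
    }
    return targets;
}

int main() {
    // The six-neuron example from the text: N=6, S=2, A=1.
    auto t = scraTargets(6, 2, 1);
    int total = 0;
    for (int i = 0; i < 6; ++i) {
        std::printf("neuron %d ->", i);
        for (int d : t[i]) std::printf(" %d", d);
        std::printf("\n");
        total += (int)t[i].size();
    }
    std::printf("total connections: %d (the FCBA would have 36)\n", total);  // prints 12
    return 0;
}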

My object in designing the net in this manner is to enable it to be coded for
very fast execution.  I envision the code as being similar to the updateANN()
routine in 4Play.  Of course it will have to be somewhat more complex because
of the use of S and A in the code.  My hope is that mostly we will just be
adding S and/or A to a pointer instead of simply incrementing it.

Recurrency:

The SCRA has several levels of feedback built into it.  In the example, 0
feeds into itself directly.  1 feeds into 2 which feeds back into 1.  2 feeds
into 4 which feeds back into 2.  In fact, every neuron has a 2-step feedback
path to itself, but only neurons 0 & 3 have direct feedback.  I have not
analyzed this carefully, but I hope that different values of A will result in
an assortment of feedback paths, or that A and S together will do so.

If processing time were not an issue we could use a random matrix of
connections, and encode it into a chromosome to guide our connections, but I'm
pretty sure that such an implementation would be much slower computationally.
(But we could consider it.)

So I'm seeking a sparse net which is very fast to compute, can be quite sparse
if necessary, and has a broad spectrum of feedback levels.

The A parameter does not affect the length of the chromosome, but N and S do.
We are liable to get chromosomes of very different lengths, and it is not
clear how to "mate" them, i.e., perform the crossover operation.  We might
restrict mating to chromosomes that are within, say, 10% of each other's
length.  We might even have 3-way sex, where 2 short chromosomes mate with one
long one to produce 3 offspring.  (Bizarre, eh!)


How many weights:

We need to calculate how many weights are in a chromosome, given N, S, and the
number of inputs.  In 4play, that was calculated with this formula:
#define CHROMSIZE(M,N) (N)*(1+(N)+(M))  /* a weight for every connection */
The 1 in the formula is for the biases, one for each neuron.  Now for the
SCRA, there is one additional parameter that affects the weight count, and that
is S.  If you can devise a formula, fine, but it will be tricky because the
number of output lines from a neuron is not exactly N/(S+1).  It's
approximately N/(S+1), but not exactly because that division can yield a
non-integer result, whereas the correct answer is an integer.  However, I'm
sure you can invent a formula, or short algorithm to compute it.

There is another way, and it will be useful to have both, as a check that
both give the same result.  You may not need that in the final code, but it
will be useful during development.  The other way, assuming you have created
code that updates the SCRA, is to make a version of it that counts how many
weights have been considered.  The SCRA update code considers every
weight and decides whether or not to add it to some sum that is being
accumulated.  So it's an easy thing to count the number of such weights.
Since it is a bit tricky to get the SCRA update code to be correct, it is a
good check to see whether the number of weights that it wants to use is the same as
your formula for the number of weights.
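
As an illustration of that cross-check, here is a small C++ sketch.  It assumes
the same connection rule as the earlier sketch, and, following the 4Play
CHROMSIZE macro, one bias per neuron plus one weight per external input per
neuron (M inputs); under that rule the per-neuron output count comes out to
exactly ceil(N/(S+1)), so treat this as a check of the idea rather than as the
final formula.  The sizes in main() are illustrative only.

// Two ways to count SCRA weights, as suggested above.  Both assume the same
// connection rule as the earlier sketch, plus (as in 4Play's CHROMSIZE) one
// bias per neuron and one weight per external input per neuron.
#include <cstdio>

// Closed-form count: each neuron's output reaches ceil(N/(S+1)) inputs.
long weightsByFormula(int N, int M, int S) {
    long perNeuron = (N + S) / (S + 1);        // integer ceil(N/(S+1))
    return N * (1 + M + perNeuron);            // biases + input weights + recurrent weights
}

// Enumeration count: walk the connections exactly as an update routine would,
// incrementing a counter instead of accumulating weighted sums.
long weightsByEnumeration(int N, int M, int S, int A) {
    long count = N * (1 + M);                  // biases and external-input weights
    for (int i = 0; i < N; ++i) {
        int first = (i + i * A) % N;
        for (int offset = 0; offset < N; offset += S + 1) {
            (void)((first + offset) % N);      // the destination an update would use
            ++count;
        }
    }
    return count;
}

int main() {
    int N = 30, M = 4, S = 3, A = 1;           // illustrative sizes, not from the text
    std::printf("formula:     %ld\n", weightsByFormula(N, M, S));
    std::printf("enumeration: %ld\n", weightsByEnumeration(N, M, S, A));
    // For the FCBA special case (S=0), the formula reduces to CHROMSIZE(M,N) = N*(1+N+M).
    std::printf("FCBA check (S=0): %ld vs CHROMSIZE %ld\n",
                weightsByFormula(N, M, 0), (long)N * (1 + N + M));
    return 0;
}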


-fin-

Attachment (Nets_5_30.piz): application/x-zip-compressed, 312 KiB
