>

> best

>

> On Tue, Sep 29, 2009 at 7:34 PM, Sascha Krissler <sascha.krissler <at> web.de> wrote:

> Dividing the number of chains needed by the available resources gives
> you the time needed to finish the tables. Given 2000 chains/sec, which
> is the combined speed of all nodes reporting status, and 2^37 chains
> needed, that comes to about 114 weeks (2^37 / 2000 seconds, just over
> two years). If we want to be done by Christmas, we need roughly 9
> times as much power.
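The completion-time arithmetic can be sketched in a few lines of Python. The 2000 chains/sec and 2^37 figures come from the mail; the ~13-week Christmas deadline is an assumption:

```python
# Estimate table-completion time: chains needed divided by aggregate rate.
chains_needed = 2 ** 37            # total chains for the tables
rate_chains_per_sec = 2000.0       # combined rate of all reporting nodes

seconds = chains_needed / rate_chains_per_sec
weeks = seconds / (7 * 24 * 3600)
print(f"{weeks:.1f} weeks")        # ~113.6 weeks at the current rate

# Speed-up needed to finish in ~13 weeks (by Christmas):
print(f"{weeks / 13:.1f}x")        # ~8.7x more chain-generation power
```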

>

> A GTX 260 (700 MHz) with 216 cores gives 162 chains/sec peak; a
> 9600M GT (500 MHz) with 32 cores gives 20 chains/sec peak. So you get
> around 0.00107 (GTX 260) and 0.00125 (9600M GT) chains per
> (core * MHz * second). Strictly speaking that is not the shader core
> frequency, but since the shader frequency is usually linked to the GPU
> frequency, I got used to calculating with the GPU frequency.
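As a sketch of the parametric estimate the thread is asking for, the two measured cards give a per-(core * MHz) constant that can then predict other cards' peak rates. The function names here are made up for illustration, and the prediction is only as good as the assumption that the constant is architecture-independent:

```python
# Normalize measured GPU throughput to chains per (core * MHz * second).
def chains_per_core_mhz_sec(chains_per_sec, cores, mhz):
    return chains_per_sec / (cores * mhz)

k_gtx260 = chains_per_core_mhz_sec(162, 216, 700)  # ~0.00107
k_9600m  = chains_per_core_mhz_sec(20, 32, 500)    # 0.00125

# Predict a card's peak rate from core count and clock, using an average
# constant across the two measured cards.
def predicted_chains_per_sec(cores, mhz, k=(k_gtx260 + k_9600m) / 2):
    return k * cores * mhz
```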

> More new text below.

>

> > I really didn't understand your answer. Anyway, given the table
> > structure, can we divide the number of chains that need to be
> > produced by the amount of computing resources (cores, any other
> > factors?) and come up with a parametric number?
> >

> > ---------- Forwarded message ----------
> > From: *Sascha Krissler* <sascha.krissler-S0/GAf8tV78@public.gmane.org>
> >

> > Since the tables will be uploaded, there is no need to do this.

>

> It does not make sense to compute the tables for yourself, since they
> will already be produced by the current network.

>

> > If you want to decrypt messages without the network, you will
> > probably want to use an FPGA of the proper size, and you would need
> > some very fast SSDs. Take a look at the TableStructure node in the
> > Trac wiki.

>

> With the network that is proposed, you distribute the precomputation
> time and the disk accesses across several nodes. If you wanted to do
> all of this on your own, you would need a lot of computing power and
> hardware that can do many I/O operations per unit of time. To do all
> the precomputation yourself, you would need 380 fast GPUs, which would
> draw 38 kW of power, and you would have to do 2.5 million disk
> accesses, which would take about 40 minutes with one hard disk
> (assuming 1 ms access time).
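Those in-house figures can be checked with a short script. The 100 W per GPU is the assumption implied by 380 GPUs drawing 38 kW; the access count and 1 ms seek time are the figures from the mail:

```python
# Power draw and single-disk lookup time for the in-house scenario.
gpus = 380
watts_per_gpu = 100                       # assumed; 380 * 100 W = 38 kW
print(gpus * watts_per_gpu / 1000, "kW")  # 38.0 kW

disk_accesses = 2_500_000
seek_s = 0.001                            # 1 ms per random access
print(f"{disk_accesses * seek_s / 60:.0f} minutes")  # ~42 minutes
```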

>

> > for some computation. If you used a hundred GPUs to do the
> > precalculation during the lookup, you would need a several-kW power
> > line.

> >

> > > If somebody wants to build all the tables in house, how do we
> > > compute the needed resources and time? I want to simplify things
> > > by having a formula that puts together the number of cores, the
> > > frequency (considering that overclocking is possible and thus also
> > > a variable) and other factors. All ideas are appreciated.

> > >

> >

>

> _______________________________________________
> A51 mailing list
> A51 <at>