Carter Bullard | 3 Apr 21:44 2001

how to deal with patches?

Gentle people,
   We have a patch for argus that fixes two problems:
a VLAN tag handling error that mangles the argus
record, and a small problem reading MOAT packets
(which generates erroneous "Fragment extension buffer
not found" errors).

   How would you guys like to handle these patches?
Do they represent unstable release 2.0.1 code, or simple
patches for 2.0.0?  Should patches go out immediately
and be managed in a separate area on the web site?
Should I mail patches to the argus mailing list automatically?
Should I go ahead and release 2.0.1?  How about
argus-2.0.1.beta.1a?  .... Just kidding ;o)

   I personally would like to avoid putting out minor
version releases every week, so I'd rather put up
patches for those who are having problems and then
roll all the patches up into a release, possibly
every 8 weeks or so.

   This, of course, is just an opinion; does anyone have
a suggestion for how to handle patches and minor version
releases?


Thanks!!!

Carter

Carter Bullard
QoSient, LLC
300 E. 56th Street, Suite 18K
New York, New York  10022

carter <at> qosient.com
Phone +1 212 588-9133
Fax   +1 212 588-9134
http://qosient.com

Scott A. McIntyre | 3 Apr 22:26 2001

Re: how to deal with patches?

>       How would you guys like to handle these patches?
>    Do they represent unstable release 2.0.1 code, or simple
>    patches for 2.0.0?  Should patches go out immediately
>    and managed in a separate area on the web site?
>    Should I mail patches to the argus mailing list automatically?
>    Should I go ahead and release 2.0.1?  How about
>    argus-2.0.1.beta.1a?  .... Just kidding ;o)

Can we go back to letters, start over with an A release?  ;-)

Personally, I'd like to see a few things:

1)  Use of CVS for these incremental changes prior to the next .x
release; the core functionality of argus wasn't really changed, no new
features were added, and if I had to guess, not a lot of folks noticed
any problems (/me coughs).

2)  A context diff / patch for these minor fixes that applies to 2.0.0
to create 2.0.1.

3)  Inform the mailing list(s) of the availability of the patch(es) and
what they apply to; let folks decide if they want/need to bother
retrieving them.

However, as you point out, it's a bit silly to have minor version
upgrades every week or two, so perhaps tie them to some arbitrary
milestone: a calendar month, five patches, etc.  At the end of the
month, whatever patches have been applied become the next minor
version increment; or increment after a good handful of issues have
been resolved/patched.

Until then, notification of changes and use of CVS for the
brave/foolhardy would do me nicely.

I'd prefer not to have versions like 2.0.182-p38, though, whatever
happens.  

Scott

Carter Bullard | 4 Apr 17:44 2001

argus-2.0.0 tuning

Gentle people,
   Argus-2.0.0 seems to be doing OK; the only real
issue has been DDoS attacks, where it can get
overwhelmed.  I have had some good luck changing
some internal variables and removing a syslog() call
in the code, so tuning definitely has its benefits.
 
   The default 2.0.0 is configured for a maximum record
output of 1024 records per second and has a buffer
capacity of 8 seconds.  With an average record size of
128 bytes, this is just 1 Mbps (128 KB/s) of output that
Argus can generate.  This seems tooooooo low.
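
   As a sanity check, the arithmetic works out like this (a quick
sketch in C, using only the numbers above):

    #include <stdio.h>

    /* Back-of-envelope check of the default argus-2.0.0 output ceiling:
     * 1024 records/second, 128-byte average records, 8 seconds of
     * buffering -- the defaults described in this message. */
    int main(void)
    {
        const double records_per_sec = 1024.0;  /* default max output rate */
        const double record_bytes    = 128.0;   /* average record size     */
        const double buffer_seconds  = 8.0;     /* default buffer capacity */

        double bytes_per_sec = records_per_sec * record_bytes;

        printf("output rate : %.0f KB/s (about %.2f Mbps)\n",
               bytes_per_sec / 1024.0, bytes_per_sec * 8.0 / 1e6);
        printf("buffer depth: %.1f MB over %.0f seconds\n",
               bytes_per_sec * buffer_seconds / (1024.0 * 1024.0),
               buffer_seconds);
        return 0;
    }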

   We should engineer for a target max output bit rate
for argus.  Do numbers like 10-20 Mbps seem reasonable?
Based on your logs, what kind of argus load are you
generating?  What's the best IDE throughput for writing?

Thanks!!

Carter


Carter Bullard
QoSient, LLC
300 E. 56th Street, Suite 18K
New York, New York  10022

carter <at> qosient.com
Phone +1 212 588-9133
Fax   +1 212 588-9134
http://qosient.com

Chris Newton | 4 Apr 18:18 2001

RE: argus-2.0.0 tuning

Hi Carter and all,

  I would say, by guessing, that most newish IDE/ATA drives could do 10 MB/s 
at _least_ for short periods of time, and sustain near that for long periods 
(given they have the disk to themselves).  Also, some of the setups I am going 
to test will have a dedicated network channel for writing argus records to 
another machine... at least 100 Mbit.  So, I would _guess_ you could aim for 
upwards of 10 MB/s... quite the jump over the 0.128 MB/s it is currently 
limited to.

  I am monitoring a very active and aggressive network that has about 12 
universities and companies on it, monitored from a single point with one 
argus machine (P3, 256 MB RAM, 600 MB swap).  This network has about 2 DoS 
attacks per week... mainly due to the residence students.  Argus does not, 
currently, like these attacks very much... often overrunning the current 
buffer settings and swallowing memory in huge gobs.  I have seen argus 
processes in excess of 280 MB in size during an attack.  I'd guess from 
your comments that this is because it is not expunging records to disk/port 
as fast as it could be.

  I am wondering about the calculations you have, though... because during one 
attack I had a 140 MB log file (I move it from argus.out to argus{timestamp} 
every 30 seconds).  The records in that log file have lots of the optional 
output functions turned on (jitter... ICMP, and so on)... but still, argus 
managed to pump out 4.66 MB/s, if you take 140 MB / 30 seconds.  Argus did, 
during that attack though, die. :)  That is the most recent event I had mailed 
you about.

  I am very interested in tuning this so it can at least deal with the 
worst that can be thrown at you on a 100 Mbit, full-duplex pipe (for now), 
and larger pipes later :)

Chris

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

Chris Newton, Systems Analyst
Computing Services, University of New Brunswick
newton <at> unb.ca 506-447-3212(voice) 506-453-3590(fax)

Carter Bullard | 4 Apr 18:31 2001

RE: argus-2.0.0 tuning

Hey Chris,
   Hmmm, my math must be off, but with all options on
the average record size may be near 228-256 bytes, and
of course if you're capturing user data, then upwards of
400-500 bytes per record is a better number.

   One of the CMU machines that we're using is in the
same performance range as yours.  240 MB processes
are the norm; they are handling around 85K to 100K
simultaneous flows and generating near-max record
throughput at peak.  The tuning we've done has eliminated
the load exits that you are seeing, but the patches that
I am doing now should make this much more stable under
sustained load, which is the goal.

   I should have the patches out by Friday, after testing
on the CMU machine for a while.

   Any chance you could test on a dual-processor machine?
That would eliminate your problems after the tuning.

Carter

Carter Bullard
QoSient, LLC
300 E. 56th Street, Suite 18K
New York, New York  10022

carter <at> qosient.com
Phone +1 212 588-9133
Fax   +1 212 588-9134
http://qosient.com

Chris Newton | 4 Apr 19:15 2001

RE: argus-2.0.0 tuning

>===== Original Message From <carter <at> qosient.com> =====
>Hey Chris,
>   Hmmm, my math must be off, but with all options on
>the average record size may be near 228-256 bytes, and
>of course if your capturing user data, then upwards of
>400-500 bytes per record is a better number.

  Yes, you are very close.  I am calculating the average record size each time 
I process the logs... and I get about 241 bytes.
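
  For what it's worth, the calculation is nothing fancy; a minimal sketch
(it just divides the file size by a record count you supply by hand, from
however you count records when processing the logs; nothing here parses
the argus file format itself):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>

    /* Minimal sketch: average record size = file size / record count.
     * The record count is given on the command line; this program does
     * not parse argus records itself. */
    int main(int argc, char **argv)
    {
        struct stat st;
        long count;

        if (argc != 3) {
            fprintf(stderr, "usage: %s argus-file record-count\n", argv[0]);
            return 1;
        }
        if (stat(argv[1], &st) != 0) {
            perror("stat");
            return 1;
        }
        count = atol(argv[2]);
        if (count <= 0) {
            fprintf(stderr, "record count must be positive\n");
            return 1;
        }
        printf("%ld bytes / %ld records = %.1f bytes/record\n",
               (long) st.st_size, count,
               (double) st.st_size / (double) count);
        return 0;
    }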

>   One of the CMU machines that we're using is in the
>same performance range as yours.  240MB processes
>are the norm, they are handling around 85K to 100K
>simultaneous flows, and generating near max record
>throughput at peak.  The tuning we've done has eliminated
>the load exits that you are seeing, but the patches that
>I am doing now should make this much more stable under
>sustained load, which is the goal.

  Yes, this is an important goal.  I'd like to see Argus be able to handle a 
wallop from the network (many, many thousands of tiny packets) and still deal 
with it (assuming the hardware can deliver it to argus, that is).

>   Any chance you could test on a dual-processor machine?
>That would eliminate your problems, after the tuning.

  I do have a dual-processor P2 400 MHz.  I understand the basics of why you 
want a dual-processor machine, but maybe you could explain some of the load 
characteristics of Argus as to why a dualy is optimal.  I am about to order 
some hardware... so, I might change my purchasing plans. :)

Thanks,

Chris

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

Chris Newton, Systems Analyst
Computing Services, University of New Brunswick
newton <at> unb.ca 506-447-3212(voice) 506-453-3590(fax)

Carter Bullard | 4 Apr 23:13 2001

RE: argus-2.0.0 tuning

Hey Chris,
   The existing architecture for Argus is designed with
SMP boxes in mind.  Argus is interrupt-driven (packets),
so having another processor to redistribute the interrupt
load will be very nice.  Argus constrains itself a lot
so that it won't be away from the packet input queue too
long.  The self-scheduling strategies are what limit our
ArgusRecord throughput, and another processor would make
most of those self-scheduling issues go away.

   With the FlowModeler dedicated to one processor, and
everything else (the Multiplexor, the Record Dispatcher(s),
and the kernel) hanging on the other processor, total
system record throughput will go up dramatically, and
packet loss will go down as well.

   Also, you need cycles to do the audit file management,
aggregation, compression and distribution of the audit data.
So having some extra cycles is always a good thing.

Carter

Carter Bullard
QoSient, LLC
300 E. 56th Street, Suite 18K
New York, New York  10022

carter <at> qosient.com
Phone +1 212 588-9133
Fax   +1 212 588-9134
http://qosient.com


Chris Newton | 5 Apr 02:49 2001

RE: argus-2.0.0 tuning

Ahh!  Ok. :)  A dualy it is then...

  Dell PowerEdge 300SC
  Dual 800 Mhz PIII processors
  Intel EtherExpress Pro 100
  64 MB PC133 RAM
  20 GB 7200 RPM IDE disk

  $1898 Canadian... or, roughly $1200-$1300 US.

  then, throw in an additional 512 MB of ECC ram at $140 CDN a stick...
  and an additional EtherExpress Pro... for $55 CDN.

  That should be a good sensor, no?

  Also, do you think this will help?  I have seen some utilities around that 
will allow you to bind a process to a CPU, to stop it from bouncing from one 
CPU to another whenever the kernel scheduler feels like it.  It would help 
cache hits and performance to bind one of the argus threads to one CPU, and 
the other two less important ones to the other CPU, would it not?  Then they'd 
never end up on the same CPU, regardless of what's going on.

 Also, I have seen utilities for binding particular Ethernet cards to a 
particular CPU... so, we could have one CPU servicing interrupts from the main 
monitoring card, with the lightweight argus processes on that CPU as well, 
and have the other CPU bound to the main argus process.  Comments?
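
  (For reference, the process half of this can be sketched as below, assuming
a Linux kernel and glibc that provide sched_setaffinity(), which is newer than
the utilities being discussed here; the CPU number is illustrative.  The
interrupt half is normally done separately, by writing a CPU mask to
/proc/irq/<N>/smp_affinity.)

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Minimal sketch: pin the calling process to CPU 1.  An argus
     * process (or any helper) would be pinned the same way; the CPU
     * number here is just an example. */
    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(1, &set);                /* run only on the second CPU */

        if (sched_setaffinity(getpid(), sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pid %d bound to CPU 1\n", (int) getpid());
        /* ... continue (or exec) as the process you want pinned ... */
        return 0;
    }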

 Also, what is polling mode on an Ethernet card... which cards support it, is 
it faster, and how is it enabled? :)

Sorry for all the questions, people... but thanks for any answers.

_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/

Chris Newton, Systems Analyst
Computing Services, University of New Brunswick
newton <at> unb.ca 506-447-3212(voice) 506-453-3590(fax)

Peter Van Epp | 5 Apr 06:17 2001

Re: argus-2.0.0 tuning

> 
> Ahh!  Ok. :)  A dualy it is then...
> 
>   Dell PowerEdge 300SC
>   Dual 800 Mhz PIII processors
>   Intel EtherExpress Pro 100
>   64 MB PC133 Ram
>   20 GB 7200 RPM IDE disk
> 
>   $1898 Canadian... or, roughly $1200-$1300 US.
> 
>   then, throw in an additional 512 MB of ECC ram at $140 CDN a stick...
>   and an additional EtherExpress Pro... for $55 CDN.
> 
>   That should be a good sensor, no?

	Yes, that should make a reasonable sensor.  One thing I think we need
to do is benchmark all kinds of things: memory and disk (both raw and through
the operating / file system), motherboard chipsets, Ethernet cards, and
operating systems.  That way we can predict what will make a good sensor and
know what the performance limits are (and where we need to concentrate).

> 
>   Also, do you think this will help?  I have seen some utilities around that 
> will allow you to bind a process to a CPU, to stop it from bouncing from one 
> CPU to another when the kernel scheduler feels like it.  It would help cache 
> hits, and performance to bind one of the argus threads to one CPU, and the 
> other 2 less important ones to the other CPU, would it not?  Then, they'd 
> never end up on the same CPU, regardless of whats going on.

	The way to find out is to set one up, then use tcpreplay to play back
a tcpdump file multiple times and time it in each configuration to see which
is fastest.
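
	A minimal timing harness for those passes might look like the sketch
below (the tcpreplay command line is illustrative only; option syntax varies
between tcpreplay versions, so adjust it for whatever you have installed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    /* Minimal sketch: run the same tcpreplay command several times and
     * report the wall-clock time of each pass, so different sensor
     * configurations can be compared.  The command string is an
     * assumption; check your tcpreplay's option syntax. */
    int main(void)
    {
        const char *cmd = "tcpreplay -i eth1 sample.pcap";
        const int passes = 3;
        int i;

        for (i = 0; i < passes; i++) {
            struct timeval start, end;
            double elapsed;

            gettimeofday(&start, NULL);
            if (system(cmd) != 0) {
                fprintf(stderr, "pass %d: command failed\n", i + 1);
                return 1;
            }
            gettimeofday(&end, NULL);

            elapsed = (end.tv_sec - start.tv_sec)
                    + (end.tv_usec - start.tv_usec) / 1e6;
            printf("pass %d: %.2f seconds\n", i + 1, elapsed);
        }
        return 0;
    }
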

> 
>  Also, I have seen utilities for binding particular ethernet cards to a 
> particular CPU... so, we could have 1 CPU servicing interrupts from the main 
> monitoring card, and the lightweight argus processes on that CPU as well... 
> and have the other CPU bound to the main argus process.  Comments?
> 

	As above.  I don't know of any numbers like this (and our mileage may
vary anyway).  This will also depend on the implementation of MP and the
hardware to some extent.  I expect some implementations (such as Irix on SGIs)
to be much better at this than others.  There can also be unexpected
bottlenecks, such as the serial processing being single-threaded on Irix 6.5
(about the last thing that still was ...).

>  Also, what is polling mode on an ethernet card... which cards support it, it 
> it faster, and how is it enabled? :)
> 

	You shut off the interrupts (at the driver/hardware level) and poll the
card's status register either regularly through your code or in a busy loop
(poor form on a multitasking OS :-)).  This reduces interrupt latency and
nondeterministic issues at the cost of more overhead.  It would usually be
used in an embedded-type application that needs every bit of speed (and
doesn't necessarily care about overhead).  It's likely not practical in a
multitasking OS such as Unix, although the RT extensions in FreeBSD could
possibly be used to do this (you need to guarantee the card gets polled
regularly, and I'd bet the scheduling granularity isn't fine enough to do
this in a general OS).
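
	At the application layer, the same poll-versus-block trade-off can be
sketched with libpcap; the sketch below is only a rough user-space analogue,
not driver-level polling, and pcap_setnonblock() may be newer than the
libpcap builds most people have:

    #include <pcap.h>
    #include <stdio.h>

    /* Rough user-space analogue of polling: instead of blocking until
     * the kernel delivers packets, put the pcap handle in non-blocking
     * mode and keep asking it for work.  The loop spins, trading a CPU
     * for lower latency -- the overhead cost described above. */
    static void handle_packet(u_char *user, const struct pcap_pkthdr *h,
                              const u_char *bytes)
    {
        (void) user; (void) bytes;
        printf("got %u bytes\n", h->caplen);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_live("eth1", 96, 1, 0, errbuf);  /* eth1 is illustrative */

        if (p == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }
        if (pcap_setnonblock(p, 1, errbuf) == -1) {
            fprintf(stderr, "pcap_setnonblock: %s\n", errbuf);
            return 1;
        }
        for (;;) {
            /* returns immediately if nothing is waiting */
            if (pcap_dispatch(p, -1, handle_packet, NULL) == -1) {
                fprintf(stderr, "pcap_dispatch: %s\n", pcap_geterr(p));
                break;
            }
        }
        pcap_close(p);
        return 0;
    }
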
	You would probably do better to look at the "optimized TCP/IP stack
adapted from FreeBSD" said to be in RTEMS (www.oarcorp.com) as a more likely
place to gain processing time.  I expect it spends more time in the stack than
in the interrupts (and thus there is more to gain, more easily, by optimizing
the stack).  As I think you suggested earlier, the Linux zero-copy kernels are
another good place to look.  The kernel/user copy is very expensive.  You
really want to start by profiling the code so you identify the hot spots and
know where an optimization will gain you performance (not that this is easy
of course :-)).

> Sorry for all the questions people... but, thanks for any answers.
> 

Peter Van Epp / Operations and Technical Support 
Simon Fraser University, Burnaby, B.C. Canada

Carter Bullard | 6 Apr 03:18 2001

argus tarfile naming conventions!

Gentle People,
There doesn't seem to be much opinion on patches
and how we should distribute them, etc....

The best strategy for managing changes is to rely on
CVS, but not everyone is into CVS, so a tarfile of the
current project status is important.  This will be
in the ftp://qosient.com/dev/argus-2.0 directory,
with official links on the argus web site.

So now for the hard part: naming conventions.

Suggestions for naming the development tarfiles?
argus-2.0.1.alpha.1?  argus-2.0.1.dev.1?  Moving to
argus-2.0.1 at release time?

I would rather not generate a patch file until the
next release is "official", except, of course, for
emergency fixes to particularly nasty problems.

Is this reasonable? I'd like to put the next mods
up this weekend, so your comments would be greatly
appreciated!

Thanks!!!!

Carter


Carter Bullard
QoSient, LLC
300 E. 56th Street, Suite 18K
New York, New York  10022

carter <at> qosient.com
Phone +1 212 588-9133
Fax   +1 212 588-9134
http://qosient.com

