modversion | 1 Aug 14:00 2010

How can I insert MAC addresses with rasqlinsert?

Hi list:

How can I insert MAC addresses with rasqlinsert? I've set ARGUS_GENERATE_MAC_DATA=yes to collect the MAC addresses, but I cannot store the smac and dmac in the MySQL database via the command "/usr/local/bin/rasqlinsert -S localhost:561 -m none -d -M time 1d -w mysql://argus <at> localhost/argusData/argusTable_%Y_%m_%d".

 

Could anybody be kind enough to do me a favor and tell me how I can get the MAC addresses into the MySQL database? Thank you very much!

 

Carter Bullard | 1 Aug 16:50 2010

Re: How can I insert MAC addresses with rasqlinsert?

Whenever you insert data using rasqlinsert(), the whole argus record is inserted as binary data in each row.
So the MACs are there, and you can print them using rasql().


But to have them as mysql table attributes, you just need to indicate that you want them as printed fields.
In your example, you aren't specifying any fields using the "-s field ..." option, so you are using the defaults,
or the fields defined in your ~/.rarc.  Try this:

   rasqlinsert -S localhost:561 -m none -d -M time 1s -w mysql://user <at> host/db/table -s +smac +dmac

or add "smac dmac"  to your RA_FIELD_SPECIFIER in your ~/.rarc

This will add smac and dmac to your database schema, which you can verify using the mysql
command, 'desc table'.

   % mysql
   mysql> use argusData;
   mysql> desc argusTable_2010_08_01;

Adding this and running it may cause rasqlinsert() to try to insert the fields into an existing table that
doesn't have these fields in it, and it will fail, so be sure to drop any tables that cause problems.
mysql() will allow you to add attributes to existing tables, so if it's really important, you can make
a legacy table usable.
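Something like this should work (an untested sketch; the table name is just an example, and the column types are a guess, so 'desc' a freshly created rasqlinsert table first and match what it creates):

   % mysql
   mysql> use argusData;
   mysql> -- table name and column types below are examples/guesses
   mysql> ALTER TABLE argusTable_2010_07_31 ADD COLUMN smac VARCHAR(64), ADD COLUMN dmac VARCHAR(64);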

Remember, the more fields you "expose" in mysql() the more cycles it will take to insert a record,
so only expose them in mysql if you will have queries that use those fields.  If not, use rasql() to grab
the fields when you need them.
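For example, something like this should print them straight from the stored binary records (a sketch; the database and table names are placeholders):

   # db and table names are placeholders -- use your own
   rasql -r mysql://argus <at> localhost/argusData/argusTable_2010_08_01 -s stime saddr daddr smac dmac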

Carter



On Aug 1, 2010, at 8:00 AM, modversion wrote:

Hi list:

How can I insert MAC addresses with rasqlinsert? I've set ARGUS_GENERATE_MAC_DATA=yes to collect the MAC addresses, but I cannot store the smac and dmac in the MySQL database via the command "/usr/local/bin/rasqlinsert -S localhost:561 -m none -d -M time 1d -w mysql://argus <at> localhost/argusData/argusTable_%Y_%m_%d".

 

Could anybody be kind enough to do me a favor and tell me how I can get the MAC addresses into the MySQL database? Thank you very much!

 


modversion | 2 Aug 05:45 2010

Re: How can I insert MAC addresses with rasqlinsert?

Hi Carter, it works fine, thank you very much!

 

From: Carter Bullard [mailto:carter <at> qosient.com]
Sent: Sunday, August 01, 2010 10:50 PM
To: modversion
Cc: argus-info <at> lists.andrew.cmu.edu
Subject: Re: [ARGUS] How can I insert MAC addresses with rasqlinsert?

 


CS Lee | 3 Aug 14:32 2010

Re: raconvert

hi guys,

I'm going to release my argus-to-splunk package - argus2splunk-v1.tar.gz

For the Splunk part,

in the NSM/local directory:

app.conf
data/ui/views/Argus.xml
inputs.conf
props.conf
savedsearches.conf
transforms.conf
viewstates.conf

inputs.conf defines which directory to monitor for argus data
props.conf defines the argus sourcetype
transforms.conf extracts the argus data fields
savedsearches.conf contains all the reports
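For instance, the monitor stanza in my inputs.conf is roughly this (a sketch; the path is illustrative and matches the /var/log/argus directory mentioned below):

# path is illustrative
[monitor:///var/log/argus]
sourcetype = argus
disabled = false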

data/ui/views/Argus.xml contains the links to the Argus reports. I'm going to describe the setup here -

For the argus part, I use argus, rastream, racluster and ra to export the data in csv format, and rsync it to the /var/log/argus directory on the splunk server, which splunk monitors. I will also release all the scripts I have to make the whole thing work. Basically the setup is:

Network Link <------ Argus Probe (monitor, log argus default format and export the data in csv format) --------- rsync at a specific time interval -----------> Splunk Server (process csv data, index, make graphs and generate reports)
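On the probe side the export step is roughly something like this (a rough sketch with placeholder fields and paths; my actual scripts do more):

# fields and paths are placeholders
ra -r /var/log/argus/argus.out -c , -s stime flgs proto saddr sport dir daddr dport state dur pkts bytes > /tmp/argus-`date +%Y%m%d%H%M`.csv
rsync -az /tmp/argus-*.csv splunkserver:/var/log/argus/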

Okay, the reason I have most of the processing done on the sensor, and only send the data to the splunk server, is that splunk is usually under high load if it needs to process everything, and it's not wise to put more load on the splunk server by running argus client tools there.

If you have more argus probes, you can consider a radium setup. I will release all the pieces together in one shot; however, my current documentation for the setup just sucks ;)

Carter and the rest of you, if you have a better idea of how to implement the whole thing, that would be great.

Cheers!






On Fri, Jul 30, 2010 at 11:32 PM, Paul Schmehl <pschmehl_lists <at> tx.rr.com> wrote:
Yep, still in edu, and still trying to rise to your level of geekiness.  :-)

I'd be glad to test it out.  We're making increased use of argus, but searching the logs is time-consuming.  Being able to search in Splunk and locate exactly which log I need to go to would be quite helpful.

We're presently storing about 15 days of logs, capturing the first 400 bytes of every packet.  It's been quite useful.


--On Friday, July 30, 2010 12:20:03 +0800 CS Lee <geek00l <at> gmail.com> wrote:

hi paul,

Btw, how's life and are you still in .edu?

Yes, I can send you the argus stuff I have and you can test it; I have it
deployed on one production site, and another on a testing site now.


On Fri, Jul 30, 2010 at 5:19 AM, Paul Schmehl <pschmehl_lists <at> tx.rr.com>
wrote:

CS, we are *very* interested in this.  Is your argus to splunk app far
enough along to do testing?


--On Thursday, July 29, 2010 23:57:39 +0800 CS Lee <geek00l <at> gmail.com> wrote:





hi Carter,

I was having the problem as well until I tried to get argus data into splunk;
in fact I have almost all the fields in argus extracted and sent to splunk.
I always put the suser data and duser data in the last fields. My argus data
is in csv form, and this is how I have it done with splunk -

In props.conf (the properties config):
[argus]
sourcetype = argus
REPORT-argus = argus-fields, argus-suser-data, argus-duser-data

In transforms.conf (the data transformation config):
[argus-fields]
DELIMS = ","
FIELDS = "stime","flags","proto","src_ip","src_port","direction","dst_ip","dst_port","state","duration","pkts","bytes","appbytes","pps","bps","src_pkts","dst_pkts","src_bytes","dst_bytes","src_appbytes","dst_appbytes","src_pps","dst_pps","src_bps","dst_bps"

[argus-suser-data]
REGEX = ,s\[\d+\]=(?<suser_data>.{0,64}),?

[argus-duser-data]
REGEX = ,d\[\d+\]=(?<duser_data>.{0,64})

I don't expect everyone to get the idea at first glance; however, if you are
familiar with splunk or regex this won't be too hard.

I'm not trying to promote splunk here, but since the two can be glued
together so well, I just want to be able to perform analysis on every field I
can obtain from an argus record, graph them, and generate reports.
On top of that you can still keep the argus records in their own format and
process them with ra-like tools when you need to do some other post-processing
which is not offered by splunk web.

I have the argus app for splunk done and plan to release it soon.

Cheers ;)



On Thu, Jul 29, 2010 at 11:15 PM, Carter Bullard <carter <at> qosient.com> wrote:


Hey CS Lee,
Yes, the user buffers do need some work.  So how do other systems, like csv,
deal with delimiters in the output?  Is there a universal escape strategy?



Good to see you around.

Carter






On Jul 28, 2010, at 11:23 AM, CS Lee wrote:

hi Carter,

How's life? I think I'm back and will blog more about argus and flow stuff!

Regarding raconvert, the tricky part I see would be converting the user data
field that is printed, because I used to have problems when using ',' or
another character as the delimiter and ended up needing additional parsing to
get the user data extracted properly from the ascii flow records.

Gentle people,
There is a new program in the clients distribution, raconvert(), with a manpage.

This program is designed to convert ASCII-based argus files to binary argus
data records.   The ASCII must have a single-character delimiter, such as a ',',
but you can specify the delimiter using the "-c char" option.

   ra -r argus.file -c ,  > /tmp/ra.txt
   raconvert -r /tmp/ra.txt -w - | ra

raconvert() is not complete.  Currently, I'm handling maybe 50 out of the 180-something
fields that we can print out, but it's time to put it out there, so if you
try to use it and some fields don't get converted, send me a sample ascii file,
and I'll add the support for your field.

The records that we generate may not be complete.  It depends on how much
information you provide in the ascii records.  For instance, if you only have
the "StartTime" field, without the "LastTime" field, the resulting binary argus
record will have a duration of 0, so you want to ensure that you have enough
information in the ascii output to convey all that you want.
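For example, something like this should carry enough to reconstruct the duration (a sketch; add whatever other fields you care about):

   # include both stime and ltime so raconvert can rebuild the duration
   ra -r argus.file -c , -s stime ltime proto saddr sport dir daddr dport pkts bytes > /tmp/ra.txt
   raconvert -r /tmp/ra.txt -w - | ra -s stime dur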

Also, the name suggests that it should be able to do conversion, which may
imply that it converts more than just one thing to another, so ... if
you have any ideas as to what you would like to convert, just holler, and
I'll see what I can do.

I will try to add XML conversion before the summer is done.

So why this program?  The primary reason is to support moving argus data
around in environments that don't like binary data.  You convert the records
to ASCII, printing as many fields as practical, move the file to the next
location,
and then convert them back to binary records so you can do work with them.
Some high security places need this type of support.  But you could also use
it as a means to create an argus data editor, if you wanted.

Hope you find this useful,

Carter

--
Best Regards,

CS Lee<geek00L[at]gmail.com>

http://geek00l.blogspot.com
http://defcraft.net




--



Paul Schmehl, Senior Infosec Analyst
As if it wasn't already obvious, my opinions
are my own and not those of my employer.
*******************************************
"It is as useless to argue with those who have
renounced the use of reason as to administer
medication to the dead." Thomas Jefferson



--
Paul Schmehl, Senior Infosec Analyst
As if it wasn't already obvious, my opinions
are my own and not those of my employer.
*******************************************
"It is as useless to argue with those who have
renounced the use of reason as to administer
medication to the dead." Thomas Jefferson




--
Best Regards,

CS Lee<geek00L[at]gmail.com>

http://geek00l.blogspot.com
http://defcraft.net
CS Lee | 3 Aug 14:47 2010

IP Correlation

hi guys,

Additionally, I correlate the IP data in Argus with other data sources such as the Emerging Threats lists, spambot lists and so forth. All you need to do is convert that data to csv format (2 columns) (I have scripts to convert them too) and dump the files into the lookup directory; there is one simple example config that I put in props.conf, if I recall correctly.

So with that setup you can basically correlate the IPs you obtain from argus data with any external data source, which helps you spot immediately when a bad IP appears in the list, and it is done automatically. However, if you want to run IP address matching quickly on an argus data file itself, use rafilteraddr, as it is freaking fast.
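For example (a sketch from memory; double-check the option names against the rafilteraddr manpage):

# badips.txt holds one address or CIDR prefix per line; -f as the address-file option is from memory
rafilteraddr -r argus.file -f badips.txt -w bad-flows.out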

Cheers

--
Best Regards,

CS Lee<geek00L[at]gmail.com>

http://geek00l.blogspot.com
http://defcraft.net

Carter Bullard | 3 Aug 19:58 2010

argus aggregation and CIDR addresses

Gentle people,
I've added "CIDR address format" printing for IPv4 addresses and will have it for IPv6
addresses later in the week.  I would like to know if anyone has an opinion as to whether
it should be the default printing mode for ra* programs.  

Currently we do not print aggregated IP addresses using CIDR formats.  While the CIDR
mask length has been available in aggregated argus data, it was not uniformly preserved
in all operations.  That has been resolved, and for data aggregated using argus-clients-3.0.4,
the address mask length should be considered to be reliable. 

In order to maintain legacy behavior, there are three modes for printing CIDR addresses,
and they are configured using the RA_CIDR_ADDRESS_FORMAT  variable in the ~/.rarc file. 

   1) Printing disabled, "no", where we will not report the mask. (legacy mode)
   2) Printing enabled, "yes" where we will print "/masklen" when the mask is < full address bits.
   3) Printing enabled, "strict", where we will always print the "/masklen".

The idea behind "yes" and "strict" is that unless you aggregate the data, all IP addresses
in the flow data have full-address-bit CIDR masklens, so there is no need to print the "/32" or "/128".
On the other hand, some people don't like variability in their output formats, so forcing the "/%d"
to be at the end of every IP address could be a desired feature for some.
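As a quick illustration of what "yes" looks like with aggregated data (a sketch):

   # in ~/.rarc
   RA_CIDR_ADDRESS_FORMAT=yes

   # aggregate to /24s; saddr then prints with its mask, e.g. a.b.c.0/24
   racluster -r argus.file -m saddr/24 -s saddr pkts bytes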

I will leave the default to "no" unless we can come to consensus that "yes" is appropriate.

This is important, as we start to work on IP address indexing, as we have for time indexing.
This effort will be very interesting, but before we start, being able to print the CIDR masklen is
going to be really important.

Hope all is most excellent, and if you do have an opinion, don't hold it back.

Carter

Here is a simple description of how we could do IP address indexing.  I'm going down this path:

One simple strategy for IP address indexing is to have a mysql table, for each day, that has
entries for the occurrences of all the /16 CIDR address aggregates.  This fixed-address strategy
has some advantages, primarily that it limits the database to a maximum of 64K entries per index,
which is a good thing.   Using our existing database tool, rasqlinsert(), we can formulate the address
aggregates, and poke the aggregate argus records into a single table, and get some really good information.

   rasqlindex -M rmon -m srcid smac saddr/16 -s stime dur srcid smac saddr -M cache time 1d -w mysql://user <at> host/db/ipIndex_%Y_%m_%d

At any time, we can search the table for the occurrence of the /16 network for an address, and if it's
a relatively unique address, we'll be able to find it very quickly:

   rasql -M time 1d -r mysql://user <at> host/db/ipIndex_%Y_%m_%d -t -30d -M sql="saddr='network in question/16'"

A more elegant solution would allow us to have different CIDR mask lengths, depending on how
many addresses are represented by the aggregate, and the duration of the aggregate.  If using
a "/8" entry keeps the range of the aggregate to, say, 30 seconds in a day, then that is a good index
representation.  If the "/8" covers the whole day, but a "/9" generates two ranges, a short one in the
morning and a short one in the evening, then using "/9" for the index would be the right thing to do.
We'll be developing IP address indexing strategies that will try to minimize the number of entries,
but also minimize the time range covered by the entries.

modversion | 4 Aug 16:36 2010

Which is the best web front end for me?

Hi list:

         I want to find port scanners, login brute-forcers, arp spoofers and botnet victims in our office network via argus. Which is the best web front end for finding them?

         Thank you very much!

Carter Bullard | 4 Aug 17:22 2010

Re: Which is the best web front end for me?

Hey modversion,
We don't have a free web interface for argus, but some people have developed
their own web tools.  Mark Bartlett sends screenshots of his stuff occasionally.

There's Periscope, which is a Lisp system that looks particularly cool, and there was
ArgusEye, which was a good effort.  These are/were the projects that people have
talked about on the mailing list, where there is code.

I am trying to move things around so that I can do this type of project, but it will take
some time before that happens for me.  If you are interested in doing something
in this area, and want to keep it open, I can contribute.

Carter

On Aug 4, 2010, at 10:36 AM, modversion wrote:

Hi list:

         I want to find port scanners, login brute-forcers, arp spoofers and botnet victims in our office network via argus. Which is the best web front end for finding them?

         Thank you very much!


modversion | 5 Aug 03:16 2010

Re: Which is the best web front end for me?

Thank you Carter, I will try to do something with Periscope, but would you be willing to tell me where I can find a commercial web interface for argus?

If we cannot find a suitable web interface, we will build one ourselves for our company, but we cannot keep it open because of our confidentiality agreement.

In my opinion, visualization maps are not the best bet for us; we only want to know which systems are hacked (botnet detection) and which systems are hacking (scanning, brute-forcing, spoofing) in our company, then block the IP with the firewall and locate the person with the smac.

All of them could be found by analyzing the network behavior data collected with argus; it's not very difficult, just count the number of flows from the same source address to the same destination address and port.
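For example, something along these lines should give those counts (a rough sketch; the 'trans' field reports how many flow records were merged into each output line):

# count flows per source address, destination address and destination port
racluster -r argus.file -m saddr daddr dport -s saddr daddr dport trans pkts bytes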

For the botnet detection, we will use a black list and a white list to make it better:

1. Black list: dynamic DNS domains, such as 3322.org.

2. White list: trusted hosts, such as mail servers and trusted web servers.

 

Could anybody give me some suggestions? Thanks.

 

From: Carter Bullard [mailto:carter <at> qosient.com]
Sent: Wednesday, August 04, 2010 11:22 PM
To: modversion
Cc: argus-info <at> lists.andrew.cmu.edu
Subject: Re: [ARGUS] Which is the best web front end for me?

 


 

Phillip Deneault | 5 Aug 21:37 2010

Radium to multiple Argi on the same host

In situations where multiple Argi are running on the same sensor, which
is being collected by a Radium instance on a server, is there a good way
to either designate the destination directories and/or set the $srcid in
such a way as to allow radium to separate the flows on its own?

I can hard code an integer ID number into the monitor id of each Argi,
but then I need to keep some external list.  I don't think using the IP
or hostname will work since the directory structure will probably be
identical for the two without a further index of some kind.

I tried setting an arbitrary string... just on the off-chance it might
work, but was unsuccessful.

Thanks,
Phil

