Ulises2k | 6 Apr 21:30 2010

Re: Help reloading the OWASP default profile

http://w3af.svn.sourceforge.net/viewvc/w3af/trunk/profiles/
;)
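
For example, the profile can be pulled straight from that repository without a full SVN checkout - a sketch, assuming viewvc's raw-checkout parameter ("view=co") and the profile file name OWASP_TOP10.pw3af, neither of which I have verified:

    import urllib

    # hypothetical raw-checkout URL built from the viewvc link above;
    # adjust the file name if the profile is called something else
    url = ('http://w3af.svn.sourceforge.net/viewvc/w3af/trunk/'
           'profiles/OWASP_TOP10.pw3af?view=co')
    urllib.urlretrieve(url, 'OWASP_TOP10.pw3af')  # drop into w3af's profiles/ dir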

On Tue, Mar 16, 2010 at 14:30,  <sky <at> skydog.com> wrote:
> I am embarrassed to admit this, but I deleted the OWASP Top 10 scan profile
> from my w3af installation.  The first scan displayed errors in my site that
> need to be addressed.
> However, since I deleted the profile I can't replicate the finding of those
> issues.
> I am searching the archives to see if I can find a way to reload the
> default OWASP profile.  If somebody has the time to shoot me an email, that
> would be much appreciated.
> Thank you.
> Scott == The stupid!

--
Ulises U. Cuñé

dersheriff | 22 Apr 16:58 2010

Where can I find a REAL STABLE version?!

Hey folks,

First of all, I want to say that I'm really impressed by the concept and the whole structure of the
w3af project. I'm using w3af for a few weeks now and I'm sure that it will be a great tool for web application
security WHEN IT'S DONE...

And that's my problem: no matter what I try, w3af is unstable or some features don't work.

In detail: after so many stops and errors while scanning in the past, I decided a few days ago to use w3af with
its simplest functionality for test purposes. In every scenario I only use the proxy (spiderMan) and
spidering (webSpider).

I want to scan a private app where I have to log in first. The plan is to perform the login with spiderMan so that
the session cookie will be used by w3af without any trouble. After that I want to scan every reachable
asset in the app with webSpider. All in all, no big deal, I think.

But I tried it with different OSes (WinXP, BackTrack4, Debian 5.0) and different releases (rc2, rc3)
/revisions (svn): the result is always the same - it doesn't work!!! But it's not always the same
error/problem - for example (note: "in some cases" means different releases/revisions):

- In some cases the spiderMan proxy doesn't work when I want to submit my login. Before that, the proxy works
great and I can access every page I want. But when I submit my credentials, nothing happens for a while and
w3af tells me that the server is not reachable. But that's not true: in some (early) revisions the
spiderMan proxy works great and gets the session cookie.

- In some cases (when the spiderMan proxy works) the webSpider doesn't do a good job: no new assets are
accessed by the spider even though plenty exist in the application (simple links, no JavaScript...). The
mystery is that this feature also works fine in some (early) revisions!

- In some cases, when webSpider AND spiderMan work fine (!!!), and I want to scan the application with a [...]

Daniel Gaddis | 22 Apr 22:15 2010

webSpider ignoreRegex functionality question

It looks like webSpider will find requests that match ignoreRegex entries and include them for the audit phase.

For example, let's say home.php has a link to email.php and I would like to totally ignore email.php. It looks like just specifying an ignoreRegex for email.php is not good enough; I must specify an ignoreRegex for the parent home.php instead.

While I do want to ignore email.php in this example, I don't really want to miss the other links in home.php.

Am I seeing this correctly, or am I missing something?
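
For concreteness, this is the behaviour I would expect from an ignoreRegex of .*email\.php.* - a plain Python sketch with a hypothetical site URL, not w3af's actual matching code:

    import re

    # the regex set for webSpider's ignoreRegex option
    ignore = re.compile(r'.*email\.php.*')

    print(bool(ignore.match('http://site/email.php')))  # True  -> expect the spider to skip it
    print(bool(ignore.match('http://site/home.php')))   # False -> expect home.php still crawled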

 

I am running w3af-1.0-rc3 (version 1.1 revision 3460) on Windows.

Thanks,
Daniel

 

Tom Ueltschi | 27 Apr 11:33 2010

importResults plugin with Burp or WebScarab input files

Hi Andres and list,

Instead of the spiderMan plugin I would like to use another proxy (Burp, WebScarab) and import the URLs from a file. This way I only have to do it once for multiple scans (no interaction required).

- The latest version of importResults says in its description:

       Three configurable parameters exist:
           - input_csv
           - input_burp
           - input_webscarab

I've used the Paros proxy extensively, but I don't know if I could export a URL list in the "input_csv" format.

Has anyone done this with the Burp or WebScarab proxy? Which one is easier for just creating a URL list?

Can you do this with the free version of Burp?

Do you know the right menu entry to save the URL file from Burp or WebScarab? (I will try to find it myself with Burp first.)

Thanks for any help.

Cheers,
Tom


On Wed, Mar 10, 2010 at 2:04 PM, Tom Ueltschi <security-stuff-heLoQhN5uAnk1uMJSBkQmQ@public.gmane.org> wrote:
Andres,

Thanks for the prompt response and the great work you (and the other developers) are doing with w3af!


>> - could i provide a login (username/password or session cookie)
>> somehow without using spiderMan proxy?

>    Yes, please see the http-settings, there is a way for you to
> specify a cookie, or add arbitrary headers with headersFile parameter.

This would still require me to do a login and copy/save the session cookie to be used (session-expiration issues).
I would prefer to provide a username/password for the login form (maybe along with the URL and parameter names of the login page).

I'll try the importResults plugin with a login POST request in the input_csv file and see if that works (and obsoletes the need for the spiderMan proxy to repeat a scan with login).

I assume the same could be achieved using the formAuthBrute plugin, giving one (or more) valid username/password combinations in the input files (maybe even using stopOnFirst).

- Will the successful login session then be used for the rest of the scan?

- Is there a way to influence the order in which audit plugins are executed? I think they are not executed in the order listed (in the w3af script file).

This would be necessary to run formAuthBrute first to do the login, and then the rest of the audits with the logged-in user's session.


Right now I'm doing a scan with the latest SVN, but still the old way (using VNC viewer from my Windows box to configure and start the test on my Ubuntu box, using the spiderMan proxy).

There is one more suggestion I have ;-)

The spiderMan proxy seems to listen only on the "local loopback" interface (127.0.0.1), and not on the Ethernet interface. From a security perspective this is good, but from a usability perspective it would be nice if it listened on all (or user-configured) interfaces, so I wouldn't need to use VNC viewer anymore.

This would also have the advantage that if some (stupid) webapp only works right with IE and I don't have IE on Linux, I could use IE on Windows and point it at the proxy port on the Ubuntu box.

I prefer running w3af on Ubuntu, not on Windows, since my Windows box is not running 24/7 but the Linux box is.

Is it already possible to configure the spiderMan proxy for all interfaces, or would that need a code change?
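
For illustration, the difference is essentially a one-line change of the bind address. This is a generic Python socket sketch, not the actual w3af proxy code, and the port is only assumed to be spiderMan's default:

    import socket

    PORT = 44444  # assumed spiderMan default port; check your configuration

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # behaviour described above: reachable from the local host only
    s.bind(('127.0.0.1', PORT))
    # listening on all interfaces would instead be:
    # s.bind(('0.0.0.0', PORT))
    s.listen(5)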

Thanks again for the great work!

Cheers,
Tom


On Tue, Mar 9, 2010 at 2:29 PM, Andres Riancho <andres.riancho-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
Tom,

On Tue, Mar 9, 2010 at 9:12 AM, Tom Ueltschi
<security-stuff-heLoQhN5uAnk1uMJSBkQmQ@public.gmane.org> wrote:
> Hi all,
>
> I've been using w3af mostly with the spiderMan proxy and manual discovery,
> because the application needs a login with username/password.
>
> Now I would like to scan the same webapp multiple times with different
> sets of audit plugins enabled.  I already have a list of fuzzable URLs
> from previous scans.
>
>>> The goal is to repeat a scan (with the same or other plugins) to check whether the vulnerabilities found have been fixed, if possible without the need for the spiderMan proxy. (I would like to be able to configure and start a scan remotely over ssh, without an open proxy port.)

Nice use case. I like what you're trying to achieve.

> I found the two plugins "importResults" and "urllist_txt"; the
> documentation of the first one seems outdated (only one parameter:
> input_file) and the second one seems undocumented here:
> http://w3af.sourceforge.net/plugin-descriptions.php#discovery

- urllist_txt will read the urllist.txt file from the web server
(http://host.tld/urllist.txt). This is not what you want.
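
(For clarity: urllist.txt is, as far as I know, a plain-text sitemap the site itself publishes - one URL per line, e.g.:)

    http://host.tld/index.php
    http://host.tld/contact.php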
- The latest version from importResults says in its description:

       Three configurable parameters exist:
           - input_csv
           - input_burp
           - input_webscarab

Please make sure that you have the latest version of w3af from the
SVN. The plugin-descriptions page
(http://w3af.sourceforge.net/plugin-descriptions.php#discovery)
is outdated; I'll fix that in a while.

> - What's the difference between the two?  Which one should be preferred?

   For your use case, please use importResults with input_csv.

> - What's the format of "input_csv" from importResults? (e.g. one URL per
> line, with or without URL parameters? Is there any separation by
> comma, or why CSV?)

   method, uri, postdata
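
For illustration, given that format, an input_csv file might look like this (hypothetical target URLs; the last field stays empty for GET requests):

    GET,http://host.tld/index.php?id=1,
    GET,http://host.tld/logout.php,
    POST,http://host.tld/login.php,user=admin&pass=secret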

> - Could I provide a login (username/password or session cookie)
> somehow without using the spiderMan proxy?

   Yes, please see the http-settings; there is a way for you to
specify a cookie, or add arbitrary headers with the headersFile parameter.
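
For illustration, such a headers file might look like the following - assuming one "Header: value" pair per line, which I have not verified against the parser (cookie value hypothetical):

    Cookie: PHPSESSID=0123456789abcdef
    X-Custom-Header: some-value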

> (Maybe, if it's possible, create a GET request in the URL list file
> which does a login? [unless it's POST only] Or else how?)

   Hmm... I'm not sure if that's going to work, but it's worth a try!
I think it's a smart idea.

> Thanks for any feedback and answers.

   Thank you!

> Cheers,
> Tom
>

--

Andrés Riancho
Founder, Bonsai - Information Security
http://www.bonsai-sec.com/
http://w3af.sf.net/


Tiago Mendo | 27 Apr 11:54 2010

Re: importResults plugin with Burp or WebScarab input files


On 2010/04/27, at 10:33, Tom Ueltschi wrote:

Hi Andres and list,

[...]

I've used the Paros proxy extensively, but I don't know if I could export a URL list in the "input_csv" format.

Has anyone done this with the Burp or WebScarab proxy? Which one is easier for just creating a URL list?

I know you can easily generate a list of URL GET requests with the free Burp. Just define a scope for your site, access it through the Burp proxy, and then right-click the site in the history tab (I think it is the first one). Choose "spider from here" (or similar), then right-click again and choose one of the two export options. One of them will fill the clipboard with a list of GETs.

I don't recall doing it with WebScarab, so I can't give you more information.
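
To feed that clipboard list into importResults, a small conversion sketch in Python (file names hypothetical; the target format is the "method, uri, postdata" one quoted elsewhere in this thread):

    # convert a plain list of GET URLs (one per line, e.g. pasted from
    # Burp's clipboard export) into importResults' input_csv format
    with open('burp_urls.txt') as src:
        with open('input.csv', 'w') as dst:
            for line in src:
                url = line.strip()
                if url:
                    dst.write('GET,%s,\n' % url)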



Can you do this with the free version of Burp?

Yes.


Do you know the right menu entry to save the URL file from Burp or WebScarab? (I will try to find it myself with Burp first.)

Read above.


[...]


Tiago Mendo

+351 215000959
+351 963618116

Portugal Telecom / SAPO / DTS / Security Team

PGP: 0xF962B36970A3DF1D

Tom Ueltschi | 27 Apr 12:24 2010

Re: importResults plugin with Burp or WebScarab input files

Hi Tiago,

Thanks for the reply.  I missed adding a scope at the beginning, but tried to do it afterwards in the proxy-history tab by selecting the list of URLs.

From the proxy history there is also the option "save selected items", which generates an XML file (with items, time, url, request, response etc. as elements).

What's the format expected by importResults input_burp?
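
In case input_burp does not accept that XML (see the follow-up below), one workaround is converting the export to input_csv. A sketch, assuming the export nests <method> and <url> elements under each <item>, which I have not verified:

    import xml.etree.ElementTree as ET

    # walk Burp's "save selected items" XML and emit input_csv rows;
    # POST bodies live base64-encoded in <request> and are not handled here
    tree = ET.parse('burp_items.xml')  # hypothetical file name
    for item in tree.getroot().findall('item'):
        method = item.findtext('method', 'GET')
        url = item.findtext('url', '')
        print('%s,%s,' % (method, url))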

Thanks,
Tom


On Tue, Apr 27, 2010 at 11:54 AM, Tiago Mendo <tiago.mendo-dl2ejS2iUSYVhHzd4jOs4w@public.gmane.org> wrote:

[...]
Andres Riancho | 27 Apr 16:07 2010

Fwd: [Full-disclosure] 2010 Nmap/SecTools.org survey

If you guys have 5 minutes to spare, please complete this survey, and
add w3af to your tools :)

---------- Forwarded message ----------
From: Henri Doreau <henri.doreau@...>
Date: Tue, Apr 27, 2010 at 3:32 AM
Subject: [Full-disclosure] 2010 Nmap/SecTools.org survey
To: Full disclosure <full-disclosure@...>

Hello FD,

The Nmap project is currently conducting a survey to improve Nmap and
its companion tools and to update the http://sectools.org website.
You can help Nmap by filling out the survey at http://nmap.org/survey

Regards

--
Henri


--
Andrés Riancho
Founder, Bonsai - Information Security
http://www.bonsai-sec.com/
http://w3af.sf.net/

Tom Ueltschi | 27 Apr 17:28 2010

Re: importResults plugin with Burp or WebScarab input files

Hi all

I briefly looked at the code (importResults plugin) and it doesn't look like it takes XML file input. I also tried WebScarab (besides Burp), and it looks like it's easier to save the files there (conversation directory).

I also found out where importResults comes from :-)

https://svn.sqlmap.org/sqlmap/trunk/sqlmap/lib/core/option.py

I'll try to use the WebScarab files as input. Hope it works :-)
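
A sketch of pulling "method, uri" pairs out of a saved WebScarab session, assuming the usual layout of a conversations/ directory holding files named like "1-request" (layout and path hypothetical):

    import os

    conv_dir = 'webscarab-session/conversations'  # hypothetical path
    for name in sorted(os.listdir(conv_dir)):
        if not name.endswith('-request'):
            continue
        f = open(os.path.join(conv_dir, name))
        # first line looks like: GET http://host/page HTTP/1.0
        request_line = f.readline().split()
        f.close()
        if len(request_line) >= 2:
            print('%s,%s,' % (request_line[0], request_line[1]))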

Tom


On Tue, Apr 27, 2010 at 12:24 PM, Tom Ueltschi <security-stuff-heLoQhN5uAnk1uMJSBkQmQ@public.gmane.org> wrote:
Hi Tiago

thanks for the reply.  i missed adding a scope at the beginning, but tried to do it afterwards in the proxy-history tab by selecting the list of url's.

from proxy-history there is also the option of "save selected items" which generates a XML file (with items, time, url, request, response etc. as elements).

what's the format expected by importResults input_burp?

thanks,
Tom



On Tue, Apr 27, 2010 at 11:54 AM, Tiago Mendo <tiago.mendo-dl2ejS2iUSYVhHzd4jOs4w@public.gmane.org> wrote:

On 2010/04/27, at 10:33, Tom Ueltschi wrote:

Hi Andres and list,

instead of the spiderMan plugin I would like to use another proxy (burp, webscarab) and import the URL's from a file. This way I just have to do it once for multiple scans (no interaction required).

- The latest version from importResults says in its description:

       Three configurable parameter exist:
           - input_csv
           - input_burp
           - input_webscarab

I've used paros proxy extensively, but don't know if I could export a url list in the "inpuc_csv" format.

Has anyone done this with burp or webscarab proxy? Which on is easier to just create an url list?

I know you can easily generate a list of URL GET requests with the free Burp. Just define a scope for your site, access it through the Burp proxy, and then right click the site in the history tab (I think it is the first one). Choose spider from here (or similar) and then right click again and choose one of the two export options. One of them will fill the clipboard with a list of GETs.

I don't recall doing it with webscarab, so I can't give you more information.



Can you do this with the free version of burp?

yes.


Do you know of the right menu entry to save the url file from burp or webscarab?  (I will try to find it myself with burp first)

read above


Thanks for any help.

Cheers,
Tom

Taras | 27 Apr 22:14 2010

Re: Fwd: [Full-disclosure] 2010 Nmap/SecTools.org survey

+1
Good idea!

On 04/27/2010 06:07 PM, Andres Riancho wrote:
> If you guys have 5 minutes to spare, please complete this survey, and
> add w3af to your tools :)
> [...]

--
Taras
----
"Software is like sex: it's better when it's free." - Linus Torvalds

blscrown Zakk | 28 Apr 16:11 2010

Auto-enabling plugin

Hello, I have a question about the discovery phase.
When I'm testing an intranet web application with all discovery plugins enabled except fingerMSN, fingerGoogle and fingerPKS, w3af enables these plugins automatically.
Why does that happen?
Thanks.

