lemons_terry | 4 Feb 17:07 2010

FW: Iperf on Solaris?

Hi

I'm trying to create an Iperf binary on Solaris 10 SPARC x64.  I followed the instructions provided by a
Phillip Ross post to this email list in July 2008, but I still see the problem mentioned in another email on
this list:

		# /usr/sfw/bin/gmake all
		/usr/sfw/bin/gmake  all-recursive
		gmake[1]: Entering directory `/export/home/iperf/iperf-2.0.4'
		Making all in compat
		gmake[2]: Entering directory `/export/home/iperf/iperf-2.0.4/compat'
		gmake[2]: Nothing to be done for `all'.
		gmake[2]: Leaving directory `/export/home/iperf/iperf-2.0.4/compat'
		Making all in doc
		gmake[2]: Entering directory `/export/home/iperf/iperf-2.0.4/doc'
		gmake[2]: Nothing to be done for `all'.
		gmake[2]: Leaving directory `/export/home/iperf/iperf-2.0.4/doc'
		Making all in include
		gmake[2]: Entering directory `/export/home/iperf/iperf-2.0.4/include'
		gmake[2]: Nothing to be done for `all'.
		gmake[2]: Leaving directory `/export/home/iperf/iperf-2.0.4/include'
		Making all in src
		gmake[2]: Entering directory `/export/home/iperf/iperf-2.0.4/src'
		CC     -o iperf -D_REENTRANT   -DHAVE_CONFIG_H Client.o Extractor.o Launch.o List.o Listener.o Locale.o PerfSocket.o ReportCSV.o ReportDefault.o Reporter.o Server.o Settings.o SocketAddr.o gnu_getopt.o gnu_getopt_long.o main.o service.o sockets.o stdio.o tcp_window_size.o ../compat/libcompat.a  -lsocket -lnsl
		Undefined                       first referenced
		 symbol                             in file
		sched_yield                         ../compat/libcompat.a(Thread.o)

Phillip Ross | 4 Feb 17:35 2010

Re: FW: Iperf on Solaris?

I'm still watching the list and helping folks who run into the issue... but I believe you're the first to have
the problem on sparc architecture.  And come to think of it, I might have provided workarounds for x86 but
not sparc.  I'll test on a sparc and get back to you :)

----- Original Message ----
> From: "lemons_terry@..." <lemons_terry@...>
> To: iperf-users@...
> Sent: Thu, February 4, 2010 11:07:33 AM
> Subject: [Iperf-users] FW: Iperf on Solaris?
> 
> Hi
> 
> I'm trying to create an Iperf binary on Solaris 10 SPARC x64.  I followed the 
> instructions provided by a Phillip Ross post to this email list in July 2008, but 
> still see the problem mentioned in another email on this list:
> [...]

lemons_terry | 4 Feb 19:35 2010

Re: FW: Iperf on Solaris?

TERRIFIC!  Thanks very much, Drew.

So the only changes I needed to make to the normal procedure were:

- remove '-Wall' from all of the Makefiles
- add '-lrt' to the LIBS line in the 'src' directory's Makefile, so it reads "LIBS = -lsocket -lnsl -lrt".
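
Put together as commands, the procedure looks roughly like this (the perl one-liner and exact paths are illustrative, not from the original post; adjust to your tree):

	cd /export/home/iperf/iperf-2.0.4
	# strip the gcc-specific -Wall flag, which Sun Studio CC rejects
	find . -name Makefile -exec perl -pi -e 's/ -Wall//g' {} \;
	# edit src/Makefile so the link line reads:  LIBS = -lsocket -lnsl -lrt
	# (-lrt is where sched_yield lives on Solaris, fixing the undefined symbol)
	/usr/sfw/bin/gmake all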

I could not apply the patch to the resulting code, but maybe I don't need it:

# gpatch -p1 < ../iperf-2.0.4-pthreads-rt.patch
patching file compat/Makefile
Hunk #1 FAILED at 117.
1 out of 1 hunk FAILED -- saving rejects to file compat/Makefile.rej
patching file config.h
Reversed (or previously applied) patch detected!  Assume -R? [n] 
Apply anyway? [n] 
Skipping patch.
1 out of 1 hunk ignored -- saving rejects to file config.h.rej
patching file doc/Makefile
Hunk #1 FAILED at 92.
1 out of 1 hunk FAILED -- saving rejects to file doc/Makefile.rej
patching file include/Makefile
Hunk #1 FAILED at 92.
1 out of 1 hunk FAILED -- saving rejects to file include/Makefile.rej
patching file Makefile
Hunk #1 FAILED at 116.
1 out of 1 hunk FAILED -- saving rejects to file Makefile.rej
patching file man/Makefile
Hunk #1 FAILED at 97.
1 out of 1 hunk FAILED -- saving rejects to file man/Makefile.rej
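
For what it's worth, GNU patch's "Reversed (or previously applied) patch detected!" message usually means the tree already contains that change, so the failing hunks may simply be obsolete in 2.0.4. A non-destructive way to check (a standard GNU patch option, not something used above):

	# report what would happen without modifying any files
	gpatch --dry-run -p1 < ../iperf-2.0.4-pthreads-rt.patch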

martin | 12 Feb 01:18 2010

analyzing iperf output

I ran an iperf test from one Linux server to another. One server is in
Helsinki, the other is in London, and they are connected with a VPN
tunnel (it should be a 100Mbps/100Mbps connection). I started the iperf
server in London (192.168.1.2) and the iperf client in Helsinki
(192.168.1.1). I started the server with "iperf -s -u -fm" and the client
with "iperf -c 192.168.1.2 -fm -d -t600 -u -b 100m". The output was the
following:

#iperf -c 192.168.1.2 -fm -d -t600 -u -b 100m
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 0.10 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.1.2, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.10 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.1 port 44456 connected with 192.168.1.2 port 5001
[  3] local 192.168.1.1 port 5001 connected with 192.168.1.2 port 31435

[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-600.0 sec  7189 MBytes    101 Mbits/sec
[  4] Sent 5128207 datagrams
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  3]  0.0-600.2 sec  2396 MBytes  33.5 Mbits/sec  0.244 ms 3419123/5128199 (67%)
[  4] Server Report:
[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
[  4]  0.0-600.1 sec  2411 MBytes  33.7 Mbits/sec  0.199 ms 3408321/5128207 (66%)
[  4]  0.0-600.1 sec  299 datagrams received out-of-order

How can the bandwidth from Helsinki to London be 101 Mbits/sec when the iperf
server in London reports 33.7 Mbits/sec? How can there be such huge packet
loss (67% from London to Helsinki and 66% from Helsinki to London)? I would
appreciate any comments about the inner workings of Iperf or explanations of
its output :)  Thank you in advance!!

lemons_terry | 12 Feb 03:12 2010

10 GbE performance

Hi

We're just starting to use 10 GbE in our lab.  I don't know what to expect for iperf performance in the 10 GbE
world.  We're already seeing that our Linux systems achieve single-stream performance of over 6 Gbps,
while our Solaris systems can't get over 1 Gbps.
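
(One possibility worth ruling out, though this is speculation rather than a diagnosis from this thread: Solaris 10's default TCP buffer ceilings are small for a 10 GbE path, and ndd can raise them.)

	# illustrative values only; check your platform docs before changing
	ndd -set /dev/tcp tcp_max_buf 4194304
	ndd -set /dev/tcp tcp_xmit_hiwat 1048576
	ndd -set /dev/tcp tcp_recv_hiwat 1048576
	# then request a matching window from iperf
	iperf -c <server> -w 1M -t 30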

Anyone care to share your experience?

Thanks!
tl

Terry Lemons
Backup Recovery Systems Division
EMC²
where information lives
4400 Computer Drive, MS D239
Westboro MA 01580
Phone: 508 898 7312
Email: Lemons_Terry@... 


Metod Kozelj | 12 Feb 09:43 2010

Re: analyzing iperf output

Howdy!

When you run UDP tests, you should never rely on the results reported by the sending peer[*]. Any network node between the peers is free to drop packets in case of congestion. TCP resolves this with retransmissions, and the application will normally detect it by seeing lower throughput; even the sending peer will detect the lower end-to-end throughput after some delay (depending on the Tx buffer size and TCP window size), so in the long term the results reported by the sending and receiving peers should be roughly the same. UDP, however, does not implement retransmissions itself, which means that the application at the receiving peer will see missing (dropped) packets while the application at the sending peer has no idea about them[**].

If you're testing with UDP, I'd suggest running tests in a single direction only, alternating the peer configuration: first run the Helsinki peer as client and the London peer as server (the results reported by the London peer tell you the bandwidth from FI to UK), then run the Helsinki peer as server and the London peer as client (the results reported by the Helsinki peer tell you the bandwidth from UK to FI); see the sketch below.
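
Concretely, the two runs would look something like this, reusing the flags from the original post (the host prompts are just labels):

	# Helsinki -> London: trust the report from the London (receiving) side
	london$   iperf -s -u -fm
	helsinki$ iperf -c 192.168.1.2 -u -fm -t600 -b 100m

	# London -> Helsinki: swap the roles, trust the Helsinki report
	helsinki$ iperf -s -u -fm
	london$   iperf -c 192.168.1.1 -u -fm -t600 -b 100m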

[*] The only time the sending peer will tell you the correct throughput is when the first leg is the bottleneck, since the IP stack on the sending peer will not drop packets but rather throttle the application (Tx buffer full); this can happen with dial-up setups.

[**] When using UDP as the transport protocol, it's up to the application to detect any inconsistencies, such as dropped packets or out-of-order delivery, and to react to them as desired. One app that does all of this is NFS, which traditionally ran over UDP; recent implementations avoid the burden of data integrity checking by using TCP as the transport protocol.

Peace!
  Mkx

--
perl -e 'print $i=pack(c5,(41*2),sqrt(7056),(unpack(c,H)-2),oct(115),10);'
--
echo 16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlb xq | dc

BOFH excuse #127: Sticky bits on disk.

On 12/02/10 01:18, martin wrote:
> How can the bandwidth from Helsinki to London be 101 Mbits/sec when the iperf
> server in London reports 33.7 Mbits/sec? How can there be such huge packet
> loss (67% from London to Helsinki and 66% from Helsinki to London)? [...]

Amir Ancel | 14 Feb 09:47 2010

Re: 10 GbE performance

Hi,

10GbE should get to about 9.45 Gbps for unidirectional traffic.
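
That figure lines up with simple framing-overhead arithmetic (back-of-the-envelope numbers, not from Amir's post). With a 1500-byte MTU, each frame carries at most 1460 bytes of TCP payload (40 bytes of IP+TCP headers) and costs a further 38 bytes of Ethernet overhead (preamble 8, header 14, FCS 4, inter-frame gap 12):

	10 Gbps x 1460 / (1500 + 38) = 9.49 Gbps  (theoretical ceiling)
	10 Gbps x 1448 / (1500 + 38) = 9.41 Gbps  (with 12-byte TCP timestamps)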

Thanks,

Amir Ancel
Mellanox Performance

-----Original Message-----
From: lemons_terry@...
[mailto:lemons_terry@...] 
Sent: Friday, February 12, 2010 4:13 AM
To: iperf-users@...
Subject: [Iperf-users] 10 GbE performance



Wichai Komentrakarn | 15 Feb 02:51 2010

Iperf UDP Packet Loss

Hi,
 
I am trying to use Iperf to analyze UDP packet loss on a network. Iperf reported several percent packet loss, but when I used Wireshark to capture the sent and received packets and compared them, I couldn't see any packet loss on the receiving side. Can you explain how Iperf arrives at its percentage of packet loss, or under what condition Iperf decides that it didn't receive a UDP packet?
 
I have the Wireshark captures from the sender side and the receiver side, and I can see that all packets were received. Note that the capture on the receiving side was done on the receiver PC itself. I also used Wireshark to print all the hex bytes of these packets into text files and compared the files in a text editor; only the first 32 bytes (MAC addresses, IP TTL and IP header checksum) differ between the sender side and the receiver side, which is normal.
 
This is Iperf version 1.7.0 for Win32.
 
Thanks in advance for your advice.
 
Regards,
 
Wichai
Feghali, Jose | 15 Feb 19:15 2010

Re: Iperf UDP Packet Loss

I have seen the same behavior in iperf before: out-of-order and/or lost packets, even though UDP transmissions with DVTS or CXP worked very well. If memory serves, playing around with some of the settings (especially the -w parameter) seemed to improve things, but we could never get it to work properly. I also used Wireshark to confirm and got similar results to yours. If I can find any notes on this issue later, I will post them here.
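
For reference, raising the socket buffers with -w looks roughly like this (a generic sketch; the 4M size is illustrative and the OS may silently cap whatever you request):

	# larger socket buffers on both ends can reduce kernel-level UDP drops
	server$ iperf -s -u -w 4M -fm
	client$ iperf -c <server-ip> -u -w 4M -b 100m -t 60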

 

José

 

From: Wichai Komentrakarn [mailto:wichaikk@...]
Sent: Sunday, February 14, 2010 7:52 PM
To: iperf-users@...
Subject: [Iperf-users] Iperf UDP Packet Loss

 

Aaron Brown | 16 Feb 14:55 2010

Re: Iperf UDP Packet Loss


On Feb 14, 2010, at 8:51 PM, Wichai Komentrakarn wrote:

> I am trying to use Iperf to analyze UDP packet loss on a network. [...] Can you explain how Iperf arrives at its percentage of packet loss, or under what condition Iperf decides that it didn't receive a UDP packet?

Since UDP is a lossy protocol, the kernel does not guarantee that it will send the data it's been handed, and it will sometimes drop packets if, for example, the send or receive buffers are full. It could also be that packets got corrupted on the wire, failed their checksums, and were tossed out. That would also explain your captures: a sniffer taps packets before they reach the socket buffer, so a datagram can appear in the receiver-side capture and still be dropped by the kernel before iperf reads it, at which point iperf sees a gap in its payload sequence numbers and counts the datagram as lost.
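
One way to check whether the kernel is discarding datagrams (generic OS counters; a suggestion, not something from Aaron's mail) is to watch the UDP error statistics climb during a test:

	# Linux: look for 'packet receive errors' / receive buffer errors increasing
	netstat -su

	# Windows (the original poster's platform): look at UDP 'Receive Errors'
	netstat -s -p udp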

Cheers,
Aaron

Internet2 Spring Member Meeting
April 26-28, 2010 - Arlington, Virginia
http://events.internet2.edu/2010/spring-mm/

