Jeffrey Lane | 18 Apr 04:33 2014

-B binding option not working

I'm running iperf in Ubuntu.  The package seems to be this version:

iperf version 2.0.5 (08 Jul 2010) pthreads

And I'm having problems with testing multiple interfaces.

I have two NICs on my server:

eth0      Link encap:Ethernet  HWaddr 00:30:48:65:5e:0c
          inet addr:10.0.0.123  Bcast:10.0.0.0  Mask:255.255.255.0

eth1      Link encap:Ethernet  HWaddr 00:30:48:65:5e:0d
          inet addr:10.0.0.128  Bcast:10.0.0.0  Mask:255.255.255.0

And I do not have a default route set (I deleted the default route
in case it was the thing giving me grief):

ubuntu@supermicro:~$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0

And I'm running iperf as a server on a target machine whose IP is 10.0.0.1.

Now, when I try to run iperf using -B to bind to a particular interface:
ubuntu@supermicro:~$ for x in 123 128; do  iperf -B 10.0.0.$x -c 10.0.0.1; done
------------------------------------------------------------
Client connecting to 10.0.0.1, TCP port 5001
Binding to local address 10.0.0.123

Bruce A. Mah | 26 Mar 21:33 2014

iperf-3.0.3 is available

ESnet (Energy Sciences Network) is proud to announce the public
release of iperf-3.0.3.  This version is a maintenance release with a
few bug fixes and enhancements, notably:

* The structure of the JSON output is more consistent between the
  cases of one stream and multiple streams.

* The example programs once again build correctly.

* A possible buffer overflow related to error output has been fixed.
  (This is not believed to be exploitable.)

More information on changes can be found in the RELEASE_NOTES
file in the source distribution.

iperf3 is a tool for measuring the maximum TCP and UDP performance
along a path, allowing for the tuning of various parameters and
reporting measurements such as throughput, jitter, and datagram packet
loss.

The original iperf was implemented by NLANR / DAST.  Version 3 is a
complete reimplementation, with the goals of a smaller, simpler code
base, and a library that can be used by other programs.  It also adds
new functionality, such as CPU utilization measurements, zero copy TCP
support, and JSON output.  Note that iperf3 clients and servers are
not compatible with, and will not interoperate with, earlier versions
of iperf.

iperf3 is fully supported on Linux, FreeBSD, and MacOS X.  It may run
on other platforms as well, although it has not received the same
attention and testing.

Varun Sharma | 24 Mar 18:18 2014

Re: Fwd: Sending rate decrease in TCP bidirectional test .

Hi,
I went through the iperf source code and would like to discuss an observation.

In the unidirectional case, the sending-side loop runs more iterations than it does in the bidirectional case, which is why less data is transferred in the bidirectional case. What I don't understand is why it runs fewer iterations when the loop runs for the same amount of time in both cases, and how this explains the bidirectional sending-side performance.

Regards
Varun


On Sun, Feb 23, 2014 at 2:02 AM, Sandro Bureca <sbureca-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
Hi,
do you have any iptables or other packet-filtering mechanism on the machine?
Even if all traffic is permitted, it might affect the TCP flow somehow.
Some people tune the TCP stack with larger interface buffers (ifconfig)
and kernel parameters (sysctl).

See also:
http://dak1n1.com/blog/7-performance-tuning-intel-10gbe



Kind regards,
Sandro

On 21 February 2014 18:54, Varun Sharma <vsdssd-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> With UDP packets it works fine.
> When I perform a TCP bidirectional test on the loopback interface (i.e.
> server and client on the same machine), the same decrease on the TCP
> sending side happens. Does that mean the problem is related to the
> TCP/IP stack?
>
>
> On Thu, Feb 20, 2014 at 11:46 PM, Bob (Robert) McMahon
>> <rmcmahon@broadcom.com> wrote:
>>
>> It's hard to say without a lot more information (and it's difficult to
>> debug via email).  Do you see the same phenomenon when using UDP packets?
>>
>> If it's TCP-only, a next step may be to analyze the TCP flows with
>> something like tcptrace:  http://www.tcptrace.org/
>>
>>
>>
>> Bob
>>
>> From: Varun Sharma [mailto:vsdssd-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org]
>> Sent: Thursday, February 20, 2014 1:45 AM
>> To: Bob (Robert) McMahon
>> Cc: iperf-users-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org; amira-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org
>> Subject: Re: [Iperf-users] Fwd: Sending rate decrease in TCP bidirectional
>> test .
>>
>>
>>
>>
>>
>> Hi Bob
>>
>> Thanks for reply.
>>
>> I made the change in iperf you suggested, but the problem still occurs.
>> Is there any other setting that would overcome this problem?
>>
>> Can you tell me why this happens?
>>
>>
>>
>> Varun
>>
>>
>>
>> On Thu, Feb 20, 2014 at 11:11 AM, Bob (Robert) McMahon
>> <rmcmahon-dY08KVG/lbpWk0Htik3J/w@public.gmane.org> wrote:
>>
>> I've had to increase the NUM_REPORT_STRUCTS to get better iperf
>> performance in 2.0.5
>>
>>
>>
>> improved/iperf] $ svn diff include/*.h
>>
>> Index: include/Reporter.h
>> ===================================================================
>> --- include/Reporter.h   (revision 2)
>> +++ include/Reporter.h   (working copy)
>> @@ -61,7 +61,7 @@
>>
>>  #include "Settings.hpp"
>>
>> -#define NUM_REPORT_STRUCTS 700
>> +#define NUM_REPORT_STRUCTS 7000
>>  #define NUM_MULTI_SLOTS    5
>>
>> Bob
>>
>>
>>
>> From: Varun Sharma [mailto:vsdssd-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org]
>> Sent: Wednesday, February 19, 2014 9:15 PM
>> To: iperf-users-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org; amira-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org
>> Subject: [Iperf-users] Fwd: Sending rate decrease in TCP bidirectional
>> test .
>>
>>
>>
>> Hi,
>>
>> My machine's ethtool -i info:
>>
>> driver: mlx4_en
>>
>> version: 2.1.6 (Aug 27 2013)
>> firmware-version: 2.5.0
>> bus-info: 0000:19:00.0
>>
>> Even after applying the patch, the problem still occurs. Can you tell me
>> why the decrease on the sending side happens?
>> Is this a problem with iperf, the TCP/IP stack, or the NIC?
>>
>> Regards
>>
>> Varun Sharma
>>
>>
>>
>>
>>
>> ---------- Forwarded message ----------
>> From: Amir Ancel <amira-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
>> Date: Wed, Feb 19, 2014 at 2:36 PM
>> Subject: RE: [Iperf-users] Sending rate decrease in TCP bidirectional test
>> .
>> To: Varun Sharma <vsdssd-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>, "iperf-users-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org"
>> <iperf-users-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org>
>> Cc: Sagi Schlanger <sagis-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
>>
>> Hi Varun,
>>
>>
>>
>> Can you please share your driver and firmware versions using "ethtool -i
>> ethX"?
>>
>> Also attached is a patch that fixes a bidirectional functional issue.
>>
>>
>>
>> Thanks,
>>
>>
>>
>> Amir Ancel
>>
>> Performance and Power Group Manager
>>
>> www.mellanox.com
>>
>>
>>
>> From: Varun Sharma [mailto:vsdssd-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org]
>> Sent: Wednesday, February 19, 2014 10:44 AM
>> To: iperf-users-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
>> Subject: [Iperf-users] Sending rate decrease in TCP bidirectional test .
>>
>>
>>
>> Hi,
>>
>> I am using iperf v2.0.5 to test a Mellanox ConnectX VPI card. It's a
>> dual-port 10G card. Two 16-core machines with 64 GB of RAM are connected
>> back to back.
>>
>> In the TCP bidirectional test, sending-side throughput decreases compared
>> to the TCP unidirectional test.
>>
>> All cases use default settings; no extra parameters are set.
>>
>>
>>
>> With 4 client threads:
>>   Unidirectional send --- 9.6 Gbps
>>   Bidirectional send  --- 4.9 Gbps
>>
>> With 8 client threads:
>>   Unidirectional send --- 9.7 Gbps
>>   Bidirectional send  --- 6.4 Gbps
>>
>> With 16 client threads:
>>   Unidirectional send --- 9.7 Gbps
>>   Bidirectional send  --- 8 Gbps
>>
>> Any reason for this outcome?
>>
>> Regards
>>
>> Varun
>>
>>
>>
>>
>>
>>
>
>
>

_______________________________________________
Iperf-users mailing list
Iperf-users@...
https://lists.sourceforge.net/lists/listinfo/iperf-users
Damian Lezama | 20 Mar 19:05 2014

iperf tests with many connections

Hi,

I've observed that using many connections (hundreds) with iperf (-P 100, for example) has some issues:
- On links with some latency it takes a very long time to start testing, and the
startup time seems to grow much faster than linearly: for example, 20 seconds for
50 connections and 100 seconds for 100 connections.
- At some point between 100 and 200 connections, iperf stops showing the [SUM] row at the end with the total result.

Any ideas?

(The iperf version available in this environment is quite old, 2.0.4. Should I
expect this to be fixed in a newer version?)

Thanks,
Damian


Bruce A. Mah | 10 Mar 19:16 2014

iperf3-3.0.2 is available

SPECIAL NOTE: The iperf3 project, including the source code
repository, issue tracker, and wiki, has been moved to GitHub.

ESnet (Energy Sciences Network) is proud to announce the public
release of iperf3 3.0.2.  This version is a maintenance release that
fixes a number of bugs, many reported by users, adds a few minor
enhancements, and tracks the migration of the iperf3 project to
GitHub.  Of particular interest:

* Build / runtime fixes for CentOS 5, MacOS 10.9, and FreeBSD.

* TCP snd_cwnd output on Linux in the default output format.

* libiperf is now built as both a shared and static library; by
  default, the iperf3 binary links to the shared library.

More information on changes can be found in the RELEASE_NOTES
file in the source distribution.

iperf3 is a tool for measuring the maximum TCP and UDP performance
along a path, allowing for the tuning of various parameters and
reporting measurements such as throughput, jitter, and datagram packet
loss.

The original iperf was implemented by NLANR / DAST.  Version 3 is a
complete reimplementation, with the goals of a smaller, simpler code
base, and a library that can be used by other programs.  It also adds
new functionality, such as CPU utilization measurements, zero copy TCP
support, and JSON output.  Note that iperf3 clients and servers are
not compatible with, and will not interoperate with, earlier versions
of iperf.

iperf3 is fully supported on Linux, FreeBSD, and MacOS X.  It may run
on other platforms as well, although it has not received the same
attention and testing.

The source code for iperf 3.0.2 is available at:

http://stats.es.net/software/iperf-3.0.2.tar.gz

SHA256 hash:

3c379360bf40e6ac91dfc508cb43fefafb4739c651d9a8d905a30ec99095b282
iperf-3.0.2.tar.gz

iperf3 is freely-redistributable under a 3-clause BSD license.  More
information can be found in the LICENSING file inside the source
distribution.

Additional information about iperf3 (including the issue tracker,
source code repository access, and mailing list) can be found on the
iperf3 page on GitHub at:

https://github.com/esnet/iperf

The mailing list for iperf3 development is:

iperf-dev@...

To see the list archives or join the mailing list, visit:

http://groups.google.com/group/iperf-dev

Robert Clove | 26 Feb 09:57 2014

Change Packet Size

Hi All,


Can anyone please guide me on how to change the packet size in iperf?
Thanks in advance.


Regards

Bruce A. Mah | 25 Feb 19:42 2014

HEADSUP: iperf3 moving to GitHub (28 Feb 2014)

Greetings!

Sometime in the near future, I will be migrating the iperf3 project from
Google Code to GitHub.  This is part of a movement of several
ESnet-sponsored software projects (including iperf3) from various SCMs
and services onto a single platform.

This migration will include the source code repository, issue tracker
content, and wiki pages.  Downloads will continue to be hosted at ESnet
and will not be affected.  The iperf3 development list
(<iperf-dev@...>) will continue as a Google Group (at least
for now).

I'm tentatively planning to do this work this Friday, 28 February 2014,
during California daytime hours.  When done, I will hide (but not
delete) as much as I can from the Google Code site and put pointers to
GitHub in as many appropriate places as I can find.

Please feel free to contact me if you have any questions.

Thank you,

Bruce.

Varun Sharma | 20 Feb 12:03 2014

Re: Fwd: Sending rate decrease in TCP bidirectional test .

Hi Amir

Thanks for the reply.

Even after running the script (set_irq_affinity.sh ethX), the problem still occurs. Is this problem related to the iperf software or to the TCP/IP stack?

Varun



On Thu, Feb 20, 2014 at 3:30 PM, Amir Ancel <amira-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org> wrote:

Please try the following script which is provided with the package:

 

# set_irq_affinity.sh ethX

 

Thanks,

 

Amir Ancel

Performance and Power Group Manager

www.mellanox.com

 


Varun Sharma | 20 Feb 06:15 2014

Fwd: Sending rate decrease in TCP bidirectional test .

Hi,

My machine's ethtool -i info:

driver: mlx4_en
version: 2.1.6 (Aug 27 2013)
firmware-version: 2.5.0
bus-info: 0000:19:00.0

Even after applying the patch, the problem still occurs. Can you tell me why the decrease on the sending side happens?
Is this a problem with iperf, the TCP/IP stack, or the NIC?

Regards
Varun Sharma


Attachment (iperf_dual_test_fix.patch): application/octet-stream, 5616 bytes
Varun Sharma | 19 Feb 09:41 2014

Sending rate decrease in TCP bidirectional test .

Hi,

I am using iperf v2.0.5 to test a Mellanox ConnectX VPI card. It's a dual-port 10G card. Two 16-core machines with 64 GB of RAM are connected back to back.

In the TCP bidirectional test, sending-side throughput decreases compared to the TCP unidirectional test.

All cases use default settings; no extra parameters are set.


With 4 client threads:
  Unidirectional send --- 9.6 Gbps
  Bidirectional send  --- 4.9 Gbps

With 8 client threads:
  Unidirectional send --- 9.7 Gbps
  Bidirectional send  --- 6.4 Gbps

With 16 client threads:
  Unidirectional send --- 9.7 Gbps
  Bidirectional send  --- 8 Gbps

Any reason for this outcome?

Regards

Varun

Sandro Bureca | 12 Feb 10:11 2014

Iperf additional measurement one way delay

Dear all, do you think iperf could add a measurement of the one-way delay (the trip time from client to server)? Thank you all.

