Hoshall, Paul | 21 Jun 19:55 2016

FW: Who created iPerf 3.1.3 for Windows 64 Bits?

I developed a UDP application using the BSD sockets API on 10 Gbps network adapters, but I only get about 40% of the link's throughput.

I'm using Microsoft Visual Studio 2010 and Windows 7.

I'm using basic socket functions: socket, bind, setsockopt, sendto (UDP broadcast).
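
Roughly, my send path looks like the sketch below (simplified; the addresses, port, datagram size, and SO_SNDBUF value are placeholders, not my actual values):

/* Simplified Winsock UDP broadcast sender (sketch only; link with ws2_32.lib).
 * Addresses, port, and sizes below are placeholders. */
#include <winsock2.h>
#include <string.h>

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s == INVALID_SOCKET)
        return 1;

    /* Allow broadcast and enlarge the socket send buffer. */
    BOOL bcast = TRUE;
    setsockopt(s, SOL_SOCKET, SO_BROADCAST, (const char *)&bcast, sizeof(bcast));
    int sndbuf = 4 * 1024 * 1024;
    setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char *)&sndbuf, sizeof(sndbuf));

    /* Bind to the 10G adapter's address (placeholder). */
    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_port = 0;
    local.sin_addr.s_addr = inet_addr("192.168.1.10");
    bind(s, (struct sockaddr *)&local, sizeof(local));

    /* Broadcast destination (placeholder). */
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(5001);
    dst.sin_addr.s_addr = inet_addr("192.168.1.255");

    static char payload[65507];   /* placeholder datagram size */
    for (;;) {
        if (sendto(s, payload, sizeof(payload), 0,
                   (struct sockaddr *)&dst, sizeof(dst)) == SOCKET_ERROR)
            break;
    }

    closesocket(s);
    WSACleanup();
    return 0;
}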

iPerf 3.1.3 gets almost 100% throughput.

Is the Windows iPerf 3.1.3 source available?  

I'm missing something.....

V/R,
Paul Hoshall
Principal Development Engineer
AAI Corporation
(410)-628-3098
From: Hoshall, Paul <hoshal@...>
Subject: Who created iPerf 3.1.3 for Windows 64 Bits?
Date: 2016-06-21 17:39:28 GMT

 


Bruce Mah | 8 Jun 21:04 2016

iperf-3.1.3 is available


ESnet (Energy Sciences Network) announces iperf-3.1.3, the latest
point/bugfix release from the iperf 3.1 codeline.

iperf-3.1.3 fixes an important security issue in all previous versions
of iperf3.  That issue was a buffer overflow / heap corruption issue
that could occur if a malformed JSON string was passed on the control
channel.  In theory, it could be leveraged to create a heap exploit.
This vulnerability was discovered and reported by Dave McDaniel, Cisco
Talos.  More information can be found in security advisories
TALOS-CAN-0164 and ESNET-SECADV-2016-0001.  The CVE identifier
CVE-2016-4303 has been assigned for this vulnerability.

This release of iperf3 also supports the use of fair-queueing-based
per-socket pacing (in recent versions of the Linux kernel).  Several
other bugs and portability fixes are also included.
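
For reference, this pacing is built on the Linux SO_MAX_PACING_RATE socket
option together with the fq queueing discipline; the fragment below only
sketches that mechanism (it is not iperf3's source, and the example rate is
arbitrary):

/* Sketch: pace a socket's output to ~1 Gbit/s via fair-queueing socket
 * pacing.  Requires a Linux kernel with the fq qdisc on the outgoing
 * interface.  Not iperf3's actual code. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#ifndef SO_MAX_PACING_RATE
#define SO_MAX_PACING_RATE 47   /* asm-generic value; absent from older headers */
#endif

static int set_pacing_rate(int sock, uint64_t bits_per_second)
{
    unsigned int rate = (unsigned int)(bits_per_second / 8);  /* bytes/sec */
    if (setsockopt(sock, SOL_SOCKET, SO_MAX_PACING_RATE,
                   &rate, sizeof(rate)) < 0) {
        perror("setsockopt(SO_MAX_PACING_RATE)");
        return -1;
    }
    return 0;
}

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
        return 1;
    set_pacing_rate(sock, 1000000000ULL);   /* example: 1 Gbit/s */
    /* ... connect() and send as usual; the kernel spaces out the packets. */
    close(sock);
    return 0;
}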

More information on changes can be found in the RELEASE_NOTES
file in the source distribution.

iperf3 is a tool for measuring the maximum TCP and UDP performance
along a path, allowing for the tuning of various parameters and
reporting measurements such as throughput, jitter, and datagram packet
loss.  It is fully supported on Linux, FreeBSD, and MacOS X.  It may
run on other platforms as well, although it has not received the same
attention and testing.  Note that iperf3 is not compatible with, and
will not interoperate with, version 2 or earlier of iperf.

The source code for iperf 3.1.3 is available at:

[message truncated]

Bruce Mah | 8 Jun 21:04 2016

iperf-3.0.12 is available


ESnet (Energy Sciences Network) announces the public release
of iperf-3.0.12, the latest point/bugfix release from the iperf 3.0
codeline.

iperf-3.0.12 fixes an important security issue in all previous versions
of iperf3.  That issue was a buffer overflow / heap corruption issue
that could occur if a malformed JSON string was passed on the control
channel.  In theory, it could be leveraged to create a heap exploit.
This vulnerability was discovered and reported by Dave McDaniel, Cisco
Talos.  More information can be found in security advisories
TALOS-CAN-0164 and ESNET-SECADV-2016-0001.  The CVE identifier
CVE-2016-4303 has been assigned for this vulnerability.

More information on changes can be found in the RELEASE_NOTES
file in the source distribution.

iperf3 is a tool for measuring the maximum TCP and UDP performance
along a path, allowing for the tuning of various parameters and
reporting measurements such as throughput, jitter, and datagram packet
loss.

The original iperf was implemented by NLANR / DAST.  Version 3 is a
complete reimplementation, with the goals of a smaller, simpler code
base, and a library that can be used by other programs.  It also adds
new functionality, such as CPU utilization measurements, zero copy TCP
support, and JSON output.  Note that iperf3 clients and servers are
not compatible with, and will not interoperate with, earlier versions
of iperf.
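
As an aside, the "zero copy TCP support" mentioned above refers to sending
with sendfile(2) rather than an ordinary write(); a minimal sketch of that
path on Linux (not iperf3's actual code; names are placeholders):

/* Sketch of a sendfile(2)-based send, the kind of "zero copy" path used
 * by iperf3's -Z/--zerocopy option on Linux.  Not iperf3's actual code. */
#include <sys/sendfile.h>
#include <sys/types.h>

ssize_t send_block_zerocopy(int sock, int file_fd, size_t block_size)
{
    off_t offset = 0;
    /* The kernel moves data from the file's page cache to the socket,
     * skipping the usual read()-into-userspace-then-write() copy. */
    return sendfile(sock, file_fd, &offset, block_size);
}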

[message truncated]

Bruce Mah | 8 Jun 21:02 2016

ESnet Software Security Advisory ESNET-SECADV-2016-0001 (iperf3)


ESnet Software Security Advisory
ESNET-SECADV-2016-0001

Topic:			iperf3 JSON parsing vulnerability
Issued:			8 June 2016
Credits:		Dave McDaniel, Cisco Talos
Affects:		iperf-3.1.2 and earlier,
			iperf-3.0.11 and earlier
Corrected:		iperf-3.1.3, iperf-3.0.12
Cross-references:	TALOS-CAN-0164, CVE-2016-4303

I.  Background

iperf3 is a utility for testing network performance using TCP, UDP,
and SCTP, running over IPv4 and IPv6.  It uses a client/server model,
where a client and server communicate the parameters of a test,
coordinate the start and end of the test, and exchange results.  This
message exchange takes place over a TCP control connection, and relies
on a modified version of the open-source cjson library for rendering
and parsing the various messages in JSON.

II.  Problem Description

A bug exists in the way that the included version of the cjson library
handles Unicode literals in JSON string constants.  A malformed
Unicode literal can cause a process parsing a block of JSON to
overwrite a pre-allocated buffer in the heap.  Note that this bug has
already been fixed in recent versions of cjson.
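
The fragment below only illustrates that general class of bug and is not the
cjson code itself: the pass that sizes the output buffer and the pass that
decodes the escapes disagree about how many bytes a \u escape can produce,
so the copy can run past the heap allocation.

/* Illustration of the bug class only (NOT the cjson code). */
#include <stdlib.h>

/* Sizing pass: (wrongly) budgets one output byte per \uXXXX escape. */
static size_t budget(const char *in)
{
    size_t n = 0;
    for (; *in; in++, n++)
        if (in[0] == '\\' && in[1] == 'u')
            in += 5;                      /* skip "uXXXX", count 1 byte */
    return n;
}

/* Copy pass: emits up to 3 UTF-8 bytes per \uXXXX escape, so input that
 * is mostly escapes writes past a buffer sized by budget(). */
static void decode(const char *in, char *out)
{
    while (*in) {
        if (in[0] == '\\' && in[1] == 'u') {
            *out++ = (char)0xE0;          /* stand-in 3-byte UTF-8 sequence */
            *out++ = (char)0x80;
            *out++ = (char)0x80;
            in += 6;
        } else {
            *out++ = *in++;
        }
    }
    *out = '\0';
}

int main(void)
{
    const char *s = "\\u00e9\\u00e9\\u00e9\\u00e9";
    char *buf = malloc(budget(s) + 1);    /* 5 bytes allocated ...          */
    decode(s, buf);                       /* ... 13 bytes written: overflow */
    free(buf);
    return 0;
}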

[message truncated]

Jeffrey Lane | 2 Jun 22:19 2016

iperf3 vs iperf2 on 40Gbit

I am running into a problem where there is a remarkable difference in
reported throughput when using iperf2 vs iperf3.

With iperf2 and about 16-20 threads, I can reliably get between 32 and
35 Gb/s on a 40Gb segment.  With iperf3 on exactly the same hardware, I
only get 4 Gb/s, though sometimes I can get up to 11 Gb/s.

Is there some trick to getting more accurate results out of iperf3?

There is nothing special about the network; in fact, it consists of two
servers with 40Gb cards connected directly, with no intervening switch.
There is no routing issue that I can find: the 40Gb ports are on a
completely different address space than the onboard 1Gb ports, AND the
1Gb ports on each server can't talk to each other.  Physically, the ONLY
way these servers can talk to each other is across the 40Gb link.

Another person opened a bug for this:
https://github.com/esnet/iperf/issues/408

and there are now three people on that thread reporting the same issues.

Any suggestions for how to debug or resolve this?

-- 
"Entropy isn't what it used to be."

Jeff Lane -
Server Certification Lead, Warrior Poet, Biker, Lover of Pie
Phone: 919-442-8649

Bob McMahon | 2 Jun 20:13 2016

Iperf2: -t now supported on server/listener

Hi All,

We just added code to iperf2 to honor -t when applied to the server.  This applies to both the listener thread and the server threads.  If daemon mode (-D) is set, it only applies to the server threads.  The listener won't hang in a blocking accept() when -t is set; instead it uses select() with a timeout per the -t value.
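
Roughly, the listener side of this looks like the sketch below (simplified, not the actual iperf2 change; names are placeholders):

/* Sketch: honor -t on the listener by waiting with select() instead of
 * blocking forever in accept().  Simplified; not the actual iperf2 code. */
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>

int accept_with_timeout(int listen_fd, int timeout_secs)
{
    fd_set readfds;
    struct timeval tv;

    FD_ZERO(&readfds);
    FD_SET(listen_fd, &readfds);
    tv.tv_sec = timeout_secs;   /* the -t value */
    tv.tv_usec = 0;

    /* select() returns 0 on timeout, so the listener can exit cleanly
     * instead of hanging in a blocking accept(). */
    if (select(listen_fd + 1, &readfds, NULL, NULL, &tv) <= 0)
        return -1;

    return accept(listen_fd, NULL, NULL);
}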

We also added the pid to the initial report output when -e is set.  This is useful for remote sessions where a high-level tool is controlling the remote process via something like ssh.

We're also trying to improve portability for things like RTOSes; this will take a bit of effort, but some of that work has already been done.

UDP latency/histogram support may be added soon.  We're still considering the best way to do that.  The output is way too cryptic for humans, so it needs a tool on top to generate visualizations.

Thanks,
Bob
Bob McMahon | 22 Feb 22:41 2016

Read delay

Hi All,

We're noticing some TCP RTT/congestion window/aggregation issues when using Wi-Fi on Fedora 22 or greater.  One thing being used to help debug this is a read delay on the server, i.e. an n-millisecond delay between accept() and a server thread's initial read(), along with a display of that latency.  For now, I've used -q <int value, milliseconds> to prototype this.
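
Roughly, the prototype behaves like the sketch below (simplified; names and the timing method are placeholders, not the actual patch):

/* Sketch: delay a server thread's first read() by -q milliseconds after
 * accept() and report how long it actually waited.  Not the real change. */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

void first_read_with_delay(int conn_fd, int delay_ms, char *buf, size_t len)
{
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    if (delay_ms > 0)
        usleep((useconds_t)delay_ms * 1000);   /* the -q value */
    gettimeofday(&t1, NULL);

    double waited = (t1.tv_sec - t0.tv_sec)
                  + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("Server thread scheduling latency is %f seconds\n", waited);

    ssize_t n = read(conn_fd, buf, len);       /* first read after the delay */
    (void)n;
}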

 iperf -s -e -q 200 -B 192.168.1.70 -l 16384 -i 0.5 -fb -p 61001
 Server thread scheduling latency is 0.200279 seconds

 iperf -s -e -q 0 -B 192.168.1.70 -l 16384 -i 0.5 -fb -p 61001
 Server thread scheduling latency is 0.000272 seconds


Gabriel L. Somlo | 17 Feb 15:29 2016

iperf 2.0.8 build error with gcc 6

Hi,

I'm packaging iperf-2.* in Fedora, and since rawhide switched to gcc
6.0 I'm getting an error during build:

...
gcc -DHAVE_CONFIG_H -I. -I..  -I../include -I../include  -Wall -O2 -g
-pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2
-fexceptions -fstack-protector-strong --param=ssp-buffer-size=4
-grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-m64 -mtune=generic -c string.c
In file included from /usr/include/c++/6.0.0/cmath:42:0,
                 from /usr/include/c++/6.0.0/math.h:36,
                 from ../include/headers.h:85,
                 from ../include/Timestamp.hpp:63,
                 from delay.cpp:55:
/usr/include/c++/6.0.0/bits/cpp_type_traits.h:205:12: error:
redefinition of 'struct std::__is_integer<int>'
     struct __is_integer<int>
            ^~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/6.0.0/cmath:42:0,
                 from /usr/include/c++/6.0.0/math.h:36,
                 from ../include/headers.h:85,
                 from ../include/Timestamp.hpp:63,
                 from delay.cpp:55:
/usr/include/c++/6.0.0/bits/cpp_type_traits.h:138:12: error: previous
definition of 'struct std::__is_integer<int>'
     struct __is_integer<bool>

Makefile:386: recipe for target 'delay.o' failed
make[2]: *** [delay.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[2]: Leaving directory '/builddir/build/BUILD/iperf-2.0.8/compat'
Makefile:377: recipe for target 'all-recursive' failed
make[1]: Leaving directory '/builddir/build/BUILD/iperf-2.0.8'
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2
Makefile:317: recipe for target 'all' failed

Any clue about how this might be worked around would be much
appreciated!

Thanks much,
--Gabriel


Prithvi Raj | 15 Feb 09:08 2016

iperf results vary between back-to-back connected systems

Hi,

I am trying to use iperf 2.0.5 to measure TCP throughput between two Linux systems connected back to back.

The topology I am using is below:

Linux A eth1(192.138.14.1)----eth4(192.138.14.4) Linux B eth2(192.138.4.3)------eth3(192.138.4.2) Linux C

All links between the Linux systems are 1 Gb links, and all interfaces are configured for 1000 Mbps.

Throughput measured from B to C gives:

Linux C# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.138.4.2 port 5001 connected with 192.138.4.3 port 60918
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec  1.11 GBytes   941 Mbits/sec


LinuxB# iperf -c 192.138.4.2
------------------------------------------------------------
Client connecting to 192.138.4.2, TCP port 5001
TCP window size: 23.2 KByte (default)
------------------------------------------------------------
[  3] local 192.138.4.3 port 60918 connected with 192.138.4.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.11 GBytes   952 Mbits/sec

When I send from C to B,

Linux B# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.138.4.3 port 5001 connected with 192.138.4.2 port 38576
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   970 MBytes   813 Mbits/sec

Linux C# iperf -c 192.138.4.3
------------------------------------------------------------
Client connecting to 192.138.4.3, TCP port 5001
TCP window size: 23.2 KByte (default)
------------------------------------------------------------
[  3] local 192.138.4.2 port 38576 connected with 192.138.4.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   970 MBytes   814 Mbits/sec

Why am I seeing a marked difference in throughput between the two directions on back-to-back connected systems?

All my sysctl parameters on both Linux systems are the same:

Linux B# sudo vim /etc/sysctl.conf
# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

#Enables/Disables TCP SACK (default 1)
net.ipv4.tcp_sack = 1

# Window Scaling
net.ipv4.tcp_window_scaling = 1

#Maximum Receive window size
#net.core.rmem_max = 16777216

# Receive Window Size Min Avg Max
#net.ipv4.tcp_rmem = 4096 87380 16777216

# Send Window Size Min Avg Max
#net.ipv4.tcp_wmem = 4096 16384 16777216



# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

All systems are running CentOS 6.4 with the same kernel, 2.6.32-358.el6.x86_64, so they should all have the same default buffer sizes and the same tunable TCP parameters.

On checking the network stats with netstat -s, I found that fewer TCP segments were sent in one direction than the other: Linux B sent 21047 segments (B to C), while Linux C sent 16132 segments (C to B). Why is this? Is there something apart from link speed, Linux interface configuration, and tunable TCP parameters that is affecting the throughput values?
Bruce Mah | 2 Feb 01:28 2016

iperf-3.1.2 is available


ESnet (Energy Sciences Network) announces iperf-3.1.2, the latest
point/bugfix release from the iperf 3.1 codeline.

iperf-3.1.2 fixes a few bugs present in iperf-3.1.1 (and possibly
earlier releases), but generally retaining the same functionality.
More information on specific changes can be found in the RELEASE_NOTES
file in the source distribution.

iperf3 is a tool for measuring the maximum TCP and UDP performance
along a path, allowing for the tuning of various parameters and
reporting measurements such as throughput, jitter, and datagram packet
loss.  It is fully supported on Linux, FreeBSD, and MacOS X.  It may
run on other platforms as well, although it has not received the same
attention and testing.  Note that iperf3 is not compatible with, and
will not interoperate with, version 2 or earlier of iperf.

The source code for iperf 3.1.2 is available at:

http://downloads.es.net/pub/iperf/iperf-3.1.2.tar.gz

SHA256 hash:

f9dbdb99f869c077d14bc1de78675f5e4b8d1bf78dc92381e96c3eb5b1fd7d86  iperf-3.1.2.tar.gz

iperf3 is freely-redistributable under a 3-clause BSD license.  More
information can be found in the LICENSE file inside the source
distribution.

Additional documentation for iperf3 can be found at:

http://software.es.net/iperf

More information about iperf3 (including the issue tracker, source
code repository access, and mailing list) can be found on the iperf3
page on GitHub at:

https://github.com/esnet/iperf

The mailing list for iperf3 development is:

iperf-dev@...

To see the list archives or join the mailing list, visit:

http://groups.google.com/group/iperf-dev

Bob (Robert) McMahon | 24 Dec 20:54 2015

iperf 2.0.8 32 bit seq no.

Hi All,

 

I noticed that the iperf2 sequence number is 32 bits, which limits a test to ~2B packets.  With modern compute and I/O systems, 2B isn't quite as large as it used to be.  I'm likely going to add support for a 64-bit counter soon.
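
One straightforward way to carry this on the wire is to keep the existing 32-bit field as the low word and add a second field for the high word, roughly like the sketch below (illustration only, not the actual iperf2 UDP header layout):

/* Sketch: carrying a 64-bit datagram sequence number as two 32-bit fields
 * in network byte order.  Illustration only; not the iperf2 packet format. */
#include <stdint.h>
#include <arpa/inet.h>

struct seq_hdr {
    uint32_t seq_lo;    /* low 32 bits, as before */
    uint32_t seq_hi;    /* high 32 bits, new field */
};

static void seq_pack(struct seq_hdr *h, uint64_t seq)
{
    h->seq_lo = htonl((uint32_t)(seq & 0xffffffffu));
    h->seq_hi = htonl((uint32_t)(seq >> 32));
}

static uint64_t seq_unpack(const struct seq_hdr *h)
{
    return ((uint64_t)ntohl(h->seq_hi) << 32) | ntohl(h->seq_lo);
}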

 

Bob

 
