MORTON, ALFRED C (AL) | 19 Jul 15:54 2014

FW: IETF WG state changed for draft-ietf-bmwg-sip-bench-term

Joel and BMWG,

> The IETF WG state of draft-ietf-bmwg-sip-bench-term has been changed to
> "Submitted to IESG for Publication" from "WG Consensus: Waiting for
> Write-Up" by Al Morton:
> 
> http://datatracker.ietf.org/doc/draft-ietf-bmwg-sip-bench-term/

The IETF WG state of draft-ietf-bmwg-sip-bench-meth has been changed to
"Submitted to IESG for Publication" from "WG Consensus: Waiting for
Write-Up" by Al Morton:

http://datatracker.ietf.org/doc/draft-ietf-bmwg-sip-bench-meth/

After a quiet WGLC closing on the 14th, we are declaring WG Consensus
and submitting the SIP benchmarking terms and methodology drafts 
for publication.

The combined shepherding forms have been entered in the datatracker 
for each draft, and are appended below.

Congratulations to the co-authors, and please stay vigilant as the
drafts proceed through AD-review, IETF Last Call, and IESG review.

see you at IETF-90,
Al
doc shepherd and co-chair

-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

ramki Krishnan | 16 Jul 07:59 2014

Proposed IRTF Network Functions Virtualization Research Group (NFVRG) - first face-to-face meeting at Toronto

Please find more information on NFVRG including charter at - http://trac.tools.ietf.org/group/irtf/trac/wiki/nfvrg

 

Please find meeting location and agenda at - http://trac.tools.ietf.org/group/irtf/trac/wiki/nfvrg-ietf-90

 

Thanks,

Ramki on behalf of the co-chairs

Lucien Avramov (lavramov | 16 Jul 07:38 2014

data center benchmarking - request for further feedback on drafts

Hi BMWG!

We have been talking about the Data Center benchmarking draft for 
some time. As authors, we would like to solicit more feedback. To date, 
the main conversations we have had were around jitter and the definitions 
in the first draft.

The first draft, on definitions, is at the following URL: 
http://tools.ietf.org/html/draft-dcbench-def-01

Regarding the first draft, where we have received the most comments so far, we 
would like to hear from you on sections 4, 5, 6, and 7. As we 
built these while talking to customers, switch vendors, and traffic 
generator folks, we want to see whether there are any other comments on them:

  4 Physical Layer Calibration . . . . . . . . . . . . . . . . . . .  6
  5 Line rate  . . . . . . . . . . . . . . . . . . . . . . . . . . .  7
  6  Buffering . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
  7 Application Throughput: Data Center Goodput. . . . . . . . . . . 13

The second draft is on methodology and can be found here: 
http://tools.ietf.org/html/draft-bmwg-dcbench-methodology-02

We would especially like to see more feedback on the second draft, 
sections 3, 4, 5, and 6:
    3. Buffering Testing . . . . . . . . . . . . . . . . . . . . . . .  7
    4  Microburst Testing . . . . . . . . . . . . . . . . . . . .  . . 10
    5. Head of Line Blocking . . . . . . . . . . . . . . . . . . . . . 11
    6. Incast Stateful and Stateless Traffic . . . . . . . . . . . . . 13

We introduce a new method to measure the buffering capability of a DUT. Our 
goal with this method is that the end user does not need to know what type 
of DUT is being tested (cut-through or store-and-forward); the test 
detects and measures the DUT's buffering. We then use the same 
methodology for microburst testing.
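
For illustration only -- this is not the procedure defined in the draft, 
just a sketch of the burst-until-loss idea, assuming a hypothetical 
send_burst_and_count() tester call -- a buffer estimate that does not 
depend on cut-through vs. store-and-forward behavior could be obtained 
by a binary search over back-to-back burst sizes:

    def estimate_buffer_frames(send_burst_and_count, max_burst=1_000_000):
        """Binary-search the largest back-to-back burst (in frames) that a
        congested DUT port absorbs without loss; the result approximates its
        usable buffer, whether the DUT is cut-through or store-and-forward.

        send_burst_and_count(n) -- hypothetical tester call: send n frames at
        line rate toward a congested egress port and return how many arrived.
        """
        lossless, lossy = 0, max_burst        # known-lossless / assumed-lossy bounds
        while lossless + 1 < lossy:
            trial = (lossless + lossy) // 2
            if send_burst_and_count(trial) == trial:  # no loss: buffer held the burst
                lossless = trial
            else:                                     # loss: burst exceeded the buffer
                lossy = trial
        return lossless

    # Self-test with a fake DUT that buffers 40,000 frames:
    print(estimate_buffer_frames(lambda n: min(n, 40_000)))   # -> 40000

For a fixed frame size, multiplying the result by the frame length gives a 
rough buffer estimate in bytes.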

We would also like to know what you think about the head-of-line 
blocking evaluation. This is very important when designing data center 
networks, in order to make the proper design and deployment decisions 
based on DUT performance. Here, too, we use a generic methodology, 
which makes it possible to understand the impact of head-of-line 
blocking more precisely than the usual current tests, which 
involve only groups of 4 ports.

Finally, section 6 is about mixing UDP and TCP traffic on the DUT, 
measuring latency for the UDP traffic while measuring 
goodput for the TCP traffic.
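
As a rough sketch of the two quantities reported there -- the drafts' 
formal definitions of goodput and latency may differ, and the names 
below are only illustrative -- one might compute:

    def goodput_bps(app_bytes_received, transfer_seconds):
        """Application-layer goodput in bit/s: TCP payload bytes actually
        delivered to the receiving application, divided by the transfer time."""
        return 8.0 * app_bytes_received / transfer_seconds

    def udp_latency_stats(send_times, recv_times):
        """(min, mean, max) one-way latency for the UDP stream, from matched
        per-packet send/receive timestamps in seconds."""
        samples = [r - s for s, r in zip(send_times, recv_times)]
        return min(samples), sum(samples) / len(samples), max(samples)

    # Example: 1 GB of TCP payload delivered in 9.2 s alongside a UDP stream.
    print(goodput_bps(1_000_000_000, 9.2))                      # ~8.7e8 bit/s
    print(udp_latency_stats([0.000, 0.100], [0.0021, 0.1024]))  # (min, mean, max)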

We will consolidate the feedback received so far and present it at
IETF 90 during the BMWG meeting.

Thank you,
Jacob and Lucien
Banks, Sarah | 7 Jul 23:25 2014

In Service Software Upgrade Draft

Hello BMWG,
	The ISSU draft is in a pretty solid state, from the authors' point of
view. We wanted to solicit a bit of feedback and make sure we've covered
all the bases. There was some discussion among the authors about adding some
text around timers and counters in the draft - do you trust the DUT?
Should you confirm these outside of the DUT? Etc. Thoughts?
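
One way to picture the "confirm these outside of the DUT" option: a report 
could cross-check the DUT's own interface counters against the traffic 
generator's counts. A minimal, purely illustrative sketch (the inputs and 
tolerance are hypothetical, not anything from the draft):

    def check_dut_counters(tester_tx, tester_rx, dut_rx_counter, dut_tx_counter,
                           tolerance=0):
        """Compare tester transmit/receive counts against the DUT's own
        ingress/egress counters; deltas beyond `tolerance` suggest the
        DUT-reported counters (and timers derived from them) should not be
        relied on for the benchmark."""
        deltas = {
            "ingress": abs(tester_tx - dut_rx_counter),  # offered vs. DUT "received"
            "egress": abs(tester_rx - dut_tx_counter),   # returned vs. DUT "forwarded"
        }
        return {name: d for name, d in deltas.items() if d > tolerance}

    # Example: tester sent 10,000,000 frames and got 9,999,980 back, while the
    # DUT claims it received and forwarded all 10,000,000 -> egress differs by 20.
    print(check_dut_counters(10_000_000, 9_999_980, 10_000_000, 10_000_000))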

Kind regards,
Sarah & Fernando & Gery
MORTON, ALFRED C (AL) | 7 Jul 18:19 2014

Truly LAST WGLC: draft-ietf-bmwg-sip-bench-term and -meth

BMWG (and SIPCORE):

A WG Last Call period for the Internet-Drafts on SIP Device
Benchmarking:

   http://datatracker.ietf.org/doc/draft-ietf-bmwg-sip-bench-term/
   http://datatracker.ietf.org/doc/draft-ietf-bmwg-sip-bench-meth/

will be open from 7 July 2014 through 14 July 2014.

These drafts are continuing the BMWG Last Call Process. See
http://www1.ietf.org/mail-archive/web/bmwg/current/msg00846.html
The first WGLC was completed on 5 April 2010 with comments.
The second WGLC was completed on 18 May 2012 with comments.
The third WGLC was completed on 10 Dec 2012 with comments.
An IETF Last Call followed, and completed on 30 Jan 2013 with comments.
A fourth WGLC was completed 11 June 2014 with comments from expert review.
The current versions (11) address Dale Worley's RAI area early review
and Robert Sparks's reviews.

Please read and express your opinion on whether or not these
Internet-Drafts should be forwarded to the Area Directors for
publication as Informational RFCs.  Send your comments
to this list or acmorton <at> att.com and sbanks <at> akamai.com

Al
bmwg co-chair
Vic Liu | 7 Jul 03:52 2014

request for comment: New Version Notification for draft-liu-bmwg-virtual-network-benchmark-00.txt

Hi all 

We are researching virtual networks and have submitted a draft on benchmarking for virtual networks.
We hope to discuss it on the mailing list; any comments are very welcome.
URL:  http://www.ietf.org/internet-drafts/draft-liu-bmwg-virtual-network-benchmark-00.txt

Thank you!
All the best!
Vic

-----Original Message-----
From: internet-drafts <at> ietf.org [mailto:internet-drafts <at> ietf.org] 
Sent: Saturday, 5 July 2014 4:32
To: Vic Liu; Dapeng Liu; Bob Mandeville; Dapeng Liu; Bob Mandeville; Brooks Hickman; Guang Zhang;
Brooks Hickman; Guang Zhang; vic
Subject: New Version Notification for draft-liu-bmwg-virtual-network-benchmark-00.txt

A new version of I-D, draft-liu-bmwg-virtual-network-benchmark-00.txt
has been successfully submitted by Dapeng Liu and posted to the IETF repository.

Name:		draft-liu-bmwg-virtual-network-benchmark
Revision:	00
Title:		Benchmarking Methodology for Virtualization Network Performance
Document date:	2014-07-04
Group:		Individual Submission
Pages:		16
URL:            http://www.ietf.org/internet-drafts/draft-liu-bmwg-virtual-network-benchmark-00.txt
Status:         https://datatracker.ietf.org/doc/draft-liu-bmwg-virtual-network-benchmark/
Htmlized:       http://tools.ietf.org/html/draft-liu-bmwg-virtual-network-benchmark-00

Abstract:
   As virtual networks have become widely established in IDCs, the
   performance of virtual networks has become a consideration for IDC
   managers. This draft introduces a benchmarking methodology for
   virtualization network performance.

Please note that it may take a couple of minutes from the time of submission until the htmlized version and
diff are available at tools.ietf.org.

The IETF Secretariat

Barry Constantine | 5 Jul 15:53 2014

Traffic Management Draft-04

Hello Folks,

 

draft-constantine-bmwg-traffic-management-04 was posted this past week; there were issues in the IETF automated posting system, and it is unclear whether an official post announcement was sent out.

 

Lots of editorial tweaks were made, but the main item addressed was Appendix B, where Layer 4 TCP “test pattern” guidelines were added, as discussed in London.

 

We look forward to review and comments.

 

Thank you,

Barry Constantine

 

internet-drafts | 2 Jul 22:47 2014

I-D Action: draft-ietf-bmwg-sip-bench-meth-11.txt


A New Internet-Draft is available from the on-line Internet-Drafts directories.
 This draft is a work item of the Benchmarking Methodology Working Group of the IETF.

        Title           : Methodology for Benchmarking Session Initiation Protocol (SIP) Devices: Basic session setup and registration
        Authors         : Carol Davids
                          Vijay K. Gurbani
                          Scott Poretsky
	Filename        : draft-ietf-bmwg-sip-bench-meth-11.txt
	Pages           : 21
	Date            : 2014-07-02

Abstract:
   This document provides a methodology for benchmarking the Session
   Initiation Protocol (SIP) performance of devices.  Terminology
   related to benchmarking SIP devices is described in the companion
   terminology document.  Using these two documents, benchmarks can be
   obtained and compared for different types of devices such as SIP
   Proxy Servers, Registrars and Session Border Controllers.  The term
   "performance" in this context means the capacity of the device-under-
   test (DUT) to process SIP messages.  Media streams are used only to
   study how they impact the signaling behavior.  The intent of the two
   documents is to provide a normalized set of tests that will enable an
   objective comparison of the capacity of SIP devices.  Test setup
   parameters and a methodology are necessary because SIP allows a wide
   range of configuration and operational conditions that can influence
   performance benchmark measurements.

The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-sip-bench-meth/

There's also a htmlized version available at:
http://tools.ietf.org/html/draft-ietf-bmwg-sip-bench-meth-11

A diff from the previous version is available at:
http://www.ietf.org/rfcdiff?url2=draft-ietf-bmwg-sip-bench-meth-11

Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/
Robert Sparks | 18 Jun 23:26 2014

Fwd: Review: draft-ietf-bmwg-sip-bench-term-10 and draft-ietf-bmwg-sip-bench-meth-10

Oops - meant to copy the list. Forwarding...


-------- Original Message --------
Subject: Review: draft-ietf-bmwg-sip-bench-term-10 and draft-ietf-bmwg-sip-bench-meth-10
Date: Wed, 18 Jun 2014 16:19:06 -0500
From: Robert Sparks <rjsparks <at> nostrum.com>
To: MORTON, ALFRED C (AL) <acmorton <at> att.com>, Vijay Gurbani <vkg <at> lucent.com>, joel jaeggli <joelja <at> bogus.com>, Carol Davids (davids <at> iit.edu) <davids <at> iit.edu>, sporetsky <at> allot.com <sporetsky <at> allot.com>
CC: Banks, Sarah (sbanks <at> akamai.com) <sbanks <at> akamai.com>


Reviews of draft-ietf-bmwg-sip-bench-term-10 and draft-ietf-bmwg-sip-bench-meth-10

Thank you for the restructure and cleanup work on these drafts. Most of the comments I made on version -08 of each of them have been addressed. Since there were so many changes, rather than revisit the thread from the -08 review, I'll start a new thread here, bringing up unaddressed points from the earlier review as necessary. Again, thank you for the adjustment to the title and the text to address the scope of the documents. The result is much more straightforward and clear than what I had earlier reviewed. I have a few remaining points and questions and a few nits to call out.

Major technical points:

1) I've read through this enough times now that maybe I've become blind to where it's discussed, but for the INVITE tests, I think you have an unstated assumption that the responding EA is configured to send 200s as quickly as it can, and that whatever delay it has in responding is fairly constant. Otherwise, your tests will have widely varying results due to retransmissions of the INVITE. Is it your intent that the EA would ever retransmit? If not, the assumptions above should be spelled out in the methodology document.

2) The documents still don't make it clear that each register request needs to be to a distinct AOR (otherwise, there is no sense in having a separate re-registration test).

3) The documents (particularly the report forms) assume you will use the same transport on both sides of the DUT. Please state that explicitly, or if allowing for (for instance) UDP on one side of a proxy and TCP on the other was intended, please adjust the document to talk about it.

4) I think the documents are assuming that an EA will make one connection if it's using a connection-oriented transport (like TLS). The results of the test will be very different if it opens a new connection for each sent message. Similarly, I think you're assuming that a connection gets set up between the DUT and the EA that will respond to an INVITE once. Those should be called out explicitly. If you're not assuming that, there needs to be more discussion about how connections get established, and whether that's a parameter that needs to be captured as part of the test.

5) Is it the intent to only test with media that acts like G.711 over RTP? The configuration parameters you have for EA imply that. If it's not the case, are you missing configuration parameters for, say, video codecs (an EA won't be able to use what you have in the terminology's 3.3.4 for that stream), or for an MSRP media session?

Major editorial points:

1) The terms IS and NS are holdovers from when the document was trying to do more than it is now. NS is not accurately defined and is at cross purposes with the tests you _do_ define in the document. (The discussion of MESSAGE and SUBSCRIBE that's still in 3.1.1 of the terminology document does not help this document at all.) I know these words took effort to write, but they really should just be removed. A very short introduction to the two types of session you are actually testing would leave much less room for confusion.

2) I challenged the usefulness of the definition of session (sig,medc,med) and the diagram trying to show these as points in some three-dimensional space in my previous review. I did see Carol's explanation of it in the summary to that review, but I disagree that it is helping this document. I think it's hurting by adding confusion. If you deleted it, the rest of the document means what it meant before, and the reader is no less informed. Please remove it. If the primary point is to make sure that the tester considers covering every permutation of the test parameters, just say that. The graphic implies that some things have more sess.sig than others, and that you might talk about the distance between points in this vector space. Save all this aside and bring it back in a document that actually uses it if you must, but please take it out of this one.

3) The code in Appendix A of the methodology doc should be identified as a "Code Component" as described in the TLP (http://trustee.ietf.org/license-info/IETF-TLP-4.htm) and a license block should be added.

More minor points:

* The methodology document says the DUT can be any conforming 3261 device. The terminology document says the device can't be user equipment (see bullet 1 in section 1.1). Neither is correct. (A presence server, for instance, is not a reasonable thing for a DUT for the tests in this document.) I suggest in both documents you just explicitly list what the DUT can be. That list should not include "User Agent Server" as it currently does in 3.2.2 of the terminology document. A UAS is just a role that any of these devices, including end-user terminals, can hold.

* If a Registrar is the DUT, then neither of the topology figures is correct. (Unless you're intending for an EA to be the registrar, and the DUT is just a proxy.) Consider adding a figure that more clearly shows what the topology for your registration test is intended to be, and making it clear that you are only testing INVITE through devices that forward messages.

* There are several places that call out RTSP as a media protocol. RTSP is a control protocol, not a media protocol - it doesn't make sense being listed with RTP and SRTP. MSRP would make more sense.

* The Expected Results in Section 6.8 of the methodology claim that the rate should not be more than what was in 6.7. How do you come to that conclusion? There are several valid implementation choices that could lead to reregistration taking slightly longer than an initial registration.

NITS:

Methodology document:
* Introduction paragraph 1: s/in Terminology document/in the Terminology document/
* Introduction paragraph 5: This points to section 4 (the IANA considerations section) and section 2 (the introduction) of the terminology document for an explanation of configuration options. Neither of those sections explains configuration options. Where did you mean to point?

Terminology document:
* Security Considerations: Please remove "and various other drafts". If you know of other important documents to point to, please add them as references.
* The definition of Stateful Proxy and Stateless Proxy copied the words "defined by this specification" from RFC3261. This literal copy introduces ambiguity. Please replace "by this specification" with "by [RFC3261]".
* Introduction paragraph 4, last sentence: By calling out devices that include the UAS and UAC functions, you have eliminated stateless proxies, which contain neither.
* In section 3, when you use the templates, you are careful to say None under Issues when there are no issues. Please use the same care with See Also. Right now, you have empty See Also: sections that could be misread to take up whatever content follows (particularly by a text-to-speech engine).
* 3.1.6: s/tie interval/time interval/

The IESG | 12 Jun 20:00 2014

WG Action: Rechartered Benchmarking Methodology (bmwg)

The Benchmarking Methodology (bmwg) working group in the Operations and
Management Area of the IETF has been rechartered. For additional
information please contact the Area Directors or the WG Chairs.

Benchmarking Methodology (bmwg)
------------------------------------------------
Current Status: Active WG

Chairs:
  Sarah Banks <sbanks <at> akamai.com>
  Al Morton <acmorton <at> att.com>

Assigned Area Director:
  Joel Jaeggli <joelja <at> bogus.com>

Mailing list
  Address: bmwg <at> ietf.org
  To Subscribe: bmwg-request <at> ietf.org
  Archive: http://www.ietf.org/mail-archive/web/bmwg/

Charter:

The Benchmarking Methodology Working Group (BMWG) will continue to
produce a series of recommendations concerning the key performance
characteristics of internetworking technologies, or benchmarks for
network devices, systems, and services. Taking a view of networking
divided into planes, the scope of work includes benchmarks for the
management, control, and forwarding planes.

Each recommendation will describe the class of equipment, system, or
service being addressed; discuss the performance characteristics that
are pertinent to that class; clearly identify a set of metrics that aid
in the description of those characteristics; specify the methodologies
required to collect said metrics; and lastly, present the requirements
for the common, unambiguous reporting of benchmarking results.

The set of relevant benchmarks will be developed with input from the
community of users (e.g., network operators and testing organizations)
and from those affected by the benchmarks when they are published
(networking and test equipment manufacturers). When possible, the
benchmarks and other terminologies will be developed jointly with
organizations that are willing to share their expertise. Joint review
requirements for a specific work area will be included in the detailed
description of the task, as listed below.

To better distinguish the BMWG from other measurement initiatives in the
IETF, the scope of the BMWG is limited to the characterization of
implementations of various internetworking technologies
using controlled stimuli in a laboratory environment. Said differently,
the BMWG does not attempt to produce benchmarks for live, operational
networks. Moreover, the benchmarks produced by this WG shall strive to
be vendor independent or otherwise have universal applicability to a
given technology class.

Because the demands of a particular technology may vary from deployment
to deployment, a specific non-goal of the Working Group is to define
acceptance criteria or performance requirements.

An ongoing task is to provide a forum for development and
advancement of measurements which provide insight on the
capabilities and operation of implementations of inter-networking
technology.

Ideally, BMWG should communicate with the operations community 
through organizations such as NANOG, RIPE, and APRICOT.

The BMWG is explicitly tasked to develop benchmarks and methodologies 
for the following technologies:

BGP Control-plane Convergence Methodology (Terminology is complete):
With relevant performance characteristics identified, BMWG will prepare
a Benchmarking Methodology Document with review from the Routing Area
(e.g., the IDR working group and/or the RTG-DIR). The Benchmarking
Methodology will be Last-Called in all the groups that previously
provided input, including another round of network operator input during
the last call.

SIP Networking Devices: Develop new terminology and methods to
characterize the key performance aspects of network devices using
SIP, including the signaling plane scale and service rates while
considering load conditions on both the signaling and media planes. This
work will be harmonized with related SIP performance metric definitions
prepared by the PMOL working group.

Traffic Management: Develop the methods to characterize the capacity 
of traffic management features in network devices, such as classification, 
policing, shaping, and active queue management. Existing terminology 
will be used where appropriate. Configured operation will be verified 
as a part of the methodology. The goal is a methodology to assess the 
maximum forwarding performance that a network device can sustain without 
dropping or impairing packets, or compromising the accuracy of multiple 
instances of traffic management functions. This is the benchmark for 
comparison between devices. Another goal is to devise methods that 
utilize flows with congestion-aware transport as part of the traffic 
load and still produce repeatable results in the isolated test
environment.
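
As an illustration only -- not charter or draft text -- the "maximum 
forwarding performance ... without dropping or impairing packets" goal 
builds on the familiar RFC 2544 throughput search, which a tester might 
implement roughly as follows, with run_trial() standing in for a 
hypothetical traffic-generator call:

    def max_lossless_rate_fps(run_trial, line_rate_fps, resolution_fps=100.0):
        """RFC 2544-style binary search for the highest offered rate
        (frames/s) at which the DUT completes a fixed-duration trial with
        zero frame loss.

        run_trial(rate_fps) -- hypothetical tester call: offer rate_fps for
        the trial duration and return the number of frames lost.
        """
        lossless, upper = 0.0, float(line_rate_fps)  # known-lossless / upper bound
        while upper - lossless > resolution_fps:
            rate = (lossless + upper) / 2.0
            if run_trial(rate) == 0:   # zero loss: try a higher rate
                lossless = rate
            else:                      # loss observed: back off
                upper = rate
        return lossless

    # Fake DUT that starts dropping above 7.3 Mframes/s on a 10 Mframes/s link:
    print(max_lossless_rate_fps(lambda r: 0 if r <= 7.3e6 else 1, 10e6))  # ~7.3e6

For traffic management features, the charter adds two twists beyond this 
basic search: the configured policing/shaping operation itself is verified, 
and congestion-aware (TCP) flows form part of the offered load while the 
results must remain repeatable.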

IPv6 Neighbor Discovery: Large address space in IPv6 subnets presents 
several networking challenges, as described in RFC 6583. Indexes to 
describe the performance of network devices, such as the number of 
reachable devices on a sub-network, are useful benchmarks to the 
operations community. The BMWG will develop the necessary 
terminology and methodologies to measure such benchmarks.

In-Service Software Upgrade: Develop new methods and benchmarks to 
characterize the upgrade of network devices while in-service, 
considering both data and control plane operations and impacts. 
These devices are generally expected to maintain control plane session 
integrity, including routing connections. Quantification of upgrade 
impact will include packet loss measurement, and other forms of recovery 
behavior will be noted accordingly. The work will produce a definition 
of ISSU, which will help refine the scope. Liaisons will be established 
as needed.
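
As a loose illustration of the packet-loss quantification mentioned above 
(an assumption for this sketch, not the draft's definition), frames lost 
during the upgrade window are commonly converted to an equivalent outage 
time under a constant offered load:

    def issu_outage_seconds(frames_lost, offered_rate_fps):
        """Equivalent forwarding-outage time implied by frames lost during an
        in-service upgrade, assuming a constant offered load in frames/s."""
        return frames_lost / float(offered_rate_fps)

    # Example: 150,000 frames lost at an offered load of 1,000,000 frames/s
    # corresponds to roughly 0.15 s of forwarding interruption during the ISSU.
    print(issu_outage_seconds(150_000, 1_000_000))   # 0.15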

Data Center Benchmarking: This work will define additional terms, 
benchmarks, and methods applicable to data center performance evaluations. 
This includes data center specific congestion scenarios, switch buffer 
analysis, microburst, head of line blocking, while also using a wide mix 
of traffic conditions. Some aspects from BMWG's past work are not 
meaningful when testing switches that implement new IEEE specifications 
in the area of data center bridging. For example, throughput as defined 
in RFC 1242 cannot be measured when testing devices that implement three 
new IEEE specifications: priority-based flow control (802.1Qbb); 
priority groups (802.1Qaz); and congestion notification (802.1Qau).
This work will update RFC1242, RFC2544, RFC2889 (and other key RFCs), 
and exchange Liaisons with relevant SDOs, especially at WG Last Call.

VNF and Related Infrastructure Benchmarking: Benchmarking Methodologies 
have reliably characterized many physical devices. This work item extends 
and enhances the methods to virtual network functions (VNF) and their 
unique supporting infrastructure. A first deliverable from this activity 
will be a document that considers the new benchmarking space to ensure
that common issues are recognized from the start, using background 
materials from industry and SDOs (e.g., IETF, ETSI NFV).
Benchmarks for platform capacity and performance characteristics of 
virtual routers, switches, and related components will follow, including 
comparisons between physical and virtual network functions. In many cases, 
the traditional benchmarks should be applicable to VNFs, but the lab 
set-ups, configurations, and measurement methods will likely need to 
be revised or enhanced.

Milestones:
  Jun 2014 - Basic BGP Convergence Benchmarking Methodology to IESG Review
  Jul 2014 - Terminology for SIP Device Benchmarking to IESG Review
  Jul 2014 - Methodology for SIP Device Benchmarking to IESG Review
  Aug 2014 - Draft on Traffic Management Benchmarking to IESG Review
  Dec 2014 - Draft on IPv6 Neighbor Discovery to IESG Review
  Mar 2015 - Draft on In-Service Software Upgrade Benchmarking to IESG Review
  Aug 2015 - Draft on VNF Benchmarking Considerations to IESG Review
  Dec 2015 - Drafts on Data Center Benchmarking to IESG Review

