IETF Secretariat | 27 Oct 14:18 2014

ID Tracker State Update Notice: <draft-ietf-bmwg-bgp-basic-convergence-04.txt>

Last call has been made for draft-ietf-bmwg-bgp-basic-convergence and state has been changed to In Last Call
ID Tracker URL: http://datatracker.ietf.org/doc/draft-ietf-bmwg-bgp-basic-convergence/
The IESG | 27 Oct 14:18 2014

Last Call: <draft-ietf-bmwg-bgp-basic-convergence-04.txt> (Basic BGP Convergence Benchmarking Methodology for Data Plane Convergence) to Informational RFC


The IESG has received a request from the Benchmarking Methodology WG
(bmwg) to consider the following document:
- 'Basic BGP Convergence Benchmarking Methodology for Data Plane
   Convergence'
  <draft-ietf-bmwg-bgp-basic-convergence-04.txt> as Informational RFC

The IESG plans to make a decision in the next few weeks, and solicits
final comments on this action. Please send substantive comments to the
ietf <at> ietf.org mailing lists by 2014-11-10. Exceptionally, comments may be
sent to iesg <at> ietf.org instead. In either case, please retain the
beginning of the Subject line to allow automated sorting.

Abstract

   BGP is widely deployed and used by several service providers as the
   default Inter AS routing protocol.  It is of utmost importance to
   ensure that when a BGP peer or a downstream link of a BGP peer fails,
   the alternate paths are rapidly used and routes via these alternate
   paths are installed.  This document provides the basic BGP
   Benchmarking Methodology using existing BGP Convergence Terminology,
   RFC 4098.

The file can be obtained via
http://datatracker.ietf.org/doc/draft-ietf-bmwg-bgp-basic-convergence/

IESG discussion can be tracked via
http://datatracker.ietf.org/doc/draft-ietf-bmwg-bgp-basic-convergence/ballot/

No IPR declarations have been submitted directly on this I-D.

gu rong | 27 Oct 13:02 2014

Solicit comments for draft-huang-bmwg-virtual-network-performance-00.txt

Hi, dear all.

We have just uploaded a draft about virtual network performance benchmarking, which is a new version of the existing draft draft-liu-bmwg-virtual-network-benchmark-00.txt.

This draft introduces a benchmarking methodology for virtualization network performance based on virtual switch.

Any comments, suggestions and discussions are warmly welcomed.

Thank you.

Best regards from Rong Gu.

-----Original Message-----
From: internet-drafts <at> ietf.org [mailto:internet-drafts <at> ietf.org]
Sent: 2014-10-27 19:55
To: Lu Huang; Bob Mandeville; Gu Rong; Rong Gu; Dapeng Liu; Bob Mandeville; Brooks Hickman; Lu Huang; Guang Zhang; Brooks Hickman; Guang Zhang
Subject: New Version Notification for draft-huang-bmwg-virtual-network-performance-00.txt

A new version of I-D, draft-huang-bmwg-virtual-network-performance-00.txt
has been successfully submitted by Rong Gu and posted to the
IETF repository.

Name:           draft-huang-bmwg-virtual-network-performance
Revision:       00
Title:          Benchmarking Methodology for Virtualization Network Performance
Document date:  2014-10-27
Group:          Individual Submission
Pages:          14
URL:            http://www.ietf.org/internet-drafts/draft-huang-bmwg-virtual-network-performance-00.txt
Status:         https://datatracker.ietf.org/doc/draft-huang-bmwg-virtual-network-performance/
Htmlized:       http://tools.ietf.org/html/draft-huang-bmwg-virtual-network-performance-00

Abstract:
   As the virtual network has been widely established in IDC, the
   performance of virtual network has become a valuable consideration to
   the IDC managers.  This draft introduces a benchmarking methodology
   for virtualization network performance based on virtual switch.

Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

The IETF Secretariat

_______________________________________________
bmwg mailing list
bmwg <at> ietf.org
https://www.ietf.org/mailman/listinfo/bmwg
IETF Secretariat | 27 Oct 05:45 2014

ID Tracker State Update Notice: <draft-ietf-bmwg-bgp-basic-convergence-04.txt>

IESG state changed to Last Call Requested from AD Evaluation
ID Tracker URL: http://datatracker.ietf.org/doc/draft-ietf-bmwg-bgp-basic-convergence/
internet-drafts | 27 Oct 01:22 2014

New Version Notification - draft-ietf-bmwg-bgp-basic-convergence-04.txt


A new version (-04) has been submitted for draft-ietf-bmwg-bgp-basic-convergence:
http://www.ietf.org/internet-drafts/draft-ietf-bmwg-bgp-basic-convergence-04.txt

The IETF datatracker page for this Internet-Draft is:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-bgp-basic-convergence/

Diff from previous version:
http://www.ietf.org/rfcdiff?url2=draft-ietf-bmwg-bgp-basic-convergence-04

Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

IETF Secretariat.
internet-drafts | 27 Oct 01:22 2014

I-D Action: draft-ietf-bmwg-bgp-basic-convergence-04.txt


A New Internet-Draft is available from the on-line Internet-Drafts directories.
 This draft is a work item of the Benchmarking Methodology Working Group of the IETF.

        Title           : Basic BGP Convergence Benchmarking Methodology for Data Plane Convergence
        Authors         : Rajiv Papneja
                          Bhavani Parise
                          Susan Hares
                          Dean Lee
                          Ilya Varlashkin
	Filename        : draft-ietf-bmwg-bgp-basic-convergence-04.txt
	Pages           : 34
	Date            : 2014-10-26

Abstract:
   BGP is widely deployed and used by several service providers as the
   default Inter AS routing protocol.  It is of utmost importance to
   ensure that when a BGP peer or a downstream link of a BGP peer fails,
   the alternate paths are rapidly used and routes via these alternate
   paths are installed.  This document provides the basic BGP
   Benchmarking Methodology using existing BGP Convergence Terminology,
   RFC 4098.

The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-bgp-basic-convergence/

There's also a htmlized version available at:
http://tools.ietf.org/html/draft-ietf-bmwg-bgp-basic-convergence-04

A diff from the previous version is available at:
http://www.ietf.org/rfcdiff?url2=draft-ietf-bmwg-bgp-basic-convergence-04

Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/
MORTON, ALFRED C (AL) | 22 Oct 22:29 2014

comments on draft-bhuvan-bmwg-of-controller-benchmarking-01

Hi Bhuvan, Anton, Vishwas, and Mark,

Thanks for preparing a very complete and interesting draft!

My comments on the draft are dispersed throughout the
text below, all prefaced by "ACM:"

regards,
Al
(as participant)

     Benchmarking Methodology for SDN Controller Performance
        draft-bhuvan-bmwg-of-controller-benchmarking-01

...
1. Introduction

   This document provides generic metrics and methodologies for
   benchmarking SDN controller performance. An SDN controller may
   support many northbound and southbound protocols, implement wide
   range of applications and work as standalone or as a group to
   achieve the desired functionality. This document considers an SDN
   controller as a black box, regardless of design and implementation.
   The tests defined in the document can be used to benchmark various
   controller designs for performance, scalability, reliability and
   security independent of northbound and southbound protocols. These
   tests can be performed on an SDN controller running as a virtual
   machine (VM) instance or on a bare metal server. This document is
   intended for those who want to measure the SDN controller
   performance as well as compare various SDN controllers performance.

   Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

2. Terminology

ACM:
Let's try to be consistent with other efforts that have
also defined terms.  For example, the IRTF SDNRG has an
approved draft now, with many related terms:
http://tools.ietf.org/html/draft-irtf-sdnrg-layer-terminology-04
They define terms like SDN and Interface in the SDN context.
The draft also provides an SDN Architecture in Figure 1,
showing 2 different types of Southbound Interfaces
(Control and Management).

   SDN Node:
      An SDN node is a physical or virtual entity that forwards
      data in a software defined environment.

   Flow:
      A flow is a traffic stream having same source and destination
      address. The address could be MAC or IP or combination of both.
ACM:
This could be closer to the definition of a microflow,
see
http://tools.ietf.org/html/rfc4689#section-3.1.5

   Learning Rate:
      The rate at which the controller learns the new source addresses
      from the received traffic without dropping.
ACM:
I suggest to leave out "without dropping", to give a more general
metric.  Using this definition, we could define the "lossless" or
"reliable" Learning rate where the additional condition of no
messages dropped applies.

   Controller Forwarding Table:
      A controller forwarding table contains flow records for the flows
      configured in the data path.

   Northbound Interface:
      Northbound interface is the application programming interface
      provided by the SDN controller for communication with SDN
      services and applications.
ACM:
http://tools.ietf.org/html/draft-irtf-sdnrg-layer-terminology-04#page-7
Figure 1 doesn't show the Northbound interface (but it doesn't
specifically show the boundaries of the controller, either...)

   Southbound Interface:
      Southbound interface is the application programming interface
      provided by the SDN controller for communication with the SDN
      nodes.

   Proactive Flow Provisioning:
      Proactive flow provisioning is the pre-provisioning of flow
      entries into the controller's forwarding table through
      controller's northbound interface or management interface.

   Reactive Flow Provisioning:
      Reactive flow provisioning is the dynamic provisioning of flow
      entries into the controller's forwarding table based on traffic
      forwarded by the SDN nodes through controller's southbound
      interface.

   Path:
      A path is the route taken by a flow while traversing from a source
      node to destination node.
ACM:
"route" seems unclear, we want to say something about the nodes traversed.
There are lots of definitions of path. We could adapt the one from
RFC2330 (below), or another source if you want:
   path A sequence of the form < h0, l1, h1, ..., ln, hn >, where n >=
        0, each hi is a host, each li is a link between hi-1 and hi,
        each h1...hn-1 is a router.  A pair <li, hi> is termed a 'hop'.
        In an appropriate operational configuration, the links and
        routers in the path facilitate network-layer communication of
        packets from h0 to hn.  Note that path is a unidirectional
        concept.
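
To make the alternating hosts/links structure concrete, here is a
minimal Python sketch of the RFC 2330 definition (hypothetical names,
offered only as an illustration, not text from the draft):

    # Path as alternating hosts h0..hn and links l1..ln (RFC 2330).
    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Path:
        hosts: List[str]   # h0 .. hn
        links: List[str]   # l1 .. ln, one link between hi-1 and hi

        def hops(self) -> List[Tuple[str, str]]:
            # A hop is the pair <li, hi>.
            return list(zip(self.links, self.hosts[1:]))

    p = Path(hosts=["h0", "h1", "h2"], links=["l1", "l2"])
    assert p.hops() == [("l1", "h1"), ("l2", "h2")]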

   Standalone Mode:
      Single controller handling all control plane functionalities.

   Cluster/Redundancy Mode:
      Group of controllers handling all control plane functionalities .

ACM: for the Mode definitions above:
The Group case should indicate possibilities for how the group
shares the control responsibilities: shared load, separate loads,
active/standby, etc.  The name Cluster/Redundancy could be any of
these types - maybe define each separately. For the group case,
how are the Management Plane functions divided? We should add this
aspect.

   Synchronous Message:
      Any message from the SDN node that triggers a response message
      from the controller e.g., Keepalive request and response message,
      flow setup request and response message etc.,

ACM:
"synchronous" seems like the wrong adjective here.  Did this
term come from one of the implementations?  Seems like
"request-response" message or "response required" message is
more exact, but that's just the idealist commenting...

3. Scope

   This document defines a number of tests to measure the networking
   aspects of SDN controllers. These tests are recommended for
   execution in lab environments rather than in real time deployments.

ACM:
s/measure the/measure the performance of/
suggest
s/networking/control and management/ (or just control)

4. Test Setup

   The tests defined in this document enable measurement of SDN
   controller's performance in Standalone mode and Cluster mode. This
   section defines common reference topologies that are later referred
   to in individual tests.

ACM:
In the network cases below (4.1-4.4), we should probably show the
network path more explicitly (since we went to the trouble to define
it above) between the nodes.  So here the path would be
Node1, link, Node2, link, . . . Noden

                  ----------      ----------        ----------
                 |   SDN    |____|   SDN    |__..__|   SDN    |
                 |  Node 1  |    |  Node 2  |      |  Node n  |
                  ----------      ----------        ----------

4.1 SDN Network - Controller working in Standalone Mode

                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
                         -----------------------
                        |     SDN Controller    |
                        |          (DUT)        |
                         -----------------------
                                   | (Southbound interface)
                                   |
                       ---------------------------
                      |            |              |
                  ----------    ----------    ----------
                 |   SDN    |  |   SDN    |..|   SDN    |
                 |  Node 1  |  |  Node 2  |  |  Node n  |
                  ----------    ----------    ----------

                                  Figure 1

4.2 SDN Network - Controller working in Cluster Mode

                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
        ---------------------------------------------------------
       |  ------------------             ------------------      |
       | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
       |  ------------------             ------------------      |
        ---------------------------------------------------------
                                   | (Southbound interface)
                                   |
                       ---------------------------
                      |            |              |
                  ----------    ----------    ----------
                 |   SDN    |  |   SDN    |..|   SDN    |
                 |  Node 1  |  |  Node 2  |  |  Node n  |
                  ----------    ----------    ----------

                                  Figure 2
ACM:
Does this apply to shared control and active/standby?

4.3 SDN Network with Traffic Endpoints (TE) - Controller working in
    Standalone Mode

                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
                         -----------------------
                        |  SDN Controller (DUT) |
                         -----------------------
                                   | (Southbound interface)
                                   |
                       ---------------------------
                      |            |              |
                  ----------    ----------    ----------
                 |   SDN    |  |   SDN    |..|   SDN    |
                 |  Node 1  |  |  Node 2  |  |  Node n  |
                  ----------    ----------    ----------
                      |                           |
                --------------             --------------
               |   Traffic    |           |   Traffic    |
               | Endpoint TP1 |           | Endpoint TP2 |
                --------------             --------------

                                  Figure 3

4.4 SDN Network with Traffic Endpoints (TE) - Controller working in
    Cluster Mode

                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
        ---------------------------------------------------------
       |  ------------------             ------------------      |
       | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
       |  ------------------             ------------------      |
        ---------------------------------------------------------
                                   | (Southbound interface)
                                   |
                       ---------------------------
                      |            |              |
                  ----------    ----------    ----------
                 |   SDN    |  |   SDN    |..|   SDN    |
                 |  Node 1  |  |  Node 2  |  |  Node n  |
                  ----------    ----------    ----------
                      |                           |
                --------------             --------------
               |   Traffic    |           |   Traffic    |
               | Endpoint TP1 |           | Endpoint TP2 |
                --------------             --------------

                                  Figure 4

4.5 SDN Node with Traffic Endpoints (TE) - Controller working in
    Standalone Mode
                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
                         -----------------------
                        |     SDN Controller    |
                        |          (DUT)        |
                         -----------------------
                                   | (Southbound interface)
                                   |
                               ----------
                       -------|   SDN    |---------
                      |       |  Node 1  |         |
                      |        ----------          |
                  ----------                  ----------
                 | Traffic  |                | Traffic  |
                 | Endpoint |                | Endpoint |
                 |   TP1    |                |   TP2    |
                  ----------                  ----------

                                  Figure 5

4.6 SDN Node with Traffic Endpoints (TE) - Controller working in Cluster
    Mode

                          --------------------
                         |  SDN Applications  |
                          --------------------
                                   |
                                   | (Northbound interface)
        ---------------------------------------------------------
       |  ------------------             ------------------      |
       | | SDN Controller 1 | <--E/W--> | SDN Controller n |     |
       |  ------------------             ------------------      |
        ---------------------------------------------------------
                                   | (Southbound interface)
                                   |
                               ----------
                       -------|   SDN    |---------
                      |       |  Node 1  |         |
                      |        ----------          |
                  ----------                  ----------
                 | Traffic  |                | Traffic  |
                 | Endpoint |                | Endpoint |
                 |   TP1    |                |   TP2    |
                  ----------                  ----------

                                  Figure 6

5. Test Considerations

5.1 Network Topology

   The network SHOULD be deployed with SDN nodes interconnected in
   either fully meshed, tree or linear topology. Care should be taken
   to make sure that the loop prevention mechanism is enabled either in
   the SDN controller or in the network. To get complete performance
   characterization of SDN controller, it is recommended that the
   controller be benchmarked for many network topologies. These network
   topologies can be deployed using real hardware or emulated in
   hardware platforms.

5.2 Test Traffic

   Test traffic can be used to notify the controller about the arrival
   of new flows or generate notifications/events towards controller.
   In either case, it is recommended that at least five different frame
   sizes and traffic types be used, depending on the intended network
   deployment.

ACM:
Single size tests?  (should be "yes")
We should recommend the default sizes here or reference another set.

5.3 Connection Setup

   There may be controller implementations that support
   unencrypted and encrypted network connections with SDN nodes.
   Further, the controller may have backward compatibility with SDN
   nodes running older versions of southbound protocols. It is
   recommended that the controller performance be measured with the
   applicable connection setup methods.

   1. Unencrypted connection with SDN nodes, running same protocol
      version.
   2. Unencrypted connection with SDN nodes, running
      different (previous) protocol versions.
   3. Encrypted connection with SDN nodes,running same protocol version
   4. Encrypted connection with SDN nodes, running
      different (previous)protocol versions.

ACM:
suggest
s/previous/current and older/
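
The four setups are just the cross-product of two choices; a minimal
Python sketch (illustrative only, using the suggested wording):

    # The four connection-setup combinations of Section 5.3.
    from itertools import product

    encryption = ["unencrypted", "encrypted"]
    peer_version = ["same (current)", "different (older)"]

    for enc, ver in product(encryption, peer_version):
        print(f"Measure controller performance: {enc} connection, "
              f"SDN nodes running {ver} protocol version")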

5.4 Measurement Accuracy

   The measurement accuracy depends on the
   point of observation where the indications are captured. For example,
   the notification can be observed at the ingress or egress point of
   the SDN node. If it is observed at the egress point of the SDN node,
   the measurement includes the latency within the SDN node also. It is
   recommended to make observation at the ingress point of the SDN node
   unless it is explicitly mentioned otherwise in the individual test.

ACM:
This is really about specificity of measurement points.
The accuracy of results-reporting depends on the measurement
point specifications, but there are lots of other factors
affecting accuracy.
I suggest calling this section
"Measurement Point Specification and Recommendation"

5.5 Real World Scenario

   Benchmarking tests discussed in the document are
   to be performed on a "black-box" basis, relying solely on
   measurements observable external to the controller. The network
   deployed and the test parameters should be identical to the
   deployment scenario to obtain value added measures.

ACM:
suggest:
... to obtain measurements with the greatest value.

6. Test Reporting

   Each test has a reporting format which is specific to individual
   test. In addition, the following configuration parameters SHOULD be
   reflected in the test report.
   1. Controller name and version
   2. Northbound protocols and version
   3. Southbound protocols and version
   4. Controller redundancy mode (Standalone or Cluster Mode)
   5. Connection setup (Unencrypted or Encrypted)
   6. Network Topology (Mesh or Tree or Linear)
   7. SDN Node Type (Physical or Virtual or Emulated)
   8. Number of Nodes
   9. Number of Links
   10. Test Traffic Type

ACM:
I think we may need some more HW specifications here.
check-out:
https://tools.ietf.org/html/draft-morton-bmwg-virtual-net-01#section-3
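
If it helps comparability, the parameter list above could be captured
as a simple record; a minimal Python sketch (hypothetical field names,
illustration only, not from the draft):

    # Section 6 report parameters as a record (hypothetical names).
    from dataclasses import dataclass

    @dataclass
    class TestReport:
        controller_name_version: str   # 1
        northbound_protocols: str      # 2
        southbound_protocols: str      # 3
        redundancy_mode: str           # 4: "Standalone" or "Cluster"
        connection_setup: str          # 5: "Unencrypted" or "Encrypted"
        network_topology: str          # 6: "Mesh", "Tree" or "Linear"
        node_type: str                 # 7: "Physical", "Virtual" or "Emulated"
        number_of_nodes: int           # 8
        number_of_links: int           # 9
        test_traffic_type: str         # 10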

7. Benchmarking Tests

7.1 Performance

7.1.1 Network Topology Discovery Time

ACM:
This is a good Benchmark.  One small detail is that we usually
present the Benchmark Definitions separately from the test
procedures - it makes it easier to understand what will be
quantified in a section with all the Benchmark definitions
side by side.  More comments below.

   Objective:
      To measure the time taken to discover the network topology- nodes
      and its connectivity by a controller, expressed in milliseconds.

   Setup Parameters:
      The following parameters MUST be defined:

      Network setup parameters:
      Number of nodes (N) - Defines the number of nodes present in the
      defined network topology
ACM:
suggest:
      Topology: clear specification (e.g., full mesh) or diagram.
------

ACM:
Latency on the links between nodes will affect the result, right?
Perhaps this should be measured and reported, too.
------

      Test setup parameters:
      Test Iterations (Tr) - Defines the number of times the test needs
      to be repeated. The recommended value is 3.
      Test Interval (To)- Defines the maximum time for the test to
      complete, expressed in milliseconds.
ACM:
For un-successful discovery iterations, how are the results reported?

      Test Setup:
      The test can use one of the test setup described in section 4.1
      and 4.2 of this document.

   Prerequisite:
      1.  The controller should support network discovery.
ACM:
. . . MUST support network discovery???

      2.  Tester should be able to retrieve the discovered topology
ACM:
s/should/SHOULD/
          information either through controller's management interface
          or northbound interface.
ACM: add
. . . to determine if the discovery was successful and complete.

   Procedure:
      1.  Initialize the controller - network applications, northbound
          and southbound interfaces.
      2.  Deploy the network with the given number of nodes using mesh
          or linear topology.
      3.  Initialize the network connections between controller and
          network nodes.
ACM:
So, the controller starts out knowing all the nodes it controls,
that makes sense with Topology discovery.

      4.  Record the time for the first discovery message exchange
          between the controller and the network node (Tm1).
      5.  Query the controller continuously for the discovered network
          topology information and compare it with the deployed network
          topology information.
      6.  Stop the test when the discovered topology information is
          matching with the deployed network topology or the expiry of
          test interval (To).
      7.  Record the time of the last discovery message exchange
          between the controller and the network node (Tmn) when the
          test completed successfully.
ACM:
. . . successfully (e.g., the topology matches).

   Note: While recording the Tmn value, it is recommended that the
         messages that are used for aliveness check or session
         management be ignored.

   Measurement:
      Topology Discovery Time Tr1 = Tmn-Tm1.

                                        Tr1 + Tr2 + Tr3 .. Trn
      Average Topology Discovery Time = -----------------------
                                        Total Test Iterations
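
Read as code, the per-iteration measurement and the averaging might
look like the following minimal sketch (query_topology and deployed
are hypothetical stand-ins for the step 5 comparison, not a defined
API; wall-clock timestamps approximate Tm1/Tmn):

    import time

    def topology_discovery_time_ms(query_topology, deployed, to_ms,
                                   poll_ms=10):
        # One iteration: Tr = Tmn - Tm1, or None if To expires first.
        tm1 = time.monotonic()                 # first discovery exchange
        deadline = tm1 + to_ms / 1000.0
        while time.monotonic() < deadline:
            if query_topology() == deployed:   # steps 5-6: compare
                tmn = time.monotonic()         # last discovery exchange
                return (tmn - tm1) * 1000.0    # Tr, in milliseconds
            time.sleep(poll_ms / 1000.0)
        return None                            # unsuccessful iteration

    def average_discovery_time_ms(samples):
        # Average over successful iterations only; how to report failed
        # iterations is the open question raised above.
        ok = [s for s in samples if s is not None]
        return sum(ok) / len(ok) if ok else None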

   Note:
      1. To increase the certainty of measured result, it is
ACM:
s/certainty of/confidence in/

         recommended that this test be performed several times with
         same number of nodes using same topology.
      2. To get the full characterization of a controller's topology
         discovery functionality
         a. Perform the test with varying number of nodes using same
            topology
         b. Perform the test with same number of nodes using different
            topologies.

   Reporting Format:
      The Topology Discovery Time results SHOULD be reported in the
      format of a table, with a row for each iteration. The last row of
      the table indicates the average Topology Discovery Time.

      If this test is repeated with varying number of nodes over the
      same topology, the results SHOULD be reported in the form of a
      graph. The X coordinate SHOULD be the Number of nodes (N), the
      Y coordinate SHOULD be the average Topology Discovery Time.
ACM:
nicely done, and very traditional.

      If this test is repeated with same number of nodes over different
      topologies,the results SHOULD be reported in the form of a graph.
      The X coordinate SHOULD be the Topology Type, the Y coordinate
      SHOULD be the average Topology Discovery Time.
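
For the first graph, a minimal plotting sketch (matplotlib; the values
below are placeholders, not measurements):

    import matplotlib.pyplot as plt

    nodes = [10, 20, 40, 80]                # Number of nodes (N)
    avg_ms = [120.0, 250.0, 540.0, 1150.0]  # placeholder values

    plt.plot(nodes, avg_ms, marker="o")
    plt.xlabel("Number of nodes (N)")
    plt.ylabel("Average Topology Discovery Time (ms)")
    plt.savefig("topology-discovery-time.png")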

ACM:
many of the comments above apply to the sections below,
I won't repeat them.

7.1.2 Synchronous Message Processing Time

   Objective:
      To measure the time taken by the controller to process a
      synchronous message, expressed in milliseconds.

   Setup Parameters:
      The following parameters MUST be defined:

      Network setup parameters:
      Number of nodes (N) - Defines the number of nodes present in the
      defined network topology

      Test setup parameters:
      Test Iterations (Tr) - Defines the number of times the test needs
      to be repeated. The recommended value is 3.
      Test Duration (Td) - Defines the duration of test iteration,
      expressed in seconds. The recommended value is 5 seconds.

      Test Setup:
      The test can use one of the test setup described in section 4.1
      and 4.2 of this document.

   Prerequisite:
      1. The controller should have completed the network topology
         discovery for the connected nodes.

   Procedure:
      1. Generate a synchronous message from every connected nodes one
         at a time and wait for the response before generating the
         next message.
ACM:
So this is serial message processing time.
We may want to distinguish this in the name of the Benchmark.
-------

ACM:
I don't see how the loss of a request message would be handled.
This needs to be mentioned somewhere.  For example I suppose
a request sender will time-out and re-send the request, but then
we need to know that time-out.  Time-outs have a large impact
on the average for that iteration - perhaps there should be a way
to count the re-transmitted requests. There's definitely an
issue here.
-------

      2. Record total number of messages sent to the controller by all
         nodes (Ntx) and the responses received from the
         controller (Nrx) within the test duration (Td).

ACM:
So this is a fixed duration test, and the number of successful responses
completed determines the average request response time.

   Measurement:
                                                  Td
      Synchronous Message Processing Time Tr1 = ------
                                                  Nrx

                                                   Tr1 + Tr2 + Tr3..Trn
      Average Synchronous Message Processing Time= --------------------
                                                  Total Test Iterations
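
The fixed-duration averaging is easy to misread, so here is a minimal
sketch of the computation (names are assumptions matching the Td/Nrx
counters above, not a defined API):

    def sync_message_processing_time_ms(td_s, nrx):
        # Tr = Td / Nrx: average time per completed response within
        # the fixed duration Td. Lost or timed-out requests are
        # invisible here unless Ntx is compared with Nrx separately
        # (see the comment above on request loss).
        if nrx == 0:
            raise ValueError("no responses received within Td")
        return (td_s * 1000.0) / nrx

    # Example: Td = 5 s, Nrx = 12500 responses -> 0.4 ms per message
    assert sync_message_processing_time_ms(5.0, 12500) == 0.4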

   Note:
      1. The above test measures the controller's message processing
         time at lower traffic rate. To measure the controller's
         message processing time at full connection rate, apply the
         same measurement equation with the Td and Nrx values obtained
         from Synchronous Message Processing Rate test
         (defined in Section 7.1.3).
      2. To increase the certainty of measured result, it is
         recommended that this test be performed several times with
         same number of nodes using same topology.
      3. To get the full characterization of a controller's synchronous
         message processing time
         a. Perform the test with varying number of nodes using same
            topology
         b. Perform the test with same number of nodes using different
            topologies.

   Reporting Format:
      The Synchronous Message Processing Time results SHOULD be
      reported in the format of a table with a row for each iteration.
      The last row of the table indicates the average Synchronous
      Message Processing Time.

      The report should capture the following information in addition
      to the configuration parameters captured in section 6.
      - Offered rate (Ntx)

      If this test is repeated with varying number of nodes with same
      topology, the results SHOULD be reported in the form of a graph.
      The X coordinate SHOULD be the Number of nodes (N), the
      Y coordinate SHOULD be the average Synchronous Message Processing
      Time.

      If this test is repeated with same number of nodes using
      different topologies, the results SHOULD be reported in the form
      of a graph. The X coordinate SHOULD be the Topology Type, the
      Y coordinate SHOULD be the average Synchronous Message Processing
      Time.

7.1.3 Synchronous Message Processing Rate

   Objective:
      To measure the maximum number of synchronous messages (session
      aliveness check message, new flow arrival notification
      message etc.) a controller can process within the test duration,
      expressed in messages processed per second.

ACM:
even when the controller is dropping messages?
(this is kind of the Benchmark definition above,
so I'm asking for clarification)

   Setup Parameters:
      The following parameters MUST be defined:

      Network setup parameters:
      Number of nodes (N) - Defines the number of nodes present in the
      defined network topology.

      Test setup parameters:
      Test Iterations (Tr) - Defines the number of times the test needs
      to be repeated. The recommended value is 3.
      Test Duration (Td) - Defines the duration of test iteration,
      expressed in seconds. The recommended value is 5 seconds.

      Test Setup:
      The test can use one of the test setup described in section 4.1
      and 4.2 of this document.

   Prerequisite:
      1. The controller should have completed the network topology
         discovery for the connected nodes.

   Procedure:
      1. Generate synchronous messages from all the connected nodes
         at the full connection capacity for the Test Duration (Td).
ACM:
I think we need to add detail on the connection capacity from
each node to the controller.  Is it a shared link with an aggregation
point?  Or, do these control connections use traffic management,
and we are talking about the capacity of a virtual pipe, not the PHY?

      2. Record total number of messages sent to the controller by all
         nodes (Ntx) and the responses received from the
         controller (Nrx) within the test duration (Td).

ACM:
We have to distinguish the lossy case, where Ntx != Nrx.
----------

   Measurement:
                                                 Nrx
      Synchronous Message Processing Rate Tr1 = -----
                                                 Td
                                                   Tr1 + Tr2 + Tr3..Trn
      Average Synchronous Message Processing Rate= --------------------
                                                  Total Test Iterations

ACM:
and I think we want a version of this for lossless operation.
perhaps another case with loss ratio measured would also be useful.
ALSO, I think this comment to recognize Loss conditions
may apply to the procedures that follow.
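
Combining the draft's formula with the loss accounting suggested
above, a minimal sketch (illustrative names; placeholder counts, not
measurements):

    def sync_message_processing_rate(nrx, td_s):
        # Draft formula: rate = Nrx / Td, in messages per second.
        return nrx / td_s

    def loss_ratio(ntx, nrx):
        # Suggested companion metric for the lossy case (Ntx != Nrx).
        return (ntx - nrx) / ntx if ntx else 0.0

    # Placeholder counts: 50000 sent, 48000 answered within Td = 5 s
    rate = sync_message_processing_rate(48000, 5.0)   # 9600.0 msg/s
    loss = loss_ratio(50000, 48000)                   # 0.04 (4%)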

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

ACM:
A lot of good Benchmark tests follow, but I will stop here
since i've already made quite a few comments.

I think the 3x3 matrix is helpful in the draft, because there
are so many benchmarks described.

From the user perspective, the Scalability metrics affect the
reliability of the system as they would perceive it. For example,
a system operating at max capacity would block further requests
until space is available, so I can make a case that scale influences
reliability (but so does capacity engineering).

EOT.
MORTON, ALFRED C (AL) | 21 Oct 00:37 2014

Session at IETF-91

BMWG,

IETF-91 approaches, and the draft submission deadline is a week away, so...

Please consider issues on drafts that authors would like to discuss, or 
participants would like to raise or resolve from earlier discussion on the *list*.
We'll form an agenda from all the input.  Feel free to send it to your
co-chairs.

Our agenda assignment is now as final as possible,
https://datatracker.ietf.org/meeting/91/agenda.html

1640-1910 HST	 Thursday Afternoon Session III

So please consider how you would like to participate, and what topics you
can contribute to.  Most importantly, read the new drafts...

regards,
Al
bmwg co-chair
IETF Secretariat | 19 Oct 22:42 2014

ID Tracker State Update Notice: <draft-ietf-bmwg-bgp-basic-convergence-03.txt>

IESG state changed to AD Evaluation from Publication Requested
ID Tracker URL: http://datatracker.ietf.org/doc/draft-ietf-bmwg-bgp-basic-convergence/
Sarah Banks | 16 Oct 19:24 2014

Fwd: FYI: Message re: the RFC Editor EFL sessions at IETF 91

Hello BMWG,
Are you an author, or prospective author, who wants some assistance and guidance on how to improve your draft(s) during the upcoming meeting in Hawaii? Peruse the email below, and please let me know if you have any questions. It really is a great opportunity, and if it's something you've been thinking about, please consider registering on the Doodle poll, as it's first come, first served.

Thanks
Sarah


Begin forwarded message:


Greetings WG and RG Chairs,

The RFC Editor Production Center (RPC) will be hosting an experimental writing lab aimed at helping authors improve their documents during IETF 91.  We are hoping that you will distribute this message to authors who might benefit from individual sessions with the RPC.  The authors that sign up will meet with one or two members of the editorial staff and will have 30 minutes to go through selected text from the document.  The editors will have read the selected text prior to the meeting and will go over any issues their review may reveal, for example, unclear passages, terminology inconsistencies, and grammar and punctuation choices.

Authors may sign up for a time slot using the following Doodle poll (sign ups are on a first-come, first-served basis):

 http://doodle.com/icxnmct9krap798f

Note that this is not a technical review, nor is it an end-to-end review.  Questions of a technical nature may be redirected to the WG or relevant ADs.  Because of the time constraints and the preparation time required for each session, it is not possible to review documents in their entirety.  

Once an author has signed up for a time slot, the RPC will send a confirmation email with a request for some additional information specific to the given document and author's goals for attending the session.  We ask that the authors sign up for a session by 26 October, so the RFC Editor has enough time to review the document before IETF 91.  
The authors will be notified of the location for these sessions once the details have been solidified; the sessions will be in the hotel.

Thank you,
RFC Editor Team

_______________________________________________
bmwg mailing list
bmwg <at> ietf.org
https://www.ietf.org/mailman/listinfo/bmwg
internet-drafts | 14 Oct 10:32 2014

I-D Action: draft-ietf-bmwg-bgp-basic-convergence-03.txt


A New Internet-Draft is available from the on-line Internet-Drafts directories.
 This draft is a work item of the Benchmarking Methodology Working Group of the IETF.

        Title           : Basic BGP Convergence Benchmarking Methodology for Data Plane Convergence
        Authors         : Rajiv Papneja
                          Bhavani Parise
                          Susan Hares
                          Dean Lee
                          Ilya Varlashkin
	Filename        : draft-ietf-bmwg-bgp-basic-convergence-03.txt
	Pages           : 34
	Date            : 2014-10-14

Abstract:
   BGP is widely deployed and used by several service providers as the
   default Inter AS routing protocol.  It is of utmost importance to
   ensure that when a BGP peer or a downstream link of a BGP peer fails,
   the alternate paths are rapidly used and routes via these alternate
   paths are installed.  This document provides the basic BGP
   Benchmarking Methodology using existing BGP Convergence Terminology,
   RFC 4098.

The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-bgp-basic-convergence/

There's also a htmlized version available at:
http://tools.ietf.org/html/draft-ietf-bmwg-bgp-basic-convergence-03

A diff from the previous version is available at:
http://www.ietf.org/rfcdiff?url2=draft-ietf-bmwg-bgp-basic-convergence-03

Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/
