RFC Errata System | 15 Apr 18:52 2014

[Errata Verified] RFC5180 (3953)

The following errata report has been verified for RFC5180,
"IPv6 Benchmarking Methodology for Network Interconnect Devices". 

--------------------------------------
You may review the report below and at:
http://www.rfc-editor.org/errata_search.php?rfc=5180&eid=3953

--------------------------------------
Status: Verified
Type: Editorial

Reported by: Fernando Gont <fernando <at> gont.com.ar>
Date Reported: 2014-04-07
Verified by: Benoit Claise (IESG)

Section: 3

Original Text
-------------
Nevertheless, for each evaluated device, it
is recommended to perform as many of the applicable tests described
in Section 6 as possible.

Corrected Text
--------------
Nevertheless, for each evaluated device, it
is recommended to perform as many of the applicable tests described
in Section 7 as possible.

Notes

Barry Constantine | 14 Apr 19:41 2014

CloudEthernet Forum

Hello Folks

 

Looking to see if anyone has any exposure to the newly forming CloudEthernet Forum and how it may relate to other activities being pursued by the IETF.

 

Al,

 

If you prefer this be an off list discussion, please indicate that so others may contact me off list.

 

Thank you,

Barry Constantine

_______________________________________________
bmwg mailing list
bmwg <at> ietf.org
https://www.ietf.org/mailman/listinfo/bmwg
Banks, Sarah | 8 Apr 19:47 2014

Comments/review on draft-ietf-bmwg-bgp-basic-convergence

Hello draft-ietf-bmwg-bgp-basic-convergence authors,
	In prepping to be your document shepherd, I re-read the draft again with
fresh eyes (hah) and have a few comments; mostly editorial, but
nonetheless, here they come.

Section 1 and 1.1
	I'm not a fan of restating - say what you mean, and be clear and precise.
The document generally reads this way, so consider starting off that way
too. In particular, the first sentence of the first paragraph in Section
1.1, "Since benchmarking is a science of precision, let us restate the
purpose of this document in benchmarking terms." There's something about
this that rubs me the wrong way, as an editor. I'm also not 100% sure
benchmarking is a science of precision. :)  In any event, consider stating
once what the draft is about.

Section 1.1 
	What is "Basic BGP"? Is there "Advanced BGP" too? I jest - but you're
using a term before introducing it, instead, waiting to introduce it 2
paragraphs down. Consider adjusting this.

	You call out a test topology of 3 or 4 nodes - why? I don't think this
has to be a long paragraph, just a sentence, that covers why you chose 3/4
nodes, and not 2, or 200.

	Speaking of Basic BGP definitions - your definition says "as RFC 4271 ...
For IPv6". Your introduction states that this document covers
methodologies for both IPv4 and v6, yet here, you say Basic BGP for v6
only. Consider revising this sentence/paragraph.

Section 1.2
	A minor language nit. In your second sentence/first paragraph, you state,
"To maintain a reliable connectivity within...". Consider revising to, "To
maintain reliable connectivity..."

	Last sentence, "These simple tests... High-level check, of the ..." -
consider removing the ",". There's no need for a pause there.

	Last sentence - what is "multiple implementations"? Please consider
clarifying.

Section 1.4
	While I understand what you're trying to do here, I often tell customers
NOT to do this - that they SHOULD test with the timer settings, for
example, that they deploy in the network today. There are lots of reasons
why default timers don't work for real-life deployments, and test results
you get from default timers are often NOT the same as when, for example,
configured for aggressive timers. I think this skews the data in a way
that makes it useless if non-standard or non-default settings are deployed
in the wild. I'd prefer that if a customer wants to use aggressive timers,
they be configured the same way across each iteration of the test, and
across each vendor's gear, for an apples-to-apples comparison.

Section 3
	You state, "These simple test nodes have 3 or 4 nodes with the following
configuration:" - but then you list what I think are 4 different
configurations. Consider revising the aforementioned sentence to read
something like, "These simple test setups have 3 or 4 nodes with one of
the following configurations:"

	BTW I'm not sure what use "simple" has in the original sentence; while
I'm not married to this idea, I'd nix the use of the word "simple" here,
even from the sample sentence I gave above. :)

	Section 3 is the first time the draft states that both iBGP and eBGP will
be covered. Consider adding this fact to your overview/introduction.

Section 4.2
	Second paragraph - "each test run must identify... Number of routes. This
route stream must be ... Reporting stream." Are these normative
references? :)

	Last paragraph, first sentence, "It is RECOMMENDED that the user may
consider..." Consider revising this to remove the "may" - "It is
RECOMMENDED that the user consider advertising..."

Section 4.3 
	why is "Minimal" with an "M"?

	Why is exact policy documentation a "should"? I think this is
normative anyhow, but why not MUST? If you don't document the policy
processes, how could the tests be reproduced effectively?

Section 4.8
	Why should, and not MUST?

Section 5.1.1
	When you say "Stand Deviation" did you mean "Standard Deviation"?

Test repeatability:
	Some of the test cases say that it's recommended to run the test case a
couple of times - and others don't.  I wonder if you meant this to be true for
all the cases. In any event, consider adding a section or adding to the
"test considerations" section a note on running the test cases multiple
times - and even further, consider taking a stand on how many times to run
them. :)

Section 5.8
	How does one trigger a GR event on the DUT? The document does a pretty
good job hand holding the tester - consider adding a sentence or two at
the start of the section on how to trigger the event. Do you expect the
DUT interface to flap or the test tool to cause the flap? Do you care?

	Why does the draft cover a test case for GR and not NSR?

Thanks
Sarah

	

	
RFC Errata System | 8 Apr 08:34 2014

[Editorial Errata Reported] RFC5180 (3953)

The following errata report has been submitted for RFC5180,
"IPv6 Benchmarking Methodology for Network Interconnect Devices".

--------------------------------------
You may review the report below and at:
http://www.rfc-editor.org/errata_search.php?rfc=5180&eid=3953

--------------------------------------
Type: Editorial
Reported by: Fernando Gont <fernando <at> gont.com.ar>

Section: 3

Original Text
-------------
Nevertheless, for each evaluated device, it
is recommended to perform as many of the applicable tests described
in Section 6 as possible.

Corrected Text
--------------
Nevertheless, for each evaluated device, it
is recommended to perform as many of the applicable tests described
in Section 7 as possible.

Notes
-----

Instructions:
-------------
This errata is currently posted as "Reported". If necessary, please
use "Reply All" to discuss whether it should be verified or
rejected. When a decision is reached, the verifying party (IESG)
can log in to change the status and edit the report, if necessary. 

--------------------------------------
RFC5180 (draft-ietf-bmwg-ipv6-meth-05)
--------------------------------------
Title               : IPv6 Benchmarking Methodology for Network Interconnect Devices
Publication Date    : May 2008
Author(s)           : C. Popoviciu, A. Hamza, G. Van de Velde, D. Dugatkin
Category            : INFORMATIONAL
Source              : Benchmarking Methodology
Area                : Operations and Management
Stream              : IETF
Verifying Party     : IESG
William Cerveny | 7 Apr 20:12 2014

Comments on draft-ietf-bmwg-sip-bench-term-09

My general comments about draft-ietf-bmwg-sip-bench-term-09
1) Where the term has an associated acronym, include that acronym in the
definition title ... this is done in some cases, but not in others
2) Replace instances of "work item" with "document"
3) In section "3.4.2 Registration Rates", the document says:

This benchmark is obtained with zero failures in which 100% of the
      registrations attempted by the EA are successfully completed by
      the DUT.

Is "zero failures" redundant in the above text since 100% success would
imply "zero failures"?

I've sent smaller comments directly to the authors.

Regards,

Bill Cerveny
William Cerveny | 7 Apr 17:06 2014

My comments on draft-ietf-bmwg-sip-bench-meth-09

Below are my high-level comments on draft-ietf-bmwg-sip-bench-meth-09,
denoted by /* comment */; I'm sending more detailed (mostly grammatical)
comments directly to the authors. I've identified the section and
general line number in which each comment is made.

Of note, I had trouble following the variables in the pseudocode, but I
may not have been paying close enough attention to the code.

Bill Cerveny

<begin comments on draft-ietf-bmwg-sip-bench-meth-09>

1. Abstract:

30-   objective comparison of the capacity of SIP devices.  Test setup
31-   parameters and a methodology are necessary because SIP allows a wide
32-   range of configuration and operational conditions that can influence
33:/* In my opinion, the sentence beginning with "A standard terminology ..."
34:   is an assumption and I'm not sure it should be in the abstract */
35-   performance benchmark measurements.  A standard terminology and
36-   methodology will ensure that benchmarks have consistent definition
37-   and were obtained following the same procedures.
--
2. Introduction:
--
200-   only.
201-
202-   The device-under-test (DUT) is a SIP server, which may be any SIP
203:/* Capitalization in "Benchmarks can be ..." is inconsistent */
204-   conforming [RFC3261] device.  Benchmarks can be obtained and compared
205-   for different types of devices such as SIP Proxy Server, Session
206-   Border Controllers (SBC), SIP registrars and SIP proxy server paired
--
--
208-
209-   The test cases provide metrics for benchmarking the maximum 'SIP
210-   Registration Rate' and maximum 'SIP Session Establishment Rate' that
211:/* Is "extended period" defined? */
212-   the DUT can sustain over an extended period of time without failures.
213-   Some cases are included to cover Encrypted SIP.  The test topologies
214-   that can be used are described in the Test Setup section.  Topologies
--
--
219-
220-   SIP permits a wide range of configuration options that are explained
221-   in Section 4 and Section 2 of [I-D.sip-bench-term].  Benchmark values
222:/* Is associated media defined */
223-   could possibly be impacted by Associated Media.  The selected values
224-   for Session Duration and Media Streams per Session enable benchmark
225-
--
3. Benchmarking Topologies
--
259-
260-   There are two test topologies; one in which the DUT does not process
261-   the media (Figure 1) and the other in which it does process media
262:/* EA defined? */
263-   (Figure 2).  In both cases, the tester or EA sends traffic into the
264-   DUT and absorbs traffic from the DUT.  The diagrams in Figure 1 and
265-   Figure 2 represent the logical flow of information and do not dictate
--
4.3 Associated Media
--
346-4.3.  Associated Media
347-
348-   Some tests require Associated Media to be present for each SIP
349:/* Is this redundant? */
350-   session.  The test topologies to be used when benchmarking DUT
351-   performance for Associated Media are shown in Figure 1 and Figure 2.
352-
--
4.6 Session Duration
--
370-
371-   The value of the DUT's performance benchmarks may vary with the
372-   duration of SIP sessions.  Session Duration MUST be reported with
373:/* I'm not sure if this sentence ("A Session Duration ...") is properly
374-formed (it might be), but I had difficulty following the logic of the
375:sentence */
376-   benchmarking results.  A Session Duration of zero seconds indicates
377-   transmission of a BYE immediately following successful SIP
378-   establishment indicate by receipt of a 200 OK.  An infinite Session
--
4.8 Benchmarking algorithm
--
409-
410-   During the Candidate Identification phase, the test runs until n
411-   sessions have been attempted, at session attempt rates, r, which vary
412:/* Upper case N and lower case n are different variables?? Same with "R" */
413-   according to the algorithm below, where n is also a parameter of test
414-   and is a relatively large number, but an order of magnitude smaller
415-   than N. If no errors occur during the time it takes to attempt n
--
--
415-   than N. If no errors occur during the time it takes to attempt n
416-   sessions, we increment r according to the algorithm.  If errors are
417-   encountered during the test, we decrement r according to the
418:/* sentence "The algorithm provides ..." needs clarification
419:Is the word "how" unnecessary? */
420-   algorithm.  The algorithm provides a variable, G, that allows us to
421-   control how the accuracy, in sessions per second, that we require of
422-   the test.
--
--
422-   the test.
423-
424-   After this candidate rate has been discovered, the test enters the
425:/* Is N consistent with N in pseudocode? */
426-   Steady State phase.  In the Steady State phase, N session Attempts
427-   are made at the candidate rate.  The goal is to find a rate at which
428-   the DUT can process calls "forever" with no errors and the test
--
--
432-   the steady-state phase is entered again until a final (new) steady-
433-   state rate is achieved.
434-
435:/* Would this process be clearer if presented as a list? */
436-   The iterative process itself is defined as follows: A starting rate
437-   of r = 100 sessions per second is used and we place calls at that
438-   rate until n = 5000 calls have been placed.  If all n calls are
--
--
436-   The iterative process itself is defined as follows: A starting rate
437-   of r = 100 sessions per second is used and we place calls at that
438-   rate until n = 5000 calls have been placed.  If all n calls are
439:/* sps defined? This said, it's easy to figure out */
440-   successful, the rate is increased to 150 sps and again we place calls
441-   at that rate until n = 5000 calls have been placed.  The attempt rate
442-   is continuously ramped up until a failure is encountered before n =
--
--
449-   between the rate at which failures occurred and the last successful
450-   rate.  Continuing in this way, an attempt rate without errors is
451-   found.  The tester can specify a margin of error using the parameter G,
452:/* units? */
453-   measured in units of sessions per second.
454-
455-   The pseudo-code corresponding to the description above follows.
--
--
478-      G  := 5      ; granularity of results - the margin of error in
479-                   ; sps
480-      C  := 0.05   ; calibration amount: How much to back down if we
481:/* using "s" before definition, in my opinion; consider "found
candidate rate
482:s but cannot send at s" (Still not right, though ... */
483-                   ; have found candidate s but cannot send at rate
s
484-                   ; for time T without failures
485-
--
--
487-      ; ---- Initialization of flags, candidate values and upper bounds
488-
489-      f  := false  ; indicates a success after the upper limit
490:/* Capital F never used in pseudocode */
491-      F  := false  ; indicates that test is done
492-      c  := 0      ; indicates that we have found an upper limit
493-
--
--
499-                                   ; characteristics until n
500-                                   ; requests have been sent
501-             if (all requests succeeded) {
502:/* undefined variable r'?  does this matter? */
503-                r' := r ; save candidate value of metric
504-                if ( c == 0 ) {
--
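As a reading aid for the r/n/N/G discussion above, here is a minimal sketch
of the candidate-rate search as the prose describes it. This is illustrative
Python only, not the draft's pseudo-code; the run_attempts() harness, the
50 sps ramp step, and the exact stopping rule are assumptions.

    def find_candidate_rate(run_attempts, r=100, n=5000, step=50, G=5):
        """Sketch of the Candidate Identification phase described above.

        run_attempts(rate, count) -> True if all `count` session attempts
        at `rate` sessions per second succeed (stand-in for the EA/DUT).
        """
        last_good = None
        # Ramp-up: raise the attempt rate until a batch of n attempts fails
        # (the prose assumes a failing rate is eventually reached).
        while run_attempts(r, n):
            last_good = r
            r += step
        failed = r
        # Narrowing: retry between the failing rate and the last good rate
        # until the window is within the margin of error G (in sps).
        while last_good is not None and failed - last_good > G:
            mid = (last_good + failed) / 2.0
            if run_attempts(mid, n):
                last_good = mid
            else:
                failed = mid
        return last_good  # candidate rate for the Steady State phase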
6.3 Session Establishment Rate with Media not on DUT
--
649-          be recorded using any pertinent parameters as shown in the
650-          reporting format of Section 5.1.
651-
652:/* Long sentence in general, but minimally last part of sentence doesn't
653:conclude */
654-   Expected Results:  Session Establishment Rate results obtained with
655-      Associated Media with any number of media streams per SIP session
656-      are expected to be identical to the Session Establishment Rate

<end comments on draft-ietf-bmwg-sip-bench-meth-09>
Bhuvan (Veryx Technologies | 4 Apr 11:08 2014

FW: Benchmarking Methodology for OpenFlow SDN Controller Performance

Hello BMWG,

 

I am forwarding our response that we have provided to Sarah.

 

Thanks,

Bhuvan

 

From: Bhuvan (Veryx Technologies) [mailto:bhuvaneswaran.vengainathan <at> veryxtech.com]
Sent: Thursday, April 03, 2014 12:13 PM
To: Banks, Sarah (sbanks <at> akamai.com)
Cc: 'Anton Basil'; vishwas manral (vishwas <at> ionosnetworks.com)
Subject: RE: [bmwg] Benchmarking Methodology for OpenFlow SDN Controller Performance

 

Hello Sarah,

 

We have gone through your comments and they are good.

Please find below our responses to your comments with tag [Authors].

 

Also I would like to check with you if we can forward your comments to the mailing list to generate more discussion around this.

 

Best regards,

Bhuvan

 

-----Original Message-----
From: Bhuvan (Veryx Technologies) [mailto:bhuvaneswaran.vengainathan <at> veryxtech.com]
Sent: Wednesday, April 02, 2014 3:10 PM
To: 'Banks, Sarah'
Cc: 'anton.basil <at> veryxtech.com'; 'vishwas <at> ionosnetworks.com'
Subject: RE: [bmwg] Benchmarking Methodology for OpenFlow SDN Controller Performance

 

Hello Sarah,

 

Thanks for taking time to review this draft and provide your comments.

We will go over all your comments and provide our response shortly.

 

Best regards,

Bhuvan

 

-----Original Message-----

From: Banks, Sarah [mailto:sbanks <at> akamai.com]

Sent: Wednesday, April 02, 2014 5:58 AM

To: Bhuvan (Veryx Technologies); anton.basil <at> veryxtech.com; vishwas <at> ionosnetworks.com

Subject: Re: [bmwg] Benchmarking Methodology for OpenFlow SDN Controller Performance

 

Authors,

 

                Thanks for your draft! SDN is interesting, and has come up in some hallway conversations - it's nice to see a draft on it. I've reviewed the doc, and I'll try not to take a stance, but rather, address issues that I see that are either technically wrong or lead to the potential to be wrong, or make the draft more readable. I'll point out that your draft could use an Editor's eye - there are grammatical issues throughout. I'm happy to help if you need, please let me know.

 

[Authors] Sure. We would love to take your help in addressing them in the draft.

 

                I like to review section by section. I hope that approach works for you.

Here goes. :)

 

                Overall, I like that your approach is very straight forward, but I find that the document suffers from a true introduction. Indeed, your Appendices read very nicely, and I found myself thinking that this information would have been nice at the top of the hour - at the beginning of the document. Please consider relocating this information. Not everyone is an SDN guru, and not everyone takes an identical approach - level setting where you're coming from and your scope will make consuming the rest of the document (hopefully) easier. Does this make sense?

 

[Authors] Yes, we agree with your point. We will move the Appendix B (OpenFlow Protocol Overview) information to just after the Scope section.

 

                Let me go section by section now.

 

In Section 5.1 - You mention that the switches need not be physical switches, but emulated/simulated by software or test tools instead - the last sentence of this paragraph says that you "MUST indicate the number of switches and hosts..." - consider referencing the emulated/simulated number too. A quick read might leave open the idea that only physical switches used be noted/listed.

 

[Authors] We will reword this sentence to reflect the emulated/simulated entities too.

 

Section 5.3 refers to the ability to have differing ways to setup channels and connection type, yet only leverages some (as yet undefined) channel setup modes. Why? Why are all of the methods/modes not applicable or being used?

 

[Authors] Though we have described all the methods/modes that are applicable, our intent is to recommend the methods/modes that are supported by controllers available in the market. Performance measurement over other methods/modes is mentioned as optional, depending on the support available on the controller.

 

Section 5.3.1.2: In active-passive mode, you state, "Any asynchronous communications from the switch have to be sent on the active channel."

While I suspect you don't mean this in a normative way, it begs the question - why?

 

[Authors] We think we can remove this sentence from the draft, as in one of the tests we have mentioned initiating communication on the passive channel as well.

 

Section 5.3.1.3 - why call this out if you aren't going to require that their use be noted in the test results?

 

[Authors] We did mention their use in Section 5.3.2, so we think this section should stay in the draft. Please let us know your thoughts.

 

Section 5.3.2 - What is controller teaming? (OK, I know what it is, but you're using a term not yet defined. Please define.)

 

[Authors] Sure. We will define this in the terminology section.

 

Section 5.3.2 - Why are you recommending the use of TCP here, over the alternatives? Be specific.

 

[Authors] Since TCP is supported by all OpenFlow implementations, we have recommended TCP over the others.

 

In general, you don't take a stance on the number of times a test should be run, nor the amount of time to wait/pause between tests. Would you consider adding values to your recommendations?

 

[Authors] We could certainly look at recommending values for parameters like ‘number of times a test should be run’. But for others, we may not be able to define/recommend values, as these parameters may vary from implementation to implementation.

 

Your reporting format is repeated over and over - I'm a fan of state it once, and modify it when needed. :) Just a thought.

 

[Authors] This is a good suggestion. We can remove this portion wherever it is repeated in this draft.

 

Section 9 - Security Considerations - to my eye, it's unacceptable to state that there might be security considerations in the security considerations section, and then state that they're out of scope. I disagree. They are in scope. What are the issues? It's a north bound interface in the lab. What's up?

 

[Authors] Fine. We will review this section and add specific issues.

 

Is Appendix A (Section 10) a sample test report? There's no information about the setup/topology of the setup. If this isn't what App A is intended to be, consider adding a sample test report. I'd argue it might not live as an appendix, either. :)

 

In that vein, Appendix A has no quantification about WHAT it is - what is it? Why are there Nas? :) Please clarify.

 

[Authors] We missed providing the required descriptions for this section. We shall add a few lines of description to clarify the section’s purpose.

 

Thanks

Sarah

BMWG Co-Chair

 

 

On 4/1/14, 8:07 AM, "Bhuvan (Veryx Technologies)"

<bhuvaneswaran.vengainathan <at> veryxtech.com> wrote:

 

>Hi folks,

>

>SDN is gaining lot of traction in the industry among vendors and

>service providers.

>We clearly see a need for test methodologies to measure the performance

>of SDN based network. Keeping this viewpoint, we have drafted a

>benchmarking methodology for OpenFlow SDN controller performance.

>http://tools.ietf.org/html/draft-bhuvan-bmwg-of-controller-benchmarking

>-00  We would love to hear any comments and queries on the same.

>

>Thanks,

>Authors

>

 

 

_______________________________________________
bmwg mailing list
bmwg <at> ietf.org
https://www.ietf.org/mailman/listinfo/bmwg
Banks, Sarah | 3 Apr 18:47 2014

FW: Benchmarking Methodology for OpenFlow SDN Controller Performance

Hello BMWG,
	I sent comments to the authors on the thread here privately and forgot to
cc bmwg. The authors have asked me to send these comments to the list.
Here is the original email.

Thanks
Sarah

On 4/1/14, 5:27 PM, "Banks, Sarah" <sbanks <at> akamai.com> wrote:

>Authors,
>
>	Thanks for your draft! SDN is interesting, and has come up in some
>hallway conversations - it's nice to see a draft on it. I've reviewed the
>doc, and I'll try not to take a stance, but rather, address issues that I
>see that are either technically wrong or lead to the potential to be
>wrong, or make the draft more readable. I'll point out that your draft
>could use an Editor's eye - there are grammatical issues throughout. I'm
>happy to help if you need, please let me know.
>
>	I like to review section by section. I hope that approach works for you.
>Here goes. :)
>
>	Overall, I like that your approach is very straight forward, but I find
>that the document suffers from a true introduction. Indeed, your
>Appendices read very nicely, and I found myself thinking that this
>information would have been nice at the top of the hour - at the beginning
>of the document. Please consider relocating this information. Not everyone
>is an SDN guru, and not everyone takes an identical approach - level
>setting where you're coming from and your scope will make consuming the
>rest of the document (hopefully) easier. Does this make sense?
>
>	Let me go section by section now.
>
>In Section 5.1 - You mention that the switches need not be physical
>switches, but emulated/simulated by software or test tools instead - the
>last sentence of this paragraph says that you "MUST indicate the number of
>switches and hosts..." - consider referencing the emulated/simulated
>number too. A quick read might leave open the idea that only physical
>switches used be noted/listed.
>
>Section 5.3 refers to the ability to have differing ways to setup channels
>and connection type, yet only leverages some (as yet undefined) channel
>setup modes. Why? Why are all of the methods/modes not applicable or being
>used?
>
>Section 5.3.1.2: In active-passive mode, you state, "Any asynchronous
>communications from the switch have to be sent on the active channel."
>While I suspect you don't mean this in a normative way, it begs the
>question - why?
>
>Section 5.3.1.3 - why call this out if you aren't going to require that
>their use be noted in the test results?
>
>Section 5.3.2 - What is controller teaming? (OK, I know what it is, but
>you're using a term not yet defined. Please define.)
>
>Section 5.3.2 - Why are you recommending the use of TCP here, over the
>alternatives? Be specific.
>
>In general, you don't take a stance on the number of times a test should
>be run, nor the amount of time to wait/pause between tests. Would you
>consider adding values to your recommendations?
> 
>Your reporting format is repeated over and over - I'm a fan of state it
>once, and modify it when needed. :) Just a thought.
>
>Section 9 - Security Considerations - to my eye, it's unacceptable to
>state that there might be security considerations in the security
>considerations section, and then state that they're out of scope. I
>disagree. They are in scope. What are the issues? It's a north bound
>interface in the lab. What's up?
>
>Is Appendix A (Section 10) a sample test report? There's no information
>about the setup/topology of the setup. If this isn't what App A is
>intended to be, consider adding a sample test report. I'd argue it might
>not live as an appendix, either. :)
>
>In that vein, Appendix A has no quantification about WHAT it is - what is
>it? Why are there Nas? :) Please clarify.
Bhuvan (Veryx Technologies | 1 Apr 17:07 2014

Benchmarking Methodology for OpenFlow SDN Controller Performance

Hi folks,

 

SDN is gaining a lot of traction in the industry among vendors and service providers.

We clearly see a need for test methodologies to measure the performance of SDN-based networks.

 

Keeping this viewpoint, we have drafted a benchmarking methodology for OpenFlow SDN controller performance.

 

http://tools.ietf.org/html/draft-bhuvan-bmwg-of-controller-benchmarking-00

 

We would love to hear any comments and queries on the same.

 

Thanks,

Authors

_______________________________________________
bmwg mailing list
bmwg <at> ietf.org
https://www.ietf.org/mailman/listinfo/bmwg
MORTON, ALFRED C (AL | 27 Mar 20:53 2014

New Charter Paragraphs

BMWG,

As we close in on the re-charter text, we decided at our IETF-89 session
to keep Datacenter and VNF paragraphs separate.

To that end, we already have a Datacenter-oriented paragraph in our charter,
but it needs editing -- please make suggestions!

* Data Center Bridging Devices:
    Some key concepts from BMWG's past work are not meaningful when testing
    switches that implement new IEEE specifications in the area of data
    center bridging. For example, throughput as defined in RFC 1242 cannot
    be measured when testing devices that implement three new IEEE
    specifications: priority-based flow control (802.1Qbb); priority groups
    (802.1Qaz); and congestion notification (802.1Qau).
    Since devices that implement these new congestion-management
    specifications should never drop frames, and since the metric of
    throughput distinguishes between non-zero and zero drop rates, no
    throughput measurement is possible using the existing methodology.
    The current emphasis is on the Priority Flow Control aspects of
    Data Center Bridging, and the work will include an investigation
    into whether TRILL RBridges require any specific treatment in the
    methodology. This work will update RFC 2544 and exchange periodic
    Liaisons with IEEE 802.1 DCB Task Group, especially at WG Last Call.
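To illustrate the point about throughput and zero drop rates, here is a
hedged sketch (not taken from RFC 2544; send_trial() is a hypothetical test
harness returning the number of frames lost in one trial). An RFC 2544-style
search keeps the highest offered load with zero loss, so for a lossless
PFC device the loss count is always zero and the search simply climbs to
the maximum offered rate, which tells us nothing about the device.

    def zero_loss_throughput(send_trial, max_rate, resolution=0.001):
        """Binary search for the highest zero-loss offered load (sketch)."""
        low, high = 0.0, max_rate
        best = 0.0
        while high - low > resolution:
            rate = (low + high) / 2.0
            if send_trial(rate) == 0:   # zero loss: rate is sustainable
                best = rate
                low = rate
            else:                       # loss observed: back off
                high = rate
        # With priority-based flow control the device never drops frames,
        # so send_trial() always returns 0 and best converges to max_rate.
        return best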

Also, here's the (slightly modified) text for VNF activity:

* VNF and related Infrastructure Benchmarking

Benchmarking Methodologies have reliably characterized many physical devices. 
This work item extends and enhances the methods to virtual network functions (VNF)
and their unique supporting infrastructure. First, the new task space will be
examined to ensure that common issues are considered from the start.
Virtual routers, switches and platform capacity and performance characteristics 
will follow, including comparisons between physical and virtual functions.

Finally, I'll leave it to the authors to say more, but I noticed a new draft
on our tools page:

http://tools.ietf.org/html/draft-bhuvan-bmwg-of-controller-benchmarking-00

take a look, is this something we should consider including on our ride?

Comments by April 14th, please.

regards,
Al/Sarah
bmwg co-chairs
MORTON, ALFRED C (AL | 27 Mar 20:06 2014

WGLC on Basic BGP Convergence

BMWG,

A WG Last Call period for the Internet-Draft on

"Basic BGP Convergence Benchmarking Methodology for Data Plane Convergence"

draft-ietf-bmwg-bgp-basic-convergence-01.txt

will be open from 27 March through 27 April 2014.

This is the first(?) WGLC on this draft (if anyone has different
info, please let us know...)

Please weigh in on whether or not you feel that this Internet-Draft
should be given to the Area Directors for consideration for
progression as an Informational RFC.  Send your comments
to this list.

A URL for this Internet-Draft is:
http://tools.ietf.org/html/draft-ietf-bmwg-bgp-basic-convergence-01

regards,
Al/Sarah
bmwg co-chairs
