Al Morton | 2 May 15:46 2006

IETF Meeting Survey

Ray Pelletier <rpelletier <at> isoc.org>, the IETF Administrative Director,
has asked WG chairs to ask WG members to fill out a short survey that
focuses primarily on your meeting experience in Dallas.
There are a few questions for folks who did not attend, as well.
The input will be useful in organizing future meetings.

If you have a few minutes, the survey is at:
<http://www.surveymonkey.com/s.asp?u=649182049947>

Al
bmwg co-chair
Al Morton | 3 May 21:30 2006

Proposal on Protection Benchmarking

BMWG,

The proponents of the Protection Benchmarking Work Proposal
have prepared the description of this work effort, below.

BMWG discussed this work at the Dallas IETF-65 session,
where there was strong support and involved membership
(see meeting minutes).

Please weigh in on whether this topic should become part
of BMWG's chartered work, by

                    June 2, 2006

And, if you support the work, please say:

+  HOW you intend to support the development in BMWG,
    (by reviewing draft X by MM/DD/YY, for example),  or

+  WHY this work will be beneficial to BMWG's user community, or

+  Modifications that would make the proposal more useful
    (which we will discuss on the list), and

+  (anything else that's constructive)

And remember, we'd like to hear your opinion on the list,
even if you spoke in favor of this proposal at the meeting.

Thanks in advance for your efforts and commitment to BMWG!

Romascanu, Dan (Dan) | 4 May 12:17 2006

RE: Proposal on Protection Benchmarking

I have a couple of clarification questions:

- How will the WG identify the different protection mechanisms within
the scope of the work?  There are a couple of references to 'include but
are not limited to MPLS Fast Reroute, SONET APS, and High Availability',
which reads to me as leaving room to identify other similar
mechanisms within the scope in the future. Do you have in mind some kind
of registry to identify protection mechanisms?
- In the schedule and deliverable list, why is the result of the merge of
the individual submissions labeled 'Methodology for MPLS Protection
Benchmarking'? I would expect the outcome to deal with different
protection mechanisms and not to be MPLS-specific, or am I missing
something? 

Dan

> -----Original Message-----
> From: Al Morton [mailto:acmorton <at> att.com] 
> Sent: Wednesday, May 03, 2006 10:30 PM
> To: bmwg <at> ietf.org
> Subject: [bmwg] Proposal on Protection Benchmarking
> 
> BMWG,
> 
> The proponents of the Protection Benchmarking Work Proposal 
> have prepared the description of this work effort, below.
> 
> BMWG discussed this work at the Dallas IETF-65 session, where 
> there was strong support and involved membership (see meeting 
> minutes).

Al Morton | 4 May 14:25 2006

RE: Proposal on Protection Benchmarking

Dan, thanks for your questions, please see below:

At 06:17 AM 5/4/2006, Romascanu, Dan (Dan) wrote:
>I have a couple of clarification questions:
>
>- How will the WG identify the different protection mechanisms within
>the scope of the work. There are a couple of references to ' include but
>are not limited to MPLS Fast Reroute, SONET APS, and High  Availability'
>which for me reads into being able to support and identify other similar
>mechanisms within the scope in the future. Do you have in mind some kind
>of registry to identify protection mechanisms?

The proposal is to add two drafts to the charter in this
work area at present: the terminology and a methodology for MPLS.
The terminology is intended to be "common" to all protection technologies,
while methodology drafts will be technology-specific.
For future methodology drafts, the WG will have this project description,
in addition to BMWG's charter (and AD advice), to help determine
if the proposed draft is appropriate for BMWG.

The current proposal does not involve a registry. Methodology
drafts would simply refer to protection mechanisms by reference to
the standards that describe them.

>- In the schedule and deliverable list why is the result of the merge of
>individual submission labeled 'Methodology for MPLS Protection
>Benchmarking'. I would expect the outcome to deal with different
>protection mechanisms and not to be MPLS specific, or am I missing
>something?


Al Morton | 5 May 03:59 2006

Fwd: a plea to ID authors

BMWG editors, and any future editors,

Transport AD Lars Eggert passed along some good advice for all of us below
(and Brian Carpenter endorsed this message today).

word to the wise, and all that...
Al

PS: For the not-yet-fully-IETF-indoctrinated, a DISCUSS comment
is a blocking comment; a draft cannot be approved until the comment
is resolved with the AD who entered the DISCUSS.

>To: wgchairs <at> ietf.org
>From: Lars Eggert <lars.eggert <at> netlab.nec.de>
>Date: Wed, 3 May 2006 17:16:36 -0400
>X-Mailer: Apple Mail (2.749.3)
>Subject: a plea to ID authors
>List-Id: Working Group Chairs <wgchairs.ietf.org>
>
>Hi,
>
>draft-iesg-discuss-criteria-02 says in Section 3.2 that DISCUSSes
>should not be raised for "pedantic corrections to non-normative text"
>or "stylistic issues of any kind."
>
>Personal opinion follows:
>
>I'd like to ask you as WG chairs to remind authors and editors that
>spell-checking and proof-reading a draft before submitting it to the
>IESG is still a very good idea, even if there is no risk of getting a

Richard Watts (riwatts) | 5 May 17:00 2006

RE: Benchmarking Network-layer Traffic Control Mechanisms extension for artificial congestion

Hi Scott

As we discussed and agreed, I am keen to provide input and support to the above-mentioned draft with respect to making artificial congestion an extension to the existing benchmarking draft.

Please see my initial comments about the existing draft that I forwarded a little while ago. I will also send out soon what I think the wording might be for blending the artificial-congestion aspects into the benchmarking, so that we can get some dialogue going on this topic. The next IETF in Montreal is not that long away now, and I look forward to meeting both you and the rest of the group then.

I look forward to your feedback.

 

Regards

 

Richard

 


Hi Scott

Apologies for the slight delay in getting back to you; time has been a bit of a challenge, as always. However, please see below some 'cosmetic' comments and queries regarding the existing benchmarking methodology draft, which I hope are useful.

Re: Section 3.1 Test Topologies

There seems to be a slight typo in the text, i.e. 'Figure 2 shows the Test Topology for benchmarking performance when Forwarding Congestion does not exist on the egress link'. The 'not' needs to be removed to align with the Figure 2 heading.

Re: Section 3.2.3 c) under Offered Vector

'Packet size must be equal to or less than the interface MTU so that there is no fragmentation.' The 'must' needs to be in upper case.

Re: Section 3.2.5 Expected Vector

The last sentence reads 'Test cases may be repeated with variation to the expected vector to produce a more benchmark results'. I take this to mean varying the SLA requirements such as packet loss, jitter, forwarding delay, etc. If so, is this actually required? I understand that the draft uses the word 'may', which implies it is optional, but if the DUT is tested to the tightest SLAs and they are achieved, is there any mileage in testing against 'less tight' SLAs?

Re: Step 2 in the procedure for both 4.2 and 4.3 should have 'be' inserted between 'MUST' and '2 or more'.

Re: 'Expected Results' under section 4.3 states 'Forwarding Vector equals the Offered Load. There is no packet loss and no out of order packets. Output vectors match the Expected Vectors for each DSCP in the Codepoint set'.

Should we not ensure consistency in terminology and change Forwarding Vector to Output Vector as per the bottom of Page 5, or change Output Vector at the bottom of Page 5 to be consistent with Forwarding Vector in this section? Additionally, it states 'Forwarding Vector equals the Offered Load'; should 'Offered Load' not be 'Expected Vector', since this is the procedure for benchmarking 'with' forwarding congestion?

I am not sure what your thoughts are, but I would not be inclined to state anything about what the expected vector should be in the expected results section, as this will vary depending upon what the target is for the benchmark and how it's configured on the DUT. So comments about no packet loss may not be valid.

I am personally of the opinion that we can adapt this draft to take artificial congestion into account; I think we just need to weave in the appropriate wording so that the audience is aware that this methodology also applies to artificial congestion. The concepts and approach do not change just because the mechanism for creating congestion might differ.

If you are in agreement, I will go ahead and make the appropriate changes for your review, comment, and input.

I will also review the Terminology draft very shortly and feed back any comments to you and the co-authors.

Kind Regards

Richard

-----Original Message-----
From: Richard Watts (riwatts)
Sent: 24 March 2006 10:57
To: sporetsky <at> reefpoint.com; Gunter Van de Velde (gvandeve)
Cc: acmorton <at> att.com; gunter <at> vandevelde.cc; Richard Watts (riwatts)
Subject: RE: Benchmarking Network-layer Traffic Control Mechanisms extension for artificial congestion

Hi Scott

Many thanks for your invitation to co-author the current methodology draft, which I gladly accept.

I also agree with your approach on how to potentially move forward with the methodology document(s). It would be easier, I guess, if we could leverage the existing methodology document rather than having to create a new/separate one.

My initial thoughts are that we should be able to use the existing methodology, as we are still creating congestion (just artificially) through the use of shapers and the like on virtual links. That said, with HQF architectures we have tiered levels of congestion management, without the need to create artificial congestion through shaping.

I will start looking at the terminology and methodology documents to see how best we might address this.

Once again, many thanks for your cordial invitation and your acceptance to co-author, should we need to generate a separate methodology document.

Kind Regards

Richard

-----Original Message-----
From: sporetsky <at> reefpoint.com [mailto:sporetsky <at> reefpoint.com]
Sent: 22 March 2006 17:15
To: Gunter Van de Velde (gvandeve)
Cc: Richard Watts (riwatts); acmorton <at> att.com; gunter <at> vandevelde.cc
Subject: RE: Benchmarking Network-layer Traffic Control Mechanisms extension for artificial congestion

Gunter,

Hello. It was a pleasure to meet you yesterday. Great work on IPv6! I am looking forward to the author team's further work on it.

The current Network-Layer Traffic Control methodology addresses benchmarking of egress QoS mechanisms, without naming specific mechanisms or implementations. Yesterday's BMWG meeting raised the need for the Network-Layer Traffic Control work item to have methodologies that address classification/shaping and the application of DiffServ to virtual links. First we will need to look at how classification/shaping and the application of DiffServ to virtual links can be addressed in the current methodology document. If we determine that these require separate methodology documents, then it was agreed that they can be addressed as separate documents within the current Network-Layer Traffic Control work item, using the existing Terminology document. If you agree with this approach, then I would be happy to participate as co-author for either of these methodology drafts, should we determine the documents are needed. Likewise, I would like to invite you or your colleague to join as co-author on the current methodology draft.

Thanks!

Scott

-----Original Message-----
From: Gunter Van de Velde (gvandeve) [mailto:gvandeve <at> cisco.com]
Sent: Wednesday, March 22, 2006 11:55 AM
To: sporetsky <at> reefpoint.com
Cc: riwatts <at> cisco.com; acmorton <at> att.com; gunter <at> vandevelde.cc
Subject: Benchmarking Network-layer Traffic Control Mechanisms extension for artificial congestion

Hi Scott,

Many thanks for your presentation yesterday and your insights into the benchmarking test methodology for Network-Layer Traffic Control Mechanisms.

As mentioned during the BMWG meeting, a congestion scenario seen often is that of artificial congestion caused by a diffserv traffic-shaping function. This is, as you know, commonly seen at the boundary of a network to condition the traffic to the right parameters (whatever these parameters actually are).

I would like to pick up the task of being involved with this work, and would like to invite you to be one of the co-authors, to advise and share your experience in the benchmarking area. Please let me know if you are interested in this contributing role. I would like to introduce Richard Watts, who is based in the UK and is an expert in QoS deployment (he leads a QoS expert team in Europe). Richard has offered to take the lead editor role for this piece of work. This means that if you accept co-authorship, the three of us would start the work.

Would you or Al have any recommended next steps in mind so that we can present first draft results at IETF 66?

My belief is that this work should use draft-ietf-bmwg-dsmterm-12.txt and draft-ietf-bmwg-dsmmeth-01.txt as a foundation and complement these two documents with two new drafts. The consequence is that the existing drafts will have to be included as normative references, which sounds logical and acceptable to me.

The first question now is how to proceed: should we initially prepare only a new terminology document for IETF 66, or should we also do the methodology draft immediately in parallel?

Any suggestions and advice are welcome,

Kind Regards,

G/

sporetsky | 23 May 23:50 2006

FW: IETF BMWG Work Item Proposal for SIP Performance Benchmarking


Fellow BMWG-ers,

Please find this IETF Work Item Proposal for your review and comment.  The
authors will be submitting a first draft Terminology and Methodology for the
Montreal meeting.

Scott

>  -----Original Message-----
> From: 	Poretsky, Scott  
> Sent:	Thursday, May 11, 2006 12:44 PM
> To:	'acmorton <at> att.com'
> Subject:	IETF BMWG Work Item Proposal for SIP Performance
> Benchmarking
> Importance:	High
> 
> Hi Al,
> 
[snip] 

> Thanks,
> Scott
> 
> ########################
> BMWG Work Item Proposal
> 
> TITLE: Session Initiation Protocol (SIP) Performance Benchmarking
> 
> AUTHOR TEAM:
> Dr. Carol Davids, Illinois Institute of Technology
> Dr. Vijay Gurbani, Lucent Technologies
> Scott Poretsky, Reef Point Systems
>  
> GOALS
> The purpose of this work item is to provide a single terminology and
> methodology from which SIP equipment vendors and VoIP service providers
> can measure performance benchmarking metrics for comparison and selection.
> It is intended to develop terms, benchmarks, and methodologies that can be
> applied to any type of IP device including SIP Servers, Session Border
> Controllers (SBCs), and Security Gateways (SEGs).
> 
> MOTIVATION
> Service Providers are now planning VoIP and Multimedia network deployments
> using the IETF developed Session Initiation Protocol (SIP).  Through SIP,
> service providers will be able to have rich service offerings and many
> even plan to turn off their Public Switch Telephone Network within five
> years.   VoIP has led to development of new networking devices including
> SIP Servers, Session Border Controllers, and Security Gateways.  The mix
> of voice and IP functions in this variety of devices has produced
> inconsistencies in vendor reported performance metrics and has caused
> confusion in the service provider community. 
> 
> SCOPE:
> This work item will provide terms, benchmarks, and methodologies for
> performance benchmarking the SIP control and media planes.  The
> methodologies can be used for benchmarking SIP performance of SIP servers,
> SBCs, and SEGs.  The media used for the benchmarking will be the IETF
> defined Real-Time Protocol (RTP).  Test cases will allow testing of VoIP
> or Multimedia over IP by varying the number of media streams per SIP call
> from 1 to higher.  There will be at least one test case to measure the
> impact of a SIP DOS Attack on the performance metrics.  All SIP traffic
> will be in the clear; no encryption using TLS or IPsec will be used.
> Similar benchmarks and methodologies using encryption is a potential work
> item for the BMWG that could be considered in the future.
> 
> There will be separate test cases in the methodology document for
> obtaining measurements for each of the Benchmarking Metrics defined in the
> Terminology.  The benchmarking metrics to be measured for the SIP Control
> Plane and SIP Media Plane are as follows:
> 
> 	Benchmark Metrics - SIP Control Plane 
> 	Standing Calls, maximum (calls)
> 	Calls Per Second, maximum (CPS)
> 	Call Attempts Per Second, maximum (CAPS)
> 	Busy Hour Call Attempts, maximum (BHCA)
> 	Busy Hour Call Connects, maximum (BHCC)
> 	Call Completion Rate (%)
> 	Call Setup Delay, average (msec)
> 	Call Teardown Delay, average (msec)
> 
> 	Benchmark Metrics- SIP Media Plane
> 	RTP Media Throughput, per Media Stream (pps)
> 	RTP Media Throughput, Aggregate (pps)
> 	RTP Packet Loss, average (pps)
> 	RTP Packet Delay, average (msec)
> 	RTP Packet Jitter, average (msec)
> 
> In order to obtain measurements for the benchmarking metrics it will be
> necessary to configure the test setup with certain test parameters.  Some
> of these test parameters may also be the benchmarking metric being
> measured for another test case.  The test parameters to be configured for
> the SIP Control Plane and SIP Media Plane are as follows:
> 
> 	Test Parameters- SIP Control Plane 
> 	Call Duration (msec)
> 	Calls Per Second (CPS)
> 	Call Attempts Per Second (CAPS)
> 
> 	Test Parameters - SIP Media Plane
> 	RTP Media Streams per Call (streams per call)
> 	RTP Packet Size (bytes)
> 	RTP Media Offered Load, per Media Stream (pps)
> 	RTP Media Offered Load, Aggregate (pps)
> 
> The basic test topology to be used for this benchmarking is as follows:
> 
> Emulated Agents<--> SEG (optional) <--> SIP Proxy Server (or SBC) <--> SIP
> Server (optional) <--> Emulated Agents
> 
> PROPOSED MILESTONES
> 06/07 First Draft Terminology and Methodology
> 02/07 First WG Last Call
> 05/07 SIP WG and SIPPING WG Review
> 06/07 Final BMWG Last Call for Terminology and Methodology
> 08/07 Submittal for IESG review
> 
> 
> 
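(A rough, purely illustrative sketch, not part of the proposal: several of the control-plane benchmarks listed above could be derived from per-call records collected by the emulated agents. The record fields and the function below are assumptions, not defined by the work item.)

# Illustrative sketch only (assumed record fields: attempt_time, answer_time,
# end_time, in seconds; answer_time is None for failed call attempts).

def control_plane_benchmarks(call_records, test_duration_sec):
    attempts = len(call_records)
    answered = [c for c in call_records if c["answer_time"] is not None]
    setup_ms = [1000.0 * (c["answer_time"] - c["attempt_time"]) for c in answered]
    teardown_ms = [1000.0 * (c["end_time"] - c["answer_time"]) for c in answered]
    return {
        "Call Attempts Per Second (CAPS)": attempts / test_duration_sec,
        "Calls Per Second (CPS)": len(answered) / test_duration_sec,
        "Call Completion Rate (%)": 100.0 * len(answered) / attempts,
        "Call Setup Delay, average (msec)": sum(setup_ms) / len(setup_ms),
        "Call Teardown Delay, average (msec)": sum(teardown_ms) / len(teardown_ms),
    }

# Example: two answered calls and one failed attempt over a 10-second run.
records = [
    {"attempt_time": 0.0, "answer_time": 0.2, "end_time": 5.0},
    {"attempt_time": 1.0, "answer_time": 1.3, "end_time": 6.0},
    {"attempt_time": 2.0, "answer_time": None, "end_time": None},
]
print(control_plane_benchmarks(records, test_duration_sec=10.0))
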
Romascanu, Dan (Dan) | 24 May 10:20 2006

RE: FW: IETF BMWG Work Item Proposal for SIP Performance Benchmarking

I have a clarification question. How do you define 'SIP Media Plane'? It
looks to me that all the benchmark metrics and test parameters included
under this label are generic for any RTP stream benchmark and not
necessarily related to SIP. Am I missing something? 

Dan

> -----Original Message-----
> From: sporetsky <at> reefpoint.com [mailto:sporetsky <at> reefpoint.com] 
> Sent: Wednesday, May 24, 2006 12:50 AM
> To: bmwg <at> ietf.org
> Subject: [bmwg] FW: IETF BMWG Work Item Proposal for SIP 
> PerformanceBenchmarking
> Importance: High
> 
> 
> Fellow BMWG-ers,
> 
> Please find this IETF Work Item Proposal for your review and 
> comment.  The authors will be submitting a first draft 
> Terminology and Methodology for the Montreal meeting.
> 
> Scott
> 
> > [snip]
> 
Richard Watts (riwatts) | 23 May 17:19 2006

FW: RE: Benchmarking Network-layer Traffic Control Mechanisms extension for artificial congestion

Hi Scott,
 
As per my previous mail below, please see the proposed updates for discussion.
 
I have deliberately attempted to avoid introducing any new terminology here, but we may have to add 'artificial forwarding congestion' to the terminology draft as well. Hopefully the updates below provide explanatory text for the two test topology diagrams that follow.
 
Discussion:
The purpose is to provide an 'extension' to the existing 'Benchmarking Network-Layer Traffic Control Mechanisms' draft for artificial congestion. The decision to be made is whether a separate draft is required (e.g. 'Benchmarking Network-layer Traffic Control Mechanisms extension for artificial congestion'), or whether we can accommodate this extension within the existing draft. I would like to propose/suggest that, as the majority of the concepts/terminology/methodology are the same, we accommodate it within the existing draft.
 
'Artificial' congestion can be created on virtual/logical interfaces by Traffic Control mechanisms, such that the Forwarding Capacity is limited and typically less than the Forwarding Capacity of the actual interface. Essentially the same test methodologies and the same terminology can be applied (just with a different mechanism for creating congestion), with the consideration that the Output Vector will be based on the 'limited' Forwarding Capacity imposed by the applied Traffic Control Mechanisms rather than the full Forwarding Capacity of the interface.
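
(A minimal, hypothetical illustration of that point, not proposed draft text: whether 'artificial' Forwarding Congestion exists is judged against the shaper-limited capacity of the virtual link rather than the physical interface rate. The numbers below are assumptions.)

# Hypothetical numbers: a shaper limits a virtual link to 50,000 pps even
# though the physical interface could forward 250,000 pps.  'Artificial'
# Forwarding Congestion exists when the Offered Load exceeds the limited
# capacity, even though the physical link itself is not congested.

physical_capacity_pps = 250_000
limited_capacity_pps = 50_000                         # imposed by Traffic Control
offered_load_pps = int(limited_capacity_pps * 1.25)   # exceed the limit by 25%

artificially_congested = offered_load_pps > limited_capacity_pps
physically_congested = offered_load_pps > physical_capacity_pps

print(artificially_congested, physically_congested)   # True False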
 
Updates to Existing Draft:
 
re: Section 3.1. Test Topologies
 
<To be added..>
 
Figure 3 shows the test topology for benchmarking performance when 'artificial' Forwarding Congestion does not exist on the egress link. This topology is to be used when benchmarking the Undifferentiated Response and the Traffic Control without 'artificial' Forwarding Congestion.
 
'Artificial' Forwarding Congestion does not exist because the Offered Vector (Offered Load) does not exceed the 'limited' Forwarding Capacity of the Traffic Control Mechanisms.
 
Figure 4 shows the test topology for benchmarking performance when 'artificial' Forwarding Congestion does exist on the egress link. This topology is to be used when benchmarking Traffic Control with 'artificial' Forwarding Congestion.
 
'Artificial' Forwarding Congestion is produced by an Offered Vector (Offered Load) to an ingress interface on the DUT, destined for a single egress interface configured with Traffic Control mechanisms that limit the Output Vector to a value less than the full interface Output Vector.
 
 

 

        Expected
         Vector
           |
           |
           \/
        ---------      Offered Vector (Limited)   ---------
        |       |<--------------------------------|       |
        |       |                                 |       |
        |  DUT  |                                 | Tester|
        |       |                                 |       |
        |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
        |       |      Output Vector (Limited)    |       |
        ---------                                 ---------

             Figure 3. Test Topology for Benchmarking
             Without 'artificial' Forwarding Congestion

        Expected
         Vector
           |
           |
           \/
        ---------     Offered Vector (Unlimited)  ---------
        |       |<--------------------------------|       |
        |       |                                 |       |
        |  DUT  |                                 | Tester|
        |       |                                 |       |
        |       |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~>|       |
        |       |      Output Vector (Limited)    |       |
        ---------                                 ---------

             Figure 4. Test Topology for Benchmarking
              With 'artificial' Forwarding Congestion

Re-use of the existing Test Cases, with slight modifications, to be added as below.

4. Test Cases

 

  4.4 Undifferentiated Response

     Purpose:  To establish the baseline performance of the DUT.

     Procedure:
     1. Configure DUT with Expected Vector.
     2. Configure the Tester for the Offered Vector.  Number of
        DSCPs MUST equal 1 and the RECOMMENDED DSCP value is 0
        (Best Effort).  Use 1000 Flows identified by IP SA/DA.
        All flows have the same DSCP value.
     3. Using the Test Topology in Figure 3, source the Offered
        Load from the Tester to the DUT.
     4. Measure and record the Output Vector.
     5. Maintain offered load for 10 minutes minimum to observe
        possible variations in measurements.
     6. Repeat steps 2 through 5 with 10000 and 100000 Flows.

     Expected Results:
     Forwarding Vector equals the Offered Load.  There is no
     packet loss and no out-of-order packets.

 

 

  4.5 Traffic Control Baseline Performance

     Purpose:  To benchmark the Output Vectors for a Codepoint
     Set without 'artificial' Forwarding Congestion.

     Procedure:
     1. Configure DUT with Expected Vector for each DSCP in the
        Codepoint Set.
     2. Configure the Tester for the Offered Vector.  Number of
        DSCPs MUST be 2 or more.  Any DSCP values can be used.
        Use 1000 Flows identified by IP SA/DA and DSCP value.
     3. Using the Test Topology in Figure 3, source the Offered
        Load from the Tester to the DUT.
     4. Measure and record the Output Vector for each DSCP in
        the Codepoint Set.
     5. Maintain offered load for 10 minutes minimum to observe
        possible variations in measurements.
     6. Repeat steps 2 through 5 with 10000 and 100000 Flows.
     7. Increment number of DSCPs used and repeat steps 1
        through 6.

     Expected Results:
     Forwarding Vector equals the Offered Load.  There is no
     packet loss and no out-of-order packets.  Output Vectors
     match the Expected Vectors for each DSCP in the Codepoint
     Set.

 


 

  4.6 Traffic Control Performance with Forwarding Congestion

     Purpose:  To benchmark the Output Vectors for a Codepoint
     Set with 'artificial' Forwarding Congestion.

     Procedure:
     1. Configure DUT with Expected Vector for each DSCP in the
        Codepoint Set.
     2. Configure the Tester for the Offered Vector.  Number of
        DSCPs MUST be 2 or more.  Any DSCP values can be used.
        Use 1000 Flows identified by IP SA/DA and DSCP value.
        The Offered Load MUST exceed the 'limited' Forwarding
        Capacity of a single egress link by 25%.
     3. Using the Test Topology in Figure 4, source the Offered
        Load from the Tester to the DUT.  The ingress offered
        load MUST exceed the 'limited' Forwarding Capacity of
        the egress link to produce Forwarding Congestion.
     4. Measure and record the Output Vector for each DSCP in
        the Codepoint Set.
     5. Maintain offered load for 10 minutes minimum to observe
        possible variations in measurements.
     6. Repeat steps 2 through 5 with 10000 and 100000 Flows.
     7. Increment offered load by 25%, to 200% maximum.
     8. Increment number of DSCPs used and repeat steps 1
        through 6.

     Expected Results:
     Forwarding Vector equals the Offered Load.  There is no
     packet loss and no out-of-order packets.  Output Vectors
     match the Expected Vectors for each DSCP in the Codepoint
     Set.
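
(For discussion only, a minimal sketch of how the repetition structure in the procedures above might be automated; the tester API names used here are hypothetical, not from any existing tool or draft.)

# Sketch of the common loop in sections 4.4-4.6: for each flow count, offer
# load for the minimum observation period and record the per-DSCP Output
# Vector.  'tester' is a hypothetical traffic-generator object.

OBSERVATION_MINUTES = 10
FLOW_COUNTS = [1_000, 10_000, 100_000]

def run_test_case(tester, codepoint_set, offered_load_pps, topology):
    results = {}
    for flows in FLOW_COUNTS:
        tester.configure_offered_vector(dscps=codepoint_set, flows=flows,
                                        load_pps=offered_load_pps,
                                        topology=topology)
        tester.offer_load(minutes=OBSERVATION_MINUTES)
        # Output Vector is recorded per DSCP in the Codepoint Set.
        results[flows] = {dscp: tester.measure_output_vector(dscp)
                          for dscp in codepoint_set}
    return results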
Scott, I am keen to hear your thoughts. Please provide comments on this proposal, or make alternative suggestions, so that we can make the appropriate changes to the existing draft following discussion.
 
Kind Regards
 
Richard

From: Richard Watts (riwatts)
Sent: 05 May 2006 16:01
To: 'sporetsky <at> reefpoint.com'
Cc: bmwg <at> ietf.org
Subject: RE: Benchmarking Network-layer Traffic Control Mechanisms extension for artificial congestion

[snip]
sporetsky | 24 May 16:19 2006

RE: FW: IETF BMWG Work Item Proposal for SIP Performance Benchmarking

Hi Dan,

SIP benchmarking requires measurement of the control (SIP signaling) and
data (RTP) planes.  It is more reasonable with SIP to refer to the data
plane as the media plane.  The relationship between the two planes is that
the media will not flow until the control sessions are established.  It is
important to benchmark performance of the media plane with control load, and
vice-versa.
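
(A rough, hypothetical sketch of that interaction, for illustration only: RTP load is offered only on calls whose SIP dialogs have been established, so the media-plane benchmark is taken under a defined control-plane load. The agent API below is assumed, not an existing tool.)

# Hypothetical emulated-agent API: place SIP calls first, then start RTP only
# on the established dialogs, and report the aggregate media offered load.

def offer_media_under_control_load(agent, target_calls, streams_per_call,
                                   stream_rate_pps):
    established = []
    for _ in range(target_calls):
        call = agent.place_call()            # SIP INVITE transaction (assumed)
        if call.is_established():
            established.append(call)
    for call in established:
        call.start_rtp(streams=streams_per_call, rate_pps=stream_rate_pps)
    # RTP Media Offered Load, Aggregate (pps)
    return len(established) * streams_per_call * stream_rate_pps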

Scott

-----Original Message-----
From: Romascanu, Dan (Dan) [mailto:dromasca <at> avaya.com]
Sent: Wednesday, May 24, 2006 4:20 AM
To: sporetsky <at> reefpoint.com; bmwg <at> ietf.org
Cc: Cullen Jennings; Magnus Westerlund
Subject: RE: [bmwg] FW: IETF BMWG Work Item Proposal for SIP
PerformanceBenchmarking

I have a clarification question. How do you define 'SIP Media Plane'? It
looks to me that all the benchmark metrics and test parameters included
under this label are generic for any RTP stream benchmark and not
necessarily related to SIP. Am I missing something? 

Dan

> [snip]
> 
