shiyang | 12 Aug 05:15 2014

question on data center benchmarking draft

Hi Lucien,

When reading your draft on the data center benchmarking methodology (draft-bmwg-dcbench-methodology-02), we found it a very interesting and useful document, and we are trying to apply it to our experiments with DC switches.

However, there are some questions:

(1) What does PORT CAPACITY mean in Section 2.2?

(2) Why are latency and jitter not considered when the traffic generator cannot be connected to all ports on the DUT (Section 2.2)?

     In our case, we'd like to test frame latency through the switch, but our generator has only a limited number of ports.

(3) In Section 3.2, "Measure maximum DUT buffer size with many to one ports", what does the expression ((N-1)/port capacity * 99.98)% in the iteration mean? What behavior do you expect from this setting?

(4) In Section 6, what is the definition of a stateful and a stateless flow from the generator's point of view? The text offers only informal descriptions such as “large and small flows”; in practice, how should a testing setup produce the large and small flows?

(5) There is no clear definition of “flow latency”. Is it the same as the frame latency defined in the metrics draft (draft-dcbench-def-01)?
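As background for question (4): one common informal reading — not necessarily what the draft intends — is that a stateless flow is open-loop traffic (e.g., UDP frames sent regardless of feedback) while a stateful flow is closed-loop (e.g., a TCP transfer whose rate reacts to the path). A minimal loopback sketch under that assumption, with all names and ports illustrative:

```python
# Hedged sketch: "stateless" = fire-and-forget UDP, "stateful" = TCP
# with end-to-end acknowledgement. Loopback stands in for a real
# generator/DUT pair; nothing here is from the draft itself.
import socket
import threading

def stateless_burst(dst=("127.0.0.1", 9999), frames=100, size=64):
    """Open-loop UDP: the generator keeps no per-flow state."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * size
    for _ in range(frames):
        s.sendto(payload, dst)
    s.close()
    return frames * size  # bytes offered, independent of delivery

def stateful_transfer(data=b"x" * 65536, port=9998):
    """Closed-loop TCP: bytes are delivered and acknowledged."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    got = bytearray()
    def serve():
        conn, _ = srv.accept()
        while chunk := conn.recv(4096):
            got.extend(chunk)
        conn.close()
    t = threading.Thread(target=serve)
    t.start()
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(data)
    cli.close()
    t.join()
    srv.close()
    return len(got)  # bytes actually delivered

print(stateless_burst())    # 6400 bytes offered
print(stateful_transfer())  # 65536 bytes delivered
```

"Large" versus "small" flows would then just be the transfer size (elephant TCP transfers versus short mice flows), a knob on the same two primitives.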


Thank you very much!

Best,

Yang Shi



_______________________________________________
bmwg mailing list
bmwg <at> ietf.org
https://www.ietf.org/mailman/listinfo/bmwg
MORTON, ALFRED C (AL | 19 Jul 15:54 2014

FW: IETF WG state changed for draft-ietf-bmwg-sip-bench-term

Joel and BMWG,

> The IETF WG state of draft-ietf-bmwg-sip-bench-term has been changed to
> "Submitted to IESG for Publication" from "WG Consensus: Waiting for
> Write-Up" by Al Morton:
> 
> http://datatracker.ietf.org/doc/draft-ietf-bmwg-sip-bench-term/

The IETF WG state of draft-ietf-bmwg-sip-bench-meth has been changed to
"Submitted to IESG for Publication" from "WG Consensus: Waiting for
Write-Up" by Al Morton:

http://datatracker.ietf.org/doc/draft-ietf-bmwg-sip-bench-meth/

After a quiet WGLC closing on the 14th, we are declaring WG Consensus
and submitting the SIP benchmarking terms and methodology drafts 
for publication.

The combined shepherding forms have been entered in the datatracker 
for each draft, and appended below.

Congratulations to the co-authors, and please stay vigilant as the
drafts proceed through AD-review, IETF Last Call, and IESG review.

see you at IETF-90,
Al
doc shepherd and co-chair

-=-=-=-=-=-=-=-==-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

This is a publication request for:
 draft-ietf-bmwg-sip-bench-meth -11 2014-07-02   	Active
 draft-ietf-bmwg-sip-bench-term -11 2014-07-02   	Active

Al Morton is the Document Shepherd, and prepared this form.

(1) What type of RFC is being requested (BCP, Proposed Standard, Internet Standard, Informational,
Experimental, or Historic)? Why is this the proper type of RFC? Is this type of RFC indicated in the title
page header?

Informational, as indicated on the title page.
All BMWG RFCs are traditionally Informational,
in part because they do not define protocols and
the traditional conditions for Stds track advancement
did not apply.  However, they are specifications and
the RFC 2119 terms are applicable to identify the
level of requirements.

(2) The IESG approval announcement includes a Document Announcement Write-Up. Please provide such a
Document Announcement Write-Up. Recent examples can be found in the "Action" announcements for
approved documents. The approval announcement contains the following sections:

Technical Summary:

All networking devices have a limited capacity to serve their
purpose. In some cases these limits can be ascertained by counting
physical features (e.g., interface card slots), but in other cases
standardized tests are required to be sure that all vendors count
their protocol-handling capacity in the same way, to avoid specmanship.
This draft addresses one such case, where the SIP session-serving 
capacity of a device can only be discovered and rigorously compared
with other devices through isolated laboratory testing.

This document describes the methodology for benchmarking Session
-or-
This document describes the terminology for benchmarking Session
Initiation Protocol (SIP) performance as described in SIP
benchmarking terminology document.  The methodology and terminology
are to be used for benchmarking signaling plane performance with
varying signaling and media load.  Both scale and establishment rate
are measured by signaling plane performance.  The SIP Devices to be
benchmarked may be a single device under test or a system under
test.  Benchmarks can be obtained and compared for different
types of devices such as SIP Proxy Server, Session Border Controller,
and server paired with a media relay or Firewall/NAT device.

Working Group Summary:

There were periods of intense and constructive feedback on this draft,
but also several pauses in progress during development. The most lively
discussions were prompted by presentation of actual test results using
the draft methods, which required significant time investment but were
well worth it. These drafts serve a useful purpose for the industry.

Document Quality:

There are existing implementations of the method, as noted above.

Dale Worley conducted an early review, following BMWG's request
of the RAI area.  Dale's comments were addressed in version 05.
Henning Schulzrinne commented on the original work proposal.

Personnel:

Who is the Document Shepherd? Who is the Responsible Area Director?
Al Morton is Shepherd, Joel Jaeggli is Responsible AD.

(3) Briefly describe the review of this document that was performed by the Document Shepherd. If this
version of the document is not ready for publication, please explain why the document is being forwarded
to the IESG.

The shepherd has reviewed the drafts many times, and his comments are
in the BMWG archive. 

(4) Does the document Shepherd have any concerns about the depth or breadth of the reviews that have been performed?
No.

(5) Do portions of the document need review from a particular or from broader perspective, e.g., security,
operational complexity, AAA, DNS, DHCP, XML, or internationalization? If so, describe the review that
took place.

No. Cross-area review has been obtained; however, it is impossible to get
the attention of everyone who considers themselves a SIP expert.

(6) Describe any specific concerns or issues that the Document Shepherd has with this document that the
Responsible Area Director and/or the IESG should be aware of? For example, perhaps he or she is
uncomfortable with certain parts of the document, or has concerns whether there really is a need for it. In
any event, if the WG has discussed those issues and has indicated that it still wishes to advance the
document, detail those concerns here.

No concerns, this is still a valuable memo, as mentioned above.

(7) Has each author confirmed that any and all appropriate IPR disclosures required for full conformance
with the provisions of BCP 78 and BCP 79 have already been filed. If not, explain why?

There are no outstanding IPR disclosures, according to the authors.

(8) Has an IPR disclosure been filed that references this document? If so, summarize any WG discussion and
conclusion regarding the IPR disclosures.

No.

(9) How solid is the WG consensus behind this document? Does it represent the strong concurrence of a few
individuals, with others being silent, or does the WG as a whole understand and agree with it?

Although the intensity of comments and reviews was highly variable,
it now appears that the WG is satisfied.
The first WGLC was completed on 5 April 2010 with comments.
The second WGLC was completed on 18 May 2012 with comments.
The third WGLC was completed on 10 Dec 2012 with comments, and the 1st Pub Request.
An IETF Last Call followed, and completed on 30 Jan 2013 with comments.
A fourth WGLC was completed 11 June 2014 with comments from expert review.
The current versions (11) address Dale Worley's RAI area early review
and Robert Sparks's reviews.
The fifth WGLC completed quietly on July 14th, 2014.

(10) Has anyone threatened an appeal or otherwise indicated extreme discontent? If so, please summarise
the areas of conflict in separate email messages to the Responsible Area Director. (It should be in a
separate email because this questionnaire is publicly available.)

No.

(11) Identify any ID nits the Document Shepherd has found in this document. (See
http://www.ietf.org/tools/idnits/ and the Internet-Drafts Checklist). Boilerplate checks are not
enough; this check needs to be thorough.

Nits are warnings requiring no action for these drafts.

(12) Describe how the document meets any required formal review criteria, such as the MIB Doctor, media
type, and URI type reviews.

N/A

(13) Have all references within this document been identified as either normative or informative?

Yes.

(14) Are there normative references to documents that are not ready for advancement or are otherwise in an
unclear state? If such normative references exist, what is the plan for their completion?

The -term and -meth drafts are proceeding toward publication as a pair.

(15) Are there downward normative references (see RFC 3967)? If so, list these downward
references to support the Area Director in the Last Call procedure.

No.

(16) Will publication of this document change the status of any existing RFCs? Are those RFCs listed on the
title page header, listed in the abstract, and discussed in the introduction? If the RFCs are not listed in
the Abstract and Introduction, explain why, and point to the part of the document where the relationship
of this document to the other RFCs is discussed. If this information is not in the document, explain why the
WG considers it unnecessary.

No.

(17) Describe the Document Shepherd's review of the IANA considerations section, especially with regard
to its consistency with the body of the document. Confirm that all protocol extensions that the document
makes are associated with the appropriate reservations in IANA registries. Confirm that any referenced
IANA registries have been clearly identified. Confirm that newly created IANA registries include a
detailed specification of the initial contents for the registry, that allocations procedures for
future registrations are defined, and a reasonable name for the new registry has been suggested (see RFC 5226).

No requests of IANA.

(18) List any new IANA registries that require Expert Review for future allocations. Provide any public
guidance that the IESG would find useful in selecting the IANA Experts for these new registries.

N/A

(19) Describe reviews and automated checks performed by the Document Shepherd to validate sections of the
document written in a formal language, such as XML code, BNF rules, MIB definitions, etc.

N/A
ramki Krishnan | 16 Jul 07:59 2014

Proposed IRTF Network Functions Virtualization Research Group (NFVRG) - first face-to-face meeting at Toronto

Please find more information on NFVRG including charter at - http://trac.tools.ietf.org/group/irtf/trac/wiki/nfvrg

 

Please find meeting location and agenda at - http://trac.tools.ietf.org/group/irtf/trac/wiki/nfvrg-ietf-90

 

Thanks,

Ramki on behalf of the co-chairs

Lucien Avramov (lavramov | 16 Jul 07:38 2014

data center benchmarking - request for further feedback on drafts

Hi BMWG!

We have been talking about the Data Center benchmarking drafts for 
some time. As authors, we would like to solicit more feedback. To date, 
the main conversations have been around jitter and definitions in the 
first draft.

The first draft, on definitions, is at the following URL: 
http://tools.ietf.org/html/draft-dcbench-def-01

Regarding the first draft, where we received the most comments so far, we 
would like to hear from you on Sections 4, 5, 6 and 7. As we built 
these while talking to customers, switch vendors and traffic 
generator folks, we want to see if there are any other comments on them:

  4 Physical Layer Calibration . . . . . . . . . . . . . . . . . . .  6
  5 Line rate  . . . . . . . . . . . . . . . . . . . . . . . . . . .  7
  6  Buffering . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
  7 Application Throughput: Data Center Goodput. . . . . . . . . . . 13

The second draft is on methodology and can be found here: 
http://tools.ietf.org/html/draft-bmwg-dcbench-methodology-02

We would like to see more feedback especially on the second draft 
section 3,4,5 and 6:
    3. Buffering Testing . . . . . . . . . . . . . . . . . . . . . . .  7
    4  Microburst Testing . . . . . . . . . . . . . . . . . . . .  . . 10
    5. Head of Line Blocking . . . . . . . . . . . . . . . . . . . . . 11
    6. Incast Stateful and Stateless Traffic . . . . . . . . . . . . . 13

We introduce a new method to measure the buffering capability of a DUT. 
Our goal with this method is that the end user need not care about the 
type of DUT at hand [cut-through / store-and-forward]; the test will 
actually detect and measure the DUT buffering. We then use this type of 
methodology for microburst testing.

Then we would like to know what you think about the head-of-line 
blocking evaluation. This is very important when designing data center 
networks, in order to make proper design and deployment decisions 
based on DUT performance. Here, too, we use a generic methodology, 
bringing the capability to understand the impact of head-of-line 
blocking more precisely than the usual current tests, which 
involve only groups of 4 ports.

Finally, section 6 is about mixing UDP and TCP traffic on the DUT, 
measuring latency for the UDP traffic while measuring goodput on 
the TCP traffic.

We will consolidate the feedback received so far and present it at
IETF 90 during our BMWG meeting.

Thank you,
Jacob and Lucien
Banks, Sarah | 7 Jul 23:25 2014

In Service Software Upgrade Draft

Hello BMWG,
	The ISSU draft is in a pretty solid state, from the authors' point of
view. We wanted to solicit a bit of feedback and make sure we've covered
all bases. There was some discussion among the authors about adding some
text around timers and counters in the draft - do you trust the DUT?
Should you confirm these outside of the DUT? Etc. Thoughts?

Kind regards,
Sarah & Fernando & Gery
MORTON, ALFRED C (AL | 7 Jul 18:19 2014

Truly LAST WGLC: draft-ietf-bmwg-sip-bench-term and -meth

BMWG (and SIPCORE):

A WG Last Call period for the Internet-Drafts on SIP Device
Benchmarking:

   http://datatracker.ietf.org/doc/draft-ietf-bmwg-sip-bench-term/
   http://datatracker.ietf.org/doc/draft-ietf-bmwg-sip-bench-meth/

will be open from 7 July 2014 through 14 July 2014.

These drafts are continuing the BMWG Last Call Process. See
http://www1.ietf.org/mail-archive/web/bmwg/current/msg00846.html
The first WGLC was completed on 5 April 2010 with comments.
The second WGLC was completed on 18 May 2012 with comments.
The third WGLC was completed on 10 Dec 2012 with comments.
An IETF Last Call followed, and completed on 30 Jan 2013 with comments.
A fourth WGLC was completed 11 June 2014 with comments from expert review.
The current versions (11) address Dale Worley's RAI area early review
and Robert Sparks's reviews.

Please read and express your opinion on whether or not these
Internet-Drafts should be forwarded to the Area Directors for
publication as Informational RFCs.  Send your comments
to this list or acmorton <at> att.com and sbanks <at> akamai.com

Al
bmwg co-chair
Vic Liu | 7 Jul 03:52 2014

request for comment: New Version Notification for draft-liu-bmwg-virtual-network-benchmark-00.txt

Hi all 

We are researching virtual networks and have submitted a draft on benchmarking for virtual networks.
We hope to discuss it on the mailing list, and any comments are very welcome.
URL:  http://www.ietf.org/internet-drafts/draft-liu-bmwg-virtual-network-benchmark-00.txt

Thank you!
All the best!
Vic

-----Original Message-----
From: internet-drafts <at> ietf.org [mailto:internet-drafts <at> ietf.org] 
Sent: Saturday, July 5, 2014 4:32
To: Vic Liu; Dapeng Liu; Bob Mandeville; Dapeng Liu; Bob Mandeville; Brooks Hickman; Guang Zhang;
Brooks Hickman; Guang Zhang; vic
Subject: New Version Notification for draft-liu-bmwg-virtual-network-benchmark-00.txt

A new version of I-D, draft-liu-bmwg-virtual-network-benchmark-00.txt
has been successfully submitted by Dapeng Liu and posted to the IETF repository.

Name:		draft-liu-bmwg-virtual-network-benchmark
Revision:	00
Title:		Benchmarking Methodology for Virtualization Network Performance
Document date:	2014-07-04
Group:		Individual Submission
Pages:		16
URL:            http://www.ietf.org/internet-drafts/draft-liu-bmwg-virtual-network-benchmark-00.txt
Status:         https://datatracker.ietf.org/doc/draft-liu-bmwg-virtual-network-benchmark/
Htmlized:       http://tools.ietf.org/html/draft-liu-bmwg-virtual-network-benchmark-00

Abstract:
   As virtual networks have become widely established in IDCs, the
   performance of virtual networks has become a consideration for IDC
   managers. This draft introduces a benchmarking methodology for
   virtualization network performance.

Please note that it may take a couple of minutes from the time of submission until the htmlized version and
diff are available at tools.ietf.org.

The IETF Secretariat

Barry Constantine | 5 Jul 15:53 2014

Traffic Management Draft-04

Hello Folks,

 

draft-constantine-bmwg-traffic-management-04 was posted this past week; there were issues in the IETF automated posting system, and it is unclear whether an official post announcement was sent out.

 

Lots of editorial tweaks were made, but the main item addressed was Appendix B, where the Layer 4 TCP “test pattern” definition guidelines were added, as discussed in London.

 

We look forward to review and comments.

 

Thank you,

Barry Constantine

 

internet-drafts | 2 Jul 22:47 2014

I-D Action: draft-ietf-bmwg-sip-bench-meth-11.txt


A New Internet-Draft is available from the on-line Internet-Drafts directories.
 This draft is a work item of the Benchmarking Methodology Working Group of the IETF.

        Title           : Methodology for Benchmarking Session Initiation Protocol (SIP) Devices: Basic session setup and registration
        Authors         : Carol Davids
                          Vijay K. Gurbani
                          Scott Poretsky
	Filename        : draft-ietf-bmwg-sip-bench-meth-11.txt
	Pages           : 21
	Date            : 2014-07-02

Abstract:
   This document provides a methodology for benchmarking the Session
   Initiation Protocol (SIP) performance of devices.  Terminology
   related to benchmarking SIP devices is described in the companion
   terminology document.  Using these two documents, benchmarks can be
   obtained and compared for different types of devices such as SIP
   Proxy Servers, Registrars and Session Border Controllers.  The term
   "performance" in this context means the capacity of the device-under-
   test (DUT) to process SIP messages.  Media streams are used only to
   study how they impact the signaling behavior.  The intent of the two
   documents is to provide a normalized set of tests that will enable an
   objective comparison of the capacity of SIP devices.  Test setup
   parameters and a methodology are necessary because SIP allows a wide
   range of configuration and operational conditions that can influence
   performance benchmark measurements.

The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-sip-bench-meth/

There's also a htmlized version available at:
http://tools.ietf.org/html/draft-ietf-bmwg-sip-bench-meth-11

A diff from the previous version is available at:
http://www.ietf.org/rfcdiff?url2=draft-ietf-bmwg-sip-bench-meth-11

Please note that it may take a couple of minutes from the time of submission
until the htmlized version and diff are available at tools.ietf.org.

Internet-Drafts are also available by anonymous FTP at:
ftp://ftp.ietf.org/internet-drafts/
Robert Sparks | 18 Jun 23:26 2014

Fwd: Review: draft-ietf-bmwg-sip-bench-term-10 and draft-ietf-bmwg-sip-bench-meth-10

Oops - meant to copy the list. Forwarding...


-------- Original Message --------
Subject: Review: draft-ietf-bmwg-sip-bench-term-10 and draft-ietf-bmwg-sip-bench-meth-10
Date: Wed, 18 Jun 2014 16:19:06 -0500
From: Robert Sparks <rjsparks <at> nostrum.com>
To: MORTON, ALFRED C (AL) <acmorton <at> att.com>, Vijay Gurbani <vkg <at> lucent.com>, joel jaeggli <joelja <at> bogus.com>, Carol Davids (davids <at> iit.edu) <davids <at> iit.edu>, sporetsky <at> allot.com <sporetsky <at> allot.com>
CC: Banks, Sarah (sbanks <at> akamai.com) <sbanks <at> akamai.com>


Reviews of draft-ietf-bmwg-sip-bench-term-10 and draft-ietf-bmwg-sip-bench-meth-10

Thank you for the restructure and cleanup work on these drafts. Most of the comments I made on version -08 of each of them have been addressed. Since there were so many changes, rather than revisit the thread from the -08 review, I'll start a new thread here, bringing up unaddressed points from the earlier review as necessary. Again, thank you for the adjustment to the title and the text to address the scope of the documents. The result is much more straightforward and clear than what I had earlier reviewed. I have a few remaining points and questions and a few nits to call out.

Major technical points:

1) I've read through this enough times now that maybe I've become blind to where it's discussed, but for the INVITE tests, I think you have an unstated assumption that the responding EA is configured to send 200s as quickly as it can, and that whatever delay it has in responding is fairly constant. Otherwise, your tests will have widely varying results due to retransmissions of the INVITE. Is it your intent that the EA would ever retransmit? If not, the assumptions above should be spelled out in the methodology document.

2) The documents still don't make it clear that each register request needs to be to a distinct AOR (otherwise, there is no sense in having a separate re-registration test).

3) The documents (particularly the report forms) assume you will use the same transport on both sides of the DUT. Please state that explicitly, or if allowing for (for instance) UDP on one side of a proxy and TCP on the other was intended, please adjust the document to talk about it.

4) I think the documents are assuming that an EA will make one connection if it's using a connection oriented transport (like TLS). The results of the test will be very different if it opens a new connection for each sent message. Similarly, I think you're assuming that a connection gets set up between the DUT and the EA that will respond to an INVITE once. Those should be called out explicitly. If you're not assuming that, there needs to be more discussion about how connections get established, and whether that's a parameter that needs to be captured as part of the test.

5) Is it the intent to only test with media that acts like G.711 over RTP? The configuration parameters you have for EA imply that. If it's not the case, are you missing configuration parameters for, say, video codecs (an EA won't be able to use what you have in terminology's 3.3.4 for that stream), or for an MSRP media session?

Major editorial points:

1) The terms IS and NS are holdovers from when the document was trying to do more than it is now. NS is not accurately defined and is at cross purposes with the tests you _do_ define in the document. (The discussion of MESSAGE and SUBSCRIBE that's still in 3.1.1 of the terminology document does not help this document at all.) I know these words took effort to write, but they really should just be removed. A very short introduction to the two types of session you are actually testing would leave much less room for confusion.

2) I challenged the usefulness of the definition of session (sig,medc,med) and the diagram trying to show these as points in some three-dimensional space in my previous review. I did see Carol's explanation of it in the summary to that review, but I disagree that it is helping this document. I think it's hurting by adding confusion. If you deleted it, the rest of the document means what it meant before, and the reader is no less informed. Please remove it. If the primary point is to make sure that the tester considers covering every permutation of the test parameters, just say that. The graphic implies that some things have more sess.sig than others, and that you might talk about the distance between points in this vector space. Save all this aside and bring it back in a document that actually uses it if you must, but please take it out of this one.

3) The code in Appendix A of the methodology doc should be identified as a "Code Component" as described in the TLP (http://trustee.ietf.org/license-info/IETF-TLP-4.htm) and a license block should be added.

More minor points:

* The methodology document says the DUT can be any conforming 3261 device. The terminology document says the device can't be user equipment (see bullet 1 in section 1.1). Neither is correct. (A presence server, for instance, is not a reasonable thing for a DUT for the tests in this document.) I suggest in both documents you just explicitly list what the DUT can be. That list should not include "User Agent Server" as it currently does in 3.2.2 of the terminology document. A UAS is just a role that any of these devices, including end-user terminals, can hold.

* If a Registrar is the DUT, then neither of the topology figures is correct. (Unless you're intending for an EA to be the registrar, and the DUT is just a proxy.) Consider adding a figure that more clearly shows what the topology for your registration test is intended to be, and making it clear that you are only testing INVITE through devices that forward messages.

* There are several places that call out RTSP as a media protocol. RTSP is a control protocol, not a media protocol - it doesn't make sense being listed with RTP and SRTP. MSRP would make more sense.

* The Expected Results in Section 6.8 of the methodology claim that the rate should not be more than what was in 6.7. How do you come to that conclusion? There are several valid implementation choices that could lead to re-registration taking slightly longer than an initial registration.

NITS:

Methodology document:
* Introduction paragraph 1: s/in Terminology document/in the Terminology document/
* Introduction paragraph 5: This points to section 4 (the IANA considerations section) and section 2 (the introduction) of the terminology document for an explanation of configuration options. Neither of those sections explains configuration options. Where did you mean to point?

Terminology document:
* Security Considerations: Please remove "and various other drafts". If you know of other important documents to point to, please add them as references.
* The definition of Stateful Proxy and Stateless Proxy copied the words "defined by this specification" from RFC 3261. This literal copy introduces ambiguity. Please replace "by this specification" with "by [RFC3261]".
* Introduction paragraph 4, last sentence: By calling out devices that include the UAS and UAC functions, you have eliminated stateless proxies, which contain neither.
* In section 3, when you use the templates, you are careful to say None under Issues when there are no issues. Please use the same care with See Also. Right now, you have empty See Also: sections that could be misread to take up whatever content follows (particularly by a text-to-speech engine).
* 3.1.6: s/tie interval/time interval/
