Re: RFC2544 Statement
Curtis Villamizar <curtis <at> occnc.com>
2011-08-02 15:21:03 GMT
In message <94DEE80C63F7D34F9DC9FE69E39436BE3A0BCB7BB3 <at> MILEXCH1.ds.jdsu.net>
Barry Constantine writes:
> Hi All,
> I think this is a good effort with the right intentions.
> There are two (2) scenarios in which network providers use RFC2544 (or
> derivatives) today:
> 1. "Turn-up" testing, which is conducted on a business-class,
> production network before end-customer traffic is commissioned.
> This is generally considered the verification of the end-customer SLA.
> 2. Troubleshooting. This is performed after an end-customer has been
> commissioned and the SLA is disputed (poor application performance,
> etc.). The network provider coordinates a time to switch the
> end-customer to a back-up link and then re-conducts RFC2544-type
> testing on the production link.
> I think this statement will be taken more credibly by network
> providers if it is acknowledged that these types of scenarios exist
> today and that caution should be used when performing RFC2544
> (especially in case #2, troubleshooting).
> Al, I think you mentioned there was a section related to exceptions
> (?). If these points are agreeable, then I think this section should
> be moved closer to the front so that a reader will grasp the overall
> intent.
> My two cents,
The intent of RFC2544 is as a methodology for "benchmarking", not for
SLA verification and troubleshooting. This should be very clear
after reading the first paragraph of the introduction.
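For readers unfamiliar with what RFC2544 benchmarking actually entails, the core of its throughput test (Section 26.1) is a search for the highest offered load at which the device under test forwards every frame. A minimal sketch of that search is below; `send_at_rate` is a hypothetical stand-in for real test gear, modeled here as a toy device that starts dropping frames above 72.5% of line rate.

```python
def send_at_rate(rate_pct):
    """Hypothetical test-gear hook: offer traffic at rate_pct percent of
    line rate for one trial and return the observed frame-loss count.
    Toy model: this pretend device drops frames above 72.5% load."""
    return 0 if rate_pct <= 72.5 else 1

def rfc2544_throughput(resolution=0.1):
    """Binary-search for the highest zero-loss rate, as a percent of
    line rate, to within the given resolution."""
    lo, hi = 0.0, 100.0
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if send_at_rate(mid) == 0:
            lo = mid   # no loss: throughput is at least mid
        else:
            hi = mid   # loss seen: back off
    return lo          # highest zero-loss rate found

print(round(rfc2544_throughput(), 1))  # prints 72.5 for this toy model
```

Note that a single zero-loss number like this says nothing about behavior under route churn or other dynamic conditions, which is exactly the limitation discussed below.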
Vendors often engage in "specsmanship" in an attempt to give their
products a better position in the marketplace. This often involves
"smoke & mirrors" to confuse the potential users of the products.
RFC2544 benchmarking is for evaluating equipment in the lab before you
make a huge purchase of equipment for a large network deployment.
Any provider that is going to spend many millions of dollars on a
deployment will hire staff specifically for testing, use the best
off-the-shelf test equipment available, and do way more testing
than what is described in RFC2544 before buying anything (often
borrowing the equipment used in an initial evaluation from the vendor,
then only buying from first round qualifiers for extended testing).
Of course, for small purchases far less testing rigor is applied.
BMWG has morphed over time, but originally the focus was on testing
for provider networks in the high growth years of the Internet. When
RFC 2544 was written there was still way too much emphasis in BMWG on
repeatable results and way too little emphasis on exposing fatal flaws
in routers before they were deployed in real networks.
BTW- The worst product ever to benchmark well in limited benchmarks
was the Cisco 7000. In a real network it was terrible because it
dropped packets after any route change. A network of Cisco 7000s could
easily be made to go unstable when a lot of route change was occurring
(which resulted in positive feedback and at times massive
instabilities in networks built with these routers). Lessons were
learned and the 7500, CEF, DCEF, GSR, etc. followed, and other vendors
emerged, some of which learned, others of which didn't.