Fred Baker (fred | 21 Aug 03:30 2015

Re: the draft "Benchmarking Methodology for IPv6 Transition Technologies"


On Aug 20, 2015, at 1:05 AM, Marius Georgescu <liviumarius-g <at> is.naist.jp> wrote:
> I hope my e-mail finds you well. It was very nice meeting you in IETF93. We had a very short discussion about
benchmarking in the context of IPv6 transition technologies. I have been working on a draft associated
with BMWG on this topic for a while now (3rd iteration) and some feedback on it would be more than welcome. If
time allows you, please let me know what you think about it. Here’s a link:
> https://tools.ietf.org/html/draft-georgescu-bmwg-ipv6-tran-tech-benchmarking-01

OK. I have now had a chance to read this. I'm copying the working group, as it may be interested in the
comments. Probably to tell me I'm wrong :-)

My question walking in was how this specifically differed from a benchmark of, say, an Ethernet switch or an
Ethernet-to-Ethernet router, or maybe a firewall as the DUT. One might expect the results to be a little
different, and if they are a lot different (wide variation in delay, dramatically reduced rate) that
could be an important observation. There is also an interesting question on data sources and sinks, as
they presumably have different addresses or address families than they might otherwise. But from a step
back, I would expect the procedure to be very similar.

The good news, as I understand the draft, is that they are. In sections 6 and 7, you point to the same
procedures one would use in a test of a standard intermediate DUT. Section 8 needs some work.

When you say "transition technologies", I presume that you're addressing
 - RFC 4213's dual stack model,
 - RFC 6052/6144-6147/6791 IPv4/IPv6 translation,
 - RFC 6296 NPTv6,
 - RFC 6333 DS-Lite,
 - RFC 6740-6748 ILNP,
 - RFC 6877 464XLAT,
 - RFC 7597-7599 MAP (encapsulated and translated),
 - SIIT-DC
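Fred's framing — benchmark the transition DUT with the same procedures as any native intermediate device, and treat deviations from the native baseline as the finding — can be sketched as a simple comparison. This is a hypothetical helper for illustration, not anything specified in the draft:

```python
def relative_degradation(baseline, dut):
    """Fractional loss of a benchmark metric (e.g. throughput in fps)
    for a transition-technology DUT versus a native
    Ethernet-to-Ethernet baseline run of the same test."""
    return 1.0 - dut / baseline

# e.g. a NAT64 DUT forwarding 900 kfps against a 1 Mfps native baseline
loss = relative_degradation(1_000_000, 900_000)  # about 0.1 (10% lower)
```

A small loss would confirm the "very similar" expectation; a dramatically reduced rate or wide delay variation would be the important observation.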

The IESG | 10 Aug 18:30 2015

Document Action: 'ISSU Benchmarking Methodology' to Informational RFC (draft-ietf-bmwg-issu-meth-02.txt)

The IESG has approved the following document:
- 'ISSU Benchmarking Methodology'
  (draft-ietf-bmwg-issu-meth-02.txt) as Informational RFC

This document is the product of the Benchmarking Methodology Working
Group.

The IESG contact persons are Benoit Claise and Joel Jaeggli.

A URL of this Internet Draft is:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-issu-meth/

Technical Summary

Many networking functions require operating-software maintenance
during their useful life, and some require updates frequently enough
that scheduling upgrades during quiet periods poses an operational
challenge to keeping sufficient service capacity on-line.  In response,
network function developers have devised ways to minimize the
operational impact of upgrades.  The methodology of this memo
assesses the effect of a software update on dataplane traffic.
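The dataplane impact is typically reduced to an outage time inferred from frame loss at a known offered rate. A sketch of that arithmetic follows; the function name is illustrative, not taken from the draft:

```python
def issu_outage_seconds(frames_lost, offered_rate_fps):
    # Infer the effective dataplane outage during the ISSU window
    # from the number of frames lost at a constant offered load.
    return frames_lost / offered_rate_fps

# 500k frames lost at an offered load of 1 Mfps implies
# roughly half a second of dataplane disruption.
outage = issu_outage_seconds(500_000, 1_000_000)
```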

Working Group Summary

Since this work proposed assessing an operational characteristic
rather than measuring the usual benchmarking dimensions (how big? how many?),
there was considerable discussion to frame the problem so that 
BMWG could adopt the work and relate it to existing methods.
Once the scope and purpose were clear, inclusion in re-chartering,
adoption and consensus followed very smoothly. The BMWG believes

Stephen Farrell | 6 Aug 00:25 2015

Stephen Farrell's No Objection on draft-ietf-bmwg-issu-meth-01: (with COMMENT)

Stephen Farrell has entered the following ballot position for
draft-ietf-bmwg-issu-meth-01: No Objection

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)

Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
for more information about IESG DISCUSS and COMMENT positions.

The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-issu-meth/

----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

- I was a bit surprised there was no reference to software
signing here (only checksums are mentioned).  The
difference is that signature verification can also
sometimes require e.g. an OCSP lookup to check for
revocation and that could I guess impact on benchmarking.
(The secdir reviewer also made this point.)

- Kathleen makes a good point in her comment too.
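Stephen's point is that a checksum check is purely local, while signature verification may add a network round trip (e.g. an OCSP lookup) that could skew benchmark timing. A minimal sketch of the local-only case, using a hypothetical helper name:

```python
import hashlib

def verify_image_checksum(image_bytes, expected_hex):
    # Purely local integrity check: no network I/O, unlike signature
    # verification that may require an OCSP revocation lookup.
    return hashlib.sha256(image_bytes).hexdigest() == expected_hex
```

The measurable difference for ISSU benchmarking would be any added latency and failure modes introduced by the network-dependent revocation step.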

Alissa Cooper | 5 Aug 22:10 2015

Alissa Cooper's No Objection on draft-ietf-bmwg-issu-meth-01: (with COMMENT)

Alissa Cooper has entered the following ballot position for
draft-ietf-bmwg-issu-meth-01: No Objection

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)

Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
for more information about IESG DISCUSS and COMMENT positions.

The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-issu-meth/

----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

Section 3:
OLD
Note that, a given vendor implementation may or may not permit the
abortion of the in-progress ISSU at particular stages.

NEW
Note that, a given vendor implementation may or may not permit halting
the in-progress ISSU at particular stages.

OLD
the test
   plan document should reflect these and other relevant details and

Benoit Claise | 5 Aug 21:41 2015

Benoit Claise's No Objection on draft-ietf-bmwg-issu-meth-01: (with COMMENT)

Benoit Claise has entered the following ballot position for
draft-ietf-bmwg-issu-meth-01: No Objection

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)

Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
for more information about IESG DISCUSS and COMMENT positions.

The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-issu-meth/

----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

- Some discrepancies regarding MUST, SHOULD, MAY

On one side: The hardware configuration of the DUT SHOULD be identical
to the one expected to be or currently deployed in production.
On the other side: the feature, protocol timing and other relevant
configurations should be matched to the expected production environment.

Note: potentially some other discrepancies with lower/upper case
MUST/SHOULD/MAY

- expand "NSR"

nits:

Kathleen Moriarty | 5 Aug 02:44 2015

Kathleen Moriarty's No Objection on draft-ietf-bmwg-issu-meth-01: (with COMMENT)

Kathleen Moriarty has entered the following ballot position for
draft-ietf-bmwg-issu-meth-01: No Objection

When responding, please keep the subject line intact and reply to all
email addresses included in the To and CC lines. (Feel free to cut this
introductory paragraph, however.)

Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
for more information about IESG DISCUSS and COMMENT positions.

The document, along with other ballot positions, can be found here:
https://datatracker.ietf.org/doc/draft-ietf-bmwg-issu-meth/

----------------------------------------------------------------------
COMMENT:
----------------------------------------------------------------------

In section 3, when "other checks" are described, I'd like to see an
explicit mention of a check to ensure the device is authorized to install
the software/firmware.  There have been numerous attacks against just
about every vendor where counterfeit hardware is deployed and runs the
firmware, OS, etc. issued.  Fraud can be a big issue and it isn't always
talked about, but this should definitely be a security consideration.  I
do think it fits in the subsections of section 3.  The statement added
can be as simple as, "check performed to ensure the device is authorized
to install the firmware".

Ramki_Krishnan | 27 Jul 15:51 2015

measuring energy consumption for NFV

One of the topics which came up during the Prague meeting while discussing Al’s draft on benchmarking VNFs was measuring energy consumption. Please find details below. More details are also in the NFVRG draft - https://datatracker.ietf.org/doc/draft-krishnan-nfvrg-policy-based-rm-nfviaas/?include_text=1.

 

At the physical server level, instantaneous energy consumption can be accurately measured through the IPMI standard. At a VM level, instantaneous energy consumption can be approximately measured using an overall utilization metric, which is a combination of CPU utilization, memory usage, I/O usage, and network usage.
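One way to combine those dimensions into an overall utilization metric is a weighted sum; the weights and function below are illustrative assumptions, not values from the draft:

```python
def vm_energy_estimate_w(host_power_w, cpu, mem, io, net,
                         weights=(0.6, 0.2, 0.1, 0.1)):
    # Approximate a VM's share of instantaneous host power from an
    # overall utilization metric (all utilization inputs in [0, 1]).
    # The weights are hypothetical; real values would need calibration
    # against IPMI readings at the physical-server level.
    util = (weights[0] * cpu + weights[1] * mem +
            weights[2] * io + weights[3] * net)
    return host_power_w * util

# A host drawing 200 W with a VM at 50% on every dimension would be
# attributed roughly 100 W under these weights.
estimate = vm_energy_estimate_w(200, 0.5, 0.5, 0.5, 0.5)
```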

Thanks,

Ramki

 

_______________________________________________
bmwg mailing list
bmwg <at> ietf.org
https://www.ietf.org/mailman/listinfo/bmwg
MORTON, ALFRED C (AL | 27 Jul 14:30 2015

Re: draft-ietf-bmwg-virtual-net / CPU & memory utilization should be test configurations or test results?

Hi Saurabh,

 

Thanks for your question and for continuing this discussion.

 

In my mind, any server-oriented measurement that informs us about resources consumed would be a fair result to collect. But, as we said during the meeting, only as an auxiliary metric while other benchmarking is in progress.

One of the challenges I mentioned adding to my draft is for benchmarking metrics to assist deployment designers, or perhaps to provide input for some form of resource model, so that your question about adding VNFs below might be answered.

We have a draft that begins to address the resource-sharing aspect of benchmarking here:

https://tools.ietf.org/html/draft-vsperf-bmwg-vswitch-opnfv-00

prepared by the OPNFV vsperf project, and Maryam Tahhan presented additional material on this topic in slides (see the IETF-93 materials).

Sorry for the brief response and the delay responding; I've been travelling all weekend and just arrived at another meeting.

regards,

Al
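Al's "auxiliary metric while other benchmarking is in progress" can be sketched as a background sampler. The probe below is a stand-in lambda; in practice it would be replaced with a real CPU or memory reading (e.g. from psutil):

```python
import threading
import time

def sample_auxiliary(probe, interval_s, n_samples, out):
    # Poll a resource probe in the background while the primary
    # benchmark runs, collecting auxiliary readings alongside
    # the main result.
    for _ in range(n_samples):
        out.append(probe())
        time.sleep(interval_s)

readings = []
sampler = threading.Thread(
    target=sample_auxiliary,
    args=(lambda: 42.0, 0.01, 5, readings))  # hypothetical probe
sampler.start()
# ... the primary benchmark (throughput, latency, etc.) would run here ...
sampler.join()
```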

 

From: Saurabh Chattopadhyay - ERS, HCL Tech [mailto:saurabhchattopadhya <at> hcl.com]
Sent: Friday, July 24, 2015 5:58 PM
To: draft-ietf-bmwg-virtual-net <at> tools.ietf.org
Cc: bmwg <at> ietf.org
Subject: draft-ietf-bmwg-virtual-net / CPU & memory utilization should be test configurations or test results?

 

Hello Al,

 

First, thank you for writing this draft. It is an excellent guideline for us who are working in VNF Benchmarking area.

 

In the BMWG meeting session yesterday, there were some interesting discussions around whether to consider CPU / memory utilization (and similar parameters) as test configurations or test results. I think we couldn’t discuss this in detail due to time constraints, so I thought of bringing this up on the list to get your and other experts’ views.

 

My own understanding is that, for VNF benchmarking specifically, this issue becomes a little delicate. Because the VNF black box relies on soft integration and soft partitioning of the underlying hardware, all benchmark results depend on the load imposed on the entire hardware, in addition to the load directed at the VNF specifically. For example, if a VNF is pinned to four cores and the hardware provides 12 more cores, the VNF’s response under a fixed load will vary as the other cores are put under different load conditions. At this point we could define certain fixed load profiles (say, a combination of compute, storage and networking load percentages) for the remaining hardware, and benchmark the VNF under test against its own load conditions as originally planned. However, in real deployments these fixed load profiles don’t correlate well unless we qualify every VNF’s performance profile against these parameters. So even though benchmark data is produced for the VNF on a particular piece of hardware under certain load conditions on the remaining hardware, the deployment folks are not clear on how to leverage this intelligence, especially when planning to deploy other VNFs on the remaining hardware.

 

I’m not sure what would be an appropriate way to solve this. Measuring CPU / memory utilization (and similar parameters for the shared assets) may be an option, but I am not sure this truly aligns with black-box benchmarking methodologies. Kindly advise.

 

Warm Regards,

Saurabh

 



::DISCLAIMER::
----------------------------------------------------------------------------------------------------------------------------------------------------

The contents of this e-mail and any attachment(s) are confidential and intended for the named recipient(s) only.
E-mail transmission is not guaranteed to be secure or error-free as information could be intercepted, corrupted,
lost, destroyed, arrive late or incomplete, or may contain viruses in transmission. The e mail and its contents
(with or without referred errors) shall therefore not attach any liability on the originator or HCL or its affiliates.
Views or opinions, if any, presented in this email are solely those of the author and may not necessarily reflect the
views or opinions of HCL or its affiliates. Any form of reproduction, dissemination, copying, disclosure, modification,
distribution and / or publication of this message without the prior written consent of authorized representative of
HCL is strictly prohibited. If you have received this email in error please delete it and notify the sender immediately.
Before opening any email and/or attachments, please check them for viruses and other defects.

----------------------------------------------------------------------------------------------------------------------------------------------------

GEORGESCU LIVIU MARIUS | 25 Jul 12:31 2015

Re: Question about using the IPv6 benchmarking address space

The amended prefix is within the specifications of RFC 5180. But again, I don't know if it's really necessary to change the prefix. I am not sure, but my guess is that IANA allocated the prefix to prevent any of the benchmarking traffic from reaching the Internet. That being said, I don't see any problem with using 2001:2:a:bb1e::/64 in an isolated test environment.
However, if respecting the recommendations of RFC5180 is desired, you should also consider the following note in RFC5180:
" Note: Similar to RFC 2544 avoiding the use of RFC 1918 address space for benchmarking tests, this document does not recommend the use of RFC 4193 [4] (Unique Local Addresses) in order to minimize the possibility of conflicts with operational traffic."
This is relevant for the discussion in v6ops about using ULA, which is not recommended by RFC5180.
Marius
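The containment question in this thread can be checked mechanically. A quick sketch with Python's ipaddress module confirms that the original /64 falls outside 2001:2::/48 while the amended one (written with the "::" needed for a valid literal) falls inside:

```python
import ipaddress

bench = ipaddress.ip_network("2001:2::/48")            # RFC 5180 (+errata)
original = ipaddress.ip_network("2001:2:a:bb1e::/64")  # "a:bb1e" ~ "apple"
amended = ipaddress.ip_network("2001:2:0:aab1::/64")   # "aab1" ~ "AAPL"

# The third hextet of the original is 0x000a, not 0x0000, so it lies
# outside the /48 allocation; the amended prefix keeps it zero.
print(original.subnet_of(bench))  # False
print(amended.subnet_of(bench))   # True
```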
On 07/25/15, David Schinazi <dschinazi <at> apple.com> wrote:

Thanks for clarifying Marius, I must have misread /32 instead of /48.

In that case, let me amend my question to use a prefix like this instead:
2001:2:0:aab1/64
(named after our NASDAQ symbol AAPL)

Thanks,
David


On Jul 25, 2015, at 11:25, GEORGESCU LIVIU MARIUS <liviumarius-g <at> is.naist.jp> wrote:

Just to clarify, the IPv6 prefix mentioned in RFC5180(+errata) is 2001:2::/48 and in my understanding does not include the prefix 2001:2:a:bb1e::/64. However, I don't think there's any issue with using it in the context of an isolated testing environment.

Marius

On 07/24/15, David Schinazi <dschinazi <at> apple.com> wrote:
Hi bmwg,

We would like to use addresses from the IPv6 benchmarking prefix on an independent test network and wonder if there are issues
we have not thought of.

A little background on what we're doing: in OS X El Capitan we introduced a NAT64 mode for the Mac's Internet Sharing feature
to allow iOS developers to test their applications for IPv6 support. Using this, you can share your IPv4 internet connectivity
(e.g. from ethernet) to a newly created Wi-Fi network that only supports IPv6, and the Mac will perform NAT64+DNS64.
Currently the internal addresses of that network are using the Teredo prefix (2001::/64) and we have been advised to use
something that is not treated differently by RFC 6724 (Default Address Selection for IPv6).
We thought that since this is a testing network and that those addresses never leave that link, using the benchmarking prefix
from RFC5180(+ errata) (2001:2/32) would be reasonable. We thought that using 2001:2:a:bb1e/64 as our network prefix
would be appropriate, as "a:bb1e" looks a bit like "apple". Note that this is the prefix advertised by RA to the Wi-Fi network,
it is not the prefix of the NAT64 translation.

Does anyone think there could be any issues with this?

Thanks,
David Schinazi
Apple CoreOS Networking Engineer


GEORGESCU LIVIU MARIUS | 25 Jul 11:25 2015

Re: Question about using the IPv6 benchmarking address space

Just to clarify, the IPv6 prefix mentioned in RFC5180(+errata) is 2001:2::/48 and in my understanding does not include the prefix 2001:2:a:bb1e::/64. However, I don't think there's any issue with using it in the context of an isolated testing environment.


Marius

On 07/24/15, David Schinazi <dschinazi <at> apple.com> wrote:
Hi bmwg,

We would like to use addresses from the IPv6 benchmarking prefix on an independent test network and wonder if there are issues
we have not thought of.

A little background on what we're doing: in OS X El Capitan we introduced a NAT64 mode for the Mac's Internet Sharing feature
to allow iOS developers to test their applications for IPv6 support. Using this, you can share your IPv4 internet connectivity
(e.g. from ethernet) to a newly created Wi-Fi network that only supports IPv6, and the Mac will perform NAT64+DNS64.
Currently the internal addresses of that network are using the Teredo prefix (2001::/64) and we have been advised to use
something that is not treated differently by RFC 6724 (Default Address Selection for IPv6).
We thought that since this is a testing network and that those addresses never leave that link, using the benchmarking prefix
from RFC5180(+ errata) (2001:2/32) would be reasonable. We thought that using 2001:2:a:bb1e/64 as our network prefix
would be appropriate, as "a:bb1e" looks a bit like "apple". Note that this is the prefix advertised by RA to the Wi-Fi network,
it is not the prefix of the NAT64 translation.

Does anyone think there could be any issues with this?

Thanks,
David Schinazi
Apple CoreOS Networking Engineer


