FW: Feedback on ForCES Model v02
Putzolu, David <david.putzolu <at> intel.com>
2004-03-12 18:00:26 GMT
Bounced due to an email address change; forwarding...
From: Steven Blake [mailto:steven.blake <at> ericsson.com]
Sent: Thursday, March 11, 2004 5:33 PM
To: Putzolu, David
Cc: FORCES <at> PEACH.EASE.LSOFT.COM
Subject: Re: Feedback on ForCES Model v02
On Sun, 2004-02-29 at 19:46, Putzolu, David wrote:
Hi David; sorry for the delay. I will respond to some of your comments.
> LFB metadata definition: It seems to me that implementations
> might do more than encode metadata differently, they might
> transport it differently (i.e. if a FE has a classification
> processor and a traffic management processor, they might
> use a shared memory region, or a streaming interface, or
> some other interface to share metadata) or they might not
> even have the metadata at all (i.e. if a NPU performs both
> classification and traffic shaping, but directly associates
> classification entries --> traffic shaping entries, it might
> not even have the metadata in the actual fast path, but instead
> use the metadata to setup the entries). Maybe say:
> The FE model defines how such metadata is identified,
> produced and consumed by the LFBs, but not how metadata
> is encoded within an implementation.
> or perhaps: "identified, produced and consumed by the LFBs,
> but not how per-packet state is implemented within actual
> hardware"
Agreed. We are on the same page, we just didn't express it well.
> Section 2.3 says:
> Definition of the various payloads of ForCES messages
> (irrespective of the transport protocol ultimately selected)
> cannot proceed in a systematic fashion until a formal
> definition of the objects being configured and managed (the
> FE and the LFBs within) is undertaken.
> This paragraph confused me a bit. My belief is that one
> *can* proceed in defining how LFBs are carried, identified,
> and queried, without having to specify that a scheduler uses
> a 64 bit unsigned int for representing minimum service rate
> in bits per second. This is analogous to how SNMP protocol
> messages could be defined before all MIBs were defined.
We can start, of course, but we can't lay out the TLVs, for instance,
until we resolve these issues. Maybe the sentence in Sec. 2.3 is too strong.
> Page 12 shows "internal control" in the figure. This is
> confusing because the LFBs are still part of the ForCES
> model of the actual FE implementation. Perhaps say
> "LFB manipulation" or "per-LFB interactions" here?
Ok. The control is internal insofar as we are not specifying the
details of how internal ForCES driver for an FE talks to the
individual LFBs (in whatever manifestation they may have in hardware).
> Page 13 replace "new LFB class can" with "new LFB classes can"
> Section 3.2.1 Input group is confusing to me. If it
> is only a way of identifying which LFB sourced a packet to
> another FE, why not simply assert that LFBs are able to
> determine which input connection # a packet arrived on?
> Does input group offer anything more than this?
In the absence of an input group, there is only one input. The
existence of two or more inputs in a group on a LFB instance is
what allows that LFB to distinguish packets based on the input
connection the packet is received on.
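As a hypothetical illustration (Python, not part of the draft; the class
and counters are invented purely to restate the point): an input group
of two or more inputs is what lets a single LFB instance branch on the
arrival input, which a single-input LFB cannot do.

```python
# Toy sketch of an LFB instance with one input group of N inputs.
# All names here are illustrative, not from the ForCES model draft.

class MeterLFB:
    """Hypothetical LFB whose behavior depends on the arrival input."""
    def __init__(self, n_inputs):
        self.n_inputs = n_inputs
        self.counts = [0] * n_inputs  # per-input packet counters

    def receive(self, packet, input_index):
        # The input index within the group distinguishes packets by
        # the connection they arrived on.
        self.counts[input_index] += 1
        return packet

lfb = MeterLFB(n_inputs=2)
lfb.receive("pkt-a", input_index=0)
lfb.receive("pkt-b", input_index=1)
lfb.receive("pkt-c", input_index=1)
print(lfb.counts)  # [1, 2]
```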
> Section 3.2.2 It isn't clear what the use of an output
> group is. How would a FE or LFB use it? LPM is the example
> given but doesn't make sense. If the LFB classifies a packet
> it will likely have a specific next LFB connection to send
> it on. The alternative is to just say "send this packet on
> that output group" - but that makes for unpredictable /
> unspecified behavior, which is (correctly) identified as
A LFB with N ports in a single output group is equivalent to a
LFB with a single output port paired with a Redirector LFB with N
output ports. Output groups allow us to avoid introducing an
otherwise extraneous Redirector, and the output selector metadata
it must use.
The idea in fact is to allow the output port within an output group
to be specified on a per-attribute basis (e.g., on a per-prefix
basis for a LPM LFB class).
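A hypothetical sketch of that per-prefix idea (Python; the class and
table layout are invented, not from the draft): each prefix entry names
an output-group port directly, which is behaviorally equivalent to a
single-output LPM LFB followed by a Redirector LFB keyed on selector
metadata, with the Redirector folded away.

```python
# Toy LPM LFB whose prefix entries each select an output-group port.
# Illustrative only; not the draft's actual LFB definition.
import ipaddress

class LpmLFB:
    def __init__(self):
        self.table = []  # (network, output_port) pairs

    def add_prefix(self, prefix, output_port):
        self.table.append((ipaddress.ip_network(prefix), output_port))

    def lookup(self, addr):
        # Longest-prefix match; the winning entry picks the group port,
        # so no separate Redirector LFB or selector metadata is needed.
        ip = ipaddress.ip_address(addr)
        matches = [(net, port) for net, port in self.table if ip in net]
        net, port = max(matches, key=lambda m: m[0].prefixlen)
        return port

lfb = LpmLFB()
lfb.add_prefix("10.0.0.0/8", output_port=0)
lfb.add_prefix("10.1.0.0/16", output_port=1)
print(lfb.lookup("10.1.2.3"))  # 1  (more specific prefix wins)
print(lfb.lookup("10.9.9.9"))  # 0
```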
The specific case of a LPM LFB needs to be looked at: I can think
of the need for multiple output types (e.g., separate ones for source
and destination prefix check), but not multiple ports per-type.
> Section 3.2.3 bottom of page 16: The underlying
> implementation may not be passing the packet at
> all. E.g. an IP packet in an PPPoverATMAAL5 frame might
> actually consist of a scattered bunch of pointers to
> pieces of the packet. I'd suggest just shortening the
> 2nd paragraph to say:
> "logically operates on, but this has no implications
> on how packets are stored or moved in the underlying
> implementation."
> Section 3.2.3 bottom of page 17: Isn't it possible that
> the underlying implementation not actually have metadata
> in the fast path? E.g. LFB 1 may associate metadata
> with a packet:
> Address/mask: 10.0.0.0/255.0.0.0 --> metadata for next hop 1
> and LFB 2 might associate that metadata with some action:
> packet w/metadata value 1 --> send out port 3
> But the underlying implementation might not have any
> metadata in the fast path, e.g.:
> Address/mask 10.0.0.0/255.0.0.0 --> send out port 3
> Admittedly, there is software in between that stores
> the metadata to then "cook" the data into the hardware
> representation, but the point is the hardware may not
> even have metadata. So I'd suggest saying:
> "While it is important to define the metadata types
> passing between LFBs, it is not necessary to define
> the encoding mechanism used for LFBs for that
> metadata. As with the ForCES model in general, no
> constraints are placed on underlying FE implementation
> of metadata as long as the FE behaves in a fashion
> that one would predict given a particular metadata
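The "cooking" step described in the quoted example can be restated as a
hypothetical sketch (Python; the tables and values are invented to mirror
the example, not taken from any implementation): control software composes
the two model-level tables so that no metadata survives into the fast path.

```python
# Model view: LFB 1 maps prefixes to next-hop metadata, LFB 2 maps
# metadata to actions. Illustrative values only.
classifier = {"10.0.0.0/8": 1}    # prefix -> next-hop metadata
actions = {1: "send out port 3"}  # metadata -> action

# Hardware view: the control path "cooks" both tables into one rule,
# so the fast path carries no metadata at all.
cooked = {prefix: actions[meta] for prefix, meta in classifier.items()}
print(cooked)  # {'10.0.0.0/8': 'send out port 3'}
```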
> In the next paragraph the term marking is overloaded.
> Replace "marked" with "associated".
> 126.96.36.199 Forwarded is overloaded. Replace "forwarded
> by the LFB" with "passed on to the next LFB".
Ok, although "passed on" seems kind of clumsy.
> 188.8.131.52 Why must there be at least one producer and
> one consumer? While I can see the need for a
> consumer to require an upstream producer (aside: it
> would be better if each metadata consumer LFB, even
> non-optional ones, also specify what happens if the
> expected metadata is not present), there is no real
> need for each producer to require a down-graph consumer.
> 3.2.5 last para: Why are FEs disallowed from
> supporting multiple versions of a particular LFB class?
Because that is supported by inheritance, silly.
I'm not sure what the rationale is for this statement, so I defer to my
> 3.3.1 While hardware could influence topology, I think
> it is more important that the data path be modeled in
> a clear fashion. If a single hardware classifier is
> used for two logically different classifications (e.g.
> in IP in IP tunneling, one classification on the outer,
> another on the inner DIP), that still would be better
> represented as two separate LFB instances, unless one
> specifically wants to reuse the same classification
> entries (e.g. an ACL where any access to certain ports
> is to be blocked, whether on the inside or outside of
> an IPinIP or IPsec tunnel mode connection).
The issue here is that we previously decided that each LFB instance
exclusively owns all of its attributes. So if you have a single
hardware classifier, used to implement two or more logical
classifications, you cannot model them as separate LFB instances
unless the partitioning of hardware resources (e.g., classifier entries
or memory) between the logical classifiers is fixed. Otherwise,
the maximum available classifier attributes for each LFB instance
would be variable.
This brings up the point that there are some potential implementations
that cannot guarantee a reasonable maximum number of supported
attributes for some function. An LPM LFB using a trie structure and
fixed memory might not be able to guarantee a number of
attributes (e.g., prefix entries) for arbitrary prefix sets, sufficient
for every application, even though in practice the LFB instance can
support sufficient attributes given normal prefix distributions.
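A hypothetical illustration of that trie point (Python; the counting
function and prefix sets are invented for this sketch): the number of
trie nodes depends on how the prefixes cluster, not just on how many
there are, so fixed trie memory cannot guarantee a worst-case number of
prefix entries.

```python
# Count the distinct trie nodes needed to store bit-string prefixes.
# Illustrative only; real LPM tries are more elaborate.

def trie_nodes(prefixes):
    nodes = set()
    for p in prefixes:
        for i in range(1, len(p) + 1):
            nodes.add(p[:i])  # every proper prefix is a trie node
    return len(nodes)

# Two sets of four prefixes with very different memory footprints:
clustered = ["0000", "0001", "0010", "0011"]
scattered = ["0000", "0101", "1010", "1111"]
print(trie_nodes(clustered))  # 8  (upper nodes are shared)
print(trie_nodes(scattered))  # 14 (little sharing)
```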
> 3.3.1 Page 28/9 It says "One way to encode...". Not
> sure of the relevance of that. Is it the suggested
> mechanism or the actual mechanism used in this document?
> If so say so!
This is a part of the document where we are still thinking out loud,
so suggestions are welcome.
> 4.7 Re: editor's note: Should LFB class names
> be managed by IANA?
> 4.7.2 Re: Omitting the tag when a LFB has no input
> ports - what about having a reserved "None" value?
> Explicitly using a "none" statement tends to catch
> errors better than omission of the statement,
> which can happen by mistake and be a difficult bug
> to catch.
Punt to Zsolt.
> 4.7.3 Same comment
> 184.108.40.206 Make it mandatory that the element be present -
> same reasoning as above, minimizes errors of omission.
> 220.127.116.11.1 Don't understand what namespace this is.
> FE has no way of knowing what names the CE is assigning
> to other FEs, so this must be something that the other
> FEs communicate to the FE - should be a valid identifier
> prior to CE<->FE connection, otherwise this becomes a
> chicken & egg problem. Each FE should have some unique
> ID - maybe just use a MAC address perhaps that the FE owns.
> Or is this what 18.104.22.168.4 is?
It would seem so. Joel?
> Section 6 general comment: This seems to be a very good
> start on the following charter deliverable:
> o A formal definition of the controlled objects in the
> functional model of a forwarding element. This
> includes IP forwarding, IntServ and DiffServ QoS. An
> existing specification language shall be used for
> this task.
> I remember some discussion about splitting this off.
> Do you think this should remain in the base model draft,
> or should it be split off as a separate doc/draft?
I think we resolved to split this into a separate document, perhaps
retaining a non-exhaustive list of LFB classes without description.
> Section 6 general comment: Some ASCII art of each
> of the LFBs showing inputs, outputs, etc., would be helpful.
> 6.1 Ports are tricky things. If a single LFB represents
> both send and receive then all FE graphs will look like
> cycles, which is rather confusing (and hard for a machine
> to parse). Of course, splitting ports also is confusing -
> if one downed a receive LFB it would down the whole port,
> affecting the transmit LFB as well. This isn't a new issue,
> but the doc doesn't really explain how it will be addressed -
> better to put a stake in the ground one way or the other.
We intend to say that a single (port or interface) LFB represents
both send and receive; please let me know where the text is
unclear. It is easier to visualize the graphs if the port/interface
LFBs are split in two, but it shouldn't be harder for a machine
to parse (otherwise, how would OSPF work?), keeping in mind that port
LFBs are special since that is where traffic enters and exits the FE.
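A hypothetical sketch of why the cycles are harmless to a machine
(Python; the graph, node names, and traversal are invented for this
illustration): a traversal that treats port LFBs as the points where
traffic enters and exits the FE terminates even though the combined
send/receive port LFB closes a cycle in the graph.

```python
# Toy FE graph: a single port LFB handles both receive and transmit,
# so the datapath loops back to it. Names are illustrative only.
graph = {
    "port0": ["classifier"],   # receive side of the port LFB
    "classifier": ["forwarder"],
    "forwarder": ["port0"],    # transmit side closes the cycle
}

def datapath(start, graph, port_lfbs):
    """Walk the graph, stopping when traffic re-enters a port LFB."""
    path, node = [start], graph[start][0]
    while node not in port_lfbs:
        path.append(node)
        node = graph[node][0]
    return path + [node]

print(datapath("port0", graph, port_lfbs={"port0"}))
# ['port0', 'classifier', 'forwarder', 'port0']
```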
> 6.2 Same comment applies
> 6.5 The bullet ". that is maintained at a downstream LFB"
> appears out of context, what is it referring to?
That should be concatenated with the previous bullet.
> 6.8 mentions a "Modifier LFB" that is not explained or
> mentioned elsewhere. 3.2.4 Mentions a Header Modifier LFB
> that is also not mentioned elsewhere - should both of these
> be "Packet Header Rewriter LFB"?
This section needs some loving in my opinion.
> 6.9 mentions packet length metadata (also 6.6 and 6.13) -
> this is not really explained. Since it is associated
> with all packets, it seems like it need not really
> be called out as metadata (perhaps treat it as implicit
> metadata that all LFBs know about, just as all LFBs can
> implicitly see the contents of the packet). An alternative
> would be to say that every packet has special RO metadata
> always associated with it such as length.
I think I will adopt your second suggestion.
> 6.17 Why are encapsulation & decapsulation together in
> one LFB? Seems confusing - isn't encap & decap usually
> separate functions, performed on separate packet streams?
Encap/decap are like port LFBs in the sense that the configuration
for one function affects the other.
> 8.7 Need to address what happens to packets arriving at/
> transiting a FE during reconfiguration. Do they get
> dropped on disconnected inter-LFB links? Start filling
> queues? Something else?
That would be implementation dependent. A question is whether the
behavior should be specified as a FE capability, or even as a LFB
capability.
> 10. A model-specific security issue that should be mentioned
> is the risk of misconfiguring a LFB graph resulting in
> insecure behaviors. Having software on a CE reconfigure
> a LFB graph is a very easy way to shoot yourself in the foot.
Yes, and hopefully an FE implementation would prohibit this.
Expressing the topology reconfiguration capabilities of a FE in
such a way that a CE can always know beforehand whether a particular
topology will be allowed is going to be a tricky problem, and is
probably impossible to solve in the general case (other than enumerating
all possibilities), so the protocol will need to support a FE-CE
error to cover this.
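That try-and-recover interaction can be sketched hypothetically (Python;
the exception, feasibility check, and topology encoding are all invented
stand-ins, not protocol elements): the CE attempts a reconfiguration and
falls back when the FE rejects it.

```python
# Toy model of an FE rejecting an infeasible topology. Illustrative
# only; the real protocol's error machinery is not yet defined.

class TopologyRejected(Exception):
    pass

def fe_apply(topology, supported):
    # Stand-in for the FE's internal feasibility check, which the CE
    # cannot fully predict in advance.
    if topology not in supported:
        raise TopologyRejected(topology)
    return "applied"

supported = {("classifier", "meter", "scheduler")}
try:
    fe_apply(("meter", "classifier", "scheduler"), supported)
except TopologyRejected:
    result = "CE falls back to a known-good topology"
print(result)
```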
> That's all I had. Overall this document is looking very good,
> once the remaining editor's notes are addressed and the WG has
> had a chance to properly review it I think it will be ready to
Steven L. Blake <steven.blake <at> ericsson.com>
Ericsson IP Infrastructure +1 919-472-991