RE: Comments on "draft-ietf-bmwg-firewall-05.txt"
Hickman, Brooks <brooks.hickman <at> spirentcom.com>
2002-07-08 23:18:55 GMT
Thanks very much for your close read and many constructive comments. We have
replied inline below.
> -----Original Message-----
> From: Brian Ford [mailto:brford <at> cisco.com]
> Sent: Monday, July 01, 2002 6:49 PM
> To: brooks.hickman <at> spirentcom.com; dnewman <at> networktest.com;
> saldju.Tadjudin <at> spirentcom.com; tmartin <at> gvnw.com
> Cc: bmwg <at> ietf.org
> Subject: Comments on "draft-ietf-bmwg-firewall-05.txt"
> To: Brooks Hickman, David Newman, Saldju Tadjudin, Terry Martin;
> From: Brian Ford, Consulting Engineer, Cisco Systems
> Regarding: Comments on "draft-ietf-bmwg-firewall-05.txt"
> In section 4 of the draft "Test Setup" you stated:
> > One interface of the firewall is attached to the unprotected
> > network, typically the public network (Internet). The other
> > interface is connected to the protected network, typically the
> > internal LAN.
> You stated "One interface of the firewall is attached to the
> unprotected network, typically the public network (Internet)."
> Given that this draft addresses Firewall performance I would suggest
> that you limit the definition to protected and unprotected networks.
> Attempting to measure performance of Firewalls at the Internet edge
That's out of scope not only for this document but also for this working
group. This draft focuses on measurement of one or at most a few boxes, not
a box at the edge of a huge honking network.
We don't believe modification is needed on this point. The draft doesn't
advocate attaching the firewall to a production network before measuring its
performance.
Totally agree that the terms protected and unprotected networks are apropos.
The phrase "typically the public network" was intended only to show what is
by far the most common configuration.
> devices (some of which you have technically described in the draft)
> are not used limits the usefulness of this draft.
> > Tri-homed configurations employ a third segment called a
> > Demilitarized Zone (DMZ). With tri-homed configurations, servers
> > accessible to the public network are attached to the DMZ. Tri-Homed
> > configurations offer additional security by separating server(s)
> > accessible to the public network from internal hosts.
> I want to point out several problems with this statement.
> OK. This is a nit-picking point. There is no need to refer to a third
> (or additional) Firewall interface as a "DMZ". In fact there is
> nothing "militarized" about a Firewall.
Sure. At the same time, the documentation for 100 percent of tri-homed
firewalls on the market -- including those sold by a San Jose-based
networking company -- refers to a "DMZ."
We can debate whether the label is appropriate or not (one of us finds the
term "layer-4 switch" to be equally idiotic), but it is the commonly used
term. It's also the term already defined in the terminology document, RFC
2647.
> I'd suggest a better choice of wording would be "perimeter network".
> You stated "servers accessible to the public network are attached to
> the DMZ"; when in fact a perimeter network is used to implement an
> often new security policy on an additional network segment. Those
> servers don't have to be servicing the public network and could
> instead be servicing the inside network.
> Further you state "Tri-Homed configurations offer additional security
> by separating server(s) accessible to the public network from
> internal hosts.". I would suggest that the security policy
> implemented on the perimeter interface offers additional security;
> and not the fact that the interface was installed and is in use.
While we don't disagree, is there a specific methodological question here?
> In section 4.2 you discuss "Virtual Clients / Servers":
> You explained that data sources may emulate multiple users and hosts;
> which your methodology refers to as virtual servers and clients. You
> stated that the test report SHOULD indicate the number of virtual
> clients and servers. You stated that "Testers MUST synchronize all
> data sources participating in a test.". Could you elaborate as to how
> the data sources "MUST" be synchronized? What's the reasoning behind
> that "MUST"?
It is impossible to obtain valid measurements of latency and other
delay-related metrics unless the transmitter and receiver are synchronized.
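To make the reasoning concrete, here is a minimal sketch (ours, not text
from the draft; all numbers are illustrative) of how an unsynchronized
receiver clock corrupts a one-way delay measurement:

```python
# One-way latency computed from transmit/receive timestamps. If the two
# clocks are not synchronized, the clock offset is indistinguishable
# from transit delay. All values below are illustrative.

def one_way_latency(tx_timestamp, rx_timestamp):
    return rx_timestamp - tx_timestamp

true_latency = 0.005        # 5 ms of actual transit time
clock_offset = 0.050        # receiver's clock runs 50 ms fast
tx = 100.000                # send time, on the transmitter's clock
rx = tx + true_latency + clock_offset   # receive time, receiver's clock

measured = one_way_latency(tx, rx)      # 0.055 s: offset swamps the metric
```

With synchronized clocks (offset of zero) the measured value collapses to
the true transit time, which is why the draft makes synchronization a MUST.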
> In section 4.3 Test Traffic Requirements you state:
> > For the purposes of benchmarking firewall performance this document
> > references HTTP 1.1 or higher as the application layer entity,
> > although the methodologies may be used as a template for
> > benchmarking with other applications.
> Elsewhere in the document you stated that many different types of
> Firewalls could potentially be under test; yet you only call for HTTP
> to be used to develop a performance metric.
> Another issue with HTTP testing is that it limits the type of rules
> that can be implemented and tested to the HTTP application only.
> Why not include FTP transfers of several fixed sized files? Why limit
> the specification to just HTTP?
There's nothing in the draft that prevents implementers from doing other
protocols. We call for HTTP because it represents a large proportion of IP
traffic (for some definition of large).
Two of us are implementers of test tools that can be used to measure
firewall performance. In our experience getting the most common protocol
defined first is a sensible practice. We're certainly open to adding others
down the road, and again the draft does not discourage this.
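As an aside, the HTTP 1.1 transaction the draft treats as its
application-layer unit is simple enough to sketch. This illustration is
ours, not text from the draft, and the function name is an assumption:

```python
import socket

def http_get(host, port, path="/"):
    """One HTTP/1.1 request/response pair, as a virtual client might
    issue it. Illustrative only."""
    request = (f"GET {path} HTTP/1.1\r\n"
               f"Host: {host}\r\n"
               f"Connection: close\r\n\r\n").encode()
    with socket.create_connection((host, port)) as s:
        s.sendall(request)
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:        # peer closed; response is complete
                break
            chunks.append(data)
    return b"".join(chunks)
```

Any protocol with a similarly well-defined transaction could be slotted
into the methodology in the same way.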
> In section 4.5 Multiple Client/Server Testing:
> You stated "Each virtual client MUST initiate connections in a
> round-robin fashion.". But wouldn't this "round robin" behavior
> create a stream of traffic? Wouldn't this type of testing better
> emulate a Firewall behind a load balancing device (when in fact the
> majority of Firewalls are installed without a load balancing front
> end)? I'd suggest that the type of test described doesn't adequately
> reflect real world conditions and that other methods should be
> considered.
> Also see RFC 2544 section 21 regarding Bursty Traffic as an
> alternative to steady state (stream) traffic.
RFC 2544 is not applicable in this case since it applies only to traffic at
or below the network layer. TCP is by nature both bursty and
stream-oriented.
However, you are correct that we describe a methodology in which the
"connection requests" from each client->Server(s) are offered in an evenly
distributed manner. The intent here is to define a methodology that works
independent of the number of clients and/or servers specified.
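A sketch of the even distribution we have in mind (our illustration, not
draft text; the names are ours):

```python
from itertools import cycle

def round_robin_targets(servers, n_connections):
    """Assign each successive connection attempt to the next server in
    turn, so attempts are evenly distributed regardless of how many
    servers are configured."""
    next_server = cycle(servers)
    return [next(next_server) for _ in range(n_connections)]
```

For example, six connection attempts over three servers give each server
exactly two attempts, whatever the server count happens to be.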
The term "real world" is heard a lot in the testing arena. Defining what
that means, in terms of an algorithm, is usually difficult or impossible to
do.
> In section 4.7 Rule Sets:
> I thought you did a good job of defining rule set functionality. I
> was surprised that you didn't come out strongly for or against zero
> or single rule set tests. I think they are irrelevant and turn
> otherwise interesting Firewall performance studies into a discussion
> of forwarding. But in some Firewalls it is worthwhile to test the
> default security policy.
In our opinion, the forwarding rate for a firewall's default security policy
SHOULD be 0 pps. While there is nothing like a consensus on what constitutes
"default security policy," there is agreement that firewalls (and security
devices in general) should not forward any traffic unless the user
explicitly allows it.
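For illustration only (this is our example, not draft text; the interface
names and permitted service are assumptions), a default-deny posture on a
Linux packet filter looks like:

```shell
# Default policy: forward nothing unless a rule explicitly allows it.
iptables -P FORWARD DROP

# Then permit only explicitly allowed traffic, e.g. outbound HTTP from
# the protected interface (eth1) to the unprotected one (eth0):
iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 80 -j ACCEPT
```

With this policy in place, the forwarding rate for any traffic not
explicitly permitted is 0 pps, which is the behavior we describe above.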
> I would like to see you go further in defining how many rules should
> be included in any test.
We really wrestled with benchmarking involving rule sets. Among the major
issues:
--different rules may have different performance costs, even on the same DUT
--rule set order may have a different performance cost on different DUTs
--the same rule may have different performance costs on different DUTs
Of these, we only seek to measure the last. There's too much wiggle room in
the first two for us to be able to declare general test cases that are
meaningful for all DUTs.
> I'd also like to see you go further in suggesting locations where
> basic rule sets could be discovered. See RFC 2544 section 11.4.1 on
> Filter Addresses. I suggest that something like that is needed in
> this RFC. For example; somewhere in the DUT / SUT there should
> probably be an RFC 1918 Private Address Filter (you did earlier make
> the case for Internet connectivity). There are plenty of
> recommendations about default rule sets at the SANS and
> SecurityFocus sites; as well as almost all Firewall vendors' sites.
Thanks; some of us are big fans of those sites. The issue of location of
rule sets is very router-like (i.e., do we apply rules on ingress, egress,
or both). While there may be merit in such measurements, the general rule
with firewalls is that they apply filtering on one interface at a time.
> In section 4.8 in your discussion of Authentication you point out
> that "Authentication is usually performed by devices external to the
> firewall itself, such as an authentication server(s) and may add to
> the latency of the system.". But you did not go so far as to require
> that an external authentication source be used. I think you should
> require the authentication database (at least) to be external to the
> Firewall SUT / DUT.
Why? Is authentication now an essential firewall function?
While there's certainly merit in such measurements, they are above and
beyond the fundamental firewall function of access control.
> Not included in Section 4 was any discussion of logging. Reporting is
> discussed in section 5 but Syslog is not required. At a minimum
> Syslog MUST be supported and enabled on a DUT / SUT. The Syslog
> server MUST be a separate device (so that Syslog is not recorded on
> the candidate Firewall). Berkeley Standard Syslog (RFC 3164) should
> be used.
External logging servers are an interesting suggestion, but unfortunately we
doubt such servers are deployed with even a significant minority of
firewalls.
We are careful in this document to avoid what are essentially security
assessments. The use of an external syslog server is one such security
check; it's a great idea from a security standpoint, but we are explicitly
not assessing device security. (If we were, we'd not only follow your
recommendation but also require that log data be transmitted with strong
encryption and that the logging server use write-once media).
Also, while logging isn't a MUST requirement, its use is implied ("If the
DUT/SUT has logging capability, the log SHOULD be checked...")
> The log file that is called for in section 5.1.2 would be of little
> use in an operational Firewall. I think that a test tool used to
> create the test environment might better create the type of log file
> discussed here.
> In section 5 the end condition for the tests seems to be anything
> more than zero packet loss. I'd suggest that zero packet loss is one
> way of ending the test but is really only realistic for higher end
> appliance Firewalls. You may want to suggest that some defined amount
> of packet loss that does not exceed some number (say .25, .5, or 1%)
> should be the end condition.
"Allowable packet loss" has already been the topic of much discussion over
many years. There is no consensus on how much loss is allowable; the only
number everyone agrees on is 0.
> Also of interest is the DUT / SUT "overloaded behavior" as defined by
> Bradner in RFC 1242. Can the DUT / SUT recover from an overload
> event?
> Several times in section 5 it is stated:
> > Between each iteration, it is RECOMMENDED that the tester issue a
> > TCP RST referencing all connections attempted for the previous
> > iteration, regardless of whether or not the connection attempt was
> > successful. The tester will wait for age time before continuing to
> > the next iteration.
> In a network each of the individual connections would be terminated
> with a RST. Why this RST for all connections? Shouldn't there be some
> section where each of the connections gets an individual RST?
Each connection must be sent an RST since a TCP RST can only apply to one
connection at a time. Will reword the paragraph as follows:
Between each iteration, it is RECOMMENDED that the tester issue a TCP RST
for each connection attempted for the previous iteration, regardless of
whether or not the connection attempt was successful. The tester will wait
for age time before continuing to the next iteration.
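As an implementation aside (ours, not draft text): on most TCP stacks a
tester can force a RST, rather than the normal FIN exchange, for an
individual connection by closing the socket with SO_LINGER set to zero:

```python
import socket
import struct

def close_with_rst(sock):
    """Close a TCP socket with a RST instead of the normal FIN
    handshake. SO_LINGER with l_onoff=1, l_linger=0 tells the stack
    to reset the connection when close() is called."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    sock.close()
```

Calling this once per emulated connection gives exactly the per-connection
RST behavior the reworded paragraph describes.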
> Might you want to test whether and how the Firewall deals
> with connections
> that don't close?
Can you clarify? In a majority of cases, the clients and servers emulated by
the test instrument(s) will own both sides of the TCP connection and can
therefore determine if the connection closes properly.
Are you talking about the condition in which the test instrument properly
closes a connection, but the connection still exists in the DUT/SUT's state
table?
> Shouldn't the Firewall apply a connection timer? Shouldn't that be
> tested?
Testing the connection timer on a DUT/SUT would basically be a functional
test, not a performance metric.
> Check section 5.4.3 as there seems to be an incomplete section of
> text (typo?) repeated with that heading number.
Thanks. Good catch.
> In section 5.11 Latency you should note that Bradner states in RFC
> 1242 under his discussion of testing latency:
> > Measurements should be taken for a spectrum of frame sizes without
> > changing the device setup.
> and require that the device setup not be changed during Firewall
> testing.
Thanks, this is a useful suggestion for network-layer measurements.
It's less useful for application-layer measurements, since TCP may (will?)
determine what frame sizes we'll be using.
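To the point about TCP choosing frame sizes: at the application layer a
tester can only observe, not set, the segment size the stack negotiated. A
sketch (ours, not draft text; TCP_MAXSEG is a Linux/BSD socket option):

```python
import socket

def negotiated_mss(connected_sock):
    """Return the TCP maximum segment size in effect on an open
    connection; the tester records this rather than choosing it."""
    return connected_sock.getsockopt(socket.IPPROTO_TCP,
                                     socket.TCP_MAXSEG)
```

So for application-layer tests the report can state the observed MSS, but a
spectrum of fixed frame sizes only makes sense for network-layer tests.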
Thanks again. We very much appreciate your comments.
> Liberty for All,
> Brian Ford
> Consulting Engineer
> Corporate Consulting Engineering, Office of the Chief Technology Officer
> Cisco Systems, Inc.
> e-mail: brford <at> cisco.com