Bill Davidsen | 2 Feb 18:14 1998

Re: Usefor Second System Syndrome

Chris Newman <Chris.Newman <at> innosoft.com> writes:

> In reading the recent proposals on this list I get the definite impression
> that this group is quickly moving towards second system syndrome.  For
> those who don't know, second system syndrome is what happens when you
> start haphazardly adding every feature you can imagine that was missing
> from the first version.  The result is invariably a system so complicated
> that it's slow, huge and riddled with bugs.

> Finally, I'm concerned that ramifications to the rest of the Internet are
> being ignored by this group.  Anything done in Usefor *will* leak into
> email, like it or not.  As far as I'm concerned *any* new difference
> between RFC822/MIME and the Usenet format should require very strong
> justification.

I normally avoid "me too" notes like the plague, and I have given up
posting "overkill" and "creeping featurism" notes, because everyone is
ignoring them.

Adding tons of marginally useful features will probably result in the
standard being rejected and/or ignored. Implementors will simply stay
with the old ways, because the cost to benefit ratio of some features is
way too high.

In that group I place the whole signature mess (as an example), which
not only beats a dead horse but drags it through the streets. All to
solve a problem which could be addressed in the real world by saying
that the *last* occurrence of the standard, commonly used delimiter
sequence would be the start of the sig.
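
Concretely, that rule costs almost nothing to implement. A rough sketch
in C, assuming the conventional "-- " delimiter alone on a line and
Unix line endings (the function name is mine, purely illustrative):

    #include <string.h>

    /* Return a pointer to the first character after the LAST "-- "
     * delimiter line, or NULL if the body has no delimiter.  'body'
     * is the NUL-terminated article body with Unix line endings. */
    const char *find_sig(const char *body)
    {
        const char *sig = NULL;
        const char *p = body;

        /* Handle a delimiter on the very first line of the body. */
        if (strncmp(p, "-- \n", 4) == 0)
            sig = p + 4;

        while ((p = strstr(p, "\n-- \n")) != NULL) {
            p += 5;      /* step past the delimiter line          */
            sig = p;     /* remember the most recent (last) match */
        }
        return sig;
    }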


Chris Newman | 2 Feb 18:38 1998

Re: What signing and certificate systems to use

On Fri, 30 Jan 1998, Brad Templeton wrote:
> If we adopt multipart/signed, it means that signed groups will be
> entirely multipart, with their important headers duplicated at the start
> of the message.  It means transport agents will be required to unpack
> MIME multiparts to get at those headers to relay articles.
>
> I'm not actually against that, but it's a pretty strong requirement, and
> while people here have been in favour of defining a way to sign articles,
> I have seen opposition to this much of a change.  Perhaps I am wrong.

With all due respect, the code to unpack a MIME multipart is on the order
of 100 lines, while the code to verify a signature is on the order of 5000
lines (not to mention that the bignum multiply/add usually has to be
hand-coded in assembly).  I'd consider the requirement to use MIME
multiparts negligible compared to the requirement to verify the signature.
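
For scale, the core of that unpacking is roughly the loop below -- a
sketch only, assuming Unix line endings, no nesting into sub-multiparts,
and a boundary of at most 70 characters as MIME requires; the names are
illustrative:

    #include <stdio.h>
    #include <string.h>

    /* Walk a multipart body, calling 'handle' once per body part.
     * 'body' is the NUL-terminated entity body; 'boundary' is the
     * boundary= parameter from Content-Type.  Preamble and epilogue
     * are skipped.  A real parser would also trim the newline that
     * formally belongs to the following delimiter line. */
    static void walk_parts(const char *body, const char *boundary,
                           void (*handle)(const char *part, size_t len))
    {
        char delim[80];                 /* "--" + max 70-char boundary */
        size_t dlen;
        const char *p, *start = NULL;

        dlen = (size_t)snprintf(delim, sizeof delim, "--%s", boundary);

        for (p = body; (p = strstr(p, delim)) != NULL; ) {
            const char *lineend = p + dlen;

            if (p != body && p[-1] != '\n') {   /* not at line start */
                p += dlen;
                continue;
            }
            if (start != NULL)
                handle(start, (size_t)(p - start)); /* previous part */
            if (lineend[0] == '-' && lineend[1] == '-')
                return;                 /* "--boundary--" closes it  */
            start = strchr(lineend, '\n');  /* part begins next line */
            if (start == NULL)
                return;
            p = ++start;
        }
    }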

And the canonicalization rules for multipart/signed are much simpler and
don't involve an incompatible change to the standards (forbidding header
re-ordering and re-folding).

> That would be useful, though there are already some freely available libraries
> out there.

Yeah, I'd probably start with the SSLeay libraries as they have an
appropriate copyright.  I've already got a version of the SSLeay DSA code
with the ASN.1 contamination excised.

>  All that needs to be done is to call them and then convert
> the resulting signature to whatever form we wish to represent it.
> (I am recommending two 27 character strings of mime base64)
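
The 27-character figure is just base64 arithmetic: a 160-bit DSA value
is 20 octets, and ceil(20 * 8 / 6) = 27 characters unpadded. A sketch
of the unpadded encoding, assuming 20-octet r and s values (the
function name is illustrative):

    #include <stddef.h>

    static const char b64[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
        "0123456789+/";

    /* Unpadded base64: for a 20-octet (160-bit) DSA value this
     * writes exactly 27 characters plus a NUL into 'out'. */
    static void encode_b64(const unsigned char *in, size_t len,
                           char *out)
    {
        unsigned long acc = 0;
        size_t i, o = 0;
        int bits = 0;

        for (i = 0; i < len; i++) {
            acc = (acc << 8) | in[i];
            bits += 8;
            while (bits >= 6) {
                bits -= 6;
                out[o++] = b64[(acc >> bits) & 0x3f];
            }
        }
        if (bits > 0)       /* 2 or 4 leftover bits */
            out[o++] = b64[(acc << (6 - bits)) & 0x3f];
        out[o] = '\0';
    }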

Bill Davidsen | 2 Feb 18:47 1998

Re: Usefor Second System Syndrome

Brad Templeton <brad <at> clari.net> wrote:

> It's not that I don't agree that it is more work to allow arbitrary
> data lengths, at least in C.  (It's easy in perl and some other languages).
> 
> But what do you suggest?  History is full of examples of standards
> setting fixed buffer sizes only to discover just a few years later,
> with Moore's law in effect, that they were wrong, and can't fix it.

I'm really tired of writing software to pander to assholes. The function
of the From/Sender field is to identify the originator. It is *not*
essential that it include realname, shoe size, and sexual preference. If
someone wants to use a 124-character email address (I didn't make that
up) I see no reason why I should provide bytes to carry 10k of useless
info, which will be used by spammers for ads anyway.

I see no reason why the Subject line should be infinitely long, since
the subject is intended to describe the post, not be a one-line
substitute for it. That goes for the Path line as well: I really don't
care where a post went if it took 50 hops; I won't feed it to anyone,
and none of my feeds wants it. Note that limiting the number of hops,
and even the bytes in Path, doesn't cause the world to come to an end
either.

How about Message-ID? Do we really need infinite length, given that the
total number of sites and users on the planet will fit nicely in some
reasonable limit (I'm arguing that there should *be* a limit, not what
its size should be as yet)? Given that this directly affects the
References length, why does it need to be overly long?

Consider this said for other headers as well, I am a *lot* more

Bill Davidsen | 2 Feb 19:09 1998

Re: Usefor Second System Syndrome

Brad Templeton <brad <at> clari.net> wrote:

> There are limits in all code, but the ideal limits are "what the hardware
> can do" and that grows as the hardware grows -- which it does.

I'm sorry, that's totally wrong. Programs should use only the resources
needed to provide functionality, not all the resources available. Your
approach is like Netscape allocating all the colors in the X colormap
"in case it needs them" rather than "as needed." If there's a need for a
header to be large (other than to cater to the ignorant), fine, but to
make things large and complex because we can is silly, and the new RFC
will be so infrequently used as to be irrelevant.

> > Further, much of how well NNTP/Usenet currently works is based on the
> > small size of articles.  If half of the headers I've seen proposed on
> 
> The bulk of USENET is now images, whether we like it or not.
> 
> And you may cringe at this thought, but I think that USENET is actually
> the right technology to distribute very large objects, like video.
> Anything that plays faster than your network capacity needs to be
> pre-distributed, or receipt must be delayed upon fetch.  If you have
> a video you *know* that people at the site want to watch, the right way
> to distribute it is usenet style.

What in the world are you thinking? Distributing large objects should be
done on demand, not in a way that slows news to a crawl. If it were not
for careful administration, binaries would consume all available
resources now.
> > 
> > I can already see people trying DOS attacks by generating messages with

Bill Davidsen | 2 Feb 19:21 1998

Re: Usefor Second System Syndrome

Simon Lyall <simon <at> darkmere.gen.nz> wrote:

> A small thing about limits. These are intended to bring us into line with
> the mail standards which do not have limits for headers and bodies. The
> same with the comments (which I personally find pointless); they are to
> bring us into line with mail.

I never thought I'd agree with my mother about this, but "if all your
friends jumped off a cliff would you do it too?" If mail does something
which is bad for usenet, should we blindly follow?

> The intention is that as far as possible the standards should be the same
> for both types of messages, obviously we are taking different tracks in
> places (especially character set) but the closer the better.

I agree, it's just that I have a very different idea of "as possible"
than you. I can't see having a standard require something 90% of the
world won't deliver, and infinite anything falls in that category.

> The References header is really the only one I will consider for cutting; I
> suspect I have gone a little far bumping the number to 100 so I'll revise
> it downwards again (possibly to 63 or 31) or just make it a "should not
> be trimmed" as has been suggested.

There should be a minimum size and rules about how to trim, or everyone
will do it their own way.
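
For example, one plausible rule -- my sketch, not anything from the
draft -- is to always keep the first message-ID (the thread root) plus
the most recent ones, dropping from the middle:

    #include <stdio.h>

    #define MAX_REFS 21     /* illustrative cap, not a draft number */

    /* Print a References header trimmed to at most MAX_REFS IDs:
     * keep the first message-ID (the thread root) and the most
     * recent MAX_REFS - 1, dropping IDs from the middle.  'ids' is
     * an array of n >= 1 NUL-terminated message-IDs. */
    static void print_trimmed_refs(char **ids, int n)
    {
        int i, first = (n <= MAX_REFS) ? 1 : n - (MAX_REFS - 1);

        printf("References: %s", ids[0]);
        for (i = first; i < n; i++)
            printf(" %s", ids[i]);
        printf("\n");
    }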

> As for the bogey of 500K Subject headers, that is something that
> already exists; email allows them (as far as I can tell, I'm sure someone
> will say if I missed a limit somewhere) and there is no obvious reason

Bill Davidsen | 2 Feb 19:28 1998

Re: Usefor Second System Syndrome

sommar-usefor <at> algonet.se (Erland Sommarskog) wrote:

> Myself, I also get worried at times. A new Usenet RFC is long overdue,
> and I would prefer to have an RFC out fairly soon with just the most
> important things included. Then we could drool over all the goodies.

Most correct.

> Then of course, there are dissenting opinions on what is the most
> important. For me, the single most important issue is to make 8-bit
> characters first-class citizens both in body and headers, and to have
> RFC2047 deprecated from news, as this is a big source of annoyance in 
> Swedish newsgroups.

I think it would be reasonable to limit the names of headers to 7-bit,
just so people not using 8-bit readers will know which headers they're
unable to read. That shouldn't be too confining, since not even Brad has
proposed any header names which require extended character sets.
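
A check like that costs implementors almost nothing -- a sketch,
assuming the RFC 822 field-name rule (printable US-ASCII, 33-126, less
the colon):

    /* Return nonzero if 'name' is a legal 7-bit header name:
     * printable US-ASCII (33-126) excluding ':', as in the RFC 822
     * field-name rule. */
    static int valid_header_name(const char *name)
    {
        const unsigned char *p = (const unsigned char *)name;

        if (*p == '\0')
            return 0;       /* empty names are not legal */
        for (; *p != '\0'; p++)
            if (*p < 33 || *p > 126 || *p == ':')
                return 0;
        return 1;
    }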

> Well, in the case of 8-bit chars there is a very strong justification.
> The RFC2047 mess has proven unworkable in news. (Not that it works that
> much better in mail for that matter.)

I have no problem with that; there's obviously a need for 8-bit for some
users, and typically the data would be in groups transacted mostly in
the language(s) using the extended characters.
--
   -bill davidsen (davidsen <at> prodigy.com) Prodigy Internet Server Operations
"The secret to procrastination is to put things off until the
 last possible moment - but no longer"  -me

Brad Templeton | 2 Feb 20:25 1998

Re: Usefor Second System Syndrome

I agree that last delim is better than first delim, though neither is
reliable.

However, just what are the things that you claim would cause the
standard to go unimplemented?

Of the new things I have proposed:

	Sig-Len:	Oh, that's soooo hard to implement.
	Signed, etc.:	Verifying is not mandatory.  Signing only becomes
			required once everybody starts demanding it.
	Supersedes:	Having multiple args has been in Henry's doc for
			years.  It's pretty trivial to implement
	Replaces:	Not hard to implement, but can be treated like
			supersedes if the software wants
	Named articles	A system could choose not to implement this if it
			desired.
			Though frankly, it's very simple to implement in
			most news systems -- store by name in the
			directory instead of by number.  Sooo hard.
	Verified Path	Some minor work
	Topics		Real work for a newsreader to implement, but again,
			not required.

What you may be missing, at least if it's me you're accusing, is that we're
not laying out hard things you MUST do.   The few MUST-dos are not that
complex.

We're laying out two things:
	a) Here are some useful things that can be done that are good ideas
	b) If they are to be done, here is how to do them.

Brad Templeton | 2 Feb 20:40 1998

Re: What signing and certificate systems to use

On Mon, Feb 02, 1998 at 09:38:06AM -0800, Chris Newman wrote:
> On Fri, 30 Jan 1998, Brad Templeton wrote:
> With all due respect, the code to unpack a MIME multipart is on the order
> of 100 lines, while the code to verify a signature is on the order of 5000

You misunderstand.  MIME unpacking isn't that hard (full MIME, which is
recursive, is a bit more work), but the problem is that the use of MIME
signed articles is a 100% hard cutover, for newsgroups that want
signatures, to a pretty much new format.

Yes, old newsreaders could still read the articles, after getting past
the duplicated header, but frankly it will be a big pain.   Ditto when
they follow up, etc.

Since a signed newsgroup has to be 100% signed, this says, "If you want
to have a signed newsgroup, then everybody has to have a mime aware
newsreader."

What that means is that you will not, for a very long time, get an
existing newsgroup to go signed, because there will, for a long time, be
a significant portion of the population that will fight it.  And even
those with new software will defend those who resist.   I have seen this
happen.

For posting, there is a work-around.  Those with old software mail their
messages (signed, or verified by challenge-response) to a gateway that
signs for them.  Mark the group moderated and you are ready.

But there is no work-around for reading.

So we get some groups where signing is 100% required and some where it is

Brad Templeton | 2 Feb 20:48 1998

Re: Usefor Second System Syndrome

On Mon, Feb 02, 1998 at 01:09:50PM -0500, Bill Davidsen wrote:
> Brad Templeton <brad <at> clari.net> wrote:
> What in the world are you thinking? Distributing large objects should be
> done on demand, not to slow news to a crawl. If it were not for careful
> administration binaries would consume all available resources now.

I disagree entirely.   If you want to distribute 5MB hi-res videos to a
group of sites, and you have a strong expectation that a user at each
site (probably more than one) will play the video, and you have 28k
modems, do you want to:

a) Have the first user pull down the video when they request it, sucking
up all the bandwidth for 20 minutes, or more than 20 minutes if they are
not to use all the bandwidth, or

b) Feed the video in advance, on a "when bandwidth is available" basis
(once you have a QoS-based protocol) so that this user and all others
can play the video the instant they want to, with no delays.

(The middle ground, streaming, helps the situation a bit, though in this
case they still have to wait several minutes before they can start
playing the video, and they suck up all the bandwidth.  If they can play
it streaming at all.)

Choice (a) has the one extra efficiency that if nobody at the site ever
requests the item, it is never sent.  That's of value if you expect
intermittent readership.

Choice (b), the usenet way, has all the other advantages.  In fact, the whole
point of usenet is to deliver stuff in advance for instant local response

Chris Newman | 2 Feb 20:50 1998

Header/Message Limits

The current limit for messages in email is specified in RFC 1123, section
5.3.8.  I recommend reading it.

Usenet and email currently have no specified limits on headers, and they
seem to work just fine.  Why fix something which isn't broken?

User interfaces will do what is necessary to present headers to the user,
and if they have to truncate them, they will.

Servers which wish to process arbitrary-length headers efficiently can
simply mmap() the articles (or use a fixed buffer at the max message
size) and do in-place processing with minimal calls to allocation 
functions.
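
A sketch of that approach, assuming POSIX mmap() and ignoring folded
header lines for brevity (the function name is illustrative):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map an article and walk its header lines in place, with no
     * per-line allocation.  Stops at the blank line that separates
     * headers from body.  Folded (continuation) lines and most
     * error handling are omitted for brevity. */
    int scan_headers(const char *path)
    {
        struct stat st;
        char *art, *line, *end;
        int fd = open(path, O_RDONLY);

        if (fd < 0 || fstat(fd, &st) < 0)
            return -1;
        art = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE,
                   fd, 0);
        if (art == MAP_FAILED) {
            close(fd);
            return -1;
        }

        end = art + st.st_size;
        for (line = art; line < end; ) {
            char *nl = memchr(line, '\n', (size_t)(end - line));
            size_t len = nl ? (size_t)(nl - line)
                            : (size_t)(end - line);

            if (len == 0)
                break;          /* blank line: headers are done */

            /* line..line+len is one header, usable in place, e.g.
             * strncasecmp(line, "Message-ID:", 11) == 0 ...     */
            printf("%.*s\n", (int)len, line);

            line = nl ? nl + 1 : end;
        }

        munmap(art, (size_t)st.st_size);
        close(fd);
        return 0;
    }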

If you do add length limits, find the length limits in deployed software
and document them with the same sort of advice as in the above section.

But please don't add length limits which are smaller than those which
function in deployed software.

		- Chris

