Loyer, Jeff | 1 Jul 01:32 2006

Re: Trace Spacing Rule

A rule of thumb often listed for minimum spacing between high speed
signals (NOT spacing between 2 halves of a differential pair) is 3xh for
stripline and 5xh for microstrip.  This is the edge-to-edge spacing, or
"air gap" (not center-to-center pitch), and h is the dielectric
thickness between the trace and nearest reference plane. This represents
a commonly used compromise between acceptable routing density and
crosstalk, and applies for both single-ended and differential signals.
Typically, for 5 mil traces, h is about 5 or 6 mils for stripline and 3
or 4 mils for microstrip, so your spacing is about 15-20 mils in both
cases.
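The rule of thumb above is easy to turn into a back-of-envelope calculator. A minimal sketch, restating the 3x/5x multipliers from the text (these are guideline numbers only, no substitute for a crosstalk simulation):

```python
# Back-of-envelope minimum edge-to-edge ("air gap") spacing from the
# rule of thumb above: 3x dielectric height for stripline, 5x for
# microstrip (and ~5x for dual stripline, per the later paragraph).

def min_spacing_mils(h_mils: float, topology: str) -> float:
    """Minimum air-gap spacing in mils, given dielectric height h
    (trace to nearest reference plane) in mils."""
    multiplier = {"stripline": 3.0, "microstrip": 5.0,
                  "dual_stripline": 5.0}[topology]
    return multiplier * h_mils

# Typical numbers from the text: h ~5 mils stripline, ~4 mils microstrip
print(min_spacing_mils(5.0, "stripline"))   # 15.0
print(min_spacing_mils(4.0, "microstrip"))  # 20.0
```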

Folks used to quote the minimum separation as a function of the trace
width, but this was done by making assumptions about the corresponding
dielectric thickness for 50 ohms.  I think it's better to talk about
the separation as a function of the dielectric thickness; the
width-based convention is going out of vogue.

At these separations, NEXT is still higher for microstrip than stripline
(and FEXT is 0 for single-ended stripline).

Using 3xh for dual stripline isn't a good idea - it will have more NEXT
than microstrip at 5xh (and about the same FEXT for differential
signals).  5xh is a better number.

Similar guidelines should also be applied for adjacent legs of
"serpentines".

Once you establish this number, you should then run your simulations
with aggressors placed this far from your victim.  If the simulations
fail due to crosstalk, you can require more separation or less coupled
(Continue reading)

Chris Cheng | 1 Jul 06:15 2006

Re: Fibre channel interconnect margins

A while ago I started a long thread about "do you really ship a product at a BER of 10e-xx?"
You seem to imply that 10e-12 will be the benchmark for acceptance.
Does Cisco really ship a product that takes an error every few days as acceptable?

________________________________

From: si-list-bounce@... on behalf of Mcgrath, Christopher
Sent: Fri 6/30/2006 1:03 PM
To: si-list@...
Subject: [SI-LIST] Re: Fibre channel interconnect margins

Some stuff that I have done in the past to stress test FC links:
1. Max out the cable length defined by the FC-PI spec.
2. Put the product under test into a thermal chamber while varying all
voltages associated with the FC link (ASIC, SERDES, PHY, etc.) across
all corners.  (i.e. 2 voltages across hot/cold corners = 8 test cases)
3. Use random data and not just the idle characters on the link.
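The corner matrix in step 2 can be sketched as a simple enumeration; the rail names below are hypothetical placeholders, but two rails at min/max crossed with hot/cold reproduces the 8 test cases mentioned:

```python
# Enumerate the voltage/temperature corner matrix from step 2.
# Rail names are illustrative placeholders, not from the original post.
from itertools import product

rails = ["ASIC_core", "SERDES"]   # two supplies associated with the FC link
levels = ["min", "max"]
temps = ["hot", "cold"]

# One test case per (temperature, rail-level combination)
cases = [(t, dict(zip(rails, combo)))
         for t in temps
         for combo in product(levels, repeat=len(rails))]
print(len(cases))  # 8
```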

Using the BER as the benchmark for acceptance (something like 10e-12),
these three things were the things we did to beat the hell out of links
before officially qualifying the physical link.

Our philosophy was to not use things like pre-emphasis or techniques
like that to stress the link but to tune the link for optimal
performance and reliability (best BER).  Once we established that the
link, after our standard tests (#1-3 in my list), passed
interoperability standards with good margin (>2 orders of magnitude of
BER), we elected not to mess around with emphasis or amplitude.  The
only thing that we had to tune was the RX termination in the ASIC to
best match the board trace impedance, but this tuning was a separate
(Continue reading)

Alan.Hiltonnickel | 1 Jul 06:48 2006

Re: Fibre channel interconnect margins

Hey Chris,
When you started that thread I didn't answer, since it seemed pretty 
obvious that "errors are bad", and I didn't know better.

In fact, I think that companies DO ship products that toss a random 
error approximately every 10e-xx or so. Why? Because the statistical 
theory behind those errors is that random/Gaussian noise is, by 
definition, unbounded - errors are a fact of life, even if the error 
rate is very low. Eventually you will have an edge that is outside the
jitter spec. A single unrepeatable error out of billions of bits simply
has to be expected.
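The "unbounded Gaussian tail" argument can be made concrete with the textbook Gaussian-noise relation BER = 0.5*erfc(Q/sqrt(2)), where Q is the signal-to-rms-noise ratio at the sampler. A minimal sketch (standard formula, not from the original post):

```python
# BER of a binary link limited by Gaussian noise: the tail never
# reaches zero, so some error rate always remains, however small.
import math

def ber_from_q(q: float) -> float:
    """BER = 0.5 * erfc(Q / sqrt(2)) for signal-to-rms-noise ratio Q."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

for q in (6, 7, 8):
    print(q, ber_from_q(q))
# Q = 7 gives roughly 1.3e-12: about one error per 10^12 bits, i.e.
# a handful of errors per hour at multi-Gb/s line rates.
```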

What matters is that the system (specifically the higher layers of the 
stack) respond to these random errors in such a way that they are not 
necessarily catastrophic. Most serial protocols will simply resend the 
packet. If it's truly a random error, the next transmission (or the
third) will be sent correctly. As well, many protocols also have error
detection and correction built in, and thus can recover in that fashion.

So sure, you should expect an error every day or so. Your system must 
simply be able to handle that eventuality. What concerns me is that 
these protocols are capable of masking serious bit error problems, which 
don't become apparent until someone notices the system is really 
lagging, or their video is starting to stutter.

Keep in mind that we're talking about random errors. If you get an error 
that happens every time you send a particular packet, you have a faulty 
product. Repeating that pattern will increase the bit error rate above 
the spec, and allow you to diagnose and fix the problem.

(Continue reading)

Chris Cheng | 1 Jul 07:09 2006

Re: Fibre channel interconnect margins

I don't know what kind of FCAL chip you are dealing with. Every one I have to work with has CRC
detection and flags. And you can bet every CRC error they detect will be flagged and reported back. If it
takes an error every few days, I will get phone calls.

Alan.Hiltonnickel | 1 Jul 07:57 2006

Re: Fibre channel interconnect margins

You might want to re-check that. Yes, we have the same CRC checks and 
flags. But after error rates get to the "expected" or "acceptable" 
levels, most folks turn off the error reporting and let the software and 
hardware do what it's designed to do, because the system is tolerant of 
errors. That's the purpose of CRCs - to make systems more robust, not to 
make the customers help you debug your systems.
Otherwise, it sounds like you're telling us that you have invented a way 
to design equipment where randomness does not happen. Hope you got a 
patent on that! ;-)

Alan


Chris Cheng | 1 Jul 09:01 2006

Re: Fibre channel interconnect margins

I know exactly what those "expected" and "acceptable" levels you are talking about are. And I have adjusted
and monitored them at both production and prototype level. It is either humming along or falling on its
face, and nothing in between. If you tell me you have indeed logged those errors and measured a 10e-12 BER,
my hat's off to you. I have been puzzled by these 10e-xx claims for a long time and am indeed very
interested in what others do.

------------------------------------------------------------------
To unsubscribe from si-list:
si-list-request@... with 'unsubscribe' in the Subject field

or to administer your membership from a web page, go to:
http://www.freelists.org/webpage/si-list

For help:
si-list-request@... with 'help' in the Subject field

Kevin K | 1 Jul 16:34 2006

SDRAM Routing Topology

Dear Experts,
I'm working on a design that uses the PPC405EP (Power
PC).  The memory subsystem consists of four SDR
SDRAMs.  Which routing topology should I use?
1) Daisy-chain topology
2) T-topology
3) Either topology as long as the length requirements
are met.

I have a reference design board that uses both
daisy-chain topology and T-topology.

Thank you,
Kevin


Chris Cheng | 1 Jul 21:25 2006

Re: Fibre channel interconnect margins

My apologies to Chris and Alan. My math is horrible. At 2+ Gb/s, a BER of 10e-12 means errors are hardly a
matter of days but of hours or even minutes. With a typical dual-loop JBOD with a few hundred disks, we are
talking about something you can see VERY often. Are you still saying you are shipping product like that?
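The arithmetic behind this correction can be sketched as follows; the line rate and link count are illustrative assumptions, not figures from the original post:

```python
# Mean time between errors = 1 / (BER * bit_rate). At Fibre Channel
# line rates, a 10e-12 BER is minutes per link, not days.

def seconds_between_errors(ber: float, bit_rate_bps: float) -> float:
    return 1.0 / (ber * bit_rate_bps)

per_link = seconds_between_errors(1e-12, 2.125e9)  # one 2.125 Gb/s link
print(per_link / 60)    # ~7.8 minutes between errors on a single link
print(per_link / 200)   # ~2.4 s between errors somewhere in ~200 links
```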

steve weir | 2 Jul 01:27 2006

Re: SDRAM Routing Topology

Kevin, it doesn't sound like you have a simulator.  A T topology will
get into trouble pretty quickly; I advise against it.  A simulator
would tell you if you can get away with what you would like.

Steve

Hal Murray | 2 Jul 01:43 2006

Re: Fibre channel interconnect margins


> Because the statistical theory behind those errors is that random/
> Gaussian noise is, by definition, unbounded - errors are a fact of
> life, even if the error rate is very low. Eventually you will have an
> edge that is outside the jitter spec. A single unrepeatable error out
> of billions of bits simply has to be expected.

By that line of reasoning, your CPU would make occasional errors.

I think the Gaussian noise assumption is reasonable on fiber systems.  If the 
error rate is low, it is exponential in signal/noise ratio.

On copper systems, errors are often associated with crosstalk and/or 
reflections.  Those are deterministic, not Gaussian.  Even EMI and power 
supply noise are probably more deterministic than Gaussian.

A well designed CPU is essentially error free because it's reasonable to 
engineer a system with a signal/noise ratio that turns into less than one 
error per age of the universe.
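That claim can be sanity-checked with the same Gaussian-tail formula, BER = 0.5*erfc(Q/sqrt(2)); the clock rate and age-of-universe figures below are rough illustrative assumptions:

```python
# Expected error count = BER(Q) * bit_rate * time. Find a Q (signal/
# noise ratio) at which a 3 GHz "link" stays below one expected error
# over the age of the universe.
import math

AGE_OF_UNIVERSE_S = 13.8e9 * 365.25 * 24 * 3600   # ~4.4e17 seconds

def expected_errors(q: float, bit_rate_bps: float, seconds: float) -> float:
    ber = 0.5 * math.erfc(q / math.sqrt(2.0))
    return ber * bit_rate_bps * seconds

for q in (7, 9, 11):
    print(q, expected_errors(q, 3e9, AGE_OF_UNIVERSE_S))
# A Q of about 11 is already enough to expect fewer than one error
# over the age of the universe at 3 Gb/s.
```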

Years ago, when I worked with fibers, consensus was that if you could measure 
the error rate it was too high.  The context was long links - telcom.  Their 
engineering may be less conservative these days.

The error budget included quite a bit of reserve for the laser aging and 
additional splices that get added when backhoes find the fibers.  The systems 
were designed to have a low error rate under worst case conditions.  Most 
systems are far from worst case.  That's especially true just after 
installation when you are trying to verify that everything is working 
correctly.  The extra/reserve signal/noise ratio makes the error rate so low 
(Continue reading)

