
load when in
operation, the part as designed will in theory be able to carry a multiple of that
load to allow for an abnormally weak part or batch of material being used, an
exceptionally high load being applied, and other unusual but not fully unexpected
conditions of use. The multiple is known as a “factor of safety,”26 or
sometimes jocularly (but not totally in jest), a “factor of ignorance” in recognition
of the fact that not everything engineers do is fully understood by them and
that there are likely to be unanticipated conditions that must somehow be taken
into account in design. Although the concept of factor of safety is most readily
articulated and understood in the context of loads on structures, the idea of a
factor of safety can apply to engineering designs of all kinds.
2. Conservatism in design
An engineering design is said to be conservative when it carries an adequate
factor of safety.27 What is adequate may be a matter of judgment. There can
actually be several different factors of safety identified with a given design. Thus,
an airplane may be designed with one factor of safety against its wings fracturing
and falling off and another against its fuselage being dented. A dented fuselage
may have a small effect on how efficiently the plane flies, but a fractured wing
would obviously jeopardize everyone on board. To apply a greater factor of
safety to the wings makes sense even to a nonengineer.
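A factor of safety is simply the ratio of what a part can carry to what it is expected to carry, and each failure mode can be given its own margin. The sketch below, with invented numbers, illustrates the wing-versus-fuselage distinction:

```python
def factor_of_safety(capacity, expected_load):
    """How many times its expected working load the part can sustain."""
    return capacity / expected_load

# Hypothetical airframe: a larger margin is applied where failure would
# be catastrophic (wing fracture) than where it is merely costly
# (fuselage denting).  All numbers are invented for illustration.
modes = {
    "wing fracture":    factor_of_safety(capacity=150.0, expected_load=30.0),
    "fuselage denting": factor_of_safety(capacity=40.0, expected_load=20.0),
}
for mode, fos in modes.items():
    print(f"{mode}: factor of safety = {fos}")
```

Running the sketch shows a factor of 5 against wing fracture but only 2 against denting, mirroring the judgment that the consequences of failure, not just its likelihood, drive the margin chosen.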
What is an adequate factor of safety in a given application depends upon
many things, including the state of the art of the theory underlying the design,
the quality of materials that are used, and the quality and reliability of the workmanship
that goes into realizing the design. In the middle of the nineteenth
century, the theory of iron bridge design was in its infancy, and a responsible
26. See generally Baum v. United States, 765 F. Supp. 268, 273 (D. Md. 1991); In re Lloyd’s Leasing
Ltd., 764 F. Supp. 1114, 1127–28 (S.D. Tex. 1990); State ex rel. Fruehauf Corp. v. Industrial Comm’n,
No. 90AP-393, 1991 Ohio App. LEXIS 2022, at *4 (Ohio Ct. App. 1991).
27. See generally Union of Concerned Scientists v. Atomic Energy Comm’n, 499 F.2d 1069, 1086–
90 (D.C. Cir. 1974); United States v. Hooker Chem. & Plastics Corp., 607 F. Supp. 1052, 1065
(W.D.N.Y. 1985).
Reference Guide on Engineering Practice and Methods
bridge engineer had to rely upon a large factor of safety—a good deal of conservatism—
to ensure a safe bridge.
When a bridge over the River Dee collapsed in 1847 and the accident claimed
some lives, a royal commission was appointed to look into the use of iron in
railroad bridges. As part of the investigation, prominent engineers of the time
were asked what factor of safety they applied to their bridges, and the responses
ranged from 3 to 7.28 Robert Stephenson, the engineer of the Dee Bridge, had
been using factors between 1 and 2 for bridges like the Dee, and the Dee itself
was found to have had a factor of safety of about 1.5.29
Dozens of bridges like the Dee, which was a brittle cast-iron beam trussed
with malleable wrought iron, had been built in the preceding decade or so, and
their successful performance justified to Stephenson, at least, the use of the lower
factors of safety. The Dee was, however, the longest such bridge that had ever
been attempted, and it collapsed after some heavy gravel was added to its roadway
to reduce the possibility of its wooden deck being set afire by hot cinders
spewed out of crossing steam engines. (The addition of the gravel also naturally
lowered the factor of safety below 1.5.)
Although Stephenson was not as conservative as his contemporaries, he was
not found negligent by the royal commission, and he went on to complete the
landmark Britannia Bridge, whose design was being developed at the time of
the Dee collapse and during its investigation. The Britannia, however, being of
a more innovative design than the Dee, and with barely a precedent, was much
more conservatively designed. Indeed, it was so conservative in its design that
the chains that were to assist in holding up the box girder spans were deemed
unnecessary, and so the towers to hold the chains remained a functionless frill
on the completed bridge.
3. “Pushing the envelope”
As indicated in Figure 1, Robert Stephenson was “pushing
the envelope”30 with his Dee Bridge and related bridges, in the sense that he
was designing and building structures that were on the edge of the field of
experience.31 When the main-span length of such bridges was plotted against
the year of construction, the data points representing Stephenson’s bridges were
in extreme positions on the graph.32 Since the vague but generally smooth border
formed by the extreme points in such a plot is known as an envelope of the
28. See Petroski, supra note 12, at 101.
29. See Henry Petroski, Design Paradigms: Case Histories of Error and Judgment in Engineering
85–86 & fig.6.2 (1994).
30. See generally Hataway v. Jeep Corp., 679 So. 2d 913, 920 (La. Ct. App. 1996) (defining “pushing
the envelope” in the context of vehicle testing).
31. See Petroski, supra note 29, at 83–84 & fig.6.1.
32. P.G. Sibly, The Prediction of Structural Failures (1977) (unpublished Ph.D. dissertation, University
of London).
Reference Manual on Scientific Evidence
points, Stephenson’s designs represented by the extreme points were “pushing
the envelope,” that is, bulging it outward however slightly. It should be realized,
however, that there are notable examples of successful bridges built well
outside the envelope of experience. One was Stephenson’s Britannia Bridge,
and another famous one is the Forth Bridge, a cantilever bridge that was built at
twice the span length of existing examples when there was very little experience
with that genre.
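In simplified form, the envelope of such a plot can be read as the running maximum of span lengths built to date; a design “pushes the envelope” when it exceeds every predecessor. The sketch below, using invented data points, flags each bridge that extended the prior record:

```python
# Invented (year, main span in feet) data points standing in for the
# kind of plot described in the text.
bridges = [
    (1831, 40), (1835, 60), (1838, 55), (1840, 80), (1846, 98),
]

envelope = 0      # largest span built so far
pushers = []      # bridges that extended the record
for year, span in sorted(bridges):
    if span > envelope:
        pushers.append((year, span))
        envelope = span
print(pushers)
```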
Figure 1. Span lengths and construction dates of nineteenth-century trussed-girder bridges (span length in feet, years 1830–1845), distinguishing bridges designed by Robert Stephenson, including the Dee Bridge, from bridges designed by others; the first period of “Railway Mania” railway building is marked.
From Petroski, supra note 29, at 84 & fig. 6.1 (after Sibly, 1977).
Although the term may be more familiar in aeronautical and aerospace applications,
the phenomenon of “pushing the envelope” is a common and natural
thing to do in all of engineering. When designs work, there is a natural tendency
to pare down those designs to shed excess strength, which usually equates
with weight and, therefore, cost. There are several good reasons for the lowering
of the factor of safety. With experience comes confidence, not to mention
familiarity, with a design, and the design does not command the same sense of
conservatism that new and unfamiliar designs do. As familiar designs of a particular
kind proliferate, there also tends to evolve a sense that they can be extended
to new limits, because prior limitations, which were expressions of conservatism,
are thought to be excessive. New materials, construction, and manufacturing
techniques; greater theoretical understanding; and improved tools of
analysis also argue for less conservatism, lower factors of safety, and the pushing
of the envelope.
The development of cable-stayed bridges was following this pattern at the
end of the twentieth century. Dating principally from the 1950s in post-war
Germany, cable-stayed bridges are attractive design options because they are
relatively light and can be constructed relatively quickly, as compared with, say,
suspension bridges. Cable-stayed bridges soon proliferated, but their main spans
were increased slowly and incrementally, a conservative way to push the envelope.
It was generally held that cable-stayed bridges were the span of choice for
many applications in the 1,000- to 1,500-foot range; conventional suspension
bridges were specified for longer spans. In the 1990s, however, cable-stayed
designs with longer spans—some on the order of 3,000 feet—began to be built,
increasing the maximum span by about 50% in one fell swoop.33
Such severe pushing of the envelope—indeed, going beyond or outside the
envelope—is not unheard of. As mentioned earlier, the 1,710-foot Forth Bridge
of the cantilever type did so in 1890, and the 3,500-foot George Washington
Bridge almost doubled the main span of the longest previous suspension bridge,
the 1,800-foot Ambassador Bridge between Detroit and Windsor, Ontario. The
Tacoma Narrows Bridge near Seattle was built to the same state of the art as the
George Washington, and, with a 2,800-foot main span, was the third largest in
the world when completed in 1940. The Tacoma Narrows differed from the
George Washington in a significant way, however, in that it was extremely
narrow in comparison with its length, something so far outside the envelope of
experience that one consulting engineer reviewing the design recommended
that the bridge be built only if it were widened.34 It was not, and the bridge
collapsed in a 42-mile-per-hour wind only three months after it was completed.35
The state of the art had not included analyzing and designing suspension bridges
for aerodynamic effects, which were considered irrelevant.
D. Design Experience and Wisdom
The engineer who had most to do with the design of the Tacoma Narrows
Bridge, Leon Moisseiff, was among the most distinguished engineers working
on suspension bridges at the time. He had had a hand, as consulting engineer, in
the design of virtually every record-breaking suspension bridge conceived and
built since the turn of the century, and he was responsible for the principal
analytical tool that was used in making bridges lighter because the forces in them
33. See Petroski, supra note 29, at 175 fig.10.3.
34. See Petroski, supra note 19, at 297–300.
35. See Northwestern Mut. Fire Ass’n v. Union Mut. Fire Ins. Co., 144 F.2d 274 (9th Cir. 1944).
could be calculated more accurately. When the critical but much less prominent
engineer reviewing the Tacoma Narrows design recommended that it be widened
to bring it more in line with demonstrated practice, Moisseiff dismissed the
suggestion and essentially pointed to his considerable experience with suspension
bridges and the theories of their behavior that he and a colleague had developed
as his justification for leaving things as they were. Experience can be a
dangerous thing in engineering if it blinds the engineer to the fact that envelopes
can be pushed only so far.36
Another example of the arrogance of experience occurred in the design and
construction of the Quebec Bridge across the Saint Lawrence River, discussed
earlier. The chief engineer, Theodore Cooper, had an impeccable reputation,
but his confidence seems to have been almost without bounds. The construction
of the bridge was not properly monitored, and the incomplete structure
collapsed in 1907. It was later found that the weight of the structure had been
seriously underestimated in the design calculations and that the principal compression
members in the structure were too slender.37
The examples of the Tacoma Narrows and Quebec Bridges are not typical of
engineering practice, of course, but they are instructive in indicating that experience
alone is no substitute for careful, correct, and complete analysis. These
examples also illustrate that modes of failure that can be ignored in the design of
structures of a certain proportion can be critical in the design of structures of the
same genre but a different proportion. In the case of the Tacoma Narrows Bridge,
aerodynamic effects that were of little consequence for wider, stiffer bridges like
the George Washington proved disastrous for Moisseiff’s narrow, flexible design.
Similarly, the compression members of heavy, stubby cantilever bridges
were not in danger of buckling, but they proved to be the weak links in a light,
slender bridge like the Quebec.
E. Conservative Designs
Although it would appear to be a truism that conservative designs well within
the state of the art pose little danger of failing, what constitutes conservatism in
engineering design can be elusive. Galileo, though commonly thought of as a
scientist, was very interested in Renaissance engineering. In fact, the motivation
for his mature work, Dialogues Concerning Two New Sciences, was in some of the
limitations of engineering understanding that led at the time to the spontaneous
failure of ships and obelisks, among other things. One story Galileo tells at the
beginning of this seminal work on strength of materials is of a long piece of
marble that was being kept in storage with a support under each of its ends.
Because it was known at the time that long heavy objects like ships and obelisks
36. See Petroski, supra note 19, at 294–308.
37. See id. at 109–18.
could break under such conditions, one observer suggested that a third support
be added under the middle of the piece of marble, as indicated in Figure 2.
According to Galileo, everyone consulted thought it was a good idea, and it was
done. After a while, however, the marble was found to have broken in two,
anyway.38 In their self-satisfaction in taking action to prevent one mode of
failure from occurring, the Renaissance engineers did not think to worry about
the new mode of failure they were making possible by adding an additional
support and thus changing the whole system and enabling it to behave in an
unanticipated way.
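The two modes can be compared with standard uniform-load beam formulas (an idealized textbook sketch, not Galileo's own analysis): once the end supports settle, the slab balances on the added middle support, and a bending moment just as large as the original midspan moment reappears, reversed in sense, directly over that support.

```python
# Idealized comparison of the two failure modes for a uniformly loaded
# slab.  w = weight per unit length, L = total length (arbitrary units).
w, L = 1.0, 10.0

# Mode 1: the slab rests on its two ends; the worst (sagging) bending
# moment occurs at midspan: w * L**2 / 8.
m_two_ends = w * L**2 / 8

# Mode 2: the ends settle and the slab balances on the middle support;
# each half acts as a cantilever of length L/2, producing an equally
# large (hogging) moment over the support: w * (L/2)**2 / 2.
m_middle_only = w * (L / 2)**2 / 2

print(m_two_ends, m_middle_only)   # equal magnitudes, opposite sense
```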
Figure 2. The two failure modes described by Galileo.
From Petroski, supra note 29, at 53 & fig. 4.3 (after Galileo, 1638).
An analogous event happened in 1981 in Kansas City, Missouri, when the
elevated walkways of a hotel collapsed, killing 114 people.39 The recently opened
Hyatt-Regency Hotel had an expansive and towering lobby–atrium, and the
elevated walkways, or skywalks, crossing it were designed to be supported from
above so as to leave the floor of the lobby unobstructed by columns. The original
design called for suspending one of the skywalks below another by means of
long roof-anchored steel rods that would pass through the beams supporting the
top walkway and support the lower one also, as indicated in Figure 3a. During
construction, it was suggested that each single long rod be replaced by two
shorter rods, one supporting the upper walkway from the roof and the other
supporting the lower walkway from the upper. Such a design change could
have been viewed as conservative because the unwieldy longer rods could have
been bent and damaged during installation, whereas the shorter ones were more
likely to survive installation without incident.
38. See Petroski, supra note 29, at 47–51.
39. See Deborah R. Hensler & Mark A. Peterson, Understanding Mass Personal Injury Litigation: A
Socio-Legal Analysis, 59 Brook. L. Rev. 961, 972–74 (1993) (overviewing the events of the Hyatt-
Regency skywalk collapse). See also In re Federal Skywalk Cases, 680 F.2d 1175 (8th Cir. 1982); In re
Federal Skywalk Cases, 97 F.R.D. 380 (W.D. Mo. 1983).
Figure 3. Connection detail of upper suspended walkway in the Kansas City
Hyatt Regency Hotel, as originally designed (A) and as built (B).
From Petroski, supra note 29, at 61 & fig. 4.7 (after Marshall et al., 1982).
When the structural engineers were asked about the change from single rods
to double ones, they apparently raised no objection, and the skywalks were built
in the changed manner. When the skywalks collapsed, the design change was
quickly identified as the structural culprit. Replacing the one-rod design with
the two-rod design essentially doubled the bearing stress on the upper walkway
beam, because the connection there had to support the weight of not only the
upper walkway but also the lower walkway. In the original design, the lower
walkway’s weight was carried by the rod and not the upper walkway.40 Thus,
what might appear to be relatively simple design changes for the better can
drastically alter a system’s behavior by introducing failure modes not even possible
in the original design. Seemingly simple and innocuous design changes can
be among the most pernicious. Had the design change not been made, the
skywalks would likely still be in place.
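The doubling of the bearing load follows from simple statics. In the sketch below, the two walkways are assumed to weigh the same (an illustrative assumption), and the tally is of what the connection at the upper walkway beam must carry in each design:

```python
# W = weight of one walkway, in walkway-weight units (both assumed equal).
W = 1.0

# Original design: one continuous rod.  The nut under the upper beam
# carries only the upper walkway; the lower walkway hangs from the rod
# farther down, bypassing the upper beam entirely.
load_original = W

# As built: two rods.  The upper beam must now transfer the lower
# walkway's weight to the roof rod as well as support its own.
load_as_built = W + W

print(load_as_built / load_original)   # 2.0 -- bearing load doubled
```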
The explosion of the space shuttle Challenger might be attributed, at least in
part, to an attempt to design a more conservative solid booster rocket than had
ever flown. Prior booster rocket designs, such as that of the Titan III, had a
single O-ring sealing the gap between mating sections of the rocket casing. The
40. See Petroski, supra note 12, at 86–88.
Titan design was a very successful and proven one, and this argued for its adoption
for space shuttle use. However, to make the design even more reliable, or
so it was thought, a second O-ring was added to the joint between the sections,
as indicated in Figure 4. This design change must surely have been considered a
more conservative approach. It was, however, the complication of having two
O-rings, and the difficulty of checking the proper seating of the one hidden by
the other from visual inspection, that was a factor in the development of the leak
that caused the Challenger to explode. Indeed, the supposed conservatism of the
double O-ring design might also have contributed to the ill-fated decision to
launch the shuttle against the advice of engineers who knew the O-rings were
susceptible to damage in cold weather, which prevailed on the morning of the launch.41
Figure 4. O-ring designs for Titan III and space shuttle booster rocket.
From Petroski, supra note 29, at 63 & fig. 4.9 (after Bell & Esch, 1987).
41. See Trudy E. Bell & Karl Esch, The Fatal Flaw in Flight 51-L, IEEE Spectrum, Feb. 1987, at 36.
See also Hans Mark, The Space Station: A Personal Journey 218–21 (1987).
F. Daring Designs
If the belief that a design is conservative can be misplaced, so can a fear that any
design innovation is doomed to fail. The Apollo 11 mission to the moon demonstrated
that an engineering system design of enormous complexity and novelty,
that of the moon lander, could succeed the first time it was tried. Indeed,
the history of engineering is full of examples of new designs succeeding the first
time they have been attempted. Among the most famous and successful bridges
in the world is the Forth Bridge in Scotland, described earlier. This innovative
design comprising record-breaking cantilever spans was also the first major bridge
to be made entirely of steel.
IV. Success and Failure in Engineering
A. The Role of Failure in Engineering Design
Failure is a central idea in engineering. In fact, one definition of engineering
might be that it is the avoidance of failure. When a device, machine, or structure
is designed by an engineer, every way in which it might credibly fail must
be anticipated to ensure that it is designed to function properly. Thus, in designing
a bridge, the engineer is responsible for choosing and specifying the type and
size of the piers, beams, and girders so that the bridge does not get undermined
by the current in the river the bridge spans, does not collapse under rush-hour
traffic, and does not get blown off its supports. The engineer ensures that these
and other failures do not occur by analyzing the design on paper, and the objective
of the analysis is to calculate the intensity of forces in the structure and
compare them with limiting values that define failure. If the calculated force
intensities are sufficiently within the limits of the material to be used, the bridge
is assumed to be safe, at least with respect to the modes of failure considered.
(Each separate mode of failure must be identified and checked individually.)
In a suspension bridge, for example, the total force in the main cable depends
upon the geometry of the bridge and the traffic it must carry. The force the
cable must resist determines how large the cable must be if a certain type of steel
wire is used. Since the steel wire, like every engineering material, has a breaking
(failure) point, the engineer calculates how far from the breaking point the cable
will be when the bridge is in service. If this difference provides the desired factor
of safety, the engineer concludes that the bridge will not fail, at least in the mode
of the cable breaking, even if the wire installed is somewhat weaker than average
and the traffic load is heavier than normal. Other possible ways in which
failure may occur must also be considered, of course. These may include such
phenomena as corrosion, ship collision, and earthquakes. The collection of such
calculations and considerations constitutes a complete analysis of the design.
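The cable check described above can be sketched for an idealized parabolic cable under uniform load, using standard formulas and invented numbers:

```python
import math

# Idealized parabolic main cable under uniform load; all figures are
# hypothetical and chosen only to show the shape of the calculation.
w = 100.0             # load per foot of span (deck plus traffic), kips/ft
L = 3000.0            # main span, ft
s = 300.0             # cable sag, ft
strength = 500_000.0  # breaking strength of the cable, kips

H = w * L**2 / (8 * s)         # horizontal component of cable tension
T = math.hypot(H, w * L / 2)   # maximum tension, at the towers
fos = strength / T             # how far the cable is from breaking
print(round(T), round(fos, 2))
```

If the resulting factor of safety falls short of the value the engineer requires, the cable is enlarged or the geometry revised; the same comparison of computed demand against limiting capacity is repeated for every mode of failure considered.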
B. The Value of Successes and Failures
It is an apparent paradox of science and engineering that more is learned from
failures than from successes. Indeed, Karl Popper’s philosophy of science holds
that a scientific hypothesis must be falsifiable. What this means is that a given
hypothesis can be found false by a single counterexample. Thus, if a scientist
puts forth a hypothesis that states that no living thing can exist for more than 100
years, the documented existence of a living tree more than 300 years old disproves
the hypothesis. If, however, no one can produce a living thing that is
more than 100 years old, this does not prove the hypothesis. It merely confirms
it as a (true) hypothesis, still subject to being proven false by a single counterexample.
Engineering has hypotheses also, and they are equally refutable by a single
counterexample. In the first half of the nineteenth century, it was a commonly
held belief (or hypothesis) that a suspension bridge could not safely carry railroad
trains. John Roebling explained his reason for studying the failures of suspension
bridges that had occurred during that time by stating that he could not
know how to design a successful bridge unless he knew what he had to design it
against. In the 1850s he designed and built a suspension bridge over the Niagara
Gorge that did carry railroad as well as carriage traffic. In other words, Roebling’s
bridge provided the counterexample to the hypothesis that suspension bridges
could not carry railroad trains. At the same time, his successful bridge did not
prove that all suspension bridges would be safe.
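The asymmetry between confirmation and refutation can be shown in a few lines:

```python
# A universal hypothesis can be refuted by one counterexample but never
# proved by any number of confirming observations.
def hypothesis(age):
    return age <= 100   # "no living thing survives more than 100 years"

observations = [12, 87, 45, 99]                    # consistent, but prove nothing
print(all(hypothesis(a) for a in observations))    # True: merely not yet refuted

observations.append(300)                           # one 300-year-old tree
print(all(hypothesis(a) for a in observations))    # False: hypothesis refuted
```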
When a bridge carries traffic successfully or a skyscraper stands steady in the
wind, the structure does not reveal much beyond the fact that it is fulfilling its
function. Although design claims that the structure would not fail will have
been verified by the successful structure, and measurements of how much the
structure moves under load will confirm quantitatively what the design calculations
predicted, that does not prove that the design analysis was total or complete.
If the design calculations did not include aerodynamic effects, for example,
like the flutter of a bridge’s roadway in the wind, that does not mean the
wind cannot bring the structure down, as it did the Tacoma Narrows Bridge.
Nature does not ignore what an engineer may have overlooked.
If an unexpected failure occurs, however, such as the collapse of the Tacoma
Narrows Bridge, then it provides incontrovertible evidence that the design was
improperly (or incompletely) analyzed or something was overlooked. Whereas
aerodynamic effects might have been insignificant in bridges that were wide and
heavy, like the George Washington Bridge, they could not be ignored in light
and narrow structures like the Tacoma Narrows Bridge. Unfortunately, it often
takes a catastrophic failure to provide the clear and unambiguous evidence that
the design assumptions were faulty.
There were precursors to the collapse of the Tacoma Narrows Bridge, in that
several other bridges built in the late 1930s displayed unexpected behavior in
the wind. Indeed, engineers were studying the phenomenon, trying to understand
and explain it, and debating how properly to retrofit the bridges affected
when the landmark failure occurred. It provided the counterexample to the
implicit engineering hypothesis under which all such bridges were designed,
namely, that the wind did not produce aerodynamic effects in heavy bridge
decks sufficient to bring them down. Thus, the failure of the Tacoma Narrows
Bridge proved more instructive than the success of all the bridges that had performed
satisfactorily—or nearly so—over the preceding decades.
1. Lessons from successful designs
Strictly speaking, a successful design teaches engineers only that that design is
successful. It does not prove that another design like it in every way but one will
also be successful. For example, there is a size effect in engineering, as in nature,
and it appears to have been known, though not necessarily fully understood, for
millennia. Vitruvius, who wrote in the first century B.C. what is generally considered
to be the oldest work on engineering extant, related the story of the
ancient engineer Callias, who convinced the citizens of Rhodes with the aid of
a model that he could build a machine to defend their city against any siege the
enemy could launch. When the enemy did attack with an unprecedentedly
large heliopolis, Callias confessed that he could not defend the city as promised
because although his defense machine worked as a model, it would not work at
the scale needed to conquer the gigantic heliopolis.
Galileo, writing fifteen centuries later, described how limitations to size were
appreciated in the Renaissance, even though still not fully understood. He told
of the spontaneous failure of wooden ships upon being launched and of stone
obelisks upon being moved. It was Galileo’s work that finally explained what
was happening. Since the volume of a body, natural or artificial, increases faster
than the area of its parts as they are scaled up in a geometrically similar way,
there will come a time when the weight is simply too much for material of the
body to bear. This, as Galileo explained, is why smaller animals have different
proportions than larger ones, and it is also why things in nature grow only so
large. So it is with engineered structures.
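Galileo's square-cube reasoning can be put in a line or two: under geometrically similar scaling, weight grows with the cube of size while load-bearing cross-section grows only with the square, so the stress in the material grows linearly with size.

```python
# Scale a body's linear dimensions by a factor s, holding its shape and
# material fixed.  Relative stress = relative weight / relative area.
def relative_stress(s):
    weight = s**3        # proportional to volume
    area = s**2          # proportional to cross-section
    return weight / area # grows linearly with s

for s in (1, 2, 10):
    print(s, relative_stress(s))
# Stress doubles when size doubles; eventually it exceeds the
# material's strength, which is why a scale model can succeed where
# the full-size structure fails.
```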
The phenomenon of the size effect is not the only one that has taken engineers
by surprise. The aerodynamic instability manifested in suspension bridges
in the late 1930s was absent or insignificant and thus unimportant in early designs
of those structures. However, it became dominant and thus significant in
evolved designs, which were so much larger, lighter, narrower, or more slender.
Another example relates to metal fatigue, a mechanical phenomenon in which
the repeated loading and unloading of a structural component leads to crack
growth, which in turn can lead to catastrophic failure of the weakened part.
Metal fatigue had long plagued the railroad industry. In time it came to be
understood that if the intensity of loading was kept below a certain threshold,
cracks would not develop and thus the structure would not be weakened. When
commercial jet aircraft were first developed after the Second World War, metal
fatigue was not believed to be relevant, but the mysterious failures of several de
Havilland Comets in the 1950s led one engineer to suspect that fatigue was
indeed the cause of the mid-air disasters. It was in fact true that the cyclic pressurization
and depressurization of the cabin with every takeoff and landing was
producing fatigue cracks that grew until the fuselage could no longer hold together.
The engineer was able to confirm his hypothesis about fatigue by testing
to failure an actual Comet fuselage under controlled conditions.42
The phenomenon of fatigue does not affect only large structures made of
metal. A fatigue failure of a more modest kind but nevertheless of significant
consequence to those who used the device was the breakage of keys on the
child’s toy Speak & Spell. Introduced by Texas Instruments in the late 1970s,
not long after electronic calculators had been embraced by engineers, this
remarkable device employed one of the first microelectronic voice synthesizers.
Speak & Spell would ask a child to spell a word, and the child responded by
pecking out the word letter by letter on the keyboard, each letter appearing as it
was typed on the calculator-like display. Upon hitting the enter key, the child
was told that the spelling was correct or was asked to try again. Children enjoyed
the toy so much that they used it for hours on end, thus flexing the plastic
hinges of the letter keys over and over again. This repeated loading and unloading
of the plastic hinges led some of them to exhibit fatigue and break off.
Children could still fit their little fingers into the keyholes, however, and so they
could continue to use the toy, disfigured as it was. What makes the experience
with Speak & Spell so instructive as an example of a fatigue failure is that the first
key to break was invariably the one used most—the E key. For those Speak &
Spells that continued to be used, subsequent keys tended to break in the same
sequence as the frequency of letters used in the English language—E, T, A, O,
I, N, and so forth—thus demonstrating the fundamental characteristic of fatigue
failure, namely, that all other things being equal, the part subjected to the most
loadings and unloadings will break first.43
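The failure order can be reproduced with a toy model (letter frequencies approximate, fatigue life invented): give each key a cycle count proportional to its letter's frequency in English and a fixed hinge life, and the most-used keys fail first.

```python
# Approximate English letter frequencies, in percent.
letter_frequency = {"E": 12.7, "T": 9.1, "A": 8.2, "O": 7.5, "I": 7.0, "N": 6.7}
presses_total = 100_000   # total key presses over the toy's life (invented)
fatigue_life = 9_000      # cycles a plastic key hinge survives (invented)

# Cycles accumulated by each key, proportional to its letter's frequency.
cycles = {k: presses_total * f / 100 for k, f in letter_frequency.items()}

# Keys whose accumulated cycles exceed the hinge life, most-used first.
failed = sorted((k for k, c in cycles.items() if c > fatigue_life),
                key=lambda k: -cycles[k])
print(failed)   # the E key goes first, then T, matching usage order
```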
The Speak & Spell example also shows how engineering designs are changed
in response to repeated failures. In time, a new model of the toy was introduced,
one with a redesigned keyboard. In place of the plastic keys that fit individually
into recesses there was a flat keyboard printed on a rubbery plastic sheet that
overlay all the switches for the letters. Not only did the new design reduce the
incidence of key failure, but it also made for a flat surface that was much easier
42. See Petroski, supra note 12, at 176–84.
43. Id. at 22–27.
to clean than the original model, which collected the snack residue that children
are likely to leave on their toys. The redesign of the Speak & Spell is a representative
example of how engineers are attentive and responsive to failures.
2. Lessons from failures
Unanticipated failures may be thought of as unplanned experiments. While failures
are also unwanted, of course, the surprise result of any failure is clearly
interesting, and it reveals a point of ignorance that engineers must then seek to
correct. Thus, when the Tacoma Narrows Bridge collapsed, bridge engineers
could no longer argue that they did not have to analyze large suspension bridge
designs for their susceptibility to aerodynamic effects. Indeed, it was the unanticipated
motion of bridge decks (their failure to hang steady in the wind)
that prompted wind-tunnel tests of the deck designs for future suspension bridges.
Although such model tests were still open to some criticism as to their relevance
for the full-scale bridge, comparative wind-tunnel tests could be conducted on
alternative deck designs, and such tests led to new designs in the wake of the
Tacoma Narrows collapse. The wing-like decks of the Severn and Humber
Bridges in Britain are examples of such new designs.
Failures in machine parts are equally revealing of design weaknesses. A bracket
that keeps breaking in an automobile engine, for example, indicates a poorly
designed detail, and it is likely that this bracket will in time be redesigned to give
it greater strength in the vulnerable location. As a result, replacement parts will
come to be manufactured in a slightly different form than the original, and later
models of the same automobile are likely to come with the redesigned bracket.
C. Successful Designs Can Lead to Failure
A major advance in the design and construction of long-span suspension bridges
was made in the mid-nineteenth century by John A. Roebling. His career culminated
in his design of the Brooklyn Bridge, the completion of which was
overseen by his son, Washington A. Roebling, and his wife, Emily Warren
Roebling. For half a century from 1883, when the Brooklyn Bridge was opened
to traffic, suspension bridges evolved in several directions. The most obvious
change was that the length of the main span increased from the 1,595 feet of the
Brooklyn Bridge to the 4,200 feet of the Golden Gate Bridge, which was completed
in 1937. Another important development was the increasing slenderness
of suspension bridges, perhaps best exemplified by the shallow roadway of the
George Washington Bridge as completed in 1931 with only a single deck. (The
lower deck was not added until the early 1960s.) The evolution to slenderness
of suspension bridges culminated in several long-span suspension bridges of the
Reference Guide on Engineering Practice and Methods
late 1930s, including the Bronx-Whitestone and Deer Isle Bridges, which used
shallow plate girders instead of deep deck trusses to support the roadway.
Another important change in the design of suspension bridges after the Brooklyn
Bridge was the elimination of the cable stays that radiate from that bridge’s
Gothic towers to its roadway. In the Brooklyn Bridge this feature results in the
web-like pattern of its cables that is characteristic of Roebling designs. John
Roebling had incorporated this feature, as well as guy wires steadying the bridge
from beneath, in his Niagara Gorge Bridge of 1854, which was the first suspension
bridge to carry the heavy and violent loads of railroad trains. As suspension
bridges came in time to be built larger, the feature of guy wires was dispensed
with, as the effect of the wind on vertical motions of the deck was believed to be
insignificant. In this way, the successful designs of more than a half century
earlier evolved into the light, narrow, slender, and unadorned Tacoma Narrows
Bridge that could not withstand a 42-mile-per-hour wind.
The evolution of bridges is a paradigm for the development of all designed
structures and for the evolution of artifacts generally. The more successful a
design, the more likely it is to be a model for future designs. But because engineering
and construction are influenced by aesthetics, economics, and, yes, ethics
or their absence, designs tend to get pared down in time.44 This paring down
can take the form of enlargement in size without a proportional increase in
strength, in defiance of the size effect; streamlining in the sense of doing away
with what is believed to be superfluous; lightening by the use of stronger materials
or materials stressed higher than before; and cheating, which can take the
form of leaving out some indicated reinforcement in concrete or deliberately
substituting inferior materials for specified ones. The cumulative effect of such
paring down of strength is a product that can more readily fail. If the trend
continues indefinitely, failure is sure to occur.
When failures do occur, engineers necessarily want to learn the causes. Understanding
of the reason for repeated failures—structural or otherwise—that
jeopardize the satisfactory use and therefore the reputation of a product typically
leads to a redesigned product. Thus, the vulnerability of automobile doors to
being dented in parking lots led to the introduction of protective strips along the
length of car bodies. The propensity of pencil points to break under relatively
light writing pressure led pencil manufacturers in the 1930s to look into the
reasons for the failures. When it was found that the pencil lead was not being
44. See Baum v. United States, 765 F. Supp. 268, 274 (D. Md. 1991) (noting the often conflicting
factors, the court commented that “National Park Service officials have more than safety in mind in
determining the design and use of man-made objects such as guardrails and signs along the parkway.
These decisions require balancing many factors: safety, aesthetics, environmental impact and available
financial resources.”).
properly glued to the wood case, research-and-development efforts were initiated
to design a more supportive joining process. This led to proprietary pencil
manufacturing processes with names such as “Bonded,” “Chemi-Sealed,” “Pressure
Proofed,” and “Woodclinched,” some of which can be found still stamped
on pencils sold today.45
Failures that cause more significant property damage or that claim lives are
usually the subject of failure analyses conducted by consulting engineers or forensic
engineers. Such investigations may be likened to puzzle solving or to
design problems worked in reverse, in that the engineer must develop hypotheses
and then test them with analysis. However, with direct design there is no
unique solution; in a forensic engineering problem, there presumably is a unique
cause of a particular failure, but it might not easily be found.
The failure analyst or forensic engineer must essentially come up with a hypothesis
of how the particular failure under investigation was initiated and progressed.
The hypothesis obviously must be consistent with the evidence, which
should be preserved as much as possible in the state in which it existed when the
failure occurred. This means, for example, that the configuration of an accident
scene should be recorded before anything is moved, that the fracture surfaces of
broken parts should not be touched or damaged further, that bent and twisted
parts should be left in their as-found condition, and generally that each and
every piece of potential evidence should be carefully labeled and handled with
care. In other words, the scene of an engineering failure should as much as
possible be treated as if it were the scene of a crime. The urgent need to move
material objects to reach persons involved in an accident takes precedence, of
course, and how this may have affected forensic evidence must itself be taken
into account in the analysis of evidence from the accident scene.
There have been attempts to formalize the procedures involved in the investigation
of failures, especially those of a recurring nature, such as the collapse of
structures.46 However, with the exception of aircraft accident sites, which are
under the control of the National Transportation Safety Board (NTSB), there is
no uniform way in which structural failure sites are controlled. In the case of the
Kansas City Hyatt-Regency walkways collapse, for example, the owner of the
building had the one surviving walkway removed within a day or so of the
accident, thus depriving engineers of the opportunity to study an undamaged
structure of similar design to see if it provided any clues to the cause of the
collapse of the other two walkways.
Regardless of how the failure or accident site is treated, investigating engineers
must seek clues to the cause in whatever way they can. The most helpful
information naturally comes from the most well-preserved pieces of the puzzle.
45. See Henry Petroski, The Pencil: A History of Design and Circumstance 244–45 (1990).
46. See, e.g., Jack R. Janney, Guide to Investigation of Structural Failures (1979).
Thus, broken parts should be handled with care so as not to destroy evidence of
how a crack might have begun and propagated or how two broken pieces may
or may not fit together. Cracks in metal and plastic generally leave telltale clues
as they grow, and the failure-analysis expert can read these clues under a microscope
with some degree of certainty. Broken pieces that fit together to produce
a part that could be mistaken for new were it not for the fracture indicate that
the material was extremely brittle when the part broke, something that may or
may not have been appropriate for the design. In contrast, pieces that when
fitted together show the part to have been stretched and bent before breaking
indicate a ductile material and give some indication of the nature of the loads
before the fracture. Such conclusions can be drawn with a high degree of certainty,
and the kind of information they yield can often lead to the construction
of a very likely scenario for what happened.
Investigators for the NTSB look for such clues, and more of course, when
they collect the parts of a crashed plane and assemble them on the floor of a
hangar. No matter how sure the board’s final conclusion might be, however, it
is always presented as a “most likely cause” rather than a proven fact, in recognition
that fundamentally the proffered cause is but a hypothesis. Just as scientific
hypotheses can be confirmed and verified but never proven with mathematical
certainty, so the cause of an engineering failure can only be confirmed and
verified by the surviving evidence. The evidence can often be so overwhelmingly
convincing, however, that engineers use it to guide their redesigns and
future designs.
The more catastrophic and dramatic failures, especially those that claim lives,
are often the subject of public and formal investigations. The explosion of the
space shuttle Challenger, in which all seven astronauts on board died, was investigated
by a presidential commission, whose hearings were televised. The collapse
of the Quebec Bridge, which claimed the lives of about seventy-five construction
workers, was looked into by a royal commission. And the failure of the
elevated walkways in the Kansas City Hyatt-Regency Hotel in 1981 was investigated
in some detail by what was then the National Bureau of Standards. (The
role of the engineers in the collapse of the walkways was the subject of a case
presented by the professional engineering licensing board of Missouri before a
commissioner.47) In all such cases, there have been extensive formal reports,
which are often very informative not only about the particular case under consideration
but also about the nature of the engineering design process generally.
Collectively, such reports can point to patterns regarding failures and thus to
generalizations about what engineers might be watchful for in the future.
For example, the history of bridges over the last century and a half reveals a
47. Missouri Bd. of Architects, Prof’l Eng’rs & Land Surveyors v. Duncan, No. AR-84-0239, 1985
Mo. Tax LEXIS 50 (Mo. Admin. Hearing Comm’n Nov. 15, 1985).
disturbing pattern of success leading to failure. Beginning with the Dee Bridge
failure in 1847, roughly every 30 years there has been a major bridge failure—
each of a different type of bridge—and each failure can be traced to the gradual
transformation of a successful bridge design.48 Among the explanations for this
haunting pattern is that novel types of bridges are designed by engineers who
take care with the designs, since they have few precedents, and the designs that
are successful are copied and in time come to be attempted in longer lengths, in
more slender profiles, and with increasing casualness by a younger generation of
engineers that is unaware of or does not remember the assumptions that went
into the early designs or the limitations of those designs. Such a pattern was
being repeated in the late twentieth century for cable-stayed and post-tensioned
bridges, and such bridges may well be expected to suffer a catastrophic failure
early in the new millennium.
D. Failures Can Lead to Successful Designs
Just as successful designs can lead to failures, so can failures lead to revolutionary
successes. The same history of bridge failures described earlier (in section IV.C)
also reveals that with a catastrophic failure, a type of bridge or a construction
practice falls out of favor. This often occurs more for extratechnical than technical
reasons, such as an attempt to regain the public's confidence so that the new bridge
will attract the public back to a railroad or a toll highway.
If a type of bridge ceases to be used, then a new type must be developed for
the building of new bridges. In the wake of a major failure, new engineers are
likely to be retained, engineers with solid reputations and impeccable credentials.
Furthermore, because a novel type of bridge is being proposed, its design
must proceed with deliberate attention to detail and explicit consideration of all
relevant modes of failure. In the wake of the failure, the bridge tends to be
overdesigned to further ensure its reliability.49
E. Engineering History and Engineering Practice
The historical pattern described in the preceding two sections points to the
value of history for present and future engineering. As suspension bridges were
being designed with ever longer lengths and with ever more slender profiles,
engineers of the 1920s and 1930s looked to the history of bridges for aesthetic
models. Among the bridges often referred to was the Menai Strait Suspension
Bridge in Wales, which was designed and built by Thomas Telford in the 1820s.
The stone towers, iron chains, and wooden deck of this classic bridge influenced
greatly the bridges of a century later, but the Menai served only as an aesthetic
48. See Petroski, supra note 29, at 168–69.
49. Id. at 176–77.
model and thus only to a limited extent. The repeated destruction in the first
half of the nineteenth century of the Menai Strait Bridge’s deck in the wind was
dismissed as irrelevant to the state of the art of modern bridge building. This was
so because it was believed that the force of the wind could not produce the same
effects on a heavy steel deck that it did on the Menai Strait’s light wooden
fabric. This, of course, proved to be a totally unfounded assumption.
The history of engineering, even of ancient engineering as recorded 2,000
years ago by Vitruvius, has a relevance to modern engineering because the fundamental
characteristics of the central activity of engineering—design—are essentially
the same now as they were then, have been through the intervening
millennia, and will be in the new millennium and beyond. Those characteristics
are the origins of design in the creative imagination, in the mind’s eye, and the
fleshing out of designs with the help of experience and analysis, however crude.
Furthermore, the evolution of designs appears to have occurred throughout
recorded history in the same way, by incremental corrections in response to real
and perceived failures in or inadequacies of the existing technology, the prior
art. There also is strong evidence in the historical record that engineers and their
antecedents in the crafts and trades have always pushed the envelope until failures
have occurred, giving the advance of technology somewhat of an epicyclic
character. Thus, according to this view, the fundamental characteristics of the
creative human activity we call design are independent of technological advances
in analytical tools, materials, and the like.
The way artifacts were designed and developed in ancient times remains a
model for how they are designed and evolve today. This is illustrated in a story
Vitruvius relates of how the contractors and engineers Chersiphron, his son
Metagenes, and Paconius used different methods to move heavy pieces of stone from
quarry to building site. The method of Chersiphron—which was essentially to
use column shafts as wheels, into whose ends hollows were cut to receive the
pivots by which a pulling frame was attached, as indicated in Figure 5—worked
fine for the cylindrical shapes that were used for columns, but the method failed
to be useful to move the prismatic shapes of stones that were used for architraves.
Metagenes very cleverly adapted Chersiphron’s method by making some
evolutionary modifications in how the stone was prepared for hauling. He essentially
used an architrave as an axle, around whose ends he constructed wheels
out of timber, as indicated in Figure 6. When Paconius was faced with a new
problem, however, involving a stone that could not be defaced in the way the
earlier methods had to be to receive pivots, he devised a scheme to prepare the
stone without damaging it. As indicated in Figure 7, he enclosed the stone in a
great timber spool around which a hauling rope could be wound. The method
would also appear to be but an incremental evolutionary development from that
of his predecessors, but it proved to be a colossal failure because the spool and its
cargo could not be kept on a straight path, and all the time and effort spent in
getting the spool back to the center of the road led to the bankruptcy of the
contracting business. Understanding the way in which Chersiphron’s successful
method evolved through Metagenes’s method to Paconius’s dismal failure is a
paradigm for the design process. It behooves engineers and those who wish to
appreciate the enterprise of engineering to understand through such a paradigm
the process independent of the particular application and the state of the art in
which it is embedded at any given point in history.50
Figure 5. Chersiphron’s scheme for transporting circular columns.
From Petroski, supra note 29, at 19 & fig. 2.1 (after Larsen, 1969).
50. Id. at 17–26.
Figure 6. Metagenes’s scheme for transporting architraves.
From Petroski, supra note 29, at 20 & fig. 2.2 (after Coulton, 1977).
Figure 7. Paconius’s scheme for transporting the pedestal for the Statue of
From Petroski, supra note 29, at 22 & fig. 2.3 (after Coulton, 1977).
Although the examples in this reference guide are drawn mainly from the
fields of civil and mechanical engineering and are largely historical, the principles
of design, analysis, and practice that they illustrate are common to all fields
of engineering and are relevant to twenty-first century engineering. The nature
of engineering design is such that emerging fields like bioengineering and software
engineering can be expected to follow similar paths of development as
have the older and more traditional fields, in that design errors will be made,
failures will occur, and designs will evolve in response to real and perceived
failures. Biomedical engineering, which grew mainly out of electrical engineering,
is already a well-established discipline with its own academic departments,
professional journals, and societies. One such journal is the IEEE Transactions on
Biomedical Engineering, published by the Engineering in Medicine and Biology
Society of the Institute of Electrical and Electronics Engineers.
Although there has been some opposition among professional engineers to
the term “software engineering” and to the use of the title “software engineer”
by those without engineering degrees, there are clear indications that this opposition
is lessening. The State of Texas, for example, now licenses software engineers
under that title. The software engineering community itself has for some
time felt a kinship to engineering more than to computer science, and the name
of their principal professional society, the Association for Computing Machinery
(ACM), is certainly more suggestive of an engineering organization than a
science one. Software engineering publications have run at least one extensive
interview with a prominent bridge designer, and at least one expert on bridge
failures has been invited to give keynote addresses at meetings of software engineers.
Thus, those engaged in software design and development are recognizing
the validity of the analogy between what they and civil engineers do and the
lessons to be learned by analogy from structural engineering history and failures.
There is also on the Internet a very well-established and closely read Forum on
Risks to the Public in Computers and Related Systems (comp.risks), which is
operated by the ACM Committee on Computers and Public Policy, and moderated
by Peter G. Neumann.51 That the newest engineering fields share a methodology
and an interest in failures with the oldest engineering fields should be
no more surprising than the fact that the newest scientific fields share the scientific
method with older sciences like chemistry and physics.
51. This publication is available on request from risks-request@csl.sri.com with the single-line
message “Subscribe.”
V. Summary
In summary, engineering and science share many characteristics and methodologies,
but they also have their distinct features and realms of interest. Among
the points that have been made in this reference guide that might be considered
in evaluating an engineering expert’s testimony are the following:
• Engineering and scientific practice share qualities, such as rigor and method,
but they remain distinct endeavors.
• Engineering in its purest form seeks to synthesize new things; science seeks
to understand what already exists.
• Engineering is more than applied science; engineering has an artistic and
creative component that manifests itself in the design process.
• Engineering designs are analogous to scientific hypotheses in that they can
be proven wrong by a single counterexample (such as a failure) but cannot
ever be proven absolutely correct or safe.
• Engineering always involves an element of risk; it is the engineer’s responsibility
to minimize that risk to within socially acceptable limits.
• Engineering designs are tested by analysis; it is when engineers are doing
analysis that they behave most like scientists.
• Engineering in a climate of repeatedly successful experience can lead to
overconfidence and complacency, and this is when errors, accidents, and
failures can happen.
• Engineering failures provide reality checks on engineering practice, and the
information generated by a failure investigation is very valuable not only to
explain the failure itself but also to point to shortcomings in the state of the art.
• Engineering is always striking out in new directions, but that is not to say
that new fields of engineering are different in principle from traditional ones.
• Engineering has a rich history, which is dominated by successes but punctuated
by some colossal failures, and that history provides great insight into
the nature of engineering and its practice today.
Glossary of Terms
ABET. Accreditation Board for Engineering and Technology, a consortium of
engineering professional societies that accredits academic engineering and
engineering technology programs.
analysis. The study of an engineering system that leads to a usually quantitative
understanding of how its constituent parts interact. See also design.
applied science. Science or a scientific endeavor pursued not merely for an
understanding of the universe and its materials and structures but with a practical
objective in mind. Seeking the fundamental nature of subatomic particles
is considered pure science if it has no other objective than an understanding
of the nature of matter. Using scientific principles relating to the
interaction of atoms to define specifications for a nuclear reactor is applied
science. Engineering, which involves a synthesis of science, experience, and
judgment, is frequently but mistakenly termed applied science.
computer-aided design (CAD). The use of digital computers to model,
analyze, compare, and evaluate how changes in an engineering system affect
its behavior, with the objective of establishing an acceptable design. The
most sophisticated applications of CAD eliminate much of the paper calculations
and drawings long associated with engineering design and allow the
data associated with a design to be transferred electronically from the design
to the manufacturing stage.
conservatism (in engineering). When choices are encountered in engineering
modeling, design, or analysis, choosing the option that makes the design
safer or causes the analysis to predict a lower load capacity rather than a
higher one.
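This choice can be stated as a one-line rule. A minimal sketch, with invented strength values:

```python
# Conservatism in analysis: when a material property is uncertain,
# use the value that predicts the weaker structure, not the average
# or the best case. Numbers below are hypothetical.

reported_strengths = [36.0, 42.0, 39.0]  # ksi, hypothetical mill test reports

def conservative_strength(strengths: list[float]) -> float:
    """A conservative analysis takes the lowest credible strength."""
    return min(strengths)

print(conservative_strength(reported_strengths))  # → 36.0
```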
constraints. Anything outside the designer’s control that restricts choices in
design is known as a constraint. Thus, if a certain clearance above mean high
water or a certain width of channel is required of a bridge, these are design
constraints for the bridge. Other constraints may be more abstract, but nonetheless
physically meaningful, for example, in the mathematical analysis of
two machine parts interacting with one another in a computer model, the
constraint that one solid part is not allowed to share the same position in
space at the same time as another.
dead load. The load on a structure that is due to the weight of the structure itself.
design. The aspect of engineering that creates new machines, systems, structures,
and the like. Design involves an artistic component, in that the design
engineer must create something, usually expressed in a sketch or physical
model, that can be communicated to other engineers, who can then analyze
and criticize it, and flesh it out.
design assumptions. No engineering design can proceed through analysis
without some assumptions being made about what its salient features are or
what physical phenomena are important to its operation. Thus, it is a common
assumption that the series of bolts connecting a steel beam to a column
is so tightened that no movement is allowed between the parts. This design
assumption defines conditions under which the analysis must proceed.
design constraints. See constraints.
design load. The load that a component of a structure is designed to support.
E.I.T. See Engineer in Training.
Engineer in Training (E.I.T.). An engineer who has passed the Fundamentals
of Engineering Examination, the first step in becoming licensed as a professional
engineer.
engineering method. Akin to the scientific method, the engineering method
uses quantitative tools and experimental procedures to test and refine designs.
engineering science. Disciplines that follow the rigors of the scientific method
but have as their objects of study the artifacts of engineering rather than the
given objects and phenomena of the universe.
equilibrium state. The condition of an engineering system whereby it is in
equilibrium with its surroundings, that is, no change in the system will occur
without some change in the forces applied or the configuration obtaining.
“factor of safety.” The ratio of a load that causes failure to the design load of
a structure.
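The definition above reduces to a single ratio. A sketch with invented loads:

```python
def factor_of_safety(failure_load: float, design_load: float) -> float:
    """Ratio of the load that causes failure to the design load."""
    if design_load <= 0:
        raise ValueError("design load must be positive")
    return failure_load / design_load

# A member that fails at 50 tons but is designed for a 10-ton load
# carries a factor of safety of 5.
print(factor_of_safety(50.0, 10.0))  # → 5.0
```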
failure. The condition of not working as designed. A bridge that collapses
under a railroad train is obviously a failure of a catastrophic kind. A less dramatic
but nonetheless bothersome design failure might be a skyscraper that
sways in the wind not so much as to endanger the structure but enough to
cause the occupants of upper stories to become sick to their stomachs. A
project that goes over budget or is not aesthetically satisfying might also be
considered a failure by some engineers.
failure analysis. The determination of the sequence of events and cause of a
failure. Failure analysis can involve not only a detailed physical examination
of the broken parts of a failed structure or system but also the development of
conceptual and computer models to demonstrate how the failure progressed.
failure load. The load at which a structure fails to support the loads imposed
on it.
fatigue. The phenomenon whereby a part of a machine or structure develops
cracks (fatigue cracks) that grow under continued, repeated loading. When
the cracks grow to critical lengths, the machine part or structure can fracture.
forensic engineering. That branch of engineering that deals with the investigation,
nature, and causes of failures.
Fundamentals of Engineering Examination. The test that is used to qualify
engineers to use the Engineer-in-Training (E.I.T.) designation.
hypothesis. In engineering, a design on paper or in a computer. The design is
a hypothesis in the sense that it is an unproven assertion, albeit one that may
have a high level of professional experience and judgment backing up its
veracity. Also like a scientific hypothesis, an engineering design cannot be
proven absolutely to be correct, but can only be falsified. The falsification of
an engineering design (hypothesis) is known as a failure.
instability. The phenomenon whereby a small disturbance of an engineering
system results in a large change from its equilibrium state or condition of
stability. An aluminum beverage can that crumples under a slightly too strong
grip could be said to exhibit a buckling instability.
iteration. The engineering design process whereby successive calculations yield
successively more accurate predictions of an engineering system’s behavior.
Iterations often proceed in reaction to the degree to which the latest calculation
differs from the previous one, with an increment based on the difference.
The process is necessary in steel design, for example, because the principal
load on a structure is its dead weight, which naturally depends on the
size of the steel members used. The choice of the size of the members, in
turn, depends on the weight of the structure. To begin to iterate toward
a fixed design in this vicious circle requires an educated guess at the outset of
how heavy the structure must be. The more experienced an engineer, the
more accurate the guess is likely to be.
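The loop described in this entry can be sketched in a few lines. The rule relating member size to dead weight is invented purely for illustration; real steel design draws member weights from code-specified tables.

```python
# Iterative sizing: assume a dead weight, size the members to carry
# dead weight plus live load, recompute the dead weight those members
# imply, and repeat until the two agree.

def weight_of_members(capacity: float) -> float:
    """Invented rule: each unit of load capacity adds 0.1 unit of weight."""
    return 0.1 * capacity

def iterate_design(live_load: float, guess: float, tol: float = 1e-6) -> float:
    """Iterate from an educated guess until the assumed dead weight
    matches the weight of the members chosen to carry it."""
    weight = guess
    while True:
        new_weight = weight_of_members(weight + live_load)
        if abs(new_weight - weight) < tol:
            return new_weight
        weight = new_weight

# Starting from a rough guess of 50, the loop converges on a
# self-consistent dead weight of about 11.111.
print(round(iterate_design(live_load=100.0, guess=50.0), 3))  # → 11.111
```

A better initial guess shortens the iteration, which is the sense in which an experienced engineer's estimate saves work.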
licensing. The process by which engineers progress from E.I.T. to P.E. status.
live load. The load on a structure that is due to things other than the weight of
the structure itself. Live loads can include people, furniture, and materials
stored in an office building or warehouse, or the traffic on a bridge.
load. In structural engineering, the weight of a structure and the weight of any
objects resting upon it or moving across it. See also dead load, design load,
live load.
metal fatigue. See fatigue.
mode of failure. The manner in which an engineering system can fail. Most
systems have multiple modes of failure, and for design purposes the one that
is likely to occur under the smallest load on the system is termed the governing
mode of failure.
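Selecting the governing mode is a minimum over candidate failure loads. A sketch with invented mode names and loads:

```python
# The governing mode of failure is the one reached at the smallest load.
failure_loads = {"yielding": 120.0, "buckling": 85.0, "bolt shear": 150.0}

governing_mode = min(failure_loads, key=failure_loads.get)
print(governing_mode)  # → buckling
```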
model. A physical, mathematical, or computer-based representation of an engineering
system. Although a model is clearly not identical to the real system,
this fact is often forgotten in the interpretation of results from testing a model
or running a computer program.
P.E. See Professional Engineer.
prior art. In the field of patents, the technology that is in place at the time a
patent is applied for. To be patentable, an invention must not be obvious to
one versed in the prior art. See also state of the art.
professional engineer (P.E.). An engineer who has completed a number of
years in responsible charge of engineering work and who has passed both the
Fundamentals of Engineering and the Professional Engineering Examinations.
Under certain circumstances in some states, exemptions to examination
may be granted. Abbreviated P.E. in the United States.
“pushing the envelope.” Designing beyond engineering experience. Much
of engineering is making ever larger, lighter, faster, or smaller things. Such
evolutionary developments can, of course, be guided by experience with
what has already been made and is operating successfully. All examples of a
thing that have been successfully designed are said to be contained within an
envelope, which metaphorically encloses them. When data points representing
individual engineering systems of a certain kind are plotted on a graph, a
smooth curve going through the data points on the fringes of the collection
of points is said to be an envelope. To push the envelope is to extend the
range of experience, or to add a data point that moves the envelope curve
beyond the realm of experience, something that is a natural activity of engineers.
When it is done a little at a time, there is little chance that engineers
will be surprised by some totally new behavior or not have time to react to it
if it does appear to be developing. When the envelope is pushed too violently,
however, the design can surprise engineers with totally unexpected
and uncontrollable behavior.
scientific method. See engineering method.
S.E. A registered Structural Engineer.
size effect. Something that works fine on a small scale will not necessarily
work as well when it is scaled up. In structural engineering this phenomenon
has been known since ancient times but was not explained until Galileo did
so in the Renaissance. In structural engineering, the phenomenon has to do
with the fact that the weight of an object is proportional to its volume, which
is related to its size (height, length, or width) to the third power. The strength
of an object, however, is only proportional to the area that resists it being
pulled apart, and the area is related to size to the second power. There will
Reference Manual on Scientific Evidence
invariably be a point in the scaling up of a structure geometrically at which
the weight exceeds the strength and the structure cannot hold together. Size
or scale effects can be exhibited in all kinds of engineering systems, as in a
manufacturing process that works fine in the laboratory but is a complete
failure when scaled up to factory proportions. It is for this reason that novel
power plant designs go through several stages of being scaled up.
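The square-cube reasoning behind the size effect can be made concrete with a short numerical sketch. The material values below are hypothetical round numbers assumed for illustration, not taken from the text.

```python
# Illustrative sketch of the size effect (square-cube law). Weight grows
# with the cube of linear size while the load-bearing cross-sectional
# area grows only with the square, so stress (weight / area) grows
# linearly with scale and eventually exceeds the material's strength.
# All material values are hypothetical round numbers.

DENSITY = 2400.0   # kg/m^3, assumed concrete-like material
STRENGTH = 3.0e6   # Pa, assumed maximum stress the material can carry
G = 9.81           # m/s^2, gravitational acceleration

def stress_at_scale(scale: float) -> float:
    """Stress at the base of a solid cube of side `scale` meters."""
    weight = DENSITY * scale ** 3 * G   # N, proportional to volume (L^3)
    area = scale ** 2                   # m^2, proportional to L^2
    return weight / area                # Pa, grows linearly with scale

def failure_scale() -> float:
    """Scale at which stress first reaches the assumed strength."""
    # stress = DENSITY * G * scale, so scale_max = STRENGTH / (DENSITY * G)
    return STRENGTH / (DENSITY * G)

print(stress_at_scale(10) / stress_at_scale(1))   # 10.0: linear in scale
print(round(failure_scale()))                     # ~127 for these numbers
```

Doubling every dimension thus doubles the stress at the base, which is why a design that works at laboratory scale can fail outright at factory scale.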
stability. An engineering system is said to be stable if it exhibits a small response
to a small disturbance. Stable behavior is exhibited when the top of a tall
building moves just slightly to the side when the wind increases and returns
to its equilibrium position when the wind stops blowing. In contrast, if the
top of the building begins moving in an erratic way when the wind increases
from 40 to 42 miles per hour, the structure is said to be unstable at that wind speed.
“state of the art.” The sum total of knowledge, experience, and techniques
that are known and used by those practicing a particular branch of engineering
at a given time. See also prior art.
strength of materials. The engineering science that relates the change of shape
of a body to the forces applied to it and, by extension, to how much resistance
the body offers to breaking.
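The basic force-deformation relation of strength of materials can be illustrated with Hooke's law for a bar in axial tension. The modulus used is a typical handbook figure for steel, assumed here only as an example.

```python
# Minimal sketch of the core strength-of-materials relation: in the
# elastic range, a uniform bar under axial force F stretches by
# delta = F * L / (A * E) (Hooke's law). The modulus value is a typical
# handbook figure for steel, used here only as an assumed example.

E_STEEL_PA = 200e9  # Pa, Young's modulus of steel (assumed typical value)

def elongation_m(force_n: float, length_m: float, area_m2: float,
                 modulus_pa: float = E_STEEL_PA) -> float:
    """Elastic elongation of a uniform bar under an axial load."""
    return force_n * length_m / (area_m2 * modulus_pa)

# A 2 m steel bar with a 10 cm^2 cross section under 100 kN stretches 1 mm.
print(elongation_m(100e3, 2.0, 10e-4))  # 0.001
```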
structural engineer (S.E.). A civil engineer who specializes in the design and
analysis of structures, especially large structures like bridges and skyscrapers.
A licensed structural engineer is entitled to use the letters S.E. after his or her name.
structure. An assemblage of parts made of a material or materials (steel, concrete,
timber, etc.) and designed to carry loads.
truss. An arrangement of structural elements, usually in a series of triangular
configurations, used to build up a larger structural component that can span
long distances with minimal weight. Trusses are usually made of metal or
timber, the former being common in bridges and industrial applications and
the latter in domestic roof structures.
wind tunnel. An experimental facility in which models can be placed in a
controlled air stream to test their behavior in the wind or the air currents
flowing around them. Wind tunnels are commonly used in the development
of airplanes and large structures like suspension bridges and skyscrapers, which
are likely to be subjected to large wind forces. Prior to the wind-induced collapse
of the Tacoma Narrows Bridge in 1940, bridge decks were not subjected to
wind-tunnel testing. Subsequently, it became standard practice to test a model
of any proposed bridge deck design for stability in a wind tunnel.
References on Engineering Practice and Methods
James L. Adams, Flying Buttresses, Entropy, and O-Rings: The World of an
Engineer (1991).
David P. Billington, The Innovators: The Engineering Pioneers Who Made
America Modern (1996).
David P. Billington, Robert Maillart’s Bridges: The Art of Engineering (1979).
D.I. Blockley, The Nature of Structural Design and Safety (1980).
Kenneth A. Brown, Inventors at Work: Interviews with 16 Notable American
Inventors (1988).
Louis L. Bucciarelli, Designing Engineers (1994).
Steven M. Casey, Set Phasers on Stun: And Other True Tales of Design, Technology,
and Human Error (1993).
Jacob Feld & Kenneth L. Carper, Construction Failure (2d ed. 1997).
Eugene S. Ferguson, Engineering and the Mind’s Eye (1992).
Samuel C. Florman, The Introspective Engineer (1996).
Samuel C. Florman, The Civilized Engineer (1987).
Samuel C. Florman, The Existential Pleasures of Engineering (1976).
Forensic Engineering (Kenneth L. Carper ed., 1989).
Michael J. French, Invention and Evolution: Design in Nature and Engineering
(2d ed. 1994).
Gordon L. Glegg, The Development of Design (1981).
Richard E. Goodman, Karl Terzaghi: The Engineer as Artist (1999).
James E. Gordon, Structures, Or, Why Things Don’t Fall Down (Da Capo
Press 1981) (1978).
Barry B. LePatner & Sidney M. Johnson, Structural and Foundation Failures: A
Casebook for Architects, Engineers, and Lawyers (1982).
Matthys Levy & Mario Salvadori, Why Buildings Fall Down: How Structures
Fail (1992).
Richard L. Meehan, Getting Sued, and Other Tales of the Engineering Life
Henry Petroski, The Book on the Bookshelf (1999).
Henry Petroski, Remaking the World: Adventures in Engineering (1997).
Henry Petroski, Invention by Design: How Engineers Get from Thought to
Thing (1996).
Henry Petroski, Engineers of Dreams: Great Bridge Builders and the Spanning
of America (1995).
Henry Petroski, Design Paradigms: Case Histories of Error and Judgment in
Engineering (1994).
Henry Petroski, The Evolution of Useful Things (1992).
Henry Petroski, The Pencil: A History of Design and Circumstance (1990).
Henry Petroski, To Engineer Is Human: The Role of Failure in Successful
Design (1985).
Jacob Rabinow, Inventing for Fun and Profit (1990).
Ben R. Rich & Leo Janos, Skunk Works: A Personal Memoir of My Years at
Lockheed (1994).
Steven S. Ross, Construction Disasters: Design Failures, Causes, and Prevention
Mario Salvadori, Why Buildings Stand Up: The Strength of Architecture
(McGraw-Hill 1982) (1980).
Charles H. Thornton et al., Exposed Structure in Building Design (1993).
Walter G. Vincenti, What Engineers Know and How They Know It: Analytical
Studies from Aeronautical History (1990).
When Technology Fails: Significant Technological Disasters, Accidents, and
Failures of the Twentieth Century (Neil Schlager ed., 1994).
abuse-of-discretion standard, 13, 18, 23, 26, 27, 28, 443 n.18
additive effect, 429
anecdotal evidence, 90-92
association (between exposure and disease), 336, 337, 348, 357, 419-26
Bayesian approach (Bayes’ theorem), 117, 132-33, 151-52, 466, 467, 536-44
case reports, 474, 475
causal effect of injury
disputes over, 289-91
using evidence from clinical practice for, 91 n.19
causal inferences, 256-60
causality, 184-85
causation, 323
external causation, 451 n.45, 452, 457, 468-78, 479
proof by expert testimony, 32-38
confidence interval, 117-19, 243-44, 354-55, 360-61
confidentiality, 52-53
ethical obligation of survey research organization, 272
professional standards for survey researchers, 272
protecting identities of individual respondents, 271-72
surveyor-respondent privilege, not recognized, 272
confounders (third variables), 138
confounding factors, 369-73, 423, 428
correlation, 204-05
correlation coefficients, 135-39
damages
antitrust damages, 322-25
causation, 323
exclusionary conduct, 324
lost profits, 322
scope, 322-23
“tying” arrangement, 324-25
apportionment, 309-10, 320, 321
avoided cost, 293-94
causal effect of injury, disputes over, 289-91
characterization of harmful event, 284-94
“but-for” analysis, 284-87
and costs, 293-94
disputes over economic effects, 287-89
stock options, 294
tax treatment of, 291-93
damages study, 280-81, 328-29
disaggregation, see multiple challenged acts
double-counting, avoiding, 286, 312, 316, 320, 322
earnings, what constitutes, 295
employment law, 310
expectation, 283
expert’s qualifications, 282-83
explanatory variables, 323
future earnings, projection of, 299-300
actual earnings of plaintiff after harmful event, 299
profitability of business, 299
damages, continued
future losses, discounting, 300-05
appraisal approach, 305
capitalization factor, 303-04
interest rate, 301-03
offset by growth in earnings, 302
future losses, projection of, 300
in general, 280-81
intellectual property damages
apportionment of, 320-22
in general, 316-22
market-share analysis (sales), 318-19
price erosion, 319-20
“reasonable royalty” and designing around the patent, 316-17, 321
liquidated damages, 326-27
lost profit, 320
measuring losses, tax considerations, 291-93
mitigation, 295-96, 312-14
multiple challenged acts, 305-07
patent infringement by public utility, 309-10
personal lost earnings, 311-16
benefits, 311-12
discounting, 315
mitigation, 312-14
projected earnings, 311, 314
retirement and mortality, 316
prejudgment interest, calculation of, 297-98
price erosion, 287, 288, 319-20
and regression analysis, 282
reliance, 283
securities damages, 325-26
market effect of adverse information, 326
turnover patterns in ownership, 326
structured settlements, 311
subsequent unexpected events, 311
and surveys, 282
Daubert, 442-43, 489, 537, 546, 551, 553
as viewed by a scientist, 81-82
gatekeeping function, 489
see generally 10-38
defendant’s fallacy, 539
dependent variable, choosing, 181, 186-87, 195
DNA evidence
affinal model, 530
allele, 492, 496
amplification, 497-98, 515
autoradiograph, 517
band shift, 517
basic product rule, 525-31, 556
chip, 552
database, 532-34
Daubert, 489, 537, 546, 551, 553
gatekeeping function, 489
DNA evidence, continued
defendant’s fallacy, 539
degradation, 506, 507, 514, 516
deoxyribonucleic acid (DNA)
applications of non-human DNA technology, 549-59
definition, 487, 491-96
and Federal Rules of Evidence
Rule 104, 523 n.175
Rule 401, 523 n.175
Rule 403, 500 n.69, 517 n.145, 523 n.175, 537, 544, 545
Rule 702, 500 n.69, 537, 544, 545
laboratory analysis of,
Bayes’ theorem, 536, 544
binning, 535
match, 516-19, 534
window, 535
mitochondrial DNA, 495
sequence, 492
Hardy-Weinberg, 526, 528, 557, 558
linkage, 526, 528, 557
genome, 491
genotype, 493, 494, 502, 508, 518, 519, 520
multilocus, 525
single locus, 526
heterozygote, 508
homozygote, 508
interim ceiling method, 528
likelihood ratio
admissibility, 543-45
definition, 534-36
locus, 492
mitochondria, 495, 505
nucleotide, 491
nucleus, 491, 505
proficiency test, 511-12
prosecutor’s fallacy, 539, 539 n.239
quality assurance, 509-12
quality control, 509-12
random amplified polymorphic DNA (RAPD), 552, 554
random match probability, 525
admissibility, 530, 537-48
and databases, 532, 533
juror comprehension of, 537-45
random mating, 525
reverse dot blot, 517
sequence-specific oligonucleotide (SSO) probe, 561
short tandem repeat (STR), 494
single nucleotide polymorphism (SNP), 492
Southern blotting, 501
testing methods
PCR, 488, 493 n.32, 497, 500, 504, 506, 507, 515, 551, 552, 561
restriction fragment length polymorphism (RFLP), 501, 506, 556
variable number tandem repeat (VNTR), 494, 500-03
DNA evidence, continued
transposition fallacy, 544
true match, 534
amplified fragment length polymorphism (AFLP), 499 n.63, 552
base pair (bp), 491, 492, 505
chromosome, 491
polymorphism, 494, 496
dose-response relationship, 346, 347, 377, 406, 475
ecological fallacy, 344
engineering
compared with science, 579-88
difference, 579
struggles to define in the courts, 579-80
similarities, 584-86
artistic component, 586
assumptions, 592-94, 596, 605
computer-aided design (CAD), 594
generally, 596
difficulty of defining, 600-01, 602
constraints, 592
experience as pitfall, 599-600
factor of safety, 596
as guide to successful designs, 612
role of, 604
value of, 604, 608
design loads, 592
dead load, 593-94
pushing the envelope, 597-99, 613
state of the art, 595
distinguished from scientists, 581
professional qualifications, 581-84
history, 612-16
in general, 578
epidemiology
association (between exposure and disease), 336, 337, 348, 357
measuring exposure
biological marker, 366
ecological fallacy, 344
etiology, 335
false results (erroneous association)
alpha, 356, 357
beta, 362
biases, 349, 354, 355, 363-69
information bias, 365-68
misclassification bias, 368
selection bias, 363-65
epidemiology, continued
false results, continued
confounding factor, 369-73
controlling for
stratification, 373
multivariate analysis, 373
false negative error, 362
false positive error, 356-61
power, 362-63
random (sampling) error, 354
confidence interval, 354-55, 360-61
statistical significance, 354, 357, 359-60, 362
true association, 355
general causation, 336, 374-79, 382
agent, 335, 336, 337, 338-39, 340
single, 379
multiple, 379
biological plausibility, 375, 378
dose-response relationship, 346, 347, 377
guidelines for determining, 375-79
replication, 377-78
in general, 335-38
incidence, 343, 348
prevalence, 343
specific (individual) causation, 336, 381-86
admissibility of evidence, 382
sufficiency of evidence, 382-86
specificity, 379
animal (in vivo), 345-46
extrapolation, 346
generalizability of, 372 n.305
human (in vitro), 346-47
in general, 337, 338-47
clinical, 338, 339
experimental, 338-39
multiple, 380-81
meta-analysis, 380
observational, 339-45
case-control, 342-43
and bias, 363-64, 365-66
cohort, 340-42
and bias, 364
and toxicology, compared, 346-47
cross-sectional, 339, 343-44
ecological, 340, 344-45
hospital-based, 364
time-line (secular trend), 345
toxicologic, 345-47
research design, 338-39, 372
epidemiology, continued
study results, interpretation of
adjustment for non-comparable groups, 352-54
attributable risk, 351-52, 385
odds ratio, 350-51
relative risk, 348-49, 376-77
standardized mortality ratio (SMR), 353
error in measuring variables, 200
etiology, 335, 451, 458, 460, 474, 476, 477 n.139
expert, qualification of, 201, 282-83
advanced degree, 415-16
basis of toxicologist’s expert opinion, 416
board certification, 417, 448
other indicia of expertise, 418
physician, 416, 447
professional organization, membership in, 417
expert evidence, management of, see management of expert evidence
in engineering, 581-84
in statistics, 87
in surveys, 238
explanatory variables, 92 n.23, 181, 187-89, 195-98, 323
exposure (to toxic substance), 472-73
extrapolation, 346
from animal and cell research to humans, 410-11, 412, 419
in statistical experiments, 96-97
falsification (falsifiability), 70-71, 78
Federal Rules of Evidence
Rule 102, 29
Rule 104, 523 n.175
Rule 104(a), 11
Rule 202, 27
Rule 401, 523 n.175
Rule 403, 86, 500 n.69, 517 n.145, 523 n.175, 537, 544, 545
Rule 702, 11, 12, 15, 18, 21, 22, 86, 443 n.18, 500 n.69, 537, 544, 545
forensic identification (challenges to), 31-32
Frye test, 11, 23, 24, 25, 26
gatekeeping function, 11, 15, 16, 17, 18, 19, 23, 27, 30, 38, 489
general acceptance, 11, 23, 24, 25, 26
general causation, 336, 374-79, 382, 419-22
General Electric Co. v. Joiner, 10, 13-15, 18, 26, 32-34
generalizability of studies, 372 n.305
how science works
historical background, 68-69
myths (and countermanding facts)
duty of falsification, 78
honesty and integrity of scientists, 79
open-mindedness of scientists, 78
pseudo-science easily distinguished, 78
science as open book, 78
theories only theories, 79
triumph of reason over authority, 77-78
how science works, continued
professional scientists
institutions for, 75-76
reward system and, 76-77
rigor in reporting procedures and data, 73, 79
science and law compared
different word use, 80-81
different objectives, 81
science as adversary process, 74
theoretical underpinnings
falsification (falsifiability), 70-71, 78
as element in Daubert, 79 n.15, 81 n.17
as scientist’s duty, 78
difficulties with, 71
paradigm shifts, 71-73
shortcomings as theory, 73
scientific method, 69-70
as element in Daubert, 79 n.15
hypothesis tests, 121-30, 192, 356 n.60
“intellectual rigor” test, 18, 19, 23, 24, 25, 26
intercept, 140
Kumho Tire Co. v. Carmichael, 10, 15-23, 26-33, 35-38
least-squares regression, 217-18
likelihood ratio
admissibility, 543-45
definition, 534-36
linear association, 136-37
linear regression model, 207-10
management of expert evidence
collateral estoppel, 48
confidentiality, 52-53
court-appointed experts, 43, 45, 52, 59-63
discovery of
attorney work product, 50
testifying experts, 49
nontestifying experts, 51
nonretained experts, 51
court-appointed experts, 52
expert testimony
need for, 47
timing of designation of testifying experts, 43
limiting the number of testifying experts, 47-48
magistrate judges, use of, 48-49
motions in limine, 53-54
pretrial conferences
defining and narrowing issues, 43
experts reports, 44, 50-51
initial conference, 42
final pretrial conference, 56-57
protective orders, 52-53
reference guides, 45-47
special masters, use of, 43, 63-66
management of expert evidence, continued
summary judgment, 54-56
technical advisor, 59
defining the trial structure, 57
jury management, 57-58
structuring expert testimony, 58
presentation of evidence, 58
videotaped depositions, 52
measurement error, 145 n.213, 200, 518 n.148
medical testimony
Americans with Disabilities Act, 441, 479
Bayes’ theorem, 466, 467
Black v. Food Lion, Inc., 442 n.15, 445 n.29
case reports, 474, 475
case series, 474
causation (external), 451 n.45, 452, 457, 468-78, 479
Daubert, 442-43
diagnostic tests
clinical tests, 460-61
generally, 457-58
laboratory tests, 459-60
pathology tests, 460
differential diagnosis, 443-44, 463, 467, 470 n.112, 476 n.135, 477 n.139
differential etiology, 443-44, 470 n.112, 474 n.126, 476 n.135, 477 n.139
dose-response, 475
ERISA, 441, 479, 478 n.145
etiology, 451, 458, 460, 474, 476, 477 n.139
exposure (to toxic substance), 472-73
General Electric Co. v. Joiner, 442 n.14, 443 n.18
Kumho Tire, 442-43
sensitivity, 461, 465-66
specificity, 461, 465-66
symptomatology, 453-54
tissue biopsy, 457, 458, 460
true negative rate, see “specificity”
true positive rate, see “sensitivity”
multiple regression analysis
causality, 184-85
census undercount cases, questionable use in, 183
computer output of, 218-19
correlation, 204-05
death penalty cases, questionable use in, 183
statistical studies of,
dependent variable, choosing, 181, 186-87, 195
employment discrimination, 181-83, 191
scatterplot, 204
use of statistics in assessing disparate impact of,
and use of survey research, 233
expert, qualification of, 201
explanatory variables, 181, 187-89, 195-98
feedback, 195-96
forecasting, 219-221
standard error of, 220-21
multiple regression analysis, continued
growth of use in court, 182
hypothesis tests, 192
in general, 181-85, 204-21
interpreting results, 191-200
correlation versus causality, 183
error in measuring variables, 200
practical significance versus statistical significance, 191-95
regression slope, 212
robustness, 195-200
statistical significance, 191-95
linear regression model, 207-10
measurement error, 200
model specification (choosing a model), 186-91
errors in model, 197-98
nonlinear models, 210
null hypothesis, 193-95, 214, 219
patent infringement, 183
precision of results, 212-18
goodness-of-fit, 215-17
least-squares regression, 217-18
standard error, 212-15, 216, 221
p-value, 194, 219
regression line, 207, 208-10
goodness-of-fit, 209, 215-16
regression residuals, 210
research design, 185-91
formulating the question for investigation, 186
spurious correlation, 184, 195
standard deviation, 213
statistical evidence, 201-03
statistical significance
hypothesis test, 194
p-value, 194
null hypothesis, 122-23, 193-95, 214, 219, 356
observational studies, 94-96, 339-45
odds ratio, 109, 350-51
patient’s medical history, 428-31
posterior probabilities, 131-33, 534, 536-37, 544-45
power, 125-26, 362-63
prosecutor’s fallacy, 539, 539 n.239
p-values, 121-30, 156-57, 194, 219, 357
random (sampling) error, 115, 354
randomized controlled experiments, 93-94
reference guides, 45-47
regression analysis, 282
regression lines, 139-43, 207, 208-10
regression slope, 212
research design
in vitro, 410
in vivo, 406-09
scatter diagrams (scatter plot), 134-35, 204
science, how it works, see how science works
scientific method, 69-70
sensitivity, 461, 465-66
multiple-chemical hypersensitivity, 416 n.43
slope, 140
regression slope, 212
specific (individual) causation, 336, 381-86, 422-26
specificity, 379, 461, 465-66
standard deviation, 114, 213
standard error, 212-15, 216, 221
statistical significance, 191-95, 354, 357, 359-60, 362
hypothesis test, 194
p-value, 194
statistics
anecdotal evidence, 90-92
income and education, 134
average, in statistical parlance, 113 n.100
Bayesian approach, 117, 132-33, 151-52
confidence intervals, 117-19
confounders (third variables), 138
correlation coefficients, 135-39
data, collection of
censuses, 343
individual measurements, 102-04
observational studies, 94-96
proper recording, 104
randomized controlled experiments, 93-94
reliability, 102-03
surveys, 98-102
validity, 103-04
data, inferences drawn from
estimation, 117-21
in general, 115-17
hypothesis tests, 121-30
p-values, 121-30, 156-57
posterior probabilities, 131-33
data, presentation and analysis of
center of distribution, 113-14
graphs, 110-13
interpreting rates or percentages, 107
misleading data, 105-07
percentages, 108
variability, 114-15
discrimination, 108, 145, 147-49
enhancing statistical testimony, 88-89
narrative testimony, 89
sequential testimony, 89
expertise in, 87
applied statistics, 86
probability theory, 86
theoretical statistics, 86
two-expert cases, 87
statistics, continued
in general, 85-86
association, 134-35
distribution of batch of numbers, 112
histograms, 112
scatter diagrams, 134-35
trends, 110-11
linear association, 136-37
mean, 113-114
median, 113-14
mode, 113
normal curve, 155-58
null hypothesis, 122-23
odds ratio, 109
one-tailed and two-tailed tests, 126-27
outliers, 137
percentage-related statistics, 108
power, 125-26
calculation of, 157-58
random error, 115
range, 114
regression lines, 139-43
intercept, 140
slope, 140
unit of analysis, 141-42
and voting rights cases, 142-43
standard deviation, 114
standard error, 117-19, 148, 153
statistical significance, 93 n.28, 116, 121, 123-25
surveys, 98-102
transposition fallacy, 131 n.167
trends, 110-11
two-tailed tests, see one-tailed tests
survey research
admissibility of, 233
advantages of, 231-32
attorney participation in survey, 237
causal inferences, 256-60
change of venue, 240, 243, 261
comparing survey evidence to individual testimony, 235-36
computer-assisted interview (CAI), 262-63
computer-assisted telephone interviewing (CATI), 262
ethical obligation of survey research organization, 272
professional standards for survey researchers, 272
protecting identities of individual respondents, 271-72
surveyor-respondent privilege, not recognized, 272
consumer impressions, 256
data entry, 268
design of survey, 236-39
disclosure of methodology and results, 269-70
in general, 231-36
in-person interviews, 260-261
survey research, continued
internet surveys, 264
interviewer surveys, 264-67
objective administration of survey
procedures to minimize error and biases, 267
sponsorship disclosure, 266
selecting and training interviewers, 264-65
mail surveys, 263-64
objectivity of, 237-38
pilot-testing, 271
pretest, 249, 271
population definition and sampling, 239-48
bias, 245-47
cluster sampling, 243
confidence interval, 243-44
convenience sampling, 244
mail intercept survey, 246-47
nonresponse, 245-46
probability sampling, 242-44
random sampling, 242
representativeness of sample, 245
response rates, 245-46
sampling frame (or universe), 240-42
screening potential respondents, 247
selecting the sample population, 242-44
stratified sampling, 243
target population, 240
purpose of survey, 236-39
questions, 248-49
ambiguous responses, use of probes to clarify, 253-54
clarity of, 248-49
consumer impressions, 256
control group or question, 256-60
filter questions to reduce guessing, 249-51
open-ended versus closed-ended questions, 251-55
order of questions, effect of, 254-55
pretests, 248-49
primacy effect, 255
recency effect, 255
relevance of survey, 236-37
reporting, 270-71
responses, grouping of, 268
skip pattern, 262-63, 265
survey expertise, 238
telephone surveys, 261-63
use of surveys in court, 233-35
surveys, 98-102, 282
see also survey research
as element in Daubert, 79 n.15
toxicology
acute toxicity testing, 406-07
additive effect, 429
antagonism, 429
association (see general and specific causation in this entry)
chemical structure of compound, 421
confounding factors, 423, 428
dose-response relationship, 406
and epidemiology, 413-15
expert qualifications
advanced degree, 415-16
basis of toxicologist’s expert opinion, 416
board certification, 417
other indicia of expertise, 418
physician, 416
professional organization, membership in, 417
extrapolation from animal and cell research to humans, 410-11, 412, 419
in general, 403-19
general causation, 419-22
animal testing, extrapolation from, 419-20
biological plausibility, 422
chemical structure of compound, 421
in general, 419
in vitro tests of compound, 422
organ specificity of chemical, 420-21
genome, human, effect of understanding on torts, 421
good laboratory practice, 411-12
multiple-chemical hypersensitivity, 416 n.43
one-hit theory (model), 407-08
patient’s medical history
competing causes (confounding factors) of disease, 428-29
different susceptibilities to compound, 430
effect of multiple agents, 429
evidence of interaction with other chemicals, 429
in general, 427-31
laboratory tests as indication of exposure to compound, 428
when data contradict expert’s opinion, 430-31
potentiation, 429
regulatory proceedings, 404
research design
in general, 405-10
in vitro, 410
in vivo, 406-09
maximum tolerated dose, 408-09
no observable effect level, 407
no threshold model, 407-08
safety and risk assessments, 411-13
specific causation, 422-26
absorption of compound into body, 425
excretory route of compound, 425
exposure, 424
metabolism, 425
no observable effect level, 426
regulatory standards, 423-24
structure activity relationships (SAR), 421
synergistic effect, 429
torts, 404
transposition fallacy, 131 n.167, 544
two-expert cases, 87
workings of science, see how science works
The Federal Judicial Center
The Chief Justice of the United States, Chair
Judge Stanley Marcus, U.S. Court of Appeals for the Eleventh Circuit
Judge Pauline Newman, U.S. Court of Appeals for the Federal Circuit
Chief Judge Jean C. Hamilton, U.S. District Court for the Eastern District of Missouri
Senior Judge Robert J. Bryan, U.S. District Court for the Western District of Washington
Judge William H. Yohn, Jr., U.S. District Court for the Eastern District of Pennsylvania
Judge A. Thomas Small, U.S. Bankruptcy Court for the Eastern District of North Carolina
Magistrate Judge Virginia M. Morgan, U.S. District Court for the Eastern District of Michigan
Leonidas Ralph Mecham, Director of the Administrative Office of the U.S. Courts
Director
Judge Fern M. Smith
Deputy Director
Russell R. Wheeler
About the Federal Judicial Center
The Federal Judicial Center is the research and education agency of the federal judicial system. It
was established by Congress in 1967 (28 U.S.C. §§ 620–629), on the recommendation of the
Judicial Conference of the United States.
By statute, the Chief Justice of the United States chairs the Center’s Board, which also includes
the director of the Administrative Office of the U.S. Courts and seven judges elected by the
Judicial Conference.
The Director’s Office is responsible for the Center’s overall management and its relations with
other organizations. Its Systems Innovation & Development Office provides technical support for
Center education and research. Communications Policy & Design edits, produces, and distributes
all Center print and electronic publications, operates the Federal Judicial Television Network, and
through the Information Services Office maintains a specialized library collection of materials on
judicial administration.
The Judicial Education Division develops and administers education programs and services for
judges, career court attorneys, and federal defender office personnel. These include orientation
seminars, continuing education programs, and special-focus workshops. The Interjudicial Affairs
Office provides information about judicial improvement to judges and others of foreign countries,
and identifies international legal developments of importance to personnel of the federal courts.
The Court Education Division develops and administers education and training programs and
services for nonjudicial court personnel, such as those in clerks’ offices and probation and pretrial
services offices, and management training programs for court teams of judges and managers.
The Research Division undertakes empirical and exploratory research on federal judicial processes,
court management, and sentencing and its consequences, often at the request of the Judicial
Conference and its committees, the courts themselves, or other groups in the federal system. The
Federal Judicial History Office develops programs relating to the history of the judicial branch and
assists courts with their own judicial history programs.
