they only fool themselves when they act as if scientific opinion automatically dictates the correct answer.
271. NRC II, supra note 1, at 192. As indicated in earlier sections, these “underlying data” have been
collected and analyzed for many genetic systems.
272. Id.
273. Id. at 193 (“Certainly, a judge’s or juror’s untutored impression of how unusual a DNA profile
is could be very wrong. This possibility militates in favor of going beyond a simple statement of a
match, to give the trier of fact some expert guidance about its probative value.”).
274. Cf. id. at 195 (“Although different jurors might interpret the same words differently, the formulas
provided . . . produce frequency estimates for profiles of three or more loci that almost always can be
conservatively described as ‘rare.’”).
275. State v. Bloom, 516 N.W.2d 159, 166–67 (Minn. 1994) (“Since it may be pointless to expect
ever to reach a consensus on how to estimate, with any degree of precision, the probability of a random
match, and that given the great difficulty in educating the jury as to precisely what that figure means and
does not mean, it might make sense to simply try to arrive at a fair way of explaining the significance of
the match in a verbal, qualitative, non-quantitative, nonstatistical way.”); see also Kenneth R. Kreiling,
Review-Comment, DNA Technology in Forensic Science, 33 Jurimetrics J. 449 (1993).
Reference Guide on DNA Evidence
547
used and the available population data—avoid assertions in court that a particular
genotype is unique in the population.”276 Following this advice in the context
of a profile derived from a handful of single-locus VNTR probes, several
courts initially held that assertions of uniqueness are inadmissible,277 while others
found such testimony less troublesome.278
With the advent of more population data and loci, the 1996 NRC report
pointedly observed that “we are approaching the time when many scientists will
wish to offer opinions about the source of incriminating DNA.”279 Of course,
the uniqueness of any object, from a snowflake to a fingerprint, in a population
that cannot be enumerated never can be proved directly. The committee therefore
wrote that “[t]here is no ‘bright-line’ standard in law or science that can
pick out exactly how small the probability of the existence of a given profile in
more than one member of a population must be before assertions of uniqueness
are justified . . . . There might already be cases in which it is defensible for an
expert to assert that, assuming that there has been no sample mishandling or
laboratory error, the profile’s probable uniqueness means that the two DNA
samples come from the same person.”280
276. NRC I, supra note 1, at 92.
277. See State v. Hummert, 905 P.2d 493 (Ariz. Ct. App. 1994), rev’d, 933 P.2d 1187 (1997); State v.
Cauthron, 846 P.2d 502, 516 (Wash. 1993) (experts presented no “probability statistics” but claimed
that the DNA could not have come from anyone else on earth), overruled by State v. Copeland, 922 P.2d
1304 (Wash. 1996); State v. Buckner, 890 P.2d 460, 462 (Wash. 1995) (testimony that the profile
“would occur in only one Caucasian in 19.25 billion” and that because “this figure is almost four times
the present population of the Earth, the match was unique” was improper), aff’d on reconsideration, 941
P.2d 667 (Wash. 1997).
278. State v. Zollo, 654 A.2d 359, 362 (Conn. App. Ct. 1995) (testimony that the chance “that the
DNA sample came from someone other than the defendant was ‘so small that . . . it would not be worth
considering’” was not inadmissible as an opinion on an ultimate issue in the case “because his opinion
could reasonably have aided the jury in understanding the [complex] DNA testimony”); Andrews v.
State, 533 So. 2d 841, 849 (Fla. Ct. App. 1988) (geneticist “concluded that to a reasonable degree of
scientific certainty, appellant’s DNA was present in the vaginal smear taken from the victim”); People
v. Heaton, 640 N.E.2d 630, 633 (Ill. App. Ct. 1994) (an expert who used the product rule to estimate
the frequency at 1/52,600 testified over objection to his opinion that the “defendant was the donor of
the semen”); State v. Pierce, No. 89-CA-30, 1990 WL 97596, at *2–3 (Ohio Ct. App. July 9, 1990)
(affirming admission of testimony that the probability would be one in 40 billion “that the match would
be to a random occurrence,” and “[t]he DNA is from the same individual”), aff’d, 597 N.E.2d 107
(Ohio 1992); cf. State v. Bogan, 905 P.2d 515, 517 (Ariz. Ct. App. 1995) (it was proper to allow a
molecular biologist to testify, on the basis of a PCR-based analysis, that he “was confident the seed pods
found in the truck originated from” a palo verde tree near a corpse); Commonwealth v. Crews, 640
A.2d 395, 402 (Pa. 1994) (testimony of an FBI examiner that he did not know of a single instance
“where different individuals that are unrelated have been shown to have matching DNA profiles for
three or four probes” was admissible under Frye despite an objection to the lack of a frequency estimate,
which had been given at a preliminary hearing as 1/400).
279. NRC II, supra note 1, at 194.
280. As an illustration, the committee cited State v. Bloom, 516 N.W.2d 159, 160 n.2 (Minn. 1994),
a case in which a respected population geneticist was prepared to testify that “in his opinion the nine-locus
match constituted ‘overwhelming evidence that, to a reasonable degree of scientific certainty, the
Reference Manual on Scientific Evidence
548
The report concludes that “[b]ecause the difference between a vanishingly
small probability and an opinion of uniqueness is so slight, courts may choose to
allow the latter along with, or instead of the former, when the scientific findings
support such testimony.”281 Confronted with an objection to an assertion of
uniqueness, a court may need to verify that a large number of sufficiently polymorphic
loci have been tested.282
DNA from the victim’s vaginal swab came from the [defendant], to the exclusion of all others.’” NRC
II, supra note 1, at 194–95 n.84. See also People v. Hickey, 687 N.E.2d 910, 917 (Ill. 1997) (given the
results of nine VNTR probes plus PCR-based typing, two experts testified that a semen sample originated
from the defendant).
281. NRC II, supra note 1, at 195. If an opinion as to uniqueness were simply tacked on to a
statistical presentation, it might be challenged as cumulative. Cf. id. (“Opinion testimony about uniqueness
would simplify the presentation of evidence by dispensing with specific estimates of population
frequencies or probabilities. If the basis of an opinion were attacked on statistical grounds, however, or
if frequency or probability estimates were admitted, this advantage would be lost.”).
282. The NAS committee merely suggested that a sufficiently small random match probability compared
to the earth’s population could justify a conclusion of uniqueness. The committee did not propose
any single figure, but asked: “Does a profile frequency of the reciprocal of twice the earth’s
population suffice? Ten times? One hundred times?” Id. at 194. Another approach would be to consider
the probability of recurrence in a close relative. Cf. Belin et al., supra note 171.
The FBI uses a slightly complex amalgam of such approaches. Rather than ask whether a profile
probably is unique in the world’s population, the examiner focuses on smaller populations that might be
the source of the evidentiary DNA. When the surrounding evidence does not point to any particular
ethnic group, the analyst takes the random match probability and multiplies it by ten (to account for any
uncertainty due to population structure). The analyst then asks what the probability would be of generating
a population of unrelated people as large as that of the entire U.S. (290 million people) that contains no
duplicate of the evidentiary profile. If that “no-duplication” probability is one percent or less,
the examiner must report that the suspect “is the source of the DNA obtained from [the evidentiary]
specimen . . . .” Memorandum from Jenifer A.L. Smith to Laboratory, Oct. 1, 1997, at 3. Similarly, the
FBI computes the no-duplication probability in each ethnic or racial subgroup that may be of interest.
If that probability is 1% or less, the examiner must report that the suspect is the source of the DNA. Id.
Finally, if the examiner thinks that a close relative could be the source, and these individuals cannot be
tested, standard genetic formulae are used to find the probability of the same profile in a close relative,
that probability is multiplied by ten, and the resulting no-duplication probability for a small family
(generally ten or fewer individuals) is computed. Once again, if the no-duplication probability is no
more than 1%, the examiner reports that the suspect is the source. Id. at 3–4. In an apparent genuflection
to older cases requiring testifying physicians to have “a reasonable degree of medical certainty,” the
analyst must add the phrase “to a reasonable degree of scientific certainty” to the ultimate opinion that
the suspect is the source. Id. at 2–4. This type of testimony is questioned in Evett & Weir, supra note
174, at 244.
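The arithmetic behind this no-duplication criterion can be sketched in a few lines. Everything below is an illustration, not the Bureau's actual procedure or software: the match probabilities are hypothetical, and the tenfold inflation and population size simply follow the description in the note above (the reporting threshold itself is a matter of the policy described there).

```python
def no_duplication_probability(p, n, safety_factor=10):
    """Probability that a population of n unrelated people contains no
    second copy of a profile whose random match probability p has been
    conservatively inflated by safety_factor to allow for uncertainty
    due to population structure."""
    return (1.0 - safety_factor * p) ** n

N_US = 290_000_000  # U.S. population figure used in the note above

# Hypothetical very rare profile (random match probability 1 in 10^12):
rare = no_duplication_probability(1e-12, N_US)    # ~0.997: probably unique
# A hypothetical more common profile (1 in 100 million) gives essentially
# no assurance of uniqueness:
common = no_duplication_probability(1e-8, N_US)   # near 0: a duplicate is almost certain
```

The same computation could be run with a smaller n for a family of close relatives, substituting the genetically derived recurrence probability for p, as the note describes.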
VIII. Novel Applications of DNA Technology
Most routine applications of DNA technology in the forensic setting involve
the identification of human beings—suspects in criminal cases, missing persons,
or victims of mass disasters. However, inasmuch as DNA technology can be
applied to the analysis of any kind of biological evidence containing DNA, and
because the technology is advancing rapidly, unusual applications are inevitable.
In cases in which the evidentiary DNA is of human origin, new methods of
analyzing DNA will come into at least occasional use, and new loci or DNA
polymorphisms will be used for forensic work. In other cases, the evidentiary
DNA will come from non-human organisms—household pets,283 wild animals,284
insects,285 even bacteria286 and viruses.287 These applications are directed either
at distinguishing among species or at distinguishing among individuals (or subgroups)
within a species. These two tasks can raise somewhat different scientific
issues, and no single, mechanically applied test can be formulated to assess the
validity of the diversity of applications and methods that might be encountered.
Instead, this section outlines and describes four factors that may be helpful in
deciding whether a new application is scientifically sound. These are the novelty
of the application, the validity of the underlying scientific theory, the validity
of any statistical interpretations, and the relevant scientific community to
consult in assessing the application. We illustrate these considerations in the
context of three novel, recent applications of DNA technology to law enforcement:
• Although federal law prohibits the export of bear products, individuals in
this country have offered to supply bear gall bladder for export to Asia,
where it is prized for its supposed medicinal properties. In one investigation,
the National Fish and Wildlife Forensic Laboratory, using DNA test-
283. Ronald K. Fitten, Dog’s DNA May Be Key in Murder Trial: Evidence Likely to Set Court Precedent,
Seattle Times, Mar. 9, 1998, at A1, available in 1998 WL 3142721 (reporting a trial court ruling in favor
of admitting evidence linking DNA found on the jackets of two men to a pit bull that the men allegedly
shot and killed, along with its owners).
284. For example, hunters sometimes claim that they have cuts of beef rather than the remnants of
illegally obtained wildlife. These claims can be verified or refuted by DNA analysis. Cf. State v. Demers,
707 A.2d 276, 277–78 (Vt. 1997) (unspecified DNA analysis of deer blood and hair helped supply
probable cause for search warrant to look for evidence of illegally hunted deer in defendant’s home).
285. Felix A.H. Sperling et al., A DNA-Based Approach to the Identification of Insect Species Used for
Postmortem Interval Estimation, 39 J. Forensic Sci. 418 (1994).
286. DNA testing of bacteria in food can help establish the source of outbreaks of food poisoning and
thereby facilitate recalls of contaminated foodstuffs. See Jo Thomas, Outbreak of Food Poisoning Leads to
Warning on Hot Dogs and Cold Cuts, N.Y. Times, Dec. 24, 1998.
287. See State v. Schmidt, 699 So. 2d 448 (La. Ct. App. 1997) (where the defendant was a physician
accused of murdering his former lover by injecting her with the AIDS virus, the state’s expert witnesses
established that PCR-based analysis of human HIV can be used to identify HIV strains so as to satisfy
Daubert).
ing, determined that the material offered for export actually came from a
pig, absolving the suspect of any export law violations.288
• In State v. Bogan,289 a woman’s body was found in the desert, near several
palo verde trees. A detective noticed two seed pods in the bed of a truck
that the defendant was driving before the murder. A biologist performed
DNA profiling on this type of palo verde and testified that the two pods
“were identical” and “matched completely with” a particular tree and “didn’t
match any of the [other] trees,” and that he felt “quite confident in concluding
that” the tree’s DNA would be distinguishable from that of “any
tree that might be furnished” to him. After the jury convicted the defendant
of murder, jurors reported that they found this testimony very persuasive.290
• In R. v. Beamish, a woman disappeared from her home on Prince Edward
Island, on Canada’s eastern seaboard. Weeks later a man’s brown leather
jacket stained with blood was discovered in a plastic bag in the woods. In
the jacket’s lining were white cat hairs. After the missing woman’s body
was found in a shallow grave, her estranged common-law husband was
arrested and charged. He lived with his parents and a white cat. Laboratory
analysis showed the blood on the jacket to be the victim’s, and the hairs
were ascertained to match the family cat at ten STR loci. The defendant
was convicted of the murder.291
A. Is the Application Novel?
The more novel and untested an application is, the more problematic is its
introduction into evidence. In many cases, however, an application can be new
to the legal system but be well established in the field of scientific inquiry from
which it derives. This can be ascertained from a survey of the peer-reviewed
scientific literature and the statements of experts in the field.292
288. Interview with Dr. Edgard Espinoza, Deputy Director, National Fish and Wildlife Forensic
Laboratory, in Ashland, Ore. (June 1998). Also, FDA regulations do not prohibit mislabeling of pig gall
bladder.
289. 905 P.2d 515 (Ariz. Ct. App. 1995).
290. Brent Whiting, Tree’s DNA “Fingerprint” Splinters Killer’s Defense, Ariz. Republic, May 28,
1993, at A1, available in 1993 WL 8186972; see also Carol Kaesuk Yoon, Forensic Science: Botanical
Witness for the Prosecution, 260 Science 894 (1993).
291. DNA Testing on Cat Hairs Helped Link Man to Slaying, Boston Globe, Apr. 24, 1997, available in
1997 WL 6250745; Gina Kolata, Cat Hair Finds Way into Courtroom in Canadian Murder Trial, N.Y.
Times, Apr. 24, 1997, at A5; Marilyn A. Menotti-Raymond et al., Pet Cat Hair Implicates Murder Suspect,
386 Nature 774 (1997).
292. Even though some applications are represented by only a few papers in the peer-reviewed
literature, they may be fairly well established. The breadth of scientific inquiry, even within a rather
specialized field, is such that only a few research groups may be working on any particular problem. A
better gauge is the extent to which the genetic typing technology is used by researchers studying related
Applications designed specially to address an issue before the court are more
likely to be truly novel and thus may be more difficult to evaluate. The studies
of the gall bladder, palo verde trees, and cat hairs exemplify such applications in
that each was devised solely for the case at bar.293 In such cases, there are no
published, peer-reviewed descriptions of the particular application to fall back
on, but the analysis still could give rise to “scientific knowledge” within the
meaning of Daubert.294
The novelty of an unusual application of DNA technology involves two
components—the novelty of the analytical technique, and the novelty of applying
that technique to the samples in question.295 With respect to the analytical
method, forensic DNA technology in the last two decades has been driven in
part by the development of many new methods for the detection of genetic
variation between species and between individuals within a species. The approaches
outlined in table A-1 for the detection of genetic variation in humans—
RFLP analysis of VNTR polymorphism, PCR, detection of VNTR
and STR polymorphism by electrophoresis, and detection of sequence variation
by probe hybridization or direct sequence analysis—have been imported from
other research contexts. Thus, their use in the detection of variation in nonhuman
species and of variation among species involves no new technology.
DNA technology transcends organismal differences.
Some methods for the characterization of DNA variation widely used in
studies of other species, however, are not used in forensic testing of human
DNA. These are often called “DNA fingerprint” approaches. They offer a snapshot
characterization of genomic variation in a single test, but they essentially
presume that the sample DNA originates from a single individual, and this presumption
cannot always be met with forensic samples.
The original form of DNA “fingerprinting” used electrophoresis, Southern
blotting, and a multilocus probe that simultaneously recognizes many sites in
the genome.296 The result is comparable to what would be obtained with a
problems and the existence of a general body of knowledge regarding the nature of the genetic variation
at issue.
293. Of course, such evidence hardly is unique to DNA technology. See, e.g., Coppolino v. State,
223 So. 2d 68 (Fla. Dist. Ct. App.), appeal dismissed, 234 So. 2d 120 (Fla. 1968) (holding admissible a test
for the presence of succinylcholine chloride first devised for this case to determine whether defendant
had injected a lethal dose of this curare-like anesthetic into his wife).
294. 509 U.S. 579, 590 (1993) (“to qualify as ‘scientific knowledge,’ an inference or assertion must
be derived by the scientific method”).
295. From its inception, both these aspects of forensic DNA testing have been debated. See, e.g., 1
McCormick on Evidence, supra note 11, § 205, at 902; Thompson & Ford, supra note 183.
296. The probes were pioneered by Alec Jeffreys. See, e.g., Alec J. Jeffreys et al., Individual-specific
“Fingerprints” of Human DNA, 316 Nature 76 (1985). In the 1980s, the “Jeffreys probes” were used for
forensic purposes, especially in parentage testing. See, e.g., D.H. Kaye, DNA Paternity Probabilities, 24
Fam. L.Q. 279 (1990).
“cocktail” of single-locus probes—one complex banding pattern sometimes analogized
to a bar-code.297 Probes for DNA fingerprinting are widely used in genetic
research in non-human species.298
With the advent of PCR as the central tool in molecular biology, PCR-based
“fingerprinting” methods have been developed. The two most widely
used are the random amplified polymorphic DNA (RAPD) method299 and the
amplified fragment length polymorphism (AFLP) method.300 Both give bar code-like
patterns.301 In RAPD analysis, a single, arbitrarily constructed, short primer
amplifies many DNA fragments of unknown sequence.302 AFLP analysis begins
with a digestion of the sample DNA with a restriction enzyme followed by
amplification of selected restriction fragments.303
Although the DNA fingerprinting procedures are not likely to be used in the
analysis of samples of human origin, new approaches to the detection of genetic
variation in humans as well as other organisms are under development. On the
horizon are methods based on mass spectrometry and hybridization chip technology.
As these or other methods come into forensic use, the best measure of
scientific novelty will be the extent to which the methods have found their way
into the scientific literature. Use by researchers other than those who developed
them indicates some degree of scientific acceptance.
The second aspect of novelty relates to the sample analyzed. Two questions
are central: Is there scientific precedent for testing samples of the sort tested in
the particular case? And, what is known about the nature and extent of genetic
variation in the tested organism and in related species? Beamish, the Canadian
case involving cat hairs, illustrates both points. The nature of the sample—cat
297. As with RFLP analysis in general, this RFLP fingerprinting approach requires relatively good-quality
sample DNA. Degraded DNA results in a loss of some of the bars in the barcode-like pattern.
298. E.g., DNA Fingerprinting: State of the Science (S.D.J. Pena et al. eds., 1993). The discriminating
power of a probe must be determined empirically in each species. The probes used by Jeffreys for
human DNA fingerprinting, for instance, are less discriminating for dogs. A.J. Jeffreys & D.B. Morton,
DNA Fingerprints of Dogs and Cats, 18 Animal Genetics 1 (1987).
299. John Welsh & Michael McClelland, Fingerprinting Genomes Using PCR with Arbitrary Primers, 18
Nucleic Acids Res. 7213 (1990); John G.K. Williams et al., DNA Polymorphisms Amplified by Arbitrary
Primers Are Useful as Genetic Markers, 18 Nucleic Acids Res. 6531 (1990).
300. Pieter Vos et al., AFLP: A New Technique for DNA Fingerprinting, 23 Nucleic Acids Res. 4407
(1995).
301. The identification of the seed pods in State v. Bogan, 905 P.2d 515 (Ariz. Ct. App. 1995), was
accomplished with RAPD analysis. The general acceptance of this technique in the scientific community
was not seriously contested. Indeed, the expert for the defense conceded the validity of RAPD in
genetic research and testified that the state’s expert had correctly applied the procedure. Id. at 520.
302. Primers must be validated in advance to determine which give highly discriminating patterns for
the species in question.
303. Both the RAPD and AFLP methods provide reproducible results within a laboratory, but AFLP
is more reproducible across laboratories. See, e.g., C.J. Jones et al., Reproducibility Testing of RAPD,
AFLP and SSR Markers in Plants by a Network of European Laboratories, 3 Molecular Breeding 381 (1997).
This may be an issue if results from different laboratories must be compared.
hairs—does not seem novel, for there is ample scientific precedent for doing
genetic tests on animal hairs.304 But the use of STR testing to identify a domestic
cat as the source of particular hairs was new. Of course, this novelty does not
mean that the effort was scientifically unsound; indeed, as explained in the next
section, the premise that cats show substantial microsatellite polymorphism is
consistent with other scientific knowledge.
B. Is the Underlying Scientific Theory Valid?
Daubert does not banish novel applications of science from the courtroom, but it
does demand that trial judges assure themselves that the underlying science is
sound, so that the scientific expert is presenting scientific knowledge rather than
speculating or dressing up unscientific opinion in the garb of scientific fact.305
The questions that might be asked to probe the scientific underpinnings extend
the line of questions asked about novelty: What is the principle of the testing
method used? What has been the experience with the use of the testing method?
What are its limitations? Has it been used in applications similar to those in the
instant case—for instance, for the characterization of other organisms or other
kinds of samples? What is known of the nature of genetic variability in the
organism tested or in related organisms? Is there precedent for doing any kind of
DNA testing on the sort of samples tested in the instant case? Is there anything
about the organism, the sample, or the context of testing that would render the
testing technology inappropriate for the desired application?306 To illustrate the
usefulness of these questions, we can return to the cases involving pig gall bladders,
cat hairs, and palo verde seed pods.
Deciding whether the DNA testing is valid is simplest in the export case. The
question there was whether the gall bladders originated from bear or from some
other species. The DNA analysis was based on the approach used by evolutionary
biologists to study relationships among vertebrate species. It relies on sequence
variation in the mitochondrial cytochrome b gene. DNA sequence analysis
is a routine technology, and there is an extensive library of cytochrome b sequence
data representing a broad range of vertebrate species.307 As for the sample
304. E.g., Russell Higuchi et al., DNA Typing from Single Hairs, 332 Nature 543, 545 (1988). Collection
of hair is non-invasive and is widely used in wildlife studies where sampling in the field would
otherwise be difficult or impossible. Hair also is much easier to transport and store than blood, a great
convenience when working in the field. Id.
305. See Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 590 (1993) (“The adjective
‘scientific’ implies a grounding in the methods and procedures of science. Similarly, the word ‘knowledge’
connotes more than subjective belief or unsupported speculation.”).
306. But cf. NRC I, supra note 1, at 72 (listing seven “requirements” for new forensic DNA tests to
achieve “the highest standards of scientific rigor”).
307. If the bear cytochrome b gene sequence were not in the database, it would be obligatory for the
proponents of the application to determine it and add it to the database, where it could be checked by
other researchers.
material—the gall bladder—such cells may not have been used before, but gall
bladder is simply another tissue from which DNA can be extracted.308 Thus,
although the application was novel in that an approach had to be devised to
address the question at hand, each segment of the application rests on a solid
foundation of scientific knowledge and experience. No great inferential leap
from the known to the unknown was required to reach the conclusion that the
gall bladder was from a pig rather than a bear.
The DNA analysis in Beamish required slightly more extrapolation from the
known to the unknown. As indicated in the previous section, the use of cat
hairs as a source of DNA was not especially novel, and the very factors that
reveal a lack of novelty also suggest that it is scientifically valid to test the DNA
in cat hairs. But we also observed that the use of STR typing to distinguish
among cats was novel. Is such reasoning too great a leap to constitute scientific
knowledge? A great deal is known about the basis and extent of genetic variation
in cats and other mammals. In particular, microsatellite polymorphism is
extensive in all mammalian species that have been studied, including other members
of the cat family. Furthermore, by testing small samples from two cat populations,
the researchers verified that the loci they examined were highly polymorphic.309
Thus, the novelty in using STR analysis to identify cats is not scientifically
unsettling; rather, it extends from and fits with everything else that is known
about cats and mammals in general. However, as one moves from well-studied
organisms to ones about which little is known, one risks crossing the line between
knowledge and speculation.
The DNA testing in State v. Bogan310 pushes the envelope further. First, the
genetic variability of palo verde trees had not been previously studied. Second,
it was not known whether enough DNA could be extracted from seed pods to
perform a genetic analysis. Both of these questions had to be answered by new
testing. RAPD analysis, a well-established method for characterizing genetic
variation within a species, demonstrated that palo verde trees were highly variable.
Seed pods were shown to contain adequate DNA for RAPD analysis.
Finally, a blind trial showed that RAPD profiles correctly identified individual
308. There is a technical concern that the DNA extracted from a gall bladder might contain inhibitors
that would interfere with the subsequent sequence analysis; however, this merely affects whether
the test will yield a result, and not the accuracy of any result.
309. One sample consisted of nineteen cats in Sunnyside, Prince Edward Island, where the crime
occurred. See Commentary, Use of DNA Analysis Raises Some Questions (CBS radio broadcast, Apr. 24,
1997), transcript available in 1997 WL 5424082 (“19 cats obtained randomly from local veterinarians on
Prince Edward Island”); Marjorie Shaffer, Canadian Killer Captured by a Whisker from Parents’ Pet Cat,
Biotechnology Newswatch, May 5, 1997, available in 1997 WL 8790779 (“the Royal Canadian Mounted
Police rounded up 19 cats in the area and had a veterinarian draw blood samples”). The other sample
consisted of nine cats from the United States. DNA Test on Parents’ Cat Helps Put Away Murderer, Chi.
Trib., Apr. 24, 1997, available in 1997 WL 3542042.
310. 905 P.2d 515 (Ariz. Ct. App. 1995).
palo verde trees.311 In short, the lack of pre-existing data on DNA fingerprints
of palo verde trees was bridged by scientific experimentation that established the
validity of the specific application.
The DNA analyses in all three situations rest on a coherent and internally
consistent body of observation, experiment, and experience. That information
was mostly pre-existing in the case of the gall bladder testing. Some information
on the population genetics of domestic cats on Prince Edward Island had to be
generated specifically for the analysis in Beamish, and still more was developed
expressly for the situation in the palo verde tree testing in Bogan. A court, with
the assistance of suitable experts, can make a judgment as to scientific validity in
these cases because the crucial propositions are open to critical review by others
in the scientific community and are subject to additional investigation if questions
are raised. Where serious doubt remains, a court might consider ordering
a blind trial to verify the analytical laboratory’s ability to perform the identification
in question.312
C. Has the Probability of a Chance Match Been Estimated
Correctly?
The significance of a human DNA match in a particular case typically is presented
or assessed in terms of the probability that an individual selected at random
from the population would be found to match. A small random match
probability renders implausible the hypothesis that the match is just coincidental.313
In Beamish, the random match probability was estimated to be one in
many millions,314 and the trial court admitted evidence of this statistic.315 In
311. The DNA in the two seed pods could not be distinguished by RAPD testing, suggesting that
they fell from the same tree. The biologist who devised and conducted the experiments analyzed
samples from the nine trees near the body and another nineteen trees from across the county. He “was
not informed, until after his tests were completed and his report written, which samples came from”
which trees. Bogan, 905 P.2d at 521. Furthermore, unbeknownst to the experimenter, two apparently
distinct samples were prepared from the tree at the crime scene that appeared to have been abraded by
the defendant’s truck. The biologist correctly identified the two samples from the one tree as matching,
and he “distinguished the DNA from the seed pods in the truck bed from the DNA of all twenty-eight
trees except” that one. Id.
312. Cf. supra note 311. The blind trial could be devised and supervised by a court-appointed expert,
or the parties could be ordered to agree on a suitable experiment. See 1 McCormick on Evidence, supra
note 11, § 203, at 867.
313. See supra § VII.
314. David N. Leff, Killer Convicted by a Hair: Unprecedented Forensic Evidence from Cat’s DNA Convinced
Canadian Jury, Bioworld Today, Apr. 24, 1997, available in 1997 WL 7473675 (“the frequency of
the match came out to be on the order of about one in 45 million,” quoting Steven O’Brien); All
Things Considered: Cat DNA (NPR broadcast, Apr. 23, 1997), available in 1997 WL 12832754 (“it was
less than one in two hundred million,” quoting Steven O’Brien).
315. See also Tim Klass, DNA Tests Match Dog, Stains in Murder Case, Portland Oregonian, Aug. 7,
1998, at D06 (reporting expert testimony in a Washington murder case that “the likelihood of finding
Reference Manual on Scientific Evidence
556
State v. Bogan,316 the random match probability was estimated by the state’s
expert as one in a million and by the defense expert as one in 136,000, but the
trial court excluded these estimates because of the then-existing controversy
over analogous estimates for human RFLP genotypes.317
Estimating the probability of a random match or related statistics requires a
sample of genotypes from the relevant population of organisms. As discussed in
section VII, the most accurate estimates combine the allele frequencies seen in
the sample according to formulae that reflect the gene flow within the population.
In the simplest model for large populations of sexually reproducing organisms,
mating is independent of the DNA types under investigation, and each
parent transmits half of his or her DNA to the progeny at random. Under these
idealized conditions, the basic product rule gives the multilocus genotype frequency
as a simple function of the allele frequencies.318 The accuracy of the
estimates thus depends on the accuracy of the allele frequencies in the sample
database and the appropriateness of the population genetics model.
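The arithmetic of the basic product rule can be sketched in a few lines of code. This is only an illustration of the calculation described above; the allele frequencies below are hypothetical, not drawn from any actual database.

```python
# Basic product rule: single-locus genotype frequencies under
# Hardy-Weinberg equilibrium (p^2 for a homozygote, 2pq for a
# heterozygote), multiplied across loci under the assumption of
# linkage equilibrium.  All frequencies here are hypothetical.

def single_locus_frequency(p, q=None):
    """Genotype frequency at one locus."""
    return p * p if q is None else 2 * p * q

def multilocus_frequency(loci):
    """Multiply single-locus frequencies across independent loci."""
    freq = 1.0
    for locus in loci:
        freq *= single_locus_frequency(*locus)
    return freq

# Three loci: heterozygous (alleles at 0.1 and 0.2), homozygous
# (allele at 0.05), heterozygous (alleles at 0.3 and 0.15).
profile = [(0.1, 0.2), (0.05,), (0.3, 0.15)]
print(multilocus_frequency(profile))  # 0.04 * 0.0025 * 0.09, about 9e-06
```

Even with individually common alleles, the multiplication across loci quickly produces very small profile frequencies, which is why the accuracy of the inputs matters.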
1. How Was the Database Obtained?
Since the allele frequencies come from sample data, both the method of sampling
and the size of the sample can be crucial. The statistical ideal is probability
sampling, in which some objective procedure provides a known chance that
each member of the population will be selected. Such random samples tend to
be representative of the population from which they are drawn. In wildlife
biology, however, the populations often defy enumeration, and hence strict
random sampling rarely is possible. Still, if the method of selection is uncorrelated
with the alleles being studied, then the sampling procedure is tantamount to
random sampling with respect to those alleles.319 Consequently, the key question
about the method of sampling for a court faced with estimates based on a
database of cats, dogs, or any such species, is whether that sample was obtained
in some biased way—a way that would systematically tend to include (or exclude)
organisms with particular alleles or genotypes from the database.
a 10-for-10 match in the DNA of a randomly chosen dog of any breed or mix would be one in 3
trillion, and the odds for a nine-of-10 match would be one in 18 billion”).
316. 905 P.2d 515 (Ariz. Ct. App. 1995).
317. Id. at 520. The Arizona case law on this subject is criticized in Kaye, supra note 178.
318. More complicated models account for the population structure that arises when inbreeding is
common, but they require some knowledge of how much the population is structured. See supra § VII.
319. Few people would worry, for example, that the sample of blood cells taken from their vein for
a test of whether they suffer from anemia is not, strictly speaking, a random sample. The use of convenience
samples from human populations to form forensic databases is discussed in, e.g., NRC II, supra
note 1, at 126–27, 186. Case law is collected supra note 179.
2. How Large Is the Sampling Error?
Assuming that the sampling procedure is reasonably structured to give representative
samples with respect to those genotypes of forensic interest, the question
of database size should be considered. Larger samples give more precise estimates
of allele frequencies than smaller ones, but there is no sharp line for determining
when a database is too small.320 Instead, just as pollsters present their
results within a certain margin of error, the expert should be able to explain the
extent of the statistical error that arises from using samples of the size of the
forensic database.321
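The pollster-style margin of error described above can be sketched as follows. The sample counts are hypothetical, and the normal approximation to the binomial is only one of several interval methods an expert might use.

```python
import math

def allele_frequency_ci(count, n, z=1.96):
    """Approximate 95% confidence interval for an allele frequency
    estimated from a database of n sampled alleles, using the normal
    approximation to the binomial (adequate when counts are not tiny)."""
    p = count / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical: an allele observed 12 times among 100 sampled alleles.
p, lo, hi = allele_frequency_ci(12, 100)
print(f"estimate {p:.2f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

With a database this small, the interval is wide; quadrupling the sample size roughly halves the margin of error, which is the sense in which larger databases give more precise estimates.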
3. How Was the Random Match Probability Computed?
As we have indicated, the theory of population genetics provides the framework
for combining the allele frequencies into the final profile frequency. The frequency
estimates are a mathematical function of the genetic diversity at each
locus and the number of loci tested. The formulas for frequency estimates depend
on the mode of reproduction and the population genetics of the species.
For outbreeding sexually reproducing species,322 under conditions that give rise
to Hardy-Weinberg and linkage equilibrium, genotype frequencies can be estimated
with the basic product rule.323 If a species is sexually reproducing but
given to inbreeding, or if there are other impediments to Hardy-Weinberg or
linkage equilibrium, such genotype frequencies may be incorrect. Thus, the
reasonableness of assuming Hardy-Weinberg equilibrium and linkage equilibrium
depends on what and how much is known about the population genetics
of the species.324 Ideally, large population databases can be analyzed to verify
independence of alleles.325 Tests for deviations from the single-locus genotype
320. The 1996 NRC Report refers to “at least several hundred persons,” but it has been suggested
that relatively small databases, consisting of fifty or so individuals, allow statistically acceptable frequency
estimation for the common alleles. NRC II, supra note 1, at 114. A new, specially constructed
database is likely to be small, but alleles can be assigned a minimum value, resulting in conservative
genotype frequency estimates. Ranajit Chakraborty, Sample Size Requirements for Addressing the Population
Genetic Issues of Forensic Use of DNA Typing, 64 Human Biology 141, 156–57 (1992). Later, the
NAS committee suggests that the uncertainty that arises “[i]f the database is small . . . can be addressed
by providing confidence intervals on the estimates.” NRC II, supra note 1, at 125.
321. Bruce S. Weir, Forensic Population Genetics and the NRC, 52 Am. J. Hum. Genetics 437 (1993)
(proposing interval estimate of genotype frequency); cf. NRC II, supra note 1, at 148 (remarking that
“calculation of confidence intervals is desirable,” but also examining the error that could be associated
with the choice of a database on an empirical basis).
322. Outbreeding refers to the propensity for individuals to mate with individuals who are not close
relations.
323. See supra § VII.
324. In State v. Bogan, 905 P.2d 515 (Ariz. Ct. App. 1995), for example, the biologist who testified
for the prosecution consulted with botanists who assured him that palo verde trees were an outcrossing
species. Id. at 523–24.
325. However, large, pre-existing databases may not be available for the populations of interest in
frequencies expected under Hardy-Weinberg equilibrium will indicate if population
structure effects should be accorded serious concern. These tests, however,
are relatively insensitive to minor population structure effects, and adjustments
for possible population structure might be appropriate.326 For sexually
reproducing species believed to have local population structure, a sampling strategy
targeting the relevant population would be best. If this is not possible, estimates
based on the larger population might be presented with appropriate caveats. If
data on the larger population are unavailable, the uncertainty implicit in basic
product rule estimates should not be ignored, and less ambitious alternatives to
the random match probability as a means for conveying the probative value of a
match might be considered.327
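The standard population-structure adjustment mentioned in the text (and in note 326) can be illustrated numerically. One common form replaces the Hardy-Weinberg homozygote frequency p² with p² + p(1 − p)θ, where θ (F_ST) measures the degree of substructure; the θ and allele-frequency values below are hypothetical.

```python
# Population-structure correction for a homozygous genotype:
# p^2 is replaced by p^2 + p*(1 - p)*theta, where theta (F_ST)
# reflects how much the population is structured.  The values of
# p and theta below are illustrative only.

def homozygote_frequency(p, theta=0.0):
    return p * p + p * (1 - p) * theta

p = 0.05
for theta in (0.0, 0.01, 0.03):
    print(theta, homozygote_frequency(p, theta))
```

Because the adjustment always increases the estimated genotype frequency, it moves the random match probability in the direction favorable to the defendant, which is why it is treated as conservative.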
A different approach may be called for if the species is not an outbreeding,
sexually reproducing species. For example, many plants, some simple animals,
and bacteria reproduce asexually. With asexual reproduction, most offspring are
genetically identical to the parent. All the individuals that originate from a common
parent constitute, collectively, a clone. The major source of genetic variation
in asexually reproducing species is mutation.328 When a mutation occurs, a
new clonal lineage is created. Individuals in the original clonal lineage continue
to propagate, and two clonal lineages now exist where before there was one.
Thus, in species that reproduce asexually, genetic testing distinguishes clones,
not individuals, and the product rule cannot be applied to estimate genotype
frequencies for individuals. Rather, the frequency of a particular clone in a population
of clones must be determined by direct observation. For example, if a rose
thorn found on a suspect’s clothing were to be identified as originating from a
particular cultivar of rose, the relevant question becomes how common that
variety of rose bush is and where it is located in the community.
these more novel cases. Analyses of the smaller, ad hoc databases are unlikely to be decisive. In Beamish,
for instance, two cat populations were sampled. The sample of nineteen cats from Sunnyside, in Prince
Edward Island, and the sample of nine cats from the United States revealed considerable genetic diversity;
moreover, most of the genetic variability was between individual cats, not between the two populations
of cats. There was no statistically significant evidence of population substructure, and there was
no statistically significant evidence of linkage disequilibrium in the Sunnyside population. The problem
is that with such small samples, the statistical tests for substructure are not very sensitive; hence, the
failure to detect it is not strong proof that either the Sunnyside or the North American cat population
is unstructured.
326. A standard correction for population structure is to incorporate a population structure parameter
FST into the calculation. Such adjustments are described supra § VII. However, appropriate values
for FST may not be known for unstudied species.
327. The “tree lineup” in Bogan represents one possible approach. Adapting it to Beamish would have
produced testimony that the researchers were able to exclude all the other (28) cats presented to them.
This simple counting, however, is extremely conservative.
328. Bacteria also can exchange DNA through several mechanisms unrelated to cell division, including
conjugation, transduction, and transformation. Bacterial species differ in their susceptibility to these
forms of gene transfer.
In short, the approach for estimating a genotype frequency depends on the
reproductive pattern and population genetics of the species. In cases involving
unusual organisms, a court will need to rely on experts with sufficient knowledge
of the species to verify that the method for estimating genotype frequencies
is appropriate.
D. What Is the Relevant Scientific Community?
Even the most scientifically sophisticated court may find it difficult to judge the
scientific soundness of a novel application without questioning appropriate scientists.
329 Given the great diversity of forensic questions to which DNA testing
might be applied, it is not possible to define specific scientific expertises appropriate
to each. If the technology is novel, expertise in molecular genetics or
biotechnology might be necessary. If testing has been conducted on a particular
organism or category of organisms, expertise in that area of biology may be
called for. If a random match probability has been presented, one might seek
expertise in statistics as well as the population biology or population genetics
that goes with the organism tested. Given the penetration of molecular technology
into all areas of biological inquiry, it is likely that individuals can be found
who know both the technology and the population biology of the organism in
question. Finally, where samples come from crime scenes, the expertise and
experience of forensic scientists can be crucial. Just as highly focused specialists
may be unaware of aspects of an application outside their field of expertise, so
too scientists who have not previously dealt with forensic samples can be unaware
of case-specific factors that can confound the interpretation of test results.
329. See supra § I.C.
Appendix
A. Structure of DNA
DNA is a complex molecule made of subunits known as nucleotides that link
together to form a long, spiraling strand. Two such strands are intertwined around
each other to form a double helix as shown in Figure A-1. Each strand has a
“backbone” made of sugar and phosphate groups and nitrogenous bases attached
to the sugar groups.330 There are four types of bases, abbreviated A, T, G, and
C, and the two strands of DNA in the double helix are linked by weak chemical
bonds such that the A in one strand is always paired to a T in the other strand
and the G in one strand is always paired to a C in the other.331 The A:T and G:C
complementary base pairing means that knowledge of the sequence of one strand
predicts the sequence of the complementary strand. The sequence of the nucleotide
base pairs carries the genetic information in the DNA molecule—it is the
genetic “text.” For example, the sequence ATT on one strand (or TAA on the
other strand) “means” something different than GTT (or CAA).
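The A:T and G:C pairing rule can be expressed as a short function; this is just the base-by-base pairing described above, ignoring strand orientation.

```python
# Complementary base pairing: A pairs with T, G pairs with C, so
# one strand's sequence determines the other's.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the base-by-base complementary DNA strand."""
    return "".join(COMPLEMENT[base] for base in strand)

print(complement("ATT"))  # TAA, as in the example above
print(complement("GTT"))  # CAA
```

Note that taking the complement twice returns the original sequence, which reflects the symmetry of the pairing rule.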
Figure A-1. A Schematic Diagram of the DNA Molecule
The bases in the nucleotide (denoted C, G, A, and T) are arranged like the
rungs in a spiral staircase.
330. For more details about DNA structure, see, e.g., Anthony J.F. Griffiths et al., An Introduction
to Genetic Analysis (6th ed. 1996); Mange & Mange, supra note 23, at 95.
331. The bonds that connect the complementary bases are known as hydrogen bonds.
B. DNA Probes
A sequence-specific oligonucleotide (SSO) probe is a short segment of single-stranded
DNA with bases arranged in a particular order. The order is chosen so
that the probe will bind to the complementary sequence on a DNA fragment, as
sketched in Figure A-2.
Figure A-2. A Sequence-Specific Probe Links (Hybridizes) to the Targeted
Sequence on a Single Strand of DNA
C. Examples of Genetic Markers in Forensic Identification
Table A-1 offers examples of the major types of genetic markers used in forensic
identification.332 As noted in the table, simple sequence polymorphisms, some
variable number tandem repeat (VNTR) polymorphisms, and nearly all short
tandem repeat (STR) polymorphisms are detected using polymerase chain reaction
(PCR) as a starting point. Most VNTRs containing long core repeats are
too large to be amplified reliably by PCR and are instead characterized by restriction
fragment length polymorphism (RFLP) analysis using Southern blotting.
As a result of the greater efficiency of PCR-based methods, VNTR typing
by RFLP analysis is fading from use.
332. The table is adapted from NRC II, supra note 1, at 74.
Table A-1. Genetic Markers Used in Forensic Identification

Variable number tandem repeat (VNTR) loci contain repeated core sequence elements, typically
15–35 base pairs (bp) in length. Alleles differ in the number of repeats and are distinguished
on the basis of size.

D2S44 (core repeat 31 bp)
  Method of detection: Intact DNA digested with a restriction enzyme, producing fragments
  that are separated by gel electrophoresis; alleles detected by Southern blotting followed
  by probing with a locus-specific radioactive or chemiluminescent probe.
  Number of alleles: At least 75 (size range is 700–8,500 bp); allele size distribution is
  essentially continuous.

D1S80 (core repeat 16 bp)
  Method of detection: Amplification of allelic sequences by PCR; discrete allelic products
  separated by electrophoresis and visualized directly.
  Number of alleles: About 30 (size range is 350–1,000 bp); alleles can be discretely
  distinguished.

Short tandem repeat (STR) loci are VNTR loci with repeated core sequence elements 2–6 bp in
length. Alleles differ in the number of repeats and are distinguished on the basis of size.

HUMTHO1 (tetranucleotide repeat)
  Method of detection: Amplification of allelic sequences by PCR; discrete allelic products
  separated by electrophoresis on sequencing gels and visualized directly, by capillary
  electrophoresis, or by other methods.
  Number of alleles: 8 (size range 179–203 bp); alleles can be discretely distinguished.

Simple sequence variation (nucleotide substitution in a defined segment of a sequence)

DQA (an expressed gene in the histocompatibility complex)
  Method of detection: Amplification of allelic sequences by PCR; discrete alleles detected
  by sequence-specific probes.
  Number of alleles: 8 (6 used in DQA kit).

Polymarker (a set of five loci)
  Method of detection: Amplification of allelic sequences by PCR; discrete alleles detected
  by sequence-specific probes.
  Number of alleles: Loci are bi- or tri-allelic; 972 genotypic combinations.

Mitochondrial DNA control region (D-loop)
  Method of detection: Amplification of control-region sequence and sequence determination.
  Number of alleles: Hundreds of sequence variants are known.
D. Steps of PCR Amplification
The second National Research Council report provides a concise description of
how PCR “amplifies” DNA:
First, each double-stranded segment is separated into two strands by heating. Second, these
single-stranded segments are hybridized with primers, short DNA segments (20–30 nucleotides
in length) that complement and define the target sequence to be amplified. Third, in
the presence of the enzyme DNA polymerase, and the four nucleotide building blocks (A,
C, G, and T), each primer serves as the starting point for the replication of the target
sequence. A copy of the complement of each of the separated strands is made, so that there
are two double-stranded DNA segments. The three-step cycle is repeated, usually 20–35
times. The two strands produce four copies; the four, eight copies; and so on until the
number of copies of the original DNA is enormous. The main difference between this
procedure and the normal cellular process is that the PCR process is limited to the
amplification of a small DNA region. This region is usually not more than 1,000 nucleotides
in length, so PCR methods cannot, at least at present, be used [to amplify] large
DNA regions, such as most VNTRs.333
Figure A-3 illustrates the steps in the PCR process for two cycles.334
Figure A-3. The PCR Process
333. NRC II, supra note 1, at 69–70.
Reference Manual on Scientific Evidence
564
In principle, PCR amplification doubles the number of double-stranded DNA
fragments each cycle. Although there is some inefficiency in practice, the yield
from a 30-cycle amplification is generally about one million to ten million copies
of the targeted sequence.
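The yield arithmetic works out as follows: perfect doubling gives 2 to the power of the number of cycles, while imperfect per-cycle efficiency e gives (1 + e) to that power. The efficiency values below are illustrative assumptions, not measured figures.

```python
# PCR yield: ideal doubling gives 2**cycles copies; with per-cycle
# efficiency e < 1 the yield is (1 + e)**cycles.  Efficiency values
# here are illustrative only.

def pcr_copies(cycles, efficiency=1.0, start=1):
    return start * (1 + efficiency) ** cycles

print(pcr_copies(30))        # ideal: 2**30, over a billion copies
print(pcr_copies(30, 0.6))   # roughly 1.3 million copies
print(pcr_copies(30, 0.7))   # roughly 8 million copies
```

With per-cycle efficiencies in this range, a 30-cycle run lands in the one-to-ten-million range described in the text, well short of the ideal doubling figure.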
E. Quantities of DNA in Forensic Samples
Amounts of DNA present in some typical kinds of evidence samples are indicated
in Table A-2. These are approximate, and the quantities of DNA extracted
from evidence in particular cases may vary somewhat.335
Table A-2. DNA Content of Biological Samples336 and Genetic Testing
Success Rates
Type of Sample                   DNA Content             PCR Success Rate
Blood                            20,000–40,000 ng/mL
  stain 1 cm x 1 cm              ca. 200 ng              >95%
  stain 1 mm x 1 mm              ca. 2 ng
Semen                            150,000–300,000 ng/mL
  on post-coital vaginal swab    0–3,000 ng              >95%
Saliva                           1,000–10,000 ng/mL
  on a cigarette butt            0–25 ng                 50–70%
Hair
  root end of pulled hair        1–750 ng                >90%
  root end of shed hair          1–12 ng                 <20%
ng = nanogram; mL = milliliter; cm = centimeter; mm = millimeter.
9. See American Inst. of Hydrology home page (visited July 28, 1999)
Reference Guide on Engineering Practice and Methods
583
ship or educational background. Among the highest honors an American engineer
can receive is membership in the National Academy of Engineering (NAE).
That many of the members of the NAE were educated as scientists and have no
degrees in engineering underscores the overlap between engineering and science.
Indeed, many members of the NAE, including some who are engineers
by education as well as by practice, are also members of the National Academy
of Sciences, and a small number of these are also members of the Institute of
Medicine.
In spite of this apparent open-mindedness and inclusiveness at the highest
ranks of the profession, it is a common complaint among engineers who reflect
on the nature of the profession and the public perception of it that science is
often credited with technological achievements that are properly termed engineering.
Although such observations, like most complaints of interest groups,
are usually taken as sour grapes, there appears to be some validity to the engineers’
claim, as newspaper stories about technological subjects frequently reveal.
When, for example, the Mars Pathfinder mission approached its goal of landing
on the red planet and deploying the rock-exploring rover in July 1997, a typical
newspaper headline read, “A New Breed of Scientists Studying Mars Takes
Control.”10 The scientists who were charged with studying the geology and
chemistry of the planet’s surface did indeed take over the news conferences and
television interviews. The engineers who had conceived and designed the essential
spacecraft and the rover it carried were, after some brief initial appearances,
relegated to obscurity. A cultural critic writing for the New York Times
even dismissed the engineers as prosaic and the Mars landing as not a television
spectacular.11 Whether or not it was spectacular, the physical mission was
definitely an engineering achievement from which the scientific enterprise of
planetary exploration benefited greatly.
Another common irritation among many engineers is when scientists are
actually credited with an achievement that is clearly an engineering one. A new
airplane, for example, might be heralded in the mass media as a “scientific breakthrough”
when in fact it is an engineering one. More irritating to engineers,
however, is the perception that when such an airplane crashes, as during a test
flight, a headline is more likely than not to describe it as an “engineering failure.”
The crediting of scientists over engineers with achievement was strikingly
demonstrated when a U.S. postage stamp was issued in 1991 commemorating
Theodore von Karman, one of the founders of the Jet Propulsion Laboratory,
10. John Noble Wilford, A New Breed of Scientists Studying Mars Takes Control, N.Y. Times, July 14,
1997, at A10.
11. Walter Goodman, Critic’s Notebook: Rocks, in Sharp Focus, but Still Rocks, N.Y. Times, July 6,
1997, § 1, at 12.
which managed the Pathfinder mission. He was identified on the stamp as an
“aerospace scientist,” a fact that disappointed many engineers. It was only on
the selvage of the stamp that von Karman was acknowledged to be a “gifted
aerodynamicist and engineer.” Yet von Karman’s first degree was in engineering,
and it was his desire to build and launch successful rockets—definitely an
engineering objective—that drove him to study them as objects of science, just
as an astronomer might study the stars as objects of nature, seeking to understand
their origin and behavior. Unlike the engineer von Karman, who wanted
to understand the behavior of rockets in order to make them do what he desired,
however, the astronomer as scientist observes the stars with no further
objective than to understand them and their place in the universe. A pure “rocket
scientist,” in other words, would be interested not in building rockets but in
studying them.
C. Some Shared Qualities
Engineering clearly does share some qualities with science, and much of what
engineering students study in school is actually mathematics, science, and engineering
science. In fact, the graduate engineer’s considerable coursework in
these theoretical subjects distinguishes him or her more from the engineering
technician than from the scientist. With this scientific background, an engineer
is expected to be able to design and analyze and predict reliably the behavior of
new objects of technology and not just copy and replicate the old. In addition to
mathematics, science, and engineering science, however, the engineering student
takes courses specifically addressing design, which is what distinguishes
engineering from science.
1. Engineering is not merely applied science
That science forms a foundation for engineering is not to say that engineering is
merely applied science and that engineers merely apply the laws of science in
creating engineering designs. Although “applied science” is a commonly encountered
pejorative definition of engineering, sometimes offered by scientists
who consider engineering inferior to science and who do not fully appreciate
the nature of engineering design, it is a patently false characterization. Engineering
in its purest form involves creative leaps of the imagination not unlike those
made by a scientist in framing a hypothesis or those made by an artist in conceiving
a piece of sculpture.
Rather than following from scientific theory, an engineering design (hypothesis)
provides the basis for analysis (testing the hypothesis) within that theory.12
Engineering designs are not often likened to scientific hypotheses, but in fact
12. See Henry Petroski, To Engineer Is Human: The Role of Failure in Successful Design 40–44
(1985).
their origins can be quite similar and the testing of them remarkably analogous.
Just as the conception of a scientific hypothesis is often the result of a creative,
synthetic mental leap from a mass of data to a testable statement about it, from
disorder to order, from wonder to understanding, so the origins of an engineering
design can be spontaneous, imaginative, and inductive. Like the testing of
the hypothesis, the analysis of the design proceeds in an orderly and deductive
way. As in most analogies, however, the parallels are not perfect and the distinctions
are not clear-cut. Design and analysis are in fact often intertwined in engineering
practice. The design of a bridge may serve as a paradigm.
Imagine that a city wants a bridge to cross a river much wider and deeper
than has ever been bridged before. Because the problem is without precedent,
there is no existing bridge (no preexisting design) to copy. Engineers will, of
course, be aware of plenty of shorter bridges in more shallow water, but can
such models be scaled up? Even if it appears that they can technically, would it
be practical or economical to do so? When presented with such a problem, the
engineer must conceive a solution—a design—not on the basis of mathematics
and science alone, but on the basis of extrapolating experience and, if necessary,
inventing new types of bridges. The creative engineer will come up with a
conceptual design, perhaps little more than a sketch on the back of an envelope,
but clear enough in its intention to be debated among colleagues. This is the
hypothesis—that the particular kind of bridge sketched can in fact be built and
function as a bridge.
It is only when such a conceptual design is articulated that it can be analyzed
to see if it will work. If, for example, the bridge proposed is a suspension bridge
of a certain scale, it is possible to calculate whether its cables will be strong
enough to support themselves, let alone a bridge deck hanging from them and
carrying rush-hour traffic. Contrary to conventional lay wisdom, however, bridge
designs do not follow from the equations of physics or any other science. Rather,
the conceptual bridge design provides the geometrical framework for the engineer
to use in applying the equations embodying the theory of structures to
determine whether the various parts of the proposed bridge will be able to carry
the loads they will have to after construction is complete. When a preliminary
analysis determines that the conceptual design is in fact sound, the engineer can
carry out more detailed design calculations, checking the minutest details to be
sure that the structure will not fail under the expected loads.
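A back-of-envelope version of the cable check described above can be run in a few lines. For a hanging cable of uniform cross-section, the maximum length that can support its own weight (the "breaking length") is strength divided by (density times g), independent of cross-sectional area; the material values below are illustrative, roughly those of high-strength steel wire, and do not describe any actual bridge.

```python
# Breaking length of a uniform cable under self-weight:
# length = tensile strength / (density * g).  Material values are
# illustrative assumptions, roughly high-strength steel wire.

STRENGTH_PA = 1.8e9   # tensile strength, Pa (assumed)
DENSITY = 7850.0      # kg/m^3 (steel)
G = 9.81              # m/s^2

breaking_length_m = STRENGTH_PA / (DENSITY * G)
print(f"{breaking_length_m / 1000:.1f} km")  # roughly 23 km of cable
```

A real design must, of course, also carry the deck and traffic loads with margins of safety, which is why the preliminary check is followed by the detailed analysis the text describes.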
The design of less critical and less costly products of engineering follows a
similar process. Imagine that a company wants to develop a new product, perhaps
because sales of its existing products are dropping off. The company’s engineers
are thus given the problem of coming up with something new, something
better than all existing products, something unprecedented. The engineers,
who often work in teams, will, perhaps by some ineffable process, conceive
and articulate some new design, some new invention. Their hypothesis is,
of course, that this design can be realized and the product sold at a competitive
price. Testing the hypothesis may involve years of work, during which the
engineers may find themselves faced with new problems of developing new
materials and new manufacturing processes to fully and effectively realize the
new design for a specified cost. The final product thus may be something that
looks quite different from the first sketches of the original conceptual design.
The engineers’ experience will be not unlike that of scientists finding that they
must modify their hypothesis as testing it reveals its weaknesses.
2. Engineering has an artistic component
The act of conceiving an engineering design is akin to the act of conceiving a
painting or other work of art. Like the fine artist, the engineer does not proceed
in a cultural vacuum, but draws upon experience in creating new work. Given
the task of designing a bridge over obstacles between Point A and Point B, the
engineer usually begins by sketching, literally or in the mind’s eye, possible
bridges. These preliminary concepts are likely to look not unlike those of bridges
that cross over similar obstacles. Bridge designs that have worked in the past are
likely to work in the future, if the new bridge is not too much longer or is not
in too much deeper water than the earlier designs. However, each bridge project
can also have its unique foundation, approach, or span problems, and the engineer
must be prepared to modify the design accordingly, thus creating something
that is different from everything that has come before.
Just as the artist chooses a particular block of stone out of which to chisel a
figure or a specific size of canvas on which to paint, the engineer engaged in
conceptual design also makes a priori choices about how tall a bridge’s towers
will be or how far its deck will span between piers. There are infinite geometrical
combinations of these features of a bridge, as there are for the features of a
figure in stone or the painting on canvas. It is the artistic decision of the engineer,
no less than that of the artist, that fixes the idea of the form so that it can be
analyzed, criticized, and realized by others. A recently published biography of a
geotechnical engineer highlights the creative aspect of engineering practice
through its subtitle, The Engineer as Artist.13
D. The Engineering Method
What is known as the engineering method is akin to the scientific method in
that it is a rational approach to problem solving. Whereas the fundamental problem
addressed via the scientific method is the testing of hypotheses, that addressed
13. Richard E. Goodman, Karl Terzaghi: The Engineer as Artist (1999). The book also provides
insight into the many dimensions of personality and temperament—from the artistic to the scientific—
that can coexist in an individual engineer.
by the engineering method is the analysis of designs, which, as noted
earlier, may be considered hypotheses of a sort. Once a conceptual design has
been fixed upon, detailed design work can begin to flesh out the details. The
engineering method is the collective means by which an engineer approaches
such a problem, not only to achieve a final design but also to do so in such a way
that the rationale will be understood by other engineers. Those other engineers
might be called upon to check the work with the intention of catching any
errors of commission or omission in the assumptions, calculations, and logic
employed.
The starting point of much engineering work is in what has previously been
done. That is not to say that engineers merely follow examples or use handbooks,
for engineers are typically dealing with what has not been encountered
before in exactly the same scale, context, or configuration. Yet, just as artists are
ever conscious of the traditions of art history, so in the most creative stage of
engineering, where conceptual designs are produced, engineers typically rely
upon their knowledge of what has and has not worked in the past in coming up
with their new designs. The development of these conceptual designs into working
artifacts usually involves the greater expenditure of time and visible effort,
and it is in this developmental stage that the engineering method most manifests
itself.
Many engineering problems begin with shortcomings or downright failures
with existing technology. For example, earthquakes in California have revealed
weaknesses in prior designs of highway bridges: horizontal ground motion causing
road decks to slide off their supports and vertical ground motion causing the
support columns themselves to be crushed. To prevent such failures in the future,
engineers have proposed a variety of ways to retrofit existing structures.
Among the designs is one that wraps reinforced concrete columns in composite
materials, with the intention of preventing the concrete from expanding to the
point of failure. The idea is attractive because the flexible, textile-like materials
could be applied relatively easily and economically to bridges already built. The
basic engineering question would be whether it would be economical to wrap
enough material around a column to achieve the desired effect.
The engineering method of answering such a question typically involves both
theory and experiment. Since the material has a known strength and a known
structure, calculations within the broad category of theory of strength of materials
can produce answers as to whether the wrapping can contain the pressure
of the expanding concrete during an earthquake. The problem and the calculations
are complicated by the fact that a composite material is not a simple one,
and its containing strength depends upon the structure of the wrapping material.
Indeed, the engineering problem can very easily be diverted to one of establishing
the best way to manufacture the composite material itself in order to achieve
the desired result most efficiently.
Reference Manual on Scientific Evidence
588
The calculations themselves will involve hypotheses
about how the material is made and how it will perform when called
upon to do so. In other words, all the calculations depend to a great extent upon
theory and theoretical assumptions. Furthermore, there are fundamental questions
about how the material will behave after prolonged exposure to the environment,
including pollution and sunlight, which are known to have deleterious
effects on certain composite materials. Also, there are questions about the
long-term behavior of the composite wrapping once it is in place on a
column that is itself subjected to the repeated loads from the highway it supports.
The repeated loading and unloading can cause what is known as fatigue,
and what may be strong enough when newly installed may have its strength
considerably reduced over the course of time. Experiments on the composite
material, its components, and the wrapped column may be necessary to answer
questions about the design and the theory upon which its analysis is based. What
is central to the engineering method used to approach and attack such problems
is its empirical and quantitative nature, and in this regard it is not unlike the
scientific method.
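The hoop-stress side of that theory can be sketched in a few lines. This is a hypothetical first-order estimate using the thin-walled pressure-vessel formula, not the actual retrofit calculation; the function name and every numerical value are illustrative assumptions.

```python
# Hypothetical first-order sketch: thickness of composite wrap needed to
# confine an expanding concrete column, treating the wrap as a thin-walled
# pressure vessel (hoop stress: t = p * r / sigma_allow).

def required_wrap_thickness(pressure_mpa: float,
                            radius_mm: float,
                            wrap_strength_mpa: float,
                            safety_factor: float = 2.0) -> float:
    """Wrap thickness (mm) keeping hoop stress below the allowable strength,
    which is derated by a safety factor for aging and fatigue effects."""
    allowable = wrap_strength_mpa / safety_factor
    return pressure_mpa * radius_mm / allowable

# Illustrative numbers: 5 MPa confining pressure, 300 mm column radius,
# 600 MPa wrap tensile strength.
print(required_wrap_thickness(5.0, 300.0, 600.0))  # 5.0 (mm)
```

The derated allowable strength is one crude way of folding the text's concerns about environmental exposure and fatigue into the calculation; the real analysis would treat those effects explicitly.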
While the design of bridges and analysis of proposed means to retrofit them
against earthquake damage may appear to involve problems specific to civil
engineering, the nature of the design process and the method used to analyze
proposed designs is typical of engineering design and the engineering method
generally. No engineer can design a crankshaft for an automobile engine or a
circuit for an electronic calculator without first having a conceptual design that
serves as a basis for the detailed design and development, including the confirming
analysis that the thing is going to work when manufactured, installed, or
assembled. The difference between a successful design and an unsuccessful one
can often be traced to how carefully and thoroughly a design was in fact analyzed
and tested—just as if it were a scientific hypothesis.
III. The Nature of Engineering
The practice of engineering is often separated into the two components of design
and analysis, and different groups of engineers frequently carry out the
distinct but hardly separable activities and pass their results back and forth over
what has sometimes been described metaphorically as a wall. It is also a common
complaint among engineers that when the designers and analysts have finished
their work, they throw the “finished” design over another wall and let the
manufacturing engineers worry about how to make the parts and assemble them.
This model has historically been especially notorious in the aircraft manufacturing
industry, with the notable exception of the Skunk Works operation of the
Lockheed Corporation, in which all engineers and assembly workers carried out
their secret and highly successful projects in one big building.14
With the advent of computer-aided design and manufacturing, designers and
manufacturers scattered around the world were able to combine design, analysis,
and manufacturing in a highly integrated manner, as was done very successfully
with the design and manufacture of the Boeing 777.15 For all their importance
in being but preludes to manufacturing, however, design and analysis are
the aspects of engineering that are most commonly subject to dispute and thus
to scrutiny. Indeed, even when there are problems with manufacturing, it is the
tools and practices of design and analysis that are called upon to identify the
causes of faults and to redesign the artifact or the process that manufactured it.
A. Design Versus Analysis
1. Design
Design, dominated at its most fundamental level by the artistic component
of engineering and involving considerable creativity, cannot be easily codified.
A conceptual design can thus often be sketched more easily than it can be articulated
in words, which is perhaps one of the reasons patents are not easy
reading and almost always are accompanied by figures. It is debatable, therefore,
whether design can be taught in any definitive way. That is not to say that
design cannot be assessed in meaningful ways. Unlike an artistic design, which is
often judged principally on the basis of aesthetics and taste, an engineering design
is most properly judged by how well it functions. Indeed, engineers sometimes
are rightly criticized for apparently seeing function as the only requirement
of their designs.
The word design, used in an engineering context as a noun, verb, and adjective,
has several different meanings, and is often used without distinguishing
qualifiers. One engineer’s conceptual design of a bridge or machine part is seldom,
if ever, sufficiently fleshed out that the artifact can be built or manufactured
without further details. This kind of design is high-level design, in the
sense that it is typically conceived of or decided upon by someone in a leadership
role on a project. With the conceptual design fixed, the engineering or
detail design can proceed, usually by individual engineers or teams of engineers.
This kind of design can be repetitive and tedious, full of calculations and small
iterations, but the computer is increasingly being used to take over such tasks. A
typical design task at this level would be to choose the sizes of the individual
pieces of steel that will make up a bridge or to determine the detailed geometry
of a machine part for an engine. The finished product of such tasks can itself be
referred to as “the design.” This is not to say that the result will be exactly the
same no matter which engineer carries out the calculations, for the design process
is replete with individual judgments and decisions that cumulatively affect the
result.
14. See Ben R. Rich & Leo Janos, Skunk Works: A Personal Memoir of My Years at Lockheed
(1994).
15. See Henry Petroski, Invention by Design: How Engineers Get from Thought to Thing 129
(1996).
2. Analysis
Analysis, in contrast, is highly codified and structured. Unlike design problems,
which seldom if ever have unique solutions, problems in analysis have only one
right or relevant answer. Thus, once produced on paper or computer screen,
the design might be checked by analysts using well-established theories of engineering
science and mechanics, such as strength of materials, elasticity, or dynamics.
Given the now fixed geometry of a structural or machine component
and the agreed-upon design loads it is expected to experience, the analyst is able
to calculate deflections, natural frequencies, and other responses of the part to
the loads. Assuming no errors are made, the value of these responses will not
depend upon who does the calculations. The calculated responses serve to check
that the design is correct within the specifications of the design problem, and
this is one way engineering design proceeds within a system of checks and balances.
If the magnitudes of the responses prove to be unacceptable, the design
will be sent back to the designers for further iteration. Needless to say, sometimes
the designer and the analyst are one and the same individual engineer, in
which case the design should ultimately be checked by another engineer.
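Such a calculated response can be illustrated with a textbook case: the midspan deflection of a simply supported beam under a uniform load. The formula is standard engineering science; the beam properties below are hypothetical.

```python
# Standard strength-of-materials result: midspan deflection of a simply
# supported beam under a uniform load w is delta = 5*w*L**4 / (384*E*I).

def midspan_deflection(w: float, L: float, E: float, I: float) -> float:
    """Deflection in meters for w in N/m, L in m, E in Pa, I in m**4."""
    return 5.0 * w * L**4 / (384.0 * E * I)

# Hypothetical beam: 10 kN/m over an 8 m span, steel (E = 200 GPa),
# I = 2e-4 m**4.  Any analyst who makes no errors gets the same number.
delta = midspan_deflection(10e3, 8.0, 200e9, 2e-4)
print(round(delta * 1000, 2))  # 13.33 (mm)
```

That the answer does not depend on who computes it is precisely what makes analysis, unlike design, suited to a system of independent checks.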
Because the end result of an analysis is often a single precise number, analysis
lends itself more easily to explication in the classroom and to coursework in the
curriculum, and, according to some critics, it is sometimes taught in engineering
schools almost to the exclusion of design. Indeed, until recently, the Accreditation
Board for Engineering and Technology (ABET), which accredits
engineering programs in the United States, had specific and distinct minimum
requirements for the number of both design and analysis courses in the curriculum.
Although this bean-counting approach has been abandoned of late, ABET
does expect each program it accredits typically to contain a capstone design
course, in which engineering students, usually in their senior year, are involved
in a major design project that forces them to draw upon and synthesize the use
of the analytical and design skills learned throughout the curriculum.
The usual engineering curriculum in the United States now comprises four
years of study leading to a bachelor’s degree, typically a Bachelor of Science or
a Bachelor of Science in Engineering. Thus, in engineering, unlike in law and
medicine, it is common to encounter practitioners with only an undergraduate
education, and often a highly specialized, technical one at that. This, along with
the fact that engineering has no single membership organization analogous to
the American Bar Association or the American Medical Association, has been
identified as a reason that the engineering profession is not perceived to have the
status of the legal and medical professions, at least in the eyes of many engineers.
For decades, there have been ongoing debates within the profession as to whether
the first degree in engineering should be a five-year degree,16 but few serious
movements have been made in that direction. Indeed, five-year engineering
degrees were more common decades ago, and long-term trends have been to
move away from an extended curriculum and even to reduce the requirements
for the four-year degree. Increasingly, there has been discussion about expecting
a master’s degree to be the first professional degree, but this too is far from
the universal point of view.
The Ph.D. in engineering is typically a research degree, and the doctoral-level
engineer will most often be engaged in analysis rather than design. Indeed,
a design-based dissertation is considered an oxymoron in most engineering graduate
programs. That is not to say that the engineer with a doctorate will not or
cannot do design; he or she will more typically serve in a consulting capacity,
engaged in both design and analysis of a nonroutine kind. It is not at all uncommon
to find doctoral-level engineers working in research-and-development
environments who seldom if ever perform design tasks, however, and they may
have had little if any design experience.
B. Design Considerations Are More Than Purely Technical
The considerations that go into judging the success or effectiveness of an engineering
design are seldom only technical, and at a minimum they usually involve
questions of cost and benefit, and of investment and profit. Other design
considerations include aesthetics, environmental impact, ergonomics, ethics, and
social impact. Although such implications may not be considered explicitly by
every engineer working on every design project, an engineering team collectively
is likely to be aware of them. Aesthetics, for example, have been discussed
explicitly as a dominant design consideration for bridges of monumental proportions,
such as long-span suspension bridges. The ratio of the sag to the span
of the main cables, which can be set for aesthetic as well as technical objectives,
subsequently can have a profound impact on the forces in the cables themselves
and hence the economics of the project.17
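The sag-to-span trade-off can be sketched with the standard parabolic-cable relation for the horizontal component of main-cable tension; the load and span figures below are hypothetical.

```python
# Parabolic-cable approximation: the horizontal component of main-cable
# tension is H = w * L**2 / (8 * f), where f is the sag of the cable.

def horizontal_cable_force(w: float, span: float, sag: float) -> float:
    return w * span**2 / (8.0 * sag)

w, span = 300e3, 1000.0  # hypothetical: 300 kN/m over a 1,000 m main span
shallow = horizontal_cable_force(w, span, sag=80.0)   # sag/span = 1/12.5
deeper = horizontal_cable_force(w, span, sag=100.0)   # sag/span = 1/10
# Choosing the shallower, arguably more elegant profile raises the cable
# force, and hence the steel required, by 25%.
print(shallow / deeper)  # 1.25
```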
16. See, e.g., Samuel C. Florman, The Civilized Engineer 205–06 (1987).
17. See David P. Billington, The Innovators: The Engineering Pioneers Who Made America Modern
6–12 (1996).
1. Design constraints
Engineering has been defined as design under constraint. Design constraints are
among the givens of a problem, the limitations within which the engineer must
work. A bridge over a navigable waterway has to provide a clear shipping channel
between its piers and sufficient clearance beneath its roadway, and these are
thus nonnegotiable design constraints. The specification of such clearances forces
the design to have piers at least a certain distance apart and a roadway that is a
certain distance above the water. The design of a roof structure over an auditorium
has to accommodate the architect’s decision that the auditorium will have
a given width and ceiling height and have no columns among its seats. Such
constraints can have profound implications for the type of bridge chosen and the
kind of roof structure devised by the structural designer.
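Nonnegotiable constraints of this kind amount to a feasibility check on any candidate design. A minimal sketch, with hypothetical clearance values not drawn from any actual specification:

```python
# Nonnegotiable clearances expressed as a simple feasibility check.
MIN_CHANNEL_WIDTH_M = 250.0      # clear shipping channel between piers
MIN_VERTICAL_CLEARANCE_M = 65.0  # roadway height above the water

def design_feasible(pier_spacing_m: float, deck_height_m: float) -> bool:
    """A candidate design is admissible only if it meets every constraint."""
    return (pier_spacing_m >= MIN_CHANNEL_WIDTH_M
            and deck_height_m >= MIN_VERTICAL_CLEARANCE_M)

print(design_feasible(300.0, 70.0))  # True
print(design_feasible(200.0, 70.0))  # False: piers too close for shipping
```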
2. Design assumptions
No engineering design can be advanced through analysis unless certain assumptions
are made. These design assumptions can be implicit or explicit, and they
often involve technical details that affect the difficulty and accuracy of any subsequent
analysis. Common design assumptions for long-span suspension bridges
in the 1930s were that wind blowing across a deck displaced it sideways only
and that wind did not have any aerodynamic effect on the structure. The former
was an explicit design assumption that was manifested in the calculation of how
stiff the bridge deck had to be in a horizontal plane. The latter assumption was
implicit in the sense that it was never considered, but it may be regarded as an
assumption nevertheless, since no calculation or analysis was performed to verify
that aerodynamic effects were of no consequence. It was only after the Tacoma
Narrows Bridge was destroyed by wind in 194018 that the bridge-design community
recognized that aerodynamic effects were indeed important and could
not be ignored by engineers or anyone else.
3. Design loads
No structural engineering analysis can proceed without the loads on the structure
being stated explicitly. This presents a dilemma for the designer who is
charged with specifying how large the structural components must be. The
components are chosen to support a given load, but the bulk of that load is often
the weight of the structural components themselves. For example, the weight of
the steel in a long-span bridge may be over 80% of the total load on the structure.
The engineer proceeds with the analysis only by first making an educated
guess about how much steel will be required for the bridge. Since most bridge
design involves familiar spans and types of structures, the educated guess can be
guided by experience. After a “design by analysis” based on the assumed weight
is carried out, the original assumption about the weight of steel can be checked.
If there is not sufficiently close agreement, the guess (assumption) can be modified
and an iteration carried out. In other engineering design problems, the design
loads may be the electric currents expected in a circuit or the volume of water to
be handled by a sewer system, but the nature of the design problem is analogous
to that of designing a bridge.
18. See Northwestern Mut. Fire Ass’n v. Union Mut. Fire Ins. Co., 144 F.2d 274 (9th Cir. 1944).
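The guess-check-modify cycle just described is a fixed-point iteration. In the sketch below, the sizing rule (steel weighing 30% of the load it carries) is a made-up linear stand-in for the real member-sizing calculations, chosen only to show the loop converging:

```python
# The guess-check-modify cycle as a fixed-point iteration.

def steel_weight_for_load(total_load: float) -> float:
    return 0.30 * total_load  # hypothetical sizing rule, not a real one

def iterate_dead_load(live_load: float, tol: float = 1e-6) -> float:
    guess = 0.0  # initial educated guess at the steel's own weight
    while True:
        new_guess = steel_weight_for_load(live_load + guess)
        if abs(new_guess - guess) < tol:
            return new_guess
        guess = new_guess  # modify the assumption and iterate

# Converges to the fixed point w = 0.3 * (live + w), i.e. w = 0.3*live/0.7.
print(round(iterate_dead_load(1000.0), 2))  # 428.57
```

Stopping this loop too early, or starting from too low a guess, is exactly the error attributed to the Quebec Bridge design discussed below.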
A well-known failure resulting from an improper use of the iterative design
process occurred early in the twentieth century in the design and construction
of the Quebec Bridge across the Saint Lawrence River.19 The chief engineer,
Theodore Cooper, was approaching the end of a distinguished career when he
was given the opportunity to design and build the longest cantilever bridge in
the world. His concept was for a slender-looking steel span of 1,800 feet between
piers. The detailed design, that is, the sizing of the steel members, was to
be carried out by Peter Szlapka, an engineer who worked in the offices of the
Phoenix Bridge Company but had no experience in the field. Since Cooper,
who was not in good health, did not want to travel to the construction site from
his office in New York, he could not heed in time warning signs that the steel
was not bearing the load properly, and the bridge collapsed before it was completed.
An investigation by a royal commission found that Szlapka had curtailed
his iteration prematurely and had underestimated the actual weight of steel on
the bridge. As a result, some of his calculated strengths were as much as 20%
higher than those of the actual structure. The Quebec Bridge was redesigned
and completed in 1917, but to this day no cantilever bridge has been designed
with a longer span.
The weight of a bridge structure itself is known as the dead load.20 The
weight of traffic and snow and the force of wind and earthquakes are known as
live loads.21 These live loads are often specified as design loads, and they involve
assumptions about how much traffic the bridge will carry and how extreme
nature can be at the location of the bridge. The specification of design loads22
has a profound impact on the cost of a structure, and hence design loads are
chosen carefully.
19. See Henry Petroski, Engineers of Dreams: Great Bridge Builders and the Spanning of America
101–11 (1995).
20. See Space Structures Int’l Corp. v. George Hyman Constr. Co., No. 88-0423, 1989 U.S. Dist.
LEXIS 5798, at *5 n.2 (D.D.C. May 24, 1989) (defining “dead load” as the weight of the frame and its
components). See also Wright v. State Bd. of Eng’g Exam’rs, 250 N.W.2d 412, 414 (Iowa 1977) (defining
“dead load” as the weight of the roof itself).
21. See Space Structures, 1989 U.S. Dist. LEXIS 5798, at *5 n.2 (defining “live load” as the weight
of the snow, rain, and wind that a frame can support). See also Wright, 250 N.W.2d at 415 (defining
“live load” as the weight of the snow).
22. See Space Structures, 1989 U.S. Dist. LEXIS 5798, at *5 n.2 (defining “load” as the weight-bearing
capacity of the frame itself).
A bridge might conceivably have to support bumper-to-bumper
traffic consisting entirely of fully loaded heavy trucks, but designing for such a
load would make for a heavy, and therefore expensive, bridge. For a wide bridge
with many lanes, it is unlikely that trucks would ever occupy every lane equally
(indeed, they might be prohibited from doing so at all), and so an engineering
judgment is made as to what is a credible design load. Because engineers took
such considerations into account, the George Washington Bridge, which first
opened to traffic in 1931, could be designed and built for an affordable price.
Otherwise it might not have been built when it was.23
Another example involves the construction of library buildings. Whereas libraries
built at the beginning of the twentieth century are likely to have the
floors of their bookstacks supported by the shelving structure, libraries built after
the middle of the twentieth century are more likely to have the bookcases supported
by the floors of the building. The space devoted to bookcases in such
structures is actually only about one-third of the floor space, since adequate aisle
space must be allowed for access. The dead load of the modern library building
is that of the structure itself. The bookcases, which can be relocated if necessary,
the books they hold, and the library staff and patrons can be considered the live
load. A typical design assumption might be that upper-stack floors would carry
a live load of about 150 pounds per square foot. Because of the ever-present
demands on libraries to find more space for shelving books without constructing
a new building or expanding an existing one, compact shelving came to be
increasingly considered. However, since such shelving might increase the design
live load on a floor to 300 pounds per square foot or more, it could not be
installed on upper floors without compromising the factor of safety of the structure
(see section III.C.1). Basement floors, on the other hand, which might
have been designed at the outset for heavier loads, such as those required for
storing larger and heavier library materials like maps and newspapers, could be
retrofitted with compact shelving.24
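The floor-load question in this example reduces to comparing a demand against a capacity. The 150 and 300 pounds per square foot come from the text; the 350 psf basement capacity below is a hypothetical figure for a floor designed for heavy storage.

```python
# Simple demand-versus-capacity check for a shelving retrofit.

def retrofit_ok(shelving_load_psf: float, design_live_load_psf: float) -> bool:
    return shelving_load_psf <= design_live_load_psf

print(retrofit_ok(300.0, 150.0))  # False: compact shelving overloads an upper floor
print(retrofit_ok(300.0, 350.0))  # True: acceptable on a stronger basement floor
```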
Increasingly, bridges, buildings, machine parts, and other engineering structures
and components are being designed with computers by a process known as
computer-aided design (CAD). Much of the iterative process and the loading
considerations described earlier can be incorporated into the computer software
and so is invisible to the engineer using the computer. The engineer still plays a
central role in the design process, however, especially when specifying what
goes into the computer model of the structure or machine part being designed.
This input can typically include the overall size of the structure or part, the
specification of loads, the strength of the materials chosen, and the details of
connections between interacting parts of the design.
23. Jameson W. Doig & David P. Billington, Ammann’s First Bridge: A Study in Engineering, Politics,
and Entrepreneurial Behavior, 35 Tech. & Culture 537 (1994).
24. See Henry Petroski, The Book on the Bookshelf 178–80, 206–08 (1999).
C. “State of the Art”
The term “prior art”25 is ubiquitous in the patent literature and designates existing
technology that is being improved upon by something new, useful, and
nonobvious. Virtually everything that is patented improves upon the prior art,
and thus the prior art is in an ever-changing state. To work totally within the
prior art at a given time is to design something that would be considered routine
and thus hardly an invention. Engineers often work within the prior art, as
when they design a common highway bridge that is very much like so many
other highway bridges up and down the same road. Yet engineers are also often
called upon to build bridges in new settings and under new circumstances, and
in these cases they often must develop new types of bridges or devise new
construction procedures. In such cases they may in fact have to go beyond the
prior art and thus come up with something that is patentable.
When engineers are solving problems of an unusual kind or solving routine
problems in a new way, they are in fact acting as inventors. Indeed, engineering
can be thought of as institutionalized or formalized invention, though the terminologies
of invention and engineering are commonly kept distinct. The term
“prior art,” for example, is seldom used in engineering; the term “state of the
art” is used instead. Yet just as the prior art changes with each new patent, the
“state of the art” in engineering also means different things at different times. At
any given time, however, it designates what is considered the latest and generally
agreed upon practice of engineers in a given area, whether that be bridge
design, automobile design, or ladder design. To be considered innovative engineering,
a new idea or design must not be obvious to someone versed in the
state of the art.
To say that an engineer is practicing engineering within the state of the art is
not a pejorative characterization, but rather an indication that the engineer is
up-to-date in the field. The state of the art is advanced in engineering, as in
science, by pioneers (inventors) who see limitations to the state of the art and
who find fault with aspects of the state of the art that are not evident to those
immersed in the paradigm.
25. See 35 U.S.C. § 103(a) (1999) (defining “prior art” as subject matter that as a whole would have
been obvious to a person having ordinary skill in the subject area). See also Afros S.P.A. v. Krauss-Maffei
Corp., 671 F. Supp. 1402, 1412 (D. Del. 1987) (discussing the scope of prior art as “that which is
‘reasonably pertinent to the particular problem with which the inventor was involved’” (quoting Stratoflex,
Inc. v. Aeroquip Corp., 713 F.2d 1530, 1535 (Fed. Cir. 1983))).
1. “Factor of safety”
Engineers recognize that they do not always fully understand the engineering–
scientific theory or principles that underlie the functioning of their design. They
also recognize that they necessarily have made assumptions in their analysis, and
so the design as built will not behave exactly like the theoretical (mathematical)
model that served as the basis for their analysis. They recognize further that a
design as built does not necessarily have exactly the same details of workmanship
or strength of materials as were assumed in the calculations. For these reasons
and more, engineering designs are not made exactly to theoretical specifications
but rather are made to practical ones.
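One conventional way this margin between theory and practice is quantified is as the ratio of a part's capacity to the maximum load it is expected to carry in service; the 50 kN and 20 kN figures below are hypothetical.

```python
# Conventional definition: factor of safety = capacity / expected working load.

def factor_of_safety(failure_load: float, working_load: float) -> float:
    return failure_load / working_load

# Hypothetical machine part: fails in testing at 50 kN; design working load 20 kN.
print(factor_of_safety(50e3, 20e3))  # 2.5
```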
If a machine part is calculated to carry a certain maximum