United States v. Aceituno

United States District Court, D. New Hampshire
Oct 25, 2023
Crim. 20-cr-081-01-LM (D.N.H. Oct. 25, 2023)

United States of America v. Lester Aceituno


Opinion No. 2023 DNH 136 P

ORDER

LANDYA MCCAFFERTY, UNITED STATES DISTRICT JUDGE

Defendant Lester Aceituno is charged in an indictment with one count of conspiracy to commit bank fraud in violation of 18 U.S.C. § 1349 and two counts of aggravated identity theft in violation of 18 U.S.C. § 1028A. Before the court is Aceituno's motion in limine to exclude the testimony of the government's fingerprint expert. See doc. no. 94. The court held an evidentiary hearing on the motion, at which both parties presented expert witnesses. The government's expert testified to her fingerprint identification procedures as well as her conclusion that the latent fingerprint is a match for Aceituno's known print. The defense expert opined that the methodology and standards the government's expert used were too vague to be reliable, and that errors in her analytical process cast further doubt on the identification. By endorsed order dated October 5, 2023, the court denied Aceituno's motion and indicated that a written order would follow.

LEGAL STANDARDS

Federal Rule of Evidence 702 provides the requirements for expert witness testimony:

A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if:
(a) the expert's scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue;
(b) the testimony is based on sufficient facts or data;
(c) the testimony is the product of reliable principles and methods; and
(d) the expert has reliably applied the principles and methods to the facts of the case.

Based on these requirements, an expert witness's testimony may be challenged on the grounds that the witness is not qualified to give the opinion, the opinion is not based on specialized knowledge, the opinion is not reliable, or the opinion is not relevant. Carrozza v. CVS Pharm., Inc., 992 F.3d 44, 56 (1st Cir. 2021); Bogosian v. Mercedes-Benz of N. Am., 104 F.3d 472, 476 (1st Cir. 1997). The proponent of the expert witness bears the burden of showing by a preponderance of the evidence that the testimony is admissible. See Martinez v. United States, 33 F.4th 20, 24 (1st Cir. 2022); United States v. Tetioukhine, 725 F.3d 1, 6 (1st Cir. 2013); see also Fed.R.Evid. 702 advisory committee's note to 2023 amendment (explaining that the 2023 changes “clarify and emphasize” that the preponderance of the evidence standard applies under Rule 702).

The judge has a gatekeeping role to ensure that an expert witness's testimony is both reliable and relevant. Martinez, 33 F.4th at 24. In carrying out that function, the judge focuses on the process that generated the opinion, not on the opinion itself. Lopez-Ramirez v. Toledo-Gonzalez, 32 F.4th 87, 94 (1st Cir. 2022) (citing Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 595 (1993)). “There is an important difference between what is unreliable support and what a trier of fact may conclude is insufficient support for an expert's conclusion.” Milward v. Acuity Specialty Prods. Grp., Inc., 639 F.3d 11, 15 (1st Cir. 2011); Lopez-Ramirez, 32 F.4th at 94. “Vigorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.” Daubert, 509 U.S. at 596.

After Daubert established the trial court's gatekeeping function with respect to scientific expertise, the Supreme Court clarified in Kumho Tire Co. v. Carmichael, 526 U.S. 137, 147 (1999), that this function also extended to “technical” or “other specialized knowledge.” E.g., Lawes v. CSA Architects & Eng'rs LLP, 963 F.3d 72, 98 n.39 (1st Cir. 2020). In Kumho Tire, the Court upheld the trial court's decision that an expert in engineering lacked “sufficient specialized knowledge to assist the jurors ‘in deciding the particular issues in the case.'” Kumho Tire, 526 U.S. at 156, 158 (quoting 4 J. McLaughlin, Weinstein's Federal Evidence ¶ 702.05[1], at 702-33 (2d ed. 1998)). Following Kumho Tire, the key issue for non-scientific testimony is “whether the expert ‘employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.'” United States v. Monteiro, 407 F.Supp.2d 351, 357 (D. Mass. 2006) (quoting Kumho Tire, 526 U.S. at 156).

Regardless of whether the testimony is scientific, technical, or specialized, a court must determine whether the proffered expert testimony is sufficiently reliable. Id. (citing Kumho Tire, 526 U.S. at 147). In making that determination, the court must assess the reliability of the methodology underlying the expert's testimony. Id. Daubert outlines five factors that should guide courts in making this reliability determination: (1) whether the methodology can be or has been tested; (2) whether the methodology has been subjected to peer review and publication; (3) the known or potential error rate; (4) the existence of standards controlling the methodology's operation; and (5) the degree to which the methodology has been accepted within the relevant discipline. Daubert, 509 U.S. at 595-96. However, the Daubert factors “may not perfectly fit every type of expert testimony, particularly technical testimony based primarily on the training and experience of the expert.” Monteiro, 407 F.Supp.2d at 357. Thus, the court may also look to other factors in determining whether a methodology is sufficiently reliable. See, e.g., Kumho Tire, 526 U.S. at 150 (emphasizing that the Daubert factors are neither definitive nor exhaustive); United States v. Mitchell, 365 F.3d 215, 235 (3d Cir. 2004) (quoting In re Paoli R.R. Yard PCB Litig., 35 F.3d 717, 742 n.8 (3d Cir. 1994)) (listing additional factors used to determine a method's reliability).

The advisory committee's notes to the 2023 amendments to Rule 702 acknowledge that a known error rate is preferable but may not always be available. “In deciding whether to admit forensic expert testimony, the judge should (where possible) receive an estimate of the known or potential rate of error of the methodology employed, based (where appropriate) on studies that reflect how often the method produces accurate results.” Fed.R.Evid. 702 advisory committee's note to 2023 amendments.

The advisory committee's notes to the 2023 amendments also make clear that where the methodology involves subjective assessments, the expert should avoid statements of certainty: “Forensic experts should avoid assertions of absolute or one hundred percent certainty-or to a reasonable degree of scientific certainty-if the methodology is subjective and thus potentially subject to error.”

If the court determines that the methodology is reliable, it must then decide whether the expert reliably applied the methodology to the facts of the case. Fed.R.Evid. 702(d); Monteiro, 407 F.Supp.2d at 357-58. Expert testimony regarding the comparison of features, such as the ridges in fingerprints, “must be limited to those inferences that can reasonably be drawn from a reliable application of the principles and methods.” Fed.R.Evid. 702 advisory committee's note to 2023 amendments. The court must remain mindful that it is a “gatekeeper,” not an “armed guard,” and the “party who proffers expert testimony” need not prove to the court “that the expert's assessment of the situation is correct.” Ruiz-Troche v. Pepsi Cola of P.R. Bottling Co., 161 F.3d 77, 85-86 (1st Cir. 1998). “[O]nce a trial judge determines the reliability of the proffered expert's methodology and the validity of his reasoning, the expert should be permitted to testify as to the inferences and conclusions he draws from it, and any flaws in his opinion may be exposed through cross-examination or competing expert testimony.” United States v. Mooney, 315 F.3d 54, 63 (1st Cir. 2002).

Effective December 1, 2023, Rule 702(d) will state: “the expert's opinion reflects a reliable application of the principles and methods to the facts of the case.” Fed.R.Evid. 702(d) (2023 amendment). This change does not impose “any new, specific procedures,” but instead emphasizes the bounds of the inferences permitted in expert testimony. Fed.R.Evid. 702 advisory committee's note to 2023 amendments.

BACKGROUND

I. Factual and Procedural Background

The indictment alleges that Aceituno entered into a conspiracy to deposit and withdraw funds from fraudulently obtained checks and money orders. Specifically, it alleges that, between June 1, 2016, and October 10, 2017, the conspirators opened accounts with two different banks using other individuals' identification information. The conspirators allegedly deposited checks into these accounts and then obtained the funds through debit card purchases, withdrawals, money orders, and cashier's checks. Doc. no. 2 at 2. The indictment also alleges that, in relation to this conspiracy, Aceituno knowingly possessed and used the names, dates of birth, and Social Security numbers of two other people. Id. at 4.

In 2017, the government located a fingerprint on a post office box application written in the name of a victim in this case. The government intends to argue that the fingerprint is a match for one of Aceituno's known prints. To that end, the government intends to present the testimony of Patricia Cornell, a fingerprint analyst for the U.S. Postal Inspection Service's National Forensic Laboratory, as an expert in latent fingerprint analysis. In 2017, Cornell analyzed the latent print found on the application and identified it as a match with a known print of Aceituno's. She will testify at trial that she used an identification method called “ACE-V,” which stands for Analysis, Comparison, Evaluation, and Verification. Nat'l Rsch. Council, Strengthening Forensic Science in the United States: A Path Forward 137 (2009).

The unidentified fingerprint is referred to as a “latent” print, while fingerprints already identified to a particular individual are referred to as “known” fingerprints.

“ACE-V” is an acronym that describes a four-part process. “A” stands for “analysis.” The expert analyzes the latent fingerprint to determine whether the features are sufficiently detailed and clear for comparison. “C” stands for “comparison.” The expert compares the levels of detail in the latent print to a known fingerprint. “E” stands for “evaluation.” The expert evaluates the matching features between the two prints to determine whether there is a match. “V” stands for “verification.” During this last step, the result is verified. Id. at 137-38. Cornell followed these four steps. She is expected to testify at trial that, using the ACE-V method, she identified the latent fingerprint as a match for Aceituno's known print.

Aceituno moved to exclude Cornell's testimony, arguing that the government could not demonstrate that her testimony is admissible pursuant to Rule 702, Daubert, and its progeny. He attached a report to his motion authored by Michele Triplett, who, like Cornell, has extensive experience conducting fingerprint identification analyses (doc. no. 94-2). Triplett opined that the government had provided insufficient information regarding the criteria guiding Cornell's analysis, and that, based on the information that had been made available to her, she was unable to conclude that Cornell's identification was reliable. The government filed an objection to Aceituno's motion, and the court held an evidentiary hearing on September 25, 2023, at which both Cornell and Triplett testified.

Per Aceituno's request, the court permitted Triplett to testify via video due to her schedule. While the government saw no need for a hearing, the government did not object to Triplett testifying via video in the event of such a hearing.

II. The Government's Expert

Cornell, a Senior Latent Fingerprint Examiner for the Postal Inspection Service, testified at the hearing about her qualifications; her agency's standards and protocols; her use of the ACE-V method; and the application of ACE-V to the fingerprints in this case. Cornell began by describing her qualifications. She worked for four years as a basic fingerprint analyst before her promotion to her current senior position. She held previous positions in law enforcement in which she processed evidence from crime scenes, and taught classes on crime scene processing as an adjunct professor. Cornell testified that she has more than 375 hours of training in latent fingerprint identification. She has been certified by an international accrediting body since 2009. Cornell's position at the Postal Inspection Service requires her to undergo proficiency testing every year and she has passed every proficiency test. Cornell testified that she has identified more than 1,700 latent fingerprints that were of value for further study in 2023 alone.

In her testimony, Cornell described the Postal Inspection Service's Standard Operating Procedures (“SOPs”), which were also referenced in her expert report. The SOPs instruct an analyst to utilize ACE-V, consider certain contextual factors surrounding a print, look for increasing levels of detail, and make notations to indicate different patterns or results. Cornell described the three levels of detail the SOPs require examiners to consider. Level 1 detail includes the overall pattern on a fingerprint, which may be an arch, a loop, or a “whorl.” Level 2 detail includes the movement of ridges across a fingerprint, such as one ridge splitting into two. Level 3 detail consists of the minute details within a ridge, such as its shape, pores, and edges. Cornell also testified that she compared the distance between and position of features in a fingerprint.

Cornell then described each step of her application of ACE-V in this case, using an accompanying PowerPoint presentation in which colored dots demarcated the marks she made on an image of the latent fingerprint. First, in the “A” analysis stage, Cornell testified that she looked at a photograph of the fingerprint and “instantaneously” identified Level 1 detail: a left-slant loop. This meant that the fingerprint was likely from the left hand. She identified the left-slant loop based on the overall shape of the ridges. She then testified to the Level 2 detail, which she marked on the photo with a sharp stylus called a “pointer.” The sharp point on the pointer left a distinct mark on the image to document a unique feature, such as a ridge on the fingerprint. Cornell explained that the pointer does not penetrate the image and does not in any way “distort” that feature on the image. Once she found enough features on the latent print that-in her view-made the print valuable for further identification, she drew a red arch over the top of the image of the fingerprint as the SOPs required. She also marked and disregarded a section of the print that she thought was too distorted to be reliable.

The court understood her testimony to be describing the marks as indentations, similar to braille.

In the “C” comparison stage, Cornell testified that she compared the print to the known prints of (1) an alleged co-conspirator in this case and (2) a postal inspector assigned to this matter. Cornell found that neither matched. She then put an unmarked image of the latent fingerprint into an online automated fingerprint identification system (“AFIS”) database. Using an algorithm, the AFIS returns potential matches based on the image submitted. Cornell testified that she ran the image through the computer program and gave it a feature descriptor of a “left-slant loop.” The AFIS returned an additional twenty known prints to compare, including Aceituno's. Through a process of elimination, she narrowed down the candidates to Aceituno, and then to one of the ten prints on Aceituno's fingerprint card. Cornell testified that throughout the process she knew nothing about the case except Aceituno's name on his fingerprint card. She did not know that Aceituno was a suspect in the case.

Aceituno does not challenge the government's use of the AFIS.

In conducting her comparison, Cornell testified that she compared the latent print to Aceituno's known print side-by-side. She used her pointer to mark features that seemed to match on both the latent and known prints. She testified that she found more data points in the known print to supplement her points on the latent print. Cornell explained that she reached a preliminary conclusion that she had a match. She then attempted to disprove her conclusion by looking for discrepancies in the marked points. The SOPs did not require Cornell to find a minimum number of points to make her conclusion. She testified that she marked eleven features in agreement between the two prints.

During the “E” evaluation stage, Cornell testified that she reviewed the points in agreement between the two prints and preliminarily identified the latent print to Aceituno. She formalized her opinion and documented it as a preliminary identification. Cornell testified that this preliminary conclusion does not become a full identification until it is verified.

For the “V” verification stage, Cornell sent the photograph of the latent fingerprint and the known fingerprint card to another examiner: a peer at the Postal Inspection Service. Cornell explained that her agency did not conduct blind verifications. She referred to research showing that blind verifications take eight times longer than non-blind verifications, with a negligible impact on the verifier's conclusion. Here, the verifying peer knew before her verification that Cornell had identified the latent fingerprint to a specific known print belonging to Aceituno. The peer did not have Cornell's documentation or marked points. Cornell's verifying peer repeated the comparison and evaluation stages of ACE-V, double-checking Cornell's work, and concluded, as did Cornell, that there was a match.

A “blind” verification occurs when the verifying peer repeats the original examiner's process without knowing the original examiner's results. See United States v. Mahone, 453 F.3d 68, 72 (1st Cir. 2006) (referring to a blind verification where the verifier “had not reviewed [the initial] report before conducting his examination”).

Cornell testified that in July 2023, at the prosecutor's request, she redid the comparison, evaluation, and verification stages of ACE-V, with a blind verification. During her second ACE-V procedure, Cornell again reached the conclusion that the latent was a match for Aceituno's known print. However, Cornell admitted that, during her second ACE-V procedure, she marked one of the features by mistake. Specifically, one point she marked on the latent print did not match the point she marked on the known print. The verifying peer also marked one point in error. Cornell testified that she discovered these mistakes after reading Triplett's report. Cornell explained that she corrected both mistakes by removing the errant marks and identifying the features properly.

On cross-examination, Cornell was asked about the error rate for fingerprint identification. She testified that she only knew of a potential false positive error rate, which was 0.2%. Cornell was also asked about marked points on the latent and known prints that seemed mismatched. She opined that, because a fingerprint may be slightly distorted or smudged, otherwise identical ridges can appear to have minor discrepancies. She characterized these discrepancies as “within tolerance,” making them usable points.

III. The Defense's Expert

In response to Cornell's testimony, Aceituno called Michele Triplett. Triplett testified to her criticisms of ACE-V generally and critiqued the application of ACE-V in this case. Triplett acknowledged ACE-V's wide acceptance, but she explained that ACE-V is not a scientific method. Rather, in her view, ACE-V is no more than an order of events or broad outline within which the agency or investigator operates. The agency fills in the outline with standard operating procedures, or “SOPs.” Triplett testified that she published a paper advocating for the field's departure from ACE-V. Although she was not opposed to an analyst using ACE-V, Triplett opined that more specific information and protocols were needed.

With respect to the Postal Inspection Service's SOPs-the specific procedures Cornell used to implement ACE-V in this case-Triplett testified that the documentation required by the SOPs was insufficient to evaluate Cornell's work. She opined that the lack of objective criteria set forth within the SOPs for matching a latent print to a known print made it difficult to assess the reliability of Cornell's identification. Triplett also thought that the determination of a point as “within tolerance” was a subjective, standardless judgment call, which further hurt the reliability of the identification. Moreover, although Cornell testified that the print in this case was “not complex,” Triplett disagreed. Triplett described the latent print as complex, which would require “additional explanation, additional documentation, [and] additional quality assurance measures.”

In addition, Triplett pointed out the two errors made in marking features. To Triplett, these errors undermined Cornell's entire identification. She testified that there were other errors beyond those she identified in her expert report. Although Triplett found fingerprints “extremely reliable” in most cases, she testified that the procedures used in this case did not reliably support Cornell's conclusion.

More broadly, Triplett testified to the error rates in fingerprint identification. She testified that one of the first studies to provide an error rate showed that 99.8% of identifications were correct, consistent with Cornell's testimony. But given the volume of fingerprint identifications and the variability in print quality-ranging from the simple to the more complex-Triplett testified that the studies do not necessarily account for complex prints. Triplett testified that a study with more complex latent fingerprints would show a higher error rate. The studies did not disclose the complexity of the prints used and there was, in her words, “really no rating mechanism.”

DISCUSSION

In his motion in limine, Aceituno argues that Cornell's methodology-the ACE-V method in combination with the Postal Inspection Service's SOPs-is not sufficiently reliable. Doc. no. 94 at 5. He also argues that Cornell did not reliably apply this methodology. Id. The court will first analyze the reliability of the ACE-V method used here, then consider whether Cornell's application of this method was sufficiently reliable.

I. The Government Expert's Methodology Was Reliable

Aceituno first argues that Cornell's ACE-V methodology was unreliable because the SOPs she used are not specific enough to allow her results to be reproduced. The court analyzes the reliability of Cornell's approach using the relevant Daubert/Kumho factors.

A. Testability

Under Daubert and its progeny, the court may consider whether the method, and its underlying premises, are testable. Daubert, 509 U.S. at 593; Mitchell, 365 F.3d at 235. “‘Testability' has also been described as ‘falsifiability,'” which is the ability to prove a hypothesis false. Mitchell, 365 F.3d at 235.

Cornell testified that, during the comparison stage, she tested her preliminary conclusions about the potential identification. Cornell attempted to falsify her initial hypothesis that the latent print matched Aceituno's known print. She examined the eleven marked points on each of the prints, searching for discrepancies between the points of identification. Cornell testified that, in her view, she had sufficient information to test her hypothesis. She testified that she also had a peer redo the comparison, evaluation, and verification to further test her preliminary conclusion. The ACE-V process was repeated once more at the request of the prosecution when Triplett identified Cornell's error, essentially re-testing the original conclusions. Although she marked one of those points in error, Cornell was nonetheless able to reach the same conclusion, this time with a blind verification. In light of the peer review and blind verification, as well as Cornell's attempts to falsify her conclusions, this factor cuts in favor of admitting Cornell's testimony.

B. Peer Review and Publication

Both Cornell and Triplett testified that fingerprint identification has been peer reviewed and published in scholarly journals. Cornell also testified that the verification in the ACE-V method used at the Postal Inspection Service is itself a form of case-by-case “peer review.” In other words, within each ACE-V fingerprint identification, a peer at the Postal Inspection Service reviews the original analyst's evaluation. This layer of review adds to the reliability of ACE-V, and therefore supports admission of Cornell's testimony.

C. Known or Potential Error Rate

Neither expert could provide a definitive error rate for fingerprint identification using the ACE-V method. Cornell testified that a “majority” of studies found a false positive rate of 0.2%, and Triplett testified that she was aware of that figure. But Triplett pointed out that most studies do not distinguish between error rates where the expert is looking at simple versus complex latent prints. This is important, according to Triplett, because the higher the complexity of the latent print, the greater the error rate.

Neither party submitted any of these studies in evidence.

To Triplett, the latent print in this case was complex and therefore more likely to be erroneously matched to a known print. But to Cornell, the print was “not complex,” and the two marking errors were minor. This dispute between experts over the relative complexity of the latent print and the errors in the expert's work is the kind of debate that a jury should be allowed to hear and assess. See Mooney, 315 F.3d at 63. However, the lack of a known error rate cuts against the reliability of Cornell's particular version of the ACE-V method.

D. Existence of Controlling Standards

Regarding the standards governing the ACE-V method, Aceituno argued at the evidentiary hearing that the standards for documentation, specificity, and quantitative detail were lacking. Cornell testified that she used ACE-V and her agency's SOPs to identify Aceituno's fingerprint. Triplett opined that an agency's standards matter more than adherence to ACE-V's four-step methodology, and she expressed reservations about the lack of quantitative standards in the SOPs used in this case.

The court agrees with Triplett's characterization of the ACE-V method. An acronym of four letters does not create a technical methodology. By itself, ACE-V is a mere organizational structure, comparable to the “IRAC” (“Issue, Rule, Application, Conclusion”) writing structure that law students learn in first-year legal writing courses. Just because a lawyer uses the IRAC method to write a brief does not render the brief more likely to reach either a reliable or correct legal conclusion. Likewise, ACE-V is only as reliable as the testing and analysis that takes place within it.

The Postal Inspection Service's SOPs neither set thresholds for the number of features an analyst should identify nor provide metrics to determine whether the features between two prints sufficiently correspond. Triplett conceded, however, that requiring a minimum number of features to find a match would not necessarily improve the accuracy of the identification. The SOPs require that analysts make certain notations to indicate their findings on the fingerprint photographs and Cornell testified about these details. Cornell's testimony supplemented and explained in detail the operation of the SOPs within each step of the procedure. The court finds that this factor cuts slightly in favor of admitting Cornell's testimony.

E. Acceptance Within the Relevant Discipline

Widespread acceptance in the relevant discipline “can be an important factor in ruling particular evidence admissible, and ‘a known technique which has been able to attract only minimal support within the community' may properly be viewed with skepticism.” Daubert, 509 U.S. at 594 (quoting United States v. Downing, 753 F.2d 1224, 1238 (3d Cir. 1985)).

At the evidentiary hearing, both experts testified to the long-standing use of the ACE-V method in the fingerprint identification field. Aceituno introduced as an exhibit a 2009 report from the National Research Council, which was critical of ACE-V. Despite its criticism, the report still referred to the method as “[t]he technique used to examine prints made by friction ridge skin.” National Research Council, supra, at 137; see also United States v. Baines, 573 F.3d 979, 983 (10th Cir. 2009) (referring to ACE-V as the “process used for determining whether a latent print matches a known print”). Moreover, courts have readily admitted fingerprint identification using the ACE-V method. See, e.g., United States v. Pena, 586 F.3d 105, 110 (1st Cir. 2009) (collecting cases); see also United States v. Crisp, 324 F.3d 261, 266 (4th Cir. 2003) (noting that fingerprint identification evidence has been admissible in criminal trials for more than a century). This court is unaware of any case in which a court has sustained a Daubert challenge to a fingerprint identification made using the ACE-V method. Indeed, the Fourth Circuit has noted the “strong expert and judicial consensus” regarding the reliability of ACE-V fingerprint identifications. Crisp, 324 F.3d at 266. Given this consensus, and the widespread adoption of the ACE-V structure for fingerprint identifications, this factor supports admitting Cornell's expert testimony.

Many of these cases predate the 2009 National Research Council report.

However, in light of ACE-V's shortcomings-as identified by Triplett and the National Research Council's report-the court concludes that the proponent of a fingerprint expert must do more than simply describe the methodology as the “widely accepted ACE-V method.” See, e.g., Baines, 573 F.3d at 991 (“The ACE-V system is a procedural standard but not a substantive one.”). By itself, the ACE-V method describes neither a technique nor a specialized skill. To be admissible, the specific manner in which the expert employs the ACE-V method must satisfy the Daubert/Kumho principles.

F. Reproducibility

Finally, Aceituno asked the court to consider reproducibility. At the evidentiary hearing, Cornell all but reproduced her process-taking the court through the details of each step in her analysis. The documentation required as part of the identification, though minimal, nonetheless allowed Cornell to repeat and demonstrate to the court her detailed evaluation of the fingerprints. As part of the verification stage of the ACE-V method, her results were also peer reviewed and, essentially, reproduced.

Cornell's report, by itself, did not adequately describe the methodology employed in this case. By testifying to the operating procedures she used within each step, however, Cornell supplemented her report with sufficient details and explanations. The lack of a known error rate (that properly accounts for the simplicity or complexity of the latent print) raises concerns about the methodology. But, in light of Cornell's detailed testimony at the evidentiary hearing, the court finds Cornell's version of the ACE-V methodology sufficiently reliable.

II. The Government's Expert Reliably Applied the ACE-V Method

Aceituno also argues that Cornell did not reliably apply the ACE-V method. Specifically, Aceituno challenges Cornell's documentation, the subjective judgments made using the SOPs, and the errors in marking features.

A. Documentation

Aceituno first argues that the SOPs do not require enough documentation to determine the reliability of the expert's application of the ACE-V method and the SOPs themselves. The SOPs require documentation in the form of notations and markings-rather than narrative explanations-when analyzing a fingerprint. Cornell testified to her adherence to the SOPs' requirements for documentation. Further, over several hours in the courtroom, Cornell explained her notations on a demonstrative overlay of the fingerprint image to show her process. Any deficiencies in her documentation were cured by her testimony.

B. Subjectivity of Standards

Aceituno next argues that the lack of specificity in the SOPs renders their application highly subjective. Cornell testified that differences in the points of identification on the latent and known prints were “within tolerance,” that is, within a permissible margin of error. Triplett testified that the notion of “tolerance” was too subjective, especially because the SOPs did not provide a numerical value for tolerance. This debate is appropriate for the jury to hear in weighing Cornell's credibility. Aceituno does not argue that Cornell deviated from any particular standards. Aceituno's real challenge is to the lack of numeric standards for guiding fingerprint analysis in general. The lack of numeric standards is a problem seemingly endemic to fingerprint identification, but it does not render Cornell's application of ACE-V unreliable.

C. The Government Expert's Errors

Finally, Aceituno argues that the errors made by Cornell and her peer undermine the reliability of the entire identification. In her expert report, Triplett identified two errors, prompting the government to ask Cornell to reexamine the fingerprints. Cornell did so and she explained the errors in her live testimony. Any errors in fingerprint identification go to the weight of her testimony and her credibility as an expert. See Mooney, 315 F.3d at 63. Aceituno will have the opportunity to expose these errors to the jury on cross-examination or with his own expert. Id.

Having considered Cornell's latent fingerprint identification through the lens of Daubert and Kumho Tire, as well as the long line of cases admitting fingerprint identifications using the ACE-V method, the court finds that Cornell's use of the ACE-V method is sufficiently reliable under Rule 702.

CONCLUSION

Aceituno's motion in limine (doc. no. 94) is denied.

SO ORDERED.

