Source code ch.08: Experts

Source Code & Software Patents: A Guide to Software & Internet Patent Litigation for Attorneys & Experts
by Andrew Schulman (http://www.SoftwareLitigationConsulting.com)
Detailed outline for forthcoming book

Chapter 8: Technical expert witnesses & non-testifying consultants in software patent litigation

Table of contents

  • 8.1 Introduction
  • 8.2 Role of testifying technical experts in software patent litigation
    • 8.2.1 Technical experts & Markman (claim construction) hearings
  • 8.3 Role of non-testifying consulting technical experts
  • 8.4 Who, what, where, when, why, how
    • This subsection covers:
      • Who: Attributes of software patent litigation experts; see 8.4.1 below
      • What: Scope of assignment; see 8.4.2 below
      • When: Need for early engagement of expert; see 8.4.3 below
      • Why: See 8.2 (role) above for reasons why an expert is almost always necessary, not merely helpful
      • How: See 8.6.5 below on methods and tools used by software experts
      • How much: See 8.4.1.2 below on expert fees
    • 8.4.1 Who: Attributes of software patent litigation experts
      • 8.4.1.1 Relation of the testifying expert to other experts, and to the hiring attorney
      • 8.4.1.2 Expert fees
    • 8.4.2 What: Scope of assignment
    • 8.4.3 When: Need for early engagement of expert
  • 8.5 Rules governing experts: FRCP 26, FRE 702-705, and Advisory Committee Notes (ACN)
    • 8.5.1 Federal Rules of Civil Procedure (FRCP) Rule 26 and ACN
    • 8.5.2 Federal Rules of Evidence (FRE) Rule 702 and ACN
    • 8.5.3 FRE Rule 703 and ACN
    • 8.5.4 FRE Rule 704 and ACN
    • 8.5.5 FRE Rule 705 and ACN
    • 8.5.6 FRE Rule 706 and ACN: see court-appointed experts at 8.8 below
  • 8.6 Expert opinion and bases: reliable principles, methods, and facts
    • This subsection covers:
      • 8.6.1 Does Daubert apply to software patent litigation & source code examination?
      • 8.6.2 Expert experience/qualifications as basis for opinion – no ipse dixit
      • 8.6.3 General principles & assumptions as basis for opinion
      • 8.6.4 Facts (qualitatively & quantitatively sufficient) as basis for opinion
      • 8.6.5 Methodology (including Daubert and post-Daubert factors) as basis for opinion
      • 8.6.6 What is an “Opinion”?
      • 8.6.7 “Fit” and “Application”
    • 8.6.1 Application of Daubert to software patent litigation & source-code examination
      • 8.6.1.1 Why Daubert appears inapplicable, or at least not a “big deal,” for technical experts in software patent litigation
      • 8.6.1.2 Why Daubert concerns of reliability do apply in software patent cases
      • 8.6.1.3 Reasons to consider a Daubert foundation for software expert testimony
    • 8.6.2 Expert experience and qualifications as basis for opinion – no ipse dixit
    • 8.6.3 General principles & assumptions as basis for opinion
    • 8.6.4 Facts (qualitatively & quantitatively sufficient) as basis for opinion
      • 8.6.4.1 Qualitative sufficiency – type of facts relied upon
      • 8.6.4.2 Quantitative sufficiency – adequate supply of facts
      • 8.6.4.3 “What did you NOT look at?”
      • 8.6.4.4 Testifying expert’s reliance upon non-testifying consultant & upon attorneys for facts
      • 8.6.4.5 Expert’s reliance on inadmissible evidence, and the “conduit” problem
      • 8.6.4.6 Holistic vs. disaggregated treatment of individual facts
    • 8.6.5 Methodology (including Daubert and post-Daubert factors) as basis for opinion
      • This subsection covers:
        • 8.6.5.1 Daubert- and post-Daubert factors
        • 8.6.5.2 “Falsifiability,” testability, and testing
        • 8.6.5.3 “Peer review” & publication of software analysis methodologies
        • 8.6.5.4 “Error rate” & its application to software analysis, including source-code review
        • 8.6.5.5 Standards & controls regarding software analysis
        • 8.6.5.6 General acceptance in a field of expertise
        • 8.6.5.7 Non-litigation background to the methodology
        • 8.6.5.8 Adequacy to explain important facts, and consideration of alternate theories
      • 8.6.5.1 Daubert- and post-Daubert factors
        • (1) “whether it can be (and has been) tested”, “falsifiability, or refutability, or testability” – see 8.6.5.2 below
        • (2) “whether the theory or technique has been subjected to peer review and publication” – see 8.6.5.3 below
        • (3) “the court ordinarily should consider the known or potential rate of error … and the existence and maintenance of standards controlling the technique’s operation” – see 8.6.5.4 and 8.6.5.5 below
        • (4) “general acceptance”: “explicit identification of a relevant scientific community and an express determination of a particular degree of acceptance [of the expert’s methodology] within that community” – see 8.6.5.6 below
      • 8.6.5.2 “Falsifiability,” testability, and testing
      • 8.6.5.3 “Peer review” & publication of software analysis methodologies
      • 8.6.5.4 “Error rate” & its application to software analysis, including source-code review
      • 8.6.5.5 Standards & controls regarding software analysis
      • 8.6.5.6 General acceptance in a field of expertise
      • 8.6.5.7 Non-litigation background to the methodology
      • 8.6.5.8 Adequacy to explain important facts, and consideration of alternate theories
    • 8.6.6 What is an “Opinion”?
      • 8.6.6.1 Opinion vs. “just let the facts speak for themselves”
      • 8.6.6.2 When raw facts without an opinion are appropriate
      • 8.6.6.3 No merely conclusory opinions, or inadequately explored legal criteria
      • 8.6.6.4 Hedging opinions & “weasel words”
      • 8.6.6.5 Degrees of certainty
    • 8.6.7 “Fit” and “Application”: wringing and stretching
  • 8.7 “Battle of the experts”: Why & how experts disagree over the facts of how patent claims read onto software
  • 8.8 Court-appointed special masters, and proposed solutions to the “expert problem”

8.1 Introduction

  • Because this is a book on software analysis, the focus here is technical expert testimony, not economics/damages
  • Non-testifying experts often perform source-code examination, so this chapter covers non-testifying as well as testifying experts
  • “Daubert” here used as shorthand for FRE 702, as interpreted in Daubert v. Merrell Dow and its progeny, including Kumho Tire v. Carmichael
  • While a formal Daubert hearing seems unlikely for technical experts in software patent litigation (though becoming more likely), this chapter uses Daubert to structure discussion of expert’s qualifications, use of general principles and facts, reliable methodology, and opinion
  • Daubert/FRE 702 challenges are more likely for a plaintiff’s expert than for a defendant’s; see a 2015 study of Daubert motions (see also a 2016 PwC study of Daubert challenges to financial experts)
  • Even when admissibility of technical expert evidence under Daubert/FRE 702 is not an issue in software patent litigation, the same questions are important in showing or attacking the weight that the fact-finder should give to the expert’s testimony.

8.2 Role of testifying technical experts in software patent litigation

  • Tech expert in software-related patent case typically uses source code to establish technical components of infringement and invalidity.
  • The expert’s role is to apply specialized knowledge so as to provide facts and opinions useful to the fact-finder (judge or jury).
  • What experts do: investigate, examine, search, read, analyze, compare, match, translate, interpret, test, report, testify, opine, teach, rebut.
  • Expert can both:
    • “familiarize” (translate complex/foreign into simple terms) and
    • “de-familiarize” (show that things are not so simple; this is a task for the D expert re: infringement, and the P expert re: invalidity).
  • Expert “sees things” that laypersons can’t see (or don’t see until expert points it out to them) — though if done poorly, this can appear as if the expert is simply downplaying “what everyone can see plain as day,” to direct (or divert) attention to something obscure (plainly visible only to the expert), which the expert says is more important.
  • Expert’s role is generally to provide opinions (not merely “let the facts speak for themselves” as is sometimes disingenuously pretended), based on specialized knowledge/methods applied to facts; see 8.6 below re: opinion
  • Expert is often also responsible for uncovering/locating these facts, e.g. from source-code examination, possibly using non-testifying consulting expert for initial fact digging (see 8.3 below)
  • Technical expert may be necessary (not merely helpful) to establish infringement or invalidity of patents in “complex” technology (Centricut v. Esab) (patentee could not withstand summary judgment on the issue of literal infringement in a case involving complex technology in the absence of expert testimony).
  • Expert may be necessary (not merely helpful) to establish an issue of material fact re: summary judgment (SJ); see affidavits in ch.27; but expert’s merely “conclusory” broad opinions do NOT establish a genuine issue of material fact (see e.g. Telemac v. Topp) (conclusory statements offered by experts are not evidence).
  • Expert testimony on each and every claim limitation is often necessary to avoid “failure of proof” under Celotex v. Catrett re: FRCP 56(e): SJ should be granted for failure to make sufficient showing to establish existence of even a single element (here, patent claim limitation) essential to party’s case (possibly see HSBC v. Decisioning.com); such failure may follow successful exclusion of expert opinion under Daubert, and/or following failure to completely disclose opinion or bases in expert report (see chapter 27)
  • Above all, an expert witness is a witness; contrast unspoken assumption that expert witness is a “member of the team” (i.e., a “hired gun”).
  • Patent plaintiff may need to have an expert to show its seriousness, willingness to invest in the litigation; expert as price of admission to patent litigation; failure to hire an expert may be seen as indicating frivolousness, or mere “nuisance value.”
  • Patent litigation system as a whole needs a “pool” of experts (see BIAX v. Brother re: party’s “arrogant” attempt to restrict opposing expert from any expert-witness work for four years, because of expert’s exposure to party’s source code; such restrictions would seriously limit the pool of experts; courts must ensure parties have access to experts with specialized knowledge)
  • Even if both sides will come to factual agreement about what source code does, each side still needs its own expert to arrive at such agreement; risky to depend on the other side’s narrative (see [case] re: “trust but verify”?)
  • Sometimes party uses expert merely to neutralize other side’s expert in the hope that, faced with a “battle of the experts” (see 8.7 below), jury may simply disregard all expert testimony
  • Even though jury not required to credit even un-contradicted expert testimony, and even though case may turn on law (claim construction) not fact (code interpretation), each side will generally want its own technical expert
  • Expert’s explicit role in Markman hearing is diminished as “extrinsic evidence” under Phillips, but experts will find and develop facts which drive each side’s desired claim constructions; see 8.2.1 below (including re: technology “tutorials” at Markman hearing)
  • Expert may speak for PHOSITA (person having ordinary skill in the art) re: enablement (e.g., N. Telecom v. Datapoint;  how much code-writing constitutes undue experimentation?), obviousness
  • [Any “complex” patent cases in which no technical experts, because dispute was entirely claim construction and/or economics/damages?]
  • Pros & cons of treating expert witness as a member of “the team”; e.g. expert attending deposition of opposing expert; better to use non-testifying expert in this capacity
  • Rebuttal experts (see ch.27 on rebuttal reports)
  • See also “Who” at 8.4.1 below re: necessary skills and qualifications of the technical expert

8.2.1 Technical experts & Markman (claim construction) hearings

  • As noted above, expert’s explicit role in Markman hearing is diminished as “extrinsic evidence” under Phillips
  • Source code for accused product will likely not come into Markman hearing (claim construction is ostensibly performed without reference to the accused product), but each party’s preferred claim construction will likely be driven by facts uncovered by experts from source code re: infringement, invalidity
  • Technical expert can provide tutorial on “the art”; courts find this useful for “complex” technologies
  • Expert can explain meaning which words would have carried, at the time of invention, to one having “ordinary skill” in the art (PHOSITA)
  • This includes ordinary words with specialized meaning (e.g., “thread”) or ambiguous words (e.g., “process”: did the term, in this patent, at the relevant time, refer to a full process or to a lightweight process, e.g. a thread?)
  • “Expert” can represent one of “ordinary skill”: an expert is one testifying on the basis of specialized knowledge compared to a layperson; even the PHOSITA is an expert
  • Expert testimony is extrinsic evidence, given less weight in claim construction than intrinsic evidence (claims, specification, file wrapper)
  • No role for experts who contradict intrinsic evidence
  • Some courts have stated that expert testimony at Markman hearing is by its nature less reliable than intrinsic evidence
  • Markman hearings are intended to be legal rather than factual: the only role for the expert is to help the court understand the patent claims, which constitute a legal document (even though patents are ostensibly written for those skilled in the art; see Bruce Abramson’s CAFC book on this curious contradiction)
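
The “process” vs. “thread” ambiguity noted above can be made concrete. A minimal Python sketch (purely illustrative, not tied to any actual patent or case): a thread shares its parent’s memory, while a full process gets its own address space, so a write made by a thread is visible to the parent but a write made in a child process is not.

```python
import subprocess
import sys
import threading

shared = {"n": 0}

def bump():
    shared["n"] += 1

# Thread ("lightweight process"): shares the parent's address space,
# so its write to `shared` is visible here after the join.
t = threading.Thread(target=bump)
t.start()
t.join()
print(shared["n"])  # 1

# Full process: separate address space; variables it creates or
# modifies never touch the parent's copy of `shared`.
subprocess.run([sys.executable, "-c", "shared = {'n': 999}"], check=True)
print(shared["n"])  # still 1
```

Whether a claim term such as “process,” as used at the relevant time, covers the lightweight (thread) case as well as the full-process case can thus determine whether accused multithreaded code falls within the claim at all.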

8.3 Role of non-testifying consulting technical experts

  • FRCP 26(b)(4)(D) “Expert Employed Only for Trial Preparation”
  • Whereas facts, data, and assumptions provided by the attorney to a testifying expert are likely discoverable (even if otherwise protected attorney work product; see the 2010 amendment, FRCP 26(b)(4)(C)), materials created by, or shared with, non-testifying experts are almost never discoverable (but note that FRCP 26(b)(4)(D) only excludes discovery “by interrogatories or deposition”)
  • In software patent litigation, a non-testifying expert often does the bulk of source-code examination, selecting files meriting closer inspection by testifying expert
  • Testifying expert can use non-testifying consultant to do mechanical “leg work” (e.g. initial proposed selection of relevant files from large source-code production)
  • Testifying expert can rely on non-testifying expert’s output, if of the type reasonably relied upon in the expert’s field (even though consulting expert’s memos were prepared for litigation); such reliance is not “bolstering” nor mere “parroting” of the non-testifying expert, at least when testifying expert confirms report (see Medisim v. BestMed re: report analyzing source code); but sometimes danger of testifying expert as “mouthpiece” for out-of-court consulting expert?
  • Attorneys can use non-testifying consultant to:
    • explore preliminary case theories & potential defenses;
    • conduct tentative testing;
    • identify potential blind alleys;
    • find and hold “bad facts”;
    • act as devil’s advocate;
    • provide tech assistance to attorney taking opposing expert’s deposition (using testifying expert for this last role may compromise appearance of expert neutrality)
  • Some attorneys may prefer a “Chinese Wall” between non-testifying and testifying experts; pros & cons:
    • Pro: keep non-testifying expert’s possibly wide-ranging research non-discoverable; shared with attorney
    • Cons: some non-testifying expert research may need to be independently duplicated by testifying expert
  • Consulting expert to critique opposing side’s experts; this is a separate role from assisting own side’s experts
  • FRCP 26 rules and ACN re: non-testifying experts (“Expert employed only for trial preparation”)
  • The 2010 FRCP amendments re: expert-report drafts (FRCP 26(b)(4)(B)) were expected to reduce the need for non-testifying consultants, suggesting that shielding drafts (or generally reducing the expert’s discoverable surface area) was viewed as a major reason for retaining them; yet non-testifying consultants continue to play an important role, suggesting that shielding report drafts was not, or is no longer, their primary motivation.
  • Law re: non-testifying experts is based on confidentiality of litigation work product
  • Non-testifying expert’s work product, when not passed to testifying expert, or when not relied upon or considered by testifying expert, is discoverable only in exceptional circumstances; treated as attorney work product?
  • But communications between testifying and non-testifying experts are discoverable, in most cases
  • Materials passed from non-testifying to testifying expert are materials “considered,” to be noted in expert report, unless testifying expert ignores, and even then perhaps discoverable
  • Rules of discoverability if non-testifying consultant becomes a testifying expert (Intermedics v. Ventritex; Monsanto v. Aventis); conversely, if designated testifying expert is “downgraded” to non-testifying (e.g. because of unfavorable views?)
  • Relationship of non-testifying expert to party’s in-house employees
  • Do not use the label of “non-testifying expert” as a ruse to shield party employees from discovery (see e.g. ZCT v. FlightSafety; inappropriate and “incredible” attempt to shield in-house employee)
  • Testifying expert’s own support staff (consulting firm), performing tests under expert’s direction, are in same role as non-testifying consultants?
  • Ironically, non-testifying expert (part of “team”) may be better able to act as devil’s advocate, sounding board, or (annoying but necessary) wet blanket, than can independent testifying expert; ironic because testifying expert witness intended to be impartial, while non-testifying expert is part of the “team”; this is a perverse outcome of the privileged status of non-testifying witness communications coupled with generally wide-open discovery (apart from FRCP 2010 amendments re: drafts of expert reports) into testifying expert’s communications
  • Limiting non-testifying expert’s communications with opposing attorneys hosting source-code examination

8.4 Experts in software patent litigation: Who, what, where, when, why, how

  • Who: Attributes of software patent litigation experts; see 8.4.1 below
  • What: Scope of assignment; see 8.4.2 below
  • When: Need for early engagement of expert; see 8.4.3 below
  • Where: See ch.11 on protective orders for location of source-code examination
  • Why: See 8.2 (role) above for reasons why an expert is almost always necessary, not merely helpful
  • How: See 8.6.5 below on methods and tools used by software experts
  • How much: See 8.4.1.2 below on expert fees

8.4.1 Who: Attributes of software patent litigation experts

  • Qualifications may come from experience as well as education (see FRE 702)
  • Formal qualifications often useful to avoid Daubert challenge
  • Unfortunately, little proficiency testing in software examination (see 8.6.2 below)
  • Often need software engineering (SE) rather than computer science (CS) [explain CS vs. SE difference]
  • Expert often not just any programmer: experience not only in writing code, and not only in closely reading code (a different skill from the ability to write programs), but in comparing code to text (such as a patent claim); see xxx below on expert’s non-litigation & litigation practice
  • Expert’s ability to represent the PHOSITA in a given “field” (see xxx below on expert role in obviousness analysis, and broad v. narrow definitions of the “field of expertise”)

Generalists vs. specialists:

  • Generalist has broader experience; better able to compare & contrast
  • CS & SE have general principles which express themselves in different specific fields such as graphics, memory management, networking, etc.
  • Generalist who does testing may win out over specialist who doesn’t test [case]
  • Daubert challenges are often on precise field of expertise required in case (see 8.6.2 below)
  • Field of expertise, and danger of using expert to do “double duty” in areas outside field, or as “one man band”

Industry vs. ivory tower/academia:

  • Industry experts specializing in precise field may be excluded from source-code access under protective order (PO; see ch.11); thus, often only generalists will be acceptable to the other side
  • Party generally cannot use in-house employee as expert, again because likely precluded from access to other side’s source code under PO
  • Inventor as expert? (see e.g. Verizon v. Cox, limiting inventor testimony to factual testimony not requiring expertise: either non-opinion or FRE 701 non-expert opinion)
  • Both testifying and non-testifying viewers of source code will under the PO need to be acceptable to the party producing the source code
  • Some tech experts with industry background may also properly opine on business aspects of field of expertise (relevant to damages)

“Hired gun” (professional witness) vs. “mad scientist”:

  • Percentage of time devoted to non-litigation practice (see 8.6.5.1 below on post-Daubert factors)
  • While expert is often a “specialist” in code examination methods designed for patent litigation, this should at least be explicitly rooted in standard non-litigation practices (see xxx below)
  • Rather than expertise being contradicted by seeming “over” participation in litigation, comparison of code with patent claims is a specialized skill; rooted in non-litigation code analysis, but arguably distinct
  • Experts who work almost exclusively for plaintiff or defendant
  • Hands-on vs. “armchair”; danger of seeming over-reliance on non-testifying consultant
  • Expert’s comfort level with both flexibility (especially early in case) & specificity (e.g., recognition of seemingly small word differences in patent claims); this reflects the nature of patents themselves as combinations of broad and specific (see xxx on flexibility and commitment)
  • Expert’s comfort with both broad & narrow readings of code
  • Expert keeps up with, or able to quickly get up to speed in, the relevant current literature (ACM Portal, CiteSeer, Google Scholar)
  • Specialized knowledge in programming languages, operating systems, types of software

Specific skills, heuristics:

  • Ability to recognize key “idioms” in relevant programming languages (e.g. immediate invocation of anonymous functions (IIFEs) in JavaScript; captures in regular expressions)
  • Ability to recognize potential lower-level constructs from higher-level code (e.g. “hashing” claim limitation likely matches array[“string”] because associative arrays rely on hashing; spreadsheet likely matches “sparse matrix”)
  • Ability to recognize higher-level constructs from lower-level code (though code may not “look like” what it does)
  • Ability to recognize key constructs even when unnamed or unlabeled (e.g. callback, iterator, destructor)
  • Ability to detect when something is missing (e.g. “they gave us what looks like client code, so there must be some server code too”)
  • [See Wason card problem: know which card to turn over to answer question; disconfirmation should have more weight than confirmation; but not necessarily in patent infringement, where the presence of non-infringing technology in one part of a product generally doesn’t contradict the presence of infringing technology in another part]
  • “Expert shopping”
  • “Good expert but bad witness” (waffler, blabberer, chatterer): consider use as non-testifying consultant
  • Conflicts of interest, experts “conflicted out”: issue is previous access to confidential info, rather than “loyalty” (expert witness ultimate loyalty should be to the fact-finder, not the party retaining the expert)
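
As a concrete (and purely hypothetical) illustration of the lower-level-construct heuristic above: a “hashing” claim limitation may read onto code that never mentions hashing, because an associative-array lookup is typically implemented with a hash table. A minimal Python sketch:

```python
# Hypothetical example: the source code under review never says "hash",
# but Python's dict (an associative array) is implemented as a hash
# table, so every keyed lookup involves hashing behind the scenes.

prices = {"widget": 10, "gadget": 25}   # associative array

def lookup(name):
    # High-level view: no visible hashing step.
    return prices.get(name)

def bucket_for(name, n_buckets=8):
    # Lower-level view of the same operation: the hidden hashing step,
    # roughly how a hash table picks a bucket for a key.
    return hash(name) % n_buckets

print(lookup("widget"))      # 10
print(bucket_for("widget"))  # some bucket index in 0..7
```

An examiner applying this heuristic would then confirm, from the language documentation or the runtime’s source, that the associative array at issue is in fact hash-based rather than, say, tree-based, before mapping it to a “hashing” limitation.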

8.4.1.1 Relation of the testifying expert to other experts, and to the hiring attorney

  • Testifying expert relationship to non-testifying consultant (assistant, shield, etc.); see 8.3 above
  • Technical expert relationship to economics/damages experts: provide tech aspects of damages (e.g., role of infringing technology in overall product; technical aspects of market definition; tech ID infringing part numbers, as input to damages accounting)
  • Relationship to “opposing” experts: pros & cons of testifying expert’s presence at deposition of “opposing” expert; see also 8.8 below on “hot tubbing”
  • Relationship with hiring attorneys: “coaching”, “woodshed”; see chapter 27 on attorney role in drafting expert report, and dangers of “ghostwriting”
  • Relations among multiple technical experts: see 8.4.2 below on scope of assignment; infringement/invalidity expert bifurcation
  • Pros & cons of communication among experts: not privileged; contrivances (?) of attorney presence at expert meetings
  • Pros & cons of viewing the expert as part of “the team”: danger of waiving work product privilege; tensions & contradictions in expert’s role

8.4.1.2 Expert fees

  • Under FRCP 26, fees must be disclosed in expert report (see chapter 27)
  • No contingency fees for experts (see Lerach)
  • “Lock-up” fees (paying expert mostly to keep from the other side)
  • Why expert’s litigation fees should be same as non-litigation fees
  • Why charging a premium for time spent testifying isn’t a good idea
  • Percentage of total income from litigation?
  • When reasonableness of fees (including number of hours worked) is an issue: e.g. payment of expert fees as part of settlement, for sanctions, or for “exceptional case”
  • Party taking deposition from expert pays expert fees for deposition time
  • Submitting untested/untestable expert testimony can lead to an award to the opposing side for its own expert fees necessary to rebut, as well as for attorney fees (see MarcTec v. Cordis, awarding expert fees to party forced to incur costs to rebut unreliable/irrelevant expert testimony)
  • Time: see budgeting, scheduling, triage, prioritization in ch.23
  • While the expert’s contacts are usually with the law firm rather than with the firm’s client, the expert agreement is typically “pay when paid” — for payment, the expert must ultimately look to the client rather than to the law firm, even when the expert has had zero contact with the client
  • Expert agreement should make explicit that expert payment is not dependent on opinions reached

8.4.2 What: Scope of assignment

  • Sometimes separate experts for infringement vs. invalidity: pros & cons of expert bifurcation:
    • Pros: if no one expert has an opinion on both infringement and invalidity, it is easier for the hiring party to maintain what otherwise might appear an inconsistent stance
    • Cons: because preferred claim construction is generally a needle carefully threaded between infringement and invalidity (for P, capturing infringement on the one hand and avoiding invalidity on the other; the converse for D), it is best if one expert is fully aware of both; similar in some ways to “nose of wax” issues when P in earlier IPR took narrow validity position that now tends to contradict P’s later position on D’s infringement.
  • Questions at expert deposition re: what were you asked to look at?; what were you not asked to look at?; what were you asked NOT to look at?
  • Attorney can properly set scope of assignment, e.g. as part of establishing division of labor among multiple experts
  • Danger of scope of assignment outside expert’s stated field of expertise: see 8.4.1 above on generalists vs. specialists [see also xxx below on defining the field of expertise]

8.4.3 When: Need for early engagement of expert

  • Need for early involvement of expert (likely non-testifying), so as not to later present experts/examiners with a fait accompli
  • Plaintiff using early expert to assist with pre-filing investigation (see ch.6); use consulting rather than likely testifying until know more about case, to see if case is likely to have expert support
  • Early expert involvement to guide discovery requests for specific source code & tech documents
  • To review proposed protective order re: source-code access; attorneys often do not appreciate the impact of agreed-to PO restrictions (see ch.11 and ch.15)
  • To suggest any preservation orders re: other side’s ongoing changes to source code
  • Defendant likely (though not always) already has in-house expertise in its own software, but be careful with early involvement in litigation of employees who may become fact witnesses; do not try to shield discovery of in-house fact witnesses as if undiscoverable “consultants”
  • Above points are all re: early involvement; also note need for later rebuttal experts, supplemental reports; see ch.14 on scheduling & supplementation

8.5 Rules governing experts: FRCP 26, FRE 702-705, and Advisory Committee Notes (ACN)

  • In addition to FRCP, FRE, and committee notes (see below), see also Federal Judicial Center publications used by judges:
  • Patent Case Management Judicial Guide, 2009, chapters 6 (Summary judgment) and 7 (Pretrial case management)
  • Manual for Complex Litigation, 4th ed., 2004, chapter 23 (Expert scientific evidence)
  • Reference Manual on Scientific Evidence, 3rd ed., 2011, 897-960 (Reference guide on engineering)
  • Of course, state courts (for e.g. trade secret cases or non-IP software cases) may have different rules

8.5.1 Federal Rules of Civil Procedure (FRCP) Rule 26 and ACN

  • FRCP 26(a)(2)(B): “(a) Required Disclosures … (2) Disclosure of Expert Testimony … (B) Witnesses Who Must Provide a Written Report. Unless otherwise stipulated or ordered by the court, this disclosure must be accompanied by a written report — prepared and signed by the witness — if the witness is one retained or specially employed to provide expert testimony in the case or one whose duties as the party’s employee regularly involve giving expert testimony. The report must contain: (i) a complete statement of all opinions the witness will express and the basis and reasons for them; (ii) the facts or data considered by the witness in forming them; …”
  • Largely covered in chapter 27 on expert report
  • Here, note that FRCP requires that expert completely disclose (i) opinions + basis + reasons + (ii) facts/data “considered” by expert in forming opinions
  • Sanction for failure to fully pre-disclose opinions and reasons: party not allowed to use info, witness, or evidence (unless failure substantially justified or harmless)
  • No protection against discoverability of materials expert considered, or for relied-upon assumptions provided by attorney
  • Opinions: see 8.6 below
  • Basis: facts relied upon + principles, assumptions; see 8.6, 8.6.2 below
  • Reasons: methodology, “thought process,” how expert gets from facts to conclusions; see 8.6.5 below
  • Facts/data “considered”: this is a broader category than what expert “relied” upon; see 8.6.4 below
  • Assumptions: see 8.6.3 below
  • Separate treatment of evidence intended solely to contradict or rebut other side’s expert; see rebuttal report in ch.xxx
  • Separate treatment of supplemental expert reports, when expert learns report is incomplete or incorrect in some material respect; see ch.14 on scheduling
  • FRCP 26 on non-testifying consulting expert (“Expert employed only for trial preparation”): see 8.3 above
  • Expert discovery: see ch.28 on deposition; chapter 27 on drafts of expert report

8.5.2 Federal Rules of Evidence (FRE) Rule 702 and ACN

  • FRE 702: “A witness who is qualified as an expert by knowledge, skill, experience, training, or education may testify in the form of an opinion or otherwise if: (a) the expert’s scientific, technical, or other specialized knowledge will help the trier of fact to understand the evidence or to determine a fact in issue; (b) the testimony is based on sufficient facts or data; (c) the testimony is the product of reliable principles and methods; and (d) the expert has reliably applied the principles and methods to the facts of the case.”
  • Expert qualified by knowledge, skill, experience, training, or education — need not be academic
  • Testify in the form of an opinion or otherwise — can be purely factual, without opinion, but expert almost always must express an opinion (see xxx below)
  • Scientific, technical, or other specialized knowledge — reliability requirements apply to “technical” as well as scientific evidence (Kumho Tire) [explain rule re: sci/tech evidence, not merely sci/tech expert testimony]
  • (a) Help trier of fact to understand evidence — “helpful,” i.e., need not be strictly necessary; interpretation of facts
  • (a) Help trier of fact to determine a fact in issue
  • (b) Based on sufficient facts or data — both quantitative & qualitative sufficiency; see xxx below
  • (c) Product of reliable principles — see xxx below
  • (c) Product of reliable methods — see xxx below
  • ACN example: While the terms “principles” and “methods” appear solely applicable to scientific knowledge, they also apply to technical/specialized knowledge, such as police testimony “regarding the use of code words in a drug transaction”; the “principle” is the regular use of code words to conceal drug transactions; the “method” is the “application of extensive experience to analyze the meaning of the conversations.” [Hmm, maybe draw out analogy from expert explaining source code to narc explaining drug “code words”?]
  • (d) Principles & methods reliably applied to facts of case — see xxx below
  • Trial judges are “gatekeepers” to exclude all unreliable expert testimony [actually, all unreliable sci/tech evidence]
  • Expert’s client (not the opponent) has the burden of establishing expert’s reliability
  • Daubert provides a non-exclusive checklist for assessing expert reliability — see xxx below, including post-Daubert criteria
  • Expert must explain how conclusion is grounded in an accepted body of learning or experience
  • If expert is relying on experience (e.g. after years of reading code, “I know it when I see it”), the expert must explain why that experience is a sufficient basis for the opinion (no ipse dixit)
  • Testimony based on one of several competing methodologies within a field of expertise is permissible

8.5.3 FRE Rule 703 and ACN

  • FRE 703: “An expert may base an opinion on facts or data in the case that the expert has been made aware of or personally observed. If experts in the particular field would reasonably rely on those kinds of facts or data in forming an opinion on the subject, they need not be admissible for the opinion to be admitted. But if the facts or data would otherwise be inadmissible, the proponent of the opinion may disclose them to the jury only if their probative value in helping the jury evaluate the opinion substantially outweighs their prejudicial effect.”
  • Expert may rely on inadmissible evidence, if experts in the particular field would reasonably rely in non-litigation context
  • This is distinct from Rule 702 determination whether expert has sufficient factual basis
  • Whether expert can disclose the inadmissible basis to a jury is a separate question; underlying information is not admissible simply because opinion is admitted; but opponent may “open the door”; see xxx re: impermissible use of expert as “conduit,” “backdooring”
  • Inadmissible (albeit reliable) bases are only disclosed to jury if of substantial value to help jury to evaluate expert’s opinion (only if probative value substantially outweighs danger that jury might use the evidence for some purpose other than evaluating expert)
  • Rule is designed to bring judicial practice into line with practice of experts in non-litigation context

8.5.4 FRE Rule 704 and ACN

  • FRE 704: “(a) In General – Not Automatically Objectionable. An opinion is not objectionable just because it embraces an ultimate issue. (b) [post-Hinckley exception for mental state of criminal defendant, e.g. insanity defense]”
  • Expert opinion may “embrace” an ultimate issue
  • But expert opinions will be excluded when “phrased in terms of inadequately explored legal criteria” (e.g., “D infringes claim 1” without exploring each and every element of infringement, or “P’s claim 1 is obvious in light of prior art” without exploring each and every element of obviousness)
  • No expert opinions which would merely tell the jury what result to reach

8.5.5 FRE Rule 705 and ACN

  • FRE 705: “Unless the court orders otherwise, an expert may state an opinion — and give the reasons for it — without first testifying to the underlying facts or data. But the expert may be required to disclose those facts or data on cross-examination.”
  • Expert may testify to opinion or reasons without first disclosing underlying facts or data
  • This merely addresses the old “hypothetical question” format, and does not contradict FRCP 26 required earlier disclosure of underlying facts in expert report as prerequisite to oral testimony

8.5.6 FRE Rules 706 and ACN: see court-appointed experts at xxx below

8.6 Expert opinion and bases: reliable principles, methods, and facts

  • Even though a Daubert challenge to expert admissibility is the exception rather than the rule (see FRE 702 ACN), expert reliability is a core topic for cross-examination and rebuttal
  • The burden of showing expert reliability is on the proponent (NOT a burden on opponent to show unreliability); without this up-front showing, exclusion of the expert can lead to loss at summary judgment (SJ)
  • “Reliability” is a lower standard than correctness (FRE 702 ACN)
  • Legal “reliability” is not the same as technical reliability (consistency; achieves the same result every time), and is closer to validity (actually measures what it says it measures; validity = consistency with external criteria; reliability = internal consistency across tests)
  • Reliability & expert’s non-litigation practice: would an engineer base a real-world engineering decision on (i.e., rely upon) the facts, assumptions, and methods used by the expert in litigation to arrive at an opinion?
  • 8.6.1 Does Daubert apply to software patent litigation & source code examination?
  • 8.6.2 Expert experience/qualifications as basis for opinion — no ipse dixit
  • 8.6.3 General principles & assumptions as basis for opinion
  • 8.6.4 Facts (qualitatively & quantitatively sufficient) as basis for opinion
  • 8.6.5 Methodology (including Daubert and post-Daubert factors) as basis for opinion
  • 8.6.6 What is an “Opinion”?
  • 8.6.7 “Fit” and “Application”

8.6.1 Application of Daubert to software patent litigation & source-code examination [change terminology throughout from “Daubert” to “FRE 702”?]

  • 8.6.1.1 Why Daubert appears inapplicable, or at least not a “big deal,” for technical experts in software patent litigation
  • 8.6.1.2 Why Daubert concerns of reliability do apply in software patent litigation
  • 8.6.1.3 Reasons to consider a Daubert foundation for software expert testimony

8.6.1.1 Why Daubert appears inapplicable, or at least not a “big deal,” for technical experts in software patent litigation

  • Under FRE 702 (revised after the Supreme Court’s Daubert decision), all expert testimony based on “specialized knowledge” must be shown to be “reliable” before being admitted into evidence
  • But is Daubert really applicable to source-code examination in software patent litigation?
  • FJC Patent Case Management Judicial Guide: “the role that experts play in patent cases does not always fit squarely within the Fed. R. Evid. 702/Daubert framework”; especially challenging is testimony in the frequent cases when the expert is asked to use specialized knowledge “to evaluate a hypothetical legal construct,” e.g. Who is a PHOSITA?; would a PHOSITA have believed at the time of alleged infringement that differences between patent claim and accused product are “insubstantial”?; would PHOSITA have had a “motivation” at the time of patent filing to combine known prior-art references?
  • BOX: Is source code examination “science”? Is it “forensics”? (If forensics, then not “computer forensics,” but something different: “software forensics”)
  • At one level, the answer (to question: is Daubert applicable to source-code exam) is obviously yes: under Kumho Tire and FRE 702, all technical expert testimony (not only scientific) must be shown to be reliable to be admissible
  • At the same time, it’s unusual to hear of a “Daubert hearing” or expert voir dire in software patent cases
  • As of xxx, Lexis listed only 50 patent decisions containing references to Daubert or Kumho, and the phrase “source code,” compared to 780 decisions with just the phrase “source code,” and the Daubert/Kumho discussion in some of the 50 cases pertains to economics/damages rather than technical experts
  • [Address the idea that Daubert only applies to “novel” or ad hoc methodology; Daubert fn11: “Although the Frye decision itself focused exclusively on ‘novel’ scientific techniques, we do not read the requirements of Rule 702 to apply specially or exclusively to unconventional evidence. Of course, well established propositions are less likely to be challenged than those that are novel, and they are more handily defended. Indeed, theories that are so firmly established as to have attained the status of scientific law, such as the laws of thermodynamics, properly are subject to judicial notice”]
  • Daubert is typically thought of as a way for the “gatekeeper” (judge) to keep out “junk science” (though perhaps anything more than “junk science,” i.e. with some “reasonable modicum of reliability,” will be found acceptable?)
  • So-called “junk science” typically appears in torts cases (toxic torts, products liability) with issues of causation (yes, plaintiff took D’s drug and yes, P or P’s child has suffered injury, but did D’s drug cause P’s injury?)
  • There are few causation issues on the technical side of patent litigation, apart perhaps from whether e.g. this code “causes” this GUI/feature, or whether this code/feature “causes” product purchases (a mixed question of technical and economic fact)
  • And expertise in patent litigation differs from that in torts, because in patent litigation the law itself is about technology, so there is less danger of extraneous unreliable evidence being introduced (?)
  • Daubert issues do frequently arise in software patent litigation, as to economic/damages testimony (Uniloc v. Microsoft re: “25% royalties rule”; entire market value rule)
  • But do litigants need to worry about Daubert on the technical side of software patent litigation?
  • Many experts believe no, even after learning of the Kumho Tire decision (in Kumho, P argued that Daubert is too inflexible for engineering testimony, and P lost this argument)
  • What are thought of as the “four Daubert criteria” (falsifiability, peer review, error rate, and general acceptance) seem at first glance like a poor fit to software analysis, in particular to source-code examination
  • Actually, the four criteria are a reasonably good fit, as are the additional post-Daubert criteria (see xxx below); e.g. there is extensive academic literature on comparing source code with text, and studies of source-code reading error rates (albeit when looking for defects rather than for code/text matches)
  • Source-code examination is often described by experts and their clients in plain-folk terms along the lines of “shucks, you just go in, find the code, and read it, and then you describe what it does”; according to one software expert, “my expertise is saying that something red is ‘red’”
  • If it were really so simple, then the expert would be merely a translator; but even a seemingly “straightforward” translation of a foreign spoken language into English requires a showing of expert admissibility (see Fishman, “Recordings, Transcripts, and Translations as Evidence”)
  • This further implies (and some experts state) that there is no legitimate room for expert disagreement on what the code does; and that any disagreement would center on claim construction, which under Markman is largely a legal question (or perhaps a “mongrel practice” of both law and fact), and which in any case should not be directly affected by source-code examination (Markman hearings “do not consider the accused instrumentality”)
  • Indeed, experts hired by opposing sides should eventually end up in agreement about what the code does
  • Contrast the role of software engineers in patent case with the role of engineers in a products liability case, or doctors in medical malpractice cases
  • Code, as code, is less open to ambiguities, or varying interpretations

8.6.1.2 Why Daubert concerns of reliability do apply in software patent cases

  • Yes, code as such is an unambiguous set of instructions to a machine, leaving seemingly no room for the sort of subjective interpretations which courts try to exclude using Daubert
  • But code, when compared with the language of patent claims, is open to varying interpretations: see e.g. expert dispute over whether code for a “playlist” narrowly constitutes a “wish list” or is only broadly a database (MobileMedia v. Apple)
  • Networking may introduce a non-deterministic element [similarly multiprocessing?]
  • Even given a fixed claim construction, two experts can start with the same accused/anticipatory product and come to opposing results on how the code maps onto the claim [give example where disagreement persisted post-Markman]
  • Further, there are more-competent and less-competent methods of code analysis:
  • Examining source code, without examining the actual accused product, can yield incorrect results (see chapters xxx and xxx on the distinction between source code and software products)
  • Statically examining code without dynamic examination (e.g., network monitoring) can yield incomplete results (see chapter xxx on static v. dynamic methods)
  • Assertions of absence (“our code doesn’t do this”) may be based on insufficient facts (e.g., only being provided with client source code, in a client/server system)
  • Some experts rely more on non-source materials (e.g., especially telling quotations from deposition transcripts) than on source code; this is open to Daubert attack for qualitative insufficiency of facts relied upon (see xxx below)
  • Cases where expert relied on marketing materials, conducted inadequate review of accused product (e.g. Furminator v. Kim Laube)
  • Generally, the frequent complaints regarding infringement contentions (“conclusory,” “mimicking” claim language, providing facts as well as conclusion, yet failing to connect them; see chapters xxx and xxx) often also apply to expert reports

8.6.1.3 Reasons to consider a Daubert foundation for software expert testimony

  • Especially important for plaintiff: most challenges are to P’s expert (generally expressed in terms of FRE 702 rather than Daubert)
  • Proponent of expert testimony has the burden of showing its reliability
  • Burden is not on the opponent to show non-reliability, or otherwise challenge expert testimony
  • Exclusion of expert testimony, e.g. for merely “conclusory” opinion with insufficient basis, can directly lead to summary judgment (SJ)
  • Even though Daubert is strictly a matter of admissibility, and technical experts will almost never be excluded in software patent litigation, the Daubert-based criteria can also be used to argue “weight” and sufficiency of expert opinion, including why one expert’s opinion should be preferred over another’s
  • Daubert and post-Daubert criteria provide a useful way for experts to question their own methodology, and to think through how to explain their methodology to others
  • Software experts may regard themselves as barely applying a methodology, because it has become second nature, learned through years of practice, and it is now difficult to describe what they are doing when looking at code and comparing it with a patent claim; but there IS a methodology, and it may be more or less reliable
  • Mere appeal to one’s many years of experience, without an explanation of how the experience has been applied, is mere ipse dixit (basically, “because I say so”); see FRE 702 ACN above
  • By making their methodology explicit, both to themselves and others, experts can anticipate challenges
  • For example, experts should work through the Malone “Daubert Dance”, and ask themselves each question; not to come up with snappy come-backs (a la some books of advice for experts), but to identify potential weaknesses in the facts, principles, or methods used as a basis for the expert’s conclusions
  • In particular, forcing oneself to work through the methodology helps avoid the frequent problem of “conclusory” opinions; even expert testimony which includes both sufficient facts and the opinions themselves still often fails to connect the one to the other: why do these facts lead to this conclusion?
  • There truly is a methodology employed in source-code examination, composed at the very least of the steps of deciding upon search terms, searching, selecting, summarizing, and comparing; each step can be done more or less competently
  • This methodology includes how one chooses what and what not to look at, how far to drill down, how much to trust naming and comments, etc.
  • The lack of vigorous testing of expert testimony in software patent litigation should not leave attorneys or experts complacent:
  • Some of the public backlash at “trolls” and software patents generally has been transformed into heightened procedural requirements (see the Leahy bill in chapter xxx), and could possibly be reflected in greater scrutiny especially of plaintiff experts, analogous to how the strong albeit confused desire for “tort reform” resulted in Daubert
  • Daubert challenges may be more likely when only one side has a “professional witness” (on the one hand, professional witness has been accepted by earlier courts; on the other, less likely to have significant recent non-litigation practice; see xxx below)
  • Even long-established forensic methods such as fingerprint comparison are coming under closer scrutiny (see the NAS report, Strengthening Forensic Science in the United States)
  • Increased use of source-code analysis in other areas of the law (see book’s Conclusion) should eventually lead to some standardized requirements; see proposal for adding computer science to the federal judiciary’s standard Reference Manual on Scientific Evidence
  • Against all this, one reason to not explicitly discuss Daubert criteria is that it may appear that one “doth protest too much” (Malone)
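The methodology named above (deciding upon search terms, searching, selecting, summarizing, comparing) can even be made explicit in code. The following is a hypothetical sketch, not a real examination tool: the file contents, claim terms, and function names are invented. Its point is that each step embodies a contestable judgment (which terms? case-sensitive or not? do comment-only matches count?) that the expert should be able to articulate:

```python
# Hypothetical sketch of the search/select steps of a source-code
# examination; file contents and claim terms are invented for illustration.

def search_code(files, terms):
    """Step 2: search each file's lines for candidate claim terms."""
    hits = []
    for path, text in files.items():
        for lineno, line in enumerate(text.splitlines(), start=1):
            for term in terms:
                if term.lower() in line.lower():  # judgment: case-insensitive match
                    hits.append((path, lineno, term, line.strip()))
    return hits

def select_hits(hits, ignore_comments=True):
    """Step 3: select hits worth drilling into; e.g., optionally discount
    matches appearing only in comments (cf. 8.6.4.1 on over-reliance on
    comments and naming)."""
    selected = []
    for path, lineno, term, line in hits:
        if ignore_comments and line.lstrip().startswith(("//", "#", "*")):
            continue  # judgment: a comment-only match is weaker evidence
        selected.append((path, lineno, term, line))
    return selected

# Step 1: decide on search terms (here invented, as if drawn from a claim element)
terms = ["hash", "bucket"]
files = {
    "lookup.c": "// fast hash lookup\nint bucket = h % NBUCKETS;\n",
    "notes.txt": "marketing says it's a hash table\n",
}
hits = search_code(files, terms)
selected = select_hits(hits)
# Steps 4-5 (summarizing, and comparing against the claim language) remain
# judgment calls that the expert must still explain.
```

Even in this toy form, the term list, the case-insensitive match, and the decision to discount comment-only hits are exactly the kinds of methodological choices an opposing expert can probe, and that the proponent should be prepared to defend.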

8.6.2 Expert experience and qualifications as basis for opinion — no ipse dixit

  • Expert qualifications and skills are discussed above at xxx
  • Here, the issue is expert’s pointing to “years of experience” and qualifications, as such, as a basis for opinions
  • Expert testimony generally is a product of years spent learning to see things not apparent to laypersons
  • Experts make connections (x relates to y) or distinctions (x is not y), not apparent to laypersons
  • FRE 702 is explicit that “experience” (not merely education or training) is a basis for specialized knowledge
  • But basing expert opinion solely on experience/qualifications is an appeal to one’s own intuition: “I know it when I see it”; ipse dixit: asking the fact finder to take expert’s say-so, solely because expert is experienced/qualified
  • Treating the expert as a “black box”: facts go in (with additional facts generated within the black box, e.g. from reverse engineering), inside the box facts are mixed with mysterious principles and methods, and out come opinions
  • See xxx above on FRE 702 ACN: an expert may rely solely or primarily on experience, but must then explain how that experience leads to the conclusion reached
  • It may be difficult for an expert to articulate a method which has become “second nature” (see xxx above); see studies in “tacit knowledge” (Polanyi); analogous to the difficulty of extracting “rules” for use in expert systems
  • Treating the expert as a “black box” would be supportable, IF there were proficiency testing: periodic re-“calibration” of experts to bring them to within 5% agreement of each other (this is how bar examination graders are trained)
  • However, proficiency testing of experts is currently limited: CodeSuite certification is a good example, but is focused on copyright and trade-secret rather than patent litigation; see xxx below on testing source-code readers for ability to find known defects
  • All this said, the fact-finder is likely to use expert’s qualifications and stated years of experience as a quick proxy to avoid deeper questioning of how the expert got from facts on the one hand to opinions on the other
  • Judges may use previous acceptance in similar cases as a proxy; judicial acceptance is often confused for “general acceptance” (see xxx below)
  • How can expert explain experience as a route to the opinion, without ipse dixit?
  • It is best if the expert can point to any non-litigation experience, even though the most pertinent years of experience may be work in previous litigation; see xxx below on post-Daubert criteria of non-litigation experience
  • If possible, relied-upon non-litigation experience should closely match the methodology employed in software patent litigation
  • In particular, any specific non-litigation experience closely comparing code with text/prose (e.g., specifications, requirements documents)
  • Also consider the “field” of expertise: broad vs. narrow; similar to issue of defining the field of expertise for obviousness analysis (PHOSITA has ordinary skill in some field, which may be defined broadly or narrowly; generally proponent of obviousness seeks a broad field)
  • [More here on defining the “field of expertise,” as Daubert challenges often look for mis-fits between expert’s stated field of expertise on the one hand, and the field pertinent to the case on the other; watch out for contradictions between party’s preferred field (“who is the PHOSITA?”) in obviousness arguments, and expert’s claimed field of expertise (deposition will try to pin expert down to a specific field, with explicit acknowledgements of areas in which non-expert)]

8.6.3 General principles & assumptions as basis for opinion

  • Reminder of where we are: expert provides opinions; opinions must have bases; bases are: experience, reliable general principles, facts, reliable methodology, application of principles & methodology to the facts; in this subsection, look at principles
  • FRE 702 refers to “reliable principles and methods”
  • What is the difference between a principle and a method?
  • A principle is a general statement; the major premise for a syllogism
  • For example: “Associative arrays use hash tables” is a major premise in: “Associative arrays use hash tables; D’s code in file f function g at lines 100-103 uses strings as indices into an array; array[‘string’] is an example of an associative array; therefore, D’s product embodies the ‘hash table’ element of P’s claim 1”
  • Often, these principles are left implicit; in contrast to the above example, expert reports and claims tables more typically would state, “D’s product embodies the ‘hash table’ element of P’s claim 1; see file f function g lines 100-103”
  • Implied principles and connections should be unacceptable in claim charts: see ch.26
  • Similarly, it should be unacceptable for expert testimony and reports to “leave as an exercise for the reader” how the expert got from the raw facts (array[‘string’] at file f function g lines 100-103) to the cooked opinion (P’s product embodies a hash table)
  • Unstated principles are assumptions, which can be uncovered and weakened at deposition (see ch.28)
  • Opponent can seek to show sensitivity of conclusions/opinions to unstated assumptions
  • Once principles are made explicit, they often must be qualified: “associative arrays use hash tables” should be restated as “associative arrays generally use hash tables” or “in PHP, associative arrays are implemented as hash tables”
  • One role for experts is to provide exceptions to the other side’s generalizations: e.g., “in D’s C++ code, operator[] was implemented using a simple ‘for’ loop rather than a hash table”
  • Once principles are made explicit, the opinion often must be tightened to avoid sensitivity to unqualified assumptions
  • Principles may come from computer science or software engineering generally, or from a specific subfield
  • Once principles are made explicit, the expert is more likely to provide a specific basis/reference/citation for them
  • Expert should seek out citations for principles, rather than the software equivalent of “why, everyone ’round these parts knows…”
  • Even seemingly “obvious” truisms often have been studied: do searches in CiteSeer, ACM Portal, Google Scholar
  • Expert should keep up-to-date in academic literature using CiteSeer, ACM Portal, Google Scholar
  • Learned treatises: see FRE 803(18); as an exception to the hearsay rule, learned treatises can be read into evidence even when not used to rebut the opposing party, so long as relied upon by the expert on direct examination, or called to the expert’s attention on cross-examination
  • Encyclopedias are often referenced in patent cases: see especially the ACM’s Computer Science Handbook, ed. Allen Tucker (CRC, 2004), with 110 chapters on different areas, e.g. cryptography, fault tolerance, computational biology, volume visualization, rendering, etc.; see also Encyclopedia of Computer Science, ed. Ralston et al. (Wiley, 2003)
  • Dictionaries, like expert testimony, play a diminished role as “extrinsic evidence” in claim construction post-Phillips, but dictionaries remain useful as a source of general principles
  • See Oxford Dictionary of Computing and, despite the title, Eric Raymond’s brilliant New Hacker’s Dictionary (MIT Press, 3rd ed., 1996)
  • Numerous area-specific references, e.g., C.J. Date’s Relational Database Dictionary (2008) or Pocket Handbook of Image Processing Algorithms in C, by Myler and Weeks (1993) [use better examples!]
  • A problem with using such “learned treatises” is apparent from the publication dates above: they are often out of date; the index to the ACM handbook does include PHP and CGI, but not JavaScript; more important, nothing on “listener”, callback, thunk, or vtable
  • Because passages can be quoted to an expert from treatises in cross-examination, some experts point to the behind-the-times nature of treatises as a basis for what is sometimes called the “journals gambit”: the expert will refuse to acknowledge ANY learned treatises, saying that he relies entirely upon journals
  • Of course, the expert against whom the treatise is being quoted need not have acknowledged it as authoritative for it to be used against him; approval by the expert hired by the cross-examiner is all that is needed
  • If expert is going to fall back on journals, he should be up-to-date in relevant journal literature (see ACM Portal etc. above)
  • It is often effective to cite the opposing party’s own “learned” materials; e.g. if Microsoft is on the other side, refer to the Microsoft Press Computer Dictionary
  • When US DOJ cited Microsoft dictionary in antitrust cases, experts retained by Microsoft, in an interesting show of expert “independence,” dismissed their client’s dictionary as unauthoritative for lack of peer review
  • Peer-reviewed academic literature by the opponent’s engineers can also be referenced; e.g. research.microsoft.com
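The syllogism above turns on its major premise, and that premise has exceptions. As a hypothetical counter-example (invented code, in Python rather than the C++ of the operator[] example above): an associative array can be implemented as a plain linear scan over key/value pairs, with no hash table anywhere:

```python
# Hypothetical counter-example to the unqualified principle "associative
# arrays use hash tables": a minimal associative array backed by a linear
# scan over (key, value) pairs -- no hashing involved.

class LinearAssocArray:
    """Associative array built on a simple loop, analogous to a C++
    operator[] written with a 'for' loop rather than a hash table."""

    def __init__(self):
        self._pairs = []  # list of (key, value) tuples, in insertion order

    def __setitem__(self, key, value):
        for i, (k, _) in enumerate(self._pairs):
            if k == key:
                self._pairs[i] = (key, value)  # overwrite existing key
                return
        self._pairs.append((key, value))

    def __getitem__(self, key):
        for k, v in self._pairs:  # O(n) scan instead of O(1) hash lookup
            if k == key:
                return v
        raise KeyError(key)

a = LinearAssocArray()
a["string"] = 42   # array['string'] syntax works, yet no hash table exists
```

The qualified principle is thus “associative arrays are generally, but not necessarily, implemented as hash tables”: whether array['string'] in accused code evidences a “hash table” claim element depends on the implementation behind the indexing syntax, not on the syntax itself.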

8.6.4 Facts (qualitatively & quantitatively sufficient) as basis for opinion

  • Reliable facts/data required as one basis for expert opinion; FRE 702
  • Sufficient facts (including as part of sworn expert opinion) needed to survive summary judgment
  • Surviving SJ also requires sufficient factual disagreement; otherwise purely legal issue for which jury not used
  • Sufficiency is both quantitative & qualitative
  • Expert’s role in assisting party to meet factual appeals standard: sufficient facts that rational juror could find for party
  • 8.6.4.1 Qualitative sufficiency — type of facts relied upon
  • 8.6.4.2 Quantitative sufficiency — adequate supply of facts
  • 8.6.4.3 “What did you NOT look at?”
  • 8.6.4.4 Testifying expert’s reliance upon non-testifying consultant & upon attorneys for facts
  • 8.6.4.5 Expert’s reliance on inadmissible evidence, and the “conduit” problem
  • 8.6.4.6 Holistic vs. disaggregated treatment of individual facts

8.6.4.1 Qualitative sufficiency — type of facts relied upon

  • Courts ask what types of facts or data are reasonably relied upon in the field of expertise
  • What data would a software engineer rely upon to determine if code embodies or carries out a specification?; only source code?; ever a substitute for source code?; must source code (static analysis) be combined with something else, such as product testing (dynamic analysis)?
  • Source code is of course reasonably relied upon in software engineering (UTSL = “Use the Source, Luke”)
  • Needless to say, expert should have access to actual source code, not just a printout or excerpts [but note frequent selection of specific files by non-testifying consultant; see xxx]
  • Source code can be inaccurate: it is possible to look at the wrong code (not corresponding to the product; not used in the product; not the correct version); comments and naming can be inaccurate; of course code contains bugs (though these will reflect actual product behavior, even if unintended, and patent litigation focus is generally on structure [what something actually does] rather than function [what something is supposed to do])
  • At the same time, software engineers in a non-litigation context do not rely solely on source code; they also walk down the hall to ask the author, and consult documents, old emails, online forums, etc.
  • Software experts who rely solely or primarily on something other than source code are at risk of a Daubert challenge: see e.g. Padcom v. NetMon (expert did not refer to source code)
  • Examples of non-source materials overly relied upon by experts in software patent litigation include marketing materials (see e.g. Pharmastem v. ViaCell), deposition testimony, and early specifications/requirements documents
  • Another example is reliance upon comments or the names of files/functions/methods to the exclusion of the code itself (though comments can help show the purpose/intent/design of code, which can in some circumstances be important apart from the structure/implementation of the code itself; see e.g. Versata v. SAP, testimony on what source code was designed to do, relevant to “computer instructions capable of” patent claim limitation)
  • Given the “aura” around source code (see ch.11), it is possible that an expert relying solely upon reverse engineering of the accused product might be at a disadvantage to an expert relying solely upon examination of source code purported to represent the accused product
  • Use of reverse-engineered facts should likely be given methodological support (see xxx below on use of standard v. ad hoc tools; xxx on reverse engineering as a standard industry methodology), and may require authentication (see chapter xxx)
  • Conversely, an expert who relies solely on source code, without some testing or reverse engineering of the actual accused product, should generally be seen as relying upon qualitatively insufficient facts
  • Inadequate testing/review of the accused product is a basis for Daubert exclusion (e.g., Furminator v. Kim Laube)
  • At deposition, the expert’s hierarchy of facts can be elicited; e.g. source code > deposition testimony; product > source; code > comments; etc.; attempt to get expert to identify some factual basis as “more important” than another
  • While asking the author is a legitimate non-litigation method of determining what software does, an expert using deposition testimony should try to find source-code corroboration for a fact generated at a code author’s deposition
  • While looking at non-source documents (even marketing materials) is a legitimate non-litigation method of determining what software does, expert should try to corroborate at least with multiple non-source references, if not with source code
  • Possible challenge to expert’s reliance on his own ability to interpret another company’s internal documents, not intended for public consumption; is interpretation of internal/confidential docs (arguably employing another company’s shorthand or tacit assumptions) within expert’s field of expertise?
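
To illustrate the risk noted above of relying on comments or names to the exclusion of the code itself, here is a minimal, hypothetical sketch (all names invented): a function whose name and docstring claim “encryption,” while the body merely Base64-encodes, a reversible encoding that any code reader can undo.

```python
import base64

def encrypt_password(password: str) -> str:
    """Encrypt the password before storage."""   # the comment's claim...
    # ...is false: Base64 is an encoding, not encryption, and is trivially
    # reversible; an expert relying on the name/comment alone would err
    return base64.b64encode(password.encode()).decode()

stored = encrypt_password("secret")
recovered = base64.b64decode(stored).decode()
assert recovered == "secret"   # round-trips with no key: not encryption
```

Only drilling down into the body, not the name or docstring, reveals what the code actually does.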

8.6.4.2 Quantitative sufficiency — adequate supply of facts

  • In addition to relying upon the right types of evidence, the expert must also rely upon enough evidence
  • A quantitatively weak factual foundation generally goes to weight rather than admissibility; but weight, and not only admissibility, is a primary concern of this chapter
  • In other words, the expert must read enough code to stay in the case, or at least to maintain credibility
  • Juries often consider which expert’s opinion accounts for MORE of the facts: who took into account more evidence, and whose specific opinion is consistent with a wider variety of the key evidence
  • One danger is only reading materials provided by the client [case]
  • Here, non-testifying consultant possibly accesses all source code, then selects few files for expert to consider (see xxx below)
  • Conversely, there are dangers in giving the expert free rein over everything in the case (e.g., the full Concordance database), because this can lead to loss of attorney/client privilege or work-product protection if some material (e.g. attorney annotations to documents) is not withheld from the testifying expert
  • Materials “relied upon” are a subset of materials “considered” (see xxx)
  • It is of course impossible to consider everything; the expert must reach conclusions within a reasonable time, to a reasonable degree of certainty; but the expert should be able to articulate what he chose to look at, and what he chose to ignore (see below)
  • Experts on both sides will likely have been selective in similar ways, but this does not preclude “you only looked at some of the source code, right?”
  • See also xxx below on testing, because testing is also a way to generate new facts
  • Experts swear to tell, inter alia, “the whole truth”; at the same time, expert witnesses are witnesses, i.e., they answer questions; even their reports are written in response to a specific assignment (see xxx) [see Finegan and Kadane articles on experts, “the whole truth,” and question of whether experts are ever responsible to “volunteer” even if not asked]

8.6.4.3 “What did you NOT look at?”

  • Deposition questions on what work the expert didn’t do, and what the expert didn’t look at:
  • “What do you wish you could do … to feel absolutely safe about your conclusion?” (see Malone on expert deposition)
  • Selection of facts: why did you “consider” x (listed in expert report on materials considered), but not “rely” upon it in forming your conclusion?; is there something unreliable about x?; something less reliable about x than y?
  • Careful: experts often view disconfirming evidence as merely anomalous, or less reliable than confirming evidence, without being able to articulate what makes it less reliable or less significant
  • How did you decide what to consider?; to what extent was the material considered provided to you by others? (see 8.6.4.4 below on attorneys & non-testifying consultants)
  • When did you decide to stop looking at the source code?; given typically huge quantities, the expert must make choices, and must stop somewhere; given that only portions can be examined, which portions were selected, and why?
  • Given impossibility of looking at everything, problem isn’t selectivity as such, but rather a failure to explain the selections made: WHY some portions of source code are unimportant for the conclusions reached
  • Selectivity in source code inspection is sometimes a matter of breadth rather than depth: how far did you decide to drill down?; e.g., given reliance upon a call to a function named PerformXYandZ, did you rely upon the name for the fact that it does X, Y, and Z, or did you drill down to the lower-level implementation of the function to confirm that it actually does X, Y, and Z?
  • Given that selection of portions of source code for close reading likely depends on searching, by expert and/or by consultant, and given that it is possible to search competently or incompetently, reliably or unreliably, some explanation should be provided of the search methodology (see ch.18, including e-discovery cases re: keyword selection, and possible need for non-keyword searching); note PTO file wrapper preservation of examiner’s searches
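
The “drill down” question above can be made concrete with a short, purely hypothetical sketch (all names invented): a function named PerformXYandZ whose implementation in fact performs only X (sorting) and Y (filtering), with no Z (say, deduplication); only reading the lower-level implementation, not the name, reveals this.

```python
def do_x(items):
    # X: sort the items
    return sorted(items)

def do_y(items):
    # Y: filter out missing values
    return [i for i in items if i is not None]

def PerformXYandZ(items):
    # The name promises X, Y, and Z, but the body performs only X and Y;
    # the Z step (deduplication, in this hypothetical) was never implemented
    return do_x(do_y(items))

result = PerformXYandZ([3, None, 1, 3])
assert result == [1, 3, 3]   # duplicates survive: no Z, despite the name
```

An expert who relied on the name for the fact that the function “does Z” would be wrong, and the error is detectable only at the implementation level.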

8.6.4.4 Testifying expert’s reliance upon non-testifying consultant & upon attorneys for facts

  • To the extent possible, the expert should report facts from his own investigation: testing, reverse engineering, experimentation, studying publicly-accessible technical materials
  • Any expert affidavits (e.g. re: summary judgment) must be based upon personal knowledge (PK)
  • However, there is no strict requirement that non-affidavit expert testimony (e.g., expert report, deposition, or trial testimony) be based on PK (see Monsanto v. David)
  • Daubert notes that, unlike an ordinary witness, “an expert is permitted wide latitude to offer opinions, including those that are not based on firsthand knowledge or observation” (and, Daubert continues, this “relaxation of the usual requirement of firsthand knowledge” is why expert opinions must have a reliable basis in the expert’s discipline)
  • So experts applying reliable methods can rely on second-hand information, if the information is of the type reasonably relied upon in the discipline; see xxx below
  • A major reason to use a non-testifying consultant is to do some testing and experimentation, saving expert time/cost
  • Non-testifying consultant can follow blind alleys, etc., while testifying expert can replicate testing in worthwhile avenues
  • See xxx above on potential issues when non-testifying consultant selects source-code files for testifying expert from much larger production
  • Is testifying expert reasonably relying upon non-testifying consultant? [if becomes an issue, could lead to discoverability of some non-testifying consultant work product, or even deposition of consultant?; see xxx above]
  • Testifying expert should make at least one trip to source-code site, and explore entire source-code production on their own to some extent; at deposition, opposing attorney may wish to probe how long testifying expert looked at each of the key source-code files
  • Testifying expert should at least replicate testing/experiment/reverse engineering results generated by the non-testifying consultant
  • [Issues when non-testifying consultant is holder of “bad facts” shielded from testifying expert]
  • Some facts needed by the testifying expert will likely require help from attorneys, via deposition or interrogatory questions; expert should supply attorney with questions to which answers are needed for the expert opinion [thus, attorney helps supply bricks for expert’s “wall,” as well as vice versa?]

8.6.4.5 Expert’s reliance on inadmissible evidence, and the “conduit” problem

  • See FRE 703 & ACN above re: expert’s use of inadmissible evidence of the type reasonably relied upon by experts in the field
  • Example: datestamped web pages from Wayback Machine (archive.org), without the bother of authentication by a custodian at archive.org (see ch.25)
  • Example: API information or open source from third party web site, where third party is provider of components used by opponent in litigation
  • “What is not in the file is not in the world”: courts may use admissibility of evidence as a proxy for its reliability, but expert opinion ideally should be based on the type of facts the expert would use in a non-litigation context (even if the evidence is either inadmissible or, more likely, the hiring attorneys lack the time or motivation to have it admitted)
  • Experts should not however be used as a “conduit” or “backdoor” for evidence which has not been admitted into evidence; this includes using the expert to “get in” evidence which was not timely introduced in party’s infringement contentions [cite cases: when expert materials are not substitute for party’s submissions; conversely, note that expert cannot rely on party’s submissions e.g. PICs]
  • Otherwise-inadmissible facts may be disclosed to the jury only if the court determines that their probative value in helping the jury evaluate the expert’s opinion substantially outweighs their prejudicial effect (see FRE 703)

8.6.4.6 Holistic vs. disaggregated treatment of individual facts

  • Sufficiency of facts, e.g. to survive summary judgment, could be judged as a whole (cumulatively) or piecemeal (seriatim)
  • The holistic/cumulative approach was rejected in GE v. Joiner; for sufficiency testing, each fact by itself must be reliable; a large collection of unreliable facts does not by its conglomeration become reliable [double check this is consistent with GE v. Joiner]
  • Generally, plaintiffs favor the holistic approach, and defendants favor disaggregation
  • There should be some room for arguing that multiple facts, with some questions or qualifications attached to them, but from multiple independent angles or methodologies, yield a stronger conclusion than a single fact generated with a single methodology
  • Example: source code + network monitoring of product + logging of product + internal docs, each with some question attached, should together outweigh source code alone with no questions attached, because a pure source-code reference may not even reflect the product [need to explain this better]

8.6.5 Methodology (including Daubert and post-Daubert factors) as basis for opinion

  • “Methodology” is how the expert gets from facts and principles (see above) on the one hand to conclusions/opinions (see below) on the other
  • As noted at xxx above, many software experts believe there is little methodology involved here, besides “just read the code and say what it does,” “say that something red is ‘red’,” etc.
  • In fact, there is a methodology, even if unconscious, and if nothing else there is a “thought process”
  • Cases are very clear that the expert MUST disclose his “thought process,” i.e., how he got from facts to opinions [cite “thought process” cases]
  • This mirrors the rule that even initial infringement contentions cannot be merely “conclusory” (see ch.7)
  • Even an expert’s own testing/experimental source code may be insufficient to show the expert’s thought process, when the source code is uncommented, with names the court views as unnecessarily cryptic (see Novartis v. Ben Venue opinion reprinting expert’s uncommented source code)
  • So, expert opinions without sufficient reasoning are like source code without comments or with unhelpful names such as x, y, and z
  • Experts are frequently conclusory, failing to connect facts to opinions; the word “see” is often used to avoid disclosing the expert’s reasoning; e.g., “D’s code implements the hashtable element of P’s claim 1; see file f function g lines 100-103” where the given lines don’t say anything about a hashtable; what’s often missing is a clause beginning with the word “because” [the “Wizard of Oz” rule: “because because because because because”]
  • FJC Patent Case Management Judicial Guide (PCMJG) examples: following discussion of literal infringement, expert offers “bald” statement that “to the extent that there are any differences between the accused product and Claim 1, they are insubstantial and the accused products infringe under the doctrine of equivalents”; or, expert opinion addresses specific claim element, but only to “parrot an accepted test for determining the ultimate issue,” e.g. “Although claim 1 requires ‘a layer’ that performs both functions, the combination of two layers in the accused product achieves substantially the same function in substantially the same way to achieve substantially the same result as would a single layer”; at the very least, expert opinion must separately discuss why each of function, way, and result is substantially the same
  • Conclusory opinions fail to raise genuine issue of fact re: SJ; e.g. merely listing prior-art references and concluding with the stock phrase that the invention would have been “obvious” to the PHOSITA, without a connection between the facts and the conclusion (see Innogenetics v. Abbott Labs, cited in Stamps.com v. Endicia)
  • FRCP 26(a)(2)(B)(i) (see xxx above) REQUIRES expert disclosure, not only of all opinions but also of “the basis and reasons for them”; reasons are distinct from basis (facts); reasons are methodology and principles, applied to the facts
  • Parties often dispute whether the expert report discloses sufficient bases for the opinion to avoid being conclusory, and therefore sufficient to raise an issue of material fact; court must test “whether the other sections of the report do, indeed, support the opinion alleged to be conclusory” (FJC PCMJG)
  • It is not up to the opponent to show unreliability of an expert’s methodology; it is the burden of the proponent to show its reliability [case]
  • [See xxx above on why to go through all this, even when a Daubert challenge is not anticipated, or even when both parties would likely face the same challenge, and will mutually stipulate to avoid Daubert questions; don’t forget the court’s gatekeeping role, which is supposed to be independent of the wishes of the parties]
  • This lengthy subsection applies the four Daubert criteria, and several post-Daubert criteria, to software experts:
  • 8.6.5.1 Daubert- and post-Daubert factors
  • 8.6.5.2 “Falsifiability,” testability, and testing
  • 8.6.5.3 “Peer review” & publication of software analysis methodologies
  • 8.6.5.4 “Error rate” & its application to software analysis, including source-code review
  • 8.6.5.5 Standards & controls regarding software analysis
  • 8.6.5.6 General acceptance in a field of expertise
  • 8.6.5.7 Non-litigation background to the methodology
  • 8.6.5.8 Adequacy to explain important facts, and consideration of alternate theories

8.6.5.1 Daubert- and post-Daubert factors

  • Daubert decision provided four criteria for determining the reliability of expert testimony:
  • (1) “whether it can be (and has been) tested”, “falsifiability, or refutability, or testability” — see 8.6.5.2 below
  • (2) “whether the theory or technique has been subjected to peer review and publication” — see 8.6.5.3 below
  • (3) “the court ordinarily should consider the known or potential rate of error … and the existence and maintenance of standards controlling the technique’s operation” — see 8.6.5.4 and 8.6.5.5 below
  • (4) “general acceptance”: “explicit identification of a relevant scientific community and an express determination of a particular degree of acceptance [of the expert’s methodology] within that community” — see 8.6.5.6 below
  • Criterion (3) has two separate parts, so there are arguably five Daubert criteria
  • The four/five criteria semi-hardened into what was sometimes seen as a fixed set of factors
  • However, the court explicitly stated: “Many factors will bear on the inquiry, and we do not presume to set out a definitive checklist or test”
  • On remand in Daubert, Judge Kozinski added another influential factor: “whether the experts are proposing to testify about matters growing naturally and directly out of research they have conducted independent of the litigation, or whether they have developed their opinions expressly for purposes of testifying…. a scientist’s normal workplace is the lab or the field, not the courtroom or the lawyer’s office” — see 8.6.5.7 below
  • FRE 702 ACN includes several criteria:
  • (1) Kozinski’s non-litigation research criterion (above)
  • (2) Expert unjustifiably extrapolating from accepted premise to unfounded conclusion; too great an analytical gap between data & opinion — see xxx below
  • (3) Expert has accounted for obvious alternative explanations — see 8.6.5.8 below
  • (4) Expert is being as careful as would be in regular non-litigation professional work
  • (5) Expert’s claimed field of expertise is known to reach reliable results
  • Other frequently-cited criteria: [use Malone/Zweig, Effective Expert Testimony, 2e, pp.245-265; Merlino, Springer & Sigillo study in Future of Evidence (ABA), 1-32, esp. survey of 100 federal decisions re: engineering testimony, pp.8,14,28-29]
  • Expert’s experience, skills, and knowledge
  • Breadth of underlying analysis
  • Consideration of “confounds”
  • Method’s relation to other known-reliable methods
  • Whether the method was created for litigation, and specifically for this litigation — see 8.6.5.7 below
  • Precision of results vs. broad generalizations — see xxx on opinions
  • Conducting research before reaching conclusion
  • Adequacy to explain important facts — see 8.6.5.8 below
  • Internal consistency [e.g. for patents, no nose of wax, e.g. selective broad reading for some elements, with narrow reading for others; or broad for infringement and narrow for invalidity (P) or vice versa (D)]
  • Consistency between what the expert says the method is, and the actual method employed (note some forensics fields in which reference is mouthed to a given standard, without adherence)

8.6.5.2 “Falsifiability,” testability, and testing

  • Daubert criterion (1) for determining the reliability of expert testimony: “whether it can be (and has been) tested … falsifiability, or refutability, or testability” (citing Karl Popper)
  • Attorneys cringe at the term “falsifiability”: “why would you want to falsify your own opinion?!”; Rehnquist’s concurring opinion affected not to understand what the term means; even attorneys who understand the term fear its effect on the jury
  • But falsifiability is supposed to be how science works: generate theories, and seek out disproof, rather than collect instances that are consistent with the theory (see Wason card problem)
  • Disconfirming or inconsistent evidence should carry more weight than confirming or consistent evidence
  • Without getting into whether science often really works that way, or whether source-code examination is “science” (rather than software engineering), at any rate Daubert states a strong preference for making falsifiable statements, i.e., ones which can be tested to determine whether they are false
  • For example, “D’s code x corresponds to P’s claim element y” is not a testable statement, because “corresponds to” could mean almost anything; some aspect of x would likely bear some “correspondence” to some aspect of y
  • In contrast, “D’s code x has the same inputs, outputs, side effects, and role, as that of P’s claim element y” is at least more testable; it specifies the attributes/characteristics along which x and y are being compared
  • FRE 702 ACN refers to objective statements vs. a “subjective, conclusory approach that cannot reasonably be assessed for reliability”
  • Falsifiable, testable, or objective statements are more a characteristic of the expert’s opinion than of how it is reached; see xxx below on the form of the opinion
  • Daubert criterion (1) looks not only at testability, but also at actual testing, and this is especially relevant to software examination
  • According to one source, the Daubert criterion most commonly failed is lack of support from testing [cite ABA, though other studies indicate experience/skills is a more commonly cited criterion?]
  • Lack of testing is a “red flag” of misapplied methodology [cite Malone/Zweig]
  • Here, testing primarily means checking results from source-code exam against actual accused product
  • Given conclusions drawn from the source code, how would one go about testing the product to try to disprove the conclusions? (or perhaps show that the conclusions only describe peripheral or unimportant attributes of the product, e.g. code which almost never executes, which is relevant to damages)
  • Product testing
  • Otherwise-qualified expert testimony can be excluded in patent litigation when testing isn’t applied to the accused product (e.g., Izumi v. Philips)
  • Testifying expert should do some of his own testing, at least to replicate that done by others, e.g. by expert’s staff or non-testifying consultant (see Viskase v. Am. Nat’l Can, where expert testified to his presence during tests, but turned out to not have been present)
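
The preference above for testable statements and actual product testing can be sketched in code. Assuming (hypothetically) a claim element requiring a particular checksum computation, the expert’s assertion that the accused code performs it becomes falsifiable once the accused implementation and an independent model of the claimed behavior are run on the same inputs; both functions here are invented stand-ins.

```python
def accused_checksum(data: bytes) -> int:
    # stand-in for the accused implementation (hypothetical)
    return sum(data) % 256

def claimed_behavior(data: bytes) -> int:
    # independent model of what the (hypothetical) claim element requires
    total = 0
    for b in data:
        total = (total + b) % 256
    return total

# A falsifiable assertion: for every tested input, outputs must agree;
# a single disagreement would refute the claimed correspondence
for sample in [b"", b"abc", bytes(range(256))]:
    assert accused_checksum(sample) == claimed_behavior(sample)
```

The point is the form of the statement: “same outputs for these inputs” can be refuted by a counterexample, whereas “corresponds to” cannot.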

8.6.5.3 “Peer review” & publication of software analysis methodologies

  • Daubert criterion (2): “whether the theory or technique has been subjected to peer review and publication”
  • The term “peer review” is slightly confusing here, because while “peer review” of course refers to gatekeeping by an academic/research community as precondition to journal publication, the term “peer review” also refers to a specific software-engineering methodology of code inspection (see e.g. Karl Wiegers, Peer Reviews in Software: A Practical Guide)
  • What software-engineering methodology are source-code examiners following when working in patent litigation?
  • The methodology is variously called software inspection (see Gilb & Graham, Software Inspection; IEEE, Software Inspection: An Industry Best Practice); software audit (see Hollocker, Software Reviews and Audits Handbook); “walkthroughs” (see Freedman & Weinberg, Handbook of Walkthroughs, Inspections, and Technical Reviews); and “peer review” (see Karl Wiegers, Peer Reviews in Software: A Practical Guide)
  • The books cited above refer to journal articles on these methodologies; the journal articles cover goals, advances, and problems in software inspection, and often provide case studies in successful or unsuccessful software inspection; the articles have typically been peer reviewed
  • Software inspection has been studied using e.g. known defects or bugs, testing the ability of code reviewers to find the defects/bugs in different types of code, different types of inspection (e.g. solo vs. team), in different rates of code reading, in different blocks of time, etc.
  • [Following two examples are from a very useful (non-peer-reviewed) white paper on peer code review by Smart Bear Software; but replace with more canonical-looking peer-review studies?]
  • Example peer-reviewed paper on software inspection: Kelly et al., “An experiment to investigate interacting versus nominal groups in software inspection,” Proceedings of the 2003 Conference of the Center for Advanced Studies on Collaborative Research, Toronto, 2003, IBM
  • An unusual example: Uwano et al., “Analyzing individual performance of source code review using reviewers’ eye movement,” Proceedings of the 2006 Symposium on Eye Tracking Research & Applications (ETRA), San Diego, 2006, ACM
  • [List studies which specifically discuss both human and automated methods of comparing code against text; see chapter xxx]
  • One promising-looking inspection methodology is the “pair” approach: one person “drives” at the computer, summarizing code out loud while reading, while the other looks not at the code but at the text against which the code is to be compared
  • Related to the “pair” approach is a possible “blind” approach: one reader selects code; another reader with no (or very restricted) knowledge of the desired result carefully reads the code and summarizes it; the summary is then compared against the patent claim [see also xxx below on “blind” approach to avoid hindsight in obviousness analysis]
  • It is likely insufficient for expert to refer vaguely to a given software inspection methodology, without some showing that expert adhered to this methodology in the course of arriving at his opinion
  • For example, are there known types of false positives or false negatives to be avoided, and how were these addressed in the examination?
  • False positives have been studied in the peer-review literature, e.g. likelihood that code reviewer will flag a block of code as defective in a particular way, when that type of defect is later shown to not be present in the flagged code
  • False negatives: e.g. likelihood that code reviewers will not find a known defect/bug in the code, in a certain period of time, or at a certain rate of code reading
  • Issues from software-inspection literature:
  • Different styles of reading necessary/helpful for object-oriented vs. procedural vs. event-driven code
  • Tendency of review to grow beyond originally selected files (reviewing file x requires pulling in files y and z, which …)
  • Importance of understanding y and z, which call x, to understanding x [analogy for attorneys: understanding the proposition/holding of case x often requires determining how x was cited in later cases y and z]
  • Ideal period of code reading: most defects found in < 60 minute block; few after > 90 min.
  • Ideal rates of code reading: in one study, 200 lines/hour
  • Types of software review: checklist of what to look for (e.g., buffer overflow, SQL injection) vs. “systematic” vs. “use-case”
  • Surprising benefits of rapidly scanning repetitious code to look for pattern changes (also useful when reviewing lengthy network-monitoring logs; small differences “jump out” at the reviewer)
  • Benefits of initial slower reading, in reducing overall time to find relevant code
  • Surprising benefits of sometimes reading printed code vs. on-screen [add this to ch.24 on printing]
  • Problems in finding omissions, absences
  • What makes some code much harder to understand
  • [Above mostly from studies of software inspection looking for “defects” such as security flaws or simple bugs; apply studies in which code is being compared to specifications]
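
The “pattern change” scanning technique noted above (small differences jump out of repetitious material, such as lengthy network-monitoring logs) can be approximated mechanically. A minimal sketch, with invented log data: count the repeated line shapes and flag whatever deviates from the dominant pattern.

```python
from collections import Counter

# invented, repetitious log data; index 2 is the deviation
log_lines = [
    "GET /api/item 200",
    "GET /api/item 200",
    "GET /api/item 500",
    "GET /api/item 200",
]

# the most common line is the "pattern"; everything else is flagged
counts = Counter(log_lines)
dominant, _ = counts.most_common(1)[0]
anomalies = [i for i, line in enumerate(log_lines) if line != dominant]
assert anomalies == [2]   # the 500 response stands out
```

Real logs would first be normalized (e.g., stripping timestamps) before counting, but the principle, reviewing for deviation from a dominant pattern rather than reading every line, is the same.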

8.6.5.4 “Error rate” & its application to software analysis, including source-code review

  • Daubert criterion (3): “the court ordinarily should consider the known or potential rate of error … and the existence and maintenance of standards controlling the technique’s operation”; here, known/potential error rate; the separate issue of standards is discussed at 8.6.5.5 below
  • “Error rate” initially sounds inapplicable to the task of source-code reading, but as noted at 8.6.5.3 above, software inspection is a well-established field of study; it includes studies of the rate at which source-code readers can find known defects in code
  • Both false positives (identifying a defect which is not present) and false negatives (failure to locate a known defect) are measured in software-inspection studies
  • E.g., false negative: it appears that even the best software inspector only finds about 1/2 of defects found by an entire group [Gilb, Software Inspection, 299]
  • To what extent are defect-finding tests applicable to infringement- or invalidity-finding code reading?
  • Here in patent litigation, the task is different from finding defects: it is comparison of code with text (patent claim)
  • But this too is potentially measurable, at the very least with inter-expert calibration (see below)
  • As noted at xxx above, especially when experts appeal to their own experience as the basis for asserted matches, the expert should be subject to proficiency testing
  • For copyright purposes, an examiner’s ability to compare one body of code to another, containing known non-verbatim matches, is measured as part of CodeSuite certification
  • CodeSuite does not measure code/text comparison needed in patent litigation
  • Ability to match code against patent claims could be measured e.g. by taking code which appears as an exemplary embodiment in a variety of patents, separating the code from the patents, modifying the code in a variety of ways, and using the code as part of a larger test in which test-takers are asked to align blocks of code with the patent claims; proficiency at this task (which could be made progressively easier or more difficult) could be objectively measured
  • Another form of testing is “calibration”: measurement of each expert’s results against those produced by other experts [note for attorneys: this is how the essay and “performance” portions of bar exams are graded; consider how most bar graders will assign a grade within 5 points of other graders; this is somewhat similar to the task of getting multiple software examiners to agree on whether code matches a patent claim]
  • Here, litigation itself is a sort of “peer review” able to catch errors: false positives (flagging non-infringing code as infringing, or non-anticipatory code as anticipatory) should be caught by the other side’s experts
  • However, false negatives are unlikely to be called out: if P’s experts overlook some infringing code, D’s experts are hardly likely to call attention to it; likewise, if D’s experts are unaware of anticipatory prior art; some of this might be brought out in deposition?
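
The “calibration” idea above (measuring one examiner’s results against another’s) can be quantified with a standard inter-rater agreement statistic. A sketch with invented data: two examiners each mark eight code blocks as matching (1) or not matching (0) a claim element, and Cohen’s kappa measures their agreement beyond what chance would produce.

```python
def cohens_kappa(a, b):
    # observed agreement between the two raters
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # agreement expected by chance, from each rater's marginal rates
    pa, pb = sum(a) / n, sum(b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

examiner_1 = [1, 1, 0, 0, 1, 0, 1, 0]   # invented match/no-match calls
examiner_2 = [1, 1, 0, 0, 1, 0, 0, 0]
kappa = cohens_kappa(examiner_1, examiner_2)
assert abs(kappa - 0.75) < 1e-9   # substantial, but not perfect, agreement
```

Kappa of 1.0 would be perfect agreement, 0 would be agreement no better than chance; repeated over many examiners and code samples, such a statistic could support (or undermine) claims that code/claim matching is a reliably performable task.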

8.6.5.5 Standards & controls regarding software analysis

  • Second part of Daubert criterion (3): “existence and maintenance of standards controlling the technique’s operation”
  • In other words, known “best practices,” against which expert’s performance can be measured
  • Both de jure (formal) and de facto (informal) standards
  • Examples of de facto best practices here: don’t only look at source code; make sure source matches accused/anticipatory product; make sure selected code is executed; determine if/when selected code is bypassed; do not depend solely on comments or file/function names to determine what code does; determine whether error/exception handling can affect operation; supplement static analysis of code with dynamic (e.g. network) analysis; etc.
  • There is not a de jure standard which “controls” all the operation of source-code examination as employed in patent litigation
  • There are two relevant Institute of Electrical & Electronics Engineers (IEEE) standards:
  • IEEE 1028-1998 (1028-1997): Software Reviews and Audits
  • IEEE 1012-1998: Software Verification and Validation
  • See also IEEE Software Engineering Book of Knowledge (SWEBOK) chapter 11 re: software quality, inspection, walk-through
  • Unfortunately, experience-based forensics testimony sometimes mouths acknowledgement of a given standard (e.g., arson investigators and NFPA), without being expected to actually follow the standard; mere lip-service to the standard has been taken as sufficient
  • A source-code examiner ought at least to have glanced at the IEEE standards, to be familiar with some of the authorities in the field (e.g., the names Gilb or Fagan or Wiegers in software inspection), and to be able to cite a few known issues (see xxx above) and how they were addressed in this case’s source-code examination
  • The IEEE standards noted above are likely neither sufficiently detailed nor current to truly “govern” or “control” much of what the source-code examiner does
  • Use of industry-standard tools (e.g. SciTools Understand, Fiddler or Wireshark network monitors), rather than home-grown or ad hoc scripts, is one way to adhere to a standard — IF the examiner can explicitly cite several known “traps” or “gotchas” associated with the tool (see chapter xxx), and how they were addressed or avoided; and can explain how the tool’s output (what the expert relies upon) relates to its input
  • All source-code examiners should be familiar with Code Reading by Spinellis (see chapter xxx), and should be able to describe how the exam adhered to some of the several hundred maxims in the book [also Code Complete by McConnell, etc.]
  • See also chapter xxx noting the existence of a widely-used, albeit tacit, standard methodology of source-code examination in patent litigation; see best-practices list above; whether an expert is governed/controlled by such tacit standard would depend on a specific checklist (a subject for deposition), and/or the results of proficiency testing/calibration (see xxx above)
  • Deposition questions: what is the field of expertise?; does this field have best practices?; codified where?; if tacit rather than codified, can expert enumerate some best practices?; worrisome if can’t
  • At deposition, turn sample list of best practices above (“don’t only look at source code,” etc.) into questions: what did you look at besides source code?; how did you confirm that this source code matches the product? (though one answer is perhaps: your client provided during discovery); how did you determine the code you selected as relevant in the case is actually executed?; etc.; “I didn’t” should be a worrisome answer
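  • The deposition question above, “how did you determine the code you selected is actually executed?”, can be answered dynamically rather than by inspection alone. A minimal Python sketch of the idea (function names are hypothetical), using the interpreter’s trace hook to record which functions actually run:

```python
import sys

# Record which functions execute at runtime; static presence of code in the
# source tree alone does not show that the code is ever reached.
executed = set()

def tracer(frame, event, arg):
    if event == "call":
        executed.add(frame.f_code.co_name)
    return tracer

def relevant_feature():      # code the examiner relies on (hypothetical name)
    return "claimed path"

def blocked_feature():       # present in source, but never reached
    return "dead path"

def product_main():
    return relevant_feature()

sys.settrace(tracer)
product_main()
sys.settrace(None)

print("relevant_feature" in executed)   # -> True
print("blocked_feature" in executed)    # -> False
```

“I didn’t check” becomes harder to excuse when the check itself is this routine in ordinary (non-litigation) software practice.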

8.6.5.6 General acceptance in a field of expertise

  • Daubert criterion (4) incorporates the older Frye standard: “general acceptance”: “explicit identification of a relevant scientific community and an express determination of a particular degree of acceptance [of the expert’s methodology] within that community”
  • This includes technical as well as scientific communities
  • Note that previous acceptance of the expert in earlier litigation should not satisfy this criterion: judicial acceptance is not “general acceptance”
  • Much depends on how broadly or narrowly the expert’s field of expertise is defined; examples of narrow fields: network security; cryptography; volume rendering
  • Analogous to the determination of “the art” in which the PHOSITA has ordinary skill (see obviousness, chapter xxx)
  • If the expert is representing the PHOSITA, the expert’s stated field of expertise for Daubert should at least be consistent with “the art”; P generally seeks a narrower art for obviousness, D a broader art; this cuts across usual preference of P for broader expertise, D for narrower
  • What are examples of methodologies which might be generally accepted within a given broad or narrow field of software expertise?
  • If the expert were to broadly define his expertise as code reading, the methodologies would be those referenced at xxx above
  • If the expert were to broadly define his expertise as reverse engineering, the methodologies would perhaps be those discussed at the IEEE annual Working Conference on Reverse Engineering (WCRE), or as practiced in the internet-security or malware-detection fields
  • Expert should be able to articulate some generally-accepted methodologies, even at the risk of sounding obvious or trite, such as “One way to learn the exact network traffic generated by a program is to run the software together with the Wireshark network monitor,” or “One way to properly follow overloaded function names in source code is to rely on SciTools Understand to link a function call to the correct function implementation”
  • This methodology, when expressed, may sound trite (see the FRE 702 ACN “code words” example above); concern about sounding trite may explain why experts often avoid explaining their methodology
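  • The Wireshark methodology stated above rests on a simple principle: observe the exact bytes an application emits, rather than inferring them from source code. A toy Python sketch of that principle (not Wireshark itself; the request string is hypothetical), in which a local listening socket records what a client program sends:

```python
import socket, threading

# A toy stand-in for a network monitor: a listening socket that records the
# exact bytes an application sends over the wire.
captured = []

def monitor(server):
    conn, _ = server.accept()
    captured.append(conn.recv(1024))
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))       # OS picks a free port
server.listen(1)
t = threading.Thread(target=monitor, args=(server,))
t.start()

client = socket.socket()            # stands in for the program under test
client.connect(("127.0.0.1", server.getsockname()[1]))
client.sendall(b"GET /claimed-element HTTP/1.0\r\n\r\n")
client.close()
t.join()
server.close()

print(captured[0].split(b"\r\n")[0])   # -> b'GET /claimed-element HTTP/1.0'
```

Stating the method this plainly may sound trite, but it is exactly the kind of generally-accepted, testable procedure Daubert contemplates.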

8.6.5.7 Non-litigation background to the methodology

  • Courts prefer expert opinions based on methodologies originally applied in a non-litigation context
  • If the methodology was not designed with litigation in mind, it will likely be viewed as more reliable, especially compared to methods designed for this particular litigation
  • At the same time, litigation frequently calls for methods whose only application is litigation or the law [“GPs don’t do autopsies”]
  • And non-litigation methodologies may not be directly relevant (e.g., animal studies used in torts litigation about injury to humans)
  • Many of the “forensic sciences” (e.g., fingerprint comparison) have this problem: the most directly-relevant methodologies were created for legal purposes, and arose in an ad hoc fashion from practical legal necessity rather than from (at least arguably more rigorous and testable) academic study
  • Arguably, comparison of software with patent claims is only applicable in a legal context: either litigation itself, or activities “in the shadow of” litigation (e.g., a company’s “freedom to operate” comparison of its own technology with competitors’ patents)
  • Experts should seek to show that most of what they do in software patent litigation is rooted in non-litigation practice: industry-standard methods of code searching, software inspection, reverse engineering, and comparison between code and specifications
  • Typical protective orders (POs) often prevent experts and examiners in these cases from following the same procedures they would use in a non-litigation context; see chapter xxx
  • Some experts specialize exclusively in litigation work: balance pros & cons
  • See FRE 703 and xxx above on expert using nonadmitted (or even inadmissible) facts, if of the type reasonably relied upon in the expert’s non-litigation work; this suggests the importance of the expert having non-litigation work as a basis for the litigation-related practice
  • Similarly, “reasonable degree of certainty” (see xxx below) indicates that the expert is opining with the same degree of “connectedness” (between the bases and the opinion) that would be expected in the non-litigation context of software engineering, i.e., engineering decisions could rely on (would be based on) the facts/method/opinion

8.6.5.8 Adequacy to explain important facts, and consideration of alternate theories

  • In a dispute between two experts, fact-finders often consider which expert’s opinion accounts for more of the known important facts in the case; see opinions at 8.6.6 below
  • To generate opinions more likely to account for seemingly countervailing facts, and to consider alternate theories, the expert must consciously consider them; see chapter xxx on analysis the expert should conduct of his own opinions, and challenges he should anticipate
  • Whether to incorporate anticipated challenges into the initially-stated opinion, or whether to wait until they are raised, is a separate question from whether to work at anticipating them; see ch.xxx on rebuttal, and chapter 27 on the expert report
  • If the expert had been hired by the other side, what code would they focus on?; how does this square with the code the expert is instead focusing on?
  • Some experts and examiners tend to downplay the importance of, or reliability of, evidence which points away from a conclusion the expert has reached, in ways that may be difficult to defend at deposition
  • Some experts and examiners adopt a fatalistic “it is what it is” or “it says what it says” attitude towards evidence which appears to contradict their conclusions; this is preferable to ignoring the evidence, but does little to integrate it into the expert’s opinion
  • Ideally, the expert’s opinion integrates both “good facts” and “bad facts” [this sounds like a homily; give a concrete example]

8.6.6 What is an “Opinion”?

  • Consider for a moment the oddity that opinions are admissible as evidence
  • At the same time, even the most factual testimony generally contains some opinion (see FRE 701 & ACN on lay non-expert opinion testimony: “he was drunk”, “he was driving erratically”)
  • Expert opinions are allowed in circumstances in which lay opinion is not; this is why expert opinions must be “reliable” to be admitted (contrast lay opinion, which must merely be “rationally based on the perception of the witness”)

8.6.6.1 Opinion vs. “just let the facts speak for themselves”

  • Experts are generally expected to provide opinion testimony; most of FRE 702-705 relates to opinions or the bases for opinions
  • Experts sometimes resist the characterization of their testimony as opinion: “I just describe the facts as I find them,” “I just let the facts speak for themselves”; this has a nice ring to it, but is either naive or disingenuous
  • The conceit that the expert merely speaks for otherwise-mute evidence, acting as a mere amanuensis for the facts, is especially prevalent in forensics; it may find its way into patent litigation when experts describe their work as “forensic” (see Introduction and Conclusion)
  • One reason is concern over “so this is just your opinion?” questions (with the word “opinion” said with dripping sarcasm); but expert opinions are not “just” opinions; they should be professional opinions, to a reasonable degree of certainty (see xxx), based on specialized knowledge, principles, and methods, intended to be used as the basis for decision-making
  • Trick deposition question by party favoring SJ: induce deponent to state that there is no reasonable disagreement regarding fact
  • Which types of evidence do/don’t “speak for themselves,” in what circumstances? (Even eyewitness testimony can be probed; photographs will not always speak for themselves in the age of Photoshop; documents and electronic documents may need to be probed for authenticity; authentication generally asks whether a piece of evidence is what its proponent purports it to be)
  • Source code definitely does NOT “speak for itself”: see e.g. Amdocs v. Openet, in which source code, presented without expert testimony as to how the source code operates, failed to raise an issue of material fact to contradict the opponent’s testimonial evidence (conversely, expert testimony without reference to source code may also fail to raise an issue of material fact, e.g. Padcom v. Netmotion; though generally expert testimony without use of source code goes to weight, not admissibility, e.g. TV Interactive v. Sony)
  • Experts sometimes avoid expressing an opinion, by failing (consciously or unconsciously) to write complete sentences.
  • The word “see” is especially useful in avoiding expression of an opinion: “Hashtable: see file f function g lines 100-102,” by itself, merely tells the reader to see for themselves, without committing the expert to anything
  • Similarly, “for example” or “e.g.” is often used to resist expressing a firm opinion: “Hashtable: see for example file f function g, file f2, function g2, file f3, function g4; also e.g. file f4 function g5”: this superficially looks informative, but does not actually say anything
  • Contrast: “The hashtable element of P’s claim 2 is embodied in file f.php function g lines 100-102, because the code employs an associative array (“array[‘string’]”), the code is written in PHP and uses built-in PHP associative arrays, and these are implemented using hashtables” [for which a reference should likely be provided to PHP documentation or source code]
  • Of course, expert opinions are especially valuable at a higher level of granularity than the example above: “D’s accused product x embodies each and every element of P’s claim 1”; but these higher-level opinions must be based on lower-level statements made by the expert
  • Report table of contents (TOC) should present full opinions, with hierarchy of higher- and lower-level opinions; see ch.27
  • Note that avoiding “see,” “e.g.” and other non-statements is consistent with Daubert criterion (1): expert opinions should be “falsifiable” (capable of being tested)
  • Experts of course do also provide facts, sometimes “new” facts found by expert themselves as a result of testing or experimentation (here, software reverse engineering; experts may even write new software to generate facts about the underlying software) [move this point to elsewhere in chapter]
  • Experts can also provide general “background” testimony on principles of a given field, e.g. a tutorial for judge or jury on the relevant technology; the expert is not required to connect this background information to the facts of the case (see xxx), but it is rare that the expert won’t do so
  • Importance of opinion:
  • As noted above (see xxx on FRE 703), facts relied upon as bases for expert opinion may be inadmissible, if of the type reasonably relied upon in the field; the role of such facts is merely as bases for the expert’s opinion
  • Court wants benefit of expert’s considered opinion, boiling down raw facts
  • Court requires reliable basis for opinions, because fact-finder is permitted to rely on opinion
  • Similarly, FRE 705 states that an opinion can even be provided at trial without the underlying basis, again highlighting the importance of opinion testimony (of course, the expert report must disclose all the bases for the opinion; bases must be disclosed if asked on cross-exam; and it is almost always preferable to testify to bases as well as opinion)
  • FRE 704 even permits (with one exception not relevant here) experts to provide opinion testimony which “embraces an ultimate issue to be decided by the trier of fact” (such as infringement or invalidity); while this is not a license to in effect tell the fact-finder how to decide (see xxx below on use of “inadequately explored legal criteria”), it again shows the centrality of opinion to expert testimony
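  • The PHP hashtable reasoning in the “Contrast” example above has a direct analogue in Python, whose built-in dict is likewise implemented as a hashtable; a minimal sketch (class name hypothetical) confirming that insertion and lookup each invoke the key’s hash function:

```python
# Python's associative array (dict) is hashtable-backed: a custom key type
# records each call to its __hash__ method during insertion and lookup.
class CountingKey:
    def __init__(self, name):
        self.name = name
        self.hash_calls = 0
    def __hash__(self):
        self.hash_calls += 1
        return hash(self.name)
    def __eq__(self, other):
        return isinstance(other, CountingKey) and self.name == other.name

k = CountingKey("hashtable")
table = {k: "value"}     # insertion hashes the key
_ = table[k]             # lookup hashes it again
print(k.hash_calls)      # -> 2
```

An opinion grounded in a demonstration like this is “falsifiable” in the Daubert sense: anyone can rerun it and check the result.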

8.6.6.2 When raw facts without an opinion are appropriate

  • Experts sometimes testify to facts and not opinion
  • FRE 702 contemplates that an expert may testify “in the form of an opinion or otherwise”
  • It is possible to let others draw inferences: either other experts, or even the fact-finder
  • Even when providing opinions, these are often bricks for a larger wall; a single expert is almost never responsible for the entire factual portion of the party’s case
  • However, the danger of providing bricks for another’s wall is that they may be misused to produce someone else’s oddly-shaped wall; what is the responsibility of the expert for how their output is used by others?; see papers on expert witnesses and “the whole truth”
  • Experts are often called upon to provide affidavits for use e.g. in summary judgment; see ch.27
  • Affidavits are sworn, based on personal knowledge
  • Affidavits are often drafted by attorneys, containing a series of more-or-less disconnected propositions needed as support for a brief, with the expert asked to sign the affidavit
  • Experts should at least carefully confirm that each proposition they have been asked to endorse is supportable, and has been stated with any necessary qualifications (e.g., “typically” or “generally” rather than an unstated always)

8.6.6.3 No merely conclusory opinions, or inadequately explored legal criteria

  • While experts are generally expected to provide opinions, of course these opinions must have a reasonable basis in the expert’s experience or qualifications, facts, principles, and methods (see Daubert above)
  • The expert must disclose these bases, at least in the expert report (see chapter 27)
  • In oral testimony at the trial, the expert need not provide the basis before the opinion, nor provide the basis at all unless asked on cross-examination (FRE 705)
  • But opinions without a basis carry very little weight (even if the fact-finder does not fully understand the bases, judges and juries can easily tell whether an opinion has one, and perhaps whether it is sufficient to support the opinion)
  • Opinions without sufficient basis are “conclusory,” and should be treated analogously to a party’s “conclusory” ICs (see ch.26)
  • Conclusory expert assertions cannot carry a party’s SJ burden (Invitrogen v. Clontech)
  • One mark of a conclusory opinion is absence of words such as “because” which tie facts to conclusions
  • Another mark of a conclusory opinion is use of “inadequately explored legal criteria,” i.e., legal terms of art which have not been parsed into their component parts, with a discussion of each and every component [note similarity to analyzing patent claims by parsing into limitations, and connecting each and every limitation to accused/anticipatory product]
  • For example, “obviousness” is a legal term of art; a statement that a patent claim was “obvious” is merely conclusory unless it explores each of the Graham factors: (1) the scope and content of the prior art, in the appropriate field or “analogous arts”; (2) the level of ordinary skill in the art at the relevant time; (3) the differences between the claimed invention and the prior art; and (4) objective evidence of nonobviousness, such as long-felt but unsolved need, and failure of others
  • It can be easy to stray from factual analysis (software interpretation) into legal analysis (claim interpretation); see xxx above on the reduced role for experts as “extrinsic evidence” in Markman claim construction
  • [Perhaps discuss facts/law: e.g. legal vs. factual impossibility in criminal law (see the deer-hunting story of Mr. Law & Mr. Fact); how to apply this to difficult mixed questions of law and fact, such as patent infringement?]
  • FRE 704 ACN provides this example: the question, “Did T have capacity to make a will?” cannot be answered by an expert, but “Did T have sufficient mental capacity to know the nature and extent of his property and the natural objects of his bounty and to formulate a rational scheme of distribution?” could, if the expert delved into each separate factual area; the 2nd question is not merely a contrived and long-winded version of the 1st question, because it directs the expert to the individual factual components of the ultimate issue

8.6.6.4 Hedging opinions & “weasel words”

  • Experts are expected on the one hand to provide firm and certain opinions (see degrees of certainty at xxx below), and on the other hand not to over-extrapolate an opinion beyond its supporting basis (see xxx on “fit” & application)
  • This can prove an awkward set of constraints
  • The expert would perhaps like to say “well, it sure looks to me like they’re most probably using hashtables, but I can’t be certain because we don’t have the source code for that”
  • Such testimony wouldn’t be helpful to the fact-finder [explain why not; explain difference between “I’m reasonably certain that…” and “There’s a good chance that…” or “I believe that…”]
  • To avoid these constraints, experts sometimes resort to hedging or “weasel words”
  • The all-time favorite weasel words are “is consistent with”
  • “Is consistent with” is almost always an accurate conclusion as such, and the expert perhaps hopes the fact-finder won’t appreciate how little the statement really says
  • Similar phrases: “points towards,” “indicates,” “supports an assertion that” (see Graham Jackson, “Understanding forensic science opinions,” and further discussion in chapter 27 on expert reports)
  • What should expert do instead, when tempted to resort to “is consistent with”?
  • One option is to recall (see xxx above) that fact-finders prefer opinions which account for more of the important facts; this means incorporating what may seem like “unfavorable” or countervailing facts into the opinion
  • Rather than be tempted to list only favorable facts and link them vaguely to the desired conclusion with “is consistent with,” it is likely preferable to list both favorable and unfavorable facts, and explain why as a whole they support the conclusion, rather than an alternative [“support the conclusion” or even “points towards” are not weasel words, when the alternate is explicitly excluded]
  • But when “weighing” the facts in this way, don’t try to take on the fact-finder’s job (e.g., assessing credibility of a deponent’s testimony), and note the possibility of “disaggregation” (see xxx above)

8.6.6.5 Degrees of certainty

  • It was noted above that it is not helpful for experts to testify along such lines as “It looks to me like…” or “There’s a good chance that…”
  • Courts want experts to be as certain in forming opinions, to be used by the fact-finder, as the expert himself would be in a non-litigation context in forming an opinion upon which something depended
  • In other words, this is not mere opinion, but opinion to be used as a basis for decision-making
  • This is the significance of the phrase, “reasonable degree of certainty,” and of such awkward formulations as, e.g. “reasonable degree of software-engineering certainty”
  • Phrases like this are never used outside court (e.g. in engineering itself), yet their point is precisely to refer to non-court criteria for drawing inferences, upon which inference something else will depend
  • Expert is opining with the same degree of “connectedness” that would be expected in the non-litigation context of software engineering
  • Connectedness: if you wouldn’t say to a coworker on a project: “the code uses x, therefore it also uses y,” or if the coworker wouldn’t go and put code into the product on such a basis (“hey wait a minute, what about…?”), you shouldn’t say it in litigation either
  • Specifically on matching code to patent claims: if a specification or requirements document called for an x, and you as a software engineer wouldn’t use a y to implement the x, you likely shouldn’t say that an x in the source code matches a y in the patent claim
  • Again, this is unfortunately why experts use “is consistent with” and other weasel words: they know they can’t support an equation between x and y, but they “need” some association between them (perhaps it is the 11th hour, with the expert report due tomorrow at 9am), so they resort to a weak association which superficially sounds strong
  • On the other hand, remember that courts are not looking for 100% certainty either; the standard in civil litigation is not “beyond a reasonable doubt,” but a much lower standard: preponderance of the evidence
  • At the same time, however, also remember that because of the presumption of validity, patent invalidity must be shown to a higher standard: “clear and convincing evidence”
  • The phrase “reasonable degree of software-engineering certainty” is fine, so long as the expert understands what it means: that you would make software engineering decisions, affecting real-world products or services, based on conclusions reached to the same degree of certainty you have here in the litigation

8.6.7 “Fit” and “Application”: wringing and stretching

  • The tests for expert reliability generally apply to methods, not conclusions/opinions (Daubert: “The focus, of course, must be solely on principles and methodology, not on the conclusions that they generate”)
  • But after Daubert, courts realized that conclusions themselves must also be examined for reliability, somewhat apart from the methodology
  • GE v. Joiner: “methodology and conclusions are not entirely distinct from one another”
  • Possibly “too great an analytical gap” between the facts of the case and the expert opinion
  • An expert may employ methods which are reliable, but which are not applicable in this case; or the expert may “stretch” or over-extrapolate
  • While courts prefer methods not produced for this particular litigation (see xxx above), on the other hand the methods should bear a close relationship to the issue in the litigation
  • “Fit” = pertinence, relevance
  • For example, animal studies may be reliable, but are arguably not applicable to litigation involving humans
  • In a software patent case, an analogy might be drawing conclusions about an accused product by looking at non-accused software, without explicitly indicating why it is reasonable to extrapolate from the non-accused to the accused
  • Similarly, experts sometimes broadly opine from general computer science principles, without knowing enough about the particulars of the relevant software: e.g., “the way we do X in computer science is by doing Y; D’s product does X; therefore [and this part is often left unstated by the expert, hoping the listener will mentally fill it in] D’s product does Y,” without considering whether the “X is done by doing Y” generalization applies to D’s product
  • See ch.xxx on “the inventor’s fallacy”: “the only way to do X is by doing Y, and D does X, therefore it does Y”; but TMTOWTDI: “there’s more than one way to do it” (Larry Wall)
  • Sometimes conclusions without “fit” are simply non sequiturs, e.g. “if X and Y, then Z”, without any facts in the case as to X or Y
  • FRE 702 & “application”: in addition to (1) sufficient facts or data, and (2) reliable principles and methods, FRE 702 also requires (3) “the witness has applied the principles and methods reliably to the facts of the case”
  • Whereas “fit” looks at the “analytical gap” between the case and the conclusions, “application” looks at the related but distinct problem of over-extrapolation, “wringing” conclusions from the facts
  • FRE 702 ACN: “the trial court must scrutinize not only the principles and methods used by the expert, but also whether those principles and methods have been properly applied to the facts of the case”
  • FRE 702 ACN makes an exception when the expert is merely educating the fact finder about general principles, without attempting to apply them to the facts of the case; but the very case Daubert itself cites as its example of “fit” had to do with general background testimony (US v. Downing, re: psychological principles of eyewitness identification)
  • “Fit” and “application” provide ways to exclude expert conclusions which, informally, just don’t smell right
  • A court might apply these where the expert is clearly “stretching”
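  • The “X is done by doing Y” pattern above can be made concrete: two implementations with identical observable behavior (“does X”) can rest on entirely different internal techniques, so observing X does not establish Y. A short Python sketch (class names hypothetical):

```python
import bisect

# Two membership-testing implementations: same external behavior ("does X"),
# different internal technique -- one hashtable-backed, one with no hashing.
class HashMembership:
    def __init__(self, items):
        self._set = set(items)            # hashtable-backed
    def __contains__(self, x):
        return x in self._set

class SortedMembership:
    def __init__(self, items):
        self._sorted = sorted(items)      # sorted list; binary search, no hashing
    def __contains__(self, x):
        i = bisect.bisect_left(self._sorted, x)
        return i < len(self._sorted) and self._sorted[i] == x

for impl in (HashMembership, SortedMembership):
    m = impl([3, 1, 4, 1, 5])
    assert 4 in m and 2 not in m          # indistinguishable from outside
```

An expert inferring “the product tests membership, therefore it uses a hashtable” has skipped exactly the step FRE 702’s “application” requirement demands.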

8.7 “Battle of the experts”: Why & how experts disagree over the facts of how patent claims read onto software

  • Expert disagreements in court have met with dismay and outrage since the 18th century (see Tal Golan’s excellent history of expert testimony, Laws of Men and Laws of Nature)
  • The dismay is based largely on a naive view of science and technology as offering up single unambiguous truths
  • The outrage expresses itself in the view that, when two experts disagree, one or both must be biased “whores” engaged in “junk science”
  • But expert disagreement is no more surprising than the frequent need or desire to get a “second opinion” in medicine
  • Expert disagreement occurs naturally in experience-based fields, where experts do not have rigidly-defined standards (see xxx above, noting the need for proficiency testing and “calibration” among software examiners)
  • Expert disagreement is generally necessary for a patent case to get to a jury, because without factual disagreement the court can resolve the case on the law (see xxx above re: summary judgment)
  • Because juries resolve factual disputes, the expert disagreement must be factual: generally not only over how to interpret the patent claims (a largely legal question; see xxx) but over how to interpret the software
  • Because SJ requires a factual dispute, there is an incentive for D (who generally would prefer SJ) to concede on facts, and confine the dispute to law (claim construction)
  • Experts will likely agree on 90% of the facts, with factual dispute narrowed to a single element/step
  • Some examples of “battle of the experts” involving factual disputes over source code:
  • Dynetix v. Synopsys: Dynetix expert testimony, based in part on review of VCS Multicore source code, concludes that despite compile-time partitioning, run-time conversion of partitions into slave threads practices the Auto Detection and Purpose components of the Dynetix patent claim. Synopsys argues that the autopartitioning source code relied upon by Dynetix’s expert is “blocked” by other code, and therefore can never be executed. “The net result of all this is that, at least with respect to DLP, there is a genuine issue of material fact as to whether the autopartitioning feature infringes the parallel simulation claims. While Synopsys has presented evidence showing the accused product does not practice a key limitation of the claims in question, Dynetix has presented competent evidence to counter that assertion. In particular, Dynetix’s expert Amin points to portions of the source code that indicate [that when] the user does not supply a variable, the program launches into the autopartitioning mode described as infringing. This is a classic ‘battle of the experts’ on a material issue of fact. It is the jury’s province to resolve such issues, not the court’s.” [fns omitted]
  • Experts often say there is little room for factual disagreement, but the Dynetix v. Synopsys example belies this; yet see the following footnote from Dynetix, indicating lack of facts based on a discovery dispute: “To be sure, at least with respect to method claims, ‘[i]t is not enough to simply show that a product is capable of infringement; the patent owner must show evidence of specific instances of direct infringement.’ [citing Fujitsu v. Netgear; see ch.26] But Dynetix has presented an affidavit showing these facts are unavailable to it and the court has issued not just one but two orders compelling Synopsys to produce evidence relating to this issue…. Two other motions to compel are pending…. Under such circumstances, it would be unjust to penalize Dynetix for failing to tender this very same evidence.”
  • Thus, one reason for a “battle of the experts” is insufficient facts, and this occurs frequently in litigation, given the limits on time and money present even in the largest cases
  • Masimo v. Philips: “Where the parties disagree is in the application of fuzzy logic and whether conventional logic, math, averages, IF-THEN-ELSE statements and precision can be used in fuzzy logic processes. Masimo contends equations designed to calculate averages or scaling equations do not constitute fuzzy logic. Philips maintains such concepts are inherent to how fuzzy logic is implemented, disputing Masimo’s argument that the calculated confidence values (fRawConf) in its source code do not represent partial memberships in sets, because each number is a precise calculation…. genuine issues exist regarding whether the contested lines of source code in the Signal IQ software are fuzzy logic. The parties’ experts dispute the functionality of the source code in the Signal IQ software, and how fuzzy logic is used in the Bosque patent, the Signal IQ software (as well as whether it is used at all in Masimo’s product), and the ‘074 patent. For these reasons, the issue of literal infringement should be left to the jury.”
  • Masimo is an example of experts disagreeing over whether a piece of code does x (here, “fuzzy logic”); in such disputes one expert may appear to be “stretching” a definition to fit a borderline case
  • A similar example is MobileMedia v. Apple, in which experts disagreed over whether a playlist constitutes a “wish list” or merely a database; see also Friskit v. RealNetworks, with experts disagreeing over whether playback code was “automatic”; Uniloc v. Microsoft, whether MD5 is “summation”
  • These are all examples of differing code interpretation, not claim interpretation, even though it seems that code reading should, given the very nature of code as instructions to a CPU, be unambiguous
  • What then are the sources of expert disagreement over code?
  • Dynetix above shows that one source is the need for inferences from incomplete facts
  • Masimo and MobileMedia above show legitimate disputes over whether definitions cover borderline cases; though this is also open to less-legitimate “stretching”
  • Experts can disagree over the live/dynamic behavior of a system, which may be crucial to a method claim, if one expert relies more on static analysis of source code, and the other more on dynamic examination of the product; network and multi-process behavior of a system is non-deterministic
  • Experts can disagree over whether source code’s agreed-upon “capability” of performing a step ever actually comes to fruition in a live system; see e.g. MetLife v. Bancorp; but in some cases mere capability is sufficient, e.g. Versata v. SAP, with a patent claim for “computer instructions capable of” x, and the expert demonstrating performance of x without modifying source code, even though the shipped product does not do x
  • Experts can disagree over whether comments or function/method names accurately reflect what the source code does, and the level at which function/method names can be relied upon without further examination
  • Expert disagreement in software patent litigation is rooted in the difficulty of aligning code with prose/text (which is also ultimately at the root of software bugs, and the difficulty of finding them)
  • Just as there are numerous ways (x1, x2, etc.) to implement y (e.g. square root), there are numerous ways of describing what x1 does, and even what y it corresponds to
  • A given implementation/structure/means may correspond to multiple roles/functions/purposes
  • At a low level, note how a given CPU instruction may be used for multiple purposes, e.g. Intel x86 LEA (load effective address) is often used to do addition or multiplication (for this and other examples of multiple functions carried out by a single instruction, see Abrash, Zen of Assembly Language; Warren, Hacker’s Delight)
  • The accuracy of a description of “what x does” depends on the level of description, on which attributes are being emphasized (e.g. functional vs. structural), etc.
  • Just as eyewitnesses see different things, without any single eyewitness seeing everything, experts can look at the same facts and “see” different things, and draw different conclusions, not only because of biases, but because, just as “a picture is worth 1,000 words,” any description emphasizes some things and omits others
  • Even given the faux-naive stance of “I just present the facts,” experts will see different facts as worth presenting
  • FRE 702 ACN: a court ruling that one expert’s testimony is reliable does not imply that competing testimony is unreliable; testimony is permitted based on competing methodologies within a field of expertise
  • In infringement dispute, P’s expert will generally lean towards simplification, D’s expert will generally lean towards showing subtleties, complexities; these tendencies are reversed in invalidity dispute
  • What proxies do non-experts use to assess a battle of experts? Credentials, statements of certainty, and good teaching style are often used as bases for favoring one expert over another; are there better proxies?
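The capability-vs.-actuality dispute noted above (MetLife; Versata) can be made concrete with a minimal C sketch. All names here (do_x, shipped_path, expert_demo) are invented for illustration; the point is that a claimed step can exist in the source yet never execute on the product’s normal code path, and that an expert can demonstrate the capability without modifying the source:

```c
#include <assert.h>

/* Hypothetical sketch (all names invented): the shipped code path never
 * exercises do_x(), but the capability exists in the source. */

static int do_x(int v) {            /* the claimed step "x" */
    return v * 2;
}

static int shipped_path(int v) {    /* what the product actually runs */
    return v + 1;                   /* do_x() is never called here */
}

/* The expert's demonstration: invoke the existing, unmodified function
 * directly, showing the code is "capable of" performing x even though
 * the shipped path never does. */
static int expert_demo(int v) {
    return do_x(v);
}
```

Static analysis alone would report that do_x exists; dynamic examination of the shipped path alone would never observe it run. Whether that gap matters depends, as the cases above illustrate, on whether the claim requires actual performance or mere capability.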
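The “numerous ways to implement y” point above can also be sketched in code. Here are two deliberately different (and purely illustrative) C implementations of square root: a Newton-Raphson iteration on doubles, and a binary search for the integer square root. Both “do square root,” yet a structural description of one bears little resemblance to a structural description of the other, and each invites different functional descriptions as well:

```c
#include <assert.h>
#include <math.h>

/* Implementation x1 (illustrative): Newton-Raphson iteration on doubles;
 * repeatedly averages the guess with y/guess until it converges. */
static double sqrt_newton(double y) {
    double guess = y > 1.0 ? y : 1.0;
    for (int i = 0; i < 64; i++)
        guess = 0.5 * (guess + y / guess);
    return guess;
}

/* Implementation x2 (illustrative): binary search for the largest unsigned
 * integer whose square does not exceed y; no floating point at all. */
static unsigned isqrt_bsearch(unsigned y) {
    unsigned lo = 0, hi = (y < 2) ? y : y / 2 + 1;
    while (lo < hi) {
        unsigned mid = lo + (hi - lo + 1) / 2;  /* mid >= 1: no div by zero */
        if (mid <= y / mid)                     /* i.e. mid*mid <= y, without overflow */
            lo = mid;
        else
            hi = mid - 1;
    }
    return lo;
}
```

An expert describing x1 might emphasize iteration and convergence; an expert describing x2 might emphasize comparison and search. Both descriptions would be accurate, and neither would be complete.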
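The LEA example above (one instruction, multiple purposes) can likewise be sketched. The following C function is a hypothetical model, not a real API: it mimics the x86 effective-address form base + index*scale + displacement. LEA computes exactly this sum without touching memory, which is why compilers emit it both for address calculation and for plain arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model (name and signature invented) of the x86 LEA
 * addressing computation: effective address = base + index*scale + disp.
 * The real instruction performs this sum without any memory access. */
static uintptr_t lea(uintptr_t base, uintptr_t index, uintptr_t scale,
                     uintptr_t disp) {
    return base + index * scale + disp;
}
```

Used one way, lea(array_base, i, element_size, 0) is an address calculation; used another way, lea(x, x, 2, 5) computes 3*x + 5 in a single “addressing” step. Whether an expert describes the operation structurally (an address computation) or functionally (a multiply-and-add) depends on the purpose being emphasized, which is precisely the interpretive gap discussed above.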

8.8 Court-appointed special masters, and proposed solutions to the “expert problem”

  • Expert witnesses are widely perceived as presenting a problem to the legal system
  • Their testimony is frequently seen as biased, mere advocacy by “hired guns,” rather than as providing the legal system with the considered assistance of science and technology
  • Various solutions have been proposed since the 18th century
  • FRE 706 puts forward one possible solution: court-appointed experts
  • However, courts only infrequently appoint experts
  • According to FRE 706 ACN, the very availability of this solution decreases the need for its use: the “ever-present possibility” is supposed to have a “sobering effect” on parties to litigation
  • Courts may hire technical advisors, and technically-trained special masters (FRCP 53); see e.g. In re Subpoena to Chronotek
  • FRCP 53 special master to examine source code prior to discovery (see RGIS v. AST)
  • Neutral expert appointed to answer the question: must D produce its entire source code? (see Friskit v. RealNetworks)
  • See Lee Hollaar, “The Use of Neutral Experts”
  • Another proposed solution to the expert problem is so-called “hot-tubbing”: requiring experts to prepare a joint report on their agreements and differences, with each expert required to respond directly (not via an attorney) to the other experts; see Welding Fumes Prod. Liab. Litig.
  • Another approach might be retaining experts merely to describe similarities and dissimilarities of code with patent claims, without offering a quasi-ultimate opinion on infringement or invalidity [some jurisdictions use this approach for expert testimony re: questioned documents, handwriting comparison]
  • Another approach would be designation of a neutral expert to produce abstracts of the relevant code, possibly in a “blind” context (i.e., without reference to the patent claim)
  • Semi-blind examination is sometimes done in obviousness analysis, where it is well known that “hindsight” should not be used: rather than using the patent as a template with which to view the prior art, the expert is presented with materials in sequence (see PLI Patent Litigation, ch.8)
  • Professional codes of ethics, and their applicability to experts’ litigation-related work