Software Litigation Consulting

Andrew Schulman
Consulting Technical Expert & Attorney

NotebookLM (unnecessarily vague) summary of “self-referential” patent system

It’s not really worth reading NotebookLM’s prose below by itself. I’ll later post a detailed critique of how Google’s NotebookLM (NBLM) handled this task, covering both its high points and its low points, and assessing how to productively use AI chatbots to turn rough book notes into usable first drafts of writing. But first, I want to get a better handle on how to tell NBLM to produce the output I want. That is “prompt engineering,” but for RAG (Retrieval Augmented Generation): “Pushing a RAG”?

A prompt in PTCF format (persona/task/context/format) works well with NotebookLM. In fact, based on rough scoring of the responses, NBLM did better than ChatGPT-5.1, Claude Opus 4.5, and Gemini 3 Pro Preview. Comparison and analysis coming soon.
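For readers unfamiliar with PTCF, the format is just four labeled sections. Here is a minimal sketch, assembled as an ordinary Python string; the wording is illustrative only, not the actual prompt used in this experiment:

```python
# Minimal sketch of a PTCF (persona/task/context/format) prompt.
# The section wording below is illustrative, NOT the actual prompt used here.
ptcf_prompt = "\n\n".join([
    "Persona: You are a patent-litigation consultant and technical writer.",
    "Task: Summarize everything in the sources bearing on the possibly "
    "'self-referential' nature of the patent system.",
    "Context: The sources are rough notes for a book on patent litigation; "
    "the summary will become the first draft of a book section.",
    "Format: A bulleted outline, each bullet backed by a specific example "
    "(a case name, author, or quotation) drawn from the sources.",
])
print(ptcf_prompt)
```

The point is only that each of the four sections does a distinct job; any of the labels can be reworded, and NotebookLM does not require this exact layout.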

In the meantime, you can just glance at NBLM’s output below and see how good it looks, and then read a little more carefully and ask yourself if it’s actually saying anything. Did the prose really have to be so generic? I don’t mean this as a standard “AI doesn’t know what it’s talking about, it’s just doing next-token completion” complaint, because while true, that’s not inconsistent with the ability to generate writing that is a lot better than what NotebookLM did here.

NBLM-generated audio and video, while sometimes incorrect, are definitely not flat and bland like the text output below. I quickly had NBLM generate a video specifically on this same “self-referential patent system” theme, based on the same materials of mine as the NBLM-generated text below, and while the resulting video has some problems, at least it’s not “meh” in the way that NBLM’s text output is. See the video towards the end of this page.

Given the quality of the NBLM video, despite some mistakes it makes, I don’t think the boring flat quality of the NBLM text is the fault of my rough notes — NBLM video took those same rough notes and did something interesting with them.

  • Aside: On whether LLMs understand what they’re talking about (which is important to assessing LLM output): see the “Potemkin Understanding” paper (and the article “AI models just don’t understand what they’re talking about”), and note the related question, “what does it mean to speak competently about baseball without ever having watched a game?” But also note that even next-word/token completion requires something like comprehension of the surrounding words, and especially note that words/tokens in LLMs are represented by embeddings, which in LLMs encode meaning (semantics; see “How Do Human and Artificial Minds Understand Language” in Louwerse, Understanding Artificial Minds through Human Minds: The Psychology of AI, 2025). Simply being able to competently predict and generate the next token in a language sequence requires something closer to understanding than to mere parrot mimicry. (Sam Altman’s “i am a stochastic parrot and u r 2” is relevant here.)
  • See also Language Models in Plato’s Cave (“LLMs reconstruct the human mind through the shadow that it casts on the Internet…. By acquiring compressed representations of this text, the LLM is essentially trying to reverse engineer the mental process that gave rise to it, and indirectly copying the corresponding cognitive ability…. LLMs are trying to skip the neurons all together and reconstruct the mind from the shadow it casts on the Internet”). More later.

Sort-of TL;DR: Anyone who happens to be reading this might reasonably ask, “Why are you getting into issues of LLM ‘understanding’ and ‘psychology’ and lord knows what else, when the problem you had with NotebookLM boils down to the prompt?” Indeed, complaining about the response below to the brief question “To what extent is the patent system ‘self-referential’?” may miss the main point: when working with an LLM, or an LLM+RAG like NotebookLM, for long-term projects rather than one-off queries, one needs to think about “prompt engineering.” That phrase rightly produces some eye-rolling, but on the other hand the idea that “English is the new programming language” (i.e. one gets what one wants out of LLMs simply by asking for it in human language), if meant to imply that any English will do, is naive.

  • LLMs can do shockingly well even with so-called “zero-shot” prompting that provides no examples of what one wants (like my simple “To what extent is the patent system ‘self-referential’?” here).
  • But it’s better to provide even a single example of what the desired output should look like, or what it should include, along with a negative “close but no cigar” example (basically telling the LLM, “hey, this looks like it would illustrate the point, but actually it’s a counter-example, because…”).
  • Here, a better prompt would have included a reference to one or more examples — known to be in my NBLM sources (I’m prompting an LLM+RAG, not simply an LLM) — of the patent system behaving self-referentially (e.g. countries and companies relying on mere counts of patents acquired), AND one or more examples in the sources where the patent system is a routinized treadmill, but not necessarily self-referential.
  • Speculation: in an LLM+RAG like Google’s NotebookLM, can anything in your docs (or potentially summarizable/synthesizable across multiple docs) be elicited, given the right prompt? Is there possibly some relation here to the theory that LLMs are Turing complete (see e.g. “Ask and it shall be given” paper)? Of course, AI vendors work hard at blocking their AI chatbots from responding to certain prompts, but those guardrails are put in at a high level, with the underlying capabilities unchanged (and thus a target for “jailbreaking” prompts).
  • A good working assumption is that you can probably get anything you want from a sufficiently large LLM, or LLM+RAG (if what you want is somehow/somewhere reflected in your uploaded sources) — if only you can figure out the right way to ask. Hopefully that does not include the ability to elicit verbatim copies of training data, even though verbatim text is not “in” the LLM weights.
  • See also “in-context learning” (ICL) in prompts (e.g. IBM article: ICL “enables AI models to learn tasks dynamically by simply giving examples in a prompt”).
  • I would call NBLM’s output below an example of “workslop”. If there exists some prompt that would have elicited better output from NBLM, does that imply that “workslop” is mostly the result of poor prompts? [TODO: more on workslop. It’s easy to see that part of the workslop problem is how good LLM output looks at first and second glance. One has to overcome the natural tendency to quickly accept something that “looks right” and move on.]
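The one-positive-plus-one-negative-example idea above is just string assembly. A minimal sketch, assuming hypothetical example texts (the two examples below are stand-ins paraphrasing my notes, not quotations from the sources):

```python
def build_few_shot_prompt(question: str, positive: str, negative: str) -> str:
    """Assemble a few-shot (in-context learning) prompt: the question,
    one positive example, and one 'close but no cigar' counter-example."""
    return (
        f"{question}\n\n"
        f"Example of what I mean: {positive}\n\n"
        f"Counter-example (looks relevant, but is not): {negative}\n"
    )

# Placeholder example texts -- illustrative stand-ins, not source quotations:
prompt = build_few_shot_prompt(
    "To what extent is the patent system 'self-referential'?",
    "countries and companies relying on mere counts of patents acquired",
    "the routinized 'treadmill' of filing, which may drive underlying "
    "innovation rather than merely feed the system itself",
)
print(prompt)
```

With an LLM+RAG like NotebookLM, the examples should point at material known to be in the uploaded sources, so the retrieval step has something concrete to latch onto.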

NotebookLM’s “comprehensive briefing” of my notes for a forthcoming book on patent litigation missed one key point from the notes on the possibly “self-referential” nature of the patent system. So I simply asked the chatbot and it answered, based of course on my documents in NotebookLM (though unfortunately without any of the specific examples my notes provided):

Me: To what extent is the patent system “self-referential”?

NotebookLM: The patent system is described as self-referential or self-perpetuating to a significant extent, acting like a treadmill or arms race that creates constant churn and activity.

Here’s how the sources explain this self-referential nature:

  • Encouraging More Patenting and Litigation
    • Success with patents encourages more patenting, which then leads to more litigation. This, in turn, shapes how people patent in the future. The system’s incentives are described as pushing inventors, companies, and the economy in different directions.
    • This continuous cycle creates a lot of activity and churn within the system. It acts as an “engine” that keeps running due to these self-perpetuating incentives.
  • Feedback from Patenting to Invention
    • The process of patenting can influence future inventive activity, prompting inventors to think in terms of claims and how their work integrates into existing technological landscapes.
    • Interactions with the Patent Office examiner, who cites prior art and questions claims, forces inventors to clarify and frame their claims as a greater departure from what already exists, ideally pushing towards greater novelty or non-obviousness. This constitutes a “patent feedback” mechanism that can change the invention “itself” or at least the inventor’s “next one”.
    • The patent system’s structure and requirements, particularly the claim format, channels and routinizes inventive activity, encouraging a more systematic approach to innovation. It pushes inventors to both generalize their advances and narrowly differentiate them from prior art, a “threading the needle” dynamic.
  • Promoting Progress through Competition and Incentives
    • While the system aims to “promote progress” by incentivizing invention and disclosure, it does so with inherent contradictions and complexities. The system is seen as a “schizo mishmash that pushes/pulls in multiple directions,” but it still functions as a system that aims to impose regularity on invention.
    • The patent system is not just a passive repository of inventions; the active involvement of would-be inventors in the system, driven by its requirements, shapes its creation.
    • It creates an additional area of competition that can be engaged in even without a tangible product. This can be seen as “competition by other means”.
    • The system creates incentives not only to invent but also to get a patent, which might happen even if the underlying invention has already occurred. This “incentive to patent” is a key aspect of its self-referential nature.
    • The system’s longevity itself suggests there isn’t an easily available substitute, reinforcing its persistent existence.
  • Property Registry and Information System
    • Part of the patent system’s goal is to create and maintain a patent system — a property registry that draws property lines, similar to how land is registered.
    • The system encourages codification of inventions into tangible forms that can be valued, transferred, exchanged, and referenced. This “reification” makes inventions numbered, classified, and defined, turning technology into a tradable asset and enabling markets for technological knowledge.
    • The system is partly a system because of classification and citations, which connect patents and provide a “window into the knowledge economy”. The collection of prior art, which includes patents and applications, helps the system feed itself for future rounds of examination and potential invalidation. Indeed, the collection of prior art can become an end in itself.

In essence, the patent system’s self-referential nature stems from its ability to generate its own activity and direction, influencing not just the form of intellectual property but also the very act and subsequent shape of invention itself.

Me: Could you give me again that same summary of the “self-referential” theme, but this time with some specific examples from the sources?

[NBLM’s revised summary now includes a “Specific Example” bullet for each major point, but these sound like rewordings rather than specific examples. Specific examples would likely include some proper nouns: names of cases, authors of papers on the economics of the patent system — something other than this long run of eyes-glazing prose (though relieved a bit with some decent catchphrases in quotation marks like “threading the needle” and “churn” from my rough notes). I mean, if it’s going to refer to “reification,” it could at least stick in a reference to Louis Althusser — though, okay, it would be very far afield to refer to Althusser’s wife :-(  And the lower-case quotation “window into the knowledge economy” could at least break up the text a little if it noted this is the subtitle of a book, “Patents, Citations, and Innovations: A Window on the Knowledge Economy” (Jaffe & Trajtenberg, 2002), which could then be used as a basis for a riff on whether patent citations are a mere “window,” or instead somehow actually part of that economy (somewhat along the lines of “an engine not a camera”). Surely some of the blame for not including more relevant specific examples and citations (more relevant than what I’ve just typed in here) is mine, for not spelling out what I meant by “specific examples” in my prompt.]

The patent system is extensively described as self-referential or self-perpetuating, often compared to a treadmill or arms race that generates continuous activity and “churn” within itself. This self-referential nature is driven by several mechanisms:
• Encouraging More Patenting and Litigation: The system creates a continuous cycle where success with patents leads to more patenting, which in turn results in more litigation. This litigation then shapes future patenting strategies.
    ◦ Specific Example: The possibility of litigation itself acts as an incentive within the patent system. Litigation is a “crucial part” of the system, acting as an “engine” that defines validity and scope, especially since the Patent Office operates on “rational ignorance,” spending limited time (around 20 hours per patent) because most patents will never be litigated. This means that the system relies on subsequent challenges to define its boundaries and strength.
• Feedback from Patenting to Invention: The act of patenting influences subsequent inventive activity, prompting inventors to “think in terms of claims”.
    ◦ Specific Example: The patent system “pushes inventive thinking/direction” and “routinizes invention”. It encourages inventors to both generalize their advances and narrowly differentiate them from prior art, a process described as “threading the needle”. Interactions with patent examiners, who cite prior art and question claims, compel inventors to clarify their claims, pushing for greater novelty or non-obviousness in their current or “next one” invention.
• Promoting Progress through Competition and Incentives: While its fundamental premise is to “promote progress”, the system does so with acknowledged “inherent contradictions”. It aims to impose regularity on invention despite its complexities.
    ◦ Specific Example: The system creates incentives not just to invent, but specifically “to get a patent”. It fosters “an additional area of competition” and its long existence suggests a lack of easy substitutes, reinforcing its self-perpetuating nature. This can result in a “treadmill” or “arms race” of innovation and a constant “churn” of patents, which is implicitly seen as beneficial activity, similar to economic circulation. This dynamic can lead to a focus on “incremental improvements,” which are considered significant contributions to progress.
• Property Registry and Information System: A key function of the patent system is to establish and maintain a property registry that delineates technological “property lines”.
    ◦ Specific Example: The system encourages the codification of inventions into tangible forms that can be valued, transferred, exchanged, and referenced. The very structure of the patent system, with its classification and citation mechanisms, connects patents and provides a “window into the knowledge economy”. The continuous collection of prior art—which includes existing patents and applications—is vital for future examination rounds and potential invalidation challenges, suggesting that “patent system collection of prior art has become an end in itself”.

NotebookLM can be inaccurate; please double check its responses. 


[Again, I’ll soon be posting a detailed critique of how NotebookLM did with this task, both in extracting information from my rough notes and in turning them into usable prose.]

As noted above, I had NotebookLM generate a video specifically on the “self-referential” theme. This was my prompt:

Comprehensively cover everything in the sources that touches on the possibly “self-referential” or “self-perpetuating” nature of the patent system. Don’t assume that everything described as a “treadmill” or “channeling” or “routinizing” is necessarily self-referential, as opposed to something that drives underlying innovation. Give examples from the sources that show, and that don’t show, self-referential behavior.

The resulting video (which took NBLM a few minutes to generate; the extra generation time is an important difference between NBLM’s audio and video output formats and its chat text output):

This isn’t the comprehensive walk-through of my notes that I had asked for, but perhaps NBLM is trying to tell me that there are better goals than completeness. At the same time, the video tends to imply that prior art only exists because of the patent system. While earlier patents and patent applications are the primary forms of prior art used to assess the novelty and non-obviousness of a contemporary patent application, prior art also comes from publications of all kinds, known as non-patent literature (NPL).