What the Machine Returns: On AI-Generated Summaries and the Limits of Structural Knowledge
There is a particular experience available to anyone whose work circulates publicly in the current information environment: the experience of reading what an artificial intelligence system has concluded about that work. The summary arrives with apparent confidence. It draws on real sources. It names real titles and describes real concepts. And yet something is structurally misaligned. Not in the sense of hostility or misinterpretation, but in the sense that the description no longer corresponds to the current state of the work.
The system has not misunderstood. It has represented what it had access to. The problem is that what it had access to is not the work as it now exists.
This essay examines that experience directly. A colleague recently shared AI-generated search results summarizing the Psychological Architecture framework and the scholarly work associated with it. The results were not hostile. In several respects they were generous, even accurate in their broad strokes. They correctly identified the four-domain structure of the framework, named several of its constructs, and described its positioning relative to clinical and self-help traditions. They also contained errors — not random errors, but the specific kind of errors that emerge when a system trained on a fixed body of data attempts to represent work that has continued to develop after that data was collected.
The errors are worth examining not as grievances but as evidence. They reveal something about the structural relationship between AI summarization and living intellectual work — something that extends well beyond any single scholar or framework.
What the Results Contained
The AI-generated summaries described the Psychological Architecture framework as a system organized around four domains: Mind, Emotion, Identity, and Meaning. This is accurate. They named several structural models within the framework, including the Emotional Avoidance Loop and the Identity Collapse Cycle. This is accurate. They described the scholarly positioning of the work — independent of university appointment, focused on structural rather than clinical or topical inquiry, intended as a rigorous alternative to self-help literature. This is also substantially accurate.
The inaccuracies were of a different order. One result attributed to the author a book title that does not exist. Another placed the author in a teaching role at a specific institution — a detail plausible enough to have been synthesized from adjacent information, but false. A third described the framework as though it were a completed, static system rather than a living body of work currently in active development. Each of these errors follows a recognizable pattern.
The fabricated title is a confabulation error: the system, trained to expect a certain kind of scholarly output, produced a plausible title where none existed. The institutional attribution is a proximity error: the system associated the scholar with an institution that appeared in nearby data, inferring affiliation from adjacency. The treatment of the framework as complete is a snapshot error: the system summarized what it found at a fixed point in time and presented that summary as a current description.
The Snapshot Problem
AI language models do not learn continuously. They are trained on a corpus of data collected up to a point in time, after which their knowledge of the world is fixed. When a user queries such a system about a scholar or framework, the system returns a summary based on whatever existed in that corpus. Retrieval-augmented systems can pull newer documents at query time, but the model that synthesizes them still interprets what it retrieves through patterns fixed at training, and fills gaps in retrieval from the same frozen prior. If the scholar has published significantly since the training cutoff, those publications do not exist for the system. If the framework has been extended, revised, or formally systematized in new ways, those developments are invisible.
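The mechanism can be shown in miniature. What follows is a toy sketch in Python, not a description of any real system: the cutoff date, the publication dates, and the corpus itself are invented for illustration. The point is structural: a work that postdates the cutoff is not misrepresented by the snapshot; it is simply absent, and nothing in the output marks the absence.

```python
from datetime import date

# Toy sketch of the snapshot problem. The cutoff and every date
# below are hypothetical, chosen only to illustrate the mechanism.
TRAINING_CUTOFF = date(2023, 4, 1)

# What actually exists, keyed to hypothetical publication dates.
published_work = {
    "Emotional Avoidance Loop": date(2021, 6, 1),
    "Identity Collapse Cycle": date(2022, 3, 1),
    "Meaning Hierarchy System": date(2024, 1, 15),  # postdates the cutoff
}

def corpus_view(work: dict[str, date]) -> list[str]:
    """Return only the items that entered the corpus before the cutoff.

    A later work is not recorded as missing; it simply never existed
    for the system, and the returned list presents itself as complete.
    """
    return sorted(name for name, pub in work.items() if pub <= TRAINING_CUTOFF)

print(corpus_view(published_work))
# ['Emotional Avoidance Loop', 'Identity Collapse Cycle']
```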
The Psychological Architecture framework currently includes seven named structural models. The AI results referenced three or four, depending on the query. This is not a rounding error. The models not mentioned include the Meaning Hierarchy System, which was developed to address a generative gap in the Meaning domain — a gap that the earlier literature had identified but not resolved. That development is substantively important to the framework. It does not appear in AI-generated summaries because it postdates the relevant training data.
The problem is not that AI systems are unreliable. It is that they are reliable in the wrong direction: they are reliably accurate about the past and reliably incomplete about the present. For stable bodies of work — a thinker whose major contributions are decades old, a framework that has not changed significantly — this distinction may not matter much. For work that is actively developing, the distinction is structural. The AI summary does not describe the work. It describes a prior version of the work, presented as though it were current.
What Confabulation Reveals
The fabricated book title deserves closer attention. It is easy to dismiss as a hallucination — a known failure mode of large language models — but the structure of the error is more instructive than its label. The system did not generate a random string of words. It generated a plausible title, in the appropriate register, consistent with the kind of work it had accurately described elsewhere in the same response. The error was internally coherent.
This is what confabulation looks like when it emerges from a well-trained system rather than a poorly trained one. The system has learned what a book title associated with this kind of work should look like. It has learned the conventions of the genre, the register of the author, the expected scope of a scholarly publication in this area. When it encounters a gap in its knowledge — a work it suspects exists but cannot verify — it fills the gap with something structurally consistent with what it does know.
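A deliberately crude sketch makes the gap-filling logic visible. The title fragments below are invented placeholders, and no real model works this simply; but the failure shape is the same. When lookup fails, the system does not return nothing. It returns the most genre-consistent something it can assemble.

```python
import random

# Toy sketch of confabulation as gap-filling. The 'learned' patterns
# and topics below are invented for illustration only.
LEARNED_PATTERNS = ["The Architecture of {}", "{} and Its Structures", "On {}"]
LEARNED_TOPICS = ["Meaning", "Identity", "Coherence"]

VERIFIED_TITLES: set[str] = set()  # the gap: nothing verifiable on record

def recall_title(query: str) -> str:
    """Return a verified title when one exists; otherwise assemble one
    consistent with the learned conventions of the genre."""
    for title in VERIFIED_TITLES:
        if query.lower() in title.lower():
            return title
    # No verified answer: fill the gap with something plausible.
    pattern = random.choice(LEARNED_PATTERNS)
    return pattern.format(random.choice(LEARNED_TOPICS))

print(recall_title("psychological architecture"))
# e.g. 'On Coherence': internally coherent, externally false, and
# indistinguishable in register from a genuine citation.
```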
The result is a false statement that is harder to detect than an obviously false statement would be. A reader unfamiliar with the actual bibliography has no ready means of identifying the error. The title sounds right. It fits the context. It is the kind of thing this scholar might have written. The plausibility of the fabrication is itself a product of the system's accuracy in other respects — accuracy that lends credibility to the errors embedded alongside it.
Proximity and Affiliation
The institutional attribution error follows a different logic. When an AI system encounters a scholar who has given lectures, appeared in academic contexts, or whose work has been discussed in proximity to a particular institution, the system may infer an affiliation that does not exist. The inference is not random — it is based on real data about real proximity. But proximity is not affiliation, and the system does not reliably distinguish between them.
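The inference is mechanical enough to sketch. In the toy version below, the snippets, the institution name, and the threshold are all invented; what matters is that nothing in the computation distinguishes real proximity from real affiliation.

```python
# Toy sketch of a proximity error: treating co-occurrence as affiliation.
# The snippets, institution, and threshold are invented for illustration.
snippets = [
    "the scholar lectured at Example University on structural models",
    "a seminar at Example University discussed the framework",
    "an Example University reading list cited the framework",
]

def infer_affiliation(texts: list[str], institution: str,
                      threshold: int = 2) -> bool:
    """Count how often the institution co-occurs with mentions of the
    scholar and treat frequency above a threshold as affiliation."""
    co_occurrences = sum(institution in t for t in texts)
    return co_occurrences >= threshold

print(infer_affiliation(snippets, "Example University"))
# True: the proximity in the data is real; the affiliation is not.
```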
For scholars whose independence is itself a structural condition of their work — whose separation from institutional appointment is a deliberate choice with intellectual consequences — this error is not merely factual. It misrepresents the nature of the work. The Psychological Architecture framework was developed outside of institutional constraints precisely because institutional constraints would have shaped it differently. An affiliation that does not exist, attributed by an AI system drawing on proximity data, places the work in a context that alters its meaning.
This is a subtler problem than a fabricated title. A nonexistent book can be identified and corrected. A misattributed institutional context shapes how everything else in the summary is read, and the correction requires more than a simple fact check. It requires understanding why the independence matters — which is exactly the kind of contextual knowledge that a system summarizing from a fixed corpus is poorly positioned to convey.
The Deeper Limitation
Beyond the specific errors, the AI results share a structural limitation that the errors only partially reveal. The summaries describe the Psychological Architecture framework as a set of named constructs and a set of domain categories. They do not describe it as a system — as a set of relationships between constructs, a logic of interaction between domains, a set of implications that follow from the architecture as a whole rather than from any individual part.
This is the flattest version of the framework: a list of components without the connective tissue that makes the components meaningful. It is the version that emerges when a system extracts named entities from a body of text without holding the argument that organizes those entities. The constructs appear, correctly named; the argument that gives them their explanatory power does not appear at all.
A framework is not a catalogue. The Psychological Architecture framework argues that the four domains — Mind, Emotion, Identity, Meaning — are interdependent in specific ways, that disruption in one produces predictable effects in the others, that human coherence depends on the integration of all four rather than the optimization of any single one. This argument is the framework. The named constructs are its instruments. What the AI returns is the instruments without the argument — a toolkit without the theory of what the toolkit is for.
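The difference between a catalogue and a system can also be shown in miniature. In the sketch below, the four domain names come from the framework, but the dependency edges are illustrative placeholders, not the framework's actual claims about inter-domain dynamics. The sketch supports what a flat list structurally cannot: a question about propagation.

```python
# What entity extraction returns: a catalogue of names.
catalogue = ["Mind", "Emotion", "Identity", "Meaning"]

# What the framework argues: domains plus connective tissue. These
# particular edges are placeholders invented for illustration.
dependencies = {
    "Mind": ["Emotion"],
    "Emotion": ["Identity"],
    "Identity": ["Meaning"],
    "Meaning": ["Mind"],
}

def propagate(start: str, graph: dict[str, list[str]]) -> set[str]:
    """Trace which other domains a disruption in `start` eventually reaches."""
    reached, frontier = {start}, [start]
    while frontier:
        for nxt in graph.get(frontier.pop(), []):
            if nxt not in reached:
                reached.add(nxt)
                frontier.append(nxt)
    return reached - {start}

print(propagate("Emotion", dependencies))
# {'Identity', 'Meaning', 'Mind'}: an implication the catalogue alone
# cannot express, because a flat list supports no such question.
```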
What This Means for Distributed Knowledge
AI-generated summaries are increasingly the first point of contact between a reader and a body of work. A search no longer leads directly to primary material. It leads first to a synthesized representation of that material, produced by a system that extracts and recombines what it has available.
This alters the sequence through which knowledge is encountered. The reader does not begin with the work and move toward interpretation. The reader begins with an interpretation and moves, if at all, toward the work. What is encountered first establishes the frame within which everything that follows will be read.
For bodies of work that are stable and widely distributed, this mediation may introduce only minor distortion. For work that is developing, or that circulates outside institutional publication systems, the effect is more significant. The system's representation is limited to what has been captured, and what has been captured is often incomplete. Developments that occur after that capture do not appear. Arguments that depend on relational structure are flattened into named components. Gaps are filled with plausible inferences.
The result is not simply an abbreviated account of the work. It is a restructured version of it, organized according to the logic of summarization rather than the logic of the work itself.
Reading the Results Structurally
There is a way to read AI-generated summaries that is more useful than simply checking them for accuracy. The errors, omissions, and structural flattening in such summaries function as a kind of diagnostic — not of the work itself, but of how the work has traveled. What a system gets right indicates what has been written about clearly, in sources the system was able to index. What a system gets wrong, or leaves out, indicates where the public record is thin, where the argument has not been rendered in ways the system can capture, or where development has outpaced the available documentation.
Examined in this light, the AI results described above are useful. They confirm that the foundational structure of the Psychological Architecture framework has penetrated the public record sufficiently to be returned accurately in broad outline. They identify, through their omissions, which developments are not yet adequately represented in indexable sources. They reveal, through their confabulations, which elements of the work are recognizable enough to generate plausible imitations.
None of this makes the errors unimportant. A fabricated title attributed to a scholar is a false statement in wide circulation. An institutional attribution that misrepresents the conditions of the work distorts its meaning for every reader who encounters it without access to the correction. The practical consequences of AI inaccuracy are real, and they fall disproportionately on scholars who lack the institutional infrastructure to push corrections into the systems that would propagate them.
The Framework and Its Summary
The Psychological Architecture framework is a structural account of how human experience maintains coherence and why it fails to do so. It argues that coherence depends on the integration of four interdependent domains, and that disruption in any one of them propagates through the others in predictable ways. This argument is still being worked out. New structural models are still being developed. The monograph that formalizes the framework is in its fourth version. The work is not complete.
What the AI returns is a summary of a prior version of this work, presented as a current description. The summary is not hostile to the work. In some respects it is accurate, even perceptive. But it cannot hold the argument that organizes the framework, cannot represent the developments that have occurred since its training data was collected, and cannot correct for the proximity errors and confabulations that emerge when a system fills gaps in its knowledge with plausible inferences.
The machine returns what it has. What it has is a snapshot.
For bodies of work that no longer change, this may be sufficient. For work that continues to develop, it is a structural distortion. The reader encounters not the work, but a prior version of it, stabilized and presented as current. The difference is not visible from within the summary itself.
The work continues beyond the frame. The question is no longer whether the frame is accurate. It is whether a system organized around fixed representations can faithfully mediate what is still in motion.