Impoverished Knowing
What traps prevent us from interpreting the world in order to change it?
Robert Musil’s novel, The Man Without Qualities, is a magisterial tour of modes of life in the twilight of the Austro-Hungarian Empire. It is composed almost entirely of philosophical asides. In one such moment, Musil imagines Plato appearing in a modern newsroom. At first, the philosopher is overwhelmed by the possibility of the newspaper as an engine for the refinement and proliferation of knowledge. Plato’s excitement is then mirrored by the newspaper editor, who showers the philosopher with lucrative offers: commissions for “philosophical travel pieces,” short stories, and treatments for turning his work into films. However, as soon as his return has “ceased to be news” and Plato begins actually trying to implement his ideas instead of merely circulating them as content, the editor relegates him to producing “a nice little column on the subject now and then for the Life and Leisure section (but in the easiest and most lively style possible, not heavy: remember the readers).” Soon the solicitations for content dry up as less “outdated” and more newsworthy subjects arise. “For some reason,” Musil acidly observes, “newspapers are not the laboratories and experimental stations of the mind that they could be, to the public’s great benefit, but usually only its warehouses and stock exchanges.”1
Modern society, Musil invites us to consider, readily metabolizes all content, even the weightiest of philosophical knowledge, into a range of easily digestible and commodifiable forms. It is especially at the moment that this knowledge propels one into action that it is hurriedly brushed out of the frame. In our own context of imperial decline, this voracious subsumption of information has reached spectacular new heights through the ceaseless feed of content on our phones. In ways that Musil could already foresee, it has precipitated a pervasive climate of nihilism which is also a kind of epistemological impasse: We already know how everything will be metabolized as information as it happens, so why act? What new information could possibly disrupt this endlessly entertaining, closed, spectacular loop? What conditions would enable us to grasp that knowledge?
Elsewhere I have written about the series Chernobyl as a faithful representation of this sort of claustrophobic epistemological horror. The series asks: in a context where one’s political and social reality is so heavily circumscribed as to become ineffable, where every new event is sublimated through a careful choreography of rhetorical maneuvers that preserves the status quo, what happens when a really existing crisis (such as an exposed nuclear reactor core) undermines the ordering logic of our shared mythos? In the show, the vast bureaucratic apparatus of the Soviet Union is shown to be a myth-preserving machine: it fosters and maintains the illusion that everything is well-functioning, even as it is horrifically collapsing before our eyes. Ultimately, with the help of intrepid, truth-wielding scientists, the bureaucratic machine can no longer preserve the symbolic order and the Lacanian Real breaks through, ushering in the demise of the Soviet Union.
Putting aside an easy reading of the show as a belated (and historically inaccurate) critique of Soviet Socialism, the series can be more productively read as an allegory for our own epistemological impasse. Even as we are confronted with the catastrophes of climate crisis, the genocide of Palestinians, ICE concentration camps, and racist police terror, few of us have seemed capable of radically altering the trajectories of our lives in ways that are proportional to the scale of these crises. Even when we are finally driven to express dissent, it generally takes minimally disruptive forms, involving relatively little risk, with predictably elusive consequences. Reflecting on this disjuncture in a piece written years ago, I was left wondering:
If things were really bad enough that the only reasonable solution was radical personal or revolutionary change of some kind, would we even be capable of knowing that, so bound up, as we all are, in the reproduction of the everyday? What new information could possibly make that conclusion “knowable”? Liberal media loves to tempt us with lurid White House leaks and exposés, but what information are we actually waiting for that we don’t already know?2
One way to begin answering these questions is to map the ways in which knowledge is metabolized in our own context. If, for Musil, even Plato would be readily subsumed into the ceaseless churning of print capitalism, and in Chernobyl, nothing could escape the disinformation apparatus of Soviet bureaucracy, what epistemological traps prevent us from producing or consuming knowledge in a way that might incite us to live differently?
Of course, the most immediate cause célèbre regarding the fate of human knowledge is AI. As Yuk Hui notes, much of the discourse surrounding AI and the obsolescence of human intelligence has rehearsed older debates about the status of philosophical knowledge in contrast with practical reason.3 Philosophers at least as far back as Kant have distinguished between the “higher order,” though less remunerative, work of attempting to rationally discern questions pertaining to the common good and the “lower order,” but more lucrative, application of philosophical tools to mundane practical issues like how to get out of legal disputes. As in this “conflict of the faculties” of the eighteenth century, AI is prompting us to defend knowledge that has little immediate monetary value or else cannot be algorithmically extruded. Calling the effluvium of AI “knowledge” seems like a misnomer anyway because, at least thus far, AI remains an elaborate version of the predictive text on your phone or a glorified search engine. The reservoir of “knowledge” to which it grants access is simply a pastiche of extant human invention. While AI is predictable and mediocre by design, human intelligence is counterintuitive and playful, leaping to imaginative conclusions based on even minimal inputs.
In addition to differing in quality, the “knowledge” of AI and human cognition differ in kind. An insight of Hegel’s Phenomenology of Spirit, which found later articulation in the “Enactivism” of Eleanor Rosch et al., is that knowledge is not simply content passively received into the empty vessel of our brains; rather, it is something realized only contextually through creative use and adaptation. Knowledge, in other words, is not like so much inert “data” stored on a hard drive, but is instead something much more vital and dynamic. The active recall or synthesis of external stimuli that we call “knowledge” is always irrevocably in dialogue with one’s social setting and historical context. To make sense of the world is necessarily to interpolate ourselves within these forces, which appear external to us but of which, in fact, we are a part. When we “know” something, it is in terms that are intelligible in a given place and time among certain people. “Knowing” brings us into contact with these forces, which in turn shape us. “Knowing,” in this active sense, is not a static thing, but rather a process of becoming. AI and its hawkers, by contrast, treat knowledge as a discrete commodity external to us, something one either has or does not. Speaking at a BlackRock Infrastructure Summit, Sam Altman told prospective investors that in the future, information will be something paid for on a subscription model, like water or electricity.4
The grievous category-error of treating AI as “knowledge” threatens to undermine nothing less than what it means to be a human being: someone who is more or less actively aware of, and in dialogue with, the shifting terrain of their historical situation.5 In his recent account of the grim forms of life cohering around AI in the Bay Area, Sam Kriss describes start-up bros raking in millions in venture capital on the promise of using AI to provide real-time responses to daily activities (job interviews, exams, dates) as they occur. This sort of technological prosthesis raises the question: even if we could outsource the basic work of cognizing the world to machines, what would be left for us to do?6 Kriss usefully stresses the curious paradox that, while tech bros fetishize alpha male and “agentic” behavior, the technology they pursue is designed very specifically to deny human agency. Hegel would not be surprised to learn that, insofar as Silicon Valley has a philosophical interest, it is in stoicism, which for him was a servile form of thinking: a retreat to the inner sanctum of the mind to shield oneself from pervasive conditions of external unfreedom.7
The “knowledge” that AI’s content-mongers are peddling is really a kind of unknowledge insofar as it robs us of the deliberative labor through which we shape ourselves as human beings in relation to one another. However, this insight needn’t mean the complete abandonment of the technology. As Aaron Benanav argues in his Automation and the Future of Work, if we achieved a socialist society, technologies like automation or AI could save us from having to do work that we did not feel like doing because toiling in these jobs would no longer be the only means through which we would access food, shelter, healthcare, sociality, and a dignified life. No one should have to write the copy on the back of potato chip bags, unless they want to.
See Part II Next Week!
Lynchian, ambient jazz from Lemon Quartet:
1. Robert Musil, The Man Without Qualities, New York: Vintage, 1996, 352.
2. This analysis appeared in Flatland’s Old Future’s Almanac, in an essay I wrote titled “As Long as the Future is Unthinkable, Everything is Possible: An Invitation.” This is also a question that I address, in a different way, in “Lurid Attachments.”
5. Witness the slack-jawed vacuity of tech billionaire Marc Andreessen celebrating his own lack of interiority or historicity: https://www.businessinsider.com/marc-andreessen-zero-introspection-debate-2026-3