On the Understanding
Recent discussions about Artificial General Intelligence (AGI) suggest that we may be on the verge of creating machines that are “generally intelligent.” However, there are two main objections:
- AGI cannot truly understand
- AGI cannot create any new knowledge
In this post, I will argue that these two issues are closely related. To do so, I will draw upon Arthur Schopenhauer’s philosophical distinction between Understanding and Reason. I will also show how Understanding, in Schopenhauer’s view, differs fundamentally from abstract Reasoning and why this matters when assessing AGI.
1. Schopenhauer’s Concept of Understanding
Understanding is the same in all animals and in all men; it has everywhere the same simple form—knowledge of causality, transition from effect to cause, and from cause to effect, nothing more. - Arthur Schopenhauer
According to Schopenhauer, Understanding is fundamentally the knowledge of cause and effect. This faculty arose in living creatures with the development of the first sense organs, allowing raw sensory data to become perceptions. When an organism’s senses are stimulated (the “effect”), the mind infers a corresponding “cause” (an external object or event). This direct, intuitive leap from an observed effect to its underlying cause is what Schopenhauer calls direct Understanding. Perception is the end result of this process of causal inference, and he considers it one of the forms of Understanding (the lowest form).
Humans also have indirect Understanding, which analyzes causal connections not just between the world and one’s own body (e.g., what is causing this particular sensation?) but between external objects themselves. This gave rise to scientific inquiry and all systematic exploration of how the world works.
1.1. Causality - Changes of States, Not Objects
Digging a little deeper into his philosophy of causality, we see that for Schopenhauer causality is the “director of all changes”: what causality deals with is changes of states. In his view it is therefore wrong to say that a single Object A is the cause of Object B when some change occurs. On closer inspection, state A (Object A plus its preconditions) is the cause of state B once the change happens. We will not go much further into his notion of causality here, but it closely matches the concept of a Finite State Machine.
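The state-machine analogy can be made concrete with a minimal sketch. Everything here (the condition names, the `press_button` change) is an illustrative assumption, not Schopenhauer’s own formalism; the point is only that a *total state*, not a single object, determines whether a change produces its effect:

```python
# Toy sketch: causality as transitions between whole states,
# in the spirit of a finite state machine.

State = frozenset  # a state is the complete set of conditions, not one object

TRANSITIONS = {
    # (current total state, change) -> resulting total state
    (State({"machine_on", "water_filled", "cup_placed"}), "press_button"):
        State({"machine_on", "cup_placed", "cup_filled"}),
}

def apply_change(state, change):
    """The change only causes the effect if the *whole* state matches."""
    return TRANSITIONS.get((state, change), state)  # no match: nothing happens

ready = State({"machine_on", "water_filled", "cup_placed"})
no_water = State({"machine_on", "cup_placed"})

print(apply_change(ready, "press_button"))     # cup_filled appears
print(apply_change(no_water, "press_button"))  # same button press, no effect
```

The same button press (the “object” we naively call the cause) produces the effect only from the first state; remove one precondition and the transition simply does not fire.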
2. How Understanding Differs from Reasoning
Schopenhauer draws a strict distinction between Understanding and Reasoning:
- Understanding: The intuitive, causal inference that we share with animals—though humans can do it in more sophisticated ways (indirectly).
- Reasoning: The ability to form abstract concepts from our perceptions. This is unique to humans. Through Reason, we create general ideas, words, and language; we store, combine, and communicate knowledge in a symbolic form.
Although Reason is a remarkable leap—enabling language, the passage of knowledge across generations, and much of civilization—Schopenhauer insists it is secondary to Understanding. Reason depends on the raw material that Understanding provides through perception of the world.
“No entirely original and new knowledge will result from abstract reasoning alone; that is to say, no knowledge whose material neither lay already in perception nor was drawn from self-consciousness.” - Arthur Schopenhauer
In other words, you cannot generate brand-new insights purely through logical manipulation of concepts. At some level, the knowledge must be grounded in actual perception—real-world cause and effect.
3. Why Perception Trumps Conception
Many people might think that pure, logical Reasoning is more advanced or more “human.” Yet Schopenhauer strongly defends the primary role of perceptual, immediate causal knowledge. Abstract concepts, he argues, are merely tools to organize or communicate those direct insights.
“Every simpleton has Reason—give him the premises, and he will draw the conclusion; whereas primary, intuitive knowledge is supplied by the Understanding.” - Arthur Schopenhauer
4. Creating Knowledge: Two Transitions of Causality
4.1. Transition From Effect to Unknown Cause: The Realm of Science
Scientists study the world by noticing certain effects (e.g., a strange reading on a sensor) and inferring a cause. This includes forming hypotheses, designing experiments, and uncovering new phenomena. The Understanding is doing the heavy lifting, supplying intuitive leaps or sudden insights (“flashes of insight”) that unveil new causal connections.
4.2. Transition From Known Cause to Desired Effect: The Realm of Engineering
Engineers, tinkerers, and inventors start with a known set of causes (materials, mechanisms, processes) and strive to produce a specific outcome (the “desired effect”). If existing knowledge does not apply perfectly (the state that once produced the effect is not the same), they resort to trial and error—hands-on tinkering that again relies on seeing and feeling what works, adjusting on the fly.
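The two transitions can be caricatured as two directions of search over one causal model. The lookup table and the cause/effect names below are hypothetical stand-ins for real causal knowledge; the sketch only shows that science and engineering traverse the same relation in opposite directions:

```python
# Assumed toy causal model: known cause -> known effect.
causal_model = {
    "flip_switch": "light_on",
    "press_button": "coffee_poured",
    "open_valve": "water_flows",
}

def infer_cause(observed_effect):
    """Science: from an observed effect back to candidate causes."""
    return [c for c, e in causal_model.items() if e == observed_effect]

def achieve_effect(desired_effect):
    """Engineering: from known causes forward to a desired effect."""
    for cause, effect in causal_model.items():
        if effect == desired_effect:
            return cause
    return None  # no known cause: time for trial and error

print(infer_cause("coffee_poured"))  # ['press_button']
print(achieve_effect("light_on"))    # flip_switch
```

When `achieve_effect` returns `None`, the stored (abstract) knowledge has run out, which is exactly where the tinkering described above begins.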
5. How This Relates to AGI
AGI systems today often excel at manipulating abstract representations (language models, data patterns, etc.). However, Schopenhauer’s account suggests that true Understanding—the ability to infer genuinely new causal knowledge—cannot arise solely through symbolic processing or pattern recognition.
- AGI and Understanding: Many AI methods rely on large-scale pattern recognition and statistical inference. They do not (yet) ground these patterns in direct, causal engagement with the world in the same way humans and animals do.
- AGI and Creating New Knowledge: Without that grounding in real sensory, hands-on experimentation (and the moment-to-moment “flash of insight”) which is the end result of Understanding, an AI might be unable to spontaneously discover truly novel causal links. The process of Understanding, in which the mind infers causes, is by definition the creation of new knowledge, because it establishes causal connections, and objects, where they did not exist before.
5.1. The Coffee Test by Steve Wozniak
An example of the 2nd transition (From Known Cause to Desired Effect) appears in Steve Wozniak’s famous “coffee test”: once an AGI learns how to make coffee in one home (the 1st transition), it should be able to walk into a stranger’s home, find the kitchen, and make a cup of coffee on its own (the 2nd transition).
How do humans learn to make coffee in the first place? Primarily through the 1st transition: you press a button, register a change (the machine pours coffee) as an effect, and your brain directly infers that your button press (a change) was the cause. This direct experimentation is how “new” knowledge is formed—transitioning from a known effect to an unknown cause and pinning the right cause to the effect. You can then “extend the range of its applicability” by generalising that knowledge of the causal state A that produced state B. You store it in abstract form (verbal or written), and it then becomes part of the realm of Reasoning. Think of an IKEA manual as a perfect example of that.
The same idea is conveyed in this [Twitter thread](https://x.com/fchollet/status/1736483628971082111) by François Chollet.
But we have two issues here:
- The state that produced the effect may not be the same in the new situation, so you need tinkering, or trial and error, not just Reasoning. Schopenhauer points out that causality depends on a complete set of conditions (a total state) rather than on single objects. If even one of those conditions changes in a new situation, the same effect might not happen. If any one of these elements is missing or altered (e.g., a different machine model might have unfamiliar buttons or unclear labels, a filter or pod system, or no filter in place), you don’t truly have the same state that produced the effect before. Simply “knowing the instructions” (Reason) isn’t enough—you often have to tinker (adjust, observe, repeat) to pinpoint and fix whichever condition is off in the new setup.
- Saying that AGI can apply that knowledge to novel situations blurs the fact that what the AGI is actually doing is applying EXISTING knowledge (created through causal inference) to a novel situation. The only moment when new knowledge is created is when someone observes a cause-and-effect relation through Understanding. In other words, new knowledge is not the same as the application of that knowledge.
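The tinkering loop from the first point above can be sketched as a search over perturbed conditions. Everything here is a hypothetical stand-in for hands-on experimentation: the agent cannot read the required total state directly, it can only act and observe whether the effect occurs.

```python
# The unknown total state required for the effect (hidden from the agent).
required = {"machine_on", "water_filled", "filter_in_place"}

def try_effect(state):
    """Stand-in for actually pressing the button and observing."""
    return required <= state  # effect occurs only if all conditions hold

def tinker(state, adjustments):
    """Try one adjustment at a time until the effect appears."""
    for fix in adjustments:
        if try_effect(state | {fix}):
            return fix  # found the condition that was off
        # observe that nothing happened, adjust, repeat
    return None

# New machine: everything looks right, but the water tank is empty.
current = {"machine_on", "filter_in_place"}
print(tinker(current, ["check_pods", "water_filled", "replace_cup"]))
```

No amount of re-reading the instructions (Reason) shortens this loop; only acting on the world and registering the effect (Understanding) reveals which condition was missing.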
In both science and engineering, genuinely new knowledge requires more than just abstract conceptual manipulation. It demands direct contact with the world’s causal structure.
It is possible that future research in robotics, embodied cognition, and continual learning will help AIs develop something closer to Schopenhauer’s notion of Understanding. But for now, these remain open challenges.
6. Conclusion
By grounding these philosophical insights in practical examples like making coffee, assembling IKEA furniture, or the process of discovery in science, we see that true knowledge—true causal Understanding—always arises from direct, perceptual interaction with the world. For now, AGI systems lack that grounding, which raises fundamental questions about whether they can ever genuinely understand or create knowledge in the way humans do.
Objects are first of all objects of perception, not of thought, and all knowledge of objects is originally and in itself perception. Perception, however, is by no means mere sensation, but the understanding is already active in it. The thought, which is added only in the case of men, not in the case of the brutes, is mere abstraction from perception, gives no fundamentally new knowledge, does not itself establish objects which were not before, but merely changes the form of the knowledge already won through perception, makes it abstract knowledge in concepts, whereby its concrete or perceptible character is lost, but, on the other hand, combination of it becomes possible, which immeasurably extends the range of its applicability. - Arthur Schopenhauer