[This post is not explicitly about AI. But I think it is interesting, and will be a useful building block in future work.]
When I was finishing my PhD thesis, there came an important moment where I had to pick an inspirational quote to put at the start. This was a big deal to me at the time — impressive books often carry impressive quotes, so aping the practice felt like being a proper intellectual. After a little thinking, though, I got it into my head that the really clever thing to do would be to subvert the concept and pick something silly. So I ended up using this from The House At Pooh Corner by A. A. Milne:
When you are a Bear of Very Little Brain, and you Think of Things, you find sometimes that a Thing which seemed very Thingish inside you is quite different when it gets out into the open and has other people looking at it.
It felt very relevant, in the thick of thesis writing, that the quote was both self-deprecating and about the difficulty of effective communication. And actually, the more I thought about it, the more I realised I hadn’t picked something silly after all. Rather, this passage brilliantly expresses a profound fact about meaning [1].
The illusion of transparency
We’ve all had that feeling, I think, of not being able to find the right word — that sense of grasping for something just out of reach. Of looking for the exact term that will capture the Thing that’s so Thingish in our mind and communicate it with complete fidelity. Yet, often, even when we find what we thought we wanted, it still doesn’t quite do the trick.
The fundamental problem is that language is a compression of whatever is going on in our minds. Each word or phrase is located in a personal web of meaning, which is high dimensional and unique to ourselves. So when we say something, we never actually pass on the full meaning we intend. There’s a whole world of context that stays put in our brain, getting approximated away in our public statement.
This is a ubiquitous source of communication failures. I’ve heard it referred to before as the ‘illusion of transparency’:
The illusion of transparency is the misleading impression that your words convey more to others than they really do. Words are a means of communication, but they don't in themselves contain meaning. The word apple is just five letters, two syllables. I use it to refer to a concept and its associations in my mind, under the reasonable assumption that it refers to a similar concept and group of associations in your mind; this is the only power words have, great though it may be. Unfortunately, it's easy to lose track of this fact, think as if your words have meanings inherently encoded in them, leading to a tendency to systematically overestimate the effectiveness of communication.
As I’m writing this, I have to consider whether each word or phrase I put on the page will have a similar kind of meaning for you as it does for me. I know from experience that this is difficult to get right [2]. My task is not to create a standalone document, but an interactive one — to unlock for you, in your mind using your concepts and associations, the meaning I want to convey. Note that, if we do not share enough concepts, this will be impossible.
Words are public, meaning is private
I have a personal spin on this idea called the Iceberg Theory of Meaning.
As we’ve been discussing, when you say something to someone else, you don’t just want to pass on a bunch of words — you want to pass on a meaning. Let’s imagine the words in your statement as the top section of an iceberg. Lying above the water, these are public and agreed on by all observers. Meaning, though, lives beneath the waves. It lies in the much larger bottom section, corresponding to the rich collection of connected concepts and associations in your mind. These are private and will be different for each person.
The ocean, in this analogy, is your world model — the totality of your concepts and associations. Meaning can be considered the subset that gets activated by the words in the statement.

We can see from this how meaning is intrinsically subjective. The words may be the same for both of you, but as you have different brains, wired up in different ways, the meanings they activate will not be.
Looking beneath the surface
There is however a bit more we need to add. Words are not the only public information that accompanies a statement, and are consequently not the only things we condition on when constructing a meaning. There are two more pieces which belong in the top section of the iceberg, available to all observers:
Environmental details, such as the medium, location, time of day, etc. [3]
The identities of the speaker and the audience.
The first point is fairly straightforward. Consider how a set of words spoken in a work context may imply a very different meaning to the same words said at a party. Just like before, we should note that the contextual information is itself meaningless — it is just a prompt that will activate a meaning in your mind.
The second point is more subtle. While identities may be public, and in that sense just another set of environmental variables, they have special significance as markers of the minds we are trying to exchange meaning with. They frame an active interpretative process, where we guess at each other’s world models and how they will shape the meaning of the statement.
As a speaker, this looks like trying to tailor our words to work with the concepts we believe the listener to have. This practice is most obvious when you consider the process of teaching.
As a listener, this looks like using our knowledge of the speaker to prime which concepts and associations are most likely to be relevant. If Alice is talking about ‘Justice’, we may know from past experience that she means something very different to what Bob would if he used the same words.
In both cases, we try to guess what meaning has or will form in the other person’s mind, beneath the surface of the water.
The iceberg in action
Sometimes, people make these inferences about each other very badly, leading to much wailing and gnashing of teeth. For instance:
People interpret political comments from their opponents in extreme ways, attaching radically different meanings to those intended. A call to ‘tax the rich’, meant as a pragmatic suggestion to help balance the budget, might be interpreted as a jealous attack on wealth creators. Or, a plea to ‘prosecute shoplifters’ may be heard as contempt for the poor rather than concern about a sharp increase in incidents.
People make comments that rely on unstated assumptions their listeners do not share, leading to much confusion. You see this in debates between people who possess conflicting sets of facts about the same topic, often without realising it. Perhaps one person thinks politician X is deliberately trying to run a government programme into the ground, whereas the other thinks they are working hard to save it. This happens in ethical conversations too, where people treat underlying beliefs, e.g. ‘oil is bad’, as unstated primitives in broader arguments that collapse on contact with a different set of primitives.
People have deeply unproductive arguments that use big, ambiguous terms in very different, yet load-bearing ways. An argument about whether capitalism is good will not usually go well if one person takes it to mean ‘exploiting the poor’ while the other thinks it refers to ‘efficient markets generating wealth’. This often happens when one person is trying to decouple a narrow meaning of a term from a set of broader meanings, in order to discuss a specific aspect, but for the other person this simply isn’t possible — for them, the meanings are all too integrated.
These kinds of communication failures are commonplace and under-acknowledged. Even if you understand they are possible, it is still hard to overcome them. Modelling others is difficult, particularly those with very different backgrounds to you. In fact, due to a kind of radical ignorance, doing so is often intractable. You can’t work to incorporate a different perspective — a different source of meaning — into your own if you don’t know it exists in the first place. Instead, you will just carry on arguing, deeply frustrated at the other person’s inability to understand what you mean.
If you have any feedback, please leave a comment. Or, if you wish to give it anonymously, fill out my feedback form. Thanks!
[1] It also led me to rediscover Winnie-the-Pooh, for which I am grateful. Not only are the Pooh books wickedly funny, but they possess joy and kindness to a degree I forgot for many years as a young adult. If that doesn’t qualify them as impressive and profound, I don’t know what would.
[2] Indeed, the first long piece I published I ended up taking down and partially rewriting, as it was clear many people had drawn quite different conclusions from what I meant. My thesis was that superintelligent AI, while able to outcompete humans, will nevertheless make mistakes. But most commenters seemed to think I meant the mistake-making implied humans will win any conflict. I traced this error to certain phrases in my introduction, which I think primed some readers to interpret the whole piece from this angle.
[3] Strictly speaking, the speaker and listener might observe different things, so some of this information is hidden.