The Hard Problem of Consciousness for People Who Get Knocked Unconscious

First off, I got trolled into writing this piece by Phrost. Here’s what did it:


So since he asked the question, here I am, and this is why:

When it comes to the Philosophy of Mind there are many different takes and camps. I am what is known in the literature as a “substance dualist”. Briefly, a substance dualist believes there are two broad categories of “things” in the world, in this case material stuff made up of physical matter and these non-material things called “minds.” As I said, there are many, many camps here, but we’ll principally be discussing monism (one thing) and dualism. Within monism, there are two kinds, but only one is worth discussing: Materialism. (The other is known as empirical idealism and Kant so thoroughly refuted it in the Critique of Pure Reason almost no one is a monistic idealist these days.)

Materialism also has a lot of nuance in it, but the basic thesis is that everything that exists or can exist is ultimately material, including “minds,” whatever they are. Some materialists are “eliminative,” meaning they think what we call “minds” are really other things, like the brain.
Others might be epiphenomenalists or supervenient materialists, who think that minds are something that arises when you have a special or correct combination of matter. Then there are the panpsychists, who hold that EVERYTHING possesses consciousness to a greater or lesser degree. But they’re weird as shit, so we won’t be discussing them right now, just what I’d call minimal material theorists, or those who think that whatever the mind is, it is not a separate substance.

Cast-iron pan-psychism

A lot of people adopt the following model for consciousness: the brain is the seat of consciousness. It is the “hardware” of the human computer, and the “mind” things it does are part of its “software.” This is particularly attractive to AI researchers and cognitive scientists/neuroscientists as it flatters their preconceptions about what minds and brains are supposed to do. But I hope to show this intuition is wrong.

So, for reductio, let’s assume that there are no minds and that what we call “minds” is just things the brain does. If this is true, then any consistent and coherent theory of mind must account through a brain-process for what we experience as a mental-process. In other words, there must be at least a coherent explanation of how a brain could do what a mind does. Note that I am not requiring here a full and complete explanation, merely a theoretical one that COULD account for it. We call this the “hard” problem of consciousness: explaining the things that make minds special and distinct.

For example, there may be “easy” problems of consciousness, such as understanding how perception or memory happens, or pattern recognition, or information processing. We know how the brain does those, and it is a more or less mechanical/material process.
But the hard problems are explaining how we have three things: (1) qualia; (2) intentionality; and (3) semantic content. These three things will each require their own lengthy part of this thread, so buckle up.


First, what are qualia? A “quale” is a “seeming” of an experience. When I see something green, we can mechanistically explain light refraction patterns, retinal sensing, ocular processing in the brain, etc. and so on. But none of that captures what it is like to see green.
The “likeness” about qualia is a problem of subjectivity. That is, we know, as thinking subjects, that there is a likeness about our experience. Maybe when I see green, I am calmed, because it’s my favorite color. Maybe you think it’s tacky and get annoyed. The fact is, both of us can look at the same green thing and take away different seemings from it; it appears in consciousness to you differently than me. If we were simple rote machines, that would seem to be impossible.

Nagel’s influential book

Thomas Nagel took up this question in his essay “What Is It Like to Be a Bat?” We can imagine that we could know every physical fact about bats from biology, but none of us would understand what bat-hood is actually like because we’re not bats. Furthermore, even if you transplanted human consciousness into a bat, all we would know is what it is like to have a bat-body, not to see, taste, experience, or think about life in bat-like ways.

Another famous thought experiment that shows the insufficiency of materialism to explain qualia is Frank Jackson’s Mary argument. Let’s say you take a child, named Mary, and lock her in a room with only black and white things. But you teach her every physical fact about color. She knows that red objects are those which reflect light at certain wavelengths. She could even measure that with appropriate tools. But until she *sees* something red, she doesn’t know what red is.

That’s because there’s something about seeing a red thing that is unique to the individual, something transmitted only to a minded thing outside of the brute physical facts: a quale. Now let’s turn to how that factors into our computational model of the brain. Under that model, we might make a statement like “pain is what happens when C-fibers are activated.” And we know that’s true from neuroscience. We can observe a brain under an fMRI and see that when C-fibers are firing, the subject should be experiencing pain. But if we ask the subject, we may get wildly varied responses based on pain tolerance, whether the person is kinky, etc.

There’s something about the subjective experience of pain not captured in the firing of C-fibers, and that is “what the experience of pain is like to that unique subject.” The item picked out by that term is a quale, and there’s no physical equivalent. No one has ever located, or will ever locate, a quale under a microscope. It’s not the sort of thing one should expect to find there. It exists purely in the mind. It is a mental thing. But if it exists, then at least one non-material thing exists.

OBJECTION: but it still depends on biology! No one who doesn’t have a brain and C-fibers experiences pain, so pain is just something another part of the brain reports to the conscious subject to let it know it’s in pain. ANSWER: while it is true that brains and C-fibers appear necessary for minds to do what they do, there’s no indication it is necessarily so; parallel evolution of bodies, brains, and minds doesn’t imply that is the way it ALWAYS has to be.

FURTHER OBJECTION: we might consider the brain super complex, and with multiple systems interacting, and therefore the subjective experience of pain arises only relationally between those parts; it doesn’t belong to a totally separate substance. ANSWER: this could be true, but you’re just kicking the hard-problem can down the road. Now you have to explain “in what” that relational property is represented. For example, @phrost brought up a relational thing, “society,” which only exists with groups of humans. But “society,” though relational, is represented in ideo-material relations of people throughout history. As an ontological matter, we know those things exist and we can predicate “society” upon things we otherwise know to exist.

If we want to predicate “the material equivalent of subjectivity” on something, we need to know what that is. “Ultra-super-complex brain structures we haven’t discovered yet” is wishful, optimistic thinking. It’s an article of materialist/naturalist faith. That is what I wish to avoid by being a dualist; I know, from my own experience and from interacting with other people, that we all experience subjectivity. I’ve never met a philosophical zombie, nor do I think I ever will. From that brute experiential fact, I can infer that we all possess qualia and that these qualia do not appear to inhere in any physical substance; therefore, I must predicate them on an irreducible simple known as “the mind.” Provisionally, therefore, I must grant that minds can do things matter cannot.

OBJECTION: how then do we explain causal interaction between the mind and the material world? This is the epiphenomenon objection. The answer, suggested to me by my dearly departed friend Erik, is “causally.” When I, in an earlier materialist phase of life, suggested that this was a *bad* answer because we don’t have a model of mental causation, he corrected me. We don’t have *any* coherent model of causation. Think about it: can you define “cause” and “effect” without reference to the terms “cause” or “effect?” The notion of causality itself is circular. As Hume showed, all we can observe is constant conjunction.

As Kant argued, causality itself is one of the basic categories of judgment we impose upon experience to make it intelligible. But none of these tell us what causality is, just how we might recognize the causal relation. That being the case, what real objection can the materialist lodge against mental -> material or material -> mental causation? None. That we cannot fully explain how changes in one cause changes in the other is inconsequential. All we need to know is that mental events have material causative power, and vice versa, to say that minds and brains do appear to interact, even if we do not understand all of the processes yet. I am not suggesting we will find a “mind messenger particle” or a noematon or something. I am suggesting that whatever the causally-interactive mechanism is, it is no more “spooky” than any other causal model.


Inside St. Paul’s cathedral

Next, let’s look at “intentionality.” Following Husserl, “intentionality” is the about-ness of our thoughts. For example, if I think about seeing St. Paul’s Cathedral, I have in my mind the object of my thoughts: St. Paul’s Cathedral. If I go and visit it, I see the same thing.

Despite both mental acts (thinking about, seeing) having the same object, there is a difference: in one, the actual physical cathedral is before me. For Husserl, this didn’t matter; both “intuitions” (whether thought or perceptual) INTENDED the same object. If the object is present-at-hand, that intuition is called “fulfilled.” If the object is not present-at-hand, it is unfulfilled. But both of them intend the same object: thus, intentionality. This was a key feature of Husserl’s theory of mind, because all cognitive acts are INTENTIONAL. That is, they all possess an intention, even acts of pure fantasy and imagination. This intention is not equivalent to the thing itself, or else all cognitive acts would require presence-at-hand. Thus, the intended object is something MENTAL.

OBJECTION: but when you remember something, or imagine it, you’re just piecing together something in your mind from what you already know, so the memories themselves could be stored physically.

ANSWER: while true, show me in the brain where my memories of St. Paul’s Cathedral are stored; point to the precise cluster of neurons and neurotransmitters that signify St. Paul’s Cathedral. RESPONSE: that’s special pleading. Of course we can’t do that, but suppose we could. Would you still object? REPLY: of course I would, because those neurons aren’t the thing itself. Remember, the fulfilled intuition is the thing itself *as represented in the mind*.

Semantic Content

OK, back at it. As I said in my reply on the Twin Earth experiment, the third irreducible mental simple is “semantic content.” I had a professor, Ignacio Angelelli, explain it this way: let’s say we have 3 mathematicians at a conference, and you ask them about triangles. Each one has his or her own private thoughts about triangles, the concepts they possess. But they are also talking about objective triangles, which anyone can pick up and understand. If one of them does a proof on a blackboard, everyone at the conference will have created, in their own minds, THEIR OWN concept of triangle, and they will also be referencing this objective (or maybe intersubjective) concept of triangle which exists independent of any particular mind.

Prof A. likened this to personal concepts being “windows” onto the greater, universal concept. But where do these “personal” concepts exist if not in the mind, in the same way as intended objects and qualia? The same objections and replies apply there as well. So we have three things that do not have, at present, a sufficient materialist explanation. Completing the reductio, then, we must reject our assumed premise: that mind is just something special a brain does.

The Prestige

(L-R) Andy Serkis, David Bowie, Hugh Jackman in The Prestige

Now, it’s time to add all the buts and caveats.

First, yes, nothing here precludes a materialist account of mind from ONE DAY being successful. But be aware that insisting a sufficient materialist account must exist, that one WILL exist, is an article of faith. Scientism, or the belief that all knowledge in the world is fundamentally scientific, is a religious mode of thinking, and as a committed rationalist, I think we should avoid such thinking in rational matters (religious matters are another story; indulge faith there all you want).

Second, this does not imply the existence of the “soul” or atman or whatever. There is good reason to think minds and brains ARE critically linked, for example, the presence of brain damage or hallucinogens or whatnot dramatically affecting mental states.

(A quick word on zombies, since I mentioned them above: the philosophical-zombie thought experiment asks you to conceive of a being physically identical to you, atom for atom, but with no inner experience at all. If such a being is even coherently conceivable, then the physical facts alone don’t fix the facts about qualia, which is just another way of stating the hard problem. As I said, I’ve never met one, and I don’t expect to.)

Recall Searle’s Chinese Room thought experiment: place a man who speaks no Chinese in a room, give him a rulebook for manipulating Chinese symbols, and pass him questions written in Chinese. By following the rules, he will pass back convincing answers in Chinese and seem, from the outside, to understand the language, though he never understands a word of it. That is what a computer does: a computer is good at taking inputs created by minded things and manipulating them, perhaps in a convincing enough fashion to make you think you’re talking to another minded thing. But no computer has ever had a mind of its own. So for now, all we can say is that humans and animals with higher brain function appear to be minded creatures, so brains, particularly advanced brains, and minds seem to go together.
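The computer analogy can be made concrete. Here is a minimal sketch in Python (the rulebook entries are my own invention, purely for illustration): the program returns fluent-looking Chinese replies by pure string lookup, and at no point does anything inside it represent what the symbols mean.

```python
# A toy "Chinese Room": replies are produced by pure symbol lookup.
# The rulebook below is hypothetical, invented for illustration; nothing
# in the program encodes the meaning of any symbol.

RULEBOOK = {
    "你好吗": "我很好",            # pairs an input string with an output string
    "你叫什么名字": "我叫王先生",   # another input/output pairing
}

def room(symbols: str) -> str:
    """Follow the rulebook: match the input string, emit the paired output.
    Understanding plays no role anywhere in this computation."""
    return RULEBOOK.get(symbols, "请再说一遍")  # fallback reply for unknown input

print(room("你好吗"))  # a fluent-looking reply, produced by lookup alone
```

Scaling the rulebook up, or swapping it for a statistical model, changes how convincing the outputs are, not whether anything inside understands them, which is the point of the thought experiment.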

It remains to be seen whether we can have a mind separate from the brain, or a particular brain, and there are all sorts of fun thought experiments: teletransporter, brain-in-a-vat, etc., that explore these possibilities. That’s beyond the scope of this piece. My point here is simply that there are good reasons for at least provisionally adopting substance dualism as the most rational hypothesis until we can somehow explain away the unique “hard problem” features of consciousness, if we ever can. And I say the only reason we would is to rescue ontological naturalism from the conclusion that there may be more things in this world than can be studied with empirical science, Horatio.

Husband. Lawyer. Outdoorsman. Warrior poet. Philosopher king. Legend.