
What Is Pain?

This discussion of the nature of pain is from an email interaction with Jeff Arle, a neurosurgeon; Kris Carlson, a developer and computer scientist; Edmund Schofield, a physicist; Steve Smith, a software entrepreneur; and Dave Waltz, an artificial intelligence pioneer. It is presented in hopes of being of general interest. Please send any comments to me at wilson@prediction-dynamics.com.


I guess there is a topic where Jeff and I would not agree--yet it's also one that avoids the swampish kinds of things he is arguing against here.

I take Jeff's basic position on mind and consciousness as Materialist. Having looked a great deal into the head and seen all that's in there but nothing spiritual, as well as having looked closely at much scientific theoretical work, he is certainly entitled to his viewpoint. In fact it might be correct. But to me there is one major unanswered question which I will try to present in this forum, and would request your comments on.

The question is not new, yet it seems to me it is not usually stated as clearly as it can be. The question is: what is pain? I suggest that we don't know the answer in any interesting sense; and since we don't, we have hardly started on the scientific understanding of consciousness.

I am talking about the *feeling* of pain. How does it arise? (For that matter, how do any subjective feelings arise? But pain is the simplest, least complicated case.) Here is a system made of flesh and blood and neurons, etc., which from a Materialist viewpoint is not essentially different from a very elaborate machine (except it's "wetware"). In fact, a "computational Materialist" might go so far as to say the human system is just a circuit, a very complex circuit, with a very elaborate program running in it.

Here is the problem, for me: I can't understand how a circuit could feel pain. I don't mean pull its fingers away from a hot stove because its peripheral heat sensors become super-excited; a zombie could do that. I mean *feel* it. For instance, where in the circuit is the site of feeling pain, if there is one? In such-and-such transistor? In a certain local complex of circuitry? But we can't understand that, can we, because we have no models according to which a circuit can have any kind of subjective feeling. Yes, the circuit could be, say, overheated, or abnormally active. But how is that equal to feeling something?

Some might say that for a circuit to be in a certain kind of state *is* to feel pain, and explain it that way. But this seems a cop-out, I think, because such explanations-by-identity are not based on any deeper elements. They are not like, say, explaining the gas laws using the underlying statistical mechanics.

A slightly different Materialist view of pain is that a certain kind of signal from the periphery (or wherever the initial sensing neuron is located) is "interpreted" as pain. But what would that mean? An electrical signal of a certain kind, consisting of a train of potential spikes, comes in from the sensor. It excites certain neurons, which proceed to emit their own spike trains. After a very small number of such stages, the pain information arrives--where? Somewhere in the CNS, where it initiates--besides, say, a rapid physical reaction--a feeling of pain.

This is very peculiar, to me. What exactly is *painful* about a train of potential spikes? What makes this electrical signal painful and a different one pleasurable, say? They are both just electrical signals. Well, perhaps there is a look-up table, and the first looks up to Pain and the second to Pleasure. Simple as that. But "who" is reading the look-up table? And, aren't the outputs, i.e., Pain and Pleasure, just electrical signals again? As in the earlier circuit-state situation, we have no models for feeling.
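
To see the regress concretely, here is a minimal sketch, in Python, of what such a look-up table would amount to. The signal encodings and table entries are invented for the example; nothing here models real neurons.

    # Toy illustration of the "look-up table" model of feeling.
    # The signal encodings and table entries are invented.

    # Incoming spike trains, represented as tuples of inter-spike intervals (ms).
    SIGNAL_A = (2, 2, 2, 2)      # a rapid, regular burst
    SIGNAL_B = (15, 40, 15, 40)  # a slower, patterned train

    # The hypothesized look-up table: signal pattern -> label.
    FEELING_TABLE = {
        SIGNAL_A: "PAIN",
        SIGNAL_B: "PLEASURE",
    }

    def interpret(spike_train):
        """Map a spike train to a feeling label. The label is itself just
        more data--another signal. Nothing in this function *feels*."""
        return FEELING_TABLE.get(spike_train, "UNKNOWN")

    print(interpret(SIGNAL_A))  # -> PAIN ... but what reads this output?

The output of the table is just another signal; whatever "reads" it faces the original question all over again.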

I'll stop for the moment here, hoping for reactions. :)

Stewart


*******************************************************


Thank you, Jeff and Dave and Ed, for these thoughtful and incisive comments. I will say something to them as follows, and hope to hear more from you and anyone else who is interested.


From Jeff Arle:

I just got this and am so excited to respond but don't have the time just this second (ironically, going in to place a set of electrodes on the motor cortex to treat a chronic pain problem - yes, really) and just reviewed the literature for an invited paper on this and other interventions to 'treat' these pain perceptions and otherwise. So, I do deal with these kinds of questions in a real world environment myself, and even attempt to do something about what the answers may involve in real people. It is exactly the kind of question to ask and try to make headway on, because, like you, I believe answering it goes a long way to answering all the other questions as well. Briefly, your 'cop-out' line of thinking is (imho) the right direction and there is support for it. The related discussion you initiate in the email is perfect, and is, or will be in the next few decades, answerable in a reductionist/materialist way to many's satisfaction, I believe. - J

I am pleased (and relieved) that a distinguished person in the real neural world finds my questions important. Just to clarify: the 'cop-out' I referred to purports to solve the problem by *identifying* pain with a certain brain state; no further explanation is called for. The problem with this is that we think we know--in principle--how to define a brain state (some neurons, a connectivity, a firing pattern, etc.), but we have no models as to how a feeling of pain can arise from such a state. For the 'brain state' explanation to be viable and scientifically interesting, it requires a model that places the origin of pain outside of, or underneath--in the reductionist sense--the facts of the state itself. This I take to be the kind of explanation that Jeff refers to.


From Dave Waltz:

Some thoughts about pain:

Pain perception appears to be localized in the rostral anterior cingulate cortex (rACC). The locus of pain perception in the cortex is significant: to a large extent the cortex is reflective, so that activities in the rest of the brain, many of them reflexive, can be perceived as well as reacted to. Perceiving pain is probably necessary if an organism is to learn to avoid it. Even a flatworm may by this criterion perceive pain. However, while a flatworm might learn to avoid situations/objects associated with pain, I doubt that it could consider and make plans to avoid pain in the future. That humans can do this says a lot about our ability to reify our experiences, mull them over, use them in "what-if" planning, and effectively learn (reliably recall) learned policies in the appropriate circumstances.

Consciousness of pain (and everything else we're capable of thinking about) has the property of being highly selective -- from the vast number of objects, situations, possible actions, etc. presented to us in any circumstance we narrow down those we're "aware of" to a very small number. Pain is especially good at making sure any item with which it's associated is among this small number of items in consciousness. This is important for survival (since pain can indicate the need to take action to avoid damage to our bodies). In cases where we're fighting for our lives and where adrenaline is plentiful, most pain is typically suppressed, and only becomes evident later.

In this perspective chronic pain serves no useful purpose, but is the negative side effect of an important survival system. I'm not sure what one can usefully say about "phantom limb" pain, though in recent work human subjects can learn to "train" the rACC to diminish the perceived intensity and discomfort of pain (whether experimentally applied or chronic). (See http://paincenter.stanford.edu/research/rtfmristudy.html ).

This is all far from the metaphysics of whether the real world exists or not, but nonetheless I hope it may spur some creative thoughts. -Dave


Dave's comment illustrates his near-encyclopedic awareness of what's going on in multiple fields! I think the most interesting point for us is "Perceiving pain is probably necessary if an organism is to learn to avoid it." Clearly pain in the form of a *signal* of trauma, danger, etc. is vital for survival and important for learning. The question is, is it necessary to feel it? If this sounds obvious, consider how the field of autonomous robotics depends on sensor signals to keep robots from bumping walls, or, for that matter, low-battery signals to get them to go and charge up. Here we have 'pain' and 'hunger' signals--and they are often so characterized--but clearly (to any serious person!) the robot is not feeling anything in the sense that we feel pain and hunger. It is in principle possible that, to advance beyond the simple devices of today, robots will need subjective feelings. But it is not clear. Similarly it is not clear, to me, that our level of behavior and intelligence requires subjective feelings--perhaps we could do just as well as zombies, where a zombie has our perceptive and cognitive powers but doesn't feel anything.
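
To make the robot example concrete, here is a minimal sketch, in Python, of such a control loop; the sensor names and thresholds are invented for illustration. The robot has 'pain' and 'hunger' signals in the engineering sense, yet nothing in the loop plausibly feels anything:

    # Toy robot control loop. Sensor names and thresholds are invented.

    def control_step(bump_sensor: bool, battery_level: float) -> str:
        """Choose an action from a 'pain' (bump) signal and a 'hunger'
        (low-battery) signal. These are signals only; no feeling is
        anywhere in the loop."""
        if bump_sensor:             # 'pain': contact detected
            return "reverse_and_turn"
        if battery_level < 0.2:     # 'hunger': charge below 20%
            return "seek_charger"
        return "wander"

    # One simulated step: the robot 'avoids pain' without feeling it.
    print(control_step(bump_sensor=True, battery_level=0.8))  # -> reverse_and_turn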

What is the relevance? If emotions are strictly computational--if zombies and robots could do everything we do--then why do we have feelings? I.e., not only would pain and other feelings be (in my view) unexplained in the sense of what they are, but their 'why' would be unexplained as well.


From Ed Schofield:

I submit that "Darwin" (sensu lato) would have a lot to say on this question.

I take it he means the explanation of pain can be found in evolution. In ethology, I believe there is a distinction between 'structural' explanations and 'functional' explanations of behavior, where structural means 'how does it work' and functional means 'why did it evolve'. I would agree that looking to evolution could be important in explaining the 'why' of feelings. As for how they work, I would be interested in hearing how evolution might shed light on that.


************************************************************

I am late with this, due to computer problems. Here are some replies to my previous message, with my responses.


From Kris Carlson:

Well the pain discussion seems to have really touched a nerve.

Couple of responses:

1. What is the Turing Test in the context of pain? I think considering this will challenge and sharpen the view that there is more to it than circuitry (i.e. analog circuitry and hormones in animals and humans). My answer is implicit in the next points.

2. Prima facie, defining "pain" seems easier than defining "soul", but they are perhaps related discussions. Do animals feel pain? Do they have a soul? Does an ant with 10^5 neurons feel pain when it is avoiding death by heat? A robot, as Stewart asks?

Why isn't the answer yes: they all feel a move to a state of lesser happiness (perceived subjectively) as "pain", but the more complicated nervous systems experience a richer form of pain than the less complicated ones? And if we for a moment take Fredkin's definition of "soul" (since it is precise enough to discuss) as the number of bits it takes to distinguish one individual mind from another, then any entity that learns has a soul, but a more complex learning system would have a more complex soul than a less complex one.

At least everyone gets to have a soul :-) but the human ones are (much) more complex than others.

Can't help but throw this in: if a machine has a catalog of all its parts and their connections, and can monitor at will the dynamics of same, is it not more aware of itself than we are, even if it is less complex?

3. Then a nervous system or a learning machine that has far more complexity than we do could experience a far richer sensation of pain than we do, right? Perhaps whales do. And now I would come back to the Turing Test for Pain; if we could not/did not emulate the richness of the pain experience of another entity, would it not say we did not pass its Turing Test (for Pain)?


To (2.), probably higher animals feel more pain than lower animals, and amoebae feel none. Of course, they all react to injury, but the question here is whether they feel it. That people definitely feel pain and robots definitely do not suggests a continuum. However, it's a continuum with no explanation, since we have no objective theory of how any material object can feel anything.

Some might argue the other way round: that the existence of pain in people, who are material objects, is proof that any material object feels pain to the degree of its complexity. But this is again scientifically useless, in my opinion, since there is no underlying theory of how the pain is felt.
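
As a side note, the Fredkin bit-count definition that Kris invokes can at least be made concrete in toy form. Here is a minimal sketch, in Python, assuming--purely for illustration--that a learned "mind" is representable as a bit vector, so that the bits distinguishing two minds can simply be counted:

    # Toy rendering of Fredkin's "soul" measure: the number of bits it
    # takes to distinguish one individual mind from another. Representing
    # a "mind" as a bit vector is an illustrative assumption only.

    def distinguishing_bits(mind_a: int, mind_b: int) -> int:
        """Count the bits on which two 'minds' (bit vectors) differ."""
        return bin(mind_a ^ mind_b).count("1")

    simple_a = 0b1011                          # small learned state
    simple_b = 0b1001
    complex_a = 0b1011_0110_1100_0101          # larger learned state
    complex_b = 0b0011_1110_1000_0111

    print(distinguishing_bits(simple_a, simple_b))    # -> 1
    print(distinguishing_bits(complex_a, complex_b))  # -> 4

On this rendering a more complex learner can indeed carry a "bigger soul," as Kris suggests--but, as with the continuum of pain, counting bits says nothing about how anything is felt.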


From Steve Smith:

The beauty of the Turing test is it doesn't matter whether someone is "self aware" - all they have to do is pass the test in a believable way.

The Turing test applied to pain would be to judge whether someone responds in all ways consistent with someone who does "feel pain".

Oh, and by the way, the only person we can be sure of that "feels pain" is us. We have no proof that everyone else isn't a Rod Brooks robot who behaves like us in all the correct Turing ways but isn't self-aware.

"That which you are looking for is that which is looking." - Saint Francis of Assissi


I'm not sure a Turing test would be useful for pain. It would seem much easier to pass than one for intelligence, and thus not convincing. Would it be that hard to write a program that produced loud cries and grimaces (a la Kismet) in response to various nasty probes? Would that convince anyone?
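
To illustrate how cheaply such a test might be gamed, here is a minimal sketch, in Python, of a program that produces outward "pain behavior" on demand; the probe names and canned displays are invented:

    # Toy "pain Turing test" contestant. Probes and responses are
    # invented; the point is only that outward pain behavior is trivial
    # to produce without any feeling at all.
    import random

    RESPONSES = {
        "pinprick":  ["Ouch!", "*flinches*"],
        "hot_probe": ["Aaagh!", "*jerks away, grimacing*"],
        "pressure":  ["That really hurts!", "*winces*"],
    }

    def react(probe: str) -> str:
        """Emit a convincing pain display for a probe--no feeling required."""
        return random.choice(RESPONSES.get(probe, ["*looks puzzled*"]))

    for probe in ("pinprick", "hot_probe", "pressure"):
        print(probe, "->", react(probe))

If something this shallow could pass, passing tells us nothing about feeling.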

As for whether anyone else feels pain, in contrast to Steve, I am as sure of that as I am of anything. A fundamental principle of science is that the laws of nature hold everywhere and for everything; similar objects and systems must behave similarly. (They are not subject to divine caprice as in certain religions.) Any two human beings are overwhelmingly similar--consider the near-identity of their DNA and their long evolutionary histories that differ only in the tiniest detail in the most recent instant of time. How can there be the slightest possibility that A feels but B doesn't?


From Ed Schofield:

Dear Stewart,

With respect to your last sentences to me ("I would agree that looking to evolution could be important in explaining the 'why' of feelings. As for how they work I would be interested in hearing how evolution might shed light on that.") I would suggest as a preliminary answer ("preliminary" because I would have to do a lot more thinking about it) that, rather than looking directly, or exclusively, to Darwin--or biological evolution--for an answer to "how they might work," I would look to chemistry (organic chemistry and biochemistry) for an answer to questions dealing with that level.

Ed.


Evolutionarily, pain is a response that ensures an organism's survival--or at least one that greatly increases its chances of survival.

There is, I suppose, such a thing as unnecessary or gratuitous pain (e.g., torture). Given the overwhelming complexity of the universe and the perversity and cruelty of our species, gratuitous pain probably serves no evolutionarily meaningful end, or, if it does serve such an end, it does so to a very small degree.

Ed.


To the first message, we seem to agree that the explanation of feelings in the structural sense will come less from evolution than from chemistry and other fields that might offer an underlying theory.

To your second message, when you say, "pain is a response that ensures an organism's survival"--that is certainly true for the "response" aspect. However, as I've noted elsewhere, it isn't clear, to me, why actually feeling the pain should be required; i.e., in order to identify and deal with a physical insult, it is not necessary to actually feel it, only to detect it.


From Dave Waltz:

I think that you would all find it worthwhile -- and highly relevant -- to read Marvin Minsky's 1968 paper "Matter, Mind and Models": http://web.media.mit.edu/~minsky/papers/MatterMindModels.txt which provides an interesting explanation about why people tend toward dualism, and why we find it hard to reconcile thought and neural activity.

I have downloaded but not had a chance to read it. I'll comment in the next of these.


***********************************************************

There was just one reply to the last round, plus I've repeated Dave Waltz's comment from that round.

I hope people will continue to write and debate my* thesis that pain, the simplest feeling, is essentially unexplained scientifically and therefore that there is a large gap in the present understanding of consciousness. --Stewart

* "my" in the sense that I am representing it. :)


From Jeff Arle:

my life allows nothing but useless brief interludes of comment (perhaps this is fortunate for all of you):

There seem to be varying thresholds of pain among humans - so there are some differences, and even to the extent that there have been documented cases where the 'feeling' of pain is not 'coded for' at all (these children can really get injured if it isn't picked up right away - but it is incredibly rare, of course) -- so clearly they are wired differently, and I suspect they likely never developed certain regions in the frontal/cingulate areas at all. [Obviously, all of this supports--but does not 'prove'--the idea that the sensation/perception of certain stimuli as 'pain' is created by these cells and nothing more.]

As for a modified kind of Turing test on pain -- IS the brain doing any more than activating circuits that assign words (internally) to the stimulus of pain, along with vocalizations if strong enough, and other movements, all at the same time - and what we sense as a 'feeling' of pain is nothing more than the concomitant interaction of all of these circuits, which we then assign another set of internal words or circuits to characterize?? Why couldn't it be that? It seems it IS that, until proven otherwise, rather than the other way around. We know the neurons are doing some of it at least -- we have no evidence for anything else involved.

- Jeff


Well, I think your comments are great, right to the point. Your first comment is about evidence that pain is localized in certain cells, as shown by people who lack those cells or that area and feel reduced pain or none at all. You suggest that this shows that pain "is created by these cells and nothing more"; i.e., it's the activity of those cells that *is* pain.

To me, there are two problems. First, the "broken radio" problem--removing a tube (easier than a transistor!) kills the radio, but does this mean the tube is the source, cause, or origin of the music? Perhaps this objection can be argued around; I don't know.

The bigger problem, for me, is that localizing (even with certainty) the feeling of pain in the activity of a certain subregion of the brain does not seem to change the issue: how can a collection of interconnected cells feel pain? Just because the collection is localized would not seem to reduce the problem.

New York City is a huge system consisting of millions of active components interconnected in myriad ways. Is there any sense in which New York City feels *anything*? Of course the individuals do, and that is the problem we are trying to solve. But NYC as a whole, if not quite so complex as the cingulate area, is certainly a very complex system and yet who would claim that it has feelings, even slight ones? If the cingulate is different in some decisive way, what is the difference?

The second comment is a bit different in that it brings in the idea of interpretation: the "interaction of all of these circuits, which we then assign another set of internal words or circuits to characterize [as pain]". The problem with this "lookup table" approach is (1) who is "we"? and (2) the outcome of the table must also be an interaction of circuits, so what has been explained?


From Dave Waltz:

I think that you would all find it worthwhile -- and highly relevant -- to read Marvin Minsky's 1968 paper "Matter, Mind and Models": http://web.media.mit.edu/~minsky/papers/MatterMindModels.txt which provides an interesting explanation about why people tend toward dualism, and why we find it hard to reconcile thought and neural activity.

-Dave


Alas, having read it twice, I can't get anything out of this paper that is clear enough to chew on about the question of pain. Would you kindly summarize how the paper is relevant?


**********************************************************

Thanks to Jeff there is one further round.


From Jeff Arle:

yes - I understand where this is going. The tube/transistor thing can be argued around a little bit easily, and in other more involved ways with difficulty. Let me suggest first that there are MANY things a radio does, among them making sound, tuning to a station, amplifying signals, and so forth. Removing a certain component may alter the sound, or the ability to tune, or the ability to amplify, or the ability of it to turn on at all. Is the tube then SOLELY responsible for any of those functions? Hard to tell -- but it would clearly be at least one necessary part of them if it alters their function. Working further through the system (forward and back) one can deduce the necessary and sufficient components that ARE responsible for the given function. Does a transistor 'sense' its ability to amplify/relay/gate/etc.?? Of course not - not in the sense we are wondering -- but its interaction with all the other components WOULD (I posit) provide a platform for another set of circuits to assign values to that functioning and so on (see Hofstadter's new book 'I Am a Strange Loop').

As for NYC and the cingulate -- great question -- but let me ask this: when Kurzweil derived the appropriate transfer functions for piano keys to develop the synthesizer (digital representation of the way each key creates sound hammering a string) he had to account for the tensions, wood properties, angles, forces and so forth. Ok - it was solvable, and with a complete set of these transfer functions one can recreate pretty faithfully, with some refinements, the acoustics of various pianos and so forth. HOWEVER - if one then asks, what is the transfer function of the piano itself (as if instead of just focusing on the keys, we decided that a very complex rendering of the whole piano should work) it would completely miss the mark - knowing all the appropriate values of the piano structure and knowing how it responds to a variety of impulse functions would never allow one to derive the digital representation of how various combinations of keystrokes would create a piece of music. In the same way, if we bang the brain and derive its 'transfer function' it has nothing to do with what the neurons in circuits do with their transfer functions. Similarly, although somewhat tangentially, asking whether any complex interactive system (such as NYC) can create 'feeling' is perhaps not the right question exactly - but it's close. I would say that such an interacting complex system DOES create something on a more abstract scale - if it had the other appropriate internal components of interaction that assign 'words' or 'meanings' to this abstract quality, and we or it could communicate that outwardly (or we could 'record' from it internally somehow) then it would be noticed as well. SO, it may not be 'feeling' or 'pain' per se, but I suspect most highly interactive complex systems DO create many abstract things -- we just don't have names for many of them yet or ways to notice them in the first place. -- J
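
[A note for readers unfamiliar with the term: in the simplest idealization, the per-key "transfer function" Jeff describes might be a damped resonance. This particular form is only an illustrative guess, not Kurzweil's actual model:

    H_k(s) = \frac{\omega_k^2}{s^2 + 2\zeta_k \omega_k s + \omega_k^2}

where \omega_k is the resonant frequency and \zeta_k the damping of key k. Jeff's point is that the whole-instrument impulse response does not decompose into these per-key functions--and, analogously, a whole-brain "transfer function" would not reveal what the individual neural circuits are doing.]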


I feel that in replying to this I would be mostly repeating previous points, and I don't want to be boring. So I will just say that in this excellent and very stimulating interaction we appear to have rested our cases. I am pleased that it has stayed entirely on what I would call an objective and scientific level, as well as including the kind of speculation that science must have in order to advance!

Stewart