
Alva Noe’s Philosophical Concepts 8: Out of Your Head

Written by Jeff Drake
10 · 23 · 19

The following discussion is based on information that can be found fully described in Alva Noe’s book, Out of Our Heads: Why You Are Not Your Brain and Other Lessons from the Biology of Consciousness and Alva Noe’s article from 2002, Is the Visual World a Grand Illusion? I urge you to read both of these, but no worries if you choose not to, as I’ve read them for you and recount what I have learned below.

This post is more on the topic I last wrote about in “Alva Noe’s Philosophical Concepts 7: The Door to Perception”. I have emphasized several times how important I feel it is to understand that “The Grand Illusion” is fallacious. If it isn’t yet clear why I keep hammering away at this, let me just say that the majority of our current theories of perception and visual science are built on the assumption that our brains recreate what we see with our eyes and that this recreation is what we “really” see, but this assumption is just not true. That raises the question: how can we hope to advance the field if we are finding our direction with a broken compass? In this post I will unveil the veritable hornet’s nest Noe stirred up by taking a rather sharp stick (his logic and analytical skill) and using it to poke two world-renowned scientists and the work that won them the Nobel Prize. For Noe, it’s go big or go home.

Please note that I will attempt to spare you from much of the low-level detail behind their work, other than its conclusions and high points – not necessarily for the sake of brevity, but because this tale gets into the scientific weeds quickly and you don’t really need all of that to accept or refute their conclusion. But if what I write leaves you thirsty for more, then I recommend you buy his book.

As it has been a while since my last Alva Noe post, I will reprint the recap of the Grand Illusion below from the previous post:

Grand Illusion Recap:

The “Grand Illusion,” in a nutshell, is a rather descriptive term used by Alva Noe and others to describe the currently accepted scientific explanation of how our vision works, i.e., that our brains essentially reproduce what our eyes see, meaning that we never really “see” what is out there directly. We’re always at least one step away from reality. This is also known as “representational perception,” or even “reconstructionist perception,” which holds that vision is produced by the process of our brains reconstructing the world “out there” and presenting it to our brains “in here,” based on information encoded on our retinas.

This theory, or rather this assumption, has been at the root of academic visual perception science for many years and is still the standard dogma, but is it correct? My previous Grand Illusion posts reviewed what Alva Noe has to say on this subject, which is essentially “This is wrong!” According to Noe, it’s not just wrong, it’s “bad science.” Needless to say, even though he is “just a philosopher,” he has caused a bit of a firestorm within scientific circles, since his criticism includes a detailed refutation of the much-acclaimed conclusions of Francis Crick, who (with his partner) had already won the Nobel Prize for the discovery of DNA’s structure. Well, as Carl Sagan famously said, “Extraordinary claims require extraordinary evidence.” Here Alva Noe comes into his element, because he deals with this erroneous assumption blow by blow, step by step, in a way that is not only highly understandable and logical but also, in my opinion, grounded in common sense.

And so it begins….

Alva Noe tells us the story of two famous scientists, David H. Hubel and Torsten Wiesel. You may not recognize their names, but believe me, they are seriously famous figures. Their theory of perception is embedded in virtually everything you have probably read on the subject.

Specifically, their work focused on vision in mammals. Important work for sure, but Noe tells us that their work rests “…on an untenable conception of vision and other mental powers as computational processes taking place in the brain.” Noe’s statement is revealing. He is saying that their work has essentially two related – and indefensible – parts: a) a conception of how we mammals perceive the world, and b) the idea that the brain works like a computer. Noe says both are wrong. When combined, as in Hubel and Wiesel’s theory, the two ideas become what’s called the “computational theory of mind.” The problem with this theory, Noe says, “…is that it supposes, mistakenly, that mind arises out of events in the head…” Thus, “…The legacy of Hubel and Wiesel’s research must be called into question.”

Why are they mistaken? That is the question.

Hubel and Wiesel were not run-of-the-mill scientists, and their work was considered top-notch. They won the Nobel Prize for research on the neurophysiology of vision, no small feat. They conducted their research at Johns Hopkins, then at Harvard, from the 1950s to the 1980s. Their work, Noe tells us, is considered an “important landmark in the science of consciousness.” This makes sense, right? Seeing is damned important to us mammals. We access the world around us with our eyes. We discern a myriad of colors, hues, shapes and shadows and use our eyes to find our way around and avoid falling into holes or off cliffs. Noe tells us that “…the visual character of objects shapes how we conceive of them.” We see the world in 3-D. When we look at the front of an object, we are immediately aware that it has other sides, some hidden from us, including a back that we cannot see. I’m boggled right now thinking about what it must be like to be blind. How awful.

Noe tells us that, “It is sometimes said that we know more about how the brain enables us to see than we do about any other mental function of the brain.”[i] Further, he states that when people say this kind of thing, they usually have the work of Hubel and Wiesel in mind. There’s a good reason for this. Consider this problem – it is extremely difficult to study working brains inside living subjects. You can’t just crack someone’s head open and see what’s going on. Brains are hidden behind bone. And even if you removed the skull, you couldn’t discern how the brain is functionally organized just by looking at it. It’s a mystery. This is where the work of Hubel and Wiesel comes into play and gains historical significance. You see, they “…seemed to find a way to exhibit the brain’s workings in a way that made what it was doing intelligible: how the brain might be achieving the function of making us visually conscious.”[ii] Thus, their work “…was and remains the standard by which the field measures itself.”[iii] To this, Noe admits his astonishment, because their approach had definite “shortcomings.” Further, he tells us that these shortcomings “…remain, even now, the shortcomings of the field of research into the neural basis of consciousness.”[iv]

Hubel and Wiesel’s Journey

These scientists started out on a historically familiar path, albeit with updated technology, poking and prodding the visual cortices of cat and monkey brains with microelectrodes. They attempted to build upon the work of others in their field, such as Stephen Kuffler (their mentor at Johns Hopkins), who had made some important discoveries about the behavior of cells in the retina. Our visual cells, like just about any other cells, respond to stimuli. For a visual cell, the so-called “receptive field” is the area of the retina which, when stimulated, causes the cell to alter its firing rate. Noe says that “…you can think of the cell’s receptive field as the region in space in front of the animal to which a cell is responsive.”[v] What Kuffler discovered is that retinal cells had “…receptive fields consisting, in effect, of concentric circles.” These concentric regions responded differently to different visual stimuli. For example, a spot falling on the central region of a cell’s receptive field would activate the cell, but a ring-shaped stimulus falling outside of that central region would inhibit the cell’s firing. They also found that applying a “diffuse light” equally across the entire receptive field would “produce a weaker reaction than a spot falling only on the center. Off-center cells had it the other way around.”[vi] Interesting!
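
To make the center-surround idea concrete, here is a toy numerical sketch of my own (in Python, using a difference-of-Gaussians filter as a stand-in for an on-center cell). It is only an illustration of the qualitative pattern Kuffler reported, not his actual model or analysis:

```python
import numpy as np

# Toy model (my own illustration, not Kuffler's actual analysis): an
# on-center/off-surround receptive field built as a difference of Gaussians,
# sampled on a small grid. The center and surround integrals are matched so
# that uniform light roughly cancels out.
size = 41
half = size // 2
y, x = np.mgrid[-half:half + 1, -half:half + 1]
r2 = x**2 + y**2

sigma_c, sigma_s = 2.0, 6.0
center = np.exp(-r2 / (2 * sigma_c**2))
surround = (sigma_c / sigma_s)**2 * np.exp(-r2 / (2 * sigma_s**2))
receptive_field = center - surround

def response(stimulus):
    """Linear response of the model cell to a stimulus pattern."""
    return float(np.sum(receptive_field * stimulus))

spot = np.where(r2 <= 2**2, 1.0, 0.0)                   # small spot on the center
ring = np.where((r2 > 5**2) & (r2 <= 10**2), 1.0, 0.0)  # ring falling on the surround
diffuse = np.ones_like(receptive_field)                 # light across the whole field

print(response(spot), response(ring), response(diffuse))
# Spot: strongly positive (excites the cell); ring: negative (inhibits it);
# diffuse light: only a weak response, as described above for on-center cells.
```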

Hubel and Wiesel were apparently quite impressed with their mentor’s work and decided to take a similar approach and “…simply extend Stephen Kuffler’s work to the brain.”[vii] In other words, they were going to take readings from the different vision-related cells, map the receptive fields, and, as Hubel himself wrote, “…look for any further processing of visual information.”[viii] (I’ll ask you to read that quote twice, as I’ll be returning to it.) You’d think that, given the amount of visual stimulation our brain receives, the hard part would be sorting through and making sense of the overwhelming response data, but in fact they discovered the opposite: “…the problem was getting the cortical cells to respond at all.”[ix] Interesting, but they kept at it and eventually figured it out through experimentation.

Imagine, if you will, a subject cat or monkey, head strapped down, skull opened with electrodes inserted, and in front of its eyes an apparatus that allows the scientists to present different slides containing different stimuli, like a black spot or circle, for example.

One thing they discovered is that when they did get a response from a cell, the response had nothing to do with the black dot on the slide! Instead, what the cell was reacting to was the extremely thin shadow cast by the edge of the slide, “…a straight dark line on a light background.” This thin line was what the cells wanted and reacted to, although the response differed based on the orientation of the line crossing the receptive field. The plot thickens.

Their work progressed. Along their experimental journey they discovered that entire classes of cells in the visual cortices of cats had receptive fields very different from those in the retina. They also found that there were cells they referred to as “simple” and cells they called “complex,” and that the visual cells formed a network hierarchy in which complex cells received inputs from the simple cells. Their work continued over the next 25 years, leading to numerous discoveries about the brain’s visual cortex, including their characterization of its “functional architecture.”
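
Again purely as an illustration (my own sketch, not Hubel and Wiesel’s model), the orientation selectivity of a “simple” cell can be mimicked with a Gabor filter, which responds strongly to a dark line at its preferred orientation and barely at all to the same line rotated ninety degrees:

```python
import numpy as np

# Toy sketch (my own illustration, not Hubel and Wiesel's actual analysis):
# an orientation-selective "simple cell" approximated by a Gabor filter.
size = 31
half = size // 2
y, x = np.mgrid[-half:half + 1, -half:half + 1]

def gabor(theta, sigma=4.0, wavelength=8.0):
    """Gabor filter whose preferred line orientation is set by theta (radians)."""
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def dark_line(theta):
    """A thin dark line as contrast: -1 along the line, 0 on the light background."""
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.where(np.abs(xr) <= 1, -1.0, 0.0)

cell = gabor(theta=0.0)
preferred = abs(np.sum(cell * dark_line(0.0)))          # line aligned with the cell
orthogonal = abs(np.sum(cell * dark_line(np.pi / 2)))   # same line rotated 90 degrees

print(preferred, orthogonal)
# The line at the preferred orientation drives the model cell far more
# strongly than the same line turned sideways.
```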

Alas, there was, to me, a cat and monkey lover, a disturbing discovery they made about cortical development. They found that depriving baby cats and monkeys of sight at birth, by sewing their eyes shut during a critical period of development, resulted in permanent blindness. A rather sad experiment that demonstrates an important and startling fact – that a mammal’s ability to see requires experience. If animals are not allowed to see, not allowed to visually experience the world around them during a critical time in their development, they will never see. Wow.

One of the first things Noe takes issue with in Hubel and Wiesel’s conclusions is Hubel’s own written statements about their work. While admitting that their work is impressive, Noe recounts how Hubel wrote that their work, from the beginning, was absent any hypotheses. Hubel goes further and compares their work to Christopher Columbus’s voyage. Their work, Hubel says, was “mainly exploratory.” Hubel writes that most of their work was done in the spirit of Columbus as he crossed the Atlantic, simply “…to see what he would find.”[x] Hubel goes further and states, “It is hard, now, to think back and realize just how free we were from any idea of what cortical cells might be doing in an animal’s daily life.”[xi]

Alva Noe’s reaction to this statement from Hubel, if I were to put words in his mouth, would be, “Bullshit!” As Noe explains, Columbus didn’t just cross the Atlantic to “see what he could see.” He had some very specific, but sadly wrong, ideas about what he was going to find. As for Hubel and Wiesel, Noe argues, it is impossible to take seriously the claim that they were not guided by theory. Noe says, “Indeed, how could they not have been? After all, there are billions of cells in the brain, and they are massively interconnected. To form any conception whatsoever of what individual cells are contributing to the brain’s functioning, you need to have a tolerably clear idea in advance of what the brain is doing. And indeed, Hubel and Wiesel did have such a guiding conception.”[xii]

Noe takes time to quote Hubel several times saying more or less the same thing, but makes the point that while it is true that in 1958, when Hubel and Wiesel were beginning their work, no one really knew how neurons functioned (and therefore no one knew what the visual cortex was “for”), it is wrong to say that they had no idea what the cortex was doing. Because they did know! Granted, the details still needed to be worked out, but Noe states that at the time it was definitely understood “… that the visual cortex was in the business of analyzing visual information (as Hubel put it) and that, therefore, individual neurons must somehow, some way, be making a contribution to that was something that Hubel and Wiesel knew from the moment they set sail.”[xiii]

For added emphasis, Noe requotes the line from Hubel cited above, that he and Wiesel were going to simply extend the work of their mentor, Kuffler, and “…look for any further processing of visual information.” (Noe’s italics.) Noe emphasizes this line, and I asked you to read it twice, for a good reason: the statement is revealing because it demonstrates an inherent assumption on Hubel’s part, namely that the brain functions as an “information processor.” I apparently have this idea so ingrained in me that I missed the importance of the quote the first time I read it, as it agreed with what I believed to be the case.

Of course, Hubel and Wiesel were not the first to think of the brain as an information processor. By the 1950s, most neuroscientists held the common belief that vision presented the brain with a problem it had to solve. They also believed that the portions of the brain dedicated to vision “…could be thought of as systems of networks or circuits or, as Hubel and Wiesel sometimes put it, machines for “transforming information…”[xiv] Noe tells us of the various scientists who preceded Hubel and Wiesel and who believed in the brain as an information processor (I won’t detail that here), and thus it’s not much of a surprise that 25 years later, Hubel and Wiesel were awarded the Nobel Prize “…for their discoveries concerning information processing in the visual system.”[xv] Lastly, Noe says that it is “…remarkable that their landmark investigations into the biology of vision take as their starting point a startlingly nonbiological engineering conception of what seeing is.”[xvi]

There you have it. For most, it is essentially a foregone conclusion that the brain functions just like a computer. Is this correct? No, it isn’t. And Noe tackles that next.

The Mind as Computer

Noe tells us of a book written by David Marr, titled “Vision,” which was published the year after Hubel and Wiesel won the Nobel Prize. He calls it a “landmark book” in which Marr says that “…vision is an information-analysis process carried out in the brain.”[xvii] Marr’s book was essentially playing the same tune that Hubel and Wiesel were already clearly marching to – that vision is the process used by our brains to discern “…how things are in the scene from images in the eyes. That is, it is a process of extracting a representation of what is where in the scene from information about the character of light arrayed across the skin of receptors in the eyes.”[xviii]

But, Noe tells us, Marr took this assumption and pushed it somewhat farther than Hubel and Wiesel, because Marr gave us “conceptual clarity” about the place of theory in thinking about perception. Marr wrote, “trying to understand vision by studying only neurons is like trying to understand bird flight by studying only feathers: It just cannot be done.” Noe explains, “You need a theoretical conception of what neurons (or feathers) are doing just in order to decide which facts are even relevant.” He tells us that this isn’t because there is something “peculiar” to vision, but is instead due to the “explanatory challenges” we have to deal with whenever “…we want to understand an information-processing mechanism.”[xix]

Noe, as usual, uses a good example demonstrating the issue he’s talking about – cash registers. Cash registers? Yep. You’ll have to trust me, it’s actually a useful example.

Let’s pretend for a minute that we don’t know the purpose of a cash register. It’s no stretch of the imagination to say that if we don’t know the purpose of a cash register, then we don’t really know what the hell it is: a device for “…adding up numbers to keep track of balances due.”[xx] As Noe explains, it is really only after we know the purpose of the cash register that “you can reasonably ask: How does it manage to do this?”[xxi]

Noe then describes a number of procedures one can use to figure out what a cash register is doing when it is adding numbers. For instance, there are various algorithms for adding numbers depending on whether you are using Roman, binary, or octal numerals, and different mechanisms for carrying out the addition, e.g., pen and paper, an abacus, or a computer. He says further that in order to figure out how the cash register works (or any machine, really), we need to understand and answer three questions (a small code sketch follows the list):

  1. What function is it computing?
  2. What algorithms or rules is it using to carry out this function?
  3. How are these algorithms implemented physically in the mechanism?
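
Here is a minimal sketch of my own (in Python, with hypothetical function names) of the point behind these three questions: the function stays fixed while the algorithm varies, and either algorithm could in principle be implemented in silicon, on an abacus, or with pen and paper:

```python
# A minimal sketch (my own illustration, not from Noe or Marr) of the idea
# that the *function* (addition) is one thing, the *algorithm* another, and
# the *physical implementation* yet another.

def add_builtin(a: int, b: int) -> int:
    """Algorithm 1: rely on the machine's native integer addition."""
    return a + b

def add_digit_by_digit(a: int, b: int) -> int:
    """Algorithm 2: schoolbook decimal addition, digit by digit with carries."""
    result, carry, place = 0, 0, 1
    while a or b or carry:
        total = a % 10 + b % 10 + carry
        result += (total % 10) * place
        carry = total // 10
        a, b, place = a // 10, b // 10, place * 10
    return result

# Same function, different algorithms; nothing here depends on what the
# underlying hardware is made of.
assert add_builtin(1234, 5678) == add_digit_by_digit(1234, 5678) == 6912
```

The separation is the whole point: you can study and compare the algorithms without knowing anything about the electronics (or the physiology) underneath.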

This approach seems rather straightforward, doesn’t it? (I find this is often true after someone lays it out for me, as I might be hard-pressed to list the three most pertinent questions off the cuff. LOL.) Noe tells us that there is a beauty to this approach in that it “…enables the study of an information-processing mechanism to move forward even though the physics or electronics or physiology of the mechanism may be unknown.”[xxii]

If, in fact, vision is the process of producing representations of the scene before our eyes from information about light wavelengths or from the different intensities of points of light striking our eyes, then we can begin to investigate what kind of rules our brains could use to carry out this type of analysis of visual information. We can do this even before we know about the cells in our eyes or the way they may be linked together in networks. Very cool. This approach, i.e., the information-processing approach to mind and vision, lets us appreciate that both are processes that are “carried out in a physical medium (the brain, a computer, whatever) and that the processes are not themselves intrinsically physical. They are information-theoretic, or computational.”[xxiii]
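
To see what a rule stated without any reference to cells might look like, here is a toy example of my own (not Marr’s actual algorithm): a rule for extracting a crude representation of where the edges are in a scene from nothing but an array of light intensities:

```python
import numpy as np

# Toy illustration (my own example): a rule over raw light intensities that
# extracts a crude "where are the edges" representation. The rule makes no
# reference whatsoever to cells or physiology.
image = np.zeros((8, 8))
image[:, 4:] = 1.0  # a bright region to the right of a vertical boundary

# Rule: mark a location as an edge wherever intensity changes sharply
# between horizontal neighbors.
horizontal_diff = np.abs(np.diff(image, axis=1))
edges = horizontal_diff > 0.5

print(np.argwhere(edges))  # every row reports an edge between columns 3 and 4
```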

Here, Noe points out that once again we are faced with the irony that “vision is made amenable to neurophysiological investigation only at the price of conceptualizing vision as, in and of itself, a nonbiological (that is to say, a computational) process that just happens, in humans, to be realized in the brain.”[xxiv]

This is an important point to realize, too important for me to try and paraphrase, so here’s a long quote from Noe:

“The fact that we can see thanks only to the workings of our wet, sticky, meat-slab brains doesn’t make seeing an intrinsically neuronal activity any more than chess is. To understand how brains play chess, you first need to understand chess and the distinct problems it presents. And, crucially, you don’t need to understand how brains work or how computers are electrically engineered to understand that. Chess is only played by systems (people and machines) made out of atoms and electrons. But chess isn’t a phenomenon that can be understood at that level. And the same is so for vision. To understand how the brain functions to enable us to see, according to the information-processing perspective, you must understand vision as the sort of process that might unfold just as easily in a computer.”[xxv]

Do you see the problem we run into linguistically when trying to describe a process like chess at a level deeper, say, than the board, the various moves, goals, etc.? Try explaining the function of a phone using only the language of quantum mechanics. Theoretically you could, but you wouldn’t. It’d be too difficult. We need a level of abstraction much higher than that.

So, what are we to do with the theory that the brain, and the way we see, is the result of our brains processing visual information like a computer? Noe says, again, that we have to question this conclusion, and he proceeds to walk us through doing just that.

As an example, Noe asks us to consider what it means for a detective to extract information about an intruder from a footprint, or an oceanographer to extract information about prehistoric climates from dredging up unicellular fossils from the ocean floor. He tells us that these are good examples of how we can extract information about one thing from another. He also says that there is another fact we need to consider in this type of information extraction: “…there is a definite causal relationship between the character of the intruder and the properties of the footprints, or between the climate millions of years ago and the fossil chemistry of foraminifera today. And what makes it the case that the detective and the oceanographer can extract this information is that they are each armed with knowledge of the way in which what they have access to now (the footprint, the fossils) was shaped by what they want to learn.”[xxvi]

But, Noe tells us, “Things are different, though, when it comes to the brain and the retinal image.”[xxvii] Sure, there is no doubt really that a retinal image is replete with information about the scene before the eyes. After all, a lot is known about the various mechanisms describing how this works. Noe then says, “Presumably, then, a suitably placed scientist would be able to extract that information.”[xxviii] Makes sense, right? Not so fast, says Noe (or words to that effect). We have to remember that the brain is not a scientist or a detective. It doesn’t actually know anything itself, nor does it have eyes to examine a retinal image. “It has no capacity to make inferences about anything, let alone inferences about the remote environmental causes of the observable state of the retina.”[xxix] All of this brings us back to the original pressing question – how the hell do we make any sense of the brain being an information-processing device?

Noe warns us that there is a risk of “vacuity” in this “computer model of the mind,” which is just a fluffy way of saying that we are at risk of doing something really stupid. Zing! He says that if our goal is to understand the biological basis of mind, how are we supposed to succeed if we attempt to explain our mental powers by referring to the “cognitive powers of the brain”? This is circular logic! He says, “We—adult humans and other animals—think; we see, we feel, we judge, we infer. It’s working in a big, plain circle to say that what makes it possible for us to do all that—what explains these prodigious powers of mind—is the fact that our brains, like wily scientists, are able to figure out the distal causes of the retinal image. For that just takes for granted the nature of mental powers without explaining them. Is cognitive science guilty in this way of reasoning as if there were mind-possessing agencies (homunculi) at work inside us?”[xxx] I don’t know about you, but I’d say, “Yes, absolutely!”

These days, science fiction is always coming up with new TV series or movies showing supercomputers solving amazing technical problems, from space travel to time travel. We watch these, and we use computers every day ourselves to solve a myriad of problems, so the premise of our brains as computers seems believable; it doesn’t seem too far out on a limb to buy into the idea that our brains work in a very similar way. But here Noe says that to do so is to buy into a claim that is built on a mistake.

And what is this mistake, pray tell? It is this, says Noe – computers don’t perform calculations. They don’t think. I don’t know how many times in my career in IT I had to remind a customer that computers are very stupid. They do exactly what they are told, i.e., what the rules say they should do. But, let’s face it, following rules blindly to achieve a result is not the same as understanding the result. Using Noe’s examples, you can follow a recipe blindly, but that doesn’t mean you understand how the recipe works. In other words, understanding a problem or some computational puzzle “does not consist in merely following a rule blindly.”[xxxi] Noe uses school as an example and reminds us that following a recipe to derive an answer and thus get a good score on an exam is not the same as understanding the solution to that problem! It’s the same with computers. Yes, they come up with answers, but no, they have no understanding of those answers.

Noe takes it a logical step further by telling us that computers don’t really even follow rules blindly. They don’t follow anything. We talk about computers that way, but they aren’t following rules; they are just flipping electronic bits and bytes. We use a computer to perform operations, but the computer has no understanding of the operations we perform with it. A wristwatch can tell you the time, but it has no idea what time it is. The watch and the computer are just tools, nothing more. Noe puts a fine point on this by saying, “If computers are information processors, then they are information processors the way watches are. And that fact does not help us understand the powers of human cognition.”[xxxii]

Your mind is not in your head

Philosophers are often disparaged because they can argue equally for or against most subjects. While technically this is true, I chalk up this talent to an ability to examine and analyze two or more sides of a problem, which is not a bad skill, I think. The current topic under discussion is no exception. Here, Noe tells us, in a rather nice bit of mental gymnastics, that the fact that computers don’t think is actually “…a good reason to hold that brains don’t think because they are computers.”[xxxiii] Lol!

Noe recounts how a famous peer of his, Professor John Searle, believes this. Searle argues further that “consciousness and cognition arise from the intrinsic nature of neural activity itself.”[xxxiv] According to Searle, consciousness and cognition are “caused by and realized in the human brain.”[xxxv] Following Searle’s reasoning, we say that computers solve problems and represent the world only “derivatively,” because we treat them as if they do. But, according to Searle, “the brain’s powers are not derivative; they are original. The brain thinks and represents.”[xxxvi] (The quotes are Noe’s, not Searle’s.)

Again Noe says, “Not so fast.” In fact, says Noe, this is “exactly the wrong conclusion to draw from the fact that brains don’t think by computing: they don’t, but not because they think some other way.” It’s even simpler than this, explains Noe. It is because brains don’t think. (Personally, I recommend thinking about this and remembering it, LOL). Noe says that the idea that the brain could represent the world all on its own makes as much sense as saying that “marks on paper could signify all on their own (that is, independently of the larger social practice of reading and writing).”[xxxvii] This claim about representation and the concept it is based on do not make sense. If it is true, then who is the brain representing the world to? You see where this goes by now, right? Round-and-round, wheels within wheels, etc.

The world that we are a part of, says Noe, is not “…made in the brain or by the brain. It is there for us and we have access to it.” If I direct my thoughts to a task, like chess, or to an object, e.g., a lamp, I am not doing so due to “my internal computational states.”[xxxviii] This is something that Searle and Noe both agree on. But Noe says further that what actually gives thoughts their content is “my involvement with the world.”[xxxix] Our “interior makeup” does not “give meaning and reference to my mental states” all on its own! In other words, meaning is not intrinsic! It doesn’t just pop into our heads, although it may seem that way sometimes. “Meaning is relational.”[xl] The many relations between our thoughts, ideas, and the images we see, and the way they are directed toward events, people, and various problems in the world, exist because we are embedded in the world and interact with it dynamically. As Noe says, “The world is our ground; the world provides meaning.”[xli] Here is where I have to quote the philosopher who first introduced me to phenomenology, Maurice Merleau-Ponty. He said, “The world is pregnant with meaning.” Implied here is the directive that it is up to us to seek it.

Noe finishes this thought by restating the central claim of his book: “…that the brain is not, on its own, a source of experience or cognition. Experience and cognition are not bodily by-products. What gives the living animal’s states their significance is the animal’s dynamic engagement with the world around it.”[xlii]

I’ll stop here and give you a break. Yes, there is more to come on this subject. But before I go, I just want to point out that there is a word, I think, that is going to be very useful when discussing the concept of our dynamic interaction with the world, a word that describes something that is evident at the very first moment of life. I want to use it here and now, because I fear it may eventually become over-used and lose its value. And that word is: entanglement. More on this later.

[i] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 150). Farrar, Straus and Giroux. Kindle Edition.

[ii] Ibid.

[iii] Ibid.

[iv] Ibid.

[v] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 151). Farrar, Straus and Giroux. Kindle Edition.

[vi] Ibid.

[vii] Ibid.

[viii] Ibid.

[ix] Ibid.

[x] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 154). Farrar, Straus and Giroux. Kindle Edition.

[xi] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 154). Farrar, Straus and Giroux. Kindle Edition.

[xii] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 155). Farrar, Straus and Giroux. Kindle Edition.

[xiii] Ibid.

[xiv] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 156). Farrar, Straus and Giroux. Kindle Edition.

[xv] Ibid.

[xvi] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 157). Farrar, Straus and Giroux. Kindle Edition.

[xvii] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 157). Farrar, Straus and Giroux. Kindle Edition.

[xviii] Ibid.

[xix] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 158). Farrar, Straus and Giroux. Kindle Edition.

[xx] Ibid.

[xxi] Ibid.

[xxii] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (pp. 158-159). Farrar, Straus and Giroux. Kindle Edition.

[xxiii] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 159). Farrar, Straus and Giroux. Kindle Edition.

[xxiv] Ibid.

[xxv] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (pp. 159-160). Farrar, Straus and Giroux. Kindle Edition.

[xxvi] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 161). Farrar, Straus and Giroux. Kindle Edition.

[xxvii] Ibid.

[xxviii] Ibid.

[xxix] Ibid.

[xxx] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (pp. 161-162). Farrar, Straus and Giroux. Kindle Edition.

[xxxi] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 163). Farrar, Straus and Giroux. Kindle Edition.

[xxxii] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (pp. 163-164). Farrar, Straus and Giroux. Kindle Edition.

[xxxiii] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 164). Farrar, Straus and Giroux. Kindle Edition.

[xxxiv] Ibid.

[xxxv] Ibid.

[xxxvi] Ibid.

[xxxvii] Ibid.

[xxxviii] Ibid.

[xxxix] Ibid.

[xl] Ibid.

[xli] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (pp. 164-165). Farrar, Straus and Giroux. Kindle Edition.

[xlii] Noe, Alva. Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (p. 165). Farrar, Straus and Giroux. Kindle Edition.
