Thinking about thinking and vice versa

“A human being might be more a verb than a noun,” a friend said, discussing consciousness. A great line, that. There may not be a mind so much as a process, he explained. I’d been reading that theory too. What we think of as us is just the result of a lot of various brain processes. There was an air of mystery to it. There is no there there, someone else chimed in. I liked that line, too. But neither it nor the verb line quite did it for me. It got me to thinking, though, and then to writing, and writing, and writing. Rather than sleeping, sleeping, and sleeping, which would have been a much better idea, running on fumes as I am.

But when discussing the nature of consciousness, I just don’t think there’s a difference between it being a process or a thing. It’s still all neurology, which is just a way of saying physiology. Except we can’t say physiology until we actually know, to a fairly definitive degree, which neurons are doing what to allow consciousness–aka the human mind–to happen. But you can say the same thing about climate. There’s a whole bunch of things that combine in all kinds of varied ways to create what we call “climate”. But very few are a complete mystery to us anymore. They can all be explained and modelled. Debated, yes, and various models drawn up, but they are all models based on data. We’re not there with the mind yet. We know that it’s all neurology, we just don’t know exactly what does what in order to make us conscious. But once we do, we will no longer think of it as a verb. We’ll call it a process, a thing. It’ll be a noun.

But as I said, we aren’t there yet. And since we aren’t there yet, we tend to ascribe to it a sense of mystery that makes it more than a mere thing. But that sense of mystery is just a state of ignorance. That is, we don’t know how it works yet. Once we do, it’ll no longer be a philosopher’s quandary, a mysterious unknowable thing, a process that can’t be reduced to a noun…it’ll no longer be anything other than a process that can be described like any other process. Nothing metaphysical about it.

Neurology and cognitive science have advanced at such an incredibly prodigious rate since the 1980s that it’s difficult for non-specialists to even conceive of what all the new knowledge means. There is so much known about us now, about what our neural networks do, about how so much of our behavior can be located in places in the brain that can literally be seen and touched…and yet we still don’t even have ways to understand it except in neurological terms. Yet we really are the result of patterns of electro-chemical responses in our neurons and glia. If your electrolytes run low, a neuron can’t carry the charge it’s receiving from another neuron to the next. No potassium, no thought. My wife’s heart stopped that way, and with it, her brain. (Both were revived, thankfully.) Or with too much potassium the neuron begins firing too many adjacent neurons and you can have epilepsy (which is why I have to be very careful with bananas and other high-potassium foods). And while these very simple chemical processes are at the root of human consciousness, it’s the astonishingly complex lattice of interactions across some 86 billion neurons and hundreds of trillions of possible synaptic connections that creates the thing we call consciousness…a process that as of now is simply too vast and complex and variable for us to really understand as a whole.

Which is a shame, because until we can conceptualize that whole, consciousness–what we are–will be an unfathomable mystery. But we couldn’t discuss the universe a century ago like we do now. Today it’s not only the subject of documentary series, where physicists describe cosmology and even theoretical physics for laymen, but those laymen, millions of them, are able to understand what is being explained to them. They can conceive of it. The human brain can, by now, turn all that physics into a model it can see in its mind’s eye. We can’t yet do that with the neurology of consciousness. I like to think we are at the same stage now with our own neurology as we were a hundred years ago with conceptualizing evolution. People then knew that there was such a thing, but they had to think of it as a mysterious process called “evolution”. Now we know how it works, and evolution isn’t a mysterious process at all. It’s something we can conceptualize so readily that unless we’re religious, there’s no mystery to it at all. It’s a Wikipedia entry.

We’re a couple decades away from that as far as human consciousness goes. But it too will stop being a mystery, will stop being so difficult to conceive of. I’m not saying it will be fully figured out by then, but it will be seen as a noun, a thing, something that can be explained as a physiological process. By then there will no longer be philosophers involved in the discussions, any more than philosophers are involved in oncology or genetics or climate science now. At some point science becomes mechanics, a study of process.

As far as the mind and consciousness go, we’re not at the point yet where the process has been defined or even discovered. It’s like a 16th-century map with big empty spaces unhelpfully labeled terra incognita. And for laymen like us, several steps removed from the state of the science, we’re left in the dark, and almost always a couple of years–even a few years–behind the research. And the competing theories are still being battled out in the journals; consensus is far off. But I think that in a generation at most we’ll be working with a model of human consciousness just like we have a working model of evolution today.

By then we’ll no longer be asking if there’s a there there, or thinking that there might not even be a source of consciousness, or debating whether consciousness is a noun or a verb. All our discussions, and certainly an essay like this, will seem terribly quaint. Consciousness will be understood as a process, we’ll know to a much greater degree just where in the brain it’s located and how it happens, and it won’t ruin anything. I think there’s a fear that if we discover the actual mechanics of consciousness it’ll ruin everything somehow. That we need the mystery. But understanding evolution didn’t ruin everything, nor did the discovery of our tiny, insignificant place in the universe. What we are, what makes us people, what makes us cognitive beings, is based on much more than theories of mind, universe or genetics. And none of that will be changing soon. After all, we invented the Internet (the only human creation that comes anywhere near the complexity of the human brain) and then filled it with porn. If baboons had invented the Internet they would have filled it with porn, too. And if baboons could, they’d have TMZ. We’re all primates. Knowing exactly what consciousness is won’t change that a bit.

It’s now 6 am and I’ve spent the whole night writing about thinking. I began the night thinking about sleeping. Which I’d better try to do a little bit before it’s too late.

Neurons of the neocortex–your consciousness is in there somewhere. Photograph by Benjamin Bollmann, from Sebastian Seung’s


Oliver Sacks

One of Oliver Sacks’s great unnoticed achievements was helping to bury Freud. By displaying in clear prose how behavior and thinking and observations are shaped by neurological processes, and not by subconscious fears and desires and the misdirected horniness that cannot be named, he undermined for millions of readers the entire basis of so-called Freudian science. Freud’s work was mostly nonsense. It was highly imaginative, quite brilliant, and in a time when almost nothing was known of the actual workings of the brain (probably 99.99% of all of today’s knowledge of what the brain is, how it developed and how it functions has been uncovered in the past 25 years) Freud’s theories seemed plausible. Obsolete theories have a way of lasting in the public eye long after their scientific invalidation. People retain what they learned in school for life, and everyone took a psychology course or three. Sadly, just about everything we learned in those psychology courses prior to the 1980s or ’90s (depending on how hip your professor was) has turned out to be irrelevant if not flat out wrong. And we learned a lot of Freud. Of course we did. He was to psychology then what Charles Darwin was to biology. He was the big thinker.

But in all those Oliver Sacks best sellers, Freud never comes into the picture at all. Sacks lays out the neurology, the actual brain processes, making it all beautiful and real and utterly fascinating. And his observations were fact-based and scientifically proven, that is, there was a rigorous testing procedure to establish those facts. Freud was guessing, fantasizing really. About the closest I remember anything coming to being proven in psychology class was Pavlov’s salivating dogs and some of Skinner’s disturbing behavioral experiments with his own children. Otherwise we just took it all on faith. But Sacks’s stories–case studies, really, beautifully written–were so factual and real that they rendered Freudian theory, for his readers, as implausible as any pseudo-science. He didn’t even have to tell us so. It’s just that for people who read Sacks–and millions did–Freudian theory suddenly seemed kind of absurd. Shelve it with phrenology, physiognomy, eugenics and Lysenkoism. Freud was that wrong. There was simply no evidence of his theories in the brains of the people Sacks had treated. Of course not. These people were all neurons and brain regions and wiring gone synaptically, tragically wrong. He could explain his patients’ sometimes bizarre conditions by showing us just what was wrong, neurologically. It might be weird and counter-intuitive, but it made sense. We can only imagine how a strict Freudian analyst would have diagnosed a man who mistook his wife for a hat.

Oliver Sacks was a key figure in changing the way people see the brain. His little true-life stories allowed us to grasp the stunning complexity of neuroscience. The public’s image of what we fundamentally are shifted dramatically. Where once we were all Oedipal, now it might just be a few neurons shaken loose. Sacks made the brain understandable to the layman, the real brain, full of flesh and blood and neurons and thought. We became us, the real us, and not a caricature with a fondness for Mom…and just in time, too. I mean the thought of a strictly Freudian Facebook is just too weird to think about.

Oliver Sacks with somebody’s brain. (Photo by Adam Scourfield for AP)

Your brain on homonyms

I know very well [posted a friend of mine], really by instinct, the proper placement of apostrophes and other punctuation and the usage of words like “there, their, they’re”, “you, your, you’re” and so on, and yet I get to typing so fast that my brain is constantly pulling the wrong one out of my hat. I get so embarrassed when I discover these mistakes later.

I love when that happens, actually, because I think it shows so much about the brain and language.

This is how I think that happens: we type by listening to the inner voice speaking in our heads…language was strictly spoken for a hundred or so thousand years before we began reading it, and very few people at all were reading and writing it until the last century or two. And typing was not invented till 150 years ago. When we direct our fingers to type we are actually listening to the narrative voice running through our heads, and then some center of the brain in turn directs the actions of our fingers to type what it is “hearing”…an impressive task, as typing is something much more complex than writing with a pen or pencil (or with a stylus on a mud tablet, as they did when writing was brand new). We sort of type the way a pianist plays, hearing the music in his head and then coordinating his hands and fingers to play it. Of course, in music you can harmonize and so use all the fingers, but writing has no harmonies (unfortunately), so it’s more like playing trumpet: one note on a trumpet equals one letter on a keyboard. (Both language and speech are more like playing the trumpet, too, but that’s another essay.)

So our typing fingers hear the sounds of the words; they don’t see them. There, their and they’re can be placed interchangeably by our undiscriminating fingers because they are homonyms (though perhaps not in all dialects). Some people could make the same mistake with merry, marry and Mary, though in my regionally accented ears those three sound like different words, unlike there, their and they’re. It doesn’t always happen; perhaps it’s a matter of context or syntax that enables our typing fingers to figure out the correct homonym. Sometimes, though, they mess up, and probably more often than we realize, because we catch ourselves making the mistake as we type. But we don’t always, so quite often a homonym (i.e., a word that sounds like another but is spelled differently) has to be scene–that is, read–and not heard–that is, written–to be corrected.
And I notice now that I typed scene instead of seen, proving my point. Ha!

Basically, we are typing by dictating to ourselves. We hear but don’t see the words…and if we DO see the words it immediately breaks up our train of thought, as reading and writing are two completely separate processes in the brain and we can’t do both simultaneously. So those their/they’re/there’s will keep popping up and their/they’re/there’s not much we can do about it except proofread–and even then, if you proofread too soon your brain still has the short-term memory of “hearing” what you wrote and your eyes will not always catch the mistake. Sometimes, if writing a story, it’s good to let the draft sit for an hour before proofing it, by which time that short-term memory will be gone and you are actually reading what you wrote and not remembering what you narrated.

I also typed are as our…making at least two mistakes that made it past two drafts (and spell check) before I noticed them. In a digital format it’s no big deal. But years ago, when they got past me, spellcheck, my editor and at least one copy editor to wind up fast and permanent on paper in my jazz column in the LA Weekly, I would invariably be admonished by readers. You should really learn how to write, somebody would say. Usually from the safety of an email, but sometimes in person, at a jazz club, if they were drunk enough. You should learn how to write, they’d say, then totter off.

Memes and meme theory

There has to be some neurological reason why people instantly believe Facebook memes. They will insist that the meme was correct even when shown information that disproves it. So we don’t read memes the way we read, say, an ordinary Facebook post. We certainly don’t read them the way we read articles or blogs. We retain an element of skepticism when we read something not in a meme. But memes are not only believed, they are believed without question. Somehow, the part of the reading process that takes in the information we read and mulls it over before accepting it–a process that takes a fraction of a second, but it is there, allowing you to tell a lie from fact, a joke from a real story–that process is completely skipped when we look at a meme.

It might be that we read memes the way we read traffic signs. They come similarly packaged, and we can’t actually edit them or change the letters around; a meme is a picture of language. As is a road sign. We never doubt a road sign. If it says stop we know it means we should stop. If it says merging traffic ahead we know there will be a lane of traffic coming in. If it says no parking we never assume it means we can park there. We just believe. We may not obey, but we don’t deny that what the sign tells us is true. I’m not sure how that works. I’m not sure why we instantly believe a traffic sign, with no need for reflection, while we find ourselves thinking the traffic laws in the driver’s manual are stupid. But I suspect the reading process for memes and traffic signs is similar. Because most people instantly believe memes, without question. It takes effort to doubt them. None of us who do doubt them began by doubting them. We learned to do that, and we are in the minority. And when we do tell people that the memes are wrong, the meme believers will doubt that we are correct. No matter how much information they are shown, they will be skeptical of the actual information presented in a non-meme format–written in a post, say, or presented in a link–and will actually argue that the meme was correct. And that is neurological. That is an automated brain process. That is something very difficult to avoid. A meme–always presented in picture form, such as a JPEG–has an ability to circumvent our critical thinking faculties and become fact in our mind, much as we automatically believe a merging traffic ahead sign. Its viral potential is phenomenal because its information is believed, without question, by most people who read it. I remember reading about Dawkins’ meme theory, before these Facebook-style memes even existed, and meme theory fell apart because there was no mechanism for transmission. But now, via Facebook, there is a mechanism for transmission.
A meme can spread from human brain to human brain via our eyes and our ability to read language. It can’t be spread to a blind man. It can’t be spread to someone who can’t read, or to someone who can’t read the language in the meme. But it can be spread in picture form–if you rewrote it in text it would not be believed automatically–and will be believed the way traffic signs are believed. There is a way to get people to believe anything they read if it can be put in a picture format. The fact that a meme several hundred words long is still believed without question, in a way an article is not, is probably because we read so much more than we ever did before: we are online so many hours a day, and when we are online we are reading constantly. We are just used to more written words now, and as such, we can compartmentalize entire paragraphs into picture-like packets that are taken in the way we read a traffic sign.

The potential for exploitation here is breathtaking, and doubtless it is already happening. Meme theory, long just a nifty idea, a theoretical possibility, can actually happen via Facebook. When we share a meme, we are replicating it in the mind of whoever reads it. It has to be the single fastest way of spreading identical information that there is, and only with discipline can a person learn to read memes critically, because they are designed to be believed exactly as they are written. They are, I suspect, a revolutionary form of spreading information. Probably a temporary one; eventually people will begin reading them like we read everything else, critically and skeptically. But for now, memes will keep spreading worldwide, too often sowing misinformation and disinformation, utterly believed by nearly everyone who reads them because the belief is automatic.



There’s nothing like accidentally posting a random collection of notes to your blog and then having to go into all the social media sites and delete it. This didn’t happen when this stuff was all analog, with an analog pen and analog paper and analog edits and analog scratching out and analog illegibility. Not to mention the lost art of margin doodling. Times were simpler then. Messier, but simpler. I almost miss ink-stained hands.

I have a whole box full of analog words like that. Page after analog page. I like looking at the edits. The sentences lined out and rewritten in the margins. The paragraphs lifted up and dropped onto a whole other page. Sometimes there are entire pages scratched out that I really like now. This was a much younger brain; I wonder what it thought when it saw this stuff. And this was before email, before instant messages, before texting and tweets and Facebook posts. Before the comments sections on news sites. Before blogging. This was a different universe. In that universe none of you people would be reading this. In fact none of you would have read anything I wrote unless you picked up a West Coast Review of Books or an obscure rock zine or two.

But that universe was pure creativity, a lab, a mass of failure, the occasional gem. Rhymes even. Certainly a lot of epilepsy. I keep thinking I ought to drag that box out of the closet and zap some of that stuff into the digital universe. But there’s so much. It’s a helluva lot of work, transcribing. And it feels weird going back in time like that. You begin to feel the way you felt decades and decades ago. That fresh, unwrinkled skin. The raging testosterone. The stupidity, on one hand, and then all those brain cells long since gone. What would it feel like to be dropped into my twenty-five-year-old body with a brain a quarter again as big as mine now? Would it be noticeable? How could it not be? Like moving into a sprawling ranch house from a two-bedroom apartment. All this snuggly comfort would be gone in all those rooms, but think of the views you’d have. Views you’d given up as your life got smaller, narrower, quieter. Even if the brain is only 15% smaller in volume, there are all sorts of synaptic paths you’ve abandoned. Like that big ranch house full of nooks and crannies you no longer use. A back door you haven’t opened in decades. The kids’ room, left as it was. A garage stacked with inaccessible boxes full of things you forgot you ever had. Neurons have settled into comfortable patterns. Some are passed by, ignored. Some have drifted into other areas of responsibility. Much has been sorted into piles, some you need, and some are like those boxes in the garage. You just don’t get excited about so much anymore, not like you did when you were in your twenties, because your brain is so set in its sensory and concept reception ways. It’s gotten comfortable, in sort of the cognitive equivalent of a favorite chair, watching old movies.

Our brains are at their maximum size in our twenties…after that the brain doesn’t bother replacing the cells–neurons and glia both–that it doesn’t think are necessary. We don’t have a choice; it does it for us, it economizes. Such a shame. We’ll never know exactly what we’ve lost, but we know we lost something. I lost all those analog thoughts and memories. I’d love to have them back. Or maybe I don’t. Digital is easier, editing so simple. Mistakes so easily hidden. Things, worthless or not, so easy to save. I guess that’s a good thing.

So I’ll put off pulling out that box again and live in the now. It’s easier that way. As much as I reminisce about the analog universe, this digital one is much easier, while it lasts. Civilization is on the cusp of the next step. You can feel it. Something beyond this even, something beyond the written word. And people like me will be museum pieces then. Historical oddities. We wrote. You what? Wrote. What was that? This. That? Yeah this. Why?

Why? I have no idea why. We just did.