Vincent Van Gogh

(2015)

Yet another theory on Vincent Van Gogh, this time from an astrophysicist. Van Gogh's Turbulent Mind Captured Turbulence. "The painter's creations during his blue period mirrored nature's turbulent flows, as if his mind somehow tapped into a universal archetype, says astrophysicist Marcelo Gleiser." OK. Some sort of astrophysical mysticism. But while tapping into the universal archetype is cool, Van Gogh was very epileptic, and a lot of his technique was simply his painting the visual effects of his epilepsy. The auras around light (like the moon and stars in Starry Night), which can vibrate or pulsate, and the flattening effect, with objects appearing to be separated from the things behind them, as in the tree in the painting. To an epileptic, objects close up will appear three dimensional, objects further back will have less depth, and objects furthest back appear flat, like a backdrop. It's like a crowd scene shot on a soundstage in an old film, with real people up front, cutouts of people behind them, and paintings of people on the backdrop furthest back. It's actually a pretty cool effect. The first time I ever stood right in front of a Van Gogh–Irises, at the Getty–that is what I saw. The gradations of dimension. Vividly three dimensional in front. Less depth behind. And further back little or no depth at all. I was startled at how epileptic it was. It was like I'd missed a dose of my meds. Pretty cool.

On evenings that I have nothing to do, if the effect creeps up on me I’ll sit out on the deck and the view goes very Van Gogh–not like Starry Night, but earlier, before his temporal lobe seizures got out of control. It’s the most gorgeous thing, one of the symptoms of epilepsy that I love. I’ll sit out there and just look at all the beauty of it for an hour or so, and wait a bit before taking my pills. Once they hit the bloodstream it all goes three dimensional again, a shame, but it’s better that way. But you all don’t know what you’re missing. If you ever dropped acid you’ve seen some of the same effect, though you thought you were a Greek god at the time.

Sometimes I'll take advantage of that state to write. Push off taking those pills a bit. It's an old epileptic writer's trick. There's an intensity to writing epileptic. You get lost in it. Hours can pass. Sometimes it can get out of control and the writing comes out like Neal Cassady on speed, unusable. Sometimes it'll take me so close to the bone I'll begin trembling and feel physically ill. I don't like to let that happen. And sometimes it comes out just right. A little edgy, maybe, a little weird even, but just right. Still, I don't recommend it as a writing technique. But it does give me a glimpse into Dostoevsky's muse. Or what Van Gogh might have been seeing.

You hear a lot about Van Gogh's use of absinthe. As if absinthe were some crazy elixir, like LSD. But it's basically alcohol, a very high octane alcohol, like white lightning or Rumple Minze. You drink enough of that kind of stuff you'll be messed up, epileptic or not. Of course, it would likely have very deleterious effects on epilepsy, exacerbating it severely. Our wiring is thinly sheathed, our neurons very susceptible to firing out of sync, or too often, or firing when they shouldn't be firing. Doesn't take much to short us out. Any stimulus is a potential risk. And in Van Gogh's case, uncontrolled by medication as it was, any booze in excess probably messed him up badly. Caffeine could have too. Or any drugs he was doing. Even cigars. Because stimulants–any stimulants–can exacerbate epilepsy. And Van Gogh's main problem was he was very epileptic. That's what he was diagnosed as during his lifetime, that is what he was being treated for, that is what he was being medicated for, though the meds at the time were only partially effective. There's a terrific chapter on him in the book Seized by Eve LaPlante. Van Gogh was classically epileptic, with a classically epileptic inter-ictal personality. That is, he was different even between seizures. And he had a lot of seizures. Not big seizures, so much, but lots and lots of lesser seizures. What are commonly called petit mals. You have a lot of those and they will mess you up. I've been there. It's not easy. I can't imagine living that way without effective medication. I'd be in a 24/7 epileptic world. I'd be unmanageable, full of rage and inspiration and moments of brilliance and many more of embarrassment. I'd be writing every conscious moment. I'd be falling madly in love incessantly. I'd drive people nuts with chatter. I wouldn't want to sit next to me on a bus. I'd be a lot like the descriptions you read of Vincent Van Gogh. I don't mean the talent, obviously, or the genius or any of that. I mean the personality.
The epileptic personality. Temporal lobe epileptics (or frontal lobe epileptics whose seizure activity extends into the temporal lobe) are all remarkably similar. Do a lot of the same crazy things. And Vincent Van Gogh was as epileptic as they come. Textbook. He wasn't crazy, or delusional, or mad. He was just really tore up by his virtually uncontrolled temporal lobe epilepsy. You look at his paintings chronologically, you can see it increasing. Something was making it worse. The visual effects he recorded are intense. Obviously the concentration required in painting was bad for him–ideally, an epileptic should do as little as possible–and his frustration is palpable. He would have known what the problem was. Suicide is not an uncommon cure.

I'm not sure why art historians refuse to label Van Gogh's malady as epilepsy. I guess there's a deeply rooted stigma about epilepsy and a romance to madness. Whatever. But Van Gogh was as messed up as Dostoevsky, who was also classically epileptic. Had they lived today, neither would ever have become the artist he was, as their distinctive art was so based on badly controlled temporal/frontal lobe seizure activity. Pills would ruin that. They would have been much happier, written shorter books, painted fewer paintings. But the live wire creativity that goes with uncontrolled epilepsy, that would never have happened. We might have Irises, but no Starry Night. Maybe a Crime and Punishment, but no Brothers Karamazov. I think about Van Gogh and Dostoevsky often, and I pity them, a little. They were messed up. Tragically so. Yet being messed up is what turned them into such extraordinary figures. An extraordinary painter, an astonishing novelist. There's a bizarre tendency in modern western civilization to disprove that anyone famous was epileptic. There is scarcely an epileptic of note for whom experts haven't tried to erase the shame of the falling sickness. They're forever looking for some other explanation. Dostoevsky made that impossible in his case with his vivid descriptions of seizures throughout his novels. Van Gogh, though, has been the victim of quite literally dozens of theories of his "madness". Some are feasible, others unlikely. Sometimes very unlikely. And now a scientist has him metaphysically in tune with the mathematics of the universe. How this works, who knows. Is there any science behind this? Absolutely not. It is complete fantasy. The truth is that Vincent Van Gogh had epilepsy, and his art was an epileptic's eye view of the world.

Starry Night--the sky.

Detail from Starry Night.

Hypergraphia

Funny thing, epilepsy, its demands pretty much control your life. Especially when you suddenly can't take one of your meds for a week. This blog was put on hold…I stayed in, low stress, no driving, taking too much of the other pill (it jacks up testosterone levels…how's that for a side effect?) and watching comedies all week. Laughter, as Reader's Digest used to say in bathrooms all across America, is the best medicine.

I also avoided writing. Writing and epilepsy are interlinked in me, the hole in my brain is right smack in the language center, the neurons are all crazy there, abuzz with excess electro-chemical energy, making words and sentences and chatter come out in torrents. You learn to contain the chatter and get a handle on the torrent of prose. I do have a couple pieces on the blog somewhere that are pure epileptic energy, endless paragraphs, ideas whipping about with near Brownian motion. I read them now, thoroughly medicated, and they look nuts. A pal of mine loves the stuff, though. Reads like a beatnik on speed writing Roman history, he said. I assume it was a compliment. Inevitably, though, epileptic writing leaves me sick–literally nauseous, dazed, out of it. A Sonny Rollins review once put me to bed it left me so ill. I feel like that and wonder how the hell Dostoevsky managed so many perfect novels, each as long as the Manhattan phone book. The poor bastard must have been sick all the time. I know I’ve avoided writing another piece like that Sonny review. If I feel myself getting that deep I pull back, make a wisecrack, take it down a notch or two in intensity. I don’t like to write myself sick.

Anyway, on Tuesday I was finally able to take Tegretol again. It's the champagne of bottled medicines, you know, quite the luxury at over three bucks a pill. Within a couple hours I could feel it, in the long neurons that run the length of our arms and legs. It's like they mellow out. That's what Tegretol does, it settles down the neurons, or settles down their firing anyway. A nerve impulse is just sodium and potassium ions flipping sides across the membrane of the neuron (aka the nerve cell), a wave of charge that runs down the cell and sparks the synapse that fires the next neuron to keep the impulse going. Too much of this activity you seize, not enough, say to your heart muscle, you die. Tegretol keeps everything at a sweet medium.
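If you like toy models, that sweet-medium idea can be sketched in a few lines of Python: a cartoon neuron that leaks charge, fires when it crosses a threshold, and a damping knob standing in (very loosely) for what a stabilizing drug does. Every number here is made up for illustration; it's a cartoon, not pharmacology.

```python
# A cartoon of the "sweet medium": a toy leaky integrate-and-fire
# neuron. Purely illustrative -- the real pharmacology of a drug
# like carbamazepine (sodium-channel stabilizing) is far subtler.

def count_spikes(input_current, damping=1.0, steps=1000,
                 threshold=1.0, leak=0.95):
    """Run a toy neuron for `steps` ticks and count how often it fires."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = v * leak + input_current * damping  # leak a little, add input
        if v >= threshold:                      # reached threshold: fire
            spikes += 1
            v = 0.0                             # reset after the spike
    return spikes

normal = count_spikes(0.2)                   # ordinary excitability
overexcited = count_spikes(0.4)              # too much drive: seizure-like
medicated = count_spikes(0.4, damping=0.5)   # damping brings it back down

print(normal, overexcited, medicated)  # prints: 166 333 166
```

Doubling the input drive doubles the firing rate, and the damping knob brings the overexcited neuron right back to the normal rate. That's the whole point of the cartoon: too much firing and the toy "seizes," damp it to zero and it never fires at all, and the medication sits in between.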

We drove around doing a bunch of chores on Tuesday (I’d really missed driving) and once our tasks were out of the way I sat down at the computer and began writing. And writing. And writing. All these pent up words came pouring out. I just couldn’t stop writing. I kept returning to my desk and out popped another story. I was like a blueballed teenager in a room full of cheerleaders, frantically releasing what had been pent up for too long. Just writing and writing, sometimes all night long into the morning. A few hours sleep and then back at it. Hell, that’s what this piece is. Just some silly essay about an epileptic’s hypergraphic world, in case you were wondering where the hell all these stories are coming from, not that any of you actually were. This is just writing for its own sake. That hole in my brain has a strange power over me sometimes. But it’s been there as long as I’ve been alive, so I’m used to it. Consider it a blessing, a reader told me. No, I said, I consider it a pain in the ass. And then I wrote about it.

Stutter

Damn Lamictal woke me up again. That's the epilepsy med that increases testosterone, and apparently gnarly dudes don't sleep. The other seizure med I take, carbamazepine, just makes me stutter incoherently on occasion. The stutter used to be really bad. The side effect of the carbamazepine combined with the speech problems caused by the epilepsy itself, and I'd get stuck on sounds and couldn't say them no matter what happened. The voiceless "th" sound–as in thistle–would invariably trip me up and I'd be th-th-th-th-th-th-th-th….sounding like a leaking air compressor. The ladies thought it was the most darling thing ever, the big giant macho dude with the cutest stutter. I'd turn red with frustration and embarrassment. That's ok, they'd invariably say. Then they said awwww. God I hated those awwwws. But women could also invariably understand what I was trying to say. It would sound like gibberish even to me, yet they could understand it. I assumed it was the baby talk instinct, and I was just some big dude making baby talk. They'd smile. Awwww.

The reactions of the guys at work were funnier. I don't know if it was the extra testosterone or what, but most of them would be really meek around me. Hesitant. A tad obsequious. Not all, and certainly not the alpha males. Just the nebbish types, which, in an office, is an awful lot of them. I was the big gnarly dude in the office. But when I started stammering they couldn't figure it out at all. Men can't speak baby talk. To us a goo goo is a goo goo. A stammer just gibberish. They'd come up to me in the hallway with just the hint of a kowtow and say something friendly. I'd look down at them and start stammering. They'd freeze. Their expressions were priceless. They had no idea what I was trying to say, but it probably meant I was going to hurt them. They'd scurry off to the safety of their cubicles. Awwww.

The limits of one’s language are the limits of one’s world.

A friend said something beautiful. The limits of one’s language, he wrote, are the limits of one’s world. My friend, an old punk rocker like me, had of late revealed a gift for poetry hidden in prose. Deep stuff, beautifully written. Even something so dry and philosophical in another’s hands came out with a lilt, a tinge of sophistication. The words spill out like a melody. How the limits of our language are the limits of our world.

But I dunno. That seems too limiting. I think that sometimes those bent towards language overestimate its importance. Here's why:

You see, our brains get most of their information without language at all–via sight, mostly, but also hearing, smell, touch, taste, not to mention balance, motion, memory, time, pheromones, etc….it’s not our world that is limited by our facility with language, but our ability to put much of that into words. Language is maybe 100,000 or 200,000 years old, but sensory perception is a billion years old, so the vast majority of how we perceive the world cannot even be described in words because it was created so long before we began using language…and most of it we are unaware of anyway. These are awarenesses (to coin an ugly plural) that happen without us even knowing they’re occurring. Most of what makes us aware we don’t even realize is operating. We don’t know in the same way a deer doesn’t know, or a lizard, or a fish. Or even an invertebrate…some of whose sensory perceptions extend far back, long before the first hint of a backbone–then a boneless column we call a notochord, still found in lampreys and lungfish–ever evolved.

Light, for instance….far back in the Precambrian ancient animals reacted to light and dark. We still do. People in polar zones become depressed during the winter when the sun scarcely rises above the horizon. They are wrapped in darkness 23 hours of the day, and depression sets in, and they sit, morose and miserable, in their overheated cabins longing for sunlight and perhaps not even knowing it. The roots of that go back a billion years, long before eyes existed. We bask in the light of the sun and don't even realize just how primal that is. It is fundamental to who we are, this profoundly binary sense of light versus dark. Try describing that feeling without relying on scientific concepts (as I just had to do there). Try describing that feeling in the first person without explaining it. Just describe how it feels and what is happening to you. You can't. It's something that exists not only without language, but without consciousness at all. We simply don't think about it, nor can we explain it in language. Language doesn't realize it's there. It is an awareness beyond language, a perception beneath what we would consider a conscious perception.

Notice I didn't even mention writing. Here's why….writing is maybe 5,000 years old, written storytelling (The Epic of Gilgamesh, for example) maybe four thousand. So we can so far only put a tiny percentage of language into writing. It's still more a technology than an instinct. We learn to write. Language is built in, and we begin to speak at a certain age. Writing is so new that in brain terms it is almost inconsequential. We write after someone shows us how to.

So I don't think that the limits of one's world are the limits of one's language, because language barely scrapes the surface of how we perceive the world…since almost all of it is done without language at all.

I think the more you write the more limited the language you write with seems…it can be really frustrating seeing something that you can't actually describe…and describing sound is even harder. Music is virtually impossible to describe, because writing is a visual thing–you describe what you see–and not a hearing thing. And this is just scratching the surface. We are very profoundly driven by pheromones and yet we aren't even aware of it. Pheromones work below the level of consciousness–they evolved long before consciousness did–and so we are not actually aware of just how much of what we are is pheromone driven. Which means we can't even write–let alone talk–about experiencing them. We can't describe how pheromones are making us act, and yet they affect almost everything we do. That is very much our world, but how do we write about it when we don't even know what is happening? It's like theoretical physicists postulating all kinds of dimensions of universes existing at the same time, and that we live in all of them simultaneously, but we don't know it. Those are hypothetical constructs, though, while we actually do live within multiple sensory dimensions, yet are only consciously aware of sight, hearing, taste, smell and touch. When you realize that pheromones have vastly more impact on us than does taste or smell (as in olfactory smell–pheromones are "smelled" differently), and yet we cannot even tell what they do, that's when you realize the problem with language. It barely scratches the surface of who we are and what the world is around us. Our brain and body respond to far more sensory information than we are consciously aware of.

Apes with extraordinary cognitive abilities

Once you realize that every single human being there is has inside their skull the most complex thing that we know of in the entire universe, it gets a little weird. There are over 7 billion of these brains out there right now, all over the planet, each vastly more complex in its interconnectedness than the universe it exists in. Dig these numbers: a human brain has about 86 billion neurons, and roughly ten times that many glial cells, upwards of a trillion. Each of these neurons fires five to fifty times a second and each of these neurons has up to ten thousand connections with other neurons. The estimates for the total number of synapses (i.e. the connections) between our neurons run from 100 trillion to 1,000 trillion (or one quadrillion). These synapses connect via dendrites (little filaments that grow from the surface of a neuron) and there are more dendrites than are used by a neuron at any given time, so the potential number of connections could be one million trillion (or one quintillion). The difference between the maximum number of actual synapses (one quadrillion) and potential synapses (one quintillion) means the brain hasn't come close to maximizing its capacity. And it means that the brain can continue to grow in complexity (and size). The human brain currently uses but a tiny fraction of its synaptic capacity. There simply isn't enough to think about yet to fill it up.
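For the curious, the arithmetic behind those totals is easy to check. Here it is as a few lines of Python, using the standard estimates of roughly 86 billion neurons and an upper bound of ten thousand connections per neuron (nothing here is new data, just multiplication):

```python
# Back-of-envelope check on the synapse arithmetic.
# Standard estimates: ~86 billion neurons, up to ~10,000
# synaptic connections per neuron.

neurons = 86e9                # ~86 billion neurons
synapses_per_neuron = 10_000  # upper-end estimate

actual_synapses = neurons * synapses_per_neuron
print(f"synapses (upper bound): {actual_synapses:.1e}")  # 8.6e+14

# At 5 to 50 firings per second per neuron, the whole brain
# produces hundreds of billions to trillions of spikes a second.
spikes_low, spikes_high = neurons * 5, neurons * 50
print(f"spikes per second: {spikes_low:.1e} to {spikes_high:.1e}")
```

So the 100-trillion-to-a-quadrillion range for actual synapses is just neurons times connections, and the quintillion figure for potential connections is a further extrapolation from all those unused dendrites.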

83% of your brain is cerebral cortex, the thing that makes you you and people people. That cerebral cortex has grown at an astonishing speed, evolutionarily speaking. In just a couple million years it has expanded from chimp size (about a pound) to what it is now (about three pounds). Indeed it has grown so fast that it developed the folds you see in a brain in a jar, in order to maximize the number of neurons that could be crammed into the area available inside the skull. These folds increase surface area inside a limited space (or skull size), which increases the number of neurons and synaptic connections between them. The size of our skull is limited by the dimensions of the human female's birth canal. Indeed, the difficulty of human birth is due entirely to the size of the Homo sapiens sapiens cerebral cortex. Were the woman's pelvis able to widen further (it can't, or at least natural selection isn't capable of widening it at the same rate as a continuously expanding skull)–or were it detachable like a snake's jaw (it isn't)–the human skull might be even larger, since apparently skull size is one of the things that can change quickly in our species through time (look at a collection of us and our predecessors to compare).

Now about those glial cells. There are ten times as many of them because they are much smaller than neurons. We used to think all these glia simply held neurons in place–it is vital that neurons remain in place to keep the synapses firing correctly, since synapses are not actually linked together but are just close enough for an electro-chemical signal to cross between them. But these glial cells also help to provide the neurons with nutrition, such as oxygen and minerals like the potassium used in neurotransmission, which neurons exhaust quickly. And glia also help with repairs and supply the myelin which, like the rubber around a wire, shields the current running down the length of one neuron on its way to the next. But now it's also known that much of the brain's incredible plasticity is due to glial cells, and that they are used in communication (and even breathing) and who knows what else. Glial cells, like everything else about the brain, just keep revealing more complexity.

And the complexity of all this is so vast that we are incapable of actually visualizing it. We fall back on huge numbers like quadrillion and quintillion, or compare it to the relative paucity of complexity in the known universe. What we have in our own skulls, and which is our very essence, we can barely understand. But every person you see has something in their heads that is more astonishing than the entire known universe. I can tell you that without truly comprehending it myself, because it is not comprehensible. We can understand it as a fact, an abstraction, but not actually appreciate just what it means. Like how we know what infinity is, but we can’t truly comprehend what it is. Our brains have myriad capabilities beyond our capacity to understand, because our brains are smarter than we are intelligent. After all, we are still just apes. Apes with extraordinary cognitive abilities, but still apes.

Thinking about thinking and vice versa

“A human being might be more a verb than a noun” a friend said, discussing consciousness. A great line, that. There may not be a mind as much as a process, he explained. I’d been reading that theory too. What we think of as us is just the result of a lot of various brain processes. There was an air of mystery to it.  There is no there there, someone else chimed in. I liked that line, too. But neither it nor the verb line quite did it for me. But it got me to thinking, and then to writing, and writing, and writing. Rather than sleeping, sleeping, and sleeping, which would have been a much better idea, running on fumes as I am.

But when discussing the nature of consciousness, I just don't think there's a difference between it being a process or a thing. It's still all neurology, which is just a way of saying physiology. Except we can't say physiology until we actually know to a fairly definitive degree which neurons are doing what to allow consciousness–aka the human mind–to happen. But you can say the same thing about climate. There's a whole bunch of things that combine in all kinds of varied ways to create what we call "climate". But very few are a complete mystery to us anymore. They can all be explained and modelled. Debated, yes, and various models drawn up, but they are all models based on data. We're not there with the mind yet. We know that it's all neurology, we just don't know exactly what does what in order to make us conscious. But once we do, we will no longer think of it as a verb. We'll call it a process, a thing. It'll be a noun.

But as I said, we aren't there yet. And since we aren't there yet, we tend to ascribe to it a sense of mystery that makes it more than a mere thing. But that sense of mystery is just a state of ignorance. That is, we don't know how it works yet. Once we do, it'll no longer be a philosopher's quandary, a mysterious unknowable thing, a process that can't be reduced to a noun…it'll no longer be anything other than a process that can be described like any other process. Nothing metaphysical about it.

Neurology and cognitive science have advanced at such a prodigious rate since the 1980s that it's difficult for non-specialists to even conceive of what all the new knowledge means. There is so much known about us now, about what our neural networks do, about how so much of our behavior can be located in places in the brain that can literally be seen and touched…and yet we still don't even have ways to understand it except in neurological terms. Yet we really are the result of patterns of electro-chemical responses in our neurons and glia. If your electrolytes run low, a neuron can't carry the charge it's receiving from another neuron to the next. No potassium, no thought. My wife's heart stopped that way, and with it, her brain. (Both were revived, thankfully.) Or too much potassium and the neuron begins firing too many adjacent neurons and you can have epilepsy (which is why I have to be very careful with bananas and other high potassium foods.) And while these very simple chemical processes are at the root of human consciousness, it's the astonishingly complex lattice of interactions across 86 billion neurons and hundreds of trillions of possible synaptic connections that creates the thing we call consciousness…a process that as of now is simply too vast and complex and variable for us to really understand as a whole.

Which is a shame, because until we can conceptualize that whole, consciousness–what we are–will be an unfathomable mystery. But we couldn't discuss the universe a century ago like we do now. Today it's not only the subject of documentary series, where physicists describe cosmology and even theoretical physics for laymen, but those laymen, millions of them, are able to understand what is being explained to them. They can conceive it. The human brain can, by now, turn all that physics into a model it can see in its mind's eye. We can't yet do that with the neurology of consciousness. I like to think we are at the same stage now with our own neurology as we were a hundred years ago with conceptualizing evolution. People then knew that there was such a thing, but they had to think of it as a mysterious process called "evolution". Now we know how it works, and evolution isn't a mysterious process at all. It's something we can conceptualize so readily that unless we're religious, there's no mystery to it at all. It's a Wikipedia entry.

We're a couple decades away from that as far as human consciousness goes. But it too will stop being a mystery, will stop being so difficult to conceive of. I'm not saying it will be figured out by then, but it will be seen as a noun, a thing, something that can be explained as a physiological process. By then there will no longer be philosophers involved in the discussions, any more than philosophers are involved in oncology or genetics or climate science now. At some point science becomes mechanics, a study of process.

As far as the mind and consciousness go, we're not at the point yet where the process has been defined or even discovered. It's like a 16th century map with big empty spaces unhelpfully labeled terra incognita. And for laymen like us, several steps removed from the state of the science, we're left in the dark, and almost always a couple years–even a few years–behind the research. And the competing theories are still being battled out in the journals; consensus is far off. But I think that in a generation at most we'll be working with a model of human consciousness just like we have a working model of evolution today.

By then we'll no longer be asking if there's a there there, or thinking that there might not even be a source of consciousness, or debating whether consciousness is a noun or a verb. All our discussions, and certainly an essay like this, will seem terribly quaint. Consciousness will be understood as a process, we'll know to a much greater degree just where in the brain it's located and how it happens, and it won't ruin anything. I think there's a fear that if we discover the actual mechanics of consciousness it'll ruin everything somehow. That we need the mystery. But understanding evolution didn't ruin everything, nor did the discovery of our tiny insignificant place in the universe. What we are, what makes us people, makes us cognitive beings is based on much more than theories of mind, universe or genetics. And none of that will be changing soon. After all, we invented the Internet (the only human creation that comes anywhere near the complexity of the human brain) and then filled it with porn. If baboons invented the Internet they would have filled it with porn, too. And if baboons could they'd have TMZ. We're all primates. Knowing exactly what consciousness is won't change that a bit.

It’s now 6 am and I’ve spent the whole night writing about thinking. I began the night thinking about sleeping. Which I’d better try to do a little bit before it’s too late.

Neurons of the neocortex–your consciousness is in there somewhere. Photograph by Benjamin Bollmann, from Sebastian Seung’s connectomethebook.com


Oliver Sacks

One of Oliver Sacks's great unnoticed achievements was helping to bury Freud. By displaying in clear prose how behavior and thinking and observations are shaped by neurological processes, and not by subconscious fears and desires and the misdirected horniness that cannot be named, he undermined for millions of readers the entire basis of so-called Freudian science. Freud's work was mostly nonsense. It was highly imaginative, quite brilliant, and in a time when almost nothing was known of the actual workings of the brain (probably 99.99% of all of today's knowledge of what the brain is, how it developed and how it functions has been uncovered in the past 25 years) Freud's theories seemed plausible. Obsolete theories have a way of lasting in the public eye long after their scientific invalidation. People retain what they learned in school for life, and everyone took a psychology course or three. Sadly, just about everything we learned in those psychology courses prior to the 1980s or '90s (depending on how hip your professor was) has turned out to be irrelevant if not flat out wrong. And we learned a lot of Freud. Of course we did. He was to psychology then what Charles Darwin was to biology. He was the big thinker.

But in all those Oliver Sacks best sellers, Freud never comes into the picture at all. Sacks lays out the neurology, the actual brain processes, making it all beautiful and real and utterly fascinating. And his observations were fact-based and scientifically proven, that is, there was a rigorous testing procedure to establish those facts. Freud was guessing, fantasizing really. About as close as I remember anything being proven in psychology class was Pavlov's salivating dogs and some of Skinner's disturbing behavioral experiments. Otherwise we just took it all on faith. But Sacks's stories–case studies, really, beautifully written–were so factual and real they rendered Freudian theory for his readers as implausible as any pseudo-science. He didn't even have to tell us so. It's just that for people who read Sacks–and millions did–Freudian theory just suddenly seemed kind of absurd. Shelve it with phrenology, physiognomy, eugenics and Lysenkoism. Freud was that wrong. There was simply no evidence of his theories in the brains of the people Sacks had treated. Of course not. These people were all neurons and brain regions and wiring gone synaptically, tragically wrong. He could explain his patients' sometimes bizarre conditions by showing us just what was wrong, neurologically. It might be weird and counter-intuitive, but it made sense. We can only imagine how a strict Freudian analyst would have diagnosed a man who mistook his wife for a hat.

Oliver Sacks was a key figure in changing the way people see the brain. His little true life stories allowed us to grasp the stunning complexity of neuroscience. The public’s image of what we fundamentally are shifted dramatically. Where once we were all Oedipal, it might now just be a few neurons shook loose. Sacks made the brain understandable to the layman, the real brain, full of flesh and blood and neurons and thought. We became us, the real us, and not a caricature with a fondness for Mom…and just in time, too. I mean the thought of a strictly Freudian Facebook is just too weird to think about.

Oliver Sacks with somebody’s brain. (Photo by Adam Scourfield for AP)

Your brain on homonyms

I know very well [posted a friend of mine], really by instinct, the proper placement of apostrophes and other punctuation and the usage of words like “there, their, they’re”, “you, your, you’re” and so on, and yet I get to typing so fast that my brain is constantly pulling the wrong one out of my hat. I get so embarrassed when I discover these mistakes later.

I love when that happens, actually, because I think it shows so much about the brain and language.

This is how I think that happens: we type by listening to the inner voice speaking in our heads. Language was strictly spoken for a hundred thousand years or so before we began writing it down, and very few people were reading and writing it at all until the last century or two. And typing was not invented until 150 years ago. When we direct our fingers to type we are actually listening to the narrative voice running through our heads, and then some center of the brain in turn directs the actions of our fingers to type what it is “hearing”…an impressive task, as typing is something much more complex than writing with a pen or pencil (or with a stylus on a mud tablet, as they did when writing was brand new). We sort of type the way a pianist plays, hearing the music in his head and then coordinating his hands and fingers to play it. Of course, in music you can harmonize, so a pianist can use all his fingers, but writing has no harmonies (unfortunately), so it’s more like playing trumpet: one note on a trumpet equals one letter on a keyboard. (Both language and speech are more like playing the trumpet, too, but that’s another essay.) So our typing fingers hear the sounds of the words rather than see them. There, their and they’re can be placed interchangeably by our undiscriminating fingers because they are homonyms (though perhaps not in all dialects). Some people could make the same mistake with merry, marry and Mary, though to my regionally accented ears those three sound like different words, unlike there, their and they’re. It doesn’t always happen; perhaps it’s a matter of context or syntax that enables our typing fingers to pick the correct homonym. Sometimes, though, they mess up, and probably more often than we realize, because we often catch ourselves making the mistake as we type. But we don’t always, so quite often a homonym (i.e., a word that sounds like another but has a different spelling) has to be scene–that is, read–and not heard–that is, written–to be corrected.
And I notice now that I typed scene instead of seen, proving my point. Ha!

Basically, we are typing by dictating to ourselves. We hear but don’t see the words…and if we DO see the words, it immediately breaks our train of thought, as reading and writing are two completely separate processes in the brain and we can’t do both simultaneously. So those their/they’re/there’s will keep popping up and their/they’re/there’s not much we can do about it except proofread–and even then, if you proofread too soon, your brain still has the short term memory of “hearing” what you wrote and your eyes will not always catch the mistake. Sometimes, if you’re writing a story, it’s good to let the draft sit for an hour before proofing it, by which time that short term memory will be gone and you are actually reading what you wrote and not remembering what you narrated.

I also typed are as our…making at least two that made it past two drafts (and spell check) before I noticed them. In a digital format it’s no big deal. But years ago, when they got past me, spellcheck, my editor and at least one copy editor to wind up fast and permanent on paper in my jazz column in the LA Weekly, I would invariably be admonished by readers. You should really learn how to write, somebody would say. Usually from the safety of an email, but sometimes in person, at a jazz club, if they were drunk enough. You should learn how to write, they’d say, then totter off.

Memes and meme theory

There has to be some neurological reason why people instantly believe Facebook memes. They will insist that a meme was correct even when shown information that disproves it. So we don’t read memes the way we read, say, an ordinary Facebook post. We certainly don’t read them the way we read articles or blogs. We retain an element of skepticism when we read something that is not in a meme. But memes are not only believed, they are believed without question. Somehow, the part of the reading process that takes in the information we read and mulls it over before accepting it–a process that takes a fraction of a second, but it is there, allowing you to tell a lie from a fact, a joke from a real story–is completely skipped when we look at a meme.

It might be that we read memes the way we read traffic signs. They come similarly packaged, and we can’t actually edit them or change the letters around; a meme is a picture of language. So is a road sign. We never doubt a road sign. If it says stop we know it means we should stop. If it says merging traffic ahead we know there will be a lane of traffic coming in. If it says no parking we never assume it means we can park there. We just believe. We may not obey, but that doesn’t mean we doubt that what the sign tells us is true. I’m not sure how that works. I’m not sure why we instantly believe a traffic sign, with no need for reflection, while we find ourselves thinking the traffic laws in the driver’s manual are stupid. But I suspect the reading processes for memes and traffic signs are similar. Because most people instantly believe memes, without question. It takes effort to doubt them. None of us who do doubt them began by doubting them. We learned to do that, and we are in the minority. And when we do tell people that a meme is wrong, the meme believers will doubt that we are correct. No matter how much information they are shown, they will be skeptical of the actual information presented in a non-meme format–written in a post, say, or presented in a link–and will actually argue that the meme was correct. And that is neurological. That is an automated brain process. That is something very difficult to avoid. A meme–always presented in picture form, such as a JPEG–has an ability to circumvent our critical thinking faculties and become fact in our minds, much as we automatically believe a merging lane ahead sign. Its viral potential is phenomenal because its information is believed, without question, by most people who read it. I remember reading about Dawkins’ meme theory, before these Facebook-style memes even existed, and meme theory fell apart because there was no mechanism for transmission. But now, via Facebook, there is a mechanism for transmission.
A meme can spread from human brain to human brain via our eyes and our ability to read language. It can’t be spread to a blind man. It can’t be spread to someone who can’t read, or to someone who can’t read the language in the meme. But it can be spread in picture form–if you rewrote it in plain text it would not be believed automatically–and it will be believed the way traffic signs are believed. There is a way to get people to believe anything they read, if it can be put in a picture format. That you can have a meme several hundred words long that is still believed without question, in a way an article is not, is probably because we read so much more than we ever did before: we are online so many hours a day, and when we are online we are reading constantly. We are just used to more written words now, and as such, we can compartmentalize entire paragraphs into picture-like packets that are taken in the way we read a traffic sign.

The potential for exploitation here is breathtaking, and doubtless it is already happening. Meme theory, long just a nifty idea, a theoretical possibility, can actually happen via Facebook. When we share a meme, we are replicating it in the mind of whoever reads it. It has to be the single fastest way of spreading identical information that there is, and only with discipline can a person learn to read memes critically, because they are designed to be believed exactly as they are written. They are, I suspect, a revolutionary form of spreading information. Probably a temporary one; eventually people will begin reading memes like we read everything else, critically and skeptically. But for now, memes will keep spreading worldwide, too often sowing misinformation and disinformation, and utterly believed by nearly everyone who reads them, because the belief is automatic.

Room full of notes

The opening paragraph of a Brick’s Picks column from August 2009. The room full of notes came from a very cool if disturbing visual hallucination I had at a Charles Owens gig at the World Stage back in 2006. Epilepsy…that was a bad year. Seizure activity all through the brain, with lots of odd effects. I remember I was standing next to Chet Hanley at the time. As he didn’t mention seeing the room fill up with the notes pouring from Charles Owens’s horn like bubbles, I decided not to mention it. I just took another pill. I’d forgotten all about this until reading the column again just now. But I remember it was about that time I realized I’d better not stick with the jazz critic thing much longer. I did, though, for four or five more years. Anyway, I never had another visual like that. A little too Lewis Carroll for me.

We dig saxophone. It’s the iconic jazz ax. Trumpets were once, a long time ago, and clarinets had their sweet little run too. But once solid hard-blowing Coleman Hawkins got the sax out front, that was it. Lester Young came in right after that, so spooky and perfect and lackadaisically gorgeous… Then Bird just turned everything inside out with his thing, rushing here and there and everywhere at once almost. You try to follow those solos, your eyes’ll cross. And then Trane? Oh lord. You put Trane’s thing on top of Bird’s thing on top of Hawk’s thing and all around Prez’s thing and you got harmonics gone nuts, fingers going crazy, you got all that forced air rushing through that crazy saxophone and notes and chords flying free from that bell, making crazy patterns, and if you could see them, if the notes were different colors, they’d be filling rooms, all squiggly flatted fifths and minor sevenths and whole bars of chords piling up everywhere. Think of that next time you’re sitting there in some jazz joint, the sax man blowing his ass off. Imagine all those notes. Not even the piano emits as many notes (and those would be neatly stacked, or maybe scattered across the floor like shards of a glass enclosure). Nope, it’s the sax that makes the most sound in jazz. There’s just literally more jazz to be heard coming out of it. Music theory this ain’t. We just dig the sax.