Red and white cubes

Had to take a mental examination for epilepsy on Monday morning. Lots of drawing, writing things down, repeating, remembering, trying to do things backwards. Little brain games. I was doing OK until the cubes. Four of them, red on some sides, white on other sides, some 50/50. I had to put them in combinations to create patterns. Right off the bat I knew I was in trouble…I had to create abstract patterns with abstract patterns. A trigger. I don’t do abstract shapes well. Within seconds I was going numb, talking slow, having trouble remembering anything. But I finished the test. All day long I was in a daze. Memory problems, out of it. Next day the same thing. On Wednesday more of the same. Finally today I am mostly back to normal. It used to be that concentrating on abstractions like that would wipe me out for a few hours. Now it lasts days. It’s not as intense as it used to get–no nausea–but it’s longer lasting. And of course, all that does is burn out more synapses and wreck more neurons. It’s like an epilepsy loop. The damage causes absence (aka petit mal) seizures, which wreck neurons, which causes more petit mal seizures, which wreck more neurons…drip drip drip.

I’ve been trying to write this for days now. But I had to write it without trying to visualize the cubes and patterns, as the memory of them has the same effect as the actual experience, though the symptoms–numbness, stutter, memory loss–are even stronger. Apparently the actual experience of playing with those damn cubes involved several disparate sensory parts of the brain, but the memory involves only the parts of the brain that store the memories of the experience, and for some reason those memory centers set off more of the symptoms than the original sensory centers did. If I were to think hard right now and try to remember doing the tests–though the memory of it is thoroughly garbled by now–I would actually get sicker than if I were doing the tests themselves. I suppose I was only able to write this now because the short term memory of it has dissipated…as short term memories do after a couple days. Medium and long term memories are not as triggerable, if that is a word. If not, it is now.

The focal point of my seizures is a hole in the brain in the frontal lobe. A birth defect. Hence I can be set off by certain abstractions. I cannot do any sort of math beyond simple arithmetic, I cannot read complex philosophy, I have difficulty with the modularity of things within things within things. Were I a gorilla (a common misconception, actually), this would not be as much of a problem. But Homo sapiens sapiens are really big on abstractions; our brains, in large part, grew huge because of the giant frontal lobes that developed to handle them. It’s just that some of us have holes in our frontal lobes. Most of my symptoms, however, are in the temporal lobe, as so much frontal lobe activity is channeled through the temporal lobe. When the neurons that are all screwed up around the hole in my frontal lobe start firing too rapidly and messing things up, the extra electrical energy is released in the temporal lobe. Apparently the frontal lobe can handle such things more easily. The temporal lobe, perhaps because it is much older and developed when brains were smaller and contained far less potential electrical energy (think of a neuron as a battery, and our frontal lobes as an enormous collection of interconnected batteries), never developed the protection it needs from such excess electricity. Like putting a Model T electrical system in a Porsche. The slightest thing could burn it out. And after a lifetime of epilepsy, the temporal lobe is thoroughly burned out. It has actually shrunk in size from all the electrical abuse, and a wide array of its knowledge–the various kinds of memories it stores in various places–is not even accessible anymore. The executive functions it controls–planning, etc–get thoroughly messed up. Frontal lobe functions–all that useless historical information and irritating know-it-all trivia–are unfazed. I can write. I can read. I can talk and talk and talk until everybody leaves. But stupid things like red and white cubes can mess me up for days.

One of the fun things about epilepsy is how it displays the inner workings of the brain. If you are fascinated by cognitive processes, it’s like being your own laboratory. One of the less fun things is drooling all over the carpet.

Music

One of the more fascinating, if unsettling, things about a neurodegenerative disease is how parts of your personality slip away unnoticed as synapses wilt and whole brain parts shrivel. These past several years music slowly stopped being a vital part of my life. I find myself going days, even weeks, without listening to it except in a car. I have a huge collection of recordings–vinyl, cassette, CD, digital–I barely touch anymore. I fear I could sell them all and miss only the mementos. And I scarcely ever write about music anymore. There was a time I did so fanatically. That time has gone. Somewhere in my head, in the temporal lobe or frontal lobe or both, enough neurons have been singed to make music less important to me than, say, books or old movies. I love the music people and I love the excitement of skilled improvisation, but very little else moves me anymore like it once did. I thought a couple years ago that maybe it was just a spell, maybe being a jazz critic had burned me out, but no. It’s permanent. I still love music, but it’s no longer essential. I still hear it all the time, tunes going through my head, but somehow the emotional connection has faded. Like it’s still up in my frontal lobe, intellectually, but the feel and soul of it in my temporal lobe has shriveled away with my bruised and battered hippocampus. I wish this bothered me more, but it doesn’t. Still, I wonder with a chill what, in ten years’ time, will also have faded away in significance.

Verbs not adjectives

[This was an email from 2010 that attempted to explain what I had drunkenly been describing to a friend at one of our parties. I failed. But she was terribly nice about it. I found the email later and turned it into a blog post. Then I realized I hadn’t discussed metaphors at all–which I always advise using sparingly because the reading brain tends to trip over them–and worse yet, couldn’t discuss them. So I decided to do more reading up on metaphors. Now, reading this again, I don’t see why I thought this piece failed, as it works fine without the metaphor discussion. Whatever. This is what happens when college dropouts try to think like smart people.]

Ya know, L., I just remembered we were talking at that party about my verbs-instead-of-adjectives thing. Here’s the LA Weekly piece I was describing:

Lockjaw and Prez made him pick up the saxophone. This was New Orleans. There was a teenaged “Iko, Iko”, the very first. By ’63 he’s in L.A., playing Marty’s every night, and players—Sonny Rollins, everybody—dropping by, sitting in. Steady work with Basie and the Juggernaut and Blue Mitchell. Twenty years with Jimmy Smith. A million sessions for Motown and Stax, and first call for a slew of singers—that’s where you refine those ballad skills, with singers. Live he slips into “In A Sentimental Mood” and everything around you dissolves. There’s just his sound, rich, big, full of history, a little bitter, maybe, blowing Crescent City air. He gets inside the very essence of that tune, those melancholy ascending notes, till it fades, pads closing, in a long, drawn out sigh. You swear it’s the most beautiful thing you’ve ever heard, that song, that sound, and you tell him so. He shrugs. “It’s a lifetime of experience,” he says, then calls out some Monk and is gone.*

How that piece happened was I was asked to do a pick on an upcoming Herman Riley gig at Charlie O’s. He was one of the great tenor saxophonists and yet a virtual unknown. I called him up for a few comments before writing. He spilled for a half hour, his whole life. It was overwhelming. I didn’t dare cut him off. He was one of my heroes. So I just dashed off some notes and then pulled them together. I had 200 words, tops, to work with. I probably had 400 words first draft. Kept winnowing. Reducing. Down to the verbs. It’s nearly all verbs, action. I had a real good friend at work I used to instant message all the time, and I became fascinated with how language was used in our messages. By how brief a message could be if reduced to the verbs, and how much of an impact it would have. It could be quite visceral. I was doing a lot of heavy thinking about neurolinguistics then, how language works mechanically in the brain. It seemed to me that action was much more effective than description, and could get across the same info. Furthermore, when you used verbs instead of adjectives, the brain—by means of mirror neurons—automatically pictures what is being described without any necessary descriptive context—it somehow fills all that in. That means that virtually no description is needed beyond bare hints. I never say where it is I am seeing him play here, even though that is based upon an actual event (a Charlie O’s gig I had just seen). But even so, invariably the response from readers was “I felt like I was there”. Where exactly? It doesn’t matter. All that matters is the action.

The brain is more powerfully affected by action than anything else in language. It sees something or even reads about something and the exact same neurons that are used to actually perform the action are stimulated. I just picked up a glass of water. In your mind the exact same neurons that you would use to pick up a glass of water fired off as if you were picking one up yourself, not reading that I had. No matter…whether you do it, watch it, or read about it, the effect is the same. And that is why narratives that are based on action instead of description, that describe movement, things taking place in time, are so powerful. **

That being said, you have to think about perspective, and not from the point of view of the writer but of the reader. I believe it works this way: if you describe something in first person the reader’s mind has to visualize the action, then interpret it as you taking the action, then interpret that action as if they were doing it themselves, then imagine you doing so, etc, etc. That’s a lot of steps. Third person works the same way pretty much. Second person, in English, anyway, is impossible to pull off—you do this, you do that, etc. Very cumbersome. Our language is not grammatically designed to pull that off easily (but that’s another lecture…). So I use an implied second person. Everything I do is first person, but I remove myself (by not using the first person pronoun much) and let the reader get the feeling that it is actually he that is doing/seeing/hearing what I describe. Again, this shortens the steps necessary for the brain to interpret what it is reading into understanding. I’ve noticed that the fewer steps required the faster the language is retrieved by the brain and the more powerful the impact. And good writing is all about impact. You want to move people, you have to increase the visceral impact of the prose. And to do that you need to think about how it is that the brain turns the words you write into thoughts it can visualize. It’s verbs, baby.

The brain is designed for verbs. It sees in verbs. It has instantaneous perception in verbs. Adjectives take extra thinking. You don’t want extra thinking. You want your words to turn themselves into those units of perception that lie beneath language, that existed before language, and you want that process to occur as quickly as possible. Think of porn. The sexual excitement people get viewing it is not literary. It’s older than that. As is love. And just about anything else important. I mean the hell with language. You have to get beneath it. You have to aim for the centers beyond the brain’s language centers, because that is where the feeling is. That is where you move people. And the most direct way to get there is action, verbs.

It’s interesting to note that vision was initially a matter of detecting motion, indeed a matter of detecting the change in light. If a primitive animal was hiding and suddenly all was light, it meant something might have exposed it in order to eat it. Or if the light suddenly changed to dark, it meant something blocking the light might be there to eat it. This implies movement. Go forward half a billion years or so and amphibians detect movement and not much else. You can see this in a frog to this day. A fly sits on a leaf three inches from the frog, and the frog can’t see it. The fly takes flight and zap, it’s the frog’s lunch. Reptiles can see more than amphibians. Mammals still more.*** But the fundamental basis of recognition remains movement.

Adjectives reflect a much more sophisticated vision and awareness of what is being seen. Adjectives require observation and analysis. There is no equivalent of mirror neurons for the kind of information adjectives describe. You watch a tennis player serve the ball, and the mirror neurons fire in exactly the same way in your brain as they do in the tennis player’s brain even though you are not actually moving. They reflect the motion. But the neurons that fire to let you know that you are seeing a tennis player with red hair, a blue shirt and a green tennis ball do not fire in the same way the tennis player’s do. The only automatic understanding of what you see comes from the parts of the brain that detect motion, that is, action. Those have been firing off in brains for hundreds of millions of years. They are part of the fundamental infrastructure of the brain, much as the hypothalamus deep in the brain controls the fundamental 4 F’s (feeding, fighting, fleeing and mating) of human and mammalian behavior, and reptilian and amphibian and fish behavior****. That is, going back over half a billion years (though way back when it was not necessarily a hypothalamus per se in other animals, I believe, but the things that evolved into our own hypothalamus). Mirror neurons are also an ancient part of the brain. Information relayed through mirror neurons is understood instantly by the brain. Whatever you write that fires them off will be much more quickly and powerfully–indeed viscerally–responded to by the reader. In fact, you have only to give the absolute barest amount of descriptive detail, since the brain seems to automatically fill that in. If you say you walk into a room and sit in a chair, the reader’s brain automatically seems to provide all the details it needs to understand the scene. The reader gets that “it feels like I’m there” feeling. And nothing gets a response from readers more than prose that makes them feel as if they are part of the action. Makes them feel they are the one actually walking into the room and sitting in a chair. Or sitting in a bar and seeing and hearing a saxophone player.

Plus you should spell good.

.


Thinking about thinking and vice versa

“A human being might be more a verb than a noun,” a friend said, discussing consciousness. A great line, that. There may not be a mind as much as a process, he explained. I’d been reading that theory too. What we think of as us is just the result of a lot of various brain processes. There was an air of mystery to it. “There is no there there,” someone else chimed in. I liked that line, too. But neither it nor the verb line quite did it for me. Still, it got me to thinking, and then to writing, and writing, and writing. Rather than sleeping, sleeping, and sleeping, which would have been a much better idea, running on fumes as I am.

But when discussing the nature of consciousness, I just don’t think there’s a difference between it being a process or a thing. It’s still all neurology, which is just a way of saying physiology. Except we can’t say physiology until we actually know, to a fairly definitive degree, which neurons are doing what to allow consciousness–aka the human mind–to happen. But you can say the same thing about climate. There’s a whole bunch of things that combine in all kinds of varied ways to create what we call “climate”. But very few are a complete mystery to us anymore. They can all be explained and modelled. Debated, yes, and various models drawn up, but they are all models based on data. We’re not there with the mind yet. We know that it’s all neurology, we just don’t know exactly what does what in order to make us conscious. But once we do, we will no longer think of it as a verb. We’ll call it a process, a thing. It’ll be a noun.

But as I said, we aren’t there yet. And since we aren’t there yet, we tend to ascribe to it a sense of mystery that makes it more than a mere thing. But that sense of mystery is just a state of ignorance. That is, we don’t know how it works yet. Once we do, it’ll no longer be a philosopher’s quandary, a mysterious unknowable thing, a process that can’t be reduced to a noun…it’ll no longer be anything other than a process that can be described like any other process. Nothing metaphysical about it.

Neurology and cognitive science have advanced at such an incredibly prodigious rate since the 1980s that it’s difficult for non-specialists to even conceive of what all the new knowledge means. There is so much known about us now, about what our neural networks do, about how so much of our behavior can be located in places in the brain that can literally be seen and touched…and yet we still don’t even have ways to understand it except in neurological terms. Yet we really are the result of patterns of electro-chemical responses in our neurons and glia. If your electrolytes run low, a neuron can’t carry the charge it’s receiving from another neuron to the next. No potassium, no thought. My wife’s heart stopped that way, and with it, her brain. (Both were revived, thankfully.) Or too much potassium and a neuron begins firing too many adjacent neurons and you can have epilepsy (which is why I have to be very careful with bananas and other high-potassium foods). And while these very simple chemical processes are at the root of human consciousness, it’s the astonishingly complex lattice of interactions across some 100 billion neurons and hundreds of trillions of possible synaptic connections that creates the thing we call consciousness…a process that as of now is simply too vast and complex and variable for us to really understand as a whole.

Which is a shame, because until we can conceptualize that whole, consciousness–what we are–will be an unfathomable mystery. But we couldn’t discuss the universe a century ago like we do now. Today it’s not only the subject of documentary series, where physicists describe cosmology and even theoretical physics for laymen, but those laymen, millions of them, are able to understand what is being explained to them. They can conceive it. The human brain can, by now, turn all that physics into a model it can see in its mind’s eye. We can’t yet do that with the neurology of consciousness. I like to think we are at the same stage now with our own neurology as we were a hundred years ago with conceptualizing evolution. People then knew that there was such a thing, but they had to think of it as a mysterious process called “evolution”. Now we know how it works, and evolution isn’t a mysterious process at all. It’s something we can conceptualize so readily that unless we’re religious, there’s no mystery to it at all. It’s a Wikipedia entry.

We’re a couple decades away from that as far as human consciousness goes. But it too will stop being a mystery, will stop being so difficult to conceive of. I’m not saying it will be figured out by then, but it will be seen as a noun, a thing, something that can be explained as a physiological process. By then there will no longer be philosophers involved in the discussions, any more than philosophers are involved in oncology or genetics or climate science now. At some point science becomes mechanics, a study of process.

As far as the mind and consciousness go, we’re not at the point yet where the process has been defined or even discovered. It’s like a 16th century map with big empty spaces unhelpfully labeled terra incognita. And for laymen like us, several steps removed from the state of the science, we’re left in the dark, and almost always a couple years–even a few years–behind the research. And the competing theories are still being battled out in the journals; consensus is far off. But I think that in a generation at most we’ll be working with a model of human consciousness just like we have a working model of evolution today.

By then we’ll no longer be asking if there’s a there there, or thinking that there might not even be a source of consciousness, or debating whether consciousness is a noun or a verb. All our discussions, and certainly an essay like this, will seem terribly quaint. Consciousness will be understood as a process, we’ll know to a much greater degree just where in the brain it’s located and how it happens, and it won’t ruin anything. I think there’s a fear that if we discover the actual mechanics of consciousness it’ll ruin everything somehow. That we need the mystery. But understanding evolution didn’t ruin everything, nor did the discovery of our tiny insignificant place in the universe. What we are, what makes us people, makes us cognitive beings is based on much more than theories of mind, universe or genetics. And none of that will be changing soon. After all, we invented the Internet (the only human creation that comes anywhere near the complexity of the human brain) and then filled it with porn. If baboons had invented the internet they would have filled it with porn, too. And if baboons could they’d have TMZ. We’re all primates. Knowing exactly what consciousness is won’t change that a bit.

It’s now 6 am and I’ve spent the whole night writing about thinking. I began the night thinking about sleeping. Which I’d better try to do a little bit before it’s too late.

Neurons of the neocortex–your consciousness is in there somewhere. Photograph by Benjamin Bollmann, from Sebastian Seung’s connectomethebook.com

.

Oliver Sacks

One of Oliver Sacks’s great unnoticed achievements was helping to bury Freud. By displaying in clear prose how behavior and thinking and observations are shaped by neurological processes, and not by subconscious fears and desires and the misdirected horniness that cannot be named, he undermined for millions of readers the entire basis of so-called Freudian science. Freud’s work was mostly nonsense. It was highly imaginative, quite brilliant, and in a time when almost nothing was known of the actual workings of the brain (probably 99.99% of all of today’s knowledge of what the brain is, how it developed and how it functions has been uncovered in the past 25 years) Freud’s theories seemed plausible. Obsolete theories have a way of lasting in the public eye long after their scientific invalidation. People retain what they learned in school for life, and everyone took a psychology course or three. Sadly, just about everything we learned in those psychology courses prior to the 1980s or ’90s (depending on how hip your professor was) has turned out to be irrelevant if not flat out wrong. And we learned a lot of Freud. Of course we did. He was to psychology then what Charles Darwin was to biology. He was the big thinker.

But in all those Oliver Sacks best sellers, Freud never comes into the picture at all. Sacks lays out the neurology, the actual brain processes, making it all beautiful and real and utterly fascinating. And his observations were fact-based and scientifically proven, that is, there was a rigorous testing procedure to establish those facts. Freud was guessing, fantasizing really. About as close as I remember anything coming to being proven in psychology class was Pavlov’s salivating dogs and some of Skinner’s disturbing behavioral experiments with his own children. Otherwise we just took it all on faith. But Sacks’s stories–case studies, really, beautifully written–were so factual and real they rendered Freudian theory for his readers as implausible as any pseudo-science. He didn’t even have to tell us so. It’s just that for people who read Sacks–and millions did–Freudian theory just suddenly seemed kind of absurd. Shelve it with phrenology, physiognomy, eugenics and Lysenkoism. Freud was that wrong. There was simply no evidence of his theories in the brains of the people Sacks had treated. Of course not. These people were all neurons and brain regions and wiring gone synaptically, tragically wrong. He could explain his patients’ sometimes bizarre conditions by showing us just what was wrong, neurologically. It might be weird and counter-intuitive, but it made sense. We can only imagine how a strict Freudian analyst would have diagnosed a man who mistook his wife for a hat.

Oliver Sacks was a key figure in changing the way people see the brain. His little true life stories allowed us to grasp the stunning complexity of neuroscience. The public’s image of what we fundamentally are shifted dramatically. Where once we were all Oedipal, it might now just be a few neurons shook loose. Sacks made the brain understandable to the layman, the real brain, full of flesh and blood and neurons and thought. We became us, the real us, and not a caricature with a fondness for Mom…and just in time, too. I mean the thought of a strictly Freudian Facebook is just too weird to think about.


Oliver Sacks with somebody’s brain. (Photo by Adam Scourfield for AP)

Your brain on homonyms

I know very well [posted a friend of mine], really by instinct, the proper placement of apostrophes and other punctuation and the usage of words like “there, their, they’re”, “you, your, you’re” and so on, and yet I get to typing so fast that my brain is constantly pulling the wrong one out of my hat. I get so embarrassed when I discover these mistakes later.

I love when that happens, actually, because I think it shows so much about the brain and language.

This is how I think that happens: we type by listening to the inner voice speaking in our heads…language was strictly spoken for a hundred or so thousand years before we began reading it, and very few people at all were reading and writing it until the last century or two. And typing was not invented till 150 years ago. When we direct our fingers to type we are actually listening to the narrative voice running through our heads, and then some center of the brain in turn directs the actions of our fingers to type what it is “hearing”…an impressive task, as typing is something much more complex than writing with a pen or pencil (or with a stylus on a mud tablet as they did when writing was brand new). We sort of type the way a pianist plays, hearing the music in his head and then coordinating his hands and fingers to play it. Of course, in music you can harmonize so you can use all the fingers, but writing has no harmonies (unfortunately) so it’s more like playing trumpet, one note on a trumpet equals one letter on a keyboard. (Both language and speech are more like playing the trumpet, too, but that’s another essay.) So our typing fingers hear the sounds of the words rather than see them. There, their and they’re can be placed interchangeably by our undiscriminating fingers as they are homonyms (though perhaps not in all dialects). Some people could make the same mistake with merry, marry and Mary, though in my regionally accented ears the three sound like different words, unlike there, their and they’re. It doesn’t always happen; perhaps it’s a matter of context or syntax that enables our typing fingers to figure out the correct homonym. Sometimes, though, they mess up, and probably more often than we realize, because we catch ourselves making the mistake as we type. But we don’t always, so quite often a homonym (i.e., words that sound alike but have different spellings) has to be scene–that is, read–and not heard–that is, written–to be corrected. And I notice now that I typed scene instead of seen, proving my point. Ha!

Basically, we are typing by dictating to ourselves. We hear but don’t see the words…and if we DO see the words it immediately breaks up our train of thought as reading and writing are two completely separate processes in the brain and we can’t do both simultaneously. So those their/they’re/there’s will keep popping up and their/they’re/there’s not much we can do about it except proofread–and even then if you proofread too soon your brain still has the short term memory of “hearing” what you wrote and your eyes will not always catch the mistake. Sometimes, if writing a story, it’s good to let the draft sit for an hour before proofing it, by which time that short term memory will be gone and you are actually reading what you wrote and not remembering what you narrated.

I also typed are as our…making at least two that made it past two drafts (and spellcheck) before I noticed them. In a digital format it’s no big deal. But years ago, when they got past me, spellcheck, my editor and at least one copy editor to wind up fast and permanent on paper in my jazz column in the LA Weekly, I would invariably be admonished by readers. You should really learn how to write, somebody would say. Usually from the safety of an email, but sometimes in person, at a jazz club, if they were drunk enough. You should learn how to write, they’d say, then totter off.

Memes and meme theory

There has to be some neurological reason why people instantly believe Facebook memes. They will insist that a meme was correct even when shown information that disproves it. So we don’t read memes the way we read, say, an ordinary Facebook post. We certainly don’t read them the way we read articles or blogs. We retain an element of skepticism when we read something not in a meme. But memes, they are not only believed, but they are believed without question. Somehow, the part of the reading process that takes in the information we read and mulls it over before accepting it–a process that takes a fraction of a second, but it is there, allowing you to tell a lie from fact, a joke from a real story–that process is completely skipped when we look at a meme.

It might be that we read memes like we read traffic signs. They come similarly packaged, and we can’t actually edit them or change the letters around; a meme is a picture of language. As are road signs. We never doubt a road sign. If it says stop we know it means we should stop. If it says merging traffic ahead we know there will be a lane of traffic coming in. If it says no parking we never assume it means we can park there. We just believe. We may not obey, but that doesn’t mean we deny that what the sign tells us is true. I’m not sure how that works. I’m not sure why we instantly believe a traffic sign, with no need for reflection, while we find ourselves thinking the traffic laws in the driver’s manual are stupid. But I suspect the reading processes for memes and traffic signs are similar. Because most people instantly believe memes, without question. It takes effort to doubt them. None of us who do doubt them began by doubting them. We learned to do that, and we are in the minority. And when we do tell people that the memes are wrong, the meme believers will doubt that we are correct. No matter how much information they are shown, they will be skeptical of the actual information presented in a non-meme format–written in a post, say, or presented in a link–and will actually argue that the meme was correct. And that is neurological. That is an automated brain process. That is something very difficult to avoid. A meme–always presented in picture form, such as a JPEG–has an ability to circumvent our critical thinking faculties and become fact in our mind, much as we automatically believe a merging lane ahead sign. Its viral potential is phenomenal because its information is believed, without question, by most people who read it. I remember reading about Dawkins’ meme theory, before these Facebook style memes even existed, and meme theory fell apart because there was no mechanism for transmission. But now, via Facebook, there is a mechanism for transmission. A meme can spread from human brain to human brain via our eyes and our ability to read language. It can’t be spread to a blind man. It can’t be spread to someone who can’t read, or to someone who can’t read the language in the meme. But it can be spread in picture form–if you rewrote it in text it would not be believed automatically–and will be believed the way traffic signs are believed. There is a way to get people to believe anything they read if it can be put in a picture format. The fact that you can have a meme several hundred words long that is still believed without question, in a way an article is not, is probably because we read so much more than we ever did before: we are online so many hours a day, and when we are online we are reading constantly. We are just used to more written words now, and as such, we can compartmentalize entire paragraphs into picture-like packets that are looked at the way we read a traffic sign.

The potential for exploitation here is breathtaking, and doubtless it is already happening. Meme theory, long just a nifty idea, a theoretical possibility, can actually happen via Facebook. When we share a meme, we are replicating it in the mind of whoever reads it. It has to be the single fastest way of spreading identical information that there is, and only with discipline can a person learn to read memes critically, because memes are designed to be believed exactly as they are written. They are, I suspect, a revolutionary form of spreading information. Probably a temporary one; eventually people will begin reading them like we read everything else, critically and skeptically. But for now, memes will keep spreading worldwide, too often sowing misinformation and disinformation, utterly believed by nearly everyone who reads them because the belief is automatic.