One of the most captivating books of 2010 was not a gory science-fiction thriller or a gripping end-of-the-world page-turner, though its subject matter is equally engrossing and out of the ordinary. It is about somewhat crazy people doing crazy things, as seen through the lens of the man who has been treating them for decades. The Naked Lady Who Stood On Her Head is the first psych ward memoir, the tale of a curious doctor/scientist and his most extreme, bizarre, and sometimes touching cases from the nation's most prestigious neurology centers and universities. Included in ScriptPhD.com's review is a podcast interview with Dr. Small, as well as the opportunity to win a free autographed copy of his book. Our end-of-the-year science library pick is under the "continue reading" cut.
Gary Small is a very unlikely guide to the chaos that many of us confuse with a psych ward. Whether from the frantic psych consults on ER or fond remembrances of Jack Nicholson and his cohorts in One Flew Over The Cuckoo's Nest, most of us naturally associate psychiatry with insanity or pandemonium. Meeting Dr. Small in real life is the antithesis of these scenarios. Warm, welcoming, serene and genuinely affable, his voice translates directly from the pages of his latest book. Told in chronological order—following him from a young, curious, inexperienced intern at Harvard's Massachusetts General Hospital to his tenure as a world-renowned neuroscientist at UCLA—The Naked Lady Who Stood On Her Head feels like an enormous learning and growing experience for Dr. Small, his patients, and the reader.
The scene plays out like a standard medical drama or movie. In the beginning, the young, bright-eyed, bushy-tailed, trepidatious doctor explores while learning the ropes on duty. There is, in the self-titled chapter, literally a naked lady standing on her head in the middle of a Boston psych ward. Dr. Small is the only doctor who can cure her baffling ailment, but in doing so, he only begins to peel away at what is really troubling her. There is a bevy of inexplicable fainting schoolgirls afflicting the Boston suburbs; only through a fresh pair of eager eyes is the root cause uncovered, a cause that to this day sets the standard for mass hysteria treatment nationwide. And there is a mute hip painter from Venice Beach, immobile for weeks until Small, fighting the rigid senior attendings, arrives at the unlikely diagnosis. As the book, and Dr. Small's career, flourish, we meet a WebMD mom, a young man literally blinded by his family's pressure, a man whose fiancée's obsession with Disney characters resurfaces a painful childhood secret, and, in Dr. Small's touching final story, the mentor introduced at the book's beginning, who hires his former student as a therapist so that Small can diagnose his teacher's dementia. Ultimately, all of the characters of The Naked Lady Who Stood on Her Head, treated with Dr. Small's dedication and respect, have a common thread. They are real, they are diverse, and they are us. Psych patients are not one-dimensional figments of a screenwriter's imagination. They are the brother with childhood trauma, the friend with a dysfunctional or abusive family, the husband or wife with a rare genetic predisposition; all of us are but one degree away from the abnormal behavior that these conditions can ignite. In his book, Dr. Small has pulled back the curtain on a notoriously secretive and mysterious field. It's a riveting reveal, and absolutely worth an appointment. The Naked Lady Who Stood On Her Head has been optioned by 20th Century Fox, and may be coming to your television soon!
Podcast Interview
In addition to this memoir, Gary Small is the author of the best-selling global phenomenon The Memory Bible: An Innovative Strategy For Keeping Your Brain Young and a regular contributor to The Huffington Post (several excellent recent articles can be found here and here). His seminal research on Alzheimer's disease, aging and brain training has been featured in recent NPR and Newsweek articles. A brain imaging study recently completed in his laboratory garnered worldwide media attention for suggesting that Google searching can stimulate the brain and literally keep aging brains agile. Dr. Small regularly updates his research and musings on his personal blog.
ScriptPhD.com sat down for a one-on-one podcast with Dr. Small and discussed the inspiration for the book, and how it conveys the inner thought process of a psychiatrist through his many interesting cases. In our podcast, we discuss how media and on-screen portrayals of psychiatrists contribute to people's perceptions of the field, how the themes of empathy and humanity are indelibly woven into case studies, the challenges and fulfillment of psychiatry, and the contribution of pop culture to modern psychoses.
~*ScriptPhD*~
*****************
ScriptPhD.com covers science and technology in entertainment, media and advertising. Hire our consulting company for creative content development. Subscribe to free email notifications of new posts on our home page.
Scientists are becoming more interested in trying to pinpoint precisely what's going on inside our brains while we're engaged in creative thinking. Which brain chemicals play a role? Which areas of the brain are firing? Is the magic of creativity linked to one specific brain structure? The answers are not entirely clear. But thanks to brain scan technology, some interesting discoveries are emerging. ScriptPhD.com was founded on, and remains focused on, the creative applications of science and technology in entertainment, media and advertising, fields traditionally defined by "right brain" propensity. It stands to reason, then, that we would be fascinated by the very technology and science that is attempting to deduce and quantify what, exactly, makes for creativity. To help us in this endeavor, we are pleased to welcome computer scientist and writer Ravi Singh's guest post to ScriptPhD.com. For his complete article, please click "continue reading."
Before you can measure something, you must be able to clearly define what it is. It's not easy to find consensus among scientists on the definition of creativity. But then, it's not easy to find consensus among artists, either, about what's creative and what's not. Psychologists have traditionally defined creativity as "the ability to combine novelty and usefulness in a particular social context." But newer models argue that these types of definitions, which rely on extremely subjective criteria like 'novelty' and 'usefulness,' are too vague. John Kounios, a psychologist at Drexel University who studies the neural basis of insight, defines creativity as "the ability to restructure one's understanding of a situation in a non-obvious way." His research shows that creativity is not a singular concept. Rather, it's a collection of different processes that emerge from different areas of the brain.
In attempting to measure creativity, scientists have had a tendency to correlate creativity with intelligence—or at least link creativity to intelligence—probably because we believe that we have a handle on intelligence. We believe we can measure it with some degree of accuracy and reliability. But not creativity. No consensus measure for creativity exists. Creativity is too complex to be measured through tidy, discrete questions. There is no standardized test. There is yet to be a meaningful "Creativity Quotient." In fact, creativity defies standardization. In the creative realm, one could argue, there's no place for "standards." After all, doesn't the very notion of standardization contradict what creativity is all about?
To test creativity, researchers have historically attempted to test divergent thinking, an assessment construct originally developed in the 1950s by psychologist J. P. Guilford, who believed that standardized IQ tests favored convergent thinkers (who stay focused on solving a core problem) over divergent thinkers (who go 'off on tangents'). Guilford believed that scores on IQ tests should not be taken as a unidimensional measure of intelligence. He observed that creative people often score lower on standard IQ tests because their approach to solving the problems generates a larger number of possible solutions, some of which are thoroughly original. The test's designers would have never thought of those possibilities. Testing divergent thinking, he believed, allowed for greater appreciation of the diversity of human thinking and abilities. A test of divergent thinking might ask the subject to come up with new and useful functions for a familiar object, such as a brick or a pencil. Or the subject might be asked to draw the taste of chocolate. You can see how it would be very difficult, if not impossible, to standardize a "correct" answer.
Eastern traditions have their own ideas about creativity and where it comes from. In Japan, where students and factory workers are stereotyped as being too methodical, researchers are studying schoolchildren for a possible correlation between playfulness and creativity. Nath philosopher Mahendranath wrote that man's "memory became buried under the artificial superstructure of civilization and its artificial concepts," his way of saying that too much convergent thinking can inhibit creativity. Sanskrit authors described the spontaneous and divergent mental experience of sahaja meditation, where new insights occur after allowing the mind to rest and return to the natural, unconditioned state. But while modern scientific research on meditation is good at measuring physiological and behavioral changes, the "creative" part is much more elusive.
Some Western scientists suggest that creativity is mostly a matter of neurochemistry. High intelligence and skill proficiency have traditionally been associated with fast, efficient firing of neurons. But the research of Dr. Rex Jung, a research professor in the department of neurosurgery at the University of New Mexico, shows that this is not necessarily true. In researching the neurology of the creative process, Jung has found that subjects who tested high in "creativity" had thinner white matter and connecting axons in their brains, which has the effect of slowing nerve traffic. Jung believes that this slowdown in the left frontal cortex, a brain region where emotion and cognition are integrated, may allow us to be more creative, and to connect disparate ideas in novel ways. Jung has found that when it comes to intellectual pursuits, the brain is "an efficient superhighway" that gets you from Point A to Point B quickly. But creativity follows a slower, more meandering path that has lots of little detours, side roads and rabbit trails. Sometimes, it is along those rabbit trails that our most revolutionary ideas emerge.
You just have to be willing to venture off the main highway.
We've all had aha! moments—those sudden bursts of insight that solve a vexing problem, solder an important connection, or reinterpret a situation. We know what it is, but often, we'd be hard-pressed to explain where it came from or how it originated. Dr. Kounios, along with Northwestern University psychologist Mark Beeman, has extensively studied the "Aha! moment." They presented study participants with simple word puzzles that could be solved either through quick, methodical analysis or instant creative insight. Participants were given three words and asked to come up with one word that could be combined with each of the three to form a familiar term; for example: crab, pine and sauce. (Answer: "apple.") Or eye, gown and basket. (Answer: "ball.")
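To make the task concrete, here is a minimal Python sketch of the puzzle's logic; the tiny compound list and the solves helper are our own illustration, not the stimuli or scoring code from the actual studies:

```python
# A minimal sketch of the compound remote associates task described
# above. The compound list is a small invented sample covering the two
# example puzzles, not the researchers' actual word set.
COMPOUNDS = {
    "crabapple", "pineapple", "applesauce",
    "eyeball", "ballgown", "basketball",
}

def solves(candidate: str, cues: list) -> bool:
    """A candidate solves the puzzle if it forms a familiar compound
    with every cue word, in either order."""
    return all(
        cue + candidate in COMPOUNDS or candidate + cue in COMPOUNDS
        for cue in cues
    )

print(solves("apple", ["crab", "pine", "sauce"]))  # True
print(solves("ball", ["eye", "gown", "basket"]))   # True
print(solves("pie", ["crab", "pine", "sauce"]))    # False
```

What makes the puzzles useful experimentally is exactly what makes this check trivial for a computer and hard for a person: the answer sits at the intersection of three weakly related word associations, so it can arrive either by methodical search or by sudden insight.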
About half the participants arrived at solutions by methodically thinking through possibilities; for the other half, the answer popped into their minds suddenly. During the "Aha! moment," neuroimaging showed a burst of high-frequency activity in the right temporal lobes of participants whose answers arrived as sudden insights. And there was a big difference in how each group mentally prepared for the test question. The methodical problem solvers prepared by paying close attention to the screen before the words appeared—their visual cortices were on high alert. By contrast, those who received a sudden Aha! flash of creative insight prepared by automatically shutting down activity in the visual cortex for an instant—the neurological equivalent of closing their eyes to block out distractions so that they could concentrate better. These creative thinkers, Kounios said, were "cutting out other sensory input and boosting the signal-to-noise ratio to enable themselves to retrieve the answer from the subconscious."
Creativity, in the end, is about letting the mind roam freely, giving it permission to ignore conventional solutions and explore uncharted waters. Accomplishing that requires an ability, and willingness, to inhibit habitual responses and take risks. Dr. Kenneth M. Heilman, a neurologist at the University of Florida, believes that this capacity to let go may involve a dampening of norepinephrine, a neurotransmitter that triggers the fight-or-flight alarm. Since norepinephrine also plays a role in long-term memory retrieval, its reduction during creative thought may help the brain temporarily suppress what it already knows, paving the way for new ideas and the discovery of novel connections. This neurochemical mechanism may explain why creative ideas and Aha! moments often occur when we are at our most peaceful, for example, relaxing or meditating.
The creative mind, by definition, is always open to new possibilities, and often fashions new ideas from seemingly irrelevant information. Psychologists at the University of Toronto and Harvard University believe they have discovered a biological basis for this behavior. They found that the brains of creative people may be more receptive to incoming stimuli from the environment that the brains of others would shut out through the process of "latent inhibition," our unconscious capacity to ignore stimuli that experience tells us are irrelevant to our needs. In other words, creative people are more likely to have low levels of latent inhibition. The average person becomes aware of such stimuli, classifies them and forgets about them. But the creative person maintains connections to that extra data constantly streaming in from the environment and uses it.
Sometimes, just one tiny strand of information is all it takes to trigger a life-changing "Aha!" moment.
Ravi Singh is a California-based IT professional with a Masters in Computer Science (MCS) from the University of Illinois. He works on corporate information systems and is pursuing a career in writing.
*****************
As Comic-Con winds down on the shortened Day 4, we conclude our coverage with two panels that exemplify what Comic-Con is all about. As promised, we dissect the "Comics Design" panel of the world's top logo designers deconstructing their work, coupled with images of that work. We also bring you an interesting panel of ethnographers, consisting of undergraduate and graduate students, studying the culture and the varying forces that shape Comic-Con. Seriously, they're studying nerds! Finally, we are delighted to shine our ScriptPhD.com spotlight on new sci-fi author Charles Yu, who presented his new novel at his first (of what we are sure will be many) Comic-Con appearances. We sat down and chatted with Charles, and are pleased to publish the interview. And of course, our Day 4 Costume of the Day. Comic-Con 2010 (through the eyes of ScriptPhD.com) ends under the "continue reading" cut!
Comics Design
We are not ashamed to admit that here at ScriptPhD.com, we are secret design nerds. We love design, particularly since good design so often elevates the content of films, television, and books, yet remains a relatively mysterious process. One of THE most fascinating panels that we attended at Comic-Con 2010 was on the design secrets behind some of your favorite comics and book covers. A panel of the world's leading designers revealed their methodologies (and sometimes failures) in the design process behind their hit pieces, lifting the shroud of secrecy that designers often envelop themselves in. It was an unparalleled window into the mind of the designer, and into the visual appeal that so often subliminally contributes to the success of a graphic novel, comic, or even a regular book. We do, as it turns out, judge books by their covers.
As promised, we revisit this illuminating panel, and thank Christopher Butcher, co-founder of The Toronto Comic Arts Festival and co-owner of The Beguiling, Canada's finest comics bookstore. Chris was kind enough to provide us with high-quality images of the Comics Design panel's work, for which we at ScriptPhD.com are grateful. Chris had each of the graphic artists discuss their work with an example of design that worked and one that didn't (if available, or if the artist was so inclined). Each artist was asked to deconstruct the logo or design and talk about the thought process behind it.
Mark Chiarello – (art + design director at DC Comics)
Mark chose to design the cover of this book with an overall emphasis on the individual artist. Hence the white space on the book, and a focus on the logo above the “solo” artist.
Adam Grano – (designer at Fantagraphics)
Adam took the title of this book quite literally, and let loose with his design to truly emphasize the title. He called it “method design.” He wanted the cover to look like a drunken dream.
For the Humbug collection, Grano tried hard not to impose too much of himself (and his tastes) on the design of the cover. He wanted to inject simplicity into a project that would stand the test of time, because it was a collector's series.
Grano considered this design project his "failure." It contrasts greatly with the simplicity and elegance of Humbug. He mentioned that everything on the page is scripted and gridded, something that designers try to avoid in comics.
Chip Kidd – (designer at Random House)
Chip Kidd had the honor of working on the first posthumous Peanuts release after Charles M. Schulz's death, and took the project quite seriously. For the cover, he wanted to deconstruct a Peanuts strip. All of the human element is taken out of the strip, with the characters on the cover up to their necks in suburban anxiety.
Kidd likes this cover because he considers it an updated spin on Superman. It’s not a classic Superman panel, so he designed a logo that deviated from the classic “Superman” logo to match.
Kidd chose this as his design "failure," though the failure was not the design itself. The cover represents one of seven volumes, across which the pictured logo disintegrates, volume by volume, to match the crisis in the title. Kidd's only regret is that he was too subtle: he wishes he had started the logo's disintegration sooner, as there is very little difference between the first few volumes.
Fawn Lau – (designer at VIZ)
Fawn was commissioned to redesign this book cover for an American audience. Keeping that in mind, and wanting the Japanese artwork to remain legible to American readers, she didn't want too heavy-handed a logo. In an utterly genius stroke of creativity, Lau went to an art store, bought $70 worth of art supplies, and played around with them until she had constructed the "Picasso" logo. Clever, clever girl!
Mark Siegel – (First Second Books)
Mark Siegel was hired to create the cover of the new biography Feynman, an eponymous title about one of the most famous physicists of all time. Feynman was an amazing man who lived an amazing life, including winning the Nobel Prize in Physics in 1965. His biographer, Jim Ottaviani, a nuclear physicist and speed skating champion, is an equally accomplished individual. The design of the cover was therefore chosen to reflect their dynamic personalities. The colors were chosen to represent the atomic bomb and Los Alamos, New Mexico, where Feynman worked on the Manhattan Project. Incidentally, the quote on the cover – "If that's the world's smartest man, God help us!" – is from Feynman's own mother.
Keith Wood – (Oni Press)
Wood remarked that this was the first time he was able to do design on a large scale, which really worked for this project. He chose a very basic color scheme, again to emphasize a collection standing the test of time, and designed all the covers simultaneously, including color schemes and graphics. He felt this gave the project a sense of connectedness.
Wood chose a Pantone silver as the base of this design, with a stenciled typeface meant to look very modern. The back cover and the front cover were initially going to be reversed when the artists first brought him the renderings. However, Wood felt that since the book is about a girl traveling across the United States, it would be more compelling and evocative to use the feet/baggage image as the front of the book. He was also the only graphic artist to show a progression of 10-12 renderings, playing with colors, panels and typefaces, that led to the final design. He believes in a very traditional approach to design, which includes hand sketches and multiple renderings.
The Culture of Popular Things: Ethnographic Examinations of Comic-Con 2010
For each of the past four years, Comic-Con has ended on an academic note. Matthew J. Smith, a professor at Wittenberg University in Ohio, brings along a cadre of graduate and undergraduate students to study Comic-Con, from the nerds and the geeks to the entertainment and comics components, to ultimately understand the culture of this fascinating microcosm of consumerism and fandom. By "culture," the students mean the definition accepted by famous anthropologist Raymond J. DeMallie: "what is understood by members of a group." The students ultimately wanted to ask why people come to Comic-Con at all. Attendees are united by the general forces of being fans; this is what is understood in their group. After milling around the various locales that constituted the Con, the students deduced that two forces were simultaneously at play: the fan culture drives and energizes the Con as a whole, while strong marketing forces are on display in the exhibit halls and panels.
Maxwell Wassmann, a political economy student at Wayne State University, pointed out that "secretly, what we're talking about is the culture of buying things." He compared Comic-Con to a giant shopping mall, a microcosm of our economic system in one place. "If you've spent at least 10 minutes at Comic-Con," he pointed out, "you probably bought something or had something sold to you. Everything is about marketing." As a whole, Comic-Con is subliminally designed to reinforce the idea that this piece of pop culture, which ultimately advertises an even greater subset of pop culture, is worth your money. Wassmann pointed out an advertising meme present throughout the weekend that we took notice of as well: garment-challenged ladies advertising the new Green Hornet movie. The movie itself is not terribly sexy, but because garment-challenged ladies were used to promote it, when you leave Comic-Con and see a poster for Green Hornet, you will subconsciously link it to the sexy images you were exposed to in San Diego, greatly increasing your chances of wanting to see the film. By contrast, Wassmann also pointed out that there is a concomitant old-town economy happening in small comics: in the fringes of the exhibition center and the artists' space, a totally different microcosm of consumerism and content exchange takes place.
Kane Anderson, a PhD student at UC Santa Barbara getting his doctorate in "Superheroology" (seriously, why didn't I think of that back in graduate school??), came to San Diego to observe how costumes relate to the superhero experience. To fully immerse himself in the experience, and to gain the trust of the Con attendees he'd be interviewing, Anderson came in full costume (see above picture). Overall, he deduced that the costumed attendees, whom we will openly admit to enjoying and photographing during our stay in San Diego, act as goodwill ambassadors for the characters and superheroes they represent. They also add to the fantasy and adventure of Comic-Con goers, creating the "experience." The negative side is that this evokes a certain "looky-loo" effect, where people actively seek out, and single out, costume-wearers, even though they constitute only 5% of all attendees.
Tanya Zuk, a media master's student from the University of Arizona, and Jacob Sigafoos, an undergraduate communications major at Wittenberg University, both took on the mighty Hollywood forces invading the Con, primarily the distribution of independent content, an enormous portion of the programming at Comic-Con (and a growing presence on the web). Zuk spoke about how original video content, a hallmark of new media, is distributed primarily online. It allows for more exchange between creators and their audience than traditional content (such as film and cable television), and builds a community fanbase through organic interaction. Sigafoos expanded on this by talking about how to properly market such material to gain viral popularity—none at all! A lack of marketing, at least in its traditional forms, is the most successful way to promote a product. Producing a high-quality product, handing it off to friends, and promoting through social media is still the best way to grow a devoted following.
And speaking of Hollywood, its presence at Comic-Con is undeniable. Emily Saidel, a master's student at NYU, and Sam Kinney, a business/marketing student at Wittenberg University, both took on the behemoth forces of major studios hawking their products at what originally started out as a quite independent gathering. Saidel tackled Hollywood's presence at Comic-Con, people's acceptance or rejection thereof, and how comics are accepted by traditional academic disciplines as didactic tools in and of themselves. The common thread is a clash between the culture and the community. Being a member of a group is a relatively simple idea, but because Comic-Con is so large, it incorporates multiple communities, leading to tensions between those who feel on the outside (i.e. fringe comics or anime fans) and those on the inside (i.e. the more common mainstream fans). Comics fans would like to be part of that mainstream group and do show interest in those adaptations and changes (we're all movie buffs, after all), noted Kinney, but feel that Comic-Con has become bigger than it should be.
But how much tension is there between the different subgroups and forces? The most salient example from last year's Con was the invasion of the uber-mainstream Twilight fans, who not only created a ruckus on the streets of San Diego, but also usurped all the seats of the largest pavilion, Hall H, to wait for their panel, locking other fans out of seeing theirs. (No one was stabbed.) In reality, the supposed clash of cultures is blown out of proportion, with most fans not really feeling the tension. To boot, Saidel pointed out that tension isn't necessarily a bad thing, either. She offered the metaphor of a rubber band, which fulfills its purpose only under tension. The different forces of Comic-Con work in different ways, if sometimes imperfectly. And that's a good thing.
Incidentally, if you are reading this and interested in participating in the week-long program in San Diego next year, visit the official website of the Comic-Con field study for more information. Some of the benefits include: attending the Comic-Con programs of your choice, learning the tools of ethnographic investigation, and presenting the findings as part of a presentation to the Comics Arts Conference. Dr. Matthew Smith, who leads the field study every year, is not just a veteran attendee of Comic-Con, but also the author of The Power of Comics.
COMIC-CON SPOTLIGHT ON: Charles Yu, author of How To Live Safely in a Science Fictional Universe.
Here at ScriptPhD.com, we love hobnobbing with the scientific and entertainment elite and talking to writers and filmmakers at the top of their craft as much as the next website. But what we love even more is seeking out new talent, the makers of the books, movies and ideas that you'll be talking about tomorrow, and being proud to be the first to showcase their work. This year, in our preparation for Comic-Con 2010, we ran across such an individual in Charles Yu, whose first novel, How To Live Safely in a Science Fictional Universe, premieres this fall, and who spoke about it at a panel over the weekend. We had an opportunity to have lunch with Charles in Los Angeles just prior to Comic-Con, and spoke in depth about his new book, along with the state of sci-fi in current literature. We're pretty sure Charles Yu is a name science fiction fans are going to be hearing for some time to come. ScriptPhD.com is proud to shine our 2010 Comic-Con spotlight on Charles and his debut novel, which is available September 7, 2010.
How To Live Safely in a Science Fictional Universe is the story of a son searching for his father… through quantum space-time. The story takes place in Minor Universe 31, a vast story-space on the outskirts of fiction, where paradox fluctuates like the stock market, lonely sexbots beckon failed protagonists, and time travel is serious business. Every day, people get into time machines and try to do the one thing they should never do: change the past. That's where the main character, Charles Yu, time travel technician, steps in. Accompanied by TAMMY (whom we consider the new HAL), an operating system with low self-esteem, and a nonexistent but ontologically valid dog named Ed, Charles helps save people from themselves. When he's not on the job, Charles visits his mother (stuck in a one-hour cycle, she makes dinner over and over and over) and searches for his father, who invented time travel and then vanished.
Questions for Charles Yu
ScriptPhD.com: Charles, the story has tremendous traditional sci-fi roots. Can you discuss where the inspiration for this came from?
Charles Yu: Well, the sci-fi angle definitely comes from being a kid in the 80s, when there were blockbuster sci-fi things all over the place. I've always loved [that time], as a casual fan, but also wanted to write it. I didn't even start doing that until after I'd graduated from law school. I did write, growing up, but I never wrote fiction—I didn't think I'd be any good at it! I wrote poetry in college, minored in it, actually. Fiction and poetry are both incredibly hard, and poetry takes more discipline, but at least when I failed in my early writing, it was 100 words of failure, instead of 5,000 words of it.
SPhD: What were some of your biggest inspirations growing up (television or books) that contributed to your later work?
CY: Definitely The Foundation Trilogy. I remember reading that in the 8th grade, and I remember spending every waking moment reading, because it was the greatest thing I’d ever read. First of all, I was in the 8th grade, so I hadn’t read that many things, but the idea that Asimov created this entire self-contained universe, it was the first time that I’d been exposed to that idea. And then to have this psychohistory on top, it was kind of trippy. Psychohistory is the idea that social sciences can be just as rigorously captured with equations as any physical science. I think that series of books is the main thing that got me into sci-fi.
SPhD: Any regrets about having named the main character after yourself?
CY: Yes. For a very specific reason. People in my life are going to think it’s biographical, which it’s very much not. And it’s very natural for people to do that. And in my first book of short stories, none of the main characters was named after anyone, and still I had family members that asked if that was about our family, or people that gave me great feedback but then said, “How could you do that to your family?” And it was fiction! I don’t think the book could have gotten written had I not left that placeholder in, because the one thing that drove any sort of emotional connection for the story for me was the idea of having less things to worry about. The other thing is that because the main character is named after you, as you’re writing the book, it acts as a fuel or vector to help drive the emotional completion.
SPhD: In the world of your novel, people live in a lachrymose, technologically-driven society. Any commentary therein whatsoever on the technological numbing of our own current culture?
CY: Yes. But I didn’t mean it as a condemnation, in a sense. I wouldn’t make an overt statement about technology and society, but I am more interested in the way that technology can sometimes not connect people, but enable people’s tendency to isolate themselves. Certainly, technology has amazing connective possibilities, but that would have been a much different story, obviously. The emotional plot-level core of this book is a box. And that sort of drove everything from there. The technology is almost an emotional technology that [Charles, the main character] has invented with his dad. It’s a larger reflection of his inability to move past certain limitations that he’s put on himself.
SPhD: What drives Charles, the main character of this book?
CY: What's really driving Charles emotionally is looking for his dad. But more than that, it's trying to move through time, to navigate the past without getting stuck in it.
SPhD: Both of his companions are non-human. Any significance to that?
CY: It probably speaks more to my limitations as a writer [laughs]. That was all part of the lonely guy type that Charles is being portrayed as. If he had a human with him, he’d be a much different person.
SPhD: The book abounds in scientific jargon and technological terminology, which is par for the course in science fiction, but was still very ambitious. Do you have high expectations of the audience that will read this book?
CY: Yeah. I was just reading an interview where the writer essentially said “You can never go wrong by expecting too much [of your audience].” You can definitely go wrong the other way, because that would come off as terrible, or assuming that you know more. But actually, my concerns were more in the other direction, because I knew I was playing fast and loose with concepts that I know I don’t have a great grasp of. I’m writing from the level of amateur who likes reading science books, and studied science in college—an entertainment layreader. My worry was whether I was BSing too much [of the science]. There are parts where it’s clearly fictional science, but there are other parts that I cite things that are real, and is anyone who reads this who actually knows something about science going to say “What the heck is this guy saying?”
SPhD: How To Live… is written in a very atavistic, retro 80s style of science fiction, and really reminded me of the best of Isaac Asimov. How do you feel about the current state of sci-fi literature as relates to your book?
CY: Two really big keys for me, and things I was thinking about while writing [this book], were, one, there is kind of a kitschiness to sci-fi, and I think that's kind of intentional. It has a kind of do-it-yourself aesthetic to it. In my book, you basically have a guy in the garage with his dad, and yes, the dad is an engineer, but it's in a garage without great equipment, so it's not going to look sleek, you can imagine what it's going to look like—it's going to look like something you'd build with things you have lying around in the garage. On the other hand, it is supposed to be this fully realized time machine, and you're not supposed to be able to imagine it. Even now, when I'm in the library in the science-fiction section, I'll often look for anthologies that are from the 80s, or the greatest time travel stories from the 20th Century, that cover a much greater range of time than what's being published now. It's almost like the advancement of real-world technology is edging closer to what used to be the realm of science fiction. The way that I would think about that is that it's not exploiting what the real possibility of science fiction is, which is to explore a current world or any other completely strange world, not a world totally envisionable ten years from now. You end up speculating on what's possible or what's easily extrapolatable from here; that's not necessarily going to make for super emotional stories.
Charles Yu is a writer and attorney living in Los Angeles, CA.
Last, but certainly not least, is our final Costume of the Day. We chose this young ninja not only because of the coolness of his costume, but because of his quick wit. As we were taking the snapshot, he said, "I'm smiling, you just can't see it." And a checkmate to you, young sir.
Incidentally, you can find much more photographic coverage of Comic-Con on our Facebook fan page. Become a fan, because this week, we will be announcing Comic-Con swag giveaways that only Facebook fans are eligible for.
~*ScriptPhD*~
*****************
You spot someone across a crowded room. There is eye contact. Your heart beats a little faster, palms are sweaty, you're light-headed, and your suddenly-squeamish stomach has dropped to your knees. You're either suffering from an onset of food poisoning or you're in love. But what does it mean, scientifically, to fall in love, to be in love, to stay in love? In our special Valentine's Day post, Editor Jovana Grbić expounds on the neuronal and biophysical markers of love, how psychologists and mathematicians have harnessed (and sometimes manipulated) this information to foster 21st Century digital-style romance, and concludes with a personal reflection on what love really means in the face of all of this science. You might just be surprised. So, Cupid, draw back your bow… and click "continue reading" for more!
What is This Thing Called Love?
Scientists are naturally attracted to romance. Why do we love? Falling in love can be many things emotionally, ranging from the exhilarating to the truly frightening. It is, however, also remarkably methodical, proceeding through three stages identified by legendary biological anthropologist Helen Fisher of Rutgers University—lust, attraction and attachment—each with its own distinct neurochemistry. During lust, in both men and women, two basic sex hormones, testosterone and estrogen, primarily drive behavior and psychology. Interestingly enough, although lust has been memorialized by artists aplenty, this stage of love is remarkably analytical. Psychologists have shown that individuals primed with thoughts of lust had the highest levels of analytical thinking, while those primed with thoughts of love had the highest levels of creativity and insight. But we'll get to love in a minute.
During attraction, a crucial trio of neurotransmitters, adrenaline, dopamine and serotonin, literally changes our brain chemistry, leading to the phase of being obsessed and love-struck. Remember how we talked about a racing heart and sweaty palms upon seeing someone you're smitten with? That would be a rush of adrenaline (also referred to as epinephrine), the "fight or flight" hormone/neurotransmitter, responsible for increased heart rate, contraction of blood vessels, and dilation of air passages. Dopamine is an evolutionarily conserved, ubiquitous neurotransmitter that regulates basic functions such as motivation, movement and cognition—a reason the loss of dopamine in Parkinson's disease patients can be so devastating. Dopamine is also responsible for the pleasure and reward mechanisms in the brain, hyperactivated by abuse of drugs such as cocaine and heroin. It has even been linked to creativity and idea generation via interactions of the frontal and temporal lobes and the limbic system. This, therefore, is the link between love and creativity that we mentioned above. Incidentally, the releasing or induction agent of norepinephrine and dopamine is a chemical called phenethylamine (PEA). Did you give your sweetheart chocolates for Valentine's Day? If so, you did well, because chocolate is loaded with some of the highest naturally occurring levels of phenethylamine, leading to a "chocolate theory of love." If you can't stop thinking about your beloved, it's because of serotonin, one of love's most important chemicals. Its neuronal functions include regulation of mood, appetite, sleep, and cognition—all affected by love. Most modern antidepressants work by altering serotonin levels in the brain.
During the final stage, attachment, two important chemicals "seal the deal" for long-term commitment: oxytocin and vasopressin. Oxytocin, often referred to as "the hormone of love," is a neurotransmitter released during childbirth, breastfeeding and orgasm, and is crucial for species bonding, trust, and unconditional love. A sequence of experiments showed that trust formation in group activities, social interaction, and even psychological betrayal hinged on oxytocin levels. Vasopressin is a hormone responsible for memory formation and aggressive behavior. Recent research also suggests a role for vasopressin in sexual activity and in pair-bond formation. When the vasopressin receptor gene was transplanted into mice (natural loners), they exhibited gregarious, social behaviors. That gene was isolated from the prairie vole, one of the select few habitually monogamous mammals. When the receptor was introduced into their highly promiscuous Don Juan meadow vole relatives, they reformed their wicked rodent ways: they fixated on one partner, guarded her jealously, and helped rear their young.
With all these chemicals floating around in the brain of the aroused and the amorous, it’s not surprising that scientists have deduced that the same brain chemistry responsible for addiction is also responsible for love!
The aforementioned Dr. Fisher gave an exceptional TED Talk in 2006 about her research in romantic love: its evolution, its biochemistry, and its social importance.
While the heart may hold the key to love, the brain helps unlock it. In fact, modern neuroscience and magnetic resonance imaging (MRI) have helped answer a lot of questions about lasting romances, what being in love looks like, and whether there is a neurological difference between how we feel about casual sex, platonic friends, and those we're in love with. In a critical fMRI study, the brains of people who were newly in love were scanned while they looked at photographs, some of their friends and some of their lovers. Pictures of lovers activated specific areas of the brain that were not active when looking at pictures of good friends or thinking about sexual arousal, suggesting that romantic love and mate attachment aren't so much an emotion or state of mind as a deeply rooted drive akin to hunger, thirst and sex. Furthermore, a 2009 Florida State study showed that people who are in a committed relationship and thinking of their partner subconsciously avert their eyes from an attractive member of the opposite sex. The most heartwarming part of all? It lasts. fMRI imaging of 10 women and 7 men still claiming to be madly in love with their partners after an average of 21 years of marriage showed brain activation equal to that in the earlier studies of nascent romances.
In case you're blinded by all this science, remember this central fact about love: it's good for you! The art of kissing has been shown to promote many health benefits, including stress relief, prevention of tooth and gum decay, a muscle workout, anti-aging effects, and therapeutic healing. If everything goes well with the kissing, it could lead to an even healthier activity… sex! Not only does sex improve your sense of smell, boost fitness and weight loss, and mitigate depression and pain, but it also strengthens the immune system and helps prevent heart disease and prostate cancer. In fact, "I have a headache" may be a specious excuse to avoid a little lovin', since sex has been shown to cure migraines (and cause them, so be careful!). All of these facts and more, along with everything you ever wanted to know about sex, were collected and studied by neuroscientist Barry Komisaruk, endocrinologist Carlos Beyer-Flores and sexuality researcher Beverly Whipple in The Science of Orgasm. Add it to your shopping list today! The above activities may find you marching down the aisle, which, especially for men, is a very, very good thing. Studies show that married men not only live longer and healthier lives but also make more money and are more successful professionally (terrific New York Times article here).
Love in the Age of Algebra
While science can pinpoint the biological markers of love, can it act as a prognosticator of who will get together and, more importantly, stay together? Mathematicians and statisticians are sure trying! One of the world's foremost experts on relationship and marriage modeling is University of Washington psychology professor John Gottman, head of The Gottman Institute and author of Why Marriages Succeed or Fail. Dr. Gottman uses complex mathematical modeling and microexpression analysis to predict with 90% accuracy which newlyweds will remain married four to six years later, and with 83% accuracy seven to nine years thereafter. In this terrific profile of Gottman's "love lab," we see that his methodology includes use of a facial action coding system (FACS) to analyze videotapes for minute signs of negative expressions such as contempt or disgust during simple conversations or stories. Take a look at the brief, fascinating video in that profile of how it all works, and see the sketch below for a taste of the underlying math.
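In The Mathematics of Marriage (written with applied mathematician James Murray), Gottman models a conversation as a pair of coupled difference equations: each partner's affect at the next turn is a personal baseline, plus their own emotional inertia, plus an "influence function" applied to the other partner's last state. The Python sketch below is our own minimal illustration of that idea under invented parameters, not Gottman's fitted model:

```python
# A toy sketch (our invention; Gottman and Murray's fitted models are
# richer) of a conversation as coupled difference equations: next
# affect = baseline + own inertia + the other partner's influence.

def influence(x: float) -> float:
    """Hypothetical influence function: a partner's negativity is assumed
    to weigh more heavily (slope 0.6) than positivity (slope 0.3), an
    asymmetry Gottman's observational work emphasizes."""
    return 0.3 * x if x >= 0 else 0.6 * x

def simulate(w0: float, h0: float, turns: int = 20) -> tuple:
    """Iterate the coupled affect equations for a fixed number of turns."""
    w, h = w0, h0
    for _ in range(turns):
        w, h = (0.1 + 0.5 * w + influence(h),   # wife: baseline + inertia + husband's pull
                0.1 + 0.5 * h + influence(w))   # husband: baseline + inertia + wife's pull
    return round(w, 2), round(h, 2)

print(simulate(1.0, 1.0))    # a conversation that starts warm settles near (0.51, 0.51)
print(simulate(-2.0, -2.0))  # one that starts hostile spirals further into negativity
```

In the real models, the estimated influence functions and the stable states they create (a couple's "attractors") are, roughly speaking, what carry the predictive power.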
Naturally, the next evolutionary step has been to cash in on this science in the online dating game, where successful matchmaking hinges on predicting which couples will be ideally suited to each other on paper. Dr. Helen Fisher has used her expertise in the chemicals of love to match couples through their brain chemistry personality profiles on Chemistry.com. eHarmony has an in-house research psychologist, Gian Gonzaga, an ardent proponent of personality assessment and skeptic of opposites attracting. Finally, the increasingly popular Match.com boasts a radically advanced new personality profile called "match insights," devised by none other than Dr. Fisher and medical doctor Jason Stockwood. If you don't believe in the power of the soft sciences, you can take your matchmaking to the molecular level, with several new companies claiming to connect couples based on DNA fingerprints and the biological instinct to breed with people whose immune systems differ significantly from ours, for genetic stability. ScientificMatch.com promises that its pricey, patent-pending technology "uses your DNA to find others with a natural odor you'll love, with whom you'd have healthier children, a more satisfying sex life and more," while GenePartner.com tests couples based on only one group of genes: human leukocyte antigens (HLAs), which play an essential role in immune function. The accuracy of all of these sites? Mixed. Despite the problems of rampant lying in internet dating profiles and of dating in volume to pinpoint the right match, some research has shown remarkably high success rates (as high as 94%) for e-partners who had met in person.
What’s Science Got To Do With It?
In the shadow of such vast technological advancement and the reduction of romance to the binary and biological, readers of this blog might imagine that its scientist editor would condemn decidedly empirical views of love. They would be wrong. For while numbers and test tubes and brain scanning machines can help us describe love's physiological and psychological nimbus, its esoteric nucleus will forever be elusive. And thank heavens for that! There exists no mathematical formula (other than perhaps chaos theory) that can explain the idea of two people, diametrical as day and night, falling in love and somehow making it work. No MRI is equipped with a magnet strong enough to properly quantify the utter heartbreak of those who don't. There is not a statistical deviation alive that could categorize my grandparents' unlikely 55-year marriage, a marriage that survived World War II, Communism, a miscarriage, the death of a child, poverty, imprisonment in a political gulag, and yes, even a torrid affair. After my grandfather died, my grandmother eked out another feeble few years before succumbing to what else but a broken heart. It is within that enigma that generations of poets, scribes, musicians, screenwriters, and artists dating all the way back to humanity's cultural dawning—the Stone Age—have never run out of material, and they never will.
Love exists outside of all the things that science is—the ordered, the examined, the sterile, the safe, and the rational. It is inherently irrational, messy, disordered and frustrating. Science and technology forever aim to eliminate mistakes, imperfection and any obstacles to precision, which in matters of the heart would be a downright shame. Love is not about impersonal personality surveys, neurotransmitter cascades or the incessant beeping of laboratory machines measuring its output. It’s about magic, mystery, voodoo and charm. It’s about experimenting, floating off the ground, being scared out of your mind, laughing uncontrollably and inexplicably, flowers, bad dates, good dates, tubs of ice cream and chocolate with your closest friends, picking yourself up and starting the whole process all over again. It’s not about guarantees or prognostications, not even by smart University of Washington psychologists. It’s about having no clue what you’re doing, figuring it out as you go along, deviating from formulas, books, and everything scientists have ever told you, taking a chance on the stranger across a crowded room, and the moon hitting your eye like a big-a pizza pie.
Now that’s amore!
Hope everyone had a great Valentine’s Day.
~*ScriptPhD*~
*****************
"First of all, let me assert my firm belief that the only thing we have to fear is fear itself—nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance." These inspiring words, borrowed from scribes Henry David Thoreau and Michel de Montaigne, were spoken by President Franklin Delano Roosevelt at his first inauguration, during the only era more perilous than the one we currently face. But FDR had it easy. All he had to face was 25% unemployment and 2 million homeless Americans. We have, among other things, climate change, carcinogens, leaky breast implants, the obesity epidemic, the West Nile virus, SARS, avian/swine flu, flesh-eating disease, pedophiles, predators, herpes, satanic cults, mad cow disease, crack cocaine, and, let's not forget, that paragon of Malthusian-like fatalism—terror. In his brilliant book The Science of Fear, journalist Daniel Gardner delves into the psychology and physiology of fear and the incendiary factors that drive it, including media, advertising, government, business and our own evolutionary mold. For our final blog post of 2009, ScriptPhD.com extends the science into a personal reflection: a discussion of why, despite there never having been a better time to be alive, we are more afraid than ever, and how we can turn a more rational leaf in the year 2010.
Prehistoric Predispositions and the Human Brain
Let's talk about psychology for a moment. The psychology of fear, to be specific. Our minds largely evolved to cope with the Environment of Evolutionary Adaptation: Stone Age survival needs hard-wired into our brains to create a two-tiered system of conscious and subconscious thought. As elucidated by 2002 Nobel Prize winner in economics Daniel Kahneman, the systems are divided into the prehistoric System One (Gut) and System Two (Head). Gut is quick, evolutionary and designed to react to mortal threats, while Head is more modern, conscious thought capable of analyzing statistics and being rational. In a seminal 1974 paper published in the journal Science, Kahneman and his research partner Amos Tversky punctured the long-held belief that decision-making occurs via Homo economicus (rational man) by showing that decisions are mostly made by the Gut using three simple heuristics, or rules. The anchoring and adjustment heuristic (Anchoring Rule) involves grabbing hold of the nearest or most recent number when uncertain about a correct answer. This helps explain why the number 50,000 has been used to describe everything from how many predators were on the Internet in the 2000s to how many children were kidnapped by strangers every year in the 80s to the number of murders committed by Satanic cults in the 90s. The representativeness heuristic, or Rule of Typical Things, is our Gut judging things based on largely learned intuition. This explains why many predictions by experts are as often wrong as they are right, and why, despite being convinced they are not racist, Western societies perpetuate a dangerous stereotype of the non-white male. Finally, and most importantly, the availability heuristic, or Example Rule, dictates that the easier it is to recall examples of something, the more common it must be. This is particularly sensitive to our memory formation and retention, particularly of violent or painful events, which were key to species survival in dangerous prehistoric times.
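The Example Rule lends itself to a simple simulation. The toy Python sketch below is our own construction (not from Gardner's book): it assumes that vivid dangers stick in memory far more readily than mundane ones, and shows how a frequency judgment built only from recalled examples inflates the rare-but-vivid threat.

```python
import random

# Toy simulation of the Example Rule (our own construction, with
# invented numbers). Vivid dangers are assumed to be remembered far
# more readily than mundane ones, so a Gut estimate built from
# recalled examples alone overweights them.
random.seed(42)

TRUE_RATE = {"car crash": 0.90, "shark attack": 0.01}   # chance per event (invented)
MEMORABILITY = {"car crash": 0.2, "shark attack": 0.9}  # chance an occurrence is remembered

def gut_estimate(n: int = 100_000) -> dict:
    """Estimate each danger's share of the total from remembered examples only."""
    recalled = {name: 0 for name in TRUE_RATE}
    for _ in range(n):
        for name, rate in TRUE_RATE.items():
            # The event happens with its true rate...
            if random.random() < rate:
                # ...but only memorable occurrences feed the Gut's tally.
                if random.random() < MEMORABILITY[name]:
                    recalled[name] += 1
    total = sum(recalled.values())
    return {name: count / total for name, count in recalled.items()}

print(gut_estimate())
# Car crashes outnumber shark attacks 90:1 in this toy world, but among
# recalled examples the ratio drops to about 20:1 -- the rare, vivid
# danger looms several times larger than it should.
```

The mechanism is just biased sampling: Head would count all occurrences, while Gut counts only the ones memory serves up.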
University of Oregon psychologist Paul Slovic added to these the affect heuristic, or Good/Bad Rule. When faced with something unfamiliar, Gut instantly decides how dangerous it is based on whether it feels good or bad. This explains why we irrationally fear nuclear power, which has intellectually been shown to be not nearly as dangerous as we think it is, while we have no qualms about suntanning on a beach, which feels good, or getting an X-ray at the doctor's office, despite both having been shown to be more dangerous than estimated. Psychologists Mark Frank and Thomas Gilovich showed that in all but one season from 1970 to 1986, the five teams in the NFL and three teams in the NHL that wore black uniforms (black = bad) got more penalty yards and penalty minutes, respectively, than the league average, even when wearing their alternate uniforms. Finally, psychologist Peter Wason discovered that people judge risk not based on scientific information, but rather on herd mentality and conformity, a tendency termed confirmation bias. Once we have formed a view, we cling to information that supports that view while rejecting or ignoring information that casts any doubt on it. This can be seen in internet blogs of like-minded individuals that act as echo chambers, and in media and organizations perpetuating a fear as rumor until it is accepted by the group as a mortal danger, despite no rational evidence to the contrary.
It's a downright shame that this hallowed body of research took scientists such a long time to amass and ascertain, because they could have easily found it in the skyscrapers of Madison Avenue, where the psychology of fear has not only been long defined, but long exploited by advertising agencies and media moguls to sell products, news, and… more fear.
At its heart, effective advertising has always been about forming an emotional bond with consumers on an individual level. People are more likely to engage with a brand or buy a product if they feel a deep connection or personal stake. This can be achieved through targeted storytelling, creativity, and the tapping and marketing of subconscious fear, coined "shockvertising" by ad agencies: X is a frightening threat to your safety or health, but use our product to make it go away. It's a surprisingly effective strategy and has been applied of late to disease (terrific read on pharmaceutical advertising tactics), the organic/natural movement, and politics. Purell (of which The ScriptPhD will disclose being a huge fan) was originally created by Pfizer for in-hospital use by medical professionals. In 1997, it was introduced to the market with an ad blitz that included the slogan "Imagine a Touchable World." I just did; it's called the world up until 1997. Erectile dysfunction, hair loss, osteoporosis, restless leg syndrome, shyness, and even toenail fungus are now serious ailments that you need to ask your doctor about today. This camouflaged marketing extends to health lobby groups, professional associations, and even awareness campaigns. Roundly excoriated by medical professionals, a 2007 ad campaign warned of the looming dangers of skin cancer. Though the logo on the poster is that of the American Cancer Society, it was sponsored by Neutrogena—a leading manufacturer of sunscreen.
"For the first time in the history of the world, every human being is now subjected to contact with dangerous chemicals, from the moment of conception until death," wrote marine biologist Rachel Carson in her 1962 environmental bombshell Silent Spring. Up until then, "chemical" was not a dirty word. In fact, it was associated with progress, modernity, and prosperity, as evidenced by the DuPont Corporation's 1935 slogan "Better things for better living…through chemistry" (the latter part being dropped in 1982). Carson's book preyed on what has become the biggest fear of the last half-century: cancer. It famously predicted that one in every four people would be stricken with the disease over the course of their lifetimes. These fears have been capitalized on by the health, nutrition and wellness industry to peddle organic, natural foods and supplements that veer far away from laboratories and manufactured synthesis. While there is nothing wrong with digging into a delicious meal of organic produce or popping a ginseng pill, the naturally occurring chemicals in the food supply number over one million, none of which are subject to the rigorous safety guidelines and regulations imposed on prescription drugs, pesticides and non-organic foods. In fact, the lifetime cancer risk figures invoked by everyone from Greenpeace to Whole Foods are not nearly as scary when adjusted for age and lifestyle. Exposure to pollutants in occupational, community, and other settings is thought to account for a relatively small percentage of cancer deaths, according to the American Cancer Society in Cancer Facts and Figures 2006. It's lifestyle (smoking, drinking, diet, obesity and exercise) that accounts for 65% of all cancers, but it's not nearly as sexy to stop smoking as it is to buy that imported Nepalese pear hand-picked by a Himalayan sherpa. Of course, none of this has prevented the organic market from growing 20% each year since the early '90s, complete with a 20-year marketing plan.
The last frontier of fear advertising is politics. Anyone who has seen a grainy, black-and-white negative ad may be cognizant that they are being manipulated, but may not know exactly how. In Campaigning for Hearts and Minds, political scientist Ted Brader notes that 72 percent of such ads lead with an appeal to emotion rather than logic, with nearly half appealing to anger, fear or pride ("get 'em sick, get 'em well," as the model was coined in the 1980s). Take a look at the gem that started them all, a controversial, game-changing 1964 commercial called "Daisy Girl." Emotional, visceral, and aired only once, this ad was widely credited with helping Lyndon B. Johnson defeat Barry Goldwater in the Presidential election:
And finally, we have the media: that venerated congregation of communicators dedicated to broadcasting all that's newsworthy with integrity. Perhaps in an alternate universe. Psychologists specializing in fear perception have concluded that the media disproportionately cover dramatic, violent, catastrophic causes of death. So after watching a Law and Order or CSI marathon, a Dateline special about online predators, and a CNN special on the missing blonde white woman du jour, you tune in to your local news ("Find out how your windshield wipers could kill you, tonight at 11!"). While the simplest explanation is that media profit from fear, fear is also tailor-made for two of the rules discussed above: the Example Rule and the Good/Bad Rule. We don't recognize that a disease covered on House or a news special about a kidnapping or violent suburban murder spree is rare, only that it is Bad and the last thing we saw, so it heads straight to our Gut. The overwhelming preoccupation of the modern media is crime and terrorism, with a heavy focus on individual acts bereft of broader context. Just as in advertising, emotions and individual connection are essential in media crime reporting. The evolutionary desire to punish wrongdoing against another person is hard-wired into our brains, and easier to conjure when watching an isolated story about someone relatable (a daughter, a mother, a little old lady) than about scores of dead or suffering thousands of miles away, no matter how tragic. A convincing body of psychology research has concluded that individuals who watch a large amount of television are more likely to feel a greater threat from crime, believe crime is more prevalent than statistics indicate, and take more precautions against crime. Furthermore, crime portrayed on television is significantly more violent, random, and dangerous than crime in the "real" world. While this blog diverges from Gardner's offhand minimization of the very real threat posed by radical terrorist organizations, he does bring forth a valid argument about reality versus risk perception. Excluding the State of Israel, the lifetime risk of injury or death in a terror attack is between 1 in 10,000 and 1 in a million. Your risk of being killed by a venomous plant or animal? 1 in 39,873. Drowning in a bathtub? 1 in 11,289. Being killed in a car crash? 1 in 84. (A back-of-the-envelope look at where such "1 in N" figures come from follows at the end of this section.) Think about the last time CNN devoted a Wolf Blitzer special to venomous plants, bathtubs, or car crashes. Furthermore, the total cost of counterterror spending in the US between 2001 and 2007 was $58.3 billion, not including the $500 billion to $2 trillion price tag of the Iraq war. And the extra half-hour delay at the airport for security, the effectiveness of which we have witnessed this very week, costs the US economy $15 billion per year. A panel of experts gathered a month ago at the Paley Media Center in New York City to discuss the future of news media, audience preferences, and content in a competing marketplace. The conclusion? More sex, more scandal. In other words, more of the same.
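For the numerically inclined, those "1 in N" lifetime odds are not conjured from thin air. Risk analysts typically divide the population by a hazard's annual death toll, then spread the result over an average lifespan. Below is a minimal sketch of that arithmetic in Python; the population, lifespan, and annual death counts are rough, illustrative assumptions rather than official statistics, chosen only to show how the method works.

```python
# Back-of-the-envelope lifetime-odds calculator. It follows the common
# actuarial shortcut: lifetime odds of dying are roughly 1 in N, where
#   N = population / (annual deaths x average life expectancy).
# The annual death counts below are rough, illustrative assumptions,
# not official statistics.

US_POPULATION = 300_000_000   # approximate US population, late 2000s
LIFE_EXPECTANCY = 78          # approximate average lifespan in years

def lifetime_odds(annual_deaths: int) -> int:
    """Return N such that the lifetime risk of dying is roughly 1 in N."""
    return round(US_POPULATION / (annual_deaths * LIFE_EXPECTANCY))

# Hypothetical annual US death tolls, chosen to mirror the hazards above.
hazards = {
    "car crash": 42_000,
    "bathtub drowning": 340,
    "venomous plant or animal": 96,
}

for hazard, deaths in hazards.items():
    print(f"{hazard}: about 1 in {lifetime_odds(deaths):,}")
```

Run it, and you get roughly 1 in 92 for car crashes, 1 in 11,300 for bathtub drownings, and 1 in 40,000 for venomous plants and animals: within shouting distance of the figures quoted above, which derive from precise mortality tables rather than round assumptions.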
It is highly unusual for me to break the fourth wall as Editor of ScriptPhD.com, to get personal and let readers in as Jovana Grbić. But, in the spirit of New Year's resolutions, holiday warmth, and a little too much champagne, what the heck: lean in closely, because I'll only say this once for posterity. I am more often afraid than unafraid. I am not an outlying exception to the principles listed above, but rather an adherent to their very human origins. From as far back as I can remember, I have wanted to be a professional writer. I was very good at math and science, but creative doodling, daydreaming, and filling my head and notebooks with words was what I lived for. Dramatic, artsy types were who I hung out with, admired, even lived with in college. So why not study film or creative writing? And, in the middle of graduate school, when it dawned on me that I loved science but didn't live it, why not change course and pursue my dreams right away? Because I chose to follow the tenable rewards of the logical and attainable rather than risk the abstract (gut beat out head). Quite simply, I was afraid. I began 2009 by standing on the Mall in Washington, DC with a million or so of my closest friends to witness our nation's first African-American President take the oath of office. 2009 was also the year that I broke free to launch this blog, its accompanying creative consulting practice, and my career as a writer. Neither accomplishment, one so public and one so personal, could have transpired through fear. Naturally, since we move and think as a herd, I had a lot of help from my friends. But the bitter irony is that conquering my fears has left me more scared than ever. There are still a million things that could go wrong, a million burdens and responsibilities that I now shoulder alone, and let's not romanticize the creative lifestyle, the ultimate high-risk, low-reward proposition. But you know what, dear readers? I've never been happier! So, as we head into this, the last year of the first decade of our new millennium, we must individually and collectively ask ourselves: what do I fear? What great glucocorticoid blockade, personal or professional, prevents you from existential liberation?
Especially amidst vulnerable times like these, a retrospective, rational look back reveals the psychological trail of breadcrumbs that cracked the mirage we've been living under this past decade. We would see that Jim Cramer, a so-called expert in the capricious art of the stock market (the fluctuations of which have been linked by psychologists to weather patterns), was simply evoking the Anchoring Rule as he confidently reassured his viewers to invest in Bear Stearns. The company went under a week later. We would see that scores of bankers, investors, insurance giants, and people just like you and me were lulled by the Example Rule into falsely believing the housing bubble would never burst because, hey, a week ago, everything was fine. And we can still see the media (print, social, digital, and everything in-between) painfully trying to bend the Rule of Typical Things into a pretzel to forecast when and how this current recession will end, with optimistic economists predicting a 2010 recovery and more austere types warning that high unemployment might take a full decade to abate. Caught in this miasmic cloud is a primed public, receptive to the advertising, entertainment and news messages aimed straight at our primordial gut instincts to perpetuate a culture of fear. To step out of the cloud is to shed the hypothetical for the actual. While recessions are frightening and uncertain, they are also a natural part of the economic cycle, and have given birth to the upper echelon of the Fortune 500 and even the iPod you're listening to as you read this blog. Rather than focusing on the myriad byzantine ways we could die, we might devote energy and economy to putting a dent in our true terrorists: cardiovascular disease, cancer and diabetes. Rather than worrying about whether we're eating the perfect, macrobiotic, garden-grown heirloom tomato, we should all just eat more tomatoes. Instead of acquiescing to appeals to vanity from pharmaceutical companies, we should worry about feeding the hungry and rebuilding inner-city neighborhoods. And every once in a while, we should let our gut worry about digesting dinner, and let our head worry about risk assessment. We might stop worrying so much. I will conclude 2009 by echoing President Roosevelt's preamble to not fearing fear itself: "This great Nation will endure as it has endured, will revive and will prosper." I personally extend his wishes to every single one of you around the world. Just make sure to look both ways before crossing the street. There's a 1 in 626 chance you could die.
Happy New Year.
~*ScriptPhD*~
*****************
ScriptPhD.com covers science and technology in entertainment, media and pop culture. Follow us on Twitter and our Facebook fan page. Subscribe to free email notifications of new posts on our home page.
Original Premise
Who among us hasn't wanted, nay desperately needed, to forget a painful event, relationship, person, or circumstance that won't seem to fade from memory? Oh, to be able to just wipe it from your brain and pretend it never happened! The concept sounds like something straight out of the imaginative mind of screenwriter Charlie Kaufman. In his movie Eternal Sunshine of the Spotless Mind, ex-lovers Joel and Clementine, played by Jim Carrey and Kate Winslet, erase memories of each other after their relationship sours. To do this, they seek out the bioengineering company Lacuna, Inc., whose scruples are ambiguous at best. All's well that ends well for the lovers, as they reconnect towards the end of the movie, build new memories of one another and fall back in love.
Indeed, plenty of recent movies deal with memory loss of varying degree, origin and consequence. In Christopher Nolan's brilliant and esoteric Memento, Leonard Shelby (Guy Pearce), suffering from anterograde amnesia that renders him unable to form new memories, is trying to piece together the events of the vicious attack on and murder of his wife. Drew Barrymore's character in the romantic comedy 50 First Dates suffers a similar condition, and her love interest has to "meet" her anew every day. In Paycheck, the film adaptation of Philip K. Dick's science fiction story, Ben Affleck's character takes the extreme measure of wiping his own memory to protect his clients' intellectual property, almost costing him his life when his last deal embroils him in a standoff with the FBI.
Indeed, a slew of medical and psychological syndromes can cause, or be associated with, memory loss. But the idea of selective memory engineering has been the stuff of science-fiction fancy.
Until now.
Current Research
While watching the television version of This American Life, I was struck by an episode entitled "Pandora's Box," which profiled the work of SUNY Downstate Medical Center researchers Drs. Todd Sacktor and Andre Fenton. Dr. Sacktor had a revolutionary idea about how memory is formed in the brain, and an elementary yet powerful way to manipulate it by eradicating the function of a single regulatory molecule. And what a Pandora's box they opened! Take a look at this short clip:
Powerful stuff, no? This research suggests that a single molecule, protein kinase M zeta (PKMzeta), regulates the brain's ability to form and retain memories, and consequently lies at the heart of the potential for memory erasure. In a recent New York Times interview, Dr. Sacktor admitted that his scientist dad directed him to a family of molecules called protein kinase C in 1985, from which his lab derived PKMzeta as a brain-specific member of that family. In a 1999 paper in the journal Nature Neuroscience, Drs. Jeff Lichtman and Joshua Sanes cataloged 117 candidate molecules implicated in long-term potentiation (LTP), the lasting strengthening of the connection between two neurons that are stimulated simultaneously. Following that paper, in a subsequent 2002 Nature Neuroscience paper, Dr. Sacktor's lab was able to isolate PKMzeta as the "it" memory factor, showing that it congregates semi-permanently, en masse, around these activated neuronal connections. At that point, he was off to the races. He joined forces with the friendly neighbor downstairs, neuroscientist Dr. Andre Fenton, who just happened to study spatial memory in mice and rats. Fenton had previously shown that mice and rats placed in a circular chamber learn how to move around to avoid getting their feet shocked, a memory they retain days, weeks, even months later. Sacktor's lab injected a PKMzeta inhibitor into the rats' hippocampus, the part of the brain that regulates memory. The results were stunning. Two pioneering papers (paper 1 and paper 2) in the elite research journal Science showed that the blocker both reversed established long-term potentiation in the rats' neurons and caused them to forget the spatial information they'd learned in the chamber, an effect that seemed to last for weeks. Drs. Sacktor and Fenton had erased the rats' memory!
Dr. Fenton and Dr. Sacktor's reaction to their research in the This American Life piece was notable. Normally, scientists are shielded well behind the safe solitude of the ivory tower: long work hours, constant pressure, achieving the next research milestone. It's not that scientists never think about the implications of their work; rather, they rarely have the luxury of time for such contemplation, or the fortune of far-reaching results. While reading letters from victims of post-traumatic stress disorder, Dr. Fenton broke down crying and expressed a desire simply to help these people.
Less than two months ago, scientists at Toronto's Hospital for Sick Children [sorry, I can't help myself… as opposed to healthy ones? I love Canadians!] added an important piece to this canon of research. In a Science paper, the scientists identified the exact group of neurons responsible for the formation of a given memory (the neuronal memory trace): lateral amygdala (LA) neurons with increased levels of cyclic adenosine monophosphate (cAMP) response element-binding protein (CREB). Selectively targeting and deleting these neurons with an injectable, inducible neurotoxin erased the learned fear memory.
Eventually, of course, this body of science will coalesce into a more coherent picture of how memories are formed, which subsets of neurons in which portions of the brain store them, and which molecules and proteins we can manipulate to control, enhance or erase memory altogether. But that still leaves us to grapple with some very powerful and far-reaching bioethical dilemmas. Assuming this translates into a medical procedure or pharmaceutical treatment for memory manipulation, who will regulate it? How will rules be established for how far to take the therapy? Is memory erasure the equivalent of altering our personalities, the essence of who we are: a psychological lobotomy? Most important, however, is the question of how much we need memories, even painful, negative ones, to build the cornerstones of human morality, empathy, and the absolute meaning of right and wrong.
Sheena Josselyn, one of the researchers involved in the Toronto study, acknowledged the bifurcated ethical implications of the research: “Our experiences, both good and bad, teach us things,” she said. “If we didn’t remember that the last time we touched a hot stove we got burned, we would be more likely to do it again. So in this sense, even memories of bad or frightening experiences are useful. However, there are some cases in which fearful memories become maladaptive, such as with post-traumatic stress disorder or severe phobia. Selectively erasing these intrusive memories may improve the lives of afflicted individuals.” In fact, Anjan Chatterjee, M.D., a neuroethicist at the University of Pennsylvania Ethics Center, penned an incredibly prescient piece two years ago that equated the psychological mitigation of painful memories to “cosmetic neurology.” “If, as many religions and philosophies argue, struggle and even pain are important to the development of character,” Dr. Chatterjee asks, “does the use of pharmacological interventions to ameliorate our struggles undermine this essential process?”
To shed some light on this ethical quandary, ScriptPhD.com enlisted the help of Mary Devereaux, PhD, a bioethics expert at The Center for Ethics in Science and Technology in San Diego, CA, and Peter Wagner, MD, a professor in the Schools of Medicine and Bioengineering at UCSD.
To continue reading these enlightening interviews, click “Continue Reading”…
Interview #1: Peter Wagner, MD, biomedical/pharmacological ethics of memory erasure
1) When an issue presents an ethical dilemma to the medical community, what is the role of bioethicists and the American Medical Association in any FDA approval process for a treatment or drug?
The AMA and the FDA themselves would be the best sources of answers to these questions, not me.
That said, my understanding of the general process for bringing a treatment or drug to market is something like this:
The investigator wishing to gain FDA approval will have had the product go through a series of clinical trials (Phases I, II, III, etc.) that in a sequential manner establish safety and efficacy, and often side effects, indications and exclusions as well, all over some period of time. These trials may well have had their experimental protocols in part dictated by the FDA itself. The FDA has expert panels to review the trial data and render their verdict. As far as I am aware, the investigator would have the choice of including an ethicist in the process. I do not know whether the FDA gets involved in ethics issues, even at a high level, but I suspect not. This would be a very slippery slope, and deciding what constitutes a trigger for FDA involvement might create more problems than ethical intervention would solve. Thus, genetic testing technology exists, but the ethical issues of being able to obtain specific genetic information do not seem to be part of the FDA piece. Ditto for organ transplant and gene therapy and their risks versus benefits. I also suspect that the AMA would not be involved until and unless, after FDA approval and only once the treatment or drug was in use, some major medical dilemma was reported that would prompt them to make a statement.
I would point out that there are many cases of treatments getting FDA approval after research suggesting safety and efficacy, yet which, after more experience in the field, were found to have serious risks not evident at the time of approval. How many years should pass, and how many patients should be studied, before such longer-term risks are declared non-existent, during which time many patients who would benefit from the new treatment are denied access to it? That is an ethical dilemma intrinsic to treatment development and approval, and it can never go away.
2) When anesthesia and epidurals first became available, many people resisted their use on similar grounds: uncertainty about safety, pain as strengthening character. Couldn’t you argue that once we become familiar enough with the usage of such a technology, we might look back on our reticence in the same light?
I would separate medical from ethical issues here as much as possible: safety is a medical issue, and the inventor, the FDA, and the user (surgeon, prescriber etc) all have major, cut and dried, responsibilities to maximize safety during development and use. Ethics enters the room in deciding when a treatment has been tested enough to be sure bad side effects have “reasonably” been identified, as stated above.
Putting up with pain is different – to me that has to be a choice for each person to make based on balancing their own beliefs, their own informed concerns over safety and side effects and in this example, their pain tolerance. Medical professionals have the absolute responsibility of informing the patient of risks and benefits honestly and accurately, but the patient must be responsible for making the choice. Ethics comes in here when the professional misinforms the patient through ignorance or malfeasance.
3) If memory erasure becomes a medical reality, what kind of evaluations or consultations would a patient have to undergo before being approved for the procedure? What can possibly prepare a patient to mentally comprehend that their memories will be gone?
Nothing is ever simple, and memory manipulation may seem thornier to grapple with than something less mysterious such as heart surgery. Thus, messing with the mind conjures up images of brainwashing by mad scientists; heart surgery just opens clogged vessels but with well-defined physical risks. Yet my answer to your question is exactly as above – the caregiving health professional has to give the patient a detailed, honest and informed account of the risks and benefits. Then the patient has to be the one to decide. In either case, if I were a prospective patient, I would ask a bunch of questions. They would clearly be different between heart surgery and memory erasure. I would want to know if the memory treatment was permanent, would it wipe out good memories along with the bad, could I still form new memories going forward, would it have neural effects on other brain functions from emotional reaction to taste and smell to motor control to control of heart rate – and so on and so on. But the core principle seems to be no different than for heart surgery: properly informed consent so the patient can weigh the risks and benefits and balance them to reach their own decision. In the case of memory erasure, the unknowns may be so many and profound that for many, I expect the answer would be “no thanks”.
The more complicated we try and make the rules imposed on others, the more it actually becomes an ethical problem for us all.
The FDA has to be responsible for regulating treatment availability by requiring and evaluating the studies of treatment development and its job is to be as certain as possible that the efficacy and side effects have all been identified and risks quantified, such that the risk/benefit ratio is acceptable. While the FDA has to grapple with the ethics of what is the right balance of risks to benefits, it should not be charged with societal ethics or moral issues of treatment choice once a treatment is available.
The physician has to be responsible for informing the patient about treatment options and his/her ethical responsibility is to be complete, honest, accurate and unbiased.
The patient then has the responsibility of accepting or rejecting the treatment, be it for heart surgery or memory erasure.
Interview #2: Mary Devereaux, PhD
The ScriptPhD: Back when [Eternal Sunshine of the Spotless Mind] came out, the research hadn’t caught up. And one thing that kind of caught my attention was that now this has actually been done in the lab. I mean, in rats and in mice, but they served as an excellent template for human research. And so there’s no reason to believe that a researcher would not cross over and say, “Well if we can do this in mice, and if there are these chemicals that we can manipulate, or groups of cells….” What interests me is not necessarily the idea of “Can we do the research?” because I think technology races ahead; that’s not something that we can stop. I’m more interested in stopping and saying to ourselves, “What are the long-term ramifications of this and what are some important questions to ask before we race ahead?” And so my first question is a very general one. What is the relationship between memory and self? Between memory erasure and self? And then I’d go on to ask, in erasing painful memories, do we not irrevocably alter what we define as the major component that constitutes personality, and do we have the right to do that? And the reason that I ask this question is that I think it’s an important one to consider before doing something this drastic. What are your thoughts on this?
Mary Devereaux: Well, you know I tend to respond to these things in terms of ethics, because I’m a bioethicist. So, my first question before we get into the more philosophical things, from an ethical point of view, would be, “How good is the technology? How well does it work?” And I think the answer to that is “We don’t know.” That something works in mice doesn’t mean that it works in people. But in order to establish that it works in people, you would have to attempt it. I mean, I take it you have to run some kind of clinical trial. And that raises questions of safety, doing this kind of thing in human beings. Where, I take it, the aim would be to target, or erase, specific memories. But you might get this wrong.
SPhD: Mmm-hmm. Absolutely.
MD: So I think there are real questions about whether it would work and, if it would work, how safe it is and how you’re going to establish that with human subjects. Of course you would need to get people’s informed consent and they’d need to understand the risks, just like they would for any other kind of scientific research. In terms of actually targeting specific memories, where the idea is to erase those memories, one of my first questions would be about the coherence of the sort of narrative that’s left. That is, supposing, as in a lot of the discussion, we target memories that have to do with trauma, something like the kinds of things that lead to post-traumatic stress disorder. For example, somebody is attacked in the street, or somebody has a particular war memory. I’m not sure what happens here. If what’s left is, “I had been walking down the street, somebody asked me for the time, and the next thing I know, I wake up in the hospital and I have all of these bruises and maybe I’ve been shot and so on, but I have no recollection of this.” So when you move to thinking about the impact of memory erasure on the self, there are scientific questions that, until you answer them, make it very difficult to answer your more general questions about what this is going to mean for ethics or personhood.
SPhD: Absolutely. But I think the point you raise about erasing a war memory, let’s talk about that in more depth. Because it leads really well into my second question. Let’s look at an instance where you have someone who has survived, let’s say Hurricane Katrina, which was a tremendously stressful event. Or they’ve come back from the Iraq War and they have images in their mind that are literally causing them to not be able to live a normal life. Or you’ve been raped—look at the survivors of the Darfur Crisis and what they’re having to look at on a daily basis. One of the things I’ve been reading about in the literature and the ethical literature, is if you look across religions, if you look across cultures, it is written about pain, and painful memories, as something that is a shared human experience. That it brings people together, it causes them to bond, let’s say over the death of a loved one. Another important thing that I’ve seen brought up is that it acts as a social deterrent. For example, if you have something like the Holocaust, and it’s so painful for people that they just choose to eradicate it from their memories, how does that give you the impetus to prevent something like it from happening again? Or to codify moral imperatives as a society? And so the question that I have for you is there a danger in making it really easy, [assuming that the technology is safe], should you do it? Because although there are these incredibly painful things that we go through as human beings, in a way, there’s this risk of numbing us as a society.
MD: Wellll, I think that seems like it’s jumping ahead in two ways. One thing is that there’s a sort of pattern of argument that is constantly used when talking about human enhancements. In a way, it’s kind of funny to talk about memory erasure as a sort of cognitive enhancement because you’re taking something away, but in another sense, you could clearly use the same technology to improve memory. But in taking away something that’s painful, you’re also improving the quality of someone’s cognitive or emotional life. So that too is an enhancement. My one response is that we always say, “Let’s assume that it’s safe and that it’s effective.” I think that’s a big assumption. And I think we too readily make that assumption. I think we’re a long way from it being safe and effective. So that’s point #1. The second point is that even if I give you your assumption, let’s assume that memory erasure works, we haven’t harmed anyone in demonstrating scientifically that it works, and now we have something that not only works, but it works safely. Well, it seems to me your question is sliding between two levels. One level of question is should we do this in individual cases for very specific memories? So, [a fictitious person named] Sarah Jones at Stanford is brutally attacked some night by a group of rowdy people on campus. And we now have the expertise that we can target and remove that memory. That’s a very different kind of question from a question like your example or the Holocaust or Hurricane Katrina. There you’re not talking about targeting individual memories in particular individuals. You’re talking about much more than a given particular memory. You’re talking about a whole historical event. Which is days, if not years, of activity. But the other thing is that you’re talking about, I mean, to erase the memory of the Holocaust, you would be talking about having to erase everyone—almost everyone living’s — memory in some specific way. And that seems to me …
SPhD: I think what I meant was more like an iterative effect. OK, let’s bring it on a much more simple level, like your example. [Fictitious] Sarah Jones is raped at Stanford. And we can erase that memory for her. It still kind of goes to asking about codifying moral imperatives. Because if it becomes easy enough to just erase her memory—I hate to say this, this is a horrible thing to say, but just bear with me for ethical purposes—then why does her rape seem as painful? In a way it seems—
MD: I get what you’re saying. Why punish these young guys who are all on their way to becoming engineers and senators, and they didn’t beat her up, they just raped her. And she didn’t get pregnant, and she didn’t get any sexually transmitted diseases, and now we can remove the memory. So might this actually change our view of the moral awfulness of what they’ve done?
SPhD: Yeah, and what’s to prevent future people from committing this crime? Because part of the horror of what we go through also prevents us from hurting other people. I do think that there’s a certain level of morality that is intrinsic, if you really take religion out of the question, or how we codify our moral standards. I think that human beings, because of our ability to feel and think and process these events, and to store them as memories if we really want to talk about it like that, I think it acts as a certain impetus to not hurt other people. Well, if you can just take [Sarah] Jones to the hospital, and take away her memory—maybe the Holocaust was [too big] of an example—but on an iterative scale—
MD: I think Hurricane Katrina is a much better example.
SPhD: Hurricane Katrina is an EXCELLENT example. But what I was worrying about on an iterative scale is that when all of these add up, that there’s a numbing effect, that what makes rape [as an example] so horrible now if you can just wipe it out of your mind, or any other terrible event for that matter?
MD: Well one thing is it’s horrible for those people who do it. It affects other people’s safety and so on. And it isn’t usually the case that there are no physical or other kinds of consequences. But the other thing is that putting the question this way suggests that if we open the door to the cases we started with, the rape and the Iraqi soldier, both of whom are having symptoms, say, of PTSD, then it will be available to everyone who has ANY kind of painful memory. And I don’t think that follows. I’m also not sure that I share your intuition that the main thing that keeps us from hurting each other is our memory of ourselves being hurt.
SPhD: But there are consequences. And so if you take, for example, the Iraqi soldier. War is horrible. It’s absolutely awful. And if you erase those memories from them [so easily], in a way it is a bit of a numbing down.
MD: Well that’s true. I mean, if you could simply send people off to war and they come back and not remember the two years at all, or however many years they were serving. On the other hand, then part of the appeal of military life, the kind of experience, the good things you gain from those experiences, it would also, I take it, be gone. In short, I’m not sure that you could eliminate something that—I mean, I’m not sure how, scientifically, they locate these memories and how they target them.
SPhD: That’s why I share your concern.
MD: In mice, you administer a shock, they learn to avoid a particular place on the floor of the cage. And then you eradicate that memory. But in order to eradicate that memory, you have to be able to target it. Now, with the mice, it may not matter if we’re targeting the rest of their memory. I mean, do we know, for example, that in these mice experiments, that the only thing the mice have lost is the memory of how to avoid the shock? I mean, they might have also lost where to find their water and where to find their food, whereas with human beings, we’re going to have to be much more fine-grained. We want to lose the memory of, say, the rape, but we don’t want to lose the memory of who we are or who our friends were when we left the bar.
SPhD: And that is the collateral. It could be that we can just never get the technology perfect. There’s a risk with everything we do medically. And so that’s why I think some of the issues you bring forth are really important. And you know, we’ve been talking about these horrific examples, and you can see why there’s a definite yin and yang to it, in the sense that you can really help people who are suffering, but it really forces you to ask these very fundamental questions of how you erase these memories and what the collateral is.
MD: Well, I think you’re right, because there are different kinds of memory. I mean, there’s memory of an event, like that I had dinner last night with such-and-such. But then, I take it there’s memory of a more general kind of experience. But personally, I do think that the real issues concern how you would establish the effectiveness of removing specific memories, and how you would do this safely. My own guess is that if we were able to do this at all, that it would be used very sparingly. Because the risks would be really significant. I mean, would you want somebody to put a probe into your brain to try to identify the correct memory? Well, maybe if you couldn’t sleep and you couldn’t eat and you couldn’t study, then you would be willing to risk that. But if it’s for something more moderate than that, you know the boyfriend or the girlfriend you don’t want to remember anymore—
SPhD: Well, that actually, you’re leading into my next question really well, which is given the assumption, and I think it’s a very fair assumption, that we’re never going to have 100% effectiveness with anything, and the human mind is so complex, how are you ever going to be able to pinpoint a perfect technology—
MD: Well, but it doesn’t have to be perfect. Most of what we do in medicine isn’t perfect. It just has to be good enough.
SPhD: Exactly. Good enough. Let’s say it’s good enough. How do you determine, then, that a certain memory is painful enough that it warrants erasing all others? If you’re the physician or the psychologist working with the physician, how do you assess that? Where is that line drawn in the sand? And I think that’s another really important question to ask.
MD: But from an ethical point of view, that wouldn’t be up to the physician. I mean the physician or the psychologist, I take it, would say to the patient, “You’re clearly suffering from a severe form of PTSD. You’ve not responded to our standard-of-care strategies. We do have something new that’s available. It has higher risks, but since these other things haven’t worked, you might want to consider it.” And then it would really be up to the patient who would be informed of the risks and benefits, and then you’d have to do informed consent, and all of that kind of thing beforehand, just like with any other surgery.
SPhD: That’s a really tough thing with informed consent. That’s actually something that Dr. Wagner talked about a lot, which is that ultimately, it REALLY comes down to the patient. And to me, it’s like being able to grasp the idea that in a state of suffering, I just feel like how can you ever fully comprehend the idea that all of your memories to date might be erased?
MD: But they wouldn’t be doing THAT. They could never. I mean, if that was the program, nobody would sign on for it. I think the only discussion about memory erasure technology has been targeting specific memories. Because if you were wiping somebody’s memory, I mean what is that show—Dollhouse—where they DO wipe people’s memories and so on, no one would agree. So no, I think this is something that’s [potentially] used to target specifically distressing memories. Of course the other question might be, and it might be interesting as a background for what it is you’re writing or posting, and that is, what is the state of PTSD research and therapy now? Because from what I understand, there are reasonably effective desensitizing or de-briefing strategies that people are developing for PTSD. And so, I don’t need you to put a probe in my brain if there are other ways we can de-intensify my memory and my suffering. And I think there are strategies. I think there are various behavioral kinds of things of bringing up memories in various sorts of ways that are effective. So I think it would be interesting to look at the risks and benefits of memory erasure in comparison to what else the neurosciences are allowing us to understand and do.
SPhD: Absolutely! I agree with you. I heard you earlier use the phrase “cognitive enhancement” which is another really interesting aspect of this. The article by Dr. Anjan Chatterjee was really interesting. He’s over at the University of Pennsylvania Ethics Center. And he really seemed to take a stance of, this is no different than, and I don’t want to misquote him, but based on his article, he really likened it to athletes taking steroids to enhance performance. And out of that article derived this very important question, which is, is it psychotropic evolution that presents this unfair advantage to those people for whom it’s either affordable or accessible?
MD: Well the answer to that is yes, and yes. But in that sense, cognitive enhancement or any of these enhancements are no different from almost everything else we’re doing in medicine. There’s very little in the United States medical system that isn’t equally, if not more, unfair. I mean, I don’t know if you’re familiar with this, but apparently, there’s literature suggesting that if you look at soldiers in the same theater of operations, those who are officers as opposed to enlisted actually have lower rates of PTSD. And I think there’s some suggestion that they have perhaps more resources for self-understanding or managing stress or what have you. So it’s not just money, there are other things: education, psychological resilience, how people were brought up, what kind of security they had as children, and so on. So yes, I think it’s very unfair, but again, I don’t think that’s saying anything specific about cognitive enhancement, because almost all medical care in the United States is unfair in that respect. Unless…well…
SPhD: Well, unless you have access to it, and in our country unless you have insurance, which is another—I mean, it just goes back to the idea of inequity in general in Western society, and even within subsets of it. But I thought that particular question was an interesting one if you look at the metadata, and if you look 10, 15, 20, 50 years down the line, which is why I was asking some of those questions about long-term consequences of just being able to say, “Poof! Bad memory gone!” There is a sort of evolution of the mind there in terms of personality, in terms of consequence of bad things, in terms of how we relate to each other, of being able to bond with another human being over a tragedy as opposed to saying, “Doctor, doctor, make it go away.”
MD: Well, I mean I think if you’re really going to pursue this, then you need to distinguish between shared experience and empathy. Sometimes we commiserate with another because we in fact have experienced the same things. Somebody tells you that they’ve had a heart attack, and you’ve had one, so you know exactly what that’s like. But the other case is where somebody loses the use of their legs and we have our legs and we have no idea what that’s like but we’re able to empathize anyway through the power of the imagination. And I think a lot of what you’re talking about in terms of moral connection and community and how we bond to each other, I don’t think depends upon direct shared experiences of actual events. I think a lot of it comes from watching things, from literature, from storytelling—
SPhD: And I think 9/11 was such a great example of that. There were people who were thousands of miles away from 9/11 and it still brought communities together.
MD: Exactly. That’s a very good example. And the same thing with Katrina, although maybe less so. I think 9/11 really united the country in a way that the slogan “We’re all New Yorkers” exemplified. That’s exactly the kind of thing I’m talking about.
SPhD: And that’s sort of uplifting to think about. Because in researching this topic, it did get me to thinking of, gosh, there’s a nightmare scenario of living in this numb world of erased memories and, I don’t know, it’s kind of a very futuristic, sort of dark, Sci-Fi way of looking at it. And maybe that’s my cynical nature, but it is good to know I think that there is this delineation, and that in a way we would always be impervious to being [completely] numb. There is something in the human existence that even if you were to erase a memory, you can’t wipe away people’s ability to come together, to have this shared experience. And that is wonderful, I think.
I’d like to give a sincere and heartfelt thanks to Dr. Peter Wagner and Dr. Mary Devereaux for the engaging discussion and now open the floor to you, faithful readers. Feel free to comment or give your own opinion on what we’ve been discussing, or bring forth your own ethical concerns!