Monday, February 21, 2011

2045

Immortality is just decades away


Aubrey de Grey

Thursday, Feb. 10, 2011

2045: The Year Man Becomes Immortal

By Lev Grossman

On Feb. 15, 1965, a diffident but self-possessed high school student named Raymond Kurzweil appeared as a guest on a game show called I've Got a Secret. He was introduced by the host, Steve Allen, then he played a short musical composition on a piano. The idea was that Kurzweil was hiding an unusual fact and the panelists — they included a comedian and a former Miss America — had to guess what it was.
On the show (see the clip on YouTube), the beauty queen did a good job of grilling Kurzweil, but the comedian got the win: the music was composed by a computer. Kurzweil got $200.
Kurzweil then demonstrated the computer, which he built himself — a desk-size affair with loudly clacking relays, hooked up to a typewriter. The panelists were pretty blasé about it; they were more impressed by Kurzweil's age than by anything he'd actually done. They were ready to move on to Mrs. Chester Loney of Rough and Ready, Calif., whose secret was that she'd been President Lyndon Johnson's first-grade teacher.
But Kurzweil would spend much of the rest of his career working out what his demonstration meant. Creating a work of art is one of those activities we reserve for humans and humans only. It's an act of self-expression; you're not supposed to be able to do it if you don't have a self. To see creativity, the exclusive domain of humans, usurped by a computer built by a 17-year-old is to watch a line blur that cannot be unblurred, the line between organic intelligence and artificial intelligence.
That was Kurzweil's real secret, and back in 1965 nobody guessed it. Maybe not even him, not yet. But now, 46 years later, Kurzweil believes that we're approaching a moment when computers will become intelligent, and not just intelligent but more intelligent than humans. When that happens, humanity — our bodies, our minds, our civilization — will be completely and irreversibly transformed. He believes that this moment is not only inevitable but imminent. According to his calculations, the end of human civilization as we know it is about 35 years away.
Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.
True? True.
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.
Probably. It's impossible to predict the behavior of these smarter-than-human intelligences with which (with whom?) we might one day share the planet, because if you could, you'd be as smart as they would be. But there are a lot of theories about it. Maybe we'll merge with them to become super-intelligent cyborgs, using computers to extend our intellectual abilities the same way that cars and planes extend our physical abilities. Maybe the artificial intelligences will help us treat the effects of old age and prolong our life spans indefinitely. Maybe we'll scan our consciousnesses into computers and live inside them as software, forever, virtually. Maybe the computers will turn on humanity and annihilate us. The one thing all these theories have in common is the transformation of our species into something that is no longer recognizable as such to humanity circa 2011. This transformation has a name: the Singularity.
The difficult thing to keep sight of when you're talking about the Singularity is that even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but suppress it if you can, because while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation.
People are spending a lot of money trying to understand it. The three-year-old Singularity University, which offers interdisciplinary courses of study for graduate students and executives, is hosted by NASA. Google was a founding sponsor; its CEO and co-founder Larry Page spoke there last year. People are attracted to the Singularity for the shock value, like an intellectual freak show, but they stay because there's more to it than they expected. And of course, in the event that it turns out to be real, it will be the most important thing to happen to human beings since the invention of language.
The Singularity isn't a wholly new idea, just newish. In 1965 the British mathematician I.J. Good described something he called an "intelligence explosion":
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The word singularity is borrowed from astrophysics: it refers to a point in space-time — for example, inside a black hole — at which the rules of ordinary physics do not apply. In the 1980s the science-fiction novelist Vernor Vinge attached it to Good's intelligence-explosion scenario. At a NASA symposium in 1993, Vinge announced that "within 30 years, we will have the technological means to create super-human intelligence. Shortly after, the human era will be ended."
By that time Kurzweil was thinking about the Singularity too. He'd been busy since his appearance on I've Got a Secret. He'd made several fortunes as an engineer and inventor; he founded and then sold his first software company while he was still at MIT. He went on to build the first print-to-speech reading machine for the blind — Stevie Wonder was customer No. 1 — and made innovations in a range of technical fields, including music synthesizers and speech recognition. He holds 39 patents and 19 honorary doctorates. In 1999 President Bill Clinton awarded him the National Medal of Technology.
But Kurzweil was also pursuing a parallel career as a futurist: he has been publishing his thoughts about the future of human and machine-kind for 20 years, most recently in The Singularity Is Near, which was a best seller when it came out in 2005. A documentary by the same name, starring Kurzweil, Tony Robbins and Alan Dershowitz, among others, was released in January. (Kurzweil is actually the subject of two current documentaries. The other one, less authorized but more informative, is called Transcendent Man.) Bill Gates has called him "the best person I know at predicting the future of artificial intelligence."
In real life, the transcendent man is an unimposing figure who could pass for Woody Allen's even nerdier younger brother. Kurzweil grew up in Queens, N.Y., and you can still hear a trace of it in his voice. Now 62, he speaks with the soft, almost hypnotic calm of someone who gives 60 public lectures a year. As the Singularity's most visible champion, he has heard all the questions and faced down the incredulity many, many times before. He's good-natured about it. His manner is almost apologetic: I wish I could bring you less exciting news of the future, but I've looked at the numbers, and this is what they say, so what else can I tell you?
Kurzweil's interest in humanity's cyborganic destiny began around 1980, largely as a practical matter. He needed ways to measure and track the pace of technological progress. Even great inventions can fail if they arrive before their time, and he wanted to make sure that when he released his, the timing was right. "Even at that time, technology was moving quickly enough that the world was going to be different by the time you finished a project," he says. "So it's like skeet shooting — you can't shoot at the target." He knew about Moore's law, of course, which states that the number of transistors you can put on a microchip doubles about every two years. It's a surprisingly reliable rule of thumb. Kurzweil tried plotting a slightly different curve: the change over time in the amount of computing power, measured in MIPS (millions of instructions per second), that you can buy for $1,000.
As it turned out, Kurzweil's numbers looked a lot like Moore's. They doubled every couple of years. Drawn as graphs, they both made exponential curves, with their value increasing by multiples of two instead of by regular increments in a straight line. The curves held eerily steady, even when Kurzweil extended his curve backward through the decades of pretransistor computing technologies like relays and vacuum tubes, all the way back to 1900.
Kurzweil then ran the numbers on a whole bunch of other key technological indexes — the falling cost of manufacturing transistors, the rising clock speed of microprocessors, the plummeting price of dynamic RAM. He looked even further afield at trends in biotech and beyond — the falling cost of sequencing DNA and of wireless data service and the rising numbers of Internet hosts and nanotechnology patents. He kept finding the same thing: exponentially accelerating progress. "It's really amazing how smooth these trajectories are," he says. "Through thick and thin, war and peace, boom times and recessions." Kurzweil calls it the law of accelerating returns: technological progress happens exponentially, not linearly.
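The arithmetic behind that law is simple enough to sketch. Below is a minimal illustration in Python of the kind of curve described above: computing power per $1,000 under a fixed doubling period. The 1900 baseline value and the two-year period here are illustrative assumptions for the sketch, not Kurzweil's actual data points.

    # Price-performance under a fixed doubling law (illustrative numbers,
    # not Kurzweil's actual data).
    def mips_per_1000_dollars(year, base_year=1900, base_mips=1e-9,
                              doubling_years=2.0):
        """MIPS per $1,000, assuming one doubling every `doubling_years`
        years from an assumed baseline."""
        return base_mips * 2 ** ((year - base_year) / doubling_years)

    for year in (1965, 2000, 2011, 2045):
        print(f"{year}: ~{mips_per_1000_dollars(year):.2g} MIPS per $1,000")

The point is the shape rather than the exact numbers: plotted on a logarithmic scale, a curve like this is a straight line, which is why it can hold steady across technologies as different as relays, vacuum tubes and transistors.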
Then he extended the curves into the future, and the growth they predicted was so phenomenal, it created cognitive resistance in his mind. Exponential curves start slowly, then rocket skyward toward infinity. According to Kurzweil, we're not evolved to think in terms of exponential growth. "It's not intuitive. Our built-in predictors are linear. When we're trying to avoid an animal, we pick the linear prediction of where it's going to be in 20 seconds and what to do about it. That is actually hardwired in our brains."
Here's what the exponential curves told him. We will successfully reverse-engineer the human brain by the mid-2020s. By the end of that decade, computers will be capable of human-level intelligence. Kurzweil puts the date of the Singularity — never say he's not conservative — at 2045. In that year, he estimates, given the vast increases in computing power and the vast reductions in the cost of same, the quantity of artificial intelligence created will be about a billion times the sum of all the human intelligence that exists today.
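As a back-of-envelope check on that last figure (the inputs here are assumptions for illustration, not numbers from the article): a billionfold increase is about 30 doublings, so reaching it in the roughly 35 years between 2010 and 2045 implies that capability doubles a little more than once a year.

    # Hypothetical check: what doubling time yields a billionfold gain
    # between 2010 and 2045? Inputs are illustrative assumptions.
    import math

    years = 2045 - 2010        # about 35 years
    target_multiple = 1e9      # "about a billion times"

    doublings = math.log2(target_multiple)   # roughly 29.9 doublings
    print(f"doublings needed: {doublings:.1f}")
    print(f"implied doubling time: {years / doublings:.2f} years")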
The Singularity isn't just an idea; it attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.
Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.
In addition to the Singularity University, which Kurzweil co-founded, there's also a Singularity Institute for Artificial Intelligence, based in San Francisco. It counts among its advisers Peter Thiel, a former CEO of PayPal and an early investor in Facebook. The institute holds an annual conference called the Singularity Summit. (Kurzweil co-founded that too.) Because of the highly interdisciplinary nature of Singularity theory, it attracts a diverse crowd. Artificial intelligence is the main event, but the sessions also cover the galloping progress of, among other fields, genetics and nanotechnology.
At the 2010 summit, which took place in August in San Francisco, there were not just computer scientists but also psychologists, neuroscientists, nanotechnologists, molecular biologists, a specialist in wearable computers, a professor of emergency medicine, an expert on cognition in gray parrots and the professional magician and debunker James "the Amazing" Randi. The atmosphere was a curious blend of Davos and UFO convention. Proponents of seasteading — the practice, so far mostly theoretical, of establishing politically autonomous floating communities in international waters — handed out pamphlets. An android chatted with visitors in one corner.
After artificial intelligence, the most talked-about topic at the 2010 summit was life extension. Biological boundaries that most people think of as permanent and inevitable Singularitarians see as merely intractable but solvable problems. Death is one of them. Old age is an illness like any other, and what do you do with illnesses? You cure them. Like a lot of Singularitarian ideas, it sounds funny at first, but the closer you get to it, the less funny it seems. It's not just wishful thinking; there's actual science going on here.
For example, it's well known that one cause of the physical degeneration associated with aging involves telomeres, which are segments of DNA found at the ends of chromosomes. Every time a cell divides, its telomeres get shorter, and once a cell runs out of telomeres, it can't reproduce anymore and dies. But there's an enzyme called telomerase that reverses this process; it's one of the reasons cancer cells live so long. So why not treat regular non-cancerous cells with telomerase? In November, researchers at Harvard Medical School announced in Nature that they had done just that. They administered telomerase to a group of mice suffering from age-related degeneration. The damage went away. The mice didn't just get better; they got younger.
Aubrey de Grey is one of the world's best-known life-extension researchers and a Singularity Summit veteran. A British biologist with a doctorate from Cambridge and a famously formidable beard, de Grey runs a foundation called SENS, or Strategies for Engineered Negligible Senescence. He views aging as a process of accumulating damage, which he has divided into seven categories, each of which he hopes to one day address using regenerative medicine. "People have begun to realize that the view of aging being something immutable — rather like the heat death of the universe — is simply ridiculous," he says. "It's just childish. The human body is a machine that has a bunch of functions, and it accumulates various types of damage as a side effect of the normal function of the machine. Therefore in principle that damage can be repaired periodically. This is why we have vintage cars. It's really just a matter of paying attention. The whole of medicine consists of messing about with what looks pretty inevitable until you figure out how to make it not inevitable."
Kurzweil takes life extension seriously too. His father, with whom he was very close, died of heart disease at 58. Kurzweil inherited his father's genetic predisposition; he also developed Type 2 diabetes when he was 35. Working with Terry Grossman, a doctor who specializes in longevity medicine, Kurzweil has published two books on his own approach to life extension, which involves taking up to 200 pills and supplements a day. He says his diabetes is essentially cured, and although he's 62 years old from a chronological perspective, he estimates that his biological age is about 20 years younger.
But his goal differs slightly from de Grey's. For Kurzweil, it's not so much about staying healthy as long as possible; it's about staying alive until the Singularity. It's an attempted handoff. Once hyper-intelligent artificial intelligences arise, armed with advanced nanotechnology, they'll really be able to wrestle with the vastly complex, systemic problems associated with aging in humans. Alternatively, by then we'll be able to transfer our minds to sturdier vessels such as computers and robots. He and many other Singularitarians take seriously the proposition that many people who are alive today will wind up being functionally immortal.
It's an idea that's radical and ancient at the same time. In "Sailing to Byzantium," W.B. Yeats describes mankind's fleshly predicament as a soul fastened to a dying animal. Why not unfasten it and fasten it to an immortal robot instead? But Kurzweil finds that life extension produces even more resistance in his audiences than his exponential growth curves. "There are people who can accept computers being more intelligent than people," he says. "But the idea of significant changes to human longevity — that seems to be particularly controversial. People invested a lot of personal effort into certain philosophies dealing with the issue of life and death. I mean, that's the major reason we have religion."
Of course, a lot of people think the Singularity is nonsense — a fantasy, wishful thinking, a Silicon Valley version of the Evangelical story of the Rapture, spun by a man who earns his living making outrageous claims and backing them up with pseudoscience. Most of the serious critics focus on the question of whether a computer can truly become intelligent.
The entire field of artificial intelligence, or AI, is devoted to this question. But AI doesn't currently produce the kind of intelligence we associate with humans or even with talking computers in movies — HAL or C-3PO or Data. Actual AIs tend to be able to master only one highly specific domain, like interpreting search queries or playing chess. They operate within an extremely specific frame of reference. They don't make conversation at parties. They're intelligent, but only if you define intelligence in a vanishingly narrow way. The kind of intelligence Kurzweil is talking about, which is called strong AI or artificial general intelligence, doesn't exist yet.
Why not? Obviously we're still waiting on all that exponentially growing computing power to get here. But it's also possible that there are things going on in our brains that can't be duplicated electronically no matter how many MIPS you throw at them. The neurochemical architecture that generates the ephemeral chaos we know as human consciousness may just be too complex and analog to replicate in digital silicon. The biologist Dennis Bray was one of the few voices of dissent at last summer's Singularity Summit. "Although biological components act in ways that are comparable to those in electronic circuits," he argued, in a talk titled "What Cells Can Do That Robots Can't," "they are set apart by the huge number of different states they can adopt. Multiple biochemical processes create chemical modifications of protein molecules, further diversified by association with distinct structures at defined locations of a cell. The resulting combinatorial explosion of states endows living systems with an almost infinite capacity to store information regarding past and present conditions and a unique capacity to prepare for future events." That makes the ones and zeros that computers trade in look pretty crude.
Underlying the practical challenges are a host of philosophical ones. Suppose we did create a computer that talked and acted in a way that was indistinguishable from a human being — in other words, a computer that could pass the Turing test. (Very loosely speaking, such a computer would be able to pass as human in a blind test.) Would that mean that the computer was sentient, the way a human being is? Or would it just be an extremely sophisticated but essentially mechanical automaton without the mysterious spark of consciousness — a machine with no ghost in it? And how would we know?
Even if you grant that the Singularity is plausible, you're still staring at a thicket of unanswerable questions. If I can scan my consciousness into a computer, am I still me? What are the geopolitics and the socioeconomics of the Singularity? Who decides who gets to be immortal? Who draws the line between sentient and nonsentient? And as we approach immortality, omniscience and omnipotence, will our lives still have meaning? By beating death, will we have lost our essential humanity?
Kurzweil admits that there's a fundamental level of risk associated with the Singularity that's impossible to refine away, simply because we don't know what a highly advanced artificial intelligence, finding itself a newly created inhabitant of the planet Earth, would choose to do. It might not feel like competing with us for resources. One of the goals of the Singularity Institute is to make sure not just that artificial intelligence develops but also that the AI is friendly. You don't have to be a super-intelligent cyborg to understand that introducing a superior life-form into your own biosphere is a basic Darwinian error.
If the Singularity is coming, these questions are going to get answers whether we like it or not, and Kurzweil thinks that trying to put off the Singularity by banning technologies is not only impossible but also unethical and probably dangerous. "It would require a totalitarian system to implement such a ban," he says. "It wouldn't work. It would just drive these technologies underground, where the responsible scientists who we're counting on to create the defenses would not have easy access to the tools."
Kurzweil is an almost inhumanly patient and thorough debater. He relishes it. He's tireless in hunting down his critics so that he can respond to them, point by point, carefully and in detail.
Take the question of whether computers can replicate the biochemical complexity of an organic brain. Kurzweil yields no ground there whatsoever. He does not see any fundamental difference between flesh and silicon that would prevent the latter from thinking. He defies biologists to come up with a neurological mechanism that could not be modeled or at least matched in power and flexibility by software running on a computer. He refuses to fall on his knees before the mystery of the human brain. "Generally speaking," he says, "the core of a disagreement I'll have with a critic is, they'll say, Oh, Kurzweil is underestimating the complexity of reverse-engineering of the human brain or the complexity of biology. But I don't believe I'm underestimating the challenge. I think they're underestimating the power of exponential growth."
This position doesn't make Kurzweil an outlier, at least among Singularitarians. Plenty of people make more-extreme predictions. Since 2005 the neuroscientist Henry Markram has been running an ambitious initiative at the Brain Mind Institute of the École Polytechnique Fédérale de Lausanne in Switzerland. It's called the Blue Brain project, and it's an attempt to create a neuron-by-neuron simulation of a mammalian brain, using IBM's Blue Gene super-computer. So far, Markram's team has managed to simulate one neocortical column from a rat's brain, which contains about 10,000 neurons. Markram has said that he hopes to have a complete virtual human brain up and running in 10 years. (Even Kurzweil sniffs at this. If it worked, he points out, you'd then have to educate the brain, and who knows how long that would take?)
By definition, the future beyond the Singularity is not knowable by our linear, chemical, animal brains, but Kurzweil is teeming with theories about it. He positively flogs himself to think bigger and bigger; you can see him kicking against the confines of his aging organic hardware. "When people look at the implications of ongoing exponential growth, it gets harder and harder to accept," he says. "So you get people who really accept, yes, things are progressing exponentially, but they fall off the horse at some point because the implications are too fantastic. I've tried to push myself to really look."
In Kurzweil's future, biotechnology and nanotechnology give us the power to manipulate our bodies and the world around us at will, at the molecular level. Progress hyperaccelerates, and every hour brings a century's worth of scientific breakthroughs. We ditch Darwin and take charge of our own evolution. The human genome becomes just so much code to be bug-tested and optimized and, if necessary, rewritten. Indefinite life extension becomes a reality; people die only if they choose to. Death loses its sting once and for all. Kurzweil hopes to bring his dead father back to life.
We can scan our consciousnesses into computers and enter a virtual existence or swap our bodies for immortal robots and light out for the edges of space as intergalactic godlings. Within a matter of centuries, human intelligence will have re-engineered and saturated all the matter in the universe. This is, Kurzweil believes, our destiny as a species.
Or it isn't. When the big questions get answered, a lot of the action will happen where no one can see it, deep inside the black silicon brains of the computers, which will either bloom bit by bit into conscious minds or just continue in ever more brilliant and powerful iterations of nonsentience.
But as for the minor questions, they're already being decided all around us and in plain sight. The more you read about the Singularity, the more you start to see it peeking out at you, coyly, from unexpected directions. Five years ago we didn't have 600 million humans carrying out their social lives over a single electronic network. Now we have Facebook. Five years ago you didn't see people double-checking what they were saying and where they were going, even as they were saying it and going there, using handheld network-enabled digital prosthetics. Now we have iPhones. Is it an unimaginable step to take the iPhones out of our hands and put them into our skulls?
Already 30,000 patients with Parkinson's disease have neural implants. Google is experimenting with computers that can drive cars. There are more than 2,000 robots fighting in Afghanistan alongside the human troops. This month a game show will once again figure in the history of artificial intelligence, but this time the computer will be the guest: an IBM super-computer nicknamed Watson will compete on Jeopardy! Watson runs on 90 servers and takes up an entire room, and in a practice match in January it finished ahead of two former champions, Ken Jennings and Brad Rutter. It got every question it answered right, but much more important, it didn't need help understanding the questions (or, strictly speaking, the answers), which were phrased in plain English. Watson isn't strong AI, but if strong AI happens, it will arrive gradually, bit by bit, and this will have been one of the bits.
A hundred years from now, Kurzweil and de Grey and the others could be the 22nd century's answer to the Founding Fathers — except unlike the Founding Fathers, they'll still be alive to get credit — or their ideas could look as hilariously retro and dated as Disney's Tomorrowland. Nothing gets old as fast as the future.
But even if they're dead wrong about the future, they're right about the present. They're taking the long view and looking at the big picture. You may reject every specific article of the Singularitarian charter, but you should admire Kurzweil for taking the future seriously. Singularitarianism is grounded in the idea that change is real and that humanity is in charge of its own fate and that history might not be as simple as one damn thing after another. Kurzweil likes to point out that your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than the computer he had at MIT 40 years ago. Flip that forward 40 years and what does the world look like? If you really want to figure that out, you have to think very, very far outside the box. Or maybe you have to think further inside it than anyone ever has before.

Thursday, February 03, 2011

Wisdom from Epicurus and Diogenes




Egypt. Photo by Dar



As the world in general and my inner circle in particular struggle with the riddles of existence in this new millennium, I find Epicurus and his compatriot Diogenes to offer sage solutions:


The fundamental obstacle to happiness, says Epicurus, is anxiety. No matter how rich or famous you are, you won't be happy if you're anxious to be richer or more famous. No matter how good your health is, you won't be happy if you're anxious about getting sick. You can't be happy in this life if you're worried about the next life. You can't be happy as a human being if you're worried about being punished or victimized by powerful divine beings. But you can be happy if you believe in the four basic truths of Epicureanism: there are no divine beings which threaten us; there is no next life; what we actually need is easy to get; what makes us suffer is easy to put up with. This is the so-called 'four-part cure', the Epicurean remedy for the epidemic sickness of human anxiety; as a later Epicurean puts it, "Don't fear god, don't worry about death; what's good is easy to get, and what's terrible is easy to endure."1
"What's good is easy to get." We need food, water, shelter from the elements, and safety from hostile animals and people. All these things lie ready to hand and can be acquired with little effort or money. We don't need caviar, champagne, palaces, or bodyguards, which are expensive and difficult to acquire and keep. People who want more than they need are making a fundamental mistake, a mistake that reduces their chances of being satisfied and causes needless anxiety. While our bodies need food, water, shelter, and safety, all that our souls need is to be confident that our bodies will get what they need. If my body is contented and my soul is confident, then I will be cheerful, and being cheerful is the key to being happy. As long as we are cheerful it takes very little to keep us happy, but without cheerfulness we cannot really enjoy even the so-called 'pleasures' of life. Being cheerful is a state which is full of pleasure—indeed Epicurus calls it 'the limit of pleasure'—and it is a normal state, but if we suffer from anxiety we need to train ourselves to attain and maintain it. The discipline of Epicurean philosophy enables its followers to recognize how little they actually need, to enjoy possessing it, and to enjoy the confidence that they will continue to possess it. On the other hand, there is no reason not to enjoy occasional luxuries, if they happen to be easily available. There is nothing wrong with luxury in itself, but any dependence on luxuries is harmful to our happiness, as is every desire for unnecessary things.
"What's terrible is easy to endure." There is no denying that illness and pain are disagreeable, but nature has so constituted us that we need not suffer very much from them. Sickness is either brief or chronic, and either mild or intense, but discomfort that is both chronic and intense is very unusual; so there is no need to be concerned about the prospect of suffering. This is admittedly a difficult teaching to accept, especially for young people, but as people get older and more experienced in putting up with suffering, they tend to recognize its truth more and more, as did the Roman philosopher Seneca, whose health was anything but strong.2 Epicurus himself died in excruciating pain, from kidney failure after two weeks of pain caused by kidney stones; but he died cheerfully, he claimed, because he kept in mind the memory of his friends and the agreeable experiences and conversations they had had together. Mental suffering, unlike physical suffering, is agony to endure, but once you grasp the Epicurean philosophy you won't need to face it again. Know the limits of what you need, recognize the limits of what your body is likely to suffer, and enjoy the confidence that your life will be overwhelmingly pleasant, unless you poison it with anxiety.
"Don't worry about death." While you are alive, you don't have to deal with being dead, but when you are dead you don't have to deal with it either, because you aren't there to deal with it. "Death is nothing to us," as Epicurus puts it, for "when we exist, death is not yet present, and when death is present, then we do not exist."3 Death is always irrelevant to us, even though it causes considerable anxiety to many people for much of their lives. Worrying about death casts a general pall over the experience of living, either because people expect to exist after their deaths and are humbled and terrified into ingratiating themselves with the gods, who might well punish them for their misdeeds, or else because they are saddened and terrified by the prospect of not existing after their deaths. But there are no gods which threaten us, and, even if there were, we would not be there to be punished. Our souls are flimsy things which are dissipated when we die, and even if the stuff of which they were made were to survive intact, that would be nothing to us, because what matters to us is the continuity of our experience, which is severed by the parting of body and soul. It is not sensible to be afraid of ceasing to exist, since you already know what it is like not to exist; consider any time before your birth-was it disagreeable not to exist? And if there is nothing bad about not existing, then there is nothing bad for your friend when he ceases to exist, nor is there anything bad for you about being fated to cease to exist. It is a confusion to be worried by your mortality, and it is an ingratitude to resent the limitations of life, like some greedy dinner guest who expects an indefinite number of courses and refuses to leave the table.
"Don't fear god." The gods are happy and immortal, as the very concept of 'god' indicates. But in Epicurus' view, most people were in a state of confusion about the gods, believing them to be intensely concerned about what human beings were up to and exerting tremendous effort to favour their worshippers and punish their mortal enemies. No; it is incompatible with the concept of divinity to suppose that the gods exert themselves or that they have any concerns at all. The most accurate, as well as the most agreeable, conception of the gods is to think of them, as the Greeks often did, in a state of bliss, unconcerned about anything, without needs, invulnerable to any harm, and generally living an enviable life. So conceived, they are role models for Epicureans, who emulate the happiness of the gods, within the limits imposed by human nature. "Epicurus said that he was prepared to compete with Zeus in happiness, as long as he had a barley cake and some water."4
If, however, the gods are as independent as this conception indicates, then they will not observe the sacrifices we make to them, and Epicurus was indeed widely regarded as undermining the foundations of traditional religion. Furthermore, how can Epicurus explain the visions that we receive of the gods, if the gods don't deliberately send them to us? These visions, replies Epicurus, are material images travelling through the world, like everything else that we see or imagine, and are therefore something real; they travel through the world because of the general laws of atomic motion, not because god sends them. But then what sort of bodies must the gods have, if these images are always streaming off them, and yet they remain strong and invulnerable? Their bodies, replies Epicurus, are continually replenished by images streaming towards them; indeed the 'body' of a god may be nothing more than a focus to which the images travel, the images that later travel to us and make up our conception of its nature.5
If the gods do not exert themselves for our benefit, how is it that the world around us is suitable for our habitation? It happened by accident, said Epicurus, an answer that gave ancient critics ample opportunity for ridicule, and yet it makes him a thinker of a very modern sort, well ahead of his time. Epicurus believed that the universe is a material system governed by the laws of matter. The fundamental elements of matter are atoms,6 which move, collide, and form larger structures according to physical laws. These larger structures can sometimes develop into yet larger structures by the addition of more matter, and sometimes whole worlds will develop. These worlds are extremely numerous and variable; some will be unstable, but others will be stable. The stable ones will persist and give the appearance of being designed to be stable, like our world, and living structures will sometimes develop out of the elements of these worlds. This theory is no longer as unbelievable as it was to the non-Epicurean scientists and philosophers of the ancient world, and its broad outlines may well be true.
We happen to have a great deal of evidence about the Epicurean philosophy of nature, which served as a philosophical foundation for the rest of the system. But many Epicureans would have had little interest in this subject, nor did they need to, if their curiosity or scepticism did not drive them to ask fundamental questions. What was most important in Epicurus' philosophy of nature was the overall conviction that our life on this earth comes with no strings attached; that there is no Maker whose puppets we are; that there is no script for us to follow and be constrained by; that it is up to us to discover the real constraints which our own nature imposes on us. When we do this, we find something very delightful: life is free, life is good, happiness is possible, and we can enjoy the bliss of the gods, rather than abasing ourselves to our misconceptions of them.
To say that life is free is not to say that we don't need to observe any moral constraints. It is a very bad plan to cheat on your friends or assault people in the street or do anything else that would cause you to worry about their reactions. Why is this a bad plan? Not because god has decreed that such things are ‘immoral’, but because it is stupid to do anything that would cause you to worry about anything. In the view of some moral philosophers (both ancient and modern) this view makes Epicureanism an immoral philosophy, because it denies that there is anything intrinsically wrong with immoral conduct. If we could be sure that nobody would find out, then we would have no reason to worry about the consequences, and therefore no reason not to be immoral. True, admits Epicurus, but we can never be sure that nobody will find out, and so the most tranquil course is to obey the rules of social morality quite strictly. These have been developed over the centuries for quite understandable reasons, mostly to give ourselves mutual protection against hostile animals and people. The legal and moral rules of society serve a good purpose, although it is not worthwhile to exert yourself to become prominent in public affairs and have the anxiety of public office. Much more satisfying and valuable is to develop individual relationships of mutual confidence, for a friend will come to your assistance when an ordinary member of the public will not. In fact, friends are our most important defence against insecurity and are our greatest sources of strength, after the truths of Epicurean philosophy itself.
Friends and philosophy are the two greatest resources available to help us live our lives in confidence and without anxiety. Perhaps the best thing of all would be to have friends who shared our Epicurean philosophy with us; many Epicureans lived in small Epicurean communities, as did the followers of Pythagoras in earlier times. These Epicurean communities were probably modelled on the community that Epicurus established on the outskirts of Athens, called "The Garden." We know very little about the organization of these communities, except that they did not require their members to give up their private property to the commune (unlike the Pythagoreans and some modern religious cults) and that they probably involved regular lessons or discussions of Epicurean philosophy. They also included household servants and women on equal terms with the men, which was completely out of line with the social norms of the time, but Epicurus believed that humble people and women could understand and benefit from his philosophy as well as educated men, another respect in which Epicurean philosophy was well ahead of its time.
The membership of women caused scandalous rumours, spread by hostile sources, that "The Garden" was a place for continuous orgies and parties, rumours apparently supported by Epicurus' thesis that bodily pleasure is the original and basic form of pleasure. But Epicurus believed in marriage and the family, for those who are ready for the responsibility, and he disapproved of sexual love, because it ensnares the lover in tangles of unnecessary needs and vulnerabilities. Here's the typical pattern: first lust, then infatuation, then consummation, then jealousy or boredom. There’s only anxiety and distress in this endlessly repeated story, except for the sex itself, and Epicurus regarded sex as an unnecessary pleasure, which never did anybody any real good—count yourself lucky if it does you no harm!7 There is nothing intrinsically wrong with casual sex, but much more important than either love or sex is friendship, which "dances around the world, announcing to all of us that we must wake up to blessedness."8
One of the remarkable features of Epicurus' philosophy is that it can be understood at several levels of subtlety. You don't need to be a philosophical genius to grasp the main points, which is why Epicurus coined slogans and maxims for ordinary people to memorize, to help them relieve their anxiety whenever it might arise. There were signet rings and hand mirrors, for example, engraved with the words 'death is nothing', so the faithful could be reminded while going about their daily business. Suppose, though, that you're not convinced that 'death is nothing', and you want proof before you organize your life around that idea. For people like you, Epicurus wrote letters outlining his basic arguments, which circulated freely among those interested in the topic. Suppose, again, that you already have a philosophical education, and you want to assess Epicurus' arguments against the competing arguments of other philosophers. For this purpose he wrote elaborately careful and thorough memoranda of his arguments; his main treatise on natural philosophy ran to a staggering thirty-seven volumes. This extremely long book was given an intermediate (but still quite detailed) summary by Epicurus, and there may have been other levels of length and subtlety. If on a certain topic all our evidence seems superficial, that is probably because the more extensive discussions of that topic have not survived.
* * * * *


Alexander the Great meets the Greek philosopher Diogenes of Sinope circa 335 BC.
(Both died in 323 BC; some say on the same day.)

Diogenes of Sinope

The most illustrious of the Cynic philosophers, Diogenes of Sinope (c. 404-323 B.C.E.) serves as the template for the Cynic sage in antiquity. An alleged student of Antisthenes, Diogenes maintains his teacher’s asceticism and emphasis on ethics, but brings to these philosophical positions a dynamism and sense of humor unrivaled in the history of philosophy. Though originally from Sinope, the majority of the stories comprising his philosophical biography occur in Athens, and some of the most celebrated of these place Alexander the Great or Plato as his foil. It is disputed whether Diogenes left anything in writing. If he did, the texts he composed have since been lost. In Cynicism, living and writing are two components of ethical practice, but Diogenes is much like Socrates and even Plato in his sentiments regarding the superiority of direct verbal interaction over the written account. Diogenes scolds Hegesias after he asks to be lent one of Diogenes’ writing tablets: “You are a simpleton, Hegesias; you do not choose painted figs, but real ones; and yet you pass over the true training and would apply yourself to written rules” (Diogenes Laertius, Lives of Eminent Philosophers, Book 6, Chapter 48). In reconstructing Diogenes’ ethical model, then, the life he lived is as much his philosophical work as any texts he may have composed.

1. Life

The exceptional nature of Diogenes’ life generates some difficulty for determining the exact events that comprise it. He was a citizen of Sinope who either fled or was exiled because of a problem involving the defacing of currency. Thanks to numismatic evidence, the adulteration of Sinopean coinage is one event about which there is certainty. The details of the defacing, though, are murkier: “Diocles relates that [Diogenes] went into exile because his father was entrusted with the money of the state and adulterated the coinage. But Eubulides in his book on Diogenes says that Diogenes himself did this and was forced to leave home along with his father” (Diogenes Laertius, Lives of Eminent Philosophers, Book 6, Chapter 20). Whether it was Diogenes or his father who defaced the currency, and for whatever reasons they may have done so, the act led to Diogenes’ relocation to Athens.
Diogenes’ biography becomes, historically, only sketchier. For example, one story claims that Diogenes was urged by the oracle at Delphi to adulterate the political currency, but misunderstood and defaced the state currency (Diogenes Laertius, Lives of Eminent Philosophers, Book 6, Chapter 20). A second version tells of Diogenes traveling to Delphi and receiving this same oracle after he had already altered the currency, turning his crime into a calling. It is, finally, questionable whether Diogenes ever consulted the oracle at all; the Delphic advice is curiously close to Socrates’ own injunction, and the interweaving of life and legend in Diogenes’ case is just as substantial.
Once in Athens, Diogenes famously took a tub, or a pithos, for an abode. In Lives of Eminent Philosophers, it is reported that Diogenes “had written to some one to try and procure a cottage for him. When this man was a long time about it, he took for his abode the tub in the Metroön, as he himself explains in his letters” (Diogenes Laertius, Book 6, Chapter 23). Apparently Diogenes discovered, from having watched a mouse, that he had no need for conventional shelter or any other “dainties.” The lesson the mouse teaches is that he is capable of adapting himself to any circumstance. This adaptability is the origin of Diogenes’ legendary askēsis, or training.
Diogenes Laertius reports that Diogenes of Sinope “fell in” with Antisthenes who, though not in the habit of taking students, was worn out by Diogenes’ persistence (Lives of Eminent Philosophers, Book 6, Chapter 22). Although this account has been met with suspicion, especially given the likely dates of Diogenes’ arrival in Athens and Antisthenes’ death, it supports the perception that the foundation of Diogenes’ philosophical practice rests with Antisthenes.
Another important, though possibly invented, episode in Diogenes’ life centers around his enslavement in Corinth after having been captured by pirates. When asked what he could do, he replied “Govern men,” which is precisely what he did once bought by Xeniades. He was placed in charge of Xeniades’ sons, who learned to follow his ascetic example. One story tells of Diogenes’ release after having become a cherished member of the household, another claims Xeniades freed him immediately, and yet another maintains that he grew old and died at Xeniades’ house in Corinth. Whichever version may be true (and, of course, they all could be false), the purpose is the same: Diogenes the slave is freer than his master, whom he rightly convinces to submit to his authority.
Though most accounts agree that he lived to be quite old—some suggesting he lived until ninety—the tales of Diogenes’ death are no less multiple than those of his life. The possible causes of death include a voluntary demise by holding his breath, an illness brought on by eating raw octopus, and death by dog bite. Given the embellished feel of each of these reports, it is more likely that he died of old age.

2. Philosophical Practice: A Socrates Gone Mad

When Plato is asked what sort of man Diogenes is, he responds, “A Socrates gone mad” (Diogenes Laertius, Book 6, Chapter 54). Plato’s label is representative, for Diogenes’ adaptation of Socratic philosophy has frequently been regarded as one of degradation. Certain scholars have understood Diogenes as an extreme version of Socratic wisdom, offering a fascinating, if crude, moment in the history of ancient thought, but which ought not to be confused with the serious business of philosophy. This reading is influenced by the mixture of shamelessness and askēsis which riddle Diogenes’ biography. This understanding, though, overlooks the centrality of reason in Diogenes’ practice.
Diogenes’ sense of shamelessness is best seen in the context of Cynicism in general. Specifically, though, it stems from a repositioning of convention below nature and reason. One guiding principle is that if an act is not shameful in private, that same act is not made shameful by being performed in public. For example, it was contrary to Athenian convention to eat in the marketplace, and yet there he would eat for, as he explained when reproached, it was in the marketplace that he felt hungry. The most scandalous of these sorts of activities involves his indecent behavior in the marketplace, to which he responded “he wished it were as easy to relieve hunger by rubbing an empty stomach” (Diogenes Laertius, Lives of Eminent Philosophers, Book 6, Chapter 46).
He is labeled mad for acting against convention, but Diogenes points out that it is the conventions which lack reason: “Most people, he would say, are so nearly mad that a finger makes all the difference. For if you go along with your middle finger stretched out, some one will think you mad, but, if it’s the little finger, he will not think so” (Diogenes Laertius, Lives of Eminent Philosophers, Book 6, Chapter 35). In these philosophical fragments, reason clearly has a role to play. There is a report that Diogenes “would continually say that for the conduct of life we need right reason or a halter” (Diogenes Laertius, Lives of Eminent Philosophers, Book 6, Chapter 24). For Diogenes, each individual should either allow reason to guide her conduct, or, like an animal, she will need to be led by a leash; reason guides one away from mistakes and toward the best way in which to live life. Diogenes, then, does not despise knowledge as such, but despises pretensions to knowledge that serve no purpose.
He is especially scornful of sophisms. He disproves an argument that a person has horns by touching his forehead, and in a similar manner, counters the claim that there is no such thing as motion by walking around. He elsewhere disputes Platonic definitions and from this comes one of his more memorable actions: “Plato had defined the human being as an animal, biped and featherless, and was applauded. Diogenes plucked a fowl and brought it into the lecture-room with the words, ‘Here is Plato’s human being.’ In consequence of which there was added to the definition, ‘having broad nails’” (Diogenes Laertius, Lives of Eminent Philosophers, Book 6, Chapter 40). Diogenes is a harsh critic of Plato, regularly disparaging Plato’s metaphysical pursuits and thereby signaling a clear break from primarily theoretical ethics.
Diogenes’ talent for undercutting social and religious conventions and subverting political power can tempt readers into viewing his position as merely negative. This would, however, be a mistake. Diogenes is clearly contentious, but he is so for the sake of promoting reason and virtue. In the end, for a human to be in accord with nature is to be rational, for it is in the nature of a human being to act in accord with reason. Diogenes has trouble finding such humans, and expresses his sentiments regarding his difficulty theatrically. Diogenes is reported to have “lit a lamp in broad daylight and said, as he went about, ‘I am searching for a human being’” (Diogenes Laertius, Lives of Eminent Philosophers, Book 6, Chapter 41).
For the Cynics, life in accord with reason is lived in accord with nature, and therefore life in accord with reason is greater than the bounds of convention and the polis. Furthermore, the Cynics claim that such a life is the life worth living. As a homeless and penniless exile, Diogenes experienced the greatest misfortunes of which the tragedians write, and yet he insisted that he lived the good life: “He claimed that to fortune he could oppose courage, to convention nature, to passion reason” (Diogenes Laertius, Lives of Eminent Philosophers, Book 6, Chapter 38).


Diogenes
Jules Bastien-Lepage (1848–1884)

 

3. References and Further Reading

  • Billerbeck, Margarethe. Die Kyniker in der modernen Forschung. Amsterdam: B.R. Grüner, 1991.
  • Branham, Bracht and Marie-Odile Goulet-Cazé, eds. The Cynics: The Cynic Movement in Antiquity and Its Legacy. Berkeley: University of California Press, 1996.
  • Dudley, D. R. A History of Cynicism from Diogenes to the 6th Century A.D. Cambridge: Cambridge University Press, 1937.
  • Goulet-Cazé, Marie-Odile. L’Ascèse cynique: Un commentaire de Diogène Laërce VI 70-71, Deuxième édition. Paris: Libraire Philosophique J. VRIN, 2001.
  • Goulet-Cazé, Marie-Odile and Richard Goulet, eds. Le Cynisme ancien et ses prolongements. Paris: Presses Universitaires de France, 1993.
  • Diogenes Laertius. Lives of Eminent Philosophers Vol. I-II. Trans. R.D. Hicks. Cambridge: Harvard University Press, 1979.
  • Long, A.A. and David N. Sedley, eds. The Hellenistic Philosophers, Volume 1 and Volume 2. Cambridge: Cambridge University Press, 1987.
  • Malherbe, Abraham J., ed. and trans. The Cynic Epistles. Missoula, Montana: Scholars Press, 1977.
  • Navia, Luis E. Diogenes of Sinope: The Man in the Tub. Westport, Connecticut: Greenwood Press, 1990.
  • Navia, Luis E. Classical Cynicism: A Critical Study. Westport, Connecticut: Greenwood Press, 1996.
  • Paquet, Léonce. Les Cyniques grecs: fragments et témoignages. Ottawa: Presses de l’Universitaire d’Ottawa, 1988.


Epicurus:

http://www.epicurus.info/etexts/ier.html

See also: http://plato.stanford.edu/entries/epicurus/