There’s a scene in Stanley Kubrick’s comic masterpiece “Dr. Strangelove” in which Jack D. Ripper, an American general who’s gone rogue and ordered a nuclear attack on the Soviet Union, unspools his paranoid worldview — and the explanation for why he drinks “only distilled water, or rainwater, and only pure grain alcohol” — to Lionel Mandrake, a dizzy-with-anxiety group captain in the Royal Air Force.
Ripper: “Have you ever heard of a thing called fluoridation? Fluoridation of water?”

Mandrake: “Ah, yes, I have heard of that, Jack. Yes, yes.”

Ripper: “Well, do you know what it is?”

Mandrake: “No. No, I don’t know what it is, no.”

Ripper: “Do you realize that fluoridation is the most monstrously conceived and dangerous communist plot we have ever had to face?”

The movie came out in 1964, by which time the health benefits of fluoridation had been thoroughly established and anti-fluoridation conspiracy theories could be the stuff of comedy. Yet half a century later, fluoridation continues to incite fear and paranoia. In 2013, citizens in Portland, Ore., one of only a few major American cities that don’t fluoridate, blocked a plan by local officials to do so. Opponents didn’t like the idea of the government adding “chemicals” to their water. They claimed that fluoride could be harmful to human health.
Actually fluoride is a natural mineral that, in the weak concentrations used in public drinking-water systems, hardens tooth enamel and prevents tooth decay — a cheap and safe way to improve dental health for everyone, rich or poor, conscientious brushers or not. That’s the scientific and medical consensus.

To which some people in Portland, echoing anti-fluoridation activists around the world, reply: We don’t believe you.
We live in an age when all manner of scientific knowledge — from the safety of fluoride and vaccines to the reality of climate change — faces organized and often furious opposition. Empowered by their own sources of information and their own interpretations of research, doubters have declared war on the consensus of experts. There are so many of these controversies these days, you’d think a diabolical agency had put something in the water to make people argumentative.
Science doubt has become a pop-culture meme. In the recent movie “Interstellar,” set in a futuristic, downtrodden America where NASA has been forced into hiding, school textbooks say the Apollo moon landings were faked.
In a sense this is not surprising. Our lives are permeated by science and technology as never before. For many of us this new world is wondrous, comfortable and rich in rewards — but also more complicated and sometimes unnerving. We now face risks we can’t easily analyze.

We’re asked to accept, for example, that it’s safe to eat food containing genetically modified organisms (GMOs) because, the experts point out, there’s no evidence that it isn’t and no reason to believe that altering genes precisely in a lab is more dangerous than altering them wholesale through traditional breeding. But to some people, the very idea of transferring genes between species conjures up mad scientists running amok — and so, two centuries after Mary Shelley wrote “Frankenstein,” they talk about Frankenfood.

The world crackles with real and imaginary hazards, and distinguishing the former from the latter isn’t easy. Should we be afraid that the Ebola virus, which is spread only by direct contact with bodily fluids, will mutate into an airborne super-plague? The scientific consensus says that’s extremely unlikely: No virus has ever been observed to completely change its mode of transmission in humans, and there’s zero evidence that the latest strain of Ebola is any different. But Google “airborne Ebola” and you’ll enter a dystopia where this virus has almost supernatural powers, including the power to kill us all.
In this bewildering world we have to decide what to believe and how to act on that. In principle, that’s what science is for. “Science is not a body of facts,” says geophysicist Marcia McNutt, who once headed the U.S. Geological Survey and is now editor of Science, the prestigious journal. “Science is a method for deciding whether what we choose to believe has a basis in the laws of nature or not.”
The scientific method leads us to truths that are less than self-evident, often mind-blowing and sometimes hard to swallow. In the early 17th century, when Galileo claimed that the Earth spins on its axis and orbits the sun, he wasn’t just rejecting church doctrine. He was asking people to believe something that defied common sense — because it sure looks like the sun’s going around the Earth, and you can’t feel the Earth spinning. Galileo was put on trial and forced to recant. Two centuries later, Charles Darwin escaped that fate. But his idea that all life on Earth evolved from a primordial ancestor and that we humans are distant cousins of apes, whales and even deep-sea mollusks is still a big ask for a lot of people.

Even when we intellectually accept these precepts of science, we subconsciously cling to our intuitions — what researchers call our naive beliefs. A study by Andrew Shtulman of Occidental College showed that even students with an advanced science education had a hitch in their mental gait when asked to affirm or deny that humans are descended from sea animals and that the Earth goes around the sun. Both truths are counterintuitive. The students, even those who correctly marked “true,” were slower to answer those questions than questions about whether humans are descended from tree-dwelling creatures (also true but easier to grasp) and whether the moon goes around the Earth (also true but intuitive). Shtulman’s research indicates that as we become scientifically literate, we repress our naive beliefs but never eliminate them entirely. They nest in our brains, chirping at us as we try to make sense of the world.
Most of us do that by relying on personal experience and anecdotes, on stories rather than statistics. We might get a prostate-specific antigen test, even though it’s no longer generally recommended, because it caught a close friend’s cancer — and we pay less attention to statistical evidence, painstakingly compiled through multiple studies, showing that the test rarely saves lives but triggers many unnecessary surgeries. Or we hear about a cluster of cancer cases in a town with a hazardous-waste dump, and we assume that pollution caused the cancers. Of course, just because two things happened together doesn’t mean one caused the other, and just because events are clustered doesn’t mean they’re not random. Yet we have trouble digesting randomness; our brains crave pattern and meaning.
Even for scientists, the scientific method is a hard discipline. They, too, are vulnerable to confirmation bias — the tendency to look for and see only evidence that confirms what they already believe. But unlike the rest of us, they submit their ideas to formal peer review before publishing them. Once the results are published, if they’re important enough, other scientists will try to reproduce them — and, being congenitally skeptical and competitive, will be very happy to announce that they don’t hold up. Scientific results are always provisional, susceptible to being overturned by some future experiment or observation. Scientists rarely proclaim an absolute truth or an absolute certainty. Uncertainty is inevitable at the frontiers of knowledge.

That provisional quality of science is another thing a lot of people have trouble with. To some climate-change skeptics, for example, the fact that a few scientists in the 1970s were worried (quite reasonably, it seemed at the time) about the possibility of a coming ice age is enough to discredit what is now the consensus of the world’s scientists: The planet’s surface temperature has risen by about 1.5 degrees Fahrenheit in the past 130 years, and human actions, including the burning of fossil fuels, are extremely likely to have been the dominant cause since the mid-20th century.

It’s clear that organizations funded in part by the fossil-fuel industry have deliberately tried to undermine the public’s understanding of the scientific consensus by promoting a few skeptics. The news media gives abundant attention to such mavericks, naysayers, professional controversialists and table thumpers. The media would also have you believe that science is full of shocking discoveries made by lone geniuses. Not so. The (boring) truth is that science usually advances incrementally, through the steady accretion of data and insights gathered by many people over many years. So it has been with the consensus on climate change.
That’s not about to go poof with the next thermometer reading.

But industry PR, however misleading, isn’t enough to explain why so many people reject the scientific consensus on global warming. The “science communication problem,” as it’s blandly called by the scientists who study it, has yielded abundant new research into how people decide what to believe — and why they so often don’t accept the expert consensus. It’s not that they can’t grasp it, according to Dan Kahan of Yale University. In one study he asked 1,540 Americans, a representative sample, to rate the threat of climate change on a scale of zero to 10. Then he correlated that with the subjects’ science literacy. He found that higher literacy was associated with stronger views — at both ends of the spectrum. Science literacy promoted polarization on climate, not consensus. According to Kahan, that’s because people tend to use scientific knowledge to reinforce their worldviews. Americans fall into two basic camps, Kahan says. Those with a more “egalitarian” and “communitarian” mind-set are generally suspicious of industry and apt to think it’s up to something dangerous that calls for government regulation; they’re likely to see the risks of climate change. In contrast, people with a “hierarchical” and “individualistic” mind-set respect leaders of industry and don’t like government interfering in their affairs; they’re apt to reject warnings about climate change, because they know what accepting them could lead to — some kind of tax or regulation to limit emissions.

In the United States, climate change has become a litmus test that identifies you as belonging to one or the other of these two antagonistic tribes. When we argue about it, Kahan says, we’re actually arguing about who we are, what our crowd is. We’re thinking: People like us believe this. People like that do not believe this.
Science appeals to our rational brain, but our beliefs are motivated largely by emotion, and the biggest motivation is remaining tight with our peers. “We’re all in high school. We’ve never left high school,” says Marcia McNutt. “People still have a need to fit in, and that need to fit in is so strong that local values and local opinions are always trumping science. And they will continue to trump science, especially when there is no clear downside to ignoring science.”
Meanwhile the Internet makes it easier than ever for science doubters to find their own information and experts. Gone are the days when a small number of powerful institutions — elite universities, encyclopedias and major news organizations — served as gatekeepers of scientific information. The Internet has democratized it, which is a good thing. But along with cable TV, the Web has also made it possible to live in a “filter bubble” that lets in only the information with which you already agree.
How to penetrate the bubble? How to convert science skeptics? Throwing more facts at them doesn’t help. Liz Neeley, who helps train scientists to be better communicators at an organization called Compass, says people need to hear from believers they can trust, who share their fundamental values. She has personal experience with this. Her father is a climate-change skeptic and gets most of his information on the issue from conservative media. In exasperation she finally confronted him: “Do you believe them or me?” She told him she believes the scientists who research climate change and knows many of them personally. “If you think I’m wrong,” she said, “then you’re telling me that you don’t trust me.” Her father’s stance on the issue softened. But it wasn’t the facts that did it.

If you’re a rationalist, there’s something a little dispiriting about all this. In Kahan’s descriptions of how we decide what to believe, what we decide sometimes sounds almost incidental. Those of us in the science-communication business are as tribal as anyone else, he told me. We believe in scientific ideas not because we have truly evaluated all the evidence but because we feel an affinity for the scientific community. When I mentioned to Kahan that I fully accept evolution, he said: “Believing in evolution is just a description about you. It’s not an account of how you reason.”

Maybe — except that evolution is real. Biology is incomprehensible without it. There aren’t really two sides to all these issues. Climate change is happening. Vaccines save lives. Being right does matter — and the science tribe has a long track record of getting things right in the end. Modern society is built on things it got right.
Doubting science also has consequences, as seen in recent weeks with the measles outbreak that began in California. The people who believe that vaccines cause autism — often well educated and affluent, by the way — are undermining “herd immunity” to such diseases as whooping cough and measles. The anti-vaccine movement has been going strong since a prestigious British medical journal, the Lancet, published a study in 1998 linking a common vaccine to autism. The journal later retracted the study, which was thoroughly discredited. But the notion of a vaccine-autism connection has been endorsed by celebrities and reinforced through the usual Internet filters. (Anti-vaccine activist and actress Jenny McCarthy famously said on “The Oprah Winfrey Show,” “The University of Google is where I got my degree from.”)

In the climate debate, the consequences of doubt are likely to be global and enduring. Climate-change skeptics in the United States have achieved their fundamental goal of halting legislative action to combat global warming. They haven’t had to win the debate on the merits; they’ve merely had to fog the room enough to keep laws governing greenhouse gas emissions from being enacted.

Some environmental activists want scientists to emerge from their ivory towers and get more involved in the policy battles. Any scientist going that route needs to do so carefully, says Liz Neeley. “That line between science communication and advocacy is very hard to step back from,” she says. In the debate over climate change, the central allegation of the skeptics is that the science saying it’s real and a serious threat is politically tinged, driven by environmental activism and not hard data. That’s not true, and it slanders honest scientists.
But the claim becomes more likely to be seen as plausible if scientists go beyond their professional expertise and begin advocating specific policies.

It’s their very detachment, what you might call the cold-bloodedness of science, that makes science the killer app. It’s the way science tells us the truth rather than what we’d like the truth to be. Scientists can be as dogmatic as anyone else — but their dogma is always wilting in the hot glare of new research. In science it’s not a sin to change your mind when the evidence demands it. For some people, the tribe is more important than the truth; for the best scientists, the truth is more important than the tribe.
The Future of Technology in 2015?
Excerpt from cnet.com
The year gone by brought us more robots, worries about artificial intelligence, and difficult lessons on space travel. The big question: where's it all taking us?
Every year, we capture a little bit more of the future -- and yet the future insists on staying ever out of reach.
Consider space travel. Humans have been traveling beyond the atmosphere for more than 50 years now -- but aside from a few overnights on the moon four decades ago, we have yet to venture beyond low Earth orbit.
Or robots. They help build our cars and clean our kitchen floors, but no one would mistake a Kuka or a Roomba for the replicants in "Blade Runner." Siri, Cortana and Alexa, meanwhile, are bringing some personality to the gadgets in our pockets and our houses. Still, that's a long way from HAL or that lad David from the movie "A.I. Artificial Intelligence."
Self-driving cars? Still in low gear, and carrying some bureaucratic baggage that prevents them from ditching certain technology of yesteryear, like steering wheels.
And even when these sci-fi things arrive, will we embrace them? A Pew study earlier this year found that Americans are decidedly undecided. Among the poll respondents, 48 percent said they would like to take a ride in a driverless car, but 50 percent would not. And only 3 percent said they would like to own one.
"Despite their general optimism about the long-term impact of technological change," Aaron Smith of the Pew Research Center wrote in the report, "Americans express significant reservations about some of these potentially short-term developments" such as US airspace being opened to personal drones, robot caregivers for the elderly or wearable or implantable computing devices that would feed them information.
Let's take a look at how much of the future we grasped in 2014 and what we could gain in 2015.
Space travel: 'Space flight is hard'
In 2014, earthlings scored an unprecedented achievement in space exploration when the European Space Agency landed a spacecraft on a speeding comet, with the potential to learn more about the origins of life. No, Bruce Willis wasn't aboard. Nobody was. But when the 220-pound Philae lander, carried to its destination by the Rosetta orbiter, touched down on comet 67P/Churyumov-Gerasimenko on November 12, some 300 million miles from Earth, the celebration was well-earned.

A shadow quickly fell on the jubilation, however. Philae could not stick its first landing, bouncing into a darker corner of the comet where its solar panels would not receive enough sunlight to charge the lander's batteries. After two days and just a handful of initial readings sent home, it shut down. For good? Backers have allowed for a ray of hope as the comet passes closer to the sun in 2015. "I think within the team there is no doubt that [Philae] will wake up," lead lander scientist Jean-Pierre Bibring said in December. "And the question is OK, in what shape? My suspicion is we'll be in good shape."
The trip for NASA's New Horizons spacecraft has been much longer: 3 billion miles, all the way to Pluto and the edge of the solar system. Almost nine years after it left Earth, New Horizons in early December came out of hibernation to begin its mission: to explore "a new class of planets we've never seen, in a place we've never been before," said project scientist Hal Weaver. In January, it will begin taking photos and readings of Pluto, and by mid-July, when it swoops closest to Pluto, it will have sent back detailed information about the dwarf planet and its moon, en route to even deeper space.
Also in December, NASA made a first test spaceflight of its Orion capsule on a quick morning jaunt out and back, to just over 3,600 miles above Earth (or approximately 15 times higher than the International Space Station). The distance was trivial compared to those traveled by Rosetta and New Horizons, and crewed missions won't begin till 2021, but the ambitions are great -- in the 2030s, Orion is expected to carry humans to Mars.
In late March 2015, two humans will head to the ISS to take up residence for a full year, in what would be a record sleepover in orbit. "If a mission to Mars is going to take a three-year round trip," said NASA astronaut Scott Kelly, who will be joined in the effort by Russia's Mikhail Kornienko, "we need to know better how our body and our physiology performs over durations longer than what we've previously on the space station investigated, which is six months."
There were more sobering moments, too, in 2014. In October, Virgin Galactic's sleek, experimental SpaceShipTwo, designed to carry deep-pocketed tourists into space, crashed in the Mojave Desert during a test flight, killing one test pilot and injuring the other. Virgin founder Richard Branson had hoped his vessel would make its first commercial flight by the end of this year or in early 2015, and what comes next remains to be seen. Branson, though, expressed optimism: "Space flight is hard -- but worth it," he said in a blog post shortly after the crash, and in a press conference, he vowed "We'll learn from this, and move forward together." Virgin Galactic could begin testing its next spaceship as soon as early 2015.
The crash of SpaceShipTwo came just a few days after the explosion of an Orbital Sciences rocket lofting an unmanned spacecraft with supplies bound for the International Space Station. And in July, Elon Musk's SpaceX had suffered the loss of one of its Falcon 9 rockets during a test flight. Musk intoned, via Twitter, that "rockets are tricky..."
Still, it was on the whole a good year for SpaceX. In May, it unveiled its first manned spacecraft, the Dragon V2, intended for trips to and from the space station, and in September, it won a $2.6 billion contract from NASA to become one of the first private companies (the other being Boeing) to ferry astronauts to the ISS, beginning as early as 2017. Oh, and SpaceX also has plans to launch microsatellites to establish low-cost Internet service around the globe, saying in November to expect an announcement about that in two to three months -- that is, early in 2015.
One more thing to watch for next year: another launch of the super-secret X-37B space plane to do whatever it does during its marathon trips into orbit. The third spaceflight of an X-37B -- a robotic vehicle that, at 29 feet in length, looks like a miniature space shuttle -- ended in October after an astonishing 22 months circling the Earth, conducting "on-orbit experiments."
Self-driving cars: Asleep at what wheel?
Spacecraft aren't the only vehicles capable of autonomous travel -- increasingly, cars are, too. Automakers are toiling toward self-driving cars, and Elon Musk -- whose name comes up again and again when we talk about the near horizon for sci-fi tech -- says we're less than a decade away from capturing that aspect of the future. In October, speaking in his guise as founder of Tesla Motors, Musk said: "Like maybe five or six years from now I think we'll be able to achieve true autonomous driving where you could literally get in the car, go to sleep and wake up at your destination." (He also allowed that we should tack on a few years after that before government regulators give that technology their blessing.)

That comment came as Musk unveiled a new autopilot feature -- characterizing it as a sort of super cruise control, rather than actual autonomy -- for Tesla's existing line of electric cars. Every Model S manufactured since late September includes new sensor hardware to enable those autopilot capabilities (such as adaptive cruise control, lane-keeping assistance and automated parking), to be followed by an over-the-air software update to enable those features.
Google has long been working on its own robo-cars, and until this year, that meant taking existing models -- a Prius here, a Lexus there -- and buckling on extraneous gear. Then in May, the tech titan took the wraps off a completely new prototype that it had built from scratch. (In December, it showed off the first fully functional prototype.) It looked rather like a cartoon car, but the real news was that there was no steering wheel, gas pedal or brake pedal -- no need for human controls when software and sensors are there to do the work.
Or not so fast. In August, California's Department of Motor Vehicles declared that Google's test vehicles will need those manual controls after all -- for safety's sake. The company agreed to comply with the state's rules, which went into effect in September, and began testing the cars on private roads in October.
Regardless of who's making your future robo-car, the vehicle is going to have to be not just smart, but actually thoughtful. It's not enough for the car to know how far it is from nearby cars or what the road conditions are. The machine may well have to make no-win decisions, just as human drivers sometimes do in instantaneous, life-and-death emergencies. "The car is calculating a lot of consequences of its actions," Chris Gerdes, an associate professor of mechanical engineering, said at the Web Summit conference in Dublin, Ireland, in November. "Should it hit the person without a helmet? The larger car or the smaller car?"
Robots: Legging it out
So when do the robots finally become our overlords? Probably not in 2015, but there's sure to be more hand-wringing about both the machines and the artificial intelligence that could -- someday -- make them a match for Homo sapiens. At the moment, the threat seems more mundane: when do we lose our jobs to a robot?

The inquisitive folks at Pew took that very topic to nearly 1,900 experts, including Vint Cerf, vice president at Google; Web guru Tim Bray; Justin Reich of Harvard University's Berkman Center for Internet & Society; and Jonathan Grudin, principal researcher at Microsoft. According to the resulting report, published in August, the group was almost evenly split -- 48 percent thought it likely that, by 2025, robots and digital agents will have displaced significant numbers of blue- and white-collar workers, perhaps even to the point of breakdowns in the social order, while 52 percent "have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution."
Still, for all of the startling skills that robots have acquired so far, they're often not all there yet. Here's some of what we saw from the robot world in 2014:
Teamwork: Researchers at the École Polytechnique Fédérale de Lausanne in May showed off their "Roombots," cog-like robotic balls that can join forces to, say, help a table move across a room or change its height.
A sense of balance: We don't know if Boston Dynamics' humanoid Atlas is ready to trim bonsai trees, but it has learned this much from "The Karate Kid" (the original from the 1980s) -- it can stand on cinder blocks and hold its balance in a crane stance while moving its arms up and down.
Catlike jumps: MIT's cheetah-bot gets higher marks for locomotion. Fed a new algorithm, it can run across a lawn and bound like a cat. And quietly, too. "Our robot can be silent and as efficient as animals. The only things you hear are the feet hitting the ground," MIT's Sangbae Kim, a professor of mechanical engineering, told MIT News. "This is kind of a new paradigm where we're controlling force in a highly dynamic situation. Any legged robot should be able to do this in the future."
Sign language: Toshiba's humanoid Aiko Chihira communicated in Japanese sign language at the CEATEC show in October. Her rudimentary skills, limited for the moment to simple messages such as signed greetings, are expected to blossom by 2020 into areas such as speech synthesis and speech recognition.
Dance skills: Robotic pole dancers? Tobit Software brought a pair, controllable by an Android smartphone, to the Cebit trade show in Germany in March. More lifelike was the animatronic sculpture at a gallery in New York that same month -- but what was up with that witch mask?
Emotional ambition: Eventually, we'll all have humanoid companions -- at least, that's always been one school of thought on our robotic future. One early candidate for that honor could be Pepper, from Softbank and Aldebaran Robotics, which say the 4-foot-tall Pepper is the first robot to read emotions. This emo-bot is expected to go on sale in Japan in February.
Ray guns: Ship shape
Damn the photon torpedoes, and full speed ahead. That could be the motto for the US Navy, which in 2014 deployed a prototype laser weapon -- just one -- aboard a vessel in the Persian Gulf. Through some three months of testing, the device "locked on and destroyed the targets we designated with near-instantaneous lethality," Rear Adm. Matthew L. Klunder, chief of naval research, said in a statement. Those targets were rather modest -- small objects mounted aboard a speeding small boat, a diminutive Scan Eagle unmanned aerial vehicle, and so on -- but the point was made: the laser weapon, operated by a controller like those used for video games, held up well, even in adverse conditions.

Artificial intelligence: Danger, Will Robinson?
What happens when robots and other smart machines can not only do, but also think? Will they appreciate us for all our quirky human high and low points, and learn to live with us? Or do they take a hard look at a species that's run its course and either turn us into natural resources, "Matrix"-style, or rain down destruction?

As we look ahead to the reboot of the "Terminator" film franchise in 2015, we can't help but recall some of the dire thoughts about artificial intelligence from two people high in the tech pantheon, the very busy Musk and the theoretically inclined Stephen Hawking.
Musk himself more than once in 2014 invoked the likes of the "Terminator" movies and the "scary outcomes" that make them such thrilling popcorn fare. Except that he sees a potentially scary reality evolving. In an interview with CNBC in June, he spoke of his investment in AI-minded companies like Vicarious and Deep Mind, saying: "I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome."
He has put his anxieties into some particularly colorful phrases. In August, for instance, Musk tweeted that AI is "potentially more dangerous than nukes." And in October, he said this at a symposium at MIT: "With artificial intelligence, we are summoning the demon. ... You know all those stories where there's the guy with the pentagram and the holy water and he's like... yeah, he's sure he can control the demon, [but] it doesn't work out."
Musk has a kindred spirit in Stephen Hawking. The physicist allowed in May that AI could be the "biggest event in human history," and not necessarily in a good way. A month later, he was telling John Oliver, on HBO's "Last Week Tonight," that "artificial intelligence could be a real danger in the not too distant future." How so? "It could design improvements to itself and outsmart us all."
But Google's Eric Schmidt, is having none of that pessimism. At a summit on innovation in December, the executive chairman of the far-thinking tech titan -- which in October teamed up with Oxford University to speed up research on artificial intelligence -- said that while our worries may be natural, "they're also misguided." View Article Here Read More