Tag: consequences (page 2 of 4)

Biologists fear DNA-editing technique could alter human heredity




Excerpt from themarketbusiness.com

A group of biologists has been alarmed by the use of a new genome-editing technique to modify human DNA in ways that can become hereditary.
The biologists worry that the new technique is so effective and easy to use that some physicians may push ahead with it before its safety can be weighed. They also want the public to understand the ethical issues surrounding the technique, which could be used to cure genetic diseases, but also to enhance qualities like beauty or intelligence. The latter is a path many ethicists believe should never be taken.


“You could exert control over human heredity with this technique, and that is why we are raising the issue,” said David Baltimore, a former president of the California Institute of Technology and a member of the group whose paper on the topic was published in the journal Science.

Ethicists have been concerned for decades about the dangers of altering the human germ line — meaning to make changes to human sperm, eggs or embryos that will last through the life of the individual and be passed on to future generations. Until now, these worries have been theoretical. But a technique invented in 2012 makes it possible to edit the genome precisely and with much greater ease. The technique has already been used to edit the genomes of mice, rats and monkeys, and few doubt that it would work the same way in people.

The new genome-editing technique holds the power to repair or enhance any human gene. “It raises the most fundamental of issues about how we are going to view our humanity in the future and whether we are going to take the dramatic step of modifying our own germline and in a sense take control of our genetic destiny, which raises enormous peril for humanity,” said George Daley, a stem cell expert at Boston Children’s Hospital and a member of the group.

The biologists writing in Science support continuing laboratory research with the technique, and few if any scientists believe it is ready for clinical use. Any such use is tightly regulated in the United States and Europe. American scientists, for instance, would have to present a plan to treat genetic diseases in the human germline to the Food and Drug Administration.

The paper’s authors, however, are concerned about countries that have less regulation in science. They urge that “scientists should avoid even attempting, in lax jurisdictions, germ line genome modification for clinical application in humans” until the full implications “are discussed among scientific and governmental organizations.”

Though such a moratorium would not be legally enforceable and might seem unlikely to exert global sway, there is a precedent. In 1975, scientists worldwide were asked to refrain from using a method for manipulating genes, the recombinant DNA technique, until rules had been established.

“We asked at that time that nobody do certain experiments, and in fact nobody did, to my knowledge,” said Baltimore, who was a member of the 1975 group. “So there is a moral authority you can assert from the U.S., and that is what we hope to do.”

Recombinant DNA was the first in a series of ever-improving steps for manipulating genetic material. The chief problem has always been one of accuracy, of editing the DNA at precisely the intended site, since any off-target change could be lethal. Two recent methods, known as zinc fingers and TAL effectors, came close to the goal of accurate genome editing, but both are hard to use. The new genome-editing approach was invented by Jennifer Doudna of the University of California, Berkeley, and Emmanuelle Charpentier of Umea University in Sweden.

Their method, known by the acronym Crispr-Cas9, co-opts the natural immune system with which bacteria remember the DNA of the viruses that attack them so they are ready the next time those same invaders appear. Researchers can simply prime the defense system with a guide sequence of their choice and it will then destroy the matching DNA sequence in any genome presented to it. Doudna is the lead author of the Science article calling for control of the technique and organized the meeting at which the statement was developed.

Though highly efficient, the technique occasionally cuts the genome at unintended sites. The issue of how much mistargeting could be tolerated in a clinical setting is one that Doudna’s group wants to see thoroughly explored before any human genome is edited.
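Conceptually, the targeting step described above is a string-matching problem: the system cuts wherever the genome matches the guide sequence, and sites that match imperfectly are precisely the off-target risk Doudna's group wants quantified. A minimal toy sketch in Python (illustrative only; real off-target prediction also weighs PAM sites, both DNA strands, and the position of each mismatch, none of which is modeled here):

```python
# Toy sketch: find where a guide sequence matches a genome string,
# allowing up to max_mismatch letter differences. Near-matches stand
# in for potential off-target cut sites.

def mismatches(a: str, b: str) -> int:
    """Count positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def find_sites(genome: str, guide: str, max_mismatch: int = 0):
    """Return start indices of windows matching the guide within tolerance."""
    k = len(guide)
    return [i for i in range(len(genome) - k + 1)
            if mismatches(genome[i:i + k], guide) <= max_mismatch]

genome = "TTACGGATCCAAGTACGGATACAA"   # made-up sequence for illustration
guide = "ACGGATCC"                    # made-up guide

print(find_sites(genome, guide))      # exact matches: the intended site
print(find_sites(genome, guide, 2))   # near-matches: possible off-targets
```

Allowing even two mismatches typically enlarges the hit list, which is the crux of the clinical-tolerance question raised above.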

Scientists also say that replacing a defective gene with a normal one may seem entirely harmless but perhaps would not be.
“We worry about people making changes without the knowledge of what those changes mean in terms of the overall genome,” Baltimore said. “I personally think we are just not smart enough — and won’t be for a very long time — to feel comfortable about the consequences of changing heredity, even in a single individual.”
Many ethicists have accepted the idea of gene therapy, changes that die with the patient, but draw a clear line at altering the germline, since these will extend to future generations. The British Parliament in February approved the transfer of mitochondria, small DNA-containing organelles, to human eggs whose own mitochondria are defective. But that technique is less far-reaching because no genes are edited.

There are two broad schools of thought on modifying the human germline, said R. Alta Charo, a bioethicist at the University of Wisconsin and a member of the Doudna group. One is pragmatic and seeks to balance benefit and risk. The other “sets up inherent limits on how much humankind should alter nature,” she said. 
Some Christian doctrines oppose the idea of playing God, whereas in Judaism and Islam there is the notion “that humankind is supposed to improve the world.” She described herself as more of a pragmatist, saying, “I would try to regulate such things rather than shut a new technology down at its beginning.”

Other scientists agree with the Doudna group’s message.
“It is very clear that people will try to do gene editing in humans,” said Rudolf Jaenisch, a stem cell biologist at the Whitehead Institute in Cambridge, Massachusetts, who was not a member of the Doudna group. “This paper calls for a moratorium on any clinical application, which I believe is the right thing to do.”
Writing in Nature last week, Edward Lanphier and other scientists involved in developing the rival zinc finger technique for genome editing also called for a moratorium on human germline modification, saying that use of current technologies would be “dangerous and ethically unacceptable.”

The International Society for Stem Cell Research said Thursday that it supported the proposed moratorium.

The Doudna group calls for public discussion but is also working to develop some more formal process, such as an international meeting convened by the National Academy of Sciences, to establish guidelines for human use of the genome-editing technique.

“We need some principled agreement that we want to enhance humans in this way or we don’t,” Jaenisch said. “You have to have this discussion because people are gearing up to do this.”


NASA To Study Mysterious ‘Magnetic Explosions’ Between Earth, Sun That Unleash Dangerous X-Rays

By Brandon Mercer





Excerpt from sanfrancisco.cbslocal.com

NASA AMES RESEARCH CENTER (CBS SF) — Earth and the Sun may be 93 million miles apart, but cosmic explosions between the two celestial spheres occur often and with devastating effects, unleashing waves of X-ray radiation and disrupting GPS communications. With this danger in mind, NASA will launch four “Magnetospheric Multiscale Mission” satellites next month to study these “magnetic reconnections” and better predict the consequences of these cosmic phenomena.

NASA Ames Research Center in Mountain View uses supercomputers to create theoretical models of the magnetic fields on the sun, but the new mission will be able to actually observe what is happening, from a lofty vantage point far above the Earth’s pole.




The mysterious magnetic reconnections actually transfer energy and physical particles from the Sun to Earth. The forces at work can accelerate particles to nearly the speed of light, with devastating consequences.

In October 2003, a massive release of X-ray radiation hit Earth in what became known as the Halloween Storms. The energy triggered the first ever radiation warning to aircraft, alerting pilots that high altitude flights could expose passengers and crew to unhealthy levels of radiation.

Simultaneously, the GPS location system was impacted. Back then, this wasn’t as great a concern for the general public; it mainly affected the military, pilots, and sea captains. Were the same event to occur today, it would be much more noticeable in today’s smartphone world, where everything we do is geo-tagged and coordinated using GPS signals. In the future, it could even impact autonomous self-driving vehicles and airborne drones that rely on GPS.

Karen C. Fox from NASA’s Goddard Space Flight Center in Greenbelt, Maryland writes, “Understanding vast systems in space requires understanding what’s happening on widely different scales. Giant events can turn out to have tiny drivers — take, for example, what rocked near-Earth space in October 2003.”
The Halloween geomagnetic storms had a beautiful side too. The Northern Lights were visible clear down to Southern California, and even Texas.

The Magnetospheric Multiscale, or MMS, mission will be the first ever mission dedicated to studying this universal process by orbiting Earth, and passing directly through nearby magnetic reconnection regions.

“Armed with this data, scientists will have their first chance to watch magnetic reconnection from the inside, right as it’s occurring. By focusing on the small-scale process, scientists open the door to understanding what happens on larger scales throughout the universe,” wrote Fox. 


Earth’s Moon May Not Be Critical to Life After All




Excerpt from space.com

The moon has long been viewed as a crucial component in creating an environment suitable for the evolution of complex life on Earth, but a number of scientific results in recent years have shown that perhaps our planet doesn't need the moon as much as we have thought.

In 1993, French astronomer Jacques Laskar ran a series of calculations indicating that the gravity of the moon is vital to stabilizing the tilt of our planet. Earth's obliquity, as this tilt is technically known, has huge repercussions for climate. Laskar argued that should Earth's obliquity wander over hundreds of thousands of years, it would cause environmental chaos by creating a climate too variable for complex life to develop in relative peace.
So his argument goes, we should feel remarkably lucky to have such a large moon on our doorstep, as no other terrestrial planet in our solar system has such a moon. Mars' two satellites, Phobos and Deimos, are tiny, captured asteroids that have little known effect on the Red Planet. Consequently, Mars' tilt wobbles chaotically over timescales of millions of years, with evidence for swings in its rotational axis at least as large as 45 degrees. 


The stroke of good fortune that led to Earth possessing an unlikely moon, specifically the collision 4.5 billion years ago between Earth and a Mars-sized proto-planet that produced the debris from which our Moon formed, has become one of the central tenets of the 'Rare Earth' hypothesis. Famously promoted by Peter Ward and Don Brownlee, it argues that planets where everything is just right for complex life are exceedingly rare.

New findings, however, are tearing up the old rule book. In 2011, a trio of scientists — Jack Lissauer of NASA Ames Research Center, Jason Barnes of the University of Idaho and John Chambers of the Carnegie Institution for Science — published results from new simulations describing what Earth's obliquity would be like without the moon. What they found was surprising.

"We were looking into how obliquity might vary for all sorts of planetary systems," says Lissauer. "To test our code we began with integrations following the obliquity of Mars and found similar results to other people. But when we did the obliquity of Earth we found the variations were much smaller than expected — nowhere near as extreme as previous calculations suggested they would be."
Lissauer's team found that without the moon, Earth's rotational axis would wobble by only 10 degrees more than its present-day angle of 23.5 degrees. The reason for such vastly different results from those attained by Jacques Laskar is pure computing power. Today's computers are much faster and capable of more accurate modeling with far more data than the computers of the 1990s.

Lissauer and his colleagues also found that if Earth were spinning fast, with one day lasting less than 10 hours, or rotating retrograde (i.e., backwards, so that the sun rose in the west and set in the east), then Earth would stabilize itself through gravitational resonances with other planets, most notably giant Jupiter. There would be no need for a large moon.

Earth's rotation has not always been as leisurely as the current 24-hour spin rate. Following the impact that formed the moon, Earth was spinning once every four or five hours, but it has since been gradually slowed by the moon's presence. As for the length of Earth's day prior to the moon-forming impact, nobody really knows, but some models of the impact developed by Robin Canup of the Southwest Research Institute, in Boulder, Colorado, suggest that Earth could have been rotating fast, or even retrograde, prior to the collision.

Tilted Orbits
Planets with inclined orbits could find that their increased obliquity is beneficial to their long-term climate – as long as they do not have a large moon.


"Collisions in the epoch during which Earth was formed determined its initial rotation," says Lissauer. "For rocky planets, some of the models say most of them will be prograde, but others say comparable numbers of planets will be prograde and retrograde. Certainly, retrograde worlds are not expected to be rare."

The upshot of Lissauer's findings is that the presence of a moon is not the be all and end all as once thought, and a terrestrial planet can exist without a large moon and still retain its habitability. Indeed, it is possible to imagine some circumstances where having a large moon would actually be pretty bad for life.

Rory Barnes, of the University of Washington, has also tackled the problem of obliquity, but from a different perspective. Planets on the edge of habitable zones exist in a precarious position, far enough away from their star that, without a thick, insulating atmosphere, they freeze over, just like Mars. Barnes and his colleagues, including John Armstrong of Weber State University, realized that torques from other nearby worlds could cause a planet's inclination to the ecliptic plane to vary. This in turn would result in a change of obliquity; the greater the inclination, the greater the obliquity to the Sun. Barnes and Armstrong saw that this could be a good thing for planets on the edges of habitable zones, allowing heat to be distributed evenly over geological timescales and preventing "Snowball Earth" scenarios. They called these worlds "tilt-a-worlds," but the presence of a large moon would counteract this beneficial obliquity change.

"I think one of the most important points from our tilt-a-world paper is that at the outer edge of the habitable zone, having a large moon is bad, there's no other way to look at it," says Barnes. "If you have a large moon that stabilizes the obliquity then you have a tendency to completely freeze over."

Barnes is impressed with the work of Lissauer's team.
"I think it is a well done study," he says. "It suggests that Earth does not need the moon to have a relatively stable climate. I don't think there would be any dire consequences to not having a moon."

Mars' Changing Tilt
The effects of changing obliquity on Mars’ climate. Mars’ current 25-degree tilt is seen at top left. At top right is a Mars with high obliquity, leading ice to gather at its equator while the poles point sunward. At bottom is Mars with low obliquity, which sees its polar caps grow in size.


Of course, the moon does have a hand in other factors important to life besides planetary obliquity. Tidal pools may have been the point of origin of life on Earth. Although the moon produces the largest tides, the sun also influences tides, so the lack of a large moon is not necessarily a stumbling block. Some animals have also evolved a life cycle based on the cycle of the moon, but that's more happenstance than an essential component for life.

"Those are just minor things," says Lissauer.

Without the absolute need for a moon, astrobiologists seeking life and habitable worlds elsewhere face new opportunities. Maybe Earth, with its giant moon, is actually the oddball amongst habitable planets. Rory Barnes certainly doesn't think we need it.
"It will be a step forward to see the myth that a habitable planet needs a large moon dispelled," he says, to which Lissauer agrees.
Earth without its moon might therefore remain habitable, but we should still cherish its friendly presence. After all, would Beethoven have written the Moonlight Sonata without it?


Why is science so hard to believe?

 


Excerpt from 


There’s a scene in Stanley Kubrick’s comic masterpiece “Dr. Strangelove” in which Jack D. Ripper, an American general who’s gone rogue and ordered a nuclear attack on the Soviet Union, unspools his paranoid worldview — and the explanation for why he drinks “only distilled water, or rainwater, and only pure grain alcohol” — to Lionel Mandrake, a dizzy-with-anxiety group captain in the Royal Air Force.
Ripper: “Have you ever heard of a thing called fluoridation? Fluoridation of water?”
Mandrake: “Ah, yes, I have heard of that, Jack. Yes, yes.”
Ripper: “Well, do you know what it is?”
Mandrake: “No. No, I don’t know what it is, no.”
Ripper: “Do you realize that fluoridation is the most monstrously conceived and dangerous communist plot we have ever had to face?” 

The movie came out in 1964, by which time the health benefits of fluoridation had been thoroughly established and anti-fluoridation conspiracy theories could be the stuff of comedy. Yet half a century later, fluoridation continues to incite fear and paranoia. In 2013, citizens in Portland, Ore., one of only a few major American cities that don’t fluoridate, blocked a plan by local officials to do so. Opponents didn’t like the idea of the government adding “chemicals” to their water. They claimed that fluoride could be harmful to human health.

Actually, fluoride is a natural mineral that, in the weak concentrations used in public drinking-water systems, hardens tooth enamel and prevents tooth decay — a cheap and safe way to improve dental health for everyone, rich or poor, conscientious brushers or not. That’s the scientific and medical consensus.
To which some people in Portland, echoing anti-fluoridation activists around the world, reply: We don’t believe you.
We live in an age when all manner of scientific knowledge — from the safety of fluoride and vaccines to the reality of climate change — faces organized and often furious opposition. Empowered by their own sources of information and their own interpretations of research, doubters have declared war on the consensus of experts. There are so many of these controversies these days, you’d think a diabolical agency had put something in the water to make people argumentative.
Science doubt has become a pop-culture meme. In the recent movie “Interstellar,” set in a futuristic, downtrodden America where NASA has been forced into hiding, school textbooks say the Apollo moon landings were faked.


In a sense this is not surprising. Our lives are permeated by science and technology as never before. For many of us this new world is wondrous, comfortable and rich in rewards — but also more complicated and sometimes unnerving. We now face risks we can’t easily analyze.
We’re asked to accept, for example, that it’s safe to eat food containing genetically modified organisms (GMOs) because, the experts point out, there’s no evidence that it isn’t and no reason to believe that altering genes precisely in a lab is more dangerous than altering them wholesale through traditional breeding. But to some people, the very idea of transferring genes between species conjures up mad scientists running amok — and so, two centuries after Mary Shelley wrote “Frankenstein,” they talk about Frankenfood.
The world crackles with real and imaginary hazards, and distinguishing the former from the latter isn’t easy. Should we be afraid that the Ebola virus, which is spread only by direct contact with bodily fluids, will mutate into an airborne super-plague? The scientific consensus says that’s extremely unlikely: No virus has ever been observed to completely change its mode of transmission in humans, and there’s zero evidence that the latest strain of Ebola is any different. But Google “airborne Ebola” and you’ll enter a dystopia where this virus has almost supernatural powers, including the power to kill us all.
In this bewildering world we have to decide what to believe and how to act on that. In principle, that’s what science is for. “Science is not a body of facts,” says geophysicist Marcia McNutt, who once headed the U.S. Geological Survey and is now editor of Science, the prestigious journal. “Science is a method for deciding whether what we choose to believe has a basis in the laws of nature or not.”
The scientific method leads us to truths that are less than self-evident, often mind-blowing and sometimes hard to swallow. In the early 17th century, when Galileo claimed that the Earth spins on its axis and orbits the sun, he wasn’t just rejecting church doctrine. He was asking people to believe something that defied common sense — because it sure looks like the sun’s going around the Earth, and you can’t feel the Earth spinning. Galileo was put on trial and forced to recant. Two centuries later, Charles Darwin escaped that fate. But his idea that all life on Earth evolved from a primordial ancestor and that we humans are distant cousins of apes, whales and even deep-sea mollusks is still a big ask for a lot of people.
Even when we intellectually accept these precepts of science, we subconsciously cling to our intuitions — what researchers call our naive beliefs. A study by Andrew Shtulman of Occidental College showed that even students with an advanced science education had a hitch in their mental gait when asked to affirm or deny that humans are descended from sea animals and that the Earth goes around the sun. Both truths are counterintuitive. The students, even those who correctly marked “true,” were slower to answer those questions than questions about whether humans are descended from tree-dwelling creatures (also true but easier to grasp) and whether the moon goes around the Earth (also true but intuitive).
Shtulman’s research indicates that as we become scientifically literate, we repress our naive beliefs but never eliminate them entirely. They nest in our brains, chirping at us as we try to make sense of the world.
Most of us do that by relying on personal experience and anecdotes, on stories rather than statistics. We might get a prostate-specific antigen test, even though it’s no longer generally recommended, because it caught a close friend’s cancer — and we pay less attention to statistical evidence, painstakingly compiled through multiple studies, showing that the test rarely saves lives but triggers many unnecessary surgeries. Or we hear about a cluster of cancer cases in a town with a hazardous-waste dump, and we assume that pollution caused the cancers. Of course, just because two things happened together doesn’t mean one caused the other, and just because events are clustered doesn’t mean they’re not random. Yet we have trouble digesting randomness; our brains crave pattern and meaning.
Even for scientists, the scientific method is a hard discipline. They, too, are vulnerable to confirmation bias — the tendency to look for and see only evidence that confirms what they already believe. But unlike the rest of us, they submit their ideas to formal peer review before publishing them. Once the results are published, if they’re important enough, other scientists will try to reproduce them — and, being congenitally skeptical and competitive, will be very happy to announce that they don’t hold up. Scientific results are always provisional, susceptible to being overturned by some future experiment or observation. Scientists rarely proclaim an absolute truth or an absolute certainty. Uncertainty is inevitable at the frontiers of knowledge.
That provisional quality of science is another thing a lot of people have trouble with. To some climate-change skeptics, for example, the fact that a few scientists in the 1970s were worried (quite reasonably, it seemed at the time) about the possibility of a coming ice age is enough to discredit what is now the consensus of the world’s scientists: The planet’s surface temperature has risen by about 1.5 degrees Fahrenheit in the past 130 years, and human actions, including the burning of fossil fuels, are extremely likely to have been the dominant cause since the mid-20th century.
It’s clear that organizations funded in part by the fossil-fuel industry have deliberately tried to undermine the public’s understanding of the scientific consensus by promoting a few skeptics. The news media gives abundant attention to such mavericks, naysayers, professional controversialists and table thumpers. The media would also have you believe that science is full of shocking discoveries made by lone geniuses. Not so. The (boring) truth is that science usually advances incrementally, through the steady accretion of data and insights gathered by many people over many years. So it has been with the consensus on climate change. That’s not about to go poof with the next thermometer reading.
But industry PR, however misleading, isn’t enough to explain why so many people reject the scientific consensus on global warming.
The “science communication problem,” as it’s blandly called by the scientists who study it, has yielded abundant new research into how people decide what to believe — and why they so often don’t accept the expert consensus. It’s not that they can’t grasp it, according to Dan Kahan of Yale University. In one study he asked 1,540 Americans, a representative sample, to rate the threat of climate change on a scale of zero to 10. Then he correlated that with the subjects’ science literacy. He found that higher literacy was associated with stronger views — at both ends of the spectrum. Science literacy promoted polarization on climate, not consensus. According to Kahan, that’s because people tend to use scientific knowledge to reinforce their worldviews.
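Kahan's statistical point is subtle enough to deserve a sketch: a trait can be uncorrelated with the average rating on a scale while strongly predicting distance from the scale's midpoint, which is what "polarization, not consensus" means. A toy simulation with synthetic data (my own illustration, not Kahan's dataset or method):

```python
# Toy illustration with synthetic data: in two opposed cultural "camps,"
# higher science literacy pushes ratings toward opposite extremes of a
# 0-10 threat scale, so literacy predicts polarization but not the mean.
import random
random.seed(0)

literacy, ratings = [], []
for _ in range(1540):                   # sample size borrowed from the article
    lit = random.random()               # science-literacy score, 0..1
    camp = random.choice([-1, +1])      # which of the two camps
    rating = 5 + camp * 4 * lit + random.gauss(0, 0.5)
    literacy.append(lit)
    ratings.append(min(10.0, max(0.0, rating)))

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Literacy barely predicts the average rating...
print(round(corr(literacy, ratings), 2))
# ...but strongly predicts distance from the midpoint, i.e. polarization.
polarization = [abs(r - 5) for r in ratings]
print(round(corr(literacy, polarization), 2))
```

The first correlation hovers near zero while the second is large, mirroring Kahan's finding that literacy sharpened views at both ends of the spectrum rather than pulling them together.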
Americans fall into two basic camps, Kahan says. Those with a more “egalitarian” and “communitarian” mind-set are generally suspicious of industry and apt to think it’s up to something dangerous that calls for government regulation; they’re likely to see the risks of climate change. In contrast, people with a “hierarchical” and “individualistic” mind-set respect leaders of industry and don’t like government interfering in their affairs; they’re apt to reject warnings about climate change, because they know what accepting them could lead to — some kind of tax or regulation to limit emissions.
In the United States, climate change has become a litmus test that identifies you as belonging to one or the other of these two antagonistic tribes. When we argue about it, Kahan says, we’re actually arguing about who we are, what our crowd is. We’re thinking: People like us believe this. People like that do not believe this.
Science appeals to our rational brain, but our beliefs are motivated largely by emotion, and the biggest motivation is remaining tight with our peers. “We’re all in high school. We’ve never left high school,” says Marcia McNutt. “People still have a need to fit in, and that need to fit in is so strong that local values and local opinions are always trumping science. And they will continue to trump science, especially when there is no clear downside to ignoring science.”
Meanwhile the Internet makes it easier than ever for science doubters to find their own information and experts. Gone are the days when a small number of powerful institutions — elite universities, encyclopedias and major news organizations — served as gatekeepers of scientific information. The Internet has democratized it, which is a good thing. But along with cable TV, the Web has also made it possible to live in a “filter bubble” that lets in only the information with which you already agree.
How to penetrate the bubble? How to convert science skeptics? Throwing more facts at them doesn’t help. Liz Neeley, who helps train scientists to be better communicators at an organization called Compass, says people need to hear from believers they can trust, who share their fundamental values. She has personal experience with this. Her father is a climate-change skeptic and gets most of his information on the issue from conservative media. In exasperation she finally confronted him: “Do you believe them or me?” She told him she believes the scientists who research climate change and knows many of them personally. “If you think I’m wrong,” she said, “then you’re telling me that you don’t trust me.” Her father’s stance on the issue softened. But it wasn’t the facts that did it.
If you’re a rationalist, there’s something a little dispiriting about all this. In Kahan’s descriptions of how we decide what to believe, what we decide sometimes sounds almost incidental. Those of us in the science-communication business are as tribal as anyone else, he told me. We believe in scientific ideas not because we have truly evaluated all the evidence but because we feel an affinity for the scientific community. When I mentioned to Kahan that I fully accept evolution, he said: “Believing in evolution is just a description about you. It’s not an account of how you reason.”
Maybe — except that evolution is real. Biology is incomprehensible without it. There aren’t really two sides to all these issues. Climate change is happening. Vaccines save lives. Being right does matter — and the science tribe has a long track record of getting things right in the end. Modern society is built on things it got right.
Doubting science also has consequences, as seen in recent weeks with the measles outbreak that began in California. The people who believe that vaccines cause autism — often well educated and affluent, by the way — are undermining “herd immunity” to such diseases as whooping cough and measles. The anti-vaccine movement has been going strong since a prestigious British medical journal, the Lancet, published a study in 1998 linking a common vaccine to autism. The journal later retracted the study, which was thoroughly discredited. But the notion of a vaccine-autism connection has been endorsed by celebrities and reinforced through the usual Internet filters. (Anti-vaccine activist and actress Jenny McCarthy famously said on “The Oprah Winfrey Show,” “The University of Google is where I got my degree from.”)
In the climate debate, the consequences of doubt are likely to be global and enduring. Climate-change skeptics in the United States have achieved their fundamental goal of halting legislative action to combat global warming. They haven’t had to win the debate on the merits; they’ve merely had to fog the room enough to keep laws governing greenhouse gas emissions from being enacted.
Some environmental activists want scientists to emerge from their ivory towers and get more involved in the policy battles. Any scientist going that route needs to do so carefully, says Liz Neeley. “That line between science communication and advocacy is very hard to step back from,” she says. In the debate over climate change, the central allegation of the skeptics is that the science saying it’s real and a serious threat is politically tinged, driven by environmental activism and not hard data. That’s not true, and it slanders honest scientists. But the claim becomes more likely to be seen as plausible if scientists go beyond their professional expertise and begin advocating specific policies.
It’s their very detachment, what you might call the cold-bloodedness of science, that makes science the killer app. It’s the way science tells us the truth rather than what we’d like the truth to be. Scientists can be as dogmatic as anyone else — but their dogma is always wilting in the hot glare of new research. In science it’s not a sin to change your mind when the evidence demands it. For some people, the tribe is more important than the truth; for the best scientists, the truth is more important than the tribe.


Earth To Aliens: Scientists Want To Send Messages To Extraterrestrial Intelligence Possibly Living On Exoplanets

Excerpt from techtimes.com

Extraterrestrial research experts have said that now is the time to contact intelligent life on alien worlds. Leading figures behind the Search for Extraterrestrial Intelligence (Seti), which has been using radio telescop...


The Future of Technology in 2015?




Excerpt from cnet.com


The year gone by brought us more robots, worries about artificial intelligence, and difficult lessons on space travel. The big question: where's it all taking us?

Every year, we capture a little bit more of the future -- and yet the future insists on staying ever out of reach.
Consider space travel. Humans have been traveling beyond the atmosphere for more than 50 years now -- but aside from a few overnights on the moon four decades ago, we have yet to venture beyond low Earth orbit.
Or robots. They help build our cars and clean our kitchen floors, but no one would mistake a Kuka or a Roomba for the replicants in "Blade Runner." Siri, Cortana and Alexa, meanwhile, are bringing some personality to the gadgets in our pockets and our houses. Still, that's a long way from HAL or that lad David from the movie "A.I. Artificial Intelligence."
Self-driving cars? Still in low gear, and carrying some bureaucratic baggage that prevents them from ditching certain technology of yesteryear, like steering wheels.
And even when these sci-fi things arrive, will we embrace them? A Pew study earlier this year found that Americans are decidedly undecided. Among the poll respondents, 48 percent said they would like to take a ride in a driverless car, but 50 percent would not. And only 3 percent said they would like to own one.
"Despite their general optimism about the long-term impact of technological change," Aaron Smith of the Pew Research Center wrote in the report, "Americans express significant reservations about some of these potentially short-term developments" such as US airspace being opened to personal drones, robot caregivers for the elderly or wearable or implantable computing devices that would feed them information.
Let's take a look at how much of the future we grasped in 2014 and what we could gain in 2015.

Space travel: 'Space flight is hard'

In 2014, earthlings scored an unprecedented achievement in space exploration when the European Space Agency landed a spacecraft on a speeding comet, with the potential to learn more about the origins of life. No, Bruce Willis wasn't aboard. Nobody was. But when the 220-pound Philae lander, carried to its destination by the Rosetta orbiter, touched down on comet 67P/Churyumov-Gerasimenko on November 12, some 300 million miles from Earth, the celebration was well-earned.
A shadow quickly fell on the jubilation, however. Philae could not stick its first landing, bouncing into a darker corner of the comet where its solar panels would not receive enough sunlight to charge the lander's batteries. After two days and just a handful of initial readings sent home, it shut down. For good? Backers have allowed for a ray of hope as the comet passes closer to the sun in 2015. "I think within the team there is no doubt that [Philae] will wake up," lead lander scientist Jean-Pierre Bibring said in December. "And the question is OK, in what shape? My suspicion is we'll be in good shape."
The trip for NASA's New Horizons spacecraft has been much longer: 3 billion miles, all the way to Pluto and the edge of the solar system. Almost nine years after it left Earth, New Horizons in early December came out of hibernation to begin its mission: to explore "a new class of planets we've never seen, in a place we've never been before," said project scientist Hal Weaver. In January, it will begin taking photos and readings of Pluto, and by mid-July, when it swoops closest to Pluto, it will have sent back detailed information about the dwarf planet and its moon, en route to even deeper space.


Also in December, NASA made a first test spaceflight of its Orion capsule on a quick morning jaunt out and back, to just over 3,600 miles above Earth (or approximately 15 times higher than the International Space Station). The distance was trivial compared to those traveled by Rosetta and New Horizons, and crewed missions won't begin till 2021, but the ambitions are great -- in the 2030s, Orion is expected to carry humans to Mars.
In late March 2015, two humans will head to the ISS to take up residence for a full year, in what would be a record sleepover in orbit. "If a mission to Mars is going to take a three-year round trip," said NASA astronaut Scott Kelly, who will be joined in the effort by Russia's Mikhail Kornienko, "we need to know better how our body and our physiology performs over durations longer than what we've previously on the space station investigated, which is six months."
There were more sobering moments, too, in 2014. In October, Virgin Galactic's sleek, experimental SpaceShipTwo, designed to carry deep-pocketed tourists into space, crashed in the Mojave Desert during a test flight, killing one test pilot and injuring the other. Virgin founder Richard Branson had hoped his vessel would make its first commercial flight by the end of this year or in early 2015, and what comes next remains to be seen. Branson, though, expressed optimism: "Space flight is hard -- but worth it," he said in a blog post shortly after the crash, and in a press conference, he vowed "We'll learn from this, and move forward together." Virgin Galactic could begin testing its next spaceship as soon as early 2015.
The crash of SpaceShipTwo came just a few days after the explosion of an Orbital Sciences rocket lofting an unmanned spacecraft with supplies bound for the International Space Station. And in July, Elon Musk's SpaceX had suffered the loss of one of its Falcon 9 rockets during a test flight. Musk intoned, via Twitter, that "rockets are tricky..."
Still, it was on the whole a good year for SpaceX. In May, it unveiled its first manned spacecraft, the Dragon V2, intended for trips to and from the space station, and in September, it won a $2.6 billion contract from NASA to become one of the first private companies (the other being Boeing) to ferry astronauts to the ISS, beginning as early as 2017. Oh, and SpaceX also has plans to launch microsatellites to establish low-cost Internet service around the globe, saying in November to expect an announcement about that in two to three months -- that is, early in 2015.
One more thing to watch for next year: another launch of the super-secret X-37B space plane to do whatever it does during its marathon trips into orbit. The third spaceflight of an X-37B -- a robotic vehicle that, at 29 feet in length, looks like a miniature space shuttle -- ended in October after an astonishing 22 months circling the Earth, conducting "on-orbit experiments."

Self-driving cars: Asleep at what wheel?

Spacecraft aren't the only vehicles capable of autonomous travel -- increasingly, cars are, too. Automakers are toiling toward self-driving cars, and Elon Musk -- whose name comes up again and again when we talk about the near horizon for sci-fi tech -- says we're less than a decade away from capturing that aspect of the future. In October, speaking in his guise as founder of Tesla Motors, Musk said: "Like maybe five or six years from now I think we'll be able to achieve true autonomous driving where you could literally get in the car, go to sleep and wake up at your destination." (He also allowed that we should tack on a few years after that before government regulators give that technology their blessing.)
Prototype, unbound: Google's ride of the future, as it looks today. (Google)
That comment came as Musk unveiled a new autopilot feature -- characterizing it as a sort of super cruise control, rather than actual autonomy -- for Tesla's existing line of electric cars. Every Model S manufactured since late September includes new sensor hardware to enable those autopilot capabilities (such as adaptive cruise control, lane-keeping assistance and automated parking), to be followed by an over-the-air software update to enable those features.
Google has long been working on its own robo-cars, and until this year, that meant taking existing models -- a Prius here, a Lexus there -- and buckling on extraneous gear. Then in May, the tech titan took the wraps off a completely new prototype that it had built from scratch. (In December, it showed off the first fully functional prototype.) It looked rather like a cartoon car, but the real news was that there was no steering wheel, gas pedal or brake pedal -- no need for human controls when software and sensors are there to do the work.
Or not so fast. In August, California's Department of Motor Vehicles declared that Google's test vehicles will need those manual controls after all -- for safety's sake. The company agreed to comply with the state's rules, which took effect in September, and began testing the cars on private roads in October.
Regardless of who's making your future robo-car, the vehicle is going to have to be not just smart, but actually thoughtful. It's not enough for the car to know how far it is from nearby cars or what the road conditions are. The machine may well have to make no-win decisions, just as human drivers sometimes do in instantaneous, life-and-death emergencies. "The car is calculating a lot of consequences of its actions," Chris Gerdes, an associate professor of mechanical engineering, said at the Web Summit conference in Dublin, Ireland, in November. "Should it hit the person without a helmet? The larger car or the smaller car?"

Robots: Legging it out

So when do the robots finally become our overlords? Probably not in 2015, but there's sure to be more hand-wringing about both the machines and the artificial intelligence that could -- someday -- make them a match for homo sapiens. At the moment, the threat seems more mundane: when do we lose our jobs to a robot?
The inquisitive folks at Pew took that very topic to nearly 1,900 experts, including Vint Cerf, vice president at Google; Web guru Tim Bray; Justin Reich of Harvard University's Berkman Center for Internet & Society; and Jonathan Grudin, principal researcher at Microsoft. According to the resulting report, published in August, the group was almost evenly split -- 48 percent thought it likely that, by 2025, robots and digital agents will have displaced significant numbers of blue- and white-collar workers, perhaps even to the point of breakdowns in the social order, while 52 percent "have faith that human ingenuity will create new jobs, industries, and ways to make a living, just as it has been doing since the dawn of the Industrial Revolution."


Still, for all of the startling skills that robots have acquired so far, they're often not all there yet. Here's some of what we saw from the robot world in 2014:
Teamwork: Researchers at the École Polytechnique Fédérale de Lausanne in May showed off their "Roombots," cog-like robotic balls that can join forces to, say, help a table move across a room or change its height.
A sense of balance: We don't know if Boston Dynamics' humanoid Atlas is ready to trim bonsai trees, but it has learned this much from "The Karate Kid" (the original from the 1980s) -- it can stand on cinder blocks and hold its balance in a crane stance while moving its arms up and down.
Catlike jumps: MIT's cheetah-bot gets higher marks for locomotion. Fed a new algorithm, it can run across a lawn and bound like a cat. And quietly, too. "Our robot can be silent and as efficient as animals. The only things you hear are the feet hitting the ground," MIT's Sangbae Kim, a professor of mechanical engineering, told MIT News. "This is kind of a new paradigm where we're controlling force in a highly dynamic situation. Any legged robot should be able to do this in the future."
Sign language: Toshiba's humanoid Aiko Chihira communicated in Japanese sign language at the CEATEC show in October. Her rudimentary skills, limited for the moment to simple messages such as signed greetings, are expected to blossom by 2020 into areas such as speech synthesis and speech recognition.
Dance skills: Robotic pole dancers? Tobit Software brought a pair, controllable by an Android smartphone, to the Cebit trade show in Germany in March. More lifelike was the animatronic sculpture at a gallery in New York that same month -- but what was up with that witch mask?
Emotional ambition: Eventually, we'll all have humanoid companions -- at least, that's always been one school of thought on our robotic future. One early candidate for that honor could be Pepper, from Softbank and Aldebaran Robotics, which say the 4-foot-tall Pepper is the first robot to read emotions. This emo-bot is expected to go on sale in Japan in February.

Ray guns: Ship shape

Damn the photon torpedoes, and full speed ahead. That could be the motto for the US Navy, which in 2014 deployed a prototype laser weapon -- just one -- aboard a vessel in the Persian Gulf. Through some three months of testing, the device "locked on and destroyed the targets we designated with near-instantaneous lethality," Rear Adm. Matthew L. Klunder, chief of naval research, said in a statement. Those targets were rather modest -- small objects mounted aboard a speeding small boat, a diminutive ScanEagle unmanned aerial vehicle, and so on -- but the point was made: the laser weapon, operated by a controller like those used for video games, held up well, even in adverse conditions.

Artificial intelligence: Danger, Will Robinson?

What happens when robots and other smart machines can not only do, but also think? Will they appreciate us for all our quirky human high and low points, and learn to live with us? Or do they take a hard look at a species that's run its course and either turn us into natural resources, "Matrix"-style, or rain down destruction?
When the machines take over, will they be packing laser weapons like this one the US Navy just tried out? John F. Williams/US Navy
As we look ahead to the reboot of the "Terminator" film franchise in 2015, we can't help but recall some of the dire thoughts about artificial intelligence from two people high in the tech pantheon, the very busy Musk and the theoretically inclined Stephen Hawking.
Musk himself more than once in 2014 invoked the likes of the "Terminator" movies and the "scary outcomes" that make them such thrilling popcorn fare. Except that he sees a potentially scary reality evolving. In an interview with CNBC in June, he spoke of his investment in AI-minded companies like Vicarious and Deep Mind, saying: "I like to just keep an eye on what's going on with artificial intelligence. I think there is potentially a dangerous outcome."
He has put his anxieties into some particularly colorful phrases. In August, for instance, Musk tweeted that AI is "potentially more dangerous than nukes." And in October, he said this at a symposium at MIT: "With artificial intelligence, we are summoning the demon. ... You know all those stories where there's the guy with the pentagram and the holy water and he's like... yeah, he's sure he can control the demon, [but] it doesn't work out."
Musk has a kindred spirit in Stephen Hawking. The physicist allowed in May that AI could be the "biggest event in human history," and not necessarily in a good way. A month later, he was telling John Oliver, on HBO's "Last Week Tonight," that "artificial intelligence could be a real danger in the not too distant future." How so? "It could design improvements to itself and outsmart us all."
But Google's Eric Schmidt is having none of that pessimism. At a summit on innovation in December, the executive chairman of the far-thinking tech titan -- which in October teamed up with Oxford University to speed up research on artificial intelligence -- said that while our worries may be natural, "they're also misguided."


Study: Space travel causes higher heart rates and more fainting in women than men




Excerpt from thespacereporter.com



Space travel has different health effects on men than it does on women, according to a recent study jointly conducted by NASA and by the National Space Biomedical Research Institute (NSBRI).
The study, which looked at 477 male astronauts and 57 female astronauts, all of whom had flown in space as of June 2013, was conducted in anticipation of longer-duration spaceflights in the future. These will include a manned mission to Mars in the 2030s.

Six working groups studied data from the spaceflights in which the astronauts had participated. They concentrated on cardiovascular, sensorimotor, behavioral, musculoskeletal, immunological, and reproductive systems and negative impacts on these due to having spent long periods in space.

In several of these areas, men appear to tolerate spaceflight better than women. Female astronauts tended to experience increased heart rates in times of stress and had higher rates of urinary tract infections (UTIs), as well as higher rates of cancer caused by radiation, than their male counterparts.

After returning to Earth, women astronauts also had a harder time standing without fainting–a condition known as orthostatic intolerance–than did men.

Men were found to be more likely to experience loss of hearing and vision as consequences of space travel, the study indicated.

Behavioral responses appeared the same in both genders.

The study is reported in a recent issue of the Journal of Women’s Health.


What Would You Take With You to the Afterlife? – Life, Death, Out-of-Body Experiences & the Journey of Consciousness




beforeitsnews.com
By Matthew Butler 

People save up for retirement, but how well do we prepare for the journey after? Ancient cultures put great emphasis on the afterlife, because they knew consciousness continued after death. They were right: Out-of-body experiences reveal we really do exist beyond the body. Knowing this truth should inspire us to seek in life what really matters and remains after death – awakened consciousness.

What is the greatest mystery of life? According to a legendary Q&A in the Indian spiritual epic the Mahabharata, the greatest wonder is that countless people die every day, yet those left behind believe they will live forever.
There is a well-known saying that the only certainty in life is death, but our hyper-connected modern society is not exactly inspiring much reflection on what lies beyond the transient.
People put aside savings for retirement, and some take out life insurance to take care of the loved ones they leave behind. This looks after physical needs, but what about the needs of consciousness which continues without the body? What preparations are made for its journey after death – the ultimate journey of a lifetime?
Religious institutions offer a solution to their followers that usually depends on adopting a set of beliefs rather than personal spiritual discovery.  On the other hand, some scientists will tell you with equal conviction that nothing comes after death, so don’t worry about it. Both of these points of view depend on belief, but what if, when the final moment comes, you realise you wasted the great opportunity your life provided? An alternative option is to discover for ourselves why we are here, and what  our place in the universe is, while we are alive and have the opportunity to do something with the knowledge we gain.
Ancient spiritual cultures almost universally placed importance on the individual’s preparations and journey into the afterlife. They clearly understood our existence extended beyond our bodies, and that life and death were best seen with the bigger picture of creation in mind – as part of an ongoing journey of consciousness – with life presenting an amazing opportunity for conscious evolution that we take the fruits from after death.
This was brought home to me in an interesting way during a trip to a museum exhibition showcasing ancient Egyptian afterlife cosmology; it reminded me of the universal nature of the afterlife, and how Near-Death Experiences and Out-of-Body Experiences offer us a glimpse into the reality of existence beyond the body, revealing that awakening consciousness is what creation is really all about.
With our modern culture drifting more and more into shallow short-sighted materialism and faux metaphysics, the need to re-discover and live this deeper purpose to life, so cherished by the ancients, is more important than ever.

A Journey into the Ancient Egyptian Afterlife

A while back I was fortunate to have the opportunity to take a one-way self-guided tour through the ancient Egyptian afterlife, thanks to a special museum exhibition featuring artefacts from the British Museum collection.
The local museum was packed, and we had to wait in a queue before being allowed in. Finally we entered a dimly-lit passage thronging with people, winding past ancient Egyptian artefacts, artworks, tools, scriptures, and mummies.

The exhibit started with depictions of ancient Egyptian cosmology like this. Here the sky goddess is held up above the earth.


It was arranged so that you went on an afterlife “journey” vicariously, stage by stage, in the way the ancient Egyptians understood it. It began with displays showing ancient Egyptian depictions of the world’s creation, and culminated with the judgement of the soul and its journey after death. In between you were shown artefacts demonstrating how ancient Egyptians understood and prepared for death.
There were ancient scrolls of the pyramid texts on display, and ancient art depicting the soul’s journey through the afterlife. A major theme in their art was judgement and the “weighing of the heart”, where a deceased person’s heart was weighed against a feather, and their fate was dependent on their inner qualities and the sum of their actions while alive. Toward the end of the exhibition they had a mockup display of this, with a large set of scales on which you could weigh your “heart” against a feather, while Egyptian Gods looked on from a mural.
After that, you passed into a depiction of the Egyptian paradise before stepping outside into the sunlight. I doubt the effect was intentional on the part of the exhibitors, but after passing through the exhibition’s dark passageway with its ordered depictions of the afterlife, judgment and then stepping into the light, I couldn’t help but think of accounts of near-death experiences, in which people often report passing through a dark tunnel toward the light, and experiencing a life review where they see the consequences of all their actions.

Depiction of the “weighing of the heart”

The exhibit really brought home to me how the ancient Egyptians understood they existed for a purpose that went beyond everyday life. Death was a doorway to the next stage of existence, and their lives were an opportunity to prepare for it. They knew we do not cease to exist when we die, and saw the quest for immortality through awakening consciousness as the real purpose to creation.
From looking at artefacts from different periods, it was apparent the ancient Egyptian understanding of death changed over time. It seemed to me that originally, the emphasis was on living spiritually and obtaining an immortality of the soul, while in later periods their understanding declined into more literal interpretations of preparing the body (rather than consciousness) for the afterlife through mummification, and a preoccupation with the arrangement of one’s burial and tomb with the right spells and amulets.
But I was vividly struck by how through that civilisation’s long and varied existence, the importance of the afterlife always reigned supreme, and being prepared for life after death was absolutely central to existence. Death, and therefore life, was taken very seriously.

I couldn’t help but notice a stark contrast between our modern culture and theirs. It was a bit like being in some kind of time warp, where two very different cultures collided. The artefacts of the Egyptians gave a sense of the sacredness of life and creation, but the bustling, noisy crowds of modern onlookers apparently saw this ancient preoccupation with the afterlife as mere novelty and amusement. How different ancient Egypt was to our modern society where the reality, and inevitability, of death is given little thought or preparation, and the understanding that consciousness continues after death is often summarily discounted and ridiculed.
I highly doubt that many people who attended the exhibition paused to reflect on whether they would continue to exist after death and, if so, how? And why are we here anyway? This was driven home when, just prior to reaching the scales of “judgement”, I noticed a whiteboard, styled with papyrus veneer, with a pertinent question written at the top.

What would you take with you to the afterlife?

Good question. A pen hung from the board, inviting people to write their response underneath. The answers ranged from the sentimental, to the mundane, to the silly.

How would you answer the question?

Some wanted to take their friends and family with them, while others wanted to take things like their iPhone, make-up, favourite band, football team, favourite rock star, chocolate, alcohol, and so forth.
A “time machine” was perhaps the only clever response. I could see the benefit of that if you realised you had wasted your life. I don’t think it’s really an option however.
This brought home how we don’t take death and the meaning of our lives anywhere near as seriously as we should today. The ancients knew a lot more about life and death than we do. We have lost their ancient wisdom, and with it the understanding of the amazing opportunity our existence in this universe presents.
This is a serious problem. Our consciousness will continue to exist without the body. But if we don’t question our existence and why we are here, we will not awaken consciousness and we will never reach our true potential.

Near-Death Experiences and the Reality of Existence Beyond the Body

Existence after death is not something the ancient Egyptians invented. Concepts of an afterlife are so common across geographically isolated cultures around the world that it cannot simply be dismissed as a coincidence. There may be cultural differences in the details, but the understanding that we continue existing without the body has been pretty much universal for thousands of years.
In fact, the burial of the dead and the realisation of an afterlife are considered some of the most important hallmarks of cultural development in Stone Age people. It was a sign of intelligence distinguishing people from animals, and paved the way for the development of more sophisticated civilisations.

Hieronymus_Bosch_013The medieval painting ‘Ascent of the Blessed’ by Hieronymus Bosch shows the light at the end of the tunnel common to NDE accounts

Near-Death Experiences (NDEs) provide compelling anecdotal evidence that the afterlife mythologies of the world share a real common source and that consciousness exists beyond the brain. In NDEs, people who are clinically dead or close to death go through experiences that follow a pattern with universal traits, which they recall after being revived.
These include an out of body experience, where they leave their body and realise they are separate from it, perhaps seeing their body lying beneath them. Then they may go on a journey, which may feature common aspects like travelling through a tunnel, and a life review, where a person is shown everything they have done, and feels the effects of their actions toward others, whether good or bad.
Although some scientists speculate that these phenomena may be caused by the brain, the reality is that these experiences have occurred when patients are clinically brain dead, and it has not been proven these experiences are produced biologically. Furthermore, there is no ultimate proof that consciousness is produced by the brain anyway, although this is a strongly-held assumption among those entrenched in materialistic beliefs.
NDEs challenge rigid materialistic beliefs about life. In light of the prevalence and commonality of NDEs, some scientists now suggest that consciousness interacts with the brain rather than being produced by it. On this view, the brain is a conduit through which consciousness can express itself, much like the way a computer is a conduit for the internet, but the internet continues to exist when the computer is switched off.
NDEs are increasingly reported in the modern world due to improvements in health care leading to more people being revived, but they are also an ancient phenomenon. Research by the scholar Gregory Shushan found there are universal afterlife experiences which underpinned both modern NDE accounts and ancient afterlife mythologies. His research involved an in-depth comparative analysis of afterlife conceptions of five ancient civilisations (Old and Middle Kingdom Egypt, Sumerian and Old Babylonian Mesopotamia, Vedic India, pre-Buddhist China, and pre-Columbian Mesoamerica) and compared them to modern NDE accounts. He demonstrated that, although there were some variations in the details based on the cultural origin, there were specific recurring similarities that reappeared too consistently to be mere coincidence, suggesting that, “afterlife conceptions are not entirely culturally-determined and… appear to be universal or quasi-universal to some degree”.

Life is an Opportunity to Awaken Consciousness

Realising that you are consciousness, and continue to exist without the body, awakens you to the bigger picture of life. It puts your whole life in perspective.
In an NDE life review, people tend to see that what really matters in life is not how much money they made or what they achieved in a given field, but how they treated other people, and whether they acted with love. These experiences tend to change people’s lives, inspiring them to be more spiritual.
Discovering you exist beyond the body can be a life-changing revelation

We do not need to have an NDE to verify that we exist without the body, or to have life-changing experiences. Through astral projection, we can have wilful out-of-body experiences and use these mystical experiences to learn about ourselves and make positive changes in our lives.
Realising that we exist beyond the body can open the door to awakening. We see that what really matters in life is not what we gain physically, but developing consciousness. Then the question, “what will you take with you to the afterlife?”, becomes much more meaningful. You can’t take physical things like your iPhone with you when you die, but you can take consciousness. Then you see that the focus on the afterlife in ancient cultures was not a preoccupation with death, but a deep understanding of life and how to live it in the most meaningful way, bringing spiritual benefits to yourself and others, the effects of which continue after death.
States like anger, greed and hatred have consequences in this world that are bad enough, but who wants to take them to the afterlife? If these states don’t bring happiness here, why drag them along after death? Expressions of consciousness like love, wisdom and inner peace are much better qualities to carry within. By awakening and expressing consciousness in a world filled with ignorance, hatred and darkness, we not only help to make the world a better place, but also continue to carry these spiritual qualities in our consciousness when the body is left behind.
Understanding this is so important today. We live in a society bombarded with elite-controlled propaganda and entertainment that not only hides the darker agendas at work in the world, but blankets people in ignorance, keeping us from uncovering the deeper potential of our consciousness and from empowering ourselves by striving to awaken, which is what enables us to break free of the grip of darkness that exerts its influence over humanity. Failing to wake up to this agenda has its implications in the world, and also for our consciousness, and it is consciousness that really counts, both in life and beyond.
So what would you take with you to the afterlife?

View Article Here Read More

What Would You Take With You to the Afterlife? – Life, Death, Out-of-Body Experiences and the Journey of Consciousness

Matthew Butler, Guest
People save up for retirement, but how well do we prepare for the journey after? Ancient cultures put great emphasis on the afterlife, because they knew consciousness continued after death. They were right: out-of-body experiences reveal we really do exist beyond the body. Knowing this truth should inspire us to seek in life what really matters and remains after death – awakened consciousness. What is the greatest mystery of life? According to a legendar [...]

View Article Here Read More

The Meaning of Peace in the Bhagavad Gita

V. Susan Ferguson, Contributor
The superb Sanskrit text, The Bhagavad Gita, is an amazing guide and in my view the ultimate ‘user’s manual’ for the human adventure. This ancient text is a dialogue between two mighty warrior heroes: Krishna and Arjuna. Krishna represents the God within us all, who is always waiting patiently to guide us – if we can listen. Arjuna is the greatest warrior of the time and Krishna is his charioteer, his guide in the battle of life. He wil [...]

View Article Here Read More

David Wilcock – The Solar System Is Moving Into A New Area Of Vibration

According to the research of David Wilcock, there is an impending shift going on within our solar system that will give us all the opportunity to make a quantum leap in consciousness. I was watching a "Contact In The Desert" video featuring David Wilcock and he brought up some information that is quite fascinating. The following is an excerpt from "The Brown Notebook", a channeling from Walt Rogers that was done in the 1950s. Much of what was channeled is proving to be tr [...]

View Article Here Read More

7 Ways Cannabis Legalization Has Already Benefited Colorado

Jeff Roberts, Collective-Evolution
January 1st, 2014 saw the opening of the very first cannabis shop in Colorado as the cultivation, manufacture and sale of the controversial plant became fully legalized. Since then, the state has seen a lot of promising results. Laura Pegram of Drugpolicy.org wrote in her article, Six Months of Marijuana Sales: Positive Trends Emerge in Colorado, that “even the state’s Director of Marijuana Coordination was quick to note [...]

View Article Here Read More

Galactic Federation of Light SaLuSa June-01-2013

Peace is Vital at This Moment — SaLuSa 1 June 2013 — Multidimensional Ocean
POSTED ON JUNE 1, 2013 BY LAURA
http://multidimensionalocean.wordpress.com/2013/06/01/peace-is-vital-at-this-moment-salusa-1-june-2013-multidimensional-ocean/
Dear ones, we salute your brave courage during

View Article Here Read More

This work is licensed under a Creative Commons Attribution 4.0 International License, unless otherwise marked.
