
Have Aliens Left The Universe? Theory Predicts We’ll Follow

Excerpt from robertlanza.com

In Star Wars, the bars are bustling with all types of alien creatures. And then, of course, there’s Yoda and Chewbacca. Recently, renowned scientist Stephen Hawking stated that he too believes aliens exist: “To my mathematical brain, the numbers alone make thinking about aliens perfectly rational.”

Hawking thinks we should be cautious about interacting with aliens — that they might raid Earth’s resources, take our ores, and then move on like pirates. “I imagine they might exist in massive ships, having used up all the resources from their home planet. Such advanced aliens would perhaps become nomads, looking to conquer and colonize whatever planets they can reach.”
But where are they all anyhow?

For years, NASA and others have been searching for extraterrestrial intelligence. The universe is 13.7 billion years old and contains some 10 billion trillion stars. Surely, in this lapse of suns, advanced life would have evolved if it were possible. Yet despite half a century of scanning the sky, astronomers have failed to find any evidence of life or to pick up any of the interstellar radio signals that our great antennas should be able to easily detect.

Some scientists point to the “Fermi Paradox,” noting that extraterrestrials should have had plenty of time to colonize the entire galaxy but that perhaps they’ve blown themselves up. It’s conceivable the problem is more fundamental and that the answer has to do with the evolutionary course of life itself.

Look at the plants in your backyard. What are they but a stem with roots and leaves bringing nutriments to the organism? After billions of years of evolution, it was inevitable life would acquire the ability to locomote, to hunt and see, to protect itself from competitors. 
Observe the ants in the woodpile — they can engage in combat just as resolutely as humans. Our guns and ICBMs are merely the mandibles of a cleverer ant. The effort for self-preservation is vague and varied. But when we’ve overcome our struggles, what do we do next? Build taller and more splendid houses?

What happens after life completes its transition to perfection? Perhaps across space, more advanced intelligences have taken the next evolutionary step. Perhaps they’ve evolved beyond the three dimensions we vertebrates know. A new theory — Biocentrism — tells us that space and time aren’t physical matrices, but simply tools our mind uses to put everything together. These algorithms are the key to consciousness, and why space and time — indeed the properties of matter itself — are relative to the observer. More advanced civilizations would surely understand these algorithms well enough to create realities that we can’t even imagine, and to have expanded beyond our corporeal cage.

Like breathing, we take for granted how our mind puts everything together. I can recall a dream I had of a flying saucer landing in Times Square. It was so real it took a while to convince myself that it was a dream (that I was actually at home in bed). I was standing in a crowd surrounded by skyscrapers when a massive spaceship appeared overhead. Everyone started running. My mind had somehow generated this spatio-temporal experience out of electrochemical information. I could feel the vibrations under my feet as the ship started to land, merging this 3D world with my inner thoughts and sensations.

Although I was in bed with my eyes closed, I was able to run and move my arms and fingers. My mind had created a fully functioning body and placed it in a virtual world (replete with clouds in the sky and the Sun) that was indistinguishable from the one I’m in right now. Life as we know it is defined by this spatial-temporal logic, which traps us in the universe of up and down. But like my dream, quantum theory confirms that the properties of particles in the “real” world are also observer-determined.

Other information systems surely exist that correspond to other physical realities, universes based on logic completely different from ours and not based on space and time as we know them. In fact, the simplest invertebrates may only experience existence in one dimension of space. Evolutionary biology suggests life has progressed from a one-dimensional reality to two and then three dimensions, and there’s no scientific reason to think that the evolution of life stops there.

Advanced civilizations would certainly have changed the algorithms so that instead of being trapped in the linear dimensions we find ourselves in, their consciousness moves through the multiverse and beyond. Why would aliens build massive ships and spend thousands of years to colonize planetary systems (most of which are probably useless and barren), when they could simply tinker with the algorithms and get whatever they want?

Life on Earth is just beginning to send its shoots upward into the heavens. We’ve even flung a piece of metal outside the solar system. Affixed to the spacecraft is a record with greetings in 55 languages. One can’t help but wonder whether some civilization more advanced than ours will come upon it. Or will it just drift across the gulf of space? To me the answer is clear. But in case I’m wrong, I have a pitchfork guarding the ore in my backyard.


What happens to your body when you give up sugar?

Excerpt from independent.co.uk
By Jordan Gaines Lewis


In neuroscience, food is something we call a “natural reward.” In order for us to survive as a species, things like eating, having sex and nurturing others must be pleasurable to the brain so that these behaviours are reinforced and repeated.
Evolution has resulted in the mesolimbic pathway, a brain system that deciphers these natural rewards for us. When we do something pleasurable, a bundle of neurons called the ventral tegmental area uses the neurotransmitter dopamine to signal to a part of the brain called the nucleus accumbens. The connection between the nucleus accumbens and our prefrontal cortex dictates our motor movement, such as deciding whether or not to take another bite of that delicious chocolate cake. The prefrontal cortex also activates hormones that tell our body: “Hey, this cake is really good. And I’m going to remember that for the future.”
Not all foods are equally rewarding, of course. Most of us prefer sweets over sour and bitter foods because, evolutionarily, our mesolimbic pathway reinforces that sweet things provide a healthy source of carbohydrates for our bodies. When our ancestors went scavenging for berries, for example, sour meant “not yet ripe,” while bitter meant “alert – poison!”
Fruit is one thing, but modern diets have taken on a life of their own. A decade ago, it was estimated that the average American consumed 22 teaspoons of added sugar per day, amounting to an extra 350 calories; it may well have risen since then. A few months ago, one expert suggested that the average Briton consumes 238 teaspoons of sugar each week.
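Those headline figures are easy to sanity-check. Here is a rough back-of-envelope sketch, assuming the standard nutrition values of about 4.2 g of sugar per teaspoon and 4 kcal per gram of carbohydrate:

```python
# Back-of-envelope check of the added-sugar figures quoted above.
G_PER_TSP = 4.2   # grams of sugar in one teaspoon (standard approximation)
KCAL_PER_G = 4.0  # kilocalories per gram of carbohydrate

us_daily_tsp = 22    # reported US average: teaspoons of added sugar per day
uk_weekly_tsp = 238  # reported UK average: teaspoons per week

print(us_daily_tsp * G_PER_TSP * KCAL_PER_G)  # ~370 kcal/day, close to the quoted 350
print(uk_weekly_tsp / 7)                      # ~34 tsp/day implied by the UK figure
```

The two surveys are not directly comparable, but the arithmetic shows the UK weekly figure implies an even higher daily intake than the decade-old US estimate.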
Today, with convenience more important than ever in our food selections, it’s almost impossible to come across processed and prepared foods that don’t have added sugars for flavour, preservation, or both.
These added sugars are sneaky – and unbeknown to many of us, we’ve become hooked. Just as drugs of abuse – such as nicotine, cocaine and heroin – hijack the brain’s reward pathway and make users dependent, mounting neurochemical and behavioural evidence suggests that sugar is addictive in the same way, too.

Sugar addiction is real

Anyone who knows me also knows that I have a huge sweet tooth. I always have. My friend and fellow graduate student Andrew is equally afflicted, and living in Hershey, Pennsylvania – the “Chocolate Capital of the World” – doesn’t help either of us. But Andrew is braver than I am. Last year, he gave up sweets for Lent. “The first few days are a little rough,” Andrew told me. “It almost feels like you’re detoxing from drugs. I found myself eating a lot of carbs to compensate for the lack of sugar.”
There are four major components of addiction: bingeing, withdrawal, craving, and cross-sensitisation (the notion that one addictive substance predisposes someone to becoming addicted to another). All of these components have been observed in animal models of addiction – for sugar, as well as drugs of abuse.
A typical experiment goes like this: rats are deprived of food for 12 hours each day, then given 12 hours of access to a sugary solution and regular chow. After a month of following this daily pattern, rats display behaviours similar to those of animals on drugs of abuse. They’ll binge on the sugar solution in a short period of time, consuming much more of it than of their regular food. They also show signs of anxiety and depression during the food-deprivation period. Many sugar-treated rats that are later exposed to drugs such as cocaine and opiates demonstrate more dependent behaviour towards those drugs than rats that did not consume sugar beforehand.
Like drugs, sugar spikes dopamine release in the nucleus accumbens. Over the long term, regular sugar consumption actually changes the gene expression and availability of dopamine receptors in both the midbrain and frontal cortex. Specifically, sugar increases the concentration of a type of excitatory receptor called D1, but decreases another receptor type called D2, which is inhibitory. Regular sugar consumption also inhibits the action of the dopamine transporter, a protein which pumps dopamine out of the synapse and back into the neuron after firing.
In short, this means that repeated access to sugar over time leads to prolonged dopamine signalling, greater excitation of the brain’s reward pathways and a need for even more sugar to activate all of the midbrain dopamine receptors like before. The brain becomes tolerant to sugar – and more is needed to attain the same “sugar high.”

Sugar withdrawal is also real

Although these studies were conducted in rodents, it’s not far-fetched to say that the same primitive processes are occurring in the human brain, too. “The cravings never stopped, [but that was] probably psychological,” Andrew told me. “But it got easier after the first week or so.”
In a 2002 study by Carlo Colantuoni and colleagues of Princeton University, rats who had undergone a typical sugar dependence protocol then underwent “sugar withdrawal.” This was facilitated by either food deprivation or treatment with naloxone, a drug used for treating opiate addiction which binds to receptors in the brain’s reward system. Both withdrawal methods led to physical problems, including teeth chattering, paw tremors, and head shaking. Naloxone treatment also appeared to make the rats more anxious, as they spent less time on an elevated apparatus that lacked walls on either side.
Similar withdrawal experiments by others also report behaviour similar to depression in tasks such as the forced swim test. Rats in sugar withdrawal are more likely to show passive behaviours (like floating) than active behaviours (like trying to escape) when placed in water, suggesting feelings of helplessness.
A new study published by Victor Mangabeira and colleagues in this month’s Physiology & Behavior reports that sugar withdrawal is also linked to impulsive behaviour. Initially, rats were trained to receive water by pushing a lever. After training, the animals returned to their home cages and had access to a sugar solution and water, or just water alone. After 30 days, when rats were again given the opportunity to press a lever for water, those who had become dependent on sugar pressed the lever significantly more times than control animals, suggesting impulsive behaviour.
These are extreme experiments, of course. We humans aren’t depriving ourselves of food for 12 hours and then allowing ourselves to binge on soda and doughnuts at the end of the day. But these rodent studies certainly give us insight into the neuro-chemical underpinnings of sugar dependence, withdrawal, and behaviour.
Through decades of diet programmes and best-selling books, we’ve toyed with the notion of “sugar addiction” for a long time. There are accounts of those in “sugar withdrawal” describing food cravings, which can trigger relapse and impulsive eating. There are also countless articles and books about the boundless energy and new-found happiness in those who have sworn off sugar for good. But despite the ubiquity of sugar in our diets, the notion of sugar addiction is still a rather taboo topic.
Are you still motivated to give up sugar? You might wonder how long it will take until you’re free of cravings and side-effects, but there’s no answer – everyone is different and no human studies have been done on this. But after 40 days, it’s clear that Andrew had overcome the worst, likely even reversing some of his altered dopamine signalling. “I remember eating my first sweet and thinking it was too sweet,” he said. “I had to rebuild my tolerance.”
And as regulars of a local bakery in Hershey, I can assure you, readers, that he has done just that.
Jordan Gaines Lewis is a Neuroscience Doctoral Candidate at Penn State College of Medicine


Bees Do It, Humans Do It ~ Bees can experience false memories, scientists say



Excerpt from csmonitor.com


Researchers at Queen Mary University of London have found the first evidence of false memories in non-human animals.

It has long been known that humans – even those of us who aren't famous news anchors – tend to recall events that did not actually occur. The same is likely true for mice: In 2013, scientists at MIT induced false memories of trauma in mice, and the following year, they used light to manipulate mice brains to turn painful memories into pleasant ones.

Now, researchers at Queen Mary University of London have shown for the first time that insects, too, can create false memories. Using a classic Pavlovian experiment, co-authors Kathryn Hunt and Lars Chittka determined that bumblebees sometimes combine the details of past memories to form new ones. Their findings were published today in Current Biology.
“I suspect the phenomenon may be widespread in the animal kingdom," Dr. Chittka said in a written statement to the Monitor.
First, Chittka and Dr. Hunt trained their buzzing subjects to expect a reward if they visited two artificial flowers – one solid yellow, the other with black-and-white rings. The order didn’t matter, so long as the bee visited both flowers. In later tests, they would present a choice of the original two flower types, plus one new one. The third type was a combination of the first two, featuring yellow-and-white rings. At first, the bees consistently selected the original two flowers, the ones that offered a reward.

But a good night’s sleep seemed to change all that. One to three days after training, the bees became confused and started incorrectly choosing the yellow-and-white flower (up to fifty percent of the time). They seemed to associate that pattern with a reward, despite having never actually seen it before. In other words, the bumblebees combined the memories of two previous stimuli to generate a new, false memory.

“Bees might, on occasion, form merged memories of flower patterns visited in the past,” Chittka said. “Should a bee unexpectedly encounter real flowers that match these false memories, they might experience a kind of deja-vu and visit these flowers expecting a rich reward.”

Bees have a rather limited brain capacity, Chittka says, so it’s probably useful for them to “economize” by storing generalized memories instead of minute details.

“In bees, for example, the ability to learn more than one flower type is certainly useful,” Chittka said, “as is the ability to extract commonalities of multiple flower patterns. But this very ability might come at the cost of bees merging memories from multiple sequential experiences.”

Chittka has studied memory in bumblebees for two decades. Bees can be raised and kept in a lab setting, so they make excellent long-term test subjects.

“They are [also] exceptionally clever animals that can memorize the colors, patterns, and scents of multiple flower species – as well as navigate efficiently over long distances,” Chittka said.

In past studies, it was assumed that animals that failed to perform learned tasks had either forgotten them or hadn’t really learned them in the first place. Chittka’s research seems to show that animal memory mechanisms are much more elaborate – at least when it comes to bumblebees.

“I think we need to move beyond understanding animal memory as either storing or not storing stimuli or episodes,” Chittka said. “The contents of memory are dynamic. It is clear from studies on human memory that they do not just fade over time, but can also change and integrate with other memories to form new information. The same is likely to be the case in many animals.”

Chittka hopes this study will lead to a greater biological understanding of false memories – in animals and humans alike. He says that false memories aren’t really a “bug in the system,” but a side effect of complex brains that strive to learn the big picture and to prepare for new experiences.

“Errors in human memory range from misremembering minor details of events to generating illusory memories of entire episodes,” Chittka said. “These inaccuracies have wide-ranging implications in crime witness accounts and in the courtroom, but I believe that – like the quirks of information processing that occur in well known optical illusions – they really are the byproduct of otherwise adaptive processes.”

“The ability to memorize the overarching principles of a number of different events might help us respond in previously un-encountered situations,” Chittka added. “But these abilities might come at the expense of remembering every detail correctly.”
So, if generating false memories goes hand in hand with having a nervous system, does all this let Brian Williams off the hook?

“It is possible that he conflated the memories,” Chittka said, “depending on his individual vulnerability to witnessing a traumatic event, plus a possible susceptibility to false memories – there is substantial inter-person variation with respect to this. It is equally possible that he was just ‘showing off’ when reporting the incident, and is now resorting to a simple lie to try to escape embarrassment. That is impossible for me to diagnose.”

But if Mr. Williams genuinely did misremember his would-be brush with death, Chittka says he shouldn’t be vilified.

“You cannot morally condemn someone for reporting something they think really did happen to them,” Chittka said. “You cannot blame an Alzheimer’s patient for forgetting to blow out the candle, even if they burn down the house as a result. In the same way, you can’t blame someone who misremembers a crime as a result of false memory processes.”


Another Problem for Evolution Theory? ‘Big Brain’ Gene Found in Humans, But Not in Chimps



Image: Mouse brain
M. Florio and W. Huttner / Max Planck Institute
This embryonic mouse cerebral cortex was stained to identify cell nuclei (in blue) and a marker for deep-layer neurons (in red). The human-specific gene known as ARHGAP11B was selectively expressed in the right hemisphere: Note the folding of the neocortical surface.

Excerpt from nbcnews.com

By Tia Ghose

A single gene may have paved the way for the rise of human intelligence by dramatically increasing the number of neurons found in a key brain region.

This gene seems to be uniquely human: It is found in modern-day humans, Neanderthals and another branch of extinct humans called Denisovans, but not in chimpanzees. 

By allowing the brain region called the neocortex to contain many more neurons, the tiny snippet of DNA may have laid the foundation for the human brain's massive expansion.
"It is so cool that one tiny gene alone may suffice to affect the phenotype of the stem cells, which contributed the most to the expansion of the neocortex," said study lead author Marta Florio, a doctoral candidate in molecular and cellular biology and genetics at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany. 

She and her colleagues found that the gene, called ARHGAP11B, is turned on and highly activated in the human neural progenitor cells, but isn't present at all in mouse cells. This tiny snippet of DNA, just 804 genetic bases long, was once part of a much longer gene. Somehow, this fragment was duplicated, and the duplicated fragment was inserted into the human genome. 

In follow-up experiments, the team inserted and turned on this DNA snippet in the brains of mice. The mice with the gene insertion grew what looked like larger neocortex regions. 

The researchers reviewed a wide variety of genomes from modern-day and extinct species — confirming that Neanderthals and Denisovans had this gene, while chimpanzees and mice do not. That suggests that the gene emerged soon after humans split off from chimpanzees, and that it helped pave the way for the rapid expansion of the human brain. 

Florio stressed that the gene is probably just one of many genetic changes that make human cognition special.

The gene was described in a paper published online Thursday by the journal Science.


Why science is so hard to believe



Excerpt from washingtonpost.com


There’s a scene in Stanley Kubrick’s comic masterpiece “Dr. Strangelove” in which Jack D. Ripper, an American general who’s gone rogue and ordered a nuclear attack on the Soviet Union, unspools his paranoid worldview — and the explanation for why he drinks “only distilled water, or rainwater, and only pure grain alcohol” — to Lionel Mandrake, a dizzy-with-anxiety group captain in the Royal Air Force.
Ripper: “Have you ever heard of a thing called fluoridation? Fluoridation of water?”
Mandrake: “Ah, yes, I have heard of that, Jack. Yes, yes.”
Ripper: “Well, do you know what it is?”
Mandrake: “No. No, I don’t know what it is, no.”
Ripper: “Do you realize that fluoridation is the most monstrously conceived and dangerous communist plot we have ever had to face?” 

The movie came out in 1964, by which time the health benefits of fluoridation had been thoroughly established and anti-fluoridation conspiracy theories could be the stuff of comedy. Yet half a century later, fluoridation continues to incite fear and paranoia. In 2013, citizens in Portland, Ore., one of only a few major American cities that don’t fluoridate, blocked a plan by local officials to do so. Opponents didn’t like the idea of the government adding “chemicals” to their water. They claimed that fluoride could be harmful to human health.

Actually fluoride is a natural mineral that, in the weak concentrations used in public drinking-water systems, hardens tooth enamel and prevents tooth decay — a cheap and safe way to improve dental health for everyone, rich or poor, conscientious brushers or not. That’s the scientific and medical consensus.
To which some people in Portland, echoing anti-fluoridation activists around the world, reply: We don’t believe you.
We live in an age when all manner of scientific knowledge — from the safety of fluoride and vaccines to the reality of climate change — faces organized and often furious opposition. Empowered by their own sources of information and their own interpretations of research, doubters have declared war on the consensus of experts. There are so many of these controversies these days, you’d think a diabolical agency had put something in the water to make people argumentative.
Science doubt has become a pop-culture meme. In the recent movie “Interstellar,” set in a futuristic, downtrodden America where NASA has been forced into hiding, school textbooks say the Apollo moon landings were faked.
In a sense this is not surprising. Our lives are permeated by science and technology as never before. For many of us this new world is wondrous, comfortable and rich in rewards — but also more complicated and sometimes unnerving. We now face risks we can’t easily analyze.
We’re asked to accept, for example, that it’s safe to eat food containing genetically modified organisms (GMOs) because, the experts point out, there’s no evidence that it isn’t and no reason to believe that altering genes precisely in a lab is more dangerous than altering them wholesale through traditional breeding. But to some people, the very idea of transferring genes between species conjures up mad scientists running amok — and so, two centuries after Mary Shelley wrote “Frankenstein,” they talk about Frankenfood.
The world crackles with real and imaginary hazards, and distinguishing the former from the latter isn’t easy. Should we be afraid that the Ebola virus, which is spread only by direct contact with bodily fluids, will mutate into an airborne super-plague? The scientific consensus says that’s extremely unlikely: No virus has ever been observed to completely change its mode of transmission in humans, and there’s zero evidence that the latest strain of Ebola is any different. But Google “airborne Ebola” and you’ll enter a dystopia where this virus has almost supernatural powers, including the power to kill us all.
In this bewildering world we have to decide what to believe and how to act on that. In principle, that’s what science is for. “Science is not a body of facts,” says geophysicist Marcia McNutt, who once headed the U.S. Geological Survey and is now editor of Science, the prestigious journal. “Science is a method for deciding whether what we choose to believe has a basis in the laws of nature or not.”
The scientific method leads us to truths that are less than self-evident, often mind-blowing and sometimes hard to swallow. In the early 17th century, when Galileo claimed that the Earth spins on its axis and orbits the sun, he wasn’t just rejecting church doctrine. He was asking people to believe something that defied common sense — because it sure looks like the sun’s going around the Earth, and you can’t feel the Earth spinning. Galileo was put on trial and forced to recant. Two centuries later, Charles Darwin escaped that fate. But his idea that all life on Earth evolved from a primordial ancestor and that we humans are distant cousins of apes, whales and even deep-sea mollusks is still a big ask for a lot of people.
Even when we intellectually accept these precepts of science, we subconsciously cling to our intuitions — what researchers call our naive beliefs. A study by Andrew Shtulman of Occidental College showed that even students with an advanced science education had a hitch in their mental gait when asked to affirm or deny that humans are descended from sea animals and that the Earth goes around the sun. Both truths are counterintuitive. The students, even those who correctly marked “true,” were slower to answer those questions than questions about whether humans are descended from tree-dwelling creatures (also true but easier to grasp) and whether the moon goes around the Earth (also true but intuitive).
Shtulman’s research indicates that as we become scientifically literate, we repress our naive beliefs but never eliminate them entirely. They nest in our brains, chirping at us as we try to make sense of the world.
Most of us do that by relying on personal experience and anecdotes, on stories rather than statistics. We might get a prostate-specific antigen test, even though it’s no longer generally recommended, because it caught a close friend’s cancer — and we pay less attention to statistical evidence, painstakingly compiled through multiple studies, showing that the test rarely saves lives but triggers many unnecessary surgeries. Or we hear about a cluster of cancer cases in a town with a hazardous-waste dump, and we assume that pollution caused the cancers. Of course, just because two things happened together doesn’t mean one caused the other, and just because events are clustered doesn’t mean they’re not random. Yet we have trouble digesting randomness; our brains crave pattern and meaning.
Even for scientists, the scientific method is a hard discipline. They, too, are vulnerable to confirmation bias — the tendency to look for and see only evidence that confirms what they already believe. But unlike the rest of us, they submit their ideas to formal peer review before publishing them. Once the results are published, if they’re important enough, other scientists will try to reproduce them — and, being congenitally skeptical and competitive, will be very happy to announce that they don’t hold up. Scientific results are always provisional, susceptible to being overturned by some future experiment or observation. Scientists rarely proclaim an absolute truth or an absolute certainty. Uncertainty is inevitable at the frontiers of knowledge.
That provisional quality of science is another thing a lot of people have trouble with. To some climate-change skeptics, for example, the fact that a few scientists in the 1970s were worried (quite reasonably, it seemed at the time) about the possibility of a coming ice age is enough to discredit what is now the consensus of the world’s scientists: The planet’s surface temperature has risen by about 1.5 degrees Fahrenheit in the past 130 years, and human actions, including the burning of fossil fuels, are extremely likely to have been the dominant cause since the mid-20th century.
It’s clear that organizations funded in part by the fossil-fuel industry have deliberately tried to undermine the public’s understanding of the scientific consensus by promoting a few skeptics. The news media gives abundant attention to such mavericks, naysayers, professional controversialists and table thumpers. The media would also have you believe that science is full of shocking discoveries made by lone geniuses. Not so. The (boring) truth is that science usually advances incrementally, through the steady accretion of data and insights gathered by many people over many years. So it has been with the consensus on climate change. That’s not about to go poof with the next thermometer reading.
But industry PR, however misleading, isn’t enough to explain why so many people reject the scientific consensus on global warming.
The “science communication problem,” as it’s blandly called by the scientists who study it, has yielded abundant new research into how people decide what to believe — and why they so often don’t accept the expert consensus. It’s not that they can’t grasp it, according to Dan Kahan of Yale University. In one study he asked 1,540 Americans, a representative sample, to rate the threat of climate change on a scale of zero to 10. Then he correlated that with the subjects’ science literacy. He found that higher literacy was associated with stronger views — at both ends of the spectrum. Science literacy promoted polarization on climate, not consensus. According to Kahan, that’s because people tend to use scientific knowledge to reinforce their worldviews.
Americans fall into two basic camps, Kahan says. Those with a more “egalitarian” and “communitarian” mind-set are generally suspicious of industry and apt to think it’s up to something dangerous that calls for government regulation; they’re likely to see the risks of climate change. In contrast, people with a “hierarchical” and “individualistic” mind-set respect leaders of industry and don’t like government interfering in their affairs; they’re apt to reject warnings about climate change, because they know what accepting them could lead to — some kind of tax or regulation to limit emissions.
In the United States, climate change has become a litmus test that identifies you as belonging to one or the other of these two antagonistic tribes. When we argue about it, Kahan says, we’re actually arguing about who we are, what our crowd is. We’re thinking: People like us believe this. People like that do not believe this.
Science appeals to our rational brain, but our beliefs are motivated largely by emotion, and the biggest motivation is remaining tight with our peers. “We’re all in high school. We’ve never left high school,” says Marcia McNutt. “People still have a need to fit in, and that need to fit in is so strong that local values and local opinions are always trumping science. And they will continue to trump science, especially when there is no clear downside to ignoring science.”
Meanwhile the Internet makes it easier than ever for science doubters to find their own information and experts. Gone are the days when a small number of powerful institutions — elite universities, encyclopedias and major news organizations — served as gatekeepers of scientific information. The Internet has democratized it, which is a good thing. But along with cable TV, the Web has also made it possible to live in a “filter bubble” that lets in only the information with which you already agree.
How to penetrate the bubble? How to convert science skeptics? Throwing more facts at them doesn’t help. Liz Neeley, who helps train scientists to be better communicators at an organization called Compass, says people need to hear from believers they can trust, who share their fundamental values. She has personal experience with this. Her father is a climate-change skeptic and gets most of his information on the issue from conservative media. In exasperation she finally confronted him: “Do you believe them or me?” She told him she believes the scientists who research climate change and knows many of them personally. “If you think I’m wrong,” she said, “then you’re telling me that you don’t trust me.” Her father’s stance on the issue softened. But it wasn’t the facts that did it.
If you’re a rationalist, there’s something a little dispiriting about all this. In Kahan’s descriptions of how we decide what to believe, what we decide sometimes sounds almost incidental. Those of us in the science-communication business are as tribal as anyone else, he told me. We believe in scientific ideas not because we have truly evaluated all the evidence but because we feel an affinity for the scientific community. When I mentioned to Kahan that I fully accept evolution, he said: “Believing in evolution is just a description about you. It’s not an account of how you reason.”
Maybe — except that evolution is real. Biology is incomprehensible without it. There aren’t really two sides to all these issues. Climate change is happening. Vaccines save lives. Being right does matter — and the science tribe has a long track record of getting things right in the end. Modern society is built on things it got right.
Doubting science also has consequences, as seen in recent weeks with the measles outbreak that began in California. The people who believe that vaccines cause autism — often well educated and affluent, by the way — are undermining “herd immunity” to such diseases as whooping cough and measles. The anti-vaccine movement has been going strong since a prestigious British medical journal, the Lancet, published a study in 1998 linking a common vaccine to autism. The journal later retracted the study, which was thoroughly discredited. But the notion of a vaccine-autism connection has been endorsed by celebrities and reinforced through the usual Internet filters. (Anti-vaccine activist and actress Jenny McCarthy famously said on “The Oprah Winfrey Show,” “The University of Google is where I got my degree from.”)
In the climate debate, the consequences of doubt are likely to be global and enduring. Climate-change skeptics in the United States have achieved their fundamental goal of halting legislative action to combat global warming. They haven’t had to win the debate on the merits; they’ve merely had to fog the room enough to keep laws governing greenhouse gas emissions from being enacted.
Some environmental activists want scientists to emerge from their ivory towers and get more involved in the policy battles. Any scientist going that route needs to do so carefully, says Liz Neeley. “That line between science communication and advocacy is very hard to step back from,” she says. In the debate over climate change, the central allegation of the skeptics is that the science saying it’s real and a serious threat is politically tinged, driven by environmental activism and not hard data. That’s not true, and it slanders honest scientists. But the claim becomes more likely to be seen as plausible if scientists go beyond their professional expertise and begin advocating specific policies.
It’s their very detachment, what you might call the cold-bloodedness of science, that makes science the killer app. It’s the way science tells us the truth rather than what we’d like the truth to be. Scientists can be as dogmatic as anyone else — but their dogma is always wilting in the hot glare of new research. In science it’s not a sin to change your mind when the evidence demands it. For some people, the tribe is more important than the truth; for the best scientists, the truth is more important than the tribe.


Did Michelangelo conceal brain stem in painting of the Separation of Light from Darkness?

Excerpt from livescience.com

Michelangelo's depiction of God's throat in one panel of his Sistine Chapel fresco is awkward, which is odd for an artist so devoted to the study of anatomy. Now researchers have a theory to explain why: Michelangelo embedded an image of a human brain stem in God’s throat, they find.
The Renaissance artist is known to have studied human anatomy by dissecting cadavers when he was a young man, and continued until late in his 89 years. This practice informed his powerful depictions of the human and the divine. 

But one panel of his Sistine Chapel frescoes contains an oddly lit and awkward image of God's neck and head as seen from below. The light illuminating the neck was different from that of the rest of the painting. Also, God's beard is foreshortened and appears to roll up along the sides of his jaw, and his bulbous neck has prompted speculation that Michelangelo intended to portray God with a goiter, or abnormally enlarged thyroid gland. 

Two researchers – one a neurosurgeon, the other a medical illustrator – writing in the May issue of the journal Neurosurgery have another, more flattering theory. In this panel, which portrays the Separation of Light from Darkness, from the Book of Genesis, Michelangelo embedded a ventral view of the brainstem, they wrote. 

Using a digital analysis, they compared the shadows outlining the features of God’s neck and a photograph of a model of this section of the brain, which connects with the spinal cord, and found a close correspondence. 

This is not the first anatomical image found hidden in the frescoes of the Sistine Chapel. In an article published in 1990, Frank Lynn Meshberger, a gynecologist, identified an outline of the human brain in the Creation of Adam. Among other details, he noted that the shroud surrounding God had the shape of the cerebrum, or the upper part of the brain. A decade later, another researcher pointed out a kidney motif. 

"We speculated that having used the brain motif successfully in the Creation of Adam almost a year earlier, Michelangelo wanted to once again associate the figure of God with a brain motif in the iconographically critical Separation of Light from Darkness," wrote authors Ian Suk, a medical illustrator, and neurosurgeon Rafael Tamargo, both of the Johns Hopkins School of Medicine.

They do point out "the perils of overinterpreting a masterpiece," saying that not all art historians and other viewers will agree with their conclusions. Even so, they say their analysis, along with historical records, backs the interpretation.



Striking Similarities Between Brain Cells and Our Universe





Excerpt from  themindunleashed.org


The structures of the universe and the human brain are strikingly similar.

In the Eastern spiritual discipline of Daoism, the human body has long been viewed as a small universe, as a microcosm. As billion-dollar investments are made in the United States and Europe to research brain functioning, the correlations between the brain and the universe continue to emerge.

The two pictures below illustrate the similarities. The top picture shows the neural network of a brain cell; the bottom picture shows the distribution of dark matter in the universe as simulated by Millennium Simulation.

The pictures show a structural similarity in terms of connections and distribution of matter in the brain and in the universe. The first is a microscopic view, the second a macroscopic one.

The brain is like a microcosm.

A study conducted by Dmitri Krioukov of the University of California and a team of researchers, published last year in Nature’s Scientific Reports, shows striking similarities between neural networks in the brain and network connections between galaxies.

Krioukov’s team created a computer simulation that broke the known universe down into tiny, subatomic units of space-time, explained Live Science. The simulation added more space-time units as the history of the universe progressed. The developing interactions between matter in galaxies were similar to the interactions that comprise neural networks in the human brain.
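The news summary leaves out the mechanics, but the general flavor of such growing-network comparisons can be sketched with a toy model. The snippet below is an illustrative stand-in, not Krioukov's actual space-time simulation: it grows a preferential-attachment network (new nodes favor well-connected ones) and inspects the heavy-tailed degree distribution that brain networks and the cosmic web are both reported to share. It assumes the networkx library is available.

```python
# Toy sketch only: a preferential-attachment network, not the causal
# space-time model used in the actual study.
import networkx as nx

g = nx.barabasi_albert_graph(n=10_000, m=2, seed=42)  # grow 10,000 nodes

degrees = sorted((d for _, d in g.degree()), reverse=True)
print("max degree:", degrees[0])                     # a few highly connected hubs
print("median degree:", degrees[len(degrees) // 2])  # most nodes sparsely linked
```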
Physicist Kevin Bassler of the University of Houston, who was not involved in the study, told Live Science that the study suggests a fundamental law governing these networks.

In May 2011, Seyed Hadi Anjamrooz of the Kerman University of Medical Sciences and other Iranian medical scientists published an article in the International Journal of the Physical Sciences on the similarities between cells and the universe. They explain that a black hole resembles the cell nucleus. A black hole’s event horizon—a sort of point of no return where the gravitational pull will suck objects into the black hole—also resembles the nuclear membrane.

The event horizon is double-layered, as is the nuclear membrane. Much like the event horizon, which prevents anything that enters from leaving, the nuclear membrane separates cell fluids, preventing mixing, and regulates the exchange of matter between the inside and outside of the nucleus. Black holes and living cells also both emit pockets of electromagnetic radiation, among other similarities.

The researchers wrote: “Nearly all that exists in the macrouniverse is mirrored in a biological cell as a microuniverse. Simply put, the universe can be pictured as a cell.”


Study Suggests Baby Chicks Can Count! ~ Video

Excerpt from nbcnews.com
By Tia Ghose, LiveScience



It's not just humans who can count: Newly published research suggests chicks seem to have a number sense, too. 

Scientists found that chicks seem to count upward, moving from left to right. They put smaller numbers on the left, and larger numbers on the right — the same mental representation of the number line that humans use. 

"Our results suggest a rethinking of the relationship between numerical abilities and verbal language, providing further evidence that language and culture are not necessary for the development of a mathematical cognition," said study lead author Rosa Rugani, a psychologist at the University of Padova in Italy.

The left-to-right way of thinking about ascending numbers seems to be embedded in people's mental representations of numbers, but it's not clear exactly why. Is it an artifact of some long-lost accident of history, or is it a fundamental aspect of the way the brain processes numbers? 

To help answer those questions, Rugani and her colleagues trained 3-day-old chicks to travel around a screen panel with five dots on it to get to a food treat behind it. This made the five-dot panel an anchor number that the chicks could use for comparison with other numbers. 

After the chicks learned that the five-dot panel meant food, the researchers removed that panel and then placed the chicks in front of two panels, one to the left and the other to the right, that each had two dots. The chicks tended to go to the left panel, suggesting that they mentally represent numbers smaller than five as being to the left of five. 

When the researchers put the chicks in front of two panels that each had eight dots, the chicks walked to the panel on the right. This suggests the chicks mentally represent numbers larger than five as being to the right of five, the researchers said. 

In a second experiment, the researchers repeated the whole process, but started with a panel that had 20 dots instead of five. They then added two other panels that had either eight or 32 dots. Sure enough, the baby chicks tended to go to the left when the screens had just eight dots, and to the right when they had 32 dots, according to the findings published in this week's issue of the journal Science. 
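Results like these are usually checked against chance with a simple binomial test: if chicks had no number-space mapping, left and right choices should split 50/50. Here is a minimal sketch using Python's standard library, with hypothetical counts rather than the study's actual data:

```python
from math import comb

def binom_two_sided(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: probability of an outcome
    at least as extreme (as unlikely) as observing k out of n."""
    pmf = lambda i: comb(n, i) * p**i * (1 - p)**(n - i)
    p_k = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= p_k + 1e-12)

# Hypothetical example: 35 of 40 trials end at the LEFT panel when the
# test number is smaller than the trained anchor.
print(binom_two_sided(35, 40))  # ~1e-6: such a split is very unlikely by chance
```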

"I would not at all be surprised that the number spatial mapping is also found in other animals, and in newborn infants," Rugani said.



The Best Star Gazing Binoculars for 2015

Excerpt from space.com

Most people have two eyes. Humans evolved to use them together (not all animals do). People form a continuous, stereoscopic panorama movie of the world in their minds. With your two eyes tilted upward on a clear night, there's nothing standing between you and the universe. The easiest way to enhance your enjoyment of the night sky is to paint your brain with two channels of stronger starlight with a pair of binoculars. Even if you live in — or near — a large, light-polluted city, you may be surprised at how much astronomical detail you'll see through the right binoculars!
Our editors have looked at the spectrum of current binocular offerings. Thanks to computer-aided design and manufacturing, there have never been more high-quality choices at reasonable prices. Sadly, there's also a bunch of junk out there masquerading as fine stargazing instrumentation. We've selected a few that we think will work for most skywatchers.
There was a lot to consider: magnification versus mass, field of view, prism type, optical quality ("sharpness"), light transmission, age of the user (to match "exit pupil" size, which changes as we grow older), shock resistance, waterproofing and more. 
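One of those numbers is worth unpacking: the exit pupil (the disk of light the eyepiece delivers to your eye) is simply the objective diameter divided by the magnification. A dark-adapted young pupil opens to roughly 7 mm, while older eyes often stop nearer 5 mm, which is why exit pupil should be matched to the user's age. A quick sketch, using models reviewed below:

```python
def exit_pupil_mm(magnification: float, objective_mm: float) -> float:
    """Exit pupil (mm) = objective aperture / magnification."""
    return objective_mm / magnification

models = [("Celestron Cometron 7x50", 7, 50), ("Oberwerk Mariner 8x40", 8, 40),
          ("Celestron SkyMaster 8x56", 8, 56), ("Oberwerk Ultra 15x70", 15, 70),
          ("Celestron SkyMaster 25x100", 25, 100)]

for name, mag, obj in models:
    print(f"{name}: {exit_pupil_mm(mag, obj):.1f} mm exit pupil")
# 7x50 and 8x56 deliver ~7 mm (well suited to young, dark-adapted eyes);
# 15x70 and 25x100 deliver 4-5 mm, a closer match for older observers.
```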

The best binoculars for you

"Small" astronomy binoculars would probably be considered "medium" for bird watching, sports observation and other terrestrial purposes. This comes about as a consequence of optics (prism type and objective size, mostly). "Large" binoculars are difficult to use for terrestrial applications and have a narrow field of view. They begin to approach telescope quality in magnification, resolution and optical characteristics.

Most of our Editors' Choices for stargazing binoculars here are under $300. You can pay more than 10 times that for enormous binocular telescopes used by elite enthusiasts on special mounts! You'll also pay more for ruggedized ("mil spec," or military standard) binoculars, many of which suspend their prisms on shock mounts to keep the optics in precise alignment.

Also, our Editors' Choices use Porro prism optics. Compact binoculars usually employ "roof" prisms, which can be cast more cheaply, but whose quality can vary widely. [There's much more about Porro prisms in our Buyer's Guide.]
We think your needs are best served by reviewing in three categories.
  • Small, highly portable binoculars can be hand-held for viewing ease.
  • Medium binoculars offer higher powers of magnification, but still can be hand-held, if firmly braced.
  • Large binoculars have bigger "objective" lenses but must be mounted on a tripod or counterweighted arm for stability.
Here's a detailed look at our Editor's Choice selections for stargazing binoculars:

Best Small Binoculars 

Editor's Choice: Oberwerk Mariner 8x40 (Cost: $150)

Oberwerk in German means "above work." The brand does indeed perform high-level optical work, perfect for looking at objects above, as well as on the ground or water. Founder Kevin Busarow's Mariner series is not his top of the line, but it benefits greatly from engineering developed for his pricier models. The Oberwerk 8x40s treat your eyes to an extremely wide field, at very high contrast, with razor-sharp focus; they are superb for observing the broad starscapes of the Milky Way. Just 5.5 inches (14 cm) from front to back and 6.5 inches (16.5 cm) wide, the Mariners are compact and rugged enough to be your favorite "grab and go" binoculars. But at 37 ounces, they may be more than a small person wants to carry for a long time.


Runner-Up: Celestron Cometron 7x50 (Cost: $30)

Yes, you read that price correctly! These lightweight, wide-field Celestron binoculars bring honest quality at a remarkably low price point. The compromise comes in the optics, particularly the prism's glass type (you might see a little more chromatic aberration around the edges of the moon, and the exit pupil isn't a nice, round circle). Optimized for "almost infinitely distant" celestial objects, these Cometrons won't focus closer than about 30 feet (9.1 meters). But that's fine for most sports and other outdoor use. If you're gift-buying for multiple young astronomers – or you want an inexpensive second set for yourself – these binoculars could be your answer. Just maybe remind those young folks to be a little careful around water; Celestron claims only that the Cometrons are "water resistant," not waterproof.


Honorable Mention: Swarovski Habicht 8x30 (Cost: $1,050)

From the legendary Austrian firm of Swarovski Optik, these "bins" are perfect. Really. Very sharp. Very lightweight. Very wide field. Very versatile. And very expensive! Our editors would have picked them if we could have afforded them. 

Honorable Mention: Nikon Aculon 7x50 (Cost: $110) 

Nikon's legendary optical quality and the large, 7mm exit pupil diameter make these appropriate as a gift for younger skywatchers. 

Best Medium Binoculars

Editor's Choice: Celestron SkyMaster 8x56 (Cost: $210)

A solid, chunky-feeling set of quality prisms and lenses makes these binoculars a pleasant, 38-ounce handful. A medium-wide 5.8-degree field of view and a large 7mm exit pupil bring you gently into a sweet sky of bright, though perhaps not totally brilliant, stars. Fully dressed in a rubber wetsuit, these SkyMasters are waterproof. Feel free to take them boating or birding on a moist morning. Their optical tubes were blown out with dry nitrogen at the factory, then sealed, so you can expect them not to fog up, at least not from the inside. Celestron's strap-mounting points on the SkyMaster 8x56 are recessed, so they don't bother your thumbs, but that location makes them hard to fasten.


Runner-Up: Oberwerk Ultra 15x70 (Cost: $380)

The most rugged pair we evaluated, these 15x70s are optically outstanding. Seen through the Ultra's exquisitely multi-coated glass, you may find yourself falling in love with the sky all over again. Oberwerk's method of suspending their BAK4 glass Porro prisms offers greater shock-resistance than most competitors’ designs. While more costly than some comparable binoculars, they deliver superior value. Our only complaint is with their mass: At 5.5 lbs., these guys are heavy!  You can hand-hold them for a short while, if you’re lying down. But they are best placed on a tripod, or on a counterweighted arm, unless you like shaky squiggles where your point-source stars are supposed to be. Like most truly big binoculars, the eyepieces focus independently; there’s no center focus wheel. These "binos" are for true astronomers. 


Honorable Mention: Vixen Ascot 10x50 (Cost: $165)

These quirky binoculars present you with an extremely wide field. But they are not crash-worthy – don't drop them in the dark – nor are they waterproof, and the focus knob is not conveniently located. So care is needed if you opt for these Vixen optics. 

Best Large Binoculars

Editor's Choice: Celestron SkyMaster 25x100

Don't even think about hand-holding this 156-ounce beast! The SkyMaster 25x100 is really a pair of side-by-side 100mm short-tube refractor telescopes. Factor the cost of a sturdy tripod into your purchase decision, if you want to go this big. The monster Celestron comes with a sturdy support spar for mounting. Its properly multi-coated optics will haul in surprising detail from the sky. Just make sure your skies are dark; with this much magnification, light pollution can render your images dingy. As with many in the giant and super-giant class of binoculars, the oculars (non-removable eyepieces) focus separately, each rotating through an unusually long 450 degrees. Getting to critical focus can be challenging, but the view is worth it. You can resolve a bit of detail on the face of the new moon (lit by "Earthshine") and pick out cloud bands on Jupiter; that's pretty astonishing for binoculars.


Runner-Up: Orion Astronomy 20x80 (Cost: $150)

These big Orions distinguish themselves by price point; they're an excellent value. You could pay 10 times more for the comparably sized Steiner Military Observer 20x80 binoculars! Yes, the Orions are more delicate, a bit less bright and not quite as sharp. But they do offer amazingly high contrast; you'll catch significant detail in galaxies, comets and other "fuzzies." Unusually among such big rigs, the Astronomy 20x80 uses a center focus ring and one "diopter" (rather than independently focusing oculars); if you're graduating from smaller binoculars, which commonly use that approach, this may be a comfort. These binoculars are almost lightweight enough to hold by hand. But don't do that, at least not for long periods. And don't drop them. They will go out of alignment if handled roughly.


Honorable Mention: Barska Cosmos 25x100 (Cost: $230)

They are not pretty, but you're in the dark, right? Built around a tripod-mountable truss tube, these Barskas equilibrate to temperature quickly and give you decent viewing at rational cost. They make for a cheaper version of our Editors' Choice Celestron SkyMasters. 

Honorable Mention: Steiner Observer 20x80 (Cost: $1,500)

Not at all a practical cost choice for a beginning stargazer, but you can dream, can't you? These Steiner binoculars are essentially military optics "plowshared" for peaceful celestial observing. 

Why we chose NOT to review certain types

Image stabilized?

Binoculars with active internal image stabilization are a growing breed. Most use battery-powered gyroscope/accelerometer-driven dynamic optical elements. We have left this type out of our evaluation because they are highly specialized and pricey ($1,250 and up). But if you are considering active stabilization, you can apply the same judgment methods detailed in our Buyer's Guide.

Comes with a camera?

A few binoculars are sold with built-in cameras. That seems like a good idea. But it isn't, at least not for skywatching. Other than Earth's moon, objects in the night sky are stingy with their photons. It takes a lengthy, rock-steady time exposure to collect enough light for a respectable image. By all means, consider these binocular-camera combos for snapping Facebook shots of little Jenny on the soccer field. But stay away from them for astronomy.

Mega monster-sized?

Take your new binoculars out under the night sky on clear nights, and you will fall in love with the universe. You will crave more ancient light from those distant suns. That may translate into a strong desire for bigger stereo-light buckets.

Caution: The next level up is a quantum jump of at least one financial order of magnitude. But if you have the disposable income and frequent access to dark skies, you may want to go REALLY big. Binocular telescopes in this class can feature interchangeable matching eyepieces, individually focusing oculars, more than 30x magnification and sturdy special-purpose tripods. Amateurs using these elite-level stereoscopes have discovered several prominent comets.

Enjoy your universe

If you are new to lens-assisted stargazing, you'll find excellent enhanced views among the binocular choices above. To get in deeper and to understand how we picked the ones we did, jump to our Buyer's Guide: How to Choose Binoculars for Sky Watching.

You have just taken the first step to lighting up your brain with star fire. May the photons be with you. Always. 

Skywatching Events 2015

Once you have your new binoculars, it's time to take them for a spin. This year, intrepid stargazers will have plenty of good opportunities to use their new gear.

On March 20, for example, the moon will cross the face of the sun in a total solar eclipse. You can take in the celestial sight through binoculars fitted with proper sun-blocking filters, but NEVER look at the sun directly, even during a solar eclipse.

Observers can also take a look at the craggy face of the moon during a lunar eclipse on April 4. Stargazers using binoculars should be able to pick out some details not usually seen by the naked eye when looking at Earth's natural satellite.

Skywatchers should also peek out from behind the binoculars for a chance to see a series of annual meteor showers throughout the year.


Banned TED Talk: The Science Delusion ~ Is science way off about the nature of our reality?



The following statement has been posted by Tedstaff at blog.ted.com: "After due diligence, including a survey of published scientific research and recommendations from our Science Board and our community, we have decided that Graham Hancock’s and Rupert Sheldrake’s talks from TEDxWhitechapel should be removed from distribution on the TEDx YouTube channel... All talks on the TEDxTalks channel represent the opinion of the speaker, not of TED or TEDx, but we feel a responsibility not to provide a platform for talks which appear to have crossed the line into pseudoscience."

Response to the TED Scientific Board’s Statement
Rupert Sheldrake
March 18, 2013

I would like to respond to TED’s claims that my TEDx talk “crossed the line into pseudoscience”, contains “serious factual errors” and makes “many misleading statements.”
This discussion is taking place because the militant atheist bloggers Jerry Coyne and P.Z. Myers denounced me, and attacked TED for giving my talk a platform. I was invited to give my talk as part of a TEDx event in Whitechapel, London, called “Challenging Existing Paradigms.” That’s where the problem lies: my talk explicitly challenges the materialist belief system. It summarized some of the main themes of my recent book Science Set Free (in the UK called The Science Delusion). Unfortunately, the TED administrators have publicly aligned themselves with the old paradigm of materialism, which has dominated science since the late nineteenth century.
TED say they removed my talk from their website on the advice of their Scientific Board, who also condemned Graham Hancock’s talk. Hancock and I are now facing anonymous accusations made by a body on whose authority TED relies, on whose advice they act, and behind whom they shelter, but whose names they have not revealed.
TED’s anonymous Scientific Board made three specific accusations:
Accusation 1: “he suggests that scientists reject the notion that animals have consciousness, despite the fact that it’s generally accepted that animals have some form of consciousness, and there’s much research and literature exploring the idea.”
I characterized the materialist dogma as follows: “Matter is unconscious: the whole universe is made up of unconscious matter. There’s no consciousness in stars, in galaxies, in planets, in animals, in plants and there ought not to be any in us either, if this theory’s true. So a lot of the philosophy of mind over the last 100 years has been trying to prove that we are not really conscious at all.” Certainly some biologists, including myself, accept that animals are conscious. In August 2012, a group of scientists came out with an endorsement of animal consciousness in “The Cambridge Declaration on Consciousness”. As Discovery News reported, “While it might not sound like much for scientists to declare that many nonhuman animals possess conscious states, it’s the open acknowledgement that’s the big news here.” (http://news.discovery.com/human/genetics/animals-consciousness-mammals-birds-octopus-120824.htm)
But materialist philosophers and scientists are still in the majority, and they argue that consciousness does nothing – it is either an illusion or an “epiphenomenon” of brain activity. It might as well not exist in animals – or even in humans. That is why in the philosophy of mind, the very existence of consciousness is often called “the hard problem” (http://en.wikipedia.org/wiki/Hard_problem_of_consciousness).
Accusation 2: “He also argues that scientists have ignored variations in the measurements of natural constants, using as his primary example the dogmatic assumption that a constant must be constant and uses the speed of light as example.… Physicist Sean Carroll wrote a careful rebuttal of this point.”
TED’s Scientific Board refers to a Scientific American article that makes my point very clearly: “Physicists routinely assume that quantities such as the speed of light are constant.”
In my talk I said that the published values of the speed of light dropped by about 20 km/sec between 1928 and 1945. Carroll’s “careful rebuttal” consisted of a table copied from Wikipedia showing the speed of light at different dates, with a gap between 1926 and 1950, omitting the very period I referred to. His other reference (http://micro.magnet.fsu.edu/primer/lightandcolor/speedoflight.html) does indeed give two values for the speed of light in this period, in 1928 and 1932-35, and sure enough, they were 20 and 24 km/sec lower than the previous value, and 14 and 18 km/sec lower than the value from 1947 onwards.
1926: 299,798 km/s
1928: 299,778 km/s
1932–35: 299,774 km/s
1947: 299,792 km/s
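
As a quick arithmetic check, the drops quoted above follow directly from these values:

\[
\begin{aligned}
299{,}798 - 299{,}778 &= 20 \text{ km/s} &&\text{(1926 to 1928)}\\
299{,}798 - 299{,}774 &= 24 \text{ km/s} &&\text{(1926 to 1932--35)}\\
299{,}792 - 299{,}778 &= 14 \text{ km/s} &&\text{(1947 vs. 1928)}\\
299{,}792 - 299{,}774 &= 18 \text{ km/s} &&\text{(1947 vs. 1932--35)}
\end{aligned}
\]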

In my talk I suggest how a re-examination of existing data could resolve whether large continuing variations in the Universal Gravitational Constant, G, are merely errors, as usually assumed, or whether they show correlations between different labs that might have important scientific implications hitherto ignored. Jerry Coyne and TED’s Scientific Board regard this as an exercise in pseudoscience. I think their attitude reveals a remarkable lack of curiosity.
Accusation 3: “Sheldrake claims to have “evidence” of morphic resonance in crystal formation and rat behavior. The research has never appeared in a peer-reviewed journal, despite attempts by other scientists eager to replicate the work.”
I said, “There is in fact good evidence that new compounds get easier to crystallize all around the world.” For example, turanose, a kind of sugar, was considered to be a liquid for decades, until it first crystallized in the 1920s. Thereafter it formed crystals everywhere (Woodard and McCrone, Journal of Applied Crystallography (1975), 8, 342). The American chemist C. P. Saylor remarked that it was as though “the seeds of crystallization, as dust, were carried upon the winds from end to end of the earth” (quoted by Woodard and McCrone).
The research on rat behavior I referred to was carried out at Harvard and the Universities of Melbourne and Edinburgh and was published in peer-reviewed journals, including the British Journal of Psychology and the Journal of Experimental Biology. For a fuller account and detailed references see Chapter 11 of my book Morphic Resonance (in the US) / A New Science of Life (in the UK). The relevant passage is online here: http://sciencesetfree.tumblr.com/
The TED Scientific Board refers to “attempts by other scientists eager to replicate the work” on morphic resonance. I would be happy to work with these eager scientists if the Scientific Board can reveal who they are.
This is a good opportunity to correct an oversimplification in my talk. In relation to the dogma that mechanistic medicine is the only kind that really works, I said, “that’s why governments only fund mechanistic medicine and ignore complementary and alternative therapies.” This is true of most governments, but the US is a notable exception. The US National Center for Complementary and Alternative Medicine receives about $130 million a year, about 0.4% of the National Institutes of Health (NIH) total annual budget of $31 billion.
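
A quick check of that percentage:

\[
\frac{\$130\ \text{million}}{\$31\ \text{billion}} = \frac{1.3\times 10^{8}}{3.1\times 10^{10}} \approx 0.42\%,
\]

consistent with the roughly 0.4% figure quoted.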
Obviously I could not spell out all the details of my arguments in an 18-minute talk, but TED’s claims that it contains “serious factual errors,” “many misleading statements” and that it crosses the line into “pseudoscience” are defamatory and false.



A Physicist’s Explanation of Why the Soul May Exist







By Tara MacIsaac
Excerpt from theepochtimes.com



Henry P. Stapp is a theoretical physicist at the University of California–Berkeley who worked with some of the founding fathers of quantum mechanics. He does not seek to prove that the soul exists, but he does say that the existence of the soul fits within the laws of physics.


It is not true to say belief in the soul is unscientific, according to Stapp. Here the word “soul” refers to a personality independent of the brain or the rest of the human body that can survive beyond death.  In his paper, “Compatibility of Contemporary Physical Theory With Personality Survival,” he wrote: “Strong doubts about personality survival based solely on the belief that postmortem survival is incompatible with the laws of physics are unfounded.”
He works with the Copenhagen interpretation of quantum mechanics—more or less the interpretation used by some of the founders of quantum mechanics, Niels Bohr and Werner Heisenberg. Even Bohr and Heisenberg had some disagreements on how quantum mechanics works, and understandings of the theory since that time have also been diverse. Stapp’s paper on the Copenhagen interpretation has been influential. It was written in the 1970s and Heisenberg wrote an appendix for it. 

Stapp noted of his own concepts: “There has been no hint in my previous descriptions (or conception) of this orthodox quantum mechanics of any notion of personality survival.”

Why Quantum Theory Could Hint at Life After Death

Stapp explains that the founders of quantum theory required scientists to essentially cut the world into two parts. Above the cut, classical mathematics could describe the physical processes empirically experienced. Below the cut, quantum mathematics describes a realm “which does not entail complete physical determinism.”

Of this realm below the cut, Stapp wrote: “One generally finds that the evolved state of the system below the cut cannot be matched to any conceivable classical description of the properties visible to observers.”

So how do scientists observe the invisible? They choose particular properties of the quantum system and set up apparatus to view their effects on the physical processes “above the cut.”

The key is the experimenter’s choice. When working with the quantum system, the observer’s choice has been shown to physically impact what manifests and can be observed above the cut. 
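
To make that concrete, here is a minimal numerical sketch; it is not from Stapp's paper, and the qubit state, the two bases, and the measure helper are illustrative assumptions. The same prepared state produces entirely different statistics depending on which measurement basis the experimenter chooses:

import numpy as np

rng = np.random.default_rng(0)

# A single qubit prepared in the superposition |+> = (|0> + |1>)/sqrt(2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def measure(state, basis):
    # Projective measurement: the rows of `basis` are the orthonormal
    # basis vectors; returns the index of the observed outcome.
    probs = np.abs(basis @ state) ** 2
    return rng.choice(len(probs), p=probs)

Z = np.eye(2)                                 # the "0 or 1" question
X = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # the "plus or minus" question

for name, basis in (("Z", Z), ("X", X)):
    outcomes = [measure(plus, basis) for _ in range(1000)]
    frac = outcomes.count(0) / len(outcomes)
    print(f"{name}-basis measurement: outcome 0 seen {frac:.0%} of the time")

Measured in the Z basis, the outcomes are a 50/50 coin toss; measured in the X basis, the first outcome is certain. What manifests "above the cut" depends on the question the observer chose to ask.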

Stapp cited Bohr’s analogy for this interaction between a scientist and his experiment results: “[It's like] a blind man with a cane: when the cane is held loosely, the boundary between the person and the external world is the divide between hand and cane; but when held tightly the cane becomes part of the probing self: the person feels that he himself extends to the tip of the cane.”

The physical and mental are connected in a dynamic way. In terms of the relationship between mind and brain, it seems the observer can hold in place a chosen brain activity that would otherwise be fleeting. This is a choice similar to the choice a scientist makes when deciding which properties of the quantum system to study. 
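
In related work Stapp identifies this holding-in-place with the quantum Zeno effect, in which sufficiently frequent measurement suppresses a quantum state's evolution. Here is a minimal sketch of that standard textbook effect; the two-state system and the rotation rate omega are illustrative assumptions, not a model of the brain:

import numpy as np

def survival_probability(total_time, n_measurements, omega=1.0):
    # A two-state system rotates away from its initial state at rate
    # omega. Between measurements the initial state's amplitude evolves
    # as cos(omega * dt), so each projective measurement re-finds the
    # state with probability cos^2(omega * dt).
    dt = total_time / n_measurements
    return np.cos(omega * dt) ** (2 * n_measurements)

for n in (1, 10, 100, 1000):
    p = survival_probability(total_time=1.0, n_measurements=n)
    print(f"{n:4d} measurements -> P(state held in place) = {p:.4f}")

As the measurements become more frequent, the survival probability climbs toward 1: repeatedly asking the same question keeps the state from drifting away.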

The quantum explanation of how the mind and brain can be separate or different, yet connected by the laws of physics “is a welcome revelation,” wrote Stapp. “It solves a problem that has plagued both science and philosophy for centuries—the imagined science-mandated need either to equate mind with brain, or to make the brain dynamically independent of the mind.”


Brain Magic ~ With Keith Barry

First, Keith Barry shows us how our brains can fool our bodies -- in a trick that works via podcast too. Then he involves the audience in some jaw-dropping (and even a bit dangerous) feats of brain magic.


The Whispering Mind: The Enduring Conundrum of Consciousness

It's an old question: what is consciousness? Today, sophisticated brain imaging technologies, clinical studies, as well as the newfound ability to listen to the whisper of even an individual nerve cell, are bringing scientists closer than ever to t...
