Tag: medicine (page 3 of 6)

US Government Admits Americans Have Been Overdosed on Fluoride

By Dr. Mercola

The US government has finally admitted it has overdosed Americans on fluoride and, for the first time since 1962, is lowering the recommended level of fluoride in drinking water.[1,2,3] About 40 percent of American teens have dental fluorosis,[4] a condition referring to changes in the appearance of tooth enamel—from chalky-looking lines and splotches to dark staining and pitting—caused by long-term ingestion of fluoride while the teeth are forming. In some areas, fluoro [...]

View Article Here Read More

How Your Mind Affects Your Body

Excerpt from huffingtonpost.comWe are at last beginning to show that there is an intimate and dynamic relationship between what is going on with our feelings and thoughts and what happens in the body. A Time magazine special showed that happiness, h...

View Article Here Read More

Lab for genetic modification of human embryos just $2,000 away – report


Reuters / Christian Charisius




With the right expertise in molecular biology, one could set up a basic laboratory to modify human embryos with a genome-editing technique for just a couple of thousand dollars, according to a new report.

Genetic modification has received heightened scrutiny recently following last week’s announcement that Chinese researchers had, for the first time, successfully edited human embryos’ genomes. 
The team at Sun Yat-Sen University in Guangzhou, China, used CRISPR (clustered regularly interspaced short palindromic repeats), a technique that relies on “cellular machinery” bacteria use to defend against viruses. 

This machinery is copied and altered to create specific gene-editing complexes, which include the wonder enzyme Cas9. The enzyme works its way into the DNA and can be used to alter the molecule from the inside. The complex is attached to a guide RNA that takes it to its target, telling Cas9 where to operate. 
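The targeting step described above can be sketched as a toy string-matching model (purely illustrative, not laboratory software; the sequences below are hypothetical, and real guides are about 20 letters long, shortened here for brevity). The common Cas9 only cuts where the guide matches DNA sitting next to a short "NGG" signal called a PAM:

```python
import re

def find_cas9_targets(dna, guide):
    """Toy model of CRISPR-Cas9 targeting: report every position where the
    guide sequence matches the DNA immediately upstream of an 'NGG' PAM
    motif (the short signal Cas9 requires next to its cut site)."""
    hits = []
    for i in range(len(dna) - len(guide) - 2):
        if dna[i:i + len(guide)] == guide:
            pam = dna[i + len(guide):i + len(guide) + 3]
            if re.fullmatch("[ACGT]GG", pam):
                hits.append(i)
    return hits

# Hypothetical example: the guide matches at position 3, followed by "TGG".
dna = "TTACGATCGATTTGGAAC"
guide = "CGATCGATT"
print(find_cas9_targets(dna, guide))   # -> [3]
```

The PAM requirement is what keeps Cas9 from cutting the bacterium's own CRISPR archive, and it is also why guide design tools scan for off-target matches elsewhere in the genome.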

Use of the CRISPR technique is not necessarily limited to cash-flush university research operations, according to a report by Business Insider. 


Geneticist George Church, who runs a top CRISPR research program at the Harvard Medical School, said the technique could be employed with expert knowledge and about half of the money needed to pay for an average annual federal healthcare plan in 2014 -- not to mention access to human embryos. 

"You could conceivably set up a CRISPR lab for $2,000,” he said, according to Business Insider. 

Other top researchers have echoed this sentiment. 

"Any scientist with molecular biology skills and knowledge of how to work with [embryos] is going to be able to do this,” Jennifer Doudna, a biologist at the University of California, Berkeley, recently told MIT Technology Review. Doudna co-discovered how to edit genetic code using CRISPR in 2012. 

Last week, the Sun Yat-Sen University research team said it attempted to cure a gene defect that causes beta-thalassemia (a genetic blood disorder that can lead to severe anemia, poor growth, skeletal abnormalities and even death) by editing the germ line. For that purpose, they injected non-viable embryos with a gene-editing complex consisting of a protective DNA element obtained from bacteria and a specific protein. 

"I suspect this week will go down as a pivotal moment in the history of medicine," wrote science journalist Carl Zimmer for National Geographic.


Response to the new research has been mixed. Some experts say the gene editing could help defeat genetic diseases even before birth. Others expressed concern. 

“At present, the potential safety and efficacy issues arising from the use of this technology must be thoroughly investigated and understood before any attempts at human engineering are sanctioned, if ever, for clinical testing,” a group of scientists, including some who had worked to develop CRISPR, warned in Science magazine. 

Meanwhile, the director of the US National Institutes of Health (NIH) said the agency would not fund such editing of human embryo genes. 

“Research using genomic editing technologies can and are being funded by NIH,” Francis Collins said Wednesday. “However, NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes ... has been viewed almost universally as a line that should not be crossed.”

Although the discovery of CRISPR sequences dates back to 1987 – they were first identified as part of bacteria's defense against viruses – successes in higher animals and humans came only in 2012-13, when scientists first combined the system with the Cas9 enzyme. 


On April 17, MIT’s Broad Institute announced that it had been awarded the first-ever patent for working with the CRISPR-Cas9 system. 

The institute’s director, Eric Lander, sees the combination as “an extraordinary, powerful tool. The ability to edit a genome makes it possible to discover the biological mechanisms underlying human biology.”

The system’s advantage over other methods is that it can target several genes at the same time, working through tens of thousands of so-called 'guide' RNA sequences that lead the gene-editing machinery to its DNA targets. 

Meanwhile, last month in the UK, a healthy baby was born from an embryo screened for genetic diseases, using karyomapping, a breakthrough testing method that allows doctors to identify about 60 debilitating hereditary disorders.

View Article Here Read More

Did natural selection make the Dutch the tallest people on the planet?

Dutch national women's field hockey team



Excerpt from news.sciencemag.org
By Martin Enserink

AMSTERDAM—Insecure about your height? You may want to avoid this tiny country by the North Sea, whose population has gained an impressive 20 centimeters in the past 150 years and is now officially the tallest on the planet. Scientists chalk up most of that increase to rising wealth, a rich diet, and good health care, but a new study suggests something else is going on as well: The Dutch growth spurt may be an example of human evolution in action.
The study, published online today in the Proceedings of the Royal Society B, shows that tall Dutch men on average have more children than their shorter counterparts, and that more of their children survive. That suggests genes that help make people tall are becoming more frequent among the Dutch, says behavioral biologist and lead author Gert Stulp of the London School of Hygiene & Tropical Medicine.

"This study drives home the message that the human population is still subject to natural selection," says Stephen Stearns, an evolutionary biologist at Yale University who wasn't involved in the study. "It strikes at the core of our understanding of human nature, and how malleable it is." It also confirms what Stearns knows from personal experience about the population in the northern Netherlands, where the study took place: "Boy, they are tall."

For many years, the U.S. population was the tallest in the world. In the 18th century, American men were 5 to 8 centimeters taller than those in the Netherlands. Today, Americans are the fattest, but they lost the race for height to northern Europeans—including Danes, Norwegians, Swedes, and Estonians—sometime in the 20th century.

Just how these peoples became so tall isn't clear, however. Genetics has an important effect on body height: Scientists have found at least 180 genes that influence how tall you become. Each one has only a small effect, but together, they may explain up to 80% of the variation in height within a population. Yet environmental factors play a huge role as well. The children of Japanese immigrants to Hawaii, for instance, grew much taller than their parents. Scientists assume that a diet rich in milk and meat played a major role.
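The additive model behind that 80% figure can be sketched in a few lines (the effect sizes and genotypes below are made-up numbers standing in for the ~180 real variants; this is an illustration of the idea, not an actual polygenic score):

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical per-variant effect sizes in centimetres, one for each of
# ~180 height-associated variants; each effect is individually tiny.
effects = [random.uniform(0.0, 0.2) for _ in range(180)]

def genetic_height_score(genotype):
    """Additive polygenic score: genotype[i] counts copies (0, 1 or 2) of
    the 'tall' allele at variant i; each copy adds that variant's effect."""
    return sum(copies * effect for copies, effect in zip(genotype, effects))

tall_carrier = [2] * 180   # two tall alleles at every variant
average = [1] * 180        # one copy everywhere
print(genetic_height_score(tall_carrier) > genetic_height_score(average))  # True
```

The point the sketch makes is the one in the text: no single variant matters much, but summed over many loci the contributions add up to centimetres of difference.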

The Dutch have become so much taller in such a short period that scientists chalk most of it up to their changing environment. As the Netherlands developed, it became one of the world's largest producers and consumers of cheese and milk. An increasingly egalitarian distribution of wealth and universal access to health care may also have helped.

Still, scientists wonder whether natural selection has played a role as well. For men, being tall is associated with better health, attractiveness to the opposite sex, a better education, and higher income—all of which could lead to more reproductive success, Stulp says.
Yet studies in the United States don't show this. Stulp's own research among Wisconsinites born between 1937 and 1940, for instance, showed that average-sized men had more children than shorter and taller men, and shorter women had more children than those of average height. Taken together, Stulp says, this suggests natural selection in the United States pulls in the opposite direction of environmental factors like diet, making people shorter instead of taller. That may explain why the growth in average American height has leveled off.

Stulp—who says his towering 2-meter frame did not influence his research interest—wondered if the same was true in his native country. To find out, he and his colleagues turned to a database tracking key life data for almost 100,000 people in the country's three northern provinces. The researchers included only people over 45 who were born in the Netherlands to Dutch-born parents. This way, they had a relatively accurate number of total children per subject (most people stop having children after 45) and they also avoided the effects of immigration.

In the remaining sample of 42,616 people, taller men had more children on average, despite the fact that they had their first child at a higher age. The effect was small—an extra 0.24 children at most for taller men—but highly significant. (Taller men also had a smaller chance of remaining childless, and a higher chance of having a partner.)  The same effect wasn't seen in women, who had the highest reproductive success when they were of average height.  The study suggests this may be because taller women had a smaller chance of finding a mate, while shorter women were at higher risk of losing a child.

Because tall men are likely to pass on the genes that made them tall, the outcome suggests that—in contrast to Americans—the Dutch population is evolving to become taller, Stulp says. "This is not what we've seen in other studies—that's what makes it exciting," says evolutionary biologist Simon Verhulst of the University of Groningen in the Netherlands, who was Stulp's Ph.D. adviser but wasn't involved in the current study. Verhulst points out that the team can't be certain that genes involved in height are actually becoming more frequent, however, as the authors acknowledge.

The study suggests that sexual selection is at work in the Dutch population, Stearns says: Dutch women may prefer taller men because they expect them to have more resources to invest in their children. But there are also other possibilities. It could be that taller men are more resistant to disease, Stearns says, or that they are more likely to divorce and start a second family. "It will be a difficult question to answer.”

Another question is why tall men in Holland are at a reproductive advantage but those in the United States are not. Stulp says he can only speculate. One reason may be that humans often choose a partner who's not much shorter or taller than they are themselves. Because shorter women in the United States have more children, tall men may do worse than those of average height because they're less likely to partner with a short woman.

In the end, Stearns says, the advantage of tall Dutchmen may be only temporary. Often in evolution, natural selection will favor one trend for a number of generations, followed by a stabilization or even a return to the opposite trend. In the United States, selection for height appears to have occurred several centuries ago, leading to taller men, and then it stopped. "Perhaps the Dutch caught up and actually overshot the American men," he says.

View Article Here Read More

MRSA superbug killed by 1,100-year-old home remedy, researchers say


MRSA attacks a human cell. The bacteria shown is the strain MRSA 252, a leading cause of hospital-associated infections. (Rocky Mountain Laboratories, NIAID, NIH)


Excerpt from washingtonpost.com
By Justin Wm. Moyer 

Even in the age of AIDS, avian flu and Ebola, methicillin-resistant Staphylococcus aureus, better known as MRSA, is terrifying.

The superbug, which is resistant to conventional antibiotics because of their overuse, shrugs at even the deadliest weapons modern medicine offers. The Centers for Disease Control and Prevention estimated MRSA contributed to the deaths of more than 5,000 people in the United States in 2013. It even attacked the NFL, and some say it could eventually kill more people than cancer. And presidential commissions have advised that technological progress is the only way to fight MRSA.

But researchers in the United Kingdom now report that the superbug proved vulnerable to an ancient remedy. The ingredients? Just a bit of garlic, some onion or leek, copper, wine and oxgall — a florid name for cow’s bile.

This medicine sounds yucky, but it’s definitely better than the bug it may be able to kill.

“We were absolutely blown away by just how effective the combination of ingredients was,” Freya Harrison of the University of Nottingham, who worked on the research, told the BBC.

The oxgall remedy, billed as an eye salve, was found in a manuscript written in Old English from the 10th century called “Bald’s Leechbook” — a sort of pre-Magna Carta physician’s desk reference. Garlic and copper are commonly thought to have antibiotic or antimicrobial properties, but seeing such ingredients in a home remedy at Whole Foods is a far cry from researchers killing a superbug with it.

According to Christina Lee, an associate professor in Viking studies at Nottingham, the MRSA research was the product of conversations among academics of many stripes interested in infectious disease and how people fought it before antibiotics.

“We were talking about the specter of antibiotic resistance,” she told The Washington Post in a phone interview. The medical researchers involved in the discussions said to the medievalists: “In your period, you guys must have had something.”

Not every recipe in Bald’s Leechbook is a gem. Other advice, via a translation from the Eastern Anglo-Saxonist: “Against a woman’s chatter; taste at night fasting a root of radish, that day the chatter cannot harm thee.” And: “In case a man be a lunatic; take skin of a mereswine or porpoise, work it into a whip, swinge the man therewith, soon he will be well. Amen.”

Though the Leechbook may include misses, it may help doctors find a solution to a problem that only seems to be getting worse.

If the oxgall remedy proves effective against MRSA outside of the lab — which researchers caution it may not — it would be a godsend. Case studies of MRSA’s impact from the CDC’s charmingly named Morbidity and Mortality Weekly Report seem medieval.

“In July 1997, a 7-year-old black girl from urban Minnesota was admitted to a tertiary-care hospital with a temperature of 103 F.” Result: Death from pulmonary hemorrhage after five weeks of hospitalization.

“In January 1998, a 16-month-old American Indian girl from rural North Dakota was taken to a local hospital in shock and with a temperature of 105.2 F.” Result: After respiratory failure and cardiac arrest, death within two hours of hospital admission.

“In January 1999, a 13-year-old white girl from rural Minnesota was brought to a local hospital with fever, hemoptysis” — that’s coughing up blood — “and respiratory distress.” The result: Death from multiple organ failure after seven days in the hospital.

“We believe modern research into disease can benefit from past responses and knowledge, which is largely contained in non-scientific writings,” Lee told the Telegraph. “But the potential of these texts to contribute to addressing the challenges cannot be understood without the combined expertise of both the arts and science.”

Lee stressed that it was the combination of ingredients that proved effective against MRSA — which shows that people living in medieval times were not as barbaric as popularly thought. Even 1,000 years ago, when people got sick, other people tried to figure out how to help.

“We associate ‘medieval’ with dark, barbaric,” Lee said. “… It’s not. I’ve always believed in the pragmatic medieval ages.”
The research will be presented at the Annual Conference of the Society for General Microbiology in Birmingham. In an abstract for the conference, the team cautioned oxgall was no cure-all.

“Antibacterial activity of a substance in laboratory trials does not necessarily mean the historical remedy it was taken from actually worked in toto,” they wrote.

Lee said researchers hope to turn to other remedies in Bald’s Leechbook — including purported cures for headaches and ulcers — to see what other wisdom the ancients have to offer.

“At a time when you don’t have [a] microscope, medicine would have included things we find rather odd,” she said. “In 200 years, people will judge us.”

View Article Here Read More

What happens to your body when you give up sugar?





Excerpt from independent.co.uk
By Jordan Gaines Lewis


In neuroscience, food is something we call a “natural reward.” In order for us to survive as a species, things like eating, having sex and nurturing others must be pleasurable to the brain so that these behaviours are reinforced and repeated.
Evolution has resulted in the mesolimbic pathway, a brain system that deciphers these natural rewards for us. When we do something pleasurable, a bundle of neurons called the ventral tegmental area uses the neurotransmitter dopamine to signal to a part of the brain called the nucleus accumbens. The connection between the nucleus accumbens and our prefrontal cortex dictates our motor movement, such as deciding whether or not to take another bite of that delicious chocolate cake. The prefrontal cortex also activates hormones that tell our body: “Hey, this cake is really good. And I’m going to remember that for the future.”
Not all foods are equally rewarding, of course. Most of us prefer sweets over sour and bitter foods because, evolutionarily, our mesolimbic pathway reinforces that sweet things provide a healthy source of carbohydrates for our bodies. When our ancestors went scavenging for berries, for example, sour meant “not yet ripe,” while bitter meant “alert – poison!”
Fruit is one thing, but modern diets have taken on a life of their own. A decade ago, it was estimated that the average American consumed 22 teaspoons of added sugar per day, amounting to an extra 350 calories; it may well have risen since then. A few months ago, one expert suggested that the average Briton consumes 238 teaspoons of sugar each week.
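As a back-of-the-envelope check on those figures (the conversion factors are assumptions, not from the article: roughly 4.2 g of sugar per teaspoon and 4 kcal per gram of carbohydrate):

```python
# Assumed conversion factors -- typical nutrition-label values, not
# figures quoted in the article itself.
GRAMS_PER_TEASPOON = 4.2
KCAL_PER_GRAM = 4

def added_sugar_kcal_per_day(teaspoons_per_day):
    """Convert a daily teaspoon count of added sugar into kilocalories."""
    return teaspoons_per_day * GRAMS_PER_TEASPOON * KCAL_PER_GRAM

print(round(added_sugar_kcal_per_day(22)))       # 370 -- close to the quoted ~350
print(round(added_sugar_kcal_per_day(238 / 7)))  # 571 -- the UK weekly figure, per day
```

On these assumptions the UK estimate of 238 teaspoons a week works out to 34 teaspoons a day, noticeably higher than the older US figure.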
Today, with convenience more important than ever in our food selections, it’s almost impossible to come across processed and prepared foods that don’t have added sugars for flavour, preservation, or both.
These added sugars are sneaky – and unbeknown to many of us, we’ve become hooked. Just as drugs of abuse – such as nicotine, cocaine and heroin – hijack the brain’s reward pathway and make users dependent, a growing body of neurochemical and behavioural evidence suggests that sugar is addictive in the same way.

Sugar addiction is real

Anyone who knows me also knows that I have a huge sweet tooth. I always have. My friend and fellow graduate student Andrew is equally afflicted, and living in Hershey, Pennsylvania – the “Chocolate Capital of the World” – doesn’t help either of us. But Andrew is braver than I am. Last year, he gave up sweets for Lent. “The first few days are a little rough,” Andrew told me. “It almost feels like you’re detoxing from drugs. I found myself eating a lot of carbs to compensate for the lack of sugar.”
There are four major components of addiction: bingeing, withdrawal, craving, and cross-sensitisation (the notion that one addictive substance predisposes someone to becoming addicted to another). All of these components have been observed in animal models of addiction – for sugar, as well as drugs of abuse.
A typical experiment goes like this: rats are deprived of food for 12 hours each day, then given 12 hours of access to a sugary solution and regular chow. After a month of following this daily pattern, rats display behaviours similar to those on drugs of abuse. They’ll binge on the sugar solution in a short period of time, much more than their regular food. They also show signs of anxiety and depression during the food deprivation period. Many sugar-treated rats that are later exposed to drugs, such as cocaine and opiates, demonstrate more dependent behaviours towards the drugs than rats that did not consume sugar beforehand.
Like drugs, sugar spikes dopamine release in the nucleus accumbens. Over the long term, regular sugar consumption actually changes the gene expression and availability of dopamine receptors in both the midbrain and frontal cortex. Specifically, sugar increases the concentration of a type of excitatory receptor called D1, but decreases another receptor type called D2, which is inhibitory. Regular sugar consumption also inhibits the action of the dopamine transporter, a protein which pumps dopamine out of the synapse and back into the neuron after firing.
In short, this means that repeated access to sugar over time leads to prolonged dopamine signalling, greater excitation of the brain’s reward pathways and a need for even more sugar to activate all of the midbrain dopamine receptors like before. The brain becomes tolerant to sugar – and more is needed to attain the same “sugar high.”

Sugar withdrawal is also real

Although these studies were conducted in rodents, it’s not far-fetched to say that the same primitive processes are occurring in the human brain, too. “The cravings never stopped, [but that was] probably psychological,” Andrew told me. “But it got easier after the first week or so.”
In a 2002 study by Carlo Colantuoni and colleagues of Princeton University, rats who had undergone a typical sugar dependence protocol then underwent “sugar withdrawal.” This was facilitated by either food deprivation or treatment with naloxone, a drug used for treating opiate addiction which binds to receptors in the brain’s reward system. Both withdrawal methods led to physical problems, including teeth chattering, paw tremors, and head shaking. Naloxone treatment also appeared to make the rats more anxious, as they spent less time on an elevated apparatus that lacked walls on either side.
Similar withdrawal experiments by others also report behaviour similar to depression in tasks such as the forced swim test. Rats in sugar withdrawal are more likely to show passive behaviours (like floating) than active behaviours (like trying to escape) when placed in water, suggesting feelings of helplessness.
A new study published by Victor Mangabeira and colleagues in this month’s Physiology & Behavior reports that sugar withdrawal is also linked to impulsive behaviour. Initially, rats were trained to receive water by pushing a lever. After training, the animals returned to their home cages and had access to a sugar solution and water, or just water alone. After 30 days, when rats were again given the opportunity to press a lever for water, those who had become dependent on sugar pressed the lever significantly more times than control animals, suggesting impulsive behaviour.
These are extreme experiments, of course. We humans aren’t depriving ourselves of food for 12 hours and then allowing ourselves to binge on soda and doughnuts at the end of the day. But these rodent studies certainly give us insight into the neuro-chemical underpinnings of sugar dependence, withdrawal, and behaviour.
Through decades of diet programmes and best-selling books, we’ve toyed with the notion of “sugar addiction” for a long time. There are accounts of those in “sugar withdrawal” describing food cravings, which can trigger relapse and impulsive eating. There are also countless articles and books about the boundless energy and new-found happiness in those who have sworn off sugar for good. But despite the ubiquity of sugar in our diets, the notion of sugar addiction is still a rather taboo topic.
Are you still motivated to give up sugar? You might wonder how long it will take until you’re free of cravings and side-effects, but there’s no answer – everyone is different and no human studies have been done on this. But after 40 days, it’s clear that Andrew had overcome the worst, likely even reversing some of his altered dopamine signalling. “I remember eating my first sweet and thinking it was too sweet,” he said. “I had to rebuild my tolerance.”
And as regulars of a local bakery in Hershey – I can assure you, readers, that he has done just that.
Jordan Gaines Lewis is a Neuroscience Doctoral Candidate at Penn State College of Medicine

View Article Here Read More

Google’s AI Program Is Better At Video Games Than You





pcmag.com

IBM's Watson supercomputer may be saving lives and educating children, but Google's new AI program can master video games without human guidance.

The artificial intelligence system from London-based DeepMind, which Google acquired last year for a reported $400 million, represents a major step toward a future of smart machines.

Computers running the deep Q-network (DQN) algorithm were exposed to 49 retro games on the Atari 2600 and told to play them, without any direction from researchers. Using the same network architecture and tuning parameters, the machines were given only raw screen pixels, available actions, and game score as input.
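DQN itself is far more elaborate (a deep convolutional network reading raw pixels), but the reinforcement-learning idea underneath it can be sketched with a tiny tabular Q-learning update. This is an illustrative toy under that simplification, not DeepMind's code; the state and action names are hypothetical:

```python
# Tabular Q-learning toy. DQN replaces this lookup table with a deep
# neural network, but the update it trains on has the same shape.
ALPHA = 0.1   # learning rate
GAMMA = 0.99  # discount factor: how much future reward matters

Q = {}  # (state, action) -> estimated long-term score

def q_update(state, action, reward, next_state, actions):
    """Nudge the estimate for (state, action) toward the observed reward
    plus the best estimated value of the next state."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# One step: the game score -- the only feedback, as in DQN -- rewards "fire".
q_update("screen_0", "fire", 1.0, "screen_1", ["fire", "noop"])
print(Q[("screen_0", "fire")])   # 0.1 -- a small step toward the reward of 1.0
```

Repeating this update over millions of frames is what lets the same learning rule, with no per-game tuning, discover a different strategy for each of the 49 games.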

For each level passed or high score earned, the computer was automatically rewarded with a digital treat.

"Strikingly, DQN was able to work straight 'out of the box' across all these games," DeepMind's Dharshan Kumaran and Demis Hassabis wrote in a blog post. The executives cited classic titles like Breakout, River Raid, Boxing, and Enduro.

The AI crushed even the most expert humans at 29 games, sometimes composing what the creators called "surprisingly far-sighted strategies" that allowed maximum scoring possibilities. It also outperformed previous machine-learning methods in 43 of 49 instances.

Google DeepMind's findings were presented in a paper published in this week's Nature journal, which describes the key DQN features that allow it to learn.

"This work offers the first demonstration of a general purpose learning agent that can be trained end-to-end to handle a wide variety of challenging tasks," the researchers said. "This kind of technology should help us build more useful products."

Imagine asking the Google app to complete a complex task—planning a backpacking trip through Europe, for example.

Google's DeepMind also hopes its technology will give researchers new ways to make sense of large-scale data, opening the door to discoveries in fields like climate science, physics, medicine, and genomics.

"And it may even help scientists better understand the process by which humans learn," Kumaran and Hassabis said, citing physicist Richard Feynman, who famously said, "What I cannot create, I do not understand."


View Article Here Read More

Putting Lazy to Bed: Chronic fatigue syndrome is a physical disorder, not a psychological illness, panel says




Excerpt from washingtonpost.com

Chronic fatigue syndrome is a "serious, debilitating" condition with a cluster of clear physical symptoms — not a psychological illness — a panel of experts reported Tuesday as it called for more research into a disease that may affect as many as 2.5 million Americans.
"We just needed to put to rest, once and for all, the idea that this is just psychosomatic or that people were making this up, or that they were just lazy," said Ellen Wright Clayton, a professor of pediatrics and law at Vanderbilt University, who chaired the committee of the Institute of Medicine, the health arm of the National Academy of Sciences.
Although the cause of the disorder is still unknown, the panel established three critical symptoms for the condition (also known as myalgic encephalomyelitis):

  • A sharp reduction in the ability to engage in pre-illness activity levels that lasts for more than six months and is accompanied by deep fatigue that only recently developed.
  • Worsening of symptoms after any type of exertion, including "physical, cognitive or emotional stress."
  • Sleep that doesn't refresh the sufferer.
In addition, the committee said, true chronic fatigue syndrome also includes either cognitive impairment or the inability to remain upright with symptoms that improve when the person with the condition lies down, known as "orthostatic intolerance."
The panel acknowledged what people with chronic fatigue syndrome have long complained about: They struggle, sometimes for years, before finding a health-care provider who diagnoses a disorder that often devastates their lives. Sixty-seven percent to 77 percent reported in surveys that it took longer than a year to receive a diagnosis, and about 29 percent said it took longer than five years. The vast majority of people with the disorder remain undiagnosed, the panel said, estimating that between 836,000 and 2.5 million Americans have it.
"Seeking and receiving a diagnosis can be a frustrating process for several reasons, including skepticism of health care providers about the serious nature of [chronic fatigue syndrome] and the misconception that it is a psychogenic illness or even a figment of the patient’s imagination," the panel wrote.  Less than a third of medical schools include the condition in their curricula and only 40 percent of medical textbooks contain information on it, the experts said.
Christine Williams, who has the illness herself and is vice-chair of the board of directors for the advocacy group Solve ME/CFS Initiative, welcomed the IOM report.
“I have been sick for six-and-a-half-years, and this is definitely the most encouraging thing that I have seen,” she said. Williams praised the IOM for setting forth a set of clearly understandable diagnostic criteria, including the hallmark symptom “post-exertional malaise.”
Williams predicted that the IOM panel’s proposed new name for the illness -- "systemic exertion intolerance disease" -- would be widely debated by patients’ groups. But she added that the IOM “moved in the right direction by getting away from 'chronic fatigue syndrome',” which she said trivialized a serious disease.
Williams, who spent three decades working as a health policy expert in the federal government, said she hopes the report sparks additional research into new treatments for the illness.
The cause of chronic fatigue syndrome remains unknown, but symptoms may be triggered by an infection or "immunization, anesthetics, physical trauma, exposure to environmental pollutants, chemicals and heavy metals and, rarely, blood transfusions," the panel reported. Clayton said mononucleosis is "a major trigger" of chronic fatigue syndrome among adolescents, but little is known about causes beyond that.
Treatments can include drugs such as anti-depressants and sleeping pills; gentle exercise and psychological counseling; and lifestyle changes such as limiting stress, caffeine, nicotine and alcohol.
Clayton also emphasized that many people with chronic fatigue syndrome also have other medical problems, which can complicate diagnosis and treatment.
"Lots of adults have more than one thing going on," she said. "If they meet these criteria, they have this disorder. They can have something else as well, which is not uncommon in medicine."

View Article Here Read More

Did Michelangelo conceal a brain stem in his painting of the Separation of Light from Darkness?






Excerpt from livescience.com

Michelangelo's depiction of God's throat in one panel of his Sistine Chapel fresco is awkward, which is odd for an artist so devoted to the study of anatomy. Now researchers have a theory to explain why: Michelangelo embedded an image of a human brain stem in God’s throat, they find.
The Renaissance artist is known to have studied human anatomy by dissecting cadavers, beginning as a young man and continuing late into his 89 years. This practice informed his powerful depictions of the human and the divine. 

But one panel of his Sistine Chapel frescoes contains an oddly lit and awkward image of God's neck and head as seen from below. The light illuminating the neck differs from that in the rest of the painting. Also, God's beard is foreshortened and appears to roll up along the sides of his jaw, and his bulbous neck has prompted speculation that Michelangelo intended to portray God with a goiter, or abnormally enlarged thyroid gland. 

Two researchers – one a neurosurgeon, the other a medical illustrator – writing in the May issue of the journal Neurosurgery have another, more flattering theory. In this panel, which portrays the Separation of Light from Darkness, from the Book of Genesis, Michelangelo embedded a ventral view of the brainstem, they wrote. 

Using a digital analysis, they compared the shadows outlining the features of God’s neck and a photograph of a model of this section of the brain, which connects with the spinal cord, and found a close correspondence. 

This is not the first anatomical image found hidden in the frescoes of the Sistine Chapel. In an article published in 1990, Frank Lynn Meshberger, a gynecologist, identified an outline of the human brain in the Creation of Adam. Among other details, he noted that the shroud surrounding God had the shape of the cerebrum, or the upper part of the brain. A decade later, another researcher pointed out a kidney motif. 

"We speculated that having used the brain motif successfully in the Creation of Adam almost a year earlier, Michelangelo wanted to once again associate the figure of God with a brain motif in the iconographically critical Separation of Light from Darkness," wrote authors Ian Suk, a medical illustrator, and neurosurgeon Rafael Tamargo, both of the Johns Hopkins School of Medicine.

They do point out "the perils of overinterpreting a masterpiece," saying that not all art historians and other viewers will agree with their conclusions. Even so, they say their analysis, along with historical records, backs the interpretation.


View Article Here Read More

6 Supermaterials That Could Change Our World


Graphene

Excerpt from gizmodo.com

Graphene isn't the only game-changing material to come out of a lab. From aerogels nearly as light as air to metamaterials that manipulate light, here are six supermaterials that have the potential to transform the world of the future.

Self-healing Materials — Bioinspired Plastics

Self-healing plastic. Image credit: UIUC


The human body is very good at fixing itself. The built environment is not. Scott White at the University of Illinois at Urbana-Champaign has been engineering bioinspired plastics that can self-heal. Last year, White's lab created a new polymer that oozes to repair a visible hole. The polymer is embedded with a vascular system of liquids that, when the material breaks and the liquids combine, clot just like blood. While other materials have been able to heal microscopic cracks, this new one repaired a hole 4 millimeters wide with cracks radiating all around it. No big deal for human skin, but a pretty big deal for plastic.

Engineers have also been envisioning concrete, asphalt, and metal that can heal themselves. (Imagine a city with no more potholes!) The rub, of course, lies in making them cheap enough to actually use, which is why the first applications for self-healing materials are most likely to be in space or in remote areas on Earth. 

Thermoelectric Materials — Heat Scavengers

Power blocks with thermoelectric material used inside Alphabet Energy's generator. Image credit: Alphabet Energy


If you've ever had a laptop burn up in your lap or touched the hot hood of a car, then you've felt evidence of waste. Waste heat is the inevitable byproduct of running any device that uses power. One estimate puts waste heat at two-thirds of all energy used. But what if there were a way to capture all that wasted energy? The answer to that "what if" is thermoelectric materials, which make electricity from a temperature gradient.

Last year, California-based Alphabet Energy introduced a thermoelectric generator that plugs right into the exhaust pipe of an ordinary generator, turning waste heat back into useful electricity. Alphabet Energy's generator uses a relatively cheap and naturally occurring thermoelectric material called tetrahedrite. Alphabet Energy says tetrahedrite can reach 5 to 10 percent efficiency.
Back in the lab, scientists have also been tinkering with another promising and possibly even more efficient thermoelectric material called skutterudite, which is a type of mineral that contains cobalt. Thermoelectric materials have already had niche applications—like on spacecraft—but skutterudite could get cheap and efficient enough to be wrapped around the exhaust pipes of cars or fridges or any other power-hogging machine you can think of. [Nature, MIT Technology Review, New Scientist]

Perovskites — Cheap Solar Cells

Solar cells made of perovskites. Image credit: University of Oxford


The biggest hurdle in moving toward renewable energy is, as these things always are, money. Solar power is getting ever cheaper, but making a plant's worth of solar cells from crystalline silicon is still an expensive, energy-intensive process. There is an alternative material that has the solar world buzzing, though, and that's perovskites. 

Perovskites were first discovered over a century ago, but scientists are only just realizing their potential. In 2009, solar cells made from perovskites had a solar energy conversion efficiency of a measly 3.8 percent. By 2014, the number had leapt to 19.3 percent. That may not seem like much compared to traditional crystalline silicon cells, with efficiencies hovering around 20 percent, but there are two other crucial points to consider: 1) perovskites have made such leaps and bounds in efficiency in just a few years that scientists think they can get even better, and 2) perovskites are much, much cheaper. 

Perovskites are a class of materials defined by a particular crystalline structure. They can contain any number of elements, usually lead and tin for perovskites used in solar cells. These raw materials are cheap compared to crystalline silicon, and they can be sprayed onto glass rather than meticulously assembled in clean rooms. Oxford Photovoltaics is one of the leading companies trying to commercialize perovskites, which as wonderful as they have been in the lab, still do need to hold up in the real world. [WSJ, IEEE Spectrum, Chemical & Engineering News, Nature Materials]

Aerogels — Superlight and Strong

Image credit: NASA

Aerogels look like they should not be real. Although ghostly and ethereal, they can easily withstand the heat of a blowtorch and the weight of a car. The material is almost exactly what the name implies: a gel where the liquid has been replaced entirely by air. You can see why it's also been called "frozen smoke" or "blue smoke." The actual matrix of an aerogel can be made of any number of substances, including silica, metal oxides, and, yes, even graphene. The fact that an aerogel is mostly air makes it an excellent insulator (see: blowtorch). Its structure also makes it incredibly strong (see: car).

Aerogels do have one fatal flaw, though: brittleness, especially when made from silica. But NASA scientists have been experimenting with flexible aerogels made of polymers to use as insulators for spacecraft burning through the atmosphere. Mixing other compounds into even silica-based aerogels could make them more flexible. Add that to aerogel's lightness, strength, and insulating qualities, and that's one incredible material. [New Scientist, Gizmodo]

Metamaterials — Light Manipulators

If you've heard of metamaterials, you likely heard about them in a sentence that also mentioned "Harry Potter" and "invisibility cloak." And indeed, metamaterials, whose nanostructures are designed to scatter light in specific ways, could possibly one day be used to render objects invisible—though it still probably wouldn't be as magical as Harry Potter's invisibility cloak. 

What's more interesting about metamaterials is that they don't just redirect visible light. Depending on what a particular metamaterial is made of and how it is structured, it can also scatter microwaves, radio waves, or the little-known T-rays, which sit between microwaves and infrared light on the electromagnetic spectrum. Any piece of the electromagnetic spectrum could be manipulated by metamaterials. 

That could mean, for example, new T-ray scanners in medicine or security, or compact radio antennas made of metamaterials whose properties change on the fly. Metamaterials are at the promising but frustrating cusp where the theoretical possibilities are endless, but commercialization is still a long, hard road. [Nature, Discover Magazine]

Stanene — 100 percent efficient conductor

The molecular structure of stanene. Image credit: SLAC


Like the much better known graphene, stanene is also made of a single layer of atoms. But instead of carbon, stanene is made of tin, and this makes all the difference in allowing stanene to possibly do what even wondermaterial extraordinaire graphene cannot: conduct electricity with 100 percent efficiency.

Stanene was first theorized in 2013 by Stanford professor Shoucheng Zhang, whose lab specializes in, among other things, predicting the electronic properties of materials like stanene. According to their models, stanene is a topological insulator, which means its edges are a conductor and its inside is an insulator. (Think of a chocolate-covered ice cream bar. Chocolate conductor, ice cream insulator.) 

This means stanene could conduct electricity with zero resistance even, crucially, at room temperature. Stanene's properties have yet to be tested experimentally—making a single-atom-thick sheet of tin is no easy task—but several of Zhang's predictions about other topological insulators have proven correct.

If the predictions about stanene bear out, it could revolutionize the microchips inside all your devices. Namely, the chips could get a lot more powerful. Silicon chips are limited by the heat created by electrons zipping around—work 'em too fast and they'll simply get too hot. Stanene, which would conduct electricity with 100 percent efficiency, would have no such problem. [SLAC, Physical Review Letters, Scientific American]

View Article Here Read More

Banned TED Talk: The Science Delusion ~ Is science way off about the nature of our reality?



The following statement has been posted by Tedstaff at blog.ted.com: "After due diligence, including a survey of published scientific research and recommendations from our Science Board and our community, we have decided that Graham Hancock’s and Rupert Sheldrake’s talks from TEDxWhitechapel should be removed from distribution on the TEDx YouTube channel... All talks on the TEDxTalks channel represent the opinion of the speaker, not of TED or TEDx, but we feel a responsibility not to provide a platform for talks which appear to have crossed the line into pseudoscience.

Response to the TED Scientific Board’s Statement
Rupert Sheldrake
March 18, 2013

I would like to respond to TED’s claims that my TEDx talk “crossed the line into pseudoscience”, contains ”serious factual errors” and makes “many misleading statements.”
This discussion is taking place because the militant atheist bloggers Jerry Coyne and P.Z. Myers denounced me, and attacked TED for giving my talk a platform. I was invited to give my talk as part of a TEDx event in Whitechapel, London, called “Challenging Existing Paradigms.” That’s where the problem lies: my talk explicitly challenges the materialist belief system. It summarized some of the main themes of my recent book Science Set Free (in the UK called The Science Delusion). Unfortunately, the TED administrators have publicly aligned themselves with the old paradigm of materialism, which has dominated science since the late nineteenth century.
TED say they removed my talk from their website on the advice of their Scientific Board, who also condemned Graham Hancock’s talk. Hancock and I are now facing anonymous accusations made by a body on whose authority TED relies, on whose advice they act, and behind whom they shelter, but whose names they have not revealed.
TED’s anonymous Scientific Board made three specific accusations:
Accusation 1:“he suggests that scientists reject the notion that animals have consciousness, despite the fact that it’s generally accepted that animals have some form of consciousness, and there’s much research and literature exploring the idea.”
I characterized the materialist dogma as follows: “Matter is unconscious: the whole universe is made up of unconscious matter. There’s no consciousness in stars in galaxies, in planets, in animals, in plants and there ought not to be any in us either, if this theory’s true. So a lot of the philosophy of mind over the last 100 years has been trying to prove that we are not really conscious at all.” Certainly some biologists, including myself, accept that animals are conscious. In August, 2012, a group of scientists came out with an endorsement of animal consciousness in “The Cambridge Declaration on Consciousness”. As Discovery News reported, “While it might not sound like much for scientists to declare that many nonhuman animals possess conscious states, it’s the open acknowledgement that’s the big news here.” (http://news.discovery.com/human/genetics/animals-consciousness-mammals-birds-octopus-120824.htm)
But materialist philosophers and scientists are still in the majority, and they argue that consciousness does nothing – it is either an illusion or an “epiphenomenon” of brain activity. It might as well not exist in animals – or even in humans. That is why in the philosophy of mind, the very existence of consciousness is often called “the hard problem” (http://en.wikipedia.org/wiki/Hard_problem_of_consciousness).
Accusation 2:“He also argues that scientists have ignored variations in the measurements of natural constants, using as his primary example the dogmatic assumption that a constant must be constant and uses the speed of light as example.… Physicist Sean Carroll wrote a careful rebuttal of this point.”
TED’s Scientific Board refers to a Scientific American article that makes my point very clearly: “Physicists routinely assume that quantities such as the speed of light are constant.”
In my talk I said that the published values of the speed of light dropped by about 20 km/sec between 1928 and 1945. Carroll’s “careful rebuttal” consisted of a table copied from Wikipedia showing the speed of light at different dates, with a gap between 1926 and 1950, omitting the very period I referred to. His other reference (http://micro.magnet.fsu.edu/primer/lightandcolor/speedoflight.html) does indeed give two values for the speed of light in this period, in 1928 and 1932-35, and sure enough, they were 20 and 24 km/sec lower than the previous value, and 14 and 18 km/sec lower than the value from 1947 onwards.
1926: 299,798 km/s
1928: 299,778 km/s
1932-5: 299,774 km/s
1947: 299,792 km/s
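The arithmetic behind these comparisons can be checked directly from the values quoted above (a quick sketch in Python; the figures are the km/s values listed, nothing more is assumed):

```python
# Published speed-of-light values quoted above, in km/s
values = {"1926": 299_798, "1928": 299_778, "1932-5": 299_774, "1947": 299_792}

# Drops relative to the previous (1926) value
print(values["1926"] - values["1928"])    # 20
print(values["1926"] - values["1932-5"])  # 24

# Gaps relative to the value adopted from 1947 onwards
print(values["1947"] - values["1928"])    # 14
print(values["1947"] - values["1932-5"])  # 18
```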

In my talk I suggest how a re-examination of existing data could resolve whether large continuing variations in the Universal Gravitational Constant, G, are merely errors, as usually assumed, or whether they show correlations between different labs that might have important scientific implications hitherto ignored. Jerry Coyne and TED’s Scientific Board regard this as an exercise in pseudoscience. I think their attitude reveals a remarkable lack of curiosity.
Accusation 3:“Sheldrake claims to have “evidence” of morphic resonance in crystal formation and rat behavior. The research has never appeared in a peer-reviewed journal, despite attempts by other scientists eager to replicate the work.”
I said, “There is in fact good evidence that new compounds get easier to crystallize all around the world.” For example, turanose, a kind of sugar, was considered to be a liquid for decades, until it first crystallized in the 1920s. Thereafter it formed crystals everywhere (Woodard and McCrone, Journal of Applied Crystallography (1975), 8, 342). The American chemist C. P. Saylor remarked that it was as though “the seeds of crystallization, as dust, were carried upon the winds from end to end of the earth” (quoted by Woodard and McCrone).
The research on rat behavior I referred to was carried out at Harvard and the Universities of Melbourne and Edinburgh and was published in peer-reviewed journals, including the British Journal of Psychology and the Journal of Experimental Biology. For a fuller account and detailed references see Chapter 11 of my book Morphic Resonance (in the US) / A New Science of Life (in the UK). The relevant passage is online here: http://sciencesetfree.tumblr.com/
The TED Scientific Board refers to ”attempts by other scientists eager to replicate the work” on morphic resonance. I would be happy to work with these eager scientists if the Scientific Board can reveal who they are.
This is a good opportunity to correct an oversimplification in my talk. In relation to the dogma that mechanistic medicine is the only kind that really works, I said, “that’s why governments only fund mechanistic medicine and ignore complementary and alternative therapies.” This is true of most governments, but the US is a notable exception. The US National Center for Complementary and Alternative Medicine receives about $130 million a year, about 0.4% of the National Institutes of Health (NIH) total annual budget of $31 billion.
Obviously I could not spell out all the details of my arguments in an 18-minute talk, but TED’s claims that it contains “serious factual errors,” “many misleading statements” and that it crosses the line into “pseudoscience” are defamatory and false.

Click to zoom

View Article Here Read More

Move Over Predator Alien: The human eye can see ‘invisible’ infrared light too


The eye can detect light at wavelengths in the visual spectrum. Other wavelengths, such as infrared and ultraviolet, are supposed to be invisible to the human eye, but Washington University scientists have found that under certain conditions, it’s possible for us to see otherwise invisible infrared light. Image: Sara Dickherber

Excerpt from
news.wustl.edu
By Jim Dryden

Any science textbook will tell you we can’t see infrared light. Like X-rays and radio waves, infrared light waves are outside the visual spectrum. 

But an international team of researchers co-led by scientists at Washington University School of Medicine in St. Louis has found that under certain conditions, the retina can sense infrared light after all. 

Using cells from the retinas of mice and people, and powerful lasers that emit pulses of infrared light, the researchers found that when laser light pulses rapidly, light-sensing cells in the retina sometimes get a double hit of infrared energy. When that happens, the eye is able to detect light that falls outside the visible spectrum.

The findings are published Dec. 1 in the Proceedings of the National Academy of Sciences (PNAS) Online Early Edition. The research was initiated after scientists on the research team reported seeing occasional flashes of green light while working with an infrared laser. Unlike the laser pointers used in lecture halls or as toys, the powerful infrared laser the scientists worked with emits light waves thought to be invisible to the human eye.

“They were able to see the laser light, which was outside of the normal visible range, and we really wanted to figure out how they were able to sense light that was supposed to be invisible,” said Frans Vinberg, PhD, one of the study’s lead authors and a postdoctoral research associate in the Department of Ophthalmology and Visual Sciences at Washington University. 

Vinberg, Vladimir Kefalov, and their colleagues examined the scientific literature and revisited reports of people seeing infrared light. They repeated previous experiments in which infrared light had been seen, and they analyzed such light from several lasers to see what they could learn about how and why it sometimes is visible.

“We experimented with laser pulses of different durations that delivered the same total number of photons, and we found that the shorter the pulse, the more likely it was a person could see it,” Vinberg explained. “Although the length of time between pulses was so short that it couldn’t be noticed by the naked eye, the existence of those pulses was very important in allowing people to see this invisible light.”



Image credit: Robert Boston

Kefalov’s team developed this adapter that allowed scientists to analyze retinal cells and photopigment molecules as they were exposed to infrared light. The device already is commercially available and in use at several vision research centers around the world.
“The visible spectrum includes waves of light that are 400-720 nanometers long,” explained Kefalov, an associate professor of ophthalmology and visual sciences. “But if a pigment molecule in the retina is hit in rapid succession by a pair of photons that are 1,000 nanometers long, those light particles will deliver the same amount of energy as a single hit from a 500-nanometer photon, which is well within the visible spectrum. That’s how we are able to see it.”
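Kefalov's two-photon point follows from the photon-energy relation E = hc/λ: doubling the wavelength halves the energy per photon, so two 1,000-nanometer photons together carry the same energy as one 500-nanometer photon. A minimal sketch of that arithmetic (constants rounded; the function name is illustrative, not from the study):

```python
H = 6.62607e-34  # Planck constant, J*s
C = 2.99792e8    # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon at the given wavelength (E = h*c/lambda)."""
    return H * C / (wavelength_nm * 1e-9)

two_infrared = 2 * photon_energy_joules(1000)  # a pair of 1,000 nm hits
one_visible = photon_energy_joules(500)        # one 500 nm (visible) hit
print(abs(two_infrared - one_visible) < 1e-25)  # True: same delivered energy
```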

Image credit: Robert Boston

Frans Vinberg, PhD (left), and Vladimir J. Kefalov, PhD, sit in front of a tool they developed that allows them to detect light responses from retinal cells and photopigment molecules.

View Article Here Read More

Amazon, Google, IBM & Microsoft Want to Store Your Genome


Excerpt from  technologyreview.com


By Antonio Regalado

 For $25 a year, Google will keep a copy of any genome in the cloud.

Google is approaching hospitals and universities with a new pitch. Have genomes? Store them with us.

The search giant’s first product for the DNA age is Google Genomics, a cloud computing service that it launched last March but went mostly unnoticed amid a barrage of high-profile R&D announcements from Google...

Google Genomics could prove more significant than any of these moonshots. Connecting and comparing genomes by the thousands, and soon by the millions, is what’s going to propel medical discoveries for the next decade. The question of who will store the data is already a point of growing competition between Amazon, Google, IBM, and Microsoft.

Google began work on Google Genomics 18 months ago, meeting with scientists and building an interface, or API, that lets them move DNA data into its server farms and do experiments there using the same database technology that indexes the Web and tracks billions of Internet users.

This flow of data is smaller than what is routinely handled by large Internet companies (over two months, the Broad Institute will produce the equivalent of what gets uploaded to YouTube in one day) but it exceeds anything biologists have dealt with. That’s now prompting a wide effort to store and access data at central locations, often commercial ones. The National Cancer Institute said last month that it would pay $19 million to move copies of the 2.6 petabyte Cancer Genome Atlas into the cloud. Copies of the data, from several thousand cancer patients, will reside both at Google Genomics and in Amazon’s data centers.

The idea is to create “cancer genome clouds” where scientists can share information and quickly run virtual experiments as easily as a Web search, says Sheila Reynolds, a research scientist at the Institute for Systems Biology in Seattle. “Not everyone has the ability to download a petabyte of data, or has the computing power to work on it,” she says.

Also speeding the move of DNA data to the cloud has been a yearlong price war between Google and Amazon. Google says it now charges about $25 a year to store a genome, and more to do computations on it. Scientific raw data representing a single person’s genome is about 100 gigabytes in size, although a polished version of a person’s genetic code is far smaller, less than a gigabyte. That would cost only about 25 cents a year.
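The pricing works out as simple per-gigabyte arithmetic (a back-of-envelope sketch using the figures quoted above; the rate is an approximation derived from them, not Google's published price list):

```python
# ~$25/year buys storage for a ~100 GB raw genome
dollars_per_gb_year = 25 / 100   # ~$0.25 per GB per year

raw_genome_gb = 100      # raw sequencing data for one person
polished_genome_gb = 1   # a finished genetic code is under a gigabyte

print(raw_genome_gb * dollars_per_gb_year)       # 25.0 dollars/year
print(polished_genome_gb * dollars_per_gb_year)  # 0.25 dollars/year
```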


The bigger point, he says, is that medicine will soon rely on a kind of global Internet-of-DNA which doctors will be able to search. “Our bird’s eye view is that if I were to get lung cancer in the future, doctors are going to sequence my genome and my tumor’s genome, and then query them against a database of 50 million other genomes,” he says. “The result will be ‘Hey, here’s the drug that will work best for you.’ ”


At Google, Glazer says he began working on Google Genomics as it became clear that biology was going to move from “artisanal to factory-scale data production.” He started by teaching himself genetics, taking an online class, Introduction to Biology, taught by Broad’s chief, Eric Lander. He also got his genome sequenced and put it on Google’s cloud.

Glazer wouldn’t say how large Google Genomics is or how many customers it has now, but at least 3,500 genomes from public projects are already stored on Google’s servers. He also says there’s no link, as of yet, between Google’s cloud and its more speculative efforts in health care, like the company Google started this year, called Calico, to investigate how to extend human lifespans. “What connects them is just a growing realization that technology can advance the state of the art in life sciences,” says Glazer.

Datta says some Stanford scientists have started using a Google database system, BigQuery, that Glazer’s team made compatible with genome data. It was developed to analyze large databases of spam, web documents, or of consumer purchases. But it can also quickly perform the very large experiments comparing thousands, or tens of thousands, of people’s genomes that researchers want to try. “Sometimes they want to do crazy things, and you need scale to do that,” says Datta. “It can handle the scale genetics can bring, so it’s the right technology for a new problem.”

View Article Here Read More

This work is licensed under a Creative Commons Attribution 4.0 International License, unless otherwise marked.



