
New research shows Universe expansion pace isn’t as fast as assumed earlier





Excerpt from thewestsidestory.net

The Universe is expanding, and any student of astronomy will vouch for this fact. However, according to a team of astronomers, the acceleration of the universe may not be as quick as was assumed earlier.

A team of astronomers has discovered that certain types of supernovae are more varied than previously thought, and in the process has reopened one of the biggest mysteries of the universe: how fast has the universe been expanding since the big bang?

Peter A. Milne of the University of Arizona said, “We found that the differences are not random, but lead to separating Ia supernovae into two groups, where the group that is in the minority near us are in the majority at large distances — and thus when the universe was younger, there are different populations out there, and they have not been recognized. The big assumption has been that as you go from near to far, type Ia supernovae are the same. That doesn’t appear to be the case.”
The discovery throws new light on the currently accepted view that the universe is expanding at a faster and faster rate, pulled apart by an unknown force called dark energy, an observation that resulted in the 2011 Nobel Prize in Physics.
Milne said, “The idea behind this reasoning, is that type Ia supernovae happen to be the same brightness — they all end up pretty similar when they explode. Once people knew why, they started using them as mileposts for the far side of the universe. The faraway supernovae should be like the ones nearby because they look like them, but because they’re fainter than expected, it led people to conclude they’re farther away than expected, and this in turn has led to the conclusion that the universe is expanding faster than it did in the past.”
The researchers suggest that part of the reported acceleration can be explained by color differences between the two groups of supernovae, leaving less acceleration than previously assumed and, in turn, requiring less dark energy.

Milne said, “We’re proposing that our data suggest there might be less dark energy than textbook knowledge, but we can’t put a number on it. Until our paper, the two populations of supernovae were treated as the same population. To get that final answer, you need to do all that work again, separately for the red and for the blue population.”

Type Ia supernovae are considered a benchmark for faraway sources of light, but they have a degree of variability that has limited our knowledge of the size of the universe.
Judging the distance of objects with the aid of our binocular vision, the best space-based telescopes and the most sophisticated techniques works out only to a range of ten or twenty thousand light-years.
However, compared to the vastness of space, this is just peanuts.
For distances greater than that, it is imperative to compare the absolute and observed brightness of well-understood objects and to use the difference to determine the object’s distance.
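The brightness comparison described above is the standard distance-modulus relation, m − M = 5·log10(d / 10 pc). A minimal sketch in Python; the peak magnitudes below are illustrative values, not figures from the article:

```python
import math

# Distance from the difference between absolute magnitude M (intrinsic
# brightness) and apparent magnitude m (observed brightness), using the
# standard distance-modulus relation: m - M = 5*log10(d / 10 pc).
def distance_parsecs(apparent_m, absolute_M):
    return 10 ** ((apparent_m - absolute_M) / 5 + 1)

# Illustrative numbers: Type Ia supernovae peak near M ~ -19.3.
# An observed peak magnitude of 15.7 would then imply a distance of
# about 100 million parsecs (~326 million light-years).
d = distance_parsecs(15.7, -19.3)
print(f"{d:.3e} pc")
```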

In astronomy it is difficult to find an object of known brightness, since there are examples of both bright and dim stars and galaxies. However, there is one event whose absolute brightness can be worked out. A supernova is the final stage of a dying star, and it explodes with such violence that the flash can be seen across the vast universe.

Type Ia supernovae occur in binary star systems, when a white dwarf siphons mass from its companion star. This reproducible mechanism yields a well-determined brightness, and scientists therefore call Type Ia supernovae ‘standard candles’.

Astronomers found Type Ia supernovae so uniform that they have been designated cosmic beacons and used to assess the depths of the universe. It is now revealed that they fall into distinct populations and are not as uniform as previously thought.


Huge Alien Planet Bathes in the Light of Four Suns



30 Ari with its newly discovered companion stars
Karen Teramura

Excerpt from nbcnews.com


Astronomers have spotted a fourth star in a planetary system called 30 Ari, bringing the number of known planet-harboring quadruple-sun systems to two. 

"Star systems come in myriad forms. There can be single stars, binary stars, triple stars, even quintuple star systems," study lead author Lewis Roberts, of NASA's Jet Propulsion Laboratory, said in a statement. "It's amazing the way nature puts these things together." 

30 Ari lies 136 light-years from the sun in the constellation Aries. Astronomers discovered a giant planet in the system in 2009; the world is about 10 times more massive than Jupiter and orbits its primary star every 335 days. There's also a pair of stars that lie approximately 1,670 astronomical units away. (One AU is the distance between Earth and the sun — about 93 million miles, or 150 million kilometers).

The newfound star circles its companion once every 80 years, at a distance of just 22 AU, but it does not appear to affect the exoplanet's orbit despite such proximity. This is a surprising result that will require further observations to understand, researchers said. 

To a hypothetical observer cruising through the giant planet's atmosphere, the sky would appear to host one small sun and two bright stars visible in daylight. With a large enough telescope, one of the bright stars could be resolved into a binary pair. 

The discovery marks just the second time a planet has been identified in a four-star system. The first four-star planet, PH1b or Kepler-64b, was spotted in 2012 by citizen scientists using publicly available data from NASA's Kepler mission. 

Planets with multiple suns have become less of a novelty in recent years, as astronomers have found a number of real worlds that resemble Tatooine, Luke Skywalker's home planet in the Star Wars films. 

The research was published online this month in the Astronomical Journal.


NASA’s Plan to Give the Moon a Moon






Excerpt from wired.com

It sounds almost like a late ’90s sci-fi flick: NASA sends a spacecraft to an asteroid, plucks a boulder off its surface with a robotic claw, and brings it back in orbit around the moon. Then, brave astronaut heroes go and study the space rock up close—and bring samples back to Earth.
Except it’s not a movie: That’s the real-life idea for the Asteroid Redirect Mission, which NASA announced today. Other than simply being an awesome space version of the claw arcade game (you know you really wanted that stuffed Pikachu), the mission will let NASA test technology and practice techniques needed for going to Mars.
The mission, which will cost up to $1.25 billion, is slated to launch in December 2020. It will take about two years to reach the asteroid (the most likely candidate is a quarter-mile-wide rock called 2008 EV5). The spacecraft will spend up to 400 days there, looking for a good boulder. After picking one—maybe around 13 feet in diameter—it will bring the rock over to the moon. In 2025, astronauts will fly NASA’s still-to-be-built Orion to dock with the asteroid-carrying spacecraft and study the rock up close.
Although the mission would certainly give scientists an up-close opportunity to look at an asteroid, its main purpose is as a testing ground for a Mars mission. The spacecraft will test a solar electric propulsion system, which uses the power from solar panels to pump out charged particles to provide thrust. It’s slower than conventional rockets, but a lot more efficient. You can’t lug a lot of rocket fuel to Mars.
Overall, the mission gives NASA a chance at practicing precise navigation and maneuvering techniques that they’ll need to master for a Mars mission. Such a trip will also require a lot more cargo, so grabbing and maneuvering a big space rock is good practice. Entering lunar orbit and docking with another spacecraft would also be helpful, as the orbit might be a place for a deep-space habitat, a rendezvous point for astronauts to pick up cargo or stop on their way to Mars.
And—you knew this part was coming, Armageddon fans—the mission might teach NASA something about preventing an asteroid from striking Earth. After grabbing the boulder, the spacecraft will orbit the asteroid. With the added heft from the rock, the spacecraft’s extra gravity would nudge the asteroid, creating a slight change in trajectory that NASA could measure from Earth. “We’re not talking about a large deflection here,” says Robert Lightfoot, an associate administrator at NASA. But the idea is that a similar technique could push a threatening asteroid off a collision course with Earth.
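The nudge described above is the "gravity tractor" idea: the spacecraft's own mass tugs on the asteroid. A back-of-the-envelope sketch with purely hypothetical numbers (the article gives no masses or distances) shows why the deflection is so slight:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Acceleration the spacecraft (plus captured boulder) imparts on the
# asteroid via gravity alone: a = G * m / r^2.
def tractor_accel(mass_kg, distance_m):
    return G * mass_kg / distance_m ** 2

# Hypothetical values: a 20-tonne spacecraft-plus-boulder hovering
# 100 m from the asteroid's center of mass, held there for one year.
a = tractor_accel(20_000, 100)
delta_v = a * 365.25 * 24 * 3600  # accumulated velocity change, m/s
print(a, delta_v)  # a few millimeters per second after a full year
```

Even a millimeter-per-second change, applied years ahead of a predicted impact, can shift an arrival point by thousands of kilometers, which is why such a small deflection is still worth measuring.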
NASA chose this mission concept over one that would’ve bagged an entire asteroid. In that plan, the spacecraft would’ve captured the space rock by enclosing it in a giant, flexible container. The claw concept won out because its rendezvous and soft-landing on the asteroid will allow NASA to test and practice more capabilities in preparation for a Mars mission, Lightfoot says. The claw would’ve also given more chances at grabbing a space rock, whereas it was all or nothing with the bag idea. “It’s a one-shot deal,” he says. “It is what it is when we get there.” But the claw concept offers some choices. “I’ve got three to five opportunities to pull one of the boulders off,” he says. Not bad odds. Better than winning that Pikachu.


Top Secret Government Programs That You’re Not Supposed To Know About

Originally Posted at in5d.com The following is the alleged result of the actions of one or more scientists creating a covert, unauthorized notebook documenting their involvement with an Above Top Secret government program. Government publications and information obtained by the use of public tax monies cannot be subject to copyright. This document is released into the public domain for all citizens of the United States of America. THE ‘MAJIC PROJECTS’ SIGMA is the project whic [...]


Does the Past Exist Yet? Evidence Suggests Your Past Isn’t Set in Stone





Excerpt from robertlanza.com
By Robert Lanza 

Recent discoveries require us to rethink our understanding of history. “The histories of the universe,” said renowned physicist Stephen Hawking, “depend on what is being measured, contrary to the usual idea that the universe has an objective observer-independent history.”

Is it possible we live and die in a world of illusions? Physics tells us that objects exist in a suspended state until observed, when they collapse into just one outcome. Paradoxically, whether events happened in the past may not be determined until sometime in your future – and may even depend on actions that you haven’t taken yet.

In 2002, scientists carried out an amazing experiment, which showed that particles of light (photons) knew — in advance — what their distant twins would do in the future. They tested the communication between pairs of photons over whether each would end up as a wave or a particle. Researchers stretched the distance one of the photons had to take to reach its detector, so that the other photon would hit its own detector first. The photons taking the shorter path had already finished their journeys — they either collapsed into a particle or didn’t before their twin encountered a scrambling device.
Somehow, the particles acted on this information before it happened, and across distances instantaneously as if there was no space or time between them. They decided not to become particles before their twin ever encountered the scrambler. It doesn’t matter how we set up the experiment. Our mind and its knowledge is the only thing that determines how they behave. Experiments consistently confirm these observer-dependent effects.

More recently (Science 315, 966, 2007), scientists in France shot photons into an apparatus, and showed that what they did could retroactively change something that had already happened. As the photons passed a fork in the apparatus, they had to decide whether to behave like particles or waves when they hit a beam splitter. 
Later on – well after the photons passed the fork – the experimenter could randomly switch a second beam splitter on and off. It turns out that what the observer decided at that point, determined what the particle actually did at the fork in the past. At that moment, the experimenter chose his history.

Of course, we live in the same world. Particles have a range of possible states, and it’s not until observed that they take on properties. So until the present is determined, how can there be a past? According to visionary physicist John Wheeler (who coined the word “black hole”), “The quantum principle shows that there is a sense in which what an observer will do in the future defines what happens in the past.” Part of the past is locked in when you observe things and the “probability waves collapse.” But there’s still uncertainty, for instance, as to what’s underneath your feet. If you dig a hole, there’s a probability you’ll find a boulder. If you do hit a boulder, the glacial movements of the past that account for the rock being in exactly that spot will change, as described in the Science experiment.

But what about dinosaur fossils? Fossils are really no different than anything else in nature. For instance, the carbon atoms in your body are “fossils” created in the heart of exploding supernova stars. 
Bottom line: reality begins and ends with the observer. “We are participators,” Wheeler said, “in bringing about something of the universe in the distant past.” Before his death, he stated that when observing light from a quasar, we set up a quantum observation on an enormously large scale. It means, he said, that the measurements made on the light now determine the path it took billions of years ago.

Like the light from Wheeler’s quasar, historical events such as who killed JFK, might also depend on events that haven’t occurred yet. There’s enough uncertainty that it could be one person in one set of circumstances, or another person in another. Although JFK was assassinated, you only possess fragments of information about the event. But as you investigate, you collapse more and more reality. According to biocentrism, space and time are relative to the individual observer – we each carry them around like turtles with shells.

History is a biological phenomenon — it’s the logic of what you, the animal observer experiences. You have multiple possible futures, each with a different history like in the Science experiment. Consider the JFK example: say two gunmen shot at JFK, and there was an equal chance one or the other killed him. This would be a situation much like the famous Schrödinger’s cat experiment, in which the cat is both alive and dead — both possibilities exist until you open the box and investigate.

“We must re-think all that we have ever learned about the past, human evolution and the nature of reality, if we are ever to find our true place in the cosmos,” says Constance Hilliard, a historian of science at UNT. Choices you haven’t made yet might determine which of your childhood friends are still alive, or whether your dog got hit by a car yesterday. In fact, you might even collapse realities that determine whether Noah’s Ark sank. “The universe,” said J.B.S. Haldane, “is not only queerer than we suppose, but queerer than we can suppose.”


New research shows billions of habitable planets exist in our galaxy



CGI of how the Milky Way galaxy may appear from deep space


Excerpt from thespacereporter.com


Analysis of data collected by NASA’s Kepler space telescope has led researchers at the Australian National University and the Niels Bohr Institute to conclude that Earth is only one of billions of potentially life-sustaining planets in our galaxy.

In order for a planet to sustain life, it must orbit its star at just the right distance to provide sufficient light and warmth to maintain liquid water without too much radiation. This perfect orbital distance is considered to be the habitable zone.

According to a Weather Channel report, there are an average of two planets per star in the Milky Way Galaxy orbiting within their habitable zones. That brings the total number of planets with the potential for holding liquid water to 100 billion.

Scientists assume that water would be an essential ingredient for life to evolve on other planets, but it is not a certainty.

“If you have liquid water, then you should have better conditions for life, we think,” said Steffen Jacobsen of Niels Bohr. “Of course, we don’t know this yet. We can’t say for certain.”

To reach their conclusion, the researchers studied 151 planetary systems and focused on those with four or more planets. They used a concept called the Titius-Bode law to calculate where unseen planets might be located in a system, based on the placements of the other planets around the star. The Titius-Bode law suggested the existence of Uranus before it was actually seen.
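The classic textbook form of the Titius-Bode relation can be sketched in a few lines (the researchers used a generalized version fitted to each system; this is only the historical formula for our solar system):

```python
# Titius-Bode law: semi-major axis a_n = 0.4 + 0.3 * 2^n AU, with
# n = -infinity for Mercury (here written as None), then 0, 1, 2, ...
# for successive planets outward.
def titius_bode(n):
    return 0.4 if n is None else 0.4 + 0.3 * 2 ** n

# n = 6 predicts 19.6 AU, close to Uranus's actual ~19.2 AU orbit,
# which is why the rule was credited with anticipating the planet.
predictions = [round(titius_bode(n), 1) for n in (None, 0, 1, 2, 3, 4, 5, 6)]
print(predictions)
```

The rule famously breaks down for Neptune, so modern uses of it, like the study described here, treat it as a rough interpolation tool rather than a physical law.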

The data will require further analysis and the sky will require further searching to yield a more accurate number of potentially life-harboring planets.
“Some of these planets are so small the Kepler team will probably have missed them in the first attempt because the signals we get are so weak. They may be hidden in the noise,” Jacobsen said.

The initial analysis, however, is extremely promising in the possibility of finding habitable planets. “Our research indicates that there are a lot of planets in the habitable zone and we know there are a lot of stars like the one we’re looking at. We know that means we’re going to have many billions of planets in the habitable zone,” said Jacobsen, who considers that “very good news for the search for life.”


Should Humanity Try to Contact Alien Civilizations?



Some researchers want to use big radio dishes like the 305-meter Arecibo Observatory in Puerto Rico to announce our presence to intelligent aliens.



Excerpt from space.com
by Mike Wall

Is it time to take the search for intelligent aliens to the next level?
For more than half a century, scientists have been scanning the heavens for signals generated by intelligent alien life. They haven't found anything conclusive yet, so some researchers are advocating adding an element called "active SETI" (search for extraterrestrial intelligence) — not just listening, but also beaming out transmissions of our own designed to catch aliens' eyes.

Active SETI "may just be the approach that lets us make contact with life beyond Earth," Douglas Vakoch, director of interstellar message composition at the SETI Institute in Mountain View, California, said earlier this month during a panel discussion at the annual meeting of the American Association for the Advancement of Science (AAAS) in San Jose.

Seeking contact


Vakoch envisions using big radio dishes such as the Arecibo Observatory in Puerto Rico to blast powerful, information-laden transmissions at nearby stars, in a series of relatively cheap, small-scale projects.

"Whenever any of the planetary radar folks are doing their asteroid studies, and they have an extra half an hour before or after, there's always a target star readily available that they can shift to without a lot of extra slew time," he said.

The content of any potential active SETI message is a subject of considerable debate. If it were up to astronomer Seth Shostak, Vakoch's SETI Institute colleague, we'd beam the entire Internet out into space.

"It's like sending a lot of hieroglyphics to the 19th century — they [aliens] can figure it out based on the redundancy," Shostak said during the AAAS discussion. "So, I think in terms of messages, we should send everything."

While active SETI could help make humanity's presence known to extrasolar civilizations, the strategy could also aid the more traditional "passive" search for alien intelligence, Shostak added.
"If you're going to run SETI experiments, where you're trying to listen for a putative alien broadcast, it may be very instructive to have to construct a transmitting project," he said. "Because now, you walk a mile in the Klingons' shoes, assuming they have them."

Cause for concern?

But active SETI is a controversial topic. Humanity has been a truly technological civilization for only a few generations; we're less than 60 years removed from launching our first satellite to Earth orbit, for example. So the chances are that any extraterrestrials who pick up our signals would be far more advanced than we are. 

This likelihood makes some researchers nervous, including famed theoretical physicist Stephen Hawking.

"Such advanced aliens would perhaps become nomads, looking to conquer and colonize whatever planets they could reach," Hawking said in 2010 on an episode of "Into the Universe with Stephen Hawking," a TV show that aired on the Discovery Channel. "If so, it makes sense for them to exploit each new planet for material to build more spaceships so they could move on. Who knows what the limits would be?"

Astrophysicist and science fiction author David Brin voiced similar concerns during the AAAS event, saying there's no reason to assume that intelligent aliens would be altruistic.

"This is an area in which discussion is called for," Brin said. "What are the motivations of species that they might carry with them into their advanced forms, that might color their cultures?"

Brin stressed that active SETI shouldn't be done in a piecemeal, ad hoc fashion by small groups of astronomers.

"This is something that should be discussed worldwide, and it should involve our peers in many other specialties, such as history," he said. "The historians would tell us, 'Well, gee, we have some examples of first-contact scenarios between advanced technological civilizations and not-so-advanced technological civilizations.' Gee, how did all of those turn out? Even when they were handled with goodwill, there was still pain."

Out there already

Vakoch and Shostak agreed that international discussion and cooperation are desirable. But Shostak said that achieving any kind of consensus on the topic of active SETI may be difficult. For example, what if polling reveals that 60 percent of people on Earth are in favor of the strategy, while 40 percent are opposed?

"Do we then have license to go ahead and transmit?" Shostak said. "That's the problem, I think, with this whole 'let's have some international discussion' [idea], because I don't know what the decision metric is."

Vakoch and Shostak also said that active SETI isn't as big a leap as it may seem at first glance: Our civilization has been beaming signals out into the universe unintentionally for a century, since the radio was invented.

"The reality is that any civilization that has the ability to travel between the stars can already pick up our accidental radio and TV leakage," Vakoch said. "A civilization just 200 to 300 years more advanced than we are could pick up our leakage radiation at a distance of several hundred light-years. So there are no increased dangers of an alien invasion through active SETI."

But Brin disputed this assertion, saying the so-called "barn door excuse" is a myth.

"It is very difficult for advanced civilizations to have picked us up at our noisiest in the 1980s, when we had all these military radars and these big television antennas," he said.

Shostak countered that a fear of alien invasion, if taken too far, could hamper humanity's expansion throughout the solar system, an effort that will probably require the use of high-powered transmissions between far-flung outposts.

"Do you want to hamstring all that activity — not for the weekend, not just shut down the radars next week, or active SETI this year, but shut down humanity forever?" Shostak said. "That's a price I'm not willing to pay."

So the discussion and debate continues — and may continue for quite some time.

"This is the only really important scientific field without any subject matter," Brin said. "It's an area in which opinion rules, and everybody has a very fierce opinion."


Windwheel concept combines tourist attraction with "silent turbine"


 The Dutch Windwheel concept is designed to be part energy icon, part tourist attraction an...


Excerpt from gizmag.com
By Stu Robarts


The Dutch have long used windmills to harness wind energy. A new concept proposed for the city of Rotterdam, however, is surely one of the most elaborate windmills ever conceived. The Dutch Windwheel is a huge circular wind energy converter that houses apartments, a hotel and a giant coaster ride.

The concept is designed to be part energy icon, part tourist attraction and part residential building. It is a 174-m (571-ft) structure comprising two huge rings that appear to lean against each other. "We wanted to combine a big attraction for Rotterdam with a state-of-the-art sustainable concept," explains Lennart Graaff of the Dutch Windwheel Corporation, to Gizmag.

The larger outer ring houses 40 pods on rails that move around the ring and provide those who visit with views of Rotterdam and its port. The smaller inner ring, meanwhile, houses 72 apartments, a 160-room hotel across seven floors and a panoramic restaurant and viewing gallery. Perhaps the most remarkable feature of all, however, is a huge "bladeless turbine" that spans the center of the smaller inner ring.

Although this may look and sound like some of the more out-there architectural concepts that Gizmag has featured, it is actually based on existing (albeit prototypical) technology. The electrostatic wind energy convertor (EWICON) was developed at Delft Technical University and generates electricity by harnessing the movement of charged water droplets in the wind. Its lack of moving parts makes it noiseless and easier to maintain than traditional turbines.

Dhiradj Djairam, of the TU Delft team that developed the EWICON, tells Gizmag that the Dutch Windwheel Corporation has expressed "a serious interest" in the technology. Djairam says he has provided an explanation of the technology to the organization and provided a rough outline for a realistic research and development program. To date, only small-scale research projects have been carried out, with additional funding opportunities being explored.

The Dutch Windwheel concept is 174 m (571 ft) tall and has underwater foundations

The Dutch Windwheel concept has other sustainable aspects, too. Photovoltaic thermal hybrid panels would be used to contribute to the generation of electricity, and rainwater would be collected for use in the building. The Dutch Windwheel Corporation says the building itself is designed to be built with locally-sourced materials, and in such a way that it could ultimately be disassembled and re-used elsewhere.

Among the other features of the design are space for commercial functions in the structure's plinth, and foundations that are underwater, making it look as though the structure is floating.

We're told that the amount of power the Dutch Windwheel will require to run – and be able to generate – is not yet clear. Likewise, the final technologies and additional sustainability features that would be present in the building have yet to be finalized...


Is playing ‘Space Invaders’ a milestone in artificial intelligence?





Excerpt from latimes.com

Computers have beaten humans at chess and "Jeopardy!," and now they can master old Atari games such as "Space Invaders" or "Breakout" without knowing anything about their rules or strategies.

Playing Atari 2600 games from the 1980s may seem a bit "Back to the Future," but researchers with Google's DeepMind project say they have taken a small but crucial step toward a general learning machine that can mimic the way human brains learn from new experience.

Unlike the Watson and Deep Blue computers that beat "Jeopardy!" and chess champions with intensive programming specific to those games, the Deep-Q Network built its winning strategies from keystrokes up, through trial and error and constant reprocessing of feedback to find winning strategies.


“The ultimate goal is to build smart, general-purpose [learning] machines. We’re many decades off from doing that," said artificial intelligence researcher Demis Hassabis, coauthor of the study published online Wednesday in the journal Nature. "But I do think this is the first significant rung of the ladder that we’re on." 
The Deep-Q Network computer, developed by the London-based Google DeepMind, played 49 old-school Atari games, scoring "at or better than human level" on 29 of them, according to the study.
The algorithm approach, based loosely on the architecture of human neural networks, could eventually be applied to any complex and multidimensional task requiring a series of decisions, according to the researchers. 

The algorithms employed in this type of machine learning depart strongly from approaches that rely on a computer's ability to weigh stunning amounts of inputs and outcomes and choose programmed models to "explain" the data. Those approaches, known as supervised learning, required artful tailoring of algorithms around specific problems, such as a chess game.

The computer instead relies on random exploration of keystrokes bolstered by human-like reinforcement learning, where a reward essentially takes the place of such supervision.
“In supervised learning, there’s a teacher that says what the right answer was," said study coauthor David Silver. "In reinforcement learning, there is no teacher. No one says what the right action was, and the system needs to discover by trial and error what the correct action or sequence of actions was that led to the best possible desired outcome.”
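The distinction Silver draws can be illustrated with a minimal tabular Q-learning loop on a toy problem: no teacher labels the right action; the agent discovers it from reward alone. This is a bare-bones sketch of reinforcement learning in general, not DeepMind's pixel-based Deep-Q Network:

```python
import random

# Toy 5-state corridor: actions are move left (0) or right (1);
# reaching the rightmost state pays reward 1 and ends the episode.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                       # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit, sometimes explore
        # (ties are broken randomly so early episodes don't stall)
        if random.random() < EPSILON or Q[(s, 0)] == Q[(s, 1)]:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # no teacher: update toward reward plus discounted future value
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy in every non-terminal state
# should be "move right" (action 1), learned purely from reward.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

DeepMind's system combined this same reward-driven update rule with a deep neural network that read the raw screen pixels, which is what let it scale far beyond toy corridors.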

The computer "learned" over the course of several weeks of training, in hundreds of trials, based only on the video pixels of the game -- the equivalent of a human looking at screens and manipulating a cursor without reading any instructions, according to the study.

Over the course of that training, the computer built up progressively more abstract representations of the data in ways similar to human neural networks, according to the study.
There was nothing about the learning algorithms, however, that was specific to Atari, or to video games for that matter, the researchers said.
The computer eventually figured out such insider gaming strategies as carving a tunnel through the bricks in "Breakout" to reach the back of the wall. And it found a few tricks that were unknown to the programmers, such as keeping a submarine hovering just below the surface of the ocean in "Seaquest."

The computer's limits, however, became evident in the games at which it failed, sometimes spectacularly. It was miserable at "Montezuma's Revenge," and performed nearly as poorly at "Ms. Pac-Man." That's because those games also require more sophisticated exploration, planning and complex route-finding, said coauthor Volodymyr Mnih.

And though the computer may be able to match the video-gaming proficiency of a 1980s teenager, its overall "intelligence" hardly reaches that of a pre-verbal toddler. It cannot build conceptual or abstract knowledge, doesn't find novel solutions and can get stuck trying to exploit its accumulated knowledge rather than abandoning it and resorting to random exploration, as humans do.

“It’s mastering and understanding the construction of these games, but we wouldn’t say yet that it’s building conceptual knowledge, or abstract knowledge," said Hassabis.

The researchers chose the Atari 2600 platform in part because it offered an engineering sweet spot -- not too easy and not too hard. They plan to move into the 1990s, toward 3-D games involving complex environments, such as the "Grand Theft Auto" franchise. That milestone could come within five years, said Hassabis.

“With a few tweaks, it should be able to drive a real car,” Hassabis said.

DeepMind was formed in 2010 by Hassabis, Shane Legg and Mustafa Suleyman, and received funding from Tesla Motors' Elon Musk and Facebook investor Peter Thiel, among others. It was purchased by Google last year, for a reported $650 million. 

Hassabis, a chess prodigy and game designer, met Legg, an algorithm specialist, while studying at the Gatsby Computational Neuroscience Unit at University College, London. Suleyman, an entrepreneur who dropped out of Oxford University, is a partner in Reos, a conflict-resolution consulting group.


‘Firefly’ Starship to Blaze a Trail to Alpha Centauri?

The Icarus Interstellar 'Firefly' starship concept could use novel nuclear fusion techniques to power its way to Alpha Centauri within 100 years. (Image: Adrian Mann)

Excerpt from news.discovery.com

As part of Icarus Interstellar's continuing series ...


Cluster Filled with Dark Matter May House ‘Failed Galaxies’

The Coma Cluster


Excerpt from space.com

A strange set of 48 galaxies appears to be rich in dark matter and lacking in stars, suggesting that they may be so-called "failed" galaxies, a new study reports.

The galaxies in question are part of the Coma Cluster, which lies 300 million light-years from Earth and packs several thousand galaxies into a space just 20 million light-years across. To study them, Pieter van Dokkum of Yale University and his colleagues used the Dragonfly Telephoto Array in New Mexico.

The array's eight connected Canon telephoto lenses allow the researchers to search for extremely faint objects that traditional telescope surveys miss. Often the hunt comes up empty, as it did when the researchers used the array to search for the faint glow that dark matter might create.


But when van Dokkum and his colleagues looked toward the Coma Cluster, they found a pleasant surprise.

"We noticed all these faint little smudges in the images from the Dragonfly telescope," van Dokkum told Space.com.

The mysterious blobs nagged at van Dokkum, compelling him to look into the objects further. Fortuitously, NASA's Hubble Space Telescope had recently captured one of these objects with its sharp eye. 

"It turned out that they're these fuzzy blobs that look somewhat like dwarf spheroidal galaxies around our own Milky Way," van Dokkum said. "So they looked familiar in some sense … except that if they are at the distance of the Coma Cluster, they must be really huge."

And with very few stars to account for the mass in these galaxies, they must contain huge amounts of dark matter, the researchers said. In fact, to stay intact, the 48 galaxies must contain 98 percent dark matter and just 2 percent "normal" matter that we can see. The fraction of dark matter in the universe as a whole is thought to be around 83 percent. 

But before making this claim, the team had to verify that these blobs really are as distant as the Coma Cluster. (In fact, the team initially thought the galaxies were much closer.) But even in the Hubble image the stars were not resolved. If Hubble — one of the most powerful telescopes in existence — can't resolve the stars, those pinpricks of light must be pretty far away, study team members reasoned.

Now, van Dokkum and his colleagues have definitive evidence: They've determined the exact distance to one of the galaxies. The team used the Keck Telescope in Hawaii to look at one of the objects for two hours. This gave them a hazy spectrum, from which they were able to tease out the galaxy's recessional velocity — that is, how fast it is moving away from Earth.

That measure traces back to the Hubble Telescope's namesake. In 1929, American astronomer Edwin Hubble discovered one of the simplest and most surprising relationships in astronomy: The more distant a galaxy, the faster it moves away from the Milky Way.

Today, astronomers use the relationship to measure a galaxy's recessional velocity and thus calculate the galaxy's distance. In this case, the small fuzzy blob observed with Keck was moving away from Earth at 15.7 million mph (25.3 million km/h). That places it at 300 million light-years away from Earth, the distance of the Coma Cluster.
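The arithmetic behind that distance estimate can be sketched with Hubble's law, d = v / H0. The Hubble constant used below (~70 km/s per megaparsec) is an assumed round value, not one taken from the study.

```python
# Hubble's-law distance estimate from the recessional velocity in the article.
H0 = 70.0            # Hubble constant, km/s per megaparsec (assumed value)
MPC_TO_MLY = 3.262   # 1 megaparsec is about 3.262 million light-years

v_kmh = 25.3e6                 # recessional velocity from the article, km/h
v_kms = v_kmh / 3600.0         # convert to km/s (~7,000 km/s)
d_mpc = v_kms / H0             # distance in megaparsecs (~100 Mpc)
d_mly = d_mpc * MPC_TO_MLY     # distance in millions of light-years

print(round(d_mly))   # ~327, consistent with the Coma Cluster's ~300 Mly
```

The small spread around 300 million light-years reflects the choice of H0; any value in the currently accepted range gives a distance placing the blob in the Coma Cluster.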

So the verdict is officially in: These galaxies must be associated with the Coma Cluster and therefore must be extremely massive.

"It looks like the universe is able to make unexpected galaxies," van Dokkum said, adding that there is an amazing diversity of massive galaxies.

But the clusters still present a mystery: The team doesn't know why they have so much dark matter and so few stars.

One possibility is that these are "failed" galaxies. A galaxy's first supernova explosions will drive away huge amounts of gas. 

Normally, the galaxy has such a strong gravitational pull that most of the expelled gas falls back onto the galaxy and forms the next generations of stars. But maybe the strong gravitational pull of the other galaxies in the Coma Cluster interfered with this process, pulling the gas away.

"If that happened, they had no more fuel for star formation and they were sort of stillborn galaxies where they started to get going but then failed to really build up a lot of stars," said van Dokkum, adding that this is the most likely scenario. 

Another possibility is that these galaxies are in the process of being ripped apart. But astronomers expect that if this were the case, the galaxies would be distorted and streams of stars would be flowing away from them. Because these effects don't appear, this scenario is very unlikely.

The next step is to try to measure the individual motions of stars within the galaxies. If the team knew those stars' speeds, it could calculate the galaxies' exact mass, and therefore the amount of dark matter they contain. If the stars move faster, the galaxy is more massive. And if they move slower, the galaxy is less massive. 
However, this would require a better spectrum than the one the team has right now.
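The link between stellar speeds and galaxy mass can be sketched with a simple virial-style estimate, M ≈ σ²R/G, where σ is the velocity dispersion of the stars and R the galaxy's radius. The numbers below are illustrative assumptions, not values from the study.

```python
# Virial-style dynamical mass estimate: faster stars imply a heavier galaxy.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # 1 kiloparsec in metres

sigma = 30e3         # assumed stellar velocity dispersion, m/s (30 km/s)
R = 3 * KPC          # assumed galaxy radius, 3 kpc

M = sigma**2 * R / G           # dynamical mass in kg
print(f"{M / M_SUN:.2e} solar masses")   # ~6e8 for these assumed numbers
```

Because mass scales with σ², even a modest improvement in the measured spread of stellar speeds tightens the mass estimate, and with it the inferred dark matter content, considerably.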

"But it's not outside the realm of what's possible," van Dokkum assured. "It's just very hard."

The original study has been published in Astrophysical Journal Letters. You can read it for free on the preprint site arXiv.org.


Mysterious plumes in Mars’ atmosphere baffle astronomers




Excerpt from thespacereporter.com

Astronomers are baffled by plumes rising from Mars’ atmosphere, captured in images taken by amateur astronomers in March and April 2012.

The plumes were present for about 10 days though their shapes and sizes changed rapidly during that time, from finger-like tendrils to spherical blobs.

Researchers have proposed several possible explanations for the plumes, which are discussed in an article just published in the journal Nature.

Each of the theories being considered poses problems. One theory, for instance, proposes the plumes are caused by the same magnetic influence that causes the aurora borealis, or Northern Lights, on Earth. The movement of electrically charged particles from the Sun, driven by the solar wind towards Earth’s poles, results in these particles colliding with molecules of gas. These collisions produce the strange lights known as aurorae.

In the study, the researchers admit, “Mars aurorae have been observed near where the plume occurs, a region with a large anomaly in the crustal magnetic field that can drive the precipitation of solar wind particles into the atmosphere.”
The problem with this theory is that it would work only if the Sun released an exceptional amount of energetic particles during the time the plumes were seen. Yet the level of solar output in 2012 was nowhere near sufficient to release such a powerful stream of particles, the authors of the paper acknowledge.

They move on to consider another option, namely that the plumes might be clouds high in the Martian atmosphere.

A highly reflective cloud of either water ice, carbon dioxide ice, or dust particles could explain the plumes. But according to computer models, the presence of these clouds “would require exceptional deviations from standard atmospheric circulation models to explain cloud formations at such high altitudes,” explained the paper’s lead author, Agustin Sanchez-Lavega of the Universidad del Pais Vasco in Spain.

The plumes were seen approximately 120 miles (200 km) above Mars’ surface, which is problematic because the highest Martian clouds are seen at about 60 miles (100 km) above the planet’s surface. The only way water could condense so far up is if the temperature in that part of Mars’ atmosphere dropped 50 kelvins (90 degrees Fahrenheit) below its norm.

Condensation of carbon dioxide would require twice this temperature drop.

A third theory posits the plumes are caused by atmospheric dust. A wind powerful enough to transport dust 111 miles (180 km) above Mars’ surface could occur only around noon, when the Sun’s heat would be strong enough to create such wind currents.

However, the plumes were seen not at noon but in the mornings along the terminator that separates the planet’s day and night sides.

Recently, archival data from the Hubble Space Telescope was found showing the plumes back in 1997.


Earth To Aliens: Scientists Want To Send Messages To Extraterrestrial Intelligence Possibly Living On Exoplanets

Excerpt from techtimes.com

Extraterrestrial research experts have said that now is the time to contact intelligent life on alien worlds. Leading figures behind the Search for Extraterrestrial Intelligence (Seti), which has been using radio telescop...


This work is licensed under a Creative Commons Attribution 4.0 International License, unless otherwise marked.
