ScienceDaily (Nov. 17, 2011) — Galaxies learned to "go green" early in the history of the universe, continuously recycling immense volumes of hydrogen gas and heavy elements to build successive generations of stars stretching over billions of years.

This ongoing recycling keeps galaxies from emptying their "fuel tanks" and therefore stretches out their star-forming epoch to over 10 billion years. However, galaxies that ignite a rapid firestorm of star birth can blow away their remaining fuel, essentially turning off further star-birth activity.

This conclusion is based on a series of Hubble Space Telescope observations that flexed the special capabilities of its comparatively new Cosmic Origins Spectrograph (COS) to detect otherwise invisible mass in the halo of our Milky Way and a sample of more than 40 other galaxies. Data from large ground-based telescopes in Hawaii, Arizona, and Chile also contributed to the studies by measuring the properties of the galaxies.

This invisible mass is made up of normal matter -- hydrogen, helium, and heavier elements such as carbon, oxygen, nitrogen, and neon -- as opposed to dark matter that is an unknown exotic particle pervading space.

The results are being published in three papers in the November 18 issue of Science magazine. The leaders of the three studies are Nicolas Lehner of the University of Notre Dame in South Bend, Ind.; Jason Tumlinson of the Space Telescope Science Institute in Baltimore, Md.; and Todd Tripp of the University of Massachusetts at Amherst.

The Key Findings

The color and shape of a galaxy is largely controlled by gas flowing through an extended halo around it. All modern simulations of galaxy formation find that they cannot explain the observed properties of galaxies without modeling the complex accretion and "feedback" processes by which galaxies acquire gas and then later expel it after processing by stars. The three studies investigated different aspects of the gas-recycling phenomenon.

"Our results confirm a theoretical suspicion that galaxies expel and can recycle their gas, but they also present a fresh challenge to theoretical models to understand these gas flows and integrate them with the overall picture of galaxy formation," Tumlinson says.

The team used COS observations of distant stars to demonstrate that a large mass of clouds is falling through the giant gaseous halo of our Milky Way, fueling its ongoing star formation. These clouds of ionized hydrogen reside within 20,000 light-years of the Milky Way disk and contain enough material to make 100 million suns. Some of this gas is recycled material that is continually being replenished by star formation and the explosive energy of novae and supernovae, which kick chemically enriched gas back into the halo; the remainder is gas being accreted for the first time. The infalling gas from this vast reservoir fuels the Milky Way with the equivalent of about a solar mass per year, which is comparable to the rate at which our galaxy makes stars. At this rate the Milky Way will continue making stars for another billion years by recycling gas into the halo and back onto the galaxy. "We now know where the missing fuel for galactic star formation is," Lehner concludes. "We now have to find out its birthplace."
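As a rough sanity check on these figures, a back-of-envelope calculation (using the article's round numbers, not the papers' actual data) shows why continual replenishment matters: the visible reservoir alone would be exhausted long before the multi-billion-year star-forming epoch described above.

```python
# Back-of-envelope depletion timescale for the Milky Way's halo gas
# reservoir, using the article's round numbers (illustrative only).

reservoir_mass = 1e8       # solar masses of ionized gas in the halo clouds
accretion_rate = 1.0       # solar masses per year falling onto the disk

depletion_time = reservoir_mass / accretion_rate
print(f"Reservoir alone lasts ~{depletion_time:.0e} years")  # ~1e+08 years

# The star-forming epoch stretches over ~10 billion years, so the visible
# reservoir covers only a small fraction of it -- the gas must be
# continually recycled back into the halo to sustain star formation.
star_forming_epoch = 1e10  # years
print(f"Fraction of epoch covered: {depletion_time / star_forming_epoch:.0%}")
```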

One goal of the studies was to learn how other galaxies like our Milky Way accrete mass for star making. But instead of widespread accretion, the team found nearly ubiquitous halos of hot gas surrounding vigorously star-forming galaxies. These halos, rich in heavy elements, extend as much as 450,000 light-years beyond the visible portions of the galactic disks. The surprise was discovering how much mass in heavy elements lies far outside a galaxy. COS measured 10 million solar masses of oxygen in a galaxy's halo, which corresponds to about 1 billion solar masses of gas -- as much as in the entire interstellar medium between the stars in a galaxy's disk. The team also found that this gas is nearly absent from galaxies that have stopped forming stars -- evidence that widespread outflows, rather than accretion, determine a galaxy's fate. "We didn't know how much mass was there in these gas halos, because we couldn't do these observations until we had COS," Tumlinson says. "This stuff is a huge component of galaxies but can't be seen in any images."
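The step from 10 million solar masses of oxygen to about 1 billion solar masses of gas implicitly assumes an oxygen abundance near the solar value, where oxygen makes up roughly 1 percent of the gas mass. A short illustrative calculation (the abundance figure is an assumption, not a value quoted in the paper):

```python
# Converting a measured oxygen mass into an implied total gas mass,
# assuming a roughly solar oxygen mass fraction (~1%). Illustrative only.

oxygen_mass = 1e7            # solar masses of oxygen detected in the halo
oxygen_mass_fraction = 0.01  # assumed near-solar abundance by mass

total_gas_mass = oxygen_mass / oxygen_mass_fraction
print(f"Implied total gas mass: ~{total_gas_mass:.0e} solar masses")  # ~1e+09
```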

He points out that because so many of the heavy elements have been ejected into the halos instead of remaining in the galaxies, the formation of planets, life, and other things requiring heavy elements could have been delayed in these galaxies.

The COS data also demonstrate that those galaxies forming stars at a very rapid rate, perhaps a hundred solar masses per year, can drive 2-million-degree gas very far out into intergalactic space at speeds of up to 2 million miles per hour. That's fast enough for the gas to escape forever and never refuel the parent galaxy. While hot plasma "winds" from galaxies have been known for some time, the new COS observations reveal that hot outflows extend to much greater distances than previously thought and can carry a tremendous amount of mass out of a galaxy. Some of the hot gas is moving more slowly and could eventually be recycled. The Hubble observations show how gas-rich star-forming spiral galaxies can evolve to quiescent elliptical galaxies that no longer have star formation. "So not only have we found that star-forming galaxies are pervasively surrounded by large halos of hot gas," says Tripp, "we have also observed that hot gas in transit -- we have caught the stuff in the process of moving out of a galaxy and into intergalactic space."
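The quoted outflow speed can be converted to more familiar units and set against a representative galactic escape velocity; the 550 km/s value below is an illustrative Milky-Way-like figure, not a number from the study.

```python
# Unit check on the quoted outflow speed: 2 million miles per hour in km/s,
# compared with an illustrative galactic escape velocity (assumed value).

MPH_TO_KM_S = 1.60934 / 3600        # miles per hour -> km/s

outflow_speed = 2e6 * MPH_TO_KM_S   # ~894 km/s
escape_velocity = 550               # km/s, rough Milky-Way-like value

print(f"Outflow speed: ~{outflow_speed:.0f} km/s")
print(f"Exceeds a ~{escape_velocity} km/s escape velocity: {outflow_speed > escape_velocity}")
```

A speed of roughly 900 km/s comfortably exceeds a typical large galaxy's escape velocity, which is why the fastest-moving gas can leave forever while slower material may eventually be recycled.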

The light emitted by this hot plasma is invisible, so the researchers used COS to detect the presence of the gas by the way it absorbs certain colors of light from background quasars. The brightest objects in the universe, quasars are the brilliant cores of active galaxies that contain rapidly accreting supermassive black holes. The quasars serve as distant lighthouse beacons that shine through the gas-rich "fog" of hot plasma encircling galaxies. At ultraviolet wavelengths, COS is sensitive to absorption from many ionized heavy elements, such as nitrogen, oxygen, and neon. COS's high sensitivity allows many galaxies that happen to lie in front of the much more distant quasars to be studied. The ionized heavy elements serve as proxies for estimating how much mass is in a galaxy's halo.

"Only with COS can we now address some of the most crucial questions that are at the forefront of extragalactic astrophysics," Tumlinson says.


Story Source:

The above story is reprinted from materials provided by NASA/Goddard Space Flight Center.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal References:

T. M. Tripp, J. D. Meiring, J. X. Prochaska, C. N. A. Willmer, J. C. Howk, J. K. Werk, E. B. Jenkins, D. V. Bowen, N. Lehner, K. R. Sembach, C. Thom, J. Tumlinson. The Hidden Mass and Large Spatial Extent of a Post-Starburst Galaxy Outflow. Science, 2011; 334 (6058): 952. DOI: 10.1126/science.1209850

N. Lehner, J. C. Howk. A Reservoir of Ionized Gas in the Galactic Halo to Sustain Star Formation in the Milky Way. Science, 2011; 334 (6058): 955. DOI: 10.1126/science.1209069

J. Tumlinson, C. Thom, J. K. Werk, J. X. Prochaska, T. M. Tripp, D. H. Weinberg, M. S. Peeples, J. M. O'Meara, B. D. Oppenheimer, J. D. Meiring, N. S. Katz, R. Dave, A. B. Ford, K. R. Sembach. The Large, Oxygen-Rich Halos of Star-Forming Galaxies Are a Major Reservoir of Galactic Metals. Science, 2011; 334 (6058): 948. DOI: 10.1126/science.1209840

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



ScienceDaily (Nov. 17, 2011) — New research provides the first evidence that depression can be treated solely by targeting an individual's style of thinking through repeated mental exercises, in an approach called cognitive bias modification.

The study suggests an innovative psychological treatment called 'concreteness training' can reduce depression in just two months and could work as a self-help therapy for depression in primary care.

Led by the University of Exeter and funded by the Medical Research Council, the research shows how this new treatment could help some of the 3.5 million people in the UK living with depression.

People suffering from depression have a tendency toward unhelpful abstract thinking and over-general negative thoughts, such as viewing a single mistake as evidence that they are useless at everything. Concreteness training (CNT) is a novel treatment approach that directly targets this tendency. Repeated practice of CNT exercises can help people shift their thinking style.

CNT teaches people how to be more specific when reflecting on problems. This can help them to keep difficulties in perspective, improve problem-solving and reduce worry, brooding, and depressed mood. This study provided the first formal test of this treatment for depression in the NHS.

121 individuals who were currently experiencing an episode of depression were recruited from GP practices to take part in the clinical trial and were randomly allocated to one of three groups: one third received CNT in addition to their usual GP treatment, one third were offered relaxation training in addition to their usual treatment, and the remainder simply continued their usual treatment. All participants were assessed by the research team after two months, and again three and six months later, to see what progress they had made.
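The three-arm allocation described above can be sketched in a few lines; the trial's actual randomization procedure is not described in the article, so this simple randomization is purely illustrative.

```python
# Toy sketch of allocating 121 trial participants into three arms.
# Simple randomization for illustration only -- not the trial's procedure.
import random

random.seed(0)
participants = list(range(1, 122))  # 121 participants
arms = ["usual care + CNT", "usual care + relaxation", "usual care only"]

allocation = {p: random.choice(arms) for p in participants}
counts = {arm: sum(1 for a in allocation.values() if a == arm) for arm in arms}
print(counts)  # arm sizes vary slightly under simple randomization
```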

The CNT involved the participants undertaking a daily exercise in which they focused on a recent event that they had found mildly to moderately upsetting. They did this initially with a therapist and then alone using an audio CD that provided guided instructions. They worked through standardised steps and a series of exercises to focus on the specific details of that event and to identify how they might have influenced the outcome.

CNT significantly reduced symptoms of depression and anxiety, on average reducing symptoms from severe depression to mild depression during the first two months and maintaining this effect over the following three and six months. On average, those individuals who simply continued with their usual treatment remained severely depressed.

Although concreteness training and relaxation training both significantly reduced depression and anxiety, only concreteness training reduced the negative thinking typically found in depression. Moreover, for those participants who practised it enough to ensure it became a habit, CNT reduced symptoms of depression more than relaxation training.

Professor Edward Watkins of the University of Exeter said: "This is the first demonstration that just targeting thinking style can be an effective means of tackling depression. Concreteness training can be delivered with minimal face-to-face contact with a therapist and training could be accessed online, through CDs or through smartphone apps. This has the advantage of making it a relatively cheap form of treatment that could be accessed by large numbers of people. This is a major priority in depression treatment and research, because of the high prevalence and global burden of depression, for which we need widely available cost-effective interventions."

The researchers are now calling for larger effectiveness clinical trials so that the feasibility of CNT as part of the NHS's treatment for depression can be assessed.

Published in the journal Psychological Medicine, the study was carried out by a team from the Mood Disorders Centre -- a partnership between the NHS and the University of Exeter -- and the Peninsula College of Medicine and Dentistry, a joint entity of the Universities of Exeter and Plymouth and the NHS in the South West.


Story Source:

The above story is reprinted from materials provided by University of Exeter.


Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



ScienceDaily (Nov. 18, 2011) — Suitable habitat for native fishes in many Great Plains streams has been significantly reduced by the pumping of groundwater from the High Plains aquifer -- and scientists analyzing the water loss say ecological futures for these fishes are "bleak."


Results of their study have been published in the journal Ecohydrology.


Unlike alluvial aquifers, which can be replenished seasonally with rain and snow, these regional aquifers were filled by melting glaciers during the last Ice Age, the researchers say. When that water is gone, it won't come back -- at least, until another Ice Age comes along.


"It is a finite resource that is not being recharged," said Jeffrey Falke, a post-doctoral researcher at Oregon State University and lead author on the study. "That water has been there for thousands of years, and it is rapidly being depleted. Already, streams that used to run year-round are becoming seasonal, and refuge habitats for native fishes are drying up and becoming increasingly fragmented."


Falke and his colleagues, all scientists from Colorado State University, where he earned his Ph.D., spent three years studying the Arikaree River in eastern Colorado. They conducted monthly low-altitude flights over the river to map refuge pool habitats and their connectivity, and compared the results with historical data.


They conclude that during the next 35 years -- under the most optimistic of circumstances -- only 57 percent of the current refuge pools would remain -- and almost all of those would be isolated in a single mile-long stretch of the Arikaree River. Water levels today already are significantly lower than they were 40 and 50 years ago.


Though their study focused on the Arikaree, other dryland streams in the western Great Plains -- composed of eastern Colorado, western Nebraska and western Kansas -- face the same fate, the researchers say.


Falke said the draining of the regional aquifers lowers the groundwater input to alluvial aquifers through which the rivers flow, creating the reduction in streamflow. He and his colleagues estimate that it would require a 75 percent reduction in the rate of groundwater pumping to maintain current water table levels and refuge pools, which is "not economically or politically feasible," the authors note in the study.


Dryland streams in the Great Plains host several warm-water native fish species that have adapted over time to harsh conditions, according to Falke, who is with the Department of Fisheries and Wildlife at Oregon State University. Brassy minnows, orange-throat darters and other species can withstand water temperatures reaching 90 degrees, as well as low levels of dissolved oxygen, but the increasing fragmentation of their habitats may impede their life cycle, limiting the ability of the fish to recolonize.


"The Arikaree River and most dryland streams are shallow, with a sandy bottom, and often silty," Falke said. "The water can be waist-deep, and when parts of the river dry up from the pumping of groundwater, it is these deeper areas that become refuge pools. But they are becoming scarcer, and farther apart each year."


Falke said the changing hydrology of the system has implications beyond the native fishes. The aquifer-fed stream influences the entire riparian area, where cottonwood trees form their own ecosystem and groundwater-dependent grasses support the grazing of livestock and other animals.


Pumping of the regional aquifers is done almost entirely for agriculture, Falke said, with about 90 percent of the irrigation supporting corn production, along with some alfalfa and wheat.


"The impact goes well beyond the Arikaree River," Falke said. "Declines in streamflow are widespread across the western Great Plains, including all 11 headwaters of the Republican River. Ultimately, the species inhabiting these drainages will decline in range and abundance, and become more imperiled as groundwater levels decline and climate changes continue."


Other authors on the study include Kurt Fausch, Robin Magelky, Angela Aldred, Deanna Durnford, Linda Riley and Ramchand Oad, all of Colorado State University. The study was supported by the Colorado Division of Wildlife and the Colorado Agricultural Experiment Station.



ScienceDaily (Nov. 18, 2011) — Scientists at the University of California, San Diego School of Medicine and UC San Diego Moores Cancer Center, in collaboration with colleagues in Boston and South Korea, say they have identified a novel gene mutation that causes at least one form of glioblastoma (GBM), the most common type of malignant brain tumor.


The findings are reported in the online edition of the journal Cancer Research.


Perhaps more importantly, the researchers found that two drugs already being used to treat other forms of cancer effectively prolonged the survival of mice modeling this particular form of GBM. That could be good news for at least some GBM patients. More than 9,000 new cases of the disease are diagnosed each year in the United States, and effective treatments are limited. The tumors are aggressive and resistant to current therapies such as surgery, radiation and chemotherapy. The median survival for newly diagnosed GBM patients is just 14 months.


Past studies have identified epidermal growth factor receptor (EGFR) as a gene that is commonly altered in GBM, though the cause or causes of the alteration are not known. The research team, led by scientists at the Dana-Farber Cancer Institute in Boston, analyzed the GBM genomic database, ultimately identifying and characterizing an exon 27 deletion mutation within the EGFR carboxyl-terminus domain (CTD). An exon is a segment of a DNA or RNA molecule containing information coding for a protein or peptide sequence.


"The deletion mutant seems to possess a novel mechanism for inducing cellular transformation," said Frank Furnari, PhD, associate professor of medicine at the UC San Diego School of Medicine and an associate investigator at the San Diego branch of the Ludwig Institute for Cancer Research.


The study researchers determined that cellular transformation was induced by the previously unknown EGFR CTD deletion mutant, both in vitro and in vivo, and resulted in GBM in the animals. The researchers then turned to testing a pair of approved drugs that target EGFR: a monoclonal antibody called cetuximab and a small molecule inhibitor called erlotinib.


Cetuximab, marketed under the name Erbitux, is currently approved for use in treating metastatic colorectal cancer and squamous cell carcinoma of the head and neck. Erlotinib, marketed under the name Tarceva, is used to treat lung and pancreatic cancers.


Both drugs were found to effectively impair the tumor-forming abilities of oncogenic EGFR CTD deletion mutants. Cetuximab, in particular, prolonged survival of mice with the deletion mutants when compared to untreated control mice.


However, neither cetuximab nor erlotinib is an unabashed success story. The drugs work by binding to sites on the EGFR protein and inhibiting activation, but they are not effective in all cancer patients and produce some adverse side effects, such as rashes and diarrhea.


But Santosh Kesari, MD, PhD, Director of Neuro-Oncology at UC San Diego Moores Cancer Center and the UCSD Department of Neurosciences, and co-corresponding author of the study, said the new study points to a more selective, effective use of the drugs for some patients with GBM.


"In the past when we treated brain cancer patients with these drugs, the response rate was very small," Kesari said. "What we now show is that the tumors with CTD mutations respond best to these EGFR targeted agents. If we knew this beforehand, we might have been able to select patients most likely to respond to these agents. We are now trying to put together a prospective clinical trial to prove this. We would select only patients with these tumor mutations and treat them. This kind of research gets us closer to identifying genetic subtypes, to doing better biomarker-based clinical trials, and to personalizing treatments in brain cancers."


"This is a great example of personalized medicine in action," said Webster Cavenee, PhD, director of the Ludwig Institute at UC San Diego. "UCSD has made a concerted effort in the past few years to develop a first-class brain tumor research and therapy group that includes adult neuro-oncology, neurosurgery, neuropathology and their pediatric equivalents to join with internationally-renowned brain tumor research. This is making UCSD a destination for the very best in brain tumor management."


Co-authors of the study are Jeonghee Cho, Department of Medical Oncology, Dana-Farber Cancer Institute, Center for Cancer Genome Discovery, Boston, MA, Genomic Analysis Center, Samsung Cancer Research Institute, Seoul, Republic of Korea; Sandra Pastorino and Ying S. Chao, Department of Neurosciences, Moores Cancer Center, UC San Diego; Qing Zeng and Xiaoyin Xu, Department of Radiology, Brigham and Women's Hospital, Boston; William Johnson, Dana-Farber Cancer Institute, Center for Cancer Genome Discovery, Boston; Scott Vandenberg, Department of Pathology, UC San Diego; Roel Verhaak, Amit Dutt, Derek Chiang and Yuki Yuza, Department of Medical Oncology, Dana-Farber Cancer Institute and Broad Institute of MIT and Harvard; Andrew Cherniack and Robert C. Onofrio, Broad Institute of MIT and Harvard; Hideo Watanabe and Matthew Meyerson, Department of Medical Oncology, Dana-Farber Cancer Institute, Center for Cancer Genome Discovery and Broad Institute of MIT and Harvard; Jihyun Kwon, Genomic Analysis Center, Samsung Cancer Research Institute.


Funding for this research came, in part, from the National Institutes of Health, the Sontag Foundation Distinguished Scientist Award, the James S. McDonnell Foundation and the Samsung Cancer Research Institution.



ScienceDaily (Nov. 18, 2011) — Researchers at the Naval Research Laboratory Marine Meteorology Division (MMD), Monterey, Calif., have developed the Coupled Ocean/Atmosphere Mesoscale Prediction System Tropical Cyclone (COAMPS-TC™) model, achieving a significant research milestone in predictions of tropical cyclone intensity and structure.


While the predictions of the paths or tracks of hurricanes, more generally referred to as tropical cyclones (TC), have steadily improved over the last few decades, improvements in the predictions of storm intensity have proven much more difficult.


"Over the past two years, the COAMPS-TC model has proven to be the most accurate emerging research model for predicting tropical cyclone intensity," said Dr. Jim Doyle, research meteorologist, NRL Monterey. "There is no better example of these difficult challenges than the intensity predictions for Hurricane Irene this past August."


During a real-time experimental demonstration for Hurricane Irene, COAMPS-TC produced very accurate intensity predictions: its intensity errors averaged six knots over a series of three-day forecasts, a clear improvement over the official National Hurricane Center (NHC) forecast and other operational models, whose errors ranged from 20 to 30 knots.
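An average intensity error of the kind quoted here is typically computed as a mean absolute error between forecast and observed maximum winds. The wind values below are made-up examples for illustration, not Hurricane Irene data.

```python
# How an average intensity error is scored: mean absolute error (MAE)
# between predicted and observed maximum winds, in knots.
# The values below are hypothetical examples, not Irene observations.

def mean_absolute_error(predicted, observed):
    """Average absolute difference between forecast and observed intensity."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

observed_kt = [100, 95, 90, 85, 75]  # hypothetical observed max winds
forecast_kt = [105, 90, 96, 80, 73]  # hypothetical model forecasts

print(f"Intensity MAE: {mean_absolute_error(forecast_kt, observed_kt):.1f} kt")  # 4.6 kt
```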


The successful predictions have demonstrated that Numerical Weather Prediction (NWP) models can outperform operational statistical-dynamic models that are based on climatology and previous behavior. It is further believed that NWP models, which explicitly predict the location, dynamics and intensity of a storm, will eventually provide the most promising approach to achieve accurate TC intensity and structure prediction.


With further advances in the methodologies used for vortex initialization, data assimilation and the representation of physical processes, COAMPS-TC is expected to become fully operational in 2013 at the Navy's Fleet Numerical Meteorology and Oceanography Center (FNMOC) in Monterey. Considerable advancements have already been made to several components of the modeling system, including the assimilation of conventional and special TC synthetic observations, vortex initialization, and the representation of key TC physical processes such as air-sea fluxes, clouds and convection.


The COAMPS-TC project will potentially signal a paradigm shift in TC forecasting and is already making a strong impression on the forecasting community. Focusing on the development and transition of a fully coupled air-ocean-wave prediction system, the COAMPS-TC model includes nonhydrostatic atmospheric dynamics, multiple nested moving grids that follow the center of the storm and improved boundary layer and cloud physical parameterizations.


COAMPS-TC was first tested in real time in support of two field campaigns sponsored by the Office of Naval Research (ONR): the Tropical Cyclone Structure-08 (TCS-08) experiment, conducted as part of The Observing System Research and Predictability Experiment (THORPEX) Pacific Asian Regional Campaign (T-PARC) in 2008, and the Impact of Typhoons on the Ocean in the Pacific (ITOP) campaign in 2010, both of which took place in the Western Pacific. Additionally, COAMPS-TC advancements and real-time demonstrations in the Eastern Pacific and Western Atlantic have taken place through collaboration with the National Oceanic and Atmospheric Administration (NOAA) as part of the Hurricane Forecast Improvement Project (HFIP) -- a community-wide effort focused on improving operational hurricane prediction.


In June 2011, COAMPS-TC was one of nine worldwide winners of the inaugural High Performance Computing (HPC) Excellence Award presented at the ISC-11 International Supercomputing Conference in Hamburg, Germany -- an award presented annually to recognize noteworthy achievements by users of HPC technologies. As a result, COAMPS-TC was recognized for achieving 'a significantly improved model for tropical cyclone forecasting.' COAMPS-TC development benefited significantly from the Department of Defense HPC Modernization Program Office (HPCMO) computational assets at the Navy Defense Supercomputing Resource Center (DSRC) at Mississippi's Stennis Space Center.


Increasingly sophisticated developmental versions of COAMPS-TC will continue to be demonstrated in real time and in support of the Joint Typhoon Warning Center and the National Hurricane Center. A key additional enhancement will be a fully coupled ocean-atmosphere version in which the NRL Coastal Ocean Model (NCOM) and the Wave Watch III (WWIII) model will provide the ocean circulation and wave components, respectively.



ScienceDaily (Nov. 18, 2011) — Why do people with a hereditary mutation of the red blood pigment hemoglobin (as is the case with sickle-cell anemia prevalent in Africa) not contract severe malaria? Scientists in the group headed by Prof. Michael Lanzer of the Department of Infectious Diseases at Heidelberg University Hospital have now solved this mystery.


A degradation product of the altered hemoglobin provides protection from severe malaria. Within the red blood cells infected by the malaria parasite, it blocks the establishment of a trafficking system used by the parasite's special adhesive proteins -- adhesins -- to access the exterior of the blood cells. As a result, the infected blood cells do not adhere to the vessel walls, as is usually the case for this type of malaria. This means that no dangerous circulatory disorders or neurological complications occur.


The research study has been published in the journal Science, appearing initially online.


As early as the 1940s, researchers discovered that sickle-cell anemia, with its characteristic blood mutation, was particularly prevalent in certain population groups in Africa -- and that these groups also tended to survive malaria tropica, whose course is usually especially virulent. In malaria tropica, the malaria parasites (Plasmodia) enter a person through the bite of an infected Anopheles mosquito. The parasites first multiply in the person's liver cells and then infect the red blood cells (erythrocytes). Once inside the erythrocytes, they divide again and ultimately destroy them. The nearly simultaneous bursting of all infected blood cells causes the characteristic symptoms, which include bouts of fever and anemia.


Adhesins on red blood cells cause circulatory disorders


In patients with malaria tropica, neurological complications such as paralysis, seizures, coma and severe brain damage also frequently occur. These complications are caused by a peculiarity of the parasite Plasmodium falciparum: it forms special adhesins that reach the surface of the infected blood cell. There, the adhesins cause the erythrocytes to adhere to the vessel walls, preventing them from being recognized in the spleen as damaged and removed from circulation. This protective mechanism of the parasite causes smaller vessels to close up and become inflamed, and can, for example, leave parts of the nervous system inadequately supplied with oxygen.


In humans with mutated hemoglobin, these complications occur in a weakened form or not at all. "At the cell surface of infected erythrocytes with mutated hemoglobin, there are significantly fewer adhesins of the parasite than in normal red blood cells," explained Prof. Lanzer, Director of the Dept. of Infectious Diseases, Parasitology. "For this reason, we had a closer look at the trafficking system within the host cell." To this end, the team compared the blood cells with normal hemoglobin and two hemoglobin variants (hemoglobin S and hemoglobin C), which occur in around one-fifth of the African population in malaria-infected areas.


Trafficking system of the malaria parasite visualized for the first time


In so doing, the scientists used high-resolution microscopy techniques such as cryoelectron tomography to discover a new transport mechanism. The parasite uses a certain protein (actin) from the cytoskeleton (cellular skeleton) of the erythrocytes for its own trafficking network. "It forms a completely new structure that has nothing in common with the rest of the cytoskeleton," explained Dr. Marek Cyrklaff, group leader at the Dept. of Infectious Diseases, Parasitology and first author of the article. "The vesicles with the adhesins reach the cell surface of the red blood cells directly via these actin filaments."


In erythrocytes with the two hemoglobin variants, by contrast, only short pieces of actin filaments are found, so targeted transport to the surface is not possible. "The entire transport system of the malaria parasite is degenerated in these blood cells," Cyrklaff added. Laboratory tests showed that the hemoglobins themselves were not responsible for this, but rather a degradation product, ferryl hemoglobin -- an irreversibly damaged, chemically altered hemoglobin that is no longer able to bind oxygen. Hemoglobins S and C are considerably more unstable than normal hemoglobin; as a result, blood cells with these variants contain ten times more ferryl hemoglobin than other erythrocytes. This high concentration destabilizes the binding of the actin structure, and it disintegrates.


"With these results, we have now described a molecular mechanism for the first time that explains this hemoglobin variant's protective effect against malaria," Lanzer said.



ScienceDaily (Nov. 18, 2011) — Scientists at Chalmers have succeeded in creating light from vacuum -- observing an effect first predicted over 40 years ago. In an innovative experiment, the scientists have managed to capture some of the photons that are constantly appearing and disappearing in the vacuum.


The results have been published in the journal Nature.


The experiment is based on one of the most counterintuitive, yet most important, principles in quantum mechanics: that a vacuum is by no means empty nothingness. In fact, the vacuum is full of various particles that are continuously fluctuating in and out of existence. They appear, exist for a brief moment and then disappear again. Since their existence is so fleeting, they are usually referred to as virtual particles.


Chalmers scientist Christopher Wilson and his co-workers have succeeded in getting photons to leave their virtual state and become real photons, i.e. measurable light. The physicist Gerald Moore predicted as far back as 1970 that this should happen if the virtual photons are allowed to bounce off a mirror moving at a speed close to the speed of light. The phenomenon, known as the dynamical Casimir effect, has now been observed for the first time in a brilliant experiment conducted by the Chalmers scientists.


"Since it's not possible to get a mirror to move fast enough, we've developed another method for achieving the same effect," explains Per Delsing, Professor of Experimental Physics at Chalmers. "Instead of varying the physical distance to a mirror, we've varied the electrical distance to an electrical short circuit that acts as a mirror for microwaves."


The "mirror" consists of a quantum electronic component called a SQUID (superconducting quantum interference device), which is extremely sensitive to magnetic fields. By changing the direction of the magnetic field several billion times a second, the scientists were able to make the "mirror" vibrate at a speed of up to 25 percent of the speed of light.
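As a rough sanity check (not from the article), one can estimate how large an effective position modulation such a "mirror" would need to reach a quarter of light speed. Only the 25-percent figure comes from the article; the sinusoidal motion and the 10 GHz drive frequency are illustrative assumptions:

```python
# Back-of-envelope sketch (illustrative assumptions, not the published
# experimental parameters): how far must the effective mirror position
# swing, at an assumed microwave drive frequency, to peak at 0.25 c?
import math

c = 3.0e8            # speed of light, m/s
v_peak = 0.25 * c    # peak effective mirror speed reported in the article
f = 10e9             # assumed drive frequency, Hz (hypothetical round number)

# For sinusoidal motion x(t) = A*sin(2*pi*f*t), peak velocity is 2*pi*f*A,
# so the required modulation amplitude is:
A = v_peak / (2 * math.pi * f)
print(f"required modulation amplitude: {A * 1e3:.2f} mm")  # about 1.19 mm
```

The point of the estimate is that millimetre-scale effective length changes at gigahertz rates are far beyond any mechanically moving mirror, which is why the electrically tunable SQUID boundary is used instead.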


"The result was that photons appeared in pairs from the vacuum, which we were able to measure in the form of microwave radiation," says Per Delsing. "We were also able to establish that the radiation had precisely the same properties that quantum theory says it should have when photons appear in pairs in this way."


What happens during the experiment is that the "mirror" transfers some of its kinetic energy to virtual photons, which helps them to materialise. According to quantum mechanics, there are many different types of virtual particles in vacuum, as mentioned earlier. Göran Johansson, Associate Professor of Theoretical Physics, explains that the reason why photons appear in the experiment is that they lack mass.


"Relatively little energy is therefore required in order to excite them out of their virtual state. In principle, one could also create other particles from vacuum, such as electrons or protons, but that would require a lot more energy."


The scientists find the photon pairs that appear in the experiment interesting to study in closer detail. They may prove useful in the research field of quantum information, which includes the development of quantum computers.


However, the main value of the experiment is that it increases our understanding of basic physical concepts, such as vacuum fluctuations -- the constant appearance and disappearance of virtual particles in vacuum. It is believed that vacuum fluctuations may have a connection with "dark energy" which drives the accelerated expansion of the universe. The discovery of this acceleration was recognised this year with the awarding of the Nobel Prize in Physics.



ScienceDaily (Nov. 18, 2011) — A team of researchers belonging to the Universitat Politècnica de València's CUINA group has achieved a 50% reduction in the amount of salt in already desalted cod, thus obtaining a final product that preserves all its sensory properties and is particularly suitable for persons with hypertension.


This research has been published in the Journal of Food Engineering.


The key to reducing the amount of salt in cod is to partially replace sodium with potassium after the desalting process. "Once we have desalted the cod, we introduce a piece of it into a solution containing potassium chloride. During this process, a partial exchange of sodium for potassium takes place -- it is like a second desalting. Thus, we get a piece of cod containing 50% less sodium than standard desalted cod," says José Manuel Barat, a researcher at the UPV's CUINA group. The fish also retains all its properties of flavour, texture, etc., as shown by the results of several sensory studies conducted in the UPV's laboratories. It also contains enough salt to be stored under refrigeration for as long as needed. So far, this new technique has been applied -- and validated -- in laboratory tests.


This new method proposed by researchers at the UPV's CUINA group responds to an increasingly important demand by the food industry for developing low-salt products. "With this technique, we open the door to offering a new product both to those consumers who, for medical reasons, must have little salt in their diet, and to the general public, who are advised to reduce their sodium intake. Furthermore, by replacing sodium chloride with potassium chloride we get an even healthier product," says José Manuel Barat.


Researchers at the UPV's CUINA group have extensive experience in the processes of salting and desalting food. They also have several patents, including a method for desalting and preserving fish.


This experience and this knowledge were applied to a collaborative project with the fishing company Conservas Ubago, which resulted in the commercialization of ready-to-cook refrigerated desalted cod. "Even though it was desalted cod, it still had a certain amount of salt, which is necessary in order to store refrigerated cod. Now we have gone a step further and have reduced even that sodium content. We have thus laid the ground for the development of a new product, with less sodium and more potassium and with all its properties unaltered, particularly suitable for low-sodium diets," said José Manuel Barat.



ScienceDaily (Nov. 18, 2011) — In the fast-paced world of health care, doctors are often pressed for time during patient visits. Researchers at the University of Missouri developed a tool that allows doctors to view electronic information about patients' health conditions related to diabetes on a single computer screen. A new study shows that this tool, the diabetes dashboard, saves time, improves accuracy and enhances patient care.


The diabetes dashboard provides information about patients' vital signs, health conditions, current medications, and laboratory tests that may need to be performed. The study showed that physicians who used the dashboard were able to correctly identify data they were searching for 100 percent of the time, compared with 94 percent using traditional electronic medical records. Further, the number of mouse clicks needed to find the information was reduced from 60 to three when using the diabetes dashboard.
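The single-screen idea can be illustrated with a toy sketch. This is entirely hypothetical -- the study does not publish its data model, and every name below is invented for illustration -- but it shows the principle: scattered record fields are gathered into one consolidated view, with outstanding labs computed automatically rather than hunted down by clicking through charts.

```python
# Toy illustration of the "dashboard" idea (hypothetical data model,
# not the MU system): gather scattered record fields into one view
# and flag the standard diabetes labs that are still due.
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    vitals: dict = field(default_factory=dict)      # e.g. {"bp": "130/85"}
    conditions: list = field(default_factory=list)  # comorbidities
    medications: list = field(default_factory=list)
    labs_done: set = field(default_factory=set)     # tests already on file

# Illustrative list only -- not a clinical guideline.
STANDARD_DIABETES_LABS = {"HbA1c", "lipid panel", "urine microalbumin"}

def diabetes_dashboard(rec: PatientRecord) -> dict:
    """One consolidated view: everything relevant, plus labs still due."""
    return {
        "vitals": rec.vitals,
        "conditions": rec.conditions,
        "medications": rec.medications,
        "labs_due": sorted(STANDARD_DIABETES_LABS - rec.labs_done),
    }

rec = PatientRecord(vitals={"bp": "130/85"},
                    conditions=["type 2 diabetes", "hypertension"],
                    medications=["metformin"],
                    labs_done={"HbA1c"})
print(diabetes_dashboard(rec)["labs_due"])  # ['lipid panel', 'urine microalbumin']
```

The set difference in `labs_due` is the part that addresses the cost point made below: tests already on file are never re-listed as needed.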


Richelle Koopman, associate professor of family and community medicine in the School of Medicine, says diabetes care is complex because there are so many other health conditions associated with the disease; thus coordination of treatments is required. The goal of the diabetes dashboard is to make it easier for doctors to make the right decision about treatments.


"The diabetes dashboard is so intuitive that it makes it hard for physicians not to do the right thing," Koopman said. "Doctors can see, at a glance, everything that might affect their decision. This frees up their minds and helps them make better decisions about patients' care."


According to Koopman, the research has important implications for patient safety and costs. For example, the dashboard shows doctors a list of tests that are standard for diabetes patients and indicates whether patients have recently had the tests or need to have them. This eliminates the potential for physicians to order costly tests that are not necessary.


"It is difficult to quantify how much money the dashboard saves, but in terms of time and accuracy, the savings are substantial," Koopman said. "Doctors are still going to spend 15 minutes with each patient, but instead of using a large portion of that time to search through charts for information, they can have interactive conversations with patients about lifestyle and diet changes that are important for diabetes care."


The researchers say the dashboard was well received by doctors who tested it because it was designed by physicians familiar with their needs. The study, published in Annals of Family Medicine, was a collaboration among the MU School of Medicine, The Informatics Institute, the School of Information Science and Learning Technologies in the College of Education, the Center for Health Care Quality and the Sinclair School of Nursing.



ScienceDaily (Nov. 18, 2011) — Many experts believe that advanced biofuels made from cellulosic biomass are the most promising alternative to petroleum-based liquid fuels for a renewable, clean, green, domestic source of transportation energy. Nature, however, does not make it easy. Unlike the starch sugars in grains, the complex polysaccharides in the cellulose of plant cell walls are locked within a tough woody material called lignin. For advanced biofuels to be economically competitive, scientists must find inexpensive ways to release these polysaccharides from their bindings and reduce them to fermentable sugars that can be synthesized into fuels.


An important step towards achieving this goal has been taken by researchers with the U.S. Department of Energy (DOE)'s Joint BioEnergy Institute (JBEI), a DOE Bioenergy Research Center led by the Lawrence Berkeley National Laboratory (Berkeley Lab).


A team of JBEI researchers, working with researchers at the U.S. Department of Agriculture's Agricultural Research Service (ARS), has demonstrated that introducing a maize (corn) gene into switchgrass, a highly touted potential feedstock for advanced biofuels, more than doubles (250 percent) the amount of starch in the plant's cell walls and makes it much easier to extract polysaccharides and convert them into fermentable sugars. The gene, a variant of the maize gene known as Corngrass1 (Cg1), holds the switchgrass in the juvenile phase of development, preventing it from advancing to the adult phase.


"We show that Cg1 switchgrass biomass is easier for enzymes to break down and also releases more glucose during saccharification," says Blake Simmons, a chemical engineer who heads JBEI's Deconstruction Division and was one of the principal investigators for this research. "Cg1 switchgrass contains decreased amounts of lignin and increased levels of glucose and other sugars compared with wild switchgrass, which enhances the plant's potential as a feedstock for advanced biofuels."


The results of this research are described in a paper published in the Proceedings of the National Academy of Sciences (PNAS) titled "Overexpression of the maize Corngrass1 microRNA prevents flowering, improves digestibility, and increases starch content of switchgrass."


Lignocellulosic biomass is the most abundant organic material on earth. Studies have consistently shown that biofuels derived from lignocellulosic biomass could be produced in the United States in a sustainable fashion and could replace today's gasoline, diesel and jet fuels on a gallon-for-gallon basis. Unlike ethanol made from grains, such fuels could be used in today's engines and infrastructures and would be carbon-neutral, meaning the use of these fuels would not exacerbate global climate change. Among potential crop feedstocks for advanced biofuels, switchgrass offers a number of advantages. As a perennial grass that is both salt- and drought-tolerant, switchgrass can flourish on marginal cropland, does not compete with food crops, and requires little fertilization. A key to its use in biofuels is making it more digestible to fermentation microbes.


"The original Cg1 was isolated in maize about 80 years ago. We cloned the gene in 2007 and engineered it into other plants, including switchgrass, so that these plants would replicate what was found in maize," says George Chuck, lead author of the PNAS paper and a plant molecular geneticist who holds joint appointments at the Plant Gene Expression Center with ARS and the University of California (UC) Berkeley. "The natural function of Cg1 is to hold plants in the juvenile phase of development for a short time to induce more branching. Our Cg1 variant is special because it is always turned on, which means the plants always think they are juveniles."


Chuck and his colleague Sarah Hake, another co-author of the PNAS paper and director of the Plant Gene Expression Center, proposed that since juvenile biomass is less lignified, it should be easier to break down into fermentable sugars. Also, since juvenile plants don't make seed, more starch should be available for making biofuels. To test this hypothesis, they collaborated with Simmons and his colleagues at JBEI to determine the impact of introducing the Cg1 gene into switchgrass.


In addition to reducing the lignin and boosting the amount of starch in the switchgrass, the introduction and overexpression of the maize Cg1 gene also prevented the switchgrass from flowering even after more than two years of growth, an unexpected but advantageous result.


"The lack of flowering limits the risk of the genetically modified switchgrass from spreading genes into the wild population," says Chuck.


The results of this research offer a promising new approach for the improvement of dedicated bioenergy crops, but there are questions to be answered. For example, the Cg1 switchgrass biomass still required a pre-treatment to efficiently liberate fermentable sugars.


"The alteration of the switchgrass does allow us to use less energy in our pre-treatments to achieve high sugar yields as compared to the energy required to convert the wild type plants," Simmons says. "The results of this research set the stage for an expanded suite of pretreatment and saccharification approaches at JBEI and elsewhere that will be used to generate hydrolysates for characterization and fuel production."


Another question to be answered pertains to the mechanism by which Cg1 is able to keep switchgrass and other plants in the juvenile phase.


"We know that Cg1 is controlling an entire family of transcription factor genes," Chuck says, "but we have no idea how these genes function in the context of plant aging. It will probably take a few years to figure this out."


Co-authoring the PNAS paper with Chuck and Simmons were Christian Tobias, Lan Sun, Florian Kraemer, Chenlin Li, Dean Dibble, Rohit Arora, Jennifer Bragg, John Vogel, Seema Singh, Markus Pauly and Sarah Hake.



ScienceDaily (Oct. 31, 2011) — Researchers have used zinc oxide microwires to significantly improve the efficiency at which gallium nitride light-emitting diodes (LED) convert electricity to ultraviolet light. The devices are believed to be the first LEDs whose performance has been enhanced by the creation of an electrical charge in a piezoelectric material using the piezo-phototronic effect.
By applying mechanical strain to the microwires, researchers at the Georgia Institute of Technology created a piezoelectric potential in the wires, and that potential was used to tune the charge transport and enhance carrier injection in the LEDs. This control of an optoelectronic device with piezoelectric potential, known as piezo-phototronics, represents another example of how materials that have both piezoelectric and semiconducting properties can be controlled mechanically.

"By utilizing this effect, we can enhance the external efficiency of these devices by more than a factor of four, up to eight percent," said Zhong Lin Wang, a Regents professor in the Georgia Tech School of Materials Science and Engineering. "From a practical standpoint, this new effect could have many impacts for electro-optical processes -- including improvements in the energy efficiency of lighting devices."

Details of the research were reported in the Sept. 14 issue of the journal Nano Letters. The research was sponsored by the Defense Advanced Research Projects Agency (DARPA) and the U.S. Department of Energy (DOE). In addition to Wang, the research team included Qing Yang, a visiting scientist at Georgia Tech from the Department of Optical Engineering at Zhejiang University in China.

Because of the polarization of ions in the crystals of piezoelectric materials such as zinc oxide, mechanically compressing or otherwise straining structures made from the materials creates a piezoelectric potential -- an electrical charge. In the gallium nitride LEDs, the researchers used the local piezoelectric potential to tune the charge transport at the p-n junction.

The effect was to increase the rate at which electrons and holes recombined to generate photons, enhancing the external efficiency of the device through improved light emission and higher injection current. "The effect of the piezo potential on the transport behavior of charge carriers is significant due to its modification of the band structure at the junction," Wang explained.

The zinc oxide wires form the "n" component of a p-n junction, with the gallium nitride thin film providing the "p" component. Free carriers were trapped at this interface region in a channel created by the piezoelectric charge formed by compressing the wires.

Traditional LED designs use structures such as quantum wells to trap electrons and holes, which must remain close together long enough to recombine. The longer that electrons and holes can be retained in proximity to one another, the higher the efficiency of the LED device will ultimately be.

The devices produced by the Georgia Tech team increased their emission intensity by a factor of 17 and boosted injection current by a factor of four when compressive strain of 0.093 percent was applied to the zinc oxide wire. That improved conversion efficiency by as much as a factor of 4.25.
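The three figures quoted above are mutually consistent: external efficiency scales as light emitted per unit injected current, so a 17-fold emission gain at 4-fold current is a 17/4 = 4.25-fold efficiency gain. A one-line check:

```python
# Consistency check of the reported numbers: the efficiency gain is the
# emission-intensity gain divided by the injection-current gain.
emission_gain = 17.0   # emission intensity increased 17x under strain
current_gain = 4.0     # injection current increased 4x
efficiency_gain = emission_gain / current_gain
print(efficiency_gain)  # 4.25, matching the reported factor
```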

The LEDs fabricated by the research team produced emissions at ultraviolet wavelengths (about 390 nanometers), but Wang believes the wavelengths can be extended into the visible light range for a variety of optoelectronic devices. "These devices are important for today's focus on green and renewable energy technology," he said.

In the experimental devices, a single zinc oxide micro/nanowire LED was fabricated by manipulating a wire on a trenched substrate. A magnesium-doped gallium nitride film was grown epitaxially on a sapphire substrate by metalorganic chemical vapor deposition, and was used to form a p-n junction with the zinc oxide wire.

A sapphire substrate was used as the cathode that was placed side-by-side with the gallium nitride substrate with a well-controlled gap. The wire was placed across the gap in close contact with the gallium nitride. Transparent polystyrene tape was used to cover the nanowire. A force was then applied to the tape by an alumina rod connected to a piezo nanopositioning stage, creating the strain in the wire.

The researchers then studied the change in light emission produced by varying the amount of strain in 20 different devices. Half of the devices showed enhanced efficiency, while the others -- fabricated with the opposite orientation of the microwires -- showed a decrease. This difference was due to the reversal in the sign of the piezo potential because of the switch of the microwire orientation from +c to -c.

High-efficiency ultraviolet emitters are needed for applications in chemical, biological, aerospace, military and medical technologies. Although the internal quantum efficiencies of these LEDs can be as high as 80 percent, the external efficiency for a conventional single p-n junction thin-film LED is currently only about three percent.
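The two percentages above imply how few generated photons actually escape a conventional device. A minimal sketch, assuming the textbook approximation that external efficiency is the product of internal quantum efficiency and light-extraction efficiency (the decomposition is standard LED physics, not stated in the article):

```python
# Rough decomposition (textbook approximation, not from the article):
# external efficiency ~ internal quantum efficiency * extraction efficiency,
# so the implied extraction fraction for a conventional thin-film LED is:
internal_qe = 0.80   # internal quantum efficiency, up to 80 percent (article)
external_qe = 0.03   # external efficiency, about 3 percent (article)
extraction = external_qe / internal_qe
print(extraction)    # roughly 0.0375 -- only about 4% of photons escape
```

Seen this way, the piezo-phototronic gain attacks a large margin: almost all of the light generated inside a conventional device is currently lost.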

Beyond LEDs, Wang believes the approach pioneered in this study can be applied to other optical devices that are controlled by electrical fields.

"This opens up a new field of using the piezoelectric effect to tune opto-electronic devices," Wang said. "Improving the efficiency of LED lighting could ultimately be very important, bringing about significant energy savings because so much of the world's energy is used for lighting."

Researchers in the University of Toronto's Department of Materials Science & Engineering have developed the world's most efficient organic light-emitting diodes (OLEDs) on plastic. This result enables a flexible form factor and a less costly alternative to traditional OLED manufacturing, which currently relies on rigid glass.
The results are reported online in the latest issue of Nature Photonics.

OLEDs provide high-contrast and low-energy displays that are rapidly becoming the dominant technology for advanced electronic screens. They are already used in some cell phone and other smaller-scale applications.

Current state-of-the-art OLEDs are produced using heavy-metal doped glass in order to achieve high efficiency and brightness, which makes them expensive to manufacture, heavy, rigid and fragile.

"For years, the biggest excitement behind OLED technologies has been the potential to effectively produce them on flexible plastic," says Materials Science & Engineering Professor Zheng-Hong Lu, the Canada Research Chair (Tier I) in Organic Optoelectronics.

Using plastic can substantially reduce the cost of production, while providing designers with a more durable and flexible material to use in their products.

The research, which was supervised by Professor Lu and led by PhD Candidates Zhibin Wang and Michael G. Helander, demonstrated the first high-efficiency OLED on plastic. The performance of their device is comparable with the best glass-based OLEDs, while providing the benefits offered by using plastic.

"This discovery unlocks the full potential of OLEDs, leading the way to energy-efficient, flexible and impact-resistant displays," says Professor Lu.
