ScienceDaily (Oct. 27, 2011) — Patients who had a transient ischaemic attack (TIA), sometimes referred to as a "mini stroke," were much less likely to experience further vascular events in the first year if their care was co-ordinated by a special hospital team. That is the key finding from a study published in the November issue of the European Journal of Neurology.

Researchers from the Department of Neurology at Aarhus University Hospital in Denmark studied 306 patients admitted to the hospital with a TIA. They found that when the patients were treated by an acute TIA team their cumulated risk of having a stroke in the first seven days was 65% lower than expected. The cumulated risk in the first 90 days fell by 74%.

"The aim of our study was to see if patients had better clinical outcomes if they were under the care of a special team, which integrated outpatient care and stroke unit facilities and provided ongoing nurse-led counselling," says lead author Dr Paul von Weitzel-Mudersbach.

"TIA, which is caused by a temporary lack of blood to part of the brain, is a serious condition associated with a high short-term risk of ischaemic stroke. Previous research has shown that the cumulated stroke risk in the first three months after a TIA is 10 to 12% in unselected patients and more than 30% in patients with carotid stenosis, a dangerous narrowing of the largest blood vessels that deliver blood to the brain.

"Although urgent intervention has been shown to reduce the risk of stroke, a number of previous studies have shown poor long-term drug compliance in many patients."

The patients were referred directly to the acute TIA team by their family doctor or ambulance, bypassing the emergency department. Patients who had suffered a TIA in the previous 48 hours, and those with multiple TIAs, faced a high risk of stroke and were admitted to the stroke unit. This allowed immediate preventative action, including thrombolysis drugs to break up blood clots, in the event of a recurrent stroke. The other patients were seen in the outpatient department within three days of referral.

All the patients seen by the team received acute treatment with antithrombotic and cholesterol-lowering drugs and were offered fast-track surgery if they had carotid stenosis. Follow-up included nurse-conducted health counselling after seven, 90 and 365 days. Each contact covered secondary prevention measures, such as drug compliance and stopping smoking.

Key findings of the study included:

Just under two-thirds of the patients (65%) were admitted immediately after their TIA, with the rest seen as outpatients. Inpatient stays averaged one day.
Over half (58%) were seen within 24 hours of their TIA and 70% within 24 hours of the call for attention. The figures at one week were 76% and 89% respectively.
Just over 5% had a stroke, non-fatal heart attack or died from a vascular event within a year of their TIA.
The cumulated stroke risk was calculated and compared with the ABCD2 score, an established method of identifying individuals with a high early risk of stroke after a TIA. The actual rates in the Aarhus study were 1.6% and 2% after seven and 90 days, significantly lower than the ABCD2-predicted stroke risks of 4.5% and 7.5%.
Early surgery to remove the build-up of plaque in the carotid blood vessels was performed in 8.5% of patients. However, the authors believe this played only a minor role in the reduced risk.
The majority of the patients (95%) fulfilled at least one secondary prevention measure: reduced blood pressure, reduced cholesterol, no smoking and self-reported adherence to antithrombotic treatment. 48% achieved three of the four targets.
Most of the patients (93%) adhered to their antithrombotic treatment.
More than 60% of the patients who smoked at the time of their TIA changed their smoking habits: 31% quit and 29.5% reduced their smoking by at least 50%. Most of the changes happened in the first seven days.
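The ABCD2 score mentioned above is a standard clinical scoring rule that sums points for Age, Blood pressure, Clinical features, Duration of symptoms and Diabetes, giving a total of 0 to 7. As an illustration only (the function name and parameters below are ours, not part of the Aarhus study), the score can be sketched as:

```python
def abcd2_score(age, systolic_bp, diastolic_bp, unilateral_weakness,
                speech_disturbance, duration_minutes, diabetes):
    """Compute the ABCD2 score (0-7) for early stroke risk after a TIA.

    Points: age >= 60 (1); BP >= 140/90 mmHg (1); unilateral weakness (2)
    or speech disturbance without weakness (1); duration >= 60 min (2)
    or 10-59 min (1); diabetes (1).
    """
    score = 0
    if age >= 60:                                    # A: age
        score += 1
    if systolic_bp >= 140 or diastolic_bp >= 90:     # B: blood pressure
        score += 1
    if unilateral_weakness:                          # C: clinical features
        score += 2
    elif speech_disturbance:
        score += 1
    if duration_minutes >= 60:                       # D: symptom duration
        score += 2
    elif duration_minutes >= 10:
        score += 1
    if diabetes:                                     # D: diabetes
        score += 1
    return score
```

For example, a 72-year-old with a blood pressure of 150/85, unilateral weakness and 45 minutes of symptoms scores 5 out of 7; higher scores indicate a higher early risk of stroke.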

"Our study shows that urgent treatment of patients with TIA is feasible and associated with a substantial reduction in stroke risk during the first three months, which is consistent with previous studies from the UK and France," says Dr von Weitzel-Mudersbach.

"We believe that early and aggressive antithrombotic treatment may play a major role in the reduction of short-term stroke risk in most patients. Meanwhile, the combination of secondary prevention efforts with a relatively high compliance rate -- including the essential telephone follow-up provided by a specially trained nurse in the first three months -- was probably responsible for the low long-term risk of adverse clinical outcome.

"Treating TIA by deploying a specialist team that can admit patients when the risk of recurrent symptoms is highest and prompt thrombolysis can be used, combined with nurse-conducted health counselling, seems to be effective."


Story Source:

The above story is reprinted from materials provided by Wiley-Blackwell.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

P. v. Weitzel-Mudersbach, S. P. Johnsen, G. Andersen. Low risk of vascular events following urgent treatment of transient ischaemic attack: the Aarhus TIA study. European Journal of Neurology, 2011; 18 (11): 1285 DOI: 10.1111/j.1468-1331.2011.03452.x

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



ScienceDaily (Oct. 27, 2011) — Venerannda Leon Guerrero cradled her slumbering infant in her arms in a CEDDERS testing center at the University of Guam as she watched an audiologist in Colorado conduct a diagnostic test to determine whether or not her baby has a hearing loss. The remote test was held on October 19 and marked the first technology-enabled distance diagnostic testing for hearing loss on very young infants on the island.

This event was made possible through the Teleaudiology Project, a collaboration between Dr. Debra Hayes and Dr. Susan Dreith of the Bill Daniels Center for Children's Hearing, Children's Hospital-Colorado, and the University of Guam CEDDERS Guam Early Hearing Detection and Intervention (EHDI) project, with support from the Guam Department of Education, Division of Special Education -- Early Intervention Program. Dr. Dreith and Dr. Ericka Schicke have obtained their licenses to practice as audiologists on Guam.

Drs. Dreith and Schicke at Children's Hospital-Colorado operate the diagnostic audiological equipment remotely from Colorado, after audiometrists on Guam prepare the parent and infant for testing. The Diagnostic Audiological Evaluation (DAE) may take up to two hours to complete and requires the infant to be asleep throughout. Parents know by the end of the test whether their infant has a hearing loss.

The urgent need to diagnose hearing loss in very young infants prompted this collaboration to bring the service to families on Guam. Infants on Guam who do not pass their newborn hearing screening can now be evaluated for hearing loss before 3 months of age, allowing early intervention services to begin, if needed, by the time the infant reaches 6 months of age. Timely early intervention gives the infant and family the greatest opportunity for the child to develop speech and language for life-long success. Families no longer have to travel off-island to obtain diagnostic audiological evaluations for their infants.

"I think this accomplishment under UOG/Guam CEDDERS is a major step forward in the use of technology to support our community. Thanks to this partnership, babies on this island will get the needed pediatric audiological services from certified professionals, an area lacking on Guam," said Velma Sablan, professor at the University of Guam and experienced professional in the field of early hearing detection and intervention.


Story Source:

The above story is reprinted from materials provided by University of Guam.




ScienceDaily (Oct. 27, 2011) — Poorer countries and those that spend proportionately less money on health care have more stroke and stroke deaths than wealthier nations and those that allocate more to health care, according to new research in Stroke: Journal of the American Heart Association.

Poorer countries also had a greater incidence of hemorrhagic stroke -- caused by a burst blood vessel bleeding in or near the brain -- and had more frequent onset at younger ages.

Regardless of overall wealth, countries that spend less money proportionately on health care also had higher incidences of all four outcomes.

"Not only is the economic wellness of a country important, but also significant is what proportion of their gross domestic product is expended on health," said Luciano A. Sposato, M.D., M.B.A., study lead author and director of the neurology department at the Vascular Research Institute at INECO Foundation in Buenos Aires, Argentina. "This is very important for developing healthcare strategies to prevent stroke and other cardiovascular diseases."

In the large-scale literature review, researchers took a unique approach to identify stroke risk by correlating it to nationwide socioeconomic status.

Previous research tended to focus on the link between stroke and individual or family financial standing, said Sposato, also director of the Stroke Center at the Institute of Neurosciences, University Hospital Favaloro Foundation.

The study linked lower gross domestic product to:

32 percent higher risk of strokes;
43 percent increase in post-stroke deaths at 30 days;
43 percent increase in hemorrhagic stroke; and
47 percent higher incidence of younger-age-onset stroke.

Similarly, a lower percentage of health spending correlated to:

26 percent higher risk of strokes;
45 percent increase in post-stroke deaths at 30 days;
32 percent increase in hemorrhagic stroke; and
36 percent higher incidence of younger-age-onset stroke.

Investigators analyzed 30 population-based studies conducted between 1998 and 2008 in 22 countries. They used statistical methods to link stroke risk, 30-day death rate, hemorrhagic stroke incidence and age at disease onset to three internationally accepted economic indicators. The indicators included gross domestic product, health expenditure per capita and unemployment rate. Unlike the other two indicators, unemployment rate didn't affect stroke or other outcomes.

"It is important to further discuss the health priorities for different countries," said Gustavo Saposnik, M.D., M.Sc., study co-author and director of stroke outcomes research at St. Michael's Hospital, University of Toronto, Canada. "This will provide the necessary background to help countries make the changes in how different resources and money are allocated."

Stroke is the fourth leading cause of death in the United States and a major cause of long-term disability. Worldwide, stroke is the second leading killer.

Dr. Sposato's participation was funded in part by the INECO Foundation.


Story Source:

The above story is reprinted from materials provided by American Heart Association.


Journal Reference:

Luciano A. Sposato, Gustavo Saposnik. Gross Domestic Product and Health Expenditure Associated With Incidence, 30-Day Fatality, and Age at Stroke Onset: A Systematic Review. Stroke, 2011; DOI: 10.1161/STROKEAHA.111.632158




ScienceDaily (Oct. 27, 2011) — Being hard up socially and financially during adolescence and early adulthood takes its toll on the body, and leads to physiological wear and tear in middle-aged men and women, irrespective of how tough things have been in the interim. According to Dr. Per E. Gustafsson from Umeå University in Sweden and colleagues, experience of social and material stressors around the time of transition into adulthood is linked to a rise in disease risk factors in middle age, including higher blood pressure, body weight and cholesterol.

Their work is published online in Springer's journal Annals of Behavioral Medicine.

The authors looked at the influence of both social factors and material deprivation during adolescence and adulthood on the physiological wear and tear on the body that results from ongoing adaptive efforts to maintain stability in response to stressors. These adaptive efforts are known as 'allostatic load'. Allostatic load is thought to predict various health problems, including declines in physical and cognitive functioning, and cardiovascular disease and mortality.

The researchers analyzed data for 822 participants in the Northern Swedish Cohort, which follows subjects from the age of 16 for a 27-year period. They looked at measures of social adversity including parental illness and loss, social isolation, exposure to threat or violence and material adversity including parental unemployment, poor standard of living, low income and financial strain. They also examined allostatic load at age 43 based on 12 biological factors linked to cardiovascular regulation, body fat deposition, lipid metabolism, glucose metabolism, inflammation and neuroendocrine regulation.

They found that early adversity involved a greater risk for adverse life circumstances later in adulthood. The analyses revealed adolescence as a particularly sensitive period for women and young adulthood as a particularly sensitive period for men. Specifically, women who had experienced social adversity in adolescence, and men who had experienced it during young adulthood, suffered greater allostatic load at age 43. This was independent of overall socioeconomic disadvantage and also of later adversity exposure during adulthood.

The authors conclude: "Our results support the hypothesis that physiological wear and tear visible in mid-adulthood is influenced by the accumulation of unfavourable social exposures over the life course, but also by social adversity measured around the transition into adulthood, independent of later adversity."


Story Source:

The above story is reprinted from materials provided by Springer Science+Business Media.


Journal Reference:

Per E. Gustafsson, Urban Janlert, Töres Theorell, Hugo Westerlund, Anne Hammarström. Social and Material Adversity from Adolescence to Adulthood and Allostatic Load in Middle-Aged Women and Men: Results from the Northern Swedish Cohort. Annals of Behavioral Medicine, 2011; DOI: 10.1007/s12160-011-9309-6




ScienceDaily (Oct. 27, 2011) — In a development that sheds new light on the pathology of Alzheimer's disease (AD), a team of Whitehead Institute scientists has identified connections between genetic risk factors for the disease and the effects of a peptide toxic to nerve cells in the brains of AD patients.

The scientists, working in and in collaboration with the lab of Whitehead Member Susan Lindquist, established these previously unknown links in an unexpected way. They used a very simple cell type -- yeast cells -- to investigate the harmful effects of amyloid beta (Aβ), a peptide whose accumulation in amyloid plaques is a hallmark of AD. This new yeast model of Aβ toxicity, which they further validated in the worm C. elegans and in rat neurons, enables researchers to identify and test potential genetic modifiers of this toxicity.

"As we tackle other diseases and extend our lifetimes, Alzheimer's and related diseases will be the most devastating personal challenge for our families and one of the most crushing burdens on our economy," says Lindquist, who is also a professor of biology at Massachusetts Institute of Technology and an investigator of the Howard Hughes Medical Institute. "We have to try new approaches and find out-of-the-box solutions."

In a multi-step process, reported in the journal Science, the researchers were able to introduce the form of Aβ most closely associated with AD into yeast in a manner that mimics its presence in human cells. The resulting toxicity in yeast reflects aspects of the mechanism by which this protein damages neurons. This became clear when a screen of the yeast genome for genes that affect Aβ toxicity identified a dozen genes that have clear human homologs, including several that have previously been linked to AD risk by genome-wide association studies (GWAS) but with no known mechanistic connection.

With these genetic candidates in hand, the team set out to answer two key questions: Would the genes identified in yeast actually affect Aβ toxicity in neurons? And if so, how?

To address the first issue, in a collaboration with Guy Caldwell's lab at the University of Alabama, researchers created lines of C. elegans worms expressing the toxic form of Aβ specifically in a subset of neurons particularly vulnerable in AD. This resulted in an age-dependent loss of these neurons. Introducing the genes identified in the yeast that suppressed Aβ toxicity into the worms counteracted this toxicity. One of these modifiers is the homolog of PICALM, one of the most highly validated human AD risk factors. To address whether PICALM could also suppress Aβ toxicity in mammalian neurons, the group exposed cultured rat neurons to toxic Aβ species. Expressing PICALM in these neurons increased their survival.

The question of how these AD risk genes were actually impacting Aβ toxicity in neurons remained. The researchers had noted that many of the genes were associated with a key cellular protein-trafficking process known as endocytosis. This is the pathway that nerve cells use to move around the vital signaling molecules with which they connect circuits in the brain. They theorized that perhaps Aβ was doing its damage by disrupting this process. Returning to yeast, they discovered that, in fact, the trafficking of signaling molecules in yeast was adversely affected by Aβ. Here again, introducing genes identified as suppressors of Aβ toxicity helped restore proper functioning.

Much remains to be learned, but the work provides a new and promising avenue to explore the mechanisms of genes identified in studies of disease susceptibility.

"We now have the sequencing power to detect all these important disease risk alleles, but that doesn't tell us what they're actually doing, how they lead to disease," says Sebastian Treusch, a former graduate student in the Lindquist lab and now a postdoctoral research associate at Princeton University.

Jessica Goodman, a postdoctoral fellow in the Lindquist lab, says the yeast model provides a link between genetic data and efforts to understand AD from the biochemical and neurological perspectives.

"Our yeast model bridges the gap between these two fields," Goodman adds. "It enables us to figure out the mechanisms of these risk factors which were previously unknown."

Members of the Lindquist lab intend to fully exploit the yeast model, using it to identify novel AD risk genes, perhaps in a first step to determining if identified genes have mutations in AD patient samples. The work will undoubtedly take the lab into uncharted territory.

Notes staff scientist Kent Matlack: "We know that Aβ is toxic, and so far, the majority of efforts in the area of Aβ have been focused on ways to prevent it from forming in the first place. But we need to look at everything, including ways to reduce or prevent its toxicity. That's the focus of the model. Any genes that we find that we can connect to humans will go into an area of research that has been less explored so far."

This work was supported by an HHMI Collaborative Innovation Award, an NRSA fellowship, the Cure Alzheimer's Fund, the National Institutes of Health, the Kempe foundation, and Alzheimerfonden.


Story Source:

The above story is reprinted from materials provided by Whitehead Institute for Biomedical Research. The original article was written by Matt Fearer.


Journal Reference:

Sebastian Treusch, Shusei Hamamichi, Jessica L. Goodman, Kent E. S. Matlack, Chee Yeun Chung, Valeriya Baru, Joshua M. Shulman, Antonio Parrado, Brooke J. Bevis, Julie S. Valastyan, Haesun Han, Malin Lindhagen-Persson, Eric M. Reiman, Denis A. Evans, David A. Bennett, Anders Olofsson, Philip L. De Jager, Rudolph E. Tanzi, Kim A. Caldwell, Guy A. Caldwell, Susan Lindquist. Functional Links Between Aβ Toxicity, Endocytic Trafficking, and Alzheimer's Disease Risk Factors in Yeast. Science, 2011; DOI: 10.1126/science.1213210




ScienceDaily (Oct. 27, 2011) — Curiosity may have killed the cat, but it's good for the student. That's the conclusion of a new study published in Perspectives on Psychological Science, a journal of the Association for Psychological Science. The authors show that curiosity is a big part of academic performance. In fact, personality traits like curiosity seem to be as important as intelligence in determining how well students do in school.

Intelligence is important to academic performance, but it's not the whole story. Everyone knows a brilliant kid who failed school, or someone with mediocre smarts who made up for it with hard work. So psychological scientists have started looking at factors other than intelligence that make some students do better than others.

One of those is conscientiousness -- basically, the inclination to go to class and do your homework. People who score high on this personality trait tend to do well in school. "It's not a huge surprise if you think of it, that hard work would be a predictor of academic performance," says Sophie von Stumm of the University of Edinburgh in the UK. She co-wrote the new paper with Benedikt Hell of the University of Applied Sciences Northwestern Switzerland and Tomas Chamorro-Premuzic of Goldsmiths, University of London.

von Stumm and her coauthors wondered if curiosity might be another important factor. "Curiosity is basically a hunger for exploration," von Stumm says. "If you're intellectually curious, you'll go home, you'll read the books. If you're perceptually curious, you might go traveling to foreign countries and try different foods." Both of these, she thought, could help you do better in school.

The researchers performed a meta-analysis, gathering the data from about 200 studies with a total of about 50,000 students. They found that curiosity did, indeed, influence academic performance. In fact, it had quite a large effect, about the same as conscientiousness. When put together, conscientiousness and curiosity had as big an effect on performance as intelligence.

von Stumm wasn't surprised that curiosity was so important. "I'm a strong believer in the importance of a hungry mind for achievement, so I was just glad to finally have a good piece of evidence," she says. "Teachers have a great opportunity to inspire curiosity in their students, to make them engaged and independent learners. That is very important."

Employers may also want to take note: a curious person who likes to read books, travel the world, and go to museums may also enjoy and engage in learning new tasks on the job. "It's easy to hire someone who has done the job before and hence knows how to work the role," von Stumm says. "But it's far more interesting to identify those people who have the greatest potential for development, i.e. the curious ones."


Story Source:

The above story is reprinted from materials provided by Association for Psychological Science.


Journal Reference:

S. von Stumm, B. Hell, T. Chamorro-Premuzic. The Hungry Mind: Intellectual Curiosity Is the Third Pillar of Academic Performance. Perspectives on Psychological Science, 2011; 6 (6): 574 DOI: 10.1177/1745691611421204




ScienceDaily (Oct. 27, 2011) — One of the things that makes inhalational anthrax so worrisome for biodefense experts is how quickly a relatively small number of inhaled anthrax spores can turn into a lethal infection. By the time an anthrax victim realizes he or she has something worse than the flu and seeks treatment, it's often too late; even the most powerful antibiotics may be no help against the spreading bacteria and the potent toxins they generate.

Now, though, University of Texas Medical Branch at Galveston researchers have found new allies for the fight against anthrax. Known as natural killer cells, they're a part of the immune system normally associated with eliminating tumor cells and cells infected by viruses. But natural killer cells also attack bacteria -- including anthrax, according to the UTMB group.

"People become ill so suddenly from inhalational anthrax that there isn't time for a T cell response, the more traditional cellular immune response," said UTMB assistant professor Janice Endsley, lead author of a paper now online in the journal Infection and Immunity. "NK cells can do a lot of the same things, and they can do them immediately."

In test-tube experiments, a collaborative team led by Endsley and Professor Johnny Peterson profiled the NK cell response to anthrax, documenting how NK cells successfully detected and killed cells that had been infected by anthrax, destroying the bacteria inside the cells along with them. Surprisingly, they found that NK cells were also able to detect and kill anthrax bacteria outside of human cells.

"Somehow these NK cells were able to recognize that there was something hostile there, and they actually caused the death of these bacteria," Endsley said.

In further experiments, the group compared the anthrax infection responses of normal mice and mice that were given a treatment to remove NK cells from the body. All the mice died with equal rapidity when given a large dose of anthrax spores, but the non-treated (NK cell-intact) mice had much lower levels of bacteria in their blood. "This is a significant finding," Endsley said. "Growth of bacteria in the bloodstream is an important part of the disease process."

The next step, according to Endsley, is to apply an existing NK cell-augmentation technique (many have already been developed for cancer research) to mice, in an attempt to see if the more numerous and active NK cells can protect them from anthrax. Even if the augmented NK cells don't provide enough protection by themselves, they could give a crucial boost in combination with antibiotic treatment.

"We may not be able to completely control something just by modulating the immune response," Endsley said. "But if we can complement antibiotic effects and improve the efficiency of antibiotics, that would be of value as well."


Story Source:

The above story is reprinted from materials provided by University of Texas Medical Branch at Galveston.


Journal Reference:

C. M. Gonzales, C. B. Williams, V. E. Calderon, M. B. Huante, S. T. Moen, V. L. Popov, W. B. Baze, J. W. Peterson, J. J. Endsley. Antibacterial Role for Natural Killer Cells in Host Defense to Bacillus Anthracis. Infection and Immunity, 2011; DOI: 10.1128/IAI.05439-11




ScienceDaily (Oct. 27, 2011) — Researchers have built a map that shows how thousands of proteins in a fruit fly cell communicate with each other. This is the largest and most detailed protein interaction map of a multicellular organism, demonstrating how approximately 5,000, or one third, of the proteins cooperate to keep life going.

"My group has been working for decades, trying to unravel the precise connections among the proteins and gain insight into how the cell functions as a whole," says Spyros Artavanis-Tsakonas, Harvard Medical School professor of cell biology and senior author on the paper. "For me, and hopefully researchers studying protein interactions, this map is a dream come true."

The study is published October 28 in the journal Cell.

While genes are a cell's data repository, containing all the instructions necessary for life, proteins are its labor force, talking to each other constantly and channeling vital information through vast and complicated networks to keep life stable and healthy. Humans and fruit flies are both descended from a common ancestor, and in most cases, both species still rely on the same ancient cellular machinery for survival. In that respect, the fruit fly's map serves as sort of a blueprint, a useful guide into the cellular activity of many higher organisms.

Understanding how proteins behave normally is often the key to their disease-causing behavior.

For this study, Artavanis-Tsakonas and his colleagues provide the first large-scale map of this population of proteins. Their map, which is not yet fully complete, reveals many of the relationships these myriad proteins form with each other as they collaborate, something that has largely remained a mystery to biologists.

"We already know what approximately one-third of these proteins do," Artavanis-Tsakonas said. "For another third of them we can sort of guess. But there's another third that we know nothing about. And now through this kind of analysis we can begin to explore the functions of these proteins. This is giving us extraordinary insight into how the cell works."

One significant use for such a map is to assess how a cell responds to changes in metabolic conditions, such as interactions with drugs or in conditions where genetic alterations occur. Finding such answers might lead to future drug treatments for disease, and perhaps to a deeper understanding of what occurs in conditions such as cancer.

"This is of extraordinary translational value," Artavanis-Tsakonas said. "In order to know how the proteins work you must know who they talk to. And then you can examine whether a disease somehow alters this conversation."

A pivotal part of this research involved mass spectrometry, a technique relatively new to biology. The ultra-precise mass spectrometry experiments were done by HMS professor of cell biology Steven Gygi. Mass spectrometry measures the exact weight (the mass) of each individual protein in a sample and thus identifies it. Originally devised by physicists for analyzing atomic particles, the technique has in recent years been adapted and refined for new and powerful uses in basic biological research. Other studies using similar techniques have to date focused on small groups of related proteins or on single-celled model organisms such as bacteria and yeast.

Despite the huge amount already known about the fruit fly and its genetic endowment, much about the function of thousands of proteins remains a mystery. This map, however, now gives us precise clues about their function. Filling in the detailed protein map can help scientists gain important insights into the process of development, that is, how a creature is put together, maintained and operated.

"Our analyses also shed light on how proteins and protein networks have evolved in different animals," said K. G. Guruharsha, a postdoctoral fellow in Artavanis-Tsakonas's lab and a first author on the paper.

Co-lead authors on the paper included Jean-Francois Rual, also a postdoctoral fellow in Artavanis-Tsakonas's lab, and Julian Mintseris and Bo Zhai, both research fellows in Gygi's lab.

Also important in this effort was the work of K. VijayRaghavan, at the National Centre for Biological Sciences in Bangalore, India. Crucial contributions also came from the University of California, Berkeley, where Susan E. Celniker collaborated through her studies at the fruit fly genome center.

This research was funded by the National Institutes of Health.


Story Source:

The above story is reprinted from materials provided by Harvard Medical School. The original article was written by Robert Cooke.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Note: If no author is given, the source is cited instead.

Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



ScienceDaily (Oct. 27, 2011) — In a recent study by University of Kentucky researchers, watermelon was shown to reduce atherosclerosis in animals.

The animal model used for the study involved mice with diet-induced high cholesterol. A control group was given water to drink, while the experimental group was given watermelon juice. By week eight of the study, the animals given watermelon juice had lower body weight than the control group, due to a decrease in fat mass. They experienced no decrease in lean mass. Plasma cholesterol concentrations were significantly lower in the experimental group, with modestly reduced intermediate and low-density lipoprotein cholesterol concentrations as compared to the control group.

A measurement of atherosclerotic lesion areas revealed that the watermelon juice group also experienced statistically significant reductions in atherosclerotic lesions, as compared to the control group.

"Melons have many health benefits," said lead investigator Dr. Sibu Saha. "This pilot study has found three interesting health benefits in a mouse model of atherosclerosis. Our ultimate goal is to identify bioactive compounds that would improve human health."

The study was conducted by Sibu Saha, UK Department of Surgery; Aruna Poduri, UK Saha Cardiovascular Research Center (UK Saha CVRC); Debra L. Rateri, UK Saha CVRC; Shubin Saha of Purdue University; and Alan Daugherty, director, UK Saha CVRC.


Story Source:

The above story is reprinted from materials provided by University of Kentucky.




ScienceDaily (Oct. 27, 2011) — Three planets -- each orbiting its own giant, dying star -- have been discovered by an international research team led by a Penn State University astronomer.

Using the Hobby-Eberly Telescope, astronomers observed the planets' parent stars -- called HD 240237, BD +48 738, and HD 96127 -- tens of light years away from our solar system. One of the massive, dying stars has an additional mystery object orbiting it, according to team leader Alex Wolszczan, an Evan Pugh Professor of Astronomy and Astrophysics at Penn State, who, in 1992, became the first astronomer ever to discover planets outside our solar system. The new research is expected to shed light on the evolution of planetary systems around dying stars. It also will help astronomers to understand how metal content influences the behavior of dying stars.

The research will be published in December in the Astrophysical Journal. The first author of the paper is Sara Gettel, a graduate student from Penn State's Department of Astronomy and Astrophysics, and the paper is co-authored by three graduate students from Poland.

The three newly-discovered planetary systems are more evolved than our own solar system. "Each of the three stars is swelling and has already become a red giant -- a dying star that soon will gobble up any planet that happens to be orbiting too close to it," Wolszczan said. "While we certainly can expect a similar fate for our own Sun, which eventually will become a red giant and possibly will consume our Earth, we won't have to worry about it happening for another five billion years." Wolszczan also said that one of the massive, dying stars -- BD +48 738 -- is accompanied not only by an enormous, Jupiter-like planet, but also by a second, mystery object. According to the team, this object could be another planet, a low-mass star, or -- most interestingly -- a brown dwarf, which is a star-like body that is intermediate in mass between the coolest stars and giant planets. "We will continue to watch this strange object and, in a few more years, we hope to be able to reveal its identity," Wolszczan said.

The three dying stars and their accompanying planets have been particularly useful to the research team because they have helped to illuminate such ongoing mysteries as how dying stars behave depending on their metallicity. "First, we know that giant stars like HD 240237, BD +48 738, and HD 96127 are especially noisy. That is, they appear jittery, because they oscillate much more than our own, much-younger Sun. This noisiness disturbs the observation process, making it a challenge to discover any companion planets," Wolszczan said. "Still, we persevered and we eventually were able to spot the planets orbiting each massive star."

Once Wolszczan and his team had confirmed that HD 240237, BD +48 738, and HD 96127 did indeed have planets orbiting around them, they measured the metal content of the stars and found some interesting correlations. "We found a negative correlation between a star's metallicity and its jitteriness. It turns out that the less metal content each star had, the more noisy and jittery it was," Wolszczan explained. "Our own Sun vibrates slightly too, but because it is much younger, its atmosphere is much less turbulent."

Wolszczan also pointed out that, as stars swell to the red-giant stage, planetary orbits change and even intersect, and close-in planets and moons eventually get swallowed by the dying star. For this reason, it is possible that HD 240237, BD +48 738, and HD 96127 once might have had more planets in orbit, but that these planets were consumed over time. "It's interesting to note that, of these three newly-discovered stars, none has a planet at a distance closer than 0.6 astronomical units -- that is, 0.6 the distance of the Earth to our Sun," Wolszczan said. "It might be that 0.6 is the magic number at which any closer distance spells a planet's demise."

Observations of dying stars, their metal content, and how they affect the planets around them could provide clues about the fate of our own solar system. "Of course, in about five billion years, our Sun will become a red giant and likely will swallow up the inner planets and the planets' accompanying moons. However, if we're still around in, say, one billion to three billion years, we might consider taking up residence on Jupiter's moon, Europa, for the remaining couple billion years before that happens," Wolszczan said. "Europa is an icy wasteland and it is certainly not habitable now, but as the Sun continues to heat up and expand, our Earth will become too hot, while at the same time, Europa will melt and may spend a couple billion years in the Goldilocks zone -- not too hot, not too cold, covered by vast, beautiful oceans."

Penn State's Center for Exoplanets and Habitable Worlds is organizing a conference in January 2012 to discuss planets and their dying stars. The conference will be held in Puerto Rico and is scheduled to take place exactly 20 years after Wolszczan used the 1,000-foot Arecibo radio telescope to detect three planets orbiting a rapidly spinning neutron star -- the very first discovery of planets outside our solar system. This discovery opened the door to the current intense era of planet hunting by suggesting that planet formation could be quite common throughout the universe and that planets can form around different types of stellar objects. More information about the conference is online.

In addition to Wolszczan and Gettel at Penn State, other members of the research team include Andrzej Niedzielski and Gracjan Maciejewski; and three graduate students, Grzegorz Nowak, Monika Adamów, and Pawel Zielinski, who are all from Nicolaus Copernicus University in Torun, Poland.

Funding for this research was provided by NASA and the Polish Ministry of Science and Higher Education.


Story Source:

The above story is reprinted from materials provided by Penn State.




ScienceDaily (Oct. 28, 2011) — Fat, doughnut-shaped dust shrouds that obscure about half of supermassive black holes could be the result of high-speed crashes between planets and asteroids, according to a new theory from an international team of astronomers.

The scientists, led by Dr. Sergei Nayakshin of the University of Leicester, are publishing their results in the journal Monthly Notices of the Royal Astronomical Society.

Supermassive black holes reside in the central parts of most galaxies. Observations indicate that about 50% of them are hidden from view by mysterious clouds of dust, the origin of which is not completely understood. The new theory is inspired by our own Solar System, where the so-called zodiacal dust is known to originate from collisions between solid bodies such as asteroids and comets. The scientists propose that the central regions of galaxies contain not only black holes and stars but also planets and asteroids.

Collisions between these rocky objects would occur at colossal speeds of up to 1,000 km per second, continuously shattering and fragmenting the objects until eventually they end up as microscopic dust. Dr. Nayakshin points out that this harsh environment -- radiation and frequent collisions -- would make the planets orbiting supermassive black holes sterile, even before they are destroyed. "Too bad for life on these planets," he says, "but on the other hand the dust created in this way blocks much of the harmful radiation from reaching the rest of the host galaxy. This in turn may make it easier for life to prosper elsewhere in the rest of the central region of the galaxy."

He also believes that understanding the origin of the dust near black holes is important in our models of how these monsters grow and how exactly they affect their host galaxies. "We suspect that the supermassive black hole in our own Galaxy, the Milky Way, expelled most of the gas that would otherwise turn into more stars and planets," he continues. "Understanding the origin of the dust in the inner regions of galaxies would take us one step closer to solving the mystery of the supermassive black holes."


Story Source:

The above story is reprinted from materials provided by Royal Astronomical Society (RAS).


Journal Reference:

Sergei Nayakshin, Sergey Sazonov, Rashid Sunyaev. Are SMBHs shrouded by 'super-Oort' clouds of comets and asteroids? Monthly Notices of the Royal Astronomical Society, 2011; (submitted)




ScienceDaily (Oct. 27, 2011) — Researchers from North Carolina State University have developed a new computational approach to improve the utility of superconductive materials for specific design applications -- and have used the approach to solve a key research obstacle for the next-generation superconductor material yttrium barium copper oxide (YBCO).

A superconductor is a material that can carry electricity without any loss -- none of the energy is dissipated as heat, for example. Superconductive materials are currently used in medical MRI technology, and are expected to play a prominent role in emerging power technologies, such as energy storage or high-efficiency wind turbines.

One problem facing systems engineers who want to design technologies that use superconductive materials is that they are required to design products based on the properties of existing materials. But NC State researchers are proposing an approach that would allow product designers to interact directly with the industry that creates superconductive materials -- such as wires -- to create superconductors that more precisely match the needs of the finished product.

"We are introducing the idea that wire manufacturers work with systems engineers earlier in the process, utilizing computer models to create better materials more quickly," says Dr. Justin Schwartz, lead author of a paper on the process and Kobe Steel Distinguished Professor and head of NC State's Department of Materials Science and Engineering. "This approach moves us closer to the ideal of having materials engineering become part of the product design process."

To demonstrate the utility of the process, researchers tackled a problem facing next-generation YBCO superconductors. YBCO conductors are promising because they are very strong and have a high superconducting current density -- meaning they can handle a large amount of electricity. But there are obstacles to their widespread use.

One of these key obstacles is how to handle "quench." Quench is when a superconductor suddenly loses its superconductivity. Superconductors are used to store large amounts of electricity in a magnetic field -- but a quench unleashes all of that stored energy. If the energy isn't managed properly, it will destroy the system -- which can be extremely expensive. "Basically, the better a material is as a superconductor, the more electricity it can handle, so it has a higher energy density, and that makes quench protection more important, because the material may release more energy when quenched," Schwartz says.

To address the problem, researchers explored seven different variables to determine how best to design YBCO conductors in order to optimize performance and minimize quench risk. For example, does increasing the thickness of the YBCO increase or decrease quench risk? As it turns out, it actually decreases quench risk. A number of other variables come into play as well, but the new approach was effective in helping researchers identify meaningful ways of addressing quench risk.

"The insight we've gained into YBCO quench behavior, and our new process for designing better materials, will likely accelerate the use of YBCO in areas ranging from new power applications to medical technologies -- or even the next iteration of particle accelerators," Schwartz says.

"This process is of particular interest given the White House's Materials Genome Initiative," Schwartz says. "The focus of that initiative is to expedite the process that translates new discoveries in materials science into commercial products -- and I think our process is an important step in that direction."

The paper was co-authored by Dr. Wan Kan Chan, a research associate at NC State. The paper is available online from IEEE Transactions on Applied Superconductivity. The research was funded by the Air Force Research Laboratory.


Story Source:

The above story is reprinted from materials provided by North Carolina State University.


Journal Reference:

Wan Kan Chan, Justin Schwartz. Three-Dimensional Micrometer-Scale Modeling of Quenching in High-Aspect-Ratio YBa2Cu3O7-d Coated Conductor Tapes -- Part II: Influence of Geometric and Material Properties and Implications for Conductor Engineering and Magnet Design. IEEE Transactions on Applied Superconductivity, 2011; DOI: 10.1109/TASC.2011.2169670




ScienceDaily (Oct. 27, 2011) — Identification of three fatty acids involved in the extreme growth of Burmese pythons' hearts following large meals could prove beneficial in treating diseased human hearts, according to research co-authored by a University of Alabama scientist and published in the Oct. 28 issue of Science.

Growth of the human heart can be beneficial when resulting from exercise -- a type of growth known as physiological cardiac hypertrophy -- but damaging when triggered by disease -- growth known as pathological hypertrophy. The new research shows a potential avenue by which to make the unhealthy heart growth more like the healthy version.

"We may later be able to turn the tables, in a sense, in the processes involved in pathological hypertrophy by administering a combination of fatty acids that occur in very high concentrations in the blood of digesting pythons," said Dr. Stephen Secor, associate professor of biological sciences at UA and one of the paper's co-authors. "This could trigger, perhaps, something more akin to the physiological form of hypertrophy."

The research, conducted in collaboration with multiple researchers at the University of Colorado working in the lab of Dr. Leslie Leinwand, identified three fatty acids (myristic acid, palmitic acid and palmitoleic acid) responsible for the snakes' healthy heart growth following a meal.

Researchers took these fatty acids from feasting pythons and infused them into fasting pythons. Afterward, those fasting pythons underwent heart growth similar to that of the feasting pythons. In a similar fashion, the researchers were able to induce comparable heart growth in rats, indicating that the fatty acids have a similar effect on the mammalian heart.

The paper, whose lead author was Dr. Cecilia Riquelme of the University of Colorado, also showed that the pythons' heart growth was a result of the individual heart cells growing in size, rather than multiplying in number.

By studying gene expression in the python hearts -- which genes are turned on following feasting -- the research, Secor said, shows that the changes the pythons' hearts undergo are more like the positive changes seen in a marathon runner than the types of changes seen in a diseased, or genetically altered, heart.

"Cyclists, marathon runners, rowers, swimmers, they tend to have larger hearts," Secor said. "It's the heart working harder to move blood through it. The term is 'volume overload,' in reference to more blood being pumped to tissues. In response, the heart's chambers get larger, and more blood is pushed out with every contraction, resulting in increased cardiac performance."

However, the time frame of this increased heart performance in a python blows away even the most physically fit distance runner, Secor said.

"Instead of experiencing elevated cardiac performance for several hours with running, the Burmese python maintains heightened cardiac output for five to six days, non-stop, while digesting its large meal."

Another interesting finding of the research, Secor said, is even with the increased volume of triglycerides circulating in the snakes after feeding, those lipids are not remaining within the snakes' hearts or vascular systems after the completion of digestion.

"The python hearts are using the circulating lipids to fuel the increase in performance."

Traditionally, mice have been the preferred animal model used to study the genetic heart disease known as hypertrophic cardiomyopathy, characterized by heart growth and contractile dysfunction. However, the snakes' unusual physiological responses render them more insightful models, in some cases, Secor said.

Pythons are infrequent feeders, sometimes eating only once or twice a year in the wild. When they do eat, they undergo extreme physiologic and metabolic changes that include increases in the size of the heart, along with the liver, pancreas, small intestine and kidney. Three days after a feeding, a python's heart mass can increase as much as 40 percent, before reverting to its pre-meal size once digestion is completed, Secor said.


Story Source:

The above story is reprinted from materials provided by University of Alabama in Tuscaloosa.


Journal Reference:

Cecilia A. Riquelme, Jason A. Magida, Brooke C. Harrison, Christopher E. Wall, Thomas G. Marr, Stephen M. Secor, Leslie A. Leinwand. Fatty Acids Identified in the Burmese Python Promote Beneficial Cardiac Growth. Science, 2011; 334 (6055): 528-531 DOI: 10.1126/science.1210558




ScienceDaily (Oct. 27, 2011) — Although corticosteroid injections are one of the most common treatments for shoulder pain, there have been relatively few high-quality investigations of their efficacy and duration of action. In a study scheduled for publication in the December issue of the Archives of Physical Medicine and Rehabilitation, researchers report on the first comparative study of the two most commonly administered corticosteroid doses for shoulder pain. They found that lower doses were just as effective as higher doses in terms of reduction of pain, improved range of motion and duration of efficacy.

"There has been no guidance for adequate corticosteroid doses during subacromial injection. Physicians have depended mainly on their experience for the selection of dose," commented lead investigator Seung-Hyun Yoon, MD, PhD, Assistant Professor, Department of Physical Medicine and Rehabilitation, Ajou University School of Medicine, Suwon, Republic of Korea. "This is the first study to assess the efficacy of corticosteroid according to two different doses, which are the most widely used in subacromial injection for participants with periarticular shoulder disorders. Initial use of a low dose is encouraged because there was no difference in efficacy according to dose, and the effect of corticosteroid lasted up to 8 weeks."

Investigators conducted a randomized, triple-blind, placebo-controlled clinical trial in which 79 patients with at least one month's duration of pain were enrolled. Subjects were randomly assigned to three groups, with 27 participants receiving a 40 mg dose of triamcinolone acetonide, 25 a 20 mg dose, and 27 a placebo injection. All were followed up at 2, 4, and 8 weeks after treatment. All injections were performed using ultrasound guidance to ensure proper placement of the therapeutic agent in the bursa.

Participants were asked to rate their degree of shoulder pain on a 0 to 10 scale and to answer a Shoulder Disability Questionnaire. They also were asked to move their shoulders slowly until they experienced pain, and evaluators measured the Active Range of Motion (AROM) in 4 different directions (forward flexion, abduction, internal rotation, and external rotation of the shoulder in a standing position).

Compared with pretreatment (within-group comparisons), the high- (40 mg) and low-dose corticosteroid (20 mg) groups both showed improvement in pain, disability, and AROM, while the placebo group showed no difference. Importantly, this study showed no significant inter-group differences between the high- and low-dose corticosteroid groups. Because a higher dose may increase the incidence of local and general complications, a lower dose is indicated at the initial treatment stage.


Story Source:

The above story is reprinted from materials provided by Elsevier Health Sciences.


Journal Reference:

Ji Yeon Hong, Seung-Hyun Yoon, Do Jun Moon, Kyu-Sung Kwack, Bohyun Joen, Hyun Young Lee. Comparison of High- and Low-Dose Corticosteroid in Subacromial Injection for Periarticular Shoulder Disorder: A Randomized, Triple-Blind, Placebo-Controlled Trial. Archives of Physical Medicine and Rehabilitation, December 2011; DOI: 10.1016/j.apmr.2011.06.033




ScienceDaily (Oct. 27, 2011) — A new analysis of images from the Hubble Space Telescope combined with supercomputer simulations of galaxy collisions has cleared up years of confusion about the rate at which smaller galaxies merge to form bigger ones. This paper, led by Jennifer Lotz of Space Telescope Science Institute, is about to be published in The Astrophysical Journal.

Galaxies grow mostly by acquiring small amounts of matter from their surroundings. But occasionally galaxies merge with other galaxies large or small. Collisions between big galaxies can change rotating disk galaxies like the Milky Way into featureless elliptical galaxies, in which the stars are moving every which way.

In order to understand how galaxies have grown, it is essential to measure the rate at which galaxies merge. In the past, astronomers have used two principal techniques: counting the number of close pairs of galaxies about to collide, and counting the number of galaxies that appear to be disturbed in various ways. The two techniques are analogous to trying to estimate the number of automobile accidents by counting the number of cars on a collision course versus counting the number of wrecked cars seen by the side of the road.

However, these studies have often led to discrepant results. "These different techniques probe mergers at different 'snapshots' in time along the merger process," Lotz says. "Studies that looked for close pairs of galaxies that appeared ready to collide gave much lower numbers of mergers (5%) than those that searched for galaxies with disturbed shapes, evidence that they're in smashups (25%)."

In the new work, all the previous observations were reanalyzed using a key new ingredient: highly accurate computer simulations of galaxy collisions. These simulations, which include the effects of stellar evolution and dust, show the lengths of time over which close galaxy pairs and various types of galaxy disturbances are likely to be visible. Lotz's team accounted for a broad range of merger possibilities, from a pair of galaxies with equal masses joining together to an interaction between a giant galaxy and a puny one. The team also analyzed the effects of different orbits for the galaxies, possible collision impacts, and how the galaxies were oriented to each other.
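The logic behind the reconciliation can be stated simply: the fraction of galaxies caught in a given merger stage is roughly the underlying merger rate multiplied by how long that stage remains visible, which is the quantity the simulations supply. The sketch below illustrates this relation with made-up visibility windows chosen to match the 5% and 25% figures quoted above; the actual timescales come from the simulations, not from these numbers.

```python
# Illustrative sketch (visibility windows are hypothetical, not from the paper):
# the same underlying merger rate yields very different observed fractions,
# because each technique "sees" a merger for a different length of time.
merger_rate = 0.1    # mergers per galaxy per Gyr (hypothetical)
t_close_pair = 0.5   # Gyr a merger is visible as a close pair (hypothetical)
t_disturbed = 2.5    # Gyr a merger shows a disturbed shape (hypothetical)

frac_pairs = merger_rate * t_close_pair      # ~0.05, the "5%" pair counts
frac_disturbed = merger_rate * t_disturbed   # ~0.25, the "25%" disturbed counts

# Dividing each observed fraction by its visibility window recovers the
# same underlying rate, which is how the timescales from the simulations
# reconcile the seemingly discrepant observations.
rate_from_pairs = frac_pairs / t_close_pair
rate_from_disturbed = frac_disturbed / t_disturbed
print(rate_from_pairs, rate_from_disturbed)
```

With consistent timescales in hand, both counting methods become estimates of one and the same merger rate rather than contradictory measurements.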

The simulations were done by T. J. Cox (now at Carnegie Observatories in Pasadena), Patrik Jonsson (now at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts), and Joel Primack (at the University of California, Santa Cruz -- UCSC), using small supercomputers at UCSC and the large Columbia supercomputer at NASA Ames Research Center. These simulations were "observed" as if through Hubble Space Telescope by Jennifer Lotz in a series of papers with Cox, Jonsson, and Primack that were published over the past three years. A key part of the analysis was a new way of measuring galaxy disturbances that was developed by Lotz, Primack, and Piero Madau in 2004. All this work was begun when Lotz was a postdoc with Primack, and Cox and Jonsson were his graduate students.

"Viewing the simulations was akin to watching a slow-motion car crash," Lotz says. "Having an accurate value for the merger rate is critical because galactic collisions may be a key process that drives galaxy assembly, rapid star formation at early times, and the accretion of gas onto central supermassive black holes at the centers of galaxies."

"The new paper led by Jennifer Lotz for the first time makes sense of all the previous observations, and shows that they are consistent with theoretical expectations," says Primack. "This is a great example of how new astronomical knowledge is now emerging from a combination of observations, theory, and supercomputer simulations." Primack now heads the University of California High-Performance AstroComputing Center (UC-HiPACC), headquartered at the University of California, Santa Cruz.

This research was funded by grants from NASA and NSF, and by Hubble Space Telescope and Spitzer Space Telescope theory grants.


Story Source:

The above story is reprinted from materials provided by University of California - Santa Cruz.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.

Journal Reference:

Jennifer M. Lotz, Patrik Jonsson, T. J. Cox, Darren Croton, Joel R. Primack, Rachel S. Somerville, Kyle Stewart. The Major and Minor Galaxy Merger Rates at z &lt; 1.5. The Astrophysical Journal, 2011.

Note: If no author is given, the source is cited instead.

Disclaimer: Views expressed in this article do not necessarily reflect those of ScienceDaily or its staff.



ScienceDaily (Oct. 27, 2011) — Are children suffering needlessly after surgery? UC Irvine anesthesiologists who specialize in pediatric care believe so.

An operation can be one of the most traumatic events children face, and according to a UCI study, many of them experience unnecessary postsurgical pain lasting weeks or months.

Such chronic pain is well understood and treated in adults but has been generally overlooked in pediatric patients, said Dr. Zeev Kain, professor and chair of anesthesiology & perioperative care.

This month, he and his UCI colleagues published in the Journal of Pediatric Surgery the first-ever study of chronic postoperative pain in children. Out of 113 youngsters who had procedures ranging from appendectomies to orthopedic surgery, 13 percent reported pain that lingered for months.

While the sample group was small, Kain said, the study's implications are profound. Four million children undergo surgical procedures in the U.S. each year, suggesting that more than half a million of them suffer well after leaving the hospital. This results in more school absences and visits to the doctor and, for parents, days off work.
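The half-million figure follows directly from the two numbers quoted above; as a quick illustrative check (in Python, using the article's figures):

```python
# Rough estimate implied by the article's numbers (illustrative only).
children_per_year = 4_000_000   # U.S. pediatric surgical procedures per year
chronic_pain_rate = 0.13        # fraction reporting lingering pain in the study

affected = children_per_year * chronic_pain_rate
print(f"{affected:,.0f} children per year")  # 520,000 children per year
```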

Kain said the research indicates that physicians need to more effectively manage pain within 48 hours of surgery -- which, in adults, has been shown to minimize the potential for chronic pain -- and that parents should be properly prepared to alleviate their child's pain at home.

"Medical professionals must understand this issue better and learn how to work with parents to care for chronic pain," he said. "We hope this study marks a first step toward long-term, definitive solutions."

UCI pediatric pain psychologist Michelle Fortier led the study -- which involved patients from CHOC Children's Hospital in Orange, Calif. -- and Drs. Jody Chou and Eva Mauer also participated.


Story Source:

The above story is reprinted from materials provided by University of California - Irvine.


Journal Reference:

Michelle A. Fortier, Jody Chou, Eva L. Maurer, Zeev N. Kain. Acute to chronic postoperative pain in children: preliminary findings. Journal of Pediatric Surgery, 2011; 46 (9): 1700 DOI: 10.1016/j.jpedsurg.2011.03.074


Disclaimer: This article is not intended to provide medical advice, diagnosis or treatment. Views expressed here do not necessarily reflect those of ScienceDaily or its staff.



ScienceDaily (Oct. 27, 2011) — A new analysis of Hubble surveys, combined with simulations of galaxy interactions, reveals that the merger rate of galaxies over the last 8 billion to 9 billion years falls between the previous estimates.

The galaxy merger rate is one of the fundamental measures of galaxy evolution, yielding clues to how galaxies bulked up over time through encounters with other galaxies. And yet, a huge discrepancy exists over how often galaxies coalesced in the past. Measurements of galaxies in deep-field surveys made by NASA's Hubble Space Telescope generated a broad range of results: anywhere from 5 percent to 25 percent of the galaxies were merging.

The study, led by Jennifer Lotz of the Space Telescope Science Institute in Baltimore, Md., analyzed galaxy interactions at different distances, allowing the astronomers to compare mergers over time. Lotz's team found that galaxies gained quite a bit of mass through collisions with other galaxies. Large galaxies merged with each other on average once over the past 9 billion years. Small galaxies were coalescing with large galaxies more frequently. In one of the first measurements of smashups between dwarf and massive galaxies in the distant universe, Lotz's team found these mergers happened three times more often than encounters between two hefty galaxies.

"Having an accurate value for the merger rate is critical because galactic collisions may be a key process that drives galaxy assembly, rapid star formation at early times, and the accretion of gas onto central supermassive black holes at the centers of galaxies," Lotz explains.

The team's results have been accepted for publication in The Astrophysical Journal.

The problem with previous Hubble estimates is that astronomers used different methods to count the mergers.

"These different techniques probe mergers at different 'snapshots' in time along the merger process," Lotz says. "It is a little bit like trying to count car crashes by taking snapshots. If you look for cars on a collision course, you will only see a few of them. If you count up the number of wrecked cars you see afterwards, you will see many more. Studies that looked for close pairs of galaxies that appeared ready to collide gave much lower numbers of mergers than those that searched for galaxies with disturbed shapes, evidence that they're in smashups."

To figure out how many encounters happen over time, Lotz needed to understand how long merging galaxies would look like "wrecks" before they settle down and begin to look like normal galaxies again.
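The correction can be summarized in one line: a merger rate is an observed disturbed fraction divided by the time a merger stays visible, so two methods with different visibility windows can report very different fractions while implying the same underlying rate. A schematic sketch in Python (the fractions and timescales below are hypothetical illustrations, not the paper's values):

```python
def merger_rate(observed_fraction, visibility_gyr):
    """Mergers per galaxy per Gyr: the observed disturbed fraction
    divided by the time window over which a merger remains detectable."""
    return observed_fraction / visibility_gyr

# Two methods see different snapshots of the same underlying rate:
# close pairs are visible only briefly; morphological "wrecks" linger longer.
rate_pairs  = merger_rate(0.05, 0.5)   # 5% close pairs, visible ~0.5 Gyr
rate_wrecks = merger_rate(0.20, 2.0)   # 20% disturbed shapes, visible ~2 Gyr
print(rate_pairs, rate_wrecks)         # both imply 0.1 mergers/galaxy/Gyr
```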

That's why Lotz and her team turned to highly detailed computer simulations to help make sense of the Hubble photographs. The team made simulations of the many possible galaxy collision scenarios and then mapped them to Hubble images of galaxy interactions.

Creating the computer models was a time-consuming process. Lotz's team tried to account for a broad range of merger possibilities, from a pair of galaxies with equal masses joining together to an interaction between a giant galaxy and a puny one. The team also analyzed different orbits for the galaxies, possible collision impacts, and how galaxies were oriented to each other. In all, the group came up with 57 different merger scenarios and studied the mergers from 10 different viewing angles. "Viewing the simulations was akin to watching a slow-motion car crash," Lotz says.

The simulations followed the galaxies for 2 billion to 3 billion years, beginning at the first encounter and continuing until the union was completed, about a billion years later.

"Our simulations offer a realistic picture of mergers between galaxies," Lotz says.

In addition to studying the smashups between giant galaxies, the team also analyzed encounters among puny galaxies. Spotting collisions with small galaxies is difficult because the objects are so dim relative to their larger companions.

"Dwarf galaxies are the most common galaxy in the universe," Lotz says. "They may have contributed to the buildup of large galaxies. In fact, our own Milky Way galaxy had several such mergers with small galaxies in its recent past, which helped to build up the outer regions of its halo. This study provides the first quantitative understanding of how the number of galaxies disturbed by these minor mergers changed with time."

Lotz compared her simulation images with pictures of thousands of galaxies taken from some of Hubble's largest surveys, including the All-Wavelength Extended Groth Strip International Survey (AEGIS), the Cosmological Evolution Survey (COSMOS), and the Great Observatories Origins Deep Survey (GOODS), as well as mergers identified by the DEEP2 survey with the W.M. Keck Observatory in Hawaii. She and other groups had identified about a thousand merger candidates from these surveys but initially found very different merger rates.

"When we applied what we learned from the simulations to the Hubble surveys in our study, we derived much more consistent results," Lotz says.

Her next goal is to analyze galaxies that were interacting around 11 billion years ago, when star formation across the universe peaked, to see if the merger rate rises along with the star formation rate. A link between the two would mean galaxy encounters incite rapid star birth.

In addition to Lotz, the coauthors of the paper include Patrik Jonsson of Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass; T. J. Cox of Carnegie Observatories in Pasadena, Calif.; Darren Croton of the Centre for Astrophysics and Supercomputing at Swinburne University of Technology in Hawthorn, Australia; Joel R. Primack of the University of California, Santa Cruz; Rachel S. Somerville of the Space Telescope Science Institute and The Johns Hopkins University in Baltimore, Md.; and Kyle Stewart of NASA's Jet Propulsion Laboratory in Pasadena, Calif.

The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C.


Story Source:

The above story is reprinted from materials provided by NASA/Goddard Space Flight Center.






ScienceDaily (Oct. 27, 2011) — Stir this clear liquid in a glass vial and nothing happens. Shake this liquid, and free-floating sheets of protein-like structures emerge, ready to detect molecules or catalyze a reaction. This isn't the latest gadget from James Bond's arsenal -- rather, it is the latest research from scientists at the U.S. Department of Energy's (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab), unveiling how slim sheets of protein-like structures self-assemble. This "shaken, not stirred" mechanism provides a way to scale up production of these two-dimensional nanosheets for a wide range of applications, such as platforms for sensing, filtration and templating growth of other nanostructures.

"Our findings tell us how to engineer two-dimensional, biomimetic materials with atomic precision in water," said Ron Zuckermann, Director of the Biological Nanostructures Facility at the Molecular Foundry, a DOE nanoscience user facility at Berkeley Lab. "What's more, we can produce these materials for specific applications, such as a platform for sensing molecules or a membrane for filtration."

Zuckermann, who is also a senior scientist at Berkeley Lab, is a pioneer in the development of peptoids, synthetic polymers that behave like naturally occurring proteins without degrading. His group previously discovered peptoids capable of self-assembling into nanoscale ropes, sheets and jaws, accelerating mineral growth and serving as a platform for detecting misfolded proteins.

In this latest study, the team employed a Langmuir-Blodgett trough -- a bath of water with Teflon-coated paddles at either end -- to study how peptoid nanosheets assemble at the surface of the bath, called the air-water interface. By compressing a single layer of peptoid molecules on the surface of water with these paddles, said Babak Sanii, a post-doctoral researcher working with Zuckermann, "we can squeeze this layer to a critical pressure and watch it collapse into a sheet."

"Knowing the mechanism of sheet formation gives us a set of design rules for making these nanomaterials on a much larger scale," added Sanii.

To study how shaking affected sheet formation, the team developed a new device called the SheetRocker to gently rock a vial of peptoids from upright to horizontal and back again. This carefully controlled motion allowed the team to precisely control the process of compression on the air-water interface.

"During shaking, the monolayer of peptoids essentially compresses, pushing chains of peptoids together and squeezing them out into a nanosheet. The air-water interface essentially acts as a catalyst for producing nanosheets in 95% yield," added Zuckermann. "What's more, this process may be general for a wide variety of two-dimensional nanomaterials."

This research is reported in a paper titled, "Shaken, not stirred: Collapsing a peptoid monolayer to produce free-floating, stable nanosheets," appearing in the Journal of the American Chemical Society (JACS) and available in JACS online. Co-authoring the paper with Zuckermann and Sanii were Romas Kudirka, Andrew Cho, Neeraja Venkateswaran, Gloria Olivier, Alexander Olson, Helen Tran, Marika Harada and Li Tan.

This work at the Molecular Foundry was supported by DOE's Office of Science and the Defense Threat Reduction Agency.


Story Source:

The above story is reprinted from materials provided by DOE/Lawrence Berkeley National Laboratory.


Journal Reference:

Babak Sanii, Romas Kudirka, Andrew Cho, Neeraja Venkateswaran, Gloria K. Olivier, Alexander M. Olson, Helen Tran, R. Marika Harada, Li Tan, Ronald N. Zuckermann. Shaken, Not Stirred: Collapsing a Peptoid Monolayer To Produce Free-Floating, Stable Nanosheets. Journal of the American Chemical Society, 2011; DOI: 10.1021/ja206199d





ScienceDaily (Oct. 27, 2011) — Researcher Thijs Meenink at Eindhoven University of Technology (TU/e) has developed a smart eye-surgery robot that allows eye surgeons to operate with increased ease and greater precision on the retina and the vitreous humor of the eye. The system also extends the effective period during which ophthalmologists can carry out these intricate procedures.

Meenink will defend his PhD thesis on Oct. 31 for his work on the robot, and intends later to commercialize his system.

Filters out tremors

Eye operations such as retina repairs or treating a detached retina demand high precision. In most cases surgeons can only carry out these operations for a limited part of their career. "When ophthalmologists start operating they are usually already at an advanced stage in their careers," says Thijs Meenink. "But at a later age it becomes increasingly difficult to perform these intricate procedures." The new system simply filters out hand tremors, which significantly increases the effective working period of the ophthalmologist.
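The article does not describe the robot's filter design, but tremor suppression of this kind is typically done by low-pass filtering the control signal, since hand tremor is much faster than intentional motion. A toy sketch using a simple moving average (purely illustrative):

```python
def smooth(positions, window=5):
    """Moving-average low-pass filter: suppresses fast jitter (tremor)
    while passing slow, intentional motion."""
    out = []
    for i in range(len(positions)):
        chunk = positions[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A slow drift with superimposed jitter: the filtered signal follows the drift.
noisy = [i + (0.2 if i % 2 else -0.2) for i in range(10)]
print(smooth(noisy))
```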

Same location every time

The robot consists of a 'master' and a 'slave'. The ophthalmologist remains fully in control, and operates from the master using two joysticks. This master was developed in an earlier PhD project at TU/e by dr.ir. Ron Hendrix. Two robot arms (the 'slave' developed by Meenink) copy the movements of the master and carry out the actual operation. The tiny needle-like instruments on the robot arms have a diameter of only 0.5 millimeter, and include forceps, surgical scissors and drains. The robot is designed such that the point at which the needle enters the eye is always at the same location, to prevent damage to the delicate eye structures.

Quick instrument change

Meenink has also designed a unique 'instrument changer' for the slave allowing the robot arms to change instruments, for example from forceps to scissors, within only a few seconds. This is an important factor in reducing the time taken by the procedure. Some eye operations can require as many as 40 instrument changes, which are normally a time consuming part of the overall procedure.

High precision movements

The surgeon's movements are scaled down so that, for example, each centimeter of motion on the joystick is translated into a movement of only one millimeter at the tip of the instrument. "This greatly increases the precision of the movements," says Meenink.
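The 10:1 motion scaling described above is simple to express; a minimal sketch (the scale factor comes from the example in the text, the function name is illustrative):

```python
SCALE = 0.1  # instrument tip moves 1/10 of the (unit-converted) hand motion

def to_instrument_mm(joystick_cm):
    """Map joystick displacement (cm) to instrument-tip displacement (mm)."""
    return joystick_cm * 10 * SCALE  # cm -> mm (x10), then scale down 10:1

print(to_instrument_mm(1.0))  # 1 cm of joystick motion -> 1.0 mm at the tip
```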

Haptic feedback

The master also provides haptic feedback. Ophthalmologists currently work entirely by sight -- the forces used in the operation are usually too small to be felt. However Meenink's robot can 'measure' these tiny forces, which are then amplified and transmitted to the joysticks. This allows surgeons to feel the effects of their actions, which also contributes to the precision of the procedure.

Comfort

The system developed by Meenink and Hendrix also offers ergonomic benefits. While surgeons currently are bent statically over the patient, they will soon be able to operate the robot from a comfortable seated position. In addition, the slave is so compact and lightweight that operating room staff can easily carry it and attach it to the operating table.

New procedures

Ophthalmologist prof.dr. Marc de Smet (AMC Amsterdam), one of Meenink's PhD supervisors, is enthusiastic about the system -- not only because of the time savings it offers, but also because in his view the limits of manual procedures have now been reached. "Robotic eye surgery is the next step in the evolution of microsurgery in ophthalmology, and will lead to the development of new and more precise procedures," de Smet explains.

Market opportunities

Both slave and master are ready for use, and Meenink intends to optimize them in the near future. The first surgery on humans is expected within five years. He also plans to investigate the market opportunities for the robot system. Robotic eye surgery is a new development; eye surgery robots are not yet available on the market.


Story Source:

The above story is reprinted from materials provided by Eindhoven University of Technology.






ScienceDaily (Oct. 27, 2011) — Scientists at UC Santa Barbara have discovered that patients with an inherited kidney disease may be helped by a drug that is currently available for other uses. The findings are published in this week's issue of the Proceedings of the National Academy of Sciences.

Over 600,000 people in the U.S., and 12 million worldwide, are affected by the inherited kidney disease known as autosomal-dominant polycystic kidney disease (ADPKD). The disease is characterized by the proliferation of thousands of cysts that eventually debilitate the kidneys, causing kidney failure in half of all patients by the time they reach age 50. ADPKD is one of the leading causes of renal failure in the U.S.

"Currently, no treatment exists to prevent or slow cyst formation, and most ADPKD patients require kidney transplants or lifelong dialysis for survival," said Thomas Weimbs, director of the laboratory at UCSB where the discovery was made. Weimbs is an associate professor in the Department of Molecular, Cellular and Developmental Biology, and in the Neuroscience Research Institute at UCSB.

Recent work in the Weimbs laboratory has revealed a key difference between kidney cysts and normal kidney tissue. They found that the STAT6 signaling pathway -- previously thought to be mainly important in immune cells -- is activated in kidney cysts, while it is dormant in normal kidneys. Cystic kidney cells are locked in a state of continuous activation of this pathway, which leads to the excessive proliferation and cyst growth in ADPKD.

The drug Leflunomide, which is clinically approved for use in rheumatoid arthritis, has previously been shown to inhibit the STAT6 pathway in cells. Weimbs and his team found that Leflunomide is also highly effective in reducing kidney cyst growth in a mouse model of ADPKD.

"These results suggest that the STAT6 pathway is a promising drug target for possible future therapy of ADPKD," said Weimbs. "This possibility is particularly exciting because drugs that inhibit the STAT6 pathway already exist, or are in active development."


Story Source:

The above story is reprinted from materials provided by University of California - Santa Barbara.


Journal Reference:

E. E. Olsan, S. Mukherjee, B. Wulkersdorfer, J. M. Shillingford, A. J. Giovannone, G. Todorov, X. Song, Y. Pei, T. Weimbs. Signal transducer and activator of transcription-6 (STAT6) inhibition suppresses renal cyst growth in polycystic kidney disease. Proceedings of the National Academy of Sciences, 2011; DOI: 10.1073/pnas.1111966108





ScienceDaily (Oct. 27, 2011) — Researchers have identified a safer, more cost effective way to provide anesthesia for patients undergoing endovascular repair of an abdominal aortic aneurysm -- a common, often asymptomatic condition that, if not found and treated, can be deadly.

A new study done by investigators at Wake Forest Baptist Medical Center found that using less invasive spinal, epidural and local/monitored anesthesia care (MAC) is better than general anesthesia for elective endovascular repair of infrarenal abdominal aortic aneurysms (EVAR).

Details of the research appear in the November issue of the Journal of Vascular Surgery, the official publication of the Society for Vascular Surgery.

Aortic aneurysms are abnormal bulges, or "ballooning" in the walls of the aorta, the body's largest artery. Roughly the diameter of a garden hose, this artery brings oxygen-rich blood from the heart to the rest of the body. It extends from the heart down through the chest and abdominal region, where it divides into a blood vessel that supplies each leg. Although an aneurysm can develop anywhere along the aorta, most occur in the section running through the abdomen (abdominal aneurysms). An infrarenal abdominal aortic aneurysm is one that occurs in the belly, below the kidney arteries.

Occasionally an aneurysm may occur because of an area of weakness within the artery wall. An aortic aneurysm is serious because it may rupture, causing life-threatening internal bleeding. The risk of an aneurysm rupturing increases as the aneurysm gets larger. Each year, approximately 15,000 Americans die of a ruptured aortic aneurysm; however, the condition is usually asymptomatic until the point of rupture. As such, most aortic aneurysms are discovered unexpectedly while a patient is having a computed tomography (CT) scan or ultrasound done for another condition. Men over the age of 65 who have ever smoked can have an ultrasound done specifically to screen for aneurysms as part of a "Welcome-to-Medicare" visit with their physician. When detected in time, an aortic aneurysm can usually be repaired with surgery.

Infrarenal abdominal aortic aneurysms make up about 95 percent or more of abdominal aortic aneurysms and, while they occur in both sexes, they are most prevalent in men older than 60, affecting about 3 percent of this population, explained study co-author Matthew S. Edwards, B.A., M.S., M.D., a professor of vascular and endovascular surgery and public health sciences at Wake Forest Baptist.

"That's a lot of people," Edwards said. "If aortic aneurysms aren't repaired, they can burst and 80 to 90 percent of people who have a ruptured aortic aneurysm die. It's necessary for those who are suitable candidates for surgery to have their aneurysms repaired."

EVAR has completely revolutionized the care of aneurysms, allowing doctors to do repairs through two small incisions in the groin, Edwards said. It is currently the most common procedure for repairing aortic aneurysms in the United States. Historic trends have led to general anesthesia being the most common mode of anesthesia used for this procedure, but it is sometimes associated with the development of pneumonia, the need for a breathing tube and other pulmonary complications, he explained.

Other anesthetic techniques can also be used, such as local anesthesia, local anesthesia plus sedation (called "monitored" or "MAC"), spinal anesthesia and epidural anesthesia. According to this study, these other methods result in a shortened hospital stay and fewer pulmonary complications.

"In our study, general anesthesia was associated with increased postoperative length of stay (LOS) and increased complications involving the lungs when compared to the other anesthetic methods," Edwards said.

The researchers collected data on 6,009 patients who had elective EVAR performed between 2005 and 2008 at one of 221 North American hospitals. General anesthesia was used in 4,868 of the cases, while 419 patients had spinal anesthesia during their procedure; 331 had epidural anesthesia; and 391 had local/MAC. Emergency cases and patients who had other procedures being done at the same time that required general anesthesia were excluded from the study.

The team then reviewed the data to evaluate rates of mortality, morbidity and length of stay (LOS), or how long the patient remained in the hospital after the procedure.

The researchers found that general anesthesia was associated with an increase in pulmonary complications when compared with spinal and local/MAC anesthesia. Use of general anesthesia was also associated with a 10 percent increase in LOS compared with spinal anesthesia, and a 20 percent increase compared with local/MAC anesthesia. Trends toward increased pulmonary complications and LOS were not observed for general versus epidural anesthesia. No significant association between anesthesia type and mortality was observed.

"Our study data suggest that increasing the use of less invasive anesthetic techniques, when appropriate, may limit postoperative complications in EVAR patients," Edwards said.


Story Source:

The above story is reprinted from materials provided by Wake Forest Baptist Medical Center.


Journal Reference:

Matthew S. Edwards, Jeanette S. Andrews, Angela F. Edwards, Racheed J. Ghanami, Matthew A. Corriere, Philip P. Goodney, Christopher J. Godshall, Kimberley J. Hansen. Results of endovascular aortic aneurysm repair with general, regional, and local/monitored anesthesia care in the American College of Surgeons National Surgical Quality Improvement Program database. Journal of Vascular Surgery, 2011; 54 (5): 1273 DOI: 10.1016/j.jvs.2011.04.054



