ScienceDaily (Nov. 30, 2011) — The effectiveness of using specific fungi as mycoherbicides to combat illicit drug crops remains questionable due to the lack of quality, in-depth research, says a new report from the National Research Council. Questions about the degree of control that could be achieved with such mycoherbicides, as well as uncertainties about their potential effects on nontarget plants, microorganisms, animals, humans, and the environment, must be addressed before deployment can be considered. The report states that additional research is needed to assess the safety and effectiveness of proposed mycoherbicide strains.

Mycoherbicides, created from plant pathogenic fungi, have been proposed as one tool to eradicate illicit drug crops. Congress requested an independent examination of the scientific issues associated with the feasibility of developing and implementing naturally occurring strains of these fungi to control the illicit cultivation of cannabis, coca, and opium poppy crops.

As an initial step, the report recommends research to study several candidate strains of each fungus in order to identify the most efficacious under a broad array of environmental conditions. The resulting information would guide decisions regarding product formulation, the appropriate delivery method, and the scale required to generate enough mycoherbicide product to achieve significant control. However, conducting the research does not guarantee that a feasible mycoherbicide product will result. Furthermore, countermeasures can be developed against mycoherbicides, and there are unavoidable risks from releasing substantial numbers of living organisms into an ecosystem.

Multiple regulatory requirements would also have to be met before a mycoherbicide could be deployed. Additional regulations and agreements might also be needed before these tools could be used internationally, as approval to conduct tests in countries where mycoherbicides might be used has been difficult or impossible to obtain in the past.

The study was sponsored by the Office of National Drug Control Policy. The National Academy of Sciences, National Academy of Engineering, Institute of Medicine, and National Research Council make up the National Academies. They are independent, nonprofit institutions that provide science, technology, and health policy advice under an 1863 congressional charter. Panel members, who serve pro bono, are chosen by the Academies for each study on the basis of their expertise and experience and must satisfy the Academies' conflict-of-interest standards. The resulting consensus reports undergo external peer review before completion.

Story Source:

The above story is reprinted from materials provided by National Academy of Sciences.


ScienceDaily (Nov. 30, 2011) — Research conducted by a pair of physicians at Boston University School of Medicine (BUSM) and Boston Medical Center (BMC) has led to the development of a test that can help diagnose membranous nephropathy in its early stages. The test, which is currently only offered in the research setting and is awaiting commercial development, could have significant implications in the diagnosis and treatment of the disease. Currently, the only way to diagnose the disease is through a biopsy.

The pioneering work is being led by Laurence Beck, MD, PhD, assistant professor of medicine at BUSM and a nephrologist at BMC, and David Salant, MD, professor of medicine at BUSM and chief of the renal section at BMC.

Over the past four years, the Halpin Foundation has contributed more than $350,000 to Beck to investigate the genetics and molecular mechanisms behind membranous nephropathy. Most recently, Beck was awarded a $50,000 grant from the Foundation to further his efforts.

Membranous nephropathy is an autoimmune disease in which the immune system attacks the kidneys, causing thickening and dysfunction of the kidneys' filters, called glomeruli. When antibodies attack the glomeruli, large amounts of protein are released into the urine. In 2009, Beck and Salant identified that the antibodies were binding to a protein in the glomeruli. They determined that the target was a protein called PLA2R, or phospholipase A2 receptor, and published these findings in the New England Journal of Medicine.

"For the first time, a specific biomarker has been identified for this relatively common kidney disease," said Beck, who is part of an international collaboration that has demonstrated that these antibodies are present in patients from many different ethnicities.

With the antigen protein identified, Beck and Salant have developed a blood test to detect and measure the amount of the specific antibodies in a sample.

Approximately one third of patients with membranous nephropathy eventually develop kidney failure, requiring dialysis or a kidney transplant. According to the University of North Carolina's Kidney Center, the disease mostly affects people over the age of 40, is rare in children and affects more men than women. The disease is treated with powerful chemotherapy-type immunosuppressive drugs; if treatment is successful, the antibodies disappear.

"Being able to detect the presence of these antibodies using a blood test has tremendous implications about who is treated, and for how long, with the often toxic immunosuppressive drugs," said Beck.

Beck's research now focuses on treating the disease by targeting the antibodies and preventing them from attacking the glomeruli.

Story Source:

The above story is reprinted from materials provided by Boston University Medical Center.


ScienceDaily (Nov. 30, 2011) — A comparison of 1970s home-birth trends with current trends finds many similarities -- and some differences.

For instance, in the 1970s -- as now -- women opting for home births tended to have higher levels of education. That's according to a 1978 survey by Home Oriented Maternity Experience (HOME) that was recently found by University of Cincinnati historian Wendy Kline in the archives of the American Congress of Obstetricians and Gynecologists (ACOG).

That survey showed that in the late 1970s, one third of the group's members participating in home births had a bachelor's, master's or doctoral degree. Fewer than one percent did not have a high school education.

Also, according to the 2,000 respondents to HOME's 1978 survey, 36 percent of women having home births at the time were attended by a physician. That is a much higher percentage than is the case for mothers having home births today. (Research by Eugene Declerq of the Boston University School of Public Health and Mairi Breen Rothman of Metro Area Midwives and Allied Services found that about five percent of home births were attended by a physician in 2008.)

These comparisons are possible because of historical information found by UC's Kline, including "A Survey of Current Trends in Home Birth," written by the founders of HOME and published in 1979.

Kline is also conducting interviews with, and has obtained historical documents from, the founders of HOME and the midwives first associated with it. HOME is a grassroots organization founded in 1974 to provide information and education related to home births.

Kline will present this research and related historical information as one of only nine international presenters invited to the "Communicating Reproduction" conference at Cambridge University Dec. 6-7.

The debate surrounding the health and safety of home births rose to national prominence as recently as October 2011 with the Home Birth Consensus Summit in Virginia, held because of increasing interest in home births as an option for expectant mothers.

Overall, Kline's research on HOME and ACOG counters the stereotypical view of the 1970s home-birth movement as countercultural and peopled by "hippies." In fact, the founders of HOME deliberately reached out to a broad cross section of women across the political and religious spectrum, from religious conservatives to those on the political left.

Said Kline, "In looking through the historical record, we find that many women involved in home births in the 1970s signed their names 'Mrs. Robert Smith' or 'Mrs. William Hoffman.' The movement included professionals, business people, farmers, laborers and artists. It defies simplistic categorization."

Story Source:

The above story is reprinted from materials provided by University of Cincinnati.


ScienceDaily (Nov. 30, 2011) — In 2008, according to the National Highway Traffic Safety Administration, 2.3 million automobile crashes occurred at intersections across the United States, resulting in some 7,000 deaths. More than 700 of those fatalities were due to drivers running red lights. But, according to the Insurance Institute for Highway Safety, half of the people killed in such accidents are not the drivers who ran the light, but other drivers, passengers and pedestrians.

In order to reduce the number of accidents at intersections, researchers at MIT have devised an algorithm that predicts when an oncoming car is likely to run a red light. Based on parameters such as the vehicle's deceleration and its distance from a light, the group was able to determine which cars were potential "violators" -- those likely to cross into an intersection after a light has turned red -- and which were "compliant."

The researchers tested the algorithm on data collected from an intersection in Virginia, finding that it accurately identified potential violators within a couple of seconds of reaching a red light -- enough time, according to the researchers, for other drivers at an intersection to be able to react to the threat if alerted. Compared to other efforts to model driving behavior, the MIT algorithm generated fewer false alarms, an important advantage for systems providing guidance to human drivers. The researchers report their findings in a paper that will appear in the journal IEEE Transactions on Intelligent Transportation Systems.
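The release describes the approach only at a high level. As a rough illustration of the kind of kinematic check involved -- classifying an approaching vehicle as a likely "violator" or "compliant" from its speed, deceleration and distance to the stop line -- here is a minimal, hypothetical sketch. The constant-deceleration stopping model, the thresholds and the variable names are illustrative assumptions, not the MIT team's algorithm.

```python
# Hypothetical sketch: flag a vehicle approaching a red light as a likely
# "violator" if, given its current speed and braking, it cannot stop before
# the stop line. The simple stopping-distance model and the comfortable-braking
# bound are illustrative assumptions, not the published algorithm.

from dataclasses import dataclass

@dataclass
class VehicleState:
    speed: float             # m/s, current speed
    decel: float             # m/s^2, current deceleration (0 means not braking)
    distance_to_line: float  # m, distance to the stop line

COMFORTABLE_DECEL = 3.0      # m/s^2, assumed maximum comfortable braking

def predict_violation(v: VehicleState) -> bool:
    """Return True if the vehicle looks unlikely to stop before the stop line."""
    # Stopping distance under constant deceleration: d = v^2 / (2 * a).
    observed_stop = v.speed ** 2 / (2.0 * max(v.decel, 1e-3))
    comfortable_stop = v.speed ** 2 / (2.0 * COMFORTABLE_DECEL)
    # Flag only if the current braking is insufficient AND even comfortable
    # braking could no longer stop the car in time.
    return observed_stop > v.distance_to_line and comfortable_stop > v.distance_to_line

# Example: ~34 mph, barely braking, 20 m from the stop line -> flagged.
print(predict_violation(VehicleState(speed=15.0, decel=0.5, distance_to_line=20.0)))
```

In a real system such a check would be evaluated continuously from broadcast speed and position data; the MIT work uses a classifier drawn from artificial intelligence rather than fixed thresholds like these.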

Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics at MIT, says "smart" cars of the future may use such algorithms to help drivers anticipate and avoid potential accidents.

Video: See the team's algorithm in action as robots are able to negotiate a busy intersection and avoid potential accidents.

"If you had some type of heads-up display for the driver, it might be something where the algorithms are analyzing and saying, 'We're concerned,'" says How, who is one of the paper's authors. "Even though your light might be green, it may recommend you not go, because there are people behaving badly that you may not be aware of."

How says that in order to implement such warning systems, vehicles would need to be able to "talk" with each other, wirelessly sending and receiving information such as a car's speed and position data. Such vehicle-to-vehicle (V2V) communication, he says, can potentially improve safety and avoid traffic congestion. Today, the U.S. Department of Transportation (DOT) is exploring V2V technology, along with several major car manufacturers -- including Ford Motor Company, which this year has been road-testing prototypes with advanced Wi-Fi and collision-avoidance systems.

"You might have a situation where you get a snowball effect where, much more rapidly than people envisioned, this [V2V] technology may be accepted," How says.

In the meantime, researchers including How are developing algorithms to analyze vehicle data that would be broadcast via such V2V systems. Georges Aoude SM '07, PhD '11, a former student of How's, designed an algorithm based on a technique that has been successfully applied in many artificial intelligence domains, but is relatively new to the transportation field. This algorithm is able to capture a vehicle's motion in multiple dimensions using a highly accurate and efficient classifier that can be executed in less than five milliseconds.

Along with colleagues Vishnu Desaraju SM '10 and Lauren Stephens, an MIT undergraduate, How and Aoude tested the algorithm using an extensive set of traffic data collected at a busy intersection in Christiansburg, Va. The intersection was heavily monitored as part of a safety-prediction project sponsored by the DOT, which outfitted it with a number of instruments that tracked vehicle speed and location, as well as when lights turned red.

Aoude and colleagues applied their algorithm to data from more than 15,000 approaching vehicles at the intersection, and found that it was able to correctly identify red-light violators 85 percent of the time -- an improvement of 15 to 20 percent over existing algorithms.

The researchers were able to predict, within a couple of seconds, whether a car would run a red light. They also found a "sweet spot" -- one to two seconds in advance of a potential collision -- when the algorithm has the highest accuracy and a driver may still have enough time to react.

Compared to similar safety-prediction technologies, the group found that its algorithm generated fewer false positives. How says this may be due to the algorithm's ability to analyze multiple parameters. He adds that other algorithms tend to be "skittish," erring on the side of caution in flagging potential problems, which may itself be a problem when cars are outfitted with such technology.

"The challenge is, you don't want to be overly pessimistic," How says. "If you're too pessimistic, you start reporting there's a problem when there really isn't, and then very rapidly, the human's going to push a button that turns this thing off."

The researchers are now investigating ways to design a closed-loop system -- to give drivers a recommendation of what to do in response to a potential accident -- and are also planning to adapt the existing algorithm to air traffic control, to predict the behavior of aircraft.

Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology.


ScienceDaily (Nov. 30, 2011) — With the December holidays a peak season for indulging in marzipan, scientists are reporting development of a new test that can tell the difference between the real thing -- a pricey but luscious paste made from ground almonds and sugar -- and cheap fakes made from ground soy, peas and other ingredients. The report appears in ACS' Journal of Agricultural and Food Chemistry.

Ilka Haase and colleagues explain that marzipan is a popular treat in some countries, especially at Christmas and New Year's, when displays of marzipan sculpted into fruit, Santa and tree shapes pop up in stores. And cakes like marzipan stollen (a rich combo of raisins, nuts and cherries with a marzipan filling) are a holiday tradition. But the cost of almonds leads some unscrupulous manufacturers to use cheap substitutes like ground-up peach seeds, soybeans or peas.

Current methods for detecting that trickery have drawbacks, allowing counterfeit marzipan to slip onto the market to unsuspecting consumers. To improve the detection of contaminants in marzipan, the researchers became food detectives and adapted a method called the polymerase chain reaction (PCR) -- the same test famed for use in crime scene investigations.

They tested various marzipan concoctions with different amounts of apricot seeds, peach seeds, peas, beans, soy, lupine, chickpeas, cashews and pistachios. PCR enabled them to easily finger the doctored pastes. They could even detect small amounts -- as little as 0.1% -- of an almond substitute. The researchers say that the PCR method could serve as a perfect tool for the routine screening of marzipan pastes for small amounts of contaminants.

Story Source:

The above story is reprinted from materials provided by American Chemical Society.

Journal Reference:

Philipp Brüning, Ilka Haase, Reinhard Matissek, Markus Fischer. Marzipan: Polymerase Chain Reaction-Driven Methods for Authenticity Control. Journal of Agricultural and Food Chemistry, 2011; 59 (22): 11910 DOI: 10.1021/jf202484a


ScienceDaily (Nov. 30, 2011) — Distrust is the central motivating factor behind why religious people dislike atheists, according to a new study led by University of British Columbia psychologists.

"Where there are religious majorities -- that is, in most of the world -- atheists are among the least trusted people," says lead author Will Gervais, a doctoral student in UBC's Dept. of Psychology. "With more than half a billion atheists worldwide, this prejudice has the potential to affect a substantial number of people."

While the reasons behind antagonism towards atheists have not been fully explored, the study -- published in the current online issue of the Journal of Personality and Social Psychology -- is among the first explorations of the social psychological processes underlying anti-atheist sentiments.

"This antipathy is striking, as atheists are not a coherent, visible or powerful social group," says Gervais, who co-authored the study with UBC Associate Prof. Ara Norenzayan and Azim Shariff of the University of Oregon. The study is titled, Do You Believe in Atheists? Distrust is Central to Anti-Atheist Prejudice.

The researchers conducted a series of six studies with 350 American adults and nearly 420 university students in Canada, posing a number of hypothetical questions and scenarios to the groups. In one study, participants found a description of an untrustworthy person to be more representative of atheists than of Christians, Muslims, gay men, feminists or Jewish people. Only rapists were distrusted to a comparable degree.

The researchers concluded that religious believers' distrust -- rather than dislike or disgust -- was the central motivator of prejudice against atheists, adding that these studies offer important clues on how to combat this prejudice.

One motivation for the research was a Gallup poll that found only 45 per cent of American respondents would vote for a qualified atheist candidate for president, says Norenzayan -- the lowest figure among several hypothetical minority candidates. Poll respondents also rated atheists as the group that least agrees with their vision of America and as the group they would most disapprove of their children marrying.

The religious behaviors of others may provide believers with important social cues, the researchers say. "Outward displays of belief in God may be viewed as a proxy for trustworthiness, particularly by religious believers who think that people behave better if they feel that God is watching them," says Norenzayan. "While atheists may see their disbelief as a private matter on a metaphysical issue, believers may consider atheists' absence of belief as a public threat to cooperation and honesty."

Story Source:

The above story is reprinted from materials provided by University of British Columbia.


ScienceDaily (Nov. 30, 2011) — Surgeons can learn their skills more quickly if they are taught how to control their eye movements. Research led by the University of Exeter shows that trainee surgeons learn technical surgical skills much more quickly and deal better with the stress of the operating theatre if they are taught to mimic the eye movements of experts.

This research, published in the journal Surgical Endoscopy, could transform the way in which surgeons are trained to be ready for the operating theatre.

Working in collaboration with the University of Hong Kong, the Royal Devon and Exeter NHS Foundation Trust and the Horizon training centre Torbay, the University of Exeter team identified differences in the eye movements of expert and novice surgeons. They devised a gaze training programme, which taught the novices the 'expert' visual control patterns. This enabled them to learn technical skills more quickly than their fellow students and perform these skills in distracting conditions similar to the operating room.

Thirty medical students were divided into three groups, each undertaking a different type of training. The 'gaze trained' group of students was shown a video, captured by an eye tracker, displaying the visual control of an experienced surgeon. The footage highlighted exactly where and when the surgeon's eyes were fixed during a simulated surgical task. The students then conducted the task themselves, wearing the same eye-tracking device. During the task they were encouraged to adopt the same eye movements as those of the expert surgeon.

Students learned that successful surgeons 'lock' their eyes to a critical location while performing complex movements using surgical instruments. This prevents them from tracking the tip of the surgical tool, helping them to be accurate and avoid being distracted.

After repeating the task a number of times, the students' eye movements soon mimicked those of a far more experienced surgeon. Members of the other groups, who were either taught how to move the surgical instruments or were left to their own devices, did not learn as quickly. Those students' performance broke down when they were put into conditions that simulated the environment of the operating theatre and they needed to multi-task.

Dr Samuel Vine of the University of Exeter explained: "It appears that teaching novices the eye movements of expert surgeons allows them to attain high levels of motor control much quicker than novices taught in a traditional way. This highlights the important link between the eye and hand in the performance of motor skills. These individuals were also able to successfully multi-task without their technical skills breaking down, something that we know experienced surgeons are capable of doing in the operating theatre.

"Teaching eye movements rather than the motor skills may have reduced the working memory required to complete the task. This may be why they were able to multi-task whilst the other groups were not."

Dr Samuel Vine and Dr Mark Wilson from Sport and Health Sciences at the University of Exeter have previously worked with athletes to help them improve their performance through gaze training, but this is the first study to examine the benefits of gaze training in surgical skills training.

Dr Vine added: "The findings from our research highlight the potential for surgical educators to 'speed up' the initial phase of technical skill learning, getting trainees ready for the operating room earlier and therefore enabling them to gain more 'hands on' experience. This is important against a backdrop of reduced government budgets and new EU working time directives, meaning that in the UK we have less money and less time to deliver specialist surgical training."

The research team is now analysing the eye movements of surgeons performing 'real life' operations and is working to develop a software training package that will automatically guide trainees to adopt expert surgeons' eye movements.

Mr John McGrath, Consultant Surgeon at the Royal Devon and Exeter Hospital, said: "The use of simulators has become increasingly common during surgical training to ensure that trainee surgeons have reached a safe level of competency before performing procedures in the real-life operating theatre. Up to now, there has been fairly limited research to understand how these simulators can be used to their maximum potential.

"This exciting collaboration with the Universities of Exeter and Hong Kong has allowed us to trial a very novel approach to surgical education, applying the team's international expertise in the field of high performance athletes. Focussing on surgeons' eye movements has resulted in a reduction in the time taken to learn specific procedures and, more importantly, demonstrated that their skills are less likely to break down under pressure. Our current work has now moved into the operating theatre to ensure that patients will benefit from the advances in surgical training and surgical safety."

Story Source:

The above story is reprinted from materials provided by University of Exeter.


ScienceDaily (Nov. 30, 2011) — Scientists investigating the interactions, or binding patterns, of a major tumor-suppressor protein known as p53 with the entire genome in normal human cells have turned up key differences from those observed in cancer cells. The distinct binding patterns reflect differences in the chromatin (the way DNA is packed with proteins), which may be important for understanding the function of the tumor suppressor protein in cancer cells.

The study was conducted by scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory and collaborators at Cold Spring Harbor Laboratory, and is published in the December 15 issue of the journal Cell Cycle.

"No other study has shown such a dramatic difference in a tumor suppressor protein binding to DNA between normal and cancer-derived cells," said Brookhaven biologist Krassimira Botcheva, lead author on the paper. "This research makes it clear that it is essential to study p53 functions in both types of cells in the context of chromatin to gain a correct understanding of how p53 tumor suppression is affected by global epigenetic changes -- modifications to DNA or chromatin -- associated with cancer development."

Because of its key role in tumor suppression, p53 is the most studied human protein. It modulates a cell's response to a variety of stresses (nutrient starvation, oxygen level changes, DNA damage caused by chemicals or radiation) by binding to DNA and regulating the expression of an extensive network of genes. Depending on the level of DNA damage, it can activate DNA repair, stop the cells from multiplying, or cause them to self-destruct -- all of which can potentially prevent or stop tumor development. Malfunctioning p53 is a hallmark of human cancers.

Most early studies of p53 binding explored its interactions with isolated individual genes, and all whole-genome studies to date have been conducted in cancer-derived cells. This is the first study to present a high-resolution genome-wide p53-binding map for normal human cells, and to correlate those findings with the "epigenetic landscape" of the genome.

"We analyzed the p53 binding in the context of the human epigenome, by correlating the p53 binding profile we obtained in normal human cells with a published high-resolution map of DNA methylation -- a type of chemical modification that is one of the most important epigenetic modifications to DNA -- that had been generated for the same cells," Botcheva said.

Key findings

In the normal human cells, the scientists found p53 binding sites located in close proximity to genes, and particularly at sites in the genome known as transcription start sites, which represent "start" signals for transcribing the genes. Though this association of binding sites with genes and transcription start sites had previously been observed in studies of functional, individually analyzed binding sites, it was not seen in high-throughput whole-genome studies of cancer-derived cell lines. In those earlier studies, the identified p53 binding sites were generally not close to genes or to the sites in the human genome where transcription starts.

Additionally, nearly half of the newly identified p53 binding sites in the normal cells (in contrast to about five percent of the sites reported in cancer cells) reside in so-called CpG islands. These are short DNA sequences with unusually high numbers of cytosine and guanine bases (the C and G of the four-letter genetic code alphabet, consisting of A, T, C, and G). CpG islands tend to be hypo- (or under-) methylated relative to the heavily methylated mammalian genome.
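For reference, CpG islands are typically identified computationally using the classic Gardiner-Garden and Frommer criteria: a stretch of at least 200 base pairs with GC content above 50 percent and an observed-to-expected CpG ratio above 0.6. The short sketch below applies those criteria to a sequence; it is an illustrative aside, not part of the Brookhaven analysis.

```python
# Illustrative sketch (not from the study): test a DNA sequence against the
# classic CpG-island criteria of Gardiner-Garden and Frommer (1987):
# length >= 200 bp, GC content > 50%, observed/expected CpG ratio > 0.6.

def is_cpg_island(seq: str) -> bool:
    seq = seq.upper()
    n = len(seq)
    if n < 200:
        return False
    g, c = seq.count("G"), seq.count("C")
    gc_content = (g + c) / n
    cpg_observed = seq.count("CG")
    # Expected CpG count if C and G were distributed independently.
    cpg_expected = (c * g) / n
    obs_exp = cpg_observed / cpg_expected if cpg_expected else 0.0
    return gc_content > 0.5 and obs_exp > 0.6

print(is_cpg_island("CG" * 150))         # True: 300 bp, GC-rich, CpG-dense
print(is_cpg_island("AT" * 150 + "CG"))  # False: low GC content
```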

"This association of binding sites with CpG islands in the normal cells is what prompted us to investigate a possible genome-wide correlation between the identified sites and the CpG methylation status," Botcheva said.

The scientists found that p53 binding sites were enriched at hypomethylated regions of the human genome, both in and outside CpG islands.

"This is an important finding because, during cancer development, many CpG islands are subjected to extensive methylation while the bulk of the genomic DNA becomes hypomethylated," Botcheva said. "These major epigenetic changes may contribute to the differences observed in the p53-binding-sites' distribution in normal and cancer cells."

The scientists say this study clearly illustrates that the genomic landscape -- the DNA modifications and the associated chromatin changes -- has a significant effect on p53 binding. Furthermore, it greatly extends the list of experimentally defined p53 binding sites and provides a general framework for investigating the interplay between transcription factor binding, tumor suppression, and epigenetic changes associated with cancer development.

This research, which was funded by the DOE Office of Science, lays groundwork for further advancing the detailed understanding of radiation effects, including low-dose radiation effects, on the human genome.

The research team also includes John Dunn and Carl Anderson of Brookhaven Lab, and Richard McCombie of Cold Spring Harbor Laboratory, where the high-throughput Illumina sequencing was done.

Methodology

The p53 binding sites were identified by a method called ChIP-seq: chromatin immunoprecipitation (ChIP), which uses immunochemistry tools to produce a library of DNA fragments bound by a protein of interest, followed by massively parallel DNA sequencing (seq), which simultaneously determines the sequences (the order of the nucleotide bases A, T, C and G in the DNA) of millions of these fragments.

"The experiment is challenging, the data require independent experimental validation and extensive bioinformatics analysis, but it is indispensable for high-throughput genomic analyses," Botcheva said. Establishing such capability at BNL is directly related to the efforts for development of profiling technologies for evaluating the role of epigenetic modifications in modulating low-dose ionizing radiation responses and also applicable for plant epigenetic studies.

The analysis required custom-designed software developed by Brookhaven bioinformatics specialist Sean McCorkle.

"Mapping the locations of nearly 20 million sequences in the 3-billion-base human genome, identifying binding sites, and performing comparative analysis with other data sets required new programming approaches as well as parallel processing on many CPUs," McCorkle said. "The sheer volume of this data required extensive computing, a situation expected to become increasingly commonplace in biology. While this work was a sequence data-processing milestone for Brookhaven, we expect data volumes only to increase in the future, and the computing challenges to continue."

Story Source:

The above story is reprinted from materials provided by DOE/Brookhaven National Laboratory.

Journal References:

Krassimira Botcheva, Sean R. McCorkle, W.R. McCombie, John J. Dunn, Carl W. Anderson. Distinct p53 genomic binding patterns in normal and cancer-derived human cells. Cell Cycle, 2011; 10 (24)

William A. Freed-Pastor, Carol Prives. Dissimilar DNA binding by p53 in normal and tumor-derived cells. Cell Cycle, 2011; 10 (24)


ScienceDaily (Nov. 30, 2011) — Ultra-tiny zinc oxide (ZnO) particles with dimensions less than one-ten-millionth of a meter are among the ingredients of some commercially available sunscreen products, raising concerns about whether the particles may be absorbed beneath the outer layer of skin. To help answer these safety questions, an international team of scientists from Australia and Switzerland has developed a way to optically test the concentration of ZnO nanoparticles at different skin depths. They found that the nanoparticles did not penetrate beneath the outermost layer of cells when applied to patches of excised skin.

The results, which were published this month in the Optical Society's (OSA) open-access journal Biomedical Optics Express, lay the groundwork for future studies in live patients.

The high optical absorption of ZnO nanoparticles in the UVA and UVB range, along with their transparency in the visible spectrum when mixed into lotions, makes them appealing candidates for inclusion in sunscreen cosmetics. However, the particles have been shown to be toxic to certain types of cells within the body, making it important to study the nanoparticles' fate after being applied to the skin. By characterizing the optical properties of ZnO nanoparticles, the Australian and Swiss research team found a way to quantitatively assess how far the nanoparticles might migrate into skin.

The team used a technique called nonlinear optical microscopy, which illuminates the sample with short pulses of laser light and measures a return signal. Initial results show that ZnO nanoparticles from a formulation that had been rubbed into skin patches for 5 minutes, incubated at body temperature for 8 hours, and then washed off, did not penetrate beneath the stratum corneum, or topmost layer of the skin. The new optical characterization should be a useful tool for future non-invasive in vivo studies, the researchers write.

Story Source:

The above story is reprinted from materials provided by Optical Society of America.

Journal Reference:

Zhen Song, Timothy A. Kelf, Washington H. Sanchez, Michael S. Roberts, Jaro Ricka, Martin Frenz, Andrei V. Zvyagin. Characterization of optical properties of ZnO nanoparticles for quantitative imaging of transdermal transport. Biomedical Optics Express, 2011; 2 (12): 3321 DOI: 10.1364/BOE.2.003321


ScienceDaily (Nov. 30, 2011) — Imagine someone inventing a "super-toner," a revolutionary new "dry ink" for copiers and laser printers that produces higher-quality, sharper color images more economically, cutting electricity by up to 30 percent. One that also reduces emissions of carbon dioxide -- the main greenhouse gas -- in the production of tens of thousands of tons of toner produced each year. One that reduces the cost of laser printing, making it more affordable in more offices, schools and homes.

Sound like a toner that is too good to be true? Well, a team of scientists at the Xerox Corporation actually invented it. A new episode in the 2011 edition of a video series from the American Chemical Society (ACS), the world's largest scientific society, focuses on the research and the teamwork that led to this advance.

Titled Prized Science: How the Science Behind ACS Awards Impacts Your Life, the videos are available without charge at the Prized Science website and on DVD.

ACS encourages educators, schools, museums, science centers, news organizations and others to embed links to Prized Science on their websites. The videos discuss scientific research in non-technical language for general audiences. New episodes in the series, which focuses on ACS' 2011 award recipients, will be issued in November and December.

"Science awards shine light on individuals who have made impressive achievements in research," noted ACS President Nancy B. Jackson, Ph.D. "Often, the focus is on the recipients, with the public not fully grasping how the award-winning research improves the everyday lives of people around the world. The Prized Science videos strive to give people with no special scientific knowledge the chance to discover the chemistry behind the American Chemical Society's national awards and see how it improves and transforms our daily lives."

A Revolutionary New "Dry Ink" for Laser Printers & Photocopy Machines features the research of Patricia Burns, Ph.D., Grazyna Kmiecik-Lawrynowicz, Ph.D., Chieh-Min Cheng, Ph.D., and Tie Hwee Ng, Ph.D., winners of the 2011 ACS Award for Team Innovation sponsored by the ACS Corporation Associates. Toner is the fine powder used instead of ink in photocopy machines, laser printers and multifunction devices -- machines that print, copy and fax. The researchers at Xerox developed a new toner called "EA Toner," which stands for "emulsion aggregation." They start with a liquid material that looks like house paint. That's the "emulsion" part. Then, they throw in pigments for color, waxes and other useful things and let everything "aggregate," or stick together. Then, it all dries out, and what's left is a fine powder that they can put into a toner cartridge. That worked fine in the lab, but scaling it up to produce millions of toner cartridges to meet consumers' demands was difficult -- all of the scientists had to work together to make the new toner a commercial reality.

Story Source:

The above story is reprinted from materials provided by American Chemical Society.


ScienceDaily (Nov. 30, 2011) — A team of researchers from the University of Utah and the University of Massachusetts has identified the first gene associated with frequent herpes-related cold sores.

The findings were published in the Dec. 1, 2011, issue of the Journal of Infectious Diseases.

Herpes simplex labialis (HSL) is an infection caused by herpes simplex virus type 1 (HSV-1) that affects more than 70 percent of the U.S. population. Once HSV-1 has infected the body, it is never removed by the immune system. Instead, it is transported to nerve cell bodies, where it lies dormant until it is reactivated. The most common visible symptom of HSV-1 reactivation is a cold sore on or around the mouth. Although a majority of people are infected by HSV-1, the frequency of cold sore outbreaks is extremely variable and the causes of reactivation are uncertain.

"Researchers believe that three factors contribute to HSV-1 reactivation -- the virus itself, exposure to environmental factors, and genetic susceptibility," says John D. Kriesel, M.D., research associate professor of infectious diseases at the University of Utah School of Medicine and first author on the study. "The goal of our investigation was to define genes linked to cold sore frequency."

Kriesel and his colleagues had previously identified a region of chromosome 21 containing six genes significantly linked to HSL, using DNA collected from 43 large families that had originally been used to map the human genome. In the current study, Kriesel and his colleagues performed intensive analysis of this chromosome region using single nucleotide polymorphism (SNP) genotyping, a test that identifies differences in genetic make-up between individuals.

"Using SNP genotyping, we were able to identify 45 DNA sequence variations among 618 study participants, 355 of whom were known to be infected with HSV-1," says Kriesel. "We then used two methods called linkage analysis and transmission disequilibrium testing to determine if there was a genetic association between particular DNA sequence variations and the likelihood of having frequent cold sore outbreaks."

Kriesel and his colleagues discovered that an obscure gene called C21orf91 was associated with susceptibility to HSL. They identified five major variations of C21orf91, two of which seemed to protect against HSV-1 reactivation and two of which seemed to increase the likelihood of having frequent cold sore outbreaks.

"There is no cure for HSV-1 and, at this time, there is no way for us to predict or prevent cold sore outbreaks," says Kriesel. "The C21orf91 gene seems to play a role in cold sore susceptibility, and if this data is confirmed among a larger, unrelated population, this discovery could have important implications for the development of drugs that affect cold sore frequency."

Kriesel's University of Utah collaborators include Maurine R. Hobbs, Ph.D., research assistant professor of internal medicine and adjunct assistant professor of human genetics, and Mark F. Leppert, Ph.D., distinguished professor and former chair of human genetics.

Story Source:

The above story is reprinted from materials provided by University of Utah Health Sciences.

Journal Reference:

J. D. Kriesel, B. B. Jones, N. Matsunami, M. K. Patel, C. A. St. Pierre, E. A. Kurt-Jones, R. W. Finberg, M. Leppert, M. R. Hobbs. C21orf91 Genotypes Correlate With Herpes Simplex Labialis (Cold Sore) Frequency: Description of a Cold Sore Susceptibility Gene. Journal of Infectious Diseases, 2011; 204 (11): 1654 DOI: 10.1093/infdis/jir633


ScienceDaily (Nov. 30, 2011) — Take a Petri dish containing crude petroleum and it will release a strong odor distinctive of the toxins that make up the fossil fuel. Sprinkle mushroom spores over the Petri dish and let it sit for two weeks in an incubator, and surprise, the petroleum and its smell will disappear. "The mushrooms consumed the petroleum!" says Mohamed Hijri, a professor of biological sciences and researcher at the University of Montreal's Institut de recherche en biologie végétale (IRBV).

Hijri co-directs a project with B. Franz Lang promoting nature as the number one ally in the fight against contamination. Lang holds the Canada Research Chair on Comparative and Evolutionary Genomics and is a professor at the university's Department of Biochemistry. By using bacteria to stimulate the exceptional growth capacity of certain plants and microscopic mushrooms, Hijri and Lang believe they are able to create in situ decontamination units able to successfully attack the most contaminated sites on the planet.

The recipe is simple. In the spring, we plant willow cuttings at 25-centimeter intervals so the roots dive into the ground and soak up the contaminants, which are degraded in the wood with the help of bacteria. At the end of the season, we burn the stems and leaves and are left with a handful of ashes containing all of the heavy metals that accumulated in the plant cells. Highly contaminated soil will be cleansed after just a few cycles. "In addition, it's beautiful," says Hijri, pointing to a picture of dense vegetation covering the ground of an old refinery after just three weeks.

Thanks to the collaboration of an oil company from the Montreal area, the researchers had access to a microbiological paradise: an area where practically nothing can grow and where no one ventures without protective gear worthy of a space traveler. This is where Hijri collected microorganisms specialized in the ingestion of fossil fuels. "If we leave nature to itself, even the most contaminated sites will find some sort of balance thanks to the colonization by bacteria and mushrooms. But by isolating the most efficient species in this biological battle, we can gain a lot of time."

Natural and artificial selection

This is the visible part of the project, which could lead to a breakthrough in soil decontamination. The project, called Improving Bioremediation of Polluted Soils Through Environmental Genomics, requires time-consuming sampling and fieldwork as well as DNA sequencing of the species in question. It involves 16 researchers from the University of Montreal and McGill University, many of whom are affiliated with the IRBV. The team also includes four researchers -- lawyers and political scientists -- specializing in the ethical, environmental, economic, legal and social aspects of genomics.

The principle is based on a well-known process called phytoremediation, which consists of using plants for decontamination. "However, in contaminated soils, it isn't the plant doing most of the work," says Lang. "It's the microorganisms -- the mushrooms and bacteria accompanying the root. There are thousands of species of microorganisms, and our job is to find the best plant-mushroom-bacteria combinations."

Botanist Michel Labrecque is overseeing the plant portion of the project. The willow appears to be one of the leading candidates at this point, given its rapid growth and early foliation. In addition, its stems grow back even stronger after being cut, so there is no need to plant new trees every year. However, the best willow species still needs to be determined.

Story Source:

The above story is reprinted from materials provided by Université de Montréal.


ScienceDaily (Nov. 30, 2011) — The most poisonous substance on Earth -- already used medically in small doses to treat certain nerve disorders and facial wrinkles -- could be re-engineered for an expanded role in helping millions of people with rheumatoid arthritis, asthma, psoriasis and other diseases, scientists are reporting. Their study appears in ACS' journal Biochemistry.

Edwin Chapman and colleagues explain that the toxins produced by Clostridium botulinum bacteria, the cause of a rare but severe form of food poisoning, are the most powerful poisons known to science. Doctors can inject small doses, however, to block the release of neurotransmitters -- the chemical messengers that transmit signals from one nerve cell to another. The toxins break down a protein in nerve cells that mediates the release of neurotransmitters, disrupting nerve signals that cause pain, muscle spasms and other symptoms in certain diseases. That protein exists not just in nerve cells but also in other cells of the human body; those non-nerve cells, however, lack the receptors the botulinum toxins need to enter and act. Chapman's group sought to expand the potential use of the botulinum toxins by hooking them to a molecule that can attach to receptors on other cells.

Their laboratory experiments showed that these engineered botulinum toxins do work in non-nerve cells, blocking the release of a protein from immune cells linked to inflammation, which is the underlying driving force behind a range of diseases. Such botulinum toxin therapy holds potential in a range of chronic inflammatory diseases and perhaps other conditions, which could expand the role of these materials in medicine.

Story Source:

The above story is reprinted from materials provided by American Chemical Society.

Journal Reference:

Felix L. Yeh, Yiming Zhu, William H. Tepp, Eric A. Johnson, Paul J. Bertics, Edwin R. Chapman. Retargeted Clostridial Neurotoxins as Novel Agents for Treating Chronic Diseases. Biochemistry, 2011; 50 (48): 10419 DOI: 10.1021/bi201490t


ScienceDaily (Dec. 1, 2011) — Only 21 percent of surveyed medical students could identify five true and two false indications of when and when not to wash their hands in the clinical setting, according to a study published in the December issue of the American Journal of Infection Control, the official publication of APIC -- the Association for Professionals in Infection Control and Epidemiology.

Three researchers from the Institute for Medical Microbiology and Hospital Epidemiology at Hannover Medical School in Hannover, Germany, collected surveys from 85 medical students in their third year of study during a lecture class that all students must pass before bedside training and contact with patients begin. Students were given seven scenarios, of which five ("before contact with a patient," "before preparation of intravenous fluids," "after removal of gloves," "after contact with the patient's bed," and "after contact with vomit") were correct hand hygiene (HH) indications. Only 33 percent of the students correctly identified all five true indications, and only 21 percent correctly identified all true and false indications.

Additionally, the students expected that their own HH compliance would be "good" while that of nurses would be lower, despite other published data that show a significantly higher rate of HH compliance among nursing students than among medical students. The surveyed students further believed that HH compliance rates would be inversely proportional to the level of training and career attainment of the physician, which confirms a previously discovered bias among medical students that is of particular concern, as these higher-level physicians are often the ones training the medical students at the bedside.

"There is no doubt that we need to improve the overall attitude toward the use of alcohol-based hand rub in hospitals," conclude the authors. "To achieve this goal, the adequate behavior of so-called 'role models' is of particular importance."

Story Source:

The above story is reprinted from materials provided by Elsevier, via AlphaGalileo.

Journal Reference:

K. Graf, I.F. Chaberny, R.-P. Vonberg. Beliefs about hand hygiene: A survey in medical students in their first clinical year. American Journal of Infection Control, Volume 39, Issue 10 (December 2011)


ScienceDaily (Dec. 1, 2011) — Ozark hellbenders have been bred in captivity -- a first for either of the two subspecies of hellbender. The decade-long collaboration of the Saint Louis Zoo's Ron Goellner Center for Hellbender Conservation and the Missouri Department of Conservation has yielded 63 baby hellbenders.

The first hellbender hatched on Nov. 15, and currently there are approximately 120 additional eggs that should hatch within the next week. The eggs are maintained in climate- and water quality-controlled trays behind the scenes in the Zoo's Herpetarium. For 45 to 60 days after emerging, the tiny larvae will retain their yolk sack for nutrients and move very little as they continue their development. As the larvae continue to grow, they will develop legs and eventually lose their external gills by the time they reach 1.5 to 2 years of age. At sexual maturity, at 5 to 8 years of age, adult lengths can approach two feet. Both parents are wild bred: the male has been at the Zoo for the past two years and the female arrived this past September.

Rivers in south-central Missouri and adjacent Arkansas once supported up to 8,000 Ozark hellbenders. Today, fewer than 600 exist in the world -- so few that the amphibian was added in October 2011 to the federal endangered species list.

Due to these drastic declines, captive propagation became a priority in the long-term recovery of the species. Once the captive-bred larvae are 3 to 8 years old, they can then be released into their natural habitat -- the Ozark aquatic ecosystem.

Also known by the colloquial names of "snot otter" and "old lasagna sides," the adult hellbender is one of the largest species of salamanders in North America, with its closest relatives being the giant salamanders of China and Japan, which can reach five feet in length.

With skin that is brown with black splotches, the Ozark hellbender has a slippery, flattened body that moves easily through water and can squeeze under rocks on the bottom of streams.

Like a Canary in a Coal Mine

Requiring cool, clean running water, the Ozark hellbender is also an important barometer of the overall health of that ecosystem -- an aquatic "canary in a coal mine."

"Capillaries near the surface of the hellbender's skin absorb oxygen directly from the water -- as well as hormones, heavy metals and pesticides," said Jeff Ettling, Saint Louis Zoo curator of herpetology and aquatics. "If there is something in the water that is causing the hellbender population to decline, it can also be affecting the citizens who call the area home."

"We have a 15- to 20-year window to reverse this decline," added Missouri Department of Conservation Herpetologist Jeff Briggler, who cites a number of reasons for that decline from loss of habitat to pollution to disease to illegal capture and overseas sale of the hellbender for pets. "We don't want the animal disappearing on our watch."

Reversing A Decline

In 2001, the Ozark Hellbender Working Group of scientists from government agencies, public universities and zoos in Missouri and Arkansas launched a number of projects to staunch that decline. These included egg searches, disease sampling and behavioral studies.

In 2004, funding from private donors, the Missouri Department of Conservation, the United States Fish & Wildlife Service and the Zoo covered the cost of building sophisticated facilities, including climate-controlled streams, to breed the hellbender.

The hellbender propagation facilities include two outdoor streams that are 40 feet long and six feet deep. The area is landscaped with natural gravel, large rocks for hiding and artificial nest boxes, where the fertilized eggs were discovered. A nearby building houses state-of-the-art life support equipment used to filter the water and maintain the streams at the proper temperature.

In addition, two large climate-controlled rooms in the basement of the Zoo's Charles H. Hoessle Herpetarium are the headquarters for the program. The facilities recreate hellbender habitat with closely monitored temperatures, pumps that move purified water, sprinklers synced to mimic natural precipitation patterns, and lights that brighten or dim to simulate sun and shade. The largest room includes a 32-foot simulated stream, complete with native gravel and large rocks for hiding. It houses a breeding group of adult Ozark hellbenders from the North Fork of the White River in Missouri; offspring from these hellbenders will eventually be released back into the wild.

Story Source:

The above story is reprinted from materials provided by Saint Louis Zoo.


ScienceDaily (Nov. 30, 2011) — The record-breaking drought in Texas that has fueled wildfires, decimated crops and forced cattle sales has also reduced levels of groundwater in much of the state to the lowest levels seen in more than 60 years, according to new national maps produced by NASA and distributed by the National Drought Mitigation Center at the University of Nebraska-Lincoln.

The latest groundwater map, released on Nov. 29, shows large patches of maroon over eastern Texas, indicating severely depressed groundwater levels. The maps, generated weekly by NASA's Goddard Space Flight Center in Greenbelt, Md., are publicly available on the Drought Center's website.

"Texas groundwater will take months or longer to recharge," said Matt Rodell, a hydrologist based at Goddard. "Even if we have a major rainfall event, most of the water runs off. It takes a longer period of sustained greater-than-average precipitation to recharge aquifers significantly."

The maps are based on data from NASA's Gravity Recovery and Climate Experiment (GRACE) satellites, which detect small changes in Earth's gravity field caused primarily by the redistribution of water on and beneath the land surface. The paired satellites travel about 137 miles (220 km) apart and record small changes in the distance separating them as they encounter variations in Earth's gravitational field.

To make the maps, scientists used a sophisticated computer model that combines measurements of water storage from GRACE with a long-term meteorological dataset to generate a continuous record of soil moisture and groundwater that stretches back to 1948. GRACE data goes back to 2002. The meteorological data include precipitation, temperature, solar radiation and other ground- and space-based measurements.

The color-coded maps express current water storage as a probability of occurrence in the 63-year record. The maroon shading over eastern Texas, for example, shows that the level of dryness observed over the last week occurred less than two percent of the time between 1948 and the present.
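
To make that "probability of occurrence" concrete, here is a minimal sketch of how such a percentile can be computed against a multi-decade record. The numbers and names are illustrative only; this is not NASA's or the Drought Center's actual processing code.

import numpy as np

def occurrence_percentile(record, current):
    # Percentage of the historical record that was at least as dry as the
    # current value -- a low number means conditions this dry are rare.
    record = np.asarray(record, dtype=float)
    return 100.0 * np.mean(record <= current)

# Illustrative only: 63 years (1948-2010) of synthetic groundwater-storage anomalies.
rng = np.random.default_rng(42)
historical = rng.normal(loc=0.0, scale=10.0, size=63)

current_week = historical.min() - 5.0   # an unusually dry week
print(f"Occurred {occurrence_percentile(historical, current_week):.1f}% of the time in the record")

Grid cells whose value falls below a low threshold, such as two percent, would be the ones shaded maroon on the published maps.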

The groundwater maps aren't the only maps based on GRACE data that the Drought Center publishes each week. The Drought Center also distributes soil moisture maps that show moisture changes in the root zone down to about 3 feet (1 meter) below the surface, as well as surface soil moisture maps that show changes within the top inch (2 cm) of the land.

"All of these maps offer policymakers new information into subsurface water fluctuations at regional to national scales that has not been available in the past," said the Drought Center's Brian Wardlow. The maps provide finer resolution or are more consistently available than other similar sources of information, and having the maps for the three different levels should help decision makers distinguish between short-term and long-term droughts.

"These maps would be impossible to generate using only ground-based observations," said Rodell. "There are groundwater wells all around the United States and the U.S. Geological Survey does keep records from some of those wells, but it's not spatially continuous and there are some big gaps."

The maps also offer farmers, ranchers, water resource managers and even individual homeowners a new tool to monitor the health of critical groundwater resources. "People rely on groundwater for irrigation, for domestic water supply, and for industrial uses, but there's little information available on regional to national scales on groundwater storage variability and how that has responded to a drought," Rodell said. "Over a long-term dry period there will be an effect on groundwater storage and groundwater levels. It's going to drop quite a bit, people's wells could dry out, and it takes time to recover."

The maps are the result of a NASA-funded project at the Drought Center and NASA Goddard to make it easier for the weekly U.S. Drought Monitor to incorporate data from the GRACE satellites. NASA's Jet Propulsion Laboratory, Pasadena, Calif., developed GRACE and manages the mission for NASA. The groundwater and soil moisture maps are updated each Tuesday.

Story Source:

The above story is reprinted from materials provided by NASA/Goddard Space Flight Center.


ScienceDaily (Nov. 30, 2011) — New research finds a marker used to detect plaque in the brain may help doctors make a more accurate diagnosis between two common types of dementia -- Alzheimer's disease and frontotemporal lobar degeneration (FTLD). The study is published in the November 30, 2011, online issue of Neurology®, the medical journal of the American Academy of Neurology.

"These two types of dementia share similar symptoms, so telling the two apart while a person is living is a real challenge, but important so doctors can determine the best form of treatment," said study author Gil D. Rabinovici, MD, of the University of California San Francisco Memory and Aging Center and a member of the American Academy of Neurology.

For the study, 107 people with early onset Alzheimer's disease or FTLD underwent a brain PET scan using a PIB marker, which detects the amyloid plaque in the brain that is a hallmark of Alzheimer's disease but is not associated with FTLD. The participants also underwent a PET scan using an FDG marker, which detects changes in the brain's metabolism and is currently used to help differentiate between the two types of dementia.

The study found the PIB PET scan performed at least as well as the FDG PET scan in differentiating between Alzheimer's disease and FTLD, and offered higher sensitivity and better accuracy and precision in its qualitative (visual) readings. The study found PIB had a sensitivity of 89.5 percent, compared with 77.5 percent for FDG.
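
Sensitivity here is simply the share of true Alzheimer's cases that a scan correctly flags. The short sketch below shows the arithmetic with hypothetical counts chosen only to reproduce the reported percentages; they are not the study's actual data.

def sensitivity(true_positives, false_negatives):
    # Sensitivity = TP / (TP + FN): the fraction of actual disease cases the test identifies.
    return true_positives / (true_positives + false_negatives)

# Hypothetical counts for illustration only (not the study's data):
print(f"PIB-like sensitivity: {sensitivity(34, 4):.1%}")   # 34/38 = 89.5%
print(f"FDG-like sensitivity: {sensitivity(31, 9):.1%}")   # 31/40 = 77.5%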

"While widespread use of PIB PET scans isn't available at this time, similar amyloid markers are being developed for clinical use, and these findings support a role for amyloid imaging in correctly diagnosing Alzheimer's disease versus FTLD," said Rabinovici.

The study was conducted at the University of California (UC) San Francisco, UC Berkeley and Lawrence Berkeley National Laboratory, and supported by the National Institute on Aging, the California Department of Health Services, the Alzheimer's Association, John Douglas French Alzheimer's Foundation and the Consortium for Frontotemporal Dementia Research.

Story Source:

The above story is reprinted from materials provided by American Academy of Neurology.


ScienceDaily (Nov. 30, 2011) — A co-evolutionary arms race exists between social insects and their parasites. Army ants (Leptogenys distinguenda) share their nests with several parasites such as beetles, snails and spiders. They also share their food with the kleptoparasitic silverfish (Malayatelura ponerophila). New research published in BioMed Central's open access journal BMC Ecology shows that the silverfish manage to hide amongst the ants by covering themselves in the ant's chemical scent.

Myrmecophilous (ant-loving) silverfish live their lives in and amongst army ants. To avoid being killed or ejected from the nest, the silverfish must somehow persuade the ants that they are not invaders. These ants have limited eyesight and live in a world of chemical cues, recognizing their colony members by scent. Christoph von Beeren and Volker Witte from the Ludwig Maximilian University, Munich, collected L. distinguenda ants and M. ponerophila silverfish from the tropical rainforests of Ulu Gombak, Malaysia, and found that, while the ants carried 70 distinct hydrocarbon compounds on their cuticles, the silverfish carried no distinctive compounds of their own. Instead, they carried the host colony's scent, a phenomenon known as chemical mimicry.

By tracking the transfer of a tagged hydrocarbon from host ants to silverfish, it became apparent that the silverfish pilfer their host's scent, preferentially by rubbing against defenseless 'callows' (immature ants). To avoid ant aggression, silverfish must continually top up this scent. Silverfish isolated from the colony lost their protective scent and were chased, seized and bitten by worker ants, sometimes resulting in their death.
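
One common way to quantify how closely a nest parasite's cuticular hydrocarbon profile matches its host colony's is a profile-similarity index such as Bray-Curtis. The sketch below uses made-up relative abundances purely for illustration; it is not the authors' analysis, only a way to picture what "carrying the colony scent" and "losing it in isolation" look like as numbers.

import numpy as np

def bray_curtis_similarity(p, q):
    # Bray-Curtis similarity between two hydrocarbon profiles (1.0 = identical profiles).
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()          # normalize to relative abundances
    return 1.0 - np.abs(p - q).sum() / (p + q).sum()

# Made-up relative abundances of five cuticular hydrocarbons (illustrative only).
colony     = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
integrated = np.array([0.28, 0.26, 0.19, 0.17, 0.10])   # silverfish that recently rubbed against callows
isolated   = np.array([0.55, 0.10, 0.05, 0.05, 0.25])   # silverfish whose acquired scent has worn off

print("integrated vs colony:", round(bray_curtis_similarity(integrated, colony), 2))   # ~0.97
print("isolated   vs colony:", round(bray_curtis_similarity(isolated, colony), 2))     # ~0.6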

Prof. Witte explained, "It seems that silverfish and ants are engaged in a co-evolutionary arms race. The ants have equipped themselves with a complicated scent recognition system to safeguard their nest from predators and parasites. While the ants were developing their nest protection strategy, the silverfish evolved elaborate behavioral patterns, pilfering the hosts' own recognition cues to outwit the ants' chemical defenses. Consequently, the silverfish have access to food and shelter in the inner part of the ants' nest without giving anything in return."

Story Source:

The above story is reprinted from materials provided by BioMed Central.

Journal Reference:

Christoph von Beeren, Stefan Schulz, Rosli Hashim, Volker Witte. Acquisition of chemical recognition cues facilitates integration into ant societies. BMC Ecology, 2011 (in press)


ScienceDaily (Nov. 30, 2011) — As global temperatures continue to rise at an accelerated rate due to deforestation and the burning of fossil fuels, natural stores of carbon in the Arctic are cause for serious concern, researchers say.

In an article scheduled to be published Dec. 1 in the journal Nature, a survey of 41 international experts led by University of Florida ecologist Edward Schuur shows that models created to estimate global warming may have underestimated the magnitude of carbon emissions from permafrost over the next century. The effect on climate change is projected to be 2.5 times greater than models predicted, partly because of the amount of methane released from permafrost, or frozen soil.

"We're talking about carbon that's in soil, just like in your garden where there's compost containing carbon slowly breaking down, but in permafrost it's almost stopped because the soil is frozen," Schuur said. "As that soil warms up, that carbon can be broken down by bacteria and fungi, and as they metabolize, they are releasing carbon and methane, greenhouse gases that cause warmer temperatures."

As a result of plant and animal remains decomposing over thousands of years, organic carbon in the permafrost zone is distributed across 11.7 million square miles of land, in an amount more than three times larger than previously estimated. The new estimate is based mainly on observations, soil measurements and experiments showing that the carbon is stored much deeper than previously thought.

"We know the models are not yet giving us the right answer -- it's going to take time and development to make those better, and that process is not finished yet," Schuur said. "It's an interesting exercise in watching how scientists, who are very cautious in their training, make hypotheses about what our future will look like. The numbers are significant, and they appear like they are plausible and they are large enough for significant concern, because if climate change goes 20 or 30 percent faster that we had predicted already, that's a pretty big boost."

The survey, which was completed following a National Science Foundation-funded Permafrost Carbon Network workshop about six months ago, proposed four warming scenarios extending to 2040, 2100 and 2300. Researchers were asked to predict the amount of permafrost likely to thaw, how much carbon would be released, and what fraction of that would be methane, which has much more warming potential than carbon dioxide.

The carbon in northern soils is natural and has no effect on climate as long as it remains frozen underground, but once released as a greenhouse gas it adds to warming. Because the warming that speeds up permafrost thaw is driven by greenhouse gas emissions from deforestation and the burning of fossil fuels, however, humans could slow the process by curbing those emissions.

"Even though we're talking about a place that is very far away and seems to be out of our control, we actually have influence over what happens based on the overall trajectory of warming. If we followed a lower trajectory of warming based on controlling emissions from the burning of fossil fuels, it has the effect of slowing the whole process down and keeping a lot more carbon in the ground," Schuur said. "Just by addressing the source of emissions that are from humans, we have this potential to just keep everything closer to its current state, frozen in permafrost, rather than going into the atmosphere."

The survey shows that by 2100, experts believe the amount of carbon released will be 1.7 to 5.2 times greater than previous models predict, under scenarios where Arctic temperatures rise 13.5 degrees Fahrenheit (7.5 degrees Celsius). Some predicted effects of global warming include sea level rise, loss of biodiversity as some organisms are unable to migrate as quickly as the climate shifts, and more extreme weather events that could affect food supply and water resources.

"This new research shows that the unmanaged part of the biosphere has a major role in determining the future trajectory of climate change," said Stanford University biology professor Christopher Field, who was not involved in the study. "The implication is sobering. Whatever target we set for atmospheric CO2, this new research means we will need to work harder to reach it. But of course, limiting the amount of climate change also decreases the climate damage from permafrost melting."

When carbon is released from the ground as a result of thawing permafrost, there is no way to trap the gases at the source, so action to slow their effect must be taken beforehand.

"If you think about fossil fuel and deforestation, those are things people are doing, so presumably if you had enough will, you could change your laws and adjust your society to slow some of that down," Schuur said. "But when carbon starts being emitted from the permafrost, you can't immediately say, 'OK, we've had enough of this, let's just stop doing it,' because it's a natural cycle emitting carbon whether you like it or not. Once we start pushing it, it's going to be releasing under its own dynamic."

Story Source:

The above story is reprinted from materials provided by University of Florida. The original article was written by Danielle Torrent.

Journal Reference:

Edward A. G. Schuur, Benjamin Abbott. Climate change: High risk of permafrost thaw. Nature, 2011; 480 (7375): 32 DOI: 10.1038/480032a


ScienceDaily (Nov. 30, 2011) — Geophysicists from Potsdam have identified a mechanism that can explain the irregular distribution of strong earthquakes along the San Andreas Fault in California. As the journal Nature reports in its latest issue, the scientists examined the electrical conductivity of the rocks at great depths, which is closely related to the water content within the rocks. From the pattern of electrical conductivity and seismic activity they were able to deduce that rock water acts as a lubricant.

Los Angeles moves toward San Francisco at a pace of about six centimeters per year, because the Pacific plate, which carries Los Angeles, is moving northward relative to the North American plate, which carries San Francisco. But this is only the average value. In some areas, movement along the fault is almost continuous, while other segments remain locked until they shift abruptly by several meters, releasing energy in strong earthquakes. In the San Francisco earthquake of 1906, the plates shifted by six meters.
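
The figures quoted above already imply a rough earthquake-cycle estimate: if a locked segment accumulates slip at about six centimeters per year and released six meters in 1906, the implied recurrence interval is on the order of a century. A back-of-the-envelope sketch of that arithmetic (illustrative, not part of the GFZ analysis):

# Back-of-the-envelope recurrence estimate from the numbers quoted in the article.
slip_per_event_m      = 6.0    # slip released in the 1906 San Francisco earthquake
loading_rate_m_per_yr = 0.06   # ~6 cm/yr of relative plate motion

# A locked segment must re-accumulate that slip deficit between ruptures,
# so the implied recurrence interval is slip divided by loading rate.
recurrence_years = slip_per_event_m / loading_rate_m_per_yr
print(f"Implied recurrence interval: about {recurrence_years:.0f} years")   # ~100 years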

The San Andreas Fault acts like a seam of the Earth, running through the entire crust and reaching into the mantle. Geophysicists from the GFZ German Research Centre for Geosciences have succeeded in imaging this interface to great depths and in establishing a connection between processes at depth and events at the surface. "When examining the image of the electrical conductivity, it becomes clear that rock water from depths of the upper mantle, i.e. between 20 to 40 km, can penetrate the shallow areas of the creeping section of the fault, while these fluids are trapped beneath an impermeable layer in other areas," says Dr. Oliver Ritter of the GFZ. "Sliding of the plates is facilitated where fluids can rise."

These results suggest that significant differences exist in the mechanical and material properties along the fault at depth. The so-called tremor signals, for instance, appear to be linked to areas beneath the San Andreas Fault where fluids are trapped. Tremors are low-frequency vibrations that are not associated with the rupture processes typical of normal earthquakes. These observations support the idea that fluids play an important role in the onset of earthquakes.

Story Source:

The above story is reprinted from materials provided by Helmholtz Centre Potsdam - GFZ German Research Centre for Geosciences.

Journal Reference:

Michael Becken, Oliver Ritter, Paul A. Bedrosian, Ute Weckmann. Correlation between deep fluids, tremor and creep along the central San Andreas fault. Nature, 2011; 480 (7375): 87 DOI: 10.1038/nature10609


ScienceDaily (Nov. 30, 2011) — As the Arctic warms, greenhouse gases will be released from thawing permafrost faster and at significantly higher levels than previous estimates, according to survey results from 41 international scientists published in the Nov. 30 issue of the journal Nature.

Permafrost thaw will release approximately the same amount of carbon as deforestation, say the authors, but the effect on climate will be 2.5 times bigger because emissions include methane, which has a greater effect on warming than carbon dioxide.

The survey, led by University of Florida researcher Edward Schuur and University of Alaska Fairbanks graduate student Benjamin Abbott, asked climate experts what percentage of the surface permafrost is likely to thaw, how much carbon will be released and how much of that carbon will be methane. The authors estimate that the amount of carbon released by 2100 will be 1.7 to 5.2 times larger than reported in recent modeling studies, which used a similar warming scenario.

"The larger estimate is due to the inclusion of processes missing from current models and new estimates of the amount of organic carbon stored deep in frozen soils," Abbott said. "There's more organic carbon in northern soils than there is in all living things combined; it's kind of mind boggling."

Northern soils hold around 1,700 gigatons (1,700 billion metric tons) of organic carbon, around four times more than all the carbon ever emitted by modern human activity and twice as much as is now in the atmosphere, according to the latest estimate. When permafrost thaws, organic material in the soil decomposes and releases gases such as methane and carbon dioxide.
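
Those comparisons can be made concrete by backing the other carbon pools out of the 1,700-gigaton figure using the article's own ratios. This is illustrative arithmetic only, not independent data:

# Back out the carbon pools implied by the article's ratios (illustrative arithmetic only).
permafrost_zone_carbon_gt = 1700.0   # organic carbon in northern soils, in gigatons

implied_atmosphere_gt      = permafrost_zone_carbon_gt / 2   # "twice as much as is now in the atmosphere"
implied_human_emissions_gt = permafrost_zone_carbon_gt / 4   # "four times more than all the carbon ever emitted"

print(f"Implied atmospheric carbon:         ~{implied_atmosphere_gt:.0f} Gt")       # ~850 Gt
print(f"Implied cumulative human emissions: ~{implied_human_emissions_gt:.0f} Gt")  # ~425 Gt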

"In most ecosystems organic matter is concentrated only in the top meter of soils, but when arctic soils freeze and thaw the carbon can work its way many meters down, said Abbott, who studies how carbon is released from collapsed landscapes called thermokarsts -- a process not accounted for in current models. Until recently that deep carbon was not included in soil inventories and it still is not accounted for in most climate models.

"We know about a lot of processes that will affect the fate of arctic carbon, but we don't yet know how to incorporate them into climate models," Abbott said. "We're hoping to identify some of those processes and help the models catch up."

Most large-scale models assume that permafrost warming depends on how much the air above the permafrost is warming. Missing from the models, say the authors, are processes such as the effects of abrupt thawing that can melt an ice wedge, result in collapsed ground and accelerate additional thawing.

"This survey is part of the scientific process, what we think is going to happen in the future, and how we come up with testable hypotheses for future research," Schurr said. "Our survey outlines the additional risk to society caused by thawing of the frozen North and the need to reduce fossil fuel use and deforestation."

By integrating data from previous models with expert predictions the authors hope to provide a frame of reference for scientists studying all aspects of climate change.

"Permafrost carbon release is not going to overshadow fossil fuel emissions as the main driver of climate change" said Schuur, "but it is an important amplifier of climate change."

Story Source:

The above story is reprinted from materials provided by University of Alaska Fairbanks.

Journal Reference:

Edward A. G. Schuur, Benjamin Abbott. Climate change: High risk of permafrost thaw. Nature, 2011; 480 (7375): 32 DOI: 10.1038/480032a


ScienceDaily (Dec. 1, 2011) — An archaeological research team from North Carolina State University, the University of Washington and University of Florida has found one of the most diverse collections of prehistoric non-native animal remains in the Caribbean, on the tiny island of Carriacou. The find contributes to our understanding of culture in the region before the arrival of Columbus, and suggests Carriacou may have been more important than previously thought.

The researchers found evidence of five species that were introduced to Carriacou from South America between 1,000 and 1,400 years ago. Only one of these species, the opossum, can still be found on the island. The other species were pig-like peccaries, armadillos, guinea pigs and small rodents called agoutis.

Researchers think the animals were used as sources of food. The scarcity of the remains, and the few sites where they were found, indicate that the animals were not for daily consumption. "We suspect that they may have been foods eaten by people of high status, or used in ritual events," says Dr. Scott Fitzpatrick, an associate professor of anthropology at NC State and co-author of a paper describing the research.

"Looking for patterning in the distribution of animal remains in relation to where ritual artifacts and houses are found will help to test this idea," said Christina Giovas, lead author and a Ph.D. student at the University of Washington.

The team, which also included Ph.D. student Michelle LeFebvre of the University of Florida, found the animal remains at two different sites on the island and used radiocarbon dating to determine their age. The opossum and agouti were the most common, with the agouti remains reflecting the longest presence, running from A.D. 600 to 1400. The guinea pig remains had the narrowest time frame, running from A.D. 985 to 1030.

These dates are consistent with similar findings on other Caribbean islands. However, while these species have been found on other islands, it is incredibly rare for one island to have remains from all of these species. Guinea pigs, for example, were previously unknown in this part of the Caribbean. The diversity is particularly surprising, given that Carriacou is one of the smallest settled islands in the Caribbean, though the number of remains is still not that large -- a pattern seen on other islands as well.

This combination of small geographical area and robust prehistoric animal diversity, along with evidence for artifact trade with other islands and South America, suggests that Carriacou may have had some significance in the pre-Columbian Caribbean as a nexus of interaction between island communities.

The animal remains are also significant because they were found in archaeological digs at well-documented prehistoric villages -- and the remains themselves were dated, as opposed to just the materials (such as charcoal) found near the remains.

"The fact that the dates established by radiocarbon dating are consistent with the dates of associated materials from the villages means the chronology is well established," says Fitzpatrick, who has been doing research on Carriacou since 2003. "In the future we'd like to expand one of the lesser excavated sites to get more information on how common these species may have been, which could shed light on the ecological impact and social importance of these species prehistorically."

The paper, "New records for prehistoric introduction of Neotropical mammals to the West Indies: evidence from Carriacou, Lesser Antilles," is published online in the Journal of Biogeography and was co-authored by Fitzpatrick, Giovas and LeFebvre. The research was supported by the National Science Foundation, NC State, the University of Washington and the University of Florida.

NC State's Department of Sociology and Anthropology is part of the university's College of Humanities and Social Sciences.

Story Source:

The above story is reprinted from materials provided by North Carolina State University.

Journal Reference:

Christina M. Giovas, Michelle J. LeFebvre, Scott M. Fitzpatrick. New records for prehistoric introduction of Neotropical mammals to the West Indies: evidence from Carriacou, Lesser Antilles. Journal of Biogeography, 2011; DOI: 10.1111/j.1365-2699.2011.02630.x


ScienceDaily (Nov. 30, 2011) — Scientists understand that Earth's magnetic field has flipped its polarity many times over the millennia. In other words, if you were alive about 800,000 years ago, and facing what we call north with a magnetic compass in your hand, the needle would point to 'south.' This is because a magnetic compass is calibrated based on Earth's poles. The N-S markings of a compass would be 180 degrees wrong if the polarity of today's magnetic field were reversed. Many doomsday theorists have tried to take this natural geological occurrence and suggest it could lead to Earth's destruction. But would there be any dramatic effects? The answer, from the geologic and fossil records we have from hundreds of past magnetic polarity reversals, seems to be 'no.'

Reversals are the rule, not the exception. Earth has settled in the last 20 million years into a pattern of a pole reversal about every 200,000 to 300,000 years, although it has been more than twice that long since the last reversal. A reversal happens over hundreds or thousands of years, and it is not exactly a clean back flip. Magnetic fields morph and push and pull at one another, with multiple poles emerging at odd latitudes throughout the process. Scientists estimate reversals have happened at least hundreds of times over the past three billion years. And while reversals have happened more frequently in "recent" years, when dinosaurs walked Earth a reversal was more likely to happen only about every one million years.

Sediment cores taken from deep ocean floors can tell scientists about magnetic polarity shifts, providing a direct link between magnetic field activity and the fossil record. Earth's magnetic field determines the magnetization of lava as it is laid down on the ocean floor on either side of the Mid-Atlantic Ridge, where the North American and Eurasian plates are spreading apart. As the lava solidifies, it creates a record of the orientation of past magnetic fields, much as a tape recorder records sound. The last time that Earth's poles flipped in a major reversal was about 780,000 years ago, in what scientists call the Brunhes-Matuyama reversal. The fossil record shows no drastic changes in plant or animal life. Deep ocean sediment cores from this period also indicate no changes in glacial activity, based on the amount of oxygen isotopes in the cores. This also indicates that a polarity reversal would not affect Earth's rotation axis: the tilt of the rotation axis has a significant effect on climate and glaciation, so any change would be evident in the glacial record.

Earth's polarity is not a constant. Unlike a classic bar magnet, or the decorative magnets on your refrigerator, the matter governing Earth's magnetic field moves around. Geophysicists are pretty sure that the reason Earth has a magnetic field is because its solid iron core is surrounded by a fluid ocean of hot, liquid metal. The flow of liquid iron in Earth's core creates electric currents, which in turn create the magnetic field; this process can also be modeled with supercomputers. Ours is, without hyperbole, a dynamic planet. So while parts of Earth's outer core are too deep for scientists to measure directly, we can infer movement in the core by observing changes in the magnetic field. The magnetic north pole has been creeping northward -- by more than 600 miles (1,100 km) -- since the early 19th century, when explorers first located it precisely. It is moving faster now, actually, as scientists estimate the pole is migrating northward about 40 miles per year, as opposed to about 10 miles per year in the early 20th century.

Another doomsday hypothesis about a geomagnetic flip plays up fears about incoming solar activity. This suggestion mistakenly assumes that a pole reversal would momentarily leave Earth without the magnetic field that protects us from solar flares and coronal mass ejections from the sun. But, while Earth's magnetic field can indeed weaken and strengthen over time, there is no indication that it has ever disappeared completely. A weaker field would certainly lead to a small increase in solar radiation on Earth -- as well as a beautiful display of aurora at lower latitudes -- but nothing deadly. Moreover, even with a weakened magnetic field, Earth's thick atmosphere also offers protection against the sun's incoming particles.

The science shows that magnetic pole reversal is -- in terms of geologic time scales -- a common occurrence that happens gradually over millennia. While the conditions that cause polarity reversals are not entirely predictable -- the north pole's movement could subtly change direction, for instance -- there is nothing in the millions of years of geologic record to suggest that any of the 2012 doomsday scenarios connected to a pole reversal should be taken seriously. A reversal might, however, be good business for magnetic compass manufacturers.

Story Source:

The above story is reprinted from materials provided by NASA/Goddard Space Flight Center.
