ScienceDaily (Nov. 30, 2011) — Research conducted by a pair of physicians at Boston University School of Medicine (BUSM) and Boston Medical Center (BMC) has led to the development of a test that can help diagnose membranous nephropathy in its early stages. The test, which is currently offered only in the research setting and is awaiting commercial development, could have significant implications for the diagnosis and treatment of the disease. Currently, the only way to diagnose the disease is through a kidney biopsy.

The pioneering work is being led by Laurence Beck, MD, PhD, assistant professor of medicine at BUSM and a nephrologist at BMC, and David Salant, MD, professor of medicine at BUSM and chief of the renal section at BMC.

Over the past four years, the Halpin Foundation has contributed more than $350,000 to Beck to investigate the genetics and molecular mechanisms behind membranous nephropathy. Most recently, Beck was awarded a $50,000 grant from the Foundation to further his efforts.

Membranous nephropathy is an autoimmune disease in which the immune system attacks the kidneys, causing thickening and dysfunction of the kidneys' filters, called glomeruli. When antibodies attack the glomeruli, large amounts of protein are released into the urine. In 2009, Beck and Salant identified the target of these antibodies as a glomerular protein called PLA2R, or phospholipase A2 receptor; the findings were published in the New England Journal of Medicine.

"For the first time, a specific biomarker has been identified for this relatively common kidney disease," said Beck, who is part of an international collaboration that has demonstrated that these antibodies are present in patients from many different ethnicities.

With the antigen protein identified, Beck and Salant have developed a blood test to detect and measure the amount of the specific antibodies in a sample.

Approximately one third of patients with membranous nephropathy eventually develop kidney failure, requiring dialysis or a kidney transplant. According to the University of North Carolina's Kidney Center, the disease affects people over the age of 40, is rare in children and affects more men than women. It is treated with potent chemotherapy-type immunosuppressive drugs; when treatment is successful, the antibodies disappear.

"Being able to detect the presence of these antibodies using a blood test has tremendous implications about who is treated, and for how long, with the often toxic immunosuppressive drugs," said Beck.

Beck continues his research focus on the treatment of the disease by targeting the antibodies and stopping them from attacking the glomeruli.


Story Source:

The above story is reprinted from materials provided by Boston University Medical Center.


ScienceDaily (Nov. 30, 2011) — A comparison of home-birth trends of the 1970s finds many similarities -- and some differences -- related to current trends in home births.

For instance, in the 1970s -- as now -- women opting for home births tended to have higher levels of education. That's according to a 1978 survey by Home Oriented Maternity Experience (HOME) that was recently found by University of Cincinnati historian Wendy Kline in the archives of the American Congress of Obstetricians and Gynecologists (ACOG).

That survey showed that in the late 1970s, one third of the group's members participating in home births had a bachelor's, master's or doctoral degree. Fewer than one percent did not have a high school education.

Also, according to the 2,000 respondents to HOME's 1978 survey, 36 percent of women having home births at the time were attended by physicians. That is a much higher percentage than is the case for today's home births. (Research by Eugene Declercq of the Boston University School of Public Health and Mairi Breen Rothman of Metro Area Midwives and Allied Services found that about five percent of home births were attended by a physician in 2008.)

These comparisons are possible because of historical information found by UC's Kline, including "A Survey of Current Trends in Home Birth," written by the founders of HOME and published in 1979.

Kline is also conducting interviews with, and has obtained historical documents from, the founders of HOME and the midwives first associated with it. The grassroots organization was founded in 1974 to provide information and education related to home births.

Kline will present this research and related historical information as one of only nine international presenters invited to the "Communicating Reproduction" conference at Cambridge University Dec. 6-7.

The debate surrounding the health and safety of home births rose to national prominence as recently as October 2011, during the Home Birth Consensus Summit in Virginia, held because of increasing interest in home births as an option for expectant mothers.

Overall, Kline's research on HOME and ACOG counters the stereotypical view of the 1970s home-birth movement as countercultural and peopled by "hippies." In fact, the founders of HOME deliberately reached out to a broad cross section of women, from religious conservatives to those on the political left.

Said Kline, "In looking through the historical record, we find that many women involved in home births in the 1970s signed their names 'Mrs. Robert Smith' or 'Mrs. William Hoffman.' The movement included professionals, business people, farmers, laborers and artists. It defies simplistic categorization."


Story Source:

The above story is reprinted from materials provided by University of Cincinnati.


ScienceDaily (Nov. 30, 2011) — In 2008, according to the National Highway Traffic Safety Administration, 2.3 million automobile crashes occurred at intersections across the United States, resulting in some 7,000 deaths. More than 700 of those fatalities were due to drivers running red lights. But, according to the Insurance Institute for Highway Safety, half of the people killed in such accidents are not the drivers who ran the light, but other drivers, passengers and pedestrians.

In order to reduce the number of accidents at intersections, researchers at MIT have devised an algorithm that predicts when an oncoming car is likely to run a red light. Based on parameters such as the vehicle's deceleration and its distance from a light, the group was able to determine which cars were potential "violators" -- those likely to cross into an intersection after a light has turned red -- and which were "compliant."
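
The following is a minimal, illustrative sketch of the kind of kinematic check that underlies such a prediction: a car that would need to brake harder than a comfortable limit to stop at the line, and that is not already braking near that hard, gets flagged as a potential violator. The threshold values and the constant-deceleration physics here are assumptions for illustration only; the MIT algorithm itself is a more sophisticated classifier, described further below.

    # Illustrative sketch (not the MIT algorithm): flag a vehicle as a likely
    # red-light violator from its speed, distance to the stop line and current
    # deceleration. Thresholds are assumed values for illustration.

    COMFORTABLE_BRAKING = 3.0  # m/s^2, assumed comfortable deceleration limit

    def required_deceleration(speed_mps: float, distance_m: float) -> float:
        """Constant deceleration needed to stop exactly at the stop line."""
        if distance_m <= 0:
            return float("inf")
        return speed_mps ** 2 / (2.0 * distance_m)

    def likely_violator(speed_mps: float, distance_m: float,
                        current_decel_mps2: float) -> bool:
        """A vehicle is flagged if stopping would require harder braking than
        is comfortable and the driver is not already braking near that level."""
        needed = required_deceleration(speed_mps, distance_m)
        return needed > COMFORTABLE_BRAKING and current_decel_mps2 < 0.5 * needed

    # Example: 20 m/s (~45 mph), 40 m from the light, barely braking
    print(likely_violator(20.0, 40.0, 1.0))  # True under these assumed thresholds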

The researchers tested the algorithm on data collected from an intersection in Virginia, finding that it accurately identified potential violators within a couple of seconds of reaching a red light -- enough time, according to the researchers, for other drivers at an intersection to be able to react to the threat if alerted. Compared to other efforts to model driving behavior, the MIT algorithm generated fewer false alarms, an important advantage for systems providing guidance to human drivers. The researchers report their findings in a paper that will appear in the journal IEEE Transactions on Intelligent Transportation Systems.

Jonathan How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics at MIT, says "smart" cars of the future may use such algorithms to help drivers anticipate and avoid potential accidents.

Video: See the team's algorithm in action as robots are able to negotiate a busy intersection and avoid potential accidents.

"If you had some type of heads-up display for the driver, it might be something where the algorithms are analyzing and saying, 'We're concerned,'" says How, who is one of the paper's authors. "Even though your light might be green, it may recommend you not go, because there are people behaving badly that you may not be aware of."

How says that in order to implement such warning systems, vehicles would need to be able to "talk" with each other, wirelessly sending and receiving information such as a car's speed and position data. Such vehicle-to-vehicle (V2V) communication, he says, can potentially improve safety and avoid traffic congestion. Today, the U.S. Department of Transportation (DOT) is exploring V2V technology, along with several major car manufacturers -- including Ford Motor Company, which this year has been road-testing prototypes with advanced Wi-Fi and collision-avoidance systems.

"You might have a situation where you get a snowball effect where, much more rapidly than people envisioned, this [V2V] technology may be accepted," How says.

In the meantime, researchers including How are developing algorithms to analyze vehicle data that would be broadcast via such V2V systems. Georges Aoude SM '07, PhD '11, a former student of How's, designed an algorithm based on a technique that has been successfully applied in many artificial intelligence domains, but is relatively new to the transportation field. This algorithm is able to capture a vehicle's motion in multiple dimensions using a highly accurate and efficient classifier that can be executed in less than five milliseconds.
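
The press release does not name the specific technique, so purely as an illustration of how a multi-feature classifier of this general kind might be built, here is a short sketch that trains a support vector machine (via scikit-learn) on made-up vehicle features and labels; the feature set, the data and the choice of library are assumptions, not details from the paper.

    # Illustrative only: train a generic classifier to map per-vehicle features
    # (speed, distance to the stop line, deceleration) to a violator/compliant
    # label. The training data below are invented.
    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical observations: [speed m/s, distance m, deceleration m/s^2]
    X = np.array([
        [18.0, 30.0, 0.5],   # fast, close, barely braking
        [20.0, 25.0, 1.0],
        [22.0, 35.0, 0.2],
        [15.0, 60.0, 3.0],   # braking early
        [10.0, 40.0, 2.5],
        [12.0, 50.0, 2.8],
    ])
    y = np.array([1, 1, 1, 0, 0, 0])  # 1 = potential violator, 0 = compliant

    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X, y)

    # Classify a new approaching vehicle (hypothetical measurements)
    print(clf.predict([[19.0, 28.0, 0.8]]))  # [1]: flagged on this toy data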

Along with colleagues Vishnu Desaraju SM '10 and Lauren Stephens, an MIT undergraduate, How and Aoude tested the algorithm using an extensive set of traffic data collected at a busy intersection in Christiansburg, Va. The intersection was heavily monitored as part of a safety-prediction project sponsored by the DOT, which outfitted it with instruments that tracked vehicle speed and location, as well as when lights turned red.

Aoude and colleagues applied their algorithm to data from more than 15,000 approaching vehicles at the intersection, and found that it was able to correctly identify red-light violators 85 percent of the time -- an improvement of 15 to 20 percent over existing algorithms.

The researchers were able to predict, within a couple of seconds, whether a car would run a red light. They also found a "sweet spot" -- one to two seconds in advance of a potential collision -- when the algorithm is most accurate and a driver may still have enough time to react.

Compared to similar safety-prediction technologies, the group found that its algorithm generated fewer false positives. How says this may be due to the algorithm's ability to analyze multiple parameters. He adds that other algorithms tend to be "skittish," erring on the side of caution in flagging potential problems, which may itself be a problem when cars are outfitted with such technology.

"The challenge is, you don't want to be overly pessimistic," How says. "If you're too pessimistic, you start reporting there's a problem when there really isn't, and then very rapidly, the human's going to push a button that turns this thing off."

The researchers are now investigating ways to design a closed-loop system -- to give drivers a recommendation of what to do in response to a potential accident -- and are also planning to adapt the existing algorithm to air traffic control, to predict the behavior of aircraft.


Story Source:

The above story is reprinted from materials provided by Massachusetts Institute of Technology.


ScienceDaily (Nov. 30, 2011) — With the December holidays a peak season for indulging in marzipan, scientists are reporting development of a new test that can tell the difference between the real thing -- a pricey but luscious paste made from ground almonds and sugar -- and cheap fakes made from ground soy, peas and other ingredients. The report appears in ACS' Journal of Agricultural and Food Chemistry.

Ilka Haase and colleagues explain that marzipan is a popular treat in some countries, especially at Christmas and New Year's, when displays of marzipan sculpted into fruit, Santa and tree shapes pop up in stores. And cakes like marzipan stollen (a rich combo of raisins, nuts and cherries with a marzipan filling) are a holiday tradition. But the cost of almonds leads some unscrupulous manufacturers to use cheap substitutes like ground-up peach seeds, soybeans or peas.

Current methods for detecting that trickery have drawbacks, allowing counterfeit marzipan to slip onto the market to unsuspecting consumers. To improve the detection of contaminants in marzipan, the researchers became food detectives and adapted a method called the polymerase chain reaction (PCR) -- the same test famed for use in crime scene investigations.

They tested various marzipan concoctions with different amounts of apricot seeds, peach seeds, peas, beans, soy, lupine, chickpeas, cashews and pistachios. PCR enabled them to easily finger the doctored pastes. They could even detect small amounts -- as little as 0.1% -- of an almond substitute. The researchers say that the PCR method could serve as a perfect tool for the routine screening of marzipan pastes for small amounts of contaminants.
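
As a rough illustration of the logic behind species-specific detection (the laboratory PCR assay itself is not software, and the marker sequences below are invented for this sketch rather than taken from the study), one can think of each potential adulterant as being flagged when a short DNA sequence unique to that species turns up in DNA extracted from the paste:

    # Purely illustrative: hypothetical species-specific marker sequences.
    # These are NOT the primers or targets used in the actual study.
    HYPOTHETICAL_MARKERS = {
        "apricot seed": "ATGCCGTTAGGC",
        "peach seed":   "GGATCCATTGCA",
        "soy":          "TTGACGGCATCC",
        "lupine":       "CAGGTTACGGAT",
    }

    def screen_sample(sample_dna: str) -> list[str]:
        """Return the adulterants whose (hypothetical) marker sequence
        appears in DNA extracted from a marzipan sample."""
        return [species for species, marker in HYPOTHETICAL_MARKERS.items()
                if marker in sample_dna]

    # Example: a sample containing the made-up soy marker
    print(screen_sample("AAATTGACGGCATCCGGG"))  # ['soy']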


Story Source:

The above story is reprinted from materials provided by American Chemical Society.


Journal Reference:

Philipp Brüning, Ilka Haase, Reinhard Matissek, Markus Fischer. Marzipan: Polymerase Chain Reaction-Driven Methods for Authenticity Control. Journal of Agricultural and Food Chemistry, 2011; 59 (22): 11910 DOI: 10.1021/jf202484a


ScienceDaily (Nov. 30, 2011) — Distrust is the central motivating factor behind why religious people dislike atheists, according to a new study led by University of British Columbia psychologists.

"Where there are religious majorities -- that is, in most of the world -- atheists are among the least trusted people," says lead author Will Gervais, a doctoral student in UBC's Dept. of Psychology. "With more than half a billion atheists worldwide, this prejudice has the potential to affect a substantial number of people."

While the reasons behind antagonism towards atheists have not been fully explored, the study -- published in the current online issue of the Journal of Personality and Social Psychology -- is among the first explorations of the social psychological processes underlying anti-atheist sentiments.

"This antipathy is striking, as atheists are not a coherent, visible or powerful social group," says Gervais, who co-authored the study with UBC Associate Prof. Ara Norenzayan and Azim Shariff of the University of Oregon. The study is titled, Do You Believe in Atheists? Distrust is Central to Anti-Atheist Prejudice.

The researchers conducted a series of six studies with 350 American adults and nearly 420 university students in Canada, posing a number of hypothetical questions and scenarios to the groups. In one study, participants found a description of an untrustworthy person to be more representative of atheists than of Christians, Muslims, gay men, feminists or Jewish people. Only rapists were distrusted to a comparable degree.

The researchers concluded that religious believers' distrust -- rather than dislike or disgust -- was the central motivator of prejudice against atheists, adding that these studies offer important clues on how to combat this prejudice.

One motivation for the research was a Gallup poll that found that only 45 per cent of American respondents would vote for a qualified atheist president, says Norenzayan. The figure was the lowest among several hypothetical minority candidates. Poll respondents also rated atheists as the group that least agrees with their vision of America and as the group they would most disapprove of their children marrying.

The religious behaviors of others may provide believers with important social cues, the researchers say. "Outward displays of belief in God may be viewed as a proxy for trustworthiness, particularly by religious believers who think that people behave better if they feel that God is watching them," says Norenzayan. "While atheists may see their disbelief as a private matter on a metaphysical issue, believers may consider atheists' absence of belief as a public threat to cooperation and honesty."


Story Source:

The above story is reprinted from materials provided by University of British Columbia.


ScienceDaily (Nov. 30, 2011) — Surgeons can learn their skills more quickly if they are taught how to control their eye movements. Research led by the University of Exeter shows that trainee surgeons learn technical surgical skills much more quickly and deal better with the stress of the operating theatre if they are taught to mimic the eye movements of experts.

This research, published in the journal Surgical Endoscopy, could transform the way in which surgeons are trained to be ready for the operating theatre.

Working in collaboration with the University of Hong Kong, the Royal Devon and Exeter NHS Foundation Trust and the Horizon training centre Torbay, the University of Exeter team identified differences in the eye movements of expert and novice surgeons. They devised a gaze training programme, which taught the novices the 'expert' visual control patterns. This enabled them to learn technical skills more quickly than their fellow students and perform these skills in distracting conditions similar to the operating room.

Thirty medical students were divided into three groups, each undertaking a different type of training. The 'gaze trained' group of students was shown a video, captured by an eye tracker, displaying the visual control of an experienced surgeon. The footage highlighted exactly where and when the surgeon's eyes were fixed during a simulated surgical task. The students then conducted the task themselves, wearing the same eye-tracking device. During the task they were encouraged to adopt the same eye movements as those of the expert surgeon.

Students learned that successful surgeons 'lock' their eyes to a critical location while performing complex movements using surgical instruments. This prevents them from tracking the tip of the surgical tool, helping them to be accurate and avoid being distracted.
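
As a hypothetical illustration of what this 'locking' pattern looks like in raw eye-tracking data (the metric, the radius and the coordinates below are invented for this sketch, not taken from the study), one could score what fraction of gaze samples stay near the task-critical target rather than following the tool tip:

    # Hypothetical scoring of eye-tracker samples: the fraction of gaze points
    # that fall near the fixed target versus near the moving tool tip. A
    # target-locked expert pattern gives a high first number and a low second.
    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def gaze_ratios(gaze, target, tool_tips, radius=30.0):
        """Fraction of samples within `radius` pixels of the target and of the
        time-aligned tool tip; the radius is an illustrative threshold."""
        on_target = sum(dist(g, target) <= radius for g in gaze)
        on_tool = sum(dist(g, t) <= radius for g, t in zip(gaze, tool_tips))
        n = len(gaze)
        return on_target / n, on_tool / n

    # Toy data: gaze mostly parked on the target at (100, 100)
    gaze = [(101, 99), (100, 102), (98, 100), (160, 150)]
    tools = [(150, 150), (155, 148), (158, 151), (160, 150)]
    print(gaze_ratios(gaze, (100, 100), tools))  # (0.75, 0.25)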

After repeating the task a number of times, the students' eye movements soon mimicked those of a far more experienced surgeon. Members of the other groups, who were either taught how to move the surgical instruments or were left to their own devices, did not learn as quickly. Those students' performance broke down when they were put into conditions that simulated the environment of the operating theatre and they needed to multi-task.

Dr Samuel Vine of the University of Exeter explained: "It appears that teaching novices the eye movements of expert surgeons allows them to attain high levels of motor control much quicker than novices taught in a traditional way. This highlights the important link between the eye and hand in the performance of motor skills. These individuals were also able to successfully multi-task without their technical skills breaking down, something that we know experienced surgeons are capable of doing in the operating theatre.

"Teaching eye movements rather than the motor skills may have reduced the working memory required to complete the task. This may be why they were able to multi-task whilst the other groups were not."

Dr Samuel Vine and Dr Mark Wilson from Sport and Health Sciences at the University of Exeter have previously worked with athletes to help them improve their performance through gaze training, but this is the first study to examine the benefits of gaze training in surgical skills training.

Dr Vine added: "The findings from our research highlight the potential for surgical educators to 'speed up' the initial phase of technical skill learning, getting trainees ready for the operating room earlier and therefore enabling them to gain more 'hands on' experience. This is important against a backdrop of reduced government budgets and new EU working time directives, meaning that in the UK we have less money and less time to deliver specialist surgical training."

The research team is now analysing the eye movements of surgeons performing 'real life' operations and is working to develop a software training package that will automatically guide trainees to adopt expert surgeons' eye movements.

Mr John McGrath, Consultant Surgeon at the Royal Devon and Exeter Hospital, said: "The use of simulators has become increasingly common during surgical training to ensure that trainee surgeons have reached a safe level of competency before performing procedures in the real-life operating theatre. Up to now, there has been fairly limited research to understand how these simulators can be used to their maximum potential.

"This exciting collaboration with the Universities of Exeter and Hong Kong has allowed us to trial a very novel approach to surgical education, applying the team's international expertise in the field of high performance athletes. Focussing on surgeons' eye movements has resulted in a reduction in the time taken to learn specific procedures and, more importantly, demonstrated that their skills are less likely to break down under pressure. Our current work has now moved into the operating theatre to ensure that patients will benefit from the advances in surgical training and surgical safety."


Story Source:

The above story is reprinted from materials provided by University of Exeter.


ScienceDaily (Nov. 30, 2011) — Scientists investigating the interactions, or binding patterns, of a major tumor-suppressor protein known as p53 with the entire genome in normal human cells have turned up key differences from those observed in cancer cells. The distinct binding patterns reflect differences in the chromatin (the way DNA is packed with proteins), which may be important for understanding the function of the tumor suppressor protein in cancer cells.

The study was conducted by scientists at the U.S. Department of Energy's (DOE) Brookhaven National Laboratory and collaborators at Cold Spring Harbor Laboratory, and is published in the December 15 issue of the journal Cell Cycle.

"No other study has shown such a dramatic difference in a tumor suppressor protein binding to DNA between normal and cancer-derived cells," said Brookhaven biologist Krassimira Botcheva, lead author on the paper. "This research makes it clear that it is essential to study p53 functions in both types of cells in the context of chromatin to gain a correct understanding of how p53 tumor suppression is affected by global epigenetic changes -- modifications to DNA or chromatin -- associated with cancer development."

Because of its key role in tumor suppression, p53 is the most studied human protein. It modulates a cell's response to a variety of stresses (nutrient starvation, oxygen level changes, DNA damage caused by chemicals or radiation) by binding to DNA and regulating the expression of an extensive network of genes. Depending on the level of DNA damage, it can activate DNA repair, stop the cells from multiplying, or cause them to self-destruct -- all of which can potentially prevent or stop tumor development. Malfunctioning p53 is a hallmark of human cancers.

Most early studies of p53 binding explored its interactions with isolated individual genes, and all whole-genome studies to date have been conducted in cancer-derived cells. This is the first study to present a high-resolution genome-wide p53-binding map for normal human cells, and to correlate those findings with the "epigenetic landscape" of the genome.

"We analyzed the p53 binding in the context of the human epigenome, by correlating the p53 binding profile we obtained in normal human cells with a published high-resolution map of DNA methylation -- a type of chemical modification that is one of the most important epigenetic modifications to DNA -- that had been generated for the same cells," Botcheva said.

Key findings

In the normal human cells, the scientists found p53 binding sites in close proximity to genes, particularly at the positions in the genome known as transcription start sites, which represent "start" signals for transcribing the genes. Though this association of binding sites with genes and transcription start sites was previously observed in studies of functional, individually analyzed binding sites, it was not seen in high-throughput whole-genome studies of cancer-derived cell lines. In those earlier studies, the identified p53 binding sites were generally located away from genes and from transcription start sites.

Additionally, nearly half of the newly identified p53 binding sites in the normal cells (in contrast to about five percent of the sites reported in cancer cells) reside in so-called CpG islands. These are short DNA sequences with unusually high numbers of cytosine and guanine bases (the C and G of the four-letter genetic code alphabet, consisting of A, T, C, and G). CpG islands tend to be hypo- (or under-) methylated relative to the heavily methylated mammalian genome.

"This association of binding sites with CpG islands in the normal cells is what prompted us to investigate a possible genome-wide correlation between the identified sites and the CpG methylation status," Botcheva said.

The scientists found that p53 binding sites were enriched at hypomethylated regions of the human genome, both in and outside CpG islands.
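
As a simple illustration of what such enrichment means (the numbers below are hypothetical, not results from the study), the fraction of binding sites falling in hypomethylated regions is compared with the fraction of the genome that is hypomethylated:

    # Hypothetical arithmetic: "enrichment" means sites land in hypomethylated
    # DNA more often than expected if they were scattered uniformly.
    sites_in_hypomethylated = 600              # hypothetical count
    total_sites = 1000                         # hypothetical total
    hypomethylated_fraction_of_genome = 0.15   # hypothetical background

    observed = sites_in_hypomethylated / total_sites   # 0.60
    expected = hypomethylated_fraction_of_genome       # 0.15
    print(f"{observed / expected:.1f}-fold enrichment over genomic background")  # 4.0-fold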

"This is an important finding because, during cancer development, many CpG islands are subjected to extensive methylation while the bulk of the genomic DNA becomes hypomethylated," Botcheva said. "These major epigenetic changes may contribute to the differences observed in the p53-binding-sites' distribution in normal and cancer cells."

The scientists say this study clearly illustrates that the genomic landscape -- the DNA modifications and the associated chromatin changes -- has a significant effect on p53 binding. Furthermore, it greatly extends the list of experimentally defined p53 binding sites and provides a general framework for investigating the interplay between transcription factor binding, tumor suppression, and epigenetic changes associated with cancer development.

This research, which was funded by the DOE Office of Science, lays groundwork for further advancing the detailed understanding of radiation effects, including low-dose radiation effects, on the human genome.

The research team also includes John Dunn and Carl Anderson of Brookhaven Lab, and Richard McCombie of Cold Spring Harbor Laboratory, where the high-throughput Illumina sequencing was done.

Methodology

The p53 binding sites were identified by a method called ChIP-seq: chromatin immunoprecipitation (ChIP), which uses immunochemistry tools to produce a library of DNA fragments bound by a protein of interest, followed by massively parallel DNA sequencing (seq), which simultaneously determines the sequences (the order of the nucleotide bases A, T, C and G) of millions of those fragments.
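
As a minimal sketch of the downstream analysis (not the study's actual pipeline, which used custom software and independent experimental validation), binding sites show up as genomic windows where mapped ChIP reads pile up well above a control sample; the window size and thresholds below are illustrative assumptions:

    # Crude stand-in for peak calling: bin mapped read positions into windows
    # and report windows where ChIP coverage greatly exceeds the control.
    from collections import Counter

    WINDOW = 200      # bp, assumed window size
    MIN_FOLD = 5.0    # assumed enrichment threshold
    MIN_READS = 20    # assumed minimum ChIP reads per window

    def window_counts(read_starts):
        """Bin mapped read start positions into fixed-size genomic windows."""
        return Counter(pos // WINDOW for pos in read_starts)

    def enriched_windows(chip_reads, control_reads):
        chip, ctrl = window_counts(chip_reads), window_counts(control_reads)
        peaks = []
        for win, n_chip in chip.items():
            fold = n_chip / max(ctrl.get(win, 0), 1)
            if n_chip >= MIN_READS and fold >= MIN_FOLD:
                peaks.append(win * WINDOW)
        return sorted(peaks)

    # Toy example: reads piling up near position 1,000 only in the ChIP sample
    chip = [990, 1005, 1010, 1020, 1030] * 5       # 25 reads, 20 in one window
    ctrl = list(range(0, 5000, 250))               # sparse background
    print(enriched_windows(chip, ctrl))            # [1000]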

"The experiment is challenging, the data require independent experimental validation and extensive bioinformatics analysis, but it is indispensable for high-throughput genomic analyses," Botcheva said. Establishing such capability at BNL is directly related to the efforts for development of profiling technologies for evaluating the role of epigenetic modifications in modulating low-dose ionizing radiation responses and also applicable for plant epigenetic studies.

The analysis required custom-designed software developed by Brookhaven bioinformatics specialist Sean McCorkle.

"Mapping the locations of nearly 20 million sequences in the 3-billion-base human genome, identifying binding sites, and performing comparative analysis with other data sets required new programming approaches as well as parallel processing on many CPUs," McCorkle said. "The sheer volume of this data required extensive computing, a situation expected to become increasingly commonplace in biology. While this work was a sequence data-processing milestone for Brookhaven, we expect data volumes only to increase in the future, and the computing challenges to continue."


Story Source:

The above story is reprinted from materials provided by DOE/Brookhaven National Laboratory.


Journal References:

Krassimira Botcheva, Sean R. McCorkle, W.R. McCombie, John J. Dunn, Carl W. Anderson. Distinct p53 genomic binding patterns in normal and cancer-derived human cells. Cell Cycle, 2011; 10 (24)

William A. Freed-Pastor, Carol Prives. Dissimilar DNA binding by p53 in normal and tumor-derived cells. Cell Cycle, 2011; 10 (24)
