ScienceDaily (Oct. 27, 2011) — Identification of three fatty acids involved in the extreme growth of Burmese pythons' hearts following large meals could prove beneficial in treating diseased human hearts, according to research co-authored by a University of Alabama scientist and published in the Oct. 28 issue of Science.

Growth of the human heart can be beneficial when resulting from exercise -- a type of growth known as physiological cardiac hypertrophy -- but damaging when triggered by disease -- growth known as pathological hypertrophy. The new research shows a potential avenue by which to make the unhealthy heart growth more like the healthy version.

"We may later be able to turn the tables, in a sense, in the processes involved in pathological hypertrophy by administering a combination of fatty acids that occur in very high concentrations in the blood of digesting pythons," said Dr. Stephen Secor, associate professor of biological sciences at UA and one of the paper's co-authors. "This could trigger, perhaps, something more akin to the physiological form of hypertrophy."

The research, conducted in collaboration with multiple researchers at the University of Colorado working in the lab of Dr. Leslie Leinwand, identified three fatty acids -- myristic acid, palmitic acid and palmitoleic acid -- for their roles in the snakes' healthy heart growth following a meal.

Researchers took these fatty acids from feasting pythons and infused them into fasting pythons. Afterward, those fasting pythons underwent heart growth similar to that of the feasting pythons. In a similar fashion, the researchers were able to induce comparable heart growth in mice, indicating that the fatty acids have a similar effect on the mammalian heart.

The paper, whose lead author was Dr. Cecilia Riquelme of the University of Colorado, also showed that the pythons' heart growth was a result of the individual heart cells growing in size, rather than multiplying in number.

By studying gene expression in the python hearts -- which genes are turned on following feasting -- the research, Secor said, shows that the changes the pythons' hearts undergo are more like the positive changes seen in a marathon runner than like those seen in a diseased, or genetically altered, heart.

"Cyclists, marathon runners, rowers, swimmers, they tend to have larger hearts," Secor said. "It's the heart working harder to move blood through it. The term is 'volume overload,' in reference to more blood being pumped to tissues. In response, the heart's chambers get larger, and more blood is pushed out with every contraction, resulting in increased cardiac performance."

However, the time frame of this increased heart performance in a python far exceeds that of even the most physically fit distance runner, Secor said.

"Instead of experiencing elevated cardiac performance for several hours with running, the Burmese python is maintaining heightened cardiac output for five to six days, non-stop, while digesting their large meal."

Another interesting finding of the research, Secor said, is that even with the increased volume of triglycerides circulating in the snakes after feeding, those lipids do not remain within the snakes' hearts or vascular systems after digestion is complete.

"The python hearts are using the circulating lipids to fuel the increase in performance."

Traditionally, mice have been the preferred animal model used to study the genetic heart disease known as hypertrophic cardiomyopathy, characterized by heart growth and contractile dysfunction. However, the snakes' unusual physiological responses render them more insightful models, in some cases, Secor said.

Pythons are infrequent feeders, sometimes eating only once or twice a year in the wild. When they do eat, they undergo extreme physiologic and metabolic changes that include increases in the size of the heart, along with the liver, pancreas, small intestine and kidney. Three days after a feeding, a python's heart mass can increase as much as 40 percent, before reverting to its pre-meal size once digestion is completed, Secor said.

Story Source:

The above story is reprinted from materials provided by University of Alabama in Tuscaloosa.

Journal Reference:

Cecilia A. Riquelme, Jason A. Magida, Brooke C. Harrison, Christopher E. Wall, Thomas G. Marr, Stephen M. Secor, Leslie A. Leinwand. Fatty Acids Identified in the Burmese Python Promote Beneficial Cardiac Growth. Science, 2011; 334 (6055): 528-531 DOI: 10.1126/science.1210558


ScienceDaily (Oct. 27, 2011) — Although corticosteroid injections are one of the most common treatments for shoulder pain, there have been relatively few high-quality investigations of their efficacy and duration of action. In a study scheduled for publication in the December issue of the Archives of Physical Medicine and Rehabilitation, researchers report on the first comparative study of the two corticosteroid doses most commonly administered for shoulder pain. They found that lower doses were just as effective as higher doses in terms of reduction of pain, improved range of motion and duration of efficacy.

"There has been no guidance for adequate corticosteroid doses during subacromial injection. Physicians have depended mainly on their experience for the selection of dose," commented lead investigator Seung-Hyun Yoon, MD, PhD, Assistant Professor, Department of Physical Medicine and Rehabilitation, Ajou University School of Medicine, Suwon, Republic of Korea. "This is the first study to assess the efficacy of corticosteroid according to two different doses, which are the most widely used in subacromial injection for participants with periarticular shoulder disorders. Initial use of a low dose is encouraged because there was no difference in efficacy according to dose, and the effect of corticosteroid lasted up to 8 weeks."

Investigators conducted a randomized, triple-blind, placebo-controlled clinical trial in which 79 patients with at least one month's duration of pain were enrolled. Subjects were randomly assigned to three groups: 27 participants received a 40 mg dose of triamcinolone acetonide, 25 received a 20 mg dose, and 27 received a placebo injection. All were followed up at 2, 4, and 8 weeks after treatment. All injections were performed using ultrasound guidance to ensure proper placement of the therapeutic agent in the bursa.

Participants were asked to rate their degree of shoulder pain on a 0 to 10 scale and to answer a Shoulder Disability Questionnaire. They also were asked to move their shoulders slowly until they experienced pain, and evaluators measured the Active Range of Motion (AROM) in 4 different directions (forward flexion, abduction, internal rotation, and external rotation of the shoulder in a standing position).

Compared with pretreatment (within-group comparisons), the high- (40 mg) and low-dose corticosteroid (20 mg) groups both showed improvement in pain, disability, and AROM, while the placebo group showed no difference. Importantly, this study showed no significant inter-group differences between the high- and low-dose corticosteroid groups. Because a higher dose may increase the incidence of local and general complications, a lower dose is indicated at the initial treatment stage.
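As a rough illustration of the kind of within-group and between-group comparisons described above, the sketch below uses invented pain scores and generic significance tests; it is not the authors' actual analysis, and the group means, standard deviations, and test choices are assumptions made only for demonstration.

    # Illustrative only: hypothetical 0-10 pain scores for the 40 mg and 20 mg groups.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    high_pre, high_post = rng.normal(7.0, 1.0, 27), rng.normal(4.0, 1.5, 27)   # 40 mg group
    low_pre,  low_post  = rng.normal(7.0, 1.0, 25), rng.normal(4.1, 1.5, 25)   # 20 mg group

    # Within-group comparisons: did each dose group improve relative to baseline?
    _, p_high = stats.ttest_rel(high_pre, high_post)
    _, p_low  = stats.ttest_rel(low_pre,  low_post)

    # Between-group comparison: does the amount of improvement differ by dose?
    _, p_between = stats.ttest_ind(high_pre - high_post, low_pre - low_post)

    print(f"within-group p: 40 mg {p_high:.3g}, 20 mg {p_low:.3g}; between-group p {p_between:.3g}")

In this framing, "no significant inter-group difference" corresponds to a large between-group p-value even when both within-group p-values are small.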

Story Source:

The above story is reprinted from materials provided by Elsevier Health Sciences.

Journal Reference:

Ji Yeon Hong, Seung-Hyun Yoon, Do Jun Moon, Kyu-Sung Kwack, Bohyun Joen, Hyun Young Lee. Comparison of High- and Low-Dose Corticosteroid in Subacromial Injection for Periarticular Shoulder Disorder: A Randomized, Triple-Blind, Placebo-Controlled Trial. Archives of Physical Medicine and Rehabilitation, December 2011; DOI: 10.1016/j.apmr.2011.06.033


ScienceDaily (Oct. 27, 2011) — A new analysis of images from the Hubble Space Telescope combined with supercomputer simulations of galaxy collisions has cleared up years of confusion about the rate at which smaller galaxies merge to form bigger ones. The paper, led by Jennifer Lotz of the Space Telescope Science Institute, is about to be published in The Astrophysical Journal.

Galaxies grow mostly by acquiring small amounts of matter from their surroundings. But occasionally galaxies merge with other galaxies large or small. Collisions between big galaxies can change rotating disk galaxies like the Milky Way into featureless elliptical galaxies, in which the stars are moving every which way.

In order to understand how galaxies have grown, it is essential to measure the rate at which galaxies merge. In the past, astronomers have used two principal techniques: counting the number of close pairs of galaxies about to collide, and counting the number of galaxies that appear to be disturbed in various ways. The two techniques are analogous to estimating the number of automobile accidents by counting the number of cars on a collision course versus counting the number of wrecked cars seen by the side of the road.

However, these studies have often led to discrepant results. "These different techniques probe mergers at different 'snapshots' in time along the merger process," Lotz says. "Studies that looked for close pairs of galaxies that appeared ready to collide gave much lower numbers of mergers (5%) than those that searched for galaxies with disturbed shapes, evidence that they're in smashups (25%)."

In the new work, all the previous observations were reanalyzed using a key new ingredient: highly accurate computer simulations of galaxy collisions. These simulations, which include the effects of stellar evolution and dust, show the lengths of time over which close galaxy pairs and various types of galaxy disturbances are likely to be visible. Lotz's team accounted for a broad range of merger possibilities, from a pair of galaxies with equal masses joining together to an interaction between a giant galaxy and a puny one. The team also analyzed the effects of different orbits for the galaxies, possible collision impacts, and how the galaxies were oriented to each other.
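The correction the simulations make possible can be reduced to a single relationship: an observed merger fraction only becomes a merger rate once it is divided by how long that merger signature stays visible. In the sketch below, the 5 percent and 25 percent fractions are the figures quoted above, while the visibility timescales are illustrative placeholders rather than the values derived in the paper.

    # Illustrative only: converting observed merger fractions into rates.
    def merger_rate_per_gyr(observed_fraction, visibility_timescale_gyr):
        """Mergers per galaxy per Gyr implied by an observed merger fraction."""
        return observed_fraction / visibility_timescale_gyr

    # A briefly visible close-pair signature and a long-lived disturbed morphology
    # can imply similar underlying rates (timescales below are assumed, not the paper's).
    print(merger_rate_per_gyr(0.05, 0.5))   # close pairs:      5% visible for ~0.5 Gyr -> 0.1 per Gyr
    print(merger_rate_per_gyr(0.25, 2.5))   # disturbed shapes: 25% visible for ~2.5 Gyr -> 0.1 per Gyr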

The simulations were done by T. J. Cox (now at Carnegie Observatories in Pasadena), Patrik Jonsson (now at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts), and Joel Primack (at the University of California, Santa Cruz -- UCSC), using small supercomputers at UCSC and the large Columbia supercomputer at NASA Ames Research Center. These simulations were "observed" as if through the Hubble Space Telescope by Jennifer Lotz in a series of papers with Cox, Jonsson, and Primack published over the past three years. A key part of the analysis was a new way of measuring galaxy disturbances, developed by Lotz, Primack, and Piero Madau in 2004. All this work began when Lotz was a postdoc with Primack, and Cox and Jonsson were his graduate students.

"Viewing the simulations was akin to watching a slow-motion car crash," Lotz says. "Having an accurate value for the merger rate is critical because galactic collisions may be a key process that drives galaxy assembly, rapid star formation at early times, and the accretion of gas onto central supermassive black holes at the centers of galaxies."

"The new paper led by Jennifer Lotz for the first time makes sense of all the previous observations, and shows that they are consistent with theoretical expectations," says Primack. "This is a great example of how new astronomical knowledge is now emerging from a combination of observations, theory, and supercomputer simulations." Primack now heads the University of California High-Performance AstroComputing Center (UC-HiPACC), headquartered at the University of California, Santa Cruz.

This research was funded by grants from NASA and the NSF, and by Hubble Space Telescope and Spitzer Space Telescope Theory Grants.

Story Source:

The above story is reprinted from materials provided by University of California - Santa Cruz.

Journal Reference:

Jennifer M. Lotz, Patrik Jonsson, T. J. Cox, Darren Croton, Joel R. Primack, Rachel S. Somerville, Kyle Stewart. The Major and Minor Galaxy Merger Rates at z < 1.5. The Astrophysical Journal, 2011.


ScienceDaily (Oct. 27, 2011) — Are children suffering needlessly after surgery? UC Irvine anesthesiologists who specialize in pediatric care believe so.

An operation can be one of the most traumatic events children face, and according to a UCI study, many of them experience unnecessary postsurgical pain lasting weeks or months.

Such chronic pain is well understood and treated in adults but has been generally overlooked in pediatric patients, said Dr. Zeev Kain, professor and chair of anesthesiology & perioperative care.

This month, he and his UCI colleagues published in the Journal of Pediatric Surgery the first-ever study of chronic postoperative pain in children. Out of 113 youngsters who had procedures ranging from appendectomies to orthopedic surgery, 13 percent reported pain that lingered for months.

While the sample group was small, Kain said, the study's implications are profound. Four million children undergo surgical procedures in the U.S. each year, suggesting that more than half a million of them suffer well after leaving the hospital. This results in more school absences and visits to the doctor and, for parents, days off work.
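The arithmetic behind that estimate follows directly from the figures quoted above:

    4,000,000 pediatric surgical patients per year × 13 percent ≈ 520,000 children per year with lingering postoperative pain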

Kain said the research indicates that physicians need to more effectively manage pain within 48 hours of surgery -- which, in adults, has been shown to minimize the potential for chronic pain -- and that parents should be properly prepared to alleviate their child's pain at home.

"Medical professionals must understand this issue better and learn how to work with parents to care for chronic pain," he said. "We hope this study marks a first step toward long-term, definitive solutions."

UCI pediatric pain psychologist Michelle Fortier led the study -- which involved patients from CHOC Children's Hospital in Orange, Calif. -- and Drs. Jody Chou and Eva Maurer also participated.

Story Source:

The above story is reprinted from materials provided by University of California - Irvine.

Journal Reference:

Michelle A. Fortier, Jody Chou, Eva L. Maurer, Zeev N. Kain. Acute to chronic postoperative pain in children: preliminary findings. Journal of Pediatric Surgery, 2011; 46 (9): 1700 DOI: 10.1016/j.jpedsurg.2011.03.074


ScienceDaily (Oct. 27, 2011) — A new analysis of Hubble surveys, combined with simulations of galaxy interactions, reveals that the merger rate of galaxies over the last 8 billion to 9 billion years falls between the previous estimates.

The galaxy merger rate is one of the fundamental measures of galaxy evolution, yielding clues to how galaxies bulked up over time through encounters with other galaxies. And yet, a huge discrepancy exists over how often galaxies coalesced in the past. Measurements of galaxies in deep-field surveys made by NASA's Hubble Space Telescope generated a broad range of results: anywhere from 5 percent to 25 percent of the galaxies were merging.

The study, led by Jennifer Lotz of the Space Telescope Science Institute in Baltimore, Md., analyzed galaxy interactions at different distances, allowing the astronomers to compare mergers over time. Lotz's team found that galaxies gained quite a bit of mass through collisions with other galaxies. Large galaxies merged with each other on average once over the past 9 billion years. Small galaxies were coalescing with large galaxies more frequently. In one of the first measurements of smashups between dwarf and massive galaxies in the distant universe, Lotz's team found these mergers happened three times more often than encounters between two hefty galaxies.

"Having an accurate value for the merger rate is critical because galactic collisions may be a key process that drives galaxy assembly, rapid star formation at early times, and the accretion of gas onto central supermassive black holes at the centers of galaxies," Lotz explains.

The team's results have been accepted for publication in The Astrophysical Journal.

The problem with previous Hubble estimates is that astronomers used different methods to count the mergers.

"These different techniques probe mergers at different 'snapshots' in time along the merger process," Lotz says. "It is a little bit like trying to count car crashes by taking snapshots. If you look for cars on a collision course, you will only see a few of them. If you count up the number of wrecked cars you see afterwards, you will see many more. Studies that looked for close pairs of galaxies that appeared ready to collide gave much lower numbers of mergers than those that searched for galaxies with disturbed shapes, evidence that they're in smashups."

To figure out how many encounters happen over time, Lotz needed to understand how long merging galaxies would look like "wrecks" before they settle down and begin to look like normal galaxies again.

That's why Lotz and her team turned to highly detailed computer simulations to help make sense of the Hubble photographs. The team made simulations of the many possible galaxy collision scenarios and then mapped them to Hubble images of galaxy interactions.

Creating the computer models was a time-consuming process. Lotz's team tried to account for a broad range of merger possibilities, from a pair of galaxies with equal masses joining together to an interaction between a giant galaxy and a puny one. The team also analyzed different orbits for the galaxies, possible collision impacts, and how galaxies were oriented to each other. In all, the group came up with 57 different merger scenarios and studied the mergers from 10 different viewing angles. "Viewing the simulations was akin to watching a slow-motion car crash," Lotz says.
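The bookkeeping for such a simulation grid is straightforward to sketch. The parameter values below are invented for illustration and do not reproduce the paper's exact 57 scenarios; they simply show how combining mass ratios, orbits, and orientations, each viewed from several angles, multiplies into hundreds of simulated views.

    # Illustrative only: enumerating a grid of merger scenarios and viewing angles.
    from itertools import product

    mass_ratios  = [1.0, 0.5, 0.25, 0.1]                 # equal-mass down to giant-vs-dwarf (assumed values)
    orbits       = ["prograde", "retrograde", "polar"]    # assumed orbit families
    orientations = ["face-on", "edge-on", "tilted"]       # assumed relative orientations

    scenarios      = list(product(mass_ratios, orbits, orientations))
    viewing_angles = range(10)                            # 10 viewpoints per scenario, as in the study

    simulated_views = [(scenario, angle) for scenario in scenarios for angle in viewing_angles]
    print(len(scenarios), "scenarios ->", len(simulated_views), "simulated views")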

The simulations followed the galaxies for 2 billion to 3 billion years, beginning at the first encounter and continuing until the union was completed, about a billion years later.

"Our simulations offer a realistic picture of mergers between galaxies," Lotz says.

In addition to studying the smashups between giant galaxies, the team also analyzed encounters among puny galaxies. Spotting collisions with small galaxies is difficult because the objects are so dim relative to their larger companions.

"Dwarf galaxies are the most common galaxy in the universe," Lotz says. "They may have contributed to the buildup of large galaxies. In fact, our own Milky Way galaxy had several such mergers with small galaxies in its recent past, which helped to build up the outer regions of its halo. This study provides the first quantitative understanding of how the number of galaxies disturbed by these minor mergers changed with time."

Lotz compared her simulation images with pictures of thousands of galaxies taken from some of Hubble's largest surveys, including the All-Wavelength Extended Groth Strip International Survey (AEGIS), the Cosmological Evolution Survey (COSMOS), and the Great Observatories Origins Deep Survey (GOODS), as well as mergers identified by the DEEP2 survey with the W.M. Keck Observatory in Hawaii. She and other groups had identified about a thousand merger candidates from these surveys but initially found very different merger rates.

"When we applied what we learned from the simulations to the Hubble surveys in our study, we derived much more consistent results," Lotz says.

Her next goal is to analyze galaxies that were interacting around 11 billion years ago, when star formation across the universe peaked, to see if the merger rate rises along with the star formation rate. A link between the two would mean galaxy encounters incite rapid star birth.

In addition to Lotz, the coauthors of the paper include Patrik Jonsson of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass.; T. J. Cox of Carnegie Observatories in Pasadena, Calif.; Darren Croton of the Centre for Astrophysics and Supercomputing at Swinburne University of Technology in Hawthorn, Australia; Joel R. Primack of the University of California, Santa Cruz; Rachel S. Somerville of the Space Telescope Science Institute and The Johns Hopkins University in Baltimore, Md.; and Kyle Stewart of NASA's Jet Propulsion Laboratory in Pasadena, Calif.

The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA's Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc., in Washington, D.C.

Story Source:

The above story is reprinted from materials provided by NASA/Goddard Space Flight Center.

ScienceDaily (Oct. 27, 2011) — Stir this clear liquid in a glass vial and nothing happens. Shake this liquid, and free-floating sheets of protein-like structures emerge, ready to detect molecules or catalyze a reaction. This isn't the latest gadget from James Bond's arsenal -- rather, the latest research from the U. S. Department of Energy (DOE)'s Lawrence Berkeley National Laboratory (Berkeley Lab) scientists unveiling how slim sheets of protein-like structures self-assemble. This "shaken, not stirred" mechanism provides a way to scale up production of these two-dimensional nanosheets for a wide range of applications, such as platforms for sensing, filtration and templating growth of other nanostructures.

"Our findings tell us how to engineer two-dimensional, biomimetic materials with atomic precision in water," said Ron Zuckermann, Director of the Biological Nanostructures Facility at the Molecular Foundry, a DOE nanoscience user facility at Berkeley Lab. "What's more, we can produce these materials for specific applications, such as a platform for sensing molecules or a membrane for filtration."

Zuckermann, who is also a senior scientist at Berkeley Lab, is a pioneer in the development of peptoids, synthetic polymers that behave like naturally occurring proteins without degrading. His group previously discovered peptoids capable of self-assembling into nanoscale ropes, sheets and jaws, accelerating mineral growth and serving as a platform for detecting misfolded proteins.

In this latest study, the team employed a Langmuir-Blodgett trough -- a bath of water with Teflon-coated paddles at either end -- to study how peptoid nanosheets assemble at the surface of the bath, called the air-water interface. By compressing a single layer of peptoid molecules on the surface of water with these paddles, said Babak Sanii, a post-doctoral researcher working with Zuckermann, "we can squeeze this layer to a critical pressure and watch it collapse into a sheet."

"Knowing the mechanism of sheet formation gives us a set of design rules for making these nanomaterials on a much larger scale," added Sanii.

To study how shaking affected sheet formation, the team developed a new device called the SheetRocker to gently rock a vial of peptoids from upright to horizontal and back again. This carefully controlled motion allowed the team to precisely control the process of compression on the air-water interface.

"During shaking, the monolayer of peptoids essentially compresses, pushing chains of peptoids together and squeezing them out into a nanosheet. The air-water interface essentially acts as a catalyst for producing nanosheets in 95% yield," added Zuckermann. "What's more, this process may be general for a wide variety of two-dimensional nanomaterials."

This research is reported in a paper titled, "Shaken, not stirred: Collapsing a peptoid monolayer to produce free-floating, stable nanosheets," appearing in the Journal of the American Chemical Society (JACS) and available in JACS online. Co-authoring the paper with Zuckermann and Sanii were Romas Kudirka, Andrew Cho, Neeraja Venkateswaran, Gloria Olivier, Alexander Olson, Helen Tran, Marika Harada and Li Tan.

This work at the Molecular Foundry was supported by DOE's Office of Science and the Defense Threat Reduction Agency.

Story Source:

The above story is reprinted from materials provided by DOE/Lawrence Berkeley National Laboratory.

Journal Reference:

Babak Sanii, Romas Kudirka, Andrew Cho, Neeraja Venkateswaran, Gloria K. Olivier, Alexander M. Olson, Helen Tran, R. Marika Harada, Li Tan, Ronald N. Zuckermann. Shaken, Not Stirred: Collapsing a Peptoid Monolayer To Produce Free-Floating, Stable Nanosheets. Journal of the American Chemical Society, 2011; DOI: 10.1021/ja206199d


ScienceDaily (Oct. 27, 2011) — Researcher Thijs Meenink at Eindhoven University of Technology (TU/e) has developed a smart eye-surgery robot that allows eye surgeons to operate with increased ease and greater precision on the retina and the vitreous humor of the eye. The system also extends the effective period during which ophthalmologists can carry out these intricate procedures.

Meenink will defend his PhD thesis on this work on Oct. 31, and later intends to commercialize the system.

Filters out tremors

Eye operations such as retina repair or treating a detached retina demand high precision. In most cases surgeons can only carry out these operations for a limited part of their career. "When ophthalmologists start operating they are usually already at an advanced stage in their careers," says Thijs Meenink. "But at a later age it becomes increasingly difficult to perform these intricate procedures." The new system can simply filter out hand tremors, which significantly increases the effective working period of the ophthalmologist.

Same location every time

The robot consists of a 'master' and a 'slave'. The ophthalmologist remains fully in control, and operates from the master using two joysticks. This master was developed in an earlier PhD project at TU/e by dr.ir. Ron Hendrix. Two robot arms (the 'slave' developed by Meenink) copy the movements of the master and carry out the actual operation. The tiny needle-like instruments on the robot arms have a diameter of only 0.5 millimeter, and include forceps, surgical scissors and drains. The robot is designed such that the point at which the needle enters the eye is always at the same location, to prevent damage to the delicate eye structures.
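One way to picture that fixed entry point is as a remote center of motion: the instrument shaft may pivot about, and slide through, a single point, but never translate away from it. The sketch below is a generic geometric illustration with assumed angles and depths, not the actual TU/e kinematics.

    # Illustrative only: a shaft constrained to pivot about and slide through a fixed entry point.
    import math

    ENTRY_POINT = (0.0, 0.0, 0.0)   # assumed location where the instrument passes through the eye wall

    def tip_position(tilt_rad, pan_rad, insertion_depth_mm):
        """Instrument-tip position; every commanded pose keeps the shaft through ENTRY_POINT."""
        dx = math.sin(tilt_rad) * math.cos(pan_rad)
        dy = math.sin(tilt_rad) * math.sin(pan_rad)
        dz = math.cos(tilt_rad)
        ex, ey, ez = ENTRY_POINT
        return (ex + insertion_depth_mm * dx,
                ey + insertion_depth_mm * dy,
                ez + insertion_depth_mm * dz)

    print(tip_position(math.radians(10), math.radians(45), 8.0))   # tip 8 mm past the entry point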

Quick instrument change

Meenink has also designed a unique 'instrument changer' for the slave, allowing the robot arms to change instruments, for example from forceps to scissors, within only a few seconds. This is an important factor in reducing procedure time: some eye operations can require as many as 40 instrument changes, which are normally a time-consuming part of the overall procedure.

High precision movements

The surgeon's movements are scaled down so that, for example, each centimeter of motion on the joystick is translated into a movement of only one millimeter at the tip of the instrument. "This greatly increases the precision of the movements," says Meenink.

Haptic feedback

The master also provides haptic feedback. Ophthalmologists currently work entirely by sight -- the forces used in the operation are usually too small to be felt. However Meenink's robot can 'measure' these tiny forces, which are then amplified and transmitted to the joysticks. This allows surgeons to feel the effects of their actions, which also contributes to the precision of the procedure.
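Taken together, the motion scaling, tremor filtering, and force amplification described above amount to a master-slave control loop. The sketch below is a generic illustration with assumed gains and filter constants; it is not the controller Meenink and Hendrix built.

    # Illustrative only: one-axis master-slave loop with scaling, tremor filtering, and force feedback.
    import random

    SCALE      = 0.1    # 1 cm (10 mm) of joystick travel maps to 1 mm at the instrument tip
    ALPHA      = 0.05   # exponential low-pass factor; small values average out fast hand tremor (assumed)
    FORCE_GAIN = 50.0   # amplification of tiny tool-tissue forces rendered back on the joystick (assumed)

    _filtered_mm = 0.0  # filter state for one motion axis

    def slave_command(joystick_mm):
        """Tremor-filtered, scaled displacement (mm) sent to the instrument tip."""
        global _filtered_mm
        _filtered_mm += ALPHA * (joystick_mm - _filtered_mm)
        return _filtered_mm * SCALE

    def haptic_feedback(tool_force_newton):
        """Amplified force (N) rendered on the joystick so tiny tool forces become perceptible."""
        return tool_force_newton * FORCE_GAIN

    # Hold the joystick near 20 mm with a small tremor: the commanded tip motion settles near 2 mm,
    # and a 0.01 N tool force is rendered as 0.5 N at the joystick.
    random.seed(0)
    for _ in range(200):
        tip = slave_command(20.0 + random.uniform(-0.3, 0.3))
    print(f"tip displacement ~ {tip:.2f} mm, rendered force {haptic_feedback(0.01):.2f} N")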

Comfort

The system developed by Meenink and Hendrix also offers ergonomic benefits. While surgeons currently are bent statically over the patient, they will soon be able to operate the robot from a comfortable seated position. In addition, the slave is so compact and lightweight that operating room staff can easily carry it and attach it to the operating table.

New procedures

Ophthalmologist prof.dr. Marc de Smet (AMC Amsterdam), one of Meenink's PhD supervisors, is enthusiastic about the system -- not only because of the time savings it offers, but also because in his view the limits of manual procedures have now been reached. "Robotic eye surgery is the next step in the evolution of microsurgery in ophthalmology, and will lead to the development of new and more precise procedures," de Smet explains.

Market opportunities

Both slave and master are ready for use, and Meenink intends to optimize them in the near future. The first surgery on humans is expected within five years. He also plans to investigate the market opportunities for the robot system. Robotic eye surgery is a new development; eye surgery robots are not yet available on the market.

Story Source:

The above story is reprinted from materials provided by Eindhoven University of Technology.