By: Sterling Payne, B.Sc.
Minimally invasive surgery has been around since the late 20th century; however, technological advancement has pushed robotic surgeons to the forefront of medicine in the past 20 years. The term “minimally invasive” refers to performing a surgery through small, precise incisions, often at some distance from the surgical target, thereby reducing the physical impact on the patient in terms of pain and recovery time. As one can imagine, surgeons must use small instruments during a minimally invasive procedure and operate with a high level of control in order to perform a successful operation. In light of these requirements, and thanks to fast-paced advances in robotics over the last decade, robots have become more common in the operating room. Though their use, done correctly, benefits all parties involved, several questions of policy accompany the robotic advance and the goal of fully autonomous surgery.
The da Vinci system is one of the most popular devices for minimally invasive surgeries and was approved by the FDA for use in surgical procedures in 2000. The newest model, the da Vinci Xi® System, includes four separate robotic arms that operate a camera and multiple arrays of tools. The camera projects a 3D view of the environment onto a monitor for the surgeon, who in turn operates the other three arms to perform highly precise movements. The da Vinci arms and instruments give the surgeon more control over the subject via additional degrees of freedom (less restricted movement) and features such as tremor reduction.
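Tremor reduction of this kind is, at heart, a signal-processing idea: high-frequency jitter in the surgeon’s hand is filtered out, and large hand motions are scaled down into finer instrument motions. The sketch below is purely illustrative (the `filter_motion` function and its parameters are hypothetical, not the da Vinci system’s actual, proprietary algorithm); it shows the concept with a simple exponential moving average plus motion scaling.

```python
def filter_motion(positions, alpha=0.2, scale=0.5):
    """Illustrative tremor reduction: an exponential moving average
    smooths high-frequency jitter in the input positions, and a scale
    factor maps large hand motions to finer instrument motions.

    positions: hand positions along one axis (e.g., in mm)
    alpha:     smoothing factor; lower = stronger smoothing
    scale:     motion-scaling factor applied after smoothing
    """
    smoothed = []
    prev = positions[0]
    for p in positions:
        prev = alpha * p + (1 - alpha) * prev  # low-pass filter step
        smoothed.append(prev * scale)          # scale down the motion
    return smoothed

# A jittery hand trajectory becomes a gentler instrument trajectory.
hand = [0.0, 1.0, 0.2, 1.1, 0.1, 1.0]
print(filter_motion(hand))
```

Real surgical systems do this in three dimensions at high sampling rates, but the principle is the same: filter, then scale.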
Though the da Vinci system is widely used, its success still depends on the skill and experience of the operator. Surgical robotics engineer Azad Shademan and colleagues acknowledged this in a recent publication in Science, highlighting their successful design, manufacturing, and use of the Smart Tissue Autonomous Robot (STAR). The STAR contains a complex imaging system for tracking the dynamic movement of soft tissue, as well as a custom algorithm that allows the robot to perform a fully autonomous suturing procedure. Shademan and colleagues demonstrated the effectiveness of their robot by having it perform various stitching procedures on non-living pig tissue in an open surgical setting. Not only did the STAR succeed in these procedures, it outperformed the highly experienced surgeons it was pitted against. More information on the STAR can be found here.
In response to the da Vinci system, Google recently announced Verb Surgical, a joint-venture company with Johnson & Johnson. Verb aims to create “a new future, a future unimagined even a few years ago, which will involve machine learning, robotic surgery, instrumentation, advanced visualization, and data analytics”. Whereas the da Vinci system helps the surgeon perform small, precise movements, Verb will use artificial intelligence, among other technologies, to augment the surgeon’s view, providing information such as anatomy and the boundaries of bodies such as tumors. A procedure assisted by the da Vinci system can increase the physical dexterity and mobility of the surgeon; Verb, however, aims to achieve that and also give a “good” surgeon the knowledge and judgment that expert surgeons previously accumulated only over hundreds of surgeries. In a way, Verb could level the playing field in more ways than one, allowing all surgeons access to a vast knowledge base built through machine learning.
As suggested by Tesla’s October announcement that its cars will ship with hardware for full self-driving, autonomous robots are becoming integrated into society; surgery is no exception. A 2014 paper in the American Medical Association Journal of Ethics states that we can apply Isaac Asimov’s (author of I, Robot) three laws of robotics to robot-assisted surgery “if we acknowledge that the autonomy resides in the surgeon”. However, the policy discussion for fully autonomous robot surgeons is still emerging. In a case of malpractice, the doctor performing the operation is usually the responsible party. When you replace the doctor with an algorithm, where does the accountability lie? When a robot surgeon makes a mistake, one could argue that the human surgeon failed to step in when necessary or to supervise the surgery adequately. One could also argue that the manufacturer should bear responsibility for a malfunction during an automated surgery. Other candidates include the programmer(s) who designed the algorithms (like the stitching algorithm featured in the STAR), as well as the hospital housing the robot. This entry from a clinical robotics law blog highlights the aforementioned questions from a litigator’s standpoint.
A final talking point amid the dawn of autonomous surgical technology is the safeguarding of wireless connections to prevent “hacking,” or unintended use of the machine, during telesurgery. Telesurgery refers to the performance of an operation by a surgeon who is physically separated from the patient by a long distance, accomplished through wireless connections that are at times open and unsecured. In 2015, a team of researchers at the University of Washington probed the weaknesses of the procedure by hacking into a teleoperated surgical robot, the Raven II. The attacks exposed vulnerabilities by flooding the robot with useless data, making intended movements less fluid and even triggering an emergency stop mechanism. Findings such as these will help with the future development and security of teleoperated surgical robots, their fully autonomous counterparts, and the policy that binds them.
When a web browser or computer application crashes, we simply hit restart, relying on autosave or some other mechanism to preserve our previous work. Unlike a computer, a human has no “refresh” button; any wrongful actions that harm the patient cannot be reversed, placing a far greater weight on all parties involved when a mistake is made. As it stands, the policy discussion for accountable, autonomous robots and algorithms is gaining much-needed momentum as said devices inch their way into society.
Have an interesting science policy link? Share it in the comments!
By Rebecca Cerio
The field of synthetic biology–most broadly described as the design and construction of new biological functions and systems not found in nature–has been quietly advancing ever since the discovery of restriction enzymes in the 1970s. The ability to cut and paste DNA segments in combinations different from those created by nature opened the door to modern molecular biology and the burgeoning biotechnology field. Such technologies, together with our understanding of DNA’s functional and regulatory elements, now allow us to genetically engineer organisms to produce needed medicines, to bioengineer pest- and chemical-resistant food crops, and to sequence the genome of any organism in search of useful and harmful mutations.
Recently, the J. Craig Venter Institute’s announcement that its scientists can chemically synthesize an entire, functional genome in the lab has brought new public awareness of the potential power, benefits, and dangers of synthetic biology. One question raised is: just because we can, does that mean that we should?
Or, from a regulatory standpoint, just because it is possible, should it be allowed? Synthetic biology technology can be used for legitimate scientific purposes (e.g., producing vaccines) and to threaten public safety (e.g., producing deadly pathogens). But what are the actual, plausible risks and benefits of synthetic biology, beyond movie-plot scenarios and inflammatory rhetoric about “playing God”?