Imagine a time in the future when all operations are performed by robots acting independently of people: You fall one day and break your ankle, and are told by your doctor that you need surgery. A robot carries out the operation. At first, it seems to have gone well. But when you visit the doctor two weeks after the operation for a follow-up appointment, X-rays show that the bone is in the wrong place and you will need another operation. Who should be held responsible for this?

Increasingly, robots have the potential to operate independently and learn from the work they do. Instead of being controlled by a person, these systems use sensors to guide them. The machine processes the data it gathers and can learn from it, changing how it functions in the future. This makes it difficult to work out who is responsible when a robot makes a mistake and a person is harmed. The problem is particularly critical when robots are used for surgical procedures, where patients are vulnerable.

Working out who is responsible when a robot is involved in surgery is complicated, because the role robots play in a procedure can vary greatly.

We propose a simple classification system that includes the full range of robotic systems:

  1. Human-controlled robotic systems: These systems include robots that are completely controlled by the surgeon, who can sometimes work remotely (telesurgical robots).
  2. Robot-assisted systems: These systems help the surgeon carry out specific tasks such as stitching wounds.
  3. Autonomous robotic systems: Such systems can conduct entire surgical procedures with minimal or no human supervision.
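For readers who think in code, the three-tier classification above can be sketched as a simple enumeration. This is purely illustrative: the class and attribute names are our own invention, not part of any standard or regulatory taxonomy.

```python
from enum import Enum


class SurgicalRobotClass(Enum):
    """Illustrative three-tier classification of surgical robotic systems.

    The tier names are hypothetical labels for the categories described
    in the article, ordered by increasing autonomy.
    """
    HUMAN_CONTROLLED = 1  # fully controlled by the surgeon, possibly remotely (telesurgical)
    ROBOT_ASSISTED = 2    # assists the surgeon with specific tasks, such as stitching wounds
    AUTONOMOUS = 3        # conducts entire procedures with minimal or no human supervision


# Higher tiers imply greater robot autonomy, and arguably a harder
# question about who bears responsibility when something goes wrong.
for tier in SurgicalRobotClass:
    print(tier.value, tier.name)
```

The point of ordering the tiers is that responsibility questions grow harder as autonomy increases: at tier 1 the surgeon's role is clear, while at tier 3 the chain of accountability is far less obvious.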