Researchers have hacked into a remotely operated surgical robot to see what chaos could be caused by such a high-tech hijacking.

A University of Washington engineering team has hacked a next-generation tele-operated surgical robot - one used only for research purposes - in the hope of making the systems more secure.

Real-world tele-operated robots - controlled by a human who may be physically far away - are expected to become more commonplace as the technology evolves.

They are among a generation of new robots designed for situations that are dangerous for people: conducting surgery on a minuscule scale, fighting fires in chemical plants, defusing explosive devices or extricating earthquake victims from collapsed buildings.

At the moment, doctors typically use surgical robots to operate on a patient in the same room using a secure, hardwired connection. But many expect telerobots to be used to provide medical treatment in underdeveloped rural areas, battlefield scenarios, disease wards or catastrophe zones around the world.

But researchers have demonstrated that next generation tele-operated robots using non-private networks - which may be the only option in disasters or in remote locations - can be easily disrupted or derailed by common forms of cyberattack.

To spot the vulnerabilities, the team mounted common types of cyberattacks as study participants used a tele-operated surgical robot to move rubber blocks between pegs on a pegboard.

The team mounted ‘man-in-the-middle’ attacks, which alter the commands flowing between the operator and the robot.

They were able to maliciously disrupt a wide range of the robot's functions - making it hard to grasp objects with the robot's arms - and even to override command inputs completely.
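The idea behind such an attack can be sketched in a few lines of Python. The packet layout below - a command ID plus three joint velocities - is purely hypothetical for illustration; real tele-surgery protocols are more complex.

```python
import struct

# Hypothetical command packet: a command ID followed by three joint velocities.
PACKET_FMT = "<Ifff"

def tamper(packet: bytes) -> bytes:
    """Man-in-the-middle: intercept a command packet in transit and invert
    the operator's intended joint velocities before forwarding it."""
    cmd_id, vx, vy, vz = struct.unpack(PACKET_FMT, packet)
    # Flip every axis, so the arm moves opposite to the operator's input.
    return struct.pack(PACKET_FMT, cmd_id, -vx, -vy, -vz)

original = struct.pack(PACKET_FMT, 42, 0.5, -0.25, 0.0)
forged = tamper(original)
```

Because the forged packet is structurally valid, the robot has no way to tell it apart from a genuine command unless the link is authenticated.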

During denial-of-service attacks, in which the attacking machine flooded the system with useless data, the robot became jerky and harder to use.

In some cases, the human operators were able to compensate for the disruptions, given the relatively simple task of moving blocks.

But in situations where precise movements can mean the difference between life and death - such as surgery or a search and rescue extrication - these types of cyberattacks could have more serious consequences.

With a single packet of bad data, for instance, the team was able to maliciously trigger the robot's emergency stop mechanism, rendering it useless.

The surgical robots approved for clinical use today use a different communication channel and typically do not rely on publicly available networks, which would make the cyberattacks the team tested much harder to mount.

But if tele-operated robots are to be used in locations where there is no secure alternative to easily hacked networks or other communication channels, it will be important to design in additional security, the researchers argue.

“If there's been a disaster, the network has probably been damaged too. So you might have to fly a drone and put a router on it and send signals up to it,” said Howard Chizeck, professor of electrical engineering and co-director of the University of Washington BioRobotics Lab.

“In an ideal world, you'd always have a private network and everything could be controlled, but that's not always going to be the case.

“We need to design for and test additional security measures now, before the next generation of telerobots are deployed.”

Encrypting data packets that flow between the robot and human operator would help prevent certain types of cyberattacks, but would not help against denial-of-service attacks that bog down the system with extraneous data.
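A minimal sketch of such packet protection, using Python's standard-library `hmac` module to append an authentication tag that lets the receiving robot detect tampering. The pre-shared key is illustrative only; a deployed system would also need key exchange, replay protection and full encryption of the payload.

```python
import hmac
import hashlib

# Illustrative pre-shared key between the surgeon's console and the robot.
KEY = b"pre-shared secret between console and robot"

def sign(packet: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can detect tampering."""
    return packet + hmac.new(KEY, packet, hashlib.sha256).digest()

def verify(signed: bytes):
    """Return the packet if the tag checks out, otherwise None (reject)."""
    packet, tag = signed[:-32], signed[-32:]
    expected = hmac.new(KEY, packet, hashlib.sha256).digest()
    return packet if hmac.compare_digest(tag, expected) else None

msg = sign(b"move arm 1 left")
assert verify(msg) == b"move arm 1 left"
tampered = b"X" + msg[1:]            # a man-in-the-middle edit
assert verify(tampered) is None      # rejected by the robot
```

Note that this stops command tampering but, as the researchers point out, does nothing against a denial-of-service flood: the robot still has to receive and check every bogus packet.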

Given the demands of streaming high-resolution video, encryption also risks introducing unacceptable delays in delicate operations.

So the team is developing “operator signatures”, which use the specific way a particular surgeon or other tele-operator interacts with a robot to create a unique biometric signature.

By tracking the forces and torques that a particular operator applies to the console instruments, and their interactions with the robot's tools, the researchers have developed a new way to verify that the operator is who they claim to be.
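As a toy illustration of the principle - not the researchers' actual method, which models the operator's interaction in far richer detail - one could summarise a recorded force trace with simple statistics and flag traces that stray too far from the enrolled profile. All numbers and the tolerance below are hypothetical.

```python
from statistics import mean, stdev

def profile(forces):
    """Summarise a force trace (in newtons) as (mean, standard deviation)."""
    return mean(forces), stdev(forces)

def matches(enrolled, observed, tol=0.5):
    """Crude check: does the observed trace resemble the enrolled profile?"""
    em, es = enrolled
    om, os_ = profile(observed)
    return abs(em - om) < tol and abs(es - os_) < tol

# Enrolment: forces the legitimate surgeon applied during a calibration task.
surgeon = profile([2.0, 2.2, 1.9, 2.1, 2.0])

# A similar trace from the same surgeon passes; a hijacker moving the same
# instruments produces a different pattern and is flagged.
assert matches(surgeon, [2.1, 1.9, 2.0, 2.2, 2.1])
assert not matches(surgeon, [5.0, 0.5, 4.8, 0.7, 5.1])
```

A real operator signature would draw on many more features (torques, timing, tool trajectories) and a proper statistical model, but the design idea is the same: the check runs continuously, so an alarm can be raised mid-procedure.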

“Just as everyone signs something a little bit differently and you can identify people from the way they write different letters, different surgeons move the robotic system differently,” Chizeck said.

“This would allow us to detect and raise the alarm if all of a sudden someone who doesn't seem to be [an operator]... is maliciously controlling or interfering with the procedure.”