Building in Guardrails on Robots and AI

It’s important to create safeguards for robots that reflect social norms, says a human-robot interaction expert

Think of robots and what comes to mind are little vacuums spinning around the floor or huge pieces of machinery in factory settings. Increasingly, though, robots seem set to enter even more human-centered settings—hospitals, offices, even grocery stores. That’s presenting new challenges.

Matthias Scheutz knows this better than most. The Karol Family Applied Technology Professor, he has been working on human-robot interaction for several decades. Now, he says, companies are actively exploring how to use AI large language models, like ChatGPT, in robotics. Rather than having humans write out instructions for robots, the AI could generate the action sequences that direct the robots to perform tasks.

For those trying to employ more AI in robotics, a key question is how to make sure the machines’ behavior is consistent with human values. In the robotics field, it’s called the value alignment problem, Scheutz says.

But that’s not quite the right approach, he argues. “It should be called ‘norm alignment,’ because machines really don’t have any values,” he notes. “Norms are less about individual assessment, and more a social regulatory mechanism.” 

In his office, he points to his umbrella, and says that if he opened it and held it over his head during a meeting, he wouldn’t be violating anyone’s values, but clearly, he’d be operating outside of social norms.

These are “social principles, in a way—and they are not written down. If machines don’t have them and the machines are embedded in our society, that’s going to be a problem,” he says. 

At the Human-Robot Interaction Lab that Scheutz runs in the Joyce Cummings Center, he and his students are working with robots to develop algorithms for interacting with humans, taking into account people’s reactions to robots in everyday settings and anticipating how robots will operate in different social interactions.

What’s Normal for You, Robot?

The question is how to train the AI that’s running the robot so that it understands reasonable human norms. Of course, there are too many variables to represent all norms, so Scheutz says it’s best to start with a set of built-in principles. “Maybe start with the law and larger principles—don’t kill, don’t steal, be polite,” he says. “Then you adapt them to the culture and the circumstance, and you pick up whatever the local norms are.”
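One way to picture the layering Scheutz describes, broad principles first and local norms added on top, is as a simple data structure. The sketch below is only an illustration of that idea; the class, norm, and priority names are invented here and are not drawn from Scheutz’s systems.

```python
from dataclasses import dataclass, field

@dataclass
class Norm:
    """A single behavioral rule, e.g. 'do not take items from another shopper's cart'."""
    description: str
    priority: int  # lower number = more fundamental (law before etiquette)

@dataclass
class NormSet:
    """Built-in principles plus norms picked up from the local context."""
    core: list[Norm] = field(default_factory=list)   # laws and larger principles
    local: list[Norm] = field(default_factory=list)  # culture- and site-specific norms

    def adapt(self, observed: list[Norm]) -> None:
        """Layer on norms learned from the current culture or circumstance."""
        self.local.extend(observed)

    def applicable(self) -> list[Norm]:
        """All norms currently in force, most fundamental first."""
        return sorted(self.core + self.local, key=lambda n: n.priority)

# Start with the law and larger principles, then pick up local norms.
norms = NormSet(core=[Norm("do not harm people", 0),
                      Norm("do not steal", 0),
                      Norm("be polite", 1)])
norms.adapt([Norm("do not open an umbrella over your head in a meeting", 2)])
```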

Scheutz’s group was one of the first to work on norms for robots. “I got into robot ethics in the mid-2000s, before AI ethics was a thing, and started implementing normative reasoning for robots in 2014,” he says. 

Evan Krause, senior robotics programmer and lab manager, and Matthias Scheutz set up for an experiment in the Human-Robot Interaction Lab. Photo: Alonso Nichols

He recently received a patent for a system and method for ensuring safe, norm-conforming, and ethical behavior of intelligent systems. Using the system would involve generating a clone of the robot’s operating system and running simulated tests of certain behaviors. If the robot passes the norm standards in testing, it would be allowed to continue operating in the real-world environment. But if it fails the standards test, the system could override the robot’s intended actions, or even shut it down.
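The article describes the patented system only at a high level, but the flow it outlines, clone the control system, simulate the intended behavior, then allow, override, or shut down, can be sketched in code. Everything below is a hypothetical sketch of that flow; none of the function or method names come from the actual patent.

```python
def vet_intended_action(robot, action, norm_checker):
    """Hypothetical sketch of the described flow: test a behavior on a
    simulated clone before the real robot is allowed to carry it out."""
    clone = robot.clone_control_system()  # copy of the robot's operating system
    outcome = clone.simulate(action)      # run the behavior with no real-world effects

    if norm_checker.passes(outcome):
        # Behavior conforms to the norm standards: keep operating normally.
        return robot.execute(action)

    # Behavior fails the standards test: override the intended action,
    # or shut the robot down if no safe alternative exists.
    alternative = norm_checker.safe_alternative(action)
    if alternative is not None:
        return robot.execute(alternative)
    robot.shut_down()
```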

Learning the norms we take for granted isn’t as simple as it sounds. Take something as basic as having a robot go grocery shopping. It knows what grocery stores look like in general, but not each store in particular. How does it find food items? What if there is a person standing in front of an item it needs? If multiple brands of a product are on offer, how does it choose the one to buy? 

Take another example. Say you ask a hospital robot to get something in the supply closet, and that door is locked. “Now you have to find the key,” Scheutz says. “Then the plan gets more complicated—can the robot replan automatically?” 
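At its simplest, the kind of automatic replanning Scheutz is asking about might look like the following. This is a toy illustration only; the robot methods are invented for the example, and real task planners handle such cases far more generally.

```python
def fetch_from_supply_closet(robot, item):
    """Toy replanning example: when a precondition fails (the door is
    locked), extend the plan with a new subgoal instead of giving up.
    All robot methods here are invented for illustration."""
    if robot.is_locked("supply_closet"):
        # The original plan can't proceed; add a subgoal of obtaining the key.
        key = robot.find("supply closet key")
        if key is None:
            return robot.ask_for_help("Where is the key to the supply closet?")
        robot.pick_up(key)
        robot.unlock("supply_closet", key)
    robot.open("supply_closet")
    return robot.retrieve(item)
```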

Where the Danger Comes In

Perhaps the AI, though, would have some ideas because it has been trained on material that referenced locked doors and what people do to get them open. “There’s a big push right now to utilize the knowledge that is encoded in these large language models to make robots more adaptive and easier to instruct,” he says.

The trouble is, AI is hardly fail-safe. Scheutz tells how he queried ChatGPT about himself—and the AI gave a good summary of his work, including a book he had written on human-robot interaction. The problem: that book doesn’t exist, plausible as it is that Scheutz might have written it.

“Because the architecture is intrinsically built on creating patterns that seem likely, given the past, you can never be sure that what it creates is actually accurate and true,” he says. “That’s where the danger comes in.”

In the Human-Robot Interaction Lab, a graduate student demonstrates one of the test robots as it responds to vocal commands to move a bottle of pills from one location to another, as in a hospital setting. It’s a laborious process, stage by stage, but it gets done. In another test, the robot faces an obstacle to accomplishing the task and has to make adjustments to achieve its goal.

There are also fail-safe instructions in place, as the people controlling the robots have different levels of access when it comes to giving commands. “For example, if you’re trusted, you can tell the robot new facts and it will believe you,” Scheutz says. “If you’re not trusted, you can’t do that.”

By having this very explicit built-in access control that cannot be circumvented, “the robot itself already has a notion of, for example, who’s authorized to give it instructions,” he says.
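A minimal sketch of what such built-in access control could look like is below. The trust levels and method names are assumptions made for the example, not the lab’s actual interface.

```python
TRUST_LEVELS = {"untrusted": 0, "trusted": 1, "supervisor": 2}

class CommandGate:
    """Hypothetical gate that checks who is speaking before the robot
    accepts new facts or commands."""
    def __init__(self, knowledge_base):
        self.kb = knowledge_base

    def handle(self, speaker_level: str, utterance_type: str, content: str):
        level = TRUST_LEVELS.get(speaker_level, 0)
        if utterance_type == "assert_fact":
            # Only trusted speakers can tell the robot new facts it will believe.
            if level >= TRUST_LEVELS["trusted"]:
                self.kb.add(content)
                return "believed"
            return "rejected: not authorized to assert facts"
        if utterance_type == "command":
            if level >= TRUST_LEVELS["trusted"]:
                return f"executing: {content}"
            return "rejected: not authorized to give commands"
        return "rejected: unknown utterance type"
```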

It is a start, but with AI, it’s even more important to have fail-safe mechanisms because it’s unclear exactly how the AI is learning. “You throw more data at it, and you let it learn, and see what it learns,” says Scheutz. “Do we understand how and why and what it is about the structural input that allows it to answer questions that it wasn’t specifically trained on? We don’t know that.”