The Human-Robot Interaction Laboratory (HRI Lab) is in a far-flung corner of Tufts: 200 Boston Avenue, a location populated primarily by researchers, engineering students and, of course, robots.
One of the HRI Lab’s recent experiments, a study called “Sorry, I can’t do that,” co-authored by HRI Lab Director Matthias Scheutz and then-Ph.D. student Gordon Briggs, has garnered a great deal of media attention. A video from the HRI Lab's YouTube channel shows a human researcher telling a robot to walk forward; the robot refuses, “seeing” that obeying the command would carry it off the edge of the table. When the researcher then tells the robot that he will catch it, the robot acquiesces and is caught.
This research received a significant amount of press, with headlines like “Why robots must learn to disobey their human masters” in a Dec. 4, 2015 Tech Insider article, “Saying no: teaching robots to reject orders” in a Nov. 30, 2015 R&D Magazine article and “Robots learn to disobey humans! Watch machine say ‘no’ to voice commands” in a Nov. 26, 2015 article in The Mirror.
Many articles have focused on the “disobedience” aspect of the study, which Scheutz says is misleading.
“I don’t like that term [‘disobedient’]—the newspapers came up with ‘disobedient robots’ because they like it; it’s flashy," Scheutz said. "The way I would like to look at it -- and that’s the intention of what we’re doing -- is robots being more aware of what actions result in, what actions will cause. Specifically in contexts where the actions could cause harm or damage, the robot should not blindly follow them."
In fact, the robots are not learning to “say no,” according to Scheutz. They are simply obeying more complex commands.
“The robot has a rule: if I’m instructed to do something, and in doing that I do harm, I cannot do that -- that’s a principle it has," Scheutz said. "So it’s not disobedient, it’s reasoning through the obligations it has."
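For illustration only, here is a minimal sketch in Python of the kind of rule Scheutz describes -- the action names, toy world model and messages are hypothetical, not the lab's actual system. The agent projects the effects of a commanded action and refuses when those effects would cause harm, unless the obligation is discharged, for instance by the researcher's promise to catch it:

    # Illustrative sketch only -- hypothetical names, not the HRI Lab's code.
    def projected_effects(action, world):
        # Toy world model: walking forward at a table edge leads to a fall
        # unless the human has promised to catch the robot.
        if action == "walk forward" and world["at_table_edge"] and not world["will_be_caught"]:
            return {"harm": True, "reason": "I would fall off the table"}
        return {"harm": False, "reason": None}

    def respond_to_command(action, world):
        # The principle: if carrying out the instruction causes harm, refuse.
        effects = projected_effects(action, world)
        if effects["harm"]:
            return "Sorry, I can't do that: " + effects["reason"] + "."
        return "OK, doing: " + action

    world = {"at_table_edge": True, "will_be_caught": False}
    print(respond_to_command("walk forward", world))   # refuses
    world["will_be_caught"] = True                     # the researcher says "I will catch you"
    print(respond_to_command("walk forward", world))   # now complies

In this toy version, the "no" is not a separate disobedience module; it simply falls out of checking the command against the harm rule before acting.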
According to Scheutz, the goal of this research is to develop robots that understand not only the meaning of human words but also the intent behind them -- and that means coding not just formal language but the informal language of intentions as well.
“My assumption is that we’re going in the direction of a human-robot society," Scheutz said. "We will have more and more autonomous machines among us … Our society is governed by social and moral norms and by laws that are derived from them. We want to make sure that whatever machines we’ve built, they not only operate in the confines of the legal system, they also have to respect normative expectations."
A large component of this research is getting robots to understand “indirect speech acts,” or language that is not meant to be taken literally. Tufts senior Monika Dragulski has worked in the lab since her sophomore year and helped with this particular project.
“For example, when you go to a restaurant, and you say ‘could I get a glass of water?’ you’re actually saying ‘I want a glass of water’... so we’re working to train future robots to understand [that],” Dragulski said.
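As a toy illustration -- not the lab's software, and with an invented pattern and function name -- an indirect request like Dragulski's example can be mapped from its literal form, a yes/no question, to the request it actually expresses:

    # Illustrative sketch only: a toy mapping from an indirect request to its intended meaning.
    import re

    def interpret(utterance):
        # "Could I get a glass of water?" is literally a yes/no question,
        # but its intended meaning is a request for the water.
        m = re.match(r"(?i)(?:could|can) (?:i get|i have|you give me) (.+?)\??$", utterance.strip())
        if m:
            return ("REQUEST", m.group(1))
        return ("LITERAL", utterance)

    print(interpret("Could I get a glass of water?"))  # ('REQUEST', 'a glass of water')
    print(interpret("Is the water cold?"))             # ('LITERAL', 'Is the water cold?')

Real systems go well beyond keyword patterns like this, but the sketch shows the basic idea: the robot's response depends on the inferred intent, not the literal sentence type.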
According to Scheutz, language and perception are major hurdles that must be overcome before robots can become truly sophisticated machines, but the lab is working on both.
“Perception is a really big issue," he said. "How do we know, based on what we see and can process, that the situation is potentially dangerous or morally charged? … It’s a very difficult vision problem, and it’s vision plus inference plus reasoning. We’re by far not anywhere close [to solving it]."
With 16 researchers working in the lab, including 10 Ph.D. students, as well as seven undergraduates, the projects they work on cover a range of topics. According to Max Bennett, another Tufts senior working in the lab, they encompass not only indirect speech acts but also creating mental models and running studies of how humans interact directly with robots.
“The way people interact with the robots informs what we should be trying to implement with the robots … It’s a two-way street: the cognitive models we’ve made, we’re able to test in robot settings and then the information we get from those settings informs whatever revisions have to be made to whatever project it is,” Bennett said.
Dragulski and Bennett are both double majors in computer science and cognitive and brain science, which factors into their work as well -- in most of their studies, the subjects being tested for a reaction are humans, not robots.
“I wouldn’t separate [psych study and robot study] -- I think all the robot studies we do are psych studies," Bennett said. "They just sometimes involve a lot of computer science and robots.”
Dragulski also views the research as focusing on human and robot interactions.
“Very basically, we’re looking at how humans and robots interact and trying to improve that interaction,” she said.
Programming robots to understand more complex commands is only one aspect of research in artificial intelligence (AI). Bennett added that there are many aspects yet to explore in the field.
“There’s so many open AI questions … what does it mean to think? If you could perfectly model how a brain works, how is that network and activation and simulation not sufficient for being called a brain in function?” Bennett said.
Scheutz, Dragulski and Bennett all remain optimistic about the future of robotics, and derided the “AI apocalypse” tone that the press has taken toward some of the lab's experiments.
“[A ‘sentient’ AI is] far [off] right now, but once it’s close, it’s really close," Bennett said. "Once there is an agent that is able to learn, it will be able to learn at an exponential rate."
In fact, there is already a group dedicated to the rights of the kinds of machines envisioned in science fiction: PETRL, People for the Ethical Treatment of Reinforcement Learners. Their stated aim is “promoting moral consideration for algorithms." While they admit that most machines do not currently have “significant moral weight,” they also assert that “this could very well change as AI research develops.”
However, according to Scheutz, these sentient robots of the future won't be built tomorrow, though they may be on the horizon.
“It’s not sci-fi yet ... but it’s definitely interesting to see how certain things that would not have been thinkable 10 or 15 years ago are starting to take shape,” Scheutz said.