The rapid growth of artificial intelligence has been accompanied by fears that intelligent machines could pose a threat to humanity. To ease this anxiety, a team of researchers has developed a method for training AI how to behave in social settings.
Robots can learn socially acceptable behavior by reading and understanding children’s books, particularly stories about chivalry. Researchers have developed a technology called “Quixote” that teaches robots how to align their goals with proper human behavior in social settings.
(Photo: Georgia Institute of Technology)
The new technology, called “Quixote,” teaches robots to read children’s stories, understand acceptable social behavior, and learn standard sequences of events. It was developed by a team from the Georgia Institute of Technology’s School of Interactive Computing.
Mark Riedl, associate professor and director of the Entertainment Intelligence Lab, says the team believes that if robots can understand these stories, it could prevent “psychotic-appearing behavior” in AI and steer them toward options that complete the required task without harming humans.
Quixote is a “value alignment” method that connects a robot’s goals with appropriate behavior in social settings. Building on Riedl’s previous research, Quixote teaches the robot to act like the protagonist of a children’s story in anticipation of a reward.
For instance, a robot that needs to pick up a prescription for a human could do one of the following: rob the pharmacy, take the medicine it needs, and run; speak to the pharmacist to get the medicine; or wait patiently in line for its turn at the counter.
Without the Quixote system, a robot might conclude that robbing the pharmacy is the quickest and cheapest way to finish the task. With its goals aligned to socially acceptable behavior, however, the AI learns that it will be rewarded if it chooses either the second or the third option.
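In reinforcement-learning terms, the idea can be pictured as reward shaping: behaviors that stories mark as socially acceptable earn a bonus large enough to outweigh the time saved by antisocial shortcuts. The toy sketch below illustrates that trade-off for the pharmacy example; it is not the researchers’ actual system, and all action names, costs, and reward values are hypothetical.

```python
# Toy illustration (not the Quixote implementation) of story-derived
# reward alignment: socially acceptable actions earn a bonus, so a
# planner maximizing reward prefers them over faster antisocial ones.

# Candidate plans: (description, time cost, socially acceptable?)
CANDIDATE_PLANS = [
    ("rob the pharmacy and run", 1, False),
    ("ask the pharmacist for the medicine", 3, True),
    ("wait in line for a turn at the counter", 5, True),
]

SOCIAL_REWARD = 10.0   # hypothetical bonus for story-sanctioned behavior
TIME_PENALTY = 1.0     # hypothetical cost per unit of time spent


def plan_value(time_cost: int, socially_acceptable: bool) -> float:
    """Score a plan: efficiency alone favors robbery, but the
    story-derived social reward outweighs the time it saves."""
    reward = SOCIAL_REWARD if socially_acceptable else 0.0
    return reward - TIME_PENALTY * time_cost


def choose_plan(plans):
    """Pick the plan with the highest aligned value."""
    return max(plans, key=lambda p: plan_value(p[1], p[2]))


if __name__ == "__main__":
    best = choose_plan(CANDIDATE_PLANS)
    print("Chosen plan:", best[0])
    # Without the social bonus the agent would rob the pharmacy
    # (lowest time cost); with it, it asks the pharmacist instead.
```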
“The technique is best for robots that have a limited purpose but need to interact with humans to achieve it. It is a primitive first step toward general moral reasoning in AI,” says Riedl, who developed Quixote with Brent Harrison. He adds that when there is no human user manual, teaching a robot to read and understand children’s books is the most practical way to achieve value alignment.
The research was supported by grants from the Office of Naval Research and the U.S. Defense Advanced Research Projects Agency (DARPA), and the paper can be viewed [pdf] online. The researchers will debut the project at the AAAI-16 conference, held Feb. 12 to 17 in Phoenix, Arizona.