Tuesday, May 28, 2013

Beer-pouring robot programmed to anticipate human actions

Contact: Syl Kacapyr
vpk6@cornell.edu
607-255-7701
Cornell University

ITHACA, N.Y. -- A robot in Cornell's Personal Robotics Lab has learned to foresee human action in order to step in and offer a helping hand, or, more accurately, roll in and offer a helping claw.

Video: https://www.youtube.com/watch?v=xaa_wEkCvG0

Understanding when and where to pour a beer, or knowing when to offer help with a refrigerator door, can be difficult for a robot because of the many variables it must weigh while assessing the situation. A Cornell team has developed an approach that lets the robot anticipate what a person will do next.

Gazing intently with a Microsoft Kinect 3-D camera and drawing on a database of 3-D videos, the Cornell robot identifies the activities it sees, considers what uses are possible with the objects in the scene and determines how those uses fit with the activities. It then generates a set of possible continuations into the future, such as eating, drinking, cleaning or putting away, and finally chooses the most probable. As the action continues, the robot constantly updates and refines its predictions.
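The release does not include code, but the loop it describes can be sketched in a few lines of Python. Everything below (the sub-activity labels, the affordance tags and the counting rule used for scoring) is an illustrative assumption, not the Cornell lab's actual model.

    # A minimal, self-contained sketch of the anticipation step described above.
    # Labels, affordance tags and the scoring rule are illustrative assumptions.
    from collections import Counter

    # Toy knowledge: which sub-activities and object affordances typically
    # lead into each candidate future activity.
    CONTINUATIONS = {
        "drinking":     {"subacts": ["reach", "grasp", "lift"], "needs": "drinkable"},
        "eating":       {"subacts": ["reach", "grasp", "bite"], "needs": "edible"},
        "cleaning":     {"subacts": ["reach", "wipe"],          "needs": "wipeable"},
        "putting away": {"subacts": ["grasp", "carry"],         "needs": "storable"},
    }

    def score(observed, affordances, candidate):
        """Count how well the observed sub-activities and the affordances of
        objects in the scene match one candidate continuation."""
        spec = CONTINUATIONS[candidate]
        overlap = sum((Counter(observed) & Counter(spec["subacts"])).values())
        return overlap + (1 if spec["needs"] in affordances else 0)

    def most_probable(observed, affordances):
        """Pick the candidate continuation with the highest score; called
        again on every new observation to refine the prediction."""
        return max(CONTINUATIONS, key=lambda c: score(observed, affordances, c))

    # Example: the robot has seen a reach and a grasp near a cup.
    print(most_probable(["reach", "grasp"], {"drinkable", "storable"}))  # drinking

A real system would replace the counting rule with a learned probabilistic model over object affordances and human poses, but the shape of the decision is the same: enumerate plausible futures, score them against what has been observed and act on the best one.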

"We extract the general principles of how people behave," said Ashutosh Saxena, Cornell professor of computer science and co-author of a new study tied to the research. "Drinking coffee is a big activity, but there are several parts to it." The robot builds a "vocabulary" of such small parts that it can put together in various ways to recognize a variety of big activities, he explained.

Saxena and Cornell graduate student Hema S. Koppula will present the research at the International Conference on Machine Learning, June 18-21 in Atlanta, and at the Robotics: Science and Systems conference, June 24-28 in Berlin, Germany.

In tests, the robot made correct predictions 82 percent of the time when looking one second into the future, 71 percent of the time at three seconds and 57 percent at 10 seconds.

"Even though humans are predictable, they are only predictable part of the time," Saxena said. "The future would be to figure out how the robot plans its action. Right now we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond."

###

The research was supported by the U.S. Army Research Office, the Alfred P. Sloan Foundation and Microsoft.



AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert! system.


Source: http://www.eurekalert.org/pub_releases/2013-05/cu-brp052813.php

