MIT Researchers Develop AI Framework to Improve Home Assistants' Social Intelligence


Illustration of the desired behavior of a socially intelligent AI assistant that is capable of jointly inferring humans' goals and helping humans reach the goals faster without being explicitly told what to do. The agent initially has no knowledge about the human's goal and thus would opt to observe. As it observes more human actions, it becomes more confident in its goal inference, adapting its helping strategy. Here, when the agent sees the human walking to the cabinet, it predicts that the goal involves plates, and decides to help by handing these plates to the human. As it becomes clear that the goal is to set up the dining table, it helps with more specific strategies, such as putting the plates on the dining table. Credit: Puig et al.

Researchers at the Massachusetts Institute of Technology (MIT) have developed a framework that could make home assistants more responsive and socially intelligent. The framework, named NOPA (neurally guided online probabilistic assistance), allows artificial agents to infer what task a human user is trying to tackle and to assist them in appropriate ways. This is a marked improvement over current home assistants, which only help when explicitly instructed to do so.


The "Online Watch-and-Help" Problem


The researchers set out to create AI-powered agents that can simultaneously infer what task a human user is trying to tackle and assist them accordingly, a problem they call "online watch-and-help." Solving it reliably is difficult because of a timing tradeoff: if a robot starts helping too soon, it may misread the human's overall goal and its contribution could be counterproductive; if it waits too long, its help may arrive too late to be useful.
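To make the tradeoff concrete, here is a minimal Python sketch (not the authors' code) of a naive baseline helper that refuses to act until a single goal hypothesis crosses a fixed confidence threshold. The goal names and the threshold are invented for illustration; the point is that waiting for near-certainty delays help, while a low threshold risks acting on the wrong goal.

```python
# Naive baseline for contrast: act only once one goal hypothesis is
# near-certain. The goals and threshold here are purely illustrative.

def naive_helper_policy(posterior: dict[str, float], threshold: float = 0.9) -> str:
    """posterior maps each candidate goal to its inferred probability."""
    best_goal, best_prob = max(posterior.items(), key=lambda kv: kv[1])
    if best_prob >= threshold:
        return f"act toward '{best_goal}'"
    return "keep observing"  # no help is delivered while the agent is unsure

# Early on, the posterior is spread out, so this agent stays idle even
# though some help (e.g., fetching plates) would already be safe.
print(naive_helper_policy({"set table": 0.55, "store dishes": 0.45}))  # keep observing
print(naive_helper_policy({"set table": 0.95, "store dishes": 0.05}))  # act toward 'set table'
```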


NOPA Framework

The emergence of helping strategies from the team's method. On the top, the helper agent (Blue) decides that handing objects to the human (Orange) is the best strategy. On the bottom, the helper agent returns objects to their original location after observing the human's actions, keeping the kitchen tidy. Credit: Puig et al.

The NOPA framework allows an agent to track a set of candidate goals instead of committing to a single one. This way, the robot or AI assistant can act in ways that are consistent with all of these goals, without waiting too long before stepping in. The framework continually maintains a set of goals the human might be pursuing and updates it as new human actions are observed. At each decision point, a helping planner searches for a common subgoal that advances every goal in the current set, and then searches for specific actions that accomplish that subgoal.
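This loop can be sketched in a few lines of Python. Everything below (the goal tables, the hand-written likelihood model, the plausibility cutoff) is an invented toy standing in for NOPA's learned components, not the authors' actual code; it only illustrates how reweighting hypotheses shrinks the goal set and lets the helper move from generic to specific assistance.

```python
# A conceptual sketch of the watch-and-help loop: maintain weighted goal
# hypotheses, reweight them as human actions are observed, and help toward
# subgoals shared by every hypothesis that remains plausible.

GOALS = {  # each candidate goal -> subgoals a helper could contribute (toy data)
    "set table": {"fetch plates", "hand plates to human", "put plates on table"},
    "store dishes": {"fetch plates", "hand plates to human", "put plates in cabinet"},
}
FITS = {  # toy model: which observed human actions are consistent with which goal
    "set table": {"walk to cabinet", "walk to table"},
    "store dishes": {"walk to cabinet", "open cabinet"},
}

def action_likelihood(action: str, goal: str) -> float:
    return 0.9 if action in FITS[goal] else 0.1

def update(weights: dict[str, float], action: str) -> dict[str, float]:
    """Bayesian-style reweighting of the hypothesis set after one observation."""
    new = {g: w * action_likelihood(action, g) for g, w in weights.items()}
    total = sum(new.values())
    return {g: w / total for g, w in new.items()}

def shared_subgoals(weights: dict[str, float], keep: float = 0.2) -> set[str]:
    """Subgoals that advance every goal still above the plausibility cutoff."""
    plausible = [GOALS[g] for g, w in weights.items() if w >= keep]
    return set.intersection(*plausible) if plausible else set()

weights = {g: 1 / len(GOALS) for g in GOALS}
weights = update(weights, "walk to cabinet")  # consistent with both goals
print(sorted(shared_subgoals(weights)))       # generic help: fetch / hand plates
weights = update(weights, "walk to table")    # now only "set table" fits well
print(sorted(shared_subgoals(weights)))       # now also includes 'put plates on table'
```

Note how the same machinery yields both behaviors from the figure above: while both goals remain plausible, only the generic subgoals survive the intersection; once one goal dominates, goal-specific subgoals become available.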


Interesting Behaviors Observed


The researchers evaluated the NOPA framework in a simulated environment and observed some interesting behaviors. The agents were able to correct their behaviors to minimize disruption in the house, such as putting an object back in its original place if it turned out to be unrelated to the task. When uncertain about a goal, the agents would pick actions that were generally helpful, such as handing a plate to the human, instead of committing to bringing it to a table or storage cabinet.
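The first of these behaviors can be pictured with another toy snippet (hypothetical helper names and data, not from the paper): once no plausible goal involves an object the agent has moved, the agent plans to put it back.

```python
# A hedged illustration of the observed tidy-up behavior: if an object the
# agent moved is irrelevant to every remaining goal hypothesis, undo the move.

def cleanup_actions(moved: dict[str, str], plausible_goal_objects: set[str]) -> list[str]:
    """moved maps object -> original location; returns actions undoing irrelevant moves."""
    return [
        f"return {obj} to {origin}"
        for obj, origin in moved.items()
        if obj not in plausible_goal_objects
    ]

# The agent grabbed a mug while the goal was still ambiguous; once the goal
# narrows to table-setting (plates only), the mug goes back to the shelf.
print(cleanup_actions({"mug": "shelf", "plate": "cabinet"}, {"plate"}))
# -> ['return mug to shelf']
```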


Journal Information: Xavier Puig et al., NOPA: Neurally-guided Online Probabilistic Assistance for Building Socially Intelligent Home Assistants, arXiv (2023). DOI: 10.48550/arxiv.2301.05223
Felix Warneken et al., Altruistic Helping in Human Infants and Young Chimpanzees, Science (2006). DOI: 10.1126/science.1121448
