
Swiss AI Research Overview Platform

Learning by Switching Roles in Physical Human-Robot Collaboration

Lay summary

In a human-robot collaboration context involving physical contact with the person, current robots show limitations in observing and adapting to human dynamics, which reduces the efficiency and ergonomics of the collaboration. SWITCH aims to develop methods for analyzing human dynamics in real time in order to train anticipatory controllers from demonstrations.

First, several datasets including forces and motions will be collected with one person assisting another in standing up from a chair. For this, several types of sensors will be used to measure the interaction forces between the robot, the person, and the environment. A probabilistic model learning the behavior of both persons will be trained from these data, and will then be exploited by the robot in the form of a reactive and anticipatory controller.

Effective physical assistance requires the robot to be informed of what the assisted person is doing (or is about to do), including the movement of the joints and centers of gravity, as well as the forces exchanged with the person and the environment. SWITCH proposes to study the task of assisting a person to stand up by using recordings of human-human, human-robot, and robot-human assistance. Learning will thus be carried out by switching the roles of the assistant and the assisted.

We believe that introducing this strategy in a learning-from-demonstration context has the potential to improve the acquisition of assistive tasks by providing varied and personalized demonstrations. For modeling, we will adopt a holistic approach encoding the behavior of both agents in a joint model, which will be exploited by regression and model predictive control strategies to model the reactive and anticipatory behavior of both agents. Scenarios of increasing complexity will be considered, from purely reactive behavior to anticipatory and personalized behavior.
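As a minimal sketch of the "joint model plus regression" idea above, one plausible instantiation is a joint Gaussian over the states of the two agents, conditioned at run time on the observed partner state to regress the assistant's response. This is only an illustration under assumed names and a single-Gaussian simplification of the mixture models common in learning from demonstration; it is not the project's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated demonstrations: columns = [assisted person's state, assistant's force].
# Here the "demonstrations" follow a simple linear coupling plus noise.
x_assisted = rng.uniform(0.0, 1.0, size=(200, 1))        # e.g. torso inclination
f_assistant = 3.0 * x_assisted + 0.5 + 0.1 * rng.standard_normal((200, 1))
data = np.hstack([x_assisted, f_assistant])              # joint dataset (N x 2)

# Fit a joint Gaussian N(mu, Sigma) over [x, f]
mu = data.mean(axis=0)
Sigma = np.cov(data, rowvar=False)

def predict_force(x_obs):
    """Gaussian conditioning: E[f | x = x_obs]."""
    mu_x, mu_f = mu[0], mu[1]
    S_xx, S_fx = Sigma[0, 0], Sigma[1, 0]
    return mu_f + S_fx / S_xx * (x_obs - mu_x)

print(predict_force(0.5))  # close to 3.0 * 0.5 + 0.5 = 2.0
```

The same conditioning rule extends to mixtures of Gaussians (Gaussian mixture regression) and to multi-dimensional force/motion states, which is what makes a joint encoding of both agents usable as a reactive controller.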

Abstract

In physical human-robot collaboration, robots currently face a shortcoming due to their limitations in observing and adapting to human dynamics. This results in inefficient collaboration and non-ergonomic interaction. SWITCH will address this shortcoming by developing methods that can efficiently observe human dynamics in real time and learn anticipatory models from demonstration. First, we will collect several datasets of force and motion-capture data for a human-human standing-up task. We will then develop models that learn the behaviors of the two agents (assistant and assisted) in a probabilistic fashion. These models will be exploited for online control of robots with reactive and anticipative capabilities.

An effective human-robot physical collaboration requires robots to be aware of what the human partner is doing and will be doing, both in terms of motion and in terms of forces exchanged. Physical collaboration requires anticipation by both partners; anticipation of a partner requires models and observation; models and observation require new technologies. While current state-of-the-art technologies make it possible to estimate motion, measuring and predicting the exchanged forces online is an open and challenging problem. In SWITCH, we will exploit a fully sensorized environment and accompanying computational tools, enabling us to measure the interaction forces between robots, humans, and the environment.

We will concentrate on the specific task of assisting a person to stand up, considering three scenarios of increasing complexity, from purely reactive behaviors to anticipative and personalized behaviors. One novelty of the approach is that learning will be achieved by switching the roles of the assistant agent and the assisted agent.
We believe that introducing such a strategy in learning from demonstration (LfD) will speed up the learning process by providing a richer set of demonstrations with personalization capability (with the caregiver providing appropriate demonstrations for the person to be assisted). Recordings of human-human, human-robot, and robot-human behaviors will also allow us to collect a wider range of sensory information (force and motion). As an encoding strategy, we will consider a novel holistic approach that encodes the behaviors of the two agents in a joint model, which will be exploited within regression and model predictive control strategies for reaction to and anticipation of the agents' behaviors.

As a more general perspective, the research proposed in SWITCH will develop technologies for assistive robots to coexist and physically interact with humans. These technologies will enable robots to be more aware of and care about their human partners, with potential impact on assistance capabilities in healthcare and household environments. The long-term goal is to enable humanoid robots to physically interact and work efficiently with humans. While humanoid robots are already capable of performing several dynamic tasks, it is clear that in human-robot physical collaboration they currently have a blind spot: their limitations in observing and adapting to human dynamics lead to inefficient collaboration and interaction. The approaches and techniques that we propose to investigate in SWITCH will advance the current state of robot control and learning techniques in assistive humanoid robots, enabling robust, goal-directed whole-body motion execution involving physical contacts with the environment and humans.
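To make the anticipatory side of "regression and model predictive control strategies" concrete, the following is a hedged sketch of one receding-horizon step: a 1-D point mass (standing in for the robot's assistive contact point) tracks a short predicted trajectory of the partner. The dynamics, horizon, and weights are illustrative assumptions, not the project's actual controller.

```python
import numpy as np

dt, T = 0.1, 10                        # time step and horizon length
A = np.array([[1.0, dt], [0.0, 1.0]])  # double-integrator dynamics (pos, vel)
B = np.array([[0.0], [dt]])

def mpc_step(x0, ref):
    """Solve an unconstrained finite-horizon tracking problem by least squares
    and return only the first control input (receding-horizon principle)."""
    # Prediction matrices: stacked states over the horizon are Sx @ x0 + Su @ u
    Sx = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(T)])
    Su = np.zeros((2 * T, T))
    for k in range(T):
        for j in range(k + 1):
            Su[2*k:2*k+2, j:j+1] = np.linalg.matrix_power(A, k - j) @ B
    # Track the reference positions only, with a small control penalty
    C = np.kron(np.eye(T), np.array([[1.0, 0.0]]))   # selects position rows
    H = C @ Su
    e = ref - (C @ Sx @ x0).ravel()
    u = np.linalg.solve(H.T @ H + 1e-3 * np.eye(T), H.T @ e)
    return u[0]

# One step: robot at rest, partner predicted to rise linearly over the horizon
x = np.array([0.0, 0.0])
ref = 0.05 * np.arange(1, T + 1)       # predicted partner positions
u0 = mpc_step(x, ref)
print(u0 > 0)  # the controller accelerates toward the rising reference
```

Re-solving this problem at every control cycle with an updated partner prediction is what turns the learned joint model into anticipatory, rather than purely reactive, assistance.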

Last updated: 11.06.2022

Sylvain Calinon
Tadej Petric