The Institute of Computer Science at the Technical University Bergakademie Freiberg performs research on various aspects of humanoid and android robotics, virtual reality and computer networks. Research topics include the imitation of human behaviour, physical human-robot interaction and the integration of physical robots in virtual reality environments. The institute runs Master’s programs in Applied Computer Science and Network Computing, in which several robotics-related lectures are taught. State-of-the-art research at the international level is ensured through collaborations with universities in Japan, e.g. the Intelligent Robotics Lab (Prof. Hiroshi Ishiguro) at Osaka University.
We acquired the NAO robots at the beginning of 2011 for research on interaction learning methods. Currently, one postdoc and two Master’s students are working with NAO. The objective of this research project is to capture the way humans interact with each other during conversations or during physical interaction, for example when one person hands a bottle over to another. Using machine learning techniques, we extract information about the interaction that allows the NAO robot to engage in a similar interaction with a human being, e.g. hand over a bottle to a human partner.
Imitation learning enables a robot to copy a human behaviour observed through a camera or motion capture device. We believe this will make robots more autonomous and reduce the need for manual programming. We want to capture the interaction between two humans and then replicate this situation with a robot and a human. In this way, we can make the interaction capabilities of robots more natural and lifelike.
We are also developing simulation and optimization techniques, which allow us to evaluate a new behaviour in a virtual environment before using it on the real robot. Using dimensionality reduction algorithms in conjunction with evolutionary algorithms, we are able to optimize each new behaviour in the NaoSim simulator. For example, we are currently working on optimizing the walking gait for stability. To initialize this process (the so-called bootstrapping stage), we record a walking gait from a human demonstrator. The gait is then applied to the NAO robot in simulation and further optimized in NaoSim. Once a stable variant of the walking gait is found, we replay it on the real NAO robot.
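As a rough illustration of this optimization loop, the sketch below runs a simple (1+1) evolution strategy over a gait parameter vector. The `stability_score` function is a hypothetical stand-in for a NaoSim rollout (the article does not describe the actual fitness function or parameterization), and the numbers are purely illustrative:

```python
import random

def stability_score(params):
    """Hypothetical stand-in for a NaoSim rollout: in practice the gait
    parameters would be played in simulation and scored, e.g. by how long
    the robot stays upright.  Here we just reward closeness to a target."""
    target = [0.3, -0.1, 0.5, 0.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def optimize_gait(initial_params, generations=200, sigma=0.05, seed=0):
    """(1+1) evolution strategy: mutate the current best gait parameters
    with Gaussian noise and keep the mutant whenever it scores at least
    as well as the parent."""
    rng = random.Random(seed)
    best = list(initial_params)
    best_score = stability_score(best)
    for _ in range(generations):
        mutant = [p + rng.gauss(0.0, sigma) for p in best]
        score = stability_score(mutant)
        if score >= best_score:
            best, best_score = mutant, score
    return best, best_score

demo = [0.0, 0.0, 0.0, 0.0]   # bootstrapped from the human demonstration
params, score = optimize_gait(demo)
```

The bootstrapping step corresponds to seeding the search with the demonstrated gait instead of random parameters, so the optimizer refines a behaviour that is already roughly correct.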
At this point, we are trying to learn an interaction model from previously recorded motions. The idea is then to find the robot posture that is a suitable response to a given human posture, depending on the current situation.
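One minimal way to realize such an interaction model is a nearest-neighbour lookup over the recorded motion pairs: given an observed human posture, return the partner posture that was recorded alongside the most similar human posture. This is only a sketch of the general idea, not the lab's actual method, and the posture vectors below are invented:

```python
import math

# Hypothetical recorded interaction data: pairs of (human posture,
# partner posture), each a small vector of joint angles in radians.
recorded_pairs = [
    ([0.0, 0.2], [0.1, -0.3]),
    ([0.5, 0.1], [0.4,  0.2]),
    ([0.9, 0.7], [0.8,  0.6]),
]

def respond(human_posture):
    """Nearest-neighbour interaction model: return the partner posture
    recorded alongside the most similar human posture."""
    _, partner = min(
        recorded_pairs,
        key=lambda pair: math.dist(pair[0], human_posture))
    return partner

pose = respond([0.45, 0.15])
```

A learned model would generalize between the recorded examples rather than snapping to the closest one, but the lookup captures the core mapping from human posture to robot response.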
Another focus of our research is to increase the vividness of a robot motion or to change its visual appeal. For this, we mostly use Disney’s principles of animation. For example, you can have the robot play back an animation in an exaggerated style, or play it back with a different mood such as sadness or happiness.
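The exaggeration principle can be sketched very simply: amplify each frame's deviation from the motion's mean pose. The trajectory format and scaling rule below are assumptions for illustration, not the lab's actual implementation:

```python
def exaggerate(trajectory, factor=1.5):
    """Exaggeration in the spirit of Disney's animation principles:
    amplify each frame's deviation from the motion's mean pose.
    `trajectory` is a list of frames, each a list of joint angles."""
    n_joints = len(trajectory[0])
    mean = [sum(frame[j] for frame in trajectory) / len(trajectory)
            for j in range(n_joints)]
    return [[mean[j] + factor * (frame[j] - mean[j])
             for j in range(n_joints)]
            for frame in trajectory]

wave = [[0.0, 0.2], [0.4, 0.6], [0.8, 0.2]]   # toy joint-angle keyframes
bigger_wave = exaggerate(wave, factor=2.0)
```

A factor above 1 makes the motion larger and more theatrical; a factor below 1 damps it, which could serve as a crude "sad" or subdued variant. On a real robot the result would additionally need to be clamped to the joint limits.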
To visualize this process we use a CAVE virtual reality installation. The CAVE allows us to easily embed NAO in a wide variety of environments, try out different situations and see how the robot would respond to each of them.
In our motion tracking approach, we use the Microsoft Kinect camera to obtain a 3D depth image of the scene. Based on this information, we detect up to two users and extract their body postures over time. The resulting motions and interactions are then optimized for NAO’s body configuration in simulation.
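Part of adapting a tracked human posture to NAO's body configuration is that the robot has fewer joints and narrower ranges than a person. The sketch below shows one such step, clamping tracked angles into per-joint limits; the limit values are approximate and for illustration only (the official NAO documentation lists the exact ranges), and the retargeting itself is a simplification of what happens in the simulation-based optimization:

```python
# Approximate joint limits (radians) for two NAO arm joints; illustrative
# values only -- consult the official NAO documentation for exact ranges.
NAO_LIMITS = {
    "LShoulderPitch": (-2.0857, 2.0857),
    "LShoulderRoll":  (-0.3142, 1.3265),
}

def retarget(tracked_angles):
    """Map tracked human joint angles onto NAO by clamping each angle
    into the robot's joint range; joints NAO lacks are dropped."""
    nao_angles = {}
    for joint, angle in tracked_angles.items():
        if joint not in NAO_LIMITS:
            continue  # no corresponding NAO joint
        lo, hi = NAO_LIMITS[joint]
        nao_angles[joint] = max(lo, min(hi, angle))
    return nao_angles

pose = retarget({"LShoulderPitch": -2.5,   # clamped to the lower limit
                 "LShoulderRoll": 0.4,     # within range, kept as-is
                 "Spine": 0.1})            # dropped: NAO has no spine joint
```

In practice the mapping also has to account for NAO's different limb proportions, which is why the article describes optimizing the motions in simulation rather than copying angles directly.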
Optimized motions can then be replayed on the NAO. This allows us to easily create new behaviours. The NAO robot is an ideal platform for our research: it is probably the most sophisticated commercially available humanoid robot. NAO can perform difficult movements (standing on one leg, dancing, etc.), recognize objects and faces, and comes with high-quality speech synthesis. For our research it has proven crucial that NAO has an appealing design. The cute appearance, along with the speech synthesis capabilities, allows us to easily initiate conversations and human-robot interactions.
Although we acquired our NAO robots only five months ago, we have been very productive. This is mainly due to the great SDK and the included “Choregraphe” software. Choregraphe helps users, especially students, to take their first steps with NAO and to learn about its capabilities.
In our lab, programming is done using the NAO API in C++. Choregraphe and the NaoSim simulator are then used to assess the quality of a newly developed behaviour. Additionally, we use a set of in-house and open-source tools for machine learning and motion capture. This also includes middleware for interfacing with the Microsoft Kinect camera.
So far we have developed the main components of our interaction learning method, such as the imitation of the user’s movements. Our next goal is to evaluate the approach by performing a human-robot interaction study with a number of human test subjects. The results will be presented at the ICML 2011 Workshop on New Developments in Imitation Learning. We also hope to create a rich set of interactive behaviours for NAO, which would allow it to react in a natural way to different situations.
Heni Ben Amor – T.U. Bergakademie Freiberg (Germany)