
Researchers in Edinburgh are working to develop robots that can learn, adapt and take decisions independently of human control.

Over the next decade or so, researchers in Edinburgh aim to develop robots that can learn, adapt and take decisions independently of human control. This ambitious and far-reaching work brings together dozens of scientists and engineers from the University of Edinburgh and Heriot-Watt University, along with multiple industrial partners.

The work of the Centre for Doctoral Training (CDT) in Robotics and Autonomous Systems (RAS) has funding from EPSRC and industry totalling around £13 million. The CDT addresses the key challenges in understanding and managing the interactions between robots, humans and their environment.

Dr Ramamoorthy’s work is focused on building autonomous robots that are capable of acting intelligently, as equals to human co-workers, in human-robot interactions. The overall goal is to create a structured space where robots can collaborate effectively and efficiently with each other as well as with humans. Achieving this touches on many aspects of the research, including vision, manipulation, navigation, language, task and goal recognition, and learning and adaptation, among others.

Dr Ramamoorthy’s group was the first at a UK educational institution to take delivery of a pair of Baxter robots. He explains how he came to learn about Baxter and why he needed two of them…

“I remember my first experience with Baxter was when I heard that Rodney Brooks was setting up a robotics company. We were in the planning stages of our research work, so we were looking at the tools we would be using. My own interests are in the area of robotics and collaborative AI, so this seemed like a perfect fit. I liked the fact that Rodney was explicitly looking to make a robot that was easy to work with: not the kind of robot where you spend all of your time doing mechanical maintenance or low-level programming, but one that you could work with out of the box.

We found that there really isn’t much of an alternative, at this price point, for a bi-manual manipulation robot. We had already been using the Aldebaran Nao robot for some navigation and vision work, but that platform is somewhat constrained in terms of computational resources and we really couldn’t get it to do much bi-manual manipulation. Our other alternative would have been to construct something like Baxter ourselves, and some of my colleagues have made such robots using commercially available arm and hand sub-systems, but we were actively looking for a packaged robot which came with maintenance and a user base. Active Robots came here with Baxter and did a demonstration and presentation for us, and very soon after we bought a pair of them.”

[Photo: the University of Edinburgh Informatics group]

“There were a few things that were important to us when evaluating Baxter,” continues Ramamoorthy. “ROS was a key requirement, of course; we’re not interested in closed proprietary systems, as maintenance is a real pain. Robotics is also at a stage where it is very important for us to be able to borrow from and share libraries and modules with others.

We also knew it was important to have a robot that could work in a human environment for our interaction work, and having two Baxters allows us to explore robot-robot interactions as well.

We’ve had the Baxters for just over a year now and the experience has been very good. Our first project was undertaken by an MSc student who worked on having Baxter pick up a multi-coloured cube from a human hand, simulating the un-fixtured handover of tools and other objects. Although this is a simple problem to start with, it is representative of the kind of work we are doing here. For example, how do we deal with people holding the cube in different ways, and continually moving and shaking it during the interaction? How does varied lighting affect the ability to track the cube? Once the student had solved these issues, we took the robot to the National Museum of Scotland as a demonstration at the Edinburgh International Science Festival, where a very diverse selection of members of the public interacted with Baxter in this way.”
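
To give a flavour of what tracking a coloured object under changing lighting can involve, here is a minimal sketch of one common approach (illustrative only, not the student’s actual code; the HSV colour bounds are hypothetical and would need tuning for a real cube). Thresholding in HSV space is popular because hue is less sensitive to illumination changes than raw RGB values.

```python
# Illustrative sketch: segment a coloured face of the cube by thresholding in
# HSV space, then return the centroid of the largest matching region.
import cv2
import numpy as np

def find_cube_centroid(bgr_image, lower_hsv=(100, 120, 60), upper_hsv=(130, 255, 255)):
    """Return the (x, y) pixel centroid of the largest blob in the hue range, or None."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv,
                       np.array(lower_hsv, dtype=np.uint8),
                       np.array(upper_hsv, dtype=np.uint8))
    # Remove speckle noise caused by shadows and specular highlights
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # findContours returns 2 values in OpenCV 4 and 3 values in OpenCV 3
    result = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = result[0] if len(result) == 2 else result[1]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```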

There is an excellent YouTube video that provides an overview of the research by Dr Ramamoorthy and contains some footage of Baxter working on the multi-coloured cube problem.

Ramamoorthy adds, “We use Baxter at an undergraduate level for final-year projects, and sometimes even earlier for teaching. Baxter is useful to us as it is an easy-to-use physical robot and also comes with a simulation framework that you can get into relatively easily. It’s actually a simulation framework that we teach to everybody, to every robotics student. After they have figured out a certain amount of programming, once they become serious robotics students, we get them involved with ROS and the simulator. So the fact that with Baxter there is a good simulator, a physical robot and easy-to-access public libraries means it’s relatively low cost for us to get a robotics student up to a sufficient level quickly.”
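
As a rough idea of what a first exercise on this pipeline can look like, the sketch below uses the publicly documented baxter_interface Python API, which drives either the real robot or the Gazebo-based simulator. It is an illustration of the API rather than part of the group’s actual teaching material.

```python
#!/usr/bin/env python
# Minimal illustrative sketch: enable the (real or simulated) Baxter and nudge
# one joint of the left arm using the public baxter_interface Python API.
import rospy
import baxter_interface

def main():
    rospy.init_node("baxter_first_steps")
    baxter_interface.RobotEnable().enable()    # the arms will not move until enabled

    left_arm = baxter_interface.Limb("left")
    angles = left_arm.joint_angles()           # current joint angles as a dict
    angles["left_s0"] += 0.1                   # small offset on one shoulder joint
    left_arm.move_to_joint_positions(angles)   # blocks until the move completes

if __name__ == "__main__":
    main()
```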

The Human-Robot Interaction laboratory is a modular space where walls, ceilings and floors can be configured for specific experiments. Soon there will be an array of around 120 cameras surrounding the space to provide a dense video stream that can be used to detect gestures, activity and interactive behaviours in long-term studies. Active8 Robots took the time to talk to a couple of the group’s researchers about their specific research projects.

Nantas Nardelli, from Italy, is a third-year undergraduate student and research assistant.

“With the cameras around our working area we can use all this information to provide large, possibly longitudinal, visual data libraries for interaction research,” Nardelli explains. “The cameras will feed their data to a large compute server, which will perform the bulk of the computation and then feed this information back to the robots within our laboratory.
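
In ROS terms, the data flow described here might take roughly the shape sketched below: raw images stream in from the camera array, the server does the heavy processing, and only a compact result is published back towards the robots. The topic names and the trivial “processing” are hypothetical placeholders, not the lab’s actual configuration.

```python
#!/usr/bin/env python
# Illustrative sketch of a server-side ROS node: consume a raw camera stream
# and publish a much smaller processed summary for the robots to use.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String

class CameraAggregator(object):
    def __init__(self):
        # Heavy raw data flows in from the camera array (topic name hypothetical)
        self.sub = rospy.Subscriber("/camera_array/cam0/image_raw", Image, self.on_image)
        # Only a compact summary is sent back towards the robots
        self.pub = rospy.Publisher("/interaction/observations", String, queue_size=10)

    def on_image(self, msg):
        # Stand-in for the real processing (gesture and activity detection, etc.)
        summary = "frame %dx%d at %s" % (msg.width, msg.height, msg.header.stamp)
        self.pub.publish(String(data=summary))

if __name__ == "__main__":
    rospy.init_node("camera_aggregator")
    CameraAggregator()
    rospy.spin()
```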

My project uses Baxter in the area of task recognition and inference, assessing the patterns of specific tasks. Often, when humans interact and collaborate on a task, there is no speech between them; generally, humans are really good at interpreting and understanding the task that another person is doing. A robot, on the other hand, has no idea about the task, yet robots need the ability to recognise tasks and intentions as part of our larger research goals, otherwise independent autonomous collaboration will not be possible.

The robot could of course receive instructions from a human. But in the first instance it needs to be able to recognise a task and its goal independently, so as not to overwhelm the human co-worker with a slew of detailed but essentially trivial questions. Without the ability to recognise task context, robots cannot collaborate with humans effectively. The goal is for them to be largely capable of autonomy, but if a human provides them with instructions, they can learn to get better. So my project is to take patterns that a human makes with physical objects like blocks, and see if a robot can learn what the pattern is or complete a pattern that a human has started. Then, when a human next builds a similar pattern, the robot already has a model it can infer from and can understand the new pattern. This is a basic skill that humans learn as toddlers, and that is perhaps a good place for an intelligent robot to start as well.
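
The group’s actual models are not described in detail in this article, but the basic idea of “watch a pattern, then continue it” can be illustrated with a toy sketch like the one below, which learns colour-to-colour transitions from a few demonstrated block sequences and then extends a partially built one. The demonstrations and colours are, of course, made up.

```python
# Toy illustration only: learn which block colour tends to follow which from a
# few demonstrations, then use those counts to continue a partial pattern.
from collections import defaultdict

def learn_transitions(sequences):
    """Count colour-to-colour transitions over several demonstrated patterns."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return counts

def continue_pattern(counts, partial, steps=3):
    """Extend a partially built pattern with the most likely next colours."""
    result = list(partial)
    for _ in range(steps):
        followers = counts.get(result[-1])
        if not followers:
            break
        result.append(max(followers, key=followers.get))
    return result

# Demonstrations: alternating red/blue towers
demos = [["red", "blue", "red", "blue"], ["blue", "red", "blue", "red"]]
model = learn_transitions(demos)
print(continue_pattern(model, ["red", "blue"]))  # -> ['red', 'blue', 'red', 'blue', 'red']
```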

We have two Baxters to work with, which enables us to look not only at human-robot interactions but also at robot-robot interactions. If we have an environment where there are multiple robots and humans, the robots need to be able to communicate and work together collaboratively, so we need to be able to look at this as a whole system. In our work we treat each robot as an independent agent, but we also need to study multi-agent systems. Each agent has a particular goal or task within the environment; the idea is that they can interact and work as one integrated agent, or they can work independently. We prefer to consider each robot separately, as an independent agent, but there is also the possibility that they can assist and help each other, or humans, if the task requires it.

Over the last 20 years the robotics community has started to use techniques developed in natural language processing and understanding as a way to model goals and intentions, and this is something we are beginning to get curious about. We are looking into fusing all of this work together and applying it to robotics, so that it becomes possible for robots to understand task context.

It turns out that by using language (‘I do this and then I do this, because I did this’) I am able to map not only the actions but also the relationships between them, so that they form a goal, and this is really powerful.

Once you have this construct, you can share information far more efficiently and transfer knowledge to robots more effectively, rather than programming them step by step to do a specific task. The robot is told the task and goal using these methods and gets on with the job, asking the human co-worker when needed and being autonomous otherwise. This is exactly how a human approaches a task: they work with what they already know, or have learned, and only seek assistance or more information if required. That is the key; that is what we want to achieve.”
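
One very simplified way to picture such a construct (purely illustrative, not the group’s actual representation) is a task described as actions plus the dependencies between them. Given that description, a robot can work out for itself what to do next, and a real system would fall back on asking the human co-worker when it gets stuck.

```python
# Sketch of the general idea: describe a task as actions plus the relations
# between them ("do X because Y was done"), so it can be handed to a robot
# declaratively instead of being programmed step by step.

# Hypothetical task: stack two blocks. Each action lists the actions it depends on.
TASK = {
    "pick_up_base": [],
    "place_base":   ["pick_up_base"],
    "pick_up_top":  ["place_base"],
    "place_top":    ["pick_up_top"],
}
GOAL = "place_top"

def next_actions(task, done):
    """Actions whose dependencies are all satisfied but which are not yet done."""
    return [a for a, deps in task.items()
            if a not in done and all(d in done for d in deps)]

done = set()
while GOAL not in done:
    ready = next_actions(TASK, done)
    if not ready:
        break                       # a real system would ask the human co-worker here
    action = ready[0]
    print("executing:", action)     # stand-in for actually commanding the robot
    done.add(action)
```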

Alejandro Bordallo, from Spain, is a final-year Ph.D. student.

“The aim is to have everything interconnected, so the robots connect to the camera array and to each other. This is required in order for us to do the large experiments that we have planned for our research, which will involve multiple robots, of different types, all interacting and co-operating in the same space in real time. This helps us move along the path towards establishing a space where there are independent, autonomous robots interacting with humans in a co-operative environment.

We consider this space a ‘work-cell’: a place where people come and go whilst the robots are going about their business, but where the robots should be able to interact with the humans if necessary. So if a person needs help to perform a task, say to move a table, then the robot should understand the task and know what its potential role in that task is; maybe it is to observe and provide situational-awareness feedback, maybe it is to offer physical assistance, or maybe it has to take charge entirely and follow high-level instructions. All of these things we humans take for granted, but for a robot it’s quite hard!

In my research I focus on navigation. I look at how people navigate their way through a space and record this on video; we already have some basic cameras set up for this, and we have data sets of how people move through space. It’s amazing how well people who have never met can predict each other’s movements. Robots have a problem: they can’t predict very well. So we are using the data sets collated from human interactions to show a robot how it should be done. We fit models to this data; the models have parameters that we can adjust, so that the robot learns how to behave like a human and navigate through a space.
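
The specific models and parameters the group fits are not given in this article, but the general recipe of fitting an adjustable model to recorded human trajectories can be sketched as follows, with a single hypothetical parameter (a preferred walking speed towards a goal) estimated from noisy position and velocity samples by least squares.

```python
# Toy illustration of "fit a model with adjustable parameters to recorded human
# trajectories". Assume (hypothetically) that a walker's velocity is roughly
# gain * (unit vector towards the goal); estimate the gain from observed samples.
import numpy as np

def fit_preferred_speed(positions, goals, velocities):
    """Least-squares estimate of gain in v ~ gain * direction_to_goal."""
    directions = goals - positions
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    # Minimise sum ||v_i - gain * d_i||^2  ->  gain = sum(v.d) / sum(d.d)
    return float(np.sum(velocities * directions) / np.sum(directions * directions))

# Synthetic "recording": someone walking at ~1.3 m/s towards a goal, with noise
rng = np.random.default_rng(0)
pos = rng.uniform(0, 5, size=(200, 2))
goal = np.full((200, 2), 10.0)
d = (goal - pos) / np.linalg.norm(goal - pos, axis=1, keepdims=True)
vel = 1.3 * d + rng.normal(0, 0.05, size=(200, 2))
print(fit_preferred_speed(pos, goal, vel))  # close to 1.3
```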

What I am doing at the moment is a form of on-line learning. This is essential for a robot to operate autonomously in a space that has changed (say, a table has been moved) or when it finds itself in a new space. I am working in a dynamic environment and everything has to be done in real time. So when the robot ‘wakes up’, everything is new and it has to quickly assimilate the topography of the environment it is now in.
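
As an illustration of what incremental, on-line map building can look like in general (a standard textbook formulation, not necessarily the method used in this work), an occupancy grid can be kept in log-odds form and updated cell by cell as new observations arrive, so the map tracks changes such as a moved table.

```python
# Minimal sketch of on-line map building with a log-odds occupancy grid.
import numpy as np

class OccupancyGrid(object):
    def __init__(self, width, height, l_occ=0.85, l_free=-0.4):
        self.logodds = np.zeros((height, width))   # 0.0 means "unknown" (p = 0.5)
        self.l_occ, self.l_free = l_occ, l_free

    def update(self, cell, occupied):
        """Fold one observation of a single cell into the map."""
        y, x = cell
        self.logodds[y, x] += self.l_occ if occupied else self.l_free

    def probabilities(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))

grid = OccupancyGrid(20, 20)
for _ in range(5):                  # the same cell observed occupied several times
    grid.update((10, 4), occupied=True)
grid.update((10, 5), occupied=False)
print(grid.probabilities()[10, 4])  # climbs towards 1.0 as evidence accumulates
```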

One of the other emerging research topics here has to do with the ‘language’ and ‘grammar’ of motion and tasks. We are becoming interested in looking more closely at these areas within the context of human-robot interaction. This does not necessarily refer to speaking, but rather to how the task-level interaction works.

This is important when we consider ‘shaping the intention’. What we mean by that is this: you are solving a task, you have a goal and you are doing it a certain way, but I may prefer you to do it in a different, idiosyncratic way. So, say we are carrying a table and I have a preference for how it is carried, or for which way we will go around an object; then I will be defining the interaction by moving or distributing the mass of the table in a certain way. This is something we are looking into with respect to navigation and manipulation within the context of ‘shaping the intention’.”

Dr Ramamoorthy reflects, “So, the next frontier for research is to bring learning, structured representations of tasks, and intention and goal recognition into human-robot interaction. From a research community point of view that is seen as a goal for the upcoming decade. This may well be optimistic, but it is a worthwhile goal to aim for; perhaps it will take a bit longer, but that is fine as long as we make progress along the way. Many of the components required already exist. Machine learning has really matured over the past decade or two and provides an array of tools, including the emerging possibility of automatically discovering ‘features’ and ‘symbols’ from data. There are emerging techniques for plan and intention recognition. They are already being used, for instance, in specific application areas like language-based and natural user interface work, and even in computer security applications. The big open question is how you take these algorithms, which exist in computational settings where one is already dealing with abstract symbols, and apply them to the physical world, where until recently we have been struggling to make sense of much lower-level sensorimotor experience.

The most important piece is that there is a planned activity, part of which consists of human tasks and part of autonomous robot tasks. There are two different kinds of question here. One of them is how the human tells the robot, in a natural and iterative way, “this is the task”. You can use a number of modalities here. Of course there is some low-level capability already in the system, like how to grasp something, but at the next level you want to be able to say “get this object now, don’t do that other thing you are busy with”. That requires some combination of language, gesture and similar modalities, and there has to be a certain layer of intelligence about how you represent the plan in this kind of setting and how you understand what is being said.

That’s the kind of problem we are looking into. There are sensing tasks within this, so some of the time you are trying to ask ‘should I go now?’, and that will depend upon what you believe the other person is trying to do. But at other times you are trying to ask a more structural question like ‘are you trying to build a tower, or something else altogether?’, and this is a plan-recognition type of question, grounded in noisy sensory experience. So for each of these topics I have a different Ph.D. student working on that subcomponent.”
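
To make the flavour of such a plan-recognition question concrete, here is a deliberately simple sketch in which a posterior over candidate goals is maintained and updated with Bayes’ rule as noisy observations of the human’s actions arrive. The goals, observations and likelihood numbers are all hypothetical.

```python
# Toy plan recognition: keep a belief over candidate goals and update it as
# noisy observations of the human's actions come in.
GOALS = ["build_tower", "sort_by_colour"]

# P(observation | goal): how likely each observed action is under each goal
LIKELIHOOD = {
    "stack_block":   {"build_tower": 0.7, "sort_by_colour": 0.2},
    "move_to_pile":  {"build_tower": 0.1, "sort_by_colour": 0.6},
    "pick_up_block": {"build_tower": 0.2, "sort_by_colour": 0.2},
}

def update_belief(belief, observation):
    """One Bayes update: posterior proportional to likelihood * prior, renormalised."""
    posterior = {g: LIKELIHOOD[observation][g] * belief[g] for g in belief}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

belief = {g: 1.0 / len(GOALS) for g in GOALS}           # uniform prior
for obs in ["pick_up_block", "stack_block", "stack_block"]:
    belief = update_belief(belief, obs)
print(belief)   # probability mass shifts towards "build_tower"
```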

This ambitious programme will require many people with diverse perspectives. There will be more than 80 Ph.D. students within the RAS CDT over the coming eight years, and the funding received so far will help support this and the long-term research plans of the group. When opening the centre in September 2014, Prof. Sethu Vijayakumar, co-Director of the Centre and Professor of Robotics at the University of Edinburgh, explained that “The Edinburgh Centre for Robotics aims to help the country realise its industrial potential in this revolution by producing a new generation of highly skilled researchers, trained to take a leading role, technically skilled, industry and market aware, and prepared to create and lead the UK’s innovation pipeline for jobs and growth.”

Ramamoorthy concludes, “We have laid out a vision that is quite long-term, and we are fortunate enough to have the funding to pursue our work over that timeframe. However, we are well aware of the magnitude of this enterprise, and we try to approach it one small step at a time.

In my personal view, research is only interesting if you have something interesting on the distant horizon that you are slowly working towards.”