A new video for Deliverable 5.4, “A model-based task specification that includes programming by demonstration aspects”, is now online.
Have a look at: http://www.factory-in-a-day.eu/media/videos/
- PAL Robotics, Barcelona
PAL ROBOTICS OPEN DAY – 25 NOV #ERW2016
Discover what’s behind our robots on Friday, November 25th! Let us know if you want to join one of the tours by sending an e-mail to email@example.com:
– Technical tour (12:00 – 14:00h)
– General tour (16:30 – 18:00h)
The team and robots are looking forward to meeting you!
- TU München – Chair for Cognitive Systems
On Thursday, November 24, 2016, the Chair for Cognitive Systems cordially invites everybody to visit the chair for an open lab afternoon. This is a unique opportunity to learn more about the robotics research at the chair. We will introduce some of our research topics, so just come by. We are happy to answer all questions!
This event is part of the European Robotics Week 2016, which offers a week of robotics-related activities for the general public across Europe.
Where: Munich, Karlstr. 45, 2nd floor
Time: 17:00 – 19:00
More details and other events at http://www.eu-robotics.net/robotics_week/events/index.html
In Work Package 5 – Learnable Skills – our partner Universal Robots developed an assembly skill with their existing robot controller and GUI. The intuitive programming of this assembly skill was the challenge. The videos are part of Deliverable 5.4 “A model-based task specification that includes PbD aspects (Programming by demonstration)”.
The two videos are available in our video section.
At the end of October, Factory-in-a-day had its second project meeting of the year, this time in Barcelona, hosted by our partner PAL Robotics.
The focus of this meeting was to set the plan for the final year of the project. Even though we have made good progress towards the project’s goal of reducing the time needed to integrate robotic solutions into an assembly chain, there was still a lot to discuss: on which demonstrations will we focus, and which integration work remains to be done?
There is also a short blog post about the meeting on the PAL Robotics website.
The paper “Understanding the Intention of Human Activities through Semantic Perception: Observation, Understanding and Execution on a Humanoid Robot” by Karinne Ramirez-Amaro, Michael Beetz & Gordon Cheng is now available for download from the publisher:
Programming industrial robots to perform repetitive tasks is time-consuming. Based on our expertise in manipulation planning, we have developed
- software that handles the manipulation constraints described above, and
- a graphical user interface that enables a programmer to define the manipulation constraints for a given problem.
The movies illustrate a robot programming interface for building a sequence of motions to perform an industrial task.
The input is:
– a model of the robot (UR5 robot arm with gripper)
– a model of the environment and the objects, such as trays and shaver parts,
– the positions of frames on the shaver parts, trays and gripper, which specify the relative gripper/object position used to grasp an object (shaver part or tray). Future work should focus on building these frames interactively with external vision systems.
The user specifies through the graphical interface:
– initial position of each object (which part on which slot)
– final position of each object (first movie),
– the possible grasps (which gripper can grasp which object),
– the possible slots for each object.
The software then computes a collision-free motion between the initial and goal configurations. Repeating these actions several times results in the movie below. All videos are on our YouTube channel: https://www.youtube.com/channel/UCr-FPaBG3MJ5t_oyUSmt-ow
To ease motion planning, pre-grasp positions are automatically defined for the gripper above the shaver part to grasp.
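The user inputs listed above amount to a small task-specification data structure. Below is a minimal sketch in Python of how such a specification could be organised; the class, field and slot names are illustrative assumptions, not the actual LAAS software.

```python
from dataclasses import dataclass

@dataclass
class GraspFrame:
    """Relative pose of the gripper w.r.t. an object: (x, y, z, roll, pitch, yaw)."""
    pose: tuple

@dataclass
class TaskSpec:
    """Hypothetical container mirroring the inputs described above."""
    initial_slots: dict   # object name -> slot holding it initially
    goal_slots: dict      # object name -> desired final slot
    grasps: dict          # object name -> list of feasible GraspFrame
    allowed_slots: dict   # object name -> set of slots the object may occupy

    def pick_and_place_sequence(self):
        """Derive a naive pick-and-place order: move each object whose
        goal slot differs from its current slot."""
        moves = []
        for obj, slot in self.initial_slots.items():
            goal = self.goal_slots[obj]
            if slot != goal and goal in self.allowed_slots[obj]:
                moves.append((obj, slot, goal))
        return moves

spec = TaskSpec(
    initial_slots={"shaver_head": "tray_A_slot_1", "shaver_body": "tray_A_slot_2"},
    goal_slots={"shaver_head": "tray_B_slot_1", "shaver_body": "tray_A_slot_2"},
    grasps={"shaver_head": [GraspFrame((0, 0, 0.05, 0, 3.14, 0))],
            "shaver_body": [GraspFrame((0, 0, 0.04, 0, 3.14, 0))]},
    allowed_slots={"shaver_head": {"tray_A_slot_1", "tray_B_slot_1"},
                   "shaver_body": {"tray_A_slot_2"}},
)
print(spec.pick_and_place_sequence())  # only shaver_head needs to move
```

A real planner would then compute one collision-free motion per entry of this sequence, as the software described above does.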
This work was done by our partner LAAS: http://projects.laas.fr/gepetto/index.php/Members/FlorentLamiraux
Team Delft – the winner of the Amazon Picking Challenge 2016 – published a blog post with technical details on their robotic system.
Team Delft won the Amazon Picking Challenge (APC) almost two weeks ago. Now that everybody has had a bit of time to recover, we interviewed Dr. Carlos Hernandez Corbato, TU Delft Robotics Institute, one of the team leaders of Team Delft, on the technical details.
Congratulations on the great performance! What was the most difficult challenge from a technical point of view in the APC?
Dr. Hernandez Corbato: Dealing with objects with reflective surfaces, e.g. transparent plastic wrapping, was probably the hardest challenge for our vision system. Our main solution for grasp planning required estimating the object’s position and orientation, and for that we relied on point-cloud data (depth information) from our 3D camera, which we matched against a 3D model of the object. But due to reflections there was simply not enough point-cloud data to determine the pose of some objects in certain situations.
How did you solve it?
We included application-specific heuristics on top of our pose-estimation algorithm (Super4PCS) to correct it: e.g. we snapped the estimated height of the object so that it lies on the floor of its bin, since objects could not be piled or ‘floating’. This did not solve the problem completely, but our approach was to have a fast and robust system, so that upon difficulties it puts the difficult object on a ‘to do’ list that is addressed once the rest of the task is finished. After a few more trials from different viewpoints to get a better 3D image, the robot managed to pick these difficult objects as well.
Which part that you thought to be easy in the beginning proved to be more difficult?
It happened the other way around. We thought there was no easy part in the challenge, and were especially concerned about deformable objects, e.g. t-shirts and gloves. They can be hard to recognise, and moreover you do not have a static 3D model of such objects for pose estimation. However, our robust approach to object recognition using deep learning performed excellently even on those objects, and the powerful suction in our gripper paved the way for grasping. We discovered that the robot could easily pick deformable products without a complete 3D pose estimate, by simply aiming the suction cup at the centre of the region where the object is detected.
Please give us a short description of the outstanding features of your robot.
Following the Factory-in-a-day approach, we analysed the picking application and chose the best technologies and components to build a robust and fast solution, as industry would require. We aimed for a really fast and robust system, capable of detecting failed picks and trying them again. Instead of attempting slow, complex manoeuvres to reach occluded products, our robot simply removes blocking objects to other bins, keeping track of the location of all products. We used ROS-Industrial to quickly test and integrate the software components we needed, and developed those that were not available (e.g. a generic grasp planner for a suction-based gripper).
- The robot itself: After a preliminary analysis, we concluded that maximum reachability in the workspace was a critical requirement for the competition. We chose a configuration with a SIA20F Motoman robot arm by Yaskawa mounted on a rail. The total of 8 degrees of freedom allowed the system to reach all the bins with enough manoeuvrability to pick the target objects. Thanks to the ROS-Industrial driver supported by Motoman, we could integrate our complex robot configuration into our system, and we have contributed to improving the driver’s features.
- Perception: We avoided noise and calibration problems by using industrial stereo cameras with RGB overlay cameras: one for detecting objects in the tote and one on the robot gripper to scan the bins. Detection of the target object proceeded in two steps. For object recognition we used a deep-learning neural network based on Faster R-CNN. Next, we used an implementation of Super4PCS to estimate the pose of the object, refining it with ICP (Iterative Closest Point).
- Grasping: Following the Factory-in-a-day approach of quick 3D prototyping, we developed a hybrid suction+pinch gripper customised to handle all 39 products in the competition. We designed an algorithm that auto-generates candidate grasp locations on the objects based on their estimated pose from the vision system, their geometry and application constraints. Object-specific heuristics were also included after intensive testing.
- Motion: For motion planning and control we developed customised ROS services on top of MoveIt!. To optimise motions we created a ‘motion database’ with static collision-free trajectories between relevant locations, e.g. image-capture and approach locations in front of the shelf’s bins and over the tote. For the approach and retreat motions to pick objects we used dynamic Cartesian planning, using the depth information from the 3D camera for collision avoidance.
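The ‘motion database’ idea from the last point can be sketched as a cache of pre-planned trajectories, with on-line planning only as a fallback. The location names and the plan_online stand-in below are illustrative assumptions, not the team’s actual MoveIt! services.

```python
class MotionDatabase:
    """Cache of pre-computed collision-free trajectories between named
    locations; anything not cached is planned on-line (the slow path)."""

    def __init__(self, plan_online):
        self._cache = {}            # (start, goal) -> list of waypoints
        self._plan_online = plan_online

    def add(self, start, goal, trajectory):
        self._cache[(start, goal)] = trajectory

    def trajectory(self, start, goal):
        """Prefer a cached trajectory; fall back to on-line planning."""
        if (start, goal) in self._cache:
            return self._cache[(start, goal)]
        return self._plan_online(start, goal)

# Trivial fallback planner for demonstration: go straight from start to goal.
db = MotionDatabase(plan_online=lambda s, g: [s, g])
db.add("tote_view", "bin_A_view", ["tote_view", "mid", "bin_A_view"])
print(db.trajectory("tote_view", "bin_A_view"))   # cached trajectory
print(db.trajectory("bin_A_view", "bin_B_view"))  # falls back to on-line planning
```

Caching the repetitive shelf-to-tote moves keeps the cycle time low and deterministic, while the planner is reserved for the variable approach and retreat motions.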
Factory-in-a-day aims at quick installation times: did you fulfil this, and could you repeat the Picking Challenge just as successfully with another robot?
The development of the APC project took four months, from March to June, but we got the robot only five weeks before the competition. Thanks to ROS we could prepare the required components and test them in advance using Rviz and the ROS-Industrial simulator. Time was so short, however, that with the real robot we could only focus on testing for the Picking Challenge, not for the new Stowing Challenge. I think this is a good demonstration of the Factory-in-a-day approach: we arrived at the competition, installed the system and calibrated it in less than a day. Then we integrated and tested the stowing task during a day and a half. The next day we won the Stowing Challenge with an almost perfect score.
Our solution for perception, grasp planning and task planning and coordination could be easily re-used. Only our robot motion and control subsystem would need to be adapted.
How long did it take to teach the robot to grasp the 40 objects? And is there some kind of “learning algorithm” that would make this faster in the future?
We used deep-learning techniques for the recognition of the products. Given the time constraints and our custom hybrid gripper with suction and pinch, we chose a customised solution that automatically generates the grasp strategy based on geometry information, item-specific heuristics and application constraints. This took 10 person-weeks to develop. We are now considering using deep learning also to teach the system how to grasp objects.
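The automatic grasp-strategy generation described here can be sketched as a rule-based candidate generator: the estimated pose plus object geometry and per-item heuristics yield a list of suction and pinch candidates. All numbers, field names and rules below are illustrative assumptions, not the team’s actual algorithm.

```python
def candidate_grasps(pose, obj):
    """pose: the object's estimated (x, y, z) centre; obj: a dict of
    geometry and item-specific heuristics. Returns (strategy, target) pairs."""
    x, y, z = pose
    grasps = []
    if obj.get("suctionable", True):
        # Default primitive: suction on the top face, above the object centre.
        grasps.append(("suction", (x, y, z + obj["height"] / 2)))
    if obj.get("pinchable", False):
        # Pinch across the narrow dimension, if the gripper can span it.
        if obj["width"] <= obj.get("max_pinch_width", 0.05):
            grasps.append(("pinch", (x, y, z)))
    return grasps

# Hypothetical item entry, as a per-item heuristic table might store it:
item = {"height": 0.08, "width": 0.04, "suctionable": True, "pinchable": True}
print(candidate_grasps((0.1, 0.2, 0.0), item))
```

In a real system each candidate would additionally be checked against application constraints (bin walls, neighbouring items) before execution.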
I imagine the last months were quite stressful, what was the first thing you did after returning to Delft?
I went to have a nice dinner at a new restaurant in Delft’s cosy city centre! Now we are working on consolidating the knowledge acquired during the project and preparing the technology we developed for release to the community.
Here are links to the real-time videos of Team Delft’s performance:
There are also a number of articles in our media section.
Team Delft won the Amazon Picking Challenge 2016 in Leipzig yesterday! Congratulations to the team! The competition was part of the RoboCup 2016, an international robot competition with different categories.
The Amazon Picking Challenge consists of two parts: in the picking challenge, a set of items from the Amazon product range has to be picked from the shelf and placed in a bin; in the stowing challenge it is the other way around. Team Delft, which consists of members from TU Delft and the company Delft Robotics, won both the picking and the stowing finals, and impressed with its speed and precision. The team won €50,000 in prize money.
Team Delft won the stow task finals with 214 points. NimbRo Picking came second (186 points) and the team from MIT finished third (164 points). In the picking finals, the competition was so close that the judges had to resort to the second tie-breaker, using video replay, to determine the winner. Both Team Delft and PFN finished with 105 points, but Delft achieved its first pick in a mere 30 seconds, beating PFN’s time of 1:07.
Team Delft built a flexible robot system based on industry standards. The system is equipped with a Yaskawa robot arm with seven degrees of freedom, high-quality 3D cameras and an in-house developed gripper. To control the robot, the team integrated advanced software components based on state-of-the-art artificial intelligence and robotics techniques. The components were developed with the Robot Operating System for industry (ROS-Industrial) and will be released as open-source software.
Although the development of the system still took well over one day, we are proud of the achievement!
Prof. Martijn Wisse, coordinator of Factory-in-a-day, commented on the success of his team: “As part of the Factory-in-a-Day project, partners TU Delft and Delft Robotics BV collaborated to participate in the Amazon Picking Challenge. And we won! The Picking Challenge represents the high degree of variability that is needed for robots in the quickly changing SME environments. By participating, we proved that we could install the system within a few hours after arrival. We also demonstrated the integration of a number of techniques, such as deep learning, 3D perception, motion planning, and robust task execution, all thanks to the ROS-Industrial framework. Although the development of the system still took well over one day, we are still proud of the achievement: we had possession of the Yaskawa Motoman robot for less than two months before the date of the Challenge, yet we managed to outperform all the other teams in both of the (quite separate) contests. We can call ourselves the best bin-pickers and bin-stowers of the world!”
Here is a video (thanks to RoboValley for making them) of the final of the stowing tasks:
and the picking tasks:
There is also a nice article on the website of RoboValley about the winning team: http://www.robovalley.com/news/team-delft-wins-amazon-picking-challenge2/
This afternoon, Team Delft can finally demonstrate its skills in the Amazon Picking Challenge. Today the teams have to pick up items from the bin and put them into the shelf, referred to as the stow task finals.
We’ll cross our fingers!
Here is a video from their test trials yesterday: