Reaching Motion Planning with Vision-Based Deep Neural Networks for Dual Arm Robots

Intelligent Robotics Lab, Utsunomiya University
16 Jun 2022, 15:00

Summary

TL;DR: This presentation explores vision-based deep neural networks (DNNs) for motion planning in robotic systems, focusing on reaching and grasping tasks. The approach generates target object images via instance segmentation and uses a CNN-based classifier to decide which hand should reach for each object. The robot learns motions through imitation learning, then predicts suitable motions during execution. Experimental results show that the robot generalizes its motion planning, successfully reaching and grasping objects in both known and unknown positions, highlighting the potential of deep learning to enhance robotic autonomy in dynamic environments.

Takeaways

  • 😀 Deep neural networks (DNNs) are used to solve motion planning problems for dual-arm robots.
  • 😀 Vision-based motion planning relies on RGB camera input for detecting objects and determining hand positions.
  • 😀 Convolutional neural networks (CNNs) are employed to predict suitable motions for the robot's hands based on image inputs.
  • 😀 Imitation learning is utilized to train the robot by having a supervisor instruct the robot on how to reach target objects.
  • 😀 A significant challenge in multi-object scenes is generating a 'target object image' without depth information.
  • 😀 The proposed method uses instance segmentation and CNN classifiers to segment objects and determine the target object and reaching hand.
  • 😀 The robot's motion planning involves calculating joint angles using inverse kinematics and avoiding self-collision during movement.
  • 😀 The robot grasps the target object when the hand status is detected as 'closed,' signaling successful completion of the task.
  • 😀 The robot was tested using a dual-arm setup with a Kinect sensor, performing reaching tasks for cylindrical objects placed in various positions.
  • 😀 The system demonstrated good generalization, allowing the robot to handle new object positions (both known and unknown).
  • 😀 Results showed that the robot successfully reached for and grasped the target objects, even when object positions were outside the instruction area.
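
One takeaway above mentions computing joint angles with inverse kinematics. The talk does not give the robot's kinematic model, so as a hedged illustration only, here is closed-form IK for a planar two-link arm — a toy stand-in for one arm of the dual-arm robot:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar 2-link arm.

    Returns (theta1, theta2) in radians for the elbow-down solution,
    or None if the target (x, y) is outside the workspace.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target out of reach
    theta2 = math.acos(c2)
    # Shoulder angle: direction to target minus the offset induced by link 2.
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A real dual-arm solver would also check joint limits and self-collision along the path, as the takeaway notes; this sketch covers only the joint-angle computation.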

Q & A

  • What is the primary goal of the research presented in the script?

    -The primary goal of the research is to develop a motion planning system for robots that enables them to reach and grasp target objects using vision-based deep neural networks, while avoiding collisions and adjusting motions dynamically based on object recognition.

  • How does the robot identify target objects for motion planning?

    -The robot identifies target objects through an RGB camera mounted on the robot. The images captured by the camera are processed using a convolutional neural network (CNN) for object segmentation and classification, determining the target object and the required reaching order for each hand.
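
The exact form of the "target object image" is not specified in the talk; one plausible, minimal reading is an RGB frame with everything outside the target's instance mask zeroed out. A sketch under that assumption:

```python
import numpy as np

def target_object_image(rgb, mask):
    """Keep only the pixels inside the target's instance mask.

    rgb  : (H, W, 3) uint8 camera frame
    mask : (H, W) boolean instance mask for the target object
    Returns an image of the same shape showing only the target.
    """
    out = np.zeros_like(rgb)
    out[mask] = rgb[mask]  # boolean indexing copies masked pixels only
    return out
```

In the full pipeline, `mask` would come from an instance-segmentation network; here it is just an input.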

  • What role does imitation learning play in the research?

    -Imitation learning is used to train the robot during the motion instruction phase. The robot is instructed by a supervisor to reach towards objects, and it registers joint angles, hand positions, and camera images, which are then used as training data to optimize the neural network for future tasks.
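
The data logged during instruction (joint angles, hand positions, camera images) could be organized as below. The class and field names are hypothetical; the talk only lists what is recorded, not how it is stored:

```python
from dataclasses import dataclass, field

@dataclass
class DemoStep:
    joint_angles: list    # one angle per joint
    hand_positions: list  # Cartesian hand positions
    image_path: str       # camera frame saved to disk

@dataclass
class Demonstration:
    steps: list = field(default_factory=list)

    def record(self, joint_angles, hand_positions, image_path):
        self.steps.append(DemoStep(joint_angles, hand_positions, image_path))

    def training_pairs(self):
        """Pair each state with the next-step joint-angle delta,
        the supervised target for the motion-prediction network."""
        pairs = []
        for cur, nxt in zip(self.steps, self.steps[1:]):
            delta = [n - c for c, n in zip(cur.joint_angles, nxt.joint_angles)]
            pairs.append((cur, delta))
        return pairs
```

Training then regresses from the image (and current state) to the motion delta, which is what lets the network reproduce the supervisor's reaching motions at execution time.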

  • What is the purpose of the grasping classifier in the motion planning process?

    -The grasping classifier determines whether the robot's hands should remain open or be closed during the motion. It processes the target object images and classifies the hand status, triggering the robot either to grasp the object or to continue moving towards it.
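
The decision logic downstream of the classifier is simple enough to sketch. Assuming the classifier emits a per-frame label of `'open'` or `'closed'` (label names are my assumption, not from the talk):

```python
def hand_command(classifier_label, currently_open=True):
    """Map the grasping classifier's per-frame output to a hand action.

    Returns 'grasp' the moment the classifier says the hand should
    close while it is still open; otherwise keep reaching.
    """
    if classifier_label == 'closed' and currently_open:
        return 'grasp'
    return 'keep_moving'
```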

  • How does the robot handle situations with multiple objects in its environment?

    -The robot handles multiple objects by generating a 'target object image' through instance segmentation. This segmentation classifies the objects and determines which one to target by assigning a reaching order to the hands (right or left). The object with the lowest index is selected as the target for each hand.
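
The "lowest index per hand" selection rule can be sketched directly. The detection dictionary layout here is illustrative; the talk only states that each segmented object gets a hand assignment and a reaching order:

```python
def assign_targets(detections):
    """Pick the current target for each hand.

    detections : list of dicts like
        {'hand': 'right' or 'left', 'order': int, ...}
    where 'order' is the reaching order assigned by the classifier.
    For each hand, the detection with the lowest order index wins.
    """
    targets = {}
    for det in detections:
        hand = det['hand']
        if hand not in targets or det['order'] < targets[hand]['order']:
            targets[hand] = det
    return targets
```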

  • What challenges arise when the objects are close to each other, and how are they overcome?

    -When objects are close to each other, the depth information becomes similar, making it difficult to distinguish between them. This challenge is overcome by using instance segmentation, which allows the robot to separately identify each object, even in cluttered environments.

  • How does the CNN architecture contribute to the robot's motion planning?

    -The CNN architecture has two branches, one for the right hand and one for the left, allowing the robot to control both hands simultaneously. Each branch predicts the amount of movement for its hand, so the robot can plan and execute the reaching motion for the two hands independently.
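
The two-branch idea — a shared trunk feeding one output head per hand — can be shown with a minimal NumPy forward pass. All dimensions and weight shapes below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_branch_forward(features, w_shared, w_right, w_left):
    """Shared trunk followed by one head per hand.

    features : (D,) image feature vector (e.g. from a CNN encoder)
    Each head predicts a small motion delta for its hand's joints.
    """
    h = np.maximum(w_shared @ features, 0.0)  # shared layer + ReLU
    d_right = w_right @ h                     # right-hand motion delta
    d_left = w_left @ h                       # left-hand motion delta
    return d_right, d_left

# Illustrative sizes only: 128-d features, 64-d hidden, 7 joints per hand.
w_shared = rng.normal(size=(64, 128)) * 0.1
w_right = rng.normal(size=(7, 64)) * 0.1
w_left = rng.normal(size=(7, 64)) * 0.1
```

Sharing the trunk lets both heads reuse the same visual features while keeping the per-hand motion predictions independent, matching the answer above.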

  • What does the experimental setup involve, and how is the performance evaluated?

    -The experimental setup involves a dual-arm robot equipped with a Microsoft Kinect, placed in front of a workbench with cylindrical objects. The robot performs reaching tasks in both known and unknown object positions. The performance is evaluated by assessing whether the robot can correctly reach and grasp the target objects, including those placed outside the known positions.

  • What results were observed during the robot's trials with known and unknown object positions?

    -During the trials, the robot successfully reached and grasped the target objects in both known and unknown positions. The robot demonstrated its ability to generalize and adapt to new object placements, showing robust performance even when objects were outside the instruction area.

  • How does the research demonstrate the potential of deep neural networks in robotics?

    -The research demonstrates that deep neural networks, specifically CNNs, can be effectively applied to motion planning, object recognition, and collision-free path generation. This enables robots to perform complex tasks like reaching, grasping, and packing, all while adapting to new environments and object configurations.


Related Tags
Robot Motion Planning, Deep Learning, Dual-Arm Robots, Vision-Based Control, Imitation Learning, RGB Image Processing, Instance Segmentation, Object Recognition, Grasping Tasks, Machine Vision