
R-RNN Doodle

Doodling with Robots and RNN

Description

Robotic-RNN Doodle explores the possibilities of integrating the Sketch-RNN demo with a robotic arm to build a collaborative workflow between the user, the RNN, and the robot.

Sketch-RNN is trained on a dataset of millions of doodles collected through the Quick, Draw! project. It leverages an LSTM architecture to generate SVG doodles from a given SVG stroke used as the initial seed. Sketch-RNN comes with hundreds of pre-trained models, ranging from trivial doodles of animals and objects to complex combinations of them. For this assignment I used the Flamingo model to generate flamingos.
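Sketch-RNN consumes and emits drawings in the "stroke-3" format from the Sketch-RNN paper: each row is (Δx, Δy, p), where p = 1 marks a pen lift at the end of a stroke. As a minimal sketch of how a user-drawn seed stroke could be packed for the model (the helper name and scaling are illustrative, not from this project):

```python
import numpy as np

def polyline_to_stroke3(points, scale=1.0):
    """Convert an (N, 2) array of absolute x/y points from a single
    pen stroke into Sketch-RNN's stroke-3 format: rows of
    (dx, dy, pen_lift), with pen_lift = 1 only on the last point."""
    points = np.asarray(points, dtype=np.float32) * scale
    deltas = np.diff(points, axis=0)           # offsets between points
    pen = np.zeros((len(deltas), 1), np.float32)
    pen[-1] = 1.0                              # pen lifts after the stroke
    return np.hstack([deltas, pen])

# Example: a short diagonal seed stroke.
seed = polyline_to_stroke3([(0, 0), (10, 5), (20, 12), (28, 20)])
print(seed)
```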

Motivation

The motivation behind this project is to find a robot-human interaction scenario that opens a discussion for further development toward the Robot Art competition. UR robots are categorized as collaborative robots, making it safer for human users to engage with them in a collaborative scenario. The project aims to develop a pipeline connecting user input, a machine learning back-end, and a robotic control/simulation workflow.

[Figure: the Grasshopper definition]

Architecture

The system consists of two main components: a Grasshopper definition that handles image processing, toolpath generation, simulation, and communication with the robot; and the Sketch-RNN code that generates doodles from the user-drawn seed.
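Grasshopper's embedded Python cannot load a TensorFlow model directly, so one plausible wiring between the two components (an assumption; the project may exchange data differently, e.g. via files on disk) is a small socket bridge: the Grasshopper definition sends the seed stroke as JSON and the Sketch-RNN process replies with the generated strokes. The `generate_doodle` function below is a hypothetical stand-in for the actual model call.

```python
import json
import socket

def generate_doodle(seed_stroke):
    """Hypothetical stand-in for the Sketch-RNN flamingo model;
    here it simply echoes the seed back as a single doodle."""
    return [seed_stroke]

# Serve one request from the Grasshopper definition.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9100))   # port chosen arbitrarily
server.listen(1)
conn, _ = server.accept()
seed = json.loads(conn.recv(65536).decode("utf-8"))  # assumes one small message
reply = json.dumps({"doodles": generate_doodle(seed["stroke"])})
conn.sendall(reply.encode("utf-8"))
conn.close()
server.close()
```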

[Figure: R-RNN Doodle architecture diagram]

Workflow

1. The user draws a simple stroke on a piece of paper;

2. The robot moves over the paper and takes a snapshot;

3. The image is processed to find the boundary of the canvas and to correct the perspective;

4. The stroke is extracted as an OpenCV contour and passed to Sketch-RNN (steps 2-4 are sketched in code after this list);

[Figure: the registered drawing]

5. Sketch-RNN generates doodles and passes them back to the Grasshopper definition;

[Figure: drawing registration]

6. Using the HAL plug-in for Grasshopper, the toolpath for the robot is generated.
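A condensed sketch of the snapshot-processing steps with OpenCV (the threshold and canvas size are placeholder values, not calibrated ones from the project):

```python
import cv2
import numpy as np

def rectify_canvas(snapshot, corners, out_size=(800, 600)):
    """Warp the photographed paper to a flat, axis-aligned canvas.
    `corners` are the four paper corners found in the snapshot,
    ordered top-left, top-right, bottom-right, bottom-left."""
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(corners), dst)
    return cv2.warpPerspective(snapshot, M, out_size)

def extract_seed_contour(canvas):
    """Find the largest dark stroke on the rectified canvas and
    return it as an (N, 2) point array for the Sketch-RNN seed."""
    gray = cv2.cvtColor(canvas, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 signature
    stroke = max(contours, key=cv2.contourArea)
    return stroke.reshape(-1, 2)
```

The extracted point array can then be converted to the stroke-3 seed format shown earlier before being handed to Sketch-RNN.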

Demo

Note

The code for this project includes functions to control a GoPro camera in real time. However, for the purpose of this demo, I used images that were already captured and stored on disk.

Matching the coordinates of the drawn object to the robot's drawing coordinate system is still a challenge.
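One way to frame that registration problem: map each generated stroke point from canvas pixels into the robot's paper frame, inserting pen-up travel moves between strokes. The pixel-to-millimeter scale, frame origin, and lift height below are illustrative assumptions, not calibrated values from the project.

```python
import numpy as np

PX_TO_MM = 210.0 / 800.0   # assumed: the 800 px canvas spans A4 width (210 mm)
PEN_LIFT_MM = 5.0          # assumed safe travel height between strokes

def strokes_to_targets(strokes, origin_mm=(0.0, 0.0)):
    """Map pixel-space strokes to (x, y, z) targets in the paper frame:
    z = 0 while drawing, z = PEN_LIFT_MM while travelling."""
    ox, oy = origin_mm
    targets = []
    for stroke in strokes:
        pts = np.asarray(stroke, dtype=np.float64) * PX_TO_MM
        targets.append((ox + pts[0, 0], oy + pts[0, 1], PEN_LIFT_MM))    # approach
        targets += [(ox + x, oy + y, 0.0) for x, y in pts]               # draw
        targets.append((ox + pts[-1, 0], oy + pts[-1, 1], PEN_LIFT_MM))  # retract
    return targets
```

In Grasshopper, targets like these would become HAL target planes; the lift moves keep consecutive doodle strokes from smearing into one another.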

Acknowledgment

This project was developed as part of the Art and Machine Learning course at Carnegie Mellon University, School of Computer Science, under the supervision of Prof. Eunsu Kang and Prof. Barnabas Poczos.
