![](https://static.wixstatic.com/media/2e85f4_f0135ef0a3ed455386894dd34dff37a4f000.jpg/v1/fill/w_980,h_551,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/2e85f4_f0135ef0a3ed455386894dd34dff37a4f000.jpg)
R-RNN Doodle
Doodling with Robots and RNN
Robotic-RNN Doodle is an exploration of the possibilities of integrating the Sketch-RNN demo with a robotic arm to build a collaborative workflow between the user, the RNN, and the robot.
Sketch-RNN is trained on a dataset of millions of doodles collected through the Quick, Draw! project. It leverages an LSTM architecture to generate SVG doodles from a given SVG stroke used as the initial seed. Sketch-RNN comes with hundreds of trained models, ranging from simple doodles of animals and objects to complex combinations of them. For this assignment I used the Flamingo model to generate flamingos.
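Under the hood, Sketch-RNN represents drawings in the stroke-3 format used by the Quick, Draw! dataset: each step is a (Δx, Δy, pen_lifted) triple. Below is a minimal sketch of encoding a seed stroke into that format, assuming the seed arrives as a list of (x, y) points; the helper name `to_stroke3` is mine, not part of the project code.

```python
import numpy as np

def to_stroke3(points, scale=1.0):
    """Encode an (x, y) point list as stroke-3: (dx, dy, pen_lifted).

    `points` is a single continuous stroke, so pen_lifted is 0 for
    every step except the last, which marks the end of the stroke.
    """
    pts = np.asarray(points, dtype=np.float32) * scale
    deltas = np.diff(pts, axis=0)    # offsets between consecutive points
    pen = np.zeros((len(deltas), 1), dtype=np.float32)
    pen[-1, 0] = 1.0                 # lift the pen after the last segment
    return np.hstack([deltas, pen])

# Example: a short diagonal seed stroke.
seed = to_stroke3([(0, 0), (5, 3), (10, 8), (14, 15)])
```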
Motivation:
The motivation behind this project is to find a robot-human interaction scenario that opens a discussion for further development toward the Robot Art competition. UR robots are categorized as collaborative robots, which makes them safer for human users to engage with in a collaborative scenario. This project aims to develop a pipeline that connects user input, a machine learning back end, and a robot control/simulation workflow.
![grasshopperFileSmall.png](https://static.wixstatic.com/media/2e85f4_26103a0a60814d55b87f6d6b8c1a4d24~mv2.png/v1/fill/w_967,h_116,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/grasshopperFileSmall.png)
Architecture:
The model consists of two main components: a Grasshopper part that handles image processing, toolpath generation, simulation, and communication with the robot, and the Sketch-RNN code that generates the doodle based on the user-drawn seed.
![R-RNN doodle diagram.png](https://static.wixstatic.com/media/2e85f4_3f03eab454ed4ba78473d1c846a18267~mv2_d_3372_1273_s_2.png/v1/fill/w_956,h_361,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/R-RNN%20doodle%20diagram.png)
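The post does not specify how the Grasshopper definition and the Sketch-RNN code exchange data. One plausible wiring, sketched below, is a small TCP service wrapped around Sketch-RNN that accepts the seed stroke as JSON and returns the generated strokes to a GHPython component; the port, the message shape, and the `generate_doodle` stub are all illustrative assumptions, not the project's actual interface.

```python
import json
import socketserver

def generate_doodle(seed_points):
    # Placeholder: in the real pipeline this would run the Sketch-RNN
    # model conditioned on the seed stroke and return its strokes.
    return [seed_points]  # echo the seed back as a single stroke

class DoodleHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One JSON message per line: {"seed": [[x, y], ...]}
        request = json.loads(self.rfile.readline())
        strokes = generate_doodle(request["seed"])
        self.wfile.write(json.dumps({"strokes": strokes}).encode() + b"\n")

if __name__ == "__main__":
    # Grasshopper (e.g., a GHPython component) connects to this port.
    with socketserver.TCPServer(("127.0.0.1", 9000), DoodleHandler) as srv:
        srv.serve_forever()
```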
Workflow:
1. The user draws a simple stroke on a piece of paper;
2. The robot moves over the paper and takes a snapshot;
3. The image is processed to find the boundary of the canvas and correct the perspective;
4. The stroke is extracted as an OpenCV contour and passed to Sketch-RNN (steps 3 and 4 are sketched in the code after this list);
![registeredDrawing.png](https://static.wixstatic.com/media/2e85f4_ad8cddeeb7f74fa6b8a63c1aa05e0906~mv2_d_3616_1510_s_2.png/v1/fill/w_546,h_228,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/registeredDrawing.png)
5. Sketch-RNN generates doodles and passes them back to the Grasshopper definition;
![drawingRegistration.png](https://static.wixstatic.com/media/2e85f4_838d84de61fd4dfd9600077f04c90367~mv2.png/v1/fill/w_556,h_210,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/drawingRegistration.png)
6. Using the HAL plug-in for Grasshopper, the toolpath for the robot is generated.
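Steps 3 and 4 map onto standard OpenCV calls. A minimal sketch, assuming OpenCV 4, a canvas that appears as the largest bright quadrilateral in the frame, and a stroke drawn in dark ink (the thresholds and output size are illustrative):

```python
import cv2
import numpy as np

def rectify_canvas(image, out_w=800, out_h=600):
    """Find the canvas quadrilateral and warp it to a flat rectangle."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    canvas = max(contours, key=cv2.contourArea)
    # Approximate the canvas outline with four corners. A robust version
    # would verify there are exactly four and sort them consistently.
    quad = cv2.approxPolyDP(canvas, 0.02 * cv2.arcLength(canvas, True), True)
    src = quad.reshape(-1, 2).astype(np.float32)[:4]
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (out_w, out_h))

def extract_stroke(rectified):
    """Return the largest dark contour: the user's seed stroke."""
    gray = cv2.cvtColor(rectified, cv2.COLOR_BGR2GRAY)
    _, ink = cv2.threshold(gray, 0, 255,
                           cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(ink, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea).reshape(-1, 2)
```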
Note:
The code for this project includes functions to control a GoPro camera in real time. However, for the purpose of this demo, I used images that had already been captured and stored on disk. Matching the drawn object to the robot's drawing coordinate system is still a challenge; the sketch below shows one possible calibration.
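One simple way to approach the coordinate-matching problem is to calibrate a homography from image pixels to the robot's drawing plane, using the four canvas corners found in step 3 together with their measured positions on the table. A sketch under that assumption (all coordinate values below are placeholders, obtained in practice by jogging the robot to each corner of the paper and recording its XY position):

```python
import cv2
import numpy as np

# Four canvas corners in the rectified image (pixels) ...
pixel_corners = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])
# ... and the same corners measured in the robot's base frame (mm).
robot_corners = np.float32([[200, -150], [500, -150], [500, 75], [200, 75]])

H = cv2.getPerspectiveTransform(pixel_corners, robot_corners)

def pixels_to_robot(points):
    """Map Nx2 pixel coordinates into robot-frame XY (mm)."""
    pts = np.asarray(points, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```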
Acknowledgment:
This project was developed as part of the Art and Machine Learning course at Carnegie Mellon University, School of Computer Science, under the supervision of Prof. Eunsu Kang and Prof. Barnabas Poczos.