
Artistic Style in Robotic Painting

A Machine Learning Approach to Learning Brushstroke from Human Artists

The 29th IEEE International Conference on Robot and Human Interactive Communication

In Collaboration with Manuel Ladron De Guevara, Cinnie Hsiung, Jean Oh, Eunsu Kang

Intro

Robotic painting has been a subject of interest among both artists and roboticists since the 1970s. Researchers and interdisciplinary artists have employed various painting techniques and human-robot collaboration models to create visual artworks on canvas. One of the challenges of robotic painting is applying a desired artistic style to the painting. Machine learning style transfer techniques have helped address this challenge for the visual style of a specific painting. However, other manual elements of style, i.e., an artist's painting techniques and brushstrokes, have not been fully addressed.
 

We propose a method to integrate an artistic style into the brushstrokes and the painting process through collaboration with a human artist. In this paper, we describe our approach to 1) collect brushstroke and hand-brush motion samples from an artist, 2) train a generative model to produce brushstrokes that pertain to the artist's style, and 3) integrate the learned model on a robot arm to paint on a canvas. In a preliminary study, 71% of human evaluators found that our robot's paintings reflect the characteristics of the artist's style.

This paper aims to study the affordances of generative machine learning models for developing a style learner at the brushstroke level. We hypothesize that training a generative model on an artist's demonstrations can help us build a model that generates brushstrokes in the style of that artist. This model can then be used to generate a range of individual brushstrokes to paint an intricate target image. Our approach departs from learning visual artistic styles; instead, it focuses on the techniques and characteristics of brushstrokes as an intrinsic element of an artistic style.


Our primary contribution is a method to generate brushstrokes that mimic an artist's style. These brushstrokes can be combined with a stroke-based renderer to form a stylizing method for robotic painting processes. We achieve this goal by developing a learning-based approach that trains a model from a collection of an artist's demonstrations as follows:

  1. Adapting a stroke-based rendering (SBR) model to convert an image into a series of brushstrokes.

  2. Training a stylized brushstroke generator on the demonstrations collected from an artist.

  3. Feeding the outcomes of the SBR to the stylizer model and executing the strokes with a robotic painting apparatus.

For the SBR model, we utilize the Learning To Paint model [4] to render a given image into a sequence of brushstrokes. We modify and retrain the model to match the constraints of our robot platform, an ABB IRB 120 robotic arm. The robotic arm holds a custom-made holster that carries a standard acrylic paintbrush and acrylic paint. For the brushstroke generator, we develop 1) a data collection apparatus to record both brushstrokes and brush motions, 2) a data processing pipeline to prepare the data for the learning process, and 3) a variational autoencoder to learn the style of the brushstrokes and generate new ones.
We evaluate the proposed idea through a set of user studies in which we investigate three questions:

  1. Can participants distinguish a painting made by a robot from a visually similar painting created by a human artist?

  2. Can participants distinguish a brushstroke drawn by an artist from its replay by a robot?

  3. Does the brushstroke generator preserve the characteristics of the artist's brushstrokes?

Portrait of Misun Lean

To create the Portrait of Misun Lean, we utilized the optimized LearningToPaint model to convert an image of the fictional reporter Misun Lean [23] into a series of brushstrokes. The strokes were then directly executed using the robotic painting apparatus without applying any artistic style: we fed the brushstrokes from the Learning to Paint model as the input and programmed the robot to follow the provided strokes strictly. For simplicity, we reduced the resolution to 250 brushstrokes. Each stroke had three variables: 1) a path in the form of a Bezier curve, 2) thickness, limited to four values, and 3) color, limited to a palette of five shades of gray. A Grasshopper definition converted each Bezier curve into a sequence of strictly horizontal target poses. By default, the brush was perpendicular to the Bezier curve to create the thickest brushstrokes; for thinner brushstrokes, the target planes were rotated on the horizontal plane to compensate for the different thicknesses. We used the HAL add-on to convert these targets into RAPID code, which runs on ABB IRB robotic arms, and relied on ABB drivers to handle inverse kinematics and motion control. At this point, we did not implement any closed feedback loop.
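As a rough illustration of this stroke-to-pose conversion, the sketch below samples a stroke's Bezier path into horizontal targets and attaches a brush rotation derived from the thickness level. It is a simplified stand-in for the Grasshopper definition, not the definition itself: the quadratic path, the function names, and the thickness-to-rotation mapping are assumptions, and the real pipeline emits full poses for HAL/RAPID rather than tuples.

```python
# Illustrative sketch only; the actual conversion was done in Grasshopper/HAL.
# Assumes a quadratic Bezier path and a simple thickness-to-rotation mapping.
import numpy as np

# Rotation away from the curve normal, per thickness level (1 = thinnest, 4 = thickest).
# A brush perpendicular to the curve deposits the widest stroke; rotating it
# toward the travel direction narrows the mark.
THICKNESS_TO_OFFSET_DEG = {1: 75.0, 2: 50.0, 3: 25.0, 4: 0.0}

def stroke_to_targets(p0, p1, p2, thickness, samples=20):
    """Sample a quadratic Bezier stroke into (x, y, yaw_deg) horizontal targets."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    offset = THICKNESS_TO_OFFSET_DEG[thickness]
    targets = []
    for t in np.linspace(0.0, 1.0, samples):
        point = (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2
        tangent = 2 * (1 - t) * (p1 - p0) + 2 * t * (p2 - p1)
        heading = np.degrees(np.arctan2(tangent[1], tangent[0]))
        yaw = heading + 90.0 - offset  # perpendicular to the curve, minus the thinning offset
        targets.append((float(point[0]), float(point[1]), yaw))
    return targets

# Example: a short stroke at the second-thinnest level.
targets = stroke_to_targets((0, 0), (15, 10), (30, 5), thickness=2)
```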

[Figures: Portrait of Misun Lean and the robotic painting process]

In the survey, we asked participants: “From the 5 images below, determine which ones are painted by a robotic arm. (You can select more than one. Select ‘None’ if you think all of them are painted by human artists.)” The pool of portraits was composed of digital pictures of 1) Ghosts of Human-Likeness by Nicole Coson, 2) Portrait of Misun Lean by the authors, 3) 2Face by Ryan Hewett, 4) Number 7 by Jackson Pollock, and 5) Untitled 0016 from the Artonomous project by Arman Van Pindar.


Learning from a User

A key idea of our approach is learning from an artist’s demonstration. Because no existing dataset serves this purpose, we designed and collected our own data to train the model. In this section, we describe the data collection and processing steps.

Hardware: To record brushstroke motions, we designed and 3D printed a brush fixture equipped with three reflective markers that form a rigid body for the motion capture system. Moreover, to track the position of the papers during the data collection sessions, they were fixed in a frame carrying another set of three reflective markers. A motion capture system with six cameras tracked these two rigid bodies and reconstructed the brush motions in space with six degrees of freedom.
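The sketch below illustrates one way the two three-marker rigid bodies can be turned into a 6-DOF brush pose expressed in the paper frame. It describes the underlying geometry only and is not our capture software’s pipeline; the marker roles and axis conventions are assumptions made for illustration.

```python
# Minimal sketch: build a pose from three reflective markers and express the
# brush rigid body in the paper rigid body's frame. Marker roles are assumed.
import numpy as np

def frame_from_markers(m0, m1, m2):
    """Return a 4x4 homogeneous transform for a rigid body defined by 3 markers.

    m0 is taken as the origin, m0->m1 as the x axis, and the marker-plane
    normal as the z axis.
    """
    m0, m1, m2 = (np.asarray(m, dtype=float) for m in (m0, m1, m2))
    x = m1 - m0
    x /= np.linalg.norm(x)
    z = np.cross(x, m2 - m0)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, m0
    return T

def brush_in_paper_frame(brush_markers, paper_markers):
    """Express the brush pose relative to the paper rigid body."""
    T_brush = frame_from_markers(*brush_markers)
    T_paper = frame_from_markers(*paper_markers)
    return np.linalg.inv(T_paper) @ T_brush
```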


During the data collection process, a user with a background in painting generated over 730 strokes with different lengths, thicknesses, and forms. The brushstrokes were indexed on two types of grid-like datasheets: each grid contains either 20 square 2″×2″ cells, or 14 cells combining square cells with 2″×4″ rectangular cells, and a single stroke is drawn per cell.


For each brushstroke sample, we also recorded the corresponding sequence of motions to form our raw dataset. Motion capture sessions ran continuously while the user worked on each grid of strokes. The motion capture data comprise Cartesian coordinates and Euler angles of the three markers attached to the brush fixture, recorded at a rate of 120 frames per second. This continuous stream of raw data is exported as CSV files and post-processed in Grasshopper. The post-processing focused on isolating each stroke from the continuous stream of motion capture data and matching each brush motion with the corresponding brushstroke on the paper grid.
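The stroke isolation itself was done in Grasshopper; the snippet below is only a hypothetical Python equivalent of that step, segmenting the continuous 120 fps stream into individual strokes by thresholding the brush-tip height above the paper. The column names, the 5 mm contact threshold, and the minimum stroke duration are assumptions.

```python
# Hypothetical post-processing sketch (the original isolation used a Grasshopper
# definition): split a continuous capture into strokes by detecting paper contact.
import pandas as pd

FPS = 120                              # capture rate, frames per second
CONTACT_MM = 5.0                       # brush tip counts as "on paper" below this height
MIN_STROKE_FRAMES = int(0.1 * FPS)     # ignore contacts shorter than 0.1 s

def segment_strokes(csv_path):
    """Return a list of DataFrames, one per isolated brushstroke."""
    df = pd.read_csv(csv_path)         # assumed columns: x_mm, y_mm, z_mm in the paper frame
    on_paper = df["z_mm"] < CONTACT_MM
    # Label contiguous runs of contact / non-contact frames.
    run_id = (on_paper != on_paper.shift()).cumsum()
    strokes = []
    for _, run in df[on_paper].groupby(run_id[on_paper]):
        if len(run) >= MIN_STROKE_FRAMES:
            strokes.append(run.reset_index(drop=True))
    return strokes
```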

[Figures: manually drawn stroke samples and the robot replaying a sample grid]

We randomly selected one of the collected sample grids and processed the motion capture data in a Grasshopper/HAL definition to produce the corresponding RAPID program, controlling the robot with no closed feedback loop. At this stage, we used the same brush and fixture as the end-of-arm tool on the robot and executed the program. The paint was applied to the brush tip manually, as we had eliminated all other motions (i.e., refreshing paint) from the raw datasets.
We aimed to evaluate whether a set of brushstrokes replayed by a robotic arm can be distinguished from a set of brushstrokes drawn by a human, or alternatively, whether they pertain to the style of the artist. We asked the participants, “Which set of brushstrokes is drawn by a robotic arm? (You can select Both, None, Left, or Right)” and provided them with the replayed brushstrokes as well as the original ones.
Only 40% of participants selected the correct option, while 40% chose the wrong set, 13% answered both, and 7% answered none. This suggests that a well-executed robotic playback can produce brushstrokes that are not distinguishable from the original strokes made by a human artist. In the future, we aim to test the same robotic setup with motions generated by a generative model.


Generative Model for Stylized Brushstrokes

Our efforts to develop a stylized brushstroke generator focus on two sets of data: 1) brushstrokes, which are the traces of paint left on the paper by the brush, and 2) motions, which are the sequences of poses representing the location and orientation of the brush during each brushstroke.
Working with N types of inputs allows us to establish and potentially explore N² learning paths. This research works with the input pair B-M, brushstrokes and motions, yielding four learning paths (B→B, B→M, M→B, and M→M). This paper focuses on developing a model that captures the artist’s style and generates samples based on it. We will pursue two other goals in future work: a conditional stylizer model that converts raw strokes into new ones in the artist’s style, and a generator that produces the motions needed to draw the stylized brushstrokes.
Among the various available generative models, we decided to work with a variational autoencoder (VAE). This decision is two-fold. First, it helps us avoid the training challenges of GANs given the small size of our dataset. Second, while our survey examined the reconstruction aspect of the model, we will investigate its generative capabilities heavily in the next steps of this research, which makes plain autoencoders a less desirable choice than VAEs.
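For concreteness, the sketch below shows the general shape of such a model: a minimal VAE trained with a reconstruction term plus a KL term, assuming 64×64 grayscale brushstroke images, a 16-dimensional latent space, and fully connected layers. These choices are assumptions for illustration; the architecture and hyperparameters of our actual model differ.

```python
# Minimal VAE sketch for brushstroke images (illustrative, not our exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BrushstrokeVAE(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 512), nn.ReLU())
        self.fc_mu = nn.Linear(512, latent_dim)
        self.fc_logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z).view(-1, 1, 64, 64), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Pixel-wise reconstruction term plus KL divergence to the unit Gaussian prior.
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Generating new strokes amounts to decoding random latent vectors, e.g.:
# model = BrushstrokeVAE()
# samples = model.dec(torch.randn(8, 16)).view(-1, 1, 64, 64)
```

Sampling from the latent prior, as in the last lines above, is the generative capability we plan to examine in the next steps of this research.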

[Figure: brushstrokes generated by the model]