2D-to-3D AR cartoons

This is an implementation of sketch-based modelling for generating 3D models from 2D children's drawings. The generated models are additionally made available in augmented reality for interaction with a robot. Inspired by the awesome works of [1], [2] and [3]. This was a project during my master's studies, implemented in Java on Android with ARCore, libGDX and OpenCV.

Acquisition

For the 2D drawing acquisition, a photo can be taken through the camera activity. Additionally, a preview of the detected segments can be enabled and a region of interest selected.
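For illustration, the selected region of interest can be applied as a simple crop before segmentation. A minimal sketch with OpenCV's Java API; the helper class and the names `capturedImage` and `roiRect` are assumptions, not the project's code:

```java
import org.opencv.core.Mat;
import org.opencv.core.Rect;

// Hypothetical helper: crop the captured frame to the user-selected region
// of interest before it enters the segmentation pipeline.
final class RoiCropper {
    static Mat crop(Mat capturedImage, Rect roiRect) {
        // submat() only creates a view into the frame buffer; clone() copies
        // the pixels so the crop survives when the camera buffer is reused.
        return capturedImage.submat(roiRect).clone();
    }
}
```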

[Figure: Screenshot of the acquisition view. The region of interest was defined around the drawing of the tree; overlaid in teal is the detected drawing's surface.]

Segmentation and model generation

[Figure: Pipeline of the model generation. 1. Input image; 2. adaptive thresholding; 3. noise removal; 4. finished region map; 5. refined Delaunay triangulation; 6. circularly mapped distance map.]

[Figure: Screenshot of the model view, with sliders for parameters and the result of the selected processing step.]

Next, the captured image goes through the processing pipeline to generate a 3D model. First, adaptive thresholding is applied to the saturation channel of the input image. Then, morphological transformations clean up the resulting outline map. Enclosed regions are extracted into the region map for further processing. The region map is triangulated, and the result is refined by adding interior points to the mesh. Finally, a distance map is generated from the region map and used to inflate the mesh along its z-axis.
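A minimal sketch of the image-processing half of this pipeline, using OpenCV's Java bindings, is shown below. The class name, the parameter values (threshold block size and constant, kernel size) and the circular inflation profile are assumptions for illustration, not the project's actual tuning:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

final class DrawingSegmenter {

    /** Returns a distance map of the drawing's enclosed regions. */
    static Mat segment(Mat inputBgr) {
        // 1. Extract the saturation channel: drawn strokes are more
        //    saturated than the surrounding paper.
        Mat hsv = new Mat();
        Imgproc.cvtColor(inputBgr, hsv, Imgproc.COLOR_BGR2HSV);
        List<Mat> channels = new ArrayList<>();
        Core.split(hsv, channels);
        Mat saturation = channels.get(1);

        // 2. Adaptive thresholding yields the outline map.
        Mat outline = new Mat();
        Imgproc.adaptiveThreshold(saturation, outline, 255,
                Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 15, -4);

        // 3. Morphological opening and closing remove speckle noise.
        Mat kernel = Imgproc.getStructuringElement(
                Imgproc.MORPH_ELLIPSE, new Size(3, 3));
        Imgproc.morphologyEx(outline, outline, Imgproc.MORPH_OPEN, kernel);
        Imgproc.morphologyEx(outline, outline, Imgproc.MORPH_CLOSE, kernel);

        // 4. Extract enclosed regions into the region map by filling contours.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(outline.clone(), contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        Mat regionMap = Mat.zeros(outline.size(), CvType.CV_8UC1);
        Imgproc.drawContours(regionMap, contours, -1,
                new Scalar(255), Imgproc.FILLED);

        // 6. Distance map: pixels farther from the outline get larger values,
        //    which later drive the z inflation of the mesh.
        Mat distance = new Mat();
        Imgproc.distanceTransform(regionMap, distance, Imgproc.DIST_L2, 3);
        return distance;
    }

    /** Hypothetical circular inflation profile mapping distance to mesh z. */
    static float inflate(float distance, float maxDistance, float height) {
        // An assumed mapping mimicking the "circularly mapped" distance map:
        // z rises steeply at the outline and flattens towards the interior.
        float t = Math.min(distance / maxDistance, 1f);
        return (float) (height * Math.sqrt(1 - (1 - t) * (1 - t)));
    }
}
```

The triangulation and mesh-construction steps are omitted here; in the full pipeline, the refined Delaunay mesh's vertices would be displaced along z according to the returned distance map.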

For debugging, the standalone desktop application (screenshot) allows for easy testing and tweaking of the 2D-to-3D process.


Robot interaction

For the robot interaction, a fiducial marker is attached to a LEGO EV3 robot running LeJOS. The marker's screen position is tracked, and rays are cast through two points on the marker. The intersections of these rays with the detected AR plane translate the position of the robot into AR world space. Finally, the position can be used to calculate distances, which are sent to the robot as "virtual" distance sensor readings.
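Below is a minimal sketch of how such a plane hit test could look with the ARCore API, casting rays through two tracked screen points on the marker (its center and a point towards its front). The class, method names and packed return format are illustrative assumptions, and the heading computation assumes a roughly horizontal plane:

```java
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;
import com.google.ar.core.Pose;
import com.google.ar.core.Trackable;
import java.util.List;

final class RobotPoseEstimator {

    /** World-space position and heading of the robot, or null if no plane hit. */
    static float[] estimate(Frame frame,
                            float centerXPx, float centerYPx,
                            float frontXPx, float frontYPx) {
        float[] center = hitPlane(frame, centerXPx, centerYPx);
        float[] front = hitPlane(frame, frontXPx, frontYPx);
        if (center == null || front == null) return null;

        // Forward vector on the plane: from the marker center towards its
        // front point (x/z only, assuming an approximately horizontal plane).
        float fx = front[0] - center[0];
        float fz = front[2] - center[2];
        float len = (float) Math.sqrt(fx * fx + fz * fz);
        // Packed as position (x, y, z) plus normalized heading (fx, fz).
        return new float[] { center[0], center[1], center[2], fx / len, fz / len };
    }

    /** Casts a ray through a screen pixel and returns the first plane hit. */
    private static float[] hitPlane(Frame frame, float xPx, float yPx) {
        List<HitResult> hits = frame.hitTest(xPx, yPx);
        for (HitResult hit : hits) {
            Trackable trackable = hit.getTrackable();
            if (trackable instanceof Plane
                    && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
                Pose pose = hit.getHitPose();
                return new float[] { pose.tx(), pose.ty(), pose.tz() };
            }
        }
        return null;
    }
}
```

From the recovered pose, distances to the generated objects can be computed in world space and forwarded to the EV3 as the virtual sensor readings described above.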

[Figure: Marker with ray origins, shown as red crosses on the fiducial marker.]

[Figure: Screenshot of the augmented reality view. The tracked fiducial marker shows the up and forward vectors of the robot next to a generated object.]

References