An ML approach to derive CAD models from Lidar point clouds

Lidar point clouds provide excellent, highly accurate point samples in space, even in very large spaces such as factory halls or ships. Although they are much more accurate than point clouds derived by other means, such as regular cameras or structured-light systems, the problem remains that it is genuinely difficult to reconstruct the surfaces from which these points were sampled. The typical approach is to first construct meshes from the point clouds, which can yield a rather uneven structure where a flat wall or floor is supposed to be. Deciding whether such unevenness is real geometry or an artifact of measurement inaccuracy is not hopeless: a branch of computer vision called photogrammetry can estimate surface curvature from color gradients in accompanying images. However, this still leaves the problem of cleanly terminating such surfaces in edges, which may be sharp, rounded, or ragged. Detection of sharp edges can be implemented with a well-known computer vision algorithm, the Hough transform. By putting all these pieces together, a human can receive substantial help in creating a CAD model from some images and a point cloud.
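As a minimal sketch of the "is this unevenness real?" question, the following example fits a plane to a noisy patch of points with RANSAC and reports the inlier ratio; a high ratio suggests the patch is a flat surface sampled with noise rather than real relief. The synthetic data and all parameter values are illustrative assumptions, not part of the project description.

```python
# Sketch (assumptions: synthetic data, NumPy only): RANSAC plane fitting
# to judge whether unevenness in a patch of a point cloud is real
# geometry or just sampling noise around a flat surface such as a wall.
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.01, seed=0):
    """Return (normal, d, inlier_mask) for the plane n.x + d = 0
    supported by the most points within `threshold` distance."""
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                  # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal.dot(p0)
        dist = np.abs(points @ normal + d)
        mask = dist < threshold
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

if __name__ == "__main__":
    # A noisy "wall" patch (z ~ 0) plus a few outlier points.
    rng = np.random.default_rng(1)
    wall = np.column_stack([rng.uniform(0, 1, (500, 2)),
                            rng.normal(0, 0.003, 500)])
    cloud = np.vstack([wall, rng.uniform(0, 1, (20, 3))])
    normal, d, inliers = fit_plane_ransac(cloud)
    print(f"inliers: {inliers.sum()}/{len(cloud)}")
```

In a real pipeline such a fit would be run per surface patch, and the residual distribution of the inliers compared against the scanner's known noise level.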

What we want to find out is whether a machine learning model can be trained to create an appropriate CAD model from the Lidar point cloud alone, or from a Lidar point cloud plus some images, when it has been trained on CAD reconstructions created by humans using all the methods mentioned above.


The first step towards this is to create a system for producing ground-truth data that can be used to train the machine learning algorithm. This system must support humans in creating CAD models from point clouds and images.
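One possible shape for such ground-truth records is sketched below: each sample pairs a point-cloud scan (and optional images) with the CAD primitives a human reconstructed from it. All names, fields, and file paths here are illustrative assumptions, not a fixed design.

```python
# Sketch (illustrative schema, not a fixed design): a minimal record
# format for ground-truth samples pairing point-cloud data with the
# human-created CAD model, serializable to JSON for use in training.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class CadPrimitive:
    kind: str        # e.g. "plane", "cylinder", "edge"
    params: dict     # primitive-specific parameters

@dataclass
class GroundTruthSample:
    cloud_file: str                                   # path to the Lidar scan
    image_files: list = field(default_factory=list)   # optional photos
    primitives: list = field(default_factory=list)    # human-made CAD model

    def to_json(self):
        return json.dumps(asdict(self))

sample = GroundTruthSample(
    cloud_file="scans/hall_01.e57",
    image_files=["photos/hall_01_a.jpg"],
    primitives=[CadPrimitive("plane", {"normal": [0, 0, 1], "d": -2.5})],
)
print(sample.to_json())
```

Keeping the labels as explicit CAD primitives, rather than meshes, is what lets the same records later serve as supervised targets for the learning step.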

Learning outcomes

- Working on a real-world application
- Collaboration with researchers
- Possibility to implement and research a novel approach


Prerequisites

  • Programming
  • Motivation
  • Mathematics


Collaboration partners