Some context…

One of my ideas is to perform a 3D reconstruction of a structure in order to generate its 3D mesh. This involves using SLAM methods or similar techniques; some of these require RGB-D point clouds as input, while others work from 2D images.

UR10e with FEMTO Mega camera

To capture the point clouds needed for 3D reconstruction, I used the UR10e robotic arm equipped with a FEMTO Mega camera. The robot can move around the object, capturing it from multiple angles.

The video below demonstrates this concept in action: the UR10e systematically scans the target structure, generating overlapping point clouds from different perspectives. These raw point clouds are then aligned and merged to form a full 3D representation of the object.

Implementing reliable methods to achieve this has been very challenging. Even with Gonçalo’s invaluable help last weekend, we couldn’t complete a reconstruction.

Therefore, I decided to split the task into two parts:

  • 3D reconstruction (postponed)
  • 3D mesh surface reconstruction (converting point clouds into 3D meshes)

Creating 3D Meshes from Point Clouds with Python

For our latest experiment, Professor Rui Moreira and I selected an exhaust manifold as the test structure. It presents a challenging geometry with multiple complex surfaces, curves, and cavities, making it an ideal candidate to test the robustness of our 3D reconstruction and mesh generation workflow.

Below is a photograph of the manifold.

Exhaust Manifold

I generated a detailed point cloud of the manifold, which you can explore interactively below. This dense point cloud is the raw data that will be processed to create the 3D mesh.

Info: The cylinder visible at the top of the point cloud is a cropping artifact: it includes part of a hammer that was unintentionally captured while the point cloud was being generated.

3D mesh conversion

I created 3D meshes from raw point cloud data using Python, following a practical tutorial on automatic mesh generation and surface reconstruction by Florent Poux.

This post explains the process to transform point clouds into clean, usable 3D meshes, including generating multiple Levels of Detail (LoD).

My Workflow Overview

I followed these main steps, adapted from the tutorial:

  1. Load and preprocess the point cloud
    • Read raw points
    • Remove noise and outliers
    • Estimate and orient normals
  2. Mesh reconstruction using two methods
    • Ball Pivoting Algorithm (BPA): rolls a virtual ball over points to form triangles
    • Poisson Reconstruction: fits a smooth, watertight surface enveloping the points
  3. Mesh cleanup
    • Remove degenerate or duplicated triangles and vertices
    • Fix non-manifold edges
  4. Generate Levels of Detail (LoD)
    • Simplify the mesh to various triangle counts for performance tuning
  5. Export and visualize results
  • Save meshes in the .ply format (other formats such as .obj and .stl are also supported)

Results

Comparison: BPA vs. Poisson

Generate Levels of Detail using the Poisson method

Conclusions

Main conclusions:

  • The BPA mesh captures sharp edges well but requires cleaner input, so it is not well suited to this kind of geometry.
  • Poisson produces smoother, watertight meshes better suited for our goals.

Warning: Some details are missing from the meshes (e.g. holes on the top and bottom), because the input point cloud is not detailed enough.


Code and Data

The full source code and experimental data are available in my PhD repository on GitHub.

Florent Poux's article, 5-Step Guide to generate 3D meshes from point clouds with Python, can be found here.