Point Cloud Registration: A Case for Exhaustive Grid Search
Introduction to Point Cloud Registration
Point cloud registration is a fundamental task in computer vision and robotics, aiming to align two or more 3D point clouds into a common coordinate system. It serves as a cornerstone for numerous applications, including 3D reconstruction, autonomous navigation, object recognition, and augmented or virtual reality.
The goal of registration is to estimate the rigid transformation — composed of a rotation and a translation — that best aligns a source point cloud to a target point cloud. This process is often complicated by challenges such as partial overlap, sensor noise, and occlusions.
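Concretely, a rigid transformation maps each source point p to p' = Rp + t, where R is a 3×3 rotation matrix and t a translation vector. A minimal NumPy sketch with toy data (not taken from any paper or dataset):

```python
import numpy as np

def apply_rigid_transform(points, R, t):
    """Apply the rigid transform p' = R p + t to an (N, 3) point cloud."""
    return points @ R.T + t

# Toy example: rotate a small cloud 90 degrees about the z-axis,
# then shift it 1 unit along x.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

source = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
target = apply_rigid_transform(source, R, t)
```

Registration is the inverse problem: given `source` and `target`, recover R and t.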
What is Exhaustive Grid Search?
Exhaustive Grid Search (EGS) is a featureless, non-learning method for 3D point cloud registration introduced by Bojanić et al. (2024) [1]. Unlike deep learning–based approaches that rely on neural networks to extract and match features, EGS directly explores the entire transformation space to find the best alignment.
Because EGS does not depend on feature extraction, correspondences, or training data, it generalizes well across different datasets. Despite its simplicity, EGS has been shown to outperform many deep learning–based methods on standard benchmarks such as ETH and FAUST-partial.
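As a toy illustration of the exhaustive idea (not the actual EGS implementation, which searches the full rotation space and uses far more efficient scoring), the sketch below discretizes a single yaw angle on a regular grid and scores each candidate by counting how many rotated source points land within a threshold `tau` of some target point. The function names and threshold are illustrative assumptions:

```python
import numpy as np

def overlap_score(source, target, tau=0.01):
    """Count source points lying within tau of at least one target point."""
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=-1)
    return int((d.min(axis=1) < tau).sum())

def grid_search_yaw(source, target, n_angles=360):
    """Toy 1-DoF exhaustive search: try every yaw angle on a regular grid
    and keep the one whose rotated source best overlaps the target."""
    best_angle, best_score = 0.0, -1
    for ang in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        c, s = np.cos(ang), np.sin(ang)
        Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        sc = overlap_score(source @ Rz.T, target)
        if sc > best_score:
            best_angle, best_score = ang, sc
    return best_angle, best_score

# Demo: the target is the source rotated by exactly 30 degrees about z,
# so the search should recover an angle close to 30 degrees.
rng = np.random.default_rng(0)
source = rng.normal(size=(100, 3))
c, s = np.cos(np.deg2rad(30.0)), np.sin(np.deg2rad(30.0))
target = source @ np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]).T
best_angle, best_score = grid_search_yaw(source, target)
```

The real method extends this brute-force principle to all rotation axes with a carefully chosen discretization, which is why no features or training are needed.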
Benchmarks for several datasets
This plot shows the Registration Recall (RR) performance of methods across all four benchmarks — 3DMatch, KITTI, ETH, and FAUST-partial — as a function of publication year.
EGS can be found in the cluster at the top right; it was published in 2024.
Note: Some methods appear in every benchmark, while others are included in only a few.
Conclusion
The comparative performance of EGS across several standard 3D registration benchmarks is summarized in the table below. It highlights how EGS performs relative to leading deep learning methods in both in-domain and cross-domain scenarios.
| Benchmark | Type / Domain | RR Metric (%) | Best Deep Learning RR (%) | Observations |
|---|---|---|---|---|
| 3DMatch | Indoor RGB-D (training domain for most DL methods) | 84.11 | 95.0 (GeoTransformer, 2023) | Deep methods dominate here — they were trained and tuned on this dataset. |
| KITTI | Outdoor lidar (cross-domain) | 94.95 | 97.7 (SC2-PCR, PointDSC) | EGS nearly matches or surpasses most DL models — strong generalization. |
| ETH | Outdoor vegetation (different geometry) | ≈90–95 | Comparable or slightly worse | EGS performs on par or better, showing robustness to noisy, irregular data. |
| FAUST-partial (FP) | Human body scans (unseen data) | ≈80–90 | Deep models drop below 70 | EGS clearly outperforms deep methods on unseen shapes. |
While deep learning methods excel on benchmarks that match their training distributions (e.g., 3DMatch), their performance often drops significantly when evaluated on unseen or cross-domain data such as KITTI, ETH, or FAUST-partial.
In contrast, EGS — despite being a featureless, non-learning approach — achieves competitive or superior RR across all benchmarks.
Determining the Reflection Angles in 3D Laser Vibrometry
The experiment aims to determine the laser reflection angle when a vibrometer measures the vibration of a point on a plate. The process consists of performing a point cloud registration to extract the angular orientation of each measurement node.
Setup
- A 3D laser vibrometer is positioned at an orthogonal location to measure surface vibrations at a single point.
- The point is measured at different positions or viewing angles (nodes 1–12), resulting in a separate point cloud for each position.
- These point clouds are aligned relative to a reference position (node 1) using EGS, which finds the optimal rigid transformation (rotation + translation) between scans.
Goal
- Determine how each measurement node (i.e., each reflected beam) is rotated relative to the reference — these rotations correspond directly to the reflection angles.
- Extract the rotation matrices from the EGS alignment results and compile them into the table presented in this report.
- Analyze these transformations to map the reflection directions along the x, y, and z axes.
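The angle-extraction step can be sketched as follows. This is a minimal NumPy example assuming the convention R = Rz · Ry · Rx for composing the per-axis angles; the convention actually used by the EGS implementation may differ, so the helpers are illustrative:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rotation_to_xyz_angles(R):
    """Recover (x, y, z) angles in degrees from R = Rz @ Ry @ Rx,
    assuming no gimbal lock (|y-angle| < 90 degrees)."""
    ry = np.arcsin(-R[2, 0])
    rx = np.arctan2(R[2, 1], R[2, 2])
    rz = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([rx, ry, rz])

# Round-trip check using the node 1 -> 2 angles from the results table.
ax, ay, az = np.radians([3.22, -0.01, 0.71])
R = rot_z(az) @ rot_y(ay) @ rot_x(ax)
angles = rotation_to_xyz_angles(R)
```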
Note: Only measurements 1 to 7 are shown, since they were measured in the same plane. The other measurements were done on different horizontal planes.
Point Cloud Registration: Example
This is an example of two point clouds being aligned using the EGS method. The red and blue point clouds represent scans of the same space captured from different positions.
Results
The following table lists the rotation transformation estimated between pairs of point clouds; the reflection angles can be inferred directly from these rotations.
| Source Node | Target Node | Rotation (x, y, z) | Direction |
|---|---|---|---|
| 1 | 2 | (3.22°, -0.01°, 0.71°) | Up |
| 1 | 3 | (7.46°, -0.04°, 1.57°) | Up |
| 1 | 13 | (21.84°, -0.64°, 0.28°) | Up |
| 1 | 4 | (-0.03°, -13.47°, 0.13°) | Left |
| 1 | 6 | (-0.22°, -42.50°, 1.05°) | Left |
| 4 | 5 | (0.02°, -18.42°, 0.50°) | Left |
| 1 | 7 | (-0.31°, 20.57°, -0.52°) | Right |
| 1 | 8 | (-0.68°, 36.83°, -1.44°) | Right |
| 8 | 9 | (0.08°, 11.03°, -0.20°) | Right |
| 5 | 6 | (0.11°, -10.73°, 0.39°) | Right |
| 9 | 10 | (8.08°, -0.12°, 2.05°) | Right & Up |
| 10 | 12 | (13.23°, -3.47°, 3.44°) | Right & Up |
| 9 | 12 | (21.21°, -4.03°, 5.33°) | Right & Up |
| 13 | 12 | (5.26°, 41.08°, 24.60°) | Right & Up |
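As a rough consistency check on the table, if the estimated rotations chain along the path 1 → 4 → 5 → 6, the composed y-rotation should be close to the directly estimated 1 → 6 value of -42.50°. The NumPy sketch below assumes the convention R = Rz · Ry · Rx and that the transforms compose by matrix product; both are assumptions about the EGS output, so this is illustrative only:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def R_from_xyz(ax_deg, ay_deg, az_deg):
    """Build R = Rz @ Ry @ Rx from (x, y, z) angles in degrees."""
    ax, ay, az = np.radians([ax_deg, ay_deg, az_deg])
    return rot_z(az) @ rot_y(ay) @ rot_x(ax)

def y_angle(R):
    """y-rotation in degrees, valid while |y-angle| < 90 degrees."""
    return np.degrees(np.arcsin(-R[2, 0]))

# Chain the table's estimates along 1 -> 4 -> 5 -> 6.
composed = (R_from_xyz(0.11, -10.73, 0.39)      # 5 -> 6
            @ R_from_xyz(0.02, -18.42, 0.50)    # 4 -> 5
            @ R_from_xyz(-0.03, -13.47, 0.13))  # 1 -> 4
# y_angle(composed) comes out near -42.6 degrees, close to the
# direct 1 -> 6 estimate of -42.50 degrees in the table.
```

The sum of the three y-angles (-13.47° - 18.42° - 10.73° = -42.62°) agrees with the direct estimate to within about 0.1°, which supports the consistency of the registrations.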
Conclusion
The EGS method performed well in this experiment, enabling accurate alignment of the 3D point clouds and determination of the reflection angles.
These results show that EGS can be effectively used in future experiments involving 3D laser vibrometry.
Code and Data
The full source code and experimental data used are available in the PhD repository on GitHub.