I Need Help with Processing Large Point Cloud Data Efficiently in CloudCompare

komyash
Posts: 1
Joined: Thu Sep 26, 2024 9:09 am

Post by komyash »

Hello there,

I have been working with CloudCompare for some time now, primarily for analyzing point cloud data from terrestrial laser scans. Lately, I have been dealing with larger datasets, and I have started running into performance issues, particularly during import, rendering, and applying certain filters like subsampling and segmentation.

I am working on a machine with decent specs: 32 GB RAM, an NVIDIA RTX 3060 GPU, and an Intel i7 processor. Despite this, the performance seems to degrade significantly as soon as the point cloud exceeds about 100 million points. My workflow mainly involves importing point clouds, aligning them, applying a few basic transformations, and running segmentation and distance calculations between clouds.

The point clouds take a long time to load, and the display stutters whenever I pan or zoom in.

Applying subsampling algorithms like random sampling to reduce the number of points often results in very slow performance.
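
One thing I have been meaning to try is running the subsampling through CloudCompare's headless command line mode instead of the GUI, so the reduction does not compete with rendering. A minimal sketch of what I have in mind, driving the CLI from Python (untested; the executable path and file names are placeholders, and the -SILENT/-O/-SS/-SAVE_CLOUDS options are as I understand them from the command line wiki):

    import subprocess

    # Placeholder path - adjust to your own installation
    CLOUDCOMPARE = r"C:\Program Files\CloudCompare\CloudCompare.exe"

    # Open the cloud, randomly subsample it down to 10M points, and
    # save the result, all without ever opening the 3D view.
    subprocess.run([
        CLOUDCOMPARE,
        "-SILENT",                    # headless: no GUI, no dialogs
        "-O", "scan_full.las",        # load the input cloud
        "-SS", "RANDOM", "10000000",  # keep 10M randomly picked points
        "-SAVE_CLOUDS",               # save next to the input file
    ], check=True)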

Running cloud-to-cloud distance comparisons on large datasets takes ages.
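
I assume the distance computation could be scripted the same way. Another untested sketch along the same lines (per the command line documentation, the first cloud loaded is treated as the compared cloud and the second as the reference, but please correct me if I have that wrong):

    import subprocess

    CLOUDCOMPARE = r"C:\Program Files\CloudCompare\CloudCompare.exe"

    # Compute cloud-to-cloud distances headlessly; the result is stored
    # as a scalar field on the compared (first loaded) cloud.
    subprocess.run([
        CLOUDCOMPARE,
        "-SILENT",
        "-O", "scan_compared.las",   # compared cloud (gets the distances)
        "-O", "scan_reference.las",  # reference cloud
        "-C2C_DIST",                 # cloud-to-cloud distance
        "-SAVE_CLOUDS",
    ], check=True)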

Also, I have gone through this post (https://www.cloudcompare.org/forum/viewtopic.php?t=1414), which definitely helped me out a lot.

Does anyone have experience with optimizing CloudCompare for handling larger datasets? Is there anything I can do to improve performance, such as specific settings within the software, or is my hardware the bottleneck here?

Thanks in advance for your help.
daniel
Site Admin
Posts: 7707
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France

Re: I Need Help with Processing Large Point Cloud Data Efficiently in CloudCompare

Post by daniel »

On my side, I can work with up to 300M to 400M points before it starts lagging. I have a GeForce RTX 4070, an Intel i9-10900F processor (10 cores / 20 threads) and 64 GB of memory (DDR4, 3200 MHz). But memory should not be the bottleneck for 100M points (or even 300M). However, the speed of the RAM would definitely be a major factor (as well as the cache of your processor, etc. - these factors are generally overlooked, yet they are very important when dealing with this 'quantity' of data).

Also, if you have normals, this is a totally different story. Displaying normals is always super slow.
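
If you don't actually need the normals, you can strip them before display. Something like this should work from the command line (untested sketch, same Python-driven CLI pattern as above; it assumes your build supports the -REMOVE_NORMALS option, so check your version's command line wiki first):

    import subprocess

    CLOUDCOMPARE = r"C:\Program Files\CloudCompare\CloudCompare.exe"

    # Save a copy of the cloud with its normals stripped, so the
    # 3D view doesn't have to render them (assumes -REMOVE_NORMALS
    # is available in your build).
    subprocess.run([
        CLOUDCOMPARE,
        "-SILENT",
        "-O", "scan_with_normals.las",
        "-REMOVE_NORMALS",
        "-SAVE_CLOUDS",
    ], check=True)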

And for scalar field colors, make sure the option to use a shader to display them is checked in the 'Display > Display settings' dialog.
Daniel, CloudCompare admin