Processing a massive point cloud
Posted: Tue Jun 11, 2019 9:08 am
Hello,
I'm currently working on a project that's turning into quite a challenge.
I'm new to point clouds, so I'm looking for a push in the right direction.
Our client provided us with a massive point cloud, and I need to make this behemoth VR-ready (in Unity).
The point cloud:
The point cloud is an "SZF5" scan, separated into 6 sections; each section is split into ~250 MB FLS files (about 10-100 million points each).
The total size is 101 GB.
Besides this, there is also a 27 GB .e57 file called "noisehood full".
In order to get this working in VR, some serious optimization is required.
An octree point visualization isn't what I'm looking for; meshes are required for other features (raycasting, lighting, collision, ...).
What I'm looking for is a way to do the following, and to automate it for future clouds:
- I need to turn the entire cloud into a chunked grid of cells N meters in size.
- Each chunk needs to become a mesh (triangulated, max vertex count per chunk is 65534), and a texture needs to be baked from the points that are ignored.
- All of this needs to happen without loading the entire thing into memory;
e.g. for each chunk, read every point cloud file and collect the points within its range,
then do the triangulation and the baking, save the mesh, and start the process again for the next chunk (see the sketch after this list).
- I know this might be a time-consuming process, but that isn't really an issue (a dedicated machine can be set up).
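
To make the idea concrete, here's a rough C++ sketch of the two-pass streaming approach I have in mind. Everything in it is an assumption on my side: the PointStream reader, the file names and the 5 m cell size are placeholders (the real FLS/E57 reading would have to go through the FARO SDK or libE57Format), and the per-chunk triangulation/texture baking is only stubbed out as a comment.

[code]
#include <algorithm>
#include <cstdio>
#include <optional>
#include <string>
#include <vector>

struct Point { double x, y, z; };

// Hypothetical streaming source: yields one point at a time so the full
// 101 GB cloud never has to be resident in memory. In practice this would
// wrap a FARO SDK / libE57Format reader instead of a plain text file.
class PointStream {
public:
    explicit PointStream(const std::string& path)
        : file_(std::fopen(path.c_str(), "r")) {}
    ~PointStream() { if (file_) std::fclose(file_); }
    std::optional<Point> next() {
        Point p;
        if (file_ && std::fscanf(file_, "%lf %lf %lf", &p.x, &p.y, &p.z) == 3)
            return p;
        return std::nullopt;
    }
private:
    std::FILE* file_ = nullptr;
};

int main() {
    const double cell = 5.0;  // chunk size N in meters (assumed value)
    const std::vector<std::string> files = { "section1.xyz", "section2.xyz" };

    // Pass 1: stream everything once to find the horizontal bounding box.
    double minX = 1e300, minY = 1e300, maxX = -1e300, maxY = -1e300;
    for (const auto& f : files) {
        PointStream s(f);
        while (auto p = s.next()) {
            minX = std::min(minX, p->x); maxX = std::max(maxX, p->x);
            minY = std::min(minY, p->y); maxY = std::max(maxY, p->y);
        }
    }
    if (minX > maxX) return 0;  // no points found

    const int nx = static_cast<int>((maxX - minX) / cell) + 1;
    const int ny = static_cast<int>((maxY - minY) / cell) + 1;

    // Pass 2: one chunk at a time, re-stream the files and keep only the
    // points that fall inside the current cell. Memory usage is bounded by
    // the densest single chunk, not by the whole cloud.
    for (int ix = 0; ix < nx; ++ix) {
        for (int iy = 0; iy < ny; ++iy) {
            const double x0 = minX + ix * cell, x1 = x0 + cell;
            const double y0 = minY + iy * cell, y1 = y0 + cell;
            std::vector<Point> chunk;
            for (const auto& f : files) {
                PointStream s(f);
                while (auto p = s.next()) {
                    if (p->x >= x0 && p->x < x1 && p->y >= y0 && p->y < y1)
                        chunk.push_back(*p);
                }
            }
            if (chunk.empty()) continue;
            // Placeholder for the per-chunk work: decimate to <= 65534 vertices,
            // triangulate, bake the discarded points into a texture, then
            // write the mesh to disk before moving on to the next chunk.
            std::printf("chunk (%d,%d): %zu points\n", ix, iy, chunk.size());
        }
    }
    return 0;
}
[/code]

Re-streaming every file once per chunk is obviously slow, but since processing time isn't an issue for us, I'd rather trade time for bounded memory; the part I'm unsure about is the triangulation/baking step itself.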
Before I dive into CloudCompare, I would like to know whether it is capable of doing something like this (on a smaller scale)?
I have experience in C++ and I'm willing to get my hands dirty in order to automate the process.
Thanks in advance!