I am interested in reducing the overall file size of a mesh (PLY) file.
The 'sample points on a mesh' tool appeared to successfully reduce the density of my mesh, but the file size remained the same: well over 100 MB.
Is there a way, with CloudCompare or some other software, to essentially compress a high-density, full-color mesh so that the final file size is reduced while still retaining the basic structure and features of a (PLY) mesh?
Thank you,
Peter
Mesh Subsampling and Compression
Re: Mesh Subsampling and Compression
CloudCompare has no 'mesh simplification' tool (as we are more focused on clouds).
You should look at Meshlab, InstantMeshes, or Graphite (INRIA). And of course a lot of other commercial tools ;).
Daniel, CloudCompare admin
Re: Mesh Subsampling and Compression
In the video on the CloudCompare home page, 'How to subsample a point cloud and how to sample points on a mesh' (the example with the skull starting at ~4:20), my understanding was that he essentially simplified the mesh by reducing the polygon count. Is that what was done, or is there another way of thinking about how the 'sample points on a mesh' tool works? I thought perhaps in the process of reducing the poly count of the mesh there could be a way to reduce the overall file size.
Thanks,
Peter
Re: Mesh Subsampling and Compression
The 'sample points' method really only samples points on a mesh (randomly). The output is a point cloud and you lose all the triangles / topology.
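For intuition, here is a rough sketch (not CloudCompare's actual code) of what such random sampling amounts to: triangles are picked with probability proportional to their area, and a uniform point is drawn inside each picked triangle via barycentric coordinates. The mesh data and point count below are made up for illustration.

```python
import random

def sample_points_on_mesh(vertices, triangles, n_points, seed=0):
    """Randomly sample n_points on a triangle mesh.

    Triangles are chosen with probability proportional to their area,
    then a uniform point is drawn inside each chosen triangle using
    barycentric coordinates. The output is a bare point cloud: all
    triangle/topology information is discarded.
    """
    rng = random.Random(seed)

    def sub(a, b):
        return [a[i] - b[i] for i in range(3)]

    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]

    def area(tri):
        a, b, c = (vertices[i] for i in tri)
        n = cross(sub(b, a), sub(c, a))
        return 0.5 * (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5

    areas = [area(t) for t in triangles]
    points = []
    for _ in range(n_points):
        tri = rng.choices(triangles, weights=areas)[0]
        a, b, c = (vertices[i] for i in tri)
        # Uniform sampling inside a triangle: sqrt trick on the
        # first barycentric coordinate avoids clustering at a corner.
        r1, r2 = rng.random(), rng.random()
        s = r1 ** 0.5
        u, v, w = 1 - s, s * (1 - r2), s * r2
        points.append([u*a[i] + v*b[i] + w*c[i] for i in range(3)])
    return points
```

Note that nothing about the triangles survives in the result, which is why the tool cannot reduce the polygon count of a mesh: it replaces the mesh with a cloud.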
You could re-create a mesh from the sampled cloud (with Poisson Reconstruction for instance) but the result is not always satisfactory.
Daniel, CloudCompare admin
Re: Mesh Subsampling and Compression
Blender might work better than Meshlab for decimation, if I recall.
Re: Mesh Subsampling and Compression
Thanks for the pointer towards Instant Meshes, that nearly does what I want. It nicely reduced a 100+ MB model to just a few MB.
The issue is that the final output PLY or OBJ is monochrome, i.e. it loses all color information.
I have the same issue with Blender. When I import into Blender my colorized photogrammetry-derived mesh, which I have generated in PhotoScan or VisualSFM, it only shows up as a monochrome model.
In this video:
http://www.blendernation.com/2015/11/16 ... y-program/
they use Instant Meshes and Blender to apparently do what I want to do, and they seem to keep the color information, but I have not figured out how to import a color PLY or OBJ into Blender.
I realize this is out of the scope of this forum, but does anyone know how to turn on or restore the color of a (photogrammetry) model in Blender? Any additional thoughts or tips would be appreciated.
Peter
Re: Mesh Subsampling and Compression
Hi,
You have multiple possibilities. The required workflow depends a bit on whether you are talking about vertex colors or proper texture maps.
For texture mapping, you could decimate your mesh using MeshLab's "Quadric Edge Collapse Decimation", which usually produces quite good results. Alternatively you could, as has been suggested, just remesh the points of the mesh using CC's Poisson implementation (you don't have to randomly subsample points from the mesh, just use a lower octree depth). After decimation, just reimport the mesh into PhotoScan and rebuild the texture. Just make sure that the decimated mesh is in the same position relative to the camera poses.
Edit: Of course, you could also just use the decimation algorithm implemented in PhotoScan.
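For a sense of what quadric edge collapse is doing under the hood: each vertex accumulates a 4x4 "quadric" built from the planes of its incident triangles, and the cost of moving (or merging) a vertex is the quadric error of the new position. A minimal stdlib sketch of that error term follows; it illustrates the classic Garland-Heckbert metric, not MeshLab's actual code.

```python
def plane_quadric(p):
    """4x4 quadric Q = p p^T for a plane p = (a, b, c, d) with
    ax + by + cz + d = 0 and (a, b, c) of unit length."""
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(q1, q2):
    """Quadrics are additive: summing them accumulates the squared
    distances to all the planes they represent."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def quadric_error(q, v):
    """Error h^T Q h of placing a vertex at v = (x, y, z),
    using homogeneous coordinates h = (x, y, z, 1)."""
    h = [v[0], v[1], v[2], 1.0]
    return sum(h[i] * q[i][j] * h[j] for i in range(4) for j in range(4))
```

Collapsing an edge sums the two endpoint quadrics and places the merged vertex where this error is smallest, which is why flat regions lose many triangles while sharp features survive.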
Using vertex colors on a decimated model usually does not look very good, since the surface of each triangle is just the color interpolation of its three corner vertices. When using the Poisson reconstruction you can just check "keep color" or something like that.
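To see why: with vertex colors, the color anywhere inside a triangle is just the barycentric blend of the three corner colors, so on a decimated mesh a single smooth gradient has to cover a large triangle. A tiny illustration with hypothetical RGB values:

```python
def interpolate_vertex_color(corner_colors, bary):
    """RGB color at barycentric coordinates `bary` (u + v + w = 1)
    inside a triangle whose corners carry `corner_colors`. All the
    detail a large triangle can show is this one linear gradient."""
    u, v, w = bary
    (r0, g0, b0), (r1, g1, b1), (r2, g2, b2) = corner_colors
    return (u*r0 + v*r1 + w*r2,
            u*g0 + v*g1 + w*g2,
            u*b0 + v*b1 + w*b2)
```

On the dense original mesh the triangles are tiny, so these gradients are invisible; after decimation each one spans a big patch of surface and the texture detail washes out.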
Otherwise, MeshLab has a fairly efficient implementation of vertex attribute transfer (transferring the color information from one model to another).
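The basic idea behind such a transfer can be sketched in a few lines: give each vertex of the decimated mesh the color of the closest bit of the original. The brute-force nearest-vertex version below is only an illustration; real implementations typically use a spatial index and take the closest point on the source surface rather than just its vertices.

```python
def transfer_vertex_colors(src_vertices, src_colors, dst_vertices):
    """Copy each destination vertex's color from the nearest source
    vertex. Brute force O(n*m) for clarity, not efficiency."""
    def dist2(a, b):
        return sum((a[i] - b[i]) ** 2 for i in range(3))

    out = []
    for v in dst_vertices:
        nearest = min(range(len(src_vertices)),
                      key=lambda i: dist2(src_vertices[i], v))
        out.append(src_colors[nearest])
    return out
```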
What's the problem in Blender? Vertex colors or texture maps? Both work flawlessly.
Cheers