Comparing Kernel Size across Surfaces

Hello,
I come in search of clarification on kernel size as it pertains to the roughness tool. I'm importing dense point clouds from Agisoft PhotoScan and comparing surface roughness between them. However, the automatically assigned kernel size is different for each cloud, so I'm concerned that I'm comparing roughness measured at different scales, which is not my aim. I have tried setting the kernel size to the same value for each, but with limited success: in certain cases the initial kernel size was so much smaller that, after increasing it enough to match the others, it was too large to actually calculate roughness (and the program crashed).
How is the kernel size calculated? Is it a product of the total number of points, the density of points, or the size of an individual point relative to the size of the total surface? If I assign the same kernel size to two surfaces of physically different sizes, will those kernels actually be the same size, or only the same size relative to the total surface size?
Any answers or advice on how to calculate roughness at the same scale across surfaces of different sizes would be greatly appreciated,
Emmett
Re: Comparing Kernel Size across Surfaces
In fact, the default kernel size is a very dumb guess (something related to the extents of the cloud's bounding box).
Your first idea was the right one: you should definitely use the same kernel size for all clouds. Of course, as you already realized, it should be big enough to work on all of them. But CC shouldn't crash with a big kernel (does it actually crash, or is it only frozen? The latter only means the computation takes a lot of CPU power and time; you should bear with it and wait for it to finish. I'd be very interested in more information about your test if CC actually crashes).
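(As an aside, if you want to script this for several clouds, the command line mode should let you apply the same kernel radius everywhere. The option names below are from memory and the file names are placeholders, so check them against the command line wiki before relying on them:)

```
CloudCompare -SILENT -O surface_a.bin -ROUGH 0.05 -SAVE_CLOUDS
CloudCompare -SILENT -O surface_b.bin -ROUGH 0.05 -SAVE_CLOUDS
```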
In theory you should set the kernel value based only on the size of the features you are looking at. The tool extracts the neighbors of each point inside a sphere (kernel = sphere radius), fits a plane to those neighbors (so you need at least 3 neighbors, though many more is better), and finally computes the distance between the central point and this plane. The 'roughness' is simply the deviation from the mean/average local surface. If some clouds have too low a density (i.e. fewer than 4 points in the sphere), they should be considered unfit for this study. And of course, if you are looking at globally flat shapes, you can increase the kernel (radius) a lot.
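(For reference, here is a minimal numpy sketch of that computation: brute-force neighbor search inside the kernel sphere, least-squares plane fit via SVD, then point-to-plane distance. It illustrates the principle rather than reproducing CloudCompare's actual implementation, and the function name and file names are placeholders:)

```python
import numpy as np

def roughness(points, kernel_radius):
    """Per-point roughness: distance from each point to the least-squares
    plane fitted to its spherical neighborhood (kernel = sphere radius)."""
    rough = np.full(len(points), np.nan)
    for i, p in enumerate(points):
        # All points inside the kernel sphere (brute force, for clarity only)
        nbrs = points[np.linalg.norm(points - p, axis=1) <= kernel_radius]
        if len(nbrs) < 4:   # central point plus fewer than 3 neighbors
            continue        # too sparse: roughness stays undefined (NaN)
        centroid = nbrs.mean(axis=0)
        # The plane normal is the right singular vector associated with the
        # smallest singular value of the centered neighborhood
        _, _, vh = np.linalg.svd(nbrs - centroid)
        normal = vh[-1]
        # Roughness = unsigned distance from the central point to the plane
        rough[i] = abs(np.dot(p - centroid, normal))
    return rough

# Usage (file names are placeholders): the key point is passing the SAME
# kernel_radius, in the clouds' common units, to every cloud.
# cloud_a = np.loadtxt("surface_a.xyz")
# cloud_b = np.loadtxt("surface_b.xyz")
# r_a = roughness(cloud_a, 0.05)
# r_b = roughness(cloud_b, 0.05)
```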
Daniel, CloudCompare admin