Hi there,
Firstly, 3DMASC is a great plugin which has delivered us accurate results so far!
The one issue we've been running into is that during training or classification the main load seems to go to the CPU, resulting in very long waiting times during which CloudCompare often freezes, and in some cases even our machine.
So, I was wondering if somehow it's possible to put the load on the GPU instead of the CPU? Or if there are other configuration options which would speed up the process?
Thanks!
Kind regards,
Sjors Jessen
Use GPU instead of CPU during training and classification
- Posts: 1
- Joined: Wed Nov 22, 2023 9:59 am
Re: Use GPU instead of CPU during training and classification
Nope, sadly this algorithm cannot be transferred to the GPU...
And as for the speed-up, I guess a processor with more cores might help?
Daniel, CloudCompare admin
- Posts: 296
- Joined: Sat Jan 20, 2018 1:57 pm
Re: Use GPU instead of CPU during training and classification
SjorsJessen wrote: ↑Wed Nov 22, 2023 12:43 pm

Could you please share a screenshot of the type of data you're processing?
The times I tried, I failed: I was only able to classify the tile I used for training. I guess I didn't use the proper parameters and scales. If you could please share the type of data and maybe the geometric features used, thanks.
Re: Use GPU instead of CPU during training and classification
Hi,

SjorsJessen wrote: ↑Wed Nov 22, 2023 12:43 pm
An option to speed up the process and avoid freezing your computer is to use 3DMASC in Python for the training and application parts, and in the command line for feature computation.
Re: Use GPU instead of CPU during training and classification
Hi,
The bottleneck at training is due to the random forest algorithm not being parallelized in the OpenCV library that we use. It's too bad, because it's an algorithm that is well suited for parallelization, but there does not seem to be a project to get this part of the OpenCV library improved.
Now, so far we have found that we don't need a lot of samples (e.g., 1000 per class) to get very good results, and not too many scales. But as Mathilde suggests, if you're proficient in Python, just call 3DMASC in the command line for the feature computation, and train with scikit-learn, where the random forest implementation is super fast.
Cheers
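To illustrate the scikit-learn route suggested above, here is a minimal sketch of the training step. The feature table is faked with synthetic data, since the real one would come out of the 3DMASC feature computation; the number of features and class labels are assumptions for the example. The key point is `n_jobs=-1`, which makes scikit-learn build the trees on all CPU cores in parallel, which is exactly what the OpenCV implementation lacks.

```python
# Sketch: train a random forest on 3DMASC-style per-point features with
# scikit-learn. The feature values here are synthetic placeholders; in
# practice you would load the features exported by 3DMASC instead.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Fake feature table: ~1000 samples per class (as the post suggests is
# often enough), 8 hypothetical geometric features, 2 classes.
n_per_class = 1000
X0 = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, 8))
X1 = rng.normal(loc=2.0, scale=1.0, size=(n_per_class, 8))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# n_jobs=-1 parallelizes tree construction across all available cores.
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

Classification of a new tile is then just `clf.predict(features_of_new_tile)`, so only the (command-line) feature computation still runs inside CloudCompare.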