Using GPU to accelerate Geo2Grid

n2cr
Posts: 2
Joined: Thu Jun 18, 2020 3:50 pm

Using GPU to accelerate Geo2Grid

Post by n2cr »

There has been quite a bit of work recently to accelerate NumPy and other Python computational components using the CUDA elements in an NVIDIA graphics card. Is there any future consideration of Geo2Grid leveraging a GPU to improve its performance?

https://developer.nvidia.com/how-to-cuda-python
https://devblogs.nvidia.com/numba-pytho ... eleration/
https://cupy.chainer.org/
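
For example, CuPy is designed as a near drop-in replacement for NumPy on the GPU. A minimal sketch (array sizes are arbitrary):

```python
import numpy as np
import cupy as cp  # GPU-backed, NumPy-compatible array library

# CPU version with NumPy
a_cpu = np.random.rand(4096, 4096)
b_cpu = np.sqrt(a_cpu) + a_cpu.mean()

# The same computation on the GPU with CuPy -- the API mirrors NumPy
a_gpu = cp.random.rand(4096, 4096)
b_gpu = cp.sqrt(a_gpu) + a_gpu.mean()

# Results come back to host memory explicitly
b_host = cp.asnumpy(b_gpu)
```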

Chris Wiener N2CR
CR Labs
Morris Plains, NJ
kathys
Posts: 487
Joined: Tue Jun 22, 2010 4:51 pm

Re: Using GPU to accelerate Geo2Grid

Post by kathys »

Dear Sir,

The underlying code for Geo2Grid makes extensive use of Dask/Xarray. You may want to look into the efforts that NVIDIA is funding for running Dask computations using CUDA, which is the most likely path to running Geo2Grid on a GPU at this time.
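
As a rough sketch of that direction, Dask arrays can already be built from CuPy chunks so the same task graph executes on the GPU (illustrative sizes; this is not something Geo2Grid does today):

```python
import cupy as cp
import dask.array as da

# A Dask array whose chunks are CuPy (GPU) arrays rather than NumPy arrays
x = cp.random.random((20000, 20000))
dx = da.from_array(x, chunks=(5000, 5000), asarray=False)

# Ordinary Dask operations now dispatch to CuPy on each chunk; the
# result of compute() is itself a CuPy array resident on the GPU
result = (dx + dx.T).mean(axis=0).compute()
```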

We will always pursue new and better ways to reduce output image latency, including leveraging GPUs in the future.

Kathy
davidh
Posts: 116
Joined: Tue Jun 04, 2013 11:19 am

Re: Using GPU to accelerate Geo2Grid

Post by davidh »

Hi Chris,

I'm just getting back from leave and have been thinking about this for a couple of days, and I'd like to add to Kathy's response. This is just some extra information for anyone who is interested in this topic in the future. As Kathy said, we depend heavily on Xarray and Dask, and there is a lot of work (as you pointed out) on taking advantage of the GPU from these libraries.

In my experience the main difficulty is actually getting improved performance when moving algorithms to the GPU. Running operations on the GPU is not a 1:1 translation on the code side, and algorithms often have to be completely rewritten to get good performance. Additionally, a lot of the operations that Geo2Grid runs actually perform fairly well on the CPU, and we end up getting the best performance by balancing memory usage and multi-core execution. This is where the switch to basic CPU-side Dask has bought us a lot. We end up shuffling a lot of bits around in memory and on disk, and doing this efficiently while keeping the code maintainable can be difficult.
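
To make that balancing act concrete, here is roughly the chunked-Dask pattern we rely on (illustrative array and chunk sizes, not Geo2Grid's actual configuration):

```python
import dask
import dask.array as da

# Illustrative size only: a float64 array too large to hold comfortably
# in RAM as a single NumPy array (~7 GB here)
data = da.random.random((30000, 30000), chunks=(2000, 2000))

# Dask builds a graph of chunk-sized tasks (~32 MB each); the default
# threaded scheduler runs them across all cores while keeping only a
# handful of chunks resident in memory at any one time
lo, hi, mean = dask.compute(data.min(), data.max(), data.mean())
```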

The main exception here is resampling. I think resampling is an operation that could be moved to the GPU for some great performance improvements. So far, though, I've had trouble finding an existing, easy-to-use resampling algorithm that runs on the GPU and that I could use from Geo2Grid/Satpy. If this is something you have experience with, I'd be very interested in hearing which existing projects you think we could take advantage of.
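
To give a sense of the shape of the problem: the gather step of nearest-neighbor resampling maps well to a GPU, as in the toy CuPy sketch below (a hypothetical helper; random index arrays stand in for a real neighbor search). The hard part that I haven't found a good GPU implementation of is computing those index arrays for real swath-to-grid geometry.

```python
import cupy as cp

def nearest_gather_gpu(src, row_idx, col_idx):
    """Gather source pixels into an output grid on the GPU.

    The gather itself is a single fancy-indexing operation; building
    the (row_idx, col_idx) neighbor indices for real satellite geometry
    is the expensive step this toy skips by using random indices.
    """
    return src[row_idx, col_idx]

# Toy example: map a 1000x1000 source image onto a 500x500 output grid
src = cp.random.rand(1000, 1000)
rows = cp.random.randint(0, 1000, size=(500, 500))
cols = cp.random.randint(0, 1000, size=(500, 500))
out = nearest_gather_gpu(src, rows, cols)
```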

As always, we'll keep trying to make Geo2Grid faster whether it be on the CPU or GPU.

Dave