
GPU Computing

7 messages · Norm Matloff, M. Edward (Ed) Borasky, Simon Urbanek

Peter Chausse wrote:

The short answer is no.

Functions like mclapply() work, on say a quad-core machine, by setting
up new invocations of R to run on each of the four CPU cores.  What you
have in mind would mean having R itself run on each of the GPU cores.  This
is not possible, for a variety of reasons (R needs a terminal shell, it
needs I/O, etc.).
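A minimal illustration of that multicore model (a sketch, assuming a
Unix-alike OS where mclapply() can fork; on Windows only mc.cores = 1
is allowed):

```r
library(parallel)

# mclapply() forks one child R process per task (up to mc.cores),
# so each list element is evaluated in its own copy of R --
# running on CPU cores, not GPU cores.
squares <- mclapply(1:4, function(i) i^2, mc.cores = 4)
unlist(squares)  # 1 4 9 16
```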

To have R take advantage of GPUs, one must write C/C++ (or FORTRAN)
code.  Currently packages that do this are very limited.  See the
relevant CRAN Task View, at

http://cran.r-project.org/web/views/HighPerformanceComputing.html

You might also take a look at my Rth package, at

http://heather.cs.ucdavis.edu/~matloff/rth.html 

Norm Matloff
On Tue, Aug 21, 2012 at 1:54 PM, Norm Matloff <matloff at cs.ucdavis.edu> wrote:
There are a number of GPU packages in that task view, but few of them
compile and execute in a pure open source environment (*none* on any
of my recent tests with openSUSE and Fedora; I haven't attempted
Debian or Ubuntu). And as far as I can recall they are NVidia only;
there is precious little pure open source code that actually works out
of the box for Intel or AMD/ATI GPUs. There are GCC gotchas, missing
libraries and all sorts of other non-repeatabilities to waste effort
on. I gave up on pure open source GPU usage with R.
Yes, very true.  

Indeed, I currently have a nasty bug I'm trying to solve, involving R,
CUDA and the torque cluster manager.  No output!  Is the problem in R,
CUDA, torque or what?

There are other problems in addition to the platform issues, notably the
limited amount of memory on GPUs and the lack of efficient
synchronization hardware.

For now, GPU is not for the faint of heart (or the short of patience).

As for the Rth package that I mentioned, I currently view it more as a
tool for multicore than for GPU.  The latter is still very good on many
problems, though.

Norm
On Tue, Aug 21, 2012 at 03:10:42PM -0700, M. Edward (Ed) Borasky wrote:

On Tue, Aug 21, 2012 at 3:33 PM, Norm Matloff <matloff at cs.ucdavis.edu> wrote:
Or for the unbudgeted open source advocate - it's a capital-intensive
minefield of patents and is likely to remain so for a long time.

On Aug 21, 2012, at 6:10 PM, M. Edward (Ed) Borasky <znmeb at znmeb.net> wrote:

That is not true. OpenCL is platform-independent and you can use a wide range of back-end implementations: not only GPUs but also accelerators and CPUs. See the installation notes in the package for details; there are many to choose from. You may still need to tweak kernels for performance on a particular type of back-end (e.g., kernels optimized for CPUs won't be very fast on GPUs and vice versa), but at least you need not worry about vendors.

The nice thing about OpenCL is that the API itself is open and royalty-free, so you can use it without any baggage. However, drivers for particular hardware tend to be proprietary, simply because GPU vendors like their secret performance sauce to stay secret.
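For instance, the available back-ends can be enumerated from R with the
OpenCL package from CRAN (a sketch; which platforms and devices actually
show up depends entirely on which vendor drivers/ICDs are installed, and
nothing may appear at all without one):

```r
library(OpenCL)

# Every OpenCL implementation the installed drivers expose shows up
# as a platform -- CPU-only implementations count too, not just GPUs.
platforms <- oclPlatforms()

# Each platform in turn exposes one or more devices
# (GPUs, CPUs, accelerators):
for (p in platforms)
    print(oclDevices(p, type = "all"))
```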

Cheers,
Simon
On Wed, Aug 22, 2012 at 6:54 AM, Simon Urbanek <simon.urbanek at r-project.org> wrote:
I wasn't able to get OpenCL to run on anything except Intel CPUs. My
ancient NVidia GeForce 6150SE nForce 430 didn't work, even with
proprietary Linux drivers and other binary stuff downloaded from the
NVidia site. The open source "nouveau" drivers? Fugeddaboutit! The
built-in GPU on my year-old Intel i5 laptop? Nope. I don't have an ATI
/ AMD GPU any more, so maybe that one actually works.

This stuff is hand-tuned to work on Windows gamer gear, Macs and
strategic government HPC procurements. Everybody else is a
second-class citizen.
On Aug 22, 2012, at 12:45 PM, "M. Edward (Ed) Borasky" <znmeb at znmeb.net> wrote:

I had no trouble getting OpenCL working on pretty much all the machines I have been testing. Obviously, it is easiest on Macs, since OpenCL comes pre-installed with the system, so it just works. I don't use Windows so I can't comment on that one, but on Linux (Debian and Ubuntu) it was still easy: the API comes as open source with the distribution, so the only thing you need is the libOpenCL.so from your vendor, which was just one download that you don't even have to install (i.e., you don't need to set any special environment variables). I grabbed just a random version and didn't need to upgrade drivers.

That said, using random GPUs won't do you any good with R: the performance of regular "consumer" GPUs is horrible with double-precision arithmetic (even more so with mobile GPUs). Also, they are not much faster than modern CPUs, other than for a very specific class of problems that can be hand-tuned to a particular GPU and its hardware specifics, which I don't consider realistic use for data analysis. The only serious benefit comes with modern GPUs that are geared specifically at HPC (like Teslas etc.): they can really blow CPUs out of the water, but they come at an appropriate cost. So even though you can run GPU code on random hardware as a first-class citizen, it won't help you solve a problem.

Cheers,
Simon