
R, PostgreSQL and poor performance

5 messages · Berry, David I., Gabor Grothendieck, Joe Conway +1 more

#
On Thu, Dec 1, 2011 at 10:02 AM, Berry, David I. <dyb at noc.ac.uk> wrote:
If this is a large table of which the desired rows are a small
fraction of all rows, then be sure there are indexes on the variables in
your where clause.

You can also try it with the RpgSQL driver although there is no reason
to think that that would be faster.
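One quick way to check the index suggestion (a sketch, not from the thread; connection details are placeholders): run EXPLAIN from R and confirm the planner actually uses the indexes.

```r
# Sketch: look for 'Index Scan' rather than 'Seq Scan' in the plan output
library(RPostgreSQL)
con  <- dbConnect(dbDriver("PostgreSQL"), dbname = "mydb", host = "myhost")
plan <- dbGetQuery(con, "EXPLAIN SELECT * FROM observations WHERE pt = 6")
cat(plan[[1]], sep = "\n")
dbDisconnect(con)
```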
#
On 01/12/2011 17:01, "Gabor Grothendieck" <ggrothendieck at gmail.com> wrote:

Thanks for the reply and suggestions. I've tried the RpgSQL drivers and
the results are pretty similar in terms of performance.

The ~1.5M records I'm trying to read into R are being extracted from a
table with ~300M rows (and ~60 columns) that has been indexed on the
relevant columns and horizontally partitioned (with constraint checking
on). I do need to try and optimize the database a bit more, but I don't
think this is the cause of the performance issues.

As an example, when I run the query purely in R it takes 273s to run
(using system.time() to time it). When I extract the data via psql and
system() and then import it into R using read.table() it takes 32s. The
code I've used for both is below. The second way of doing it (psql and
read.table()) is less than ideal but does seem to have a big performance
advantage at the moment; the only difference in the results is that the
date variables are stored as strings in the second example.

# Query purely in R
# ------------------------
dbh <- dbConnect(drv,user="?",password="?", dbname="?",host="?")

sql <- "select id, date, lon, lat, date_trunc('day' , date) as jday,
extract('hour' from date) as hour, extract('year' from date) as year from
observations where pt = 6 and date >= '1990-01-01' and date < '1995-01-01'
and lon > 180 and lon < 290 and lat > -30 and lat < 30 and sst is not
null;"

dataIn <- dbGetQuery(dbh,sql)
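As a diagnostic (a sketch, not tested against this database): the single dbGetQuery() call above can be replaced by dbSendQuery()/fetch(), which pulls the result in fixed-size chunks. dbSendQuery(), fetch(), and dbClearResult() are standard DBI/RPostgreSQL functions.

```r
# Sketch: fetch the same result in chunks; if later chunks take noticeably
# longer than earlier ones, suspect re-allocation rather than per-row cost.
res <- dbSendQuery(dbh, sql)          # dbh and sql as defined above
chunks <- list()
repeat {
  chunk <- fetch(res, n = 100000)     # 100k rows per round trip
  if (nrow(chunk) == 0) break
  chunks[[length(chunks) + 1]] <- chunk
}
dbClearResult(res)
dataIn <- do.call(rbind, chunks)
```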



# Query via command line
# ----------------------------------
system('psql -h myhost -d mydb -U myuid -f getData.sql')

system("cat tmp.csv | sed 's/^,/\"\"&/g;s/^[0-9a-zA-Z]\\+/\"&\"/g' > tmp2.csv")
# This just ensures the first column is quoted (the command is wrapped in
# double quotes so R passes the sed single quotes through to the shell intact)

dataIn <- read.table('tmp2.csv',sep=',' ,col.names=c(
"id","date","lon","lat","jday","hour","year") )


# Contents of getData.sql
# ---------------------------------
\o ./tmp.csv
\pset format unaligned
\pset fieldsep ','
\pset tuples_only
select 
	id, date, lon, lat, date_trunc('day' , date) as jday, extract('hour' from
date) as hour, extract('year' from date) as year
from 
	observations 
where 
	pt = 6 and date >= '1990-01-01' and date < '1995-01-01' and lon > 180 and
lon < 290 and lat > -30 and lat < 30 and sst is not null;
\q


----------------------------------------------
David Berry
National Oceanography Centre, UK
#
On 12/02/2011 09:46 PM, Berry, David I. wrote:
With that much data you might want to consider PL/R:
  http://www.joeconway.com/plr/
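For anyone unfamiliar with it, PL/R runs R inside PostgreSQL, so the heavy lifting can happen server-side and only a summary crosses the wire. A minimal sketch (the function is illustrative, not from the thread; PL/R exposes unnamed arguments as arg1, arg2, ...):

```sql
-- Illustrative: compute a median in-database with R via PL/R
CREATE OR REPLACE FUNCTION r_median(float8[]) RETURNS float8 AS $$
  median(arg1)
$$ LANGUAGE plr;

SELECT r_median(array_agg(sst))
FROM observations
WHERE pt = 6 AND sst IS NOT NULL;
```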

HTH,

Joe
10 days later
#
BD> All variables are reals other than id which is varchar(10) and date
BD> which is a timestamp, approximately 1.5 million rows are returned by
BD> the query and it takes order 10 second to execute using psql (the
BD> command line client for Postgres) and a similar time using pgAdmin
BD> 3. In R it takes several minutes to run and I'm unsure where the
BD> bottleneck is occurring.

You may want to test progressively smaller chunks of the data to see how
quickly R slows down as compared to psql on that query.
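A sketch of that measurement (assuming the dbh connection from earlier in the thread; the LIMIT values are arbitrary): time the same query at increasing row counts and watch how elapsed time grows.

```r
# Sketch: if elapsed time grows faster than the row count, suspect
# re-allocation; if it grows linearly, suspect per-row conversion cost
for (n in c(10000, 50000, 250000, 1250000)) {
  q <- paste0("SELECT id, date, lon, lat FROM observations ",
              "WHERE pt = 6 LIMIT ", n)
  t <- system.time(dbGetQuery(dbh, q))
  cat(n, "rows:", t["elapsed"], "s\n")
}
```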

My first guess is that something is allocating and re-allocating RAM in a
quadratic (or worse) fashion.

I don't know whether OSX has anything equivalent, but you could test on
the Linux box using oprofile (http://oprofile.sourceforge.net; SuSE
should have an rpm for it and kernel support compiled in) to confirm
where the time is spent.

It is /possible/ that the (sql)NULL->(r)NA logic in RS-PostgreSQL.c may
be slow (relatively speaking), but it is necessary.  Nothing else jumps
out as a possible choke point.

Oprofile (or the equivalent) would best answer the question.

-JimC