Dear list

I am running some simulations in R involving reading in several hundred datasets, performing some statistics and outputting those statistics to file. I have noticed that the time it takes to process a dataset (or, say, a set of 100 datasets) seems to grow as the simulation progresses. Has anyone else noticed this? I am curious to know whether this has to do with how R processes code in loops or whether it might be due to memory usage issues (e.g., repeatedly reading data into the same matrix).

Thanks in advance

Barth
for loop performance
5 messages · Barth B. Riley, Philipp Pagel, Martin Morgan
I am running some simulations in R involving reading in several hundred datasets, performing some statistics and outputting those statistics to file. I have noticed that the time it takes to process a dataset (or, say, a set of 100 datasets) seems to grow as the simulation progresses.
Reading data, e.g. with read.table, can be slow because it does a fair bit of content checking, data type guessing, etc. So I guess the question is: how is your data stored (files, and in what format, or a database?) and how do you read it into R? Once we know this there may be tricks to speed up the data import.
I am curious to know whether this has to do with how R processes code in loops or whether it might be due to memory usage issues (e.g., repeatedly reading data into the same matrix).
Probably not - I would guess it's the parsing of the input data that is slow (a quick way to check this is sketched below).

cu
Philipp
Dr. Philipp Pagel
Lehrstuhl für Genomorientierte Bioinformatik
Technische Universität München
Wissenschaftszentrum Weihenstephan
Maximus-von-Imhof-Forum 3
85354 Freising, Germany
http://webclu.bio.wzw.tum.de/~pagel/
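A quick way to test that guess rather than rely on impressions is to profile a short, representative run and see which functions dominate. A minimal, self-contained sketch; the toy in-memory data here just stands in for the real input files.

Rprof("profile.out")                 # start the sampling profiler
for (i in 1:50) {
    txt <- paste(rnorm(5000), collapse = "\n")   # fake one-column input "file"
    invisible(read.table(textConnection(txt)))   # parse it like a real file
}
Rprof(NULL)                          # stop profiling
summaryRprof("profile.out")$by.self  # self time per function; parsing should dominate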
Thank you Philipp for your post. I am reading in:

1. a 3 x 100 item parameter file (floating point and integer data)
2. a 100 x 1000 item response file (integer data)
3. a 6 x 1000 person parameter file (contains simulation condition information, person measures)
4. I am then computing several statistics used in subsequent ROC analyses, the AUCs being stored in a 6000 x 15 matrix of floating point numbers

I am using read.table for #1-#3 and write.table for #4. The process of reading files (#1-#3) and writing to file is done over 6,000 iterations.

Barth
On Thu, Apr 14, 2011 at 06:50:56AM -0500, Barth B. Riley wrote:
Thank you Philipp for your post. I am reading in:

1. a 3 x 100 item parameter file (floating point and integer data)
2. a 100 x 1000 item response file (integer data)
3. a 6 x 1000 person parameter file (contains simulation condition information, person measures)
4. I am then computing several statistics used in subsequent ROC analyses, the AUCs being stored in a 6000 x 15 matrix of floating point numbers

I am using read.table for #1-#3 and write.table for #4. The process of reading files (#1-#3) and writing to file is done over 6,000 iterations.
A few ideas:

1) Try the colClasses argument to read.table. That way R will not have to guess the data type of each column.
2) When you say 6000 iterations - do you mean you are reading/writing the SAME files over and over again? Or do you have 6000 sets of files? In the former case the obvious advice would be to read them only once.
3) If the input files were generated in R, another option would be to save()/load() them rather than using write.table()/read.table().
4) If they came from some other application, storing everything in a database may speed things up.
5) Is your data on a file server? If yes: try moving it to the local disc temporarily to see if network I/O is limiting your speed.
6) Whatever you try to improve performance - measure the effects rather than relying on your impression (system.time, Rprof, ...) in order to find out what part of the program is actually eating up the most time. A small sketch of points 1 and 6 follows below.

cu
Philipp
Dr. Philipp Pagel
Lehrstuhl für Genomorientierte Bioinformatik
Technische Universität München
Wissenschaftszentrum Weihenstephan
Maximus-von-Imhof-Forum 3
85354 Freising, Germany
http://webclu.bio.wzw.tum.de/~pagel/
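A minimal, self-contained sketch of points 1 and 6, timing read.table with and without declared column types. The throwaway file and the all-numeric colClasses are assumptions for illustration; adjust them to the real files.

tf <- tempfile()                                  # throwaway demo file
write.table(matrix(rnorm(3 * 100), nrow = 3), tf,
            row.names = FALSE, col.names = FALSE)

t.guessed  <- system.time(x1 <- read.table(tf))   # R inspects every column
t.declared <- system.time(x2 <- read.table(tf,    # types declared up front
                              colClasses = rep("numeric", 100)))

print(rbind(guessed = t.guessed, declared = t.declared))
unlink(tf)

On a file this small both timings will be near zero; run it against one of the real input files to see whether the difference matters. With save()/load() (point 3) the parsing step disappears entirely, since objects are restored from R's own binary format.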
On 04/13/2011 02:55 PM, Barth B. Riley wrote:
Dear list

I am running some simulations in R involving reading in several hundred datasets, performing some statistics and outputting those statistics to file. I have noticed that the time it takes to process a dataset (or, say, a set of 100 datasets) seems to grow as the simulation progresses. Has anyone else noticed this? I am curious to know whether this has to do with how R processes code in loops or whether it might be due to memory usage issues (e.g., repeatedly reading data into the same matrix).
Hi Barth
The 'it gets slower' symptom is often due to repeatedly 'growing by 1' a list or other data structure, e.g.,
m = matrix(100000, 100)      # an object to store at each iteration
n = 20000
result = list()              # empty list, grown by one element per iteration
system.time(for (i in seq_len(n)) result[[i]] = m)
versus 'pre-allocate and fill'
result = vector("list", n)   # all n slots allocated up front
system.time(for (i in seq_len(n)) result[[i]] = m)
The former causes 'result' to be copied on each new assignment, and the size of the copy gets larger each time.
Computational Biology
Fred Hutchinson Cancer Research Center
1100 Fairview Ave. N. PO Box 19024
Seattle, WA 98109
Location: M1-B861
Telephone: 206 667-2793
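Applied to the setup described earlier in the thread, that advice amounts to allocating the 6000 x 15 AUC matrix once and filling one row per iteration. A sketch under assumptions: compute.aucs() is a made-up placeholder for the real per-dataset ROC analysis, and the output file name is invented.

compute.aucs <- function() runif(15)   # placeholder for the real ROC analysis

n.iter <- 6000
aucs <- matrix(NA_real_, nrow = n.iter, ncol = 15)  # pre-allocate once
for (i in seq_len(n.iter)) {
    ## read this iteration's three input files here, then:
    aucs[i, ] <- compute.aucs()        # fill a row; nothing grows inside the loop
}
write.table(aucs, "aucs.txt", row.names = FALSE)    # a single write at the end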