Enormous Datasets
3 messages · Thomas W Volscho, Peter Dalgaard, Roger D. Peng

Dear List,

I have some projects where I use enormous datasets. For instance, the 5% PUMS microdata from the Census Bureau. After deleting cases I may have a dataset with 7 million+ rows and 50+ columns. Will R handle a datafile of this size? If so, how?

Thank you in advance,
Tom Volscho

************************************
Thomas W. Volscho
Graduate Student
Dept. of Sociology U-2068
University of Connecticut
Storrs, CT 06269
Phone: (860) 486-3882
http://vm.uconn.edu/~twv00001
Thomas W Volscho <THOMAS.VOLSCHO at huskymail.uconn.edu> writes:
Dear List, I have some projects where I use enormous datasets. For instance, the 5% PUMS microdata from the Census Bureau. After deleting cases I may have a dataset with 7 million+ rows and 50+ columns. Will R handle a datafile of this size? If so, how?
With a big machine... If that is numeric, non-integer data, you are looking at something like

> 7e6 * 50 * 8
[1] 2.8e+09

i.e. roughly 3 GB of data for one copy of the data set. You can easily find yourself with multiple copies, so I suppose a machine with 16 GB RAM would cut it. These days that basically suggests the x86_64 architecture running Linux (e.g. multiprocessor Opterons), but there are also 64-bit Unix "big iron" solutions (Sun, IBM, HP, ...).

If you can avoid dealing with the whole dataset at once, smaller machines might get you there. Notice that one column is "only" 7e6 * 8 bytes = 56 MB, and you may be able to work with aggregated data from some step onwards.
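[A minimal sketch of that chunk-at-a-time approach, added editorially rather than taken from the reply. The file name "pums.dat", the chunk size, and the column layout are invented for illustration; the real PUMS extracts are fixed-width files, in which case read.fwf() plays the same role as read.table().]

## Process a large whitespace-delimited file in 100,000-row chunks,
## keeping only 2 of the 50 columns and aggregating as we go.
con <- file("pums.dat", open = "r")
chunk <- 100000
total <- 0; n <- 0
repeat {
    block <- tryCatch(
        read.table(con, nrows = chunk, header = FALSE,
                   colClasses = c("integer", "numeric",  # keep 2 columns
                                  rep("NULL", 48))),     # "NULL" skips a column
        error = function(e) NULL)                        # no lines left
    if (is.null(block) || nrow(block) == 0) break
    total <- total + sum(block[[2]])   # aggregate; don't keep raw rows
    n <- n + nrow(block)
}
close(con)
total / n   # e.g. the overall mean of the second retained column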
Peter Dalgaard
Dept. of Biostatistics, University of Copenhagen
Blegdamsvej 3, 2200 Cph. N, Denmark
Ph: (+45) 35327918, FAX: (+45) 35327907
p.dalgaard at biostat.ku.dk
It depends on what you mean by 'handle', but probably not. You'll likely have to split the file into multiple files unless you have some rather high-end hardware. However, in my limited experience, there is almost always a meaningful way to split the data (geographically, or by other categories).

A few things I've learned recently working with large datasets (a sketch illustrating them follows below):

1. Store files in .rda format using save() -- the load times are much faster and loading takes up less memory.
2. If your data are integers, store them as integers!
3. Don't store character variables in data frames -- use factors.

-roger
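[A minimal sketch of those three points, added editorially and not part of the original message. The file names and the column names ("age", "income", "state") are assumptions:]

## Points 2 and 3: declare column types up front, so integer columns are
## stored as 4-byte integers and the character column becomes a factor.
d <- read.csv("pums.csv",
              colClasses = c(age = "integer", income = "numeric",
                             state = "factor"))

## Point 1: save a binary image; load() later is much faster and
## leaner than re-parsing the text file.
save(d, file = "pums.rda")
rm(d)
load("pums.rda")    # restores the object `d`

str(d)              # confirm the types: int, num, Factor
object.size(d)      # bytes used by one in-memory copy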
Thomas W Volscho wrote:
Dear List, I have some projects where I use enormous datasets. For instance, the 5% PUMS microdata from the Census Bureau. After deleting cases I may have a dataset with 7 million+ rows and 50+ columns. Will R handle a datafile of this size? If so, how? [...]
Roger D. Peng http://www.biostat.jhsph.edu/~rpeng/