different outcomes using read.table vs read.csv
2009/3/13 jatwood <jatwood at montana.edu>:
Good afternoon. I have noticed results similar to the following several times as I have used R over the past several years. My .csv file has a header row and 3073 rows of data.
rskreg<-read.table('D:/data/riskregions.csv',header=T,sep=",")
dim(rskreg)
[1] 2722   13
rskreg<-read.csv('D:/data/riskregions.csv',header=T)
dim(rskreg)
[1] 3073   13
Does someone know what could be causing the read.table and read.csv functions to give different results on some occasions? The riskregions.csv file was generated with and saved from MS Excel.
read.table defaults to comment.char = "#", so any line starting with # gets ignored. read.csv defaults to comment.char = "", which disables comment handling, so that might explain why read.csv reads more rows than read.table does... Do you have lines starting with #? Try read.table with comment.char = "" and see if you get the right number of rows. See the help for read.table for more info. I'd not seen this before; hope it hasn't bitten me... Barry
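To make the difference concrete, here is a minimal sketch (using a made-up temporary file, not the original riskregions.csv) showing how a data line that happens to begin with # is silently dropped by read.table but kept by read.csv, and how comment.char = "" reconciles the two:

```r
# Hypothetical reproduction: a small CSV in which one data row starts with "#".
tmp <- tempfile(fileext = ".csv")
writeLines(c("x,y", "1,a", "#2,b", "3,c"), tmp)

# read.table() defaults to comment.char = "#", so the "#2,b" row is treated
# as a comment and dropped:
nrow(read.table(tmp, header = TRUE, sep = ","))   # 2 rows

# read.csv() defaults to comment.char = "", so every data row is kept:
nrow(read.csv(tmp, header = TRUE))                # 3 rows

# Disabling comment handling makes read.table agree with read.csv:
nrow(read.table(tmp, header = TRUE, sep = ",", comment.char = ""))   # 3 rows
```

A quick way to check a real file for such rows without parsing it is grepl("^#", readLines("D:/data/riskregions.csv")).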