Paul,
thank you very much for this elaborate answer. It raises a lot of points that make me rethink what I was planning. In particular, I will take a close look at the TSdbi package and its source code. That is probably something I can rely on, which would be great.
To summarize my idea briefly: use R to read the data in and give it a standardised structure, which is then written to SQLite; use SQLite to store the data and to filter/order it; use R to retrieve parts of the data and estimate models on them. The fitted models get an S4 class and carry everything they need.
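Just to make the intended round trip concrete, here is a minimal sketch using the DBI and RSQLite packages (the table name, column names, and sample values are purely illustrative, and the final model is only a placeholder):

```r
## Sketch of the intended workflow: R -> SQLite -> R -> model.
## Assumes the DBI and RSQLite packages are installed.
library(DBI)
library(RSQLite)

con <- dbConnect(SQLite(), "ticks.sqlite")

## 1. Read raw data in R and give it a standardised structure
##    (here just two hand-made example ticks).
ticks <- data.frame(
  timestamp = c("2011-01-03 09:00:00.125", "2011-01-03 09:00:00.250"),
  id        = c("FDAX201103", "FDAX201103"),
  price     = c(6989.5, 6990.0),
  volume    = c(2L, 1L),
  stringsAsFactors = FALSE
)

## 2. Write the standardised data to SQLite
dbWriteTable(con, "ticks", ticks, append = TRUE)

## 3. Let SQLite do the filtering/ordering and pull back
##    only the subset that is actually needed
sub <- dbGetQuery(con,
  "SELECT timestamp, price, volume FROM ticks
   WHERE id = 'FDAX201103' ORDER BY timestamp")

## 4. Estimate a model on the subset (placeholder model only;
##    the real estimators would return S4 objects)
fit <- lm(price ~ volume, data = sub)

dbDisconnect(con)
```

For the real tick volumes, step 2 would of course be run in chunks rather than on one in-memory data frame.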
By "huge" I mean, for example, one year of tick data from futures markets: for DAX futures this could be around 28.5 million entries (each entry with timestamp, IDs, price, volume, etc.). Another example would be spot market data for a whole index (say the Dow 30); usually we get these data from WRDS in batched form, e.g. a CSV with the entries for all securities combined. A further example is order book data with every order sent to the exchange, for one security over three months: altogether often more than 20 million entries (including deleted orders, errors, etc.).