
I have a task and would like to think through how I should go about programming it.

I will probably be given a CSV-format database with a minimum of 36 million lines of data. Users will then need to search this "database" through a CGI/Perl interface, based on conditions involving multiple column values, and display the matching rows.

How should I read the CSV in Perl (probably using a CSV parser from CPAN), and what type of database should I store it in? The key priority is the speed of searching the database.

Any sample code would be appreciated.

+2  A: 

Most databases have a means of loading a CSV file directly into a table, for example SQL*Loader for Oracle or the LOAD DATA INFILE command for MySQL.
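
As a rough sketch of driving such a load from Perl (assuming DBD::mysql, a database named mydb, and an existing table named records whose columns match the CSV; all of those names are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # mysql_local_infile=1 lets the client send a local file to the server.
    my $dbh = DBI->connect(
        'DBI:mysql:database=mydb;host=localhost;mysql_local_infile=1',
        'user', 'password',
        { RaiseError => 1 },
    );

    # Bulk-load the CSV into the existing table in a single statement.
    # IGNORE 1 LINES assumes the file has a header row.
    $dbh->do(q{
        LOAD DATA LOCAL INFILE 'data.csv'
        INTO TABLE records
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES
    });

    $dbh->disconnect;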

Searching the database in an efficient manner will depend on the data and how you expect to search it (i.e. which fields are interesting, which ones you might sort on, etc.). Without more information it's hard to give you a solid answer, though you should follow general best practices for indexing.

Concerning code examples for accessing a database, see the following links:

MySQL DBI Example

Perl DBI Doc

RC
A: 

Sometimes Perl surprises you with its efficiency in handling basic scenarios. If your use cases are all read-only and the information is static/unchanging, I'd first see how the brute-force method of just opening the file and searching it works.
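
A minimal brute-force sketch, assuming the file is data.csv and the match is an exact comparison against a hypothetical third column:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV_XS;

    # Hypothetical search: exact match against column index 2 of data.csv.
    my $wanted = shift @ARGV or die "usage: $0 <value>\n";

    my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });
    open my $fh, '<', 'data.csv' or die "data.csv: $!";

    while (my $row = $csv->getline($fh)) {
        print join(',', @$row), "\n" if $row->[2] eq $wanted;
    }
    close $fh;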

If that turns out to be unacceptable, look at DBI, Perl's way of talking to a database: run your millions of inserts once, and then your reads will be fast with a modern RDBMS like MySQL or SQL Server.
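
A sketch of the read side with DBI, assuming a hypothetical records table with id, city, and status columns; the placeholders keep the CGI parameters safely out of the SQL string:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=mydb', 'user', 'password',
                           { RaiseError => 1 });

    # Bound placeholders for the user-supplied search conditions.
    my $sth = $dbh->prepare(
        'SELECT id, city, status FROM records WHERE city = ? AND status = ?'
    );
    $sth->execute('Boston', 'active');

    while (my $row = $sth->fetchrow_hashref) {
        print "$row->{id}\t$row->{city}\t$row->{status}\n";
    }

    $dbh->disconnect;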

Gabriel
+5  A: 

You probably want to go with a proper database solution. The easiest to set up (depending on your familiarity with RDBMSes) is probably MySQL. Once you have that set up, you'll want to look into Perl modules for interfacing with the database. DBIx::Class is the "in thing" these days, and as such there are many people using it who can answer questions.
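
A minimal DBIx::Class sketch, assuming a hypothetical records table with id, city, and status columns (table, schema, and column names are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Result class describing the hypothetical "records" table.
    package MyApp::Schema::Result::Record;
    use base 'DBIx::Class::Core';
    __PACKAGE__->table('records');
    __PACKAGE__->add_columns(qw(id city status));
    __PACKAGE__->set_primary_key('id');

    # Schema class tying the result class together.
    package MyApp::Schema;
    use base 'DBIx::Class::Schema';
    __PACKAGE__->register_class(Record => 'MyApp::Schema::Result::Record');

    package main;
    my $schema = MyApp::Schema->connect('DBI:mysql:database=mydb', 'user', 'password');

    # search() builds the WHERE clause from a hashref of column => value pairs.
    my @rows = $schema->resultset('Record')->search({ city => 'Boston' });
    print $_->id, "\t", $_->status, "\n" for @rows;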

Oh, and for your CSV parsing, look at Text::CSV if you don't want to load the file directly into the database (or if your RDBMS of choice doesn't support ingesting CSV files directly).
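
A sketch of that route, assuming Text::CSV, DBI, and a hypothetical three-column records table, with commits batched so the millions of inserts don't each pay transaction overhead:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Text::CSV;
    use DBI;

    my $csv = Text::CSV->new({ binary => 1, auto_diag => 1 });
    my $dbh = DBI->connect('DBI:mysql:database=mydb', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 0 });

    # Hypothetical column names; adjust to match your schema and CSV layout.
    my $sth = $dbh->prepare('INSERT INTO records (id, city, status) VALUES (?, ?, ?)');

    open my $fh, '<', 'data.csv' or die "data.csv: $!";
    $csv->getline($fh);   # skip the header line (assumed present)

    my $count = 0;
    while (my $row = $csv->getline($fh)) {
        $sth->execute(@$row);
        $dbh->commit unless ++$count % 10_000;   # commit in batches, not per row
    }
    close $fh;

    $dbh->commit;
    $dbh->disconnect;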

CanSpice
+1  A: 

First, use Text::CSV_XS to parse the CSV file.

Second, what sort of database to use and how it should be structured depends on what sort of searches are going to be made.

If you are doing simple keyword matching, then a key-value store will be very fast. Something like the Berkeley DB will do nicely.
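
A small sketch using DB_File (the Perl tie interface to Berkeley DB), assuming a hypothetical layout where each row is keyed on an ID column:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DB_File;
    use Fcntl qw(O_RDWR O_CREAT);

    # Key the store on the ID and keep the rest of the row as the value,
    # so a lookup by ID is a single hash fetch.
    tie my %store, 'DB_File', 'records.db', O_RDWR | O_CREAT, 0644, $DB_HASH
        or die "Cannot open records.db: $!";

    # Populate once, e.g. while reading through the CSV...
    $store{'ABC123'} = 'Boston,active,42';

    # ...then lookups stay fast even with tens of millions of keys.
    my $key = 'ABC123';
    print "$key => $store{$key}\n" if exists $store{$key};

    untie %store;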

If you have more complex needs, you may want to consider an SQL database like MySQL, PostgreSQL, Oracle, Sybase, and so forth. SQL database tuning and design is an entire field of study on its own. I will offer a bit of advice, though, and suggest that you think very carefully about which indexes you can apply to your fields so that you can maximize query speed.
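
For example, if the hypothetical searches filter on city and status columns, a composite index covering both can be created through DBI:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:mysql:database=mydb', 'user', 'password',
                           { RaiseError => 1 });

    # Composite index matching a query that filters on both columns.
    $dbh->do('CREATE INDEX idx_records_city_status ON records (city, status)');

    $dbh->disconnect;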

daotoad
+3  A: 

PostgreSQL has the ability to import CSV files:
http://www.postgresql.org/docs/current/static/sql-copy.html
The COPY command is also more efficient than committing 36M inserts, one at a time.
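
A sketch of streaming the file through COPY with DBD::Pg, assuming a database named mydb, a table named records that already matches the CSV layout, and a header line in the file:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('DBI:Pg:dbname=mydb', 'user', 'password',
                           { RaiseError => 1 });

    # Stream the file through COPY instead of issuing millions of INSERTs.
    $dbh->do(q{COPY records FROM STDIN WITH (FORMAT csv, HEADER true)});

    open my $fh, '<', 'data.csv' or die "data.csv: $!";
    while (my $line = <$fh>) {
        $dbh->pg_putcopydata($line);
    }
    close $fh;

    $dbh->pg_putcopyend();
    $dbh->disconnect;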

You should look into ways to import the data once you decide on a DBMS. With that many records I'd steer clear of MySQL, though.

If the data is not relational and will only get larger, you might want to look into Hadoop or some other form of MapReduce. It can turn those 30-minute queries into 5-minute ones.

vol7ron
Can you give reasons why MySQL shouldn't be used for 36 million records? It's best to give reasons for your assertions instead of just throwing them out there.
CanSpice
Being cost-minded, I used to use MySQL predominantly because it was part of the default setup that hosting companies offered (LAMP: Linux/Apache/MySQL/PHP). It served its purpose, but I also noticed that certain records weren't being accounted for, and it's not the most efficient database available. PostgreSQL is a free alternative that has many more options and is both reliable and efficient.
vol7ron
There is an additional concern I have since Oracle acquired MySQL (through its purchase of Sun). It's questionable how Oracle will treat it: they could apply some of their proprietary engineering and improve it, or they could phase it out entirely (more likely). Regardless, PostgreSQL has a great user and developer community. While it is a great free database, Oracle, SQL Server, and DB2 all have their advantages as well, but at a cost.
vol7ron