We're adding extra login information to an existing database record, on the order of 3.85 KB per login.

There are two concerns about this:

1) Is this too much on-the-wire data added per login?

2) Is this too much extra data we're storing in the database per login?

Given today's technology, are these valid concerns?

Background:

We don't have concrete usage figures, but we average about 5,000 logins per month. We hope to scale to larger customers; however, that would still be in the tens of thousands of logins per month, not thousands per second.

In the US (our market), broadband adoption is around 60%.
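
To put concern #1 in perspective, here is a rough back-of-envelope estimate of how long 3.85 KB takes to transfer at a few assumed connection speeds (the speeds are illustrative, not measurements from our users):

```python
# Rough transfer time for the extra 3.85 KB per login at a few
# assumed connection speeds (illustrative figures only).
EXTRA_BITS = 3.85 * 1024 * 8  # extra payload per login, in bits

for label, bits_per_second in [("dial-up, 56 kbit/s", 56_000),
                               ("slow broadband, 1 Mbit/s", 1_000_000),
                               ("typical broadband, 10 Mbit/s", 10_000_000)]:
    ms = EXTRA_BITS / bits_per_second * 1000
    print(f"{label}: ~{ms:.0f} ms extra per login")
```

Even on dial-up this adds roughly half a second per login; on broadband it is a few tens of milliseconds at most.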

+1  A: 

How many users do you have? How often do they have to log in? Are they likely to be on fast connections, or damp pieces of string? Do you mean you're really adding 3.85 KB each time someone logs in, or 3.85 KB per user account? How long do you have to store the data? What benefit does it give you? How does it compare with the amount of data you're already storing? (i.e. is most of your data going to be due to this new part, or will it be a drop in the ocean?)

In short - this is a very context-sensitive question :)

Jon Skeet
Context added, my good sir.
Alan
The question text does say the data is added per login :)
Alan
"Login" can mean different things to different people. Some people refer to a "login" as their username. Others mean a specific login attempt.
Jon Skeet
well those people are fools. :D
Alan
+1  A: 

Given that storage and hardware are SOOO cheap these days (relatively speaking, of course), this should not be a concern. Obviously, if you need the data then you need the data! There are lots of ways to keep a large data set optimized:

1) Use replication to several locations so the added data doesn't have to travel as far over the wire (e.g., a server on the west coast and one on the east coast).

2) Separate your data by state to minimize the size of your tables, similar to what banks do: the user chooses a state as part of the login process so the application looks in the right data store.

3) Use horizontal partitioning to minimize the number of records per table and keep your queries speedy (see the sketch below).

Also check into Lucene if you plan to do lots of reads against this data.
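
To illustrate the horizontal-partitioning point, here is a minimal sketch of routing login records to a partition chosen by state; the partition names, state groupings, and record layout are all hypothetical:

```python
# Minimal sketch: route each login record to a horizontal partition
# (shard) keyed by US state. Partition names and state groupings are
# hypothetical, purely to illustrate the routing idea.
PARTITIONS = {
    "west": {"CA", "OR", "WA", "NV", "AZ"},
    "east": {"NY", "NJ", "MA", "FL", "GA"},
}

def partition_for_state(state: str) -> str:
    """Return the partition that stores records for the given state."""
    for name, states in PARTITIONS.items():
        if state in states:
            return name
    return "other"  # anything not explicitly mapped

# At login time, write the extra per-login data to the right partition:
record = {"user": "alice", "state": "CA", "extra_kb": 3.85}
print(partition_for_state(record["state"]))  # -> west
```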

Andrew Siemer
+3  A: 

Assuming you have ~80,000 logins per month, you would be adding roughly 300 MB per month, or about 3.7 GB per YEAR, to your database table.
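
Making the arithmetic explicit (the 80,000 logins/month is the assumption stated above; 3.85 KB per login comes from the question):

```python
# Storage growth from 80,000 logins/month at 3.85 KB per login.
LOGINS_PER_MONTH = 80_000
KB_PER_LOGIN = 3.85

monthly_mb = LOGINS_PER_MONTH * KB_PER_LOGIN / 1_000
yearly_gb = monthly_mb * 12 / 1_000
print(f"~{monthly_mb:.0f} MB/month, ~{yearly_gb:.1f} GB/year")
# -> ~308 MB/month, ~3.7 GB/year
```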

If you are using a decent RDBMS like MySQL, PostgreSQL, SQL Server, Oracle, etc., this is a laughable amount of data and traffic. After several years, you might want to start looking at archiving some of it. But by then, who knows what the application will look like?

It's always important to consider how you are going to be querying this data, so that you don't run into performance bottlenecks. Without those details, I cannot comment very usefully on that aspect.

But to answer your concern, do not be concerned. Just always keep thinking ahead.

gahooa
A: 

In terms of today's average server technology it's not a problem. In terms of your server technology it could be a problem. You need to provide more info.

ilya n.