tags:
views: 96
answers: 5

hey guys

I want to write a tracking system, and right now I can save to my MySQL database. But saving information about every IP that visits is a lot of work for MySQL.

So I'm thinking that if I could save the information to a file instead, there'd be no database and its problems to worry about.

But to begin with: I really don't know how to save to a file in a way that I can read back without problems and show the details.

I need to show information about all the IPs in rows, after saving to a file.

What should I do to save the data and show it in row order (as a table)?

php/mysql

+4  A: 

I don't understand why you think that reading and writing this data to a file yourself is going to be faster than a database.

What you are essentially saying, right, is that you can write code that will do exactly what the database does, but better?

I think most people will agree with me: you will see major speed problems if you do NOT go with the database on this one.

Narcissus
+1, this is what databases are for - to move loads of data around
stereofrog
Writing to file will be quicker than writing to DB. e.g. IIS can log to SQL Server but "Microsoft does not recommend IIS logging to a SQL Server table if the IIS computer is a busy server.". However as VolkerK points out concurrency will be an issue.
Martin Smith
@Martin: that's interesting about not logging to SQL server. I can see how maybe appending to a small file could be faster than using a DB but I have no doubt that once you get to a certain level, the 'management' of massive files is outweighed by the DB connection. Plus, of course, there's the concurrency issue. Either way, you made a great point about the server already logging... I gave you an upvote :)
Narcissus
I think how IIS does it is to have a buffer in memory and just write to the log file in chunks. Obviously the log is written sequentially so it is quite fast. I don't know enough about PHP to know how easy or difficult that would be to implement in code.
Martin Smith
"this is what databases are for - to move loads of data around" - I have to disagree on this one (in parts). With relational databases, and MySQL in particular, you try to avoid moving lots of data as much as you can. And this seems to be a write-many-read-seldom problem. Not a _particularly_ strong suit of MySQL. But the question still remains: can _you_ implement a faster thing in PHP?
VolkerK
Databases introduce a lot of overhead, in the name of "ACID": atomicity, consistency, isolation, and durability. Writing lines out directly to a log file is much faster. And yes, it's error prone -- with high load, you'll see occasional collisions. But for a log file, that's usually just fine.
Frank Farmer
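For what it's worth, appending with an exclusive lock is one way to keep those concurrent writes from interleaving lines. A minimal sketch (the file name and fields are made up, not from the question):

```php
<?php
// Hypothetical log location and fields; adjust to the real tracker.
$file = sys_get_temp_dir() . '/visits.log';
$ip   = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '127.0.0.1';

// FILE_APPEND adds to the end of the file instead of truncating it;
// LOCK_EX holds an exclusive advisory lock for the duration of the
// write, so lines from concurrent requests don't get interleaved.
$line = date('c') . "\t\t" . $ip . "\n";
file_put_contents($file, $line, FILE_APPEND | LOCK_EX);
```

That's one syscall-ish operation per hit and no connection handshake, which is the whole appeal of the file approach.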
+4  A: 

What webserver are you using? Doesn't that have built in logging that you can use?

Martin Smith
+1 for a very pragmatic answer
middus
A lot of shared hosts delete access logs monthly, or weekly, depending upon their configuration. I can see the need for this. A lot of stats analyzers that they use read daily from access logs and store their own data. It's handy to be able to use SQL to access these, especially if some manager is demanding various views. Perhaps the whole world should just use Google?
Tim Post
Good point on shared hosting. I guess it depends what access the OP has to his logs and whether he needs to do real time queries against the data or can just import in periodic batches.
Martin Smith
+1  A: 

If I understood your question correctly, you want to create a log file.

You could always do something like this if you really need to save the data to a file.

To write the data to a file, do the following:

$file = 'logfile.txt';

// 'a' appends to an existing file and creates it if it doesn't exist
// yet, so there is no need to check file_exists() first. Opening with
// 'w' would truncate the log on every hit.
$fh = fopen($file, 'a');

// $dt, $ipaddr, $hostnm, $referer, $pg, $pagetitle and $dbi are
// assumed to be set earlier in the script.
$data = $dt."\t\t".$ipaddr."\t\t".$hostnm."\t\t".$referer."\t\t".$pg."\t\t".$pagetitle."\t\t".$dbi."\n";

fwrite($fh, $data);

fclose($fh);

To read the data from the file, use the PHP file() function, which reads the file into an array with one element per line. Then you can search the array for the relevant data; read more about it here: http://php.net/manual/en/function.file.php
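For showing the rows as a table, a sketch along these lines could work. It assumes the double-tab separator used in the write snippet above; render_log_table() is a made-up helper name:

```php
<?php
// Render the tab-separated log as an HTML table.
// Assumes records joined with "\t\t", one record per line.
function render_log_table($file) {
    $html = "<table>\n";
    foreach (file($file, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        $html .= '<tr>';
        foreach (explode("\t\t", $line) as $field) {
            // Escape each field so logged referers etc. can't inject HTML.
            $html .= '<td>' . htmlspecialchars($field) . '</td>';
        }
        $html .= "</tr>\n";
    }
    return $html . "</table>\n";
}
```

Then `echo render_log_table('logfile.txt');` on the stats page.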

Roland
A: 

I always store all the data about users in the database, and it works fast and is not expensive. There are many ways to write the data to a file, but do you need them? I think not.

Syom
A: 

Is there any particular reason you can't use the server's own logs as a data store? It's pretty trivial to tear apart an IIS or Apache access_log and parse out the information you want.

If you're only interested in a certain sub-set of the access log (i.e. hits on actual pages, ignoring images/javascript/css/flash/whatever files), you can parse out just those particular hits. Then take that reduced data set and stuff it into MySQL on another server.

After that, the database is a far better place to store the relevant hit data, as it can do the data mining for you, and handle all the grouping/counting without breaking a sweat. Think of what happens when you load a few hundred million hits and try to keep track of per-IP data - your in-memory data set will grow quite quickly and may exceed the available memory. Doing the summing/counting/averaging/etc... in the database will handle all such concerns for you; you just have to worry about interpreting the results afterwards.
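As an illustration of that parse-then-load idea, a simplified Common Log Format matcher might look like this (the regex covers only the basic format, not combined logs, and the sample line is invented):

```php
<?php
// A sample line in Apache Common Log Format.
$line = '127.0.0.1 - - [10/Oct/2000:13:55:36 -0700] "GET /index.php HTTP/1.0" 200 2326';

// Capture IP, timestamp, method, path and status; a sketch,
// good enough for well-formed lines but not a full CLF parser.
$pattern = '/^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) \S+/';

if (preg_match($pattern, $line, $m)) {
    list(, $ip, $time, $method, $path, $status) = $m;
    // Skip static assets before handing the hit to MySQL.
    if (!preg_match('/\.(?:png|gif|jpe?g|css|js|ico)$/i', $path)) {
        echo "$ip $method $path $status\n";  // prints "127.0.0.1 GET /index.php 200"
    }
}
```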

And just as an optimization hint: store your IP addresses as an unsigned integer field, not varchar(15). They're really just 32-bit numbers. The overhead of the one-time initial ASCII->numeric translation will be utterly trivial next to the repeated hits in the analysis stage when you're trying to apply subnet masks and whatnot:

INSERT INTO log (ip) VALUES (INET_ATON('10.0.0.1'));
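On the PHP side, ip2long() and long2ip() do the same translation. One caveat: ip2long() returns a signed integer on 32-bit PHP builds, so it's common to format with '%u' before inserting into an INT UNSIGNED column (a sketch, IPv4 only):

```php
<?php
$ip = '10.0.0.1';

// ip2long() converts dotted-quad to the underlying 32-bit value.
// '%u' prints it unsigned, which matters on 32-bit PHP builds where
// addresses above 128.0.0.0 would otherwise come out negative.
$packed = sprintf('%u', ip2long($ip));   // '167772161'

// And back again for display:
$readable = long2ip((int) $packed);      // '10.0.0.1'
```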
Marc B
"They're really just 32bit numbers." ....and suddenly there was ipv6 ;-)
VolkerK
True enough. Hopefully by the time ISPs actually roll out IPv6 universally, MySQL will have rolled out a 128-bit datatype as well.
Marc B