Is there any particular reason you can't use the server's own logs as a data store? It's pretty trivial to tear apart an IIS or Apache access_log and parse out the information you want.
If you're only interested in a certain subset of the access log (i.e. hits on actual pages, ignoring images/javascript/css/flash/other static files), you can parse out just those hits, then take that reduced data set and stuff it into MySQL on another server.
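As a rough sketch of what that reduced data might look like (the table and column names here are just placeholders, not anything from your setup), something like this is plenty:

CREATE TABLE log (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    hit_time  DATETIME     NOT NULL,
    ip        INT UNSIGNED NOT NULL,   -- stored as an integer, see the INET_ATON note below
    url       VARCHAR(255) NOT NULL,
    KEY idx_ip (ip),
    KEY idx_time (hit_time)
);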
After that, the database is a far better place to store the relevant hit data, as it can do the data mining for you and handle all the grouping/counting without breaking a sweat. Think of what happens when you load a few hundred million hits and try to keep per-IP tallies yourself - your in-memory data set grows quickly and may exceed available memory. Doing the summing/counting/averaging in the database handles all of that for you; you just have to worry about interpreting the results afterwards.
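For example, a per-IP hit count over the whole data set is a single query (assuming the placeholder log table sketched above):

SELECT INET_NTOA(ip) AS ip, COUNT(*) AS hits
FROM log
GROUP BY ip
ORDER BY hits DESC
LIMIT 100;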
And just as an optimization hint: store your IP addresses as an unsigned integer column, not varchar(15) - an IPv4 address is really just a 32-bit number. The overhead of the one-time ASCII-to-numeric conversion on insert is utterly trivial next to the repeated comparisons in the analysis stage, when you're trying to apply subnet masks and whatnot:
INSERT INTO log (ip) VALUES (INET_ATON('10.0.0.1'));
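Once the addresses are integers, subnet filtering is just bitwise arithmetic. A sketch, again against the placeholder table above, pulling everything in 10.0.0.0/24:

SELECT INET_NTOA(ip) AS ip, COUNT(*) AS hits
FROM log
WHERE ip & INET_ATON('255.255.255.0') = INET_ATON('10.0.0.0')
GROUP BY ip;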