Not entirely correct: the first hundred times your script executes, everything will probably fit into memory fine, so the first two minutes or so might go as expected. But once you push your computer into swap, it will spend so much time handling swap that the next 999,800 executions might go significantly slower than you'd expect. And as they all start competing for disk bandwidth, things will get much worse the longer it runs.
I'm also not sure about the use of PHP's memory_get_peak_usage() function; it gives an 'internal' view of the memory the program requires, not the view from the operating system's perspective. The real footprint might be significantly worse. (Perhaps the interpreter needs 20 MB of RSS just to run a hello-world. Perhaps not.)
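If you want to see the gap for yourself, something like this compares the two views; it is a Linux-only sketch, since it reads /proc for the OS-side number:

```php
<?php
// Internal view: peak memory the PHP allocator has handed out (bytes).
$internal = memory_get_peak_usage(true);

// OS view: resident set size from /proc/self/status (Linux; value is in kB).
$status = file_get_contents('/proc/self/status');
preg_match('/^VmRSS:\s+(\d+)\s+kB/m', $status, $m);
$rss = (int)$m[1] * 1024;

printf("peak internal: %.1f MB, OS RSS: %.1f MB\n",
       $internal / 1048576, $rss / 1048576);
```

On most setups the RSS figure will be noticeably larger, since it includes the interpreter itself, loaded extensions, and shared libraries.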
I'm not sure what the best way forward for your application would be. Maybe it could be a single long-lived process that handles events as they are posted and returns results; that might run in significantly less memory. Maybe the results don't actually change every 0.0005 seconds, and you could cache them for one second, so the computation runs only 86,400 times per day. Or maybe you need to buy a few more machines. :)
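The one-second cache idea could be as simple as this sketch; compute_result and the TTL are placeholders, and a static variable only helps inside a long-lived process (for short-lived scripts you'd want a shared cache like APCu or memcached instead):

```php
<?php
// Hypothetical sketch: return a cached result if it is less than $ttl
// seconds old, recompute otherwise. $compute_result stands in for the
// expensive work your script currently repeats on every run.
function get_result(callable $compute_result, int $ttl = 1) {
    static $cached = null, $cachedAt = 0;
    if ($cached === null || time() - $cachedAt >= $ttl) {
        $cached = $compute_result();
        $cachedAt = time();
    }
    return $cached;
}
```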