I'm well aware that a single Miller-Rabin round runs in time cubic in the bit length of the input (with schoolbook arithmetic). I know about Montgomery modular exponentiation and GNFS, and I'm not asking about any of that fancy theory. What I am wondering is: what are some representative runtimes for an MR test (note that this is not the same as an RSA operation) on typical hardware (e.g., a 2.2 GHz Opteron, such-and-such graphics card, or an FPGA)?
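For anyone who wants a quick ballpark on their own machine, here is a minimal sketch of a single-round Miller-Rabin test plus a crude timing loop. The implementation and the bit sizes chosen are just illustrative assumptions, not a calibrated benchmark; Python's `pow(a, d, n)` does the modular exponentiation that dominates the cost.

```python
import random
import time

def miller_rabin(n, rounds=1):
    """Probabilistic Miller-Rabin primality test with the given number of rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # quick trial division by small primes
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)             # modular exponentiation: the expensive step
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a is a witness: n is composite
    return True                      # probably prime

if __name__ == "__main__":
    # Time single rounds on random odd numbers of a few illustrative bit sizes.
    for bits in (512, 1024, 2048):
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        trials = 100
        t0 = time.perf_counter()
        for _ in range(trials):
            miller_rabin(n, rounds=1)
        per_round = (time.perf_counter() - t0) / trials
        print(f"{bits}-bit candidate: {per_round * 1e6:.1f} us per round")
```

The per-round times this prints depend heavily on the interpreter and CPU, so treat them only as order-of-magnitude figures, not as answers for the Opteron/GPU/FPGA cases asked about above.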