I used to think like you do, but I also heavily monitor my execution times. Since changing to a system that calls FileExists several times per request, I've noticed no measurable difference (0ms) in page load times. It's entirely probable that any frequent lookup on a given file gets cached somewhere along the way, whether in the operating system, the SCSI drivers, or the drive hardware itself. It's even more likely that lookup times on SCSI hardware are sub-millisecond to begin with.
Given that I use cfinclude heavily, it's no surprise that one more file lookup doesn't even show up on the radar.
The reality is that a FileExists call likely has more overhead than a variable lookup, but we're probably talking about a 0.0001 millisecond difference unless you have millions of files in a directory or you're running your web server off IDE disks or something equally silly. The additional code complexity probably isn't worth the savings unless you're /. or Apple or something.
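If you'd rather not take my word for it, a quick micro-benchmark is easy to write. This is just a minimal cfscript sketch; the path and iteration count are arbitrary assumptions, so point it at a file that actually exists on your server:

```cfm
<cfscript>
// Time repeated FileExists() calls against one path.
// Assumption: ./index.cfm exists relative to this template.
testPath   = expandPath("./index.cfm");
iterations = 10000;

start = getTickCount();
for (i = 1; i LTE iterations; i = i + 1) {
    found = fileExists(testPath);
}
elapsed = getTickCount() - start;

writeOutput("Total: " & elapsed & " ms, average per call: "
            & (elapsed / iterations) & " ms");
</cfscript>
```

On my boxes the per-call figure is effectively zero once the OS has the path cached, which is what the numbers I mentioned above reflect.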
To those who say it's poor architecture, I beg to differ. In the mid to long term you can buy processors far more cheaply than developer time. A system that only requires changing files is simpler than one that requires changing files AND variables, plus dealing with the additional complexity. Like most things in the optimization category, tasks that save a few ms can eat up hours that would be better spent on more effective measures, like improving your cache.