On a Unix system, when I assemble extremely large collections of files (more than a thousand, typically data files generated from somewhere), I will generally spread them across separate subdirectories, usually with a naming scheme derived from the filenames themselves (such as /2/25/257689.xml). A single directory holding tens of thousands of files causes general management problems (a plain "ls" can take many minutes, for example), and some filesystems degrade internally at that scale as well.
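A minimal sketch of that kind of fan-out scheme, assuming purely digit-based filenames like the example above (shardedPath and its parameters are hypothetical names, not from any library):

```php
<?php
// Map a data-file name like "257689.xml" to a fan-out path such as
// "2/25/257689.xml", using prefixes of the base name as directory levels.
function shardedPath(string $filename, int $levels = 2): string
{
    $base = pathinfo($filename, PATHINFO_FILENAME); // "257689"
    $parts = [];
    for ($i = 1; $i <= $levels; $i++) {
        $parts[] = substr($base, 0, $i);            // "2", then "25"
    }
    $parts[] = $filename;
    return implode('/', $parts);
}

echo shardedPath('257689.xml'); // prints: 2/25/257689.xml
```

With two levels, each directory ends up with at most a few hundred entries instead of tens of thousands, which keeps listings and lookups fast.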
But for a web app accessing no more than a few hundred files from an htdocs directory, there is no performance impact of any significance, unless there is something horrendously wrong in the PHP engine's architecture.