I wonder if this is a good topic. First of all, the results are pretty obvious: games that don't use the GPU for physics will show no effect, and for the ones that do, the relationship will be "more physics -> more power needed -> more heat -> slower GPU". Although if the cooling solution is adequate (the factory default should do unless you overclock), you will probably not notice any slowdown, as the card will be able to run at its maximum clock while keeping the temperature low enough not to trigger thermal throttling.
Now, if you simply want to explore the heat-performance relationship of GPUs, it's again the same song: more heat, slower work. Although to actually notice this you will have to reduce the cooling enough to push the card past its throttling threshold. There's also a fair chance that before you see any slowdown you will get artifacts and/or a BSOD, making further testing impossible.
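If you have an NVIDIA card, it's worth checking what the slowdown and shutdown temperatures actually are before you start pulling fans off. A minimal sketch, assuming `nvidia-smi` is installed and on your PATH (AMD's tools expose similar figures, just through different commands):

```python
# Sketch: print the GPU's thermal thresholds via nvidia-smi (NVIDIA only).
# Assumes nvidia-smi is on PATH; exact labels can vary by driver version.
import subprocess

def print_thermal_thresholds() -> None:
    # "-q -d TEMPERATURE" dumps the temperature section of the full query,
    # which includes the slowdown and shutdown thresholds on most cards.
    out = subprocess.run(
        ["nvidia-smi", "-q", "-d", "TEMPERATURE"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        # e.g. "GPU Current Temp", "GPU Slowdown Temp", "GPU Shutdown Temp"
        if "Temp" in line:
            print(line.strip())

if __name__ == "__main__":
    print_thermal_thresholds()
```

That tells you roughly how far you are from the point where the card will either throttle or shut itself down.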
As for the methods - I'd use an infrared thermometer (the laser-pointer kind) and a cooler with adjustable speed (preferably down to 0 rpm); you'll also need some readout of the current fan rpm. (The GPU's own on-die temperature sensor is arguably the more relevant reading, and most cards expose it through their driver tools.) Then you just run some GPU-heavy software that displays FPS and play around with the fan dial, the thermometer, and the FPS counter. Note that for a fair measurement the software should put a steady load on the GPU. I'd advise standing still and looking at a static (albeit complex) scene; otherwise the FPS might change due to changes in scene complexity rather than GPU temperature.
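If you'd rather log the numbers than eyeball them, something like the sketch below works on NVIDIA cards (again assuming `nvidia-smi` is available; the FPS you still read off whatever overlay your test software provides):

```python
# Sketch: poll GPU temperature, graphics clock, power draw and fan speed once
# per second and append them to a CSV, so the temperature/clock relationship
# can be plotted later. Assumes an NVIDIA card with nvidia-smi on PATH.
import csv
import subprocess
import time

FIELDS = ["temperature.gpu", "clocks.gr", "power.draw", "fan.speed"]

def sample() -> list[str]:
    # One CSV line per call, e.g. "72, 1850, 210.35, 55"
    out = subprocess.run(
        ["nvidia-smi",
         f"--query-gpu={','.join(FIELDS)}",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return [v.strip() for v in out.split(",")]

def main(path: str = "gpu_log.csv", interval_s: float = 1.0) -> None:
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time_s"] + FIELDS)
        start = time.time()
        while True:  # stop with Ctrl+C once the run is done
            writer.writerow([f"{time.time() - start:.1f}"] + sample())
            f.flush()
            time.sleep(interval_s)

if __name__ == "__main__":
    main()
```

If the card is throttling, you should see the graphics clock drop as the temperature climbs toward the slowdown threshold, with the FPS counter in your static test scene dropping along with it.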
Although, as I said, the results are pretty obvious from the start...