First see if there are any patterns you can take advantage of. For instance, is the top-left or top-right corner always going to be the background colour? If so, just look at the colour of that pixel.
Maybe you can get a "good enough" idea by looking at some key pixels and averaging them.
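As a rough illustration, here is a minimal Python sketch of the corner/key-pixel check, assuming the image is already available as a nested list of (R, G, B) tuples (in PHP you would read pixels with imagecolorat instead). The 383 threshold is just the midpoint of the 0..765 summed-channel range, not a tuned value:

```python
def brightness(pixel):
    # Crude lightness measure: sum of the R, G and B channels (0..765).
    r, g, b = pixel
    return r + g + b

def is_light_by_corners(image, threshold=383):
    # Sample the four corner pixels and average their brightness.
    # threshold=383 is an assumed midpoint of the 0..765 range.
    h, w = len(image), len(image[0])
    corners = [image[0][0], image[0][w - 1],
               image[h - 1][0], image[h - 1][w - 1]]
    avg = sum(brightness(p) for p in corners) / len(corners)
    return avg > threshold
```

Swap the corner list for whichever "key pixels" suit your images; the averaging idea is the same.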
Failing something simple like that, the work you need to do starts to rise by orders of magnitude.
One nice idea I had: take the strip of pixels running diagonally from the top-left corner to the bottom-right corner (have a look at Bresenham's line algorithm for walking that line in a non-square image). Look for runs of dark and light colour and take the longest run; if that doesn't work, you could "score" runs based on how light or dark they are.
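The diagonal-run idea could be sketched like this in Python, again assuming a nested list of (R, G, B) tuples and an arbitrary 383 light/dark cut-off:

```python
def diagonal_pixels(image):
    # Bresenham-style walk from the top-left to the bottom-right corner,
    # so non-square images still yield one pixel per step.
    h, w = len(image), len(image[0])
    x, y = 0, 0
    x1, y1 = w - 1, h - 1
    dx, dy = abs(x1), -abs(y1)
    err = dx + dy
    while True:
        yield image[y][x]
        if x == x1 and y == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x += 1
        if e2 <= dx:
            err += dx
            y += 1

def longest_run_is_light(image, threshold=383):
    # Classify each diagonal pixel as light or dark, track runs,
    # and report whether the longest run was light.
    best_len, best_light = 0, False
    run_len, run_light = 0, None
    for r, g, b in diagonal_pixels(image):
        light = (r + g + b) > threshold
        if light == run_light:
            run_len += 1
        else:
            run_len, run_light = 1, light
        if run_len > best_len:
            best_len, best_light = run_len, run_light
    return best_light
```

The "scoring" refinement would replace the simple run length with a sum of how far each pixel sits from the threshold.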
If your image is unnecessarily large (say 1000x1000 or more) then use imagecopyresized to cheaply scale it down to something reasonable (say 80x80).
Something that will work if MOST of the image is the background colour is to resample the image down to 1 pixel and check the colour of that pixel (or resample to something small, say 4x4, and then count pixels to see whether the image is predominantly light or dark).
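In PHP you would do the shrinking with imagecopyresampled itself; as a language-neutral sketch, here is a crude box-filter resample to a 4x4 thumbnail followed by the light/dark count, with the same assumed 383 threshold as above:

```python
def shrink(image, size=4):
    # Crude box-filter resample: average each block of source pixels
    # down to one output pixel (roughly what resampling does).
    h, w = len(image), len(image[0])
    out = []
    for by in range(size):
        row = []
        for bx in range(size):
            block = [image[y][x]
                     for y in range(by * h // size, (by + 1) * h // size)
                     for x in range(bx * w // size, (bx + 1) * w // size)]
            n = len(block)
            row.append(tuple(sum(p[i] for p in block) // n for i in range(3)))
        out.append(row)
    return out

def predominantly_light(image, threshold=383):
    # Count light pixels in the 4x4 thumbnail; majority wins.
    tiny = shrink(image, 4)
    light = sum(1 for row in tiny for (r, g, b) in row
                if r + g + b > threshold)
    return light >= 8  # at least half of the 16 thumbnail pixels
```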
Note that imagecopyresampled is considerably more expensive than imagecopyresized, since 'resized just takes individual pixels from the original whereas 'resampled actually blends the pixels together.
If you want a measure of "lightness" you could simply add the R, G and B values together. Or you could go for the formula for luma used in YCbCr:
Y' = 0.299 * R + 0.587 * G + 0.114 * B
This gives a more "human-centric" measure of lightness.
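The luma formula translates directly; note how green dominates the result, matching the eye's sensitivity:

```python
def luma(r, g, b):
    # Rec. 601 luma: perceptual weights for R, G and B (sums to 1.0),
    # so the result stays in the same 0..255 range as the inputs.
    return 0.299 * r + 0.587 * g + 0.114 * b
```

For example, pure green scores far higher than pure blue at the same channel value, unlike the plain R+G+B sum, which treats them equally.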