I'd like to use the camera in my MacBook from a program. I'm fairly language-agnostic: C, Java, Python, etc. are all fine. Could anyone suggest the best place to look for documentation or some "Hello world"-type sample code?
The ImageKit framework in Leopard has an IKPictureTaker class that will let you run the standard picture-taking sheet or panel that you've seen in iChat and other applications.
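As a minimal sketch of what that looks like, here driven from Python through the PyObjC bridge (assuming PyObjC is installed; ImageKit is exposed through its Quartz package, and the calls map one-to-one onto the Objective-C API):

from AppKit import NSApplication, NSOKButton
from Quartz import IKPictureTaker

NSApplication.sharedApplication()       # ImageKit needs an application context
taker = IKPictureTaker.pictureTaker()   # the shared picture-taker panel
if taker.runModal() == NSOKButton:      # blocks until the user clicks OK or Cancel
    image = taker.outputImage()         # NSImage captured from the iSight
    print image.size()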
If you don't want to use the standard picture-taker panel/sheet interface, you can use the QTKit Capture functionality to get an image from the iSight.
Both of these will require writing some Cocoa code, natively in Objective-C or through a bridge such as PyObjC, but that shouldn't really be an obstacle these days.
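For instance, here is a rough sketch of the QTKit Capture route from Python via PyObjC (assuming the PyObjC QTKit bindings are installed; the class and selector names are the real QTKit Capture API, and a real program would run this inside a Cocoa run loop):

from Foundation import NSURL
import QTKit

session = QTKit.QTCaptureSession.alloc().init()

# the built-in iSight is the default video input device
camera = QTKit.QTCaptureDevice.defaultInputDeviceWithMediaType_(QTKit.QTMediaTypeVideo)
camera.open_(None)
session.addInput_error_(
    QTKit.QTCaptureDeviceInput.deviceInputWithDevice_(camera), None)

# record what the camera sees to a QuickTime movie file
output = QTKit.QTCaptureMovieFileOutput.alloc().init()
session.addOutput_error_(output, None)

session.startRunning()
output.recordToOutputFileURL_(NSURL.fileURLWithPath_('/tmp/isight.mov'))
# ...later: output.recordToOutputFileURL_(None) stops recording,
# and session.stopRunning() tears the session down.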
There is a utility called isightcapture that runs from the Unix command line; it takes a picture with the iSight camera and saves it to a file.
You can check it out at this web site: http://www.macupdate.com/info.php/id/18598
An example of using this with AppleScript is:
tell application "Terminal"
    do script "/Applications/isightcapture myimage.jpg"
end tell
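If you'd rather call it from Python than AppleScript, a plain subprocess call does the same thing (assuming isightcapture is installed at the path used above):

import subprocess

# shell out to isightcapture and wait for the JPEG to be written
subprocess.check_call(['/Applications/isightcapture', 'myimage.jpg'])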
If you want to manipulate the camera directly from your code, you'll need to use the QuickTime Capture APIs or the Cocoa QTKit Capture wrapper (which is much nicer to work with).
The only caveat is: if you use a QTCaptureDecompressedVideoOutput, remember that the callbacks aren't made on the main thread, but on the QuickTime-managed capture thread. Use [someObject performSelectorOnMainThread:... withObject:... waitUntilDone:NO] to send messages to an object on the main thread.
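In PyObjC terms the same pattern looks roughly like this; the callback is the real QTCaptureDecompressedVideoOutput delegate method, while handleFrame_ is just a hypothetical main-thread handler you'd write yourself:

from Foundation import NSObject

class FrameDelegate(NSObject):
    # Called on QuickTime's capture thread, not the main thread.
    def captureOutput_didOutputVideoFrame_withSampleBuffer_fromConnection_(
            self, output, videoFrame, sampleBuffer, connection):
        # Bounce the frame over to the main thread before touching any UI
        # or other main-thread-only state.
        self.performSelectorOnMainThread_withObject_waitUntilDone_(
            'handleFrame:', videoFrame, False)

    def handleFrame_(self, videoFrame):
        pass  # hypothetical handler; runs on the main thread

Hook it up with output.setDelegate_(FrameDelegate.alloc().init()) on a QTCaptureDecompressedVideoOutput attached to your capture session.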
Quartz Composer is also a pleasant way to capture and work with video, when it's applicable. There's a video input patch.
Quartz Composer is a visual programming environment that can be embedded in a larger Cocoa program if need be.
http://developer.apple.com/graphicsimaging/quartz/quartzcomposer.html
From a related question that specifically asked for a Pythonic solution: give motmot's camiface library from Andrew Straw a try. It works with FireWire cameras, but it also works with the iSight, which is what you are looking for.
From the tutorial:
import motmot.cam_iface.cam_iface_ctypes as cam_iface
import numpy as np

mode_num = 0      # first video mode offered by the camera
device_num = 0    # first camera found (the built-in iSight)
num_buffers = 32  # number of frame buffers to allocate

cam = cam_iface.Camera(device_num, num_buffers, mode_num)
cam.start_camera()
frame = np.asarray(cam.grab_next_frame_blocking())
print 'grabbed frame with shape %s' % (frame.shape,)
It is used in this sample neuroscience demo.