Disclosure:
I do not have any exposure to Hadoop. Some of the descriptions below may fall short of "technically accurate", and this would be due both to my own ignorance and to my attempt at putting things in simple terms, in the spirit of Rachel's original question and the various "redirects" following Andrew Hare's responses.
Please feel free to leave remarks and annotations (or even to directly edit this response) if you can improve the accuracy of the text while keeping the material accessible to non-practitioners.
Hadoop allows running distributed applications "in the cloud", i.e. on a dynamic collection of commodity hardware hosts.
In very broad terms, the idea is to run a task in parallel fashion, without having to worry [much] about where the individual portions of the task take place, or where the data files associated with the task (whether read or written) reside.
Examples: what can and what cannot be "Hadooped"
Search engines are a typical example of a task that lends itself to being handled by Hadoop or similar systems, but other examples include big sorting jobs, document categorization/clustering, SVD and other linear algebra operations (obviously with "big" matrices), etc. A required characteristic of the tasks to be handled with Hadoop, however, is that they can be described in terms of two functions: a Map() function, which is used to split the task into several structurally identical pieces, and a Reduce() function, which takes the individual "pieces" and processes them, producing results that can be merged together.
The idea behind the MapReduce model is that it provides a generic and clear way of describing anything that can be processed in parallel. This helps both the application developers, who just provide the Map/Reduce function pair but otherwise do not have to worry about synchronization details etc., and the Hadoop system itself, which can then handle every job in the very same fashion (even though the implementations of the Map/Reduce functions may vary tremendously).
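To make the Map()/Reduce() pair a little more concrete, here is a minimal sketch of the classic word-count example. Note that this is plain Python running on a single machine, not code written against the real Hadoop API; it only illustrates the shape of the two functions and of the grouping ("shuffle") step the framework performs in between, so the function names and the tiny driver loop are purely illustrative.

    from collections import defaultdict

    # Map(): applied independently to each piece of the input (here, one line
    # of text). It emits intermediate (key, value) pairs -- here (word, 1).
    def map_fn(line):
        for word in line.split():
            yield (word.lower(), 1)

    # Reduce(): receives all the values emitted for one key and merges them
    # into a single, mergeable result -- here, the total count for that word.
    def reduce_fn(word, counts):
        return (word, sum(counts))

    # A toy, single-machine stand-in for the framework: run the maps, group
    # the intermediate pairs by key (the "shuffle"), then run the reduces.
    def run_mapreduce(lines):
        grouped = defaultdict(list)
        for line in lines:
            for key, value in map_fn(line):
                grouped[key].append(value)
        return [reduce_fn(key, values) for key, values in grouped.items()]

    if __name__ == "__main__":
        data = ["the quick brown fox", "the lazy dog", "the fox"]
        print(run_mapreduce(data))  # [('the', 3), ('quick', 1), ('brown', 1), ...]

In a real Hadoop job the two functions look broadly similar, but the grouping, the distribution of the pieces across nodes, and the handling of failures are all done by the framework rather than by a little driver loop.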
Another key component of Hadoop is its distributed file system, which allows reliably storing huge amounts of data (petabyte-sized). Hadoop includes several additional subsystems, features and configuration parameters which collectively allow producing fault-tolerant, dynamic systems.
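To give an intuition for how a distributed file system can be both huge and reliable, here is a toy sketch of the underlying idea: files are chopped into fixed-size blocks, and each block is stored on several different nodes, so losing any one node does not lose data. The numbers and the simple round-robin placement below are purely illustrative assumptions, not how the real file system picks replica locations (it uses much larger blocks and smarter, topology-aware placement):

    import itertools

    # Illustrative parameters only -- actual block sizes are tens or hundreds
    # of megabytes, and the replication factor is configurable.
    BLOCK_SIZE = 16          # bytes, kept tiny for the example
    REPLICATION_FACTOR = 3

    def split_into_blocks(data, block_size=BLOCK_SIZE):
        """Chop a file's bytes into fixed-size blocks."""
        return [data[i:i + block_size] for i in range(0, len(data), block_size)]

    def place_replicas(blocks, nodes, replication=REPLICATION_FACTOR):
        """Assign each block to several distinct nodes, round-robin style."""
        placement = {}
        node_cycle = itertools.cycle(nodes)
        for block_id, _ in enumerate(blocks):
            placement[block_id] = [next(node_cycle) for _ in range(replication)]
        return placement

    if __name__ == "__main__":
        file_bytes = b"some large dataset... pretend this is petabytes"
        nodes = ["node-%d" % n for n in range(1, 6)]
        blocks = split_into_blocks(file_bytes)
        for block_id, replicas in place_replicas(blocks, nodes).items():
            print("block", block_id, "->", replicas)

If one of the nodes holding a block disappears, the block is still available from its other replicas, which is the essence of the fault tolerance mentioned above.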
Vocabulary:
A node is a computer host which can run Map() or Reduce() jobs (as well as adhere to the protocol of the Hadoop system, i.e. "obey" the Hadoop manager which orchestrates all these moving parts). A Hadoop system can have several thousand nodes, i.e. potential workers to which it can feed jobs.
Petabyte = 1 Million Gigabytes.
Cloud Computing: computing based on dynamically scalable groups of general-purpose computers (physical or virtual) available over a network (or, more generically, over the Internet). Two key concepts are: a) that applications using the cloud have no knowledge of, nor direct control over, the set of resources which services their requests at a given time; b) that no software specific to a given application gets installed "ahead of time" on the nodes (well... they do receive, and possibly cache, the logic of the Map() or Reduce() function, and they do benefit from the current state of the file system with the various datasets belonging to the application, but the idea is that they are "commodity": they can come and go, and yet the work will get done).