First of all, don't confuse this with Data Driven Design.
My understanding of Data Oriented Design is that it is about organizing your data for efficient processing, especially with respect to cache misses. Data Driven Design, on the other hand, is about letting data control a lot of your program's behavior (described very well by Andrew Keith above).
Say you have ball objects in your application with properties such as color, radius, bounciness, position etc. In OOP you would describe your balls like this:
class Ball {
    Point pos;
    Color color;
    double radius;
    void draw();
};
And then you would create a collection of balls like this:
vector<Ball> balls;
In Data Oriented Design however you are more likely to write the code like this:
class Balls {
    vector<Point> pos;
    vector<Color> color;
    vector<double> radius;
    void draw();
};
As you can see, there is no longer a single unit representing one Ball; Ball objects only exist implicitly. I don't want to rewrite the article, so I am not going to go into detail about why one does it like this, but it can have many advantages performance-wise. We usually want to perform operations on many balls at the same time, and hardware works most efficiently on large contiguous chunks of memory. Secondly, you might do operations that affect only part of a ball's properties. E.g. if you combine the colors of all the balls in various ways, then you want your cache to contain only color information. But when all ball properties are stored in one unit, you will pull in all the other properties of a ball as well, even though you don't need them. A sketch of such a color-only pass follows below.
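To make this concrete, here is a minimal sketch of such a color-only pass over the SoA layout. The Color struct with r, g, b channels is just an assumption to make the example compile; the point is that the loop reads only the color vector, so every cache line it pulls in contains nothing but color data:

#include <vector>

// Hypothetical color type, assumed here only for illustration.
struct Color { float r, g, b; };

// Touches only the color array, so the cache holds color data and nothing else.
Color averageColor(const std::vector<Color>& colors) {
    Color avg{0.0f, 0.0f, 0.0f};
    for (const Color& c : colors) {
        avg.r += c.r;
        avg.g += c.g;
        avg.b += c.b;
    }
    if (!colors.empty()) {
        float n = static_cast<float>(colors.size());
        avg.r /= n;
        avg.g /= n;
        avg.b /= n;
    }
    return avg;
}

You would call it with the color vector from Balls (e.g. from inside a member function of Balls), and the positions and radii never enter the picture.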
Say each ball takes up 64 bytes, a Point takes 4 bytes, and a cache line is 64 bytes as well. If I want to update the positions of 10 balls, I have to pull 10 * 64 = 640 bytes of memory into the cache and take 10 cache misses. If, however, I can work with the positions of the balls as separate units, that only takes 4 * 10 = 40 bytes, which fits in a single cache fetch. Thus we only get 1 cache miss to update all 10 balls. These numbers are arbitrary, and a real cache block may well be bigger.
But it illustrates how memory layout can have a severe effect on cache hits and thus on performance. This will only increase in importance as the gap between CPU and RAM speed widens.
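The same comparison can be sketched in code. The struct definitions and field types below are placeholders just to make the loops compile; what matters is that the AoS loop drags whole Ball objects through the cache while the SoA loop streams over a dense array of Points:

#include <vector>

// Illustrative stand-ins for the types above; sizes and fields are arbitrary.
struct Color { float r, g, b, a; };
struct Point { float x, y; };

struct Ball {                      // AoS: one unit per ball
    Point  pos;
    Color  color;
    double radius;
};

// AoS update: every Ball touched drags its color and radius into the
// cache as well, even though only pos is read and written.
void moveAll(std::vector<Ball>& balls, Point delta) {
    for (Ball& b : balls) {
        b.pos.x += delta.x;
        b.pos.y += delta.y;
    }
}

// SoA update: positions sit back to back in memory, so each cache line
// fetched is filled entirely with data the loop actually uses.
void moveAll(std::vector<Point>& positions, Point delta) {
    for (Point& p : positions) {
        p.x += delta.x;
        p.y += delta.y;
    }
}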