This is always a struggle for me -- do I defy proper object-oriented design (e.g. option 1) or do I use an implementation that seems counterintuitive to the real world (e.g. option 2)?
Reality may be a good starting point for molding or evolving a design, but it is always a mistake to model an OO design on reality.
OO design is about interfaces, the objects that implement them, and the interactions between those objects (the messages they pass to one another). Interfaces are contractual agreements between two components, modules, or software subsystems. There are many qualities to an OO design, but the most important quality to me is substitution. If I have an interface, then the implementing code had better adhere to it. More importantly, if the implementation is swapped out, then the new implementation had better adhere to it. Lastly, if the implementation is meant to be polymorphic, then the various strategies and states of that polymorphic implementation had better adhere to it.
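To make substitution concrete, here is a minimal C++ sketch (the Logger names are hypothetical, not taken from any library): the client codes against the interface alone, so any implementation that honors the contract can be swapped in without the client noticing.

```cpp
#include <iostream>
#include <string>

class Logger {  // the interface: a contract the client codes against
public:
    virtual ~Logger() = default;
    virtual void log(const std::string& msg) = 0;
};

class ConsoleLogger : public Logger {
public:
    void log(const std::string& msg) override { std::cout << msg << '\n'; }
};

class NullLogger : public Logger {  // a swapped-in implementation
public:
    void log(const std::string&) override { /* deliberately does nothing */ }
};

void run(Logger& logger) {  // the client only knows about the contract
    logger.log("doing work");
}

int main() {
    ConsoleLogger console;
    NullLogger quiet;
    run(console);  // prints "doing work"
    run(quiet);    // prints nothing; the contract is still honored
}
```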
Example 1
In mathematics a square is a rectangle. Sounds like a good idea to inherit class Square from class Rectangle. You do it and it leads to ruin. Why? Because the client's expectation or belief was violated. Width and height can vary independently, but Square violates that contract. I had a rectangle of dimensions (10, 10) and I set the width to 20. Now I think I have a rectangle of dimensions (20, 10), but the actual instance is a Square with dimensions (20, 20), and I, the client, am in for a big surprise. So now we have a violation of the Principle of Least Surprise.
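A minimal C++ sketch of that ruin (class and member names are illustrative, not from any particular codebase):

```cpp
#include <cassert>

class Rectangle {
public:
    Rectangle(int w, int h) : width_(w), height_(h) {}
    virtual ~Rectangle() = default;
    virtual void setWidth(int w)  { width_ = w; }
    virtual void setHeight(int h) { height_ = h; }
    int width()  const { return width_; }
    int height() const { return height_; }
protected:
    int width_;
    int height_;
};

class Square : public Rectangle {
public:
    explicit Square(int side) : Rectangle(side, side) {}
    // Square preserves its own invariant by coupling the dimensions...
    void setWidth(int w)  override { width_ = w; height_ = w; }
    // ...which silently breaks the "width and height vary independently" contract.
    void setHeight(int h) override { width_ = h; height_ = h; }
};

int main() {
    Square sq(10);
    Rectangle& rect = sq;         // the client sees only a Rectangle
    rect.setWidth(20);            // client now expects (20, 10)...
    assert(rect.height() == 10);  // ...but this fires: the instance is (20, 20)
}
```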
Now you have buggy behavior, which leads to client code becoming complex as if statements are needed to work around it. You may also find your client code requiring RTTI to work around the buggy behavior by testing for concrete types (I have a reference to Rectangle, but I have to check whether it is really a Square instance).
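Continuing the same sketch, that client workaround might look something like this, with the RTTI probe leaking into every function that touches a Rectangle:

```cpp
void stretch(Rectangle& rect) {
    // Without this RTTI probe, stretch() silently corrupts Squares.
    if (dynamic_cast<Square*>(&rect) == nullptr) {
        rect.setWidth(rect.width() * 2);  // safe only for "real" rectangles
    }
    // ...else special-case the Square, and the workaround spreads through the codebase
}
```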
Example 2
In real life animals can be carnivores or herbivores. In real life meat and vegetables are food types. So you might think it is a good idea to have class Animal as a parent class for different animal types. You also think it is a good idea to have a FoodType parent class for class Meat and class Vegetable. Finally, you have class Animal sport a method called eat(), which accepts a FoodType as a formal argument.
Everything compiles, passes static analysis, and links. You run your program. What happens at runtime when a subtype of Animal, say a herbivore, receives a FoodType that is an instance of the Meat class? Welcome to the world of covariance and contravariance. This is a problem for many programming languages, and an interesting and challenging one for language designers.
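Here is a rough C++ sketch of the problem (all names are illustrative): eat() is contractually obliged to accept any FoodType, so the compiler cannot stop a Meat from reaching a Herbivore, and the violation only surfaces at runtime.

```cpp
#include <stdexcept>

class FoodType {
public:
    virtual ~FoodType() = default;
};
class Meat      : public FoodType {};
class Vegetable : public FoodType {};

class Animal {
public:
    virtual ~Animal() = default;
    virtual void eat(const FoodType& food) = 0;  // contract: accepts ANY FoodType
};

class Herbivore : public Animal {
public:
    void eat(const FoodType& food) override {
        // The compiler can't narrow the parameter type, so the check falls to runtime.
        if (dynamic_cast<const Meat*>(&food) != nullptr) {
            throw std::logic_error("a herbivore cannot eat meat");
        }
        // ... digest the vegetable ...
    }
};

int main() {
    Herbivore deer;
    Meat steak;
    deer.eat(steak);  // compiles, links, and fails only at runtime
}
```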
In Conclusion...
So what do you do? You start with your problem domain, your user stories, your use cases, and your requirements. Let them drive the design. Let them help you discover the entities you need to model as classes and interfaces. When you do, you'll find that the end result isn't based on reality.
Check out Analysis Patterns by Martin Fowler. In it you'll see what drives his object-oriented designs: mainly how his clients (medical people, financial people, etc.) perform their daily tasks. It overlaps with reality, but it isn't based on or driven by reality.