I maintain CAD/CAM software for metal cutting machines, so I have some experience with these issues.
When we first converted our software (it was first released in 1985!) to an object-oriented design, I did just what you don't like: objects and interfaces had Draw, WriteToFile, etc. Discovering and reading about design patterns midway through the conversion helped a lot, but there were still a lot of bad code smells.
Eventually I realized that none of these operations were really the concern of the object itself, but rather of the various subsystems that needed to perform them. I handled this by using what is now called a Passive View, Command objects, and well-defined interfaces between the layers of the software.
Our software is structured basically like this (a rough sketch in code follows the list):
- The forms, implementing various Form interfaces. These forms are a thin shell that passes events to the UI layer.
- The UI layer, which receives events and manipulates the forms through the Form interfaces.
- The UI layer executes commands, all of which implement the Command interface.
- The UI objects have interfaces of their own that the commands can interact with.
- The commands get the information they need, process it, manipulate the model, and then report back to the UI objects, which do whatever is needed with the forms.
- Finally, the model, which contains the various objects of our system, like Shape Programs, Cutting Paths, Cutting Tables, and Metal Sheets.
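Sketched in VB6-style code, the heart of it is the Command interface and the fact that the UI layer only ever deals in that interface. The class and member names here are just illustrative, not our actual ones:

    ' ICommand.cls -- the interface every command implements
    Public Sub Execute()
    End Sub

    ' Somewhere in the UI layer: a screen only knows it has an ICommand to run.
    ' The command pulls what it needs from the UI objects, changes the model,
    ' and reports back; the UI layer then updates the forms through the Form interfaces.
    Public Sub RunCommand(ByVal Cmd As ICommand)
        Cmd.Execute
    End Sub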
So drawing is handled in the UI layer. We have different software for different machines, so while all of our software shares the same model and reuses many of the same commands, each handles things like drawing very differently. For example, a cutting table is drawn differently for a router machine than for a machine using a plasma torch, despite both being essentially a giant X-Y flat table. This is because, like cars, the two machines are built differently enough that there is a visual difference to the customer.
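In rough terms (again with made-up names), each product supplies its own drawing code behind a common UI-layer interface while sharing the same model object:

    ' ITableView.cls -- drawing lives behind a UI-layer interface, not on the model
    Public Sub DrawTable(ByVal Table As CuttingTable, ByVal Canvas As PictureBox)
    End Sub

    ' RouterTableView.cls -- the router product's drawing code
    Implements ITableView

    Private Sub ITableView_DrawTable(ByVal Table As CuttingTable, ByVal Canvas As PictureBox)
        ' Draw the table the way a router customer expects to see it.
    End Sub

    ' PlasmaTableView.cls -- the plasma product's drawing code
    Implements ITableView

    Private Sub ITableView_DrawTable(ByVal Table As CuttingTable, ByVal Canvas As PictureBox)
        ' Draw the same table the way a plasma customer expects to see it.
    End Sub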
As for shapes, what we do is as follows.
We have shape programs that produce cutting paths from the entered parameters. The cutting path knows which shape program produced it. However, a cutting path isn't a shape; it is just the information needed to draw the shape on the screen and to cut it. One reason for this design is that cutting paths can be created without a shape program, when they are imported from an external app.
This design allows us to separate the design of the cutting path from the design of the shape, which are not always the same thing. In your case, likely all you need to package is the information needed to draw the shape.
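The relationship is roughly this (illustrative names only; our real classes carry a lot more than shown here):

    ' ShapeProgram.cls -- turns the entered parameters into a cutting path
    Public Parameters As Collection        ' the values the user entered

    Public Function BuildPath() As CuttingPath
        Dim Path As New CuttingPath
        ' ...compute the path geometry from Parameters...
        Set Path.Source = Me               ' the path remembers which program produced it
        Set BuildPath = Path
    End Function

    ' CuttingPath.cls -- just the information needed to draw and to cut
    Public Segments As Collection          ' the geometry
    Public Source As ShapeProgram          ' Nothing when the path was imported from an external app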
Each shape program has a number of views implementing an IShapeView interface. Through the IShapeView interface, the shape program can tell our generic shape form how to set itself up to show the parameters of that shape. The generic shape form implements an IShapeForm interface and registers itself with the ShapeScreen object. The ShapeScreen object registers itself with our application object. The shape views use whatever ShapeScreen registers itself with the application.
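In outline (the member names here are guesses to show the idea, not our real interfaces):

    ' IShapeView.cls -- one way of presenting a shape program's parameters
    Public Sub ShowParameters(ByVal Form As IShapeForm, ByVal Program As ShapeProgram)
    End Sub

    ' IShapeForm.cls -- what the generic shape form promises to the views
    Public Sub AddParameterField(ByVal Caption As String, ByVal Value As Double)
    End Sub

    ' In frmGenericShape (which Implements IShapeForm): register with the ShapeScreen object.
    Private Sub Form_Load()
        ShapeScreen.RegisterForm Me
    End Sub

    ' The ShapeScreen object registers itself with the application object the same way,
    ' and the views simply use whatever ShapeScreen the application currently knows about.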
The reason for the multiple views is that we have customers who like to enter shapes in different ways. Our customer base is split in half between those who like to enter shape parameters in a table form and those who like to enter them with a graphical representation of the shape in front of them. We also need to access the parameters at times through a minimal dialog rather than our full shape entry screen. Hence the multiple views.
Commands that manipulate shapes fall into one of two categories: either they manipulate the cutting path, or they manipulate the shape parameters. To manipulate the shape parameters, we generally either throw them back into the shape entry screen or show the minimal dialog, recalculate the shape, and display it in the same location.
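A parameter-side command therefore mostly just reuses the existing entry UI; something like this sketch (SelectedPath, ShowMinimalDialog, and ReplacePath are invented for the example):

    ' EditShapeParameters.cls -- hypothetical parameter-side command
    Implements ICommand

    Private Sub ICommand_Execute()
        Dim Path As CuttingPath
        Set Path = CuttingTableScreen.SelectedPath

        ' Imported paths have no shape program behind them, so there is nothing to edit.
        If Path Is Nothing Then Exit Sub
        If Path.Source Is Nothing Then Exit Sub

        ' Throw the parameters back into the minimal dialog, recalculate the shape,
        ' and display the new path in the same location as the old one.
        If ShapeScreen.ShowMinimalDialog(Path.Source) Then
            CuttingTableScreen.ReplacePath Path, Path.Source.BuildPath()
        End If
    End Sub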
For the cutting path, we bundled up each operation in a separate command object (a sketch of one follows the list). For example, we have command objects such as
ResizePath
RotatePath
MovePath
SplitPath
and so on.
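Each of these is just another ICommand. MovePath, for instance, might look roughly like this; the path and UI-object members it calls are guesses for illustration:

    ' MovePath.cls -- one cutting-path operation bundled as a command
    Implements ICommand

    Private Sub ICommand_Execute()
        Dim Path As CuttingPath
        Dim dX As Double, dY As Double

        ' Get what we need from the UI objects...
        Set Path = CuttingTableScreen.SelectedPath
        If Path Is Nothing Then Exit Sub
        If Not CuttingTableScreen.AskForOffset(dX, dY) Then Exit Sub   ' user cancelled

        ' ...manipulate the model...
        Path.MoveBy dX, dY

        ' ...and report back so the screen can update its forms.
        CuttingTableScreen.RefreshPath Path
    End Sub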
When we need to add new functionality, we add another command object, find a menu, keyboard shortcut, or toolbar button slot in the right UI screen, and set up the UI object to execute that command.
For example
CuttingTableScreen.KeyRoute.Add vbShift+vbKeyF1, New MirrorPath
or
CuttingTableScreen.Toolbar("Edit Path").AddButton Application.Icons("MirrorPath"),"Mirror Path", New MirrorPath
In both instances, the command object MirrorPath is being associated with the desired UI element. In the Execute method of MirrorPath is all the code needed to mirror the path about a particular axis. Likely the command will have its own dialog or use one of the UI elements to ask the user which axis to mirror about. None of this requires making a visitor or adding a method to the path.
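Under the hood, the UI object only needs to keep a lookup from the UI element to the command and call Execute when the event arrives. A KeyRoute could be as simple as this; the implementation shown is a guess at the idea, not our actual code:

    ' KeyRoute.cls -- maps a key combination to a command object
    Private Routes As Collection

    Private Sub Class_Initialize()
        Set Routes = New Collection
    End Sub

    Public Sub Add(ByVal KeyCode As Long, ByVal Cmd As ICommand)
        Routes.Add Cmd, CStr(KeyCode)            ' keyed by the key combination
    End Sub

    Public Sub HandleKey(ByVal KeyCode As Long)
        Dim Cmd As ICommand
        On Error Resume Next                     ' no command registered for this key
        Set Cmd = Routes(CStr(KeyCode))
        On Error GoTo 0
        If Not Cmd Is Nothing Then Cmd.Execute
    End Sub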
You will find that a lot can be handled by bundling actions into commands. However, I caution that it is not a black-or-white situation. You will still find that certain things work better as methods on the original object. In my experience, perhaps 80% of what I used to do in methods could be moved into commands. The last 20% just plain works better on the object.
Now, some may not like this because it seems to violate encapsulation. But from maintaining our software as an object-oriented system for the last decade, I have to say the MOST important long-term thing you can do is clearly document the interactions between the different layers of your software and between the different objects.
Bundling actions into command objects helps with this goal far better than a slavish devotion to the ideals of encapsulation. Everything that needs to be done to mirror a path is bundled in the MirrorPath command object.