Are there any well-studied design patterns related to drag & drop and mouse gestures? Consider a canvas containing objects in a parent-child hierarchy with a certain layout. Some objects can be dragged and dropped onto other objects using the mouse. In addition, objects can be resized and moved with the mouse.

Different hot-spots on objects behave differently depending on user state (dragging, selecting). For a drag-and-drop operation, some of the elements are: 1) Visual feedback to the user for the source object 2) Visual feedback to the user during the drag 3) Drop-area detection 4) Drop-compatibility tests 5) The drop action
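One way to keep those five elements from collapsing into a monolith is to give each drop area a small, uniform interface and let a generic dispatcher drive the sequence. The sketch below is only an illustration of that factoring; the names (`DropTarget`, `Bin`, `drop`) are hypothetical and not from any standard API, and the visual-feedback steps are stubbed out as comments.

```python
from typing import Any, Protocol


class DropTarget(Protocol):
    """Each method corresponds to one element of a drag-drop operation."""
    def hit_test(self, x: float, y: float) -> bool: ...   # 3) drop-area detection
    def can_accept(self, payload: Any) -> bool: ...       # 4) compatibility test
    def accept(self, payload: Any) -> None: ...           # 5) drop action


class Bin:
    """A hypothetical rectangular drop area that accepts one payload type."""
    def __init__(self, x0, y0, x1, y1, accepts: type):
        self.rect = (x0, y0, x1, y1)
        self.accepts = accepts
        self.items = []

    def hit_test(self, x, y):
        x0, y0, x1, y1 = self.rect
        return x0 <= x <= x1 and y0 <= y <= y1

    def can_accept(self, payload):
        return isinstance(payload, self.accepts)

    def accept(self, payload):
        self.items.append(payload)


def drop(targets: list, x: float, y: float, payload: Any) -> bool:
    """Generic dispatcher; feedback hooks (1, 2) would be invoked here."""
    for target in targets:
        if target.hit_test(x, y) and target.can_accept(payload):
            target.accept(payload)
            return True
    return False
```

With this split, the canvas code only knows the `DropTarget` interface, and each object supplies its own detection, compatibility, and action logic.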

The standard APIs tend to combine these concerns into fairly monolithic code.

Additionally, gestures such as movement and resizing have certain elements in common: 1) Visual feedback to user when the mouse is in a resize/move region 2) Visual feedback during a move/resize operation 3) Completion/cancellation of operation.
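Those common elements suggest modelling each gesture as a small state machine: hover produces the region feedback, a press starts the operation, and release or escape completes or cancels it. The following is a minimal sketch of that idea under the assumption that mouse events arrive as discrete over/down/up calls; all names here are invented for illustration.

```python
from enum import Enum, auto


class GestureState(Enum):
    IDLE = auto()
    HOVER = auto()     # mouse in a move/resize region -> cursor feedback (1)
    DRAGGING = auto()  # operation in progress -> live feedback (2)


class MoveGesture:
    """Tracks one move operation from hover to completion/cancellation (3)."""
    def __init__(self):
        self.state = GestureState.IDLE
        self.start = None

    def mouse_over_handle(self):
        if self.state is GestureState.IDLE:
            self.state = GestureState.HOVER  # e.g. switch to a move cursor

    def mouse_down(self, x, y):
        if self.state is GestureState.HOVER:
            self.start = (x, y)
            self.state = GestureState.DRAGGING

    def mouse_up(self, x, y):
        """Completion: return the delta to commit to the document model."""
        if self.state is GestureState.DRAGGING:
            dx, dy = x - self.start[0], y - self.start[1]
            self.state = GestureState.IDLE
            return (dx, dy)
        return None

    def cancel(self):
        """Cancellation (e.g. Escape): discard in-progress feedback."""
        self.state = GestureState.IDLE
        self.start = None
```

A resize gesture would reuse the same state skeleton and differ only in what the delta is applied to, which is where the shared structure pays off.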

The question is: is there a way to do this that streamlines the code, separates the visual and document-level responsibilities, and perhaps makes much of this declarative?

Determining whether the point under the mouse is a drop target, for example, might use some form of hit-testing combined with the Chain of Responsibility pattern.
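As a rough sketch of that combination: recurse through the parent-child hierarchy, letting the innermost node that contains the point handle the request, and letting nodes that decline pass it back up the chain. The node structure and function below are hypothetical, assuming axis-aligned rectangles in canvas coordinates.

```python
class Node:
    """A hypothetical canvas object in a parent-child hierarchy."""
    def __init__(self, name, rect, is_drop_target=False, children=()):
        self.name = name
        self.rect = rect  # (x0, y0, x1, y1) in canvas coordinates
        self.is_drop_target = is_drop_target
        self.children = list(children)

    def contains(self, x, y):
        x0, y0, x1, y1 = self.rect
        return x0 <= x <= x1 and y0 <= y <= y1


def find_drop_target(node, x, y):
    """Hit-test combined with Chain of Responsibility: try children first
    (innermost/topmost wins); a node that cannot handle the point passes
    the request back up the chain by returning None."""
    if not node.contains(x, y):
        return None
    for child in node.children:
        hit = find_drop_target(child, x, y)
        if hit is not None:
            return hit
    return node if node.is_drop_target else None
```

A nice property of this shape is that non-target nodes are transparent: a point inside a child that declines the drop still falls through to the nearest enclosing ancestor that accepts it.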

I feel there must be good design patterns that bring discipline and organization to this otherwise messy problem.

Cheers,

A: 

To answer my own question, Chapter Two of this book has a very relevant discussion: http://www.amazon.com/gp/product/0596516258.

However, this is still not as declarative or well factored as one would like.
