I'm attempting to create pretty large bitmaps in a C# application (6000x6000, though most of it is transparent) and need to draw them to a specific output API that only supports drawing rectangles.

Now, I'm wondering if anyone has an algorithm to reduce a bitmap to a series of filled rectangles of similarly-colored pixels, since drawing everything as 1x1 rectangles is way too slow for this purpose. For example, a circle should be reduced to one large center rectangle, with the rest of the circle covered by efficient rectangles. The algorithm doesn't even need to be that fast, since most of the time taken with my single-pixel method is spent in the API itself, looping through every rectangle.

+3  A: 

Sounds like you'd need the classic QuadTree structure. See this link for a nice explanation of how you'd use a quadtree to quantize an image into rectangles.

Here's a nice reference on CodeProject that provides a sample, simple implementation you could alter to your needs.
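
For a concrete starting point, here's a rough C# sketch of that quadtree subdivision (not taken from either linked page): it assumes a System.Drawing.Bitmap as input, and drawRect is a hypothetical stand-in for whatever rectangle call your output API actually exposes. GetPixel is used for clarity; a real version would want LockBits for speed.

    using System;
    using System.Drawing;

    static class QuadTreeDecomposer
    {
        // Recursively split the bitmap into uniform-color rectangles and
        // report each one through drawRect(x, y, width, height, color).
        public static void Decompose(Bitmap bmp, Action<int, int, int, int, Color> drawRect)
        {
            Subdivide(bmp, 0, 0, bmp.Width, bmp.Height, drawRect);
        }

        static void Subdivide(Bitmap bmp, int x, int y, int w, int h,
                              Action<int, int, int, int, Color> drawRect)
        {
            if (w <= 0 || h <= 0) return;

            // Check whether the whole region is a single color.
            Color first = bmp.GetPixel(x, y);
            bool uniform = true;
            for (int j = y; j < y + h && uniform; j++)
                for (int i = x; i < x + w; i++)
                    if (bmp.GetPixel(i, j) != first) { uniform = false; break; }

            if (uniform)
            {
                if (first.A != 0)              // skip fully transparent regions
                    drawRect(x, y, w, h, first);
                return;
            }

            // Mixed colors: recurse into the four quadrants.
            int hw = w / 2, hh = h / 2;
            Subdivide(bmp, x,      y,      hw,     hh,     drawRect);
            Subdivide(bmp, x + hw, y,      w - hw, hh,     drawRect);
            Subdivide(bmp, x,      y + hh, hw,     h - hh, drawRect);
            Subdivide(bmp, x + hw, y + hh, w - hw, h - hh, drawRect);
        }
    }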

Kilanash
Is there a good example of when a quad tree results in fewer rectangles than a naive line-by-line algorithm? In the example in the first link, the quad tree representation takes 22 rectangles to draw the green shape vs 28 if done pixel by pixel, and only 8 if done line by line as I describe in my answer. I can see how quad trees are great for partitioning and/or storage, but not how they are optimal for decomposing into as few rectangles as possible.
Peter Recore
The example in the first link could be done in 3 overlapping rectangles, or 5 non-overlapping rectangles; which version are you trying to achieve?
Dolphin
I see how to make that image with 3 or 5 rectangles. What I can't figure out is how that relates to quad trees.
Peter Recore
Agreed, the optimal set of rectangles is not what's being described by that image. My thought was mostly pragmatic: implementing a quadtree isn't too difficult, as it's just a basic recursive algorithm. From what the original question was asking, a fairly easy and succinct way to subdivide an arbitrary image into a series of rectangles was the simplest answer, not a way to generate the most optimal set of rectangles. :)
Kilanash
+1  A: 

A simple-to-implement algorithm is to draw 1xN rectangles.

Start on line 0 and find the first non-blank pixel. Keep iterating through pixels until the color changes, then draw that run of same-colored pixels as a single 1xN rectangle. Repeat until the end of the line, then move on to the next line.

If your actual pictures have large uniform regions, this might be "good enough". Depending on what the pictures look like, drawing vertical lines might be better.
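
A rough C# sketch of this run-length approach (again assuming a System.Drawing.Bitmap as input, with drawRect as a hypothetical stand-in for the output API's rectangle call, and fully transparent pixels treated as "blank"):

    using System;
    using System.Drawing;

    static class RunLengthDecomposer
    {
        // Walk each scanline, merging runs of identical pixels into 1xN rectangles.
        public static void Decompose(Bitmap bmp, Action<int, int, int, int, Color> drawRect)
        {
            for (int y = 0; y < bmp.Height; y++)
            {
                int x = 0;
                while (x < bmp.Width)
                {
                    Color run = bmp.GetPixel(x, y);
                    int start = x;
                    // Extend the run while the color stays the same.
                    while (x < bmp.Width && bmp.GetPixel(x, y) == run) x++;
                    if (run.A != 0)            // skip blank (transparent) runs
                        drawRect(start, y, x - start, 1, run);
                }
            }
        }
    }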

If I'm doing the math right, a circle of radius 100 pixels would take only 200 "lines" to draw with this method (one rectangle per row, since the circle spans 200 rows), rather than roughly 31,000 pixels (the circle's area is about pi * 100^2 = 31,400) if done one pixel at a time. It seems to me a quad tree decomposition would use at least 1,000 rectangles for such a circle, even if you got lucky with where the quadrants fell.

Peter Recore