I have a "software renderer" that I am porting from PC to the iPhone. what is the fastest way to manually update the screen with a buffer of pixels on the iphone? for instance in windows the fastest function I have found is SetDIBitsToDevice.

I don't know much about the iPhone or its libraries, and there seem to be so many layers and different types of UI elements, so I might need a lot of explanation...

For now I'm just going to constantly update a texture in OpenGL and render that to the screen, though I very much doubt this is the best way to do it.

UPDATE:

I have tried the OpenGL screen-sized texture method:

I got 17 fps...

I used a 512x512 texture (because it needs to be a power of two)

Just the call of

glTexSubImage2D(GL_TEXTURE_2D,0,0,0,512,512,GL_RGBA,GL_UNSIGNED_BYTE, baseWindowGUI->GetBuffer());

seemed pretty much responsible for ALL the slowdown.

Commenting it out, and leaving in all my software-rendering GUI code and the rendering of the now non-updating texture, resulted in 60 fps, 30% renderer usage, and no notable spikes from the CPU.

Note that GetBuffer() simply returns a pointer to the software backbuffer of the GUI system; there is no re-jigging or resizing of the buffer in any way, and it is properly sized and formatted for the texture. So I am fairly certain the slowdown has nothing to do with the software renderer, which is the good news: it looks like if I can find a way to update the screen at 60 fps, software rendering should work for the time being.
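For reference, the path I'm timing looks roughly like the sketch below (from memory, so the setup names are illustrative; the texture is created once at startup, and only the glTexSubImage2D upload and the quad draw run each frame):

GLuint screenTexture; // illustrative name for the single screen-sized texture

// one-time setup
glGenTextures(1, &screenTexture);
glBindTexture(GL_TEXTURE_2D, screenTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL); // allocate the 512x512 power-of-two texture

// per frame
glBindTexture(GL_TEXTURE_2D, screenTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512,
                GL_RGBA, GL_UNSIGNED_BYTE, baseWindowGUI->GetBuffer()); // this call is the slow part
// ...then draw a screen-sized textured quad and present the renderbuffer...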

I tried doing the texture update call with 512x320 rather than 512x512, and this was oddly even slower... running at 10 fps. It also says the render utilization is only about 5%, and all the time is being wasted in a call to Untwiddle32bpp inside OpenGL ES.

I can change my software renderer to natively render to any pixel format, if it would result in a more direct blit.

FYI, tested on a 2.2.1 iPod touch 2G (so, like an iPhone 3G on steroids).

UPDATE 2:

I have just finished writing the Core Animation/Core Graphics method. It looks good, but I am a little worried about how it updates the screen each frame, basically ditching the old CGImage and creating a brand new one... check it out in 'someRandomFunction' below: is this the quickest way to update the image? Any help would be greatly appreciated.

//
//  catestAppDelegate.m
//  catest
//
//  Created by User on 3/14/10.
//  Copyright __MyCompanyName__ 2010. All rights reserved.
//




#import "catestAppDelegate.h"
#import "catestViewController.h"
#import "QuartzCore/QuartzCore.h"

const void* GetBytePointer(void* info)
{
    // this is currently only called once
    return info; // info is a pointer to the buffer
}

void ReleaseBytePointer(void* info, const void* pointer)
{
    // don't care, just using the one static buffer at the moment
}


size_t GetBytesAtPosition(void* info, void* buffer, off_t position, size_t count)
{
    // I don't think this ever gets called
    memcpy(buffer, ((char*)info) + position, count);
    return count;
}

CGDataProviderDirectCallbacks providerCallbacks =
{ 0, GetBytePointer, ReleaseBytePointer, GetBytesAtPosition, 0 };


static CGImageRef cgIm;

static CGDataProviderRef dataProvider;
unsigned char* imageData;
const size_t imageDataSize = 320 * 480 * 4;
NSTimer *animationTimer;
NSTimeInterval animationInterval = 1.0f / 60.0f;


@implementation catestAppDelegate

@synthesize window;
@synthesize viewController;


- (void)applicationDidFinishLaunching:(UIApplication *)application {    


    [window makeKeyAndVisible];


    const size_t byteRowSize = 320 * 4;
    imageData = malloc(imageDataSize);

    for (int i = 0; i < imageDataSize / 4; i++)
        ((unsigned int*)imageData)[i] = 0xFFFF00FF; // just set it to some random init color, currently yellow


    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    dataProvider =
    CGDataProviderCreateDirect(imageData, imageDataSize,
                               &providerCallbacks);  // currently global

    cgIm = CGImageCreate
    (320, 480,
     8, 32, 320*4, colorSpace,
     kCGImageAlphaNone | kCGBitmapByteOrder32Little,
     dataProvider, 0, false, kCGRenderingIntentDefault);  // also global, probably doesn't need to be

    CGColorSpaceRelease(colorSpace); // CGImageCreate keeps its own reference to the color space

    self.window.layer.contents = cgIm; // set the UIWindow's CALayer's contents to the image, yay works!

   // CGImageRelease(cgIm);  // we should do this at some stage...
   // CGDataProviderRelease(dataProvider);

    animationTimer = [NSTimer scheduledTimerWithTimeInterval:animationInterval target:self selector:@selector(someRandomFunction) userInfo:nil repeats:YES];
    // set up a timer in the attempt to update the image

}
float col = 0;

-(void)someRandomFunction
{
    // update the original buffer
    for (int i = 0; i < imageDataSize; i++)
        imageData[i] = (unsigned char)(int)col;

    col += 256.0f / 60.0f;

    // and currently the only way I know how to apply that buffer update to the screen is to
    // create a new image and bind it to the layer...???
    CGImageRef previousFrame = cgIm; // keep hold of last frame's image so it can be released below

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    cgIm = CGImageCreate
    (320, 480,
     8, 32, 320*4, colorSpace,
     kCGImageAlphaNone | kCGBitmapByteOrder32Little,
     dataProvider, 0, false, kCGRenderingIntentDefault);

    CGColorSpaceRelease(colorSpace);

    self.window.layer.contents = cgIm;

    CGImageRelease(previousFrame); // the layer has switched to the new image, so drop our old reference

    // and that currently works, updating the screen, but i don't know how well it runs...
}


- (void)dealloc {
    [viewController release];
    [window release];
    [super dealloc];
}


@end
+1  A: 

The fastest way is to use IOFrameBuffer/IOSurface, which are private frameworks.

So OpenGL seems to be the only possible way for App Store apps.

KennyTM
I have updated the question with the results from the OpenGL method; it doesn't look good.
matt
+5  A: 

The fastest App Store-approved way to do CPU-only 2D graphics is to create a CGImage backed by a buffer using CGDataProviderCreateDirect and assign that to a CALayer's contents property.

For best results use the kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little or kCGImageAlphaNone | kCGBitmapByteOrder32Little bitmap types, and double-buffer so that the display is never in an inconsistent state.

edit: this should be faster than drawing to an OpenGL texture in theory, but as always, profile to be sure.

edit2: CADisplayLink is a useful class no matter which compositing method you use.
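For illustration, here is a minimal sketch of driving the update from the display refresh (it assumes the same imageData and dataProvider globals from the question, re-creates the CGImage each frame as the question's code does, and releases each image once the layer has taken it; the methods would live in the app delegate alongside the question's code, and CADisplayLink needs a newer OS than the 2.2.1 device mentioned above, so fall back to an NSTimer there):

- (void)startDisplayLink
{
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(pushFrame:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
}

- (void)pushFrame:(CADisplayLink *)link
{
    // ...the software renderer writes the new frame into imageData here...

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGImageRef frame = CGImageCreate(320, 480, 8, 32, 320 * 4, colorSpace,
                                     kCGImageAlphaNone | kCGBitmapByteOrder32Little,
                                     dataProvider, NULL, false,
                                     kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);

    self.window.layer.contents = (id)frame; // the layer retains what it needs,
    CGImageRelease(frame);                  // so this reference can be dropped immediately
}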

rpetrich
Would this be fast enough to do 24 fps video?
St3fan
@St3fan: I'm hoping it's fast enough to do 60 fps at least! I'm beginning to get the feeling that the iPhone's APIs and OS are even more bloated than desktop Windows! @rpetrich, thanks, I'll try it out. I hardly know anything about iPhone dev, like the GUI stuff etc., so it might take me a while; can you recommend a good template to use to get started? I've only used the OpenGL one.
matt
Matt, the iPhone APIs are extremely nice compared to Win32. Some things are simply not possible with higher-level Cocoa code. File a bug request with Apple if you think this needs a nicer Cocoa API.
St3fan
In my experience, writing directly to an OpenGL texture will be the fastest way to present 2-D content to the screen. Core Animation is layered on top of OpenGL, so setting the contents of a CALayer just causes them to be transferred into an OpenGL texture. You can avoid the middleman by writing directly to the texture. On the Mac, we have extensions for direct memory transfer to a texture: http://developer.apple.com/mac/library/documentation/GraphicsImaging/Conceptual/OpenGL-MacProgGuide/opengl_texturedata/opengl_texturedata.html , but I don't see anything similar on the iPhone.
Brad Larson
@matt: I don't really have any templates suitable for your situation; I don't think you'll be able to get more than 60 Hz as the display doesn't update any faster than that. Also, Core Animation was designed with hardware acceleration in mind: as much work as possible is offloaded to the GPU.
rpetrich
@Brad Larson: On the iPhone, everything ends up in a QuartzCore scene graph which is rendered by the GPU. In the case of OpenGL content, the scene is rendered to a CALayer by the GPU in your application and then that layer is composited (along with the rest of the graph) to the framebuffer in SpringBoard. OpenGL path: CPU render -> OGL texture -> OGL CALayer -> framebuffer. CALayer path: CPU render -> standard CALayer -> framebuffer.
rpetrich
@rpetrich: That's interesting, but it runs counter to my experience on the Mac. In my testing, the fastest way to display a 2-D rectangle (a live 60 FPS video feed from a CCD camera) was to use OpenGL directly (hosted in a CAOpenGLLayer). Drawing into the contents of a CALayer was significantly slower. Now, this is a different platform, and I was using the OpenGL texture transfer extensions described above, so I could be wrong about how the iPhone handles this same situation.
Brad Larson
@Brad Larson: Drawing to the contents of a CALayer will always be slow as that involves a copy (on the CPU no less, where it's slow). Setting the contents, on the other hand, can be quick if the new content is already in video memory. Since the iPhone uses a unified memory model, all memory is video memory. I would imagine OS X behaves a lot differently (and the use of IOSurface would be required).
rpetrich
@rpetrich: Fascinating. So your suggestion would be to use two CGImages with image data provided via a buffer in CGDataProviderCreateDirect(), then swap setting them as the contents for a CALayer for every other frame to be displayed? I'd like to try this out and see how it performs.
Brad Larson
@Brad Larson: Yeah, that's the gist of it. If it's the only layer visible on-screen, the only copy should be by the GPU from the CALayer to the framebuffer. If you do try it out, I'd love to see it in comparison to piping through OpenGL.
rpetrich
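Roughly what that swap could look like, as a sketch only (illustrative names; the two buffers, and the two CGImages wrapping them via CGDataProviderCreateDirect, are created once at startup, and the method lives in the app delegate as in the question's code):

static unsigned char *buffers[2]; // two CPU buffers the software renderer alternates between
static CGImageRef frames[2];      // one CGImage wrapping each buffer, created once at startup
static int backIndex = 0;         // the buffer the renderer draws into next

- (void)presentFrame
{
    // ...the software renderer has just finished drawing into buffers[backIndex]...
    self.window.layer.contents = (id)frames[backIndex]; // show the finished frame
    backIndex = 1 - backIndex; // draw the next frame into the buffer that is now off-screen
}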
Thanks for the discussion guys, that's quite a bit of insight. I will try this all out tonight. I am a bit worried about whether it will work anyway; we are actually targeting the iPad. It seems to have four times the resolution (1024x768) with basically the same CPU as the 3GS... It is only a basic GUI thing (advantages over CA? it's multi-platform, I know how it works, and I can tweak it any way I want), but it might be all too much. Also, as I said, I have no idea about iPhone/OS X stuff, so if anyone does try it out, I would love a step-by-step!
matt
@matt: The trouble you will encounter is not in getting the buffer to the screen quickly enough, but in the drawing to the buffer itself: the GPU has a lot of dedicated hardware designed to push pixels that the CPU can't come close to matching.
rpetrich
@rpetrich: Hey, I have come up with some code using the CA method, but I'm not sure about the way I update the image each frame though... deleting it and creating the CGImage each frame... check it out above in the question under "UPDATE 2". I haven't had a chance to test this on hardware yet; I will test tomorrow. Thanks.
matt