I have a serious problem: I have an array with several UIImage objects. What I want to do now is create a movie from those images, but I don't have any idea how to do this.

I hope someone can help me or send me a code snippet that does something like what I want.

Thx!

A: 

Well, this is a bit hard to implement in pure Objective-C. If you are developing for jailbroken devices, a good option is to use the command-line tool ffmpeg from inside your app. It's quite easy to create a movie from images with a command like:

ffmpeg -r 10 -b 1800 -i %03d.jpg test1800.mp4

Note that the images have to be named sequentially and placed in the same directory. For more information, take a look at: http://electron.mit.edu/~gsteele/ffmpeg/

Kostas.N
Thanks for your comment. I already had an eye on ffmpeg, but I can't use it for a couple of reasons. First of all, I want to make an app to be sold through the Apple App Store, so there is no way to use the command-line version of ffmpeg. The second reason I didn't look further into ffmpeg is the license: if I want to use ffmpeg within my application, I would have to release the source code of my app, and that's something I don't want to do.
Nuker
+2  A: 

Take a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially you’ll have to:

1) Wire the writer:

#import <AVFoundation/AVFoundation.h>

// Create the writer for a QuickTime movie file at somePath.
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
    [NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
    error:&error];
NSParameterAssert(videoWriter);

// Output settings for a 640×480 H.264 video track.
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    AVVideoCodecH264, AVVideoCodecKey,
    [NSNumber numberWithInt:640], AVVideoWidthKey,
    [NSNumber numberWithInt:480], AVVideoHeightKey,
    nil];
AVAssetWriterInput *writerInput = [[AVAssetWriterInput
    assetWriterInputWithMediaType:AVMediaTypeVideo
    outputSettings:videoSettings] retain];

NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];

2) Start a session:

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:…];
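If your frame timestamps start at zero, the source time is typically just kCMTimeZero, for example:

[videoWriter startSessionAtSourceTime:kCMTimeZero];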

3) Write some samples:

// Or you can use AVAssetWriterInputPixelBufferAdaptor.
// That lets you feed the writer input data from a CVPixelBuffer
// that’s quite easy to create from a CGImage.
[writerInput appendSampleBuffer:sampleBuffer];
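For example, wiring up the adaptor might look roughly like this (an untested sketch; the adaptor has to be created before you call startWriting, and presentTime is a placeholder for the frame’s timestamp, not part of any API):

AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
    sourcePixelBufferAttributes:nil];

// Later, once the session has started, append one frame per pixel buffer:
[adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentTime];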

4) Finish the session:

[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:…];
[videoWriter finishWriting];

You’ll still have to fill in a lot of blanks, but I think the only really hard remaining part is getting a pixel buffer from a CGImage:

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    // Ask for a pixel buffer that is compatible with Core Graphics,
    // so that we can draw into its memory with a CGBitmapContext.
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
        [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
        nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
        frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
        &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    // Wrap the pixel buffer’s memory in a bitmap context and draw the image into it.
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
        frameSize.height, 8, 4*frameSize.width, rgbColorSpace,
        kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, frameTransform);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
        CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    // The caller owns the returned buffer and must CVPixelBufferRelease() it.
    return pxbuffer;
}

frameSize is a CGSize describing your target frame size and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames.
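To tie this back to the original question, a rough driver loop over an array of UIImages might look like the following. This is only a sketch: images, adaptor (from the snippet above), and the 10 fps frame rate are assumptions, and real code would use requestMediaDataWhenReadyOnQueue:usingBlock: rather than polling.

for (NSUInteger i = 0; i < [images count]; i++) {
    UIImage *image = [images objectAtIndex:i];
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:[image CGImage]];
    CMTime presentTime = CMTimeMake(i, 10); // frame i is shown at i/10 s, i.e. 10 fps
    // Naive wait until the writer input can accept more data.
    while (!writerInput.readyForMoreMediaData) {
        [NSThread sleepForTimeInterval:0.05];
    }
    [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
    CVPixelBufferRelease(buffer);
}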

zoul
Wow! Great! Thank you so much! This was exactly the hint I needed. For some reason I didn't find anything on the web or in the Apple documentation pointing me in the right direction... Perhaps it's because AV Foundation is totally new as of iOS 4... Thank you!
Nuker
Ok, I just don't get it. I want to write a UIImage object to the AVAssetWriterInputPixelBufferAdaptor, but I don't know how to convert the image to the appropriate format. Can you help me again, please?
Nuker
Though this does work, drawing into a `CGImage` only to draw that into a `CGBitmapContext` backed by a `CVPixelBuffer` is wasteful. Similarly, instead of creating a `CVPixelBuffer` each time, `AVAssetWriterInputPixelBufferAdaptor`'s `pixelBufferPool` should be used to recycle buffers.
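For example, something along these lines (an untested sketch; adaptor is the pixel buffer adaptor from the answer, and note the pool only becomes available once writing has started):

CVPixelBufferRef buffer = NULL;
// Recycle a buffer from the adaptor's pool instead of allocating a fresh one.
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &buffer);
// ...then draw the frame into buffer, append it, and release it as usual.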
rpetrich