Ok, I have the following very simple animation composed of 25 frames in PNG format. Each frame is 320 × 360 and about 170 KB in size. Here is the code I use.

.h:

IBOutlet UIImageView *Animation_Normal_View;

In Interface Builder I have a UIImageView with a referencing outlet pointing to this. All my images are named normal_000_crop.png, normal_001_crop.png, normal_002_crop.png,...

.m:

Animation_Normal = [[NSMutableArray alloc] initWithCapacity:25];
for (int i = 0; i < 25; i++)
{
  [Animation_Normal addObject:[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"normal_%03d_crop.png", i] ofType:nil]]];
}

Animation_Normal_View.animationImages = Animation_Normal;
Animation_Normal_View.animationDuration = 1; // seconds
Animation_Normal_View.animationRepeatCount = 0; // 0 = loops forever
[Animation_Normal release];


[self.view addSubview:Animation_Normal_View];
[Animation_Normal_View startAnimating];

On the simulator everything looks good: the visual animation starts as soon as startAnimating is issued.

But on the iPhone 3G running iOS 4.0.2, the visual animation starts a good 2 to 3 seconds after the startAnimating is issued.

I have tried just about every technique I could find in blogs or forums that should solve this, to no avail.

Any hints appreciated, even if it's a completely different way to do a PNG-based animation.

Thanks.

+2  A: 

imageWithContentsOfFile: tends to take a long time to process, especially if there are lots of files (25 is kind of a lot) and/or they're big.

One thing you can try is to switch it out for imageNamed:, i.e.

[UIImage imageNamed:[NSString stringWithFormat:@"normal_%03d_crop.png", i]]

imageNamed: is generally much faster, but tends to cache images more or less indefinitely.

If loading the images into memory and keeping them around throughout the whole app is unacceptable, you may need to do some tweaky things to load them in at an appropriate time and to unload them after they've been used. That stuff is always tricky, and requires multithreading to not block the main UI while loading. But doable. And there are examples.
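A minimal sketch of that background-loading idea, using GCD (available on iOS 4) and the outlet and file names from the question; this is one way to do it under MRC, not the only way:

```objc
// Load the animation frames on a background queue, then hand them to
// the image view on the main thread so the UI is never blocked.
- (void)loadFramesInBackground {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSMutableArray *frames = [[NSMutableArray alloc] initWithCapacity:25];
        for (int i = 0; i < 25; i++) {
            NSString *name = [NSString stringWithFormat:@"normal_%03d_crop", i];
            NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"png"];
            UIImage *image = [UIImage imageWithContentsOfFile:path];
            if (image) [frames addObject:image];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            // animationImages copies the array, so we can release our copy.
            Animation_Normal_View.animationImages = frames;
            Animation_Normal_View.animationDuration = 1;
            Animation_Normal_View.animationRepeatCount = 0; // loops forever
            [Animation_Normal_View startAnimating];
            [frames release];
        });
    });
}
```

Note that imageWithContentsOfFile: may still defer the actual PNG decode until first draw, so the first loop through the frames can stutter; this only moves the file I/O off the main thread.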

Kalle
A: 

Hey voipforces, this is a good question and I will address it here with a few thoughts.

First, you are loading a series of graphics that are around 4MB in total size. This may take a moment, especially on slower (older) devices.

In the @interface block of your .h file you may want to declare two properties such as:

IBOutlet UIImageView *animationViewNormal;
NSMutableArray *animationViewNormalImages;

The first is the UIImageView that you already have (just renamed for best practice) and the second is a mutable array to hold the stack of images for the image view. Let me also ask whether having "normal" implies state: for clarification, are you loading additional sets of images for different states?

In your .m file in the @interface create the following method:

- (void)loadAnimationImages;

This will provide the method that loads the image stack into the mutable array defined in the header.

In the same .m file in the @implementation you'll want the following:

- (void)loadAnimationImages {
  for (NSUInteger i = 0; i < 25; i++) {
    NSString *imageName = [NSString stringWithFormat:@"normalCrop%03u", i];
    UIImage *image = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:imageName ofType:@"png"]];
    if (image) {
      [animationViewNormalImages addObject:image];
    }
  }
}

As you can see I renamed the PNG files from normal_%03u_crop to normalCrop%03u as it is best practice to put the index label at the end of the file name (also most apps will output the content this way). The loop loads an image, checks to see that it is an image and then adds the image to the "image stack" in the mutable array.

In the init() you'll need the following:

- (id)init {
  ...
  animationViewNormalImages = [[NSMutableArray alloc] init];
  ...
}

This allocates the (animationViewNormalImages) mutable array to hold your stack of images for the image view.

We'll now move on to the code for the viewDidLoad():

- (void)viewDidLoad {
  [super viewDidLoad];
  ...
  [self loadAnimationImages];
  [animationViewNormal setAnimationImages:animationViewNormalImages];
  [animationViewNormal setAnimationDuration:1.1f];
  [animationViewNormal setAnimationRepeatCount:0];  //  0=infinite loop
  ...
}

We load the stack of images into the mutable array then set the properties of our imageView with the image stack, duration and repeat count.

Next in the viewDidAppear() we start the image view animating:

- (void)viewDidAppear:(BOOL)animated {
  [super viewDidAppear:animated];
  ...
  [animationViewNormal startAnimating];
  ...
}

Once the imageView is animating as an infinite loop we need to handle when leaving the view in the viewWillDisappear():

- (void)viewWillDisappear:(BOOL)animated {
  [super viewWillDisappear:animated];
  ...
  [animationViewNormal stopAnimating];
  ...
}

Last (though it should be the second thing we add to the .m file), we clean up the mutable array in the dealloc():

- (void)dealloc {
  ...
  [animationViewNormalImages release];
  [super dealloc];
}

This is how we handle it, and it works for us; then again, we're normally not loading 4 MB of images into memory to animate.

The .PNG files are compressed when building the app, and I am not sure whether they are decompressed on the fly when the images are loaded out of the resource bundle. This is a boolean value in the Build Settings (COMPRESS_PNG_FILES).

For performance you may want to consider the following:

  • Mark opaque views as such: Compositing a view whose contents are opaque requires much less effort than compositing one that is partially transparent. To make a view opaque, the contents of the view must not contain any transparency and the opaque property of the view must be set to YES.
  • Remove alpha channels from opaque PNG files: If every pixel of a PNG image is opaque, removing the alpha channel avoids the need to blend the layers containing that image. This simplifies compositing of the image considerably and improves drawing performance.
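The opaque hint from the first bullet is a one-line change; a sketch, using the outlet name from the question:

```objc
// An opaque view lets the compositor skip alpha blending entirely.
// Opaque views should also have a solid background color set.
Animation_Normal_View.opaque = YES;
Animation_Normal_View.backgroundColor = [UIColor blackColor];
```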

Furthermore, you may find it's better to make one large image containing all 25 frames (each offset by the width of an individual frame) and load it once, then use Core Graphics with CGContextClipToRect and simply offset the image within the context. This means more code but may be faster than the standard image-stack method.
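A minimal sketch of that sprite-strip idea, assuming a single hypothetical file "normal_strip.png" whose 25 frames of 320 × 360 are laid out horizontally; a timer advances the frame index, and drawRect: clips to the view's bounds and draws the strip shifted left by frameIndex × frameWidth:

```objc
// SpriteStripView -- draws one frame of a horizontal sprite strip.
// Assumes "normal_strip.png" (hypothetical name) holds 25 frames side by side.
@interface SpriteStripView : UIView {
    UIImage *strip;
    NSUInteger frameIndex;
}
@end

@implementation SpriteStripView

- (void)awakeFromNib {
    strip = [[UIImage imageNamed:@"normal_strip.png"] retain];
    [NSTimer scheduledTimerWithTimeInterval:1.0 / 25.0
                                     target:self
                                   selector:@selector(advanceFrame)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)advanceFrame {
    frameIndex = (frameIndex + 1) % 25;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Clip to a single frame, then draw the whole strip offset so the
    // current frame lands inside the clip region.
    CGContextClipToRect(ctx, self.bounds);
    CGFloat frameWidth = self.bounds.size.width; // 320
    [strip drawAtPoint:CGPointMake(-frameWidth * frameIndex, 0)];
}

- (void)dealloc {
    [strip release];
    [super dealloc];
}
@end
```

The win here is a single image decode up front instead of 25; the trade-off is that you now own the frame timing yourself instead of letting UIImageView drive it.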

Lastly, you may want to consider converting the .PNG files into .PVR (PVRTC) files. More information can be found here: Apple Tech QA, Apple Docs, and Sample Code.

I hope this helps and please vote it up if it does.

Best, Kevin Able Pear Software

Kevin Bomberry
Wow Kevin! Thanks for taking the time to write all this. I haven't digested all of it yet, but will read it very carefully. Again, thank you. That is what the software developer community is all about!