To be specific, I'm talking about writing a dealloc like so:

- (void)dealloc
{
    self.myvar = nil;
    [super dealloc];
}

I understand this goes against Apple's recommendations. I also understand that it can cause issues with KVO, as can using the setter on a partially deallocated object. But if I'm making the calls in this order (i.e., setters first, then `[super dealloc]`), is there any risk in doing this? I'm trying to understand exactly what the dangers are, and specifically why this is a Bad Thing(tm). Thanks.
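For contrast, this is a sketch of the pattern Apple recommends instead, which releases the backing ivar directly and bypasses the setter entirely (assuming manual reference counting and an ivar named `myvar` backing the property):

```objc
- (void)dealloc
{
    [myvar release];  // release the ivar directly; no setter side effects, no KVO notifications
    myvar = nil;      // optional: guards against accidental use of a dangling pointer during teardown
    [super dealloc];
}
```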

+5  A: 

In general, a setter could require access to another instance variable, which could cause Bad Mojo if you've already disposed of it. Is there a specific reason why you don't want to use `[myvar release]`?
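A hypothetical illustration of that hazard (the `cache` ivar and `invalidateEntryFor:` method are invented for this sketch): if a custom setter touches another ivar that dealloc has already released, the setter ends up messaging a freed object.

```objc
- (void)setMyvar:(id)newValue
{
    // Assume `cache` is another ivar. If dealloc released `cache` before
    // calling self.myvar = nil, this line messages a freed object: crash or
    // undefined behavior.
    [cache invalidateEntryFor:myvar];
    [newValue retain];
    [myvar release];
    myvar = newValue;
}
```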

Shaggy Frog
I'm sitting in front of a lot of code that uses `self.something = nil` in dealloc, and I'm trying to decide if it's worth going back and changing it everywhere. This code has existed and run fine for a while, so I'm thinking the change to `[something release]` should probably be done, but it doesn't qualify as a red alert.
Dave Klotz
If changing it (from `self.…`) changes behavior, then you have likely exposed a bug you didn't notice. I'd correct it: any behavioral changes will likely be positive (you say it runs fine, so there may be no change at all), and the code will be easier to maintain and less likely to cause problems that are difficult to track down. The current form is fragile: suppose somebody using the object (as an ivar) changes the deallocation order, or the deallocation order changes via the system libraries when a view is destroyed -- that's enough to change the behavior in some cases (leaks, resurrection/rebuilding).
Justin
I'll take that under serious consideration. As with all things, it's an issue of time, resources and priorities. And, of course, it's very hard to convince people to make a change like this when the product appears to be working fine. But it's definitely high on my list of things to fix. Thanks to everyone who responded.
Dave Klotz
+1  A: 

In addition to the reasons stated (which can mean UB or disaster), you may end up with objects reconstructing themselves, and with unnatural dependency cycles (e.g. among subclasses that override accessors). The subclasses may have established their own dependencies, although the subclass's ivars are (effectively) out of reach and should not be known to the superclass. Using setters in dealloc can severely restrict the usability of your objects for subclasses and break their implementation (because their implementation may expect hacks, or may go through some form of resurrection). This is all stuff you want to avoid, and you should avoid accessors in init… for similar reasons. It gets ugly in large systems and is difficult to maintain -- while the issue is easily avoided.
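A hedged sketch of the subclass hazard described above (the class names and the `history` ivar are invented; assuming MRC, as in the question): because `self.myvar = nil` is a dynamically dispatched message, the superclass's dealloc runs the subclass's override against an object the subclass has already partially torn down.

```objc
@interface Base : NSObject { NSString *myvar; }
@property (nonatomic, retain) NSString *myvar;
@end

@implementation Base
@synthesize myvar;
- (void)dealloc
{
    self.myvar = nil;   // dynamic dispatch: runs a subclass override, if any
    [super dealloc];
}
@end

@interface Derived : Base { NSMutableArray *history; }
@end

@implementation Derived
- (void)setMyvar:(NSString *)newValue
{
    // Subclass logic that assumes `history` is still alive.
    [history addObject:newValue ? newValue : @"(cleared)"];
    [super setMyvar:newValue];
}
- (void)dealloc
{
    [history release];  // Derived tears down its own ivars first...
    [super dealloc];    // ...then Base's dealloc calls setMyvar:, which now
                        // messages the freed `history` object
}
@end
```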

Justin