Hi,
Have you had any case when RAII wasn't the best method for resource management?
Just curiosity...
Thanks.
GC can handle the memory of cyclic data structures for the programmer while RAII will require the programmer to manually break the cycle somewhere.
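To make that concrete, here is a minimal sketch (names are illustrative) of how a C++ programmer breaks an ownership cycle by hand with std::weak_ptr, where a GC would simply collect the cycle:

```cpp
#include <memory>

struct Node {
    std::shared_ptr<Node> next;  // owning link
    std::weak_ptr<Node> prev;    // non-owning back-link: this is where the cycle is broken
};

// Links two nodes front-and-back and reports the first node's use_count.
long linked_use_count() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->prev = a;              // weak_ptr does not increment a's count
    return a.use_count();     // still 1: no ownership cycle, both nodes get destroyed
}
```

Had `prev` been a shared_ptr, `a` and `b` would own each other and neither destructor would ever run.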
Sometimes two-stage initialization (create, then init, then use) is needed.
Or even three-stage: in our product, there is a collection of independent objects, each running a thread and able to subscribe to any number of other objects (including itself) via priority-inheriting queues. Objects and their subscriptions are read from the config file at startup. At construction time, each object RAIIs everything it can (files, sockets, etc.), but no object can subscribe to others because they are constructed in an unknown order. So after all objects are constructed there is a second stage where all connections are made, and a third stage where, once all connections are made, the threads are let go and begin messaging. Likewise, shutdown is multi-stage as well.
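A rough sketch of that three-stage startup (all names here are illustrative, not the actual product's API):

```cpp
#include <memory>
#include <vector>

struct Object {
    // Stage 1: the constructor RAIIs what it can (files, sockets, ...).
    Object() { /* open files, sockets, etc. */ }
    // Stage 2: subscriptions are wired only once every object exists.
    void connect(Object& other) { peers.push_back(&other); }
    // Stage 3: the thread is released only after the whole graph is complete.
    void start() { started = true; }

    std::vector<Object*> peers;
    bool started = false;
};

void startup(std::vector<std::unique_ptr<Object>>& objects) {
    for (auto& o : objects) o->connect(*objects.front()); // stage 2 (example wiring)
    for (auto& o : objects) o->start();                   // stage 3
}
```

The key point is that stages 2 and 3 cannot be folded into the constructors, because each depends on *all* objects having completed the previous stage.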
RAII means that the ownership of resources is defined and managed through the guarantees provided by the language constructs, most notably, but not limited to, constructors and destructors.
The point of RAII in C++ is that the resource ownership policy can actually be enforced by the language. A lesser alternative to RAII is for the API to advise the caller (e.g., through comments or other documentation) to explicitly perform ACQUIRE() and RELEASE() operations at certain times. That kind of policy is not enforceable by the language.
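For instance (a hypothetical API; `acquire_handle`/`release_handle` are made-up names), a thin RAII wrapper is all it takes to turn the documented pairing into a language guarantee:

```cpp
// Hypothetical C-style API whose policy lives only in documentation:
// callers *must* pair acquire_handle()/release_handle(), but nothing enforces it.
int acquire_handle() { return 42; }
void release_handle(int) { /* return the handle to the system */ }

// The RAII wrapper makes the pairing enforceable: release happens on every
// path out of the scope, including exceptions.
class Handle {
    int h;
public:
    Handle() : h(acquire_handle()) {}
    ~Handle() { release_handle(h); }
    Handle(const Handle&) = delete;            // forbid accidental double-release
    Handle& operator=(const Handle&) = delete;
    int get() const { return h; }
};
```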
So the original question is another way to ask whether there are cases when an unenforceable approach to resource management is preferable to RAII. The only cases I can think of are where you are deliberately circumventing the existing resource management constructs in the language, and writing your own framework. For example, you are implementing a garbage collected scripting language interpreter. The "virtual allocation" of atoms will likely play games with memory blocks. Similarly, a pool based allocator expects the program to eventually call a DESTROY_POOL() operation, with global consequences (i.e., any item allocated from that pool will be invalidated).
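A toy sketch of that pool idea (again, names like `Pool::destroy` are illustrative): individual allocations have no per-object owner, and a single bulk call invalidates all of them, which is exactly the kind of policy per-object RAII does not express. RAII can still back-stop the pool itself, though:

```cpp
#include <cstddef>
#include <vector>

class Pool {
    std::vector<char*> blocks;
public:
    void* allocate(std::size_t n) {
        char* p = new char[n];
        blocks.push_back(p);
        return p;               // no per-allocation owner; the pool owns everything
    }
    void destroy() {            // the DESTROY_POOL() operation: invalidates every
        for (char* p : blocks)  // allocation at once, with global consequences
            delete[] p;
        blocks.clear();
    }
    std::size_t live() const { return blocks.size(); }
    ~Pool() { destroy(); }      // RAII back-stop in case destroy() is never called
};
```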
The only case I can think of where RAII was not the solution is with multithreaded critical region lock management. In general it is advisable to acquire the critical region lock (consider that the resource) and hold it in a RAII object:
void push( Element e ) {
lock l(queue_mutex); // in construction acquire, in destruction release
queue.push(e);
}
But there are situations where you cannot use RAII for that purpose. In particular, if a variable used in a loop condition is shared by multiple threads and you cannot hold the lock for the whole loop execution, then you must acquire and release the lock with a different mechanism:
void stop_thread() {
lock l(control_mutex);
exit = true;
}
void run() {
control_mutex.acquire();
while ( !exit ) { // exit is a boolean modified somewhere else
control_mutex.release();
// do work
control_mutex.acquire();
}
control_mutex.release();
}
It might even be possible to use RAII by (ab)using operator, now that I think of it, although I had never actually considered it before. But I guess this is not really natural:
void run() {
while ( lock(control_mutex), !exit ) { // temporary lock lives only while the condition is evaluated
// do work
}
}
So I guess the answer is: none that I can imagine...
EDIT: Other solutions for the same problem using RAII:
bool should_exit() const {
lock l(mutex);
return exit;
}
void run() {
while ( !should_exit() ) {
// do work
}
}
@fnieto:
void run() {
while (true) {
{ lock l(mutex);
if (exit) break;
}
// do work
}
}
In cases where resource release may fail, RAII alone may not be sufficient to manage that resource, since destructors shouldn't throw. RAII may still be part of the solution, though.
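One common shape for this (sketched here with a made-up `Flushable` class and a simulated failure flag) is an explicit close() that can report the error, with the destructor acting only as a non-throwing back-stop:

```cpp
// Sketch: a file-like resource whose release can fail (e.g., a final flush).
// The explicit close() lets the caller observe and handle the failure;
// the destructor only back-stops, and must swallow any error.
class Flushable {
    bool open = true;
    bool flush_fails;               // simulated failure mode, for illustration
public:
    explicit Flushable(bool fail = false) : flush_fails(fail) {}
    bool close() {                  // returns false if the final flush failed
        if (!open) return true;     // idempotent: closing twice is harmless
        open = false;
        return !flush_fails;
    }
    ~Flushable() { close(); }       // last resort: never throws
};
```

The caller who cares about the error calls close() explicitly and checks the result; the RAII destructor guarantees release still happens on early exits.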