The reality is that such conversions wouldn't generally be good object oriented code. Why? Because object oriented code isn't simply moving functions into methods and data into members.
Instead, a good object should be responsible for all of its data and only take method parameters where those parameters are defining the data that will be operated on.
This means there isn't a 1:1 mapping from procedural functions and data structures to object oriented ones.
Having taken a look around, I didn't find any examples I cared for online, so I will simply give my refactoring rules for converting procedural code to OOP.
The first step is simply to package each module as an object. In other words, just create an object that holds the data and the functions. This is horrible to a purist, but you have to start somewhere. For example, if you had a BankAccount module, you now have a BankAccount object.
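A sketch of that first step, in Java (the text's example fits a C-family language; everything beyond the BankAccount name is an illustrative assumption). Note that nothing is object oriented yet: the methods still take the data as parameters, exactly as the procedural functions did.

```java
// Step 1 sketch (hypothetical procedural module): data and functions are
// merely packaged together in one class. The data is still public and still
// passed around by hand -- later steps fix that.
public class BankAccount {
    // Formerly module-level data, now instance fields (still public for the moment).
    public String owner;
    public double balance;

    // Formerly free functions, now static methods -- they still receive the
    // data as parameters, just as the procedural code did.
    public static double deposit(double balance, double amount) {
        return balance + amount;
    }

    public static double withdraw(double balance, double amount) {
        return balance - amount;
    }
}
```

The point of this intermediate state is only that the code now compiles as a class; the callers are unchanged.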
In the procedural code, the functions had their data passed in from external calls. Here you are looking for ways to internalize that data and make it as private as possible. The goal is to receive your data in the constructor (at least as a starting point), remove the parameters that used to carry the data in, and replace them with references to the now-private fields. With the BankAccount object, all access to the account is now via methods on the object, and the actual account data has been internalized.
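Continuing the same hypothetical example as a Java sketch: the data arrives through the constructor and becomes private, and the old data parameters disappear from the method signatures. The methods still return the modified balance here, which is the habit the next step removes.

```java
// Step 2 sketch: the account data is internalized. Parameters now only
// define what is operated on (the amount), not the data being operated on.
public class BankAccount {
    private final String owner;   // internalized, no longer passed around
    private double balance;

    public BankAccount(String owner, double openingBalance) {
        this.owner = owner;
        this.balance = openingBalance;
    }

    // Still returns the modified value, mirroring the old procedural style.
    public double deposit(double amount) {
        balance += amount;
        return balance;
    }

    public double withdraw(double amount) {
        balance -= amount;
        return balance;
    }
}
```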
Many of your functions probably returned modified versions of the data structures: stop returning that data directly and keep those modifications within the private structures. Where necessary, create accessor properties that return the now-private data and mark them "obsolete" (your goal is to make the object the master of its data and return only results, not the internal data). With the BankAccount object, we no longer return the actual account data; instead we have properties such as CurrentBalance and methods such as AverageBalance(int days) to inspect the account.
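A Java sketch of this step, using the CurrentBalance and AverageBalance names from the text (the daily-balance bookkeeping is an illustrative assumption, and `@Deprecated` stands in for C#'s `[Obsolete]`):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

// Step 3 sketch: mutations stay inside the object, and the internals are
// exposed only as results, never as the raw data structures.
class BankAccount {
    private double balance;
    // Hypothetical bookkeeping: one recorded balance per day.
    private final Deque<Double> dailyBalances = new ArrayDeque<>();

    BankAccount(double openingBalance) {
        balance = openingBalance;
        dailyBalances.addLast(openingBalance);
    }

    void deposit(double amount)  { balance += amount; }
    void withdraw(double amount) { balance -= amount; }

    // Record today's closing balance (stands in for real transaction history).
    void closeDay() { dailyBalances.addLast(balance); }

    // Accessor returning a result, not the internal structure.
    double getCurrentBalance() { return balance; }

    // Average of the last `days` recorded balances -- a computed result.
    double averageBalance(int days) {
        double sum = 0;
        int counted = 0;
        Iterator<Double> it = dailyBalances.descendingIterator();
        while (it.hasNext() && counted < days) {
            sum += it.next();
            counted++;
        }
        return counted == 0 ? 0 : sum / counted;
    }

    // The old "hand me the data" accessor, kept temporarily for old callers
    // and marked obsolete so it can be retired.
    @Deprecated
    Deque<Double> getRawDailyBalances() { return dailyBalances; }
}
```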
Eventually you will have a set of self-contained objects that still bear little resemblance to what you would have designed had you started with objects, but at least you can continue refactoring with your new objects. My next step is usually to discover that the objects created this way have way too many responsibilities. By this point some common threads have probably emerged, and you should create objects that capture those common ideas. If we have BankAccount, we probably have other account types, and if we align the methods of all of these account types we can extract Account as a base class that implements the shared features while BankAccount, SavingsAccount and the others implement the details.
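A minimal Java sketch of that extraction, assuming the text's BankAccount/SavingsAccount names; the checking account and the interest rate are illustrative details, not from the text:

```java
// Step 4 sketch: shared behavior is pulled up into an Account base class,
// and each account type supplies only its differences.
abstract class Account {
    private double balance;

    protected Account(double openingBalance) { balance = openingBalance; }

    public void deposit(double amount)  { balance += amount; }
    public void withdraw(double amount) { balance -= amount; }
    public double getCurrentBalance()   { return balance; }

    // Lets subtypes adjust the balance without exposing the field.
    protected void credit(double amount) { balance += amount; }
}

class CheckingAccount extends Account {
    CheckingAccount(double openingBalance) { super(openingBalance); }
}

class SavingsAccount extends Account {
    private final double annualRate;

    SavingsAccount(double openingBalance, double annualRate) {
        super(openingBalance);
        this.annualRate = annualRate;
    }

    // The detail this subtype adds: periodic interest.
    void applyMonthlyInterest() {
        credit(getCurrentBalance() * annualRate / 12.0);
    }
}
```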
Once the class structure starts to take shape it is time to feel better about the conversion. Refactoring is a process, not an endpoint, so I generally find that my class structure continues to evolve. One of the nice things about having gotten this far is that your data is private and manipulated through methods, so you can refactor the internals more and more freely as you progress.
One thing that makes this practical is having good unit tests. When doing procedural-to-OOP conversions, I often keep the old code around as the "baseline" so I can test against it; that is, the tests verify the new code against the old procedural system's results. If the results don't match up, it is a good idea to find out why. Often there is a bug... but sometimes your new, cleaner code is actually doing something right that was wrong in the past.
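The baseline idea can be sketched like this in Java; `LegacyBank` is a hypothetical stand-in for the old procedural module, and a real suite would sweep many inputs rather than one:

```java
// Baseline-testing sketch: keep the old procedural code around and assert
// that the new object produces the same results.
final class LegacyBank {
    // The old procedural function: takes the data, returns the modified data.
    static double deposit(double balance, double amount) {
        return balance + amount;
    }
}

class BankAccount {
    private double balance;
    BankAccount(double openingBalance) { balance = openingBalance; }
    void deposit(double amount) { balance += amount; }
    double getCurrentBalance()  { return balance; }
}

class BaselineCheck {
    // Run the same scenario through both implementations and compare.
    static boolean matchesBaseline(double opening, double amount) {
        double expected = LegacyBank.deposit(opening, amount);
        BankAccount account = new BankAccount(opening);
        account.deposit(amount);
        return account.getCurrentBalance() == expected;
    }
}
```

When a comparison fails, the discrepancy itself is the finding: it is either a bug in the new code or a long-standing bug in the old code that the refactoring surfaced.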
Regarding creating "too deep" object trees: that is usually the result of getting too obsessive about inheritance. I find that composition is often a better idea, where you implement interfaces on multiple objects rather than trying to cram all those features into a single parent object. If you find yourself creating parent objects that are simply blends of feature sets, consider simplifying by making an interface for each feature set and implementing those interfaces.
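That last point can be sketched as follows in Java; the feature sets, rates, and limits here are illustrative assumptions, not from the text:

```java
// Instead of a deep parent hierarchy blending feature sets, each feature
// set gets its own interface and types implement only what they need.
interface InterestBearing {
    double monthlyInterest();
}

interface Overdraftable {
    double overdraftLimit();
}

// One type picks up both feature sets...
class PremiumAccount implements InterestBearing, Overdraftable {
    private final double balance;
    PremiumAccount(double balance) { this.balance = balance; }
    public double monthlyInterest() { return balance * 0.01; } // assumed 1%/month
    public double overdraftLimit()  { return 500.0; }          // assumed limit
}

// ...while another implements only one, with no shared parent required.
class BasicSavings implements InterestBearing {
    private final double balance;
    BasicSavings(double balance) { this.balance = balance; }
    public double monthlyInterest() { return balance * 0.005; } // assumed 0.5%/month
}
```

The payoff is that no single ancestor ever has to know about every feature; a new combination of features is a new class, not a new layer in the tree.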