At the most basic level, if I'm reading your question right, you generally don't want to blindly update the entire record, in case another user has already updated parts of that record that you haven't actually changed; you would blindly and needlessly revert their updates.
I believe your current algorithm may lead to dirty writes: if you read the record once before the update, allow changes to be made in memory, then read the record again to figure out which fields have changed, what happens if another user updated that record behind your back, leading your algorithm to believe that you were the one who changed that field? And regardless, you shouldn't have to read every record twice to perform a single update.
If your data does not often result in conflicts, you may benefit from reading about optimistic locking, even if you don't choose to implement it.
We've implemented one such method here: add an update-timestamp or an incremental update-number column to your table. In your sandbox/memory, keep track of which fields you have modified (old value/new value), and then freely issue your UPDATE for just those fields on that record, "UPDATE ... WHERE UPDATENUM = the-original-number" (or "WHERE UPDATETS = the-original-timestamp"), making sure that your UPDATE also increments UPDATENUM or sets a new UPDATETS, as appropriate. If the rows-affected count for that UPDATE is 0, you know that someone else has already modified the record in the background and you now have a conflict. But at least you didn't overwrite someone else's changes, and you can then reread the new data or have your user resolve the conflict.
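A minimal sketch of that version-number approach, using Python's sqlite3 against an in-memory database (the `accounts` table, its columns, and the `update_name` helper are illustrative names I've made up, not from your schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, updatenum INTEGER)"
)
conn.execute("INSERT INTO accounts VALUES (1, 'alice', 0)")

def update_name(conn, account_id, new_name, original_updatenum):
    """Update only the changed field, and only if nobody else has
    bumped updatenum since we originally read the record."""
    cur = conn.execute(
        "UPDATE accounts SET name = ?, updatenum = updatenum + 1 "
        "WHERE id = ? AND updatenum = ?",
        (new_name, account_id, original_updatenum),
    )
    # rowcount == 0 means the version moved on: conflict, nothing overwritten
    return cur.rowcount == 1

# Read the record (and its version) once, up front.
row = conn.execute(
    "SELECT name, updatenum FROM accounts WHERE id = 1"
).fetchone()

ok = update_name(conn, 1, "alice2", row[1])     # succeeds, bumps updatenum to 1
stale = update_name(conn, 1, "alice3", row[1])  # fails: updatenum is no longer 0
print(ok, stale)  # True False
```

On a stale write the record is left untouched, so the caller can reread the current row and retry or surface the conflict to the user.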