"Particularly, we are not interested in Rollbacking this script, if it fails, we can run it again."
One question is: would you need to, and are you prepared to, restore from a previous backup in order to run the script again?
That is, if the process failed after 10 million rows, would simply re-running the script result in 61 million rows being inserted (the 10 million already loaded, plus the full run again), would the 10 million be skipped/ignored, or would you end up with 10 million updated and 41 million inserted?
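If re-runnability matters, one common way to make the load idempotent is a MERGE keyed on the source's primary key, so already-loaded rows are matched (or updated) rather than inserted a second time. A sketch, with hypothetical table and column names:

```sql
-- target_table / source_table and the id/val columns are illustrative.
-- Keying the MERGE on the primary key makes a re-run safe: rows loaded
-- before the failure are matched instead of duplicated.
MERGE INTO target_table t
USING source_table s
   ON (t.id = s.id)
WHEN MATCHED THEN
  UPDATE SET t.val = s.val          -- or drop this clause to leave
                                    -- already-loaded rows untouched
WHEN NOT MATCHED THEN
  INSERT (id, val)
  VALUES (s.id, s.val);
```

Whether you want the WHEN MATCHED branch at all depends on which of the three outcomes above you actually need.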
Also, if you do a NOLOGGING (direct-path) insert, it will probably require a fresh backup immediately after the job, since blocks loaded without redo cannot be recovered from the archived logs. You would also have problems doing a point-in-time recovery to a moment during the script run, so you need to consider what other activity is happening on the database while the script is running.
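For reference, a direct-path NOLOGGING load typically looks something like the following sketch (table and column names are illustrative):

```sql
-- NOLOGGING on the table plus the APPEND hint gives a direct-path
-- insert with minimal redo -- but the loaded blocks are unrecoverable
-- from the archive logs until the next backup.
ALTER TABLE target_table NOLOGGING;

INSERT /*+ APPEND */ INTO target_table (id, val)
SELECT id, val FROM source_table;

COMMIT;   -- a direct-path insert must be committed before the session
          -- can query the table again

ALTER TABLE target_table LOGGING;   -- restore normal logging afterwards
```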
Depending on how you've written the PL/SQL script, you may find that using large set-based SQL statements, rather than row-by-row processing, reduces undo (e.g. by minimising revisits to already-processed blocks).
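To illustrate the contrast (again with hypothetical table names): the row-by-row version below touches each target block once per row, while the set-based version does the same work in a single statement, which is usually both faster and lighter on undo.

```sql
-- Row-by-row: one INSERT per source row, more undo and more block
-- revisits.
BEGIN
  FOR r IN (SELECT id, val FROM source_table) LOOP
    INSERT INTO target_table (id, val) VALUES (r.id, r.val);
  END LOOP;
  COMMIT;
END;
/

-- Set-based equivalent: one SQL statement does the whole load.
INSERT INTO target_table (id, val)
SELECT id, val FROM source_table;
COMMIT;
```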
Unless you really understand the impact of reducing undo, or of committing periodically to allow undo to be reused, my first recommendation would simply be to increase the size of the undo tablespace. The drawback, of course, is that if you have generated bucket-loads of undo, a failure will take a LONG time to roll back.
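Resizing undo is a one-liner either way; the file paths, tablespace name, and sizes below are purely illustrative:

```sql
-- Grow an existing undo datafile...
ALTER DATABASE DATAFILE '/u01/oradata/DB/undotbs01.dbf' RESIZE 32G;

-- ...or add another datafile to the undo tablespace.
ALTER TABLESPACE undotbs1
  ADD DATAFILE '/u01/oradata/DB/undotbs02.dbf' SIZE 16G AUTOEXTEND ON;
```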