There's a way to do this that involves no code writing at all, and it will probably run faster than anything you would write. The IBM ICETOOL utility is a wrapper for DFSORT and can do this quite easily. Here's a sample job step that writes all duplicate records to DUPES and the non-duplicates to NODUPES. If you want to uniquify the duplicate file, run it through a simple sort with SUM FIELDS=NONE afterward.
//ICE EXEC PGM=ICETOOL
//TOOLIN DD *
SELECT FROM(INFILE) TO(NODUPES) ON(2,10,CH) NODUPS DISCARD(DUPES)
/*
//INFILE DD DSN=MY.INFILE,DISP=SHR
//NODUPES DD DSN=MY.OUTFILE.NODUPES,DISP=(NEW,CATLG,DELETE),LIKE=MY.INFILE
//DUPES DD DSN=MY.OUTFILE.DUPES,DISP=(NEW,CATLG,DELETE),LIKE=MY.INFILE
//TOOLMSG DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
This assumes your sort key starts at position 2 and is 10 bytes long (the ON(2,10,CH) operand: position, length, format). ICETOOL should be available in your standard LPA/linklist load datasets, so no JOBLIB/STEPLIB override is needed.
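The follow-up uniquify step mentioned above can be sketched as a plain DFSORT step like this (the dataset names here are placeholders, and the key position/length must match the ON field used in the ICETOOL step):

//DEDUPE   EXEC PGM=SORT
//SYSOUT   DD SYSOUT=*
//SORTIN   DD DSN=MY.OUTFILE.DUPES,DISP=SHR
//SORTOUT  DD DSN=MY.OUTFILE.DUPES.UNIQ,DISP=(NEW,CATLG,DELETE),
//            LIKE=MY.OUTFILE.DUPES
//SYSIN    DD *
  SORT FIELDS=(2,10,CH,A)
  SUM FIELDS=NONE
/*

SUM FIELDS=NONE tells DFSORT to keep only one record from each set of records with equal sort keys, so SORTOUT ends up with exactly one copy of each duplicated key.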