If you just want to find likely duplicates, the checksum/binary_checksum functions will give you a good indication, though they only produce a 32-bit hash, so depending on the size of your dataset you may end up with a few false positives. checksum() is case-insensitive, binary_checksum() is case-sensitive. This will give you a 32-bit hash for every record in your table:
select checksum(*), binary_checksum(*)
from tableName;
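To see the case-sensitivity difference in isolation (a quick sketch using constant strings, assuming a case-insensitive collation, which is the default on most SQL Server installs):

-- checksum() follows the collation, so 'HELLO' and 'hello' hash the same;
-- binary_checksum() works on the raw bytes, so they don't
select checksum('HELLO'), checksum('hello'),                 -- same value
       binary_checksum('HELLO'), binary_checksum('hello');   -- different values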
You could do a self join matching on duplicate hashes for records with different ID values (or different name values, etc., depending on what makes a given record unique in your dataset). It would look something like this:
select a.id, b.id
from (select id, checksum(*) as cs from tableName) a
join (select id, checksum(*) as cs from tableName) b
  on a.cs = b.cs
 and a.id <> b.id;
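Alternatively, if you just want to see which hashes collide before digging into the individual rows, you can group on the hash instead (same hypothetical tableName as above):

select cs, count(*) as cnt
from (select checksum(*) as cs from tableName) t
group by cs
having count(*) > 1;

Because of the false-positive risk mentioned above, it's worth comparing the actual column values for any rows the hash flags before treating them as duplicates.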
Both functions can also take a list of columns as arguments and hash just those, so if you only want to hash the fName, lName, address, etc. columns rather than the whole record, your checksum call would look like this:
checksum(a.fName, a.lName, a.address, ...)
rather than checksum(*) like in the examples above.
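For example, to flag rows where just the name and address columns match (still assuming the id, fName, lName, and address columns from above, plus whatever other columns matter to you), the self join becomes:

select a.id, b.id
from tableName a
join tableName b
  on checksum(a.fName, a.lName, a.address) = checksum(b.fName, b.lName, b.address)
 and a.id <> b.id;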