YOU DON'T NEED TO INFER THE TYPE DECLARATIONS!!!
You can derive what you want directly from the .dbf files. Each column has a name, a type code (C=Character, N=Number, D=Date (yyyymmdd), L=Logical (T/F), plus more types if the files are from FoxPro), a length (where relevant), and a number of decimal places (for type N).
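For instance, here's a minimal sketch of pulling those field descriptors straight out of the file (this assumes a plain dBase III-style layout; Visual FoxPro files carry extra header baggage, so treat it as illustrative only):

import struct

def dbf_field_descriptors(path):
    # 32-byte file header, then one 32-byte descriptor per field,
    # terminated by a single 0x0D byte.
    with open(path, 'rb') as f:
        header = f.read(32)
        header_len = struct.unpack('<H', header[8:10])[0]
        n_fields = (header_len - 33) // 32
        fields = []
        for _ in range(n_fields):
            raw = f.read(32)
            name = raw[:11].split(b'\x00', 1)[0].decode('ascii')
            type_code = raw[11:12].decode('ascii')
            length = raw[16]         # field length in bytes
            decimals = raw[17]       # decimal places (type N)
            fields.append((name, type_code, length, decimals))
    return fields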
Whatever software you used to dig the data out of the .dbf files needed that information to convert each piece of data to the appropriate Python data type.
Dictionaries? Why? With a minor amount of work, that software could be modified to produce a CREATE TABLE statement based on those column definitions, plus an INSERT statement for each row of data.
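Something along these lines, say (a sketch only: the SQL type mapping is my guess at sensible generic equivalents, and "fields" is whatever list of (name, type_code, length, decimals) tuples your reader gives you):

def sql_type(type_code, length, decimals):
    if type_code == 'N':
        return 'INTEGER' if decimals == 0 else 'NUMERIC(%d,%d)' % (length, decimals)
    if type_code == 'C':
        return 'VARCHAR(%d)' % length
    if type_code == 'D':
        return 'DATE'
    if type_code == 'L':
        return 'BOOLEAN'
    return 'TEXT'                      # fallback for anything exotic

def make_statements(table_name, fields):
    cols = ', '.join('%s %s' % (name, sql_type(t, length, dec))
                     for name, t, length, dec in fields)
    names = ', '.join(name for name, t, length, dec in fields)
    marks = ', '.join('?' for _ in fields)
    create = 'CREATE TABLE %s (%s)' % (table_name, cols)
    insert = 'INSERT INTO %s (%s) VALUES (%s)' % (table_name, names, marks)
    return create, insert

Execute the CREATE TABLE once, then feed the INSERT plus the converted rows to cursor.executemany().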
I presume you are using one of the several published Python DBF-reading modules. Any one of them should have the facilities that you need: open a .dbf file, get the column names, get the column type and size info, get each row of data. If you are unhappy with the module that you are using, talk to me; I have an unpublished one that, as far as reading DBFs goes, combines the better features of the others, avoids the worst features, is as fast as you'll get with a pure Python implementation, handles all the Visual FoxPro data types and the _NullFlags pseudo-column, handles memos, etc.
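By way of example, with the published dbfread module it looks roughly like this (dbfread is just one of the published readers, and the attribute names here are from memory; check your own module's docs for its spelling):

from dbfread import DBF

table = DBF('places.dbf')              # 'places.dbf' is a made-up filename
for f in table.fields:                 # the field descriptors discussed above
    print(f.name, f.type, f.length, f.decimal_count)
for record in table:                   # rows arrive already converted to Python types
    print(record)
    break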
HTH
=========
Addendum:
When I said you didn't need to infer types, you hadn't made it plain that you had a bunch of fields of type C which contained numbers.
FIPS fields: some have leading zeroes and some don't. If you are going to use them, you face the '012' != '12' != 12 problem. I'd suggest stripping off the leading zeroes and storing the values in integer columns, restoring the leading zeroes in reports or wherever you really need them. Why are there two each of state FIPS and county FIPS?
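E.g. (a quick sketch, using the usual widths of 2 digits for state FIPS and 3 for county FIPS):

>>> int('012') == int('12') == 12   # store as int: the three spellings collapse to one value
True
>>> '%03d' % 12                     # restore the padding only where a report needs it
'012'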
Population: in the sample file, almost all values are integers. Four are like 40552.0000, and a reasonable number are blank/empty. You seem to regard population as important, and asked "Is it possible that some small percentage of population fields contain .... ?" Anything is possible in data. Don't wonder and speculate; investigate! I'd strongly advise you to sort your data in population order and eyeball it; you'll find that multiple places in the same state share the same population count. E.g. there are 35 places in New York state whose population is stated as 8,008,278, spread over 6 counties; 29 of them have a PL_FIPS value of 51000 and 5 have 5100 -- looks like a trailing-zero problem :-(
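A quick way of doing that eyeballing (a sketch; "rows" and the field names 'POPULATION', 'STATE', 'PLACE' are assumptions about your data):

from collections import defaultdict

by_pop = defaultdict(list)
for row in rows:                             # rows: whatever your DBF reader handed you
    try:
        pop = float(row['POPULATION'])
    except (TypeError, ValueError):
        continue                             # skip blanks and oddities for this check
    by_pop[pop].append((row['STATE'], row['PLACE']))

for pop, places in sorted(by_pop.items()):
    if len(places) > 1:                      # the same count claimed by more than one place
        print(pop, len(places), places[:5])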
Tip for deciding between float and int: try anum = float(chars) first; if that succeeds, check if int(anum) == anum.
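I.e. (continuing the 40552.0000 example):

>>> anum = float('40552.0000')
>>> int(anum) == anum
True
>>> anum = float('40552.25')
>>> int(anum) == anum
False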
ID: wonderful "unique ID"; 59 cases where it's not an int -- several in Canada (the website said "US cities"; is this an artifact of some unresolved border dispute?), some containing the word 'Number', and some empty.
Low-hanging fruit: I would have thought that deducing that population was in fact integer was 0.1 inches above the ground :-)
There's a serious flaw in that if all([int(value) ... logic:
>>> all([int(value) for value in "0 1 2 3 4 5 6 7 8 9".split()])
False
>>> all([int(value) for value in "1 2 3 4 5 6 7 8 9".split()])
True
>>>
You evidently think that you are testing that all the strings can be converted to int, but you're adding the rider "and are all non-zero": int('0') succeeds but returns 0, which is falsy, so all() reports False. Ditto float a few lines later.
IOW if there's just one zero value, you declare that the column is not integer.
Even after fixing that, if there's just one empty value, you call it varchar.
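The test you actually want lets int() itself signal failure via ValueError, so a zero (or a column full of them) is not a problem. A minimal sketch:

def all_int(strings):
    # convertibility is signalled by int() raising ValueError,
    # not by the truthiness of its result
    try:
        for s in strings:
            int(s)
    except ValueError:
        return False
    return True

all_int("0 1 2 3 4 5 6 7 8 9".split())   # True, unlike the all() version above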
What I suggest is: count how many are empty (after normalising whitespace, which should include NBSP), how many qualify as integer, how many of the non-integer non-empty ones qualify as float, and how many are "other". Check the "other" ones; decide whether to reject or fix; repeat until happy :-)
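A sketch of that counting approach (the NBSP handling and the classification are assumptions; adapt to however your reader hands you each column's raw string values):

from collections import Counter

def audit_column(values):
    counts = Counter()
    others = []
    for raw in values:
        text = raw.replace('\xa0', ' ').strip()   # normalise whitespace, incl. NBSP
        if not text:
            counts['empty'] += 1
            continue
        try:
            anum = float(text)
            kind = 'int' if int(anum) == anum else 'float'
        except (ValueError, OverflowError):        # non-numeric text, or inf/nan
            kind = 'other'
            others.append(raw)
        counts[kind] += 1
    return counts, others      # inspect the 'other' values, fix or reject, repeat

Empties can then become NULLs in an integer column, rather than forcing the whole column to varchar.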
I hope some of this helps.