I've worked with both normalized and Kimball star data warehouses, and this doesn't sound like a problem you should be running into. 140,000 rows is not many rows, even for a small data warehouse.
Why do the inserts fail? Typically in a Kimball-style warehouse, no inserts ever fail. In a fact table, for instance, every insert carries a unique combination of dimension keys plus the grain (like a date or time snapshot). In a dimension table, changes are detected, new members are inserted, and existing ones are re-used. In a normalized warehouse, you usually have some kind of revision mechanism, archive process, or effective date that keeps things unique.
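For illustration, here's roughly what I mean - a minimal sketch with made-up table and column names, not your actual schema:

```sql
-- Hypothetical fact table: the grain (date + dimension keys) IS the primary key,
-- so a well-formed load process can never collide on insert.
CREATE TABLE fact_sales (
    date_key        INT NOT NULL,
    product_key     INT NOT NULL,
    store_key       INT NOT NULL,
    sales_amount    DECIMAL(18,2) NOT NULL,
    units_sold      INT NOT NULL,
    CONSTRAINT pk_fact_sales PRIMARY KEY (date_key, product_key, store_key)
);

-- Hypothetical Type 1 dimension load: update members that changed, insert new
-- ones, and re-use the existing surrogate key otherwise.
MERGE INTO dim_product AS tgt
USING staging_product AS src
    ON tgt.product_natural_key = src.product_natural_key
WHEN MATCHED AND tgt.product_name <> src.product_name THEN
    UPDATE SET product_name = src.product_name
WHEN NOT MATCHED THEN
    INSERT (product_natural_key, product_name)
    VALUES (src.product_natural_key, src.product_name);
```

The point is just that uniqueness is designed in at the grain, and the dimension load reconciles against the natural key rather than blindly inserting.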
It seems to me that regardless of your DW philosophy or architecture, there should be something keeping these rows unique.
If (as you stated in your comments) you have a single index containing every column, that's probably not a very useful index in any database design. Are you sure that index is even being used by your queries? Is it also marked as unique, and is that constraint the one being violated? In any case, a multi-column index that wide is relatively expensive to compare against, which could explain the timeout. You can always raise the command timeout on your connection (or set it to wait indefinitely), but I would attack the problem from a design perspective instead.
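If the all-columns index does turn out to be the culprit, I'd replace it with a unique constraint on just the columns that define the grain. Something along these lines (SQL Server-flavored syntax, illustrative names only - adjust for your RDBMS and schema):

```sql
-- Drop the do-everything index; it is wide, expensive to maintain, and rarely
-- useful for seeks.
DROP INDEX ix_fact_sales_allcolumns ON fact_sales;

-- Enforce uniqueness only on the business key / grain, which is much cheaper
-- to check and makes the design intent explicit.
ALTER TABLE fact_sales
    ADD CONSTRAINT uq_fact_sales_grain UNIQUE (date_key, product_key, store_key);
```

A narrow unique constraint like that documents what "duplicate" actually means in your model, and violations then point at a real data or ETL problem rather than an index artifact.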