I'm thinking of dozens of concurrent jobs writing to the same datastore Model. Does the datastore scale regardless of the number of concurrent puts?
The datastore can only handle so many writes per second to any given entity. Writing to a specific entity too quickly causes contention, as described in Avoiding datastore contention. That article recommends sharding an entity if you expect to update it more than once or twice per second.
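The sharding idea is simply to spread one logical counter across N entities and pick one at random per write. Here's a minimal sketch of the key-selection logic in plain Python (the shard count and key-name format are my own illustrative choices, not from the article); the actual datastore put/get calls are omitted:

```python
import random

NUM_SHARDS = 20  # assumption: tune this to your expected write rate


def shard_key_name(counter_name, num_shards=NUM_SHARDS):
    """Pick one of N shard key names at random, so concurrent writes
    spread across N entities instead of contending on a single one."""
    index = random.randrange(num_shards)
    return "%s-shard-%d" % (counter_name, index)


def all_shard_key_names(counter_name, num_shards=NUM_SHARDS):
    """Reading the total means fetching and summing every shard."""
    return ["%s-shard-%d" % (counter_name, i) for i in range(num_shards)]
```

Each write increments one randomly chosen shard entity; a read sums all shards. That trades a slightly more expensive read for roughly N times the sustainable write rate.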
The datastore is optimized for reads, but if your concurrent jobs are writing to separate entities (even if they are within the same model) then your application might scale - it will depend on how long your request handlers take to execute.
There is no contention for entity kinds - only for entity groups (entities sharing the same root ancestor). Since you say you're writing to a new entity each time, you should be able to scale arbitrarily.
One subtlety remains, however: if you're inserting entities at a high rate (hundreds per second) and you're using the default auto-generated IDs, you can get 'hot tablets', which cause contention. If you expect that high a rate of insertions, you should use key names instead, and pick a key that doesn't cluster the way auto-generated IDs do - for example, an email address or a randomly generated UUID.
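Generating a scattered key name is a one-liner with the standard library; a quick sketch (the helper name is mine):

```python
import uuid

def scattered_key_name():
    # Random UUIDs are uniformly distributed across the keyspace,
    # so consecutive inserts land on different tablets rather than
    # piling up on the tablet holding the highest IDs.
    return uuid.uuid4().hex
```

You'd then pass the result as the key_name when constructing the entity, instead of letting the datastore assign a sequential ID.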