I think you're overestimating the capabilities of most modern game AI, which is great, because that's exactly what modern game developers are hoping for. They invest time into making the system appear more intelligent than it is, for example by having the AI characters talk about what they're going to do, or by occasionally following pre-set scripts that perform a complex series of tasks.
If not actual machine learning algorithms, then what is driving bleeding-edge commercial game AI? Is it simply highly complex but static (non-ML) algorithms that are able to cover most of the possibilities?
There are actually very few possibilities, usually. As mentioned in another answer, there is typically a finite state machine at work. e.g. A typical enemy in a shooting game may be in one of the following states: idle, alert (they know there is trouble near), hunting (they are seeking an enemy), attacking (they can see the enemy and are engaging it), and fleeing (they are attempting to escape from an enemy). Transitions between the states can be triggered by simple events such as a noise being heard, an opponent being seen, a health value dropping below a certain threshold, etc. Very trivial, really.
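A minimal sketch of that kind of state machine, with made-up state and event names (none of this comes from any particular engine):

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    ALERT = auto()      # knows there is trouble near
    HUNTING = auto()    # seeking an enemy
    ATTACKING = auto()  # can see the enemy, engaging it
    FLEEING = auto()    # trying to escape

class EnemyAI:
    """Toy finite state machine: transitions fire on simple game events."""

    def __init__(self):
        self.state = State.IDLE

    def on_event(self, event):
        # Each transition is just an (event, current state) check.
        if event == "noise_heard" and self.state == State.IDLE:
            self.state = State.ALERT
        elif event == "enemy_seen" and self.state in (State.IDLE, State.ALERT, State.HUNTING):
            self.state = State.ATTACKING
        elif event == "enemy_lost" and self.state == State.ATTACKING:
            self.state = State.HUNTING
        elif event == "low_health":
            self.state = State.FLEEING
        return self.state
```

The whole "brain" is a handful of if-statements; the apparent intelligence comes from pairing each state with convincing animations and dialogue.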
The implementation of each state can usually be decomposed into a small number of simple actions, e.g. move to position, look in direction, shoot at target, etc. These low-level activities are well-documented and widely used (e.g. A* search for pathfinding, vector mathematics for steering and orientation). All of these building blocks work just as well in 3D as they did in 2D, for the most part.
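As an illustration of how simple the vector maths is, here is a sketch of the classic "seek" steering behaviour (the function name and 2D tuple representation are my own; real engines use proper vector types):

```python
import math

def seek(position, target, max_speed):
    """Return the velocity that moves an agent from `position`
    straight toward `target` at `max_speed`."""
    dx = target[0] - position[0]
    dy = target[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)  # already there; no movement needed
    # Normalise the offset, then scale to the agent's top speed.
    return (dx / dist * max_speed, dy / dist * max_speed)
```

A "move to position" state is little more than calling this each frame until the distance to the target drops below some threshold.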
Additionally, the more complex-looking AI is often scripted, which is to say that the behaviour is pre-programmed in a simple programming language to work in a very specific game situation. Scripts for specific situations can make assumptions about the environment (e.g. the location of cover to hide behind, the proximity of allies and enemies, etc.) and can provide very specific goals accordingly. More general scripts can be triggered by a set of predetermined event types (e.g. Enemy Seen, Ally Killed, Unidentified Noise Heard) with very simple responses written in each case (e.g. IF self.health > 75% THEN attackNearestEnemy ELSE fleeToSafety).
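In code, that event-to-response scripting is little more than a dispatch table. A hedged sketch, with hypothetical event names and response labels matching the examples above:

```python
def on_enemy_seen(agent):
    # Direct translation of: IF self.health > 75% THEN
    # attackNearestEnemy ELSE fleeToSafety
    if agent["health"] > 75:
        return "attackNearestEnemy"
    return "fleeToSafety"

# Map each predetermined event type to a simple scripted response.
handlers = {
    "EnemySeen": on_enemy_seen,
    "AllyKilled": lambda agent: "takeCover",
    "UnidentifiedNoiseHeard": lambda agent: "investigate",
}

def dispatch(event, agent):
    """Look up the script for an event; ignore events with no script."""
    handler = handlers.get(event)
    return handler(agent) if handler else None
```

The returned labels would map onto the same low-level actions (move, look, shoot) discussed above; the "script" is just the glue between events and actions.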
...was a learning algorithm perhaps used in the development stages to produce a model (static algorithm), and that model is then used to make decisions in the game?
This is quite common in games that model vehicles, such as racing games: you might feed an algorithm the race track as a series of points and inputs based on those points, and get a learning algorithm to develop a strategy that completes the laps in the best time. Eventually you can ship that with the game. But that is a simple mapping from a small number of inputs (angle of road ahead, proximity of obstacles, current speed) to a small number of outputs (desired speed, desired steering), which lends itself well to machine learning. Games that simulate human behaviour can rarely fall back on these simple function approximations, so we tend to rely on manual simulation of distinct behavioural states.
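To make the "train offline, ship the frozen result" idea concrete, here is a heavily simplified sketch: a linear policy mapping those few inputs to the two outputs, tuned by random search against a stand-in lap-time function. Everything here (the weights, the cost function, the input scales) is invented for illustration; a real setup would evaluate candidates by driving the actual physics simulation around the track.

```python
import random

def policy(weights, road_angle, obstacle_dist, speed):
    """Linear mapping from track observations to controls.
    The weights are tuned offline, then shipped frozen with the game."""
    steer = weights[0] * road_angle
    target_speed = weights[1] * obstacle_dist - weights[2] * speed
    return steer, target_speed

def lap_cost(weights):
    # Stand-in for timing a full simulated lap with this policy.
    # Here: penalise steering that deviates from the road angle.
    steer, _ = policy(weights, 0.3, 10.0, 5.0)
    return abs(steer - 0.3)

# Offline tuning by random search: perturb the best weights found so far.
best = [0.5, 0.1, 0.05]
best_cost = lap_cost(best)
for _ in range(200):
    candidate = [w + random.uniform(-0.1, 0.1) for w in best]
    cost = lap_cost(candidate)
    if cost < best_cost:
        best, best_cost = candidate, cost

# `best` is what ships with the game; at runtime only `policy` runs.
```

The important point is the split: the (possibly expensive) learning loop runs on the developer's machine, while the shipped game only evaluates the cheap, static `policy` function.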
A hybrid approach is also possible, where the finite state machine's transitions are trained to be more optimal, but it is unlikely to figure in many games in practice, since realistic transitions are usually trivial for a designer to implement by hand.