The Google N-gram Corpus could be used to determine the most likely phrase divisions.
For reasonably short phrases, you could generate all the possible sets of n-grams that the phrase can be divided into (e.g. `["Los", "Angeles", "pizza"]`, `["Los Angeles", "pizza"]`, `["Los", "Angeles pizza"]`, and `["Los Angeles pizza"]` for your example phrase), look them up in the corpus, and see which one(s) come out with the highest number of occurrences. (Considering the size of the corpus, you'll probably need to load it into a database rather than an in-memory hashtable.)
EDIT: By the looks of things, it's not freely available. There may be similar n-gram datasets you could use instead, though. If not, there are certainly corpora of text from the web that you can download and use to build your own n-gram counts.