Web 2.0, for me at least, is about three key changes:
- APIs, so content can be re-used / remixed in ways that exploit a network effect amongst developers
- User Generated Content - exploiting network effects to capture far more useful data from users than a site's owners could create themselves.
- AJAX - the odd one out: it genuinely changed the way people build sites, but in some senses it wasn't as fundamental as the first two.
Those are mostly business / social changes; only one is a purely technological change (APIs have a technical aspect, but they are only a big deal for business reasons).
Some people (notably a speaker at Linked Data Planet 2009) describe the Linked Data / Sem Web trend as "Web 2.0 done right". I think that's an important insight. Those APIs are all different: they present data in different formats, use SOAP or REST or whatever, and there is no common set of tools for combining the data and querying it in a web-like way. The word "mash-up" hints at the effort involved in downloading the data and combining it by brute-force coding or computational effort.
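To make that concrete, here is a minimal sketch in Python of the hand-rolled glue code a mash-up tends to need when every API has its own shape; both payloads are invented for illustration.

```python
# A minimal sketch of the "brute force" mash-up style described above:
# two hypothetical APIs return related facts in different shapes, so the
# glue code has to know both shapes and reconcile keys by hand.
api_a = {"results": [{"title": "Fiesta", "maker": "Ford"},
                     {"title": "Focus", "maker": "Ford"}]}

api_b = {"cars": [{"name": "fiesta", "body": "hatchback"},
                  {"name": "focus", "body": "hatchback"}]}

# Hand-written reconciliation: normalise names, cherry-pick fields, merge.
merged = {}
for item in api_a["results"]:
    merged[item["title"].lower()] = {"maker": item["maker"]}
for item in api_b["cars"]:
    merged.setdefault(item["name"], {})["body"] = item["body"]

print(merged)
# {'fiesta': {'maker': 'Ford', 'body': 'hatchback'},
#  'focus': {'maker': 'Ford', 'body': 'hatchback'}}
```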
The "Big S" Semantic Web cures that by standardizing:
- an abstract model of data - RDF, rather than the XML InfoSet or relational models
- serializations (N3, RDF/XML, etc.)
- a URL-centric access model - essentially REST, with RDF used in the representations
- tools and protocols for querying web-like data
- using HTTP URIs to identify abstract entities, such as people or models of car, rather than just documents (the sketch after this list shows roughly what that looks like)
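As a rough illustration of those points, here is a small sketch using the Python rdflib library. The URIs and vocabulary (example.org, ex:manufacturer, and so on) are made up; the point is just that one abstract graph of triples about things can be written out in any of the standard serializations.

```python
# One abstract RDF graph, HTTP URIs naming real-world things (a company,
# a model of car), and two interchangeable serializations of the same
# triples. All URIs are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/id/")   # hypothetical URI space

g = Graph()
g.bind("ex", EX)

# Triples about things, not documents: the company and one of its models.
g.add((EX.FordMotorCompany, RDF.type, EX.Company))
g.add((EX.Fiesta, RDF.type, EX.CarModel))
g.add((EX.Fiesta, EX.manufacturer, EX.FordMotorCompany))
g.add((EX.Fiesta, RDFS.label, Literal("Ford Fiesta")))

# The same abstract graph in two concrete serializations.
print(g.serialize(format="turtle"))   # N3-family syntax
print(g.serialize(format="xml"))      # RDF/XML
```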
The big change is switching from a web of documents to a web of data, specifically a web of data with unique, dereferenceable HTTP URIs. The granularity of what is addressable will increase, opening a blue ocean of things which can all participate in a network effect.
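To make "dereferenceable" concrete, here is a sketch of asking such a URI for data rather than a web page. It assumes the usual Linked Data content-negotiation convention (an Accept header answered, typically, with a 303 redirect to a data document) and uses a DBpedia-style URI purely as an example.

```python
import urllib.request

# A URI that names the company itself, not a page about it (DBpedia-style;
# used here only to illustrate the convention).
uri = "http://dbpedia.org/resource/Ford_Motor_Company"

# Ask for RDF instead of HTML. A Linked Data server is expected to
# content-negotiate, typically redirecting to a data document that
# describes the thing the URI identifies.
req = urllib.request.Request(uri, headers={"Accept": "text/turtle"})
with urllib.request.urlopen(req) as resp:
    print(resp.geturl())                               # the document we were redirected to
    print(resp.read(400).decode("utf-8", "replace"))   # a first slice of Turtle
```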
At the moment, Ford have a collection of documents that happen to describe Ford and each model of car; the company and the models are not first-class entities. Once the protocols and data models are standardized, the obstacles that get in the way of dealing with those concepts as first-class entities drop away: tools will translate and combine data held in differing concrete models very easily (e.g. linking Ford UK and Ford Motors, and not Ford in Argyll) because the abstract model is identical. This is not AI (though some AI tools have been re-purposed to assist); it's the same type of activity as with Web 2.0 APIs, but without all the brute-force mashing and hacking, and it will be more powerful for it.
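A rough sketch of that combining step, again with rdflib and invented URIs and data: two sources in different concrete syntaxes get parsed into one graph, linked, and queried as a whole. The owl:sameAs link is the conventional Linked Data way of saying two URIs name the same thing; a real dataset might prefer a more precise relation between Ford UK and its parent company.

```python
# Combine two sources that use different concrete syntaxes but the same
# abstract model, then query across them with SPARQL. URIs and data are
# invented for illustration.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

turtle_data = """
@prefix ex:   <http://example.org/id/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
ex:FordUK foaf:name "Ford of Britain" .
"""

rdfxml_data = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <rdf:Description rdf:about="http://example.org/id/FordMotorCompany">
    <foaf:name>Ford Motor Company</foaf:name>
  </rdf:Description>
</rdf:RDF>
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")
g.parse(data=rdfxml_data, format="xml")

# The cross-source link (and, implicitly, the disambiguation from the
# village of Ford in Argyll, which would have a different URI entirely).
EX = Namespace("http://example.org/id/")
g.add((EX.FordUK, OWL.sameAs, EX.FordMotorCompany))

# One query over the merged graph, regardless of where each triple came from.
q = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?entity ?name WHERE { ?entity foaf:name ?name }
"""
for row in g.query(q):
    print(row.entity, row.name)
```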
So, to summarize: it's not really anything new, just a standardization and evolution of current or older technology, but it will be used in a radically different way - to talk about things other than documents.
This will evolve (in many senses it has already evolved) in parallel with the current Web. As a critical mass of people get their heads around the graph-orientated nature of RDF data, they'll put data in RDF and not bother writing bespoke APIs - because they are lazy, and they won't need to. That will be the inflection point between 2.0 and 3.0.