views: 1511

answers: 4
Hi,

I'm doing a project on RDF and got acquainted with the notion of Web 3.0 (also known as the Semantic Web). I also read that Web 3.0 is close to what Tim Berners-Lee originally had in mind, although what actually came out at the time was Web 1.0.

Web 1.0 was the internet in its formative years, when users could only read and share information through web pages.

Web 2.0 gave the internet a more dynamic flavour, with users able to contribute information themselves. It also brought ideas like social bookmarking sites, video sharing, Orkut, Facebook, forums, etc.

My questions are -

  1. Did Web 2.0 bring in any big change in terms of technology, apart from changing the way the internet is used? I mean something like what is being expected of Web 3.0.

  2. Isn't Web 3.0 some kind of idea advocated by purists? How feasible is it to bring such changes to the already existing, humongous set-up? Or would it run in parallel with the current internet?

cheers

+2  A: 

The short answer is that the semantic web is already here, already running in parallel, and becoming more so every day. Many websites serve RSS feeds, and JSON keeps gaining more and more ground (though I'm not sure how much semantic value JSON really carries).
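
To make the point about "semantic value" concrete, here is a minimal sketch (Python with rdflib; the feed item is made up for illustration) contrasting a plain JSON record, whose keys only mean something to code written against that particular feed, with the same data as RDF, where the meaning of the property is itself identified by a URI (here a Dublin Core term):

```python
import json
from rdflib import Graph, Literal, Namespace, URIRef

# A typical JSON feed item: the keys ("title", "link") are just strings whose
# meaning lives in out-of-band documentation for this particular feed.
item = json.loads('{"title": "Web 3.0 explained", "link": "http://example.org/posts/42"}')

# The same data as RDF, using the Dublin Core vocabulary, so the meaning of
# the property is itself identified by a dereferenceable URI.
DCTERMS = Namespace("http://purl.org/dc/terms/")
post = URIRef(item["link"])

g = Graph()
g.bind("dcterms", DCTERMS)
g.add((post, DCTERMS.title, Literal(item["title"])))

print(g.serialize(format="turtle"))
```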

In general, services are taking over, and people are integrating/filtering data sets the way they want (see netvibes, igoogle, start.com, pageflakes).

altCognito
I sort of agree, but I think there is a step difference between big-S and small-s semantic webs. This is due to the level of standardisation and (and this is getting very detailed for a comment) the lack of unique URLs for the properties of entities; properties should be first-class entities as well.
Simon Gibbs
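
As an aside on the comment above: "properties as first-class entities" just means the property itself has an HTTP URI and can be described with triples of its own. A minimal sketch with rdflib, using a hypothetical http://example.org/ vocabulary invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical vocabulary: the property "manufacturer" gets its own HTTP URI
# and can itself be the subject of triples, i.e. a first-class entity.
EX = Namespace("http://example.org/vocab/")

g = Graph()
g.bind("ex", EX)

# Describing the property itself: its type, label and intended meaning.
g.add((EX.manufacturer, RDF.type, RDF.Property))
g.add((EX.manufacturer, RDFS.label, Literal("manufacturer")))
g.add((EX.manufacturer, RDFS.comment,
       Literal("Relates a product to the company that makes it.")))

# Using the same URI as a predicate between two (hypothetical) entities.
g.add((URIRef("http://example.org/id/ford-focus"),
       EX.manufacturer,
       URIRef("http://example.org/id/ford-motor-company")))

print(g.serialize(format="turtle"))
```
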
+3  A: 

Web 2.0, for me at least, is about three key changes:

  • APIs, so content can be re-used / remixed in ways that exploit a network effect amongst developers
  • User Generated Content - exploiting network effects to capture exponentially more useful data from users.
  • AJAX - the odd one out: it really did change the way people build sites, but in some senses it wasn't as fundamental as the first two.

Those are mostly business/social changes, with only one purely technological change among them (APIs have a technical aspect, but are only a big deal for business reasons).

Some people (notably a speaker at Linked Data Planet 2009) describe the Linked Data / Sem Web trend as "Web 2.0 done right". I think that's an important insight. Those APIs are all different: they present data in different formats, are SOAP or REST or whatever, and there is no common set of tools for combining the data and querying it in a web-like way. The phrase "mash up" reveals something of the effort involved in downloading the data and combining it by brute-force coding or computational effort.
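
A rough sketch of what that brute-force mashing tends to look like in practice: two unrelated JSON APIs and hand-written glue to line them up. The endpoints and field names below are invented for illustration; the point is that every new source needs another one-off mapping like this.

```python
import json
import urllib.request

# Hypothetical endpoints standing in for two unrelated Web 2.0 APIs;
# each has its own URL scheme, field names, and JSON shape.
CARS_API = "https://api.example-cars.test/v1/models?maker=ford"
REVIEWS_API = "https://reviews.example.test/api/list?brand=Ford"

def fetch_json(url):
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

cars = fetch_json(CARS_API)        # e.g. {"models": [{"name": "Focus", ...}]}
reviews = fetch_json(REVIEWS_API)  # e.g. [{"model": "focus", "stars": 4}, ...]

# The "mash up": hand-written glue that guesses how the two datasets line up.
ratings = {r["model"].lower(): r["stars"] for r in reviews}
for model in cars["models"]:
    print(model["name"], ratings.get(model["name"].lower(), "no rating"))
```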

The "Big S" Semantic Web cures that by standardizing:

  • an abstract model of data - RDF, rather than InfoSet or relational models
  • serializations (N3, RDF/XML, etc.)
  • a URL-centric access model - essentially REST, with RDF used in the representations
  • tools and protocols (such as SPARQL) for querying web-like data
  • using HTTP URIs to identify abstract entities, such as people or models of car, rather than just documents

The big change is switching from a web of documents to a web of data, specifically a web of data with unique, dereferenceable HTTP URIs. Granularity will increase, opening up a blue ocean of things which can all participate in a network effect.
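
A minimal sketch of that web-of-data idea using Python and rdflib: hypothetical HTTP URIs name the abstract entities (a company and a car model), the graph is serialized to a standard format (Turtle), and SPARQL is used to query it instead of a bespoke API. All URIs and vocabulary terms below are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

# Hypothetical HTTP URIs naming abstract things, not documents.
EX = Namespace("http://example.org/vocab/")
FORD = URIRef("http://example.org/id/ford-motor-company")
FOCUS = URIRef("http://example.org/id/ford-focus")

g = Graph()
g.bind("ex", EX)
g.add((FORD, RDF.type, EX.Company))
g.add((FORD, RDFS.label, Literal("Ford Motor Company")))
g.add((FOCUS, RDF.type, EX.CarModel))
g.add((FOCUS, EX.manufacturer, FORD))
g.add((FOCUS, RDFS.label, Literal("Ford Focus")))

# One standard serialization of the same abstract model...
print(g.serialize(format="turtle"))

# ...and one standard way to query it, instead of a bespoke API.
query = """
SELECT ?model ?name WHERE {
    ?model a ex:CarModel ;
           ex:manufacturer ?maker ;
           rdfs:label ?name .
}
"""
for row in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.model, row.name)
```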

At the moment, Ford have a collection of documents that happen to describe Ford and each model of car; the company and the models are not first-class entities. Once the protocols and data models are all standardized, the obstacles that get in the way of dealing with those concepts as first-class entities drop away: tools will translate and combine data held in differing concrete models very easily (e.g. linking Ford UK and Ford Motors, but not Ford in Argyll), because the abstract model is identical. This is not AI (though some AI tools have been re-purposed to assist); it's the same type of activity as with Web 2.0 APIs, but without all the brute-force mashing and hacking, and it will be more powerful for it.
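
One way that kind of linking can be expressed is with owl:sameAs between two URIs that name the same company, while the unrelated place simply keeps its own distinct URI. A sketch with rdflib; every URI below is hypothetical:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import OWL, RDFS

# Hypothetical URIs for the same company as published by two different
# sources, plus an unrelated place that merely shares the name.
FORD_UK = URIRef("http://uk.example.org/id/ford")
FORD_MOTORS = URIRef("http://us.example.org/id/ford-motor-company")
FORD_ARGYLL = URIRef("http://gazetteer.example.org/id/ford-argyll")

g = Graph()
# The link that tools can follow when merging datasets: these two URIs
# name the same real-world entity...
g.add((FORD_UK, OWL.sameAs, FORD_MOTORS))
# ...while the village in Argyll gets its own distinct URI and is never
# asserted to be the same thing, so it stays out of the merge.
g.add((FORD_ARGYLL, RDFS.label, Literal("Ford, Argyll")))

# OWL-aware tools can now treat triples attached to either company URI as
# describing one entity; no string matching on the word "Ford" is needed.
print(g.serialize(format="turtle"))
```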

So, to summarize, it's not really anything new; it is simply a standardization and evolution of technology that is either current or older, but it will be used in a radically different way - to talk about things other than documents.

This will evolve (and in many senses has already evolved) in parallel with the current Web. As a critical mass of people get their heads around the graph-orientated nature of RDF data, they'll put their data in RDF and not bother writing bespoke APIs, because they are lazy and they won't need to - this will be the inflection point between 2.0 and 3.0.

Simon Gibbs
A: 

I created a simple video just for you guys! The three internets

http://www.youtube.com/watch?v=dskaKT1PKQE

Tyler
A: 

http://techlaugh.com/computer/web-30-technology-web-browsers/

How Web 3.0 will help the next generations.

Anupam Tamrakar