Friday, August 22, 2008

Why Web 3.0?

The Internet is an amazing achievement in several ways. It contains an enormous amount of information, it has revolutionized business, education, and everyday life, and yet it still runs on a largely outdated infrastructure.

The Internet these days is caught in an interesting cycle. The more widespread its adoption, the more information gets added to it. The more information it holds, the more useful it becomes and the more people use it. The problem with this growing use, and with the growing amount of information it provides access to, is that the structure of the Internet has not kept pace. In other words, it is not much easier to get at the information the Web now holds than it was when the Web first started. Sure, the Internet has seen some useful changes, such as standardizing the way pages are coded and rendered. And there are some promising changes on the horizon. For example, the implementation of HTML 5 will make pages easier to code and, by expanding semantic markup, easier to analyze. Google is constantly developing projects (Friend Connect, OpenSocial, Google Webmaster Tools, etc.) designed to standardize the way information is stored and used on the Internet. There is also a small but growing trend among sites to open up systematic access to their data in the form of an API, and there is a continuing movement to implement semantic technologies such as RDF and OWL.
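To make that last idea a little more concrete, here is a minimal sketch, assuming the Python rdflib library, of what publishing a piece of data as RDF might look like. The example.com namespace and the Product, name, and price terms are made up purely for illustration, not taken from any real vocabulary.

    # A minimal sketch of describing one record as RDF triples with rdflib.
    # The example.com namespace and its terms are hypothetical.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    EX = Namespace("http://example.com/schema/")

    g = Graph()
    g.bind("ex", EX)

    product = URIRef("http://example.com/products/42")
    g.add((product, RDF.type, EX.Product))          # what kind of thing it is
    g.add((product, EX.name, Literal("Widget")))    # human-readable name
    g.add((product, EX.price, Literal(19.99)))      # machine-readable price

    # Serialize as Turtle so other sites and tools can consume it.
    print(g.serialize(format="turtle"))

The point is not the particular library, but that data described this way can be read and combined by any tool that speaks RDF, rather than being locked inside one site's HTML.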

The tricky thing about any of these strategies is that they are quite expensive to put into production. For example, the ideas behind HTML 5 date back to 2004, and it took four years before a Working Draft of the specification was published. Once the HTML 5 specification is agreed upon, it will take countless hours of browser development to get the rendering right, countless hours of study by website designers and programmers to learn and adopt the new syntax, and countless years before all public sites are actually using HTML 5. Even adopting a technology like the OpenSocial API may require months or years of changes to a site's data infrastructure before it can be rolled out, and that is after Google has invested its own money and time to develop and market the API in the first place. It seems that, for the Semantic Web to happen, there must be one or more technologies that let companies expose their data in a systematic way at low cost and without a lot of work, and a few companies are moving in the direction of meeting exactly this kind of requirement.

Mashery

Mashery offers on-demand API infrastructure. By providing a full-service solution, including documentation and maintenance, Mashery lets companies implement an API quickly for a monthly fee instead of assigning one or more engineers to build one from the ground up. As Mashery, and companies like it, gain popularity, more website owners will be able to expose their data systematically through APIs at a reduced cost. That lower cost of creating an API should encourage more companies to hop on board, and will hopefully lead to significant changes in how data can be interacted with.
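As a rough illustration of the end result, here is a minimal sketch of the kind of read-only data API a company might expose, written with Python's standard library. The endpoint path and the product data are invented for the example; this is not Mashery's actual service or code.

    # A hypothetical read-only JSON API; endpoint and data are made up.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PRODUCTS = [
        {"id": 1, "name": "Widget", "price": 19.99},
        {"id": 2, "name": "Gadget", "price": 24.50},
    ]

    class ProductAPI(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/api/products":
                body = json.dumps(PRODUCTS).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), ProductAPI).serve_forever()

A request to http://localhost:8000/api/products returns the product list as JSON, which is exactly the kind of systematic access to a site's data that the rest of this post is talking about.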

Mozenda

Mozenda is a data management platform that lets users connect to and make use of data from different sources. With Mozenda, users can set up agents that regularly extract data from almost any website. Once collected, the information is stored on Mozenda's secure servers and can be exported in a number of file formats or accessed systematically through Mozenda's API. By giving users the ability both to collect data and to access it through an API call, Mozenda has in essence made it possible to create an API for almost any site.
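For a sense of what programmatic access to already-collected data can look like, here is a hedged sketch of a client that downloads agent results from a data API and writes them to a CSV file. The URL, the agent ID, and the response shape are assumptions made for the example, not Mozenda's actual API.

    # A hypothetical client: fetch extracted rows as JSON and save them as CSV.
    # The endpoint and the response format are assumptions, not a real API.
    import csv
    import json
    from urllib.request import urlopen

    API_URL = "https://api.example.com/agents/123/results?format=json"  # hypothetical

    with urlopen(API_URL) as response:
        rows = json.load(response)  # assume a list of flat dictionaries

    if rows:
        with open("results.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)

The appeal of this kind of setup is that the hard part, actually extracting the data from the target site, is handled by the agents, while the consumer only has to deal with a clean, structured feed.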




