Archive for the ‘API’ Category

Yahoo’s BBAuth Will Allow Better Mashups

October 2, 2006

Yahoo has released a new product called BBAuth just in time for its open HackDay today and tomorrow. It’s a mechanism that lets non-Yahoo applications tap Yahoo’s authentication system and access Yahoo user data in a secure manner.

Most mashups today do not access personal data because of the security issues (not to mention the fact that companies usually think of user data as proprietary). The classic mashup example is mixing Google or Yahoo maps with other data. But there are far fewer examples of mashups involving user data protected from the rest of the Internet via a sign-in procedure.

BBAuth fixes that problem when it comes to accessing data locked up at Yahoo. Using the tools Yahoo provides, a non-Yahoo application can ask a user to sign in to Yahoo and grant permission for Yahoo user data to be sent to that application. Yahoo’er Dan Theurer explains how it works in more detail, and points to two test applications he created. The first shows how BBAuth can be used to allow sign-in via Yahoo credentials, and the second shows how you can access Yahoo Photos data outside of Yahoo.

There are two pieces to BBAuth. The first is a single sign-on tool to authenticate the user. The second is a set of APIs to get into specific Yahoo services and interact with user data. For example, the Yahoo Photos API allows other applications to, among other things, upload photos, tag photos, and modify titles and descriptions. Yahoo is also opening up Yahoo Mail through BBAuth.
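To make the flow concrete, here is a rough Python sketch of how a non-Yahoo application might build the BBAuth sign-in redirect and the follow-up token exchange. The endpoint paths, parameter names, and MD5 signing scheme reflect my reading of Yahoo’s documentation, and the application ID and shared secret are placeholders, so treat the details as assumptions to be checked against the official BBAuth docs.

```python
import hashlib
import time
import urllib.parse

# Placeholders -- a real application gets these when it registers with Yahoo.
APP_ID = "my_app_id"
SHARED_SECRET = "my_shared_secret"

LOGIN_HOST = "https://api.login.yahoo.com"

def signed(relative_url: str) -> str:
    """Sign a BBAuth request: MD5 of the relative URL plus the shared secret,
    appended as a sig parameter (assumed signing scheme)."""
    sig = hashlib.md5((relative_url + SHARED_SECRET).encode()).hexdigest()
    return f"{LOGIN_HOST}{relative_url}&sig={sig}"

def login_url() -> str:
    """URL the non-Yahoo app redirects the user to so they can sign in at
    Yahoo and grant the app access (piece one: single sign-on)."""
    query = urllib.parse.urlencode({"appid": APP_ID, "ts": int(time.time())})
    return signed(f"/WSLogin/V1/wslogin?{query}")

def token_exchange_url(token: str) -> str:
    """After the user approves, Yahoo redirects back to the app's registered
    endpoint with a token; the app exchanges it for credentials it can use
    against BBAuth-enabled services such as Yahoo Photos or Yahoo Mail
    (piece two: the service APIs)."""
    query = urllib.parse.urlencode(
        {"appid": APP_ID, "token": token, "ts": int(time.time())})
    return signed(f"/WSLogin/V1/wspwtoken_login?{query}")

print("Step 1 - send the user to:", login_url())
```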

Dave Winer says this is a “huge deal” and I agree. See what Yahoo’s Jeremy Zawodny says about BBAuth as well.

It’s worth noting that Amazon is doing the same thing (though in a more limited way) with its S3 storage product, and eBay is supposedly testing third-party authentication for purposes of verifying (but not changing) user feedback ratings.


Create an API for any site with Dapper

September 15, 2006

A new service called Blotter from startup Dapper (dappit.com) is getting some good coverage around the blogosphere today. Blotter graphs Technorati data for any blog over time. Most exciting to me, though, is Dapper’s basic service, just launched this week. The company says it’s effectively offering an easy way to create an API from any website. This might look like crass screen scraping on the surface, but the company aims to offer some legitimate, valuable services and set up a means to respect copyright. The site is clearly useful now.

Dapper provides a point-and-click GUI to extract data from any web site; the extracted data can then be worked with and displayed via XML, HTML, RSS, email alerts, Google Maps, Google Gadgets, a JavaScript image loop, or JSON. The site could use a UI overhaul to make it easier for nontechnical users, and copyright issues will have to be dealt with. That said, Dapper is pretty awesome.
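To give a feel for what “an API from any website” means in practice, here is a minimal sketch of pulling a Dapp’s output from a script. The RunDapp endpoint, parameter names, and Dapp name below are assumptions for illustration; the real URL format comes from the Dapper site once you’ve built a Dapp with the GUI.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical endpoint and Dapp name -- check the Dapper site for the real
# URL format of the Dapp you built with the point-and-click GUI.
RUN_DAPP = "http://www.dapper.net/RunDapp"
DAPP_NAME = "DiggFrontPage"  # placeholder name

def fetch_dapp_xml(dapp_name: str) -> ET.Element:
    """Fetch a Dapp's XML output and parse it; Dapper also advertises RSS,
    HTML, JSON, Google Maps/Gadgets and other output formats."""
    query = urllib.parse.urlencode({"dappName": dapp_name, "v": 1})
    with urllib.request.urlopen(f"{RUN_DAPP}?{query}") as resp:
        return ET.fromstring(resp.read())

# Print every extracted field in the result.
for element in fetch_dapp_xml(DAPP_NAME).iter():
    if element.text and element.text.strip():
        print(element.tag, "->", element.text.strip())
```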

Dapper is led by Jon Aizen, a Cornell CS graduate who has worked on the Alexa Archive and the Internet Archive, and by CEO Eran Shir. Aizen says the company ultimately aims to offer a marketplace for content reuse through Dapper, allowing publishers to set the terms and prices for any creative reuse of their published content. This is the kind of thing that takes serious negotiation to do today, but Dapper has the potential to make such deals far easier for far more people. For developers, Dapper will simply save time, Aizen says.

Here’s how it works. Users identify a web site they are interested in extracting data from and view it through the Dapper virtual browser. Aizen showed me how to do it using Digg as an example. I clicked on a story headline, on the number of Diggs, and on the via URL field. I went to another page on the same site and did the same thing so that Dapper could clearly identify the fields I was interested in. I then went through the various tools available on the site to set certain conditions and thresholds, and ended up with XML feeds I could do all kinds of things with: send me an email whenever there’s a TechCrunch story on the front page of Digg, for example, or when a search results page shows a TechCrunch story with more than 10 Diggs. After I create an end product through the site, other users will be able (after a 24-hour period in which I can edit the project) to use my project either as is, altered to fit their needs, or, in the future, in combination with other projects.
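As a rough sketch of that email alert, assuming the Dapp’s output is exposed as an RSS feed in which each item carries a digg-count element (the feed URL, element name, and mail settings below are all placeholders):

```python
import smtplib
import urllib.request
import xml.etree.ElementTree as ET
from email.message import EmailMessage

# Placeholders for illustration only.
FEED_URL = "http://www.dapper.net/RunDapp?dappName=DiggSearch&v=1"
ALERT_TO = "me@example.com"
SMTP_HOST = "localhost"
MIN_DIGGS = 10

def check_feed() -> None:
    """Scan the Dapp-generated RSS feed and mail an alert for any TechCrunch
    story that has passed the digg threshold."""
    with urllib.request.urlopen(FEED_URL) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        title = item.findtext("title") or ""
        # Assumes the Dapp exposes the digg count as a <diggs> child element.
        diggs = int(item.findtext("diggs") or 0)
        if "TechCrunch" in title and diggs > MIN_DIGGS:
            msg = EmailMessage()
            msg["Subject"] = f"Digg alert: {title} ({diggs} diggs)"
            msg["From"] = ALERT_TO
            msg["To"] = ALERT_TO
            msg.set_content(item.findtext("link") or "")
            with smtplib.SMTP(SMTP_HOST) as smtp:
                smtp.send_message(msg)

if __name__ == "__main__":
    check_feed()
```

Run on a schedule (cron, say), a script like this would cover the alert case; the other output formats handle the mapping and slideshow cases described below.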

The alerts are of most interest to me, but data from other sites can be mapped on Google Maps, turned into an RSS feed for sites that don’t publish feeds, or turned into a slideshow if the data is in the form of images. Aizen says he’s created a tool for himself that runs feeds through Babel Fish automatically and produces a translated feed. The possibilities are huge.

Privacy and licensing the technology so it runs on your own servers are both things the company is looking at for the future. Both are pretty key.

Though the company says the site is largely a proof of concept, it is also seeking seed funding, and the site is pretty usable already. Dapper says it’s aiming high: what Geocities did for static web pages, it wants to do for dynamic content reuse. If the company can find a good way to manage the rights pitfalls around reused content, and I’d like to believe it’s possible, then we may start seeing a lot of dazzling new ways to interact with data, built via Dapper, popping up around the web.
