The Wind Through the Keyhole and the future of APIs

I was reading a great article today about the lesser-known aspects of REST, and I thought it was finally time to talk, or in my case write, more about the future of APIs. But first, let me invite you into the magical world of Stephen King’s novel The Wind Through the Keyhole, at the precise moment when the ka-tet is hiding from the starkblast.

At this moment Roland tells a story that inspired me to write an article about APIs. Let me remind you of the story very briefly.

A very short summary of the story. Pay attention

The story is about a boy named Tim Ross, who suffers a tragedy when his father, Big Ross, is killed by a dragon. A few months later his mother, Nell, marries his father’s best friend, Big Kells, so they can pay the Covenant Man their yearly taxes. Things start going wrong when Big Kells begins to beat Tim’s mom, and it only gets worse. When tax time comes, the Covenant Man gives Tim a magical key that can open any lock. Tim opens Big Kells’s chest and finds his father’s axe and his father’s lucky coin. Enraged, Tim goes to talk to the Covenant Man, who is in the Endless Forest. Once he gets there, the Covenant Man shows him the body of his father and a vision of Big Kells beating his mother until she is blind.

Afterwards Tim runs home and checks on his mother, who is being cared for by his former teacher, Widow Smack. He vows revenge. Big Kells has disappeared, and Tim wants to help his mother, so he seeks out the Covenant Man again but finds only his wand. Through it he is able to see a man offering him an item that will restore his mother’s sight. It turns out to be Maerlyn, a powerful wizard. Tim tells Widow Smack of his plans; she warns him not to go, but she gives him a rifle because she knows she cannot convince him otherwise. Tim then sets out into the Endless Forest to find Maerlyn.

Along the way Tim is tricked by a sighe, who leads him onto an island in the Fagonard Swamp, where he is nearly killed by a dragon and alligators while being jeered at by mudmen across the lake. He finally kills an alligator with his rifle, and the mudmen then believe he is a gunslinger.

They help him off the island and give him a device from the old people…

Let’s talk about the device

I will stop telling Tim’s story here and start the story about the future of APIs. Before I begin, I want to encourage you to buy the book and read the rest of it, and maybe the whole Dark Tower series. The movie sucks, by the way.

All right. Let’s focus on the device. What happens with Tim next is that he discovers the device can do quite a few things: navigate off-road terrain, connect to a (GPS) satellite, turn a light on and off, and answer questions. The description might sound like a modern Nokia to you (I am talking about the light, of course), but it does more, and some of those capabilities are not yet supported by any modern device.

Why don’t we look at the use-cases:

UC1: Can I eat that?

One of the questions little Tim asks is ‘can I eat that mushroom?’, and the device replies: hell no, this is deadly.

So the intent here is that the boy is hungry and asks the device for information. How would a device know whether something is edible in the context of today’s technology?

  1. Point the camera at the plant
  2. Send the image to be recognized
  3. Check if edible = true
  4. Return the result together with some information
  5. Store the info for future use.

UC2: Is this person bad?

I am not sure whether that question is asked directly in the novel, but since we are talking about Maerlyn in the Dark Tower universe, we must ask it.

The intent here is to understand whether this person has a good reputation or not. How do we do that:

  1. Describe the person if there is no picture
  2. Analyze the content
  3. Form a hypothesis and return a score on a good/bad scale
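As a toy illustration of those three steps: the word lists and scoring below are entirely invented, and a real service would analyze the text or image with a trained model rather than keyword matching.

```python
# Toy word lists standing in for real content analysis (step 2).
GOOD_SIGNS = {"healer", "kind", "helps", "generous"}
BAD_SIGNS = {"trickster", "cruel", "dangerous", "dark"}


def reputation_score(description):
    """Step 3: form a hypothesis and return a score on a good/bad scale.

    1.0 means good, 0.0 means bad, 0.5 means the evidence is split.
    """
    words = set(description.lower().split())
    good = len(words & GOOD_SIGNS)
    bad = len(words & BAD_SIGNS)
    if good + bad == 0:
        return 0.5  # no evidence either way
    return good / (good + bad)
```

With an evenly split description the sketch returns 0.5, the same kind of 50/50 answer Tim gets.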

If I am not mistaken, the returned result was something like 50/50. Maybe useless, maybe not, because it brings some hope to Tim.

There are more use-cases in the book, and I am sure you can think of even more things a device like that could do.

APIs

So, what is the connection to APIs? If you look at the use-cases above, you might ask yourself: are there any mobile applications that currently cover them? Of course there are. Are there any APIs that do that? Most probably they exist. Then what is the problem?

Problem 1: the applications each cover just one use-case; they are locked down and do not expose their information to the rest of the ecosystem. What if I have a use-case similar to the first one, but for a plant or an animal instead of a mushroom? Should I eat that animal, or should I run from it?

Problem 2: APIs exist for many use-cases, but it is not easy to find them or to search for them from a device or a machine.

Let’s try to transform one of the use-cases into basic API calls to some services:

  1. Google for an image recognition API
  2. Plenty of options, some of them not really useful
  3. Select one to explore, or go to ProgrammableWeb or another discovery service and repeat step 1
  4. I have found one; let’s use Imagga
  5. Learn how to use it and get the results
  6. Then search for a mushroom recognition API and repeat steps 1–5
  7. Combine the results and return them to the consumer

Even with an API blueprint, this could take days to implement.

Then what?

What if the devices had a way to find and request the APIs they need? Imagine a request that:

  1. Searches an API discovery service for image recognition APIs, ordered by maturity and latency (we need good results, right?)
  2. Returns the API blueprint of the API most useful to us
  3. Searches for another API that can take the result of the API discovered in step 2 and return the final result to the consumer
  4. …and all this in milliseconds
  5. Then repeats this for another use-case
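A speculative sketch of such a discovery step, assuming a registry that advertises each API’s capability, maturity, latency, and input/output formats. Every name and field here is invented; a real discovery service would expose something like this over HTTP, with machine-readable blueprints.

```python
# Hypothetical API registry, the machine-readable catalog a device would query.
REGISTRY = [
    {"name": "imagenius", "capability": "image-recognition",
     "maturity": 0.9, "latency_ms": 120, "output": "tags"},
    {"name": "quickpic", "capability": "image-recognition",
     "maturity": 0.6, "latency_ms": 40, "output": "tags"},
    {"name": "fungidb", "capability": "mushroom-lookup",
     "maturity": 0.8, "latency_ms": 30, "input": "tags", "output": "verdict"},
]


def discover(capability):
    """Step 1: find candidate APIs, best maturity first, then lowest latency."""
    candidates = [api for api in REGISTRY if api["capability"] == capability]
    return sorted(candidates,
                  key=lambda api: (-api["maturity"], api["latency_ms"]))


def chain(first_capability, second_capability):
    """Steps 2-3: pick the best first API, then one that accepts its output."""
    first = discover(first_capability)[0]
    compatible = [api for api in discover(second_capability)
                  if api.get("input") == first["output"]]
    second = compatible[0]["name"] if compatible else None
    return first["name"], second
```

The interesting part is that no human picked Imagga or any other vendor here: the device matched capabilities and chained the outputs on its own.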

This problem has been known for decades. Great APIs exist, and great companies are investing a lot in building and exposing them, but they make the mistake of exposing them only to humans and optimizing the experience only for developers.

We are entering very exciting times, and I believe the future of APIs is to be easily discoverable by devices and usable without someone needing to program the interaction.

I do know there are a couple of teams working on this, and I really want them to succeed, but it could be a very hard thing to do without changing the mindset of both the business and the developers.
