Steganography API at your service.

Steganography is the art and science of embedding secret messages in a cover message so that no one, apart from the sender and intended recipient, suspects the existence of the message. 

The most common example is to hide a message in an image file without compromising how the image looks. The majority of the people are using the photos to share a fantastic moment or two and don’t know that they can contain a secret message.

What could the use-cases be?

Someone can hack your phone and embed your text messages in the pictures you take and share on, say, Instagram.

A disgruntled employee can post a picture on your blog with a secret message embedded in it, sharing trade secrets with your competitors.

Another person can embed an exploit in a PNG ad image; JavaScript code would parse the PNG, extract the malicious code, and redirect the user to an exploit kit landing page.

Steganography is also a well-known method for exchanging information between spies.

Even if it sounds like science fiction, this is a very real threat to your systems and to you.

Steganography Protector API

I have created a small API (as a proof of concept) that can discover a secret message hidden in any image file.

The endpoint is here:

https://sapigate.herokuapp.com/steg

It accepts POST requests only. 

The request should contain the binary stream of your image, sent as a multipart file upload (as in the example below); the response comes back JSON encoded.

Here is a Python example.

import requests
url = 'https://sapigate.herokuapp.com/steg'
my_img = {'image': open('secret.png', 'rb')}   # attach the image as a multipart file upload
r = requests.post(url, files=my_img)           # POST it to the API
print(r.json())                                # the API answers with JSON

The result of the request can look like this:

{'message': 'Secret Message', 'status': 'success'}

I am planning to extend the API by adding more use-cases and documentation, but you are free to start using it right away.

If you have any questions about it, or if it seems to be down, contact me via Twitter – @bogomep

A practical use

You could pull all of the images from your blog and run them through the API to check whether they contain a secret message, or check your latest Instagram picture for hidden traces.
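Here is a minimal sketch of that idea in Python, assuming the images have already been downloaded into a local folder (the folder name is made up, and I am assuming the API simply returns no message for clean images):

import glob
import requests

url = 'https://sapigate.herokuapp.com/steg'

# Check every PNG in a local folder for a hidden message.
for path in glob.glob('blog_images/*.png'):
    with open(path, 'rb') as img:
        r = requests.post(url, files={'image': img})
    result = r.json()
    if result.get('message'):
        print(path, '-> hidden message:', result['message'])
    else:
        print(path, '-> nothing found')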

If you are looking for a picture with a secret message inside, why don’t you test this one:

Hacktoberfest – done; why care?

I still remember the good ol’ times when I almost convinced an entire company to allow its more than 500 employees to contribute to an open-source or free software project for at least one hour a month.

The journey was hard. I prepared a strategy, facilitated the discussion, and spent numerous hours talking to smart legal people to create a framework for it.

I also more or less convinced everyone that this was a good thing and created a list of projects to contribute to, separated into categories – for developers, for testers, for marketing and sales, and for the other experts we had.

I also finally convinced the company that that one hour would be donated (because the company owns everything you do during working hours).

Well, nothing happened with that initiative after I left. I still see some of the ideas mentioned on their blog, but it seems the initiative is no longer supported.

Maybe you have lost my train of thought, and to be honest, so have I. Haha!

I just wanted to brag that I finished my #hacktoberfest challenge this year. Its goals seem to be the same as mine back when I was trying to convince that company – teach everyone why sharing is caring and why contributing to open knowledge is one of the best things you can do.

You don’t have to be a developer or a byte guru to do so; everyone can contribute, and it makes you proud and happy.

Credits: Photo by OneRas, licensed under a Creative Commons license

The Wind Through the Keyhole and the future of APIs

First, let me invite you to the magical world of Stephen King’s novel The Wind Through the Keyhole, at the precise moment when the Ka-tet is hiding from the Starkblast.

At this very moment, Roland tells a story that inspired me to write an article about APIs. Let me retell it very briefly.

A very short summary of the story. Pay attention

The story starts with a boy named Tim Ross, who suffers a tragedy when his father, Big Ross, is killed by a dragon. A few months later his mother, Nell, marries his father’s best friend, Big Kells, so they can pay the Covenant Man their yearly taxes.

Things start going wrong when Big Kells starts to beat Tim’s mom, and they get worse until tax time comes and the Covenant Man gives Tim a magical key that can open any lock. Tim opens Big Kells’s chest and finds his father’s ax and lucky coin.

Enraged, Tim goes to talk to the Covenant Man, who is in the Endless Forest. Once he gets there, the Covenant Man shows him the body of his father and a vision of Big Kells beating his mother until she is blind.

Afterward, Tim runs home and checks on his mother, who is being tended by his former teacher, Widow Smack. He vows revenge. Big Kells has disappeared, and Tim wants to help his mother, so he seeks out the Covenant Man again but finds only his wand. Through it, he sees a man giving him an item that will help his mother’s vision; it turns out to be Maerlyn, a powerful wizard.

Tim tells Widow Smack of his plans, and she warns him not to go, but she gives him a rifle because she knows she cannot convince him otherwise. Tim then sets out into the Endless Forest to find Maerlyn.

Along the way, Tim is tricked by a sighe who leads him onto an island in the Fagonard Swamp, where he is almost killed by a dragon and alligators and is jeered at by Mudmen across the lake. He finally kills an alligator with his rifle, and the Mudmen then believe he is a gunslinger.

They help him off the island and give him a device from the old people…

Let’s talk about the device

I will stop telling you the story about Tim and will start the story about the future of APIs. Before I begin, I want to encourage you to buy the book and read the rest of the story and maybe the whole Dark Tower series. The movie sucks, by the way.

Alright.

Let’s focus on the device. What happens to Tim next is that he discovers the device can do quite a few things: navigate off-road terrain, connect to a (GPS) satellite, turn a light on and off, and answer questions.

The description might sound like a modern Nokia to you (of course, I am talking about the light), but it does something more, and those functionalities are not yet supported by any modern device.

Why don’t we look at the use-cases:

UC1: Can I eat that?

One of the questions little Tim asks is ‘Can I eat that mushroom?’, and the device replies: hell no, this one is deadly.

So the intent here is that the boy is hungry and asks the device for information. How would the device know whether something is edible, in the context of today’s technology? Roughly like this (see the sketch after the list):

  1. Point the camera to the plant
  2. Send the image to be recognized
  3. Check if edible = true
  4. Return the result together with some information
  5. Store the info for future use.
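Here is a rough sketch of that flow in Python. Everything in it is hypothetical: the recognition endpoint, the response fields, the local edibility lookup and the cache file are all stand-ins for whatever services such a device would actually call.

import json
import requests

RECOGNITION_URL = 'https://example.com/recognize'   # hypothetical image recognition API
CACHE_FILE = 'edibility_cache.json'

# Hypothetical local knowledge base mapping recognized species to edibility.
EDIBLE = {'chanterelle': True, 'death cap': False}

def can_i_eat_that(image_path):
    # 1. "Point the camera": here we simply read a captured image from disk.
    with open(image_path, 'rb') as img:
        # 2. Send the image to be recognized.
        r = requests.post(RECOGNITION_URL, files={'image': img})
    species = r.json().get('label', 'unknown')

    # 3. Check if edible = true (None means we have no data).
    edible = EDIBLE.get(species)

    # 4. Return the result together with some information.
    result = {'species': species, 'edible': edible}

    # 5. Store the info for future use.
    with open(CACHE_FILE, 'a') as cache:
        cache.write(json.dumps(result) + '\n')

    return result

print(can_i_eat_that('mushroom.jpg'))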

UC2: Is this person bad?

I am not sure if that question was asked directly in the novel, but when we are talking about Maerlyn in the Dark Tower, we must ask the question.

The intent here is to understand whether this person has a good reputation or not. How would we do that (a sketch follows the list):

  1. Describe the person if there is no picture
  2. Analyze the content
  3. Form a hypothesis and return a score on a good/bad scale
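A sketch of the same idea, again with a made-up endpoint and response format: one call that analyzes a description (or a photo) and returns a score on a good/bad scale.

import requests

REPUTATION_URL = 'https://example.com/reputation'   # hypothetical reputation analysis API

def is_this_person_bad(description=None, photo_path=None):
    # 1. Describe the person if there is no picture.
    if photo_path:
        with open(photo_path, 'rb') as img:
            r = requests.post(REPUTATION_URL, files={'photo': img})
    else:
        r = requests.post(REPUTATION_URL, json={'description': description})

    # 2. Analyze the content (done by the remote service in this sketch).
    analysis = r.json()

    # 3. Form a hypothesis and return a score on a good/bad scale (0..100).
    score = analysis.get('good_score', 50)   # 50/50 if the service cannot decide
    verdict = 'good' if score > 50 else 'bad' if score < 50 else 'unclear'
    return {'good_score': score, 'verdict': verdict}

print(is_this_person_bad(description='a tall man in a black cloak who collects taxes'))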

If I am not mistaken, the returned result was something like 50/50: maybe useless, maybe not, because it brings some hope to Tim.

There are more use-cases in the book, and I am sure you can think of even more things a device like that could do.

APIs

So, what is the connection with APIs? If you look at the use-cases above, you might ask yourself: are there any mobile applications currently covering these use-cases? Of course there are. Are there any APIs that do that? Most probably. Then what is the problem?

Problem 1: The applications cover just one use-case; they are locked down and do not expose their information to the rest of the ecosystem. What if I have a use-case similar to use-case one, but for a plant or an animal instead of a mushroom? Should I eat that animal, or should I run from it?

Problem 2: APIs for many use-cases exist, but it is not easy to find or search for them from a device or a machine.

Let’s try to transform one of the use-cases into basic API calls to some services (a rough sketch follows the list):

  1. Open Google and search for an image recognition API
  2. Wade through plenty of options, some of them not really useful
  3. Select one to explore, or go to ProgrammableWeb or another discovery service and repeat step 1
  4. I have found one; let’s use Imagga
  5. Learn how to use it and get the results
  6. Then search for a mushroom recognition API and repeat step 5
  7. Combine the results and return them to the consumer
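The code at the end of that journey is the easy part; the discovery and learning is what costs the time. Here is a rough sketch of steps 5 to 7, with placeholder endpoints (neither URL below is a real, documented one; the general tagging service stands in for something like Imagga, and the mushroom recognizer is entirely hypothetical):

import requests

IMAGE_TAGGING_URL = 'https://example.com/tags'          # placeholder for a general image-tagging API
MUSHROOM_URL = 'https://example.com/mushroom-species'   # placeholder for a mushroom recognition API

def identify_mushroom(image_path):
    # Step 5: use the general image recognition API to get broad tags.
    with open(image_path, 'rb') as img:
        tags = requests.post(IMAGE_TAGGING_URL, files={'image': img}).json().get('tags', [])

    if 'mushroom' not in tags:
        return {'tags': tags, 'species': None}

    # Step 6: hand the image to the specialized mushroom recognition API.
    with open(image_path, 'rb') as img:
        species = requests.post(MUSHROOM_URL, files={'image': img}).json()

    # Step 7: combine both results and return them to the consumer.
    return {'tags': tags, 'species': species}

print(identify_mushroom('forest_find.jpg'))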

Even if they have an API blueprint, this could take days to implement.

Then what?

What if we had a way for devices to find and request the APIs they need? Imagine a request that does the following (a sketch follows the list):

  1. Searches the API discovery service for image recognition APIs, ordered by maturity and latency (we need good results, right?)
  2. Returns the API blueprint of the most useful API for us
  3. Searches for another API that can take the result of the API discovered in step 2 and return the final result to the consumer
  4. …and all of this within milliseconds
  5. Then repeats the whole thing for the next use-case
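Nothing like that discovery service exists today, so the sketch below is pure speculation: the endpoint, query parameters and response fields are all invented just to illustrate the flow.

import requests

DISCOVERY_URL = 'https://example.com/api-discovery'   # imaginary API discovery service

def answer_intent(intent, image_path):
    # 1. Ask the discovery service for matching APIs, ordered by maturity and latency.
    candidates = requests.get(DISCOVERY_URL,
                              params={'capability': intent,
                                      'order_by': 'maturity,latency'}).json()['apis']

    # 2. Take the blueprint of the most useful API and call it directly.
    best = candidates[0]
    with open(image_path, 'rb') as img:
        recognition = requests.post(best['endpoint'], files={'image': img}).json()

    # 3. Find a second API that accepts the output of the first one and produces the final answer.
    follow_up = requests.get(DISCOVERY_URL,
                             params={'accepts': best['output_schema']}).json()['apis'][0]
    answer = requests.post(follow_up['endpoint'], json=recognition).json()

    # 4. All of this should happen within milliseconds; 5. repeat for the next use-case.
    return answer

print(answer_intent('image recognition', 'mushroom.jpg'))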

This problem has been known for decades. Great APIs exist, and great companies are investing a lot in building and exposing them, but they make the mistake of exposing them only to humans and optimizing the experience only for developers.

We are entering very exciting times, and I believe the future of APIs is to be easily discoverable by devices and usable without a human having to program the interaction.

I know there are a couple of teams working on this, and I really want them to succeed, but it will be very hard to do without changing the mindset of both businesses and developers.

More on the topic?

Credits:

  • I used some data from darktower.wikia.com
  • The second image, by Alan Levin, is licensed under a Creative Commons license
  • A big thanks goes to Sylvia Kasabova for the edits and for introducing me to the magical world of the “Dark Tower”