Solving customer pain points through simple prototyping and my 20/10 rule

Prototyping is part of my work. I don’t trust product owners or analysts who do not prototype ideas and share them with the customer or another stakeholder. Let me tell you more about my strategy; I hope it will be useful for you.

I have developed the ability to listen, remember and prototype almost in real time, but since my mental buffer is not that large, I ask the customer for short 30-minute iterations instead of a single one-hour Q&A session.

How does it usually work?

 

First meeting

Let’s take an imaginary situation: a customer approached me to create a very simple (from her point of view) application to help her run her business better. For the first 20 minutes of our discussion I asked questions, and in the next 10 I was able to come up with an idea. After 30 minutes I had this ready – ugly, but useful.

Prototype on Paper


The customer was happy with the progress, so we moved on to the next screens, again using the 20+10-minute approach. At the end of the second working session, I had the full concept on paper, together with a lot of notes and small indicators of interaction (see the arrows and the KPI indicators).

 

Second meeting

I put all of these papers on a wall and created the first digital mock-up using Balsamiq (one of my favorite tools). I tried to make it as close to the paper version as possible, spending half a day just making it “clickable” so I could present it faster later.

 

Digital Mock-up produced after the first meeting.


 

After that I had a presentation session with a larger group of stakeholders (from the customer’s side). In general they liked what they saw and asked when they could play with it. I told them I would give them a prototype in a week, noting that it would not look like the one they saw today, since the appearance would have to come from our design team (mainly thinking about the UX guys first). The agreed principles, however, would remain the same.

 

Third meeting

With all the screens and my notes, I contacted our UX guys and information architects. They gave me some advice on how to incorporate our company style, usability patterns, and UI into the prototype, using the information I had collected. The final result was smashing. I created a small JS app using the company framework to make a real-looking prototype.

 

Part of the JS solution


Another view of the JS prototype


The customer was happy with the job we had done. I gave them the prototype to play with, and we tracked how they learned the new interfaces and their journey inside the app. We collected useful data that helped us a lot in developing the product. Knowing what everyone expects, I can write better tasks for the developers and shorten my communication time with the customer and other stakeholders.

It was a huge success and I am planning to use this approach again.

Voice navigation – bringing your app to the next level?

This morning I was surprised by Google Drive. It offered to let me use voice for some basic commands, instead of selecting them or using a shortcut (in my case).

A few months ago I put together an experiment combining the shiny SoHo interface with a few well-working open-source JavaScript implementations of voice and gesture control.

I knew that some companies were experimenting with it, but maybe because I was too busy with other projects and day-to-day routines, I hadn’t realized that its time had come.

I am sure that Google’s experiment (which seems useless from the user’s point of view for now) will evolve into something more usable and could save end users a lot of time.

 

Pros:

  • It’s fun – you can shout commands at your website and it will respond with an action.
  • Sometimes you can do something useful – like controlling your HTML5 game or even logging in to your favorite website.
  • It brings apps to people who can’t write (yet) but can talk – this is something huge.
  • It widens the horizons of developers and companies – think of it as one more usability and user-experience layer.
  • It is super exciting and it is evolving nicely.

 

Cons:

  • There are some technological ones, but I don’t want to be a hater this time :) Yay!
  • The other one is: what happens with all the data collected by the mic? Some devices are known to listen all the time for our precious voice. Should we start pulling the batteries out of our laptops and tablets like we do with our mobile phones?

 

How to get started?

See my demo here – there is a video of the voice- and gesture-controlled UI. This is how a modern app should work: you can use your voice, but also listen to the voice answer sent back to you, and if you feel like moving things around, use your webcam to do it.

 

More links:

 

  • I am using Annyang for the voice commands (see the wiring sketch below)
  • Gest.JS for the gestures
  • and this JS library to interact with the Google TTS engine
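
To give a rough idea of how little code the voice part needs, here is a minimal sketch using Annyang’s public API (addCommands/start). The command phrases and the UI handlers below are made-up examples for illustration, not the actual commands from my demo; Gest.JS and the TTS wrapper plug into the same handlers, so I’m leaving them out here.

```javascript
// Minimal voice-command wiring with Annyang (https://github.com/TalAter/annyang).
// Assumes the annyang script is already loaded on the page.
if (window.annyang) {
  var commands = {
    // Saying "show dashboard" triggers the handler.
    'show dashboard': function () {
      // Hypothetical UI action – replace with whatever your interface exposes.
      document.getElementById('dashboard').style.display = 'block';
    },
    // ":page" is a named variable captured from the spoken phrase.
    'open :page': function (page) {
      console.log('User asked to open: ' + page);
    }
  };

  annyang.addCommands(commands);
  annyang.start(); // asks for microphone permission and starts listening
}
```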

What is the future?

Bright – pretty soon we’ll be seeing more and more startups combining voice with the millions of APIs that exist to build even interfaceless applications, which will work well at the beginning and later replace most of the apps we use these days.

 

What do you think?

 

Voice-controlled UI plus a TTS engine.

A few days ago I attended the launch of SohoXI – Hook and Loop’s latest product, which aims to change the look and feel of enterprise applications forever. I am close to believing that, and not only because I work for the same company as they do.

They have delivered a truly revolutionary UI that takes the ugly enterprise UI and UX to a whole new level – and it is more awesome than ever. Unfortunately, I am not allowed to share more on that topic, but you can follow their blog.

I’ve mentioned them because I’ve created a small app, based on their work and on some open-source projects, to control the UI with my voice.

Is this the future of enterprise applications? Maybe, but hey, it’s real fun when an application talks to you and you talk back to it, changing the UI with your voice and no other input devices.

How cool is that? See a small part of the experiment here (pardon my English)

Oh yes, everything is HTML/CSS and JS :)
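
Since I can’t share the SohoXI-based code itself, here is a rough, hedged sketch of what that talk-back loop can look like in plain JS. It reuses Annyang from the earlier post and uses the browser’s built-in SpeechSynthesis API purely as a stand-in for the Google TTS wrapper I linked; the “switch to dark theme” command and the CSS class are made-up examples.

```javascript
// A sketch of the two-way loop: the user speaks, the UI changes, and the app answers back.
// SpeechSynthesis is a stand-in here for the Google TTS wrapper mentioned earlier.
function say(text) {
  // Built-in browser text-to-speech (Web Speech API).
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

if (window.annyang) {
  annyang.addCommands({
    // Illustrative command, not one from the real app.
    'switch to dark theme': function () {
      document.body.classList.add('dark-theme'); // hypothetical CSS class
      say('Done. The dark theme is now active.'); // the app talks back
    }
  });
  annyang.start();
}
```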


Click here for a full-size video

Eager to learn more? Follow me @bogomep