Chatbot, what art thou?

“To be, or not to be, that is the question: Whether ‘tis nobler in the mind to parse,
The slings and arrows of outrageous language,
Or to take Arms against a Sea of intent.”

In early 2012, at the big data startup I co-founded, we were sitting on an award-winning Hadoop-based search engine, which seemed to offer new possibilities, if you accept that information can be organized, discovered and connected very differently at scale. Beyond the capacity to handle petabytes of data with ease, it also marked a shift in how we could approach data: not just well-framed, well-structured queries, but more hypothetical ones, what if this or that! Out of the labs, the combination of high-performance engineering and the ability to process large troves of data was a powerful beam to shine on hitherto unsolved problems. We chose the oft-neglected post-purchase support experience. We imagined that servicing customers is characterized by a rich variety of user situations requiring attention. We felt a solution that was not rigid, that could learn and adapt to a range of hypothetical scenarios, was ripe to pursue.

Our premise was that text sits at the intersection of UX and systems. Hence, a chatbot.

We picked Google Chat and Facebook Messenger for UX delivery, relying on their implementation of XMPP (discontinued in mid 2013). XMPP is the Extensible Messaging and Presence Protocol, a set of open technologies for instant messaging, presence, multi-party chat, voice and video calls, collaboration, lightweight middleware, content syndication, and generalized routing of XML data.

We launched Txtland…

Txtland, 2013: a chatbot that fetches information in response to natural language queries.
Txtland screenshot. The grey blocks are the user texting and purple is Txtland's response. User commands were unpretentious and completely functional: direct and short.

Without going into the rest of the story of what happened to Txtland, a digression: I realized the primary design challenge was in putting the user at ease with the fact that she is chatting with a program at the back. It was powerfully fast in performance and response. It had access to a huge, up-to-date repository of information to parse and serve in a fraction of a second. However, like DeepMind's AlphaGo-beating program, trained so well, solely by playing against itself, it would not know how to play Scrabble. This narrowness of specialization against the broad spectrum and sheer variety of human intent, responding back, as Stafford Beer would say, with attenuation rather than amplification, remains the technical challenge.

As users typed we parsed and picked up 'action words' like stock, weather, ticket, item, SR, etc., and the chatbot's responses would offer options against that 'action word' along with a rudimentary 'TYPE THIS' to find out more, and so on. It was quite elementary. The text used to engage with Txtland, a program, was honest and more machine-like. If you attached a # in front of a phrase, it became an action for Txtland. See screenshot below:
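Beyond the screenshot, here is a minimal sketch, in Python, of the kind of action-word handling described above. The action words and the '#' convention come from this post; the dictionary contents, function names and reply strings are illustrative assumptions, not Txtland's actual code.

```python
# Illustrative sketch of '#action' parsing; not the original Txtland code.
ACTIONS = {
    "weather": "Type '#weather <city>' for a forecast",
    "stock":   "Type '#stock <ticker>' for the latest quote",
    "ticket":  "Type '#ticket status <id>' to check a service request",
}

def parse_command(text: str):
    """Return (action, argument) if the message starts with '#', else None."""
    text = text.strip()
    if not text.startswith("#"):
        return None
    parts = text[1:].split(maxsplit=1)
    if not parts or parts[0].lower() not in ACTIONS:
        return None
    return parts[0].lower(), parts[1] if len(parts) > 1 else ""

def respond(text: str) -> str:
    command = parse_command(text)
    if command is None:
        # no '#command': look for bare action words and suggest the syntax
        hits = [w for w in ACTIONS if w in text.lower()]
        return "\n".join(ACTIONS[w] for w in hits) if hits else "TYPE THIS: #help"
    action, arg = command
    return f"Running '{action}' with '{arg}'..."  # dispatch to a backend script here

print(respond("what is the weather like?"))  # suggests '#weather <city>'
print(respond("#stock INFY"))                # runs the stock action
```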

From a design point of view, I pondered, as I see a proliferation of chatbots across industries: why do the creators of chatbots continue to imitate human style, knowing well they can't live up to that label? Not authentic!

The fundamental problem with chatbots is that the interface between the user and the chatbot or agent is the same as the one used in normal, regular conversations between one human and another. This is validated by recent research from Pegasystems (NASDAQ: PEGA), the software company empowering customer engagement at the world's leading enterprises.

There is an opportunity, unexplored today, to redesign the container, the tool, through which a human user knowingly and comfortably chats with an artificial agent or bot. Such a design should include predefined canned phrases and gestures. Research needs to explore whether such gestures can also be used by the bot to communicate broadly. What language should a machine bot deploy to communicate with humans?

That would be besides generated text. How does the language demonstrate a 'machine culture', where culture could be its 'nature': the way it organizes information, its ability to find correlations across it, and to find and serve with great speed, learning along the way which correlations are high value and which are low value in a given context? Txtland was leveraging this aspect: it was not just a Q&A type conversation, it could also run backend scripts, respond back, look up knowledge bases and ultimately, in case of failure, offer the option, 'should I dial in our customer support representative?'

Chatbot parses text to respond with actions such as running scripts at the backend. 

What Chatbots can be!

I speculate that this sort of progress and exploration could help the ongoing effort in the digital transformation of businesses, including automation of business processes, strapped on with a new manner of interacting with 'cognitive computing.' Thinking about these kinds of technologies and problems with a very different toolkit, that of designers, can help define the future of this industry and its innovation. Especially if these intelligent chatbots can conversationally learn from a human user how to perform that same task. How will a chatbot allow itself to be 'handheld', as painting robots do (record and play), to learn under supervision? Or, in other cases, immerse itself in a data-rich situation, given a specific human-specified goal, to learn unsupervised?

One benefit of machines having a new language to communicate with humans, with humans retaining theirs as distinct from it, is that we could even block, or program into the machines, an inability to process certain intimate or personal human phrases. This would limit machines to what we envisage for them, so they productively engage and perform within those boundary conditions efficiently. Like a bot that is an expert in psephology, or another in string theory.

Assuming one gets past this ability of a chatbot to communicate, or its UX, then comes the challenge of figuring out if there is a hierarchy among them. After all, we have tasks ranging from the mundane and repetitive to the challenging and complex. Can these bots be designed for such hypothetical variety? Would it be the 'machine intelligence' that differentiates them? How smart or fast is it? Speed and accuracy of response are critical to peg them. Of course, this is knowledge in the realm of known-knowns! Another boundary condition. From such criteria, a chatbot persona can be shaped and presented in a unique, non-human space.

Henry VI, Part III [IV, 1]

King Edward IV:  “Now, messenger, what letters or what news from France?”
Messenger:  “My sovereign liege, no letters; and few words, But such as I, without your special pardon,
Dare not relate.”
King Edward IV:  “Go to, we pardon thee: therefore, in brief, Tell me their words as near as thou canst guess them. What answer makes King Lewis unto our letters?”

Here in Shakespeare's play, the 'guess' is politically loaded. Hiding or revealing the facts may lead to harsh consequences for the messenger facing King Edward IV. Then again, the pardon that accompanies hazarding a guess leads the messenger to confidently conjecture on the circumstances. Guessing is something humans do a wonderful job of. Guessing is such a fine way to move forward. And so it could be for a chatbot, especially a smart, AI-driven one. A guess in that context is more heuristic and less algorithmic. That could also explain why a rule-based engine in chatbots with NLP strapped on comes across as rigid, or duh!

Can we think of chatbot conversations that approximate the King and his messenger above? Conversations guided more by heuristic principles than algorithmic models. From Stack Overflow, here is a highly upvoted answer on the differences between heuristics and algorithms. Below is Kriss's explanation:

An algorithm is the description of an automated solution to a problem. What the algorithm does is precisely defined. The solution could or could not be the best possible one but you know from the start what kind of result you will get. You implement the algorithm using some programming language to get (a part of) a program. Now, some problems are hard and you may not be able to get an acceptable solution in an acceptable time. In such cases you often can get a not too bad solution much faster, by applying some arbitrary choices (educated guesses): that’s a heuristic. A heuristic is still a kind of an algorithm, but one that will not explore all possible states of the problem, or will begin by exploring the most likely ones.
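As a toy contrast, here is a small sketch in the spirit of that answer, using an invented knapsack problem: the exhaustive algorithm is guaranteed to find the best load, while the greedy heuristic only makes an educated guess that is usually good enough and far cheaper.

```python
# Exact algorithm vs greedy heuristic on a toy knapsack; data is invented.
from itertools import combinations

items = [("map", 9, 150), ("water", 153, 200), ("banana", 27, 60),
         ("camera", 32, 30), ("rope", 7, 80)]   # (name, weight, value)
CAPACITY = 180

def exact_best(items, capacity):
    """Try every subset: always optimal, but exponential in the item count."""
    best_value = 0
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for _, w, _ in combo) <= capacity:
                best_value = max(best_value, sum(v for _, _, v in combo))
    return best_value

def greedy_guess(items, capacity):
    """Educated guess: grab items by value/weight ratio until the bag is full."""
    value = 0
    for _, w, v in sorted(items, key=lambda x: x[2] / x[1], reverse=True):
        if w <= capacity:
            capacity -= w
            value += v
    return value

print("exact :", exact_best(items, CAPACITY))    # 430
print("greedy:", greedy_guess(items, CAPACITY))  # 320, a decent but imperfect guess
```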

Irrespective of whether machine learning, such as the reinforcement learning model (see image below), or some other approach can be applied to build a 'guess-as-you-go' chat, what matters is: why?

source: KDnuggets

But why bother with guessing, I mean heuristic or 80:20 approaches that may make the chatbot fall on its face! (< any emoji to represent that?) 

In 'Models of Ecological Rationality: The Recognition Heuristic', the authors Daniel G. Goldstein and Gerd Gigerenzer, from the Max Planck Institute for Human Development, suggest that a 'Fast and Frugal' approach is one efficient method available. Can this guide the design of a chatbot?

From their paper, “One view of heuristics is that they are imperfect versions of optimal statistical procedures considered too complicated for ordinary minds to carry out. In contrast, the authors consider heuristics to be adaptive strategies that evolved in tandem with fundamental psychological mechanisms. The recognition heuristic, arguably the most frugal of all heuristics, makes inferences from patterns of missing knowledge. This heuristic exploits a fundamental adaptation of many organisms: the vast, sensitive, and reliable capacity for recognition. The authors specify the conditions under which the recognition heuristic is successful and when it leads to the counterintuitive less-is-more effect in which less knowledge is better than more for making accurate inferences.”
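To ground it, here is a minimal sketch of the recognition heuristic on the 'which city is larger?' task the authors use; the recognized set below is an illustrative assumption, not their experimental data.

```python
# A minimal sketch of the recognition heuristic (Goldstein & Gigerenzer).
import random

recognized = {"Berlin", "Munich", "Hamburg"}   # cities this agent has heard of

def which_is_larger(a: str, b: str) -> str:
    """If only one city is recognized, infer it is the larger one; else guess."""
    a_known, b_known = a in recognized, b in recognized
    if a_known and not b_known:
        return a
    if b_known and not a_known:
        return b
    return random.choice([a, b])   # both or neither recognized: fall back to a guess

print(which_is_larger("Munich", "Herne"))   # recognition decides: Munich
```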

What this would do, in addition to the mundane, repetitive, well-defined, established routines that chatbots address decently today, is add that variety, that 'masala' to the curry; or currying up a conversation with a human! Just more breadth.

To give Dr. Hook's popular song a twist: take the pussy cat and turn it into a tiger; wild, back in the jungle from the zoo.

Dr.Hook – Jungle to the Zoo – 
“The tiger, tiger, they’ll clip your claws, cut your hair, make a pussy cat
 out of you Its one step from the zoo to the jungle.” (edited)

Chatbot, why be anything but wild 🙂

Or as Luciana, the unmarried lady, so full of advice, says in Shakespeare’s Comedy of Errors:

“She never reprehended him but mildly,
When he demean’d himself rough, rude and wildly.
Why bear you these rebukes and answer not?”

Two simple actions that offer lasting design integration within organizations

Illustration by Simon Oxley

At the height of the dotcom boom in 2000, I had my first opportunity to recruit and build a design team at Infosys. Hubris led me to place design on a privileged pedestal. Arrogance was part of the potent mix. We used to joke "here comes another JIP job." JIP expands to 'Jazz-it-up', the most common phrasing of a new design task. Jip was also our get-back-at-them slur, playing on the popular file compression term pronounced in a local accent that substitutes J for Z. Sardonic design-team humour apart, the strictly transactional nature of collaboration made the design act short-lived, and it came a cropper.

My failure was masked by more seismic business events like the bankruptcy of Webvan and countless other dotcom wunderkind crashes. Unfazed though, growth for Infosys was starting to kick in from the enterprise side, as several legacy solutions based on mainframes like the AS/400, or on thick-client, middleware-based software, started leaning onto browser-based applications, either for captive audiences that power intranet traffic or as a new channel. I was presented a second chance.

2001: a legacy client of the too-big-to-fail kind was being pitched to, and here I was onsite consulting along with a mix of specializations and experience: architects, program managers, business analysts, engineers, developers and variations of these. All these diverse experts, along with me, would hold conversations with client representatives. I observed that the storyteller was the same. Their story and script were mostly the same, barring a few details. However, each of us on the Infosys team took away a different story. The business analyst: business processes and SLAs; the engineers: performance and non-functional requirements; the architects: the nature of the infrastructure and details of the currently deployed stack; and I, the users of the legacy system.

What struck me was the sheer waste of the client's time in repeating themselves. I was convinced there was a better way of managing requirements in a multi-disciplinary setting. And that would be to model requirements visually, using Visio-like tools and UML-compliant symbols.

Back in Bangalore, I worked with several internal teams responsible for process and quality. Initial conversations would go like: "What? There are overlapping goals? You mean you too capture requirements? Well, unfortunately, we didn't plan for it! Can you work with the use cases instead?"

How users were aligned to an IT solution, from being an afterthought.

Soon, with examples and doggedness, I managed to convince the multi-disciplinary team to view design not as jazz, but as a process! I showed them a design process and aligned it with IT's core process. Yes, the SDLC or software development lifecycle: the waterfall method used at that time, aided by CMM maturity assessments to plan, track and deliver quality software.

Soon we had a patent-pending software requirements capture framework called Influx.

The design process was moved up to project start from its post-use-case stage, to actually drive the formulation of use cases, where the design approach, predominantly visual, endeared itself (empathy) to users much better than dry, structured, wordy templates.

The key innovation here was when we managed to show each of our diverse experts that the detail each is interested in stems from the same story but differs in granularity. Our breakthrough was in respecting these differences and nesting them: business workflows break down into more granular task flows, such that a business function like user authentication could break down into a user navigating through a set of screens, and each screen breaks down into performance engineering requirements on validations, server response times and so on. All beautifully nested in a single diagram with multiple levels of zoom. In fact, it inspired me to present this vision to management by layering it on the fascinating film by American design guru Charles Eames titled 'The Powers of Ten' (sponsored by IBM). As a primary contributor, I attribute the success to the mantra 'Align and Integrate', the theme of this post.
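To make the nesting concrete, here is a small illustrative sketch of such a multi-level requirements tree; the levels, names and the `zoom` printout are hypothetical, not the actual Influx model.

```python
# Hypothetical nested requirements: workflow -> task flow -> screen -> NFRs.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    level: str                 # "workflow" | "taskflow" | "screen" | "nfr"
    name: str
    children: List["Node"] = field(default_factory=list)

    def zoom(self, depth: int = 0):
        """Print the same story at increasing levels of zoom."""
        print("  " * depth + f"[{self.level}] {self.name}")
        for child in self.children:
            child.zoom(depth + 1)

login = Node("workflow", "User authentication", [
    Node("taskflow", "Sign in with password", [
        Node("screen", "Login screen", [
            Node("nfr", "Validate credentials"),
            Node("nfr", "Server response under 2 seconds"),
        ]),
    ]),
])

login.zoom()   # one story, multiple levels of granularity
```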

Design Process Benefits from Align and Integrate approach

Align and Integrate worked here at the process level. What I did first was convince members that design is not a black-box activity but an explicit process that lends itself to planning and managing. Next, I examined the classic design process: understanding users and tasks, exploring design solutions, prototyping both layouts and flows, visual design, high-fidelity prototyping and user testing/validation. With these steps, I cut the process up into well-contained steps/design activities and aligned them with the other core processes defined in software engineering. Alignment was based on when each step executes best, in which context or location, with whose inputs, with what outputs, and other dependencies. Note that most projects followed the waterfall model. The Agile manifesto was being drafted at the same time at The Lodge at Snowbird ski resort in the Wasatch mountains of Utah. Rational and OO architectures were the flavour. With the well-aligned tasks representing multiple domains, we next examined the quality goals and efficiencies. It was obvious that even within certain aligned tasks, there were opportunities to integrate them to represent and capture requirements better. One example is how workflows were captured as a sequence of actor actions in a use case, and also as a set of visible, tangible actions, where the actor is a human user, represented in swimlanes above the line of visibility. Integrating these ensured better collaboration and holistic requirements.

At another too-big-to-fail bank, my design team saw that when stakeholders were presented use cases as ordered lists of text items, they were dense to read and comprehend, resulting in approval delays. With better-aligned design integration, we were able to present the same material for approval visually, as a prototype. To bankers this was more exciting and elicited significantly better participation. We all know that a picture is worth a thousand words, right? Less cognitive load, in some sense.

Now that we could model requirements for design upfront, it proved a viable tool not just to capture design requirements, but to discover business requirements. I continued to help improve the Influx tool. The next step was to have the tool generate the English-language text of the use cases. This clearly showed the dual nature of requirements: in discussions and at elicitation it was visual, but under the hood it was XML. Post elicitation, for software engineering, it was a well-defined, structured, UML-compliant requirements document, generated from the underlying data.
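As an illustration of that dual nature, here is a small sketch that generates plain-English use-case text from an XML model; the schema and wording below are invented for the example, and Influx's actual UML-compliant format was richer.

```python
# Invented XML schema; generates use-case prose from the structured model.
import xml.etree.ElementTree as ET

xml_source = """
<usecase name="Check account balance" actor="Customer">
  <step>logs in to the portal</step>
  <step>selects an account</step>
  <step>views the current balance</step>
</usecase>
"""

uc = ET.fromstring(xml_source)
lines = [f"Use case: {uc.get('name')}", f"Primary actor: {uc.get('actor')}"]
for i, step in enumerate(uc.findall("step"), start=1):
    lines.append(f"  {i}. The {uc.get('actor').lower()} {step.text}.")
print("\n".join(lines))
```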

Align and Integrate at the process level worked fine. My team's effort was recognized with the Infosys Chairman's Excellence Award in 2002.

As a designer, I continued to build on this process foundation. I extended the Align and Integrate principle to resources and staffing. Here I have a confession to make. Designers are not easy to manage. Perhaps it's the nature of the work and talent. A leading Boston designer who worked on the MasterLock redesign said their team-size sweet spot is 20; beyond that it becomes unmanageable. Perhaps that is why design companies don't scale their services like software service providers do. At Infosys, I realized that staffing the exponential demand for UI design across projects was a huge task. For designers to align well with the teams, we needed them to fit well into the base organization structure. I worked with top management to create new roles in our recently acquired SAP HR system. I used its structure to define career paths and performance appraisal criteria. Compensation was pegged in line with the value that design brings to the table, while bearing in mind the constrained supply of talent. New hires were trained on our unique design processes and artefacts that were integrated within the overall software engineering frameworks. This ensured designers as a team remained well-aligned and integrated within the overall organization. Where the first attempt at a JIP service failed in the single digits, the new approach has scaled very smartly to hundreds of designers, and the number is only growing.

To be continued…

Designing for an Authentic AI

Originally published on Medium 20th July, 2018

Mechanical Duck, built by Jacques de Vaucanson (1738, France). Source: https://commons.wikimedia.org/wiki/File:MechaDuck.png

Higher order automation as opposed to mechanical automation

During my stint as a co-founder and product manager at Bizosys (2009–2015), a company developing Hadoop-based products to manage large-scale data (structured, unstructured and time-series sensor data), I had this overwhelming moment where a machine system could learn from past data and predict future events. This was for a telecom service provider who wanted the ability to accurately predict communication tower failures. There were over a hundred parameters, ranging from network health to the fuel levels in power generators, weather and national holidays. Remotely located towers could go down for days unattended. Initially we tried Weka but could not get prediction accuracy beyond 55%; no great business benefit from such reliability. We then tried a self-learning machine learning program deploying a window-shifting algorithm, HotSAX, that discovers discordant patterns in data. The results were exciting, with accuracy in the high 90s. Suddenly, this opened up new opportunities for the telecom infrastructure team: they could manage their shifts better based on reliable predictions, and downtime was reduced, yielding significant, tangible business benefits.
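For a feel of what discord discovery means, here is a deliberately naive sketch of the problem HotSAX solves efficiently: find the window whose nearest non-overlapping neighbour is farthest away. This brute-force version on synthetic data is only an illustration, not the HotSAX heuristic or our production code; HotSAX adds SAX symbolization and clever search ordering to avoid the quadratic scan.

```python
# Naive time-series discord discovery on synthetic data (illustration only).
import numpy as np

def find_discord(series: np.ndarray, window: int):
    n = len(series) - window + 1
    subs = np.array([series[i:i + window] for i in range(n)])
    idx = np.arange(n)
    best_idx, best_dist = -1, -1.0
    for i in range(n):
        dists = np.linalg.norm(subs - subs[i], axis=1)   # distance to every window
        dists[np.abs(idx - i) < window] = np.inf         # ignore overlapping windows
        nearest = dists.min()
        if nearest > best_dist:                          # most isolated window so far
            best_idx, best_dist = i, nearest
    return best_idx, float(best_dist)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
signal[250:270] += 2.0                        # inject an anomalous bump
print(find_discord(signal, window=20))        # discord found near index 250
```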

This sort of reliability can only be matched by humans with tacit knowledge gained from decades of experience, such as a train driver on a Southern Pacific line who has memories of record snowfalls and knows how to deal with a developing snow storm. Where a machine falls short is in the ability to predict with minimal training data. For example, in our telecom experiment, three quarters of the data fed to the algorithms resulted in excellent prediction of the fourth quarter. Humans, on the other hand, can manage within bounded rationality. If human thought as we know it is essentially Cartesian, then our knowledge of our experiences is traceable ultimately to our knowledge of the world around us. We know that such thought leads to errors. For example, once you operate a light switch, you expect it to work the same way elsewhere. When it doesn't, we adapt to the situation or enquire into it. The difference is in our learning capacities and input conditions. This is evident in the following comparison between Mooney images and machine-based face recognition.

 
A tale of two faces!

Like this Smithsonian article says, "The early Greeks and Renaissance artists had birds on their brains," and there was always a quest for the robot. Vaucanson's mechanical (incontinent) duck of the 18th century was perhaps as awe-inspiring to audiences then as the AI-driven automation unfolding today. Till recently, automation was rule-based, at least in production. With the announcements of deep learning successes, a new era is emerging.

This brings me to the premise of this story: how do we design experiences for a higher order of automation instead of the mundane mechanical systems? Consider an old analogue temperature controller compared to a connected Nest device. How are we supposed to engage beyond its visible appearance and display controls? Cognitively, the old task was straightforward: decide, when in the room, how hot or cold the room should be, and turn the dial clockwise or anti-clockwise. With a connected device, there is an app that can learn from your past spins of the dial up or down, to recommend or even offer to preset the temperature, via an app toast notification, sensing you are 30 minutes away from the air conditioning system. It has already contributed to the larger big data pool; an analysis of consumption patterns feeds utility companies with predicted loads, leading them to control the sluice gates of hydroelectric dams to produce power for the consumer, who is expected to turn on the AC to a comfortable 24 degrees in 30 minutes.

When you see the capabilities of advancing technology, such as New Zealand based Soul Machines, the technology is not just fascinating but resets our relationship with machines. Just as Ava has trained itself, or with the help of its creators, to mimic human expressions, would the machine be 'aware' of its learning? Like learning to factor in the response or expression in a conversation and changing how it smiles the next time it sees the same person, whether a man, woman or child? Would it also smile at the pet cat (which overzealous robots might see as a pet and as food) in the same manner as it would at a human? Would it spook the cat or dog with its smile, and realize "uh-oh"? The larger question: how much of the 'cultural learning' does the machine pick up? How would a driverless car behave in traffic in Arizona or, say, in Bangalore, India (where I am from)? Would the driverless car honk like they do in India, for the heck of it? Is honking a cultural thing? Does the machine learn these nuances?

Creating Ava — Soul Machines

As a user experience designer trained to adopt a user-centered approach, and I do, I ask: which user center am I designing for? The user as an individual, the user as part of a community, as part of the larger ecosystem, or as a speck in the biome? Our knowledge has advanced thanks to cognitive neuroscience, driven by fMRI insights, to map human cognition better than ever before. What qualities do I care for beyond usability? What matters when it comes to the user's relationship to the ecosystem? Transcendence? Uncertainty? Can AI, with its highly scalable, high-performance processing of vast data, help support humans with suggestions in these complex situations?

 
 
There are multiple degrees to the user center

New technologies have the potential to trigger these thoughts, while businesses attempt to balance growth and yet remain sustainable. Especially platform businesses that service connected consumers' needs, connecting producers to them via a platform infrastructure. The UX designer needs to work closely with technologists (a point I have underscored in another story on "Future of UX") to determine where to anchor the user experience in a complex, interlinked, connected world.


Nir Eyal ~ “behavioral designer, at the intersection of psychology, technology, and business.”

Langdon Winner ~ “attempts to fix and humanize the internet usually reflect the same consumerism, narcissism & profit seeking that are the root of the problem”

Authenticity and Free will

We want machines to learn to develop better products and technology (irrespective of whether it aids consumerist growth), or to understand human psychology (irrespective of whether it leads to narcissistic behaviors online), or to enhance business productivity (primarily as a profit-seeking, growth-YoY measure). AI and technology here are cast in the role of a mere tool. Not the partner they ought to be.

Nir Eyal and Langdon Winner are two diverse experts I respect and am aware of as a designer: attempting to design new behaviours, yet not being naive about the responsibility to be shouldered while harnessing technology. As much as user research and ethnography feed my creative highs when I know what interface elements to tweak above the line of visibility, being bold enough to recognize that the underlying systems may not be apolitical when deployed is a challenge to comprehend. More here, where Langdon Winner enquires, "Do Artifacts Have Politics?"

As in the decades building up to the newer AI-based solutions, we have imagined user experiences in the same rule-based manner, across collaborating experts (designers, engineers, technologists, marketeers, product managers), focusing on transactions!

 
The Half Full Cup — remove noisy information before analysis and design

Consider flipping this.

Gone are the days of limited computing power. Gone are the days of siloed organizations and consumers. We have come far from the days when Bill Gates proclaimed 640K ought to be enough! While technology has advanced beyond even Moore's Law, we retain those Gatesian heuristics. We look at data as having noise: incomplete data, bad data and so on, which in the past would have crashed rigid, rule-based computer systems. Remember the blue screen of death!

 
Dunn/Belnap multi-valued logic

After all, what is noisy data? Is it like the proverbial weed, i.e. a plant without a benefit for human consumption? I find succour in political theory for such behaviours; specifically, in Dunn/Belnap's multi-valued logic. A voter in an election could be voting in multiple ways beyond the boolean for or against! What we refer to as bad data or noisy data is likely to hold rich information. Political, fuzzy, inconsistent, outlier tidbits of data, perhaps!
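A small sketch of the Dunn/Belnap idea, assuming the usual four truth values; the enum names, the `combine` rule and the voting example are my own illustrative shorthand.

```python
# Four-valued (Dunn/Belnap) truth values: evidence can conflict or be absent.
from enum import Enum

class B4(Enum):
    NEITHER = "no information"
    TRUE = "told true"
    FALSE = "told false"
    BOTH = "conflicting information"

def combine(a: B4, b: B4) -> B4:
    """Merge two sources of evidence about the same claim (knowledge join)."""
    told_true = a in (B4.TRUE, B4.BOTH) or b in (B4.TRUE, B4.BOTH)
    told_false = a in (B4.FALSE, B4.BOTH) or b in (B4.FALSE, B4.BOTH)
    if told_true and told_false:
        return B4.BOTH
    if told_true:
        return B4.TRUE
    if told_false:
        return B4.FALSE
    return B4.NEITHER

# one poll says the voter is 'for', another says 'against'
print(combine(B4.TRUE, B4.FALSE))   # B4.BOTH: noisy, but information, not an error
```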

 
Not Boolean > How the swing voter went extinct by Alvin Chang. Source: https://www.vox.com/policy-and-politics/2016/11/4/13496688/swing-voters-dying-cartoon

Why not let machine learning differentiate good from bad data? What are the opportunities for technology and design? That opportunity lies in the half-empty cup of data that we traditionally let drop to the floor!

 
The Half Empty Cup — let the machine learn to tell between Good Vs Bad/Noisy data. Let AI generate Anticipatory user interfaces. Think of them as A/B tests on steroids.

In fact, architecturally, as we move from monolithic systems to microservices-based systems, there is an opportunity to use machine learning and information-discovery automation (agents) to mash up fascinating views of information, presented within accepted aesthetic conventions and appealing to common sensibilities, as machine-generated user experiences!

The key, I believe, lies in how we decompose the functional elements, which I construct as a diagonal that slices the vertical stack embedding the system layer, interaction layer and user-intent layer.

 
Decomposing Micro Interactions to be served by underlying micro services.

Assuming we progress to this scenario, UX designers and engineers have the opportunity to look at data, as well as user experiences, holistically. We could redesign the five-star rating/feedback mechanism to free it from its transactional moorings.

 
Data driven, AI driven technology can lead to more wholesome, personalized user experiences provided it makes sense of all the data

Rhetorically, one may ask: are such machine-generated experiences authentic? Can the mere mimicry of human expressions, like Soul Machines' Ava, create lasting trust?

Pause and ask: is there something synthetic, unnatural about such computed personalization? Is such personalization actually benevolent? Are we allowing machines to manipulate us into believing it's our free will that drives us? Is there an eerie suspicion of a manipulative entity or organization with an agenda? Is the intent behind personalization authentic, and not fake?

Designing for technology and user experiences needs to weigh in on the output of AI, how it's tuned, and how it learns. AI-generated UX builds first on trust, wherein the user in some manner places trust in the data he or she unlocks. Such data is authentic since it flows from the user to the AI system. It's from that base that AI generates UX that generates delight. Even if the UX disappoints, core trust still remains. It is authenticity flowing from the sense that a user empowered the AI system. However, technology can only go so far. As Descartes points out, free will is "the ability to do or not do something" (Meditation IV) and "the will is by its nature so free that it can never be constrained" (Passions of the Soul, I, art. 41). But I suppose that as long as the human consumer of tech-served choices believes it is not interfering with her free will, it should be OK.

 
I choose to do or not do something — is there a tilt? is the salt enough?

A light human touch makes a thing personal. Authenticity is further cemented with the deft user touch, or tweak, to personalize. When untouched by the user, it is incomplete, impersonal, and does not empower human free will. The role of UX for AI is a little like the light touch one gives to set right a tilted painting. Or that little dash of extra salt in a dish! Such actions make it a signature something, very personal. An expression of human free will.

Design will stay relevant to celebrate that need: free will. UX designers recognize that and incorporate it, irrespective of the process used to discover it. Assume AI builds on trust where possible, to learn and generate the delightful UX. Assume the UX is authentic because it allowed the user to configure or change it. Even if the human finds it authentic, does the machine know? Algorithms that interpret this and feed it back, representing it as new learning, will be key for scale. UX design needs to train ML for such representational feedback.

Error handling in AI driven systems, if such a thing is possible with automation

Lastly, as a design practitioner in the big data space, another aspect of AI besides authenticity that I feel UX designers should focus on is error handling. If processing for choice using multi-valued logic allows automation of user interfaces, then we similarly need to diversify the post-system response, or the feedback to and from users. Where an error such as 404 Page Not Found is a binary setting, in our AI-driven world there is a wider room for error that needs to be flagged. User interface designers and information architects need to devise fresh UI approaches to flag the false positives and false negatives that an AI-based system may throw up. This will require the user experience to elicit users' critical thinking, so they stay aware and flag issues.
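As a rough sketch of what that could look like in a product, assume every AI-backed answer carries a model confidence and a slot for the user to flag it; the class names, the 0.6 threshold and the copy here are hypothetical illustrations, not an existing API.

```python
# Hypothetical response envelope that invites the user to flag AI mistakes.
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class AIResponse:
    answer: str
    confidence: float                        # 0.0 - 1.0, reported by the model
    user_flag: Optional[Literal["false_positive", "false_negative"]] = None

def render(resp: AIResponse) -> str:
    """Low confidence triggers a prompt for critical thinking instead of hiding doubt."""
    if resp.confidence < 0.6:
        return f"{resp.answer}  (I am not sure. Does this look right? [flag])"
    return resp.answer

resp = AIResponse(answer="Your parcel was delivered yesterday.", confidence=0.42)
print(render(resp))
resp.user_flag = "false_positive"            # the user tells the system it got it wrong
```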

 
How can UX incorporate behavioural cues that trigger critical thinking — to detect errors and act to prevent them or flag them — Immensely useful in driverless car ecosystems, fake news publishing

These two ~ the authenticity of AI-generated UX and error handling in unsupervised ML systems ~ and how UX designers address them, will bridge what I call the last-mile delivery of UX and will be the pivots in UX for AI: less visual and more cerebral!

When Less should be More!

Photo by Trym Nilsen on Unsplash

The VP of Design at Uber was mentioned by Fast Company recently as saying that in 2018 he "is to introduce a more empathic and considered approach to the company and the product." The emphatic 'more' on empathy triggered my interest. I am aware that as I pick phrases, the original context and intent may get lost!

More empathy is bewildering. Let us examine it critically! A child at a buffet stocking up only on desserts is an indulgence we may empathize with while justifying it: let the kid have a break! Or: poor child, a little fun once in a while won't hurt! Then there is the non-indulgent empathy that goes deeper, saying I care for you and your well-being. You may have a scoop of butterscotch ice cream with caramel topping, but first finish the salad on your plate.

When we design products and services and at the same time traverse complex emotive zones, I wonder if there is a correlation between creative imagination and empathy. Do we need to artificially pump up empathy to get into a creative stupor, as Aldous Huxley might suggest; to rally the creative forces within and unleash them on the problem at hand to deliver the oooh of a user experience? In the case of business problem solving, it is a rational justification of what seems to be the naturally right thing to do. Be good! It is a self trip in an enhanced state of happiness and empathy, where you readily give and accept free hugs because you believe; aided, of course, perhaps by substances, or instead by a design thinking process, which can also render one euphoric!

What distinguishes 'caring-empathy' is that it comes from within, naturally, although that sounds mystical, and not what Plato would have expounded. Before we sort 'artificial-empathy' from 'caring/genuine-empathy', let us examine what role empathy plays. Is empathy a means to acquire knowledge, or is empathy about deploying knowledge (and logic) to experience emotion? (I refer to this post by Betty Stoneman: Plato's Empathy? Qualifying the Appetitive Aspect of Plato's Tripartite Soul.) User researchers and Design Thinking practitioners should know better, or at least be aware of their intent, while investigating.

This tautological reference to a caring-empathy is an important distinction. Especially as many are starting to grow weary of the noun empathy, thanks to Design Thinking's drumming, exhorting executives to turn on empathy, at least for the duration of the DT workshop they participate in. Empathy captured in a complex array of multi-coloured post-its! (I too have indulged in these rituals.)

More worrisome is the commodification of empathy, visible in the media; we get a daily dose anytime a sensational event is reported. Fatimah Suganda, a researcher from Indonesia, pointed out the tradeoff between media striving for a readership/audience boost versus informational and educational storytelling in her story "The Commoditization of Empathy in Media Coverage on Engeline's Death." Ironically, it's this very approach to raising empathy that could lead to its dysfunction! I sense I am generalizing, but nevertheless it's a perspective.

So, I ask: Does ‘Artificial empathy’ lead to indulgent design, while ‘Caring empathy’ delivers good design?

What is the Future of UX Design?

This topic was triggered over here https://www.quora.com/What-is-the-future-of-UX-UI-designer 

Image courtesy: https://unsplash.com/@garidy_sanders 

I am pasting the same answer here for convenience. I have consciously left UI out and am sticking to just UX. I have another post on Quora on this and the UX vs. UI discussion.

My answer to this question is in two parts — a near future and a long-term future.

Short term (up to and around 2020): a very bright future. Demand for pixel-perfect, usable and delightful UX is high, especially with accelerating digital transformation underway globally. Supporting evidence is in this graph of top design-driven companies against the overall S&P index:

 

Source: Job Trends Report: The Job Market for UX/UI Designers

If topline growth of marquee brands is significant, design is today a buzzword among other companies too, who are often guided by the leaders. Coupled with digital transformation, where information technology is ubiquitous across most business processes, design is a key skill that teams within companies and service providers seek.


Long-term (beyond 2020): this is the interesting one. If you subscribe to Clayton Christensen's disruption model

Source: What Is Disruptive Innovation?

Then UI generators such as https://thegrid.io/ are the disruptors that will likely become the norm (read this piece on Wired about websites that design themselves). Handcrafted UI and UX design will likely transform into the curation and product-management aspects of UX.

IMO, the future of UX is likely to change. Self-taught machines can perhaps soon iterate 1000 times faster and produce far greater variety than human designers ever have. In such a scenario, whenever that happens, 10 or 20 years from now, UX design education and training need to transform.

If UX design in future were to include more formal studies (no pun there) viz.

  1. Study of cognitive neurosciences and human behaviour
  2. Study of ethics
  3. Product management — to envision technology-aided interfaces stemming from AI advances, generated and unsupervised ML-based system interactions, predictive UX, personalized robotic services, and similar emerging tech.

In conjunction with this, I predict that engineering performance will come to the fore and UX designers will work closely with technical architects, who together will overshadow the current marketing-driven business agenda that is at the core of decision making. My premise is that process is given undue importance over design action. Agile, Design Thinking, etc. will have to give way to design execution. I do not mean the process will go away, but it gets more below the hood, intrinsic to the flexible work culture of the digital information age. As for business strategy, its agility will be about how its customers define it for them, not in some wood-panelled boardroom or on digital-wall pods that aid decision making. Agility will be about how plugged in businesses are to their users, without restrictive filters. Agility is not just agile as a process, but an attitude. Instead of insights gathered out of noise-free data, the effort will be to remove noise at the analytics stage. Decision-making power will shift to customers.

It's against such a scenario, driven by large-scale deployment of AI and related tech, that the future of UX designers will unravel, as to what roles they will play.


Postscript:
The impact of AI on UX design has been discussed a lot. So has the example of The Grid, which I refer to as a Clayton Christensen disruptive entrant. In some ways, The Grid is the hero (like the mini steel plants and minicomputers). Here is one great piece from UX Collective by Fabricio Teixeira: "How AI has started to impact our work as designers." Fabricio is bang on about the impact of AI and that it's well suited to doing chores like cropping images, and maybe sorting and tagging them. However, I believe this is the sort of productivity we will see in the short term, not wholesale, but in those large agencies with large stable accounts and steady budget flows. My point is that in the longer term, as the technologies mature, they will be capable of doing good design, maybe with 1000x more iterations: A/B test, iterate again and publish widely. During the early days of the WWW, we had designers handcrafting attractive banner ads. In the future, these may just be the output of an AI-driven ad-serving platform that creates a campaign, negotiates and buys spots, runs the campaign, learns from it and repeats. Of course, the path to that has its trials and tribulations, like Microsoft's Tay! These are the patchy early-version prototypes that will eventually disrupt.

Instead, UX professionals in future need not be limited to UX, or the stuff above the line of visibility, which machines may replace. They ought to work closely with product managers and engineers to reimagine product experiences.

 
source: https://upload.wikimedia.org/wikipedia/commons/d/dd/Star_rating_1_of_5.png

 

For example, let us consider the common, abstract five-star rating feedback method. This UX is a legacy of OLTP (transaction) systems in its thinking. Feedback is captured in a manner that suits how it's processed, which is rule-based and rigid. Go with me and imagine this feedback mechanism overhauled, assuming the rigid rules are replaced by self-learning AI systems.

 
A concept for an AI-based feedback system where FEEDBACK is a RELATIONSHIP and not a TRANSACTION. What if AI inverts feedback from an explicit, overt system to an implicit, covert approach, wherein the AI system observes and learns the user's relationship with products or services, in a context it determines as appropriate, capturing feedback as a continuous 'relationship' with the product or service, as against a 'transaction' with the product? This image is only a conceptual illustration wherein feedback = relationship is constructed and changes with time. There can be an aggregate view or splits to drill down into. The user has control over which view, or all views, to share. This is an example of how UXers can question the norm and reimagine the product to harness the power of new technologies, while allowing the same system to focus on the chores of generating 'designs' for the UXer to choose from. In that sense, the future UX designer would be part curator, part designer. The distinction between what a designer does and what AI does is likely to be the difference between rich organic memories (human) and artificial rules/graphs (AI). Those memories will be our strength and guide our hand and eye.
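As a small sketch of that concept, assuming a handful of invented implicit signals and weights (none of this is a real product's data model), feedback could accumulate as an evolving relationship score the user can inspect and choose to share.

```python
# Feedback as a relationship: implicit events accumulated over time (illustrative).
from dataclasses import dataclass, field
from datetime import date
from typing import List

WEIGHTS = {"reorder": 2.0, "daily_use": 0.5, "support_call": -1.0, "return": -3.0}

@dataclass
class Event:
    day: date
    kind: str

@dataclass
class Relationship:
    events: List[Event] = field(default_factory=list)
    shared_views: set = field(default_factory=lambda: {"aggregate"})  # user-controlled sharing

    def observe(self, event: Event):
        self.events.append(event)

    def score(self) -> float:
        return sum(WEIGHTS.get(e.kind, 0.0) for e in self.events)

r = Relationship()
r.observe(Event(date(2018, 7, 1), "daily_use"))
r.observe(Event(date(2018, 7, 9), "reorder"))
r.observe(Event(date(2018, 8, 2), "support_call"))
print(r.score())   # 1.5: a relationship that evolves, not a one-off star rating
```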

As you leave, read this brilliant piece by Mariana Lin on the distinction between an artificial persona and a human persona.

 

Caveat: I admit to an overly optimistic and exuberant assessment here; this is an area of speculation. I am informed by my own 7-year journey as a designer co-founder at a Hadoop-based big data startup (see: ramblings from a failed startup journey).

Two Big Bets Salil Parekh should take as CEO of Infosys

First posted on Medium Dec 3, 2017

The paint on the signboard with the next CEO of Infosys is fresh. That's fresh cheer for the stock markets in Mumbai. Infosys was always a darling! Salil Parekh will quit his executive board position at Capgemini, a French IT services company, to join Infosys. This in spite of the prevailing sentiment that this time it has to be an old hand, especially under the circumstances leading to Vishal's resignation. The culture argument is that an outsider doesn't get how Infosys works. Leadership, they believe, needs to be sitting closer to its headquarters in Bangalore. I thought so too at least, having spent a few years there between 1995 and 2006.

Nandan Nilekani, co-founder of Infosys, author of ‘Reimagining India’ is a wise man. As the person tasked with getting Infosys back on track, he is well-aware of challenges ahead at Infosys. Nilekani said “the challenge before companies like Infosys was to get people to be up-to-date on current technology, current development and how they learn the latest.”

Considering his recent experience in helping build the world's largest biometric identity system, Aadhaar, and being on his way to building the next high-impact 'societal platform' at Ekstep, he knows that to build scale fast and to generate critical impact requires commitment and a no-nonsense attitude. Sentiment doesn't help here!

The choice of Salil Parekh, an outsider to Infosys yet a veteran at scaling Capgemini's India operations in a market that was already witnessing eroding margins, coupled with the need to reskill for new technologies, reflects this approach. A Reuters report quoted Nandan as saying, "He (Parekh) has nearly three decades of global experience in the IT services industry. He has a strong track record of executing business turnarounds and managing very successful acquisitions."

There are several issues to get around once Salil steps into Infosys in January 2018. But two will stand out in how he leaves a lasting impact on the organization. OK, three!

#1 BIG BET – Picking up on the foundation laid by Vishal Sikka in artificial intelligence will be the first big bet. Predictive Analytics Today did a comprehensive analysis of leading AI platforms. In that report, Nia, Infosys's AI platform, ranks fourth alongside Wipro's Holmes and the platforms from stalwarts such as Google and Microsoft. Not included in this list is Indian IT leader TCS's AI platform, Ignio. But the big boy way ahead in the AI game is IBM's Watson. As reported widely, Watson's estimated revenue stands at US $100 million over the past three years. IBM has set an ambitious target of $10 billion by 2023. This is likely a challenge in this otherwise exuberant market, even for Watson.

For Salil at Infosys, the challenge will be similar. Infosys needs to solve its clients' problems fast, and show business value from such solutions. He should build on Infosys's attempt to invigorate the solutions space with its 'Innovation Hubs' that hire local talent, including user experience and design. Infosys has traditionally had the advantage over its competition, at least the India-based ones, in cementing strong client relationships. Salil should quickly press these forces to deliver application ideas for Nia and the other advanced technologies it possesses, and show results to its customers. And the place of action for this big bet will be its Innovation Hubs. This is very different from the past model of capturing functional requirements as use cases; a template-driven approach.

Here one needs to collaborate, and co-create solutions. If such concerted effort plays out well, it will be the first solid differentiation Infosys can highlight. The transformation will need to be away from its offshore centers and closer to the client location.

Re-skilling its offshore armies of developers and technologists is already underway. Efforts include collaboration with leading online trainer Udacity to deliver ‘Nanodegrees.’

#2 BIG BET — This is more up Salil's sleeve, i.e. M&A. For Infosys, the India business is not as significant in revenue terms as it is in visibility. The much-touted GST, a taxation modernization effort, has been an issue for Infosys, which built and deployed it for the Government of India. India's small businesses are up in arms over what they claim is poor performance and several glitches, especially around usability. Infosys, though, defends its record. Quoting from the report: "Given the complex nature of the project and rapid change management, there have been several stakeholder concerns that have also been raised. Some of our finest engineers are supporting the GSTN team as they work towards resolving these and serving all stakeholders."

In the past, rapid growth for service companies such as Infosys has come from the implementation and customization of products. Honestly, this did not involve that great an amount of thinking and innovation, since the problem is already solved by SAP, Oracle, Microsoft, etc. Strategic problem-solving skill is a culture and capacity found readily at reputed consulting firms such as McKinsey, BCG, Bain, Deloitte, Booz Allen and so on. Infosys has time and again tried building such a practice but has not delivered on that front as expected. But the Innovation Hubs planned at strategic locations across the USA and Europe will help stimulate these problem-solving skills and deliver results.

Culture inherent to a business is the big elephant in the room and there is no way past that. This applies to India too.

Especially for a high-visibility solution such as the GST portal for the Government of India, which impacts hundreds of millions of ordinary Indians. Now, who understands the financial thinking of millions of businesses in India better than Bharath Goenka, the founder of Tally ERP, India's leading ERP and accounting software product company? The journey of Tally started with a challenge posed to Mr. Goenka by his father: "Are you writing programmes to make the life of the programmer easier or the life of the user easier?"

Early on he understood the culture of accounting preferences of Indian bookkeepers. There are consultants like Rohit Choudhary who say "It is an accounting software with a soul!" and that "You simply don't change for the sake of changing. Tally's interface is very simple, unique and user friendly." While there is a universal lesson in such philosophy, it's important to note that that simplicity has emerged from a focus on what users seek and how they work.

Salil has a radical option: co-opt this deep learning of how millions of Indians prefer to manage their business accounts, while they get around to complying with the new tax regulation, with over 99% of taxpayers registering in record time. If we accept the simple assertion that Bharat Goenka understands this behaviour well, then Salil should acquire Tally, convert it into an open platform, and offer it with tighter integration to the GST portal, reducing the burden on users of mastering new software and instead using the Tally-like experience to segue into the GST portal.

Now, would such an acquisition actually pay off in terms of license fees or subscription fees, assuming it offers a freemium model to millions of users, where they pay to upgrade? It may not. But there is a larger political gain if this is pulled off in record time before the next general elections, and for Infosys a greater clout and influence on policies that impact its operations. One can only imagine the other, unknown benefits from a platform such as this when linked to other government digital programs, including Aadhaar. There could be stories of efficiency, inclusion and benefits for millions of Indians.

For Infosys, these two big bets could truly transform it into a next generation solution company, accompanied with impact and influence, built on a robust base of an efficient service culture.

The third bet is ancillary to the previous two. Salil will need to convince the Infosys board and take them along, including prominent shareholders like Mr. NRN Murthy, for these initiatives, and with transparency. But with Nandan as Chairman to guide him, the journey should be easier for Salil than it was for Vishal, relatively speaking. Time will soon tell whether he will be successful and be that bold transformer Infosys needs. But then again, he is an outsider!

Data Driven Cultures

(first published in 2015 at http://www.dataswft.com . Updated 7 Dec 2017)

What drives data driven cultures…besides coffee?

How do businesses deal with intuitive insights and machine-generated insights? In a conversation with a brand consultant and travelista about my product Dataswft, which sifts through #realtime #bigdata #analytics, he asked, "where are the warmer human things that drive AI and ML technologies?" To be 'data driven', he pointed out, is a culture; he was unaware of the Tableau-sponsored report from the Economist, "Fostering a data-driven culture." To quote the report, "IT security is indeed a job for experts, but data are everyone's business." I still struggle with the plural nature of data!

"Is Dataswft a technical thingy for being data driven, or is it enabling data-driven cultures?" the brand expert enquired. Time to act is a key metric for being data driven, I explained. And so, being data driven is an everyday matter as long as it provides value. But how does that differ from a data-driven culture? Does the constant posing of questions, small questions, constitute a data-driven culture?

Consider this scenario. For an online ad campaign, the frequency of tweaks needs balancing between regular and not at all. Regular tweaking requires a minimal amount of data to analyze for metrics such as reach, clicks, CPC and so on. Something like a week's worth of data is good. But that is a heuristic that applies to a human scale of attention and processing. Or compress it to a day, so that the human manager can take a look at the end of the day or the beginning of the day, and so on. Also, these tweaks are post facto, i.e. historical data analysis using heuristic approaches.

Add machine learning, and it opens up two opportunities. First, unlike humans, it is not limited by fatigue or attention. Of course, we will never discount human creativity and imagination, especially when dealing with limited information, limited time or limited capacity to process; those constraints are handled better by humans than machines. Machine learning can give us the speed and capacity to deal with large data sets. The second opportunity for machine learning is the capacity to predict, by utilizing well-defined mathematical models or algorithms.

With artificial intelligence, the same campaign can now run with greater efficiency and more frequent tweaks, instead of weekly or daily windows. It can even be real time, though that is more relevant to IT security and fraud management. Data driven, coupled with such system intelligence, gives us the opportunity to ask several 'small questions', which you can liken to the 'infinitesimal element' used in decomposing physical forces.
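To make the 'small questions' idea concrete, here is a toy sketch that asks the same question of simulated click-through data every few hours instead of once a week; the data, window size and threshold are all invented for illustration.

```python
# Many cheap 'small questions' over short windows instead of one weekly review.
import random
import statistics

random.seed(1)
hourly_ctr = [random.gauss(0.021, 0.004) for _ in range(24 * 7)]   # a simulated week

def small_questions(ctr_series, window=6, floor=0.018):
    """Ask 'is CTR slipping?' every `window` hours; return (hour, mean) alerts."""
    alerts = []
    for start in range(0, len(ctr_series) - window + 1, window):
        mean_ctr = statistics.mean(ctr_series[start:start + window])
        if mean_ctr < floor:
            alerts.append((start, round(mean_ctr, 4)))
    return alerts

print(small_questions(hourly_ctr))   # frequent micro-checks a machine can act on
```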

Representative image to demonstrate the concept of an infinitesimal element as a tangible, simple model-able, mathematical quantity (source: https://upload.wikimedia.org/wikiversity/en/thumb/5/58/InfElement.png/400px-InfElement.png )

This finer abstraction, I conjecture, will allow for more accurate sampling of data and analysis by the machine. For the campaign manager, these frequent ‘small question’ analyses can present a visualization that is richer, provide better trending on the data and lead to better decisions.

If being data driven represents our ability not merely to capture and store data, but to process it continuously and ask of it well-modelled 'small questions', then our ability to connect with the output of that data-driven process, coupled with human intuition, is what stands for a data-driven culture.

Providing answers instantly is what technology does well. Given the technology, it's the culture that realizes the potential and pushes the envelope.

Consider an investment bank that needs to run value-at-risk calculations covering a host of financial products invested in by clients, tuples of market price data stretching over months, and hundreds of sophisticated risk models designed to predict risk against different scenarios. For this person, it's important to know how much money is to be set aside against the dynamic risk and how much capital can be unlocked to earn. Calculations here can run to over 15 billion, and executing them in under 30 seconds can make a huge difference to these money managers. Only a data-driven culture makes this scenario possible and keeps a handle on risk in a volatile market involving many asset classes.

Data-driven cultures are those we see at the top of the ‘culture pyramid’, crunching their data by the second, minute and hour rather than as some end-of-day or end-of-week event. As one moves higher up the pyramid, the response times the culture will accept shrink from hours and minutes to mere seconds. That is not to say every industry needs to optimize for nanosecond-level, real-time analytics; each should choose based on where the opportunity and the demand lie. The security industry can only survive on real-time processing of events. Social media marketing may find untapped opportunity in an hourly cycle. Education and learning may find it enough to move from end of term to end of day, and so on. In all of these, the data-driven culture consists of the small questions we ask of the data.

Put another way, data-driven cultures are those that complement a purely heuristic decision-making process (read: gut feel) with a data-driven approach, asking: what do the data say? That is not just about the quantity of data, but also about a quality of it that heuristic rules and human cognition are likely to be overwhelmed by, given the volume, variety and velocity.

In a digital world, even when it is the same question asked a few minutes ago, yesterday or last quarter, the answer, paradoxically, is never the same; it gets better, or is at least likely to be different.

“Time…and data, are like a river. You never touch the same water twice!”

Of Polynyas and a Pollyanna

Yesterday, I was watching the wonderful nature series by the BBC's David Attenborough (specifically Frozen Planet). During winter, most things that fly escape the freezing Arctic for the warmer southern regions of our planet.

Spectacled Eider Duck

There is an exception, it seems. The Spectacled Eider, instead of moving south, heads for the frozen seas in search of ‘polynyas’, naturally occurring holes in the otherwise frozen ocean. The entire Eider population gambles on an open hole, betting that it will remain open through the harsh winter. Some polynyas are reported to stay open over several winters; others simply cannot sustain the thermodynamic conditions required to keep the ice from forming. For the Eiders it is important to choose a Polynya that doesn't close in. When the bet fails, as in the video I watched, the hole becomes too small for the crowd to stay afloat and alive; it is like a noose that slowly tightens, freezing many to their deaths.

The best part of my career was like the abundance of spring. Seasons change, and when winter set in some five years back, I cut loose in search of my own Polynya, navigating entrepreneurial waters. While watching Frozen Planet, I was struck by how much my situation mirrored those doomed Eiders! My startup Polynya seemed perfect when I chose that space; it was the best contrarian bet, I thought then. It was so good that it reflected in my improved health very quickly. It's worth noting that being my own boss, chasing a dream, was an amazing stress buster in itself. My elevated cholesterol levels came down. Later, a curious enquiry into changes in eating, exercise and sleep patterns revealed nothing; I was left to conclude that stress was the silent killer, and I beat it in my own Polynya. In the microclimate of the startup ecosystem, talking to investors, fellow entrepreneurs and businesses at networking events, award functions and so on, the Polynya comes to life, nurturing the dream. Until it starts closing in!

Everyone in the ecosystem is aware of the thermodynamics at play. As in a Polynya, the freeze constantly tries to extend, while upwelling currents continue their churn, bringing rich nutrients to the surface for the ducks and seals to feed on, not to mention the submerged naval vessels that occasionally need to rise to the surface within a Polynya. If I may draw a parallel, those naval ships are the big enterprises that visit startup Polynyas with their ‘open innovation’ programs, launching funds, accelerators, shared IP and so on, ensuring their large appetites constantly find new sources of food to keep that magical growth number going, or in search of technological progress.

Unlike the poor Eiders doomed within the confines of a false Polynya, a startup is afforded an exit: for the lucky few it is newsworthy, for the others it ends in an icy, watery grave. As unreal as failure is, reason tells me I took a chance; I didn't fail, it was my aspiration, my expectations, that failed. More importantly, I realize these can be shed. I remind myself it's all gravy, and I welcome myself back to this neglected blog.

The optimistic Pollyanna that I am tells me: here it is spring all through the year! It would be unfair to the Polynya that fed me through tough weather and kept conditions open; like so many of us, I am richer for the experience! In my next post, I hope to share an insight gleaned from that journey, where I will try to make sense of what linkages there are, if any, between the two worlds I straddled (I am no deep-diving Eider either): the ‘user centered’ world of design and the ‘data centered’ world of big data. As for my future, I believe I have not failed, but emerged from a cauldron, or should I say an ice bucket, of learning, ready to apply it in the new work I take up.

Meanwhile, enjoy stanza V from T.S. Eliot's 1925 poem ‘The Hollow Men’ (quote taken from All Poetry). It captures the sort of feeling I get as I exit the startup Polynya.

Here we go round the prickly pear
Prickly pear prickly pear
Here we go round the prickly pear
At five o’clock in the morning.

Between the idea
And the reality
Between the motion
And the act
Falls the Shadow
For Thine is the Kingdom

Between the conception
And the creation
Between the emotion
And the response
Falls the Shadow
Life is very long

Between the desire
And the spasm
Between the potency
And the existence
Between the essence
And the descent
Falls the Shadow
For Thine is the Kingdom

For Thine is
Life is
For Thine is the

This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.

Data Fracking

In 2006, Clive Humby drew the analogy between crude oil and data with the phrase “Data is the new oil”, which has since captured the imagination of many commentators on big data. No one doubts the value of these ‘resources’; what varies is the effort required to extract them. During a discussion, the CIO of a billion-dollar company put it this way: there is a lot of data, but can you make it “analyzable”?

Perhaps he was alluding to the challenges of dealing with unstructured data in a company's communication and information systems, besides the structured data silos that are also teeming with data. In our work with a few analytics companies, we found validation of this premise. Data in log files, PDFs, images, etc. is one part of it. There is also the deep web, that part of the data not readily accessible by googling, or, as a HowStuffWorks article puts it, ‘hidden from plain site.’
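As a small illustration of what ‘making data analyzable’ can mean in practice, here is a sketch that turns unstructured log lines into structured records. It is only an illustration of the data preparation idea, not a description of any particular product; the log format and values are made up.

```python
# Minimal illustration of 'data preparation': turning unstructured log lines
# into analyzable records. Log format and contents are hypothetical.
import re
from collections import Counter

LOG_PATTERN = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<level>[A-Z]+) "
    r"(?P<message>.*)")

raw_lines = [
    "2014-03-02 10:15:01 ERROR payment gateway timeout",
    "2014-03-02 10:15:03 INFO order 4412 confirmed",
    "garbled line that does not match",
    "2014-03-02 10:16:40 ERROR payment gateway timeout",
]

def harvest(lines):
    """Parse what we can into records; count what we could not."""
    records, skipped = [], 0
    for line in lines:
        match = LOG_PATTERN.match(line)
        if match:
            records.append(match.groupdict())
        else:
            skipped += 1
    return records, skipped

records, skipped = harvest(raw_lines)
print(Counter(r["level"] for r in records), f"skipped={skipped}")
```

Only once the text is in this record form can the usual counting, trending and querying begin.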

Bizosys's HSearch is a Hadoop-based search and analytics engine that has been adapted to deal with this challenge, commonly referred to by data analysts as data preparation or data harvesting. If finding value in data indeed poses these challenges, then Clive's analogy to crude oil is valid. Here is our take on it: if shale gas extraction represents the next frontier in oil, employing a process known as hydraulic fracturing, or fracking, then ‘data fracking’ is the corresponding process of making data accessible and analyzable.

It’s all gravy

image source: http://en.wikipedia.org/wiki/File:Voyager_Path.svg

 
I was in my early teens when the Voyager spacecraft was launched on what was planned as a four-year mission to explore the solar system. I still recall the fascinating, never-before-seen close-up pictures Voyager sent back as it shot past Saturn's rings. After a long quiet period, Voyager signalled sometime this week, 36 years later, that it is indeed outside our solar system: the first man-made object to leave it.

Like a bird leaving its nest to lead its own life, led by its own purpose. It's all gravy after that. To track its progress, there is only the keenest speculation. The degree of keenness matters because, if I may borrow Donald Rumsfeld's rhetoric, it is now an unknown unknown, and as it moves further out it will exist only in imagination. The bird in the nest is a known known. To know about the bird that has flown, the best predictive models matter. Patterns matter. There will be lots of noise from uncontrolled, unknown sources. When I used to gaze at those captivating pictures of Saturn and the other planets, there was also the Skylab crashing-down incident. Around that time I was living in Kolkata (Calcutta), and there was a craze, perhaps in jest, to get helmets to protect oneself from the impending crash. It's funny that even in modern times, scientific rationale can so easily be thwarted by such emotive responses. Is that natural when a familiar, well-understood worldview gets challenged and a lot of unknowns enter the picture?

This is the familiar language of big data these days, as popular as it is, with its many unknowns. It's all gravy today: for Voyager, for the bird that discovered flying, and for me too. Through this blog I hope to explore and discover new aspects of our digital worlds, to challenge the known and structured definitions I have lived by: the world of design, user experience and entrepreneurship encompassing cloud, mobility and big data. I welcome you to join me on this journey and share your thoughts as it progresses.


My first post explains why UX Gravy. The content is primarily contributed by Sridhar Dhulipala, until other like-minded individuals want to share out here. The idea is that over the last few decades, especially since the advent of GUIs, PCs and point-and-click devices, we are now in the middle of a far more digitally driven world, with highly mobile users consuming digital content across the globe. The initial heuristics and definitions that guided user interface design perhaps still hold, but fundamental changes are questioning their relevance. The nature of tasks, how we work and interact, and our personal lives are all changing, and user experience expectations and opportunities are different.

UX Gravy is about what lies beyond the previously well-defined rules of user experience. It's about identifying and exploring conjectures and evidence pointing to a new user experience, where our digital stuff has crossed the thresholds of easy, ready, structured comprehension. It's about what user interfaces, in what contexts, and how and why people interact with information, and how it helps them adapt at work and in life.