Chatbot, what art thou?

“To be, or not to be, that is the question: Whether ‘tis nobler in the mind to parse,
The slings and arrows of outrageous language,
Or to take Arms against a Sea of intent.”

In early 2012, at the big data startup I co-founded, we were sitting on an award-winning, Hadoop-based search engine, which seemed to offer new possibilities, if you accept that information can be organized, discovered, and connected very differently at scale. Beyond the capacity to handle petabytes of data with ease, it also marked a shift in how we could approach data: beyond well-framed, well-structured queries, we could get more hypothetical, what if this or that! From the labs, the combination of high-performance engineering and the ability to process large troves of data was a powerful beam to shine on hitherto unsolved problems. We chose the oft-neglected post-purchase support experience. We imagined that servicing customers was characterized by a rich variety of user situations requiring attention. We felt a solution that was not rigid, that had the ability to learn and adapt to a range of hypothetical scenarios, was ripe to pursue.

Our premise was that text sits at the intersection of UX and systems. Hence, a chatbot.

We picked Google Chat and Facebook Messenger for UX delivery, relying on their implementation of XMPP (discontinued in mid-2013). XMPP is the Extensible Messaging and Presence Protocol, a set of open technologies for instant messaging, presence, multi-party chat, voice and video calls, collaboration, lightweight middleware, content syndication, and generalized routing of XML data.

We launched Txtland…

Txtland, 2013: a chatbot that fetches information in response to natural language queries.
Txtland screenshot. The grey blocks are the user's texts and the purple blocks are Txtland's responses. User commands were unpretentious and completely functional: direct and short.

Without going into the rest of the story of what happened to Txtland (a digression), I realized the primary design challenge was putting the user at ease with the fact that she is chatting with a program at the back. It was powerfully fast in performance and response. It had access to a huge, up-to-date repository of information to parse and serve in a fraction of a second. However, like DeepMind's Go-beating AlphaGo, so well trained, solely on Go, it would not know how to play Scrabble. This narrowness of specialization against the broad spectrum and sheer variety of human intent, responding back, as Stafford Beer would say, with attenuation rather than amplification, remains the technical challenge.

As users typed, we parsed the text and picked up ‘action words’ like stock, weather, ticket, item, SR, etc., and the chatbot's responses would provide options against that ‘action word’ along with a rudimentary ‘TYPE THIS to find out more…’ and so on. It was quite elementary. The text used to engage with Txtland, a program, was honest and machine-like. If you attached a # in front of a phrase, it became an action for Txtland. See the screenshot below:

From a design point of view, as I see a proliferation of chatbots across industries, what I pondered is: why do the creators of chatbots continue to imitate human style, knowing well that the bot cannot live up to that label? Not authentic!

The fundamental problem with chatbots is that the interface between the user and the chatbot or agent is the same as what is used in normal, regular conversations between one human and another. This is validated by recent research from Pegasystems (NASDAQ: PEGA), the software company empowering customer engagement at the world's leading enterprises.

There is an opportunity, unexplored today, to redesign the container or tool through which a human user knows, and is comfortable knowing, that she is chatting with an artificial agent or bot. Such a design should include predefined, canned phrases and gestures. Research needs to explore whether such gestures can also be used by the bot to communicate broadly. What language should a machine bot deploy to communicate with humans?

That would be besides generated text. How does the language demonstrate a ‘machine culture’, where culture could be its ‘nature’: organizing information, finding correlations across it, finding and serving with great speed, and learning along the way what is a high-value correlation and what is a low-value one, in what context? Txtland was leveraging this aspect to be not just a Q&A type conversation: it could also run backend scripts, respond back, look up knowledge bases and ultimately, in case of failure, offer the option ‘Should I dial in our customer support representative?’

Chatbot parses text to respond with actions such as running scripts at the backend. 
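A minimal sketch of how that kind of ‘#’-prefixed, action-word dispatch might have looked (the action names and handlers here are hypothetical illustrations, not Txtland's actual code):

```python
# Hypothetical sketch of '#action' parsing and dispatch; not Txtland's actual code.

def check_stock(args):
    return f"Looking up stock for {' '.join(args) or '...'}. TYPE #stock <ticker> to find out more."

def check_weather(args):
    return f"Weather requested for {' '.join(args) or 'your city'}. TYPE #weather <city> to find out more."

# 'Action words' mapped to handlers; anything else falls through to a human hand-off.
ACTIONS = {"stock": check_stock, "weather": check_weather}

def respond(message: str) -> str:
    tokens = message.strip().split()
    if tokens and tokens[0].startswith("#"):
        action, args = tokens[0].lstrip("#").lower(), tokens[1:]
        handler = ACTIONS.get(action)
        if handler:
            return handler(args)
    # Failure path: offer to escalate instead of pretending to understand.
    return "I did not catch an action word. Should I dial in our customer support representative?"

print(respond("#stock ACME"))
print(respond("hello there"))
```

The point of the honest, machine-like syntax is visible even in this toy: the user always knows she is issuing a command to a program, not conversing with a person.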

What Chatbots can be!

I speculate that this sort of progress and exploration could help the ongoing digital transformation of businesses, including the automation of business processes, strapped on with a new manner of interacting with ‘cognitive computing.’ Thinking about these kinds of technologies and problems with a very different toolkit, that of designers, can help define the future of this industry and its innovation. Especially if these intelligent chatbots can conversationally learn from a human user how to perform the same task. How will a bot allow itself to be ‘handheld’, as painting robots do (record and play), to learn under supervision? Or, in other cases, immerse itself in a data-rich situation, given a specific human-specified goal, to learn unsupervised?

One benefit of machines having a new language to communicate with humans, with humans retaining theirs as distinct from it, is that we could even block, or program into the machines, an inability to process certain intimate or personal human phrases. This would limit machines to what we envisage for them, so that they productively engage and perform within those boundary conditions efficiently. Like a bot that is an expert in psephology, or another in string theory.

Assuming one gets past this question of chatbot communication, or its UX, then comes the challenge of figuring out whether there is a hierarchy among bots. After all, we have tasks ranging from the mundane and repetitive to the challenging and complex. Can these bots be designed for such hypothetical variety? Would it be ‘machine intelligence’ that differentiates them? How smart or fast is it? Speed and accuracy of response are critical to peg them. Of course, this is knowledge in the realm of known-knowns! Another boundary condition. From such criteria, a chatbot persona can be shaped and presented in a unique, non-human space.

Henry VI, Part III [IV, 1]

King Edward IV:  “Now, messenger, what letters or what news from France?”
Messenger:  “My sovereign liege, no letters; and few words, But such as I, without your special pardon,
Dare not relate.”
King Edward IV:  “Go to, we pardon thee: therefore, in brief, Tell me their words as near as thou canst guess them. What answer makes King Lewis unto our letters?”

Here in Shakespeare's play, the ‘guess’ is politically loaded. Hiding or revealing the facts may lead to harsh consequences for the messenger facing King Edward IV. The pardon that precedes hazarding a guess leads the messenger to conjecture confidently on the circumstances. Guessing is something humans do a wonderful job of; it is such a fine way to move forward, and so it could be for a chatbot, especially a smart, AI-driven one. A guess in that context is more heuristic and less algorithmic. That could also explain why a rule-based engine in a chatbot with NLP strapped on comes across as rigid, or duh!

Can we think of chatbot conversations that approximate the King and his messenger above? Conversations guided more by heuristic principles than algorithmic models. From Stack Overflow, here is a highly upvoted answer on the difference between heuristics and algorithms. Below is Kriss's explanation:

An algorithm is the description of an automated solution to a problem. What the algorithm does is precisely defined. The solution could or could not be the best possible one but you know from the start what kind of result you will get. You implement the algorithm using some programming language to get (a part of) a program. Now, some problems are hard and you may not be able to get an acceptable solution in an acceptable time. In such cases you often can get a not too bad solution much faster, by applying some arbitrary choices (educated guesses): that’s a heuristic. A heuristic is still a kind of an algorithm, but one that will not explore all possible states of the problem, or will begin by exploring the most likely ones.
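To make that distinction concrete, here is a toy illustration of my own (not from the answer above), assuming a small budget-selection problem: the exact algorithm enumerates every subset, while the greedy heuristic makes an educated guess and stops early.

```python
from itertools import combinations

prices = [42, 13, 27, 8, 55, 31]
budget = 90

# Algorithm: exhaustively enumerate every subset; the result is precisely defined.
def best_exact(prices, budget):
    best = []
    for r in range(len(prices) + 1):
        for combo in combinations(prices, r):
            if sum(combo) <= budget and sum(combo) > sum(best):
                best = list(combo)
    return best

# Heuristic (greedy): take the largest items first; fast, usually good, not guaranteed optimal.
def best_greedy(prices, budget):
    chosen, total = [], 0
    for p in sorted(prices, reverse=True):
        if total + p <= budget:
            chosen.append(p)
            total += p
    return chosen

print(sum(best_exact(prices, budget)), sum(best_greedy(prices, budget)))
```

On this toy input the exact search fills the budget completely (90) while the greedy guess stops at 86: a not-too-bad answer, reached far faster, which is exactly the trade-off described above.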

Irrespective of whether machine learning, such as the reinforcement learning model (see image below), or some other approach can be applied to build a ‘guess-as-you-go’ chat, what matters is the why.

source: KDnuggets

But why bother with guessing, I mean heuristic or 80:20 approaches that may make the chatbot fall on its face! (< any emoji to represent that?) 

In ‘Models of Ecological Rationality: The Recognition Heuristic‘, the authors Daniel G. Goldstein and Gerd Gigerenzer, from the Max Planck Institute for Human Development, suggest that a ‘fast and frugal‘ approach is one efficient method available. Can this guide the design of a chatbot?

From their paper, “One view of heuristics is that they are imperfect versions of optimal statistical procedures considered too complicated for ordinary minds to carry out. In contrast, the authors consider heuristics to be adaptive strategies that evolved in tandem with fundamental psychological mechanisms. The recognition heuristic, arguably the most frugal of all heuristics, makes inferences from patterns of missing knowledge. This heuristic exploits a fundamental adaptation of many organisms: the vast, sensitive, and reliable capacity for recognition. The authors specify the conditions under which the recognition heuristic is successful and when it leads to the counterintuitive less-is-more effect in which less knowledge is better than more for making accurate inferences.”
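A minimal sketch of the recognition heuristic as the authors describe it, using hypothetical city data: when only one of two objects is recognized, infer that the recognized one scores higher; otherwise fall back on whatever knowledge is available.

```python
# Recognition heuristic sketch; the data below are hypothetical placeholders.
recognized = {"Munich", "Berlin"}          # cities the agent has heard of
known_populations = {"Berlin": 3_600_000, "Munich": 1_500_000}

def which_is_larger(city_a: str, city_b: str) -> str:
    a_known, b_known = city_a in recognized, city_b in recognized
    if a_known and not b_known:
        return city_a            # recognition alone drives the inference
    if b_known and not a_known:
        return city_b
    if a_known and b_known:
        # Both recognized: use further knowledge if we have it.
        return max((city_a, city_b), key=lambda c: known_populations.get(c, 0))
    return "no basis to guess"   # neither recognized: the heuristic is silent

print(which_is_larger("Munich", "Herne"))   # -> Munich, purely by recognition
print(which_is_larger("Berlin", "Munich"))  # -> Berlin, by knowledge
```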

What this would do, in addition to the mundane, repetitive, well-defined, established routines that chatbots can address decently today, is also to add that variety, that ‘masala‘ to the curry; or currying up a conversation with a human! Just more breadth.

To give Dr. Hook's popular song a twist: take the pussy cat and turn it into a tiger; wild, back in the jungle from the zoo.

Dr.Hook – Jungle to the Zoo – 
“The tiger, tiger, they’ll clip your claws, cut your hair, make a pussy cat
 out of you Its one step from the zoo to the jungle.” (edited)

Chatbot, why be anything but wild 🙂

Or as Luciana, the unmarried lady, so full of advice, says in Shakespeare’s Comedy of Errors:

“She never reprehended him but mildly,
When he demean’d himself rough, rude and wildly.
Why bear you these rebukes and answer not?”

Two simple actions that offer lasting design integration within organizations

Illustration by Simon Oxley

At the height of the dotcom boom in 2000, I had my first opportunity to recruit and build a design team at Infosys. Hubris led me to place design on a privileged pedestal. Arrogance was part of the potent mix. We used to joke, “here comes another JIP job.” JIP expands to ‘Jazz-it-up’, the most common phrasing of a new design task. Jip was also our get-back-at-them slur, given the popular file compression term pronounced in a local accent that substitutes J for Z. Sardonic design team humour apart, the strictly transactional nature of the collaboration undid the short-lived design act, and it came a cropper.

My failure was masked by more seismic business events, like the bankruptcy of Webvan and countless other dotcom wunderkind crashes. Unfazed, though, Infosys's growth was starting to kick in from the enterprise side, as several legacy solutions based on mainframes like the AS/400, or on thick-client, middleware-based software, started leaning toward browser-based applications, for the captive audiences that power intranet traffic or as a new channel. I was presented a second chance.

2001 – A legacy client of the too-big-to-fail kind was being pitched to, and here I was, consulting onsite along with a mix of specializations and experience: architects, program managers, business analysts, engineers, developers and variations of these. All these diverse experts, along with me, would hold conversations with client representatives. I observed that the storyteller was the same. Their story and script were mostly the same, barring a few details. However, each of us on the Infosys team took away a different story. The business analyst: business processes and SLAs; the engineers: performance and non-functional requirements; the architects: the nature of the infrastructure and currently deployed stack details; and I, the users of the legacy system.

What struck me was the sheer waste of the client's time in repeating themselves. I was convinced there was a better way of managing requirements in a multi-disciplinary setting: to model requirements visually using Visio-like tools and UML-compliant symbols.

Back in Bangalore, I worked with several internal teams responsible for process and quality. Initial conversations would go like: “What? There are overlapping goals? You mean you too capture requirements? Well, unfortunately, we didn't plan for it! Can you work with the use cases instead?”

How users came to be aligned to an IT solution, instead of being an afterthought.

Soon, with examples and doggedness, I managed to convince the multi-disciplinary team to view design not as jazz, but as a process! I showed them a design process and aligned it with IT's core process. Yes, the SDLC, or software development lifecycle: the waterfall method used at that time, aided by CMM maturity assessments to plan, track and deliver quality software.

Soon we had a patent-pending software requirements capture framework called Influx.

The design process was moved up to project start from its post-use-case stage, to actually drive the formulation of use cases; the design approach, predominantly visual, endeared itself (empathy) to users much better than dry, structured, wordy templates.

The key innovation was in telling each of these diverse experts that the detail each was interested in stems from the same story but differs in granularity. Our breakthrough was in respecting these differences and nesting them: business workflows break down into more granular task flows, such that a business function such as user authentication breaks down into a user navigating through a set of screens, and each screen breaks down into performance engineering requirements on validations, server response times, and so on. All beautifully nested in a single diagram with multiple levels of zoom. In fact, it inspired me to present this vision to management by layering it on the fascinating film by American design guru Charles Eames titled ‘The Powers of Ten‘ (sponsored by IBM). As a primary contributor, I attribute the success to the mantra ‘Align and Integrate‘, the theme of this post.
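Purely as an illustration of that nesting (the levels and field names here are my own invention, not Influx's schema), a single structure can be ‘zoomed’ from business workflow down to performance requirements:

```python
# Illustrative nesting of requirements by granularity; not the actual Influx model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PerformanceRequirement:        # finest grain: validations, response times
    description: str

@dataclass
class Screen:                        # a screen the user navigates through
    name: str
    requirements: List[PerformanceRequirement] = field(default_factory=list)

@dataclass
class TaskFlow:                      # e.g. the 'user authentication' function
    name: str
    screens: List[Screen] = field(default_factory=list)

@dataclass
class BusinessWorkflow:              # coarsest grain: the business story
    name: str
    task_flows: List[TaskFlow] = field(default_factory=list)

login = TaskFlow("User authentication", [
    Screen("Login page", [PerformanceRequirement("Validate credentials in < 2 s")]),
    Screen("OTP page", [PerformanceRequirement("Resend OTP within 30 s")]),
])
workflow = BusinessWorkflow("Open a new account", [login])

def zoom(wf: BusinessWorkflow):
    # 'Powers of Ten' style walk from workflow down to performance requirements.
    for tf in wf.task_flows:
        for screen in tf.screens:
            for req in screen.requirements:
                print(f"{wf.name} > {tf.name} > {screen.name} > {req.description}")

zoom(workflow)
```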

Design Process Benefits from Align and Integrate approach

Align and Integrate worked here at the process level. What I did first was convince members that design is not a black-box activity but an explicit process that lends itself to planning and managing. Next, I examined the classic design process: understanding users and tasks, exploring design solutions, prototyping both layouts and flows, visual design, high-fidelity prototyping, and user testing/validation. With these steps, I cut the process up into well-contained steps/design activities and aligned them with the other core processes defined in software engineering. Alignment was based on when each step executes best, in which context or location, with whose inputs, with what outputs, and with what other dependencies. Note that most projects followed the waterfall model; the Agile manifesto was being drafted at the same time at The Lodge at Snowbird ski resort in the Wasatch mountains of Utah, and Rational and OO architectures were the flavour. With the well-aligned tasks representing multiple domains, we next examined the quality goals and efficiencies. It was obvious that even within certain aligned tasks, there were opportunities to integrate them to better represent and capture requirements. One example is in how workflows were captured as a sequence of actor actions in a use case, and also as a set of visible, tangible actions, where the actor is a human user, represented in swimlanes above the line of visibility. Integrating these ensured better collaboration and holistic requirements.

At another too-big-to-fail bank, my design team saw that when stakeholders were presented use cases as ordered lists of text items, these were dense to read and comprehend, resulting in approval delays. With better-aligned design integration, we were able to present the same material for approvals visually, as a prototype. To bankers this was more exciting and elicited significantly better participation. We all know that a picture is worth a thousand words, right! Lower cognitive load, in some sense.

Now that we could model requirements for design upfront, it proved to be a viable tool not just to capture design requirements, but to discover business requirements. I continued to help improve the Influx tool. The next step was to have the tool generate the English-language text of the use cases. This clearly showed the dual nature of requirements: in discussions and at elicitation it was visual, but under the hood it was XML. Post elicitation, for software engineering, it became a well-defined, structured, UML-compliant requirements document, generated from the underlying data.
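A toy sketch of that dual nature, assuming a made-up XML shape rather than Influx's actual format: the same underlying data that drives the visual model can also generate structured use-case text.

```python
# Hypothetical use-case XML -> English text generator; the XML schema is invented.
import xml.etree.ElementTree as ET

USECASE_XML = """
<usecase name="Withdraw cash" actor="Account holder">
  <step>Insert card and enter PIN</step>
  <step>Select amount within daily limit</step>
  <step>Collect cash and receipt</step>
</usecase>
"""

def generate_text(xml_source: str) -> str:
    uc = ET.fromstring(xml_source)
    lines = [f"Use case: {uc.get('name')}",
             f"Primary actor: {uc.get('actor')}",
             "Main flow:"]
    for i, step in enumerate(uc.findall("step"), start=1):
        lines.append(f"  {i}. {step.text.strip()}.")
    return "\n".join(lines)

print(generate_text(USECASE_XML))
```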

Align and Integrate at the process level worked fine. My team's effort was recognized with the Infosys Chairman's Excellence Award in 2002.

As a designer, I continued to build on this process foundation. I extended the Align and Integrate principle to resources and staffing. Here I have a confession to make: designers are not easy to manage. Perhaps it's the nature of the work and the talent. A leading Boston designer who worked on the MasterLock redesign said their team-size sweet spot is 20; beyond that it becomes unmanageable. Perhaps that is why design companies don't scale their services like software service providers do. At Infosys, I realized that staffing the exponential demand for UI design across projects was a huge task. For designers to align well with the teams, we needed them to fit well into the base organization structure. I worked with top management to create new roles in our recently acquired SAP HR system. I used its structure to define career paths and performance appraisal criteria. Compensation was pegged in line with the value that design brings to the table, plus the constraints on the supply of talent. New hires were trained on our unique design processes and artefacts, which were integrated within the overall software engineering frameworks. This ensured designers as a team remained well aligned and integrated within the overall organization. Where the first attempt, a JIP service, failed at single digits, the new approach has scaled very smartly to hundreds of designers, and the number is only growing.

To be continued…

Designing for an Authentic AI

Originally published on Medium 20th July, 2018

Mechanical Duck, built by Jacques de Vaucanson (1738, France). Source: https://commons.wikimedia.org/wiki/File:MechaDuck.png

Higher order automation as opposed to mechanical automation

During my stint as a co-founder and product manager at Bizosys (2009–2015), a company developing Hadoop-based products to manage large-scale data (structured, unstructured and time-series sensor data), I had an overwhelming moment when a machine system could learn from past data and predict future events. This was for a telecom service provider who wanted the ability to accurately predict communication tower failures. There were over a hundred parameters, ranging from the network to the fuel levels in its power generators to weather and national holidays. Remotely located towers could go down for days unattended. Initially, we tried Weka but could not get prediction accuracy beyond 55% — no great business benefit at that reliability. We then tried a self-learning machine learning program deploying a window-shifting algorithm, HotSAX, that discovers discordant patterns in data. The results were exciting, with accuracy in the high 90s. Suddenly, this opened up new opportunities for the telecom infrastructure team: they could manage their shifts better based on reliable predictions, downtime was reduced, and significant, tangible business benefits followed.
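For flavour, here is a much-simplified, brute-force version of the discord idea that HotSAX speeds up (HotSAX itself adds SAX symbolization and clever search ordering; this sketch, with made-up data, shows only the underlying notion): the discord is the subsequence whose nearest non-overlapping neighbour is farthest away.

```python
# Brute-force time-series discord search; illustrates the idea that HotSAX optimizes.
import math

def discord(series, window):
    best_idx, best_dist = -1, -1.0
    for i in range(len(series) - window + 1):
        a = series[i:i + window]
        nearest = math.inf
        for j in range(len(series) - window + 1):
            if abs(i - j) < window:            # skip self-matches (overlapping windows)
                continue
            b = series[j:j + window]
            dist = math.dist(a, b)             # Euclidean distance between subsequences
            nearest = min(nearest, dist)
        if nearest > best_dist:                # most isolated subsequence = discord
            best_idx, best_dist = i, nearest
    return best_idx, best_dist

# Made-up signal: steady readings with one anomalous burst around index 12.
signal = [5, 5, 6, 5, 5, 6, 5, 5, 6, 5, 5, 6, 20, 19, 5, 6, 5, 5, 6, 5]
print(discord(signal, window=3))               # flags the window covering the burst
```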

This sort of reliability can only be matched by humans with tacit knowledge gained from decades of experience, such as a train driver on a Southern Pacific line who has memories of record snowfalls and knows how to deal with a developing snow storm. Where a machine falls short is in the ability to predict with minimal training data. For example, in our telecom experiment, three-quarters of the data fed to the algorithms resulted in excellent prediction of the fourth quarter. Humans, on the other hand, can manage within bounded rationality. If human thought as we know it is essentially Cartesian, then our knowledge of our experiences is ultimately traceable to our knowledge of the world around us. We know that such thought leads to errors. For example, once you operate a light switch, you expect it to work the same way elsewhere. When it doesn't, we adapt to the situation or enquire into it. The difference is in our learning capacities and input conditions. This is evident in the following comparison between Mooney images and machine-based face recognition.

 
A tale of two faces!

As this Smithsonian article says, “The early Greeks and Renaissance artists had birds on their brains,” and there was always a quest for the robot. Vaucanson's mechanical (incontinent) duck of the 18th century was perhaps as awe-inspiring to audiences then as the AI-driven automation unfolding today. Until recently, automation was rule-based, at least in production. With the announcements of deep learning successes, a new era is emerging.

This brings me to the premise of this story — how do we design experiences for higher-order automation instead of mundane mechanical systems? Consider an old analogue temperature controller compared to a connected Nest device. How are we supposed to engage beyond its visible appearance and display controls? Cognitively, the old task was straightforward: decide, when in the room, how hot or cold the room should be, and turn the dial clockwise or anti-clockwise. With a connected device, there is an app that can learn from your past spins of the dial up or down, to recommend or even offer to preset the temperature via an app toast notification, sensing you are 30 minutes away from the air conditioning system. The device has already contributed to the larger big data pool; analysis of consumption patterns feeds utility companies predicted loads, leading them to control the sluice gates of hydroelectric dams to produce power for the consumer, who is expected to turn on the AC to a comfortable 24 degrees in 30 minutes.

When you see the capabilities of advancing technology such as New Zealand-based Soul Machines, the technology is not just fascinating; it resets our relationship with machines. Just as Ava has trained itself, or with the help of its creators, to mimic human expressions, would the machine be ‘aware’ of its learning? Like learning to factor in the response or expression in a conversation and changing how it smiles the next time it sees the same person — a man, woman or child? Would it also smile at the pet cat (which an overzealous robot might see both as a pet and as food) in the same manner as it would at a human? Would it spook the cat or dog with its smile, and realize “uh-oh?” The larger question is how much of this ‘cultural learning’ the machine picks up. How would a driverless car behave in traffic in Arizona, or say in Bangalore, India (where I am from)? Would the driverless car honk like drivers do in India, for the heck of it? Is honking a cultural thing? Does the machine learn these nuances?

Creating Ava — Soul Machines

As a user experience designer trained to adopt a user-centered approach, and I do, I ask: so, which user center am I designing for? The user as an individual, the user as part of a community, as part of the larger ecosystem, or as a speck in the biome? Our knowledge has advanced thanks to cognitive neuroscience, driven by fMRI insights, to map human cognition better than ever before. What qualities do I care for beyond usability? What matters when it comes to the user's relationship to the ecosystem? Transcendence? Uncertainty? Can AI, with its highly scalable, high-performance processing of vast data, help support humans with suggestions in these complex situations?

 
 
There are multiple degrees to the user center

New technologies have the potential to trigger these thoughts, while businesses attempt to balance growth and yet remain sustainable. Especially platform businesses that service connected consumers' needs, connecting producers to them via a platform infrastructure. The UX designer needs to work closely with technologists (a point I have underscored in another story, on the “Future of UX”) to determine where to anchor the user experience in a complex, interlinked, connected world.


Nir Eyal ~ “behavioral designer, at the intersection of psychology, technology, and business.”

Langdon Winner ~ “attempts to fix and humanize the internet usually reflect the same consumerism, narcissism & profit seeking that are the root of the problem”

Authenticity and Free will

We want machines to learn in order to develop better products and technology (irrespective of whether that aids consumerist growth), or to understand human psychology (irrespective of whether it leads to narcissistic behaviors online), or to enhance business productivity (primarily as a profit-seeking, YoY-growth measure). AI and technology here are cast in the role of a mere tool, not the partner they ought to be.

Nir Eyal and Langdon Winner are two diverse experts I respect and am aware of as a designer: attempting to design new behaviours, yet not being naive about the responsibility to be shouldered while harnessing technology. As much as user research and ethnography feed my creative highs when I know which interface elements to tweak above the line of visibility, being bold enough to recognize that the underlying systems may not be apolitical when deployed is a challenge to comprehend. More here, where Langdon Winner enquires, “Do Artifacts Have Politics?”

As in the decades building up to the newer AI-based solutions, we have imagined user experiences in the same rule-based manner across collaborating experts (designers, engineers, technologists, marketeers, product managers), focusing on transactions!

 
The Half Full Cup — remove noisy information before analysis and design

Consider flipping this.

Gone are the days of limited computing power. Gone are the days of siloed organizations and consumers. We have come far from the days when Bill Gates proclaimed that 640K ought to be enough! While technology has advanced beyond even Moore's Law, we retain those Gatesian heuristics. We look at data as having noise (incomplete data, bad data, and so on), which in the past would have crashed rigid, rule-based computer systems. Remember the blue screen of death!

 
Dunn/Belnap multi-valued logic

After all, what is noisy data? Is it like the proverbial weed, i.e., a plant without a benefit for human consumption? I find succour in political theory for such behaviours; specifically, Dunn/Belnap's multi-valued logic. A voter in an election could be voting in multiple ways beyond the boolean for or against! What we refer to as bad data or noisy data is likely to hold rich information. Political, fuzzy, inconsistent, outlier tidbits of data, perhaps!
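A tiny sketch of Dunn/Belnap's four truth values applied to ‘noisy’ records (the voter example and signal encoding here are mine): instead of forcing every data point into true/false, contradictory or missing evidence gets its own value and is kept for analysis rather than dropped.

```python
# Dunn/Belnap four-valued logic: True, False, Both (contradiction), Neither (no info).
from enum import Enum

class Belnap(Enum):
    TRUE = "supports"
    FALSE = "opposes"
    BOTH = "contradictory evidence"
    NEITHER = "no evidence"

def classify(signals):
    """signals: booleans gathered about, say, a voter's support for a candidate."""
    has_for, has_against = any(signals), any(not s for s in signals)
    if has_for and has_against:
        return Belnap.BOTH        # noisy, inconsistent record: still informative
    if has_for:
        return Belnap.TRUE
    if has_against:
        return Belnap.FALSE
    return Belnap.NEITHER         # missing data rather than 'bad' data

print(classify([True, True]))          # Belnap.TRUE
print(classify([True, False, True]))   # Belnap.BOTH -> keep it, don't discard as noise
print(classify([]))                    # Belnap.NEITHER
```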

 
Not Boolean > How the swing voter went extinct by Alvin Chang. Source: https://www.vox.com/policy-and-politics/2016/11/4/13496688/swing-voters-dying-cartoon

Why not let machine learning differentiate good from bad data? What are the opportunities for technology and design? That opportunity lies in the half-empty cup of data that we traditionally let drop to the floor!

 
The Half Empty Cup — let the machine learn to tell between Good Vs Bad/Noisy data. Let AI generate Anticipatory user interfaces. Think of them as A/B tests on steroids.

In fact, architecturally, as we move from monolithic systems to microservices-based systems, there is an opportunity for us to use machine learning and information discovery automation (agents) to mash up fascinating views of information, presented within accepted aesthetic conventions and appealing to common sensibilities, as machine-generated user experiences!

The key, I believe, lies in how we decompose the functional elements, which I construct as a diagonal that slices the vertical stack embedding the system layer, the interaction layer and the user-intent layer.

 
Decomposing Micro Interactions to be served by underlying micro services.

Assuming we progress to this scenario, UX designers and engineers have the opportunity to look at data as well as user experiences holistically. We could redesign the five-star rating/feedback mechanism to free it from its transactional moorings.

 
Data driven, AI driven technology can lead to more wholesome, personalized user experiences provided it makes sense of all the data

Rhetorically, one may ask: are such machine-generated experiences authentic? Can the mere mimicry of human expressions, like Soul Machines' Ava, create lasting trust?

Pause and ask: is there something synthetic, unnatural, about such computed personalization? Is such personalization actually benevolent? Are we allowing machines to manipulate us into believing it is our free will that drives us? Is there an eerie suspicion of a manipulative entity or organization with an agenda? Is the intent behind personalization authentic, and not fake?

Designing for technology and user experiences needs to weigh in on the output of AI, how it is tuned, how it learns. AI-generated UX builds first on trust, wherein the user in some manner places trust in the data he or she unlocks. Such data is authentic since it flows from the user to the AI system. It is from that base that the AI generates UX that generates delight. Even if the UX disappoints, the core trust still remains. It is authenticity flowing from the sense that a user empowered the AI system. However, technology can only go so far. As Descartes points out, free will is “the ability to do or not do something” (Meditation IV) and “the will is by its nature so free that it can never be constrained” (Passions of the Soul, I, art. 41). But I suppose that as long as the human consumer of tech-served choices believes they are not interfering with her free will, it should be OK.

 
I choose to do or not do something — is there a tilt? is the salt enough?

A light human touch makes a thing personal. Authenticity is further cemented with the deft user touch, or tweak, to personalize. When untouched by the user, it is incomplete, impersonal, and does not empower human free will. The role of UX for AI is a little like the light touch one gives to set right a tilted painting, or that little dash of extra salt to a dish! Such actions make it a signature something, very personal: an expression of human free will.

Design will stay relevant to celebrate that need: free will. UX designers recognize that and incorporate it, irrespective of the process used to discover it. Assume AI builds on trust where possible, to learn and generate delightful UX. Assume the UX is authentic because it allowed the user to configure or change it. Even if the human finds it authentic, does the machine know? Algorithms that interpret this and feed it back, representing it as new learning, will be key for scale. UX design needs to train ML for such representational feedback.

Error handling in AI-driven systems, if such a thing is possible with automation

Lastly, as a design practitioner in the big data space, another aspect of AI besides authenticity that I feel UX designers should focus on is error handling. If processing for choice using multi-valued logic allows the automation of user interfaces, then we similarly need to diversify the post-system response, and the feedback to and from users. Errors such as ‘404 Page Not Found’ belong to a binary setting; in our AI-driven world, there is room for error that needs to be flagged. User interface designers and information architects need to devise fresh UI approaches to flag the false positives and false negatives that an AI-based system may throw up. This will require the user experience to elicit users' critical thinking, so that they become aware of issues and flag them.
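One hypothetical way to read that in code (the thresholds and field names are invented): instead of the binary ‘found or 404’, the system surfaces its own confidence so the UI can ask the user to confirm, or to flag a possible false positive or negative.

```python
# Hypothetical sketch: grading an AI response so the UI can elicit critical thinking.
from dataclasses import dataclass

@dataclass
class ModelAnswer:
    text: str
    confidence: float   # assumed to come from the underlying model, 0.0 - 1.0

def present(answer: ModelAnswer) -> str:
    if answer.confidence >= 0.9:
        return answer.text                                   # show plainly
    if answer.confidence >= 0.6:
        return f"{answer.text}\n(I'm not fully sure - does this look right? [Yes] [Flag])"
    # Low confidence: say so explicitly rather than emitting a confident wrong answer.
    return "I could not answer this reliably. [Show my best guess] [Ask a human]"

print(present(ModelAnswer("Your order ships Tuesday.", 0.95)))
print(present(ModelAnswer("Your order ships Tuesday.", 0.7)))
print(present(ModelAnswer("Your order ships Tuesday.", 0.4)))
```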

 
How can UX incorporate behavioural cues that trigger critical thinking — to detect errors and act to prevent them or flag them — Immensely useful in driverless car ecosystems, fake news publishing

These two, the authenticity of AI-generated UX and error handling in unsupervised ML systems, and how UX designers address them, will bridge what I call the last-mile delivery of UX and will be the pivots in UX for AI — less visual and more cerebral!

When Less should be More!

Photo by Trym Nilsen on Unsplash

The VP of Design at Uber was mentioned by Fast Company recently as saying that his aim in 2018 “is to introduce a more empathic and considered approach to the company and the product.” The emphatic ‘more’ on empathy triggered my interest. I am aware that as I pick phrases, the original context and intent may get lost!

More empathy is bewildering. Let us examine it critically! A child at a buffet stocking up only on desserts is an indulgence we may empathize with while justifying it: let the kid have a break! Or: poor child, a little fun once in a while won't hurt! Then there is the non-indulgent empathy that goes deeper, saying I care for you and your well-being. You may have a scoop of butterscotch ice cream with caramel topping, but first finish the salad on your plate.

When we design products and services and at the same time traverse complex emotive zones, I wonder if there is a correlation between creative imagination and empathy. Do we need to artificially pump up empathy to get into a creative stupor, as Aldous Huxley might suggest; to rally the creative forces within and unleash them on the problem at hand, to deliver the oooh of a user experience? In the case of business problem solving, it becomes a rational justification of what seems the naturally right thing to do. Be good! A self trip in an enhanced state of happiness and empathy, where you readily give and accept free hugs because you believe; with the aid of substances perhaps, or with a design thinking process instead, which can also render one euphoric!

What distinguishes ‘caring-empathy’ is that it comes from within, naturally, although that sounds mystical and is not what Plato would have expounded. Before we sort ‘artificial-empathy’ from ‘caring/genuine-empathy’, let us examine what role empathy plays. Is empathy a means to acquire knowledge, or is empathy about deploying knowledge (and logic) to experience emotion? (I refer to this post by Betty Stoneman — Plato's Empathy? Qualifying the Appetitive Aspect of Plato's Tripartite Soul.) User researchers and design thinking practitioners should know better, or at least be aware of their intent while investigating.

This tautological reference to a caring-empathy is an important distinction, especially as many are starting to grow weary of the noun empathy, thanks to design thinking's drumming, exhorting executives to turn on empathy… at least for the duration of the DT workshop they participate in. Empathy captured in a complex array of multi-coloured Post-its! (I too have indulged in these rituals.)

More worrisome is the commodification of empathy, visible in the media, where we get a daily dose anytime a sensational event is reported. Fatimah Suganda, a researcher from Indonesia, pointed out the tradeoff between media striving for a readership/audience boost versus informational and educational storytelling in her piece “The Commoditization of Empathy in Media Coverage on Engeline's Death.” Ironically, it is this very approach to raising empathy that could lead to its dysfunction! I sense I am generalizing, but it is nevertheless a perspective.

So, I ask: Does ‘Artificial empathy’ lead to indulgent design, while ‘Caring empathy’ delivers good design?

What is the Future of UX Design?

This topic was triggered over here https://www.quora.com/What-is-the-future-of-UX-UI-designer 

Image courtesy: https://unsplash.com/@garidy_sanders 

I am pasting the same answer here for convenience. I have consciously left UI out and am sticking to just UX. I have another post on Quora about this and the UX vs. UI discussion.

My answer to this question is in two parts — a near future and a long-term future.

Short term (up to and around 2020) — a very bright future. Demand for pixel-perfect, usable and delightful UX is high, especially with the accelerating digital transformation underway globally. Evidence to support this is in this graph of top design-driven companies against the whole S&P index –

 

Source: Job Trends Report: The Job Market for UX/UI Designers

If the topline growth of marquee brands is significant, design is today a buzzword among other companies too, who are often guided by the leaders. Coupled with digital transformation, where information technology is ubiquitous across most business processes, design is a key skill that teams within companies and service providers seek.


Long-term (beyond 2020) — This is the interesting one. If you subscribe to Clayton Christensen’s disruption model

Source: What Is Disruptive Innovation?

Then UI generators such as https://thegrid.io/ are the disruptors that will likely become the norm (read this piece on Wired about websites that design themselves). Handcrafted UI and UX design will likely transform into the curation and product management aspects of UX.

IMO, the future of UX is likely to change — self-taught machines may soon iterate 1,000 times faster and produce far greater variety than humans ever have. In such a scenario, whenever that happens, 10 or 20 years from now, UX design education and training need to transform.

If UX design in future were to include more formal studies (no pun there) viz.

  1. Study of cognitive neurosciences and human behaviour
  2. Study of ethics
  3. Product management — to envision technology-aided interfaces stemming from AI advances, generated and unsupervised ML-based system interactions, predictive UX, personalized robotic services, and similar emerging tech.

In conjunction with this, I predict that engineering performance will come to the fore and UX designers will work closely with technical architects, and together they will overshadow the current marketing-driven/business agenda that is at the core of decision making. My premise is that process is given undue importance over design action. Agile, Design Thinking, etc. will have to give way to design execution. I do not mean the process will go away, but it will move below the hood and become intrinsic to the flexible work culture of the digital information age. As for business strategy, its agility will be about how its customers define it for them… not in some wood-panelled boardroom or in digital wall pods that aid decision making. Agility will be about how plugged in businesses are to their users, without restrictive filters. Agility is not just agile as a process, but an attitude. Instead of insights gathered out of noise-free data, the effort will be to remove noise from the analytics. Decision-making power will shift to customers.

It is against such a scenario, driven by the large-scale deployment of AI and related tech, that the future of UX designers will unfold, and with it the roles they will play.


Postscript:
And this, the impact of AI on UX design, has been discussed a lot. So has the example of The Grid, which I refer to as a Clayton Christensen disruptive entrant. In some ways, The Grid is the hero (like the mini steel plants and minicomputers). Here is one great piece from UX Collective by Fabricio Teixeira: “How AI has started to impact our work as designers.” Fabricio is bang on about the impact of AI and that it is well suited to chores like cropping images, and maybe sorting and tagging them. However, I believe this is the sort of productivity we will see in the short term, not wholesale, but in those large agencies with large, stable accounts and steady budget flows. My point is that in the longer term, as the technologies mature, they will be capable of doing good design, maybe with 1000x more iterations: A/B test, iterate again and publish widely. During the early days of the WWW, we had designers handcrafting attractive banner ads. In the future, these may just be the output of an AI-driven ad-serving platform that creates a campaign, negotiates and buys spots, runs the campaign, learns from it and repeats. Of course, the path to that has its trials and tribulations, like Microsoft's Tay! These are the patchy, early-version prototypes that will eventually disrupt.

Instead, UX professionals in future need not be limited to UX, or the stuff above the line of visibility, which machines may replace. They ought to work closely with product managers and engineers to reimagine product experiences.

 
source: https://upload.wikimedia.org/wikipedia/commons/d/dd/Star_rating_1_of_5.png

 

For example, let us consider the common, abstract five-star rating feedback method. This UX is, in its thinking, a legacy of OLTP (transaction) systems. Feedback is captured in a manner that suits how it is processed, which is rule-based and rigid. Go with me and imagine how this feedback mechanism could be overhauled, assuming the rigid rules are replaced by self-learning AI systems.

 
A concept for an AI-based feedback system where FEEDBACK is a RELATIONSHIP and not a TRANSACTION. What if AI inverts feedback from an explicit, overt system to an implicit, covert approach, wherein the AI system observes and learns the user's relationship with products or services, in a context it determines as appropriate, capturing the feedback as a continuous ‘relationship’ with the product/service rather than a ‘transaction’ with the product? This image is only a conceptual illustration wherein Feedback = relationship is constructed and changes with time. There can be an aggregate view or splits to drill down. The user has control over which view, or all views, to share. This is an example of how UXers can question the norm and reimagine the product to address the power of new technologies, while allowing the same system to focus on the chores of generating ‘designs’ for the UXer to choose from. In that sense, the future UX designer would be part curator, part designer. The distinction between what a designer does and what AI does is likely to be between rich organic memories (human) and artificial rules/graphs (AI). Those memories will be our strength and guide our hand and eye.
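To make the ‘relationship, not transaction’ idea concrete, a hypothetical sketch (the signal names and weights are invented): feedback becomes a time series of observed interactions that can be aggregated or drilled into, and shared only if the user chooses.

```python
# Hypothetical 'feedback as relationship' sketch: a time series instead of one rating.
from dataclasses import dataclass
from datetime import date

@dataclass
class Interaction:
    day: date
    signal: str      # e.g. "reordered", "returned item", "ignored recommendation"
    weight: float    # positive or negative contribution, invented scale

history = [
    Interaction(date(2018, 1, 5), "reordered", +1.0),
    Interaction(date(2018, 3, 2), "returned item", -0.5),
    Interaction(date(2018, 6, 20), "reordered", +1.0),
]

def relationship_score(interactions, as_of: date) -> float:
    # Recent interactions count more: simple linear decay over roughly one year.
    score = 0.0
    for it in interactions:
        age_days = (as_of - it.day).days
        score += it.weight * max(0.0, 1 - age_days / 365)
    return round(score, 2)

# Aggregate view; the user decides whether to share this or the underlying splits.
print(relationship_score(history, as_of=date(2018, 7, 1)))
```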

As you leave, read this brilliant piece by Mariana Lin on the distinction between an artificial persona and a human persona.

 

Caveat: I admit to an overly optimistic and exuberant assessment here, and this is an area of speculation. I am informed by my own 7-year journey as a designer co-founder at a Hadoop-based big data startup (see: ramblings from a failed startup journey).

Data Fracking

In 2006, Clive Humby drew the analogy between crude oil and data in a blog piece titled “Data is the new Oil,” which has since captured the imagination of several commentators on big data. No one doubts the value of the ‘resource’; what varies is the effort required to extract it. During a discussion with the CIO of a billion-dollar company, he indicated that there is a lot of data, but can you make it “analyzable”?

Perhaps he was referring to the challenges of dealing with unstructured data in a company's communication and information systems, besides the structured data silos that are also teeming with data. In our work with a few analytics companies, we found validation of this premise. Data in log files, PDFs, images, etc. is one part of it. There is also the deep web, that part of data not readily accessible by googling, or, as this HowStuffWorks article puts it, ‘hidden from plain site.’

Bizosys's HSearch is a Hadoop-based search and analytics engine that has been adapted to deal with this challenge faced by data analysts, commonly referred to as data preparation or data harvesting. If finding value in data indeed poses these challenges, then Clive's analogy to crude oil is valid. Take a look at our take on this: if today shale gas extraction represents the next frontier in oil extraction, employing a process known as hydraulic fracturing, or fracking, then our equivalent is ‘data fracking’, a process of making data accessible.

The Origins of Big Data

While sharing our thoughts on big data with our communications team, we were storytellers. The story around big data was impromptu! We realized the oft-quoted Volume, Variety and Velocity can actually be mapped to Transactions, Interactions and Actions. I have represented it using a Visual.ly infographic background.

Here is a summary –

“The trend we observe is that the problems around big data are increasingly being spoken about more in business terms, viz. Transactions, Interactions, Actions, and less in technology terms, viz. Volume, Variety, Velocity. These two represent complementary aspects and, from a big data perspective, promise better business-IT alignment in 2013, as business gets hungrier still for more actionable information.”

Volume – Transactions

More interestingly, as in a story, it flowed along a timeline, and we realized that big data first appears on the scene as an IT challenge to grapple with when Volume happens. The volume comes either from a growing rate of transactions, sometimes several hundred per second, or from billions of database records that must be processed in a single batch, where the volume is multiplied by newer, more sophisticated models being applied, as in the case of risk analysis. Big data appears as a serious IT challenge, and a project to deal with the associated issues around the scale and performance of large volumes of data. Typically, these are operational in nature and internal-facing.

These large volumes are often dealt with by relying on public cloud infrastructure such as Amazon, Rackspace, Azure, etc., or on more sophisticated solutions involving ‘big data appliances’ that combine terabyte-scale RAM at the hardware level with in-memory processing software from large companies such as HP, Oracle, SAP, etc.

Variety – Interactions

The next level of big data problems surfaces when dealing with external-facing information arising out of Interactions with customers and other stakeholders. Here one is confronted with a huge variety of information, mostly textual, captured from customers' interactions with call centers, emails, or the metadata from these, including videos, logs, etc. The challenge is in the semantic analysis of huge volumes of text to determine user intent or sentiment, project brand reputation, and so on. However, despite the ability to process this volume and variety, getting a reasonably accurate measurement that is ‘good enough’ still remains a daunting challenge.

Value – Transactions + Interactions

The third level of big data appears when some try to make sense of all the data that is available (structured and unstructured, transactions and interactions, current and historical) to enrich the information: pollinating the raw text by extracting business entities, linking them to richer structured data, and linking to yet other sources of external information, to triangulate and derive a better view of the data for more Value.

Velocity – Actions

And finally, we deal with the Velocity of information as it happens. This could be for obvious uses like fraud detection, but also to surface actionable insights before the information goes stale. It requires all aspects of big data to be addressed as the data flows, within a highly crunched time frame. For example, an equity analyst or broker would like to be informed about trading anomalies or patterns detected as intraday trades happen.