Designing for an Authentic AI

Originally published on Medium 20th July, 2018

Mechanical Duck, built by Jacques de Vaucanson (1738, France) Source:

Higher order automation as opposed to mechanical automation

During my stint as a co-founder and product manager at Bizosys (2009–2015), a company developing Hadoop-based products to manage large-scale data (structured, unstructured and time-series sensor data), I had an overwhelming moment when a machine system learned from past data and predicted future events. This was for a telecom service provider who wanted the ability to accurately predict communication tower failures. There were over a hundred parameters, ranging from network health to the fuel levels in the towers' power generators, to weather and national holidays. Remotely located towers could go down for days unattended. Initially we tried Weka, but were unable to get prediction accuracy beyond 55%: no great business benefit at that reliability. We then tried a self-learning machine learning program deploying a window-shifting algorithm, HotSAX, which discovers discordant patterns in data. The results were exciting, with accuracy in the high 90s. Suddenly, this opened up new opportunities for the telecom infrastructure team: they could plan their shifts around reliable predictions, downtime was reduced, and the business benefits were significant and tangible.
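The discord-discovery idea behind HotSAX can be sketched in a few lines. The brute-force version below is my own illustrative reconstruction, not the code we shipped: the subsequence whose nearest non-overlapping neighbour is farthest away is the most anomalous window; HotSAX's contribution is a SAX-based candidate ordering that finds the same discord much faster.

```python
import numpy as np

def find_discord(series, window):
    """Brute-force discord discovery: return the start index of the
    subsequence whose nearest non-overlapping neighbour is farthest
    away (the most anomalous window in the series)."""
    n = len(series) - window + 1
    subs = np.array([series[i:i + window] for i in range(n)])
    best_idx, best_dist = -1, -1.0
    for i in range(n):
        # distance to the nearest window that does not overlap window i
        nn = min(np.linalg.norm(subs[i] - subs[j])
                 for j in range(n) if abs(i - j) >= window)
        if nn > best_dist:
            best_idx, best_dist = i, nn
    return best_idx, best_dist

# A clean periodic signal with an injected fault: the discord lands on it.
t = np.linspace(0, 10 * np.pi, 500)
signal = np.sin(t)
signal[250:262] += 2.0                  # simulated sensor glitch
idx, dist = find_discord(signal, 30)    # idx falls inside the glitch region
```

In the tower-failure case, the same idea applied to sliding windows of telemetry surfaced the discordant stretches that preceded outages.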

This sort of reliability can otherwise only be matched by humans with tacit knowledge gained from decades of experience, such as a train driver on a Southern Pacific line who remembers record snowfalls and knows how to deal with a developing snow storm. Where a machine falls short is in its ability to predict with minimal training data. In our telecom experiment, for example, it took three-quarters of the data fed to the algorithms to produce excellent predictions for the final quarter. Humans, on the other hand, can manage within bounded rationality. If human thought as we know it is essentially Cartesian, then our knowledge of our experiences is traceable ultimately to our knowledge of the world around us. We know that such thought leads to errors. For example, once you operate a light switch, you expect it to work the same way elsewhere. When it doesn't, we adapt to the situation or enquire into it. The difference lies in our learning capacities and input conditions. This is evident in the following comparison between Mooney images and machine-based face recognition.

A tale of two faces!

As this Smithsonian article says, “The early Greeks and Renaissance artists had birds on their brains,” and there was always a quest for the robot. Vaucanson's mechanical (incontinent) duck of the 18th century was perhaps as awe-inspiring to audiences then as the AI-driven automation unfolding today. Until recently, automation, at least in production, was rule-based. With the successes announced in deep learning, a new era is emerging.

This brings me to the premise of this story: how do we design experiences for a higher order of automation, rather than for mundane mechanical systems? Consider an old analogue temperature controller compared to a connected Nest device. How are we supposed to engage beyond its visible appearance and display controls? Cognitively, the old task was straightforward: decide, while in the room, how hot or cold it should be, and turn the dial clockwise or anti-clockwise. With a connected device, an app can learn from your past spins of the dial and recommend, or even offer to preset, the temperature via a toast notification, sensing you are 30 minutes away from the air-conditioning system. Meanwhile, it has already contributed to the larger big data pool; analyses of consumption patterns feed utility companies with predicted loads, which in turn govern the sluice gates of hydroelectric dams to produce power for the consumer, who is expected to turn on the AC to a comfortable 24 degrees in 30 minutes.

When you see the capabilities of advancing technology such as that of New Zealand-based Soul Machines, technology is not just fascinating; it resets our relationship with machines. Just as Ava has trained itself, or with the help of its creators, to mimic human expressions, would the machine be ‘aware’ of its learning? Like learning to factor in the response or expression in a conversation, and changing how it smiles the next time it sees the same person, whether man, woman or child? Would it also smile at the pet cat (which overzealous robots might see as a pet, or as food) in the same manner as it would at a human? Would it spook the cat or dog with its smile, and realize “uh-oh”? The larger question is how much of this ‘cultural learning’ the machine picks up. How would a driverless car behave in traffic in Arizona, or say in Bangalore, India (where I am from)? Would the driverless car honk like they do in India, for the heck of it? Is honking a cultural thing? Does the machine learn these nuances?

Creating Ava — Soul Machines

As a user experience designer trained to adopt a user-centered approach (and I do), I ask: which user center am I designing for? The user as an individual, the user as part of a community, as part of the larger ecosystem, or as a speck in the biome? Our knowledge has advanced thanks to cognitive neuroscience, driven by fMRI insights that map human cognition better than ever before. What qualities do I care for beyond usability? What matters when it comes to the user's relationship to the ecosystem? Transcendence? Uncertainty? Can AI, with its highly scalable, high-performance processing of vast data, help support humans with suggestions in these complex situations?

There are multiple degrees to the user center

New technologies have the potential to trigger these thoughts, while businesses attempt to balance growth and yet remain sustainable. Especially platform businesses, which serve connected consumers by connecting producers to them via a platform infrastructure. The UX designer needs to work closely with technologists (a point I have underscored in another story on “Future of UX”) to determine where to anchor the user experience in a complex, interlinked, connected world.

Nir Eyal ~ “behavioral designer, at the intersection of psychology, technology, and business.”

Langdon Winner ~ “attempts to fix and humanize the internet usually reflect the same consumerism, narcissism & profit seeking that are the root of the problem”

Authenticity and Free will

We want machines to learn in order to develop better products and technology (irrespective of whether that aids consumerist growth), or to understand human psychology (irrespective of whether it leads to narcissistic behaviours online), or to enhance business productivity (primarily as a profit-seeking, year-on-year growth measure). AI and technology are here cast in the role of a mere tool, not the partner they ought to be.

Nir Eyal and Langdon Winner are two diverse experts I respect and am aware of as a designer attempting to design new behaviours, while trying not to be naive about the responsibility to be shouldered when harnessing technology. User research and ethnography feed my creative highs when I know which interface elements to tweak above the line of visibility; yet being bold enough to recognize that the underlying systems may not be apolitical when deployed is a harder challenge to comprehend. More here, where Langdon Winner enquires, “Do Artifacts Have Politics?”

In the decades building up to the newer AI-based solutions, we imagined user experiences in the same rule-based manner across collaborating experts (designers, engineers, technologists, marketers, product managers), focusing on transactions!

The Half Full Cup — remove noisy information before analysis and design

Consider flipping this.

Gone are the days of limited computing power. Gone are the days of siloed organizations and consumers. We have come far from the days when Bill Gates reportedly proclaimed that 640K ought to be enough! While technology has advanced beyond even Moore's Law, we retain those Gatesian heuristics. We treat data as having noise (incomplete data, bad data, and so on) which in the past would have crashed rigid, rule-based computer systems. Remember the blue screen of death!

Dunn/Belnap multi-valued logic

After all, what is noisy data? Is it like the proverbial weed, i.e. a plant without a benefit for human consumption? I find succour for such behaviours in political theory, specifically in Dunn/Belnap multi-valued logic. A voter in an election could be voting in multiple ways beyond the boolean for or against! What we refer to as bad or noisy data is likely to hold rich information: political, fuzzy, inconsistent, outlier tidbits of data, perhaps!
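Belnap's four values can be encoded compactly as the evidence a system has been “told”: a proposition may be told true, told false, both, or neither. A minimal sketch (my own encoding, not drawn from any particular library):

```python
# Belnap/Dunn four-valued logic, encoded as sets of evidence:
# 't' = told true, 'f' = told false.
T, F = frozenset('t'), frozenset('f')
BOTH, NEITHER = frozenset('tf'), frozenset()

def AND(x, y):
    return frozenset(('t' if 't' in x and 't' in y else '') +
                     ('f' if 'f' in x or 'f' in y else ''))

def OR(x, y):
    return frozenset(('t' if 't' in x or 't' in y else '') +
                     ('f' if 'f' in x and 'f' in y else ''))

def NOT(x):
    # negation swaps the evidence: told-true becomes told-false
    return frozenset(('t' if 'f' in x else '') + ('f' if 't' in x else ''))

# A 'voter' can be for AND against (contradictory evidence), or silent:
assert AND(BOTH, T) == BOTH       # conflict survives conjunction
assert OR(NEITHER, F) == NEITHER  # silence is not a 'no'
assert NOT(BOTH) == BOTH          # negating a conflict is still a conflict
```

The point for data pipelines: a record can carry contradictory or absent signals without being coerced into true/false and discarded as noise.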

Not Boolean > How the swing voter went extinct by Alvin Chang. Source:

Why not let machine learning differentiate good from bad data? What are the opportunities for technology and design? That opportunity lies in the half-empty cup of data that we traditionally let drop to the floor!

The Half Empty Cup — let the machine learn to tell between Good Vs Bad/Noisy data. Let AI generate Anticipatory user interfaces. Think of them as A/B tests on steroids.

In fact, architecturally, as we move from monolithic systems to microservices-based systems, there is an opportunity to use machine learning and information-discovery automation (agents) to mash up fascinating views of information, presented within accepted aesthetic conventions and appealing to common sensibilities, as machine-generated user experiences!

The key, I believe, lies in how we decompose the functional elements, which I construct as a diagonal that slices the vertical stack embedding the system layer, interaction layer and user-intent layer.

Decomposing Micro Interactions to be served by underlying micro services.

Assuming we progress to this scenario, UX designers and engineers have the opportunity to look at data, as well as user experiences, holistically. We could redesign the five-star rating/feedback mechanism to free it from its transactional moorings.

Data driven, AI driven technology can lead to more wholesome, personalized user experiences provided it makes sense of all the data

Rhetorically, one may ask: are such machine-generated experiences authentic? Can the mere mimicry of human expressions, like Soul Machines' Ava, create lasting trust?

Pause and ask: is there something synthetic, unnatural, about such computed personalization? Is such personalization actually benevolent? Are we allowing machines to manipulate us into believing it is our own free will that drives us? Is there an eerie suspicion of a manipulative entity or organization with an agenda? Is the intent behind personalization authentic, and not fake?

Designing for technology and user experiences needs to weigh in on the output of AI: how it is tuned, how it learns. AI-generated UX builds first on trust, wherein the user in some manner places trust in the data he or she unlocks. Such data is authentic since it flows from the user to the AI system. It is from that base that the AI generates a UX that generates delight. Even if the UX disappoints, core trust still remains: an authenticity flowing from the sense that the user empowered the AI system. However, technology can only go so far. As Descartes points out, free will is “the ability to do or not do something” (Meditation IV) and “the will is by its nature so free that it can never be constrained” (Passions of the Soul, I, art. 41). But I suppose that as long as the human consumer of tech-served choices believes it is not interfering with her free will, it should be OK.

I choose to do or not do something — is there a tilt? is the salt enough?

A light human touch makes a thing personal. Authenticity is further cemented with the deft user touch, the tweak to personalize. When untouched by the user, it is incomplete, impersonal, and does not empower human free will. The role of UX for AI is a little like the light touch one gives to set right a tilted painting, or that little dash of extra salt in a dish! Such actions make something a signature, very personal: an expression of human free will.

Design will stay relevant to celebrate that need: free will. UX designers recognize it and incorporate it, irrespective of the process used to discover it. Assume AI builds on trust where possible, learning to generate delightful UX. Assume the UX is authentic because it allowed the user to configure or change it. Even if the human finds it authentic, does the machine know? Algorithms that interpret this and feed it back, representing it as new learning, will be key for scale. UX design needs to train ML for such representational feedback.

Error handling in AI driven systems, if such a thing is possible with automation

Lastly, as a design practitioner in the big data space, the other aspect of AI besides authenticity that I feel UX designers should focus on is error handling. If processing for choice using multi-valued logic allows the automation of user interfaces, then we similarly need to diversify system responses and the feedback to and from users. An error such as 404 Page Not Found is a binary outcome; in our AI-driven world, there is room for error that needs to be flagged. User interface designers and information architects need to devise fresh UI approaches to flag the false positives and false negatives that an AI-based system may throw up. This will require the user experience to elicit users' critical thinking, so they are aware of issues and can flag them.

How can UX incorporate behavioural cues that trigger critical thinking — to detect errors and act to prevent them or flag them — Immensely useful in driverless car ecosystems, fake news publishing
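One concrete, entirely hypothetical shape such an interface policy could take: route each model prediction by its confidence, so that only high-confidence results are acted on silently, while uncertain ones are surfaced with a confirm-or-flag affordance that invites exactly the critical thinking described above. The thresholds and labels here are illustrative assumptions, not an established pattern.

```python
from dataclasses import dataclass

@dataclass
class UiResponse:
    action: str   # 'apply', 'suggest' or 'ask'
    message: str

def route_prediction(label: str, confidence: float,
                     act_at: float = 0.95, suggest_at: float = 0.70) -> UiResponse:
    """Tiered response instead of a binary outcome: possible false
    positives become confirmable suggestions, and the user's flags
    feed back as labelled training data."""
    if confidence >= act_at:
        return UiResponse('apply', f'Applied: {label}')
    if confidence >= suggest_at:
        return UiResponse('suggest', f'Looks like {label}. Confirm or flag?')
    return UiResponse('ask', f'Not sure. Is this {label}?')

print(route_prediction('duplicate invoice', 0.82).action)   # 'suggest'
```

The design choice is that the middle tier does double duty: it keeps the human in the loop for ambiguous cases and simultaneously harvests the corrections the model needs.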

These two concerns ~ the authenticity of AI-generated UX and error handling in unsupervised ML systems ~ and how UX designers address them will bridge what I call the last-mile delivery of UX. They will be the pivots in UX for AI: less visual and more cerebral!

What is the Future of UX Design?

This topic was triggered over here 

Image courtesy: 

I am pasting the same answer here for convenience. I have consciously left UI out and am sticking to just UX. I have another post on Quora on this and the UX vs. UI discussion.

My answer to this question is in two parts — a near future and a long-term future.

Short term (up to and around 2020): a very bright future. Demand for pixel-perfect, usable and delightful UX is high, especially with the digital transformation accelerating globally. Supporting evidence is in this graph of top design-driven companies against the S&P index:


Source: Job Trends Report: The Job Market for UX/UI Designers

Top-line growth at these marquee brands is significant, and today design is a buzzword among other companies too, who are often guided by those leaders. Coupled with digital transformation, where information technology is ubiquitous across most business processes, design is a key skill that teams within companies and service providers seek.

Long-term (beyond 2020): this is the interesting one. If you subscribe to Clayton Christensen's disruption model

Source: What Is Disruptive Innovation?

Then UI generators such as The Grid are the disruptors that will likely become the norm (read this piece about websites that design themselves on Wired). Handcrafted UI and UX design will likely transform into the curation and product-management aspects of UX.

IMO, the future of UX is likely to change. Self-taught machines can perhaps soon iterate 1,000 times faster and produce far greater variety than human history ever has. In such a scenario, whenever that happens, 10 or 20 years from now, UX design education and training will need to transform.

If UX design in future were to include more formal studies (no pun there) viz.

  1. Study of cognitive neurosciences and human behaviour
  2. Study of ethics
  3. Product management — to envision for technology-aided interfaces stemming from AI advances, generated and unsupervised ML-based system interactions, predictive UX, personalized robotic services, and similar emerging tech.

In conjunction with this, I predict that engineering performance will come to the fore, and UX designers will work closely with technical architects; together they will overshadow the marketing-driven business agenda currently at the core of decision-making. My premise is that process is given undue importance over design action. Agile, Design Thinking and the like will have to give way to design execution. I do not mean the process will go away, but it will move below the hood, intrinsic to the flexible work culture of the digital information age. As for business strategy, its agility will be defined by its customers, not in some wood-panelled boardroom or by digital wall pods that aid decision-making. Agility will be about how plugged in businesses are to their users, without restrictive filters. Agility is not just agile-as-a-process, but an attitude. Instead of insights gathered from noise-free data, the effort will be to remove noise during the analytics. Decision-making power will shift to customers.

It is against such a scenario, driven by large-scale deployment of AI and related tech, that the future of UX designers, and the roles they will play, will unravel.

The impact of AI on UX design has been discussed a lot, as has the example of The Grid, which I refer to as a Clayton Christensen disruptive entrant. In some ways, The Grid is the hero (like the mini steel plants and minicomputers). Here is one great piece from UX Collective by Fabricio Teixeira: “How AI has started to impact our work as designers.” Fabricio is bang on about the impact of AI and that it is well suited to chores like cropping images, or sorting and tagging them. However, I believe this is the sort of productivity we will see in the short term: not wholesale, but in those large agencies with large stable accounts and steady budget flows. My point is that in the longer term, as the technologies mature, they will be capable of doing good design, perhaps with 1,000x more iterations: A/B test, iterate again, and publish widely. During the early days of the WWW, we had designers handcrafting attractive banner ads. In the future, these may just be the output of an AI-driven ad-serving platform that creates a campaign, negotiates and buys spots, runs the campaign, learns from it and repeats. Of course, the path to that has its trials and tribulations, like Microsoft's Tay! These are the patchy early-version prototypes that will eventually disrupt.

Instead, UX professionals in future need not be limited to UX or the stuff above the line of visibility, which machines may replace. They ought to work closely with product managers and engineers to reimagine product experiences.



For example, let us consider the common, abstract five-star rating feedback method. This UX is, in its thinking, a legacy of OLTP (transaction) systems. Feedback is captured in a manner that suits how it is processed, which is rule-based and rigid. Go with me and imagine how this feedback mechanism might be overhauled if the rigid rules were replaced by self-learning AI systems.

A concept for an AI-based feedback system where FEEDBACK is a RELATIONSHIP and not a TRANSACTION. What if AI inverts feedback from an explicit, overt system to an implicit, covert approach, wherein the AI system observes and learns the user's relationship with products or services, in a context it determines as appropriate, capturing feedback as a continuous ‘relationship’ with the product or service rather than a ‘transaction’ with it? The image is only a conceptual illustration in which feedback-as-relationship is constructed and changes with time. There can be an aggregate view, or splits to drill down into. The user has control over which views, or all views, to share. This is an example of how UXers can question the norm and reimagine the product to harness the power of new technologies, while letting the same system handle the chores of generating ‘designs’ for the UXer to choose from. In that sense, the future UX designer would be part curator, part designer. The distinction between what a designer does and what AI does is likely to be the difference between rich organic memories (human) and artificial rules/graphs (AI). Those memories will be our strength and guide our hand and eye.
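To make the concept concrete, here is a toy sketch of how implicit signals could be decayed over time into a running relationship score, in place of a one-off star rating. The signal names, weights and half-life are all invented for illustration.

```python
from datetime import date

# Hypothetical implicit signals in place of an explicit five-star vote.
SIGNAL_WEIGHTS = {'reorder': +2.0, 'open': +0.2, 'return': -1.5, 'ticket': -1.0}

def relationship_score(events, today, half_life_days=30.0):
    """Aggregate (date, signal) events into a running score.
    Older signals fade exponentially, so the score tracks the
    *current* state of the relationship rather than one transaction."""
    score = 0.0
    for day, kind in events:
        age = (today - day).days
        decay = 0.5 ** (age / half_life_days)
        score += SIGNAL_WEIGHTS[kind] * decay
    return score

events = [(date(2018, 6, 1), 'reorder'),
          (date(2018, 7, 1), 'open'),
          (date(2018, 7, 10), 'return')]
score = relationship_score(events, date(2018, 7, 20))
```

A recent return outweighs an old reorder here, which is exactly the temporal nuance a one-time star rating cannot express; the user could still be shown the per-signal splits and choose which views to share.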

As you leave, read this brilliant piece by Mariana Lin on the distinction between an artificial persona and a human persona.


Caveat: I admit to an overly optimistic and exuberant assessment here, and this is an area of speculation. I am informed by my own 7-year journey as a designer co-founder at a Hadoop-based big data startup (see: ramblings from a failed startup journey).

When Less should be More!

Photo by Trym Nilsen on Unsplash

The VP of Design at Uber was quoted by Fast Company recently as saying that his aim in 2018 “is to introduce a more empathic and considered approach to the company and the product.” The emphatic ‘more’ on empathy triggered my interest. I am aware that as I pick phrases, the original context and intent may get lost!

More empathy is bewildering. Let us examine it critically! A child at a buffet stocking up only on desserts is an indulgence we may empathize with while justifying it: let the kid have a break! Or: poor child, a little fun once in a while won't hurt! Then there is the non-indulgent empathy that goes deeper, saying: I care for you and your well-being. You may have a scoop of butterscotch ice cream with caramel topping, but first finish the salad on your plate.

When we design products and services, traversing complex emotive zones as we go, I wonder if there is a correlation between creative imagination and empathy. Do we need to artificially pump up empathy to get into a creative stupor, as Aldous Huxley might suggest; to rally the creative forces within and unleash them on the problem at hand, to deliver the oooh of a user experience? In business problem-solving, this becomes a rational justification for what seems the naturally right thing to do. Be good! A self trip in an enhanced state of happiness and empathy, where you readily give and accept free hugs because you believe; aided, perhaps, by substances, or by a design thinking process instead, which can also render one euphoric!

What distinguishes ‘caring empathy’ is that it comes from within, naturally, although that sounds mystical and is not what Plato would have expounded. Before we sort ‘artificial empathy’ from ‘caring/genuine empathy’, let us examine what role empathy plays. Is empathy a means to acquire knowledge, or is it about deploying knowledge (and logic) to experience emotion? (I refer to this post by Betty Stoneman: “Plato's Empathy? Qualifying the Appetitive Aspect of Plato's Tripartite Soul.”) User researchers and design thinking practitioners should know better, or at least be aware of their intent while investigating.

This tautological reference to a caring empathy is an important distinction, especially as many are starting to grow weary of the noun empathy, thanks to the Design Thinking drumming that exhorts executives to turn on empathy, at least for the duration of the DT workshop they participate in. Empathy captured in a complex array of multi-coloured Post-its! (I too have indulged in these rituals.)

More worrisome is the commodification of empathy visible in the media; we get a daily dose any time a sensational event is reported. Fatimah Suganda, a researcher from Indonesia, pointed out the trade-off between media striving for a readership/audience boost and the informational, educational story, in her piece “The Commoditization of Empathy in Media Coverage on Engeline's Death.” Ironically, it is this very approach to raising empathy that could lead to its dysfunction! I sense I am generalizing, but nevertheless it is a perspective.

So, I ask: Does ‘Artificial empathy’ lead to indulgent design, while ‘Caring empathy’ delivers good design?

Two Big Bets Salil Parekh should take as CEO of Infosys

First posted on Medium Dec 3, 2017

The paint on the signboard with the next CEO of Infosys is fresh. That's fresh cheer for the stock markets in Mumbai; Infosys was always a darling! Salil Parekh will quit his executive board position at Capgemini, a French IT services company, to join Infosys. This is in spite of the prevailing sentiment that, this time, it had to be an old hand, especially under the circumstances leading to Vishal Sikka's resignation. The culture argument is that an outsider doesn't get how Infosys works, and that leadership needs to sit closer to headquarters in Bangalore. I believed so too, having spent a few years there between 1995 and 2006.

Nandan Nilekani, co-founder of Infosys and author of ‘Reimagining India’, is a wise man. As the person tasked with getting Infosys back on track, he is well aware of the challenges ahead. Nilekani said “the challenge before companies like Infosys was to get people to be up-to-date on current technology, current development and how they learn the latest.”

Considering his recent experience helping build the world's largest biometric identity system, Aadhaar, and his current work building the next high-impact ‘societal platform’ at Ekstep, he knows that building scale fast and generating critical impact requires commitment and a no-nonsense attitude. Sentiment doesn't help here!

The choice of Salil Parekh, an outsider to Infosys yet a veteran at scaling Capgemini's India operations in a market already witnessing eroding margins and a need to reskill for new technologies, reflects this approach. A Reuters report quoted Nandan as saying, “He (Parekh) has nearly three decades of global experience in the IT services industry. He has a strong track record of executing business turnarounds and managing very successful acquisitions.”

There are several issues to get around once Salil steps into Infosys in January 2018, but two will stand out in how he leaves a lasting impact on the organization. OK, three!

#1 BIG BET – Picking up on the foundation laid by Vishal Sikka in artificial intelligence will be the first big bet. Predictive Analytics Today did a comprehensive analysis of leading AI platforms. In that report, Nia, Infosys's AI platform, ranks fourth, alongside Wipro's Holmes and platforms from stalwarts such as Google and Microsoft. Not included in the list is Indian IT leader TCS's AI platform, Ignio. But the big boy way ahead in the AI game is IBM's Watson. As widely reported, Watson's estimated revenue stands at US $100 million over the past three years, and IBM has set an ambitious target of $10 billion by 2023. That is likely a challenge in this otherwise exuberant market, even for Watson.

For Salil at Infosys, the challenge will be similar. Infosys needs to solve its clients' problems fast and show business value from those solutions. He should build on Infosys's attempt to invigorate the solutions space with its ‘Innovation Hubs’, which hire local talent, including user experience and design. Infosys has traditionally had the advantage over its competition, at least the India-based firms, in cementing strong client relationships. Salil should quickly press these forces to deliver application ideas for Nia and the other advanced technologies it possesses, and show results to its customers. The place of action for this big bet will be the Innovation Hubs. This is very different from the past model of capturing functional requirements as use cases, a template-driven approach.

Here one needs to collaborate and co-create solutions. If such a concerted effort plays out well, it will be the first solid differentiation Infosys can highlight. The transformation will need to move away from its offshore centers and closer to the client location.

Re-skilling its offshore armies of developers and technologists is already underway. Efforts include collaboration with leading online trainer Udacity to deliver ‘Nanodegrees.’

#2 BIG BET — This one is more up Salil's sleeve, i.e. M&A. For Infosys, the India business is not as significant in revenue terms as it is in visibility. The much-touted GST, a taxation modernization effort, has been an issue for Infosys, which built and deployed it for the Government of India. India's small businesses are up in arms over what they claim is poor performance and several glitches, especially in its usability. Infosys, though, defends its record. Quoting from the report: “Given the complex nature of the project and rapid change management, there have been several stakeholder concerns that have also been raised. Some of our finest engineers are supporting the GSTN team as they work towards resolving these and serving all stakeholders.”

In the past, rapid growth for service companies such as Infosys has come from the implementation and customization of products. Honestly, this did not involve a great amount of thinking and innovation, since the problem was already solved by SAP, Oracle, Microsoft, etc. Strategic problem-solving is a culture and capacity found readily at reputed consulting firms such as McKinsey, BCG, Bain, Deloitte, Booz Allen and so on. Infosys has time and again tried building such a practice but has not delivered on that front as expected. The Innovation Hubs planned at strategic locations across the USA and Europe, however, will help stimulate these problem-solving skills and deliver results.

Culture inherent to a business is the big elephant in the room, and there is no way past it. This applies to India too.

This holds especially for a high-visibility solution such as the GST portal for the Government of India, which impacts hundreds of millions of ordinary Indians. Now, who understands the financial thinking of millions of businesses in India better than Bharat Goenka, the founder of Tally, India's leading ERP and accounting software product company? The journey of Tally started with a challenge posed to Mr. Goenka by his father: “Are you writing programmes to make the life of the programmer easier or the life of the user easier?”

Early on, he understood the accounting culture and preferences of Indian bookkeepers. There are consultants like Rohit Choudhary, who says, “It is an accounting software with a soul!” and “You simply don't change for the sake of changing. Tally's interface is very simple, unique and user friendly.” While there is a universal lesson in such philosophy, it is important to note that this simplicity emerged from a focus on what users seek and how they work.

Salil has the radical option of co-opting this deep learning on how millions of Indians prefer to manage their business accounts, even as they get around to complying with the new tax regulation, with over 99% of taxpayers registering in record time. If we accept the simple assertion that Bharat Goenka understands this behaviour well, then Salil should acquire Tally, convert it into an open platform, and offer it with tighter integration to the GST portal, reducing the users' burden of mastering new software by using the familiar Tally-like experience as a segue into the GST portal.

Now, would such an acquisition actually pay off in terms of license or subscription fees, assuming it offers a freemium model to millions of users who pay to upgrade? It may not. But there is a larger political gain if this is pulled off in record time before the next general elections, and for Infosys, greater clout and influence on policies that impact its operations. One can only imagine the other, unknown benefits from such a platform when linked to other government digital programs, including Aadhaar. There could be stories of efficiency, inclusion and benefits for millions of Indians.

For Infosys, these two big bets could truly transform it into a next generation solution company, accompanied with impact and influence, built on a robust base of an efficient service culture.

The third bet is ancillary to the previous two. Salil will need to convince the Infosys board and take them along, including prominent shareholders like Mr. NRN Murthy, with transparency, for these initiatives. But with Nandan as Chairman to guide him, the journey should be easier for Salil than it was for Vishal, relatively speaking. Time will soon tell whether he will be successful and be the bold transformer Infosys needs. But then again, he is an outsider!

Data Driven Cultures

(first published in 2015 at . Updated 7 Dec 2017)

What drives data driven cultures…besides coffee?

How do businesses deal with intuitive insights and machine generated insights? In a conversation about my product Dataswft, which sifts through #realtime #bigdata #analytics, a brand consultant and travelista asked, “Where are the warmer human things that drive AI and ML technologies?” To be ‘data driven’, he pointed out, is a culture; he was unaware of the Tableau sponsored report from the Economist, “Fostering a data-driven culture.” To quote the report, “IT security is indeed a job for experts, but data are everyone’s business.” I still struggle with the plural nature of data!

“Is Dataswft a technical thingy for being data driven, or is it enabling data driven cultures?” the brand expert enquired. Time to act is a key metric for being data driven, I explained. And so, to be data driven is an everyday matter as long as it provides value. But how does that differ from a data driven culture? Does the constant posing of questions, small questions, constitute a data driven culture?

Consider this scenario. For an online ad campaign, the frequency of tweaks needs balancing between regular and not at all. Regular tweaking requires a minimal amount of data to analyze for metrics such as reach, clicks, CPC and so on; something like a week’s worth of data is good. But that is a heuristic that applies to a human scale of attention and processing. Or compress it to a day, so that the human manager can take a look at the end of the day or the beginning of the next. Also, these tweaks are after the fact, i.e. historical data analysis using heuristic approaches.

Add machine learning, and it opens up two opportunities. First, unlike humans, it is not limited by fatigue or attention. Of course, we will never discount human creativity and imagination, especially when dealing with limited information, limited time or limited capacity to process; those constraints are better handled by humans than machines. Machine learning can give us the speed and capacity to deal with large data sets. The second opportunity is the capacity to predict, by utilizing well defined mathematical models or algorithms.

With artificial intelligence, the same campaign can now run with greater efficiency and more frequent tweaks, instead of weekly or daily windows. It can even be real time, though that is more relevant to IT security and fraud management. Being data driven, coupled with such system intelligence, gives us the opportunity to ask several ‘small questions’ that you can liken to the ‘infinitesimal element’ used in decomposing physical forces.

Representative image to demonstrate the concept of an infinitesimal element as a tangible, simple model-able, mathematical quantity (source: )

This finer abstraction, I conjecture, will allow for more accurate sampling of data and analysis by the machine. For the campaign manager, these frequent ‘small question’ analyses can present a visualization that is richer, provide better trending on the data and lead to better decisions.
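The ‘small questions’ idea above can be made concrete with a minimal sketch: compute the same campaign metrics over many short, overlapping windows instead of one weekly report, so drift shows up early. The hourly figures and window size here are invented for illustration.

```python
# Sliding-window 'small questions' over hypothetical campaign data.
# Each hourly row is a (clicks, impressions, spend) tuple -- made-up numbers.
hourly = [(120, 4000, 35.0), (90, 3800, 33.0), (200, 5000, 40.0),
          (60, 3600, 30.0), (210, 5200, 41.0), (80, 3900, 32.0)]

def window_metrics(rows):
    """Answer the same small question (CTR, CPC) for any window of rows."""
    clicks = sum(r[0] for r in rows)
    impressions = sum(r[1] for r in rows)
    spend = sum(r[2] for r in rows)
    return {"ctr": clicks / impressions, "cpc": spend / clicks}

# Ask that question of every 3-hour window instead of once a week.
windows = [window_metrics(hourly[i:i + 3]) for i in range(len(hourly) - 2)]
for i, w in enumerate(windows):
    print(f"window {i}: CTR={w['ctr']:.4f} CPC={w['cpc']:.3f}")
```

A human reviews one such answer a day; a machine can review every window and surface only the windows that look unusual.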

If being data driven represents our ability not just to capture and store data, but to process it continuously and ask of it well-modelled ‘small questions’, then our ability to connect with the output of that data driven process, coupled with human intuition, stands for a data driven culture.

Providing answers instantly is what technology does well. Given the technology, it’s the culture that realizes its potential and pushes the envelope.

Consider an investment bank that needs to run value at risk calculations covering a host of financial products invested in by clients, tuples of market price data spanning months, and hundreds of sophisticated risk models designed to predict risk against different scenarios. For this money manager, it’s important to know how much money must be set aside against the dynamic risk and how much capital can be unlocked to earn. The calculations here can run into over 15 billion, and executing them in under 30 seconds can make a huge difference. Only a data driven culture makes this scenario possible and keeps a handle on risk in a volatile market that involves many asset classes.
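To give the bank scenario a shape, here is a toy value at risk calculation by historical simulation. Everything is fabricated: the return series is randomly generated and the portfolio value is invented; a real desk runs this across thousands of positions and hundreds of models, which is where the billions of calculations come from.

```python
# Toy 1-day value-at-risk (VaR) via historical simulation, on made-up data.
import random

random.seed(42)
# Pretend daily portfolio returns over roughly two years.
returns = [random.gauss(0.0005, 0.02) for _ in range(500)]

def historical_var(returns, confidence=0.99):
    """Loss level not exceeded with the given confidence, as a fraction."""
    losses = sorted(-r for r in returns)          # losses, worst last
    idx = int(confidence * len(losses))
    return losses[min(idx, len(losses) - 1)]

portfolio_value = 100_000_000
var_99 = historical_var(returns, 0.99)
print(f"1-day 99% VaR: ${portfolio_value * var_99:,.0f}")
```

The answer tells the manager how much capital to set aside; the rest can be unlocked to earn.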

Data driven cultures are those we see at the top of the ‘culture pyramid’, crunching all their data by the second, minute and hour, not as some end of the day or end of the week event. As one moves higher up the pyramid, the response times the culture will accept shrink from minutes and hours to mere seconds. That is not to say every industry out there needs to optimize at the nanosecond level with real time analytics; each industry should choose based on where the opportunity and demand lie. The security industry can only survive with real time processing of events. Social media marketing may find untapped opportunity in an hourly cycle. The education and learning industry may find it suitable to move from end of term to end of day, and so on. In all of these, the data driven culture consists of the small questions we ask of the data.

Put another way, data driven cultures are those complementing their solely heuristic decision making process (read: gut feel) with a data driven approach, and thus asking: what do the data say? That is not just about the quantity of data, but also a quality of it that heuristic rules and human cognition are likely to be overwhelmed by, given the volume, variety and velocity.

In a digital world, even when it’s the same question asked a few minutes back, or yesterday, or last quarter, paradoxically the answer is never the same; it gets better or is likely different.

“Time…and data, are like a river. You never touch the same water twice!”

Of Polynyas and a Pollyanna

Yesterday, I was watching the wonderful Nature series by BBC’s David Attenborough (specifically Frozen Planet). During winter, most things that fly escape the freezing arctic to warmer southern regions of our planet.

Spectacled Eider Duck

There is an exception, it seems. The Spectacled Eider, instead of moving south, heads for the frozen seas in search of ‘polynyas’, naturally occurring ice holes in the otherwise frozen ocean. The entire Eider population gambles on the open ice hole, betting that it will continue to remain open through the harsh winter. Polynyas are reported to remain open over several winters sometimes. Others may just not sustain the thermodynamic conditions required to keep the ice from forming. For the Eiders it’s important they choose a polynya that doesn’t close in. But when the bet fails, as in the video I watched, the open ice hole becomes too small for the crowd to stay afloat and alive. It’s like a noose that slowly tightens, freezing many to their deaths.

The best part of my career was like the abundance of spring. Seasons do change, and when winter set in some 5 years back, I cut loose in search of my own polynya, navigating entrepreneurial waters. While watching Frozen Planet, I was struck by how my situation mirrored that of those doomed Eiders! My startup polynya seemed perfect when I chose that space, and the best contrarian bet, I thought then. It was so good that it reflected in my improved health very quickly. It’s worth noting that being my own boss, chasing a dream, was an amazing stress buster in itself. My elevated cholesterol levels came down. Later, my curious enquiry into changes in eating, exercise and sleep patterns revealed nothing. I was left to conclude stress was the silent killer, and I beat it in my own polynya. In the micro climate of the startup ecosystem, talking to investors, fellow entrepreneurs and businesses at networking events, award functions and so on, the polynya comes to life, nurturing the dream. Until it starts closing in!

Everyone in the ecosystem is aware of the thermodynamics at play. As in a polynya, the freeze tries to extend constantly, while the upwelling currents continue their churn, bringing rich nutrients to the surface for the ducks and seals to feed on, not to mention the submerged naval vessels that need to rise to the surface on occasion within a polynya. If I may draw a literal parallel, these naval ships are the big enterprises with their ‘open innovation’ programs, launching funds, accelerators, shared IP and so on to ensure their large appetites constantly find new sources of food, keeping that magical growth number going, or visiting startup polynyas in search of technological progress.

Unlike the poor Eiders doomed within the confines of a false polynya, a startup is afforded an exit; for the lucky few it’s newsworthy, and the others end in an icy, watery grave. As unreal as failure is, reason informs me that I took a chance; I didn’t fail, it was my aspirations and expectations that failed. More importantly, I realize these can be shed. I remind myself it’s all gravy, and I welcome myself back to this neglected blog.

The optimistic Pollyanna that I am tells me that here it’s spring all through the year! It would be unfair to blame the polynya that fed me through tough weather and open conditions; like so many of us, I am richer for the experience! In my next post, I hope to share an insight gleaned from that journey, where I will try to make sense of what linkages there are, if at all, between the two worlds I straddled (I am no deep diving Eider either): the ‘user centered’ world of design and the ‘data centered’ world of big data. As for my future, I believe I have not failed, but emerged from a cauldron, or should I say ice bucket, of learning, ready to apply it in the new work that I take up.

Meanwhile, enjoy stanza V from the 1925 T.S. Eliot poem ‘The Hollow Men’ (quote taken from All Poetry). It’s the sort of feeling I get as I exit the startup polynya.

Here we go round the prickly pear
Prickly pear prickly pear
Here we go round the prickly pear
At five o’clock in the morning.

Between the idea
And the reality
Between the motion
And the act
Falls the Shadow
For Thine is the Kingdom

Between the conception
And the creation
Between the emotion
And the response
Falls the Shadow
Life is very long

Between the desire
And the spasm
Between the potency
And the existence
Between the essence
And the descent
Falls the Shadow
For Thine is the Kingdom

For Thine is
Life is
For Thine is the

This is the way the world ends
This is the way the world ends
This is the way the world ends
Not with a bang but a whimper.

Data Fracking

In 2006, Clive Humby drew the analogy between crude oil and data in a piece titled “Data is the new Oil”, which has since captured the imagination of several commentators on big data. No one doubts the value of these ‘resources’; they vary in the effort required to extract them. During a discussion, the CIO of a billion dollar company indicated that there is a lot of data, but can you make it “analyzable”?

Perhaps he was referring to the challenges of dealing with unstructured data in a company’s communication and information systems, besides the structured data silos that are also teeming with data. In our work with a few analytics companies, we found validation of this premise. Data in log files, PDFs, images, etc. is one part of it. There is also the deep web, the part of data not readily accessible by googling, or as this Howstuffworks article puts it, ‘hidden from plain site.’

Bizosys’s HSearch is a Hadoop based search and analytics engine that has been adapted to deal with this challenge, commonly referred to by data analysts as Data Preparation or Data Harvesting. If finding value in data indeed poses these challenges, then Clive’s analogy to crude oil is valid. If shale gas extraction today represents the next frontier in oil extraction, employing a process known as Hydraulic Fracturing, or Fracking, then our take on it is ‘data fracking’: a process of making data accessible.
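What ‘data fracking’ amounts to in practice can be sketched in a few lines: cracking semi-structured text into rows an analytics engine can query. The log format and fields below are invented for illustration; HSearch itself and the Hadoop machinery are not shown, only the generic preparation step.

```python
# A toy 'data fracking' pass: turn fictional server log lines into
# structured records, keeping count of lines that resist extraction.
import re

log_lines = [
    '2013-02-01 10:15:32 ERROR disk=/dev/sda1 usage=91%',
    '2013-02-01 10:16:01 INFO disk=/dev/sda1 usage=88%',
    'corrupted line with no structure',
]

pattern = re.compile(
    r'(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) '
    r'(?P<level>\w+) disk=(?P<disk>\S+) usage=(?P<usage>\d+)%')

def frack(lines):
    """Yield structured records; unparseable lines are counted, not lost."""
    records, rejects = [], 0
    for line in lines:
        m = pattern.match(line)
        if m:
            rec = m.groupdict()
            rec['usage'] = int(rec['usage'])   # now a queryable number
            records.append(rec)
        else:
            rejects += 1
    return records, rejects

records, rejects = frack(log_lines)
print(records[0]['usage'], rejects)
```

Once the text is fracked into fields like this, the ‘analyzable’ question the CIO raised becomes an ordinary query problem.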

It’s all gravy

image source:

I was in my early teens when the Voyager spacecraft was launched on what was planned as a four-year mission to explore the solar system. I still recall the fascinating, never before seen close up pictures Voyager sent as it shot past Saturn’s rings. After a long period of silence, Voyager signalled just this week, 36 years later, that it is indeed outside our solar system: the first man-made object to leave it.

Like a bird leaving its nest to lead its own life, led by its own purpose. It’s all gravy after that. To track its progress, only the keenest speculation will do. The degree of keenness matters because, if I may borrow Donald Rumsfeld’s rhetoric, it is now an unknown unknown, and as it moves beyond, it exists only in imagination. The bird in the nest is a known known. To know about the bird that has flown, the best predictability models matter. Patterns matter. There will be lots of noise from uncontrolled, unknown sources. Around the time I used to gaze at those captivating pictures of Saturn and all the planets, there was the Skylab crash incident. I was living in Kolkata (Calcutta) then, and there was this craze, perhaps in jest, to get helmets to protect oneself from the impending crash. It’s funny how easily the scientific rationale can be thwarted by such emotive responses. Is that natural when a familiar, well-understood worldview gets challenged and a lot of unknowns enter the picture?

This is the familiar language of big data these days, as popular as it is, and a lot of unknowns. It’s all gravy today, for the Voyager, for the bird that discovered flying, and for me too. Through this blog I hope to explore and discover new aspects of our digital worlds, to push past the known and structured definitions I have lived by: the world of design, user experience and entrepreneurship encompassing Cloud, mobility and big data. I welcome you to join me on this journey and share your thoughts as it progresses.

My first post explains why UX Gravy. The content is primarily contributed by Sridhar Dhulipala, until we get other like-minded individuals who want to share out here. The idea is that over the last few decades, especially since the advent of GUIs, PCs and point and click devices, we find ourselves in a far more digital-driven world, with highly mobile users consuming digital content across the globe. The initial heuristics and definitions that guided user interface design perhaps still hold, but fundamental changes are questioning their relevance. The nature of tasks, how we work and interact, and our personal lives are all changing, and user experience expectations and opportunities are different.

UX Gravy is about what lies beyond the previous well-defined rules of user experience. It’s about identifying and exploring conjectures and evidence pointing to a new user experience, where our digital stuff has crossed the thresholds of easy, ready, structured comprehension. It’s about what user interfaces, in what contexts, and how and why people interact with information, and how it helps them adapt at work and in life.

The Origins of Bigdata

While sharing our thoughts on big data with our communications team, we were storytellers. The story around big data was impromptu! We realized the oft-quoted Volume, Variety and Velocity can actually be mapped to Transactions, Interactions and Actions. I have represented it using an infographic background.

Here is a summary –

“The trend we observe is that the problems around big data are increasingly being spoken about more in business terms, viz. Transactions, Interactions, Actions, and less in technology terms, viz. Volume, Variety, Velocity. These two represent complementary aspects and, from a big data perspective, promise better business-IT alignment in 2013, as business gets hungrier still for more actionable information.”

Volume – Transactions

More interestingly, as in a story, it flowed along in time, and we realized that big data first appears on the scene as an IT challenge when the Volume happens, driven by a growing rate of transactions. Sometimes transactions occur at several hundred per second, or as billions of database records to be processed in a single batch, where the volume is multiplied by newer, more sophisticated models being applied, as in the case of risk analysis. Big data then becomes a serious IT challenge, and a project to deal with the associated issues of scale and performance over large volumes of data. Typically, these are operational in nature and internal facing.

These large volumes are often dealt with by relying on a public Cloud infrastructure such as Amazon, Rackspace, Azure, etc., or on more sophisticated solutions involving ‘big data appliances’ that combine terabyte scale RAM at the hardware level with in-memory processing software from large companies such as HP, Oracle, SAP, etc.

Variety – Interactions

The next level of big data problems surfaces when dealing with external facing information arising out of Interactions with customers and other stakeholders. Here one is confronted with a huge variety of information, mostly textual, captured from customer interactions with call centers and emails, or metadata from these including videos, logs, etc. The challenge is in the semantic analysis of huge volumes of text to determine user intent or sentiment, project brand reputation, and so on. However, despite the ability to process this volume and variety, getting a reasonably accurate measurement that is ‘good enough’ still remains a daunting challenge.
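The semantic analysis challenge above can be made tangible with the simplest possible baseline: a lexicon-based sentiment scorer. The word lists and call transcripts are invented, and real systems are far more sophisticated; this toy mainly shows why ‘good enough’ accuracy is hard, since word lists miss negation, sarcasm and context entirely.

```python
# A toy lexicon-based sentiment scorer over fictional call-center snippets.
import re

POSITIVE = {"great", "helpful", "resolved", "thanks"}
NEGATIVE = {"broken", "waiting", "refund", "angry"}

def sentiment(text):
    """Score text by counting lexicon hits; crude, but fast at any volume."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

calls = ["Thanks, the agent was great and resolved it",
         "Still waiting, this is broken, I want a refund"]
print([sentiment(c) for c in calls])
```

Such a baseline scales trivially to millions of interactions, which is exactly why its accuracy ceiling, rather than its throughput, is the daunting part.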

Value – Transactions + Interactions

The third level of big data appears when some try to make sense of all the data that is available: structured and unstructured, transactions and interactions, current and historical. They enrich the information and pollinate the raw text by extracting business entities, linking them to richer structured data and to yet other sources of external information, to triangulate and derive a better view of the data, for more Value.

Velocity – Actions

And finally, we deal with the Velocity of information as it happens. This could be for obvious applications like fraud detection, but also to determine actionable insights before the information goes stale. It requires addressing all aspects of big data as the data flows, within a highly crunched time frame. For example, an equity analyst or broker would like to be informed about trading anomalies or patterns detected as intraday trades happen.
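The intraday-anomaly example can be sketched as a rolling z-score over arriving ticks. The price series and thresholds are fabricated, and a production system would sit on a streaming engine rather than a Python list; this only illustrates the act-before-it-goes-stale idea.

```python
# Flag ticks that sit far outside the recent rolling window (made-up data).
from collections import deque
from math import sqrt

def anomalies(ticks, window=5, threshold=3.0):
    """Return (index, value) pairs for ticks beyond `threshold` std devs."""
    recent = deque(maxlen=window)
    flagged = []
    for i, x in enumerate(ticks):
        if len(recent) == window:
            mu = sum(recent) / window
            sd = sqrt(sum((v - mu) ** 2 for v in recent) / window)
            if sd > 0 and abs(x - mu) / sd > threshold:
                flagged.append((i, x))
        recent.append(x)          # update the window after the check
    return flagged

ticks = [100.0, 100.2, 99.9, 100.1, 100.0, 100.1, 140.0, 100.2]
print(anomalies(ticks))   # the 140.0 spike stands out
```

Because each tick is checked the moment it arrives, the alert reaches the analyst while the anomaly is still actionable.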

The business impact of Bigdata

First published on 21 Dec 2012

As a company engaged in Big data before the term became as common as it is today, we are constantly having conversations about solutions to big data problems. Naturally, a lot of the talk revolves around Hadoop, NoSQL, and other such technologies.

But what we notice is a pattern in how this is impacting business. There is a company that caters to researchers who, till recently, were dealing with petabytes of data. This is a client company, and we helped implement our HSearch real time big data search engine for Hadoop. Before this intervention, the norm was to wait for up to 3 days at times to receive a report for a query spanning petabytes of distributed information characterized by huge volume and a lot of variety. Today, with the big data solution, the norm is sub second response times.

Similarly, in a conversation with a telecom industry veteran, we were told that the health of telecom networks has always been monitored across a large number of transmission towers, which together generate over 1 terabyte of data each day as machine logs, sensor data, etc. The norm here was to receive health reports compiled at a weekly frequency. Now, some players are not satisfied with that and want to receive these reports on a daily basis, and possibly hourly or even in real time.

Not stopping at reporting as it happens, or in near real time, the next question business is asking is: if you can tell so fast, can you predict it will happen? This applies especially in the world of monitoring IT systems and machine generated data. We will leave prediction over human generated data (read: social feed analysis) out of the story for the moment. Predictive analysis could mean predicting that a certain purple shade, large size polo neck is soon going to run out of stock in a certain market region given other events. Or it could mean, more feasibly, that a machine serving a rising number of visitors to a site is likely to go down soon, since its current sensor data matches a historical pattern; therefore, alert the administrator, or better still, bring up a new node on demand and keep it warm and ready.
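The predictive-alert idea can be reduced to a toy: compare the latest sensor readings against a known pre-failure signature and raise an alert when the shapes are close. The signature, the load values and the distance threshold are all invented; a real system would learn such patterns from history rather than hard-code them.

```python
# Match recent machine load against a hypothetical pre-failure signature.
def euclidean(a, b):
    """Straight-line distance between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Invented historical signature: load ramps up sharply before a node dies.
pre_failure = [0.55, 0.65, 0.78, 0.90, 0.97]

def should_alert(recent_load, threshold=0.15):
    """True if the latest load curve resembles the pre-failure signature."""
    window = recent_load[-len(pre_failure):]
    return euclidean(window, pre_failure) < threshold

print(should_alert([0.40, 0.52, 0.66, 0.79, 0.91, 0.96]))
```

On a match, the action could be paging the administrator or, better, warming a standby node before the failure lands.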

So it seems the value of big data is in its degree of freshness and actionability; at the most basic level, it is about simply getting the analysis or insight out faster by a manifold factor!