Story and Strategy Blog

Category: Emerging Technology

  1. Preaching to the Choir

    By many measures the Blockchain Live conference in Olympia, London this week was a success. It was certainly busy, with exhibition stands and all seven ‘theatres’ thronged with people. I learnt a great deal and heard about many exciting projects and visions for blockchain across many different industries. However, I could not shake the feeling that I was inside a rather cultish event.

    A quick, and completely unscientific, scan of people’s badges around me suggested that most delegates were from the blockchain community. They were start-ups, consultants, technologists and service providers within and for the blockchain world. There seemed to be relatively few from outside the industry. The language of the event reinforced this point. Most presentations and conversations required a decent understanding of blockchain technologies and jargon to follow – effectively excluding those who may have come with business issues hoping to find a solution in blockchain.

    That’s not to deny that there was lots of potential, from start-ups looking to use blockchain to incentivise digital content discovery and sales (Catalyst), to cataloguing of Art (Arteia), to mobile wallets for crypto funds (ZEUX), but many of these have yet to even launch to consumers and businesses outside of the immediate blockchain environment.

    Another issue, to my mind at least, was the heavy overlap still seen between blockchain technologies and solutions and cryptocurrency. There were many presentations on the future of cryptocurrency, tokens and ICOs, not only on the Crypto stage, but also on the financial services, technology and development and investors stages. I personally believe this is one of the biggest hurdles to overcome. Whilst blockchain is so closely associated with Bitcoin in particular, and cryptocurrencies in general, it will always carry a taint of criminality. Indeed, one of the best-attended sessions in the morning was author Misha Glenny’s presentation “What does Crypto mean for crime.” I asked him whether he thought that crypto could clean up its reputation. His answer struck at the heart of the problem – a catch-22. For crypto to be ‘cleaned up’ it needs to be used by the mass market – but for the mass market to adopt it, it needs to be cleaned up.

    I don’t want to be down on the event. It was a great showcase of the potential of this exciting technology, and it was good to see growing interest. But to continue to thrive, those in the market need to start talking in language that resonates with their prospects and customers in the wider world. They need to root their technologies and solutions in the context of issues their audiences are facing now – to reverse the old marketing axiom, they need to sell more sausage and less sizzle.

    Speaking with Max Tkacz, creative director of Vanbex, at the exhibition, we agreed that blockchain projects needed to get better at talking about what the technology was achieving and the problems it was solving, rather than focusing on the ‘how’ of the tech itself. I suggested, and Max agreed, that it was time to talk more about the ‘cars’ of effective solutions than the ‘engines’ of blockchain.

    Next year I hope I can return to see the same enthusiasm and innovation, but with an audience drawn from wider segments of the potential customer base, so that real needs and solutions can be discussed in the language of business rather than technology.

  2. Human heroes needed to save AI

    Okay, so that’s a bold statement, and one that is hard to verify, but it was the persistent voice in my head as I listened to Hannah Fry’s (@FryRsquared) excellent lecture today at the RSA.

    AI’s failings are well documented, from the short-sightedness of some algorithmic design, to the unconscious biases that play out in extremely dangerous ways for many, to plain careless coding and bugs. All of these are undoubtedly important but have been covered at length elsewhere. What struck me in what Ms Fry was saying was that the real risk lies in our deep-seated and seemingly contradictory urges both to rely unquestioningly on technology and to reject it irrationally for not being perfect. And these are (very) human failings.

    The examples used in the lecture (and in her book Hello World) range from the common trust in SatNav systems even when common sense shows we are going the wrong way, to the far more serious failure of common sense to overrule the sentencing of criminals based on faulty algorithmic assessments of the likelihood of re-offending (see Brooks vs State of Virginia in Fry’s book).

    Unlike currently popular dystopian views of AI, Fry’s work looks at how we can work with AI to make a better future, and what we need to do ourselves to make it happen. In my mind, her presentation suggested three interlocking questions that need to be addressed to make sure we take responsibility for AI.

    • How should we (humans) manage AI?
    • How should we decide (as businesses, society and individuals) where the boundaries should be established?
    • And, finally – how do we need to change our behaviour to make the best of a world of AI?

    Many bigger brains than mine will doubtless discuss these for years to come, but I do think there is a common thread that must form the basis of answers. We need to find better language and better stories to help people understand what these technologies do in their lives and why.

    As noted by several in the audience, the trouble with AI is that it just sounds so scary. ‘Artificial’ means ‘not natural’ – imitation, false, insincere, something that seeks to deceive. ‘Intelligence’ is itself cold and unemotional. Together, it is unsurprising that they conjure images of a cold and inhuman future – even without reference to The Terminator!

    But alternatives like Intelligent Assistance (IA), as suggested from the audience today, or Decision Support Systems (around since the 1970s), are almost as bad and unlikely to catch on. With the mere mention of ‘AI’ purported to add 10% to company valuations, it seems we are stuck with it.

    But can we temper that phrase by providing better context and more inclusive stories that help people understand what is happening and how it helps them? Technologists have a bad habit of creating their own language that is both complex and arcane to the rest of the world. Yes, they undoubtedly need to convey complicated things, but as Fry demonstrated today, with a bit of thought and effort even complex processes can be explained in accessible ways.

    To answer the questions above we need a more inclusive, sober and well-informed debate across all segments of society to ensure that the answers are effective, rational and acceptable to the majority.

    In order to have this debate, the AI industry (in all its forms) needs to start creating arguments, proofs and examples in language understood by the masses. This means stepping away from product-centric (or even commercially centric) communications towards stories that address the context, values and concerns of the populations affected by those products.

    The danger in not leaving the tech bunker is that AI innovators will find the debate increasingly toxic and the attentions of regulators ever more onerous. AI has an important role to fulfil in improving our lives, societies and economies in many ways. But this cannot happen without articulating that benefit in ways that are convincing and relevant. Let’s re-tell the AI story with humans as the heroes.