AI and machine learning adoption trends

There were some interesting stats about AI adoption in the recent Gartner survey of circa 3,000 CIOs. In the CIOs' opinion, the top two most disruptive technologies were AI and data & analytics. I guess this is not that surprising – it doesn't take a rocket data scientist to see that AI and machine learning are mega-disruptive. What is more interesting is the AI adoption trend: not who is talking about it, but who is actually doing it? 37% of CIOs responded that they had already deployed AI technology or that deployment was in short-term planning.

AI adoption two years ago

This chimes with what Inawisdom are seeing in the market. Two years ago we were largely working on “discovery” projects. These are valid R&D-oriented initiatives to better define, explore and hopefully validate a hypothesis. For example: using a combination of an organisation’s private internal datasets and externally available public datasets, is it feasible to derive some business value and differentiation? We are still undertaking these discovery projects all the time – they are where organisations need to start in order to gain the confidence to make further investment decisions.

What has been interesting is the “failure rate” of these discovery projects. By failure, I mean how often the initial hypothesis is disproved. Of course, a failure in this context is still a success – you have learned something about your business, and that in itself can have real business value. Conventional wisdom might be that you would expect 50% of these predictive analytics discovery initiatives to be disproved, but that is not the case. Anecdotally, I’d say in excess of 80% are proven to be viable. So why is this? I think there are a number of reasons for this surprising level of success…

Lots to go at

It’s early days for machine learning adoption in most industry sectors – so there is plenty of low-hanging fruit to go after. To put it another way, every time I engage with a customer, they generally know where the bodies are buried. For example:

  • They know that they are under-exploiting the voice data in their contact centre recordings
  • They know that they are not using data from their customer chat interactions…
  • …and they know that they are not fully exploiting their web traffic behavioural data to optimise the customer experience

And so it goes on… One of the early challenges for us is helping a customer prioritise where to start. Being brutally focused on the biggest potential ROI is key; otherwise you end up defocused and failing.


Related to the above point, we apply a scoring mechanism to triage each AI/ML opportunity and understand its business impact, feasibility and risk. This allows comparison of opportunities that are otherwise very hard to compare. For example:

  • Is it better to invest my limited resources in using video processing to identify customer satisfaction and footfall analysis in a retail environment (arguably very much at the AI end of the spectrum)…?
  • …or should I instead invest in optimising the prices and recommendations offered to individual customers on my eCommerce platform (much more at the machine learning end of the spectrum)?

Crucially, this allows an organisation to stay quantitatively focused on what makes a difference and avoid vanity projects.
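As an illustration only, a triage like this can be reduced to a simple weighted score. The criteria, scales and weights below are my own illustrative assumptions – not Inawisdom’s actual scoring mechanism:

```python
# A minimal sketch of opportunity triage: score each candidate on impact,
# feasibility and risk, then rank. Scales and weights are illustrative.
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    business_impact: int  # 1 (low) .. 5 (high)
    feasibility: int      # 1 (speculative) .. 5 (data in hand, proven approach)
    risk: int             # 1 (low risk) .. 5 (high risk)

def score(opp: Opportunity, w_impact=0.5, w_feasibility=0.3, w_risk=0.2) -> float:
    # Higher impact and feasibility raise the score; higher risk lowers it.
    return (w_impact * opp.business_impact
            + w_feasibility * opp.feasibility
            - w_risk * opp.risk)

backlog = [
    Opportunity("Retail video footfall analysis", business_impact=4, feasibility=2, risk=4),
    Opportunity("eCommerce price/recommendation optimisation", business_impact=4, feasibility=4, risk=2),
]

for opp in sorted(backlog, key=score, reverse=True):
    print(f"{opp.name}: {score(opp):.2f}")
```

In practice the weights would be agreed with the business stakeholders, and some risk factors never reduce this cleanly to a single number – but even a rough score forces the hard-to-compare opportunities onto one scale.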


Easy to say, but we know what we are doing. When we qualify AI/ML opportunities, we assess the technical dimension of their feasibility (not the only dimension we consider!), and so we have managed to largely avoid the hypotheses that will be disproved. I guess this is hard to know for sure, because…er…we avoided them. One vital way we do this is via initial data discovery work. Machine learning without the data is a rather disappointing process for our data science team! Assessing data availability (there are many aspects to this too), data security considerations, volumes, distributions and data quality is crucial in order to get off on the right foot.
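To make the data discovery point concrete, here is a minimal sketch (with made-up field names, sample data and thresholds) of the kind of basic volume, completeness and distribution checks that might be run on a candidate dataset before committing to a hypothesis:

```python
# A hedged sketch of initial data-discovery checks: row count, missing-value
# rate and basic distribution stats for a single numeric column.
import statistics

def profile_column(values, missing_threshold=0.2):
    present = [v for v in values if v is not None]
    missing_rate = 1 - len(present) / len(values) if values else 1.0
    return {
        "rows": len(values),
        "missing_rate": round(missing_rate, 3),
        "mean": round(statistics.mean(present), 3) if present else None,
        "stdev": round(statistics.stdev(present), 3) if len(present) > 1 else None,
        # A column with too many gaps may sink the hypothesis before modelling.
        "usable": missing_rate <= missing_threshold,
    }

# e.g. call-duration seconds from contact-centre records (made-up sample)
durations = [120, 95, None, 310, 180, None, 240, 60, 150, 200]
print(profile_column(durations))
```

Real discovery work covers far more (lineage, security classification, drift over time), but even a profile this simple can flag a dataset that will never support the intended model.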

AI adoption now – straight to production

In contrast, these days customers are increasingly engaging with us to deliver all the way through to production from the get-go. There is still often a discovery phase to hone and test the hypothesis but, as mentioned above, customers know where the gold is buried and now have the confidence to fund an initiative that will deliver that business value, not just explore it. This reflects the industry’s growing confidence in the technology, and I think it also shows that we are coming out of the early adopter phase.

Of course, having an ambition to deploy AI/ML workloads in a production setting does not mean that it becomes a waterfall project. There can still be bumps along the way, such as unexpected data quality issues and data integration challenges, so an iterative approach is essential. We use a sprint cycle model with a weekly cadence and continual playbacks to the customer, as described in more detail on the AWS Solution Space site. I would say that across all the AI/ML projects we’ve delivered to date, there are always surprises, typically from the data. This iterative model allows us to jointly adapt our approach accordingly.

You don’t know what you don’t know

One thing I love about the work we do is when we can overturn the received wisdom of our customer. This can be quite uncomfortable at times and so needs to be carefully managed and delivered, but when it happens I know we are adding massive value. The data does not lie! For example, being able to show that the relationship between sales levels and price is not what they thought it was (and so there is a real-time ML optimisation opportunity) is fabulous. This is one of the reasons why our main customer stakeholder is typically more in the business than the technology sphere. These are the people who really need these insights to further optimise their business model and operations.
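As a purely illustrative example of testing that kind of received wisdom, a log-log fit of sales against price estimates the price elasticity of demand – the slope tells you how sales actually respond to price changes. The data below is synthetic, not from any customer:

```python
# A hedged sketch: ordinary least squares on log(sales) = a + b*log(price).
# The slope b approximates the price elasticity of demand.
import math

def elasticity(prices, sales):
    xs = [math.log(p) for p in prices]
    ys = [math.log(s) for s in sales]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Closed-form OLS slope for a single regressor.
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

prices = [10, 12, 15, 18, 20]          # synthetic price points
sales = [1000, 850, 700, 560, 500]     # synthetic unit sales at each price
print(f"estimated price elasticity: {elasticity(prices, sales):.2f}")
```

If the business believed demand was highly price-sensitive and the fitted elasticity comes out near -1, that gap between belief and data is exactly the kind of insight that opens up a pricing optimisation opportunity.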

One last point on this – it may sound odd for a company like Inawisdom to say, but not every problem is best solved with machine learning or “fancy” AI. Our customers have been running their businesses for years and doing a great job of it. They have refined their pricing strategies, tuned their call centre operations and optimised their distribution networks. In this sense, the low-hanging fruit of optimisation has already been eaten. AI and ML allow them to grab the next 5% of optimisation that is currently untapped – but only once the “basics” have been done. To put it another way, if your business is running blind due to a lack of good Business Intelligence/dashboards giving you the key metrics you need to run the operation, then fix that first! It also provides the foundation (via data integration etc.) for future AI/ML initiatives.

Lies, damn lies and surveys

Finally, in their usual unique style, The Register pointed out that the growth in AI interest is quite an about-face in the Gartner CIO survey, so it needs to be taken with a pinch of salt. They remind us that only back in June, Gartner reported that just 4 per cent of CIOs had invested in and deployed AI. So whilst the survey provides interesting data, I’ve focused in this blog on what we are seeing at the coal face…

Robin Meehan