It is often said there are only three justifications for business investment: to reduce costs, increase sales and mitigate risks. Underpinning all of these, however, are two often-touted but harder-to-measure qualities, especially at an organisational level: flexibility and efficiency.
Most IT sales and marketing material will be peppered with these words, but achieving real benefit from either requires changes in people, processes and systems, not simply updating the tools.
Technology can be used to automate existing processes, but often acts like a lens or amplifier. This means it needs to be used wisely, not as a panacea. Apply it to a bad process or system and the result is typically that the bad things happen faster.
Optimal outcomes need to be determined as early as possible. While many organisations are looking for flexibility and efficiency by automating their business processes, as Computer Weekly pointed out recently, it is not simply about automation, but optimisation.
To embark on this optimisation journey, it is important to fully understand the starting point, and one good way is to build a computer model to simulate, measure and predict changes in the real world by codifying them in software. Increased curiosity about, or at least public awareness of, computer models and some of the science behind them may have been an unexpected outcome of the Covid-19 pandemic.
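As a flavour of what "codifying a process in software" can mean, here is a minimal, illustrative sketch: a two-stage order process simulated by Monte Carlo sampling to estimate turnaround time. Every figure (stage durations, the rework rate) is an invented assumption, not real data.

```python
import random
import statistics

def simulate_order(rng: random.Random) -> float:
    """Simulate one order passing through a two-stage process (hours).
    Durations and the 10% rework rate are illustrative assumptions."""
    triage = rng.lognormvariate(0.0, 0.5)    # intake/triage step
    fulfil = rng.lognormvariate(1.0, 0.75)   # fulfilment step
    rework = rng.random() < 0.1              # some orders need redoing
    return triage + fulfil + (fulfil if rework else 0.0)

def simulate(n: int, seed: int = 42) -> dict:
    """Run n simulated orders and summarise turnaround time."""
    rng = random.Random(seed)
    times = sorted(simulate_order(rng) for _ in range(n))
    return {
        "mean_hours": statistics.mean(times),
        "p90_hours": times[int(0.9 * n)],  # 90th percentile
    }

if __name__ == "__main__":
    print(simulate(10_000))
```

A model like this gives a measurable baseline: change an assumption (say, halve the rework rate) and re-run to predict the effect before touching the real process.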
However, models must be iteratively refined and honed to better fit imperfect but continuously emerging data, and this experimental nature is not always appreciated, either by the general public or by those in senior decision-making positions. People want answers fast, whether for something critically important, such as the spread of a pandemic, or something more routine, such as weekly sales forecasts for a business.
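That iterative loop can be sketched in a few lines. The example below uses simple exponential smoothing, one of the most basic forecasting techniques: each new weekly sales figure nudges the running estimate rather than replacing it. The sales figures are invented for illustration.

```python
def smoothed_forecasts(sales: list[float], alpha: float = 0.3) -> list[float]:
    """Refine a one-step-ahead forecast each time a new figure arrives.

    alpha controls how strongly fresh data overrides the prior estimate:
    each forecast blends the latest observation with the previous level.
    """
    forecasts: list[float] = []
    level = None
    for observed in sales:
        if level is None:
            level = observed  # initialise on the first data point
        else:
            # fold the new observation into the existing estimate
            level = alpha * observed + (1 - alpha) * level
        forecasts.append(level)
    return forecasts

# Invented weekly sales; each new week revises the forecast incrementally.
weekly_sales = [100, 120, 90, 130, 125]
print(smoothed_forecasts(weekly_sales))
```

The point is not the technique but the shape of the process: the model is never "finished", it is continuously re-fitted as data emerges.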
Applying artificial intelligence (AI) to support and augment this process seems like an obvious next step, but there is more than just technology required.
Given the pace of change – or at least increasing expectations of faster results – traditional and sequential processes are no longer sufficient. This opens the door for agile hybrid working approaches with parallel processes and a focus on overcoming bottlenecks and constraints, such as was seen in recent vaccine developments.
In IT, this can lead to an over-emphasis on automation and simply applying “Ops” to the end of anything – giving us DevOps, SecOps, AIOps, MLOps, ModelOps, and so on.
In many cases, there are good arguments for coupling a technology with operations, but too much emphasis has been placed on arguing which XyzOps variant is best. The fundamental question should be: how can this produce better business outcomes faster? That is, how will it optimise, rather than just automate?
There will be many discussions to have about the quality, variety and choices of AI, machine learning, modelling or data analysis being used, but perhaps the more important questions are: how quickly can we apply it, and how quickly can we see the results in a business-relevant – not technical – context?
The answers revolve around platforms, processes and people. The last two need to go hand in hand, combining business need and technical capability from the outset, along with the variety of skills that must be integrated. Mary K Pratt has some great ideas on how to bring together the right blend of model, algorithm and dashboard specialists, along with people who liaise with the business. This is critical: the more closely the data and the business needs are aligned, the better the outcomes.
While some differences between technologies may be, or may become, important, how readily a platform lets diverse elements be combined may matter most in dictating how quickly value is obtained.
Open source and free trialling have helped deliver a much more rapid pace in software development and can do the same across the broad field of AI. There are extensive frameworks from large suppliers such as Microsoft’s open-source Cognitive Toolkit, IBM’s Watson Studio and Google Cloud AI Platform, automation specialists such as Wipro, plus AI-focused platform players such as TensorFlow and H2O.ai.
In each case, there is a focus on making AI an integrated element of operational capability. There are programmable interfaces and libraries, with Python, C++ and Java the most common bindings, and some of the suppliers have tools to simplify and speed development, such as H2O’s recently launched Wave framework with built-in templates, themes and widgets, or Wipro’s Holmes for Business.
Having powerful and extensive platforms is a good start, but they need to be readily applicable and able to deliver meaningful business benefit. For organisations looking for immediate value, this means working on existing, often mundane challenges, not the bleeding edge of technology. So while video, handwriting and gait analysis might be exciting, the immediate value comes from things with direct business impact, such as forecasting, fraud detection and reducing customer churn.
Increasingly, AI suppliers recognise this and are focusing their algorithms on business process optimisation. This means any organisation looking to improve its processes and efficiency can now expect to see real-world examples of AI investments that make an immediate impact.
So, don’t focus on asking suppliers technical questions about how smart their AI is or how much it automates – in most cases it will be up to the task. The real value comes from speed to business outcome. Ask for examples, strategies and how-to guides that address the task in hand: increasing the pace and effectiveness of business process optimisation with AI for a reason, not for the hype.