Elly Yates-Roberts
It’s no secret that artificial intelligence (AI)-enabled devices are listening to and observing what we do, collecting and digitising massive amounts of data in the process.
What underpins this entire enterprise is the work under the hood – maintaining the data lakes and warehouses that store the data, performing data engineering tasks to establish and enhance its structure, using business intelligence and statistical analysis to make sense of it and, finally, training an AI program to make predictions that yield more “intelligent” decisions.
For example, many of us are familiar with virtual assistant technologies that take in information, digitise the data, put it into data pipelines and analyse it so that the AI algorithms improve in near real time, fine-tuning the assistant to one’s own world. What makes an AI algorithm potent is its ability to start making connections with other data sets and data points.
This results in the creation of a profile that encompasses your shopping behaviour, what you do at home, what music you like listening to, what you like eating, and even where you live – in suburbia or in the city. When you start collating all these pieces of information and feeding the resulting features to the AI engine, it becomes truly intelligent in catering to your specific needs.
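To make that collation step concrete, here is a minimal, hypothetical sketch: a few separate signals are combined into one feature vector per user and scored by a simple model. The feature names, numbers and choice of model are purely illustrative and do not describe any real product.

```python
# Hypothetical sketch: collate several behavioural signals into one profile
# (feature vector) and let a simple model score it. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one user profile: [weekly_grocery_spend, hours_of_music_per_day,
# meals_ordered_per_week, lives_in_city (1) or suburbia (0)]
profiles = np.array([
    [120.0, 2.5, 3, 1],
    [ 60.0, 0.5, 1, 0],
    [200.0, 4.0, 6, 1],
    [ 45.0, 1.0, 0, 0],
])
# Label: did the user act on a personalised recommendation?
responded = np.array([1, 0, 1, 0])

# Train on the collated features; adding more signals to the profile is what
# gives the "AI engine" more to connect and reason over.
model = LogisticRegression().fit(profiles, responded)

new_profile = np.array([[90.0, 3.0, 2, 1]])
print("Probability of responding:", model.predict_proba(new_profile)[0, 1])
```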
In the last five years, the world has made significant strides in the way AI is developed to solve a variety of business challenges. Most notable is the advent of transfer learning, in which deep learning models that have been open sourced can be reapplied and fine-tuned to solve a specific business use case. Another notable example is the proliferation, understanding and appreciation of a class of statistics called Bayesian statistics. While it is not technically AI, Bayesian statistics has armed the data science community with a much more intuitive way of understanding data, enabling stakeholders to make more informed decisions.
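As a rough illustration of transfer learning, the sketch below takes an open-sourced model pre-trained on ImageNet, freezes its layers and fine-tunes only a new output layer for a hypothetical business task. The framework (PyTorch and torchvision), the number of classes and the dummy training step are all assumptions made for the sake of the example.

```python
# Minimal transfer-learning sketch: reuse open-sourced pre-trained weights and
# fine-tune only a small new head for a hypothetical five-class business problem.
import torch
import torch.nn as nn
from torchvision import models

# Load a model with weights pre-trained on ImageNet (the open-sourced part).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so their learned knowledge is kept, not retrained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with one sized for our own use case and train just that.
num_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a placeholder batch; a real project would
# loop over a DataLoader of labelled business data instead.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

The Bayesian point can be illustrated just as briefly. In the hypothetical example below, a prior belief about a conversion rate is updated with new data, and stakeholders get a credible interval rather than a single point estimate; all of the numbers are made up.

```python
# Hypothetical Bayesian update of a conversion rate using a Beta-Binomial model.
from scipy import stats

# Prior belief: the conversion rate is probably low (Beta(2, 20), mean ~9%).
prior_alpha, prior_beta = 2, 20

# New evidence: 30 conversions out of 200 visits.
conversions, visits = 30, 200

# Conjugate update: the posterior is also a Beta distribution.
posterior = stats.beta(prior_alpha + conversions,
                       prior_beta + (visits - conversions))

# An intuitive statement for stakeholders: a range of plausible values.
low, high = posterior.interval(0.95)
print(f"Posterior mean {posterior.mean():.3f}, "
      f"95% credible interval ({low:.3f}, {high:.3f})")
```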
As for smarter AI, the algorithms are getting so good that they might eventually start writing the code that a programmer typically writes today. That doesn’t necessarily mean programmers are going to start losing their jobs, but it does mean they will have to continuously learn, innovate and stay on top of these changes.
Siddarth Ranganathan is director of data science and analytics at Nintex
This article was originally published in the Winter 21/22 issue of Technology Record.