What, if anything, should worry us about artificial intelligence today?
Is it the worry that it will take over jobs around the world? Most definitely not. The fact of the matter is that AI has become very good at solving certain problem sets, but it still lacks an overall understanding of the context behind the decisions it makes.
Yet there are still reports, like Tom Davenport’s, that say that by 2025 there will be more machines than humans in the marketing function. How do we reconcile AI not being a job threat with it occupying a space that used to be entirely human?
It’s not that humans will be replaced. It’s that human skill sets will need to be rejiggered. A relevant example is the emergence of ATMs. When automated cash machines debuted, there was a big buzz that tellers would lose their jobs. And it’s true that, on average, the number of tellers per branch did go down. But an unexpected second trend also occurred: Banks opened more branches, which were now more cost-effective to run. Bank employees then transitioned from the rather mundane tasks of debiting or crediting money and depositing checks to becoming customer relationship managers. In this sense, automation let machines do what they do best and gave humans the bandwidth to focus on the skills best suited to them. The same will be true for marketing, to Davenport’s point. Machines will crunch the myriad numbers available in marketing today, apply analytics to data and determine next best steps at a pace humans could never match. But humans will fill the creative roles that make marketing campaigns possible.
But there is one part of the above equation that should make enterprises worry: Who is monitoring the machine?
It’s not a chaotic superintelligence that needs our immediate attention. It’s our models. While humans are fairly good at adapting to new roles in the face of technological change, machines aren’t nearly as good at adapting on their own. AI isn’t telling us why it “thinks” what it does or why its assumptions are valid. It cannot alert us to new biases it may have learned.
Machine learning and deep learning can add immense value to analytics performed on big data. AI has momentum today because we have complex, rich data and the computing power to run the algorithms needed to provide insights at scale, and this will drive more deep learning throughout the enterprise. But the more data we feed a model and the more decisions we base on its output, the more we rely on automation for analysis and execution, and the less often humans check that the model isn’t inflating a bias. That problem might not garner as many overhyped headlines as “AI is Taking Our Jobs!” but it’s the dilemma that should be worrying enterprises right now. If an autonomous vehicle has an accident, who will be liable: the algorithm, the human in the vehicle or the manufacturer? These are the types of conundrums facing enterprises.
As machine learning and deep learning progress, a big element of analytic operations will have to be self-service, real-time reporting, so business leaders can have confidence that these models are doing what they are supposed to accomplish for the business. Every model degrades over time, so each one needs an accuracy threshold that is monitored continuously, allowing the enterprise to be proactive and guard against the false positives, inaccuracies and biases that develop over time.
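As a minimal sketch of that kind of threshold check, assume a batch-scored model whose automated decisions can later be compared with observed outcomes; the metric, the 90 percent threshold and the report fields below are illustrative assumptions, not any particular product’s API.

```python
# Minimal sketch: compare a model's latest scoring window against an agreed
# accuracy threshold and flag it for human review when it degrades.
# The threshold value and report fields are illustrative assumptions.

def accuracy(predictions, actuals):
    """Fraction of automated decisions that matched the observed outcome."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(actuals)

def check_model_health(predictions, actuals, threshold=0.90):
    """Return a small report a self-service dashboard could surface,
    flagging the model before degradation or learned bias silently
    erodes the decisions built on top of it."""
    score = accuracy(predictions, actuals)
    return {
        "accuracy": round(score, 3),
        "threshold": threshold,
        "needs_review": score < threshold,
    }

if __name__ == "__main__":
    # Hypothetical outcomes from the most recent batch of automated decisions.
    predicted = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    observed  = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
    print(check_model_health(predicted, observed))
    # {'accuracy': 0.8, 'threshold': 0.9, 'needs_review': True}
```

The same pattern can be applied per customer segment with fairness metrics instead of raw accuracy, so bias doesn’t creep in unnoticed.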
Businesses must make sure they have processes and people in place that understand why a model was developed, what result it should deliver for the business, and why it’s predicting what it’s predicting. So just as human creativity will reign supreme in jobs, it’s going to take human critical thinking to ensure narrow AI automates decision-making in a trusted and reliable manner.
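To make “why is it predicting what it’s predicting” concrete, here is one simple way an analyst might probe a single prediction: perturb each input feature back to a neutral baseline and see how much the score moves. The scoring function, features and baseline values below are hypothetical stand-ins for a real trained model, not a prescribed method.

```python
# Illustrative sensitivity check: how much does each feature contribute to
# this customer's score? All weights, features and baselines are hypothetical.

def score(record):
    # Hypothetical propensity model: weighted sum of customer features.
    weights = {"recency_days": -0.02, "spend_last_quarter": 0.001, "email_clicks": 0.15}
    return sum(weights[k] * v for k, v in record.items())

def explain(record, baseline):
    """Contribution of each feature: how much the score drops when that
    feature is reset to a neutral baseline value."""
    full_score = score(record)
    contributions = {}
    for feature in record:
        neutralised = dict(record, **{feature: baseline[feature]})
        contributions[feature] = full_score - score(neutralised)
    return contributions

if __name__ == "__main__":
    customer = {"recency_days": 10, "spend_last_quarter": 900, "email_clicks": 4}
    baseline = {"recency_days": 30, "spend_last_quarter": 500, "email_clicks": 1}
    print(explain(customer, baseline))
    # {'recency_days': 0.4, 'spend_last_quarter': 0.4, 'email_clicks': 0.45}
```

Larger contributions point to the features driving the decision, which is exactly the kind of evidence a human reviewer needs before trusting the automation.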
A strategist and change leader, Yasmeen Ahmad has worked on executive teams with a focus on defining and leading strategy, driving priorities with a sense of urgency and leading cross-functional initiatives. Yasmeen has held roles including VP of Enterprise Analytics, Head of Global Communications and Chief of Staff to a CEO. Her creativity, ideas and execution have helped organizations move quickly to deliver on key transformation objectives, including pivots to analytics, as-a-service, subscription and cloud.
Yasmeen is a strong communicator, well versed in connecting business and technical disciplines. Her keynote presentations, articles and published materials demonstrate her thought leadership and ability to simplify complex concepts. She is regarded as an expert in the enterprise data and analytics domain, having successfully consulted to deliver multi-million-dollar value within Fortune 500 companies. Yasmeen leads with a passion for being customer obsessed and outcome focused. A strong people leader, she has driven change management and people initiatives to foster a culture of growth and continuous improvement. Yasmeen is a strong proponent of transparency, diversity, inclusiveness and authentic leadership.
Yasmeen has a PhD in Life Sciences from the Wellcome Trust Centre for Gene Regulation and Expression and has studied on executive programs related to Disruptive Innovation and Strategic IQ at Harvard Business School. She has been named one of the top 50 data leaders and influencers by Information Age and Data Scientist of the Year by Computing magazine, and was a finalist for Innovator of the Year in the Women in IT Awards. Finally, Yasmeen is part of the exclusive Executive Development Program at Teradata.