
What are the prerequisites for a large-scale AI initiative?


Sentient Enterprise and Artificial Intelligence (Part 1 of 2)

Over the last few months, I’ve had the chance to engage with customers and industry analysts on a range of topics in the field of Artificial Intelligence, and I’ve been struck by how effective the Sentient Enterprise is in addressing the most common questions and misconceptions about AI. This two-part blog series focuses on two of those, starting with the question above: what are the prerequisites for a large-scale AI initiative?

Examining customer case studies is one of the best ways to share knowledge and insights about how enterprises are driving business outcomes from AI technology. Case studies are practical, relatable, and authentic, and we are fortunate to have some great reference accounts that allow us to publicly share their AI and deep learning success stories.

For context, most of our AI case studies start with Rapid Analytic Consulting Engagements (RACE), an agile, experimental process for finding and testing new insights and producing results in weeks, not months. The starting point for telling these stories is identifying the business outcome we want to achieve, then working through a range of deep neural network architectures, augmenting current platforms with the requisite software and GPU enablers, and measuring the final results. All of this happens within a few sprints.

The most common reaction is, “But what about all the work in building the data pipelines and organizing, cleansing, and governing the data? What about metadata? What about the pain of moving from initial insight to operationalizing?” The conversation shifts from how leading companies are implementing autonomous decisions using deep learning (i.e., stage five of the Sentient Enterprise) to the foundational capabilities necessary to take a deep learning initiative beyond a siloed science experiment.

Enterprises that are leading the way in deep learning all have some degree of capability around stages one through four of the Sentient Enterprise.  

1. Agile Data Platform

In stage one, the agile data platform creates a balanced, decentralized framework for data. Data that is heavily reused and shared throughout the enterprise (customers, products, orders, inventory) is delivered as a highly reliable, trustworthy, and easy-to-use service. Agility comes from the quick reusability of defined data structures, along with more flexible ways to engage with the data, such as cloud bursting, data labs, and sandboxes that promote experimentation. Our work in AI has focused on driving outcomes related to acquiring or retaining customers, enhancing revenue, reducing risk, or improving operational efficiencies. The agile data platform is the system of record for this core data.
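As a rough illustration of this pattern, here is a minimal Python sketch of governed core data feeding a disposable sandbox for experimentation. The table names and the use of SQLite are hypothetical stand-ins chosen so the example runs anywhere; an enterprise platform would expose the same pattern through its own connectors.

```python
# A minimal sketch of the "core data as a service" pattern: governed,
# reusable tables feed a sandbox where analysts can experiment freely.
# Table names are hypothetical; sqlite3 is used only so the example runs
# anywhere, not as a stand-in for any specific enterprise platform.
import sqlite3

conn = sqlite3.connect(":memory:")

# Governed core data: the shared system of record (customers, orders, ...).
conn.execute("CREATE TABLE core_orders (customer_id INT, order_total REAL)")
conn.executemany(
    "INSERT INTO core_orders VALUES (?, ?)",
    [(1, 120.0), (1, 35.5), (2, 99.9)],
)

# Sandbox: a disposable workspace derived from the governed tables,
# where new features can be prototyped without touching the core data.
conn.execute(
    """
    CREATE TABLE sandbox_customer_value AS
    SELECT customer_id, SUM(order_total) AS lifetime_value
    FROM core_orders
    GROUP BY customer_id
    """
)

for row in conn.execute("SELECT * FROM sandbox_customer_value"):
    print(row)
```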

2. Behavioral Data Platform

The behavioral data platform captures insights not just from transactions, but also from mapping complex interactions around the behavior of people, networks, and devices. Such data sources include machine logs, web logs, and sensor readings. Previously intractable data sets such as images, videos, and audio files are increasingly being harnessed to analyze and optimize outcomes. For example, regularly captured images of a jet engine, combined with data in the agile data platform about maintenance and operations, can be used to spot component deterioration, optimize asset uptime, and increase safety.
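To make the jet engine example concrete, here is a minimal sketch of how an image-derived deterioration score might be blended with maintenance records. The scoring function is a hypothetical stand-in for a trained deep learning model, and all names, data, and thresholds are illustrative, not part of any real product API.

```python
# A minimal sketch of the jet-engine example: an image-derived deterioration
# score (a hypothetical stand-in for a trained deep learning model) is
# combined with maintenance records from the agile data platform to
# prioritize inspections. All identifiers and figures are illustrative.
from dataclasses import dataclass


@dataclass
class EngineRecord:
    engine_id: str
    hours_since_overhaul: float
    image_path: str


def score_deterioration(image_path: str) -> float:
    """Stand-in for a deep neural network that scores component wear
    from an inspection photo (0 = pristine, 1 = severe wear)."""
    return 0.8 if "blade_wear" in image_path else 0.1


records = [
    EngineRecord("ENG-001", 4200.0, "images/eng001_blade_wear.jpg"),
    EngineRecord("ENG-002", 900.0, "images/eng002_clean.jpg"),
]

# Blend the behavioral signal (image score) with the transactional one
# (hours since overhaul) to flag engines for inspection.
for rec in records:
    risk = score_deterioration(rec.image_path) * (rec.hours_since_overhaul / 5000.0)
    if risk > 0.5:
        print(f"{rec.engine_id}: schedule inspection (risk={risk:.2f})")
    else:
        print(f"{rec.engine_id}: within normal limits (risk={risk:.2f})")
```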


Behavioral data requires an augmented approach to managing data on top of the agile data platform, as the extreme volumes and the volatility of the schema require a different design pattern and a different set of economics.
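One common expression of that different design pattern is schema-on-read: semi-structured events are stored as they arrive and interpreted at query time, tolerating fields that appear, disappear, or change over time. The sketch below uses made-up sensor log payloads to illustrate the idea.

```python
# A minimal sketch of schema-on-read for behavioral data: machine and web
# logs arrive as semi-structured JSON whose fields drift over time, so the
# schema is applied when the data is read rather than enforced on write.
# The log payloads below are illustrative.
import json

raw_events = [
    '{"device": "sensor-7", "ts": "2018-03-01T10:00:00", "temp_c": 71.2}',
    '{"device": "sensor-7", "ts": "2018-03-01T10:05:00", "temp_c": 74.8, "vibration": 0.31}',
    '{"device": "gateway-2", "ts": "2018-03-01T10:05:02", "status": "rebooted"}',
]

# Tolerate missing or newly added fields instead of rejecting records
# that do not match a fixed relational layout.
for line in raw_events:
    event = json.loads(line)
    temp = event.get("temp_c")          # absent for some device types
    vibration = event.get("vibration")  # added later in the schema's life
    print(event["device"], event["ts"], temp, vibration)
```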

Our work in AI has incorporated at least some forms of semi-structured data, and many projects have benefited from deep neural network analysis of rich media data, such as images.

3. Collaborative Ideation Platform

When you mention “scale” in the context of data and analytics, most people think of technology that processes analytics quickly against massive data sets. An often-overlooked aspect of scaling is how to leverage the talents and willingness of people across the enterprise to help label, certify, and annotate the metadata (data about data) that is essential to finding and using ever-expanding sources of new data.

We hear the term “Goldilocks governance” quite a bit now: some data still requires a very strict process (e.g., financials), while floods of other new data need just the right amount of governance, somewhere between locked down and the Wild West. This newer form of governance is being achieved through self-service and crowdsourced techniques that democratize the data and remove the bottlenecks that impede usage.
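As a sketch of what crowdsourced annotation under tiered governance might look like, the following hypothetical catalog entry lets business users tag and certify lightly governed data sets while strictly governed ones stay behind a formal change process. The structure and field names are assumptions made for illustration.

```python
# A minimal sketch of "Goldilocks governance": each data set carries a
# governance tier, and lightly governed sets accept crowdsourced tags and
# certifications from business users instead of waiting on a central team.
# The catalog structure is hypothetical, for illustration only.
from dataclasses import dataclass, field


@dataclass
class CatalogEntry:
    name: str
    tier: str                              # "strict" (e.g. financials) or "self-service"
    tags: set = field(default_factory=set)
    certified_by: list = field(default_factory=list)

    def add_tag(self, user: str, tag: str) -> None:
        if self.tier == "strict":
            raise PermissionError("Strictly governed data requires a formal change process")
        self.tags.add(tag)
        self.certified_by.append(user)


catalog = [
    CatalogEntry("general_ledger", tier="strict"),
    CatalogEntry("clickstream_2018_q1", tier="self-service"),
]

catalog[1].add_tag("analyst_42", "contains_bot_traffic")
print(catalog[1].tags, catalog[1].certified_by)
```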

4. Analytical Application Platform

Too many companies struggle to bring new models into production and to monitor the performance of their models already in production. If an organization is struggling to bring classic statistical models to production, it should probably address this before investing in more advanced forms of analytics.

Models that perform well in the lab can behave very differently in a production environment where they are relied upon to make decisions. To maintain, manage, and monitor them, you will need to develop an AnalyticOps culture and capability, which applies the principles of DevOps and IT operations to data science so companies can better manage their analytics resources and projects.
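As a rough sketch of the AnalyticOps idea, the snippet below wraps production scoring in routine monitoring so that IT operations can watch model health and escalate to data scientists only when behavior drifts. The baseline, tolerance, and accuracy figures are illustrative assumptions.

```python
# A minimal sketch of AnalyticOps-style monitoring: production model health
# is checked on a schedule, with routine results handled by IT operations
# and degradations escalated to data science. Thresholds are illustrative.
def monitor_model(batch_accuracy: float,
                  baseline_accuracy: float = 0.90,
                  tolerance: float = 0.05) -> str:
    """Compare the latest scored batch against the accuracy measured
    when the model was promoted to production."""
    if batch_accuracy < baseline_accuracy - tolerance:
        return "ALERT: accuracy degraded -- escalate to data science for retraining"
    return "OK: model within expected range -- handled by IT operations"


# Example: weekly accuracy measured on a labeled hold-back sample.
for week, acc in enumerate([0.91, 0.89, 0.82], start=1):
    print(f"week {week}: {monitor_model(acc)}")
```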

Organizations should be smart about their data science resources, putting applied analytics in the hands of IT operations so data scientists only need to be called in when something unusual happens. Though building an AnalyticOps capability may seem like overkill at the start of an AI project, it will be essential for scaling that project down the road.

Once you have at least some of these foundational capabilities in place, you can start accelerating initiatives around autonomous decisions.



About the author: Chad Meley

Chad Meley is Vice President of Solutions Marketing at Teradata, responsible for Teradata’s Artificial Intelligence, IoT, and CX solutions.

Chad understands trends in machine & deep learning, and leads a team of technology specialists who interpret the needs and expectations of customers while also working with Teradata engineers, consulting teams and technology partners.
 
Prior to joining Teradata, he led Electronic Arts’ Data Platform organization. Chad has held a variety of other leadership roles centered around data and analytics while at Dell and FedEx.
 
Chad holds a BA in economics from The University of Texas and an MBA from Texas Tech University, and he performed postgraduate work at The University of Texas.
 
Professional awards include Best Practice Award for Driving Business Results in Data Warehousing from The Data Warehouse Institute and the Marketing Excellence Award from the Direct Marketing Association. He is a regular speaker at conferences, including O’Reilly’s AI Conference, Strata, DataWorks, and Analytics Universe. Chad is the coauthor of the book Achieving Real Business Outcomes From Artificial Intelligence published by O'Reilly Media, and a frequent contributor to publications such as Forbes, CIO Magazine, and Datanami.

