The Post-Pandemic Supply Chain: Time to Go Back to Basics?

Martin Wilcox’s recent data mesh blog outlines four key requirements prompting the use of concepts such as data mesh, domain-driven design (DDD), and object-oriented (OO) principles:
  • Agility in reacting to ever-evolving needs and demands from discovery analytics applications
  • An increasing number of variables needed to refine and resolve analytic capabilities
  • Delivering these capabilities at a reasonable cost while maintaining the desired level of governance
  • Incrementally delivering capabilities and automation
Due to the incredible challenges posed by the pandemic over the past 16 months, Fortune 500 customers are pressing to refine their supply chains: to account for changing demand profiles, introduce the right products in the right markets at the right time, offset commodity price fluctuations and availability constraints, shift consumption from brick-and-mortar to online channels, and, most of all, maintain high customer satisfaction.

If properly applied, streamlined, data- and analytics-enabled digitization and automation can address these new and more pressing demands on the supply chain. But please, readers, do not get overly starry-eyed about new, nifty terms like data mesh without considering that this may truly be the time to go “back to basics” -- to leverage data and analytics in a more intelligent and integrated fashion.

The supply chain is the hub of all value chain activities across CPG and manufacturing organizations. Capabilities within the supply chain are intertwined and interdependent, and managing and operating these supply chains is highly complex. Historically, when a specific problem arose, organizations rushed to solve that point problem, soon realizing that not accounting for the macro view translates to undesired outcomes. For example, in our efforts to minimize wasted investment in excess inventory, we may forget that inventory outcomes have many causes and drivers:
  • Did we build an accurate forecast?
  • Did we account for the changing dynamics of consumption?
  • Did the supply plans align with inventory plans, lead times, and MRP steps?
  • Was inventory planned by location in a granular and time-sensitive manner?
  • How were logistics capacity and operations planned?

The supply chain is a perfect candidate for deploying DDD and OO concepts: it is the classic example of an area large and complex enough to benefit from a framework that decomposes a complex problem into smaller domains and data products. These can then be aligned to business processes, the KPIs they influence, and deployable analytics techniques. Across all of these use cases, analytics, AI, and ML techniques follow a predictable path, and the steps can be made repeatable via another technique we will touch on, feature engineering; a sketch follows below.
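To make the repeatability point concrete, here is a minimal feature-engineering sketch in Python with pandas. It is purely illustrative and assumes nothing about any specific platform or API: the order history, column names, and feature definitions are all hypothetical.

import pandas as pd

# Hypothetical weekly order history for two SKUs; in practice this would
# come from the integrated supply chain data foundation.
orders = pd.DataFrame({
    "sku":  ["A1", "A1", "A1", "B2", "B2", "B2"],
    "week": pd.to_datetime(["2021-03-01", "2021-03-08", "2021-03-15"] * 2),
    "qty":  [100, 120, 90, 40, 55, 50],
})

# Engineer features once, per SKU, so every downstream model (demand
# forecasting, safety-stock setting) consumes the same definitions.
features = (
    orders.sort_values(["sku", "week"])
          .assign(
              # lagged demand: last week's quantity for the same SKU
              qty_lag_1=lambda d: d.groupby("sku")["qty"].shift(1),
              # rolling mean smooths week-to-week noise
              qty_roll_2=lambda d: d.groupby("sku")["qty"]
                                    .transform(lambda s: s.rolling(2).mean()),
          )
)
print(features)

Because the feature definitions live in one place, a change such as lengthening the rolling window propagates consistently to every model that consumes them.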

As we decompose supply chain questions and analyses, like those listed above, into their component data parts, we arrive at a finite set of data points that can be containerized, assembled, carefully integrated, and connected back to the business events and activities of your supply chain; the sketch below shows what such a decomposition might look like.
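The following Python sketch models that decomposition under DDD assumptions: each business domain owns a small data product with an explicit schema, and consumers integrate products rather than raw feeds. The domain names, product names, and schemas are hypothetical placeholders, not a prescribed model.

from dataclasses import dataclass

# Each business domain owns and publishes data products with an explicit
# schema; consumers integrate products rather than point-to-point feeds.
@dataclass
class DataProduct:
    name: str           # what the product exposes, e.g. "demand_forecast"
    owner_domain: str   # the domain accountable for quality and governance
    schema: tuple       # the attributes consumers can rely on

# Decomposing the excess-inventory question into its component data parts:
inventory_drivers = [
    DataProduct("demand_forecast", "demand_planning",
                ("sku", "location", "week", "forecast_qty", "actual_qty")),
    DataProduct("supply_plan", "supply_planning",
                ("sku", "plant", "week", "planned_qty", "lead_time_days")),
    DataProduct("inventory_position", "inventory_management",
                ("sku", "location", "date", "on_hand_qty", "target_qty")),
    DataProduct("logistics_capacity", "logistics",
                ("lane", "week", "capacity_units", "booked_units")),
]

for p in inventory_drivers:
    print(f"{p.owner_domain} owns '{p.name}' with schema {p.schema}")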

Case study: A large auto manufacturer invested heavily in building a data warehouse. The original architects of the warehouse deployed a thoughtful technology model and enabled multi-million-dollar cost savings by standardizing supplier pricing and payment policies. The original intent was to expand the same data footprint to adjacent areas of analytics. Then a separate Sales and Operations Planning (S&OP) initiative, funded by one of the business segments, implemented a point solution that leveraged some data points from the warehouse while constructing additional data intakes from other ERPs and shop-floor management solutions. This resulted in the first data-stream spaghetti. Next, finance built another silo: an adjacent set of data feeds to solve an accounts receivable (AR) opportunity. And so on.

Four years into the journey, the organization is living in a data-pipeline jungle: complexity and interdependencies across all data feeds, limited governance, and, most importantly, varying answers from different systems. The new leadership is back at the drawing board, attempting to streamline the ecosystem and modernize the platform. Modernization can be accomplished simply by going back to the fundamentals: Just-in-Time (JIT) data sourcing, appropriate attention to data engineering, governance structure, and simplification. Organizations have evolved over time; now, after all the disruption, we have an unprecedented opportunity to step back, rewrite, and change things.

Now, if creating a way to retool and get back to basics in your supply chain is motivating to you, stay tuned for my next post: The Post-Pandemic Supply Chain -- How to Build Resiliency into Our Decisioning.

Author: Rajnesh Tangri

Rajnesh leads data and analytics solutions for Automotive and CPG. He has been with Teradata since September 2014, joining from Hanes Brands, where he spent eight years leading their BI and analytics organization. His special projects and interests include corporate financial valuations and commodities trading.
