On June 28, 2019, Charles Harbor, in response to Robert Seiner's Data Governance and Metadata Management Automation Webinar, said the following:
One of the techniques that I've had success with in the past is a little more on the invasive side of DG, and deals with master reference data - setting up a parameterized report that ships to, say, the product manager when a new product starts being used in production. These folks like to dig into the details to confirm that the new product is being used and referenced correctly, that all of the financials are flowing properly, and that the slice-and-dice analytic capabilities are working as the data flows through the pipeline for the first time. We'll set the report parameter to the new product number, and the report comes with enough detail that the business folks can track it. After 2 or 3 weeks, when they're comfortable that everything's working, we'll take the parameter out (the report won't run without an input parameter), and life goes on.
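For the curious, here's a rough sketch of what that tracking report boils down to. The table, column, and parameter names are illustrative (they aren't from the original post), and sqlite3 just stands in for whatever driver your warehouse uses:

```python
# Rough sketch of the "new product" tracking report (table, column, and
# parameter names are hypothetical, not from the original post).
import sqlite3  # stand-in for whatever driver your warehouse uses

def new_product_report(conn, product_number):
    """Pull the detail rows for a single product so the product manager can
    verify financials and slice-and-dice behavior on the first few loads."""
    return conn.execute(
        "SELECT order_date, region, quantity, revenue "
        "FROM sales_fact "
        "WHERE product_number = ? "
        "ORDER BY order_date",
        (product_number,),
    ).fetchall()

# The scheduled job passes the new product number as its only parameter;
# once the business signs off, the parameter is removed and the report
# simply stops running.
```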
A less invasive approach uses capabilities built into some of the toolsets - an automated job that runs before the ETL process, checks the metadata of the entire warehouse (syscols and systables), and compares it against yesterday's entry. If there's an unplanned change, it tells us the differences and stops the load. This is more of a DevOps approach to ETL, but it has kept us from stepping on our own toes several times (and took all of about 4 hours to create).
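A minimal sketch of that pre-ETL drift check looks something like the following. The sqlite3 driver, database path, snapshot file, and catalog query are all stand-ins; on the real platform you'd query systables/syscolumns (or your warehouse's equivalent) through your own driver and scheduler:

```python
# Minimal sketch of a pre-ETL metadata drift check. The catalog query here
# uses SQLite's sqlite_master/pragma_table_info as a stand-in for the real
# system catalog (systables/syscolumns or equivalent).
import json
import sqlite3
import sys

SNAPSHOT_FILE = "warehouse_metadata_yesterday.json"  # hypothetical path

def current_metadata(conn):
    """Return {table: [[column, type], ...]} from the system catalog."""
    rows = conn.execute(
        "SELECT m.name, p.name, p.type "
        "FROM sqlite_master m, pragma_table_info(m.name) p "
        "WHERE m.type = 'table' ORDER BY m.name, p.cid"
    )
    meta = {}
    for table, column, coltype in rows:
        meta.setdefault(table, []).append([column, coltype])
    return meta

def main():
    conn = sqlite3.connect("warehouse.db")  # stand-in connection
    today = current_metadata(conn)
    try:
        with open(SNAPSHOT_FILE) as fh:
            yesterday = json.load(fh)
    except FileNotFoundError:
        yesterday = today  # first run: nothing to compare against yet

    if today != yesterday:
        added_or_dropped = set(today) ^ set(yesterday)
        altered = {t for t in set(today) & set(yesterday)
                   if today[t] != yesterday[t]}
        print("Unplanned metadata change in:",
              ", ".join(sorted(added_or_dropped | altered)))
        sys.exit(1)  # nonzero exit tells the scheduler to stop the load

    # No drift: refresh the snapshot and let the ETL job proceed.
    with open(SNAPSHOT_FILE, "w") as fh:
        json.dump(today, fh)

if __name__ == "__main__":
    main()
```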
Thank you, Robert, for all that you do to encourage these kinds of discussions. It's appreciated out here in the cheap seats.
I am including this conversation in a different thread so that it does not get lost.
Freelance Production Assistant
Freelance Data, Technology and Science Writer