When It Comes to Building Trusted AI Outcomes, Knowledge Must Precede Speed
And right now the shortest route to that understanding appears to lie in data cataloging: the ingestion, registration, description, and validation of data. Here we are starting to see a number of tools, such as Microsoft Azure Data Catalog and Tableau Data Catalog, enter the market, promising to bring the focus back to the front end of the pipeline without enforcing (or interfering with) existing data warehousing, master data management, or governance requirements.

Enterprise cloud data management heavyweight Informatica has certainly been an active proponent of data intelligence through ideas like cataloging (alongside data management, quality, governance, and security) for some time now. But unlike many analytics- or platform-centric rivals, Informatica has a portfolio broad enough to market its Enterprise Data Catalog not only standalone but also in the context of data governance, analytics, application modernization, and other key initiatives: not as an isolated cure-all for data distrust, but as a trust-increasing component within the enterprise data pipeline, right next to standalone data governance, data preparation, data integration, data quality, data protection, and data operationalization.

When it comes to AI-based decisions, this kind of data-first value chain is of particular importance for a simple reason: AI is an iterative, communal endeavor among data analysts, engineers, scientists, and other stakeholders.
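To make the four cataloging steps concrete, here is a minimal, vendor-neutral sketch of ingestion, registration, description, and validation. Every class, method, and field name below is hypothetical, chosen for illustration only; it is not the API of any product mentioned above.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    # Hypothetical catalog record: name, ingestion source, business
    # description, and an expected schema (column name -> Python type).
    name: str
    source: str
    description: str = ""
    schema: dict = field(default_factory=dict)

class DataCatalog:
    """Illustrative in-memory catalog, not any vendor's implementation."""

    def __init__(self):
        self.entries = {}

    def register(self, entry: CatalogEntry):
        # Registration: make the ingested dataset discoverable by name.
        self.entries[entry.name] = entry

    def describe(self, name: str, description: str):
        # Description: attach human-readable business context.
        self.entries[name].description = description

    def validate(self, name: str, rows: list) -> bool:
        # Validation: check ingested rows against the registered schema.
        schema = self.entries[name].schema
        return all(
            set(row) == set(schema)
            and all(isinstance(row[col], typ) for col, typ in schema.items())
            for row in rows
        )

# Usage: register a dataset ingested from a (hypothetical) source,
# describe it, then validate a sample of rows against its schema.
catalog = DataCatalog()
catalog.register(CatalogEntry(
    name="sales",
    source="s3://example-bucket/sales.csv",
    schema={"region": str, "amount": float},
))
catalog.describe("sales", "Quarterly sales by region, owned by Finance.")
ok = catalog.validate("sales", [{"region": "EMEA", "amount": 1200.0}])
```

The point of the sketch is the ordering: data is registered and described before anything downstream consumes it, so validation failures surface at the front end of the pipeline rather than in an analyst's dashboard.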
Dec-27-2019, 20:28:57 GMT