Data is one of the most valuable assets of the 21st century, and businesses are desperately looking for ways to organize it and unlock its value. The topic is extremely broad, covering everything from how to collect information and extract valuable pieces of data, to how to access up-to-date information and use it for decision support and reasoning. This is obviously a very short list, and I’m sure you can bring up more topics related to data management and business data usage.
The topic of data usage comes up often as I speak to people in manufacturing companies and enterprise software vendors. While many speak about data, the topic seems to be very new, and companies don’t know much about what can be done. Two of the most frequently used terms in the industry (both construction and manufacturing) are digital twin and digital thread. There is quite a lot of marketing around these two concepts, but the industry seems to agree that both are strategic drivers for getting more value from data.
In the last few weeks, I have had multiple discussions with companies and vendors about the value of data assets and how data can be collected, analyzed, and reused in manufacturing business cases. Examples include a variety of product analytics such as failure analysis, supplier risk mitigation, and cost analysis. Large multinational OEMs with multiple divisions are looking for brand-specific analytics in their product data as well as global product data analytics.
The topic is interesting and very close to the work I was involved in for the last decade – creating semantic models and using these models in product data management, data standards, and collaboration across multiple companies.
Matthias Ahrens of HELLA posted an interesting message on LinkedIn asking about the creation of “Enterprise Language” for the interpretation of the created and managed information. Here is the passage that caught my attention.
A federated PLM Landscape requires an “Enterprise Language” for the interpretation of the created and managed information. The dynamic fragments of information within the different IT systems must be “translated” continuously towards the common language resp. semantic. By that the individual fragments can be connected in an upper Knowledge Graph to gain new Knowledge from their contexts. As example a full traceability along the entire R-F-L-P Chain can be realized across multiple IT Systems.
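The idea in the passage above – translating fragments from different IT systems into a common semantic and connecting them in an upper knowledge graph – can be sketched in a few lines. This is a minimal illustration, not any real system’s API; all entity names, predicates, and the R-F-L-P identifiers are hypothetical.

```python
# Fragments of product data as three different IT systems might expose them,
# already "translated" into a shared vocabulary of (subject, predicate, object) triples.
requirements_system = [("REQ-100", "isSatisfiedBy", "FUNC-21")]
functional_system = [("FUNC-21", "isRealizedBy", "LOG-7")]
logical_system = [("LOG-7", "isImplementedBy", "PART-5501")]

# The "upper" knowledge graph is simply the union of the translated fragments.
graph = requirements_system + functional_system + logical_system

def trace(graph, start):
    """Follow outgoing links to build a traceability chain across systems."""
    chain = [start]
    current = start
    while True:
        next_nodes = [o for (s, p, o) in graph if s == current]
        if not next_nodes:
            break
        current = next_nodes[0]
        chain.append(current)
    return chain

# Full R-F-L-P traceability from a requirement down to a physical part:
print(trace(graph, "REQ-100"))  # ['REQ-100', 'FUNC-21', 'LOG-7', 'PART-5501']
```

The point of the sketch is that once fragments share a common language, traceability queries become simple graph traversals, regardless of which system each fragment came from.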
The post also references an Ontotext article about the history and development of the semantic web technology stack, including RDF/OWL and related technologies that were used then and still touch a nerve with people focused on building rich data models, information sets, and knowledge representations. Check this article too – Semantic Web – 20 years later.
Another project that, in my view, is very close to being a prototype for the ideas of an enterprise data model and vocabulary is schema.org. If you have never heard about it, schema.org is a reference website that publishes documentation and guidelines for using structured data markup on web pages (called microdata). Its main objective is to standardize HTML tags used by webmasters for creating rich results (displayed as visual data or infographic tables on search engine results pages) about a certain topic of interest. It is part of the semantic web project, which aims to make document markup more readable and meaningful to both humans and machines. Schema.org was launched on June 2, 2011 by Bing, Google, and Yahoo! (operators of the world’s largest search engines at that time) to create and support a common set of schemas for structured data markup on web pages. Usage of schema.org grew strongly between 2012 and 2015, but adoption has slowed down recently. Read more about it here (https://schema.org/) and here (https://en.wikipedia.org/wiki/Schema.org).
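To make the schema.org idea concrete, here is a small sketch that builds a product description using real schema.org types (`Product`, `Organization`, `Offer`) and serializes it as JSON-LD – one of the markup formats search engines accept. The product name, SKU, and manufacturer are invented for illustration.

```python
import json

# A hypothetical product described with schema.org vocabulary.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Headlamp Assembly",
    "sku": "HL-2020-001",
    "manufacturer": {"@type": "Organization", "name": "Example OEM"},
    "offers": {"@type": "Offer", "price": "149.00", "priceCurrency": "USD"},
}

# Serialized as JSON-LD, this is the markup a webmaster would embed in a
# <script type="application/ld+json"> tag for search engines to pick up.
markup = json.dumps(product, indent=2)
print(markup)
```

Because every site uses the same vocabulary, a crawler can interpret “name”, “sku”, and “price” identically everywhere – exactly the kind of shared semantics an enterprise data language would need.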
One of the most interesting outcomes inspired by semantic web development is the Knowledge Graph. I’ve blogged about this topic multiple times, and I think the Knowledge Graph has the potential to be developed into a model that can sustain rich product data representation – one that can be used to model complex product and manufacturing solutions as well as to provide a platform for future data analytics and consolidation. Check two of my articles.
Another super interesting semantic web project is The Linked Open Data cloud. It contains datasets that have been published in the Linked Data format. The cloud currently includes 1,301 datasets connected by 16,283 links (as of May 2020). It is a really interesting project and demonstrates the capabilities of semantic web technologies for building global data schemas.
Semantic web, data modeling, ontologies, and knowledge graphs are very interesting technologies, but in my view, in order to move the needle forward and find pragmatic uses for these technologies in enterprise deployment and data management, we need to find answers to the following burning questions (or problems): 1/ how to create a universal product development and manufacturing data model, and 2/ how to find an agreement between companies on how to standardize semantic data elements. Both problems are expensive data management projects, and therefore the third important question is 3/ how to find a business model to support such development.
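Question 2/ above – agreeing on standardized semantic data elements – boils down to mapping each vendor’s naming into one shared vocabulary. The sketch below shows the mechanics with two invented vendor schemas; the field names and vendors are hypothetical, and a real mapping would cover far more than three attributes.

```python
# A shared "enterprise language": each vendor's field names are mapped to a
# common semantic model. Vendors and field names are invented for illustration.
COMMON_VOCABULARY = {
    "vendor_a": {"PartNo": "part_number", "Desc": "description", "Qty": "quantity"},
    "vendor_b": {"item_id": "part_number", "item_text": "description", "count": "quantity"},
}

def to_common(vendor, record):
    """Translate a vendor-specific record into the common semantic model."""
    mapping = COMMON_VOCABULARY[vendor]
    return {mapping[key]: value for key, value in record.items() if key in mapping}

# Two systems describing the same part with different field names...
a = to_common("vendor_a", {"PartNo": "5501", "Desc": "Bracket", "Qty": 4})
b = to_common("vendor_b", {"item_id": "5501", "item_text": "Bracket", "count": 4})

# ...become identical once expressed in the common vocabulary.
assert a == b
print(a)
```

The code is trivial; the expensive part, as the questions above suggest, is getting companies to agree on the `COMMON_VOCABULARY` itself and funding its maintenance.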
What is my conclusion?
Data problems are only partially technological. The creation of a global enterprise language, data models, and schemas to represent R-F-L-P models is heavily impacted by companies’ interest in keeping their data close to their chest. Both software vendors and manufacturing companies are only very slowly opening up to discussions about how to share their data. One possible way to change the status quo is to collect publicly available manufacturing information and use it to fuel the intelligence of existing software products. If this approach works, then we can find a way to define a universal data language and, most importantly, a business model for using the data in global manufacturing applications. The problem is too big to ignore. Just my thoughts…
Disclaimer: I’m co-founder and CEO of OpenBOM developing a digital network-based platform that manages product data and connects manufacturers, construction companies, and their supply chain networks. My opinion can be unintentionally biased.
Image credit: LOD project