How To Build A Connected Lifecycle?

Connected is such a powerful word. It feels good, and we like to be connected these days. There are many systems that help us stay connected – websites, social networks, messaging applications, video chat… you name it. However, when it comes to enterprise applications and connections between engineering teams, manufacturing companies, contractors, and suppliers, the reality is actually pretty grim. PLM vendors just marked a decade of “Cloud PLM” adoption, and each vendor reports on cloud readiness, cloud PLM, and SaaS, but most of these systems are hosted single-tenant PLM systems. The cloud architecture of these systems has nothing to do with connection and data sharing. Imagine you had a database and simply hosted that same database on AWS, MS Azure, GCP, or any other hosting service. It would remain an isolated database.

Earlier this year, I shared my thoughts about the future of connected PLM, new data architectures, and a single source of truth (SSOT) – check my article here. The idea of connected PLM systems is actually very interesting. In a nutshell, it means that you can establish a process that spans multiple teams, organizations, and companies. The idea has some legs, but the realization is far from ideal. All major existing PLM systems build an SSOT that relies on a single (SQL) database and single tenancy. So, what is the solution?

My attention was caught by an article by Thomas Kamps of ConWeaver GmbH – What to consider if you want to build a Connected Lifecycle. ConWeaver is an outfit focused on building EKG (enterprise knowledge graph) solutions for manufacturing companies. Mr. Kamps bashes PLM vendors for being misleading when selling the PLM vision:

PLM vendors typically sell a zoo of systems along the product lifecycle. Even together, this does not make them a PLM solution. Why? Because they are just a collection of systems. What you need is business context materialized by means of linked data across the systems. Thus, it is a little misleading if PLM vendors say they sell PLM solutions, to begin with.

The article compares multiple approaches to connecting siloed processes, such as the traditional federation architecture (hub), graph databases (which Mr. Kamps calls a Knowledge Graph), and OSLC-based standardization. Mr. Kamps completely missed the multi-tenant data management architecture of solutions such as Autodesk Forge, Onshape, and OpenBOM. Mr. Kamps’s favorite is the ConWeaver EKG (Enterprise Knowledge Graph) approach. According to Mr. Kamps, the ConWeaver EKG is a completely decoupled data management system that collects information from all existing systems (PLM, ERP, etc.) and builds an independent knowledge graph from the entire set of enterprise data.
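Since multi-tenancy comes up again below, here is a minimal, hypothetical sketch of what a tenant-aware shared graph looks like. It is illustrative only – not the actual architecture of Forge, Onshape, OpenBOM, or ConWeaver:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Edge:
    tenant: str    # the organization that owns (and may see) this link
    source: str
    relation: str
    target: str


# One shared graph serving several organizations.
GRAPH = {
    Edge("acme", "part-1", "suppliedBy", "supplier-9"),
    Edge("globex", "part-7", "suppliedBy", "supplier-9"),
}


def query(tenant: str) -> list[Edge]:
    """Tenant-scoped access: each organization sees only its own slice."""
    return [e for e in GRAPH if e.tenant == tenant]


print(query("acme"))  # acme's links only; globex's data stays invisible
```

The point is the `tenant` attribute on every record: without it, the same storage is just a single-owner data hub.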

The idea is to establish a separate linked data layer (the graph) on top of the authoring systems, connecting PLM (and other) metadata. This process can, to a very large extent, be automated so that manual effort is minimized. Moreover, such graphs are general graphs that offer capabilities such as inferencing processes that go far beyond hierarchic approaches, especially beyond those that link from a PDM hub into other authoring systems. Essential for such a feature is the complete decoupling of the linked data layer from the authoring systems and a scheduled automatic updating process that takes into account ongoing changes in the data systems.
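To make the quoted pattern concrete, here is a minimal sketch of a decoupled linking layer with a scheduled refresh. The connector function is hypothetical, standing in for a real PLM or ERP API integration:

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Node:
    system: str     # authoring system that owns the record, e.g. "PLM"
    record_id: str


@dataclass
class LinkedDataLayer:
    """A graph of links kept outside the authoring systems themselves."""
    links: set = field(default_factory=set)

    def refresh(self, connectors) -> None:
        # Scheduled job: re-pull metadata and rebuild the links, keeping
        # the layer decoupled from, yet current with, the source systems.
        self.links.clear()
        for connector in connectors:
            for triple in connector():
                self.links.add(triple)


# Hypothetical connector; a real one would call a PLM or ERP API.
def plm_connector():
    yield (Node("PLM", "EBOM-100"), "implements", Node("ERP", "MBOM-100"))


layer = LinkedDataLayer()
layer.refresh([plm_connector])
print(layer.links)
```

The authoring systems keep owning their records; the layer only holds links between them and can be rebuilt at any time.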

The idea has some legs. Back in 2010, I was building a semantic data layer that accumulated data from multiple engineering data sources (CAD, PDM, PLM) and created applications on top of an RDF triple store. The data engine behind the storage was the database known today as Amazon Neptune. Data modeling was done with RDF/OWL, and we used the SPARQL query language for data retrieval to support business processes.
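For readers unfamiliar with that stack, here is a small, purely illustrative example using Python’s rdflib library. The namespace and properties are invented for the example, not the actual 2010 data model:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Illustrative namespace -- not a real data model.
EX = Namespace("http://example.org/plm/")

g = Graph()
g.bind("ex", EX)

# Metadata pulled from different engineering sources lands in one triple store.
g.add((EX.bracket_001, RDF.type, EX.Part))
g.add((EX.bracket_001, EX.sourceSystem, Literal("PDM")))
g.add((EX.bracket_001, EX.usedIn, EX.assembly_42))
g.add((EX.assembly_42, RDF.type, EX.Assembly))
g.add((EX.assembly_42, EX.sourceSystem, Literal("PLM")))

# SPARQL retrieves linked data regardless of which system authored it.
query = """
PREFIX ex: <http://example.org/plm/>
SELECT ?part ?assembly WHERE {
    ?part a ex:Part ;
          ex:usedIn ?assembly .
}
"""
for row in g.query(query):
    print(row.part, "is used in", row.assembly)
```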

It is not clear what data layer ConWeaver uses to model the EKG, because Mr. Kamps considers technological debates hairsplitting and not very helpful. Instead of focusing on standard data representation, ConWeaver focuses on fully automatic data generation and tuning of data models. The approach seems interesting, but the details matter.
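As a toy illustration of what “automatic” link generation might look like, records from two systems can be linked when they share a normalized identifier. ConWeaver’s actual matching and tuning are not public and are surely far more sophisticated:

```python
# Link PLM and ERP records automatically via a normalized part number.
def normalize(part_number: str) -> str:
    return part_number.replace("-", "").replace(" ", "").upper()


plm_records = [{"id": "P1", "part_number": "AB-100"}]
erp_records = [{"id": "E7", "part_number": "ab 100"}]

links = [
    (p["id"], "sameAs", e["id"])
    for p in plm_records
    for e in erp_records
    if normalize(p["part_number"]) == normalize(e["part_number"])
]
print(links)  # [('P1', 'sameAs', 'E7')]
```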

It is also not clear what the EKG in ConWeaver actually is, how it can be accessed, and, most importantly, whether the system and data architecture behind the EKG can allow multiple organizations to access this information. As much as I like the EKG idea, if the EKG storage has no notion of a tenant, the result is yet another data hub automatically accumulating data from multiple systems – which can be good for analytics, but not for the lifecycle. Another concerning point is that the EKG approach ignores the importance of an SSOT when it comes to the data. This came across in a comment Mr. Kamps made in one of the LinkedIn discussions:

The Enterprise Knowledge Graph solution accepts variety and thus allows the SSOT to reside in the verticals. But what if it is about, e.g., baselining? Some customers maintain mechatronic data in three different systems. Where should the baseline then be located? Perhaps on the EKG level. Other customers will handle the data differently, and so the decision will be another one. Thus, the question of where the SSOT resides can probably not be answered in a dogmatic way but can be decided by the customer depending on their needs. I do not consider SSOT an important issue w.r.t. EKGs.

The idea of the EKG is a good one, but I see the disconnect between the knowledge graph and the SSOT as a major problem. There is nothing more important in the connected lifecycle than building data trust. As I mentioned in my article What is PLM Circa 2020s, the single source of truth is now distributed:

Things are changing and the truth is now distributed. It is not located in a single database in a single company. It lives in multiple places and it updates all the time. The pace of business is always getting faster. To support such an environment, companies need to move onto different types of systems – ones that are global, online, optimized to use information coming from multiple sources, and capable of interacting in real-time.

Here are three fundamental principles for building a connected lifecycle in manufacturing (a toy sketch of how they combine follows the list):

1- Real-time data linking (the data does not reside in a single place)
2- Distributed organization (there are multiple organizations involved in product development)
3- Connected processes (the process is heavily distributed and can span multiple organizations)
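Here is a minimal sketch of how the three principles fit together; the names (DataRef, ChangeOrder) are hypothetical, standing in for whatever a real platform would use:

```python
from dataclasses import dataclass


# (1) Real-time data linking: a record is a live reference, not a copy.
@dataclass(frozen=True)
class DataRef:
    org: str       # (2) distributed organization: each ref names its owner
    system: str
    record_id: str


# (3) Connected processes: one change order spans records owned by
# several organizations, resolved live rather than replicated.
@dataclass
class ChangeOrder:
    affected: list[DataRef]

    def organizations(self) -> set[str]:
        return {ref.org for ref in self.affected}


eco = ChangeOrder(affected=[
    DataRef("oem", "PLM", "EBOM-100"),
    DataRef("supplier-a", "ERP", "PO-553"),
])
print(eco.organizations())  # the process spans both companies
```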

What is my conclusion?

The architecture of future connected lifecycle applications will be different from today’s siloed data architectures. Data management technologies are evolving, and, combined with modern cloud architecture, they can provide a very robust data platform for establishing a network layer that connects processes based on product information across the lifecycle. The next decade will be a very interesting time for the transformation of legacy siloed data architectures and monolithic PLM systems into new connected network-based platforms. Just my thoughts…

Best, Oleg

Disclaimer: I’m co-founder and CEO of OpenBOM, developing a digital network-based platform that manages product data and connects manufacturers and their supply chain networks. My opinion can be unintentionally biased.
