
Enterprise Data Quality for Product Data

This is the fourth and final blog in a series about Agile 9.3.2’s integration with Oracle’s Enterprise Data Quality for Product Data (EDQP). The series walks through the Agile PLM 9.3.2 and EDQP Integration White Paper to explore the integration’s setup and capabilities. This edition focuses on section 4 of the whitepaper.

Section 4:  Enriching the Agile PLM Dataset with EDQP

The main objectives of section 4 are to:

  1. Validate the necessary attributes exist in the database
  2. Set up the semantic model
  3. Use AutoBuild to create a data lens & evaluate in Knowledge Studio
  4. Enrich the Agile data
  5. Import the data back into Agile

Prior to starting the series, Agile PLM 9.3.2 and EDQP’s Admin and Transform servers were installed, so their installation will not be covered. An entire blog series could be written on each product, and that is beyond the intended scope of this series.

Validate the necessary attributes exist in the database

This blog, and the whitepaper, are based on the demo database provided to Oracle business partners. Prior to writing this blog I downloaded the demo database, upgraded it to 9.3.2, and reran the SQL scripts as outlined in the prior blog. This step verifies that the content in the database is as expected:

  1. Log into the Agile JavaClient
  2. Navigate to Admin tab | Settings | Data Settings | Classes and open the Capacitors subclass
  3. Click the User Interface tab
  4. Double click the Capacitor Attributes (page 3) row
  5. Click the Attributes: Page Three tab
  6. Verify the following attributes are defined: Material, Package, Temperature Characteristics, Tolerance, Capacitance and Voltage.

To eliminate possible issues, I renamed a couple of my attributes to align with the documentation, as they contained additional text. The picture below shows what I started with; unlike the whitepaper, I only had 52 capacitors, but that should be enough to test with.

Validating Agile Attributes

If you do not have the demo database, the whitepaper has a screenshot that shows the APIName and type of each attribute, so you should be able to reproduce the subclass structure without any issues.
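The verification step above amounts to a set comparison: the attributes the EDQP jobs expect versus the attributes the subclass actually defines. A minimal Python sketch of that check; the hard-coded list stands in for whatever source you pull the defined attributes from (an Agile SDK query or a JavaClient export), and `missing_attributes` is a hypothetical helper name:

```python
# Attributes the whitepaper expects on the Capacitors subclass (page 3).
REQUIRED_ATTRIBUTES = {
    "Material", "Package", "Temperature Characteristics",
    "Tolerance", "Capacitance", "Voltage",
}

def missing_attributes(defined):
    """Return the required attributes not present in `defined`, sorted."""
    return sorted(REQUIRED_ATTRIBUTES - set(defined))

# Example: a subclass that is missing the Voltage attribute.
defined = ["Material", "Package", "Temperature Characteristics",
           "Tolerance", "Capacitance"]
print(missing_attributes(defined))  # ['Voltage']
```

An empty result means the subclass is ready for the EDQP jobs that follow.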

Set up the semantic model

Earlier in the series, we installed the Excel add-in for EDQP; now it’s time to use it.

  1. Start Excel. A new workbook should open by default; if for some reason yours does not, create one
  2. Navigate to the Add-Ins tab | DataLens Tools dropdown | Set Transform Server
  3. Log into EDQP and accept the server group
  4. Select Job Options from the DataLens Tools dropdown
  5. Select APLM_CREATE_SEMANTIC_MODEL from the DSA list and click OK
  6. Run the job and enter Capacitor in the popup dialog to select all objects of type Capacitor. At this point I received errors stating that a database connection could not be established. The documentation was incorrect: the database connection name set up earlier should be APLM_CONNECTOR, not PLM_CONNECTOR!

At this point, the Agile data will be populated in Excel and is ready for AutoBuild.

Create a DataLens using AutoBuild & evaluate it in Knowledge Studio

AutoBuild is a great feature to speed the creation of a DataLens. It can build out terms and phrases and put together some classification rules. The usage and capabilities of Knowledge Studio are outside the scope of this blog, but it is a very interesting tool, and AutoBuild helps one get started.

  1. Next to the DataLens Tools dropdown in Excel is a button for AutoBuild. With the results above loaded in the workbook, click the button
  2. A series of dialog boxes will show. In this exercise, we only care about the first and last. On the first dialog, select the “Generate a new DataLens” radio button, then click Next to transition through the dialogs until a name can be entered for the data lens. Give the lens a name. It looks like you must select a name from the dropdown, but it is a combo box that will accept user input. Click Finish
  3. In Knowledge Studio, you will notice some terms in orange and some in white.  The terms in white are recognized and the ones in orange are not.
  4. Watch the video to see how to add an alias to CER so that “ceramic” becomes a recognized term. There is a lot of capability built into Knowledge Studio, and recognizing terms and phrases is just the beginning. After that comes standardization. It is a very powerful tool!

This exercise helps illustrate how one could start manipulating Agile data. The example data lens APLM_Capacitors was deployed in the earlier blogs, so this particular lens will not serve a further purpose.
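The alias step above can be pictured as a lookup from variant spellings to a standard term. This is a toy Python illustration only; `ALIASES` and `standardize` are hypothetical names, and a real Knowledge Studio lens is far richer than a lookup table:

```python
# Map unrecognized variants to the standard term, in the spirit of adding
# "ceramic" as an alias of CER in Knowledge Studio.
ALIASES = {"ceramic": "CER", "cer": "CER"}

def standardize(token):
    """Return the standard term for a token, or None if unrecognized."""
    return ALIASES.get(token.lower())

print(standardize("Ceramic"))   # CER
print(standardize("tantalum"))  # None (orange in Knowledge Studio)
```

Unrecognized tokens are exactly the ones Knowledge Studio flags in orange; every alias you add shrinks that set.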

Enrich the Agile data

There are several steps to enriching the Agile data: extract the data, update it, and prepare it for import.  Here are the steps:

  1. Extracting the data
    1. Open Job Options
    2. Select the APLM_CREATE_PRODUCTION_BATCH DSA, click ok
    3. You are prompted for the subclass(es) and the last update date. I entered “Capacitor|2000-JAN-01 00:00:00” and clicked OK
    4. Run the DSA and Excel will extract Agile data and create a tab called 20-Batch_Creation_Details
    5. Make a note of the value in the Job Id column, you will need this for the next step
  2. Updating the data
    1. With the newly extracted data selected (tab 20-Batch_Creation_Details), open Job Options
    2. Select the APLM_CLEANSE_PRODUCTION_BATCH DSA
    3. Run the job and enter the Job Id from step 1.5 above. This is where the custom data lens gets called to transform the data. The gap in the whitepaper is that a mature version of the AutoBuild lens created above could eventually be called in this step rather than the supplied lens (see picture below)
    4. The job produces some new tabs in the workbook. If you look at the new content, you will see that it is comprised of name-value pairs in adjacent columns rather than having an attribute name as the column heading and its value as the cell value. Luckily there is a utility to correct this.
  3. Preparing for import
    1. After transformation, the last step is to prepare the data to be reimported into Agile.  Click on the new worksheet created by the cleansing DSA
    2. Notice that around column F is where attributes start being displayed as name-value pairs – this is where we will have the Add-In start its correction
    3. Right click on any cell and select DataLens Services | Group Records into Worksheets
    4. Enter column F when prompted
    5. Select a filename for the new content
    6. Open the new spreadsheet and notice that attribute names are now column headings.  This data should be ready for import!
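The “Group Records into Worksheets” correction is essentially a pivot: fixed fields stay put, and the alternating name/value cells from column F onward become attribute columns. A minimal Python sketch of that reshaping, assuming the pairs start at column F (index 5); `group_record` and the sample row are hypothetical:

```python
def group_record(row, start=5):
    """Split a cleansed row into its fixed fields and an attribute dict.

    Cells before `start` are fixed fields; from `start` onward the cells
    alternate attribute name, attribute value.
    """
    fixed = row[:start]
    pairs = row[start:]
    attrs = {}
    for name, value in zip(pairs[0::2], pairs[1::2]):
        if name:  # skip empty trailing pair slots
            attrs[name] = value
    return fixed, attrs

# Illustrative row: five fixed cells, then three name/value pairs.
row = ["CAP-001", "Capacitor", "Released", "A", "2024",
       "Material", "CER", "Voltage", "50V", "Tolerance", "5%"]
fixed, attrs = group_record(row)
print(attrs)  # {'Material': 'CER', 'Voltage': '50V', 'Tolerance': '5%'}
```

Once every row is reshaped this way, the attribute names become column headings, which is exactly the layout the Agile import expects.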

edqp to agile app studio

 

Import the data back into Agile

I decided not to go into Agile import because it is such a common thing to do, but at this stage, the hard part is done.  Refer to the Agile Import/Export guide if you have questions on this step.

 

Final Thoughts on the Integration

Overall I really like what I have seen of EDQP.  It is a niche in and of itself and even though I have worked with it before, I am sure I have only scratched the surface of its capabilities. 

When I titled this blog “Discovery” I truly meant it. I did not know where the series would lead and did very little homework first. After going through the document, I am left feeling satisfied but a little empty inside. I really had hopes for a tighter coupling with EDQP, such as leveraging the extension framework for real-time validation or updating of attributes, but the capability isn’t there. Instead we are left to either write our own solution or use batch processing. In the meantime, I am thankful for the tools that Oracle did provide and am still hopeful that future releases will have a tighter integration.

What are your thoughts?

The Video

Here is the video for section 4 of the whitepaper:

 

 

