Kevin Cummings's Profile
Agile Angel
2667
points

Questions
2

Answers
139

I have been working in the Agile PLM space for over 16 years, mostly focusing on data migration and database work. Having explored the schema for most of my Agile career, I probably know how it works better than most outside of Oracle Engineering.
Title: Senior Technical Consultant
Company: Kalypso Consulting
Agile Version: 6.x, 7.x, 8.x, 9.x
  • Agile Angel Asked 1 hour ago in Other APIs.

    CONTENT_URL stores the link to the indexing content for the file. I cannot say for certain that a null CONTENT_URL means the file has not been indexed, but that is the obvious conclusion. IFS_FILEPATH stores the directory/file name for where the file is located on the primary file manager server. If the file is also located on a distributed/local file manager, there will be a value in HFS_FILEPATH.
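    A minimal sketch of how you might use these columns, using an in-memory SQLite table as a stand-in. The column names come from the answer above; the table name FILES is an assumption, so check it against your own Agile schema before relying on it.

```python
import sqlite3

# Stand-in for the Agile file table. The table name FILES is an
# assumption -- only the three columns are taken from the answer above.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE FILES (
    ID INTEGER PRIMARY KEY,
    CONTENT_URL TEXT,    -- link to indexing content (NULL => likely not indexed)
    IFS_FILEPATH TEXT,   -- path on the primary file manager
    HFS_FILEPATH TEXT    -- path on a distributed/local file manager, if replicated
)""")
con.executemany("INSERT INTO FILES VALUES (?,?,?,?)", [
    (1, "idx/0001.txt", "2018/03/doc1.pdf", None),
    (2, None,           "2018/03/doc2.pdf", "local/doc2.pdf"),
])

# Files that have (probably) not been indexed yet
not_indexed = con.execute(
    "SELECT ID, IFS_FILEPATH FROM FILES WHERE CONTENT_URL IS NULL"
).fetchall()
print(not_indexed)  # -> [(2, '2018/03/doc2.pdf')]
```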

    • 12 views
    • 1 answer
    • 0 votes
  • As Steve mentioned, no one outside of Oracle knows how the script works (and most within don’t either). I have never seen it, and I don’t even know of anyone who has had it executed on their system. But it *is* a big stick, as the penalties are severe.
     As I understand it, simply being able to discover and read things does not constitute usage of a license. Creating, managing and processing PSRs and QCRs through a workflow does constitute usage of a license. Then again, that is just my assumption based on how things were done long ago.
     Since we have no idea how the script works, you might send a question to Support (or, if you are friendly with a salesperson, ask them), but I doubt you would get an answer.

    This answer was accepted by Matt Paulhus 3 days ago. Earned 15 points.

    • 48 views
    • 2 answers
    • 0 votes
  • Agile Angel Asked on July 4, 2018 in Agile PLM (v9).

    As Swagoto stated, the query to get the history data is against SCHEDULED_EVENT_TRACKING (SET). Information for the event itself is in SCHEDULED_EVENT (SE), and it links to the history data using SE.ID = SET.EVENT_ID.
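    A quick sketch of that join, run against throwaway in-memory tables via SQLite. The table names and the SE.ID = SET.EVENT_ID link come from the answer above; the sample columns and data are made up for illustration. Note the alias `setr` rather than `SET`, since SET is a reserved word in SQL.

```python
import sqlite3

# Throwaway stand-ins for the two Agile tables; real tables have many
# more columns, only the linking keys here are from the answer above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE SCHEDULED_EVENT (ID INTEGER PRIMARY KEY, NAME TEXT)")
con.execute("CREATE TABLE SCHEDULED_EVENT_TRACKING (EVENT_ID INTEGER, DETAIL TEXT)")
con.execute("INSERT INTO SCHEDULED_EVENT VALUES (100, 'Nightly export')")
con.execute("INSERT INTO SCHEDULED_EVENT_TRACKING VALUES (100, 'ran OK')")

# Event joined to its history rows via SE.ID = SET.EVENT_ID
rows = con.execute("""
    SELECT se.NAME, setr.DETAIL
    FROM SCHEDULED_EVENT se
    JOIN SCHEDULED_EVENT_TRACKING setr ON se.ID = setr.EVENT_ID
""").fetchall()
print(rows)  # -> [('Nightly export', 'ran OK')]
```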

    • 54 views
    • 2 answers
    • 0 votes
  • Agile Angel Asked on July 2, 2018 in Other APIs.

    Swagoto is correct. You can access the REV table, but it isn’t needed. If neither of the internal change attributes in the ITEM table are set, it is because there are no released changes against the item, and therefore it is at Preliminary status. Also note, there is no “Introductory” revision in the database, just the dummy record where change = 0. The status on that record is always “Preliminary”.

    • 55 views
    • 3 answers
    • 0 votes
  • Can you search on the item number you highlighted and find it? Since it is displaying the item number for the ECR, it is unlikely that the item has actually been hard-deleted. But something about the item record may be causing the issue (such as the ECO linked to the latest revision of the part).

     The first thing to do is run Averify, and if there are actual errors listed, send the log file to Oracle Support and find out what they have to say about them. Otherwise it could be a number of things, and you would need to have someone who knows the database look at it.

    If nothing else, start SQL debug logging on your database (during a quiet period), open the ECR, click on the item, close the SQL debug log, and then run the queries from the log file that reference either the ECR or the item number against the database. If you get an error on a specific SQL query, you can look at it further to see what the problem is.

    • 62 views
    • 3 answers
    • 0 votes
  • You stated “getting correct number”. Did you mean “NOT getting correct number”?
     Make sure that the object is linked to the correct auto-number sequence; if more than one auto-number source is configured, it may be getting the number from a sequence you are not expecting.

    • 64 views
    • 1 answer
    • 0 votes
  • Arif is correct in that the base table is ACTIVITY. Note that there is an attribute called TEMPLATE in the ACTIVITY table that denotes which records are for project templates versus real projects (0=No, 1=Yes, 2=Proposed).
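    A small sketch of filtering on that flag, against an in-memory SQLite stand-in. The TEMPLATE values (0=No, 1=Yes, 2=Proposed) come from the answer above; the other columns and sample data are invented for illustration.

```python
import sqlite3

# Stand-in for the ACTIVITY table; TEMPLATE flag values are from the
# answer above (0=No, 1=Yes, 2=Proposed), the rest is illustrative.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ACTIVITY (ID INTEGER PRIMARY KEY, NAME TEXT, TEMPLATE INTEGER)")
con.executemany("INSERT INTO ACTIVITY VALUES (?,?,?)", [
    (1, "Phone launch",   0),  # real project
    (2, "NPI template",   1),  # project template
    (3, "Draft template", 2),  # proposed template
])

# Real projects only: exclude templates and proposed templates
real_projects = con.execute(
    "SELECT NAME FROM ACTIVITY WHERE TEMPLATE = 0"
).fetchall()
print(real_projects)  # -> [('Phone launch',)]
```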

    • 77 views
    • 2 answers
    • 0 votes
  • AML data starts with the MANU_BY table. AGILE_PART is the ID of the item. MANU_PART is the ID of the manufacturer part (in the MANU_PART table), and MANU_ID is the ID of that part’s manufacturer (in the MANUFACTURERS table).
     The CHANGE_IN/CHANGE_OUT logic is a bit complicated, but it works as follows: to get the data for a given revision, gather the change IDs for the revision you are interested in and all previously released revisions (in this case, both ECO and MCO changes). Then look for any row in MANU_BY where CHANGE_IN is zero or is equal to one of that set of change IDs, and where CHANGE_OUT is zero or not in the set of change IDs. This will give you all AML records that were active as of the latest revision in the set.
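    The CHANGE_IN/CHANGE_OUT rule can be paraphrased in plain Python. This is only a sketch of the filtering logic described above, not Agile code; in practice you would express the same conditions in a SQL WHERE clause against MANU_BY, and the sample rows here are invented.

```python
# change_ids: the change IDs for the revision of interest plus all
# previously released revisions (both ECO and MCO changes).
def active_aml_rows(rows, change_ids):
    """Return the MANU_BY rows active as of the latest revision in change_ids."""
    return [
        r for r in rows
        if (r["CHANGE_IN"] == 0 or r["CHANGE_IN"] in change_ids)
        and (r["CHANGE_OUT"] == 0 or r["CHANGE_OUT"] not in change_ids)
    ]

# Invented sample rows, trimmed to the two relevant columns plus a key.
rows = [
    {"MANU_PART": 10, "CHANGE_IN": 0,   "CHANGE_OUT": 0},    # on AML since creation
    {"MANU_PART": 11, "CHANGE_IN": 500, "CHANGE_OUT": 0},    # added at change 500
    {"MANU_PART": 12, "CHANGE_IN": 0,   "CHANGE_OUT": 500},  # removed at change 500
    {"MANU_PART": 13, "CHANGE_IN": 600, "CHANGE_OUT": 0},    # added by a later change
]
active = active_aml_rows(rows, {400, 500})
print([r["MANU_PART"] for r in active])  # -> [10, 11]
```

    Row 12 drops out because its CHANGE_OUT is in the set, and row 13 because its CHANGE_IN is a change that has not been released as of this revision.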

    • 272 views
    • 1 answer
    • 0 votes
  • Any time you get an error like this, it is usually because Agile *always* expects to get a value but doesn’t get one. So the first thing I would recommend is to run Averify, and also check the users and the affected items to make sure that they exist. Glad you found the issue!

    • 377 views
    • 3 answers
    • 0 votes
  • As Steve Jones stated, I have only ever seen the number of affected items affect how fast an ECO can be released, never the depth of the BOM. Note that you are not releasing the entire BOM tree of an assembly, only its direct BOM, and this is true for each affected item. How big the direct BOM is can certainly be a factor, but unless sub-assemblies are also included as affected items, BOMs at the next level down will not affect the processing of the ECO. I would never put more than 500 affected items on an ECO, and only then if the client has a rather robust Agile environment; I always recommend fewer than 200. I still remember a client that had a single data-load ECO with 7500+ affected items on it. And of course, users just *had* to go look at it every once in a while, and the system would grind to a halt.

     As Paritosh pointed out, environment resources also play a factor. If your environment does not have a very large heap size, you can quickly swamp it with a lot of affected items on an ECO, as it has to pull things in, then push other things out, to process the next affected item. It doesn’t take a lot of time, but it does take time.

     Showing a screen shot of the exploded BOM tree for 1243 (with numbers appropriately blurred) would be good. Even better would be a screen shot of the performance metrics of your Agile server while you are releasing the ECO. After that, you might have to dig rather deeper into what is going on. Are there a number of required fields that must be checked? That can slow things down a bit (even more with large BOMs, because each component must also be checked), but that only applies to the direct BOM.

    • 254 views
    • 4 answers
    • 0 votes