Improving PCM performance
Nothing new on the maintenance side. We host our web servers on AWS, and we do restarts/cache-clearing once or twice a month. So on the environment-upkeep front, Oracle approves of our approach.
We have yet to achieve a performance improvement on the PCM module. At my last organization we achieved a significant improvement on the PPM and PC modules by purging data from certain tables, but there is no similar option for PCM.
We have a couple of SRs open with Oracle, and their folks visited our campus. We are trying to chalk out plans for improvement. One recommendation we received is to purge our historical item data, which we are still thinking over.
The major issues that plague PCM are:
1. Earlier it was not possible to update data for sourcing projects with more than 1000 items, but we have now resolved that with parameter changes in the Agile application.jar file.
2. Other actions take far longer than expected (almost triple the time), and sometimes they fail altogether. For example:
a. Save As on a QSP: fails, or shows a white screen, if the existing project has more than roughly 500 items.
b. Import Items on a sourcing project: for 5000 items it takes almost double the time the Oracle folks managed on their system.
c. Export QSP from the Analysis tab: takes 1700 seconds compared to the prescribed 93.
There are other affected actions as well. So it is pretty much a work in progress at the moment.
Will share if we get a breakthrough.
It sounds like you are seeing the same problems that Agile PC had when you put more than 500 affected items on a change. I know of a company that put 5000 affected items on a change and wanted to know why that change couldn't be viewed (it froze the application server; this was back in the Agile 8.5 days).
I am certain Agile PC is better about it now, but given the amount of data you are pulling in for each related item, it can take up a LOT of memory. So if you give the application a larger memory cache, things get better, up to a point. If it were only a memory issue, you would be done (assuming you had sufficient memory on your server). So there must be other things going on, such as a maximum list size somewhere being exceeded. Managing lots of data in memory isn't too hard, but you have to make sure you don't overwrite anything that is in use (by you or anyone else). So yes, keep this thread up to date on any improvements you can get.
Yes, a pretty fair assessment. The whole comparison came into the picture once Oracle's analysts tried to reproduce all the action points we reported and were able to complete them in far less time. Then again, theirs is an ideal environment with fewer customized process extensions than ours.
One thing I personally found helpful was increasing the session timeout in the admin jar inside the Application.ear file. It helps PXes that operate on large data sets run to completion. Performance also varies greatly between clustered and non-clustered environments.
As of now, we are thinking of writing custom code for a few of the actions. Will share once we achieve a breakthrough.
This problem is resolved in Agile 9.3.3. The issue is that when you invoke a project object, all project objects get loaded into the cache.
You can use an archiving-like approach: add a flag attribute with some value to old projects, then revoke Discovery and Read privileges from all users for the projects carrying that flag, so those projects no longer get loaded.
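The flag-and-revoke idea above can be sketched as a two-step update. This is only an illustration against a hypothetical schema (the `project`, `archived_flag`, and `user_privilege` names are invented for the example); in a real Agile instance you would set the flag through the UI or SDK and adjust privilege criteria in the Java Client, not with raw SQL:

```python
import sqlite3

# Hypothetical schema -- real Agile PLM tables and privilege storage differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE project (id INTEGER PRIMARY KEY, name TEXT, archived_flag INTEGER DEFAULT 0);
CREATE TABLE user_privilege (user_id INTEGER, project_id INTEGER, privilege TEXT);
""")
conn.executemany("INSERT INTO project (id, name) VALUES (?, ?)",
                 [(1, "OldProject"), (2, "ActiveProject")])
conn.executemany("INSERT INTO user_privilege VALUES (?, ?, ?)",
                 [(10, 1, "Discovery"), (10, 1, "Read"), (10, 2, "Read")])

# Step 1: mark old projects with the archive flag.
conn.execute("UPDATE project SET archived_flag = 1 WHERE name = 'OldProject'")

# Step 2: revoke Discovery/Read privileges for flagged projects so they
# no longer show up (and no longer get pulled into users' caches).
conn.execute("""
DELETE FROM user_privilege
WHERE privilege IN ('Discovery', 'Read')
  AND project_id IN (SELECT id FROM project WHERE archived_flag = 1)
""")
conn.commit()

remaining = conn.execute(
    "SELECT user_id, project_id, privilege FROM user_privilege").fetchall()
print(remaining)  # only the ActiveProject privilege should remain
```

The point of revoking Discovery as well as Read is that users can no longer even find the archived projects, which keeps them out of searches and caches.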
Just trying to revive this old thread, as I am actively working on this again. One purge idea that comes to mind at the moment is purging the priceline and price data older than 5 years.
Has anyone tried this? If yes, I would like to know the approach you followed: deleting from the back end or from the front end?
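For the back-end route, the purge amounts to a date-cutoff delete. A minimal sketch of that logic, using invented table/column names (`price_line`, `last_updated`) since I cannot vouch for the actual Agile PCM schema; any real back-end delete should be agreed with Oracle support and rehearsed against a backup first:

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical price-line table -- the real Agile PCM schema differs.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE price_line (id INTEGER PRIMARY KEY, item TEXT, last_updated TEXT)")

now = datetime(2016, 1, 1)
rows = [
    (1, "ITEM-A", (now - timedelta(days=6 * 365)).isoformat()),  # ~6 years old -> purge
    (2, "ITEM-B", (now - timedelta(days=2 * 365)).isoformat()),  # ~2 years old -> keep
]
conn.executemany("INSERT INTO price_line VALUES (?, ?, ?)", rows)

# Delete everything last updated more than 5 years before the cutoff date.
# ISO-8601 timestamps compare correctly as strings.
cutoff = (now - timedelta(days=5 * 365)).isoformat()
cur = conn.execute("DELETE FROM price_line WHERE last_updated < ?", (cutoff,))
conn.commit()
print("purged", cur.rowcount, "rows")  # purged 1 rows
```

Whatever the actual tables are, the same pattern applies: count the candidate rows first with a SELECT using the identical WHERE clause, verify the number looks sane, then run the DELETE inside a transaction.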