Why is a cost code path corrupted when applying a template?

We have an issue where, after applying a cost code template to a project, the hierarchy path is sometimes incomplete: it is missing the entire parent path and shows only the cost code's name. The issue is only visible in the application by viewing the System Path field inside the form, or by running a SQL query, because the system path in the T_TRICOSTCODE table is correct; it is the object path field in the IBS_SPEC table that is incomplete. The issue has little consequence unless you are using the rollup fields, in which case the corrupt cost code path will cause a posted transaction to fail.

If there are customers who use cost codes heavily, you can try running the following SQL; if it returns any rows, the issue is present at some level in your environment. It is not necessary to use Apply Template to create your cost codes: I have heard of others hitting the issue where their cost codes are created via an integration. This SQL is written for Oracle and may need a tweak for SQL Server. If any customers can run this and check whether they have the issue, it may help us identify how it happens.

SELECT tripathsy, triprojectnamesy, object_path
FROM   t_tricostcode T1, ibs_spec T2
WHERE  T1.spec_id IN (SELECT spec_id
                      FROM   ibs_spec
                      WHERE  type_name = 'triCostCode'
                      AND    object_path NOT LIKE '%\Cost Code%')
AND    T1.spec_id = T2.spec_id
AND    tripathsy LIKE '%\Cost Code%'
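To gauge how widespread the issue is, a companion count query may help. This is a sketch built from the same tables and columns as the query above (not from the original post); adjust for your environment:

```sql
-- Sketch: count cost codes whose IBS_SPEC object path lost its parent path
-- while the T_TRICOSTCODE system path is still correct.
SELECT COUNT(*)
FROM   t_tricostcode T1, ibs_spec T2
WHERE  T1.spec_id = T2.spec_id
AND    T2.type_name = 'triCostCode'
AND    T2.object_path NOT LIKE '%\Cost Code%'
AND    T1.tripathsy LIKE '%\Cost Code%'
```

A result of zero means the corruption is not present in your data.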

[Admin: To see other related posts, use the Cost Code tag or Templates tag.]

Continue reading


IV96536: Revising contract causes double-posted cost amount

After revising and reissuing a standard contract, the cost amount becomes cumulative against the budget.

As a temporary fix, do not revise the contract; instead, post a change order to the contract. The fix required setting the reverse transaction flag to "true" in three workflows. Moving forward, for capital projects, the standard contract will no longer double-post values after revising.

[Admin: To see other related posts, use the Costs tag or Change Orders tag.]

Continue reading

How do you revert an IBM TRIRIGA upgrade to a previous version?

How can I revert an IBM TRIRIGA upgrade? Is there any way to do so? I need to be prepared to revert my system to a previous state in case there are system problems after an IBM TRIRIGA upgrade.

There is no uninstaller or code to revert or downgrade your current IBM TRIRIGA Application or Platform version to a previous version. You must keep a reliable, preferably offline, backup of the database (cold backup) in case you need to revert to a previous version.

Important note: Any new user or agent transactions that occurred since the backup will be lost when you roll back the database. Bottom line: To manually "revert" to a previous IBM TRIRIGA version, you must…

[Admin: This post is related to the 06.10.16 post about object labels and revisions.]

Continue reading

Why aren’t the group permissions to apply a template working?

I’m attempting to give one of our groups permissions to apply a RE Transaction Plan template and it isn’t working.

I've given them "Read, Update, Create and Delete" on the triRETransactionPlan template, plus all actions under "Application Access" and "Form Action Access". They also have full access to the triRETransactionPlan business object and full permissions under "Application Access" and "Form Action Access". Everything under both of those folders is set to "Inherited from Parent". What am I missing?

They were given permissions to the template, but not to the triActionForm business object. Access to triActionForm is required for the template popup to open, so grant permissions to triActionForm as well.

Continue reading

IV87340: Inefficient SQL used when financial rollups are executed

The use of the IN clause is grossly inefficient, and it is used not once but twice. At first, it was causing full table scans. Creating indexes helped, but that is a band-aid. The query should be rewritten, since all of the tables can easily be joined on the transaction_id column.
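For illustration only, the suggested rewrite replaces repeated IN subqueries with joins on transaction_id. The table and column names below are hypothetical, not the platform's actual rollup SQL:

```sql
-- Before (sketch): two IN subqueries, each forcing a separate pass
SELECT SUM(t.amount)
FROM   trans t
WHERE  t.transaction_id IN (SELECT transaction_id FROM trans_header WHERE status = 'POSTED')
AND    t.transaction_id IN (SELECT transaction_id FROM trans_budget WHERE budget_id = :budget_id);

-- After (sketch): all tables joined directly on transaction_id
SELECT SUM(t.amount)
FROM   trans t
JOIN   trans_header h ON h.transaction_id = t.transaction_id AND h.status = 'POSTED'
JOIN   trans_budget b ON b.transaction_id = t.transaction_id AND b.budget_id = :budget_id;
```

The joined form gives the optimizer a single plan over all three tables instead of two semi-join passes.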

Indexes have been added, and budget data access now goes through SQL logging, so its timings appear in the debug performance logs. Moving forward, this resolves a performance issue with financial rollups and budget transactions. The SQL is as efficient as it can be; rewriting with joins did not produce a better query plan.

Continue reading

How can you diagnose memory issues in TRIRIGA servers?

There are a number of parts to the TRIRIGA Platform that, when left unchecked or poorly configured, can contribute to a large memory footprint on the application and process servers, and cause the server to get into an Out of Memory situation in which the TRIRIGA server crashes.

Footprint Contributors

The following is a non-exhaustive list of items that can greatly pressure the heap memory on the application/process servers:

  • Workflow Instance: When set to ALWAYS, workflow instance save (WF_INSTANCE_SAVE) will consume a large amount of memory on the application server, as well as slow down the performance of workflows and actions by 3x or more. It should only be set to ALWAYS if you are actively debugging workflows. Do not leave it set to ALWAYS for longer than you need.
  • BIRT Reporting: When exporting large data sets in BIRT, the BIRT engine itself will consume a large amount of heap memory.
  • DataConnect Task: When writing a workflow with the DataConnect task, take care in the Transaction section to commit after a small number of records: no more than 10, and possibly as low as 1. This is a setting you will need to tune for each integration.
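As a reference point, the workflow instance setting above lives in TRIRIGAWEB.properties. This is a sketch; the valid values can vary by platform version, so check the documentation for yours:

```
# TRIRIGAWEB.properties (sketch)
# Keep workflow instance saving restricted outside of active debugging.
WF_INSTANCE_SAVE=ERRORS_ONLY
# Set to ALWAYS only while debugging workflows, then revert.
```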

Diagnosing Out of Memory Situations

When an Out of Memory error occurs, typically the only recourse is to restart the application/process servers. At the point of the Out of Memory error, a heap dump should be generated:

  • WebSphere Liberty: The heap output file is created in the default directory ${server.output.dir}.
  • Traditional WebSphere: The heap output file is created in the default directory ${WAS_HOME}/profiles/${ProfileName}.
  • Oracle WebLogic: The heap output file is created in the directory from which the Java process was launched.
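If no dump is being produced automatically, HotSpot-based JVMs (for example, WebLogic on the Oracle JDK) can be told to write one on Out of Memory with standard JVM options; IBM J9 JVMs under WebSphere typically produce a heapdump on Out of Memory by default. The path below is an example, not a required location:

```
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/opt/dumps
```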

Once the heap dump has been obtained, you can analyze it by using the Eclipse Memory Analyzer (MAT). Note that MAT itself will consume a very large amount of memory if your heap dump is large (6 GB or more). Your workstation should have at least 16 GB of RAM; close all other applications, and configure the Eclipse config.ini to give MAT its own max heap size of 15 GB (-Xmx15G).
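For example, the max heap setting is a JVM argument placed after the -vmargs marker in MAT's launcher .ini file (MemoryAnalyzer.ini for the standalone tool; the exact file name depends on how MAT is installed):

```
-vmargs
-Xmx15G
```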

The "Overview" tab gives you high-level insight into what the heap contains. Typically, the first- or second-level objects begin to explain what consumed the heap. The following are examples of Workflow Instance and BIRT Out of Memory heaps, respectively. Take a look at the "Problem Suspect" section, and you can identify how the main area of the heap was consumed…

[Admin: This post is related to the 04.26.16 post about performance when workflow instances are saved, the 04.06.16 post about transaction logs growing too quickly, and the 09.11.15 post about triggering deadlocks when saving workflow instances.]

Continue reading