How can you diagnose memory issues in TRIRIGA servers?


Several parts of the TRIRIGA Platform, when left unchecked or poorly configured, can contribute to a large memory footprint on the application and process servers and drive a server into an Out of Memory situation in which the TRIRIGA server crashes.

Footprint Contributors

The following is a non-exhaustive list of items that can put significant pressure on the heap memory of the application/process servers:

  • Workflow Instance: When the workflow instance save setting (WF_INSTANCE_SAVE) is set to ALWAYS, it consumes a large amount of memory on the application server and slows the performance of workflows and actions by 3x or more. Set it to ALWAYS only while you are actively debugging workflows, and do not leave it at ALWAYS any longer than you need to. (See the properties sketch after this list.)
  • BIRT Reporting: When exporting large data sets in BIRT, the BIRT engine itself will consume a large amount of heap memory.
  • DataConnect Task: When writing a workflow with the DataConnect task, take care in the Transaction section to commit after a small number of records, no more than 10 and possibly as low as 1. This is a setting you will need to tune for each integration.
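
As a minimal sketch, the workflow instance setting lives in TRIRIGAWEB.properties on each application/process server. The ERRORS_ONLY value shown here as the day-to-day setting is an assumption; confirm the supported values for your platform version before changing it:

    # TRIRIGAWEB.properties -- workflow instance recording
    # ALWAYS records every workflow instance and should be used only while
    # actively debugging workflows; revert to a lighter setting when done.
    WF_INSTANCE_SAVE=ERRORS_ONLY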

Diagnosing Out of Memory Situations

When an Out of Memory error occurs, typically the only recourse is to restart the application/process servers. At the point of the Out of Memory, a heap dump should be generated (see the JVM option sketch after this list):

  • WebSphere Liberty: The heap output file is created in the default directory ${server.output.dir}.
  • Traditional WebSphere: The heap output file is created in the default directory ${WAS_HOME}/profiles/${ProfileName}.
  • Oracle WebLogic: The heap output file is created in the directory from which the Java process was launched.
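
If no dump appears in those locations, the JVM may not be configured to produce one. As a hedged sketch: on HotSpot JVMs (typically used with Oracle WebLogic), the standard options below request an .hprof heap dump on OutOfMemoryError; on the IBM J9 JVMs that usually back WebSphere, a heapdump.phd and javacore are normally produced by default, and the -Xdump line is only an illustration of how that default behavior can be tuned. The /opt/dumps path is a placeholder; verify the exact options against your JVM's documentation.

    # HotSpot (e.g., Oracle WebLogic): write an .hprof heap dump on OutOfMemoryError
    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/opt/dumps

    # IBM J9 (WebSphere/Liberty): heap dumps are generated by default on OOM;
    # -Xdump can adjust that behavior, for example:
    -Xdump:heap:events=systhrow,filter=java/lang/OutOfMemoryError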

Once the heap dump has been obtained, you can analyze it with the Eclipse Memory Analyzer (MAT). Note that MAT itself consumes a very large amount of memory when the heap dump is large (6 GB or more). Your workstation should have at least 16 GB of RAM, you should close all other applications, and you should give MAT its own larger maximum heap size, for example 15 GB (-Xmx15g), in its launcher .ini file (MemoryAnalyzer.ini for the standalone tool, or eclipse.ini when MAT runs as an Eclipse plugin).
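
As a minimal sketch of that launcher change, assuming the standalone MAT distribution whose launcher file is MemoryAnalyzer.ini: append the heap option after the existing -vmargs marker and leave everything above it as shipped.

    -vmargs
    -Xmx15g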

The “Overview” tab gives you high-level insight into what the heap contains.  Typically, the first- or second-level objects begin to explain what consumed the heap. The following are examples of Workflow Instance and BIRT Out of Memory heaps, respectively. Take a look at the “Problem Suspect” section, and you can identify how the main area of the heap was consumed…
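
If the dump is too large to open interactively, MAT also ships a batch parser that can produce the same Leak Suspects report from the command line. This is a sketch under the assumption of a HotSpot-format dump named heapdump.hprof placed in the MAT install directory; the .phd dumps produced by IBM JVMs require the IBM DTFJ adapter to be installed in MAT first.

    # Parse the dump and generate a standalone Leak Suspects HTML report
    ./ParseHeapDump.sh heapdump.hprof org.eclipse.mat.api:suspects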

[Admin: This post is related to the 04.26.16 post about performance when workflow instances are saved, the 04.06.16 post about transaction logs growing too quickly, and the 09.11.15 post about triggering deadlocks when saving workflow instances.]
