What are the best practices for TRIRIGA integration optimization?


We’re working with multiple integration points into TRIRIGA and would like to discuss how best to increase efficiency and minimize the performance impact of competing integrations. Say we have already exhausted the approach of staggering them throughout the day and now want to run 5 different integrations every few minutes. (This is part business need and part theoretical for this discussion.)

So here is my list of ways to improve integrations. Are there any others I missed?

1. Optimize the workflows to be as efficient as possible.
2. Create database indexes on the fields used by the lookup queries in these workflows (see the sketch after this list).
3. The Integration Object (Tools > System Setup > Integration Object) appears to let us specify the database user used to load data into the DataConnect staging tables. If we create and use additional database users for these integrations, I would assume that would make loading data into the “s_tri…” staging tables more efficient?
4. Is there a way to split the DataConnect integrations across multiple application users for workflow processing? Could the DataConnect Agent be split between multiple servers? Ideally, we would like to control which process server or which application user performs the work and calculations.
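
For #2, a minimal sketch of what a supporting index might look like. The table and column names (S_TRI_EXAMPLE, SPEC_ID) are placeholders, and sqlite3 only stands in for whatever driver matches your TRIRIGA database (cx_Oracle, ibm_db, pyodbc, etc.); the point is simply a single-column index on the field the lookup query filters by:

```python
# Hypothetical sketch: index the column a workflow lookup query filters by.
# All names are placeholders; sqlite3 is a stand-in for the real driver.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the TRIRIGA database
conn.execute("CREATE TABLE S_TRI_EXAMPLE (SPEC_ID TEXT, PAYLOAD TEXT)")

# The actual optimization: an index so staging-table lookups stop
# full-scanning as integration volume grows.
conn.execute("CREATE INDEX IX_S_TRI_EXAMPLE_SPEC_ID ON S_TRI_EXAMPLE (SPEC_ID)")
conn.close()
```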

In your list, #1 will probably be the biggest contributing factor. I’m assuming you’ve read through the Performance Considerations wiki page. For integrations, the key items are:

1. Use in-memory smart objects (IMSO) as much as possible.
2. When in a DataConnect loop, tune how many records are processed between commits. Start with a number like 10, load a batch of records, and note the time. Then halve it to 5, run the same number of records, and note the time again. Keep increasing or decreasing the number until the load is as fast as you can get it (see the sketch after this list).
3. Use Query tasks over Retrieve tasks.
4. Tune any queries used in a workflow.
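
To make the halve/double search in #2 concrete, here is a minimal sketch of the timing loop. load_records() is a placeholder for whatever pushes a fixed test batch through the staging tables at a given commit interval; none of this is TRIRIGA API, it only illustrates the search:

```python
# Sketch of the commit-interval tuning from step 2. load_records() is a
# placeholder; wire it up to your actual DataConnect test load.
import time

def load_records(count: int, commit_every: int) -> None:
    """Placeholder: load `count` records, committing every `commit_every`."""
    ...

def time_load(commit_every: int, count: int = 1000) -> float:
    """Load the same fixed batch and return the elapsed seconds."""
    start = time.perf_counter()
    load_records(count, commit_every)
    return time.perf_counter() - start

def tune_commit_interval(start: int = 10, rounds: int = 6) -> int:
    """Halve/double search: keep whichever neighbor is faster, a few rounds."""
    best, best_time = start, time_load(start)
    for _ in range(rounds):
        for candidate in (max(1, best // 2), best * 2):
            elapsed = time_load(candidate)
            if elapsed < best_time:
                best, best_time = candidate, elapsed
    return best
```

Run it against the same record set each time so the timings are comparable, then set the workflow’s commit count to the winner.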

Once you’ve done all you can on the workflow side, tune the threads using a similar method to #2 above. Start the workflow agent’s max threads at 4x the database server’s core count, then double or halve it while loading the same number of records, noting how long the queue takes to clear each time.
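
The same halve/double search applies here. A minimal sketch, assuming an 8-core database server (so a 4x starting point of 32 threads) and a placeholder for measuring how long the workflow queue takes to clear; neither is a real TRIRIGA API:

```python
# Sketch of the thread-tuning pass. db_cores and time_queue_clear() are
# assumptions/placeholders, not TRIRIGA settings or APIs.
def time_queue_clear(max_threads: int) -> float:
    """Placeholder: set the workflow agent's max threads, load the fixed
    test batch, and return how long the event queue takes to empty."""
    ...
    return 0.0

db_cores = 8                  # assumption: core count of the database server
start = 4 * db_cores          # rule of thumb from above: 32 threads

# Time the starting point and its halved/doubled neighbors; keep the fastest.
timings = {n: time_queue_clear(n) for n in (start // 2, start, start * 2)}
best = min(timings, key=timings.get)
print(f"fastest queue clear at {best} threads")
```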
