We are currently on TRIRIGA 126.96.36.199. I have an integration object that uses a static query to export records to a flat file. It works great when I click on the Execute action on the integration object. It can export more than 27,000 records. However, I only want to export a subset of those records, so I am executing it from a custom task as described here.
If there are 1000 records or fewer to export, executing from a custom task runs as expected. But with 1001 records or more, the workflow throws a NullPointerException (NPE). How can I get it to export more than 1000 records?
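No resolution is recorded in this post, but one generic way to stay under a fixed record cap is to page the work into batches below the failing threshold and run the export once per batch. The sketch below is illustrative only; the BatchExport class, the batch-by-record-id approach, and the 1000-record batch size are assumptions for illustration, not TRIRIGA API:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustration only: split a list of record ids into batches of at most 1000. */
public class BatchExport {
    static final int BATCH_SIZE = 1000; // stay at or under the observed limit

    /** Returns the record ids partitioned into consecutive batches. */
    static List<List<Long>> toBatches(List<Long> recordIds) {
        List<List<Long>> batches = new ArrayList<>();
        for (int i = 0; i < recordIds.size(); i += BATCH_SIZE) {
            int end = Math.min(i + BATCH_SIZE, recordIds.size());
            batches.add(new ArrayList<>(recordIds.subList(i, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Long> ids = new ArrayList<>();
        for (long i = 0; i < 2500; i++) ids.add(i);
        // 2500 ids -> 3 batches (1000 + 1000 + 500); the custom task would
        // trigger the integration object once per batch instead of once overall.
        System.out.println(toBatches(ids).size()); // prints 3
    }
}
```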
[Admin: To see other related posts, use the Integration Object tag or Custom Task tag.]
Is there a way to clear server caches without logging into the Admin Console?
Beginning in IBM TRIRIGA Platform 3.5.1, TRIRIGA delivered an enhancement that allows this to be done via workflow. The pertinent release notes can be found on this wiki page. Here is an excerpt from the release notes on the topic:
A custom task class has been added for workflow which triggers a global cache clear across all servers.
You can create a custom task and specify the following in the class field: com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$RefreshAllCache
The custom task will perform a global cache clear on the server where the workflow runs as if it were triggered from that server’s Administrator Console. (Tri-211723)
[Admin: To see other related posts, use the Admin Console tag or Cache tag.]
I need to update a secondary database whenever there are changes in one of the records, or if a new record is created in an object. Can anyone help me with this?
Write a custom task, and run it from the workflow when there is a change.
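As a rough illustration of that suggestion, the sketch below mirrors a changed record into a secondary store. The SecondaryStore stub and the sync helper are assumptions standing in for real JDBC access to the secondary database; an actual TRIRIGA custom task would instead implement the platform's custom task interface (com.tririga.pub.workflow.CustomBusinessConnectTask, cited from memory) and receive the changed record's id from the workflow:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch only: mirror created/updated records into a secondary database.
 * SecondaryStore stands in for JDBC access; a real custom task would open a
 * connection to the secondary database and run an INSERT ... ON CONFLICT /
 * MERGE statement instead of writing to an in-memory map.
 */
public class SyncRecordTask {
    /** Stand-in for the secondary database table. */
    static class SecondaryStore {
        final Map<Long, String> rows = new HashMap<>();
        void upsert(long recordId, String payload) {
            rows.put(recordId, payload); // insert if absent, overwrite if present
        }
    }

    /** Called from the workflow whenever a record is created or modified. */
    static void sync(SecondaryStore store, long recordId, String payload) {
        store.upsert(recordId, payload);
    }

    public static void main(String[] args) {
        SecondaryStore store = new SecondaryStore();
        sync(store, 42L, "created");
        sync(store, 42L, "updated"); // a later change overwrites the same row
        System.out.println(store.rows.get(42L)); // prints "updated"
    }
}
```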
[Admin: To see other related posts, use the Custom Task tag.]
After an upload of a document, we use a custom task to send the document to a FileNet instance for content searchability. From that point, we don’t need the document in the TRIRIGA database any longer.
Is there any simple way to delete the content associated with a document record? This would give us control over where documents and sensitive information are stored, and save database space. From reviewing the API, it looks like we might be able to achieve this in our custom task with .setContent(null or empty content). Is there a nicer approach?
[Admin: This post is related to the 12.02.16 post about integrating with a CMIS or ECM, and the 06.09.16 post about using an ECM instead of Document Manager.]
If you have a workflow process that inserts or updates hierarchy records, you can add a custom task step at the beginning and end of the workflow to control the hierarchy tree. These tasks have been available since TRIRIGA Platform 3.4.2. Turning on “Data Load” mode increases performance and throughput when adding new child records to hierarchy-type modules.
These tasks operate on the cache. Specify one of the following in the Class Name field:
- com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$SetDataLoadMode: Sets the faster “Data Load” mode, suspending hierarchy tree updates; record updates are not propagated across the cache.
- com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$SetNormalMode: Sets the tree processing back to normal mode.
- com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$ClearCacheAndRebuildHierarchyTree: Clears the cache and rebuilds the tree from scratch on this one server.
- com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$RefreshAllCache: Clears all cache across all servers.
[Admin: These 4 values are also listed on pages 365-366 of the 3.5.1 Application Building user guide (PDF). A similar article is also posted in the Watson IoT Support blog.]
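Put together, a hierarchy-loading workflow using these tasks might be ordered as follows. This is a pseudocode outline; the exact step layout, and the final refresh step in particular, are assumptions based on the descriptions above:

```text
Workflow: Load Hierarchy Records
  1. Custom Task -> com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$SetDataLoadMode
  2. ... create or update the child hierarchy records ...
  3. Custom Task -> com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$SetNormalMode
  4. Custom Task -> com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$RefreshAllCache
     (since Data Load mode suspends cache updates, a refresh or rebuild
      afterwards keeps the servers consistent)
```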
I am looking for help analyzing the following issue. When we try to execute custom code through the TRIRIGA class loader, it does not execute and throws a NullPointerException. We created a custom class loader (SqlCustomTask) that has JAR and XML resource files. Through Workflow Builder, we created a workflow that calls this class loader custom task, but upon execution, it throws the exception below. I would appreciate any suggestions on resolving this issue.
Caused by: java.lang.NullPointerException
at com.tririga.platform.util.classloader.application.dao.dto.CustomClassLoaderInfo.getClassLoaderDelegationType(CustomClassLoaderInfo.java:87)
at com.tririga.platform.util.classloader.application.ApplicationClassLoader.getCustomClassLoader(ApplicationClassLoader.java:213)
at com.tririga.platform.util.classloader.application.ApplicationClassLoader.getCustomClassLoader(ApplicationClassLoader.java:194)
at com.tririga.platform.workflow.runtime.taskhandler.CustomTaskHandler.executeCustomTask(CustomTaskHandler.java:185)
From the stack trace, line 87 in CustomClassLoaderInfo retrieves the ClassLoader Type defined in the ClassLoader record. Please double-check the ClassLoader record, and make sure the required ClassLoader Type field has a value. Here’s the sample from EsriJS.
Is there any alternative way to troubleshoot custom tasks in IBM TRIRIGA workflows? We have a need to troubleshoot some custom workflows, but we do not want to use Workflow Instance recording set to ALWAYS, because this can have a big impact on system performance and consume lots of resources.
IBM TRIRIGA has made documentation available on how to incorporate a DEBUG class and code into the IBM TRIRIGA library directory, and then use them in workflows to trace their current status and variable values. The results are printed to the server.log file. Note that the documentation is provided “as-is” and under no warranty. We recommend that customers apply it in lower environments (Dev, QA, Test) first to test it out and confirm its effectiveness. The IBM TRIRIGA wiki documentation is: Simple Workflow Logging Custom Task.
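The wiki's custom task itself is not reproduced here, but the idea can be sketched as follows. WorkflowLogTask and formatLine are hypothetical names invented for this illustration; in a real task the line would be written with the platform logger so it lands in server.log without enabling workflow instance recording:

```java
/**
 * Sketch in the spirit of the "Simple Workflow Logging Custom Task" wiki page:
 * a custom task step dropped into a workflow that emits one trace line with
 * the workflow name, current step, and a snapshot of variable values.
 * System.out stands in for the server logger here.
 */
public class WorkflowLogTask {
    /** Format one trace line for the log. */
    static String formatLine(String workflow, String step, String variables) {
        return "WF-TRACE [" + workflow + "] step=" + step + " vars={" + variables + "}";
    }

    public static void main(String[] args) {
        // In a real task this would go through the platform logger into server.log.
        System.out.println(formatLine("triProcessRequest", "Validate", "status=Draft"));
    }
}
```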
[Admin: This post is related to the 04.20.16 post about using a custom task for basic workflow logging.]