I am uploading a tab-delimited file and writing the data into a staging table using a TRIRIGA integration object. While uploading the file, I need to restrict the number of lines that can be loaded into the staging table. I am using DataConnect and a custom task to read the file from the binary field, but I am not able to read it. How can we read the file from the integration object?
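The API for pulling the bytes out of the binary field varies by platform version, so I'll leave that part aside. But once you have the file content as a byte array, enforcing a line limit is straightforward to do while parsing, before anything is written to the staging table. A minimal sketch (the class and method names here are illustrative, not part of any TRIRIGA API):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class TabFileParser {

    /**
     * Parses tab-delimited content from a byte array, splitting each line
     * on tabs. Throws if the file has more than maxLines non-empty lines,
     * so nothing over the limit ever reaches the staging table.
     */
    public static List<String[]> parseTabDelimited(byte[] content, int maxLines) {
        String text = new String(content, StandardCharsets.UTF_8);
        List<String[]> rows = new ArrayList<>();
        for (String line : text.split("\r?\n")) {
            if (line.isEmpty()) {
                continue; // skip blank/trailing lines
            }
            if (rows.size() >= maxLines) {
                throw new IllegalStateException(
                        "File exceeds the maximum of " + maxLines + " lines");
            }
            rows.add(line.split("\t", -1));
        }
        return rows;
    }
}
```

Your custom task can then iterate the returned rows and insert them into the staging table, or catch the exception and fail the upload with a meaningful message.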
We are currently on TRIRIGA 22.214.171.124. I have an integration object that uses a static query to export records to a flat file. It works great when I click on the Execute action on the integration object. It can export more than 27,000 records. However, I only want to export a subset of those records, so I am executing it from a custom task as described here.
If there are 1,000 records or fewer to export, executing from a custom task runs as expected. But if there are 1,001 records or more, the workflow throws a NullPointerException (NPE). How can I get it to export more than 1,000 records?
Is there a way to clear server caches without logging into the Admin Console?
Beginning in IBM TRIRIGA Platform 3.5.1, TRIRIGA delivered an enhancement that allows this to be done via workflow. The pertinent release notes can be found on this wiki page. Here is an excerpt from the release notes on the topic:
A custom task class has been added for workflow which triggers a global cache clear across all servers.
You can create a custom task and specify the following in the class field: com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$RefreshAllCache
The custom task will perform a global cache clear on the server where the workflow runs as if it were triggered from that server’s Administrator Console. (Tri-211723)
I need to update a secondary database whenever one of the records in an object changes, or a new record is created. Can anyone help me with this?
Write a custom task and run it from the workflow whenever there is a change.
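In other words: have the workflow that fires on record create/modify call a custom task, and have that task push the changed values to the secondary database over JDBC. A rough sketch of the JDBC side is below; the real class would also implement TRIRIGA's custom task interface and pull field values from the triggering record, and the table and column names here are purely hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Map;

public class SecondaryDbSync {

    /**
     * Builds a parameterized UPDATE for the secondary table, one
     * "col = ?" pair per changed field, keyed on keyColumn.
     */
    public static String buildUpdateSql(String table, String keyColumn,
                                        Map<String, String> fields) {
        StringBuilder set = new StringBuilder();
        for (String col : fields.keySet()) {
            if (set.length() > 0) {
                set.append(", ");
            }
            set.append(col).append(" = ?");
        }
        return "UPDATE " + table + " SET " + set + " WHERE " + keyColumn + " = ?";
    }

    /**
     * Executes the update against an already-open connection;
     * returns true if a row was changed.
     */
    public static boolean pushUpdate(Connection conn, String table, String keyColumn,
                                     String keyValue, Map<String, String> fields)
            throws java.sql.SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                buildUpdateSql(table, keyColumn, fields))) {
            int i = 1;
            for (String value : fields.values()) {
                ps.setString(i++, value);
            }
            ps.setString(i, keyValue); // key goes in the WHERE clause
            return ps.executeUpdate() > 0;
        }
    }
}
```

Using a PreparedStatement keeps the field values parameterized; only the table and column identifiers are concatenated, and those should come from your own mapping, never from user input.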
[Admin: To see other related posts, use the Custom Task tag.]
After an upload of a document, we use a custom task to send the document to a FileNet instance for content searchability. From that point, we don’t need the document in the TRIRIGA database any longer.
Is there any simple way to delete the content associated with a document record? This would ensure we control where documents and sensitive information are stored, and save database space. From reviewing the API, it looks like we might be able to achieve this in our custom task with .setContent(null or empty content). Is there a nicer approach?
If you have a process that inserts or updates hierarchy records within a workflow, you can add a custom task step at the beginning and end of the workflow that controls the hierarchy tree. These tasks have been available since TRIRIGA Platform 3.4.2. Turning on “Data Load” mode will increase performance and throughput when adding new child records to hierarchy-type modules.
These custom task classes operate on the cache. Specify one of the following in the Class Name field:
- com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$SetDataLoadMode: Sets the faster “Data Load” mode, suspending tree updates, but does not propagate record updates across the cache.
- com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$SetNormalMode: Sets the tree processing back to normal mode.
- com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$ClearCacheAndRebuildHierarchyTree: Clears the cache and rebuilds the tree from scratch on this one server.
- com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$RefreshAllCache: Clears all cache across all servers.
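Bracketing a hierarchy load with these tasks might be structured like the sketch below (the step names are illustrative; only the Class Name values come from the list above):

```
Workflow: Load Hierarchy Records
  1. Custom Task — Class Name: com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$SetDataLoadMode
  2. ... create/update child hierarchy records ...
  3. Custom Task — Class Name: com.tririga.platform.admin.cache.web.CacheProcessingCustomTask$SetNormalMode
```

If the load is interrupted between steps 1 and 3, the tree can be left suspended, so consider following up with ClearCacheAndRebuildHierarchyTree or RefreshAllCache as appropriate.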
I am looking for help analyzing the following issue. When we try to execute custom code through the TRIRIGA class loader, it does not execute and throws a NullPointerException. We created a custom task (SqlCustomTask) with a custom ClassLoader record that contains its JAR and XML resource files. Through Workflow Builder, we created a workflow that calls this custom task, but upon execution, it throws the exception below. I would appreciate any suggestions on resolving this issue.
Caused by: java.lang.NullPointerException
at com.tririga.platform.util.classloader.application.dao.dto.CustomClassLoaderInfo.getClassLoaderDelegationType(CustomClassLoaderInfo.java:87)
at com.tririga.platform.util.classloader.application.ApplicationClassLoader.getCustomClassLoader(ApplicationClassLoader.java:213)
at com.tririga.platform.util.classloader.application.ApplicationClassLoader.getCustomClassLoader(ApplicationClassLoader.java:194)
at com.tririga.platform.workflow.runtime.taskhandler.CustomTaskHandler.executeCustomTask(CustomTaskHandler.java:185)
From the stack trace, line 87 in CustomClassLoaderInfo retrieves the ClassLoader Type defined in the ClassLoader record. Please double-check the ClassLoader record, and make sure the required ClassLoader Type field has a value. Here’s a sample from the EsriJS ClassLoader record.