How do you import CL fields that have more than 100 characters?

I encountered a bug with classification (CL) field types. I have a classification BO whose publish name is composed of (ID – Name). In a certain number of records, the Name field is about 150 characters long, so the full publish name exceeds 100 characters.

  • Problem: I went to a form that references that classification via an associated CL field. After I selected the value, the field displayed a truncated value of less than 100 characters. However, when I went to the Association tab of that form, it had the correct association.
  • Second Problem: When I imported data via Data Integrator (DI), I made sure that the CL field had the full path, which is more than 100 characters. DI gave no errors after the import. I opened the record to verify that the CL field was populated, but it was not updated and was left null. I had to manually select the value in the CL field to associate it correctly.
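Before running an import like this, a short script can flag rows whose publish name would exceed 100 characters. This is a hypothetical sketch: the (ID - Name) concatenation and its separator are assumptions based on the description above, not the BO's actual publish-name formula.

```python
# Hypothetical pre-import check: flag classification rows whose publish name
# (ID - Name) would exceed the 100-character limit described above. The
# separator and the publish-name formula are assumptions, not the BO's
# actual configuration.
def publish_name(record_id, name, separator=" - "):
    """Build the publish name by concatenating ID and Name."""
    return f"{record_id}{separator}{name}"

def find_overlong(rows, limit=100):
    """Return (row_index, length) for each row whose publish name exceeds limit."""
    flagged = []
    for i, (record_id, name) in enumerate(rows):
        length = len(publish_name(record_id, name))
        if length > limit:
            flagged.append((i, length))
    return flagged

rows = [
    ("CL-001", "Short name"),
    ("CL-002", "X" * 150),  # a ~150-character Name, like the records described
]
print(find_overlong(rows))  # → [(1, 159)]
```

Running such a check against the DI spreadsheet makes it easy to spot in advance which records will hit the truncation behavior.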

Question: How do I import data with CL fields that have more than 100 characters?

I am not sure how to import data with CL fields that are more than 100 characters, but if you feel you have encountered some bugs, please submit a PMR.

[Admin: To see other related posts, use the Classifications tag or Character tag.]

Continue reading

Why do duplicate opportunity rows show in the Assessment tab?

We have an issue with duplicate opportunity rows in the building’s Assessment tab. The issue seems tied to the building system class smart section and the underlying table reporting against it, T_TR_DEF_LI_IT_TR_BUI_SY_CL. It occurs when, in a building record’s Assessment tab, we create a new opportunity, fill in the building system class smart section (not the building system item one), and create a draft.

If a user then tries to change the building system class, but decides not to save and instead simply closes the form, the row entered into the table T_TR_DEF_LI_IT_TR_BUI_SY_CL for the temp data is not removed when the record is closed without saving. As a result, the opportunity query on buildings displays the opportunity twice. This seems to be a general issue with the use of these tables, since it also happens when you perform the same steps on a work task and a facility project; an extra row remains in T_TR_WO_TA_TR_FACILIT_PROJ.
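The symptom can be reproduced in miniature: a stale extra row in an association table makes a join-based query return the same opportunity twice. The schema below is a simplified stand-in for illustration, not the real TRIRIGA tables or columns.

```python
import sqlite3

# Illustrative reproduction of the symptom: a stale temp row left in the
# association table makes a join-based query return the same opportunity
# twice. Table and column names are simplified stand-ins, not the real
# TRIRIGA schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE opportunity (spec_id INTEGER, name TEXT);
    CREATE TABLE assoc_bui_sy_cl (opp_spec_id INTEGER, class_spec_id INTEGER);
    INSERT INTO opportunity VALUES (1, 'Replace chiller');
    -- the row written when the draft was created
    INSERT INTO assoc_bui_sy_cl VALUES (1, 100);
    -- the stale temp row left behind when the form was closed without saving
    INSERT INTO assoc_bui_sy_cl VALUES (1, 100);
""")
rows = conn.execute("""
    SELECT o.name FROM opportunity o
    JOIN assoc_bui_sy_cl a ON a.opp_spec_id = o.spec_id
""").fetchall()
print(rows)  # the opportunity appears twice
```

The same duplication would disappear if the stale association row were removed when the unsaved form is closed, which is what the reported defect is about.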

Has anyone else encountered this issue and found a way to correct it? Using the steps above, can anyone confirm that the issue happens for them as well?

This issue is being addressed through PMR 12839,082,000 / APAR [IJ00504].

[Admin: To see other related posts, use the Assessment tag.]

Continue reading

Can you get the Cleanup Agent to remove record data in chunks?

In one of our environments, we have a large number of records that have been transitioned to the null state. When the Cleanup Agent runs, it runs out of DB2 transaction log space executing the following:


For workflow instance saves, the Cleanup Agent now seems to remove the data in small chunks (of 1000 rows each). But for the record data cleanup, it still seems to (try to) remove all data in one huge SQL statement/transaction. Is it possible to get the Cleanup Agent to remove record data in chunks, like it does for workflow instance saves?

Otherwise, I’m thinking of writing a small util that would run the statement above, but in smaller chunks, since it seems we still have the list of record IDs that it tries to remove in the IBS_SPEC_CA_DELETE table. Any obvious issues with that?
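A chunked-delete util along those lines might look like the sketch below, shown against an in-memory SQLite database for illustration (the real environment is DB2, and the table and column names are assumptions based on the post). Committing after each batch keeps any single transaction, and therefore the transaction log usage, small.

```python
import sqlite3

# Sketch of the chunked-delete util described above, demonstrated against an
# in-memory SQLite database. The real environment is DB2, and the table and
# column names are assumptions based on the post, not the actual schema.
def delete_in_chunks(conn, table, id_column, ids, chunk_size=500):
    """Delete rows in small batches, committing each batch so that no single
    transaction (and thus the transaction log) grows too large."""
    deleted = 0
    for start in range(0, len(ids), chunk_size):
        chunk = ids[start:start + chunk_size]
        placeholders = ",".join("?" * len(chunk))
        cur = conn.execute(
            f"DELETE FROM {table} WHERE {id_column} IN ({placeholders})", chunk)
        conn.commit()  # one small transaction per chunk
        deleted += cur.rowcount
    return deleted

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ibs_spec_assignments (spec_id INTEGER)")
conn.executemany("INSERT INTO ibs_spec_assignments VALUES (?)",
                 [(i,) for i in range(1200)])
removed = delete_in_chunks(conn, "ibs_spec_assignments", "spec_id",
                           list(range(1200)))
print(removed)  # 1200
```

As the reply below notes, the list of record IDs in IBS_SPEC_CA_DELETE only exists while the Cleanup Agent is running, so a util like this would need to capture those IDs from another source.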

As of 3.5.3, there have been no changes for the data to be deleted in chunks for IBS_SPEC_ASSIGNMENTS. This sounds like a good PMR. We rarely recommend deleting records directly from the database, but in this circumstance, you might be okay. However, note that IBS_SPEC_CA_DELETE only exists while the Cleanup Agent is running; it is dropped when the agent goes back to sleep.

[Admin: To see other related posts, use the Cleanup tag or SQL tag. As a side note, starting with version 3.4, the Cleanup Agent name was changed to Platform Maintenance Scheduler.]

Continue reading

What is the IBM TRIRIGA Support best practice for SRs and PMRs?

IBM TRIRIGA Support addresses problems through a problem ticketing system in which each issue is logged as an IBM Service Request (SR) or Problem Management Report (PMR). We manage all reported problems via this process.

IBM TRIRIGA Support provides a support landing page titled “IBM TRIRIGA Information and Support Resources” that contains a great deal of helpful information. Its Support Resources Home section provides numerous links to useful resources, including a link to our IBM Service Request system, where you can open a Service Request (SR). The page also lists IBM Support phone numbers.

Once an SR/PMR is opened, it can be tracked for updates via the SR tool. You may also request an update at any time, which notifies the Support team to follow up with you as soon as possible.

For the most efficient IBM TRIRIGA support experience…

  • There should only be one problem per SR/PMR per customer environment. This helps to keep the focus on a particular issue…
  • SRs/PMRs also have the concept of “severity”. This is a ranking that is set by the customer to indicate the urgency and importance…
  • When opening your SR/PMR, try to be as complete as possible and provide as much of the critical information as possible…

[Admin: This post is related to the 07.14.15 post about collecting data to resolve PMRs, the 07.07.15 post about resolving PMRs as soon as possible, and the 04.26.17 post about outlining the process for SRs, PMRs, and APARs.]

Continue reading

What is the IBM TRIRIGA Support process for SRs, PMRs, and APARs?

IBM TRIRIGA Support does all that it can to assist our clients. However, there are processes in place to help all of our clients get a consistent level of help…

A Service Request (SR) or Problem Management Report (PMR) is created to request assistance from IBM TRIRIGA Support to help with investigating a problem or to request an answer to a question regarding TRIRIGA. Due to the complexities of the environments supported and the potential scope of work involved with enterprise software, it may take some time to complete an investigation and can result in a number of outcomes, such as the following SR/PMR resolutions:

  • Resolved as a question answered.
  • Resolved as a product working as designed (even when a client may disagree with the design).
  • Resolved as a request outside the scope of support.
  • Resolved as a defect (which will result in the creation of an Authorized Program Analysis Report, or APAR).

With each of these outcomes, IBM TRIRIGA Support has completed its investigation and the SR/PMR has been resolved. What happens next?

[Admin: This post is related to the 07.14.15 post about collecting data to resolve PMRs, and the 07.07.15 post about resolving PMRs as soon as possible. The same article is also posted in the Watson IoT Support blog.]

Continue reading

How do you avoid the tree error after deleting hierarchy records?

I’m loading data via the Data Integrator into a Classifications business object. In the first load, my data is successfully loaded. However, I notice some data mapping issues. So I delete the records from a query, then I clear cache. In the second load, my data is successfully loaded. I go into the Classifications hierarchy form and get the dreaded message:

“Please contact your system administrator. The tree control reported this error while trying to draw itself: There was an error in the database or the query definition.”

When this happens, I tell myself that I deleted the records too quickly and didn’t allow the system to reset in time. The solution is the dreaded wait for the Cleanup Agent to process the records, which takes 12 hours, 1 day, 3 days, or sometimes 1 week, before all records whose TRIRECORDSTATESY is null are removed from the database. The only workaround seems to be to increase the Cleanup Agent run frequency. However, is there a sequence of steps I need to follow before I delete records from a hierarchy form, so that I don’t get the dreaded message each time?

Regarding your scenario of loading hierarchy records, deleting them, then reloading the same records to cause the tree control to fail, that should be considered a platform defect. I would advise you to enter a PMR, so Support can look into this issue. The tree control should never fail to render as you describe it.

To help with your issue, there is an unsupported platform feature that allows the Cleanup Agent to delete data immediately. If you add the property CLEANUP_AGENT_RECORD_DATA_AGE=2 to your file, the Cleanup Agent, when run, will delete records that are at least 2 minutes old. This allows you to immediately delete a bad data load, and to run it cleanly a second time without conflicts from that data already existing in a null state.
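For reference, the setting would look like the fragment below. The file name TRIRIGAWEB.properties is an assumption on my part; the reply above does not name the file. Since this is an unsupported feature, it should only be used in non-production environments.

```properties
# Assumed location: TRIRIGAWEB.properties (the reply does not name the file).
# Unsupported setting; use only in non-production environments.
# Cleanup Agent deletes records that are at least 2 minutes old.
CLEANUP_AGENT_RECORD_DATA_AGE=2
```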

[Admin: This post is related to the 08.11.16 post about the Organization hierarchy tree not being displayed, the 08.04.16 post about unretiring and returning records to null, and the 02.24.16 post about executing the Cleanup Agent (a.k.a. Platform Maintenance Scheduler) after retiring a record.]

Continue reading

Having an issue with editable queries and SQL queries in 3.5.2

I’m having two issues which I think are related. The environment is running Windows with SQL Server 2016.

  • 1. In editable queries, I have a locator field, and I pick from the locator query. But after I click OK, my session times out, and it doesn’t save the changes. This affects all browsers: Safari, Chrome, IE.
  • 2. In the Admin panel, if I have comments in my SQL query, the query now fails. For example… “SQL SCRIPT IS NOT VALID SELECT STATEMENT, PLEASE REVISE.”

What I noticed is that I get the same error in the error log when using the locator field in an editable query. Something seems to be failing related to SQL queries. Can anyone reproduce this issue? This is a critical failure in our upgrade tests and I will most likely submit a PMR for a patch.
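One speculative explanation that fits the comment symptom: a validator that only accepts scripts beginning with SELECT would reject any otherwise valid query that starts with a comment. The sketch below illustrates that failure mode and a comment-stripping workaround; the platform's actual validation logic is not public, so this is purely illustrative.

```python
import re

# Speculative illustration of the symptom: a naive validator that only
# accepts scripts starting with SELECT rejects an otherwise valid query
# that begins with a comment. (The platform's actual check is not public.)
def naive_is_select(sql):
    return sql.lstrip().upper().startswith("SELECT")

def strip_sql_comments(sql):
    """Remove -- line comments and /* */ block comments before validating."""
    sql = re.sub(r"--[^\n]*", "", sql)
    sql = re.sub(r"/\*.*?\*/", "", sql, flags=re.DOTALL)
    return sql.strip()

query = """-- latest platform build
SELECT * FROM TRIRIGA_VERSION"""

print(naive_is_select(query))                      # False: comment trips the check
print(naive_is_select(strip_sql_comments(query)))  # True once comments are stripped
```

If this guess is right, moving comments after the SELECT keyword (or removing them) would be a workaround until the defect is patched.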

Continue reading

What is your ongoing process for TRIRIGA performance tuning?

Being on the IBM TRIRIGA Support team, I have seen my share of PMRs where the customer is reporting a performance issue… First and foremost, I need to make it clear that performance can be affected by a wide range of components… Second, you should review the Best Practices for System Performance… Third, as a TRIRIGA Administrator, you should review performance on a regular basis…

But your best tool for analyzing TRIRIGA performance is the performance log…

  • 1) Log in to the Admin Console.
  • 2) Click on Platform Logging in the Managed Objects section on the left side of the screen.
  • 3) On the right side of the screen is a list of categories that have a hierarchical structure. Scroll down to the Performance Timings category.
  • 4) Click on the check box immediately in front of Performance Timings. This will cause all of the subcategories to be checked.
  • 5) Scroll down to the bottom of the page and click on the Apply button.
  • 6) At this point, performance logging is turned on and the performance.log file should appear in the TRIRIGA directory structure in the log sub-directory.
  • 7) Perform activities that users have indicated are poorly performing. This action, along with other actions taking place at the time of the testing, will get logged to the performance.log file.
  • 8) Once you have completed the process of recreating the performance issue, log back into the Admin Console and turn off the performance logging.
  • 9) Perform an analysis of the resulting performance.log…
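Step 9 can be partially automated. The sketch below sums durations per category from the log; the line format it parses is an assumption for illustration only, so adjust the parser to match the actual layout of your performance.log.

```python
from collections import defaultdict

# Sketch of step 9: aggregate total time per category from performance.log.
# The line format below ("CATEGORY ... 123ms") is an assumption for
# illustration; adjust the parser to match your actual performance.log.
def summarize(lines):
    """Sum durations (ms) per leading category token."""
    totals = defaultdict(int)
    for line in lines:
        parts = line.split()
        if len(parts) >= 2 and parts[-1].endswith("ms"):
            try:
                totals[parts[0]] += int(parts[-1][:-2])
            except ValueError:
                continue  # skip lines that do not match the assumed format
    return dict(totals)

sample = [
    "SQL executeQuery 120ms",
    "SQL executeUpdate 30ms",
    "WORKFLOW step 400ms",
]
print(summarize(sample))  # {'SQL': 150, 'WORKFLOW': 400}
```

Sorting the totals highlights which category, and then which individual entries, to investigate first.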

[Admin: To see other related posts, use the Performance tag or Tuning tag.]

Continue reading

Why does Support need to know your deployment schedule?

If you have run into an issue while deploying the IBM TRIRIGA application in any of your lower environments (UAT, DEV, TEST), then you may have seen the Support engineer asking questions regarding your deployment window. In truth, we would rather have this information in advance of any production deployment, regardless of whether you have run into an issue that you report to the IBM TRIRIGA Support team.

There are three reasons why it is important to get this information to the TRIRIGA Support team:

  • First, we may know of some potential issues that may crop up during the upgrade process, regardless of whether you are performing an application upgrade or a platform upgrade.
  • Second, when you have open PMRs with the TRIRIGA Support team, those that could potentially impact your go-live timeline will be reviewed with a more critical eye with regard to defects and the versions of either the application or platform. In both of these cases, there is a strong potential for a change in your implementation timeline. In the first case, the potential issues may require you to upgrade to a later release where the problem was resolved. In the second case, you may have identified a previously unknown issue that cannot be resolved with any existing release and may require a fix pack for the application and/or platform release to which you are upgrading.
  • The third reason why it is important to know your implementation timeline is to ensure resources are prepared to respond during your production rollout. While the Support team is prepared to respond on a 24/7 basis regardless of any implementation schedules, we need to ensure that development resources are aware and also prepared should a problem occur during your production implementation.

Just as you rely on your internal resources, the TRIRIGA Support teams are also resources on which you need to rely during your upgrade processes. Keeping us in the loop early in your process allows us to work with your implementation team as a single team, rather than as a separate group you tap only when an emergency arises. Our early involvement is meant to prevent such emergencies and to make the upgrade process proceed as smoothly as possible…

[Admin: This post is related to the 06.08.16 post about TRIRIGA Support needing your version information.]

Continue reading

Why does IBM TRIRIGA Support need your version information?

What you can do to help is to provide this version information up front when you open a PMR. I suggest keeping a document with the environment configuration information. Having been on the client side of this picture, I know how I ended up making the IBM PMR process easier for myself. Knowing that I would be asked version information right out of the gate, I kept a document which had the version information for the main product as well as versions of other products in our configuration that could have an impact, in any possible way, to the problem being reported. I found that this greatly smoothed out the initial response from the support team I was working with and also led to quicker turnaround time on the PMRs I submitted.

While I cannot guarantee that the PMR turnaround time will be significantly improved, it should at least reduce multiple communications regarding the configuration of an environment. The main reason for back and forth about version information that I have seen is that the support team assumes the information provided will be all that is initially required to work the problem. While working and researching the issue, we may come to find that we need some additional version information (think operating system, third-party tool versions, etc.). Having the information up front when you open the PMR should prevent this from happening…

Continue reading