Why aren’t Group record changes copied through object migration?


The short answer is that IBM TRIRIGA sees this as an unsupported customization of the Group record. Let’s clarify this further. Although Groups are technically record data behind the scenes, they are currently considered TRIRIGA platform-owned and, therefore, platform-controlled business objects (BOs).

The platform controls exactly what Group data the object migration (OM) tool can export and import. Thus, any fields added to the Group BO will not be recognized by OM when exporting or importing Group records. Modifications to platform-owned and platform-controlled BOs are not supported, and this applies to more than just the Group BO.

If the BO is a platform-controlled object and any changes are not supported, then why does the platform currently allow changes to it?

IBM TRIRIGA currently does not prevent users from modifying any BOs, even the ones that are specifically necessary for core platform functionality. The Group BO, Document BO, and triPlatformObjectLabelManager BO are just a few examples. Although the platform does nothing to prevent users from modifying these BOs, TRIRIGA does not support modifying any of them.

For these core platform BOs, the object migration tool is designed to pull exactly what it needs for the designed platform functionality when exporting or importing the record data tied to these BOs. In other words, any modification can compromise the integrity of the TRIRIGA platform, so modifying these BOs is an unsupported action.

The wiki on Core objects in TRIRIGA Application Platform functionality details the core platform business objects that should not be modified. Meanwhile, to address the expressed requirement to see Group modifications exported and imported with Group record data, a request for enhancement (RFE) was submitted and will be considered for a potential platform change in a future TRIRIGA release.

[Admin: This post is related to the 11.07.17 post about core objects you shouldn’t modify. To see other related posts, use the Groups tag or Object Migration tag.]


How do you migrate record data quickly between environments?


So I just learned that I can’t use the Object Migration tool to migrate record data between two TRIRIGA environments. For example, I have two environments on different servers, on the same application and platform versions. If I try to use OM to migrate the record data only, for instance, the Building Equipment records, not all of the associated records get migrated, and certain smart sections are not properly migrated either.

What are some other options that I could use to quickly migrate this data? I was thinking the Data Integrator (DI) method, but that would be tedious because I have over 100,000 records.

Ideally, DI should be used for the initial load. If the data is available somewhere else, you can look into the Integration Object or DataConnect. You can populate the staging tables and then run the integration. In your workflow, you can include logic to create any dependent records (such as organizations or contacts) based on the staging table data, as sketched below.
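For illustration, a DataConnect-style load might look like the following sketch. The staging-table and column names (S_TRIBUILDINGEQUIPMENT, DC_JOB_NUMBER, TRINAMETX, TRIIDTX) are assumptions for this example, since the actual staging tables are generated from your BO definitions; verify the names in your own environment before using anything like this.

-- Illustrative sketch only: table and column names are assumptions.
-- 1. Populate the staging table that DataConnect generated for the BO,
--    tagging the rows with a job number.
INSERT INTO S_TRIBUILDINGEQUIPMENT (DC_JOB_NUMBER, TRINAMETX, TRIIDTX)
VALUES (1001, 'AHU-01', 'EQ-001001');
-- 2. Register job 1001 so the DataConnect agent picks it up on its next
--    run; the agent then creates or updates the records, and workflow
--    logic can create dependent records from the staged data.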

[Admin: To see other related posts, use the Integration tag or DataConnect tag.]


How do you import the upgrade OM packages to custom environments?


We have already upgraded our platform to 3.5.2.1. We are currently in the process of upgrading our application from 10.3.2 to 10.5.2.

For the application upgrade, we have set up a staging environment with an initial install of 10.5.2, and we have configured all BOs, forms, and other objects to match our current customizations. My question is: What if we import the IBM upgrade OM packages (sequential from 10.4 to 10.5.2) into our current environment (which has all the customizations)? It would definitely overwrite all the customization and configuration, but does it affect the record data as well (e.g. lease records)?

When it overwrites the customization at the BO and form level, would it corrupt the record data, since some of the custom fields on the records won’t exist at the BO level any more? And what happens after we import all our customization back into the current environment from the staging environment?

The short answer is: You wouldn’t apply the IBM upgrade OM packages. Instead, you’d build OMs in your now-customized 10.5.2 environment and then apply them to your current environment.

[Admin: To see other related posts, use the Object Migration tag or Upgrade tag.]


Can you get the Cleanup Agent to remove record data in chunks?


In one of our environments, we have a large number of records that have been transitioned to the null state. When the Cleanup Agent runs, it runs out of DB2 transaction log space executing the following:

DELETE FROM IBS_SPEC_ASSIGNMENTS
WHERE EXISTS (SELECT 'X' FROM IBS_SPEC_CA_DELETE
              WHERE IBS_SPEC_CA_DELETE.SPEC_ID = IBS_SPEC_ASSIGNMENTS.SPEC_ID)

For workflow instance saves, the Cleanup Agent now seems to remove the data in small chunks (of 1000 rows each). But for the record data cleanup, it still seems to (try to) remove all data in one huge SQL statement/transaction. Is it possible to get the Cleanup Agent to remove record data in chunks, like it does for workflow instance saves?

Otherwise, I’m thinking of writing a small utility that would run the statement above, but in smaller chunks, since it seems we still have the list of record IDs that it tries to remove in the IBS_SPEC_CA_DELETE table. Any obvious issues with that?

As of 3.5.3, there have been no changes for the data to be deleted in chunks for IBS_SPEC_ASSIGNMENTS. This sounds like a good PMR. Deleting records directly from the database is rarely recommended, but in this circumstance, you might be okay. However, the IBS_SPEC_CA_DELETE table only exists while the Cleanup Agent is running. It is dropped when the agent goes back to sleep.
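For reference, here is a minimal sketch of the chunked-delete idea on DB2, assuming it is run while the IBS_SPEC_CA_DELETE table still exists. It uses the DB2 deletable-fullselect idiom to remove at most 1,000 rows per transaction; you would repeat the statement, committing each time, until it affects zero rows.

-- Minimal sketch (DB2): delete in chunks of 1,000 rows to limit
-- transaction log usage. Repeat until no rows are affected.
DELETE FROM (
  SELECT 1 FROM IBS_SPEC_ASSIGNMENTS
  WHERE EXISTS (SELECT 'X' FROM IBS_SPEC_CA_DELETE
                WHERE IBS_SPEC_CA_DELETE.SPEC_ID = IBS_SPEC_ASSIGNMENTS.SPEC_ID)
  FETCH FIRST 1000 ROWS ONLY
);
COMMIT;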

[Admin: To see other related posts, use the Cleanup tag or SQL tag. As a side note, starting with version 3.4, the Cleanup Agent name was changed to Platform Maintenance Scheduler.]


How do you avoid the tree error after deleting hierarchy records?


I’m loading data via the Data Integrator into a Classifications business object. In the first load, my data is successfully loaded. However, I notice some data mapping issues. So I delete the records from a query, and then I clear the cache. In the second load, my data is successfully loaded. I go into the Classifications hierarchy form and get the dreaded message:

“Please contact your system administrator. The tree control reported this error while trying to draw itself: There was an error in the database or the query definition.”

When this happens, I tell myself that I deleted the records too quickly and didn’t allow the system enough time to reset. The solution is the dreaded wait for the Cleanup Agent to process the records, which takes 12 hours, 1 day, 3 days, or sometimes 1 week, before all records whose TRIRECORDSTATESY is null are removed from the database. The only workaround seems to be to increase how often the Cleanup Agent runs. However, is there a sequence of steps I need to follow before I delete records from a hierarchy form, so that I don’t get the dreaded message each time?

Regarding your scenario of loading hierarchy records, deleting them, then reloading the same records to cause the tree control to fail, that should be considered a platform defect. I would advise you to enter a PMR, so Support can look into this issue. The tree control should never fail to render as you describe it.

To help with your issue, there is an unsupported platform feature that allows the Cleanup Agent to delete data immediately. If you add the following property to your TRIRIGAWEB.properties file and set CLEANUP_AGENT_RECORD_DATA_AGE=2, then when the Cleanup Agent runs, it will delete records that are at least 2 minutes old. This allows you to immediately delete a bad data load, and then run it cleanly a second time without conflicts from that data already existing in a null state.
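In other words, the entry in TRIRIGAWEB.properties would look like this (the property name comes straight from the answer above; the comments are just annotation):

# Unsupported tuning property: records in the null state become eligible
# for deletion once they are this many minutes old, the next time the
# Cleanup Agent runs.
CLEANUP_AGENT_RECORD_DATA_AGE=2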

[Admin: This post is related to the 08.11.16 post about the Organization hierarchy tree not being displayed, the 08.04.16 post about unretiring and returning records to null, and the 02.24.16 post about executing the Cleanup Agent (a.k.a. Platform Maintenance Scheduler) after retiring a record.]


Having an issue with People Template record data after OM


After importing some amended People Template record data in an OM, we are getting an error when trying to apply those templates to a new People record. The error being reported is:

“One or more Security Group names do not match the User Group names in the Group Details section of the selected people/people template record. Update your Groups before applying the record.”

When I look at the People Template record, I see we have 28 groups in the Group Details section, all of which exist as security groups. However, if I look in the Associations tab, there are 56 associated Group Details records. So it looks like the OM import has created some new associations, but not removed the old ones. Has anyone come across this before? Is anyone aware of any issues when importing People Template record data via an OM?
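One way to confirm the doubled-up associations is to query the association table directly. Below is a minimal diagnostic sketch, assuming the standard IBS_ASSOCIATION table (SPEC_ID, ASS_TYPE, ASS_SPEC_ID) and using 123456 as a placeholder for the People Template’s record ID:

-- Diagnostic sketch: count associations from the People Template record,
-- grouped by association type, to spot stale Group Details links.
-- 123456 is a placeholder; substitute the template's spec (record) ID.
SELECT ASS_TYPE, COUNT(*) AS ASSOC_COUNT
FROM IBS_ASSOCIATION
WHERE SPEC_ID = 123456
GROUP BY ASS_TYPE
ORDER BY ASS_TYPE;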
