Our customer has seen an issue when installing TRIRIGA 3.5.3 (Linux, Server build number: 276955) on an existing database (on 3.4.2 / 10.4.2). Everything goes well until the server starts up. Generally, TRIRIGA will run a database upgrade on the first startup when a build number difference is detected.
In the OM log, we noticed that TRIRIGA tried to import the upgrade OM package. The import process started with the triPlatformObjectLabelManager package, but it failed to import a navigation item that was newly created for the Object Label Manager. I haven’t found any log that explains this failure. I’ve checked the NAV_ITEM table, and this navigation item wasn’t there before the upgrade process. All of the other packages are then stuck in a pending status. Nothing happens after “Creating package from Zip file”. This behavior causes a lot of SQL update failures.
Meanwhile, on our Dev environment (Windows, Server build number: 279835), the upgrade went very well. You can find the difference in the logs. The OM log was set to “Debug” level on both servers. Note that the build number is slightly different between these two environments. Have you seen this kind of issue? Where can I find more details about the navigation item import failure?
[Admin: This post is related to the 02.17.17 post and 05.19.16 post about inconsistent OM validation results. To see other related posts, use the Object Migration tag or Upgrade tag.]
We have upgraded the TRIRIGA platform to 188.8.131.52 and started upgrading the application from 10.2 to 10.5.2 in incremental order (10.3, 10.3.1, and so on up to 10.5.2). To minimize the outage and complexity during the production implementation, we were advised to take a final OM package after completing the 10.5.2 deployment, and then apply all of the customizations that might have been impacted by the upgrade. This final OM package will contain all of the changes from 10.2 to 10.5.2.
Our question is on the patch helpers: Can we run all the patch helpers (from 10.3 to 10.5.2 in order) after importing the final OM package?
Also, we are running the Varchar-to-Numeric script before importing the application upgrade packages. This script takes a long time (almost a day in two test environments), but when we tried it in another environment, it ran for more than 2 days and still didn’t complete. Is it normal for this script to run that long? Or will it be an issue? There are no differences between the environments.
I wouldn’t recommend doing the upgrade in one package. Usually, it ends up being quite large, and it will cause issues. The IBM-recommended way is to import each OM package, then run its patch helpers. Once you have upgraded the OOB OM packages, you can have one OM package which contains your custom objects…
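The recommended sequence can be sketched as a simple ordered plan. This is a hypothetical illustration only: the version list and step labels are assumptions, not an official upgrade path.

```python
# Hypothetical sketch of the recommended order: import each out-of-the-box
# (OOB) OM package, run its patch helper, then finish with a single OM
# package containing the custom objects. Versions shown are illustrative.
UPGRADE_PATH = ["10.3", "10.3.1", "10.4", "10.5", "10.5.2"]

def plan_upgrade(versions):
    """Return the ordered list of upgrade steps, one OM + helper per version."""
    steps = []
    for v in versions:
        steps.append(f"import OM package {v}")
        steps.append(f"run patch helper {v}")
    # Custom objects go in one final package, after all OOB packages.
    steps.append("import custom-objects OM package")
    return steps

print(plan_upgrade(UPGRADE_PATH))
```

The point of the loop is that each version's patch helper runs immediately after its own OM import, rather than all helpers being deferred to the end.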
[Admin: This post is related to the 10.25.17 post and 04.28.17 post about running “SetVarcharColsToNumeric” scripts. To see other related posts, use the Scripts tag.]
Is it possible to download the imported OM package from the front end or server?
Yes. Go to Tools > Object Migration. Locate the OM package, click the “Copy Package” icon link to the left of the OM name, then in the upper-right, click “Export”. Give it a name, click “Export”, then it will ask you to “Open” or “Save”. I would “Open” the ZIP to see where it was stored. There you have it!
[Admin: To see other related posts, use the Object Migration tag.]
So I just learned that I can’t use the Object Migration tool to migrate record data between two TRIRIGA environments. For example, I have two environments on different servers with the same application and platform version. If I try to use OM to migrate only the Record Data, for instance, the Building Equipment records, not all of the associated records get migrated, and certain smart sections are not properly migrated either.
What are some other options that I could use to quickly migrate this data? I was thinking the Data Integrator (DI) method, but that would be tedious because I have over 100,000 records.
Ideally, DI should be used for the initial load. If the data is available somewhere else, you can look into Integration Object or DataConnect. You can populate staging tables and then run the integration. In your workflow, you can have logic to create any dependent records (such as organizations or contacts) based on the staging table data.
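The staging-table pattern described above can be sketched in miniature. This is a hedged illustration: the field names, record types, and `process_staging_row` helper are assumptions for the example, not part of the DataConnect API.

```python
# Hypothetical sketch of the staging-table pattern: rows land in a staging
# structure, and the integration logic first creates any missing dependent
# records (here, organizations) before creating the main equipment record.
def process_staging_row(row, existing_orgs):
    """Create missing dependent records, then the main equipment record."""
    created = []
    org = row["organization"]
    if org not in existing_orgs:
        existing_orgs.add(org)  # stand-in for creating the organization record
        created.append(("organization", org))
    created.append(("building_equipment", row["equipment_id"]))
    return created

# Simulated staging table contents:
orgs = {"Facilities"}
staging_rows = [
    {"equipment_id": "EQ-001", "organization": "Facilities"},
    {"equipment_id": "EQ-002", "organization": "Engineering"},
]
results = [process_staging_row(r, orgs) for r in staging_rows]
print(results)
```

The second row triggers creation of the missing "Engineering" organization before its equipment record, which mirrors the dependent-record logic a TRIRIGA workflow would carry.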
[Admin: To see other related posts, use the Integration tag or DataConnect tag.]
I’m seeing an issue in the Group Move UX app. After the application upgrade to 10.5.3, the Group Move application stopped working in “Production Mode”. After activating “Development Mode”, Group Move worked again. So I think the problem comes from the vulcanized code in the 10.5.3 OM package. For now, I’ve vulcanized the Group Move application manually, and it works. Have you seen a similar issue?
We confirmed the defect related to the vulcanized code for the Group Move UX app. It is being addressed for the next full release under APAR IJ01969. Meanwhile, the manual solution you have followed is the workaround. Here are the details:
“The Group Move UX Perceptive application should no longer contain syntax errors and the search feature should work now. Customers who run into this issue on previous app versions can vulcanize the application following the steps outlined on this wiki page. (Tri-IJ01969-5886)”
[Admin: To see other related posts, use the Vulcanize tag.]
We are seeing an Oracle error during OM package import related to the number of open cursors. We checked the documentation, but there is no recommendation for what the cursor size should be for large OM package imports. We are currently at 500. Has anyone seen this issue or performed a successful upgrade in this regard? What is the optimal cursor size used by different clients?
I checked with one of the senior TRIRIGA architects, and he said he would recommend setting open_cursors to at least 2000 for an OM package import. I hope this information helps!
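A pre-import sanity check based on that recommendation can be sketched as follows. The 2000 minimum is the figure quoted above; the query against `v$parameter` is standard Oracle, but actually executing it requires a database connection, which this sketch deliberately leaves out.

```python
# Sketch of a pre-import sanity check for Oracle's open_cursors parameter.
# RECOMMENDED_OPEN_CURSORS reflects the advice above; CHECK_SQL is the
# standard query a DBA would run against the database.
RECOMMENDED_OPEN_CURSORS = 2000
CHECK_SQL = "SELECT value FROM v$parameter WHERE name = 'open_cursors'"

def needs_increase(current_value, recommended=RECOMMENDED_OPEN_CURSORS):
    """Return True if open_cursors should be raised before a large OM import."""
    return current_value < recommended

print(needs_increase(500))   # the poster's current setting of 500 falls short
```

If the check returns True, the DBA would raise the limit (for example, `ALTER SYSTEM SET open_cursors = 2000`) before starting the import.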
[Admin: To see other related posts, use the Object Migration tag or Upgrade tag.]