After an OM import, the user template from the source environment replaces the user template on the same person record in the destination environment. In TRIRIGA 3.5.2, when a triPeople user template is migrated from one environment to another, if a user's people record is associated with the template via the "Applied Template" association string in the source environment, the most recently applied template will also be applied to the same user's people record in the target environment.
For example, user James Sullivan has a Project Team Member template applied in the test environment. In the CERT environment, the same user has a Facilities Manager template applied. But when the Project Team Member template is migrated to the CERT environment, that template is applied to James Sullivan's people record instead of the Facilities Manager template.
This is working as designed. The root of the issue is that when an OM package containing a people template is imported from the source environment into the target environment, and the published name of the user profile record is the same in both environments, the import does NOT replace the existing association; it creates additional associations from source to target. All of these associations can be seen in the Associations tab of the user record. However, the form will show the latest template that was applied.
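As a rough illustration of the behavior described above, here is a hedged Python sketch using a hypothetical data model (not TRIRIGA's actual schema): the import appends an association rather than replacing it, and the form surfaces only the latest one.

```python
# Hypothetical model of "Applied Template" associations on a people record.
# An OM import ADDS an association instead of replacing the existing one.

def import_template(associations, person, template):
    """Append a new 'Applied Template' association; nothing is removed."""
    associations.setdefault(person, []).append(template)

def template_shown_on_form(associations, person):
    """The form displays only the most recently applied template."""
    return associations[person][-1]

associations = {}
import_template(associations, "James Sullivan", "Facilities Manager")   # CERT
import_template(associations, "James Sullivan", "Project Team Member")  # OM import

# Both associations remain visible on the Associations tab...
assert associations["James Sullivan"] == ["Facilities Manager", "Project Team Member"]
# ...but the form shows only the latest one.
assert template_shown_on_form(associations, "James Sullivan") == "Project Team Member"
```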
[Admin: To see other related posts, use the Templates tag.]
In a Gantt section, the sort order of the associated report is not being honored.
The issue was caused by the dynamic ordering implemented by the project tasks' internal tree set: the BO query comparison was performed on the string form of the columns. Moving forward, we resolved an issue where the default Gantt sort ordering, and the sort ordering immediately after importing an MPP project file, did not correctly order by sequence ID based on the Gantt section query configuration.
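The string-form comparison mentioned above is the classic lexicographic-sort pitfall, which a short Python sketch can demonstrate:

```python
# Why comparing the string form of a numeric column breaks ordering:
# lexicographic comparison sorts "10" before "2".

sequence_ids = ["1", "2", "10", "11", "3"]

print(sorted(sequence_ids))           # ['1', '10', '11', '2', '3'] (wrong)
print(sorted(sequence_ids, key=int))  # ['1', '2', '3', '10', '11'] (correct)
```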
[Admin: This post is related to the 06.14.17 post about task date issues when importing MS Project (MPP) files. To see other related posts, use the Gantt tag.]
Logged into TRIRIGA 3.5.2/10.5.2. Set the project. Navigated to the Schedule tab. Imported an MS Project (MPP) file. The first thing we noted was that the Start and End times were set to 9AM. A time zone was set, but no calendar. After fixing dependencies and saving to the Gantt, we noted that the schedule task and work task times were an hour off; that is, the Gantt section times and the query section times (specifically, the End times) differed by an hour.
After saving to the Gantt, but before saving the project, the dates and duration in the Gantt didn't match. Also after saving, we noted that the Start and End times of the schedule task were updated from 9AM to different, seemingly arbitrary, values. After the final save, the Duration values changed from Weeks/Days/Hours to Months/Weeks/Days. We also experimented with changing the Planned Start Date of the project, and noted that moving it to any date prior to the original date, or to any future date, will not trigger a recalculation at all.
There is a bug in the MPXJ library where, in some instances, the working days need to be set to 0. Moving forward, we resolved an issue where Microsoft Project (MPP) file tasks sometimes come in with dependencies that have an invalid lag time of 95.4733 days when the lag time should be 0 days.
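As a hedged sketch of one possible workaround (the dependency structure here is hypothetical, not the MPXJ API): after importing an MPP file, lags carrying the known bad value could be reset to 0 in a post-processing pass.

```python
# Hypothetical post-processing sketch: reset task dependency lags that
# carry the known-bad imported value (~95.4733 days) back to 0 days.

BAD_LAG_DAYS = 95.4733

def sanitize_lags(dependencies, tolerance=0.001):
    """Return dependencies with the invalid imported lag reset to 0 days."""
    cleaned = []
    for pred, succ, lag_days in dependencies:
        if abs(lag_days - BAD_LAG_DAYS) < tolerance:
            lag_days = 0.0
        cleaned.append((pred, succ, lag_days))
    return cleaned

deps = [("Task A", "Task B", 95.4733), ("Task B", "Task C", 2.0)]
assert sanitize_lags(deps) == [("Task A", "Task B", 0.0), ("Task B", "Task C", 2.0)]
```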
[Admin: To see other related posts, use the Gantt tag.]
In my current project, there was a suggestion to extract (updated) data from TRIRIGA at a high frequency and import it into some kind of data warehouse (DW) or business intelligence (BI) solution, and then, from there, perform more advanced reporting and analytics. Have other TRIRIGA solutions implemented something similar? Are there any TRIRIGA best practices or recommendations for staging area, extract-transform-load (ETL), DW, or BI reporting solutions?
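One common ETL pattern for high-frequency extracts is watermark-based incremental extraction: pull only rows updated since the last run. A minimal sqlite sketch, with illustrative table and column names (not TRIRIGA's actual schema):

```python
# Watermark-based incremental extract: only rows changed since the last
# run are pulled, keeping frequent extracts cheap.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT, updated_at TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)", [
    (1, "Alice", "2017-06-01T10:00:00"),
    (2, "Bob",   "2017-06-02T09:30:00"),
])

last_watermark = "2017-06-01T12:00:00"  # stored by the previous ETL run
rows = conn.execute(
    "SELECT id, name, updated_at FROM people WHERE updated_at > ? ORDER BY updated_at",
    (last_watermark,),
).fetchall()

print(rows)  # only Bob's row is newer than the watermark
new_watermark = rows[-1][2] if rows else last_watermark
```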
[Admin: This post is related to the 12.15.16 post about the IBM TRIRIGA Connector for Watson Analytics. To see other related posts, use the ETL tag or Analytics tag.]
We imported an Oracle 12c database dump from a TRIRIGA platform running TRIRIGA 188.8.131.52 into another Oracle 12c server. The Oracle Data Pump Import (impdp) completed without error. The plan was to install and upgrade the TRIRIGA platform to 3.5.2. This is something we’ve done multiple times with different releases. So platform upgrades are usually painless.
We went right through the upgrade process dialogs, including a successful database server connectivity test, but then hit something we've never experienced before. Instead of installing and upgrading the database to the new platform, the installer ("Installed by InstallAnywhere 17.0 Premier Build 5158") throws up the following dialog box.
“Upgrade Not Supported. An upgrade from the version of your platform is no longer supported. Please upgrade to 184.108.40.206 first, before upgrading to this platform version. Back or Exit.”
Does anyone have any ideas why we are getting this message?
We found the issue. The schema name was "TR1DATA", not "TRIDATA". The terminal font did not show a clear distinction between "1" and "I". We'd better take a much closer look next time. Thanks for the assistance from everyone.
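When a font makes lookalike characters ambiguous, printing each character with its code point removes the guesswork. A quick Python sketch:

```python
# Catch lookalike characters such as "1" vs "I" in a schema name by
# inspecting code points instead of trusting the terminal font.

def inspect(name):
    return [(ch, ord(ch)) for ch in name]

print(inspect("TR1DATA"))  # third entry is ('1', 49): a digit
print(inspect("TRIDATA"))  # third entry is ('I', 73): a letter
```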
I have created an object migration (OM) with its workflow. The execution works well in the Development environment. But after importing the OM package with all the needed objects, the execution didn't work in the Test environment. The object migration launches. The triIntegration workflow launches. The request executes correctly in SQL Server. The connection in my object migration works.
But there is no row in the staging table S_CSTPHINTERMARCHECONTRAT. Also, I see in the logs:
Calling SQL: [INSERT INTO S_CSTPHINTERMARCHECONTRAT(DC_JOB_NUMBER, DC_CID, DC_SEQUENCE_ID, DC_STATE, DC_ACTION, DC_GUI_NAME, TRIIDTX, CSSTPHHPIDRATTTX, CSTPHRETIRETX) VALUES (?,?,?,?,?,?,?,?,?)] with params[402, 0, 1, 1, 4, cstPHInterMarcheContrat, 2013/M0166, 101GT, ]
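To check whether rows actually landed in the staging table, the logged INSERT can be replayed and counted. A hedged sqlite sketch against a simplified copy of the staging table (the column list is taken from the log; the types are guesses):

```python
# Replay the staging INSERT from the log, then verify the row landed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE S_CSTPHINTERMARCHECONTRAT (
    DC_JOB_NUMBER INTEGER, DC_CID INTEGER, DC_SEQUENCE_ID INTEGER,
    DC_STATE INTEGER, DC_ACTION INTEGER, DC_GUI_NAME TEXT,
    TRIIDTX TEXT, CSSTPHHPIDRATTTX TEXT, CSTPHRETIRETX TEXT)""")

params = (402, 0, 1, 1, 4, "cstPHInterMarcheContrat", "2013/M0166", "101GT", "")
conn.execute(
    "INSERT INTO S_CSTPHINTERMARCHECONTRAT VALUES (?,?,?,?,?,?,?,?,?)", params)
conn.commit()

count = conn.execute(
    "SELECT COUNT(*) FROM S_CSTPHINTERMARCHECONTRAT WHERE DC_JOB_NUMBER = 402"
).fetchone()[0]
print(count)  # 1 here; a count of 0 means the insert went to another database
```

A count of 0 in the Test environment, despite the logged INSERT, suggests the integration is writing to a different database than the one being checked.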
I found the problem. The configuration of the integration object pointed to the Development environment, not the Test environment.
[Admin: To see other related posts, use the DataConnect tag.]
We have some trouble understanding how the "Database" scheme is supposed to be correctly implemented through the TRIRIGA integration object. From what we saw, TRIRIGA is unable to interact with an external database unless it has 4 particular columns dedicated to the TRIRIGA integration process: IMD_STATUS, IMD_ID, IMD_MESSAGE, and TRIRIGA_RECORD_ID, on both the external source of data and the internal target for data in the TRIRIGA database.
We found it odd that, to interact with an external database, TRIRIGA forces it to have 4 columns dedicated to itself, and cannot simply send a SELECT statement and map the corresponding fields. Did we miss (or overlook) something that would avoid altering the source table? Or is altering it common practice when interacting with an external table?
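If the source table must be altered, the four control columns can at least be added without touching its existing data. A hedged sqlite sketch with an illustrative source table (actual column types on Oracle or SQL Server will differ):

```python
# Add the four integration control columns to an existing source table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SOURCE_CONTRACTS (CONTRACT_ID TEXT, AMOUNT REAL)")
for col, ddl in [
    ("IMD_STATUS", "INTEGER"),
    ("IMD_ID", "INTEGER"),
    ("IMD_MESSAGE", "TEXT"),
    ("TRIRIGA_RECORD_ID", "TEXT"),
]:
    conn.execute(f"ALTER TABLE SOURCE_CONTRACTS ADD COLUMN {col} {ddl}")

cols = [row[1] for row in conn.execute("PRAGMA table_info(SOURCE_CONTRACTS)")]
print(cols)  # original columns followed by the four control columns
```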
[Admin: This post is related to the 11.05.14 post about using an integration object with an inbound database scheme.]