One of our customers is trying to apply all the best practices from the TRIRIGA documentation, which includes several recommendations. One of them concerns the following Microsoft SQL Server database settings:
- ALLOW_SNAPSHOT_ISOLATION should be set to ON
- READ_COMMITTED_SNAPSHOT should be set to ON
Their database department is telling them that if they activate these parameters, they could be doing “dirty reads”, mainly if they read and modify the same tables at the same time. They said that other products control this situation, and they wanted to know if TRIRIGA controls it as well. If TRIRIGA handles these situations, they will change the settings. Can you please confirm if they should set these parameters to ON?
TRIRIGA controls data integrity within the context of the web application. These settings make MS SQL Server behave more like Oracle and DB2, and we recommend that they be set to ON.
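For reference, these are database-level options, not per-session SET options, so they are enabled once with ALTER DATABASE. A minimal sketch, assuming a database named TRIDATA on a server named dbserver (both placeholders), run from the sqlcmd utility during a maintenance window:

```shell
# Placeholders: TRIDATA (database) and dbserver (instance). READ_COMMITTED_SNAPSHOT
# needs exclusive access to the database, so stop the TRIRIGA JVMs first;
# WITH ROLLBACK IMMEDIATE disconnects any remaining sessions.
sqlcmd -S dbserver -d master -Q "ALTER DATABASE TRIDATA SET ALLOW_SNAPSHOT_ISOLATION ON;"
sqlcmd -S dbserver -d master -Q "ALTER DATABASE TRIDATA SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;"

# Verify both options took effect.
sqlcmd -S dbserver -d master -Q "SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on FROM sys.databases WHERE name = 'TRIDATA';"
```

Note that with READ_COMMITTED_SNAPSHOT on, readers see the last committed version of a row instead of blocking on writers’ locks. That is versioned reading, not dirty reading, and it is the same multi-version behavior that Oracle and DB2 provide.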
Our client is asking for the proper way to migrate project templates (the project itself, tasks, dependencies, roles) between instances. Object migration (they use TRIRIGA 3.4.1) doesn’t move (template) associations, so all records are created separately. Can I ask your recommendation on the right way to do it?
Are there any suggestions for tools or best practices on validating data after a legacy data load into TRIRIGA?
We are converting from 14 different legacy systems into TRIRIGA, and rather than relying on manual validation, we’d like suggestions for tools that could be used. Since the TRIRIGA database is so complex and convoluted, we thought it’d be best not to try comparison scripts at the database level.
[Admin: This post is related to the 01.09.17 post about the best practice for localized data loading.]
What are the concerns about stopping my database for maintenance while leaving the IBM TRIRIGA JVMs (JBoss, WebLogic, WebSphere) up and running? Will they reconnect automatically after my database is up and running again? I need to programmatically schedule database maintenance for my TRIRIGA system.
When the database is down, the application server (JBoss, WebLogic, WebSphere) will receive connection errors from the JDBC component, and the JVMs will stop responding after that. When the database comes back up, the application server will not reconnect the JVMs automatically. The JVMs need to be restarted manually.
The best practice for database maintenance that requires a database shutdown will always be to shut down all applications and sessions connected to it BEFORE the database itself. This gives the systems time to close ongoing transactions gracefully.
If you need to coordinate database maintenance with automatic JVM restarts, you need to create a batch script to manage that. This is a customized script (not under IBM TRIRIGA support) that stops the JVMs first, then runs the database maintenance itself (likely stopping the database first), then restarts the database and issues commands to restart the IBM TRIRIGA application server JVMs.
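The sequence above can be sketched as a small shell script. Every command here is a placeholder assumption: substitute your application server’s actual stop/start scripts and your DBA’s database shutdown, maintenance, and startup steps.

```shell
#!/bin/sh
# Sketch of a coordinated maintenance script (customized, not under IBM
# TRIRIGA support). All step commands are placeholders ("true" stands in
# for the real command in your environment).
set -e   # abort at the first failure so we never start JVMs against a broken DB

# step LABEL COMMAND... : print the label, then run the step's command.
step() {
    echo "==> $1"
    shift
    "$@"
}

step "Stopping TRIRIGA JVMs"        true  # e.g. your stopServer script per JVM
step "Stopping database"            true  # e.g. DBA-provided shutdown script
step "Running database maintenance" true  # e.g. backups, reindexing, statistics
step "Starting database"            true  # e.g. DBA-provided startup script
step "Starting TRIRIGA JVMs"        true  # e.g. your startServer script per JVM
echo "Maintenance window complete"
```

Because of `set -e`, a failed step (for example, the database not shutting down cleanly) stops the whole script, so the JVMs are never restarted against a database in an unknown state. Schedule the script with cron, Task Scheduler, or your enterprise job scheduler.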
Based on customer feedback, the PDF format of the Best Practices for IBM TRIRIGA 3.5.x System Performance has been replaced by this more easily navigable wiki format. For PDF documents on earlier versions of TRIRIGA, go to the Archive at the bottom of this page.
Best Practices for IBM TRIRIGA System Performance
Use these System Performance best practices to improve the performance of applications based on the IBM TRIRIGA Application Platform. While these guidelines provided optimal performance in our lab test environment, your environment might require different settings. Use the settings in this wiki as a guideline or starting point, and then monitor and tune them for your specific environment.
TRIRIGA has a long and successful history in the world marketplace. Over the years, TRIRIGA has incorporated many new features, grown in complexity, and integrated with other complex software systems. Small, medium, and large organizations implement TRIRIGA in increasingly complex ways. For many customers, TRIRIGA is now a global, enterprise-wide implementation that is in use by thousands of users.
The larger and more complex the deployment of TRIRIGA is, the more challenging it is for you to keep TRIRIGA performing well for your users. Because some of the greatest challenges are faced by those who deploy these products across large, global enterprises, this document has a special focus on improving performance in advanced enterprise configurations…
[Admin: This post is related to the 04.08.15 post about performance monitoring tools, the 11.06.14 post about the Performance section of the wiki, and the 08.26.14 post about resolving issues.]
We have multiple TRIRIGA instances on our production environment. I’m wondering what is the best practice for deploying an object migration (OM) package in this kind of environment? Should we keep only one instance alive (and stop all of the other servers) before deploying the OM package?
NGKF VISION Real Estate is powered by IBM TRIRIGA. This solution is not out-of-the-box TRIRIGA but rather a pre-configured solution incorporating NGKF best practices in service delivery. VISION Real Estate is a cost-effective, low-risk and schedule-friendly replacement for the myriad point solutions, spreadsheets and reporting applications utilized at most organizations. This tool supports the NGKF Integrator Model and NGKF Account Management services.
There’s a stigma in the industry about Integrated Workplace Management Systems (IWMS). Statistically speaking, many IWMS projects fail. Moreover, even when an IWMS project is successful, only one or two modules are effectively utilized, creating an expensive and ultimately wasteful technology point solution. Until now.
NGKF has reinvented how IWMS is delivered, providing clients a pre-configured solution built on a mature, commercial platform. NGKF VISION Real Estate is part of an integrated technology platform utilizing IBM TRIRIGA with NGKF best practices, as well as features not available in IWMS: a supplier registration portal, a business intelligence solution, and a suite of benchmarking and analytics tools.
NGKF has made the complex IBM TRIRIGA solution user-friendly, efficient, fast, and affordable. NGKF VISION Real Estate is even leveraged within NGKF’s own brokerage, project management, facilities management and lease administration user community. VISION Real Estate is implemented by CRE consultants with expertise in CRE service delivery, data management, and industry best practices…
[Admin: This post is related to the 06.01.16 post about NGKF GCS and the acquired CFI team.]
We would like to keep track of the associations (Warranty For/Has Warranty) made and broken between any triWarranty and any triBuildingEquipment. The solution should provide information about the user making the change, the contract and asset concerned, the association name, and the new status (“Broken” or “Linked”).
We know that the “Audit All Data” check box in the business object properties already keeps track of the associations. By the way, is it the only property that does that? But we don’t really want to audit all of the data of triBuildingEquipment. Some fields are already auditable, and we don’t want to audit the others.
In short, what would be the best practice to get a permanent and precise tracking of associations made between two objects? I am wondering if the triLog module would do the trick, either by creating a new cstAssociationLog business object, or by using an existing one and having an asynchronous workflow creating the log.
Actually, a follow-up question would be: How can we use the triLog module to implement a new logging system? I am pretty sure the question is not new, so a pointer to the part of the documentation that I might have missed would help.
[Admin: This post is related to the 08.24.16 post about tracking when and who created an association.]
If we add or remove users from the Workflow Agent settings on process servers (as the “best practice” is to have only one “open” process server and keep the others “restricted”), we have to restart all process servers for the change to take effect.
The 3.5.0 Administrator Console user guide [PDF] does not suggest that a restart is necessary, and since this is the new “best practice”, the change should be dynamic rather than require a system outage every time you make it…