Starting in IBM TRIRIGA 3.5.3, the Report Scheduler application gives administrators the ability to schedule and automate reports, queries, and external reports. Administrators can select the reports to run, set up a schedule and time, and identify who receives the reports. Once scheduled, the platform runs the reports and sends them as attachments in an email notification.
The application is found under Tools > System Setup > Report Scheduler.
[Admin: To see other related posts, use the Reports tag or My Reports tag.]
The SESSION_HISTORY table uses 500GB of disk space in our production environment. What is the best way to clean it up?
You would want to make appropriate backups and test this thoroughly, but it can be done via the Platform Maintenance Scheduler (formerly Cleanup Agent) in the IBM TRIRIGA Admin Console. You can develop SQL that performs the cleanup according to your business requirements, add it as a new cleanup command in the bottom section of the window, and then add a new Cleanup Schedule event in the top section of the window to have it run periodically.
This may be something that you want to reach out to a business partner for assistance with, depending on your skill and comfort level in implementing such a change. If so, you can search the IBM PartnerWorld portal.
[Admin: To see other related posts, use the Sessions tag or History tag.]
In one of our environments, we have a large amount of records that have been transitioned to the null state. When the Cleanup Agent runs, it runs out of DB2 transaction log space executing the following:
DELETE FROM IBS_SPEC_ASSIGNMENTS WHERE EXISTS (SELECT 'X' FROM IBS_SPEC_CA_DELETE WHERE IBS_SPEC_CA_DELETE.SPEC_ID=IBS_SPEC_ASSIGNMENTS.SPEC_ID)
For workflow instance saves, the Cleanup Agent now seems to remove the data in small chunks (of 1000 rows each). But for the record data cleanup, it still seems to (try to) remove all data in one huge SQL statement/transaction. Is it possible to get the Cleanup Agent to remove record data in chunks, like it does for workflow instance saves?
Otherwise, I’m thinking of writing a small util that would run the statement above, but in smaller chunks, since it seems we still have the list of record IDs that it tries to remove in the IBS_SPEC_CA_DELETE table. Any obvious issues with that?
As of 3.5.3, there have been no changes to delete the IBS_SPEC_ASSIGNMENTS data in chunks. This sounds like a good PMR. Deleting records directly from the database is rarely recommended, but in this circumstance, you might be okay. However, the IBS_SPEC_CA_DELETE table only exists while the Cleanup Agent is running. It is dropped when the agent goes back to sleep.
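A small utility along the lines the poster suggests is straightforward. The sketch below is illustrative only: it uses sqlite3 as a stand-in for DB2 (DB2 would use, e.g., `FETCH FIRST n ROWS ONLY` rather than SQLite's `ROWID`/`LIMIT`), the chunk size is an assumption to be tuned against your transaction log capacity, and since IBS_SPEC_CA_DELETE is dropped when the agent sleeps, you would first have to snapshot that list of SPEC_IDs into a table of your own.

```python
import sqlite3

CHUNK = 1000  # rows per transaction; an assumption, tune to your log capacity

def delete_in_chunks(conn):
    """Delete IBS_SPEC_ASSIGNMENTS rows whose SPEC_ID appears in
    IBS_SPEC_CA_DELETE, committing after every CHUNK rows so that no
    single transaction has to hold all the deletes in the log."""
    total = 0
    while True:
        cur = conn.execute(
            """DELETE FROM IBS_SPEC_ASSIGNMENTS
               WHERE ROWID IN (
                   SELECT a.ROWID FROM IBS_SPEC_ASSIGNMENTS a
                   WHERE EXISTS (SELECT 'X' FROM IBS_SPEC_CA_DELETE d
                                 WHERE d.SPEC_ID = a.SPEC_ID)
                   LIMIT ?)""",
            (CHUNK,),
        )
        conn.commit()          # end the transaction, freeing log space
        if cur.rowcount == 0:  # nothing left to delete
            break
        total += cur.rowcount
    return total
```

Committing after each chunk ends the transaction, so the log only ever needs to hold one chunk's worth of deletes at a time.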
[Admin: To see other related posts, use the Cleanup tag or SQL tag. As a side note, starting with version 3.4, the Cleanup Agent name was changed to Platform Maintenance Scheduler.]
We are running TRIRIGA 10.5.2 and 18.104.22.168. Some of our lease records are getting stuck in “Processing” status after we try to activate the record. Since the record is processing, we have no buttons at the top to Revise, Save, etc. This only happens sometimes, and only to lease records that have payment schedules. From the looks of it, all of the payment line items do get created. A couple of questions:
- (1) Is there a fix to this to keep it from happening again?
- (2) Can these records that are stuck in processing be pushed to active or do they have to be re-entered?
I was actually able to apply a workflow fix provided by IBM so that this will not happen going forward. So far, it has been working as planned…
With regard to getting the leases “unstuck”, I created an editable query, imported the State Transition Actions (on the Advanced tab in the query form), and ran the report to select and process the “stuck” leases. This worked with no issues, and I did not have to delete or retire the leases. They were functioning properly after getting them out of the “Processing” state.
[Admin: This post is related to the 01.12.16 post about records getting stuck. To see other related posts, use the Performance tag or Workflow tag.]
I have come across numerous TRIRIGA clients who run their monthly lease accounting payment schedules as one single synchronous task in TRIRIGA. Often, this slows down the response times and overall environment performance considerably. Here are some simple steps to consider as recommended practices for your lease accounting payment schedule runs:
- 1. Break up processing payments into batches that fit well with your business organizations or perhaps geographical regions.
- 2. Each of these batches can be filtered when selecting the leases to process payments for and/or before Get Payments.
Note: This will limit the number of records to process and the load that is put on the system, and it may also make it easier to validate payments…
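As a conceptual illustration of steps 1 and 2 (not TRIRIGA code), batching by business organization or region and then capping the size of each batch might look like the following; the `region` key and the batch size are assumptions for the example.

```python
from collections import defaultdict
from itertools import islice

def batch_by_region(leases, batch_size=100):
    """Group lease records by region, then yield fixed-size batches so
    each payment run processes a bounded set of records."""
    by_region = defaultdict(list)
    for lease in leases:
        by_region[lease["region"]].append(lease)
    for region, records in sorted(by_region.items()):
        it = iter(records)
        # islice pulls at most batch_size records per batch
        while batch := list(islice(it, batch_size)):
            yield region, batch
```

Each yielded batch is a bounded unit of work, which is the point of the recommendation: smaller runs, lower peak load, and results that are easier to validate region by region.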
[Admin: To see other related posts, use the Accounting tag or Payments tag.]
I know that TRIRIGA SaaS comes with predefined ETL jobs, and one of them is energy log fact. I have some energy logs on some buildings. However, all of my attempts to run the ETL job for energy logs have failed. Is there any missing parameter for me to add to make this work? I did try adding triEnvEnergyItem as a BO name, but no luck.
Each ETL requires different inputs. Since ETL processing is generally a background process that runs through the scheduler, you would need to look in the server.log to see what the ETL needs if you are running it directly from the ETL Job form. You would also need to make sure that you have a license that allows you to run the ETLs, which would be a license that grants privileges to “Technology Metrics”.
Maybe this link will help: ETL and Metric Query Troubleshooting.
[Admin: To see other related posts, use the ETL tag.]
If a work task has a calendar and the “Planned End” or “Actual End” date-time field values are updated, the “Working Hours” are not always calculated correctly. The working hours do not seem to factor in the calendar hours.
Confirmed, the working hours were not being calculated correctly. The issue was that the values for the calendar fields (Year, Month, Day, etc.) were not being set correctly. The fix was to set the calendar fields exactly as we do for the other calculation methods. Moving forward, we resolved a work task record and Gantt chart issue where the Actual Working Hours and Actual Working Days were not being correctly calculated when an Actual End date value was entered.
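To illustrate the expected behavior, a calendar-aware calculation counts only the time between the two date-time values that falls inside the calendar's working window. The sketch below assumes a simple 8:00–17:00, Monday–Friday calendar; it is not the actual TRIRIGA implementation.

```python
from datetime import datetime, time, timedelta

WORK_START, WORK_END = time(8, 0), time(17, 0)  # assumed calendar window

def working_hours(start, end):
    """Count the hours between start and end that fall inside the
    8:00-17:00 weekday window, walking the span day by day."""
    total = timedelta()
    day = start.date()
    while day <= end.date():
        if day.weekday() < 5:  # Monday-Friday only
            win_start = max(start, datetime.combine(day, WORK_START))
            win_end = min(end, datetime.combine(day, WORK_END))
            if win_end > win_start:
                total += win_end - win_start
        day += timedelta(days=1)
    return total.total_seconds() / 3600
```

For example, from Monday 09:00 to Tuesday 12:00, a naive subtraction gives 27 elapsed hours, while the calendar-aware count is 8 hours on Monday plus 4 on Tuesday, i.e. 12 working hours.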
[Admin: To see other related posts, use the Calendar tag or Gantt tag.]
I have two questions:
- (1) Is there any way to export the results of a query and send them as an attachment to a particular email ID?
- (2) Is there any way to schedule to get the report results and automatically send them in Excel to an email ID?
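Outside the platform, the general mechanics of question (1), attaching query results to an email, can be sketched with the Python standard library; the recipient address, file name, and sample rows are all illustrative, and an SMTP server is still needed to actually send the message.

```python
import csv
import io
from email.message import EmailMessage

def results_as_email(rows, header, to_addr):
    """Build an email with the query results attached as CSV.
    Excel opens CSV directly; a true .xlsx file would need a
    third-party library such as openpyxl."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    msg = EmailMessage()
    msg["To"] = to_addr
    msg["Subject"] = "Scheduled report results"
    msg.set_content("Report attached.")
    msg.add_attachment(buf.getvalue(), subtype="csv", filename="report.csv")
    return msg  # deliver with smtplib.SMTP(...).send_message(msg)
```

Scheduling question (2) then reduces to running this on a timer (cron, or a scheduler inside the platform) and handing the result to an SMTP connection.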
[Admin: This post is related to the 03.15.16 post, 05.04.15 post, and 01.24.15 post about sending BIRT reports via email.]
I was wondering if there is a way to set the PM work schedule for every 2 years, maybe even 3 or 4 years?
When entering the schedule for a PM task, you would think that you could select the Recurrence Pattern Type of “Yearly”, and then enter every 2 years, 3 years, and so on, but no, you can’t! One rather cumbersome solution is to enter a yearly schedule (i.e. every year) and then set a number of exception dates for the years when you do not want the task to happen. That is a bit awkward.
The better, but less obvious, solution is to choose the Recurrence Pattern Type of “Monthly”. This offers two options to set the schedule to either “Day [x] of every [x] months” or “The [first] [Monday] of every [x] months”. Choose whichever one suits your needs, and set the “every [x] months” value to every 24 months, 36 months, 48 months, etc. The main downside of this is that the word “Monthly” appears in the name of the tasks produced, which is misleading when the schedule is effectively based on a number of years.
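The arithmetic behind the workaround is simple: a “Day [x] of every 24 months” pattern lands on the same date every 2 years. A minimal sketch of that recurrence (assuming the chosen day of the month exists in every target month, so not the 29th–31st):

```python
from datetime import date

def monthly_recurrence(start, interval_months, count):
    """Yield 'day X of every N months' occurrences: keep the start date's
    day of month and step the month forward by the interval, so an
    interval of 24 months produces one occurrence every 2 years."""
    year, month = start.year, start.month
    for _ in range(count):
        yield date(year, month, start.day)
        month += interval_months
        # normalize month overflow into the year
        year, month = year + (month - 1) // 12, (month - 1) % 12 + 1
```

With an interval of 24, a start date of 15 March 2017 recurs on 15 March 2019, 15 March 2021, and so on, which is exactly the every-2-years schedule the yearly pattern cannot express.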
[Admin: To see other related posts, use the Preventive tag or Scheduling tag.]
If you attempt to revise a project from the Schedule tab, where the Gantt chart is visible, your session expires and you receive an invalid session error. The issue was observed in Internet Explorer and Chrome, but not in Firefox.
An analysis from a Fiddler trace shows that when revising the project in Chrome, this POST to GanttDataUpload.jsp seems to kill the session. In Firefox, for whatever reason, this POST doesn’t occur, and the state transition is successful. To confirm that this is the scenario you are experiencing, use the following technote to run a Fiddler trace and check for the same GanttDataUpload.jsp call: IBM TRIRIGA using Fiddler for tracing web browser traffic.
As a temporary fix, use Firefox. When the record is in a read-only state, no Save action should be called on the Gantt. Moving forward, we resolved the session-kill issue when the user performs a Revise action on a project in the Schedule tab.
[Admin: This post is related to the 08.18.15 post about using Fiddler to trace TRIRIGA web traffic. To see other related posts, use the Gantt tag or Fiddler tag.]