I’ve noticed that the triStatusCL column of the T_TRIPEOPLE table has been defined as VARCHAR2(1000) by default. Can we resize it using SQL? What’s the impact? According to my understanding, all of the classification type fields are defined in the same way, correct?
This is actually set by design and uses 1000 characters for compatibility reasons. But the size might change based on platform changes in the future.
[Admin: To see other related posts, use the Character tag or Classifications tag.]
When the navigation item is saved with special characters, the user is unable to open the form. Upon clicking the item, an error is reported: “Due to either a session timeout or unauthorized access, you do not have permission to access this page.”
When you create a custom navigation link, special characters such as /, \, &, #, and % break the ability to redirect the user to a form or to an external link. Refrain from using special characters; they must be omitted so that the form can be rendered.
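As a rough illustration, the check described above could be sketched as follows. The character list mirrors the one in the post; the function names are hypothetical and not part of TRIRIGA:

```python
# Characters reported to break TRIRIGA navigation-item redirects (per the post above).
BROKEN_CHARS = set('/\\&#%')

def is_safe_nav_label(label: str) -> bool:
    """Return True if the label contains none of the problematic characters."""
    return not (BROKEN_CHARS & set(label))

def strip_broken_chars(label: str) -> str:
    """Remove the problematic characters so the saved item can render."""
    return ''.join(ch for ch in label if ch not in BROKEN_CHARS)

print(is_safe_nav_label("Space Audit"))        # True
print(is_safe_nav_label("R&M / Work Orders"))  # False
print(strip_broken_chars("R&M / Work Orders"))
```

A check like this would run before saving the navigation item, so the label never reaches the database with characters that break the redirect.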
[Admin: This post is related to the 01.29.18 post about restricting special characters in a text field. To see other related posts, use the Character tag.]
I want to be able to restrict the user from entering special characters like !@#$% in a text field in a form. I could not find any such utility in the Form Builder or Data Modeler. If my understanding is correct, I should be using a workflow for this. But I am not sure how to do it. Any suggestions?
Do you want the field to disallow spaces and numeric characters, as well as special characters like !@#$%? If yes, then Validation > “Alpha Only no Spaces” through Data Modeler may help you…
If you opt to go the workflow route, you could have a switch that uses a few Contains() statements to check for the individual characters you don’t want, but I don’t know how well that function will work with special characters. You may want to look at using the isStringPatternMatch() function, as described in this thread.
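The pattern-match approach could look something like the sketch below. This is plain Python, not workflow syntax; it only illustrates the idea behind a single pattern check versus many Contains() calls, and the allowed character set here is an assumption to adjust for your field:

```python
import re

# Accept only letters, digits, and spaces; everything else
# (including !@#$%) fails validation. The pattern is an example,
# not a TRIRIGA default.
ALLOWED = re.compile(r'[A-Za-z0-9 ]*')

def passes_validation(value: str) -> bool:
    """Return True if the whole value matches the allowed pattern."""
    return bool(ALLOWED.fullmatch(value))

print(passes_validation("Building 42"))  # True
print(passes_validation("Cost!@#$%"))    # False
```

One full-string pattern match replaces a chain of per-character checks, which is why the pattern-based function tends to be the cleaner route.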
[Admin: To see other related posts, use the Character tag.]
I am trying to export foreign language data using integration object. Alphanumeric values are exported correctly, but I cannot figure out the character set used for foreign language values, namely Japanese. The Japanese values are displayed correctly when I look at the data from a browser, and the data is stored in UTF-8 in the database. But when I export the value out to a file using the integration object, the value is no longer in UTF-8. Does anyone know what character set the integration object uses? How I can change it to UTF-8?
You are absolutely right that it should use UTF-8, but after talking to a co-worker and looking at some code, it does not appear that is the case. Submit a PMR and reference RTC 292380 and include that you were referred by me to submit the PMR… This issue is tracked by APAR IJ02452.
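To see why the character set matters, here is a small sketch (plain Python, independent of the integration object) showing that Japanese text only round-trips when both the writer and the reader agree on UTF-8; the sample string is illustrative:

```python
# Japanese text round-trips cleanly through UTF-8.
text = "東京オフィス"  # "Tokyo office" (example value)

utf8_bytes = text.encode("utf-8")
assert utf8_bytes.decode("utf-8") == text  # lossless round trip

# Writing the same string in a single-byte charset fails outright,
# because Japanese characters have no Latin-1 representation.
try:
    text.encode("latin-1")
except UnicodeEncodeError as exc:
    print("export would lose data:", exc.reason)

# When writing an export file yourself, pin the encoding explicitly, e.g.:
# open("export.csv", "w", encoding="utf-8")
```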
[Admin: To see other related posts, use the Integration Object tag or Multibyte tag.]
We have upgraded the TRIRIGA platform to 184.108.40.206 and started upgrading the application from 10.2 to 10.5.2 in incremental order (10.3, 10.3.1, and so on, up to 10.5.2). To minimize the outage and complexity during the production implementation, we have been advised to take a final OM package after completing the 10.5.2 deployment and to reapply any customizations that might have been impacted by the upgrade. This final OM package will contain all the changes from 10.2 to 10.5.2.
Our question is on the patch helpers: Can we run all the patch helpers (from 10.3 to 10.5.2 in order) after importing the final OM package?
Also, we are running the Varchar-to-Numeric script before importing the application upgrade packages. This script takes a long time (almost a day in two test environments), but in another environment, it has been running for more than 2 days and still hasn't completed. Is it normal for this script to run this long? Or will it be an issue? There are no differences between the environments.
I wouldn’t recommend doing the upgrade in one package. Usually, it ends up being quite large and it will cause issues. The IBM-recommended way is to perform each OM, then run the patch helpers. Once you have upgraded the OOB OM packages, you can have one OM which has your custom objects…
[Admin: This post is related to the 10.25.17 post and 04.28.17 post about running “SetVarcharColsToNumeric” scripts. To see other related posts, use the Scripts tag.]
I encountered a bug with classification (CL) field types. I have a classification BO whose publish name is composed of (ID – Name). I have a certain number of records where the character length of the Name field is about 150 characters, so the full length of the publish name is more than 100 characters.
- Problem: I went to a form that references that classification via an associated CL field. After I selected the value, I noticed that the displayed value showed a truncated value that was less than 100 characters. However, when I went to the Association tab of that form, it had the correct association.
- Second Problem: When I imported data via Data Integrator (DI), I made sure that the CL field had the full path, which is more than 100 characters. DI gave no errors after the import. I opened the record to verify that the CL field was populated, but it had not been updated and was left null. I had to manually select the value in the CL field to associate it correctly.
Question: How do I import data with CL fields that have more than 100 characters?
I am not sure how to import data with CL fields that are more than 100 characters, but if you feel you have encountered some bugs, please submit a PMR.
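Until the underlying issue is resolved, one workaround is to pre-check the import file before running DI, so over-long CL paths are caught instead of silently importing as null. A minimal sketch, assuming a tab-delimited file; the column name "triStatusCL" and the sample rows are hypothetical, and the 100-character limit reflects the behavior described above:

```python
import csv
import io

# Flag rows whose classification (CL) path exceeds the ~100-character
# limit observed above, since DI leaves such fields null without error.
CL_COLUMN = "triStatusCL"
LIMIT = 100

# Stand-in for the real import file.
sample = io.StringIO(
    "triNameTX\ttriStatusCL\n"
    "Rec-1\tActive\n"
    "Rec-2\tClassifications/" + "X" * 120 + "\n"
)

flagged = []
reader = csv.DictReader(sample, delimiter="\t")
for line_no, row in enumerate(reader, start=2):  # header is line 1
    if len(row[CL_COLUMN]) > LIMIT:
        flagged.append(line_no)
        print(f"line {line_no}: CL path is {len(row[CL_COLUMN])} chars, may import as null")
```

Rows flagged this way can then be handled manually, as the post describes, rather than discovered one by one after the import.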
[Admin: To see other related posts, use the Classifications tag or Character tag.]
I wanted to see if anyone has set up the following settings in an Oracle Database for TRIRIGA 10.5.2 and 3.5.2:
- 1. NLS_LENGTH_SEMANTICS: Should this be set to CHAR? In our current production environment, it’s set to BYTE, but the TRIRIGA support documentation says that this can lead to data loss, so they recommend using CHAR.
- 2. NLS_CHARACTERSET: This is set to WE8ISO8859P1 in our current production environment, but the support document says that it must be UTF-8 or UTF-16.
- 3. Block size: This is set to 8k, but the documentation recommends using 16k.
For (1) and (2), if you never want to store multibyte characters, then what you have is fine. But if you do, then you must use what the support documentation suggests. Once you have your database created, it is difficult and time-consuming to change it, and it needs to be done outside of TRIRIGA. As for (3), I would encourage you to use 16k, since it will allow you better throughput and paging, unless you have a strong reason why you need to stay at 8k.
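The data-loss risk with BYTE semantics in (1) comes from multibyte encodings: a VARCHAR2(n BYTE) column limits bytes, not characters, and Japanese characters typically take 3 bytes each in UTF-8. A quick sketch of the difference, using an illustrative value:

```python
# Character count vs. UTF-8 byte count for a short Japanese string.
value = "会議室"  # "meeting room" (example value), 3 characters

chars = len(value)
utf8_bytes = len(value.encode("utf-8"))

print(chars)       # 3
print(utf8_bytes)  # 9 -> a VARCHAR2(3 BYTE) column could not hold this,
                   #      while a VARCHAR2(3 CHAR) column could
```

This is why the support documentation recommends CHAR length semantics whenever multibyte data may be stored.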
[Admin: This post is related to the 04.04.16 post about database character settings. NLS refers to National Language Support parameters in Oracle. To see other related posts, use the Multibyte tag, MBCS tag, or NLS tag.]