I wanted to see if anyone has set up the following settings in an Oracle Database for TRIRIGA 10.5.2 and 3.5.2:
- 1. NLS_LENGTH_SEMANTICS: Should this be set to CHAR? In our current production environment, it’s set to BYTE, but the TRIRIGA support documentation says that this can lead to data loss, so they recommend using CHAR.
- 2. NLS_CHARACTERSET: This is set to WE8ISO8859P1 in our current production environment, but the support document says that it must be UTF-8 or UTF-16 (in Oracle terms, a Unicode character set such as AL32UTF8).
- 3. Block size: This is set to 8k, but the documentation recommends using 16k.
For (1) and (2), if you never need to store multibyte characters, then what you have is fine. But if you do, then you must use what the support documentation suggests. Once your database is created, changing these settings is difficult and time-consuming, and it must be done outside of TRIRIGA. As for (3), I would encourage you to use 16k, since it will give you better throughput and paging, unless you have a strong reason to stay at 8k.
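To see why BYTE length semantics risks data loss with multibyte characters, here is a minimal Python sketch (outside Oracle, with hypothetical column limits): a VARCHAR2(n BYTE) column limits storage to n bytes, while VARCHAR2(n CHAR) limits it to n characters, and under UTF-8 those counts diverge as soon as non-ASCII characters appear.

```python
# Illustration of BYTE vs. CHAR length semantics. Oracle's VARCHAR2(13 BYTE)
# limits storage to 13 *bytes*; VARCHAR2(13 CHAR) limits it to 13 *characters*.
# The functions mimic those limits for a UTF-8 encoded value.

def fits_byte_semantics(value: str, limit: int) -> bool:
    """Mimic a VARCHAR2(limit BYTE) column under a UTF-8 character set."""
    return len(value.encode("utf-8")) <= limit

def fits_char_semantics(value: str, limit: int) -> bool:
    """Mimic a VARCHAR2(limit CHAR) column."""
    return len(value) <= limit

name = "Müller Straße"  # 13 characters, but ü and ß each take 2 bytes in UTF-8
print(len(name), len(name.encode("utf-8")))   # 13 characters, 15 bytes
print(fits_char_semantics(name, 13))          # True
print(fits_byte_semantics(name, 13))          # False: rejected or truncated
```

The same 13-character value fits a 13 CHAR column but not a 13 BYTE column, which is exactly the data-loss scenario the support documentation warns about.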
[Admin: This post is related to the 04.04.16 post about database character settings. NLS refers to National Language Support parameters in Oracle. To see other related posts, use the Multibyte tag, MBCS tag, or NLS tag.]
I’m seeing issues in report results: if the user profile language is US English, the results are of one type, but if the user profile language is German, the report shows different data. The underlying record is the same in both cases, yet when I open and check the record, its ID and name differ from what the report displayed. While investigating, we observed that these fields are marked as “Localizable” in the Data Modeler. What is the use and impact of the “Localizable” field property? Any suggestions?
Here is a PDF link to the 3.5.3 Globalization (Localization) user guide. Also, perhaps this technote on localized database storage will help provide some insight.
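As a rough mental model (an illustration only, not TRIRIGA's actual storage schema): a Localizable field keeps one value per language and falls back to the base (default-language) value when no translation exists, which is why the same record can display different data depending on the user profile language.

```python
# Sketch of per-language value resolution for a "Localizable" field.
# The field name and storage layout below are illustrative, not TRIRIGA's
# real schema.

base_values = {"triNameTX": "Headquarters"}        # default-language value
localized = {("triNameTX", "de"): "Hauptsitz"}     # per-language overrides

def resolve(field: str, language: str) -> str:
    """Return the localized value if present, else the base value."""
    return localized.get((field, language), base_values[field])

print(resolve("triNameTX", "en"))  # Headquarters (falls back to base value)
print(resolve("triNameTX", "de"))  # Hauptsitz    (localized override)
```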
[Admin: The same question is also posted in the main Application Platform forum. To see other related posts, use the Localization tag or Language tag.]
IBM TRIRIGA made vast improvements in the globalization of currencies starting with TRIRIGA Application Platform 3.4.2. If you are on a platform version before 3.4.2, we urge you to upgrade to the most current version to take advantage of these enhancements. For the enhancements to work correctly, you must update your environment after you upgrade your platform. The following discussion highlights globalization areas to review after upgrade:
- User Language: A language is defined in each user’s profile record. The language establishes the locale of the user…
- User Currency: A currency is defined in each user’s profile record. When a user creates a record that includes a currency UOM…
- UOM Value: Currency UOMs are defined in the Unit of Measure (UOM) values, which are found in Tools > Administration > Unit of Measure (UOM) > Values…
- Language Code: Review the language codes in Tools > Administration > Globalization Manager > Language Code to ensure that they are correct for your company…
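To make the interplay of user currency and currency UOM values concrete, here is a small sketch (the rates, UOM names, and conversion logic are made up for the example): a record stores its amount in the currency UOM it was created with, and the display layer converts it into the viewing user's currency using maintained conversion rates.

```python
# Illustrative currency display logic. Rates and UOM names are hypothetical;
# real conversion rates are maintained in the UOM values.

rates_to_usd = {"US Dollars": 1.0, "Euros": 1.10}  # hypothetical rates

def display_amount(amount: float, stored_uom: str, user_uom: str) -> float:
    """Convert a stored currency amount into the user's currency UOM."""
    usd = amount * rates_to_usd[stored_uom]
    return round(usd / rates_to_usd[user_uom], 2)

print(display_amount(100.0, "Euros", "US Dollars"))  # 110.0
print(display_amount(110.0, "US Dollars", "Euros"))  # 100.0
```

If the rates or the user's currency UOM are not reviewed after upgrade, the stored amount stays correct but every converted display value is wrong, which is why the checklist above matters.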
[Admin: To see other related posts, use the Currency tag or Globalization tag.]
As I understand it, both the integration object and DataConnect allow you to import localized data (except business key fields, I think). In addition, there is another option: using the Globalization Manager to import translated data. I found it pretty cool, since it deals only with the localized data and has less impact on the non-localized data. Before going forward with an option, I’d like to know: what is the best option in your experience?
Importing by using the Globalization Manager updates the L_ tables directly. If your data does not include localized values that need to be concatenated, for example in a formula, the Globalization Manager import is your best option.
However, if your data includes localized values that need to be concatenated through a formula, or if your data needs to be processed by workflow before it is added to the TRIRIGA tables, then you should use either the integration object or DataConnect.
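The guidance above boils down to two questions, sketched here as a small routing function (the return strings simply restate the advice; this is not a TRIRIGA API):

```python
# Decision sketch: which import path to use for localized data, based on
# the two criteria from the answer above.

def choose_import_path(needs_formula_concat: bool, needs_workflow: bool) -> str:
    """Pick an import mechanism for localized data, per the guidance above."""
    if needs_formula_concat or needs_workflow:
        return "integration object or DataConnect"
    return "Globalization Manager (updates L_ tables directly)"

print(choose_import_path(False, False))  # plain localized values
print(choose_import_path(True, False))   # values concatenated in a formula
print(choose_import_path(False, True))   # values processed by workflow
```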
[Admin: This post is related to the 03.02.16 post about best practices for integration optimization.]
In the Attachments tab, this package contains the untranslated labels and data delivered in IBM TRIRIGA Platform 3.5.2 and Application 10.5.2; these will be translated in a future release. You can use the files inside this package to provide translations for your supported languages, and then import them into your environments.
I am wondering if it is possible to export a user-friendly Microsoft Excel spreadsheet from TRIRIGA with the English translations in one column and the French, Danish, Chinese, or other language in another column (with one spreadsheet for each target language). If so, then this file could be uploaded back into TRIRIGA and the translation would be updated.
TRIRIGA only supports language packs in the XLIFF file format, and they must be in this format for importing via the Globalization Manager. This is why the export utility also generates the XLIFF format. You may have some luck with third-party tools to convert an XLIFF file into an Excel file. But it’s important that you convert it back to an XLIFF file with the same structure as the files in our shipped language packs. For example, tags like <tririga> and their attributes must be retained as-is.
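One workable pattern is a round trip that never regenerates the XLIFF from scratch: export the source/target pairs into rows for review in Excel, then write the reviewed targets back into the original XML tree, so custom tags and attributes survive untouched. A minimal Python sketch, using a made-up stand-in for the XLIFF content (not a real TRIRIGA language pack):

```python
# Round-trip sketch: extract <source>/<target> pairs from an XLIFF document
# into rows (e.g., for an Excel/CSV review), then apply edited targets back
# into the *same* tree so everything else is preserved byte-for-byte.
import xml.etree.ElementTree as ET

XLIFF = """<xliff version="1.2">
  <file source-language="en" target-language="de">
    <body>
      <trans-unit id="label.save">
        <source>Save</source>
        <target>Save</target>
      </trans-unit>
    </body>
  </file>
</xliff>"""

tree = ET.ElementTree(ET.fromstring(XLIFF))

# 1. Export: one (id, source, target) row per trans-unit.
rows = [(tu.get("id"), tu.findtext("source"), tu.findtext("target"))
        for tu in tree.iter("trans-unit")]
print(rows)  # [('label.save', 'Save', 'Save')]

# 2. Import: apply reviewed translations back by id, touching only <target>.
reviewed = {"label.save": "Speichern"}
for tu in tree.iter("trans-unit"):
    if tu.get("id") in reviewed:
        tu.find("target").text = reviewed[tu.get("id")]

print(tree.getroot().find("./file/body/trans-unit/target").text)  # Speichern
```

Because only the text of each `<target>` element is modified, any vendor-specific tags and attributes in the real files remain exactly as shipped.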