I am trying to export foreign-language data using an integration object. Alphanumeric values are exported correctly, but I cannot figure out the character set used for foreign-language values, namely Japanese. The Japanese values are displayed correctly when I look at the data from a browser, and the data is stored in UTF-8 in the database. But when I export the value out to a file using the integration object, the value is no longer in UTF-8. Does anyone know what character set the integration object uses? How can I change it to UTF-8?
You are absolutely right that it should use UTF-8, but after talking to a co-worker and looking at some code, it does not appear that is the case. Submit a PMR, reference RTC 292380, and include that you were referred by me to submit the PMR… This issue is tracked by APAR IJ02452.
[Admin: To see other related posts, use the Integration Object tag or Multibyte tag.]
I wanted to see if anyone has set up the following settings in an Oracle Database for TRIRIGA 10.5.2 and 3.5.2:
- 1. NLS_LENGTH_SEMANTICS: Should this be set to CHAR? In our current production environment, it’s set to BYTE, but the TRIRIGA support documentation says that this can lead to data loss, so they recommend using CHAR.
- 2. NLS_CHARACTERSET: This is set to WE8ISO8859P1 in our current production environment, but the support document says that it must be UTF-8 or UTF-16.
- 3. Block size: This is set to 8k, but the documentation recommends using 16k.
For (1) and (2), if you never want to store multibyte characters, then what you have is fine. But if you do, then you must use what the support documentation suggests. Once you have your database created, it is difficult and time-consuming to change it, and the change needs to be done outside of TRIRIGA. As for (3), I would encourage you to use 16k, since it allows better throughput and paging, unless you have a strong reason to stay at 8k.
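To see why BYTE semantics can lose multibyte data, consider the difference between what CHAR and BYTE semantics count. A minimal Python sketch (the Japanese sample string is just illustrative):

```python
# A 7-character Japanese string occupies 21 bytes in UTF-8, so a
# VARCHAR2(7 BYTE) column cannot hold it, while a VARCHAR2(7 CHAR)
# column can.
text = "日本語のデータ"  # 7 characters

char_length = len(text)                  # what CHAR semantics counts
byte_length = len(text.encode("utf-8"))  # what BYTE semantics counts

print(char_length, byte_length)  # 7 21
```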
[Admin: This post is related to the 04.04.16 post about database character settings. NLS refers to National Language Support parameters in Oracle. To see other related posts, use the Multibyte tag, MBCS tag, or NLS tag.]
The TRIRIGA integration object’s XML post type requests are not correctly encoding UTF-8 characters. As a result, double-byte characters, although they appear correctly in the server.log when the integration object’s “Debug” option is enabled, are seen by the receiver as “??”.
The issue was that XML post type requests were not correctly handling UTF-8 characters. The fix is to include UTF-8 content binding when the entity is being added to the request. Moving forward, we resolved an integration object HTTP post scheme issue, where multibyte characters were not being correctly handled in the outgoing HTTP request. Please note with this fix, if for some reason your integration object HTTP post scheme configuration leaves the Content-Type field blank, and the Content-Type is not defined in the Headers field, then the Content-Type of the HTTP request will default to: “Content-Type: text/plain; charset=UTF-8”. On prior releases, it defaulted to: “Content-Type: null”.
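The “??” symptom can be reproduced outside TRIRIGA: when the request body is written with (or interpreted under) a charset that cannot represent the characters, each multibyte character collapses to “?”. A minimal Python sketch, not the platform’s actual code:

```python
payload = "東京オフィス"  # multibyte request body

# With an explicit UTF-8 charset, the bytes round-trip cleanly.
good = payload.encode("utf-8")
assert good.decode("utf-8") == payload

# Without charset information, a sender or receiver falling back to a
# single-byte encoding replaces every multibyte character with "?".
bad = payload.encode("ascii", errors="replace")
print(bad.decode("ascii"))  # ??????
```

This is why the fixed behavior defaults the blank Content-Type to “text/plain; charset=UTF-8” rather than leaving it undefined.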
I’m running into a small issue with our platform upgrade to TRIRIGA 3.5.1. One of the prerequisites of our 3.5.1 upgrade was to delete any custom tables from the database, because they interfere with the MBCS conversion process. We had a custom table that we are using for integrations, so we scripted it out and deleted it before starting the installation of 3.5.1. But for some reason, TRIRIGA still thinks that the table is there, tries to perform MBCS conversion on it, and throws an Invalid Object Name exception since the table does not really exist in the database.
I am not sure why the MBCS conversion process is looking for a table that does not exist. I am thinking that there’s a list of tables maintained somewhere in TRIRIGA and that’s what’s causing this issue. I’m hoping that someone else has seen this issue before and can guide me in the right direction.
I finally figured it out. I found the script that’s used for converting varchar to nvarchar, and one of the things it does is that it creates a table called IBS_MBCS_CONVERSION_RECOVERY, which is used in case the conversion of a table fails. It turns out that we still had an entry in IBS_MBCS_CONVERSION_RECOVERY for our custom table from before, which was essentially causing the platform to attempt a conversion every time on startup. Once I deleted the entry for our custom table, the conversion error went away.
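The startup behavior described above can be pictured with a small sketch. This is hypothetical and uses SQLite for illustration; only the table name IBS_MBCS_CONVERSION_RECOVERY comes from the actual platform, while the TABLE_NAME column and the surrounding logic are assumptions:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IBS_MBCS_CONVERSION_RECOVERY (TABLE_NAME TEXT)")
# Stale entry left behind for a custom table that was later dropped.
con.execute(
    "INSERT INTO IBS_MBCS_CONVERSION_RECOVERY VALUES ('MY_CUSTOM_TABLE')")

def tables_pending_conversion(con):
    # On startup, conversion is retried for every table listed here,
    # even if the table no longer exists in the database.
    rows = con.execute("SELECT TABLE_NAME FROM IBS_MBCS_CONVERSION_RECOVERY")
    return [r[0] for r in rows]

print(tables_pending_conversion(con))  # ['MY_CUSTOM_TABLE']

# The fix: delete the stale row so startup no longer attempts conversion.
con.execute("DELETE FROM IBS_MBCS_CONVERSION_RECOVERY "
            "WHERE TABLE_NAME = 'MY_CUSTOM_TABLE'")
print(tables_pending_conversion(con))  # []
```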
Disclaimer: TRIRIGA encourages using the latest platform release when installing new TRIRIGA databases. This ensures that you have the latest platform with fixes to known issues. However, if you need to stay at 3.4.2.x and are installing on Microsoft SQL Server, review the following as you might encounter collation issues.
New database install, single-byte
There is a known issue with a new install of TRIRIGA 3.4.2 on SQL Server where the single-byte option is ignored during the installation process, and the resulting TRIRIGA database will be MBCS with a Latin1_General_CS_AS collation. If a single-byte environment is absolutely necessary, we suggest using TRIRIGA 3.5, where the issue has been resolved.
New database install, multibyte, non-English collation
There is a known issue with a new install of TRIRIGA 3.4.2 on SQL Server where the multibyte option is selected with a collation other than English/Latin1_General_CS_AS. The chosen collation is used when building and populating the database, but the ENVIRONMENT_PROPERTIES value for “db.mbcs.mssql.collation” is not set correctly, which can cause issues when new text fields and new business objects are created and populated. You may see a collation conflict exception. (Note that for English collation, there is no issue, since “db.mbcs.mssql.collation” is set correctly.) We suggest you make the following changes to ENVIRONMENT_PROPERTIES…
[Admin: This post is related to the 06.29.16 post about installing TRIRIGA 3.4.2 or 3.5.0 on a non-US English database.]
We have received different opinions from our DBA about the international language settings. Is it OK to leave it as BYTE?
- Database Character Set: AL32UTF8
- Character Length Semantic: BYTE
- Message: WARNING!! The database character set is inconsistent with the character length. When using a UTF character set which supports multibyte characters the NLS_LENGTH_PARAMETER must be set to CHAR. Refer to the Administration Guide for more details
In the following Oracle 11g document and Oracle 12c document, there is a caution given by Oracle when setting the parameter to CHAR:
- NLS_LENGTH_SEMANTICS = ‘CHAR’
- Caution: Oracle strongly recommends that you do NOT set the NLS_LENGTH_SEMANTICS parameter to CHAR in the instance or server parameter file. This may cause many existing installation scripts to unexpectedly create columns with character length semantics, resulting in runtime errors, including buffer overflows.
TRIRIGA is not one of the applications impacted by that caution. In fact, we require NLS_LENGTH_SEMANTICS = ‘CHAR’ for field lengths to be handled correctly, so Oracle’s caution can be safely ignored in this case. If you do not use CHAR semantics and you put MBCS data into the database, you will run into issues when saving data. When we create tables with, say, a field size of 100, we expect it to be 100 CHARs, not 100 BYTEs; if you try to save 100 MBCS characters into a 100-BYTE field, the save will fail. If you are saving only single-byte data, it doesn’t matter, and BYTE is fine.
Having said that, NLS_LENGTH_SEMANTICS is not something to change midstream. If the TRIRIGA database was already created while the length semantics were set to BYTE, the database fields are probably already defined in BYTEs. Again, Oracle’s caution is not applicable to TRIRIGA; without the parameter set correctly, we will have save issues. We rely on the database to handle field sizes properly, and our documentation clearly states that if you are going to save MBCS data, your NLS_LENGTH_SEMANTICS should be set to ‘CHAR’ prior to creating the TRIRIGA database.
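The field-size point can be made concrete: under BYTE semantics, a column sized for 100 “characters” actually holds 100 bytes, which is only 33 whole three-byte Japanese characters. A short Python sketch of the arithmetic:

```python
text = "あ" * 100           # 100 characters the application expects to store
raw = text.encode("utf-8")  # 300 bytes: each character is 3 bytes in UTF-8

# A VARCHAR2(100 BYTE) column keeps only the first 100 bytes: 33 whole
# characters plus one partial (garbage) byte, which decode drops here.
truncated = raw[:100].decode("utf-8", errors="ignore")
print(len(raw), len(truncated))  # 300 33
```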
A document with a file name in Korean fails to upload, view, or download properly.
We needed to (1) add an onError method in upload.js to close the window and refresh the document list, (2) use StringEscapeUtils to escape the document content, and (3) add UTF-8 to the content disposition so the file name can be unescaped by all browsers.
An issue was fixed where uploading a document with multibyte characters in the file name would not close the upload popup window after clicking the submit button. Also, another issue was resolved in Document Manager where viewing or downloading a file with a multibyte character file name (such as Korean) would show encoded characters instead.
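The content-disposition part of the fix can be sketched in Python: percent-encoding the UTF-8 bytes of the file name in the RFC 5987 style (`filename*=UTF-8''...`) lets browsers unescape non-ASCII names such as Korean. The header construction below is illustrative, not the platform’s actual code:

```python
from urllib.parse import quote, unquote

filename = "보고서.pdf"  # Korean file name

# RFC 5987 / RFC 6266 style: percent-encode the UTF-8 bytes of the name.
header = "attachment; filename*=UTF-8''" + quote(filename, safe="")
print(header)

# A browser reverses the escaping to recover the original name.
encoded = header.split("''", 1)[1]
assert unquote(encoded) == filename
```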