IV96036: DataConnect issue with same BO name in different modules


If two business objects (BOs) with the same name exist in different modules, and you select the “Validate” check box on DataConnect (DC) runs, an SQL statement that is built to retrieve a single row (the BO name) fails, because the logic does not include the module in the WHERE clause.

The issue was that the File-to-DC validation logic used SQL that assumed BO names are unique, and it did not account for same-named BOs in different modules. This fix adds a module name parameter to the logic, so that the SQL knows which module to retrieve the BO from. Moving forward, we resolved an integration object File-to-DC issue involving the “Validate” check box: when the check box was selected, the validation process would fail if the BO being validated had the same name as another BO in a different module.
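As a rough illustration, the lookup goes from something like the first query below to something like the second. The table and column names are placeholders for illustration only, not the actual TRIRIGA metadata schema:

  -- Before the fix: returns more than one row when two modules contain
  -- a BO with the same name, so the single-row lookup fails.
  SELECT BO_ID
    FROM BO_DEFINITIONS
   WHERE BO_NAME = 'myBusinessObject';

  -- After the fix: the module name is passed in as a parameter, so the
  -- lookup resolves to exactly one row.
  SELECT BO_ID
    FROM BO_DEFINITIONS
   WHERE BO_NAME = 'myBusinessObject'
     AND MODULE_NAME = 'myModule';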

[Admin: To see other related posts, use the DataConnect tag.]

Continue reading

Why is the DataConnect staging table empty after OM import?


I have created an object migration (OM) with its workflow. The execution works well in the Development environment, but after importing the OM package with all the needed objects, the execution didn’t work in the Test environment. The object migration launch works. The triIntegration workflow launch works. The execution of the request works in SQL Server. The connection in my object migration works.

But there are no rows in the staging table S_CSTPHINTERMARCHECONTRAT. Also, I see the following in the logs:

Calling SQL: [INSERT INTO S_CSTPHINTERMARCHECONTRAT(DC_JOB_NUMBER, DC_CID, DC_SEQUENCE_ID, DC_STATE, DC_ACTION, DC_GUI_NAME, TRIIDTX, CSSTPHHPIDRATTTX, CSTPHRETIRETX) VALUES (?,?,?,?,?,?,?,?,?)] with params[402, 0, 1, 1, 4, cstPHInterMarcheContrat, 2013/M0166, 101GT, ]

I found the problem. The configuration of the integration object was for the Development environment and not for the Test environment.
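For anyone hitting the same symptom, a quick sanity check is to query the staging table directly for the job number shown in the log. This is only a diagnostic sketch; the column names are taken from the INSERT statement above:

  -- Check whether the DataConnect job wrote any rows to the staging table.
  -- 402 is the DC_JOB_NUMBER shown in the log entry above.
  SELECT DC_JOB_NUMBER, DC_STATE, DC_ACTION, DC_GUI_NAME, TRIIDTX
    FROM S_CSTPHINTERMARCHECONTRAT
   WHERE DC_JOB_NUMBER = 402;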

[Admin: To see other related posts, use the DataConnect tag.]

Continue reading

Why does the integration object import need 4 dedicated columns?


We have some trouble understanding how the “Database” scheme is supposed to be correctly implemented through the TRIRIGA integration object. From what we saw, TRIRIGA is unable to interact with an external database if it does not have four particular columns dedicated to the TRIRIGA integration process. These columns are IMD_STATUS, IMD_ID, IMD_MESSAGE, and TRIRIGA_RECORD_ID, on both the external source of data and the internal target for data in the TRIRIGA database.

We found it odd that, to interact with an external database, TRIRIGA forces it to have four columns dedicated to itself, and is not able to simply send a SELECT statement and map the corresponding fields. Did we miss (or overdo) something that could avoid altering the source table? Or is this the common practice for interacting with an external table?
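For reference, here is a minimal sketch of what a source table ends up looking like with the four dedicated columns added alongside the business data. The business columns and data types are hypothetical examples, and the per-column comments describe assumed roles; only the four integration column names come from the product:

  -- Hypothetical external source table for the inbound “Database” scheme.
  CREATE TABLE EXT_BUILDING_SOURCE (
      BUILDING_NAME      VARCHAR(100),    -- example business column
      BUILDING_CITY      VARCHAR(100),    -- example business column
      IMD_ID             NUMERIC(10),     -- row identifier used by the integration process (assumed role)
      IMD_STATUS         VARCHAR(20),     -- processing status used by the integration process (assumed role)
      IMD_MESSAGE        VARCHAR(1000),   -- status/error message written back (assumed role)
      TRIRIGA_RECORD_ID  VARCHAR(100)     -- ID of the TRIRIGA record created or updated (assumed role)
  );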

[Admin: This post is related to the 11.05.14 post about using an integration object with an inbound database scheme.]

Continue reading

Is there a way to (OM) migrate integration object records?


Is it possible to (OM) migrate integration object records? I’ve created two integration objects in my Dev environment. But I’m not sure which BO to use to (OM) migrate my two records to my Test environment.

[Admin: To see other related posts, use the Integration Object tag.]

Continue reading

IV94789: Set up data source before exporting data load spreadsheet


The TRIRIGA 3.5.2 “Application Building: Data Management” user guide does not indicate that the data source needs to be configured before attempting to export a Data Load spreadsheet.

The steps for “Creating a data load spreadsheet”, as listed in the 3.5.2 “Application Building: Data Management” user guide, do not indicate that you need to set up the data source properly before doing this. In fact, the guide does not mention the need to set up the data-load data source until the “Processing and loading sample data” section, which comes after the section on creating a spreadsheet.

In the “Creating a data load spreadsheet” section, the following should be inserted as Step 3:

  • 3. Ensure the data source is correctly set up for each data load item:
    • a. Select the data load item.
    • b. Select the Integration Object field.
    • c. In the Data Source section of the integration object form, modify the credentials of the database if needed.
    • d. Select the Test DB Connection link to verify that you can successfully connect to the database.
    • e. Save and close the integration object form.

The old Step 3 becomes Step 4.

Continue reading

Can an integration object send web service data with attachments?


We are looking for a way to send web service data, including attachments (binary objects). Can an integration object help with that? If not, what else? OSLC or CBA? We need to send requests to a web service through digitally signed mail…

The attachment will be generated by a BIRT report and stored in binary fields. Then we’ll send it via an integration object (web service). The binary field in the output is represented as a DM_CONTENT (Document Manager Content) ID… As I understand it, we can use the keyword “CONTENT” in the data map when using the database scheme, but it doesn’t work for web services. How can this be solved?

Continue reading

Is there a maximum file size for importing data using DataConnect?


I have two questions regarding the use of DataConnect (DC), and I was hoping someone might have the answers I’m looking for:

  • 1. Is there a recommended maximum file size for importing data into TRIRIGA using DataConnect via staging tables?
  • 2. We have a two-stage DataConnect process: we first create a record (using one integration object) and then associate locations to the created record (using a separate integration object). We split this into separate IO records because the parallel processing of DC was trying to create things in the wrong order. However, when our update record runs, it overwrites the values that are blank in the staging table, even though we are not directly mapping them in the workflow. Is there any way to avoid this? (See the diagnostic sketch after this list.)
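Before digging into the update mapping, it may be worth checking whether the columns you are not mapping arrive in the staging rows as NULL or as empty strings, since the two are easy to confuse. This is only a diagnostic sketch with hypothetical table and column names; DC_JOB_NUMBER follows the naming seen in other DataConnect staging tables:

  -- Diagnostic sketch (hypothetical staging table and business column).
  SELECT DC_JOB_NUMBER,
         CASE
           WHEN SOME_UNMAPPED_COLUMN IS NULL THEN 'NULL'
           WHEN SOME_UNMAPPED_COLUMN = ''    THEN 'empty string'
           ELSE 'has value'
         END AS UNMAPPED_COLUMN_STATE
    FROM S_MY_UPDATE_STAGING
   WHERE DC_JOB_NUMBER = 123;  -- replace with the job number of the update run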

Continue reading