How do I configure data loading and data exchange? - HxGN SDx - Update 63 - Administration & Configuration

Administration and Configuration of HxGN SDx


HxGN SDx's Data Validator feature helps you control how supplied data is imported, validated, and loaded into a specific target system during project handover, brownfield data take-on, data loads, and migrations.

  • Data Validator is especially useful for bulk creation of objects and documents.

    • Several loader mappings are used for setting up the system, for example, Tag Classifications, Material Classifications, Site-Plant-Area-Unit, and so on.

    • Loader mappings for documents, tags, and so on are used to load data. Some of these mappings use the CFIHOS format, while others use the native HxGN SDx format. They can be used either for migrations into HxGN SDx or for day-to-day loading of data that does not need to go through a review process via Submittals. For more information, see Load data into your system.

  • There are also cases where HxGN SDx is not used as a collaboration platform and the contractors work offline in their own systems. They hand over their data and documents in CSV format, and the CSV files are loaded through Data Validator.

  • Data Validator is required to submit Submittals and to put the documents into the review process.

What are the rules for using Data Validator?

  • You should only use CSV format files as input for Data Validator.

  • If you are creating one object at a time, we recommend using Import Mapping in Data Validator Administration to create the mappings interactively, rather than using the delivered mappings.

  • If you need to bulk load objects or documents that are not covered by the delivered mappings, or that are so dissimilar to the delivered mappings that creating new mappings is easier, create your own mappings.

Do I need any special mappings when moving data into HxGN SDx from a SmartPlant Foundation system based on the ADW or the CDW data model?

Here you need to be aware that the integrated ADW/CDW model has several class definitions representing a "Tag", whereas SDx has only one class definition, FDWTag. Essentially, the source ADW/CDW system should output the data and documents into CSV formats that SDx can absorb, but some mapping is required to match the systems: property X in CDW might have a different name, or even different enumerated entries, compared to SDx. For example, CDW can have a property, CDWTagStatus, with the value AFC, while SDx has a property, FDWTagStatus, with the value Approved for construction. Once you extract the data from the source system into the appropriate CSVs, you can load it into SDx, and the data is converted and imported if the appropriate mappings are configured.
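Conceptually, the property and value translation described above can be sketched in a few lines of Python. The mapping tables and column names here are illustrative only; in practice the translation is configured through Data Validator mappings, not written as code:

```python
import csv
import io

# Hypothetical mapping tables for illustration only.
PROPERTY_MAP = {"CDWTagStatus": "FDWTagStatus"}  # CDW property -> SDx property
VALUE_MAP = {
    "FDWTagStatus": {"AFC": "Approved for construction"},  # CDW enum -> SDx enum
}

def convert_row(row):
    """Rename CDW properties and translate enumerated values to SDx terms."""
    out = {}
    for prop, value in row.items():
        target = PROPERTY_MAP.get(prop, prop)
        out[target] = VALUE_MAP.get(target, {}).get(value, value)
    return out

# Example: one CDW tag record exported to CSV.
cdw_csv = "TagName,CDWTagStatus\nP-101,AFC\n"
rows = [convert_row(r) for r in csv.DictReader(io.StringIO(cdw_csv))]
```

The same idea, applied declaratively, is what an import mapping captures: which source property feeds which target property, and how enumerated values are converted along the way.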

What do I have to do if I am moving data into HxGN SDx from a different system not based on SmartPlant Foundation data models?

Mapping is key in this scenario. If the system's structure and data model match up with SDx, minimal effort is involved in moving your data. Any deltas must first be configured in the SDx model, and then configured accordingly in the Data Validator loader, before you move your data into SDx.

Data Validator Administration

In Data Validator Administration, data files are imported using an import definition, which specifies how the data is mapped to the target system using existing class definitions, property definitions, and relationship definitions. The imported data is validated against defined validation rule sets, which can use the target system as their basis, and the export mapping ensures that the data is mapped correctly into the target system format. Job definitions combine all these components into one set of steps.

Available in the Desktop Client, the Data Validator Administration module manages the movement of data using the following set of components:

  • Import process - Uses a definition to map data files to the staging area.

  • Validation process - Evaluates imported data against a defined set of rules to ensure the validity of the data.

  • Implicit delete process - Allows you to manage the deletion or termination of objects from a target system.

  • Export process - Uses a defined mapping, based on the structure of the target system, to manage loading data into the final destination system.

  • Job definition - Links together various importing, validating, and loading components to define how a set of data will be processed by Data Validator Job Management.

  • Target system - Destination where the validated data is loaded.

  • Functions - Allow you to change the original input data to match a specific requirement of the target system configuration when data is exported.
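The flow through these components can be pictured as a simple pipeline. The sketch below is purely conceptual; none of these function names are part of the SDx API, and the validation rule shown is an invented example:

```python
def import_step(csv_rows):
    """Import process: map raw CSV rows into the staging area."""
    return [dict(r) for r in csv_rows]

def validate_step(staged):
    """Validation process: keep rows that pass the rule set; collect failures.
    Example rule (invented for illustration): TagName must not be empty."""
    valid = [r for r in staged if r.get("TagName")]
    errors = [r for r in staged if not r.get("TagName")]
    return valid, errors

def export_step(valid):
    """Export process: map validated rows into the target-system format."""
    return [{"FDWTag": r["TagName"], **r} for r in valid]

def run_job(csv_rows):
    """Job definition: chains import -> validation -> export, which is the
    sequence Data Validator Job Management automates."""
    staged = import_step(csv_rows)
    valid, errors = validate_step(staged)
    return export_step(valid), errors

loaded, rejected = run_job([{"TagName": "P-101"}, {"TagName": ""}])
```

A job definition plays the role of `run_job` here: it links the import, validation, and export components so that Data Validator Job Management can run them as one automated sequence against a target system.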

Data Validator Job Management

The Data Validator Job Management module is also available in the Desktop Client. You can create jobs using job definitions that manage the automation of the import, validation, and export of data into selected target systems. For more information, see Using Data Validator Job Management.

Exchange Data functionality

You can export data and documents to CSV files using view definitions and graph definitions to scope the data. You can validate and export these CSV files to a target system using Data Validator. For more information, see Configure the Exchange Data functionality.