Edge Server - Publish Content


Overview

There are three parts to publishing content:

  1. Source Data and Metadata Files
  2. Compiled Data and Metadata Files (the Environment)
  3. Published (Environment)

Part 1 - Source Data and Metadata Files

Content is published to the Fusion Edge Server by compiling datasets, structure files, and reference metadata files that are present in a local file system. The compilation process is run using the Fusion Edge Compiler, which is given the root folder as an argument and expects to find the following folder structure under that root folder:

|- data
|-- [agency id]
|---- [dataflow id]
|------ [dataflow version]  (data files are placed in this folder)
|- structure (structure files are placed in this folder)
|- metadata (metadata files are placed in this folder)

Where agency id, dataflow id, and dataflow version are specific to the Dataflows that the data are for. The content can be in any SDMX format; each folder can contain multiple files, and the compiler will merge the information where required.
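
The layout can also be created programmatically before any files are copied in. The sketch below is a minimal illustration only, assuming a hypothetical root folder of /edge/source and the WB POVERTY 1.0 Dataflow used in the example further down; it is not part of the Fusion Edge tooling.

 from pathlib import Path
 
 # Hypothetical root folder that will later be passed to the Fusion Edge Compiler.
 ROOT = Path("/edge/source")
 
 # Illustrative Dataflow reference: agency id, dataflow id, dataflow version.
 AGENCY_ID = "WB"
 DATAFLOW_ID = "POVERTY"
 DATAFLOW_VERSION = "1.0"
 
 def create_source_layout(root: Path) -> None:
     """Create the folder structure the Fusion Edge Compiler expects to find."""
     # Data files go in data/[agency id]/[dataflow id]/[dataflow version]
     (root / "data" / AGENCY_ID / DATAFLOW_ID / DATAFLOW_VERSION).mkdir(parents=True, exist_ok=True)
     # Structure files (e.g. corestructures.zip) go directly in the structure folder
     (root / "structure").mkdir(parents=True, exist_ok=True)
     # Reference metadata files go directly in the metadata folder
     (root / "metadata").mkdir(parents=True, exist_ok=True)
 
 if __name__ == "__main__":
     create_source_layout(ROOT)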

Note: The Fusion Edge Compiler can build this local file system automatically from content pulled from compliant SDMX web services such as those provided by Fusion Registry. Information is provided later about how this is achieved.

An example of the folder/file content is given below:

|- data
|-- WB
|---- POVERTY
|------ 1.0 
|-------- PovertyData.zip
|-------- PovertyUpdate.xml
|---- EDUCATION
|------ 1.0 
|-------- EduData_1990_2010.json
|-------- EduData2010_2020.xml
|- structure 
|-- corestructures.zip
|-- categories.xml
|-- msds.xml
|- metadata 
|-- metadataset1.zip
|-- metadataset2.zip

The files in the file system must be in SDMX format, and may be individually zipped. Each folder may contain multiple files. The compilation process will combine all the files in each folder to create a consolidated output. For example, a dataflow folder may contain multiple dataset instances with different series or time periods; the output will be a single compiled dataset instance built from all of the dataset files.
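
As an optional pre-flight check (again, an illustrative sketch rather than part of the Fusion Edge tooling), the source tree can be walked before compilation to confirm that every dataflow version folder contains at least one file and to see which files would be merged into each consolidated output, assuming the same hypothetical /edge/source root:

 from pathlib import Path
 
 ROOT = Path("/edge/source")  # hypothetical root folder
 
 def list_dataflow_inputs(root: Path) -> None:
     """Print, per dataflow version folder, the files the compiler would merge."""
     data_root = root / "data"
     # Folders are laid out as data/[agency id]/[dataflow id]/[dataflow version]
     for version_dir in sorted(data_root.glob("*/*/*")):
         if not version_dir.is_dir():
             continue
         agency, dataflow, version = version_dir.relative_to(data_root).parts
         files = sorted(p.name for p in version_dir.iterdir() if p.is_file())
         summary = ", ".join(files) if files else "WARNING: no data files found"
         print(f"{agency}:{dataflow}({version}) -> {summary}")
 
 if __name__ == "__main__":
     list_dataflow_inputs(ROOT)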