23. Copy data from multiple files into multiple tables | mapping table SQL | bulk

Cloud Tech Ram
8 Mar 2023 · 17:41

Summary

TLDR: This tutorial video guides viewers through copying data from multiple files into multiple SQL tables using Azure Data Factory. It addresses limitations such as the inability to manually map columns or use the upsert option. The video demonstrates creating datasets for the source files and destination tables, parameterizing table names, and using a lookup activity to read a file mapping master table. It also covers setting up a ForEach loop for dynamic file processing and concludes with a debug run to confirm successful data transfer, encouraging viewers to access the provided resources for further practice.

Takeaways

  • 📋 The video demonstrates how to copy data from multiple files into multiple SQL tables using Azure Data Factory.
  • 📂 Manual mapping is not possible, because the same copy activity is reused for tables with different schemas.
  • 🔄 Ensure that each source file has the same schema as its destination table to avoid issues.
  • 🛑 The upsert option cannot be used in this copy activity scenario.
  • 📅 Files and directories in the Data Lake are read dynamically; the folder structure has a folder per table (Alpha, Beta, and an inbound folder containing Gamma).
  • 🚩 An 'active' flag in the mapping table allows control over which tables are processed on specific days.
  • 🛠 A single dataset is created for the source container and another with a parameterized table name for the SQL destination, allowing files and tables to be mapped dynamically.
  • 🔄 A ForEach loop iterates over the items in the lookup's value array, dynamically copying data based on the source-to-destination mapping.
  • 🔗 The source and destination datasets are parameterized to handle dynamic table names and file paths for scalability.
  • ✅ The copy activity runs once per table (Alpha, Beta, and Gamma) and copies the respective rows into the SQL tables, confirming successful data transfer.

Q & A

  • What is the main goal of the video?

    -The main goal of the video is to demonstrate how to copy data from multiple source files into multiple SQL tables using Azure Data Factory, specifically focusing on handling different schemas for the destination tables.

  • What limitation is highlighted at the beginning of the video?

    -The presenter highlights that manual mapping cannot be done when copying data with different schemas through the same copy activity. He also notes that, unlike in an earlier video, an additional column cannot be added within the copy activity for this process.

  • How does the presenter suggest handling scenarios where data should not be copied into a particular table?

    -The presenter suggests using an 'active' flag in the mapping table. If the active flag is set to 0, the corresponding table will not be processed, allowing flexibility for excluding tables on certain days without having to delete them.
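For illustration, a minimal sketch of what the lookup might return from such a mapping table when filtered on the active flag; the column names (table_name, path, file_name) and the folder paths are assumptions for this example, not values taken from the video:

```json
[
  { "table_name": "alpha", "path": "daily/alpha",         "file_name": "alpha_", "active": 1 },
  { "table_name": "beta",  "path": "daily/beta",          "file_name": "beta_",  "active": 1 },
  { "table_name": "gamma", "path": "daily/inbound/gamma", "file_name": "gamma_", "active": 1 }
]
```

A row whose active column is 0 would simply not appear in the lookup output and would therefore be skipped by the pipeline.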

  • What is the structure of the source files and where are they stored?

    -The source files are stored in a Data Lake in a structured folder system. Each table has its own folder, and under each folder, the corresponding source files are located. There is a daily folder structure, and the file paths are mapped dynamically.

  • What kind of data set does the presenter create for the destination SQL tables?

    -The presenter creates a single data set for all the destination tables (Alpha, Beta, and Gamma) and parameterizes the table name so that it can dynamically change during execution.
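As a rough sketch, a parameterized SQL Server table dataset in ADF JSON could look like the following; the dataset, linked-service, and parameter names (ds_sql_dynamic_table, ls_sqlserver, tableName) are illustrative assumptions:

```json
{
  "name": "ds_sql_dynamic_table",
  "properties": {
    "linkedServiceName": { "referenceName": "ls_sqlserver", "type": "LinkedServiceReference" },
    "parameters": { "tableName": { "type": "string" } },
    "type": "SqlServerTable",
    "typeProperties": {
      "schema": "dbo",
      "table": { "value": "@dataset().tableName", "type": "Expression" }
    },
    "schema": []
  }
}
```

The empty schema array reflects the advice later in the video: no physical schema is imported, since the dataset must serve tables with different structures.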

  • Why is a wildcard option used for the source files in the copy activity?

    -A wildcard option is used because the file names have dynamic suffixes, such as dates. The wildcard allows the system to pick up files with a specific prefix and any variable suffix.
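In the copy activity's source store settings this roughly translates into wildcard properties like the following; path and file_name are assumed placeholders for the mapping-table columns:

```json
{
  "wildcardFolderPath": { "value": "@item().path", "type": "Expression" },
  "wildcardFileName":   { "value": "@concat(item().file_name, '*')", "type": "Expression" }
}
```

The concat appends a star to the stored prefix, so a prefix such as alpha_ would match a date-suffixed file like alpha_20230308.csv.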

  • How is the dynamic content handled for the table names in the SQL destination?

    -The table name in the SQL destination is parameterized and passed dynamically using the current item in the for-each loop. The script extracts the specific table name from the lookup activity and uses it during the copy process.
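Passing the table name to the sink then amounts to a dataset reference like this inside the copy activity, assuming the dataset parameter is called tableName and the mapping column is table_name:

```json
{
  "referenceName": "ds_sql_dynamic_table",
  "type": "DatasetReference",
  "parameters": {
    "tableName": { "value": "@item().table_name", "type": "Expression" }
  }
}
```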

  • Why does the presenter advise against importing schemas for the destination tables?

    -The presenter advises against importing schemas because the destination tables have different schemas, and importing a fixed schema could cause conflicts when copying data to different tables.

  • What task is used to read the file mapping master table, and how is it configured?

    -A 'lookup' task is used to read the file mapping master table. The task is configured to return all rows with the 'active' flag set to 1, ensuring that only the relevant data is processed.
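A minimal sketch of such a lookup activity definition, assuming a mapping table named dbo.file_mapping_master and a dataset named ds_file_mapping_master (the actual names in the video may differ):

```json
{
  "name": "LookupFileMapping",
  "type": "Lookup",
  "typeProperties": {
    "source": {
      "type": "SqlServerSource",
      "sqlReaderQuery": "SELECT table_name, path, file_name FROM dbo.file_mapping_master WHERE active = 1"
    },
    "dataset": { "referenceName": "ds_file_mapping_master", "type": "DatasetReference" },
    "firstRowOnly": false
  }
}
```

Setting firstRowOnly to false corresponds to unchecking 'First row only' in the UI so that all matching rows are returned.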

  • How does the presenter test the process after setting up the pipeline?

    -The presenter tests the pipeline using the 'debug' feature in Azure Data Factory. They check the output to verify that the copy activity runs three times (once for each table) and confirm the number of rows copied for each table.

Outlines

00:00

📘 Introduction to Copying Data into SQL Tables

The video begins by introducing the process of copying data from multiple files into multiple SQL tables using a single copy activity. The speaker highlights the limitations this imposes: manual mapping and the upsert option are unavailable, so each source file must simply share the schema of its destination table. He then shows the destination tables, which have varying schemas, and introduces a mapping table that correlates each source file with its corresponding destination table, including an 'active' flag to control whether a table is processed on a given day.

05:01

🔗 Setting Up Data Sets and Linked Services

The speaker proceeds to guide viewers on setting up datasets for the source files and destination tables within Azure Data Factory. A dataset is created for the 'file mapping master' table to read from, and another for the source files in the data lake, pointing only at the container because the file and folder paths are dynamic. For the destination, a single dataset is parameterized to handle different table names dynamically. The video also covers the linked services that connect the data lake and SQL Server to the data factory. The speaker notes that no schema is imported for the source files and that the destination table name is passed as a parameter.
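For reference, a delimited-text dataset pointing only at the container could look roughly like this; the names ds_source_container, ls_datalake, and the container name source are assumptions used for the sketch:

```json
{
  "name": "ds_source_container",
  "properties": {
    "linkedServiceName": { "referenceName": "ls_datalake", "type": "LinkedServiceReference" },
    "type": "DelimitedText",
    "typeProperties": {
      "location": { "type": "AzureBlobFSLocation", "fileSystem": "source" },
      "columnDelimiter": ",",
      "firstRowAsHeader": true
    },
    "schema": []
  }
}
```

No folder or file is specified in the location, since those are resolved at run time by the copy activity's wildcard settings.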

10:03

🔄 Implementing a Pipeline for Data Copying

The video then delves into the creation of a new pipeline, starting with a lookup task to read from the file mapping master table. This task is crucial as it determines the source and destination details for the data copying process. The speaker uses a SQL query to filter rows based on the 'active' flag, ensuring only relevant data is processed. A 'For Each' loop is introduced to iterate over the items in the 'value array' obtained from the lookup task, setting the stage for adding a copy activity within this loop for each item.
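The ForEach settings described here amount to something like the following activity definition, assuming the lookup activity is named LookupFileMapping:

```json
{
  "name": "ForEachMappingRow",
  "type": "ForEach",
  "dependsOn": [ { "activity": "LookupFileMapping", "dependencyConditions": [ "Succeeded" ] } ],
  "typeProperties": {
    "items": { "value": "@activity('LookupFileMapping').output.value", "type": "Expression" },
    "activities": []
  }
}
```

The items expression picks out only the value array from the lookup output, so each iteration receives one mapping row.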

15:04

📝 Configuring Copy Activity and Testing the Pipeline

Inside the 'For Each' loop, a copy activity is added to handle the data transfer from the source container to the SQL dataset. The speaker configures the source path and file name using dynamic content and a wildcard, ensuring that the correct files are selected even though each file name carries a date suffix. The destination is set by parameterizing the table name, allowing data to be copied to different tables. The speaker emphasizes not importing any schema, since the destination tables have different schemas. After configuring the pipeline, a debug run is performed to test the flow, which successfully copies data to the respective tables as intended.
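Putting the pieces together, the copy activity inside the ForEach might be sketched as below; the dataset, activity, and column names carry over the earlier assumptions and are not taken verbatim from the video:

```json
{
  "name": "CopyFileToTable",
  "type": "Copy",
  "inputs":  [ { "referenceName": "ds_source_container", "type": "DatasetReference" } ],
  "outputs": [ {
    "referenceName": "ds_sql_dynamic_table",
    "type": "DatasetReference",
    "parameters": { "tableName": { "value": "@item().table_name", "type": "Expression" } }
  } ],
  "typeProperties": {
    "source": {
      "type": "DelimitedTextSource",
      "storeSettings": {
        "type": "AzureBlobFSReadSettings",
        "wildcardFolderPath": { "value": "@item().path", "type": "Expression" },
        "wildcardFileName":   { "value": "@concat(item().file_name, '*')", "type": "Expression" }
      },
      "formatSettings": { "type": "DelimitedTextReadSettings" }
    },
    "sink": { "type": "SqlServerSink" }
  }
}
```

Because no mapping is specified, columns are matched by name at run time, which is why the source headers must match the destination column names.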

Keywords

💡Copy Activity

Copy Activity refers to the process in Azure Data Factory where data is copied from one location to another, such as from source files to SQL tables. In the video, this concept is central as it describes the process of copying data from multiple files into multiple SQL tables with varying schemas. The speaker discusses limitations and configuration steps related to this activity.

💡Schema

Schema is the structure of a database that defines how data is organized, including tables, columns, and types. In the video, the concept of schema is crucial as the speaker explains the challenges and considerations when copying data between tables with different schemas, highlighting that the source and destination schemas must match for successful data transfer.

💡Data Lake

Data Lake refers to a storage repository that holds vast amounts of raw data in its native format. In the video, the speaker mentions using a Data Lake as the source location where files are stored before being copied into SQL tables. The Data Lake is structured with folders and files that are accessed dynamically during the data copying process.

💡Azure Data Factory

Azure Data Factory is a cloud-based data integration service that allows for the creation, scheduling, and management of data pipelines. The entire video revolves around using Azure Data Factory to automate the copying of data from multiple files into SQL tables, explaining various tasks and configurations within this platform.

💡Linked Service

A Linked Service in Azure Data Factory defines a connection to external data sources like SQL Server or Data Lake. The speaker discusses setting up linked services to connect Azure Data Factory to the Data Lake and SQL Server, emphasizing its importance in accessing the data needed for the copy activities.

💡Pipeline

Pipeline refers to a series of data processing steps in Azure Data Factory. In the video, the speaker demonstrates creating a new pipeline, which orchestrates the various activities needed to copy data from the source files to the destination SQL tables, including reading from a master table, iterating through data, and executing the copy activities.

💡Wildcard

Wildcard is a symbol used to represent one or more characters in file names or paths. The video explains using a wildcard in the file name configuration to dynamically match files with different suffixes, such as date-stamped files, enabling flexible data copying without needing to specify exact file names.

💡Parameterization

Parameterization involves setting up dynamic values in a process to allow flexibility and reuse. In the video, the speaker explains how to parameterize table names and file paths in the copy activity, allowing the pipeline to handle multiple tables and files with different names and schemas dynamically.

💡ForEach Loop

A ForEach Loop is a control flow activity in Azure Data Factory that iterates over a collection of items. The video describes using a ForEach Loop to iterate through the rows returned from a lookup query, which contains information about which files to process and where to copy the data, ensuring that each item is handled individually.

💡Active Flag

Active Flag is a binary indicator used to determine whether a particular row should be processed. In the video, the speaker explains how setting an active flag in the mapping table helps control which tables should be updated with data on a given day, allowing for selective data processing without altering the pipeline structure.

Highlights

Introduction to copying data from multiple files into multiple SQL tables using the same copy activity.

Limitation of manual mapping due to the use of the same copy activity for different schemas.

Inability to add additional columns within the copy activity for different schemas.

Exclusion of the upsert option in the copy activity for different schemas.

Demonstration of copying data into three tables with different schemas.

Requirement for source files to have the same schema as the destination tables.

Explanation of a mapping table that lists the source files and their corresponding destination tables.

Use of an 'active' flag to control the copying of data to specific tables.

Organization of source files in the data lake with daily folders and specific paths for each table.

Creation of a linked service in Azure Data Factory for the data lake and SQL Server.

Creation of a data set for the file mapping master table in SQL Server.

Creation of a data set for the source container in the data lake without specifying file names.

Parameterization of the table name in the destination data set for dynamic table names.

Use of a for each loop to iterate over items in the value array from the lookup activity.

Setting up a copy activity within the for each loop for each table.

Use of wildcard and dynamic content for file paths and names in the copy activity.

Concatenation function to create file name patterns for the copy activity.

Execution of the pipeline and verification of the copy activity's success.

Access to SQL scripts, Excel files, and ARM templates for practice through a community link.

Transcripts

play00:00

Hello everyone, in this video we are going to see how to copy data from multiple files into multiple SQL tables. If you are new to our channel, hit subscribe; your subscription will motivate me to produce more videos in better quality. Before we proceed I just want to highlight a small limitation: you cannot do manual mapping, because we are using the same copy activity to copy data of different schemas. And if you remember, in our earlier video we saw how we can add an additional column within the copy activity, which you cannot do over here. Just have the same column names between your source file and your destination. Another limitation is that you cannot use the upsert option which is available in the copy activity.

play00:47

Let me show the destination tables. These are the three tables we are going to copy our data into, and they have different schemas: two of the tables have the same schema and the third table has a different one. I just want to show that we are able to copy into tables of different schemas with the same copy activity itself. Just make sure that the source file has the same schema as that of the destination table.

play01:13

And these are the various source files. I have created a mapping table which lists out, for each table, what the source file is and in which directory it sits. I have already shown the schema for these three tables, alpha, beta and gamma; forget about the last one, which we are going to ignore. Similarly I have provided the file names in these columns. Let me show the files from my local machine: these are the three files which are going to be our source, and I will be uploading them to the data lake. So these are the three tables, these are the three files, and these are their paths in the data lake.

play01:55

I have also created one more column, active, which says whether I need to consider that row or not. For example, if I keep active as 0 for row 2, then I shouldn't copy the data for the beta table; we should only consider the tables which have active status as 1. I have explained this with an active flag because several scenarios will come up where, on a particular day, we don't want to process data into a particular table, but on some other day it may be required. Instead of deleting from the table itself, it is good to have a flag that you can update based on your requirement.

play02:36

Now I will show how these paths are maintained in the data lake. Inside the data lake I have only one container, which is the source, and inside that I have created a daily folder. Under that we have separate folders like Alpha, Beta and inbound. Inside Alpha we have the files for alpha, and similarly for beta; but for gamma, what I did is create one more folder, inbound, and only under that do we have a separate folder for gamma, with the gamma source file inside it. You can ignore the fourth file, which is inactive.

play03:14

Now let's jump to our Azure Data Factory. In our Azure Data Factory, under Manage, I have already created a linked service to my data lake as well as to the SQL Server. If you don't have an idea about linked services, watch my introduction video about ADF. Now we need to create a dataset for this table, since we are going to read from it. So create a new dataset and search for SQL; mine is not Azure SQL, it is my personal server, so I'm just selecting that. Let me provide a name for the dataset, and from the linked service drop-down select the SQL link. It will load whatever tables are available inside the SQL Server; under that I need the file mapping master, and I want to import the schema of the table as well, so just click OK. Now, if you go to Schema, you will be able to find the schema of that particular table over here.

play04:27

Now let's create a dataset for the data lake as well. We need a dataset for this container alone, because the files and folders are going to be dynamic; we just need a dataset for the container. To do it, go here, click on New dataset and search for Data Lake. Click Continue; my input file is in CSV format, so I am selecting that. Let me provide a name for my dataset, and from the drop-down select the linked service for the data lake.

play05:04

Just browse here to select the container. I'm selecting the container alone and I am not going to select any folders inside it, because those folders we are going to read dynamically from here. The first row is a header in all our source files, meaning the column names are in the header of the file, so I am selecting that option, and I'm not going to import any schema, so I am selecting None here.

play05:36

Now we need to create a dataset for our destination tables, which are alpha, beta and gamma. Instead of creating a separate dataset for each of these tables, we are going to have a single dataset and parameterize the table name. So here search for SQL, select it and click Continue. Let me provide a name for the dataset, and from the linked service drop-down I am selecting the SQL link. I am not going to import any schema, and I'm not even going to select the table name, because those are going to be completely dynamic. Here we need to parameterize the table name; to do it, go to Parameters and click New. Provide the parameter name as table name (while recording I gave it as file name by mistake, but please provide the parameter name as table name when you are doing it). Click on Connection; for the table, instead of setting it from the drop-down, we are going to pass it dynamically. Just click Edit and you will get two boxes: the first one is for the schema and the second one is the table name. The schema is dbo, which is the default schema in SQL, so I am providing dbo. The table name is going to be dynamic, so click Add dynamic content, select the parameter which we have created, click on it and click OK.

play07:21

Now let us see what we have done so far: we have created a dataset for our file mapping master table and imported its schema; for the source file we didn't import any schema and didn't create any parameter, we just selected the source container; and for the destination we created a parameter and passed it under the table name. Now let's publish.

play07:48

Now let's jump to creating a new pipeline; from there we are going to start implementing. Just click on New pipeline. The first task is to read from the file mapping master table, because only then will we know what the source is and what the destination is. To do it, look for the Lookup task and just drag and drop it. Here, if you wish to change the name of the particular task, you can do so; I'm just renaming it. After that, click on Settings and select the source dataset, which is the file mapping master dataset we created. Uncheck 'First row only', because we want all the rows from the table. But do we need to read all the rows from the table? No, we just want whichever rows have the flag as 1; those are the data we need to read. So I'm just typing the SQL query for the same; if we execute it we will get only the three rows which have the column active as 1. Now click on Query (you can use a stored procedure as well, but for the time being I am going with the query) and paste the query which needs to be executed.

play09:15

Now we will try to run this task alone; just run Debug in order to run it. It got completed, and if you check the output, let me copy it to Notepad and paste it. We have several pieces of information in our output, like count and value, but all we need are the items inside this value array; the square brackets mean it is an array. The first item represents the alpha table, the second beta and the third gamma, so we need the items which are in this value array.

play10:01

So we are going to write a ForEach loop for it, because we are going to loop over each of the items inside that value array. Let me drag and drop it. Under Settings, click on Items and click Add dynamic content. Here we need to select the value array which we are passing; you are able to see this particular part here, showing the lookup value array. It specifically fetches that value array from the output, so just click on it. Now what will happen is that each item inside this value array will be looped over. What we need to do next is click on the edit icon to add an activity inside this ForEach; inside the ForEach we are going to add a copy activity.

play10:55

Just drag and drop it, and under Source select the dataset which you created for the source container. If we had provided the file name as well, we could leave the file path in the dataset option, but we didn't specify any file name, so we need to go with the wildcard option. In Wildcard, if you see, the container name is already here, but the path we are yet to provide, and the path, as I told you earlier, we are getting from this table. We are already reading the output from the lookup activity and iterating over it in this loop, so this value is already available in the output. Let me add dynamic content here; the ForEach already has that value, but we need to specify which column we want to read. So I have put a dot; let me open the output, where we will be getting this highlighted value, but all we need is this particular path column alone. Just copy it, go back to Azure Data Factory and paste that particular column name. Whatever value comes into this path column will be applied over here, which means the directory is going to be this one.

play12:17

In this text box we need to specify the file name. We don't have the complete file name; all we have is the file name prefix, and this is the value. Let me show in the data lake: if I go to daily and then Alpha, you can see we only have this prefix part, not the rest, because that part changes every day. That is why we have configured only the prefix part and ignored the suffix part. So here, click on the text box, click Add dynamic content, and as we did earlier select the current item of the ForEach; then we need to provide the column name so that we can access the prefix file name: copy it and add a dot followed by the column name. That value will be applied over here, but we have only the prefix part. If I leave it as it is, it won't be able to read the file, because our file has a suffix part as well, the date part. We need to tell Azure Data Factory to pick up a file which has this as the prefix with some suffix after it.

play13:43

To do that, cut this part (we'll be pasting it back later) and go to Functions; look for concat. It will usually be under string functions; if you are not able to search for it, just go under string functions and the first item is going to be concat. Paste it, then paste the item which we cut, and remove the extra @ sign; usually only one should be there. For a nested expression, always leave the first @ and just remove the unwanted @ in between. What concat does is join the strings which we provide, separated by commas. Let me provide a comma followed by a star in single quotes. What will this expression give us? In the first loop it will give a value something like the alpha file prefix followed by a star. The star represents the suffix part; the suffix can be a date part or something else, and for our scenario the suffix is the date part. We will be passing this expression to the wildcard file name, so it will pick up whichever files have the file name prefix followed by the date part, or whatever the suffix is. Now let's come back to Azure Data Factory: this concat expression will for sure pick up the files which have this suffix part as well. Let's click OK. We have now given the path name followed by the file name as a wildcard.

play15:29

Now it's time to move to the sink. Under Sink, from the drop-down, select the destination dataset, which is our SQL dataset. Once it is selected, you see the parameter we created in the table dataset (the table name, which I accidentally named file name while recording), so it came up here. I'm going to add it as dynamic content: just click on the ForEach current item, and what we need is this particular table name value, so copy it and paste it. Whatever value comes here will be passed to this parameter of that dataset. One more thing: if you go to Mapping, please do not import any schema, because for each of the files our schema is going to be different, since the tables are different. So don't import it; just click Publish.

play16:23

Let me come out of the loop and debug in order to test the flow. It got completed, and if you mouse over the copy activity it shows total runs 3: it ran three times, and down below you can also see that the copy activity ran once for each of the tables. If you click on the output you can see how many rows got copied. And that's it; you can cross-verify in your tables as well.

play16:53

Whatever SQL scripts and the Excel file we have used for this video, I have uploaded to this community; just join here for free with your email ID, and under Library you will be able to access the resources to practice. There I have uploaded a zip file which has the SQL scripts, the Excel file, as well as the ARM template of this particular video, and this is the video number; just download it from here. As of now this is free. I'll be providing the link to join this community in the video description; please do join. Thank you for watching this video, please hit subscribe and follow me on LinkedIn to stay connected.

Related Tags
Data Copy · SQL Tables · Azure Data Factory · Data Lake · Schema Mapping · Data Integration · CSV Files · Dynamic Content · Data Pipeline · Data Transfer