Sunday, March 29, 2020

Overview of Azure Data Factory Components

My First Blog on Azure

Azure Data Factory Components


Pipelines

Pipelines are the things you execute or run in Azure Data Factory, similar to packages in SQL Server Integration Services (SSIS). This is where you define your workflow: what you want to do and in which order. For example, a pipeline can first copy data from an on-premises data center to Azure Data Lake Storage, and then transform the data from Azure Data Lake Storage into Azure Synapse Analytics (previously Azure SQL Data Warehouse).
Screenshot of the Author page in Azure Data Factory, with a Pipeline open in the user interface
When you open a pipeline, you will see the pipeline authoring interface. On the left side, you will see a list of all the activities you can add to the pipeline. On the right side, you will see the design canvas with the properties panel underneath it.
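
To make this concrete, below is a minimal sketch of defining and running a pipeline with the azure-mgmt-datafactory Python SDK, following the pattern of the official quickstart. The subscription ID, resource group, factory name, and dataset names are placeholders, and the input and output datasets are assumed to already exist in the factory.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineResource, CopyActivity, DatasetReference, BlobSource, BlobSink)

# Placeholder names - replace with your own subscription, resource group and factory
adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg_name, df_name = "myResourceGroup", "myDataFactory"

# A single copy activity that moves data from an input dataset to an output dataset
# (both datasets are assumed to already exist in the factory)
copy_activity = CopyActivity(
    name="CopyInputToOutput",
    inputs=[DatasetReference(type="DatasetReference", reference_name="InputDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="OutputDataset")],
    source=BlobSource(),
    sink=BlobSink())

# The pipeline itself is just an ordered collection of activities
pipeline = PipelineResource(activities=[copy_activity])
adf_client.pipelines.create_or_update(rg_name, df_name, "CopyPipeline", pipeline)

# Kick off a single run of the pipeline
run = adf_client.pipelines.create_run(rg_name, df_name, "CopyPipeline", parameters={})
print(run.run_id)

The same pipeline can of course be built visually in the authoring interface; the SDK version just makes the underlying structure explicit.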

Activities

Activities are the individual steps inside a pipeline, where each activity performs a single task. You can chain activities or run them in parallel. Activities can either control the flow inside a pipeline, move or transform data, or perform external tasks using services outside of Azure Data Factory.
Screenshot of the Author page in Azure Data Factory, with a Pipeline open and the Activities highlighted
You add an activity to a pipeline by dragging it onto the design canvas. When you click on an activity, it will be highlighted, and you will see the activity properties in the properties panel. These properties will be different for each type of activity.
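
As an illustration of chaining, the sketch below wires two activities together with a dependency, again using the Python SDK. The Wait activities are only stand-ins for real work, and all names are made up.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineResource, WaitActivity, ActivityDependency)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg_name, df_name = "myResourceGroup", "myDataFactory"

step1 = WaitActivity(name="Step1", wait_time_in_seconds=5)

# Step2 only starts after Step1 finishes successfully; activities without
# a depends_on relationship run in parallel instead
step2 = WaitActivity(
    name="Step2",
    wait_time_in_seconds=5,
    depends_on=[ActivityDependency(activity="Step1", dependency_conditions=["Succeeded"])])

pipeline = PipelineResource(activities=[step1, step2])
adf_client.pipelines.create_or_update(rg_name, df_name, "ChainedPipeline", pipeline)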

Data Flows

Data Flows are a special type of activity for creating visual data transformations without having to write any code. There are two types of data flows: mapping and wrangling.
Screenshot of the Author page in Azure Data Factory, with a Mapping Data Flow open

Datasets

If you are moving or transforming data, you need to specify the format and location of the input and output data. Datasets are like named views that represent a database table, a single file, or a folder.
Screenshot of the Author page in Azure Data Factory, with a Dataset open
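
As a rough example, the sketch below registers a blob dataset that points at a single file. The linked service it references ("AzureStorageLinkedService") is assumed to already exist, and the container, folder, and file names are placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    DatasetResource, AzureBlobDataset, LinkedServiceReference)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg_name, df_name = "myResourceGroup", "myDataFactory"

# The dataset describes where the data lives and what it looks like;
# the connection details themselves come from the linked service it references
blob_dataset = DatasetResource(properties=AzureBlobDataset(
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference", reference_name="AzureStorageLinkedService"),
    folder_path="mycontainer/input",
    file_name="data.csv"))

adf_client.datasets.create_or_update(rg_name, df_name, "InputDataset", blob_dataset)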

Linked Services

Linked Services are like connection strings. They define the connection information for data sources and services, as well as how to authenticate to them.
Screenshot of the Author page in Azure Data Factory, with Connections open and Linked Services highlighted
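
Continuing the sketch, this is roughly how a storage linked service could be created with the Python SDK. The connection string is a placeholder and would normally come from Azure Key Vault rather than being hard-coded.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    LinkedServiceResource, AzureStorageLinkedService, SecureString)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg_name, df_name = "myResourceGroup", "myDataFactory"

# The linked service holds the connection string and authentication details;
# datasets and activities refer to it by name
connection = SecureString(
    value="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>")
linked_service = LinkedServiceResource(
    properties=AzureStorageLinkedService(connection_string=connection))

adf_client.linked_services.create_or_update(
    rg_name, df_name, "AzureStorageLinkedService", linked_service)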

Integration Runtimes

Integration runtimes specify the infrastructure to run activities on. You can create three types of integration runtimes: Azure, Self-Hosted, and Azure-SSIS. Azure integration runtimes use infrastructure and hardware managed by Microsoft. Self-Hosted integration runtimes use hardware and infrastructure managed by you, so you can execute activities on your local servers and in your own data centers. Azure-SSIS integration runtimes are clusters of Azure virtual machines running the SQL Server Integration Services (SSIS) engine, used for executing SSIS packages in Azure Data Factory.
Screenshot of the Author page in Azure Data Factory, with Connections open and Integration Runtimes highlighted
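
For the Self-Hosted case, registering the runtime in the factory could look roughly like the sketch below; the runtime and factory names are placeholders. After it is created, the authentication keys are used to register the integration runtime software installed on your own server.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeResource, SelfHostedIntegrationRuntime)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg_name, df_name = "myResourceGroup", "myDataFactory"

# Register a Self-Hosted integration runtime in the factory
ir = IntegrationRuntimeResource(
    properties=SelfHostedIntegrationRuntime(description="Runs on our on-premises server"))
adf_client.integration_runtimes.create_or_update(rg_name, df_name, "OnPremIR", ir)

# The auth keys are entered when installing the integration runtime on the local machine
keys = adf_client.integration_runtimes.list_auth_keys(rg_name, df_name, "OnPremIR")
print(keys.auth_key1)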

Triggers

Triggers determine when to execute a pipeline. You can execute a pipeline on a wall-clock schedule, at a periodic (tumbling window) interval, or when an event happens, such as a file arriving in blob storage.
Screenshot of the Author page in Azure Data Factory, with Triggers open
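
As a hedged example, here is roughly what a schedule trigger that runs a pipeline every 15 minutes could look like in the Python SDK. The pipeline name and the time window are placeholders.

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    TriggerResource, ScheduleTrigger, ScheduleTriggerRecurrence,
    TriggerPipelineReference, PipelineReference)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg_name, df_name = "myResourceGroup", "myDataFactory"

# Run the pipeline every 15 minutes within the given window
recurrence = ScheduleTriggerRecurrence(
    frequency="Minute", interval=15,
    start_time="2020-03-29T00:00:00Z", end_time="2020-03-30T00:00:00Z", time_zone="UTC")

trigger = TriggerResource(properties=ScheduleTrigger(
    recurrence=recurrence,
    pipelines=[TriggerPipelineReference(
        pipeline_reference=PipelineReference(
            type="PipelineReference", reference_name="CopyPipeline"))]))

adf_client.triggers.create_or_update(rg_name, df_name, "Every15Minutes", trigger)

# Triggers are created in a stopped state and have to be started before they fire
# (on older SDK versions this method is called start instead of begin_start)
adf_client.triggers.begin_start(rg_name, df_name, "Every15Minutes").result()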

Templates

Finally, if you don’t want to create all your pipelines from scratch, you can use the pre-defined templates provided by Microsoft, or create your own custom templates.
Screenshot of the Author page in Azure Data Factory, with Templates open
