In this example, we are going to copy the themes.csv file from Rebrickable into a blob container called lego in our Azure Data Lake Storage Gen2 account.
From the Azure Data Factory Home page, click copy data:
This opens the Copy Data Wizard. Let’s walk through each step!
1. Properties
On the Properties page, give the pipeline a name and description. Keep the default “run once now” option:
Click next to move on to the Source properties.
2. Source
On the Source page, we will first create a new linked service to Rebrickable, then create a new dataset to represent the themes.csv file.
Click create new connection:
Search for and select the HTTP linked service:
Give the linked service a name and description, and use the base URL cdn.rebrickable.com/media/downloads/. (You can find this URL by inspecting the links on rebrickable.com/downloads. Keep the last slash.) Change authentication type to anonymous. Click create:
The linked service has now been created, yay! Make sure it’s selected and click next to move on to the dataset properties:
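If you are curious about what actually got created, the JSON definition behind this linked service looks roughly like the sketch below. The name is just an example, and I am assuming the https:// scheme in the URL, so yours may look slightly different:

```json
{
    "name": "HTTP_Rebrickable",
    "properties": {
        "type": "HttpServer",
        "typeProperties": {
            "url": "https://cdn.rebrickable.com/media/downloads/",
            "enableServerCertificateValidation": true,
            "authenticationType": "Anonymous"
        }
    }
}
```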
Since we specified the base URL in the Linked Service, we only have to specify the file name themes.csv.gz in the relative URL. Keep the other default options. Click next:
This next part feels kind of like magic, especially if you have been working with SQL Server Integration Services (SSIS) in the past. The Copy Data Wizard now inspects the file and tries to figure out the file format for us. But… since we are working with a gzipped file, it doesn’t make a whole lot of sense yet…
Let’s fix that! Change the compression type to gzip. Tadaaa! Magic! Without us doing anything else manually, the copy data wizard unzips the CSV file for us and shows us a preview of the content:
If you are working with a raw CSV file, the copy data wizard can detect the file format, the delimiter, and even that we have headers in the first row. But since we are working with a gzipped file, we have to configure these settings manually. Choose first row as header:
If the headers are not detected correctly on the first attempt, try clicking detect text format again:
You can now preview the schema inside the gzipped file. Beautiful! :D
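Behind the scenes, all of these source settings end up in a delimited text dataset that references the HTTP linked service. Roughly sketched (the names are just examples):

```json
{
    "name": "SourceDataset_Themes",
    "properties": {
        "linkedServiceName": {
            "referenceName": "HTTP_Rebrickable",
            "type": "LinkedServiceReference"
        },
        "type": "DelimitedText",
        "typeProperties": {
            "location": {
                "type": "HttpServerLocation",
                "relativeUrl": "themes.csv.gz"
            },
            "columnDelimiter": ",",
            "compressionCodec": "gzip",
            "firstRowAsHeader": true
        }
    }
}
```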
Click next to move on to the Destination properties.
3. Destination
On the Destination page, we will first create a new linked service to our Azure Data Lake Storage Gen2 account, then create a new dataset to represent the themes.csv file in the destination.
Click create new connection:
Select the Azure Data Lake Storage Gen2 linked service:
Give the linked service a name and description. Select your storage account name from the dropdown list. Test the connection. Click create:
The second linked service has now been created, yay! Make sure it’s selected, and click next to move on to the dataset properties:
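Like the HTTP connection, this linked service is stored as JSON. Roughly sketched, assuming account key authentication (the name and placeholders are just examples):

```json
{
    "name": "ADLS_Lego",
    "properties": {
        "type": "AzureBlobFS",
        "typeProperties": {
            "url": "https://<your storage account>.dfs.core.windows.net",
            "accountKey": {
                "type": "SecureString",
                "value": "<your account key>"
            }
        }
    }
}
```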
Specify lego as the folder path, and themes.csv as the file name. Keep the other default options. Click next:
Enable add header to file and keep the other default options:
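These destination settings become a second delimited text dataset. In the JSON, the lego container shows up as the file system, and the add header to file option roughly corresponds to the firstRowAsHeader property. A sketch (the names are just examples):

```json
{
    "name": "DestinationDataset_Themes",
    "properties": {
        "linkedServiceName": {
            "referenceName": "ADLS_Lego",
            "type": "LinkedServiceReference"
        },
        "type": "DelimitedText",
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation",
                "fileSystem": "lego",
                "fileName": "themes.csv"
            },
            "columnDelimiter": ",",
            "firstRowAsHeader": true
        }
    }
}
```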
Click next to move on to the Settings.
4. Settings
On the Settings page, we will configure the fault tolerance settings. This is another part that feels like magic. By changing a setting, we can enable automatic handling and logging of rows with errors. Whaaat! :D In SQL Server Integration Services (SSIS), this had to be handled manually. In Azure Data Factory, you literally just enable it and specify the settings. MAGIC! :D
Change the fault tolerance settings to skip and log incompatible rows:
At this time, error logging can only be done to Azure Blob Storage. Aha! So that’s why we created two storage accounts earlier ;) Click new:
The Copy Data Wizard is even smart enough to figure out that it needs to create an Azure Blob Storage connection. Good Copy Data Wizard :D Give the linked service a name and description. Select your storage account name from the dropdown list. Test the connection. Click create:
Specify lego/errors/themes as the folder path:
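Inside the copy data activity, these fault tolerance settings show up as two properties: enableSkipIncompatibleRow and redirectIncompatibleRowSettings. Roughly sketched (the error log linked service name is just an example):

```json
"typeProperties": {
    "source": { "type": "DelimitedTextSource" },
    "sink": { "type": "DelimitedTextSink" },
    "enableSkipIncompatibleRow": true,
    "redirectIncompatibleRowSettings": {
        "linkedServiceName": {
            "referenceName": "AzureBlobStorage_ErrorLogs",
            "type": "LinkedServiceReference"
        },
        "path": "lego/errors/themes"
    }
}
```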
Click next to move on to the Summary.
5. Summary
On the Summary page, you will see a pretty graphic illustrating that you are copying data from an HTTP source to an Azure Data Lake Storage Gen2 destination:
Click next to move on to Deployment.
6. Deployment
The final step, Deployment, will create the datasets and pipeline. Since we chose the “run once now” setting in the Properties step, the pipeline will be executed immediately after deployment:
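The deployed pipeline contains a single copy data activity that ties the two datasets together, with the fault tolerance settings from the previous step inside its typeProperties (the redirect settings are left out of this sketch for brevity, and the names are just examples):

```json
{
    "name": "CopyPipeline_Themes",
    "properties": {
        "activities": [
            {
                "name": "Copy_Themes",
                "type": "Copy",
                "inputs": [
                    { "referenceName": "SourceDataset_Themes", "type": "DatasetReference" }
                ],
                "outputs": [
                    { "referenceName": "DestinationDataset_Themes", "type": "DatasetReference" }
                ],
                "typeProperties": {
                    "source": { "type": "DelimitedTextSource" },
                    "sink": { "type": "DelimitedTextSink" },
                    "enableSkipIncompatibleRow": true
                }
            }
        ]
    }
}
```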
Once the deployment is complete, we can open the pipeline on the Author page, or view the execution on the Monitor page. Click monitor:
Success! ✔🥳 Our pipeline executed successfully.
We can now open Azure Storage Explorer and verify that the file has been copied from Rebrickable:
Summary
In this post, the Copy Data Wizard created all the factory resources for us: one pipeline with a copy data activity, two datasets, and three linked services. This guided experience is a great way to get started with Azure Data Factory.
Next, we will go through each of these factory resources in more detail, and look at how to create them from the Author page instead of through the Copy Data Wizard. First, let’s look at pipelines!