Merged
18 changes: 12 additions & 6 deletions Instructions/Labs/04-ingest-pipeline.md
@@ -1,7 +1,13 @@
---
lab:
title: 'Ingest data with a pipeline in Microsoft Fabric'
module: 'Use Data Factory pipelines in Microsoft Fabric'
title: Ingest data with a pipeline in Microsoft Fabric
module: Use Data Factory pipelines in Microsoft Fabric
description: In this lab, you'll create data pipelines to ingest data from external sources into a lakehouse, and integrate Spark notebooks to transform and load the data into tables. You'll learn how to combine Copy Data activities with custom Spark transformations to build reusable ETL processes in Microsoft Fabric.
duration: 45 minutes
level: 300
islab: true
primarytopics:
- Microsoft Fabric
---

# Ingest data with a pipeline in Microsoft Fabric
@@ -42,9 +48,9 @@ Now that you have a workspace, it's time to create a data lakehouse into which y

A simple way to ingest data is to use a **Copy Data** activity in a pipeline to extract the data from a source and copy it to a file in the lakehouse.
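Conceptually, a **Copy Data** activity does an extract-and-land step: fetch bytes from a source (here, an HTTP endpoint) and write them unchanged to a file in the lakehouse. As a rough illustration only (not the Fabric API — the function name and paths below are invented for this sketch), the same step could look like this in Python:

```python
import urllib.request
from pathlib import Path

def ingest_to_files(source_url: str, target_dir: str, file_name: str) -> Path:
    """Fetch a file from a URL and land it unchanged in a target folder,
    mimicking what a Copy Data activity does with an HTTP source.
    (Illustrative sketch only; a lakehouse Files path would replace target_dir.)"""
    target = Path(target_dir) / file_name
    target.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(source_url) as resp:
        target.write_bytes(resp.read())
    return target
```

The point of the sketch is that ingestion with Copy Data is a pure copy: no transformation happens until a later activity (such as a Spark notebook) processes the landed file.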

1. On the **Home** page for your lakehouse, select **Get data** and then select **New data pipeline**, and create a new data pipeline named `Ingest Sales Data`.
1. If the **Copy Data** wizard doesn't open automatically, select **Copy Data > Use copy assistant** in the pipeline editor page.
1. In the **Copy Data** wizard, on the **Choose data source** page, type HTTP in the search bar and then select **HTTP** in the **New sources** section.
1. On the **Home** page for your lakehouse, select **Get data** and then select **New copy job**, and create a new data pipeline named `Ingest Sales Data`.
1. If the **Copy Job** wizard doesn't open automatically, select **From any source to any destination** in the pipeline editor page.
1. In the **Copy Job** wizard, on the **Choose data source** page, type HTTP in the search bar and then select **HTTP** in the **New sources** section.

![Screenshot of the Choose data source page.](./Images/choose-data-source.png)

@@ -197,4 +203,4 @@ If you've finished exploring your lakehouse, you can delete the workspace you cr

1. In the bar on the left, select the icon for your workspace to view all of the items it contains.
1. Select **Workspace settings** and in the **General** section, scroll down and select **Remove this workspace**.
1. Select **Delete** to delete the workspace.
1. Select **Delete** to delete the workspace.
4 changes: 3 additions & 1 deletion Instructions/Labs/05-dataflows-gen2.md
@@ -77,6 +77,8 @@ Now that you have a lakehouse, you need to ingest some data into it. One way to

![Screenshot of the Ribbon, highlighting the Add Data destination option.](./Images/add-data-destination.png)

> **Note**: If the **Add data destination** option is grayed out or a lakehouse destination is already shown in the query, your lakehouse has been automatically attached as the default destination because you created the dataflow from within the lakehouse. Select the existing lakehouse destination icon in the query to open the destination settings, and then continue from step 5.

2. Select **Lakehouse**.

3. In the **Connect to default data destination** dialog box, edit the connection and sign in using your Power BI organizational account to set the identity that the dataflow uses to access the lakehouse.
@@ -100,7 +102,7 @@ Now that you have a lakehouse, you need to ingest some data into it. One way to

## Add a dataflow to a pipeline

You can include a dataflow as an activity in a pipeline. Pipelines are used to orchestrate data ingestion and processing activities, enabling you to combine dataflows with other kinds of operation in a single, scheduled process. Pipelines can be created in a few different experiences, including Data Factory experience.
You can include a dataflow as an activity in a pipeline. Pipelines are used to orchestrate data ingestion and processing activities, enabling you to combine dataflows with other kinds of operations in a single, scheduled process. Pipelines can be created from your workspace by selecting **+ New item** > **Data pipeline**.
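What "orchestrate" means here is simply running activities in dependency order: an activity (such as a dataflow refresh) runs only after the activities it depends on have completed. A minimal model of that behavior — not the Fabric engine or its API, just an illustration with invented activity names — might look like this:

```python
def run_pipeline(activities):
    """activities: list of (name, run_fn, depends_on) tuples.
    Runs each activity once all of its dependencies have completed,
    and returns the order in which activities were executed."""
    done, order, pending = set(), [], list(activities)
    while pending:
        for item in pending:
            name, run_fn, deps = item
            if set(deps) <= done:  # all dependencies satisfied
                run_fn()
                done.add(name)
                order.append(name)
                pending.remove(item)
                break
        else:
            raise ValueError("circular or missing dependency")
    return order
```

For example, a pipeline containing a dataflow refresh followed by a notebook would always run the refresh first, regardless of the order the activities were added in.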

1. From your Fabric-enabled workspace, select **+ New item** > **Data pipeline**, then when prompted, create a new pipeline named **Load data**.

7 changes: 5 additions & 2 deletions Instructions/Labs/16-create-reusable-power-bi-assets.md
@@ -269,14 +269,17 @@ The report could look like this. Don't worry about the layout.

### Test the template

1. Close Power BI Desktop. When asked to save your changes, can you choose **Don't save**.
1. Close Power BI Desktop. When asked to save your changes, choose **Don't save**.
1. Open the `regional-sales.pbit` file.

> **Note**: If prompted to sign in, use your Microsoft organizational account credentials. If you see a privacy levels dialog, select **Ignore Privacy Levels checks for this file** and select **Save**.

1. Notice that a parameter prompt appears, asking you to select your region.

![Dialog showing the region parameter.](./Images/select-region-sales-parameter.png)

1. Choose **south** from the dropdown list.
1. Load the data and open the report.
1. Select **Load** to load the data and open the report.

Notice how the report opens with the south-region values correctly applied.
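The region parameter works like a row filter applied when the template loads: only rows matching the selected value reach the report. A tiny sketch of that effect (illustrative only — the function and column names below are invented, not part of the template):

```python
def filter_by_region(rows, region):
    """Keep only the rows for the selected region, case-insensitively --
    the same effect the template's region parameter has on the loaded data."""
    return [row for row in rows if row["Region"].lower() == region.lower()]
```

Choosing **south** at the prompt is equivalent to applying this filter with `region="south"` before the report renders.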
