From 7009b7374d473b92c7cf459b233b9c0d38822479 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Mon, 9 Mar 2026 15:31:57 +0000 Subject: [PATCH 1/3] Initial plan From c18a4dc712d1f91de23d4411d21617e3924c231d Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Mon, 9 Mar 2026 15:40:09 +0000 Subject: [PATCH 2/3] Fix DP600 lab instructions to match current Fabric interface Co-authored-by: AngieRudduck <34583336+AngieRudduck@users.noreply.github.com> --- Instructions/Labs/04-ingest-pipeline.md | 5 ++++- Instructions/Labs/05-dataflows-gen2.md | 4 +++- Instructions/Labs/16-create-reusable-power-bi-assets.md | 7 +++++-- 3 files changed, 12 insertions(+), 4 deletions(-) diff --git a/Instructions/Labs/04-ingest-pipeline.md b/Instructions/Labs/04-ingest-pipeline.md index da82ec231..0f5cd8204 100644 --- a/Instructions/Labs/04-ingest-pipeline.md +++ b/Instructions/Labs/04-ingest-pipeline.md @@ -44,6 +44,9 @@ A simple way to ingest data is to use a **Copy Data** activity in a pipeline to 1. On the **Home** page for your lakehouse, select **Get data** and then select **New data pipeline**, and create a new data pipeline named `Ingest Sales Data`. 1. If the **Copy Data** wizard doesn't open automatically, select **Copy Data > Use copy assistant** in the pipeline editor page. + + > **Note**: If the pipeline editor shows a **Copy job** option instead of **Copy Data**, select **Copy job > Use copy assistant**. The copy assistant wizard steps are the same regardless of how the activity is labeled in your version of Fabric. + 1. In the **Copy Data** wizard, on the **Choose data source** page, type HTTP in the search bar and then select **HTTP** in the **New sources** section. 
![Screenshot of the Choose data source page.](./Images/choose-data-source.png) @@ -81,7 +84,7 @@ A simple way to ingest data is to use a **Copy Data** activity in a pipeline to - **Compression type**: None 1. On the **Copy summary** page, review the details of your copy operation and then select **Save + Run**. - A new pipeline containing a **Copy Data** activity is created, as shown here: + A new pipeline containing a **Copy Data** (or **Copy job**) activity is created, as shown here: ![Screenshot of a pipeline with a Copy Data activity.](./Images/copy-data-pipeline.png) diff --git a/Instructions/Labs/05-dataflows-gen2.md b/Instructions/Labs/05-dataflows-gen2.md index 6a84cb244..522c4af57 100644 --- a/Instructions/Labs/05-dataflows-gen2.md +++ b/Instructions/Labs/05-dataflows-gen2.md @@ -77,6 +77,8 @@ Now that you have a lakehouse, you need to ingest some data into it. One way to ![Screenshot of the Ribbon, highlighting the Add Data destination option.](./Images/add-data-destination.png) + > **Note**: If the **Add data destination** option is grayed out or a lakehouse destination is already shown in the query, your lakehouse has been automatically attached as the default destination because you created the dataflow from within the lakehouse. Select the existing lakehouse destination icon in the query to open the destination settings, and then continue from step 5. + 2. Select **Lakehouse**. 3. In the **Connect to default data destination** dialog box, edit the connection and sign in using your Power BI organizational account to set the identity that the dataflow uses to access the lakehouse. @@ -100,7 +102,7 @@ Now that you have a lakehouse, you need to ingest some data into it. One way to ## Add a dataflow to a pipeline -You can include a dataflow as an activity in a pipeline. Pipelines are used to orchestrate data ingestion and processing activities, enabling you to combine dataflows with other kinds of operation in a single, scheduled process. 
Pipelines can be created in a few different experiences, including Data Factory experience. +You can include a dataflow as an activity in a pipeline. Pipelines are used to orchestrate data ingestion and processing activities, enabling you to combine dataflows with other kinds of operations in a single, scheduled process. Pipelines can be created from your workspace by selecting **+ New item** > **Data pipeline**. 1. From your Fabric-enabled workspace, select **+ New item** > **Data pipeline**, then when prompted, create a new pipeline named **Load data**. diff --git a/Instructions/Labs/16-create-reusable-power-bi-assets.md b/Instructions/Labs/16-create-reusable-power-bi-assets.md index 9855a3fb9..6f5d26739 100644 --- a/Instructions/Labs/16-create-reusable-power-bi-assets.md +++ b/Instructions/Labs/16-create-reusable-power-bi-assets.md @@ -269,14 +269,17 @@ The report could look like this. Don't worry about the layout. ### Test the template -1. Close Power BI Desktop. When asked to save your changes, can you choose **Don't save**. +1. Close Power BI Desktop. When asked to save your changes, choose **Don't save**. 1. Open the `regional-sales.pbit` file. + + > **Note**: If prompted to sign in, use your Microsoft organizational account credentials. If you see a privacy levels dialog, select **Ignore Privacy Levels checks for this file** and select **Save**. + 1. Notice you will get a parameter prompt asking you to select your region. ![Dialog showing the region parameter.](./Images/select-region-sales-parameter.png) 1. Choose **south** from the dropdown list. -1. Load the data and open the report. +1. Select **Load** to load the data and open the report. Notice how the report opens with only the south-region values. 
From 9c9b1eb2d4f87b94db714c4b99aa1999db3cbab0 Mon Sep 17 00:00:00 2001 From: Wesley De Bolster <53346841+weslbo@users.noreply.github.com> Date: Wed, 25 Mar 2026 12:58:20 +0100 Subject: [PATCH 3/3] Enhance DP600 lab instructions with additional details on data pipelines and Spark integration --- Instructions/Labs/04-ingest-pipeline.md | 23 +++++++++++++---------- 1 file changed, 13 insertions(+), 10 deletions(-) diff --git a/Instructions/Labs/04-ingest-pipeline.md b/Instructions/Labs/04-ingest-pipeline.md index 0f5cd8204..30334efaa 100644 --- a/Instructions/Labs/04-ingest-pipeline.md +++ b/Instructions/Labs/04-ingest-pipeline.md @@ -1,7 +1,13 @@ --- lab: - title: 'Ingest data with a pipeline in Microsoft Fabric' - module: 'Use Data Factory pipelines in Microsoft Fabric' + title: Ingest data with a pipeline in Microsoft Fabric + module: Use Data Factory pipelines in Microsoft Fabric + description: In this lab, you'll create data pipelines to ingest data from external sources into a lakehouse, and integrate Spark notebooks to transform and load the data into tables. You'll learn how to combine Copy Data activities with custom Spark transformations to build reusable ETL processes in Microsoft Fabric. + duration: 45 minutes + level: 300 + islab: true + primarytopics: + - Microsoft Fabric --- # Ingest data with a pipeline in Microsoft Fabric @@ -42,12 +48,9 @@ Now that you have a workspace, it's time to create a data lakehouse into which y A simple way to ingest data is to use a **Copy Data** activity in a pipeline to extract the data from a source and copy it to a file in the lakehouse. -1. On the **Home** page for your lakehouse, select **Get data** and then select **New data pipeline**, and create a new data pipeline named `Ingest Sales Data`. -1. If the **Copy Data** wizard doesn't open automatically, select **Copy Data > Use copy assistant** in the pipeline editor page. 
- - > **Note**: If the pipeline editor shows a **Copy job** option instead of **Copy Data**, select **Copy job > Use copy assistant**. The copy assistant wizard steps are the same regardless of how the activity is labeled in your version of Fabric. - -1. In the **Copy Data** wizard, on the **Choose data source** page, type HTTP in the search bar and then select **HTTP** in the **New sources** section. +1. On the **Home** page for your lakehouse, select **Get data**, then select **New copy job** and create a new copy job named `Ingest Sales Data`. +1. If the **Copy Job** wizard doesn't open automatically, select **From any source to any destination** in the pipeline editor page. +1. In the **Copy Job** wizard, on the **Choose data source** page, type HTTP in the search bar and then select **HTTP** in the **New sources** section. ![Screenshot of the Choose data source page.](./Images/choose-data-source.png) @@ -84,7 +87,7 @@ A simple way to ingest data is to use a **Copy Data** activity in a pipeline to - **Compression type**: None 1. On the **Copy summary** page, review the details of your copy operation and then select **Save + Run**. - A new pipeline containing a **Copy Data** (or **Copy job**) activity is created, as shown here: + A new pipeline containing a **Copy Data** activity is created, as shown here: ![Screenshot of a pipeline with a Copy Data activity.](./Images/copy-data-pipeline.png) @@ -200,4 +203,4 @@ If you've finished exploring your lakehouse, you can delete the workspace you cr 1. In the bar on the left, select the icon for your workspace to view all of the items it contains. 1. Select **Workspace settings** and in the **General** section, scroll down and select **Remove this workspace**. -1. Select **Delete** to delete the workspace. +1. Select **Delete** to delete the workspace. \ No newline at end of file