Trigger Migration Guide
Through Scale v5, recipes were automatically started using triggers that were defined to match certain types of data coming into the Scale system. Scale v6 replaces this system with a more streamlined workflow that matches each Strike or Scan to a recipe that will be launched when it ingests its data. This guide outlines how to transition an existing Strike or Scan and recipe trigger to the new v6 recipe of recipes system, as well as how to create a new Scale v6 workflow.
Previously, users had to create the pieces of their data workflow in separate places in the app in order for data to flow through the entire Scale pipeline: ingest, parsing, running through a recipe, and producing products. The new Scale v6 workflow simplifies this process by allowing the user to create their workspace, Strike or Scan, and recipe all in one location.
Recipes are now triggered directly from the Strike or Scan that ingested the input. For workflows that depend on the data type of a file, it is assumed that the first job of the triggered recipe is the PARSE job, followed by condition nodes that run the jobs in that recipe corresponding to the matching data types. Note that this assumes one Strike or Scan will be associated with each recipe type.
In order to transition an existing workflow to the new Scale v6 system, the resulting recipe must be converted to a recipe of recipes. If the recipe depends on data types, then the first job in the recipe should be the PARSE job that triggered the v5 recipe. Conditional nodes may then be added that evaluate the resulting data types, or other conditions, of the PARSE job which then trigger the proper sub-recipes or jobs for the matching data type.
Once a recipe of recipes has been created, the Scan or Strike configuration may be updated to connect to the proper recipe.
NOTE: The Scale v5 UI does not support editing a recipe of recipes or adding recipes to a Strike or Scan configuration. This must be done through the v6 REST APIs until the upgrade to Scale v6 is complete.
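Because those operations are not available in the v5 UI, they are performed against the v6 REST endpoints directly. Below is a minimal sketch of reaching the API with Python's requests library to list existing recipe types and Strikes; the base URL, authentication, endpoint paths, and response field names are assumptions here, so confirm them against the REST documentation for your deployment.

```python
# Minimal sketch of querying the v6 REST API with Python requests.
# The base URL, endpoint paths, and response fields below are assumptions
# for illustration; verify them against your deployment's REST documentation.
import requests

SCALE_API = "http://scale.example.com/api"  # hypothetical base URL

def list_recipe_types():
    """Fetch the recipe types known to the v6 API (assumed paginated endpoint)."""
    resp = requests.get(f"{SCALE_API}/v6/recipe-types/")
    resp.raise_for_status()
    return resp.json().get("results", [])

def list_strikes():
    """Fetch the configured Strikes so their IDs can be used in later updates."""
    resp = requests.get(f"{SCALE_API}/v6/strikes/")
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    for recipe_type in list_recipe_types():
        print("recipe type:", recipe_type.get("name"))
    for strike in list_strikes():
        print("strike:", strike.get("id"), strike.get("name"))
```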
For example, suppose a v5 recipe exists that depends on TYPE1 data coming from a Strike. In the v5 system, there would be the following (an illustrative sketch of the v5 trigger rule follows this list):
- An input workspace
- A Strike watching the input workspace for matching filenames
- A parse job triggered by INGEST
- A recipe triggered by PARSE with data type of TYPE1
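For comparison, those v5 pieces were tied together by a trigger rule that lived apart from both the Strike and the recipe it launched. The dictionary below is only a rough illustration of that kind of rule; the field names and structure are illustrative, not an exact v5 schema, but they show the piece that goes away in v6.

```python
# Rough illustration of the kind of v5 trigger rule being replaced.
# Field names and structure are illustrative, not an exact v5 schema; the
# point is that the trigger condition (PARSE output tagged TYPE1) was
# defined separately from both the Strike and the recipe it launched.
v5_parse_trigger_rule = {
    "type": "PARSE",                  # fires when a parse job tags new data
    "is_active": True,
    "configuration": {
        "condition": {
            "media_type": "",         # any media type
            "data_types": ["TYPE1"],  # only TYPE1 data starts the recipe
        },
        "data": {
            "input_data_name": "input_file",
            "workspace_name": "products",
        },
    },
}
```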
To transition this recipe to v6:
- A new recipe of recipes will be created with the following configuration:
  - The input of the recipe
  - The existing PARSE job will be the first job in the recipe, with the input of the job wired into the input of the recipe
  - A condition node will evaluate the output of the PARSE job for TYPE1 data
  - The existing recipe will be wired into the new recipe as a sub-recipe that runs when the condition node finds TYPE1 data
- Once the new recipe is defined, the Strike configuration may be updated to point to the new recipe type and corresponding version (a sketch of both steps over the REST API follows this list)
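The sketch below walks through both steps over the v6 REST API: registering the recipe of recipes (PARSE job first, a condition node checking for TYPE1 data, and the existing recipe wired in as a sub-recipe), then updating the Strike to point at it. The recipe type name, job type name, definition field names, and endpoint paths are assumptions for illustration; the exact schema is in the v6 REST documentation.

```python
# Sketch of the two transition steps over the v6 REST API using Python requests.
# Names, definition fields, and endpoint paths are assumptions for illustration;
# verify the exact schema against the v6 REST documentation.
import requests

SCALE_API = "http://scale.example.com/api"  # hypothetical base URL

# Step 1: register the recipe of recipes. The PARSE job runs first, a condition
# node checks its output for TYPE1 data, and the existing v5 recipe is wired in
# as a sub-recipe behind that condition.
recipe_of_recipes = {
    "title": "type1-recipe-of-recipes",
    "definition": {
        "input": {"files": [{"name": "INPUT_FILE", "required": True}]},
        "nodes": {
            "parse": {
                "dependencies": [],
                "input": {"INPUT_FILE": {"type": "recipe", "input": "INPUT_FILE"}},
                "node_type": {"node_type": "job", "job_type_name": "parse-job",
                              "job_type_version": "1.0.0", "job_type_revision": 1},
            },
            "is-type1": {  # condition node: only pass files tagged TYPE1
                "dependencies": [{"name": "parse"}],
                "input": {"PARSED_FILE": {"type": "dependency", "node": "parse",
                                          "output": "PARSED_FILE"}},
                "node_type": {"node_type": "condition",
                              "interface": {"files": [{"name": "PARSED_FILE"}]},
                              "data_filter": {"filters": [{"name": "PARSED_FILE",
                                                           "type": "data-type",
                                                           "condition": "contains",
                                                           "values": ["TYPE1"]}]}},
            },
            "type1-recipe": {  # the existing recipe, now a sub-recipe
                "dependencies": [{"name": "is-type1"}],
                "input": {"INPUT_FILE": {"type": "dependency", "node": "is-type1",
                                         "output": "PARSED_FILE"}},
                "node_type": {"node_type": "recipe",
                              "recipe_type_name": "existing-type1-recipe",
                              "recipe_type_revision": 1},
            },
        },
    },
}
requests.post(f"{SCALE_API}/v6/recipe-types/", json=recipe_of_recipes).raise_for_status()

# Step 2: point the Strike at the new recipe type so each ingest launches it.
strike_id = 1  # hypothetical ID, found via GET /v6/strikes/
strike_update = {"configuration": {"recipe": {"name": "type1-recipe-of-recipes"}}}
requests.patch(f"{SCALE_API}/v6/strikes/{strike_id}/", json=strike_update).raise_for_status()
```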
The new Scale v6 UI will allow the user to create their complete data workflow within one page. Users will define their incoming data workspace, create their recipe of recipes, and define their Strike or Scan all in one workflow instead of disjointed pieces.
- Create a recipe of recipes:
  - If sub-recipes or jobs depend on certain data types extracted from input data files or on JSON parameter values, they may be defined behind condition nodes under the PARSE job of the recipe. The condition node will evaluate the defined parameters / data types and only launch the sub-recipes or jobs that meet the required conditions.
- Create a Strike or Scan that launches the recipe of recipes when a file is ingested. The input workspace of the Strike or Scan is defined here as well (a sketch of this step follows this list).
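A rough sketch of that last step is shown below: creating a Strike whose configuration names the input workspace, a monitor for incoming files, and the recipe of recipes to launch for each ingest. The workspace names, monitor block, and configuration fields are assumptions for illustration; confirm them against the v6 Strike REST documentation.

```python
# Sketch of creating a Strike that launches the recipe of recipes on ingest.
# Workspace names, the monitor block, and the configuration fields below are
# assumptions for illustration; confirm them against the v6 REST documentation.
import requests

SCALE_API = "http://scale.example.com/api"  # hypothetical base URL

new_strike = {
    "title": "type1-ingest-strike",
    "configuration": {
        "workspace": "ingest-workspace",   # the input workspace defined here
        "monitor": {"type": "dir-watcher", "transfer_suffix": "_tmp"},
        "files_to_ingest": [{
            "filename_regex": ".*",        # ingest every file that lands
            "new_workspace": "products-workspace",
            "new_file_path": "ingested",
        }],
        "recipe": {"name": "type1-recipe-of-recipes"},  # launched per ingest
    },
}
resp = requests.post(f"{SCALE_API}/v6/strikes/", json=new_strike)
resp.raise_for_status()
print("created strike", resp.json().get("id"))
```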