69 changes: 69 additions & 0 deletions README.md
@@ -91,6 +91,10 @@ tables:
# batchsize: 10 # Optional: number of records per batch
# batchwindow: 5 # Optional: seconds to wait for batch
# startingposition: latest # Optional: trim-horizon, latest (default: latest)
# condition:
#   environment:
#     - dev
#     - prod
```

### Bucket Definitions
@@ -198,10 +202,19 @@ lambda:
open: <boolean>
greedy: <boolean>
authorizer: <authorizer-lambda-name>
condition:
  environment:
    - <environment_name>
    - <environment_name>
```

Locally-defined lambda URI is set to the path of the `easysam.yaml` file.

`condition` defines environment-based rules that control when this configuration is applied.
`environment` – a list of environment names (e.g. `dev`, `staging`, `prod`) where this Lambda configuration should be deployed.
If the current environment is not in the list, the configuration is skipped.
If `condition` is not specified, the Lambda is deployed unconditionally.

#### Local Import

```yaml
@@ -221,12 +234,22 @@ prismarine:
tables:
- package: <package-to-import>
base: <optional-base-path>
trigger:
  function: <function_name>
  condition:
    environment:
      - <environment_name>
      - <environment_name>
```

For more information, see [Prismarine README](https://github.com/adsight-app/prismarine/blob/main/README.md).

Set `modelling: pydantic` to generate Prisma clients backed by Pydantic models (see `example/prismapydantic`). Omit or set `modelling: typed-dict` to generate the default TypedDict-based clients.
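As a sketch of how this might look in practice (the `prefix` and `package` values below are placeholders, not real names; field placement follows the import structure above):

```yaml
prismarine:
  - prefix: Example              # placeholder prefix
    modelling: pydantic          # generate Pydantic-backed clients
    tables:
      - package: example.models  # placeholder package to import
```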

The `trigger.condition` section defines environment-based rules that control whether a table trigger is deployed.
`environment` – a list of environment names where this trigger should be enabled.
If `trigger.condition` is not specified, the trigger is deployed unconditionally.

### Conditional Resources

Conditional resources are defined using the `!Conditional` tag.
@@ -252,6 +275,51 @@ The `~` prefix negates the condition.
region: ~eu-west-2
```

### Condition (easysam.yaml)

The `condition` section in `easysam.yaml` allows controlling whether a resource or trigger is deployed based on the deployment environment or region.

#### Structure
```yaml
condition:
  environment: <string | list[string]>
  region: <string | list[string]>
```

`environment` – one or more environment names (e.g., `dev`, `prod`) where the resource or trigger should be deployed.
`region` – one or more AWS regions where the resource or trigger should be deployed.
At least one of these fields must be specified.

#### Behavior

* If a condition is specified, the resource or trigger is deployed only if the current environment or region matches.

* If a condition is not specified, the resource or trigger is deployed unconditionally.

* If the condition is not satisfied, the resource or trigger is skipped and no related permissions, integrations, or bindings are applied.
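The matching rules above can be sketched in Python (a simplified model of `check_condition` in `src/easysam/load.py`; error reporting for missing context keys and the `~` negation from Conditional tags are omitted):

```python
def check_condition(condition, value, deploy_ctx):
    """Return True when deploy_ctx satisfies the condition.

    `condition` is the context key (e.g. 'environment'); `value` is
    'any', a single name, or a list of names.
    """
    if value == 'any':
        return True

    if isinstance(value, list):
        # A list is satisfied by any matching entry
        return any(check_condition(condition, v, deploy_ctx) for v in value)

    return deploy_ctx.get(condition) == value
```

For example, `check_condition('environment', ['dev', 'prod'], {'environment': 'dev'})` returns `True`, so the resource would be deployed.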

#### Example: Lambda with environment condition
```yaml
lambda:
  name: example-function
  condition:
    environment:
      - dev
      - prod
```
The Lambda is deployed only in the `dev` and `prod` environments; in other environments it is skipped.

#### Example: Table trigger with environment condition
```yaml
trigger:
  function: process_updates
  condition:
    environment: prod
```
The trigger is deployed only in the prod environment.
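
A region-based condition follows the same structure (a hypothetical sketch; per the `condition` structure above, `region` takes one or more AWS region names):

```yaml
trigger:
  function: process_updates
  condition:
    region: eu-west-2
```

The trigger would then be deployed only when the target region is `eu-west-2`.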


### Deployment Context File

The deployment context file is used to further control resources, especially in CI. This version has the following features:
@@ -309,3 +377,4 @@ If you encounter any issues or have questions, please:
## Changelog

See [CHANGELOG.md](https://github.com/adsight-app/easysam/blob/main/CHANGELOG.md) for a list of changes between versions.

3 changes: 3 additions & 0 deletions example/aoss/README.md
@@ -61,6 +61,9 @@ trigger:
viewtype: new-and-old # Optional, default
batchsize: 10 # Optional
startingposition: latest # Optional, default
condition: # Optional
  environment:
    - dev
    - prod
```

## Stream View Types
2 changes: 1 addition & 1 deletion example/aoss/resources.yaml
@@ -8,4 +8,4 @@ import:

search:
# This is the default search index and can be omitted
searchable:
searchable:
1 change: 0 additions & 1 deletion src/easysam/generate.py
@@ -72,7 +72,6 @@ def generate(

sam_template = jenv.get_template(template_path)
sam_output = sam_template.render(resources_data)

write_result(template, sam_output)
lg.info(f'SAM template generated: {template}')

87 changes: 76 additions & 11 deletions src/easysam/load.py
@@ -69,7 +69,7 @@ def resources(

lg.info('Processing resources')
pypath = [resources_dir] + list(pypath)
preprocess_resources(resources_data, resources_dir, pypath, errors)
preprocess_resources(deploy_ctx, resources_data, resources_dir, pypath, errors)

lg.info('Validating resources')
validate_schema(resources_dir, resources_data, errors)
@@ -104,6 +104,7 @@ def prismarine_dynamo_tables(


def preprocess_prismarine(
deploy_ctx: dict[str, str],
resources_data: dict,
resources_dir: Path,
pypath: list[Path],
@@ -129,6 +130,32 @@
tables = prismarine_dynamo_tables(
prefix, base, package, resources_dir, pypath, errors
)
trigger = prisma_integration.get('trigger')

has_condition = (
isinstance(trigger, dict)
and isinstance(trigger.get('condition'), dict)
)
if has_condition:
include = check_condition(
'environment',
trigger.get('condition').get('environment', 'any'),
deploy_ctx,
errors
)
if not include:
lg.debug(f"Removing trigger {trigger.get('function')} due to condition")
trigger_name = trigger.get('function')

for table_name, table in tables.items():
if table.get('trigger') == trigger_name:
lg.info(
f'Removing trigger {trigger_name} '
f'from table {table_name}'
)
table.pop('trigger', None)

if not tables:
lg.warning(f'No valid tables found for {package}, continuing')
@@ -195,24 +222,45 @@


def preprocess_tables(
deploy_ctx: dict[str, str],
resources_data: dict,
table_def: dict,
entry_path: Path,
errors: list[str]
):
if 'tables' not in resources_data:
resources_data['tables'] = {}

for table_name, table_data in table_def.items():
if table_name in resources_data['tables']:
errors.append(f'Import file {entry_path} contains duplicate table {table_name}')
continue
# check if table has a trigger with a condition
trigger = table_data.get('trigger')
has_condition = (
isinstance(trigger, dict)
and isinstance(trigger.get('condition'), dict)
)
lg.debug(f'Table {table_name} has trigger condition: {has_condition}')
if has_condition:
include = check_condition(
'environment',
trigger.get('condition').get('environment', 'any'),
deploy_ctx,
errors
)
if not include:
lg.debug(
f"Removing trigger {trigger.get('function')} "
f"from table {table_name} due to condition"
)
table_data.pop('trigger', None)

lg.debug(f'Adding table {table_name} to resources')
resources_data['tables'][table_name] = table_data


def preprocess_file(
deploy_ctx: dict[str, str],
resources_data: dict,
resources_dir: Path,
entry_path: Path,
@@ -226,25 +274,36 @@
errors.append(f'Error loading import file {entry_path}: {e}')
return

if not all(key in ['lambda', 'import', 'tables'] for key in entry_data.keys()):
if not all(key in ['lambda', 'import', 'tables', 'condition'] for key in entry_data.keys()):
errors.append(f'Import file {entry_path} contains unexpected sections')
return

if 'condition' in entry_data:
include = check_condition(
'environment',
entry_data.get('condition', {}).get('environment', 'any'),
deploy_ctx,
errors
)
if not include:
lg.info(f'Skipping import file {entry_path} due to condition')
return

if lambda_def := entry_data.get('lambda'):
preprocess_lambda(
resources_data, resources_dir, lambda_def, entry_path, entry_dir, errors
)

if tables_def := entry_data.get('tables'):
preprocess_tables(resources_data, tables_def, entry_path, errors)
preprocess_tables(deploy_ctx, resources_data, tables_def, entry_path, errors)

if local_import_def := entry_data.get('import'):
for import_file in local_import_def:
import_path = Path(entry_dir, import_file)
preprocess_file(resources_data, resources_dir, import_path, errors)
preprocess_file(deploy_ctx, resources_data, resources_dir, import_path, errors)


def preprocess_imports(resources_data: dict, resources_dir: Path, errors: list[str]):
def preprocess_imports(deploy_ctx: dict[str, str], resources_data: dict, resources_dir: Path, errors: list[str]):
for import_dir_str in resources_data.get('import', []):
import_dir = Path(resources_dir, import_dir_str)
lg.info(f'Processing import directory {import_dir}')
@@ -254,7 +313,7 @@ def preprocess_imports(resources_data: dict, resources_dir: Path, errors: list[s
continue

for entry_path in import_dir.glob(f'**/{IMPORT_FILE}'):
preprocess_file(resources_data, resources_dir, entry_path, errors)
preprocess_file(deploy_ctx, resources_data, resources_dir, entry_path, errors)


def process_default_functions(resources_data: dict, errors: list[str]):
@@ -355,6 +414,7 @@ def preprocess_defaults(resources_data: dict, errors: list[str]):


def preprocess_resources(
deploy_ctx: dict[str, str],
resources_data: dict,
resources_dir: Path,
pypath: list[Path],
@@ -364,10 +424,10 @@ def sort_dict(d):
return dict(sorted(d.items(), key=lambda x: x[0]))

if 'prismarine' in resources_data:
preprocess_prismarine(resources_data, resources_dir, pypath, errors)
preprocess_prismarine(deploy_ctx, resources_data, resources_dir, pypath, errors)

if 'import' in resources_data:
preprocess_imports(resources_data, resources_dir, errors)
preprocess_imports(deploy_ctx, resources_data, resources_dir, errors)

preprocess_defaults(resources_data, errors)

@@ -421,13 +481,19 @@ def conditional_constructor(loader, node):

def check_condition(
condition: str,
value: str,
value: list | str,
deploy_ctx: dict[str, str],
errors: list[str]
):
if value == 'any':
return True

if isinstance(value, list):
return any(
check_condition(condition, v, deploy_ctx, errors)
for v in value
)

context_value = deploy_ctx.get(condition)

if context_value is None:
@@ -469,7 +535,6 @@ def resolve_conditionals(
check_condition('environment', key.environment, deploy_ctx, errors),
check_condition('target_region', key.region, deploy_ctx, errors),
])

if include:
resolved[key.key] = resolved_value
else:
30 changes: 27 additions & 3 deletions src/easysam/schemas.json
@@ -185,7 +185,8 @@
"type": "object",
"properties": {
"package": {"type": "string"},
"base": {"type": "string"}
"base": {"type": "string"},
"trigger": {"$ref": "#/definitions/table_trigger_schema"}
},
"required": ["package"],
"additionalProperties": false
@@ -225,7 +226,9 @@
"startingposition": {
"type": "string",
"enum": ["trim-horizon", "latest"]
}
},
"condition": {"$ref": "#/definitions/condition_schema"},
"name": {"type": "string"}
},
"required": ["function"],
"additionalProperties": false
@@ -271,6 +274,26 @@
},
"required": ["function"],
"additionalProperties": false
},
"condition_schema": {
"type": "object",
"properties": {
"environment": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
},
"region": {
"oneOf": [
{"type": "string"},
{"type": "array", "items": {"type": "string"}}
]
}
},
"anyOf": [
{"required": ["environment"]},
{"required": ["region"]}
],
"additionalProperties": false
}
},
"type": "object",
Expand Down Expand Up @@ -370,7 +393,8 @@
},
{ "type": "null"}
]
}
},
"condition": {"$ref": "#/definitions/condition_schema"}
},
"required": ["prefix"],
"additionalProperties": false