This repository contains three interconnected projects that together form an AI-powered image gallery application:
- gallery-backend: Backend infrastructure and APIs
- gallery-frontend: User interface and frontend application
- image-generator: AI-based image generation utility
Key Components:
- AWS Amplify: Provides frontend application hosting (using React).
- Amazon API Gateway + AWS Lambda: Used as backend API endpoints for image retrieval and upload.
- Amazon Bedrock: A managed service that provides access to foundation models through APIs. This application uses the Amazon Nova Canvas model for image generation and the Claude 3.5 Sonnet v2 model for generating descriptions.
- Amazon SageMaker: Deploys the required models as endpoints to process image synthesis requests.
- Amazon Rekognition: Detects faces in images and videos, and crops the relevant facial areas.
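Rekognition returns face locations as bounding boxes whose coordinates are expressed as ratios of the overall image dimensions, so they must be scaled before cropping. A minimal sketch of that conversion (the helper name and `padding` parameter are illustrative, not part of this repository):

```python
def bbox_to_pixels(bbox, img_width, img_height, padding=0.0):
    """Convert a Rekognition-style relative BoundingBox to a pixel crop rect.

    bbox has 'Left', 'Top', 'Width', 'Height' as fractions of the image size;
    padding optionally expands the box on every side (also as a fraction).
    """
    left = (bbox["Left"] - padding) * img_width
    top = (bbox["Top"] - padding) * img_height
    right = (bbox["Left"] + bbox["Width"] + padding) * img_width
    bottom = (bbox["Top"] + bbox["Height"] + padding) * img_height
    # Clamp to the image bounds and round to whole pixels
    return (
        max(0, round(left)),
        max(0, round(top)),
        min(img_width, round(right)),
        min(img_height, round(bottom)),
    )
```

The resulting tuple can be passed directly to an image library's crop call.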
Prerequisites:
- AWS Account with appropriate permissions
- Node.js v14 or higher
- Python 3.8 or higher
- AWS CDK installed and configured
- AWS CLI installed and configured
The backend infrastructure is built using AWS CDK and provides the foundation for the entire application. It consists of several key components:
- API Gateway: RESTful endpoints for image upload, retrieval, and user agreement management
- Amazon Cognito: User authentication and authorization
- DynamoDB: Data storage for image processing status, display information, and user agreements
- Lambda Functions: Serverless processing for image manipulation and business logic
- S3 Buckets: Object storage for images and assets
- SageMaker Endpoints: AI model deployment for FaceChain and GFPGAN
- Container Infrastructure: ECR repositories and CodeBuild pipelines for AI model containers
The backend handles all data processing, storage, and AI model integration, providing a robust foundation for the application.
For deployment instructions, please refer to the gallery-backend README.
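As an illustration of how the Lambda-backed endpoints behave, API Gateway's Lambda proxy integration expects handlers to return a dict with `statusCode`, optional `headers`, and a string `body`. The handler below is a hypothetical sketch of an image-status endpoint, not code from this repository:

```python
import json

def handler(event, context):
    """Hypothetical Lambda handler returning image processing status in the
    response shape expected by an API Gateway Lambda proxy integration."""
    image_id = (event.get("pathParameters") or {}).get("imageId")
    if image_id is None:
        return {"statusCode": 400, "body": json.dumps({"error": "imageId is required"})}
    # In the real backend this would read the processing status from DynamoDB
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"imageId": image_id, "status": "PROCESSING"}),
    }
```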
The frontend is a React TypeScript application that provides the user interface for the gallery service. Key features include:
- User authentication with Amazon Cognito
- Image upload and processing
- Face detection and manipulation
- AI-powered face swapping
- Multi-language support (i18n)
- User agreement management
- QR code generation for sharing
The frontend communicates with the backend APIs to provide a seamless user experience for image processing and management.
For deployment instructions, please refer to the gallery-frontend README.
The image generator is a Python-based utility that leverages Amazon Bedrock to create AI-generated images. It includes:
- Integration with Amazon Bedrock's Nova Canvas model for image generation
- Claude 3.5 Sonnet for text processing and prompt engineering
- Configuration for various image styles, historical periods, and attributes
- S3 integration for storing generated images
- DynamoDB integration for tracking image metadata
This component allows the application to generate high-quality base images that can be used in the face swapping process.
For deployment instructions, please refer to the image-generator README.
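For reference, a Bedrock text-to-image call to Nova Canvas takes a JSON request body like the one built below. The field names follow the Nova Canvas API, but the helper itself is an illustrative sketch; the actual generation script in this repository may structure things differently:

```python
import json

def build_nova_canvas_request(prompt, width=1024, height=1024, num_images=1, seed=0):
    """Build the JSON request body for an Amazon Nova Canvas
    TEXT_IMAGE generation call (sent via bedrock-runtime InvokeModel)."""
    body = {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": prompt},
        "imageGenerationConfig": {
            "numberOfImages": num_images,
            "width": width,
            "height": height,
            "cfgScale": 8.0,
            "seed": seed,
        },
    }
    return json.dumps(body)
```

The serialized body would then be passed to the `bedrock-runtime` `InvokeModel` operation with the Nova Canvas model ID.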
Before using the image generator, you need to set up access to the required Amazon Bedrock models:
- Sign in to the AWS Bedrock console: https://console.aws.amazon.com/bedrock/
- Set the region to us-east-1 (the Nova Canvas model is only available in us-east-1)
- In the left navigation pane, under Bedrock configurations, choose Model access
- Choose Modify model access
- Select the following models:
- Anthropic Claude 3.5 Sonnet
- Amazon Nova Canvas
- Choose Next
- For Anthropic models, you must submit use case details
- Review and accept the terms, then choose Submit
To request model access, your IAM role needs the following permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "aws-marketplace:Subscribe",
        "aws-marketplace:Unsubscribe",
        "aws-marketplace:ViewSubscriptions"
      ],
      "Resource": "*"
    }
  ]
}
```

To allow the image generator to save images to S3, you need to configure proper IAM permissions:
- Go to AWS IAM Console: https://console.aws.amazon.com/iam/
- Create a new IAM policy with the following permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::amazon-bedrock-gallery-sydney-jungseob",
        "arn:aws:s3:::amazon-bedrock-gallery-sydney-jungseob/*"
      ]
    }
  ]
}
```

- Attach this policy to the IAM role/user that will be used to run the image generator
- Make sure the S3 bucket name in the policy matches your actual bucket name
- The Nova Canvas model is only available in the us-east-1 region
- Model access approval may take several minutes
- If model access is denied, contact AWS support or choose alternative models
Follow these steps to deploy the complete application:
First, deploy the backend infrastructure:
```bash
cd gallery-backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Generate a unique ID and update cdk.context.json
UNIQUE_ID=$(openssl rand -hex 4)
sed -i '' "s/<your-unique-id>/$UNIQUE_ID/g" cdk.context.json

cdk synth
cdk deploy --all
```

This will create all necessary AWS resources, including API Gateway, Lambda functions, DynamoDB tables, S3 buckets, and SageMaker endpoints.
After the backend is deployed, use the image generator to create base images:
```bash
cd ../image-generator
pip install -r requirements.txt
python generate_image.py
```

This will generate images using Amazon Bedrock and store them in the configured S3 bucket, making them available for the face swapping process.
Finally, deploy the frontend application:
```bash
cd ../gallery-frontend
npm install

# Update the .env file with your backend API endpoints
npm run build:prod

# Deploy the build directory to your hosting service of choice
```

Once all components are deployed, you can test the complete workflow:
- Access the frontend application
- Create an account and sign in
- Upload a photo for face swapping
- Select a base image generated by the image generator
- Process the image using the AI models deployed in the backend
- View and share your generated images
Please refer to the individual project READMEs for specific contribution guidelines.
The code of this project is released under the MIT License. See the LICENSE file.
This software utilizes pre-trained models. Users of this software must strictly adhere to the models' conditions of use.
Please note that if you intend to use this software for any commercial purpose, you will need to train your own models or find models licensed for commercial use.

