Upload Photo Architecture Diagram Using AWS Lambda
Serverless Reference Architecture: Image Recognition and Processing Backend
The Image Recognition and Processing Backend demonstrates how to use AWS Step Functions to orchestrate a serverless processing workflow using AWS Lambda, Amazon S3, Amazon DynamoDB and Amazon Rekognition. This workflow processes photos uploaded to Amazon S3 and extracts metadata from the image such as geolocation, size/format, time, etc. It then uses image recognition to tag objects in the photo. In parallel, it also produces a thumbnail of the photo.
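As a concrete illustration of the trigger step, the backend essentially turns an S3 event record into a Step Functions execution input. Below is a minimal sketch of that mapping; the helper name `buildExecutionInput` and the exact payload field names are our assumptions for illustration, not taken from the repo, and the real `S3Trigger` function would pass the result to the Step Functions `StartExecution` API:

```javascript
// Hypothetical helper: build a Step Functions execution input from an
// S3 event record. The real S3Trigger Lambda does something similar
// before calling states:StartExecution.
function buildExecutionInput(record) {
  const bucket = record.s3.bucket.name;
  // S3 event keys are URL-encoded, with spaces encoded as '+'.
  const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
  return JSON.stringify({ s3Bucket: bucket, s3Key: key });
}

// Example S3 event record, trimmed to the fields used above
const record = {
  s3: {
    bucket: { name: "photorepo-bucket" },
    object: { key: "private/us-east-1%3Aabc/uploads/cat+photo.jpg" },
  },
};

const input = JSON.parse(buildExecutionInput(record));
console.log(input.s3Key); // → "private/us-east-1:abc/uploads/cat photo.jpg"
```

Decoding the key matters in practice: object keys with spaces or unicode characters arrive URL-encoded in the event, and an undecoded key would cause the state machine's S3 read to fail with a missing-object error.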
This repository contains sample code for all the Lambda functions depicted in the diagram below as well as an AWS CloudFormation template for creating the functions and related resources. There is also a test web app that you can run to interact with the backend.
Walkthrough of the architecture
- An image is uploaded to the `PhotoRepo` S3 bucket under the `private/{userid}/uploads` prefix.
- The S3 upload event triggers the `S3Trigger` Lambda function, which kicks off an execution of the `ImageProcStateMachine` in AWS Step Functions, passing in the S3 bucket and object key as input parameters.
- The `ImageProcStateMachine` has the following sub-steps:
  - Read the file from S3 and extract image metadata (format, EXIF data, size, etc.)
  - Based on output from the previous step, validate that the uploaded file is a supported format (png or jpg). If not, throw a `NotSupportedImageType` error and end the execution.
  - Store the extracted metadata in the `ImageMetadata` DynamoDB table.
  - In parallel, kick off two processes simultaneously:
    - Call Amazon Rekognition to detect objects in the image file. If detected, store the tags in the `ImageMetadata` DynamoDB table.
    - Generate a thumbnail and store it under the `private/{userid}/resized` prefix in the `PhotoRepo` S3 bucket.
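The format-validation sub-step can be sketched as a small guard on the extracted metadata. The error name below matches the walkthrough; the metadata object's shape and the inclusion of the `jpeg` spelling (which metadata extractors commonly report for jpg files) are assumptions for illustration:

```javascript
// Supported formats per the walkthrough: png and jpg ("jpeg" is the
// spelling most metadata extractors report for jpg files).
const SUPPORTED_FORMATS = new Set(["png", "jpg", "jpeg"]);

// Throws an error named NotSupportedImageType for any other format;
// Step Functions error handling can then match on that error name.
function validateImageFormat(metadata) {
  const format = String(metadata.format || "").toLowerCase();
  if (!SUPPORTED_FORMATS.has(format)) {
    const err = new Error(`Unsupported image format: ${format}`);
    err.name = "NotSupportedImageType";
    throw err;
  }
  return metadata;
}

validateImageFormat({ format: "PNG", width: 4032, height: 3024 }); // passes
try {
  validateImageFormat({ format: "gif" });
} catch (e) {
  console.log(e.name); // → "NotSupportedImageType"
}
```

In the state machine, a `Catch` clause on this state routes `NotSupportedImageType` to a terminal failure path instead of retrying, which is why the error is distinguished by name rather than by message.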
How to deploy
Follow these instructions to deploy the application (both backend and frontend):
- Use the one-click deployment button above. Amplify Console will fork this repository into your GitHub account, and deploy the backend and frontend application.
  - Note: If you forked and changed the repository first, you can use the Amplify Console and select "Connect App" to connect to your forked repo.
- For IAM Service Role, create one if you don't have one or select an existing role. (This is required because the Amplify Console needs permissions to deploy backend resources on your behalf. More info)
- Within your new app in Amplify Console, wait for the deployment to complete (this may take a while).
- Once the deployment is complete, you can test out the application!
If you want to make changes to the code locally:
- Clone the repo that Amplify created in your GitHub account.
- In the Amplify Console, choose Backend environments, and toggle "Edit backend" on the environment with categories added.
- Under Edit backend, copy the `amplify pull --appId <your app id> --envName <your env name>` command displayed.
  - If you don't see this command and instead see `amplify init --appId`, try refreshing the backend environment tab after waiting a few minutes (CloudFormation could still be provisioning resources).
- Inside your forked repository locally, run the command you copied and follow the instructions. This command synchronizes what's deployed to your local Amplify environment:
  - Do you want to use an AWS profile: Yes
  - default
  - Choose your default editor: Visual Studio Code
  - Choose the type of app that you're building: javascript
  - What javascript framework are you using: react
  - Source Directory Path: src/react-frontend/src
  - Distribution Directory Path: src/react-frontend/build
  - Build Command: npm.cmd run-script build
  - Start Command: npm.cmd run-script start
  - Do you plan on modifying this backend? Yes

If at any time you want to change these options, look into `amplify/.config/project-config.json` and make your changes there.
Using the test web app
You can use the test web app to upload images and explore the image recognition and processing workflow.
Sign up and log in
- Go to the URL of the Amplify app that was deployed.
- In the login page, click on "Create account".
- Register an account by following the sign-up instructions.
- After confirming the account, sign in.
Album list
- Create albums using "Add a new album".
- You may need to refresh.
Photo gallery
- Click into an album you created.
- Upload a photo.
- You can follow the Step Functions execution link to review the details of the workflow execution. Below is the diagram of the state machine being executed every time a new image is uploaded (you can explore this in the Step Functions Console):
- When the processing finishes, the photo and extracted information are added to the display.
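The resized images shown in the gallery come from the thumbnail branch of the state machine. Below is a minimal sketch of the scale-to-fit math that step implies; the 150-pixel bound and the function name are our assumptions for illustration, not values taken from the repo:

```javascript
// Scale dimensions down so the longest edge fits within maxDim while
// preserving aspect ratio. Images already within the bound are left
// unchanged (we never upscale).
function fitThumbnail(width, height, maxDim = 150) {
  const scale = Math.min(1, maxDim / Math.max(width, height));
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}

console.log(fitThumbnail(1600, 1200)); // → { width: 150, height: 113 }
console.log(fitThumbnail(100, 80));    // → { width: 100, height: 80 }
```

Clamping the scale factor at 1 is the design choice that keeps small originals sharp: blowing a 100-pixel image up to 150 pixels would only add blur without adding information.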
Cleaning Up the Application Resources
To remove all resources created by this example, do the following:

- Go to the AWS CloudFormation console and delete the 2 stacks with name "amplify-photoshare-".
- Go to the AWS Amplify console and delete the app.
License
This reference architecture sample is licensed under Apache 2.0.
Source: https://github.com/aws-samples/lambda-refarch-imagerecognition