How to Create a Serverless Video Processing System on AWS

These days, almost everyone has a camera, and that is probably an understatement. At the moment I have at least eight cameras in my office, even though some aren’t currently active, and at least two are connected to my computer.
As more people record and share conference call recordings and create content for YouTube, BitChute, and other video sharing platforms, it is important to have an affordable, reliable, and secure solution for automated video processing.
Recording and editing a video does not necessarily mean you are done. Depending on your target audience, you might need to repackage video content in different formats and bitrates to ensure the best viewing experience. To avoid buffering, users on a 3G network might prefer watching your content at 480p, while a person at home on a fiber connection might want the best 4K or 8K experience.
This led me to create an AWS Serverless Video Processor solution.
Why go serverless for video processing?
My goal was to not have learners manage long-term infrastructure and instead focus on building an event-driven data pipeline for video processing. This keeps costs down during low-activity periods and makes the solution easier to scale. AWS Fargate is a scale-from-zero service that uses standard container technology and doesn’t have execution time limits, which made it the best choice for the compute part of the solution.
AWS for Serverless Video Processing
Let’s discuss how I created this solution from scratch.
Amazon Simple Storage Service (S3) was the obvious choice for storing video files, so that’s where I started the project. An event should be fired every time a file is uploaded to an S3 bucket, triggering a video processing task. S3 supports some event triggers natively, such as AWS Lambda functions and SNS topics. However, I wanted to avoid writing too much code, so Lambda was left out.
Let’s take a look at the alternative route I chose.
AWS CloudTrail audit logging can be enabled for your AWS account. This allows you to configure “data events”, which are distinct from resource management events. AWS CloudTrail can log high-frequency events such as S3 objects being read from or written to, as well as less frequent resource management events like creating or deleting S3 buckets. Once these data events are configured in AWS CloudTrail, the Amazon EventBridge service is able to capture and act upon them.
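As a sketch of what this configuration looks like, the event selector below records only write-type S3 object-level data events (i.e., uploads) for a single bucket. The trail name and bucket ARN are placeholder assumptions, not values from the original solution:

```python
# Sketch: enable S3 object-level (data event) logging on a CloudTrail trail.
# The trail name and bucket ARN are hypothetical placeholders.
import json

TRAIL_NAME = "video-pipeline-trail"                    # hypothetical trail name
UPLOAD_BUCKET_ARN = "arn:aws:s3:::my-upload-bucket/"   # hypothetical bucket

# Event selector that records only WriteOnly S3 object-level events
# (uploads), which is what should trigger the video pipeline.
event_selectors = [
    {
        "ReadWriteType": "WriteOnly",
        "IncludeManagementEvents": False,
        "DataResources": [
            {"Type": "AWS::S3::Object", "Values": [UPLOAD_BUCKET_ARN]}
        ],
    }
]

# With AWS credentials configured, this could be applied via boto3:
#   import boto3
#   boto3.client("cloudtrail").put_event_selectors(
#       TrailName=TRAIL_NAME, EventSelectors=event_selectors)
print(json.dumps(event_selectors, indent=2))
```

Restricting the selector to `WriteOnly` keeps the trail (and your CloudTrail bill) from filling up with read events that the pipeline never acts on.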
Once you have captured the event in Amazon EventBridge, you can trigger a much wider range of AWS services. EventBridge rules allow you to trigger targets such as Step Functions state machines, Elastic Container Service (ECS) tasks, Simple Notification Service (SNS) topics, Simple Queue Service (SQS) queues, EC2 Run Command, and many more.
You might think that I would trigger an AWS Fargate task directly from an EventBridge rule at this point. However, EventBridge does not currently support passing environment variables (via container overrides) into Fargate tasks.
As a result, I decided to use Step Functions as an intermediary. AWS Step Functions, a powerful managed orchestration service, allows you to define workflows first and implement the application logic later. This solution uses a very basic state machine containing only one step, which invokes an AWS Fargate task. The S3 bucket name and S3 object key of the uploaded file are passed to the Fargate task as container environment variables.
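A one-step state machine of this shape can be sketched in the Amazon States Language as follows. The cluster, task definition, subnet, container name, and environment variable names are placeholder assumptions; the key idea is the `ContainerOverrides` block, which forwards the bucket and key from the CloudTrail event payload:

```python
# Sketch: a one-step Amazon States Language definition that runs a
# Fargate task and forwards the S3 bucket and key from the CloudTrail
# event as container environment variables. Cluster, task definition,
# subnet, container, and variable names are hypothetical placeholders.
import json

state_machine = {
    "Comment": "Run one Fargate video-processing task per uploaded object",
    "StartAt": "ProcessVideo",
    "States": {
        "ProcessVideo": {
            "Type": "Task",
            # .sync waits for the Fargate task to finish before the state ends
            "Resource": "arn:aws:states:::ecs:runTask.sync",
            "Parameters": {
                "LaunchType": "FARGATE",
                "Cluster": "video-processing",        # hypothetical cluster
                "TaskDefinition": "video-processor",  # hypothetical task def
                "NetworkConfiguration": {
                    "AwsvpcConfiguration": {
                        "Subnets": ["subnet-00000000"],  # hypothetical subnet
                        "AssignPublicIp": "ENABLED",
                    }
                },
                "Overrides": {
                    "ContainerOverrides": [
                        {
                            "Name": "video-processor",  # hypothetical container
                            "Environment": [
                                {"Name": "S3_BUCKET",
                                 "Value.$": "$.detail.requestParameters.bucketName"},
                                {"Name": "S3_KEY",
                                 "Value.$": "$.detail.requestParameters.key"},
                            ],
                        }
                    ]
                },
            },
            "End": True,
        }
    },
}

print(json.dumps(state_machine, indent=2))
```

The `Value.$` JSONPath references pull the bucket name and object key out of the EventBridge input, which is exactly the piece EventBridge cannot do on its own when targeting Fargate directly.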
The bulk of the video processing and transcoding work is done by the AWS Fargate task, which deploys using standard container technology.
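As an illustration of what the container entrypoint might look like, the sketch below reads the bucket and key passed in by Step Functions and builds one ffmpeg command per output rendition. The rendition list, file paths, and environment variable names are illustrative assumptions, not the exact implementation from the original solution:

```python
# Sketch of the Fargate container entrypoint: read the bucket/key passed
# in by Step Functions and build one ffmpeg command per output rendition.
# Renditions, paths, and env-var names are hypothetical placeholders.
import os

def build_ffmpeg_command(src, dst, height, bitrate):
    """Build an ffmpeg command for a single output rendition."""
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",  # scale to target height, keep aspect ratio
        "-b:v", bitrate,              # target video bitrate
        "-c:a", "copy",               # pass audio through unchanged
        dst,
    ]

# Values injected via the Step Functions container override
# (defaults allow a local dry run without the pipeline).
bucket = os.environ.get("S3_BUCKET", "example-bucket")
key = os.environ.get("S3_KEY", "example.mp4")
src = "/tmp/source.mp4"
# With AWS credentials configured, the source would be downloaded first:
#   boto3.client("s3").download_file(bucket, key, src)

commands = [
    build_ffmpeg_command(src, f"/tmp/out_{h}p.mp4", h, rate)
    for h, rate in [(480, "1000k"), (1080, "5000k")]
]
for cmd in commands:
    print(" ".join(cmd))  # subprocess.run(cmd, check=True) would execute it
```

Because a Fargate task has no execution time limit, long transcodes of large source files can run to completion here, which is precisely why Fargate was chosen over Lambda for this step.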