What is AWS Shared Responsibility Model?

Security and compliance is a shared responsibility in the cloud between AWS and its customers. While AWS is responsible for “the security OF the cloud”, the customer is responsible for “the security IN the cloud”.

Security of the Cloud – AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and physical security at the facilities that run the AWS Cloud services.

Security in the Cloud – The customer responsibility is determined by the AWS Cloud services that a customer selects. This translates to the amount of configuration work the customer must perform as part of their security responsibilities. For example, services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and Amazon S3 are categorized as Infrastructure as a Service (IaaS) and, as such, require the customer to perform all of the necessary security configuration and management tasks. If a customer deploys an Amazon EC2 instance, they are responsible for managing the guest operating system (including updates and security patches), any application software or utilities installed on the instance, and the configuration of the AWS-provided firewall (the security group) on each instance.
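
For example, opening an inbound port on a security group is the customer's job, not AWS's. The following is a minimal, hypothetical sketch using boto3 (the group ID is a placeholder):

import boto3

# The customer, not AWS, decides which ports are reachable on an instance.
ec2 = boto3.client('ec2')

ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',              # placeholder security group ID
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 443,
        'ToPort': 443,
        'IpRanges': [{'CidrIp': '0.0.0.0/0'}]    # allow inbound HTTPS from anywhere
    }]
)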

This short animation succinctly drives the message home.

CelebritySleuth – A celebrity Face-Recognition Service

CelebritySleuth

A celebrity face recognition service built with Serverless Framework using Twilio, Amazon Rekognition and IMDbPy API.

This project provides an overview of Serverless computing and drills down into the concepts by helping you build a compelling example – a Shazam for celebrities – dubbed CelebritySleuth, using the Serverless Framework.

For more information on Serverless architecture and Serverless Framework check out this whitepaper.

Audience

You are a Developer or Solutions Architect wanting to learn the Serverless Framework and how to use it for developing your own auto-scaling, pay-per-execution, event-driven applications. You want to build applications that matter to the business instead of spending time configuring, deploying and maintaining infrastructure. The boost in efficiency that the Serverless architecture promises is too compelling for you to ignore.

The CelebritySleuth Application

The CelebritySleuth application is an event-driven application taking advantage of the user’s mobile SMS/MMS for the presentation tier, Twilio in the middle tier to bridge the SMS world and AWS API Gateway, and a set of AWS Lambda functions written in Python making use of Amazon Rekognition for image processing and IMDb for gathering information on the celebrities.

In a typical usage, the user snaps a picture of a celebrity (on TV?!) using his/her phone camera and sends the image along with the instructions to a certain phone number hosted on Twilio. The system validates the instructions and responds immediately with an error if the instructions are invalid, or with a message to await the actual response if the validation passes. Momentarily, the user gets a response with the name and biography of the celebrity.

To begin with, you have to train the application to recognize faces by building a collection of celebrities. You do this by sending a random sample of celebrity pictures (image URLs) and their corresponding names. The more pictures of a celebrity, the more accurate the recognition will be.

The CelebritySleuth application consists of two services:

  • Twilio Communication Service
  • Face Recognition Service

The services are decoupled to allow for using different presentation tiers in future.


The Architecture

The CelebritySleuth application uses Lambda functions for its compute needs. As a result, the application components are provisioned just before usage and brought down after use, resulting in a low-cost, highly scalable application.

Celebrity Sleuth Architecture

The above picture illustrates the high-level architecture of the application. Details are as follows:

  1. The user sends a picture and commands to add/match a face to a collection. Alternatively, the user can create a collection – in which case a picture is not required. The SMS/MMS is sent to a telephone number hosted by Twilio.

  2. Twilio intercepts the message and forwards it to an API Gateway based on the user’s Twilio configuration.

  3. API Gateway translates TwiML to JSON and calls the Request Processor lambda function.

  4. The Request Processor lambda validates the commands and puts a message on the appropriate topic on SNS; a minimal sketch of this publish step appears after this list. If the validation fails, it returns the error message to the user via Twilio.

  5. When a message arrives in the Create Collection topic, a lambda is triggered which adds the named collection in AWS Rekognition via Boto libraries. A success/error message is put in the Response Processor topic.

  6. When a message arrives in the Add Face topic, a lambda is triggered which identifies the most prominent face in the image and adds the metadata for the face to the given collection. If no faces are identified, it creates an error message and sends the response to the Response Processor topic.

  7. When a message arrives in the Match Face topic, a lambda is triggered which identifies the most prominent face in the image and matches the metadata for that face with known faces in the collection. If a match is found, the corresponding person’s name is returned. The Lambda then uses IMDb to look up the biography of the person.

  8. The various lambda-based processors drop their response messages on the Response Processor topic.

  9. The Response Processor picks up the response, constructs an SMS message and calls Twilio’s SMS service.

  10. Twilio validates the From number and sends the message to the corresponding To number.
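
As a concrete illustration of step 4, forwarding a validated command is a single SNS publish from the Request Processor Lambda. The sketch below is hypothetical – the topic ARN and payload keys are placeholders, and the actual function in the repository may structure its messages differently:

import json
import boto3

sns = boto3.client('sns')

def publish_command(topic_arn, payload):
    # Forward a validated command to the SNS topic that owns it
    # (Create Collection, Add Face or Match Face).
    return sns.publish(TopicArn=topic_arn, Message=json.dumps(payload))

# Hypothetical example: ask the Add Face lambda to index a picture for a name.
publish_command(
    'arn:aws:sns:us-east-1:123456789012:addFaceTopic',   # placeholder ARN
    {'collection': 'celebs',
     'name': 'Albert_Einstein',
     'image_url': 'https://example.com/albert.jpg'}
)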


Components

The application consists of the following components:

  1. Python – Python is a programming language that lets you work quickly and integrate systems more effectively. We will use Python 2.7 for building the CelebritySleuth Application.

  2. Twilio – Twilio Messaging Service for having the user communicate with the application through SMS.

  3. AWS Lambda – AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running.

  4. AWS Rekognition – Amazon Rekognition is a service that makes it easy to add image analysis to your applications. With Rekognition, you can detect objects, scenes, and faces in images. You can also search and compare faces.

  5. IMDb – IMDbPY is a Python package useful to retrieve and manage the data of the IMDb movie database about movies, people, characters and companies.


Setup Instructions

Installing Python

If you are on a Mac or Linux machine, you probably already have Python installed. On Windows you have to install Python.

Regardless of your operating system, you are better off using a virtual environment for running Python. Anaconda, or its slimmed-down version Miniconda, is a Python distribution that allows you to manage various versions and environments of Python. The installers come with Python and the conda package manager. Follow the instructions here to install Miniconda. For this project we will use Python 2.7.

Creating a Python Virtual Environment

After installing Python 2.7, create a virtual environment as follows. Note: I am calling my virtual environment faces:

$ conda create -n faces python=2

Initialize your Python Virtual Environment

To start working in your new Python virtual environment:

$ source activate faces

If you are working in Windows, use:

$ activate faces

Install Git

Git is a popular code revision control system. To install Git for your respective operating system follow the instructions here.

Install CelebritySleuth

To install CelebritySleuth from Git, follow the instructions below:

$ mkdir DevFestDC
$ cd DevFestDC
$ git clone --recursive https://github.com/skarlekar/faces.git

Twilio Setup

If you do not already have a Twilio number, sign up for Twilio and get a trial phone number with MMS capability. You can use this number for 30 days during the trial period.

Setup Twilio Environment Variables

Update setTwilio.sh in the repository with your credentials from Twilio and set up your environment. Instructions on using vi are here.

$ cd faces
$ vi setTwilio.sh
$ source ./setTwilio.sh

Windows users, use:

$ cd faces
$ edit setTwilio.cmd
$ setTwilio.cmd

Test Twilio Setup

To test your Twilio setup, run the Python program sendmessage.py under twilioTester. This program simply sends a message to your mobile number using your Twilio credentials. Note: Make sure you are running this in your Python 2.7 environment.

$ python twilioTester/sendmessage.py

If you receive a message with an image on your mobile, your Twilio setup is working.
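
For reference, a minimal test script along these lines might look as follows. This is a hedged sketch using the Twilio Python helper library – the phone numbers and image URL are placeholders, and the actual sendmessage.py in the repository may differ:

import os
from twilio.rest import Client

# Credentials come from the environment variables set by setTwilio.sh / setTwilio.cmd.
client = Client(os.environ['TWILIO_ACCOUNT_SID'], os.environ['TWILIO_AUTH_TOKEN'])

# Placeholder numbers: from_ must be your Twilio number, to must be your mobile number.
message = client.messages.create(
    to='+15551234567',
    from_='+15557654321',
    body='Hello from CelebritySleuth!',
    media_url=['https://example.com/sample.jpg']
)
print(message.sid)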

Install node.js and Serverless framework

The Serverless Framework is a Node.js application. To use the Serverless Framework and run the CelebritySleuth application, you need to install Node.js. Follow the instructions on the Serverless website to install both Node.js and the Serverless Framework.

Ensure your Serverless framework is operational using the following:

$ serverless --version

Amazon AWS Setup

  1. Sign in to your AWS account or sign up for one.
  2. Set up your AWS credentials by following the instructions from here.

Testing your Serverless Setup

Now that you have set up AWS, it is time to test your Serverless setup by creating a mock function using the Serverless Framework.

Create a test directory. In the test directory, create a Lambda function from the default template as follows:

$ mkdir sls-tester
$ cd sls-tester
$ sls create --template aws-python --name sls-test

This should create two files in the current directory:

serverless.yml

handler.py

The serverless.yml declares a sample service and a function. The handler.py returns a message stating that your function executed successfully.
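
For reference, the generated handler.py looks roughly like the following (reproduced from the aws-python template; the exact wording in your version may differ slightly):

import json


def hello(event, context):
    # Echo the incoming event back along with a success message.
    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "input": event
    }

    response = {
        "statusCode": 200,
        "body": json.dumps(body)
    }

    return response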

To deploy the function, simply type:

$ sls deploy --verbose

This should deploy the function. The verbose option provides extra information.

To test your function, type:

$ sls invoke --function hello

If you get the following message, your Serverless setup is working.

  WARNING: You are running v1.9.0. v1.10.0 will include the following breaking changes:
    - Some lifecycle events for the deploy plugin will move to a new package plugin. More info -> https://git.io/vy1zC

  You can opt-out from these warnings by setting the "SLS_IGNORE_WARNING=*" environment variable.

{
    "body": "{\"input\": {}, \"message\": \"Go Serverless v1.0! Your function executed successfully!\"}",
    "statusCode": 200
}

Twilio Communication Service

The Twilio Communication Service twilioCommunicationService bridges Twilio’s SMS messaging service with the Face Recognition Service. When the user sends a message to his/her Twilio number, the message is intercepted by Twilio’s Messaging service. The Twilio Messaging service forwards the SMS message contents to the AWS API Gateway URL. The API Gateway in turn invokes the Request Processor (process_request) Lambda function in the twilioCommunicationService.

The TwilioCommunicationService supports two functions:

  1. The processRequest function validates incoming requests and responds synchronously – with a confirmation if the message is in the right format, or with an error message giving the correct usage (see the Usage section for details on how to use this service). It then invokes the faceRecognitionService asynchronously through SNS to process the command.

  2. The sendResponse function composes a response from the faceRecognitionService and sends it back to the number from which the request originated.
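
To make the flow concrete, a stripped-down, hypothetical sketch of the processRequest handler is shown below. Twilio's standard webhook parameters (Body, From, MediaUrl0) are real, but the event shape and the usage string are assumptions – the actual function in the repository may parse the request differently:

def process_request(event, context):
    # With a GET webhook, Twilio's parameters typically arrive as query-string
    # parameters; the exact event shape depends on the API Gateway integration.
    params = event.get('queryStringParameters') or {}

    body = params.get('Body', '')         # e.g. "face addcol celebs"
    sender = params.get('From')           # the user's phone number
    image_url = params.get('MediaUrl0')   # present only when an image was attached

    tokens = body.strip().split()
    if len(tokens) < 3 or tokens[0].lower() != 'face':
        reply = 'Usage: face addcol|addimg|match <collection> [first_last]'
    else:
        # ... publish the validated command to the appropriate SNS topic ...
        reply = 'All good. You should receive your response momentarily!'

    return {'statusCode': 200, 'body': reply}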

Deploy Twilio Communication Service

Assuming your local Serverless setup is complete and the test above passes, follow the instructions below to deploy the twilioCommunicationService using the Serverless Framework:

Set your Twilio credentials by running the shell script you updated earlier.

$ source ./setTwilio.sh

Windows users, use:

$ setTwilio.cmd

Change directory to the twilioCommunicationService directory and deploy the service by running sls deploy as shown below:

$ cd twilioCommunicationService
$ sls deploy --verbose

Ensure there are no errors in the deployment process. You can also head over to your AWS Console and verify that the API Gateway has been created. You should see an API Gateway called dev-twilioCommunication.

Also ensure the Lambda functions are created by verifying that the twilioCommunication-dev-processRequest and twilioCommunication-dev-sendResponse lambda functions are available in the AWS Lambda console.

Ensure your Twilio credentials are setup as environment variables by clicking on each of the Lambda functions and verifying that the TWILIO_AUTH_TOKEN and TWILIO_ACCOUNT_SID have been created with the correct values in the Environment Variables section in the Code tab.

Setup Twilio Messaging Service

Follow the instructions below to setup the Messaging Service in Twilio and associate your Twilio number to the Messaging Service:

  1. In the Twilio website, log in to your account and head to the Messaging Services section.

  2. Click on the + under Messaging Services with Copilot to add a new Messaging service.

  3. Give a name to your service, say CelebritySleuth, and click on Create.

  4. Under Properties in the USE CASE drop-down, select Chat Bot/Interactive 2-way.

  5. In the Inbound Settings section, ensure PROCESS INBOUND MESSAGES is checked.

  6. Copy and paste the AWS API Gateway URL from your AWS console into the REQUEST URL field and add /process_request to the end of the URL. Your URL should resemble: https://netxnasdfda.execute-api.us-east-1.amazonaws.com/dev/process_request

  7. Select HTTP GET in the drop-down next to the field.

  8. Leave the rest of the fields at their default values and click SAVE.

  9. Head to the Numbers section in the Twilio console.

  10. Click on the number assigned to you. This will take you to the section where you can configure what should happen when an SMS message is sent to your Phone Number.

  11. Under the Messaging section, select Messaging Service under the CONFIGURE WITH drop-down.

  12. In the MESSAGING SERVICE field, select the Messaging service that you created in steps 2-8 above and click SAVE.


Face Recognition Service

The Face Recognition Service (faceRecognitionService) currently supports three functions. They are:

  • createCollection: To add a collection.
  • addFace: To add a face to an existing collection.
  • matchFace: To match a face in a collection and look up the biography of the matched person in IMDb.

A collection is a logical group of face indexes (face vectors) and their corresponding names. When you create a collection using the createCollection Lambda function, it is empty. On the back-end, createCollection creates a collection in AWS Rekognition. You can then add new faces to the collection and associate each with a given name. The collection thus created can then be used for searching images for known faces with a high degree of confidence. Physically, the face indexes are stored in a database on a server managed by AWS Rekognition. You do not have direct access to this database.
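
Under the hood, creating a collection is a single Rekognition call through boto3. A minimal sketch (the collection name is a placeholder; the repository's createCollection function may differ):

import boto3

rekognition = boto3.client('rekognition')

# Create an empty, named collection of face vectors. Rekognition stores the
# index on its side; you never see the underlying database.
response = rekognition.create_collection(CollectionId='celebs')
print(response['StatusCode'])   # 200 on success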

Once you have created a collection, you can add faces to it using the addFace Lambda function. To add a face to a collection, you have to provide an image, a collection name and the name you want to associate with the face. If there are no faces in the given image, or if the collection does not exist, an error message is returned. The addFace function uses AWS Rekognition to detect faces in the given image, extract features from the face and persist information about the facial features detected in the image to AWS Rekognition. The facial features are stored as searchable image vectors.
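
The indexing step maps to Rekognition's index_faces API. A minimal sketch follows – the image file and names are placeholders, and the real addFace function may fetch the image bytes from the MMS URL rather than a local file:

import boto3

rekognition = boto3.client('rekognition')

with open('albert.jpg', 'rb') as f:        # placeholder image file
    image_bytes = f.read()

# Index the most prominent face in the image and tag it with the person's name.
response = rekognition.index_faces(
    CollectionId='celebs',
    Image={'Bytes': image_bytes},
    ExternalImageId='Albert_Einstein'      # the name associated with the face
)

for record in response['FaceRecords']:
    print('%s %s' % (record['Face']['FaceId'], record['Face']['ExternalImageId']))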

Once you have some faces indexed using the addFace function, you can then provide images of the indexed person to the matchFace function. The matchFace function requires the name of an existing collection and an image containing the face to match.
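
The matching and lookup steps roughly correspond to Rekognition's search_faces_by_image call followed by an IMDbPy query. The sketch below is hedged – the threshold, file name and output format are placeholders, and the repository's matchFace function may differ:

import boto3
from imdb import IMDb

rekognition = boto3.client('rekognition')

with open('another_photo.jpg', 'rb') as f:   # placeholder image file
    image_bytes = f.read()

# Match the most prominent face in the image against the collection.
result = rekognition.search_faces_by_image(
    CollectionId='celebs',
    Image={'Bytes': image_bytes},
    FaceMatchThreshold=70,                   # placeholder threshold
    MaxFaces=1
)

if result['FaceMatches']:
    match = result['FaceMatches'][0]
    name = match['Face']['ExternalImageId'].replace('_', ' ')
    print('Face matched %s with %.2f similarity.' % (name, match['Similarity']))

    # Look up the person's biography on IMDb.
    ia = IMDb()
    people = ia.search_person(name)
    if people:
        person = ia.get_person(people[0].personID, info=['biography'])
        bio = person.get('mini biography')
        if bio:
            print(bio[0])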

Deploy Face Recognition Service

Change directory to the faceRecognitionService directory and deploy the service by running sls deploy as shown below:

$ cd faceRecognitionService
$ sls deploy --verbose

Ensure there are no errors in the deployment process. You can also head over to your AWS Lambda Console and verify that the Lambda functions faceRecognition-dev-addFace, faceRecognition-dev-matchFace and faceRecognition-dev-createCollection have been created.


Testing the application

To test the application, create a collection by sending the following SMS message to your Twilio number:

face addcol celebs

Does it work? Are you getting a message back from yonder? If not, why not? Look at the CloudWatch logs for your processRequest Lambda function. What is the issue?

If you look carefully, you will find that the Lambda functions do not have enough permissions to operate on the SNS topics or AWS Rekognition resources.

Fixing twilioCommunicationService

Open the serverless.yml file in the twilioCommunicationService directory, uncomment the iamRoleStatements section, save and deploy the service again.

$ cd twilioCommunicationService
$ vi serverless.yml
$ sls deploy --verbose

Repeat adding a collection again.

face addcol celebs

This time, you should get a message saying that you will get a response back momentarily.

All good. You should receive your response momentarily!

You can wait till the cows come home, but you are not getting a response, are you?

This is because you have to provide more permissions to the Lambda functions in the faceRecognitionService.

Fixing faceRecognitionService

Open the serverless.yml file in the faceRecognitionService directory, uncomment the iamRoleStatements section, save and deploy the service again.

$ cd faceRecognitionService
$ vi serverless.yml
$ sls deploy --verbose

That should have fixed it. Now if you add a collection:

face addcol celebs

You should not only get the following response:

All good. You should receive your response momentarily!

You should also get the following message:

Collection celebs was created successfully

Adding a face to the collection

To add a face to the collection, send the following as part of an SMS message, along with an image of the person:

(image)
face addimg celebs firstName_lastName

You should receive the following response:

Picture of firstName_lastName added to collection celebs

Matching a face in the collection

Now find another image of the person and test the face matching power of AWS Rekognition by sending an image and the following commands through SMS to your Twilio number:

(image)
face match celebs

You should not only get the following message:

All good. You should receive your response momentarily!

You should also receive the following message, followed by the biography of the person:

Face matched firstName lastName with xx.xx similarity and yy.yy confidence.
....
....

Usage

Following are the commands currently supported by CelebritySleuth.

Add Collection

To add a new collection, type the following in the SMS body. You don’t have to add any image with this command. Any image added will be ignored.

face addcol (collection-name)

where collection-name is the name of the collection you want to create.

Example: face addcol celebs

example of adding a collection


Add Image

To add an image to a given collection and associate the face in the image to a name, type the following in the body of the SMS. You should include an image in the same SMS message.

face addimg (collection-name) (first-name_last-name)

where collection-name is the name of an existing collection and first-name_last-name is the full name of the person. Note that the first name and last name should be separated by an underscore.

Example: face addimg celebs Albert_Einstein

example of adding an image to a collection


Match Face

To match a face in a given collection:

face match (collection-name)

where collection-name is the name of an existing collection. This command must be accompanied with an image in the same SMS message.

Example: face match celebs

snapshot of matching a celebrity face


Application in Action:

Following is a sample animation of the application in action:

sample GIF of the application in action

Serverless Architecture & Serverless Framework

Per Gartner, by 2022 most cloud architectures will evolve to a fundamentally serverless model, rendering the cloud platform architectures dominating in 2017 legacy architectures.

Serverless is a cloud-native platform model and reflects the core-promise of cloud-computing by offering agility and capability on demand at a value price.

The introduction of function PaaS (fPaaS) as Lambda by Amazon at re:Invent in November 2014 (and out of beta in late 2015) created momentum for the “serverless” platform architecture. AWS Lambda was soon followed by most major cloud platform vendors, including IBM, Microsoft, Google and, more recently, Oracle.

Amazon started the trend with Lambda

Separating the Wheat from the Chaff

The Serverless computing model is an emerging trend that is quite often misunderstood because of the hype and build-up surrounding the topic.

The term Serverless refers to building applications without having to configure or maintain the infrastructure required for running them. In reality, servers are still involved, though they are owned and controlled by the platform providers. On the other hand, there is a framework for exploiting the serverless architecture, uninspiringly named the Serverless Framework, which only adds to the confusion.

Serverless - No need to configure or maintain infrastructure

Serverless Architectures

Serverless Architectures are based on models where the application’s logic, provided by the Developer, is run on stateless compute containers that are provisioned and managed by a provider. Typically these compute instances are ephemeral (short-lived for the duration of the request-response cycle) and triggered by an event. As the load on the application increases, additional infrastructure is automatically deployed to meet the need. Due to the on-demand provisioning nature of this architecture, systems built using Serverless technologies are inherently scalable and highly responsive under load.

FaaS – Function as a Service

FaaS is the technique of building applications using the Serverless architecture. Its key characteristics are:

  • Pay-per-execution – The pay-per-execution model is the most efficient at managing costs.
  • Ephemeral – Short-lived processes triggered via events.
  • Auto-scaling – Compute resources are provisioned granularly per request.
  • Event-driven – Functions respond to events such as HTTP requests, file drops, alerts, timers, topics, etc.
  • Microservices – Modules built to satisfy a specific goal that use a simple, well-defined interface.

FaaS – Applications Built with Serverless Architecture

FaaS vs PaaS

Some people in the industry refer to the technique of building applications using Serverless architecture as FaaS (Function as a Service). The reason becomes clear when you contrast FaaS applications with traditionally built applications, or PaaS (Platform as a Service), where there is a perpetual process running on a server waiting for HTTP requests or API calls. In FaaS there is no perpetual process (for the most part), but an event mechanism that triggers the execution of a piece of code, usually just a function. You still need a perpetual gateway that fields your API calls and starts the event cascade.

The other key operational difference between FaaS and PaaS is scaling. With most PaaS solutions you still need to worry about scale. With FaaS, compute resources are provisioned at the request level. You cannot get the same level of granularity with PaaS applications, even if you set them to auto-scale. As a result, FaaS applications are extremely efficient when it comes to managing cost.

Limitations of FaaS

State – Due to the ephemeral nature of the FaaS architecture, the state of your application should be managed externally from the FaaS infrastructure or off-loaded to a cache or database. This can be very limiting for certain types of applications running on thin clients or untrusted devices where the application orchestration has to extend through multiple request-response cycles.

State between Requests must be maintained outside of FaaS
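
If your application does need to remember something between invocations, the usual pattern is to park that state in a managed store such as DynamoDB or a cache. A minimal, hypothetical sketch (the table and attribute names are placeholders):

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('sleuth-sessions')   # placeholder table name

def save_state(session_id, state):
    # Persist per-session state outside the function's ephemeral container.
    table.put_item(Item={'session_id': session_id, 'state': state})

def load_state(session_id):
    # Fetch the state back on the next invocation; returns None if absent.
    item = table.get_item(Key={'session_id': session_id}).get('Item')
    return item['state'] if item else None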

Duration – Because of the on-demand provisioning and low-cost nature of the FaaS solution, there is a restriction on how long your functions are allowed to run. To keep the price low – as you are billed for execution time – some providers such as Amazon AWS and Microsoft Azure restrict the duration of time a function is allowed to process a request.

 Duration of time a function is allowed to run is restricted

Deployment & Resource Limits – Some providers such as AWS have limits on the size of the deployment package and the size of the code and libraries that can be deployed in the package. This can be severely limiting for some applications, such as image processing functions that depend on large libraries that have to be packaged along with the code. Additionally, there are limits on the number of concurrent function executions, ephemeral disk capacity (temp space), etc. While some of these limits are soft limits and can be raised per function by working with the providers, others are hard limits and will force you to reevaluate your design choices.

Resources are limited - Use wisely

Latency – Due to the on-demand provisioning nature of the FaaS infrastructure, applications written in languages such as Java/Scala that require a long time to spin up a JVM may encounter longer cold-start latency. That said, providers optimize infrastructure spin-up based on the usage patterns of the functions. Due to the interpreted nature of Python and JavaScript, functions written in these languages may not see a significant difference in latency between a PaaS and a FaaS offering.

Test the performance of your applications thoroughly

The Players

While there are new providers entering the market to exploit the Serverless wave, the significant players – Amazon with AWS Lambda, Microsoft with Azure Functions, Google with Google Cloud Functions and IBM with OpenWhisk – rule the roost, with AWS Lambda being the dominant player.

Amazon's AWS Lambda is the dominant player

Serverless Framework

While not having to manage infrastructure by using serverless functions is nice, dealing with hundreds of functions in a project across multiple providers, and managing buckets, messaging and permissions, becomes an issue in itself. Additionally, organizations want to diversify risk and hence do not want to be bound to a single provider.

Add to this mix the idiosyncrasies of each provider when it comes to their FaaS offering. Not only do you have to learn the different terminologies used by the various providers, you also have to learn how to use their offerings on their respective consoles or CLIs (Command Line Interfaces).

To avoid vendor lock-in and to allow you to deploy your FaaS solutions to various providers, the Serverless Framework comes to your rescue. The Serverless Framework allows you to deploy auto-scaling, pay-per-execution, event-driven functions to any cloud. It currently supports AWS Lambda, IBM Bluemix OpenWhisk, Microsoft Azure, and is expanding to support other cloud providers.

The Serverless Framework is an MIT open-source project, actively maintained by a vibrant and engaged community of developers. It provides robust plugins for various FaaS providers and allows you to extend it when needed.

The Serverless Framework allows you to provision and deploy REST APIs, backend services, data pipelines, and other use cases by providing a framework and CLI to build serverless services across many providers, abstracting away provider-level complexity.

The Serverless Framework is different from other application frameworks because:

  • It manages your code as well as your infrastructure
  • It supports multiple languages (Node.js, Python, Java, and more)

Serverless framework allows choice of FaaS providers across a single project

Core Concepts of the Serverless Framework

The Serverless Framework consists of the following core concepts:

  • Service
  • Function
  • Events
  • Resources
  • Plugins

Service – A Service in the Serverless Framework is the unit of organization. It’s where you define your Functions, the Events that trigger them, and the Resources your Functions use, all in one file titled serverless.yml. More information at https://goo.gl/9SKBvx

An application can have multiple services and hence multiple serverless.yml files.

Functions – A Function is an independent unit of deployment, or microservice. It manifests itself as a Lambda or an Azure Function depending upon the provider. It’s merely code, deployed in the cloud, that is most often written to perform a single job such as:

  • Saving a user to the database
  • Processing a file in a database
  • Performing a scheduled task

Events – Anything that triggers a Function to execute is regarded by the Framework as an Event. Events on AWS are:

  • An AWS API Gateway HTTP endpoint request (e.g., for a REST API)
  • An AWS S3 bucket upload (e.g., for an image)
  • A CloudWatch timer (e.g., run every 5 minutes)
  • An AWS SNS topic (e.g., a message)
  • A CloudWatch Alert (e.g., something happened)

When you define an event for your functions in the Serverless Framework, the Framework will automatically create any infrastructure necessary for that event (e.g., an API Gateway endpoint) and configure your Functions to listen to it.

Simply put, events are the things that trigger your functions to run. If you are using AWS as your provider, all events in the service are anything in AWS that can trigger an AWS Lambda function, like an S3 bucket upload, an SNS topic, and HTTP endpoints created via API Gateway.

Upon deployment, the framework will deploy any infrastructure required for an event (e.g., an API Gateway endpoint) and configure your function to listen to it.
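
For example, a Python function wired to an SNS event receives the published message inside the event's Records array. A minimal sketch (the standard Lambda/SNS event shape is assumed; the payload keys are placeholders):

import json

def handler(event, context):
    # Each SNS delivery arrives as a record; the published payload is the JSON
    # string placed in Sns.Message by the publisher.
    for record in event['Records']:
        payload = json.loads(record['Sns']['Message'])
        print('Received command: %s' % payload)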

Resources – Resources are infrastructure components which your Functions use. If you use AWS as your provider, then resources are:

  • An AWS DynamoDB Table (e.g., for saving Users/Posts/Comments data)
  • An AWS S3 Bucket (e.g., for saving images or files)
  • An AWS SNS Topic (e.g., for sending messages asynchronously)

Anything that can be defined in CloudFormation is supported by the Serverless Framework. The Serverless Framework not only deploys your Functions and the Events that trigger them, but it also deploys the infrastructure components your Functions depend upon.

Credentials – The Serverless Framework needs access to your cloud provider account credentials to deploy resources on your behalf. For AWS, you can use the AWS CLI (aws configure). Azure is more involved.

The following links provide excellent guidance on setting up credentials for the various providers currently supported by the Serverless Framework.

  • AWS – https://serverless.com/framework/docs/providers/aws/guide/credentials/
  • Azure – https://serverless.com/framework/docs/providers/azure/guide/credentials/
  • Openwhisk – https://serverless.com/framework/docs/providers/openwhisk/guide/credentials/

Deployment – The Serverless Framework translates the service declaration in the serverless.yml file into a CloudFormation or Resource Manager template, depending on the provider you choose.

To deploy your service and functions and provision the resources all at once, enter:

serverless deploy --verbose

To deploy a single function after making changes to it, enter:

serverless deploy function --function <myfunction> --verbose

Invoking – The Serverless Framework allows you to invoke a function locally for testing or to invoke a deployed function.

To invoke your function locally, enter:

serverless invoke local --function <myfunction> --log

To invoke a deployed function, enter:

serverless invoke --function <myfunction> --stage <mystage> --region <myregion>

If you omit the stage and region options, the default stage (dev) and the region specified in your provider configuration will be used.

CelebritySleuth – A Sample Use Case

CelebritySleuth is a celebrity face recognition service built using Serverless Framework, Twilio, Amazon Rekognition and IMDbPy API.

The CelebritySleuth application is an event-driven application taking advantage of the user’s mobile SMS/MMS for the presentation tier, Twilio in the middle tier to bridge the SMS world and AWS API Gateway, and a set of AWS Lambda functions written in Python making use of Amazon Rekognition for image processing and IMDb for gathering information on the celebrities.

The CelebritySleuth code repository, installation guide, and usage are at https://github.com/skarlekar/faces

How it works

To begin with, you have to train the application to recognize the faces by building a collection of celebrities. You do this by sending a random sample of celebrity pictures (image URLs) and their corresponding names. The more pictures of a celebrity, the more accurate the recognition will be. The CelebritySleuth application consists of two services:

  • Twilio Communication Service
  • Face Recognition Service

The services are decoupled to allow for using different presentation tiers in future.

Architecture

The CelebritySleuth application uses Lambda functions for its compute needs. As a result, the application components are provisioned just before usage and brought down after use, resulting in a low-cost, highly-scalable application.

The above picture illustrates the high-level architecture of the application. Details are as follows:

  1. The user sends a picture and commands to add/match a face to a collection. Alternatively, the user can create a collection – in which case a picture is not required. The SMS/MMS is sent to a telephone number hosted by Twilio.

  2. Twilio intercepts the message and forwards it to an API Gateway based on the user’s Twilio configuration.

  3. API Gateway translates TwiML to JSON and calls the Request Processor lambda function.

  4. The Request Processor lambda validates the commands and puts a message on the appropriate topic on SNS. If the validation fails, it returns the error message to the user via Twilio.

  5. When a message arrives in the Create Collection topic, a lambda is triggered which adds the named collection in AWS Rekognition via Boto libraries. A success/error message is put in the Response Processor topic.

  6. When a message arrives in the Add Face topic, a lambda is triggered which identifies the most prominent face in the image and adds the metadata for the face to the given collection. If there are no faces identified, it creates an error message and sends the response to the Response Processor topic.

  7. When a message arrives in the Match Face topic, a lambda is triggered which identifies the most prominent face in the image and matches the metadata for that face with known faces in the collection. If a match is found, the corresponding person’s name is returned. The Lambda then uses IMDb to look up the biography of the person.

  8. The various lambda-based processors put the response message on the Response Processor topic.

  9. The Response Processor picks up the response and constructs an SMS message and calls Twilio’s SMS service.

  10. Twilio validates the From number and sends the message to the corresponding To number.

Components

The application consists of the following components:

  1. Python – Python is a programming language that lets you work quickly and integrate systems more effectively.

  2. Twilio – Twilio Messaging Service for having the user communicate with the application through SMS.

  3. AWS Lambda – AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running.

  4. AWS Rekognition – Amazon Rekognition is a service that makes it easy to add image analysis to your applications. With Rekognition, you can detect objects, scenes, and faces in images. You can also search and compare faces.

  5. IMDb – IMDbPY is a Python package useful to retrieve and manage the data of the IMDb movie database about movies, people, characters, and companies.

Screenshots

Application in Action

Following is a sample animation of the application in action:

Conclusion

Serverless Framework is an accelerator for adopting Serverless architecture. It promises significantly less DevOps, lower cost, high scalability and multiple deployment options across a variety of providers.

Apart from providing scaffolding to deploy Lambdas, the Serverless Framework allows you to manage multiple Lambdas and related infrastructure across multiple regions and stages. To top it off, it allows you to manage the equivalent of Lambda functions across multiple providers.

In my testing, I could not deploy the CelebritySleuth application on Microsoft Azure because the Serverless Framework does not currently support deploying functions written in Python there. However, in speaking to Austin Collins, the founder of the Serverless Framework, at Serverless Conf 2017 in Austin, I gathered that his team is working on supporting as many languages as the cloud providers do.

Aside from this, I was able to build the CelebritySleuth application from start to end in a couple of hours using the Serverless Framework, compared to half a day for setting up the components manually through the AWS console.