Fargate Design Patterns

Tom Thumb’s Journey to Fargate on a few Pennies

AWS Fargate is the Uber of container services, allowing engineers to hail a container by specifying their compute and memory needs. By providing incredible on-demand flexibility and removing the burden of resource provisioning, just as Lambda did for servers years ago, Fargate is disrupting container management technology.


Making software behave predictably across the different environments in which it is deployed during an application’s lifecycle is one of the biggest challenges of development. Subtle differences in system software over which developers have no control – even within the same operating system – can cause unexpected behaviors that are hard to debug and fix.

Containers were invented to solve this problem. A container encapsulates the entire runtime environment of an application or service – the application itself, its dependent libraries, and just enough configuration to run it – into a software package that is portable across operating systems. By sandboxing the application into just enough space and opening just the right ports for communication with the outside world, containers also increase the security of an application by reducing the blast radius, while increasing the number of services that can be run on a unit of hardware.

First released in 2013, Docker popularized the concept of containers. Kubernetes followed in 2014, allowing containers running across heterogeneous hosts to be orchestrated by automating provisioning, networking, load-balancing, security, and scaling through a single dashboard or command line. Both of these technologies still require upkeep of the underlying cluster of servers and operating systems through upgrades, patching, rehydration, and security management. Amazon introduced ECS and EKS as platform services to streamline this management process for Docker and Kubernetes respectively.

What is AWS Fargate?

Put simply, AWS Fargate is a container management solution provided by AWS to run your containers without having to worry about managing a cluster of servers. You don’t have to choose server types, upgrade or patch servers, or optimize container packing on your clusters.

This is analogous to hailing an Uber car service. With Uber, you just say what size car you want based on how many people are riding, whether you need a car seat, or whether the car should be wheelchair-accessible. You don’t specify a Lexus or a Toyota. With Fargate, all you do is package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Fargate takes care of scaling, so you don’t have to worry about provisioning enough compute resources for your containers to scale out, or about scaling them in when they are underutilized. In essence, Fargate separates the task of running containers from the task of managing the underlying infrastructure. Developers simply specify the resources that each container requires, and Fargate handles the rest. As a result, applications deployed on Fargate can save you time, manpower, and money.

If you are used to traditional container management, you will really appreciate Fargate allowing you to focus on the ‘Dev’ part of designing and building your applications and reduce the ‘Ops’ part of managing infrastructure from your ‘DevOps’ responsibilities.

Components of AWS Fargate

Fargate Components


A Task is the smallest deployable unit on Fargate and can be composed of one or many containers. A Task Definition serves as the blueprint for a Task: it specifies which container images and which container repository to use, along with the CPU, memory, and the roles to use for executing the task. Fargate then uses this information to spin up the containers that execute the Task.
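As an illustration, such a blueprint can be expressed as the request payload for boto3’s `ecs.register_task_definition` call. The family name, image URI, and role ARN below are hypothetical placeholders, not values from this article’s projects:

```python
# Sketch of a Fargate Task Definition as a boto3 request payload.
# All names and ARNs are hypothetical.
task_definition = {
    "family": "thumbnail-generator",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",          # Fargate tasks must use awsvpc networking
    "cpu": "512",                     # 0.5 vCPU
    "memory": "1024",                 # 1 GB
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [{
        "name": "thumbnailer",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/thumbnailer:latest",
        "essential": True,
    }],
}

# With AWS credentials configured, this would be registered as:
#   import boto3
#   boto3.client("ecs").register_task_definition(**task_definition)
print(task_definition["family"])
```

Note that `cpu` and `memory` must be one of the CPU/memory combinations Fargate supports; arbitrary pairings are rejected at registration time.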


Fargate allows you to run and maintain a specified number of Tasks simultaneously in an Amazon ECS cluster. This is called a Service. If any of your tasks fails or stops for any reason, the Amazon ECS service scheduler launches another instance of your task definition to replace it, maintaining the desired count of tasks in the service according to the scheduling strategy used.

In addition to maintaining the desired count of tasks in your service, you can optionally run your service behind a load balancer. The load balancer distributes traffic across the tasks that are associated with the service.
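Putting the two paragraphs above together, a Service wrapping a task definition and fronted by a load balancer target group might be sketched as the payload for boto3’s `ecs.create_service` call. Cluster, service, subnet, and target group identifiers are hypothetical:

```python
# Sketch of an ECS Service definition as a boto3 request payload.
# Names, subnet IDs, and ARNs are hypothetical placeholders.
create_service_request = {
    "cluster": "demo-cluster",
    "serviceName": "thumbnail-service",
    "taskDefinition": "thumbnail-generator",   # a previously registered family
    "desiredCount": 2,                         # the scheduler keeps 2 tasks alive
    "launchType": "FARGATE",
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0abc1234"],
            "assignPublicIp": "ENABLED",
        }
    },
    # Optional: distribute traffic across the service's tasks via a target group
    "loadBalancers": [{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                          "targetgroup/thumbnail-tg/0123456789abcdef",
        "containerName": "thumbnailer",
        "containerPort": 8080,
    }],
}

# With AWS credentials configured:
#   import boto3
#   boto3.client("ecs").create_service(**create_service_request)
print(create_service_request["serviceName"])
```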


An Amazon ECS Cluster is a logical grouping of tasks or services. Clusters are AWS region specific and can contain tasks using both the Fargate and EC2 launch types.

AWS Fargate – the Good, Bad & Ugly

Good & Bad: Pay Per Use

Fargate is a good choice if you are leaving a lot of computing power and memory footprint unused. Unlike server-based ECS or EKS, you only pay for the compute time and memory that you actually use. Fargate also integrates well with other AWS services, allowing you to schedule tasks, run them based on events, and automatically fade them out when not in use.

While Fargate provides an opportunity to cut costs by charging you only for the time your containers run, the average per-hour cost of Fargate is higher than the per-hour cost of running ECS or EKS, in spite of the major price reduction of January 2019 – proving once again that there is no free lunch. The cost differential is the price you pay for not having to deal with the complexity of managing infrastructure, or invest time and resources in the cluster management that comes with the traditional solutions.

As a result, the onus is on you to make the right choice based on the size of your workload, availability of skilled resources to manage and secure clusters, etc.

Good: Low Complexity

With its Container-as-a-Service model, you don’t have to worry about the underlying infrastructure needed to deploy your containers, how to optimize their usage, or how to secure them. Instead, your focus reduces to the four walls of your container – its size, power, and communication with the outside world, a.k.a. memory, CPU, and networking.

Good: Better Security

Since you don’t have to worry about securing the entire cluster of servers, your security concern is reduced to security within the container, the roles required to run your application, the ports that must be opened for the application that is running inside the container to communicate with the outside world, etc.

Good: Faster Development

As the problems of systems management are alleviated, developers spend less time on operational issues and can focus on solving business problems and building services.

Good: Scaling

As Fargate is serverless, scaling is taken care of by the provider seamlessly. As a result, you do not have to consider the number of concurrent requests you can handle. Having said that, if you integrate Fargate with downstream server-based solutions, you should expect an increase in load on those components when your services running on Fargate scale out significantly.

Bad: Limited Availability

While AWS is rolling out Fargate to as many regions as they can, it is not as available as Lambdas, ECS or EKS. As of April 2019, Fargate is not available in GovCloud, Sao Paulo, Paris, Stockholm, Japan, and China.

Behavioral Design Patterns for AWS Fargate

Behavioral patterns provide solutions for better interaction between components, fostering loose coupling while providing the flexibility to extend those components easily and independently of each other.

In this section, we will explore three behavioral design patterns for AWS Fargate – the Container-on-Demand, Scaling-Container, and Sidecar-Assembly patterns – that allow Fargate to be used just like Lambda for heavy on-demand tasks where Lambda is not suitable, or let you run containers traditionally but without having to manage infrastructure. Additionally, we will explore how to attach sidecar containers to a parent container to provide supporting features for the application.

We will use the Container-on-Demand pattern to build an on-demand video thumbnail service that generates thumbnail images from video files. With this pattern, you can spin up containers on demand and decommission them immediately after the task runs.

We will use the Scaling-Container pattern to build an auto-scaling service that computes the value of coins thrown on a table from an image. With this pattern, a small footprint is always running, scaling up or down as processing demands.

Later we will explore the Sidecar-Assembly pattern to deploy components of an application into a separate container to provide isolation and encapsulation.

Container-on-Demand Pattern

Context & Problem

AWS Lambda lets you run functions as a service. This allows you to build applications as a conglomeration of serverless microservices that react to events, giving you easy deployment, automatic scaling, and fault tolerance without developing those core capabilities yourself. But Lambda has many resource limitations, and in general it is not efficient for long-running jobs.

For instance, these are the current limitations on Lambda (as of April 2019):

  • The default deployment package size is 50 MB.
  • Memory ranges from 128 to 3008 MB.
  • Maximum execution timeout for a function is 15 minutes.
  • Request and response (synchronous call) body payload size can be up to 6 MB.
  • Event request (asynchronous call) body can be up to 128 KB.

These are severe limitations for several types of applications, including machine learning models where the size of the libraries goes well above the maximum unzipped deployment package size of 250 MB, or where a batch may take longer than 15 minutes to run.

As a result, it is not possible to run large workloads or long-running processes on Lambda. Further, the resource limitation on the size of the software package restricts the type of workloads you can run. For instance, if you have a machine learning model that requires large libraries such as Scikit-learn, NumPy, etc., it is impossible to fit the software package into a Lambda deployment.


Solution

Deploy your software package in a container as a Fargate Task and invoke the task using a Lambda. The Fargate Task is started from a dormant state; once the process is complete and the output is written to the output repository, the Task is automatically stopped. As a result, you pay only for the time the Task is running. Additionally, you can preconfigure the size of the task (vCPU, memory, and environment variables for passing parameters to the job) or override it on every invocation.

Container on Demand Pattern

The entry point into the container can be as trivial as a shell script or as complex as a web service. The point to note is that the job submitted to the Fargate Task in this pattern should be asynchronous. As a result, large software packages running large workloads can be run using this pattern.
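The on-demand invocation can be sketched as the request payload for boto3’s `ecs.run_task` call, with per-invocation parameters passed as environment variable overrides. Cluster, subnet, and variable names are hypothetical:

```python
# Sketch of an on-demand Fargate task invocation as a boto3 request payload.
# Cluster, subnet, and environment variable names are hypothetical.
run_task_request = {
    "cluster": "demo-cluster",
    "launchType": "FARGATE",
    "taskDefinition": "thumbnail-generator",
    "count": 1,
    "networkConfiguration": {
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "assignPublicIp": "ENABLED",
        }
    },
    # Per-invocation override: pass job parameters to the container
    "overrides": {
        "containerOverrides": [{
            "name": "thumbnailer",
            "environment": [
                {"name": "INPUT_KEY", "value": "video/clip.mp4"},
                {"name": "POSITION", "value": "00:00:05"},
            ],
        }]
    },
}

# With AWS credentials configured:
#   import boto3
#   boto3.client("ecs").run_task(**run_task_request)
print(run_task_request["count"])
```

The task starts, processes the job named in its environment, writes its output, and stops on its own – so the invoker never waits on it.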

Pattern Components

  • Input Repository – The input for your Processor is stored here and should be reachable by the processor. This could be an S3-based object store or a database. Ideally, this repository should notify the task invoker when a new object is uploaded or updated.
  • Task Invoker – A short-running function that is used to invoke your Processor. This could be a Lambda function or a synchronous service running as part of another larger process chain.
  • Processor – A long-running task that is the core of the pattern. It is invoked by the Task Invoker. This could be a Fargate Task that reads its input from the Input Repository, processes it and writes back the output to the Output Repository. The Fargate task can be configured to use one or more containers (with a maximum of 10).
  • Output Repository – Results of the Fargate Task are stored here. Again, this could be an S3 store or a database and could be optionally configured to emit events on inserts and updates.


While this pattern puts Lambda on steroids, Fargate has its own resource limitations due to its serverless nature. For instance, the number of tasks using the Fargate launch type, per region, per account, cannot exceed 50 (a default limit), and the maximum container storage for tasks using the Fargate launch type cannot exceed 10 GB.

If you think your workloads will breach these limitations, you should seriously consider AWS EMR or AWS Glue for your solution’s tech stack.

Container-on-Demand Pattern – Example

Tom Thumb – A Video Thumbnail Generator Task

Tom Thumb is a video thumbnail generator task. It is implemented following the Container-on-Demand pattern. In typical usage, a user uploads a video file to an S3 bucket. A trigger is set on the S3 bucket to notify a Lambda function in the event of a file upload to the video folder in the bucket. The Lambda is deployed with a Python code to extract the name of the video file from the Lambda notification event and invoke a Fargate task. The Fargate task consists of one container that uses FFmpeg application to decode the video and freeze an image at a given position in the video. The frozen image is written to a pre-configured folder in an S3 bucket.
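A minimal sketch of the Lambda handler’s event parsing follows. The S3 trigger delivers a notification event; the handler pulls out the bucket and the uploaded video’s key before starting the Fargate task (the `run_task` call is elided here, and the bucket and key names are hypothetical):

```python
# Sketch of the thumbnail-generator Lambda handler's S3 event parsing.
def handler(event, context=None):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]     # e.g. the uploaded video file
    # boto3.client("ecs").run_task(...) would be invoked here, passing
    # bucket/key to the container via environment overrides.
    return {"bucket": bucket, "key": key}

# A stripped-down S3 notification event with hypothetical names:
sample_event = {"Records": [{"s3": {
    "bucket": {"name": "video-uploads"},
    "object": {"key": "video/clip.mp4"},
}}]}
print(handler(sample_event))  # → {'bucket': 'video-uploads', 'key': 'video/clip.mp4'}
```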

Code Repository

All code examples, prerequisites, and instructions are available in the companion Git at tom-thumb subproject.

Scaling Container Pattern

Context & Problem

In the problem section of the Container-on-Demand pattern, we discussed how the limitations on long-running processes rule out Lambda for heavy asynchronous workloads. We therefore use the Container-on-Demand pattern to overcome Lambda’s execution time limit of 15 minutes.

While the Container-on-Demand pattern solves this issue, for synchronous web services that do execute within those limits, the main limitations are the size of the deployment package, networking, and the languages supported by Lambda.

As of this writing in April 2019, AWS Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby code. Most recently, AWS Lambda provides a Runtime API which allows you to use additional programming languages to author your functions. While the concept of bringing your own runtime is radical, it is not straightforward, as can be seen from this author’s experiment here.

How do we run synchronous services where the size of the deployment package exceeds the Lambda limits?

While Lambda Layers mitigate some of these issues by allowing artifacts to be shared between Lambdas, they introduce their own set of issues, especially around testing Lambdas locally – and layers still count towards the 250 MB hard limit on the unzipped deployment package size.

What if you want to run always-on services that can scale on-demand?

Note that the Container-on-Demand pattern spins up a task to execute the job and then spins it down. For asynchronous workloads, the spin-up time is not an issue. But for synchronous web services, time is dear.


Solution

A possible solution is to use a Fargate Service fronted by an Application Load Balancer:

  • Deploy your service in a Fargate Task.
  • Open ports for two-way communication in the Task and Container.
  • Create an ECS Service to wrap around the Fargate Task.
  • Attach an Application Load Balancer in front of the Fargate Service.
  • Register an auto-scaling target with rules on when to scale your service out and when to scale it in.
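The last step above can be sketched as the request payloads for the Application Auto Scaling API (boto3’s `register_scalable_target` and `put_scaling_policy`). The cluster and service names are hypothetical; the target-tracking policy shown scales on average CPU utilization:

```python
# Sketch of auto-scaling registration for an ECS/Fargate service as
# boto3 application-autoscaling request payloads. Names are hypothetical.
scalable_target = {
    "ServiceNamespace": "ecs",
    "ResourceId": "service/demo-cluster/bean-counter",
    "ScalableDimension": "ecs:service:DesiredCount",
    "MinCapacity": 1,     # keep a small footprint always running
    "MaxCapacity": 10,    # scale out up to 10 tasks under load
}

scaling_policy = {
    "PolicyName": "cpu-target-tracking",
    "ServiceNamespace": "ecs",
    "ResourceId": "service/demo-cluster/bean-counter",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 60.0,   # aim for ~60% average CPU across tasks
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
}

# With AWS credentials configured:
#   import boto3
#   aas = boto3.client("application-autoscaling")
#   aas.register_scalable_target(**scalable_target)
#   aas.put_scaling_policy(**scaling_policy)
print(scalable_target["ResourceId"])
```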
Scaling Container Pattern

Pattern Components

  • Fargate Task – A Fargate task that has its ports open for two-way communication using one or more containers (within a maximum limit of ten containers).
  • ECS Service – An ECS service that uses the Fargate Task from above identifying the desired count of tasks that must be run at any given time.
  • Application Load Balancer – An Application Load Balancer with a listener to forward requests to the ECS Service.
  • API Gateway – An optional API gateway configured to forward requests to the application load balancer.
  • Web Interface – An optional browser-based interface for allowing users to post requests to the service. This could be a simple HTML form.

Scaling Container Pattern – Example

Bean-counter – A Coin-counter Service

Bean Counter is a coin counter service. It analyzes an image of coins and returns the total value of the coins in the image. It works only on US Mint-issued coins and does not recognize any denomination above a quarter. It also assumes that the picture contains a quarter, which is used to calibrate the size of the coins. It is implemented following the Scaling-Container pattern. In typical usage, a user navigates to the URL of the ALB in the browser and submits the location of the image file containing the picture of the coins. The Bean-Counter service then invokes the Fargate Task and returns the response to the browser.

Code Repository

All code examples, prerequisites and instructions are available in the companion Git at bean-counter subproject.

Sidecar Assembly Pattern


Context & Problem

Services require orthogonal technical capabilities, such as monitoring, logging, configuration, and networking services. While the components encapsulating these orthogonal capabilities can be integrated into the main service, doing so leaves the main service exposed to their vagaries: the components are not well isolated, and an outage in one of them can affect other components or the entire service. They also usually need to be implemented in the same language as the parent service. As a result, the component and the main service are closely interdependent.

One option is to deploy these orthogonal components as separate services allowing each component to have its own life-cycle and be built using different languages. While this gives more flexibility, deploying these features as separate services can add latency to the application.


Solution

Co-deploy the orthogonal components along with the main service by placing them in their own containers. Containers in a task are co-deployed on the same host, so communication between them does not significantly affect the latency of the service. As a result of this co-deployment, the sidecar and the main service can access the same resources, which allows the sidecar to monitor the system resources used by both itself and the primary service.
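In Fargate terms, a sidecar is simply a second container in the same task definition. The sketch below pairs a hypothetical web container with a log-shipping sidecar; both share the task’s lifecycle and network namespace (the image names are illustrative, not real repositories):

```python
# Sketch of a Fargate task definition with a primary container and a
# log-shipping sidecar. Family and image names are hypothetical.
sidecar_task_definition = {
    "family": "web-with-log-sidecar",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "512",
    "memory": "1024",
    "containerDefinitions": [
        {   # The primary service container
            "name": "web",
            "image": "example/web:latest",
            "essential": True,            # if this stops, the task stops
            "portMappings": [{"containerPort": 8080}],
        },
        {   # The sidecar: created and retired alongside the parent
            "name": "log-shipper",
            "image": "example/log-shipper:latest",
            "essential": False,           # an outage here won't kill the task
        },
    ],
}
print([c["name"] for c in sidecar_task_definition["containerDefinitions"]])
```

Marking the sidecar non-essential captures the isolation argument above: a failure in the supporting component does not take the main service down with it.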

Side Car Assembly Pattern

This pattern can also enable applications to be composed of heterogeneous components and services, with capabilities beyond what the individual services provide – reinforcing, in essence, that the whole is greater than the sum of its parts. The sidecar shares the same lifecycle as the parent application: it is created and retired alongside the parent.


Each application is unique, solving different needs based on business requirements. If the task of infrastructure management is too onerous, and/or you only want to pay for your computing time, then Fargate may be the right choice for you.

On the other hand, if you need greater control of the network resources or have large container workloads with consistent demand throughout the day, then it warrants maintaining a cluster of servers to run ECS or EKS. With the latter choice, you can use reserved or spot instances to offset your cost.

Scenarios where Fargate is most Beneficial

Fargate can be used with any type of containerized application. However, this doesn’t mean that you will get the same benefit in every scenario. Fargate is most beneficial for projects that need to reduce the time from ideation to realization, such as proofs-of-concept, as well as for well-designed, decoupled, microservice-based architectures deployed in production environments.

Applications can consist of a mix of Fargate & Lambda to exploit the Serverless model.

Use Lambdas for small & tight services with low memory (<3GB) and small request-response cycles (<15 mins).

Use containers deployed on Fargate for:

  • Existing legacy services that cannot be trivially refactored.
  • Applications written in languages not supported by Lambda.
  • Services that need large libraries that cannot fit into a Lambda profile (Quantlib, Scikit, etc.).
  • Cases where you need more control over networking, computing horsepower, or memory.
  • Use cases that require a long in-process runtime.

Scenarios where Fargate may not be the Best Choice

  • When you require greater control of your EC2 instances to support networking, or COTS applications that require broader customization options, use ECS or Kubernetes without Fargate.
  • When you want a fast request-response cycle time, Lambda may be the better choice. This is especially true if you are using large container images written in object-heavy languages such as Java or Scala that require significant initialization time to start the JVM and bootstrap objects.
  • By breaking your application down into smaller modules that fit into Lambdas, and by using Layers and Step Functions, you can reap the benefits of serverless architectures while paying only for your compute time.

Blockchain & Smart Contracts Demystified

Overview of Blockchain

In its simplest form, a block in a blockchain is a registry of transactions. A transaction could be as simple as storing a piece of data – the balance of your account, or Party A sending money to Party B – or something as complex as an insurance payoff. All the while, the transactions are encrypted and digitally signed using cryptographic keys.

If it helps, think of a block as a page in a file folder, with each transaction written down on a line of the page. Certain critical aspects of your transactions, such as account numbers, are not written in plain text; rather, a cipher translates these critical pieces of information into cryptic text that only someone with knowledge of the key can understand.

As and when a block is complete (the page is used up), it is verified by multiple parties (auditors – or miners, in blockchain parlance) to ensure the pre-agreed method was used to record the transactions. Once a majority of parties have verified it, a consensus is formed and the ‘block’ is added to the ‘chain’ (in our example, the page is added to the file folder), creating a blockchain. As the number of transactions grows, so does the blockchain.

It should be noted that once a block is validated and added to the blockchain, the data (or transactions) in the blockchain cannot be modified. In an open blockchain network, anyone can write to the blockchain provided that they pay a small toll for using the infrastructure. Additionally, anyone can validate these transactions by contributing to the infrastructure. Such validators, also called miners, are compensated for their efforts. This very nature of blockchain – immutable data, untrusted parties collaborating and transacting with confidence, and non-retractable proof of action – is what makes blockchain an amenable medium to serve as a System-of-Record for public records.

As a result, blockchain technology can be used when geographically-dispersed, untrusted parties want to share data or exchange value. In the world of finance, this lack of trust is called counterparty risk: the risk to each party of a transaction that the counterparty will not live up to its contractual obligations.

To summarize, Blockchain technology helps in creating permanent, non-repudiable & read-only records of transactions across blockchain nodes. Each node is a database of blocks linked together in a way that new blocks are added to existing blocks only after the new block is validated and cryptographically signed. This creates a Chain of Trust.
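The chain of trust described above can be sketched in a few lines of Python. This is a simplified model – real blockchains add timestamps, Merkle trees, and digital signatures – but it shows how each block’s hash covers the previous block’s hash, making committed data effectively immutable:

```python
import hashlib
import json

def make_block(transactions, prev_hash):
    """Build a block whose hash covers both its own payload and the
    previous block's hash -- the link that forms the 'chain'."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["Party A pays Party B 5 units"], prev_hash="0" * 64)
block2 = make_block(["Party B pays Party C 2 units"], prev_hash=genesis["hash"])

# Tampering with the genesis block's transactions changes its hash, so the
# prev_hash recorded in block2 no longer matches -- the chain exposes the edit.
tampered = make_block(["Party A pays Party B 500 units"], prev_hash="0" * 64)
print(tampered["hash"] == block2["prev_hash"])  # False: the link is broken
```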

Blockchain – It Is Different From Your Bank’s Database

We established that a blockchain is a shared database consisting of a ledger of transactions. So is your bank’s database. But this is where the similarities end. Unlike a bank’s database which is centrally controlled and owned, a public blockchain is open to the entire world where anyone can verify the transactions in the ledger or operate the account provided they have the right credentials.

This decentralization of responsibility away from a single entity (such as the bank in our example) also eliminates custodial risk – the risk that the institution controlling the asset will flip over, taking your assets (or the records thereof) down with it.

While on this topic, let us understand what a blockchain node is and the concept of mining. Each copy of the blockchain is called a node. A node usually runs on one computer, physical or virtual. To achieve decentralization, the blockchain network consists of multiple nodes, each holding the same copy of the confirmed blocks as the rest of the nodes in the network. While each node’s blocks may be copies, the transactions in those blocks are independently verified by each and every node in the network through a process called mining (more on mining later).

This duplication of data across all the nodes of the blockchain has a positive side effect: it makes the data in the blockchain extremely fault-tolerant. Even if multiple nodes in the network fail, taking their systems down with them, the data is still intact on other nodes. Due to the global nature of a blockchain network, the possibility of the entire network failing is very small. Needless to say, this built-in redundancy not only reduces the likelihood of failure but also makes the blockchain network resistant to hacking.

Blockchain – Reducing the Cost of Conducting Transactions

In contrast to today’s systems and processes, blockchain allows you to metaphorically digitize your assets and make them liquid. This in turn eliminates the need for authorities to certify and clear transactions. Additionally, transactions which used to take days – due to search, verification, and attestation by third parties – can be completed in minutes thanks to the elimination of such actors. This disintermediation reduces not only the time but also the cost involved in such transactions.

Blockchain – The Inner Sanctum

Now that we have a high-level overview of what a blockchain is, let us dig deeper to understand how the blockchain works.

The blockchain technology is built upon the following fundamental concepts:

Cryptography – Public & Private Key Cryptography

One of the key facets of blockchain technology is creating trust between third parties, which requires a secure digital identity for reference between the parties. Public-Private-Key (PPK) cryptography provides this digital reference. In PPK cryptography, every party is given a private and public key pair. These key pairs are one of a kind: data encrypted with the public key can only be decrypted with the matching private key. Therefore, the private key is kept confidential, while the public key is shared by the party.

When a sender wants to send something of value to a recipient, they use the recipient’s public key to encrypt the data and pass the encrypted data to the recipient. The recipient then uses their private key to decrypt and consume the data. As the data is encrypted, only the recipient can decipher it; if the data is corrupted in transit, no one – including the recipient – will be able to read it.
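The public/private key relationship can be demonstrated with the classic textbook RSA example (primes 61 and 53). This is a deliberately tiny, insecure toy – real systems use 2048-bit keys and padded library implementations – but the asymmetry is the same: anyone can encrypt with the public key, only the private key holder can decrypt (requires Python 3.8+ for the modular inverse via `pow`):

```python
# Toy RSA: insecure, illustrative only. Real keys are ~2048 bits.
p, q = 61, 53
n = p * q                      # 3233, the modulus shared by both keys
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

message = 65                   # a message encoded as a number < n
ciphertext = pow(message, e, n)     # encrypt with the PUBLIC key (e, n)
recovered = pow(ciphertext, d, n)   # decrypt with the PRIVATE key (d, n)

print(ciphertext, recovered)   # → 2790 65
```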

Decentralized Network – The Key to Non-Repudiation

While PPK provides a strong basis for establishing identity, it would all be for naught if transactions could not be validated. This is where the blockchain network, and the size of that network, plays an important role. To draw an analogy: when a meteor burns through the earth’s atmosphere, thousands of telescopes around the earth capture the moment, witnessed by astronomers, weather agencies, and hobbyists alike. Their recordings – the very act of witnessing the event – provide visual evidence that the action took place. With thousands of computers on the blockchain network validating every transaction, blockchain uses mathematical formulas instead of telescopes. While the number of nodes in the network may fluctuate slightly, it is increasing steadily; for example, there were 11,660 reachable nodes on average in the Bitcoin network over the last year.

Combining the cryptographic keys discussed above with the massive blockchain network provides an irrefutable method of assurance. When one party uses their private key to sign a transaction and attaches the counterparty’s public key, a block containing a digital signature, timestamp, and other relevant information is created. This block is then broadcast to all the nodes in the blockchain network, where each node validates the transaction through a mathematical process called mining. Once a majority of the nodes have validated the transaction, a consensus is reached and the block is added to the chain.

Mining – Incentive for Verifying Transactions

There is no such thing as a free lunch, and this is also true in the world of blockchain. The blockchain protocol compensates miners for the opportunity cost of validating transactions in the blockchain’s native currency (Bitcoin in the Bitcoin network, Ether in the Ethereum network). This process is called mining. Akin to mining for gold, where there is only a finite amount of gold in the earth, a blockchain’s currency is artificially restricted by the protocol to top off at a certain point. In the Bitcoin network, this cap is 21 million bitcoins, and the current reward for mining a block is 12.5 bitcoins, equivalent to approximately $110,000 as of this writing. As a result, there is massive interest in mining, and any node in the blockchain network can establish itself as a mining node and start validating transactions to earn the proverbial gold.

Mining – Process Explained

The process of mining consists of solving a mathematical puzzle. While the puzzle’s difficulty and method may vary across blockchains, the consistent theme is to combine the data in the block with a random number – the nonce – and pass the result through a hash function until it produces a value within a certain target range. In the Bitcoin network, the nonce can range from 0 to 4,294,967,296. To find it, the mining node has to guess random numbers until one of them, combined with the data to be validated, produces a valid hash.

As all the mining nodes work independently on this puzzle, a race ensues. The first miner to arrive at a valid nonce announces the result to the rest of the network and pockets the winnings, leaving the rest of the miners to verify the victor’s work and try their luck with the next block. While the rewards sound attractive, the cost of mining must factor in the huge computing power and the massive cost of electricity it consumes.
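The puzzle-and-verify asymmetry can be sketched with a simplified proof-of-work: finding a nonce takes many hash attempts, but checking the winner’s answer takes exactly one. (Bitcoin’s real target is a 256-bit threshold; leading zero hex digits stand in for it here, and the block data is a made-up transaction string.)

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Try successive nonces until hashing them with the block data yields a
    digest beginning with `difficulty` zero hex digits (a simplified target)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

winning_nonce = mine("Party A pays Party B 5 units")

# Verification is cheap: any node can re-hash once to check the winner's work.
valid = hashlib.sha256(
    f"Party A pays Party B 5 units{winning_nonce}".encode()
).hexdigest().startswith("0000")
print(valid)  # → True
```

Raising `difficulty` by one multiplies the expected search work by sixteen while leaving verification cost unchanged – which is exactly what makes cheating expensive and auditing cheap.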

Types of Blockchain

The most popular blockchain network in use today is the Bitcoin network: a public network consisting of a cluster of blockchain nodes brought together to exchange value. As a result, anyone can use the network to send or receive value (money). Due to the resiliency, immutability, and provenance of assets that the blockchain behind public Bitcoin brought to the fore, ideas soon emerged to exploit blockchain technology as a distributed ledger in place of the ledgers financial institutions have to maintain. By using a distributed digital ledger, these institutions surmised that they could reduce the cost of having to prove their customers’ provenance of assets from one institution to another. This gave rise to the concept of consortium networks: blockchain networks owned and operated by a handful of permissioned entities. By contrast, private networks are closed networks used within an organization to solve auditing and security problems and to prevent fraud, without the need for cryptocurrencies.

Following are the properties of these various types of blockchain networks:

  • Public – No single owner; anyone can join the network to send and receive value (e.g., the Bitcoin network).
  • Consortium – Owned and operated by a handful of permissioned entities; used as shared distributed ledgers between institutions.
  • Private – Closed networks used within a single organization for auditing, security, and fraud prevention, without the need for cryptocurrencies.

What is Blockchain good for?

Global System of Record

From its advent, blockchain has quickly established itself as a medium for information registry, data distribution, and source-of-truth (or system-of-record). Following are a few of the main use cases for blockchain technology.

Information Registry

Its capability to act as an immutable system-of-record can be leveraged to store:

  • Unencrypted data to promote transparency and trust across business networks.
  • Encrypted data to exchange value between counterparties so that participants can access data that they are authorized to view.

Digital Identity

With its use of public and private key pairs, blockchain technology can be used to establish digital identity and express consent to digital interactions. The public key identifies an entity, and the private key is used by the entity to express consent for transactions. Combined with the ability to constrain network participation in a permissioned blockchain, organizations and public-sector agencies can use blockchain to exchange sensitive data while complying with data regulations such as PCI and HIPAA.
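As a toy illustration of the sign-with-private-key, verify-with-public-key idea, here is textbook RSA with tiny primes. This is for illustration only; it is not secure and not what production blockchains actually use:

```python
import hashlib

# Toy key pair: real systems use large, randomly generated primes.
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent -- the public "identity"
d = pow(e, -1, phi)          # private exponent -- kept secret, expresses consent

def sign(message: str) -> int:
    digest = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: str, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n
    return pow(signature, e, n) == digest

sig = sign("transfer 5 coins to Bob")
```

Anyone holding the public pair (n, e) can check the signature, while only the holder of d could have produced it.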

Distributed Ledgers

The blockchain is an ideal platform for Distributed Ledgers because of its use of consensus. Simply put, consensus is a set of rules with which a majority of participants working toward a common cause can agree on an outcome. The outcome could be the value of an entity or the outcome of a transaction. This simple mechanism is leveraged in the blockchain domain to create Distributed Ledgers so that a majority of the nodes can agree on the value of a transaction or data item. Due to blockchain’s tolerance for failure, the consensus mechanisms used in blockchains allow distributed ledgers to function even when some of the nodes fail.

There are various consensus mechanisms that can be used in a blockchain network. For instance:

Proof-of-Work – To validate transactions, miners have to solve complex mathematical puzzles which consume considerable computing power and vast amounts of electricity. To bend the system to their needs, malevolent miners would have to take over a majority (51%) of the network’s mining power, which may cost them more than the transactions they are validating are worth. As a result, proof-of-work is well-suited to a public network where trust is in short supply.

Proof-of-Stake – To validate transactions, validators (miners are called validators in the PoS model) have to pitch in a certain amount of digital assets in an escrow to participate. If the validators do not follow a strict protocol for validation they lose their assets in the escrow thereby reducing the incentive for malfeasance.

Multi-signature – In a multi-sig scenario, a majority of stakeholders have to digitally sign-off for a transaction to occur or to validate a transaction.

Practical Byzantine Fault Tolerance – Practical Byzantine Fault Tolerance, or PBFT, is a way to validate data and achieve consensus using a protocol that allows a certain number of validators to go rogue while still arriving at a consensus beneficial to all the parties involved.
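The common thread across these mechanisms is agreement by a sufficient majority. As a toy model (not the actual three-phase PBFT protocol), a PBFT-flavored rule accepts a value only when enough validators agree to outvote the worst-case number of rogue nodes:

```python
from collections import Counter

def reach_consensus(votes, max_faulty):
    """Toy PBFT-style rule: tolerating up to `max_faulty` rogue validators,
    accept a value only if at least 2 * max_faulty + 1 validators agree."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count >= 2 * max_faulty + 1 else None

outcome = reach_consensus(["tx-ok", "tx-ok", "tx-ok", "tx-bad"], max_faulty=1)
```

With one faulty validator tolerated, three matching votes out of four are enough to commit; a four-way split yields no consensus.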

Smart Contracts

In a vanilla transaction, there are at most two parties: one at the sending end and another at the receiving end. What if you want multiple parties to be involved in a transaction? What if you want transactions to be executed based on certain logic? This is where Smart Contracts come in.

A Smart Contract is a method of digitizing the defining elements of a contract and linking it securely to other stakeholders to assure issuance of value. The stakeholders could be internal systems, external systems or other entities. It is stored on a blockchain and is executed as part of a transaction. Rules can be codified to be self-executed or self-enforced based on the passage of time or a change of state. The purpose of a smart contract is to reduce the friction, cost, and delays associated with executing traditional contracts. While a standard contract outlines the terms of a transaction and is usually only enforceable by law, a smart contract enforces the terms encoded in the contract when a certain event occurs. As a matter of fact, the enforcement happens at the protocol level by all the nodes participating in the blockchain network.

Smart Contracts are usually written in a language called Solidity. Solidity is statically typed and supports inheritance, libraries, and complex user-defined types. The language syntax is similar to JavaScript.

Following is a sample Smart Contract that stores a piece of data. It only allows the owner of the contract to change an item of value, whereas anyone can see how much value is stored in the contract. Additionally, only the owner can tear down the contract.

pragma solidity ^0.4.0;

contract Register {
    address creator;
    uint256 myNumber;
    bytes32 name;

    // Restricts a function to the contract owner
    modifier onlyowner() {
        if (msg.sender == creator) {
            _;
        }
    }

    event ValueChanged(address indexed user, uint256 indexed oldNumber, uint256 indexed newNumber);

    // Constructor (Solidity 0.4 style): the deployer becomes the owner
    function Register(bytes32 _name, uint256 _seed) public {
        creator = msg.sender;
        name = _name;
        myNumber = _seed;
    }

    function getCreator() public constant returns (address) {
        return creator;
    }

    function getName() public constant returns (bytes32) {
        return name;
    }

    function getMyNumber() public constant returns (uint256) {
        return myNumber;
    }

    function setMyNumber(uint256 myNewNumber) public onlyowner {
        // Emit the event before updating so oldNumber carries the previous value
        ValueChanged(msg.sender, myNumber, myNewNumber);
        myNumber = myNewNumber;
    }

    // Only the owner can tear down the contract
    function kill() public {
        if (msg.sender == creator) {
            selfdestruct(creator);
        }
    }
}

While a smart contract can be fully autonomous in a world where all the assets it manages (currency, payments, obligations, property titles, and licenses) are digitally represented, we are not yet at the age where such data and its provenance are fully digitized. In the absence of such digital representation, smart contracts use special nodes called Oracles that act as an interface to the external world and react to it.


Ethereum is a version of blockchain that allows Smart Contracts to be run in a trusted environment, unlike the Bitcoin blockchain, which only allows cryptocurrency transactions. To allow Smart Contracts to be executed, the Ethereum blockchain hosts a virtual machine called the Ethereum Virtual Machine (EVM) on each of its nodes. The EVM allows code to be verified and executed on the blockchain, providing guarantees that it will run the same way on every node.

The main Ethereum public blockchain is called MainNet. Ethereum can also be downloaded and run locally for development. While the local network will be sufficient for development purposes, for testing Smart Contracts, the following public test networks are available:

Ropsten – Ropsten is the official test network from the Ethereum Foundation, and it’s a clone of the MainNet in that it uses the same consensus protocol as the MainNet.

Rinkeby – Rinkeby is a test network that uses a consensus method called Proof of Authority. With this consensus mechanism, transactions are validated by a set of predetermined nodes to control the supply of ether and provide a consistent response time.

Kovan – Similar to Rinkeby, Kovan also uses a Proof-of-Authority consensus mechanism.

Decentralized Applications

Decentralized Applications (also called DApps) are applications that use Smart Contracts at their core to hold the logic and control the processing. DApps usually use JavaScript, HTML & CSS in the presentation tier, with the Smart Contracts deployed on the Ethereum blockchain.

The Ethereum White Paper splits DApps into three types: financial applications that manage money by using Ether, such as money exchange applications where users send ether to each other; semi-financial applications where money is involved in addition to other data, such as insurance applications that pay out a certain amount of ether as the policy matures; and applications in the other category, which includes voting and governance systems.

Lifecycle of a Transaction in a Decentralized Application

Let us take a closer look at a lifecycle of a transaction in a DApp.

  1. A transaction is digitally signed and the process begins.

  2. The block containing the transaction is sent through miners before it is stored in a blockchain. The mining process ensures that the transaction is legitimate.

  3. Once the transaction is verified by the mining nodes, the transactions in the block are broadcast to other nodes in the network.

  4. When the majority of the nodes verify that the transaction is legitimate (arrive at a consensus), the block containing the transaction is stored in the blockchain.

  5. As soon as the block is stored in the blockchain an event is triggered and the recipient receives the result of the transaction.

Advantages of Decentralized Applications

The key to the growth and adoption of blockchain is the decentralized nature of its network and the assurance it provides in minimal trust relationships. More specifically:

  1. Decentralization reduces the likelihood of failures because of the redundancy built in through the duplication of data. As a result, the downtime on networks like the MainNet is effectively zero.

  2. Again, due to this resiliency and the duplication of data among the nodes, a blockchain is not as prone to hacking as traditional databases.

  3. Because transactions using Smart Contracts are signed using asymmetric cryptography, it is impossible for parties involved in a transaction to repudiate it.


With the proliferation of various blockchain networks, there is sometimes a need to send value from one blockchain network to another; for instance, to send data from one private blockchain to another for faster processing, or to make use of a certain service or contract that is only present in the other blockchain. This is where sidechains come into the picture.

When sidechains are enabled in a primary blockchain network, a range of addresses is dedicated to the sidechain. When a certain value has to be sent to the secondary blockchain (through the sidechain), it is sent to a preconfigured address in the primary blockchain within this sidechain address range. Once the value is sent to this address, it is out of the control of the primary blockchain. This is called an immobilization transaction. Once the immobilization transaction is confirmed, a message is sent to the secondary blockchain with proof that the value was transferred. The secondary blockchain creates an equivalent value in its denomination and provides the address. This equivalent value can then be used in the secondary blockchain to conduct transactions.

This logic is symmetric in the sense that, when the transaction in the secondary blockchain is complete, it can transfer the value back to the original blockchain (now its sidechain) and the process continues in the primary blockchain from where it left off.
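The mechanics above can be modeled as a toy two-way peg. Balances are plain dictionaries and the escrow address name is made up for illustration; a real implementation exchanges cryptographic proofs between chains rather than trusting a function call:

```python
def peg_transfer(primary: dict, secondary: dict, account: str, amount: int) -> None:
    """Immobilize `amount` on the primary chain, then mint the
    equivalent value on the secondary chain."""
    if primary.get(account, 0) < amount:
        raise ValueError("insufficient balance on primary chain")
    # Immobilization transaction: value moves to the sidechain escrow address
    primary[account] -= amount
    primary["sidechain_escrow"] = primary.get("sidechain_escrow", 0) + amount
    # Secondary chain mints the equivalent value in its own denomination
    secondary[account] = secondary.get(account, 0) + amount

primary = {"alice": 10}
secondary = {}
peg_transfer(primary, secondary, "alice", 4)
```

Returning value to the primary chain is the same operation with the roles of the two chains swapped.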

Barriers to Adoption

While blockchain offers improved data quality and accuracy and better services through enhanced efficiencies, its adoption is not widespread outside of the cryptocurrency domain and a few tech-savvy financial institutions. Following are some of the major challenges to the widespread adoption of blockchain technology.

Lack of Knowledge & Skill – Though many have heard of Bitcoin and may have used it, they may not understand how it works or what the underlying blockchain technology is capable of, much less its development ecosystem.

Cost of Change – Even as more organizations learn about the potential benefits of blockchain and its capabilities, the internal resistance to change and the costs associated with the change are major barriers to adoption. Additionally, the blockchain projects currently underway are mostly IT-driven rather than business-driven. As businesses become aware of the benefits of adopting blockchain and are able to justify the cost of the shift in technology, adoption should improve.

Regulatory Environment – One of the main challenges to wide adoption of blockchain technology is regulation. In many cases, the current regulatory environment is slowing down the adoption of the technology because it will disrupt existing services and institutions. As the regulatory environment catches up with the innovation, more products and services will start using blockchain technology.

As some of the benefits promised by blockchain are highly disruptive, a detailed cost-benefit analysis must be performed to determine whether the potential benefits outweigh the cost of the change.

Additionally, given the early stages of the blockchain and smart contract eco-system, a thorough feasibility analysis must be performed with a proof-of-concept to determine the viability of the technology for a particular use case.

Emerging Services Disrupting the Status Quo by using Blockchain

The following are a few services that use blockchain technology as their core enabler.

Storj – Storj.io is a blockchain-based, end-to-end encrypted, decentralized cloud storage platform. Unlike Dropbox and similar cloud-based storage platforms, which manage the servers on which your data is stored, Storj takes your files, shreds them into smaller pieces, encrypts each piece, and stores the pieces in a consortium blockchain network. You can subsidize your cost of storage by joining the consortium and leasing your computer and hard-disk space to the network.

ShapeShift – Most cryptocurrency trading platforms require users to store their cryptocurrency metadata, such as keys and addresses, with the platform. ShapeShift has created a cryptocurrency trading platform that allows you to trade one blockchain asset for another without the custodial risk of storing account information.

Akasha – Akasha promotes freedom of expression, access to information and other basic fundamental human rights by encoding them into smart contracts and providing a social-networking platform to use it.

Balanc3 – Balanc3 has built a blockchain-based accounting platform that bridges the traditional accounting world to the blockchain technology. It provides utilities to monitor blockchain transactions and transform the data and events to traditional accounting formats for reporting and compliance.

Ujo – Ujo is looking to transform the music industry supply chain by connecting artists to listeners without the need for middlemen. It uses blockchain to store music and smart contracts to allow singers and songwriters to trade songs and albums directly with their consumers.


Over the ages, businesses have had to overcome different types of friction to differentiate themselves, create value, and forge ahead. Innovations from the invention of the wheel to steam engines, assembly lines, and the internet have helped overcome geographical divides and other inefficiencies. In spite of this, today’s organizations have to deal with imperfect information tied up in silos, the cost of retrieving it, and the need to secure the channels through which that information is exchanged. This adds to the cost of and delay in processing information while increasing the touchpoints in the information supply chain. Add the bureaucracy and red tape businesses have to cross to innovate, and the regulations they have to satisfy, and the friction multiplies manifold.

The invention of blockchain comes at the right time, reducing these frictions among untrusting parties, increasing collaboration, and promoting consensus among them, all while removing intermediaries and reducing the overall cost of and delays in business processes. By promoting visibility, blockchain has the potential to transform the business landscape.

SSL – Self-signed Certificates in Certificate Chain

Of late, I am working on a standard-issue Windows 10 laptop that is locked down and uses a self-signed SSL certificate.

Now, Git doesn’t play well with self-signed certificates, and whenever I try to clone a repository I get the following error:

fatal: unable to access 'https://github.com/joemccann/dillinger.git/': SSL certificate problem: self signed certificate in certificate chain

As disabling SSL certificate verification just to silence the error is bad practice, you are better off installing the root certificates; this should resolve the problem. If not, you can follow one of these steps to get around the problem temporarily.

Environment Variable: Type the following on a single line:

env GIT_SSL_NO_VERIFY=true git clone <Your .git URL>

Git Configuration Change: Change the Git configuration to bypass SSL verification:

git config --global http.sslVerify false

Warning: The best solution is to install valid root certificates. Only use these solutions in a bind as a temporary measure. Do not use this solution in Production environments.

How to Determine Mac OS version from Command Line

There are a few ways to determine what version of OSX you are on.

In the GUI, you can click on the Apple menu () at the top left of your screen and choose About This Mac. The version of OS X will be printed underneath the large bold macOS title. Clicking on the Version XYZ text will reveal the build number.

If you’re writing a script though, you may wish to access this programmatically. To do this, type:

$ system_profiler SPSoftwareDataType

    System Software Overview:

      System Version: macOS 10.12.6 (16G29)
      Kernel Version: Darwin 16.7.0
      Boot Volume: Macintosh HD
      Boot Mode: Normal
      Computer Name: Srini’s MacBook Pro
      User Name: Srini Karlekar (skarlekar)
      Secure Virtual Memory: Enabled
      System Integrity Protection: Enabled
      Time since boot: 21 days 8:30
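If your script is in Python rather than the shell, the standard library exposes the same version string directly (it returns an empty string on non-Mac platforms):

```python
import platform

# mac_ver() returns (release, versioninfo, machine); release is e.g. "10.12.6"
version, _versioninfo, machine = platform.mac_ver()
print(version or "not a Mac")
```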


Troubleshooting python-jose Installation


The JavaScript Object Signing and Encryption (JOSE) technologies – JSON Web Signature (JWS), JSON Web Encryption (JWE), JSON Web Key (JWK), and JSON Web Algorithms (JWA) – can collectively be used to encrypt and/or sign content using a variety of algorithms.
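To illustrate what these standards do, here is a minimal HS256 JWS (signed token) built with only the Python standard library; python-jose wraps this and much more behind a proper API, so treat this purely as a sketch of the format:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JOSE uses base64url encoding with the '=' padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_hs256(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

token = sign_hs256({"sub": "user-123"}, b"shared-secret")
```

The result is the familiar three-part header.payload.signature token that JWS defines.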


The python-jose package is a Python implementation of these standards and can be installed from PyPI:

$ pip install python-jose


When I tried to install python-jose on my Mac running macOS Sierra (10.12.6), I got a RuntimeError: autoconf error.

To get around this issue, I had to run the following command.

$ xcode-select --install

This command pops up a window asking for your permission to install the Xcode command-line developer tools. Once the compiler install went through, the pip install worked without issue.

Full details of error below:

(newpython3) srini-macbookpro:aws-auth0-auth skarlekar$ (newpython3) srini-macbookpro:aws-auth0-auth skarlekar$ pip install python-jose The directory ‘/Users/skarlekar/Library/Caches/pip/http’ or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo’s -H flag. The directory ‘/Users/skarlekar/Library/Caches/pip’ or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo’s -H flag. Collecting python-jose
Downloading python_jose-1.4.0-py2.py3-none-any.whl Collecting ecdsa<1.0 (from python-jose) Downloading ecdsa-0.13-py2.py3-none-any.whl (86kB) 100% |████████████████████████████████| 92kB 3.6MB/s Collecting six<2.0 (from python-jose) Downloading six-1.11.0-py2.py3-none-any.whl Collecting future<1.0 (from python-jose) Downloading future-0.16.0.tar.gz (824kB) 100% |████████████████████████████████| 829kB 941kB/s Collecting pycrypto<2.7.0,>=2.6.0 (from python-jose) Downloading pycrypto-2.6.1.tar.gz (446kB) 100% |████████████████████████████████| 450kB 1.6MB/s Installing collected packages: ecdsa, six, future, pycrypto, python-jose
Running setup.py install for future … done Running setup.py install for pycrypto … error Complete output from command /Users/skarlekar/anaconda/envs/newpython3/bin/python -u -c “import setuptools, tokenize;file=’/private/var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-build-spygzevz/pycrypto/setup.py’;f=getattr(tokenize, ‘open’, open)(file);code=f.read().replace(‘\r\n’, ‘\n’);f.close();exec(compile(code, file, ‘exec’))” install –record /var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-y3p2qipv-record/install-record.txt –single-version-externally-managed –compile: running install running build running build_py creating build creating build/lib.macosx-10.7-x86_64-3.6 creating build/lib.macosx-10.7-x86_64-3.6/Crypto copying lib/Crypto/init.py -> build/lib.macosx-10.7-x86_64-3.6/Crypto copying lib/Crypto/pct_warnings.py -> build/lib.macosx-10.7-x86_64-3.6/Crypto creating build/lib.macosx-10.7-x86_64-3.6/Crypto/Hash copying lib/Crypto/Hash/init.py -> build/lib.macosx-10.7-x86_64-3.6/Crypto/Hash copying lib/Crypto/Hash/hashalgo.py -> build/lib.macosx-10.7-x86_64-3.6/Crypto/Hash … … (snipped for brevity) … Skipping optional fixer: buffer Skipping optional fixer: idioms Skipping optional fixer: set_literal Skipping optional fixer: ws_comma running build_ext running build_configure checking for gcc… gcc checking whether the C compiler works… yes checking for C compiler default output file name… a.out checking for suffix of executables… checking whether we are cross compiling… configure: error: in /private/var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-build-spygzevz/pycrypto':
configure: error: cannot run C compiled programs.
If you meant to cross compile, use
–host’. See `config.log’ for more details Traceback (most recent call last): File ““, line 1, in File “/private/var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-build-spygzevz/pycrypto/setup.py”, line 456, in core.setup(**kw) File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/core.py”, line 148, in setup dist.run_commands() File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/dist.py”, line 955, in run_commands self.run_command(cmd) File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/dist.py”, line 974, in run_command cmd_obj.run() File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/site-packages/setuptools/command/install.py”, line 61, in run return orig.install.run(self) File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/command/install.py”, line 545, in run self.run_command(‘build’) File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/cmd.py”, line 313, in run_command self.distribution.run_command(command) File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/dist.py”, line 974, in run_command cmd_obj.run() File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/command/build.py”, line 135, in run self.run_command(cmd_name) File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/cmd.py”, line 313, in run_command self.distribution.run_command(command) File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/dist.py”, line 974, in run_command cmd_obj.run() File “/private/var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-build-spygzevz/pycrypto/setup.py”, line 251, in run self.run_command(cmd_name) File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/cmd.py”, line 313, in run_command self.distribution.run_command(command) File “/Users/skarlekar/anaconda/envs/newpython3/lib/python3.6/distutils/dist.py”, line 974, in run_command cmd_obj.run() File 
“/private/var/folders/zh/54fz82bn0tb4cgv0bdjcffqw0000gn/T/pip-build-spygzevz/pycrypto/setup.py”, line 278, in run raise RuntimeError(“autoconf error”) RuntimeError: autoconf error

AWS Ethical Hacking Tutorial


I found this awesome, brisk, one-hour, self-paced tutorial on finding and avoiding security gotchas in your AWS setup. I learned a lot going through it and highly recommend it for your AWS teams. Go to: flaws


Through a series of levels you’ll learn about common mistakes and gotchas when using Amazon Web Services (AWS). The author (Scott Piper) provides a series of hints that teach you how to discover the information you’ll need to advance to the next level. If you don’t want to actually run any commands, you can just keep following the hints, which will give you the solution to the next level. At the start of each level, you’ll learn how to avoid the problem the previous level exhibited.

Lesson learned

It is common to give people and entities read-only permissions, such as the SecurityAudit policy. However, the ability to read your own and others’ IAM policies can really help an attacker figure out what exists in your environment and look for weaknesses and mistakes.

Avoiding this mistake

Don’t hand out any permissions liberally, even permissions that only let you read metadata or discover what your own permissions are.

Managing FaaS Services Deployed Across Different Cloud Providers


Sometimes you have to pick the best-of-breed solution for different needs. This is true among the services provided by the different cloud providers as well. For instance, when it comes to cognitive services, Google, Amazon, and Microsoft rule the roost. Even among them, Google does natural-language translation, landmark recognition, text extraction from images, and content-based search best; Amazon is the leader in facial recognition; and I found Microsoft’s image labeling the best of the breed.

No longer do you have to settle for one cloud provider. With frameworks such as the Serverless Framework, you can develop services across various providers and deploy and manage them in a cloud-agnostic fashion. From a single environment, you can develop, test, and deploy to most of the big cloud providers, and react to cross-cloud events, without having to worry about each provider’s idiosyncrasies.


This is a simple tutorial to demonstrate how to deploy multiple services on different cloud providers using the Serverless Framework.

More specifically, this tutorial walks you through deploying an image detection service on Google Cloud Platform (GCP) and managing it using a proxy service running on Amazon Web Services. The services on both platforms are 100% serverless.

The image detection service running on GCP uses Google’s FaaS solution viz., Cloud Functions and the proxy running on AWS uses Amazon’s FaaS solution viz., Lambda.

In a typical scenario, you will use a service such as this to detect the contents of an image uploaded to a S3 bucket and take appropriate actions based on the result. For instance, you could use it to blur/reject the image based on the vulgarity or get the image labels and chain it to other services that will translate the labels to multiple languages to cater to your customer needs.

Code Repository can be found here.


Setup Amazon AWS

  1. Sign into your AWS account or sign-up for one.

  2. Setup your AWS credentials by following the instructions from here.

Install node.js and Serverless framework

The Serverless Framework is a node.js application. To use the framework and run this tutorial you need to install node.js. Follow the instructions on the Serverless website to install both node.js and the Serverless Framework.

Ensure your Serverless framework is operational using the following:

$ serverless --version

Testing your Serverless Setup

Now that you have set up AWS, it is time to test your Serverless setup by creating a mock function with the Serverless Framework.

Create a test directory. In the test directory, create a Lambda function from the default template as follows:

$ mkdir sls-tester
$ cd sls-tester
$ sls create --template aws-python --name sls-test

This should create two files in the current directory: serverless.yml and handler.py.

The serverless.yml declares a sample service and a function. The handler.py returns a message stating that your function executed successfully.

To deploy the function, simply type:

$ sls deploy --verbose

This should deploy the function. The verbose option provides extra information.

To test your function, type:

$ sls invoke --function hello

If you get the following message, your Serverless setup is working.

    {
        "body": "{\"input\": {}, \"message\": \"Go Serverless v1.0! Your function executed successfully!\"}",
        "statusCode": 200
    }

To check the logs for your function, type:

$ sls logs -f hello

To keep a continuous check of the logs for your function, type:

$ sls logs -f hello -t

Setup Google Cloud

  1. Sign up for a new Google account at http://accounts.google.com. If you already have an account you can skip this step.
  2. Sign up for a Google Cloud trial at http://console.cloud.google.com/start. If you already have Google Cloud privileges on your Google account, you can skip this step.
  3. Create a new project and call it serverless-project (or a name of your choice).
  4. Select Credentials in API & Services section of the Google Cloud console.
  5. Under Create Credentials, create a new Service Account Key. Download the JSON key file to a secure place as you will need that in subsequent steps.
  6. In the API & Services dashboard, enable Google Cloud Vision API, Google Cloud Functions API, Google Cloud Deployment Manager API, Google Cloud Storage & Stackdriver Logging.

Image Detector


The gcp-label-image service will be deployed on GCP. It is a node.js-based service that takes an image URL passed through the HTTP request and sends it to Google Vision to detect the contents of the image, returning a list of tags describing the content of the image.

The image URL should be passed as an HTTP parameter named imageUri. If this parameter is missing the service uses a default image to detect and return the contents.

Deploying the Image Detector Service

  1. Location: Go to the gcp-label-image subdirectory in the folder where you cloned the Git repository.
  2. Project: Replace the your-gcp-project-id in the serverless.yml file with your Google Cloud project id.
  3. Credentials: Replace the /path/to/your/gcp/credentials/json in the serverless.yml file with the path to the JSON credentials that you saved in the GCP setup.
  4. Deploy: In the service home directory, run the following command to deploy the detectLabel service on GCP. Make a note of the endpoint created. This endpoint is a URL that will end with detect as shown below.
    $ sls deploy --verbose
    Deployed functions
  5. Verify: You can check your Google Cloud Functions dashboard to ensure that your Cloud Function is deployed.
  6. Invoke@theTerminal: Invoke the function detectLabel as follows:
    $ sls invoke -f detectLabel
    Serverless: ekvy90t28px8 Image Results: landmark historic site sky tourist attraction ancient rome ancient history ancient roman architecture medieval architecture night building
  7. Invoke@theBrowser: Copy and paste the URL from the result of your sls deploy into the browser and add the imageUri parameter.

Image Detector Proxy


The aws-gcp-proxy service will be deployed on AWS. It is a Python-based service that takes an image URL passed through the HTTP request and sends it to the Cloud Function deployed on GCP.


The image URL should be passed as an HTTP parameter named imageUri. If this parameter is missing the service uses a default image URL to detect and return the contents.
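A minimal sketch of such a proxy handler might look like this. The handler signature follows AWS Lambda’s Python convention; the GFUNC_URL environment variable comes from this tutorial’s setEnv.sh, and the default image URL is a placeholder assumption:

```python
import json
import os
import urllib.parse
import urllib.request

DEFAULT_IMAGE = "https://example.com/sample.jpg"  # hypothetical default image URL

def build_url(gfunc_url: str, image_uri: str) -> str:
    # Forward the imageUri parameter to the Cloud Function running on GCP
    return gfunc_url + "?" + urllib.parse.urlencode({"imageUri": image_uri})

def handler(event, context):
    params = event.get("queryStringParameters") or {}
    image_uri = params.get("imageUri", DEFAULT_IMAGE)
    url = build_url(os.environ["GFUNC_URL"], image_uri)
    with urllib.request.urlopen(url) as resp:
        labels = resp.read().decode()
    return {"statusCode": 200, "body": json.dumps({"labels": labels})}
```

The API Gateway endpoint invokes handler, which simply relays the query to the GCP function and returns its labels.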

Deploying the Image Detector Proxy Service

  1. Location: Go to the aws-gcp-proxy subdirectory in the folder where you cloned the Git repository.
  2. Environment Variable: Edit the setEnv.sh file to update the GFUNC_URL variable to point to your image detector service running on GCP.
  3. Deploy: In the service home directory, run the following command to deploy the proxy service. Make a note of the AWS Gateway endpoint created. You will use this endpoint to test your service.
    $ sls deploy -v
    GET - https://urmileagemaydiffer.execute-api.us-east-1.amazonaws.com/dev/detect
  4. Verify: You can check your AWS Lambda dashboard to ensure that the Lambda function was created and the environment variable is being passed.
  5. Invoke: Copy and paste the AWS Gateway API URL into the browser and append the imageUri parameter.


The Serverless Framework makes it painless to deploy services across multiple cloud providers without having to deal with each provider's idiosyncrasies, letting you focus on your application. Additionally, the framework lets you use the right provider for the right service, cuts the time spent on deployment, and lets you manage code and infrastructure across multiple providers.

Authenticating to GitHub using SSH

You can use SSH to connect and authenticate to GitHub. This allows you to check in your artifacts to GitHub from the CLI without having to provide your username and password during every git push. Here are some notes on how to enable SSH for GitHub.

Generating a new SSH key

Open Terminal.

Paste the text below, substituting in your GitHub email address.

ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

This creates a new ssh key, using the provided email as a label.

Generating public/private rsa key pair.

When you’re prompted to “Enter a file in which to save the key,” press Enter. This accepts the default file location.

 Enter a file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]

At the prompt, type a secure passphrase.

Enter passphrase (empty for no passphrase): [Type a passphrase]
Enter same passphrase again: [Type passphrase again]

Adding your SSH key to the ssh-agent

Before adding a new SSH key to the ssh-agent to manage your keys, you should have checked for existing SSH keys and generated a new SSH key. When adding your SSH key to the agent, use the default macOS ssh-add command, and not an application installed by macports, homebrew, or some other external source.

Start the ssh-agent in the background.

eval "$(ssh-agent -s)"
Agent pid 4356

If you’re using macOS Sierra 10.12.2 or later, you will need to modify your ~/.ssh/config file to automatically load keys into the ssh-agent and store passphrases in your keychain.

Host *
 AddKeysToAgent yes
 UseKeychain yes
 IdentityFile ~/.ssh/id_rsa

Add your SSH private key to the ssh-agent and store your passphrase in the keychain. If you created your key with a different name, or if you are adding an existing key that has a different name, replace id_rsa in the command with the name of your private key file.

$ ssh-add -K ~/.ssh/id_rsa

Add the SSH key to your GitHub account

Copy the SSH key to your clipboard.

If your SSH key file has a different name than the example code, modify the filename to match your current setup. When copying your key, don’t add any newlines or whitespace.

$ pbcopy < ~/.ssh/id_rsa.pub
# Copies the contents of the id_rsa.pub file to your clipboard

Tip: If pbcopy isn’t working, you can locate the hidden .ssh folder, open the file in your favorite text editor, and copy it to your clipboard.

Now log in to your GitHub account in a browser. In the upper-right corner of any page, click your profile photo, then click Settings.

In the user settings sidebar, click SSH and GPG keys.

Click New SSH key or Add SSH key.

In the Title field, add a descriptive label for the new key. For example, if you’re using a Mac, you might call this key “My Mac”. Paste your key into the “Key” field.

Click Add SSH key.

If prompted, confirm your GitHub password.

Switching remote URLs from HTTPS to SSH

Open Terminal.

Change the current working directory to your local project.

List your existing remotes in order to get the name of the remote you want to change.

git remote -v
origin  https://github.com/USERNAME/REPOSITORY.git (fetch)
origin  https://github.com/USERNAME/REPOSITORY.git (push)

Change your remote’s URL from HTTPS to SSH with the git remote set-url command.

git remote set-url origin git@github.com:USERNAME/REPOSITORY.git

Verify that the remote URL has changed.

git remote -v
# Verify new remote URL
origin  git@github.com:USERNAME/REPOSITORY.git (fetch)
origin  git@github.com:USERNAME/REPOSITORY.git (push)

What is the AWS Shared Responsibility Model?

Security and compliance in the cloud are a shared responsibility between AWS and its customers. Whereas Amazon is responsible for “the security OF the cloud”, the customer is responsible for “the security IN the cloud”.

Security of the Cloud – AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and physical security at the facilities that run the AWS Cloud services.

Security in the Cloud – The customer’s responsibility is determined by the AWS Cloud services that the customer selects. This translates to the amount of configuration work the customer must perform as part of their security responsibilities. For example, services such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and Amazon S3 are categorized as Infrastructure as a Service (IaaS) and, as such, require the customer to perform all of the necessary security configuration and management tasks. If a customer deploys an Amazon EC2 instance, they are responsible for management of the guest operating system including updates and security patches, any application software or utilities installed by the customer on the instance, and the configuration of the AWS-provided firewall (called a security group) on each instance.

This short animation drives the message home succinctly.

The Rise of Dumb Servers

Webtasks is taking Microservices and Containers to the next level. For the uninitiated, a Webtask is a bundle of backend application logic (think Microservices) along with the secrets (think data) necessary for its execution. It is cryptographically protected from tampering and disclosure and can be safely stored or passed through untrusted channels. Hence, Webtasks can be stored in GitHub. Now all you need is a cookie-cutter shell-of-a-container to run any type of Webtask. Given the power of client devices (yes, your smartphones), this upends the client-server model – a phenomenal change in my mind. Are Webtasks set to usher in the age of Dumb Servers?! Share your comments below.

Follow this link for a great write-up on how it works from the fantastic folks who brought us Auth0: https://webtask.io/docs/how