FullStack Labs


Cache Variables with Global Context in AWS Lambdas

Lambdas and best practices for creating caches, microservices, or monorepo applications with Node.js

Written by 
Carlos Guadir
,
Software Engineer

As you may know, Lambdas are pieces of code executed by a runtime in a cloud provider as a fully managed service, with no server (virtual machine) instance for you to administer. Some examples of these cloud services are AWS Lambda, Google Cloud Functions, and Azure Functions. Commonly, these services are used to create APIs that expose REST endpoints, though there are other use cases such as WebSockets and cron jobs. In this short article, I'll focus on REST endpoints.

Common Lambda flow steps for services

A Lambda function executes procedural code in four steps:

  1. Receive (handle) a request from the client. This step is not managed by your code implementation. It includes accepting a specific HTTP method (GET, POST, PUT, DELETE) and receiving headers and data (query params, body, Base64-encoded objects, etc.).
  2. Prepare the Lambda. This part handles external or dynamic resources. Some examples are database connections, resources required by the function such as observability services (Datadog, Sentry, Split, etc.), and anything else the app needs according to its requirements. This is the step this article focuses on, because loading external resources means opening connections, loading configurations, creating instances, and whatever else the Lambda needs to work properly.
  3. Process the request. This is the main controller that executes an action with the received request. Here it's possible to reuse optimizations implemented in the previous step.
  4. Return a response. After executing the actions, the Lambda responds with the relevant data and a status code. This is the final step, where the Lambda finishes the process and returns a result to a client such as a mobile device, browser, or IoT device. In patterns like microservices, it's possible to chain other steps with other services.

// 1. Handle or receive the request
export const handler = async ( event, context ) => {
    // 2. Prepare lambda resources
    // TODO Await external resources
    // TODO Await data sources/observability

    // 3. Process the request
    const { params } = JSON.parse( event.body )

    // 4. Return the response
    return {
        statusCode: 200,
        headers: {},
        body: JSON.stringify({
            message: 'Delicious data',
            data: { params }
        })
    }
}

So, what happens if you have a database connection and observability services in the second (preparation) step? If those resources consume execution time on every request, that time is added to every response.

Let me tell you about my experience with it. My team had a database connection and a GraphQL server built with Apollo Server, and for every request we re-created the database connection and rebuilt the Apollo Server instance. We did that to gain some flexibility, but in doing so we overlooked an important feature of the AWS Lambda execution environment life cycle. In summary, when a request finishes, the runtime freezes the execution environment and waits for a new invocation, keeping instances stored in the global context so they can be reused on subsequent requests. This means that adding a check that skips creating a new instance when one already exists in the cache optimizes the function when it is invoked again.


let connection: Database

const databaseConnection = new Database({ 
   /* Environment variables */ 
})

export const handler = async ( event, context ) => {
    // Heavy stuff like the database connection runs on every invocation
    // TODO Optimize this code
    connection = await databaseConnection.connect()

    const { where } = JSON.parse( event.body )

    return {
        statusCode: 200,
        headers: {}, // Don't forget headers
        body: JSON.stringify({
            message: 'Delicious data from my database',
            data: await connection.findAll( {
                where
            } )
        } )
    }
}

We found that Lambdas have a short life cycle before the runtime freezes code execution, so it's possible to cache some objects in the global context.

The execution environment stays alive for a while (roughly 15–40 minutes in our experience) and is torn down if no new requests arrive. During that window, you can reuse database connections and other instances. In our use case, this cut the response time by roughly 50% for every warm request.
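This warm-start behavior is easy to observe with a tiny sketch (a hypothetical example, not our production code): a counter declared in the global context keeps its value across invocations as long as the execution environment stays warm, and only resets on a cold start.

```javascript
// Module scope: initialized once per execution environment (cold start),
// then kept in memory while the environment stays warm.
let invocationCount = 0

// In a real Lambda this would be `export const handler = ...`
const handler = async ( event, context ) => {
    invocationCount += 1 // grows across warm invocations, resets on cold start
    return {
        statusCode: 200,
        body: JSON.stringify({ invocationCount })
    }
}
```

On a warm environment, consecutive invocations see the counter grow (1, 2, 3, ...); once the environment is recycled, it starts again at 1.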

Finally, a better version of the Lambda above could look like the next code example. The connection variable is a global variable inside the Lambda context. The handler checks whether the variable has already been assigned a connection instance and only creates a new connection if it is not defined. With this logic, the await on databaseConnection.connect() only affects the first invocation; subsequent invocations reuse the connection and improve the Lambda's response time.


let connection: Database

const databaseConnection = new Database({ 
   /* Environment variables */ 
})

export const handler = async ( event, context ) => {
    if ( !connection ) {
        connection = await databaseConnection.connect()
    }

    const { where } = JSON.parse( event.body )

    return {
        statusCode: 200,
        headers: {}, // Don't forget headers
        body: JSON.stringify({
            message: 'Delicious data from my database',
            data: await connection.findAll( {
                where
            } )
        } )
    }
}
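The same check generalizes to any expensive resource. Below is a minimal sketch of a reusable lazy-initialization helper; the once name and the fake connection factory are illustrative assumptions, not part of the original code:

```javascript
// Memoize an async factory: the expensive work runs only on the first call
// (the cold start) and the cached result is reused afterwards.
const once = ( factory ) => {
    let cached
    return async () => {
        if ( !cached ) {
            cached = factory() // cache the promise so concurrent calls share it
        }
        return cached
    }
}

// Illustrative stand-in for a real database connection factory.
let connectCalls = 0
const getConnection = once( async () => {
    connectCalls += 1
    return { id: connectCalls }
} )

const handler = async ( event, context ) => {
    const connection = await getConnection() // cheap after the first invocation
    return {
        statusCode: 200,
        body: JSON.stringify({ connectionId: connection.id })
    }
}
```

Caching the promise (rather than the resolved value) also means that two calls racing during the same invocation share one pending connection instead of opening two.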

Conclusion

This approach is very close to the singleton pattern. With it, the heavy work happens only during the Lambda's cold start; on subsequent requests, the expensive resources are already cached in the runtime's memory. The response time drops to a minimum and you are ready for a production release. That's all, folks. Thank you for reading!


Carlos Guadir is a mid-level software engineer at FullStack Labs. He holds a Bachelor's degree in Systems Engineering and has more than five years of experience working in development environments. His main areas of interest include JavaScript, TypeScript, Node.js, and cloud computing with AWS.
