AWS increases Lambda RAM limits to 10GB

As part of its end-of-year re:Invent announcements, AWS has just made public that it is pushing the limits of serverless one step further.

Written on December 10, 2020

AWS just announced on its blog [1] that it has increased the maximum RAM allocation of its cloud functions, AWS Lambdas, to 10 GB. Traditionally, cloud functions are meant to be lightweight, quick-to-spawn compute resources with very limited capacity, and most cloud providers cap their offering at around 1 CPU / 2 GB of RAM per function. This new feature shows AWS's ambition to make Lambda a first-class choice for compute in an ever wider range of applications. We might ask whether other cloud providers will follow, but in recent years AWS has shown more willingness to invest in cloud functions than its competitors (Firecracker [2] is a good example of this).
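For reference, the new ceiling is exposed through the same memory setting Lambda has always had, so raising an existing function to 10 GB is a one-line configuration change. A minimal sketch with boto3 (the function name below is purely hypothetical):

```python
# Sketch: raising an existing function's memory allocation to the new
# 10 GB ceiling via the standard Lambda configuration API (boto3).
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="buzz-worker",  # hypothetical function name
    MemorySize=10240,            # new maximum: 10 GB, expressed in MB
)
```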

The allocation capacity of these new large AWS Lambdas still needs to be benchmarked, but this is great news for the ongoing work on Buzz, the serverless query engine developed by Cloudfuse. It means more flexibility for resource allocation and a higher off-the-shelf limit. It remains to be investigated whether larger workers really benefit the most frequent workloads executed by Buzz: one of the strengths of its architecture is to spread the query workload across the fleet of machines hosting cloud functions, thus avoiding the saturation of critical resources such as S3 bandwidth.
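As a first step for that benchmark, one could deploy a trivial handler that greedily allocates memory and reports how much of the configured 10 GB is actually usable and how fast it can be committed. A minimal sketch, with nothing Buzz-specific and all names illustrative:

```python
# Sketch of a memory-allocation probe for a 10 GB Lambda.
# Deployed as a plain Lambda handler; the 9,500 MB cap is a deliberate
# safety margin, since exceeding the configured limit kills the runtime
# rather than raising a Python MemoryError.
import time

CHUNK_MB = 128  # allocate in 128 MB increments


def handler(event, context):
    chunks = []
    allocated_mb = 0
    start = time.monotonic()
    # bytearray(n) zero-fills, which actually touches the pages.
    while allocated_mb < 9_500:
        chunks.append(bytearray(CHUNK_MB * 1024 * 1024))
        allocated_mb += CHUNK_MB
    elapsed = time.monotonic() - start
    return {
        "allocated_mb": allocated_mb,
        "seconds": round(elapsed, 2),
        "mb_per_second": round(allocated_mb / elapsed, 1) if elapsed else None,
    }
```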

If you have any insights or wish to work with us on this incredible new opportunity, just get in touch!