Just curious why Jets sets the default memory allocation to 1536 MB per function. I was reviewing my AWS Lambda bill today and realized that I could save money by reducing the memory allocation per function. A quick spot-check of my common requests shows that actual memory used is below 200 MB per invocation.
I’m thinking if I reduce the memory allocation to, say, 256 MB per function, it’ll cut the compute-time portion of my bill by more than 80%. Am I missing anything here?
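For the back-of-the-envelope math: Lambda compute charges scale with GB-seconds (allocated memory × duration), so dropping from 1536 MB to 256 MB is a 6x reduction if duration stays the same. A rough sketch (the per-GB-second price below is illustrative; check current AWS pricing, and note it assumes duration doesn’t change at the lower memory):

```ruby
# Rough Lambda compute-cost comparison: cost scales with GB-seconds.
# price_per_gb_second is an illustrative figure, not current AWS pricing.
price_per_gb_second = 0.0000166667

compute_cost = lambda do |memory_mb, duration_s, invocations|
  (memory_mb / 1024.0) * duration_s * invocations * price_per_gb_second
end

# Optimistic assumption: same 200ms duration at both sizes
cost_1536 = compute_cost.call(1536, 0.2, 1_000_000)
cost_256  = compute_cost.call(256,  0.2, 1_000_000)

savings = 1 - cost_256 / cost_1536
puts format("savings: %.0f%%", savings * 100)  # ~83%, since 256/1536 = 1/6
```

The catch, as discussed below, is that duration usually doesn’t stay the same when you shrink the memory.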
There’s some history here. When the 1536 MB default was chosen, Lambda did not yet officially support Ruby. At the lower 256 MB of RAM, the cold-start penalty was pretty noticeable: I think it was 4-8 seconds, because it took a while to download and load the Ruby interpreter into memory.
On Lambda, the CPU power is proportional to RAM allocation. So the larger the RAM, the more CPU juice you get. Chris Munns talked about this in his re:Invent presentation and also covers it here https://aws.amazon.com/blogs/compute/serverless-reinvent-2017/
In testing, I found that around 1536 MB reduced the cold start to 1-2 seconds, presumably because the faster CPU loads the interpreter more quickly.
That’s in the past now. Thankfully, Ruby has been officially supported for a while.
However, sometimes increasing the RAM can surprisingly save money. With lower memory, compute takes longer (since the CPU is weaker), and Lambda charges for both memory and duration (GB-seconds). So it can work out that the shorter duration more than offsets the larger allocation, and the function costs less even though you’ve allocated more RAM.
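To make that tradeoff concrete, here’s a sketch with made-up durations for a hypothetical CPU-bound handler (the price figure is illustrative, not current AWS pricing):

```ruby
# Lambda bills memory * duration (GB-seconds), so a faster run at higher
# memory can be cheaper per invocation. Durations below are hypothetical.
price_per_gb_second = 0.0000166667

invocation_cost = lambda do |memory_mb, duration_s|
  (memory_mb / 1024.0) * duration_s * price_per_gb_second
end

low  = invocation_cost.call(256, 2.0)   # weak CPU, long duration
high = invocation_cost.call(1024, 0.4)  # 4x the memory, but 5x faster here
puts high < low  # the bigger function is cheaper per call in this scenario
```

The only way to know where your functions land is to measure actual duration at a few memory sizes.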
I knew there would be a backstory! Thanks @tung for the quick and thorough reply
I’m gonna try setting the app-wide memory to 512 MB and see how it impacts performance based on average execution time. I’m starting to exceed the AWS free tier and hoping this will help!
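For anyone else trying this, my understanding is that the app-wide setting goes in `config/application.rb` via `config.function.memory_size` (please double-check against the Jets docs for your version):

```ruby
# config/application.rb
# Assumed Jets app-wide Lambda function settings; verify against Jets docs.
Jets.application.configure do
  config.function.memory_size = 512
end
```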