

This solution worked out for us too! Thanks for the detailed explanation, research, and trial and error. I realize I'm just repeating it, but the advice is sound: doubling the amount of memory decreased the runtime by about half. Currently I have the function set to 2 GB of memory and I haven't had any memory issues; eventually the garbage collector kicks in and the function levels off at around 800-900 MB of usage. Since you get more CPU with larger memory sizes in Lambda, it actually doesn't seem to increase the cost in the long run.
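For anyone who wants to make the same change, the memory setting is a single configuration update. A minimal sketch using the AWS CLI; the function name and the 2 GB value are placeholders, not taken from the thread:

```sh
# Bump the Lambda memory allocation (CPU scales with it).
aws lambda update-function-configuration \
  --function-name image-resizer \
  --memory-size 2048
```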

I was able to figure out the issue, but I don't think the answer is going to be satisfying to everyone. For context, I'm running an image resizer & converter that uses sharp on Node 12.x in Lambda, at about 4k invocations an hour; the handler is an `(event) => new Promise((resolve, reject) => ...)` style function. With the function at 512 MB, a typical invocation reported: Billed Duration: 19900 ms, Memory Size: 512 MB, Max Memory Used: 464 MB. To fix the issue I just increased the amount of memory for the Lambda function to over 1 GB.

I've been using sharp with Lambda in a production environment for over a year, and this issue is related to the Lambda runtime's behavior. As you may know, Lambda can reuse containers and freezes them between invocations, and there are no background-task capabilities, including GC: GC can only run during an invocation, so the chance of V8's GC actually executing is quite low. Lambda allocates CPU power linearly in proportion to the amount of memory configured, so allocating 2x the memory gives roughly 2x the processing speed and there won't be a big cost difference. It also gives better function stability, since a new container will be used before the old one reaches its memory limit. So IMHO it's better to use a larger memory configuration, especially if you have to handle very high-resolution images. Also, increasing VIPS_DISC_THRESHOLD might help if you are having memory issues.
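For reference, here is a minimal sketch of the kind of resize-and-convert handler described above. The event shape (a base64-encoded image body), the target width, and the WebP output are assumptions for illustration, not the original code:

```js
const sharp = require('sharp');

// Assumed entry point: an API Gateway-style event carrying the image
// as a base64-encoded body.
exports.handler = async (event) => {
  const input = Buffer.from(event.body, 'base64');

  // Resize to an illustrative width and convert to WebP.
  const output = await sharp(input)
    .resize(1024)
    .webp()
    .toBuffer();

  return {
    statusCode: 200,
    isBase64Encoded: true,
    headers: { 'Content-Type': 'image/webp' },
    body: output.toString('base64'),
  };
};
```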
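On the VIPS_DISC_THRESHOLD suggestion: it is an environment variable read by libvips (which sharp wraps), so it can be set directly in the Lambda configuration. A sketch using the AWS CLI; the function name and the threshold value are placeholders, not recommendations:

```sh
aws lambda update-function-configuration \
  --function-name image-resizer \
  --environment "Variables={VIPS_DISC_THRESHOLD=256m}"
```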
