In SeBS, we initially focused on Python and Node.js, and not all benchmarks have a Node.js implementation. Support for C++ (#145) and Java (#223) is pending. However, we want to provide at least microbenchmarks in many other languages:
We should be able to benchmark the impact of choosing alternative runtimes.
How to support new and custom runtimes?
- AWS: custom runtimes are supported by default via the Lambda Runtime API.
- Azure: we can implement custom handlers that communicate with the Functions host over HTTP.
- GCP: Google Cloud Functions 2nd gen is built on Cloud Run, so we could run custom images (Docker only) on Cloud Run and then create the triggers ourselves.
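The custom-handler approach above amounts to shipping a small web server that the platform forwards invocations to. A minimal sketch in Python, assuming the Azure custom-handler convention where the Functions host passes the listening port in the `FUNCTIONS_CUSTOMHANDLER_PORT` environment variable; the `handle_invocation` benchmark logic here is a hypothetical placeholder:

```python
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


def handle_invocation(payload: bytes) -> dict:
    """Placeholder benchmark logic: echo the input size (hypothetical)."""
    return {"input_bytes": len(payload)}


class InvocationHandler(BaseHTTPRequestHandler):
    """Receives forwarded invocations from the Functions host as POST requests."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        result = handle_invocation(body)
        response = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(response)))
        self.end_headers()
        self.wfile.write(response)


if __name__ == "__main__":
    # Azure's host sets FUNCTIONS_CUSTOMHANDLER_PORT; fall back to 8080 locally.
    port = int(os.environ.get("FUNCTIONS_CUSTOMHANDLER_PORT", 8080))
    HTTPServer(("0.0.0.0", port), InvocationHandler).serve_forever()
```

The same server shape would also work as a Cloud Run container entrypoint (where the port arrives in `PORT` instead), which is what makes the GCP route above plausible for arbitrary runtimes.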