src/_posts/languages/java/2000-01-01-start.md (7 additions, 7 deletions)
@@ -114,13 +114,13 @@
faster dependency resolution. However neither the `mvn` executable nor the
By default, the `-Xmx` configuration of the JVM depends on the size of the
container you selected for your application:

| Container Size | Maximum Heap Size (MB) |
| -------------: | ------------------------------------: |
| S | `160` |
| M | `300` |
| L | `671` |
| XL | `1536` |
| 2XL and above | ~80% of the RAM allocated in the plan |
| Container Size | Memory (MB) | Maximum Heap Size (MB) |
| -------------: | -------------: | ------------------------------------: |
| S | 256 | `160` |
| M | 512 | `300` |
| L | 1024 | `671` |
| XL | 2048 | `1536` |
| 2XL and above | 4096 and above | ~80% of the RAM allocated in the plan |
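The defaults above can be summarized with a short sketch. This is illustrative only: the values are simply read from the table, with the ~80% rule applied for 2XL and above; the actual values are set by the platform.

```python
# Default JVM -Xmx per container size, as read from the table above.
# Illustrative lookup only; the platform sets the real values.
DEFAULT_HEAP_MB = {"S": 160, "M": 300, "L": 671, "XL": 1536}

def default_heap_mb(size: str, memory_mb: int) -> int:
    """Return the approximate default -Xmx value (in MB) for a container."""
    if size in DEFAULT_HEAP_MB:
        return DEFAULT_HEAP_MB[size]
    # 2XL and above: roughly 80% of the RAM allocated in the plan
    return int(memory_mb * 0.8)

print(default_heap_mb("M", 512))     # 300
print(default_heap_mb("2XL", 4096))  # 3276
```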

### Choose a Maven Version

src/_posts/languages/php/2000-01-01-start.md (0 additions, 1 deletion)
@@ -114,7 +114,6 @@
The default values for `pm.max_children` are based on the `memory_limit`
parameter of the [PHP configuration](https://github.com/Scalingo/php-buildpack/blob/master/conf/php/php.ini#L15).
The formula used is: `floor(available_memory / php_memory_limit) + 2`.

{: .table }
| Container Size | Memory (MB) | Default Concurrency |
| -------------: | ----------: | ------------------: |
| S | 256 | 3 |
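The formula can be checked with a quick sketch. The argument values below are illustrative; the actual `memory_limit` comes from the buildpack's `php.ini`.

```python
import math

def default_max_children(available_memory_mb: int, php_memory_limit_mb: int) -> int:
    """Default pm.max_children: floor(available_memory / php_memory_limit) + 2."""
    return math.floor(available_memory_mb / php_memory_limit_mb) + 2

# Illustrative values only; the real memory_limit is defined in php.ini.
print(default_max_children(1024, 256))  # 6
```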
src/_posts/platform/app/2000-01-01-metrics.md (17 additions, 41 deletions)
@@ -1,7 +1,7 @@
---
title: Application Metrics
nav: Metrics
modified_at: 2026-01-02 12:00:00
modified_at: 2026-05-05 00:00:00
tags: app metrics
index: 35
---
@@ -23,17 +23,20 @@
The application chart displays global data that are not container specific:
events and routing metrics.

The **Requests per minute** chart shows the number of requests the application
receives per minute, the famous **RPM**. The number of server error responses generated by the application (HTTP responses in the 500 range) is displayed on the same chart as red bars.
receives per minute, the famous **RPM**. The number of server error responses
generated by the application (HTTP responses in the 500 range) is displayed on
the same chart as red bars.

**Note**: 504 and 503 errors can be generated by our reverse proxy. More
information is available in the [routing documentation][routing-errors].

On top of this chart, all the events that happened during the
viewing period are displayed. This can help you link the application behaviour with events
that happened on the platform, e.g. spot a deployment that contains a memory
leak or follow your application behaviour after a scale operation.
viewing period are displayed. This can help you link the application behaviour
with events that happened on the platform, e.g. spot a deployment that contains
a memory leak or follow your application behaviour after a scale operation.

A lot of events are available on the application timeline but only a few relevant are displayed on the metrics view:
A lot of events are available on the application timeline but only a few
relevant ones are displayed on the metrics view:

- Restart event
- Deploy event
@@ -61,10 +64,11 @@
The container charts use the container types defined in your [Procfile]({%
post_url platform/app/2000-01-01-procfile %}).

For each container type, two charts are shown. The first one shows the **CPU
usage** and the second one the **memory** and **swap** usage of this type of
container.
usage** and the second one the **memory usage** and **swap usage** of this
type of container.

The CPU chart may exceed 100% if the application uses more than one core of the CPU.
The CPU chart may exceed 100% if the application uses more than one core of the
CPU.

For the memory chart, the memory (in blue) and swap usage (in red)
are stacked. That way the total memory usage of the application can be
@@ -84,45 +88,16 @@
The swap usage can increase in two different situations:
{% note %}
Protip: Is your application slow? Check your swap usage! If your app
swaps a lot, it will significantly degrade your application's performance. You'd
better reduce its memory usage or use a bigger container size.
better reduce its memory usage or use a bigger
[container size][container-sizes].
{% endnote %}

**Note**: The swap line is only shown if the swap usage exceeds 2% of the
[container memory limit]({% post_url
platform/internals/2000-01-01-container-sizes %}).
[container memory limit][container-sizes].

If the application has more than one container of a specific type, these charts
show the mean CPU usage / memory consumption of all containers of the same type.

## Behavior when memory and swap are fully consumed

When an application consumes all its allocated memory (RAM + swap), the system applies a protection mechanism called the **OOM Killer** (Out of Memory Killer).

### Sequence of events

1. The application progressively uses all available RAM
2. The system starts using swap space (visible in red on the memory chart)
3. When memory and swap reach 100% usage, the OOM Killer intervenes
4. The application is immediately terminated by the system

### Observable consequences

* **Abrupt termination:** The application stops without a graceful shutdown process
* **Automatic restart:** The container restarts according to its configuration
* **Restart event:** A "Restart" event appears in the metrics timeline
* **Data loss:** All non-persisted data in memory is lost

### Prevention and monitoring

To avoid this scenario:

* Regularly monitor memory charts in the Metrics tab
* Set up alerts before reaching memory limits
* Analyse usage spikes in correlation with deployment events
* Consider upgrading to a larger [container size](/platform/internals/container-sizes) if needed

**Note:** The OOM Killer is a system protection mechanism. If your application regularly experiences OOM events, it typically indicates a need for code optimization or increased allocated resources.

## Detailed View

If the application has more than one container of a type defined in its
@@ -134,3 +109,4 @@
debugging process.

[notifiers]: {% post_url platform/app/2000-01-01-notifiers %}
[routing-errors]: {% post_url platform/networking/public/2000-01-01-routing %}#http-errors
[container-sizes]: {% post_url platform/internals/2000-01-01-container-sizes %}
src/_posts/platform/app/scaling/2000-01-01-scaling.md (4 additions, 3 deletions)
@@ -74,9 +74,9 @@
Here is a quick comparison table, in the context of a Platform as a Service:
## Limitations

- Vertical scaling is limited by the platform. The biggest container we can
currently boot is the `2XL` container, with 4GB of RAM. For a comprehensive
list of container sizes and corresponding specifications, please see our
[dedicated documentation page]({% post_url platform/internals/2000-01-01-container-sizes %}).
currently boot is the `2XL` container, with 4GB of RAM. See the
[container sizes][container-sizes] documentation for the full list of sizes
and their specifications.
- Horizontal scaling is limited by default to a maximum of 10 containers per
[process type]({% post_url platform/app/2000-01-01-procfile %}). This limit
can be increased via our support team.
@@ -233,3 +233,4 @@
To learn more about events and notifiers, please visit the page dedicated to

[routing-requests]: {% post_url platform/networking/public/2000-01-01-routing %}#requests-distribution
[Scalingo Autoscaler]: {% post_url platform/app/scaling/2000-01-01-scalingo-autoscaler %}
[container-sizes]: {% post_url platform/internals/2000-01-01-container-sizes %}
@@ -238,9 +238,9 @@
started during a scale-out operation are billed like any other container (on
the other hand, scaling-in helps you save costs).

Consequently, billing depends on the type of container you chose for your
application (M is the default container size), on the maximum number of
containers set in the Autoscaler configuration and on your application
workload.
application (M is the default
[container size][container-sizes]), on the maximum number of containers set in
the Autoscaler configuration and on your application workload.


## Creating an Autoscaler
@@ -439,3 +439,4 @@
To learn more about events and notifiers, please visit the page dedicated to
[app notifiers]({% post_url platform/app/2000-01-01-notifiers %}).

[scaling-v]: {% post_url platform/app/scaling/2000-01-01-scaling %}#vertical-scaling
[container-sizes]: {% post_url platform/internals/2000-01-01-container-sizes %}
@@ -76,7 +76,8 @@
the commands are run are billed like any other
[one-off container]({% post_url platform/app/2000-01-01-tasks %}).

Consequently, billing depends on the type of container you defined in your task
(M is the default container size) and on the job lifespan.
(M is the default
[container size][container-sizes]) and on the job lifespan.

For example, if your job runs for 5 minutes, you will be billed for 5 minutes
of an M container.
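The per-minute prorating can be sketched as follows. The hourly price used here is a placeholder for illustration; refer to the Scalingo pricing page for actual figures.

```python
def billed_amount(hourly_price_eur: float, runtime_minutes: float) -> float:
    """Bill a one-off container for its actual lifespan, prorated per minute."""
    return hourly_price_eur * runtime_minutes / 60

# Placeholder hourly price for an M container; see the pricing page for real figures.
print(round(billed_amount(0.02, 5), 4))  # ~0.0017 EUR for a 5-minute job
```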
@@ -187,3 +188,5 @@
remove the file.

Logs for scheduled tasks are included in the [application logs]({% post_url platform/app/2000-01-01-logs %}),
next to other containers logs.

[container-sizes]: {% post_url platform/internals/2000-01-01-container-sizes %}
@@ -1,7 +1,7 @@
---
title: Runtime Issues
modified_at: 2026-05-04 00:00:00
tags: app runtime crash recovery troubleshooting
modified_at: 2026-05-11 00:00:00
tags: app runtime crash recovery troubleshooting oom memory
index: 3
---

@@ -25,7 +25,8 @@
The most common causes are:
- Configuration issues
- Bugs in your application code
- Uncaught exception in your code (especially with non-compiled languages)
- Insufficient resources
- Insufficient resources, such as an Out of Memory (OOM) crash when the
application consumes all its allocated memory
- Temporary error/unavailability of an external resource

A Runtime Error can have several consequences, depending on the severity of the
@@ -112,5 +113,41 @@
when a Timeout Error occurs).

You can modify this behavior by tweaking your
[Notifier's configuration]({% post_url platform/app/2000-01-01-notifiers %}).
The `app_crashed`, `app_crashed_repeated` and the `app_deploy` events can be
The `app_crashed`, `app_crashed_repeated` and the `app_deployed` events can be
particularly worth considering.

## Common Runtime Error Patterns

### Out of Memory Crashes

When an application consumes all its allocated memory (RAM + swap), the system
applies a protection mechanism called the **OOM Killer** (Out of Memory Killer).

The usual sequence is:

1. The application progressively uses all available RAM.
2. The system starts using swap space.
3. When memory and swap reach 100% usage, the OOM Killer intervenes.
4. The application is immediately terminated by the system.
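Combined with the platform's documented limits (swap is twice the container's RAM, per the container sizes documentation), the total budget available before step 4 can be sketched as:

```python
def oom_threshold_mb(ram_mb: int) -> int:
    """Total memory an app can consume before the OOM Killer fires.

    Assumes the documented rule that swap is twice the container's RAM.
    """
    swap_mb = 2 * ram_mb
    return ram_mb + swap_mb

# An M container (512 MB RAM) can consume up to 1536 MB (RAM + swap)
# before being terminated.
print(oom_threshold_mb(512))  # 1536
```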

This can have several observable consequences:

- Abrupt termination: the application stops without a graceful shutdown process.
- Automatic restart: the container restarts according to its configuration.
- Restart event: a "Restart" event appears in the metrics timeline.
- Data loss: all non-persisted data in memory is lost.

To reduce the risk of OOM crashes:

- Regularly monitor memory charts in the [Metrics tab][metrics].
- Set up [alerts][alerts] before reaching memory limits.
- Analyze usage spikes in correlation with deployment events.
- Consider upgrading to a larger [container size][container-sizes] if needed.

The OOM Killer is a system protection mechanism. If your application regularly
experiences OOM events, it typically indicates a need for code optimization or
increased allocated resources.

[alerts]: {% post_url platform/app/2000-01-01-alerts %}
[container-sizes]: {% post_url platform/internals/2000-01-01-container-sizes %}
[metrics]: {% post_url platform/app/2000-01-01-metrics %}
src/_posts/platform/internals/2000-01-01-container-sizes.md (26 additions, 63 deletions)
@@ -1,83 +1,46 @@
---
title: Container Sizes
modified_at: 2015-12-02 00:00:00
tags: internals containers sizes
modified_at: 2026-05-05 00:00:00
tags: containers sizes
index: 2
---

## Comparative Table

<div class="overflow-horizontal-content">
<table class="table">
<thead>
<tr>
<th>Name</th>
<th>Memory</th>
<th>CPU Priority</th>
<th>PID Limit</th>
<th>Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>S - Small</td>
<td>256MB</td>
<td>Low</td>
<td>128</td>
<td>0.01€/h</td>
</tr>
<tr>
<td>M - Medium (Default)</td>
<td>512MB</td>
<td>Standard</td>
<td>256</td>
<td>0.02€/h</td>
</tr>
<tr>
<td>L - Large</td>
<td>1GB</td>
<td>Standard</td>
<td>512</td>
<td>0.04€/h</td>
</tr>
<tr>
<td>XL - eXtra Large</td>
<td>2GB</td>
<td>High</td>
<td>1024</td>
<td>0.08€/h</td>
</tr>
<tr>
<td>2XL - eXtra eXtra Large</td>
<td>4GB</td>
<td>High</td>
<td>2048</td>
<td>0.16€/h</td>
</tr>
</tbody>
</table>
<div class="overflow-horizontal-content" markdown="1">
| Name                    | Memory | CPU Priority | PID Limit[^pid-limit] |
| ----------------------- | ------ | ------------ | --------------------- |
| S - Small               | 256 MB | Low          | 128                   |
| M - Medium (Default)    | 512 MB | Standard     | 256                   |
| L - Large               | 1 GB   | Standard     | 512                   |
| XL - eXtra Large        | 2 GB   | High         | 1024                  |
| 2XL - eXtra eXtra Large | 4 GB   | High         | 2048                  |
{: .table }
</div>

Bigger container sizes are available upon request on the support.
As a note, each new process requires a PID. And inside each process, each thread needs one too.
Prices are available on the [Scalingo pricing page](https://scalingo.com/pricing).
Bigger container sizes are available upon request to our support team.

## Availability of the Sizes

Our 30-day free trial only gives you access to Small and Medium containers. If
you want to use another size, please [fill in your billing profile and payment
method](https://dashboard.scalingo.com/billing).
{% note %}
Limits apply when using Scalingo under the free trial. For more information,
see [what you can do under the free trial][free-trial-limits].
{% endnote %}

## Container Limits

Containers have various limits depending on their size. Here is a comprehensive list:

- RAM: cf. above-mentioned table
- Swap: twice the amount of RAM.
- CPU access: all containers have access to all CPU cores. But higher priority
- **Memory**: see the comparative table above.
- **Swap**: twice the amount of RAM.
- **CPU**: all containers have access to all CPU cores, but high priority
  means twice as much CPU time as standard priority. For example,
consider three containers, one has a high priority and two others have a
standard priority. When processes in all three containers attempt to use
100% of CPU, the first container would receive 50% of the total CPU time and
the two others would receive 25%.
- PID limits: from 128 (S) to 2048 (2XL).
- Ulimit nofile: 1048576. Maximum number of files an application can open.
- **PID limit**: see the comparative table above.
- **Open file limit** (`nofile`): 1,048,576. This is the maximum number of files an application can open.
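The CPU sharing example above can be reproduced with a small sketch. The weights are an assumption chosen to match the documented 2:1 ratio between high and standard priority.

```python
def cpu_shares(priorities: list[str]) -> list[float]:
    """Fraction of total CPU time each container gets under full contention.

    Assumes high priority weighs twice as much as standard priority.
    """
    weights = [2 if p == "high" else 1 for p in priorities]
    total = sum(weights)
    return [w / total for w in weights]

# One high-priority and two standard-priority containers, all trying to
# use 100% of the CPU:
print(cpu_shares(["high", "standard", "standard"]))  # [0.5, 0.25, 0.25]
```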

[^pid-limit]: Each new process requires a PID, and each thread inside a process needs one too.

[free-trial-limits]: {% post_url platform/getting-started/2000-01-01-free-trial %}#what-can-i-do-under-the-free-trial