# pgSCV v1.0.0 Release Notes
pgSCV v1.0.0 is a major release with significant architectural improvements, new features, and several breaking changes compared to v0.15.x.
## New features

### Query results caching
pgSCV now supports caching of query results to reduce database load. Two backends are available:
- `in-memory` — built-in cache, no additional infrastructure required.
- `memcached` — an external Memcached server for shared caching across multiple pgSCV instances.
Cache TTL can be configured globally and overridden per collector. For example, heavy collectors like `postgres/indexes` and `postgres/tables` can use a longer TTL while the rest use a shorter default.
Example configuration:

```yaml
cache:
  type: "in-memory"
  ttl: 30s
  collectors:
    postgres/indexes:
      ttl: 10m
    postgres/tables:
      ttl: 10m
```
Using an external Memcached server:

```yaml
cache:
  type: "memcached"
  server: 127.0.0.1:11211
  ttl: 30s
  collectors:
    postgres/indexes:
      ttl: 10m
```
### Connection pooling
pgSCV switched from single database connections (`pgx.Conn`) to connection pools (`pgxpool.Pool`) powered by pgx/v5. This improves connection management and reduces overhead when collecting metrics from multiple collectors in parallel.
Pool parameters can be tuned via YAML configuration or environment variables:
```yaml
pooler:
  max_conns: 10
  min_conns: 2
  min_idle_conns: 2
```
Environment variables: `PGSCV_POOLER_MAX_CONNS`, `PGSCV_POOLER_MIN_CONNS`, `PGSCV_POOLER_MIN_IDLE_CONNS`.
### New collector: `postgres/stattuple`
The new `postgres/stattuple` collector uses the `pgstattuple_approx()` function from the `pgstattuple` extension. It collects tuple-level statistics for tables larger than 50MB.
New metrics:

- `postgres_pgstattuple_approx_tuple_percent`
- `postgres_pgstattuple_dead_tuple_count`
- `postgres_pgstattuple_dead_tuple_len`
- `postgres_pgstattuple_dead_tuple_percent`
- `postgres_pgstattuple_approx_free_space`
- `postgres_pgstattuple_approx_free_percent`
All metrics have the labels `datname`, `relname`, `schemaname`.

Requirement: the `pgstattuple` extension must be installed in the target database.
### Postgres-based service discovery
The new discovery type `postgres` allows pgSCV to connect to a PostgreSQL instance, discover all databases, and automatically register each one as a separate monitoring service. Databases can be filtered using the `db` (include) and `exclude_db` (exclude) regular expressions.
```yaml
discovery:
  my_cluster:
    type: postgres
    config:
      conninfo: "postgres://pgscv:password@127.0.0.1:5432/postgres"
      db: "^app_.*$"
      exclude_db: "template.*"
      refresh_interval: 30
      target_labels:
        - name: environment
          value: production
```
The `password_from_env` option allows reading the password from an environment variable instead of putting it directly in the config:
```yaml
discovery:
  my_cluster:
    type: postgres
    config:
      conninfo: "postgres://pgscv@127.0.0.1:5432/postgres"
      password_from_env: "PG_MONITORING_PASSWORD"
```
This feature replaces the removed `databases` configuration option (see Breaking changes below).
### Script-based service discovery
The new discovery type `script` allows running external shell scripts to discover services. A script outputs service connection information that pgSCV parses and registers as monitoring targets.
```yaml
discovery:
  aws_rds:
    type: script
    config:
      script: /opt/pgscv/discovery-scripts/aws-discovery.sh
      execution_timeout: 30s
      refresh_interval: 5m
      args: ["ALL"]
      env:
        - name: AWS_REGION
          value: "us-east-1"
      target_labels:
        - name: environment
          value: "production"
      debug: false
```
Example discovery scripts for AWS RDS, macOS Homebrew PostgreSQL, and macOS Postgres.app are included in the `discovery-scripts/` directory.
### Context propagation and timeouts
All collector `Update()` methods now accept a `context.Context`. This enables proper timeout handling:
- 10-minute timeout for the overall metrics collection call.
- 30-second timeout per individual collector.
### PostgreSQL 18 support
Test matrix and demo lab now include PostgreSQL 18.
## Breaking changes

### Throttling removed
The throttling mechanism (the `throttling_interval` YAML parameter and the `PGSCV_THROTTLING_INTERVAL` environment variable) has been completely removed. Use the new caching feature instead, which provides a more efficient way to reduce database load.
### Multi-database collection via `databases` regexp removed
The `databases` YAML parameter and the `PGSCV_DATABASES` environment variable have been removed. This feature allowed collecting per-database metrics (tables, indexes, functions) from databases matching a regular expression.
Use the new Postgres-based service discovery instead — it provides the same functionality with more flexibility (see Migration guide below).
### pgx/v4 replaced with pgx/v5
The PostgreSQL driver has been upgraded from `github.com/jackc/pgx/v4` to `github.com/jackc/pgx/v5`. This is an internal change, but it affects users who build pgSCV from source or use custom forks.
### Collector `Update()` method signature changed
For users with custom collectors or forks: all collector `Update()` methods now require a `context.Context` as the first argument:
```go
// Before (v0.15)
Update(config Config, ch chan<- prometheus.Metric) error

// After (v1.0)
Update(ctx context.Context, config Config, ch chan<- prometheus.Metric) error
```
## Improvements
- Grafana dashboard updated to the v1.0.0 edition with PostgreSQL restart annotations and `$interval` variable support in PromQL queries.
- Configuration validation now runs before discovery initialization for faster feedback on config errors.
- Thread-safe service addition from multiple discovery providers.
## Migration guide: v0.15 → v1.0

### Step 1. Download and install v1.0
```bash
curl -s -L https://github.com/cherts/pgscv/releases/download/v1.0.0/pgscv_1.0.0_linux_$(uname -m).tar.gz -o - | tar xzf - -C /tmp && \
mv /tmp/pgscv /usr/sbin && \
systemctl restart pgscv
```
### Step 2. Update configuration file

#### Remove `throttling_interval`
If your configuration uses `throttling_interval`, remove it and add a `cache` section instead.
Before (v0.15):

```yaml
listen_address: 127.0.0.1:9890
throttling_interval: 25s
services:
  postgres:5432:
    service_type: "postgres"
    conninfo: "postgres://pgscv:password@127.0.0.1:5432/postgres"
```
After (v1.0):

```yaml
listen_address: 127.0.0.1:9890
cache:
  type: "in-memory"
  ttl: 25s
services:
  postgres:5432:
    service_type: "postgres"
    conninfo: "postgres://pgscv:password@127.0.0.1:5432/postgres"
```
#### Replace `databases` with Postgres discovery

If your configuration uses the `databases` regexp for multi-database monitoring, replace it with a `discovery` section.
Before (v0.15):

```yaml
listen_address: 127.0.0.1:9890
databases: "^app_.*$"
services:
  postgres:5432:
    service_type: "postgres"
    conninfo: "postgres://pgscv:password@127.0.0.1:5432/postgres"
```
After (v1.0):

```yaml
listen_address: 127.0.0.1:9890
services:
  postgres:5432:
    service_type: "postgres"
    conninfo: "postgres://pgscv:password@127.0.0.1:5432/postgres"
discovery:
  my_cluster:
    type: postgres
    config:
      conninfo: "postgres://pgscv:password@127.0.0.1:5432/postgres"
      db: "^app_.*$"
```
If you need to exclude specific databases, use `exclude_db`:

```yaml
discovery:
  my_cluster:
    type: postgres
    config:
      conninfo: "postgres://pgscv:password@127.0.0.1:5432/postgres"
      db: ".*"
      exclude_db: "^(template0|template1|postgres)$"
      refresh_interval: 30
```
#### Remove `PGSCV_DATABASES` and `PGSCV_THROTTLING_INTERVAL` environment variables

If you use environment variables, remove `PGSCV_DATABASES` and `PGSCV_THROTTLING_INTERVAL` from your systemd unit file, Docker environment, or `.env` file.
### Step 3. Optional: enable caching
For heavily loaded monitoring setups, enable caching to reduce database load:
```yaml
cache:
  type: "in-memory"
  ttl: 30s
  collectors:
    postgres/indexes:
      ttl: 10m
    postgres/tables:
      ttl: 10m
    postgres/stattuple:
      ttl: 10m
```
### Step 4. Optional: tune connection pool
If you need to control the number of connections pgSCV makes to the database:
```yaml
pooler:
  max_conns: 10
  min_conns: 2
  min_idle_conns: 2
```
### Step 5. Optional: enable pgstattuple collector

Install the `pgstattuple` extension in your database:

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;
```

The `postgres/stattuple` collector will be enabled automatically when the extension is detected.
### Step 6. Verify

After restarting pgSCV, check that metrics are being collected:

```bash
curl -s 127.0.0.1:9890/metrics | head -20
```

Check the logs for any configuration errors, e.g. via `journalctl -u pgscv` when running under systemd.