Monitoring database size
This section provides instructions on how to monitor the database size. Typically, a job to monitor the database size needs three pieces of information:
- What is the maximum size of the database?
- When should an alert be raised?
- Where should the alert be written?
The maximum size defines what 100% of the database size means, so that the different alert levels can later be defined as percentages of it.
The alert_db_size() PostgreSQL function mentioned in this section is part of the Sensys database schema. Please make sure the correct database schema version is installed.
To complete this step, you will need to define the maximum size of the database in gigabytes (GB), define three logging levels (log, warning, and critical) as percentages of the maximum size, and finally configure the PostgreSQL log_destination to which the alert messages are written.
Enabling PostgreSQL to Log to syslog
Edit the log_destination value in the postgresql.conf file to include syslog. The default value of log_destination is stderr. To see the current value of log_destination, connect to the database and run:
SHOW log_destination;
Please refer to the Error Reporting and Logging section of the PostgreSQL documentation for full details on how to configure the log_destination option.
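For example, a minimal postgresql.conf change that keeps stderr logging and adds syslog might look like the following; the syslog_facility and syslog_ident values shown are illustrative defaults:
log_destination = 'stderr,syslog'
syslog_facility = 'LOCAL0'    # syslog facility to log under
syslog_ident = 'postgres'     # program name identifying PostgreSQL in syslog
A change to log_destination does not require a server restart; reloading the configuration, for example with SELECT pg_reload_conf();, is sufficient.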
Calling Alert Database Size Stored Procedure
The alert_db_size() function queries the current database for its size, computes the current percentage with respect to the maximum database size, and compares it against the three logging levels. The signature of this function is:
FUNCTION alert_db_size(
max_db_size_in_gb double precision,
log_level_percentage double precision,
warning_level_percentage double precision,
critical_level_percentage double precision,
log_tag text)
RETURNS text
The restrictions on the parameters are as follows:
0 < max_db_size_in_gb
0 <= log_level_percentage < warning_level_percentage < critical_level_percentage <= 1.0
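To see the kind of value the function compares against these levels, you can compute the current size fraction manually. The query below is only an illustration of the idea, assuming 1 GB is counted as 1024^3 bytes; the stored procedure's exact arithmetic is defined by the Sensys schema:
SELECT pg_database_size(current_database()) / (100.0 * 1024^3) AS fraction_of_100gb;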
Below is an example of checking the database size against a maximum size of 100.0 GB. If the current database size exceeds 70.0 GB (70% of 100.0 GB), the function logs a message at the LOG level to the destination specified in PostgreSQL's log_destination. Similarly, if the current database size exceeds 80.0 GB or 90.0 GB, it logs a message at the WARNING or CRITICAL level, respectively. All log messages also include the string ORCMDB 100GB Alert for ease of filtering. If the database size is below 70.0 GB, the function simply returns a text message stating that the database is within its limit.
SELECT alert_db_size(100.0, 0.7, 0.8, 0.9, 'ORCMDB 100GB Alert');
The SQL statement above can be scheduled to run on a regular basis using pgAgent. Alternatively, the statement can be executed as a shell command using the psql tool, which can then be scheduled to run on a regular basis with any scheduling tool such as crontab.
$ psql -U username -d database -c "SELECT alert_db_size(100.0, 0.7, 0.8, 0.9, 'ORCMDB 100GB Alert');"
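For example, an illustrative crontab entry that runs the check every 6 hours might look like this; it assumes psql is on the PATH and that authentication is handled non-interactively, for example via a .pgpass file:
0 */6 * * * psql -U username -d database -c "SELECT alert_db_size(100.0, 0.7, 0.8, 0.9, 'ORCMDB 100GB Alert');"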
Using Alert Database Runner Script
As an alternative to calling the alert database size stored procedure directly, the pg_alert_runner.py Python wrapper script calls the alert_db_size() stored procedure on your behalf. The pg_alert_runner.py script is available in the contrib/database directory of the Sensys repository. This Python script uses SQLAlchemy to connect to the PostgreSQL database and execute the stored procedure. For this to work, the system needs to have the following Python modules installed:
- SQLAlchemy - Database Abstraction Library
% pip install SQLAlchemy
- psycopg2 - Python-PostgreSQL adapter, also the default PostgreSQL driver used by SQLAlchemy
% pip install psycopg2
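To verify that both modules are available, a quick sanity check is to import them from the command line:
% python -c "import sqlalchemy, psycopg2; print(sqlalchemy.__version__)"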
Because this helper script uses SQLAlchemy to connect to the database, the database connection string needs to follow the SQLAlchemy database URL syntax.
The helper script expects the PostgreSQL database URL to be stored in the environment variable named PG_DB_URL or passed to it with the --db-url option. See the example below on how to set it.
$ export PG_DB_URL=postgresql://[username[:password]]@host[:port]/database
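For example, with a hypothetical user sensys connecting to a database named orcmdb on localhost (every value here is a placeholder to be replaced with your own):
$ export PG_DB_URL=postgresql://sensys:secret@localhost:5432/orcmdb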
Run pg_alert_runner.py --help for full usage details. Below is an example of running this script in an endless loop every 6 hours (21600 seconds):
$ export PG_DB_URL=postgresql://[username[:password]]@host[:port]/database
$ python contrib/database/pg_alert_runner.py 100.0 --log-level-percentage 0.7 --warning-level-percentage 0.8 --critical-level-percentage 0.9 --log-tag 'Sample alert on ORCMDB' --loop 21600
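For reference, below is a minimal sketch of what such a wrapper loop can look like. It is not the actual pg_alert_runner.py; the values mirror the command line above, and the engine/loop structure is only an assumption about how a SQLAlchemy-based runner might be written:

import os
import time

from sqlalchemy import create_engine, text

# Build an engine from the SQLAlchemy database URL (see PG_DB_URL above).
engine = create_engine(os.environ["PG_DB_URL"])

# Call the alert_db_size() stored procedure, then sleep and repeat.
while True:
    with engine.connect() as conn:
        result = conn.execute(
            text("SELECT alert_db_size(:max_gb, :log, :warn, :crit, :tag)"),
            {"max_gb": 100.0, "log": 0.7, "warn": 0.8, "crit": 0.9,
             "tag": "Sample alert on ORCMDB"},
        )
        print(result.scalar())
    time.sleep(21600)  # 6 hours between checks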