Are you using OpenAI API? Then you need to be prepared!

If your application depends on the OpenAI API, you should know that at some point it will go down, even if only for a short period. When that happens, you want to know about it, act accordingly, or perhaps even run automations to mitigate the problem at hand. Anyone who monitors their external dependencies knows exactly what I'm talking about.

This article is for those who:

Use the OpenAI API in their apps, or depend on it directly or indirectly
Run a Prometheus Server with the Blackbox exporter
Enjoy Grafana, metrics, and visualization

Monitoring the status of external APIs is crucial for maintaining the health and reliability of your applications. For those using OpenAI's API there is a catch: the official status API always returns a 200 status code, even when the service is down. Naturally, this prevents you from probing it directly with the Prometheus Blackbox exporter.

This is where the OpenAI API Status Prober comes in handy. It acts as a proxy, translating the status into meaningful HTTP codes that integrate seamlessly with your Prometheus setup.
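
To illustrate the idea, here is a simplified sketch of such a proxy, not the prober's actual source: fetch the upstream status page and map its health indicator to an HTTP code. The Statuspage-style endpoint and its indicator field are assumptions for illustration, and it needs Node 18+ for the built-in fetch.

const http = require('http');

// Assumed Statuspage-style endpoint; illustrative only
const UPSTREAM = 'https://status.openai.com/api/v2/status.json';

http.createServer(async (req, res) => {
  try {
    const body = await (await fetch(UPSTREAM)).json();
    // 'none' indicates all systems operational; anything else is degraded
    const healthy = body.status && body.status.indicator === 'none';
    res.writeHead(healthy ? 200 : 500);
    res.end(healthy ? 'UP' : 'DOWN');
  } catch (err) {
    // Upstream unreachable counts as down too
    res.writeHead(500);
    res.end('DOWN');
  }
}).listen(9091); // same port the Prometheus example below targets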

Key Features

Accurate Status Reporting: Converts OpenAI’s status API responses into proper HTTP codes (200/500/3xx).
Easy Integration: Simplifies the process of integrating OpenAI API status monitoring into Prometheus.
Flexible Installation Options: Supports global, local, and direct usage methods.

Why Use OpenAI API Status Prober?

The primary motivation for using this tool is the limitation of the official OpenAI status API. By providing a proxy that returns appropriate HTTP status codes, the prober makes it possible to integrate OpenAI’s status into Prometheus, enhancing your monitoring capabilities.

Usage

Installation

You can install and set up OpenAI API Status Prober using three methods:

Global Installation:

npm install -g pm2                        # process manager to keep the prober running
npm install -g openai-api-status-prober
openai-api-status-prober start            # start the prober server
pm2 startup                               # generate a boot-time startup script
pm2 save                                  # persist the process list across reboots
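
Once started, you can sanity-check the endpoint locally (assuming the default port used in the Prometheus example below):

curl -i http://127.0.0.1:9091/open-ai-status-prober/simplified_status

A 200 response means OpenAI is reported as operational; a 500 means an outage is being reported.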

Local Installation:

git clone https://github.com/skywarth/openai-api-status-prober.git
cd openai-api-status-prober
npm ci                 # clean install from the lockfile
node src/server.js     # run the prober in the foreground
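
node src/server.js keeps the process in the foreground; if you want the local checkout supervised the same way as the global install, a pm2 sketch (the process name is illustrative):

pm2 start src/server.js --name openai-status-prober
pm2 save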

Direct Usage of Production Deployment:

You can use the deployment directly via https://openai-api-status-prober.onrender.com/open-ai-status-prober/simplified_status. However, it’s recommended to self-host to avoid overloading the service.

Integrating into Prometheus Blackbox exporter

scrape_configs:
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - http://127.0.0.1:9091/open-ai-status-prober/simplified_status
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115
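
The http_2xx module referenced above must also exist in the Blackbox exporter's own configuration (blackbox.yml). If you don't already have it, a minimal sketch:

modules:
  http_2xx:
    prober: http
    timeout: 5s
    http:
      method: GET

Restart the Blackbox exporter afterwards so the module is loaded.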

Then run systemctl restart prometheus so the new scrape configuration takes effect.
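
With the target scraping, you can alert on the probe_success metric that the Blackbox exporter exposes. A sketch of a Prometheus alerting rule (group and alert names are illustrative):

groups:
  - name: openai-status
    rules:
      - alert: OpenAIAPIDown
        expr: probe_success{job="blackbox"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "OpenAI API is reporting an outage (via the status prober)"

This is where the "run certain automations" idea from the introduction plugs in: route the alert through Alertmanager to a webhook of your choice.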

CLI Commands

Start Server: openai-api-status-prober start

Stop Server: openai-api-status-prober stop

Version: openai-api-status-prober -v

Env Path: openai-api-status-prober env-path

Repository: https://github.com/skywarth/openai-api-status-prober
Deployment: https://openai-api-status-prober.onrender.com/open-ai-status-prober/simplified_status
