Instrumenting Django Applications using OpenTelemetry

In this article we are going to go through instrumenting your Django application using OpenTelemetry (OTel). The project will demonstrate how to expose metrics to Prometheus and how to visualise spans using Jaeger.

Introduction

OpenTelemetry is a vendor-neutral open source Observability framework for instrumenting, generating, collecting, and exporting telemetry data such as traces, metrics, and logs.

It provides end-to-end tracing, allowing you to follow requests as they travel through various services and components. This helps in understanding the flow and identifying bottlenecks or failures.
It enables the easy collection of metrics such as latency, request count, error rates, and resource utilization, which helps in monitoring the health and performance of your application.
It lets you correlate logs with traces and metrics, giving a detailed view of what happens at each step of the request lifecycle.

Prerequisites

To follow along with this article you will need:

Docker installed
Python and/or Poetry

Step 1: Install Dependencies

Set up your Python environment, then install the following packages:

poetry add django opentelemetry-sdk opentelemetry-instrumentation-django gunicorn django-prometheus opentelemetry-exporter-otlp celery
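
If you are using plain pip instead of Poetry, the equivalent install (assuming a virtual environment is already active) is:

pip install django opentelemetry-sdk opentelemetry-instrumentation-django gunicorn django-prometheus opentelemetry-exporter-otlp celery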

After installing the project dependencies, set up your Django application:

poetry run django-admin startproject config .

Step 2: Setup Django App

We will use a todo application to demonstrate some of the features that we can use from OpenTelemetry.

Create a todo app in our Django application and add it to settings.py inside the config folder:

poetry run python3 manage.py startapp todo

After this, your INSTALLED_APPS setting should look similar to this:

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "django_prometheus",
    "todo",
]

Update your todo/urls.py file to be:

from django.urls import path

from . import views

urlpatterns = [
    path("", views.home, name="home"),
    path("create-todo", views.create_todo, name="create-todo"),
]

We will also add a templates folder that contains the HTML rendered by the views below. You will find the template files in this link.

Update the TEMPLATES setting in settings.py to be:

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [BASE_DIR / "templates"],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]

Before creating the views for the application we need a model to store our todos.

Create the following model in the models.py file in the todo folder:

from django.db import models
from datetime import datetime

# Create your models here.

class Todo(models.Model):
    text: str = models.TextField(null=False)
    created_at: datetime = models.DateTimeField(auto_now_add=True, null=True)
    title: str = models.CharField(max_length=100, null=False)
    is_completed: bool = models.BooleanField(default=False)

    def __str__(self) -> str:
        return self.title
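
With the model in place, generate and apply the database migrations so the todos table exists (using the default database settings that startproject creates):

poetry run python3 manage.py makemigrations todo
poetry run python3 manage.py migrate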

Next, create the views for our todo application in todo/views.py:

from django.shortcuts import get_object_or_404, redirect
from django.template.response import TemplateResponse
from django.http import HttpRequest, HttpResponse

from todo.models import Todo
from todo.task import task_created_alert

def home(request: HttpRequest) -> HttpResponse:
    todos = Todo.objects.all()

    return TemplateResponse(
        request,
        "todos/home.html",
        {
            "todos": todos
        },
    )

def create_todo(request: HttpRequest) -> HttpResponse:
    if request.method == "POST":
        title = request.POST.get("title")
        description = request.POST.get("description")

        created_todo = Todo.objects.create(title=title, text=description)

        return redirect("home")

    return TemplateResponse(
        request,
        "todos/create_todo.html"
    )

Add our new URLs to the project URLs in the config/urls.py file:

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path("admin/", admin.site.urls),
    path("", include("todo.urls")),
]

For this application we are also using Celery for task queues in Django. To set up Celery in your app, create a celery.py file inside the config folder:

# celery.py

import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")

app = Celery("config")

app.config_from_object("django.conf:settings", namespace="CELERY")

app.autodiscover_tasks()

To complete the Celery integration, navigate to the __init__.py file in the config folder and adjust it as follows:

# config/__init__.py

from .celery import app as celery_app

__all__ = ("celery_app",)
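
With this in place a worker can be started locally (assuming a message broker, such as the RabbitMQ instance we add to the docker compose file later, is reachable):

poetry run celery --app=config worker --loglevel=info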

Step 3: Instrumenting the application

In this step we are going to instrument our todo application. First we will tackle metrics by integrating the django-prometheus package we installed earlier. Update the MIDDLEWARE setting in settings.py to be:

MIDDLEWARE = [
    "django_prometheus.middleware.PrometheusBeforeMiddleware",
    "django.middleware.security.SecurityMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
    "django_prometheus.middleware.PrometheusAfterMiddleware",
]

By updating our middleware we allow django_prometheus to record metrics for every request made to our application. We can also collect metrics for Django model operations by modifying our todo model:

from django.db import models
from datetime import datetime
from django_prometheus.models import ExportModelOperationsMixin

# Create your models here.

class Todo(ExportModelOperationsMixin("todo"), models.Model):
    text: str = models.TextField(null=False)
    created_at: datetime = models.DateTimeField(auto_now_add=True, null=True)
    title: str = models.CharField(max_length=100, null=False)
    is_completed: bool = models.BooleanField(default=False)

    def __str__(self) -> str:
        return self.title

django-prometheus makes it easy to automatically add metrics to our application.
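
One more piece is needed before Prometheus can scrape anything: django-prometheus ships its own URL config that exposes the /metrics endpoint (the path our Prometheus scrape config uses later). A sketch of config/urls.py with that include added:

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path("admin/", admin.site.urls),
    path("", include("todo.urls")),
    path("", include("django_prometheus.urls")),  # exposes /metrics
]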

Next we add OTel spans to our application to monitor the lifecycle of each request.

For production environments we will create a gunicorn.config.py file in the root directory with the following contents:

import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import (
    OTLPSpanExporter,
)

from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

TRACING_EXPORTER_ENDPOINT = os.environ.get("JAEGER_ENDPOINT", "http://127.0.0.1:4317")


def post_fork(server, worker):
    server.log.info("Worker spawned (pid: %s)", worker.pid)

    resource = Resource(attributes={
        SERVICE_NAME: "todo-application"
    })

    traceProvider = TracerProvider(resource=resource)

    processor = BatchSpanProcessor(OTLPSpanExporter(endpoint=TRACING_EXPORTER_ENDPOINT))
    traceProvider.add_span_processor(processor)
    trace.set_tracer_provider(traceProvider)

In the file above we export application spans to Jaeger on port 4317, where Jaeger's OTLP gRPC receiver listens for trace data.
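
To verify the setup locally you can start gunicorn with this config file (the same command the docker compose file uses later):

poetry run gunicorn --config gunicorn.config.py --bind 0.0.0.0:8000 config.wsgi:application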

Modify celery.py to export spans from Celery to Jaeger:

# config/celery.py
import os
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (
    BatchSpanProcessor,
)

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")

TRACING_EXPORTER_ENDPOINT = os.environ.get("JAEGER_ENDPOINT", "http://127.0.0.1:4317")

resource = Resource(attributes={
    SERVICE_NAME: "celery-worker"
})

traceProvider = TracerProvider(resource=resource)

processor = BatchSpanProcessor(OTLPSpanExporter(endpoint=TRACING_EXPORTER_ENDPOINT))
traceProvider.add_span_processor(processor)
trace.set_tracer_provider(traceProvider)

tracer = trace.get_tracer(__name__)

app = Celery("config")

app.config_from_object("django.conf:settings", namespace="CELERY")

app.autodiscover_tasks()

Next, modify the wsgi.py file in the config folder by adding the following lines to automatically instrument the whole application:

"""
WSGI config for config project.

It exposes the WSGI callable as a module-level variable named "application".

For more information on this file, see
https://docs.djangoproject.com/en/5.0/howto/deployment/wsgi/
"""
import os

from opentelemetry.instrumentation.django import DjangoInstrumentor

from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings")
DjangoInstrumentor().instrument()

application = get_wsgi_application()
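
With DjangoInstrumentor in place, every incoming request automatically gets a server span. If you want finer-grained spans you can also open one manually inside a view; a minimal sketch for the home view, assuming the tracer provider configured in gunicorn.config.py:

from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def home(request: HttpRequest) -> HttpResponse:
    # wrap the database query in a custom child span
    with tracer.start_as_current_span("fetch-todos"):
        todos = Todo.objects.all()

    return TemplateResponse(request, "todos/home.html", {"todos": todos})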

At this point we have instrumented our Django + Celery application, but there is still some customisation to do. In order to view the full span of a view that enqueues Celery tasks, we have to use context propagation.

The create_todo view function creates a todo record and then passes the created todo's title to the task_created_alert function, which runs for 10 seconds.

To propagate the context from the view to the task function, modify the view as follows:

from time import sleep

from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator

def create_todo(request: HttpRequest) -> HttpResponse:
    if request.method == "POST":
        title = request.POST.get("title")
        description = request.POST.get("description")

        created_todo = Todo.objects.create(title=title, text=description)

        # add this
        carrier = {}
        TraceContextTextMapPropagator().inject(carrier)
        task_created_alert.delay(title=created_todo.title, headers=carrier)
        # end here

        sleep(5)
        return redirect("home")

    return TemplateResponse(
        request,
        "todos/create_todo.html"
    )

The updated function injects the current trace context into a carrier dictionary and passes it, together with the todo title, to our task function.
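
Under the hood the propagator writes a W3C traceparent entry into the carrier dictionary, so what the Celery task actually receives looks roughly like this (illustrative values):

carrier = {}
TraceContextTextMapPropagator().inject(carrier)
# carrier now holds something like:
# {"traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"}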

Let's create the task that should run after the creation of a todo.
Create a task.py file in the todo folder:

from time import sleep

from celery import shared_task
from opentelemetry import trace
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator

tracer = trace.get_tracer(__name__)

@shared_task
def task_created_alert(title, headers):
    # rebuild the trace context from the carrier passed in by the view
    ctx = TraceContextTextMapPropagator().extract(carrier=headers)

    with tracer.start_as_current_span("task_created_alert", context=ctx):
        span = trace.get_current_span()
        span.set_attribute("title", title)
        sleep(10)

    return True

The task.py file contains the task_created_alert function which we call from the create_todo view. Inside the task we extract the trace context from the carrier provided by the caller and start a new span within it.

We simulate a task that takes 10 seconds so that the generated span is easy to spot.

Step 4: Setup Observability Tools
As previously stated, we are using Docker Compose to run this application with all its dependencies.
First we need a Dockerfile for our Python application:

FROM python:3.12-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Install Poetry
RUN pip install poetry

# Set work directory
WORKDIR /code

# Copy only the necessary files to install dependencies
COPY pyproject.toml poetry.lock /code/

# Install dependencies
RUN poetry config virtualenvs.create false \
    && poetry install --no-dev --no-interaction --no-ansi

# Copy project
COPY . /code/

# Expose port 8000 for the app
EXPOSE 8000

Then create the docker compose file, located at the root folder of our application:

services:
  web:
    build: .
    command: gunicorn --config gunicorn.config.py --workers=4 --bind 0.0.0.0:8000 config.wsgi:application
    volumes:
      - .:/code
    environment:
      - OTEL_SERVICE_NAME=todo-app
      - OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
      - CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672/
      - JAEGER_ENDPOINT=http://jaeger:4317/
      - OTEL_LOGS_EXPORTER=otlp
    ports:
      - "8000:8000"

  celery:
    build: .
    command: celery --app=config worker --loglevel=info --logfile=logs/celery.log
    volumes:
      - .:/code
    environment:
      - OTEL_SERVICE_NAME=todo-app
      - OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
      - CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672/
      - JAEGER_ENDPOINT=http://jaeger:4317/

  rabbitmq:
    image: rabbitmq:3-management
    container_name: rabbitmq
    ports:
      - "5672:5672"
      - "15672:15672"

  jaeger:
    image: jaegertracing/all-in-one:1.58
    ports:
      - "16686:16686"
      - "4318:4318"
      - "6831:6831"
      - "4317:4317"
    environment:
      - LOG_LEVEL=debug

  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - --config.file=/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"

Prometheus configuration (prometheus.yml):

global:
  scrape_interval: 10s
  evaluation_interval: 10s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["127.0.0.1:9090"]

  - job_name: "web-app-stuff"
    metrics_path: "/metrics"
    scrape_interval: 5s
    static_configs:
      - targets: ["web:8000"]
        labels:
          alias: "web-app"

By adding these configurations we are able to run our application and view the spans generated when creating a todo.
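
From the project root, build and start everything with:

docker compose up --build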

Step 5: Usage
Visit the web app at http://localhost:8000 and create a todo.
Then visit the Jaeger UI dashboard at http://localhost:16686 and search for the POST create-todo operation to view the span for the request.

We are also able to view our metrics using the Prometheus dashboard at http://localhost:9090.

Summary

We have gone through how to instrument a Django app that uses Celery with OpenTelemetry, and how to propagate trace context through the app.
Useful links

Github Repo
OpenTelemetry
Jaeger
Prometheus
Celery
