# Django LiveView vs Phoenix LiveView: a real benchmark
A reproducible, Docker-based benchmark comparing django-liveview (WebSocket + Django Channels + Stimulus) against Phoenix LiveView (WebSocket + BEAM + LiveView diffs).
Both apps implement an identical alert dashboard: add, delete, and search alerts in real time. The benchmark is fully automated with Playwright headless Chromium.
## Stack versions
| Component | Django LiveView | Phoenix LiveView |
|---|---|---|
| Language | Python 3.12 | Elixir 1.17.3 / OTP 27 |
| Framework | Django 6.0.5 | Phoenix 1.7 |
| LiveView lib | django-liveview 2.2.0 | phoenix_live_view 1.0 |
| WS layer | Channels 4.3.2 + channels-redis 4.3.0 | Built-in (BEAM) |
| HTTP server | Uvicorn 0.47.0 | Bandit 1.5 |
| Database | PostgreSQL 16 | PostgreSQL 16 |
| Cache/broker | Redis 7 | — |
## How each framework works

| | Django LiveView | Phoenix LiveView |
|---|---|---|
| Transport | WebSocket (Django Channels) | WebSocket (BEAM) |
| Updates | Explicit HTML snippets per selector | Automatic template diffs |
| State | Stateless (DB per action) | In-memory assigns + DB for mutations |
| JS | liveview.js (Stimulus + WS client) | phoenix_live_view.js |
| Server | Uvicorn (ASGI) | Bandit (HTTP/WS) |
## Scenarios

### Common (10 iterations, 2 warmup)
| # | Scenario | What it measures |
|---|---|---|
| 1 | Add alert | Click → new row appears (append) |
| 2 | Delete alert | Click → row disappears |
| 3 | Search / filter | Input → filtered list rendered |
### Edge cases (5 iterations, 1 warmup)
| # | Scenario | What it measures |
|---|---|---|
| 4 | Add to large list | Add one alert with 500 already loaded |
| 5 | Rapid fire | 5 consecutive clicks (50 ms apart) → all 5 rows appear |
| 6 | Empty search | Search for non-existent term → empty state |
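Each scenario yields a list of per-iteration latency samples. As an illustration only (the function and field names below are hypothetical, not taken from this repo), such samples might be condensed into summary statistics like this:

```python
import statistics

def summarize(latencies_ms):
    """Summary stats for one scenario's latency samples (milliseconds)."""
    s = sorted(latencies_ms)
    return {
        "mean": round(statistics.fmean(s), 2),
        "median": statistics.median(s),
        # nearest-rank p95 over the sorted samples
        "p95": s[min(len(s) - 1, round(0.95 * (len(s) - 1)))],
        "min": s[0],
        "max": s[-1],
    }

# Example: five hypothetical add-alert iterations
print(summarize([41.2, 38.7, 44.0, 39.5, 40.1]))
```

With only a handful of iterations per scenario, median and p95 are more informative than the mean, since a single GC pause or scheduler hiccup can skew the average.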
## Results
Full results, charts, and analysis are available in the article:
Django LiveView vs Phoenix LiveView: a real benchmark
## How to run

1. Start the apps

```shell
docker compose up --build -d django phoenix
```

Wait until both are healthy (check `docker compose ps`).
2. Run the benchmark

```shell
docker compose --profile bench run --rm benchmark
```

Results land in `./results/` as:

- `results_<timestamp>.csv` — raw per-iteration data
- `report_<timestamp>.md` — summary tables
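The raw CSV can be post-processed with nothing but the standard library. The column layout below is an assumption for illustration (the actual file may use different headers); this sketch groups per-iteration rows into per-scenario means:

```python
import csv
import io
import statistics
from collections import defaultdict

def load_results(fp):
    """Aggregate per-iteration latencies into means keyed by
    (framework, scenario). Column names here are assumed, not
    taken from the repo's actual CSV schema."""
    groups = defaultdict(list)
    for row in csv.DictReader(fp):
        groups[(row["framework"], row["scenario"])].append(float(row["latency_ms"]))
    return {key: statistics.fmean(vals) for key, vals in groups.items()}

# Example with an in-memory sample instead of a real results file
sample = """framework,scenario,latency_ms
django,add_alert,42.0
django,add_alert,38.0
phoenix,add_alert,12.0
"""
print(load_results(io.StringIO(sample)))
```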
3. Manual exploration
| App | URL |
|---|---|
| Django LiveView | http://localhost:8001 |
| Phoenix LiveView | http://localhost:8002 |
## Benchmark methodology
- Timing: wall-clock `time.perf_counter()` from DOM action to DOM mutation detected by Playwright.
- WS payload: Playwright `framesent`/`framereceived` events summed per interaction.
- Warmup: first N iterations discarded (default 2 for common, 1 for edge cases).
- DB state: cleared via HTTP `/bench/clear/` before each scenario; pre-populated via `/bench/populate/?count=N` where needed.
- WS ready: benchmark waits for `#ws-status` to read `"connected"` before starting.
- Browser: headless Chromium via Playwright.
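The timing rule above can be sketched as a small loop. This is a simplified illustration, not the repo's actual harness: in the real benchmark the two callables would drive Playwright, but the warmup handling works the same way.

```python
import time

def measure(action, wait_for_dom, iterations=10, warmup=2):
    """Time action -> DOM-mutation pairs; warmup samples are discarded.

    `action` and `wait_for_dom` are plain callables so the loop stays
    self-contained; in the real harness they would be Playwright calls.
    """
    samples_ms = []
    for _ in range(warmup + iterations):
        t0 = time.perf_counter()
        action()          # e.g. clicking the "add alert" button
        wait_for_dom()    # e.g. blocking until the new row is visible
        samples_ms.append((time.perf_counter() - t0) * 1000)
    return samples_ms[warmup:]  # drop the warmup iterations

# Example with no-op callables: 12 runs, first 2 discarded, 10 kept
print(len(measure(lambda: None, lambda: None)))
```

Discarding warmup iterations matters here because the first interactions pay one-time costs (JIT/bytecode caches, WS frame buffers, DB connection pools) that would otherwise inflate the per-framework averages.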
## Related work
- django-interactive-frameworks-benchmark — previous benchmark comparing multiple Django interactive frameworks.
- django-liveview — the Django LiveView package used here.