LLM Locust combines the simplicity of Locust with deep support for LLM-specific benchmarking
Updated Jan 5, 2026 - TypeScript
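To illustrate what LLM-specific benchmarking typically involves beyond plain request load, here is a minimal TypeScript sketch that measures time-to-first-token (TTFT) and total latency against an OpenAI-compatible streaming chat completions endpoint. This is not LLM Locust's actual API; `BASE_URL`, `MODEL`, and the `API_KEY` environment variable are placeholders.

```ts
// Illustrative only: approximate time-to-first-token (TTFT) and total latency
// for a single streamed request to an OpenAI-compatible endpoint.
// BASE_URL, MODEL, and API_KEY are placeholders, not LLM Locust's API.
const BASE_URL = process.env.BASE_URL ?? "http://localhost:8000/v1";
const MODEL = process.env.MODEL ?? "my-model";

async function measureTtft(prompt: string): Promise<{ ttftMs: number; totalMs: number }> {
  const start = performance.now();
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.API_KEY ?? ""}`,
    },
    body: JSON.stringify({
      model: MODEL,
      messages: [{ role: "user", content: prompt }],
      stream: true, // stream so the first token can be timed separately
    }),
  });
  if (!res.ok || !res.body) throw new Error(`request failed: ${res.status}`);

  const reader = res.body.getReader();
  let ttftMs = -1;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    if (ttftMs < 0 && value && value.length > 0) {
      ttftMs = performance.now() - start; // first streamed chunk ≈ first token
    }
  }
  return { ttftMs, totalMs: performance.now() - start };
}

measureTtft("Explain load testing in one sentence.").then(console.log);
```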
Let LLMs play Coup with each other and see which is best at deception & strategy
Quick web app to collect feedback from LLM chat interactions
Web app & CLI for benchmarking LLMs via OpenRouter. Test multiple models simultaneously with custom benchmarks, live progress tracking, and detailed results analysis.
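As a rough illustration of testing several models at once through OpenRouter's OpenAI-compatible chat completions endpoint (not the project's own code), here is a minimal TypeScript sketch; the model IDs and the `OPENROUTER_API_KEY` environment variable are assumed placeholders.

```ts
// A minimal sketch of running one prompt against several models concurrently
// through OpenRouter and comparing latency and answers. Model IDs and the
// OPENROUTER_API_KEY env var are illustrative placeholders.
const MODELS = ["openai/gpt-4o-mini", "anthropic/claude-3.5-sonnet"];

async function runOnce(model: string, prompt: string) {
  const start = performance.now();
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    },
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  });
  const data = await res.json();
  return {
    model,
    latencyMs: Math.round(performance.now() - start),
    answer: data.choices?.[0]?.message?.content ?? "(no content)",
  };
}

// Fire all models at once and print the results side by side.
Promise.all(MODELS.map((m) => runOnce(m, "What is 17 * 23?"))).then((results) =>
  console.table(results)
);
```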