This boilerplate demonstrates how to run Qwen3-VL-8B locally using LM Studio with an OpenAI-compatible API, while implementing a powerful Guardrails system to protect your application against misuse, prompt injection, jailbreak attempts, and malicious actors.
It provides a complete, end-to-end example that combines local LLM development with a strong security layer â essential for any production-oriented AI workflow.
Overview
The example integrates Next.js App Router, shadcn/ui, and the Vercel AI SDK (AI Elements) to build a secure chat interface powered by a locally hosted model.
A well-structured guardrailSystemPrompt ensures the model follows strict behavioral boundaries, blocking attempts to bypass restrictions or extract sensitive information.
A companion file, PROMPT_TEST.md, is included to demonstrate real-world tests you can run against the guardrails â covering injection attempts, jailbreak scenarios, manipulation tactics, and malformed instructions.
This setup acts as a local AI security lab, allowing you to test and validate LLM guardrails before deploying them in production.
Features
- LM Studio hosting Qwen3-VL-8B with OpenAI-compatible API
- Guardrail system prompt designed to mitigate jailbreaks and malicious intent
- Includes
PROMPT_TEST.mdwith practical attack scenarios - Next.js App Router for modern routing
- Vercel AI SDK (AI Elements) for a polished chat interface
- shadcn/ui for consistent UI components
- End-to-end chat with enforced safety boundaries
- Fully local, cost-free LLM experimentation
- Ideal for testing safe AI implementation before deploying to cloud models
Use Case
Perfect for developers who want to:
- Experiment with LLM security in a controlled, local environment
- Validate guardrails against real attack patterns
- Build AI applications where safety and reliability are critical
- Test local LLMs before moving to hosted providers
- Run a fully offline development workflow without token cost
With LM Studio, Next.js, AI Elements, and a robust guardrail system, this boilerplate offers a practical, security-focused foundation for building AI applications that remain safe â even under adversarial pressure. đ
Boilerplate details
Last update
5 hours agoBoilerplate age
5 hours ago