Available · Flagship
AI Contextual Help Assistant
Inline assistant that understands the current page, field, and state — plus free-form chat over a per-tenant knowledge base.
Key benefits
- Contextual help on the current page ("explain this field / state / action")
- Free-form chat against the tenant's knowledge base (RAG)
- User Guide Q&A mode
- Every exchange stored with thumbs-up/down feedback
What it does
A chat-style assistant embedded in every portal. Three modes:
- Contextual help. An LLM-backed assistant (Claude or OpenAI) that sees the current page, field, or state you're looking at and can explain it in plain English, grounded in your program's own rules.
- Free-form tenant Q&A. Real-time keyword retrieval over the tenant's configured knowledge articles (RAG), e.g. "what's the filing deadline for domestic violence in our state?", with the top five matching articles passed to the LLM as grounding for the final answer.
- User Guide chat. Answers procedural “how do I use VCPMS to do X?” questions scoped to product usage.
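The retrieval step in the free-form Q&A mode can be sketched roughly as follows. This is an illustrative sketch only: the function name, article fields, and overlap-based scoring rule are assumptions for the example, not the product's actual implementation; only the "top-5 keyword retrieval" behavior comes from the description above.

```python
def top_matches(query: str, articles: list[dict], k: int = 5) -> list[dict]:
    """Rank knowledge articles by query-keyword overlap; keep the top k.

    The returned articles would then be passed to the LLM as grounding
    context for the final answer.
    """
    keywords = set(query.lower().split())
    scored = []
    for article in articles:
        text = (article["title"] + " " + article["body"]).lower()
        # Naive scoring: count how many query keywords appear in the article.
        score = sum(1 for word in keywords if word in text)
        if score > 0:
            scored.append((score, article))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [article for _, article in scored[:k]]
```

A real deployment would likely use stemming or embeddings rather than raw substring matches, but the shape is the same: score, rank, truncate to five, then hand off to the LLM.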
Why it matters
New staff ramp faster. Victims and advocates get answers in the portal instead of calling the back office. Reviewers get fast rule-lookup without leaving the claim screen.
How it’s configured
Each tenant seeds its own knowledge-base articles as a JSON file (ai-assistant-knowledge-articles.json). Articles can be filtered by portal (VBO / VCA / SPA / ADV / LEA) so answers match the reader’s role. The LLM backend is pluggable — Anthropic Claude, Azure OpenAI, or OpenAI — so programs can choose their vendor.