Version: 1.0.6
Compatibility: Zurich Patch 5
Type: Standalone Application
AI Control Tower – Evaluations provides insight into the runtime executions of AI agents by generating scores and detailed reasoning through an LLM-as-a-judge. It produces average quality and safety scores for each agent and tracks performance trends over time.
Key capabilities:
- Provides insight into the runtime executions of AI agents.
- Generates scores and detailed reasoning using an LLM-as-a-judge.
- Produces average quality and safety scores for each agent.
- Tracks performance trends over time.
- Net-new Evaluations feature for your agentic AI systems: customers get Quality and Safety scores for their third-party and ServiceNow Agents.
- Moved the existing Evaluations tab (for Virtual Agent) to the VA asset record details page.
- Ability to update your Value's productivity calculation using your quality or safety scores.
- Evaluation partners: Now Assist Evaluations for ServiceNow Agents and Galileo for third-party Agents.
- Resolved an issue in the retry mechanism for the metric retrieval flow.
Plugin Dependencies:
- com.glide.hub.stream_connect.installer
App Dependencies:
- sn_observable_ui (1.0.1)
- sn_ai_governance (5.1.3)
- sn_ai_disc (2.0.5)
- sn_metric_engine (1.0.8)
- sn_ai_metric_ui (1.0.8)
- sn_telemetry_data (1.1.12)
Other app dependencies to support Now Assist Evaluations for ServiceNow Agents:
- sn_skill_builder (7.1.0)
- sn_prompt_assist (2.1.0)