August 9, 2024 - v2.0.463
by Noriaki Tatsumi
New Features:
- OpenAI has been added as an LLM service provider option, alongside Azure, for the Shield rules that leverage LLMs (e.g., Hallucination, Sensitive Data)
- Our most recent experimental hallucination rule has now been promoted to the Hallucination V3 rule in beta. Benefits of V3 include:
  - Increased speed and decreased cost compared to V2
  - Improved labeling through algorithmically optimized language models that require fewer few-shot examples for training
Product Enhancements:
- The claim classifier in the hallucination rule has been retrained with more data to improve the detection of text chunks that should be skipped for evaluation as claims
- Improved the stability of toxicity rule executions: a circuit breaker now skips rule evaluation when the token count exceeds the default threshold of 1,200, reducing the number of requests that fail due to latency
- A new endpoint was introduced in Arthur Chat that returns the list of most recent conversation IDs
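The token-count circuit breaker for the toxicity rule can be sketched roughly as follows. This is a minimal illustration only: the function names, return shape, and whitespace tokenization are assumptions for the sketch, not Arthur's actual implementation.

```python
# Minimal sketch of a token-count circuit breaker (hypothetical names;
# not Arthur's actual implementation).

DEFAULT_TOKEN_LIMIT = 1200  # default threshold mentioned in the release notes


def count_tokens(text: str) -> int:
    # Whitespace tokenization as a stand-in for a real tokenizer.
    return len(text.split())


def run_toxicity_rule(text: str) -> str:
    # Placeholder for the actual model-backed toxicity evaluation.
    return "non-toxic"


def evaluate_toxicity_with_breaker(text: str,
                                   token_limit: int = DEFAULT_TOKEN_LIMIT) -> dict:
    """Skip evaluation when the input is too large to score within
    latency budgets, instead of letting the request time out."""
    token_count = count_tokens(text)
    if token_count > token_limit:
        # Circuit breaker: skip the rule rather than risk a failed request.
        return {"skipped": True,
                "reason": f"token count {token_count} exceeds {token_limit}"}
    return {"skipped": False, "result": run_toxicity_rule(text)}
```

The design trade-off is that over-limit inputs go unevaluated, which the notes frame as preferable to requests failing from latency.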
Bug Fixes:
- Fixed an issue where claims with a negative order value were shown first in the UI even when they appeared at the bottom of the overall message