October 22, 2024 - v2.0.544

by Noriaki Tatsumi

New Feature:

  • Shield can now run on GPUs for ECS deployments, resulting in significantly faster performance.

Product Enhancements:

  • Users can now see the subcategories of toxicity (violation types) in the UI
  • The sentence transformer model is now bundled inside the Shield container image instead of downloading it during server startup. This change results in a more reliable and faster server startup.
  • Improved the performance of the inference query endpoint

Bug Fixes:

  • Fixed an issue with refreshing the task list in the UI
  • The delete functionality when managing keyword rules in the UI now works as expected

October 16, 2024 - v2.0.532

by Noriaki Tatsumi

Bug Fixes:

  • Fixed an issue in Shield that required unnecessary embedding model configuration when Arthur Chat is disabled

October 15, 2024 - v2.0.531

by Noriaki Tatsumi

Product Enhancements:

  • Further restricted the contents of the Shield container by building on a distroless base image

October 8, 2024 - v2.0.524

by Noriaki Tatsumi

Bug Fix:

  • Fixed an issue that prevented API keys from being used to create default task rules

Product Enhancements:

  • When toxicity is detected, Shield now returns the subcategory of toxicity, or violation type (i.e. Profanity, Harmful Request, or Toxic Content, which covers hate speech and other discriminatory language)
  • Users can now add “hints”, a string descriptor of the type of sensitive data to be caught, when configuring the sensitive data rule. A hint gives the LLM additional information on what to look for in the text it is evaluating, leading to improved performance. Refer to the rule configuration guide for more detail.
  • The product now supports the use of self-signed SSL certificates.
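As a rough illustration of how a hint narrows the sensitive data rule's scope, the sketch below assembles a rule payload. The payload shape and field names are illustrative assumptions, not Shield's documented schema; see the rule configuration guide for the real format.

```python
# Hypothetical sensitive-data rule payload; the "hints" field stands in
# for the new string descriptor described above.
def build_sensitive_data_rule(hint: str) -> dict:
    """Assemble a rule configuration carrying a free-text hint that
    describes the kind of sensitive data the LLM should look for."""
    return {
        "name": "sensitive_data",
        "type": "SensitiveDataRule",       # illustrative type name
        "config": {
            # The hint gives the LLM extra context on what to catch,
            # improving detection of that specific data type.
            "hints": [hint],
        },
    }

rule = build_sensitive_data_rule("bank account numbers and routing codes")
print(rule["config"]["hints"])
```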

Bug Fixes:

  • Fixed an issue introduced in v2.0.495 where the embedding model configuration, which is only used when Arthur Chat is enabled, was required even when Arthur Chat was disabled
  • Fixed an issue with user role migrations in the Shield v2.0.495 upgrade

Notes:

  • We rolled back the recently released code classifier and are currently making improvements based on insights from new datasets.

Product Enhancements:

  • Added a code classifier to the Prompt Injection rule to detect and skip evaluation of code to reduce false positives
  • The toxicity rule check’s max token limit configuration is now exposed on the installers
  • Added a new endpoint for resetting user passwords. The user creation endpoint also gained a new attribute that controls whether the password must be changed at first login.
  • Refactored the pre-defined UI access roles into TASK-ADMIN and CHAT-USER roles
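A sketch of how the new user-creation attribute might be used when building a request body. The field name `change_password_on_login` and the payload shape are assumptions for illustration; consult the API reference for the actual schema.

```python
import json

# Hypothetical user-creation payload; "change_password_on_login" stands in
# for the new attribute that forces a password change at first login.
def new_user_payload(email: str, temp_password: str,
                     force_change: bool = True) -> str:
    """Serialize a user-creation request body as JSON."""
    return json.dumps({
        "email": email,
        "password": temp_password,
        # When true, the user must set a new password at first login.
        "change_password_on_login": force_change,
    })

body = new_user_payload("analyst@example.com", "Temp#1234")
print(body)
```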

Bug Fixes:

  • Improved request input validation to make Shield more robust when handling unexpected characters, such as the null character.
  • Fixed the issue with duplicate logging in the application log

August 9, 2024 - v2.0.463

by Noriaki Tatsumi

New Features:

  • OpenAI has been added as an LLM service provider option, in addition to Azure, for the Shield rules that leverage LLMs (e.g. Hallucination, Sensitive Data)
  • Our most recent experimental hallucination rule has now been promoted to the Hallucination V3 rule in beta. Benefits of V3 include increased speed and decreased cost compared to V2, and improved labeling through algorithmically optimized language models that require fewer few-shot examples for training

Product Enhancements:

  • The claim classifier in the hallucination rule has been retrained with more data to improve the detection of text chunks that should be skipped for evaluation as claims
  • Improved the stability of the toxicity rule executions. A circuit breaker has been introduced to skip the rule evaluation if the number of tokens exceeds the default value of 1,200 to reduce the number of requests that fail due to latency.
  • A new endpoint was introduced in Arthur Chat that returns the list of most recent conversation IDs
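The circuit breaker described above can be pictured as a simple pre-check before the toxicity evaluation runs. This is a minimal sketch under stated assumptions: the whitespace token counter and function names are placeholders, not Shield's internals, and only the default limit of 1,200 tokens comes from the release note.

```python
DEFAULT_MAX_TOKENS = 1200  # default limit noted above; configurable

def approx_token_count(text: str) -> int:
    """Crude whitespace tokenizer standing in for the real model tokenizer."""
    return len(text.split())

def should_run_toxicity_rule(text: str,
                             max_tokens: int = DEFAULT_MAX_TOKENS) -> bool:
    """Skip the toxicity evaluation entirely when the input is too long,
    so oversized requests fail fast instead of timing out."""
    return approx_token_count(text) <= max_tokens

print(should_run_toxicity_rule("a short prompt"))  # True
print(should_run_toxicity_rule("word " * 2000))    # exceeds limit -> False
```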

Bug Fixes:

  • Fixed an issue where claims with a negative order value were shown first in the UI even when they appeared at the bottom of the overall message