Dscout has taken measures to manage artificial intelligence (AI) cyber risk prior to releasing AI features. Our objective is to build trustworthy AI solutions with appropriate security controls that protect your data. To achieve this, we have implemented trust guardrails and data protection measures.

Industry standards such as ISO 42001 and the NIST AI Risk Management Framework (AI RMF) inform our approach to AI risk management. Below is a brief summary of the steps we have taken to deliver responsible AI features.

Trust guardrails

To build trust before releasing new AI features, we apply the following guardrails:

  • Minimize bias in AI output.
  • Continually assess and adjust models to increase accuracy.
  • Explicitly label AI outputs as AI-generated to provide transparency (see the sketch below).
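
For illustration, here is a minimal sketch in Python of how AI-generated text can carry an explicit provenance label before it reaches a user interface. The field names and helper function are hypothetical; Dscout's actual schema and implementation are not public.

    # Hypothetical sketch: wrap model output with provenance metadata so the
    # UI can render an explicit "AI-generated" label. Field names are
    # illustrative, not Dscout's actual schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class LabeledOutput:
        text: str                 # the model's response
        generated_by: str = "ai"  # explicit provenance flag
        model: str = "unknown"    # which model produced the text
        created_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def label_ai_output(text: str, model: str) -> LabeledOutput:
        """Attach an AI-generated label to raw model output."""
        return LabeledOutput(text=text, model=model)

    summary = label_ai_output("Participants preferred option B.", model="gpt-4o")
    # The front end checks generated_by == "ai" and shows an "AI-generated" badge.
    print(summary)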

Data protection measures

Security is embedded into our AI features to protect against relevant threats. As we build AI features that leverage large language models (LLMs), Dscout follows the guidance in the OWASP Top 10 for Large Language Model Applications. Here are some things you should know about our data protection measures:

  • Third-party AI provider APIs are secured, and API keys are properly managed (see the sketch after this list).
  • All AI data is encrypted in transit and at rest.
  • Data is not persistently stored in the environments of third-party AI providers (for example, OpenAI).
  • Third-party AI providers are restricted from using Dscout customer data to train their models.
  • Administrator access to the AI provider environment is restricted and requires strong authentication.
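
To make these controls concrete, here is a minimal sketch in Python using the OpenAI SDK as an example provider. This is illustrative only, not Dscout's actual integration code, and the `store` parameter is available in recent versions of the SDK.

    # Hypothetical sketch of the controls listed above, not Dscout's integration.
    import os
    from openai import OpenAI

    # Key management: the API key is read from the environment (for example,
    # injected by a secrets manager), never hardcoded or committed to source.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    # Encryption in transit: the SDK calls https://api.openai.com over TLS.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Summarize this interview note."}],
        # Persistence: ask the provider not to store the completion. OpenAI's
        # API also does not train on API data by default; stricter guarantees
        # (such as zero data retention) are negotiated at the account level.
        store=False,
    )
    print(response.choices[0].message.content)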

Keep in mind that Dscout AI features are currently opt-in only. This means that AI features will not be enabled without your consent.

We are committed to building secure AI features that can be trusted. Please contact us at security@dscout.com if you have questions.
