9 Emerging Use Cases for Responsible AI
May 13, 2024

Why Responsible AI (RAI) is important

Gen AI is a powerful tool for building software; however, companies face adoption challenges driven by concerns about AI safety and responsibility. In surveys, enterprises' main roadblocks are privacy laws (31%) and AI governance and orchestration (31%), and more than 50% of enterprises cite the risk of bias and hallucinations. Amid increasing AI lawsuits and upcoming regulations, developing and adopting responsible AI could be key to mitigating legal risk and enabling this transformational technology.

To examine the use cases of Responsible AI (RAI) closely, this post unpacks RAI from two perspectives: jobs-to-be-done (JTBD) and solution mapping.

What are RAI's use cases?

There are various RAI use cases across the AI stack. These solutions can be organized along two dimensions, the AI engineering workflow and the "value" of responsible AI, and grouped under three jobs-to-be-done (JTBD): 1) Assess and improve AI quality, 2) Make AI secure and privacy-preserving, and 3) Prepare responsible data.

Responsible AI Market Map
  • AI engineering workflow includes 4 stages: Data preprocessing, Model adaptation, Serving, and Inference

  • Responsible AI value includes 5 categories: Explainability, Bias & Fairness, System Robustness, Privacy & Protection, System Security

Using the workflow and value map above as a guide, we'll walk through 9 emerging use cases that we believe are missing in the market but have growing demand. The remaining use cases listed on the map are established RAI use cases with active companies developing solutions for them.

Emerging RAI use cases

JTBD 1: Assess and Improve AI Quality

The common use cases for assessing and improving AI quality and reducing toxicity include traditional model evaluation and monitoring, robust AI deployment, and post-deployment guardrails.

A few big hurdles around evaluation and monitoring are how to scale evaluation and self-alignment in the model layer, and how to properly interpret deep learning models given their complexity, non-linear transformations, and large parameter counts. Recent research developments have enhanced model understanding and alignment, enabling startups to build tools based on this research:

  1. Constitutional AI, which employs models as judges to evaluate other models for self-alignment, aiming to improve helpfulness and reduce harmfulness (a minimal sketch of the model-as-judge pattern follows this list).

  2. Feature Interpretability via dictionary learning, which represents features as directions in a neural network's activation space, turning them into human-understandable units that correspond to aspects of reality. This greatly enhances AI interpretability.
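
To make the model-as-judge pattern concrete, here is a minimal Python sketch in the spirit of Constitutional AI: a judge model scores another model's output against a short list of written principles. The `judge` callable and the principles are illustrative placeholders, not a specific vendor API.

```python
from typing import Callable

# Illustrative, simplified principles; real constitutions are longer.
PRINCIPLES = [
    "The response should be helpful and answer the question asked.",
    "The response should not contain harmful or toxic content.",
    "The response should not assert facts it cannot support.",
]

def judge_response(judge: Callable[[str], str],
                   prompt: str, response: str) -> list[bool]:
    """Ask a judge model whether the response satisfies each principle."""
    verdicts = []
    for principle in PRINCIPLES:
        critique = (
            f"Principle: {principle}\n"
            f"User prompt: {prompt}\n"
            f"Model response: {response}\n"
            "Does the response satisfy the principle? Answer YES or NO."
        )
        # `judge` wraps whatever LLM completion call you already have.
        verdicts.append(judge(critique).strip().upper().startswith("YES"))
    return verdicts
```

Responses that fail any principle can then be revised or filtered before being reused as self-alignment training data.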

Beyond evaluation in the model layer, many AI builders have questions about how to leverage user feedback to further improve AI generations responsibly and create the "AI feedback loop" programmatically, which introduces a startup opportunity.
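
As an illustration of what such a programmatic feedback loop could look like, here is a minimal sketch that logs per-generation user ratings and converts them into preference pairs; the in-memory list and `FeedbackEvent` type are hypothetical stand-ins for a real event store.

```python
from dataclasses import dataclass

@dataclass
class FeedbackEvent:
    prompt: str
    response: str
    thumbs_up: bool

# In-memory log purely for illustration; production systems would
# persist these events to a database or event stream.
feedback_log: list[FeedbackEvent] = []

def record_feedback(prompt: str, response: str, thumbs_up: bool) -> None:
    """Called by the app whenever a user rates a generation."""
    feedback_log.append(FeedbackEvent(prompt, response, thumbs_up))

def build_preference_pairs() -> list[dict]:
    """Pair liked and disliked responses to the same prompt, the input
    format expected by most preference-tuning (e.g. DPO) pipelines."""
    by_prompt: dict[str, dict[bool, list[str]]] = {}
    for ev in feedback_log:
        groups = by_prompt.setdefault(ev.prompt, {True: [], False: []})
        groups[ev.thumbs_up].append(ev.response)
    pairs = []
    for prompt, groups in by_prompt.items():
        for chosen in groups[True]:
            for rejected in groups[False]:
                pairs.append({"prompt": prompt,
                              "chosen": chosen,
                              "rejected": rejected})
    return pairs
```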

JTBD 2: Make AI Secure and Privacy-Preserving

Additional key components of RAI are security and privacy, which involve protecting user and company data, preventing security attacks on the AI stack, and ensuring AI compliance.

One major challenge of data protection relates to copyright, with more than 10 significant lawsuits initiated in 2023. Determining AI copyright protection is complex: training data originates from a corpus that may include copyrighted works, yet AI models may be covered by fair use overall or in certain domains. These challenges drive an emerging use case of detecting and authorizing copyrighted content.

As more companies build Gen AI apps via RAG pipelines and develop AI agents, which often use internal data to power external interfaces, it is critical to consider security and permissioning so that AI does not leak internal data sources through prompt engineering or human implementation pitfalls. This need is driving several emerging use cases.
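
As one illustration of the permissioning point, here is a minimal sketch of permission-aware retrieval for a RAG pipeline, where access-control labels are enforced before retrieved text ever reaches the prompt; `search_index`, `Document`, and the group model are assumptions for the sketch, not a specific product's API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: frozenset[str]  # groups entitled to see this doc

def retrieve_for_user(query: str, user_groups: set[str],
                      search_index, k: int = 5) -> list[Document]:
    """Retrieve documents for a RAG prompt, filtered by entitlements."""
    # Over-fetch from the vector store, then drop anything the
    # requesting user is not entitled to see.
    candidates: list[Document] = search_index.search(query, k=k * 4)
    permitted = [d for d in candidates if d.allowed_groups & user_groups]
    return permitted[:k]
```

The key design choice is filtering before the prompt is assembled, rather than trusting the model to withhold restricted content: the model cannot leak context it never receives.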

For compliance purposes, another open question is how model explainability will evolve from mathematical evaluation into user-understandable context that helps non-technical users understand AI. Demand will increase as AI compliance professionals and regulators gain the standing to request such reporting once AI regulations (e.g., the US AI Safety EO and the EU AI Act) are finalized.
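
As a sketch of what user-understandable explainability reporting might look like, the function below turns raw feature attributions (computed elsewhere, e.g. via SHAP or gradient-based methods) into a plain-language summary a compliance reviewer could read; the thresholds and wording are illustrative, not a regulatory standard.

```python
def explain_for_reviewer(attributions: dict[str, float],
                         top_n: int = 3) -> str:
    """Summarize the largest feature attributions in plain language."""
    ranked = sorted(attributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    lines = []
    for feature, score in ranked[:top_n]:
        direction = "increased" if score > 0 else "decreased"
        lines.append(f"- '{feature}' {direction} the model's score "
                     f"(weight {score:+.2f}).")
    return "Top factors behind this decision:\n" + "\n".join(lines)

# Hypothetical attribution scores for a loan-style decision:
print(explain_for_reviewer({"income": 0.42, "age": -0.13, "zip_code": 0.05}))
```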

JTBD 3: Examine and Prepare Responsible Data

Besides the choice of algorithms and techniques, data responsibility significantly impacts a model's performance and safety.

Traditional MLOps understands and categorizes training data through data labeling and feature engineering. In Gen AI, the data-responsibility challenge lies in removing undesirable data from large volumes of unstructured data. Emerging data curation use cases include visual data understanding and proprietary data optimization, given the growing demand for image/video Gen AI and custom models.
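
To ground the curation idea, here is a minimal sketch of a cleaning pass over unstructured text: exact-duplicate removal plus a pluggable "undesirable content" check. The keyword stub in the example stands in for a real toxicity or quality classifier.

```python
import hashlib
from typing import Callable, Iterable, Iterator

def curate(docs: Iterable[str],
           is_undesirable: Callable[[str], bool]) -> Iterator[str]:
    """Yield documents that are neither duplicates nor flagged."""
    seen: set[str] = set()
    for doc in docs:
        digest = hashlib.sha256(doc.encode()).hexdigest()
        if digest in seen:        # drop exact duplicates
            continue
        seen.add(digest)
        if is_undesirable(doc):   # drop flagged content
            continue
        yield doc

# Trivial keyword stub standing in for a real classifier:
clean = list(curate(["good sample", "good sample", "BAD sample"],
                    lambda d: "BAD" in d))
```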

What About Multi-Modality Responsible AI?

Gen AI's shift from LLMs to large multimodal models (LMMs) can meaningfully benefit many industries, such as medicine, pharmaceuticals, manufacturing, commerce, and robotics. This means responsible AI needs to become more sophisticated as the level of complexity increases.

Most of the responsible AI use cases for multimodal systems are still in their early stages; a few examples of use cases include:

  • Teachable AI that lets users provide multimodal, real-world labels via mixed-reality approaches, helping AI learn and become safer. [ref]

  • Multimodal Adversary Detection to quantitatively understand the adversarial vulnerabilities of LMMs (a sketch of this idea follows the list). [ref]

  • Spatial Understanding Evaluation, which provides refined datasets and metrics incorporating relationships between object positions, with the goal of reducing unsafe spatial errors in LMMs. [ref]
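
To illustrate the adversary-detection idea from the list above, here is a minimal sketch that probes whether a multimodal model's answer flips under small input perturbations; `lmm_answer` is a placeholder for any image+text model call, and uniform noise stands in for a real attack such as PGD.

```python
import numpy as np

def answer_is_stable(lmm_answer, image: np.ndarray, question: str,
                     eps: float = 2.0, trials: int = 5) -> bool:
    """Check whether small pixel perturbations change the model's answer."""
    baseline = lmm_answer(image, question)
    for _ in range(trials):
        noise = np.random.uniform(-eps, eps, size=image.shape)
        perturbed = np.clip(image + noise, 0, 255).astype(image.dtype)
        if lmm_answer(perturbed, question) != baseline:
            return False  # answer flipped under a small perturbation
    return True
```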

Today, these use cases are at the research or proof-of-concept stage. We are keeping an eye on how multimodal RAI develops in production in the near future.

At Gradient Ventures, we're excited about startups innovating in these emerging areas of Responsible AI. If you're working on solutions in this space, feel free to reach out to our team — we'd be happy to learn more about what you're building.