Success Story

Confronting Bias in AI

Problem

Biases involving race, gender, disability, and other characteristics may be inadvertently embedded in AI systems, causing computational systems to replicate historical inequities. The explosion of new AI technology demands a call to action from regulators and organizations to mitigate those risks with best practices for AI applications: testing algorithms for discriminatory outcomes, leveraging the NIST Voluntary Framework to examine AI systems for issues involving fairness and equity, and studying algorithms and their models for biased outcomes. This means building a democratic computational tool set that considers the ethics and morality owed to the social beings it affects.

Solution: A Bias Toolkit

WWT has been on the algorithmic-accountability front lines, serving customers with solutions that investigate, monitor, and audit complex AI systems. The Bias Toolkit is a collection of tools designed to help mitigate bias in federal data by addressing issues in data planning, curation, analysis, and dissemination.

  • Algorithm audits to prevent algorithmic harm
    The Model Card Generator – A model card is a documentation tool that increases transparency and shares information with a wider audience about an ML, AI, or automation model’s intent, data, architecture, and performance. It reduces bias in government machine learning workflows by examining the model’s ability to perform across sensitive classes and by collecting this information in a format readable by a wide audience. This audit function provides context and transparency into a model’s development and performance for effective public oversight and accountability.
  • Natural Language Processing to ensure empathy
    The Ableist Language Detector – Ableist language is language that is offensive to people with disabilities and can make them feel excluded from jobs they are qualified to perform. The detector is a natural language processing-powered web application that checks job postings for ableist language and recommends alternative wording to make the posting more inclusive for persons with disabilities.
  • Algorithm accountability investigation
    The Data Generation Tool – A suite of Jupyter (Python) notebooks that produces synthetic datasets, enabling a user to compare the expected behavior and the actual output of a given ML model. Each notebook addresses a different practical application that may be relevant to customers’ models. This human intervention can detect potential bias, flagged when two observations that are identical except for a sensitive characteristic are treated differently by the model.
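The model-card idea above can be sketched in a few lines. This is an illustrative example only; the function and field names are hypothetical, not the Bias Toolkit's actual API. It computes accuracy per sensitive group and collects the results into a readable card:

```python
# Hypothetical model-card sketch: report a model's performance broken
# out by sensitive group, so gaps across groups become visible.

def group_accuracy(y_true, y_pred, groups):
    """Accuracy per sensitive group (e.g. to surface performance gaps)."""
    stats = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        stats[g] = correct / len(idx)
    return stats

def render_model_card(name, intent, y_true, y_pred, groups):
    """Render a minimal plain-text model card with per-group performance."""
    lines = [f"Model Card: {name}", f"Intent: {intent}", "Performance by group:"]
    for g, acc in sorted(group_accuracy(y_true, y_pred, groups).items()):
        lines.append(f"  {g}: accuracy = {acc:.2f}")
    return "\n".join(lines)
```

A real model card would also document training data, architecture, and intended use; the point here is that per-group metrics make disparate performance auditable at a glance.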
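The Ableist Language Detector described above can be approximated with a lexicon lookup. This sketch is an assumption about the approach, not the detector's actual implementation, and the two lexicon entries are examples only:

```python
# Illustrative lexicon-based checker; the real detector is an NLP-powered
# web application, and these term/alternative pairs are example entries.
import re

ABLEIST_TERMS = {
    "able-bodied": "non-disabled",
    "suffers from": "has",
}

def check_posting(text):
    """Return (flagged term, suggested alternative) pairs found in a posting."""
    findings = []
    for term, alternative in ABLEIST_TERMS.items():
        if re.search(re.escape(term), text, flags=re.IGNORECASE):
            findings.append((term, alternative))
    return findings
```

A production system would use NLP to handle context (a phrase can be ableist in one sentence and neutral in another), but the recommend-an-alternative workflow is the same.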
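The Data Generation Tool's paired-observation idea can be sketched as follows, assuming a hypothetical record schema (`experience`, `score`, `group`) and a deliberately biased stand-in model; a real audit would call the customer's model instead:

```python
# Sketch of a paired-observation audit: generate synthetic records that
# differ only in a sensitive attribute and flag divergent model outputs.
import random

def make_pairs(n, seed=0):
    """Synthetic record pairs identical except for the 'group' field."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        base = {"experience": rng.randint(0, 20), "score": rng.random()}
        pairs.append((dict(base, group="A"), dict(base, group="B")))
    return pairs

def audit(model, pairs):
    """Return pairs where the decision changes with the group alone."""
    return [(a, b) for a, b in pairs if model(a) != model(b)]

def biased_model(x):
    """Stand-in model that applies a stricter threshold to group B."""
    return x["score"] > (0.5 if x["group"] == "A" else 0.7)
```

Any pair the audit returns is direct evidence that the sensitive attribute alone changed the outcome, which is exactly the "same characteristics, different treatment" signal the notebooks look for.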

Impact

  • Kept historical problems from being reflected in the computational system, resulting in trustworthy AI for the future of the organization
  • Accelerated federal adoption of AI within the agency, assisting in modernizing operations, cultivating public trust in AI, and exemplifying agency leadership in the use of trustworthy AI
  • Greater empathy in how individuals reason about, and mitigate harm in, large population data sets
  • An AI-forward organization that is breaking boundaries and holding systems accountable with advanced risk management techniques

Certifications

Socioeconomic Certifications

Small Business Administration 8(a) program

Small Business Enterprise (SBE)

Disadvantaged Business Enterprise (DBE)

Maryland Minority Business Enterprise Certified (MBE)

Industry certifications

ISO 9001:2015, Quality Management Systems

Contract Vehicles

Download Our Resources