AI for Good
Using AI to tackle society’s biggest challenges.
As an industry leader, we are committed to the responsible use of AI. Here’s how we’re using AI applications to protect privacy and combat unconscious bias.
Uncovering Unconscious Bias
Unconscious bias is easy to miss and far more pervasive in the workplace than blatant discrimination. It contributes to lower wages, fewer opportunities for advancement, and higher turnover.
As part of our mission to use AI for good, we invested months in research and development and worked with leading ethicists and business leaders to build a solution that detects unconscious bias efficiently and accurately. The result: our Unconscious Bias Detector.
How the Unconscious Bias Detector Works
Analyzing employee demographic information and performance reviews with Text IQ's socio-linguistic hypergraph technology can help you understand context and reveal patterns of potential bias.
High-level snapshot
The easy-to-read report dashboard provides a clear view of what appears in the performance reviews: a diversity breakdown, a phrasing analysis, and a manager comment sentiment audit with positivity scores (a simplified sketch of this kind of scoring follows these highlights).
Analysis drill-down
This granular view surfaces commonly used phrases across the organization and provides a reviewer-centric report.
Objective approach
Sophisticated AI surfaces potential occurrences of unconscious bias, empowering your team to take action.
Protecting Privacy with AI
The Freedom of Information Act (FOIA) is a critical tool for ensuring that public information remains public, an important aspect of democracy. It has also been essential for journalists reporting in the wake of the COVID-19 pandemic.
However, it’s easy for personal information to be released in requested documents, either due to oversight or it not being covered under privacy laws. While it may not be illegal to publish these documents, it is unethical, and responsible journalists are working to avoid trampling individual privacy rights while upholding rights to public information.
How the AI to Protect Privacy tool works:
Input
Millions of documents and other unstructured data, such as declassified government files and emails, are uploaded into the AI to Protect Privacy dashboard.
Analyze & Redact
The AI analyzes the data, using context and natural language processing (NLP) to identify which information is sensitive. It then automatically redacts that information and returns the redacted documents (a simplified sketch of this kind of redaction follows these steps).
Protect
Once all sensitive information has been redacted, the documents can be published without infringing individual citizens' right to privacy.
FAQs
How does the AI discover unconscious bias?
The machine learning model is trained for each bias category. In the examination of unconscious gender bias, for example, the model looks for differences in structured information, such as quantitative scoring that diverges by gender. It also looks for differences in language within unstructured information, such as performance review commentary that refers to personality traits more frequently for one gender than another.
How is the decision made as to whether bias is present?
Ultimately, the users of the analysis make these decisions. The AI helps flag patterns that are very difficult for humans to detect across the large volume of data processed. The Text IQ analysis helps users identify these trends and then decide whether bias is present and how it should be mitigated.
How is privacy protected for individuals within an organization?
In general, the information used in the analysis is already available to employees, including, for example, their performance reviews. Additional information is already available in HR systems. Organizations can also anonymize employee names in the input and still identify bias by department, geography, manager, and so on.
Can unconscious bias in hiring also be identified?
Absolutely, given a large enough volume of data from which to develop the machine learning model. Today's solution, however, is designed for performance evaluations and reviews.
What actions should an organization take based on the results?
Many organizations already look for these types of bias by reviewing their available data and, from those assessments, putting mitigation strategies in place. With this new technology, those organizations gain much deeper insight into bias through analysis of all the unstructured information that exists, including review commentary, interview notes, messages, and more. Unconscious bias that is identified can then be mitigated through the same approaches, such as training, process changes, and manager coaching.
For what types of organizations is this suitable?
The approach is applicable to any type of organization large enough to have an appropriate dataset to analyze, including publicly traded and private companies, public sector agencies, and large membership and nonprofit organizations. We recommend this tool for organizations with at least 1,000 employees.
How large of a dataset is needed to get started?
The answer depends on the application, but a good rule of thumb is that an organization needs at least 10,000 documents (for example, historical performance reviews) to benefit from an accurate model.
Are the unconscious bias detector models built from shared data across companies?
No, each organization benefits from a unique detector model built just for them. From our work with companies in other areas of sensitive data identification and categorization, we've learned that every organization has its own language, job functions, job titles, organizational structure, and other variables that make sharing machine learning models across organizations impractical. This approach also ensures that no private data is ever shared among users.