A new artificial intelligence framework called Justice AI GPT has been developed to address systemic bias in AI systems used for workplace decisions. Created by technologist Christian Ortiz, the tool is designed to identify and correct biased language in recruitment, hiring, and performance evaluations by using a unique dataset built on non-Western knowledge systems.
Key Takeaways
- Justice AI GPT is a new framework designed to prevent bias in AI tools at their source, rather than correcting it after the fact.
- It was created by technologist and decolonial social scientist Christian Ortiz, who identified bias as a fundamental design element in current AI models.
- The system works by pairing large language models with a proprietary "decolonial dataset" developed in collaboration with over 560 global experts.
- It is currently used by 112 organizations to audit HR policies, rewrite job descriptions, and create more inclusive training materials.
- The tool is available as a plug-in for platforms like ChatGPT for a monthly fee of $20, with custom enterprise solutions also offered.
The Problem of AI Bias in the Workplace
Many companies now use artificial intelligence to streamline human resources processes, including recruitment, applicant screening, and employee performance reviews. However, these AI tools often reflect and amplify existing human biases, leading to unfair outcomes.
Experts have warned that without careful oversight, AI systems can perpetuate discriminatory patterns. These systems are trained on vast amounts of historical data, which can contain inherent biases related to race, gender, and socioeconomic background.
How AI Bias Manifests
According to Ava Toro, a global research consultant for the Consumer Climate Report, biased AI can systematically misjudge an individual's qualifications. For example, an uncalibrated system might devalue experience gained at a small business compared to a large corporation, disproportionately affecting candidates from diverse backgrounds.
This systemic issue means that AI might penalize candidates with non-Western names, educational paths, or communication styles that differ from a perceived norm. Phrases in job descriptions like "cultural fit" or "strong communication skills" can be interpreted by AI in ways that favor a narrow, dominant cultural standard.
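To make the issue concrete, here is a minimal sketch of how coded phrases in a job posting could be flagged mechanically. The phrase list and suggested rewrites are hypothetical illustrations, not Justice AI GPT's actual lexicon or method.

```python
# Illustrative only: a minimal keyword-based screen for coded language in
# a job posting. The phrases and suggestions below are hypothetical
# examples, not Justice AI GPT's actual lexicon.

CODED_PHRASES = {
    "cultural fit": "describe the specific team norms or values you mean",
    "strong communication skills": "name the concrete tasks, e.g. writing reports or client calls",
    "native english speaker": "specify the proficiency level the role actually requires",
}

def flag_coded_language(posting: str) -> list[tuple[str, str]]:
    """Return (phrase, suggestion) pairs for each coded phrase found."""
    text = posting.lower()
    return [(phrase, tip) for phrase, tip in CODED_PHRASES.items() if phrase in text]

posting = "We want a native English speaker who is a great cultural fit."
for phrase, tip in flag_coded_language(posting):
    print(f"Flagged '{phrase}': {tip}")
```

A real audit would go well beyond keyword matching, but even this simple pass shows how seemingly neutral boilerplate can be surfaced for human review.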
A New Approach to Countering Bias
Christian Ortiz, a technologist and decolonial social scientist, developed Justice AI GPT to tackle this problem at its root. Ortiz concluded that bias in AI was not a mere flaw but a core part of its design, stemming from the datasets used to train these models.
"Bias was not a glitch in AI, it was the design. I asked myself, ‘Where does this bias come from, and what would it take to dismantle it completely?’ That question led me to build Justice AI GPT."
– Christian Ortiz, Creator of Justice AI GPT
To build the system, Ortiz authored the Decolonial Intelligence Algorithmic Framework™ (DIAL) and created what he describes as the world's first decolonial dataset. This foundational work forms the intellectual property behind the Justice AI GPT platform.
How Justice AI GPT Functions
The system operates by identifying and neutralizing biased information originating from what Ortiz terms "Eurocentric, Western colonial datasets." Unlike other tools that attempt to apply a corrective filter after a biased output is generated, Justice AI GPT aims to prevent the bias from forming in the first place.
It achieves this by augmenting the large training corpora behind models like OpenAI's GPT with its own decolonial dataset. This dataset was compiled with contributions from more than 560 global experts, each contributing over three decades of specialized knowledge from their respective communities and professions.
The result is a framework designed not to replicate existing societal patterns but to actively counteract them, creating more equitable and accurate outputs.
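Ortiz has not published the framework's internals, but one common way to pair a base model with a supplementary corpus is retrieval-augmented prompting, where relevant passages from the added dataset are placed into the model's context before it answers. The sketch below illustrates that general pattern with naive keyword retrieval; the corpus entries, function names, and scoring are assumptions for illustration only, not a description of DIAL or Justice AI GPT's proprietary pipeline.

```python
# Illustrative sketch of retrieval-augmented prompting: passages from a
# supplementary corpus are selected and prepended to the prompt so the
# model weighs them when answering. This is a generic pattern, not
# Justice AI GPT's actual mechanism.

DECOLONIAL_CORPUS = [  # hypothetical sample entries
    "Career progression in small family businesses often spans multiple roles at once.",
    "Apprenticeship and community-based training are formal credentials in many regions.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus passages by naive word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the model considers it before answering."""
    context = "\n".join(retrieve(query, DECOLONIAL_CORPUS))
    return f"Context:\n{context}\n\nTask:\n{query}"

print(build_prompt("Evaluate a candidate whose experience is in a small family business."))
```

Production systems typically replace the word-overlap scoring with embedding-based similarity search, but the structure, retrieve then prompt, is the same.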
Practical Applications and Adoption
Justice AI GPT is already being used in various organizational settings to create more equitable workplace environments. According to Ortiz, 112 organizations have implemented the tool for a range of HR and policy-related tasks.
Current Use Cases
- Bias Audits: Analyzing existing company policies and procedures to identify hidden biases (a sketch of this workflow follows the list).
- Inclusive Language: Rewriting job postings and HR documents to remove coded language and affirm diverse communication styles, including multilingualism and neurodivergence.
- Training and Development: Reshaping leadership and diversity, equity, and inclusion (DEI) training modules to embed equity as a core principle.
- Hiring Processes: Preventing qualified candidates from being unfairly filtered out due to ethnic names or non-traditional educational backgrounds.
Ortiz stated that two major corporations are currently deploying Justice AI GPT across their entire HR departments. The goal is to ensure employees are evaluated based on their contributions and skills, rather than being judged against unspoken cultural norms.
Accessibility and Future Vision
The tool is designed to be widely accessible. Individual users can access Justice AI GPT as a plug-in through major platforms like ChatGPT, Claude, and Gemini for a monthly subscription of $20. For larger organizations, custom-tailored versions are available to meet specific workplace needs.
Ortiz has ambitious goals for the platform's future. He envisions the tool reaching a million users within the next few years and being adopted by organizations and governments worldwide.
"The global landscape is shifting, markets are interconnected, migration is reshaping demographics, and technology has collapsed distance," Ortiz explained. He believes that tools promoting fair cross-cultural communication are essential for building solidarity rather than deepening divisions in an increasingly connected world.