As new technologies such as Artificial Intelligence (AI) are leveraged to optimize the hiring process, companies face the ethical challenge of ensuring that AI models keep that process equitable and unbiased.
AI models have been used to assign job candidates scores based on their facial attributes, voice analytics, and use of keywords in spoken and written language during interviews or in resumes. From these signals, models predict a candidate's organizational fit, job suitability, and probability of success in a specific role.
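To make this concrete, here is a minimal sketch of the shape such a scoring model can take. Everything in it is invented for illustration: the feature names, weights, and scoring function are assumptions and do not reflect any real vendor's system.

```python
# Hypothetical candidate-scoring sketch. Feature names and weights are
# invented for illustration and do not represent any real hiring tool.

CANDIDATE_FEATURES = ("keyword_match", "speech_fluency", "typing_speed")

# Invented weights over normalized features in [0, 1].
WEIGHTS = {"keyword_match": 0.5, "speech_fluency": 0.3, "typing_speed": 0.2}


def suitability_score(candidate: dict) -> float:
    """Weighted linear score in [0, 1]; missing features count as 0."""
    return sum(WEIGHTS[f] * candidate.get(f, 0.0) for f in CANDIDATE_FEATURES)


if __name__ == "__main__":
    applicant = {"keyword_match": 0.9, "speech_fluency": 0.4, "typing_speed": 0.2}
    print(f"Predicted suitability: {suitability_score(applicant):.2f}")  # 0.61
```

Even in this toy form, features such as speech fluency or typing speed look neutral but directly encode the disadvantages described below.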
Individuals who stutter, who express language or thought in neuro-diverse ways, or whose physical limitations impair engagement with a keyboard have often been disadvantaged and discriminated against by such machine learning models.
Earlier this year, the Office of the United Nations High Commissioner for Human Rights (OHCHR) published a report on the rights of people with disabilities and how AI can help or hinder them. The report sets a clear precedent for governments, civil society, and institutions to enshrine the rights of people with disabilities in future artificial intelligence tools and technologies.
Following the OHCHR report's lead, the US Department of Justice and the Equal Employment Opportunity Commission (EEOC) are leading the charge. These public policy institutions offer guidance, best practices, and policies that companies can use to build AI tools that comply with fair hiring practices and serve Americans with disabilities.
The US Government has clarified how accommodations, screening, disability-related inquiries, and medical examinations should be carried out when AI tools are leveraged in the job application process.
Beyond the Department of Justice, the Federal Trade Commission and the Bureau of Labor Statistics also stress the importance of fairness in screening tools, so that AI remains a force for good and does not create barriers for people with disabilities.
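None of these agencies prescribes code, but one widely used first-pass audit of screening tools is the "four-fifths" (80%) rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: a group's selection rate should be at least 80% of the highest group's rate. The sketch below applies that check to made-up numbers; the group labels and counts are illustrative assumptions.

```python
# Hypothetical four-fifths (80%) rule audit of a screening tool.
# Group labels and applicant counts below are invented for illustration.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group that the tool selects."""
    return selected / applicants


def four_fifths_check(rates: dict) -> dict:
    """Flag groups whose selection rate is below 80% of the best group's."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}


if __name__ == "__main__":
    rates = {
        "disclosed_disability": selection_rate(12, 100),    # 12%
        "no_disclosed_disability": selection_rate(30, 100), # 30%
    }
    print(four_fifths_check(rates))
    # {'disclosed_disability': False, 'no_disclosed_disability': True}
    # A False result signals possible adverse impact worth investigating.
```

Passing such a check is not a legal safe harbor; it simply flags disparities that warrant a closer look at the tool and the accommodations it offers.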
The US is leading the drive to create an equitable space for people with disabilities, and the UK, Canada, and the European Commission (on behalf of the EU) are formulating legal frameworks for artificial intelligence that align AI tools with the United Nations Convention on the Rights of Persons with Disabilities (CRPD).
Aside from governments and international bodies, various organizations are hiring AI Ethics Officers to ensure the ethical implementation of AI solutions. Lloyds Bank recently recruited a Head of Data Ethics to lead discussions on the importance of data ethics checks when implementing AI technologies. Other organizations, including Boston Consulting Group (BCG), Microsoft, Google, IBM, and Salesforce, have hired AI Ethics or Responsible AI officers to oversee the ethical use of AI technologies.
As AI technologies continue to play a critical role in organizational processes, the ethical application of AI will become an essential part of corporate governance, compliance, and ethics.