A nascent sector of algorithm auditing is emerging, with various start-ups arising in the wake of concerns about the human impacts of AI. This matters because there is a lack of regulation and standards governing the deployment of AI, despite growing awareness of its scope for harm and inappropriate use.
For example, until very recently the predictive AI tool HireVue (marketed as a way to assess job applicants at interview) used facial analysis to help generate competency scores, despite well-documented risks of racial bias in facial-recognition systems. Moreover, HireVue was unable to explain exactly how facial-analysis scores relate to competence, given the black-box nature of machine learning systems.
There is an obvious need for independent bodies to review algorithms of this nature before they are deployed, and various start-ups are stepping into the space. But without a supportive regulatory framework, start-ups alone do not seem to be enough. HireVue’s algorithm has since been through an independent auditing process, which the company cited as an exoneration of its methods. However, others in the field (such as the AI governance specialist Alex Engler) have claimed that HireVue mischaracterised the results of the audit and in effect used it to engage in ‘ethics-washing’ – pointing out that this is easy to do given the lack of regulatory oversight or defined standards. Cathy O’Neil, founder of the algorithm-auditing start-up that conducted the audit HireVue cited, agrees, warning of the risk of corruption in the sector unless regulation supports its healthy development.
What algorithm auditing startups need to succeed https://venturebeat.com/2021/01/30/what-algorithm-auditing-startups-need-to-succeed/
Independent auditors are struggling to hold AI companies accountable https://www.fastcompany.com/90597594/ai-algorithm-auditing-hirevue
This bot judges how much you smile during your job interview https://www.fastcompany.com/90284772/this-bot-judges-how-much-you-smile-during-your-job-interview