Post by account_disabled on Feb 19, 2024 21:31:46 GMT -6
Complex models are not easy to interpret. Tools built on these models, also known as black-box models, do tend to perform better than simpler models, but at the cost of lacking a clear mechanism for understanding how they work. Understanding a model's internals may not be a big problem for certain tasks, such as image classification, where we can eventually see clearly whether images of dogs are correctly classified. But things quickly get complicated when we are dealing with subjective human data, such as whether someone is suitable for a job. There are already many cases where well-intentioned AI tools end up introducing more bias than they reduce, at a huge human cost. To address this issue in the human resources field, companies operating in New York and California that provide AI services for hiring decisions will have to undergo bias audits starting this year. It remains to be seen what standards will be developed, but what is certain is that interpretability will be a key component.

How interpretable models can reduce bias in hiring

An interpretable model can reduce bias in hiring because it can clearly communicate whether a potentially biasing variable is relevant to the analysis. This may prompt the company deploying the model to examine that variable further: should it be retained in the model, or should it be handled earlier, in preprocessing or data cleaning? Some variables are obvious; we should not factor race or ethnicity into hiring decisions. But other variables are trickier.
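As a minimal sketch of how an interpretable model surfaces this, the example below fits a plain logistic regression to a synthetic hiring dataset and prints its coefficients. Everything here is an assumption for illustration: the data is generated, and the column names (years_experience, skills_score, zip_code_group) and the idea of a zip-code grouping acting as a proxy variable are hypothetical, not taken from any real hiring system.

```python
# Minimal sketch: inspect whether a potentially biasing variable carries weight
# in a transparent (linear) hiring model. All data and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "years_experience": rng.normal(5, 2, n),
    "skills_score": rng.normal(70, 10, n),
    "zip_code_group": rng.integers(0, 2, n),  # stand-in for a proxy variable to audit
})

# Synthetic "hired" label driven only by the legitimate signals, for illustration.
logits = 0.8 * (df["skills_score"] - 70) / 10 + 0.5 * (df["years_experience"] - 5) / 2
df["hired"] = (logits + rng.normal(0, 1, n) > 0).astype(int)

features = ["years_experience", "skills_score", "zip_code_group"]
X = StandardScaler().fit_transform(df[features])
model = LogisticRegression().fit(X, df["hired"])

# Because the model is linear, each coefficient directly communicates how much a
# variable drives the prediction; a large weight on zip_code_group would flag it
# for further examination, removal, or handling in preprocessing.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```

In this kind of audit, the point is not the exact numbers but the fact that a simple, interpretable model makes the relevance of each variable visible at all, which is exactly what a black-box model fails to do without extra tooling.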