The warning in this article, that we need to be keenly aware of the bias inherited from AI systems and their training sets, is a real issue that is both dangerous to society and difficult to solve. Even so, the inaccuracies in this article make me grind my teeth.
It’s accurate that facial recognition systems have a much harder time with dark-skinned individuals. However, this is not really an AI problem (facial recognition isn’t really AI). It stems from the fact that darker tones reflect less light and offer lower contrast, which tends to obscure “depth” cues in visual images. That makes it more difficult for the image processing algorithms to accurately pinpoint the landmark points on the face that are used for the key measurements in the face comparison algorithms.
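To make the “key measurements” idea concrete, here is a minimal sketch of landmark-based face comparison: pairwise distances between facial landmark points are normalised and compared as a feature vector. All names and coordinates below are invented for illustration; real systems use far more landmarks (and, increasingly, learned embeddings), but the sketch shows why a mislocated landmark, e.g. from low-contrast imagery, shifts every measurement derived from it.

```python
import math

def pairwise_distances(landmarks):
    """Distances between every pair of (x, y) landmark points."""
    dists = []
    n = len(landmarks)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = landmarks[i], landmarks[j]
            dists.append(math.hypot(x2 - x1, y2 - y1))
    return dists

def normalise(dists):
    """Divide by the largest distance so the comparison is size-invariant."""
    m = max(dists)
    return [d / m for d in dists]

def similarity(face_a, face_b):
    """Euclidean distance between the two normalised measurement
    vectors; smaller means more similar."""
    a = normalise(pairwise_distances(face_a))
    b = normalise(pairwise_distances(face_b))
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy landmarks: two eyes, nose tip, two mouth corners (pixel coords).
face_1 = [(30, 40), (70, 40), (50, 60), (38, 80), (62, 80)]
face_2 = [(31, 41), (69, 40), (50, 61), (37, 79), (63, 81)]  # same face, slight jitter
face_3 = [(25, 38), (80, 42), (52, 70), (30, 90), (75, 88)]  # different geometry

print(similarity(face_1, face_2) < similarity(face_1, face_3))  # → True
```

If the landmark detector misplaces even one point, every distance involving it changes, which is exactly how poor depth contrast degrades the accuracy of the comparison step rather than any “AI” component.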
Thus, while I can’t be 100% sure, based on my knowledge of these algorithms it’s highly likely that using additional cranial measurements would make the comparison algorithms more accurate for darker-skinned individuals, since more measurement points reduce the impact of any single mislocated landmark.
To be clear, the cranial measurements used for facial recognition have nothing to do with the discredited science of craniometry.
Using AI to classify individuals into groups is a completely different question, and one I’d have serious reservations about if it is actually being done, unless it’s only used as an initial filter that is followed up by unbiased human analysis.