For years, activists and academics have raised concerns that facial analysis software claiming to identify a person’s age, gender, and emotional state may be biased, unreliable, or offensive – and should not be sold.
Acknowledging some of those criticisms, Microsoft said Tuesday that it plans to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. They will stop being available to new users this week and will be phased out for existing users within the year.
The changes are part of a push by Microsoft for tighter controls of its artificial intelligence products. After two years of review, a team at Microsoft has developed the “Responsible AI Standard”, a 27-page document that sets out requirements for AI systems to ensure that they do not have a detrimental effect on society.
Prior to release, technologies that will be used to make critical decisions about an individual’s access to employment, education, health care, financial services, or life opportunities are subject to review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.
At Microsoft, there was growing concern about emotion recognition tools that labelled a person’s expression as anger, contempt, disgust, fear, joy, neutrality, sadness, or surprise.
“There’s a lot of cultural, geographic, and individual variation in how we express ourselves,” Crampton said. This raised credibility concerns, as well as the larger question of whether “facial expression is a reliable indicator of your internal emotional state,” as she put it.
Age and gender analysis tools, as well as tools that detect facial attributes such as hair and smiles, may be useful for interpreting visual images for people who are blind or have low vision. But the company decided it was problematic to make such a profiling tool generally available to the public, according to Crampton.