Why lack of diversity is a drag on artificial intelligence

Artificial intelligence (AI) systems are getting smarter: defeating world champions at games like Go, identifying tumors in medical scans better than human radiologists, and improving the efficiency of energy-hungry data centers. Some economists compare the transformative potential of AI to that of other "general purpose technologies" such as the steam engine, electricity or the transistor.

But today's AI systems are far from perfect. They tend to reflect the biases of the data used to train them and to break down when faced with unexpected situations. They can be fooled, as we have seen in the controversies surrounding false information on social networks, violent content posted on YouTube, or the famous case of Tay, the Microsoft chatbot that was manipulated into making sexist statements within hours.

Do we really want to transform these fragile bias-prone technologies into the cornerstone of tomorrow’s economy?

Minimize risk

One way to minimize the risks of AI is to increase the diversity of the teams involved in its development. As research on collective decision-making and creativity indicates, groups that are more cognitively diverse tend to make better decisions. Unfortunately, this is far from true of the community currently developing AI systems. The lack of gender diversity is one important dimension of this (although not the only one).

An analysis published this year by the AI Now Institute revealed that fewer than 20% of the researchers applying to participate in prestigious AI conferences are women, and that women represent only a quarter of AI university students at Stanford and the University of California, Berkeley.

The authors argued that this lack of gender diversity leads to AI failures that disproportionately affect women, such as an Amazon hiring system that discriminated against job applicants with female names.


Our recent Gender Diversity in AI Research report includes a big-data analysis of 1.5 million papers on arXiv, a preprint website that the AI community uses heavily to disseminate its work.

We analyze the text of abstracts to determine which papers apply AI techniques, infer the authors' gender from their names, and study the levels of gender diversity in AI and how they have evolved over time. We also compare the situation across research fields and countries, and the differences in language between papers with female co-authors and papers with only male authors.
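The gender-inference step described above can be sketched as a simple first-name lookup. This is an illustrative sketch only: `NAME_GENDER`, `infer_gender` and `share_female` are hypothetical names, and the actual study relied on a much larger name-gender dataset than the toy table here.

```python
# Toy name-gender lookup table; a real analysis would use a dataset
# covering hundreds of thousands of first names across many countries.
NAME_GENDER = {
    "alice": "female", "maria": "female", "priya": "female",
    "john": "male", "david": "male", "carlos": "male",
}

def infer_gender(author_name):
    """Guess an author's gender from their first name.

    Returns "female", "male", or "unknown" when the first name is
    not in the lookup table (common for initials or ambiguous names).
    """
    first = author_name.strip().split()[0].lower()
    return NAME_GENDER.get(first, "unknown")

def share_female(author_lists):
    """Fraction of papers with at least one inferred-female author."""
    with_female = sum(
        any(infer_gender(a) == "female" for a in authors)
        for authors in author_lists
    )
    return with_female / len(author_lists)
```

In practice such lookups return "unknown" for initials and for many names, which is why studies of this kind report inferred gender only for the subset of names they can classify with confidence.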

Our analysis confirms that there is a gender diversity crisis in AI research. Only 13.8% of AI authors on arXiv are women and, in relative terms, the proportion of AI papers with at least one female co-author has not improved since the 1990s.

There are significant differences between countries and research fields. We found greater female representation in AI research in the Netherlands, Norway and Denmark, and lower representation in Japan and Singapore. We also found that women working in physics, education, biology and the social aspects of computer science are more likely to publish papers on AI than those working in computer science or mathematics.

In addition to measuring the gender diversity of the AI research workforce, we also explore the semantic differences between research papers with and without female participation. We test the hypothesis that research teams with more gender diversity tend to widen the range of topics and issues considered in AI research, making their results potentially more inclusive.


To do this, we measure the "semantic signature" of each paper using a machine-learning technique called word embeddings, and compare the signatures of papers with at least one female author to those of papers with no female authors.
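A minimal sketch of this kind of comparison, assuming (as is common with word embeddings, though the report does not spell out its exact procedure) that a paper's signature is the average of its words' embedding vectors, compared via cosine similarity. The tiny `EMBEDDINGS` table below is a toy stand-in for vectors that would be trained on the full arXiv corpus:

```python
import math

# Toy 3-dimensional word vectors; real embeddings would be learned
# from the 1.5 million arXiv abstracts and have hundreds of dimensions.
EMBEDDINGS = {
    "fairness": [0.9, 0.1, 0.0],
    "gender":   [0.8, 0.2, 0.1],
    "gradient": [0.0, 0.9, 0.3],
    "descent":  [0.1, 0.8, 0.4],
}

def semantic_signature(words):
    """Average the embeddings of known words into one document vector."""
    vecs = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(u, v):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

With signatures like these, one can ask whether papers from one group cluster closer to socially oriented vocabulary ("fairness", "gender") than papers from another group do.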

This analysis, which focuses on Machine Learning and the Social Aspects of Computer Science in the United Kingdom, showed significant differences between the groups. Specifically, we found that papers with at least one female co-author tend to be more applied and socially aware, with terms such as "fairness," "human mobility," "mental," "gender," and "personality" playing a key role. The difference between the two groups is consistent with the idea that cognitive diversity has an impact on the research produced, and suggests that it leads to greater engagement with social issues.

How to fix it

So how do you explain this persistent gender gap in AI and what can we do about it?

Research shows that the lack of gender diversity among science, technology, engineering and mathematics (STEM) workers is not the product of a single factor: gender stereotypes and discrimination, a lack of role models and mentors, insufficient attention to work-life balance, and the "toxic" work environments of the technology industry combine to create a perfect storm against gender inclusion.

Ending the gender gap in AI research has no easy solution. System-wide changes to create safe and inclusive spaces that support and encourage researchers from under-represented groups, a shift in attitudes and cultures in research and industry, and better communication of the transformative potential of AI across many areas could all be part of the answer.


Policy interventions, such as a £13.5 million state investment to improve the diversity of AI roles through new university conversion courses, may improve the situation somewhat, but larger-scale interventions are needed to create better links between the arts, the humanities and AI, and to change the image of who can work in AI.

Although there is no single reason why girls disproportionately stop enrolling in STEM subjects as they progress through their education, there is evidence that factors such as pervasive gender stereotypes and an educational environment that affects girls' confidence more than boys' are part of the problem. We must also highlight role models who use AI to drive change for the better.

One tangible intervention to address these problems is the Longitude Explorer Prize, which encourages secondary-school students to use AI to solve social challenges and to work with role models in AI. We want young people, especially girls, to realize the potential of AI for good and their own role in driving change.

By strengthening the skills and confidence of young women, we can change the proportion of people who study and work in AI and help address the potential biases of artificial intelligence.


Rose

Rose is a technology enthusiast and writer. She writes articles about technology, software, mobiles, gadgets and more.
