Written By: Sashrika Pandey (Guest Writer)
With our increased reliance on predictive models, the lack of representation in artificial intelligence has become a serious concern. The decisions we make now, in the early stages of AI integration, will shape how society relies on AI in the future, so it is critical to consider the many factors that influence each model.
Machine learning is a clear example of how unresolved social issues can become ingrained in future development. Predictive models are trained on historical data and evaluated on held-out test sets before their results are used by researchers and industry professionals alike. According to the World Economic Forum, models trained on data with ingrained biases often reproduce those biases in their results.
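This dynamic can be illustrated with a minimal, self-contained sketch (the hiring data, group labels, and outcomes below are entirely hypothetical): a naive model that predicts the majority historical outcome for each group simply reproduces the disparity present in its training data.

```python
# Hypothetical historical records of equally qualified candidates:
# group "A" was hired 90% of the time, group "B" only 40% of the time.
history = (
    [("A", "hired")] * 90 + [("A", "rejected")] * 10
    + [("B", "hired")] * 40 + [("B", "rejected")] * 60
)

def hire_rate(records, group):
    """Fraction of candidates from `group` who were hired historically."""
    outcomes = [outcome for g, outcome in records if g == group]
    return outcomes.count("hired") / len(outcomes)

# A naive "model": predict the majority historical outcome per group.
model = {
    group: ("hired" if hire_rate(history, group) >= 0.5 else "rejected")
    for group in ("A", "B")
}

print(model)  # the bias in the data becomes the model's decision rule
```

Nothing in the model "knows" that the two groups were equally qualified; it only sees the biased outcomes, so it rejects qualified candidates from group "B" by default.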
The accuracy of a model's predictions, however, depends on the patterns present in its training data. While studying gender associations in artificial intelligence, Susan Leavy examined gender-related phrases in context and found that the models she trained used the term “girl” when “woman” would be more appropriate, and did so more often than they used “boy” instead of “man” in similar situations. The presence of stereotypes in language, evident in descriptive rhetoric and metaphor, illustrated how gender bias is embedded in decades of text. The study’s conclusion that machine learning algorithms can learn gender bias underscores the gravity of the underrepresentation of women in technology. With only 12% of machine learning researchers being women, this imbalance is a cause for concern for future algorithms. Involving women in the decision-making process of model development channels perspectives that might otherwise go unheard. Overturning decades of prejudice and stigma is a multifaceted campaign, requiring advocacy from both the educational and social sectors, but it is a necessary one if sustainable AI models are to be integrated into the daily lives of billions of people.
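A toy version of the kind of corpus audit Leavy describes can be sketched as follows (the sentences and the counting helper below are hypothetical illustrations, not her actual data or method): count how often diminutive terms like “girl” appear in contexts where the adult term would be appropriate, and compare against the male equivalents.

```python
import re
from collections import Counter

# Hypothetical workplace sentences; in each, the subject is an adult.
corpus = [
    "The office girl filed the report.",
    "A girl in accounting approved the budget.",
    "The man in accounting approved the budget.",
    "A woman chaired the meeting.",
]

def term_counts(sentences, terms):
    """Count whole-word, case-insensitive occurrences of each term."""
    counts = Counter()
    for sentence in sentences:
        for term in terms:
            counts[term] += len(re.findall(rf"\b{term}\b", sentence, re.IGNORECASE))
    return counts

counts = term_counts(corpus, ["girl", "woman", "boy", "man"])
# In this toy corpus, "girl" outnumbers "woman" in adult contexts,
# while "boy" does not outnumber "man" in equivalent contexts.
print(counts["girl"], counts["woman"], counts["boy"], counts["man"])
```

Any model trained on text with such a skew will tend to reproduce it, which is the asymmetry the study measured.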
This challenge is part of the broader need for diversity in tech. For instance, the Moral Machine Experiment, which asked participants from many backgrounds to direct the hypothetical actions of an autonomous vehicle, concluded that answers to these moral dilemmas often correlated with cultural background. Forbes reports that biased models can harm criminal justice outcomes: models trained on inaccurate data have reportedly over-predicted the likelihood that offenders in communities of color will commit another crime. Diversity in developing influential technology is key, especially when products and research may be scaled to a global audience. When minor flaws are overlooked in the beta stages of product development, their influence is amplified once the product reaches a massive audience, and releasing a widely marketed but intrinsically flawed product on the international stage can create the kinds of mistakes that present-day industries still struggle to address. By rooting out the sources of possible future blunders in the early stages of widespread AI integration, researchers and developers can ensure that consumers of AI-based products and services have a dependable platform.
To counter underrepresentation, numerous studies have outlined possible calls to action. Researchers studying the ramifications of gender inequality in the United Kingdom recommended building collaborative networks for women in technology, giving their future careers and advocacy a stable foundation. Constructive policy is also essential to such progress, ensuring that established corporations abandon obsolete practices in favor of those that promote inclusion and diversity in the workplace. In an interview with CNN, Melinda Gates argued that companies should take the initiative to implement inclusive policies and suggested that increased corporate transparency would benefit the public.
Now more than ever, it is essential to consider input from people of varying backgrounds and opinions. The transition to predictive technology means that the underlying structures of many institutions are likely to encode present-day practices, for better or worse. Confronting these biases, rather than dismissing them to save face, is therefore a vital first step. Transparency between employers and employees, and between private institutions and the public, can foster the collaborative mentality needed to build the technologies of the future together. Forming a utopia from the optimistic seeds of this generation is unlikely, but persistence and communication are needed if progress in any direction is to be made. Such an endeavor cannot be completed alone; it is through the contributions of many diverse groups that effective progress is made. Considering the blunders of past institutions and the positive impact future ones can make, all we can do is take the first step and begin building a brighter, more inclusive community.