As February, the month that honors Black History, has come to an end, it is important to reflect on the progress and strides that have been made in addressing institutional bias and prejudice. Data science has provided tools to better visualize and analyze data in many fields, including the detection of discrimination.
Advanced data science tools and visualizations can uncover many insights, including the clear racial discrepancy visible in neighborhoods all over America. It is well known that past government policies and practices forced people of color into segregated neighborhoods; as a result, children attended schools that were more segregated, and the social networks they formed were segregated as well. This raises a question: are our workplaces segregated too?
Recently, Vox released an article with an embedded video showing how geographically segregated America is during the workday and how the racial makeup of neighborhoods shifts over its course. Data from “Racial Separation at Home and Work: Segregation in Residential and Workplace Settings” by Matthew Hall, John Iceland, and Youngmin Yi showed that work segregation is getting worse.
As the graph shows, work segregation has increased over the past decade. When the researchers dug deeper into this research, they found that the most diverse work neighborhoods had the highest occupational inequality: those workplaces were diverse only because levels of employment varied significantly by race. Most of the managers in these workplaces were white, while the lower-level employees were Black and brown.
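Segregation of this kind is commonly quantified with the index of dissimilarity, which measures what fraction of either group would have to relocate for every area to mirror the overall population mix. A minimal sketch (the neighborhood counts below are purely illustrative, not figures from the study):

```python
def dissimilarity_index(group_a, group_b):
    """Index of dissimilarity between two groups across areas.

    group_a, group_b: per-area population counts for each group.
    Returns a value between 0 (full integration) and 1 (complete
    segregation): the share of either group that would need to move
    for every area to match the overall group ratio.
    """
    total_a, total_b = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / total_a - b / total_b)
                     for a, b in zip(group_a, group_b))

# Hypothetical counts for four workplace neighborhoods:
# the two groups are heavily concentrated in different areas.
print(dissimilarity_index([90, 80, 10, 20], [10, 20, 90, 80]))  # 0.7
```

A rising index over time, computed on workplace rather than residential counts, is one way a trend like the one in the graph can be made concrete.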
In the long run, work segregation and occupational inequality can lead to more workplace conflicts and heated situations.
Biased Algorithms and Data: Predictive Policing
While data can uncover insights regarding discrimination, there is also the issue of biased algorithms.
For example, predictive policing algorithms used in Los Angeles, Atlanta, and Philadelphia comb through past crime data to predict which areas are most at risk for future crimes. While the intentions behind these algorithms are good, they can lead to “human-rights violations” by over-policing mostly Black neighborhoods regardless of where crime is actually happening. In Oakland, for instance, Black neighborhoods have about 200 times more drug arrests than other Oakland neighborhoods. When over-policing happens while these algorithms are in place, a feedback loop forms: heavier policing produces more recorded arrests in those neighborhoods, which the algorithm reads as more crime, which in turn draws even more police attention.
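The feedback loop can be sketched with a toy simulation (every number and rule here is hypothetical, chosen only to illustrate the dynamic, not taken from any real system): two neighborhoods have the same true rate of detectable crime, but one starts with a few extra recorded arrests due to past bias, and a naive "predictive" rule keeps sending patrols wherever the records point.

```python
import random

random.seed(0)

# Two hypothetical neighborhoods with the SAME true crime rate,
# but A starts with more recorded arrests due to past over-policing.
TRUE_CRIME_RATE = 0.10        # chance one patrol records an arrest
PATROLS_PER_ROUND = 20
recorded_arrests = {"A": 5, "B": 3}

for _ in range(50):
    # Naive "predictive" rule: send every patrol to the neighborhood
    # with the most recorded arrests, i.e. trust the historical data.
    hotspot = max(recorded_arrests, key=recorded_arrests.get)
    new = sum(random.random() < TRUE_CRIME_RATE
              for _ in range(PATROLS_PER_ROUND))
    recorded_arrests[hotspot] += new  # more patrols -> more records

print(recorded_arrests)
# A's small head start means A is patrolled every round, so only A's
# count grows -- the data appears to "confirm" the initial disparity.
```

Even though both neighborhoods have identical underlying crime, the arrest records diverge sharply, because the records measure police attention as much as crime.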
Designing Smarter Systems:
“Predictive analytics often reproduce society’s prejudices because they are built by people with prejudice or because they are trained using historical data that reinforces historical stereotypes.”
So, what are some possible solutions and systems that can be created to offset biased algorithms to reduce institutional bias?
- Algorithms are built by people, so it is extremely important to employ ML engineers and data scientists from diverse backgrounds to offset the prejudices that can lead to biased algorithms
- Researchers also favor creating tools that companies and government agencies can use to test whether their algorithms yield discriminatory results and to fix them when necessary
- Notably, Solon Barocas and Moritz Hardt established a traveling workshop called Fairness, Accountability, and Transparency in Machine Learning to encourage other computer scientists to do just that
- Some legal scholars (including the University of Maryland’s Danielle Keats Citron and Frank Pasquale) argue for the creation of new regulations or even regulatory bodies to govern the algorithms that make increasingly important decisions in our lives
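One simple test such auditing tools could apply (the function name, groups, and decision data below are hypothetical, chosen for illustration) is the "four-fifths rule" from US employment-discrimination practice: flag a model when any group's rate of favorable outcomes falls below 80% of the best-off group's rate.

```python
def disparate_impact_check(outcomes, threshold=0.8):
    """Apply the four-fifths rule to per-group decisions.

    outcomes: dict mapping group name -> list of 0/1 decisions,
    where 1 is the favorable outcome (e.g., approved, hired).
    Returns a dict mapping each group to True if its favorable-outcome
    rate falls below `threshold` times the best-off group's rate.
    """
    rates = {g: sum(d) / len(d) for g, d in outcomes.items()}
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Hypothetical audit data: favorable decisions per group.
decisions = {
    "group_1": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 80% favorable
    "group_2": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],  # 40% favorable
}
print(disparate_impact_check(decisions))
# group_2's rate (0.40) is half of group_1's (0.80), so it is flagged
```

A check like this is only a first-pass screen; real audits would also examine error rates, base rates, and the process that generated the training data, but it shows how a discriminatory-outcome test can be made mechanical enough for agencies and companies to run routinely.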
While data science tools and visualizations have come a long way in helping to uncover discrimination, systems need to be created to prevent further discrimination by algorithms, as machine learning and artificial intelligence will continue to grow and become a bigger part of our society in the near future.