Both search and recommendation algorithms provide a user with a ranking that aims to match their needs and interests. Despite the (non-)personalized perspective characterizing each class of algorithms, both learn patterns from historical data, which conveys biases in terms of imbalances and inequalities.
In most cases, the trained models, and by extension the final rankings, unfortunately strengthen these biases in the learned patterns. When a bias affects human beings as individuals or as groups with legally protected characteristics (e.g., race, gender), the inequalities reinforced by search and recommendation algorithms lead to severe societal consequences, such as discrimination and unfairness.
Challenges arising in real-world applications include, among others, controlling the effects generated by popularity bias to improve the user's perceived quality of the results, supporting consumers and providers with fair rankings, and transparently explaining why a model provides a given (less) biased result. Hence, being able to detect, measure, characterize, and mitigate bias while maintaining high effectiveness is a prominent and timely challenge.
BIAS 2021 will be the ECIR workshop aimed at collecting new contributions in this emerging field and providing a common ground for interested researchers and practitioners. Specifically, BIAS 2021 will be the second edition of this dedicated event at ECIR, following a very successful 2020 edition. Given the growing interest of the community in these topics, we expect the workshop to attract increasing attention, with a stronger outcome and a wider community dialog. More information is available at the workshop website.