
The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)

Bursting the Filter Bubble: Fairness-Aware Network Link Prediction

Farzan Masrour, Tyler Wilson, Heng Yan, Pang-Ning Tan, Abdol-Hossein Esfahanian
Michigan State University
{masrours, wils1270, yanheng, ptan, esfahanian}@msu.edu

Abstract

Link prediction is an important task in online social networking, as it can be used to infer new or previously unknown relationships in a network. However, due to the homophily principle, current algorithms are susceptible to promoting links that may lead to increased segregation of the network, an effect known as the filter bubble. In this study, we examine the filter bubble problem from the perspective of algorithm fairness and introduce a dyadic-level fairness criterion based on the network modularity measure. We show how the criterion can be utilized as a postprocessing step to generate more heterogeneous links in order to overcome the filter bubble problem. In addition, we present a novel framework that combines adversarial network representation learning with supervised link prediction to alleviate the filter bubble problem. Experimental results on several real-world datasets show the effectiveness of the proposed methods compared to baseline approaches, including conventional link prediction and fairness-aware methods for i.i.d. data.

Introduction

Online social networking sites have transformed the way individuals interact and share information with each other. The wealth of social network data available also provides opportunities to mine it for a variety of business applications. For example, businesses can learn about users' interests, sentiment, and online behavior by analyzing social network data. The insights gained from such analysis can help businesses increase engagement with their existing customers or connect with new customers. Despite its importance, recent studies have raised concerns about the potential biases and unintended consequences that may arise from such automated analysis.

For example, link prediction methods (Liben-Nowell and Kleinberg 2007; Al Hasan et al. 2006; Masrour et al. 2015; 2018) are commonly employed by social networking sites to encourage users to expand their social circles. "Suggested for you" on Instagram and "People you may know" on LinkedIn are two example applications of such methods. However, the rise of link prediction systems has led to an effect known as the filter bubble (Pariser 2012), which is the reinforced segregation and narrowing diversity of information exposed to online users. If left unchecked, the filter bubble may introduce systematic biases into the network data and its subsequent analysis. For instance, Hofstra et al. (Hofstra et al. 2017) examined the ethnic and gender diversity of social relationships on Facebook and showed that those who have ample opportunities to befriend other similar users often find themselves in highly segregated networks. This is due to the homophily principle (McPherson, Smith-Lovin, and Cook 2001), which is the tendency of individuals to form social ties with other similar individuals in a network. As current algorithms are designed to promote links between similar users, their suggested links may exacerbate the user segregation problem.

In addition to online social networks, the filter bubble problem is also prevalent in recommender systems, which can be viewed as a link prediction task applied to a bipartite network of users and items. A recent study by Nguyen et al. (Nguyen et al. 2014) concluded that recommender systems tend to expose users to a "slightly narrowing set of items over time." For example, in movie recommendation, movies from a certain genre may be recommended only to users of a specific gender. By addressing the filter bubble problem in network link prediction, the proposed method can potentially be used to alleviate the filter bubble problem in other types of recommender systems.

This paper examines the filter bubble problem for network link prediction from an algorithm fairness perspective. Specifically, we consider a link prediction algorithm to be unfair if it is biased towards promoting certain types of links (e.g., those between users with the same gender or other protected attributes). As a motivating example, consider the link prediction task on professional networking sites. Certain professions, such as software engineering, tend to be dominated by men, a fact that is likely to be reflected in the link structure of the professional network. As a result, the links recommended by the site may reinforce this gender-based segregation, primarily suggesting links between individuals of the same gender while suggesting comparatively fewer inter-gender links. Though such a system may achieve high link prediction accuracy, it may unfairly disadvantage some users. For example, a female software engineer may be treated unfairly if she is seldom recommended to male software engineers.
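To make this dyadic notion of unfairness concrete, one simple diagnostic is the share of intra-group pairs among a predictor's top-ranked recommendations. The following is a minimal sketch of that diagnostic, not the method proposed in this paper; the toy gender mapping and candidate scores are illustrative assumptions.

# A minimal sketch (not the paper's method): measure how heavily a link
# predictor's top-k recommendations concentrate on intra-group pairs.
# The gender mapping and the scores below are illustrative assumptions.

gender = {0: "F", 1: "M", 2: "M", 3: "F", 4: "M"}

# Hypothetical scores produced by some link prediction algorithm for
# candidate (currently unlinked) node pairs.
scores = {(0, 3): 0.9, (1, 2): 0.8, (2, 4): 0.7, (0, 1): 0.4, (3, 4): 0.2}

def intra_group_share(scores, attr, k):
    """Fraction of the top-k recommended links connecting nodes that
    share the same protected attribute value."""
    top_k = sorted(scores, key=scores.get, reverse=True)[:k]
    return sum(attr[u] == attr[v] for u, v in top_k) / len(top_k)

# A share close to 1 signals that the recommendations may reinforce
# segregation, i.e., a filter-bubble risk.
print(intra_group_share(scores, gender, k=3))  # prints 1.0 here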
Unfair practices resulting from decisions generated by automated systems are a well-documented problem in many application domains, including criminal justice, mortgage lending, and university admission. For example, Angwin et al. (Angwin et al. 2016) warned about potential biases against African Americans in software used to predict the risk that a defendant will re-offend, while O'Neil (O'Neil 2017) cautioned against the manipulative marketing tactics used by for-profit colleges in online advertising that exploit vulnerable populations. These concerns have brought increasing scrutiny to the issue of fairness in machine learning algorithms. Despite this growing body of research, existing works are primarily focused on independent and identically distributed (i.i.d.) data and thus may not be suitable for link analysis problems. For example, previous works have considered the notion of fairness either at the individual (Dwork et al. 2012) or group (Hardt et al. 2016; Feldman et al. 2015) level. In contrast, this paper examines the notion of fairness at a dyadic level, based on the pairwise interactions between users in a social network. Furthermore, previous approaches have considered fairness in terms of unjust decisions against members of a specific underrepresented (protected) group. Instead, we consider fairness in terms of promoting inter-group connections in a network in order to alleviate the filter bubble problem.

There are four major contributions of this paper. First, we empirically assess the influence of protected attributes such as gender on the link structure of a network by measuring the homophily effect on several real-world network datasets. Second, we introduce modred as a fairness criterion for network link prediction. The metric is inspired by the well-known modularity measure (Newman and Girvan 2004) developed for network community detection. We consider the reduction in the modularity measure as a way to determine whether the links predicted by an algorithm may lead to further segregation of the network. Third, we illustrate how the measure can be incorporated into a greedy algorithm for postprocessing the results of current link prediction algorithms. Finally, we present a novel Fairness-aware LInk Prediction (FLIP) framework that combines adversarial network representation learning with supervised link prediction to mitigate the filter bubble problem.
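To ground the second contribution, recall the modularity of a network whose nodes are partitioned by a protected attribute (Newman and Girvan 2004); a high value indicates that edges concentrate within groups. The display below is a sketch of the quantities involved, with modred written simply as the drop in modularity after the predicted links are added; the paper's exact formulation of modred may differ.

\[
Q = \frac{1}{2m} \sum_{i,j} \left[ A_{ij} - \frac{k_i k_j}{2m} \right] \delta(g_i, g_j),
\]

where $A$ is the adjacency matrix of the network $G$, $k_i$ is the degree of node $i$, $m$ is the number of edges, and $\delta(g_i, g_j) = 1$ if nodes $i$ and $j$ have the same protected attribute value (0 otherwise). If $\tilde{G}$ denotes $G$ augmented with the links accepted by the predictor, the reduction in modularity is

\[
\mathrm{modred} = Q(G) - Q(\tilde{G}),
\]

so a larger value indicates that the predicted links make the network less segregated along the protected attribute.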
Related Work

A key challenge in link prediction is class imbalance: the number of observed links in a network is significantly fewer than the number of non-links. Recent years have also witnessed the emergence of deep neural network methods for the link prediction task (Li et al. 2014; Tian et al. 2014). These methods have been shown to achieve state-of-the-art performance.

Social networks are increasingly personalizing their content using automated machine learning techniques, which is a concern as the decisions may lead to adverse effects on users. This is due to the so-called "filter bubble" or "echo chamber" effect (Hofstra et al. 2017; Pariser 2012), in which individuals become increasingly isolated, consuming only information that conforms to their own belief systems. In online social networks, the filter bubble effect is exemplified by the recommendation decisions generated using link prediction algorithms. As link prediction algorithms are commonly used to encourage users to expand their networks, this may lead to adverse consequences such as segregation of users (Hofstra et al. 2017; Nguyen et al. 2014).

Quantifying fairness has been a subject of intense debate among AI and ML researchers in recent years (Berk et al. 2018; Dwork et al. 2012; Hardt et al. 2016; Kusner et al. 2017). Previous works are primarily focused on non-relational data and can be classified into two types: individual-level and group-level fairness. The definition of fairness at the individual level is based on the premise that similar people should be treated similarly. For example, Dwork et al. (Dwork et al. 2012) defined a task-specific metric based on a probabilistic distance measure between individuals via a Lipschitz condition. The metric is used as a constraint when optimizing a fairness-aware classifier. In contrast, the group-level approach quantifies fairness in terms of statistical measures such as demographic parity, equalized odds (Hardt et al. 2016), or balanced error rate (Feldman et al. 2015) with respect to the protected groups. These measures are typically computed from a confusion matrix (Berk et al. 2018) and are used to ensure that average performance does not vary significantly among the different groups of a protected attribute (see the sketch at the end of this section).

In addition, there is a growing literature on developing fairness-aware methods. Current methods can be divided into three categories. The first category includes preprocessing algorithms (Zemel et al. 2013; Madras et al. 2018), motivated by the view that training data is the main cause of bias in machine learning.
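For concreteness, the group-level measures discussed above can be read off per-group confusion matrices. The sketch below is a generic illustration under assumed binary predictions and a binary protected attribute; it is not code from the paper.

import numpy as np

def group_rates(y_true, y_pred, group, g):
    """Confusion-matrix rates restricted to the members of group g."""
    mask = group == g
    yt, yp = y_true[mask], y_pred[mask]
    tp = np.sum((yt == 1) & (yp == 1))
    fp = np.sum((yt == 0) & (yp == 1))
    fn = np.sum((yt == 1) & (yp == 0))
    tn = np.sum((yt == 0) & (yp == 0))
    return {
        "positive_rate": np.mean(yp),   # compared for demographic parity
        "tpr": tp / max(tp + fn, 1),    # compared for equalized odds
        "fpr": fp / max(fp + tn, 1),    # compared for equalized odds
    }

# Toy labels, predictions, and protected-attribute values.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

r0 = group_rates(y_true, y_pred, group, 0)
r1 = group_rates(y_true, y_pred, group, 1)
# Demographic parity: positive prediction rates should match across groups.
print("parity gap:", abs(r0["positive_rate"] - r1["positive_rate"]))
# Equalized odds: both TPR and FPR should match across groups.
print("odds gap:", abs(r0["tpr"] - r1["tpr"]) + abs(r0["fpr"] - r1["fpr"]))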