AI Paper: Differentially Private Fair Learning

Posted by hjrinfo on 2018-12-7 11:29:54
Abstract: We design two learning algorithms that simultaneously promise differential privacy and equalized odds, a "fairness" condition that corresponds to equalizing false positive and negative rates across protected groups. Our first algorithm is a simple private implementation of the post-processing approach of [Hardt et al. 2016]. This algorithm has the merit of being exceedingly simple, but must be able to use protected group membership explicitly at test time, which can be viewed as "disparate treatment". The second algorithm is a differentially private version of the algorithm of [Agarwal et al. 2018], an oracle-efficient algorithm that can be used to find the optimal fair classifier, given access to a subroutine that can solve the original (not necessarily fair) learning problem. This algorithm need not have access to protected group membership at test time. We identify new tradeoffs between fairness, accuracy, and privacy that emerge only when requiring all three properties, and show that these tradeoffs can be milder if group membership may be used at test time.
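The paper's own algorithms are more involved, but as a minimal sketch of the two ingredients it combines, the hypothetical Python function below estimates the equalized-odds gap (the quantity the fairness condition bounds) from Laplace-noised counts. The function name, the two-group setup, and the epsilon budget are illustrative assumptions for this sketch, not anything taken from the paper.

```python
import numpy as np

def dp_equalized_odds_gap(y_true, y_pred, group, epsilon, rng=None):
    """Estimate the equalized-odds gap between two groups under
    epsilon-differential privacy using the Laplace mechanism.

    Equalized odds asks that false positive and false negative rates
    match across protected groups; this returns the larger of the two
    gaps, computed from noisy counts rather than the raw data.
    """
    rng = rng if rng is not None else np.random.default_rng()
    # Adding or removing one record changes at most two of the eight
    # released counts by 1 each (its group's FP or FN count and the
    # matching denominator), so the L1 sensitivity is 2 and Laplace
    # noise of scale 2/epsilon per count gives epsilon-DP overall.
    scale = 2.0 / epsilon
    rates = {}
    for g in (0, 1):
        yt = y_true[group == g]
        yp = y_pred[group == g]
        fp = np.sum((yp == 1) & (yt == 0)) + rng.laplace(0.0, scale)
        fn = np.sum((yp == 0) & (yt == 1)) + rng.laplace(0.0, scale)
        neg = np.sum(yt == 0) + rng.laplace(0.0, scale)
        pos = np.sum(yt == 1) + rng.laplace(0.0, scale)
        # Clamp denominators so noise on small groups cannot blow up
        # the rate estimates -- small groups are exactly where the
        # privacy/fairness tension is sharpest.
        rates[g] = (fp / max(neg, 1.0), fn / max(pos, 1.0))
    fpr_gap = abs(rates[0][0] - rates[1][0])
    fnr_gap = abs(rates[0][1] - rates[1][1])
    return max(fpr_gap, fnr_gap)
```

For example, dp_equalized_odds_gap(y, y_hat, a, epsilon=1.0) on arrays of labels, predictions, and binary group indicators: a smaller epsilon means stronger privacy but noisier rate estimates, a toy illustration of the three-way tradeoff the abstract highlights.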
Abstract page: https://arxiv.org/abs/1812.02696
PDF download: https://arxiv.org/pdf/1812.02696