Machine Learning Paper: Image Transformation can make Neural Networks more robust against Adversarial Examples

hhtonyhh posted on 2019-1-11 10:36:31

Abstract: Neural networks are being applied in many IoT-related tasks with encouraging results. For example, neural networks can precisely detect humans, objects, and animals via surveillance cameras for security purposes. However, neural networks have recently been found vulnerable to well-designed input samples called adversarial examples. This issue causes neural networks to misclassify adversarial examples whose perturbations are imperceptible to humans. We found that rotating an adversarial example image can defeat the effect of the adversarial perturbation. Using MNIST digit images as the original images, we first generated adversarial examples against a neural network recognizer, which was completely fooled by the forged examples. We then rotated the adversarial images and fed them to the recognizer, finding that it regained correct recognition. Thus, we empirically confirmed that rotating images can protect neural-network-based pattern recognizers from adversarial example attacks.
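For anyone who wants to try the experiment the abstract describes, here is a minimal sketch in PyTorch. It is not the authors' code: the SmallNet recognizer, the use of FGSM as the attack, the eps of 0.25, and the 20-degree rotation angle are all assumptions chosen for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms.functional as TF
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Hypothetical small recognizer; the paper's actual architecture may differ.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        return self.fc(x.flatten(1))

def fgsm(model, x, y, eps=0.25):
    # One-step Fast Gradient Sign Method: shift each pixel by +/- eps
    # in the direction that increases the classification loss.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(1) == y).float().mean().item()

if __name__ == "__main__":
    train = datasets.MNIST(".", train=True, download=True,
                           transform=transforms.ToTensor())
    test = datasets.MNIST(".", train=False, download=True,
                          transform=transforms.ToTensor())

    model = SmallNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for xb, yb in DataLoader(train, batch_size=128, shuffle=True):
        opt.zero_grad()                 # one quick epoch is enough for a demo
        F.cross_entropy(model(xb), yb).backward()
        opt.step()
    model.eval()

    x = torch.stack([test[i][0] for i in range(256)])
    y = torch.tensor([test[i][1] for i in range(256)])

    x_adv = fgsm(model, x, y)             # forged examples that fool the net
    x_rot = TF.rotate(x_adv, angle=20.0)  # rotation defense (angle assumed)

    print("clean accuracy      :", accuracy(model, x, y))
    print("adversarial accuracy:", accuracy(model, x_adv, y))
    print("rotated adv accuracy:", accuracy(model, x_rot, y))

With this setup you would expect high accuracy on the clean digits, a large drop on the FGSM images, and a partial recovery on the rotated adversarial images; the exact numbers depend on the assumed eps and rotation angle, which are worth sweeping.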
arXiv page: https://arxiv.org/abs/1901.03037 | PDF download: https://arxiv.org/pdf/1901.03037