AI Paper: Regularization Shortcomings for Continual Learning

Posted by hhtonyhh on 2019-12-9 13:17:18
Abstract: In classical machine learning, the data streamed to the algorithm is assumed to be independent and identically distributed (i.i.d.). Otherwise, if the data distribution changes through time, the algorithm risks remembering only the data from the current state of the distribution and forgetting everything else. Continual learning is a sub-field of machine learning that aims to find automatic learning processes to solve non-i.i.d. problems. The main challenges of continual learning are two-fold: first, to detect concept drift in the distribution, and second, to remember what happened before a concept drift. In this article, we study a specific family of continual learning approaches: the regularization methods. They consist of finding a smart regularization term that protects important parameters from being modified, so that previously learned knowledge is not forgotten. We show that, in the context of multi-task learning for classification, this process does not learn to discriminate classes from different tasks. We propose theoretical reasoning to prove this shortcoming and illustrate it with examples and experiments on the "MNIST Fellowship" dataset.
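For readers unfamiliar with the regularization family the abstract refers to, below is a minimal, hypothetical sketch (in PyTorch) of an EWC-style penalty: a diagonal Fisher estimate marks which parameters mattered for earlier tasks, and a quadratic term discourages changing them while training on later tasks. The function names, the `lam` weight, and the training workflow are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical EWC-style sketch of a "smart regularization term" for continual
# learning. This is an illustration of the general technique, not the paper's code.
import torch
import torch.nn as nn


def fisher_diagonal(model: nn.Module, loader, device="cpu"):
    """Estimate a diagonal Fisher term per parameter from old-task data."""
    fisher = {n: torch.zeros_like(p)
              for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        model.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2  # squared gradients ~ importance
    for n in fisher:
        fisher[n] /= max(len(loader), 1)
    return fisher


def ewc_penalty(model: nn.Module, fisher, old_params, lam=100.0):
    """Quadratic penalty protecting parameters important for previous tasks."""
    penalty = torch.zeros(1)
    for n, p in model.named_parameters():
        if n in fisher:
            penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return lam * penalty


# Assumed workflow: after training on task A, snapshot parameters and Fisher
# estimates, then add the penalty to the task-B loss:
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = fisher_diagonal(model, task_a_loader)
#   loss = nn.functional.cross_entropy(model(x), y) + ewc_penalty(model, fisher, old_params)
```

Note that this kind of penalty only constrains weights within each task's own output head; as the abstract argues, it does not by itself teach the model to discriminate classes coming from different tasks.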
Abstract page: https://arxiv.org/abs/1912.03049 | PDF download: https://arxiv.org/pdf/1912.03049