Academic Seminar

Minimax optimal learning: adaptivity, model compression and limitation - Yuhong Yang (University of Minnesota)


Title: Minimax optimal learning: adaptivity, model compression and limitation

Speaker: Yuhong Yang (University of Minnesota)

Abstract: Minimax-rate optimality plays a foundational role in the theory of statistical and machine learning. Besides the identification of minimax rates of convergence and optimal learning procedures for various learning scenarios, adaptive strategies have also been devised to work simultaneously well for multiple or even infinitely (countably or continuously) many possible scenarios that may describe the underlying distribution of the data. Going with the exciting successes of modern regression learning tools are questions/concerns/doubts on sparsity, model compressibility, instability, robustness and reliability of the fancy automated algorithms. In this talk, we will first present minimax optimal adaptive estimations for high-dimensional regression learning under hard and soft sparsity setups, taking advantage of recent sharp sparse linear approximation bounds. An application to model compression in neural network learning will be given. Then we will address the question of how adaptive and powerful any learning procedure really can be. We show that every procedure, no matter how it is constructed, can only work well for a limited set of regression functions.


Time: 3:00-4:00 p.m., Tuesday, September 27, 2022

Venue: Tencent Meeting (腾讯会议), Meeting ID: 956-802-903

Contact: 邹国华

All faculty and students are welcome to attend!
