<h1 id="awesome-gradient-boosting-research-papers.">Awesome Gradient
Boosting Research Papers.</h1>
<a href="https://github.com/sindresorhus/awesome"><img
src="https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg"
alt="Awesome" /></a> <a href="http://makeapullrequest.com"><img
src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square"
alt="PRs Welcome" /></a> <img
src="https://img.shields.io/github/license/benedekrozemberczki/awesome-gradient-boosting-papers.svg?color=blue"
alt="License" /> <a
href="https://github.com/benedekrozemberczki/awesome-gradient-boosting-papers/archive/master.zip"><img
src="https://img.shields.io/github/repo-size/benedekrozemberczki/awesome-gradient-boosting-papers.svg"
alt="repo size" /></a> <a
href="https://twitter.com/intent/follow?screen_name=benrozemberczki"><img
src="https://img.shields.io/twitter/follow/benrozemberczki?style=social&logo=twitter"
alt="benedekrozemberczki" /></a>
<p align="center">
<img width="450" src="boosting.gif">
</p>
<hr />
<p>A curated list of gradient and adaptive boosting papers with
implementations from the following conferences:</p>
<ul>
<li>Machine learning
<ul>
<li><a href="https://nips.cc/">NeurIPS</a></li>
<li><a href="https://icml.cc/">ICML</a></li>
<li><a href="https://iclr.cc/">ICLR</a></li>
</ul></li>
<li>Computer vision
<ul>
<li><a href="http://cvpr2019.thecvf.com/">CVPR</a></li>
<li><a href="http://iccv2019.thecvf.com/">ICCV</a></li>
<li><a href="https://eccv2018.org/">ECCV</a></li>
</ul></li>
<li>Natural language processing
<ul>
<li><a href="http://www.acl2019.org/EN/index.xhtml">ACL</a></li>
<li><a href="https://naacl2019.org/">NAACL</a></li>
<li><a href="https://www.emnlp-ijcnlp2019.org/">EMNLP</a></li>
</ul></li>
<li>Data
<ul>
<li><a href="https://www.kdd.org/">KDD</a></li>
<li><a href="http://www.cikmconference.org/">CIKM</a></li>
<li><a href="http://icdm2019.bigke.org/">ICDM</a></li>
<li><a
href="https://www.siam.org/Conferences/CM/Conference/sdm19">SDM</a></li>
<li><a href="http://pakdd2019.medmeeting.org">PAKDD</a></li>
<li><a href="http://ecmlpkdd2019.org">PKDD/ECML</a></li>
<li><a href="https://recsys.acm.org/">RECSYS</a></li>
<li><a href="https://sigir.org/">SIGIR</a></li>
<li><a href="https://www2019.thewebconf.org/">WWW</a></li>
<li><a href="www.wsdm-conference.org">WSDM</a></li>
</ul></li>
<li>Artificial intelligence
<ul>
<li><a href="https://www.aaai.org/">AAAI</a></li>
<li><a href="https://www.aistats.org/">AISTATS</a></li>
<li><a href="https://e-nns.org/icann2019/">ICANN</a></li>
<li><a href="https://www.ijcai.org/">IJCAI</a></li>
<li><a href="http://www.auai.org/">UAI</a></li>
</ul></li>
</ul>
<p>Similar collections about <a
href="https://github.com/benedekrozemberczki/awesome-graph-classification">graph
classification</a>, <a
href="https://github.com/benedekrozemberczki/awesome-decision-tree-papers">classification/regression
tree</a>, <a
href="https://github.com/benedekrozemberczki/awesome-fraud-detection-papers">fraud
detection</a>, <a
href="https://github.com/benedekrozemberczki/awesome-monte-carlo-tree-search-papers">Monte
Carlo tree search</a>, and <a
href="https://github.com/benedekrozemberczki/awesome-community-detection">community
detection</a> papers with implementations.</p>
<h2 id="section">2023</h2>
|
||
<ul>
|
||
<li><strong>Computing Abductive Explanations for Boosted Trees (AISTATS
|
||
2023)</strong>
|
||
<ul>
|
||
<li>Gilles Audemard, Jean-Marie Lagniez, Pierre Marquis, Nicolas
|
||
Szczepanski</li>
|
||
<li><a href="https://arxiv.org/abs/2209.07740">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Off-Policy Learning (AISTATS 2023)</strong>
|
||
<ul>
|
||
<li>Ben London, Levi Lu, Ted Sandler, Thorsten Joachims</li>
|
||
<li><a href="https://arxiv.org/abs/2208.01148">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Variational Boosted Soft Trees (AISTATS 2023)</strong>
|
||
<ul>
|
||
<li>Tristan Cinquin, Tammo Rukat, Philipp Schmidt, Martin Wistuba, Artur
|
||
Bekasov</li>
|
||
<li><a href="https://arxiv.org/abs/2302.10706">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Krylov-Bellman boosting: Super-linear policy evaluation in
|
||
general state spaces (AISTATS 2023)</strong>
|
||
<ul>
|
||
<li>Eric Xia, Martin J. Wainwright</li>
|
||
<li><a href="https://arxiv.org/abs/2210.11377">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>FairGBM: Gradient Boosting with Fairness Constraints (ICLR
|
||
2023)</strong>
|
||
<ul>
|
||
<li>André Ferreira Cruz, Catarina Belém, João Bravo, Pedro Saleiro,
|
||
Pedro Bizarro</li>
|
||
<li><a href="https://arxiv.org/abs/2209.07850">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Gradient Boosting Performs Gaussian Process Inference (ICLR
|
||
2023)</strong>
|
||
<ul>
|
||
<li>Aleksei Ustimenko, Artem Beliakov, Liudmila Prokhorenkova</li>
|
||
<li><a href="https://arxiv.org/abs/2206.05608">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-1">2022</h2>
|
||
<ul>
|
||
<li><strong>TransBoost: A Boosting-Tree Kernel Transfer Learning
|
||
Algorithm for Improving Financial Inclusion (AAAI 2022)</strong>
|
||
<ul>
|
||
<li>Yiheng Sun, Tian Lu, Cong Wang, Yuan Li, Huaiyu Fu, Jingran Dong,
|
||
Yunjie Xu</li>
|
||
<li><a href="https://arxiv.org/abs/2112.02365">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Resilient Distributed Boosting Algorithm (ICML
|
||
2022)</strong>
|
||
<ul>
|
||
<li>Yuval Filmus, Idan Mehalel, Shay Moran</li>
|
||
<li><a href="https://arxiv.org/abs/2206.04713">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Fast Provably Robust Decision Trees and Boosting (ICML
|
||
2022)</strong>
|
||
<ul>
|
||
<li>Jun-Qi Guo, Ming-Zhuo Teng, Wei Gao, Zhi-Hua Zhou</li>
|
||
<li><a
|
||
href="https://proceedings.mlr.press/v162/guo22h.html">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Building Robust Ensembles via Margin Boosting (ICML
|
||
2022)</strong>
|
||
<ul>
|
||
<li>Dinghuai Zhang, Hongyang Zhang, Aaron C. Courville, Yoshua Bengio,
|
||
Pradeep Ravikumar, Arun Sai Suggala</li>
|
||
<li><a href="https://arxiv.org/abs/2206.03362">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Retrieval-Based Gradient Boosting Decision Trees for Disease
|
||
Risk Assessment (KDD 2022)</strong>
|
||
<ul>
|
||
<li>Handong Ma, Jiahang Cao, Yuchen Fang, Weinan Zhang, Wenbo Sheng,
|
||
Shaodian Zhang, Yong Yu</li>
|
||
<li><a
|
||
href="https://dl.acm.org/doi/abs/10.1145/3534678.3539052">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Federated Functional Gradient Boosting (AISTATS
|
||
2022)</strong>
|
||
<ul>
|
||
<li>Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi</li>
|
||
<li><a href="https://arxiv.org/abs/2103.06972">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>ExactBoost: Directly Boosting the Margin in Combinatorial
|
||
and Non-decomposable Metrics (AISTATS 2022)</strong>
|
||
<ul>
|
||
<li>Daniel Csillag, Carolina Piazza, Thiago Ramos, João Vitor Romano,
|
||
Roberto I. Oliveira, Paulo Orenstein</li>
|
||
<li><a
|
||
href="https://proceedings.mlr.press/v151/csillag22a.html">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-2">2021</h2>
|
||
<ul>
|
||
<li><strong>Precision-based Boosting (AAAI 2021)</strong>
|
||
<ul>
|
||
<li>Mohammad Hossein Nikravan, Marjan Movahedan, Sandra Zilles</li>
|
||
<li><a
|
||
href="https://ojs.aaai.org/index.php/AAAI/article/view/17105">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>BNN: Boosting Neural Network Framework Utilizing Limited
|
||
Amount of Data (CIKM 2021)</strong>
|
||
<ul>
|
||
<li>Amit Livne, Roy Dor, Bracha Shapira, Lior Rokach</li>
|
||
<li><a
|
||
href="https://dl.acm.org/doi/abs/10.1145/3459637.3482414">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Unsupervised Domain Adaptation for Static Malware Detection
|
||
based on Gradient Boosting Trees (CIKM 2021)</strong>
|
||
<ul>
|
||
<li>Panpan Qi, Wei Wang, Lei Zhu, See-Kiong Ng</li>
|
||
<li><a
|
||
href="https://dl.acm.org/doi/pdf/10.1145/3459637.3482400">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Individually Fair Gradient Boosting (ICLR 2021)</strong>
|
||
<ul>
|
||
<li>Alexander Vargo, Fan Zhang, Mikhail Yurochkin, Yuekai Sun</li>
|
||
<li><a href="https://arxiv.org/abs/2103.16785">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Are Neural Rankers still Outperformed by Gradient Boosted
Decision Trees? (ICLR 2021)</strong>
<ul>
|
||
<li>Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi,
|
||
Xuanhui Wang, Michael Bendersky, Marc Najork</li>
|
||
<li><a
|
||
href="https://iclr.cc/virtual/2021/spotlight/3536">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>AdaGCN: Adaboosting Graph Convolutional Networks into Deep
|
||
Models (ICLR 2021)</strong>
|
||
<ul>
|
||
<li>Ke Sun, Zhanxing Zhu, Zhouchen Lin</li>
|
||
<li><a href="https://arxiv.org/abs/1908.05081">[Paper]</a></li>
|
||
<li><a href="https://github.com/datake/AdaGCN">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Uncertainty in Gradient Boosting via Ensembles (ICLR
|
||
2021)</strong>
|
||
<ul>
|
||
<li>Andrey Malinin, Liudmila Prokhorenkova, Aleksei Ustimenko</li>
|
||
<li><a href="https://arxiv.org/abs/2006.10562">[Paper]</a></li>
</ul></li>
|
||
<li><strong>Boost then Convolve: Gradient Boosting Meets Graph Neural
|
||
Networks (ICLR 2021)</strong>
|
||
<ul>
|
||
<li>Sergei Ivanov, Liudmila Prokhorenkova</li>
|
||
<li><a href="https://arxiv.org/abs/2101.08543">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>GBHT: Gradient Boosting Histogram Transform for Density
|
||
Estimation (ICML 2021)</strong>
|
||
<ul>
|
||
<li>Jingyi Cui, Hanyuan Hang, Yisen Wang, Zhouchen Lin</li>
|
||
<li><a href="https://arxiv.org/abs/2106.05738">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting for Online Convex Optimization (ICML 2021)</strong>
|
||
<ul>
|
||
<li>Elad Hazan, Karan Singh</li>
|
||
<li><a href="https://arxiv.org/abs/2102.09305">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Accuracy, Interpretability, and Differential Privacy via
|
||
Explainable Boosting (ICML 2021)</strong>
|
||
<ul>
|
||
<li>Harsha Nori, Rich Caruana, Zhiqi Bu, Judy Hanwen Shen, Janardhan
|
||
Kulkarni</li>
|
||
<li><a href="https://arxiv.org/abs/2106.09680">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>SGLB: Stochastic Gradient Langevin Boosting (ICML
|
||
2021)</strong>
|
||
<ul>
|
||
<li>Aleksei Ustimenko, Liudmila Prokhorenkova</li>
|
||
<li><a href="https://arxiv.org/abs/2001.07248">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Self-boosting for Feature Distillation (IJCAI 2021)</strong>
|
||
<ul>
|
||
<li>Yulong Pei, Yanyun Qu, Junping Zhang</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/proceedings/2021/131">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Variational Inference With Locally Adaptive
|
||
Step-Sizes (IJCAI 2021)</strong>
|
||
<ul>
|
||
<li>Gideon Dresdner, Saurav Shekhar, Fabian Pedregosa, Francesco
|
||
Locatello, Gunnar Rätsch</li>
|
||
<li><a href="https://arxiv.org/abs/2105.09240">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Probabilistic Gradient Boosting Machines for Large-Scale
|
||
Probabilistic Regression (KDD 2021)</strong>
|
||
<ul>
|
||
<li>Olivier Sprangers, Sebastian Schelter, Maarten de Rijke</li>
|
||
<li><a href="https://arxiv.org/abs/2106.01682">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Task-wise Split Gradient Boosting Trees for Multi-center
|
||
Diabetes Prediction (KDD 2021)</strong>
|
||
<ul>
|
||
<li>Mingcheng Chen, Zhenghui Wang, Zhiyun Zhao, Weinan Zhang, Xiawei
|
||
Guo, Jian Shen, Yanru Qu, Jieli Lu, Min Xu, Yu Xu, Tiange Wang, Mian Li,
|
||
Weiwei Tu, Yong Yu, Yufang Bi, Weiqing Wang, Guang Ning</li>
|
||
<li><a href="https://arxiv.org/abs/2108.07107">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Better Short than Greedy: Interpretable Models through
|
||
Optimal Rule Boosting (SDM 2021)</strong>
|
||
<ul>
|
||
<li>Mario Boley, Simon Teshuva, Pierre Le Bodic, Geoffrey I. Webb</li>
|
||
<li><a href="https://arxiv.org/abs/2101.08380">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-3">2020</h2>
|
||
<ul>
|
||
<li><strong>A Unified Framework for Knowledge Intensive Gradient
|
||
Boosting: Leveraging Human Experts for Noisy Sparse Domains (AAAI
|
||
2020)</strong>
|
||
<ul>
|
||
<li>Harsha Kokel, Phillip Odom, Shuo Yang, Sriraam Natarajan</li>
|
||
<li><a
|
||
href="https://personal.utdallas.edu/~sriraam.natarajan/Papers/Kokel_AAAI20.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/harshakokel/KiGB">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Practical Federated Gradient Boosting Decision Trees (AAAI
|
||
2020)</strong>
|
||
<ul>
|
||
<li>Qinbin Li, Zeyi Wen, Bingsheng He</li>
|
||
<li><a href="https://arxiv.org/abs/1911.04206">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Privacy-Preserving Gradient Boosting Decision Trees (AAAI
|
||
2020)</strong>
|
||
<ul>
|
||
<li>Qinbin Li, Zhaomin Wu, Zeyi Wen, Bingsheng He</li>
|
||
<li><a href="https://arxiv.org/abs/1911.04209">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Accelerating Gradient Boosting Machines (AISTATS
|
||
2020)</strong>
|
||
<ul>
|
||
<li>Haihao Lu, Sai Praneeth Karimireddy, Natalia Ponomareva, Vahab S.
|
||
Mirrokni</li>
|
||
<li><a href="https://arxiv.org/abs/1903.08708">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Scalable Feature Selection for Multitask Gradient Boosted
|
||
Trees (AISTATS 2020)</strong>
|
||
<ul>
|
||
<li>Cuize Han, Nikhil Rao, Daria Sorokina, Karthik Subbian</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v108/han20a.html">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Functional Gradient Boosting for Learning Residual-like
|
||
Networks with Statistical Guarantees (AISTATS 2020)</strong>
|
||
<ul>
|
||
<li>Atsushi Nitanda, Taiji Suzuki</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v108/nitanda20a.html">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning Optimal Decision Trees with MaxSAT and its
|
||
Integration in AdaBoost (IJCAI 2020)</strong>
|
||
<ul>
|
||
<li>Hao Hu, Mohamed Siala, Emmanuel Hebrard, Marie-José Huguet</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/2020/163">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>MixBoost: Synthetic Oversampling using Boosted Mixup for
|
||
Handling Extreme Imbalance (ICDM 2020)</strong>
|
||
<ul>
|
||
<li>Anubha Kabra, Ayush Chopra, Nikaash Puri, Pinkesh Badjatiya, Sukriti
|
||
Verma, Piyush Gupta, Balaji Krishnamurthy</li>
|
||
<li><a href="https://arxiv.org/abs/2009.01571">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting for Control of Dynamical Systems (ICML
|
||
2020)</strong>
|
||
<ul>
|
||
<li>Naman Agarwal, Nataly Brukhim, Elad Hazan, Zhou Lu</li>
|
||
<li><a href="https://arxiv.org/abs/1906.08720">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Quantum Boosting (ICML 2020)</strong>
|
||
<ul>
|
||
<li>Srinivasan Arunachalam, Reevu Maity</li>
|
||
<li><a href="https://arxiv.org/abs/2002.05056">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Histogram Transform for Regression (ICML
|
||
2020)</strong>
|
||
<ul>
|
||
<li>Yuchao Cai, Hanyuan Hang, Hanfang Yang, Zhouchen Lin</li>
|
||
<li><a
|
||
href="https://proceedings.icml.cc/static/paper_files/icml/2020/2360-Paper.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Frank-Wolfe by Chasing Gradients (ICML
|
||
2020)</strong>
|
||
<ul>
|
||
<li>Cyrille W. Combettes, Sebastian Pokutta</li>
|
||
<li><a href="https://arxiv.org/abs/2003.06369">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>NGBoost: Natural Gradient Boosting for Probabilistic
|
||
Prediction (ICML 2020)</strong>
|
||
<ul>
|
||
<li>Tony Duan, Avati Anand, Daisy Yi Ding, Khanh K. Thai, Sanjay Basu,
|
||
Andrew Y. Ng, Alejandro Schuler</li>
|
||
<li><a href="https://arxiv.org/abs/1910.03225">[Paper]</a></li>
|
||
<li><a href="https://github.com/stanfordmlgroup/ngboost">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Online Agnostic Boosting via Regret Minimization (NeurIPS
|
||
2020)</strong>
|
||
<ul>
|
||
<li>Nataly Brukhim, Xinyi Chen, Elad Hazan, Shay Moran</li>
|
||
<li><a href="https://arxiv.org/abs/2003.01150">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting First-Order Methods by Shifting Objective: New
|
||
Schemes with Faster Worst Case Rates (NeurIPS 2020)</strong>
|
||
<ul>
|
||
<li>Kaiwen Zhou, Anthony Man-Cho So, James Cheng</li>
|
||
<li><a href="https://arxiv.org/abs/2005.12061">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Optimization and Generalization Analysis of Transduction
|
||
through Gradient Boosting and Application to Multi-scale Graph Neural
|
||
Networks (NeurIPS 2020)</strong>
|
||
<ul>
|
||
<li>Kenta Oono, Taiji Suzuki</li>
|
||
<li><a href="https://arxiv.org/abs/2006.08550">[Paper]</a></li>
|
||
<li><a href="https://github.com/delta2323/GB-GNN">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Gradient Boosted Normalizing Flows (NeurIPS 2020)</strong>
|
||
<ul>
|
||
<li>Robert Giaquinto, Arindam Banerjee</li>
|
||
<li><a href="https://arxiv.org/abs/2002.11896">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/robert-giaquinto/gradient-boosted-normalizing-flows">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>HyperML: A Boosting Metric Learning Approach in Hyperbolic
|
||
Space for Recommender Systems (WSDM 2020)</strong>
|
||
<ul>
|
||
<li>Lucas Vinh Tran, Yi Tay, Shuai Zhang, Gao Cong, Xiaoli Li</li>
|
||
<li><a href="https://arxiv.org/abs/1809.01703">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
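<p>The NGBoost entry above links to the <code>stanfordmlgroup/ngboost</code>
package. A minimal sketch of its scikit-learn-style interface follows; the
dataset and hyperparameters are illustrative, not taken from the paper:</p>
<pre><code>from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from ngboost import NGBRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Natural-gradient boosting fits the parameters of a predictive
# distribution (a Normal by default), not just a point estimate.
model = NGBRegressor(n_estimators=300, verbose=False)
model.fit(X_train, y_train)

point = model.predict(X_test)     # mean of the predictive distribution
dist = model.pred_dist(X_test)    # full distribution object for each row
print(point[:3])
</code></pre>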
|
||
<h2 id="section-4">2019</h2>
|
||
<ul>
|
||
<li><strong>Induction of Non-Monotonic Logic Programs to Explain Boosted
|
||
Tree Models Using LIME (AAAI 2019)</strong>
|
||
<ul>
|
||
<li>Farhad Shakerin, Gopal Gupta</li>
|
||
<li><a href="https://arxiv.org/abs/1808.00629">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Verifying Robustness of Gradient Boosted Models (AAAI
|
||
2019)</strong>
|
||
<ul>
|
||
<li>Gil Einziger, Maayan Goldstein, Yaniv Sa’ar, Itai Segall</li>
|
||
<li><a href="https://arxiv.org/pdf/1906.10991.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Online Multiclass Boosting with Bandit Feedback (AISTATS
|
||
2019)</strong>
|
||
<ul>
|
||
<li>Daniel T. Zhang, Young Hun Jung, Ambuj Tewari</li>
|
||
<li><a href="https://arxiv.org/abs/1810.05290">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>AdaFair: Cumulative Fairness Adaptive Boosting (CIKM
|
||
2019)</strong>
|
||
<ul>
|
||
<li>Vasileios Iosifidis, Eirini Ntoutsi</li>
|
||
<li><a href="https://arxiv.org/abs/1909.08982">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Interpretable MTL from Heterogeneous Domains using Boosted
|
||
Tree (CIKM 2019)</strong>
|
||
<ul>
|
||
<li>Ya-Lin Zhang, Longfei Li</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=3357384.3358072">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Adversarial Training of Gradient-Boosted Decision Trees
|
||
(CIKM 2019)</strong>
|
||
<ul>
|
||
<li>Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei</li>
|
||
<li><a
|
||
href="https://www.dais.unive.it/~calzavara/papers/cikm19.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Fair Adversarial Gradient Tree Boosting (ICDM 2019)</strong>
|
||
<ul>
|
||
<li>Vincent Grari, Boris Ruf, Sylvain Lamprier, Marcin Detyniecki</li>
|
||
<li><a href="https://arxiv.org/abs/1911.05369">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Density Estimation Remastered (ICML 2019)</strong>
|
||
<ul>
|
||
<li>Zac Cranko, Richard Nock</li>
|
||
<li><a href="https://arxiv.org/abs/1803.08178">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Lossless or Quantized Boosting with Integer Arithmetic (ICML
|
||
2019)</strong>
|
||
<ul>
|
||
<li>Richard Nock, Robert C. Williamson</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v97/nock19a.html">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Optimal Minimal Margin Maximization with Boosting (ICML
|
||
2019)</strong>
|
||
<ul>
|
||
<li>Alexander Mathiasen, Kasper Green Larsen, Allan Grønlund</li>
|
||
<li><a href="https://arxiv.org/abs/1901.10789">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Katalyst: Boosting Convex Katayusha for Non-Convex Problems
|
||
with a Large Condition Number (ICML 2019)</strong>
|
||
<ul>
|
||
<li>Zaiyi Chen, Yi Xu, Haoyuan Hu, Tianbao Yang</li>
|
||
<li><a href="https://arxiv.org/abs/1809.06754">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting for Comparison-Based Learning (IJCAI 2019)</strong>
|
||
<ul>
|
||
<li>Michaël Perrot, Ulrike von Luxburg</li>
|
||
<li><a href="https://arxiv.org/abs/1810.13333">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>AugBoost: Gradient Boosting Enhanced with Step-Wise Feature
|
||
Augmentation (IJCAI 2019)</strong>
|
||
<ul>
|
||
<li>Philip Tannor, Lior Rokach</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/proceedings/2019/0493.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Gradient Boosting with Piece-Wise Linear Regression Trees
|
||
(IJCAI 2019)</strong>
|
||
<ul>
|
||
<li>Yu Shi, Jian Li, Zhize Li</li>
|
||
<li><a href="https://arxiv.org/abs/1802.05640">[Paper]</a></li>
|
||
<li><a href="https://github.com/GBDT-PL/GBDT-PL">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>SpiderBoost and Momentum: Faster Variance Reduction
|
||
Algorithms (NeurIPS 2019)</strong>
|
||
<ul>
|
||
<li>Zhe Wang, Kaiyi Ji, Yi Zhou, Yingbin Liang, Vahid Tarokh</li>
|
||
<li><a
|
||
href="http://papers.nips.cc/paper/8511-spiderboost-and-momentum-faster-variance-reduction-algorithms">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Faster Boosting with Smaller Memory (NeurIPS 2019)</strong>
|
||
<ul>
|
||
<li>Julaiti Alafate, Yoav Freund</li>
|
||
<li><a href="https://arxiv.org/abs/1901.09047">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Regularized Gradient Boosting (NeurIPS 2019)</strong>
|
||
<ul>
|
||
<li>Corinna Cortes, Mehryar Mohri, Dmitry Storcheus</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/8784-regularized-gradient-boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Margin-Based Generalization Lower Bounds for Boosted
|
||
Classifiers (NeurIPS 2019)</strong>
|
||
<ul>
|
||
<li>Allan Grønlund, Lior Kamma, Kasper Green Larsen, Alexander
|
||
Mathiasen, Jelani Nelson</li>
|
||
<li><a href="https://arxiv.org/abs/1909.12518">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Minimal Variance Sampling in Stochastic Gradient Boosting
|
||
(NeurIPS 2019)</strong>
|
||
<ul>
|
||
<li>Bulat Ibragimov, Gleb Gusev</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/9645-minimal-variance-sampling-in-stochastic-gradient-boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Universal Boosting Variational Inference (NeurIPS
|
||
2019)</strong>
|
||
<ul>
|
||
<li>Trevor Campbell, Xinglong Li</li>
|
||
<li><a href="https://arxiv.org/abs/1906.01235">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Provably Robust Boosted Decision Stumps and Trees against
|
||
Adversarial Attacks (NeurIPS 2019)</strong>
|
||
<ul>
|
||
<li>Maksym Andriushchenko, Matthias Hein</li>
|
||
<li><a href="https://arxiv.org/abs/1906.03526">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/max-andr/provably-robust-boosting">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Block-distributed Gradient Boosted Trees (SIGIR
|
||
2019)</strong>
|
||
<ul>
|
||
<li>Theodore Vasiloudis, Hyunsu Cho, Henrik Boström</li>
|
||
<li><a href="https://arxiv.org/abs/1904.10522">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning to Rank in Theory and Practice: From Gradient
|
||
Boosting to Neural Networks and Unbiased Learning (SIGIR 2019)</strong>
|
||
<ul>
|
||
<li>Claudio Lucchese, Franco Maria Nardini, Rama Kumar Pasumarthi,
|
||
Sebastian Bruch, Michael Bendersky, Xuanhui Wang, Harrie Oosterhuis,
|
||
Rolf Jagerman, Maarten de Rijke</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/334579610_Learning_to_Rank_in_Theory_and_Practice_From_Gradient_Boosting_to_Neural_Networks_and_Unbiased_Learning">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-5">2018</h2>
|
||
<ul>
|
||
<li><strong>Boosted Generative Models (AAAI 2018)</strong>
|
||
<ul>
|
||
<li>Aditya Grover, Stefano Ermon</li>
|
||
<li><a href="https://arxiv.org/pdf/1702.08484.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/ermongroup/bgm">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Variational Inference: an Optimization Perspective
|
||
(AISTATS 2018)</strong>
|
||
<ul>
|
||
<li>Francesco Locatello, Rajiv Khanna, Joydeep Ghosh, Gunnar Rätsch</li>
|
||
<li><a href="https://arxiv.org/abs/1708.01733">[Paper]</a></li>
|
||
<li><a href="https://github.com/ratschlab/boosting-bbvi">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Online Boosting Algorithms for Multi-label Ranking (AISTATS
|
||
2018)</strong>
|
||
<ul>
|
||
<li>Young Hun Jung, Ambuj Tewari</li>
|
||
<li><a href="https://arxiv.org/abs/1710.08079">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/yhjung88/OnlineMLRBoostingWithVFDT">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>DualBoost: Handling Missing Values with Feature Weights and
|
||
Weak Classifiers that Abstain (CIKM 2018)</strong>
|
||
<ul>
|
||
<li>Weihong Wang, Jie Xu, Yang Wang, Chen Cai, Fang Chen</li>
|
||
<li><a
href="https://dl.acm.org/citation.cfm?id=3269319">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Functional Gradient Boosting based on Residual Network
|
||
Perception (ICML 2018)</strong>
|
||
<ul>
|
||
<li>Atsushi Nitanda, Taiji Suzuki</li>
|
||
<li><a href="https://arxiv.org/abs/1802.09031">[Paper]</a></li>
|
||
<li><a href="https://github.com/anitan0925/ResFGB">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Finding Influential Training Samples for Gradient Boosted
|
||
Decision Trees (ICML 2018)</strong>
|
||
<ul>
|
||
<li>Boris Sharchilev, Yury Ustinovskiy, Pavel Serdyukov, Maarten de
|
||
Rijke</li>
|
||
<li><a href="https://arxiv.org/abs/1802.06640">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning Deep ResNet Blocks Sequentially using Boosting
|
||
Theory (ICML 2018)</strong>
|
||
<ul>
|
||
<li>Furong Huang, Jordan T. Ash, John Langford, Robert E. Schapire</li>
|
||
<li><a href="https://arxiv.org/abs/1706.04964">[Paper]</a></li>
|
||
<li><a href="https://github.com/JordanAsh/boostresnet">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>UCBoost: A Boosting Approach to Tame Complexity and
|
||
Optimality for Stochastic Bandits (IJCAI 2018)</strong>
|
||
<ul>
|
||
<li>Fang Liu, Sinong Wang, Swapna Buccapatnam, Ness B. Shroff</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/proceedings/2018/0338.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://smpybandits.github.io/docs/Policies.UCBoost.html">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Adaboost with Auto-Evaluation for Conversational Models
|
||
(IJCAI 2018)</strong>
|
||
<ul>
|
||
<li>Juncen Li, Ping Luo, Ganbin Zhou, Fen Lin, Cheng Niu</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/proceedings/2018/0580.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Ensemble Neural Relation Extraction with Adaptive Boosting
|
||
(IJCAI 2018)</strong>
|
||
<ul>
|
||
<li>Dongdong Yang, Senzhang Wang, Zhoujun Li</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/proceedings/2018/0630.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>CatBoost: Unbiased Boosting with Categorical Features (NIPS
|
||
2018)</strong>
|
||
<ul>
|
||
<li>Liudmila Ostroumova Prokhorenkova, Gleb Gusev, Aleksandr Vorobev,
|
||
Anna Veronika Dorogush, Andrey Gulin</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/7898-catboost-unbiased-boosting-with-categorical-features.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/catboost/catboost">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multitask Boosting for Survival Analysis with Competing
|
||
Risks (NIPS 2018)</strong>
|
||
<ul>
|
||
<li>Alexis Bellot, Mihaela van der Schaar</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/7413-multitask-boosting-for-survival-analysis-with-competing-risks">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multi-Layered Gradient Boosting Decision Trees (NIPS
|
||
2018)</strong>
|
||
<ul>
|
||
<li>Ji Feng, Yang Yu, Zhi-Hua Zhou</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/7614-multi-layered-gradient-boosting-decision-trees.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/kingfengji/mGBDT">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Sparse and Low-Rank Tensor Regression (NIPS
|
||
2018)</strong>
|
||
<ul>
|
||
<li>Lifang He, Kun Chen, Wanwan Xu, Jiayu Zhou, Fei Wang</li>
|
||
<li><a href="https://arxiv.org/abs/1811.01158">[Paper]</a></li>
|
||
<li><a href="https://github.com/LifangHe/NeurIPS18_SURF">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Selective Gradient Boosting for Effective Learning to Rank
|
||
(SIGIR 2018)</strong>
|
||
<ul>
|
||
<li>Claudio Lucchese, Franco Maria Nardini, Raffaele Perego, Salvatore
|
||
Orlando, Salvatore Trani</li>
|
||
<li><a
|
||
href="http://quickrank.isti.cnr.it/selective-data/selective-SIGIR2018.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/hpclab/quickrank/blob/master/documentation/selective.md">[Code]</a></li>
|
||
</ul></li>
|
||
</ul>
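<p>For the CatBoost entry above, the linked <code>catboost/catboost</code>
library consumes raw categorical columns directly. A minimal sketch; the toy
data and settings here are illustrative:</p>
<pre><code>from catboost import CatBoostClassifier, Pool

# A string-valued categorical column is passed as-is; CatBoost encodes it
# internally with the ordered target statistics described in the paper.
X = [["red", 1.0], ["blue", 2.0], ["red", 3.0], ["green", 4.0]]
y = [0, 1, 0, 1]
train = Pool(X, y, cat_features=[0])

model = CatBoostClassifier(iterations=100, depth=4, verbose=False)
model.fit(train)
print(model.predict(Pool([["blue", 2.5]], cat_features=[0])))
</code></pre>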
|
||
<h2 id="section-6">2017</h2>
|
||
<ul>
|
||
<li><strong>Boosting for Real-Time Multivariate Time Series
|
||
Classification (AAAI 2017)</strong>
|
||
<ul>
|
||
<li>Haishuai Wang, Jun Wu</li>
|
||
<li><a
|
||
href="https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/download/14852/14241">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Cross-Domain Sentiment Classification via Topic-Related
|
||
TrAdaBoost (AAAI 2017)</strong>
|
||
<ul>
|
||
<li>Xingchang Huang, Yanghui Rao, Haoran Xie, Tak-Lam Wong, Fu Lee
|
||
Wang</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/826c/c83d98a5c4c7dcc02be1f4dd9c27e2b99670.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/xchhuang/cross-domain-sentiment-classification">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Extreme Gradient Boosting and Behavioral Biometrics (AAAI
|
||
2017)</strong>
|
||
<ul>
|
||
<li>Benjamin Manning</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/8c6e/6c887d6d47dda3f0c73297fd4da516fef1ee.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>FeaBoost: Joint Feature and Label Refinement for Semantic
|
||
Segmentation (AAAI 2017)</strong>
|
||
<ul>
|
||
<li>Yulei Niu, Zhiwu Lu, Songfang Huang, Xin Gao, Ji-Rong Wen</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/d566/73be998b3ed38ccbb53551e38758ae8cfc9d.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Complementary Hash Tables for Fast Nearest Neighbor
|
||
Search (AAAI 2017)</strong>
|
||
<ul>
|
||
<li>Xianglong Liu, Cheng Deng, Yadong Mu, Zhujin Li</li>
|
||
<li><a
|
||
href="https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14336">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Gradient Boosting on Stochastic Data Streams (AISTATS
|
||
2017)</strong>
|
||
<ul>
|
||
<li>Hanzhang Hu, Wen Sun, Arun Venkatraman, Martial Hebert, J. Andrew
|
||
Bagnell</li>
|
||
<li><a href="https://arxiv.org/abs/1703.00377">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>BoostVHT: Boosting Distributed Streaming Decision Trees
|
||
(CIKM 2017)</strong>
|
||
<ul>
|
||
<li>Theodore Vasiloudis, Foteini Beligianni, Gianmarco De Francisci
|
||
Morales</li>
|
||
<li><a
|
||
href="https://melmeric.files.wordpress.com/2010/05/boostvht-boosting-distributed-streaming-decision-trees.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Fast Boosting Based Detection Using Scale Invariant
|
||
Multimodal Multiresolution Filtered Features (CVPR 2017)</strong>
|
||
<ul>
|
||
<li>Arthur Daniel Costea, Robert Varga, Sergiu Nedevschi</li>
|
||
<li><a
|
||
href="http://openaccess.thecvf.com/content_cvpr_2017/papers/Costea_Fast_Boosting_Based_CVPR_2017_paper.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>BIER - Boosting Independent Embeddings Robustly (ICCV
|
||
2017)</strong>
|
||
<ul>
|
||
<li>Michael Opitz, Georg Waltner, Horst Possegger, Horst Bischof</li>
|
||
<li><a
|
||
href="http://openaccess.thecvf.com/content_ICCV_2017/papers/Opitz_BIER_-_Boosting_ICCV_2017_paper.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/mop/bier">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>An Analysis of Boosted Linear Classifiers on Noisy Data with
|
||
Applications to Multiple-Instance Learning (ICDM 2017)</strong>
|
||
<ul>
|
||
<li>Rui Liu, Soumya Ray</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/8215501">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Variational Boosting: Iteratively Refining Posterior
|
||
Approximations (ICML 2017)</strong>
|
||
<ul>
|
||
<li>Andrew C. Miller, Nicholas J. Foti, Ryan P. Adams</li>
|
||
<li><a href="https://arxiv.org/abs/1611.06585">[Paper]</a></li>
|
||
<li><a href="https://github.com/andymiller/vboost">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Fitted Q-Iteration (ICML 2017)</strong>
|
||
<ul>
|
||
<li>Samuele Tosatto, Matteo Pirotta, Carlo D’Eramo, Marcello
|
||
Restelli</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v70/tosatto17a.html">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Simple Multi-Class Boosting Framework with Theoretical
|
||
Guarantees and Empirical Proficiency (ICML 2017)</strong>
|
||
<ul>
|
||
<li>Ron Appel, Pietro Perona</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v70/appel17a.html">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/GuillaumeCollin/A-Simple-Multi-Class-Boosting-Framework-with-Theoretical-Guarantees-and-Empirical-Proficiency">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Gradient Boosted Decision Trees for High Dimensional Sparse
|
||
Output (ICML 2017)</strong>
|
||
<ul>
|
||
<li>Si Si, Huan Zhang, S. Sathiya Keerthi, Dhruv Mahajan, Inderjit S.
|
||
Dhillon, Cho-Jui Hsieh</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v70/si17a.html">[Paper]</a></li>
|
||
<li><a href="https://github.com/springdaisy/GBDT">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Local Topic Discovery via Boosted Ensemble of Nonnegative
|
||
Matrix Factorization (IJCAI 2017)</strong>
|
||
<ul>
|
||
<li>Sangho Suh, Jaegul Choo, Joonseok Lee, Chandan K. Reddy</li>
|
||
<li><a href="http://dmkd.cs.vt.edu/papers/IJCAI17.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/benedekrozemberczki/BoostedFactorization">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Zero-Shot Learning with Semantic Correlation
|
||
Regularization (IJCAI 2017)</strong>
|
||
<ul>
|
||
<li>Te Pi, Xi Li, Zhongfei (Mark) Zhang</li>
|
||
<li><a href="https://arxiv.org/abs/1707.08008">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>BDT: Gradient Boosted Decision Tables for High Accuracy and
|
||
Scoring Efficiency (KDD 2017)</strong>
|
||
<ul>
|
||
<li>Yin Lou, Mikhail Obukhov</li>
|
||
<li><a
|
||
href="https://yinlou.github.io/papers/lou-kdd17.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>CatBoost: Gradient Boosting with Categorical Features
|
||
Support (NIPS 2017)</strong>
|
||
<ul>
|
||
<li>Anna Veronika Dorogush, Vasily Ershov, Andrey Gulin</li>
|
||
<li><a href="https://arxiv.org/abs/1810.11363">[Paper]</a></li>
|
||
<li><a href="https://catboost.ai/">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Cost Efficient Gradient Boosting (NIPS 2017)</strong>
|
||
<ul>
|
||
<li>Sven Peter, Ferran Diego, Fred A. Hamprecht, Boaz Nadler</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/6753-cost-efficient-gradient-boosting">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/svenpeter42/LightGBM-CEGB">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>AdaGAN: Boosting Generative Models (NIPS 2017)</strong>
|
||
<ul>
|
||
<li>Ilya O. Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann
|
||
Simon-Gabriel, Bernhard Schölkopf</li>
|
||
<li><a href="https://arxiv.org/abs/1701.02386">[Paper]</a></li>
|
||
<li><a href="https://github.com/tolstikhin/adagan">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>LightGBM: A Highly Efficient Gradient Boosting Decision Tree
|
||
(NIPS 2017)</strong>
|
||
<ul>
|
||
<li>Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong
|
||
Ma, Qiwei Ye, Tie-Yan Liu</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree">[Paper]</a></li>
|
||
<li><a href="https://lightgbm.readthedocs.io/en/latest/">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Early Stopping for Kernel Boosting Algorithms: A General
|
||
Analysis with Localized Complexities (NIPS 2017)</strong>
|
||
<ul>
|
||
<li>Yuting Wei, Fanny Yang, Martin J. Wainwright</li>
|
||
<li><a href="https://arxiv.org/abs/1707.01543">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/fanny-yang/EarlyStoppingRKHS">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Online Multiclass Boosting (NIPS 2017)</strong>
|
||
<ul>
|
||
<li>Young Hun Jung, Jack Goetz, Ambuj Tewari</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/6693-online-multiclass-boosting.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Stacking Bagged and Boosted Forests for Effective Automated
|
||
Classification (SIGIR 2017)</strong>
|
||
<ul>
|
||
<li>Raphael R. Campos, Sérgio D. Canuto, Thiago Salles, Clebson C. A. de
|
||
Sá, Marcos André Gonçalves</li>
|
||
<li><a
|
||
href="https://homepages.dcc.ufmg.br/~rcampos/papers/sigir2017/appendix.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/raphaelcampos/stacking-bagged-boosted-forests">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>GB-CENT: Gradient Boosted Categorical Embedding and
|
||
Numerical Trees (WWW 2017)</strong>
|
||
<ul>
|
||
<li>Qian Zhao, Yue Shi, Liangjie Hong</li>
|
||
<li><a
|
||
href="http://papers.www2017.com.au.s3-website-ap-southeast-2.amazonaws.com/proceedings/p1311.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/grouplens/samantha">[Code]</a></li>
|
||
</ul></li>
|
||
</ul>
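<p>For the LightGBM entry above, the linked documentation covers the
<code>lightgbm</code> Python package. A minimal sketch of its scikit-learn
wrapper; the dataset and hyperparameters are illustrative:</p>
<pre><code>from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split
from lightgbm import LGBMRegressor

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Histogram-based split finding and leaf-wise tree growth are the
# defaults; num_leaves bounds the size of each leaf-wise tree.
model = LGBMRegressor(n_estimators=500, num_leaves=31, learning_rate=0.05)
model.fit(X_train, y_train)
print(model.predict(X_test)[:5])
</code></pre>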
|
||
<h2 id="section-7">2016</h2>
|
||
<ul>
|
||
<li><strong>Group Cost-Sensitive Boosting for Multi-Resolution
|
||
Pedestrian Detection (AAAI 2016)</strong>
|
||
<ul>
|
||
<li>Chao Zhu, Yuxin Peng</li>
|
||
<li><a
|
||
href="https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/viewFile/11898/12146">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/nnikolaou/Cost-sensitive-Boosting-Tutorial">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Communication Efficient Distributed Agnostic Boosting
|
||
(AISTATS 2016)</strong>
|
||
<ul>
|
||
<li>Shang-Tse Chen, Maria-Florina Balcan, Duen Horng Chau</li>
|
||
<li><a href="https://arxiv.org/abs/1506.06318">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Logistic Boosting Regression for Label Distribution Learning
|
||
(CVPR 2016)</strong>
|
||
<ul>
|
||
<li>Chao Xing, Xin Geng, Hui Xue</li>
|
||
<li><a
|
||
href="https://zpascal.net/cvpr2016/Xing_Logistic_Boosting_Regression_CVPR_2016_paper.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Structured Regression Gradient Boosting (CVPR 2016)</strong>
|
||
<ul>
|
||
<li>Ferran Diego, Fred A. Hamprecht</li>
|
||
<li><a
|
||
href="https://hci.iwr.uni-heidelberg.de/sites/default/files/publications/files/1037872734/diego_16_structured.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>L-EnsNMF: Boosted Local Topic Discovery via Ensemble of
|
||
Nonnegative Matrix Factorization (ICDM 2016)</strong>
|
||
<ul>
|
||
<li>Sangho Suh, Jaegul Choo, Joonseok Lee, Chandan K. Reddy</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/7837872">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/benedekrozemberczki/BoostedFactorization">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Meta-Gradient Boosted Decision Tree Model for Weight and
|
||
Target Learning (ICML 2016)</strong>
|
||
<ul>
|
||
<li>Yury Ustinovskiy, Valentina Fedorova, Gleb Gusev, Pavel
|
||
Serdyukov</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v48/ustinovskiy16.html">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Generalized Dictionary for Multitask Learning with Boosting
|
||
(IJCAI 2016)</strong>
|
||
<ul>
|
||
<li>Boyu Wang, Joelle Pineau</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/16/Papers/299.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Self-Paced Boost Learning for Classification (IJCAI
|
||
2016)</strong>
|
||
<ul>
|
||
<li>Te Pi, Xi Li, Zhongfei Zhang, Deyu Meng, Fei Wu, Jun Xiao, Yueting
|
||
Zhuang</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/31b6/ab4a0771d5b7405cacdd12c398b1c832729d.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Interactive Martingale Boosting (IJCAI 2016)</strong>
|
||
<ul>
|
||
<li>Ashish Kulkarni, Pushpak Burange, Ganesh Ramakrishnan</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/16/Papers/124.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Optimal and Adaptive Algorithms for Online Boosting (IJCAI
|
||
2016)</strong>
|
||
<ul>
|
||
<li>Alina Beygelzimer, Satyen Kale, Haipeng Luo</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/16/Papers/614.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/vowpalwabbit/boosting.cc">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Rating-Boosted Latent Topics: Understanding Users and Items
|
||
with Ratings and Reviews (IJCAI 2016)</strong>
|
||
<ul>
|
||
<li>Yunzhi Tan, Min Zhang, Yiqun Liu, Shaoping Ma</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/db63/89e0ca49ec0e4686e40604e7489cb4c0729d.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>XGBoost: A Scalable Tree Boosting System (KDD 2016)</strong>
|
||
<ul>
|
||
<li>Tianqi Chen, Carlos Guestrin</li>
|
||
<li><a
|
||
href="https://www.kdd.org/kdd2016/papers/files/rfp0697-chenAemb.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/dmlc/xgboost">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Decision Tree Regression Adjustment for Variance
|
||
Reduction in Online Controlled Experiments (KDD 2016)</strong>
|
||
<ul>
|
||
<li>Alexey Poyarkov, Alexey Drutsa, Andrey Khalyavin, Gleb Gusev, Pavel
|
||
Serdyukov</li>
|
||
<li><a
|
||
href="https://www.kdd.org/kdd2016/papers/files/adf0653-poyarkovA.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting with Abstention (NIPS 2016)</strong>
|
||
<ul>
|
||
<li>Corinna Cortes, Giulia DeSalvo, Mehryar Mohri</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/6336-boosting-with-abstention">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>SEBOOST - Boosting Stochastic Learning Using Subspace
|
||
Optimization Techniques (NIPS 2016)</strong>
|
||
<ul>
|
||
<li>Elad Richardson, Rom Herskovitz, Boris Ginsburg, Michael
|
||
Zibulevsky</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/6109-seboost-boosting-stochastic-learning-using-subspace-optimization-techniques.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/eladrich/seboost">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Incremental Boosting Convolutional Neural Network for Facial
|
||
Action Unit Recognition (NIPS 2016)</strong>
|
||
<ul>
|
||
<li>Shizhong Han, Zibo Meng, Ahmed-Shehab Khan, Yan Tong</li>
|
||
<li><a href="https://arxiv.org/abs/1707.05395">[Paper]</a></li>
|
||
<li><a href="https://github.com/sjsingh91/IB-CNN">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Generalized BROOF-L2R: A General Framework for Learning to
|
||
Rank Based on Boosting and Random Forests (SIGIR 2016)</strong>
|
||
<ul>
|
||
<li>Clebson C. A. de Sá, Marcos André Gonçalves, Daniel Xavier de Sousa,
|
||
Thiago Salles</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=2911540">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
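<p>For the XGBoost entry above, the linked <code>dmlc/xgboost</code> library
exposes a scikit-learn-compatible estimator. A minimal sketch; the dataset and
hyperparameters are illustrative:</p>
<pre><code>from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Regularized second-order gradient boosting as described in the KDD 2016 paper.
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on the held-out split
</code></pre>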
|
||
<h2 id="section-8">2015</h2>
|
||
<ul>
|
||
<li><strong>Online Boosting Algorithms for Anytime Transfer and
|
||
Multitask Learning (AAAI 2015)</strong>
|
||
<ul>
|
||
<li>Boyu Wang, Joelle Pineau</li>
|
||
<li><a
|
||
href="https://www.cs.mcgill.ca/~jpineau/files/bwang-aaai15.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Boosted Multi-Task Model for Pedestrian Detection with
|
||
Occlusion Handling (AAAI 2015)</strong>
|
||
<ul>
|
||
<li>Chao Zhu, Yuxin Peng</li>
|
||
<li><a
|
||
href="https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/viewFile/9879/9825">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Second-Order Gradient Boosting for Conditional
|
||
Random Fields (AISTATS 2015)</strong>
|
||
<ul>
|
||
<li>Tianqi Chen, Sameer Singh, Ben Taskar, Carlos Guestrin</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v38/chen15b.html">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Tumblr Blog Recommendation with Boosted Inductive Matrix
|
||
Completion (CIKM 2015)</strong>
|
||
<ul>
|
||
<li>Donghyuk Shin, Suleyman Cetintas, Kuang-Chih Lee, Inderjit S.
|
||
Dhillon</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=2806578">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Basis mapping based boosting for object detection (CVPR
|
||
2015)</strong>
|
||
<ul>
|
||
<li>Haoyu Ren, Ze-Nian Li</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/7298766">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Tracking-by-Segmentation with Online Gradient Boosting
|
||
Decision Tree (ICCV 2015)</strong>
|
||
<ul>
|
||
<li>Jeany Son, Ilchae Jung, Kayoung Park, Bohyung Han</li>
|
||
<li><a
|
||
href="https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Son_Tracking-by-Segmentation_With_Online_ICCV_2015_paper.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="http://cvlab.postech.ac.kr/research/ogbdt_track/">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning to Boost Filamentary Structure Segmentation (ICCV
|
||
2015)</strong>
|
||
<ul>
|
||
<li>Lin Gu, Li Cheng</li>
|
||
<li><a
|
||
href="https://isg.nist.gov/BII_2015/webPages/pages/2015_BII_program/PDFs/Day_3/Session_9/Abstract_Gu_Lin.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Optimal and Adaptive Algorithms for Online Boosting (ICML
|
||
2015)</strong>
|
||
<ul>
|
||
<li>Alina Beygelzimer, Satyen Kale, Haipeng Luo</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v37/beygelzimer15.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/VowpalWabbit/vowpal_wabbit/blob/master/vowpalwabbit/boosting.cc">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Rademacher Observations, Private Data, and Boosting (ICML
|
||
2015)</strong>
|
||
<ul>
|
||
<li>Richard Nock, Giorgio Patrini, Arik Friedman</li>
|
||
<li><a href="https://arxiv.org/abs/1502.02322">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Categorical Restricted Boltzmann Machine for
|
||
Computational Prediction of Splice Junctions (ICML 2015)</strong>
|
||
<ul>
|
||
<li>Taehoon Lee, Sungroh Yoon</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/d0ad/beef3053e98dd88ff74f42744417bc65a729.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Direct Boosting Approach for Semi-supervised
|
||
Classification (IJCAI 2015)</strong>
|
||
<ul>
|
||
<li>Shaodan Zhai, Tian Xia, Zhongliang Li, Shaojun Wang</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/15/Papers/565.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Boosting Algorithm for Item Recommendation with Implicit
|
||
Feedback (IJCAI 2015)</strong>
|
||
<ul>
|
||
<li>Yong Liu, Peilin Zhao, Aixin Sun, Chunyan Miao</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/15/Papers/255.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/microsoft/recommenders">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Training-Time Optimization of a Budgeted Booster (IJCAI
|
||
2015)</strong>
|
||
<ul>
|
||
<li>Yi Huang, Brian Powers, Lev Reyzin</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/15/Papers/504.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Optimal Action Extraction for Random Forests and Boosted
|
||
Trees (KDD 2015)</strong>
|
||
<ul>
|
||
<li>Zhicheng Cui, Wenlin Chen, Yujie He, Yixin Chen</li>
|
||
<li><a
|
||
href="https://www.cse.wustl.edu/~ychen/public/OAE.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Online Gradient Boosting (NIPS 2015)</strong>
|
||
<ul>
|
||
<li>Alina Beygelzimer, Elad Hazan, Satyen Kale, Haipeng Luo</li>
|
||
<li><a href="https://arxiv.org/abs/1506.04820">[Paper]</a></li>
|
||
<li><a href="https://github.com/crm416/online_boosting">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>BROOF: Exploiting Out-of-Bag Errors Boosting and Random
|
||
Forests for Effective Automated Classification (SIGIR 2015)</strong>
|
||
<ul>
|
||
<li>Thiago Salles, Marcos André Gonçalves, Victor Rodrigues, Leonardo C.
|
||
da Rocha</li>
|
||
<li><a
|
||
href="https://homepages.dcc.ufmg.br/~tsalles/broof/appendix.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Search with Deep Understanding of Contents and
|
||
Users (WSDM 2015)</strong>
|
||
<ul>
|
||
<li>Kaihua Zhu</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/282482189_Boosting_Search_with_Deep_Understanding_of_Contents_and_Users">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-9">2014</h2>
|
||
<ul>
|
||
<li><strong>On Boosting Sparse Parities (AAAI 2014)</strong>
|
||
<ul>
|
||
<li>Lev Reyzin</li>
|
||
<li><a
|
||
href="https://www.aaai.org/ocs/index.php/AAAI/AAAI14/paper/view/8587">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Joint Coupled-Feature Representation and Coupled Boosting
|
||
for AD Diagnosis (CVPR 2014)</strong>
|
||
<ul>
|
||
<li>Yinghuan Shi, Heung-Il Suk, Yang Gao, Dinggang Shen</li>
|
||
<li><a
|
||
href="https://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Shi_Joint_Coupled-Feature_Representation_2014_CVPR_paper.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>From Categories to Individuals in Real Time - A Unified
|
||
Boosting Approach (CVPR 2014)</strong>
|
||
<ul>
|
||
<li>David Hall, Pietro Perona</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/6909424">[Paper]</a></li>
|
||
<li><a
|
||
href="http://www.vision.caltech.edu/~dhall/projects/CategoriesToIndividuals/">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Boosted Exemplar-Based Face Detection (CVPR
|
||
2014)</strong>
|
||
<ul>
|
||
<li>Haoxiang Li, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Gang Hua</li>
|
||
<li><a
|
||
href="http://users.eecs.northwestern.edu/~xsh835/assets/cvpr14_exemplarfacedetection.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Facial Expression Recognition via a Boosted Deep Belief
|
||
Network (CVPR 2014)</strong>
|
||
<ul>
|
||
<li>Ping Liu, Shizhong Han, Zibo Meng, Yan Tong</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/abstract/document/6909629">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Confidence-Rated Multiple Instance Boosting for Object
|
||
Detection (CVPR 2014)</strong>
|
||
<ul>
|
||
<li>Karim Ali, Kate Saenko</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/6909708">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>The Return of AdaBoost.MH: Multi-Class Hamming Trees (ICLR
|
||
2014)</strong>
|
||
<ul>
|
||
<li>Balázs Kégl</li>
|
||
<li><a href="https://arxiv.org/pdf/1312.6086.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/aciditeam/acidano/blob/master/acidano/utils/cost.py">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Deep Boosting (ICML 2014)</strong>
|
||
<ul>
|
||
<li>Corinna Cortes, Mehryar Mohri, Umar Syed</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v32/cortesb14.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/google/deepboost">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Convergence Rate Analysis for LogitBoost, MART and Their
|
||
Variant (ICML 2014)</strong>
|
||
<ul>
|
||
<li>Peng Sun, Tong Zhang, Jie Zhou</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v32/sunc14.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting with Online Binary Learners for the Multiclass
|
||
Bandit Problem (ICML 2014)</strong>
|
||
<ul>
|
||
<li>Shang-Tse Chen, Hsuan-Tien Lin, Chi-Jen Lu</li>
|
||
<li><a
|
||
href="https://www.cc.gatech.edu/~schen351/paper/icml14boost.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Multi-Step Autoregressive Forecasts (ICML
|
||
2014)</strong>
|
||
<ul>
|
||
<li>Souhaib Ben Taieb, Rob J. Hyndman</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v32/taieb14.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Dynamic Programming Boosting for Discriminative Macro-Action
|
||
Discovery (ICML 2014)</strong>
|
||
<ul>
|
||
<li>Leonidas Lefakis, François Fleuret</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v32/lefakis14.html">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Guess-Averse Loss Functions For Cost-Sensitive Multiclass
|
||
Boosting (ICML 2014)</strong>
|
||
<ul>
|
||
<li>Oscar Beijbom, Mohammad J. Saberian, David J. Kriegman, Nuno
|
||
Vasconcelos</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v32/beijbom14.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Multi-Class Boosting Method with Direct Optimization (KDD
|
||
2014)</strong>
|
||
<ul>
|
||
<li>Shaodan Zhai, Tian Xia, Shaojun Wang</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=2623689">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Gradient Boosted Feature Selection (KDD 2014)</strong>
|
||
<ul>
|
||
<li>Zhixiang Eddie Xu, Gao Huang, Kilian Q. Weinberger, Alice X.
|
||
Zheng</li>
|
||
<li><a href="https://arxiv.org/abs/1901.04055">[Paper]</a></li>
|
||
<li><a href="https://github.com/dmlc/xgboost">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multi-Class Deep Boosting (NIPS 2014)</strong>
|
||
<ul>
|
||
<li>Vitaly Kuznetsov, Mehryar Mohri, Umar Syed</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/5514-multi-class-deep-boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Deconvolution of High Dimensional Mixtures via Boosting with
|
||
Application to Diffusion-Weighted MRI of Human Brain (NIPS
|
||
2014)</strong>
|
||
<ul>
|
||
<li>Charles Y. Zheng, Franco Pestilli, Ariel Rokem</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/5506-deconvolution-of-high-dimensional-mixtures-via-boosting-with-application-to-diffusion-weighted-mri-of-human-brain">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Drifting-Games Analysis for Online Learning and
|
||
Applications to Boosting (NIPS 2014)</strong>
|
||
<ul>
|
||
<li>Haipeng Luo, Robert E. Schapire</li>
|
||
<li><a href="https://arxiv.org/abs/1406.1856">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Boosting Framework on Grounds of Online Learning (NIPS
|
||
2014)</strong>
|
||
<ul>
|
||
<li>Tofigh Naghibi Mohamadpoor, Beat Pfister</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/5512-a-boosting-framework-on-grounds-of-online-learning.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Gradient Boosting Factorization Machines (RECSYS
|
||
2014)</strong>
|
||
<ul>
|
||
<li>Chen Cheng, Fen Xia, Tong Zhang, Irwin King, Michael R. Lyu</li>
|
||
<li><a
|
||
href="http://tongzhang-ml.org/papers/recsys14-fm.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-10">2013</h2>
|
||
<ul>
|
||
<li><strong>Boosting Binary Keypoint Descriptors (CVPR 2013)</strong>
|
||
<ul>
|
||
<li>Tomasz Trzcinski, C. Mario Christoudias, Pascal Fua, Vincent
|
||
Lepetit</li>
|
||
<li><a
|
||
href="https://cvlab.epfl.ch/research/page-90554-en-html/research-detect-binboost/">[Paper]</a></li>
|
||
<li><a href="https://github.com/biotrump/cvlab-BINBOOST">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>PerturBoost: Practical Confidential Classifier Learning in
|
||
the Cloud (ICDM 2013)</strong>
|
||
<ul>
|
||
<li>Keke Chen, Shumin Guo</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/6729587">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multiclass Semi-Supervised Boosting Using Similarity
|
||
Learning (ICDM 2013)</strong>
|
||
<ul>
|
||
<li>Jafar Tanha, Mohammad Javad Saberian, Maarten van Someren</li>
|
||
<li><a
|
||
href="https://www.cse.msu.edu/~rongjin/publications/MultiClass-08.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Saving Evaluation Time for the Decision Function in
|
||
Boosting: Representation and Reordering Base Learner (ICML
|
||
2013)</strong>
|
||
<ul>
|
||
<li>Peng Sun, Jie Zhou</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v28/sun13.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>General Functional Matrix Factorization Using Gradient
|
||
Boosting (ICML 2013)</strong>
|
||
<ul>
|
||
<li>Tianqi Chen, Hang Li, Qiang Yang, Yong Yu</li>
|
||
<li><a
|
||
href="http://w.hangli-hl.com/uploads/3/1/6/8/3168008/icml_2013.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Margins, Shrinkage, and Boosting (ICML 2013)</strong>
|
||
<ul>
|
||
<li>Matus Telgarsky</li>
|
||
<li><a href="https://arxiv.org/abs/1303.4172">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Quickly Boosting Decision Trees - Pruning Underachieving
|
||
Features Early (ICML 2013)</strong>
|
||
<ul>
|
||
<li>Ron Appel, Thomas J. Fuchs, Piotr Dollár, Pietro Perona</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v28/appel13.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/pdollar/toolbox/blob/master/classify/adaBoostTrain.m">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Human Boosting (ICML 2013)</strong>
|
||
<ul>
|
||
<li>Harsh H. Pareek, Pradeep Ravikumar</li>
|
||
<li><a
|
||
href="https://www.cs.cmu.edu/~pradeepr/paperz/humanboosting.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Collaborative Boosting for Activity Classification in
|
||
Microblogs (KDD 2013)</strong>
|
||
<ul>
|
||
<li>Yangqiu Song, Zhengdong Lu, Cane Wing-ki Leung, Qiang Yang</li>
|
||
<li><a
|
||
href="http://chbrown.github.io/kdd-2013-usb/kdd/p482.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Direct 0-1 Loss Minimization and Margin Maximization with
|
||
Boosting (NIPS 2013)</strong>
|
||
<ul>
|
||
<li>Shaodan Zhai, Tian Xia, Ming Tan, Shaojun Wang</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/5214-direct-0-1-loss-minimization-and-margin-maximization-with-boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Reservoir Boosting : Between Online and Offline Ensemble
|
||
Learning (NIPS 2013)</strong>
|
||
<ul>
|
||
<li>Leonidas Lefakis, François Fleuret</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/5215-reservoir-boosting-between-online-and-offline-ensemble-learning">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Non-Linear Domain Adaptation with Boosting (NIPS
|
||
2013)</strong>
|
||
<ul>
|
||
<li>Carlos J. Becker, C. Mario Christoudias, Pascal Fua</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/5200-non-linear-domain-adaptation-with-boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting in the Presence of Label Noise (UAI 2013)</strong>
|
||
<ul>
|
||
<li>Jakramate Bootkrajang, Ata Kabán</li>
|
||
<li><a href="https://arxiv.org/abs/1309.6818">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-11">2012</h2>
|
||
<ul>
|
||
<li><strong>Contextual Boost for Pedestrian Detection (CVPR
|
||
2012)</strong>
|
||
<ul>
|
||
<li>Yuanyuan Ding, Jing Xiao</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.308.5611&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Shrink Boost for Selecting Multi-LBP Histogram Features in
|
||
Object Detection (CVPR 2012)</strong>
|
||
<ul>
|
||
<li>Cher Keng Heng, Sumio Yokomitsu, Yuichi Matsumoto, Hajime
|
||
Tamura</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/6248061">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Bottom-Up and Top-Down Visual Features for Saliency
|
||
Estimation (CVPR 2012)</strong>
|
||
<ul>
|
||
<li>Ali Borji</li>
|
||
<li><a
|
||
href="http://ilab.usc.edu/borji/papers/cvpr-2012-BUModel-v4.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Algorithms for Simultaneous Feature Extraction and
|
||
Selection (CVPR 2012)</strong>
|
||
<ul>
|
||
<li>Mohammad J. Saberian, Nuno Vasconcelos</li>
|
||
<li><a
|
||
href="http://svcl.ucsd.edu/publications/conference/2012/cvpr/SOPBoost.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Sharing Features in Multi-class Boosting via Group Sparsity
|
||
(CVPR 2012)</strong>
|
||
<ul>
|
||
<li>Sakrapee Paisitkriangkrai, Chunhua Shen, Anton van den Hengel</li>
|
||
<li><a
|
||
href="https://cs.adelaide.edu.au/~paulp/publications/pubs/sharing_cvpr2012.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Feature Weighting and Selection Using Hypothesis Margin of
|
||
Boosting (ICDM 2012)</strong>
|
||
<ul>
|
||
<li>Malak Alshawabkeh, Javed A. Aslam, Jennifer G. Dy, David R.
|
||
Kaeli</li>
|
||
<li><a
|
||
href="http://www.ece.neu.edu/fac-ece/jdy/papers/alshawabkeh-ICDM2012.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>An AdaBoost Algorithm for Multiclass Semi-supervised Learning (ICDM 2012)</strong>
<ul>
<li>Jafar Tanha, Maarten van Someren, Hamideh Afsarmanesh</li>
<li><a href="https://ieeexplore.ieee.org/document/6413799">[Paper]</a></li>
</ul></li>
<li><strong>AOSO-LogitBoost: Adaptive One-Vs-One LogitBoost for Multi-Class Problem (ICML 2012)</strong>
<ul>
<li>Peng Sun, Mark D. Reid, Jie Zhou</li>
<li><a href="https://arxiv.org/abs/1110.3907">[Paper]</a></li>
<li><a href="https://github.com/pengsun/AOSOLogitBoost">[Code]</a></li>
</ul></li>
<li><strong>An Online Boosting Algorithm with Theoretical Justifications (ICML 2012)</strong>
<ul>
<li>Shang-Tse Chen, Hsuan-Tien Lin, Chi-Jen Lu</li>
<li><a href="https://arxiv.org/abs/1206.6422">[Paper]</a></li>
</ul></li>
<li><strong>Learning Image Descriptors with the Boosting-Trick (NIPS 2012)</strong>
<ul>
<li>Tomasz Trzcinski, C. Mario Christoudias, Vincent Lepetit, Pascal Fua</li>
<li><a href="https://papers.nips.cc/paper/4848-learning-image-descriptors-with-the-boosting-trick.pdf">[Paper]</a></li>
<li><a href="https://github.com/biotrump/cvlab-BINBOOST">[Code]</a></li>
</ul></li>
<li><strong>Accelerated Training for Matrix-norm Regularization: A Boosting Approach (NIPS 2012)</strong>
<ul>
<li>Xinhua Zhang, Yaoliang Yu, Dale Schuurmans</li>
<li><a href="https://papers.nips.cc/paper/4663-accelerated-training-for-matrix-norm-regularization-a-boosting-approach">[Paper]</a></li>
</ul></li>
<li><strong>Learning from Heterogeneous Sources via Gradient Boosting Consensus (SDM 2012)</strong>
<ul>
<li>Xiaoxiao Shi, Jean-François Paiement, David Grangier, Philip S. Yu</li>
<li><a href="http://david.grangier.info/papers/2012/shi_sdm_2012.pdf">[Paper]</a></li>
<li><a href="https://github.com/PriyeshV/GBDT-CC">[Code]</a></li>
</ul></li>
</ul>
<h2 id="section-12">2011</h2>
|
||
<ul>
|
||
<li><strong>Selective Transfer Between Learning Tasks Using Task-Based
|
||
Boosting (AAAI 2011)</strong>
|
||
<ul>
|
||
<li>Eric Eaton, Marie desJardins</li>
|
||
<li><a
|
||
href="http://www.cis.upenn.edu/~eeaton/papers/Eaton2011Selective.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Incorporating Boosted Regression Trees into Ecological
|
||
Latent Variable Models (AAAI 2011)</strong>
|
||
<ul>
|
||
<li>Rebecca A. Hutchinson, Li-Ping Liu, Thomas G. Dietterich</li>
|
||
<li><a
|
||
href="https://www.aaai.org/ocs/index.php/AAAI/AAAI11/paper/viewFile/3711/4086">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>FlowBoost - Appearance Learning from Sparsely Annotated
|
||
Video (CVPR 2011)</strong>
|
||
<ul>
|
||
<li>Karim Ali, David Hasler, François Fleuret</li>
|
||
<li><a
|
||
href="http://www.karimali.org/publications/AHF_CVPR11.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>AdaBoost on Low-Rank PSD Matrices for Metric Learning (CVPR
|
||
2011)</strong>
|
||
<ul>
|
||
<li>Jinbo Bi, Dijia Wu, Le Lu, Meizhu Liu, Yimo Tao, Matthias Wolf</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5995363">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Local Structured HOG-LBP for Object Localization
|
||
(CVPR 2011)</strong>
|
||
<ul>
|
||
<li>Junge Zhang, Kaiqi Huang, Yinan Yu, Tieniu Tan</li>
|
||
<li><a
|
||
href="http://www.cbsr.ia.ac.cn/users/ynyu/1682.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Direct Formulation for Totally-Corrective Multi-Class
|
||
Boosting (CVPR 2011)</strong>
|
||
<ul>
|
||
<li>Chunhua Shen, Zhihui Hao</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5995554">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Gated Classifiers: Boosting Under High Intra-class Variation
|
||
(CVPR 2011)</strong>
|
||
<ul>
|
||
<li>Oscar M. Danielsson, Babak Rasolzadeh, Stefan Carlsson</li>
|
||
<li><a
|
||
href="http://www.nada.kth.se/cvap/cvg/papers/danielssonCVPR11.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>TaylorBoost: First and Second-order Boosting Algorithms with
|
||
Explicit Margin Control (CVPR 2011)</strong>
|
||
<ul>
|
||
<li>Mohammad J. Saberian, Hamed Masnadi-Shirazi, Nuno Vasconcelos</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/5995605">[Paper]</a></li>
|
||
<li><a
|
||
href="https://pythonhosted.org/bob.learn.boosting/">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Robust and Efficient Regularized Boosting Using Total
|
||
Bregman Divergence (CVPR 2011)</strong>
|
||
<ul>
|
||
<li>Meizhu Liu, Baba C. Vemuri</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/5995686">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Treat Samples differently: Object Tracking with
|
||
Semi-Supervised Online CovBoost (ICCV 2011)</strong>
|
||
<ul>
|
||
<li>Guorong Li, Lei Qin, Qingming Huang, Junbiao Pang, Shuqiang
|
||
Jiang</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/6126297">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>LinkBoost: A Novel Cost-Sensitive Boosting Framework for
|
||
Community-Level Network Link Prediction (ICDM 2011)</strong>
|
||
<ul>
|
||
<li>Prakash Mandayam Comar, Pang-Ning Tan, Anil K. Jain</li>
|
||
<li><a
|
||
href="http://www.cse.msu.edu/~ptan/papers/icdm2011.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning Markov Logic Networks via Functional Gradient Boosting (ICDM 2011)</strong>
<ul>
<li>Tushar Khot, Sriraam Natarajan, Kristian Kersting, Jude W. Shavlik</li>
<li><a href="https://ieeexplore.ieee.org/document/6137236">[Paper]</a></li>
<li><a href="https://github.com/starling-lab/BoostSRL">[Code]</a></li>
</ul></li>
<li><strong>Boosting on a Budget: Sampling for Feature-Efficient Prediction (ICML 2011)</strong>
<ul>
<li>Lev Reyzin</li>
<li><a href="http://www.icml-2011.org/papers/348_icmlpaper.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Multiclass Boosting with Hinge Loss based on Output Coding (ICML 2011)</strong>
<ul>
<li>Tianshi Gao, Daphne Koller</li>
<li><a href="http://ai.stanford.edu/~tianshig/papers/multiclassHingeBoost-ICML2011.pdf">[Paper]</a></li>
<li><a href="https://github.com/memect/hao/blob/master/awesome/multiclass-boosting.md">[Code]</a></li>
</ul></li>
<li><strong>Generalized Boosting Algorithms for Convex Optimization (ICML 2011)</strong>
<ul>
<li>Alexander Grubb, Drew Bagnell</li>
<li><a href="https://arxiv.org/pdf/1105.2054.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Imitation Learning in Relational Domains: A Functional-Gradient Boosting Approach (IJCAI 2011)</strong>
<ul>
<li>Sriraam Natarajan, Saket Joshi, Prasad Tadepalli, Kristian Kersting, Jude W. Shavlik</li>
<li><a href="http://ftp.cs.wisc.edu/machine-learning/shavlik-group/natarajan.ijcai11.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Boosting with Maximum Adaptive Sampling (NIPS 2011)</strong>
<ul>
<li>Charles Dubout, François Fleuret</li>
<li><a href="https://papers.nips.cc/paper/4310-boosting-with-maximum-adaptive-sampling">[Paper]</a></li>
</ul></li>
<li><strong>The Fast Convergence of Boosting (NIPS 2011)</strong>
<ul>
<li>Matus Telgarsky</li>
<li><a href="https://papers.nips.cc/paper/4343-the-fast-convergence-of-boosting">[Paper]</a></li>
</ul></li>
<li><strong>ShareBoost: Efficient Multiclass Learning with Feature Sharing (NIPS 2011)</strong>
<ul>
<li>Shai Shalev-Shwartz, Yonatan Wexler, Amnon Shashua</li>
<li><a href="https://papers.nips.cc/paper/4213-shareboost-efficient-multiclass-learning-with-feature-sharing">[Paper]</a></li>
</ul></li>
<li><strong>Multiclass Boosting: Theory and Algorithms (NIPS 2011)</strong>
<ul>
<li>Mohammad J. Saberian, Nuno Vasconcelos</li>
<li><a href="https://papers.nips.cc/paper/4450-multiclass-boosting-theory-and-algorithms.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Variance Penalizing AdaBoost (NIPS 2011)</strong>
<ul>
<li>Pannagadatta K. Shivaswamy, Tony Jebara</li>
<li><a href="https://papers.nips.cc/paper/4207-variance-penalizing-adaboost.pdf">[Paper]</a></li>
</ul></li>
<li><strong>MKBoost: A Framework of Multiple Kernel Boosting (SDM 2011)</strong>
<ul>
<li>Hao Xia, Steven C. H. Hoi</li>
<li><a href="https://ink.library.smu.edu.sg/cgi/viewcontent.cgi?article=3280&context=sis_research">[Paper]</a></li>
</ul></li>
<li><strong>A Boosting Approach to Improving Pseudo-Relevance Feedback (SIGIR 2011)</strong>
<ul>
<li>Yuanhua Lv, ChengXiang Zhai, Wan Chen</li>
<li><a href="http://www.tyr.unlu.edu.ar/tallerIR/2012/papers/pseudorelevance.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Bagging Gradient-Boosted Trees for High Precision, Low Variance Ranking Models (SIGIR 2011)</strong>
<ul>
<li>Yasser Ganjisaffar, Rich Caruana, Cristina Videira Lopes</li>
<li><a href="http://www.ccs.neu.edu/home/vip/teach/MLcourse/4_boosting/materials/bagging_lmbamart_jforests.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Boosting as a Product of Experts (UAI 2011)</strong>
<ul>
<li>Narayanan Unny Edakunni, Gary Brown, Tim Kovacs</li>
<li><a href="https://arxiv.org/abs/1202.3716">[Paper]</a></li>
</ul></li>
<li><strong>Parallel Boosted Regression Trees for Web Search Ranking (WWW 2011)</strong>
<ul>
<li>Stephen Tyree, Kilian Q. Weinberger, Kunal Agrawal, Jennifer Paykin</li>
<li><a href="http://www.cs.cornell.edu/~kilian/papers/fr819-tyreeA.pdf">[Paper]</a></li>
<li><a href="https://github.com/YS-L/pgbm">[Code]</a></li>
</ul></li>
</ul>
<h2 id="section-13">2010</h2>
|
||
<ul>
|
||
<li><strong>The Boosting Effect of Exploratory Behaviors (AAAI
|
||
2010)</strong>
|
||
<ul>
|
||
<li>Jivko Sinapov, Alexander Stoytchev</li>
|
||
<li><a
|
||
href="https://www.aaai.org/ocs/index.php/AAAI/AAAI10/paper/download/1777/2265">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting-Based System Combination for Machine Translation
|
||
(ACL 2010)</strong>
|
||
<ul>
|
||
<li>Tong Xiao, Jingbo Zhu, Muhua Zhu, Huizhen Wang</li>
|
||
<li><a href="https://www.aclweb.org/anthology/P10-1076">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>BagBoo: A Scalable Hybrid Bagging-the-Boosting Model (CIKM
|
||
2010)</strong>
|
||
<ul>
|
||
<li>Dmitry Yurievich Pavlov, Alexey Gorodilov, Cliff A. Brunk</li>
|
||
<li><a
|
||
href="http://cache-default03h.cdn.yandex.net/download.yandex.ru/company/a_scalable_hybrid_bagging_the_boosting_model.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/arogozhnikov/infiniteboost">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Automatic Detection of Craters in Planetary Images: an
|
||
Embedded Framework Using Feature Selection and Boosting (CIKM
|
||
2010)</strong>
|
||
<ul>
|
||
<li>Wei Ding, Tomasz F. Stepinski, Lourenço P. C. Bandeira, Ricardo
|
||
Vilalta, Youxi Wu, Zhenyu Lu, Tianyu Cao</li>
|
||
<li><a
|
||
href="https://www.uh.edu/~rvilalta/papers/2010/cikm10.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Facial Point Detection Using Boosted Regression and Graph
|
||
Models (CVPR 2010)</strong>
|
||
<ul>
|
||
<li>Michel François Valstar, Brais Martínez, Xavier Binefa, Maja
|
||
Pantic</li>
|
||
<li><a
|
||
href="https://ibug.doc.ic.ac.uk/media/uploads/documents/CVPR-2010-ValstarEtAl-CAMERA.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting for Transfer Learning with Multiple Sources (CVPR
|
||
2010)</strong>
|
||
<ul>
|
||
<li>Yi Yao, Gianfranco Doretto</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/5539857">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Rotation Invariant Object Detection Using Boosted
|
||
Random Ferns (CVPR 2010)</strong>
|
||
<ul>
|
||
<li>Michael Villamizar, Francesc Moreno-Noguer, Juan Andrade-Cetto,
|
||
Alberto Sanfeliu</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.307.4002&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Implicit Hierarchical Boosting for Multi-view Object
|
||
Detection (CVPR 2010)</strong>
|
||
<ul>
|
||
<li>Xavier Perrotton, Marc Sturzel, Michel Roux</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/5540115">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On-Line Semi-Supervised Multiple-Instance Boosting (CVPR
|
||
2010)</strong>
|
||
<ul>
|
||
<li>Bernhard Zeisl, Christian Leistner, Amir Saffari, Horst Bischof</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/5539860">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Online Multi-Class LPBoost (CVPR 2010)</strong>
|
||
<ul>
|
||
<li>Amir Saffari, Martin Godec, Thomas Pock, Christian Leistner, Horst
|
||
Bischof</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.165.8939&rep=rep1&type=pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/amirsaffari/online-multiclass-lpboost">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Homotopy Regularization for Boosting (ICDM 2010)</strong>
|
||
<ul>
|
||
<li>Zheng Wang, Yangqiu Song, Changshui Zhang</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/5694094">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Exploiting Local Data Uncertainty to Boost Global Outlier
|
||
Detection (ICDM 2010)</strong>
|
||
<ul>
|
||
<li>Bo Liu, Jie Yin, Yanshan Xiao, Longbing Cao, Philip S. Yu</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/5693984">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Classifiers with Tightened L0-Relaxation Penalties
|
||
(ICML 2010)</strong>
|
||
<ul>
|
||
<li>Noam Goldberg, Jonathan Eckstein</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/11df/aed4ec2a2f72878789fa3a54d588d693bdda.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting for Regression Transfer (ICML 2010)</strong>
|
||
<ul>
|
||
<li>David Pardoe, Peter Stone</li>
|
||
<li><a
|
||
href="https://www.cs.utexas.edu/~dpardoe/papers/ICML10.pdf">[Paper]</a></li>
|
||
<li><a
|
||
href="https://github.com/jay15summer/Two-stage-TrAdaboost.R2">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Backpropagation Learning for Training Deep Modular
|
||
Networks (ICML 2010)</strong>
|
||
<ul>
|
||
<li>Alexander Grubb, J. Andrew Bagnell</li>
|
||
<li><a
|
||
href="https://icml.cc/Conferences/2010/papers/451.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Fast Boosting Using Adversarial Bandits (ICML 2010)</strong>
|
||
<ul>
|
||
<li>Róbert Busa-Fekete, Balázs Kégl</li>
|
||
<li><a
|
||
href="https://www.lri.fr/~kegl/research/PDFs/BuKe10.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting with Structure Information in the Functional Space:
|
||
an Application to Graph Classification (KDD 2010)</strong>
|
||
<ul>
|
||
<li>Hongliang Fei, Jun Huan</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=1835804.1835886">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multi-task Learning for Boosting with Application to Web
|
||
Search Ranking (KDD 2010)</strong>
|
||
<ul>
|
||
<li>Olivier Chapelle, Pannagadatta K. Shivaswamy, Srinivas Vadrevu,
|
||
Kilian Q. Weinberger, Ya Zhang, Belle L. Tseng</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=1835953">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Theory of Multiclass Boosting (NIPS 2010)</strong>
|
||
<ul>
|
||
<li>Indraneel Mukherjee, Robert E. Schapire</li>
|
||
<li><a
|
||
href="http://rob.schapire.net/papers/multiboost-journal.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Classifier Cascades (NIPS 2010)</strong>
|
||
<ul>
|
||
<li>Mohammad J. Saberian, Nuno Vasconcelos</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/4033-boosting-classifier-cascades.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Joint Cascade Optimization Using A Product Of Boosted
|
||
Classifiers (NIPS 2010)</strong>
|
||
<ul>
|
||
<li>Leonidas Lefakis, François Fleuret</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/4148-joint-cascade-optimization-using-a-product-of-boosted-classifiers">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Robust LogitBoost and Adaptive Base Class (ABC) LogitBoost
|
||
(UAI 2010)</strong>
|
||
<ul>
|
||
<li>Ping Li</li>
|
||
<li><a href="https://arxiv.org/abs/1203.3491">[Paper]</a></li>
|
||
<li><a href="https://github.com/pengsun/AOSOLogitBoost">[Code]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-14">2009</h2>
|
||
<ul>
|
||
<li><strong>Feature Selection for Ranking Using Boosted Trees (CIKM
|
||
2009)</strong>
|
||
<ul>
|
||
<li>Feng Pan, Tim Converse, David Ahn, Franco Salvetti, Gianluca
|
||
Donato</li>
|
||
<li><a
|
||
href="http://www.francosalvetti.com/cikm09_camera2.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting KNN Text Classification Accuracy by Using
|
||
Supervised Term Weighting Schemes (CIKM 2009)</strong>
|
||
<ul>
|
||
<li>Iyad Batal, Milos Hauskrecht</li>
|
||
<li><a
|
||
href="https://people.cs.pitt.edu/~milos/research/CIKM_2009_boosting_KNN.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Stochastic Gradient Boosted Distributed Decision Trees (CIKM
|
||
2009)</strong>
|
||
<ul>
|
||
<li>Jerry Ye, Jyh-Herng Chow, Jiang Chen, Zhaohui Zheng</li>
|
||
<li><a
|
||
href="http://cse.iitrpr.ac.in/ckn/courses/f2012/thomas.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A General Magnitude-Preserving Boosting Algorithm for Search
|
||
Ranking (CIKM 2009)</strong>
|
||
<ul>
|
||
<li>Chenguang Zhu, Weizhu Chen, Zeyuan Allen Zhu, Gang Wang, Dong Wang,
|
||
Zheng Chen</li>
|
||
<li><a
|
||
href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/06/cikm2009-1.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Reducing Joint Boost-Based Multiclass Classification to
|
||
Proximity Search (CVPR 2009)</strong>
|
||
<ul>
|
||
<li>Alexandra Stefan, Vassilis Athitsos, Quan Yuan, Stan Sclaroff</li>
|
||
<li><a
|
||
href="https://www.semanticscholar.org/paper/Reducing-JointBoost-based-multiclass-classification-Stefan-Athitsos/08ba1a7d91ce9b4ac26869bfe4bb7c955b0d1a24">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Imbalanced RankBoost for Efficiently Ranking Large-Scale
|
||
Image-Video Collections (CVPR 2009)</strong>
|
||
<ul>
|
||
<li>Michele Merler, Rong Yan, John R. Smith</li>
|
||
<li><a
|
||
href="https://www.semanticscholar.org/paper/Imbalanced-RankBoost-for-efficiently-ranking-Merler-Yan/031ba6bf0d6df8bd3aa686ce85791b7d74f0b6d5">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Regularized Multi-Class Semi-Supervised Boosting (CVPR
|
||
2009)</strong>
|
||
<ul>
|
||
<li>Amir Saffari, Christian Leistner, Horst Bischof</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/abstract/document/5206715">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning to Associate: HybridBoosted Multi-Target Tracker
|
||
for Crowded Scene (CVPR 2009)</strong>
|
||
<ul>
|
||
<li>Yuan Li, Chang Huang, Ram Nevatia</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.309.8335&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Multi-task Learning for Face Verification with
|
||
Applications to Web Image and Video Search (CVPR 2009)</strong>
|
||
<ul>
|
||
<li>Xiaogang Wang, Cha Zhang, Zhengyou Zhang</li>
|
||
<li><a
|
||
href="http://www.ee.cuhk.edu.hk/~xgwang/webface.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>LidarBoost: Depth Superresolution for ToF 3D Shape Scanning
|
||
(CVPR 2009)</strong>
|
||
<ul>
|
||
<li>Sebastian Schuon, Christian Theobalt, James E. Davis, Sebastian
|
||
Thrun</li>
|
||
<li><a
|
||
href="http://ai.stanford.edu/~schuon/sr/cvpr09_poster_lidarboost.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Model Adaptation via Model Interpolation and Boosting for
|
||
Web Search Ranking (EMNLP 2009)</strong>
|
||
<ul>
|
||
<li>Jianfeng Gao, Qiang Wu, Chris Burges, Krysta Marie Svore, Yi Su,
|
||
Nazan Khan, Shalin Shah, Hongyan Zhou</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/7a82/66335d0b44596574588eabb090bfeae4ab35.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Finding Shareable Informative Patterns and Optimal Coding
|
||
Matrix for Multiclass Boosting (ICCV 2009)</strong>
|
||
<ul>
|
||
<li>Bang Zhang, Getian Ye, Yang Wang, Jie Xu, Gunawan Herman</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/5459146">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>RankBoost with L1 Regularization for Facial Expression
|
||
Recognition and Intensity Estimation (ICCV 2009)</strong>
|
||
<ul>
|
||
<li>Peng Yang, Qingshan Liu, Dimitris N. Metaxas</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/5459371">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Robust Boosting Tracker with Minimum Error Bound in a
|
||
Co-Training Framework (ICCV 2009)</strong>
|
||
<ul>
|
||
<li>Rong Liu, Jian Cheng, Hanqing Lu</li>
|
||
<li><a
|
||
href="http://nlpr-web.ia.ac.cn/2009papers/gjhy/gh1.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Tutorial Summary: Survey of Boosting from an Optimization
|
||
Perspective (ICML 2009)</strong>
|
||
<ul>
|
||
<li>Manfred K. Warmuth, S. V. N. Vishwanathan</li>
|
||
<li><a
|
||
href="http://www.stat.purdue.edu/~vishy/erlpboost/manfred.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Products of Base Classifiers (ICML 2009)</strong>
|
||
<ul>
|
||
<li>Balázs Kégl, Róbert Busa-Fekete</li>
|
||
<li><a
|
||
href="https://users.lal.in2p3.fr/kegl/research/PDFs/keglBusafekete09.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>ABC-boost: Adaptive Base Class Boost for Multi-Class
|
||
Classification (ICML 2009)</strong>
|
||
<ul>
|
||
<li>Ping Li</li>
|
||
<li><a
|
||
href="https://icml.cc/Conferences/2009/papers/417.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting with Structural Sparsity (ICML 2009)</strong>
|
||
<ul>
|
||
<li>John C. Duchi, Yoram Singer</li>
|
||
<li><a
|
||
href="https://web.stanford.edu/~jduchi/projects/DuchiSi09a.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Constrained Mutual Subspace Method for Robust
|
||
Image-Set Based Object Recognition (IJCAI 2009)</strong>
|
||
<ul>
|
||
<li>Xi Li, Kazuhiro Fukui, Nanning Zheng</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/220812439_Boosting_Constrained_Mutual_Subspace_Method_for_Robust_Image-Set_Based_Object_Recognition">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Information Theoretic Regularization for Semi-supervised
|
||
Boosting (KDD 2009)</strong>
|
||
<ul>
|
||
<li>Lei Zheng, Shaojun Wang, Yan Liu, Chi-Hoon Lee</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/5255/242d50851ce56354e10ae8fdcee6f47591c9.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Potential-Based Agnostic Boosting (NIPS 2009)</strong>
|
||
<ul>
|
||
<li>Adam Kalai, Varun Kanade</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/3676-potential-based-agnostic-boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Positive Semidefinite Metric Learning with Boosting (NIPS
|
||
2009)</strong>
|
||
<ul>
|
||
<li>Chunhua Shen, Junae Kim, Lei Wang, Anton van den Hengel</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/3658-positive-semidefinite-metric-learning-with-boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting with Spatial Regularization (NIPS 2009)</strong>
|
||
<ul>
|
||
<li>Zhen James Xiang, Yongxin Taylor Xi, Uri Hasson, Peter J.
|
||
Ramadge</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/3696-boosting-with-spatial-regularization">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Effective Boosting of Na%C3%AFve Bayesian Classifiers by
|
||
Local Accuracy Estimation (PAKDD 2009)</strong>
|
||
<ul>
|
||
<li>Zhipeng Xie</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-642-01307-2_88">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multi-resolution Boosting for Classification and Regression
|
||
Problems (PAKDD 2009)</strong>
|
||
<ul>
|
||
<li>Chandan K. Reddy, Jin Hyeong Park</li>
|
||
<li><a href="http://dmkd.cs.vt.edu/papers/PAKDD09.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Active Learning with Boosting (SDM 2009)</strong>
|
||
<ul>
|
||
<li>Zheng Wang, Yangqiu Song, Changshui Zhang</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/c8be/b70c37e4b4c4ad77e46b39060c977779d201.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-15">2008</h2>
|
||
<ul>
|
||
<li><strong>Group-Based Learning: A Boosting Approach (CIKM
|
||
2008)</strong>
|
||
<ul>
|
||
<li>Weijian Ni, Jun Xu, Hang Li, Yalou Huang</li>
|
||
<li><a
|
||
href="http://www.bigdatalab.ac.cn/~junxu/publications/CIKM2008_GroupLearn.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Semi-Supervised Boosting Using Visual Similarity Learning
|
||
(CVPR 2008)</strong>
|
||
<ul>
|
||
<li>Christian Leistner, Helmut Grabner, Horst Bischof</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144.7914&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Mining Compositional Features for Boosting (CVPR
|
||
2008)</strong>
|
||
<ul>
|
||
<li>Junsong Yuan, Jiebo Luo, Ying Wu</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4587347">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Deformable Model for Human Body Alignment (CVPR
|
||
2008)</strong>
|
||
<ul>
|
||
<li>Xiaoming Liu, Ting Yu, Thomas Sebastian, Peter H. Tu</li>
|
||
<li><a
|
||
href="https://www.cse.msu.edu/~liuxm/publication/Liu_Yu_Sebastian_Tu_cvpr08.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Discriminative Modeling by Boosting on Multilevel Aggregates
|
||
(CVPR 2008)</strong>
|
||
<ul>
|
||
<li>Jason J. Corso</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.409.3166&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Face Alignment via Boosted Ranking Model (CVPR
|
||
2008)</strong>
|
||
<ul>
|
||
<li>Hao Wu, Xiaoming Liu, Gianfranco Doretto</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/4587753">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Adaptive Linear Weak Classifiers for Online
|
||
Learning and Tracking (CVPR 2008)</strong>
|
||
<ul>
|
||
<li>Toufiq Parag, Fatih Porikli, Ahmed M. Elgammal</li>
|
||
<li><a
|
||
href="https://www.merl.com/publications/docs/TR2008-065.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Detection with Multi-Exit Asymmetric Boosting (CVPR
|
||
2008)</strong>
|
||
<ul>
|
||
<li>Minh-Tri Pham, V-D. D. Hoang, Tat-Jen Cham</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.330.6364&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Ordinal Features for Accurate and Fast Iris
|
||
Recognition (CVPR 2008)</strong>
|
||
<ul>
|
||
<li>Zhaofeng He, Zhenan Sun, Tieniu Tan, Xianchao Qiu, Cheng Zhong,
|
||
Wenbo Dong</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/224323296_Boosting_ordinal_features_for_accurate_and_fast_iris_recognition">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Adaptive and Compact Shape Descriptor by Progressive Feature
|
||
Combination and Selection with Boosting (CVPR 2008)</strong>
|
||
<ul>
|
||
<li>Cheng Chen, Yueting Zhuang, Jun Xiao, Fei Wu</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/4587613">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Relational Sequence Alignments (ICDM 2008)</strong>
|
||
<ul>
|
||
<li>Andreas Karwath, Kristian Kersting, Niels Landwehr</li>
|
||
<li><a
|
||
href="https://www.cs.uni-potsdam.de/~landwehr/ICDM08boosting.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting with Incomplete Information (ICML 2008)</strong>
|
||
<ul>
|
||
<li>Gholamreza Haffari, Yang Wang, Shaojun Wang, Greg Mori, Feng
|
||
Jiao</li>
|
||
<li><a
|
||
href="http://users.monash.edu.au/~gholamrh/publications/boosting_icml08_slides.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>ManifoldBoost: Stagewise Function Approximation for Fully-,
|
||
Semi- and Un-supervised Learning (ICML 2008)</strong>
|
||
<ul>
|
||
<li>Nicolas Loeff, David A. Forsyth, Deepak Ramachandran</li>
|
||
<li><a
|
||
href="http://reason.cs.uiuc.edu/deepak/manifoldboost.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Random Classification Noise Defeats All Convex Potential
|
||
Boosters (ICML 2008)</strong>
|
||
<ul>
|
||
<li>Philip M. Long, Rocco A. Servedio</li>
|
||
<li><a
|
||
href="http://phillong.info/publications/LS09_potential.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multi-class Cost-Sensitive Boosting with P-norm Loss
|
||
Functions (KDD 2008)</strong>
|
||
<ul>
|
||
<li>Aurelie C. Lozano, Naoki Abe</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=1401953">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>MCBoost: Multiple Classifier Boosting for Perceptual
|
||
Co-clustering of Images and Visual Features (NIPS 2008)</strong>
|
||
<ul>
|
||
<li>Tae-Kyun Kim, Roberto Cipolla</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/3483-mcboost-multiple-classifier-boosting-for-perceptual-co-clustering-of-images-and-visual-features">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>PSDBoost: Matrix-Generation Linear Programming for Positive
|
||
Semidefinite Matrices Learning (NIPS 2008)</strong>
|
||
<ul>
|
||
<li>Chunhua Shen, Alan Welsh, Lei Wang</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.879.7750&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On the Design of Loss Functions for Classification: Theory,
|
||
Robustness to Outliers, and SavageBoost (NIPS 2008)</strong>
|
||
<ul>
|
||
<li>Hamed Masnadi-Shirazi, Nuno Vasconcelos</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/3591-on-the-design-of-loss-functions-for-classification-theory-robustness-to-outliers-and-savageboost">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Adaptive Martingale Boosting (NIPS 2008)</strong>
|
||
<ul>
|
||
<li>Philip M. Long, Rocco A. Servedio</li>
|
||
<li><a
|
||
href="http://phillong.info/publications/LS08_adaptive_martingale_boosting.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Boosting Algorithm for Learning Bipartite Ranking
|
||
Functions with Partially Labeled Data (SIGIR 2008)</strong>
|
||
<ul>
|
||
<li>Massih-Reza Amini, Tuong-Vinh Truong, Cyril Goutte</li>
|
||
<li><a
|
||
href="http://ama.liglab.fr/~amini/Publis/SemiSupRanking_sigir08.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-16">2007</h2>
|
||
<ul>
|
||
<li><strong>Using Error-Correcting Output Codes with Model-Refinement to
|
||
Boost Centroid Text Classifier (ACL 2007)</strong>
|
||
<ul>
|
||
<li>Songbo Tan</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=1557794">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Fast Human Pose Estimation using Appearance and Motion via
|
||
Multi-Dimensional Boosting Regression (CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Alessandro Bissacco, Ming-Hsuan Yang, Stefano Soatto</li>
|
||
<li><a
|
||
href="http://vision.ucla.edu/papers/bissaccoYS07.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Generic Face Alignment using Boosted Appearance Model (CVPR
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Xiaoming Liu</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4270290">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Eigenboosting: Combining Discriminative and Generative
|
||
Information (CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Helmut Grabner, Peter M. Roth, Horst Bischof</li>
|
||
<li><a
|
||
href="https://www.tugraz.at/fileadmin/user_upload/Institute/ICG/Documents/lrs/pubs/grabner_cvpr_07.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Online Learning Asymmetric Boosted Classifiers for Object
|
||
Detection (CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Minh-Tri Pham, Tat-Jen Cham</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/abstract/document/4270108">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Improving Part based Object Detection by Unsupervised Online
|
||
Boosting (CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Bo Wu, Ram Nevatia</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/4270173">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Specialized Processor Suitable for AdaBoost-Based
|
||
Detection with Haar-like Features (CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Masayuki Hiromoto, Kentaro Nakahara, Hiroki Sugano, Yukihiro
|
||
Nakamura, Ryusuke Miyamoto</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/4270413">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Simultaneous Object Detection and Segmentation by Boosting
|
||
Local Shape Feature based Classifier (CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Bo Wu, Ram Nevatia</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.309.9795&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Compositional Boosting for Computing Hierarchical Image
|
||
Structures (CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Tianfu Wu, Gui-Song Xia, Song Chun Zhu</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/4270059">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Coded Dynamic Features for Facial Action Units and
|
||
Facial Expression Recognition (CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Peng Yang, Qingshan Liu, Dimitris N. Metaxas</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/4270084">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Object Classification in Visual Surveillance Using Adaboost
|
||
(CVPR 2007)</strong>
|
||
<ul>
|
||
<li>John-Paul Renno, Dimitrios Makris, Graeme A. Jones</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/abstract/document/4270512">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Boosting Regression Approach to Medical Anatomy Detection
|
||
(CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Shaohua Kevin Zhou, Jinghao Zhou, Dorin Comaniciu</li>
|
||
<li><a
|
||
href="http://ww.w.comaniciu.net/Papers/BoostingRegression_CVPR07.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Joint Real-time Object Detection and Pose Estimation Using
|
||
Probabilistic Boosting Network (CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Jingdan Zhang, Shaohua Kevin Zhou, Leonard McMillan, Dorin
|
||
Comaniciu</li>
|
||
<li><a
|
||
href="http://csbio.unc.edu/mcmillan/pubs/CVPR07_Zhang.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Kernel Sharing With Joint Boosting For Multi-Class Concept
|
||
Detection (CVPR 2007)</strong>
|
||
<ul>
|
||
<li>Wei Jiang, Shih-Fu Chang, Alexander C. Loui</li>
|
||
<li><a
|
||
href="http://www.ee.columbia.edu/~wjiang/references/jiangcvprws07.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Scale-Space Based Weak Regressors for Boosting (ECML
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Jin Hyeong Park, Chandan K. Reddy</li>
|
||
<li><a
|
||
href="http://www.cs.wayne.edu/~reddy/Papers/ECML07.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Avoiding Boosting Overfitting by Removing Confusing Samples
|
||
(ECML 2007)</strong>
|
||
<ul>
|
||
<li>Alexander Vezhnevets, Olga Barinova</li>
|
||
<li><a
|
||
href="http://groups.inf.ed.ac.uk/calvin/hp_avezhnev/Pubs/AvoidingBoostingOverfitting.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>DynamicBoost: Boosting Time Series Generated by Dynamical
|
||
Systems (ICCV 2007)</strong>
|
||
<ul>
|
||
<li>René Vidal, Paolo Favaro</li>
|
||
<li><a
|
||
href="http://vision.jhu.edu/assets/VidalICCV07.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Incremental Learning of Boosted Face Detector (ICCV
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Chang Huang, Haizhou Ai, Takayoshi Yamashita, Shihong Lao, Masato
|
||
Kawade</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.126.9012&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Gradient Feature Selection for Online Boosting (ICCV
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Xiaoming Liu, Ting Yu</li>
|
||
<li><a
|
||
href="https://www.cse.msu.edu/~liuxm/publication/Liu_Yu_ICCV2007.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Fast Training and Selection of Haar Features Using
|
||
Statistics in Boosting-based Face Detection (ICCV 2007)</strong>
|
||
<ul>
|
||
<li>Minh-Tri Pham, Tat-Jen Cham</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.212.6173&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Cluster Boosted Tree Classifier for Multi-View - Multi-Pose
|
||
Object Detection (ICCV 2007)</strong>
|
||
<ul>
|
||
<li>Bo Wu, Ramakant Nevatia</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.309.9885&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Asymmetric Boosting (ICML 2007)</strong>
|
||
<ul>
|
||
<li>Hamed Masnadi-Shirazi, Nuno Vasconcelos</li>
|
||
<li><a
|
||
href="http://www.svcl.ucsd.edu/publications/conference/2007/icml07/AsymmetricBoosting.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting for Transfer Learning (ICML 2007)</strong>
|
||
<ul>
|
||
<li>Wenyuan Dai, Qiang Yang, Gui-Rong Xue, Yong Yu</li>
|
||
<li><a
|
||
href="http://www.cs.ust.hk/~qyang/Docs/2007/tradaboost.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Gradient Boosting for Kernelized Output Spaces (ICML
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Pierre Geurts, Louis Wehenkel, Florence d’Alché-Buc</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.435.3970&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting a Complete Technique to Find MSS and MUS Thanks to
|
||
a Local Search Oracle (IJCAI 2007)</strong>
|
||
<ul>
|
||
<li>Éric Grégoire, Bertrand Mazure, Cédric Piette</li>
|
||
<li><a
|
||
href="http://www.cril.univ-artois.fr/~piette/IJCAI07_HYCAM.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Training Conditional Random Fields Using Virtual Evidence
|
||
Boosting (IJCAI 2007)</strong>
|
||
<ul>
|
||
<li>Lin Liao, Tanzeem Choudhury, Dieter Fox, Henry A. Kautz</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/07/Papers/407.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Simple Training of Dependency Parsers via Structured
|
||
Boosting (IJCAI 2007)</strong>
|
||
<ul>
|
||
<li>Qin Iris Wang, Dekang Lin, Dale Schuurmans</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/07/Papers/284.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Real Boosting a la Carte with an Application to Boosting
|
||
Oblique Decision Tree (IJCAI 2007)</strong>
|
||
<ul>
|
||
<li>Claudia Henry, Richard Nock, Frank Nielsen</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/07/Papers/135.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Managing Domain Knowledge and Multiple Models with Boosting
|
||
(IJCAI 2007)</strong>
|
||
<ul>
|
||
<li>Peng Zang, Charles Lee Isbell Jr.</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/07/Papers/185.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Model-Shared Subspace Boosting for Multi-label
|
||
Classification (KDD 2007)</strong>
|
||
<ul>
|
||
<li>Rong Yan, Jelena Tesic, John R. Smith</li>
|
||
<li><a
|
||
href="http://rogerioferis.com/VisualRecognitionAndSearch2014/material/papers/IMARSKDD2007.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Regularized Boost for Semi-Supervised Learning (NIPS
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Ke Chen, Shihai Wang</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/3167-regularized-boost-for-semi-supervised-learning.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Algorithms for Maximizing the Soft Margin (NIPS
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Manfred K. Warmuth, Karen A. Glocer, Gunnar Rätsch</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/3374-boosting-algorithms-for-maximizing-the-soft-margin.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>McRank: Learning to Rank Using Multiple Classification and
|
||
Gradient Boosting (NIPS 2007)</strong>
|
||
<ul>
|
||
<li>Ping Li, Christopher J. C. Burges, Qiang Wu</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/3270-mcrank-learning-to-rank-using-multiple-classification-and-gradient-boosting.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>One-Pass Boosting (NIPS 2007)</strong>
|
||
<ul>
|
||
<li>Zafer Barutçuoglu, Philip M. Long, Rocco A. Servedio</li>
|
||
<li><a
|
||
href="http://phillong.info/publications/BLS07_one_pass.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting the Area under the ROC Curve (NIPS 2007)</strong>
|
||
<ul>
|
||
<li>Philip M. Long, Rocco A. Servedio</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/3247-boosting-the-area-under-the-roc-curve.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>FilterBoost: Regression and Classification on Large Datasets
|
||
(NIPS 2007)</strong>
|
||
<ul>
|
||
<li>Joseph K. Bradley, Robert E. Schapire</li>
|
||
<li><a
|
||
href="http://rob.schapire.net/papers/FilterBoost_paper.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A General Boosting Method and its Application to Learning
|
||
Ranking Functions for Web Search (NIPS 2007)</strong>
|
||
<ul>
|
||
<li>Zhaohui Zheng, Hongyuan Zha, Tong Zhang, Olivier Chapelle, Keke
|
||
Chen, Gordon Sun</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/8f8d/874a3f0217289ba317b1f6175ac3b6f73d70.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Multiclass Boosting Classification with Active
|
||
Learning (SDM 2007)</strong>
|
||
<ul>
|
||
<li>Jian Huang, Seyda Ertekin, Yang Song, Hongyuan Zha, C. Lee
|
||
Giles</li>
|
||
<li><a
|
||
href="https://epubs.siam.org/doi/abs/10.1137/1.9781611972771.27">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>AdaRank: a Boosting Algorithm for Information Retrieval
|
||
(SIGIR 2007)</strong>
|
||
<ul>
|
||
<li>Jun Xu, Hang Li</li>
|
||
<li><a
|
||
href="http://www.bigdatalab.ac.cn/~junxu/publications/SIGIR2007_AdaRank.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-17">2006</h2>
|
||
<ul>
|
||
<li><strong>Gradient Boosting for Sequence Alignment (AAAI
|
||
2006)</strong>
|
||
<ul>
|
||
<li>Charles Parker, Alan Fern, Prasad Tadepalli</li>
|
||
<li><a
|
||
href="http://web.engr.oregonstate.edu/~afern/papers/aaai06-align.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Kernel Models for Regression (ICDM 2006)</strong>
|
||
<ul>
|
||
<li>Ping Sun, Xin Yao</li>
|
||
<li><a
|
||
href="https://www.cs.bham.ac.uk/~xin/papers/icdm06SunYao.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting for Learning Multiple Classes with Imbalanced Class
|
||
Distribution (ICDM 2006)</strong>
|
||
<ul>
|
||
<li>Yanmin Sun, Mohamed S. Kamel, Yang Wang</li>
|
||
<li><a
|
||
href="http://people.ee.duke.edu/~lcarin/ImbalancedClassDistribution.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting the Feature Space: Text Classification for
|
||
Unstructured Data on the Web (ICDM 2006)</strong>
|
||
<ul>
|
||
<li>Yang Song, Ding Zhou, Jian Huang, Isaac G. Councill, Hongyuan Zha,
|
||
C. Lee Giles</li>
|
||
<li><a href="http://sonyis.me/paperpdf/icdm06_song.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Totally Corrective Boosting Algorithms that Maximize the
|
||
Margin (ICML 2006)</strong>
|
||
<ul>
|
||
<li>Manfred K. Warmuth, Jun Liao, Gunnar Rätsch</li>
|
||
<li><a
|
||
href="https://users.soe.ucsc.edu/~manfred/pubs/C75.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>How Boosting the Margin Can Also Boost Classifier Complexity
|
||
(ICML 2006)</strong>
|
||
<ul>
|
||
<li>Lev Reyzin, Robert E. Schapire</li>
|
||
<li><a
|
||
href="http://rob.schapire.net/papers/boost_complexity.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multiclass Boosting with Repartitioning (ICML 2006)</strong>
|
||
<ul>
|
||
<li>Ling Li</li>
|
||
<li><a
|
||
href="https://authors.library.caltech.edu/72259/1/p569-li.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>AdaBoost is Consistent (NIPS 2006)</strong>
|
||
<ul>
|
||
<li>Peter L. Bartlett, Mikhail Traskin</li>
|
||
<li><a
|
||
href="http://jmlr.csail.mit.edu/papers/volume8/bartlett07b/bartlett07b.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Structured Prediction for Imitation Learning (NIPS
|
||
2006)</strong>
|
||
<ul>
|
||
<li>Nathan D. Ratliff, David M. Bradley, J. Andrew Bagnell, Joel E.
|
||
Chestnutt</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/3154-boosting-structured-prediction-for-imitation-learning.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Chained Boosting (NIPS 2006)</strong>
|
||
<ul>
|
||
<li>Christian R. Shelton, Wesley Huie, Kin Fai Kan</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/2981-chained-boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>When Efficient Model Averaging Out-Performs Boosting and
|
||
Bagging (PKDD 2006)</strong>
|
||
<ul>
|
||
<li>Ian Davidson, Wei Fan</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/11871637_46">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-18">2005</h2>
|
||
<ul>
|
||
<li><strong>Semantic Place Classification of Indoor Environments with
|
||
Mobile Robots Using Boosting (AAAI 2005)</strong>
|
||
<ul>
|
||
<li>Axel Rottmann, Óscar Martínez Mozos, Cyrill Stachniss, Wolfram
|
||
Burgard</li>
|
||
<li><a
|
||
href="http://www2.informatik.uni-freiburg.de/~stachnis/pdf/rottmann05aaai.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting-based Parse Reranking with Subtree Features (ACL
|
||
2005)</strong>
|
||
<ul>
|
||
<li>Taku Kudo, Jun Suzuki, Hideki Isozaki</li>
|
||
<li><a
|
||
href="http://chasen.org/~taku/publications/acl2005.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Using RankBoost to Compare Retrieval Systems (CIKM
|
||
2005)</strong>
|
||
<ul>
|
||
<li>Huyen-Trang Vu, Patrick Gallinari</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.9470&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Classifier Fusion Using Shared Sampling Distribution for
|
||
Boosting (ICDM 2005)</strong>
|
||
<ul>
|
||
<li>Costin Barbu, Raja Tanveer Iqbal, Jing Peng</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/1565659">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Semi-Supervised Mixture of Kernels via LPBoost Methods (ICDM
|
||
2005)</strong>
|
||
<ul>
|
||
<li>Jinbo Bi, Glenn Fung, Murat Dundar, R. Bharat Rao</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/1565728">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Discriminative Learning of Bayesian Network
|
||
Classifier via Boosted Augmented Naive Bayes (ICML 2005)</strong>
|
||
<ul>
|
||
<li>Yushi Jing, Vladimir Pavlovic, James M. Rehg</li>
|
||
<li><a
|
||
href="http://mrl.isr.uc.pt/pub/bscw.cgi/d27355/Jing05Efficient.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Unifying the Error-Correcting and Output-Code AdaBoost
|
||
within the Margin Framework (ICML 2005)</strong>
|
||
<ul>
|
||
<li>Yijun Sun, Sinisa Todorovic, Jian Li, Dapeng Wu</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.138.4246&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Smoothed Boosting Algorithm Using Probabilistic Output
|
||
Codes (ICML 2005)</strong>
|
||
<ul>
|
||
<li>Rong Jin, Jian Zhang</li>
|
||
<li><a
|
||
href="http://www.stat.purdue.edu/~jianzhan/papers/icml05jin.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Robust Boosting and its Relation to Bagging (KDD
|
||
2005)</strong>
|
||
<ul>
|
||
<li>Saharon Rosset</li>
|
||
<li><a
|
||
href="https://www.tau.ac.il/~saharon/papers/bagboost.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Computations via Scalable Sparse Kernel Partial
|
||
Least Squares and Boosted Latent Features (KDD 2005)</strong>
|
||
<ul>
|
||
<li>Michinari Momma</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.387.2078&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multiple Instance Boosting for Object Detection (NIPS
|
||
2005)</strong>
|
||
<ul>
|
||
<li>Paul A. Viola, John C. Platt, Cha Zhang</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.138.8312&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Convergence and Consistency of Regularized Boosting
|
||
Algorithms with Stationary B-Mixing Observations (NIPS 2005)</strong>
|
||
<ul>
|
||
<li>Aurelie C. Lozano, Sanjeev R. Kulkarni, Robert E. Schapire</li>
|
||
<li><a
|
||
href="https://www.cs.princeton.edu/~schapire/papers/betamix.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted decision trees for word recognition in handwritten
|
||
document retrieval (SIGIR 2005)</strong>
|
||
<ul>
|
||
<li>Nicholas R. Howe, Toni M. Rath, R. Manmatha</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.152.1551&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Obtaining Calibrated Probabilities from Boosting (UAI
|
||
2005)</strong>
|
||
<ul>
|
||
<li>Alexandru Niculescu-Mizil, Rich Caruana</li>
|
||
<li><a
|
||
href="https://www.cs.cornell.edu/~caruana/niculescu.scldbst.crc.rev4.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-19">2004</h2>
|
||
<ul>
|
||
<li><strong>Online Parallel Boosting (AAAI 2004)</strong>
|
||
<ul>
|
||
<li>Jesse A. Reichler, Harlan D. Harris, Michael A. Savchenko</li>
|
||
<li><a
|
||
href="https://www.aaai.org/Papers/AAAI/2004/AAAI04-059.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Boosting Approach to Multiple Instance Learning (ECML
|
||
2004)</strong>
|
||
<ul>
|
||
<li>Peter Auer, Ronald Ortner</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-540-30115-8_9">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Boosting Algorithm for Classification of Semi-Structured
|
||
Text (EMNLP 2004)</strong>
|
||
<ul>
|
||
<li>Taku Kudo, Yuji Matsumoto</li>
|
||
<li><a href="https://www.aclweb.org/anthology/W04-3239">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Text Classification by Boosting Weak Learners based on Terms
|
||
and Concepts (ICDM 2004)</strong>
|
||
<ul>
|
||
<li>Stephan Bloehdorn, Andreas Hotho</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/1410303">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Grammatical Inference with Confidence Oracles (ICML
|
||
2004)</strong>
|
||
<ul>
|
||
<li>Jean-Christophe Janodet, Richard Nock, Marc Sebban, Henri-Maxime
|
||
Suchier</li>
|
||
<li><a
|
||
href="http://www1.univ-ag.fr/~rnock/Articles/Drafts/icml04-jnss.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Surrogate Maximization/Minimization Algorithms for AdaBoost
|
||
and the Logistic Regression Model (ICML 2004)</strong>
|
||
<ul>
|
||
<li>Zhihua Zhang, James T. Kwok, Dit-Yan Yeung</li>
|
||
<li><a
|
||
href="https://icml.cc/Conferences/2004/proceedings/papers/77.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Training Conditional Random Fields via Gradient Tree
|
||
Boosting (ICML 2004)</strong>
|
||
<ul>
|
||
<li>Thomas G. Dietterich, Adam Ashenfelter, Yaroslav Bulatov</li>
|
||
<li><a
|
||
href="http://web.engr.oregonstate.edu/~tgd/publications/ml2004-treecrf.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Margin Based Distance Functions for Clustering
|
||
(ICML 2004)</strong>
|
||
<ul>
|
||
<li>Tomer Hertz, Aharon Bar-Hillel, Daphna Weinshall</li>
|
||
<li><a
|
||
href="http://www.cs.huji.ac.il/~daphna/papers/distboost-icml.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Column-Generation Boosting Methods for Mixture of Kernels
|
||
(KDD 2004)</strong>
|
||
<ul>
|
||
<li>Jinbo Bi, Tong Zhang, Kristin P. Bennett</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.6359&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Optimal Aggregation of Classifiers and Boosting Maps in
|
||
Functional Magnetic Resonance Imaging (NIPS 2004)</strong>
|
||
<ul>
|
||
<li>Vladimir Koltchinskii, Manel Martínez-Ramón, Stefan Posse</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/2699-optimal-aggregation-of-classifiers-and-boosting-maps-in-functional-magnetic-resonance-imaging.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting on Manifolds: Adaptive Regularization of Base
|
||
Classifiers (NIPS 2004)</strong>
|
||
<ul>
|
||
<li>Balázs Kégl, Ligen Wang</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/2613-boosting-on-manifolds-adaptive-regularization-of-base-classifiers">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Contextual Models for Object Detection Using Boosted Random
|
||
Fields (NIPS 2004)</strong>
|
||
<ul>
|
||
<li>Antonio Torralba, Kevin P. Murphy, William T. Freeman</li>
|
||
<li><a
|
||
href="https://www.cs.ubc.ca/~murphyk/Papers/BRF-nips04-camera.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Generalization Error and Algorithmic Convergence of Median
|
||
Boosting (NIPS 2004)</strong>
|
||
<ul>
|
||
<li>Balázs Kégl</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.70.8990&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>An Application of Boosting to Graph Classification (NIPS
|
||
2004)</strong>
|
||
<ul>
|
||
<li>Taku Kudo, Eisaku Maeda, Yuji Matsumoto</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/2739-an-application-of-boosting-to-graph-classification">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Logistic Regression and Boosting for Labeled Bags of
|
||
Instances (PAKDD 2004)</strong>
|
||
<ul>
|
||
<li>Xin Xu, Eibe Frank</li>
|
||
<li><a
|
||
href="https://www.cs.waikato.ac.nz/~ml/publications/2004/xu-frank.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Fast and Light Boosting for Adaptive Mining of Data Streams
|
||
(PAKDD 2004)</strong>
|
||
<ul>
|
||
<li>Fang Chu, Carlo Zaniolo</li>
|
||
<li><a
|
||
href="http://web.cs.ucla.edu/~zaniolo/papers/NBCAJMW77MW0J8CP.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-20">2003</h2>
|
||
<ul>
|
||
<li><strong>On Boosting and the Exponential Loss (AISTATS 2003)</strong>
|
||
<ul>
|
||
<li>Abraham J. Wyner</li>
|
||
<li><a
|
||
href="http://www-stat.wharton.upenn.edu/~ajw/exploss.ps">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Support Vector Machines for Text Classification
|
||
through Parameter-Free Threshold Relaxation (CIKM 2003)</strong>
|
||
<ul>
|
||
<li>James G. Shanahan, Norbert Roma</li>
|
||
<li><a href="https://dl.acm.org/citation.cfm?id=956911">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning Cross-Document Structural Relationships Using
|
||
Boosting (CIKM 2003)</strong>
|
||
<ul>
|
||
<li>Zhu Zhang, Jahna Otterbacher, Dragomir R. Radev</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.128.7712&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On Boosting Improvement: Error Reduction and Convergence
|
||
Speed-Up (ECML 2003)</strong>
|
||
<ul>
|
||
<li>Marc Sebban, Henri-Maxime Suchier</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-540-39857-8_32">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Lazy Decision Trees (ICML 2003)</strong>
|
||
<ul>
|
||
<li>Xiaoli Zhang Fern, Carla E. Brodley</li>
|
||
<li><a
|
||
href="https://www.aaai.org/Papers/ICML/2003/ICML03-026.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On the Convergence of Boosting Procedures (ICML
|
||
2003)</strong>
|
||
<ul>
|
||
<li>Tong Zhang, Bin Yu</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/dd3f/901b232280533fbdb9e57f144f44723617cf.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Linear Programming Boosting for Uneven Datasets (ICML
|
||
2003)</strong>
|
||
<ul>
|
||
<li>Jure Leskovec, John Shawe-Taylor</li>
|
||
<li><a
|
||
href="https://cs.stanford.edu/people/jure/pubs/textbooster-icml03.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Monte Carlo Theory as an Explanation of Bagging and Boosting
|
||
(IJCAI 2003)</strong>
|
||
<ul>
|
||
<li>Roberto Esposito, Lorenza Saitta</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=1630733">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On the Dynamics of Boosting (NIPS 2003)</strong>
|
||
<ul>
|
||
<li>Cynthia Rudin, Ingrid Daubechies, Robert E. Schapire</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/2535-on-the-dynamics-of-boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Mutual Boosting for Contextual Inference (NIPS
|
||
2003)</strong>
|
||
<ul>
|
||
<li>Michael Fink, Pietro Perona</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/2520-mutual-boosting-for-contextual-inference">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Versus Covering (NIPS 2003)</strong>
|
||
<ul>
|
||
<li>Kohei Hatano, Manfred K. Warmuth</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/2532-boosting-versus-covering">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multiple-Instance Learning via Disjunctive Programming
|
||
Boosting (NIPS 2003)</strong>
|
||
<ul>
|
||
<li>Stuart Andrews, Thomas Hofmann</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/2478-multiple-instance-learning-via-disjunctive-programming-boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Averaged Boosting: A Noise-Robust Ensemble Method (PAKDD
|
||
2003)</strong>
|
||
<ul>
|
||
<li>Yongdai Kim</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-36175-8_38">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>SMOTEBoost: Improving Prediction of the Minority Class in
|
||
Boosting (PKDD 2003)</strong>
|
||
<ul>
|
||
<li>Nitesh V. Chawla, Aleksandar Lazarevic, Lawrence O. Hall, Kevin W.
|
||
Bowyer</li>
|
||
<li><a
|
||
href="https://www3.nd.edu/~nchawla/papers/ECML03.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-21">2002</h2>
|
||
<ul>
|
||
<li><strong>Minimum Majority Classification and Boosting (AAAI
|
||
2002)</strong>
|
||
<ul>
|
||
<li>Philip M. Long</li>
|
||
<li><a
|
||
href="http://phillong.info/publications/minmaj.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Ranking Algorithms for Named Entity Extraction: Boosting and
|
||
the Voted Perceptron (ACL 2002)</strong>
|
||
<ul>
|
||
<li>Michael Collins</li>
|
||
<li><a href="https://www.aclweb.org/anthology/P02-1062">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting to Correct Inductive Bias in Text Classification
|
||
(CIKM 2002)</strong>
|
||
<ul>
|
||
<li>Yan Liu, Yiming Yang, Jaime G. Carbonell</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=584792.584850">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>How to Make AdaBoost.M1 Work for Weak Base Classifiers by
|
||
Changing Only One Line of the Code (ECML 2002)</strong>
|
||
<ul>
|
||
<li>Günther Eibl, Karl Peter Pfeiffer</li>
|
||
<li><a href="https://dl.acm.org/citation.cfm?id=650068">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Scaling Boosting by Margin-Based Inclusion of Features and
Relations (ECML 2002)</strong>
<ul>
<li>Susanne Hoche, Stefan Wrobel</li>
<li><a
href="https://link.springer.com/chapter/10.1007/3-540-36755-1_13">[Paper]</a></li>
</ul></li>
<li><strong>A Robust Boosting Algorithm (ECML 2002)</strong>
<ul>
<li>Richard Nock, Patrice Lefaucheur</li>
<li><a href="https://dl.acm.org/citation.cfm?id=650081">[Paper]</a></li>
</ul></li>
<li><strong>iBoost: Boosting Using an instance-Based Exponential
Weighting Scheme (ECML 2002)</strong>
<ul>
<li>Stephen Kwek, Chau Nguyen</li>
<li><a
href="https://www.researchgate.net/publication/220516082_iBoost_Boosting_using_an_instance-based_exponential_weighting_scheme">[Paper]</a></li>
</ul></li>
<li><strong>Boosting Density Function Estimators (ECML 2002)</strong>
<ul>
<li>Franck Thollard, Marc Sebban, Philippe Ézéquel</li>
<li><a
href="https://link.springer.com/chapter/10.1007%2F3-540-36755-1_36">[Paper]</a></li>
</ul></li>
<li><strong>Statistical Behavior and Consistency of Support Vector
Machines, Boosting, and Beyond (ICML 2002)</strong>
<ul>
<li>Tong Zhang</li>
<li><a
href="https://www.researchgate.net/publication/221344927_Statistical_Behavior_and_Consistency_of_Support_Vector_Machines_Boosting_and_Beyond">[Paper]</a></li>
</ul></li>
<li><strong>A Boosted Maximum Entropy Model for Learning Text Chunking
(ICML 2002)</strong>
<ul>
<li>Seong-Bae Park, Byoung-Tak Zhang</li>
<li><a
href="https://www.researchgate.net/publication/221345636_A_Boosted_Maximum_Entropy_Model_for_Learning_Text_Chunking">[Paper]</a></li>
</ul></li>
<li><strong>Towards Large Margin Speech Recognizers by Boosting and
Discriminative Training (ICML 2002)</strong>
<ul>
<li>Carsten Meyer, Peter Beyerlein</li>
<li><a
href="https://www.semanticscholar.org/paper/Towards-Large-Margin-Speech-Recognizers-by-Boosting-Meyer-Beyerlein/8408479e36da812cdbf6bc15f7849c3e76a1016d">[Paper]</a></li>
</ul></li>
<li><strong>Incorporating Prior Knowledge into Boosting (ICML
2002)</strong>
<ul>
<li>Robert E. Schapire, Marie Rochery, Mazin G. Rahim, Narendra K.
Gupta</li>
<li><a
href="http://rob.schapire.net/papers/boostknowledge.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Modeling Auction Price Uncertainty Using Boosting-based
Conditional Density Estimation (ICML 2002)</strong>
<ul>
<li>Robert E. Schapire, Peter Stone, David A. McAllester, Michael L.
Littman, János A. Csirik</li>
<li><a
href="http://www.cs.utexas.edu/~ai-lab/pubs/ICML02-tac.pdf">[Paper]</a></li>
</ul></li>
<li><strong>MARK: A Boosting Algorithm for Heterogeneous Kernel Models
(KDD 2002)</strong>
<ul>
<li>Kristin P. Bennett, Michinari Momma, Mark J. Embrechts</li>
<li><a
href="http://homepages.rpiscrews.us/~bennek/papers/kdd2.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Predicting rare classes: can boosting make any weak learner
strong (KDD 2002)</strong>
<ul>
<li>Mahesh V. Joshi, Ramesh C. Agarwal, Vipin Kumar</li>
<li><a
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.13.1159&rep=rep1&type=pdf">[Paper]</a></li>
</ul></li>
<li><strong>Kernel Design Using Boosting (NIPS 2002)</strong>
<ul>
<li>Koby Crammer, Joseph Keshet, Yoram Singer</li>
<li><a
href="https://pdfs.semanticscholar.org/ff79/344807e972fdd7e5e1c3ed5c539dd1aeecbe.pdf">[Paper]</a></li>
</ul></li>
<li><strong>FloatBoost Learning for Classification (NIPS 2002)</strong>
<ul>
<li>Stan Z. Li, ZhenQiu Zhang, Heung-Yeung Shum, HongJiang Zhang</li>
<li><a
href="https://pdfs.semanticscholar.org/8ccc/5ef87eab96a4cae226750eba8322b30606ea.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Discriminative Learning for Label Sequences via Boosting
(NIPS 2002)</strong>
<ul>
<li>Yasemin Altun, Thomas Hofmann, Mark Johnson</li>
<li><a
href="http://web.science.mq.edu.au/~mjohnson/papers/nips02.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Boosting Density Estimation (NIPS 2002)</strong>
<ul>
<li>Saharon Rosset, Eran Segal</li>
<li><a
href="https://papers.nips.cc/paper/2298-boosting-density-estimation.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Self Supervised Boosting (NIPS 2002)</strong>
<ul>
<li>Max Welling, Richard S. Zemel, Geoffrey E. Hinton</li>
<li><a
href="https://pdfs.semanticscholar.org/6a2a/f112a803e70c23b7055de2e73007cf42c301.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Boosted Dyadic Kernel Discriminants (NIPS 2002)</strong>
<ul>
<li>Baback Moghaddam, Gregory Shakhnarovich</li>
<li><a
href="http://www.merl.com/publications/docs/TR2002-55.pdf">[Paper]</a></li>
</ul></li>
<li><strong>A Method to Boost Support Vector Machines (PAKDD
2002)</strong>
<ul>
<li>Lili Diao, Keyun Hu, Yuchang Lu, Chunyi Shi</li>
<li><a
href="https://elkingarcia.github.io/Papers/MLDM07.pdf">[Paper]</a></li>
</ul></li>
<li><strong>A Method to Boost Naive Bayesian Classifiers (PAKDD
2002)</strong>
<ul>
<li>Lili Diao, Keyun Hu, Yuchang Lu, Chunyi Shi</li>
<li><a
href="https://link.springer.com/chapter/10.1007/3-540-47887-6_11">[Paper]</a></li>
</ul></li>
<li><strong>Predicting Rare Classes: Comparing Two-Phase Rule Induction
to Cost-Sensitive Boosting (PKDD 2002)</strong>
<ul>
<li>Mahesh V. Joshi, Ramesh C. Agarwal, Vipin Kumar</li>
<li><a
href="https://link.springer.com/chapter/10.1007/3-540-45681-3_20">[Paper]</a></li>
</ul></li>
<li><strong>Iterative Data Squashing for Boosting Based on a
Distribution-Sensitive Distance (PKDD 2002)</strong>
<ul>
<li>Yuta Choki, Einoshin Suzuki</li>
<li><a
href="https://link.springer.com/chapter/10.1007/3-540-45681-3_8">[Paper]</a></li>
</ul></li>
<li><strong>Staged Mixture Modelling and Boosting (UAI 2002)</strong>
<ul>
<li>Christopher Meek, Bo Thiesson, David Heckerman</li>
<li><a href="https://arxiv.org/abs/1301.0586">[Paper]</a></li>
</ul></li>
<li><strong>Advances in Boosting (UAI 2002)</strong>
<ul>
<li>Robert E. Schapire</li>
<li><a href="http://rob.schapire.net/papers/uai02.pdf">[Paper]</a></li>
</ul></li>
</ul>
<h2 id="section-22">2001</h2>
|
||
<ul>
|
||
<li><strong>Is Regularization Unnecessary for Boosting? (AISTATS
|
||
2001)</strong>
|
||
<ul>
|
||
<li>Wenxin Jiang</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/2439718_Is_Regularization_Unnecessary_for_Boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Online Bagging and Boosting (AISTATS 2001)</strong>
|
||
<ul>
|
||
<li>Nikunj C. Oza, Stuart J. Russell</li>
|
||
<li><a
|
||
href="https://ti.arc.nasa.gov/m/profile/oza/files/ozru01a.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Text Categorization Using Transductive Boosting (ECML
|
||
2001)</strong>
|
||
<ul>
|
||
<li>Hirotoshi Taira, Masahiko Haruno</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-44795-4_39">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Improving Term Extraction by System Combination Using
|
||
Boosting (ECML 2001)</strong>
|
||
<ul>
|
||
<li>Jordi Vivaldi, Lluís Màrquez, Horacio Rodríguez</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=3108351">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Analysis of the Performance of AdaBoost.M2 for the Simulated
|
||
Digit-Recognition-Example (ECML 2001)</strong>
|
||
<ul>
|
||
<li>Günther Eibl, Karl Peter Pfeiffer</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-44795-4_10">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On the Practice of Branching Program Boosting (ECML
|
||
2001)</strong>
|
||
<ul>
|
||
<li>Tapio Elomaa, Matti Kääriäinen</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/221112522_On_the_Practice_of_Branching_Program_Boosting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Mixture Models for Semi-supervised Learning (ICANN
|
||
2001)</strong>
|
||
<ul>
|
||
<li>Yves Grandvalet, Florence d’Alché-Buc, Christophe Ambroise</li>
|
||
<li><a
href="https://link.springer.com/chapter/10.1007/3-540-44668-0_7">[Paper]</a></li>
</ul></li>
<li><strong>A Comparison of Stacking with Meta Decision Trees to
Bagging, Boosting, and Stacking with other Methods (ICDM 2001)</strong>
<ul>
<li>Bernard Zenko, Ljupco Todorovski, Saso Dzeroski</li>
<li><a
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.23.3118&rep=rep1&type=pdf">[Paper]</a></li>
</ul></li>
<li><strong>Using Boosting to Simplify Classification Models (ICDM
2001)</strong>
<ul>
<li>Virginia Wheway</li>
<li><a
href="https://ieeexplore.ieee.org/abstract/document/989565">[Paper]</a></li>
</ul></li>
<li><strong>Evaluating Boosting Algorithms to Classify Rare Classes:
Comparison and Improvements (ICDM 2001)</strong>
<ul>
<li>Mahesh V. Joshi, Vipin Kumar, Ramesh C. Agarwal</li>
<li><a
href="https://pdfs.semanticscholar.org/b829/fe743e4beeeed65d32d2d7931354df7a2f60.pdf">[Paper]</a></li>
<li><a href="">[Code]</a></li>
|
||
</ul></li>
<li><strong>Boosting Neighborhood-Based Classifiers (ICML 2001)</strong>
<ul>
<li>Marc Sebban, Richard Nock, Stéphane Lallich</li>
<li><a
href="https://www.semanticscholar.org/paper/Boosting-Neighborhood-Based-Classifiers-Sebban-Nock/ee88e3bbe8a7e81cae7ee53da2c824de7c82f882">[Paper]</a></li>
</ul></li>
<li><strong>Boosting Noisy Data (ICML 2001)</strong>
<ul>
<li>Abba Krieger, Chuan Long, Abraham J. Wyner</li>
<li><a
href="https://www.researchgate.net/profile/Abba_Krieger/publication/221345435_Boosting_Noisy_Data/links/00463528a1ba641692000000.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Some Theoretical Aspects of Boosting in the Presence of
Noisy Data (ICML 2001)</strong>
<ul>
<li>Wenxin Jiang</li>
<li><a
href="http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=2494A2C06ACA22FA971AC1C29B53FF62?doi=10.1.1.27.7231&rep=rep1&type=pdf">[Paper]</a></li>
</ul></li>
<li><strong>Filters, Wrappers and a Boosting-Based Hybrid for Feature
Selection (ICML 2001)</strong>
<ul>
<li>Sanmay Das</li>
<li><a
href="https://pdfs.semanticscholar.org/93b6/25a0e35b59fa6a3e7dc1cbdb31268d62d69f.pdf">[Paper]</a></li>
</ul></li>
<li><strong>The Distributed Boosting Algorithm (KDD 2001)</strong>
<ul>
<li>Aleksandar Lazarevic, Zoran Obradovic</li>
<li><a
href="https://www.researchgate.net/publication/2488971_The_Distributed_Boosting_Algorithm">[Paper]</a></li>
</ul></li>
<li><strong>Experimental Comparisons of Online and Batch Versions of
Bagging and Boosting (KDD 2001)</strong>
<ul>
<li>Nikunj C. Oza, Stuart J. Russell</li>
<li><a
href="https://people.eecs.berkeley.edu/~russell/papers/kdd01-online.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Semi-supervised MarginBoost (NIPS 2001)</strong>
<ul>
<li>Florence d’Alché-Buc, Yves Grandvalet, Christophe Ambroise</li>
<li><a
href="https://pdfs.semanticscholar.org/2197/f1c2d55827b6928cc80030922569acce2d6c.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Boosting and Maximum Likelihood for Exponential Models (NIPS
2001)</strong>
<ul>
<li>Guy Lebanon, John D. Lafferty</li>
<li><a
href="https://papers.nips.cc/paper/2042-boosting-and-maximum-likelihood-for-exponential-models.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Fast and Robust Classification using Asymmetric AdaBoost and
a Detector Cascade (NIPS 2001)</strong>
<ul>
<li>Paul A. Viola, Michael J. Jones</li>
<li><a
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.68.4306&rep=rep1&type=pdf">[Paper]</a></li>
</ul></li>
<li><strong>Boosting Localized Classifiers in Heterogeneous Databases
(SDM 2001)</strong>
<ul>
<li>Aleksandar Lazarevic, Zoran Obradovic</li>
<li><a
href="https://epubs.siam.org/doi/abs/10.1137/1.9781611972719.14">[Paper]</a></li>
</ul></li>
</ul>
<h2 id="section-23">2000</h2>
|
||
<ul>
|
||
<li><strong>Boosted Wrapper Induction (AAAI 2000)</strong>
|
||
<ul>
|
||
<li>Dayne Freitag, Nicholas Kushmerick</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/d009/a2bd48a9d1971fbc0d99f6df00539a62048a.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>An Improved Boosting Algorithm and its Application to Text
|
||
Categorization (CIKM 2000)</strong>
|
||
<ul>
|
||
<li>Fabrizio Sebastiani, Alessandro Sperduti, Nicola Valdambrini</li>
|
||
<li><a
|
||
href="http://nmis.isti.cnr.it/sebastiani/Publications/CIKM00.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting for Document Routing (CIKM 2000)</strong>
|
||
<ul>
|
||
<li>Raj D. Iyer, David D. Lewis, Robert E. Schapire, Yoram Singer, Amit
|
||
Singhal</li>
|
||
<li><a href="http://singhal.info/cikm-2000.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On the Boosting Pruning Problem (ECML 2000)</strong>
|
||
<ul>
|
||
<li>Christino Tamon, Jie Xiang</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-45164-1_41">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Applied to Word Sense Disambiguation (ECML
|
||
2000)</strong>
|
||
<ul>
|
||
<li>Gerard Escudero, Lluís Màrquez, German Rigau</li>
|
||
<li><a href="https://dl.acm.org/citation.cfm?id=649539">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>An Empirical Study of MetaCost Using Boosting Algorithms
|
||
(ECML 2000)</strong>
|
||
<ul>
|
||
<li>Kai Ming Ting</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.218.1624&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>FeatureBoost: A Meta-Learning Algorithm that Improves Model
|
||
Robustness (ICML 2000)</strong>
|
||
<ul>
|
||
<li>Joseph O’Sullivan, John Langford, Rich Caruana, Avrim Blum</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/221345746_FeatureBoost_A_Meta-Learning_Algorithm_that_Improves_Model_Robustness">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Comparing the Minimum Description Length Principle and
|
||
Boosting in the Automatic Analysis of Discourse (ICML 2000)</strong>
|
||
<ul>
|
||
<li>Tadashi Nomoto, Yuji Matsumoto</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/221344998_Comparing_the_Minimum_Description_Length_Principle_and_Boosting_in_the_Automatic_Analysis_of_Discourse">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Boosting Approach to Topic Spotting on Subdialogues (ICML
|
||
2000)</strong>
|
||
<ul>
|
||
<li>Kary Myers, Michael J. Kearns, Satinder P. Singh, Marilyn A.
|
||
Walker</li>
|
||
<li><a
|
||
href="https://www.cis.upenn.edu/~mkearns/papers/topicspot.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Comparative Study of Cost-Sensitive Boosting Algorithms
|
||
(ICML 2000)</strong>
|
||
<ul>
|
||
<li>Kai Ming Ting</li>
|
||
<li><a href="https://dl.acm.org/citation.cfm?id=657944">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting a Positive-Data-Only Learner (ICML 2000)</strong>
|
||
<ul>
|
||
<li>Andrew R. Mitchell</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.3669">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Column Generation Algorithm For Boosting (ICML
|
||
2000)</strong>
|
||
<ul>
|
||
<li>Kristin P. Bennett, Ayhan Demiriz, John Shawe-Taylor</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=1828D5853F656BD6892E9C2C446ECC68?doi=10.1.1.16.9612&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Gradient-Based Boosting Algorithm for Regression Problems
|
||
(NIPS 2000)</strong>
|
||
<ul>
|
||
<li>Richard S. Zemel, Toniann Pitassi</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/c41a/9417f5605b55bdd216d119e47669a92f5c50.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Weak Learners and Improved Rates of Convergence in Boosting
|
||
(NIPS 2000)</strong>
|
||
<ul>
|
||
<li>Shie Mannor, Ron Meir</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/1906-weak-learners-and-improved-rates-of-convergence-in-boosting.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Adaptive Boosting for Spatial Functions with Unstable
|
||
Driving Attributes (PAKDD 2000)</strong>
|
||
<ul>
|
||
<li>Aleksandar Lazarevic, Tim Fiez, Zoran Obradovic</li>
|
||
<li><a
|
||
href="http://www.dabi.temple.edu/~zoran/papers/lazarevic01j.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Scaling Up a Boosting-Based Learner via Adaptive Sampling
|
||
(PAKDD 2000)</strong>
|
||
<ul>
|
||
<li>Carlos Domingo, Osamu Watanabe</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-45571-X_37">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning First Order Logic Time Series Classifiers: Rules
|
||
and Boosting (PKDD 2000)</strong>
|
||
<ul>
|
||
<li>Juan J. Rodríguez Diez, Carlos Alonso González, Henrik Boström</li>
|
||
<li><a
|
||
href="https://people.dsv.su.se/~henke/papers/rodriguez00b.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Bagging and Boosting with Dynamic Integration of Classifiers
|
||
(PKDD 2000)</strong>
|
||
<ul>
|
||
<li>Alexey Tsymbal, Seppo Puuronen</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-45372-5_12">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Text Filtering by Boosting Naive Bayes Classifiers (SIGIR
|
||
2000)</strong>
|
||
<ul>
|
||
<li>Yu-Hwan Kim, Shang-Yoon Hahn, Byoung-Tak Zhang</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/221299823_Text_filtering_by_boosting_Naive_Bayes_classifiers">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-24">1999</h2>
|
||
<ul>
|
||
<li><strong>Boosting Methodology for Regression Problems (AISTATS
|
||
1999)</strong>
|
||
<ul>
|
||
<li>Greg Ridgeway, David Madigan, Thomas Richardson</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/5f19/6a8baa281b2190c4519305bec8f5c91c8e5a.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Applied to Tagging and PP Attachment (EMNLP
|
||
1999)</strong>
|
||
<ul>
|
||
<li>Steven Abney, Robert E. Schapire, Yoram Singer</li>
|
||
<li><a href="https://www.aclweb.org/anthology/W99-0606">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Lazy Bayesian Rules: A Lazy Semi-Naive Bayesian Learning
|
||
Technique Competitive to Boosting Decision Trees (ICML 1999)</strong>
|
||
<ul>
|
||
<li>Zijian Zheng, Geoffrey I. Webb, Kai Ming Ting</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/067e/86836ddbcb5e2844e955c16e058366a18c77.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>AdaCost: Misclassification Cost-Sensitive Boosting (ICML
|
||
1999)</strong>
|
||
<ul>
|
||
<li>Wei Fan, Salvatore J. Stolfo, Junxin Zhang, Philip K. Chan</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/9ddf/bc2cc5c1b13b80a1a487b9caa57e80edd863.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting a Strong Learner: Evidence Against the Minimum
|
||
Margin (ICML 1999)</strong>
|
||
<ul>
|
||
<li>Michael Bonnell Harries</li>
|
||
<li><a href="https://dl.acm.org/citation.cfm?id=657480">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Algorithms as Gradient Descent (NIPS 1999)</strong>
|
||
<ul>
|
||
<li>Llew Mason, Jonathan Baxter, Peter L. Bartlett, Marcus R. Frean</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/1766-boosting-algorithms-as-gradient-descent.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting with Multi-Way Branching in Decision Trees (NIPS
|
||
1999)</strong>
|
||
<ul>
|
||
<li>Yishay Mansour, David A. McAllester</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/1659-boosting-with-multi-way-branching-in-decision-trees.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Potential Boosters (NIPS 1999)</strong>
|
||
<ul>
|
||
<li>Nigel Duffy, David P. Helmbold</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/4884/c765b6ceab7bdfb6703489810c8a386fd2a8.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-25">1998</h2>
|
||
<ul>
|
||
<li><strong>An Efficient Boosting Algorithm for Combining Preferences
|
||
(ICML 1998)</strong>
|
||
<ul>
|
||
<li>Yoav Freund, Raj D. Iyer, Robert E. Schapire, Yoram Singer</li>
|
||
<li><a
|
||
href="http://jmlr.csail.mit.edu/papers/volume4/freund03a/freund03a.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Query Learning Strategies Using Boosting and Bagging (ICML
|
||
1998)</strong>
|
||
<ul>
|
||
<li>Naoki Abe, Hiroshi Mamitsuka</li>
|
||
<li><a
|
||
href="https://www.bic.kyoto-u.ac.jp/pathway/mami/pubs/Files/icml98.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Regularizing AdaBoost (NIPS 1998)</strong>
|
||
<ul>
|
||
<li>Gunnar Rätsch, Takashi Onoda, Klaus-Robert Müller</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/0afc/9de245547c675d40ad29240e2788c0416f91.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-26">1997</h2>
|
||
<ul>
|
||
<li><strong>Boosting the Margin: A New Explanation for the Effectiveness
|
||
of Voting Methods (ICML 1997)</strong>
|
||
<ul>
|
||
<li>Robert E. Schapire, Yoav Freund, Peter Barlett, Wee Sun Lee</li>
|
||
<li><a
|
||
href="https://www.cc.gatech.edu/~isbell/tutorials/boostingmargins.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Using Output Codes to Boost Multiclass Learning Problems
|
||
(ICML 1997)</strong>
|
||
<ul>
|
||
<li>Robert E. Schapire</li>
|
||
<li><a
|
||
href="http://rob.schapire.net/papers/Schapire97.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Improving Regressors Using Boosting Techniques (ICML
|
||
1997)</strong>
|
||
<ul>
|
||
<li>Harris Drucker</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/8d49/e2dedb817f2c3330e74b63c5fc86d2399ce3.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Pruning Adaptive Boosting (ICML 1997)</strong>
|
||
<ul>
|
||
<li>Dragos D. Margineantu, Thomas G. Dietterich</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/b25f/615fc139fbdeccc3bcf4462f908d7f8e37f9.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Training Methods for Adaptive Boosting of Neural Networks
|
||
(NIPS 1997)</strong>
|
||
<ul>
|
||
<li>Holger Schwenk, Yoshua Bengio</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/1335-training-methods-for-adaptive-boosting-of-neural-networks.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-27">1996</h2>
|
||
<ul>
|
||
<li><strong>Experiments with a New Boosting Algorithm (ICML
|
||
1996)</strong>
|
||
<ul>
|
||
<li>Yoav Freund, Robert E. Schapire</li>
|
||
<li><a
|
||
href="https://cseweb.ucsd.edu/~yfreund/papers/boostingexperiments.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-28">1995</h2>
|
||
<ul>
|
||
<li><strong>Boosting Decision Trees (NIPS 1995)</strong>
|
||
<ul>
|
||
<li>Harris Drucker, Corinna Cortes</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/1059-boosting-decision-trees.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-29">1994</h2>
|
||
<ul>
|
||
<li><strong>Boosting and Other Machine Learning Algorithms (ICML
|
||
1994)</strong>
|
||
<ul>
|
||
<li>Harris Drucker, Corinna Cortes, Lawrence D. Jackel, Yann LeCun,
|
||
Vladimir Vapnik</li>
|
||
<li><a
|
||
href="https://www.sciencedirect.com/science/article/pii/B9781558603356500155">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<hr />
<p><strong>License</strong></p>
<ul>
<li><a
href="https://github.com/benedekrozemberczki/awesome-gradient-boosting-papers/blob/master/LICENSE">CC0
Universal</a></li>
</ul>
<p><a
href="https://github.com/benedekrozemberczki/awesome-gradient-boosting-papers">gradientboostingpapers.md
Github</a></p>