<h1 id="awesome-decision-classification-and-regression-tree-research-papers">Awesome Decision, Classification, and Regression Tree Research Papers</h1>
<a href="https://github.com/sindresorhus/awesome"><img src="https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg" alt="Awesome" /></a>
<a href="http://makeapullrequest.com"><img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square" alt="PRs Welcome" /></a>
<a href="https://github.com/benedekrozemberczki/awesome-decision-tree-papers/archive/master.zip"><img src="https://img.shields.io/github/repo-size/benedekrozemberczki/awesome-decision-tree-papers.svg" alt="repo size" /></a>
<img src="https://img.shields.io/github/license/benedekrozemberczki/awesome-decision-tree-papers.svg?color=blue" alt="License" />
<a href="https://twitter.com/intent/follow?screen_name=benrozemberczki"><img src="https://img.shields.io/twitter/follow/benrozemberczki?style=social&amp;logo=twitter" alt="benedekrozemberczki" /></a>
<p align="center">
<img width="300" src="tree.png">
</p>
<p>A curated list of classification and regression tree research papers with implementations from the following conferences:</p>
<ul>
<li>Machine learning
<ul>
<li><a href="https://nips.cc/">NeurIPS</a></li>
<li><a href="https://icml.cc/">ICML</a></li>
<li><a href="https://iclr.cc/">ICLR</a></li>
</ul></li>
<li>Computer vision
<ul>
<li><a href="http://cvpr2019.thecvf.com/">CVPR</a></li>
<li><a href="http://iccv2019.thecvf.com/">ICCV</a></li>
<li><a href="https://eccv2018.org/">ECCV</a></li>
</ul></li>
<li>Natural language processing
<ul>
<li><a href="http://www.acl2019.org/EN/index.xhtml">ACL</a></li>
<li><a href="https://naacl2019.org/">NAACL</a></li>
<li><a href="https://www.emnlp-ijcnlp2019.org/">EMNLP</a></li>
</ul></li>
<li>Data
<ul>
<li><a href="https://www.kdd.org/">KDD</a></li>
<li><a href="http://www.cikmconference.org/">CIKM</a></li>
<li><a href="http://icdm2019.bigke.org/">ICDM</a></li>
<li><a href="https://www.siam.org/Conferences/CM/Conference/sdm19">SDM</a></li>
<li><a href="http://pakdd2019.medmeeting.org">PAKDD</a></li>
<li><a href="http://ecmlpkdd2019.org">PKDD/ECML</a></li>
<li><a href="https://sigir.org/">SIGIR</a></li>
<li><a href="https://www2019.thewebconf.org/">WWW</a></li>
<li><a href="http://www.wsdm-conference.org">WSDM</a></li>
</ul></li>
<li>Artificial intelligence
<ul>
<li><a href="https://www.aaai.org/">AAAI</a></li>
<li><a href="https://www.aistats.org/">AISTATS</a></li>
<li><a href="https://e-nns.org/icann2019/">ICANN</a></li>
<li><a href="https://www.ijcai.org/">IJCAI</a></li>
<li><a href="http://www.auai.org/">UAI</a></li>
</ul></li>
</ul>
<p>Similar collections about <a href="https://github.com/benedekrozemberczki/awesome-graph-classification">graph classification</a>, <a href="https://github.com/benedekrozemberczki/awesome-gradient-boosting-papers">gradient boosting</a>, <a href="https://github.com/benedekrozemberczki/awesome-fraud-detection-papers">fraud detection</a>, <a href="https://github.com/benedekrozemberczki/awesome-monte-carlo-tree-search-papers">Monte Carlo tree search</a>, and <a href="https://github.com/benedekrozemberczki/awesome-community-detection">community detection</a> papers with implementations.</p>
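<p>For readers who want a runnable starting point before diving into the papers, the sketch below fits a plain CART-style tree with scikit-learn. It is a minimal illustration only; the library, dataset, and hyperparameters are our own choices and are not taken from any paper listed here.</p>
<pre><code># Minimal CART baseline with scikit-learn (illustrative; not from any listed paper).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)  # CART-style axis-aligned splits
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
</code></pre>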
<h2 id="section">2022</h2>
<ul>
<li><strong>Using MaxSAT for Efficient Explanations of Tree Ensembles (AAAI 2022)</strong>
<ul>
<li>Alexey Ignatiev, Yacine Izza, Peter J. Stuckey, João Marques-Silva</li>
<li><a href="https://alexeyignatiev.github.io/assets/pdf/iisms-aaai22-preprint.pdf">[Paper]</a></li>
</ul></li>
<li><strong>FOCUS: Flexible Optimizable Counterfactual Explanations for Tree Ensembles (AAAI 2022)</strong>
<ul>
<li>Ana Lucic, Harrie Oosterhuis, Hinda Haned, Maarten de Rijke</li>
<li><a href="https://a-lucic.github.io/talks/ICML_SMRL_focus.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Explainable and Local Correction of Classification Models Using Decision Trees (AAAI 2022)</strong>
<ul>
<li>Hirofumi Suzuki, Hiroaki Iwashita, Takuya Takagi, Keisuke Goto, Yuta Fujishige, Satoshi Hara</li>
<li><a href="https://ojs.aaai.org/index.php/AAAI/article/view/20816">[Paper]</a></li>
</ul></li>
<li><strong>Robust Optimal Classification Trees against Adversarial Examples (AAAI 2022)</strong>
<ul>
<li>Daniël Vos, Sicco Verwer</li>
<li><a href="https://arxiv.org/abs/2109.03857">[Paper]</a></li>
</ul></li>
<li><strong>Fairness without Imputation: A Decision Tree Approach for Fair Prediction with Missing Values (AAAI 2022)</strong>
<ul>
<li>Haewon Jeong, Hao Wang, Flávio P. Calmon</li>
<li><a href="https://arxiv.org/abs/2109.10431">[Paper]</a></li>
</ul></li>
<li><strong>Fast Sparse Decision Tree Optimization via Reference Ensembles (AAAI 2022)</strong>
<ul>
<li>Hayden McTavish, Chudi Zhong, Reto Achermann, Ilias Karimalis, Jacques Chen, Cynthia Rudin, Margo I. Seltzer</li>
<li><a href="https://arxiv.org/abs/2112.00798">[Paper]</a></li>
<li><a href="https://pypi.org/project/gosdt/">[Code]</a></li>
</ul></li>
<li><strong>TransBoost: A Boosting-Tree Kernel Transfer Learning Algorithm for Improving Financial Inclusion (AAAI 2022)</strong>
<ul>
<li>Yiheng Sun, Tian Lu, Cong Wang, Yuan Li, Huaiyu Fu, Jingran Dong, Yunjie Xu</li>
<li><a href="https://arxiv.org/abs/2112.02365">[Paper]</a></li>
<li><a href="https://github.com/yihengsun/TransBoost">[Code]</a></li>
</ul></li>
<li><strong>Counterfactual Explanation Trees: Transparent and Consistent Actionable Recourse with Decision Trees (AISTATS 2022)</strong>
<ul>
<li>Kentaro Kanamori, Takuya Takagi, Ken Kobayashi, Yuichi Ike</li>
<li><a href="https://proceedings.mlr.press/v151/kanamori22a.html">[Paper]</a></li>
</ul></li>
<li><strong>Accurate Shapley Values for explaining tree-based models (AISTATS 2022)</strong>
<ul>
<li>Salim I. Amoukou, Tangi Salaün, Nicolas J.-B. Brunel</li>
<li><a href="https://arxiv.org/abs/2106.03820">[Paper]</a></li>
</ul></li>
<li><strong>A cautionary tale on fitting decision trees to data from additive models: generalization lower bounds (AISTATS 2022)</strong>
<ul>
<li>Yan Shuo Tan, Abhineet Agarwal, Bin Yu</li>
<li><a href="https://arxiv.org/abs/2110.09626">[Paper]</a></li>
<li><a href="https://github.com/aagarwal1996/additive_trees">[Code]</a></li>
</ul></li>
<li><strong>Enterprise-Scale Search: Accelerating Inference for Sparse Extreme Multi-Label Ranking Trees (WWW 2022)</strong>
<ul>
<li>Philip A. Etter, Kai Zhong, Hsiang-Fu Yu, Lexing Ying, Inderjit S. Dhillon</li>
<li><a href="https://arxiv.org/abs/2106.02697">[Paper]</a></li>
</ul></li>
<li><strong>MBCT: Tree-Based Feature-Aware Binning for Individual Uncertainty Calibration (WWW 2022)</strong>
<ul>
<li>Siguang Huang, Yunli Wang, Lili Mou, Huayue Zhang, Han Zhu, Chuan Yu, Bo Zheng</li>
<li><a href="https://arxiv.org/abs/2202.04348">[Paper]</a></li>
</ul></li>
<li><strong>Rethinking Conversational Recommendations: Is Decision Tree All You Need? (CIKM 2022)</strong>
<ul>
<li>A S. M. Ahsan-Ul-Haque, Hongning Wang</li>
<li><a href="https://arxiv.org/abs/2208.14614">[Paper]</a></li>
</ul></li>
<li><strong>A Neural Tangent Kernel Perspective of Infinite Tree Ensembles (ICLR 2022)</strong>
<ul>
<li>Ryuichi Kanoh, Mahito Sugiyama</li>
<li><a href="https://openreview.net/forum?id=vUH85MOXO7h">[Paper]</a></li>
</ul></li>
<li><strong>POETREE: Interpretable Policy Learning with Adaptive Decision Trees (ICLR 2022)</strong>
<ul>
<li>Alizée Pace, Alex Chan, Mihaela van der Schaar</li>
<li><a href="https://arxiv.org/abs/2203.08057">[Paper]</a></li>
</ul></li>
<li><strong>Hierarchical Shrinkage: Improving the accuracy and interpretability of tree-based models (ICML 2022)</strong>
<ul>
<li>Abhineet Agarwal, Yan Shuo Tan, Omer Ronen, Chandan Singh, Bin Yu</li>
<li><a href="https://arxiv.org/abs/2202.00858">[Paper]</a></li>
</ul></li>
<li><strong>Popular decision tree algorithms are provably noise tolerant (ICML 2022)</strong>
<ul>
<li>Guy Blanc, Jane Lange, Ali Malik, Li-Yang Tan</li>
<li><a href="https://arxiv.org/abs/2206.08899">[Paper]</a></li>
</ul></li>
<li><strong>Robust Counterfactual Explanations for Tree-Based Ensembles (ICML 2022)</strong>
<ul>
<li>Sanghamitra Dutta, Jason Long, Saumitra Mishra, Cecilia Tilli, Daniele Magazzeni</li>
<li><a href="https://proceedings.mlr.press/v162/dutta22a.html">[Paper]</a></li>
</ul></li>
<li><strong>Fast Provably Robust Decision Trees and Boosting (ICML 2022)</strong>
<ul>
<li>Jun-Qi Guo, Ming-Zhuo Teng, Wei Gao, Zhi-Hua Zhou</li>
<li><a href="https://proceedings.mlr.press/v162/guo22h.html">[Paper]</a></li>
</ul></li>
<li><strong>BAMDT: Bayesian Additive Semi-Multivariate Decision Trees for Nonparametric Regression (ICML 2022)</strong>
<ul>
<li>Zhao Tang Luo, Huiyan Sang, Bani K. Mallick</li>
<li><a href="https://proceedings.mlr.press/v162/luo22a.html">[Paper]</a></li>
</ul></li>
<li><strong>Quant-BnB: A Scalable Branch-and-Bound Method for Optimal Decision Trees with Continuous Features (ICML 2022)</strong>
<ul>
<li>Rahul Mazumder, Xiang Meng, Haoyue Wang</li>
<li><a href="https://arxiv.org/abs/2206.11844">[Paper]</a></li>
</ul></li>
<li><strong>A Tree-based Model Averaging Approach for Personalized Treatment Effect Estimation from Heterogeneous Data Sources (ICML 2022)</strong>
<ul>
<li>Xiaoqing Tan, Chung-Chou H. Chang, Ling Zhou, Lu Tang</li>
<li><a href="https://arxiv.org/abs/2103.06261">[Paper]</a></li>
</ul></li>
<li><strong>On Preferred Abductive Explanations for Decision Trees and Random Forests (IJCAI 2022)</strong>
<ul>
<li>Gilles Audemard, Steve Bellart, Louenas Bounia, Frédéric Koriche, Jean-Marie Lagniez, Pierre Marquis</li>
<li><a href="https://www.ijcai.org/proceedings/2022/0091.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Extending Decision Tree to Handle Multiple Fairness Criteria (IJCAI 2022)</strong>
<ul>
<li>Alessandro Castelnovo</li>
<li><a href="https://www.ijcai.org/proceedings/2022/0822.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Flexible Modeling and Multitask Learning using Differentiable Tree Ensembles (KDD 2022)</strong>
<ul>
<li>Shibal Ibrahim, Hussein Hazimeh, Rahul Mazumder</li>
<li><a href="https://arxiv.org/abs/2205.09717">[Paper]</a></li>
</ul></li>
<li><strong>Integrity Authentication in Tree Models (KDD 2022)</strong>
<ul>
<li>Weijie Zhao, Yingjie Lao, Ping Li</li>
<li><a href="https://dl.acm.org/doi/abs/10.1145/3534678.3539428">[Paper]</a></li>
</ul></li>
<li><strong>Retrieval-Based Gradient Boosting Decision Trees for Disease Risk Assessment (KDD 2022)</strong>
<ul>
<li>Handong Ma, Jiahang Cao, Yuchen Fang, Weinan Zhang, Wenbo Sheng, Shaodian Zhang, Yong Yu</li>
<li><a href="https://dl.acm.org/doi/abs/10.1145/3534678.3539052">[Paper]</a></li>
</ul></li>
<li><strong>Improved feature importance computation for tree models based on the Banzhaf value (UAI 2022)</strong>
<ul>
<li>Adam Karczmarz, Tomasz Michalak, Anish Mukherjee, Piotr Sankowski, Piotr Wygocki</li>
<li><a href="https://proceedings.mlr.press/v180/karczmarz22a.html">[Paper]</a></li>
</ul></li>
<li><strong>Learning linear non-Gaussian polytree models (UAI 2022)</strong>
<ul>
<li>Daniele Tramontano, Anthea Monod, Mathias Drton</li>
<li><a href="https://arxiv.org/abs/2208.06701">[Paper]</a></li>
</ul></li>
</ul>
<h2 id="section-1">2021</h2>
<ul>
<li><strong>Online Probabilistic Label Trees (AISTATS 2021)</strong>
<ul>
<li>Kalina Jasinska-Kobus, Marek Wydmuch, Devanathan Thiruvenkatachari, Krzysztof Dembczyński</li>
<li><a href="https://arxiv.org/abs/2007.04451">[Paper]</a></li>
<li><a href="https://github.com/mwydmuch/napkinXC">[Code]</a></li>
</ul></li>
<li><strong>Optimal Decision Trees for Nonlinear Metrics (AAAI 2021)</strong>
<ul>
<li>Emir Demirovic, Peter J. Stuckey</li>
<li><a href="https://arxiv.org/abs/2009.06921">[Paper]</a></li>
</ul></li>
<li><strong>SAT-based Decision Tree Learning for Large Data Sets (AAAI 2021)</strong>
<ul>
<li>André Schidler, Stefan Szeider</li>
<li><a href="https://ojs.aaai.org/index.php/AAAI/article/view/16509">[Paper]</a></li>
</ul></li>
<li><strong>Parameterized Complexity of Small Decision Tree Learning (AAAI 2021)</strong>
<ul>
<li>Sebastian Ordyniak, Stefan Szeider</li>
<li><a href="https://www.ac.tuwien.ac.at/files/tr/ac-tr-21-002.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms (AAAI 2021)</strong>
<ul>
<li>Miguel Á. Carreira-Perpiñán, Suryabhan Singh Hada</li>
<li><a href="https://arxiv.org/abs/2103.01096">[Paper]</a></li>
</ul></li>
<li><strong>Geometric Heuristics for Transfer Learning in Decision Trees (CIKM 2021)</strong>
<ul>
<li>Siddhesh Chaubal, Mateusz Rzepecki, Patrick K. Nicholson, Guangyuan Piao, Alessandra Sala</li>
<li><a href="https://dl.acm.org/doi/abs/10.1145/3459637.3482259">[Paper]</a></li>
</ul></li>
<li><strong>Fairness-Aware Training of Decision Trees by Abstract Interpretation (CIKM 2021)</strong>
<ul>
<li>Francesco Ranzato, Caterina Urban, Marco Zanella</li>
<li><a href="https://dl.acm.org/doi/abs/10.1145/3459637.3482342">[Paper]</a></li>
</ul></li>
<li><strong>Enabling Efficiency-Precision Trade-offs for Label Trees in Extreme Classification (CIKM 2021)</strong>
<ul>
<li>Tavor Z. Baharav, Daniel L. Jiang, Kedarnath Kolluri, Sujay Sanghavi, Inderjit S. Dhillon</li>
<li><a href="https://arxiv.org/abs/2106.00730">[Paper]</a></li>
</ul></li>
<li><strong>Are Neural Rankers still Outperformed by Gradient Boosted Decision Trees? (ICLR 2021)</strong>
<ul>
<li>Zhen Qin, Le Yan, Honglei Zhuang, Yi Tay, Rama Kumar Pasumarthi, Xuanhui Wang, Michael Bendersky, Marc Najork</li>
<li><a href="https://openreview.net/forum?id=Ut1vF_q_vC">[Paper]</a></li>
</ul></li>
<li><strong>NBDT: Neural-Backed Decision Tree (ICLR 2021)</strong>
<ul>
<li>Alvin Wan, Lisa Dunlap, Daniel Ho, Jihan Yin, Scott Lee, Suzanne Petryk, Sarah Adel Bargal, Joseph E. Gonzalez</li>
<li><a href="https://arxiv.org/abs/2004.00221">[Paper]</a></li>
</ul></li>
<li><strong>Versatile Verification of Tree Ensembles (ICML 2021)</strong>
<ul>
<li>Laurens Devos, Wannes Meert, Jesse Davis</li>
<li><a href="https://arxiv.org/abs/2010.13880">[Paper]</a></li>
</ul></li>
<li><strong>Connecting Interpretability and Robustness in Decision Trees through Separation (ICML 2021)</strong>
<ul>
<li>Michal Moshkovitz, Yao-Yuan Yang, Kamalika Chaudhuri</li>
<li><a href="https://arxiv.org/abs/2102.07048">[Paper]</a></li>
</ul></li>
<li><strong>Optimal Counterfactual Explanations in Tree Ensembles (ICML 2021)</strong>
<ul>
<li>Axel Parmentier, Thibaut Vidal</li>
<li><a href="https://arxiv.org/abs/2106.06631">[Paper]</a></li>
</ul></li>
<li><strong>Efficient Training of Robust Decision Trees Against Adversarial Examples (ICML 2021)</strong>
<ul>
<li>Daniël Vos, Sicco Verwer</li>
<li><a href="https://arxiv.org/abs/2012.10438">[Paper]</a></li>
</ul></li>
<li><strong>Learning Binary Decision Trees by Argmin Differentiation (ICML 2021)</strong>
<ul>
<li>Valentina Zantedeschi, Matt J. Kusner, Vlad Niculae</li>
<li><a href="https://arxiv.org/pdf/2010.04627.pdf">[Paper]</a></li>
</ul></li>
<li><strong>BLOCKSET (Block-Aligned Serialized Trees): Reducing Inference Latency for Tree Ensemble Deployment (KDD 2021)</strong>
<ul>
<li>Meghana Madhyastha, Kunal Lillaney, James Browne, Joshua T. Vogelstein, Randal Burns</li>
<li><a href="https://dl.acm.org/doi/abs/10.1145/3447548.3467368">[Paper]</a></li>
</ul></li>
<li><strong>ControlBurn: Feature Selection by Sparse Forests (KDD 2021)</strong>
<ul>
<li>Brian Liu, Miaolan Xie, Madeleine Udell</li>
<li><a href="https://dl.acm.org/doi/abs/10.1145/3447548.3467387">[Paper]</a></li>
</ul></li>
<li><strong>Probabilistic Gradient Boosting Machines for Large-Scale Probabilistic Regression (KDD 2021)</strong>
<ul>
<li>Olivier Sprangers, Sebastian Schelter, Maarten de Rijke</li>
<li><a href="https://dl.acm.org/doi/10.1145/3447548.3467278">[Paper]</a></li>
</ul></li>
<li><strong>Verifying Tree Ensembles by Reasoning about Potential Instances (SDM 2021)</strong>
<ul>
<li>Laurens Devos, Wannes Meert, Jesse Davis</li>
<li><a href="https://arxiv.org/abs/2001.11905">[Paper]</a></li>
</ul></li>
</ul>
<h2 id="section-2">2020</h2>
<ul>
<li><strong>DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim Verification (ACL 2020)</strong>
<ul>
<li>Lianwei Wu, Yuan Rao, Yongqiang Zhao, Hao Liang, Ambreen Nazir</li>
<li><a href="https://arxiv.org/abs/2004.13455">[Paper]</a></li>
</ul></li>
<li><strong>Privacy-Preserving Gradient Boosting Decision Trees (AAAI 2020)</strong>
<ul>
<li>Qinbin Li, Zhaomin Wu, Zeyi Wen, Bingsheng He</li>
<li><a href="https://arxiv.org/abs/1911.04209">[Paper]</a></li>
</ul></li>
<li><strong>Practical Federated Gradient Boosting Decision Trees (AAAI 2020)</strong>
<ul>
<li>Qinbin Li, Zeyi Wen, Bingsheng He</li>
<li><a href="https://arxiv.org/abs/1911.04206">[Paper]</a></li>
</ul></li>
<li><strong>Efficient Inference of Optimal Decision Trees (AAAI 2020)</strong>
<ul>
<li>Florent Avellaneda</li>
<li><a href="http://florent.avellaneda.free.fr/dl/AAAI20.pdf">[Paper]</a></li>
<li><a href="https://github.com/FlorentAvellaneda/InferDT">[Code]</a></li>
</ul></li>
<li><strong>Learning Optimal Decision Trees Using Caching Branch-and-Bound Search (AAAI 2020)</strong>
<ul>
<li>Gael Aglin, Siegfried Nijssen, Pierre Schaus</li>
<li><a href="https://dial.uclouvain.be/pr/boreal/fr/object/boreal%3A223390/datastream/PDF_01/view">[Paper]</a></li>
<li><a href="https://pypi.org/project/dl8.5/">[Code]</a></li>
</ul></li>
<li><strong>Abstract Interpretation of Decision Tree Ensemble Classifiers (AAAI 2020)</strong>
<ul>
<li>Francesco Ranzato, Marco Zanella</li>
<li><a href="https://www.math.unipd.it/~ranzato/papers/aaai20.pdf">[Paper]</a></li>
<li><a href="https://github.com/abstract-machine-learning/silva">[Code]</a></li>
</ul></li>
<li><strong>Scalable Feature Selection for (Multitask) Gradient Boosted Trees (AISTATS 2020)</strong>
<ul>
<li>Cuize Han, Nikhil Rao, Daria Sorokina, Karthik Subbian</li>
<li><a href="http://proceedings.mlr.press/v108/han20a.html">[Paper]</a></li>
</ul></li>
<li><strong>Optimization Methods for Interpretable Differentiable Decision Trees Applied to Reinforcement Learning (AISTATS 2020)</strong>
<ul>
<li>Andrew Silva, Matthew C. Gombolay, Taylor W. Killian, Ivan Dario Jimenez Jimenez, Sung-Hyun Son</li>
<li><a href="https://arxiv.org/abs/1903.09338">[Paper]</a></li>
</ul></li>
<li><strong>Exploiting Categorical Structure Using Tree-Based Methods (AISTATS 2020)</strong>
<ul>
<li>Brian Lucena</li>
<li><a href="https://arxiv.org/abs/2004.07383">[Paper]</a></li>
</ul></li>
<li><strong>LdSM: Logarithm-depth Streaming Multi-label Decision Trees (AISTATS 2020)</strong>
<ul>
<li>Maryam Majzoubi, Anna Choromanska</li>
<li><a href="https://arxiv.org/abs/1905.10428">[Paper]</a></li>
</ul></li>
<li><strong>Oblique Decision Trees from Derivatives of ReLU Networks (ICLR 2020)</strong>
<ul>
<li>Guang-He Lee, Tommi S. Jaakkola</li>
<li><a href="https://openreview.net/pdf?id=Bke8UR4FPB">[Paper]</a></li>
<li><a href="https://github.com/guanghelee/iclr20-lcn">[Code]</a></li>
</ul></li>
<li><strong>Provable Guarantees for Decision Tree Induction: the Agnostic Setting (ICML 2020)</strong>
<ul>
<li>Guy Blanc, Jane Lange, Li-Yang Tan</li>
<li><a href="https://arxiv.org/abs/2006.00743v1">[Paper]</a></li>
</ul></li>
<li><strong>Decision Trees for Decision-Making under the Predict-then-Optimize Framework (ICML 2020)</strong>
<ul>
<li>Adam N. Elmachtoub, Jason Cheuk Nam Liang, Ryan McNellis</li>
<li><a href="https://arxiv.org/abs/2003.00360">[Paper]</a></li>
</ul></li>
<li><strong>The Tree Ensemble Layer: Differentiability meets Conditional Computation (ICML 2020)</strong>
<ul>
<li>Hussein Hazimeh, Natalia Ponomareva, Petros Mol, Zhenyu Tan, Rahul Mazumder</li>
<li><a href="https://arxiv.org/abs/2002.07772">[Paper]</a></li>
<li><a href="https://github.com/google-research/google-research/tree/master/tf_trees">[Code]</a></li>
</ul></li>
<li><strong>Generalized and Scalable Optimal Sparse Decision Trees (ICML 2020)</strong>
<ul>
<li>Jimmy Lin, Chudi Zhong, Diane Hu, Cynthia Rudin, Margo I. Seltzer</li>
<li><a href="https://arxiv.org/abs/2006.08690">[Paper]</a></li>
<li><a href="https://github.com/xiyanghu/OSDT">[Code]</a></li>
</ul></li>
<li><strong>Born-Again Tree Ensembles (ICML 2020)</strong>
<ul>
<li>Thibaut Vidal, Maximilian Schiffer</li>
<li><a href="https://arxiv.org/abs/2003.11132">[Paper]</a></li>
<li><a href="https://github.com/vidalt/BA-Trees">[Code]</a></li>
</ul></li>
<li><strong>On Lp-norm Robustness of Ensemble Decision Stumps and Trees (ICML 2020)</strong>
<ul>
<li>Yihan Wang, Huan Zhang, Hongge Chen, Duane S. Boning, Cho-Jui Hsieh</li>
<li><a href="https://arxiv.org/abs/2008.08755">[Paper]</a></li>
</ul></li>
<li><strong>Smaller, More Accurate Regression Forests Using Tree Alternating Optimization (ICML 2020)</strong>
<ul>
<li>Arman Zharmagambetov, Miguel Á. Carreira-Perpiñán</li>
<li><a href="http://proceedings.mlr.press/v119/zharmagambetov20a.html">[Paper]</a></li>
</ul></li>
<li><strong>Learning Optimal Decision Trees with MaxSAT and its Integration in AdaBoost (IJCAI 2020)</strong>
<ul>
<li>Hao Hu, Mohamed Siala, Emmanuel Hebrard, Marie-José Huguet</li>
<li><a href="https://www.ijcai.org/Proceedings/2020/163">[Paper]</a></li>
</ul></li>
<li><strong>Speeding up Very Fast Decision Tree with Low Computational Cost (IJCAI 2020)</strong>
<ul>
<li>Jian Sun, Hongyu Jia, Bo Hu, Xiao Huang, Hao Zhang, Hai Wan, Xibin Zhao</li>
<li><a href="https://www.ijcai.org/Proceedings/2020/0177.pdf">[Paper]</a></li>
</ul></li>
<li><strong>PyDL8.5: a Library for Learning Optimal Decision Trees (IJCAI 2020)</strong>
<ul>
<li>Gaël Aglin, Siegfried Nijssen, Pierre Schaus</li>
<li><a href="https://www.ijcai.org/Proceedings/2020/0750.pdf">[Paper]</a></li>
<li><a href="https://github.com/aia-uclouvain/pydl8.5">[Code]</a></li>
</ul></li>
<li><strong>Geodesic Forests (KDD 2020)</strong>
<ul>
<li>Meghana Madhyastha, Gongkai Li, Veronika Strnadova-Neeley, James Browne, Joshua T. Vogelstein, Randal Burns</li>
<li><a href="https://dl.acm.org/doi/pdf/10.1145/3394486.3403094">[Paper]</a></li>
</ul></li>
<li><strong>A Scalable MIP-based Method for Learning Optimal Multivariate Decision Trees (NeurIPS 2020)</strong>
<ul>
<li>Haoran Zhu, Pavankumar Murali, Dzung T. Phan, Lam M. Nguyen, Jayant Kalagnanam</li>
<li><a href="https://arxiv.org/abs/2011.03375">[Paper]</a></li>
</ul></li>
<li><strong>Estimating Decision Tree Learnability with Polylogarithmic Sample Complexity (NeurIPS 2020)</strong>
<ul>
<li>Guy Blanc, Neha Gupta, Jane Lange, Li-Yang Tan</li>
<li><a href="https://arxiv.org/abs/2011.01584">[Paper]</a></li>
</ul></li>
<li><strong>Universal Guarantees for Decision Tree Induction via a Higher-Order Splitting Criterion (NeurIPS 2020)</strong>
<ul>
<li>Guy Blanc, Neha Gupta, Jane Lange, Li-Yang Tan</li>
<li><a href="https://arxiv.org/abs/2010.08633">[Paper]</a></li>
</ul></li>
<li><strong>Smooth And Consistent Probabilistic Regression Trees (NeurIPS 2020)</strong>
<ul>
<li>Sami Alkhoury, Emilie Devijver, Marianne Clausel, Myriam Tami, Éric Gaussier, Georges Oppenheim</li>
<li><a href="https://papers.nips.cc/paper/2020/file/8289889263db4a40463e3f358bb7c7a1-Paper.pdf">[Paper]</a></li>
</ul></li>
<li><strong>An Efficient Adversarial Attack for Tree Ensembles (NeurIPS 2020)</strong>
<ul>
<li>Chong Zhang, Huan Zhang, Cho-Jui Hsieh</li>
<li><a href="https://arxiv.org/abs/2010.11598">[Paper]</a></li>
<li><a href="https://github.com/chong-z/tree-ensemble-attack">[Code]</a></li>
</ul></li>
<li><strong>Decision Trees as Partitioning Machines to Characterize their Generalization Properties (NeurIPS 2020)</strong>
<ul>
<li>Jean-Samuel Leboeuf, Frédéric Leblanc, Mario Marchand</li>
<li><a href="https://papers.nips.cc/paper/2020/file/d2a10b0bd670e442b1d3caa3fbf9e695-Paper.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Evidence Weighted Tree Ensembles for Text Classification (SIGIR 2020)</strong>
<ul>
<li>Md. Zahidul Islam, Jixue Liu, Jiuyong Li, Lin Liu, Wei Kang</li>
<li><a href="https://dl.acm.org/doi/abs/10.1145/3397271.3401229">[Paper]</a></li>
</ul></li>
</ul>
<h2 id="section-3">2019</h2>
<ul>
<li><strong>Multi Level Deep Cascade Trees for Conversion Rate Prediction in Recommendation System (AAAI 2019)</strong>
<ul>
<li>Hong Wen, Jing Zhang, Quan Lin, Keping Yang, Pipei Huang</li>
<li><a href="https://arxiv.org/pdf/1805.09484.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME (AAAI 2019)</strong>
<ul>
<li>Farhad Shakerin, Gopal Gupta</li>
<li><a href="https://arxiv.org/abs/1808.00629">[Paper]</a></li>
</ul></li>
<li><strong>Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making (AAAI 2019)</strong>
<ul>
<li>Sina Aghaei, Mohammad Javad Azizi, Phebe Vayanos</li>
<li><a href="https://arxiv.org/abs/1903.10598">[Paper]</a></li>
</ul></li>
<li><strong>Desiderata for Interpretability: Explaining Decision Tree Predictions with Counterfactuals (AAAI 2019)</strong>
<ul>
<li>Kacper Sokol, Peter A. Flach</li>
<li><a href="https://aaai.org/ojs/index.php/AAAI/article/view/5154">[Paper]</a></li>
</ul></li>
<li><strong>Weighted Oblique Decision Trees (AAAI 2019)</strong>
<ul>
<li>Bin-Bin Yang, Song-Qing Shen, Wei Gao</li>
<li><a href="https://aaai.org/ojs/index.php/AAAI/article/view/4505">[Paper]</a></li>
</ul></li>
<li><strong>Learning Optimal Classification Trees Using a Binary Linear Program Formulation (AAAI 2019)</strong>
<ul>
<li>Sicco Verwer, Yingqian Zhang</li>
<li><a href="https://yingqianzhang.net/wp-content/uploads/2018/12/VerwerZhangAAAI-final.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Optimization of Hierarchical Regression Model with Application to Optimizing Multi-Response Regression K-ary Trees (AAAI 2019)</strong>
<ul>
<li>Pooya Tavallali, Peyman Tavallali, Mukesh Singhal</li>
<li><a href="https://aaai.org/ojs/index.php/AAAI/article/view/4447/4325">[Paper]</a></li>
</ul></li>
<li><strong>XBART: Accelerated Bayesian Additive Regression Trees (AISTATS 2019)</strong>
<ul>
<li>Jingyu He, Saar Yalov, P. Richard Hahn</li>
<li><a href="https://arxiv.org/abs/1810.02215">[Paper]</a></li>
</ul></li>
<li><strong>Interaction Detection with Bayesian Decision Tree Ensembles (AISTATS 2019)</strong>
<ul>
<li>Junliang Du, Antonio R. Linero</li>
<li><a href="https://arxiv.org/abs/1809.08524">[Paper]</a></li>
</ul></li>
<li><strong>Adversarial Training of Gradient-Boosted Decision Trees (CIKM 2019)</strong>
<ul>
<li>Stefano Calzavara, Claudio Lucchese, Gabriele Tolomei</li>
<li><a href="https://www.dais.unive.it/~calzavara/papers/cikm19.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Interpretable MTL from Heterogeneous Domains using Boosted Tree (CIKM 2019)</strong>
<ul>
<li>Ya-Lin Zhang, Longfei Li</li>
<li><a href="https://dl.acm.org/citation.cfm?id=3357384.3358072">[Paper]</a></li>
</ul></li>
<li><strong>Interpreting CNNs via Decision Trees (CVPR 2019)</strong>
<ul>
<li>Quanshi Zhang, Yu Yang, Haotian Ma, Ying Nian Wu</li>
<li><a href="https://arxiv.org/abs/1802.00121">[Paper]</a></li>
</ul></li>
<li><strong>EDiT: Interpreting Ensemble Models via Compact Soft Decision Trees (ICDM 2019)</strong>
<ul>
<li>Jaemin Yoo, Lee Sael</li>
<li><a href="https://github.com/leesael/EDiT/blob/master/docs/YooS19.pdf">[Paper]</a></li>
<li><a href="https://github.com/leesael/EDiT">[Code]</a></li>
</ul></li>
<li><strong>Fair Adversarial Gradient Tree Boosting (ICDM 2019)</strong>
<ul>
<li>Vincent Grari, Boris Ruf, Sylvain Lamprier, Marcin Detyniecki</li>
<li><a href="https://arxiv.org/abs/1911.05369">[Paper]</a></li>
</ul></li>
<li><strong>Functional Transparency for Structured Data: a Game-Theoretic Approach (ICML 2019)</strong>
<ul>
<li>Guang-He Lee, Wengong Jin, David Alvarez-Melis, Tommi S. Jaakkola</li>
<li><a href="http://proceedings.mlr.press/v97/lee19b/lee19b.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Incorporating Grouping Information into Bayesian Decision Tree Ensembles (ICML 2019)</strong>
<ul>
<li>Junliang Du, Antonio R. Linero</li>
<li><a href="http://proceedings.mlr.press/v97/du19d.html">[Paper]</a></li>
</ul></li>
<li><strong>Adaptive Neural Trees (ICML 2019)</strong>
<ul>
<li>Ryutaro Tanno, Kai Arulkumaran, Daniel C. Alexander, Antonio Criminisi, Aditya V. Nori</li>
<li><a href="https://arxiv.org/abs/1807.06699">[Paper]</a></li>
<li><a href="https://github.com/rtanno21609/AdaptiveNeuralTrees">[Code]</a></li>
</ul></li>
<li><strong>Robust Decision Trees Against Adversarial Examples (ICML 2019)</strong>
<ul>
<li>Hongge Chen, Huan Zhang, Duane S. Boning, Cho-Jui Hsieh</li>
<li><a href="https://arxiv.org/abs/1902.10660">[Paper]</a></li>
<li><a href="https://github.com/chenhongge/RobustTrees">[Code]</a></li>
</ul></li>
<li><strong>Learn Smart with Less: Building Better Online Decision Trees with Fewer Training Examples (IJCAI 2019)</strong>
<ul>
<li>Ariyam Das, Jin Wang, Sahil M. Gandhi, Jae Lee, Wei Wang, Carlo Zaniolo</li>
<li><a href="https://www.ijcai.org/proceedings/2019/0306.pdf">[Paper]</a></li>
</ul></li>
<li><strong>FAHT: An Adaptive Fairness-aware Decision Tree Classifier (IJCAI 2019)</strong>
<ul>
<li>Wenbin Zhang, Eirini Ntoutsi</li>
<li><a href="https://arxiv.org/abs/1907.07237">[Paper]</a></li>
<li><a href="https://github.com/vanbanTruong/FAHT">[Code]</a></li>
</ul></li>
<li><strong>Inter-node Hellinger Distance based Decision Tree (IJCAI 2019)</strong>
<ul>
<li>Pritom Saha Akash, Md. Eusha Kadir, Amin Ahsan Ali, Mohammad Shoyaib</li>
<li><a href="https://www.ijcai.org/proceedings/2019/0272.pdf">[Paper]</a></li>
<li><a href="https://github.com/ZDanielsResearch/HellingerTreesMatlab">[Matlab Code]</a></li>
<li><a href="https://github.com/kaustubhrpatil/HDDT">[R Code]</a></li>
</ul></li>
<li><strong>Gradient Boosting with Piece-Wise Linear Regression Trees (IJCAI 2019)</strong>
<ul>
<li>Yu Shi, Jian Li, Zhize Li</li>
<li><a href="https://arxiv.org/abs/1802.05640">[Paper]</a></li>
<li><a href="https://github.com/GBDT-PL/GBDT-PL">[Code]</a></li>
</ul></li>
<li><strong>A Gradient-Based Split Criterion for Highly Accurate and Transparent Model Trees (IJCAI 2019)</strong>
<ul>
<li>Klaus Broelemann, Gjergji Kasneci</li>
<li><a href="https://arxiv.org/abs/1809.09703">[Paper]</a></li>
</ul></li>
<li><strong>Combining Decision Trees and Neural Networks for Learning-to-Rank in Personal Search (KDD 2019)</strong>
<ul>
<li>Pan Li, Zhen Qin, Xuanhui Wang, Donald Metzler</li>
<li><a href="https://ai.google/research/pubs/pub48133/">[Paper]</a></li>
</ul></li>
<li><strong>Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers (NeurIPS 2019)</strong>
<ul>
<li>Guang-He Lee, Yang Yuan, Shiyu Chang, Tommi S. Jaakkola</li>
<li><a href="https://papers.nips.cc/paper/8737-tight-certificates-of-adversarial-robustness-for-randomly-smoothed-classifiers.pdf">[Paper]</a></li>
<li><a href="https://github.com/guanghelee/Randomized_Smoothing">[Code]</a></li>
</ul></li>
<li><strong>Partitioning Structure Learning for Segmented Linear Regression Trees (NeurIPS 2019)</strong>
<ul>
<li>Xiangyu Zheng, Song Xi Chen</li>
<li><a href="https://papers.nips.cc/paper/8494-partitioning-structure-learning-for-segmented-linear-regression-trees">[Paper]</a></li>
</ul></li>
<li><strong>Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks (NeurIPS 2019)</strong>
<ul>
<li>Maksym Andriushchenko, Matthias Hein</li>
<li><a href="https://arxiv.org/abs/1906.03526">[Paper]</a></li>
<li><a href="https://github.com/max-andr/provably-robust-boosting">[Code]</a></li>
</ul></li>
<li><strong>Optimal Decision Tree with Noisy Outcomes (NeurIPS 2019)</strong>
<ul>
<li>Su Jia, Viswanath Nagarajan, Fatemeh Navidi, R. Ravi</li>
<li><a href="https://papers.nips.cc/paper/8592-optimal-decision-tree-with-noisy-outcomes.pdf">[Paper]</a></li>
<li><a href="https://github.com/sjia1/ODT-with-noisy-outcomes">[Code]</a></li>
</ul></li>
<li><strong>Regularized Gradient Boosting (NeurIPS 2019)</strong>
<ul>
<li>Corinna Cortes, Mehryar Mohri, Dmitry Storcheus</li>
<li><a href="https://papers.nips.cc/paper/8784-regularized-gradient-boosting.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Optimal Sparse Decision Trees (NeurIPS 2019)</strong>
<ul>
<li>Xiyang Hu, Cynthia Rudin, Margo Seltzer</li>
<li><a href="https://papers.nips.cc/paper/8947-optimal-sparse-decision-trees.pdf">[Paper]</a></li>
<li><a href="https://github.com/xiyanghu/OSDT">[Code]</a></li>
</ul></li>
<li><strong>MonoForest framework for tree ensemble analysis (NeurIPS 2019)</strong>
<ul>
<li>Igor Kuralenok, Vasilii Ershov, Igor Labutin</li>
<li><a href="https://papers.nips.cc/paper/9530-monoforest-framework-for-tree-ensemble-analysis">[Paper]</a></li>
</ul></li>
<li><strong>Calibrating Probability Estimation Trees using Venn-Abers Predictors (SDM 2019)</strong>
<ul>
<li>Ulf Johansson, Tuwe Löfström, Henrik Boström</li>
<li><a href="https://epubs.siam.org/doi/pdf/10.1137/1.9781611975673.4">[Paper]</a></li>
</ul></li>
<li><strong>Fast Training for Large-Scale One-versus-All Linear Classifiers using Tree-Structured Initialization (SDM 2019)</strong>
<ul>
<li>Huang Fang, Minhao Cheng, Cho-Jui Hsieh, Michael P. Friedlander</li>
<li><a href="https://epubs.siam.org/doi/pdf/10.1137/1.9781611975673.32">[Paper]</a></li>
</ul></li>
<li><strong>Forest Packing: Fast Parallel, Decision Forests (SDM 2019)</strong>
<ul>
<li>James Browne, Disa Mhembere, Tyler M. Tomita, Joshua T. Vogelstein, Randal Burns</li>
<li><a href="https://epubs.siam.org/doi/abs/10.1137/1.9781611975673.6">[Paper]</a></li>
</ul></li>
<li><strong>Block-distributed Gradient Boosted Trees (SIGIR 2019)</strong>
<ul>
<li>Theodore Vasiloudis, Hyunsu Cho, Henrik Boström</li>
<li><a href="https://arxiv.org/abs/1904.10522">[Paper]</a></li>
</ul></li>
<li><strong>Entity Personalized Talent Search Models with Tree Interaction Features (WWW 2019)</strong>
<ul>
<li>Cagri Ozcaglar, Sahin Cem Geyik, Brian Schmitz, Prakhar Sharma, Alex Shelkovnykov, Yiming Ma, Erik Buchanan</li>
<li><a href="https://arxiv.org/abs/1902.09041">[Paper]</a></li>
</ul></li>
</ul>
<h2 id="section-4">2018</h2>
<ul>
<li><strong>Adapting to Concept Drift in Credit Card Transaction Data Streams Using Contextual Bandits and Decision Trees (AAAI 2018)</strong>
<ul>
<li>Dennis J. N. J. Soemers, Tim Brys, Kurt Driessens, Mark H. M. Winands, Ann Nowé</li>
<li><a href="https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewFile/16183/16394">[Paper]</a></li>
</ul></li>
<li><strong>MERCS: Multi-Directional Ensembles of Regression and Classification Trees (AAAI 2018)</strong>
<ul>
<li>Elia Van Wolputte, Evgeniya Korneva, Hendrik Blockeel</li>
<li><a href="https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/viewFile/16875/16735">[Paper]</a></li>
<li><a href="https://github.com/eliavw/mercs-v5">[Code]</a></li>
</ul></li>
<li><strong>Differential Performance Debugging With Discriminant Regression Trees (AAAI 2018)</strong>
<ul>
<li>Saeid Tizpaz-Niari, Pavol Cerný, Bor-Yuh Evan Chang, Ashutosh Trivedi</li>
<li><a href="https://arxiv.org/abs/1711.04076">[Paper]</a></li>
<li><a href="https://github.com/cuplv/DPDEBUGGER">[Code]</a></li>
</ul></li>
<li><strong>Estimating the Class Prior in Positive and Unlabeled Data Through Decision Tree Induction (AAAI 2018)</strong>
<ul>
<li>Jessa Bekker, Jesse Davis</li>
<li><a href="https://aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16776">[Paper]</a></li>
</ul></li>
<li><strong>MDP-Based Cost Sensitive Classification Using Decision Trees (AAAI 2018)</strong>
<ul>
<li>Shlomi Maliah, Guy Shani</li>
<li><a href="https://aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17128">[Paper]</a></li>
</ul></li>
<li><strong>Generative Adversarial Image Synthesis With Decision Tree Latent Controller (CVPR 2018)</strong>
<ul>
<li>Takuhiro Kaneko, Kaoru Hiramatsu, Kunio Kashino</li>
<li><a href="https://arxiv.org/abs/1805.10603">[Paper]</a></li>
<li><a href="https://github.com/LynnHo/DTLC-GAN-Tensorflow">[Code]</a></li>
</ul></li>
<li><strong>Enhancing Very Fast Decision Trees with Local Split-Time Predictions (ICDM 2018)</strong>
<ul>
<li>Viktor Losing, Heiko Wersing, Barbara Hammer</li>
<li><a href="https://www.techfak.uni-bielefeld.de/~hwersing/LosingHammerWersing_ICDM2018.pdf">[Paper]</a></li>
<li><a href="https://github.com/ICDM2018Submission/VFDT-split-time-prediction">[Code]</a></li>
</ul></li>
<li><strong>Realization of Random Forest for Real-Time Evaluation through Tree Framing (ICDM 2018)</strong>
<ul>
<li>Sebastian Buschjäger, Kuan-Hsun Chen, Jian-Jia Chen, Katharina Morik</li>
<li><a href="https://sfb876.tu-dortmund.de/PublicPublicationFiles/buschjaeger_2018a.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Finding Influential Training Samples for Gradient Boosted Decision Trees (ICML 2018)</strong>
<ul>
<li>Boris Sharchilev, Yury Ustinovskiy, Pavel Serdyukov, Maarten de Rijke</li>
<li><a href="https://arxiv.org/abs/1802.06640">[Paper]</a></li>
<li><a href="https://github.com/bsharchilev/influence_boosting">[Code]</a></li>
</ul></li>
<li><strong>Learning Optimal Decision Trees with SAT (IJCAI 2018)</strong>
<ul>
<li>Nina Narodytska, Alexey Ignatiev, Filipe Pereira, João Marques-Silva</li>
<li><a href="https://www.ijcai.org/proceedings/2018/0189.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Extremely Fast Decision Tree (KDD 2018)</strong>
<ul>
<li>Chaitanya Manapragada, Geoffrey I. Webb, Mahsa Salehi</li>
<li><a href="https://arxiv.org/abs/1802.08780">[Paper]</a></li>
<li><a href="https://github.com/doubleplusplus/incremental_decision_tree-CART-Random_Forest_python">[Code]</a></li>
</ul></li>
<li><strong>RapidScorer: Fast Tree Ensemble Evaluation by Maximizing Compactness in Data Level Parallelization (KDD 2018)</strong>
<ul>
<li>Ting Ye, Hucheng Zhou, Will Y. Zou, Bin Gao, Ruofei Zhang</li>
<li><a href="http://ai.stanford.edu/~wzou/kdd_rapidscorer.pdf">[Paper]</a></li>
</ul></li>
<li><strong>CatBoost: Unbiased Boosting with Categorical Features (NIPS 2018)</strong>
<ul>
<li>Liudmila Prokhorenkova, Gleb Gusev, Aleksandr Vorobev, Anna Veronika Dorogush, Andrey Gulin</li>
<li><a href="https://papers.nips.cc/paper/7898-catboost-unbiased-boosting-with-categorical-features.pdf">[Paper]</a></li>
<li><a href="https://catboost.ai/">[Code]</a> (see the usage sketch after this list)</li>
</ul></li>
<li><strong>Active Learning for Non-Parametric Regression Using Purely Random Trees (NIPS 2018)</strong>
<ul>
<li>Jack Goetz, Ambuj Tewari, Paul Zimmerman</li>
<li><a href="https://papers.nips.cc/paper/7520-active-learning-for-non-parametric-regression-using-purely-random-trees.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Alternating Optimization of Decision Trees with Application to Learning Sparse Oblique Trees (NIPS 2018)</strong>
<ul>
<li>Miguel Á. Carreira-Perpiñán, Pooya Tavallali</li>
<li><a href="https://papers.nips.cc/paper/7397-alternating-optimization-of-decision-trees-with-application-to-learning-sparse-oblique-trees">[Paper]</a></li>
</ul></li>
<li><strong>Multi-Layered Gradient Boosting Decision Trees (NIPS 2018)</strong>
<ul>
<li>Ji Feng, Yang Yu, Zhi-Hua Zhou</li>
<li><a href="https://papers.nips.cc/paper/7614-multi-layered-gradient-boosting-decision-trees.pdf">[Paper]</a></li>
<li><a href="https://github.com/kingfengji/mGBDT">[Code]</a></li>
</ul></li>
<li><strong>Transparent Tree Ensembles (SIGIR 2018)</strong>
<ul>
<li>Alexander Moore, Vanessa Murdock, Yaxiong Cai, Kristine Jones</li>
<li><a href="https://dl.acm.org/doi/10.1145/3209978.3210151">[Paper]</a></li>
</ul></li>
<li><strong>Privacy-aware Ranking with Tree Ensembles on the Cloud (SIGIR 2018)</strong>
<ul>
<li>Shiyu Ji, Jinjin Shao, Daniel Agun, Tao Yang</li>
<li><a href="https://sites.cs.ucsb.edu/projects/ds/sigir18.pdf">[Paper]</a></li>
</ul></li>
</ul>
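<p>The CatBoost entry above links the production library. The sketch below shows its basic Python interface; it is a minimal illustration only, and the toy data, hyperparameters, and categorical-column index are our own assumptions rather than settings from the paper.</p>
<pre><code># Minimal CatBoost sketch (toy data and hyperparameters are illustrative).
from catboost import CatBoostClassifier

# Tiny toy dataset; column 0 holds a categorical feature handled natively by CatBoost.
X = [["a", 1.0], ["b", 2.0], ["a", 3.0], ["b", 4.0]]
y = [0, 1, 0, 1]

model = CatBoostClassifier(iterations=50, depth=3, cat_features=[0], verbose=False)
model.fit(X, y)
print(model.predict([["a", 2.5]]))
</code></pre>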
<h2 id="section-5">2017</h2>
<ul>
<li><strong>Strategic Sequences of Arguments for Persuasion Using Decision Trees (AAAI 2017)</strong>
<ul>
<li>Emmanuel Hadoux, Anthony Hunter</li>
<li><a href="http://www0.cs.ucl.ac.uk/staff/a.hunter/papers/aaai17.pdf">[Paper]</a></li>
</ul></li>
<li><strong>BoostVHT: Boosting Distributed Streaming Decision Trees (CIKM 2017)</strong>
<ul>
<li>Theodore Vasiloudis, Foteini Beligianni, Gianmarco De Francisci Morales</li>
<li><a href="https://melmeric.files.wordpress.com/2010/05/boostvht-boosting-distributed-streaming-decision-trees.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Latency Reduction via Decision Tree Based Query Construction (CIKM 2017)</strong>
<ul>
<li>Aman Grover, Dhruv Arya, Ganesh Venkataraman</li>
<li><a href="https://dl.acm.org/citation.cfm?id=3132865">[Paper]</a></li>
</ul></li>
<li><strong>Enumerating Distinct Decision Trees (ICML 2017)</strong>
<ul>
<li>Salvatore Ruggieri</li>
<li><a href="http://proceedings.mlr.press/v70/ruggieri17a/ruggieri17a.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Gradient Boosted Decision Trees for High Dimensional Sparse Output (ICML 2017)</strong>
<ul>
<li>Si Si, Huan Zhang, S. Sathiya Keerthi, Dhruv Mahajan, Inderjit S. Dhillon, Cho-Jui Hsieh</li>
<li><a href="http://proceedings.mlr.press/v70/si17a.html">[Paper]</a></li>
<li><a href="https://github.com/springdaisy/GBDT">[Code]</a></li>
</ul></li>
<li><strong>Consistent Feature Attribution for Tree Ensembles (ICML 2017)</strong>
<ul>
<li>Scott M. Lundberg, Su-In Lee</li>
<li><a href="https://arxiv.org/abs/1706.06060">[Paper]</a></li>
<li><a href="https://github.com/slundberg/shap">[Code]</a></li>
</ul></li>
<li><strong>Extremely Fast Decision Tree Mining for Evolving Data Streams (KDD 2017)</strong>
<ul>
<li>Albert Bifet, Jiajin Zhang, Wei Fan, Cheng He, Jianfeng Zhang, Jianfeng Qian, Geoff Holmes, Bernhard Pfahringer</li>
<li><a href="https://core.ac.uk/download/pdf/151040580.pdf">[Paper]</a></li>
</ul></li>
<li><strong>CatBoost: Gradient Boosting with Categorical Features Support (NIPS 2017)</strong>
<ul>
<li>Anna Veronika Dorogush, Vasily Ershov, Andrey Gulin</li>
<li><a href="https://arxiv.org/abs/1810.11363">[Paper]</a></li>
<li><a href="https://catboost.ai/">[Code]</a></li>
</ul></li>
<li><strong>LightGBM: A Highly Efficient Gradient Boosting Decision Tree (NIPS 2017)</strong>
<ul>
<li>Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, Tie-Yan Liu</li>
<li><a href="https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree">[Paper]</a></li>
<li><a href="https://lightgbm.readthedocs.io/en/latest/">[Code]</a> (see the usage sketch after this list)</li>
</ul></li>
<li><strong>Variable Importance Using Decision Trees (NIPS 2017)</strong>
<ul>
<li>Jalil Kazemitabar, Arash Amini, Adam Bloniarz, Ameet S. Talwalkar</li>
<li><a href="https://papers.nips.cc/paper/6646-variable-importance-using-decision-trees">[Paper]</a></li>
</ul></li>
<li><strong>A Unified Approach to Interpreting Model Predictions (NIPS 2017)</strong>
<ul>
<li>Scott M. Lundberg, Su-In Lee</li>
<li><a href="https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions">[Paper]</a></li>
<li><a href="https://github.com/slundberg/shap">[Code]</a> (also covered in the sketch after this list)</li>
</ul></li>
<li><strong>Pruning Decision Trees via Max-Heap Projection (SDM 2017)</strong>
<ul>
<li>Zhi Nie, Binbin Lin, Shuai Huang, Naren Ramakrishnan, Wei Fan, Jieping Ye</li>
<li><a href="https://www.researchgate.net/publication/317485748_Pruning_Decision_Trees_via_Max-Heap_Projection">[Paper]</a></li>
</ul></li>
<li><strong>A Practical Method for Solving Contextual Bandit Problems Using Decision Trees (UAI 2017)</strong>
<ul>
<li>Adam N. Elmachtoub, Ryan McNellis, Sechan Oh, Marek Petrik</li>
<li><a href="https://arxiv.org/abs/1706.04687">[Paper]</a></li>
</ul></li>
<li><strong>Complexity of Solving Decision Trees with Skew-Symmetric Bilinear Utility (UAI 2017)</strong>
<ul>
<li>Hugo Gilbert, Olivier Spanjaard</li>
<li><a href="http://auai.org/uai2017/proceedings/papers/64.pdf">[Paper]</a></li>
</ul></li>
<li><strong>GB-CENT: Gradient Boosted Categorical Embedding and Numerical Trees (WWW 2017)</strong>
<ul>
<li>Qian Zhao, Yue Shi, Liangjie Hong</li>
<li><a href="http://papers.www2017.com.au.s3-website-ap-southeast-2.amazonaws.com/proceedings/p1311.pdf">[Paper]</a></li>
</ul></li>
</ul>
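<p>The LightGBM entry and the two Lundberg-Lee attribution entries above all link mature libraries: the <code>shap</code> package's <code>TreeExplainer</code> implements tree-specific Shapley value attribution for models such as LightGBM. The sketch below trains a LightGBM classifier and explains a few predictions; it is a minimal illustration, and the synthetic data and hyperparameters are our own assumptions.</p>
<pre><code># Minimal LightGBM + SHAP sketch (data and settings are illustrative).
import numpy as np
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model = LGBMClassifier(n_estimators=100, num_leaves=31)  # leaf-wise GBDT
model.fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(np.shape(shap_values))
</code></pre>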
<h2 id="section-6">2016</h2>
<ul>
<li><strong>Sparse Perceptron Decision Tree for Millions of Dimensions (AAAI 2016)</strong>
<ul>
<li>Weiwei Liu, Ivor W. Tsang</li>
<li><a href="https://aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/12111">[Paper]</a></li>
</ul></li>
<li><strong>Learning Online Smooth Predictors for Realtime Camera Planning Using Recurrent Decision Trees (CVPR 2016)</strong>
<ul>
<li>Jianhui Chen, Hoang Minh Le, Peter Carr, Yisong Yue, James J. Little</li>
<li><a href="http://hoangle.info/papers/cvpr2016_online_smooth_long.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Online Learning with Bayesian Classification Trees (CVPR 2016)</strong>
<ul>
<li>Samuel Rota Bulò, Peter Kontschieder</li>
<li><a href="http://www.dsi.unive.it/~srotabul/files/publications/CVPR2016.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Accurate Robust and Efficient Error Estimation for Decision Trees (ICML 2016)</strong>
<ul>
<li>Lixin Fan</li>
<li><a href="http://proceedings.mlr.press/v48/fan16.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Meta-Gradient Boosted Decision Tree Model for Weight and Target Learning (ICML 2016)</strong>
<ul>
<li>Yury Ustinovskiy, Valentina Fedorova, Gleb Gusev, Pavel Serdyukov</li>
<li><a href="http://proceedings.mlr.press/v48/ustinovskiy16.html">[Paper]</a></li>
</ul></li>
<li><strong>Boosted Decision Tree Regression Adjustment for Variance Reduction in Online Controlled Experiments (KDD 2016)</strong>
<ul>
<li>Alexey Poyarkov, Alexey Drutsa, Andrey Khalyavin, Gleb Gusev, Pavel Serdyukov</li>
<li><a href="https://www.kdd.org/kdd2016/papers/files/adf0653-poyarkovA.pdf">[Paper]</a></li>
</ul></li>
<li><strong>XGBoost: A Scalable Tree Boosting System (KDD 2016)</strong>
<ul>
<li>Tianqi Chen, Carlos Guestrin</li>
<li><a href="https://www.kdd.org/kdd2016/papers/files/rfp0697-chenAemb.pdf">[Paper]</a></li>
<li><a href="https://xgboost.readthedocs.io/en/latest/">[Code]</a> (see the usage sketch after this list)</li>
</ul></li>
<li><strong>Yggdrasil: An Optimized System for Training Deep Decision Trees at Scale (NIPS 2016)</strong>
<ul>
<li>Firas Abuzaid, Joseph K. Bradley, Feynman T. Liang, Andrew Feng, Lee Yang, Matei Zaharia, Ameet S. Talwalkar</li>
<li><a href="https://papers.nips.cc/paper/6366-yggdrasil-an-optimized-system-for-training-deep-decision-trees-at-scale">[Paper]</a></li>
</ul></li>
<li><strong>A Communication-Efficient Parallel Algorithm for Decision Tree (NIPS 2016)</strong>
<ul>
<li>Qi Meng, Guolin Ke, Taifeng Wang, Wei Chen, Qiwei Ye, Zhiming Ma, Tie-Yan Liu</li>
<li><a href="https://arxiv.org/abs/1611.01276">[Paper]</a></li>
<li><a href="https://github.com/microsoft/LightGBM/blob/master/docs/Features.rst">[Code]</a></li>
</ul></li>
<li><strong>Exploiting CPU SIMD Extensions to Speed-up Document Scoring with Tree Ensembles (SIGIR 2016)</strong>
<ul>
<li>Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, Rossano Venturini</li>
<li><a href="http://pages.di.unipi.it/rossano/wp-content/uploads/sites/7/2016/07/SIGIR16a.pdf">[Paper]</a></li>
<li><a href="https://github.com/hpclab/vectorized-quickscorer">[Code]</a></li>
</ul></li>
<li><strong>Post-Learning Optimization of Tree Ensembles for Efficient Ranking (SIGIR 2016)</strong>
<ul>
<li>Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Fabrizio Silvestri, Salvatore Trani</li>
<li><a href="https://www.researchgate.net/publication/305081572_Post-Learning_Optimization_of_Tree_Ensembles_for_Efficient_Ranking">[Paper]</a></li>
<li><a href="https://github.com/hpclab/quickrank">[Code]</a></li>
</ul></li>
</ul>
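<p>XGBoost, linked above, is one of the most widely used systems in this list. The sketch below shows its scikit-learn-style interface; it is a minimal illustration only, and the synthetic data and hyperparameters are our own assumptions rather than settings from the paper.</p>
<pre><code># Minimal XGBoost sketch (data and hyperparameters are illustrative).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
</code></pre>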
<h2 id="section-7">2015</h2>
<ul>
<li><strong>Particle Gibbs for Bayesian Additive Regression Trees (AISTATS 2015)</strong>
<ul>
<li>Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh</li>
<li><a href="https://arxiv.org/abs/1502.04622">[Paper]</a></li>
</ul></li>
<li><strong>DART: Dropouts Meet Multiple Additive Regression Trees (AISTATS 2015)</strong>
<ul>
<li>Korlakai Vinayak Rashmi, Ran Gilad-Bachrach</li>
<li><a href="https://arxiv.org/abs/1505.01866">[Paper]</a></li>
<li><a href="https://xgboost.readthedocs.io/en/latest/">[Code]</a> (see the usage sketch after this list)</li>
</ul></li>
<li><strong>Single Target Tracking Using Adaptive Clustered Decision Trees and Dynamic Multi-level Appearance Models (CVPR 2015)</strong>
<ul>
<li>Jingjing Xiao, Rustam Stolkin, Ales Leonardis</li>
<li><a href="https://www.cv-foundation.org/openaccess/content_cvpr_2015/app/3B_058.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Face Alignment Using Cascade Gaussian Process Regression Trees (CVPR 2015)</strong>
<ul>
<li>Donghoon Lee, Hyunsin Park, Chang Dong Yoo</li>
<li><a href="https://slsp.kaist.ac.kr/paperdata/Face_Alignment_Using.pdf">[Paper]</a></li>
<li><a href="https://github.com/donghoonlee04/cGPRT">[Code]</a></li>
</ul></li>
<li><strong>Tracking-by-Segmentation with Online Gradient Boosting Decision Tree (ICCV 2015)</strong>
<ul>
<li>Jeany Son, Ilchae Jung, Kayoung Park, Bohyung Han</li>
<li><a href="https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Son_Tracking-by-Segmentation_With_Online_ICCV_2015_paper.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Entropy Evaluation Based on Confidence Intervals of Frequency Estimates: Application to the Learning of Decision Trees (ICML 2015)</strong>
<ul>
<li>Mathieu Serrurier, Henri Prade</li>
<li><a href="http://proceedings.mlr.press/v37/serrurier15.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Large-scale Distributed Dependent Nonparametric Trees (ICML 2015)</strong>
<ul>
<li>Zhiting Hu, Qirong Ho, Avinava Dubey, Eric P. Xing</li>
<li><a href="https://www.cs.cmu.edu/~zhitingh/data/icml15hu.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Optimal Action Extraction for Random Forests and Boosted Trees (KDD 2015)</strong>
<ul>
<li>Zhicheng Cui, Wenlin Chen, Yujie He, Yixin Chen</li>
<li><a href="https://www.cse.wustl.edu/~ychen/public/OAE.pdf">[Paper]</a></li>
</ul></li>
<li><strong>A Decision Tree Framework for Spatiotemporal Sequence Prediction (KDD 2015)</strong>
<ul>
<li>Taehwan Kim, Yisong Yue, Sarah L. Taylor, Iain A. Matthews</li>
<li><a href="http://www.yisongyue.com/publications/kdd2015_ssw_dt.pdf">[Paper]</a></li>
</ul></li>
<li><strong>Efficient Non-greedy Optimization of Decision Trees (NIPS 2015)</strong>
<ul>
<li>Mohammad Norouzi, Maxwell D. Collins, Matthew Johnson, David J. Fleet, Pushmeet Kohli</li>
<li><a href="https://arxiv.org/abs/1511.04056">[Paper]</a></li>
</ul></li>
<li><strong>QuickScorer: A Fast Algorithm to Rank Documents with Additive Ensembles of Regression Trees (SIGIR 2015)</strong>
<ul>
<li>Claudio Lucchese, Franco Maria Nardini, Salvatore Orlando, Raffaele Perego, Nicola Tonellotto, Rossano Venturini</li>
<li><a href="http://pages.di.unipi.it/rossano/wp-content/uploads/sites/7/2015/11/sigir15.pdf">[Paper]</a></li>
<li><a href="https://github.com/hpclab/quickrank">[Code]</a></li>
</ul></li>
</ul>
|
||
<h2 id="section-8">2014</h2>
|
||
<ul>
|
||
<li><strong>A Mixtures-of-Trees Framework for Multi-Label Classification
|
||
(CIKM 2014)</strong>
|
||
<ul>
|
||
<li>Charmgil Hong, Iyad Batal, Milos Hauskrecht</li>
|
||
<li><a
|
||
href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4410801/">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On Building Decision Trees from Large-scale Data in
|
||
Applications of On-line Advertising (CIKM 2014)</strong>
|
||
<ul>
|
||
<li>Shivaram Kalyanakrishnan, Deepthi Singh, Ravi Kant</li>
|
||
<li><a
|
||
href="https://www.cse.iitb.ac.in/~shivaram/papers/ksk_cikm_2014.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Fast Supervised Hashing with Decision Trees for
|
||
High-Dimensional Data (CVPR 2014)</strong>
|
||
<ul>
|
||
<li>Guosheng Lin, Chunhua Shen, Qinfeng Shi, Anton van den Hengel, David
|
||
Suter</li>
|
||
<li><a href="https://arxiv.org/abs/1404.1561">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>One Millisecond Face Alignment with an Ensemble of
|
||
Regression Trees (CVPR 2014)</strong>
|
||
<ul>
|
||
<li>Vahid Kazemi, Josephine Sullivan</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/264419855_One_Millisecond_Face_Alignment_with_an_Ensemble_of_Regression_Trees">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>The return of AdaBoost.MH: multi-class Hamming trees (ICLR
|
||
2014)</strong>
|
||
<ul>
|
||
<li>Balázs Kégl</li>
|
||
<li><a href="https://arxiv.org/pdf/1312.6086.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Diagnosis Determination: Decision Trees Optimizing
|
||
Simultaneously Worst and Expected Testing Cost (ICML 2014)</strong>
|
||
<ul>
|
||
<li>Ferdinando Cicalese, Eduardo Sany Laber, Aline Medeiros
|
||
Saettler</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/47ae/852f83b76f95b27ab00308d04f6020bdf71f.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning Multiple-Question Decision Trees for Cold-Start
|
||
Recommendation (WSDM 2013)</strong>
|
||
<ul>
|
||
<li>Mingxuan Sun, Fuxin Li, Joonseok Lee, Ke Zhou, Guy Lebanon, Hongyuan
|
||
Zha</li>
|
||
<li><a
|
||
href="http://www.joonseok.net/papers/coldstart.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-9">2013</h2>
|
||
<ul>
|
||
<li><strong>Weakly Supervised Learning of Image Partitioning Using
|
||
Decision Trees with Structured Split Criteria (ICCV 2013)</strong>
|
||
<ul>
|
||
<li>Christoph N. Straehle, Ullrich Köthe, Fred A. Hamprecht</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/6751340">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Revisiting Example Dependent Cost-Sensitive Learning with
|
||
Decision Trees (ICCV 2013)</strong>
|
||
<ul>
|
||
<li>Oisin Mac Aodha, Gabriel J. Brostow</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/6751133">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Conformal Prediction Using Decision Trees (ICDM
|
||
2013)</strong>
|
||
<ul>
|
||
<li>Ulf Johansson, Henrik Boström, Tuve Löfström</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/abstract/document/6729517">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Focal-Test-Based Spatial Decision Tree Learning: A Summary
|
||
of Results (ICDM 2013)</strong>
|
||
<ul>
|
||
<li>Zhe Jiang, Shashi Shekhar, Xun Zhou, Joseph K. Knight, Jennifer
|
||
Corcoran</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/f28e/df8d9eed76e4ce97cb6bd4182d590547be5e.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Top-down Particle Filtering for Bayesian Decision Trees
|
||
(ICML 2013)</strong>
|
||
<ul>
|
||
<li>Balaji Lakshminarayanan, Daniel M. Roy, Yee Whye Teh</li>
|
||
<li><a href="https://arxiv.org/abs/1303.0561">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Quickly Boosting Decision Trees - Pruning Underachieving
|
||
Features Early (ICML 2013)</strong>
|
||
<ul>
|
||
<li>Ron Appel, Thomas J. Fuchs, Piotr Dollár, Pietro Perona</li>
|
||
<li><a
|
||
href="http://proceedings.mlr.press/v28/appel13.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Knowledge Compilation for Model Counting: Affine Decision
|
||
Trees (IJCAI 2013)</strong>
|
||
<ul>
|
||
<li>Frédéric Koriche, Jean-Marie Lagniez, Pierre Marquis, Samuel
|
||
Thomas</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/262398921_Knowledge_Compilation_for_Model_Counting_Affine_Decision_Trees">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Understanding Variable Importances in Forests of Randomized
|
||
Trees (NIPS 2013)</strong>
|
||
<ul>
|
||
<li>Gilles Louppe, Louis Wehenkel, Antonio Sutera, Pierre Geurts</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/4928-understanding-variable-importances-in-forests-of-randomized-trees">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Regression-tree Tuning in a Streaming Setting (NIPS
|
||
2013)</strong>
|
||
<ul>
|
||
<li>Samory Kpotufe, Francesco Orabona</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/4898-regression-tree-tuning-in-a-streaming-setting">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning Max-Margin Tree Predictors (UAI 2013)</strong>
|
||
<ul>
|
||
<li>Ofer Meshi, Elad Eban, Gal Elidan, Amir Globerson</li>
|
||
<li><a
|
||
href="https://ttic.uchicago.edu/~meshi/papers/mtreen.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-10">2012</h2>
|
||
<ul>
|
||
<li><strong>Regression Tree Fields - An Efficient, Non-parametric
|
||
Approach to Image Labeling Problems (CVPR 2012)</strong>
|
||
<ul>
|
||
<li>Jeremy Jancsary, Sebastian Nowozin, Toby Sharp, Carsten Rother</li>
|
||
<li><a
|
||
href="http://www.nowozin.net/sebastian/papers/jancsary2012rtf.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>ConfDTree: Improving Decision Trees Using Confidence
|
||
Intervals (ICDM 2012)</strong>
|
||
<ul>
|
||
<li>Gilad Katz, Asaf Shabtai, Lior Rokach, Nir Ofek</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/6413889">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Improved Information Gain Estimates for Decision Tree
|
||
Induction (ICML 2012)</strong>
|
||
<ul>
|
||
<li>Sebastian Nowozin</li>
|
||
<li><a href="https://arxiv.org/abs/1206.4620">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning Partially Observable Models Using Temporally
|
||
Abstract Decision Trees (NIPS 2012)</strong>
|
||
<ul>
|
||
<li>Erik Talvitie</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/4662-learning-partially-observable-models-using-temporally-abstract-decision-trees">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Subtree Replacement in Decision Tree Simplification (SDM
|
||
2012)</strong>
|
||
<ul>
|
||
<li>Salvatore Ruggieri</li>
|
||
<li><a
|
||
href="http://pages.di.unipi.it/ruggieri/Papers/sdm2012.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-11">2011</h2>
|
||
<ul>
|
||
<li><strong>Incorporating Boosted Regression Trees into Ecological
|
||
Latent Variable Models (AAAI 2011)</strong>
|
||
<ul>
|
||
<li>Rebecca A. Hutchinson, Li-Ping Liu, Thomas G. Dietterich</li>
|
||
<li><a
|
||
href="https://www.aaai.org/ocs/index.php/AAAI/AAAI11/paper/viewFile/3711/4086">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Syntactic Decision Tree LMs: Random Selection or Intelligent
|
||
Design (EMNLP 2011)</strong>
|
||
<ul>
|
||
<li>Denis Filimonov, Mary P. Harper</li>
|
||
<li><a href="https://www.aclweb.org/anthology/D11-1064">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Tree Fields (ICCV 2011)</strong>
|
||
<ul>
|
||
<li>Sebastian Nowozin, Carsten Rother, Shai Bagon, Toby Sharp, Bangpeng
|
||
Yao, Pushmeet Kohli</li>
|
||
<li><a
|
||
href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/nrbsyk_iccv11.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Confidence in Predictions from Random Tree Ensembles (ICDM
|
||
2011)</strong>
|
||
<ul>
|
||
<li>Siddhartha Bhattacharyya</li>
|
||
<li><a
|
||
href="https://link.springer.com/article/10.1007/s10115-012-0600-z">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Speeding-Up Hoeffding-Based Regression Trees With Options
|
||
(ICML 2011)</strong>
|
||
<ul>
|
||
<li>Elena Ikonomovska, João Gama, Bernard Zenko, Saso Dzeroski</li>
|
||
<li><a
|
||
href="https://icml.cc/Conferences/2011/papers/349_icmlpaper.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Density Estimation Trees (KDD 2011)</strong>
|
||
<ul>
|
||
<li>Parikshit Ram, Alexander G. Gray</li>
|
||
<li><a href="https://mlpack.org/papers/det.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Bagging Gradient-Boosted Trees for High Precision, Low
|
||
Variance Ranking Models (SIGIR 2011)</strong>
|
||
<ul>
|
||
<li>Yasser Ganjisaffar, Rich Caruana, Cristina Videira Lopes</li>
|
||
<li><a
|
||
href="http://www.ccs.neu.edu/home/vip/teach/MLcourse/4_boosting/materials/bagging_lmbamart_jforests.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On the Complexity of Decision Making in Possibilistic
|
||
Decision Trees (UAI 2011)</strong>
|
||
<ul>
|
||
<li>Hélène Fargier, Nahla Ben Amor, Wided Guezguez</li>
|
||
<li><a
|
||
href="https://dslpitt.org/uai/papers/11/p203-fargier.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Adaptive Bootstrapping of Recommender Systems Using Decision
|
||
Trees (WSDM 2011)</strong>
|
||
<ul>
|
||
<li>Nadav Golbandi, Yehuda Koren, Ronny Lempel</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=1935910">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Parallel Boosted Regression Trees for Web Search Ranking
|
||
(WWW 2011)</strong>
|
||
<ul>
|
||
<li>Stephen Tyree, Kilian Q. Weinberger, Kunal Agrawal, Jennifer
|
||
Paykin</li>
|
||
<li><a
|
||
href="http://www.cs.cornell.edu/~kilian/papers/fr819-tyreeA.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-12">2010</h2>
|
||
<ul>
|
||
<li><strong>Discrimination Aware Decision Tree Learning (ICDM
|
||
2010)</strong>
|
||
<ul>
|
||
<li>Faisal Kamiran, Toon Calders, Mykola Pechenizkiy</li>
|
||
<li><a
|
||
href="https://www.win.tue.nl/~mpechen/publications/pubs/KamiranICDM2010.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Trees for Uplift Modeling (ICDM 2010)</strong>
|
||
<ul>
|
||
<li>Piotr Rzepakowski, Szymon Jaroszewicz</li>
|
||
<li><a
|
||
href="https://core.ac.uk/download/pdf/81899141.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning Markov Network Structure with Decision Trees (ICDM
|
||
2010)</strong>
|
||
<ul>
|
||
<li>Daniel Lowd, Jesse Davis</li>
|
||
<li><a
|
||
href="https://ix.cs.uoregon.edu/~lowd/icdm10lowd.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multivariate Dyadic Regression Trees for Sparse Learning
|
||
Problems (NIPS 2010)</strong>
|
||
<ul>
|
||
<li>Han Liu, Xi Chen</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/4178-multivariate-dyadic-regression-trees-for-sparse-learning-problems.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Fast and Accurate Gene Prediction by Decision Tree
|
||
Classification (SDM 2010)</strong>
|
||
<ul>
|
||
<li>Rong She, Jeffrey Shih-Chieh Chu, Ke Wang, Nansheng Chen</li>
|
||
<li><a
|
||
href="http://www.sfu.ca/~chenn/genBlastDT_sdm.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Robust Decision Tree Algorithm for Imbalanced Data Sets
|
||
(SDM 2010)</strong>
|
||
<ul>
|
||
<li>Wei Liu, Sanjay Chawla, David A. Cieslak, Nitesh V. Chawla</li>
|
||
<li><a
|
||
href="https://www3.nd.edu/~nchawla/papers/SDM10.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-13">2009</h2>
|
||
<ul>
|
||
<li><strong>Stochastic Gradient Boosted Distributed Decision Trees (CIKM
|
||
2009)</strong>
|
||
<ul>
|
||
<li>Jerry Ye, Jyh-Herng Chow, Jiang Chen, Zhaohui Zheng</li>
|
||
<li><a
|
||
href="https://dl.acm.org/citation.cfm?id=1646301">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Feature Selection for Ranking Using Boosted Trees (CIKM
|
||
2009)</strong>
|
||
<ul>
|
||
<li>Feng Pan, Tim Converse, David Ahn, Franco Salvetti, Gianluca
|
||
Donato</li>
|
||
<li><a
|
||
href="http://www.francosalvetti.com/cikm09_camera2.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Thai Word Segmentation with Hidden Markov Model and Decision
|
||
Tree (PAKDD 2009)</strong>
|
||
<ul>
|
||
<li>Poramin Bheganan, Richi Nayak, Yue Xu</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-642-01307-2_10">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Parameter Estimdation in Semi-Random Decision Tree
|
||
Ensembling on Streaming Data (PAKDD 2009)</strong>
|
||
<ul>
|
||
<li>Pei-Pei Li, Qianhui Liang, Xindong Wu, Xuegang Hu</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-642-01307-2_35">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>DTU: A Decision Tree for Uncertain Data (PAKDD
|
||
2009)</strong>
|
||
<ul>
|
||
<li>Biao Qin, Yuni Xia, Fang Li</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-642-01307-2_4">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-14">2008</h2>
|
||
<ul>
|
||
<li><strong>Predicting Future Decision Trees from Evolving Data (ICDM
|
||
2008)</strong>
|
||
<ul>
|
||
<li>Mirko Böttcher, Martin Spott, Rudolf Kruse</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/4781098">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Bayes Optimal Classification for Decision Trees (ICML
|
||
2008)</strong>
|
||
<ul>
|
||
<li>Siegfried Nijssen</li>
|
||
<li><a
|
||
href="http://icml2008.cs.helsinki.fi/papers/455.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A New Credit Scoring Method Based on Rough Sets and Decision
|
||
Tree (PAKDD 2008)</strong>
|
||
<ul>
|
||
<li>XiYue Zhou, Defu Zhang, Yi Jiang</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-540-68125-0_117">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Comparison of Different Off-Centered Entropies to Deal
|
||
with Class Imbalance for Decision Trees (PAKDD 2008)</strong>
|
||
<ul>
|
||
<li>Philippe Lenca, Stéphane Lallich, Thanh-Nghi Do, Nguyen-Khang
|
||
Pham</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-540-68125-0_59">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>BOAI: Fast Alternating Decision Tree Induction Based on
|
||
Bottom-Up Evaluation (PAKDD 2008)</strong>
|
||
<ul>
|
||
<li>Bishan Yang, Tengjiao Wang, Dongqing Yang, Lei Chang</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-540-68125-0_36">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A General Framework for Estimating Similarity of Datasets
|
||
and Decision Trees: Exploring Semantic Similarity of Decision Trees (SDM
|
||
2008)</strong>
|
||
<ul>
|
||
<li>Irene Ntoutsi, Alexandros Kalousis, Yannis Theodoridis</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/220907047_A_general_framework_for_estimating_similarity_of_datasets_and_decision_trees_exploring_semantic_similarity_of_decision_trees">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>ROC-tree: A Novel Decision Tree Induction Algorithm Based on
|
||
Receiver Operating Characteristics to Classify Gene Expression Data (SDM
|
||
2008)</strong>
|
||
<ul>
|
||
<li>M. Maruf Hossain, Md. Rafiul Hassan, James Bailey</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/bd80/db2f0903169b7611d34b2cc85f60a736375d.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-15">2007</h2>
|
||
<ul>
|
||
<li><strong>Tree-based Classifiers for Bilayer Video Segmentation (CVPR
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Pei Yin, Antonio Criminisi, John M. Winn, Irfan A. Essa</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/4270033">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Additive Groves of Regression Trees (ECML 2007)</strong>
|
||
<ul>
|
||
<li>Daria Sorokina, Rich Caruana, Mirek Riedewald</li>
|
||
<li><a
|
||
href="http://additivegroves.net/papers/groves.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Tree Instability and Active Learning (ECML
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Kenneth Dwyer, Robert Holte</li>
|
||
<li><a
|
||
href="https://webdocs.cs.ualberta.ca/~holte/Publications/ecml07.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Ensembles of Multi-Objective Decision Trees (ECML
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Dragi Kocev, Celine Vens, Jan Struyf, Saso Dzeroski</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-540-74958-5_61">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Seeing the Forest Through the Trees: Learning a
|
||
Comprehensible Model from an Ensemble (ECML 2007)</strong>
|
||
<ul>
|
||
<li>Anneleen Van Assche, Hendrik Blockeel</li>
|
||
<li><a
|
||
href="http://ftp.cs.wisc.edu/machine-learning/shavlik-group/ilp07wip/ilp07_assche.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Sample Compression Bounds for Decision Trees (ICML
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Mohak Shah</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.331.9136&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Tighter Error Bound for Decision Tree Learning Using PAC
|
||
Learnability (IJCAI 2007)</strong>
|
||
<ul>
|
||
<li>Chaithanya Pichuka, Raju S. Bapi, Chakravarthy Bhagvati, Arun K.
|
||
Pujari, Bulusu Lakshmana Deekshatulu</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/07/Papers/163.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Keep the Decision Tree and Estimate the Class Probabilities
|
||
Using its Decision Boundary (IJCAI 2007)</strong>
|
||
<ul>
|
||
<li>Isabelle Alvarez, Stephan Bernard, Guillaume Deffuant</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/07/Papers/104.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Real Boosting a la Carte with an Application to Boosting
|
||
Oblique Decision Tree (IJCAI 2007)</strong>
|
||
<ul>
|
||
<li>Claudia Henry, Richard Nock, Frank Nielsen</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/07/Papers/135.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Scalable Look-ahead Linear Regression Trees (KDD
|
||
2007)</strong>
|
||
<ul>
|
||
<li>David S. Vogel, Ognian Asparouhov, Tobias Scheffer</li>
|
||
<li><a
|
||
href="https://www.cs.uni-potsdam.de/ml/publications/kdd2007.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Mining Optimal Decision Trees from Itemset Lattices (KDD
|
||
2007)</strong>
|
||
<ul>
|
||
<li>Siegfried Nijssen, Élisa Fromont</li>
|
||
<li><a
|
||
href="https://hal.archives-ouvertes.fr/hal-00372011/document">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Hybrid Multi-group Privacy-Preserving Approach for
|
||
Building Decision Trees (PAKDD 2007)</strong>
|
||
<ul>
|
||
<li>Zhouxuan Teng, Wenliang Du</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/978-3-540-71701-0_30">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-16">2006</h2>
|
||
<ul>
|
||
<li><strong>Decision Tree Methods for Finding Reusable MDP Homomorphisms
|
||
(AAAI 2006)</strong>
|
||
<ul>
|
||
<li>Alicia P. Wolfe, Andrew G. Barto</li>
|
||
<li><a
|
||
href="https://www.aaai.org/Papers/AAAI/2006/AAAI06-085.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Fast Decision Tree Learning Algorithm (AAAI 2006)</strong>
|
||
<ul>
|
||
<li>Jiang Su, Harry Zhang</li>
|
||
<li><a
|
||
href="http://www.cs.unb.ca/~hzhang/publications/AAAI06.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Anytime Induction of Decision Trees: An Iterative
|
||
Improvement Approach (AAAI 2006)</strong>
|
||
<ul>
|
||
<li>Saher Esmeir, Shaul Markovitch</li>
|
||
<li><a
|
||
href="https://www.aaai.org/Papers/AAAI/2006/AAAI06-056.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>When a Decision Tree Learner Has Plenty of Time (AAAI
|
||
2006)</strong>
|
||
<ul>
|
||
<li>Saher Esmeir, Shaul Markovitch</li>
|
||
<li><a
|
||
href="https://www.aaai.org/Papers/AAAI/2006/AAAI06-259.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Trees for Functional Variables (ICDM 2006)</strong>
|
||
<ul>
|
||
<li>Suhrid Balakrishnan, David Madigan</li>
|
||
<li><a
|
||
href="http://archive.dimacs.rutgers.edu/Research/MMS/PAPERS/fdt17.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Cost-Sensitive Decision Tree Learning for Forensic
|
||
Classification (ECML 2006)</strong>
|
||
<ul>
|
||
<li>Jason V. Davis, Jungwoo Ha, Christopher J. Rossbach, Hany E.
|
||
Ramadan, Emmett Witchel</li>
|
||
<li><a
|
||
href="https://www.cs.utexas.edu/users/witchel/pubs/davis-ecml06.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Improving the Ranking Performance of Decision Trees (ECML
|
||
2006)</strong>
|
||
<ul>
|
||
<li>Bin Wang, Harry Zhang</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/11871842_44">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A General Framework for Accurate and Fast Regression by Data
|
||
Summarization in Random Decision Trees (KDD 2006)</strong>
|
||
<ul>
|
||
<li>Wei Fan, Joe McCloskey, Philip S. Yu</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.442.2004&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Constructing Decision Trees for Graph-Structured Data by
|
||
Chunkingless Graph-Based Induction (PAKDD 2006)</strong>
|
||
<ul>
|
||
<li>Phu Chien Nguyen, Kouzou Ohara, Akira Mogi, Hiroshi Motoda, Takashi
|
||
Washio</li>
|
||
<li><a
|
||
href="http://www.ar.sanken.osaka-u.ac.jp/~motoda/papers/pakdd06.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Variable Randomness in Decision Tree Ensembles (PAKDD
|
||
2006)</strong>
|
||
<ul>
|
||
<li>Fei Tony Liu, Kai Ming Ting</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/11731139_12">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Generalized Conditional Entropy and a Metric Splitting
|
||
Criterion for Decision Trees (PAKDD 2006)</strong>
|
||
<ul>
|
||
<li>Dan A. Simovici, Szymon Jaroszewicz</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/profile/Szymon_Jaroszewicz/publication/220895184_Generalized_Conditional_Entropy_and_a_Metric_Splitting_Criterion_for_Decision_Trees/links/0fcfd50b1267f7b868000000/Generalized-Conditional-Entropy-and-a-Metric-Splitting-Criterion-for-Decision-Trees.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Trees for Hierarchical Multilabel Classification: A
|
||
Case Study in Functional Genomics (PKDD 2006)</strong>
|
||
<ul>
|
||
<li>Hendrik Blockeel, Leander Schietgat, Jan Struyf, Saso Dzeroski,
|
||
Amanda Clare</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/11871637_7">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>k-Anonymous Decision Tree Induction (PKDD 2006)</strong>
|
||
<ul>
|
||
<li>Arik Friedman, Assaf Schuster, Ran Wolff</li>
|
||
<li><a
|
||
href="http://www.cs.technion.ac.il/~arikf/online-publications/kADET06.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-17">2005</h2>
|
||
<ul>
|
||
<li><strong>Representing Conditional Independence Using Decision Trees
|
||
(AAAI 2005)</strong>
|
||
<ul>
|
||
<li>Jiang Su, Harry Zhang</li>
|
||
<li><a
|
||
href="http://www.cs.unb.ca/~hzhang/publications/AAAI051SuJ.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Use of Expert Knowledge for Decision Tree Pruning (AAAI
|
||
2005)</strong>
|
||
<ul>
|
||
<li>Jingfeng Cai, John Durkin</li>
|
||
<li><a
|
||
href="http://www.aaai.org/Papers/AAAI/2005/SA05-009.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Model Selection in Omnivariate Decision Trees (ECML
|
||
2005)</strong>
|
||
<ul>
|
||
<li>Olcay Taner Yildiz, Ethem Alpaydin</li>
|
||
<li><a
|
||
href="https://www.cmpe.boun.edu.tr/~ethem/files/papers/yildiz_ecml05.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Combining Bias and Variance Reduction Techniques for
|
||
Regression Trees (ECML 2005)</strong>
|
||
<ul>
|
||
<li>Yuk Lai Suen, Prem Melville, Raymond J. Mooney</li>
|
||
<li><a
|
||
href="http://www.cs.utexas.edu/users/ml/papers/bv-ecml-05.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Simple Test Strategies for Cost-Sensitive Decision Trees
|
||
(ECML 2005)</strong>
|
||
<ul>
|
||
<li>Shengli Sheng, Charles X. Ling, Qiang Yang</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/3297582_Test_strategies_for_cost-sensitive_decision_trees">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Effective Estimation of Posterior Probabilities: Explaining
|
||
the Accuracy of Randomized Decision Tree Approaches (ICDM 2005)</strong>
|
||
<ul>
|
||
<li>Wei Fan, Ed Greengrass, Joe McCloskey, Philip S. Yu, Kevin
|
||
Drummey</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.218.9713&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Exploiting Informative Priors for Bayesian Classification
|
||
and Regression Trees (IJCAI 2005)</strong>
|
||
<ul>
|
||
<li>Nicos Angelopoulos, James Cussens</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/05/Papers/1013.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Ranking Cases with Decision Trees: a Geometric Method that
|
||
Preserves Intelligibility (IJCAI 2005)</strong>
|
||
<ul>
|
||
<li>Isabelle Alvarez, Stephan Bernard</li>
|
||
<li><a
|
||
href="https://www.ijcai.org/Proceedings/05/Papers/1502.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Maximizing Tree Diversity by Building Complete-Random
|
||
Decision Trees (PAKDD 2005)</strong>
|
||
<ul>
|
||
<li>Fei Tony Liu, Kai Ming Ting, Wei Fan</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.218.7805&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Hybrid Cost-Sensitive Decision Tree (PKDD 2005)</strong>
|
||
<ul>
|
||
<li>Shengli Sheng, Charles X. Ling</li>
|
||
<li><a
|
||
href="https://cling.csd.uwo.ca/papers/pkdd05a.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Tree2 - Decision Trees for Tree Structured Data (PKDD
|
||
2005)</strong>
|
||
<ul>
|
||
<li>Björn Bringmann, Albrecht Zimmermann</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/11564126_10">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Building Decision Trees on Records Linked through Key
|
||
References (SDM 2005)</strong>
|
||
<ul>
|
||
<li>Ke Wang, Yabo Xu, Philip S. Yu, Rong She</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.215.7181&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Tree Induction in High Dimensional, Hierarchically
|
||
Distributed Databases (SDM 2005)</strong>
|
||
<ul>
|
||
<li>Amir Bar-Or, Ran Wolff, Assaf Schuster, Daniel Keren</li>
|
||
<li><a
|
||
href="https://www.semanticscholar.org/paper/Decision-Tree-Induction-in-High-Dimensional%2C-Bar-Or-Wolff/90235fc35c27dae273681f7847c2b20ff37928a9">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosted Decision Trees for Word Recognition in Handwritten
|
||
Document Retrieval (SIGIR 2005)</strong>
|
||
<ul>
|
||
<li>Nicholas R. Howe, Toni M. Rath, R. Manmatha</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.152.1551&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-18">2004</h2>
|
||
<ul>
|
||
<li><strong>On the Optimality of Probability Estimation by Random
|
||
Decision Trees (AAAI 2004)</strong>
|
||
<ul>
|
||
<li>Wei Fan</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.447.2128&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Occam’s Razor and a Non-Syntactic Measure of Decision Tree
|
||
Complexity (AAAI 2004)</strong>
|
||
<ul>
|
||
<li>Goutam Paul</li>
|
||
<li><a
|
||
href="https://www.aaai.org/Papers/AAAI/2004/AAAI04-130.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Using Emerging Patterns and Decision Trees in Rare-Class
|
||
Classification (ICDM 2004)</strong>
|
||
<ul>
|
||
<li>Hamad Alhammady, Kotagiri Ramamohanarao</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/abstract/document/1410299">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Orthogonal Decision Trees (ICDM 2004)</strong>
|
||
<ul>
|
||
<li>Hillol Kargupta, Haimonti Dutta</li>
|
||
<li><a
|
||
href="https://www.csee.umbc.edu/~hillol/PUBS/odtree.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Improving the Reliability of Decision Tree and Naive Bayes
|
||
Learners (ICDM 2004)</strong>
|
||
<ul>
|
||
<li>David George Lindsay, Siân Cox</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.521.3127&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Communication Efficient Construction of Decision Trees Over
|
||
Heterogeneously Distributed Data (ICDM 2004)</strong>
|
||
<ul>
|
||
<li>Chris Giannella, Kun Liu, Todd Olsen, Hillol Kargupta</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.79.7119&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Tree Evolution Using Limited Number of Labeled Data
|
||
Items from Drifting Data Streams (ICDM 2004)</strong>
|
||
<ul>
|
||
<li>Wei Fan, Yi-an Huang, Philip S. Yu</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.218.9450&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Lookahead-based Algorithms for Anytime Induction of Decision
|
||
Trees (ICML 2004)</strong>
|
||
<ul>
|
||
<li>Saher Esmeir, Shaul Markovitch</li>
|
||
<li><a
|
||
href="http://www.cs.technion.ac.il/~shaulm/papers/pdf/Esmeir-Markovitch-icml2004.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Trees with Minimal Costs (ICML 2004)</strong>
|
||
<ul>
|
||
<li>Charles X. Ling, Qiang Yang, Jianning Wang, Shichao Zhang</li>
|
||
<li><a
|
||
href="https://icml.cc/Conferences/2004/proceedings/papers/136.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Training Conditional Random Fields via Gradient Tree
|
||
Boosting (ICML 2004)</strong>
|
||
<ul>
|
||
<li>Thomas G. Dietterich, Adam Ashenfelter, Yaroslav Bulatov</li>
|
||
<li><a
|
||
href="http://web.engr.oregonstate.edu/~tgd/publications/ml2004-treecrf.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Detecting Structural Metadata with Decision Trees and
|
||
Transformation-Based Learning (NAACL 2004)</strong>
|
||
<ul>
|
||
<li>Joungbum Kim, Sarah E. Schwarm, Mari Ostendorf</li>
|
||
<li><a href="https://www.aclweb.org/anthology/N04-1018">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On the Adaptive Properties of Decision Trees (NIPS
|
||
2004)</strong>
|
||
<ul>
|
||
<li>Clayton D. Scott, Robert D. Nowak</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/2625-on-the-adaptive-properties-of-decision-trees.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Metric Approach to Building Decision Trees Based on
|
||
Goodman-Kruskal Association Index (PAKDD 2004)</strong>
|
||
<ul>
|
||
<li>Dan A. Simovici, Szymon Jaroszewicz</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/2906289_A_Metric_Approach_to_Building_Decision_Trees_Based_on_Goodman-Kruskal_Association_Index">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-19">2003</h2>
|
||
<ul>
|
||
<li><strong>Rademacher Penalization over Decision Tree Prunings (ECML
|
||
2003)</strong>
|
||
<ul>
|
||
<li>Matti Kääriäinen, Tapio Elomaa</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/221112653_Rademacher_Penalization_over_Decision_Tree_Prunings">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Ensembles of Cascading Trees (ICDM 2003)</strong>
|
||
<ul>
|
||
<li>Jinyan Li, Huiqing Liu</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/4047523_Ensembles_of_cascading_trees">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Postprocessing Decision Trees to Extract Actionable
|
||
Knowledge (ICDM 2003)</strong>
|
||
<ul>
|
||
<li>Qiang Yang, Jie Yin, Charles X. Ling, Tielin Chen</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/b2c6/ff54c7aeefc70820ff04a8fc8b804012c504.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>K-D Decision Tree: An Accelerated and Memory Efficient
|
||
Nearest Neighbor Classifier (ICDM 2003)</strong>
|
||
<ul>
|
||
<li>Tomoyuki Shibata, Takekazu Kato, Toshikazu Wada</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/abstract/document/1250997">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Identifying Markov Blankets with Decision Tree Induction
|
||
(ICDM 2003)</strong>
|
||
<ul>
|
||
<li>Lewis J. Frey, Douglas H. Fisher, Ioannis Tsamardinos, Constantin F.
|
||
Aliferis, Alexander R. Statnikov</li>
|
||
<li><a
|
||
href="https://www.semanticscholar.org/paper/Identifying-Markov-Blankets-with-Decision-Tree-Frey-Fisher/1aa0b0ede22f3963c923ea320a8bed91ac5aafbf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Comparing Naive Bayes, Decision Trees, and SVM with AUC and
|
||
Accuracy (ICDM 2003)</strong>
|
||
<ul>
|
||
<li>Jin Huang, Jingjing Lu, Charles X. Ling</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/8a73/74b98a9d94b8c01e996e72340f86a4327869.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Lazy Decision Trees (ICML 2003)</strong>
|
||
<ul>
|
||
<li>Xiaoli Zhang Fern, Carla E. Brodley</li>
|
||
<li><a
|
||
href="https://www.aaai.org/Papers/ICML/2003/ICML03-026.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Tree with Better Ranking (ICML 2003)</strong>
|
||
<ul>
|
||
<li>Charles X. Ling, Robert J. Yan</li>
|
||
<li><a
|
||
href="https://www.aaai.org/Papers/ICML/2003/ICML03-064.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Skewing: An Efficient Alternative to Lookahead for Decision
|
||
Tree Induction (IJCAI 2003)</strong>
|
||
<ul>
|
||
<li>David Page, Soumya Ray</li>
|
||
<li><a
|
||
href="http://pages.cs.wisc.edu/~dpage/ijcai3.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Decision Tree Construction on Streaming Data (KDD
|
||
2003)</strong>
|
||
<ul>
|
||
<li>Ruoming Jin, Gagan Agrawal</li>
|
||
<li><a
|
||
href="http://web.cse.ohio-state.edu/~agrawal.28/p/sigkdd03.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>PaintingClass: Interactive Construction Visualization and
|
||
Exploration of Decision Trees (KDD 2003)</strong>
|
||
<ul>
|
||
<li>Soon Tee Teoh, Kwan-Liu Ma</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/220272011_PaintingClass_interactive_construction_visualization_and_exploration_of_decision_trees">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Accurate Decision Trees for Mining High-Speed Data Streams
|
||
(KDD 2003)</strong>
|
||
<ul>
|
||
<li>João Gama, Ricardo Rocha, Pedro Medas</li>
|
||
<li><a
|
||
href="http://staff.icar.cnr.it/manco/Teaching/2006/datamining/Esami2006/ArticoliSelezionatiDM/SEMINARI/Mining%20Data%20Streams/kdd03.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Near-Minimax Optimal Classification with Dyadic
|
||
Classification Trees (NIPS 2003)</strong>
|
||
<ul>
|
||
<li>Clayton D. Scott, Robert D. Nowak</li>
|
||
<li><a href="http://nowak.ece.wisc.edu/nips03.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Improving Performance of Decision Tree Algorithms with
|
||
Multi-edited Nearest Neighbor Rule (PAKDD 2003)</strong>
|
||
<ul>
|
||
<li>Chenzhou Ye, Jie Yang, Lixiu Yao, Nian-yi Chen</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/220895462_Improving_Performance_of_Decision_Tree_Algorithms_with_Multi-edited_Nearest_Neighbor_Rule">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Arbogodai: a New Approach for Decision Trees (PKDD
|
||
2003)</strong>
|
||
<ul>
|
||
<li>Djamel A. Zighed, Gilbert Ritschard, Walid Erray, Vasile-Marian
|
||
Scuturici</li>
|
||
<li><a
|
||
href="http://mephisto.unige.ch/pub/publications/gr/zig_rit_arbo_pkdd03.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Communication and Memory Efficient Parallel Decision Tree
|
||
Construction (SDM 2003)</strong>
|
||
<ul>
|
||
<li>Ruoming Jin, Gagan Agrawal</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.4.3059&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Tree Classification of Spatial Data Patterns from
|
||
Videokeratography using Zernicke Polynomials (SDM 2003)</strong>
|
||
<ul>
|
||
<li>Michael D. Twa, Srinivasan Parthasarathy, Thomas W. Raasch, Mark
|
||
Bullimore</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/220907147_Decision_Tree_Classification_of_Spatial_Data_Patterns_From_Videokeratography_Using_Zernike_Polynomials">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-20">2002</h2>
|
||
<ul>
|
||
<li><strong>Multiclass Alternating Decision Trees (ECML 2002)</strong>
|
||
<ul>
|
||
<li>Geoffrey Holmes, Bernhard Pfahringer, Richard Kirkby, Eibe Frank,
|
||
Mark A. Hall</li>
|
||
<li><a
|
||
href="https://www.cs.waikato.ac.nz/~bernhard/papers/ecml2002.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Heterogeneous Forests of Decision Trees (ICANN
|
||
2002)</strong>
|
||
<ul>
|
||
<li>Krzysztof Grabczewski, Wlodzislaw Duch</li>
|
||
<li><a
|
||
href="https://fizyka.umk.pl/publications/kmk/02forest.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Solving the Fragmentation Problem of Decision Trees by
|
||
Discovering Boundary Emerging Patterns (ICDM 2002)</strong>
|
||
<ul>
|
||
<li>Jinyan Li, Limsoon Wong</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/1184021">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Solving the Fragmentation Problem of Decision Trees by
|
||
Discovering Boundary Emerging Patterns (ICDM 2002)</strong>
|
||
<ul>
|
||
<li>Jinyan Li, Limsoon Wong</li>
|
||
<li><a
|
||
href="https://www.comp.nus.edu.sg/~wongls/psZ/decisionTreeandEP-2.ps">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning Decision Trees Using the Area Under the ROC Curve
|
||
(ICML 2002)</strong>
|
||
<ul>
|
||
<li>César Ferri, Peter A. Flach, José Hernández-Orallo</li>
|
||
<li><a
|
||
href="http://dmip.webs.upv.es/papers/ICML2002.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Finding an Optimal Gain-Ratio Subset-Split Test for a
|
||
Set-Valued Attribute in Decision Tree Induction (ICML 2002)</strong>
|
||
<ul>
|
||
<li>Fumio Takechi, Einoshin Suzuki</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/221346121_Finding_an_Optimal_Gain-Ratio_Subset-Split_Test_for_a_Set-Valued_Attribute_in_Decision_Tree_Induction">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficiently Mining Frequent Trees in a Forest (KDD
|
||
2002)</strong>
|
||
<ul>
|
||
<li>Mohammed Javeed Zaki</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.160.8511&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>SECRET: a Scalable Linear Regression Tree Algorithm (KDD
|
||
2002)</strong>
|
||
<ul>
|
||
<li>Alin Dobra, Johannes Gehrke</li>
|
||
<li><a
|
||
href="http://www.cs.cornell.edu/people/dobra/papers/secret-extended.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Instability of Decision Tree Classification Algorithms (KDD
|
||
2002)</strong>
|
||
<ul>
|
||
<li>Ruey-Hsia Li, Geneva G. Belford</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.12.8094&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Extracting Decision Trees From Trained Neural Networks (KDD
|
||
2002)</strong>
|
||
<ul>
|
||
<li>Olcay Boz</li>
|
||
<li><a
|
||
href="http://dspace.library.iitb.ac.in/jspui/bitstream/10054/1285/1/5664.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Dyadic Classification Trees via Structural Risk Minimization
|
||
(NIPS 2002)</strong>
|
||
<ul>
|
||
<li>Clayton D. Scott, Robert D. Nowak</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/2198-dyadic-classification-trees-via-structural-risk-minimization.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Approximate Splitting for Ensembles of Trees using
|
||
Histograms (SDM 2002)</strong>
|
||
<ul>
|
||
<li>Chandrika Kamath, Erick Cantú-Paz, David Littau</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/0855/0a94993a268e4e3e99c41e7e0ee43eabd993.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-21">2001</h2>
|
||
<ul>
|
||
<li><strong>Japanese Named Entity Recognition based on a Simple Rule
|
||
Generator and Decision Tree Learning (ACL 2001)</strong>
|
||
<ul>
|
||
<li>Hideki Isozaki</li>
|
||
<li><a href="https://www.aclweb.org/anthology/P01-1041">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Message Length as an Effective Ockham’s Razor in Decision
|
||
Tree Induction (AISTATS 2001)</strong>
|
||
<ul>
|
||
<li>Scott Needham, David L. Dowe</li>
|
||
<li><a
|
||
href="www.gatsby.ucl.ac.uk/aistats/aistats2001/files/needham122.ps">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>SQL Database Primitives for Decision Tree Classifiers (CIKM
|
||
2001)</strong>
|
||
<ul>
|
||
<li>Kai-Uwe Sattler, Oliver Dunemann</li>
|
||
<li><a
|
||
href="http://fusion.cs.uni-magdeburg.de/pubs/classprim.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Unified Framework for Evaluation Metrics in Classification
|
||
Using Decision Trees (ECML 2001)</strong>
|
||
<ul>
|
||
<li>Ricardo Vilalta, Mark Brodie, Daniel Oblinger, Irina Rish</li>
|
||
<li><a
|
||
href="https://scholar.harvard.edu/files/nkc/files/2015_framework_for_benefit_risk_assessment_value_in_health.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Backpropagation in Decision Trees for Regression (ECML
|
||
2001)</strong>
|
||
<ul>
|
||
<li>Victor Medina-Chico, Alberto Suárez, James F. Lutsko</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-44795-4_30">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Consensus Decision Trees: Using Consensus Hierarchical
|
||
Clustering for Data Relabelling and Reduction (ECML 2001)</strong>
|
||
<ul>
|
||
<li>Branko Kavsek, Nada Lavrac, Anuska Ferligoj</li>
|
||
<li><a
|
||
href="https://link.springer.com/content/pdf/10.1007/3-540-44795-4_22.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Mining Decision Trees from Data Streams in a Mobile
|
||
Environment (ICDM 2001)</strong>
|
||
<ul>
|
||
<li>Hillol Kargupta, Byung-Hoon Park</li>
|
||
<li><a
|
||
href="https://ieeexplore.ieee.org/document/989530">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Determination of Dynamic Split Points in a
|
||
Decision Tree (ICDM 2001)</strong>
|
||
<ul>
|
||
<li>David Maxwell Chickering, Christopher Meek, Robert Rounthwaite</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/3587/a245c34ea415b205a903bde3220eb533d1a7.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Comparison of Stacking with Meta Decision Trees to
|
||
Bagging, Boosting, and Stacking with other Methods (ICDM 2001)</strong>
|
||
<ul>
|
||
<li>Bernard Zenko, Ljupco Todorovski, Saso Dzeroski</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.23.3118&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Algorithms for Decision Tree Cross-Validation
|
||
(ICML 2001)</strong>
|
||
<ul>
|
||
<li>Hendrik Blockeel, Jan Struyf</li>
|
||
<li><a
|
||
href="http://www.jmlr.org/papers/volume3/blockeel02a/blockeel02a.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Bias Correction in Classification Tree Construction (ICML
|
||
2001)</strong>
|
||
<ul>
|
||
<li>Alin Dobra, Johannes Gehrke</li>
|
||
<li><a
|
||
href="http://www.cs.cornell.edu/people/dobra/papers/icml2001-bias.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Breeding Decision Trees Using Evolutionary Techniques (ICML
|
||
2001)</strong>
|
||
<ul>
|
||
<li>Athanassios Papagelis, Dimitrios Kalles</li>
|
||
<li><a
|
||
href="http://www.gatree.com/data/BreedinDecisioTreeUsinEvo.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Obtaining Calibrated Probability Estimates from Decision
|
||
Trees and Naive Bayesian Classifiers (ICML 2001)</strong>
|
||
<ul>
|
||
<li>Bianca Zadrozny, Charles Elkan</li>
|
||
<li><a
|
||
href="http://cseweb.ucsd.edu/~elkan/calibrated.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Temporal Decision Trees or the lazy ECU vindicated (IJCAI
|
||
2001)</strong>
|
||
<ul>
|
||
<li>Luca Console, Claudia Picardi, Daniele Theseider Dupré</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/220815333_Temporal_Decision_Trees_or_the_lazy_ECU_vindicated">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Data Mining Criteria for Tree-based Regression and
|
||
Classification (KDD 2001)</strong>
|
||
<ul>
|
||
<li>Andreas Buja, Yung-Seop Lee</li>
|
||
<li><a
|
||
href="https://repository.upenn.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=1406&context=statistics_papers">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Decision Tree of Bigrams is an Accurate Predictor of Word
|
||
Sense (NAACL 2001)</strong>
|
||
<ul>
|
||
<li>Ted Pedersen</li>
|
||
<li><a href="https://www.aclweb.org/anthology/N01-1011">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Rule Reduction over Numerical Attributes in Decision Tree
|
||
Using Multilayer Perceptron (PAKDD 2001)</strong>
|
||
<ul>
|
||
<li>DaeEun Kim, Jaeho Lee</li>
|
||
<li><a href="https://dl.acm.org/citation.cfm?id=693490">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Scalable Algorithm for Rule Post-pruning of Large Decision
|
||
Trees (PAKDD 2001)</strong>
|
||
<ul>
|
||
<li>Trong Dung Nguyen, Tu Bao Ho, Hiroshi Shimodaira</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-45357-1_49">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Optimizing the Induction of Alternating Decision Trees
|
||
(PAKDD 2001)</strong>
|
||
<ul>
|
||
<li>Bernhard Pfahringer, Geoffrey Holmes, Richard Kirkby</li>
|
||
<li><a
|
||
href="https://www.researchgate.net/publication/33051701_Optimizing_the_Induction_of_Alternating_Decision_Trees">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Interactive Construction of Decision Trees (PAKDD
|
||
2001)</strong>
|
||
<ul>
|
||
<li>Jianchao Han, Nick Cercone</li>
|
||
<li><a
|
||
href="https://pure.tue.nl/ws/files/3522084/672434611234867.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Bloomy Decision Tree for Multi-objective Classification
|
||
(PKDD 2001)</strong>
|
||
<ul>
|
||
<li>Einoshin Suzuki, Masafumi Gotoh, Yuta Choki</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-44794-6_36">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Fourier Analysis Based Approach to Learning Decision Trees
|
||
in a Distributed Environment (SDM 2001)</strong>
|
||
<ul>
|
||
<li>Byung-Hoon Park, Rajeev Ayyagari, Hillol Kargupta</li>
|
||
<li><a
|
||
href="https://archive.siam.org/meetings/sdm01/pdf/sdm01_19.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-22">2000</h2>
|
||
<ul>
|
||
<li><strong>Intuitive Representation of Decision Trees Using General
|
||
Rules and Exceptions (AAAI 2000)</strong>
|
||
<ul>
|
||
<li>Bing Liu, Minqing Hu, Wynne Hsu</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/e284/96551e595f1850a53f93affa98919147712f.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Tagging Unknown Proper Names Using Decision Trees (ACL
|
||
2000)</strong>
|
||
<ul>
|
||
<li>Frédéric Béchet, Alexis Nasr, Franck Genet</li>
|
||
<li><a href="https://www.aclweb.org/anthology/P00-1011">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Clustering Through Decision Tree Construction (CIKM
|
||
2000)</strong>
|
||
<ul>
|
||
<li>Bing Liu, Yiyuan Xia, Philip S. Yu</li>
|
||
<li><a href="https://dl.acm.org/citation.cfm?id=354775">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Handling Continuous-Valued Attributes in Decision Tree with
|
||
Neural Network Modelling (ECML 2000)</strong>
|
||
<ul>
|
||
<li>DaeEun Kim, Jaeho Lee</li>
|
||
<li><a
|
||
href="https://link.springer.com/content/pdf/10.1007/3-540-45164-1_22.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Investigation and Reduction of Discretization Variance in
|
||
Decision Tree Induction (ECML 2000)</strong>
|
||
<ul>
|
||
<li>Pierre Geurts, Louis Wehenkel</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-45164-1_17">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Nonparametric Regularization of Decision Trees (ECML
|
||
2000)</strong>
|
||
<ul>
|
||
<li>Tobias Scheffer</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-45164-1_36">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Exploiting the Cost (In)sensitivity of Decision Tree
|
||
Splitting Criteria (ICML 2000)</strong>
|
||
<ul>
|
||
<li>Chris Drummond, Robert C. Holte</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/160e/21c3acc925b60dc040cb1705e58bb166b045.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Multi-agent Q-learning and Regression Trees for Automated
|
||
Pricing Decisions (ICML 2000)</strong>
|
||
<ul>
|
||
<li>Manu Sridharan, Gerald Tesauro</li>
|
||
<li><a
|
||
href="https://manu.sridharan.net/files/icml00.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Growing Decision Trees on Support-less Association Rules
|
||
(KDD 2000)</strong>
|
||
<ul>
|
||
<li>Ke Wang, Senqiang Zhou, Yu He</li>
|
||
<li><a
|
||
href="https://www2.cs.sfu.ca/~wangk/pub/kdd002.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Algorithms for Constructing Decision Trees with
|
||
Constraints (KDD 2000)</strong>
|
||
<ul>
|
||
<li>Minos N. Garofalakis, Dongjoon Hyun, Rajeev Rastogi, Kyuseok
|
||
Shim</li>
|
||
<li><a
|
||
href="http://www.softnet.tuc.gr/~minos/Papers/kdd00-cam.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Interactive Visualization in Mining Large Decision Trees
|
||
(PAKDD 2000)</strong>
|
||
<ul>
|
||
<li>Trong Dung Nguyen, Tu Bao Ho, Hiroshi Shimodaira</li>
|
||
<li><a
|
||
href="https://link.springer.com/content/pdf/10.1007/3-540-45571-X_40.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>VQTree: Vector Quantization for Decision Tree Induction
|
||
(PAKDD 2000)</strong>
|
||
<ul>
|
||
<li>Shlomo Geva, Lawrence Buckingham</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007%2F3-540-45571-X_41">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Some Enhencements of Decision Tree Bagging (PKDD
|
||
2000)</strong>
|
||
<ul>
|
||
<li>Pierre Geurts</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-45372-5_14">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Combining Multiple Models with Meta Decision Trees (PKDD
|
||
2000)</strong>
|
||
<ul>
|
||
<li>Ljupco Todorovski, Saso Dzeroski</li>
|
||
<li><a href="http://kt.ijs.si/bernard/mdts/pub01.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Induction of Multivariate Decision Trees by Using Dipolar
|
||
Criteria (PKDD 2000)</strong>
|
||
<ul>
|
||
<li>Leon Bobrowski, Marek Kretowski</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-45372-5_33">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Tree Toolkit: A Component-Based Library of Decision
|
||
Tree Algorithms (PKDD 2000)</strong>
|
||
<ul>
|
||
<li>Nikos Drossos, Athanassios Papagelis, Dimitrios Kalles</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007/3-540-45372-5_40">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-23">1999</h2>
|
||
<ul>
|
||
<li><strong>Modeling Decision Tree Performance with the Power Law
|
||
(AISTATS 1999)</strong>
|
||
<ul>
|
||
<li>Lewis J. Frey, Douglas H. Fisher</li>
|
||
<li><a
|
||
href="https://www.microsoft.com/en-us/research/wp-content/uploads/2017/01/ModelingTree.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Causal Mechanisms and Classification Trees for Predicting
|
||
Chemical Carcinogens (AISTATS 1999)</strong>
|
||
<ul>
|
||
<li>Louis Anthony Cox Jr.</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/0d7b/1d55c5abfd024aacf645c66d0c90c283814e.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>POS Tags and Decision Trees for Language Modeling (EMNLP
|
||
1999)</strong>
|
||
<ul>
|
||
<li>Peter A. Heeman</li>
|
||
<li><a href="https://www.aclweb.org/anthology/W99-0617">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Lazy Bayesian Rules: A Lazy Semi-Naive Bayesian Learning
|
||
Technique Competitive to Boosting Decision Trees (ICML 1999)</strong>
|
||
<ul>
|
||
<li>Zijian Zheng, Geoffrey I. Webb, Kai Ming Ting</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/067e/86836ddbcb5e2844e955c16e058366a18c77.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>The Alternating Decision Tree Learning Algorithm (ICML
|
||
1999)</strong>
|
||
<ul>
|
||
<li>Yoav Freund, Llew Mason</li>
|
||
<li><a
|
||
href="https://cseweb.ucsd.edu/~yfreund/papers/atrees.pdf">[Paper]</a></li>
|
||
<li><a href="https://github.com/rajanil/mkboost">[Code]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting with Multi-Way Branching in Decision Trees (NIPS
|
||
1999)</strong>
|
||
<ul>
|
||
<li>Yishay Mansour, David A. McAllester</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/1659-boosting-with-multi-way-branching-in-decision-trees.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-24">1998</h2>
|
||
<ul>
|
||
<li><strong>Learning Sorting and Decision Trees with POMDPs (ICML
|
||
1998)</strong>
|
||
<ul>
|
||
<li>Blai Bonet, Hector Geffner</li>
|
||
<li><a
|
||
href="https://bonetblai.github.io/reports/icml98-learning.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Using a Permutation Test for Attribute Selection in Decision
|
||
Trees (ICML 1998)</strong>
|
||
<ul>
|
||
<li>Eibe Frank, Ian H. Witten</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/9aa9/21b0203e06e98b49bf726a33e124f4310ea3.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A Fast and Bottom-Up Decision Tree Pruning Algorithm with
|
||
Near-Optimal Generalization (ICML 1998)</strong>
|
||
<ul>
|
||
<li>Michael J. Kearns, Yishay Mansour</li>
|
||
<li><a
|
||
href="https://www.cis.upenn.edu/~mkearns/papers/pruning.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-25">1997</h2>
|
||
<ul>
|
||
<li><strong>Pessimistic Decision Tree Pruning Based Continuous-Time
|
||
(ICML 1997)</strong>
|
||
<ul>
|
||
<li>Yishay Mansour</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/b6fc/e37612db10a9756b904b5e79e1144ca12574.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>PAC Learning with Constant-Partition Classification Noise
|
||
and Applications to Decision Tree Induction (ICML 1997)</strong>
|
||
<ul>
|
||
<li>Scott E. Decatur</li>
|
||
<li><a
|
||
href="https://www.semanticscholar.org/paper/PAC-Learning-with-Constant-Partition-Classification-Decatur/dd205073aeb512ecd1e823b35f556058fdeea5e0">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Option Decision Trees with Majority Votes (ICML
|
||
1997)</strong>
|
||
<ul>
|
||
<li>Ron Kohavi, Clayton Kunz</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/383b/381d1ac0bb41ec595e0d1603ed642809eb86.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Integrating Feature Construction with Multiple Classifiers
|
||
in Decision Tree Induction (ICML 1997)</strong>
|
||
<ul>
|
||
<li>Ricardo Vilalta, Larry A. Rendell</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/1f73/d9d409a75d16871cfa1182ac72b37c839d86.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Functional Models for Regression Tree Leaves (ICML
|
||
1997)</strong>
|
||
<ul>
|
||
<li>Luís Torgo</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/48e4/b3187ca234308e97e1ac0cab84222c603bdd.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>The Effects of Training Set Size on Decision Tree Complexity
|
||
(ICML 1997)</strong>
|
||
<ul>
|
||
<li>Tim Oates, David D. Jensen</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/e003/9dbdec3bd4cfbb3273b623fbed2d6b2f0cc9.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Unsupervised On-line Learning of Decision Trees for
|
||
Hierarchical Data Analysis (NIPS 1997)</strong>
|
||
<ul>
|
||
<li>Marcus Held, Joachim M. Buhmann</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/1479-unsupervised-on-line-learning-of-decision-trees-for-hierarchical-data-analysis.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Data-Dependent Structural Risk Minimization for Perceptron
|
||
Decision Trees (NIPS 1997)</strong>
|
||
<ul>
|
||
<li>John Shawe-Taylor, Nello Cristianini</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/1359-data-dependent-structural-risk-minimization-for-perceptron-decision-trees">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Generalization in Decision Trees and DNF: Does Size Matter
|
||
(NIPS 1997)</strong>
|
||
<ul>
|
||
<li>Mostefa Golea, Peter L. Bartlett, Wee Sun Lee, Llew Mason</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/1340-generalization-in-decision-trees-and-dnf-does-size-matter.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-26">1996</h2>
|
||
<ul>
|
||
<li><strong>Second Tier for Decision Trees (ICML 1996)</strong>
|
||
<ul>
|
||
<li>Miroslav Kubat</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/b619/7c531b1c83dfaa52563449f9b8248cc68c5a.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Non-Linear Decision Trees - NDT (ICML 1996)</strong>
|
||
<ul>
|
||
<li>Andreas Ittner, Michael Schlosser</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.85.2133&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Learning Relational Concepts with Decision Trees (ICML
|
||
1996)</strong>
|
||
<ul>
|
||
<li>Peter Geibel, Fritz Wysotzki</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/32f1/78d7266fee779257b87ac8f948951db57d1e.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-27">1995</h2>
|
||
<ul>
|
||
<li><strong>A Hill-Climbing Approach for Optimizing Classification Trees
|
||
(AISTATS 1995)</strong>
|
||
<ul>
|
||
<li>Xiaorong Sun, Steve Y. Chiu, Louis Anthony Cox Jr.</li>
|
||
<li><a
|
||
href="https://link.springer.com/chapter/10.1007%2F978-1-4612-2404-4_11">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>An Exact Probability Metric for Decision Tree Splitting
|
||
(AISTATS 1995)</strong>
|
||
<ul>
|
||
<li>J. Kent Martin</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.48.6378&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On Pruning and Averaging Decision Trees (ICML 1995)</strong>
|
||
<ul>
|
||
<li>Jonathan J. Oliver, David J. Hand</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.53.6733&rep=rep1&type=pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>On Handling Tree-Structured Attributed in Decision Tree
|
||
Learning (ICML 1995)</strong>
|
||
<ul>
|
||
<li>Hussein Almuallim, Yasuhiro Akiba, Shigeo Kaneda</li>
|
||
<li><a
|
||
href="https://www.sciencedirect.com/science/article/pii/B9781558603776500116">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Retrofitting Decision Tree Classifiers Using Kernel Density
|
||
Estimation (ICML 1995)</strong>
|
||
<ul>
|
||
<li>Padhraic Smyth, Alexander G. Gray, Usama M. Fayyad</li>
|
||
<li><a
|
||
href="https://pdfs.semanticscholar.org/3a05/8ab505f096b23962591bb14e495a543aa2a1.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Increasing the Performance and Consistency of Classification
|
||
Trees by Using the Accuracy Criterion at the Leaves (ICML 1995)</strong>
|
||
<ul>
|
||
<li>David J. Lubinsky</li>
|
||
<li><a
|
||
href="https://www.sciencedirect.com/science/article/pii/B9781558603776500530">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Efficient Algorithms for Finding Multi-way Splits for
|
||
Decision Trees (ICML 1995)</strong>
|
||
<ul>
|
||
<li>Truxton Fulton, Simon Kasif, Steven Salzberg</li>
|
||
<li><a
|
||
href="https://www.sciencedirect.com/science/article/pii/B9781558603776500384">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Theory and Applications of Agnostic PAC-Learning with Small
|
||
Decision Trees (ICML 1995)</strong>
|
||
<ul>
|
||
<li>Peter Auer, Robert C. Holte, Wolfgang Maass</li>
|
||
<li><a href="https://igi-web.tugraz.at/PDF/77.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Boosting Decision Trees (NIPS 1995)</strong>
|
||
<ul>
|
||
<li>Harris Drucker, Corinna Cortes</li>
|
||
<li><a
|
||
href="http://papers.nips.cc/paper/1059-boosting-decision-trees.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Using Pairs of Data-Points to Define Splits for Decision
|
||
Trees (NIPS 1995)</strong>
|
||
<ul>
|
||
<li>Geoffrey E. Hinton, Michael Revow</li>
|
||
<li><a
|
||
href="https://www.cs.toronto.edu/~hinton/absps/bcart.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>A New Pruning Method for Solving Decision Trees and Game
|
||
Trees (UAI 1995)</strong>
|
||
<ul>
|
||
<li>Prakash P. Shenoy</li>
|
||
<li><a href="https://arxiv.org/abs/1302.4981">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-28">1994</h2>
|
||
<ul>
|
||
<li><strong>A Statistical Approach to Decision Tree Modeling (ICML
|
||
1994)</strong>
|
||
<ul>
|
||
<li>Michael I. Jordan</li>
|
||
<li><a
|
||
href="https://www.sciencedirect.com/science/article/pii/B9781558603356500519">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>In Defense of C4.5: Notes Learning One-Level Decision Trees
|
||
(ICML 1994)</strong>
|
||
<ul>
|
||
<li>Tapio Elomaa</li>
|
||
<li><a
|
||
href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.9386">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>An Improved Algorithm for Incremental Induction of Decision
|
||
Trees (ICML 1994)</strong>
|
||
<ul>
|
||
<li>Paul E. Utgoff</li>
|
||
<li><a
|
||
href="https://www.sciencedirect.com/science/article/pii/B9781558603356500465">[Paper]</a></li>
|
||
</ul></li>
|
||
<li><strong>Decision Tree Parsing using a Hidden Derivation Model (NAACL
|
||
1994)</strong>
|
||
<ul>
|
||
<li>Frederick Jelinek, John D. Lafferty, David M. Magerman, Robert L.
|
||
Mercer, Adwait Ratnaparkhi, Salim Roukos</li>
|
||
<li><a
|
||
href="http://acl-arc.comp.nus.edu.sg/archives/acl-arc-090501d3/data/pdf/anthology-PDF/H/H94/H94-1052.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-29">1993</h2>
|
||
<ul>
|
||
<li><strong>Using Decision Trees to Improve Case-Based Learning (ICML
|
||
1993)</strong>
|
||
<ul>
|
||
<li>Claire Cardie</li>
|
||
<li><a
|
||
href="https://www.cs.cornell.edu/home/cardie/papers/ml-93.ps">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-30">1991</h2>
|
||
<ul>
|
||
<li><strong>Context Dependent Modeling of Phones in Continuous Speech
|
||
Using Decision Trees (NAACL 1991)</strong>
|
||
<ul>
|
||
<li>Lalit R. Bahl, Peter V. de Souza, P. S. Gopalakrishnan, David
|
||
Nahamoo, Michael Picheny</li>
|
||
<li><a
|
||
href="https://www.aclweb.org/anthology/H91-1051.pdf">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-31">1989</h2>
|
||
<ul>
|
||
<li><strong>Performance Comparisons Between Backpropagation Networks and
|
||
Classification Trees on Three Real-World Applications (NIPS
|
||
1989)</strong>
|
||
<ul>
|
||
<li>Les E. Atlas, Ronald A. Cole, Jerome T. Connor, Mohamed A.
|
||
El-Sharkawi, Robert J. Marks II, Yeshwant K. Muthusamy, Etienne
|
||
Barnard</li>
|
||
<li><a
|
||
href="https://papers.nips.cc/paper/203-performance-comparisons-between-backpropagation-networks-and-classification-trees-on-three-real-world-applications">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-32">1988</h2>
|
||
<ul>
|
||
<li><strong>Multiple Decision Trees (UAI 1988)</strong>
|
||
<ul>
|
||
<li>Suk Wah Kwok, Chris Carter</li>
|
||
<li><a href="https://arxiv.org/abs/1304.2363">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<h2 id="section-33">1987</h2>
|
||
<ul>
|
||
<li><strong>Decision Tree Induction Systems: A Bayesian Analysis (UAI
|
||
1987)</strong>
|
||
<ul>
|
||
<li>Wray L. Buntine</li>
|
||
<li><a href="https://arxiv.org/abs/1304.2732">[Paper]</a></li>
|
||
</ul></li>
|
||
</ul>
|
||
<hr />
<p><strong>License</strong></p>
<ul>
<li><a
href="https://github.com/benedekrozemberczki/awesome-decision-tree-papers/blob/master/LICENSE">CC0
Universal</a></li>
</ul>