Major research topic

Randomized decision trees and kernel logistic regression: sparsity and decomposition methods

Abstract

In machine learning, decision trees are widely used thanks to both their interpretability and their good predictive accuracy. A decision tree has a flowchart-like structure in which each branch node represents a "test" on the attributes and each leaf node represents the decision reached once the tests along the path from the root have been evaluated. Decision trees can be used for both classification and regression tasks. Since designing optimal decision trees is NP-hard, classical methods rely on greedy approaches that grow the tree one locally optimal split at a time. Given the substantial progress in mixed-integer linear programming (MILP) and nonlinear programming, however, optimal decision trees have recently attracted renewed attention. Kernel Logistic Regression (KLR) is a well-known machine learning technique that extends the standard logistic regression model to data points that are not linearly separable in the original input space. However, KLR does not share the support vector property of SVMs, whereby only a subset of the data points is involved in the prediction. In my project I am interested in optimization methods for randomized decision trees for classification and regression tasks, as well as for kernel logistic regression, with a focus on sparsity and decomposition methods.
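As a concrete illustration of the greedy approach mentioned above (my own sketch, not code from the project): a minimal CART-style split search that scans every feature and threshold and keeps the split with the lowest weighted Gini impurity. Greedy tree builders repeat this locally optimal choice node by node, with no guarantee of a globally optimal tree. The toy data and function names are illustrative assumptions.

```python
import numpy as np

def gini(y):
    # Gini impurity of a binary 0/1 label vector.
    if len(y) == 0:
        return 0.0
    p = np.mean(y)
    return 2.0 * p * (1.0 - p)

def best_split(X, y):
    # One step of a greedy tree builder: try every feature j and every
    # observed threshold t, keep the (j, t) with lowest weighted impurity.
    n, d = X.shape
    best = (None, None, gini(y))
    for j in range(d):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            score = (left.mean() * gini(y[left])
                     + (1 - left.mean()) * gini(y[~left]))
            if score < best[2]:
                best = (j, t, score)
    return best  # (feature index, threshold, impurity after split)

rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 2))
y = (X[:, 0] > 0.5).astype(int)   # ground truth is a single axis-aligned test
print(best_split(X, y))           # recovers feature 0, threshold near 0.5
```

A full greedy builder simply recurses on the two resulting subsets; the optimal-tree formulations discussed above instead decide all splits jointly, e.g. through a MILP model.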
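To make the sparsity remark concrete, here is a minimal sketch (again my illustration, with assumed RBF kernel, toy data, and hyperparameters) of kernel logistic regression fitted by gradient descent on its dual coefficients. In contrast with an SVM dual solution, the learned coefficient vector is generically dense, so every training point participates in every prediction.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def fit_klr(X, y, gamma=1.0, lam=1e-2, lr=0.1, iters=3000):
    # Dual KLR: f(x) = sum_i alpha_i k(x, x_i). Minimizes the mean logistic
    # loss plus (lam/2) * alpha' K alpha by plain gradient descent.
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(n)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-K @ alpha))      # predicted probabilities
        grad = K @ (p - y) / n + lam * K @ alpha  # gradient w.r.t. alpha
        alpha -= lr * grad
    return alpha, K

# Toy non-linearly-separable data: label 1 inside a disc, 0 outside.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(float)

alpha, K = fit_klr(X, y)
# Unlike an SVM dual solution, essentially every alpha_i is nonzero,
# so all 200 training points are involved in each prediction.
print("nonzero dual coefficients:", np.sum(np.abs(alpha) > 1e-6), "of", len(alpha))
```

This dense dual solution is exactly what motivates the sparsity-inducing formulations and decomposition methods the project investigates: if most coefficients can be driven to zero, prediction cost and model size shrink accordingly.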
