InfiniteBoosting

Bagging and boosting are popular ensemble methods. While neither is without drawbacks, their strengths are clearly defined: bagging reduces variance, while boosting reduces bias (but can increase variance and overfit). So, what if we combined the two?

Alex Rogozhnikov and Tatiana Likhomanenko from the Higher School of Economics in Moscow developed a new algorithm for this purpose, called InfiniteBoosting. It constructs an arbitrarily large ensemble of trees and trains every weak learner in a gradient-boosting fashion. Thanks to this hybrid design, it builds a classifier that reduces bias at each training step while keeping overfitting under control.
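To make the idea concrete, here is a minimal sketch of the bagging-plus-boosting combination for squared-error regression. It is not the authors' reference implementation: the weak learner (a shallow `DecisionTreeRegressor`), the fixed `capacity` parameter, and the bootstrap sampling are simplifying assumptions standing in for the paper's adaptive capacity scheme.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def infinite_boosting(X, y, n_trees=100, capacity=1.0, max_depth=3, seed=0):
    """Sketch of the InfiniteBoosting idea: boosting-style residual fitting
    combined with a bagging-style running average over bootstrapped trees."""
    rng = np.random.default_rng(seed)
    trees, ensemble_pred = [], np.zeros(len(y))
    for t in range(1, n_trees + 1):
        # Like boosting: fit the next tree to the residuals of the
        # current (capacity-scaled) ensemble prediction.
        residuals = y - capacity * ensemble_pred
        # Like bagging: train each weak learner on a bootstrap sample.
        idx = rng.integers(0, len(y), size=len(y))
        tree = DecisionTreeRegressor(max_depth=max_depth, random_state=seed + t)
        tree.fit(X[idx], residuals[idx])
        trees.append(tree)
        # Keep a running *average* over trees instead of a growing sum,
        # so the ensemble stays bounded however many trees are added.
        ensemble_pred += (tree.predict(X) - ensemble_pred) / t

    def predict(X_new):
        return capacity * np.mean([tr.predict(X_new) for tr in trees], axis=0)

    return predict
```

Usage is a single call, e.g. `predict = infinite_boosting(X_train, y_train)` followed by `y_hat = predict(X_test)`. The averaging step is what allows the ensemble to grow arbitrarily large: each extra tree refines the fit rather than inflating the prediction.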

Congratulations to Alex Rogozhnikov and Tatiana Likhomanenko on this success.

For the full article, please see here.
