Abstract: This chapter contains a tutorial illustrating bagging and boosting in the context of regression models. The first base regression method used in this tutorial is the classical Multiple Linear Regression (MLR) algorithm, implemented in Weka in the class classifiers/functions under the name LinearRegression. The bagging procedure consists of three steps: generating several samples from the original training set by drawing compounds with equal probability with replacement (so-called bootstrapping); building a base-learner (MLR in our case) model on each of the samples; and averaging the values predicted for test compounds over the whole ensemble of models. This procedure is implemented in Weka by means of a special “meta-classifier” in the class classifiers/meta under the name Bagging. Additive regression is the Weka implementation of the Gradient Boosting ensemble learning method, which enhances the performance of a base regression method.
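The two ensemble procedures described above can be illustrated with a minimal Python sketch (not Weka code): `fit_simple_linear` is a hypothetical single-feature stand-in for the MLR base learner, `bagging` follows the bootstrap-fit-average recipe, and `additive_regression` fits each new base model to the current residuals with a shrinkage factor, as in gradient boosting for squared loss. All names here are illustrative, not part of the Weka API.

```python
import random

def fit_simple_linear(xs, ys):
    # Closed-form least squares for y = a + b*x; a single-feature
    # stand-in for the MLR base learner (illustrative only).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return lambda x: a + b * x

def bagging(xs, ys, n_models=10, seed=0):
    # Bagging: draw bootstrap samples (same size, with replacement),
    # fit one base model per sample, average ensemble predictions.
    rng = random.Random(seed)
    n = len(xs)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]
        models.append(fit_simple_linear([xs[i] for i in idx],
                                        [ys[i] for i in idx]))
    return lambda x: sum(m(x) for m in models) / n_models

def additive_regression(xs, ys, n_stages=5, shrinkage=0.5):
    # Gradient boosting for squared loss: each stage fits the base
    # learner to the current residuals and adds a shrunken correction.
    residuals = list(ys)
    stages = []
    for _ in range(n_stages):
        m = fit_simple_linear(xs, residuals)
        stages.append(m)
        residuals = [r - shrinkage * m(x) for r, x in zip(residuals, xs)]
    return lambda x: sum(shrinkage * m(x) for m in stages)
```

In Weka itself the equivalent steps are configured through the Bagging and AdditiveRegression meta-classifiers, which wrap any base regressor such as LinearRegression.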