G. Tsoumakas, I. Katakis, I. Vlahavas, “Random k-Labelsets for Multi-Label Classification”, IEEE Transactions on Knowledge and Data Engineering, IEEE, 23(7), pp. 1079-1089, 2011.

Author(s): Grigorios Tsoumakas, Ioannis Katakis, Ioannis Vlahavas

Appeared In: IEEE Transactions on Knowledge and Data Engineering, IEEE, 23(7), pp. 1079-1089, 2011.

Abstract: A simple yet effective multi-label learning method, called label powerset (LP), considers each distinct combination of labels that exists in the training set as a different class value of a single-label classification task. The computational efficiency and predictive performance of LP are challenged by application domains with large numbers of labels and training examples. In these cases, the number of classes may become very large and, at the same time, many classes may be associated with very few training examples. To deal with these problems, this paper proposes breaking the initial set of labels into a number of small random subsets, called {\em labelsets}, and employing LP to train a corresponding classifier for each subset. The labelsets can be either disjoint or overlapping, depending on which of two strategies is used to construct them. The proposed method is called RA$k$EL (RAndom $k$ labELsets), where $k$ is a parameter that specifies the size of the subsets. Empirical evidence indicates that RA$k$EL improves substantially over LP, especially in domains with a large number of labels, and exhibits competitive performance against other high-performing multi-label learning methods.
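The ensemble scheme described in the abstract can be sketched in a few lines of plain Python. This is a minimal illustration, not the authors' implementation: the labelset-drawing function covers both the overlapping and disjoint strategies, the `LabelPowerset` class is a toy stand-in (it memorises the most frequent label combination per input instead of training a real single-label classifier), and the voting threshold of 0.5 mirrors the averaged per-label voting the method uses at prediction time. All names here are hypothetical.

```python
import random
from collections import Counter


def rakel_subsets(labels, k, m, overlapping=True, seed=0):
    """Draw m random k-labelsets (overlapping), or partition the
    label set into disjoint chunks of size k (disjoint strategy)."""
    rng = random.Random(seed)
    if overlapping:
        return [tuple(sorted(rng.sample(labels, k))) for _ in range(m)]
    shuffled = labels[:]
    rng.shuffle(shuffled)
    return [tuple(shuffled[i:i + k]) for i in range(0, len(shuffled), k)]


class LabelPowerset:
    """Toy LP learner: each distinct combination of labels (restricted
    to this labelset) is treated as one class; here we simply memorise
    the most frequent combination seen for each input."""

    def fit(self, X, Y, labelset):
        self.labelset = labelset
        combos = {}
        for x, y in zip(X, Y):
            key = tuple(sorted(set(y) & set(labelset)))  # project labels onto the subset
            combos.setdefault(x, Counter())[key] += 1
        self.table = {x: c.most_common(1)[0][0] for x, c in combos.items()}
        return self

    def predict(self, x):
        return set(self.table.get(x, ()))


def rakel_predict(models, x, threshold=0.5):
    """Average the binary votes each label receives across the LP models
    whose labelset contains it; keep labels above the threshold."""
    votes, seen = Counter(), Counter()
    for model in models:
        for lab in model.labelset:
            seen[lab] += 1
            if lab in model.predict(x):
                votes[lab] += 1
    return {lab for lab in seen if votes[lab] / seen[lab] > threshold}
```

With the disjoint strategy, the labelsets partition the full label set, so each label is voted on by exactly one classifier; with the overlapping strategy, several classifiers may vote on the same label and their votes are averaged.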