There are two main paradigms for combining different classification algorithms: Classifier Selection and Classifier Fusion. The former selects a single model for classifying a new instance, while the latter combines the decisions of all models. The work presented in this paper lies between these two paradigms, aiming to mitigate the disadvantages and retain the advantages of both. In particular, this paper proposes the use of statistical procedures to select the best subgroup among different classification algorithms, followed by the fusion of the decisions of the models in this subgroup with simple methods such as Weighted Voting. Extensive experimental results show that the proposed approach, Selective Fusion, improves over simple selection and fusion methods, achieving performance comparable with Stacking, the state-of-the-art heterogeneous classifier combination method, without the additional computational cost and learning problems of meta-training.
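To make the two-stage idea concrete, the following is a minimal sketch of a Selective-Fusion-style pipeline, not the paper's exact method: it assumes scikit-learn models, uses a paired t-test over cross-validation folds as the statistical selection procedure (the abstract does not specify which test the paper uses), and weights each selected model's vote by its mean cross-validation accuracy. The dataset, model pool, and significance level are all illustrative choices.

```python
from scipy.stats import ttest_rel
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
models = {
    "tree": DecisionTreeClassifier(random_state=0),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(),
}

# Per-fold accuracies for each model (10-fold cross-validation).
scores = {name: cross_val_score(m, X, y, cv=10) for name, m in models.items()}
best = max(scores, key=lambda n: scores[n].mean())

# Selection step: keep every model whose fold accuracies are not
# significantly worse than the best model's (paired t-test, alpha = 0.05;
# an assumed stand-in for the paper's statistical procedure).
selected = [
    name for name in models
    if name == best or ttest_rel(scores[best], scores[name]).pvalue > 0.05
]

# Fusion step: weighted voting, with each selected model's vote
# weighted by its mean cross-validation accuracy.
def weighted_vote(fitted, weights, x):
    votes = {}
    for name in fitted:
        pred = fitted[name].predict(x.reshape(1, -1))[0]
        votes[pred] = votes.get(pred, 0.0) + weights[name]
    return max(votes, key=votes.get)

fitted = {n: models[n].fit(X, y) for n in selected}
weights = {n: scores[n].mean() for n in selected}
print("selected subgroup:", selected)
print("fused prediction:", weighted_vote(fitted, weights, X[0]))
```

Unlike Stacking, this pipeline needs no meta-level training: the only extra work beyond fitting the base models is the cross-validation already used to estimate their accuracies, which is the computational saving the abstract refers to.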