In multimedia annotation, labeling a large amount of training data by humans is both time-consuming and tedious. To automate this process, a number of methods that leverage unlabeled training data have been proposed. Typically, a multimedia sample is associated with multiple labels, which often exhibit inherent correlations in the real world. Classical multimedia annotation algorithms decompose multi-label learning into multiple independent single-label problems, thereby ignoring the correlations among labels. In this paper, we combine label correlation mining and semi-supervised feature selection into a single framework. We evaluate the performance of the proposed algorithm for multimedia annotation on the MIML, MIRFLICKR, and NUS-WIDE datasets, using mean average precision (MAP), MicroAUC, and MacroAUC as evaluation metrics. Experimental results on the multimedia annotation task demonstrate that our method outperforms state-of-the-art algorithms, owing to its capability of mining label correlations and exploiting both labeled and unlabeled training data.
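As a rough illustration of the reported metrics (not the paper's evaluation code), the sketch below computes MAP, MicroAUC, and MacroAUC for a hypothetical toy score matrix: per-label AUC and average precision are averaged across labels for the macro variants, while MicroAUC pools every (sample, label) pair into a single ranking. The score matrix `S` and label matrix `Y` are made-up assumptions for illustration only.

```python
def auc(scores, labels):
    """Rank-based AUC (Mann-Whitney statistic); assumes no tied scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    rank = {i: r + 1 for r, i in enumerate(order)}    # ascending ranks, 1-based
    pos = [i for i, y in enumerate(labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    rank_sum = sum(rank[i] for i in pos)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def average_precision(scores, labels):
    """AP: mean of precision@k over ranks k where a positive label occurs."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for k, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions)          # assumes >= 1 positive

# Toy annotation scores: 4 samples (rows) x 2 labels (columns).
S = [[0.9, 0.2], [0.8, 0.7], [0.3, 0.6], [0.1, 0.4]]
Y = [[1, 0],     [1, 1],     [0, 1],     [1, 0]]

S_cols, Y_cols = list(zip(*S)), list(zip(*Y))         # per-label columns
macro_auc = sum(auc(s, y) for s, y in zip(S_cols, Y_cols)) / len(S_cols)
map_score = sum(average_precision(s, y)
                for s, y in zip(S_cols, Y_cols)) / len(S_cols)
micro_auc = auc([v for row in S for v in row],        # pool all (sample, label)
                [v for row in Y for v in row])        # pairs into one ranking
```

Note the difference the pooling makes: MacroAUC weights every label equally regardless of how many positives it has, whereas MicroAUC is dominated by whichever labels contribute the most pairs.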