Learning to Rank (LtR) encompasses a class of machine learning techniques developed to automatically learn how to better rank the documents returned for an information retrieval (IR) search. Such techniques hold great promise for software engineers because they adapt better to the wide variation among the documents and queries found in software corpora. To encourage greater use of LtR in software maintenance and evolution research, this paper explores the value that LtR brings to two common maintenance problems: feature location and traceability. When compared to the worst, median, and best models identified from among hundreds of alternative models for performing feature location, LtR consistently provides a statistically significant improvement in MAP, MRR, and MnDCG scores. Looking forward, a further motivation for the use of LtR is its ability to enable the development of software-specific retrieval models.