This paper presents what we believe to be the most comprehensive suite of comparison criteria for multinomial discrete-choice experiment elicitation formats to date. We administer a choice experiment focused on ecosystem-service valuation to three independent samples, each receiving a different elicitation format: single choice, repeated choice, or best-worst scaling. We test whether results differ in parameter estimates, scale factors, preference heterogeneity, status-quo effects, attribute non-attendance, and the magnitude and precision of welfare measures. Overall, we find limited evidence of differences in attribute parameter estimates, scale factors, and attribute increment values across elicitation treatments. However, we find significant differences in status-quo effects across elicitation treatments, with repeated choice resulting in greater proportions of “action” votes and, consequently, higher program-level welfare estimates. We also find that single choice yields substantially less precise welfare estimates. Finally, we find some differences in attribute non-attendance behavior across elicitation formats, although there appears to be little consistency in class shares even within a given elicitation treatment.