Knowledge discovery in databases (KDD) has become an attractive discipline for both research and industry in recent years. Its goal is to extract "pieces" of knowledge, or "patterns", from usually very large databases. KDD comprises a well-defined sequence of procedures that must be carried out in order to derive sound and understandable results. One of its components is the process that induces these "pieces" of knowledge; usually this is a machine learning (ML) algorithm. However, most ML algorithms require clean data in a specific format, whereas the data to be processed by a knowledge acquisition (inductive) algorithm are usually noisy and often inconsistent. Moreover, many ML systems cannot easily process numerical attributes or numerical (continuous) classes. Therefore, several preparatory steps have to precede the actual data analysis; one typical step, the discretization of continuous attributes, is sketched below.

Furthermore, the result of an ML algorithm, such as a decision tree, a set of decision rules, or the weights and topology of a neural network, may not be appropriate from the viewpoint of custom or commercial applications. Consequently, a concept description (model, knowledge base) produced by an inductive process usually has to be postprocessed. Postprocessing procedures include various pruning routines, rule quality processing, rule filtering, rule combination, model combination, and even knowledge integration. All these procedures act as a kind of symbolic filter for noisy, imprecise, or non-user-friendly knowledge derived by an inductive algorithm; rule filtering by a rule quality measure is sketched after the preprocessing example below.

Preprocessing and postprocessing routines thus complete the entire chain of data processing. Pre- and post-processing tools help both to investigate databases and to refine the acquired knowledge. These tools usually exploit techniques that are not genuinely symbolic/logical, e.g., statistics, neural nets, and others.
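As an illustration of such a preprocessing routine, the following minimal sketch performs equal-width discretization of a continuous attribute so that a purely symbolic learner can process it. The function name and the number of intervals are illustrative assumptions, not part of any particular KDD system.

```python
# Minimal sketch of one common preprocessing step: equal-width
# discretization of a continuous attribute. The function name and
# the default number of intervals are illustrative assumptions.

def discretize_equal_width(values, n_bins=4):
    """Map each numeric value to a symbolic interval label."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a constant attribute
    labels = []
    for v in values:
        # Clamp the top value so that v == hi falls into bin n_bins - 1.
        idx = min(int((v - lo) / width), n_bins - 1)
        labels.append(f"bin_{idx}")
    return labels

if __name__ == "__main__":
    ages = [23, 31, 44, 58, 61, 29, 50]
    print(discretize_equal_width(ages))
    # ['bin_0', 'bin_0', 'bin_2', 'bin_3', 'bin_3', 'bin_0', 'bin_2']
```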
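As an example of a postprocessing routine, the sketch below filters induced decision rules by a rule quality measure. The Laplace estimate of rule accuracy and the acceptance threshold are assumptions chosen for illustration; the text itself does not prescribe a particular measure.

```python
# Minimal sketch of one postprocessing step: filtering decision rules
# by a rule quality measure. The Laplace estimate and the threshold
# are illustrative assumptions; real systems offer many alternatives.

from dataclasses import dataclass

@dataclass
class Rule:
    description: str   # human-readable form of the rule
    covered: int       # examples covered by the rule's condition
    correct: int       # covered examples whose class matches the rule

def laplace_quality(rule, n_classes=2):
    """Laplace estimate of rule accuracy: (correct + 1) / (covered + n_classes)."""
    return (rule.correct + 1) / (rule.covered + n_classes)

def filter_rules(rules, threshold=0.7):
    """Keep only rules whose estimated quality reaches the threshold."""
    return [r for r in rules if laplace_quality(r) >= threshold]

if __name__ == "__main__":
    rules = [
        Rule("IF age = bin_3 THEN class = yes", covered=40, correct=37),
        Rule("IF income = bin_0 THEN class = no", covered=5, correct=3),
    ]
    for r in filter_rules(rules):
        # Only the first rule survives: 38/42 ~ 0.905 vs. 4/7 ~ 0.571.
        print(r.description, round(laplace_quality(r), 3))
```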