An experimental prototype system was built to investigate how information relevant to analyst queries, constrained by a contextual model, can be located across a large information space. Agents employing the ant model sift rapidly through documents using a transductive support vector machine classifier and return those that match it; the classifier is continually refined through feedback from semantic information extraction into a knowledge base. An ontology-informed extraction is performed on the returned documents; an objective function then evaluates how well each document fulfilled the query, and this information is used to build a new classifier for each query. In numerous trials on a static corpus, the classifiers' recall and precision were consistently above 92%. The semantic results have not been quantified but appear highly promising.
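The classify-extract-score-refine loop described above can be sketched as follows. This is an illustrative toy only: the real system uses a transductive SVM classifier and ontology-driven semantic extraction, for which the keyword-overlap classifier, Jaccard objective function, and weight-update rule below are simplified stand-ins, and all names and thresholds are assumptions.

```python
def extract_terms(doc):
    """Stand-in for semantic information extraction: plain tokenization."""
    return set(doc.lower().split())

def objective(doc_terms, query_terms):
    """Score how well a document fulfils the query (Jaccard overlap here)."""
    union = doc_terms | query_terms
    return len(doc_terms & query_terms) / len(union) if union else 0.0

def refine_classifier(weights, doc_terms, score, lr=0.5):
    """Feedback step: boost the weights of terms from well-scoring documents."""
    new = dict(weights)
    for t in doc_terms:
        new[t] = new.get(t, 0.0) + lr * score
    return new

def classify(weights, doc_terms, threshold=0.3):
    """Return True if the document matches the current classifier."""
    return sum(weights.get(t, 0.0) for t in doc_terms) >= threshold

# One pass over a small static corpus for a single query.
query = extract_terms("ant colony optimization agents")
corpus = [
    "ant agents traverse the document graph",
    "colony optimization for routing",
    "unrelated cooking recipes",
]
weights = {t: 1.0 for t in query}  # initial per-query classifier
for doc in corpus:
    terms = extract_terms(doc)
    if classify(weights, terms):
        score = objective(terms, query)          # evaluate fulfilment
        weights = refine_classifier(weights, terms, score)  # feedback
```

In this sketch each query carries its own weight vector, mirroring the paper's per-query classifier; documents rejected by the classifier contribute no feedback, so only returned documents drive refinement.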