We address the e-rulemaking problem of reducing the manual labor required to analyze public comment sets. In current and previous work, for example, text categorization techniques have been used to speed up the comment analysis phase of e-rulemaking by classifying sentences automatically according to the rule-specific issues or general topics that they address [7, 8]. Manually annotated data, however, is still required to train the supervised inductive learning algorithms that perform the categorization. This paper, therefore, investigates the application of active learning methods to public comment categorization: we develop two new, general-purpose active learning techniques that selectively sample from the available training data for human labeling when building the sentence-level classifiers employed in public comment categorization. Using an e-rulemaking corpus developed for this purpose, we compare our methods to the well-known query-by-committee (QBC) active learning algorithm and to a baseline that randomly selects instances for labeling in each round of active learning. We show that our methods statistically significantly outperform both the random selection active learner and the QBC variation, requiring many fewer training examples to reach the same level of accuracy on a held-out test set. This provides promising evidence that automated text categorization methods can effectively support public comment analysis.
Purpura, Stephen; Cardie, Claire; and Simons, Jesse, "Active Learning for e-Rulemaking: Public Comment Categorization" (2008). Cornell e-Rulemaking Initiative Publications. Paper 7.
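The query-by-committee baseline mentioned above selects, in each round, the unlabeled instances on which a committee of classifiers disagrees most. The paper's own two sampling methods are not specified in this abstract; the sketch below only illustrates the generic QBC selection step, using a hypothetical committee of `predict` callables and vote entropy as the disagreement measure (a common QBC choice, assumed here for illustration).

```python
import math
from collections import Counter

def vote_entropy(votes):
    """Entropy of the committee's label votes on one instance.

    Higher entropy means more disagreement, hence a more
    informative instance to send for human labeling.
    """
    counts = Counter(votes)
    total = len(votes)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def qbc_select(committee, unlabeled, batch_size=1):
    """Return the unlabeled instances the committee disagrees on most."""
    scored = [(vote_entropy([clf(x) for clf in committee]), x) for x in unlabeled]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [x for _, x in scored[:batch_size]]

# Toy committee: three threshold classifiers over scalar "instances"
# standing in for trained sentence classifiers.
committee = [lambda x: x > 0.3, lambda x: x > 0.5, lambda x: x > 0.7]
pool = [0.1, 0.4, 0.6, 0.9]
print(qbc_select(committee, pool, batch_size=1))  # picks an instance near the disputed boundary
```

In a full active learning loop, the selected instances would be labeled by an annotator, added to the training set, and the committee retrained before the next round.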