Title: Ontology-based Multiple-Choice Question Generation
Speaker: Vinu E V (IITM)
Details: Tue, 10 May, 2016, 3:00 PM @ BSB 361
Abstract: Assessment is a well-understood educational topic with a long history and a rich literature. Generating question items from OWL-DL (Web Ontology Language) ontologies has gained much attention recently, as these structures capture the semantics of a domain and not just its data.
In this seminar, we will cover the relevant works in the literature and explore the various aspects of a formal ontology that can be utilized to generate an assessment test aimed at a specific pedagogical goal. For this purpose, we elaborate on two prototype systems proposed in our publications: the Automatic Test Generation (ATG) system and its extended version, the Extended-ATG (E-ATG) system. The ATG system generates multiple-choice question (MCQ) sets of required sizes from a given formal ontology. It employs a set of heuristics to select only those questions that are most relevant for a domain-related assessment. We enhanced this system with new features, such as estimating the difficulty values of the generated MCQs and controlling the overall difficulty level of question sets, to form the E-ATG system.
In the talk, we will discuss the novel methods adopted to realize the various features of these systems. While the ATG system uses at most two predicates to generate the stems of MCQs, the E-ATG system has no such limitation and employs several interesting predicate-based patterns for stem generation. These predicate patterns were obtained from a detailed empirical study of large real-world question sets. In addition, the new system incorporates a non-pattern-based approach that makes use of aggregation-like operations to generate questions involving superlatives (e.g., highest mountain, largest river).
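The two generation styles described above can be sketched roughly as follows. This is a minimal toy illustration, not the actual E-ATG implementation: the fact triples, templates, and function names are all hypothetical, and a real system would query an OWL ontology rather than a hard-coded list.

```python
# Toy sketch: predicate-pattern stem generation and a superlative
# (aggregation-like) question over (subject, predicate, object) triples.
# All data and templates here are illustrative only.

FACTS = [
    ("Everest", "isA", "Mountain"), ("K2", "isA", "Mountain"),
    ("Kangchenjunga", "isA", "Mountain"),
    ("Everest", "hasHeight", 8848), ("K2", "hasHeight", 8611),
    ("Kangchenjunga", "hasHeight", 8586),
]

def pattern_mcq(predicate, template):
    """Yield one MCQ per triple matching `predicate`; distractors are
    drawn from the objects of other triples with the same predicate."""
    triples = [t for t in FACTS if t[1] == predicate]
    for subj, _, obj in triples:
        distractors = [o for s, _, o in triples if o != obj]
        yield {"stem": template.format(subj), "key": obj,
               "options": [obj] + distractors[:3]}

def superlative_mcq(cls, numeric_pred, template):
    """Aggregation-like generation: the key is the instance of `cls`
    maximizing `numeric_pred`; the other instances act as distractors."""
    members = [s for s, p, o in FACTS if p == "isA" and o == cls]
    value = {s: o for s, p, o in FACTS if p == numeric_pred}
    ranked = sorted(members, key=lambda m: value[m], reverse=True)
    return {"stem": template.format(cls.lower()), "key": ranked[0],
            "options": ranked}

q1 = next(pattern_mcq("hasHeight", "What is the height of {}?"))
q2 = superlative_mcq("Mountain", "hasHeight", "Which is the highest {}?")
print(q1["stem"], q1["key"])   # What is the height of Everest? 8848
print(q2["stem"], q2["key"])   # Which is the highest mountain? Everest
```

A pattern-based generator scales with the number of matching triples, while the superlative generator needs the aggregation step (here, a sort over a numeric property) before a key and distractors can be fixed.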
We studied the feasibility and usefulness of the proposed methods by generating MCQs from several ontologies available online. The effectiveness of the suggested question-selection heuristics was studied by comparing the resulting questions with those prepared by domain experts. We found that the difficulty scores of questions computed by the proposed system correlate highly with their actual difficulty scores, which were determined by applying Item Response Theory (IRT) principles to data from classroom experiments.
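A correlation check of this kind can be sketched as below; the difficulty values are made-up placeholders (not the actual experimental data), and Pearson's r is used here purely as one plausible choice of correlation measure.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical difficulty scores for five MCQs (illustrative only):
predicted = [0.30, 0.55, 0.62, 0.74, 0.90]   # system-computed difficulty
irt_based = [0.28, 0.50, 0.66, 0.70, 0.95]   # IRT-derived difficulty

r = pearson(predicted, irt_based)
print(round(r, 3))
```

A value of r close to 1 would indicate that the system's predicted difficulty ordering closely tracks the empirically measured one.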
Our results show that the E-ATG system can generate domain-specific question sets that are close to human-generated ones in terms of semantic similarity. The system can also be used to control the overall difficulty level of automatically generated question sets, in order to achieve specific pedagogical goals.