Intelligent Question Paper Generation using AI and ML Algorithms for Tailored Difficulty Levels

Authors

  • Tejasvi Vijay Panchal Indian Institute of Teacher Education, Sector 15, Gandhinagar, 382016, India
  • Lakshya Singh Chouhan Poornima Institute of Engineering and Technology, Sitapura, Jaipur, 302012, India

Keywords:

Automated Question Paper Generation; Machine Learning and Artificial Intelligence; Natural Language Processing (NLP); Python Libraries (sklearn, pandas, Transformers); Pre-trained Models (GPT-2, BERT); Text Classification and Vectorization

Abstract

Automated question paper generation has emerged as a prominent area of research in education, driven by advances in artificial intelligence and machine learning. This paper presents a comprehensive approach that harnesses advanced language models and machine learning techniques to generate question papers spanning diverse difficulty levels and question types. The methodology encompasses several key steps.

To begin, we employ the pandas and os libraries in Python for data preparation. Pandas, a versatile tool for data manipulation and analysis, is used to build a structured DataFrame from questions and labels extracted from text files, while the os module handles file and directory management, enabling efficient iteration over files and retrieval of their contents. Data cleaning is essential, and we perform it with regular expressions via the re module; this step sanitizes the input question text, removing unwanted characters and yielding a cleaner, more uniform dataset.

Next, we train machine learning models with the sklearn library. The dataset is split into training and testing sets using the sklearn.model_selection.train_test_split function, so that the models are trained on the larger training set and evaluated on the held-out testing set. To transform the textual data into a form suitable for machine learning models, we apply sklearn.feature_extraction.text.CountVectorizer; this vectorization step converts the text into a matrix of token counts for subsequent analysis. For the classification task, we adopt the sklearn.naive_bayes.MultinomialNB algorithm, renowned for its efficacy in text classification, trained on the feature matrix from the CountVectorizer and the corresponding labels.
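The training pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's code: the sample questions, labels, column names, and cleaning pattern are all illustrative assumptions standing in for the text files used in the actual study.

```python
# Illustrative sketch of the classification pipeline: pandas DataFrame,
# regex cleaning, train/test split, count vectorization, Naive Bayes.
import re
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy dataset; in the paper, questions and labels come from text files.
df = pd.DataFrame({
    "question": [
        "Define photosynthesis??",
        "Define osmosis!!",
        "Explain Ohm's law in detail.",
        "Explain the water cycle in detail.",
    ],
    "label": ["very short", "very short", "long", "long"],
})

# Clean the question text with a regular expression (re module):
# keep letters, digits, and whitespace only.
df["question"] = df["question"].apply(
    lambda q: re.sub(r"[^a-zA-Z0-9\s]", "", q).strip().lower())

# Split into training and testing sets; stratify so that both
# labels appear in each split of this tiny example.
X_train, X_test, y_train, y_test = train_test_split(
    df["question"], df["label"],
    test_size=0.5, stratify=df["label"], random_state=42)

# Vectorize: convert the text into a matrix of token counts.
vectorizer = CountVectorizer()
X_train_counts = vectorizer.fit_transform(X_train)

# Train a Multinomial Naive Bayes classifier on the count features.
model = MultinomialNB()
model.fit(X_train_counts, y_train)
```

At prediction time, new text must pass through the same fitted vectorizer (`vectorizer.transform`, not `fit_transform`) before being handed to the classifier.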
The models' performance is evaluated with the sklearn.metrics.accuracy_score function, which compares the predicted labels with the actual labels from the testing set. To enable future use without retraining, the trained models, along with the CountVectorizer, are saved using the joblib library.

In the question classification and storage stage, new questions are classified with the trained models and stored in separate files. These questions are imported from text files, cleaned with the data cleaning techniques described above, vectorized, and classified; the results are saved in separate text files according to the predicted labels. This organization enables efficient retrieval of questions by classification, simplifying the generation of specific question paper types.

Moreover, we fine-tune GPT-2, a powerful language model, using the Transformers library. This fine-tuning is performed on the classified question datasets, enabling the generation of unique and contextually relevant questions. We also employ BERT, a highly effective model in Natural Language Processing (NLP), for text classification: a pre-trained BERT model is fine-tuned for sequence classification, allowing questions to be categorized by relevance. To enhance the quality of the generated questions, a question filter script assesses relevance and grammatical correctness, identifying and eliminating irrelevant or ungrammatical questions and thereby improving the overall quality of the generated question set. To create well-structured question papers, we employ multiple pre-trained models capable of generating questions categorized by cognitive domain (knowledge, comprehension, application, analysis, synthesis, and evaluation) and section type (very short, short, and long).
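The persistence and routing steps above can be sketched as below. This is a hedged, self-contained example: the toy model, the `.joblib` file names, and the per-label output file naming scheme are illustrative assumptions, not the paper's actual artifacts.

```python
# Illustrative sketch: save trained artifacts with joblib, reload them,
# classify new questions, and append each to a file named after its label.
import joblib
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy stand-ins for the trained model and vectorizer from the previous stage.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(["define energy", "explain the water cycle in detail"])
model = MultinomialNB().fit(X, ["very short", "long"])

# Persist the model and vectorizer so they can be reused without retraining.
joblib.dump(model, "question_model.joblib")
joblib.dump(vectorizer, "question_vectorizer.joblib")

# Later (possibly in another script): reload and classify new questions.
model = joblib.load("question_model.joblib")
vectorizer = joblib.load("question_vectorizer.joblib")

new_questions = ["define photosynthesis", "explain ohms law in detail"]
labels = model.predict(vectorizer.transform(new_questions))

# Route each question into a text file named after its predicted label,
# e.g. very_short_questions.txt, long_questions.txt (hypothetical scheme).
for question, label in zip(new_questions, labels):
    with open(f"{label.replace(' ', '_')}_questions.txt", "a", encoding="utf-8") as f:
        f.write(question + "\n")
```

Saving the fitted CountVectorizer alongside the model matters: a vectorizer refitted on new data would produce a different vocabulary, making its count matrix incompatible with the trained classifier.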
This approach ensures a balanced representation of question types and difficulty levels in the generated question papers. Finally, the convert_to_pdf function, built on the reportlab library, converts the generated question paper into PDF format, simplifying dissemination and sharing of the output.

Evaluation of our models yields promising results, with an overall accuracy of approximately 74.2% in predicting Bloom's taxonomy categories; the model is particularly strong on 'comprehension' and 'evaluation'. The question type classification model is highly accurate at identifying 'not relevant' questions, a crucial aspect of maintaining question paper quality, but further improvement is needed in distinguishing between 'long', 'short', and 'very short' questions. Aligning the generated questions with the Class 10 science syllabus is a priority for future work, as limitations in the current dataset prevented full alignment.

This research contributes to the advancement of automated question paper generation systems, highlighting the value of advanced language models, machine learning algorithms, and effective data preprocessing. The findings provide insight into the strengths and weaknesses of the proposed methodology, laying a foundation for further research to refine and enhance such systems.


Published

22.02.2024

How to Cite

[1]
Tejasvi Vijay Panchal and Lakshya Singh Chouhan, “Intelligent Question Paper Generation using AI and ML Algorithms for Tailored Difficulty Levels”, IJREST, vol. 10, pp. 1–17, Feb. 2024.

Section

Articles