Using AI to analyze student scientific writing in chemistry (CER framework)
Keywords:
Artificial Intelligence, Chemistry Education, CER Framework, Scientific Argumentation, AI-Assisted Assessment, Educational Technology, Expert Validation

Abstract
The rapid development of artificial intelligence has opened new opportunities in educational assessment. This study examines the use of artificial intelligence in analysing chemistry students’ scientific writing based on the Claim-Evidence-Reasoning (CER) framework. Using the Nominal Group Technique (NGT) with five expert assessors, ten implementation strategies were evaluated for their suitability and priority. The analysis showed a high level of agreement (86.67%–100%), with four strategies achieving full consensus: developing a chemistry-specific AI rubric aligned with the CER framework, integrating a chemistry misconception database into AI training, providing structured step-by-step CER feedback, and ensuring safety and ethics mechanisms for chemistry content. These findings highlight the need for domain-specific adaptation, human–AI collaboration, multimodal analysis capabilities, and robust error-checking systems to ensure the validity and reliability of AI assessments of students’ chemistry arguments. Overall, this study offers evidence-based guidelines for responsible AI implementation in chemistry education, supporting high-quality, scalable feedback on scientific writing.










