Automated assessment in educational contexts can improve learning through faster and more consistent evaluation at scale, with assessment quality approaching that of human judges. In writing practice, recent advances in Natural Language Processing (NLP) have enabled automated assessment of student writing. This is especially beneficial for English as a Second Language (ESL) learners, who can obtain immediate feedback from automated systems rather than relying solely on human teachers. Current assessment systems not only evaluate overall writing quality, but also provide feedback on detailed writing features such as spelling, grammar, word usage, and sentence structure. Beyond these low-level language usage features, NLP models have been proposed to score the text coherence and argumentative structure of persuasive essays.
Recent years have seen growing interest in argument mining, a research field that aims to automatically identify the components of the argument presented in a given text. The output of argument mining has been used to improve automated essay scoring, text summarization, and opinion mining. In this research, we propose to apply argument mining to improve automated essay coherence scoring, hypothesizing that good argumentative structure correlates with the coherence of persuasive essays. We build a competitive end-to-end argument mining system that parses free-text input and outputs the argument components and the support relations between them. Our argument mining model achieves performance comparable to the state of the art, with F1 = 0.83 for component classification and F1 = 0.73 for relation classification. We conduct text coherence scoring experiments on our dataset of nearly 2,000 essays graded by ESL experts. Preliminary results show that features extracted from argument mining output, when combined with baseline features, help improve the prediction accuracy of both holistic and text coherence scores.
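To make the feature pipeline concrete, the sketch below shows one plausible way, not the authors' actual implementation, to summarize argument mining output as numeric features and combine them with baseline features for score prediction. The ArgGraph container, every feature definition, and the choice of regressor are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor


@dataclass
class ArgGraph:
    """Hypothetical container for argument mining output on one essay."""
    components: List[str]              # e.g. ["MajorClaim", "Claim", "Premise"]
    relations: List[Tuple[int, int]]   # (source_idx, target_idx) support edges


def argument_features(graph: ArgGraph, n_sentences: int) -> np.ndarray:
    """Summarize argumentative structure as a small feature vector (assumed features)."""
    n_components = max(len(graph.components), 1)
    n_claims = sum(c in ("MajorClaim", "Claim") for c in graph.components)
    n_premises = sum(c == "Premise" for c in graph.components)
    supported = {target for _, target in graph.relations}
    return np.array([
        n_claims,
        n_premises,
        len(graph.relations) / n_components,   # support-relation density
        len(supported) / n_components,         # share of supported components
        n_components / max(n_sentences, 1),    # argumentative coverage
    ])


def fit_coherence_model(baseline_X, graphs, sentence_counts, scores):
    """Concatenate baseline and argument features, then fit a regressor."""
    arg_X = np.vstack([
        argument_features(g, n) for g, n in zip(graphs, sentence_counts)
    ])
    X = np.hstack([np.asarray(baseline_X), arg_X])
    model = GradientBoostingRegressor()        # regressor choice is an assumption
    return model.fit(X, np.asarray(scores))
```

In this setup the argument-derived features simply augment the baseline feature matrix, so any standard regressor can test whether argumentative structure carries predictive signal beyond the baseline features alone.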