With the advent of the big data era, text data have become increasingly accessible and are now applied across diverse domains. In these circumstances, adequate language processing is urgently required to handle the enormous amount of unorganized data (e.g., erroneous, missing, or incomplete text). To address text errors, considerable effort has been devoted to developing grammatical error correction (GEC) models, especially for English. Correction models for Korean, however, remain relatively unexplored. In this paper, we propose a novel GEC model specialized for Korean. To compensate for the scarcity of training sample-label pairs (a parallel corpus), the model is pre-trained, prior to the main training stage, with a pre-defined noise function that injects artificial errors into clean sentences, thereby strengthening it relative to previous correction models. For the empirical study, we generate approximately 37 million pre-training sentences and evaluate on the Korean learners' parallel corpus, a benchmark dataset for text correction. We conclude from the study that the proposed model outperforms humans in terms of bilingual evaluation understudy (BLEU) scores.
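The abstract describes building pre-training pairs by corrupting clean sentences with a pre-defined noise function. The paper's exact noise operations are not given here, so the following is a minimal illustrative sketch in Python; the function name `add_noise`, the character-level operations (swap, delete, duplicate), and the corruption probability `p` are all assumptions, not the authors' actual design.

```python
import random

def add_noise(sentence, p=0.1, seed=None):
    """Corrupt a clean sentence with random character-level errors
    (swap adjacent characters, delete, or duplicate), producing the
    noisy half of a (noisy, clean) pre-training pair.

    Assumed illustrative operations -- not the paper's actual noise function.
    """
    rng = random.Random(seed)
    chars = list(sentence)
    out = []
    i = 0
    while i < len(chars):
        # Only corrupt non-space characters, each with probability p.
        if chars[i] != " " and rng.random() < p:
            op = rng.choice(["swap", "delete", "duplicate"])
            if op == "swap" and i + 1 < len(chars) and chars[i + 1] != " ":
                out.extend([chars[i + 1], chars[i]])  # transpose a pair
                i += 2
                continue
            if op == "delete":
                i += 1  # drop the character
                continue
            if op == "duplicate":
                out.extend([chars[i], chars[i]])  # repeat the character
                i += 1
                continue
        out.append(chars[i])
        i += 1
    return "".join(out)

# Each clean sentence yields a (noisy, clean) pair for pre-training.
clean = "나는 학교에 간다"
pair = (add_noise(clean, p=0.3, seed=7), clean)
```

With `p=0` the sentence passes through unchanged, and a fixed seed makes corruption reproducible, which is convenient when regenerating a large synthetic corpus.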