This study investigated the effectiveness of two parallel rater training procedures for the English Test for International Communication (ETIC). Twelve university English teachers participated as new raters and were divided into two groups that underwent different modes of training led by the same trainer: collaborative training and self-paced training. The two modes differed in that raters in the collaborative training discussed their ratings with both peers and the trainer, whereas raters in the self-paced training discussed their progress only with the trainer. Sample scripts from the email writing task of ETIC Intermediate were used as training materials. Following training, the raters scored an additional set of 80 scripts. The resulting rating data were analyzed with the computer program FACETS (version 3.80.4) under two separate many-facet Rasch models, one for rater invariance and one for rater accuracy (Wind & Engelhard, 2013), to examine rating quality systematically. Indices of rater calibration, model-data fit, and rater interactions from both models indicated that both types of training were effective overall and that the self-paced method was as effective as the traditional collaborative method. Suggestions for improving the training programs and implications for large-scale performance assessments are discussed.
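As background, a minimal sketch of a many-facet Rasch model of the kind referenced above, in Linacre's standard rating-scale form; the particular facet structure shown here (examinee, rater, and category threshold, with a single task) is an assumption for illustration, not necessarily the exact parameterization used in this study:

\[
\log\!\left(\frac{P_{njk}}{P_{nj(k-1)}}\right) = \theta_n - \alpha_j - \tau_k
\]

where $P_{njk}$ is the probability that examinee $n$ receives a score in category $k$ from rater $j$, $\theta_n$ is the ability of examinee $n$, $\alpha_j$ is the severity of rater $j$, and $\tau_k$ is the difficulty of category $k$ relative to category $k-1$. Rater calibration and model-data fit statistics of the sort reported in the study are derived from estimates and residuals under models of this form.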