Word embeddings are widely used across natural language processing tasks. A word embedding maps words into a vector space, giving each word a numerical representation. The quality of word embeddings depends on factors such as the training method and the training corpus, and it directly affects the performance of machine learning models built on them. Focusing on the task of sentiment classification, this paper conducts a series of comparative experiments with several commonly used word embeddings and typical sentiment classification models to study how word embeddings influence sentiment classification performance. The experimental results show that embedding quality depends not only on the training method and the size of the training corpus, but also, to a large extent, on the content of the training corpus and the dimension of the generated word embeddings.
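The idea of mapping words to vectors can be sketched with a toy example. The embedding values below are invented purely for illustration; real embeddings (e.g. word2vec, GloVe, fastText) are learned from large corpora, but the geometric intuition is the same: words with related meaning or sentiment should end up closer together under a similarity measure such as cosine similarity.

```python
import numpy as np

# Hypothetical toy embedding table: each word maps to a dense vector.
# These values are made up for illustration, not learned from a corpus.
embeddings = {
    "good":  np.array([0.9, 0.1, 0.3]),
    "great": np.array([0.8, 0.2, 0.4]),
    "bad":   np.array([-0.7, 0.9, 0.1]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors (1 = same direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_pos = cosine_similarity(embeddings["good"], embeddings["great"])
sim_neg = cosine_similarity(embeddings["good"], embeddings["bad"])

# Words of similar sentiment should be closer in the vector space.
print(sim_pos > sim_neg)  # → True
```

In a real sentiment classification pipeline, such vectors (with dimensions in the hundreds rather than three) would be looked up for each token and fed into the downstream model, which is why their quality matters for classification performance.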