Comparative study of deep learning models for sentiment analysis

  • Authors

    • Oumaima Hourrane, Hassan II University
    • El Habib Benlahmar, Hassan II University
    • Ahmed Zellou, National School for Computer Science and Systems Analysis

  • Published: 2018-04-12
  • DOI: https://doi.org/10.14419/ijet.v7i4.24459
  • Keywords: Sentiment Analysis, Word Embeddings, Deep Learning
  • Abstract: Sentiment analysis is one of the compelling areas that has emerged in natural language processing with the rise of community sites on the web. Taking advantage of the amount of information now available, research and industry have been seeking ways to automatically analyze the sentiments expressed in text. The main challenges for this task are the ambiguity of human language and the scarcity of labeled data. Deep learning models have been applied to sentiment analysis to address these issues, as they are effective thanks to their capacity for automatic feature learning. In this paper, we provide a comparative study on the IMDB movie review dataset: we compare word embeddings and several deep learning models for sentiment analysis, and report broad empirical results for those keen on applying deep learning to sentiment analysis in real-world settings.
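    The models compared in such a study share one pipeline shape: tokens are mapped to word embeddings, the token vectors are pooled (or processed by a CNN/LSTM) into a sentence representation, and a classifier scores the sentiment. As a rough illustration only (not the paper's code), the sketch below implements the simplest variant in plain Python: randomly initialized embeddings, mean pooling, and a logistic classifier trained jointly by gradient descent. All reviews, dimensions, and hyperparameters here are invented toy values.

    ```python
    # Toy sketch of a sentiment pipeline:
    # tokens -> word embeddings -> mean-pooled sentence vector -> logistic classifier.
    import math
    import random

    random.seed(0)

    # Invented toy reviews (1 = positive, 0 = negative).
    TRAIN = [
        ("great movie loved it", 1),
        ("wonderful great acting", 1),
        ("loved the wonderful plot", 1),
        ("terrible movie hated it", 0),
        ("awful terrible acting", 0),
        ("hated the awful plot", 0),
    ]

    DIM = 8  # toy embedding dimension

    def build_vocab(data):
        vocab = {}
        for text, _ in data:
            for tok in text.split():
                vocab.setdefault(tok, len(vocab))
        return vocab

    def train(data, epochs=200, lr=0.5):
        vocab = build_vocab(data)
        # Embeddings are learned jointly with the classifier weights,
        # which is the "automatic feature learning" the abstract refers to.
        emb = [[random.uniform(-0.1, 0.1) for _ in range(DIM)] for _ in vocab]
        w = [0.0] * DIM  # logistic-regression weights over the pooled vector
        b = 0.0
        for _ in range(epochs):
            for text, label in data:
                ids = [vocab[t] for t in text.split()]
                # Mean-pool token embeddings into one sentence vector.
                sent = [sum(emb[i][d] for i in ids) / len(ids) for d in range(DIM)]
                z = sum(wd * sd for wd, sd in zip(w, sent)) + b
                p = 1.0 / (1.0 + math.exp(-z))
                g = p - label  # gradient of the log loss w.r.t. z
                for d in range(DIM):
                    grad_emb = g * w[d] / len(ids)  # uses w before its update
                    w[d] -= lr * g * sent[d]
                    for i in ids:
                        emb[i][d] -= lr * grad_emb
                b -= lr * g
        return vocab, emb, w, b

    def predict(model, text):
        vocab, emb, w, b = model
        ids = [vocab[t] for t in text.split() if t in vocab]
        sent = [sum(emb[i][d] for i in ids) / len(ids) for d in range(DIM)]
        return 1 if sum(wd * sd for wd, sd in zip(w, sent)) + b > 0 else 0

    model = train(TRAIN)
    ```

    The deep models the paper compares (CNNs, LSTMs, and their combinations) replace the mean-pooling step with learned compositions over the token sequence, but keep the same embed-then-classify structure.
    
    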

  • How to Cite

    Hourrane, O., Benlahmar, E. H., & Zellou, A. (2018). Comparative study of deep learning models for sentiment analysis. International Journal of Engineering & Technology, 7(4), 5726-5731. https://doi.org/10.14419/ijet.v7i4.24459