1. Axelrad, E. T., Sticha, P. J., Brdiczka, O., & Shen, J. (2013). A Bayesian network model for predicting insider threats. In Proceedings of the 2013 IEEE Security and Privacy Workshops (pp. 82-89). Piscataway: IEEE.
2. Azaria, A., Richardson, A., Kraus, S., & Subrahmanian, V. S. (2014). Behavioral analysis of insider threat: A survey and bootstrapped prediction in imbalanced data. IEEE Transactions on Computational Social Systems, 1(2), 135-155.
3. Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022.
4. Brdiczka, O., Liu, J., Price, B., Shen, J., Patil, A., Chow, R., . . . Ducheneaut, N. (2012). Proactive insider threat detection through graph learning and psychological context. In Proceedings of the 2012 IEEE Symposium on Security and Privacy Workshops (pp. 142-149). Piscataway: IEEE.
5. Brown, C. R., Greitzer, F. L., & Watkins, A. (2013). Toward the development of a psycholinguistic-based measure of insider threat risk focusing on core word categories used in social media. In AMCIS 2013 Proceedings (pp. 3596-3603). Atlanta: Association for Information Systems.
6. Brown, C. R., Watkins, A., & Greitzer, F. L. (2013). Predicting insider threat risks through linguistic analysis of electronic communication. In Proceedings of the 46th Hawaii International Conference on System Sciences (pp. 1849-1858). Piscataway: IEEE.
7. Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321-357.
8. Chen, Y., & Malin, B. (2011). Detection of anomalous insiders in collaborative environments via relational analysis of access logs. In Proceedings of the First ACM Conference on Data and Application Security and Privacy (pp. 63-74). New York: ACM.
9. Cherry, C., Mohammad, S. M., & De Bruijn, B. (2012). Binary classifiers and latent sequence models for emotion detection in suicide notes. Biomedical Informatics Insights, 5(Suppl 1), 147-154.
10. Colwill, C. (2009). Human factors in information security: The insider threat-Who can you trust these days? Information Security Technical Report, 14(4), 186-196.
11. Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273-297.
12. Eberle, W., Graves, J., & Holder, L. (2010). Insider threat detection using a graph-based approach. Journal of Applied Security Research, 6(1), 32-81.
13. Grijalva, E., Newman, D. A., Tay, L., Donnellan, M. B., Harms, P. D., Robins, R. W., & Yan, T. (2015). Gender differences in narcissism: A meta-analytic review. Psychological Bulletin, 141(2), 261-310.
14. Gheyas, I. A., & Abdallah, A. E. (2016). Detection and prediction of insider threats to cyber security: A systematic literature review and meta-analysis. Big Data Analytics, 1(1), 6.
15. Greitzer, F. L., Frincke, D. A., & Zabriskie, M. (2010). Social/ethical issues in predictive insider threat monitoring. In M. J. Dark (Ed.), Information assurance and security ethics in complex systems: Interdisciplinary perspectives (pp. 1100-1129). Hershey: IGI Global.
16. Greitzer, F. L., Kangas, L. J., Noonan, C. F., Brown, C. R., & Ferryman, T. (2013). Psychosocial modeling of insider threat risk based on behavioral and word use analysis. e-Service Journal, 9(1), 106-138.
17. Ho, S. M., Hancock, J. T., Booth, C., Burmester, M., Liu, X., & Timmarajus, S. S. (2016). Demystifying insider threat: Language-action cues in group dynamics. In Proceedings of the 49th Hawaii International Conference on System Sciences (pp. 2729-2738). Piscataway: IEEE.
18. Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9, 1735-1780.
19. Hornik, K. (1991). Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2), 251-257.
20. Kandias, M., Mylonas, A., Virvilis, N., Theoharidou, M., & Gritzalis, D. (2010). An insider threat prediction model. In S. Katsikas, J. Lopez, & M. Soriano (Eds.), Lecture notes in computer science: Vol. 6264. Trust, privacy and security in digital business (pp. 26-37). Berlin: Springer.
21. Mohammad, S. M. (2012). #Emotional tweets. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (pp. 246-255). Stroudsburg: Association for Computational Linguistics.
22. Kandias, M., Stavrou, V., Bozovic, N., Mitrou, L., & Gritzalis, D. (2013). Can we trust this user? Predicting insider's attitude via YouTube usage profiling. In Proceedings of the 2013 IEEE 10th International Conference on Ubiquitous Intelligence and Computing and 2013 IEEE 10th International Conference on Autonomic and Trusted Computing (pp. 347-354). Piscataway: IEEE.
23. Kiser, A. I., Porter, T., & Vequist, D. (2010). Employee monitoring and ethics: Can they co-exist? International Journal of Digital Literacy and Digital Competence, 1(4), 30-45.
24. McCrae, R. R. (2010). The place of the FFM in personality psychology. Psychological Inquiry, 21(1), 57-64.
25. Mohammad, S. M. (2015). Sentiment analysis: Detecting valence, emotions, and other affectual states from text. In H. L. Meiselman (Ed.), Emotion measurement (pp. 201-237). Duxford: Woodhead Publishing.
26. Mohammad, S. M., & Bravo-Marquez, F. (2017). Emotion intensities in tweets. In Proceedings of the Sixth Joint Conference on Lexical and Computational Semantics (pp. 65-77). Stroudsburg: Association for Computational Linguistics.
27. Mohammad, S. M., & Turney, P. D. (2013). Crowdsourcing a word-emotion association lexicon. Computational Intelligence, 29(3), 436-465.
28. Mohammad, S. M., Zhu, X., Kiritchenko, S., & Martin, J. (2015). Sentiment, emotion, purpose, and style in electoral tweets. Information Processing & Management, 51(4), 480-499.
29. Parker, D. B. (1998). Fighting computer crime: A new framework for protecting information. New York: Wiley.
30. Myers, J., Grimaila, M. R., & Mills, R. F. (2009). Towards insider threat detection using web server logs. In Proceedings of the 5th Annual Workshop on Cyber Security and Information Intelligence Research: Cyber Security and Information Intelligence Challenges and Strategies (p. 54). New York: ACM.
31. Pennebaker, J. W., Booth, R. J., & Francis, M. E. (2001). Linguistic inquiry and word count: LIWC 2001. Retrieved February 5, 2019 from http://www.depts.ttu.edu/psy/lusi/files/LIWCmanual.pdf.
32. Pennebaker, J. W., Mehl, M. R., & Niederhoffer, K. G. (2003). Psychological aspects of natural language use: Our words, our selves. Annual Review of Psychology, 54(1), 547-577.
33. Plutchik, R. (1982). A psychoevolutionary theory of emotions. Social Science Information, 21(4-5), 529-553.
34. Schultz, E. E. (2002). A framework for understanding and predicting insider attacks. Computers & Security, 21(6), 526-531.
35. Shaw, E. D., & Fischer, L. F. (2005). Ten tales of betrayal: The threat to corporate infrastructure by information technology insiders – Analysis and observations. Retrieved February 5, 2019 from http://www.dtic.mil/dtic/tr/fulltext/u2/a441293.pdf.
36. Taylor, P. J., Dando, C. J., Ormerod, T. C., Ball, L. J., Jenkins, M. C., Sandham, A., & Menacere, T. (2013). Detecting insider threats through language change. Law and Human Behavior, 37(4), 267-275.
37. Wood, B. (2000). An insider threat model for adversary simulation. In Proceedings of the Workshop on Mitigating the Insider Threat to Information Systems (pp. 41-48). Arlington: RAND.