• Title/Summary/Keyword: classification of expressions

A Formal Presentation of the Extensional Object Model (외연적 객체모델의 정형화)

  • Jeong, Cheol-Yong
    • Asia Pacific Journal of Information Systems
    • /
    • v.5 no.2
    • /
    • pp.143-176
    • /
    • 1995
  • We present an overview of the Extensional Object Model (ExOM) and describe in detail the learning and classification components which integrate concepts from machine learning and object-oriented databases. The ExOM emphasizes flexibility in information acquisition, learning, and classification which are useful to support tasks such as diagnosis, planning, design, and database mining. As a vehicle to integrate machine learning and databases, the ExOM supports a broad range of learning and classification methods and integrates the learning and classification components with traditional database functions. To ensure the integrity of ExOM databases, a subsumption testing rule is developed that encompasses categories defined by type expressions as well as concept definitions generated by machine learning algorithms. A prototype of the learning and classification components of the ExOM is implemented in Smalltalk/V Windows.
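
The subsumption testing mentioned above can be sketched in miniature. As a hedged illustration (not the ExOM's actual type system, and in Python rather than the paper's Smalltalk/V), suppose each concept is simply a mapping from attribute names to sets of allowed values:

```python
# Minimal sketch of subsumption testing between concept definitions.
# The representation (attribute -> allowed value set) is a hypothetical
# simplification of the ExOM's type expressions.

def subsumes(general, specific):
    """Return True if every instance of `specific` also satisfies `general`."""
    for attr, allowed in general.items():
        if attr not in specific:
            return False          # specific leaves the attribute unconstrained
        if not specific[attr] <= allowed:
            return False          # specific permits values general forbids
    return True

animal = {"legs": {2, 4}}
dog = {"legs": {4}, "sound": {"bark"}}
print(subsumes(animal, dog))   # True: every dog satisfies the animal constraints
print(subsumes(dog, animal))   # False: animals need not bark
```

A database integrity check of this kind would run on every newly learned concept before it is added to the category hierarchy.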

Smart Mirror for Facial Expression Recognition Based on Convolution Neural Network (컨볼루션 신경망 기반 표정인식 스마트 미러)

  • Choi, Sung Hwan;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.05a
    • /
    • pp.200-203
    • /
    • 2021
  • This paper introduces a smart mirror technology that recognizes a person's facial expression through image classification, one of several artificial intelligence technologies, and presents the result in the mirror. Five types of facial expression images are trained through artificial intelligence. When someone looks at the smart mirror, it recognizes the user's expression and shows the recognized result in the mirror. The fer2013 dataset provided by Kaggle contains the faces of many people labeled by facial expression. For image classification, the network is trained using a convolutional neural network (CNN). The face is recognized and the result is presented on the screen of the smart mirror, which runs on an embedded board such as a Raspberry Pi 4.
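
As a rough sketch of the convolution step at the heart of such CNN-based classification of a fer2013 face (48x48 grayscale): the kernel, the random "classifier head", and the layer sizes below are illustrative stand-ins, not the paper's trained network.

```python
import numpy as np

# Five expression classes, per the paper; fer2013 images are 48x48 grayscale.
EXPRESSIONS = ["angry", "happy", "sad", "surprise", "neutral"]

def conv2d(image, kernel):
    """Valid 2-D convolution of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
face = rng.random((48, 48))                 # stand-in for one fer2013 face
edge_kernel = np.array([[1., 0., -1.]] * 3)  # simple vertical-edge filter
features = np.maximum(conv2d(face, edge_kernel), 0)   # conv + ReLU
logits = rng.standard_normal(5) * features.mean()     # stand-in classifier head
print(EXPRESSIONS[int(np.argmax(softmax(logits)))])
```

A real model stacks many such convolution layers with learned kernels before the softmax output.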

Expression of Survivin and KAI-1 in Gastric Adenocarcinomas (위선암에서 Survivin과 KAI-1의 발현에 대한 연구)

  • Lee, Ju-Han;Kim, Byung-Soo;Choi, Jong-Sang
    • Journal of Gastric Cancer
    • /
    • v.3 no.1
    • /
    • pp.44-49
    • /
    • 2003
  • Purpose: The aim of this study was to investigate the impact of survivin expression and the decrease or loss of KAI-1 on the clinical stage and the survival rate in gastric adenocarcinomas. Materials and Methods: Expressions of survivin and KAI-1 were immunohistochemically determined in 40 cases of gastric adenocarcinomas. The survivin and KAI-1 expressions were also analyzed by using western blots in 14 of these cases. Results: Resected gastric cancer specimens from 40 patients (intestinal type: 15 cases and diffuse type: 25 cases) were evaluated immunohistochemically. Survivin protein expressions were significantly higher in diffuse types (P=0.03) and in advanced clinical stages (UICC TNM II and III, P=0.02). In contrast, a decrease or loss of KAI-1 expression had no statistically significant correlation with the Lauren classification or the clinical stage. Survivin protein positivity was associated with an unfavorable prognosis. Decrease or loss of KAI-1 was associated with a shorter disease-free survival rate (P < 0.01). The western blot data (n=14) indicated that neither survivin protein over-expression nor KAI-1 down-expression had a significant correlation with the Lauren classification or the clinical stage. Conclusion: In gastric carcinomas, survivin over-expression and the decrease or loss of KAI-1 were associated with an unfavorable prognosis and were independent prognostic factors along with the clinical stage and the disease-free survival rate.

A Study on the Toxic Comments Classification Using CNN Modeling with Highway Network and OOV Process (하이웨이 네트워크 기반 CNN 모델링 및 사전 외 어휘 처리 기술을 활용한 악성 댓글 분류 연구)

  • Lee, Hyun-Sang;Lee, Hee-Jun;Oh, Se-Hwan
    • The Journal of Information Systems
    • /
    • v.29 no.3
    • /
    • pp.103-117
    • /
    • 2020
  • Purpose Recently, various issues related to toxic comments on web portal sites and SNS are becoming a major social problem. Toxic comments can threaten Internet users in the form of defamation, personal attacks, and invasion of privacy. Over the past few years, academia and industry have been conducting research in various ways to solve this problem. The purpose of this study is to develop deep learning modeling for toxic comment classification. Design/methodology/approach This study analyzed 7,878 internet news comments through CNN classification modeling based on a Highway Network and an OOV process. Findings The bias and hate expressions of toxic comments were classified into three classes, achieving a weighted F1 score of 67.49%. This was superior to the approximately 50~60% weighted F1 scores reported in previous studies.
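
The highway component named above can be sketched as a single layer in isolation. The NumPy form, shapes, and gate initialization below are assumptions for illustration, not the study's actual model; the defining idea is a learned gate T that blends a transform H(x) with the unchanged input x.

```python
import numpy as np

# One highway layer: y = T * H(x) + (1 - T) * x, where T is a sigmoid gate.
# Weights here are random placeholders, not trained values.

def highway(x, W_h, b_h, W_t, b_t):
    """Blend a candidate transform with the carried-through input."""
    H = np.tanh(x @ W_h + b_h)                 # candidate transform
    T = 1.0 / (1.0 + np.exp(-(x @ W_t + b_t)))  # transform gate in (0, 1)
    return T * H + (1.0 - T) * x               # carry the rest unchanged

rng = np.random.default_rng(1)
d = 8
x = rng.standard_normal(d)
y = highway(x, rng.standard_normal((d, d)), np.zeros(d),
            rng.standard_normal((d, d)), -2.0 * np.ones(d))
```

Biasing the gate negative (here -2) starts the layer close to an identity mapping, which is the usual way highway layers are initialized so deeper stacks remain trainable.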

Handwritten Hangul Recognition Model Using Multi-label Classification

  • HANA CHOI
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.27 no.2
    • /
    • pp.135-145
    • /
    • 2023
  • Recently, as deep learning technology has developed, various deep learning techniques have been introduced into handwriting recognition, contributing greatly to performance improvement. The recognition accuracy of handwritten Hangul recognition has also improved significantly, but prior research has focused on recognizing 520 or 2,350 Hangul characters using the SERI95 or PE92 data. In the past, most expressions were possible with 2,350 Hangul characters, but as globalization progresses and information and communication technology develops, there are many cases where various foreign words need to be written in Hangul. In this paper, we propose a model that recognizes and combines the initial consonants, medial vowels, and final consonants of a Korean syllable using a multi-label classification model, and it achieves a high recognition accuracy of 98.38% when trained with PE92, the public dataset of Korean handwritten characters. In addition, although this model was trained on only 2,350 Hangul characters, it can recognize characters that are not included among them.
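
Once the three classifier heads have predicted initial, medial, and final jamo indices, composing the syllable is deterministic via the standard Unicode formula. The composition arithmetic below is a known fact about Hangul encoding; only the example index values are chosen for illustration.

```python
# Hangul syllable composition per the Unicode Standard:
# code point = 0xAC00 + (initial * 21 + medial) * 28 + final
CHOSEONG = 19    # initial consonants
JUNGSEONG = 21   # medial vowels
JONGSEONG = 28   # final consonants (index 0 = no final)

def compose(initial, medial, final=0):
    """Map predicted jamo indices to one precomposed Hangul syllable."""
    assert 0 <= initial < CHOSEONG
    assert 0 <= medial < JUNGSEONG
    assert 0 <= final < JONGSEONG
    return chr(0xAC00 + (initial * JUNGSEONG + medial) * JONGSEONG + final)

print(compose(18, 0, 4))   # initial ㅎ, medial ㅏ, final ㄴ -> '한'
print(compose(0, 0, 0))    # initial ㄱ, medial ㅏ, no final -> '가'
```

Because 19 x 21 x 28 = 11,172 combinations exist, predicting jamo independently lets the model emit syllables outside the 2,350-character training set, which is the point the abstract makes.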

Affective Computing in Education: Platform Analysis and Academic Emotion Classification

  • So, Hyo-Jeong;Lee, Ji-Hyang;Park, Hyun-Jin
    • International Journal of Advanced Smart Convergence
    • /
    • v.8 no.2
    • /
    • pp.8-17
    • /
    • 2019
  • The main purpose of this study is to explore the potential of affective computing (AC) platforms in education through two phases of research: Phase I - platform analysis and Phase II - classification of academic emotions. In Phase I, the results indicate that the existing affective analysis platforms can be largely classified into four types according to the emotion detecting methods: (a) facial expression-based platforms, (b) biometric-based platforms, (c) text/verbal tone-based platforms, and (d) mixed methods platforms. In Phase II, we conducted an in-depth analysis of the emotional experiences that a learner encounters in online video-based learning in order to establish the basis for a new classification system of online learners' emotions. Overall, positive emotions were shown more frequently and for longer than negative emotions. We categorized positive emotions into three groups based on the facial expression data: (a) confidence; (b) excitement, enjoyment, and pleasure; and (c) aspiration, enthusiasm, and expectation. The same method was used to categorize negative emotions into four groups: (a) fear and anxiety, (b) embarrassment and shame, (c) frustration and alienation, and (d) boredom. Drawing on these results, we propose a new classification scheme that can be used to measure and analyze how learners in online learning environments experience various positive and negative emotions, with facial expressions as the indicators.

Facial Expression Recognition by Combining Adaboost and Neural Network Algorithms (에이다부스트와 신경망 조합을 이용한 표정인식)

  • Hong, Yong-Hee;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.6
    • /
    • pp.806-813
    • /
    • 2010
  • A human facial expression shows human emotion most exactly, so it can be used as the most efficient tool for delivering a person's intention to a computer. For fast and exact recognition of a facial expression in a 2D image, this paper proposes a new method which integrates a Discrete Adaboost classification algorithm and a neural network based recognition algorithm. In the first step, the Adaboost algorithm finds the position and size of a face in the input image. Second, the detected face image is input into five Adaboost strong classifiers, each trained for one facial expression. Finally, a neural network based recognition algorithm, trained with the outputs of the Adaboost strong classifiers, determines the final facial expression. The proposed algorithm achieves real-time operation and enhanced accuracy by combining the speed and accuracy of the Adaboost classification algorithm with the reliability of the neural network based recognition algorithm. In this paper, the proposed algorithm recognizes five facial expressions, namely neutral, happiness, sadness, anger, and surprise, and achieves 86~95% accuracy in real time, depending on the expression type.
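
The final combination stage can be sketched as follows: five expression-specific strong-classifier scores are fed into a small neural network that picks the winning expression. The network size and random weights below are placeholders, not the paper's trained values.

```python
import numpy as np

# Expression labels in the order used by the paper's five strong classifiers.
EXPRESSIONS = ["neutral", "happiness", "sadness", "anger", "surprise"]

def mlp_decide(scores, W1, b1, W2, b2):
    """Small MLP mapping 5 Adaboost scores to a final expression label."""
    hidden = np.tanh(scores @ W1 + b1)   # one hidden layer
    logits = hidden @ W2 + b2            # one logit per expression
    return EXPRESSIONS[int(np.argmax(logits))]

rng = np.random.default_rng(42)
scores = rng.random(5)   # stand-ins for the 5 strong-classifier outputs
label = mlp_decide(scores,
                   rng.standard_normal((5, 8)), np.zeros(8),
                   rng.standard_normal((8, 5)), np.zeros(5))
print(label)
```

The design choice the abstract describes is exactly this division of labor: Adaboost supplies fast per-expression evidence, and the neural network arbitrates among the five scores.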

Learning Directional LBP Features and Discriminative Feature Regions for Facial Expression Recognition (얼굴 표정 인식을 위한 방향성 LBP 특징과 분별 영역 학습)

  • Kang, Hyunwoo;Lim, Kil-Taek;Won, Chulho
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.5
    • /
    • pp.748-757
    • /
    • 2017
  • In order to recognize facial expressions, good features that can express them are essential, as is finding the characteristic areas where facial expressions appear discriminatively. In this study, we propose a directional LBP feature for facial expression recognition and a method of finding the directional LBP operation and feature regions for facial expression classification. The proposed directional LBP features, which characterize fine facial micro-patterns, are defined by LBP operation factors (the direction and size of the operation mask) and by feature regions found through AdaBoost learning. The facial expression classifier is implemented as an SVM classifier based on the learned discriminant regions and directional LBP operation factors. In order to verify the validity of the proposed method, facial expression recognition performance was measured in terms of accuracy, sensitivity, and specificity. Experimental results show that the proposed directional LBP and its learning method are useful for facial expression recognition.
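
For reference, the plain 3x3 LBP operator that the directional variant builds on works as follows; the paper's contribution of learning the mask direction and size via AdaBoost is omitted from this sketch.

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch: neighbors thresholded by the center."""
    center = patch[1, 1]
    # clockwise neighbor order starting at the top-left pixel
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    # each neighbor >= center contributes one bit to the code
    return sum((1 << i) for i, v in enumerate(neighbors) if v >= center)

patch = np.array([[5, 9, 1],
                  [3, 6, 7],
                  [2, 8, 4]])
print(lbp_code(patch))   # bits 1, 3, 5 set -> 2 + 8 + 32 = 42
```

A histogram of such codes over a feature region is what gets fed to the SVM; a directional mask replaces the fixed 3x3 neighborhood with an oriented, resizable one.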

Study for Classification of Facial Expression using Distance Features of Facial Landmarks (얼굴 랜드마크 거리 특징을 이용한 표정 분류에 대한 연구)

  • Bae, Jin Hee;Wang, Bo Hyeon;Lim, Joon S.
    • Journal of IKEEE
    • /
    • v.25 no.4
    • /
    • pp.613-618
    • /
    • 2021
  • Facial expression recognition has long been established as a subject of continuous research in various fields. In this paper, the relationships between landmarks are analyzed using features obtained by calculating the distances between facial landmarks in an image, and five facial expressions are classified. We increased data and label reliability based on our labeling work with multiple observers. In addition, faces were recognized from the original data, and landmark coordinates were extracted and used as features. A genetic algorithm was used to select the features that are relatively more helpful for classification. We performed facial expression classification and analysis with the method proposed in this paper, which shows the validity and effectiveness of the proposed method.
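
The distance-feature step can be sketched like this. Which landmark pairs matter is exactly what the paper's genetic algorithm selects, so the pairs and coordinates below are invented examples only.

```python
import math

def pairwise_distances(landmarks, pairs):
    """Euclidean distances between selected facial-landmark pairs."""
    return [math.dist(landmarks[i], landmarks[j]) for i, j in pairs]

# toy landmarks: mouth corners and eye centers (coordinates invented)
landmarks = {"mouth_l": (30, 70), "mouth_r": (70, 70),
             "eye_l": (35, 40), "eye_r": (65, 40)}

features = pairwise_distances(landmarks, [("mouth_l", "mouth_r"),
                                          ("eye_l", "mouth_l")])
print(features)   # e.g. mouth width and an eye-to-mouth distance
```

A genetic algorithm would then search over subsets of such pairs, keeping the distance features whose inclusion improves classification accuracy.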

A Comparative Analysis on Classification Systems for Children's Materials of Internet Portals and Online Bookstores (인터넷포털과 인터넷서점의 어린이자료 분류시스템의 비교분석)

  • Bae, Yeong-Hwal;Oh, Dong-Geun;Yeo, Ji-Suk
    • Journal of Korean Library and Information Science Society
    • /
    • v.39 no.3
    • /
    • pp.321-344
    • /
    • 2008
  • This study compares the classification systems of major internet portals, of their sub-portals specialized for children, and of major online bookstores. It compares and analyzes their major directories and suggests recommendations not only to improve their own systems but also to apply to the development of classification systems for children's libraries. Some of them are: (1) The system should reflect the information requests and use behaviors of child netizens. (2) It should select terms reflecting children's viewpoints and expressions and suggest guidelines by age. (3) It should maintain clear hierarchies and grouping for the accessibility and convenience of the users. (4) It will be helpful to establish categories that mix subject- or concept-based categories with children's activities and objects. (5) It will also be helpful to establish categories based on the curricula, supplemented by categories that stimulate imagination and interest, and to subdivide them by subject.
