• Title/Summary/Keyword: Features

A Study on On-line Recognition System of Korean Characters (온라인 한글자소 인식시스템의 구성에 관한 연구)

  • Choi, Seok;Kim, Gil-Jung;Huh, Man-Tak;Lee, Jong-Hyeok;Nam, Ki-Gon;Yoon, Tae-Hoon;Kim, Jae-Chang;Lee, Ryang-Seong
    • Journal of the Korean Institute of Telematics and Electronics B / v.30B no.9 / pp.94-105 / 1993
  • In this paper, a Korean character recognition system using a neural network is proposed. The system is a multilayer neural network based on the masking field model, consisting of an input layer, four feature extraction layers which extract type, direction, stroke, and connection features, and an output layer which outputs recognized character codes. First, 4x4 subpatterns of an NxN character pattern stored in the input buffer are applied to the feature extraction layers sequentially. Each feature extraction layer then extracts its features in turn: type features for direction and connection are extracted by the type feature extraction layer, direction features for stroke by the direction feature extraction layer, and stroke and connection features for the recognition of characters by the stroke and connection feature extraction layers, respectively. The stroke and connection features are saved sequentially in the sequential buffer layer, and using these features the characters are recognized in the output layer. The recognition results of this system in tests with 8 single consonants and 6 single vowels are promising.

  • PDF
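
A minimal sketch of the data flow described in the abstract above: 4x4 subpatterns of an NxN character pattern are scanned sequentially and passed through four feature extraction stages. The layer weights, layer sizes, and scan order below are illustrative placeholders, not the paper's masking-field dynamics.

```python
import numpy as np

def subpatterns_4x4(pattern):
    """Slide a 4x4 window over an NxN binary character pattern,
    yielding subpatterns in raster order (an assumed scan order)."""
    n = pattern.shape[0]
    for r in range(n - 3):
        for c in range(n - 3):
            yield pattern[r:r + 4, c:c + 4]

# Hypothetical stand-ins for the four feature extraction layers
# (type, direction, stroke, connection): thresholded random linear
# maps, not the masking-field model itself.
rng = np.random.default_rng(0)
W = {name: rng.normal(size=(8, 16)) for name in
     ("type", "direction", "stroke", "connection")}

def extract_features(sub):
    x = sub.reshape(-1).astype(float)           # 4x4 -> 16-vector
    return {name: (w @ x > 0.5).astype(int)     # binary feature code
            for name, w in W.items()}

pattern = (rng.random((16, 16)) > 0.7).astype(int)  # toy 16x16 glyph
sequence = [extract_features(s) for s in subpatterns_4x4(pattern)]
print(len(sequence), "subpatterns; stroke code of first:",
      sequence[0]["stroke"])
```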

Activities on Naming Undersea Features in Korea (한국에서 해저지명 부여를 위한 활동)

  • Sung, Hyo-Hyun
    • Journal of the Korean Geographical Society / v.41 no.5 s.116 / pp.600-622 / 2006
  • The consistent use of appropriate names for undersea features is an essential element of effective communication among ocean scientists. The correct use of names on bathymetric and nautical charts provides benefits to national and international communities. Naming the marine geographical features within the territorial waters and the EEZ is also expected to contribute to securing the territorial waters and preserving various marine resources. This paper addresses a variety of activities in which geographic naming issues for undersea features arise. Attention is given to 1) the general history of activities on naming undersea features in Korea; 2) development of the guideline for standardization of marine geographical names; 3) geomorphological characteristics of undersea features in the East Sea; and 4) a future plan for a systematic analysis for naming marine geographical features in Korea.

International Practices of Naming Undersea Features and the Implication for Naming Those in the East Sea (해저지명 제정의 국제적 관례와 동해 해저지명 제정에의 시사점)

  • Choo, Sung-Jae
    • Journal of the Korean Geographical Society / v.41 no.5 s.116 / pp.630-638 / 2006
  • This paper reviews international practices of naming undersea features, centered on SCUFN (Sub-Committee on Undersea Feature Names), and draws some implications for the newly announced undersea feature names in the East Sea. Even though the history of naming undersea features in Korea is not long, recent years have witnessed considerable progress in finding and naming them. In view of the SCUFN guidelines for naming undersea features, most of these names are judged to have been appropriately selected. However, more justification is needed for specific terms using historical persons or symbolic terms, and for the two names proposed for features already listed in the Gazetteer. For further work on naming undersea features, three steps are suggested: first, conducting surveys and accumulating data on undersea features; second, naming and announcing newly found features and publicizing them; and third, making attempts to achieve international standardization of domestically announced names.

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu;Park, Han-Hoon;Shin, Hong-Chang;Jin, Yoon-Jong;Park, Jong-Il
    • Proceedings of the Korea HCI Society Conference (한국HCI학회 학술대회논문집) / 2007.02a / pp.667-674 / 2007
  • Facial expressions provide significant clues about one's emotional state; however, it has always been a great challenge for machines to recognize facial expressions effectively and reliably. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expression. Our method optimizes the information gain heuristics of the ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use the minimal reasonable facial features, suggested by the information gain heuristics of the ID3 tree, to represent the geometric face model. For feature extraction, our method proceeds as follows. Features are first detected and then carefully "selected." Feature "selection" means differentiating features with high variability from those with low variability, so that each feature's motion pattern can be estimated effectively. For each facial feature, motion analysis is performed adaptively; that is, each facial feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes the problems raised by previous methods. First of all, it is simple but effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations; rather, it exploits a few selected expressive features' motion energy values (acquired from an intensity-based threshold). Lastly, our method gives reliable recognition rates, with an overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.

  • PDF
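
Since the classifier above is an ID3 tree ranked by information gain, a small self-contained computation of that criterion may help; the feature values and labels below are toy data, not drawn from JAFFE.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature_values, labels):
    """Gain = H(labels) - sum_v P(v) * H(labels | feature = v),
    the criterion ID3 uses to rank candidate features."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

# Toy data: a binarized motion-energy feature against expression labels.
feature = [1, 1, 0, 0, 1, 0, 1, 0]
labels = ["happy", "happy", "sad", "sad", "happy", "sad", "sad", "happy"]
print("information gain:", round(information_gain(feature, labels), 3))
```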

Emotion Recognition using Facial Thermal Images

  • Eom, Jin-Sup;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.3 / pp.427-435 / 2012
  • The aim of this study is to investigate facial temperature changes induced by facial expression and emotional state in order to recognize a person's emotion using facial thermal images. Background: Facial thermal images have two advantages compared to visual images. Firstly, facial temperature measured by a thermal camera does not depend on skin color, darkness, or lighting condition. Secondly, facial thermal images are changed not only by facial expression but also by emotional state. To our knowledge, no study has concurrently investigated these two sources of facial temperature change. Method: 231 students participated in the experiment. Four kinds of stimuli inducing anger, fear, boredom, and a neutral state were presented to participants, and facial temperatures were measured by an infrared camera. Each stimulus consisted of a baseline period and an emotion period; the baseline period lasted 1 min and the emotion period 1~3 min. In the data analysis, the temperature differences between the baseline and emotion states were analyzed. Eyes, mouth, and glabella were selected as facial expression features, and forehead, nose, and cheeks were selected as emotional state features. Results: The temperatures of the eye, mouth, glabella, forehead, and nose areas decreased significantly during the emotional experience, and the changes differed significantly by the kind of emotion. Linear discriminant analysis for emotion recognition showed that the correct classification percentage for the four emotions was 62.7% when using both facial expression features and emotional state features. The accuracy decreased slightly but significantly to 56.7% when using only facial expression features, and was 40.2% when using only emotional state features. Conclusion: Facial expression features are essential in emotion recognition, but emotional state features are also important for classifying the emotion. Application: The results of this study can be applied to human-computer interaction systems in workplaces or automobiles.
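
A minimal sketch of the study's classification step, assuming synthetic per-region temperature differences in place of the real infrared measurements; scikit-learn's LinearDiscriminantAnalysis plays the role of the linear discriminant analysis reported above.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the study's features: temperature
# differences (emotion minus baseline) at six facial regions
# (eyes, mouth, glabella, forehead, nose, cheeks).
rng = np.random.default_rng(1)
n_per_class = 40
emotions = ["anger", "fear", "boredom", "neutral"]
X = np.vstack([rng.normal(loc=i * 0.1, scale=0.3, size=(n_per_class, 6))
               for i in range(len(emotions))])
y = np.repeat(emotions, n_per_class)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)
print("mean classification accuracy:", scores.mean().round(3))
```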

A Korean Emotion Features Extraction Method and Their Availability Evaluation for Sentiment Classification (감정 분류를 위한 한국어 감정 자질 추출 기법과 감정 자질의 유용성 평가)

  • Hwang, Jae-Won;Ko, Young-Joong
    • Korean Journal of Cognitive Science / v.19 no.4 / pp.499-517 / 2008
  • In this paper, we propose an effective emotion feature extraction method for Korean and evaluate the availability of the extracted features for sentiment classification. Korean emotion features are expanded from several representative emotion words, and they play an important role in building an effective sentiment classification system. First, synonym information from an English word thesaurus is used to extract effective emotion features, and the extracted English emotion features are then translated into Korean. To evaluate the extracted Korean emotion features, we represent each document using the extracted features and classify it using an SVM (Support Vector Machine). In the experimental results, the sentiment classification system using the extracted Korean emotion features obtained improved performance (by 14.1%) over a system using content-word-based features, which are generally used in common text classification systems.

  • PDF
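
A minimal sketch of the evaluation setup, assuming a tiny hypothetical emotion-feature lexicon and toy documents in place of the paper's thesaurus-expanded feature set and corpus; scikit-learn's LinearSVC stands in for the SVM.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Hypothetical emotion-feature lexicon (placeholder for the paper's
# thesaurus-expanded, English-to-Korean-translated feature set).
emotion_features = ["기쁘다", "슬프다", "화나다", "무섭다", "즐겁다", "우울하다"]

docs = ["오늘은 정말 기쁘다 즐겁다",   # "today I am really glad and joyful"
        "너무 슬프다 우울하다",        # "so sad and depressed"
        "그 소식에 화나다",            # "angry at the news"
        "밤길이 무섭다 슬프다"]        # "the night road is scary and sad"
labels = ["pos", "neg", "neg", "neg"]

# Restricting the vocabulary to the emotion features mirrors the idea
# of emotion-feature vectors rather than all-content-word vectors.
vec = CountVectorizer(vocabulary=emotion_features)
X = vec.fit_transform(docs)

clf = LinearSVC().fit(X, labels)
print(clf.predict(vec.transform(["기쁘다 그리고 즐겁다"])))
```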

Realtime Markerless 3D Object Tracking for Augmented Reality (증강현실을 위한 실시간 마커리스 3차원 객체 추적)

  • Min, Jae-Hong;Islam, Mohammad Khairul;Paul, Anjan Kumar;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.14 no.2 / pp.272-277 / 2010
  • AR (Augmented Reality) needs a medium between the real and virtual worlds, and recognition techniques are necessary to track an object continuously. Optical tracking using markers is mainly used, but it takes time and is inconvenient to attach markers onto the target objects. Therefore, many researchers are now trying to develop markerless tracking techniques. In this paper, we extract features and 3D positions from 3D objects and propose realtime tracking based on these features and positions, rather than using only coplanar features and 2D positions. We extract features using SURF, obtain the rotation matrix and translation vector of the 3D object using POSIT with these features, and track the object in real time. If the extracted features are not sufficient and tracking fails, new features are extracted and re-matched to recover the tracking.
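
A minimal sketch of the feature-then-pose pipeline, with two substitutions named plainly: ORB stands in for SURF (SURF is patented and ships only in opencv-contrib's nonfree module), and cv2.solvePnP stands in for the legacy POSIT routine; the frame, 3D model points, and camera matrix are placeholders.

```python
import cv2
import numpy as np

img = np.random.randint(0, 256, (480, 640), np.uint8)  # stand-in frame
orb = cv2.ORB_create(nfeatures=500)                    # ORB in place of SURF
keypoints, descriptors = orb.detectAndCompute(img, None)

if len(keypoints) >= 6:
    # Assume known 3D model coordinates for six matched keypoints
    # (the paper derives these for the tracked object); placeholders here.
    object_pts = np.random.rand(6, 3).astype(np.float32)
    image_pts = np.array([kp.pt for kp in keypoints[:6]], np.float32)
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], np.float32)

    # solvePnP in place of POSIT: rotation and translation of the object.
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    if ok:
        R, _ = cv2.Rodrigues(rvec)  # rotation matrix from rotation vector
        print("R:\n", R, "\nt:", tvec.ravel())
```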

Face classification and analysis based on geometrical feature of face (얼굴의 기하학적 특징정보 기반의 얼굴 특징자 분류 및 해석 시스템)

  • Jeong, Kwang-Min;Kim, Jung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.7 / pp.1495-1504 / 2012
  • This paper proposes an algorithm to classify and analyze facial features such as the eyebrows, eyes, mouth, and chin based on the geometric features of the face. As a preprocessing step, the algorithm extracts the facial features such as the eyebrows, eyes, nose, mouth, and chin. From the extracted facial features, it detects shape and form information and the ratios of distances between the features, and formulates them into evaluation functions to classify 12 eyebrow types, 3 eye types, 9 mouth types, and 4 chin types. Using these facial features, it analyzes a face. The face analysis algorithm contains information about the pixel distribution and gradient of each feature; in other words, the algorithm analyzes a face by comparing such information across the features.
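
A minimal sketch of evaluation functions built from distance ratios between facial features, as the abstract describes; the landmark coordinates, ratio definitions, and thresholds are invented for illustration (the paper defines 12 eyebrow, 3 eye, 9 mouth, and 4 chin types).

```python
import numpy as np

# Hypothetical landmark points (x, y) for one face; real input would
# come from the feature extraction step.
landmarks = {
    "eye_l": (120, 150), "eye_r": (200, 150),
    "mouth_l": (140, 240), "mouth_r": (180, 240),
    "chin": (160, 300),
}

def dist(a, b):
    return float(np.hypot(a[0] - b[0], a[1] - b[1]))

eye_span = dist(landmarks["eye_l"], landmarks["eye_r"])
mouth_width = dist(landmarks["mouth_l"], landmarks["mouth_r"])
eye_center = ((120 + 200) / 2, 150)
face_len = dist(eye_center, landmarks["chin"])

ratios = {"mouth/eyes": mouth_width / eye_span,
          "eyes/length": eye_span / face_len}

# Toy evaluation function: threshold one ratio to pick a mouth type
# (two invented types shown, not the paper's nine).
mouth_type = "wide" if ratios["mouth/eyes"] > 0.55 else "narrow"
print(ratios, "->", mouth_type)
```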

Correlation-based Automatic Image Captioning (상호 관계 기반 자동 이미지 주석 생성)

  • Yang, Hyungjeong;Duygulu, Pinar;Faloutsos, Christos
    • Journal of KIISE: Software and Applications / v.31 no.10 / pp.1386-1399 / 2004
  • This paper presents correlation-based automatic image captioning. Given a training set of annotated images, we want to discover correlations between visual features and textual features, so that we can automatically generate descriptive textual features for a new, unseen image. We develop models with multiple design alternatives, such as 1) adaptively clustering visual features, 2) weighting visual features and textual features, and 3) reducing dimensionality for noise suppression. We experiment thoroughly on 10 data sets of various content styles from the Corel image database, about 680MB. The major contributions of this work are: (a) we show that carefully weighting visual and textual features, as well as clustering visual features adaptively, leads to consistent performance improvements, and (b) our proposed methods achieve a relative improvement of up to 45% in annotation accuracy over the state-of-the-art EM approach.
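
A minimal sketch of the cluster-then-correlate idea, assuming toy region features and caption words; k-means with a fixed k replaces the paper's adaptive clustering, and the feature weighting and dimensionality reduction steps are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
train_feats = rng.random((60, 8))   # 60 regions, 8-dim visual features
train_words = rng.choice(["sky", "water", "grass"], size=60)

# Cluster visual features into "blobs" (fixed k, unlike the paper's
# adaptive choice of cluster count).
k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_feats)

# Correlate clusters with caption words via co-occurrence counts.
words = sorted(set(train_words))
counts = np.zeros((k, len(words)))
for c, w in zip(km.labels_, train_words):
    counts[c, words.index(w)] += 1
p_word_given_cluster = counts / counts.sum(axis=1, keepdims=True)

# Caption a new image: map its regions to clusters, sum word scores.
new_regions = rng.random((5, 8))
scores = p_word_given_cluster[km.predict(new_regions)].sum(axis=0)
top = [words[i] for i in np.argsort(scores)[::-1][:2]]
print("predicted caption words:", top)
```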

Feature Extraction of Asterias Amurensis by Using the Multi-Directional Linear Scanning and Convex Hull (다방향 선형 스캐닝과 컨벡스 헐을 이용한 아무르불가사리의 특징 추출)

  • Shin, Hyun-Deok;Jeon, Young-Cheol
    • Journal of the Korea Society of Computer and Information / v.16 no.3 / pp.99-107 / 2011
  • Pattern-based feature extraction of Asterias amurensis can neither extract all the concave and convex features of the starfish nor classify them as concave or convex. Concave and convex regions are important structural features of Asterias amurensis that should be found, and classifying them as concave or convex is also necessary for later recognition. Accordingly, this study suggests a technique to extract the concave and convex features, the main features of Asterias amurensis. The technique classifies the concave and convex features by using multi-directional linear scanning, forms candidate groups of concave and convex feature points, decides the feature points from the candidate groups, and applies the convex hull algorithm to the extracted feature points. The suggested technique efficiently extracts and separates the concave and convex features, and is therefore expected to contribute to future studies on the recognition of Asterias amurensis.
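
A minimal sketch of the convex hull step, assuming hypothetical candidate points on a starfish outline in place of those found by the multi-directional linear scanning; points on the hull are read as convex features (arm tips) and the remaining candidates as concave features (notches).

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical candidate feature points on a five-armed outline:
# five arm tips interleaved with five notches between the arms.
pts = np.array([[0, 5], [1, 1], [5, 2], [2, -1], [3, -5],
                [0, -2], [-3, -5], [-2, -1], [-5, 2], [-1, 1]], float)

hull = ConvexHull(pts)
on_hull = set(hull.vertices)

convex_feats = [tuple(p) for i, p in enumerate(pts) if i in on_hull]
concave_feats = [tuple(p) for i, p in enumerate(pts) if i not in on_hull]
print("convex (arm tips):", convex_feats)
print("concave (notches):", concave_feats)
```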