• Title/Summary/Keyword: features extraction

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu; Park, Han-Hoon; Shin, Hong-Chang; Jin, Yoon-Jong; Park, Jong-Il
    • 한국HCI학회:학술대회논문집 (Korean HCI Society Conference Proceedings) / 2007.02a / pp.667-674 / 2007
  • Facial expressions provide significant clues about one's emotional state; however, it has always been a great challenge for machines to recognize facial expressions effectively and reliably. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expressions. Our method optimizes the information gain heuristic of the ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use a minimal reasonable set of facial features, suggested by the information gain heuristic of the ID3 tree, to represent the geometric face model. For feature extraction, our method proceeds as follows. Features are first detected and then carefully "selected." Feature "selection" means differentiating features with high variability from those with low variability, so that each feature's motion pattern can be estimated effectively. Motion analysis is then performed adaptively for each facial feature: its motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1728 possible facial expressions) and test images from the JAFFE database. The proposed method overcomes the problems of previous methods. First of all, it is simple but effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations; instead, it exploits the motion energy values of a few selected expressive features (acquired from an intensity-based threshold). Lastly, it gives reliable results, with an overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.
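
A minimal Python sketch of the information-gain heuristic this abstract leans on. It is not the authors' implementation: the feature names, discretized motion-energy values, and labels below are invented, and only the feature-ranking step (not the full ID3 tree or the motion analysis) is shown.

    # Rank candidate facial features by ID3-style information gain over
    # discretized motion-energy values. All data here is a toy example.
    from collections import Counter
    from math import log2

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

    def information_gain(samples, labels, feature):
        """Entropy reduction obtained by splitting on one feature's value."""
        total = entropy(labels)
        by_value = {}
        for sample, label in zip(samples, labels):
            by_value.setdefault(sample[feature], []).append(label)
        remainder = sum(len(subset) / len(labels) * entropy(subset)
                        for subset in by_value.values())
        return total - remainder

    # Toy samples: motion energy quantized to low/high for three candidate features.
    samples = [
        {"brow": "high", "mouth_corner": "high", "cheek": "low"},   # happy
        {"brow": "low",  "mouth_corner": "high", "cheek": "low"},   # happy
        {"brow": "high", "mouth_corner": "low",  "cheek": "low"},   # surprise
        {"brow": "high", "mouth_corner": "low",  "cheek": "high"},  # surprise
    ]
    labels = ["happy", "happy", "surprise", "surprise"]

    # Features with the highest gain would form the "minimal reasonable" set.
    ranking = sorted(samples[0], key=lambda f: information_gain(samples, labels, f),
                     reverse=True)
    print(ranking)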

Feature Extraction of Asterias Amurensis by Using the Multi-Directional Linear Scanning and Convex Hull (다방향 선형 스캐닝과 컨벡스 헐을 이용한 아무르불가사리의 특징 추출)

  • Shin, Hyun-Deok; Jeon, Young-Cheol
    • Journal of the Korea Society of Computer and Information / v.16 no.3 / pp.99-107 / 2011
  • Pattern-based feature extraction for Asterias amurensis has difficulty extracting all of the starfish's concave and convex features and cannot classify them as concave or convex. Concavities and convexities are important structural features of Asterias amurensis that must be found, and classifying them is also necessary for later recognition of the starfish. Accordingly, this study proposes a technique to extract the concave and convex features, the main features of Asterias amurensis. The technique forms candidate groups of concave and convex feature points by multi-directional linear scanning, decides the feature points within each candidate group, and applies a convex hull algorithm to the extracted feature points, thereby classifying them as concave or convex. The proposed technique efficiently extracts the concave and convex features of Asterias amurensis while separating the two classes, and is therefore expected to contribute to future studies on the recognition of Asterias amurensis.
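
A rough Python illustration of the convex-hull side of this idea (not the paper's method): candidate points that lie on the convex hull behave as convex features, while points pulled inside the hull behave as concave ones. The candidate points below are invented, and the multi-directional linear scanning step that produces them in the paper is not reproduced.

    # Classify candidate feature points as convex or concave via convex-hull membership.
    def convex_hull(points):
        """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        return lower[:-1] + upper[:-1]

    # Toy candidates: "arm tips" far from the centre and points pulled inward between them.
    candidates = [(10, 0), (3, 3), (0, 10), (-3, 3), (-10, 0), (-3, -3), (0, -10), (3, -3)]
    hull = set(convex_hull(candidates))
    convex_pts  = [p for p in candidates if p in hull]       # arm tips
    concave_pts = [p for p in candidates if p not in hull]   # notches between arms
    print("convex:", convex_pts)
    print("concave:", concave_pts)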

Study of Emotion Recognition based on Facial Image for Emotional Rehabilitation Biofeedback (정서재활 바이오피드백을 위한 얼굴 영상 기반 정서인식 연구)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.957-962 / 2010
  • To recognize human emotion from a facial image, we first need to extract emotional features from the image using a feature extraction algorithm, and then classify the emotional state using a pattern classification method. The AAM (Active Appearance Model) is a well-known method that can represent a non-rigid object such as a face or a facial expression. The Bayesian network is a probability-based classifier that can represent the probabilistic relationships among a set of facial features. In this paper, our approach to facial feature extraction is a proposed method that combines AAM with FACS (Facial Action Coding System) to automatically model and extract the facial emotional features. To recognize the facial emotion, we use DBNs (Dynamic Bayesian Networks) to model and understand the temporal phases of facial expressions in image sequences. The emotion recognition results can be used for biofeedback-based rehabilitation of the emotionally disabled.
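
The sketch below is only a toy stand-in for the AAM+FACS pipeline named in this abstract: landmark displacements between a neutral and an expressive frame are thresholded into pseudo action units and matched against a tiny rule table. Real AAM fitting and DBN inference are far more involved, and every coordinate, landmark index, and rule here is invented.

    import numpy as np

    # Hypothetical AAM landmark subsets (indices into a landmark array).
    LANDMARKS = {"brow": [0, 1], "lip_corner": [2, 3], "jaw": [4]}

    def pseudo_aus(neutral, expressive, thresh=2.0):
        """Mark an AU-like feature active if its landmarks moved more than `thresh` pixels."""
        aus = {}
        for name, idx in LANDMARKS.items():
            motion = np.linalg.norm(expressive[idx] - neutral[idx], axis=1).mean()
            aus[name] = motion > thresh
        return aus

    RULES = {  # crude stand-in for probabilistic (DBN) inference
        ("brow", "lip_corner"): "happiness",
        ("brow",): "surprise",
        ("lip_corner",): "happiness",
        ("jaw",): "surprise",
    }

    def classify(aus):
        active = tuple(sorted(k for k, v in aus.items() if v))
        return RULES.get(active, "neutral")

    neutral    = np.array([[10, 20], [30, 20], [15, 50], [35, 50], [25, 60]], float)
    expressive = np.array([[10, 17], [30, 17], [12, 47], [38, 47], [25, 60]], float)
    print(classify(pseudo_aus(neutral, expressive)))   # -> happiness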

Automatic Extraction and Measurement of Visual Features of Mushroom (Lentinus edodes L.) (표고 외관 특징점의 자동 추출 및 측정)

  • Hwang, Heon; Lee, Yong-Guk
    • Journal of Bio-Environment Control / v.1 no.1 / pp.37-51 / 1992
  • Quantizing and extracting visual features of the mushroom (Lentinus edodes L.) are crucial to sorting and grading automation, growth state measurement, and dried-performance indexing. A computer image processing system was utilized for the extraction and measurement of visual features of the front and back sides of the mushroom. The image processing system is composed of an IBM PC compatible 386DK, an ITEX PCVISION Plus frame grabber, a B/W CCD camera, a VGA color graphic monitor, and an image output RGB monitor. In this paper, an automatic thresholding algorithm was developed to yield a segmented binary image representing the skin states of the front and back sides. An eight-directional Freeman chain coding was modified to resolve edge disconnectivity by gradually expanding the mask size from 3×3 to 9×9. Real-scaled geometric quantities of the object were extracted directly from the 8-directional chain elements. The external shape of the mushroom was analyzed and converted to quantitative feature patterns. Efficient algorithms for extracting the selected feature patterns and recognizing the front and back sides were developed. The developed algorithms were coded in a menu-driven way using the MS_C language Ver. 6.0, PC VISION PLUS library functions, and VGA graphic functions.
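
A small Python sketch of 8-directional Freeman chain coding in the spirit of this entry: a boundary pixel sequence is encoded as direction codes, and a real-scaled perimeter is read directly off the chain elements (axial steps count 1, diagonal steps √2). The boundary is invented, and the paper's automatic thresholding and 3×3-to-9×9 mask expansion for broken edges are not shown.

    from math import sqrt

    # Direction codes 0..7: E, NE, N, NW, W, SW, S, SE, with y growing downward.
    DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]
    STEP = {k: (1.0 if dx == 0 or dy == 0 else sqrt(2)) for k, (dx, dy) in enumerate(DIRS)}

    def encode(boundary):
        """Turn consecutive boundary pixels into Freeman chain elements."""
        code = []
        for (x0, y0), (x1, y1) in zip(boundary, boundary[1:]):
            code.append(DIRS.index((x1 - x0, y1 - y0)))
        return code

    def perimeter(code, pixel_size=1.0):
        """Real-scaled perimeter accumulated directly from the chain elements."""
        return pixel_size * sum(STEP[k] for k in code)

    # A small closed boundary (a diamond traced clockwise back to its start).
    boundary = [(2, 0), (3, 1), (2, 2), (1, 1), (2, 0)]
    code = encode(boundary)
    print(code)                  # [7, 5, 3, 1]
    print(perimeter(code, 0.1))  # perimeter in millimetres if one pixel is 0.1 mm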

Recent Advances in Feature Detectors and Descriptors: A Survey

  • Lee, Haeseong; Jeon, Semi; Yoon, Inhye; Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing / v.5 no.3 / pp.153-163 / 2016
  • Local feature extraction methods for images and videos are widely applied in the fields of image understanding and computer vision. However, robust features are detected differently when using the latest feature detectors and descriptors because of diverse image environments. This paper analyzes various feature extraction methods by summarizing algorithms, specifying properties, and comparing performance. We analyze eight feature extraction methods. The performance of feature extraction in various image environments is compared and evaluated. As a result, the feature detectors and descriptors can be used adaptively for image sequences captured under various image environments. Also, the evaluation of feature detectors and descriptors can be applied to driving assistance systems, closed circuit televisions (CCTVs), robot vision, etc.
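
A minimal sketch, assuming OpenCV is installed, of how such detector/descriptor pairs can be compared on the same image pair; it is not the survey's benchmark protocol. Two synthetic images are generated so the snippet is self-contained; with real data you would load video frames instead.

    import cv2
    import numpy as np

    # Build a simple test image and a rotated copy of it.
    img1 = np.zeros((200, 200), np.uint8)
    cv2.rectangle(img1, (40, 40), (120, 120), 255, -1)
    cv2.circle(img1, (150, 70), 25, 180, -1)
    M = cv2.getRotationMatrix2D((100, 100), 15, 1.0)   # 15-degree rotation
    img2 = cv2.warpAffine(img1, M, (200, 200))

    detectors = {"ORB": cv2.ORB_create(), "AKAZE": cv2.AKAZE_create()}
    for name, det in detectors.items():
        kp1, des1 = det.detectAndCompute(img1, None)
        kp2, des2 = det.detectAndCompute(img2, None)
        if des1 is None or des2 is None:
            print(f"{name}: no features detected")
            continue
        # Both ORB and AKAZE produce binary descriptors, so Hamming distance applies.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        print(f"{name}: {len(kp1)} vs {len(kp2)} keypoints, {len(matches)} cross-checked matches")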

Relation Extraction Using Convolution Tree Kernel Expanded with Entity Features

  • Qian, Longhua; Zhou, Guodong; Zhu, Qiaomin; Qian, Peide
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.415-421 / 2007
  • This paper proposes a convolution tree kernel-based approach for relation extraction in which the parse tree is expanded with entity features such as entity type, subtype, and mention level. Our study indicates that our method can not only effectively capture both the syntactic structure and the entity information of relation instances, but can also avoid the difficulty of tuning the parameters in composite kernels. We also demonstrate that predicate verb information can be used to further improve performance, though its enhancement is limited. Evaluation on the ACE2004 benchmark corpus shows that our system slightly outperforms both the previous best-reported feature-based and kernel-based systems.
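
A rough Python sketch of a Collins-Duffy style convolution tree kernel in which entity type tags are appended to phrase labels, illustrating the "parse tree expanded with entity features" idea. The toy trees and tags below are invented, not ACE2004 data, and the paper's exact feature set and kernel variant are not reproduced.

    LAMBDA = 0.5  # decay factor for larger subtree fragments

    def production(node):
        return (node[0], tuple(child[0] if isinstance(child, tuple) else child
                               for child in node[1:]))

    def delta(n1, n2):
        """Count of common subtree fragments rooted at n1 and n2 (Collins & Duffy style)."""
        if production(n1) != production(n2):
            return 0.0
        children1 = [c for c in n1[1:] if isinstance(c, tuple)]
        children2 = [c for c in n2[1:] if isinstance(c, tuple)]
        if not children1:          # pre-terminal with an identical word
            return LAMBDA
        score = LAMBDA
        for c1, c2 in zip(children1, children2):
            score *= 1.0 + delta(c1, c2)
        return score

    def nodes(tree):
        yield tree
        for child in tree[1:]:
            if isinstance(child, tuple):
                yield from nodes(child)

    def tree_kernel(t1, t2):
        return sum(delta(a, b) for a in nodes(t1) for b in nodes(t2))

    # Entity type appended to the entity mention's phrase label ("NP-PER", "NP-ORG").
    t1 = ("S", ("NP-PER", ("NNP", "Smith")), ("VP", ("VBZ", "heads"), ("NP-ORG", ("NNP", "Acme"))))
    t2 = ("S", ("NP-PER", ("NNP", "Jones")), ("VP", ("VBZ", "heads"), ("NP-ORG", ("NNP", "Initech"))))
    print(tree_kernel(t1, t2))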

A Study on the Feature Extraction for High Speed Character Recognition -By Using Iterative Extraction and Hierarchical Formation of Directional Information- (고속 문자 인식을 위한 특징량 추출에 관한 연구 - 방향정보의 반복적 추출과 특징량의 계층성을 이용하여 -)

  • 강선미; 이기용; 양윤모; 김덕진
    • Journal of the Korean Institute of Telematics and Electronics B / v.29B no.11 / pp.102-110 / 1992
  • In this paper, a new method of character recognition is proposed. It uses density information, in addition to the positional and directional information generally used, to recognize a character. Four directional feature primitives are extracted from the thinning templates, based on the observation that the template outputs generally have directional properties; this makes a simple and fast feature extraction scheme possible. Features are organized in a recursive nonary tree (N-tree) that corresponds to the normalized character area. Each node of the N-tree has four directional features that are the sums of the features of its nine sub-nodes. Each feature primitive from the templates is added to the corresponding leaf and then summed successively up to the upper nodes. Recognition can be accomplished by using an appropriate feature level of the N-tree. The effectiveness of each node's feature vector was also tested by experiment. A method to implement the proposed feature vector organization algorithm in hardware is proposed as well. The third-generation node, which is 4×4, is used as a unit processing element to extract features, and it was implemented in hardware. As a result, we observed that it is possible to extract the feature vector for real-time processing.
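
A hedged Python sketch of the nonary-tree idea: a normalized character image is split into 3×3 cells, each cell receives a 4-direction feature count, and the parent node's features are the sums of its nine children. The gradient-based direction quantization is a stand-in for the paper's thinning-template primitives, and the image is a toy example.

    import numpy as np

    def direction_counts(patch):
        """Quantize gradient orientation into 4 bins and count contributing pixels."""
        gy, gx = np.gradient(patch.astype(float))
        mag = np.hypot(gx, gy)
        ang = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0   # fold opposite gradients together
        bins = np.digitize(ang, [22.5, 67.5, 112.5, 157.5]) % 4  # four gradient-orientation bins
        counts = np.zeros(4)
        for b in range(4):
            counts[b] = mag[(bins == b) & (mag > 0)].size
        return counts

    def n_tree_features(img):
        """Leaf features for a 3x3 split plus the parent node as the sum of its children."""
        h, w = img.shape
        leaves = []
        for i in range(3):
            for j in range(3):
                patch = img[i*h//3:(i+1)*h//3, j*w//3:(j+1)*w//3]
                leaves.append(direction_counts(patch))
        parent = np.sum(leaves, axis=0)
        return np.array(leaves), parent

    # Toy 30x30 "character": a diagonal stroke.
    img = np.zeros((30, 30))
    np.fill_diagonal(img, 1.0)
    leaves, parent = n_tree_features(img)
    print(leaves.shape, parent)   # (9, 4) leaf features and their 4-direction sum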

Time-Frequency Feature Extraction of Broadband Echo Signals from Individual Live Fish for Species Identification (활어 개체어의 광대역 음향산란신호로부터 어종식별을 위한 시간-주파수 특징 추출)

  • Lee, Dae-Jae; Kang, Hee-Young; Pak, Yong-Ye
    • Korean Journal of Fisheries and Aquatic Sciences / v.49 no.2 / pp.214-223 / 2016
  • Joint time-frequency images of the broadband acoustic echoes of six fish species were obtained using the smoothed pseudo-Wigner-Ville distribution (SPWVD). The acoustic features were extracted by changing the sliced window widths, dividing the time window at a 0.02-ms interval and the frequency window at a 20-kHz bandwidth. The 22 spectrum amplitudes obtained in the time and frequency domains of the SPWVD images were fed as input parameters into an artificial neural network (ANN) to verify their effectiveness as species-dependent features for fish species identification. The results showed that the time-frequency approach improves the extraction of species-specific features from broadband echoes, compared with time-only or frequency-only features. The ANN classifier based on these acoustic feature components was correct in approximately 74.5% of the test cases. In future work, the identification rate will be improved by using dimension-reduced time-frequency images of the broadband acoustic echoes as input to the ANN classifier.
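
An illustrative Python pipeline only: time-frequency features from echo-like signals fed to a small neural network. SciPy has no SPWVD, so a plain spectrogram stands in for it here, and the two synthetic "species" are chirps with different sweep rates; none of this reflects the paper's data or its 22-amplitude feature set.

    import numpy as np
    from scipy.signal import chirp, spectrogram
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    fs = 200_000  # 200 kHz sampling rate

    def synth_echo(f_end):
        t = np.linspace(0, 0.002, int(fs * 0.002), endpoint=False)   # 2 ms echo
        return chirp(t, f0=20_000, f1=f_end, t1=t[-1]) + 0.1 * rng.standard_normal(t.size)

    def tf_features(sig, n_t=5, n_f=5):
        """Sample a coarse grid of time-frequency amplitudes as the feature vector."""
        f, t, S = spectrogram(sig, fs=fs, nperseg=64, noverlap=48)
        ti = np.linspace(0, S.shape[1] - 1, n_t).astype(int)
        fi = np.linspace(0, S.shape[0] - 1, n_f).astype(int)
        return np.log10(S[np.ix_(fi, ti)] + 1e-12).ravel()

    # Two synthetic classes: slow-sweep vs fast-sweep echoes.
    X = np.array([tf_features(synth_echo(f)) for f in [40_000] * 30 + [80_000] * 30])
    y = np.array([0] * 30 + [1] * 30)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))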

Automatic extraction of golf swing features using a single Kinect (단일 키넥트를 이용한 골프 스윙 특징의 자동 추출)

  • Kim, Pyeoung-Kee
    • Journal of the Korea Society of Computer and Information / v.19 no.12 / pp.197-207 / 2014
  • In this paper, I propose an automatic extraction method for golf swing features using a practical TOF camera, the Kinect. I extracted 7 key swing frames and swing features using the joint and depth information from a Kinect, tested the proposed method on 50 swings from 10 players, and report its performance. It is meaningful that the 3D swing features are extracted automatically with an inexpensive and simple system, and that the specific numerical feature values can be used to build an automatic swing analysis system.
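
A hedged sketch of locating key swing frames from a joint trajectory, loosely following the idea above. Real joint streams would come from the Kinect SDK; a synthetic lead-hand height trace stands in, and only three of the seven key frames are located.

    import numpy as np

    def key_frames(hand_y):
        """Address, top of backswing, and impact from the lead-hand height trace."""
        address = 0                                   # first frame of the sequence
        top = int(np.argmax(hand_y))                  # hands at their highest point
        impact = top + int(np.argmin(hand_y[top:]))   # lowest hand position after the top
        return {"address": address, "top": top, "impact": impact}

    # Synthetic trace: hands rise during the backswing, drop through impact, rise to finish.
    hand_y = np.concatenate([
        np.linspace(0.9, 1.6, 40),     # backswing
        np.linspace(1.6, 0.7, 25),     # downswing to impact
        np.linspace(0.7, 1.5, 25),     # follow-through
    ])
    print(key_frames(hand_y))          # {'address': 0, 'top': 39, 'impact': 64}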

Enhancing Snippet Extraction Method using Fuzzy and Semantic Features (퍼지와 의미특징을 이용한 스니핏 추출 향상 방법)

  • Park, Sun; Lee, Yeonwoo; Cho, Kwangmoon; Yang, Huyeol; Lee, Seong Ro
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.11 / pp.2374-2381 / 2012
  • This paper proposes a new method for enhancing snippet extraction using fuzzy and semantic features. The proposed method creates delegate sentences by using semantic features. It extracts a snippet using the fuzzy association between a delegate sentence and the sentence set that best represents the query. In addition, the method uses pseudo-relevance feedback to expand the query so that the extracted snippet better reflects the user's semantic intention. The experimental results demonstrate that the proposed method achieves better snippet extraction performance than previous methods.
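
A rough Python sketch combining the two ingredients this abstract names: NMF-derived semantic features plus a simple fuzzy-style (min/max) term-overlap score, with the best-scoring sentence returned as the snippet. The sentences, query, and weighting are invented, and the paper's delegate-sentence construction and pseudo-relevance feedback are not reproduced.

    import numpy as np
    from sklearn.decomposition import NMF
    from sklearn.feature_extraction.text import CountVectorizer

    sentences = [
        "Feature extraction selects informative attributes from raw data.",
        "The weather was pleasant during the conference week.",
        "Semantic features can be computed with non-negative matrix factorization.",
        "Fuzzy relevance measures rank sentences against the user query.",
    ]
    query = "semantic feature extraction with matrix factorization"

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(sentences + [query]).toarray().astype(float)
    S, q = X[:-1], X[-1]

    # Semantic features: project sentences and the query into a low-rank NMF space.
    nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(S)                     # sentence loadings
    q_sem = nmf.transform(q.reshape(1, -1))[0]   # query loadings in the same space

    def fuzzy_overlap(a, b):
        """Min/max (Jaccard-like) fuzzy set overlap between two term-count vectors."""
        return np.minimum(a, b).sum() / max(np.maximum(a, b).sum(), 1e-12)

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    scores = [0.5 * cosine(W[i], q_sem) + 0.5 * fuzzy_overlap(S[i], q)
              for i in range(len(sentences))]
    print("snippet:", sentences[int(np.argmax(scores))])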