• Title/Summary/Keyword: spatial feature

Search Result 816

A Probabilistic Network for Facial Feature Verification

  • Choi, Kyoung-Ho;Yoo, Jae-Joon;Hwang, Tae-Hyun;Park, Jong-Hyun;Lee, Jong-Hoon
    • ETRI Journal
    • /
    • v.25 no.2
    • /
    • pp.140-143
    • /
    • 2003
  • In this paper, we present a probabilistic approach to determining whether extracted facial features from a video sequence are appropriate for creating a 3D face model. In our approach, the distance between two feature points selected from the MPEG-4 facial object is defined as a random variable for each node of a probability network. To avoid generating an unnatural or non-realistic 3D face model, automatically extracted 2D facial features from a video sequence are fed into the proposed probabilistic network before a corresponding 3D face model is built. Simulation results show that the proposed probabilistic network can be used as a quality control agent to verify the correctness of extracted facial features.

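The verification idea in the abstract — treating the distance between selected feature points as a random variable and rejecting implausible configurations — can be sketched as follows. This is a minimal illustration, not the paper's probabilistic network: the feature pairs and the (mean, std) reference statistics are hypothetical stand-ins for values that would be learned from training data.

```python
# Hypothetical reference statistics (mean, std) for normalized distances
# between facial feature-point pairs; real values would be learned from data.
REFERENCE = {
    ("left_eye", "right_eye"): (1.00, 0.05),
    ("eye_line", "mouth"): (1.10, 0.08),
}

def plausible(distances, z_max=3.0):
    """Accept the extracted features only if every measured distance lies
    within z_max standard deviations of its reference mean -- a crude
    stand-in for the paper's probability network."""
    for pair, d in distances.items():
        mean, std = REFERENCE[pair]
        if abs(d - mean) / std > z_max:
            return False
    return True

print(plausible({("left_eye", "right_eye"): 1.02, ("eye_line", "mouth"): 1.15}))  # True
print(plausible({("left_eye", "right_eye"): 1.60, ("eye_line", "mouth"): 1.15}))  # False
```

A face model would only be built when the check passes, mirroring the quality-control role described above.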

A Study on the Spatial Property of Dress Modeling-I (복식조형의 공간적 특질에 관한 연구-I)

  • 김혜연
    • Journal of the Korean Society of Costume
    • /
    • v.38
    • /
    • pp.31-49
    • /
    • 1998
  • This is a primary, basic study of the spatial features of dress modeling in fashion design. Its aim is to establish a basic framework for the character of dress and its ornaments as modeling in the spatial-formal dimension, to examine the features of that modeling through principles of perception, and, on that basis, to offer basic principles for planning and organizing the modeling space of dress and its ornaments. The findings are summarized as follows. First, the spatial system of modeling for dress and its ornaments consists of three elements: space, the human being, and the dress and its ornaments. Second, the form of dress and its ornaments and their spatial organization start from the structural basis of the human body; the sensible system of the body is formed through their interaction, while aesthetic expression is completed by the movement of the body. Third, the characteristic principles of modeling for dress and its ornaments suggested in Chapter IV are based on visuo-perceptual modeling experience; in new spatial planning and organization, these are taken up in the course of cognition as invisible information, activating apperception and guiding aesthetic judgment.


Unique Feature Identifier for Utilizing Digital Map (수치지도의 활용을 위한 단일식별자)

  • Cho, Woo-Sug
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.6 no.1 s.11
    • /
    • pp.27-34
    • /
    • 1998
  • A Unique Feature Identifier (UFID) is a way of referring to a feature, generally one representing a tangible feature in the real world. In other words, a UFID uniquely identifies the related feature in the database and is normally used to link two or more databases together. This paper presents a UFID system aimed at internal use by the National Geography Institute (NGI) as well as external use within the National Geographic Information System (NGIS), generally to link datasets together. The advantage of the proposed type of UFID lies in the meaningful nature of the identifier, which provides a direct spatial index: administrative area and feature code. The checksum character proposed in this research is designed to remove any uncertainty about the number being corrupt; it accounts for digit transposition during manual input as well as corruption in transfer or processing.

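A check character that catches both single-digit corruption and the adjacent-digit transpositions typical of manual entry can be built with position-dependent weights, as in the familiar ISBN-10 scheme. The sketch below illustrates the principle only; the paper's actual checksum design and the digits shown are not taken from the source.

```python
def checksum(digits):
    """Weighted mod-11 check character (ISBN-10 style): because each position
    carries a distinct weight, both a single corrupted digit and a swap of
    two adjacent digits change the check value."""
    s = sum(w * d for w, d in enumerate(digits, start=1)) % 11
    return "X" if s == 10 else str(s)

ufid = [4, 1, 1, 3, 5, 0, 0, 0, 1, 2]   # illustrative area code + feature code + serial
check = checksum(ufid)

swapped = ufid[:]                        # adjacent transposition (a manual-entry slip)
swapped[3], swapped[4] = swapped[4], swapped[3]
print(check, checksum(swapped), check != checksum(swapped))  # 9 7 True
```

Appending `check` to the identifier lets any consumer of the digital map detect such errors before linking datasets.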

A Comparison of Global Feature Extraction Technologies and Their Performance for Image Identification (영상 식별을 위한 전역 특징 추출 기술과 그 성능 비교)

  • Yang, Won-Keun;Cho, A-Young;Jeong, Dong-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.1
    • /
    • pp.1-14
    • /
    • 2011
  • As the circulation of images becomes more active, various requirements arise for managing the growing databases. Content-based technology is one way to satisfy these requirements: an image is represented by feature vectors extracted by various methods. Global feature methods ensure fast matching because the extracted feature vector has a standard, fixed form. Global feature extraction methods fall into two categories, spatial feature extraction and statistical feature extraction, and each group is further divided by the kind of information used: color features or gray-scale features. In this paper, we introduce various global feature extraction technologies and compare their performance in terms of accuracy, recall-precision graphs, ANMRR, feature vector size, and matching time. According to the experiments, spatial features perform well under non-geometrical modifications, and the extraction technologies that use color and histogram features show the best performance.
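A color histogram is the simplest global feature of the kind compared above: it always yields a vector of fixed length, so matching cost is independent of image size. The following is a minimal sketch (bin count and pixel data are illustrative, not from the paper).

```python
def color_histogram(pixels, bins=4):
    """Global color feature: quantize each RGB channel into `bins` levels,
    count joint occurrences, and normalize. The fixed-length result permits
    fast L1-distance matching regardless of image size."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
    n = len(pixels)
    return [h / n for h in hist]

def l1_distance(h1, h2):
    return sum(abs(a - b) for a, b in zip(h1, h2))

red = [(250, 10, 10)] * 100    # a uniformly red "image"
blue = [(10, 10, 250)] * 100   # a uniformly blue "image"
print(l1_distance(color_histogram(red), color_histogram(red)))   # 0.0 (identical)
print(l1_distance(color_histogram(red), color_histogram(blue)))  # 2.0 (disjoint)
```

Statistical features such as ANMRR-evaluated descriptors follow the same pattern: extract a fixed-length vector, then compare by a cheap distance.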

Optical Design and Construction of Narrow Band Eliminating Spatial Filter for On-line Defect Detection (온라인 결함계측용 협대역 제거형 공간필터의 최적설계 및 제작)

  • 전승환
    • Journal of the Korean Institute of Navigation
    • /
    • v.22 no.4
    • /
    • pp.59-67
    • /
    • 1998
  • Quick, automatic detection that does no harm to the goods is a very important task for improving quality control and process control and for reducing labor. In real industrial settings, defect detection is mostly performed by skilled workers. A narrow band eliminating spatial filter, which removes a specified spatial frequency, was developed by the author and proved to have excellent ability for on-line, real-time detection of surface defects. However, this spatial filter shows a ripple phenomenon in its filtering characteristics, so the ripple component must be removed to improve the filter gain and, with it, the efficiency of defect detection. A remarkable feature of the spatial filtering method is that the weighting function can be set freely, so the signal best suited to the purpose of the measurement can be obtained. With this feature in mind, a theoretical analysis is first carried out for the optimal design of the narrow band eliminating spatial filter; the filter is then manufactured on the basis of these results; and finally its improved effectiveness is evaluated experimentally.

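The defining behavior of a narrow band eliminating filter — removing one specified spatial frequency while passing all others — can be demonstrated digitally with a DFT notch. This is only a numerical sketch of the filtering principle, not the paper's optical design; the signal and frequencies are made up.

```python
import cmath, math

def notch_filter(signal, kill_freq):
    """Eliminate one spatial-frequency component: take a (naive) DFT, zero
    the bin at kill_freq and its conjugate bin, and invert."""
    n = len(signal)
    spec = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]
    spec[kill_freq] = 0
    spec[(n - kill_freq) % n] = 0
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

n = 32
sig = [math.sin(2 * math.pi * 3 * t / n) + 0.5 * math.sin(2 * math.pi * 5 * t / n)
       for t in range(n)]
filtered = notch_filter(sig, 3)   # eliminate the frequency-3 component
residual = max(abs(filtered[t] - 0.5 * math.sin(2 * math.pi * 5 * t / n)) for t in range(n))
print(residual < 1e-6)  # True: only the frequency-5 component survives
```

In the optical version described above, the weighting function plays the role of the spectrum manipulation: it is chosen so the sensor itself suppresses the unwanted band.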

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems
    • /
    • v.17 no.3
    • /
    • pp.556-570
    • /
    • 2021
  • Existing video expression recognition methods mainly focus on spatial feature extraction from video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolution neural networks are used to extract spatiotemporal expression features: a spatial convolution neural network extracts the spatial information features of each static expression image, while a temporal convolution neural network extracts dynamic information features from the optical flow of multiple expression images. The spatiotemporal features learned by the two networks are then fused by multiplication. Finally, the fused features are input into a support vector machine to realize facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, better than that of the compared methods.
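The multiplicative fusion step the abstract names is elementwise: dimensions that are strong in both the spatial and temporal streams are reinforced, while dimensions weak in either stream are suppressed. A minimal sketch (the feature values are illustrative, and feeding an SVM is left out):

```python
def multiplicative_fusion(spatial_feat, temporal_feat):
    """Fuse two equal-length feature vectors by elementwise multiplication,
    as in the fusion step described in the abstract."""
    assert len(spatial_feat) == len(temporal_feat)
    return [s * t for s, t in zip(spatial_feat, temporal_feat)]

spatial = [0.9, 0.1, 0.8]   # e.g. pooled spatial-CNN activations (illustrative)
temporal = [0.8, 0.9, 0.1]  # e.g. optical-flow-stream activations (illustrative)
fused = multiplicative_fusion(spatial, temporal)
print(fused)  # only dimension 0, strong in both streams, stays large
```

The fused vector would then be the input to the SVM classifier in the final stage.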

A New Covert Visual Attention System by Object-based Spatiotemporal Cues and Their Dynamic Fusioned Saliency Map (객체기반의 시공간 단서와 이들의 동적결합 된돌출맵에 의한 상향식 인공시각주의 시스템)

  • Cheoi, Kyungjoo
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.4
    • /
    • pp.460-472
    • /
    • 2015
  • Most previous visual attention systems find attention regions based on a saliency map combined from multiple extracted features; these systems differ in their methods of feature extraction and combination. This paper presents a new system that improves the feature extraction of color and motion and the weight decision for spatial and temporal features. Our system dynamically extracts the one color with the strongest response among two opponent colors, and detects moving objects rather than moving pixels. To combine spatial and temporal features, the proposed system sets the weights dynamically according to each feature's relative activity. Comparative results show that the suggested feature extraction and integration methods improve the detection rate of attention regions.
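Dynamic weighting by relative activity means a feature map with a few strong peaks should dominate a flat, uninformative one. The sketch below uses peak-over-mean as a simple stand-in activity measure (the measure and the toy maps are assumptions, not the paper's exact formulation):

```python
def fuse_maps(maps):
    """Combine feature maps into one saliency map, weighting each map by its
    relative activity (here: peak value over mean value, normalized)."""
    weights = []
    for m in maps:
        flat = [v for row in m for v in row]
        mean = sum(flat) / len(flat)
        weights.append(max(flat) / mean if mean > 0 else 0.0)
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    h, w = len(maps[0]), len(maps[0][0])
    return [[sum(wt * m[i][j] for wt, m in zip(weights, maps)) for j in range(w)]
            for i in range(h)]

color_map = [[0.1, 0.1], [0.1, 0.9]]   # one strong peak -> high relative activity
motion_map = [[0.5, 0.5], [0.5, 0.5]]  # uniform -> low relative activity
sal = fuse_maps([color_map, motion_map])
print(sal[1][1] > sal[0][0])  # True: the peaked map dominates the fused result
```

A fixed-weight combination would blur the peak with the flat map; the dynamic weights preserve it.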

INTERACTIVE FEATURE EXTRACTION FOR IMAGE REGISTRATION

  • Kim Jun-chul;Lee Young-ran;Shin Sung-woong;Kim Kyung-ok
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.641-644
    • /
    • 2005
  • This paper introduces an Interactive Feature Extraction (IFE) approach to the registration of satellite imagery by matching extracted point and line features. The IFE method contains both point extraction by cross-correlation matching of singular points and line extraction by the Hough transform. The purpose of this study is to minimize the user's intervention in feature extraction and to apply the extracted features easily to image registration. Experiments with these imagery datasets proved the feasibility and efficiency of the suggested method.

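The point-matching half of the approach rests on normalized cross-correlation (NCC): a patch around a candidate point in one image is compared with patches in the other, and values near 1 indicate a match. A self-contained sketch with toy 2x2 patches (not the paper's data):

```python
def normalized_cross_correlation(a, b):
    """NCC between two equal-size gray patches: mean-center both, then take
    their dot product normalized by the patch magnitudes. Ranges over [-1, 1];
    values near 1 indicate a match."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = sum((x - ma) ** 2 for x in fa) ** 0.5
    db = sum((y - mb) ** 2 for y in fb) ** 0.5
    return num / (da * db) if da and db else 0.0

patch = [[10, 200], [200, 10]]
same = [[12, 198], [199, 11]]    # same pattern, slightly different intensities
other = [[200, 10], [10, 200]]   # inverted pattern
print(normalized_cross_correlation(patch, same) > 0.99)   # True
print(normalized_cross_correlation(patch, other) < 0)     # True (anti-correlated)
```

Mean-centering and normalization make NCC robust to the brightness and contrast differences common between satellite acquisitions, which is why it suits registration.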

Robust Global Localization based on Environment map through Sensor Fusion (센서 융합을 통한 환경지도 기반의 강인한 전역 위치추정)

  • Jung, Min-Kuk;Song, Jae-Bok
    • The Journal of Korea Robotics Society
    • /
    • v.9 no.2
    • /
    • pp.96-103
    • /
    • 2014
  • Global localization is one of the essential issues for mobile robot navigation. In this study, an indoor global localization method is proposed which uses a Kinect sensor and a monocular upward-looking camera. The proposed method generates an environment map which consists of a grid map, a ceiling feature map from the upward-looking camera, and a spatial feature map obtained from the Kinect sensor. The method selects robot pose candidates using the spatial feature map and updates sample poses by particle filter based on the grid map. Localization success is determined by calculating the matching error from the ceiling feature map. In various experiments, the proposed method achieved a position accuracy of 0.12m and a position update speed of 10.4s, which is robust enough for real-world applications.
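The grid-map stage above is a particle filter: pose hypotheses are weighted by how well the measurement expected at each pose matches the actual sensor reading, then resampled in proportion. The following 1-D sketch of one such weight-and-resample loop is an assumption-laden toy (a corridor with a wall at 10 m and a hypothetical range sensor), not the paper's Kinect/ceiling-camera pipeline:

```python
import random

def particle_filter_step(particles, measure, expected):
    """One update of a particle filter: weight each pose hypothesis by the
    inverse measurement error, then resample in proportion to the weights."""
    weights = [1.0 / (1e-9 + abs(measure - expected(p))) for p in particles]
    total = sum(weights)
    return random.choices(particles, weights=[w / total for w in weights],
                          k=len(particles))

random.seed(0)
# Pose candidates spread over a 10 m corridor; the true position is 4.0 m,
# so a range sensor facing the wall at 10 m reads 6.0 m.
particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(20):
    particles = particle_filter_step(particles, measure=6.0,
                                     expected=lambda p: 10.0 - p)
estimate = sum(particles) / len(particles)
print(abs(estimate - 4.0) < 0.5)  # particles concentrate near the true pose
```

In the full system, the spatial feature map narrows the initial candidate set and the ceiling feature map supplies the final success check, so the filter rarely has to search the whole map.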

LFFCNN: Multi-focus Image Synthesis in Light Field Camera (LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성)

  • Hyeong-Sik Kim;Ga-Bin Nam;Young-Seop Kim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.3
    • /
    • pp.149-154
    • /
    • 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. In particular, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to handle images of various scales effectively. Experimental results demonstrate that the proposed model not only fuses multi-focus images into a single all-in-focus image effectively but also offers more efficient and robust focus fusion than existing methods.

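SPP is what lets a network accept images of various scales: the feature map is max-pooled over grids of fixed bin counts (here 1x1, 2x2, 4x4), so the concatenated output always has 1 + 4 + 16 = 21 values per channel regardless of input size. A plain-Python sketch of the pooling itself (the grid levels are the common choice, not necessarily LFFCNN's exact configuration):

```python
def spatial_pyramid_pooling(feature_map, levels=(1, 2, 4)):
    """Max-pool a 2-D feature map over 1x1, 2x2, and 4x4 grids and
    concatenate the results, yielding a fixed-length vector for any
    input size."""
    h, w = len(feature_map), len(feature_map[0])
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                r0, r1 = i * h // n, max((i + 1) * h // n, i * h // n + 1)
                c0, c1 = j * w // n, max((j + 1) * w // n, j * w // n + 1)
                out.append(max(feature_map[r][c]
                               for r in range(r0, r1) for c in range(c0, c1)))
    return out

small = [[float(r * 5 + c) for c in range(5)] for r in range(5)]
large = [[float(r * 9 + c) for c in range(9)] for r in range(9)]
print(len(spatial_pyramid_pooling(small)), len(spatial_pyramid_pooling(large)))  # 21 21
```

Because the output length is fixed, the fusion and reconstruction modules downstream never need to know the original image resolution.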