• Title/Abstract/Keyword: Image Feature Modeling


Web-based 3D Face Modeling System (웹기반 3차원 얼굴 모델링 시스템)

  • 김응곤;송승헌
    • Journal of the Korea Institute of Information and Communication Engineering / v.5 no.3 / pp.427-433 / 2001
  • This paper proposes a web-based 3-dimensional face modeling system that creates a realistic facial model efficiently, without the 3D scanners or cameras used in traditional methods. Without expensive image-input equipment, 3D models can easily be created from only front and side images. The system makes 3D facial models available through a facial modeling server on the WWW, independent of specific platforms and software. The system is implemented with the Java 3D API, which provides the functions and conveniences of mature graphics libraries. It has a client/server architecture consisting of a user connection module and a 3D facial model creation module. Clients connect to the facial modeling server, input two facial photographic images, detect the feature points, and then create a 3D facial model by modifying a generic facial model with those points, following the procedure using only the web browser. A sketch of the final deformation step appears below.
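The abstract describes deforming a generic facial model with detected feature points. Below is a minimal sketch of one way that deformation step could work, assuming it amounts to displacing generic-model vertices toward the detected points; the function and its inverse-distance blending are illustrative, not the paper's actual method.

```python
import numpy as np

def deform_generic_model(vertices, landmark_idx, target_points, power=2.0):
    """Displace a generic face mesh toward detected feature points.

    vertices:      (N, 3) generic-model vertex positions
    landmark_idx:  (K,) indices of vertices corresponding to feature points
    target_points: (K, 3) feature-point positions detected in the photographs
    """
    sources = vertices[landmark_idx]          # current landmark positions
    displacements = target_points - sources   # required landmark motion

    # Inverse-distance weighting: each vertex moves by a blend of the
    # landmark displacements, weighted by its proximity to each landmark.
    d = np.linalg.norm(vertices[:, None, :] - sources[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-8) ** power
    w /= w.sum(axis=1, keepdims=True)
    return vertices + w @ displacements
```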


Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.562-567 / 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human-Robot Interaction) as well as HCI (Human-Computer Interaction). Facial expressions allow a system to produce reactions that correspond to the user's emotional state, and service agents such as intelligent robots can use them to infer which services to provide. This article addresses expressive face modeling using an advanced Active Appearance Model for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are most visibly expressed by the eyes and mouth, so recognizing emotion from a facial image requires extracting feature points such as Ekman's Action Units (AUs). The Active Appearance Model (AAM) is one of the most commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the model's initial parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian network. First, we obtain reconstruction parameters for a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial AAM parameters from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. Finally, after several iterations, we obtain a model matched to the facial feature outline and use it to recognize the facial emotion with a Bayesian network. A sketch of the fitting loop is given below.
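Since the abstract walks through an iterative parameter-adjustment loop that reduces the distance error between model and target contour, a minimal Gauss-Newton-style sketch of such a loop follows. The `model.synthesize` and `model.jacobian` interfaces are assumptions for illustration; real AAM fitting (e.g. the inverse compositional algorithm) is more involved.

```python
import numpy as np

def fit_aam(init_params, model, target_contour, lr=0.1, iters=50, tol=1e-4):
    """Iteratively adjust AAM parameters to match a target contour.

    model.synthesize(p) -> predicted contour points for parameters p
    model.jacobian(p)   -> d(contour)/d(params), shape (2K, P)
    Both interfaces are hypothetical placeholders for this sketch.
    """
    p = init_params.copy()
    for _ in range(iters):
        residual = (target_contour - model.synthesize(p)).ravel()
        if np.linalg.norm(residual) < tol:
            break                               # contour matched closely enough
        J = model.jacobian(p)
        # Gauss-Newton step: solve for dp minimizing ||residual - J @ dp||^2
        dp, *_ = np.linalg.lstsq(J, residual, rcond=None)
        p += lr * dp
    return p
```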

An Analysis of Visual Storytelling Characteristics of Desire in Animation - Regarding Affiliation, Achievement, and Nurturance (애니메이션에서 욕망 비주얼 스토리텔링 특징 분석 - 소속, 성취, 보호에 대하여)

  • Jiang, Weiyi;Wang, Yuchao;Kim, Jong Dae;Chin, Danni;Kim, Jae Ho
    • Journal of Korea Multimedia Society / v.19 no.6 / pp.1074-1088 / 2016
  • Successful Visual Storytelling (VST) of desire is crucial to the success of animation, because desire is the driving force of its story development. An analysis of the VST of desire in the top five successful American feature-film animations is carried out. In total, 147 desire shots are extracted with the proposed Objective Selection of Desire Shots (OSDS) method, which is based on McKee's theory of conflict and desire pursuit, Maslow's 20 desire types, Greimas's actant model, and a 17-category narrative process classification. In addition, a 5-Beat (5B) model of a scene is proposed. Five image specialists evaluated the VST of the 147 selected desire shots, determining for each shot its desire type among the 20 desires and its strength. The top three desires (affiliation, achievement, and nurturance), which account for 51.8% of the shots, are analyzed, and the compositional elements of shots that affect desire type and strength are identified. These findings can be used to improve VST in the preproduction and production of animation.

Fixed-Point Modeling and Performance Analysis of a SIFT Keypoints Localization Algorithm for SoC Hardware Design (SoC 하드웨어 설계를 위한 SIFT 특징점 위치 결정 알고리즘의 고정 소수점 모델링 및 성능 분석)

  • Park, Chan-Ill;Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD / v.45 no.6 / pp.49-59 / 2008
  • SIFT (Scale-Invariant Feature Transform) is an algorithm that extracts descriptor vectors at keypoints, i.e., pixels whose values differ strongly from their neighbors, such as vertices and edges of an object. The SIFT algorithm is being actively researched for various image-processing applications, including 3-D image construction, and its most computation-intensive stage is keypoint localization. In this paper, we develop a fixed-point model of the keypoint localization stage and propose an efficient hardware architecture for it for embedded applications. The bit lengths of key variables are determined based on two performance measures: localization accuracy and error rate. Compared with the original algorithm (implemented in Matlab), the accuracy and error rate of the proposed fixed-point model are 93.57% and 2.72%, respectively. In addition, we found that most of the missed keypoints appear on object edges, which are not very important for keypoint matching. We estimate that the hardware implementation will deliver a processing speed of 10~15 frames/sec, whereas the fixed-point implementation takes 10 seconds per frame on a Pentium Core2Duo (2.13 GHz) and one hour per frame on an ARM9 (400 MHz). A fixed-point quantization sketch appears below.
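Fixed-point modeling of this kind typically maps floating-point variables to a signed Qm.n format and measures the resulting error. The sketch below illustrates that general procedure with made-up data and bit widths; it does not reproduce the paper's actual bit allocations or error figures.

```python
import numpy as np

def to_fixed_point(x, frac_bits, total_bits=16):
    """Quantize a float array to signed fixed point with frac_bits fraction bits."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    return np.clip(np.round(x * scale), lo, hi).astype(np.int64)

def from_fixed_point(q, frac_bits):
    """Convert a fixed-point integer array back to floats."""
    return q.astype(np.float64) / (1 << frac_bits)

# Example: measure quantization error on a stand-in Difference-of-Gaussians map
rng = np.random.default_rng(0)
dog = rng.normal(scale=0.05, size=(64, 64))   # placeholder DoG response values
for frac_bits in (8, 10, 12):
    approx = from_fixed_point(to_fixed_point(dog, frac_bits), frac_bits)
    err = np.abs(approx - dog).max()
    print(f"Q{15 - frac_bits}.{frac_bits}: max abs error = {err:.2e}")
```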

Virtual Target Overlay Technique by Matching 3D Satellite Image and Sensor Image (3차원 위성영상과 센서영상의 정합에 의한 가상표적 Overlay 기법)

  • Cha, Jeong-Hee;Jang, Hyo-Jong;Park, Yong-Woon;Kim, Gye-Young;Choi, Hyung-Il
    • The KIPS Transactions: Part D / v.11D no.6 / pp.1259-1268 / 2004
  • To organize training for actual combat in a limited training area, realistic training simulation that incorporates various battle conditions is essential. In this paper, we propose a virtual target overlay technique that does not use a virtual image but instead projects a virtual target onto a ground-based CCD image according to an assigned scenario for realistic training simulation. In the proposed method, we create a realistic 3D model (for the instructor) using high-resolution Geographic Tag Image File Format (GeoTIFF) satellite imagery and Digital Terrain Elevation Data (DTED), and extract the road area from a given CCD image (for both instructor and trainee). Satellite images and ground-based sensor images differ greatly in observation position, resolution, and scale, which makes feature-based matching difficult. Hence, we propose a movement-synchronization technique that projects the target onto the sensor image along the path marked on the 3D satellite image by applying the Thin-Plate Spline (TPS) interpolation function, an image-warping function, to two given sets of corresponding control points. The experiments used two Pentium 4 1.8 GHz personal computers with 512 MB of RAM each, along with satellite and sensor images of the Daejeon area. The experimental results confirmed the effectiveness of the proposed algorithm. A TPS warping sketch appears below.
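The TPS warp between the two control-point sets can be sketched with SciPy's thin-plate-spline RBF interpolator. The coordinates below are invented purely for illustration; in the paper, the control points come from matching the 3D satellite model to the sensor image.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding control points: satellite-view coords -> sensor-image coords.
sat_pts = np.array([[10, 10], [200, 15], [20, 180], [210, 190], [110, 100]],
                   dtype=float)
sensor_pts = np.array([[32, 48], [240, 40], [45, 230], [255, 235], [140, 135]],
                      dtype=float)

# Thin-plate-spline warp from satellite coordinates to sensor coordinates.
tps = RBFInterpolator(sat_pts, sensor_pts, kernel='thin_plate_spline')

# Project a target path marked on the 3D satellite view into the sensor image.
target_path = np.array([[50, 50], [90, 80], [130, 110]], dtype=float)
print(tps(target_path))   # warped path in sensor-image pixel coordinates
```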

Semi-automatic 3D Building Reconstruction from Uncalibrated Images (비교정 영상에서의 반자동 3차원 건물 모델링)

  • Jang, Kyung-Ho;Jang, Jae-Seok;Lee, Seok-Jun;Jung, Soon-Ki
    • Journal of Korea Multimedia Society / v.12 no.9 / pp.1217-1232 / 2009
  • In this paper, we propose a semi-automatic 3D building reconstruction method using uncalibrated images that include the facade of the target building. First, we extract feature points in all images and find corresponding points between each pair of images. Second, we extract lines in each image and estimate the vanishing points; extracted lines are grouped by their corresponding vanishing points (see the sketch after this entry). An adjacency graph, built from the number of corresponding points between image pairs, is used to organize the image sequence, and camera calibration is performed. An initial solid model can then be generated through a few user interactions using the grouped lines and the camera pose information. From the initial solid model, a detailed building model is reconstructed with a combination of predefined basic Euler operators on a half-edge data structure. Automatically computed geometric information is visualized to assist the user during the detail-modeling process. The proposed system allows the user to obtain a 3D building model with less interaction by augmenting the view with automatically generated geometric information.
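One common way to group extracted lines by vanishing point, as the second step describes, is to test whether each segment points toward a candidate vanishing point. The sketch below shows that idea; the angle threshold and data layout are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def group_lines_by_vanishing_point(lines, vps, angle_thresh_deg=3.0):
    """Assign each line segment to the vanishing point it best supports.

    lines: list of ((x1, y1), (x2, y2)) segment endpoints
    vps:   (V, 2) estimated vanishing points in image coordinates
    Returns a group index per line, or -1 for lines supporting no VP.
    """
    groups = []
    for (p1, p2) in lines:
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        mid = (p1 + p2) / 2.0
        direction = (p2 - p1) / np.linalg.norm(p2 - p1)
        best, best_angle = -1, angle_thresh_deg
        for i, vp in enumerate(vps):
            to_vp = np.asarray(vp, float) - mid
            to_vp /= np.linalg.norm(to_vp)
            # Angle between the segment and the midpoint-to-VP direction.
            ang = np.degrees(np.arccos(np.clip(abs(direction @ to_vp), -1, 1)))
            if ang < best_angle:
                best, best_angle = i, ang
        groups.append(best)
    return groups
```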


Towards 3D Modeling of Buildings using Mobile Augmented Reality and Aerial Photographs (모바일 증강 현실 및 항공사진을 이용한 건물의 3차원 모델링)

  • Kim, Se-Hwan;Ventura, Jonathan;Chang, Jae-Sik;Lee, Tae-Hee;Hollerer, Tobias
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.2 / pp.84-91 / 2009
  • This paper presents an online partial 3D modeling methodology that uses a mobile augmented reality system and aerial photographs, and a tracking methodology that compares the 3D model with a video image. Instead of relying on models created in advance, the system generates a 3D model of a real building on the fly by combining frontal and aerial views. The user's initial pose is estimated from an aerial photograph, retrieved from a database according to the user's GPS coordinates, and from an inertial sensor that measures pitch. We detect the edges of the rooftop with graph cuts, and find the edges and a corner of the bottom by minimizing the proposed cost function. To track the user's position and orientation in real time, feature-based tracking is carried out on salient points along the edges and sides of the building the user is viewing. We implemented camera pose estimators using both a least-squares estimator and an unscented Kalman filter (UKF), evaluated the speed and accuracy of both approaches, and demonstrated the usefulness of our computations as building blocks for an Anywhere Augmentation scenario. A least-squares pose sketch appears below.
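A least-squares camera pose estimator, one of the two estimators the paper compares, can be sketched as minimizing reprojection error over the pose parameters. The pinhole model, small-angle rotation, and focal length below are simplifying assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points3d, pose, f):
    """Pinhole projection under pose = (rx, ry, rz, tx, ty, tz).

    A small-angle rotation approximation keeps the sketch short; a real
    implementation would use a proper rotation parameterization.
    Assumes all points lie in front of the camera (positive depth).
    """
    rx, ry, rz, tx, ty, tz = pose
    R = np.array([[1, -rz, ry],
                  [rz, 1, -rx],
                  [-ry, rx, 1]], dtype=float)
    cam = points3d @ R.T + np.array([tx, ty, tz])
    return f * cam[:, :2] / cam[:, 2:3]

def refine_pose(pose0, points3d, observed2d, f=800.0):
    """Least-squares pose refinement from 3D-model / image-feature matches."""
    def residuals(pose):
        return (project(points3d, pose, f) - observed2d).ravel()
    return least_squares(residuals, pose0).x
```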

Adult Image Classification using Adaptive Skin Detection and Edge Information (적응적 피부색 검출과 에지 정보를 이용한 유해 영상분류방법)

  • Park, Chan-Woo;Park, Ki-Tae;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.1 / pp.127-132 / 2011
  • In this paper, we propose a novel method of adult image classification that combines skin-color regions and edges in an input image. The proposed method consists of four steps. In the first step, initial skin-color regions are detected by a logical AND of all skin-color regions found by existing skin-color detection methods. In the second step, a skin-color probability map is created by modeling the distribution of skin color in the initial regions, and a binary image is generated by thresholding this probability map. In the third step, using the binary image and edge information, final skin-color regions are detected with a region-growing method. In the final step, adult image classification is performed by a support vector machine (SVM), using a feature vector that combines the final skin-color regions and their neighboring edges. Experimental results show that the proposed method improves adult image classification performance by 9.6% compared with the existing method. A probability-map sketch appears below.
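The second step, building a skin-color probability map from the initial regions and thresholding it, could look like the histogram-based sketch below. The chroma-space choice, bin count, and threshold are assumptions for illustration; the abstract does not specify the paper's exact color model.

```python
import numpy as np

def skin_probability_map(image_cbcr, initial_mask, bins=32):
    """Build a skin-color probability map from initial skin regions.

    image_cbcr:   (H, W, 2) chroma channels (e.g. Cb/Cr of a YCbCr image)
    initial_mask: (H, W) boolean mask of initially detected skin pixels
    """
    skin = image_cbcr[initial_mask]
    # 2-D chroma histogram of the initial skin pixels, normalized to [0, 1].
    hist, xedges, yedges = np.histogram2d(
        skin[:, 0], skin[:, 1], bins=bins, range=[[0, 255], [0, 255]])
    hist /= hist.max() + 1e-12

    # Look up every pixel's skin probability in the histogram.
    xi = np.clip(np.digitize(image_cbcr[..., 0], xedges) - 1, 0, bins - 1)
    yi = np.clip(np.digitize(image_cbcr[..., 1], yedges) - 1, 0, bins - 1)
    return hist[xi, yi]

# binary = skin_probability_map(cbcr, init_mask) > 0.2   # thresholding step
```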

Hierarchical Flow-Based Anomaly Detection Model for Motor Gearbox Defect Detection

  • Younghwa Lee;Il-Sik Chang;Suseong Oh;Youngjin Nam;Youngteuk Chae;Geonyoung Choi;Gooman Park
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.6 / pp.1516-1529 / 2023
  • In this paper, a motor gearbox fault-detection system based on a hierarchical flow-based model is proposed. The system performs anomaly detection for a motion-sound-based actuator module. The proposed flow-based model, a generative model, learns by directly modeling the data distribution function; because the objective is the maximum likelihood of the input data, training is stable and the model is simple to use for anomaly detection. The operating sound of a car's side-view mirror motor, consisting of a folding signal and an unfolding signal, is converted into a Mel-spectrogram image and used as training data in this experiment. The proposed system is composed of an encoder and a decoder: features extracted from the layers of a pretrained feature extractor form the encoder output, and the decoder consumes this information through inter-layer cross-scale convolution operations. The experimental results indicate that the contextual information of various dimensions extracted from the inter-layer hierarchical data improves defect detection accuracy. This paper is notable for using acoustic data and a normalizing-flow model to detect outliers based on the features of the experimental data. A coupling-layer sketch appears below.
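Normalizing flows of this kind are usually built from coupling layers, whose triangular Jacobian makes the exact log-likelihood cheap to compute. The PyTorch sketch below shows one generic affine coupling layer to illustrate the technique; it is not the paper's architecture, which additionally conditions on pretrained multi-scale features.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling layer, the building block of flow-based models.

    Half of the features pass through unchanged and condition a scale/shift
    applied to the other half, so the Jacobian log-determinant (needed for
    exact likelihoods) is just the sum of the log-scales.
    """
    def __init__(self, channels, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(channels // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, channels))   # outputs log-scale and shift

    def forward(self, x):
        xa, xb = x.chunk(2, dim=1)
        log_s, t = self.net(xa).chunk(2, dim=1)
        log_s = torch.tanh(log_s)              # keep scales well-behaved
        yb = xb * torch.exp(log_s) + t
        log_det = log_s.sum(dim=1)             # contribution to log-likelihood
        return torch.cat([xa, yb], dim=1), log_det

# Anomaly score: negative log-likelihood under the trained flow; inputs with
# low likelihood (high NLL) are flagged as defective.
```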

Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model (모션 히스토리 영상 및 기울기 방향성 히스토그램과 적출 모델을 사용한 깊이 정보 기반의 연속적인 사람 행동 인식 시스템)

  • Eum, Hyukmin;Lee, Heejin;Yoon, Changyong
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.6 / pp.471-476 / 2016
  • In this paper, a depth-based recognition system for continuous human action is presented that uses motion history images (MHI) and histograms of oriented gradients (HOG) together with a spotter model; the spotter model, which performs action spotting, is proposed to improve recognition performance. The system consists of pre-processing, human action and spotter modeling, and continuous human action recognition. In pre-processing, Depth-MHI-HOG extracts space-time template-based features after image segmentation, and the modeling step generates sequences from the extracted features. Models appropriate for each defined action, together with the proposed spotter model, are created from these sequences using hidden Markov models. Continuous human action recognition uses the spotter model to segment meaningful from meaningless actions in a continuous action sequence, and recognizes actions by comparing the models' probability values over the meaningful action sequences. Experimental results demonstrate that the proposed model efficiently improves recognition performance in a continuous action recognition system. An MHI update sketch appears below.
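The motion history image at the heart of the pre-processing step can be sketched in a few lines: each new motion mask stamps the current timestamp into the image, and stale motion fades out. The threshold and duration values below are illustrative assumptions.

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration=1.0):
    """Update a motion history image with the current frame's motion mask.

    mhi:         (H, W) float array of the most recent motion timestamps
    motion_mask: (H, W) boolean array, True where motion occurred this frame
    """
    mhi = mhi.copy()
    mhi[motion_mask] = timestamp                            # stamp moving pixels
    mhi[~motion_mask & (mhi < timestamp - duration)] = 0    # fade old motion
    return mhi

# Typical loop: threshold the difference of consecutive depth frames to get
# the motion mask, update the MHI, then compute HOG on the normalized MHI.
#   mask = np.abs(curr_depth - prev_depth) > 10.0
#   mhi = update_mhi(mhi, mask, t)
```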