• Title/Summary/Keyword: Color Features (색상 특징)

Search Results: 659

2D Planar Object Tracking using Improved Chamfer Matching Likelihood (개선된 챔퍼매칭 우도기반 2차원 평면 객체 추적)

  • Oh, Chi-Min;Jeong, Mun-Ho;You, Bum-Jae;Lee, Chil-Woo
    • The KIPS Transactions:PartB / v.17B no.1 / pp.37-46 / 2010
  • In this paper we present a two-dimensional model-based tracking system using improved chamfer matching. Conventional chamfer matching cannot measure the similarity between the object model and the image reliably when the background is heavily cluttered. We therefore improve chamfer matching so that similarity can be computed robustly even against very cluttered backgrounds by using both edge and corner feature points. The improved chamfer matching is used as the likelihood function of a particle filter that tracks the geometric object. The geometric model, built from edge and corner feature points, is a descriptor that remains discriminative under color changes, and the particle filter handles non-linear motion better than a Kalman filter. The presented method therefore combines the geometric model, the particle filter, and improved chamfer matching to track objects in complex environments. Experimental results demonstrate the robustness of our system in comparison with other methods.
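As a rough illustration of how a chamfer-matching score can serve as a particle filter likelihood, the sketch below (Python/OpenCV) evaluates pose hypotheses against a distance transform of the frame's edge map. The file name, model contour points, particle noise scales, and the exponential weighting constant are placeholder assumptions, and the authors' improvement (weighting corner feature points) is not reproduced.

```python
import cv2
import numpy as np

def chamfer_likelihood(dist_map, model_pts, x, y, theta, lam=0.05):
    """Likelihood of a 2D pose (x, y, theta): exp(-lam * mean chamfer distance)."""
    c, s = np.cos(theta), np.sin(theta)
    pts = model_pts @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
    h, w = dist_map.shape
    pts = np.clip(pts, [0, 0], [w - 1, h - 1]).astype(int)
    return np.exp(-lam * dist_map[pts[:, 1], pts[:, 0]].mean())

# Distance transform of the inverted edge map: 0 on edges, growing away from them.
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)        # placeholder input
edges = cv2.Canny(frame, 80, 160)
dist_map = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

# One particle-filter weighting step: score each sampled pose hypothesis.
model_pts = np.array([[-20, -20], [20, -20], [20, 20], [-20, 20]], float)  # toy contour
particles = np.random.randn(500, 3) * [5.0, 5.0, 0.05] + [160.0, 120.0, 0.0]
weights = np.array([chamfer_likelihood(dist_map, model_pts, *p) for p in particles])
weights /= weights.sum()
```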

An Object-based Database Mapping Technology for 3D Graphic Data (3차원 그래픽 데이터를 위한 객체단위 데이터베이스 매핑 기법)

  • Jo, Hee-Jeong;Kim, Yong-Hwan;Lee, Ki-Jun;Hwang, Soo-Chan
    • Journal of Korea Multimedia Society / v.9 no.8 / pp.950-962 / 2006
  • Recently, the number of 3D graphics applications on the Internet has increased rapidly. A growing number of methods have therefore been proposed for retrieving 3D graphic data using 3D features such as color, texture, shape, and spatial relations. However, few studies have focused on 3D graphics modeling and database storage techniques. In this paper, we introduce a system that stores 3D graphics data modeled in 3DGML, an XML-based 3D graphics markup language, and supports content-based retrieval over the 3D data using SQL. We also present a technique for mapping 3DGML to a relational database. The mapping process extracts semantic information from 3DGML and translates it into a relational format. Finally, we show examples of SQL queries that use the 3D information contained in a 3D scene, such as objects, 3D features, descriptions, and the scene-object component hierarchy.
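A minimal sketch of the general object-level XML-to-relational mapping idea is shown below, using Python's standard library and SQLite; the XML fragment, table layout, and query are invented for illustration and do not reproduce the real 3DGML schema or the paper's mapping rules.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical 3DGML-like fragment; the real schema is defined in the paper.
scene_xml = """
<Scene name="gallery">
  <Object id="vase1"><Color r="0.8" g="0.1" b="0.1"/><Shape type="mesh" faces="1200"/></Object>
  <Object id="table1"><Color r="0.4" g="0.3" b="0.2"/><Shape type="box" faces="12"/></Object>
</Scene>
"""

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE scene (name TEXT PRIMARY KEY);
CREATE TABLE object (id TEXT PRIMARY KEY, scene TEXT,
                     r REAL, g REAL, b REAL, shape TEXT, faces INTEGER);
""")

root = ET.fromstring(scene_xml)
db.execute("INSERT INTO scene VALUES (?)", (root.get("name"),))
for obj in root.findall("Object"):
    color, shape = obj.find("Color"), obj.find("Shape")
    db.execute("INSERT INTO object VALUES (?,?,?,?,?,?,?)",
               (obj.get("id"), root.get("name"),
                float(color.get("r")), float(color.get("g")), float(color.get("b")),
                shape.get("type"), int(shape.get("faces"))))

# Content-based retrieval with plain SQL, e.g. "predominantly red objects".
print(db.execute("SELECT id FROM object WHERE r > 0.6 AND g < 0.3 AND b < 0.3").fetchall())
```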


A proposed image stitching method for web-based panoramic virtual reality for Hoseo Cyber Museum (호서 사이버 박물관: 웹기반의 파노라마 비디오 가상현실에 대한 효율적인 이미지 스티칭 알고리즘)

  • Khan, Irfan;Soo, Hong Song
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.2 / pp.893-898 / 2013
  • Recreating the experience of a particular place has always been a dream, and panoramic virtual reality is a technology for creating such virtual environments, with the ability to change the viewing angle and select a path through a dynamic scene. In this paper we examine efficient methods for registering and stitching captured images, and study two approaches. In the first, dynamic programming is used to locate suitable key points and match them so that adjacent images can be merged, after which image blending provides smooth color transitions. In the second, FAST and SURF detection are used to find distinctive features in the images, a nearest-neighbor algorithm matches the corresponding features, and a homography is estimated from the matched key points using RANSAC. The paper also covers automatically choosing (recognizing and comparing) the images to be stitched.
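The second approach maps closely onto standard OpenCV calls. The sketch below substitutes ORB for SURF (SURF is available only in opencv-contrib builds) and uses placeholder image file names; nearest-neighbor matching with a ratio test, RANSAC homography estimation, and warping follow the pipeline the abstract describes, while the blending step is omitted.

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")                      # placeholder overlapping views
img2 = cv2.imread("right.jpg")

orb = cv2.ORB_create(2000)                         # ORB stands in for SURF here
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)

# Nearest-neighbor matching with Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]

pts_right = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
pts_left = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)

# Homography (right -> left frame) estimated with RANSAC, then a simple paste.
H, inliers = cv2.findHomography(pts_right, pts_left, cv2.RANSAC, 5.0)
h1, w1 = img1.shape[:2]
h2, w2 = img2.shape[:2]
pano = cv2.warpPerspective(img2, H, (w1 + w2, max(h1, h2)))
pano[:h1, :w1] = img1
cv2.imwrite("pano.jpg", pano)
```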

A Study on the Mapping and Characteristics of Distributions in Cultural-Historic Sites of Yanbian Area using Google Earth (구글어스를 이용한 연변지역의 문화.역사유적 지도화와 분포의 특징에 관한 연구)

  • Jin, Shizhu;Kim, Nam-Sin
    • Journal of the Korean association of regional geographers / v.17 no.1 / pp.122-139 / 2011
  • The Yanbian area is of great cultural-historical interest to both Korea and China. Although there is a substantial body of cultural-historical research on Yanbian, few studies have mapped its sites. This study aimed to map and analyze the distribution of cultural-historic sites in Yanbian using Google Earth. We produced a distribution map covering the period from the Stone Age to the Qing Dynasty, using color symbols for time periods and categorical symbols for site types. The analysis shows that sites from the Balhae and Yo-Geum (Liao-Jin) periods account for a large share compared with other periods, and that sites from the Goguryeo, Balhae, and Liao-Jin periods exhibit a spatio-temporal structure of accumulated layers. In the earliest periods sites are located in basins and along streams, while in later historical periods they shift toward hilly and mountainous areas. The results are expected to provide information for follow-up studies of cultural-historic sites.
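For readers who want to reproduce this kind of period-colored symbology programmatically, a small sketch using the third-party simplekml package (one convenient way to generate files that Google Earth can open) is given below; the site records, coordinates, and period-to-color assignments are purely illustrative and are not taken from the study.

```python
import simplekml

# Hypothetical site records: (name, longitude, latitude, period).
sites = [
    ("Site A", 129.47, 42.91, "Balhae"),
    ("Site B", 129.60, 42.95, "Liao-Jin"),
]
period_color = {"Balhae": simplekml.Color.red, "Liao-Jin": simplekml.Color.blue}

kml = simplekml.Kml()
for name, lon, lat, period in sites:
    pnt = kml.newpoint(name=name, coords=[(lon, lat)])   # KML uses (lon, lat) order
    pnt.style.iconstyle.color = period_color[period]     # color symbol per period
kml.save("yanbian_sites.kml")                            # open the file in Google Earth
```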


Dynamic Bayesian Network based Two-Hand Gesture Recognition (동적 베이스망 기반의 양손 제스처 인식)

  • Suk, Heung-Il;Sin, Bong-Kee
    • Journal of KIISE:Software and Applications / v.35 no.4 / pp.265-279 / 2008
  • The idea of using hand gestures for human-computer interaction is not new and has been studied intensively over the last decade, with significant progress that has nevertheless fallen short of expectations. This paper describes a dynamic Bayesian network (DBN) based approach to both two-hand and one-hand gestures. Unlike wired glove-based approaches, the success of camera-based methods depends heavily on the image processing and feature extraction results, so the proposed DBN-based inference is preceded by fail-safe steps of skin extraction and modeling and of motion tracking. We then propose a new recognition model for a set of one-hand and two-hand gestures based on the dynamic Bayesian network framework, which makes it easy to represent relationships among features and to incorporate new information into a model. In an experiment with ten isolated gestures, we obtained recognition rates upwards of 99.59% under cross validation. The proposed model and approach are believed to have strong potential for successful application to related problems such as sign language recognition.
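A DBN generalizes the familiar HMM, and the simplest baseline it subsumes is easy to show in code. The sketch below scores a joint (left-hand, right-hand) symbol stream against per-gesture discrete HMMs with the forward algorithm; all model parameters and symbols are toy values, whereas the paper learns a richer DBN structure and parameters from data.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under one HMM
    (pi: initial state probs, A: state transitions, B: emission probs)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

def classify(obs, models):
    """Pick the gesture whose model explains the observations best."""
    return max(models, key=lambda g: forward_loglik(obs, *models[g]))

# Toy two-state models over 16 joint symbols (4 left-hand x 4 right-hand codes).
def toy_model(seed, n_states=2, n_symbols=16):
    rng = np.random.default_rng(seed)
    return (rng.dirichlet(np.ones(n_states)),
            rng.dirichlet(np.ones(n_states), size=n_states),
            rng.dirichlet(np.ones(n_symbols), size=n_states))

models = {"wave": toy_model(0), "clap": toy_model(1)}
obs = [3, 7, 7, 12, 5]          # joint symbol = left_code * 4 + right_code
print(classify(obs, models))
```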

Visual Information Selection Mechanism Based on Human Visual Attention (인간의 주의시각에 기반한 시각정보 선택 방법)

  • Cheoi, Kyung-Joo;Park, Min-Chul
    • Journal of Korea Multimedia Society / v.14 no.3 / pp.378-391 / 2011
  • In this paper, we propose a novel method for selecting visual information based on human bottom-up visual attention. The proposed model improves the accuracy of attention-region detection by using depth information in addition to low-level spatial features such as color, lightness, orientation, and form, and the temporal feature of motion. Motion is an important cue for deriving temporal saliency, but noise introduced during capture and computation degrades its accuracy, so our system exploits results from psychological studies to remove such noise from the motion information. Whereas typical systems have trouble determining saliency when several salient regions are partially occluded or have nearly equal saliency, our system separates these regions with high accuracy. Prominent regions that have been separated spatiotemporally in the first stage are prioritized one by one using depth values in the second stage. Experimental results show that our system describes salient regions more accurately than previous approaches.
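A much-simplified saliency sketch in the spirit of this two-stage design is shown below: center-surround contrast on intensity and color-opponency channels, followed by depth-based weighting. The file names, channel definitions, Gaussian scales, and the depth weighting rule are assumptions for illustration, and the motion channel and the psychology-based noise removal are omitted.

```python
import cv2
import numpy as np

def center_surround(channel, sigma_c=2.0, sigma_s=8.0):
    """Center-surround contrast as the difference of two Gaussian-blurred maps."""
    c = cv2.GaussianBlur(channel, (0, 0), sigma_c)
    s = cv2.GaussianBlur(channel, (0, 0), sigma_s)
    return cv2.normalize(np.abs(c - s), None, 0.0, 1.0, cv2.NORM_MINMAX)

frame = cv2.imread("frame.png").astype(np.float32) / 255.0    # placeholder inputs
depth = cv2.imread("depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

b, g, r = cv2.split(frame)
intensity = (b + g + r) / 3.0
rg, by = r - g, b - (r + g) / 2.0        # simple red-green / blue-yellow opponency

# First stage: spatial saliency; second stage: prioritize by depth (assumed to be
# registered to the frame, with larger values meaning closer).
saliency = (center_surround(intensity) + center_surround(rg) + center_surround(by)) / 3.0
saliency *= 0.5 + 0.5 * depth
```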

Robust Detection of Body Areas Using an Adaboost Algorithm (에이다부스트 알고리즘을 이용한 인체 영역의 강인한 검출)

  • Jang, Seok-Woo;Byun, Siwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.11 / pp.403-409 / 2016
  • Recently, harmful content such as nude images and photographs has been widely distributed, and various studies have sought to detect and filter out such content. In this paper, we propose a new method that uses Haar-like features and an AdaBoost algorithm to robustly extract navel areas from a color image. The suggested algorithm first detects human nipples using color information and obtains candidate navel areas from the positional information of the extracted nipple areas. It then selects the real navel regions by filtering the candidates with Haar-like features and an AdaBoost classifier. Experimental results show that the suggested algorithm detects navel areas in color images 1.6 percent more robustly than an existing method. We expect the proposed navel detection algorithm to be useful in many applications related to detecting and filtering 2D or 3D harmful content.
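The Haar-plus-AdaBoost filtering stage can be sketched with scikit-image and scikit-learn as below. The 16x16 patch size, the restriction to two Haar feature types, and the random stand-in patches are assumptions made only to keep the example small and fast; in practice the positive and negative patches would come from the color-based candidate regions the abstract describes.

```python
import numpy as np
from skimage.feature import haar_like_feature
from skimage.transform import integral_image
from sklearn.ensemble import AdaBoostClassifier

def haar_features(patch):
    """Haar-like features (two edge types) of a fixed-size grayscale patch."""
    ii = integral_image(patch)
    return haar_like_feature(ii, 0, 0, patch.shape[1], patch.shape[0],
                             feature_type=['type-2-x', 'type-2-y'])

rng = np.random.default_rng(0)
pos = rng.random((20, 16, 16))      # stand-ins for navel candidate patches
neg = rng.random((20, 16, 16))      # stand-ins for background patches

X = np.array([haar_features(p) for p in np.concatenate([pos, neg])])
y = np.array([1] * len(pos) + [0] * len(neg))

clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
print(clf.predict(haar_features(rng.random((16, 16))).reshape(1, -1)))
```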

A Study of Similarity Measures on Multidimensional Data Sequences Using Semantic Information (의미 정보를 이용한 다차원 데이터 시퀀스의 유사성 척도 연구)

  • Lee, Seok-Lyong;Lee, Ju-Hong;Chun, Seok-Ju
    • The KIPS Transactions:PartD / v.10D no.2 / pp.283-292 / 2003
  • One-dimensional time-series data have been studied in various database applications such as data mining and data warehousing. In today's complex business environment, however, multidimensional data sequences (MDSs) are becoming increasingly important alongside one-dimensional time series. For example, a video stream can be modeled as an MDS in a multidimensional space of color and texture attributes. In this paper, we propose effective similarity measures on which similar-pattern retrieval can be based. An MDS is partitioned into segments, each of which is represented by various geometric and semantic features, and the similarity measures are defined over these segments. Using the measures, segments irrelevant to a given query are pruned from the database. Both data and query sequences are partitioned into segments, and query processing is based on comparing the features of data and query segments instead of scanning all data elements of the entire sequences.
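A simplified version of this segment-based pruning idea is sketched below: each sequence is cut into fixed-length segments summarized by minimum bounding rectangles (MBRs), and data segments whose MBR lies farther than a threshold from every query MBR are discarded before any element-wise comparison. The fixed segment length, the MBR-only summary, and the threshold are illustrative simplifications of the paper's geometric and semantic segment features.

```python
import numpy as np

def segment_mbrs(seq, seg_len):
    """Partition a multidimensional sequence (T, d) into fixed-length segments
    and summarize each by its minimum bounding rectangle (lower, upper)."""
    segs = [seq[i:i + seg_len] for i in range(0, len(seq), seg_len)]
    return [(s.min(axis=0), s.max(axis=0)) for s in segs]

def mbr_distance(a, b):
    """Minimum Euclidean distance between two axis-aligned hyper-rectangles."""
    (alo, ahi), (blo, bhi) = a, b
    gap = np.maximum(0.0, np.maximum(alo - bhi, blo - ahi))
    return np.linalg.norm(gap)

def prune(data_seq, query_seq, seg_len=16, eps=0.5):
    """Keep only data segments whose MBR lies within eps of some query MBR;
    only these survivors need an exact element-wise comparison."""
    data_mbrs = segment_mbrs(data_seq, seg_len)
    query_mbrs = segment_mbrs(query_seq, seg_len)
    return [i for i, dm in enumerate(data_mbrs)
            if any(mbr_distance(dm, qm) <= eps for qm in query_mbrs)]

# Example: a video stream modeled as a sequence of (color, texture) feature vectors.
rng = np.random.default_rng(1)
data, query = rng.random((640, 4)), rng.random((64, 4))
print(prune(data, query))
```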

Face and Hand Tracking Algorithm for Sign Language Recognition (수화 인식을 위한 얼굴과 손 추적 알고리즘)

  • Park, Ho-Sik;Bae, Cheol-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.11C / pp.1071-1076 / 2006
  • In this paper, we develop a face and hand tracking algorithm for a sign language recognition system. The system is divided into two stages, initialization and tracking. In the initialization stage, skin features are used to localize the signer's face and hands: an ellipse model in CbCr space is constructed and used to detect skin color, and after the skin regions have been segmented, face and hand blobs are identified from their size and facial features, under the assumption that the face moves less than the hands in this signing scenario. In the tracking stage, motion estimation is applied only to the hand blobs, with first and second derivatives used to predict the hand positions. We observed errors in the tracked positions between consecutive frames in which the velocity changes abruptly; to improve tracking performance, the proposed algorithm compensates for these errors by re-computing the hand blobs within an adaptive search area. The experimental results indicate that the proposed method reduces the prediction error by up to 96.87% with a negligible increase in computational complexity of up to 4%.
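The two pieces that are easiest to show in code are the elliptical skin test in the CbCr plane and the derivative-based position prediction; in the sketch below the ellipse parameters are illustrative rather than fitted to training skin samples, and the adaptive search area compensation is omitted.

```python
import cv2
import numpy as np

def skin_mask(bgr, center=(110.0, 153.0), axes=(22.0, 14.0), angle_deg=-30.0):
    """Elliptical skin test in the (Cb, Cr) plane; all parameters are illustrative."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    t = np.deg2rad(angle_deg)
    u = (cb - center[0]) * np.cos(t) + (cr - center[1]) * np.sin(t)
    v = -(cb - center[0]) * np.sin(t) + (cr - center[1]) * np.cos(t)
    return (u / axes[0]) ** 2 + (v / axes[1]) ** 2 <= 1.0

def predict_position(p_t, p_t1, p_t2):
    """Predict the next hand position from the last three, using first and
    second finite differences (velocity and acceleration)."""
    v = p_t - p_t1
    a = p_t - 2 * p_t1 + p_t2
    return p_t + v + 0.5 * a

# Example: mask = skin_mask(cv2.imread("frame.png")) on a placeholder frame,
# then blob analysis on the mask separates the face from the two hands.
```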

Hand gesture based a pet robot control (손 제스처 기반의 애완용 로봇 제어)

  • Park, Se-Hyun;Kim, Tae-Ui;Kwon, Kyung-Su
    • Journal of Korea Society of Industrial Information Systems / v.13 no.4 / pp.145-154 / 2008
  • In this paper, we propose a pet robot control system based on hand gesture recognition in image sequences acquired from a camera mounted on the robot. The proposed system consists of four steps: hand detection, feature extraction, gesture recognition, and robot control. The hand region is first detected in the input images using a skin color model in HSI color space and connected-component analysis. Next, hand shape and motion features are extracted from the image sequence, with the hand shape used to classify meaningful gestures. The hand gesture is then recognized by HMMs (hidden Markov models) whose input is the quantized symbol sequence derived from the hand motion. Finally, the pet robot is controlled by the command corresponding to the recognized gesture. We define four commands, sit down, stand up, lie flat, and shake hands, for controlling the pet robot, and the experiments show that a user can control the robot through the proposed system.
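A rough sketch of the front end is given below, with HSV standing in for the HSI color space used in the paper and a hand-picked skin range; it segments the largest skin blob with connected-component analysis and quantizes its frame-to-frame motion direction into the discrete symbols that the per-gesture HMMs would consume.

```python
import cv2
import numpy as np

def detect_hand(bgr):
    """Return the centroid of the largest skin-colored blob, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)                 # HSV stands in for HSI
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))       # rough skin range
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))  # skip the background
    return centroids[largest]

def motion_symbol(prev, cur, n_dirs=8):
    """Quantize the hand displacement direction into one of n_dirs HMM symbols."""
    dx, dy = cur[0] - prev[0], cur[1] - prev[1]
    ang = np.arctan2(dy, dx) % (2 * np.pi)
    return int(ang // (2 * np.pi / n_dirs)) % n_dirs

# A recognized gesture label is finally mapped to a robot command, e.g.:
COMMANDS = {"sit_down": 0, "stand_up": 1, "lie_flat": 2, "shake_hands": 3}
```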
