• Title/Summary/Keyword: Object Color

Moving Object Tracking Using Color and Optical Flow Information (컬러 및 광류정보를 이용한 이동물체 추적)

  • Gim, Ju-Hyeon; Choi, Han-Go
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.319-322 / 2013
  • This study presents a method for more stable color-based tracking of a moving object when the surrounding environment changes or when nearby objects have colors similar to the tracked object. The moving object is detected through background subtraction and morphological operations, and the conventional CamShift algorithm is supplemented to account for frame-by-frame changes in brightness and the influence of the surroundings. Because even the improved CamShift tracked unstably when nearby objects of similar color were present, we propose merging it with the optical-flow-based KLT algorithm. Experimental results confirm that the proposed method compensates for the weaknesses of the existing approach and improves tracking performance.
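
The CamShift refinement described above rests on the mean-shift step: shift a search window to the weighted centroid of a color back-projection until it settles on the mode. A minimal pure-Python sketch follows; the toy weight map, window, and parameter values are illustrative assumptions, not taken from the paper.

```python
def mean_shift(weights, window, max_iter=20, eps=1.0):
    """weights: 2D list of per-pixel color likelihoods (a back-projection);
    window: (x, y, w, h). Shift the window to the weighted centroid of the
    pixels it covers until the shift falls below eps."""
    x, y, w, h = window
    for _ in range(max_iter):
        m00 = m10 = m01 = 0.0
        for j in range(y, min(y + h, len(weights))):
            for i in range(x, min(x + w, len(weights[0]))):
                p = weights[j][i]
                m00 += p
                m10 += i * p
                m01 += j * p
        if m00 == 0:
            break                                  # window covers no mass
        cx, cy = m10 / m00, m01 / m00              # weighted centroid
        nx, ny = int(round(cx - w / 2)), int(round(cy - h / 2))
        if abs(nx - x) < eps and abs(ny - y) < eps:
            break                                  # converged on the mode
        x, y = max(nx, 0), max(ny, 0)
    return x, y, w, h

# Toy back-projection with a bright 3x3 blob centered at (6, 6).
bp = [[1.0 if abs(i - 6) <= 1 and abs(j - 6) <= 1 else 0.0
       for i in range(12)] for j in range(12)]
print(mean_shift(bp, (2, 2, 6, 6)))   # → (3, 3, 6, 6): window centered on the blob
```

CamShift then re-estimates the window size from the zeroth moment each frame, which is what makes it adaptive to scale changes.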

Active Object Tracking based on hierarchical application of Region and Color Information (지역정보와 색 정보의 계층적 적용에 의한 능동 객체 추적)

  • Jeong, Joon-Yong; Lee, Kyu-Won
    • Proceedings of the Korea Information Processing Society Conference / 2010.11a / pp.633-636 / 2010
  • This paper proposes a technique for active object tracking with a pan-tilt camera: the object is detected using initial region information, and the detected object's color information is then used to track it. To remove noise from the external environment, adaptive Gaussian mixture modeling is used to separate the background from the object. Once the object is determined, it is tracked in real time with the CAMShift algorithm, which can continue tracking even while the camera moves; because CAMShift computes the size of the object, the object can still be identified when its size changes. The pan and tilt positions are computed using a spherical coordinate system, and driving the camera to these positions through the pan-tilt protocol keeps the object centered on the screen, enabling proper tracking.
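
The spherical-coordinate pan/tilt computation mentioned above can be sketched as converting a target point in the camera frame to an azimuth (pan) and elevation (tilt) angle; the axis convention below is an assumption, since the abstract does not specify one.

```python
import math

def pan_tilt(x, y, z):
    """Return (pan, tilt) in degrees for a target point in camera
    coordinates (assumed x: right, y: up, z: forward along the optical
    axis). Driving the camera to these angles centers the target."""
    pan = math.degrees(math.atan2(x, z))                  # azimuth
    tilt = math.degrees(math.atan2(y, math.hypot(x, z)))  # elevation
    return pan, tilt

print(pan_tilt(1.0, 0.0, 1.0))   # target to the right at 45 degrees, level
```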

Machine Classifying Object by Color (색상별 물체 분류기)

  • Jun, Jae-Yung; Choi, Min-Soon; Hwang, Seok-Joong; Kim, Jong-Kook
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.344-345 / 2011
  • This paper describes the development of a robot that classifies items by color using image processing. It has been common to send image data acquired by a robot to a high-performance host PC for processing, with the robot receiving only the result; with the recent rapid advances in embedded CPUs, however, processing images on the robot itself is becoming increasingly feasible. Through the development of the color-based object classifier robot described here, we examine the feasibility of on-robot image processing.
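
A hue-binning rule of the kind such a classifier robot would need can be sketched with the standard library alone; the bin boundaries and labels below are illustrative assumptions, not taken from the paper.

```python
import colorsys

def classify(r, g, b):
    """r, g, b in 0..255; return a coarse color label by HSV hue."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.2:
        return "black"          # too dark to judge hue
    if s < 0.2:
        return "white/gray"     # too desaturated to judge hue
    deg = h * 360
    if deg < 30 or deg >= 330:
        return "red"
    if deg < 90:
        return "yellow"
    if deg < 150:
        return "green"
    if deg < 270:
        return "blue"
    return "magenta"

print(classify(200, 30, 40))   # → red
```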

Probing Intracluster Light of 10 Galaxy Clusters at z > 1 with Deep HST WFC3/IR Imaging Data

  • Joo, Hyungjin; Jee, M. James; Ko, Jongwan
    • The Bulletin of The Korean Astronomical Society / v.46 no.2 / pp.42.2-42.2 / 2021
  • Intracluster light (ICL) is diffuse light from stars that are bound to the cluster potential, not to individual member galaxies. Understanding the formation mechanism of ICL provides critical information on the assembly and evolution of the galaxy cluster. Although there exist several competing models, the dominant production mechanism is still in dispute. The ICL measurement between z=1 and 2 strongly constrains the formation scenario of the ICL because the epoch is when the first mature clusters begin to appear. However, the number of high-redshift ICL studies is small mainly because of observational challenges. In this study, based on deep HST WFC3/IR data, we measured ICL of 10 galaxy clusters at redshift beyond unity, which nearly doubles the sample size in this redshift regime. With careful handling of systematics including object masking, sky estimation, flat fielding, dwarf galaxy contamination, etc., we quantified the total amount of ICL, measured the color profile, and examined the transition between BCG and ICL.
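
Among the systematics listed, sky estimation is commonly done with iterative sigma clipping: reject pixels far from the current mean and re-estimate until the sky level stabilizes. This is a generic sketch, not the authors' pipeline; `kappa` and the toy pixel list are illustrative.

```python
import statistics

def sigma_clipped_sky(pixels, kappa=3.0, max_iter=5):
    """Estimate the sky level as the mean of pixels surviving iterative
    kappa-sigma clipping, which rejects bright object pixels."""
    data = list(pixels)
    for _ in range(max_iter):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        kept = [p for p in data if abs(p - mu) <= kappa * sigma]
        if len(kept) == len(data):
            break                       # no more outliers rejected
        data = kept
    return statistics.fmean(data)

# Flat sky at ~10 counts plus a few bright object pixels.
sky = [10.0] * 100 + [500.0, 800.0]
print(sigma_clipped_sky(sky))   # → 10.0
```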

A Study on the Objects Arrangement of Display Panel and the Cognitive Accuracy under the Virtual Reality Evaluation Tool (가상현실 기법을 적용한 평가도구를 활용한 계기반 배치 및 인지 정확도에 관한 연구)

  • Kim, Sun-young; Yu, Seung-dong; Park, Peom
    • Korean Journal of Cognitive Science / v.11 no.1 / pp.1-8 / 2000
  • Most of the important visual information is presented to the driver through the vehicle's display panel. If the display panel is designed with consideration of the driver's visibility, drivers can get a broad visual field and obtain vehicle-related visual information promptly and exactly while driving. The display panel therefore has a direct relationship with the driver's task performance and can be considered an important device affecting driver-automotive interaction. Many studies on the shape, characteristics, and color of display panels have been performed, but not sufficiently in this country. Nowadays most vehicles have an analog-type display, but its shape and arrangement vary without any definite standards on position. Therefore, experiments using an evaluation tool (VISVEC System) were conducted to inquire into drivers' preferences regarding the arrangement of the major objects of the display panel (speedometer, tachometer, fuel meter, and thermometer) and to ascertain the factors that affect drivers according to the position of those objects. The results showed no correlation between the arrangement characteristics preferred by subjects and cognitive accuracy, but cognition of visual information was easier when each major object had its own area.

Object-based Land Cover Classification Using Digital Aerial Photo Images (디지털항공사진영상을 이용한 객체기반 토지피복분류)

  • Lee, Hyun-Jik; Lu, Ji-Ho; Kim, Sang-Youn
    • Journal of Korean Society for Geospatial Information Science / v.19 no.1 / pp.105-113 / 2011
  • Since existing thematic maps have been made with medium- to low-resolution satellite images, they have several shortcomings, including low positional accuracy and low precision of the presented thematic information. Recent digital aerial photo images can provide panchromatic and color bands as well as NIR (Near Infrared) bands, which are useful for interpreting forest areas. High-resolution images are also available, making precision land cover classification feasible. In this context, this paper implemented object-based land cover classification using digital aerial photos with 0.12 m GSD (Ground Sample Distance) resolution and IKONOS satellite images with 1 m GSD resolution, both taken over the same area, and carried out qualitative analysis with ortho images and existing land cover maps to check the feasibility of object-based land cover classification with digital aerial photos and to present their usability. The accuracy of the classification was analyzed by generating TTA (Training and Test Area) masks and by comparing classified areas using screen digitizing. The results showed that digital aerial photos can produce a land cover map with more detailed classification than satellite images.
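
The TTA-mask accuracy analysis mentioned above reduces to comparing reference labels with classified labels pixel by pixel. A minimal sketch of overall accuracy with a confusion tally follows; the class names and label lists are illustrative, not the paper's data.

```python
from collections import Counter

def accuracy_report(reference, classified):
    """Return (overall accuracy, Counter of (reference, classified) pairs)."""
    pairs = list(zip(reference, classified))
    confusion = Counter(pairs)                 # off-diagonal entries are errors
    overall = sum(n for (r, c), n in confusion.items() if r == c) / len(pairs)
    return overall, confusion

ref = ["forest", "forest", "urban", "water", "urban"]
cls = ["forest", "urban", "urban", "water", "urban"]
acc, conf = accuracy_report(ref, cls)
print(acc)   # → 0.8
```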

Modified HOG Feature Extraction for Pedestrian Tracking (동영상에서 보행자 추적을 위한 변형된 HOG 특징 추출에 관한 연구)

  • Kim, Hoi-Jun; Park, Young-Soo; Kim, Ki-Bong; Lee, Sang-Hun
    • Journal of the Korea Convergence Society / v.10 no.3 / pp.39-47 / 2019
  • In this paper, we proposed extracting modified Histogram of Oriented Gradients (HOG) features using background removal when tracking pedestrians in real time. HOG feature extraction suffers from slow processing due to its large computational load, and background removal has been studied to reduce computation and improve the tracking rate. Region removal was carried out using the S and V channels of the HSV color space to avoid extracting features in unnecessary areas. When the average S and V values of the video are low, the input video is dark overall and object tracking may fail; histogram equalization was performed to prevent this. Because fewer HOG features are extracted from the removed regions and the remaining features are clearer, both processing speed and tracking rate improved. In the experiments, we tested videos with many pedestrians or a single pedestrian, videos with complicated backgrounds, and videos with severe shaking. Compared with the existing HOG-SVM method, the proposed method improved the processing speed by 41.84% and reduced the error rate by 52.29%.
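
The histogram equalization step used above to brighten overly dark input can be sketched for a grayscale strip in pure Python: remap each level through the normalized cumulative histogram. The toy pixel values are illustrative.

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of 8-bit intensities: dark,
    low-contrast input is spread across the full dynamic range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:                      # cumulative histogram
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Lookup table mapping each level through the normalized CDF.
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [lut[p] for p in pixels]

dark = [10, 10, 20, 20, 30, 40]        # a dark, low-contrast strip
print(equalize(dark))                  # stretched to span 0..255
```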

Design of Vision-based Interaction Tool for 3D Interaction in Desktop Environment (데스크탑 환경에서의 3차원 상호작용을 위한 비전기반 인터랙션 도구의 설계)

  • Choi, Yoo-Joo; Rhee, Seon-Min; You, Hyo-Sun; Roh, Young-Sub
    • The KIPS Transactions: Part B / v.15B no.5 / pp.421-434 / 2008
  • As computer graphics, virtual reality, and augmented reality technologies have developed, many applications based on them require interaction in 3D space, such as selection and manipulation of a 3D object. In this paper, we propose a framework for vision-based 3D interaction that simulates the functions of an expensive 3D mouse in a desktop environment. The proposed framework includes a specially manufactured interaction device using three-color LEDs. By recognizing the position and color of the LEDs from video sequences, various mouse events and 6-DOF interactions are supported. Since the proposed device is more intuitive and easier to use than an existing 3D mouse, which is expensive and requires skilled manipulation, it can be used without additional learning or training. We explain how to build the LED pointing device, how to calculate the 3D position and orientation of the pointer, and how to analyze the LED color from video sequences. We verify the accuracy and usefulness of the proposed device by measuring the error of the estimated 3D position and orientation.

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon; Song, Hyok; Choi, Byeong-Ho; Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4A / pp.239-249 / 2012
  • Recently, virtual view generation using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important because it determines the quality of the generated virtual view images. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution color cameras at both sides. Since depth data is needed for both color cameras, we obtain the two views' depth data from the center view using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented it using multi-threading. Experimental results show that the proposed system captured the two viewpoints' color and depth videos in real time and generated the 10 additional views at 7 fps.
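
The joint bilateral filtering step that aligns warped depth edges with color edges can be sketched in 1D: each depth sample is averaged over its neighbors with weights from both spatial distance and color similarity in the guidance image, so smoothing stops at color edges. Parameter values and the toy signals are illustrative assumptions.

```python
import math

def joint_bilateral_1d(depth, color, radius=2, sigma_s=1.5, sigma_r=20.0):
    """Filter `depth` guided by `color`: neighbors across a color edge
    get near-zero weight, so depth edges snap to color edges."""
    out = []
    for i in range(len(depth)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(depth), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *       # spatial
                 math.exp(-((color[i] - color[j]) ** 2) / (2 * sigma_r ** 2)))  # range
            num += w * depth[j]
            den += w
        out.append(num / den)
    return out

color = [0, 0, 0, 255, 255, 255]      # a sharp edge in the guidance image
depth = [10, 10, 12, 52, 50, 50]      # noisy warped depth around the same edge
print([round(d, 1) for d in joint_bilateral_1d(depth, color)])
```

Note how samples on each side of the color edge are smoothed only among themselves, preserving the depth discontinuity.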

Stereo Vision Based 3D Input Device (스테레오 비전을 기반으로 한 3차원 입력 장치)

  • Yoon, Sang-Min; Kim, Ig-Jae; Ahn, Sang-Chul; Ko, Han-Seok; Kim, Hyoung-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.4 / pp.429-441 / 2002
  • This paper concerns extracting 3D motion information from a 3D input device in real time, with a focus on enabling effective human-computer interaction. In particular, we develop a novel algorithm for extracting 6-degrees-of-freedom motion information from the device by employing the epipolar geometry of a stereo camera together with color, motion, and structure information, without requiring a camera calibration object. To extract 3D motion, we first determine the epipolar geometry of the stereo camera by computing the perspective projection matrix and the perspective distortion matrix. We then apply the proposed Motion Adaptive Weighted Unmatched Pixel Count algorithm, which performs color transformation, unmatched pixel counting, discrete Kalman filtering, and principal component analysis. The extracted 3D motion information can be applied to controlling virtual objects or to navigation devices that control the user's viewpoint in a virtual reality setting. Since the stereo vision-based 3D input device is wireless, it provides a more natural and efficient interface, effectively realizing a feeling of immersion.
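
The discrete Kalman filtering stage above can be sketched for a scalar state (the full algorithm filters multi-dimensional motion; the noise parameters and measurement sequence below are illustrative assumptions).

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a constant-state model: q is process
    noise, r measurement noise, (x0, p0) the initial state and variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                      # predict: state constant, uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with the innovation (z - x)
        p *= (1 - k)                # updated variance
        estimates.append(x)
    return estimates

zs = [1.2, 0.9, 1.1, 1.0, 0.95, 1.05]   # noisy readings around 1.0
est = kalman_1d(zs)
print(round(est[-1], 2))                 # smoothed estimate near 1.0
```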