• Title/Summary/Keyword: Color image detection


Research for Bit-depth Conversion Development by Detection Lost Information to Resizing Process for Digital Photography (디지털 사진영상의 크기조절과정에서 유실되는 정보를 이용한 비트심도의 확장)

  • Cho, Do-Hee;Maik, Vivek;Paik, Joon-Ki;Har, Dong-Hwan
    • The Journal of the Korea Contents Association / v.9 no.4 / pp.189-197 / 2009
  • A digital image usually has 8 bits of depth, representing pixel intensities in the range [0, 255]. This range allows 256 levels of pixel values, so the grayscale value at any point in the image is an integer. When we interpolate an image for resizing, we must round each interpolated value to an integer, which can degrade the perceived color values. This paper proposes a new method for recovering the information lost during the interpolation process. With the proposed method, pixels tend to regain values closer to their originals, which yields better-looking resized images.
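A minimal sketch of the rounding loss this abstract describes, using assumed toy pixel values: interpolating 8-bit intensities yields fractional results that a standard resize rounds away, and that fractional residue is what a higher bit depth could retain.

```python
import numpy as np

# Toy 2x2 8-bit image (assumed values, for illustration only).
img = np.array([[10, 21], [30, 40]], dtype=np.uint8)

# Bilinear midpoint of the four pixels: the "true" interpolated intensity.
true_value = img.astype(np.float64).mean()  # 25.25, a fractional value
rounded = np.uint8(np.round(true_value))    # 25, what an 8-bit resize stores

# The fractional residue is the information lost to rounding; storing the
# result at a higher bit depth (e.g. 10 bits) would preserve it.
residue = true_value - float(rounded)
print(true_value, rounded, residue)         # 25.25 25 0.25
```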

A proposed image stitching method for web-based panoramic virtual reality for Hoseo Cyber Museum (호서 사이버 박물관: 웹기반의 파노라마 비디오 가상현실에 대한 효율적인 이미지 스티칭 알고리즘)

  • Khan, Irfan;Soo, Hong Song
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.2 / pp.893-898 / 2013
  • Recreating the experience of a particular place has always been a dream, and panoramic virtual reality is a technology for building virtual environments in which the viewing angle and path can be chosen freely within a dynamic scene. In this paper we examine efficient methods for registering and stitching captured images, studying two approaches. In the first, dynamic programming is used to locate ideal key points and match them so that adjacent images can be merged; image blending then smooths the color transitions. In the second, FAST and SURF detectors find distinctive features in the images, a nearest-neighbor algorithm matches corresponding features, and a homography is estimated from the matched key points using RANSAC. The paper also covers automatically selecting (recognizing and comparing) the images to be stitched.
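The second approach maps directly onto standard OpenCV calls. Below is a minimal sketch of that pipeline; ORB stands in for FAST+SURF (SURF requires the opencv-contrib build), the file names are hypothetical, and the 0.75 ratio and 5.0 RANSAC threshold are common defaults rather than the paper's values.

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive features and compute descriptors.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Nearest-neighbor matching with Lowe's ratio test to drop ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# Estimate the homography from matched key points with RANSAC.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp one image into the other's frame; blending would follow here.
pano = cv2.warpPerspective(img1, H, (img1.shape[1] + img2.shape[1], img2.shape[0]))
```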

A Study for Introducing a Method of Detecting and Recovering the Shadow Edge from Aerial Photos (항공영상에서 그림자 경계 탐색 및 복원 기법 연구)

  • Jung, Yong-Ju;Jang, Young-Woon;Choi, Yun-Woong;Cho, Gi-Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.24 no.4 / pp.327-334 / 2006
  • Aerial photos are needed both for basic purposes such as cartography and ground-cover classification and for social purposes such as city planning, environment, disaster management, and transportation. However, the shadows captured in aerial photos make it difficult to interpret ground information and restrict users who need the photos for their field tasks. Shadows are generally cast by buildings and surface topography; the underlying cause is a change of illumination within an area. To remove shadows, this study uses a single image and processes it without knowledge of the image source or capture conditions. Applying entropy minimization, it generates a 1-D grayscale invariant image, builds a shadow-edge mask with Canny edge detection, and finally, by filtering in the Fourier frequency domain, produces an intrinsic image that recovers the 3-D color information with the shadow removed.
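The entropy-minimization step is in the spirit of Finlayson's illuminant-invariant imaging; below is a compact sketch under simplifying assumptions (band-ratio chromaticities, 64-bin histograms, parameter values chosen for illustration). The shadow-edge mask and Fourier-domain filtering would operate on its output.

```python
import numpy as np

def invariant_image(img, n_angles=180, eps=1e-6):
    """1-D grayscale invariant image via entropy minimization.
    img: float RGB array with values in (0, 1]."""
    r, g, b = img[..., 0] + eps, img[..., 1] + eps, img[..., 2] + eps
    x1, x2 = np.log(r / g).ravel(), np.log(b / g).ravel()  # log-chromaticities

    best_angle, best_entropy = 0.0, np.inf
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        proj = x1 * np.cos(theta) + x2 * np.sin(theta)
        hist, _ = np.histogram(proj, bins=64)
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = -np.sum(p * np.log(p))  # Shannon entropy of the projection
        if entropy < best_entropy:
            best_entropy, best_angle = entropy, theta

    # The minimum-entropy direction suppresses the illumination (shadow)
    # component, leaving an illumination-invariant grayscale image.
    inv = x1 * np.cos(best_angle) + x2 * np.sin(best_angle)
    return inv.reshape(img.shape[:2])
```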

Development of Web-cam Game using Hand and Face Skin Color (손과 얼굴의 피부색을 이용한 웹캠 게임 개발)

  • Oh, Chi-Min;Aurrahman, Dhi;Islam, Md. Zahidul;Kim, Hyung-Gwan;Lee, Chil-Woo
    • Proceedings of the HCI Society of Korea Conference / 2008.02b / pp.60-63 / 2008
  • The Sony EyeToy was developed for the PlayStation 2, using a webcam to detect the player. Users see themselves on the television and become the actual player in the game, an interface very different from that of an ordinary video game controlled with a joystick. Although the EyeToy is already a commercial product, its interface method remains interesting and can be extended with techniques such as gesture recognition. In this paper, we develop a game interface that combines image processing for human hand and face detection with a game graphics module, and we implement an example balloon-busting game to demonstrate the interface's abilities. We will open this project to other developers for further development.

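A minimal sketch of the skin-color detection such an interface relies on; the YCrCb thresholds and the 1000-pixel area cutoff are common heuristic values, not the paper's parameters, and the input file name is hypothetical.

```python
import cv2
import numpy as np

frame = cv2.imread("webcam_frame.jpg")            # hypothetical webcam frame
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

# Pixels whose Cr/Cb fall in a typical skin range form the mask.
lower = np.array([0, 133, 77], dtype=np.uint8)
upper = np.array([255, 173, 127], dtype=np.uint8)
mask = cv2.inRange(ycrcb, lower, upper)

# Morphological cleanup, then keep large blobs as hand/face candidates.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [c for c in contours if cv2.contourArea(c) > 1000]
print(len(candidates), "skin regions found")
```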

Image Segmentation for Fire Prediction using Deep Learning (딥러닝을 이용한 화재 발생 예측 이미지 분할)

  • Kim, TaeHoon;Park, JongJin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.1 / pp.65-70 / 2023
  • In this paper, we use a deep learning model to detect and segment flame and smoke from fires in real time. To this end, the well-known U-Net is used to separate flame and smoke via multi-class segmentation. Training with the proposed technique gives very good loss and accuracy values of 0.0486 and 0.97996, respectively. The IoU value used in object detection is also very good at 0.849. When the trained model is applied to fire images not used in training, flame and smoke are well detected and segmented, and smoke colors are well distinguished. The proposed method can be used to build fire prediction and detection systems.
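A compact U-Net-style sketch of the multi-class setup described here (background, flame, smoke); the layer widths and input size are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Two-level U-Net: one encoder stage, one decoder stage, one skip."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, n_classes, 1))  # per-pixel logits

    def forward(self, x):
        e = self.enc(x)
        u = self.up(self.mid(self.down(e)))
        return self.dec(torch.cat([u, e], dim=1))  # skip connection

# Per-pixel cross-entropy drives the multi-class segmentation.
model = TinyUNet()
logits = model(torch.randn(1, 3, 128, 128))        # -> (1, 3, 128, 128)
target = torch.zeros(1, 128, 128, dtype=torch.long)
print(nn.CrossEntropyLoss()(logits, target).item())
```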

A Laser Pointer Detection Algorithm Based on Conditional Test in Color Model and Differential Image (색상 조건 검사와 차영상을 이용한 레이저 포인터의 좌표 검출)

  • Lee, Doo-Hee;Kim, Yoon;Choi, Chang-Yeol
    • Annual Conference of KIPS / 2010.11a / pp.617-620 / 2010
  • With the recent emergence of high-performance mobile devices and diverse content, interest in user interfaces for ubiquitous environments is growing. Mobile projectors in particular have the advantage of sharing a large screen with others regardless of location, but the inconvenience of requiring direct control of the device. This paper proposes an algorithm that detects, in real time, the laser pointer a user shines on the screen, using only image information captured by a camera in a mobile environment. The proposed algorithm divides into color detection and motion detection. In a single frame, laser-pointer color regions are detected by testing conditions based on the means of the image components; motion regions are detected by differencing the current frame with an adjacent frame and taking areas where the difference exceeds a threshold. Finally, regions that satisfy both the color and motion conditions are recognized as the laser-pointer region. Because the technique uses only image information, no sensors or extra equipment need to be worn, and because it uses image-component means, it is robust to illumination changes caused by projector performance, enabling effective laser-pointer detection. Experimental results varied with ambient lighting but mostly showed detection rates above 80% and false-detection rates below 16%, results that ensured users' subjective satisfaction.
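A minimal sketch of the two-test detection described above: a mean-relative color condition on a single frame plus a frame-difference motion test, intersected to localize the pointer. The thresholds are illustrative assumptions, not the paper's values.

```python
import cv2
import numpy as np

def detect_laser(prev_gray, frame_bgr, diff_thresh=25, red_margin=60):
    """Return the (x, y) centroid of the laser pointer, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Color test: red channel well above the frame's mean red value
    # (mean-relative, so it adapts to projector brightness).
    red = frame_bgr[..., 2].astype(np.int32)
    color_mask = (red > red.mean() + red_margin).astype(np.uint8)

    # Motion test: absolute difference against the previous frame.
    motion_mask = (cv2.absdiff(gray, prev_gray) > diff_thresh).astype(np.uint8)

    # Only regions passing both tests count as the laser pointer.
    both = cv2.bitwise_and(color_mask, motion_mask)
    ys, xs = np.nonzero(both)
    return (int(xs.mean()), int(ys.mean())) if len(xs) else None
```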

Vision-Based Identification of Personal Protective Equipment Wearing

  • Park, Man-Woo;Zhu, Zhenhua
    • International conference on construction engineering and project management / 2015.10a / pp.313-316 / 2015
  • Construction is one of the most dangerous job sectors, reporting tens of thousands of lost-time injuries and deaths every year. These disasters incur delays and additional costs to projects. Safety management needs to be among the top priorities throughout construction to avoid fatal accidents and to foster safe working environments. One of the most frequently violated safety regulations is the wearing of personal protective equipment (PPE). To facilitate monitoring of compliance with PPE regulations, this paper proposes a vision-based method that automatically identifies whether workers wear hard hats and safety vests. The method involves three modules: human body detection, identification of safety vest wearing, and hard hat detection. First, human bodies are detected in video frames captured by real-time on-site construction cameras. The detected bodies are classified as wearing or not wearing safety vests based on the color features of their upper parts. Finally, hard hats are detected in regions near the detected bodies, and the locations of detected hard hats and bodies are correlated to find corresponding matches. In this way, the proposed method flags any appearance of a worker without a hard hat or safety vest. The method has been tested on on-site videos, and the results signify its potential to facilitate site safety monitoring.

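A sketch of the vest-identification module under stated assumptions: OpenCV's stock HOG person detector stands in for the paper's body detector, and the HSV range and 0.15 coverage ratio for high-visibility fabric are heuristic guesses, not the authors' parameters.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("site_frame.jpg")              # hypothetical camera frame
rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))

for (x, y, w, h) in rects:
    upper = frame[y:y + h // 2, x:x + w]          # upper half of the body box
    hsv = cv2.cvtColor(upper, cv2.COLOR_BGR2HSV)
    # Fluorescent yellow-green band typical of safety vests.
    mask = cv2.inRange(hsv, np.array([25, 100, 100]), np.array([45, 255, 255]))
    has_vest = mask.mean() / 255.0 > 0.15         # coverage ratio test
    print("worker at", (x, y), "vest:", has_vest)
```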

Decoding Brain Patterns for Colored and Grayscale Images using Multivariate Pattern Analysis

  • Zafar, Raheel;Malik, Muhammad Noman;Hayat, Huma;Malik, Aamir Saeed
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.4 / pp.1543-1561 / 2020
  • Creating a taxonomy of human brain activity is a complicated and challenging procedure. Its multifaceted aspects, including experiment design, stimulus selection, and image presentation, as well as feature extraction and selection techniques, add to the challenge. Although researchers have explored various methods for creating such taxonomies, the use of multivariate pattern analysis (MVPA) for image recognition to catalog brain activity is scarce. Moreover, experiment design is complex, and the selection of image type, color, and order is challenging too. This research bridges the gap by using MVPA to create a taxonomy of human brain activity for different categories of images, both colored and grayscale. An EEG experiment was conducted, with feature extraction, selection, and classification approaches, collecting data from 25 prequalified graduates of Universiti Teknologi PETRONAS (UTP). The participants were shown both colored and grayscale images while accuracy and reaction time were recorded. Using the wavelet transform, t-tests, and a support vector machine, the results showed that colored images produce better accuracy and response times. The research concludes that MVPA is a strong approach for analyzing EEG data, as more useful information can be extracted from the brain with colored images. It examines the detailed behavior of the human brain for colored and grayscale images on a specific task and contributes to improving the decoding of brain activity with increased accuracy. Such experimental settings can also be applied in other areas, including medicine, the military, business, and lie detection.
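A sketch of the analysis chain the abstract names (wavelet features, t-test selection, SVM) on placeholder data with assumed shapes; in a real analysis the feature selection should be nested inside cross-validation to avoid leakage.

```python
import numpy as np
import pywt
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X_raw = np.random.randn(100, 32, 256)   # trials x channels x samples (placeholder)
y = np.random.randint(0, 2, 100)        # 0 = grayscale, 1 = colored stimulus

def wavelet_features(trial):
    """Log-energy of each wavelet sub-band, per channel."""
    feats = []
    for ch in trial:
        for c in pywt.wavedec(ch, "db4", level=4):
            feats.append(np.log(np.sum(c ** 2) + 1e-12))
    return feats

X = np.array([wavelet_features(t) for t in X_raw])

# Univariate t-test: keep the most class-discriminative features.
t, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
top = np.argsort(-np.abs(t))[:50]

scores = cross_val_score(SVC(kernel="linear"), X[:, top], y, cv=5)
print("CV accuracy:", scores.mean())
```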

The Embodiment of the Real-Time Face Recognition System Using PCA-based LDA Mixture Algorithm (PCA 기반 LDA 혼합 알고리즘을 이용한 실시간 얼굴인식 시스템 구현)

  • 장혜경;오선문;강대성
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.4 / pp.45-50 / 2004
  • In this paper, we propose a new PCA-based LDA mixture algorithm (PLMA) for a real-time face recognition system. The system consists of two main parts: 1) face extraction and 2) face recognition. The face extraction part applies image subtraction, color filtering, eye and mouth region detection, and normalization; the face recognition part applies a method mixing PCA and LDA to the extracted face-candidate region images. Existing recognition systems using only PCA show low recognition rates, and systems using only LDA have difficulty applying LDA directly to input images when the number of training samples is small compared with the number of image pixels. To overcome these shortcomings, we reduce dimensionality by applying PCA to the normalized images and then apply LDA to the compressed images; this makes real-time recognition possible and also improves recognition rates. We evaluated the proposed system on the self-collected DAUface database. The experimental results show that the proposed method outperforms the PCA, LDA, and ICA methods in recognition accuracy.
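A minimal sketch of the PCA-then-LDA idea on placeholder data: PCA shrinks the pixel dimension so that LDA becomes well-posed, and a simple classifier then operates in the LDA space. The 50-component choice and the nearest-neighbor matcher are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X = np.random.rand(200, 64 * 64)   # flattened face crops (placeholder data)
y = np.random.randint(0, 10, 200)  # identity labels

model = make_pipeline(
    PCA(n_components=50),              # dimensionality reduction first
    LinearDiscriminantAnalysis(),      # class-separating projection
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(X, y)
print(model.predict(X[:5]))
```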

Hand gesture based a pet robot control (손 제스처 기반의 애완용 로봇 제어)

  • Park, Se-Hyun;Kim, Tae-Ui;Kwon, Kyung-Su
    • Journal of Korea Society of Industrial Information Systems / v.13 no.4 / pp.145-154 / 2008
  • In this paper, we propose a pet robot control system that uses hand gesture recognition on image sequences acquired from a camera affixed to the pet robot. The proposed system consists of four steps: hand detection, feature extraction, gesture recognition, and robot control. The hand region is first detected in the input images using a skin-color model in HSI color space and connected-component analysis. Next, hand shape and motion features are extracted from the image sequences, and the hand shape is used to classify meaningful gestures. The gesture is then recognized using hidden Markov models (HMMs) whose input is the quantized symbol sequence derived from the hand motion. Finally, the pet robot is controlled by the command corresponding to the recognized gesture. We define four commands for controlling the pet robot: sit down, stand up, lie flat, and shake hands. Experiments show that a user can control the pet robot through the proposed system.

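A self-contained sketch of the HMM recognition step: each command has a discrete HMM over quantized motion symbols (e.g., eight direction codes), and the command whose model scores the observed sequence highest wins. The parameters here are random placeholders; in the paper's setting each HMM would be trained on example sequences of its gesture.

```python
import numpy as np

def log_likelihood(obs, start, trans, emit):
    """Forward algorithm in probability space (short sequences assumed)."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return np.log(alpha.sum() + 1e-300)

n_states, n_symbols = 4, 8
rng = np.random.default_rng(0)

def random_hmm():
    start = np.full(n_states, 1.0 / n_states)
    trans = rng.dirichlet(np.ones(n_states), size=n_states)   # rows sum to 1
    emit = rng.dirichlet(np.ones(n_symbols), size=n_states)
    return start, trans, emit

commands = ["sit down", "stand up", "lie flat", "shake hands"]
models = {c: random_hmm() for c in commands}       # placeholder, untrained HMMs

obs = rng.integers(0, n_symbols, size=12)          # quantized motion symbols
best = max(commands, key=lambda c: log_likelihood(obs, *models[c]))
print("recognized command:", best)
```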