• Title/Summary/Keyword: RGB 영상 (RGB image)


A Lip Detection Algorithm Using Color Clustering (색상 군집화를 이용한 입술탐지 알고리즘)

  • Jeong, Jongmyeon
    • Journal of the Korea Society of Computer and Information / v.19 no.3 / pp.37-43 / 2014
  • In this paper, we propose a robust lip detection algorithm using color clustering. First, we apply the AdaBoost algorithm to extract the facial region and convert it into the Lab color space. Because the a and b components of the Lab color space are known to express lip color and its complementary color well, we use them as the features for color clustering. Nearest-neighbour clustering is applied to separate the skin region from the facial region, and K-means color clustering is applied to extract the lip-candidate region. Geometric characteristics are then used to extract the final lip region. Experimental results show that the proposed algorithm detects the lip region robustly.
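The K-means step described above can be sketched as clustering pixels on their Lab a and b components alone. The following is a minimal numpy sketch under that assumption; the paper's full pipeline (AdaBoost face detection, nearest-neighbour skin separation, geometric post-processing) is omitted, and the deterministic initialization is an illustrative choice, not the authors'.

```python
import numpy as np

def kmeans_ab(pixels_ab, k=2, iters=20):
    """Cluster pixels by their (a, b) chroma components with plain K-means."""
    # deterministic init: spread initial centres across the sample indices
    idx = np.linspace(0, len(pixels_ab) - 1, k).astype(int)
    centers = pixels_ab[idx].astype(float).copy()
    for _ in range(iters):
        # assign each pixel to the nearest centre in (a, b) space
        d = np.linalg.norm(pixels_ab[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels_ab[labels == j].mean(axis=0)
    return labels, centers

# toy data: two well-separated chroma groups standing in for skin vs. lip
rng = np.random.default_rng(7)
ab = np.vstack([rng.normal([20.0, 15.0], 1.0, (50, 2)),
                rng.normal([5.0, 40.0], 1.0, (50, 2))])
labels, centers = kmeans_ab(ab)
```

With well-separated chroma groups, the two clusters recover the two pixel populations.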

Development of Motion Recognition Platform Using Smart-Phone Tracking and Color Communication (스마트 폰 추적 및 색상 통신을 이용한 동작인식 플랫폼 개발)

  • Oh, Byung-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.5 / pp.143-150 / 2017
  • In this paper, we propose a novel motion recognition platform using smart-phone tracking and color communication. The interface requires only a camera and a personal smart-phone, rather than expensive equipment, to provide motion control. The platform recognizes the user's gestures by tracking the 3D distance and rotation angle of the smart-phone, which essentially acts as a motion controller in the user's hand. A color-coded communication method using RGB color combinations is also included in the interface. Users can conveniently send or receive text data through this function, and data can be transferred continuously even while the user is performing gestures. We present viable contents implemented on the proposed motion recognition platform.
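The idea of colour-coded data transfer can be illustrated with a toy byte-to-RGB packing: 3+3+2 bits of each character mapped onto coarse channel levels. This scheme is entirely illustrative; the abstract does not specify the paper's actual coding, and `char_to_rgb`/`rgb_to_char` are hypothetical helpers.

```python
def char_to_rgb(ch):
    """Toy encoding: pack one byte into a coarse (R, G, B) level triple."""
    c = ord(ch)
    return ((c >> 5) & 0b111,  # top 3 bits -> R level
            (c >> 2) & 0b111,  # middle 3 bits -> G level
            c & 0b11)          # bottom 2 bits -> B level

def rgb_to_char(rgb):
    """Reverse the toy packing back into a character."""
    r, g, b = rgb
    return chr((r << 5) | (g << 2) | b)

msg = "RGB"
decoded = "".join(rgb_to_char(char_to_rgb(ch)) for ch in msg)
```

The round trip is lossless for ASCII text, which is the property a colour channel like this needs.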

Image Retrieval Using Histogram Refinement Based on Local Color Difference (지역 색차 기반의 히스토그램 정교화에 의한 영상 검색)

  • Kim, Min-KI
    • Journal of Korea Multimedia Society / v.18 no.12 / pp.1453-1461 / 2015
  • Since digital images and videos on the internet are increasing rapidly with the spread of mobile computers and smartphones, research on image retrieval has gained tremendous momentum. Color, shape, and texture are the major features used in image retrieval. Color information in particular has been widely used because it is robust to translation, rotation, and small changes of camera view. This paper proposes a new method for histogram refinement based on local color difference. First, the proposed method converts an RGB color image into an HSV color image. Second, it reduces the size of the color space from 256³ colors to 32. It then classifies the pixels of the 32-color image into three groups according to the color difference between a central pixel and its neighbors in a 3x3 local region. Finally, it builds a color difference vector (CDV) representing three refined color histograms, and image retrieval is performed by CDV matching. Experimental results on a public image database show that the proposed method has higher retrieval accuracy than conventional methods and can be effectively applied to searching low-resolution images such as thumbnails.
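The 3x3 neighbourhood classification can be sketched by counting how many of the eight neighbours share the central pixel's quantized colour index. The three-way grouping rule below (uniform / mixed / isolated) is an illustrative assumption, not the paper's exact definition of local colour difference.

```python
import numpy as np

def refine_by_neighbourhood(img32):
    """Split interior pixels of a 32-colour-index image into three groups
    by how many of the 8 neighbours share the centre's colour index."""
    h, w = img32.shape
    groups = np.full((h, w), -1)          # -1 marks unclassified border pixels
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = img32[y-1:y+2, x-1:x+2]
            same = (nb == img32[y, x]).sum() - 1   # exclude the centre itself
            groups[y, x] = 0 if same == 8 else (1 if same >= 4 else 2)
    return groups

# toy 32-colour image: a flat region meeting a vertical colour boundary
img = np.zeros((5, 5), dtype=int)
img[:, 3:] = 1
g = refine_by_neighbourhood(img)
```

Pixels deep inside a flat region land in group 0, while pixels along the colour boundary land in the mixed groups, which is the kind of split a refined histogram can exploit.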

A LabVIEW-based Video Dehazing using Dark Channel Prior (Dark Channel Prior을 이용한 LabVIEW 기반의 동영상 안개제거)

  • Roh, Chang Su;Kim, Yeon Gyo;Chong, Ui Pil
    • Journal of Korea Multimedia Society / v.20 no.2 / pp.101-107 / 2017
  • We developed LabVIEW code for video dehazing. The dark channel prior proposed by K. He was applied to remove fog from a single image, K. B. Gibson's median dark channel prior was also applied, and both were implemented in LabVIEW. In other words, we improved the image processing speed by porting the existing fog removal algorithm, the dark channel prior, to the LabVIEW system. As a result, we developed a real-time fog removal system that can be commercialized. Although an existing algorithm is used, its real-time performance has been verified, so it should be highly applicable in academic and industrial fields. In addition, fog removal can be performed not only on the entire image but also on a selected partial region. As an application example, we developed a system that acquires clear video at long distance by connecting a laptop equipped with the LabVIEW software developed in this paper to a 100~300x zoom telescope.
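The dark channel prior the entry builds on is computable in a few lines: take the per-pixel minimum over R, G, B, then a minimum filter over a local patch. This is a minimal numpy sketch of that definition (the paper itself implements it in LabVIEW, and the full dehazing step with atmospheric light estimation and transmission recovery is omitted).

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an HxWx3 image in [0, 1]: channel-wise minimum,
    then a patch x patch minimum filter."""
    mins = img.min(axis=2)                     # min over R, G, B per pixel
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y+patch, x:x+patch].min()
    return out

# a haze-free scene has some near-zero channel almost everywhere...
clear = np.dstack([np.full((4, 4), 0.8), np.full((4, 4), 0.5), np.zeros((4, 4))])
dc_clear = dark_channel(clear)
# ...while a hazy (washed-out grey) scene has a high dark channel
dc_hazy = dark_channel(np.full((4, 4, 3), 0.7))
```

The contrast between the two toy images is exactly the statistical observation the prior rests on.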

Hybrid Silhouette Extraction Using Color and Gradient Informations (색상 및 기울기 정보를 이용한 인간 실루엣 추출)

  • Joo, Young-Hoon;So, Jea-Yun
    • Journal of the Korean Institute of Intelligent Systems / v.17 no.7 / pp.913-918 / 2007
  • Human motion analysis is an important research subject in human-robot interaction (HRI). Before analyzing human motion, however, the silhouette of the human body must be extracted from sequential images obtained by a CCD camera. An intelligent robot system requires a more robust silhouette extraction method because it suffers from internal vibration and low resolution. In this paper, we discuss a hybrid silhouette extraction method for detecting and tracking human motion. The proposed method combines and optimizes temporal and spatial gradient information. We also propose compensation methods that avoid losing silhouette information in poor images. Finally, we show the effectiveness and feasibility of the proposed method through experiments.

Content-based Image Retrieval System (내용기반 영상검색 시스템)

  • Yoo, Hun-Woo;Jang, Dong-Sik;Jung, She-Hwan;Park, Jin-Hyung;Song, Kwang-Seop
    • Journal of Korean Institute of Industrial Engineers / v.26 no.4 / pp.363-375 / 2000
  • In this paper we propose a content-based image retrieval method that can search large image databases efficiently by color, texture, and shape content. Quantized RGB histograms and the dominant triple (hue, saturation, and value), extracted from a quantized HSV joint histogram in the local image region, are used to represent global/local color information in the image. Entropy and the maximum entry of co-occurrence matrices are used for texture information, and an edge angle histogram represents shape information. A relevance feedback approach that couples the proposed features is used for better retrieval accuracy. Simulation results show that the method provides a 77.5 percent precision rate without relevance feedback, and an increased precision rate with relevance feedback, over all queries. We also present a new indexing method that supports fast retrieval in large image databases: tree structures constructed by the k-means algorithm, combined with the triangle inequality, eliminate candidate images from the similarity calculation between the query image and each database image. We find that the proposed method eliminates on average 92.9 percent of the images from direct comparison.
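The triangle-inequality pruning mentioned above works because, for any metric d, |d(q,k) - d(x,k)| lower-bounds d(q,x); with distances to a key image precomputed offline, candidates whose bound already exceeds the search threshold can be skipped without a full comparison. A minimal sketch with L1 histogram distance (the k-means tree part of the paper's index is omitted, and the function name is hypothetical):

```python
import numpy as np

def prune_candidates(query_hist, db_hists, key_hist, threshold):
    """Keep only database images the triangle-inequality bound cannot
    rule out at the given distance threshold."""
    d = lambda a, b: float(np.abs(a - b).sum())   # L1 histogram distance
    dqk = d(query_hist, key_hist)
    survivors = []
    for i, hist in enumerate(db_hists):
        dxk = d(hist, key_hist)                   # precomputable offline
        if abs(dqk - dxk) <= threshold:           # bound <= threshold: keep
            survivors.append(i)
    return survivors

q = np.array([1.0, 0.0, 0.0])   # query histogram
k = np.array([0.0, 1.0, 0.0])   # key histogram
db = [q.copy(), k.copy()]
survivors = prune_candidates(q, db, k, threshold=1.0)
```

Here the copy of the query survives while the image identical to the key is pruned, since its bound |d(q,k) - 0| = 2 already exceeds the threshold. The bound never discards a true match; it only skips guaranteed non-matches, which is why the paper can report large savings without accuracy loss.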


Implementation of Lane Luminance Measurement Application using Smartphone (스마트폰 기반의 도로 밝기 측정 어플리케이션)

  • Choi, Young-Hwan;Yum, HyoSub;Park, Doo-Soon;Hong, Min
    • Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.298-301 / 2014
  • According to recent traffic accident statistics, more accidents occur at night than during the day, and key contributing factors include poor visibility and increased eye fatigue caused by inadequate lighting facilities. In this paper, we design and implement a smartphone-based application that measures road brightness at night and stores location, brightness, and heading information in a database in real time, in order to identify spots where lighting is inadequate. To this end, we implemented lane detection and an algorithm that converts RGB color values into luminance values in a native environment using the Android NDK. Road video is captured in real time with the smartphone camera, an ROI is set to improve processing speed, the road brightness between the detected lanes is measured, and heading and location information obtained from the GPS sensor is stored in the database. Based on the driving records of application users, we expect follow-up research that uses this road brightness database to automatically warn of road sections with glare- and lighting-related accident risk.
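The RGB-to-luminance conversion the application performs is typically a weighted sum of the channels. The ITU-R BT.601 luma weights below are a plausible choice for such a step; the abstract does not state the authors' exact coefficients, so treat this as an assumption.

```python
def rgb_to_luma(r, g, b):
    """Luma from RGB using the BT.601 weights (green dominates because
    the eye is most sensitive to it)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

white = rgb_to_luma(255, 255, 255)   # full-scale input stays full-scale
black = rgb_to_luma(0, 0, 0)
```

The weights sum to 1, so the conversion preserves the value range of the input, which matters when brightness readings from different frames are compared in a database.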

Camera-based Dog Unwanted Behavior Detection (영상 기반 강아지의 이상 행동 탐지)

  • Atif, Othmane;Lee, Jonguk;Park, Daehee;Chung, Yongwha
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.419-422 / 2019
  • The recent increase in single-person households and family income has led to an increase in the number of pet owners. However, because owners cannot attend to them 24 hours a day, pets, and especially dogs, tend to display unwanted behavior that can be harmful to themselves and their environment when left alone. Detecting those behaviors in the owner's absence is therefore necessary to suppress them and prevent damage. In this paper, we propose a camera-based system that uses deep learning algorithms to detect a set of normal and unwanted behaviors, monitoring dogs left alone at home. Frames collected from the camera are arranged into sequences of RGB frames and their corresponding optical flow sequences, and features are extracted from each data flow using pre-trained VGG-16 models. The extracted features from each sequence are concatenated and input to a bi-directional LSTM network that classifies the dog's action into one of the targeted classes. Experimental results show that our method achieves a good performance, exceeding 0.9 in precision, recall and F1-score.

CNN-Based Fake Image Identification with Improved Generalization (일반화 능력이 향상된 CNN 기반 위조 영상 식별)

  • Lee, Jeonghan;Park, Hanhoon
    • Journal of Korea Multimedia Society / v.24 no.12 / pp.1624-1631 / 2021
  • With the continued development of image processing technology, we live in a time when it is difficult to visually discriminate processed (or tampered) images from real ones. As the risk of fake images being misused for crime increases, however, image forensics for identifying fake images is gaining importance. Various deep learning-based identifiers have been studied, but many problems remain before they can be used in real situations. Because deep learning inherently relies strongly on the given training data, such identifiers are very vulnerable to evaluation data they have never seen. We therefore seek ways to improve the generalization ability of deep learning-based fake image identifiers. First, images with various contents were added to the training dataset to resolve the over-fitting problem in which the identifier can only classify real and fake images with specific contents and fails on others. Next, color spaces other than RGB were exploited: fake image identification was attempted in color spaces not considered when creating the fakes, such as HSV and YCbCr. Finally, dropout, commonly used for generalization of neural networks, was applied. Experiments confirmed that color space conversion to HSV is the best single solution, and that combining it with the enlarged training dataset greatly improves the accuracy and generalization ability of deep learning-based identifiers on fake images never seen before.
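The colour-space conversion step the entry evaluates can be done per pixel with the standard library. This sketch uses `colorsys` on RGB values scaled to [0, 1] as a stand-in for whatever conversion routine the authors used (their implementation is not shown in the abstract).

```python
import colorsys
import numpy as np

def to_hsv(img_rgb01):
    """Convert an HxWx3 RGB image with values in [0, 1] to HSV,
    pixel by pixel, using the stdlib colorsys module."""
    h, w, _ = img_rgb01.shape
    out = np.empty_like(img_rgb01)
    for y in range(h):
        for x in range(w):
            out[y, x] = colorsys.rgb_to_hsv(*img_rgb01[y, x])
    return out

red = np.zeros((1, 1, 3))
red[0, 0] = [1.0, 0.0, 0.0]     # pure red
hsv = to_hsv(red)
```

Pure red maps to hue 0 with full saturation and value, a quick sanity check that the channel ordering is right before feeding converted images to a CNN.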

Lightweight Video-based Approach for Monitoring Pigs' Aggressive Behavior (돼지 공격 행동 모니터링을 위한 영상 기반의 경량화 시스템)

  • Mluba, Hassan Seif;Lee, Jonguk;Atif, Othmane;Park, Daihee;Chung, Yongwha
    • Proceedings of the Korea Information Processing Society Conference / 2021.11a / pp.704-707 / 2021
  • Pigs' aggressive behavior is one of the common issues occurring inside pigpens; it harms pigs' health and welfare and imposes a financial burden on farmers. Continuously monitoring several pigs for 24 hours to identify those behaviors manually is very difficult for pig caretakers. In this study, we propose a lightweight video-based approach for monitoring pigs' aggressive behavior that can be deployed even on small-scale farms. The proposed system receives sequences of frames extracted from an RGB video stream containing pigs and uses MnasNet with a depth-multiplier (DM) value of 0.5 to extract image features from the pigs' ROIs identified by predefined annotations. The extracted features are then forwarded to a lightweight LSTM that learns temporal features and performs behavior recognition. Experimental results show that the proposed model achieved 0.92 in recall and F1-score with an execution time of 118.16 ms per sequence.