• Title/Summary/Keyword: Color Image Processing


Development of Fire Detection Algorithm for Video Incident Detection System of Double Deck Tunnel (복층터널 영상유고감지시스템의 화재 감지 알고리즘 개발)

  • Kim, Tae-Bok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.9
    • /
    • pp.1082-1087
    • /
    • 2019
  • A video incident detection system detects emergencies in unexpected situations such as a pedestrian in a tunnel, a falling object, a stationary vehicle, a vehicle driving in reverse, and fire (smoke and flame). In recent years, deep-underground double deck tunnels have gained importance as a way to add capacity beneath city centers. To apply a video incident detection system to a double deck tunnel, the system was developed to reflect the design characteristics of the double deck tunnel. In particular, fire detection is difficult to apply in the double deck tunnel environment because existing video incident detection systems either do not support it or suffer from detection failures. This paper proposes fire detection using color image analysis, silhouette spread, and statistical properties, and verifies it through a real fire test in a double deck tunnel test bed environment.
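The color-analysis stage described in the abstract can be illustrated with a minimal sketch. The threshold values and the R > G > B rule below are illustrative assumptions, not the paper's calibrated constants:

```python
import numpy as np

def flame_color_mask(rgb, r_min=200, sat_margin=40):
    """Mark pixels whose color matches a typical flame signature.

    A flame pixel is assumed to have a strong red channel and the
    channel ordering R > G > B (red-to-yellow hues).  Thresholds
    are illustrative, not the paper's values.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r >= r_min) & (r > g) & (g > b) & ((r - b) >= sat_margin)

# A 2x2 test image: one flame-like pixel, three non-flame pixels.
img = np.array([[[255, 160, 40], [30, 30, 30]],
                [[10, 200, 10], [255, 255, 255]]], dtype=np.uint8)
mask = flame_color_mask(img)
```

A real detector would combine such a mask with the silhouette-spread and statistical checks mentioned above before raising an alarm.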

Coin Classification using CNN (CNN 을 이용한 동전 분류)

  • Lee, Jaehyun;Shin, Donggyu;Park, Leejun;Song, Hyunjoo;Gu, Bongen
    • Journal of Platform Technology
    • /
    • v.9 no.3
    • /
    • pp.63-69
    • /
    • 2021
  • The limited materials available for minting coins, together with designs suited to hand-carry, make the shape, size, and color of coins from different countries similar. This similarity makes it difficult for visitors to identify each country's coins. To solve this problem, we propose a coin classification method using a CNN, which is effective for image processing. In our method, we collect training data by web crawling and use OpenCV for preprocessing. After preprocessing, we extract features from an image using three CNN layers and classify coins using two fully connected layers. To show that the model designed in this paper is effective for coin classification, we evaluate it on eight different coin types. In our experiments, the classification accuracy is about 99.5%.
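The conv → pooling → fully connected pipeline can be sketched in plain numpy. This toy forward pass uses one conv layer and random weights purely for shape illustration; the paper's model has three conv layers, two fully connected layers, and trained weights:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D convolution (single channel) via explicit loops."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling."""
    h, w = x.shape[0] // s, x.shape[1] // s
    return x[:h*s, :w*s].reshape(h, s, w, s).max(axis=(1, 3))

# One conv layer + ReLU + pooling, then a fully connected layer.
x = np.random.rand(8, 8)                     # stand-in coin image
k = np.random.rand(3, 3)                     # stand-in learned kernel
feat = max_pool(np.maximum(conv2d(x, k), 0)) # 3x3 feature map
w_fc = np.random.rand(8, feat.size)          # 8 coin classes
logits = w_fc @ feat.ravel()
pred = int(np.argmax(logits))
```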

Hair Classification and Region Segmentation by Location Distribution and Graph Cutting (위치 분포 및 그래프 절단에 의한 모발 분류와 영역 분할)

  • Kim, Yong-Gil;Moon, Kyung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.3
    • /
    • pp.1-8
    • /
    • 2022
  • Recently, Google MediaPipe presented a novel approach for neural-network-based hair segmentation from a single camera input, designed specifically for real-time mobile applications. Although the hair segmentation network is relatively small, it produces a high-quality segmentation mask well suited for AR effects such as realistic hair recoloring. However, it yields undesirable segmentation results for some hair styles or when the image contains noise and holes. In this study, an energy function for the test image is constructed from an estimated prior distribution of hair location and a hair color likelihood function. It is then optimized with the graph cuts algorithm to obtain an initial hair region. Finally, a clustering algorithm and image post-processing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. The proposed method is applied to the MediaPipe hair segmentation pipeline.
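The unary part of such an energy function can be sketched as follows. The toy probabilities and the threshold are invented for illustration; the pairwise smoothness terms and the actual graph-cut optimization are omitted:

```python
import numpy as np

def hair_energy(loc_prior, color_like, eps=1e-9):
    """Per-pixel unary energy: low where both the hair-location prior
    and the hair-color likelihood agree the pixel is hair."""
    return -np.log(loc_prior + eps) - np.log(color_like + eps)

# Toy 2x2 maps: left column looks like hair, right column does not.
prior = np.array([[0.9, 0.1],
                  [0.8, 0.2]])
like  = np.array([[0.8, 0.2],
                  [0.7, 0.1]])
energy = hair_energy(prior, like)
initial_hair = energy < 1.0   # illustrative threshold in place of graph cuts
```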

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.12
    • /
    • pp.431-438
    • /
    • 2020
  • In this paper, a method to extract the lip contour by multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour: first, the Q image is obtained by transforming RGB into YIQ; second, lip corner points are found by change-point detection and the Q image is split into upper and lower parts at the corner points. Candidate lip contours are obtained by applying thresholds to the Q image. For each candidate contour, a feature variance 'D', based on the absolute differences near the contour points, is calculated, and the contour with maximum variance is adopted as the final contour. The conventional method has three problems. The first concerns the lip corner points: the variance calculation depends on many skin pixels, which decreases accuracy and affects the split of the Q image. Second, there is no analysis of color systems other than YIQ; YIQ works well, but other color systems such as HSV, CIELUV, and YCrCb should be considered. The final problem concerns the selection of the optimal contour: the conventional method uses the maximum of the average feature variance over pixels near the contour points, which shrinks the extracted contour compared to the ground-truth contour. To solve the first problem, the proposed method excludes some of the skin pixels, yielding a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted instead of the maximum of the average, yielding a 46% performance increase. Combining all the solutions, the proposed method is twice as accurate and stable as the conventional method.
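The "total sum instead of average" selection criterion can be sketched roughly as below. The Q values, thresholds, and border-scoring rule are stand-ins invented for illustration; the paper's actual D computation over traced contour points is more involved:

```python
import numpy as np

def contour_from_threshold(q, t):
    """Binary mask of candidate lip pixels for one threshold
    (a stand-in for the actual contour-tracing step)."""
    return q > t

def feature_score(q, mask):
    """Total (not averaged) absolute vertical difference along the
    mask boundary, mirroring the 'sum of D' selection criterion."""
    border = mask ^ np.roll(mask, 1, axis=0)
    grad = np.abs(np.diff(q, axis=0, prepend=q[:1]))
    return grad[border].sum()

# Toy Q image: a bright 'lip' band in the middle row.
q = np.array([[0.1, 0.2, 0.1],
              [0.8, 0.9, 0.7],
              [0.2, 0.1, 0.2]])
thresholds = [0.3, 0.5, 0.75]
best_t = max(thresholds,
             key=lambda t: feature_score(q, contour_from_threshold(q, t)))
```

The highest threshold (0.75) clips part of the band, lowering its total score, so a lower threshold that keeps the full boundary wins.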

DSP Embedded Early Fire Detection Method Using IR Thermal Video

  • Kim, Won-Ho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.10
    • /
    • pp.3475-3489
    • /
    • 2014
  • Here we present a simple flame detection method for an infrared (IR) thermal camera based real-time fire surveillance digital signal processor (DSP) system. Infrared thermal cameras are especially advantageous for unattended fire surveillance: all-weather monitoring is possible regardless of illumination and climate conditions, and the quantity of data to be processed is one-third that of color video. Conventional IR camera-based fire detection methods mainly used pixel-based temporal correlation functions, which measure the temporal changes in pixel intensity generated by the irregular motion and spreading of flame pixels. The correlation values of non-flame regions are uniform, whereas flame regions have irregular temporal correlation values. To satisfy the requirement of early detection, any fire detection technique must run within a very short period of time, but the conventional pixel-based correlation function is computationally intensive. In this paper, we propose a simple IR camera-based flame detection algorithm optimized for a compact embedded DSP system to achieve early detection. To reduce the computational load, block-based calculations are used to select the candidate flame region and to measure the temporal motion of flames; together these yield the early flame detection algorithm. The proposed algorithm was tested in real time using IR test videos and a real-time DSP system to verify the required functions and performance. The system detected flames within 5 to 20 seconds and achieved a correct flame detection ratio of 100% at the video-sequence level, with an acceptable false detection ratio.
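The block-based candidate selection can be sketched as follows. Block size and the "hot" and "motion" thresholds are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def block_means(frame, bs=4):
    """Mean intensity per bs x bs block; working on blocks instead of
    pixels is what cuts the DSP's computational load."""
    h, w = frame.shape[0] // bs, frame.shape[1] // bs
    return frame[:h*bs, :w*bs].reshape(h, bs, w, bs).mean(axis=(1, 3))

def candidate_blocks(prev, curr, hot=200, motion=10):
    """A block is a flame candidate if it is hot in the IR frame and
    its mean changed noticeably between frames (flame flicker)."""
    b0, b1 = block_means(prev), block_means(curr)
    return (b1 > hot) & (np.abs(b1 - b0) > motion)

# Two toy 8x8 IR frames: one block turns hot between frames.
prev = np.zeros((8, 8))
curr = np.zeros((8, 8))
curr[:4, :4] = 230
cand = candidate_blocks(prev, curr)
```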

Real-time Identification of Traffic Light and Road Sign for the Next Generation Video-Based Navigation System (차세대 실감 내비게이션을 위한 실시간 신호등 및 표지판 객체 인식)

  • Kim, Yong-Kwon;Lee, Ki-Sung;Cho, Seong-Ik;Park, Jeong-Ho;Choi, Kyoung-Ho
    • Journal of Korea Spatial Information System Society
    • /
    • v.10 no.2
    • /
    • pp.13-24
    • /
    • 2008
  • A next-generation video-based car navigation system is being researched to supplement the drawbacks of existing 2D-based navigation and to provide various services for safe driving. The components of this navigation system include a road object database, a road-lane identification module, and a crossroad identification module. In this paper, we propose a traffic light and road sign recognition method that can be effectively exploited for crossroad recognition in video-based car navigation systems. The method uses object color information and other spatial features in the video image. The results show an average recognition rate of 90% at 30-60 m distance for traffic lights and 97% at 40-90 m for road signs. The algorithm also achieves a processing time of 46 ms/frame, which indicates its suitability for real-time processing.
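A dominant-channel rule gives the flavor of the color-information step. This crude stand-in ignores the spatial features the paper also uses:

```python
import numpy as np

def light_state(rgb_patch):
    """Classify a detected traffic-light patch by its dominant color
    channel.  A crude stand-in for the paper's color/spatial analysis."""
    r = rgb_patch[..., 0].mean()
    g = rgb_patch[..., 1].mean()
    b = rgb_patch[..., 2].mean()
    if r > g and r > b:
        return "red"
    if g > r and g > b:
        return "green"
    return "unknown"

# A toy 4x4 patch dominated by the green channel.
patch = np.zeros((4, 4, 3))
patch[..., 1] = 200
state = light_state(patch)
```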


Automatic Segmentation of the meniscus based on Active Shape Model in MR Images through Interpolated Shape Information (MR 영상에서 중간형상정보 생성을 통한 활성형상모델 기반 반월상 연골 자동 분할)

  • Kim, Min-Jung;Yoo, Ji-Hyun;Hong, Helen
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.16 no.11
    • /
    • pp.1096-1100
    • /
    • 2010
  • In this paper, we propose an automatic segmentation of the meniscus based on an active shape model using interpolated shape information in MR images. First, a statistical shape model of the meniscus is constructed to reflect the shape variation in the training set. Second, a technique for generating interpolated shape information, weighted by shape similarity, is proposed to robustly segment menisci with large variation. Finally, automatic meniscus segmentation is performed through active shape model fitting. For evaluation, we performed visual inspection and measured accuracy and processing time. For accuracy, the average distance difference between automatic and semi-automatic segmentation is calculated and visualized by color-coded mapping. Experimental results show that the average distance difference was $0.54{\pm}0.16mm$ for the medial meniscus and $0.73{\pm}0.39mm$ for the lateral meniscus. The total processing time was 4.87 seconds on average.
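The similarity-weighted interpolation of training shapes can be sketched like this. The Gaussian weighting and the toy landmark sets are assumptions for illustration, not the paper's exact weighting scheme:

```python
import numpy as np

def interpolated_shape(shapes, target, sigma=1.0):
    """Blend training shapes with weights that grow with similarity
    (here: a Gaussian of landmark distance) to the target shape."""
    d = np.array([np.linalg.norm(s - target) for s in shapes])
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))
    w /= w.sum()
    return np.tensordot(w, shapes, axes=1)

# Two toy 3-landmark shapes and a target closer to the first one.
shapes = np.array([[[0., 0.], [1., 0.], [1., 1.]],
                   [[0., 0.], [2., 0.], [2., 2.]]])
target = np.array([[0., 0.], [1.1, 0.], [1.1, 1.1]])
blend = interpolated_shape(shapes, target)
```

The blend lands nearer the more similar training shape, which is what lets the model track large shape variation.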

Skin Pigmentation Detection Using Projection Transformed Block Coefficient (투영 변환 블록 계수를 이용한 피부 색소 침착 검출)

  • Liu, Yang;Lee, Suk-Hwan;Kwon, Seong-Geun;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.9
    • /
    • pp.1044-1056
    • /
    • 2013
  • This paper presents an approach for detecting and measuring human skin pigmentation. In the proposed scheme, we extract the skin area with a GMM-EM clustering based skin color model estimated from the statistical analysis of training images, and remove small noise through morphological processing. The skin area is decomposed into hemoglobin and melanin components by an independent component analysis (ICA) algorithm. We then calculate the intensities of hemoglobin and melanin using the projection transformed block coefficient and determine the existence of skin pigmentation from the global and local distributions of the two intensities. Furthermore, we measure the area and density of the detected skin pigmentation. Experimental results verified that our scheme can both detect skin pigmentation and measure its quantity, and that it runs quickly owing to the location histogram.
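The hemoglobin/melanin decomposition amounts to unmixing two source signals from mixed color channels. In this sketch the 2x2 mixing matrix is assumed known; in the paper it is estimated by ICA from the data:

```python
import numpy as np

# An assumed 2x2 mixing of (hemoglobin, melanin) densities into two
# observed log-color channels; the real matrix comes from ICA.
A = np.array([[0.8, 0.3],
              [0.2, 0.7]])

def unmix(observations):
    """Recover per-pixel (hemoglobin, melanin) intensities by applying
    the inverse of the (here: assumed known) mixing matrix."""
    return np.linalg.inv(A) @ observations

sources = np.array([[1.0, 0.0, 0.5],    # hemoglobin per pixel
                    [0.0, 1.0, 0.5]])   # melanin per pixel
mixed = A @ sources                     # what the camera observes
recovered = unmix(mixed)
```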

Efficient Traffic Lights Detection and Signal Recognition in Moving Image (동영상에서 교통 신호등 위치 검출 및 신호인식 기법)

  • Oh, Seong;Kim, Jin-soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.10a
    • /
    • pp.717-719
    • /
    • 2015
  • Research and development of unmanned vehicles is being carried out actively both domestically and abroad, aiming to provide various services that supplement the weaknesses of conventional 2D-based navigation systems and make driving safer. This paper suggests a method that enables more efficient real-time video processing by implementing location detection and signal recognition for traffic lights in video. To overcome the limitation of conventional methods, which have difficulty analyzing the signal because they are sensitive to brightness changes, the proposed method implements a program that grasps the depth data in front of the vehicle using video processing, detects the traffic light and analyzes its signal, and estimates the color components of the traffic light ahead and the distance between the traffic light and the vehicle.


Development of Rotation Invariant Real-Time Multiple Face-Detection Engine (회전변화에 무관한 실시간 다중 얼굴 검출 엔진 개발)

  • Han, Dong-Il;Choi, Jong-Ho;Yoo, Seong-Joon;Oh, Se-Chang;Cho, Jae-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.4
    • /
    • pp.116-128
    • /
    • 2011
  • In this paper, we propose the structure of a high-performance face-detection engine that responds well to facial rotation using rotation transformation, while minimizing the required memory usage compared to the previous face-detection engine. The validity of the proposed structure has been verified through an FPGA implementation. For high-performance face detection, the MCT (Modified Census Transform) method, which is robust against lighting changes, was used. The AdaBoost learning algorithm was used to create optimized learning data, and the rotation transformation method was added to maintain effectiveness against face rotation. The proposed hardware structure was composed of a Color Space Converter, Noise Filter, Memory Controller Interface, Image Rotator, Image Scaler, MCT (Modified Census Transform), Candidate Detector / Confidence Mapper, Position Resizer, Data Grouper, and Overlay Processor / Color Overlay Processor. The face-detection engine was tested using a Virtex5 LX330 FPGA board, a QVGA-grade CMOS camera, and an LCD display, and demonstrated excellent performance in diverse real-life environments and on a standard face-detection database. As a result, we developed a high-performance real-time face-detection engine that processes at least 60 frames per second, is robust to lighting changes and face rotation, and can detect 32 faces of diverse sizes simultaneously.
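The MCT step itself is compact enough to sketch directly: each interior pixel is replaced by a 9-bit code recording which of its 3x3 neighborhood pixels (including itself) exceed the neighborhood mean, which makes the code invariant to uniform brightness shifts. This is a software illustration of the transform, not the paper's hardware pipeline:

```python
import numpy as np

def mct(img):
    """Modified Census Transform: each interior pixel becomes a 9-bit
    code, one bit per 3x3 neighborhood pixel (including itself) that
    exceeds the neighborhood mean.  Robust to lighting changes."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            nb = img[i-1:i+2, j-1:j+2].astype(float)
            bits = (nb > nb.mean()).astype(np.uint16).ravel()
            out[i-1, j-1] = int("".join(map(str, bits)), 2)
    return out

# A 3x3 patch where only the center exceeds the mean: code 000010000b.
img = np.array([[10, 10, 10],
                [10, 90, 10],
                [10, 10, 10]], dtype=np.uint8)
codes = mct(img)
```

Adding a constant brightness offset to `img` leaves the codes unchanged, which is the property that makes MCT attractive under varying illumination.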