• Title/Summary/Keyword: patch-based image


Fast Video Fire Detection Using Luminous Smoke and Textured Flame Features

  • Ince, Ibrahim Furkan;Yildirim, Mustafa Eren;Salman, Yucel Batu;Ince, Omer Faruk;Lee, Geun-Hoo;Park, Jang-Sik
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.12
    • /
    • pp.5485-5506
    • /
    • 2016
  • In this article, a video-based fire detection framework for CCTV surveillance systems is presented. Two novel features and a novel image type, with their corresponding algorithms, are proposed for this purpose: one for slow-smoke detection and the other for fast-smoke/flame detection. The basic idea is that slow smoke exhibits a highly varying chrominance/luminance texture over long periods, while fast smoke/flame exhibits a highly varying texture that stays at the same location over long consecutive periods. Experiments with a large number of smoke/flame and non-smoke/flame video sequences yield promising results in terms of algorithmic accuracy and speed.
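
The abstract leaves the exact feature definitions to the paper; as a rough, hypothetical sketch of the core intuition (texture that keeps varying at a fixed location), one might flag blocks whose chrominance fluctuates strongly over a window of frames. The block size and threshold below are illustrative assumptions, not the authors' values.

```python
# Minimal sketch: flag blocks whose chrominance varies strongly over time
# while staying at the same location -- the intuition behind the
# fast-smoke/flame feature. Block size and threshold are illustrative.
import cv2
import numpy as np

def varying_texture_blocks(frames, block=16, var_thresh=40.0):
    """frames: list of equally sized BGR frames."""
    cr_stack = np.stack(
        [cv2.cvtColor(f, cv2.COLOR_BGR2YCrCb)[:, :, 1].astype(np.float32)
         for f in frames])
    t, h, w = cr_stack.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            blk = cr_stack[:, by*block:(by+1)*block, bx*block:(bx+1)*block]
            # temporal variance of the block's mean chrominance
            if blk.mean(axis=(1, 2)).var() > var_thresh:
                mask[by, bx] = True
    return mask
```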

Auto-Exposure Control Using a Look-Up Table Based on the Scene-Luminance Curve in Mobile Phone Cameras

  • Lee, Tae-Hyoug;Kyung, Wang-Jun;Lee, Cheol Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.4
    • /
    • pp.56-62
    • /
    • 2010
  • Auto-exposure (AE) control automatically calculates and adjusts the exposure for consecutive input images. It is usually controlled through the sensor gain; however, unsuitable control causes the luminance of consecutive input images to oscillate, a phenomenon called flickering. Moreover, in mobile phone cameras only simple information, such as the average luminance value, can be utilized because of their limited processing power. Therefore, this paper presents a new real-time AE control method that uses a look-up table (LUT) based on scene-luminance curves to avoid flickering. Prior to AE control, a LUT is constructed that captures the output characteristics of input patches at each sensor gain. AE control first estimates the current scene as a patch using the proposed LUT; a new sensor gain is then estimated from the LUT with the previously estimated patch. The entire estimation process uses linear interpolation to achieve real-time execution. Experimental results demonstrate that the proposed AE control runs in real time and is flicker-free.
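
As a minimal sketch of the LUT-plus-linear-interpolation idea: the table values, target luminance, and the simplified two-step update below are assumptions for illustration, not the paper's measured curves.

```python
import numpy as np

# Hypothetical LUT: mean output luminance measured for a set of sensor
# gains on one reference patch (values are made up for illustration).
gains = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
out_luma = np.array([20.0, 45.0, 90.0, 160.0, 230.0])

def update_gain(measured_luma, current_gain, target_luma=128.0):
    # expected output at the current gain for the reference patch
    expected = np.interp(current_gain, gains, out_luma)
    # scale the curve to the current scene ("estimate the scene as a
    # patch"), then invert it by linear interpolation to find the gain
    # expected to reach the target luminance
    scene_curve = out_luma * (measured_luma / max(expected, 1e-6))
    return float(np.interp(target_luma, scene_curve, gains))
```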

Relation Based Bayesian Network for NBNN

  • Sun, Mingyang;Lee, YoonSeok;Yoon, Sung-eui
    • Journal of Computing Science and Engineering
    • /
    • v.9 no.4
    • /
    • pp.204-213
    • /
    • 2015
  • Under the conditional independence assumption among local features, the Naive Bayes Nearest Neighbor (NBNN) classifier was recently proposed; it performs classification without any training or quantization phase. While the original NBNN shows high classification accuracy without an explicit training phase, the conditional independence assumption conflicts with the compositionality of objects, i.e., the fact that different but related parts of an object appear together. As a result, the assumption weakens the accuracy of classification techniques based on NBNN. In this work, we look into this issue and propose a novel Bayesian network for NBNN-based classification that considers the conditional dependence among features. To achieve our goal, we extract a high-level feature and its corresponding multiple low-level features for each image patch. We then represent them with a simple, two-level layered Bayesian network and design a classification function around it. To achieve low memory requirements and fast query-time performance, we further optimize our representation and classification function, named the relation-based Bayesian network, by encoding the relationship between a high-level feature and its low-level features in a compact relation vector whose dimensionality equals the number of low-level features, e.g., four elements in our tests. We demonstrate the benefits of our method over the original NBNN and its recent improvement, local NBNN, on two different benchmarks. Our method improves accuracy by up to 27% over the tested methods, mainly owing to the consideration of the conditional dependences between high-level features and their corresponding low-level features.
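
For context, a minimal sketch of the original NBNN decision rule that this work extends, assuming descriptors are stored per class in KD-trees; the paper's relation-vector extension itself is not reproduced here.

```python
# Original NBNN: sum, over an image's local descriptors, the squared
# distance to each class's nearest stored descriptor, and pick the
# class with the smallest total. Data layout here is hypothetical.
import numpy as np
from scipy.spatial import cKDTree

def build_class_trees(class_descriptors):
    """class_descriptors: dict mapping class name -> (n_i, d) array."""
    return {c: cKDTree(d) for c, d in class_descriptors.items()}

def nbnn_classify(query_descriptors, trees):
    """query_descriptors: (m, d) array of one image's local features."""
    totals = {c: (t.query(query_descriptors)[0] ** 2).sum()
              for c, t in trees.items()}
    return min(totals, key=totals.get)
```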

Human Action Recognition in Still Image Using Weighted Bag-of-Features and Ensemble Decision Trees

  • Hong, June-Hyeok;Ko, Byoung-Chul;Nam, Jae-Yeal
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38A no.1
    • /
    • pp.1-9
    • /
    • 2013
  • This paper proposes a human action recognition method that uses bag-of-features (BoF) based on the center-symmetric local binary pattern (CS-LBP) and a spatial pyramid, together with a random forest classifier. To construct the BoF, an image is divided into a dense regular grid and a CS-LBP descriptor is extracted from each patch. The code words forming the visual vocabulary are obtained by k-means clustering of a random subset of patches. For enhanced action discrimination, local BoF histograms are estimated at the three subdivided levels of a spatial pyramid, and a weighted BoF histogram is generated by concatenating the local histograms. For action classification, a random forest, an ensemble of decision trees, is built to model the distribution of each action class. The random forest combined with the weighted BoF histogram is successfully applied to the Stanford 40 Actions dataset, which contains various human action images, and its classification performance is better than that of other methods. Furthermore, the proposed method performs action recognition in near real time.
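
A minimal sketch of a weighted spatial-pyramid BoF histogram in the spirit of the abstract; the three-level layout matches the text, while the level weights and normalization are illustrative assumptions.

```python
# Build one histogram per pyramid cell over three levels, scale each
# level by a weight, and concatenate into a single global descriptor.
import numpy as np

def pyramid_bof(points, words, k, img_w, img_h,
                levels=3, level_weights=(0.25, 0.25, 0.5)):
    """points: (n, 2) patch centers; words: (n,) visual-word indices < k."""
    hists = []
    for lv in range(levels):
        cells = 2 ** lv
        for cy in range(cells):
            for cx in range(cells):
                in_cell = ((points[:, 0] * cells // img_w == cx) &
                           (points[:, 1] * cells // img_h == cy))
                h = np.bincount(words[in_cell], minlength=k).astype(float)
                hists.append(level_weights[lv] * h)
    v = np.concatenate(hists)
    return v / max(np.linalg.norm(v, 1), 1e-9)  # L1-normalize
```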

Texture Classification Algorithm for Patch-based Image Processing

  • Yu, Seung Wan;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.11
    • /
    • pp.146-154
    • /
    • 2014
  • The local binary pattern (LBP) scheme, one of the standard texture classification methods, normally uses the distribution of flat, edge, and corner patterns. However, because it is a binary pattern produced by thresholding, it cannot capture edge direction or the magnitude of pixel differences. Furthermore, since it does not consider the pixel distribution, its performance drops as the image size grows. To solve this problem, we propose a sub-classification method that uses the edge-direction distribution and an eigen-matrix. The proposed sub-classification is applied to the texture patches that LBP cannot classify. First, we quantize the edge direction and compute its distribution. Second, we calculate the distribution of the largest eigenvalue of the structure matrix. Simulation results show that the proposed method provides about 8% higher classification performance than the existing method.
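
A small sketch of the eigen-matrix step, assuming the "structure matrix" is the usual 2x2 gradient structure tensor of a patch; collecting this value over all patches gives the distribution the abstract refers to.

```python
# Largest eigenvalue of the 2x2 structure matrix of a patch, built
# from image gradients; its distribution over patches can separate
# textures that plain LBP leaves ambiguous.
import numpy as np

def largest_structure_eigenvalue(patch):
    gy, gx = np.gradient(patch.astype(np.float64))
    # J = [[sum gx^2, sum gx*gy], [sum gx*gy, sum gy^2]]
    J = np.array([[(gx * gx).sum(), (gx * gy).sum()],
                  [(gx * gy).sum(), (gy * gy).sum()]])
    return float(np.linalg.eigvalsh(J)[-1])  # eigvalsh sorts ascending
```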

Hole-Filling Method Using Extrapolated Spatio-temporal Background Information

  • Kim, Beomsu;Nguyen, Tien Dat;Hong, Min-Cheol
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.8
    • /
    • pp.67-80
    • /
    • 2017
  • This paper presents a hole-filling method that uses extrapolated spatio-temporal background information to obtain a synthesized view. A new temporal background model based on a non-overlapped patch-based background codebook is introduced to extrapolate temporal background information. In addition, a depth-map-driven spatial local background estimation is introduced to define spatial background constraints that represent the lower and upper bounds of a background candidate. Background holes are filled by comparing the similarity between the temporal background information and the spatial background constraints. Additionally, a depth-map-based ghost removal filter is described to resolve the mismatch between a color image and the corresponding depth map of a virtual view after 3-D warping. Finally, inpainting with a priority function that includes a new depth term is applied to fill the remaining holes. The experimental results demonstrate that the proposed method yields subjective and objective improvements over state-of-the-art methods.
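
A toy sketch of the background-hole test described above, under the assumption that a temporal-background candidate is accepted only when it lies within the spatial lower/upper bounds; the names and the fallback to inpainting are simplifications, not the authors' exact procedure.

```python
import numpy as np

def fill_background_holes(img, hole_mask, temporal_bg, lower, upper):
    """img, temporal_bg, lower, upper: HxWx3 arrays; hole_mask: HxW bool."""
    out = img.copy()
    # accept the temporal background where it satisfies the spatial
    # background constraints (lower/upper bounds) on every channel
    ok = hole_mask & np.all((temporal_bg >= lower) &
                            (temporal_bg <= upper), axis=-1)
    out[ok] = temporal_bg[ok]
    return out, hole_mask & ~ok  # remaining holes go to inpainting
```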

Case Study: Cost-effective Weed Patch Detection by Multi-Spectral Camera Mounted on Unmanned Aerial Vehicle in the Buckwheat Field

  • Kim, Dong-Wook;Kim, Yoonha;Kim, Kyung-Hwan;Kim, Hak-Jin;Chung, Yong Suk
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.64 no.2
    • /
    • pp.159-164
    • /
    • 2019
  • Weed control is a crucial practice not only in organic farming but also in modern agriculture, because weeds can reduce crop yield. In general, weeds are distributed heterogeneously in patches across the field, and these patches vary in size, shape, and density. It is therefore more efficient to spray chemicals on these patches than to spray the whole field uniformly, which pollutes the environment and is cost-prohibitive. In this sense, weed detection is beneficial for sustainable agriculture. Previous studies have detected weed patches in the field using remote sensing technologies, which can be divided into methods using morphology-based image segmentation and methods using vegetation indices based on the wavelength of light. In this study, the latter methodology was used to detect the weed patches. The vegetation-index approach turned out to be easier to operate because, unlike the former method, it did not need a sophisticated algorithm to differentiate weeds from crop and soil. Consequently, we demonstrated that the vegetation-index method is accurate enough to detect weed patches and will be useful for farmers to control weeds more precisely with minimal use of chemicals.
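
As an illustration of the vegetation-index approach, a standard NDVI threshold over two multispectral bands might look as follows; the paper's exact index and threshold are not stated in the abstract, so both are assumptions here.

```python
# NDVI = (NIR - Red) / (NIR + Red); vegetated pixels score high because
# plants reflect near-infrared strongly and absorb red light.
import numpy as np

def ndvi_mask(nir, red, thresh=0.4):
    """nir, red: arrays of band reflectance; returns a boolean mask."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)
    return ndvi > thresh
```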

Analysis of Color Error and Distortion Pattern in Underwater Images

  • Jeong Yeop Kim
    • Journal of Platform Technology
    • /
    • v.12 no.3
    • /
    • pp.16-26
    • /
    • 2024
  • Videos shot underwater are known to suffer significant color distortion. Typical causes are backscattering by floating matter and attenuation of red light in proportion to water depth. In this paper, we analyze color-correction performance and color-distortion patterns for images taken underwater; backscattering and attenuation caused by suspended matter will be addressed in a follow-up study. Based on the DeepSeeColor model proposed by Jamieson et al., we verify color-correction performance and analyze the pattern of color distortion as the water depth changes. The input images were taken in the US Virgin Islands by Jamieson et al.; out of 1,190 images, 330 images containing color charts were used. Color-correction performance was expressed as the angular error between the input image and the image corrected by the DeepSeeColor model. Jamieson et al. computed the angular error using only the black and white patches of the color chart, so they could not accurately characterize the overall color distortion. In this paper, the color-correction error is computed over the entire set of color-chart patches, so the degree of color distortion can be assessed properly. Since the input images of the DeepSeeColor model span depths from 1 to 8, distortion patterns as a function of depth can be analyzed; in general, the deeper the scene, the greater the attenuation of red. Color distortion across depth was modeled as a scale and an offset, so that the distortion at a given depth can be predicted: as depth increases, the scale required for color correction increases and the offset decreases. The color-correction performance of the proposed method improved by 41.5% over the conventional method.
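
The angular-error metric mentioned above can be computed per color-chart patch roughly as follows; this is the standard definition, and averaging it over all patches (as opposed to only black/white ones) is the paper's stated improvement.

```python
# Angular error: the angle in degrees between a corrected patch's mean
# RGB vector and the chart's reference RGB (0 means a perfect match).
import numpy as np

def angular_error_deg(rgb_est, rgb_ref):
    a = np.asarray(rgb_est, dtype=np.float64)
    b = np.asarray(rgb_ref, dtype=np.float64)
    cos = np.dot(a, b) / max(np.linalg.norm(a) * np.linalg.norm(b), 1e-12)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```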


Bio-Sensing Convergence Big Data Computing Architecture

  • Ko, Myung-Sook;Lee, Tae-Gyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.2
    • /
    • pp.43-50
    • /
    • 2018
  • Biometric information computing strongly influences both computing systems and big-data systems built on bio-information systems that combine bio-signal sensors with bio-information processing. Unlike conventional data formats such as text, images, and video, biometric information is represented by text-based values that give meaning to a bio-signal; important event moments are stored in an image format; and complex formats such as video are constructed for data prediction and analysis through time-series analysis. Such complex data may be requested separately as text, image, or video, depending on the data characteristics required by an individual biometric application service, or in several formats simultaneously depending on the situation. Because previous bio-information processing systems depend on conventional computing components, structures, and data-processing methods, they are inefficient in terms of data-processing performance, transmission capability, storage efficiency, and system safety. In this study, we propose an improved bio-sensing convergence big-data computing architecture as a platform that supports biometric information processing effectively. The proposed architecture supports data storage and transmission efficiency, computing performance, and system stability, and it can lay the foundation for system implementations and biometric information services optimized for future biometric information computing.

Fast Object Classification Using Texture and Color Information for Video Surveillance Applications

  • Islam, Mohammad Khairul;Jahan, Farah;Min, Jae-Hong;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.1
    • /
    • pp.140-146
    • /
    • 2011
  • In this paper, we propose a fast object classification method based on texture and color information for video surveillance. We take advantage of local patches by extracting SURF features and color histograms from images. SURF captures intensity content, while color information strengthens distinctiveness by linking descriptors to patch content, so we obtain both the fast computation of SURF and the color cues of objects. We use a bag-of-words model to generate a global descriptor of a region of interest (ROI) or an image from the local features, and a naïve Bayes model to classify the global descriptor. We also investigate the discriminative descriptor Scale-Invariant Feature Transform (SIFT). Our experiments on 4 object classes show a classification rate of 95.75%.
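
A condensed sketch of the pipeline described above, using sklearn's k-means and Gaussian naive Bayes; any local descriptor can stand in for SURF (whose OpenCV implementation is non-free), and all names and parameters here are illustrative assumptions.

```python
# Bag-of-words: quantize local descriptors into a k-means vocabulary,
# append a precomputed color histogram, classify with naive Bayes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

def global_descriptor(desc, color_hist, vocab):
    words = vocab.predict(desc)  # (n,) visual-word ids
    bow = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    bow /= max(bow.sum(), 1e-9)
    return np.concatenate([bow, color_hist])

def train(descs_per_image, color_hists, labels, k=100):
    """descs_per_image: list of (n_i, d) arrays; color_hists: list of
    fixed-length histograms; labels: list of class ids."""
    vocab = KMeans(n_clusters=k, n_init=4).fit(np.vstack(descs_per_image))
    X = np.array([global_descriptor(d, h, vocab)
                  for d, h in zip(descs_per_image, color_hists)])
    clf = GaussianNB().fit(X, labels)
    return vocab, clf
```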