• Title/Summary/Keyword: background image

Search Results: 2,217

Image Thresholding based on Edge Detection (테두리 검출에 기반한 영상 이진화)

  • Kwon, Soon H.;Sivakumar, Krishnamoorthy
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.2 / pp.139-143 / 2013
  • The basic idea of conventional thresholding is that an image consists of objects and their background, where the gray levels of the objects differ from those of the background. In this paper, we extend this model to one in which an image consists not only of objects and background but also of their edges. Based on this extension, we propose an edge detection-based thresholding method. The effectiveness of the proposed method is demonstrated by experimental results on six well-known test images, with comparisons against conventional methods.
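
The abstract above only outlines the idea; as an illustration (not the authors' actual formulation), a minimal Python/OpenCV sketch of one way to combine edge detection with thresholding is given below. The choice of Canny edges, and of estimating an Otsu threshold from non-edge pixels only, is an assumption made for the example.

```python
import cv2
import numpy as np

def edge_aware_threshold(gray):
    """Illustrative sketch: estimate a global threshold from non-edge
    pixels only, then binarize the whole image with it."""
    # Detect edge pixels (Canny is an assumption; the paper only states
    # that the image model is extended to objects + background + edges).
    edges = cv2.Canny(gray, 50, 150)
    non_edge = gray[edges == 0].reshape(-1, 1)

    # Otsu threshold computed from the non-edge pixel population.
    t, _ = cv2.threshold(non_edge, 0, 255,
                         cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Apply the estimated threshold to the full image.
    _, binary = cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)
    return binary

# Usage sketch ("test.png" is a hypothetical file name):
# gray = cv2.imread("test.png", cv2.IMREAD_GRAYSCALE)
# cv2.imwrite("binary.png", edge_aware_threshold(gray))
```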

Automatic Target Detection Using the Extended Fuzzy Clustering (확장된 Fuzzy Clustering 알고리즘을 이용한 자동 목표물 검출)

  • 김수환;강경진;이태원
    • Journal of the Korean Institute of Telematics and Electronics B / v.28B no.10 / pp.842-913 / 1991
  • Automatic target detection, which identifies the location of a target from an input image, is one of the significant subjects in the image processing field. Several problems must be solved to detect a target automatically. Above all, the ambiguity of the boundary between targets, or between a target and the background, should be resolved, and the target should be searched adaptively; in other words, the target should be identified by its brightness relative to the background, not by its absolute brightness. In this paper, a new algorithm that can identify the target automatically is proposed to solve these problems. The algorithm uses fuzzy sets to resolve the boundary ambiguity and, by weighting the data in the input image according to their brightness, identifies the target adaptively by its relative brightness to the background. Applying this algorithm to real images, it is experimentally shown that it can be effectively applied to automatic target detection.
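
The abstract describes an extension of fuzzy clustering with brightness-dependent weights but gives no formulas here; the sketch below is a plain fuzzy c-means on pixel intensities, written in NumPy, meant only to illustrate the clustering step. The two-cluster setup and the "brighter cluster = target" rule in the usage comment are assumptions, not the paper's algorithm.

```python
import numpy as np

def fuzzy_cmeans_1d(values, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means on 1-D intensity values (illustrative only;
    the paper extends FCM with brightness-dependent weights)."""
    rng = np.random.default_rng(seed)
    x = values.astype(np.float64).reshape(-1, 1)          # (N, 1)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)                     # memberships
    for _ in range(iters):
        um = u ** m
        centers = (um * x).sum(axis=0) / um.sum(axis=0)   # (c,)
        d = np.abs(x - centers) + 1e-12                   # (N, c)
        new_u = 1.0 / (d ** (2 / (m - 1)))
        new_u /= new_u.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u

# Usage sketch: label each pixel with its highest-membership cluster and
# keep the brighter cluster as the candidate target region.
# img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
# centers, u = fuzzy_cmeans_1d(img.ravel())
# target_mask = (u.argmax(axis=1) == centers.argmax()).reshape(img.shape)
```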


A Method for Tree Image Segmentation Combined Adaptive Mean Shifting with Image Abstraction

  • Yang, Ting-ting;Zhou, Su-yin;Xu, Ai-jun;Yin, Jian-xin
    • Journal of Information Processing Systems / v.16 no.6 / pp.1424-1436 / 2020
  • Although huge progress has been made in image segmentation, there are still no efficient segmentation strategies for tree images taken from natural environments with complex backgrounds. To address these problems, we propose a method for tree image segmentation that combines adaptive mean shifting with image abstraction. Our approach performs better than others because it focuses mainly on the background of the image and the characteristics of the tree itself. First, we abstract the original tree image using bilateral filtering and an image pyramid from multiple perspectives, which reduces the influence of the background and tree canopy gaps on clustering. Spatial location and gray-scale features are obtained by step detection and the insertion rule method, respectively. Bandwidths calculated from the spatial location and gray-scale features are then used to determine the size of the Gaussian kernel function in the mean shift clustering. Furthermore, the flood fill method is employed to fill the results of clustering and highlight the region of interest. To prove the effectiveness of tree image abstraction on image clustering, we compared different abstraction levels and achieved the optimal clustering results. For our algorithm, the average segmentation accuracy (SA), over-segmentation rate (OR), and under-segmentation rate (UR) of the crown are 91.21%, 3.54%, and 9.85%, respectively; the average values for the trunk are 92.78%, 8.16%, and 7.93%, respectively. Compared experimentally with other popular tree image segmentation methods, our segmentation method eliminates human interaction and shows a higher SA. Meanwhile, this work shows a promising application prospect for visual reconstruction and factor measurement of trees.
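
As a rough illustration of the pipeline described above (abstraction, then mean shift clustering, then flood fill), the sketch below uses bilateral filtering plus a Gaussian pyramid for the abstraction step and OpenCV's fixed-bandwidth pyrMeanShiftFiltering in place of the paper's adaptive, bandwidth-estimating mean shift; the seed point and all parameter values are assumptions.

```python
import cv2
import numpy as np

def abstract_and_cluster(bgr, sp=15, sr=30, levels=2):
    """Illustrative pipeline only: bilateral filtering + a Gaussian
    pyramid as the 'abstraction' step, then OpenCV's fixed-bandwidth
    mean shift standing in for the paper's adaptive mean shift."""
    abstracted = bgr
    for _ in range(levels):
        abstracted = cv2.bilateralFilter(abstracted, d=9,
                                         sigmaColor=75, sigmaSpace=75)
        abstracted = cv2.pyrDown(abstracted)

    # Spatial (sp) and range (sr) bandwidths are fixed here; the paper
    # estimates them via step detection and the insertion rule method.
    clustered = cv2.pyrMeanShiftFiltering(abstracted, sp, sr)

    # Flood fill from an assumed seed inside the region of interest.
    h, w = clustered.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)
    seed = (w // 2, h // 2)                     # hypothetical seed point
    cv2.floodFill(clustered, mask, seed, (255, 255, 255),
                  loDiff=(10, 10, 10), upDiff=(10, 10, 10))
    return clustered, mask[1:-1, 1:-1]
```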

Stereoscopic Conversion of Object-based MPEG-4 Video (객체 기반 MPEG-4 동영상의 입체 변환)

  • 박상훈;김만배;손현식
    • Proceedings of the IEEK Conference / 2003.07e / pp.2407-2410 / 2003
  • In this paper, we propose a new stereoscopic video conversion methodology that converts two-dimensional (2-D) MPEG-4 video to stereoscopic video. In MPEG-4, each image is composed of a background object and a primary object. In the first step of the conversion methodology, the camera motion type is determined for stereo image generation. In the second step, object-based stereo image generation is carried out. The background object makes use of a current image and a delayed image for its stereo image generation. On the other hand, the primary object uses a current image and its horizontally shifted version to avoid the vertical parallax that could otherwise occur. Furthermore, URFA (Uncovered Region Filling Algorithm) is applied to the uncovered region that may be created after the stereo image generation of the primary object. In our experiments, we show an MPEG-4 test video and its stereoscopic video produced by the proposed methodology and analyze the results.
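
The primary-object step described above (shifting the object horizontally to avoid vertical parallax) can be illustrated with the small NumPy sketch below; the disparity value is arbitrary, and the camera-motion analysis, the delayed-frame background pair, and URFA are not reproduced.

```python
import numpy as np

def shift_primary_object(frame, obj_mask, disparity=8):
    """Sketch of the primary-object step only: build a second view by
    shifting the masked object horizontally, so no vertical parallax is
    introduced. The uncovered region is filled with zeros here; the
    paper applies URFA to it instead."""
    right = frame.copy()
    right[obj_mask > 0] = 0                  # uncovered region (naive fill)

    # Copy each object pixel 'disparity' columns to the left.
    rows, cols = np.nonzero(obj_mask[:, disparity:])
    right[rows, cols] = frame[rows, cols + disparity]
    return right
```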


Composition of Foreground and Background Images using Optical Flow and Weighted Border Blending (옵티컬 플로우와 가중치 경계 블렌딩을 이용한 전경 및 배경 이미지의 합성)

  • Gebreyohannes, Dawit;Choi, Jung-Ju
    • Journal of the Korea Computer Graphics Society / v.20 no.3 / pp.1-8 / 2014
  • We propose a method to compose a foreground object into a background image, where the foreground object is a part (or region) of an image taken by a front-facing camera and the background image is a whole image taken by a back-facing camera of a smartphone at the same time. Recent high-end cell phones have two cameras and provide users with preview video before taking photos. We extract the foreground object that moves along with the front-facing camera by using the optical flow during the preview. We compose the extracted foreground object into a background image using a simple image composition technique. For a better-looking result, we apply a border smoothing technique using a weighted-border mask to blend transparency from background to foreground. Since constructing and grouping a pixel-level dense optical flow is quite slow even on high-end cell phones, we compute the mask for extracting the foreground object on a low-resolution image, which greatly reduces the computational cost. Experimental results show the effectiveness of our extraction and composition techniques, with much less computational time for extracting the foreground object and better composition quality compared with the Poisson image editing technique that is widely used in image composition. The proposed method can also partially reduce the color bleeding artifacts observed in Poisson image editing by using weighted-border blending.
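
A rough sketch of the composition idea, assuming OpenCV and equally sized foreground/background frames, is given below: a dense Farneback flow magnitude mask stands in for the paper's foreground extraction, and a distance-transform ramp implements a weighted-border alpha blend. The thresholds and border width are illustrative assumptions.

```python
import cv2
import numpy as np

def compose_with_weighted_border(fg_bgr, bg_bgr, prev_gray, curr_gray,
                                 flow_thresh=1.0, border=15):
    """Sketch of the overall idea, not the paper's exact pipeline: a
    dense-flow magnitude mask approximates the moving foreground, and a
    distance-transform border mask blends it into the background."""
    # Dense optical flow on (already downscaled) preview frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    mask = (mag > flow_thresh).astype(np.uint8) * 255

    # Weighted border: alpha ramps from 0 to 1 over 'border' pixels
    # inside the mask, so the seam fades instead of cutting hard.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    alpha = np.clip(dist / border, 0.0, 1.0)[..., None]

    composed = alpha * fg_bgr.astype(np.float32) + \
               (1.0 - alpha) * bg_bgr.astype(np.float32)
    return composed.astype(np.uint8), mask
```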

A Study on Improving the Adaptive Background Method for Outdoor CCTV Object Tracking System

  • Jung, Do-Wook;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.20 no.7 / pp.17-24 / 2015
  • In this paper, we propose a method to solve the ghosting problem. Generating an adaptive background from an exponentially decreasing number of frames may improve object detection performance. When moving objects are extracted from the background by using a differential image, detection errors may be caused by object rotations or environmental changes; in particular, a ghosting problem can arise when outdoor environmental changes occur together with moving objects. We show that a differential image against the adaptive background can reduce the ghosting problem, and our experimental results confirm that the proposed method solves it.
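
One common way to realize an adaptive background in which the influence of past frames decays exponentially is a running average, sketched below with OpenCV; whether this matches the authors' exact update rule and their ghost-suppression step is not claimed.

```python
import cv2
import numpy as np

def update_background_and_detect(frame_gray, background, alpha=0.02,
                                 diff_thresh=25):
    """Running-average adaptive background (exponentially decaying frame
    weights), shown only to illustrate the idea; the paper's exact update
    rule and ghost handling are not reproduced."""
    # 'background' is a float32 accumulator of the same size as the frame.
    cv2.accumulateWeighted(frame_gray.astype(np.float32), background, alpha)

    # Differential image between the current frame and the background.
    diff = cv2.absdiff(frame_gray.astype(np.float32), background)
    _, moving = cv2.threshold(diff.astype(np.uint8), diff_thresh, 255,
                              cv2.THRESH_BINARY)
    return moving

# Usage sketch (video loop):
# background = first_frame.astype(np.float32)
# for frame in frames:
#     mask = update_background_and_detect(frame, background)
```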

A Basic Study on the Fire Flame Extraction of Non-Residential Facilities Based on Core Object Extraction (핵심 객체 추출에 기반한 비주거 시설의 화재불꽃 추출에 관한 기초 연구)

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management / v.13 no.4 / pp.71-79 / 2017
  • Recently, fire watching and dangerous-substance monitoring systems have been developed to enhance various aspects of fire-related security. It is generally assumed that fire flame extraction plays a very important role in such monitoring systems. In this study, we propose a fire flame extraction method for non-residential facilities based on core object extraction in an image. A core object is defined as a comparatively large object at the center of the image. First of all, an input image and its decreased-resolution image are segmented. Segmented regions are classified as outer or inner regions: outer regions are adjacent to the boundaries of the image, and the rest are inner regions. Then core object regions and core background regions are selected from the inner and outer regions, respectively. Core object regions are the representative regions for the object and are selected using information about region size and location. Each inner region is classified as a foreground or background region by comparing its color histogram intersection values against the core object region and the core background region. Finally, the extracted core object region is determined to be the fire flame object in the image. Through experiments, we find that the proposed method can provide a basic measure for responding effectively and quickly to fires in non-residential facilities.
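
The decision rule described above (comparing each inner region's color histogram intersection against the core object and core background regions) can be sketched as follows; the segmentation itself and the size/location rules for choosing the core regions are left out, and the histogram bin count is an assumption.

```python
import cv2
import numpy as np

def region_histogram(bgr, mask, bins=16):
    """Normalized 3-D color histogram of the pixels under a uint8 mask."""
    hist = cv2.calcHist([bgr], [0, 1, 2], mask,
                        [bins, bins, bins], [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def classify_inner_region(bgr, region_mask, core_obj_mask, core_bg_mask):
    """Simplified sketch of the decision rule: an inner region is labeled
    foreground if its histogram intersection with the core-object
    histogram exceeds the one with the core-background histogram."""
    h_region = region_histogram(bgr, region_mask)
    h_obj = region_histogram(bgr, core_obj_mask)
    h_bg = region_histogram(bgr, core_bg_mask)
    sim_obj = cv2.compareHist(h_region, h_obj, cv2.HISTCMP_INTERSECT)
    sim_bg = cv2.compareHist(h_region, h_bg, cv2.HISTCMP_INTERSECT)
    return "foreground" if sim_obj > sim_bg else "background"
```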

The Effect of Perceiver′s Fashion Involvement on Clothing Color Perception and Preferences (지각자의 유행관여가 의복색 지각과 선호도에 미치는 영향)

  • 이명희
    • Journal of the Korean Society of Clothing and Textiles / v.27 no.7 / pp.851-861 / 2003
  • The objectives of this study were to investigate the effects of the perceiver's fashion involvement, clothing color, and the background of the object person on image perceptions of clothing, and to examine how clothing color preference varies according to the perceiver's fashion involvement. Subjects were 273 college women in the metropolitan area of Seoul. The T-shirt was changed into 11 colors by using a CAD system. Five factors were derived to account for the dimensions of image perception: individuality, elegance, femininity, activity, and neatness. The perceiver's fashion involvement had a significant influence on the perception of individuality. Clothing color had significant influences on all 5 image dimensions. White and beige were evaluated as having a neat image. The neatness factor showed an interaction effect of fashion involvement and clothing color: the high involvement group evaluated white and beige shirts as neater, and orange and yellow as less neat, than the low involvement group did. Individuality and elegance showed an interaction effect of fashion involvement and the background of the object person. The high involvement group liked red, violet, and black shirts more than the low involvement group did. Refined and becoming images had significant influences on clothing color preference in both the high and low involvement groups.

A study on emotional images and preference of knitwear according to tone on tone combination (톤 온 톤 배색에 따른 니트웨어의 감성이미지와 선호도 연구)

  • Lee, Mi-Sook;Suh, Seo-Young
    • The Research Journal of the Costume Culture / v.22 no.3 / pp.399-410 / 2014
  • The purpose of this study was to investigate the emotional images and preference of knitwear by tone on tone color combination. The subjects were 357 university students in Daejeon and Chungnam province, and the measuring instruments were 6 stimuli manipulated by the color and tone combination type of background and pattern in tone on tone combinations, together with self-administered questionnaires consisting of emotional image items, preference items, and the subjects' demographic attributes. The data were analyzed by Cronbach's α, factor analysis, t-test, MANOVA, and Duncan's multiple range test using the SPSS program. The results were as follows. First, four factors (attractiveness, conspicuity, mildness, and activity) emerged for the emotional images of knitwear. Second, color had main effects on emotional images and preference: gray was perceived as the most attractive image and was preferred more than the other colors. Third, the tone combination type had some effects on emotional images: a vivid tone background with a light tone pattern was perceived as more attractive but less conspicuous and mild than a light tone background with a vivid tone pattern. Fourth, the subjects' gender had an effect on the conspicuous image: males perceived a more conspicuous image in the knitwear stimuli than females. Fifth, color and the subjects' gender had interaction effects on the attractiveness image and preference: males perceived blue as more attractive and preferred it more than females did.

A Multi-Layer Perceptron for Color Index based Vegetation Segmentation (색상지수 기반의 식물분할을 위한 다층퍼셉트론 신경망)

  • Lee, Moon-Kyu
    • Journal of Korean Society of Industrial and Systems Engineering / v.43 no.1 / pp.16-25 / 2020
  • Vegetation segmentation in a field color image is the process of distinguishing vegetation objects of interest, such as crops and weeds, from a background of soil and/or other residues. The performance of this process is crucial in automatic precision agriculture, which includes weed control and crop status monitoring. To facilitate the segmentation, color indices have predominantly been used to transform the color image into a gray-scale image. A thresholding technique such as the Otsu method is then applied to distinguish vegetation parts from the background. An obvious demerit of thresholding-based segmentation is that each pixel is classified as vegetation or background solely by the color feature of the pixel itself, without taking into account the color features of its neighboring pixels. This paper presents a new pixel-based segmentation method that employs a multi-layer perceptron neural network to classify the gray-scale image into vegetation and non-vegetation pixels. The input data of the neural network for each pixel are the 2-dimensional gray-level values surrounding the pixel. To generate a gray-scale image from a raw RGB color image, a well-known color index called the Excess Green minus Excess Red Index was used. Experimental results using 80 field images of 4 vegetation species demonstrate the superiority of the neural network to existing threshold-based segmentation methods in terms of accuracy, precision, recall, and harmonic mean.
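
As an illustration of the two ingredients named in the abstract, the sketch below computes the Excess Green minus Excess Red index (ExG - ExR = 3g - 2.4r - b on chromaticity-normalized channels) and builds per-pixel neighborhood features for a scikit-learn MLP; the window size, network size, and training setup are assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def exg_minus_exr(bgr):
    """Excess Green minus Excess Red index: ExG - ExR = 3g - 2.4r - b,
    computed on chromaticity-normalized channels (BGR channel order)."""
    b, g, r = [bgr[..., i].astype(np.float64) for i in range(3)]
    s = b + g + r + 1e-9
    bb, gg, rr = b / s, g / s, r / s
    return 3.0 * gg - 2.4 * rr - bb

def neighborhood_features(gray, half=2):
    """Stack the (2*half+1)^2 gray values around each pixel as features
    (the window size is an assumption; the paper uses 2-D neighborhoods)."""
    k = 2 * half + 1
    padded = np.pad(gray, half, mode="reflect")
    h, w = gray.shape
    feats = np.empty((h * w, k * k), dtype=np.float64)
    idx = 0
    for dy in range(k):
        for dx in range(k):
            feats[:, idx] = padded[dy:dy + h, dx:dx + w].ravel()
            idx += 1
    return feats

# Hypothetical training loop: labels would come from manually segmented
# field images, as in the paper's experimental setup.
# X = neighborhood_features(exg_minus_exr(train_bgr))
# y = train_mask.ravel()                  # 1 = vegetation, 0 = background
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X, y)
# pred = clf.predict(neighborhood_features(exg_minus_exr(test_bgr)))
```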