• Title/Summary/Keyword: Color classification

596 search results

Color Dispersion as an Indicator of Stellar Population Complexity for Galaxies in Clusters

  • Lee, Joon Hyeop; Pak, Mina; Lee, Hye-Ran; Oh, Sree
    • The Bulletin of The Korean Astronomical Society / v.43 no.2 / pp.34.1-34.1 / 2018
  • We investigate the properties of bright galaxies with various morphological types in Abell 1139 and Abell 2589, using pixel color-magnitude diagram (pCMD) analysis. The 32 bright member galaxies ($M_r \leq -21.3$ mag) are deeply imaged in the g and r bands in our CFHT/MegaCam observations, as part of the KASI-Yonsei Deep Imaging Survey of Clusters (KYDISC). We examine how the features of their pCMDs depend on galaxy morphology and infrared color. We find that the g - r color dispersion as a function of surface brightness ($\mu_r$) distinguishes galaxy morphology better than the mean g - r color does. The best set of parameters for galaxy classification appears to be a combination of the minimum color dispersion at $\mu_r \leq 21.2\,\mathrm{mag\,arcsec^{-2}}$ and the maximum color dispersion at $20.0 \leq \mu_r \leq 21.0\,\mathrm{mag\,arcsec^{-2}}$; the latter reflects the complexity of stellar populations in the disk component of a typical spiral galaxy. Moreover, the color dispersion of an elliptical galaxy appears to be correlated with its WISE infrared color ([4.6]-[12]). This indicates that the complexity of stellar populations in an elliptical galaxy is related to its recent star formation activity. From this observational evidence, we infer that gas-rich minor mergers or gas interactions may have commonly occurred during the recent growth of massive elliptical galaxies.
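
A minimal sketch of the pCMD statistic described above: given per-pixel g- and r-band surface brightnesses, the color dispersion is the scatter of g - r within bins of $\mu_r$. The array names and binning below are illustrative, not taken from the paper.

```python
import numpy as np

def color_dispersion_profile(mu_g, mu_r, bin_edges):
    """g - r color dispersion in bins of r-band surface brightness
    (mu_r, mag/arcsec^2), as read off a pixel color-magnitude diagram."""
    color = mu_g - mu_r                        # per-pixel g - r color
    idx = np.digitize(mu_r, bin_edges)         # assign pixels to mu_r bins
    disp = np.full(len(bin_edges) - 1, np.nan)
    for i in range(1, len(bin_edges)):
        in_bin = color[idx == i]
        if in_bin.size > 1:
            disp[i - 1] = np.std(in_bin)       # color dispersion in this bin
    return disp

# e.g. edges = np.arange(18.0, 23.2, 0.2) gives 0.2 mag arcsec^-2 bins
```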

A Study on development for image detection tool using two layer voting method (2단계 분류기법을 이용한 영상분류기 개발)

  • 김명관
    • Journal of the Korea Computer Industry Society / v.3 no.5 / pp.605-610 / 2002
  • In this paper, we propose an Internet filtering tool that allows parents to manage their children's Internet access and block access to Internet sites they deem inappropriate. Existing filtering tools such as Cyber Patrol, NCA Patrol, Argus, and Netfilter rely only on URL filtering or keyword detection, methods that apply in limited settings. Our approach instead focuses on an image color space model. First, we convert the RGB color space to HLS (Hue, Luminance, Saturation). Next, the HLS histogram is learned by several classification methods, including a cohesion factor, naive Bayesian, and N-nearest-neighbor classifiers. Finally, the results of these classifiers are combined by voting. Using 2,000 pictures, we show that the two-layer voting result is more accurate than the individual methods.
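
A hedged sketch of the two ingredients the abstract names: the RGB-to-HLS conversion (Python's standard colorsys module provides it) and the second-layer vote over first-layer classifier outputs. The classifiers themselves are stand-ins, not the paper's implementations.

```python
import colorsys
from collections import Counter
import numpy as np

def hls_hue_histogram(pixels, bins=16):
    """Convert RGB pixels (N x 3, values 0..255) to HLS and histogram the hue."""
    hues = [colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)[0]
            for r, g, b in pixels]
    hist, _ = np.histogram(hues, bins=bins, range=(0.0, 1.0), density=True)
    return hist

def majority_vote(predictions):
    """Second-layer decision: majority vote over first-layer outputs."""
    return Counter(predictions).most_common(1)[0][0]

# majority_vote(["block", "allow", "block"]) -> "block"
```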

Land Cover Classification Using UAV Imagery and Object-Based Image Analysis - Focusing on the Maseo-myeon, Seocheon-gun, Chungcheongnam-do - (UAV와 객체기반 영상분석 기법을 활용한 토지피복 분류 - 충청남도 서천군 마서면 일원을 대상으로 -)

  • MOON, Ho-Gyeong; LEE, Seon-Mi; CHA, Jae-Gyu
    • Journal of the Korean Association of Geographic Information Studies / v.20 no.1 / pp.1-14 / 2017
  • A land cover map provides basic information for understanding the current state of a region, but its utilization in ecological research has been limited by coarse temporal and spatial resolutions. The purpose of this study was to investigate the possibility of producing a land cover map from high-resolution images acquired by UAV. Using the UAV, 10.5 cm orthoimages were obtained over the $2.5\,\mathrm{km}^2$ study area, and land cover maps were produced by both object-based and pixel-based classification for comparison. Accuracy verification showed high classification accuracy, with a Kappa of 0.77 for the pixel-based classification and 0.82 for the object-based classification. The overall area ratios were similar, and good classification results were found in grasslands and wetlands. The optimal image segmentation weights for object-based classification were Scale=150, Shape=0.5, Compactness=0.5, and Color=1; Scale was the most influential factor in the weight selection process. Compared with the pixel-based classification, the object-based classification produces results that are easy to read because there are clear boundaries between objects. Compared with the land cover map of the Ministry of Environment (subdivision level), it was effective for natural areas (forests, grasslands, wetlands, etc.) but not for developed areas (roads, buildings, etc.). Applying object-based classification to UAV images for land cover mapping can contribute to ecological research through rapidly updated data, good accuracy, and economic efficiency.
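
The Kappa values quoted above are agreement scores of the Cohen's kappa form; a minimal sketch of how such a score is computed from a confusion matrix (the toy matrix in the comment is illustrative, not the study's data):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: reference data, columns: classified data)."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    p_observed = np.trace(c) / n                               # overall agreement
    p_expected = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_observed - p_expected) / (1.0 - p_expected)

# cohens_kappa([[50, 5], [10, 35]]) -> ~0.69 for this toy matrix
```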

Object-oriented Classification of Urban Areas Using Lidar and Aerial Images

  • Lee, Won Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.33 no.3 / pp.173-179 / 2015
  • In this paper, object-based classification of urban areas based on a combination of information from lidar and aerial images is introduced. High-resolution images are frequently used in automatic classification, making use of the spectral characteristics of the features under study. However, in urban areas, pixel-based classification can be difficult because building colors differ and building shadows can obscure building segmentation. Therefore, if the boundaries of buildings can be extracted from lidar, this information can improve the accuracy of urban area classification. In the data processing stage, the lidar data and the aerial image are co-registered into the same coordinate system, and a local maxima filter is used for building segmentation of the lidar data, which is then converted into an image containing only building information. Multiresolution segmentation is then performed using a scale parameter together with color and shape factors, and a compactness factor and layer weights are applied in a class-hierarchy-based classification. Results indicate that lidar can provide useful additional data when combined with high-resolution images in the object-oriented hierarchical classification of urban areas.
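
A minimal sketch of the local-maxima step described above, assuming the lidar returns have already been gridded into a surface model; the window size and height threshold are illustrative:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(dsm, window=5, min_height=2.0):
    """Boolean mask of cells that are local height maxima in a gridded
    lidar surface model; min_height suppresses ground-level noise."""
    peaks = dsm == maximum_filter(dsm, size=window)
    return peaks & (dsm >= min_height)

# building_image = local_maxima(dsm_grid).astype(np.uint8)
```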

A New Galaxy Classification Scheme in the WISE Color-Luminosity Diagram

  • Lee, Gwang-Ho; Sohn, Jubee; Lee, Myung Gyoon
    • The Bulletin of The Korean Astronomical Society / v.38 no.2 / pp.49.1-49.1 / 2013
  • We present a new galaxy classification scheme in the Wide-field Infrared Survey Explorer (WISE) $[3.4\,\mu\mathrm{m}]-[12\,\mu\mathrm{m}]$ color versus $12\,\mu\mathrm{m}$ luminosity diagram. In this diagram, galaxies can be classified into three groups at different evolutionary stages. Late-type galaxies are distributed linearly along the 'MIR star-forming sequence' identified by Hwang et al. (2012). Some early-type galaxies form another sequence at $[3.4]-[12]\,(\mathrm{AB}) \simeq -2.0$, which we call the 'MIR blue sequence'. These are quiescent systems with stellar populations older than 10 Gyr. Between the MIR star-forming sequence and the MIR blue sequence, some early- and late-type galaxies are sparsely distributed; we call these 'MIR green cloud' galaxies. Interestingly, both MIR blue sequence and MIR green cloud galaxies lie on the red sequence in the optical color-magnitude diagram. However, MIR green cloud galaxies have lower stellar masses and younger stellar populations (smaller $D_n4000$) than MIR blue sequence galaxies, suggesting that MIR green cloud galaxies are in transition from the MIR star-forming sequence to the MIR blue sequence. We present differences in various galaxy properties between the three MIR classes using multi-wavelength data, combining WISE with Sloan Digital Sky Survey Data Release 10, for local (0.03 < z < 0.07) galaxies.
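
A hedged sketch of classification by WISE [3.4]-[12] color. Only the blue-sequence locus near -2.0 is quoted in the abstract, so the cut values below are placeholders rather than the paper's boundaries:

```python
def mir_class(color_34_minus_12):
    """Assign an illustrative MIR class from the [3.4]-[12] (AB) color.
    Thresholds are placeholders; the paper defines the real boundaries."""
    if color_34_minus_12 < -1.5:    # near -2.0: quiescent, old populations
        return "MIR blue sequence"
    if color_34_minus_12 < 0.5:     # intermediate colors: transition objects
        return "MIR green cloud"
    return "MIR star-forming sequence"  # red, actively star-forming

# mir_class(-2.0) -> "MIR blue sequence"
```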

Optimal Image Quality Assessment based on Distortion Classification and Color Perception

  • Lee, Jee-Yong; Kim, Young-Jin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.257-271 / 2016
  • The Structural SIMilarity (SSIM) index is one of the most widely used methods for perceptual image quality assessment (IQA). It is based on the principle that the human visual system (HVS) is sensitive to the overall structure of an image. However, it has been reported that indices predicted by SSIM tend to be biased depending on the type of distortion, which increases the deviation from the main regression curve. Consequently, SSIM can suffer serious performance degradation. In this study, we investigate this phenomenon from a new perspective and examine a constant that plays a significant role in the SSIM metric but has been overlooked thus far. Through an experimental study of this constant's influence when evaluating images with SSIM, we propose a new solution that resolves the issue. In the proposed IQA method, we first design a system to classify different types of distortion, and then match an optimal constant to each type. In addition, we supplement the proposed method with color perception-based structural information. For a comprehensive assessment, we compare the proposed method with 15 existing IQA methods. The experimental results show that the proposed method is more consistent with the HVS than the other methods.
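
For context, the constant in question enters the standard SSIM formula as the stabilizing terms C1 and C2. A single-window sketch, using the commonly cited default K values rather than the paper's per-distortion optimized constants:

```python
import numpy as np

def ssim_global(x, y, L=255.0, K1=0.01, K2=0.03):
    """Single-window SSIM. C1 and C2 stabilize the ratios when means or
    variances are near zero; their choice biases the resulting index."""
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

# identical images give 1.0; distortion pushes the index toward 0
```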

Improved Classification of Cancerous Histopathology Images using Color Channel Separation and Deep Learning

  • Gupta, Rachit Kumar; Manhas, Jatinder
    • Journal of Multimedia Information System / v.8 no.3 / pp.175-182 / 2021
  • Oral cancer is the second most commonly diagnosed cancer in the Indian population and the sixth worldwide. It is one of the deadliest cancers, with a high mortality rate and very low 5-year survival rates even after treatment. It is therefore necessary to detect oral malignancies as early as possible, so that timely treatment can be given and survival chances increased. In recent years, many researchers have proposed deep learning-based frameworks that can detect malignancies in medical images. In this paper, we propose a deep learning-based framework that detects oral cancer from histopathology images very efficiently. We design our model to split the color channels and extract deep features from these individual channels, rather than from a single combined channel, with the help of EfficientNet B3. The features from the different channels are fused by a feature fusion module designed as a layer and placed before the dense layers of EfficientNet. The experiments were performed on our own dataset collected from hospitals. We also performed experiments on the BreakHis and ICML datasets to evaluate our model. The results produced by our model compare well with previously reported results.
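
A minimal Keras sketch of the channel-splitting idea: each color channel is encoded separately and the features are concatenated before the dense head. For simplicity one backbone is shared across channels here, whereas the paper may train separate extractors; names and sizes are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import EfficientNetB3

def build_channel_fusion_model(size=300, num_classes=2):
    # Shared EfficientNet B3 encoder; each single channel is tiled to
    # three planes to match the network's expected input depth.
    backbone = EfficientNetB3(include_top=False, weights=None,
                              input_shape=(size, size, 3), pooling="avg")
    inp = layers.Input((size, size, 3))
    feats = []
    for c in range(3):  # R, G, B processed as separate one-channel images
        ch = layers.Lambda(lambda t, i=c: tf.repeat(t[..., i:i + 1], 3, axis=-1))(inp)
        feats.append(backbone(ch))
    fused = layers.Concatenate()(feats)  # feature fusion before dense layers
    out = layers.Dense(num_classes, activation="softmax")(fused)
    return Model(inp, out)
```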

An Efficient Indoor-Outdoor Scene Classification Method (효율적인 실내의 영상 분류 기법)

  • Kim, Won-Jun; Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.5 / pp.48-55 / 2009
  • Prior research on indoor-outdoor classification has been based on simple combinations of low-level features. However, since the extreme variability of scene contents poses many challenges, most recently proposed methods combine low-level features with high-level information such as the presence of trees and sky. Extracting these regions from videos requires additional processing, which increases the number of feature dimensions or the computational burden. Therefore, an efficient indoor-outdoor scene classification method is proposed in this paper. First, the video frame is divided into five same-sized blocks. We then define and use edge and color orientation histogram (ECOH) descriptors to represent each sub-block efficiently. Finally, all ECOH values are simply concatenated to generate the feature vector. To demonstrate the efficiency and robustness of the proposed method, a diverse database of over 1,200 videos is evaluated. Moreover, we improve the classification performance by using different weight values determined through a learning process.
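
A hedged sketch of an ECOH-style descriptor: the frame is split into five equal blocks (vertical strips here, an assumption) and, per block, an edge-orientation histogram and a hue histogram are concatenated; the paper's exact ECOH definition may differ.

```python
import numpy as np

def ecoh_like_descriptor(gray, hue, n_blocks=5, bins=8):
    """gray: 2-D luminance array; hue: 2-D hue array with values in [0, 1)."""
    h, w = gray.shape
    gy, gx = np.gradient(gray.astype(float))
    orient = (np.arctan2(gy, gx) + np.pi) / (2 * np.pi)   # map to [0, 1]
    feats = []
    for b in range(n_blocks):                  # five same-sized strips
        cols = slice(b * w // n_blocks, (b + 1) * w // n_blocks)
        eh, _ = np.histogram(orient[:, cols], bins=bins, range=(0, 1), density=True)
        ch, _ = np.histogram(hue[:, cols], bins=bins, range=(0, 1), density=True)
        feats.extend(eh)                       # edge orientation histogram
        feats.extend(ch)                       # color (hue) histogram
    return np.array(feats)                     # concatenated feature vector
```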

Distinction of Color Similarity for Clothes based on the LBG Algorithm (LBG 알고리즘 기반의 의상 색상 유사성 판별)

  • Ju, Hyung-Don; Hong, Min; Cho, We-Duke; Moon, Nam-Mee; Choi, Yoo-Joo
    • Journal of Internet Computing and Services / v.9 no.5 / pp.117-130 / 2008
  • This paper proposes a stable and robust method to determine the color similarity of clothes under various light sources, based on the LBG algorithm. Since conventional methods such as histogram intersection and the accumulated histogram are highly sensitive to changes in the lighting environment, the color-similarity decision for the same cloth can differ under complicated light sources. To reduce the effects of the light sources, hue and saturation, which remain comparatively consistent under varying illumination, are analyzed to characterize the color distribution. In the two-dimensional space spanned by hue and saturation, the LBG algorithm, a non-parametric clustering approach, is applied to examine the color distribution of the images of each garment. The color similarity of two images is defined as the average Euclidean distance between the matched clusters obtained by clustering both images. To demonstrate the stability of the proposed method, its color-similarity results are compared with those of traditional histogram-based methods using a dozen cloth examples captured under different lighting environments. Our method successfully separates same-cloth image pairs from different-cloth image pairs, classifying color similarity with a 91.6% success rate.
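
A compact sketch of LBG codebook training on hue-saturation points, using the classic split-then-refine (generalized Lloyd) scheme; the paper's similarity measure would then average Euclidean distances between matched codewords from two garments' codebooks.

```python
import numpy as np

def lbg(points, codebook_size=8, eps=0.01, iters=20):
    """LBG vector quantization: start from the global centroid, split each
    codeword by +/-eps, then refine with Lloyd iterations, repeating until
    the codebook reaches the requested (power-of-two) size.
    points: (N, 2) array of hue-saturation pairs."""
    codebook = points.mean(axis=0, keepdims=True)
    while len(codebook) < codebook_size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):  # Lloyd refinement
            dists = np.linalg.norm(points[:, None] - codebook[None], axis=2)
            nearest = dists.argmin(axis=1)
            for k in range(len(codebook)):
                members = points[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook
```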

Recognition of Colors of Image Code Using Hue and Saturation Values (색상 및 채도 값에 의한 이미지 코드의 칼라 인식)

  • Kim Tae-Woo; Park Hung-Kook; Yoo Hyeon-Joong
    • The Journal of the Korea Contents Association / v.5 no.4 / pp.150-159 / 2005
  • With the increase of interest in ubiquitous computing, image codes are attracting attention in various areas. Image codes are important in ubiquitous computing because they can complement or replace RFID (radio frequency identification) in quite a few areas and are more economical. However, because severe color distortion makes it difficult to read colors precisely, their application has so far been quite restricted. In this paper, we present an efficient method of image code recognition, including automatically locating the image code, using hue and saturation values. In our experiments, we use an image code whose design seems the most practical among those currently commercialized. This image code uses six safe colors: R, G, B, C, M, and Y. We tested 72 true-color field images of $2464 \times 1632$ pixels. With histogram-based color calibration, the localization accuracy was about 96%, and the color classification accuracy for localized codes was about 91.28%. Locating and recognizing an image code took approximately 5 seconds on a PC with a 2 GHz P4 CPU.
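
A minimal sketch of assigning a pixel to one of the six safe colors by nearest hue; the hue targets and the nearest-hue rule are illustrative simplifications of the paper's calibrated classifier.

```python
import colorsys

# Ideal hues of the six safe colors on the [0, 1) hue circle
SAFE_HUES = {"R": 0.0, "Y": 1 / 6, "G": 1 / 3, "C": 1 / 2, "B": 2 / 3, "M": 5 / 6}

def classify_safe_color(r, g, b):
    """Nearest-hue classification of an RGB pixel (0..255) into R/G/B/C/M/Y."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    # circular distance on the hue wheel to each ideal safe color
    dist = {name: min(abs(h - hue), 1 - abs(h - hue))
            for name, hue in SAFE_HUES.items()}
    return min(dist, key=dist.get)

# classify_safe_color(230, 40, 30) -> "R"
```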
