• Title/Summary/Keyword: Gray Image

Extended SURF Algorithm with Color Invariant Feature and Global Feature (컬러 불변 특징과 광역 특징을 갖는 확장 SURF(Speeded Up Robust Features) 알고리즘)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.6 / pp.58-67 / 2009
  • Correspondence matching is one of the important tasks in computer vision, and it is not easy to find corresponding points in a variable environment where scale, rotation, viewpoint, and illumination change. The SURF (Speeded Up Robust Features) algorithm has been widely used to solve the correspondence matching problem because it is faster than SIFT (Scale Invariant Feature Transform) while closely maintaining matching performance. However, because SURF considers only the gray image and local geometric information, it is difficult to match corresponding points in images where similar local patterns are scattered. To solve this problem, this paper proposes an extended SURF algorithm that uses invariant color and global geometric information. The proposed algorithm improves matching performance since the color information and global geometric information are used to discriminate similar patterns. The superiority of the proposed algorithm is demonstrated by experiments comparing it with conventional methods on images where illumination and viewpoint change and similar patterns exist.
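
As context for the abstract above, here is a minimal sketch of the baseline gray-image SURF matching pipeline that the paper extends, using OpenCV. The color-invariant and global geometric features are the paper's contribution and are not reproduced here; the file names and ratio-test threshold are illustrative assumptions.

```python
# Baseline gray-image SURF matching; NOT the paper's extended algorithm.
# Note: SURF is patented and lives in opencv-contrib (cv2.xfeatures2d).
import cv2

img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Lowe's ratio test discards ambiguous matches -- exactly the ambiguity
# that the paper's color and global geometric cues aim to resolve.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.7 * n.distance]
print(f"{len(good)} confident correspondences")
```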

Kate Middleton's Royal Fashion Style Analysis (케이트 미들턴의 로열 패션(Royal Fashion) 스타일 분석)

  • Lee, Seunghee;Kim, Jiyoung
    • Journal of Fashion Business / v.22 no.1 / pp.1-19 / 2018
  • The purpose of this study is to analyze the fashion style of Kate Middleton of the Royal Family and to examine the social and cultural influence of her fashion. We selected 314 photographs collected from Google and Gettyimages.com between April 2011 and December 2016 as the final research subjects. We categorized the situations into domestic events, royal events, diplomatic activities, and social contribution activities, and analyzed fashion styles focusing on item composition, color, material, silhouette, detail, trimming, and length. As a result of the study, the one-piece dress was the most frequent item combination, and white was the most common color. The color tones were mostly vivid, and the material texture was silky. The image was classic, and the dress code was predominantly semi-formal. By situation, the coat was the most common at royal events, where blue or white in light tones appeared in the formal style of the classic image. At domestic events, silky textures of modern image were frequent, and the vivid, strong-toned, knee-length H-line dress was the most prevalent. During diplomatic activities, various colors such as red, green, and gray appeared in addition to blue or white, and in social contribution activities, many dresses in vivid and dark tones of red appeared with a semi-formal dress code. In conclusion, the stylistic features of Kate Middleton's royal fashion are largely royal and noble, low-cost and chic, and body-conscious styling.

The Object Image Detection Method using statistical properties (통계적 특성에 의한 객체 영상 검출방안)

  • Kim, Ji-hong
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.7 / pp.956-962 / 2018
  • As a study of object feature detection from images, we explain methods to identify tree species in a forest using pictures taken from a drone. Generally, methods such as GLCM (Gray Level Co-occurrence Matrix) and Gabor filters are used to extract object features. Because of the similarity of the leaves, we propose an object extraction method that uses the statistical properties of the trees. After extracting sample images from the original images, we detect the objects using cross-correlation between the original image and the sample images. Through this experiment, we found that the mean value and standard deviation of the sample images are very important factors for identifying the object. Analysis of the color components of the RGB and HSV models is also used to identify the object.
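
A minimal sketch of the cross-correlation step the abstract describes, assuming hypothetical file names; the 0.6 response threshold is an illustrative choice, not the paper's.

```python
# Cross-correlation between a drone image and a sampled tree patch, plus
# the patch mean and standard deviation the paper identifies as key factors.
import cv2
import numpy as np

image = cv2.imread("forest.png", cv2.IMREAD_GRAYSCALE)
sample = cv2.imread("tree_sample.png", cv2.IMREAD_GRAYSCALE)

mean, std = sample.mean(), sample.std()  # statistical signature of the species
print(f"sample mean={mean:.1f}, std={std:.1f}")

# Normalized cross-correlation map; peaks mark candidate object locations.
response = cv2.matchTemplate(image, sample, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(response > 0.6)  # threshold chosen for illustration
print(f"{len(xs)} candidate matches above threshold")
```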

A Computer Vision-based Method for Detecting Rear Vehicles at Night (컴퓨터비전 기반의 야간 후방 차량 탐지 방법)

  • 노광현;문순환;한민홍
    • Journal of the Institute of Convergence Signal Processing / v.5 no.3 / pp.181-189 / 2004
  • This paper describes a method for detecting vehicles to the rear and rear-side at night by using headlight features. A headlight is an outstanding feature that can be used to discriminate a vehicle from a dark background. In the segmentation process, a night image is transformed into a binary image consisting of a black background and white regions by gray-level thresholding, and noise in the binary image is eliminated by a morphological operation. In the feature extraction process, the geometric features and moment-invariant features of a headlight are defined and measured in each segmented region. Regions that are not appropriate for a headlight are filtered out using the geometric feature measurements. In region classification, a pair of headlights is detected using relational features based on the symmetry of a headlight pair. Experimental results show that this method is very applicable to an approaching-vehicle detection system at nighttime.
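
A minimal sketch of the described pipeline (gray-level thresholding, morphological noise removal, geometric filtering, and symmetry-based pairing); all thresholds are chosen for illustration rather than taken from the paper.

```python
import cv2

night = cv2.imread("rear_night.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Segmentation: bright headlights against a dark background.
_, binary = cv2.threshold(night, 200, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # remove speckle noise

# Feature extraction: keep regions whose area is plausible for a headlight.
n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
lights = [(centroids[i], stats[i]) for i in range(1, n)
          if 50 < stats[i, cv2.CC_STAT_AREA] < 2000]

# Classification: pair regions of similar size lying on a near-horizontal line.
pairs = []
for i in range(len(lights)):
    for j in range(i + 1, len(lights)):
        (x1, y1), s1 = lights[i]
        (x2, y2), s2 = lights[j]
        ratio = s1[cv2.CC_STAT_AREA] / s2[cv2.CC_STAT_AREA]
        if abs(y1 - y2) < 10 and 0.5 < ratio < 2.0:
            pairs.append((lights[i], lights[j]))
print(f"{len(pairs)} headlight pairs found")
```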

Content based Image Retrieval using RGB Maximum Frequency Indexing and BW Clustering (RGB 최대 주파수 인덱싱과 BW 클러스터링을 이용한 콘텐츠 기반 영상 검색)

  • Kang, Ji-Young;Beak, Jung-Uk;Kang, Gwang-Won;An, Young-Eun;Park, Jong-An
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.1 no.2 / pp.71-79 / 2008
  • This study proposes a content-based image retrieval system that uses RGB maximum-frequency indexing and BW clustering in order to overcome the retrieval errors of existing histogram methods. We split the R, G, and B channels of color images, obtained a histogram evenly divided into 32 bins for each channel, calculated and analyzed the pixel counts in each bin of the R, G, and B histograms, and obtained the maximum value. We indexed the color information thus obtained, retrieved 100 similar images using these values, and operated the final image retrieval stage using the total number and distribution rate of clusters. The proposed algorithm uses spatial information from the features obtained from R, G, and B and from the clusters to obtain effective features, which overcomes the disadvantage of the existing gray-scale algorithm that perceives different images as the same if they have the same frequencies of shade. Measuring performance with recall and precision, this study found that the retrieval rate and ranking of the proposed algorithm are superior to those of the existing algorithm.
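
A minimal sketch of the indexing step alone, assuming a hypothetical input file; the BW-clustering refinement stage is omitted.

```python
# Per-channel 32-bin histograms; the bin with the maximum frequency in each
# channel serves as the color index described in the abstract.
import cv2
import numpy as np

img = cv2.imread("query.png")  # hypothetical input; OpenCV loads as BGR

index = []
for ch in range(3):  # B, G, R channels
    hist = cv2.calcHist([img], [ch], None, [32], [0, 256]).ravel()
    index.append(int(np.argmax(hist)))  # bin of maximum frequency
print("max-frequency index (B, G, R bins):", index)
```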

Video-based Intelligent Unmanned Fire Surveillance System (영상기반 지능형 무인 화재감시 시스템)

  • Jeon, Hyoung-Seok;Yeom, Dong-Hae;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.4 / pp.516-521 / 2010
  • In this paper, we propose a video-based intelligent unmanned fire surveillance system using fuzzy color models. In general, a fire surveillance system requires a separate device to detect heat or smoke; this system, however, can be implemented using widely deployed CCTV, which needs no separate devices or extra cost. Video-based fire surveillance systems mainly extract smoke or flame from an input image alone. Smoke is difficult to extract at night because of its gray-scale color, and the flame color depends on the temperature, the inflammable material, the size of the flame, etc., which makes it hard to extract the flame region from the input image. This paper deals with an intelligent fire surveillance system that is robust against variation of the flame color, especially at night. The proposed system extracts the moving object from the input image, decides whether the object is a flame by means of the color obtained from the fuzzy color model and the shape obtained from a histogram, and issues a fire alarm when the flame spreads. Finally, we verify the efficiency of the proposed system through an experiment with a controlled real fire.
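
A minimal sketch of the two cues the abstract combines, motion and fuzzy flame color. The triangular membership functions stand in for the paper's fuzzy color model, whose parameters the abstract does not give; file names and thresholds are illustrative.

```python
import cv2
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership function evaluated element-wise."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0, 1)

prev = cv2.imread("frame_0.png")  # hypothetical consecutive frames
curr = cv2.imread("frame_1.png")

# Moving-object mask from simple frame differencing.
motion = cv2.absdiff(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY),
                     cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)) > 20

# Fuzzy "flame-colored" degree in HSV: reddish-orange hue, bright value.
hsv = cv2.cvtColor(curr, cv2.COLOR_BGR2HSV).astype(np.float32)
flame = np.minimum(tri(hsv[..., 0], 0, 10, 35),      # hue near red/orange
                   tri(hsv[..., 2], 150, 220, 256))  # bright pixels

candidate = motion & (flame > 0.5)
print(f"{candidate.sum()} candidate flame pixels")
```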

Recognition and Visualization of Crack on Concrete Wall using Deep Learning and Transfer Learning (딥러닝과 전이학습을 이용한 콘크리트 균열 인식 및 시각화)

  • Lee, Sang-Ik;Yang, Gyeong-Mo;Lee, Jemyung;Lee, Jong-Hyuk;Jeong, Yeong-Joon;Lee, Jun-Gu;Choi, Won
    • Journal of The Korean Society of Agricultural Engineers / v.61 no.3 / pp.55-65 / 2019
  • Although cracks in concrete exist from its early formation, they require attention as they affect the stiffness of the structure and can lead to its demolition as they grow. Detecting cracks in concrete is needed to take action before the structure's performance degrades, and deep learning can be utilized for this. In this study, transfer learning, one of the deep learning techniques, was used to detect cracks, as the amount of crack image data was limited. A pre-trained Inception-v3 was applied as the base model for the transfer learning. Web scraping was used to fetch images of concrete walls with or without cracks from the web. For crack recognition, image post-processing including resizing and color removal was applied. For crack visualization, source images divided into 30 px, 50 px, or 100 px patches were used as input data, and different numbers of input data per category were applied for each case. With the resulting visualized crack images, false positive and false negative errors were examined. The highest accuracy for recognizing cracks was achieved when the source images were resized to 224 px in gray scale. In visualization, the result using 50 data per category at the 100 px patch size showed the smallest error. With regard to the false positive error, the best result was obtained using 400 data per category, and with regard to the false negative error, the case using 50 data per category showed the best result.
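
A minimal sketch of the transfer-learning setup the abstract names: a frozen pre-trained Inception-v3 base with a new binary head (crack / no crack). The head layers and training settings are assumptions, not the study's; since ImageNet weights expect 3 channels, gray-scale inputs would be replicated across channels.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained Inception-v3 as the frozen feature extractor (224 px inputs,
# matching the image size the abstract reports as most accurate).
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # crack vs. no crack
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```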

Application of deep convolutional neural network for short-term precipitation forecasting using weather radar-based images

  • Le, Xuan-Hien;Jung, Sungho;Lee, Giha
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.136-136 / 2021
  • In this study, a deep convolutional neural network (DCNN) model is proposed for short-term precipitation forecasting using weather radar-based images. The DCNN model is a combination of convolutional neural networks, autoencoder neural networks, and the U-net architecture. The weather radar-based image data used here are retrieved from a competition for rainfall forecasting in Korea (AI Contest for Rainfall Prediction of Hydroelectric Dam Using Public Data), organized by Dacon under the sponsorship of the Korea Water Resources Association in October 2020. The data were collected from rainfall events during the rainy season (April - October) from 2010 to 2017. These images underwent a preprocessing step converting the weather radar data to grayscale image data before they were released for the competition. Each of these gray images covers a spatial dimension of 120×120 pixels with a temporal resolution of 10 minutes, and each pixel corresponds to a grid of size 4 km × 4 km. The DCNN model is designed in this study to provide predictive images 10 minutes in advance. Precipitation information can then be obtained from these forecast images through empirical conversion formulas. Model performance is assessed with a Score index defined from the ratio of MAE (mean absolute error) to CSI (critical success index) values. The competition results demonstrated the impressive performance of the DCNN model, with a Score value of 0.530 against the competition's best value of 0.500, ranking 16th out of 463 participating teams. These findings exhibit the potential of applying the DCNN model to short-term rainfall prediction using weather radar-based images, and the model can be applied to other areas with different spatiotemporal resolutions.
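
A minimal sketch of a convolutional encoder-decoder in the spirit of the described DCNN (CNN, autoencoder, and U-net ideas), mapping a stack of past 120×120 radar frames to the next 10-minute frame. The layer sizes and the four-frame input depth are assumptions, not the competition model.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

inp = layers.Input(shape=(120, 120, 4))           # four past radar frames
e1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
p1 = layers.MaxPooling2D(2)(e1)                   # 60x60
e2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
p2 = layers.MaxPooling2D(2)(e2)                   # 30x30 bottleneck
b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)
u2 = layers.UpSampling2D(2)(b)                    # back to 60x60
d2 = layers.Conv2D(64, 3, padding="same", activation="relu")(
    layers.Concatenate()([u2, e2]))               # U-net skip connection
u1 = layers.UpSampling2D(2)(d2)                   # back to 120x120
d1 = layers.Conv2D(32, 3, padding="same", activation="relu")(
    layers.Concatenate()([u1, e1]))
out = layers.Conv2D(1, 1, activation="sigmoid")(d1)  # next grayscale frame

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="mae")  # MAE also enters the Score index
```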

Searching Spectrum Band of Crop Area Based on Deep Learning Using Hyper-spectral Image (초분광 영상을 이용한 딥러닝 기반의 작물 영역 스펙트럼 밴드 탐색)

  • Gwanghyeong Lee;Hyunjung Myung;Deepak Ghimire;Donghoon Kim;Sewoon Cho;Sunghwan Jeong;Bvouneiun Kim
    • Smart Media Journal / v.13 no.8 / pp.39-48 / 2024
  • Recently, various studies have emerged that utilize hyperspectral imaging for crop growth analysis and early disease diagnosis. However, handling the numerous spectral bands and finding the optimal bands for the crop area remains a difficult problem. In this paper, we propose a deep-learning-based method for searching the optimal spectral bands of the crop area in hyperspectral images. The proposed method extracts RGB images from the hyperspectral images and segments the background and foreground areas with a Vision Transformer-based Seformer. The segmentation results are projected onto each band of the gray-scale-converted hyperspectral image, and the optimal spectral bands of the crop area are determined by pixel-wise comparison of the projected foreground and background areas. The proposed method achieved foreground/background segmentation performance with an average accuracy of 98.47% and an mIoU of 96.48%. In addition, it was confirmed that, compared to the mRMR method, the proposed method converges on the NIR regions closely related to the crop area.
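
A minimal sketch of the band-search step, assuming the Seformer segmentation stage has already produced a foreground mask; the file names and the separation score are illustrative, not the paper's criterion.

```python
# Project a crop-area mask onto every band of a hyperspectral cube and rank
# bands by how well they separate foreground (crop) from background pixels.
import numpy as np

cube = np.load("hyperspectral_cube.npy")       # hypothetical (H, W, bands) array
mask = np.load("crop_mask.npy").astype(bool)   # hypothetical (H, W) foreground

scores = []
for b in range(cube.shape[2]):
    band = cube[..., b]
    fg, bg = band[mask], band[~mask]
    # Separation score: normalized distance between class means.
    scores.append(abs(fg.mean() - bg.mean()) / (fg.std() + bg.std() + 1e-9))

best = np.argsort(scores)[::-1][:5]
print("Top-5 most discriminative bands:", best)
```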

The nanoleakage patterns of experimental hydrophobic adhesives after load cycling (Load cycling에 따른 소수성 실험용 상아질 접착제의 nanoleakage 양상)

  • Sohn, Suh-Jin;Chang, Ju-Hae;Kang, Suk-Ho;Yoo, Hyun-Mi;Cho, Byeong-Hoon;Son, Ho-Hyun
    • Restorative Dentistry and Endodontics / v.33 no.1 / pp.9-19 / 2008
  • The purpose of this study was (1) to compare the nanoleakage patterns of a conventional 3-step etch-and-rinse adhesive system and two experimental hydrophobic adhesive systems and (2) to investigate the change in the nanoleakage patterns after load cycling. Two hydrophobic experimental adhesives, an ethanol-containing adhesive (EA) and a methanol-containing adhesive (MA), were prepared. Thirty extracted human molars were embedded in resin blocks, and the occlusal thirds of the crowns were removed. The polished dentin surfaces were etched with a 35% phosphoric acid etching gel and rinsed with water. Scotchbond Multi-Purpose (MP), EA, and MA were used for the bonding procedure. Z-250 composite resin was built up on the adhesive-treated surfaces. Five teeth of each dentin adhesive group were subjected to mechanical load cycling. The teeth were sectioned into 2 mm thick slabs and then stained with 50% ammoniacal silver nitrate. Ten specimens from each group were examined under a scanning electron microscope in backscattered electron mode. All photographs were analyzed using image analysis software. Three regions of each specimen were used to evaluate silver uptake within the hybrid layer. The area of silver deposition was calculated and expressed in gray value. Data were statistically analyzed by two-way ANOVA, and post-hoc multiple comparisons were done with Scheffé's test. Silver particles were observed in all groups, but they were more sparsely distributed in the EA and MA groups than in the MP group (p < .0001). There were no changes in nanoleakage patterns after load cycling.
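
A minimal sketch of the gray-value measurement the abstract describes: estimating the silver-stained area fraction by thresholding bright deposits in a backscattered-electron image. The Otsu threshold and the file name are illustrative assumptions; the study used dedicated image analysis software.

```python
import cv2
import numpy as np

# Hypothetical region of interest cropped to the hybrid layer.
bse = cv2.imread("hybrid_layer_roi.png", cv2.IMREAD_GRAYSCALE)

# Silver deposits appear bright in backscattered-electron mode; Otsu picks
# a gray-value threshold separating them from the surrounding dentin/resin.
_, silver = cv2.threshold(bse, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
area_fraction = np.count_nonzero(silver) / silver.size
print(f"silver uptake: {100 * area_fraction:.2f}% of the hybrid layer area")
```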