• Title/Summary/Keyword: Color image detection


Line Edge-Based Type-Specific Corner Points Extraction for the Analysis of Table Form Document Structure (표 서식 문서의 구조 분석을 위한 선분 에지 기반의 유형별 꼭짓점 검출)

  • Jung, Jae-young
    • Journal of Digital Contents Society
    • /
    • v.15 no.2
    • /
    • pp.209-217
    • /
    • 2014
  • Classifying large numbers of table-form documents into the same type of class, or automatically extracting the information filled into a template, is very important, and both tasks require an accurate analysis of the table-form structure. This paper proposes an algorithm that extracts corner points based on line-edge segments and classifies junction types from table-form images. The algorithm preprocesses the image through binarization, skew correction, and deletion of small isolated black regions, which are probably generated by noise. It then detects edge blocks, line edges within each edge block, and corner points. The extracted corner points are classified into nine junction types based on the combination of horizontal and vertical line-edge segments in a block. The proposed method is applied to several unconstrained document images such as tax forms, transaction receipts, and ordinary documents containing tables. The experimental results show a corner-point detection rate of over 99%. Since almost all corner points form corresponding pairs in a table, the junction-type and line-width information may be useful for analyzing the structure of table-form documents.
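A hypothetical sketch of the junction-type classification step described above: assuming the nine types correspond to the ways horizontal and vertical line-edge segments can meet at a corner block (four L-corners, four T-junctions, and one cross), the type follows from which directions a segment leaves the block. The labels are illustrative; the abstract does not name them.

```python
def classify_junction(up, down, left, right):
    """Classify a corner block by which line-edge segments touch it.

    Each argument is True if a line-edge segment leaves the block in that
    direction. Returns one of nine assumed type labels (4 L-corners,
    4 T-junctions, 1 cross), or None for non-junction patterns.
    """
    types = {
        (False, True, False, True): "L-top-left",      # segments run down and right
        (False, True, True, False): "L-top-right",     # down and left
        (True, False, False, True): "L-bottom-left",   # up and right
        (True, False, True, False): "L-bottom-right",  # up and left
        (False, True, True, True):  "T-top",           # stem points down
        (True, False, True, True):  "T-bottom",
        (True, True, False, True):  "T-left",
        (True, True, True, False):  "T-right",
        (True, True, True, True):   "cross",
    }
    return types.get((up, down, left, right))

# A table's outer top-left corner has segments running down and to the right:
print(classify_junction(up=False, down=True, left=False, right=True))  # L-top-left
```

Since almost all corner points in a table pair up, as the abstract notes, mismatched types along a row or column are a cheap way to flag detection errors.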

A Method of Detecting Character Data through a Adaboost Learning Method (에이다부스트 학습을 이용한 문자 데이터 검출 방법)

  • Jang, Seok-Woo;Byun, Siwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.7
    • /
    • pp.655-661
    • /
    • 2017
  • Extracting the character regions contained in various input color images is very important, because characters can provide significant information about the content of an image. In this paper, we propose a new method for extracting character regions from input images using MCT features and an AdaBoost algorithm. Using geometric features, the method extracts the actual character regions by filtering non-character regions out of the candidate regions. Experimental results show that the suggested algorithm accurately extracts character regions from input images. We expect it to be useful in multimedia and image-processing applications such as store signboard detection and car license plate recognition.
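The MCT (Modified Census Transform) features named above encode local structure by comparing each pixel of a small window with the window mean, yielding a binary index that is robust to illumination changes. A minimal sketch for a 3x3 window, independent of the paper's AdaBoost stage:

```python
import numpy as np

def mct_3x3(patch):
    """Modified Census Transform of a 3x3 patch: each pixel is compared
    with the patch mean, producing a 9-bit structure index (0..511)."""
    patch = np.asarray(patch, dtype=float)
    assert patch.shape == (3, 3)
    bits = (patch > patch.mean()).astype(int).ravel()
    return int(bits.dot(1 << np.arange(9)[::-1]))  # MSB = top-left pixel

# A bright vertical stroke on a dark background:
patch = [[0, 255, 0],
         [0, 255, 0],
         [0, 255, 0]]
print(mct_3x3(patch))  # center-column bits set -> 0b010010010 = 146
```

Scaling the patch intensities leaves the index unchanged, which is why MCT features suit detection under varying lighting.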

Image Tracking Based Lane Departure Warning and Forward Collision Warning Methods for Commercial Automotive Vehicle (이미지 트래킹 기반 상용차용 차선 이탈 및 전방 추돌 경고 방법)

  • Kim, Kwang Soo;Lee, Ju Hyoung;Kim, Su Kwol;Bae, Myung Won;Lee, Deok Jin
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.39 no.2
    • /
    • pp.235-240
    • /
    • 2015
  • With the advancement of digital equipment, active safety systems are increasingly requested in the market for medium- and heavy-duty commercial vehicles over 4.5 tons, as well as in the passenger-car market. Unlike in a passenger car, the camera in a medium- or heavy-duty commercial vehicle is mounted relatively high, which is a disadvantage for lane recognition. In this work, we present a lane-recognition method based on spatial-domain processing using the Sobel edge, the Hough transform, and color-conversion correction. We also suggest a low-error method for recognizing vehicles ahead, reducing detection error by combining frontal-camera object-recognition methods such as Haar-like features, AdaBoost, SVM, and template matching. Vehicle tests verify that a lane-recognition reliability of over 98% is obtained.
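The Sobel edge step mentioned above is a plain spatial-domain convolution with two 3x3 kernels. A minimal sketch of just that stage; the authors additionally apply the Hough transform and color-conversion correction, which are omitted here:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    img = np.asarray(img, dtype=float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient
    ky = kx.T                                            # vertical gradient
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

# A vertical step edge yields a strong response along the boundary:
img = np.zeros((5, 6))
img[:, 3:] = 255.0
mag = sobel_edges(img)
```

The strong responses cluster in the columns straddling the step, which is exactly the edge map a Hough transform would then scan for lane lines.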

Study on the possibility of the aerosol and/or Yellow dust detection in the atmosphere by Ocean Scanning Multispectral Imager(OSMI)

  • Chung, Hyo-Sang;Park, Hye-Sook;Bag, Gyun-Myeong;Yoon, Hong-Joo;Jang, Kwang-Mi
    • Proceedings of the KSRS Conference
    • /
    • 1998.09a
    • /
    • pp.409-414
    • /
    • 1998
  • To examine the detectability of aerosol and/or Yellow dust from China crossing over the Yellow Sea, three studies were carried out. First, the visible (VIS), water vapor (WV), and infrared (IR) images from GMS-5 and NOAA/AVHRR were compared for cases of yellow sand events over Korea. Second, spectral radiance and reflectance (%) were observed during the yellow sand phenomena of April 1998 in Seoul using the GER-2600 spectroradiometer, which measures reflected radiance from 350 to 2500 nm in the atmosphere. From these observations we selected the optimum wavelength for detecting yellow sand, considering the effects of atmospheric absorption. Finally, the atmospheric radiance from the LOWTRAN-7 radiative transfer model was simulated with and without yellow sand, using the aerosol column optical depth ($\tau$ at 673 nm) estimated at the Meteorological Research Institute and d'Almeida's statistical atmospheric aerosol radiative characteristics. The image analysis showed that it is very difficult to detect the yellow sand region by image processing alone, because the albedo of the sand varies irregularly with the density, size, composition, and depth of the yellow sand clouds. From the radiance observations we found the 670-680 nm band useful for simulating aerosol characteristics while accounting for atmospheric absorption bands. We are now simulating the atmospheric radiance distribution in the 400-900 nm range. The purpose of this study is to present preliminary results on aerosol and/or Yellow dust detectability using the Ocean Scanning Multispectral Imager (OSMI), which will be mounted on KOMPSAT-1 as an ocean color monitoring sensor covering the 400-900 nm wavelength range.


Changes in MCSST and Chlorophyll-a Off Sanriku Area (38-43N, 141-150E) from NOAA/AVHRR and SeaWiFS Data

  • Kim, Myoung-Sun;Asanuma, Ichio
    • Proceedings of the KSRS Conference
    • /
    • 1998.09a
    • /
    • pp.95-100
    • /
    • 1998
  • The purpose of this study is to describe changes in the spring bloom and oceanographic conditions. The variation in pigment concentration derived from satellite ocean color data was analyzed. The movement of the blooming area showed that blooming was closely related to a rising trend in sea surface temperature and a supply of nutrients: nutrient-rich water carried by the Oyashio encounters the warm-core ring, where mixing and blooms are observed. We examined the correlation using satellite observations of temperature and chlorophyll-a for the spring season (May, June, July) of 1998 off the Sanriku area (38-43N, 141-150E). The SeaWiFS data were processed to level-3, which contains the geophysical chlorophyll-a values, and the chlorophyll-a data were mapped for the water between 110E and 160E and between 15N and 52N on a 0.08 x 0.05 degree grid for each image. Sea surface temperature (SST) data were produced using the AVHRR onboard the NOAA satellites, with SST derived by the MCSST method and mapped over the same area as the chlorophyll-a data. These gridded images were used to detect the individual water masses, such as the Kuroshio Extension, the warm-core ring, and the Oyashio Intrusion, and to determine short-term changes. Off Sanriku is a region where the warm-water pool and the Oyashio mix. When a warm streamer intrudes into cold water, the volume of phytoplankton increases at the tip of the streamer; the warm-water streamer was the trigger of the bloom. The SeaWiFS images also provided much information for studies of surface chlorophyll-a concentrations.


The near infrared image of GRB100205A field

  • Kim, Yongjung;Im, Myungshin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.37 no.2
    • /
    • pp.82.1-82.1
    • /
    • 2012
  • GRB100205A is a gamma-ray burst (GRB) suspected to be at redshift z=11-13 due to its very red H-K color ($(H-K)_{vega}=2.1{\pm}0.5$). We observed a field centered on GRB100205A with the Wide Field Camera (WFCAM) at the United Kingdom Infrared Telescope (UKIRT) in Hawaii, in order to search for a quasar that could be located near the GRB. The images were obtained in the J, H, and K filters, covering a square area of $0.78deg^2$. Our J-, H-, and K-band data reach depths of 22.5, 22.1, and 21.0 mag (Vega) at $5{\sigma}$, respectively. Also using a z-band image observed by CFHT, we find 8 candidates whose colors are consistent with a quasar at z=11-13 (non-detection in the z and J bands and $(H-K)_{vega}$ > 1.6). However, the shallow depths of the J and H bands are not enough to verify their true nature. Instead, we identify many red objects as old or dusty galaxies at $z{\geq}3$. The number density of such objects appears to be about twice or more that of the Cosmological Evolution Survey (COSMOS) and Ultra Deep Survey (UDS) fields of the UKIRT Infrared Deep Sky Survey (UKIDSS). On scales between 0.18' and 15', the correlation function is well described by a power law with an exponent of ${\approx}-0.9$, which implies that those objects cluster like galaxies. It is interesting that many red galaxies exist in the region where the GRB was detected.
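The candidate selection described above (non-detection in the z and J bands plus $(H-K)_{vega} > 1.6$) amounts to a simple color cut. A hedged sketch, where the record layout and the use of None for non-detections are illustrative assumptions:

```python
def is_candidate(src, hk_min=1.6):
    """Return True if a source passes the z=11-13 quasar color cuts:
    undetected in z and J, and H-K (Vega) redder than hk_min."""
    undetected_blue = src["z"] is None and src["J"] is None
    red_enough = src["H"] - src["K"] > hk_min
    return undetected_blue and red_enough

sources = [
    {"z": None, "J": None, "H": 22.0, "K": 20.1},  # passes: z/J dropout, H-K = 1.9
    {"z": 23.4, "J": 22.3, "H": 21.8, "K": 20.9},  # fails: detected in z and J
]
print([is_candidate(s) for s in sources])  # [True, False]
```

As the abstract cautions, such cuts only shortlist candidates; shallow J and H depths leave old or dusty red galaxies as contaminants.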


Comparison of Clinical Characteristics of Fluorescence in Quantitative Light-Induced Fluorescence Images according to the Maturation Level of Dental Plaque

  • Jung, Eun-Ha;Oh, Hye-Young
    • Journal of dental hygiene science
    • /
    • v.21 no.4
    • /
    • pp.219-226
    • /
    • 2021
  • Background: Proper detection and management of dental plaque are essential for individual oral health. We aimed to evaluate the maturation level of dental plaque using a two-tone disclosing agent and to compare it with the fluorescence of dental plaque in quantitative light-induced fluorescence (QLF) images, to obtain primary data for the development of a new dental plaque scoring system. Methods: Twenty-eight subjects who consented to participate after understanding the purpose of the study were screened. Images of the anterior teeth were obtained using the QLF device. Subsequently, dental plaque was stained with a two-tone disclosing solution and photographed with a digital single-lens reflex (DSLR) camera. Staining scores were assigned as follows: 0 for no staining, 1 for pink staining, and 2 for blue staining. Marked points on the DSLR images were selected for RGB color analysis. The relationship between dental plaque maturation and the red/green (R/G) ratio was evaluated using Spearman's rank correlation. Additionally, differences in red fluorescence according to dental plaque accumulation were assessed using one-way analysis of variance followed by Scheffe's post-hoc test to identify statistically significant differences between groups. Results: A comparison of red fluorescence intensity according to the maturation of the two-tone-stained dental plaque confirmed that the R/G ratio in the QLF images increased with dental plaque maturation (p<0.001). Correlation analysis between the stained dental plaque and the red fluorescence intensity in the QLF images confirmed an excellent positive correlation (p<0.001). Conclusion: A new plaque scoring system can be developed based on these results, which may also help with dental plaque management in the clinical setting.
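The R/G ratio used in the analysis above is simply the mean red-to-green intensity over the selected pixels. A minimal sketch, assuming pixels are given as (R, G, B) triples:

```python
import numpy as np

def rg_ratio(rgb_pixels):
    """Mean red/green (R/G) ratio over the selected pixels.

    In the study this is computed at marked points of QLF images,
    where mature plaque fluoresces red, raising the ratio.
    """
    px = np.asarray(rgb_pixels, dtype=float)
    r, g = px[:, 0], px[:, 1]
    return float(np.mean(r / g))

# Two hypothetical pixels from a red-fluorescing (mature) plaque region:
print(rg_ratio([[180, 60, 40], [150, 50, 30]]))  # (3.0 + 3.0) / 2 = 3.0
```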

An Accurate Forward Head Posture Detection using Human Pose and Skeletal Data Learning

  • Jong-Hyun Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.8
    • /
    • pp.87-93
    • /
    • 2023
  • In this paper, we propose a system that accurately and efficiently determines forward head posture through network learning, by analyzing the user's skeletal posture. Forward head posture syndrome is a condition in which the head shifts forward because the neck is kept bent forward for a long time, causing pain in the back, shoulders, and lower back; correcting daily posture habits is known to be more effective than surgery or drug treatment. Existing methods apply convolutional neural networks to webcam images, and such approaches are affected by the brightness, lighting, and skin color of the image, so they tend to work only for specific people. To alleviate this problem, this paper extracts the skeleton from the image and trains on data corresponding to the side view rather than the frontal view, finding forward head posture more efficiently and accurately than previous methods. The results show improved accuracy over the previous method across various experimental scenes.
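As a hedged illustration of why the side view helps: on a side-view skeleton, forward head posture shows up as a small angle between the shoulder-to-ear line and the horizontal. The landmark names and the threshold below are illustrative assumptions, not the paper's learned model, which classifies skeletal features with a network:

```python
import math

def forward_head_angle(ear, shoulder):
    """Angle (degrees) between the shoulder-to-ear line and the horizontal,
    from side-view (x, y) keypoints in image coordinates. Smaller angles
    mean the ear sits further in front of the shoulder."""
    dx = abs(ear[0] - shoulder[0])
    dy = abs(ear[1] - shoulder[1])
    return math.degrees(math.atan2(dy, dx))

def looks_forward_head(ear, shoulder, threshold_deg=50.0):
    """Hypothetical rule of thumb: flag postures below the angle threshold."""
    return forward_head_angle(ear, shoulder) < threshold_deg

# Upright: ear nearly above shoulder.  Forward: ear well in front of it.
print(looks_forward_head(ear=(102, 40), shoulder=(100, 80)))  # False (~87 deg)
print(looks_forward_head(ear=(140, 60), shoulder=(100, 80)))  # True  (~27 deg)
```

A frontal view collapses this geometry, which is consistent with the paper's choice to learn from side-view skeletal data.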

Detection of Illegal U-turn Vehicles by Optical Flow Analysis (옵티컬 플로우 분석을 통한 불법 유턴 차량 검지)

  • Song, Chang-Ho;Lee, Jaesung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39C no.10
    • /
    • pp.948-956
    • /
    • 2014
  • Today, intelligent vehicle detection systems seek to reduce negative factors such as accidents, beyond gathering the traffic information provided by existing systems. This paper proposes a detection algorithm for illegal U-turn vehicles, one of the road-traffic violations that can cause critical accidents. We predicted that if the calculated optical flow vectors lie along the illegal U-turn path, they are caused by an illegally U-turning vehicle. To reduce the high computational complexity, we use the pyramid Lucas-Kanade algorithm, which tracks only key points such as corners. Because the complexity remains high, we first detect the center lane using color information and the progressive probabilistic Hough transform, and apply the algorithm only to the area around the center lane. We then select the vectors on the illegal U-turn path and compute a reliability measure to decide whether they are caused by an illegal U-turn vehicle. Finally, to evaluate the algorithm, we measure the processing time of each variant and show that the proposed algorithm is efficient.
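The core of the Lucas-Kanade step named above is a least-squares solve of the brightness-constancy equation Ix*u + Iy*v + It = 0 over a window. A single-level sketch on a synthetic pair (the paper uses the pyramidal variant restricted to corner key points):

```python
import numpy as np

def lucas_kanade(I1, I2):
    """Single-window Lucas-Kanade: least-squares flow (u, v) over all
    interior pixels, using central-difference gradients of frame I1 and
    the temporal difference I2 - I1."""
    Ix = (I1[1:-1, 2:] - I1[1:-1, :-2]) / 2.0
    Iy = (I1[2:, 1:-1] - I1[:-2, 1:-1]) / 2.0
    It = (I2 - I1)[1:-1, 1:-1]
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v)

# Synthetic pair: a smooth quadratic surface translated by (0.5, 0.2).
y, x = np.mgrid[0:11, 0:11].astype(float)
f = lambda xx, yy: (xx - 5) ** 2 + (yy - 5) ** 2
I1 = f(x, y)
I2 = f(x - 0.5, y - 0.2)
u, v = lucas_kanade(I1, I2)
print(round(u, 3), round(v, 3))  # ~ 0.5 0.2
```

Restricting such solves to corner key points near the detected center lane is what keeps the overall computation tractable.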

A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.117-124
    • /
    • 2013
  • In this paper, we present a new method that efficiently estimates the face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features, both eyes, the nose, and the mouth, using Haar-like features, which are relatively insensitive to lighting variation. It then tracks the feature points in every frame using optical flow, and determines the face direction from the tracked points. Furthermore, to avoid falsely recognizing feature positions when their coordinates are lost during optical-flow tracking, the proposed method validates the locations of the facial features in real time using template matching against the detected features. Depending on the correlation score of this re-check, the face direction estimation process either re-detects the facial features or keeps tracking them while determining the direction of the face. The template matching initially saves the locations of the four facial features (the left and right eyes, the tip of the nose, and the mouth) during the feature-detection phase, and re-evaluates this information by detecting new facial features from the input image whenever the similarity between the stored information and the optical-flow-traced information exceeds a certain threshold. The proposed approach automatically alternates between the feature-detection and feature-tracking phases, enabling stable face pose estimation in real time. The experiments show that the proposed method estimates the face direction efficiently.
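The validation step above hinges on a correlation score between a stored feature template and the patch at the tracked location. A minimal sketch using normalized cross-correlation; the 0.8 threshold is an illustrative assumption, not the paper's tuned value:

```python
import numpy as np

def ncc(template, patch):
    """Normalized cross-correlation between a stored feature template and
    an equally sized patch; values near 1 mean the tracked location still
    looks like the saved feature."""
    t = np.asarray(template, dtype=float)
    p = np.asarray(patch, dtype=float)
    t = t - t.mean()
    p = p - p.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom) if denom else 0.0

def keep_tracking(template, patch, threshold=0.8):
    """True -> continue optical-flow tracking; False -> re-detect features."""
    return ncc(template, patch) >= threshold

eye = [[10, 200, 10], [10, 220, 10], [10, 210, 10]]
same = [[12, 205, 12], [12, 225, 12], [12, 215, 12]]   # same pattern, brighter
drift = [[200, 10, 10], [220, 10, 10], [210, 10, 10]]  # feature drifted away
print(keep_tracking(eye, same), keep_tracking(eye, drift))  # True False
```

Mean subtraction and normalization make the score tolerant of the lighting changes that often cause optical-flow drift in the first place.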