• Title/Summary/Keyword: Color matching algorithm

Quantitative Analysis of MR Image in Cerebral Infarction Period (뇌경색 시기별 MR영상의 정량적 분석)

  • Park, Byeong-Rae;Ha, Kwang;Kim, Hak-Jin;Lee, Seok-Hong;Jeon, Gye-Rok
    • Journal of radiological science and technology
    • /
    • v.23 no.1
    • /
    • pp.39-47
    • /
    • 2000
  • In this study, we compared and analyzed the signal intensities of DWI (diffusion-weighted imaging), which is used for early diagnosis of cerebral infarction, against T2-weighted and FLAIR images classified by infarction stage after ischemic stroke. To compare the three image types, we performed polynomial warping and an affine transform for image matching. Using the proposed algorithm, we calculated the signal-intensity differences between DWI and T2WI and between DWI and FLAIR. The quantified values agree closely with manually measured data. We quantified each stage and performed pseudo-color mapping by comparing signal intensities against the previously obtained manual measurements, and we compared the results derived from the quantified data with the physicians' decisions. The examined means and standard deviations of the signal-intensity difference for each infarction stage are as follows: between DWI and T2WI, $197.7{\pm}6.9$ in the hyperacute stage, $110.2{\pm}5.4$ in the acute stage, and $67.8{\pm}7.2$ in the subacute stage; between DWI and FLAIR, $199.8{\pm}7.5$ in the hyperacute stage, $115.3{\pm}8.0$ in the acute stage, and $70.9{\pm}5.8$ in the subacute stage. With these values, the cerebral infarction stage can be quantified and decided objectively. According to this study, DWI is highly accurate for early diagnosis. We classified the stage of infarction onset to analyze the diseased and normal regions in DWI, T2WI, and FLAIR images.
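
The registration and quantification code itself is not given in the abstract; as a minimal sketch of the stage decision it describes, assuming pre-registered DWI and T2WI arrays and a lesion mask (all hypothetical inputs), the reported stage means can serve as illustrative class centers:

```python
import numpy as np

# Reported mean DWI-T2WI signal-intensity differences per infarction stage
# (taken from the abstract above); used here only as illustrative class centers.
STAGE_MEANS = {"hyperacute": 197.7, "acute": 110.2, "subacute": 67.8}

def classify_stage(dwi, t2wi, lesion_mask):
    """Assign an infarction stage from the mean signal-intensity difference
    inside a lesion mask (hypothetical helper, not the paper's algorithm)."""
    diff = dwi.astype(np.float64) - t2wi.astype(np.float64)
    mean_diff = diff[lesion_mask].mean()
    # Pick the stage whose reported mean difference is closest.
    stage = min(STAGE_MEANS, key=lambda s: abs(STAGE_MEANS[s] - mean_diff))
    return stage, mean_diff
```

The same comparison against the DWI-FLAIR means would follow the identical pattern.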

A Study on Abalone Young Shells Counting System using Machine Vision (머신비전을 이용한 전복 치패 계수에 관한 연구)

  • Park, Kyung-min;Ahn, Byeong-Won;Park, Young-San;Bae, Cherl-O
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.23 no.4
    • /
    • pp.415-420
    • /
    • 2017
  • In this paper, an algorithm for object counting on a conveyor system using machine vision is suggested. Object-counting systems based on image processing have been applied in a variety of industries, for purposes such as measuring floating populations and traffic volume. The methods mainly used for object counting involve template matching and machine learning for detection and tracking. However, such methods must run in a short operation time to detect objects on a quickly moving conveyor belt. To meet this requirement, the proposed image-processing algorithm is region based. In this experiment, we counted young abalone shells, which are similar in shape, size, and color, on a conveyor system that moves in one direction. The algorithm records information on the objects in a region of interest in the first frame and compares it with the continuously changing second frame; objects are counted when the information between the first and second images matches. The count was exact when young shells were evenly spaced without overlap, and missed objects were estimated from size information when objects moved with no space between them. The proposed algorithm can be applied to various object-counting controls on conveyor systems.
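
As a rough Python/OpenCV illustration of the region-based counting with size-based correction that the abstract describes (not the authors' implementation; MEAN_SHELL_AREA is an assumed calibration constant):

```python
import cv2

MEAN_SHELL_AREA = 1200  # assumed mean pixel area of one young shell (calibration value)

def count_shells(frame_bgr):
    """Region-based count for one frame: segment, label connected regions,
    and estimate the count of merged blobs from their area."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    count = 0
    for i in range(1, n_labels):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area < 0.3 * MEAN_SHELL_AREA:
            continue  # too small: treat as noise
        # Blobs larger than one shell are assumed to be touching shells;
        # estimate how many from the size information, as the abstract suggests.
        count += max(1, round(area / MEAN_SHELL_AREA))
    return count
```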

Tracking Moving Object using Hierarchical Search Method (계층적 탐색기법을 이용한 이동물체 추적)

  • 방만식;김태식;김영일
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.3
    • /
    • pp.568-576
    • /
    • 2003
  • This paper proposes an algorithm for tracking a moving object in dynamic scenes using a hierarchical search method. The proposed algorithm is based on two main steps: generating an initial model from difference pictures, and tracking the moving object as the scene changes over time. With this procedure, tracking remains stable even when the object has moved far from its position in the previous frame, and it is reliable under the shape variation caused by 3-dimensional (3D) motion and camera sway; consequently, by correcting the position of the moving object, the tracking time is reduced. The partial Hausdorff distance is utilized as an estimation function to determine the similarity between the model and the moving object. To verify the proposed method, extraction and tracking performance were tested on several kinds of moving cars in dynamic scenes. Experimental results showed that the proposed algorithm performs well: matching takes 28.21 iterations on average, and the processing time is 53.21 ms/frame. The root-mean-square (RMS) error between the tracked position and the true position is 1.148. For vehicles that differ in size, color, and shape, the tracking rate is 98.66%; in cases where the vehicle resembles the road background, it is 95.33%; the overall average is 97%.
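
The partial Hausdorff distance used here as the similarity measure has a standard definition: instead of the maximum nearest-neighbor distance from model points to scene points, take the k-th ranked one, so a fraction of occluded or outlier points is tolerated. A minimal NumPy sketch:

```python
import numpy as np

def partial_hausdorff(model_pts, scene_pts, fraction=0.8):
    """Directed partial Hausdorff distance between two (n, 2) point sets:
    the k-th ranked nearest-neighbor distance rather than the maximum."""
    # Pairwise Euclidean distances, shape (len(model_pts), len(scene_pts)).
    d = np.linalg.norm(model_pts[:, None, :] - scene_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)  # each model point's distance to the scene
    k = max(1, int(fraction * len(nearest))) - 1
    return np.sort(nearest)[k]  # ranked value instead of the max
```

The `fraction` parameter (here 0.8) is an assumption; the paper does not state which rank it uses.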

Welfare Interface using Multiple Facial Features Tracking (다중 얼굴 특징 추적을 이용한 복지형 인터페이스)

  • Ju, Jin-Sun;Shin, Yun-Hee;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.1
    • /
    • pp.75-83
    • /
    • 2008
  • We propose a welfare interface using multiple facial-feature tracking, which can efficiently implement various mouse operations. The proposed system consists of five modules: face detection, eye detection, mouth detection, facial-feature tracking, and mouse control. The facial region is first obtained using a skin-color model and connected-component analysis (CCA). Thereafter, the eye regions are localized using a neural network (NN)-based texture classifier that discriminates the facial region into eye and non-eye classes, and the mouth region is localized using an edge detector. Once the eye and mouth regions are localized, they are continuously and accurately tracked by the mean-shift algorithm and template matching, respectively. Based on the tracking results, mouse operations such as movement and clicks are implemented. To assess the validity of the proposed system, it was applied to an interface for a web browser and tested on a group of 25 users. The results show that our system has an accuracy of 99% and processes more than 21 frames/sec on a PC for $320{\times}240$ input images, so it can provide user-friendly, convenient access to a computer in real-time operation.
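
The abstract names mean-shift for the eye regions; a minimal OpenCV sketch of one tracking update, assuming a hue histogram of the initially localized region (the window layout and parameters are assumptions, not the paper's values):

```python
import cv2

def make_hue_hist(frame_bgr, window):
    """Hue histogram of the initially localized region; window is (x, y, w, h)."""
    x, y, w, h = window
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
    return cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

def track_step(frame_bgr, window, roi_hist):
    """One mean-shift update of a tracked facial-feature window."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, window = cv2.meanShift(back_proj, window, criteria)
    return window
```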

Generation of Feature Map for Improving Localization of Mobile Robot based on Stereo Camera (스테레오 카메라 기반 모바일 로봇의 위치 추정 향상을 위한 특징맵 생성)

  • Kim, Eun-Kyeong;Kim, Sung-Shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.1
    • /
    • pp.58-63
    • /
    • 2020
  • This paper proposes a method for improving the localization accuracy of a mobile robot based on a stereo camera. To restore position information from the stereo images, the point on the right image that corresponds to each pixel on the left image must be found. The general method searches for the corresponding point by calculating pixel similarity along the epipolar line. However, this has some disadvantages: every pixel on the epipolar line must be evaluated, and similarity is calculated only from pixel values such as RGB color. To make up for these weak points, this paper implements a method that finds the corresponding point simply from the x-coordinate gap whenever a pair of feature points, extracted and matched by a feature-extraction and feature-matching method, lies on the same y-coordinate in the left and right images. In addition, the proposed method tries to preserve as many feature points as possible by finding corresponding points for unmatched features with the conventional algorithm, because the number of feature points affects localization accuracy. The position of the mobile robot is then compensated based on the 3-D coordinates restored from the feature points and their corresponding points. Experimental results show that the proposed method increases the number of feature points available for compensating the position, and the position of the mobile robot can be compensated better than with feature extraction alone.
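
A sketch of the same-scanline shortcut in Python/OpenCV; the abstract does not name the feature detector, so ORB is an assumption here:

```python
import cv2

def disparities_from_features(left_gray, right_gray, y_tol=1):
    """Match features between rectified left/right images and keep pairs on
    (nearly) the same scanline; the x-coordinate gap is the disparity used
    to restore 3-D coordinates."""
    orb = cv2.ORB_create()
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    disparities = []
    for m in matcher.match(des_l, des_r):
        (xl, yl), (xr, yr) = kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt
        if abs(yl - yr) <= y_tol and xl > xr:  # same scanline, positive disparity
            disparities.append((xl, yl, xl - xr))
    return disparities
```

Depth then follows from the usual relation, depth = focal_length × baseline / disparity, for each surviving pair.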

Development of High-Resolution Fog Detection Algorithm for Daytime by Fusing GK2A/AMI and GK2B/GOCI-II Data (GK2A/AMI와 GK2B/GOCI-II 자료를 융합 활용한 주간 고해상도 안개 탐지 알고리즘 개발)

  • Ha-Yeong Yu;Myoung-Seok Suh
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_3
    • /
    • pp.1779-1790
    • /
    • 2023
  • Satellite-based fog detection algorithms are being developed to detect fog in real time over a wide area, with a focus on the Korean Peninsula (KorPen). The GEO-KOMPSAT-2A/Advanced Meteorological Imager (GK2A/AMI, GK2A) satellite offers excellent temporal resolution (10 min) at a spatial resolution of 500 m, while GEO-KOMPSAT-2B/Geostationary Ocean Color Imager-II (GK2B/GOCI-II, GK2B) provides excellent spatial resolution (250 m) but poor temporal resolution (1 h), with visible channels only. To enhance the fog detection level (10 min, 250 m), we developed a fog detection algorithm (FDA) that fuses GK2A and GK2B, called GK2AB FDA. The GK2AB FDA comprises three main steps. First, the Korea Meteorological Satellite Center's GK2A daytime fog detection algorithm is utilized to detect fog, considering various optical and physical characteristics. In the second step, GK2B data are extrapolated to 10-min intervals by matching them to GK2A pixels at the closest time and location when GK2B observes the KorPen. For reflectance, GK2B normalized visible (NVIS) is corrected using GK2A NVIS of the same time, considering the difference in wavelength range and observation geometry, and GK2B NVIS is then extrapolated at 10-min intervals using the 10-min changes in GK2A NVIS. In the final step, the extrapolated GK2B NVIS, the solar zenith angle, and the outputs of GK2A FDA are used as input data for machine learning (a decision tree) to develop the GK2AB FDA, which detects fog at a 250 m resolution and a 10-min interval based on geographical location. Six and four cases were used for the training and validation of GK2AB FDA, respectively. Quantitative verification of GK2AB FDA utilized ground observations of visibility, wind speed, and relative humidity. Compared to GK2A FDA, GK2AB FDA exhibited a fourfold increase in spatial resolution, resulting in more detailed discrimination between fog and non-fog pixels. In general, irrespective of the validation method, its probability of detection (POD) and Hanssen-Kuiper skill score (KSS) are higher than or similar to those of GK2A FDA, indicating that it detects fog pixels that were previously missed. However, compared to GK2A FDA, GK2AB FDA tends to over-detect fog, with a higher false alarm ratio and bias.
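
The final fusion step is a decision tree over the listed inputs; a minimal scikit-learn sketch, in which the feature/label files and the tree depth are assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training arrays: one row per pixel with the inputs the abstract
# lists (extrapolated GK2B NVIS, solar zenith angle, GK2A FDA output);
# labels mark fog (1) / non-fog (0) from ground truth.
X_train = np.load("gk2ab_features.npy")  # assumed file, shape (n_pixels, 3)
y_train = np.load("gk2ab_labels.npy")    # assumed file

clf = DecisionTreeClassifier(max_depth=8, random_state=0)  # depth is an assumption
clf.fit(X_train, y_train)

X_new = np.load("gk2ab_features_new.npy")  # assumed file
fog_flags = clf.predict(X_new)             # per-pixel fog / non-fog decision
```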

Lane Detection in Complex Environment Using Grid-Based Morphology and Directional Edge-link Pairs (복잡한 환경에서 Grid기반 모폴리지와 방향성 에지 연결을 이용한 차선 검출 기법)

  • Lin, Qing;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.6
    • /
    • pp.786-792
    • /
    • 2010
  • This paper presents a real-time lane detection method that can accurately find lane-mark boundaries in a complex road environment. Unlike many existing methods, which devote the post-processing stage to fitting the lane-mark position among a great number of outliers, the proposed method aims to remove those outliers as early as the feature-extraction stage, so that the search space at the post-processing stage is greatly reduced. To achieve this goal, a grid-based morphology operation is first used to generate regions of interest (ROI) dynamically, in which a directional edge-linking algorithm with directional edge-gap closing links edge pixels into edge-links that lie in valid directions. These directional edge-links are then grouped into pairs by checking for a valid lane-mark width at a given height of the image. Finally, lane-mark colors are checked inside the edge-link pairs in the YUV color space, and lane-mark types are estimated with a Bayesian probability model. Experimental results show that the proposed method is effective in identifying lane-mark edges among heavy clutter edges in a complex road environment, and the whole algorithm achieves an accuracy rate of around 92% at an average speed of 10 ms/frame for images of size $320{\times}240$.
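
As an illustration of the YUV color check on the region between an edge-link pair (the thresholds here are illustrative, not the paper's):

```python
import cv2

def is_lane_color(patch_bgr, u_tol=18, v_tol=18):
    """Check whether a patch between an edge-link pair looks like a lane mark
    in YUV: marks are bright, with chroma near the neutral value 128 for
    white, or U shifted below neutral for yellow (yellow has little blue)."""
    yuv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    bright = y.mean() > 150
    white = abs(u.mean() - 128) < u_tol and abs(v.mean() - 128) < v_tol
    yellow = u.mean() < 128 - u_tol
    return bright and (white or yellow)
```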