• Title/Summary/Keyword: image analysis algorithm

Search Results: 1,480

Real-time Forward Vehicle Detection Method based on Extended Edge (확장 에지 분석을 통한 실시간 전방 차량 검출 기법)

  • Ji, Young-Suk;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korea Society of Computer and Information / v.15 no.10 / pp.35-47 / 2010
  • To complement inaccurate edge information and correctly detect the boundary of a vehicle in an image, this paper presents an extended edge analysis technique. The vehicle is detected using the bottom boundary formed between the vehicle and the road surface, together with the left and right side boundaries of the vehicle. Because various noises deteriorate the horizontal edge that can serve as the bottom boundary, the proposed extended edge analysis extracts the horizontal edge by merging or dividing nearby edges inside a region of interest set beforehand. If the extracted horizontal edge intersects two vertical edges that satisfy the vehicle width condition at the height of the horizontal edge, the horizontal edge is taken as the bottom boundary and the vertical edges as the side boundaries of a vehicle. The proposed algorithm is more efficient than existing methods when the road surface is complex, as demonstrated by experiments on roads with various backgrounds.
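The final boundary test described in the abstract can be sketched as below: a horizontal edge is accepted as a vehicle's bottom boundary only if it intersects two vertical edges whose separation matches the expected vehicle width at that image row. This is an illustrative reconstruction, not the authors' implementation; the width model, names, and tolerance are hypothetical.

```python
def expected_vehicle_width(y, width_at_horizon=20, growth_per_row=0.4):
    """Expected on-image vehicle width (pixels) at row y; grows as y moves
    down the image (closer to the camera). Purely illustrative model."""
    return width_at_horizon + growth_per_row * y

def is_vehicle_boundary(h_edge, v_edges, tolerance=0.25):
    """h_edge: (y, x_start, x_end); v_edges: list of (x, y_top, y_bottom).
    True if two vertical edges cross the horizontal edge and their spacing
    is within `tolerance` of the expected vehicle width at that row."""
    y, xs, xe = h_edge
    # vertical edges that actually intersect the horizontal edge
    crossing = [x for (x, yt, yb) in v_edges if yt <= y <= yb and xs <= x <= xe]
    expected = expected_vehicle_width(y)
    for i in range(len(crossing)):
        for j in range(i + 1, len(crossing)):
            w = abs(crossing[i] - crossing[j])
            if abs(w - expected) <= tolerance * expected:
                return True  # plausible bottom + side boundaries found
    return False
```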

Skin Pigmentation Detection Using Projection Transformed Block Coefficient (투영 변환 블록 계수를 이용한 피부 색소 침착 검출)

  • Liu, Yang;Lee, Suk-Hwan;Kwon, Seong-Geun;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.16 no.9 / pp.1044-1056 / 2013
  • This paper presents an approach for detecting and measuring human skin pigmentation. In the proposed scheme, we extract a skin area with a GMM-EM clustering based skin color model estimated from the statistical analysis of training images, and remove tiny noises through morphology processing. The skin area is decomposed into hemoglobin and melanin components by an independent component analysis (ICA) algorithm. We then calculate the intensities of hemoglobin and melanin using the projection transformed block coefficient and determine the existence of skin pigmentation according to the global and local distributions of the two intensities. Furthermore, we measure the area and density of the detected skin pigmentation. Experimental results verified that our scheme can both detect skin pigmentation and measure its quantity, and that it takes less time because of the location histogram.
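The GMM-EM clustering named above can be illustrated with a toy sketch: a two-component, one-dimensional Gaussian mixture fitted by expectation-maximization. The paper fits the model to color statistics of training images (and follows with ICA, omitted here); everything below is an illustrative reduction, not the authors' code.

```python
import math

def em_two_gaussians(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM.
    Returns (means, sigmas, weights)."""
    mu = [min(data), max(data)]      # crude but effective initialization
    sigma = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        resp = []
        for x in data:
            p = [w[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-((x - mu[k]) ** 2) / (2 * sigma[k] ** 2))
                 for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate means, variances, and mixing weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-3)  # floor to avoid collapse
            w[k] = nk / len(data)
    return mu, sigma, w
```

In the real scheme the mixture is fitted in a color space and a pixel is labeled skin when its likelihood under the skin component dominates.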

A development of a new tongue diagnosis model in the oriental medicine by the color analysis of tongue (혀의 색상 분석에 의한 새로운 한방 설진(舌診) 모델 개발)

  • Choi, Min;Lee, Min-taek;Lee, Kyu-won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.05a / pp.801-804 / 2013
  • We propose a new tongue examination model based on the taste divisions of the tongue. The proposed system consists of image acquisition, region segmentation, color distribution analysis, and abnormality decision. A tongue DB, classified by abnormality, was constructed with tongue images captured from oriental medicine hospital inpatients. We divided the tongue into the four basic taste regions (bitter, sweet, salty, and sour) and performed color distribution analysis on each region under the HSI (Hue Saturation Intensity) color model. To minimize the influence of illumination, only the histograms of the H and S components are used, excluding I. The abnormality of each taste region was determined by comparing the proposed diagnosis model with the diagnosis of a doctor of oriental medicine. We confirmed that 87.5% of the abnormality classifications produced by the proposed algorithm coincide with the doctor's results.
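The illumination-robust color analysis described above can be sketched as follows: convert region pixels to an HSI-like space and keep histograms of the H and S channels only, discarding intensity. This uses Python's HSV conversion as a stand-in for HSI; the bin count and names are illustrative choices, not from the paper.

```python
import colorsys

def hs_histograms(pixels, bins=16):
    """pixels: list of (r, g, b) tuples in 0..255.
    Returns (h_hist, s_hist), each a list of `bins` relative frequencies;
    the intensity/value channel is deliberately ignored."""
    h_hist = [0] * bins
    s_hist = [0] * bins
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        h_hist[min(int(h * bins), bins - 1)] += 1
        s_hist[min(int(s * bins), bins - 1)] += 1
    n = len(pixels)  # normalize so histograms are comparable across regions
    return [c / n for c in h_hist], [c / n for c in s_hist]
```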


Time series Analysis of Land Cover Change and Surface Temperature in Tuul-Basin, Mongolia Using Landsat Satellite Image (Landsat 위성영상을 이용한 몽골 Tuul-Basin 지역의 토지피복변화 및 지표온도 시계열적 분석)

  • Erdenesumbee, Suld;Cho, Gi Sung
    • Journal of Korean Society for Geospatial Information Science / v.24 no.3 / pp.39-47 / 2016
  • This study analyzes the land cover change and land degradation of the Tuul-Basin in Mongolia using Landsat satellite images taken in the summers of 1990, 2001, and 2011, when green plant growth is at its peak. For the time series analysis of land cover change in the basin, the NDVI (Normalized Difference Vegetation Index), SAVI (Soil-Adjusted Vegetation Index), and LST (Land Surface Temperature) algorithms were applied. The results show a decrease of forest and green area and an increase of dry and fallow land in the study area, which is considered a trend toward land degradation. In addition, there was a high correlation between LST and the vegetation indices: the land cover change and the vitality of vegetation in the study area appear to be closely related to the surface temperature.
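The vegetation indices named above have standard definitions, sketched below independently of the paper's processing chain. Band values are assumed to be already converted to surface reflectance; L=0.5 is the conventional soil brightness correction factor for SAVI.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L is the soil brightness
    correction factor (0.5 for intermediate vegetation cover)."""
    return (nir - red) / (nir + red + L) * (1 + L)
```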

Evaluation of Image Noise and Radiation Dose Analysis In Brain CT Using ASIR(Adaptive Statistical Iterative Reconstruction) (ASIR를 이용한 두부 CT의 영상 잡음 평가 및 피폭선량 분석)

  • Jang, Hyon-Chol;Kim, Kyeong-Keun;Cho, Jae-Hwan;Seo, Jeong-Min;Lee, Haeng-Ki
    • Journal of the Korean Society of Radiology / v.6 no.5 / pp.357-363 / 2012
  • The purpose of this study was to evaluate image noise and image quality and to analyze dose reduction in head CT examinations using the adaptive statistical iterative reconstruction (ASIR) algorithm. Head CT examinations were divided into a group without ASIR (group A) and a group with ASIR 50% applied (group B). In the phantom study, the measured average CT noise of group B was reduced by 46.9%, 48.2%, 43.2%, and 47.9% compared with group A at the central position (A) and the peripheral positions (B, C, D). For the displayed-image quality evaluation, CT numbers were measured by a quantitative analytical method and the noise was analyzed. The image noise differed statistically significantly between groups A and B, being higher in group A (31.87 HU, 31.78 HU, 26.6 HU, 30.42 HU; P<0.05). In the qualitative evaluation using the head clinical image evaluation chart (out of 80 points), the observer scores for group A were 73.17 and 74.2, and those for group B were 71.77 and 72.47; the difference was not statistically significant (P>0.05), and no image was inappropriate for diagnosis. As for the exposure dose, applying ASIR 50% reduced the radiation dose by 47.6% without degradation of image quality. In conclusion, if ASIR is applied in clinical practice, examinations can be performed at a considerably lower dose, which is a positive factor for the examiner to consider.
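CT image noise is conventionally measured as the standard deviation of CT numbers (HU) within a uniform region of interest, and the percentage reductions quoted above compare that statistic between the two groups. A minimal sketch of both computations (the ROI values are hypothetical, not the study's data):

```python
import statistics

def ct_noise(roi_hu):
    """Image noise = standard deviation of CT numbers in a uniform ROI."""
    return statistics.pstdev(roi_hu)

def noise_reduction_pct(noise_without, noise_with):
    """Percent noise reduction when a technique (e.g. ASIR) is applied."""
    return (noise_without - noise_with) / noise_without * 100
```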

The Consideration for Optimum 3D Seismic Processing Procedures in Block II, Northern Part of South Yellow Sea Basin (대륙붕 2광구 서해분지 북부지역의 3D전산처리 최적화 방안시 고려점)

  • Ko, Seung-Won;Shin, Kook-Sun;Jung, Hyun-Young
    • The Korean Journal of Petroleum Geology / v.11 no.1 s.12 / pp.9-17 / 2005
  • In the main target area of Block II, large-scale faults occur below the unconformity developed at around 1 km depth. The contrast of seismic velocity around the unconformity is generally so large that strong multiples and radical velocity variation deteriorate the quality of the migrated section through serious distortion. More than 15 kinds of data processing techniques were applied to improve the image resolution of the structures formed by this active crustal activity. As a first step, bad and noisy traces were edited on the common shot gathers to remove acquisition problems arising from unfavorable conditions such as climatic change during data acquisition. Correction of amplitude attenuation caused by spherical divergence and inelastic attenuation was also applied. A mild F/K filter was used to attenuate coherent noise such as guided waves and side scatters. Predictive deconvolution was applied before stacking to remove peg-leg multiples and water reverberations. Velocity analysis was conducted at 2 km intervals to derive the migration velocity, and was iterated to obtain a high-fidelity image. The strum noise caused by the streamer was completely removed by applying predictive deconvolution in the time-space and τ-P domains. Residual multiples caused by thin layers or the water bottom were eliminated through a parabolic Radon transform demultiple process. A curved-ray Kirchhoff-style migration algorithm was applied to the stacked data, using the velocity obtained after several iterations of MVA (migration velocity analysis) instead of the DMO velocity. Through these various tests, optimum seismic processing parameters can be obtained for structural and stratigraphic interpretation in Block II, Yellow Sea Basin.
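One of the steps above, correction of spherical-divergence amplitude loss, is conventionally done by scaling each time sample by a time- and velocity-dependent gain, often taken proportional to t·v(t)². The sketch below uses a constant velocity and normalizes the gain at a reference time; it illustrates the idea only and is not the processing flow of the paper.

```python
def spherical_divergence_gain(trace, dt, v=1500.0, v0=1500.0, t0=0.001):
    """trace: list of amplitudes sampled every dt seconds.
    Applies gain g(t) = (t * v**2) / (t0 * v0**2), normalized so the
    sample at t0 is unchanged. Constant velocity is a simplification;
    in practice v(t) comes from the velocity analysis."""
    out = []
    for i, a in enumerate(trace):
        t = max(i * dt, t0)  # avoid zero gain at t = 0
        out.append(a * (t * v * v) / (t0 * v0 * v0))
    return out
```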


Marginal Bone Resorption Analysis of Dental Implant Patients by Applying Pattern Recognition Algorithm (패턴인식 알고리즘을 적용한 임플란트 주변골 흡수 분석)

  • Jung, Min Gi;Kim, Soung Min;Kim, Myung Joo;Lee, Jong Ho;Myoung, Hoon;Kim, Myung Jin
    • Maxillofacial Plastic and Reconstructive Surgery / v.35 no.3 / pp.167-173 / 2013
  • Purpose: The aim of this study is to analyze series of panoramic radiographs of implant patients with a system that measures peri-implant crestal bone loss according to the time elapsed from fixture installation to more than three years. Methods: Ten patients with 45 installed implant fixtures were chosen, each having a series of panoramic radiographs covering the period to be analyzed. The implant of interest was selected by clicking on it in the image shown on the monitor of the implemented pattern recognition system. The system then recognized the x, y coordinates of the implant and the peri-implant alveolar crest, and calculated the distance between the approximated line of the implant fixture and the alveolar crest. By applying pattern recognition to the periodic panoramic radiographs, we obtained the crestal bone depths and compared them with the results of preceding articles on peri-implant marginal bone loss. In a regression analysis of peri-implant crestal bone loss on the periodically filmed panoramic radiographs, the logarithmic approximation had the highest $R^2$ value, with the equation $y=0.245\log x{\pm}0.42$, $R^2=0.53$ (unit: month (x), mm (y)). Results: A panoramic radiograph covers a wider scope than a periapical radiograph at the same resolution, so there was not enough information in local areas of the radiograph. The anterior portion of many radiographs was out of the focal trough and too blurred for accurate recognition by the system, and many implants were overlapped with adjacent structures, in which case the alveolar crest was impossible to find. Conclusion: Considering these aims and sources of error, we expect better results from an analysis of periapical radiographs than panoramic radiographs. With additional functions implemented, we expect high extensibility of the pattern recognition system as a diagnostic tool to evaluate implant-bone integration and to calculate the length from the fixture to the inferior alveolar nerve and to the base of the maxillary sinus.
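The logarithmic approximation reported above (y = 0.245·log x with R² = 0.53) is obtainable by ordinary least squares after substituting u = log(x), which makes the model linear in its parameters. A minimal sketch of that fit, with made-up sample data rather than the study's measurements:

```python
import math

def fit_log(xs, ys):
    """Fit y = a*ln(x) + b by least squares; returns (a, b)."""
    us = [math.log(x) for x in xs]  # substitution linearizes the model
    n = len(us)
    mu, my = sum(us) / n, sum(ys) / n
    a = (sum((u - mu) * (y - my) for u, y in zip(us, ys))
         / sum((u - mu) ** 2 for u in us))
    return a, my - a * mu
```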

A Study on Orthogonal Image Detection Precision Improvement Using Data of Dead Pine Trees Extracted by Period Based on U-Net model (U-Net 모델에 기반한 기간별 추출 소나무 고사목 데이터를 이용한 정사영상 탐지 정밀도 향상 연구)

  • Kim, Sung Hun;Kwon, Ki Wook;Kim, Jun Hyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.4 / pp.251-260 / 2022
  • Although the number of trees affected by pine wilt disease is decreasing, the affected area is expanding across the country. Recently, with the development of deep learning technology, it has been rapidly applied to studies detecting pine wilt nematode damage and dead trees. The purpose of this study is to efficiently acquire deep learning training data and accurate true values to further improve the detection ability of U-Net models through learning. To achieve this, a filtering method applying a step-by-step deep learning algorithm minimizes ambiguity in the deep learning model's analysis basis, enabling efficient analysis and judgment. As a result, in detecting dead pine trees caused by the wilt nematode with the U-Net algorithm, the U-Net model trained on the true values analyzed by period showed a recall 0.5%p lower than the model trained on the previously provided true values, but a precision 7.6%p higher and an F-1 score 4.1%p higher. In the future, it is judged that the precision of wilt detection can be increased by applying various filtering techniques, and that a drone surveillance method using drone orthographic images and artificial intelligence can be used in pine wilt nematode disaster prevention projects.
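The recall, precision, and F-1 comparisons above use the standard definitions, sketched here for binary labels (dead tree vs. background); treating each pixel or detection as one label is an illustrative choice, not the paper's exact evaluation protocol.

```python
def precision_recall_f1(pred, truth):
    """pred, truth: equal-length sequences of 0/1 labels."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```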

Flood Disaster Prediction and Prevention through Hybrid BigData Analysis (하이브리드 빅데이터 분석을 통한 홍수 재해 예측 및 예방)

  • Ki-Yeol Eom;Jai-Hyun Lee
    • The Journal of Bigdata / v.8 no.1 / pp.99-109 / 2023
  • Recently, not only Korea but the whole world has been experiencing constant disasters such as typhoons, wildfires, and heavy rains. The property damage caused by typhoons and heavy rain in South Korea alone has exceeded 1 trillion won. These disasters have resulted in significant loss of life and property damage, and the recovery process takes a considerable amount of time. In addition, the government's contingency funds are insufficient for the current situation. To prevent and effectively respond to these issues, it is necessary to collect and analyze accurate data in real time. However, delays and data loss can occur depending on the environment where the sensors are located, the status of the communication network, and the receiving servers. In this paper, we propose a two-stage hybrid situation analysis and prediction algorithm that can analyze accurately even under such communication network conditions. In the first stage, data on river and stream levels are collected, filtered, and refined from diverse sensors of different types and stored in a big data store, and an AI rule-based inference algorithm is applied to analyze the crisis alert level. If the rainfall exceeds a certain threshold but remains below the level of clear concern, the second stage of deep learning image analysis is performed to determine the final crisis alert level.
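The two-stage decision described above can be sketched as a rule-based first stage on a rainfall threshold that escalates to a second (image-analysis) stage only in the ambiguous band. The thresholds, names, and the second-stage stub below are hypothetical, not the paper's values.

```python
def crisis_alert(rainfall_mm, image_stage=None):
    """Return an alert level string. `image_stage` is a callable standing
    in for the deep-learning image analysis; it is consulted only when the
    rule-based first stage cannot decide on its own."""
    RAIN_INTEREST, RAIN_SEVERE = 30.0, 80.0  # illustrative thresholds
    if rainfall_mm < RAIN_INTEREST:
        return "normal"
    if rainfall_mm >= RAIN_SEVERE:
        return "severe"
    # threshold exceeded but below the clearly severe level:
    # defer to the second-stage image analysis if available
    return image_stage() if image_stage else "watch"
```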

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a huge victory against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has drawn attention as the core artificial intelligence technique behind the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is hard to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognizing high-dimensional data but also for binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account.
To evaluate the applicability of deep learning algorithms to binary classification, we compared models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, with MLP models, a traditional artificial neural network. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, since it shows how well they classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values around a specific value and recognizes their features, but because business data fields are usually independent, the distance between fields does not matter. In this experiment, we therefore set the CNN filter size to the number of fields so that the whole characteristics of the data are learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed with respect to the first in order to reduce the influence of field position. For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout.
From the experiment, we obtained several findings. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because CNNs performed well not only in the fields where their effectiveness has already been proven but also in binary classification problems to which they have rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. These results confirm that some deep learning algorithms can be applied to solve business binary classification problems.
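The dropout setting described above (each hidden neuron dropped with probability 0.5 during training) can be sketched as a random mask with inverted scaling, so that expected activations match test time. This is purely illustrative, not the paper's framework code.

```python
import random

def apply_dropout(activations, p=0.5, rng=random):
    """Inverted dropout: zero each unit with probability p and scale the
    survivors by 1/(1-p), so the expected activation is unchanged."""
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]
```

At inference time the function is simply not applied; the inverted scaling during training is what makes that valid.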