• Title/Summary/Keyword: multi layer perceptron

Search Results: 436

SVM Classifier for the Detection of Ventricular Fibrillation (SVM 분류기를 통한 심실세동 검출)

  • Song, Mi-Hye;Lee, Jeon;Cho, Sung-Pil;Lee, Kyoung-Joung
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.42 no.5 s.305
    • /
    • pp.27-34
    • /
    • 2005
  • Ventricular fibrillation (VF) is generally caused by chaotic behavior of electrical propagation in the heart and may result in sudden cardiac death. In this study, we proposed a ventricular fibrillation detection algorithm based on a support vector machine classifier, which could offer reduced learning costs as well as good classification performance. Before the extraction of input features, the raw ECG signal was applied to preprocessing procedures such as wavelet transform based bandpass filtering, R peak detection, and segment assignment for feature extraction. We selected input features, some of which are related to rhythm information and others to wavelet coefficients that describe the morphology of ventricular fibrillation well. The parameters for the SVM classifier, C and ${\alpha}$, were chosen as 10 and 1, respectively, by trial and error experiments. The average performance for normal sinus rhythm, ventricular tachycardia, and VF was 98.39%, 96.92%, and 99.88%, respectively. When the VF detection performance of the SVM classifier was compared to that of multi-layer perceptron and fuzzy inference methods, it showed similar or higher values. Consequently, the proposed input features and SVM classifier would be among the most useful algorithms for VF detection.
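
A minimal sketch of the kind of SVM setup this abstract describes, assuming scikit-learn. The feature matrix and labels are hypothetical placeholders for the rhythm and wavelet-coefficient features per ECG segment, and mapping the paper's ${\alpha}$ onto the RBF kernel's `gamma` is an assumption.

```python
# Hedged sketch: SVM classifier for ECG segment classes (NSR / VT / VF),
# assuming scikit-learn. X and y are hypothetical placeholders for the
# rhythm + wavelet-coefficient features and segment labels from the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.random.rand(300, 12)                           # placeholder feature vectors per ECG segment
y = np.random.choice(["NSR", "VT", "VF"], size=300)   # placeholder rhythm labels

# C = 10 as in the abstract; gamma = 1 assumes the paper's alpha plays this role.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma=1))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.3f" % scores.mean())
```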

Improvements of an English Pronunciation Dictionary Generator Using DP-based Lexicon Pre-processing and Context-dependent Grapheme-to-phoneme MLP (DP 알고리즘에 의한 발음사전 전처리와 문맥종속 자소별 MLP를 이용한 영어 발음사전 생성기의 개선)

  • 김회린;문광식;이영직;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.5
    • /
    • pp.21-27
    • /
    • 1999
  • In this paper, we propose an improved MLP-based English pronunciation dictionary generator for the variable vocabulary word recognizer. The variable vocabulary word recognizer can process any words specified in a Korean word lexicon that is dynamically determined according to the current recognition task. To extend the system to tasks involving English words, it is necessary to build a pronunciation dictionary generator that can process words not included in a predefined lexicon, such as proper nouns. To build the English pronunciation dictionary generator, we use a context-dependent grapheme-to-phoneme multi-layer perceptron (MLP) architecture for each grapheme. To train each MLP, it is necessary to obtain grapheme-to-phoneme training data from a general pronunciation dictionary. To automate this process, we use a dynamic programming (DP) algorithm with suitable distance metrics. For training and testing the grapheme-to-phoneme MLPs, we use a general English pronunciation dictionary with about 110 thousand words. With 26 MLPs, each having 30 to 50 hidden nodes, and an exception grapheme lexicon, we obtained a word accuracy of 72.8% for the 110 thousand words, superior to a rule-based method showing a word accuracy of 24.0%.
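
A hedged sketch of a context-dependent grapheme-to-phoneme MLP for a single grapheme, assuming scikit-learn; the aligned dictionary entries, context width, and hidden-layer size (40 nodes, within the paper's 30 to 50 range) are illustrative stand-ins for the DP-aligned training data described above.

```python
# Hedged sketch of a per-grapheme, context-dependent grapheme-to-phoneme MLP,
# assuming scikit-learn. The aligned (word, phoneme-per-letter) pairs below are
# hypothetical stand-ins for DP-aligned dictionary entries.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OneHotEncoder

aligned = [("cat", ["k", "ae", "t"]), ("wall", ["w", "ao", "l", "l"])]  # placeholder data
CONTEXT = 2  # letters of left/right context fed to each grapheme's MLP

def windows(word, i, context=CONTEXT):
    padded = "_" * context + word + "_" * context
    return list(padded[i:i + 2 * context + 1])

# Collect training rows for one grapheme, e.g. 'a'; the real system trains 26 MLPs.
rows, targets = [], []
for word, phones in aligned:
    for i, (g, p) in enumerate(zip(word, phones)):
        if g == "a":
            rows.append(windows(word, i))
            targets.append(p)

enc = OneHotEncoder(handle_unknown="ignore")
X = enc.fit_transform(rows)
mlp = MLPClassifier(hidden_layer_sizes=(40,), max_iter=2000).fit(X, targets)
print(mlp.predict(enc.transform([windows("bat", 1)])))  # phoneme guess for 'a' in "bat"
```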


Face Detection Using A Selectively Attentional Hough Transform and Neural Network (선택적 주의집중 Hough 변환과 신경망을 이용한 얼굴 검출)

  • Choi, Il;Seo, Jung-Ik;Chien, Sung-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.4
    • /
    • pp.93-101
    • /
    • 2004
  • A face boundary can be approximated by an ellipse with five parameters. This property allows an ellipse detection algorithm to be adapted to detecting faces. However, constructing a huge five-dimensional parameter space for a Hough transform is quite impractical. Accordingly, we propose a selectively attentional Hough transform method for detecting faces from a symmetric contour in an image. The idea is based on the use of a constant aspect ratio for a face, gradient information, and scan-line-based orientation decomposition, thereby allowing the five-dimensional problem to be decomposed into a two-dimensional one to compute a center with a specific orientation and a one-dimensional one to estimate the short axis. In addition, a two-point selection constraint using geometric and gradient information is employed to increase the speed and cope with a cluttered background. After detecting candidate face regions using the proposed Hough transform, a multi-layer perceptron verifier is adopted to reject false positives. The proposed method was found to be relatively fast and promising.
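
A loose sketch of the center-voting idea behind reducing the five-dimensional ellipse search to a two-dimensional center accumulator, assuming OpenCV and NumPy; the paper's scan-line orientation decomposition, short-axis estimation, and MLP verifier are omitted, and the input image path is hypothetical.

```python
# Loose sketch: pairs of edge points with roughly opposite gradient directions
# vote for their midpoint, which piles up at the center of a symmetric contour.
import cv2
import numpy as np

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
ang = np.arctan2(gy, gx)
edges = cv2.Canny(img, 80, 160)

ys, xs = np.nonzero(edges)
idx = np.random.choice(len(xs), size=min(len(xs), 800), replace=False)
pts = np.stack([ys[idx], xs[idx]], axis=1)

acc = np.zeros(img.shape, dtype=np.int32)             # 2-D center accumulator
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        (y1, x1), (y2, x2) = pts[i], pts[j]
        diff = abs(((ang[y1, x1] - ang[y2, x2]) + np.pi) % (2 * np.pi) - np.pi)
        if diff > np.pi - 0.3:                        # roughly anti-parallel gradients
            acc[(y1 + y2) // 2, (x1 + x2) // 2] += 1  # vote for the midpoint

cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
print("strongest face-center candidate:", (int(cx), int(cy)))
```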

Comparison of Feature Performance in Off-line Handwritten Korean Alphabet Recognition (오프라인 필기체 한글 자소 인식에 있어서 특징성능의 비교)

  • Ko, Tae-Seog;Kim, Jong-Ryeol;Chung, Kyu-Sik
    • Korean Journal of Cognitive Science
    • /
    • v.7 no.1
    • /
    • pp.57-74
    • /
    • 1996
  • This paper presents a comparison of the recognition performance of features used in recent handwritten Korean character recognition. This research aims at providing a basis for feature selection in order to improve not only the recognition rate but also the efficiency of the recognition system. For the comparison of feature performance, we analyzed the characteristics of the features and classified them into three types: global (image transformation) features, statistical features, and local/topological features. For each type, we selected four or five features that seem most suitable for representing the characteristics of the Korean alphabet, and performed recognition experiments for the first consonant, horizontal vowel, and vertical vowel of a Korean character, respectively. The classifier used in our experiments is a multi-layered perceptron with one hidden layer, trained with the backpropagation algorithm. The training and test data in the experiments are taken from 30 sets of PE92. Experimental results show that 1) local/topological features outperform the other two types of features in terms of recognition rate, and 2) mesh and projection features among the statistical features, Walsh and DCT features among the global features, and gradient and concavity features among the local/topological features outperform the others in each type, respectively.
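
A hedged sketch of one of the compared statistical features, a mesh (zoning) density feature, fed to a one-hidden-layer MLP, assuming scikit-learn; the glyph images and labels are random placeholders standing in for PE92 data.

```python
# Hedged sketch: mesh (zoning) feature + one-hidden-layer MLP classifier,
# assuming scikit-learn. Images and labels are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

def mesh_feature(img, grid=8):
    """Black-pixel density in each cell of a grid x grid zoning of the image."""
    h, w = img.shape
    feat = np.zeros((grid, grid))
    for r in range(grid):
        for c in range(grid):
            cell = img[r * h // grid:(r + 1) * h // grid,
                       c * w // grid:(c + 1) * w // grid]
            feat[r, c] = cell.mean()
    return feat.ravel()

imgs = np.random.randint(0, 2, size=(200, 64, 64))      # placeholder binary glyph images
labels = np.random.choice(list("ㄱㄴㄷㄹ"), size=200)      # placeholder first-consonant labels

X = np.array([mesh_feature(im) for im in imgs])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```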


Assessing the Impact of Climate Change on Water Resources: Waimea Plains, New Zealand Case Example

  • Zemansky, Gil;Hong, Yoon-Seeok Timothy;Rose, Jennifer;Song, Sung-Ho;Thomas, Joseph
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2011.05a
    • /
    • pp.18-18
    • /
    • 2011
  • Climate change is impacting and will increasingly impact both the quantity and quality of the world's water resources in a variety of ways. In some areas warming climate results in increased rainfall, surface runoff, and groundwater recharge while in others there may be declines in all of these. Water quality is described by a number of variables. Some are directly impacted by climate change. Temperature is an obvious example. Notably, increased atmospheric concentrations of $CO_2$ triggering climate change increase the $CO_2$ dissolving into water. This has manifold consequences including decreased pH and increased alkalinity, with resultant increases in dissolved concentrations of the minerals in geologic materials contacted by such water. Climate change is also expected to increase the number and intensity of extreme climate events, with related hydrologic changes. A simple framework has been developed in New Zealand for assessing and predicting climate change impacts on water resources. Assessment is largely based on trend analysis of historic data using the non-parametric Mann-Kendall method. Trend analysis requires long-term, regular monitoring data for both climate and hydrologic variables. Data quality is of primary importance and data gaps must be avoided. Quantitative prediction of climate change impacts on the quantity of water resources can be accomplished by computer modelling. This requires the serial coupling of various models. For example, regional downscaling of results from a world-wide general circulation model (GCM) can be used to forecast temperatures and precipitation for various emissions scenarios in specific catchments. Mechanistic or artificial intelligence modelling can then be used with these inputs to simulate climate change impacts over time, such as changes in streamflow, groundwater-surface water interactions, and changes in groundwater levels. The Waimea Plains catchment in New Zealand was selected for a test application of these assessment and prediction methods. This catchment is predicted to undergo relatively minor impacts due to climate change. All available climate and hydrologic databases were obtained and analyzed. These included climate (temperature, precipitation, solar radiation and sunshine hours, evapotranspiration, humidity, and cloud cover) and hydrologic (streamflow and quality and groundwater levels and quality) records. Results varied but there were indications of atmospheric temperature increasing, rainfall decreasing, streamflow decreasing, and groundwater level decreasing trends. Artificial intelligence modelling was applied to predict water usage, rainfall recharge of groundwater, and upstream flow for two regionally downscaled climate change scenarios (A1B and A2). The AI methods used were multi-layer perceptron (MLP) with extended Kalman filtering (EKF), genetic programming (GP), and a dynamic neuro-fuzzy local modelling system (DNFLMS), respectively. These were then used as inputs to a mechanistic groundwater flow-surface water interaction model (MODFLOW). A DNFLMS was also used to simulate downstream flow and groundwater levels for comparison with MODFLOW outputs. MODFLOW and DNFLMS outputs were consistent. They indicated declines in streamflow on the order of 21 to 23% for MODFLOW and DNFLMS (A1B scenario), respectively, and 27% in both cases for the A2 scenario under severe drought conditions by 2058-2059, with little if any change in groundwater levels.
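
A minimal sketch of the Mann-Kendall trend test mentioned above, assuming SciPy and omitting the tie correction; the input series is a random placeholder for a long-term climate or streamflow record.

```python
# Minimal sketch of the non-parametric Mann-Kendall trend test (no tie correction),
# assuming SciPy; `series` is a hypothetical placeholder for an annual record.
import numpy as np
from scipy.stats import norm

def mann_kendall(series):
    x = np.asarray(series, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0          # variance of S, no tie correction
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                    # two-sided p-value
    return s, z, p

series = np.cumsum(np.random.randn(40)) + 0.1 * np.arange(40)  # placeholder record
print(mann_kendall(series))
```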


Directional Feature Extraction of Handwritten Numerals using Local min/max Operations (Local min/max 연산을 이용한 필기체 숫자의 방향특징 추출)

  • Jung, Soon-Won;Park, Joong-Jo
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.10 no.1
    • /
    • pp.7-12
    • /
    • 2009
  • In this paper, we propose a directional feature extraction method for off-line handwritten numerals using morphological operations. Directional features are obtained from four directional line images, each of which contains the horizontal, vertical, right-diagonal, or left-diagonal lines of the whole numeral. A conventional method for extracting directional features uses Kirsch masks, which generate edge-shaped double-line images for each direction, whereas our method uses directional erosion operations and generates single-line images for each direction. Applying these directional erosion operations to the numeral image requires preprocessing steps such as thinning and dilation, but the resulting directional lines are more similar to the numeral lines themselves. The four [$4{\times}4$] directional features of a numeral are obtained from the four directional line images through a zoning method. To obtain higher recognition rates for handwritten numerals, we use a multiple feature comprised of our proposed feature and the conventional Kirsch directional feature and concavity feature. For the recognition test with the given features, we use a multi-layer perceptron neural network classifier trained with the backpropagation algorithm. Through experiments with the CENPARMI numeral database of Concordia University, we achieved a recognition rate of 98.35%.
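
A hedged sketch of extracting four directional line images by erosion with line-shaped structuring elements, followed by $4{\times}4$ zoning, assuming OpenCV and NumPy; the exact kernels, preprocessing, and input file are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch: directional erosion with line structuring elements + 4x4 zoning,
# assuming OpenCV + NumPy; "digit.png" is a hypothetical numeral image.
import cv2
import numpy as np

img = cv2.imread("digit.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
binary = cv2.dilate(binary, np.ones((3, 3), np.uint8))   # thicken strokes before erosion

kernels = {
    "horizontal":     np.array([[1, 1, 1]], np.uint8),
    "vertical":       np.array([[1], [1], [1]], np.uint8),
    "right_diagonal": np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], np.uint8),
    "left_diagonal":  np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], np.uint8),
}

features = []
for name, k in kernels.items():
    line_img = cv2.erode(binary, k)                      # keeps strokes aligned with k
    h, w = line_img.shape
    zones = [line_img[r*h//4:(r+1)*h//4, c*w//4:(c+1)*w//4].mean()
             for r in range(4) for c in range(4)]        # 4 x 4 zoning per direction
    features.extend(zones)

print(len(features), "directional feature values")       # 4 directions x 16 zones = 64
```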


Research Trend Analysis for Fault Detection Methods Using Machine Learning (머신러닝을 사용한 단층 탐지 기술 연구 동향 분석)

  • Bae, Wooram;Ha, Wansoo
    • Economic and Environmental Geology
    • /
    • v.53 no.4
    • /
    • pp.479-489
    • /
    • 2020
  • A fault is a geological structure that can act as a migration path or a cap rock for hydrocarbons, such as oil and gas, formed from source rock. Faults are among the main targets of seismic exploration for finding reservoirs in which hydrocarbons have accumulated. However, conventional fault detection methods that use lateral discontinuity in seismic data, such as semblance, coherence, variance, gradient magnitude, and fault likelihood, have the problem that professional interpreters must invest a great deal of time and computational cost. Therefore, many researchers are conducting studies to save the computational cost and time required for fault interpretation, and machine learning technologies have recently attracted attention. Among the various machine learning technologies, many researchers are conducting fault interpretation studies using support vector machine, multi-layer perceptron, deep neural network, and convolutional neural network algorithms. In particular, researchers use not only their own convolutional networks but also networks proven in image processing to predict fault locations and fault information such as strike and dip. In this paper, by investigating and analyzing these studies, we found that convolutional neural networks based on the U-Net architecture from image processing are the most effective for fault detection and interpretation. Further studies can expect better results for fault detection and interpretation by using convolutional neural networks along with transfer learning and data augmentation.
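
A compact U-Net-style network of the kind the survey identifies as most effective, written in PyTorch as an illustrative two-level sketch rather than any specific architecture from the surveyed papers.

```python
# Illustrative two-level U-Net-style network for per-pixel fault probability,
# assuming PyTorch; the patch size and channel counts are arbitrary choices.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)          # 16 skip + 16 upsampled channels
        self.out = nn.Conv2d(16, 1, 1)          # per-pixel fault logit

    def forward(self, x):
        e1 = self.enc1(x)                       # skip connection source
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d1)

seismic = torch.randn(1, 1, 128, 128)           # placeholder seismic amplitude patch
logits = TinyUNet()(seismic)
print(logits.shape)                             # torch.Size([1, 1, 128, 128])
```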

A Feasibility Study on Using Neural Network for Dose Calculation in Radiation Treatment (방사선 치료 선량 계산을 위한 신경회로망의 적용 타당성)

  • Lee, Sang Kyung;Kim, Yong Nam;Kim, Soo Kon
    • Journal of Radiation Protection and Research
    • /
    • v.40 no.1
    • /
    • pp.55-64
    • /
    • 2015
  • Dose calculations, a crucial requirement for radiotherapy treatment planning systems, must be both accurate and fast. Conventional radiotherapy treatment planning dose algorithms are rapid but lack precision, while Monte Carlo methods are time consuming but the most accurate. A new combined system, in which Monte Carlo methods calculate part of the domain of interest and the rest is calculated by a neural network, can compute the dose distribution rapidly and accurately. A preliminary study showed that neural networks can map functions containing discontinuities and inflection points, which dose distributions in inhomogeneous media also have. Performance results were compared between the scaled conjugate gradient algorithm and the Levenberg-Marquardt algorithm, which were used for training the neural network with different numbers of neurons. Finally, the dose distributions of a homogeneous phantom calculated by a commercial treatment planning system were used as training data for the neural network. In the case of the homogeneous phantom, the mean squared error of the percent depth dose was 0.00214. Further work is planned to develop the neural network model for 3-dimensional dose calculations in homogeneous and inhomogeneous phantoms.
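
A hedged sketch of the preliminary experiment described above, fitting a one-dimensional function with a discontinuity using a small neural network; scikit-learn's MLPRegressor with L-BFGS stands in for the scaled conjugate gradient and Levenberg-Marquardt training compared in the paper, which is an assumption.

```python
# Hedged sketch: a small neural network fitted to a 1-D function with a
# discontinuity at x = 1, assuming scikit-learn; the function is illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

x = np.linspace(0, 2, 400).reshape(-1, 1)
y = np.where(x[:, 0] < 1.0, np.sin(3 * x[:, 0]), 0.3 * x[:, 0] + 1.0)  # step at x = 1

net = MLPRegressor(hidden_layer_sizes=(30, 30), solver="lbfgs", max_iter=5000)
net.fit(x, y)
print("mean squared error: %.5f" % np.mean((net.predict(x) - y) ** 2))
```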

Real-Time Vehicle License Plate Recognition System Using Adaptive Heuristic Segmentation Algorithm (적응 휴리스틱 분할 알고리즘을 이용한 실시간 차량 번호판 인식 시스템)

  • Jin, Moon Yong;Park, Jong Bin;Lee, Dong Suk;Park, Dong Sun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.9
    • /
    • pp.361-368
    • /
    • 2014
  • The license plate recognition (LPR) system has been developed for efficient control of complex traffic environments and is currently used in many places. However, because of lighting, noise, background changes, environmental changes, and damaged plates, it works only in limited environments, so real-time use is difficult. This paper presents a heuristic segmentation algorithm that is robust to noise and illumination changes and introduces a real-time license plate recognition system using it. In the first step, we detect the plate using Haar-like features and AdaBoost. This method enables rapid detection through the integral image and a cascade structure. In the second step, we determine the type of license plate using adaptive histogram equalization and bilateral filtering for denoising, and segment characters accurately based on adaptive thresholding, pixel projection, and prior knowledge. The last step is character recognition, which uses histogram of oriented gradients (HOG) features with a multi-layer perceptron (MLP) as the number classifier and a support vector machine (SVM) as the Korean character classifier, respectively. The experimental results show a license plate detection rate of 94.29% and a license plate false alarm rate of 2.94%. In character segmentation, the character hit rate is 97.23% and the character false alarm rate is 1.37%. In character recognition, the average character recognition rate is 98.38%. The total average running time of the proposed method is 140 ms, making an efficient and robust real-time system possible.
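
A hedged sketch of the final recognition stage, HOG features feeding an MLP for digits and an SVM for Korean characters, assuming scikit-image and scikit-learn; the segmented glyph arrays and labels are hypothetical placeholders.

```python
# Hedged sketch: HOG features + MLP (digits) and SVM (Korean characters),
# assuming scikit-image and scikit-learn; glyphs and labels are placeholders.
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def hog_feature(glyph):
    return hog(glyph, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

digits = np.random.rand(100, 32, 32)                   # placeholder segmented digit glyphs
digit_labels = np.random.randint(0, 10, size=100)
hangul = np.random.rand(60, 32, 32)                    # placeholder Korean-character glyphs
hangul_labels = np.random.choice(["가", "나", "다"], size=60)

digit_clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
digit_clf.fit([hog_feature(g) for g in digits], digit_labels)

hangul_clf = SVC(kernel="rbf")
hangul_clf.fit([hog_feature(g) for g in hangul], hangul_labels)

print(digit_clf.predict([hog_feature(digits[0])]),
      hangul_clf.predict([hog_feature(hangul[0])]))
```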

A study on frost prediction model using machine learning (머신러닝을 사용한 서리 예측 연구)

  • Kim, Hyojeoung;Kim, Sahm
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.4
    • /
    • pp.543-552
    • /
    • 2022
  • When frost occurs, crops are directly damaged. When crops come into contact with low temperatures, their tissues freeze, which hardens and destroys the cell membranes or chloroplasts, or dries the cells to death. In July 2020, sudden sub-zero weather and frost hit the state of Minas Gerais in Brazil, the world's largest coffee producer, damaging about 30% of the local coffee trees. As a result, coffee prices have risen significantly, and farmers with severe damage can produce coffee again only after the three years it takes for the crops to recover, which is expected to cause long-term damage. In this paper, we tried to predict frost using frost occurrence data and weather observation data provided by the Korea Meteorological Administration in order to prevent severe frost damage. A model was constructed by reflecting weather factors such as wind speed, temperature, humidity, precipitation, and cloudiness. Using XGB (eXtreme Gradient Boosting), SVM (Support Vector Machine), Random Forest, and MLP (Multi-Layer Perceptron) models, various hyperparameters were applied to the training data to select the best configuration for each model. Finally, the results were evaluated in terms of accuracy (acc) and the critical success index (CSI) on the test data. XGB was the best model, with 90.4% accuracy and 64.4% CSI, followed by SVM with 89.7% accuracy and 61.2% CSI. Random Forest and MLP showed similar performance, with about 89% accuracy and about 60% CSI.
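
A minimal sketch of the evaluation metrics used above, accuracy and the critical success index CSI = TP / (TP + FN + FP), assuming scikit-learn; the frost labels and predictions are random placeholders.

```python
# Minimal sketch: accuracy and CSI for binary frost prediction, assuming
# scikit-learn; y_true and y_pred are hypothetical placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = np.random.randint(0, 2, size=500)   # 1 = frost observed (placeholder)
y_pred = np.random.randint(0, 2, size=500)   # 1 = frost predicted (placeholder)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
csi = tp / (tp + fn + fp)                    # hits / (hits + misses + false alarms)
print("accuracy = %.3f, CSI = %.3f" % (accuracy_score(y_true, y_pred), csi))
```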