• Title/Summary/Keyword: weighting function (가중치 함수)

Search Results: 543

An Evaluation Model of AHP Results Using Monte Carlo Simulation (Focusing on Case Studies of Road and Rail) (몬테카를로 시뮬레이션을 통한 AHP결과 해석모형개발 (도로 및 철도부문 사례를 중심으로))

  • Sul, You-Jin; Chung, Sung-Bong; Song, Ki-Han; Chon, Kyung-Soo; Rhee, Sung-Mo
    • Journal of Korean Society of Transportation, v.26 no.4, pp.195-204, 2008
  • Multi-Criteria Analysis is one method for optimizing decisions that involve numerous characteristics and objective functions. The Analytic Hierarchy Process (AHP) is widely used as a general multi-criteria analysis method for critical issues. However, since existing methodologies lack procedures for validating the reliability of AHP evaluators' judgments, a new methodology that includes such validation is required for more reliable decisions. In this research, ideal decision results are derived using Monte Carlo simulation for cases where AHP evaluators lack expertise in the specific project, and these results are compared with those derived from experts to develop a new analysis model for more reliable decision-making. Finally, the new model is applied to road and rail case studies carried out by the Korea Development Institute (KDI) between 2003 and 2006 for validation. The study found that approximately 20% of the decisions produced by the existing methodology can be considered prudent. For future studies, the authors suggest analyzing the correlation between initial weights and final results, since the final results are strongly influenced by the initial weights.
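
A hedged sketch of the general idea (not the paper's exact model): derive AHP weights from a pairwise comparison matrix via the principal eigenvector, then use Monte Carlo sampling to perturb the judgments and observe how stable the resulting ranking is. The 3x3 matrix and the lognormal noise level are illustrative assumptions.

```python
import numpy as np

def ahp_weights(A):
    """Principal-eigenvector weights of a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(A)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

rng = np.random.default_rng(0)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # illustrative judgments only

# Monte Carlo: jitter each upper-triangle judgment, rebuild the reciprocal
# matrix, and record which alternative ranks first in each trial.
wins = np.zeros(3)
for _ in range(10_000):
    B = A.copy()
    for i in range(3):
        for j in range(i + 1, 3):
            B[i, j] = A[i, j] * rng.lognormal(sigma=0.2)  # assumed noise model
            B[j, i] = 1.0 / B[i, j]
    wins[np.argmax(ahp_weights(B))] += 1
print("win frequency per alternative:", wins / wins.sum())
```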

Utilization of age information for speaker verification using multi-task learning deep neural networks (멀티태스크 러닝 심층신경망을 이용한 화자인증에서의 나이 정보 활용)

  • Kim, Ju-ho; Heo, Hee-Soo; Jung, Jee-weon; Shim, Hye-jin; Kim, Seung-Bin; Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea, v.38 no.5, pp.593-600, 2019
  • Similarity in tone between speakers can lower the performance of speaker verification. To improve performance, we propose a multi-task learning technique that trains a deep neural network on speaker information and age information jointly. Multi-task learning can improve generalization because it keeps the hidden layers of a deep neural network from overfitting to a single task. However, our experiments showed that the age-information task does not learn well during training of the deep neural network. To improve the learning, we propose a method that dynamically changes the objective-function weights of speaker identification and age estimation during training. On the RSR2015 evaluation data set, the equal error rate is 6.91 % for the speaker verification system without age information, 6.77 % when age information is used, and 4.73 % when age information is used with the proposed weight-change technique.
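
A minimal PyTorch sketch of the dynamic-weighting idea described above: a shared network with a speaker-identification head and an age-estimation head, trained with a combined loss whose task weights change over epochs. The network sizes, the weight schedule, and the toy data are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=40, hid=128, n_speakers=100):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU(),
                                    nn.Linear(hid, hid), nn.ReLU())
        self.spk_head = nn.Linear(hid, n_speakers)  # speaker identification
        self.age_head = nn.Linear(hid, 1)           # age regression

    def forward(self, x):
        h = self.shared(x)
        return self.spk_head(h), self.age_head(h).squeeze(-1)

def task_weights(epoch, total=30):
    # Assumed schedule: emphasize age estimation early, fade it out later.
    return 1.0, max(0.1, 1.0 - epoch / total)

net = MultiTaskNet()
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
x = torch.randn(8, 40)                               # toy acoustic features
spk_labels = torch.randint(0, 100, (8,))
ages = torch.rand(8) * 60 + 10
spk_out, age_out = net(x)
w_spk, w_age = task_weights(epoch=5)
loss = w_spk * ce(spk_out, spk_labels) + w_age * mse(age_out, ages)
loss.backward()
```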

A Novel Approach to a Robust A Priori SNR Estimator in Speech Enhancement (음성 향상에서 강인한 새로운 선행 SNR 추정 기법에 관한 연구)

  • Park, Yun-Sik; Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea, v.25 no.8, pp.383-388, 2006
  • This paper presents a novel approach to single-channel microphone speech enhancement in noisy environments. Widely used noise reduction techniques based on spectral subtraction are generally expressed as a spectral gain that depends on the signal-to-noise ratio (SNR). The well-known decision-directed (DD) estimator of Ephraim and Malah efficiently reduces musical noise under background noise conditions, but it delays the a priori SNR estimate because the DD weights the speech spectrum component of the previous frame. As a result, the noise suppression gain, which depends on the delayed a priori SNR, matches the previous frame rather than the current one, and this degrades noise reduction performance during speech transient periods. We propose a computationally simple but effective speech enhancement technique based on a sigmoid-type function for the weight parameter of the DD. The proposed approach solves the delay problem of the DD's main parameter, the a priori SNR, while maintaining the benefits of the DD. The performance of the proposed algorithm is evaluated with the ITU-T P.862 Perceptual Evaluation of Speech Quality (PESQ), the Mean Opinion Score (MOS), and speech spectrograms under various noise environments, and it yields better results than the fixed weight parameter of the DD.
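
A small NumPy sketch of decision-directed (DD) a priori SNR estimation for a single frequency bin. The classic DD uses a fixed weight; replacing it with a sigmoid-type function of the a posteriori SNR lets the estimate track speech onsets faster. The sigmoid's argument and constants are illustrative assumptions, not the paper's exact formula.

```python
import numpy as np

def dd_a_priori_snr(gamma, noise_psd=1.0, alpha_fn=lambda g: 0.98):
    """gamma: a posteriori SNR per frame; alpha_fn: weight as a function of gamma."""
    xi = np.zeros_like(gamma)
    prev_clean_power = 0.0
    for m, g in enumerate(gamma):
        a = alpha_fn(g)
        xi[m] = a * prev_clean_power / noise_psd + (1 - a) * max(g - 1.0, 0.0)
        gain = xi[m] / (1.0 + xi[m])                   # Wiener gain
        prev_clean_power = (gain ** 2) * g * noise_psd  # clean-speech power estimate
    return xi

gamma = np.concatenate([np.ones(20), 10.0 * np.ones(20)])   # toy speech onset
xi_fixed = dd_a_priori_snr(gamma)                            # fixed weight (0.98)
xi_sigmoid = dd_a_priori_snr(gamma, alpha_fn=lambda g: 1 / (1 + np.exp(g - 5.0)))
print(xi_fixed[20], xi_sigmoid[20])   # sigmoid weight reacts faster at the onset
```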

A Comparative Study of Fuzzy Relationship and ANN for Landslide Susceptibility in Pohang Area (퍼지관계 기법과 인공신경망 기법을 이용한 포항지역의 산사태 취약성 예측 기법 비교 연구)

  • Kim, Jin Yeob; Park, Hyuck Jin
    • Economic and Environmental Geology, v.46 no.4, pp.301-312, 2013
  • Landslides are caused by complex interactions among a large number of interrelated factors such as topography, geology, forest cover, and soils. In this study, a comparative evaluation of landslide susceptibility was carried out using the fuzzy relation method and an artificial neural network. For landslide susceptibility mapping, maps of landslide occurrence locations, slope angle, aspect, curvature, lithology, soil drainage, soil depth, soil texture, forest type, forest age, forest diameter, and forest density were constructed from the spatial data sets. In the fuzzy relation analysis, the membership values for each category of the thematic layers were determined using the cosine amplitude method, and the thematic layers were then integrated by the Cartesian product operation to produce the landslide susceptibility map. In the artificial neural network analysis, the relative weights of the causative factors were determined by the back-propagation algorithm. The landslide susceptibility maps prepared by the two approaches were validated with ROC (Receiver Operating Characteristic) curves and the AUC (Area Under the Curve). Based on the validation results, both approaches predict landslide susceptibility very well, but the artificial neural network performed better in this study area.
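
A brief sketch of the cosine amplitude method mentioned above, which the fuzzy relation analysis uses to turn data vectors into membership (similarity) values; the toy numbers are made up for illustration.

```python
import numpy as np

def cosine_amplitude(X):
    """r_ij = sum_k x_ik x_jk / sqrt(sum_k x_ik^2 * sum_k x_jk^2)."""
    norms = np.sqrt((X ** 2).sum(axis=1))
    return (X @ X.T) / np.outer(norms, norms)

# Rows: categories of a thematic layer; columns: attribute values (e.g.
# landslide-occurrence statistics per category). Values are illustrative.
X = np.array([[0.2, 0.5, 0.1],
              [0.4, 0.4, 0.3],
              [0.9, 0.1, 0.6]])
R = cosine_amplitude(X)
print(np.round(R, 3))   # pairwise fuzzy similarity matrix
```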

A Study on Optimal Time Distribution of Extreme Rainfall Using Minutely Rainfall Data: A Case Study of Seoul (분단위 강우자료를 이용한 극치강우의 최적 시간분포 연구: 서울지점을 중심으로)

  • Yoon, Sun-Kwon; Kim, Jong-Suk; Moon, Young-Il
    • Journal of Korea Water Resources Association, v.45 no.3, pp.275-290, 2012
  • In this study, we developed an optimal time distribution model based on the extraction of a peaks-over-threshold (POT) series. The median of the annual maximum rainfall data set, obtained from the magnetic recording (MMR) and automatic weather system (AWS) data at the Seoul meteorological observatory, was used as the POT criterion. We also suggest an improved methodology for the time distribution of extreme rainfall compared with the Huff method, which is widely used for the time distribution of design rainfall but considers neither changes in the shape of the time distribution with rainfall duration nor a rainfall criterion based on the total amount of each rainfall event. This study suggests a methodology for extracting rainfall events in each quartile based on an interquartile range (IQR) matrix and for selecting the mode quartile storm to determine the ranking, considering weighting factors on the minutely observation data. Finally, the optimal time distribution model for each rainfall duration was derived using a kernel density function, considering both the data size and the characteristics of the distribution in the extracted dimensionless unit rainfall hyetograph.
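
A simplified sketch of the POT extraction step: the median of an annual-maximum series serves as the threshold, and rainfall events exceeding it are kept. Synthetic gamma-distributed values stand in for the minutely Seoul records; the shapes and scales are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
annual_max = rng.gamma(shape=5.0, scale=10.0, size=30)   # 30 years of annual maxima
threshold = np.median(annual_max)                        # POT criterion used above

event_totals = rng.gamma(shape=3.0, scale=12.0, size=500)  # per-event rainfall totals
pot_events = event_totals[event_totals > threshold]        # peaks-over-threshold series
print(f"threshold={threshold:.1f} mm, extracted {pot_events.size} POT events")
```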

Optimizing Imaging Conditions in Digital Tomosynthesis for Image-Guided Radiation Therapy (영상유도 방사선 치료를 위한 디지털 단층영상합성법의 촬영조건 최적화에 관한 연구)

  • Youn, Han-Bean; Kim, Jin-Sung; Cho, Min-Kook; Jang, Sun-Young; Song, William Y.; Kim, Ho-Kyung
    • Progress in Medical Physics, v.21 no.3, pp.281-290, 2010
  • Cone-beam digital tomosynthesis (CBDT) has attracted great attention in image-guided radiation therapy because of its advantages such as low patient dose and fewer motion artifacts. The image quality of the tomograms, however, depends on imaging conditions such as the scan angle ($\beta_{scan}$) and the number of projection views. In this paper, we describe the principle of CBDT based on the filtered-backprojection technique and investigate the optimization of imaging conditions. As a system performance measure, we define a figure of merit combining the signal difference-to-noise ratio, the artifact spread function, and the number of floating-point operations, which determines the computational load of the image reconstruction procedure. From measurements of a disc phantom, which mimics an impulse signal, and their analyses, we conclude that the image quality of CBDT tomograms improves as the scan angle widens beyond 60 degrees with a larger step scan angle ($\Delta\beta$). As a rule of thumb, the system performance depends on $\sqrt{\Delta\beta} \times \beta_{scan}^{2.5}$. If exact weighting factors could be assigned to each image-quality metric, quantitatively better imaging conditions could be found.
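
A worked example of the rule-of-thumb dependence reported above, performance proportional to $\sqrt{\Delta\beta} \times \beta_{scan}^{2.5}$ (angles in degrees); the condition pairs are illustrative.

```python
import numpy as np

def relative_performance(delta_beta, beta_scan):
    # Rule of thumb from the abstract: sqrt(step angle) * scan angle ** 2.5.
    return np.sqrt(delta_beta) * beta_scan ** 2.5

for step, scan in [(1, 40), (2, 60), (4, 60), (2, 90)]:
    print(f"step={step} deg, scan={scan} deg -> {relative_performance(step, scan):.3g}")
```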

A Performance Analysis by Adjusting Learning Methods in Stock Price Prediction Model Using LSTM (LSTM을 이용한 주가예측 모델의 학습방법에 따른 성능분석)

  • Jung, Jongjin; Kim, Jiyeon
    • Journal of Digital Convergence, v.18 no.11, pp.259-266, 2020
  • Researchers have steadily applied knowledge-based expert systems and machine learning algorithms to the financial field, and knowledge-based system trading on stock prices is now common. Recently, deep learning technologies have been applied to real stock trading as GPU performance and large-scale data have become sufficiently available. In particular, LSTM has been applied to stock price prediction because of its suitability for time-series data. In this paper, we implement stock price prediction using LSTM. In modeling the LSTM, we propose a combination of model parameters and activation functions suited for best performance. Specifically, we propose methods for selecting the weight and bias initializers, the regularizers used to avoid over-fitting, the activation functions, and the optimization methods. We also compare model performance across different selections of these modeling factors on real-world stock price data of major global companies. Finally, our experiments yield a suitable way of applying the LSTM model to stock price prediction.
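
A hedged Keras sketch of the kind of configuration the paper compares: weight and bias initializers, an L2 regularizer against over-fitting, an activation function, and an optimizer. These specific choices, the window length, and the toy data are assumptions, not the paper's reported best combination.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

window, n_features = 20, 5   # assumed: 20-day windows of 5 price features
model = keras.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.LSTM(64,
                kernel_initializer="glorot_uniform",    # weight initializer
                bias_initializer="zeros",               # bias initializer
                kernel_regularizer=regularizers.l2(1e-4),  # against over-fitting
                activation="tanh"),
    layers.Dense(1),                                    # next-day price (regression)
])
model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")

# Toy data in place of real stock prices.
X = np.random.rand(128, window, n_features).astype("float32")
y = np.random.rand(128, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```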

An efficient 2.5D inversion of loop-loop electromagnetic data (루프-루프 전자탐사자료의 효과적인 2.5차원 역산)

  • Song, Yoon-Ho; Kim, Jung-Ho
    • Geophysics and Geophysical Exploration, v.11 no.1, pp.68-77, 2008
  • We have developed an inversion algorithm for loop-loop electromagnetic (EM) data, based on the localised non-linear or extended Born approximation to the solution of the 2.5D integral equation describing an EM scattering problem. Source and receiver configuration may be horizontal co-planar (HCP) or vertical co-planar (VCP). Both multi-frequency and multi-separation data can be incorporated. Our inversion code runs on a PC platform without heavy computational load. For the sake of stable and high-resolution performance of the inversion, we implemented an algorithm determining an optimum spatially varying Lagrangian multiplier as a function of sensitivity distribution, through parameter resolution matrix and Backus-Gilbert spread function analysis. Considering that the different source-receiver orientation characteristics cause inconsistent sensitivities to the resistivity structure in simultaneous inversion of HCP and VCP data, which affects the stability and resolution of the inversion result, we adapted a weighting scheme based on the variances of misfits between the measured and calculated datasets. The accuracy of the modelling code that we have developed has been proven over the frequency, conductivity, and geometric ranges typically used in a loop-loop EM system through comparison with 2.5D finite-element modelling results. We first applied the inversion to synthetic data, from a model with resistive as well as conductive inhomogeneities embedded in a homogeneous half-space, to validate its performance. Applying the inversion to field data and comparing the result with that of dc resistivity data, we conclude that the newly developed algorithm provides a reasonable image of the subsurface.
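
A small sketch of the dataset-weighting idea: when HCP and VCP data have inconsistent sensitivities, each dataset's misfit term can be weighted by the inverse variance of its residuals. The combination rule and toy residuals below are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def weighted_misfit(residuals_hcp, residuals_vcp):
    # Inverse-variance weights: the noisier dataset contributes less.
    w_h = 1.0 / np.var(residuals_hcp)
    w_v = 1.0 / np.var(residuals_vcp)
    return w_h * np.sum(residuals_hcp ** 2) + w_v * np.sum(residuals_vcp ** 2)

r_h = np.random.default_rng(2).normal(0.0, 0.5, 100)  # toy HCP residuals
r_v = np.random.default_rng(3).normal(0.0, 2.0, 100)  # toy VCP residuals (noisier)
print(f"combined objective: {weighted_misfit(r_h, r_v):.1f}")
```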

Super Resolution based on Reconstruction Algorithm Using Wavelet basis (웨이브렛 기저를 이용한 초해상도 기반 복원 알고리즘)

  • Baek, Young-Hyun; Byun, Oh-Sung; Moon, Sung-Ryong
    • Journal of the Institute of Electronics Engineers of Korea SP, v.44 no.1, pp.17-25, 2007
  • In most electronic imaging applications, images with high resolution (HR) are desired. HR means that the pixel density within an image is high, so an HR image can offer details that may be critical in various applications. Digital images captured by CCD and CMOS cameras usually have a very low resolution, which significantly limits the performance of image recognition systems. Image super-resolution techniques can be applied to overcome the limits of these imaging systems. Super-resolution techniques increase the resolution by combining information from multiple images; they consist of a registration algorithm for estimating shifts and nearest-neighbor interpolation using the weights of the acquired and presented frames. In this paper, we propose an image interpolation technique using the wavelet basis function. Applying wavelet basis-function coefficients to the conventional super-resolution interpolation method yields accurate edges and natural-looking images when part of a still image is enlarged. Computer simulations confirm that the proposed algorithm improves on the nearest-neighbor, bilinear, and bicubic interpolation algorithms.
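
A minimal PyWavelets sketch of the wavelet-basis idea: treat the low-resolution image as the approximation (LL) band of a one-level 2-D DWT, zero the detail bands, and inverse-transform to roughly double the size. This is a generic baseline under assumed settings, not the authors' full registration-plus-interpolation pipeline.

```python
import numpy as np
import pywt

def wavelet_upscale(img, wavelet="db2"):
    zeros = np.zeros_like(img)
    # Factor 2 compensates the orthonormal 2-D DWT's per-level amplitude scaling.
    return pywt.idwt2((2.0 * img, (zeros, zeros, zeros)), wavelet)

low_res = np.random.rand(64, 64)        # toy stand-in for a captured frame
high_res = wavelet_upscale(low_res)     # roughly 128x128 result
print(low_res.shape, "->", high_res.shape)
```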

Design of Digit Recognition System Realized with the Aid of Fuzzy RBFNNs and Incremental-PCA (퍼지 RBFNNs와 증분형 주성분 분석법으로 실현된 숫자 인식 시스템의 설계)

  • Kim, Bong-Youn; Oh, Sung-Kwun; Kim, Jin-Yul
    • Journal of the Korean Institute of Intelligent Systems, v.26 no.1, pp.56-63, 2016
  • In this study, we introduce a design for a Fuzzy RBFNN-based digit recognition system that uses incremental PCA to recognize handwritten digits. Principal Component Analysis (PCA) is a widely adopted dimensionality reduction algorithm, but it incurs high computing overhead for feature extraction when high-dimensional images or a large amount of training data are used. To alleviate this problem, incremental PCA is used for computationally efficient processing as well as incremental learning of high-dimensional data in the feature extraction stage. The architecture of the Fuzzy Radial Basis Function Neural Network (RBFNN) consists of three functional modules: the condition, conclusion, and inference parts. In the condition part, the input space is partitioned by fuzzy clustering realized with the Fuzzy C-Means (FCM) algorithm, which is used instead of a Gaussian function to reflect the characteristics of the input data. In the conclusion part, the connection weights take diverse extended polynomial forms such as constant, linear, quadratic, and modified quadratic. Experimental results on the benchmark MNIST handwritten digit database demonstrate the effectiveness and efficiency of the proposed digit recognition system compared with other studies.
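
A short scikit-learn sketch of the incremental-PCA feature-extraction stage: fitting PCA in mini-batches avoids holding all high-dimensional image data in memory at once. The MNIST-like shapes and the component count are assumptions; random data stand in for the digit images.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

ipca = IncrementalPCA(n_components=50)       # reduced feature dimension (assumed)
rng = np.random.default_rng(0)
for _ in range(10):                          # stream of mini-batches
    batch = rng.random((200, 784))           # 200 flattened 28x28 images
    ipca.partial_fit(batch)                  # incremental update of the components

features = ipca.transform(rng.random((5, 784)))  # features fed to the fuzzy RBFNN
print(features.shape)                            # (5, 50)
```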