• Title/Summary/Keyword: Gaussian (가우시안)

Search Results: 1,254

Convergence Implementing Emotion Prediction Neural Network Based on Heart Rate Variability (HRV) (심박변이도를 이용한 인공신경망 기반 감정예측 모형에 관한 융복합 연구)

  • Park, Sung Soo;Lee, Kun Chang
    • Journal of the Korea Convergence Society / v.9 no.5 / pp.33-41 / 2018
  • The purpose of this study is to develop a more accurate and robust emotion prediction neural network (EPNN) model by combining heart rate variability (HRV) measures with a neural network. To improve prediction performance reliably, the proposed EPNN model embeds various types of activation functions, including hyperbolic tangent, linear, and Gaussian functions, in its hidden nodes. To verify the validity of the proposed EPNN model, a number of HRV metrics were calculated from 20 valid and qualified participants whose emotions were induced using a money game. To add rigor to the experiment, the participants' valence and arousal were measured and used as the output nodes of the EPNN. The experimental results reveal F-measures of 80% for valence and 95% for arousal, showing that the EPNN yields robust and well-balanced performance. The EPNN was also compared with competing models, namely a neural network, logistic regression, a support vector machine, and a random forest, and it was more accurate and reliable than all of them. The results of this study can be applied to many types of wearable computing devices as ubiquitous digital health environments become feasible and permeate our everyday lives.
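
As an illustration of the activation-function mixing this abstract describes, the sketch below runs a forward pass through a hidden layer whose nodes use hyperbolic-tangent, linear, and Gaussian activations. It is not the authors' EPNN implementation; the layer sizes, weights, and the HRV feature vector are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian(x):
    """Gaussian (radial) activation: exp(-x^2)."""
    return np.exp(-x ** 2)

# Three activation types, assigned to hidden nodes in rotation.
activations = [np.tanh, lambda x: x, gaussian]   # hyperbolic tangent, linear, Gaussian

def epnn_forward(x, W_hidden, b_hidden, W_out, b_out):
    """Forward pass in which each hidden node applies its own activation function."""
    pre = W_hidden @ x + b_hidden                            # hidden pre-activations
    hidden = np.array([activations[i % 3](pre[i]) for i in range(pre.size)])
    logits = W_out @ hidden + b_out
    return 1.0 / (1.0 + np.exp(-logits))                     # sigmoid outputs (e.g., valence, arousal)

# Hypothetical sizes: 8 HRV features, 6 hidden nodes, 2 outputs.
x = rng.normal(size=8)
W_h, b_h = rng.normal(size=(6, 8)), rng.normal(size=6)
W_o, b_o = rng.normal(size=(2, 6)), rng.normal(size=2)
print(epnn_forward(x, W_h, b_h, W_o, b_o))
```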

Compressive Sensing Recovery of Natural Images Using Smooth Residual Error Regularization (평활 잔차 오류 정규화를 통한 자연 영상의 압축센싱 복원)

  • Trinh, Chien Van;Dinh, Khanh Quoc;Nguyen, Viet Anh;Park, Younghyeon;Jeon, Byeungwoo
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.6 / pp.209-220 / 2014
  • Compressive sensing (CS) is a signal acquisition paradigm that enables sampling below the Nyquist rate for a special class of signals called sparse signals. Many CS recovery methods exist, but their performance remains limited, especially at low sub-rates. For CS recovery of natural images, regularizations that exploit prior information can be used to enhance performance. In this context, this paper improves the quality of reconstructed natural images by combining the Dantzig selector with smoothing filters (a Gaussian filter and a nonlocal means filter) to form a new regularization called smooth residual error regularization. Moreover, since total variation has proven successful in preserving edges and object boundaries in reconstructed images, the effectiveness of the proposed regularization is verified by embedding it in augmented Lagrangian total variation minimization. The resulting framework can be viewed as a CS recovery that seeks smoothness in the residual image. Experimental results demonstrate significant improvement over several other CS recovery methods in both subjective and objective quality; in the best case, the algorithm gains up to 9.14 dB over a CS recovery based on a Bayesian framework.
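
The core idea of the smooth residual error regularization, smoothing a residual image with a Gaussian filter before feeding it back, can be sketched as below. This is only a toy illustration under assumed inputs: the measurement process, sub-rate, Dantzig selector, and the augmented Lagrangian total variation solver are all omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def smooth_residual_step(x_recon, x_prev, sigma=1.5):
    """Smooth the residual between two successive reconstructions and add it back."""
    residual = x_recon - x_prev                            # residual image between iterations
    smoothed = gaussian_filter(residual, sigma=sigma)      # suppress noise in the residual
    return x_prev + smoothed                               # updated estimate

# Toy 64x64 "reconstructions" standing in for successive CS iterates.
x_prev = rng.normal(size=(64, 64))
x_recon = x_prev + 0.1 * rng.normal(size=(64, 64))
x_new = smooth_residual_step(x_recon, x_prev)
print(float(np.abs(x_new - x_recon).mean()))
```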

Traffic Lights Detection Based on Visual Attention and Spot-Lights Regions Detection (시각적 주의 및 Spot-Lights 영역 검출 기반의 교통신호등 검출 방안)

  • Kim, JongBae
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.6 / pp.132-142 / 2014
  • In this paper, we propose a traffic-light detection method based on visual attention and spot-light detection. To detect traffic lights on city streets in daytime and at night, the proposed method uses structural properties of traffic lights such as color, intensity, shape, and texture. In general, traffic lights are installed at positions that maximize their visibility to drivers. The proposed method detects candidate traffic-light regions using a top-down visual saliency model together with spot-light detection models. Visually salient and spot-light regions are locations that differ from their neighborhoods across multiple features and multiple scales. Because it does not rely on color thresholding, the proposed method can be applied to urban environments with widely varying illumination, including at night.
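
A minimal sketch of the spot-light idea follows, assuming a simple center-surround difference computed at several Gaussian scales; the paper's top-down saliency model, feature channels, and traffic-light verification stages are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spotlight_map(intensity, scales=(2, 4, 8)):
    """Average center-surround differences over several Gaussian scales."""
    response = np.zeros_like(intensity, dtype=float)
    for s in scales:
        surround = gaussian_filter(intensity, sigma=s)     # local surround estimate
        response += np.abs(intensity - surround)           # center-surround difference
    return response / len(scales)

# Toy intensity image with one small bright "lamp".
img = np.zeros((100, 100))
img[48:52, 48:52] = 1.0
saliency = spotlight_map(img)
print(np.unravel_index(np.argmax(saliency), saliency.shape))   # peak near the bright spot
```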

Extraction and Complement of Hexagonal Borders in Corneal Endothelial Cell Images (각막 내피 세포 영상내 육각형 경계의 검출과 보완법)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.3 / pp.102-112 / 2013
  • In this paper, a two-step method for extracting and complementing hexagonal contours in low-contrast, noisy images is proposed. The method combines a Laplacian-of-Gaussian filter with shape-dependent filters. In the first step, an algorithm with six masks is used to extract hexagonal edges, especially at the corners: two tricorn filters detect the tricorn joints of the hexagons, and the other four masks enhance the line segments of the hexagonal edges. As a natural test image, a corneal endothelial cell image, which usually exhibits a regular hexagonal pattern, is selected; extracting the edges of these hexagonal shapes is important for clinical diagnosis. The proposed algorithm and conventional methods are applied to noisy hexagonal images to evaluate their efficiency. The proposed algorithm is robust against noise and shows better detection ability than the conventional methods in terms of output signal-to-noise ratio, edge coincidence ratio, and extraction accuracy. In the second step, the missing parts of the thinned image are complemented by an energy-minimization algorithm, and the cell areas and distribution, which provide the information needed for medical diagnosis, are computed.
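
The first step builds on Laplacian-of-Gaussian (LoG) filtering; a minimal sketch of that filtering on a synthetic low-contrast, noisy image is shown below. The six shape-dependent corner and line masks of the proposed method are not reproduced, and the threshold used here is an arbitrary illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(0)

# Synthetic low-contrast, noisy image with one bright block whose border we want.
img = np.full((64, 64), 0.4)
img[20:44, 20:44] = 0.6
img += 0.02 * rng.normal(size=img.shape)

log_response = gaussian_laplace(img, sigma=2.0)            # LoG: Gaussian smoothing + Laplacian
magnitude = np.abs(log_response)
edges = magnitude > magnitude.mean() + magnitude.std()     # crude threshold, for illustration only
print(int(edges.sum()), "candidate edge pixels")
```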

A Demodulation Method for DS/CDMA Systems (DS/CDMA 시스템을 위한 새로운 복조 방식)

  • Jung, Bum-Jin;Jin, Ming-Lu;Kwak, Kyung-Sup
    • Journal of IKEEE / v.2 no.2 s.3 / pp.212-224 / 1998
  • Two major factors degrade the performance of the forward link of DS/CDMA systems: multiple access interference (MAI), caused by using the same frequency band simultaneously, and multipath fading due to multipath propagation. PN codes with minimal cross-correlation among spread-spectrum codes are needed to reduce the MAI. In the conventional IS-95A system, the PN sequence has a period of $2^{15}$ and each data bit is spread over 64 chips. In this case, because the spreading length per bit is very short compared with the period of the PN code, the conventional system does not adequately suppress multipath interference. However, the correlation property of the PN codes at the demodulator can be improved by increasing the integration interval. This paper proposes a demodulation method that reduces the cross-correlation among PN codes, and its performance is investigated through computer simulations over a multipath Rayleigh fading channel and an AWGN channel. The simulation results show an SNR improvement of $0.25{\sim}0.5$ dB at a given BER compared with the conventional demodulation scheme.
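
The sketch below illustrates only the underlying statistical point, that the normalized cross-correlation between PN-like bipolar sequences shrinks as the integration interval grows; random sequences stand in for the IS-95A PN codes, and no CDMA modulation, channel model, or the proposed demodulator itself is simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_cross_corr(length):
    """Normalized cross-correlation of two independent bipolar (+/-1) sequences."""
    a = rng.choice([-1, 1], size=length)
    b = rng.choice([-1, 1], size=length)
    return abs(int(np.dot(a, b))) / length

for n in (64, 256, 1024, 4096):                            # integration interval in chips
    trials = [normalized_cross_corr(n) for _ in range(200)]
    print(f"{n:5d} chips: mean |normalized cross-correlation| = {np.mean(trials):.4f}")
```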


Influence of Modelling Approaches of Diffusion Coefficients on Atmospheric Dispersion Factors (확산계수의 모델링방법이 대기확산인자에 미치는 영향)

  • Hwang, Won Tae;Kim, Eun Han;Jeong, Hae Sun;Jeong, Hyo Joon;Han, Moon Hee
    • Journal of Radiation Protection and Research / v.38 no.2 / pp.60-67 / 2013
  • The diffusion coefficient is an important parameter in predicting atmospheric dispersion with a Gaussian plume model, and its modelling approach varies. In this study, the dispersion coefficients recommended by the regulatory guides of the U.S. Nuclear Regulatory Commission (U.S. NRC) and the Canadian Nuclear Safety Commission (CNSC), and those used in the probabilistic accident consequence analysis codes MACCS and MACCS2, were investigated. Based on the atmospheric dispersion model recommended by the U.S. NRC for a hypothetical accidental release, their influence on the atmospheric dispersion factor was discussed. It was found that diffusion coefficients are basically derived from the Pasquill-Gifford curves, but various curve-fitting equations are recommended or used. The lateral dispersion coefficient is corrected in all models for the additional spread due to plume meandering, although the modelling approaches differ distinctly. The vertical dispersion coefficient is corrected for the additional plume spread due to surface roughness in all models except the U.S. NRC's recommendation. For a specified surface roughness, the atmospheric dispersion factors differed by up to approximately a factor of 4 depending on the modelling approach for the dispersion coefficient, and for the same model they differed by a factor of 2 to 3 depending on surface roughness.
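
For reference, a worked sketch of the ground-level, centerline atmospheric dispersion factor (χ/Q) from the standard Gaussian plume formulation is given below. The power-law fit σ = a·x^b stands in for a Pasquill-Gifford curve fit; the coefficients and release conditions are illustrative placeholders, not values recommended by the U.S. NRC, CNSC, MACCS, or MACCS2.

```python
import math

def dispersion_factor(x_m, wind_speed, release_height, a_y, b_y, a_z, b_z):
    """chi/Q [s/m^3] at downwind distance x_m on the plume centerline (y = 0, z = 0),
    for an elevated release with ground reflection included."""
    sigma_y = a_y * x_m ** b_y                             # lateral dispersion coefficient [m]
    sigma_z = a_z * x_m ** b_z                             # vertical dispersion coefficient [m]
    return (math.exp(-release_height ** 2 / (2.0 * sigma_z ** 2))
            / (math.pi * sigma_y * sigma_z * wind_speed))

# Hypothetical conditions: 1 km downwind, 2 m/s wind, 30 m effective release height.
print(dispersion_factor(1000.0, 2.0, 30.0, a_y=0.22, b_y=0.85, a_z=0.12, b_z=0.80))
```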

A High Speed Block Turbo Code Decoding Algorithm and Hardware Architecture Design (고속 블록 터보 코드 복호 알고리즘 및 하드웨어 구조 설계)

  • 유경철;신형식;정윤호;김근회;김재석
    • Journal of the Institute of Electronics Engineers of Korea SD / v.41 no.7 / pp.97-103 / 2004
  • In this paper, we propose a high-speed block turbo code decoding algorithm and an efficient hardware architecture. Multimedia wireless data communication systems need channel codes with strong error-correcting capability. Block turbo codes support variable code rates and packet sizes and show high performance thanks to the soft-decision iterative decoding of turbo codes, but they suffer from long decoding times because of the iterations and the complicated extrinsic-information computation. The proposed algorithm reduces this decoding time by using a threshold that represents channel information. Once the threshold has been determined from simulation results, the algorithm skips the calculation for bits with good channel information and instead assigns them the highest reliability value, '1'. The threshold is set from the absolute mean and the standard deviation of the LLR (log-likelihood ratio), on the assumption that the LLR distribution is Gaussian. A hardware design in Verilog HDL reduces the decoding time by about 30% compared with the conventional algorithm and requires about 20K logic gates and 32 Kbit of memory.
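
A hedged sketch of the thresholding step described above: treat the LLR magnitudes as approximately Gaussian, set the threshold from their absolute mean and standard deviation, and skip the extrinsic-information update for bits whose |LLR| exceeds it, assigning them the maximum reliability instead. The scale factor k, the toy LLR values, and the exact reliability assignment are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_reliable_bits(llr, k=1.0):
    """Masks of bits that skip the extrinsic computation (reliable) and bits that still need it,
    with threshold = mean(|LLR|) + k * std(|LLR|)."""
    magnitude = np.abs(llr)
    threshold = magnitude.mean() + k * magnitude.std()
    reliable = magnitude > threshold                       # good channel information: skip update
    return reliable, ~reliable

llr = rng.normal(loc=0.0, scale=2.0, size=64)              # toy LLRs for one row of the block code
reliable, needs_update = split_reliable_bits(llr)
extrinsic = np.zeros_like(llr)
extrinsic[reliable] = np.sign(llr[reliable]) * 1.0         # assign the highest reliability value
print(int(reliable.sum()), "of", llr.size, "bits skip the extrinsic-information computation")
```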

Performance of Time-averaging Channel Estimator for OFDM System of Terrestrial Broadcasting Channel (지상파 방송 채널에서 OFDM 시스템의 시간 평균 채널 추정기의 성능)

  • 문재경;오길남;박재홍;하영호;김수중
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.10 no.1 / pp.44-53 / 1999
  • In this paper, we propose a pilot-based time-averaging channel estimation method and analyze its error performance for efficient OFDM (Orthogonal Frequency Division Multiplexing) transmission in multipath fading environments. Frequency-domain channel estimation has been used in OFDM systems to compensate for signal distortion due to fading on each subcarrier: scattered pilots are used to estimate the channel response by simple interpolation, which means the estimated channel response also contains distortion due to noise. The proposed time-averaged channel estimation method removes this noise-induced distortion by taking the time average of the estimated channel response after the frequency-domain estimation. Computer simulations were performed to evaluate the performance of the proposed channel estimator. For a Rician channel at an SER (symbol error rate) of $10^{-4}$, the proposed method was compared with a conventional scheme using Gaussian interpolation and with perfect channel estimation. The proposed method was within 0.07 dB and 0.6 dB of perfect channel estimation and improved on the conventional method by 1.7 dB and 1.9 dB for 16 QAM and 64 QAM, respectively.
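
The core of the method, averaging per-symbol frequency-domain channel estimates over time to suppress noise, can be sketched as follows. The pilot layout, interpolation, and the Rician channel are simplified away: every subcarrier of every OFDM symbol is assumed to already have a noisy least-squares estimate, and the channel is held static across the averaging window.

```python
import numpy as np

rng = np.random.default_rng(0)

n_symbols, n_subcarriers = 32, 64
true_channel = (rng.normal(size=n_subcarriers)
                + 1j * rng.normal(size=n_subcarriers)) / np.sqrt(2)

# Noisy per-symbol frequency-domain estimates (e.g., after pilot interpolation).
noise = 0.3 * (rng.normal(size=(n_symbols, n_subcarriers))
               + 1j * rng.normal(size=(n_symbols, n_subcarriers)))
per_symbol_estimate = true_channel[None, :] + noise

# Time averaging across OFDM symbols suppresses the noise on the estimate.
time_averaged_estimate = per_symbol_estimate.mean(axis=0)

mse_single = float(np.mean(np.abs(per_symbol_estimate[0] - true_channel) ** 2))
mse_averaged = float(np.mean(np.abs(time_averaged_estimate - true_channel) ** 2))
print(f"MSE, single symbol: {mse_single:.4f}; time-averaged: {mse_averaged:.4f}")
```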


Development of Drought Stress Measurement Method for Red Pepper Leaves using Hyperspectral Short Wave Infrared Imaging Technique (초분광 단파적외선 영상 기술을 이용한 고추의 수분스트레스 측정 기술 개발)

  • Park, Eunsoo;Cho, Byoung-Kwan
    • Journal of Bio-Environment Control / v.23 no.1 / pp.50-55 / 2014
  • This study investigated the responses of red pepper (Hongjinju) leaves under water stress. Hyperspectral short-wave infrared (SWIR, 1000~1800 nm) reflectance imaging was used to acquire spectral images of red pepper leaves with and without water stress, and the acquired spectral data were analyzed using analysis of variance (ANOVA). The ANOVA model indicated that the 1449 nm waveband, which is closely related to a water absorption band, was the most effective for determining the stress responses of leaves exposed to water deficiency. The processed spectral image at 1449 nm distinctly separated the non-stressed, moderately stressed (-20 kPa), and severely stressed (-50 kPa) groups of red pepper leaves. The results demonstrate that hyperspectral imaging can be applied to monitoring the stress responses of red pepper leaves, which indicate physiological and biochemical changes under water deficiency.
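
An illustrative sketch of selecting the most discriminative waveband with a per-band one-way ANOVA, which is the kind of analysis described above; the reflectance spectra are synthetic placeholders rather than hyperspectral SWIR measurements, and the 1449 nm outcome is built into the toy data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
bands = np.arange(1000, 1801)                              # SWIR wavebands in nm (1 nm steps)

def synthetic_group(dip_depth, n_leaves=20):
    """Toy spectra: flat reflectance plus noise, with a dip at 1449 nm whose
    depth grows with water stress."""
    spectra = 0.5 + 0.01 * rng.normal(size=(n_leaves, bands.size))
    spectra[:, np.abs(bands - 1449).argmin()] -= dip_depth
    return spectra

groups = [synthetic_group(d) for d in (0.00, 0.05, 0.10)]  # non / moderate / severe stress

f_stats = np.array([f_oneway(*(g[:, i] for g in groups)).statistic
                    for i in range(bands.size)])
print("Most discriminative band:", int(bands[f_stats.argmax()]), "nm")
```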

Comparison of Survival Prediction of Rats with Hemorrhagic Shocks Using Artificial Neural Network and Support Vector Machine (출혈성 쇼크를 일으킨 흰쥐에서 인공신경망과 지원벡터기계를 이용한 생존율 비교)

  • Jang, Kyung-Hwan;Yoo, Tae-Keun;Nam, Ki-Chang;Choi, Jae-Rim;Kwon, Min-Kyung;Kim, Deok-Won
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.2 / pp.47-55 / 2011
  • Hemorrhagic shock accounts for about one third of injury-related deaths worldwide, and early diagnosis makes successful treatment possible. The objective of this paper was to select an optimal classifier using physiological signals measured from rats during a hemorrhage experiment. The data set was used to train artificial neural network (ANN) and support vector machine (SVM) models and to predict survival. To avoid over-fitting, the best classifier was chosen according to performance measured by 10-fold cross-validation. As a result, an ANN with one hidden layer of three hidden nodes and an SVM with a Gaussian kernel function were selected as the trained prediction models; the ANN showed 88.9% sensitivity, 96.7% specificity, and 92.0% accuracy, while the SVM achieved 97.8% sensitivity, 95.0% specificity, and 96.7% accuracy. Therefore, the SVM was better than the ANN for survival prediction.
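
A hedged sketch of the comparison described above, using scikit-learn stand-ins: a one-hidden-layer neural network with three hidden nodes versus an SVM with a Gaussian (RBF) kernel, each scored by 10-fold cross-validation. The data are randomly generated placeholders for the rats' physiological signals, so the printed accuracies will not match the paper's figures.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                              # placeholder "physiological" features
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(int)   # survival label

ann = MLPClassifier(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)  # one hidden layer, 3 nodes
svm = SVC(kernel="rbf", gamma="scale")                     # Gaussian (RBF) kernel

for name, model in (("ANN", ann), ("SVM", svm)):
    scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")         # 10-fold cross-validation
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.3f}")
```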