• Title/Summary/Keyword: squared error loss function


A Novel Broadband Channel Estimation Technique Based on Dual-Module QGAN

  • Li Ting; Zhang Jinbiao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.5 / pp.1369-1389 / 2024
  • In the era of 6G, the rapid increase in communication data volume places greater demands on both traditional and deep-learning-based channel estimation techniques; when processing large-scale data, their computational load and real-time performance often fail to meet practical requirements. To overcome this bottleneck, this paper introduces quantum computing techniques, exploring for the first time the application of Quantum Generative Adversarial Networks (QGAN) to broadband channel estimation. Although generative adversarial techniques have been applied to channel estimation, obtaining instantaneous channel information remains a significant challenge. To address this, the paper proposes an innovative QGAN with a dual-module generator design. The adversarial loss function and the Mean Squared Error (MSE) loss function are applied separately to update the parameters of the two modules, facilitating the learning of statistical channel information and the generation of instantaneous channel details. Experimental results on the PennyLane quantum computing simulation platform demonstrate the efficiency and accuracy of the proposed dual-module QGAN technique for channel estimation. This research opens a new direction for physical-layer techniques in wireless communication and expands the possibilities for the future development of wireless communication technologies.
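
The dual-loss split described in this abstract can be illustrated with a small classical stand-in; a minimal sketch, assuming PyTorch and plain linear modules in place of the paper's quantum generator circuits (all module names, shapes, and the training step are illustrative assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn

# Classical stand-ins for the two generator modules: one learns
# statistical channel structure, the other instantaneous details.
stat_module = nn.Linear(16, 16)
inst_module = nn.Linear(16, 16)
discriminator = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())

opt_stat = torch.optim.Adam(stat_module.parameters(), lr=1e-3)
opt_inst = torch.optim.Adam(inst_module.parameters(), lr=1e-3)
bce, mse = nn.BCELoss(), nn.MSELoss()

z = torch.randn(32, 16)        # latent input
h_true = torch.randn(32, 16)   # ground-truth channel samples (placeholder)

# Adversarial loss: only the statistical module's optimizer steps here.
adv_loss = bce(discriminator(stat_module(z)), torch.ones(32, 1))
opt_stat.zero_grad()
adv_loss.backward()
opt_stat.step()

# MSE loss updates only the instantaneous module; detach() keeps the
# gradient from flowing back into the statistical module.
mse_loss = mse(inst_module(stat_module(z).detach()), h_true)
opt_inst.zero_grad()
mse_loss.backward()
opt_inst.step()
```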

A study on combination of loss functions for effective mask-based speech enhancement in noisy environments

  • Jung, Jaehee; Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.40 no.3 / pp.234-240 / 2021
  • In this paper, mask-based speech enhancement is improved for effective speech recognition in noisy environments. In mask-based speech enhancement, the enhanced spectrum is obtained by multiplying the noisy speech spectrum by the mask. The VoiceFilter (VF) model is used for mask estimation, and the Spectrogram Inpainting (SI) technique is used to remove residual noise from the enhanced spectrum. We propose a combined loss to further improve speech enhancement: to effectively remove the residual noise in the speech, the positive part of the triplet loss is used together with the component loss. For the experiments, the TIMIT database is reconstructed using NOISEX-92 noise and background music samples under various Signal-to-Noise Ratio (SNR) conditions. Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) are used as performance evaluation metrics. When the VF model was trained with the mean squared error and the SI model was trained with the combined loss, SDR, PESQ, and STOI improved by 0.5, 0.06, and 0.002, respectively, compared to the system trained only with the mean squared error.
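
As a rough illustration of the loss combination described here, a sketch of the positive part of a triplet loss added to a spectrum-level component loss; the margin, the weighting alpha, and the exact component-loss form are assumptions, not the paper's values:

```python
import torch

def combined_loss(enhanced, clean, noisy, margin=1.0, alpha=0.5):
    # Component loss: squared error between enhanced and clean spectra.
    component = torch.mean((enhanced - clean) ** 2)
    # Positive part of the triplet loss: pull the enhanced spectrum
    # toward the clean anchor and away from the noisy input.
    d_pos = torch.mean((enhanced - clean) ** 2, dim=-1)
    d_neg = torch.mean((enhanced - noisy) ** 2, dim=-1)
    triplet_pos = torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
    return component + alpha * triplet_pos
```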

RELIABILITY ANALYSIS FOR THE TWO-PARAMETER PARETO DISTRIBUTION UNDER RECORD VALUES

  • Wang, Liang; Shi, Yimin; Chang, Ping
    • Journal of Applied Mathematics & Informatics / v.29 no.5_6 / pp.1435-1451 / 2011
  • In this paper, estimation of the parameters as well as the survival and hazard functions is presented for the two-parameter Pareto distribution, using Bayesian and non-Bayesian approaches under upper record values. Maximum likelihood estimates (MLE) and interval estimates are derived for the parameters. Bayes estimators of the reliability performances are obtained under symmetric (squared error) and asymmetric (LINEX and general entropy (GE)) losses, when the two parameters have discrete and continuous priors, respectively. Finally, two numerical examples, one with a real data set and one with simulated data, are presented to illustrate the proposed method. An algorithm is introduced to generate record data; a simulation study is then performed, and the different estimates are compared.
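
For reference, the three losses named here have standard Bayes estimators; a sketch in common notation (the paper's exact parameterization may differ):

```latex
% Squared error loss: the Bayes estimator is the posterior mean.
L(\hat{\theta},\theta) = (\hat{\theta}-\theta)^2
  \;\Rightarrow\; \hat{\theta}_{SE} = E[\theta \mid \text{data}]

% LINEX loss with shape parameter c:
L(\Delta) = e^{c\Delta} - c\Delta - 1, \quad \Delta = \hat{\theta}-\theta
  \;\Rightarrow\; \hat{\theta}_{L} = -\tfrac{1}{c}\ln E\!\left[e^{-c\theta} \mid \text{data}\right]

% General entropy (GE) loss with shape parameter q:
L(\hat{\theta},\theta) \propto (\hat{\theta}/\theta)^q - q\ln(\hat{\theta}/\theta) - 1
  \;\Rightarrow\; \hat{\theta}_{GE} = \left(E[\theta^{-q} \mid \text{data}]\right)^{-1/q}
```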

Bayesian Algorithms for Evaluation and Prediction of Software Reliability

  • Park, Man-Gon; Ray
    • The Transactions of the Korea Information Processing Society / v.1 no.1 / pp.14-22 / 1994
  • This paper proposes two Bayes estimators of software reliability at the end of the testing stage, together with evaluation algorithms, in Smith's Bayesian software reliability growth model under the beta prior distribution BE(a, b), which is more general than the uniform distribution, as a class of prior information. We consider both a squared-error loss function and the Harris loss function in the Bayesian estimation procedures. We also compare the MSE performance of the Bayes estimators and their algorithms using computer simulations. We conclude that the Bayes estimator of software reliability under the Harris loss function is more efficient than the other estimators in terms of MSE as a grows larger and b grows smaller, and that the Bayes estimators using the beta distribution as a conjugate prior are better than those under the uniform prior as a noninformative prior when a > b.
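
To make the squared-error case concrete: with a Beta(a, b) prior and binomial test data, the posterior is again a beta distribution, and the squared-error Bayes estimator is its mean. A minimal sketch, assuming s successes in n runs (the Harris-loss estimator in the paper differs and is not shown):

```python
def bayes_reliability_sq(a, b, successes, n):
    # Beta(a, b) prior + s successes out of n runs gives a
    # Beta(a + s, b + n - s) posterior; under squared-error loss the
    # Bayes estimator is the posterior mean.
    return (a + successes) / (a + b + n)

print(bayes_reliability_sq(a=2, b=1, successes=9, n=10))  # 11/13 ≈ 0.846
```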


ON CONSISTENCY OF SOME NONPARAMETRIC BAYES ESTIMATORS WITH RESPECT TO A BETA PROCESS BASED ON INCOMPLETE DATA

  • Hong, Jee-Chang; Jung, In-Ha
    • The Pure and Applied Mathematics / v.5 no.2 / pp.123-132 / 1998
  • Let F and G denote the distribution functions of the failure times and the censoring variables in a random censorship model. Susarla and Van Ryzin (1978) verified the consistency of $F_{\alpha}$, the nonparametric Bayes estimator (NPBE) of F with respect to the Dirichlet process prior $D(\alpha)$, under the assumption that F and G are continuous. Assuming that A, the cumulative hazard function, is distributed according to a beta process with parameters c, $\alpha$, Hjort (1990) obtained the Bayes estimator $A_{c,\alpha}$ of A under a squared error loss function. By the theory of the product-integral developed by Gill and Johansen (1990), the Bayes estimator $F_{c,\alpha}$ is recovered from $A_{c,\alpha}$. The continuity assumption on F and G is removed in our proof of the consistency of $A_{c,\alpha}$ and $F_{c,\alpha}$. Our result extends Susarla and Van Ryzin (1978), since a particular transform of a beta process is a Dirichlet process and the class of beta processes is much larger than the class of Dirichlet processes.
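
The product-integral relation used here, $F = 1 - \prod (1 - dA)$, can be seen in a tiny discrete sketch; the hazard increments below are made-up numbers for illustration only:

```python
import numpy as np

dA = np.array([0.10, 0.05, 0.20])   # hazard increments at event times
F = 1.0 - np.cumprod(1.0 - dA)      # CDF recovered from the cumulative hazard
print(F)                            # [0.1   0.145 0.316]
```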


Performance Evaluation of Loss Functions and Composition Methods of Log-scale Train Data for Supervised Learning of Neural Network

  • Donggyu Song; Seheon Ko; Hyomin Lee
    • Korean Chemical Engineering Research / v.61 no.3 / pp.388-393 / 2023
  • The analysis of engineering data using neural networks based on supervised learning has been utilized in various engineering fields, such as optimization of chemical engineering processes, prediction of particulate matter pollution concentrations, prediction of thermodynamic phase equilibria, and prediction of physical properties for transport phenomena systems. Supervised learning requires training data, and its performance is affected by the composition and configuration of the given training data. Engineering data are frequently given on a log scale, such as DNA lengths and analyte concentrations. In this study, for widely distributed log-scaled training data from virtual 100×100 images, the available loss functions were quantitatively evaluated in terms of (i) the confusion matrix, (ii) maximum relative error, and (iii) mean relative error. As a result, the mean-absolute-percentage-error and mean-squared-logarithmic-error loss functions were optimal for the log-scaled training data. Furthermore, we found that uniformly selected training data lead to the best prediction performance. The optimal loss functions and the training data composition method studied in this work can be applied to engineering problems such as evaluating DNA lengths, analyzing biomolecules, and predicting the concentration of colloidal suspensions.
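
Both winning loss functions are relative-error measures, which is why they suit targets spanning several orders of magnitude; a minimal sketch of their usual definitions (the study's exact implementations are not given in the abstract):

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error: scale-free, so each decade of a
    # log-scaled target contributes comparably.
    return np.mean(np.abs((y_true - y_pred) / y_true))

def msle(y_true, y_pred):
    # Mean squared logarithmic error: penalizes ratio-scale deviations.
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

y_true = np.array([1e1, 1e3, 1e5])
y_pred = np.array([1.2e1, 0.9e3, 1.1e5])
print(mape(y_true, y_pred), msle(y_true, y_pred))
```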

A high-density gamma white spots-Gaussian mixture noise removal method for neutron images denoising based on Swin Transformer UNet and Monte Carlo calculation

  • Di Zhang; Guomin Sun; Zihui Yang; Jie Yu
    • Nuclear Engineering and Technology / v.56 no.2 / pp.715-727 / 2024
  • In fast neutron imaging, besides the dark-current and readout noise of the CCD camera, the main noise comes from high-energy gamma rays generated by neutron nuclear reactions in and around the experimental setup. These high-energy gamma rays produce high-density gamma white spots (GWS) in the fast neutron image. Due to the microscopic quantum characteristics of the neutron beam itself and environmental scattering effects, fast neutron images also typically exhibit mixed Gaussian noise. Existing denoising methods for neutron images struggle to handle a mixture of GWS and Gaussian noise. Here we put forward a deep learning approach based on the Swin Transformer UNet (SUNet) model to remove high-density GWS-Gaussian mixture noise from fast neutron images. The improved denoising model is trained with a customized loss function that combines perceptual loss and mean squared error loss to avoid the grid-like artifacts caused by using a perceptual loss alone. To address the high cost of acquiring real fast neutron images, this study introduces a Monte Carlo method to simulate noise data with GWS characteristics by computing the interaction between gamma rays and sensors based on the principle of GWS generation. Ultimately, experiments on both simulated noisy neutron images and real fast neutron images demonstrate that the proposed method not only improves the quality and signal-to-noise ratio of fast neutron images but also preserves the details of the original images during denoising.
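
The combined training objective described here could look roughly like the following sketch; the feature extractor and the weight lam are illustrative assumptions (in practice the perceptual term typically uses a pretrained network, and the paper's exact formulation is not given in the abstract):

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()
# Placeholder feature extractor standing in for a perceptual network.
feature_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())

def denoise_loss(denoised, clean, lam=0.1):
    pixel_term = mse(denoised, clean)            # MSE (pixel) loss
    perceptual_term = mse(feature_net(denoised),
                          feature_net(clean))    # perceptual (feature) loss
    return pixel_term + lam * perceptual_term
```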

Estimation of exponent value for Pythagorean method in Korean pro-baseball

  • Lee, Jang Taek
    • Journal of the Korean Data and Information Science Society / v.25 no.3 / pp.493-499 / 2014
  • The Pythagorean won-loss formula postulated by James (1980) estimates the percentage of games won as a function of runs scored and runs allowed. Several hundred articles have explored variations that improve on the RMSE of the original formula and its fit to empirical data. This paper considers a variation of the formula that allows the Pythagorean exponent to vary, and we provide the most suitable exponent for the Pythagorean method. We compare it with other methods, such as Pythagenport by Davenport and Woolner, and Pythagenpat by Smyth and Patriot. Our results suggest that the proposed method is superior to other tractable alternatives under the RMSE criterion.
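
The formula in question is $W \approx RS^{\gamma} / (RS^{\gamma} + RA^{\gamma})$; a minimal sketch of fitting the exponent by grid search over RMSE (the paper's actual estimation procedure is not specified in the abstract, so the grid search is an assumption):

```python
import numpy as np

def pyth_winpct(rs, ra, gamma):
    # Pythagorean expectation with exponent gamma.
    return rs**gamma / (rs**gamma + ra**gamma)

def fit_gamma(rs, ra, winpct, grid=np.linspace(1.0, 3.0, 201)):
    # Pick the exponent minimizing RMSE against actual winning percentage.
    rmse = [np.sqrt(np.mean((pyth_winpct(rs, ra, g) - winpct) ** 2))
            for g in grid]
    return grid[int(np.argmin(rmse))]
```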

Artifact Reduction in Sparse-view Computed Tomography Image using Residual Learning Combined with Wavelet Transformation

  • Lee, Seungwan
    • Journal of the Korean Society of Radiology / v.16 no.3 / pp.295-302 / 2022
  • The sparse-view computed tomography (CT) imaging technique can reduce radiation dose, ensure uniform image characteristics among projections, and suppress noise. However, the images reconstructed by the sparse-view CT technique suffer from severe artifacts, resulting in the distortion of image quality and internal structures. In this study, we proposed a convolutional neural network (CNN) with wavelet transformation and residual learning for reducing artifacts in sparse-view CT images, and the performance of the trained model was quantitatively analyzed. The CNN consisted of wavelet transformation, convolutional, and inverse wavelet transformation layers; the input and output images were configured as sparse-view CT images and residual images, respectively. For training the CNN, the mean squared error (MSE) was used as the loss function and Adam as the optimizer. Result images were obtained by subtracting the residual images predicted by the trained model from the sparse-view CT images. The quantitative accuracy of the result images was measured in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The results showed that the trained model effectively reduces artifacts in sparse-view CT images and improves the spatial resolution of the result images. The trained model also increased PSNR and SSIM by 8.18% and 19.71%, respectively, in comparison to a model trained without wavelet transformation and residual learning. Therefore, the proposed imaging model can restore the image quality of sparse-view CT images by reducing artifacts and improving spatial resolution and quantitative accuracy.
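
The residual-learning inference step and the two reported metrics can be sketched as follows; model is a placeholder for the trained wavelet-residual CNN, and the data_range value is an assumption about image normalization:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def restore(model, sparse_view_ct):
    # The network predicts the artifact (residual) image, which is then
    # subtracted from the sparse-view input.
    residual = model(sparse_view_ct)
    return sparse_view_ct - residual

def evaluate(result, reference):
    # PSNR and SSIM against a reference (e.g., full-view) reconstruction.
    psnr = peak_signal_noise_ratio(reference, result, data_range=1.0)
    ssim = structural_similarity(reference, result, data_range=1.0)
    return psnr, ssim
```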