• Title/Summary/Keyword: Decision Error

Machine Learning Methods to Predict Vehicle Fuel Consumption

  • Ko, Kwangho
    • Journal of the Korea Society of Computer and Information / v.27 no.9 / pp.13-20 / 2022
  • Machine learning (ML) models are proposed and analyzed to predict vehicle fuel consumption (FC) in real time. Test driving was performed with a car to measure vehicle speed, acceleration, road gradient, and FC for the training dataset. Various ML models were trained on the feature data of speed, acceleration, and road gradient with FC as the target. Two kinds of ML models are studied: regression models (linear regression and k-nearest neighbors regression) and classification models (k-nearest neighbors classifier, logistic regression, decision tree, random forest, and gradient boosting). The prediction accuracy for real-time FC is low, in the range of 0.5~0.6, and the classification models are more accurate than the regression ones. The prediction error for total FC is very low, at about 0.2~2.0%, and here the regression models are more accurate than the classification ones. This is because the coefficient of determination (R²) used as the accuracy score decreases when the predicted values are distributed around the mean of the targets. Therefore, regression models are suitable for predicting total FC, and classification models are appropriate for real-time FC prediction.
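
The regression-versus-classification comparison described above can be sketched with scikit-learn. The snippet below is a minimal illustration on synthetic data; the three features merely stand in for speed, acceleration, and road gradient, and the quartile binning of FC for the classifiers is an assumption, not the authors' actual setup.

```python
# Minimal sketch of the two model families compared above, on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.neighbors import KNeighborsRegressor, KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import r2_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # stand-ins for [speed, accel, gradient]
fc = 2.0 + X @ np.array([0.8, 1.5, 0.6]) + rng.normal(scale=0.5, size=1000)

# Regression type: predict continuous FC, scored by R^2.
X_tr, X_te, y_tr, y_te = train_test_split(X, fc, random_state=0)
for reg in (LinearRegression(), KNeighborsRegressor()):
    reg.fit(X_tr, y_tr)
    print(type(reg).__name__, "R2:", round(r2_score(y_te, reg.predict(X_te)), 3))

# Classification type: discretize FC into quartile bins and predict the bin.
y_cls = np.digitize(fc, np.quantile(fc, [0.25, 0.5, 0.75]))
Xc_tr, Xc_te, c_tr, c_te = train_test_split(X, y_cls, random_state=0)
for clf in (KNeighborsClassifier(), LogisticRegression(),
            RandomForestClassifier(), GradientBoostingClassifier()):
    clf.fit(Xc_tr, c_tr)
    print(type(clf).__name__, "acc:", round(accuracy_score(c_te, clf.predict(Xc_te)), 3))
```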

High Noise Density Median Filter Method for Denoising Cancer Images Using Image Processing Techniques

  • Priyadharsini.M, Suriya;Sathiaseelan, J.G.R
    • International Journal of Computer Science & Network Security / v.22 no.11 / pp.308-318 / 2022
  • Noise is a serious issue in digital imaging. Impulse noise, which is created by unstable voltage, is one of the most common noises corrupting images while they are sent over electronic communication channels or collected during the acquisition process. Accurate diagnostic images can be obtained by removing these noises without affecting edges and fine details. A new averaging High Noise Density Median Filter (HNDMF) is proposed in this paper; it operates in two stages for each pixel, deciding whether the test pixel is degraded by salt-and-pepper noise (SPN). In the first stage, a detector identifies corrupted pixels; in the second stage, each corrupted pixel is replaced by a noise-free value computed from the processed pixels in its window. The paper also reviews known image-denoising methods and applies a new decision-based weighted median filter to remove impulse noise. Using the Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), and Structural Similarity Index Method (SSIM) metrics, the paper compares the performance of the Gaussian Filter (GF), Adaptive Median Filter (AMF), and PHDNF. A detailed simulation is performed on the Mini-MIAS dataset to confirm the merits of the presented model. Images affected by various amounts of simulated salt-and-pepper noise, as well as speckle noise, are evaluated and reported as experimental results. The obtained values show that the HNDMF model reaches better performance with the highest picture quality: according to the quality metrics, it produces superior results to the existing filter methods by accurately detecting salt-and-pepper noise pixels and replacing them with mean and median values, a significant improvement over the plain median filter.
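
A generic two-stage, decision-based filter of the kind described above can be written in a few lines of NumPy. This is an illustrative stand-in, not the authors' exact HNDMF: it assumes an 8-bit image, detects extreme-valued (salt-and-pepper) pixels, and replaces only those with the median of their noise-free neighbors.

```python
# Sketch of a detect-then-replace median filter for salt-and-pepper noise.
import numpy as np

def decision_median_filter(img, window=3):
    """Replace only pixels detected as salt (255) or pepper (0) noise."""
    out = img.copy().astype(np.float64)
    pad = window // 2
    padded = np.pad(out, pad, mode="reflect")
    noisy = (img == 0) | (img == 255)          # stage 1: detect corrupted pixels
    for r, c in zip(*np.nonzero(noisy)):       # stage 2: replace them
        patch = padded[r:r + window, c:c + window]
        clean = patch[(patch != 0) & (patch != 255)]
        # median of noise-free neighbors; fall back to the raw patch median
        out[r, c] = np.median(clean) if clean.size else np.median(patch)
    return out.astype(img.dtype)

# Demo on a synthetic image with ~20% salt-and-pepper corruption.
rng = np.random.default_rng(0)
img = rng.integers(50, 200, size=(64, 64)).astype(np.uint8)
mask = rng.random(img.shape) < 0.2
img[mask] = rng.choice([0, 255], size=mask.sum()).astype(np.uint8)
denoised = decision_median_filter(img)
```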

ROC Analysis of Diagnostic Performance in Liver Scan (간스캔의 ROC분석에 의한 진단적 평가)

  • Lee, Myung-Chul;Moon, Dae-Hyuk;Koh, Chang-Soon;Matumoto, Toru;Tateno, Yukio
    • The Korean Journal of Nuclear Medicine / v.22 no.1 / pp.39-45 / 1988
  • To evaluate the diagnostic accuracy of liver scintigraphy, we analyzed liver scans of 143 normal subjects and 258 patients with various liver diseases. Three ROC curves, for SOL, liver cirrhosis, and diffuse liver disease, were fitted using rating methods, and the areas under the ROC curves and their standard errors were calculated by the trapezoidal rule and the variance of the Wilcoxon statistic suggested by McNeil. We compared these results with those of the National Institute of Radiological Sciences in Japan. 1) The sensitivity of liver scintigraphy was 74.2% in SOL, 71.8% in liver cirrhosis, and 34.0% in diffuse liver disease. The specificity was 96.0% in SOL, 94.2% in liver cirrhosis, and 87.6% in diffuse liver disease. 2) The ROC curves of SOL and liver cirrhosis approached the upper left-hand corner more closely than that of diffuse liver disease. The area (± standard error) under the ROC curve was 0.868 ± 0.024 in SOL and 0.867 ± 0.028 in liver cirrhosis; both were significantly higher than the 0.658 ± 0.043 of diffuse liver disease. 3) There was no interobserver difference in terms of the ROC curves, but the low sensitivity and high specificity of the authors' SOL diagnoses suggest that a stricter decision threshold was used.
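
The trapezoidal AUC and its Wilcoxon-based standard error used above are standard quantities; the following sketch computes both, using the Hanley-McNeil (1982) variance formula for the standard error. The group sizes in the example call are hypothetical, since the per-disease counts are not broken out here.

```python
import numpy as np

def auc_trapezoidal(fpr, tpr):
    """Area under an empirical ROC curve by the trapezoidal rule."""
    order = np.argsort(fpr)
    return np.trapz(np.asarray(tpr, float)[order], np.asarray(fpr, float)[order])

def auc_se(auc, n_pos, n_neg):
    """Standard error of the AUC via the variance of the Wilcoxon statistic
    (Hanley & McNeil, 1982)."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc ** 2)
           + (n_neg - 1) * (q2 - auc ** 2)) / (n_pos * n_neg)
    return var ** 0.5

# Hypothetical group sizes: 100 diseased vs. the 143 normals.
print(round(auc_se(0.868, 100, 143), 3))  # SE on the order of a few hundredths
```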

A Novel Grasshopper Optimization-based Particle Swarm Algorithm for Effective Spectrum Sensing in Cognitive Radio Networks

  • Ashok, J;Sowmia, KR;Jayashree, K;Priya, Vijay
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.520-541 / 2023
  • In cognitive radio networks (CRNs), spectrum sensing (SS) is of utmost significance. Every cognitive radio (CR) user generates a sensing report during the training phase under various circumstances and, depending on a collective process, either communicates or remains silent. In the training stage, the fusion centre combines the local judgments made by the CR users by majority vote and returns a final conclusion to every CR user. Enough data regarding the environment, including the activity of the primary user (PU) and every CR's response to that activity, is acquired, and sensing classes are created during the training stage. During the classification stage, every CR user compares its most recent sensing report to the previous sensing classes, and distance vectors are generated. The posterior probability of every sensing class is derived from this quantitative data, and the sensing report is then classified as signifying either the presence or absence of the PU. The ISVM technique is utilized to compute the quantitative variables necessary to compute the posterior probability. Here, the iterations of the SVM are tuned by the novel GO-PSA, which combines the Grasshopper Optimization Algorithm (GOA) and Particle Swarm Optimization (PSO). The novel GO-PSA is developed because it overcomes the problem of computational complexity, returns a minimum error, and saves time compared with various state-of-the-art algorithms. The dependability of every CR user is taken into consideration when these local choices are integrated at the fusion centre using an innovative decision-combination technique. Depending on the collective choice, the CR users then communicate or remain silent.
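
The fusion step described above lends itself to a compact illustration. Below is a minimal, reliability-weighted majority vote; the paper's actual decision-combination rule and ISVM-based posterior computation are more elaborate, so treat this purely as a sketch of the idea.

```python
# Sketch: reliability-weighted decision fusion at the fusion centre.
import numpy as np

def fuse_decisions(local_decisions, reliabilities, threshold=0.5):
    """Combine binary CR sensing reports (1 = PU present, 0 = absent).

    local_decisions: 0/1 votes, one per CR user.
    reliabilities:   per-user weights, e.g. historical detection accuracy.
    """
    d = np.asarray(local_decisions, dtype=float)
    w = np.asarray(reliabilities, dtype=float)
    score = np.dot(w, d) / w.sum()        # weighted fraction voting "present"
    return int(score >= threshold)

# Five CR users, the third one historically unreliable.
print(fuse_decisions([1, 1, 0, 1, 0], [0.9, 0.8, 0.3, 0.85, 0.7]))  # -> 1
```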

Application into Assessment of Liquefaction Hazard and Geotechnical Vulnerability During Earthquake with High-Precision Spatial-Ground Model for a City Development Area (도시개발 영역 고정밀 공간지반모델의 지진 시 액상화 재해 및 지반 취약성 평가 활용)

  • Kim, Han-Saem;Sun, Chang-Guk;Ha, Ik-Soo
    • Journal of the Earthquake Engineering Society of Korea / v.27 no.5 / pp.221-230 / 2023
  • This study proposes a methodology for assessing seismic liquefaction hazard by implementing high-resolution three-dimensional (3D) ground models from high-density, high-precision site investigation data acquired in an area of interest, linked to geotechnical numerical analysis tools. This makes it possible to estimate the vulnerability to earthquake-induced geotechnical phenomena (ground motion amplification, liquefaction, landslide, etc.) and the complex disasters they trigger across an urban development area with several stages of high-density datasets. In this study, the spatial-ground models for city development were built on a 3D high-precision grid of 5 m × 5 m × 1 m by applying geostatistical methods. After comparing the prediction errors, the geotechnical model from Gaussian sequential simulation was selected to assess earthquake-induced geotechnical hazards. In particular, with seven independent input earthquake motions, liquefaction analyses with finite element methods and hazard mappings with the liquefaction potential index (LPI) and liquefaction severity number (LSN) were performed reliably based on the spatial geotechnical models of the study area. Furthermore, various phenomena and parameters, including settlement in the city planning area, were assessed in terms of geotechnical vulnerability, also based on the high-resolution spatial-ground modeling. This case study of high-precision 3D ground-model-based zonation in the area of interest verifies its usefulness in spatially assessing earthquake-induced hazards and geotechnical vulnerability and in supporting decision-making.
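
Of the two hazard indices named above, the liquefaction potential index (LPI) has a simple closed form (after Iwasaki et al.): LPI = ∫₀²⁰ F(z)·w(z) dz with w(z) = 10 − 0.5z and F(z) = 1 − FS(z) wherever the factor of safety FS < 1 (else 0). The sketch below evaluates it for a hypothetical FS depth profile; the study's actual profiles come from its site investigation data.

```python
# Sketch: the Iwasaki Liquefaction Potential Index for one borehole profile.
import numpy as np

def lpi(depths_m, fs_values):
    z = np.asarray(depths_m, dtype=float)
    fs = np.asarray(fs_values, dtype=float)
    F = np.where(fs < 1.0, 1.0 - fs, 0.0)     # severity where FS < 1
    w = np.clip(10.0 - 0.5 * z, 0.0, None)    # depth weighting, zero past 20 m
    return np.trapz(F * w, z)                 # numerical integration over depth

# Hypothetical FS profile every metre down to 20 m.
z = np.arange(0, 21)
fs = np.linspace(0.7, 1.4, len(z))
print(f"LPI = {lpi(z, fs):.1f}")              # LPI > 15 is commonly read as high hazard
```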

A Geographic Routing Algorithm to Prolong the Lifetime of MANET (MANET에서의 네트워크 수명을 연장시키는 위치기반 라우팅 기법)

  • Lee, Ju-Young
    • Journal of the Korea Society for Simulation / v.19 no.2 / pp.119-125 / 2010
  • Ad-hoc networks are dynamically reconfigurable, temporary wireless networks in which all mobile devices cooperatively maintain network connectivity with no assistance from base stations, while holding limited amounts of energy that are consumed at different rates depending on the power level. Since every node has to perform the functions of a router, if some nodes die early due to lack of energy, other nodes will no longer be able to communicate with each other and the network lifetime will be shortened. Consequently, it is very important to develop a technique that consumes the limited energy resources efficiently so that the network lifetime is maximized. In this paper, geographical localized routing is proposed to help make smarter routing decisions using only local information and to reduce the routing overhead. The proposed localized routing algorithm selects energy-aware neighbors considering the transmission energy, the error rate over the wireless link, and the residual energy of the node, which enables nodes to achieve balanced energy consumption and prolongs the network lifetime.
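
A next-hop metric of the kind the abstract describes can be illustrated as follows. The particular weighting below (transmit energy scaled by expected retransmissions, divided by residual energy) is an invented stand-in, not the paper's actual selection rule.

```python
# Sketch: energy-aware neighbor selection from local information only.
def link_cost(distance_m, error_rate, residual_energy_j, alpha=2.0):
    """Lower is better; favors short, reliable links to energy-rich nodes."""
    tx_energy = distance_m ** alpha           # path-loss-style transmit cost
    expected_tx = 1.0 / (1.0 - error_rate)    # retransmissions on a lossy link
    return tx_energy * expected_tx / residual_energy_j

def pick_next_hop(neighbors):
    """neighbors: list of (node_id, distance_m, error_rate, residual_energy_j)."""
    return min(neighbors, key=lambda n: link_cost(n[1], n[2], n[3]))[0]

# Node "b" is closer but lossy; node "c" is reliable but nearly drained.
print(pick_next_hop([("a", 30, 0.1, 5.0), ("b", 25, 0.4, 5.0), ("c", 28, 0.1, 1.0)]))
```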

Therapeutic Duplication as a Medication Error Risk in Fixed-Dose Combination Drugs for Dyslipidemia: A Nationwide Study

  • Wonbin Choi;Hyunji Koo;Kyeong Hye Jeong;Eunyoung Kim;Seung-Hun You;Min-Taek Lee;Sun-Young Jung
    • Korean Journal of Clinical Pharmacy / v.33 no.3 / pp.168-177 / 2023
  • Background & Objectives: Fixed-dose combinations (FDCs) offer advantages in adherence and cost-effectiveness compared to free combinations (FCs), but they can also complicate the prescribing process, potentially leading to therapeutic duplication (TD). This study aimed to identify the prescribing patterns of FDCs for dyslipidemia and investigate their associated risk of TD. Methods: This was a retrospective cohort study of statin-containing drugs, using Health Insurance Review & Assessment Service-National Patient Sample (HIRA-NPS) data from 2018. The unit of analysis was a prescription claim, and the primary outcome was TD. The risk ratio of TD was calculated and adjusted for patient, prescriber, and the number of cardiovascular drugs prescribed using a multivariable Poisson model. Results: Our study included 252,797 FDC prescriptions and 515,666 FC prescriptions. Of the FDC group, 46.52% were male patients and 56.21% were aged 41 to 65. Ezetimibe was included in 71.61% of the FDC group but only 0.25% of the FC group. TD occurred in 0.18% of the FDC group, and the adjusted risk ratio of TD in FDC prescriptions compared to FC prescriptions was 6.44 (95% CI 5.30-7.82). Conclusions: Prescribing FDCs for dyslipidemia was associated with a higher risk of TD compared to free combinations. Despite the relatively low absolute prevalence of TD, the findings underline the necessity of strategies to mitigate this risk when prescribing FDCs for dyslipidemia. Our study suggests the potential utility of clinical decision support systems and standardized nomenclature in reducing medication errors, providing valuable insights for clinical practice and future research.
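
An adjusted risk ratio from a multivariable Poisson model, as in the analysis above, can be sketched with statsmodels. Everything below is synthetic: the column names and data merely stand in for the HIRA-NPS claims, which are not public here.

```python
# Sketch: adjusted risk ratio for a binary outcome via Poisson regression
# with robust standard errors; exponentiating the coefficient gives the RR.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "fdc": rng.integers(0, 2, n),          # 1 = FDC, 0 = free combination
    "age": rng.integers(20, 90, n),
    "n_cv_drugs": rng.integers(1, 6, n),
})
lin = -6 + 1.8 * df.fdc + 0.01 * df.age    # synthetic rare-outcome mechanism
df["td"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = smf.poisson("td ~ fdc + age + n_cv_drugs", data=df).fit(cov_type="HC1")
rr = np.exp(model.params["fdc"])
ci = np.exp(model.conf_int().loc["fdc"])
print(f"adjusted RR = {rr:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```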

Using machine learning to forecast and assess the uncertainty in the response of a typical PWR undergoing a steam generator tube rupture accident

  • Tran Canh Hai Nguyen;Aya Diab
    • Nuclear Engineering and Technology / v.55 no.9 / pp.3423-3440 / 2023
  • In this work, a multivariate time-series machine learning meta-model is developed to predict the transient response of a typical nuclear power plant (NPP) undergoing a steam generator tube rupture (SGTR). The model employs Recurrent Neural Networks (RNNs), including the Long Short-Term Memory (LSTM), the Gated Recurrent Unit (GRU), and a hybrid CNN-LSTM model. To address the uncertainty inherent in such predictions, a Bayesian Neural Network (BNN) was implemented. The models were trained using a database generated by the Best Estimate Plus Uncertainty (BEPU) methodology, coupling the thermal hydraulics code RELAP5/SCDAP/MOD3.4 to the statistical tool DAKOTA to predict the variation in system response under various operational and phenomenological uncertainties. The RNN models successfully capture the underlying characteristics of the data with reasonable accuracy, and the BNN-LSTM approach offers an additional layer of insight into the level of uncertainty associated with the predictions. The results demonstrate that the LSTM outperforms the GRU, while the hybrid CNN-LSTM model is computationally the most efficient. This study aims to build a better understanding of the capabilities and limitations of machine learning models in the context of nuclear safety. By extending the application of ML models to more severe accident scenarios, where operators are under extreme stress and prone to errors, such models can provide valuable support and act as expert systems that assist decision-making while minimizing the chance of human error.
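
The RNN meta-model architecture described above can be sketched in Keras. The input and output dimensions below are placeholders, since the RELAP5/DAKOTA-generated database is not reproduced here; swapping tf.keras.layers.GRU in for LSTM gives the GRU variant.

```python
# Sketch: a stacked-LSTM meta-model mapping a multivariate transient history
# to predicted plant-response quantities. All shapes and data are synthetic.
import numpy as np
import tensorflow as tf

timesteps, n_features, n_outputs = 50, 8, 4    # hypothetical dimensions
X = np.random.rand(256, timesteps, n_features).astype("float32")
y = np.random.rand(256, n_outputs).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, n_features)),
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(n_outputs),           # predicted response variables
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))
```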

Analysis and Comparison of Standard and Existed Heating Degree-Hours Model for decision of Greenhouse Heating Load in Korea (온실의 난방부하 결정을 위한 Degree-Hour 모델식 비교 분석)

  • Woo, Young-Hoe
    • Journal of Practical Agriculture & Fisheries Research / v.6 no.1 / pp.143-154 / 2004
  • The value of the daily heating degree-hour (DH) is essential for calculating the heating load of a greenhouse during the winter months. Many researchers have so far proposed different models for estimating the DH value, and those models (DH in ℃·h·year⁻¹) are investigated in this paper. Standard and previously proposed DH values were compared to determine the estimation error of each model. The standard DH values and the other proposed DH values were obtained for inside setpoint temperatures of 9, 13, 16, and 20℃ in the greenhouse, estimated from meteorological data from 1961 to 2000 for each locality; the standard DH values served as the independent reference and the existing DH values as the dependent estimates. Among the various models, the one developed theoretically by Mihara and modified by the author fitted the standard DH values best. DH values were obtained for the inside setpoint temperatures of 9, 13, 16, and 20℃ with the modified Mihara model, and a new DH contour-line map was drawn from it. Using the DH contour-line map, DH values can be obtained easily for any setpoint in any locality.
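
The underlying quantity, heating degree-hours against a setpoint, is straightforward to compute from hourly temperature records; the sketch below uses a synthetic year of data rather than the 1961-2000 meteorological series the study is based on.

```python
# Sketch: heating degree-hours (DH) for a given inside setpoint temperature.
import numpy as np

def degree_hours(hourly_temps_c, setpoint_c):
    """Sum of (setpoint - outside temp) over hours when heating is needed."""
    t = np.asarray(hourly_temps_c, dtype=float)
    return float(np.sum(np.clip(setpoint_c - t, 0.0, None)))  # unit: ℃·h

# One synthetic year of hourly temperatures with a seasonal cycle.
rng = np.random.default_rng(0)
hours = np.arange(365 * 24)
temps = 12 + 10 * np.sin(2 * np.pi * hours / (365 * 24)) + rng.normal(0, 3, hours.size)
for sp in (9, 13, 16, 20):                     # the paper's setpoints
    print(f"setpoint {sp}℃: DH = {degree_hours(temps, sp):,.0f} ℃·h/year")
```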

An Experimental Study on the Design of the Korean Ballot Paper - Problems of the Regulations of the Public Official Election Act - (한국 투표용지 디자인에 관한 실험 연구 - 공직선거법 규정에 대한 문제제기 -)

  • Jung, Eui Tay;Hong, Jae Woo;Lee, Sang Hyeb;Lee, Eun Jung
    • Design Convergence Study / v.17 no.3 / pp.91-108 / 2018
  • Although the design of the ballot paper can influence voting behavior, there has been little study of ballot paper design and the importance of information design. This study examines the possibility of errors occurring in ballot papers designed under the rules of the Public Official Election Act. To do this, we conducted a heuristic evaluation to review the regulations and an empirical experiment with closed groups. From this, we found that (1) diverse variants of the ballot paper can be produced, and (2) various fonts, sizes, and materials can be used. Accordingly, regulations are needed on (1) the use of chromaticity and images, (2) the application of universally designed typography, and (3) the margins and spacing between ballot boxes. This study, in closing, suggests institutional measures for securing the validity and legitimacy of the decision-making process so as to remove latent ambiguity in ballot paper design.