• Title/Summary/Keyword: Consensus algorithms

A Study on the Application of Blockchain to the Edge Computing-Based Internet of Things (에지 컴퓨팅 기반의 사물인터넷에 대한 블록체인 적용 방안 연구)

  • Choi, Jung-Yul
    • Journal of Digital Convergence
    • /
    • v.17 no.12
    • /
    • pp.219-228
    • /
    • 2019
  • Thanks to the development of information technology and the spread of smart services, Internet of Things (IoT) technology, in which various smart devices are connected to the network, has continued to advance. In the legacy IoT architecture, data processing has been centralized in the cloud, but this raises concerns about a single point of failure, end-to-end transmission delay, and security. To solve these problems, it is necessary to apply decentralized blockchain technology to the IoT. However, IoT devices with limited computing power find it hard to mine blocks, a task that consumes a great amount of computing resources. To overcome this difficulty, this paper proposes an edge computing-based IoT architecture that allows blockchain technology to be applied to resource-constrained IoT devices. The paper also presents an operational procedure for the blockchain in this edge computing-based IoT architecture.
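
    The abstract describes the architecture only at a high level. As a rough, hypothetical sketch of the offloading idea it summarizes (constrained IoT devices delegate block creation to an edge node), consider the Python outline below; the class and method names, the SHA-256 proof-of-work, and the difficulty setting are illustrative assumptions, not the paper's design.

```python
import hashlib
import json
import time

# Hypothetical sketch: an IoT device only reports sensor data, while an
# edge node performs the proof-of-work style block sealing on its behalf.

class EdgeNode:
    def __init__(self, difficulty=3):
        self.chain = []       # sealed blocks
        self.pending = []     # transactions received from IoT devices
        self.difficulty = difficulty

    def receive_transaction(self, device_id, payload):
        """Called by IoT devices; the devices themselves do no mining."""
        self.pending.append({"device": device_id, "data": payload,
                             "ts": time.time()})

    def seal_block(self):
        """Mine a block at the edge on behalf of the constrained devices."""
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        block = {"index": len(self.chain), "prev": prev_hash,
                 "txs": self.pending, "nonce": 0}
        while True:
            digest = hashlib.sha256(
                json.dumps(block, sort_keys=True).encode()).hexdigest()
            if digest.startswith("0" * self.difficulty):
                block["hash"] = digest
                break
            block["nonce"] += 1
        self.chain.append(block)
        self.pending = []
        return block

# An IoT device simply reports a reading; the edge node bears the mining cost.
edge = EdgeNode()
edge.receive_transaction("sensor-01", {"temp": 22.5})
print(edge.seal_block()["hash"])
```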

Analysis of the Quantity and Quality of the Contents of Junior High School Mathematics Curriculum and Textbooks (중학교 수학 교육과정 및 교과서 내용의 양과 난이도 수준 분석)

  • 박경미
    • Journal of Educational Research in Mathematics
    • /
    • v.10 no.1
    • /
    • pp.35-55
    • /
    • 2000
  • There seems to be a public consensus that the content of Korean mathematics textbooks is extensive and of a high level of difficulty. However, such judgment is the result of a generalization based on individual experience or on the results of international achievement comparisons. Therefore, a more objective and stricter approach to determining the quantity and level of difficulty of mathematics content is necessary. For this purpose, this study compared the content of Korea's 6th and 7th junior high school curricula, and compared the Korean mathematics curriculum with textbooks of the United States, which have a considerable influence on the making of Korean mathematics textbooks. First of all, a comparison of Korea's 6th and 7th junior high school mathematics curricula showed a slight reduction in the total quantity of content, as more content was deleted than was added in the 7th curriculum. However, given that the number of hours of mathematics classes has been reduced, the reduction in content cannot be regarded as anything more than a simple reflection of the reduction in hours, showing that the 7th curriculum has not met its revision objective of reducing the content by 30%. Meanwhile, the comparison of United States junior high school mathematics textbooks with Korea's 7th curriculum showed that the 7th grade content in the United States was much broader, encompassing content which in Korea ranges from the 2nd grade of elementary school to the 2nd year of junior high school. Therefore, on the surface, it may appear that the overall level of content in American mathematics textbooks is lower than that of the Korean ones. However, there are several cases, such as statistics and probability, where certain content was more difficult and introduced at an earlier grade in the United States than in Korea. In fact, it can be said that Korean students tend to find the content of mathematics textbooks harder than it actually is because it is delivered as a mere aggregate of algorithms, with little consideration of its application in everyday life. In this respect, there is much room for improvement in the mathematics textbooks of Korea.

Analyzing Machine Learning Techniques for Fault Prediction Using Web Applications

  • Malhotra, Ruchika;Sharma, Anjali
    • Journal of Information Processing Systems
    • /
    • v.14 no.3
    • /
    • pp.751-770
    • /
    • 2018
  • Web applications are indispensable in the software industry and continuously evolve, either to meet new criteria and/or to include new functionality. However, despite quality assurance via testing, the presence of defects hinders straightforward development. Several factors contribute to defects, which are often minimized at high expense in terms of man-hours. Thus, detection of fault proneness in early phases of software development is important, and a fault prediction model for identifying fault-prone classes in a web application is highly desirable. In this work, we compare 14 machine learning techniques to analyse the relationship between object-oriented metrics and fault prediction in web applications. The study is carried out using various releases of the Apache Click and Apache Rave datasets. En route to the predictive analysis, the input basis set for each release is first optimized using a filter-based correlation feature selection (CFS) method. It is found that the LCOM3, WMC, NPM and DAM metrics are the most significant predictors. The statistical analysis of these metrics also shows good conformity with the CFS evaluation and affirms the role of these metrics in the defect prediction of web applications. The overall predictive ability of the different fault prediction models is first ranked using the Friedman technique and then statistically compared using Nemenyi post-hoc analysis. The results not only uphold the predictive capability of machine learning models for identifying faulty classes in web applications, but also indicate that ensemble algorithms are the most appropriate for defect prediction in the Apache datasets. Further, we also derive a consensus between the metrics selected by the CFS technique and the statistical analysis of the datasets.
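
    The pipeline summarized above (feature selection, cross-validated comparison of several classifiers, then a Friedman test over the per-fold results) can be illustrated with a small, hypothetical Python sketch. The synthetic data, the simple correlation filter standing in for CFS, and the four model families shown are assumptions for illustration only, and the Nemenyi post-hoc step is omitted.

```python
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the object-oriented-metric datasets used in the paper.
X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=0)

# Crude correlation filter standing in for CFS: keep the features whose
# absolute correlation with the fault label is highest.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
X_sel = X[:, np.argsort(corr)[-6:]]

# A handful of the model families compared in the study (the paper uses 14).
models = {
    "logistic": LogisticRegression(max_iter=1000),
    "naive_bayes": GaussianNB(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Per-fold AUC scores; the Friedman test then asks whether the models'
# rankings across folds differ more than chance would allow.
scores = {name: cross_val_score(m, X_sel, y, cv=10, scoring="roc_auc")
          for name, m in models.items()}
stat, p = friedmanchisquare(*scores.values())
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```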

Cognitive Virtual Network Embedding Algorithm Based on Weighted Relative Entropy

  • Su, Yuze;Meng, Xiangru;Zhao, Zhiyuan;Li, Zhentao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.4
    • /
    • pp.1845-1865
    • /
    • 2019
  • The current Internet is built by many service providers with different objectives and policies, which makes the direct deployment of radically new architectures and protocols nearly impossible without reaching a consensus among almost all of them. Network virtualization has been proposed to fend off this ossification of the Internet architecture and to add diversity to the future Internet. As an important part of network virtualization, the virtual network embedding (VNE) problem has received more and more attention. To address the problems of high embedding cost, low acceptance ratio (AR), and poor environmental adaptability in VNE algorithms, a cognitive method is introduced to improve adaptability to the changing environment, and a cognitive virtual network embedding algorithm based on weighted relative entropy (WRE-CVNE) is proposed in this paper. First, the weighted relative entropy (WRE) method is proposed to select suitable substrate nodes and paths in VNE. In the WRE method, ranking indicators and their weighting coefficients are selected to calculate node importance and path importance; this is the basis of the WRE-CVNE. In the virtual node embedding stage, the WRE method and the breadth-first search (BFS) algorithm are both used, and node proximity is introduced into substrate node ranking to achieve joint topology awareness. Finally, in the virtual link embedding stage, the CPU resource balance degree, bandwidth resource balance degree, and path hop count are taken into account. The path importance is calculated based on the WRE method, and a suitable substrate path is selected to reduce resource fragmentation. Simulation results show that the proposed algorithm can significantly improve the AR and the long-term average revenue-to-cost ratio (LTAR/CR) by adjusting the weighting coefficients in the VNE stage according to the network environment. We also analyze the impact of the weighting coefficients on the performance of the WRE-CVNE. In addition, the adaptability of the WRE-CVNE is investigated in three different scenarios, and the effectiveness and efficiency of the WRE-CVNE are demonstrated.
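
    The abstract does not give the WRE formula itself. The Python sketch below shows one plausible weighted relative-entropy style node ranking purely for illustration; the normalization, the "ideal node" built from per-metric maxima, and the example indicators and weights are assumptions, not the paper's definition.

```python
import numpy as np

def node_importance(indicators, weights, eps=1e-12):
    """Rank substrate nodes by a weighted relative-entropy distance to an
    ideal node. indicators: (n_nodes, n_metrics) raw values where larger is
    better (e.g. CPU, adjacent bandwidth, degree, node proximity);
    weights: per-metric weighting coefficients summing to 1."""
    # Normalize each metric column into a distribution over nodes.
    p = indicators / (indicators.sum(axis=0, keepdims=True) + eps)
    # The "ideal node" holds the best observed share of every metric.
    p_star = p.max(axis=0, keepdims=True)
    # Weighted relative-entropy distance of each node to the ideal node;
    # a node matching the best value in every metric gets distance 0.
    d = np.sum(weights * p_star * np.log((p_star + eps) / (p + eps)), axis=1)
    return d  # smaller distance = more important node

# Four substrate nodes described by three indicators, with CPU weighted highest.
ind = np.array([[8.0, 100.0, 3.0],
                [4.0,  80.0, 5.0],
                [9.0,  60.0, 2.0],
                [5.0, 120.0, 4.0]])
w = np.array([0.5, 0.3, 0.2])
order = np.argsort(node_importance(ind, w))  # most important node first
print("candidate embedding order of substrate nodes:", order)
```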

Comparative Review of Pharmacological Treatment Guidelines for Bipolar Disorder (양극성 장애의 약물치료 가이드라인 비교)

  • Seoyeon Chin;Hyoyoung Kim;Yesul Kim;;Bo-young Kwon;Boyoon Choi;Bobae Lee;Jiye Lee;Chae-Eun Kwon;Yeongdo Mun;Kaveesha Fernando;Ji Hyun Park
    • Korean Journal of Clinical Pharmacy
    • /
    • v.33 no.3
    • /
    • pp.153-167
    • /
    • 2023
  • Objective: Bipolar disorder displays a spectrum of manifestations, including manic, hypomanic, depressive, mixed, psychotic, and atypical episodes, contributing to its chronic nature and association with heightened suicide risk. Creating effective pharmacotherapy guidelines is crucial for managing bipolar disorder and reducing its prevalence. Treatment algorithms grounded in science have improved symptom management, but variations in recommended medications arise from differences in research, healthcare policies, and cultural nuances across countries. Methods: This study compares Korea's bipolar disorder treatment algorithm with guidelines from the UK, Australia, and an international association. The aim is to uncover disparities in the key recommended medications and their underlying factors. Differences in CYP450 genotypes affecting drug metabolism contribute to distinct recommended medications. Variances also stem from diverse guideline development approaches (expert consensus versus meta-analysis results), which form the primary differences between Korea and other countries. Results: Discrepancies remain among international guidelines that rely on meta-analyses, owing to differences in timing and in the studies utilized. Drug approval speeds further affect medication selection. However, limited high-quality research results are the main cause of guideline variations, hampering consistent treatment conclusions. Conclusion: Korea's unique Delphi-based treatment algorithm stands out. To improve evidence-based recommendations, large-scale studies assessing bipolar disorder treatments for the Korean population are necessary. This foundation will ensure that future recommendations are rooted in scientific evidence.

A Preliminary Study on the Development of Korean Medication Algorithm for Attention-Deficit Hyperactivity Disorder (한국형 주의력결핍 과잉행동장애 약물치료 알고리듬 개발을 위한 예비연구)

  • Park, Jae-Hong;Kim, Bung-Nyun;Kim, Jae-Won;Kim, Ji-Hoon;Son, Jung-Woo;Shin, Dong-Won;Shin, Yun-Mi;Yang, Su-Jin;Yoo, Hanik-K.;Yoo, Hee-Jeong;Lee, Soyoung Irene;Cheon, Keun-Ah;Hong, Hyun-Ju;Hwang, Jun-Won
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.22 no.1
    • /
    • pp.25-37
    • /
    • 2011
  • Objectives: This study was conducted to develop a Korean algorithm of pharmacological and non-pharmacological treatment strategies for attention-deficit hyperactivity disorder (ADHD) and its specific comorbid disorders (e.g., tic disorder, depressive disorder, anxiety disorder, bipolar disorder, and oppositional defiant disorder/conduct disorder). Methods: Based on a literature review and expert consensus, both paper- and web-based survey tools were developed covering a comprehensive range of questions. Most options were scored using a 9-point scale for rating the appropriateness of medical decisions. For the other options, the surveyed experts were asked to provide answers (e.g., duration of treatment, average dosage) or check boxes to indicate their preferred answers. The survey was performed online in a self-administered manner. Ultimately, 49 Korean child and adolescent psychiatrists, who were considered experts in the treatment of ADHD, voluntarily completed the questionnaire. In analyzing the responses to items rated on the 9-point scale, consensus on each option was defined as a non-random distribution of scores as determined by a chi-square test. We assigned a categorical rank (first line/preferred choice, second line/alternate choice, third line/usually inappropriate) to each option based on the 95% confidence interval around the mean rating score. Results: Specific medication strategies for key clinical situations in ADHD and its comorbid disorders were indicated and described. We organized the suggested algorithms of ADHD treatment mainly on the basis of the opinions of the Korean experts. The suggested algorithm was constructed according to the templates of the Texas Children's Medication Algorithm Project (CMAP). Conclusion: We have proposed a Korean treatment algorithm for ADHD, both with and without comorbid disorders, through expert consensus and a broad literature review. As the tools available for ADHD treatment evolve, this algorithm can be reorganized and modified as required to suit updated scientific and clinical research findings.
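
    The consensus procedure summarized above (a chi-square test for a non-random score distribution, plus a category assigned from the 95% confidence interval of the mean rating) might look roughly like the Python sketch below. The three score ranges used for the chi-square test and the 6.5/3.5 cut-offs on the interval's lower bound are illustrative assumptions in the style of expert-consensus surveys; the study's exact criteria are not stated in the abstract.

```python
import numpy as np
from scipy import stats

def rate_option(scores, alpha=0.05):
    """scores: 9-point appropriateness ratings from the surveyed experts."""
    scores = np.asarray(scores, dtype=float)
    # Consensus check: is the spread over the low/middle/high score ranges
    # distinguishable from a random (uniform) distribution?
    bins = np.array([np.sum((scores >= lo) & (scores <= hi))
                     for lo, hi in [(1, 3), (4, 6), (7, 9)]])
    chi2, p = stats.chisquare(bins)
    consensus = p < alpha

    # 95% confidence interval around the mean rating score.
    mean = scores.mean()
    lo_ci, hi_ci = stats.t.interval(0.95, len(scores) - 1,
                                    loc=mean, scale=stats.sem(scores))

    # Assign a categorical rank from the lower bound of the interval
    # (thresholds are assumptions for illustration).
    if lo_ci >= 6.5:
        rank = "first line / preferred choice"
    elif lo_ci >= 3.5:
        rank = "second line / alternate choice"
    else:
        rank = "third line / usually inappropriate"
    return consensus, (lo_ci, hi_ci), rank

# Example: ratings from a panel of experts for one treatment option.
ratings = [9, 8, 8, 7, 9, 8, 7, 9, 8, 6, 7, 8, 9, 7, 8]
print(rate_option(ratings))
```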

Construction of a Standard Dataset for Liver Tumors for Testing the Performance and Safety of Artificial Intelligence-Based Clinical Decision Support Systems (인공지능 기반 임상의학 결정 지원 시스템 의료기기의 성능 및 안전성 검증을 위한 간 종양 표준 데이터셋 구축)

  • Seung-seob Kim;Dong Ho Lee;Min Woo Lee;So Yeon Kim;Jaeseung Shin;Jin-Young Choi;Byoung Wook Choi
    • Journal of the Korean Society of Radiology
    • /
    • v.82 no.5
    • /
    • pp.1196-1206
    • /
    • 2021
  • Purpose: To construct a standard dataset of contrast-enhanced CT images of liver tumors to test the performance and safety of artificial intelligence (AI)-based algorithms for clinical decision support systems (CDSSs). Materials and Methods: A consensus group of medical experts in gastrointestinal radiology from four national tertiary institutions discussed the conditions to be included in a standard dataset. Seventy-five cases of hepatocellular carcinoma, 75 cases of metastasis, and 30-50 cases of benign lesions were retrieved from each institution, and the final dataset consisted of 300 cases of hepatocellular carcinoma, 300 cases of metastasis, and 183 cases of benign lesions. Only pathologically confirmed cases of hepatocellular carcinomas and metastases were enrolled. The medical experts retrieved the medical records of the patients and manually labeled the CT images. The CT images were saved as Digital Imaging and Communications in Medicine (DICOM) files. Results: The medical experts in gastrointestinal radiology constructed the standard dataset of contrast-enhanced CT images for 783 cases of liver tumors. The performance and safety of the AI algorithm can be evaluated by calculating the sensitivity and specificity for detecting and characterizing the lesions. Conclusion: The constructed standard dataset can be utilized for evaluating the machine-learning-based AI algorithm for CDSS.
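
    As a small illustration of the evaluation mentioned in the Results section (sensitivity and specificity computed against the expert reference labels), a hypothetical Python sketch follows; the label names and example cases are invented for demonstration and are not taken from the dataset.

```python
# Minimal sketch: per-case sensitivity and specificity for one target class,
# computed from parallel lists of reference and predicted labels.

def sensitivity_specificity(reference, predicted, positive="HCC"):
    tp = sum(r == positive and p == positive for r, p in zip(reference, predicted))
    fn = sum(r == positive and p != positive for r, p in zip(reference, predicted))
    tn = sum(r != positive and p != positive for r, p in zip(reference, predicted))
    fp = sum(r != positive and p == positive for r, p in zip(reference, predicted))
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Invented example labels (three illustrative classes).
reference = ["HCC", "HCC", "metastasis", "benign", "HCC", "benign"]
predicted = ["HCC", "metastasis", "metastasis", "benign", "HCC", "HCC"]
print(sensitivity_specificity(reference, predicted))
```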