• Title/Summary/Keyword: Computation Time


An Efficient VEB Beats Detection Algorithm Using the QRS Width and RR Interval Pattern in the ECG Signals (ECG신호의 QRS 폭과 RR Interval의 패턴을 이용한 효율적인 VEB 비트 검출 알고리듬)

  • Chung, Yong-Joo
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.2
    • /
    • pp.96-101
    • /
    • 2011
  • In recent years, the demand for remote ECG monitoring systems has been increasing, and automating such systems has become a major concern. Automatic detection of abnormal ECG beats is a necessity for the successful commercialization of real-time remote ECG monitoring systems. From this viewpoint, we propose an automatic detection algorithm for abnormal ECG beats using QRS width and RR interval patterns. In previous research, many efforts have been made to classify ECG beats into detailed categories. However, these approaches suffer from frequent misclassification errors and high variability in classification performance, and they require a large amount of training data and heavy computation during classification. We argue that, for an automatic ECG monitoring system, detecting abnormality in ECG beats is more important than detailed classification. In this paper, we therefore focus on detecting the VEB (ventricular ectopic beat), the most frequently occurring abnormal beat, and achieve satisfactory detection performance when the proposed algorithm is applied to the MIT/BIH database.
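
As an illustration only, the sketch below shows the kind of per-beat rule such an algorithm could apply using the two features named in the abstract; the feature handling and thresholds are assumptions for the example, not the values used in the paper.

```python
import numpy as np

def detect_veb(qrs_widths, rr_intervals,
               width_thresh=0.12, premature_ratio=0.85):
    """Flag beats as VEB candidates from QRS width and RR-interval pattern.

    qrs_widths   : QRS duration of each beat in seconds
    rr_intervals : RR interval preceding each beat in seconds
    The thresholds are illustrative, not the paper's values: a VEB
    typically shows a widened QRS (> ~120 ms) and arrives early, i.e.
    its preceding RR interval is shorter than the local average.
    """
    qrs_widths = np.asarray(qrs_widths, dtype=float)
    rr_intervals = np.asarray(rr_intervals, dtype=float)
    local_mean_rr = np.convolve(rr_intervals, np.ones(8) / 8, mode="same")

    wide_qrs = qrs_widths > width_thresh
    premature = rr_intervals < premature_ratio * local_mean_rr
    return wide_qrs & premature        # boolean mask of VEB candidates
```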

Co-registration of PET-CT Brain Images using a Gaussian Weighted Distance Map (가우시안 가중치 거리지도를 이용한 PET-CT 뇌 영상정합)

  • Lee, Ho;Hong, Helen;Shin, Yeong-Gil
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.7
    • /
    • pp.612-624
    • /
    • 2005
  • In this paper, we propose a surface-based registration method using a Gaussian weighted distance map for PET-CT brain image fusion. Our method consists of three main steps: extraction of feature points, generation of a Gaussian weighted distance map, and a weight-based similarity measure. First, we segment the head using inverse region growing, remove noise attached to the segmented head using region-growing-based labeling in the PET and CT images, respectively, and then extract the feature points of the head using a sharpening filter. Second, a Gaussian weighted distance map is generated from the feature points in the CT images; it guides the feature points to converge robustly on the optimal location even under large geometrical displacement. Third, weight-based cross-correlation searches for the optimal location using the Gaussian weighted distance map of the CT images together with the feature points extracted from the PET images. In our experiments, we generate a software phantom dataset to evaluate the accuracy and robustness of our method, and use a clinical dataset for computation-time measurement and visual inspection. Accuracy is assessed by the root-mean-square error on arbitrarily transformed software phantom datasets. Robustness is assessed by checking whether the weight-based cross-correlation reaches its maximum at the optimal location on software phantom datasets with large geometrical displacement and noise. Experimental results show that our method achieves more accurate and more robust convergence than conventional surface-based registration.
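
As a rough illustration of the core idea, the sketch below builds a Gaussian-weighted map from a binary CT feature mask and scores a candidate rigid transform by the average map value at the transformed PET feature points; the sigma value, sampling scheme, and function names are assumptions for the example, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def gaussian_weighted_distance_map(feature_mask, sigma=10.0):
    """Turn a binary CT feature-point mask into a Gaussian-weighted map.

    Each voxel gets exp(-d^2 / (2*sigma^2)), where d is its Euclidean
    distance to the nearest feature point, so the map peaks (value 1) on
    the feature surface and decays smoothly away from it. sigma is an
    illustrative width, not the paper's setting.
    """
    d = distance_transform_edt(~feature_mask.astype(bool))
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def weighted_similarity(ct_weight_map, pet_points, transform):
    """Average map value sampled at transformed PET feature points."""
    pts = np.round(pet_points @ transform[:3, :3].T + transform[:3, 3]).astype(int)
    inside = np.all((pts >= 0) & (pts < ct_weight_map.shape), axis=1)
    if not inside.any():
        return 0.0
    idx = tuple(pts[inside].T)
    return float(ct_weight_map[idx].mean())
```

An optimizer would then search over rigid transforms for the one maximizing this similarity; because the map decays smoothly instead of dropping to zero away from the surface, the score still provides a gradient toward alignment under large initial displacement.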

A Resilient Key Renewal Scheme in Wireless Sensor Networks (센서 네트워크에서 복원력을 지닌 키갱신 방안)

  • Wang, Gi-Cheol;Cho, Gi-Hwan
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.47 no.2
    • /
    • pp.103-112
    • /
    • 2010
  • In sensor networks, because sensors are deployed in an unprotected environment, they are prone to node-compromise attacks. If the number of compromised nodes increases considerably, key management in the network becomes paralyzed. In particular, compromise of cluster heads (CHs) in clustered sensor networks is far more threatening than that of normal sensors. Recently, rekeying schemes that update exposed keys using keys unknown to the compromised nodes have emerged. However, they suffer from security and efficiency problems such as relying on a single group key per cluster, only passively evicting compromised nodes, and incurring excessive communication and computation overhead. In this paper, we present a proactive rekeying scheme for clustered sensor networks based on periodic renewal of the cluster organization. In the proposed scheme, each sensor establishes individual keys with its neighbors at network boot-up time, and these keys are employed for later transmissions between sensors and their CH. Through periodic cluster reorganization, compromised nodes are expelled from the network and the individual keys employed in a cluster change continuously. In addition, newly elected CHs securely agree on a key with the sink by reporting their members to the sink, without exchanging any keying material. Simulation results show that the proposed scheme remarkably improves the confidentiality and integrity of data even as the number of compromised nodes increases, and that it uses the scarce energy resource more efficiently than SHELL.
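
A generic sketch of the two ingredients named above, per-neighbor individual keys derived at boot-up and key refresh on every cluster reorganization, is given below; the HMAC-based key derivation and the epoch counter are illustrative assumptions, not the paper's actual protocol.

```python
import hmac
import hashlib

def derive_pairwise_key(secret: bytes, id_a: bytes, id_b: bytes) -> bytes:
    """Derive an individual key shared by neighbors id_a and id_b.

    Generic KDF pattern (HMAC-SHA256 over the sorted node IDs); the
    paper's boot-up key establishment may differ.
    """
    lo, hi = sorted([id_a, id_b])
    return hmac.new(secret, b"pairwise|" + lo + b"|" + hi,
                    hashlib.sha256).digest()

def rekey_on_reorganization(master_secret: bytes, node_id: bytes,
                            neighbor_ids, epoch: int):
    """On each periodic cluster reorganization, refresh every per-link key
    by mixing in the epoch counter, so keys held by evicted nodes expire."""
    epoch_secret = hmac.new(master_secret, epoch.to_bytes(4, "big"),
                            hashlib.sha256).digest()
    return {nid: derive_pairwise_key(epoch_secret, node_id, nid)
            for nid in neighbor_ids}
```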

DeNERT: Named Entity Recognition Model using DQN and BERT

  • Yang, Sung-Min;Jeong, Ok-Ran
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.4
    • /
    • pp.29-35
    • /
    • 2020
  • In this paper, we propose DeNERT, a new named entity recognition model. Recently, natural language processing has been actively studied using language representation models pre-trained on large corpora. In particular, named entity recognition, one subfield of natural language processing, typically relies on supervised learning, which requires a large training dataset and heavy computation. Reinforcement learning is a method that learns through trial-and-error experience without initial data; it is closer to the way humans learn than other machine learning methodologies, but it has not yet been widely applied to natural language processing, being used mostly in simulation environments such as Atari games and AlphaGo. BERT is a general-purpose language model developed by Google, pre-trained on a large corpus at substantial computational cost; it shows high performance in natural language processing research and high accuracy on many downstream tasks. We build DeNERT from these two deep learning models, DQN and BERT. The proposed model is trained by constructing a reinforcement learning environment on top of the language representations that are the strength of a general-purpose language model. The DeNERT model trained in this way achieves faster inference and higher performance with a small amount of training data. We validate its named entity recognition performance through experiments.
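
To make the "NER as reinforcement learning" framing concrete, here is a toy environment in which one episode tags one sentence token by token; the state, action, and reward design (and the random vectors standing in for BERT features) are assumptions for illustration, not the paper's DeNERT formulation.

```python
import numpy as np

class NERTaggingEnv:
    """Toy RL environment: one episode tags one sentence token by token.

    States are token embeddings (random vectors here, standing in for BERT
    features), actions are tag indices, and the reward is +1 for a correct
    tag and -1 otherwise. A DQN agent would learn a Q-function over these
    states and actions.
    """
    def __init__(self, embeddings, gold_tags):
        self.embeddings = embeddings          # (num_tokens, dim) array
        self.gold_tags = gold_tags            # list of tag indices
        self.t = 0

    def reset(self):
        self.t = 0
        return self.embeddings[0]

    def step(self, action):
        reward = 1.0 if action == self.gold_tags[self.t] else -1.0
        self.t += 1
        done = self.t >= len(self.gold_tags)
        next_state = None if done else self.embeddings[self.t]
        return next_state, reward, done

# Usage: a random agent interacting with one 5-token sentence
emb = np.random.randn(5, 768)                 # stand-in for BERT token features
env = NERTaggingEnv(emb, gold_tags=[0, 0, 3, 4, 0])
state, total, done = env.reset(), 0.0, False
while not done:
    state, r, done = env.step(np.random.randint(5))
    total += r
print("episode reward:", total)
```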

A Digital Twin Software Development Framework based on Computing Load Estimation DNN Model (컴퓨팅 부하 예측 DNN 모델 기반 디지털 트윈 소프트웨어 개발 프레임워크)

  • Kim, Dongyeon;Yun, Seongjin;Kim, Won-Tae
    • Journal of Broadcast Engineering
    • /
    • v.26 no.4
    • /
    • pp.368-376
    • /
    • 2021
  • Artificial intelligence clouds help developers efficiently build autonomous things that integrate AI and control technologies, by sharing trained models and providing execution environments. Existing development technologies for autonomous things consider only the accuracy of AI models, at the cost of increased model complexity such as more hidden layers and kernels, and consequently require a large amount of computation. Since resource-constrained computing environments cannot provide sufficient computing resources for such complex models, the autonomous things end up violating their time-criticality requirements. In this paper, we propose a digital twin software development framework that selects AI models optimized for the target computing environment. The proposed framework uses a load estimation DNN model that predicts the load of candidate AI models from digital twin data, selects the model best suited to the specific computing environment, and develops the control software accordingly. In experiments based on representative CNN models, the proposed load estimation DNN model shows an error rate of up to 20% compared with a formula-based load estimation scheme.
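
A minimal sketch of what a load-estimation DNN could look like is shown below: a small MLP maps model-complexity descriptors to a predicted load, and the framework keeps only candidates whose predicted load meets a deadline. The feature set, network size, and deadline are assumptions for the example, not the paper's design.

```python
import torch
import torch.nn as nn

class LoadEstimator(nn.Module):
    """Tiny MLP mapping model-complexity descriptors to a predicted
    computing load (e.g. inference latency in ms). The input features
    are illustrative, not the paper's exact feature set."""
    def __init__(self, in_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

# Features per candidate model: [layers, kernels, params (M), input size]
candidates = torch.tensor([[18., 64., 11.7, 224.],
                           [50., 256., 25.6, 224.],
                           [8.,  32.,  3.5, 160.]])
estimator = LoadEstimator()                 # in practice, trained on digital twin data
with torch.no_grad():
    predicted_load = estimator(candidates).squeeze(1)

# Keep only candidates whose predicted load meets the time-criticality deadline
deadline_ms = 30.0
feasible = (predicted_load < deadline_ms).nonzero(as_tuple=True)[0]
print("feasible candidates:", feasible.tolist())
```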

A Study on the determination of the optimal resolution for the application of the distributed rainfall-runoff model to the flood forecasting system - focused on Geumho river basin using GRM (분포형 유역유출모형의 홍수예보시스템 적용을 위한 최적해상도 결정에 관한 연구 - GRM 모형을 활용하여 금호강 유역을 중심으로)

  • Kim, Sooyoung;Yoon, Kwang Seok
    • Journal of Korea Water Resources Association
    • /
    • v.52 no.2
    • /
    • pp.107-113
    • /
    • 2019
  • The flood forecasting model currently used in Korea calculates basin runoff with a lumped rainfall-runoff model and estimates river stage with river and reservoir routing models. The lumped model assumes homogeneous drainage zones in the basin and therefore cannot account for the basin's varied spatial characteristics. In addition, the rainfall input to the lumped model has the same limitation because it uses point-scale rainfall data. To overcome these limitations, many researchers have studied applying distributed rainfall-runoff models to flood forecasting systems. In this study, to apply the Grid-based Rainfall-Runoff Model (GRM) to the Korean flood forecasting system, the optimal resolution is determined by analyzing how the runoff results differ across various resolutions. If the grid size is too small, the computation time becomes excessive, making the model unsuitable for flood forecasting; if it is too large, the model no longer serves the purpose of analyzing spatial distribution with a distributed model. As a result of this study, an optimal resolution that satisfies both the accuracy of basin runoff prediction and a calculation speed suitable for flood forecasting is proposed. The accuracy of the runoff prediction is analyzed by comparing the Nash-Sutcliffe model efficiency coefficient (NSE). The optimal resolution estimated in this study will be used as basic data for applying the distributed rainfall-runoff model to the flood forecasting system.
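
Since the resolution comparison hinges on the Nash-Sutcliffe efficiency, the following sketch shows how NSE is computed and compared between two simulated hydrographs; the discharge values and resolution labels are made-up illustration data, not results from the paper.

```python
import numpy as np

def nash_sutcliffe_efficiency(observed, simulated):
    """Nash-Sutcliffe model efficiency coefficient.

    NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
    NSE = 1 is a perfect fit; NSE <= 0 means the model is no better than
    simply predicting the mean of the observations.
    """
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) \
               / np.sum((observed - observed.mean()) ** 2)

# Example: compare simulated hydrographs at two (hypothetical) grid resolutions
obs = np.array([12.0, 30.0, 85.0, 140.0, 95.0, 40.0, 18.0])       # m^3/s
sim_fine   = np.array([10.0, 28.0, 90.0, 135.0, 100.0, 38.0, 20.0])
sim_coarse = np.array([15.0, 20.0, 60.0, 110.0, 120.0, 55.0, 30.0])
print(nash_sutcliffe_efficiency(obs, sim_fine))    # closer to 1
print(nash_sutcliffe_efficiency(obs, sim_coarse))  # lower efficiency
```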

Effective material properties of radially poled piezoelectric ring transducer for analysis of tangentially poled piezoelectric ring (원주 분극 압전 링 트랜스듀서 해석을 위한 방사 분극 링 유효 물성 도출)

  • Lee, Haksue;Cho, Cheeyoung;Park, Seongcheol;Cho, Yo-Han;Lee, Jeong-min
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.2
    • /
    • pp.184-192
    • /
    • 2019
  • Compared to 31-mode rings, 33-mode rings are widely used as wide-bandwidth underwater acoustic transducers because their electromechanical coupling and piezoelectric constant d are high. On the other hand, the 31-mode ring is an axially symmetric structure, so it can be modeled as a simple two-dimensional axisymmetric model for numerical analysis, whereas the 33-mode ring requires a three-dimensional numerical analysis, which demands far more computing resources and computation time. In this study, the effective material properties of an equivalent 31-mode ring were derived to simulate the electro-mechano-acoustical responses of the 33-mode ring transducer. Using the derived effective material properties, numerical analyses of rings in vacuum, air-backed rings in water, and free-flooded ring (FFR) transducers were performed and compared with the responses of 33-mode rings.
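
For intuition about why an equivalent 31-mode (axisymmetric) model can stand in for a 33-mode ring, the sketch below uses the standard thin-ring breathing-mode resonance estimate, f = v / (pi * d_mean) with v = 1 / sqrt(rho * s^E): matching the 33-mode response amounts to assigning the 33-direction constants to the circumferential direction of the equivalent ring. The PZT-4 ballpark values below are illustrative, not the paper's derived effective properties.

```python
import math

def thin_ring_resonance_hz(mean_diameter_m, density_kg_m3, compliance_m2_per_N):
    """Fundamental (breathing-mode) resonance of a thin piezoelectric ring.

    f = v / (pi * d_mean), with circumferential sound speed
    v = 1 / sqrt(rho * s^E). For a 31-mode ring the relevant compliance
    is s11^E; an equivalent model of a 33-mode ring uses s33^E instead.
    """
    v = 1.0 / math.sqrt(density_kg_m3 * compliance_m2_per_N)
    return v / (math.pi * mean_diameter_m)

rho = 7500.0            # kg/m^3, typical PZT density
s11E = 12.3e-12         # m^2/N, typical PZT-4 s11^E
s33E = 15.5e-12         # m^2/N, typical PZT-4 s33^E
d_mean = 0.10           # 10 cm mean diameter ring (illustrative)

print(thin_ring_resonance_hz(d_mean, rho, s11E))   # 31-mode ring estimate
print(thin_ring_resonance_hz(d_mean, rho, s33E))   # 33-mode ring estimate
```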

Improvement of LMS Algorithm Convergence Speed with Updating Adaptive Weight in Data-Recycling Scheme (데이터-재순환 구조에서 적응 가중치 갱신을 통한 LMS 알고리즘 수렴 속도 개선)

  • Kim, Gwang-Jun;Jang, Hyok;Suk, Kyung-Hyu;Na, Sang-Dong
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.9 no.4
    • /
    • pp.11-22
    • /
    • 1999
  • Least-mean-square (LMS) adaptive filters have proven extremely useful in a number of signal processing tasks. However, the LMS adaptive filter suffers from a slow rate of convergence, for a given steady-state mean square error, compared with the recursive least squares adaptive filter. In this paper, an efficient signal interference control technique is introduced to improve the convergence speed of the LMS algorithm: the tap-weight vector is additionally updated by reusing data that would otherwise be discarded by the adaptive transversal filter, held in data-recycling buffers. Computer simulations show that, in the experimentally computed learning curves, the proposed algorithm converges faster and reaches a lower MSE than the existing LMS as the step-size parameter $\mu$ increases. We also find that the convergence speed of the proposed algorithm increases by a factor of (B+1), where B is the number of recycled-data buffers, without added computational complexity. Tested under the same conditions as the LMS algorithm, the adaptive transversal filter with the proposed data-recycling buffer algorithm efficiently rejects channel ISI and increases convergence speed while avoiding a heavy computational burden in practice.
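
A generic sketch of an LMS transversal filter with a data-recycling buffer follows: each new sample triggers the usual update plus up to B extra updates on recently buffered input/desired pairs, giving (B+1) weight updates per sample. The filter length, step size, and channel in the usage example are assumptions for illustration, not the paper's simulation setup.

```python
import numpy as np

def lms_data_recycling(x, d, num_taps=8, mu=0.01, B=2):
    """LMS adaptive transversal filter with a data-recycling buffer.

    Besides the usual update on the current input vector, the tap weights
    are re-updated on the last B buffered input/desired pairs, so each new
    sample yields (B+1) weight updates.
    """
    w = np.zeros(num_taps)
    buffer = []                                   # recycled (u, d) pairs
    errors = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]       # current input vector
        for u_k, d_k in [(u, d[n])] + buffer:     # current + recycled data
            e = d_k - w @ u_k
            w = w + mu * e * u_k
        errors[n] = d[n] - w @ u
        buffer = ([(u, d[n])] + buffer)[:B]       # keep the B latest pairs
    return w, errors

# Usage: identify a short FIR channel from noisy observations
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
h = np.array([0.8, -0.4, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = lms_data_recycling(x, d, mu=0.02, B=2)
```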

Efficient RSA-Based PAKE Procotol for Low-Power Devices (저전력 장비에 적합한 효율적인 RSA 기반의 PAKE 프로토콜)

  • Lee, Se-Won;Youn, Taek-Young;Park, Yung-Ho;Hong, Seok-Hie
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.19 no.6
    • /
    • pp.23-35
    • /
    • 2009
  • A Password-Authenticated Key Exchange (PAKE) protocol is a useful tool for secure communication over open networks without sharing a common secret key or assuming the existence of a public key infrastructure (PKI). It seems difficult to design efficient PAKE protocols using RSA, and thus many PAKE protocols are based on the Diffie-Hellman key exchange (DH-PAKE). It is nevertheless important to design an efficient RSA-based PAKE, since the RSA function is well suited to PAKE protocols for imbalanced communication environments. In this paper, we propose a computationally efficient key exchange protocol based on the RSA function that is suitable for low-power devices in such imbalanced environments. Our protocol is more efficient than previous RSA-PAKE protocols in both theoretical computation cost and measured execution time in the same environment: it performs key exchange up to 84% more efficiently than CEPEK, the most efficient secure RSA-PAKE protocol. We can further improve the performance of our protocol by computing some costly operations in an offline step. We prove the security of our protocol under a firmly formalized security model, in the random oracle model.

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.12
    • /
    • pp.291-306
    • /
    • 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the Internet of Things (IoT) promotes the learning and intelligence capability of AI. To extract insights in real time from massive volumes of IoT data using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires a significant amount of computational resources that may not be available at the IoT end devices. This problem has traditionally been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises privacy issues, which are a major concern. Edge computing, where distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve user privacy. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices. We believe that this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of an edge computing platform, presents privacy-preserving approaches for deep learning in the edge computing environment and the various application domains where deep learning at the network edge can be useful, and finally discusses open issues and challenges of leveraging deep learning within edge computing.