• Title/Summary/Keyword: Distributed Computing


Design of Efficient Intrusion Detection System using Man-Machine (Man-Mchine에 의한 효율적인 침입 탐지 시스템 설계)

  • Shin, Jang-Koon;Ra, Min-Young;Park, Byung-Ho;Choi, Byung-Kab
    • Journal of the Korea Institute of Information Security & Cryptology / v.6 no.4 / pp.39-52 / 1996
  • The networking revolution provides users with data and resource sharing, distributed processing, and computer communication in cyberspace. However, computers may also be used for unauthorized access, system destruction, and leakage of stored data. The recent increase in hacking incidents, both domestic and from abroad, has reached a serious level. It is therefore necessary to develop a secure system for national defense computing resources and deploy it in the field as soon as possible. In this paper, we focus on identifying the security requirements of a network and on designing an intrusion detection system that combines statistical intrusion detection and rule-based intrusion detection analysis over accumulated audit data.
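
The combination of rule-based and statistical analysis over audit data can be illustrated with a minimal sketch. The audit-record fields, rules, and thresholds below are hypothetical, chosen only for illustration, and are not taken from the paper.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class AuditRecord:
    user: str
    failed_logins: int      # hypothetical audit field
    bytes_out: int          # hypothetical audit field

# Rule-based detection: flag records that match explicit misuse rules.
RULES = [
    ("excessive failed logins", lambda r: r.failed_logins >= 5),
    ("large outbound transfer", lambda r: r.bytes_out > 10_000_000),
]

def rule_alerts(record):
    return [name for name, rule in RULES if rule(record)]

# Statistical detection: flag values far outside a user's historical profile.
def statistical_alert(value, history, k=3.0):
    if len(history) < 2:
        return False
    mu, sigma = mean(history), pstdev(history)
    return sigma > 0 and abs(value - mu) > k * sigma

if __name__ == "__main__":
    history = [1200, 900, 1500, 1100, 1300]           # past bytes_out for one user
    rec = AuditRecord(user="alice", failed_logins=6, bytes_out=250_000)
    print(rule_alerts(rec))                            # ['excessive failed logins']
    print(statistical_alert(rec.bytes_out, history))   # True: far outside profile
```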

A Study of Patient's Privacy Protection in U-Healthcare (유헬스케어에서 환자의 프라이버시 보호 방안 연구)

  • Jeong, Yoon-Su;Lee, Sang-Ho
    • Journal of the Korea Institute of Information Security & Cryptology / v.22 no.4 / pp.913-921 / 2012
  • With the rapid development and spread of u-healthcare services, the underlying service technologies are changing significantly. However, u-healthcare services have a security problem: a patient's biometric information can easily be exposed to third parties without the user's consent. This paper proposes a distributed model based on the authority and access level of hospital staff so that patients' private information can be accessed safely in a u-healthcare environment. The proposed model both limits access to patients' biometric information and protects the system from DoS attacks by using timestamps. It also prevents leakage of patient data and privacy intrusion, because the main server centrally controls hospital staff and their access according to the access range assigned to the staff of each hospital.
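
A minimal sketch of the kind of level-based access check with timestamp freshness the abstract describes. The roles, levels, and freshness window are assumptions for illustration, not the paper's actual protocol.

```python
import time

# Hypothetical access levels per hospital role (higher = broader access).
ROLE_LEVEL = {"doctor": 3, "nurse": 2, "clerk": 1}

# Hypothetical sensitivity level required to read each record category.
RECORD_LEVEL = {"biometric": 3, "prescription": 2, "appointment": 1}

MAX_SKEW_SECONDS = 30  # reject stale or replayed requests (DoS/replay mitigation)

def access_allowed(role: str, category: str, request_timestamp: float) -> bool:
    """Grant access only if the role's level covers the record category
    and the request timestamp is fresh."""
    if abs(time.time() - request_timestamp) > MAX_SKEW_SECONDS:
        return False  # stale or replayed request
    return ROLE_LEVEL.get(role, 0) >= RECORD_LEVEL.get(category, 99)

if __name__ == "__main__":
    now = time.time()
    print(access_allowed("doctor", "biometric", now))        # True
    print(access_allowed("clerk", "biometric", now))         # False
    print(access_allowed("doctor", "biometric", now - 300))  # False (stale request)
```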

Analysis of Checkpointing Model with Instantaneous Error Detection (즉각적 오류 감지가 가능한 경우의 체크포인팅 모형 분석)

  • Lee, Yutae
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.170-175 / 2022
  • Reactive failure management techniques are required to mitigate the impact of errors in high-performance computing. Checkpointing is the standard recovery technique for coping with errors. An application employing checkpoints periodically saves its state, so that when an error occurs while some task is executing, the application is rolled back to its last checkpointed task and resumes execution from that task onward. In this paper, assuming the times to error are independent of each other and generally distributed, we analyze the checkpointing model with instantaneous error detection. The conventional assumption that two or more errors do not occur between two consecutive checkpoints is removed. Given the checkpointing time, down time, and recovery time, we derive the reliability of the checkpointing model. When the time to error follows an exponential distribution, we obtain the optimal checkpointing interval that achieves maximum reliability.
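
For the exponential time-to-error case mentioned at the end of the abstract, the classic first-order Young/Daly approximation gives a feel for the optimal checkpointing interval. This is the textbook approximation, not necessarily the exact optimum derived in the paper, and the numbers below are purely illustrative.

```python
import math

def young_daly_interval(checkpoint_cost: float, error_rate: float) -> float:
    """First-order approximation of the optimal checkpoint interval when the
    time to error is exponential with rate `error_rate` (i.e. 1 / MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost / error_rate)

if __name__ == "__main__":
    C = 60.0                 # checkpointing time: 60 s (illustrative)
    mtbf = 24 * 3600.0       # mean time between errors: 24 h (illustrative)
    T = young_daly_interval(C, 1.0 / mtbf)
    print(f"checkpoint roughly every {T / 3600:.2f} hours")  # ~0.89 h
```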

Peer to Peer Search Algorithm based on Advanced Multidirectional Processing (개선된 다방향 프로세싱 기반 P2P 검색 알고리즘)

  • Kim, Boon-Hee
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.133-139 / 2009
  • P2P technology in the field of distributed computing provides various methods for sharing resources among network-connected peers. It makes far better use of available resources than a centralized network served by a few servers. However, the peers composing a P2P system are not always online, so it is difficult to guarantee high reliability to users. Our previous work reduced the load involved in selecting a new resource-providing peer, but its method for selecting peers to share download work was very simple: it merely chose the peer with the lowest load. In this paper, we reduce the frequency of offline peers by estimating peer availability based on the average success rate of each peer.
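
A minimal sketch of selecting download peers by an estimated success rate rather than only by the lowest load, in the spirit of the abstract. The peer fields, threshold, and tie-breaking rule are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    load: int            # current number of jobs (hypothetical)
    successes: int       # past successful transfers
    attempts: int        # past transfer attempts

    @property
    def success_rate(self) -> float:
        return self.successes / self.attempts if self.attempts else 0.0

def pick_peer(peers, min_success_rate=0.8):
    """Prefer peers whose average success rate is above a threshold,
    then break ties by lowest load; fall back to all peers if none qualify."""
    reliable = [p for p in peers if p.success_rate >= min_success_rate]
    candidates = reliable or peers
    return min(candidates, key=lambda p: (p.load, -p.success_rate))

if __name__ == "__main__":
    peers = [
        Peer("A", load=1, successes=3, attempts=10),    # often offline
        Peer("B", load=2, successes=9, attempts=10),
        Peer("C", load=4, successes=10, attempts=10),
    ]
    print(pick_peer(peers).name)   # 'B': reliable and lightly loaded
```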

Resource Metric Refining Module for AIOps Learning Data in Kubernetes Microservice

  • Jonghwan Park;Jaegi Son;Dongmin Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.6 / pp.1545-1559 / 2023
  • In the cloud environment, microservices are implemented through Kubernetes, and these services can be expanded or reduced through the autoscaling function of Kubernetes, depending on service requests or resource usage. However, the growing number of nodes and distributed microservices in Kubernetes and the unpredictable autoscaling function make it very difficult for system administrators to conduct operations. Artificial Intelligence for IT Operations (AIOps) supports resource management for cloud services through AI and has attracted attention as a solution to these problems. For example, after an AI model learns the metric or log data collected at the microservice level, failures can be inferred by predicting the resources in future data. However, it is difficult to construct datasets for training such models because the many microservices used for autoscaling generate different metrics or logs at the same timestamp. In this study, we propose a cloud data refining module and structure that collects metric or log data in a microservice environment implemented with Kubernetes and arranges it into the computing resources corresponding to each service, so that AI models can learn and infer service-specific failures. We obtained Kubernetes-based AIOps learning data through this module, and after training an AI model on the built dataset, we verified the prediction results through the differences between the predicted and actual data.
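
A rough sketch of the kind of refining step such a module performs: aligning per-pod metrics onto a common per-service timeline so that a model sees one consistent series per service. The pandas column names, sample values, and 1-minute bucket size are assumptions for illustration, not the module's actual schema.

```python
import pandas as pd

# Hypothetical raw samples: each pod of each service reports CPU usage
# at slightly different timestamps.
raw = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(
            ["2023-06-01 10:00:02", "2023-06-01 10:00:07",
             "2023-06-01 10:00:03", "2023-06-01 10:01:58"]
        ),
        "service": ["cart", "cart", "payment", "cart"],
        "pod": ["cart-a", "cart-b", "payment-a", "cart-a"],
        "cpu_millicores": [120, 80, 45, 150],
    }
)

# Refine: sum pod-level usage per service, aligned to 1-minute buckets,
# producing one column per service on a shared timeline.
refined = (
    raw.set_index("timestamp")
       .groupby("service")["cpu_millicores"]
       .resample("1min")
       .sum()
       .unstack("service", fill_value=0)
)
print(refined)   # rows: 10:00 -> cart 200, payment 45; 10:01 -> cart 150, payment 0
```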

Bitcoin Cryptocurrency: Its Cryptographic Weaknesses and Remedies

  • Anindya Kumar Biswas;Mou Dasgupta
    • Asia pacific journal of information systems / v.30 no.1 / pp.21-30 / 2020
  • Bitcoin (BTC) is a type of cryptocurrency that supports transactions/payments of virtual money between BTC users without a central authority or any third party such as a bank. It uses cryptographic techniques, namely public and private keys, digital signatures, and cryptographic hash functions, to make transactions secure and to maintain a distributed public ledger called the blockchain. In the BTC system, each transaction signed by its sender is broadcast over the P2P (peer-to-peer) Bitcoin network, and a set of such transactions collected over a period is hashed together with the previous block and other values to form a candidate block; the first block, known as the genesis block, was created independently. Before a candidate block can become part of the existing blockchain (the chaining of blocks), a computation-intensive hard problem must be solved. A number of miners try to solve it, and the winner earns some BTC as a reward. The miners have substantial computing and hardware resources and play a key role in forming the blockchain. This paper mainly analyses the underlying cryptographic techniques, identifies some weaknesses, and proposes enhancements. Two modifications of BTC are suggested: (i) all BTC users must use digital certificates for their authentication, and (ii) the winning miner must sign the compressed data of a block for authentication of public blocks and the blockchain.
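
To make the mining step concrete, the simplified sketch below shows the well-known double SHA-256 hashing of a block header and the proof-of-work test (hash below a target). It is a toy illustration of standard Bitcoin mechanics with a fake header layout, not the paper's proposed certificate or signature scheme.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes block headers with SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, difficulty_bits: int, max_nonce: int = 1_000_000):
    """Toy proof-of-work: find a nonce so the block hash falls below the target."""
    target = 1 << (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = double_sha256(header_prefix + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None, None

if __name__ == "__main__":
    # Toy header: stand-ins for the previous block hash and Merkle root
    # (not the real 80-byte Bitcoin header format).
    header = b"prev_hash" + b"merkle_root"
    nonce, block_hash = mine(header, difficulty_bits=16)
    print(nonce, block_hash)
```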

IPC-CNN: A Robust Solution for Precise Brain Tumor Segmentation Using Improved Privacy-Preserving Collaborative Convolutional Neural Network

  • Abdul Raheem;Zhen Yang;Haiyang Yu;Muhammad Yaqub;Fahad Sabah;Shahzad Ahmed;Malik Abdul Manan;Imran Shabir Chuhan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.9 / pp.2589-2604 / 2024
  • Brain tumors, characterized by uncontrollable cellular growths, are a significant global health challenge. Navigating the complexities of tumor identification due to their varied dimensions and positions, our research introduces enhanced methods for precise detection. Utilizing advanced learning techniques, we've improved early identification by preprocessing clinical dataset-derived images, augmenting them via a Generative Adversarial Network, and applying an Improved Privacy-Preserving Collaborative Convolutional Neural Network (IPC-CNN) for segmentation. Recognizing the critical importance of data security in today's digital era, our framework emphasizes the preservation of patient privacy. We evaluated the performance of our proposed model on the Figshare and BRATS 2018 datasets. By facilitating a collaborative model training environment across multiple healthcare institutions, we harness the power of distributed computing to securely aggregate model updates, ensuring individual data protection while leveraging collective expertise. Our IPC-CNN model achieved an accuracy of 99.40%, marking a notable advancement in brain tumor classification and offering invaluable insights for both the medical imaging and machine learning communities.
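
The collaborative training across institutions described above typically aggregates model updates rather than raw images. Below is a minimal federated-averaging sketch; the FedAvg-style weighting is an assumption for illustration, since the abstract does not detail the paper's exact privacy-preserving aggregation.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average model parameters from several institutions, weighted by the
    number of local training samples (FedAvg-style aggregation).
    Only the parameters leave each institution, never the raw images."""
    total = sum(client_sizes)
    layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(layers)
    ]

if __name__ == "__main__":
    # Two hypothetical hospitals, each holding one weight matrix and one bias vector.
    hospital_a = [np.ones((2, 2)), np.zeros(2)]
    hospital_b = [3 * np.ones((2, 2)), np.ones(2)]
    global_model = federated_average([hospital_a, hospital_b], [100, 300])
    print(global_model[0])   # 0.25 * 1 + 0.75 * 3 = 2.5 in every entry
```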

Adaptive Hard Decision Aided Fast Decoding Method in Distributed Video Coding (적응적 경판정 출력을 이용한 고속 분산 비디오 복호화 기술)

  • Oh, Ryang-Geun;Shim, Hiuk-Jae;Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.6 / pp.66-74 / 2010
  • Recently, distributed video coding (DVC) has drawn attention for environments where computing resources at the encoder are limited. Wyner-Ziv (WZ) coding is a representative DVC scheme. The WZ encoder encodes key frames and WZ frames independently, using conventional intra coding and a channel code, respectively. The WZ decoder generates side information from the two reconstructed key frames (t-1, t+1) based on temporal correlation. The side information is regarded as a noisy version of the original WZ frame, and this virtual channel noise is removed by the channel decoding process, so the performance of WZ coding depends heavily on the performance of the channel code. Among existing channel codes, turbo codes and LDPC codes have the most powerful error-correction capability. These channel codes use a stochastic iterative decoding process, which is quite time-consuming and considerably increases the complexity of the WZ decoder. An analysis of LDPCA complexity on real video data shows that LDPCA decoding accounts for more than 60% of total WZ decoding complexity. Using the HDA (Hard Decision Aided) method proposed in the channel coding literature, channel decoding complexity can be greatly reduced, but considerable RD performance loss is possible depending on the threshold, and the proper threshold value differs for each sequence. In this paper, we propose an adaptive HDA method that sets an appropriate threshold according to the sequence. The proposed method saves about 62% and 32% of the time in the LDPCA and WZ decoding processes, respectively, while RD performance is hardly degraded.
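
The HDA idea can be illustrated as an early-termination check inside an iterative decoder loop: once all soft values (LLRs) are confidently beyond a threshold, hard decisions are taken and iterations stop. The loop below is a schematic stand-in for LDPCA decoding, not a real decoder, and the fixed threshold is illustrative; the paper's contribution is choosing that threshold adaptively per sequence.

```python
import numpy as np

def hard_decision_ready(llrs: np.ndarray, threshold: float) -> bool:
    """HDA-style check: stop iterating when every LLR magnitude exceeds the
    confidence threshold, i.e. all bits are decided with high confidence."""
    return bool(np.all(np.abs(llrs) >= threshold))

def decode_with_hda(llrs, threshold=4.0, max_iterations=50):
    for iteration in range(1, max_iterations + 1):
        llrs = llrs * 1.5                      # stand-in for one belief-propagation pass
        if hard_decision_ready(llrs, threshold):
            bits = (llrs < 0).astype(int)      # hard decision from the sign of each LLR
            return bits, iteration
    return (llrs < 0).astype(int), max_iterations

if __name__ == "__main__":
    initial_llrs = np.array([0.8, -1.2, 2.5, -0.4])
    bits, iters = decode_with_hda(initial_llrs)
    print(bits, "decided after", iters, "iterations")   # [0 1 0 1] after 6 iterations
```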

Implementation and Performance Measuring of Erasure Coding of Distributed File System (분산 파일시스템의 소거 코딩 구현 및 성능 비교)

  • Kim, Cheiyol;Kim, Youngchul;Kim, Dongoh;Kim, Hongyeon;Kim, Youngkyun;Seo, Daewha
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.11 / pp.1515-1527 / 2016
  • With the growth of big data, machine learning, and cloud computing, the importance of storage that can hold large amounts of unstructured data has recently been growing. Commodity-hardware-based distributed file systems such as MAHA-FS, GlusterFS, and the Ceph file system have therefore received much attention because of their scale-out and low-cost properties. For data fault tolerance, most of these file systems initially used replication, but as storage sizes grow to tens or hundreds of petabytes, the low space efficiency of replication has come to be seen as a problem. This paper applies an erasure coding fault-tolerance policy to MAHA-FS for higher space efficiency and introduces the VDelta technique to solve the data consistency problem. We compare the performance of two file systems, MAHA-FS and GlusterFS, which have different I/O processing architectures: the former is server-centric and the latter is client-centric. We found that the erasure coding performance of MAHA-FS is better than that of GlusterFS.
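
The space-efficiency argument behind the move from replication to erasure coding can be made concrete with a quick calculation. The 3-way replication and the k=6, m=3 layout below are illustrative parameters, not those of MAHA-FS or GlusterFS.

```python
def replication_overhead(copies: int) -> float:
    """Raw storage consumed per byte of user data under n-way replication."""
    return float(copies)

def erasure_overhead(data_chunks: int, parity_chunks: int) -> float:
    """Raw storage per byte under (k data + m parity) erasure coding."""
    return (data_chunks + parity_chunks) / data_chunks

if __name__ == "__main__":
    print(replication_overhead(3))   # 3.0x raw storage, tolerates loss of 2 copies
    print(erasure_overhead(6, 3))    # 1.5x raw storage, tolerates loss of any 3 chunks
```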

Design and Implementation of Event Based Message Exchange Architecture between Servers for Server Push (서버 푸시를 위한 이벤트 기반 서버간 메시지 교환 아키텍처의 설계 및 구현)

  • Cho, Dong-Il;Rhew, Sung-Yul
    • Journal of Internet Computing and Services / v.12 no.4 / pp.181-194 / 2011
  • Server push, a technology for sending content from servers to browsers in real time using long-polling requests, enables real-time bidirectional communication between servers and browsers in an HTTP environment. Recently, thanks to the rapid spread of mobile devices capable of full browsing, server push is being applied to various applications. However, because the servers providing such services must deliver distributed content to a large number of users simultaneously in various user environments, they carry the burden of quickly serving and distinguishing far more concurrent users than before. Existing methods of message exchange in distributed server environments have difficulties with processing simultaneous user requests, identifying users, and delivering content. In this paper, we propose a message exchange architecture between servers for providing server push in a distributed server environment. The proposed architecture enables push-style message exchange between servers based on an event-driven architecture, and it allows flexible identification of event agents and flexible event processing even when a large number of users are connected. We designed and implemented the proposed architecture, compared its performance with the previous approach through a performance test, and confirmed its functionality through a case study. The performance test shows that the proposed architecture reduces server thread usage and user response time while increasing simultaneous throughput.
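
A minimal asyncio sketch of the event-driven flow described above: one coroutine stands in for a content server publishing an event, while another stands in for the push server holding a long-poll request open until an event for that user arrives or the poll times out. The queue-per-user routing and timeout value are assumptions for illustration, not the paper's architecture.

```python
import asyncio

# One queue per connected user (stand-in for the event agent's routing table).
user_queues: dict[str, asyncio.Queue] = {}

async def publish(user_id: str, message: str) -> None:
    """Called by a content server when new content for a user becomes available."""
    user_queues.setdefault(user_id, asyncio.Queue()).put_nowait(message)

async def long_poll(user_id: str, timeout: float = 30.0) -> str | None:
    """Hold the user's request open until an event arrives or the poll times out."""
    queue = user_queues.setdefault(user_id, asyncio.Queue())
    try:
        return await asyncio.wait_for(queue.get(), timeout)
    except asyncio.TimeoutError:
        return None  # the browser simply re-issues the long-poll request

async def main() -> None:
    poll = asyncio.create_task(long_poll("user-42", timeout=5.0))
    await asyncio.sleep(1.0)                 # content becomes available a bit later
    await publish("user-42", "new message")
    print(await poll)                        # 'new message'

if __name__ == "__main__":
    asyncio.run(main())
```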