• Title/Summary/Keyword: data scalability

Design of Service-adaptive Tactical Data Transmission Protocol for Satellite Communications (위성통신을 위한 서비스 적응적인 전술 데이터 전송 프로토콜 설계)

  • Kim, Sujeong;Lee, Sooho
    • Journal of Satellite, Information and Communications / v.11 no.3 / pp.72-79 / 2016
  • In this paper, we propose a Service-adaptive Tactical Data Transmission Protocol (STTS) for satellite communications over narrow-bandwidth links. STTS extends the transport stream protocol of the digital broadcasting standards DVB-S and DVB-S2 with an additional field for scalability and a scheduler for reliability. Using a traffic-model-driven simulator, we also verify the effect of lost data packets over narrow bandwidth and of re-transmitting critical data, and we examine the design considerations of the STTS system. A simplified sketch of the extended packet format and critical-data re-transmission idea is given below.
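
The following sketch illustrates the general idea of a transport-stream-style packet carrying an extra extension field and a scheduler that re-transmits only packets marked as critical. The field names, sizes, and the toy loss model are assumptions for illustration; the paper's actual STTS packet layout and scheduler are not reproduced here.

```python
# Hypothetical sketch of an STTS-style packet with an extra extension field
# and a scheduler that re-transmits only packets flagged as critical.
# Field names and the loss model are illustrative assumptions.
from collections import deque
from dataclasses import dataclass
import random

@dataclass
class SttsPacket:
    seq: int            # sequence number
    critical: bool      # tactical data that must be re-sent if lost
    ext_field: int = 0  # extra field reserved for scalability
    payload: bytes = b""

class RetransmissionScheduler:
    """Re-queues critical packets whose delivery failed."""
    def __init__(self, loss_rate):
        self.loss_rate = loss_rate
        self.queue = deque()

    def send(self, pkt):
        delivered = random.random() > self.loss_rate   # toy channel model
        if not delivered and pkt.critical:
            self.queue.append(pkt)                     # schedule re-transmission
        return delivered

    def flush_retransmissions(self):
        resent = 0
        while self.queue:
            if self.send(self.queue.popleft()):
                resent += 1
        return resent

if __name__ == "__main__":
    random.seed(0)
    sched = RetransmissionScheduler(loss_rate=0.2)
    packets = [SttsPacket(seq=i, critical=(i % 4 == 0)) for i in range(100)]
    delivered = sum(sched.send(p) for p in packets)
    recovered = sched.flush_retransmissions()
    print(f"first-pass delivered={delivered}, critical packets recovered={recovered}")
```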

BoxBroker: A Policy-Driven Framework for Optimizing Storage Service Federation

  • Heinsen, Rene;Lopez, Cindy;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.1 / pp.340-367 / 2018
  • Storage service integration can achieve high availability, better data access performance, and scalability while preventing vendor lock-in. However, managing a multi-service environment and ensuring interoperability have become critical issues because of the heterogeneity of service architectures and communication interfaces. A storage federation model integrates multiple heterogeneous, self-sufficient storage systems behind a single control point with automated decision making about data distribution. To integrate diverse heterogeneous storage services into a single storage pool, we propose a storage service federation framework named BoxBroker. We also propose an automated decision model, based on a policy-driven data distribution algorithm and a service evaluation method, that enables BoxBroker to make optimal decisions. Finally, a demonstration of the framework's capabilities is presented and discussed; a simplified sketch of policy-driven service selection follows below.
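
As a rough illustration of policy-driven placement, the sketch below scores candidate storage services against weighted policy criteria and picks the best fit for an object. The criteria, weights, and scoring rule are invented for the example and are not BoxBroker's actual evaluation method.

```python
# Illustrative policy-driven selection of a storage service in a federation.
# Criteria, weights and the scoring rule are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class StorageService:
    name: str
    latency_ms: float     # measured access latency
    cost_per_gb: float    # monetary cost
    free_gb: float        # remaining capacity

def score(svc, policy):
    """Higher is better: invert latency/cost, reward free capacity, weight by policy."""
    return (policy["latency"] * (1.0 / (1.0 + svc.latency_ms))
            + policy["cost"] * (1.0 / (1.0 + svc.cost_per_gb))
            + policy["capacity"] * svc.free_gb / 1000.0)

def choose_service(services, object_size_gb, policy):
    candidates = [s for s in services if s.free_gb >= object_size_gb]
    if not candidates:
        raise RuntimeError("no federated service can hold the object")
    return max(candidates, key=lambda s: score(s, policy))

if __name__ == "__main__":
    pool = [
        StorageService("providerA", latency_ms=40, cost_per_gb=0.02, free_gb=500),
        StorageService("providerB", latency_ms=15, cost_per_gb=0.05, free_gb=200),
        StorageService("providerC", latency_ms=90, cost_per_gb=0.01, free_gb=900),
    ]
    policy = {"latency": 0.5, "cost": 0.3, "capacity": 0.2}   # policy weights
    best = choose_service(pool, object_size_gb=50, policy=policy)
    print("place object on:", best.name)
```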

Mathematical Performance Model of Two-Tier Indexing Scheme in Wireless Data Broadcasting

  • Im, Seokjin
    • International Journal of Advanced Culture Technology / v.6 no.4 / pp.65-70 / 2018
  • A wireless data broadcasting system can serve any number of clients and is therefore an effective answer to the scalability challenge of ubiquitous computing in IoT environments. In such a system it is important to evaluate quickly the key performance parameter, the access time, which measures how quickly a client can obtain a desired data item. In this paper, we derive a mathematical model of the access time in a wireless data broadcast system that adopts a two-tier indexing scheme. The derived model makes it possible to evaluate the access time without complicated simulation. To validate the model, we compare the access time predicted by the model with the access time obtained by simulation; a toy comparison in the same spirit is sketched below.
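
The toy example below mirrors the model-versus-simulation comparison on a simplified broadcast cycle: a global index, per-group local indexes, and data buckets, for which the expected access time of a uniformly chosen item at a uniformly random tune-in instant is (L+1)/2 buckets, where L is the cycle length. The cycle layout and bucket sizes are illustrative assumptions, not the paper's model.

```python
# Monte Carlo check of expected access time in a broadcast cycle that
# interleaves a global index, per-group local indexes and data buckets.
# Simplified stand-in for the paper's two-tier model; layout is assumed.
import random

def build_cycle(num_data=600, groups=20, global_index=10, local_index=3):
    """Return the cycle length and the bucket position of each data item."""
    cycle = ["G"] * global_index                  # tier-1 (global) index buckets
    per_group = num_data // groups
    data_pos, item = {}, 0
    for _ in range(groups):
        cycle.extend(["L"] * local_index)         # tier-2 (local) index buckets
        for _ in range(per_group):
            data_pos[item] = len(cycle)           # bucket holding this item
            cycle.append(item)
            item += 1
    return len(cycle), data_pos

def simulate(num_trials=200_000):
    length, data_pos = build_cycle()
    total = 0
    for _ in range(num_trials):
        t = random.randrange(length)              # random tune-in bucket
        p = data_pos[random.randrange(len(data_pos))]
        total += ((p - t) % length) + 1           # wait plus one bucket to read
    return total / num_trials, (length + 1) / 2   # simulated vs. closed form

if __name__ == "__main__":
    sim, model = simulate()
    print(f"simulated access time = {sim:.1f} buckets, model = {model:.1f} buckets")
```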

Modelling Data Flow in Smart Claim Processing Using Time Invariant Petri Net with Fixed Input Data

  • Amponsah, Anokye Acheampong;Adekoya, Adebayo Felix;Weyori, Benjamin Asubam
    • International Journal of Computer Science & Network Security / v.22 no.2 / pp.413-423 / 2022
  • The NHIS provides free or heavily subsidized healthcare to all people by offering financial protection. However, the financial sustainability of the scheme is threatened by numerous factors. This work therefore seeks to process claims intelligently. The provided Petri net model demonstrates successful data flow among the various participants. For efficiency, scalability, and performance, two main subsystems are modelled and integrated: a data input subsystem and a claims processing subsystem. We provide a smart claims processing algorithm with a simple and efficient error detection method. The complexity of the main algorithm is good, and that of the error detection compares excellently with the literature. The performance analysis indicates that the model output is reachable from the input and that the token delivery rate is promising. A minimal token-firing sketch of a Petri net in this spirit is given below.
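
For readers unfamiliar with Petri nets, the minimal sketch below shows the basic token-firing rule on an invented toy claim flow: a transition fires when every input place holds a token, moving tokens to its output places. The places, transitions, and error-detection step are placeholders, not the paper's actual model.

```python
# Minimal Petri net: places hold tokens, a transition fires when every input
# place has a token, consuming them and producing tokens in its output places.
# The claim-flow places and transitions below are an invented toy example.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)            # place -> token count
        self.transitions = {}                   # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

if __name__ == "__main__":
    net = PetriNet({"claim_submitted": 1, "member_record": 1})
    net.add_transition("validate", ["claim_submitted", "member_record"],
                       ["validated_claim", "member_record"])
    net.add_transition("detect_error", ["validated_claim"], ["error_queue"])
    net.add_transition("approve", ["validated_claim"], ["approved_claim"])
    net.fire("validate")
    net.fire("approve")
    print(net.marking)   # approved_claim is reachable from the input marking
```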

Design of a ParamHub for Machine Learning in a Distributed Cloud Environment

  • Su-Yeon Kim;Seok-Jae Moon
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.161-168 / 2024
  • As the size of big-data models grows, distributed training is becoming an essential element of large-scale machine learning tasks. In this paper, we propose ParamHub for distributed data training. During training, this agent uses the provided data to adjust various aspects of the model's parameters, such as the model structure, learning algorithm, hyperparameters, and bias, aiming to minimize the error between the model's predictions and the actual values. It also operates autonomously, collecting and updating data in a distributed environment and thereby reducing the load-balancing burden that arises in a centralized system. Through communication between agents, resource management and learning processes can be coordinated, enabling efficient management of distributed data and resources. This approach enhances the scalability and stability of distributed machine learning systems while remaining flexible enough to be applied in various learning environments. A simplified parameter-averaging sketch of such agent coordination is given below.
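
The sketch below shows one simple form of agent coordination: workers train on local data shards, push parameter updates to a shared hub, and pull back the averaged model each round. The class and method names (ParamHub, push/pull) are illustrative assumptions rather than the paper's actual API.

```python
# Simplified sketch of agents exchanging parameter updates through a shared
# hub that averages them, in the spirit of a parameter-hub design.
# API names and the linear-regression workload are illustrative assumptions.
import numpy as np

class ParamHub:
    """Collects parameter updates from agents and serves the averaged model."""
    def __init__(self, dim):
        self.params = np.zeros(dim)
        self.pending = []

    def push(self, update):
        self.pending.append(update)              # update from one agent

    def aggregate(self):
        if self.pending:
            self.params += np.mean(self.pending, axis=0)
            self.pending.clear()

    def pull(self):
        return self.params.copy()                # current global parameters

def local_step(params, X, y, lr=0.1):
    """One gradient step of linear regression on an agent's local shard."""
    grad = 2 * X.T @ (X @ params - y) / len(y)
    return -lr * grad                            # parameter delta to push

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0, 0.5])
    hub = ParamHub(dim=3)
    shards = []
    for _ in range(4):                           # four agents with local shards
        X = rng.normal(size=(64, 3))
        shards.append((X, X @ true_w + rng.normal(scale=0.01, size=64)))
    for _ in range(200):                         # synchronous training rounds
        w = hub.pull()
        for X, y in shards:                      # each agent trains locally
            hub.push(local_step(w, X, y))
        hub.aggregate()                          # hub averages the updates
    print("learned:", np.round(hub.pull(), 2), "target:", true_w)
```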

A Study on Scalability of Profiling Method Based on Hardware Performance Counter for Optimal Execution of Supercomputer (슈퍼컴퓨터 최적 실행 지원을 위한 하드웨어 성능 카운터 기반 프로파일링 기법의 확장성 연구)

  • Choi, Jieun;Park, Guenchul;Rho, Seungwoo;Park, Chan-Yeol
    • KIPS Transactions on Computer and Communication Systems / v.9 no.10 / pp.221-230 / 2020
  • A supercomputer that shares limited resources among many users needs a way to optimize application execution. For this, it is useful for system administrators to obtain prior information and hints about the applications to be executed. In most high-performance computing operations, administrators try to raise system productivity by asking users for the expected execution duration and resource requirements when jobs are submitted, and they use profiling techniques that generate the necessary information from statistics such as system usage to increase utilization. In a previous study, we proposed a scheduling optimization technique built on a hardware performance counter-based profiling method that characterizes applications without any knowledge of their source code. In this paper, we construct a profiling testbed cluster to support optimal execution on the supercomputer and study the scalability of the profiling method for analyzing application characteristics in that cluster environment. We also show that the profiling method remains usable for actual scheduling optimization even when the application problem class is reduced or the number of profiling nodes is minimized. When the number of nodes used for profiling was cut to a quarter, application execution time increased by only 1.08% compared with profiling on all nodes, and scheduling optimization improved performance by up to 37% compared with sequential execution. In addition, profiling with a reduced problem size cut the cost of collecting profiling data to a quarter while still yielding a performance improvement of up to 35%. A sketch of counter-based application characterization is given below.
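
The sketch below illustrates the general idea of characterizing an application from hardware performance counter totals (IPC, cache and branch miss rates) and matching it to the nearest known profile as a scheduling hint. The counter set, the synthetic values, and the nearest-profile rule are assumptions; the paper's feature set and scheduler integration are richer.

```python
# Illustrative characterization of applications from hardware performance
# counter totals and matching against reference profiles.  Counter names and
# the synthetic values are assumptions, not the paper's actual feature set.
import numpy as np

def features(counters):
    """Derive scale-free features from raw counter totals."""
    return np.array([
        counters["instructions"] / counters["cycles"],         # IPC
        counters["cache_misses"] / counters["instructions"],   # cache miss rate
        counters["branch_misses"] / counters["instructions"],  # branch miss rate
    ])

def nearest_profile(sample, profiles):
    """Return the known application profile closest to the sampled features."""
    names = list(profiles)
    dists = [np.linalg.norm(features(sample) - features(profiles[n])) for n in names]
    return names[int(np.argmin(dists))]

if __name__ == "__main__":
    known = {  # synthetic reference profiles collected on a testbed
        "compute_bound": {"instructions": 9e9, "cycles": 4e9,
                          "cache_misses": 2e7, "branch_misses": 1e7},
        "memory_bound":  {"instructions": 3e9, "cycles": 6e9,
                          "cache_misses": 4e8, "branch_misses": 2e7},
    }
    new_job = {"instructions": 2.8e9, "cycles": 5.5e9,
               "cache_misses": 3.5e8, "branch_misses": 1.8e7}
    print("new job resembles:", nearest_profile(new_job, known))
```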

Analysis of Overseas Data Management Systems for High Level Radioactive Waste Disposal (고준위방사성폐기물 처분 관련 자료 관리 해외사례 분석)

  • MinJeong Kim;SunJu Park;HyeRim Kim;WoonSang Yoon;JungHoon Park;JeongHwan Lee
    • The Journal of Engineering Geology / v.33 no.2 / pp.323-334 / 2023
  • The vast volumes of data that are generated during site characterization and associated research for the disposal of high-level radioactive waste require effective data management to properly chronicle and archive this information. The Swedish Nuclear Fuel and Waste Management Company, SKB, established the SICADA database for site selection, evaluation, analysis, and modeling. The German Federal Company for Radioactive Waste Disposal, BGE, established ArbeitsDB, a database and document management system, and the ELO data system to manage data collected according to the Repository Site Selection Act. The U.K. Nuclear Waste Services established the Data Management System to manage any research and survey data pertaining to nuclear waste storage and disposal. The U.S. Department of Energy and Office of Civilian Radioactive Waste Management established the Technical Data Management System for data management and subsequent licensing procedures during site characterization surveys. The presented cases undertaken by these national agencies highlight the importance of data quality management and the scalability of data utilization to ensure effective data management. Korea should also pursue the establishment of both a data management concept for radioactive waste disposal that considers data quality management and scalability from a long-term perspective and an associated data management system.

Design Issues of Digital Display Interface

  • Jeong, Deog-Kyoon;Oh, Do-Hwan
    • Proceedings of the Korean Information Display Society Conference / 2007.08a / pp.993-996 / 2007
  • Depending on the application, transmission bandwidth, wire distance, power consumption, and EMI environment vary, so design trade-offs must be made to optimize the display interface. After introducing the digital display interface architecture, topics such as cost, EMI, signal integrity, scalability, and content protection are discussed together with the available techniques. Implementation issues are discussed with regard to cost and design complexity. Existing standards are reviewed, and a comparison of their strengths and shortcomings is provided.

The study on the Master Station Configuration of Distribution Automation System in the medium/small scale (중소규모 배전자동화시스템의 중앙제어장치 구성에 관한 연구)

  • Kim, Yong-Pal;Kim, Myong-Soo
    • Proceedings of the KIEE Conference / 1998.07g / pp.2396-2398 / 1998
  • The Distribution Automation System (DAS) Master Station (MS) is the core of the system; it performs supervision, data acquisition, data processing, and the man-machine interface. In this paper, we suggest an optimal configuration for a medium/small-scale DAS MS, taking into account various requirements such as reliability, scalability, and flexibility.

Applications of Open-source Spatio-Temporal Database Systems in Wide-field Time-domain Astronomy

  • Chang, Seo-Won;Shin, Min-Su
    • The Bulletin of The Korean Astronomical Society / v.41 no.2 / pp.53.2-53.2 / 2016
  • We present our experiences with open-source spatio-temporal database systems for managing and analyzing the big astronomical data acquired by wide-field time-domain sky surveys. Considering the performance, cost, difficulty, and scalability of the database systems, we conduct comparison studies of open-source spatio-temporal databases such as GeoMesa and PostGIS that are already used for handling big geographical data. Our experiments include ingesting, indexing, and querying millions to billions of astronomical spatio-temporal records. We choose the public VVV (VISTA Variables in the Via Lactea) catalogs, with billions of measurements for hundreds of millions of objects, as the test data. We discuss how these spatio-temporal database systems can be adopted by the astronomy community. A simplified cone-plus-time-window query sketch in the PostGIS style is given below.
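
As a rough illustration of the kind of query such systems serve, the sketch below runs a cone-plus-time-window search against a hypothetical PostGIS table of detections via psycopg2. The schema, connection string, and the planar treatment of (ra, dec) coordinates are simplifying assumptions, not the setup used in the study.

```python
# Rough sketch of a spatio-temporal query against a PostGIS table of survey
# detections, assuming a table detections(obj_id, ra, dec, mjd, mag) with a
# geometry column `coord` built from (ra, dec) and a GiST index on it.
# Treating ra/dec as planar coordinates is only an approximation; the DSN and
# schema are placeholders.
import psycopg2

CONE_QUERY = """
    SELECT obj_id, mjd, mag
    FROM detections
    WHERE ST_DWithin(
              coord,                                   -- geometry(Point) column
              ST_SetSRID(ST_MakePoint(%(ra)s, %(dec)s), 4326),
              %(radius_deg)s)                          -- cone radius in degrees
      AND mjd BETWEEN %(mjd_start)s AND %(mjd_end)s    -- time window
    ORDER BY mjd
"""

def cone_time_search(conn, ra, dec, radius_deg, mjd_start, mjd_end):
    with conn.cursor() as cur:
        cur.execute(CONE_QUERY, {"ra": ra, "dec": dec, "radius_deg": radius_deg,
                                 "mjd_start": mjd_start, "mjd_end": mjd_end})
        return cur.fetchall()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=vvv user=astro")   # placeholder DSN
    rows = cone_time_search(conn, ra=270.9, dec=-29.0,
                            radius_deg=0.01, mjd_start=55900, mjd_end=56300)
    print(f"{len(rows)} detections in the cone and time window")
```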
