• Title/Summary/Keyword: Interface Environment Optimization


A Study on the Performance Monitoring and Optimization of a High Speed Network for the Transfer of Massive VLBI Data (대용량 VLBI 데이터 전송을 위한 초고속 네트워크 성능 모니터링 및 최적화 연구)

  • Song, Min-Gyu;Kim, Hyo-Ryung;Kang, Yong-Woo
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.6 / pp.1097-1108 / 2019
  • In VLBI (Very Long Baseline Interferometry), the observation data created at observatories far apart from one another must be collected at a correlation center for analysis. Traditionally, observed data were moved physically, by car or airplane, but with the advancement of information technology this has rapidly been replaced by data transfer over the network, and international cooperative research is expanding accordingly. e-KVN (electronic Korean VLBI Network) has been upgraded twice, so the network interface of KVN has evolved to the highest available specification of 100GbE. Over this period, the share of VLBI observations and experiments in KVN carried out over the network has grown exponentially. In this paper, we describe the KVN VLBI system and the network technology used for the performance upgrade and advanced status monitoring between the three radio astronomy observatories and the Daejeon correlation center over KREONET (Korea Research Environment Open NETwork). The future plan of e-KVN for the implementation of wide-band VLBI observation is also briefly discussed.
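As a rough illustration of the scale behind such network upgrades, the transfer time for an observing session can be estimated from the data volume, link speed, and a utilization factor. The figures below are hypothetical examples, not values from the paper:

```python
def transfer_time_hours(data_tb: float, link_gbps: float, utilization: float = 0.8) -> float:
    """Estimate wall-clock hours to move `data_tb` terabytes over a
    `link_gbps` link that achieves the given fraction of line rate."""
    bits = data_tb * 1e12 * 8               # terabytes -> bits
    effective_bps = link_gbps * 1e9 * utilization
    return bits / effective_bps / 3600

# e.g. 100 TB of raw VLBI data over 100GbE at 80% utilization
print(round(transfer_time_hours(100, 100), 2))   # 2.78 hours
```

The same session at 10GbE would take ten times longer, which is why interface upgrades matter more than any software tuning at these volumes.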

Development of an Object-Oriented Meta-Modeling Based Design Framework Using XML (XML을 이용한 객체지향 메타 모델링 기반 설계 프레임워크)

  • Chu, Min-Sik;Choi, Dong-Hoon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.33 no.4 / pp.7-16 / 2005
  • Computer applications for engineering design evolve rapidly. Many design frameworks built on simulation-based systems have allowed organizations to achieve significant benefits through cost reduction in design. However, today's design problems demand frameworks that can adapt to more complicated and atypical formulations. In this paper, the Multidisciplinary Language Runtime (MLR) design framework is developed. The MLR provides a flexible and extensible interface between analysis modules and numerical analysis codes, and supports meta-modeling, meta-variables, and XML scripts for atypical design formulations. By applying an object-oriented design scheme to implement abstractions of the key components required for iterative system analyses, the MLR provides a flexible and extensible problem-solving environment.
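The paper does not reproduce the MLR's XML schema, but the general idea of describing a design formulation in XML and parsing it into objects can be sketched with the standard library. All element and attribute names below are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML design-formulation script: design variables with
# bounds plus an objective, in the spirit of an XML meta-model.
FORMULATION = """
<problem name="wing">
  <variable name="span" lower="20.0" upper="40.0"/>
  <variable name="chord" lower="2.0" upper="6.0"/>
  <objective name="weight" sense="minimize"/>
</problem>
"""

root = ET.fromstring(FORMULATION)
variables = {
    v.get("name"): (float(v.get("lower")), float(v.get("upper")))
    for v in root.findall("variable")
}
print(variables["span"])   # (20.0, 40.0)
```

Because the formulation lives in data rather than code, an atypical problem can be restated by editing the XML alone, which is the flexibility such frameworks aim for.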

Design and Implementation of A Distributed Information Integration System based on Metadata Registry (메타데이터 레지스트리 기반의 분산 정보 통합 시스템 설계 및 구현)

  • Kim, Jong-Hwan;Park, Hea-Sook;Moon, Chang-Joo;Baik, Doo-Kwon
    • The KIPS Transactions: Part D / v.10D no.2 / pp.233-246 / 2003
  • A mediator-based system integrates heterogeneous information systems in a flexible manner, but it pays little attention to query optimization, especially query reuse, and it does not use standardized metadata for schema matching. To address these two issues, we propose a mediator-based Distributed Information Integration System (DIIS) that uses query caching for performance and an ISO/IEC 11179 metadata registry for standardization. The DIIS is designed to provide decision-making support by logically integrating distributed heterogeneous business information systems in a Web environment. We designed the system as a three-layer architecture using the layered pattern, to improve reusability and facilitate maintenance. The functionality and flow of the core components of the three-layer architecture are expressed with process line diagrams and assembly line diagrams of the Eriksson-Penker Extension Model (EPEM), an extension of UML. For the implementation, the Supply Chain Management (SCM) domain is used, with a Web-based user interface. The DIIS supports query caching and query reuse through a Query Function Manager (QFM) and a Query Function Repository (QFR), enhancing query processing speed and reusability by caching frequently used queries and optimizing query cost. The DIIS resolves diverse heterogeneity problems by mapping a MetaData Registry (MDR) based on ISO/IEC 11179 to a Schema Repository (SCR).
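The core of query caching as described here is to recognize when a newly issued query matches one already answered. A minimal sketch, assuming normalization by whitespace and case (the paper's QFM/QFR components are far richer than this):

```python
import hashlib

class QueryCache:
    """Minimal sketch of a query-result cache: queries are normalized,
    hashed, and repeated queries are served from memory."""
    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    @staticmethod
    def _key(query: str) -> str:
        normalized = " ".join(query.lower().split())
        return hashlib.sha1(normalized.encode()).hexdigest()

    def get_or_run(self, query: str, run):
        key = self._key(query)
        if key in self._store:
            self.hits += 1
        else:
            self.misses += 1
            self._store[key] = run(query)   # only pay backend cost once
        return self._store[key]

cache = QueryCache()
execute = lambda q: [("row", 1)]            # stand-in for a real backend
cache.get_or_run("SELECT * FROM orders", execute)
cache.get_or_run("select *  from orders", execute)  # normalizes to a hit
print(cache.hits, cache.misses)             # 1 1
```

A real mediator must also invalidate cached entries when source systems change; that policy, not the lookup, is where most of the design effort goes.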

Research and Improvement of Image Analysis, Barcode, and QR Recognition Technology for the Development of Applications for the Visually Impaired (시각장애인 애플리케이션 개발을 위한 이미지 분석과 바코드, QR 인식 기술의 연구 및 개선)

  • MinSeok Cho;MinKi Yoon;MinSu Seo;YoungHoon Hwang;Hyun Woo;WonWhoi Huh
    • The Journal of the Convergence on Culture Technology / v.9 no.6 / pp.861-866 / 2023
  • Individuals with visual impairments have difficulty accessing accurate information about medical services and medications, which makes proper medication intake challenging. While healthcare laws address this issue, standardized solutions are lacking and not all over-the-counter medications are covered. We have therefore designed a mobile application that uses image recognition, barcode scanning, and QR code recognition to guide users in taking over-the-counter medications, filling the existing gaps for visually impaired individuals. Currently available applications allow visually impaired users to look up information about medications, but they still require the user to remember which specific medication they are taking, which is a significant burden. In this research, we optimize the camera capture environment and the user interface (UI) and user experience (UX) screens for image recognition, ensuring greater accessibility and convenience for visually impaired users. By implementing these findings in the application, we aim to help visually impaired individuals acquire the correct methods for taking over-the-counter medications.
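The paper's recognition pipeline is not reproduced here, but once a barcode has been decoded, its digits can be validated before any medication lookup is attempted, which avoids announcing guidance for a misread code. The EAN-13 check-digit rule below is the standard one; the product table is invented for illustration:

```python
def ean13_is_valid(code: str) -> bool:
    """Validate a 13-digit EAN/GTIN barcode: the first 12 digits are
    weighted 1,3,1,3,... and their sum determines the check digit."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    checksum = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - checksum % 10) % 10 == digits[12]

# Hypothetical lookup table from barcode to spoken dosage guidance.
MEDICATIONS = {"4006381333931": "Take one tablet after meals."}

code = "4006381333931"
if ean13_is_valid(code):
    print(MEDICATIONS.get(code, "Unknown medication"))
```

Rejecting invalid scans early also tells the app when to re-prompt the user to adjust the camera, which ties directly into the capture-environment optimization the paper describes.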

Design and Performance Evaluation of Digital Twin Prototype Based on Biomass Plant (바이오매스 플랜트기반 디지털트윈 프로토타입 설계 및 성능 평가)

  • Chae-Young Lim;Chae-Eun Yeo;Seong-Yool Ahn;Myung-Ok Lee;Ho-Jin Sung
    • The Journal of the Convergence on Culture Technology / v.9 no.5 / pp.935-940 / 2023
  • Digital-twin technology is emerging as an innovative solution across industries, including manufacturing and production lines. In this paper, we optimize the energy used in a biomass plant based on unused resources, implement a digital-twin prototype for biomass plants, and evaluate its performance in order to improve the efficiency of plant operations. The proposed digital-twin prototype applies a standard communication platform between the framework and the gateway and is implemented to enable real-time collaboration. It defines the message sequence between the client server and the gateway, and an interface is implemented to enable communication with the host server. To verify the performance of the proposed prototype, we set up a virtual environment to collect data from the server and performed a data collection evaluation. The results confirm that the proposed framework can contribute to energy optimization and improved operational efficiency when applied to biomass plants.
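The paper defines a message sequence between the gateway and the servers but does not publish its wire format. A minimal sketch of such an envelope, assuming JSON with a sequence number so that replies can be matched to requests and gaps detected (all field and metric names are invented):

```python
import itertools
import json

# Monotonic sequence numbers for the gateway <-> server exchange.
_seq = itertools.count(1)

def make_message(msg_type: str, payload: dict) -> str:
    """Wrap a payload in a sequenced envelope for transmission."""
    return json.dumps({"seq": next(_seq), "type": msg_type, "payload": payload})

def parse_message(raw: str) -> dict:
    """Decode an envelope and verify the required fields are present."""
    msg = json.loads(raw)
    assert {"seq", "type", "payload"} <= msg.keys(), "malformed envelope"
    return msg

request = make_message("sensor_read", {"unit": "boiler-1", "metric": "temp"})
reply = parse_message(request)
print(reply["seq"], reply["type"])   # 1 sensor_read
```

Sequencing every message is what lets a collection evaluation like the paper's distinguish dropped readings from late ones.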

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, flexible storage expansion for a massive amount of unstructured log data, and the considerable number of functions needed to categorize and analyze such data, are difficult to realize in existing computing environments. Thus, in this study, we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to process with the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data.
Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it difficult to expand nodes when rapidly growing data must be distributed across many nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, or document-oriented. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log-analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log data insert performance evaluation over various chunk sizes.
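The routing step performed by the log collector module can be sketched as a small classifier. The rule below, routing records by whether they parse as structured data carrying a real-time flag, is an assumption for illustration; the paper does not specify the collector's exact criteria:

```python
import json

def classify_log(raw: str) -> str:
    """Sketch of a collector routing rule: records that parse as JSON
    and carry a 'realtime' flag go to the relational store for
    immediate serving; everything else is treated as unstructured and
    routed to the document store for batch analysis."""
    try:
        record = json.loads(raw)
    except ValueError:
        return "mongodb"                  # unstructured free-text log line
    return "mysql" if record.get("realtime") else "mongodb"

print(classify_log('{"realtime": true, "event": "login"}'))  # mysql
print(classify_log("ERR 2013-04-01 teller #7 timeout"))      # mongodb
```

Keeping this decision in one place means the downstream MySQL, MongoDB, and Hadoop modules never need to re-examine raw input themselves.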

Low temperature plasma deposition of microcrystalline silicon thin films for active matrix displays: opportunities and challenges

  • Cabarrocas, Pere Roca I;Abramov, Alexey;Pham, Nans;Djeridane, Yassine;Moustapha, Oumkelthoum;Bonnassieux, Yvan;Girotra, Kunal;Chen, Hong;Park, Seung-Kyu;Park, Kyong-Tae;Huh, Jong-Moo;Choi, Joon-Hoo;Kim, Chi-Woo;Lee, Jin-Seok;Souk, Jun-H.
    • Korean Information Display Society: Conference Proceedings (한국정보디스플레이학회: 학술대회논문집) / 2008.10a / pp.107-108 / 2008
  • The spectacular development of AMLCDs, made possible by a-Si:H technology, still faces two major drawbacks due to the intrinsic structure of a-Si:H: a low mobility and, most importantly, a shift of the transfer characteristics of the TFTs under bias stress. This has led to strong research into the crystallization of a-Si:H films by laser and furnace annealing to produce polycrystalline silicon TFTs. While these devices show improved mobility and stability, they suffer from poor uniformity over large areas and increased cost. In the last decade we have focused on microcrystalline silicon (μc-Si:H) for bottom-gate TFTs, which can hopefully meet all the requirements for mass production of large-area AMOLED displays [1,2]. In this presentation we focus on the transfer of a deposition process based on SiF4-Ar-H2 mixtures from a small-area research laboratory reactor to an industrial Gen 1 AKT reactor. We first discuss the optimization of the process conditions leading to fully crystallized films without any amorphous incubation layer, suitable for bottom-gate TFTs, as well as the use of plasma diagnostics to increase the deposition rate up to 0.5 nm/s [3]. The use of silicon nanocrystals appears to be an elegant way to circumvent the opposing requirements of a high deposition rate and a fully crystallized interface [4]. The optimized process conditions were transferred to large-area substrates in an industrial environment, where some process adjustment was required to reproduce the material properties achieved in the laboratory-scale reactor. For optimized process conditions, the homogeneity of the optical and electronic properties of the μc-Si:H films deposited on 300×400 mm substrates was checked by a set of complementary techniques.
Spectroscopic ellipsometry, Raman spectroscopy, dark conductivity, time-resolved microwave conductivity, and hydrogen evolution measurements demonstrated excellent homogeneity in the structure and transport properties of the films. On the basis of these results, the optimized process conditions were applied to TFTs, for which both bottom-gate and top-gate structures were studied, aiming at characteristics suitable for driving AMOLED displays. Results on the homogeneity and stability of the TFT characteristics over the large-area substrates will be presented, as well as their application as a backplane for an AMOLED display.
