• Title/Summary/Keyword: 전송 복구 (Transmission Recovery)

Search results: 366 (processing time: 0.022 seconds)

Implementation of Software Downloading and Installing for upgrading Digital TV Settop Box (디지털 방송 TV수신기의 기능 업그레이드를 위한 소프트웨어 다운로드와 설치 기능 구현)

  • Ryu Yll-Kwon;Jung Moon-Ryul;Kim Jung-Hwan;Choi Jin-Su;Bang Gun
    • Journal of Broadcast Engineering
    • /
    • v.11 no.1 s.30
    • /
    • pp.66-79
    • /
    • 2006
  • With the constant development of digital broadcasting and data broadcasting systems, new technologies are introduced and new broadcasting services appear, and the receiver software must be upgraded to support each new service. In practice, however, digital broadcasting receivers are rarely updated once delivered to the home; updates must be performed by hand or by collecting the receivers one by one, which is troublesome. This paper therefore proposes a way to overcome these difficulties via the broadcast stream. The work describes three modules: (1) the Downloader, which downloads new software from the data carousel stream; (2) the Update Loader, which installs the software received by the Downloader; and (3) the Recoverer, which restores the former version of the software if a serious problem occurs while downloading or installing. The paper realizes this upgrade capability for terrestrial set-top boxes following the ATSC A/97 standard.
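The Downloader / Update Loader / Recoverer interplay described above amounts to an install-with-rollback transaction: snapshot the running version, attempt the install, and restore the snapshot on failure. A minimal sketch of that idea (the file layout and `install` callback are hypothetical, not the paper's actual set-top box code):

```python
import shutil
from pathlib import Path

def install_with_rollback(current: Path, new_image: bytes, install) -> bool:
    """Back up the current software image, try the install, and
    restore the former version if installation fails (Recoverer role)."""
    backup = current.with_suffix(".bak")
    shutil.copy2(current, backup)          # snapshot the running version
    try:
        current.write_bytes(new_image)     # Update Loader: write new image
        install(current)                   # may raise on a corrupt download
        backup.unlink()                    # success: drop the snapshot
        return True
    except Exception:
        shutil.copy2(backup, current)      # Recoverer: roll back to old version
        backup.unlink()
        return False
```

The key design point mirrored from the abstract is that recovery needs no extra transmission: the former version is kept locally until the new one is verified.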

A Bridge-Station Packet Marker for Performance Improvement of DiffServ QoS in WiMedia WLP-based Networks (WiMedia WLP 망에서의 DiffServ QoS 성능 향상을 위한 Bridge-Station 패킷 Marker)

  • Lee, Seung-Beom;Hur, Kyeong;Eom, Doo-Seop;Joo, Yang-Ick
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.5
    • /
    • pp.740-753
    • /
    • 2010
  • TCP performance can be severely degraded in WLP-based Mobile IP wireless networks, where packet losses unrelated to network congestion occur frequently during inter-subnetwork handoff caused by user mobility. To resolve this problem, a packet buffering method has been proposed that seamlessly recovers the packets dropped due to user mobility: the old bridge station buffers packets during handoff and forwards them to the WLP device afterwards. However, when the WLP device moves to a congested bridge station in a new WLP foreign subnetwork, the buffered packets forwarded by the old bridge station are dropped, and the TCP performance of WLP devices at the congested bridge station degrades because the burst of forwarded packets increases congestion. In this paper, a PBM (Packet Bridge Marker) is proposed to prevent buffered out-of-profile (OUT) packets from reducing the throughput of in-profile (IN) packets of an Assured Service WLP device. With this mechanism, losses of buffered OUT packets are avoided, and the IN and total packet throughput of an AS WLP device is increased.
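The IN/OUT distinction above comes from DiffServ traffic-profile marking. As a generic illustration (not the paper's PBM itself), a token-bucket marker that classifies packets as in- or out-of-profile against a rate/depth profile might look like this; all parameter names are illustrative:

```python
class TokenBucketMarker:
    """Marks packets IN while they conform to the traffic profile
    (rate in bytes/s, bucket depth in bytes) and OUT otherwise."""

    def __init__(self, rate: float, depth: float):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0    # bucket starts full

    def mark(self, size: int, now: float) -> str:
        # refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return "IN"    # in-profile: protected under Assured Service
        return "OUT"       # out-of-profile: dropped first under congestion
```

Under congestion, routers drop OUT-marked packets first, which is why a burst of buffered handoff packets marked OUT can be shed without harming the IN traffic of other Assured Service devices.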

Development of Natural Disaster Damage Investigation System using High Resolution Spatial Images (고해상도 공간영상을 이용한 자연재해 피해조사시스템 설계 및 구현)

  • Kim, Tae-Hoon;Kim, Kye-Hyun;Nam, Gi-Beom;Shim, Jae-Hyun;Choi, Woo-Jung;Cho, Myung-Hum
    • Journal of Korea Spatial Information System Society
    • /
    • v.12 no.1
    • /
    • pp.57-65
    • /
    • 2010
  • In this study, a disaster damage investigation system was developed using high-resolution satellite images and GIS techniques to support effective damage investigation over wide disaster-stricken areas. The study area was Bonghwa, Gyeongsangbuk-do, which suffered severe damage from torrential rain in July 2008. A GIS database was built from 1:5,000 topographic maps, cadastral maps, satellite images, and aerial photographs for use by the investigation algorithm. The system was implemented with the VB.NET language, ArcObjects components, and the MS-SQL DBMS for effective management of damage information. It identifies damaged areas by comparing pre- and post-disaster images and delineates them by damage item unit. Extracted objects are saved in shapefile format and overlaid with the background GIS DB to obtain detailed information on the damaged area. The system can extract damage information rapidly and reliably over wide disaster areas, and is expected to contribute substantially to national-level disaster prevention capabilities by supporting field investigation and the establishment of recovery plans. It can also be used in disaster prevention planning through digital damage information and linked to the national disaster information management system. Further studies are needed to improve the system and to link the damage information with the digital disaster registry.
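Comparing pre- and post-disaster images to flag damaged areas is, at its core, change detection. A deliberately simplified sketch of that step (per-pixel thresholding on toy brightness grids; the actual system works on high-resolution imagery with GIS overlays and shapefile export):

```python
def damaged_mask(pre, post, threshold):
    """Flag pixels whose brightness change between the pre- and
    post-disaster images exceeds the threshold (1 = damaged, 0 = unchanged)."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_pre, row_post)]
            for row_pre, row_post in zip(pre, post)]
```

In the real system, the resulting mask polygons would then be overlaid with the cadastral and topographic layers to attach parcel-level damage information.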

Efficient Self-Healing Key Distribution Scheme (효율적인 Self-Healing키 분배 기법)

  • Hong, Do-Won;Kang, Ju-Sung;Shin, Sang-Uk
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.13 no.6
    • /
    • pp.141-148
    • /
    • 2003
  • The self-healing key distribution scheme with revocation capability proposed by Staddon et al. enables a dynamic group of users to establish a group key over an unreliable network, and can revoke users from and add users to the group while resisting collusion attacks. In such a protocol, if some packets get lost, users are still able to recover the group key from the packets they have received, without requesting additional transmissions from the group manager. In that scheme, the storage overhead at each group member is O($m^2 \log p$) and the broadcast message size of the group manager is O($(mt^2 + mt) \log p$), where m is the number of sessions, t is the maximum number of colluding group members, and p is a prime large enough to accommodate a cryptographic key. In this paper we describe a more efficient self-healing key distribution scheme with revocation capability that achieves the same goal with O($m \log p$) storage overhead and O($(t^2 + mt) \log p$) communication overhead. We reduce the storage overhead at each group member and the broadcast message size of the group manager without adding computation on either the user's or the group manager's side.
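The self-healing property, recovering a missed session key from broadcasts received before and after it, can be illustrated with a simplified XOR-based variant: split each session key as $K_j = p_j \oplus q_j$, and let the broadcast for session i carry the $p_j$ for $j \le i$ and the $q_j$ for $j \ge i$. This is only an illustration of the recovery structure; the actual schemes use polynomial shares over a prime field to resist collusion:

```python
import secrets

def split_keys(session_keys):
    """Split each session key K_j into shares p_j, q_j with K_j = p_j ^ q_j."""
    p = [secrets.randbits(32) for _ in session_keys]
    q = [k ^ pj for k, pj in zip(session_keys, p)]
    return p, q

def broadcast(i, p, q):
    """Broadcast for session i: p_j for all j <= i, q_j for all j >= i."""
    return {"p": {j: p[j] for j in range(i + 1)},
            "q": {j: q[j] for j in range(i, len(q))}}

def self_heal(j, earlier, later):
    """Recover the lost key K_j from any broadcast received before
    session j (contains q_j) and any received after it (contains p_j)."""
    return later["p"][j] ^ earlier["q"][j]
```

A user who misses the broadcast of session j but receives any earlier and any later broadcast can reconstruct $K_j$ locally, which is exactly the "no additional transmission" property described in the abstract.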

A Study on the Design and Implementation of Multi-Disaster Drone System Using Deep Learning-Based Object Recognition and Optimal Path Planning (딥러닝 기반 객체 인식과 최적 경로 탐색을 통한 멀티 재난 드론 시스템 설계 및 구현에 대한 연구)

  • Kim, Jin-Hyeok;Lee, Tae-Hui;Han, Yamin;Byun, Heejung
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.4
    • /
    • pp.117-122
    • /
    • 2021
  • In recent years, human casualties and property losses due to various disasters such as typhoons, earthquakes, forest fires, landslides, and wars have occurred steadily, and considerable manpower and funding are required for prevention and recovery. In this paper, we designed and developed an artificial-intelligence-based disaster drone system to monitor such disaster situations in advance and to recognize and respond to disasters quickly. Multiple disaster drones are deployed in areas that are difficult for humans to monitor, and each drone performs an efficient search along an optimal path computed by a deep-learning-based path algorithm. In addition, to address the limited battery capacity that is a fundamental constraint of drones, the optimal route of each drone is determined using Ant Colony Optimization (ACO). The proposed system was applied to a forest fire scenario among various disaster situations: a forest fire map was created from the transmitted data and displayed to the dispatched firefighters by a drone equipped with a beam projector. In the proposed system, multiple drones detect a disaster situation in a short time by performing optimal path search and object recognition simultaneously. This research can serve as a basis for building disaster drone infrastructure, searching for victims (at sea, in mountains, or in jungle), autonomous fire extinguishing with drones, and security drones.
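The ACO route computation mentioned above can be sketched on a small tour-planning instance: ants build routes biased by pheromone, which evaporates each iteration and is reinforced on short routes. All parameter names and values below are illustrative, not the paper's:

```python
import random

def aco_route(dist, n_ants=20, n_iters=50, rho=0.5, q=1.0, seed=1):
    """Ant Colony Optimization sketch: find a short closed tour over
    the waypoints of a symmetric distance matrix `dist`."""
    random.seed(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # pheromone matrix
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                cur = tour[-1]
                cand = list(unvisited)
                # choose the next waypoint with probability ∝ pheromone / distance
                w = [tau[cur][j] / dist[cur][j] for j in cand]
                nxt = random.choices(cand, weights=w)[0]
                tour.append(nxt)
                unvisited.discard(nxt)
            length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                          # evaporation
            for j in range(n):
                tau[i][j] *= 1 - rho
        for tour, length in tours:                  # reinforce short tours
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len
```

In a multi-drone setting, each drone would run such a search over its assigned waypoints, trading a little randomness for robustness against local optima, while the limited battery budget motivates minimizing the tour length in the first place.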

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business, so a separate log data processing system needs to be established to gather, store, categorize, and analyze them. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment and can flexibly expand computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data.
Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have fixed schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases make it hard to expand across nodes when the stored data must be distributed to various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an auto-sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log data insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log data insert performance evaluation over various chunk sizes.
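The log collector's classify-and-route step described above, real-time log types to the MySQL path and bulk logs to the MongoDB path, might be sketched as follows (the log categories and JSON field names are hypothetical, not taken from the paper):

```python
import json
import queue

# Hypothetical set of log types that need real-time analysis
REALTIME_TYPES = {"transaction", "auth"}

def dispatch(raw_line: str, mysql_q: queue.Queue, mongo_q: queue.Queue):
    """Log collector step: parse a raw log record, classify it by type,
    and route real-time logs to the MySQL queue and the rest to the
    MongoDB queue for aggregated, schema-free storage."""
    record = json.loads(raw_line)
    if record.get("type") in REALTIME_TYPES:
        mysql_q.put(record)     # real-time analysis path (relational store)
    else:
        mongo_q.put(record)     # bulk path (document store, later Hadoop jobs)
```

In the proposed architecture the consumers of these two queues would be the MySQL module and the MongoDB module respectively, with the Hadoop-based analysis module reading the aggregated MongoDB data.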