• Title/Summary/Keyword: 대용량 전송 (large-capacity transmission)


Hybrid Buffer Structured Optical Packet Switch with the Limited Numbers of Tunable Wavelength Converters and Internal Wavelengths (제한된 수의 튜닝 가능한 파장변환기와 내부파장을 갖는 하이브리드 버퍼 구조의 광 패킷 스위치)

  • Lim, Huhn-Kuk
    • Journal of Internet Computing and Services, v.10 no.2, pp.171-177, 2009
  • Optical packet switching (OPS) is a strong candidate for the next-generation Internet, since it offers fine switching granularity at the packet level for flexible bandwidth provisioning and seamless integration between the WDM and IP layers. OPS has been studied in two categories: OPS in synchronous networks and OPS in asynchronous networks. This article focuses on contention resolution for OPS in asynchronous networks. The hybrid buffer, which consists of an FDL buffer and an electronic buffer, has been proposed as an alternative buffer structure to further reduce packet loss when resolving contention among asynchronous, variable-length packets. Designing an OPS with a limited number of TWCs and internal wavelengths is important for switch cost and resource efficiency. Therefore, a hybrid-buffer-structured optical packet switch and its scheduling algorithm are presented that take the limited number of TWCs and internal wavelengths into account for contention resolution of asynchronous, variable-length packets. The proposed algorithm improves packet loss compared to the legacy LAUC-VF algorithm with only an FDL buffer (the baseline channel-selection rule is sketched below).
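The baseline the paper compares against, LAUC-VF (Latest Available Unused Channel with Void Filling), picks the output wavelength whose idle gap ("void") starts latest while still fitting the packet. The following is a minimal sketch of that selection rule only, assuming busy intervals are kept as sorted lists per wavelength; the FDL/electronic-buffer fallback of the paper's hybrid scheme is only indicated by the `None` return, and all names here are illustrative.

```python
def lauc_vf(channels, arrival, duration):
    """Pick the channel whose void starts latest but still fits the packet.

    channels: per-wavelength lists of sorted, non-overlapping busy intervals,
              e.g. [[(0.0, 2.0), (5.0, 7.0)], [(1.0, 4.0)]]
    Returns (channel_index, start_time), or None if the packet must be
    buffered (FDL or electronic buffer) or dropped.
    """
    best = None  # (void_start, channel_index)
    end = arrival + duration
    for i, busy in enumerate(channels):
        # Candidate voids: before the first busy interval, between
        # intervals, and after the last interval.
        voids = [(0.0, busy[0][0])] if busy else [(0.0, float("inf"))]
        voids += [(busy[k][1], busy[k + 1][0]) for k in range(len(busy) - 1)]
        if busy:
            voids.append((busy[-1][1], float("inf")))
        for v_start, v_end in voids:
            # Feasible only if the whole packet fits inside the void.
            if v_start <= arrival and end <= v_end:
                # LAUC-VF keeps the void with the latest start time,
                # minimizing the unused gap left in front of the packet.
                if best is None or v_start > best[0]:
                    best = (v_start, i)
    return (best[1], arrival) if best else None
```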


BIM Mesh Optimization Algorithm Using K-Nearest Neighbors for Augmented Reality Visualization (증강현실 시각화를 위해 K-최근접 이웃을 사용한 BIM 메쉬 경량화 알고리즘)

  • Pa, Pa Win Aung;Lee, Donghwan;Park, Jooyoung;Cho, Mingeon;Park, Seunghee
    • KSCE Journal of Civil and Environmental Engineering Research, v.42 no.2, pp.249-256, 2022
  • Various studies are actively showing that real-time visualization technology combining BIM (Building Information Modeling) and AR (Augmented Reality) helps improve construction-management decision-making and processing efficiency. However, when large BIM data are projected into AR, there are various limitations, such as data transmission and connection problems and image cut-off issues. To improve visualization efficiency, a mesh optimization algorithm based on the k-nearest neighbors (KNN) classification framework is proposed to reconstruct BIM data, in place of existing mesh optimization methods, which are complicated and cannot adequately handle meshes with the numerous boundaries of 3D models. In the proposed algorithm, the target BIM model is optimized with Unity C# code based on triangle-centroid concepts and classified using KNN (a schematic version follows below). As a result, the algorithm can report the number of mesh vertices and triangles before and after optimization for the entire model and for each structure. It reduces the mesh vertices of the original model by approximately 56 % and the triangles by about 42 %. Moreover, compared to the original model, the optimized model shows no visual differences in model elements and information, meaning that high-performance visualization can be expected when using AR devices.
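The paper's pipeline is Unity C#, which is not reproduced here; the sketch below only illustrates the underlying idea, under stated assumptions, of grouping triangles by their centroids with a nearest-neighbour query and collapsing near-duplicate geometry. The merge radius, array layout, and function name are all illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def reduce_mesh(vertices, triangles, radius=0.01):
    """vertices: (V, 3) float array; triangles: (T, 3) vertex-index array.
    Drops triangles whose centroid lies within `radius` of an already-kept
    centroid, then removes vertices that are no longer referenced."""
    centroids = vertices[triangles].mean(axis=1)        # (T, 3) centroids
    tree = cKDTree(centroids)
    keep, covered = [], np.zeros(len(triangles), dtype=bool)
    for t in range(len(triangles)):
        if covered[t]:
            continue
        keep.append(t)
        # Mark all triangles whose centroids are near neighbours as covered.
        for n in tree.query_ball_point(centroids[t], radius):
            covered[n] = True
    tris = triangles[keep]
    used = np.unique(tris)                              # surviving vertices
    remap = np.full(len(vertices), -1, dtype=int)
    remap[used] = np.arange(len(used))
    return vertices[used], remap[tris]
```

Counting `len(vertices)` and `len(triangles)` before and after the call reproduces the kind of before/after vertex and triangle statistics the abstract reports.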

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from computer-system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business, so a separate system is needed to gather, store, categorize, and analyze them. However, existing computing environments make it difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems.

The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, such as storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the system includes a Hadoop-based analysis module for fast and reliable parallel-distributed processing of massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating the block units of the aggregated log data, the system can automatically restore itself and continue operating after a malfunction.

Finally, by establishing a distributed database on NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have complex schemas that are inappropriate for unstructured log data, and their strict schemas prevent node expansion when stored data must be distributed across nodes as data volume rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when data volume grows rapidly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as key-value, column-oriented, or document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system: it makes it easy to process unstructured log data through a flexible schema, facilitates node expansion when data volume is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.

The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module (a routing sketch for the collector follows below). When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB and MySQL modules. The log graph generator module generates the log-analysis results of the MongoDB, Hadoop-based analysis, and MySQL modules per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data requiring real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions; they are also parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log processing system that uses only MySQL, measuring log-insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log-insert performance evaluation over various chunk sizes.
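The collector is described only at the architecture level, so the routing sketch below is a hedged reconstruction: the `real_time` flag, the `rt_log` table, the collection-per-kind layout, and the connection settings are all illustrative assumptions, not the paper's code.

```python
import json
import pymongo
import pymysql

mongo = pymongo.MongoClient("mongodb://localhost:27017")["bank_logs"]
mysql = pymysql.connect(host="localhost", user="log", password="...",
                        database="bank_logs", autocommit=True)

def collect(raw_line: str) -> None:
    """Classify one incoming log record and route it to MySQL or MongoDB."""
    record = json.loads(raw_line)
    if record.get("real_time"):
        # Logs that feed real-time graphs go to the MySQL module
        # (fixed schema, low-latency queries).
        with mysql.cursor() as cur:
            cur.execute(
                "INSERT INTO rt_log (ts, kind, body) VALUES (%s, %s, %s)",
                (record["ts"], record["kind"], json.dumps(record)))
    else:
        # Everything else lands in MongoDB: the free schema absorbs
        # whatever fields each bank system emits, and Auto-Sharding
        # spreads the collection across nodes as volume grows.
        mongo[record.get("kind", "misc")].insert_one(record)
```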

Development of a Real-Time Mobile GIS using the HBR-Tree (HBR-Tree를 이용한 실시간 모바일 GIS의 개발)

  • Lee, Ki-Yamg;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society, v.6 no.1 s.11, pp.73-85, 2004
  • Recently, with the growth of the wireless Internet, PDAs, and HPCs, the focus of GIS (Geographic Information System) research and development has shifted to Real-Time Mobile GIS for serving LBS (location-based services). To offer LBS efficiently, there must be a real-time GIS platform that can deal with the dynamic status of moving objects, and a location index that can handle the characteristics of location data. Location data can use the same data types as GIS (e.g., point), but the management of location data is very different. Therefore, in this paper, we study a Real-Time Mobile GIS that uses the HBR-tree to manage a large volume of location data efficiently. The Real-Time Mobile GIS developed in this paper consists of the HBR-tree and the Real-Time GIS Platform. The HBR-tree, which we propose in this paper, is a combined index built from the R-tree and a spatial hash (its update behavior is sketched below). Although location data are updated frequently, update operations are done within the same hash table in the HBR-tree, so it costs less than other tree-based indexes. Since the HBR-tree uses the same search mechanism as the R-tree, location data can be searched quickly. The Real-Time GIS Platform consists of a real-time GIS engine extended from a main-memory database system, a middleware that can transfer spatial and aspatial data to clients and receive location data from them, and a mobile client that operates on mobile devices. In particular, this paper describes the performance evaluation of the HBR-tree and the real-time GIS engine, conducted with practical tests.
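The HBR-tree itself layers R-tree search over hash buckets; the simplified stand-in below, a minimal sketch assuming a uniform grid hash, shows only the key property the abstract claims: a frequent location update is absorbed inside one bucket and touches no tree structure unless the object crosses a cell boundary. Cell size and method names are illustrative.

```python
from collections import defaultdict

class GridHashIndex:
    def __init__(self, cell=100.0):
        self.cell = cell
        self.buckets = defaultdict(dict)   # cell key -> {obj_id: (x, y)}
        self.where = {}                    # obj_id -> current cell key

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def update(self, obj_id, x, y):
        key = self._key(x, y)
        old = self.where.get(obj_id)
        if old is not None and old != key:
            del self.buckets[old][obj_id]  # rare: object crossed a cell
        self.buckets[key][obj_id] = (x, y) # common case: in-bucket update
        self.where[obj_id] = key

    def range_query(self, x1, y1, x2, y2):
        """Scan only the cells overlapping the query window, R-tree-style."""
        kx1, ky1 = self._key(x1, y1)
        kx2, ky2 = self._key(x2, y2)
        for kx in range(kx1, kx2 + 1):
            for ky in range(ky1, ky2 + 1):
                for oid, (x, y) in self.buckets[(kx, ky)].items():
                    if x1 <= x <= x2 and y1 <= y <= y2:
                        yield oid
```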


ATM Cell Encipherment Method using Rijndael Algorithm in Physical Layer (Rijndael 알고리즘을 이용한 물리 계층 ATM 셀 보안 기법)

  • Im Sung-Yeal;Chung Ki-Dong
    • The KIPS Transactions: Part C, v.13C no.1 s.104, pp.83-94, 2006
  • This paper describes an ATM cell encipherment method using the Rijndael algorithm, adopted as the AES (Advanced Encryption Standard) by NIST in 2001. ISO 9160 specifies the requirements for physical-layer data encryption/decryption. To demonstrate the method, we implemented ATM data encipherment equipment that satisfies the requirements of ISO 9160 and verified encipherment/decipherment processing at the ATM STM-1 rate (155.52 Mbps). The DES algorithm processes data in 64-bit blocks with a 64-bit key, whereas the Rijndael algorithm processes data in 128-bit blocks with a selectable key length of 128, 192, or 256 bits; it is therefore more flexible for high-bit-rate data processing and stronger in encryption strength than DES. For real-time encryption of the high-bit-rate data stream, the Rijndael algorithm was implemented in an FPGA in this experiment. The boundary of each serial UNI cell is detected by the CRC method; for a user data cell, the 48-octet (384-bit) payload is converted in parallel and transferred to three Rijndael encipherment modules in 128-bit blocks individually (see the sketch below). After encryption, the header stored in a buffer is attached to the enciphered payload and the cell is retransmitted. At the receiving end, the cell boundary is again detected by the CRC method and the payload type is determined. If the payload is a user data cell, it is transferred to the three Rijndael decryption modules in 128-bit blocks for decryption; in the case of a maintenance cell, the payload is extracted without decryption.
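Enciphering the three 128-bit payload blocks independently, as the three parallel hardware modules do, corresponds to AES in ECB mode over the 48-octet payload with the 5-octet header left in the clear. The sketch below shows that framing in software, assuming the `cryptography` package; the key handling and cell layout are illustrative, since the paper's design runs in an FPGA.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes(16)  # placeholder 128-bit key; a real system negotiates keys

def encipher_cell(cell: bytes) -> bytes:
    """Encipher one ATM cell: header in the clear, payload AES-enciphered."""
    assert len(cell) == 53, "ATM cell = 5-octet header + 48-octet payload"
    header, payload = cell[:5], cell[5:]
    enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
    # Three independent 16-byte blocks, mirroring the three parallel
    # Rijndael modules described in the paper.
    blocks = b"".join(enc.update(payload[i:i + 16]) for i in (0, 16, 32))
    return header + blocks + enc.finalize()
```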

Efficacy of Small Bowel Displacement System in Post-Operative Pelvic Radiation Therapy of Rectal Cancer (소장 용적 측정을 통한 직장암의 수술 후 방사선치료 시 사용하는 소장 전위 장치(Small Bowel Displacement System : SBDS) 의 효용성 검토)

  • Ahn Yong Chan;Lim Do Hoon;Kim Moon Kyung;Wu Hong Gyun;Kim Dae Yong;Huh Seung Jae
    • Radiation Oncology Journal, v.16 no.1, pp.63-69, 1998
  • Purpose: This study evaluates the efficacy of a small bowel displacement system (SBDS) in post-operative pelvic radiation therapy (RT) of rectal cancer patients by measuring the small bowel volume included in the radiation fields receiving the therapeutic dose. Materials and Methods: Ten consecutive new rectal cancer patients referred to the Department of Radiation Oncology of Samsung Medical Center in May 1997 were included in this study. All patients were asked to drink Gastrografin® before simulation and were laid prone for conventional simulation and CT scans with and without the SBDS. The volume of opacified small bowel on the CT scans that was to be included in the radiation fields receiving the therapeutic dose was measured using a Picture Archiving and Communication System (PACS). Results: The average small bowel volumes with and without the SBDS were 176.0 ml (5.2-415.6 ml) and 185.1 ml (54.5-434.2 ml), respectively. Compared to the volumes without the SBDS, the volumes with the SBDS decreased by more than 10% in three patients, decreased by less than 10% in two, increased by less than 10% in three, and increased by more than 10% in two (the classification rule is illustrated below). Conclusion: Small bowel volume measurement using CT scans showed no significant advantage of using the SBDS in post-operative pelvic RT for rectal cancer patients, considering the additional effort and time needed for simulation and treatment setup.
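As a worked example of the four reported categories, the relative volume change with versus without the SBDS can be binned as below; the sample volumes in the usage line are illustrative, not the study's patient data.

```python
def classify(v_with: float, v_without: float) -> str:
    """Bin the percent volume change into the study's four categories."""
    change = (v_with - v_without) / v_without * 100.0  # percent change
    if change <= -10:
        return "decrease > 10%"
    if change < 0:
        return "decrease < 10%"
    if change < 10:
        return "increase < 10%"
    return "increase > 10%"

print(classify(160.0, 200.0))  # -20% change -> "decrease > 10%"
```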
