• Title/Summary/Keyword: Network Bottleneck


Supporting RSVP for IP Multicast over ATM Networks with MARS Architecture based on MCS (MCS 기반 MARS를 사용하는 ATM 망에서의 IP 멀티캐스트를 위한 RSVP 지원 방안)

  • Choe, Jeong-Hyeon; Lee, Mi-Jeong
    • Journal of KIISE: Computer Systems and Theory, v.26 no.7, pp.813-826, 1999
  • Emerging real-time multimedia applications require multicast service with QoS (Quality of Service) support. The overlay architecture MARS (Multicast Address Resolution Server) was proposed to support IP multicast over ATM networks, and the resource reservation protocol RSVP was proposed to provide QoS in an Internet originally based on best-effort service only. In this paper, we propose two schemes for supporting IP multicast with QoS over ATM networks under the MARS architecture: the 'RSVP Previous Hop Node (RPHN) scheme' and the 'MARS server based scheme'. In the RPHN scheme, RSVP reservation messages sent by receivers in a multicast group are carried from the egress nodes to the ingress node of the multicast flow over one-to-one bidirectional ATM control VCs set up between each such pair of nodes, and RSVP message processing takes place at the ingress nodes. In the MARS server based scheme, RSVP reservation messages are carried over the MARS control VCs that already exist between the egress nodes and the MARS server, and the MARS server is extended to process RSVP reservation messages; no additional ATM VCs are therefore required for reservation-message delivery, at the cost of a higher processing burden on the MARS server. Simulation results show that the MARS server based scheme not only saves ATM control VCs but, in some cases, may also deliver RSVP reservation messages with smaller delay. When the number of simultaneous RSVP flows in the MARS cluster is large, however, the MARS server based scheme may suffer performance degradation because the MARS server becomes a performance bottleneck.
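
As a rough illustration of the VC-saving argument (a toy counting model of our own, not the paper's simulation; the rule that RPHN needs one extra control VC per egress node of every flow is a simplifying assumption), the following Python sketch compares the additional control VCs each scheme would require:

```python
# Hypothetical back-of-the-envelope comparison of additional ATM control VCs
# needed for RSVP reservation-message delivery in the two proposed schemes.
# Assumption: in the RPHN scheme, each flow needs one bidirectional VC per
# (egress, ingress) pair; in the MARS server based scheme, the pre-existing
# MARS control VC of each egress node is reused, so no extra VCs are added.

def rphn_control_vcs(flows: int, egress_per_flow: int) -> int:
    """One point-to-point control VC per egress node of every flow."""
    return flows * egress_per_flow

def mars_server_control_vcs() -> int:
    """Reservation messages ride on VCs MARS already maintains."""
    return 0

if __name__ == "__main__":
    for flows in (1, 10, 100):
        print(f"{flows:>3} flows, 8 egress nodes each: "
              f"RPHN adds {rphn_control_vcs(flows, 8)} VCs, "
              f"MARS-server adds {mars_server_control_vcs()} VCs")
```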

An Extended Virtual LAN System Deploying Multiple Route Servers (다중 라우트 서버를 두는 확장된 가상랜 시스템)

  • Seo, Ju-Yeon; Lee, Mee-Jeong
    • Journal of KIISE: Information Networking, v.29 no.2, pp.117-128, 2002
  • Virtual LAN (VLAN) is an architecture that enables communication between end stations as if they were on the same LAN regardless of their physical locations. A VLAN defines a limited broadcast domain to reduce bandwidth waste. Newbridge Inc. developed a layer 3 VLAN product called VIVID, which configures VLANs based on IP subnet addresses. In a VIVID system, a single route server handles address resolution, VLAN configuration, and data broadcasting to a VLAN. As the network over which the VLANs supported by the VIVID system span grows larger, this single route server can become a bottleneck for system performance. One approach to this problem is to deploy multiple route servers. We propose two architectures, organic and independent, that extend the original VIVID system to deploy multiple route servers. A series of simulations was performed to analyze the performance of each proposed architecture. The results show that the performance of the proposed architectures depends on the length of VLAN broadcasting sessions and the number of broadcast data frames generated per session. They also show a tradeoff between the scalability of the architectures and their efficiency in data transmission.
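
To make the route-server bottleneck concrete (a toy load model of our own, not the paper's simulation; the session count, frame count, and random assignment policy below are illustrative assumptions), the sketch distributes VLAN broadcast sessions over a configurable number of route servers and reports the per-server frame load:

```python
import random

# Toy load model: each broadcast session generates some number of frames,
# and every frame must be relayed by the route server owning that session.
# With one server, all frames converge on it; with several, load is split.

def per_server_load(n_servers: int, n_sessions: int,
                    frames_per_session: int, seed: int = 0) -> list[int]:
    random.seed(seed)
    load = [0] * n_servers
    for _ in range(n_sessions):
        load[random.randrange(n_servers)] += frames_per_session
    return load

if __name__ == "__main__":
    for servers in (1, 2, 4):
        print(servers, "route server(s):", per_server_load(servers, 100, 50))
```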

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin; Lee, Jonghoon; Han, Sangjin; Park, Choong-Shik
    • Journal of Intelligence and Information Systems, v.27 no.3, pp.57-73, 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming increasingly important. System monitoring data are multidimensional time series, which makes it difficult to account for the characteristics of multidimensional data and of time series data at the same time. For multidimensional data, correlations between variables must be considered, and existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time series data are commonly preprocessed with sliding windows and time series decomposition for autocorrelation analysis, but these techniques further increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used early on, and machine learning and artificial neural networks are now being actively applied. Statistical methods are hard to apply to non-homogeneous data and do not detect local outliers well. Regression-based methods fit a regression model under parametric assumptions and detect anomalies by comparing predicted and actual values, but their performance drops when the model is weak or the data contain noise and outliers, and they require training data free of such contamination. An autoencoder, an artificial neural network trained to reconstruct its input as closely as possible, has many advantages over probability and linear models, cluster analysis, and supervised learning: it can be applied to data that do not satisfy distributional or linearity assumptions, and it can be trained without labeled data. However, it is still limited in identifying local outliers in multidimensional data, and the dimensionality grows considerably when time series characteristics are handled by preprocessing. In this study, we propose CMAE (Conditional Multimodal Autoencoder), which improves anomaly detection performance by considering local outliers and time series characteristics. First, a Multimodal Autoencoder (MAE) is applied to mitigate the local-outlier limitation of multidimensional data; multimodal networks are commonly used to learn heterogeneous inputs such as voice and images, and because the modalities share the autoencoder's bottleneck, correlations between them are learned. In addition, a Conditional Autoencoder (CAE) is used to learn the characteristics of the time series effectively without increasing the data dimensionality. Conditional inputs are usually categorical variables, but in this study time is used as the condition so that periodicity can be learned. The proposed CMAE was verified against a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). Reconstruction performance over 41 variables was examined for the proposed and comparison models. Reconstruction quality differs by variable; the Memory, Disk, and Network modalities are reconstructed well in all three autoencoders, as their loss values are small. The Process modality showed no significant difference across the three models, while the CPU modality performed best with CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators the ranking was CMAE, MAE, then UAE. In particular, the recall of CMAE was 0.9828, indicating that nearly all anomalies are detected; accuracy improved to 87.12% and the F1-score was 0.8883, which we consider suitable for anomaly detection. From a practical standpoint, the proposed model has an advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows add procedures that must be managed, and the resulting dimensional increase slows inference, whereas the proposed model is easy to apply in practice in terms of inference speed and model management.
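
The CMAE idea described above (per-modality encoders sharing one bottleneck, with time injected as a condition) can be sketched roughly as follows. This is a minimal PyTorch illustration under our own assumptions about layer sizes and how the time condition is concatenated; it is not the authors' implementation:

```python
import torch
import torch.nn as nn

class ConditionalMultimodalAE(nn.Module):
    """Toy CMAE: one encoder/decoder per modality, a shared bottleneck,
    and a time condition concatenated at the bottleneck."""

    def __init__(self, modal_dims, cond_dim=2, hidden=32, bottleneck=16):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden), nn.ReLU()) for d in modal_dims])
        self.bottleneck = nn.Sequential(
            nn.Linear(hidden * len(modal_dims) + cond_dim, bottleneck), nn.ReLU())
        self.expand = nn.Sequential(
            nn.Linear(bottleneck + cond_dim, hidden * len(modal_dims)), nn.ReLU())
        self.decoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, d)) for d in modal_dims])
        self.hidden = hidden

    def forward(self, modals, cond):
        # modals: list of tensors, one per modality (e.g. CPU, Memory, Disk, Network)
        # cond:   encoded time, e.g. [sin, cos] of the time of day (assumption)
        z = torch.cat([enc(x) for enc, x in zip(self.encoders, modals)] + [cond], dim=1)
        h = self.expand(torch.cat([self.bottleneck(z), cond], dim=1))
        chunks = h.split(self.hidden, dim=1)
        return [dec(c) for dec, c in zip(self.decoders, chunks)]

# Anomaly score = reconstruction error summed over modalities (assumption).
model = ConditionalMultimodalAE(modal_dims=[8, 8, 8, 8])
xs = [torch.randn(4, 8) for _ in range(4)]
cond = torch.randn(4, 2)
recon = model(xs, cond)
score = sum(((x - r) ** 2).mean(dim=1) for x, r in zip(xs, recon))
```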

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon; Yun, Chang Ho; Park, Jong Won; Lee, Yong Woo
    • Journal of Internet Computing and Services, v.15 no.3, pp.45-52, 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city intended to satisfy people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE/IoT) and includes a large number of networked video cameras. These cameras, together with sensors, are one of the main input sources for many U-City services, and they continuously generate a huge amount of video information, genuinely big data for the U-City. The U-City usually has to manipulate this big data in real time, which is not at all easy. In addition, the accumulated video data often have to be analyzed to detect an event or find a person, which requires considerable computational power and usually takes a long time. Research efforts to reduce the processing time of big video data are ongoing, and cloud computing can be a good solution. Among the many applicable cloud-computing methodologies, MapReduce is attractive: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, so the data produced by the networked cameras grow exponentially; we face real big data when handling images from high-quality cameras. Large-scale video surveillance systems were not practical before cloud computing, but with suitable methodologies they are now spreading widely in U-Cities. Video data are unstructured, so good research results on analyzing them with MapReduce are scarce. This paper presents an analysis system for video surveillance, a cloud-computing based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming-IN component. The "video monitor" for the video images consists of a "video translator" and a "protocol manager", and the "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming-IN" component receives the video data from the networked cameras and delivers them to the "storage client"; it also manages the network bottleneck to smooth the data stream. The "storage client" receives the video data from the "streaming-IN" component, stores them in the storage, and helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocols. The "video translator" sub-component lets users manage the resolution, codec, and frame rate of the video images, and the "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud-computing storage: Hadoop stores the data in HDFS and provides a platform that can process them with the simple MapReduce programming model. We suggest our own methodology for analyzing the video images with MapReduce; the workflow of the video analysis is presented and explained in detail in this paper. The performance evaluation showed that the proposed system worked well, and the results are presented with analysis. On our cluster we used compressed 1920×1080 (FHD) video data, the H.264 codec, and HDFS as video storage. We measured the processing time according to the number of frames per mapper, and by tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
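
The paper's MapReduce workflow is not reproduced here, but the split-map-reduce shape it describes can be illustrated with a small, self-contained Python sketch; the chunk size, the one-minute time bucket, and the per-frame `detect_event` placeholder are hypothetical, not taken from the paper:

```python
from collections import defaultdict
from itertools import islice

# Toy MapReduce-style pipeline: split the frame stream into chunks (one
# chunk per mapper), map each frame to (time_bucket, event_flag), then
# reduce by summing events per bucket. Stand-in for running equivalent
# logic with Hadoop MapReduce over frames stored in HDFS.

def detect_event(frame) -> bool:
    # Placeholder detector: flag "bright" frames. A real system would run
    # an image-analysis routine on the decoded H.264 frame here.
    return frame["brightness"] > 0.8

def mapper(frames):
    for frame in frames:
        yield frame["timestamp"] // 60, 1 if detect_event(frame) else 0

def reducer(pairs):
    totals = defaultdict(int)
    for bucket, flag in pairs:
        totals[bucket] += flag
    return dict(totals)

def chunked(iterable, size):
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

if __name__ == "__main__":
    frames = [{"timestamp": t, "brightness": (t % 10) / 10} for t in range(300)]
    mapped = [kv for chunk in chunked(frames, 64) for kv in mapper(chunk)]
    print(reducer(mapped))  # events per one-minute bucket
```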

Study on the Neural Network for Handwritten Hangul Syllabic Character Recognition (수정된 Neocognitron을 사용한 필기체 한글인식)

  • 김은진; 백종현
    • Korean Journal of Cognitive Science, v.3 no.1, pp.61-78, 1991
  • This paper describes the application of a modified Neocognitron model with a backward path to the recognition of handwritten Hangul (Korean) syllabic characters. In his original report, Fukushima demonstrated that the Neocognitron can recognize handwritten numerical characters of 19×19 size. Our version accepts 61×61 images of handwritten Hangul syllabic characters, or parts thereof, entered with a mouse or a scanner. It consists of an input layer and three pairs of Uc layers. The last Uc layer, the recognition layer, consists of 24 planes of 5×5 cells, which indicate the identity of the grapheme receiving attention at a given time and its relative position in the input layer. The network was trained on 10 simple vowel graphemes and 14 simple consonant graphemes and their spatial features; patterns that were not easily learned were trained more extensively. The trained network, which can classify individual graphemes under deformation, noise, size variance, transformation, or rotation, was then used to recognize Korean syllabic characters, using its selective attention mechanism for the image segmentation task within a syllabic character. In initial tests, our model correctly recognized up to 79% of the various test patterns of handwritten Korean syllabic characters. These results show the Neocognitron to be a powerful model for recognizing deformed handwritten characters from a large character set by segmenting its input images into recognizable parts. The same approach may be applied to the recognition of Chinese characters, which are much more complex in both their structures and their graphemes, but processing time appears to be the bottleneck before it can be implemented; special hardware such as a neural chip appears to be an essential prerequisite for practical use of the model. Further work is required before the model can recognize Korean syllabic characters consisting of complex vowels and complex consonants, where correct recognition of the neighboring area between two simple graphemes becomes more critical.
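
For readers unfamiliar with the architecture, the Neocognitron alternates feature-extracting S-cell layers with position-tolerant C-cell (Uc) layers. The NumPy sketch below shows one such S/C pair in a generic, heavily simplified form; the kernel values, sizes, and max-pooling choice are our illustrative assumptions, not those of the paper:

```python
import numpy as np

def s_layer(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """S-cells: respond where the local patch matches the trained feature."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = max(0.0, float((patch * kernel).sum()))  # rectified match
    return out

def c_layer(s_map: np.ndarray, pool: int = 2) -> np.ndarray:
    """C-cells: pool S-cell responses so the feature tolerates small shifts."""
    h, w = s_map.shape
    return s_map[:h - h % pool, :w - w % pool] \
        .reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))

if __name__ == "__main__":
    img = np.zeros((8, 8)); img[2:6, 3] = 1.0          # a vertical stroke
    vertical_edge = np.array([[1.0], [1.0], [1.0]])    # hand-set toy feature
    print(c_layer(s_layer(img, vertical_edge)))
```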

Betweenness Centrality-based Evacuation Vulnerability Analysis for Subway Stations: Case Study on Gwanggyo Central Station (매개 중심성 기반 지하철 역사 재난 대피 취약성 분석: 광교중앙역 사례연구)

  • Jeong, Ji Won; Ahn, Seungjun; Yoo, Min-Taek
    • KSCE Journal of Civil and Environmental Engineering Research, v.44 no.3, pp.407-416, 2024
  • Over the past 20 years, the number and size of subway stations and underground structures have increased rapidly worldwide, and the importance of safety for subway users has grown continuously. Because of their structural characteristics, subway stations have limited visibility and escape routes in disaster situations, posing a high risk of human casualties and economic losses. Analysis of disaster vulnerabilities is therefore essential not only for existing subway systems but also for deep underground facilities such as GTX. This paper presents a case study applying a betweenness centrality-based disaster vulnerability analysis framework to Gwanggyo Central Station. The analysis of the station's base model and various disaster scenarios revealed that the betweenness centrality distribution is symmetrical, following the station's symmetrical spatial structure, with high centrality concentrated in the central areas of basement levels one and two. These areas exhibited values more than 220% above the average, indicating a high likelihood of bottleneck phenomena during evacuation in disaster situations. To mitigate this vulnerability, scenarios were proposed that distribute the evacuation flows concentrated in the central areas by connecting staircases continuously, enhancing the usability of peripheral areas as evacuation routes. When this modification was considered, the centrality concentration decreased, confirming that the proposed additional evacuation paths can effectively disperse the evacuation flow in Gwanggyo Central Station. This case study demonstrates that the proposed framework for assessing evacuation vulnerability is effective in enhancing subway station user safety and can be applied in disaster response and management plans for major underground facilities.
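
As an illustration of the kind of analysis described (not the paper's actual station model), the sketch below builds a toy station graph with networkx, computes betweenness centrality, and flags nodes whose centrality sits well above the average, echoing the paper's 220%-above-average observation; the graph topology and the 2.2× threshold are assumptions:

```python
import networkx as nx

# Toy vertical-circulation graph: platforms connect to concourse nodes,
# which connect to exits, so concourse nodes sit on most shortest paths.
G = nx.Graph()
G.add_edges_from([
    ("platform_A", "concourse_1"), ("platform_B", "concourse_1"),
    ("concourse_1", "concourse_2"),
    ("concourse_2", "exit_1"), ("concourse_2", "exit_2"),
    ("platform_A", "side_stair"), ("side_stair", "exit_1"),  # peripheral route
])

bc = nx.betweenness_centrality(G, normalized=True)
mean_bc = sum(bc.values()) / len(bc)

# Flag potential evacuation bottlenecks: centrality far above the average.
for node, value in sorted(bc.items(), key=lambda kv: -kv[1]):
    flag = "  <-- potential bottleneck" if value > 2.2 * mean_bc else ""
    print(f"{node:12s} {value:.3f}{flag}")
```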