• Title/Summary/Keyword: S대역 (S-band)


SANET-CC : Zone IP Allocation Protocol for Offshore Networks (SANET-CC : 해상 네트워크를 위한 구역 IP 할당 프로토콜)

  • Bae, Kyoung Yul; Cho, Moon Ki
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.87-109 / 2020
  • Currently, thanks to major strides in wired and wireless communication technology, a variety of IT services are available on land. This trend is driving a growing demand for IT services on vessels at sea as well, and requests for services such as two-way digital data transmission, the Web, and mobile applications are expected to rise to the levels available on land. However, while a high-speed information communication network is easily accessible on land because it is based on fixed infrastructure such as APs and base stations, this is not the case at sea, so voice communication over a radio network is what is usually used on the water. To solve this problem, an additional frequency band for digital data exchange was allocated, and a ship ad-hoc network (SANET) that uses this band was proposed. Instead of satellite communication, which is costly to install and operate, SANET was developed to provide various IP-based IT services to ships at sea. Connectivity between land base stations and ships is essential in SANET: to become a member of the network, a ship must be assigned an IP address. This paper proposes SANET-CC, a protocol that allows ships to be assigned their own IP addresses. SANET-CC propagates several non-overlapping IP address ranges from land base stations to ships across the entire network in the form of a tree. Ships obtain their addresses through a simple request/response exchange with land base stations or with M-ships that can allocate IP addresses. SANET-CC can therefore eliminate the IP collision prevention (Duplicate Address Detection) process as well as the network separation and integration procedures caused by ship movement. Various simulations were performed to verify the applicability of this protocol to SANET, with the following results. First, with SANET-CC, about 91% of the ships in the network received IP addresses under all tested conditions, which is 6% higher than in existing studies, and adjusting the variables to each port's environment may improve this further. Second, all vessels received IP addresses in an average of 10 seconds regardless of conditions, a 50% reduction from the 20-second average of the previous study; moreover, considering that existing studies covered 50 to 200 vessels while this study covers 100 to 400, the efficiency gain is even larger. Third, existing studies could not derive optimal values for their variables because the results showed no consistent pattern as the variables changed, so optimal variable values could not be set for each port environment. In this paper, however, the results vary with the variables in a consistent pattern, which is significant because the protocol can be adapted to each port by adjusting the variable values. It was also confirmed that, regardless of the number of ships, the IP allocation ratio was highest at about 96% when the waiting time after an IP request was 75 ms, and that the tree structure maintained a stable network configuration when the number of IPs exceeded 30,000. Fourth, this study can be used to design a network that supports intelligent maritime control systems and offshore services without relying on satellite communication, and if LTE-M is deployed, it can also be used for various intelligent services. (A simplified sketch of the allocation flow follows this abstract.)
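The abstract describes the allocation mechanism only at a high level. The sketch below is a minimal, hypothetical model of the idea it outlines: a land base station owns non-overlapping address ranges, delegates whole blocks down a tree to M-ships, and each allocator answers ship requests from its own block, so no Duplicate Address Detection is needed. Class and method names (AllocatorNode, delegate, handle_request), block sizes, and the message format are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch of tree-based, collision-free IP allocation (SANET-CC style).
    from ipaddress import IPv4Network

    class AllocatorNode:
        """A node (land base station or M-ship) that can allocate addresses."""

        def __init__(self, name, pool):
            self.name = name
            self.pool = list(pool)  # non-overlapping IPv4Network blocks owned by this node
            self.free = [ip for net in self.pool for ip in net.hosts()]
            self.children = []

        def delegate(self, child_name, block_count=1):
            """Split off whole blocks to a child allocator (tree propagation)."""
            blocks, self.pool = self.pool[:block_count], self.pool[block_count:]
            # The delegated blocks' hosts are no longer allocatable by this node.
            delegated = {ip for net in blocks for ip in net.hosts()}
            self.free = [ip for ip in self.free if ip not in delegated]
            child = AllocatorNode(child_name, blocks)
            self.children.append(child)
            return child

        def handle_request(self, ship_id):
            """Request/response exchange: assign one unused address to a ship."""
            if not self.free:
                return None  # pool exhausted at this allocator
            address = self.free.pop(0)
            return {"ship": ship_id, "address": str(address), "allocator": self.name}

    if __name__ == "__main__":
        # The land base station starts with two non-overlapping /28 blocks (assumed sizes).
        base = AllocatorNode("land-base", [IPv4Network("10.0.0.0/28"),
                                           IPv4Network("10.0.0.16/28")])
        m_ship = base.delegate("M-ship-1")       # one block propagated down the tree
        print(base.handle_request("ship-A"))     # allocated directly by the base station
        print(m_ship.handle_request("ship-B"))   # allocated by the M-ship, no DAD needed

Because every allocator hands out addresses only from blocks it exclusively owns, two ships can never receive the same address, which is why the duplicate-address check can be dropped.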

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki; Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, focusing mainly on higher speeds to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living and industrial environments as a whole. To deliver these services, reduced latency and high reliability are critical for real-time applications on top of high data rates. 5G has therefore paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices per km². In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, low delay and high reliability for real-time services are just as important as high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signalling from data-plane packets, must control the delay-related tree structure that is available in an emergency during autonomous driving. In such scenarios, the network architecture that handles in-vehicle information is a major factor in delay. Since SDNs with a conventional centralized structure have difficulty meeting the required delay level, the optimal size of an SDN for information processing should be studied. SDNs therefore need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under the worst-case condition. In these SDN-structured networks, where vehicles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because link speed is sufficient and it contributes less than 1 ms, but the information change cycle and the SDN data processing time strongly affect the delay. In particular, in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System) that requires low latency and high reliability, information must be transmitted and processed very quickly, and delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request the relevant information according to the information flow.
For the simulation, since the 5G data rate is high enough, we assume that the neighbor-vehicle support information reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50 to 250 m, and vehicle speeds of 30 to 200 km/h, in order to examine the network architecture that minimizes the delay. (A rough worked delay example follows this abstract.)
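As a rough illustration of the delay reasoning above, the sketch below computes how long a vehicle stays inside a small cell for the radius and speed ranges given in the abstract, and a simplified total delay built from the three components the abstract names (information change cycle, SDN processing time, RTD). The specific delay values used for the components are assumed for illustration only and are not results from the paper.

    # Hypothetical back-of-the-envelope delay check for the scenario in the abstract.
    # All numeric delay values below are illustrative assumptions.

    def cell_dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
        """Time a vehicle spends crossing a cell along its diameter."""
        speed_ms = speed_kmh / 3.6
        return (2 * cell_radius_m) / speed_ms

    def total_delay_ms(info_cycle_ms: float, sdn_proc_ms: float, rtd_ms: float = 1.0) -> float:
        """Simplified sum of the three delay components named in the abstract."""
        return info_cycle_ms + sdn_proc_ms + rtd_ms

    if __name__ == "__main__":
        # Parameter ranges taken from the abstract: 50-250 m cell radius, 30-200 km/h.
        for radius in (50, 250):
            for speed in (30, 200):
                dwell = cell_dwell_time_s(radius, speed)
                print(f"radius={radius:3d} m, speed={speed:3d} km/h -> dwell {dwell:6.1f} s")

        # Assumed components: 10 ms information change cycle, 5 ms SDN processing.
        print(f"total delay ~ {total_delay_ms(10.0, 5.0):.1f} ms (RTD contributes only 1 ms)")

Even in the fastest case (50 m radius, 200 km/h) the vehicle remains in a cell for well over a second, so in this simplified model the end-to-end delay is dominated by the information change cycle and the SDN processing time rather than by the sub-millisecond RTD, which matches the argument in the abstract.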