• Title/Summary/Keyword: low-latency processing


Efficient Group Management Mechanism and Architecture for Secure Multicast (안전한 멀티캐스트 서비스 제공을 위한 효율적인 그룹 관리 메커니즘 및 구조)

  • Eun, Sang-A;Jo, Tae-Nam;Chae, Gi-Jun;Lee, Sang-Ho;Park, Won-Ju;Na, Jae-Hun
    • The KIPS Transactions:PartC, v.9C no.3, pp.323-330, 2002
  • Multicast services are becoming increasingly diverse and widely used. In proportion, they are attracting attackers' attention, and the risk of information leakage is growing. Research on secure multicast is therefore required so that multicast services can be provided efficiently. This paper presents an architecture for secure multicast that provides an efficient group management mechanism for groups whose members dynamically join and leave. The architecture can deliver secure multicast services to many users while addressing the security aspects of one-to-many communication. Simulation results show that the proposed architecture achieves efficient group management and secure data transmission with lower latency than existing secure multicast architectures.
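As an aside on the group-management idea above: when members join and leave dynamically, the group key is typically refreshed so that newcomers cannot read past traffic and leavers cannot read future traffic. The sketch below is a minimal, naive Python illustration of that rekey-on-membership-change pattern; it is not the paper's mechanism, and all names and choices (32-byte random keys, unicast key delivery) are assumptions.

```python
# Illustrative sketch only: naive group rekeying on member join/leave.
# This is NOT the paper's mechanism; names and structure are hypothetical.
import os

class GroupKeyManager:
    def __init__(self):
        self.members = set()
        self.group_key = os.urandom(32)  # current shared group key

    def join(self, member_id: str) -> bytes:
        # Backward secrecy: rekey so the newcomer cannot read past traffic.
        self.members.add(member_id)
        self.group_key = os.urandom(32)
        return self.group_key  # delivered to each member over a secure unicast channel

    def leave(self, member_id: str) -> bytes:
        # Forward secrecy: rekey so the departed member cannot read future traffic.
        self.members.discard(member_id)
        self.group_key = os.urandom(32)
        return self.group_key

mgr = GroupKeyManager()
mgr.join("alice"); mgr.join("bob")
mgr.leave("alice")  # triggers a rekey for the remaining members
```

A hierarchical scheme such as LKH would cut the rekey cost from one message per member to a logarithmic number of messages, which is the kind of efficiency gain a group-management architecture like the one above aims at.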

Wide Coverage Microphone System for Lecture Using Ceiling-Mounted Array Structure (천정형 배열 마이크를 이용한 강의용 광역 마이크 시스템)

  • Oh, Woojin
    • Journal of the Korea Institute of Information and Communication Engineering, v.22 no.4, pp.624-633, 2018
  • While multimedia lecture systems have become smarter with emerging technologies, the microphone still relies on classical approaches such as holding it in the hand or attaching it to the body. In this paper, we propose a ceiling-mounted array microphone system that provides wide reception coverage and lets instructors move freely without wearing a microphone. Instead of a complicated beamforming method, the proposed system adopts the cell and handover concepts of mobile communication and implements a wide-range microphone over several cells at low cost. Since the characteristics of unvoiced speech are similar to pseudo-noise, soft handover is shown to be possible with three microphones connected to a delay-sum multipath receiver. The proposed system was tested in a 6.3 m × 1.5 m area. For real-time processing, the correlation range can be reduced by 82% or more, and the output latency can be improved by using a delay-adaptive filter.
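To make the "delay-sum multipath receiver" idea concrete, the sketch below aligns three microphone signals by cross-correlation and averages them. It is only an illustration under assumed parameters (integer-sample delays, a ±64-sample search window); the paper's actual receiver, handover logic, and 82% correlation-range reduction are not reproduced here.

```python
# Minimal delay-and-sum sketch for three ceiling microphones (illustrative only).
import numpy as np

def estimate_delay(ref: np.ndarray, sig: np.ndarray, max_lag: int) -> int:
    """Estimate the integer-sample delay of `sig` relative to `ref` by cross-correlation.
    Restricting the search to +/- max_lag is what shrinks the correlation range."""
    lags = list(range(-max_lag, max_lag + 1))
    scores = [np.dot(ref[max_lag:-max_lag], np.roll(sig, -lag)[max_lag:-max_lag])
              for lag in lags]
    return lags[int(np.argmax(scores))]

def delay_and_sum(mics: list, max_lag: int = 64) -> np.ndarray:
    """Align each microphone to the first one and average (soft combining)."""
    ref = mics[0]
    aligned = [ref]
    for m in mics[1:]:
        d = estimate_delay(ref, m, max_lag)
        aligned.append(np.roll(m, -d))
    return np.mean(aligned, axis=0)

# Example: three noisy copies of the same signal with different delays.
rng = np.random.default_rng(0)
s = rng.standard_normal(16000)
mics = [np.roll(s, d) + 0.1 * rng.standard_normal(16000) for d in (0, 5, 12)]
out = delay_and_sum(mics)
```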

A Low-latency L2 Handoff Scheme between WiBro and cdma2000 Mobile Networks (WiBro와 cdma2000 이동통신망간 적은 지연을 위한 L2 핸드오프 방안)

  • Lee, Geon-Baik;Cho, Jin-Sung
    • The KIPS Transactions:PartC, v.13C no.7 s.110, pp.873-880, 2006
  • Since various networks are deployed and most users demand higher mobility, there has been much research on interworking between the widely deployed 3G networks and the rapidly spreading WLAN. Meanwhile, WiBro is attracting attention as a next-generation network because it is expected to provide sufficient mobility together with high-volume data transmission, so studying the integration of WiBro and cdma2000 should yield better results than the existing work on integrating WLAN and cdma2000. The L2 handoff proposed in this paper has an advantage over the existing L3 handoff scheme because, unlike L3 handoff, it does not require L3 procedures for mobility. Extensive computer simulations validate the efficiency of the proposed scheme.
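The claimed benefit of an L2 handoff is that it skips network-layer signaling entirely. The toy calculation below makes that explicit; every number (link-switch time, message counts, per-message RTT, registration delay) is a placeholder, not a value from the paper.

```python
# Toy latency comparison between an L2 handoff and an L3 (Mobile IP-style) handoff.
# All message counts and delays are placeholders for illustration only.

def handoff_latency_ms(link_switch_ms: float, signaling_msgs: int,
                       per_msg_rtt_ms: float, l3_registration_ms: float = 0.0) -> float:
    """Interruption time = link switch + signaling exchange + optional L3 registration."""
    return link_switch_ms + signaling_msgs * per_msg_rtt_ms + l3_registration_ms

l2 = handoff_latency_ms(link_switch_ms=50, signaling_msgs=2, per_msg_rtt_ms=20)
l3 = handoff_latency_ms(link_switch_ms=50, signaling_msgs=2, per_msg_rtt_ms=20,
                        l3_registration_ms=300)  # e.g., home-agent registration
print(f"L2 handoff ~ {l2:.0f} ms, L3 handoff ~ {l3:.0f} ms")
```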

A mobile data caching synchronization strategy based on in-demand replacement priority (수요에 따른 교체 우선 순위 기반 모바일 데이터베이스 캐쉬 동기화 정책)

  • Zhao, Jinhua;Xia, Ying;Lee, Soon-Jo;Bae, Hae-Young
    • Journal of the Korea Society of Computer and Information, v.17 no.2, pp.13-21, 2012
  • Mobile data caching is commonly used as an effective way to speed up local transaction processing and reduce server load. In mobile database environments, caching is especially crucial because of low bandwidth, high latency, and intermittent connectivity. Many mobile data caching strategies have been proposed over the past few years to handle these problems. However, with the widespread adoption of smartphones, these approaches cannot efficiently support the vast data requirements. In this paper, to make full use of cached data, lower the volume of wireless transmission, and raise the transaction success rate, we design a new mobile data caching synchronization strategy based on demand-driven replacement priority. We experimentally verify that our technique significantly reduces the volume of wireless transmission and improves the transaction success rate, especially when the mobile client requests a large amount of data.
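To illustrate the notion of a demand-based replacement priority, the sketch below keeps a per-entry access count and evicts the least-demanded entry when the cache is full. The scoring rule and class layout are assumptions for illustration; the paper's synchronization strategy is more involved.

```python
# Illustrative cache with demand-based replacement priority (not the paper's strategy).
class DemandPriorityCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = {}    # key -> value
        self.demand = {}   # key -> access count, used as the replacement priority

    def get(self, key):
        if key in self.items:
            self.demand[key] += 1   # each hit raises the entry's demand
            return self.items[key]
        return None                 # miss: caller fetches over the wireless link

    def put(self, key, value):
        if key not in self.items and len(self.items) >= self.capacity:
            victim = min(self.demand, key=self.demand.get)  # lowest-demand entry
            del self.items[victim]
            del self.demand[victim]
        self.items[key] = value
        self.demand.setdefault(key, 0)

cache = DemandPriorityCache(capacity=2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")        # raises the demand of "a"
cache.put("c", 3)     # evicts "b", the least-demanded entry
```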

Transmission Latency-Aware MAC Protocol Design for Intra-Body Communications (인체 채널에서 전자기파 전송 지연 특성을 고려한 다중 매체 제어 프로토콜 설계)

  • Kim, Seungmin;Park, JongSung;Ko, JeongGil
    • KIPS Transactions on Computer and Communication Systems, v.8 no.8, pp.201-208, 2019
  • Intra-Body Communication (IBC) is a communication method that uses the human body as the communication medium. Because the human body consists of water and electrolytes, such communication is feasible and offers low-power advantages. However, because IBC uses the human body directly as its medium, research on communication protocols at each layer has been lacking. In this paper, we identify MAC parameters that affect communication performance over the human body channel and propose a new MAC protocol. Our results show that the proposed MAC is suitable for supporting high-data-rate applications while maintaining a comparable radio duty cycle.
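The MAC parameters alluded to above (active and sleep periods, protocol overhead) trade radio duty cycle against achievable data rate. The back-of-the-envelope sketch below shows that trade-off; the parameter values are placeholders, not the paper's measurements.

```python
# Sketch of how basic duty-cycled MAC parameters trade duty cycle against throughput.
# All values are placeholders for illustration only.

def mac_figures(active_ms: float, sleep_ms: float, phy_rate_kbps: float,
                overhead_fraction: float = 0.2):
    """Return (radio duty cycle, effective throughput in kbps)."""
    period_ms = active_ms + sleep_ms
    duty_cycle = active_ms / period_ms
    goodput_kbps = phy_rate_kbps * duty_cycle * (1.0 - overhead_fraction)
    return duty_cycle, goodput_kbps

duty, goodput = mac_figures(active_ms=10, sleep_ms=90, phy_rate_kbps=1000)
print(f"duty cycle = {duty:.0%}, effective throughput ~ {goodput:.0f} kbps")
# Shorter sleep periods raise throughput for high-data-rate applications,
# at the cost of a higher radio duty cycle (more energy).
```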

Performance Comparison of Task Partitioning Methods in MEC System (MEC 시스템에서 태스크 파티셔닝 기법의 성능 비교)

  • Moon, Sungwon;Lim, Yujin
    • KIPS Transactions on Computer and Communication Systems, v.11 no.5, pp.139-146, 2022
  • With the recent development of the Internet of Things (IoT) and the convergence of vehicles and IT technologies, high-performance applications such as autonomous driving are emerging, and multi-access edge computing (MEC) has attracted much attention as a next-generation technology. To serve these computation-intensive tasks with low latency, many methods have been proposed to partition tasks so that they can be executed cooperatively by multiple MEC servers (MECSs). Conventional task partitioning methods either partition tasks on the vehicle, acting as a mobile device, and offload the parts to multiple MECSs, or offload tasks from the vehicle to a MECS and then partition and migrate them to other MECSs. In this paper, the performance of task partitioning methods based on offloading and migration is compared and analyzed in terms of service delay, blocking rate, and energy consumption, according to how the partitioning targets are selected and how many partitions are used. As the number of partitions increases, service delay improves, but blocking rate and energy consumption worsen.
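A simple way to see why more partitions first help and then stop helping is to model the service delay of a task split evenly across k MECSs, with a fixed coordination overhead per partition. The sketch below does exactly that; the delay model and all parameter values are assumptions for illustration, not the paper's formulation.

```python
# Illustrative service-delay estimate for a task partitioned across k MEC servers.
# The model and the numbers are assumptions, not the paper's results.

def service_delay_s(task_mbits: float, task_gcycles: float, k: int,
                    uplink_mbps: float, mecs_gcps: float,
                    per_partition_overhead_s: float = 0.01) -> float:
    """Partition the task into k equal parts processed in parallel on k MECSs."""
    part_tx = (task_mbits / k) / uplink_mbps     # transmit one part
    part_cpu = (task_gcycles / k) / mecs_gcps    # compute one part
    # Parts run in parallel, but each extra partition adds coordination overhead.
    return part_tx + part_cpu + k * per_partition_overhead_s

for k in (1, 2, 4, 8):
    d = service_delay_s(task_mbits=8, task_gcycles=4, k=k,
                        uplink_mbps=100, mecs_gcps=10)
    print(f"k={k}: service delay ~ {d*1000:.0f} ms")
# Delay falls as k grows until the per-partition overhead (and, in the paper's terms,
# blocking and energy costs) starts to dominate.
```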

IoT Edge Architecture Model to Prevent Blockchain-Based Security Threats (블록체인 기반의 보안 위협을 예방할 수 있는 IoT 엣지 아키텍처 모델)

  • Yoon-Su Jeong
    • Journal of Internet of Things and Convergence, v.10 no.2, pp.77-84, 2024
  • Over the past few years, IoT edges have begun to emerge on the basis of new low-latency communication protocols such as 5G. Despite their enormous advantages, however, IoT edges pose new security threats that require new security solutions. In this paper, we propose a cloud-based IoT edge architecture model that complements IoT systems. The proposed model applies machine learning to network traffic data extracted from IoT edge devices in order to prevent security threats in advance. In addition, it manages load and security in the access network (edge) by allocating some of the security data to local nodes. The model further reduces the load on the access network (edge) and protects vulnerable parts by assigning some data processing and management functions to local nodes within the IoT edge environment. It also virtualizes various IoT functions as a name service and deploys hardware functions and sufficient computational resources to local nodes as needed.
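As a concrete, purely illustrative example of "machine learning on traffic data extracted from IoT edge devices", the sketch below fits an isolation forest to normal traffic windows and flags an anomalous one at the local node. The features, model choice, and thresholds are assumptions, not the paper's design.

```python
# Minimal sketch of edge-side anomaly detection on IoT traffic features
# (packets/s, mean packet size, distinct destinations). Illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" traffic windows: [packets/s, mean packet bytes, distinct dests]
normal = rng.normal(loc=[200, 512, 5], scale=[20, 50, 1], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A new window with a burst toward many destinations (e.g., scanning behaviour).
suspect = np.array([[1800, 64, 120]])
if model.predict(suspect)[0] == -1:
    # In an edge architecture of this kind, the decision can be taken at the local
    # node, keeping suspect traffic off the access network until it is cleared.
    print("flag window for blocking / further inspection")
```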

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems, v.9 no.12, pp.291-306, 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the Internet of Things (IoT) promotes the learning and intelligence capabilities of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. This problem has been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises major privacy concerns. Edge computing, in which distributed computing nodes are placed close to the IoT end devices, is a viable solution for meeting the high computation and low-latency requirements and preserving user privacy. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices; we believe this review will contribute to the development of DNA-based intelligent services and applications. It describes the distributed training and inference architectures for deep learning models across multiple nodes of the edge computing platform, presents privacy-preserving approaches for deep learning at the network edge and the various application domains where deep learning on the network edge can be useful, and finally discusses open issues and challenges in leveraging deep learning within edge computing.
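One recurring idea in this line of work is split (device/edge) inference: run the first layers of a model on the IoT device and ship the intermediate activation to the edge. The toy calculation below picks the split point that minimizes end-to-end latency; the per-layer costs, activation sizes, and uplink rate are made-up numbers used only to illustrate the trade-off.

```python
# Toy split-point selection for device/edge deep learning inference (illustration only).
input_kb = 600                          # size of the raw input if fully offloaded
device_ms = [400, 350, 300, 250, 200]   # per-layer time on the IoT device
edge_ms   = [4, 3.5, 3, 2.5, 2]         # per-layer time on the edge node
cut_kb    = [300, 120, 60, 20, 5]       # activation size after each layer
uplink_kbps = 2000                      # assumed 2 Mb/s wireless uplink

def latency_ms(cut: int) -> float:
    """Run layers [0, cut) on the device, ship the activation, finish on the edge."""
    payload_kb = input_kb if cut == 0 else cut_kb[cut - 1]
    tx_ms = 0.0 if cut == len(device_ms) else payload_kb * 8 * 1000 / uplink_kbps
    return sum(device_ms[:cut]) + tx_ms + sum(edge_ms[cut:])

best = min(range(len(device_ms) + 1), key=latency_ms)
print({c: round(latency_ms(c)) for c in range(len(device_ms) + 1)}, "best cut:", best)
# An intermediate cut often wins: early layers shrink the activation enough that
# shipping it costs less than either full offload or fully on-device execution.
```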

End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems, v.22 no.1, pp.107-118, 2016
  • The advent of 5G mobile communications, expected in 2020, will enable many services such as the Internet of Things (IoT) and vehicle-to-infrastructure/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time operation. In particular, a high level of reliability and delay sensitivity combined with an increased data rate is very important for M2M, IoT, and Factory 4.0. 5G standardization organizations around the world have considered these services and grouped them to derive technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for example for sporting events or emergencies; the second covers support for e-Health, vehicle reliability, and the like; the third relates to VR games with delay sensitivity and real-time requirements. These groups have recently been reaching agreement on the requirements and target levels for such scenarios. Various techniques are being studied to satisfy these requirements, and they are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates control-plane signaling from data-plane packets. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, messages to be delivered in an emergency must be transported in a very short time; this is a typical case of high delay sensitivity. 5G therefore has to support high reliability and delay sensitivity for V2X in traffic control, which makes V2X a major delay-critical application. V2X (vehicle-to-infrastructure/vehicle/nomadic) covers all types of communication applicable to roads and vehicles and refers to a connected or networked vehicle. It can be divided into three kinds of communication: between a vehicle and infrastructure (vehicle-to-infrastructure, V2I), between vehicles (vehicle-to-vehicle, V2V), and between a vehicle and mobile equipment (vehicle-to-nomadic devices, V2N); further types will be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, its design is significant; however, the centralized architecture of SDN can be unfavorable for delay-sensitive services, since a centralized controller has to communicate with many nodes and supply the processing power. Therefore, for emergency V2X communications, delay-related control functions require a tree-like supporting structure. In such a scenario, the architecture of the network that processes vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a fully centralized SDN structure, research on the optimal size of an SDN domain for processing this information is needed. This study examined the SDN architecture under the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation over vehicle speed, cell radius, and cell tier to derive the range of cells for information transfer in the SDN network. In the simulation, because 5G provides a sufficiently high data rate, the information on neighboring vehicles delivered to the car was assumed to be error-free. Furthermore, the 5G small cell was assumed to have a radius of 50-100 m, and the maximum vehicle speed was set to 30-200 km/h in order to examine the network architecture that minimizes the delay.
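A quick sanity check on the delay budget follows from the abstract's own assumptions (cell radius 50-100 m, speed 30-200 km/h): the time a vehicle spends inside one small cell bounds how long the SDN control plane has to react. The sketch below computes that dwell time, assuming a straight-line crossing through the cell diameter (an assumption of this sketch, not of the paper).

```python
# Back-of-the-envelope dwell time of a vehicle inside one 5G small cell,
# using the abstract's ranges: radius 50-100 m, speed 30-200 km/h.

def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    speed_ms = speed_kmh / 3.6
    return 2 * cell_radius_m / speed_ms   # time to cross the cell diameter

for radius in (50, 100):
    for speed in (30, 200):
        t = dwell_time_s(radius, speed)
        print(f"radius {radius} m, speed {speed} km/h -> dwell ~ {t:.1f} s")
# At 200 km/h and a 50 m radius the dwell time is only ~1.8 s, so any emergency
# message plus the SDN control-plane round trip must fit well inside that window.
```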

Effects of Low-Level Visual Attributes on Threat Detection: Testing the Snake Detection Theory (저수준 시각적 특질이 위협 탐지에 미치는 효과: 뱀 탐지 이론의 검증)

  • Kim, Taehoon;Kwon, Dasom;Yi, Do-Joon
    • Science of Emotion and Sensibility, v.23 no.3, pp.47-62, 2020
  • The snake detection theory posits that, owing to competition with snakes, the primate visual system evolved to detect camouflaged snakes. Specifically, one of its hypotheses states that the subcortical visual pathway, consisting mainly of koniocellular cells, enables humans to detect the threat of snakes automatically, without consuming mental resources. Here we tested this hypothesis by comparing human participants' responses to snakes with those to fearful faces and flowers. Participants viewed either original images or converted images that lacked between-category differences in color, luminance, contrast, and spatial frequency energy. Participants in Experiment 1 produced valence and arousal ratings for each image, while those in Experiment 2 detected target images in the breaking continuous flash suppression (bCFS) paradigm. Visual factors influenced the responses to snakes most strongly: after the visual differences were minimized, snakes were rated as less negative and less arousing and were detected more slowly from suppression, whereas images from the other categories were less affected by the conversion. In particular, fearful faces were rated as greater threats and were detected more quickly than the other categories. In addition, for snakes, changes in arousal ratings and changes in bCFS response times were negatively correlated: snake images whose arousal ratings decreased showed increased detection latency. These findings suggest that the influence of snakes on human threat responses is limited relative to fearful faces, and that detection responses in bCFS share common processing mechanisms with conscious ratings. In conclusion, the current study calls into question the assumption that snake detection in humans is a product of unconscious subcortical visual processing.