• Title/Summary/Keyword: simulation and mobile communications


Modulation Scheme for Network-coded Bi-directional Relaying over an Asymmetric Channel (양방향 비대칭 채널에서 네트워크 부호화를 위한 변조 방식)

  • Ryu, Hyun-Seok;Kang, Chung-G.
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.2B / pp.97-109 / 2012
  • In this paper, we propose a modulation scheme for a network-coded bi-directional relaying (NBR) system over an asymmetric channel, in which the qualities of the relay channel (the link between the BS and RS) and the access channel (the link between the RS and MS) are not identical. The proposed scheme employs a dual constellation in such a way that the RS broadcasts the network-coded symbols, modulated by two different constellations, to the MS and BS over two consecutive transmission intervals. We derive an upper bound on the average bit error rate (BER) of the proposed scheme and compare it with the hybrid constellation-based modulation scheme proposed for the asymmetric bi-directional link. Furthermore, we investigate the channel utilization of the existing bi-directional relaying schemes as well as the NBR system with the proposed dual constellation diversity-based modulation (DCD). Our simulation results show that the DCD achieves an $E_b/N_0$ gain of about 3.5~4 dB at an average BER of $10^{-2}$, while maintaining the same spectral efficiency as the existing NBR schemes over the asymmetric bi-directional relaying channel.
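
The core of the NBR scheme above is a bitwise XOR at the relay, which lets each terminal strip its own contribution from the broadcast. A minimal sketch (the Gray-mapped QPSK mapper and the two-interval schedule are illustrative assumptions, not the paper's exact constellations):

```python
import numpy as np

rng = np.random.default_rng(0)
bits_bs = rng.integers(0, 2, 8)  # bits the BS delivered to the relay
bits_ms = rng.integers(0, 2, 8)  # bits the MS delivered to the relay

# Relay forms the network-coded broadcast word.
coded = np.bitwise_xor(bits_bs, bits_ms)

def qpsk(bits):
    """Gray-mapped QPSK; a denser constellation could carry the same
    coded bits on the stronger link in the second interval."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

tx_interval_1 = qpsk(coded)  # first of the two broadcast intervals

# Each terminal cancels its own bits with a second XOR.
recovered_at_ms = np.bitwise_xor(coded, bits_ms)  # -> bits_bs
recovered_at_bs = np.bitwise_xor(coded, bits_bs)  # -> bits_ms
```

Sending the same XOR-coded word under two constellations is what gives each receiver two looks at it, which is the diversity the DCD scheme exploits.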

Dynamic Power Management Method Considering VBR Video Traffic in Wi-Fi Direct (Wi-Fi Direct에서 VBR 비디오 트래픽을 고려한 동적 에너지 관리 기법)

  • Jin, Mei-Hua;Jung, Ji-Young;Lee, Jung-Ryun
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.11 / pp.2218-2229 / 2015
  • Recently, the Wi-Fi Alliance defined Wi-Fi Direct, which enables a direct connection between mobile devices anytime, anywhere. In Wi-Fi Direct, all devices are categorized as either group owner (GO) or client. Since portability is emphasized in Wi-Fi Direct devices, it is essential to control the energy consumption of a device very efficiently. To avoid unnecessary power consumption by the GO, the Wi-Fi Direct standard defines two power management schemes: the opportunistic power saving scheme and the Notice of Absence (NOA) scheme. However, these two schemes do not consider the traffic pattern, so high energy efficiency cannot be expected. In this paper, we suggest an algorithm that enhances the energy efficiency of Wi-Fi Direct power saving by considering the characteristics of multimedia video traffic. The proposed algorithm utilizes the statistical distribution of video frame sizes and adjusts the length of the awake interval dynamically. Also, considering the inter-dependency among video frames, the proposed algorithm assigns priorities to video frames and ensures that a video frame with high priority is transmitted with higher probability than frames with low priority. Simulation results show that the proposed method outperforms the traditional NOA scheme in terms of average delay and energy efficiency.
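
The dynamic awake-interval idea can be sketched as sizing the GO's awake window from a quantile of the observed frame-size distribution. The lognormal traffic model and the 90th-percentile rule below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def awake_interval_ms(frame_sizes_bits, link_rate_bps, quantile=0.9):
    """Awake window long enough to drain a `quantile`-fraction video
    frame in one interval at the given PHY rate (hypothetical rule)."""
    return 1e3 * np.quantile(frame_sizes_bits, quantile) / link_rate_bps

rng = np.random.default_rng(1)
# Toy VBR trace: lognormal frame sizes (I-frames form the heavy tail).
frames = rng.lognormal(mean=12.0, sigma=0.6, size=1000)
interval = awake_interval_ms(frames, link_rate_bps=54e6)
```

A priority-aware variant would additionally keep the radio awake whenever an I-frame, on which later P/B-frames depend, is still pending.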

OFDM/OQAM-IOTA System With Odd/Even Center Preamble Structure (Odd/Even Center Preamble 구조를 가진 OFDM/OQAM-IOTA 시스템)

  • Kang, Seung-Won;Heo, Joo;Chang, Kyung-Hi
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.12A / pp.1153-1160 / 2005
  • The OFDM/OQAM (Offset QAM)-IOTA system uses the IOTA (Isotropic Orthogonal Transform Algorithm) function, which has superior localization in the time and frequency domains, instead of the guard interval used by the conventional OFDM/QAM system to be robust to multipath channels. Therefore, the OFDM/OQAM-IOTA system has higher spectral efficiency than the conventional OFDM/QAM system. However, when a channel estimation scheme designed for conventional OFDM/QAM is applied straightforwardly to the OFDM/OQAM-IOTA system, an intrinsic inter-symbol interference is observed, so a preamble structure suited to channel estimation in the OFDM/OQAM-IOTA system is required. In this paper, we propose a new preamble structure appropriate for the OFDM/OQAM-IOTA system, perform both ideal and practical channel estimation at low-to-medium mobile speeds, and compare the results with the conventional OFDM/QAM system. Simulation results show that the OFDM/OQAM-IOTA system with the proposed preamble structure achieves a 1.5 dB $E_b/N_0$ gain at a target BER of $10^{-3}$ and about a 25% transmission rate gain over the conventional OFDM/QAM system, which uses a guard interval of a quarter of the FFT size.
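
The quoted 25% rate gain follows directly from dropping the quarter-length guard interval; a quick check:

```python
fft_size = 1024                    # any FFT size works; 1024 is illustrative
guard = fft_size // 4              # guard interval = quarter of the FFT size
useful_qam = fft_size / (fft_size + guard)  # useful-sample fraction with a GI
useful_oqam = 1.0                  # IOTA pulse shaping needs no guard interval
rate_gain = useful_oqam / useful_qam - 1    # = guard / fft_size = 25%
```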

Analysis and Compensation of STO Effects in the Multi-band OFDM Communication System of TDM Reception Method (TDM 수신 방식의 멀티 대역 OFDM 통신 시스템에서 STO 특성 분석 및 보상)

  • Lee, Hui-Kyu;Ryu, Heung-Gyoon
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.5A / pp.432-440 / 2011
  • For 4th-generation mobile communication, the LTE-Advanced system needs a broad frequency band of up to 100 MHz to provide a maximum data rate of 1 Gbps. However, it is very difficult to secure such a broad frequency band in the current frequency allocation situation, so carrier aggregation was proposed as the solution, in which several fragmented frequency bands are used at the same time. Basically, multiple parallel receivers are required to recover the information data from the different frequency bands, but this conventional multi-chain receiver system is very inefficient. Therefore, in this paper, we study a single-chain system that can receive the multi-band signals with a single receiver based on the time division multiplexing (TDM) reception method. The proposed TDM receiver efficiently receives the multi-band signals in the time domain and handles the baseband signals with one DSP board. However, serious distortion can be generated by the sampling timing offset (STO) in the TDM-based system. Therefore, we analyze the STO effects in the TDM-based system and propose a compensation method using the estimated STO. Finally, simulation shows that the proposed method is appropriate for the single-chain receiver and provides good compensation performance.
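
On known pilots, an STO of ε samples shows up after the FFT as a linear phase ramp across subcarriers, so it can be estimated from the phase slope and rotated out. A minimal single-symbol sketch (flat channel, noiseless, all parameter values illustrative rather than the paper's):

```python
import numpy as np

N = 64                              # FFT size (illustrative)
eps = 3.7                           # true sampling timing offset, in samples
k = np.arange(N)
pilots = np.ones(N, dtype=complex)  # known flat pilot symbols

# STO appears as a linear phase rotation exp(-j*2*pi*k*eps/N) per subcarrier.
rx = pilots * np.exp(-2j * np.pi * k * eps / N)

# Estimate the inter-subcarrier phase slope, then counter-rotate.
slope = np.angle(np.sum(rx[1:] * np.conj(rx[:-1])))
eps_hat = -slope * N / (2 * np.pi)
rx_comp = rx * np.exp(2j * np.pi * k * eps_hat / N)
```

With noise, the slope estimate would be averaged over many pilot symbols; the same counter-rotation then serves as the compensation step.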

RFID Based Mobile Robot Docking Using Estimated DOA (방향 측정 RFID를 이용한 로봇 이동 시스템)

  • Kim, Myungsik;Kim, Kwangsoo
    • The Journal of Korean Institute of Communications and Information Sciences / v.37C no.9 / pp.802-810 / 2012
  • This paper describes an RFID (Radio Frequency Identification) based target acquisition and docking system. RFID is a non-contact identification technology that can send a relatively large amount of information over an RF signal. A robot employing an RFID reader can identify neighboring tag-attached objects without any other sensing or supporting systems, such as a vision sensor. However, since current RFID does not provide spatial information about the identified object, the target docking problem remains to be solved before a task can be executed in a real environment. To address this, a direction-sensing RFID reader is developed using a dual-directional antenna: an antenna set composed of two identical directional antennas positioned perpendicular to each other. By comparing the received signal strength at each antenna, the robot can determine the DOA (Direction of Arrival) of the transmitted RF signal. In practice, DOA estimation poses a significant technical challenge, since the RF signal is easily distorted by the surrounding environment, and the robot can lose its way to the target in an electromagnetically disturbed environment. To address this, a g-filter based error correction algorithm is developed in this paper. The algorithm reduces the error using the difference of variances between the current estimated direction and the previously filtered directions. The simulation and experiment results clearly demonstrate that a robot equipped with the developed system can successfully dock to a target tag in an obstacle-cluttered environment.
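
The dual-directional antenna reduces DOA estimation to comparing two received powers, and the filtering step keeps a single multipath-distorted reading from swinging the heading. A simplified sketch (the idealized antenna pattern and the fixed-gain smoother are assumptions, not the paper's exact variance-based g-filter):

```python
import math

def doa_deg(p_a, p_b):
    """Bearing from the powers seen by two identical directional antennas
    mounted 90 degrees apart (idealized amplitude-pattern assumption)."""
    return math.degrees(math.atan2(math.sqrt(p_b), math.sqrt(p_a)))

def smooth(prev_deg, meas_deg, g=0.3):
    """Fixed-gain correction: accept only a fraction g of each new
    measurement, damping electromagnetically disturbed outliers."""
    return prev_deg + g * (meas_deg - prev_deg)

heading = doa_deg(1.0, 1.0)      # equal powers -> tag at 45 degrees
heading = smooth(heading, 90.0)  # a distorted reading pulls only partway
```

The paper's g-filter goes further by adapting the gain from the variance of recent estimates, so the filter trusts measurements more when the environment is quiet.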

A New Upper Layer Decoding Algorithm for a Hybrid Satellite and Terrestrial Delivery System (혼합된 위성 및 지상 전송 시스템에서 새로운 상위 계층 복호 알고리즘)

  • Kim, Min-Hyuk;Park, Tae-Doo;Kim, Nam-Soo;Kim, Chul-Seung;Jung, Ji-Won;Chun, Seung-Young
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.20 no.9 / pp.835-842 / 2009
  • DVB-SSP is a new broadcasting system for hybrid satellite communications that supports both mobile handheld and fixed terrestrial systems. However, a critical factor must be considered in upper layer decoding, which includes erasure Reed-Solomon error correction combined with a cyclic redundancy check: if there is even one bit error in an IP packet, the entire IP packet is marked as unreliable bytes, even though it contains correct bytes. If, for example, there is one real byte error in an IP packet of 512 bytes, 511 correct bytes are erased from the frame. Therefore, this paper proposes two upper layer decoding methods: LLR-based decoding and hybrid decoding. By means of simulation, we show that the performance of the proposed decoding algorithms is superior to that of the conventional one.
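
The problem described above follows from the Reed-Solomon decoding budget: a code with n − k parity bytes corrects e errors and f erasures whenever 2e + f ≤ n − k, so erasing a whole 512-byte packet for a single wrong byte is far more expensive than treating that byte as an error. A small sketch with an illustrative RS(255, 191) column code (the parameters are an assumption, not necessarily DVB-SSP's):

```python
def rs_decodable(n, k, errors, erasures):
    """Reed-Solomon decoding succeeds iff 2*errors + erasures <= n - k."""
    return 2 * errors + erasures <= n - k

n, k = 255, 191  # illustrative column code with 64 parity bytes

ok_as_error = rs_decodable(n, k, errors=1, erasures=0)    # one real byte error
ok_if_erased = rs_decodable(n, k, errors=0, erasures=65)  # over-erased row
```

Here `ok_as_error` holds while `ok_if_erased` does not: treating the lone wrong byte as an error costs 2 units of budget, but CRC-driven erasure of every byte the packet touches can exceed the 64-byte erasure budget outright, which is why LLR-based reliability marking helps.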

An Adaptive Coverage Control Algorithm for Throughput Improvement in OFDMA-based Relay Systems (OFDMA 기반 Relay 시스템에서 Throughput 성능 향상을 위한 적응적 커버리지 조절 기법)

  • Hyun, Myung-Reun;Hong, Dae-Hyoung;Lim, Jae-Chan
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.9B / pp.876-882 / 2009
  • In this paper, we propose a sub-cell coverage control algorithm that enhances cell throughput in OFDMA-based relay systems. A relay station (RS) is exploited to improve the quality of the received signal in cellular communication systems, especially in shadow areas. However, since an RS requires additional radio resource consumption for the link between the base station (BS) and the RS, we have to carefully control, with the cell throughput in mind, the coverage areas that determine whether a mobile station (MS) is served via the BS or the RS. We also consider radio resource reuse for the sub-cell coverage configuration by applying various reuse patterns between RSs, and a time-varying system in which the coverage threshold changes adaptively with the MSs' traffic in the cell. We initially determine the sub-cell coverage of the system from the ratio of the received signal-to-interference-plus-noise ratios (SINR) of the MS from the BS and the RSs, respectively. Then, the "sub-cell coverage threshold" varies over time based on the "effective transmitted bits per sub-channel". Simulation results show that the proposed "time-varying coverage control algorithm" improves throughput compared to the fixed sub-cell coverage configuration.
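
The association and adaptation logic can be sketched as a threshold test on the SINR difference plus a periodic threshold update. The fixed step and the comparison of effective bits per sub-channel are illustrative assumptions in the spirit of the paper, not its exact rules:

```python
def serve_via_rs(sinr_bs_db, sinr_rs_db, threshold_db):
    """Hand the MS to the relay only when the RS link beats the direct
    BS link by more than the sub-cell coverage threshold."""
    return sinr_rs_db - sinr_bs_db > threshold_db

def adapt_threshold(threshold_db, bits_per_subch_bs, bits_per_subch_rs,
                    step_db=0.5):
    """Grow the RS sub-cell (lower threshold) when relayed links currently
    deliver more effective bits per sub-channel, shrink it otherwise."""
    if bits_per_subch_rs > bits_per_subch_bs:
        return threshold_db - step_db
    return threshold_db + step_db
```

Raising the threshold shrinks the RS sub-cell, which trades relayed-link quality against the BS-RS backhaul resources each relayed MS consumes.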

Adaptive Power Control Dynamic Range Algorithm in WCDMA Downlink Systems (WCDMA 하향 링크 시스템에서의 적응적 PCDR 알고리즘)

  • Jung, Soo-Sung;Park, Hyung-Won;Lim, Jae-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.9A / pp.1048-1057 / 2004
  • WCDMA is a 3rd-generation wireless mobile system specified by 3GPP. In the WCDMA downlink, two power control schemes operate: inner loop power control, performed every slot, and outer loop power control, based on one frame time. The base station (BS) can estimate the proper transmission power through these two power control schemes. However, because each MS's transmission power severely affects the BS's performance, the BS cannot give excessive transmission power to a specific user, and 3GPP defined the Power Control Dynamic Range (PCDR) to guarantee proper BS performance. In this paper, we propose an Adaptive PCDR (APCDR) algorithm, in which the Radio Network Controller (RNC) estimates each MS's current state using the received signal-to-interference ratio (SIR) and changes the MS's maximum code channel power on a frame basis. With the proposed scheme, each MS can reduce wireless channel effects and endure outages at the cell edge, and can therefore obtain better QoS. Simulation results indicate that the APCDR algorithm outperforms the fixed PCDR algorithm.
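
The per-frame APCDR update can be sketched as widening the maximum code-channel power range for an MS whose received SIR is below target and narrowing it otherwise. The step size and range limits below are illustrative assumptions, not values from the paper:

```python
def apcdr_update(pcdr_db, sir_db, sir_target_db,
                 step_db=1.0, pcdr_min_db=15.0, pcdr_max_db=25.0):
    """One frame of a hypothetical adaptive PCDR rule: an MS short of its
    SIR target (e.g. at the cell edge) is allowed a wider dynamic range,
    while an MS with SIR to spare is clamped back down."""
    if sir_db < sir_target_db:
        pcdr_db += step_db
    else:
        pcdr_db -= step_db
    return min(max(pcdr_db, pcdr_min_db), pcdr_max_db)
```

A fixed PCDR corresponds to skipping the update and always returning the same range, which is exactly what the adaptive scheme improves on at the cell edge.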

Factors Influencing the Adoption of Location-Based Smartphone Applications: An Application of the Privacy Calculus Model (스마트폰 위치기반 어플리케이션의 이용의도에 영향을 미치는 요인: 프라이버시 계산 모형의 적용)

  • Cha, Hoon S.
    • Asia Pacific Journal of Information Systems / v.22 no.4 / pp.7-29 / 2012
  • Smartphones and their applications (i.e., apps) are increasingly penetrating consumer markets. According to a recent report from the Korea Communications Commission, nearly 50% of mobile subscribers in South Korea are smartphone users, which amounts to over 25 million people. In particular, the importance of the smartphone has risen as a geospatially aware device that provides various location-based services (LBS) through its GPS capability. Popular LBS include map and navigation, traffic and transportation updates, shopping and coupon services, and location-sensitive social network services. Overall, the emerging location-based smartphone apps (LBA) offer significant value by providing greater connectivity, personalization, and information and entertainment in a location-specific context. Conversely, the rapid growth of LBA and their benefits have been accompanied by concerns over the collection and dissemination of individual users' personal information through ongoing tracking of their location, identity, preferences, and social behaviors. The majority of LBA users tend to agree and consent to the LBA provider's terms and privacy policy on the use of location data in order to get the immediate services. This tendency further increases the potential risks of unprotected exposure of personal information and serious invasions and breaches of individual privacy. To address the complex issues surrounding LBA, particularly from the user's behavioral perspective, this study applied the privacy calculus model (PCM) to explore the factors that influence the adoption of LBA. According to PCM, consumers are engaged in a dynamic adjustment process in which privacy risks are weighed against the benefits of information disclosure.
Consistent with the principal notion of PCM, we investigated how individual users make a risk-benefit assessment in which personalized service and locatability act as benefit-side factors and information privacy risks act as a risk-side factor accompanying LBA adoption. In addition, we consider the moderating role of trust in the service provider on the prohibiting effect of privacy risks on the user's intention to adopt LBA. Further, we include perceived ease of use and perceived usefulness as additional constructs to examine whether the technology acceptance model (TAM) can be applied in the context of LBA adoption. The research model with ten (10) hypotheses was tested using data gathered from 98 respondents through a quasi-experimental survey method. During the survey, each participant was asked to navigate a website where an experimental simulation of an LBA allowed the participant to purchase time- and location-sensitive discounted tickets for nearby stores. Structural equation modeling using partial least squares validated the instrument and the proposed model. The results showed that six (6) out of ten (10) hypotheses were supported. Regarding the core PCM, H2 (locatability ${\rightarrow}$ intention to use LBA) and H3 (privacy risks ${\rightarrow}$ intention to use LBA) were supported, while H1 (personalization ${\rightarrow}$ intention to use LBA) was not supported. Further, we could not find any interaction effects (personalization X privacy risks, H4, and locatability X privacy risks, H5) on the intention to use LBA. In terms of privacy risks and trust, as mentioned above, we found a significant negative influence of privacy risks on intention to use (H3), but a positive influence of trust, which supported H6 (trust ${\rightarrow}$ intention to use LBA). The moderating effect of trust on the negative relationship between privacy risks and intention to use LBA was tested and confirmed by supporting H7 (privacy risks X trust ${\rightarrow}$ intention to use LBA).
The two hypotheses regarding the TAM, H8 (perceived ease of use ${\rightarrow}$ perceived usefulness) and H9 (perceived ease of use ${\rightarrow}$ intention to use LBA), were supported; however, H10 (perceived usefulness ${\rightarrow}$ intention to use LBA) was not supported. The results of this study offer the following key findings and implications. First, the application of PCM was found to be a good analysis framework in the context of LBA adoption. Many of the hypotheses in the model were confirmed, and the high value of $R^2$ (i.e., 51%) indicated a good fit of the model. In particular, locatability and privacy risks were found to be appropriate PCM-based antecedent variables. Second, the existence of a moderating effect of trust in the service provider suggests that the same marginal change in the level of privacy risks may differentially influence the intention to use LBA. That is, while privacy risks increasingly become an important social issue and will negatively influence the intention to use LBA, it is critical for LBA providers to build consumer trust and confidence to successfully mitigate this negative impact. Lastly, we could not find sufficient evidence that the intention to use LBA is influenced by perceived usefulness, which has been very well supported in most previous TAM research. This may suggest that future research should examine the validity of applying TAM, and further extend or modify it, in the context of LBA or other similar smartphone apps.
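
The moderation test behind H7 amounts to a regression with a risk × trust interaction term; the sign pattern can be illustrated on synthetic data. The coefficients and plain OLS below are illustrative stand-ins for the study's PLS estimation, not its actual results:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
risk, trust, locat = rng.normal(size=(3, n))

# Synthetic intentions mimicking the supported sign pattern: H3 (risk
# hurts), H6 (trust helps), H7 (trust dampens the risk effect, i.e. a
# positive interaction), H2 (locatability helps).
intent = (-0.5 * risk + 0.4 * trust + 0.3 * risk * trust
          + 0.35 * locat + 0.1 * rng.normal(size=n))

X = np.column_stack([np.ones(n), risk, trust, risk * trust, locat])
beta, *_ = np.linalg.lstsq(X, intent, rcond=None)
# A negative beta[1] with a positive beta[3] is the H3/H7 signature:
# the marginal risk effect, beta[1] + beta[3]*trust, weakens as trust grows.
```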


Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living environment and industries as a whole. To deliver those services, reduced latency and high reliability are critical for real-time services on top of high data speed. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of $10^6$ devices/km². In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) communications, low delay and high reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straightness, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors; under existing networks it is difficult to overcome these constraints. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, must control the delay-related tree structure available in the event of an emergency during autonomous driving; in these scenarios, the network architecture that handles in-vehicle information is a major variable of the delay.
Since SDNs with a typical centralized structure have difficulty meeting the desired delay level, studies on the optimal size of an SDN for information processing are needed. SDNs therefore need to be split at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, the RTD is not a significant factor because the link is fast enough and contributes less than 1 ms of delay, but the information change cycle and the data processing time of the SDN greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; that is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50~250 m in cell radius and vehicle speeds of 30~200 km/h, in order to examine the network architecture that minimizes the delay.
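
A back-of-the-envelope version of the simulated scenario: the vehicle's dwell time in a small cell bounds how often its serving context changes, and the information change cycle plus SDN processing must fit well inside that window. The straight-through-the-diameter crossing is an illustrative assumption:

```python
def dwell_time_ms(cell_radius_m, speed_kmh):
    """Time for a vehicle to cross a small cell through its diameter;
    the SDN must refresh its information well within this window."""
    speed_mps = speed_kmh / 3.6
    return 1e3 * (2 * cell_radius_m) / speed_mps

# Tightest case in the sweep above: 50 m radius at 200 km/h.
worst = dwell_time_ms(50, 200)   # about 1.8 s per cell
# Most relaxed case: 250 m radius at 30 km/h.
best = dwell_time_ms(250, 30)    # about 60 s per cell
```

Even the tightest case leaves a budget orders of magnitude above the sub-millisecond RTD, which is why the SDN's information change cycle and processing time, not the air-interface round trip, dominate the delay analysis.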