• Title/Summary/Keyword: Algorithm Complexity

Search Results: 2,993

System Design and Evaluation of Digital Retrodirective Array Antenna for High Speed Tracking Performance (고속 추적 특성을 위한 디지털 역지향성 배열 안테나 시스템 설계와 특성 평가)

  • Kim, So-Ra; Ryu, Heung-Gyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.38A no.8 / pp.623-628 / 2013
  • The retrodirective array antenna system operates faster than existing beamforming techniques because of its lower complexity. It is therefore effective for beam tracking in fast-moving vehicle environments. On the other hand, it has difficulty estimating the AOA (Angle of Arrival) in multipath environments or in the presence of multiuser signals. To improve the reliability of AOA estimation, this article proposes a hybrid digital retrodirective array antenna system combined with the MUSIC algorithm. In this paper, the digital retrodirective array antenna system is designed for the given number of array antennas using only one digital PLL, which finds the angle of the delayed phase. We then evaluate the performance of the digital retrodirective array antenna for high-speed tracking applications. Performance is studied in Simulink with a mobile speed of 300 km/h and a transmitter-receiver distance of 100 m, and the performance of the system is also confirmed in a multipath environment. As a result, the mean AOA error is $4.2^{\circ}$ when the SNR is 10 dB and $1.3^{\circ}$ when the SNR is 20 dB. Consequently, the digital RDA shows very good high-speed tracking performance owing to its simple computation and implementation.
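
For illustration, the phase-conjugation principle behind a retrodirective array (retransmit the conjugate of the phase received at each element so the beam automatically points back toward the source) can be sketched as follows. This is a minimal numpy sketch; the array size, element spacing, and AOA value are assumed for the example and are not parameters from the paper.

```python
import numpy as np

# Uniform linear array: N elements, half-wavelength spacing (illustrative values).
N = 8
d_over_lambda = 0.5
aoa_deg = 25.0                       # assumed angle of arrival of the incoming wave
theta = np.deg2rad(aoa_deg)

# Phase of the incident plane wave at each array element.
n = np.arange(N)
rx_phase = 2 * np.pi * d_over_lambda * n * np.sin(theta)
rx = np.exp(1j * rx_phase)           # received element signals (noise-free)

# Retrodirective principle: retransmit the complex conjugate at every element.
tx = np.conj(rx)

# Array factor of the conjugated transmission over all look directions.
scan = np.deg2rad(np.linspace(-90, 90, 721))
steering = np.exp(1j * 2 * np.pi * d_over_lambda * np.outer(np.sin(scan), n))
pattern = np.abs(steering @ tx)

print("beam peak at %.1f deg (source at %.1f deg)"
      % (np.rad2deg(scan[np.argmax(pattern)]), aoa_deg))
```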

Development of Copycat Harmony Search : Adapting Copycat Scheme for the Improvement of Optimization Performance (모방 화음탐색법의 개발 : 흉내내기에 의한 최적화 성능 향상)

  • Jun, Sang Hoon; Choi, Young Hwan; Jung, Donghwi; Kim, Joong Hoon
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.9 / pp.304-315 / 2018
  • Harmony Search (HS) is a metaheuristic algorithm that is widely known among researchers. However, due to the increasing complexity of optimization problems, HS often cannot find the optimal solution efficiently. To overcome this problem, many studies have improved the performance of HS by modifying its parameter settings or incorporating other metaheuristic algorithms. In this study, Copycat HS (CcHS) is suggested, which improves the parameter setting method and the performance of searching for the optimal solution. To verify the performance of CcHS, its results were compared to those of HS variants on a set of well-known mathematical benchmark problems. The effectiveness of CcHS was demonstrated by its finding final solutions closer to the global optimum than the other algorithms in all problems. To analyze the applicability of CcHS to engineering optimization problems, it was applied to a design problem for Water Distribution Systems (WDS) that is widely used in previous research. As a result, CcHS produced the minimum design cost, which was 21.91% lower than the cost obtained by the simple HS.
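
For reference, a minimal sketch of the basic HS loop (harmony memory, memory considering rate HMCR, pitch adjusting rate PAR, and bandwidth) applied to a simple benchmark is shown below. It is the standard algorithm, not the CcHS variant proposed in the paper, and all parameter values are illustrative.

```python
import numpy as np

def harmony_search(f, dim, lb, ub, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
    """Basic Harmony Search minimizing f over the box [lb, ub]^dim."""
    rng = np.random.default_rng(0)
    hm = rng.uniform(lb, ub, size=(hms, dim))          # harmony memory
    cost = np.array([f(x) for x in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                    # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:                 # pitch adjustment
                    new[j] += bw * (ub - lb) * rng.uniform(-1, 1)
            else:                                      # random selection
                new[j] = rng.uniform(lb, ub)
        new = np.clip(new, lb, ub)
        c = f(new)
        worst = np.argmax(cost)
        if c < cost[worst]:                            # replace the worst harmony
            hm[worst], cost[worst] = new, c
    best = np.argmin(cost)
    return hm[best], cost[best]

# Example: sphere function, whose global optimum is 0 at the origin.
x_best, f_best = harmony_search(lambda x: np.sum(x ** 2), dim=10, lb=-5.0, ub=5.0)
print(f_best)
```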

The Impact of the PCA Dimensionality Reduction for CNN based Hyperspectral Image Classification (CNN 기반 초분광 영상 분류를 위한 PCA 차원축소의 영향 분석)

  • Kwak, Taehong; Song, Ahram; Kim, Yongil
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.959-971 / 2019
  • CNN (Convolutional Neural Network) is a representative deep learning algorithm that can extract high-level spatial and spectral features, and it has been applied to hyperspectral image classification. However, one significant drawback of applying CNNs to hyperspectral images is the high dimensionality of the data, which increases the training time and processing complexity. To address this problem, several CNN-based hyperspectral image classification studies have exploited PCA (Principal Component Analysis) for dimensionality reduction. One limitation of this approach is that the spectral information of the original image can be lost through PCA. Although it is clear that the use of PCA affects the accuracy and the CNN training time, the impact of PCA on CNN-based hyperspectral image classification has been understudied. The purpose of this study is to analyze the quantitative effect of PCA in CNNs for hyperspectral image classification. The hyperspectral images were first transformed through PCA and fed into the CNN model while varying the size of the reduced dimensionality. In addition, 2D-CNN and 3D-CNN frameworks were applied to analyze the sensitivity of PCA with respect to the convolution kernel in the model. Experimental results were evaluated based on classification accuracy, learning time, variance ratio, and the training process. The size of the reduced dimensionality was most efficient when the explained variance ratio reached 99.7%~99.8%. Since, with the 3D kernel, the original-image CNN had higher classification accuracy over the PCA-CNN than with the 2D kernel, the results revealed that dimensionality reduction is relatively less effective for the 3D kernel.
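
The preprocessing step described here (flattening the hyperspectral cube, applying PCA, and keeping enough components to reach a target explained variance ratio) can be sketched as below. This is a minimal illustration assuming scikit-learn's PCA and an in-memory cube; the 99.7% threshold follows the value reported above, while the array shapes are placeholders, not the paper's dataset.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_reduce(cube, variance_target=0.997):
    """Reduce the spectral dimension of an (H, W, B) hyperspectral cube with PCA."""
    H, W, B = cube.shape
    pixels = cube.reshape(-1, B).astype(np.float64)          # one spectrum per row

    # Keep the smallest number of components whose explained variance
    # exceeds the target ratio.
    pca = PCA(n_components=variance_target, svd_solver="full")
    reduced = pca.fit_transform(pixels)                      # shape (H*W, k)

    k = reduced.shape[1]
    print("kept %d of %d bands, explained variance %.4f"
          % (k, B, pca.explained_variance_ratio_.sum()))
    return reduced.reshape(H, W, k)                          # feed this cube to the CNN

# Illustrative random cube standing in for a real hyperspectral image.
cube = np.random.rand(64, 64, 200)
reduced_cube = pca_reduce(cube)
```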

Application of neural network for airship take-off and landing mode by buoyancy control (기낭 부력 제어에 의한 비행선 이착륙의 인공신경망 적용)

  • Chang, Yong-Jin; Woo, Gui-Ae; Kim, Jong-Kwon; Lee, Dae-Woo; Cho, Kyeum-Rae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.33 no.2 / pp.84-91 / 2005
  • For a long time, airship takeoff and landing were controlled by human operators. With the development of autonomous control systems, precise control during takeoff and landing became necessary, and many methods and algorithms have been suggested. This paper presents results for airship takeoff and landing using buoyancy control through air ballonet volume changes, together with pitch-angle control for stable flight at the desired altitude. Because of the complexity of the airship's dynamics, a simple PID controller was applied first. Under varying atmospheric conditions, however, this controller did not give satisfactory results. Therefore, a new control method was designed to rapidly reduce the error between the designed trajectory and the actual trajectory with a learning algorithm based on an artificial neural network. In general, ANNs have various weaknesses, such as long training times and the need to select the numbers of neurons and hidden layers required to deal with complex problems. To overcome these drawbacks, an RBFN (radial basis function network) controller was developed in this paper. The weights of the RBFN are acquired by learning so as to reduce the error between the desired and actual outputs through the airship dynamics, even in the presence of disturbances. Simulation results show that the RBFN-based controller is superior to the PID controller, whose maximum error is 15 m.
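
As a rough illustration of the controller's building block, the following sketches a radial basis function network whose output-layer weights are fitted by least squares. It is a generic RBFN approximator with assumed centers, width, and a toy target function; it is not the airship controller or training scheme described in the paper.

```python
import numpy as np

class RBFN:
    """Gaussian radial basis function network with least-squares output weights."""
    def __init__(self, centers, width):
        self.centers = np.asarray(centers)   # (m, d) basis-function centers
        self.width = width                   # common Gaussian width

    def _phi(self, X):
        # Pairwise squared distances between inputs and centers -> Gaussian activations.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.width ** 2))

    def fit(self, X, y):
        # Solve for output weights minimizing the squared output error.
        Phi = self._phi(np.asarray(X))
        self.w, *_ = np.linalg.lstsq(Phi, np.asarray(y), rcond=None)
        return self

    def predict(self, X):
        return self._phi(np.asarray(X)) @ self.w

# Toy example: approximate a 1-D nonlinear map (a stand-in for a control mapping).
X = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * X[:, 0]) + 0.1 * X[:, 0]
net = RBFN(centers=np.linspace(-1, 1, 15)[:, None], width=0.2).fit(X, y)
print("max fit error:", np.abs(net.predict(X) - y).max())
```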

Extraction of Network Threat Signatures Using Latent Dirichlet Allocation (LDA를 활용한 네트워크 위협 시그니처 추출기법)

  • Lee, Sungil; Lee, Suchul; Lee, Jun-Rak; Youm, Heung-youl
    • Journal of Internet Computing and Services / v.19 no.1 / pp.1-10 / 2018
  • Network threats such as Internet worms and computer viruses have been increasing significantly. In particular, APTs (Advanced Persistent Threats) and ransomware have become clever and complex. IDSes (Intrusion Detection Systems) have played a key role as information security solutions during the last few decades. To use an IDS effectively, IDS rules must be written properly. An IDS rule includes a key signature and is incorporated into an IDS, so that a network threat containing the signature can be detected as it passes through the IDS. However, it is challenging to find a key signature for a specific network threat. We first need to analyze the network threat rigorously and write a proper IDS rule based on the analysis result. If we use a signature that is also common to benign and/or normal network traffic, we will observe a lot of false alarms. In this paper, we propose a scheme that analyzes a network threat and extracts key signatures corresponding to the threat. Specifically, our proposed scheme quantifies the degree of correspondence between a network threat and a signature using the LDA (Latent Dirichlet Allocation) algorithm. A signature that has significant correspondence to the network threat can then be utilized as an IDS rule for detecting the threat.
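
A minimal sketch of the core idea, tokenizing payloads, fitting LDA, and reading off the tokens most strongly associated with a topic as candidate signatures, is shown below. It assumes scikit-learn's LatentDirichletAllocation and toy payload strings; the actual tokenization and correspondence scoring used in the paper may differ.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy payloads standing in for captured traffic of a network threat.
payloads = [
    "GET /admin.php cmd=download exe payload",
    "GET /admin.php cmd=exec shell payload",
    "POST /login user=guest password=guest",
    "GET /index.html accept text html",
]

# Token counts per payload (the token pattern is an illustrative choice).
vectorizer = CountVectorizer(token_pattern=r"[A-Za-z0-9_./=-]+")
X = vectorizer.fit_transform(payloads)

# Fit LDA; each topic is a probability distribution over tokens.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Tokens with the highest weight in each topic are candidate signatures.
terms = vectorizer.get_feature_names_out()
for t, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:5]
    print("topic %d signature candidates:" % t, [terms[i] for i in top])
```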

Effects of Spatial Resolution on PSO Target Detection Results of Airplane and Ship (항공기와 선박의 PSO 표적탐지 결과에 공간해상도가 미치는 영향)

  • Yeom, Jun Ho; Kim, Byeong Hee; Kim, Yong Il
    • Journal of Korean Society for Geospatial Information Science / v.22 no.1 / pp.23-29 / 2014
  • The emergence of high-resolution satellite images and the improvement of spatial resolution have facilitated various studies using such imagery. Above all, target detection algorithms are effective for traffic-flow monitoring and for military surveillance and reconnaissance, because vehicles, airplanes, and ships over a broad area can be detected easily in high-resolution satellite images. Recently, many satellites have been launched around the world, and the diversity of satellite images has also increased. In contrast, comparative studies on spatial resolution, especially for target detection, remain insufficient both domestically and abroad. Therefore, in this study, the effects of spatial resolution on target detection are analyzed using the PSO target detection algorithm. Resampling techniques such as nearest neighbor, bilinear, and cubic convolution are adopted to resample the original image to 0.5 m, 1 m, 2 m, and 4 m spatial resolutions. The accuracy of target detection is then assessed according to both the spatial resolution and the resampling method. As a result of the study, the 0.5 m resolution and, among the resampling methods, nearest neighbor gave the best accuracy. Additionally, resolutions of 2 m and 4 m are required for the detection of airplanes and ships, respectively. Airplane detection needs a higher spatial resolution than ship detection because of the greater complexity of airplane shapes. This research suggests appropriate spatial resolutions for airplane and ship target detection and contributes to criteria for satellite sensor design.
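
The resampling step, degrading an image to coarser ground sample distances with different interpolation kernels, can be sketched as below, here using OpenCV's resize with nearest-neighbor, bilinear, and cubic interpolation. The random input array and the assumed 0.5 m native resolution are placeholders for the study's actual imagery.

```python
import numpy as np
import cv2

def degrade(image, native_gsd, target_gsd, interpolation):
    """Resample an image from its native GSD (m/pixel) to a coarser target GSD."""
    scale = native_gsd / target_gsd
    h, w = image.shape[:2]
    return cv2.resize(image, (int(w * scale), int(h * scale)),
                      interpolation=interpolation)

# Stand-in for a 0.5 m panchromatic scene.
scene = (np.random.rand(1000, 1000) * 255).astype(np.uint8)

methods = {"nearest": cv2.INTER_NEAREST,
           "bilinear": cv2.INTER_LINEAR,
           "cubic": cv2.INTER_CUBIC}

for gsd in (1.0, 2.0, 4.0):
    for name, interp in methods.items():
        resampled = degrade(scene, native_gsd=0.5, target_gsd=gsd, interpolation=interp)
        print("GSD %.1f m, %s -> shape %s" % (gsd, name, resampled.shape))
```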

Symbol Timing Alignment and Combining Technique in Rake Receiver for cdma2000 Systems (cdma2000 시스템용 레이크 수신기에서의 심볼 정렬 및 컴바이닝 기법)

  • Lee, Seong-Ju; Kim, Jae-Seok; Eo, Ik-Su; Kim, Gyeong-Su
    • Journal of the Institute of Electronics Engineers of Korea TC / v.39 no.1 / pp.34-41 / 2002
  • In the conventional rake receiver structure for the IS-95 CDMA system, each finger has its own time-deskew buffer, or FIFO, that aligns the multipath signals to the same timing reference in order to combine symbols. This architecture is not a burden to the rake receiver design, mainly because of the small number and size of the buffers. However, the number and size of the buffers increase significantly in the cdma2000 system, which adopts multiple carriers and a small spreading gain for higher-rate data services. In order to decrease the number of buffers, we propose a new model of the time-deskew buffer that combines the symbols and realigns them at the same time. Our architecture reduces the hardware complexity of the buffers by more than about 60% and 70% compared with the conventional one when each rake receiver has three and four independent fingers, respectively. Moreover, the proposed algorithm is useful not only for the cdma2000 rake receiver but also for receivers with many fingers used to improve BER performance.
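
The idea of realigning and combining finger outputs in a single shared buffer, rather than buffering each finger separately before maximal-ratio combining, can be sketched behaviorally as follows. The delays, channel weights, and symbol stream are illustrative assumptions; this is not a model of the paper's hardware architecture.

```python
import numpy as np

# Behavioral sketch of a combined time-deskew/combining buffer.
rng = np.random.default_rng(0)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=100)  # transmitted QPSK
finger_delays = [0, 1, 2]          # assumed path delays in symbol periods
finger_weights = [0.9, 0.6, 0.3]   # assumed channel gains seen by each finger

N = len(symbols)
buffer = np.zeros(N, dtype=complex)    # one shared buffer, indexed by symbol number

for delay, weight in zip(finger_delays, finger_weights):
    arrival_time = np.arange(delay, delay + N)   # when each despread symbol appears
    finger_out = weight * symbols                # this finger's despread output
    # Deskew and combine in one step: write each value at its symbol index
    # (arrival time minus this finger's delay), weighted for maximal-ratio combining.
    buffer[arrival_time - delay] += np.conj(weight) * finger_out

# Every buffer entry now holds the combined symbol; the gain is the sum of |weight|^2.
print("combined gain:", np.round(abs(buffer[0] / symbols[0]), 3))
```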

Total Degradation Performance Evaluation of the Time- and Frequency-Domain Clipping in OFDM Systems (OFDM 시스템에서 시간 및 주파수 영역 클리핑의 Total Degradation 성능평가)

  • Han, Chang-Sik; Seo, Man-Jung; Im, Sung-Bin
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.7 s.361 / pp.17-22 / 2007
  • OFDM (Orthogonal Frequency Division Multiplexing) is a special case of multicarrier transmission, where a single data stream is transmitted over a number of lower-rate subcarriers. One of the main reasons to use OFDM is to increase robustness against frequency-selective fading or narrowband interference. Unfortunately, an OFDM signal consists of a number of independently modulated subcarriers, which can produce a large PAPR (Peak-to-Average Power Ratio) when added up coherently. In this paper, we investigate the performance of a simple PAPR reduction scheme that requires neither a change to the receiver structure nor the transmission of additional information. The approach we employ is clipping in the time and frequency domains. The time-domain clipping is carried out with a predetermined clipping level, while the frequency-domain clipping is done within the EVM (Error Vector Magnitude) limit. This approach is suboptimal but has lower computational complexity than the optimal method. The evaluation is carried out on an OFDM system with a nonlinear amplifier. The simulation results demonstrate that the PAPR reduction algorithm is one way to reduce the effects of the nonlinear distortion of an HPA (High Power Amplifier).
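
The time-domain part of the scheme, clipping the magnitude of the OFDM signal at a predetermined level while preserving the phase of each sample, can be sketched as below. The number of subcarriers, oversampling factor, and clipping ratio are illustrative choices, and the frequency-domain EVM-constrained clipping is omitted.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(1)
n_sc, oversample = 256, 4

# One OFDM symbol: random QPSK on every subcarrier, oversampled by zero-padding
# the middle of the spectrum before the IFFT.
qpsk = (rng.choice([-1.0, 1.0], n_sc) + 1j * rng.choice([-1.0, 1.0], n_sc)) / np.sqrt(2)
spectrum = np.zeros(n_sc * oversample, dtype=complex)
spectrum[:n_sc // 2] = qpsk[:n_sc // 2]
spectrum[-n_sc // 2:] = qpsk[n_sc // 2:]
x = np.fft.ifft(spectrum)

# Time-domain clipping: limit the envelope to a predetermined level while
# keeping the phase of every sample unchanged.
clip_level = 1.4 * np.sqrt(np.mean(np.abs(x) ** 2))   # illustrative clipping ratio
mag = np.maximum(np.abs(x), 1e-12)
clipped = x * np.minimum(1.0, clip_level / mag)

print("PAPR before: %.2f dB, after: %.2f dB" % (papr_db(x), papr_db(clipped)))
```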

Unified Design Methodology and Verification Platform for Giga-scale System on Chip (기가 스케일 SoC를 위한 통합 설계 방법론 및 검증 플랫폼)

  • Kim, Jeong-Hun
    • Journal of the Institute of Electronics Engineers of Korea SD / v.47 no.2 / pp.106-114 / 2010
  • We propose a unified design methodology and verification platform for giga-scale System on Chip (SoC). With the growth of VLSI integration, the existing RTL design methodology faces a production gap as design complexity increases, and the verification methodology must evolve to overcome the verification gap. The proposed platform includes high-level synthesis, and we develop a power-aware verification platform for low-power design and verification automation using its results. We developed a verification automation and power-aware verification methodology based on the control and data flow graph (CDFG), an abstract-level language, and RTL. The verification platform includes self-checking and a coverage-driven verification methodology. In particular, the number of random vectors decreases by at least a factor of 5.75 with the constrained random vector algorithm developed for power-aware verification. The platform can also verify a low-power design with a general logic simulator using a power and power-cell modeling method. This unified design and verification platform allows a giga-scale design to be automatically designed, synthesized, and verified from the system level down to RTL across the whole design flow.

Distributed Multi-channel Assignment Scheme Based on Hops in Wireless Mesh Networks (무선 메쉬 네트워크를 위한 홉 기반 분산형 다중 채널 할당 방안)

  • Kum, Dong-Won; Choi, Jae-In; Lee, Sung-Hyup; Cho, You-Ze
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.5 / pp.1-6 / 2007
  • In wireless mesh networks (WMNs), the end-to-end throughput of a flow decreases drastically with the number of traversed hops, due to interference among different hops of the same flow in addition to interference between hops of different flows on different paths. This paper proposes a distributed multi-channel assignment scheme based on hops (DMASH) to improve the performance of a static WMN. DMASH is a novel distributed, hop-based multi-channel assignment scheme that enhances end-to-end throughput by reducing interference between channels when transmitting packets in IEEE 802.11 based multi-interface environments. In the channel assignment phase, DMASH assigns a channel group to each hop count from the gateway such that adjacent hops do not interfere, and each node then selects its channel randomly from its channel group. Since DMASH is a distributed scheme with unmanaged, auto-configured channel assignment, it has less overhead and lower algorithmic implementation complexity than centralized multi-channel assignment schemes. Simulation results using NS-2 showed that DMASH can remarkably improve total network throughput in multi-hop environments compared with a random channel assignment scheme.
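
The hop-based assignment idea, mapping each hop count from the gateway to a channel group so that adjacent hops draw from disjoint groups and letting each node pick randomly inside its group, can be sketched as follows. The number of channels, the group layout, and the example topology are illustrative assumptions rather than the exact DMASH procedure.

```python
import random

# Illustrative assumption: 12 orthogonal channels split into 3 disjoint groups,
# so hops i and i+1 always draw from different groups.
CHANNEL_GROUPS = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]

def channel_group_for_hop(hop_count):
    """Adjacent hop counts map to different channel groups."""
    return CHANNEL_GROUPS[hop_count % len(CHANNEL_GROUPS)]

def assign_channel(node_id, hop_count):
    """Each node picks its own channel at random within its hop's group."""
    rng = random.Random(node_id)          # independent, distributed decision
    return rng.choice(channel_group_for_hop(hop_count))

# Nodes described by (id, hop count from the gateway).
nodes = [("gw", 0), ("a", 1), ("b", 1), ("c", 2), ("d", 3)]
for node_id, hops in nodes:
    print(node_id, "hop", hops, "-> channel", assign_channel(node_id, hops))
```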