• Title/Summary/Keyword: Computational complexity

Search results: 2,074 (processing time: 0.033 seconds)

A Sequential Estimation Algorithm for TDOA/FDOA Extraction for VHF Communication Signals (VHF 대역 통신 신호에서 TDOA/FDOA 정보 추출을 위한 순차 추정 알고리즘)

  • Kim, Dong-Gyu;Kim, Yong-Hee;Park, Jin-Oh;Lee, Moon Seok;Park, Young-Mi;Kim, Hyoung-Nam
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.7
    • /
    • pp.60-68
    • /
    • 2014
  • In modern electronic warfare systems, the demand for more accurate estimation methods based on TDOA and FDOA has increased. TDOA/FDOA localization consists of two stages: the extraction of information from signals, and the estimation of the emitter location. The CAF (complex ambiguity function) is known as a basic method in the extraction stage. However, when TDOA and FDOA information is extracted from VHF (very high frequency) communication signals, conventional CAF algorithms may not finish within the permitted time because of their heavy computation. Therefore, in this paper, an improved sequential estimation algorithm based on the CAF is proposed to extract TDOA and FDOA estimates efficiently in terms of computational complexity. The proposed method is compared with conventional CAF-based algorithms through simulation. In addition, we derive the optimal performance based on the CRLB (Cramer-Rao lower bound) to check the extraction performance of the proposed method.
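
The brute-force CAF grid search that sequential methods like this aim to accelerate can be sketched as follows. This is a minimal illustration, not the paper's algorithm; the toy-signal setup (`n`, `true_lag`, `true_bin`, the pseudo-random phases) is invented here for demonstration:

```python
import cmath
import random

def caf_surface(s1, s2, max_lag, freq_bins):
    """Brute-force CAF |A(tau, nu)|: correlate s1 against delayed,
    frequency-shifted copies of s2 over a (lag, frequency-bin) grid."""
    n = len(s1)
    surface = {}
    for lag in range(-max_lag, max_lag + 1):
        for k in range(freq_bins):
            nu = k / freq_bins  # normalized frequency shift
            acc = 0j
            for t in range(n):
                if 0 <= t + lag < n:
                    acc += s1[t] * s2[t + lag].conjugate() \
                           * cmath.exp(2j * cmath.pi * nu * t)
            surface[(lag, k)] = abs(acc)
    return surface

# Toy example: s2 is s1 delayed by 3 samples and shifted by bin 5 of 16.
random.seed(1)
n, true_lag, true_bin, bins = 64, 3, 5, 16
s1 = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(n)]
s2 = [0j] * n
for t in range(true_lag, n):
    s2[t] = s1[t - true_lag] * cmath.exp(2j * cmath.pi * (true_bin / bins) * t)

surface = caf_surface(s1, s2, max_lag=8, freq_bins=bins)
peak_lag, peak_bin = max(surface, key=surface.get)  # recovers (3, 5)
```

The triple loop makes the cost O(lags x bins x samples), which is exactly the burden that motivates faster sequential estimators for wideband VHF captures.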

PAPR Reduction Method of OFDM System Using Fuzzy Theory (Fuzzy 이론을 이용한 OFDM 시스템에서 PAPR 감소 기법)

  • Lee, Dong-Ho;Choi, Jung-Hun;Kim, Nam;Lee, Bong-Woon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.21 no.7
    • /
    • pp.715-725
    • /
    • 2010
  • Orthogonal Frequency Division Multiplexing (OFDM) is effective for high-data-rate transmission over frequency-selective fading channels. In this paper we propose a PAPR (Peak-to-Average Power Ratio) reduction method for OFDM systems that uses fuzzy theory, which is commonly applied to machine control. The advantages of using fuzzy theory to reduce PAPR are that the data are easy to manage, the hardware is easy to implement, and the required amount of computation is smaller. We first propose a simple algorithm in which the overall transmitted signal, whose sub-block PAPR has been reduced using fuzzy logic, is reconstructed at the receiver. Although the system requires more computation than a conventional OFDM system and the fuzzy side information must be transmitted separately, the PAPR-reduction performance of the system is clearly enhanced. To evaluate the performance, the proposed algorithm is compared with conventional schemes in terms of the complementary cumulative distribution function (CCDF) of the PAPR and the computational complexity. With QPSK and 16QAM modulation, the fuzzy-theory method reduces the PAPR by 2.3 dB and 3.1 dB, respectively, compared with the existing OFDM system, when the FFT size N = 512 and the oversampling factor is 4, at a CCDF of $10^{-5}$.
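
The PAPR metric that such methods try to reduce is easy to compute directly. The sketch below (not the paper's fuzzy scheme; the naive inverse DFT and the block size are illustrative choices) shows the worst case, where all subcarriers add in phase:

```python
import cmath
import math

def ifft_naive(X):
    """Naive inverse DFT (O(N^2)); fine for a small illustration."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    """Peak-to-average power ratio of a time-domain block, in dB."""
    power = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(power) / (sum(power) / len(power)))

# Worst case: all N subcarriers in phase -> a time-domain impulse,
# giving PAPR = 10*log10(N) (about 18.06 dB for N = 64).
N = 64
worst = papr_db(ifft_naive([1 + 0j] * N))
```

Reduction schemes such as the fuzzy sub-block method in the paper lower this peak by manipulating subcarrier phases before transmission.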

Fuzzy discretization with spatial distribution of data and Its application to feature selection (데이터의 공간적 분포를 고려한 퍼지 이산화와 특징선택에의 응용)

  • Son, Chang-Sik;Shin, A-Mi;Lee, In-Hee;Park, Hee-Joon;Park, Hyoung-Seob;Kim, Yoon-Nyun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.2
    • /
    • pp.165-172
    • /
    • 2010
  • In clinical data mining, choosing the optimal subset of features is important, not only to reduce the computational complexity but also to improve the usefulness of the model constructed from the given data. Moreover, the threshold values (i.e., cut-off points) of the selected features are used in the clinical decision criteria of experts for the differential diagnosis of diseases. In this paper, we propose a fuzzy discretization approach based on the spatial distribution of data with continuous attributes, evaluated by measuring the degree of separation of redundant attribute values in the overlapping region. The weighted average of the redundant attribute values is then used to determine the threshold value for each feature, and rough set theory is utilized to select a subset of relevant features from the overall features. To verify the validity of the proposed method, we compared experimental results on a classification problem involving 668 patients with a chief complaint of dyspnea, using three conventional discretization methods (equal-width, equal-frequency, and entropy-based) and the proposed discretization method. The experimental results confirm that discretization with fuzzy partitions gives better results on two evaluation measures, average classification accuracy and G-mean, than discretization with hard partitions.
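
Two of the baseline discretizers the paper compares against can be sketched in a few lines (the fuzzy discretization itself is not reproduced here; the `ages` sample is invented for illustration):

```python
def equal_width_cuts(values, k):
    """Cut points that split the value range into k equal-width bins."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / k
    return [lo + i * step for i in range(1, k)]

def equal_frequency_cuts(values, k):
    """Cut points that put (roughly) the same number of samples per bin."""
    s = sorted(values)
    n = len(s)
    return [s[i * n // k] for i in range(1, k)]

# Hypothetical continuous attribute (e.g., patient age):
ages = [21, 23, 25, 30, 34, 41, 55, 62, 70, 88]
# equal_width_cuts(ages, 2)     -> [54.5]
# equal_frequency_cuts(ages, 2) -> [41]
```

Both produce hard cut points; the paper's contribution is to soften such thresholds with fuzzy membership where attribute values of different classes overlap.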

Parallel Inverse Transform and Small-sized Inverse Quantization Architectures Design of H.264/AVC Decoder (H.264/AVC 복호기의 병렬 역변환 구조 및 저면적 역양자화 구조 설계)

  • Jung, Hong-Kyun;Cha, Ki-Jong;Park, Seung-Yong;Kim, Jin-Young;Ryoo, Kwang-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.10a
    • /
    • pp.444-447
    • /
    • 2011
  • In this paper, a parallel IT (inverse transform) architecture and an IQ (inverse quantization) architecture with a common operation unit for the H.264/AVC decoder are proposed. By using the common operation unit, the area cost and computational complexity of the IQ are reduced. To perform the IT in four execution cycles, the proposed IT architecture is organized in parallel with one horizontal DCT unit and four vertical DCT units. Furthermore, the execution cycles of the overall architecture are reduced to five by applying a two-stage pipeline. The proposed architecture is implemented as a single chip using MagnaChip 0.18 um CMOS technology. The gate count of the proposed architecture is 14.3k at a clock frequency of 13 MHz, and the area of the proposed IQ is reduced by 39.6% compared with the previous design. The experimental results show that the execution-cycle performance of the proposed architecture is about 49.09% higher than that of the previous one.
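
The 4x4 inverse integer transform core that such IT hardware parallelizes (one butterfly per row, then per column, then the final rounding shift) can be sketched in software as a reference model; this is the standard H.264/AVC core operation, not the paper's hardware design:

```python
def h264_itransform_4x4(coeffs):
    """H.264/AVC 4x4 inverse integer transform core: butterfly the rows,
    then the columns, then apply the (x + 32) >> 6 rounding shift."""
    def butterfly(w0, w1, w2, w3):
        e0 = w0 + w2
        e1 = w0 - w2
        e2 = (w1 >> 1) - w3
        e3 = w1 + (w3 >> 1)
        return e0 + e3, e1 + e2, e1 - e2, e0 - e3

    rows = [list(butterfly(*row)) for row in coeffs]
    out = [[0] * 4 for _ in range(4)]
    for c in range(4):
        col = butterfly(rows[0][c], rows[1][c], rows[2][c], rows[3][c])
        for r in range(4):
            out[r][c] = (col[r] + 32) >> 6
    return out

# A DC-only block: coefficient 64 reconstructs a flat block of ones.
dc_block = [[64, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```

A parallel architecture like the one described above maps the row butterflies to one unit and the four column butterflies to separate units so they run concurrently.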

The Fast Search Algorithm for Raman Spectrum (라만 스펙트럼 고속 검색 알고리즘)

  • Ko, Dae-Young;Baek, Sung-June;Park, Jun-Kyu;Seo, Yu-Gyeong;Seo, Sung-Il
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.16 no.5
    • /
    • pp.3378-3384
    • /
    • 2015
  • The problem of fast search for a Raman spectrum has attracted much attention recently. By far the simplest and most widely used method is to calculate and compare the Euclidean distance between the given spectrum and the spectra in a database. This is a non-trivial problem, however, because of the inherent high dimensionality of the data; one of the most serious issues is the high computational complexity of searching for the closest codeword. To overcome this problem, fast codeword search algorithms based on the mean pyramids of codewords are currently used in image coding applications. In this paper, we present three new methods for fast search of the closest codeword. The proposed algorithm uses two significant features of a vector, the mean value and the variance, to reject many unlikely codewords and save a great deal of computation time. The experimental results show about a 42.8-55.2% performance improvement over 1DMPS+PDS, confirming the effectiveness of the proposed algorithm.
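
The mean/variance rejection idea can be sketched as follows. Both lower bounds follow from decomposing each vector into its mean component and the centered remainder; this is a generic illustration of the pruning principle, not the paper's exact 1DMPS+PDS pipeline, and the random codebook is invented here:

```python
import math
import random

def nearest_codeword(x, codebook):
    """Nearest-neighbor search accelerated by two lower bounds on the
    squared distance: n*(mean difference)^2, tightened by adding
    (centered-norm difference)^2. Codewords whose bound already exceeds
    the current best distance are rejected without a full computation."""
    n = len(x)
    mx = sum(x) / n
    rx = math.sqrt(sum((v - mx) ** 2 for v in x))
    best_idx, best_d = -1, float("inf")
    for idx, c in enumerate(codebook):
        mc = sum(c) / n                      # precomputed offline in practice
        bound = n * (mx - mc) ** 2           # mean-based bound
        if bound >= best_d:
            continue
        rc = math.sqrt(sum((v - mc) ** 2 for v in c))
        bound += (rx - rc) ** 2              # variance-based tightening
        if bound >= best_d:
            continue
        d = sum((a - b) ** 2 for a, b in zip(x, c))
        if d < best_d:
            best_idx, best_d = idx, d
    return best_idx, best_d

random.seed(0)
dim, size = 16, 200
codebook = [[random.random() for _ in range(dim)] for _ in range(size)]
query = [random.random() for _ in range(dim)]
idx, d = nearest_codeword(query, codebook)
```

Because both bounds are true lower bounds, rejection never changes the answer, only the amount of work; the means and centered norms of the codebook would be precomputed once in a real system.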

An Improvement in K-NN Graph Construction using re-grouping with Locality Sensitive Hashing on MapReduce (MapReduce 환경에서 재그룹핑을 이용한 Locality Sensitive Hashing 기반의 K-Nearest Neighbor 그래프 생성 알고리즘의 개선)

  • Lee, Inhoe;Oh, Hyesung;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.11
    • /
    • pp.681-688
    • /
    • 2015
  • The k-nearest-neighbor (k-NN) graph construction is an important operation with many web-related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Despite its many elegant properties, the brute-force k-NN graph construction method has a computational complexity of $O(n^2)$, which is prohibitive for large-scale data sets. Locality sensitive hashing, which is efficient for high-dimensional and sparse data, is therefore increasingly combined with the (key, value)-based distributed framework MapReduce. Following this two-stage strategy, locality sensitive hashing is used to divide users into small subsets, and the similarity between pairs within each subset is then calculated with a brute-force method on MapReduce. The stage that generates the candidate groups is critical, since the brute-force calculation is performed in the following step; however, existing methods do not prevent large candidate groups. In this paper, we propose an efficient algorithm for approximate k-NN graph construction that regroups the candidate groups. Experimental results show that our approach is more effective than existing methods in terms of graph accuracy and scan rate.
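
The bucketing stage can be sketched with sign-of-random-projection hashing; the cap-based splitting below is a crude stand-in for the paper's regrouping step (all parameters and data here are invented for illustration):

```python
import random

def signature(v, planes):
    """Sign-of-projection LSH signature: one bit per random hyperplane."""
    return tuple(1 if sum(a * b for a, b in zip(v, p)) >= 0 else 0
                 for p in planes)

def candidate_groups(vectors, planes, max_size):
    """Hash vectors into buckets, then split any bucket larger than
    max_size so that no brute-force candidate group is too big."""
    buckets = {}
    for i, v in enumerate(vectors):
        buckets.setdefault(signature(v, planes), []).append(i)
    groups = []
    for members in buckets.values():
        for j in range(0, len(members), max_size):
            groups.append(members[j:j + max_size])
    return groups

random.seed(7)
dim, n = 8, 100
vectors = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n)]
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(4)]
groups = candidate_groups(vectors, planes, max_size=10)
```

Capping the group size bounds the cost of the subsequent brute-force pass (each group costs at most max_size^2 comparisons), which is the property uncontrolled candidate groups lack.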

Empirical Mode Decomposition using the Second Derivative (이차 미분을 이용한 경험적 모드분해법)

  • Park, Min-Su;Kim, Donghoh;Oh, Hee-Seok
    • The Korean Journal of Applied Statistics
    • /
    • v.26 no.2
    • /
    • pp.335-347
    • /
    • 2013
  • There are various types of real-world signals. For example, an electrocardiogram (ECG) represents myocardial activity (contraction and relaxation) according to the beating of the heart, and can be expressed as the fluctuation of electrical amplitude over time. A signal is a composite of various component signals: an orchestra consists of a variety of instruments, each with a unique frequency, whose sounds combine to form a harmony. Various studies on how to decompose mixed stationary signals have been conducted, but for non-stationary signals the methodologies for stationary signals are of limited use. Huang et al. (1998) proposed empirical mode decomposition (EMD) to deal with non-stationarity. EMD provides a data-driven approach that decomposes a signal into intrinsic mode functions according to local oscillation, through the identification of local extrema. However, due to the repeated envelope-construction process, the EMD algorithm is inefficient and not robust to noise, and its computational complexity tends to increase as the size of the signal grows. In this research, we propose a new method to extract the local oscillation embedded in a signal by utilizing the second derivative.
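
The two discrete building blocks involved, locating local extrema and the second difference (the discrete analogue of the second derivative), can be sketched as follows; this illustrates the quantities the paper works with, not its full decomposition:

```python
def second_difference(x):
    """Discrete second derivative: x[i-1] - 2*x[i] + x[i+1]."""
    return [x[i - 1] - 2 * x[i] + x[i + 1] for i in range(1, len(x) - 1)]

def local_extrema(x):
    """Indices where the first difference changes sign; EMD's envelope
    construction interpolates through exactly these points."""
    return [i for i in range(1, len(x) - 1)
            if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0]

# A small triangular wave: extrema at indices 1, 3 and 5.
wave = [0, 1, 0, -1, 0, 1, 0]
```

Both operations are a single O(n) pass, which is why a second-derivative criterion avoids the repeated envelope fitting that makes classical EMD sifting expensive.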

MODFLOW or FEFLOW: A Case Study of Groundwater Model Selection for the Upper Waikato Catchment, New Zealand

  • Weir, Julian;Moore, Dr Catherine;Hadfield, John
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2011.05a
    • /
    • pp.14-14
    • /
    • 2011
  • Groundwater in the Waikato region is a valuable resource for agriculture, water supply, forestry and industries. The 434,000 ha study area comprises the upper Waikato River catchment from the outflow of Lake Taupo (New Zealand's largest lake) through to Lake Karapiro (a man-made hydro lake with high recreational value) (Figure 1). Water quality in the area is naturally high. However, there are indications that this quality is deteriorating as a result of land use intensification and deforestation. Compounding this concern for decision makers is the lag time between land use changes and the realisation of effects on groundwater and surface water quality. It is expected that the effects of land use changes have not yet fully manifested, and additional intensification may take decades to fully develop, further compounding the deterioration. Consequently, Environment Waikato (EW) has proposed a programme of work to develop a groundwater model to assist managing water quality and appropriate policy development within the catchment. One of the most important and critical decisions of any modelling exercise is the choice of the modelling platform to be used: it must not inhibit future decision making and scenario exploration, and it needs to allow as accurate a representation of reality as is feasible. With this in mind, EW requested that two modelling platforms, MODFLOW/MT3DMS and FEFLOW, be assessed for their ability to deliver the long-term modelling objectives for this project. The two platforms were compared against various selection criteria, including the complexity of model set-up and development, computational burden, ease and accuracy of representing surface water-groundwater interactions, precision in predictive scenarios, and the ease with which the model input and output files could be interrogated. This last criterion is essential for the thorough assessment of predictive uncertainty with third-party software, such as PEST.
This paper will focus on the attributes of each modelling platform and the comparison of the two approaches against the key criteria in the selection process. Primarily due to the ease of handling and developing input files and interrogating output files, MODFLOW/MT3DMS was selected as the preferred platform. Other advantages and disadvantages of the two modelling platforms were somewhat balanced. A preliminary regional groundwater numerical model of the study area was subsequently constructed. The model simulates steady state groundwater and surface water flows using MODFLOW and transient contaminant transport with MT3DMS, focussing on nitrate nitrogen (as a conservative solute). Geological information for this project was provided by GNS Science. Professional peer review was completed by Dr. Vince Bidwell (of Lincoln Environmental).

Statistical Analysis of Receding Horizon Particle Swarm Optimization for Multi-Robot Formation Control (다개체 로봇 편대 제어를 위한 이동 구간 입자 군집 최적화 알고리즘의 통계적 성능 분석)

  • Lee, Seung-Mok
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.24 no.5
    • /
    • pp.115-120
    • /
    • 2019
  • In this paper, we present a statistical performance analysis of multi-robot formation control based on receding horizon particle swarm optimization (RHPSO). The formation control problem of a multi-robot system can be defined as a constrained nonlinear optimization problem when collision avoidance between robots is considered, and finding the optimal solution of such a problem generally takes a long time. The RHPSO algorithm was proposed to quickly find a suboptimal solution to the optimization problem of multi-robot formation control. The computational complexity of the RHPSO increases with the number of candidate solutions and the number of generations, so it is important to find a usable suboptimal solution for real-time control with minimal candidate solutions and generations. In this paper, we compare the formation error according to the number of candidate solutions and the number of generations. Through numerical simulations under various conditions, the results are analyzed statistically, and the minimum numbers of candidate solutions and generations of the RHPSO algorithm are derived within the allowable control error.
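
A generic global-best PSO loop (not the RHPSO controller itself; the sphere objective and all parameter values below are illustrative choices) shows the candidate-count/generation trade-off the abstract analyzes:

```python
import random

def pso(f, dim, n_particles, n_gens, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm minimizing f over [lo, hi]^dim.
    Cost per run is O(n_particles * n_gens) objective evaluations."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]
    for _ in range(n_gens):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fi
                if fi < gbest_f:
                    gbest, gbest_f = list(xs[i]), fi
    return gbest, gbest_f

sphere = lambda v: sum(x * x for x in v)
best, best_f = pso(sphere, dim=2, n_particles=20, n_gens=50)
```

Shrinking `n_particles` or `n_gens` cuts the per-step cost linearly at the price of a worse suboptimal solution, which is precisely the trade-off the paper quantifies statistically for formation error.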

Deep Learning-based SISR (Single Image Super Resolution) Method using RDB (Residual Dense Block) and Wavelet Prediction Network (RDB 및 웨이블릿 예측 네트워크 기반 단일 영상을 위한 심층 학습기반 초해상도 기법)

  • NGUYEN, HUU DUNG;Kim, Eung-Tae
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.703-712
    • /
    • 2019
  • Single image Super-Resolution (SISR) aims to generate a visually pleasing high-resolution image from its degraded low-resolution measurement. In recent years, deep learning - based super - resolution methods have been actively researched and have shown more reliable and high performance. A typical method is WaveletSRNet, which restores high-resolution images through wavelet coefficient learning based on feature maps of images. However, there are two disadvantages in WaveletSRNet. One is a big processing time due to the complexity of the algorithm. The other is not to utilize feature maps efficiently when extracting input image's features. To improve this problems, we propose an efficient single image super resolution method, named RDB-WaveletSRNet. The proposed method uses the residual dense block to effectively extract low-resolution feature maps to improve single image super-resolution performance. We also adjust appropriated growth rates to solve complex computational problems. In addition, wavelet packet decomposition is used to obtain the wavelet coefficients according to the possibility of large scale ratio. In the experimental result on various images, we have proven that the proposed method has faster processing time and better image quality than the conventional methods. Experimental results have shown that the proposed method has better image quality by increasing 0.1813dB of PSNR and 1.17 times faster than the conventional method.