• Title/Summary/Keyword: Code Complexity

592 search results

Absolute Atmospheric Correction Procedure for the EO-1 Hyperion Data Using MODTRAN Code

  • Kim, Sun-Hwa;Kang, Sung-Jin;Chi, Jun-Hwa;Lee, Kyu-Sung
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.1
    • /
    • pp.7-14
    • /
    • 2007
  • Atmospheric correction is one of the critical procedures for extracting quantitative information related to biophysical variables from hyperspectral imagery. Most atmospheric correction algorithms developed for hyperspectral data have been based on atmospheric radiative transfer (RT) codes such as MODTRAN. Because of the difficulty of acquiring atmospheric data at the time of image capture, the complexity of the RT model, and the large volume of hyperspectral data, atmospheric correction can be a very difficult and time-consuming process. In this study, we attempted to develop an efficient method for the atmospheric correction of EO-1 Hyperion data. The method uses a pre-calculated look-up table (LUT) for fast and simple processing. The LUT was generated by successive runs of the MODTRAN model with several input parameters related to solar and sensor geometry, the radiometric specification of the sensor, and atmospheric conditions. An atmospheric water vapour content image was generated directly from a few absorption bands of the Hyperion data themselves and used as one of the input parameters. The new atmospheric correction method was tested on Hyperion data acquired on June 3, 2001 over the Seoul area. On the atmospherically corrected Hyperion image, reflectance spectra of several known targets corresponded with their typical spectral reflectance patterns, although further improvement to reduce sensor noise is necessary.
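The LUT idea above can be illustrated with a minimal sketch. The gain/offset values and the single water-vapour axis below are hypothetical stand-ins for the paper's multi-parameter MODTRAN-derived table; only the mechanism (interpolate between pre-computed nodes, then apply a linear radiance-to-reflectance conversion) is what the abstract describes.

```python
# Sketch of a LUT-based atmospheric correction step (hypothetical values).
# MODTRAN runs are assumed to have been reduced to per-band (gain, offset)
# pairs indexed by column water vapour; radiance is converted to surface
# reflectance by linear interpolation between the two nearest LUT nodes.

from bisect import bisect_left

# Hypothetical LUT: water vapour (g/cm^2) -> (gain, offset) for one band.
LUT = {
    1.0: (0.00210, -0.012),
    2.0: (0.00225, -0.015),
    3.0: (0.00244, -0.019),
}

def correct(radiance, wv):
    """Interpolate the LUT at water vapour `wv`, then convert radiance to reflectance."""
    nodes = sorted(LUT)
    wv = min(max(wv, nodes[0]), nodes[-1])      # clamp to the LUT range
    i = max(1, bisect_left(nodes, wv))
    lo, hi = nodes[i - 1], nodes[i]
    t = 0.0 if hi == lo else (wv - lo) / (hi - lo)
    gain = LUT[lo][0] + t * (LUT[hi][0] - LUT[lo][0])
    offset = LUT[lo][1] + t * (LUT[hi][1] - LUT[lo][1])
    return gain * radiance + offset

print(round(correct(100.0, 2.5), 4))
```

Because every MODTRAN run is done offline, the per-pixel cost at correction time is just a table lookup and one multiply-add, which is the source of the speed-up the abstract claims.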

A Study on Performance Improvement of Mobile Rake Finger System for the IMT-2000 (IMT-2000을 위한 이동국 Rake Finger 시스템 성능개선에 관한 연구)

  • 정우열;이선근
    • Journal of the Korea Society of Computer and Information
    • /
    • v.7 no.3
    • /
    • pp.135-142
    • /
    • 2002
  • In this paper, we propose a new Rake Finger structure using a Walsh switch, a shared accumulator, and a pipelined FWHT algorithm to reduce the signal-processing complexity caused by the growing number of data correlators. The number of computational operations in the proposed data correlator is 160 additions when the number of Walsh code channels is 4, about 3.2 times fewer than in the conventional correlators. The results also show that the data processing time of the proposed Rake Finger architecture is 90,496 ns versus 110,696 ns for the conventional architecture, which is 18.3% faster.

  • PDF
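The complexity reduction rests on the fast Walsh-Hadamard transform. A minimal sketch (not the paper's pipelined hardware design) shows the core trick: one N-point FWHT correlates the input with all N Walsh codes at once using N·log2(N) additions rather than N·N.

```python
# Minimal sketch of the fast Walsh-Hadamard transform (FWHT) underlying the
# shared-correlator idea: the butterfly stages below compute the correlation
# of the input with every Walsh code simultaneously.

def fwht(x):
    """Butterfly FWHT; input length must be a power of two."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

# Correlation with Walsh code (+1, -1, +1, -1) appears in bin 1:
print(fwht([1, -1, 1, -1]))  # -> [0, 4, 0, 0]
```

In a Rake finger this means the four (or more) Walsh-channel correlators can share one adder network, which is where the reduction to 160 additions for 4 channels comes from.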

Cohort-based evacuation time estimation using TSIS-CORSIM

  • Park, Sunghyun;Sohn, Seokwoo;Jae, Moosung
    • Nuclear Engineering and Technology
    • /
    • v.53 no.6
    • /
    • pp.1979-1990
    • /
    • 2021
  • An Evacuation Time Estimate (ETE) provides decision-makers with a basis for implementing the evacuation of a population at risk of radiation exposure from a nuclear power plant, and is therefore essential for developing emergency response preparedness. However, ETE studies have not been conducted adequately in Korea to date. In this study, different cohorts were selected based on assumptions. Existing local data were collected to construct a multi-model network with the TSIS-CORSIM code. Several links were aggregated to simplify the calculations, and post-processing was conducted to deal with the stochastic behavior of TSIS-CORSIM. The average speed of each cohort was calculated through the link aggregation and post-processing, and the evacuation time was estimated. As a result, the average cohort-based evacuation time was estimated as 2.4-6.8 h, and the average clearance time from ten simulations over 26 km was calculated as 27.3 h. Through this study, uncertainty factors affecting ETE results, such as cohort classification, the degree of model complexity, and traffic volume outside the network, were identified. Further studies of these factors will be needed to improve the ETE methodology and the reliability of ETE results.
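The post-processing step described above can be sketched as follows. The route length matches the 26 km mentioned in the abstract, but the per-run speeds and the cohort are illustrative values, not the paper's simulation outputs.

```python
# Hypothetical post-processing sketch: average a cohort's network speed over
# several stochastic TSIS-CORSIM runs, then estimate evacuation time over an
# aggregated route length. The speeds below are illustrative only.

ROUTE_KM = 26.0

def evac_time_h(speeds_kmh):
    """Travel time over the route at the mean of the per-run average speeds."""
    mean_speed = sum(speeds_kmh) / len(speeds_kmh)
    return ROUTE_KM / mean_speed

runs = [4.2, 3.6, 3.9, 4.1]          # average network speed per run, km/h
print(round(evac_time_h(runs), 2))
```

Averaging over repeated runs is what tames the stochastic variation of the microsimulator; the abstract's 27.3 h clearance time is likewise an average over ten runs.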

Additional degree of freedom in phased-MIMO radar signal design using space-time codes

  • Vahdani, Roholah;Bizaki, Hossein Khaleghi;Joshaghani, Mohsen Fallah
    • ETRI Journal
    • /
    • v.43 no.4
    • /
    • pp.640-649
    • /
    • 2021
  • In this paper, an additional degree of freedom in phased multi-input multi-output (phased-MIMO) radar with any arbitrary desired covariance matrix is proposed using space-time codes. With the proposed method, any desired transmit covariance matrix in MIMO (phased-MIMO) radar can be realized by employing fully correlated base waveforms, as in phased-array radars, and simply extending them to different time slots with predesigned phases and amplitudes. In the proposed method, the transmit covariance matrix depends on the base waveform and the space-time codes. For simplicity, the base waveform can be selected arbitrarily (i.e., all base waveforms can be fully correlated, similar to phased-array radars). Therefore, any desired covariance matrix can be achieved using a very simple phased-array structure and a space-time code in the transmitter. The main advantage of the proposed scheme is that it does not require diverse uncorrelated waveforms, which considerably reduces transmitter hardware and software complexity and cost. On the receiver side, multiple signals can be analyzed jointly in the time and space domains to improve the signal-to-interference-plus-noise ratio.
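A small sketch of the underlying identity: when every antenna transmits the same base waveform, weighted per time slot by a space-time code matrix A (antennas × slots), the transmit covariance reduces to A·Aᴴ up to a power scale. The 2×2 code below is an illustrative orthogonal example, not a matrix from the paper.

```python
# Sketch: with one shared base waveform s(t) weighted by a space-time code
# matrix A, the transmit covariance is R = A @ A^H (times waveform power),
# so R is shaped entirely by the code, not by waveform diversity.

def conj_t(M):
    """Conjugate transpose of a matrix given as a list of rows."""
    return [[M[r][c].conjugate() for r in range(len(M))] for c in range(len(M[0]))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Hypothetical 2-antenna orthogonal code over 2 time slots.
A = [[1, 1],
     [1, -1]]

R = matmul(A, conj_t(A))     # transmit covariance, up to a power scale
print(R)                     # -> [[2, 0], [0, 2]]
```

Choosing a different A (with complex, correlated columns) yields any desired positive semidefinite R, which is the "additional degree of freedom" the abstract refers to.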

Issues and Improvements on the Country of Origin Labeling System for Consumer Protection in Korea (소비자보호를 위한 한국 원산지표시제도의 문제점과 개선방안)

  • Jin, Byung-Jin;Lim, Byeong-Ho
    • Korea Trade Review
    • /
    • v.44 no.2
    • /
    • pp.143-157
    • /
    • 2019
  • The purpose of this study is to review the domestic and foreign country-of-origin labeling systems from the perspective of protecting consumers' interests, and to suggest governmental improvements by analyzing the problems embedded in the current labeling system. The analysis shows the complexity of the related legal system, a lack of expertise at the labeling stage, and the inefficiency of the enforcement authority. Improvements are suggested in two directions: support measures for those who bear the labeling duty, and improvements to the origin-management system. As support measures, we suggest an automatic origin-determination system, appropriate education for origin stakeholders, and the introduction of an origin certification system. As system improvements, we suggest unifying the laws related to country-of-origin labeling, utilizing FTA product-specific rules and QR codes, and introducing an expert confirmation system. Since the origin labeling issue has become important, proactive and quick responses must follow, together with a thorough examination of the effect of origin labeling on consumer welfare.

Channel Decoding Scheme in Digital Communication Systems (디지털 통신 시스템의 채널 복호 방식)

  • Shim, Yong-Geol
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.3
    • /
    • pp.565-570
    • /
    • 2021
  • A soft-decision decoding scheme is proposed for a channel code that corrects errors occurring at the receiver of a digital communication system. A method for efficient decoding that exploits the linear and arithmetic structure of linear block codes is presented. In this way, the probability of decoding error is reduced, and the complexity of decoding can be reduced as well. Sufficient conditions for achieving optimal decoding have been derived; these conditions enable an efficient search for candidate codewords. With the proposed decoding scheme, decoding can be performed effectively while lowering the block error probability.
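Soft-decision decoding of a linear block code can be sketched with a brute-force maximum-likelihood decoder, which is exact for small codes: pick the codeword whose BPSK image best correlates with the received soft values. This is a baseline illustration of the concept, not the paper's reduced-complexity candidate-search scheme.

```python
# Sketch of soft-decision ML decoding of a (7,4) Hamming code: choose the
# message whose codeword (mapped to +1/-1) maximizes correlation with the
# received soft values. Brute force over all 2^4 messages is exact here.

from itertools import product

# (7,4) Hamming generator matrix in systematic form.
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(msg):
    return [sum(m * g for m, g in zip(msg, col)) % 2
            for col in zip(*G)]

def decode_soft(y):
    """Return the message whose codeword best correlates with soft values y."""
    return max((list(m) for m in product([0, 1], repeat=4)),
               key=lambda m: sum((1 - 2 * b) * v for b, v in zip(encode(m), y)))

# Transmit message 1011 as BPSK, then weaken one sample toward the wrong sign:
tx = encode([1, 0, 1, 1])
y = [(1 - 2 * b) * 1.0 for b in tx]
y[2] = -0.3 * y[2]          # mildly corrupted sample
print(decode_soft(y))       # -> [1, 0, 1, 1]
```

A hard-decision decoder would round y[2] to the wrong bit first; the soft decoder keeps its low reliability and still recovers the message, which is why soft decision lowers the block error probability.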

Analysis and Comparison of Sorting Algorithms (Insertion, Merge, and Heap) Using Java

  • Khaznah, Alhajri;Wala, Alsinan;Sahar, Almuhaishi;Fatimah, Alhmood;Narjis, AlJumaia;Azza., A.A
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.12
    • /
    • pp.197-204
    • /
    • 2022
  • Sorting is an important operation in many real-world applications, and several sorting algorithms are currently in use for searching and other tasks. Sorting algorithms rearrange the elements of an array or list using a comparison operator, which establishes the new order of the elements in the data structure. This report analyzes and compares, both theoretically and experimentally, the time complexity and running time of the insertion, merge, and heap sort algorithms. The algorithms were implemented in Java using the NetBeans IDE. The results show that when dealing with already-sorted elements, insertion sort runs faster than the merge and heap algorithms, while merge sort is the better choice for large numbers of elements. In terms of the number of comparisons, insertion sort has the highest count.
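The paper's finding that insertion sort wins on sorted input but loses in comparison count can be reproduced with a small instrumented sketch (in Python here rather than the paper's Java): on sorted input insertion sort needs only n−1 comparisons, while on reversed input it degrades to n(n−1)/2.

```python
# Instrumented insertion sort: returns the sorted list and the number of
# element comparisons, illustrating best-case O(n) vs worst-case O(n^2).

def insertion_sort(a):
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]   # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

print(insertion_sort(range(10))[1])         # sorted input: 9 comparisons
print(insertion_sort(range(9, -1, -1))[1])  # reversed input: 45 comparisons
```

Merge and heap sort perform ~n·log2(n) comparisons regardless of input order, which is why they overtake insertion sort as n grows even though insertion sort is fastest on nearly-sorted data.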

R-S Decoder Design for Single Error Correction and Erasure Generation (단일오류 정정 및 Erasure 발생을 위한 R-S 복호기 설계)

  • Kim, Yong Serk;Song, Dong Il;Kim, Young Woong;Lee, Kuen Young
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.23 no.5
    • /
    • pp.719-725
    • /
    • 1986
  • The Reed-Solomon (R-S) code is very effective for correcting both random and burst errors over a noisy communication channel. However, the required hardware is very complex if the Berlekamp-Massey (B/M) algorithm is employed. Moreover, when the error-correction system consists of two R-S decoders and a de-interleaver, the I/O data bus becomes 9 bits wide because of the erasure flag bit, which further increases the hardware complexity. This paper describes an R-S decoder that consists of an error-correction section using a direct decoding algorithm and an erasure-generation section that does not use the erasure flag bit. It is shown that the proposed R-S decoder is very effective in reducing the hardware required for error correction.

  • PDF
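The direct (non-B/M) decoding idea for a single error can be sketched over a small field. This is a textbook syndrome construction over GF(16), shown here only to illustrate why single-error correction needs no iterative algorithm; the paper's hardware design is not reproduced.

```python
# Sketch of syndrome-based single-error correction for a (shortened)
# Reed-Solomon code over GF(16), primitive polynomial x^4 + x + 1.
# With parity roots a and a^2: S1 = r(a), S2 = r(a^2). A single error of
# magnitude e at position i gives S1 = e*a^i and S2 = e*a^(2i), so the
# location is a^i = S2/S1 and the magnitude is e = S1^2/S2 -- no
# Berlekamp-Massey iteration is needed.

EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):                 # build antilog/log tables for GF(16)
    EXP[i] = EXP[i + 15] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_div(a, b):
    return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

def poly_eval(p, v):
    """Evaluate polynomial p (highest-degree coefficient first) at v."""
    acc = 0
    for c in p:
        acc = gf_mul(acc, v) ^ c
    return acc

def correct_single(r):
    s1, s2 = poly_eval(r, EXP[1]), poly_eval(r, EXP[2])
    if s1 == 0 and s2 == 0:
        return list(r)                      # no error detected
    pos = LOG[gf_div(s2, s1)]               # error position (power of a)
    mag = gf_div(gf_mul(s1, s1), s2)        # error magnitude
    out = list(r)
    out[len(r) - 1 - pos] ^= mag            # fix coefficient of x^pos
    return out                              # (a real decoder would also
                                            #  flag uncorrectable patterns)

# Valid codeword c(x) = (x + a)(x + a^2)(x + 1); corrupt one symbol.
c = [1, 7, 14, 8]
r = list(c)
r[1] ^= 5                                   # inject a single symbol error
print(correct_single(r) == c)               # -> True
```

The closed-form location/magnitude computation is what makes a "direct" single-error decoder so much smaller in hardware than a general B/M-based decoder.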

An Enhanced Function Point Model for Software Size Estimation: Micro-FP Model (소프트웨어 규모산정을 위한 기능점수 개선 Micro-FP 모형의 제안)

  • Ahn, Yeon-S.
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.12
    • /
    • pp.225-232
    • /
    • 2009
  • The function point method has been applied to measure software size in industry because it estimates a software system's size from the user's view rather than the developer's. However, the current function point method has some problems, such as the upper limit on complexity weights. In this paper, an enhanced function point model, the Micro-FP model, is suggested. With this model, software size can be estimated more efficiently because the model incorporates regression equations, and it can be applied in particular to estimate the size of large application systems in detail. Analysis results for 10 applications operated in a large organization show that software size measured by the Micro-FP model correlates better with LOC.
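The general shape of such a model can be sketched as a weighted component count followed by a fitted regression. The component weights below are the standard IFPUG average weights, but the regression coefficients are purely illustrative, not the Micro-FP model's calibrated values.

```python
# Sketch of a regression-adjusted function point estimate: raw FP from
# weighted component counts, then a hypothetical linear model mapping FP
# to LOC. Coefficients a and b are illustrative, not the paper's.

# IFPUG-style average weights per component type.
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    return sum(WEIGHTS[k] * n for k, n in counts.items())

def estimated_loc(fp, a=55.0, b=120.0):
    """Hypothetical fitted model: LOC ~= a * FP + b."""
    return a * fp + b

counts = {"EI": 10, "EO": 6, "EQ": 4, "ILF": 3, "EIF": 2}
fp = unadjusted_fp(counts)
print(fp, estimated_loc(fp))
```

Replacing fixed complexity weights with regression-fitted coefficients is the kind of refinement that lets a model track LOC more closely than classic function points with capped weights.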

Low Computational FFT-based Fine Acquisition Technique for BOC Signals

  • Kim, Jeong-Hoon;Kim, Binhee;Kong, Seung-Hyun
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.11 no.1
    • /
    • pp.11-21
    • /
    • 2022
  • Fast Fourier transform (FFT)-based parallel acquisition techniques with reduced computational complexity have been widely used for the acquisition of binary phase shift keying (BPSK) global positioning system (GPS) signals. In this paper, we propose a low-computation FFT-based fine acquisition technique for binary offset carrier (BOC) modulated signals that, depending on the subcarrier-to-code chip rate ratio (SCR), selectively utilizes the computationally efficient frequency-domain realization of the BPSK-like technique or the BOC two-dimensional compressed correlator (BOC-TDCC) technique in the first stage to achieve fast coarse acquisition, and then accomplishes fine acquisition in the second stage. It is analyzed and demonstrated that the proposed technique requires a much smaller mean fine acquisition computation (MFAC) than conventional FFT-based BOC acquisition techniques. The proposed technique is one of the first to achieve fast FFT-based fine acquisition of BOC signals with only a slight loss of detection probability, and it is therefore beneficial for receivers making a quick position fix when there are plenty of strong (i.e., line-of-sight) GNSS satellites to be searched.
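The FFT-based parallel search that all of these techniques build on can be sketched with a toy code: the circular correlation with the local replica is computed for every code phase at once as IFFT(FFT(r) · conj(FFT(c))). The 8-chip ±1 code below is illustrative, not a GNSS ranging code.

```python
# Sketch of FFT-based parallel code acquisition: one frequency-domain
# product evaluates the correlation at all code phases simultaneously,
# replacing a phase-by-phase time-domain search.

import cmath

def fft(x, inv=False):
    """Recursive radix-2 DFT (unscaled inverse when inv=True)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2], inv), fft(x[1::2], inv)
    sign = 1j if inv else -1j
    tw = [cmath.exp(sign * 2 * cmath.pi * k / n) for k in range(n // 2)]
    return ([even[k] + tw[k] * odd[k] for k in range(n // 2)] +
            [even[k] - tw[k] * odd[k] for k in range(n // 2)])

def acquire(received, code):
    """Return the code phase with the largest circular-correlation magnitude."""
    R = fft([complex(v) for v in received])
    C = fft([complex(v) for v in code])
    prod = [r * c.conjugate() for r, c in zip(R, C)]
    corr = [v / len(code) for v in fft(prod, inv=True)]
    return max(range(len(corr)), key=lambda k: abs(corr[k]))

code = [1, 1, 1, -1, 1, -1, -1, -1]
received = code[-3:] + code[:-3]        # received signal delayed by 3 chips
print(acquire(received, code))          # -> 3
```

For an N-chip search this costs three N-point FFTs instead of N separate N-point correlations; the proposed technique's savings come on top of this, from how the BOC subcarrier is handled in the coarse stage.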