• Title/Summary/Keyword: 연산 수행 (operation execution)

A Design of Memory-efficient 2k/8k FFT/IFFT Processor using R4SDF/R4SDC Hybrid Structure (R4SDF/R4SDC Hybrid 구조를 이용한 메모리 효율적인 2k/8k FFT/IFFT 프로세서 설계)

  • 신경욱
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.2
    • /
    • pp.430-439
    • /
    • 2004
  • This paper describes the design of an 8192/2048-point FFT/IFFT processor (CFFT8k2k), which performs multi-carrier modulation/demodulation in an OFDM-based DVB-T receiver. Since a large-size FFT requires a large buffer memory, two design techniques are applied to achieve a memory-efficient implementation of the 8192-point FFT/IFFT. A hybrid structure, composed of radix-4 single-path delay feedback (R4SDF) and radix-4 single-path delay commutator (R4SDC) stages, reduces memory by 20% compared to a pure R4SDC structure. In addition, a memory reduction of about 24% is achieved by a novel two-step convergent block floating-point scaling. As a result, the design requires only 57% of the memory used in a conventional design, reducing chip area and power consumption. The CFFT8k2k core is designed in Verilog HDL and has about 102,000 gates, 292 kbits of RAM, and 39 kbits of ROM. Timing simulation of the gate-level netlist with SDF, synthesized using a 0.25-μm CMOS library, shows that it can safely operate with a 50-MHz clock at a 2.5-V supply, so that an 8192-point FFT/IFFT can be computed every 164 μs. The functionality of the core is fully verified by FPGA implementation, and an average SQNR of 60 dB is achieved.

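As a rough illustration of the block floating-point idea that the two-step convergent scaling above builds on, the Python sketch below rescales one block of values by a single shared exponent; the 12-bit word length, the function name, and the sample values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def block_floating_point(block, word_bits=12):
    """Scale a block of integers by one shared exponent so the largest
    magnitude fits in `word_bits` bits (a generic BFP step, not the
    paper's two-step convergent scheme)."""
    block = np.asarray(block, dtype=np.int64)
    max_mag = int(np.max(np.abs(block))) if block.size else 0
    limit = 1 << (word_bits - 1)           # largest representable magnitude + 1
    shift = 0
    while max_mag >> shift >= limit:       # common right-shift for the whole block
        shift += 1
    return block >> shift, shift           # mantissas plus shared block exponent

# Example: one (hypothetical) butterfly-stage output block
mantissas, exponent = block_floating_point([70000, -1500, 321, 8], word_bits=12)
print(mantissas, exponent)   # values now fit in 12 bits; exponent records the scaling
```
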
A Resource Adaptive Data Dissemination Protocol for Wireless Sensor Networks (무선 센서 네트워크를 위한 자원 적응형 데이터 확산프로토콜)

  • Kim, Hyun-Tae;Choi, Nak-Sun;Jung, Kyu-Su;Jeon, Yeong-Bae;Ra, In-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.11
    • /
    • pp.2091-2098
    • /
    • 2006
  • This paper proposes a resource-adaptive data dissemination protocol for sensor nodes in a wireless sensor network. In general, each sensor node in a wireless sensor network delivers the required information to its final destination by performing cooperative tasks such as sensing, processing, and communicating with other nodes, using only its own battery power. Therefore, the protocol used to transfer the acquired information to users through the network should minimize the power drawn from the limited energy resources of each sensor node. In particular, it is important to minimize total power consumption by handling the problems of implosion, overlapping data delivery, and excessive message transfer caused by message broadcasting. To maintain the shortest path between sensor nodes, maximize node lifetime, and minimize communication cost, this paper presents a method that selects a representative forwarding node for an event area based on a negotiation scheme and maintains an optimal transfer path using hop and energy information. Finally, for performance evaluation, the proposed protocol is compared with the existing directed diffusion and SPIN protocols. The simulation results show that the proposed protocol improves the power consumption rate as the number of sensor nodes in the network, or of neighbor nodes in an event area, increases, and reduces the number of messages disseminated from a sensor node.

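As a loose sketch of the representative-node idea summarized above, the following Python snippet picks, among the nodes that sensed an event, the one with the best weighted combination of residual energy and hop distance to the sink; the field names, weights, and scoring rule are assumptions standing in for the paper's actual negotiation scheme.

```python
from dataclasses import dataclass

@dataclass
class SensorNode:
    node_id: int
    residual_energy: float   # remaining battery, e.g. in joules
    hops_to_sink: int        # hop count toward the sink node

def select_representative(event_nodes, w_energy=0.7, w_hops=0.3):
    """Choose one node in the event area to forward the sensed data.

    Higher residual energy and fewer hops to the sink are preferred;
    the weighted score is an illustrative stand-in for the paper's
    negotiation scheme."""
    max_energy = max(n.residual_energy for n in event_nodes)
    max_hops = max(n.hops_to_sink for n in event_nodes)

    def score(n):
        energy_term = n.residual_energy / max_energy if max_energy else 0.0
        hop_term = 1.0 - (n.hops_to_sink / max_hops if max_hops else 0.0)
        return w_energy * energy_term + w_hops * hop_term

    return max(event_nodes, key=score)

nodes = [SensorNode(1, 4.2, 3), SensorNode(2, 5.0, 5), SensorNode(3, 3.1, 2)]
print(select_representative(nodes).node_id)
```
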
Development of the HEMP Generation, Propagation Analysis, and Optimal Shelter Design Tool (고 고도 전자기파(HEMP) 발생과 전파해석 및 방호실 최적 설계 Tool 개발)

  • Kim, Dong Il;Min, Gyeong Chan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.10
    • /
    • pp.2331-2338
    • /
    • 2014
  • The HEMP threat has acquired new and urgent relevance as the proliferation of nuclear weapons and missile technology accelerates; North Korea, for example, is assessed as already having developed several atomic weapons and missiles capable of delivering a nuclear warhead against South Korea. ITU-T K.78, K.81 and IEC standards recommend countermeasures for industrial facilities, including navigation and sailing facilities, to prevent malfunctions of processor-equipped systems caused by EMP/HEMP, but such effects can only be evaluated by computer simulation based on the US AFWL studies published between 1960 and 1990. This work is significant for South Korea, which lies under the North Korean threat, because the export of HEMP-related products is strongly restricted. The HEMP code newly developed by KTI includes HEMP generation and propagation analysis, an optimal shelter design tool, analysis of the essential EM energy attenuation in various multi-layered soils and rocks, and a HEMP filter design tool. In particular, the least-squares fitting method was adopted to analyze the EM energy attenuation in soils and rocks, whose characteristics vary widely, based on numerous field-test reports.

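The abstract notes that a least-squares fit was used to model EM energy attenuation in soils and rocks from field-test data; the sketch below shows a generic least-squares fit of attenuation against depth, with a simple linear model and made-up sample values, since the paper's actual fitting model and measurements are not given here.

```python
import numpy as np

# Hypothetical field measurements: depth in metres vs. measured attenuation in dB.
depth_m = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
atten_db = np.array([6.1, 11.8, 24.5, 35.9, 61.2])

# Fit attenuation = a * depth + b by ordinary least squares (np.polyfit, degree 1).
a, b = np.polyfit(depth_m, atten_db, deg=1)
print(f"fitted attenuation rate: {a:.2f} dB/m, offset: {b:.2f} dB")

# Predicted attenuation at an unmeasured depth.
print(np.polyval([a, b], 4.0))
```
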
Building a Log Framework for Personalization Based on a Java Open Source (JAVA 오픈소스 기반의 개인화를 지원하는 Log Framework 구축)

  • Sin, Choongsub;Park, Seog
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.8
    • /
    • pp.524-530
    • /
    • 2015
  • A log is a text record used to monitor a system and identify issues during the development and operation of a program. Based on the log, system developers and operators can trace the cause of an issue. In the development phase, tracing a log is relatively simple because only a small number of people, such as developers and testers, use the system. In the operation phase, however, many people may use the system, logs become hard to trace, and in most cases log tracing is simply given up. This study proposes a simplified way to trace logs during system operation. The purpose is to create logs at run time based on an ID/IP, using features provided by Logback. The ID/IP of a user to be traced is stored in a DB and loaded into memory once the WAS starts running. Before an online service executes, an Interceptor decides whether to create a log file, and the services requested by that user are then written to a separate log file. Although every service request must pass through the Interceptor, the overhead is insignificant since only a simple arithmetic operation is performed in the JVM.

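The framework above is built on Java and Logback; purely as an analogue of the per-user log-routing idea (not the authors' implementation), the Python sketch below uses the standard logging module to send requests from traced IDs/IPs to separate log files while everything else goes to the shared application log. The user keys and file names are invented for illustration.

```python
import logging

# IDs/IPs flagged for tracing would be loaded from a DB at startup in the
# paper's design; here a plain set stands in for that in-memory table.
TRACED_USERS = {"alice", "10.0.0.7"}

_handlers = {}

def get_user_logger(user_key):
    """Return a logger that writes to a per-user file if the user is traced,
    otherwise the shared application logger (an interceptor-style check)."""
    if user_key not in TRACED_USERS:
        return logging.getLogger("app")
    logger = logging.getLogger(f"app.trace.{user_key}")
    if user_key not in _handlers:               # lazily attach one file per traced user
        handler = logging.FileHandler(f"trace_{user_key}.log")
        handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
        logger.addHandler(handler)
        _handlers[user_key] = handler
    return logger

logging.basicConfig(level=logging.INFO)
get_user_logger("alice").info("GET /orders handled")   # goes to trace_alice.log
get_user_logger("bob").info("GET /orders handled")     # goes to the shared log only
```
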
An Efficient Clustering Algorithm based on Heuristic Evolution (휴리스틱 진화에 기반한 효율적 클러스터링 알고리즘)

  • Ryu, Joung-Woo;Kang, Myung-Ku;Kim, Myung-Won
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.1_2
    • /
    • pp.80-90
    • /
    • 2002
  • Clustering is a useful technique for grouping data points such that points within a single group/cluster have similar characteristics. Many clustering algorithms have been developed and used in engineering applications including pattern recognition and image processing, and clustering has recently drawn increasing attention as an important technique in data mining. However, clustering algorithms such as K-means and Fuzzy C-means suffer from two difficulties: the number of clusters must be determined a priori, and the results depend on the initial set of clusters, which can lead to undesirable outcomes. In this paper, we propose a new clustering algorithm that solves these problems. In our method, an evolutionary algorithm is used to address the local-optima problem in which clustering converges to an undesirable state when started from an inappropriate set of clusters. We also adopt a new measure that represents how well the data are clustered, defined in terms of both intra-cluster dispersion and inter-cluster separability. Using this measure, the number of clusters is determined automatically as a result of the optimization process. In addition, we combine problem-specific heuristic knowledge with the evolutionary algorithm to speed up the search. We have experimented with our algorithm on several sets of multi-dimensional data, and the results show that it outperforms existing algorithms.

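The abstract does not give the exact form of the clustering measure, so the sketch below uses a simple stand-in, the ratio of inter-cluster separability to intra-cluster dispersion, to show how such a measure can score candidate clusterings inside an evolutionary search.

```python
import numpy as np

def cluster_quality(points, labels):
    """Score a clustering by inter-cluster separability over intra-cluster
    dispersion (higher is better). This ratio is an illustrative stand-in
    for the paper's measure, whose exact form is not given in the abstract."""
    points = np.asarray(points, dtype=float)
    centroids = {c: points[labels == c].mean(axis=0) for c in np.unique(labels)}

    # Intra-cluster dispersion: mean distance of points to their own centroid.
    intra = np.mean([np.linalg.norm(p - centroids[c])
                     for p, c in zip(points, labels)])

    # Inter-cluster separability: mean pairwise distance between centroids.
    cs = list(centroids.values())
    inter = np.mean([np.linalg.norm(a - b)
                     for i, a in enumerate(cs) for b in cs[i + 1:]])
    return inter / (intra + 1e-12)

pts = np.array([[0, 0], [0, 1], [5, 5], [5, 6]])
print(cluster_quality(pts, np.array([0, 0, 1, 1])))   # well-separated: high score
print(cluster_quality(pts, np.array([0, 1, 0, 1])))   # mixed clusters: low score
```
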
A Ranking Cleaning Policy for Embedded Flash File Systems (임베디드 플래시 파일시스템을 위한 순위별 지움 정책)

  • Kim, Jeong-Ki;Park, Sung-Min;Kim, Chae-Kyu
    • The KIPS Transactions:PartA
    • /
    • v.9A no.4
    • /
    • pp.399-404
    • /
    • 2002
  • With the evolution of information and communication technologies, manufacturing embedded systems such as PDAs (personal digital assistants), HPCs (hand-held PCs), set-top boxes, and information appliances has become practical, and the RTOS (real-time operating system) and file system play essential roles within such embedded systems. For the file system of embedded systems, flash memory has been used extensively instead of traditional hard disk drives because of embedded-system requirements such as portability, fast access time, and low power consumption. Beyond these requirements, the nonvolatile storage characteristic of flash memory is another reason for its wide adoption in industry. However, there are some technical challenges in using flash memory as an indispensable component of embedded systems: relatively slow cleaning time and a limited number of write-and-clean cycles. In this paper, a new cleaning policy is proposed to overcome these problems, and relevant performance comparison results are provided. The ranking cleaning policy (RCP) decides when and where to clean within the flash memory, considering both the cost of cleaning and the number of times each block has been cleaned. This maximizes not only the lifetime of the flash memory but also access-time performance and manageability. In the performance comparison, RCP showed about 10-50% improvement in write throughput over the traditional Greedy and Cost-benefit policies.

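As a loose illustration of ranking cleaning candidates by cleaning cost and cleaning count, the sketch below scores each flash block by reclaimable space versus copy cost, penalized by its erase count; the scoring formula and weight are assumptions, not RCP's actual ranking rule.

```python
from dataclasses import dataclass

@dataclass
class FlashBlock:
    block_id: int
    invalid_pages: int   # pages reclaimed by cleaning this block
    valid_pages: int     # pages that must be copied out first
    erase_count: int     # how many times this block has been cleaned

def rank_blocks(blocks, wear_weight=0.5):
    """Rank candidate blocks for cleaning: prefer blocks that reclaim much
    space at low copy cost and that have been erased few times. The scoring
    formula is an illustrative stand-in for the paper's ranking rule."""
    max_erase = max(b.erase_count for b in blocks) or 1

    def score(b):
        benefit = b.invalid_pages / (b.valid_pages + 1)     # reclaim vs. copy cost
        wear_penalty = b.erase_count / max_erase            # favour even wear
        return benefit - wear_weight * wear_penalty

    return sorted(blocks, key=score, reverse=True)

blocks = [FlashBlock(0, 30, 2, 80), FlashBlock(1, 25, 1, 10), FlashBlock(2, 5, 20, 5)]
print([b.block_id for b in rank_blocks(blocks)])   # best cleaning candidate first
```
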
Efficient Neural Network Architecture for Fast Target Detection and Recognition (목표물의 고속 탐지 및 인식을 위한 효율적인 신경망 구조)

  • Weon, Yong-Kwan;Baek, Yong-Chang;Lee, Jeong-Su
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.10
    • /
    • pp.2461-2469
    • /
    • 1997
  • Target detection and recognition problems, in which neural networks are widely used, require translation-invariant and real-time processing in addition to the requirements of general pattern recognition problems. This paper presents a novel architecture that meets these requirements and explains an effective methodology to train the network. The proposed neural network is an architectural extension of the shared-weight neural network, composed of a feature extraction stage followed by a pattern recognition stage. The feature extraction stage performs a correlation operation on the input with a weight kernel, and the entire neural network can be considered a nonlinear correlation filter. Therefore, the output of the proposed neural network is a correlation plane with peak values at the location of the target. The architecture is suitable for implementation on parallel or distributed computers, which allows its application to problems that require real-time processing. A training methodology that overcomes the problem caused by the imbalance between the numbers of targets and non-targets is also introduced. To verify the performance, the proposed network is applied to the detection and recognition of a specific automobile driving around a parking lot. The results show no false alarms and processing fast enough to track a target moving at about 190 km per hour.

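The following sketch illustrates the correlation-plane idea described above: correlating a scene with a kernel produces a plane whose peak marks the target location. In the paper the kernel is learned by the shared-weight network; here, for a self-contained toy example, the kernel is simply the target pattern itself.

```python
import numpy as np
from scipy.signal import correlate2d

# Toy scene with a 3x3 "target" pattern embedded at rows 6-8, columns 2-4.
target = np.array([[0, 1, 0],
                   [1, 2, 1],
                   [0, 1, 0]], dtype=float)
scene = np.zeros((12, 12))
scene[6:9, 2:5] = target

# Correlating the scene with the kernel yields a correlation plane whose
# peak marks the target location (the shift-invariance the paper relies on).
plane = correlate2d(scene, target, mode="same")
peak = np.unravel_index(np.argmax(plane), plane.shape)
print(peak)   # (7, 3), the centre of the embedded target
```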

Pattern Analysis of Personalized ECG Signal by Q, R, S Peak Variability (Q, R, S 피크 변화에 따른 개인별 ECG 신호의 패턴 분석)

  • Cho, Ik-Sung;Kwon, Hyeog-Soong;Kim, Joo-Man;Kim, Seon-Jong;Kim, Byoung-Chul
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.1
    • /
    • pp.192-200
    • /
    • 2015
  • Several algorithms have been developed to classify arrhythmia, relying on specific ECG (electrocardiogram) databases. Nevertheless, because individual differences exist among ECG signals, performance degrades when diagnosis is carried out with a general classification rule. Most methods also require accurate detection of the P-QRS-T points, higher computational cost, and longer processing time, yet the P and T waves are difficult to detect because of individual differences. It is therefore necessary to classify patterns by analyzing the personalized ECG signal and extracting a minimal set of features. This paper presents a QRS pattern analysis of personalized ECG signals based on Q, R, and S peak variability. For this purpose, we detect the R wave through a preprocessing step and extract eight features from amplitude and phase variability. We then classify nine patterns in real time through peak and morphology variability. PVC, PAC, Normal, LBBB, RBBB, and Paced beat arrhythmia were evaluated using 43 records of the MIT-BIH arrhythmia database, and the results indicate an average of 93.72% in QRS pattern detection and classification.

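As a minimal, generic illustration of R-wave detection after preprocessing (not the paper's method), the sketch below squares the signal's first difference and picks maxima above a relative threshold with a refractory period; the threshold and timing parameters are assumptions.

```python
import numpy as np

def detect_r_peaks(ecg, fs, threshold_ratio=0.6, refractory_s=0.2):
    """Very simple R-peak detector: emphasise steep QRS slopes by
    differencing and squaring, then take samples above a relative
    threshold while enforcing a refractory period. A rough stand-in
    for the paper's preprocessing, not its method."""
    ecg = np.asarray(ecg, dtype=float)
    energy = np.square(np.diff(ecg, prepend=ecg[0]))   # slope energy
    threshold = threshold_ratio * energy.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i, e in enumerate(energy):
        if e >= threshold and i - last >= refractory:
            peaks.append(i)
            last = i
    return peaks

# Synthetic 3-beat signal at 360 Hz (the MIT-BIH sampling rate) with sharp R spikes.
fs = 360
ecg = np.zeros(3 * fs)
ecg[[180, 540, 900]] = 1.0
print(detect_r_peaks(ecg, fs))   # indices at the three spikes
```
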
A Study on the Ordered Subsets Expectation Maximization Reconstruction Method Using Gibbs Priors for Emission Computed Tomography (Gibbs 선행치를 사용한 배열된부분집합 기대값최대화 방출단층영상 재구성방법에 관한 연구)

  • Im, K. C.;Choi, Y.;Kim, J. H.;Lee, S. J.;Woo, S. K.;Seo, H. K.;Lee, K. H.;Kim, S. E.;Choe, Y. S.;Park, C. C;Kim, B. T.
    • Journal of Biomedical Engineering Research
    • /
    • v.21 no.5
    • /
    • pp.441-448
    • /
    • 2000
  • The maximum likelihood expectation maximization (MLEM) method for emission tomography reconstruction statistically models the image acquisition process to reconstruct images. MLEM has many advantages over the commonly used filtered backprojection method, but it suffers from divergence as the number of iterations increases and from long reconstruction times. In this paper, to remedy these drawbacks, we implement OSEM-MAP (maximum a posteriori), which adds Gibbs priors, either the membrane (MM) or thin plate (TP) prior, to the ordered subsets expectation maximization (OSEM) method that markedly shortens computation time, thereby improving the stability of the algorithm and the quality of the reconstructed images. In the experiments, the projection data were divided into 16 subsets to accelerate convergence, and to compare the performance of the algorithms, reconstruction results for software phantoms (a monkey-brain autoradiograph and a mathematical cardiac-thorax phantom) were compared in terms of squared error. In addition, real projection data acquired from a PET scanner with a physical phantom were used to evaluate the practical applicability of the algorithm.

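For readers unfamiliar with OSEM, the sketch below runs a plain OSEM loop on a tiny synthetic system, without the Gibbs (membrane/thin-plate) prior that the paper adds; the matrix size, subset count, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def osem(y, A, n_subsets=4, n_iters=20):
    """One plain OSEM reconstruction loop (without the MAP/Gibbs prior the
    paper adds): y is the measured projection vector, A the system matrix
    with A[i, j] = contribution of voxel j to projection bin i."""
    n_bins, n_voxels = A.shape
    x = np.ones(n_voxels)
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for idx in subsets:                      # one multiplicative update per subset
            As = A[idx]
            forward = As @ x                     # expected counts for this subset
            ratio = y[idx] / np.maximum(forward, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

# Tiny synthetic example: 16 projection bins, 8 unknowns, noiseless data.
rng = np.random.default_rng(0)
A = rng.random((16, 8))
x_true = rng.random(8)
y = A @ x_true
print(np.round(osem(y, A), 3))   # reconstructed estimate
print(np.round(x_true, 3))       # ground truth for comparison
```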

A Method for Optimal Moving Pattern Mining using Frequency of Moving Sequence (이동 시퀀스의 빈발도를 이용한 최적 이동 패턴 탐사 기법)

  • Lee, Yon-Sik;Ko, Hyun
    • The KIPS Transactions:PartD
    • /
    • v.16D no.1
    • /
    • pp.113-122
    • /
    • 2009
  • Since traditional pattern mining methods only probe unspecified moving patterns that appear to satisfy users' requests among diverse patterns within limited scopes of time and space, they are not applicable to mining optimal moving patterns that involve complex time and space constraints, such as 1) searching the optimal path between two specific points and 2) scheduling a path within a specified time. In this paper, we therefore examine the problems of mining optimal moving patterns with complex time and space constraints from a vast set of historical data of numerous moving objects, and propose a new moving-pattern mining method that searches for an optimal moving-path pattern as a location-based service. The proposed method, which determines the optimal path (the most frequently used path) between two specific points using pattern frequencies retrieved from the historical data of moving objects, can carry out pattern mining efficiently by applying minimum-level spatial generalization to the moving objects' location attribute, in consideration of the topological relationship between an object's location and the spatial scope. The efficiency of the algorithm was tested by comparing its processing time with the Dijkstra and A* algorithms, which are generally used for optimal-path search. Although the results varied somewhat with the heuristic weight of the A* algorithm, they show that the proposed method is more efficient than the other methods.
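
A minimal sketch of the frequency-based notion of an optimal path described above: among historical moving sequences, count how often each sub-path from the origin to the destination occurs and return the most frequent one. It assumes locations have already been generalized to comparable region identifiers; the function name and sample data are invented for illustration.

```python
from collections import Counter

def most_frequent_path(trajectories, origin, destination):
    """Among historical moving sequences, return the most frequently used
    path from `origin` to `destination` (the frequency-based 'optimal'
    path); returns None if no sequence connects the two points."""
    counts = Counter()
    for traj in trajectories:
        if origin in traj and destination in traj:
            start, end = traj.index(origin), traj.index(destination)
            if start < end:
                counts[tuple(traj[start:end + 1])] += 1
    return max(counts, key=counts.get) if counts else None

history = [
    ["A", "B", "C", "D"],
    ["A", "B", "C", "D"],
    ["A", "E", "D"],
    ["X", "A", "B", "C", "D", "Y"],
]
print(most_frequent_path(history, "A", "D"))   # ('A', 'B', 'C', 'D'), used 3 times
```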