• Title/Summary/Keyword: amount of computation

Search Results: 604

Adaptive Discrete Wavelet Transform Based on Block Energy for JPEG2000 Still Images (JPEG2000 정지영상을 위한 블록 에너지 기반 적응적 이산 웨이블릿 변환)

  • Kim, Dae-Won
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.8 no.1
    • /
    • pp.22-31
    • /
    • 2007
  • The algorithm proposed in this paper is based on wavelet decomposition and on the energy computation of the composed blocks, so the amount of calculation and the complexity are minimized by adaptively replacing the DWT coefficients and managing resources effectively. We now live in a world of multimedia applications running on many digital appliances and mobile devices. Among these applications, digital image compression is a key technology that allows digital cameras to store images and transmit them to other sites, and JPEG2000 is one of the cutting-edge standards for compressing still images efficiently. Digital camera technology relies heavily on image compression so that images can be saved locally and transferred to other sites without significant loss. The JPEG2000 standard is applicable to processing digital images for storage, transmission, and reception over wired and/or wireless networks. The discrete wavelet transform (DWT) is one of the main differences from the previous compression standard, JPEG: the DWT is performed on the entire image rather than on many split blocks. Several digital images were tested with this method and restored for comparison with the results of the conventional DWT. The comparison shows that the proposed energy-based adaptive DWT obtains better results, without any significant degradation in terms of MSE, PSNR, and the number of zero coefficients.
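The block-energy idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Haar filter, the block size, and the energy threshold are all assumptions. Blocks whose energy falls below the threshold get their DWT coefficients replaced with zeros, skipping the transform entirely.

```python
def haar_1d(x):
    # one level of the 1-D Haar transform: averages, then details
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    det = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg + det

def haar_dwt2(block):
    # separable 2-D transform: rows first, then columns
    rows = [haar_1d(r) for r in block]
    cols = [haar_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def block_energy(block):
    # energy = sum of squared sample values in the block
    return sum(v * v for row in block for v in row)

def adaptive_dwt(block, threshold):
    # low-energy blocks contribute little detail: replace their
    # coefficients with zeros instead of computing the transform
    if block_energy(block) < threshold:
        n = len(block)
        return [[0.0] * n for _ in range(n)]
    return haar_dwt2(block)
```

Zeroing low-energy blocks increases the number of zero coefficients (helping the entropy coder) while saving the per-block transform cost.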


Real-time Watermarking Algorithm using Multiresolution Statistics for DWT Image Compressor (DWT기반 영상 압축기의 다해상도의 통계적 특성을 이용한 실시간 워터마킹 알고리즘)

  • 최순영;서영호;유지상;김대경;김동욱
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.13 no.6
    • /
    • pp.33-43
    • /
    • 2003
  • In this paper, we propose a real-time watermarking algorithm designed to be combined with a DWT (Discrete Wavelet Transform)-based image compressor. To reduce the amount of computation in selecting the watermarking positions, the proposed algorithm uses a pre-established look-up table of critical values, built statistically by computing the correlation according to the energy values of the corresponding wavelet coefficients. That is, the watermark is embedded into the coefficients whose values are greater than the critical value in the look-up table, which is searched on the basis of the energy values of the corresponding level-1 subband coefficients. The proposed algorithm can therefore operate in real time, because the watermarking proceeds in parallel with the compression process without affecting the operation of the image compressor. It also mitigates both the loss of the watermark caused by quantization and Huffman coding during compression and the degradation of compression efficiency caused by watermark insertion. Visually recognizable patterns, such as binary images, were used as watermarks. The experimental results showed that the proposed algorithm satisfies robustness and imperceptibility, the major requirements of watermarking.
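The look-up-table selection step can be sketched roughly like this. The table contents, the embedding strength, and the additive embedding rule are illustrative assumptions; the paper's actual table is built statistically from correlation measurements.

```python
def embed_watermark(coeffs, energy, lut, bits, strength=2.0):
    """Embed watermark bits into coefficients exceeding the critical
    value looked up from the subband energy.
    lut: list of (energy_upper_bound, critical_value) pairs,
    sorted by increasing bound."""
    critical = next(c for bound, c in lut if energy <= bound)
    out, i = list(coeffs), 0
    for k, v in enumerate(out):
        if abs(v) > critical and i < len(bits):
            # shift the coefficient up or down depending on the bit
            out[k] = v + strength if bits[i] else v - strength
            i += 1
    return out
```

Because position selection is a single table look-up keyed on subband energy, it adds almost no latency to the compression pipeline.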

Estimating North Korea's GNP by Physical Indicators Approach (실물지표(實物指標)에 의한 북한(北韓)의 GNP 추정(推定))

  • Chun, Hong-tack
    • KDI Journal of Economic Policy
    • /
    • v.14 no.1
    • /
    • pp.167-189
    • /
    • 1992
  • The most difficult problem in estimating North Korea's GNP is the lack of basic national income data. In addition, no appropriate foreign exchange rate is available to convert North Korea's GNP into dollar values. The physical indicators method is particularly useful here because it requires only a modest amount of data and obtains dollar GNP directly, by applying to North Korea's physical indicators a relationship between physical indicators and GNP estimated from reference countries. The resulting estimate of North Korean GNP in 1990 is 27.1 billion dollars, with a per capita GNP of 1,268 dollars. The trade participation ratio (trade/GNP) implied by the estimate was plausible, as was the ratio of fiscal expenditure to GNP. This paper examines the logic of the physical indicators method, the quality of the North Korean data used in the estimation, and the plausibility of the estimation results. The relatively simple data requirements, comparative ease of computation, and plausible results suggest that the physical indicators method could enhance the reliability of North Korean GNP estimates.
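The core computation of the physical indicators method is a fitted relationship between an indicator and GNP in reference countries, applied to the target country's indicator. A toy sketch with made-up numbers (the indicator values and the simple one-variable linear form are assumptions, not the paper's specification):

```python
def fit_line(xs, ys):
    # ordinary least squares for y = a + b * x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# hypothetical reference countries: (physical indicator, per-capita GNP in $)
indicator = [10.0, 20.0, 30.0]
gnp = [1000.0, 2000.0, 3000.0]

a, b = fit_line(indicator, gnp)
estimate = a + b * 15.0   # apply the fitted relation to the target country
```

The appeal of the method is exactly this: no exchange rate is needed, because the reference-country regression is already expressed in dollars.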


A Development of Generalized Coupled Markov Chain Model for Stochastic Prediction on Two-Dimensional Space (수정 연쇄 말콥체인을 이용한 2차원 공간의 추계론적 예측기법의 개발)

  • Park Eun-Gyu
    • Journal of Soil and Groundwater Environment
    • /
    • v.10 no.5
    • /
    • pp.52-60
    • /
    • 2005
  • A conceptual model of an under-sampled study area includes a great amount of uncertainty. In this study, we investigate the applicability of a Markov chain model in the spatial domain as a tool for minimizing the uncertainty arising from the lack of data. A new formulation is developed that generalizes the previous two-dimensional coupled Markov chain model and is versatile enough to fit any computational sequence. Furthermore, the computational algorithm is improved to utilize more conditioning information and to reduce artifacts, such as artificial parcel inclination, caused by sequential computation. The generalized 2-D coupled Markov chain (GCMC) is tested by applying it to a hypothetical soil map to evaluate its appropriateness as a substitute for conventional geostatistical models. Compared to sequential indicator simulation (SIS), the GCMC simulation results show lower entropy at indicator boundaries, which is closer to real soil maps. For under-sampled indicators, however, the GCMC under-estimates the presence of the indicators, an aspect common to all other geostatistical models. To improve this under-estimation, further study on incorporating data fusion (or assimilation) into the GCMC is required.
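The basic coupling step of a 2-D coupled Markov chain can be sketched as below: the state of a cell is conditioned on both its left neighbor (via a horizontal transition matrix) and its top neighbor (via a vertical one), with the two transition probabilities multiplied. This is a generic illustration of the coupling idea, not the paper's generalized formulation; the transition matrices are assumptions.

```python
import random

def coupled_next(left, top, P_h, P_v, states, rng):
    # coupled-chain conditioning: weight each candidate state by the
    # product of the horizontal (left -> s) and vertical (top -> s)
    # transition probabilities, then sample
    weights = [P_h[left][s] * P_v[top][s] for s in states]
    return rng.choices(states, weights=weights)[0]

# hypothetical 2-state transition matrices (rows sum to 1)
P = [[0.9, 0.1],
     [0.2, 0.8]]
rng = random.Random(0)
state = coupled_next(0, 0, P, P, [0, 1], rng)
```

Sweeping this step over a grid in some visiting sequence yields a simulated categorical map; the paper's contribution is making that sequence and the conditioning more flexible.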

Research on improvement of target tracking performance of LM-IPDAF through improvement of clutter density estimation method (클러터밀도 추정 방법 개선을 통한 LM-IPDAF의 표적 추적 성능 향상 연구)

  • Yoo, In-Je;Park, Sung-Jae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.5
    • /
    • pp.99-110
    • /
    • 2017
  • Improving tracking performance by estimating the states of multiple targets with radar is important. In a clutter environment, joint events occur between tracks and measurements in multiple-target tracking with a tracking filter, and as the number of measurements increases, the number of joint events grows exponentially. Two issues must be considered when designing a multiple-target tracking filter in such environments. First, the filter should minimize the false track alarm rate by eliminating false tracks and quickly confirming target tracks, i.e., increase the FTD performance. Second, it should improve track maintenance performance by efficiently allocating each measurement to a track when a joint event occurs. With these considerations, single-target data association techniques have been extended to multiple-target tracking filters; representative algorithms are JIPDAF and LM-IPDAF. LM-IPDAF avoids the probabilistic evaluation of many hypotheses in measurement assignment, so its computational load does not grow nonlinearly with the number of measurements and tracks, and it computes the track existence probability based on the clutter density. This paper proposes a method to reduce the computational complexity by improving the clutter density estimation used in calculating the track existence probability of LM-IPDAF. The performance was verified by comparison with the existing algorithm through simulation. As a result, the simulation processing time was reduced by approximately 20% while achieving equivalent performance in position RMSE and confirmed true tracks.
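For context, a common nonparametric clutter density estimate in PDA-family filters divides the measurements not attributable to the target by the validation-gate volume. The sketch below shows that textbook form for a 2-D measurement space; it is background for the abstract, not the paper's improved estimator, and the gate-volume formula assumes a 2-D ellipsoidal gate.

```python
import math

def clutter_density(num_measurements, gate_volume, expected_targets=1.0):
    # nonparametric estimate: measurements in the gate beyond those
    # expected from the target, assumed uniform over the gate volume
    return max(num_measurements - expected_targets, 0.0) / gate_volume

def gate_volume_2d(gamma, det_S):
    # area of a 2-D ellipsoidal validation gate with gate threshold
    # gamma and innovation covariance determinant det_S
    return math.pi * gamma * math.sqrt(det_S)
```

Because this density feeds directly into the track existence probability, a cheaper or more stable estimate of it reduces the per-scan cost of the whole filter, which is the direction the paper pursues.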

An Efficient VEB Beats Detection Algorithm Using the QRS Width and RR Interval Pattern in the ECG Signals (ECG신호의 QRS 폭과 RR Interval의 패턴을 이용한 효율적인 VEB 비트 검출 알고리듬)

  • Chung, Yong-Joo
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.2
    • /
    • pp.96-101
    • /
    • 2011
  • In recent days, the demand for remote ECG monitoring systems has been increasing, and the automation of such systems has become a major concern. Automatic detection of abnormal ECG beats is a necessity for the successful commercialization of real-time remote ECG monitoring systems. From this viewpoint, we propose in this paper an automatic detection algorithm for abnormal ECG beats using QRS width and RR interval patterns. In previous research, many efforts were made to classify ECG beats into detailed categories. However, these approaches produce many misclassification errors and high variability in classification performance, and they require a large amount of training data for accurate classification and heavy computation during the classification process. We argue that detecting abnormality in ECG beats is more important than detailed classification for an automatic ECG monitoring system. In this paper, we aim to detect the VEB, which occurs most frequently among abnormal ECG beats, and we achieved satisfactory detection performance when the proposed algorithm was applied to the MIT/BIH database.
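A rule of the kind the abstract describes can be sketched as a simple two-feature test: a VEB typically shows a widened QRS complex that arrives prematurely relative to the recent rhythm. The specific thresholds (120 ms width, 85% prematurity) are common rules of thumb assumed for illustration, not the paper's tuned values.

```python
def is_veb(qrs_width, rr_prev, rr_mean,
           width_thresh=0.12, prematurity=0.85):
    # qrs_width, rr_prev, rr_mean are in seconds
    wide = qrs_width > width_thresh            # widened QRS complex
    premature = rr_prev < prematurity * rr_mean  # early beat vs. running mean RR
    return wide and premature
```

A rule this cheap needs no training data and runs comfortably on an embedded monitor, which is the advantage the abstract claims over detailed multi-class approaches.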

Efficient Privacy-Preserving Duplicate Elimination in Edge Computing Environment Based on Trusted Execution Environment (신뢰실행환경기반 엣지컴퓨팅 환경에서의 암호문에 대한 효율적 프라이버시 보존 데이터 중복제거)

  • Koo, Dongyoung
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.9
    • /
    • pp.305-316
    • /
    • 2022
  • With the flood of digital data from the Internet of Things and big data, cloud service providers that process and store vast amounts of data from multiple users can apply duplicate data elimination techniques for efficient data management. The edge computing paradigm, introduced as an extension of cloud computing, improves the user experience by mitigating problems such as network congestion toward a central cloud server and reduced computational efficiency. However, adding a new edge device that is not entirely trustworthy may increase the computational complexity, because additional cryptographic operations are needed to preserve data privacy during duplicate identification and elimination. In this paper, we propose an efficiency-improved, privacy-preserving duplicate data elimination protocol with an optimized user-edge-cloud communication framework that utilizes a trusted execution environment. Direct sharing of secret information between the user and the central cloud server minimizes the computational load on edge devices and enables the use of efficient encryption algorithms on the cloud provider's side. Users also benefit from offloading data to edge devices, which enables duplicate elimination while they remain offline. Experiments show up to 78x improvement in computation during the data outsourcing process compared to a previous study that does not exploit a trusted execution environment in the edge computing architecture.
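Stripped of the privacy machinery, duplicate elimination reduces to keeping one stored copy per content digest and turning later copies into references. The hash-indexed sketch below shows only that baseline mechanism; the paper's contribution (encrypting data while still allowing the digests to match, via a TEE) is not modeled here.

```python
import hashlib

def dedup(chunks):
    # content-based duplicate elimination: one stored copy per digest
    store, refs = {}, []
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # first copy is kept
        refs.append(digest)               # later copies become references
    return store, refs
```

The difficulty the paper addresses is that conventional encryption makes identical plaintexts produce different ciphertexts, so this digest match fails unless keys are coordinated, which is where the trusted execution environment comes in.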

Comparative Performance Analysis of Feature Detection and Matching Methods for Lunar Terrain Images (달 지형 영상에서 특징점 검출 및 정합 기법의 성능 비교 분석)

  • Hong, Sungchul;Shin, Hyu-Soung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.4
    • /
    • pp.437-444
    • /
    • 2020
  • A lunar rover's optical camera provides navigation and terrain information in an exploration zone. However, owing to the near absence of an atmosphere, the Moon has homogeneous terrain with dark soil, and in this extreme environment the rover has limited data storage and low computational capability. Thus, for successful exploration, feature detection and matching methods that are robust to lunar terrain and environmental characteristics must be examined. In this research, SIFT, SURF, BRISK, ORB, and AKAZE are comparatively analyzed on lunar terrain images from a lunar rover. Experimental results show that SIFT and AKAZE are the most robust to lunar terrain characteristics. AKAZE detects fewer feature points than SIFT, but they are detected and matched with high precision at the lowest computational cost, making AKAZE adequate for fast and accurate navigation. Although SIFT has the highest computational cost, it stably detects and matches the largest number of feature points. Since the rover periodically sends terrain images to Earth, SIFT is suitable for global 3D terrain map construction, where a large number of terrain images can be processed on Earth. The study results are expected to provide a guideline for utilizing feature detection and matching methods in future lunar exploration rovers.
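Matching precision in comparisons like this is usually driven by the acceptance rule applied to descriptor distances. A standard choice is Lowe's ratio test, sketched below in plain Python on small descriptor lists (real pipelines would use a library matcher over SIFT/AKAZE descriptors; the descriptors here are toy values).

```python
def match_ratio_test(desc_a, desc_b, ratio=0.8):
    # Lowe's ratio test: accept a match only when the nearest
    # candidate is clearly better than the second nearest
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))
    return matches
```

On homogeneous, low-texture terrain the ratio test matters more than usual, since many descriptors look alike and ambiguous matches must be rejected.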

A New Memory-based Learning using Dynamic Partition Averaging (동적 분할 평균을 이용한 새로운 메모리 기반 학습기법)

  • Yih, Hyeong-Il
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.4
    • /
    • pp.456-462
    • /
    • 2008
  • Classification, which assigns a new datum to one of a set of given classes, is one of the most widely used data mining techniques. Memory-Based Reasoning (MBR) is a reasoning method for classification problems. MBR simply keeps many patterns, represented in their original feature-vector form, in memory without reasoning rules, and uses a distance function to classify a test pattern. As the number of training patterns grows, however, both the memory usage and the amount of computation required for reasoning increase. NGE, FPA, and RPA are well-known MBR algorithms that have been shown to perform satisfactorily, but they suffer from heavy memory usage and lengthy computation. In this paper, we propose the DPA (Dynamic Partition Averaging) algorithm. It chooses partition points by calculating the Gini index over the entire pattern space and partitions that space dynamically. If the classes included in a partition are unique, it generates a representative pattern from the partition; otherwise, it partitions the relevant partition recursively by the same method. The proposed method exhibits performance comparable to k-NN with far fewer patterns, and better results than the EACH system (which implements the NGE theory), FPA, and RPA.
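The Gini-index partition-point selection that DPA relies on can be sketched for a single feature as follows: try each candidate threshold and keep the one minimizing the weighted impurity of the two sides. This is a generic one-dimensional illustration, not the paper's full multi-dimensional procedure.

```python
def gini_index(labels):
    # impurity of a label set: 1 - sum over classes of p_c^2
    n = len(labels)
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return 1.0 - sum((k / n) ** 2 for k in counts.values())

def best_split(values, labels):
    # choose the threshold minimizing the weighted Gini impurity;
    # the threshold is placed midway between neighbouring values
    pairs = sorted(zip(values, labels))
    best, best_score = None, float("inf")
    for i in range(1, len(pairs)):
        left = [c for _, c in pairs[:i]]
        right = [c for _, c in pairs[i:]]
        score = (len(left) * gini_index(left) +
                 len(right) * gini_index(right)) / len(pairs)
        if score < best_score:
            best = (pairs[i - 1][0] + pairs[i][0]) / 2
            best_score = score
    return best
```

Once a partition is class-pure, averaging its members yields the representative pattern, which is what lets DPA answer queries with far fewer stored patterns than raw k-NN.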

Incremental Frequent Pattern Detection Scheme Based on Sliding Windows in Graph Streams (그래프 스트림에서 슬라이딩 윈도우 기반의 점진적 빈발 패턴 검출 기법)

  • Jeong, Jaeyun;Seo, Indeok;Song, Heesub;Park, Jaeyeol;Kim, Minyeong;Choi, Dojin;Bok, Kyoungsoo;Yoo, Jaesoo
    • The Journal of the Korea Contents Association
    • /
    • v.18 no.2
    • /
    • pp.147-157
    • /
    • 2018
  • Recently, with the advancement of network technologies and the spread of IoT and social network services, large amounts of graph stream data have been generated. As the relationships between objects in graph streams change dynamically, studies have been conducted to detect or analyze changes in the graph. In this paper, we propose a scheme that incrementally detects frequent patterns by reusing the frequent pattern information detected in previous sliding windows. The proposed scheme computes, for each frequent pattern detected in previous sliding windows, a value indicating in how many future sliding windows the pattern can remain frequent. Using these values, it reduces the overall amount of computation by performing only the necessary calculations in the next sliding window. In addition, only connected patterns are recognized as a single pattern, so that only the more significant patterns are detected. We conducted various performance evaluations to show the superiority of the proposed scheme; it is faster than the existing similar scheme when the amount of duplicated data is large.
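The incremental mechanics of sliding-window frequency counting can be sketched as below: adding a new batch updates the counts, and expiring the oldest batch subtracts them, so only the delta is recomputed per window. This illustrates plain itemset counting under the assumed class and parameter names; the paper's scheme additionally handles graph patterns and connectivity.

```python
from collections import Counter, deque

class SlidingFrequentPatterns:
    """Count pattern occurrences over the last `size` batches and
    report patterns whose support meets `min_support`."""

    def __init__(self, size, min_support):
        self.size, self.min_support = size, min_support
        self.window = deque()
        self.counts = Counter()

    def push(self, batch):
        # incremental update: add the new batch, expire the oldest
        self.window.append(batch)
        self.counts.update(batch)
        if len(self.window) > self.size:
            self.counts.subtract(self.window.popleft())

    def frequent(self):
        return {p for p, c in self.counts.items() if c >= self.min_support}
```

Because counts are carried across windows, a pattern known to be frequent can also be given a "remains frequent for at least k more windows" bound, which is the optimization the abstract describes.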