• Title/Summary/Keyword: Algorithm Complexity


RPC Model Generation from the Physical Sensor Model (영상의 물리적 센서모델을 이용한 RPC 모델 추출)

  • Kim, Hye-Jin;Kim, Jae-Bin;Kim, Yong-Il
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.11 no.4 s.27
    • /
    • pp.21-27
    • /
    • 2003
  • The rational polynomial coefficients (RPC) model is a generalized sensor model used as an alternative to the physical sensor model for IKONOS-2 and QuickBird. As sensors grow in number and complexity, and as the need for a standard sensor model becomes more important, the applicability of the RPC model is also increasing. The RPC model can substitute for any sensor model, such as the projective camera, the linear pushbroom sensor, and SAR. This paper is aimed at generating an RPC model from the physical sensor models of KOMPSAT-1 (Korean Multi-Purpose Satellite) and aerial photography. KOMPSAT-1 collects 510-730 nm panchromatic images with a ground sample distance (GSD) of 6.6 m and a swath width of 17 km by pushbroom scanning. We generated the RPC from a physical sensor model of KOMPSAT-1 and from aerial photography. An iterative least squares solution based on the Levenberg-Marquardt algorithm is used to estimate the RPC; in addition, data normalization and regularization are applied to improve accuracy and minimize noise. The accuracy of the test was evaluated based on 2-D image coordinates. From this test, we found that the RPC model is suitable for both KOMPSAT-1 and aerial photography.
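
To make the estimation step concrete, here is a minimal sketch (not the authors' code) of fitting rational polynomial coefficients with Levenberg-Marquardt least squares, including the normalization and regularization the abstract mentions. The basis is truncated to ten terms (full RPCs use twenty), and the data are synthetic stand-ins for grid points that a physical sensor model would generate; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def poly3(c, x, y, z):
    # Truncated polynomial basis (full RPCs use 20 third-order terms).
    return (c[0] + c[1]*x + c[2]*y + c[3]*z +
            c[4]*x*y + c[5]*x*z + c[6]*y*z +
            c[7]*x**2 + c[8]*y**2 + c[9]*z**2)

def residuals(coef, x, y, z, r, lam=1e-6):
    num, den = coef[:10], np.r_[1.0, coef[10:]]   # denominator c0 fixed to 1
    pred = poly3(num, x, y, z) / poly3(den, x, y, z)
    # Tikhonov-style regularization appended to stabilize the fit.
    return np.r_[pred - r, lam * coef]

def normalize(v):
    # Normalize coordinates to roughly [-1, 1] before fitting, as the
    # paper does, to improve accuracy and reduce numerical noise.
    off, scale = v.mean(), (v.max() - v.min()) / 2
    return (v - off) / scale

# x, y, z (ground) and r (image row) would come from a dense grid pushed
# through the physical sensor model; synthetic stand-ins here.
rng = np.random.default_rng(0)
x, y, z = (rng.uniform(-1, 1, 500) for _ in range(3))
r = 0.3*x + 0.1*y*z + 0.05*z**2
fit = least_squares(residuals, x0=np.zeros(19), args=(x, y, z, r), method='lm')
print("RMS residual:", np.sqrt(np.mean(fit.fun[:500]**2)))
```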


Damage Analysis and Accuracy Assessment for River-side Facilities using UAV images (UAV 영상을 활용한 수변구조물 피해분석 및 정확도 평가)

  • Kim, Min Chul;Yoon, Hyuk Jin;Chang, Hwi Jeong;Yoo, Jong Su
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.24 no.1
    • /
    • pp.81-87
    • /
    • 2016
  • It is important to analyze exact damage information for fast recovery when natural disasters damage river-side facilities such as dams, bridges, and embankments. In this study, we present a method for effective damage analysis using UAV (unmanned aerial vehicle) images and assess its accuracy. The UAV images are captured near the river-side facilities, and the core methodologies for damage analysis are image matching and a change detection algorithm. The point cloud produced by image matching reconstructs 3-dimensional data from 2-dimensional images; damage areas are then extracted by comparing the height values over the same area against reference data. The absolute locational precision of the results was tested against post-processed aerial LiDAR data used as the reference. The assessment shows that our matching results reach 10-20 cm precision when the exterior orientation parameters are accurate. This study shows that the suggested method is very useful for damage analysis of large structures such as river-side facilities. However, it cannot be applied to buildings of complex shape, which require a different damage analysis method.
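
The height-comparison step can be sketched as follows; this is our illustration of the idea, not the authors' implementation. A matched point cloud is rasterized into a height grid and differenced against a reference grid (e.g. from aerial LiDAR); cells exceeding a threshold are flagged as damaged. Grid layout, cell size, and the threshold are assumptions.

```python
import numpy as np

def height_grid(points, cell=0.5, shape=(200, 200)):
    """Rasterize (x, y, z) points into a max-height grid."""
    grid = np.full(shape, np.nan)
    ix = (points[:, 0] / cell).astype(int).clip(0, shape[0] - 1)
    iy = (points[:, 1] / cell).astype(int).clip(0, shape[1] - 1)
    for i, j, z in zip(ix, iy, points[:, 2]):
        if np.isnan(grid[i, j]) or z > grid[i, j]:
            grid[i, j] = z
    return grid

def damage_mask(matched, reference, threshold=0.2):
    """Flag cells whose heights differ by more than `threshold` metres;
    the threshold reflects the 10-20 cm precision reported above."""
    diff = np.abs(matched - reference)
    return np.where(np.isnan(diff), False, diff > threshold)
```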

An Improvement in K-NN Graph Construction using re-grouping with Locality Sensitive Hashing on MapReduce (MapReduce 환경에서 재그룹핑을 이용한 Locality Sensitive Hashing 기반의 K-Nearest Neighbor 그래프 생성 알고리즘의 개선)

  • Lee, Inhoe;Oh, Hyesung;Kim, Hyoung-Joo
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.11
    • /
    • pp.681-688
    • /
    • 2015
  • The k-nearest neighbor (k-NN) graph construction is an important operation with many web-related applications, including collaborative filtering, similarity search, and many others in data mining and machine learning. Despite its many elegant properties, the brute-force k-NN graph construction method has a computational complexity of $O(n^2)$, which is prohibitive for large-scale data sets. Locality sensitive hashing, which is efficient for high-dimensional and sparse data, is therefore increasingly run on the (key, value)-based distributed framework MapReduce. Following this two-stage strategy, we use locality sensitive hashing to divide users into small candidate groups and then calculate the similarity between pairs within each group by brute force on MapReduce. The candidate-group generation stage is critical, since brute-force calculation is performed in the following step; however, existing methods do not prevent large candidate groups. In this paper, we propose an efficient algorithm for approximate k-NN graph construction that regroups candidate groups. Experimental results show that our approach is more effective than existing methods in terms of graph accuracy and scan rate.
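
A minimal single-machine sketch of the two-stage idea (our illustration, outside MapReduce): random-projection LSH assigns points to candidate groups, oversized groups are split so the brute-force stage stays bounded, and k-NN lists are built per group. The splitting here is a simple size cap standing in for the paper's regrouping; all names are ours.

```python
import numpy as np
from itertools import combinations

def lsh_buckets(X, n_bits=8, seed=0):
    planes = np.random.default_rng(seed).normal(size=(n_bits, X.shape[1]))
    bits = (X @ planes.T) > 0                        # sign of each projection
    keys = bits.astype(int) @ (1 << np.arange(n_bits))
    buckets = {}
    for idx, k in enumerate(keys):
        buckets.setdefault(int(k), []).append(idx)
    return buckets

def regroup(buckets, max_size=64):
    """Split candidate groups larger than max_size -- the step the paper
    identifies as missing from earlier methods (simplified here)."""
    out = []
    for members in buckets.values():
        for s in range(0, len(members), max_size):
            out.append(members[s:s + max_size])
    return out

def knn_graph(X, k=5, max_size=64):
    nn = {i: [] for i in range(len(X))}
    for group in regroup(lsh_buckets(X), max_size):
        for i, j in combinations(group, 2):          # brute force per group
            d = float(np.linalg.norm(X[i] - X[j]))
            nn[i].append((d, j)); nn[j].append((d, i))
    return {i: sorted(c)[:k] for i, c in nn.items()}
```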

Scheduling System using CSP for Effective Assignment of Repair Warrant Jobs (효율적인 A/S작업 배정을 위한 CSP기반의 스케줄링 시스템)

  • 심명수;조근식
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2000.11a
    • /
    • pp.247-256
    • /
    • 2000
  • Today's companies invest heavily not only in selling products but also in after-sales service (A/S), for the sake of corporate credibility and image. Providing such high-quality after-sales service requires rational management of a large workforce, and resolving requested repair jobs quickly, at minimum cost and within the given time, raises the problem of assigning and scheduling jobs among the workers. This paper studies a scheduling system that rationally assigns repair jobs for chemical instruments to service personnel. We first analyzed the job schedules of Youngjin Science Co. ("영진과학(주)"), a company that sells and maintains HP chemical analysis systems, to derive the necessary domains, and extracted constraints from its customer-service and workforce-management strategies. To solve the scheduling problem, we applied domain filtering, a constraint satisfaction problem (CSP) technique that removes from each variable's domain the values that cannot satisfy the constraints, thereby shrinking the domains of the variables involved. We also compared cost-oriented scheduling against customer-satisfaction-oriented scheduling and used the trade-off between them to obtain an optimal solution. Experiments showed that jobs were assigned to personnel more efficiently, more jobs were completed within the given time, and the cost of processing the jobs was reduced.
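
The domain-filtering idea can be shown in a few lines; this is a minimal sketch under our own assumptions, not the paper's system. Each repair job is a variable whose domain is the set of engineers able to serve it, and unary constraints (skill, working hours) filter each domain before any search begins.

```python
# Hypothetical jobs and engineers; field names are illustrative.
jobs = {
    "J1": {"skill": "GC", "hour": 10},
    "J2": {"skill": "HPLC", "hour": 14},
}
engineers = {
    "E1": {"skills": {"GC"}, "hours": range(9, 13)},
    "E2": {"skills": {"GC", "HPLC"}, "hours": range(13, 18)},
}

def filter_domains(jobs, engineers):
    """Remove engineers that violate a constraint from each job's domain."""
    domains = {}
    for j, req in jobs.items():
        domains[j] = {e for e, cap in engineers.items()
                      if req["skill"] in cap["skills"]
                      and req["hour"] in cap["hours"]}
    return domains

print(filter_domains(jobs, engineers))
# {'J1': {'E1'}, 'J2': {'E2'}}
```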


Empirical Mode Decomposition using the Second Derivative (이차 미분을 이용한 경험적 모드분해법)

  • Park, Min-Su;Kim, Donghoh;Oh, Hee-Seok
    • The Korean Journal of Applied Statistics
    • /
    • v.26 no.2
    • /
    • pp.335-347
    • /
    • 2013
  • There are various types of real-world signals. For example, an electrocardiogram (ECG) represents myocardial activity (contraction and relaxation) according to the beating of the heart and can be expressed as the fluctuation of amperage over time. A signal is a composite of various component signals, much as an orchestra combines instruments with distinct frequencies into a single harmony. Various studies have investigated how to decompose mixed stationary signals, but for non-stationary signals those methodologies are of limited use. Huang et al. (1998) proposed empirical mode decomposition (EMD) to deal with non-stationarity. EMD provides a data-driven approach that decomposes a signal into intrinsic mode functions according to local oscillation, through the identification of local extrema. However, because of the repeated envelope-construction process, the EMD algorithm is inefficient and not robust to noise, and its computational complexity grows with the size of the signal. In this research, we propose a new method that extracts the local oscillation embedded in a signal by utilizing the second derivative.
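
For reference, here is a minimal sketch of one sifting step of classical EMD, the costly loop the paper replaces with a second-derivative formulation: local extrema are located, upper and lower envelopes are interpolated with cubic splines, and their mean is subtracted. The signal is synthetic.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    hi = argrelextrema(x, np.greater)[0]      # indices of local maxima
    lo = argrelextrema(x, np.less)[0]         # indices of local minima
    if len(hi) < 2 or len(lo) < 2:
        return x                              # too few extrema to envelope
    upper = CubicSpline(t[hi], x[hi])(t)
    lower = CubicSpline(t[lo], x[lo])(t)
    return x - (upper + lower) / 2            # remove the local mean

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imf_candidate = sift_once(t, x)               # repeated until IMF criteria hold
```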

Parallel Range Query processing on R-tree with Graphics Processing Units (GPU를 이용한 R-tree에서의 범위 질의의 병렬 처리)

  • Yu, Bo-Seon;Kim, Hyun-Duk;Choi, Won-Ik;Kwon, Dong-Seop
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.5
    • /
    • pp.669-680
    • /
    • 2011
  • R-trees are widely used in areas such as geographic information systems, CAD systems, and spatial databases to efficiently index multi-dimensional data. As the data sets used in these areas grow in size and complexity, however, range query operations on R-trees need to become still faster to meet area-specific constraints. To address this problem, there have been various research efforts to accelerate query processing on R-trees by using buffer mechanisms or by parallelizing the query processing across multiple disks and processors. Among these strategies, approaches that parallelize query processing on R-trees through graphics processing units (GPUs) have been explored. GPUs may deliver improved performance through faster calculation and fewer disk accesses, but they may also introduce overhead caused by high memory access latencies and the low data exchange rate between the GPU and the CPU. In this paper, to address these overheads and exploit GPUs efficiently, we propose a novel approach that uses the GPU as a buffer to parallelize query processing on the R-tree. The buffering algorithm improves performance by reducing the number of disk accesses and maximizing coalesced memory access, thereby minimizing GPU memory access latencies. Through extensive performance studies, we observed that the proposed approach achieves up to 5 times higher query performance than the original CPU-based R-trees.
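
The core test a GPU kernel would run can be sketched as a vectorized pass: node rectangles are packed into a flat array (as they would be in the GPU buffer), and a whole batch is tested against the query window at once, standing in for coalesced per-thread tests. The layout and data are our assumptions.

```python
import numpy as np

# Each row: (xmin, ymin, xmax, ymax) for one node kept in the GPU buffer.
rects = np.array([[0, 0, 4, 4], [3, 3, 8, 8], [10, 10, 12, 12]], float)

def range_query(rects, q):
    """Return indices of rectangles overlapping query window q."""
    qxmin, qymin, qxmax, qymax = q
    hit = ((rects[:, 0] <= qxmax) & (rects[:, 2] >= qxmin) &
           (rects[:, 1] <= qymax) & (rects[:, 3] >= qymin))
    return np.nonzero(hit)[0]

print(range_query(rects, (2, 2, 5, 5)))   # -> [0 1]
```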

A Data Aggregation Scheme for Enhancing the Efficiency of Data Aggregation and Correctness in Wireless Sensor Networks (무선 센서 네트워크에서 데이터 수집의 효율성 및 정확성 향상을 위한 데이터 병합기법)

  • Kim, Hyun-Tae;Yu, Tae-Young;Jung, Kyu-Su;Jeon, Yeong-Bae;Ra, In-Ho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.5
    • /
    • pp.531-536
    • /
    • 2006
  • Recently, with the rapid advances in sensor and wireless communication technologies, many researchers have studied data-processing middleware for wireless sensor networks. In a wireless sensor network, the middleware must handle data loss at intermediate sensor nodes caused by instantaneous data bursts in order to process and deliver sensed data quickly and efficiently. To handle this problem, a simple data-discarding or data-compressing policy is typically used to reduce the total amount of data to be transferred. However, a discarding policy decreases the correctness of the collected data, while a compressing policy adds processing overhead due to the complexity of the algorithm. In this paper, we propose a data-averaging method that enhances the efficiency and correctness of data aggregation where sensed data must be delivered with only limited computing power and energy. With the proposed method, unnecessary transfer of overlapping data is eliminated, and data correctness is enhanced by the averaging scheme whenever an instantaneous data burst occurs. Finally, TOSSIM simulation results on TinyBB show that the correctness of the transferred data is enhanced.
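
A minimal sketch of the data-averaging idea, under our own assumptions about buffering and packet format: during a burst, buffered readings from the same source are collapsed into one averaged packet instead of being dropped, so fewer packets are transmitted without discarding information.

```python
from collections import defaultdict

class AveragingAggregator:
    def __init__(self, max_buffer=8):
        self.max_buffer = max_buffer
        self.pending = defaultdict(list)      # source id -> buffered readings

    def receive(self, source, value):
        self.pending[source].append(value)
        if len(self.pending[source]) >= self.max_buffer:
            return self.flush(source)         # burst: emit one averaged packet
        return None

    def flush(self, source):
        values = self.pending.pop(source, [])
        return (source, sum(values) / len(values)) if values else None

agg = AveragingAggregator(max_buffer=3)
for v in (20.1, 20.3, 20.2):
    packet = agg.receive("node7", v)
print(packet)   # ('node7', 20.2), up to float rounding
```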

Deep Learning-based SISR (Single Image Super Resolution) Method using RDB (Residual Dense Block) and Wavelet Prediction Network (RDB 및 웨이블릿 예측 네트워크 기반 단일 영상을 위한 심층 학습기반 초해상도 기법)

  • NGUYEN, HUU DUNG;Kim, Eung-Tae
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.703-712
    • /
    • 2019
  • Single-image super-resolution (SISR) aims to generate a visually pleasing high-resolution image from a degraded low-resolution measurement. In recent years, deep-learning-based super-resolution methods have been actively researched and have shown reliable, high performance. A typical method is WaveletSRNet, which restores high-resolution images by learning wavelet coefficients from the feature maps of images. However, WaveletSRNet has two disadvantages: a long processing time due to the complexity of the algorithm, and inefficient use of feature maps when extracting the input image's features. To address these problems, we propose an efficient single-image super-resolution method named RDB-WaveletSRNet. The proposed method uses residual dense blocks to extract low-resolution feature maps effectively, improving super-resolution performance, and adjusts the growth rate appropriately to contain the computational cost. In addition, wavelet packet decomposition is used to obtain the wavelet coefficients needed for large scaling ratios. In experiments on various images, the proposed method achieved a faster processing time and better image quality than conventional methods, improving PSNR by 0.1813 dB and running 1.17 times faster.
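
A residual dense block in the general form used by such networks can be sketched in PyTorch as follows; this is a generic RDB, not the authors' exact architecture. Each convolution sees all earlier feature maps (dense connectivity), a 1x1 convolution fuses them, and a local residual is added; the `growth` parameter corresponds to the growth rate the paper tunes.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    def __init__(self, channels=64, growth=32, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        for i in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
        # 1x1 local feature fusion back to the block's channel count.
        self.fuse = nn.Conv2d(channels + layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return x + self.fuse(torch.cat(feats, dim=1))   # local residual

y = RDB()(torch.randn(1, 64, 32, 32))   # shape preserved: (1, 64, 32, 32)
```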

Fast Content-Aware Video Retargeting Algorithm (고속 컨텐츠 인식 동영상 리타겟팅 기법)

  • Park, Dae-Hyun;Kim, Yoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.11
    • /
    • pp.77-86
    • /
    • 2013
  • In this paper, we propose a fast video retargeting method that preserves the contents of a video while converting the image size. Since conventional Seam Carving, the well-known content-aware image retargeting technique, uses dynamic programming, a repetitive update of the accumulated energy is required to obtain each seam, and this update cannot avoid long processing times because the whole image must be searched. In the proposed method, frames with similar features are grouped into a scene, and the first frame of each scene is resized by a modified Seam Carving in which multiple seams are extracted from candidate seams to reduce the repetitive updates. After the first frame of a scene is resized, all subsequent frames of the same scene are resized with reference to the seam information stored from the previous frame, without recomputing the accumulated energy. Therefore, although the method runs fast, with reduced complexity and without analyzing every frame of a scene, the image quality remains comparable to that of the existing method. The experimental results show that the proposed method preserves the contents of the image and can be applied to retarget images in real time.
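
For reference, here is a minimal sketch of the accumulated-energy dynamic program that the paper avoids re-running on every frame: computed once for the first frame of a scene, the extracted seam is then reused for the rest of the scene. The energy map here is random stand-in data.

```python
import numpy as np

def vertical_seam(energy):
    """Return the column index of the minimum-energy vertical seam per row."""
    h, w = energy.shape
    acc = energy.copy()
    for i in range(1, h):                      # DP: accumulate row by row
        left = np.r_[np.inf, acc[i - 1, :-1]]
        right = np.r_[acc[i - 1, 1:], np.inf]
        acc[i] += np.minimum(np.minimum(left, acc[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for i in range(h - 2, -1, -1):             # backtrack through neighbors
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(acc[i, lo:hi]))
    return seam

energy = np.random.default_rng(1).random((6, 8))
print(vertical_seam(energy))                   # one column index per row
```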

Design and Implementation of CW Radar-based Human Activity Recognition System (CW 레이다 기반 사람 행동 인식 시스템 설계 및 구현)

  • Nam, Jeonghee;Kang, Chaeyoung;Kook, Jeongyeon;Jung, Yunho
    • Journal of Advanced Navigation Technology
    • /
    • v.25 no.5
    • /
    • pp.426-432
    • /
    • 2021
  • Continuous-wave (CW) Doppler radar has the advantage of avoiding the privacy problems of cameras and obtains signals in a non-contact manner. This paper therefore proposes a human activity recognition (HAR) system using CW Doppler radar and presents the hardware design and implementation results for its acceleration. CW Doppler radar measures signals from continuous human motion. To obtain a single-motion spectrogram from the continuous signal, an algorithm for counting the number of movements is proposed. In addition, to minimize computational complexity and memory usage, a binarized neural network (BNN) was used to classify human motions, achieving an accuracy of 94%. To accelerate the complex operations of the BNN, an FPGA-based BNN accelerator was designed and implemented. The proposed HAR system was implemented using 7,673 logic elements, 12,105 registers, 10,211 combinational ALUTs, and 18.7 Kb of block memory. Performance evaluation showed that the operation speed improved by 99.97% compared to the software implementation.
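
One plausible form of the movement-counting step, sketched under our own assumptions (the paper does not specify its algorithm): the radar signal's short-time energy is thresholded, and each rising edge above the threshold marks the start of one motion, delimiting a single-motion spectrogram segment. Window size, threshold, and the test signal are illustrative.

```python
import numpy as np

def count_movements(signal, fs=1000, win=0.25, threshold=0.1):
    step = int(fs * win)
    energy = np.array([np.mean(signal[s:s + step] ** 2)
                       for s in range(0, len(signal) - step, step)])
    active = energy > threshold
    rising = np.flatnonzero(~active[:-1] & active[1:]) + 1
    return len(rising), rising * step          # count and segment starts

# Synthetic test: three bursts of a 60 Hz Doppler tone on a 6 s timeline.
t = np.arange(0, 6, 1 / 1000)
sig = np.sin(2 * np.pi * 60 * t) * (((t % 2) > 1.0) & ((t % 2) < 1.8))
n, starts = count_movements(sig)
print(n)                                       # 3 separate motions
```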