• Title/Summary/Keyword: redundancy resolution

Search Results: 56

Automatic Generation of GCP Chips from High Resolution Images using SUSAN Algorithms

  • Um Yong-Jo;Kim Moon-Gyu;Kim Taejung;Cho Seong-Ik
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.220-223
    • /
    • 2004
  • Automatic image registration is an essential element of remote sensing because remote sensing systems generate enormous amounts of data: multiple observations of the same features at different times and by different sensors. The general process of automatic image registration includes three steps: 1) the extraction of features to be used in the matching process, 2) the feature matching strategy and accurate matching process, and 3) the resampling of the data based on the correspondence computed from the matched features. For steps 2) and 3), we have successfully developed an algorithm for automated registration of satellite images using RANSAC (Random Sample Consensus). For step 1), however, human operation is still required to generate GCP chips, which is a time-consuming, laborious, and expensive process. The main idea of this research is that we can automatically generate GCP chips with corner detection algorithms, without GPS surveys or human intervention, if we have a systematically corrected satellite image of acceptable positional accuracy. In this research, we use the SUSAN (Smallest Univalue Segment Assimilating Nucleus) algorithm to detect corners. The SUSAN algorithm is known as one of the most robust corner detection algorithms in the field of computer vision. However, high-resolution images contain so many corners that the corner points produced by the SUSAN algorithm must be reduced to overcome redundancy. In the experiment, we automatically generate GCP chips from Geo-level IKONOS images using the SUSAN algorithm (a minimal sketch of the SUSAN corner test follows this entry). We then extract reference coordinates from the IKONOS images and DEM data and filter the corner points using texture analysis. Finally, we apply both the GCP chips collected automatically by the proposed method and the GCPs collected by an operator to our in-house automatic precision correction algorithm. The compared results are presented to show the GCP quality.

  • PDF
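
For context, here is a minimal sketch of the SUSAN corner response in plain NumPy. The mask radius, brightness threshold t, and geometric threshold ratio are illustrative defaults, not the paper's settings.

```python
import numpy as np

def susan_corners(img, radius=3, t=27.0, g_ratio=0.5):
    """Minimal SUSAN corner response (Smith & Brady formulation).

    img     : 2-D float array of gray levels
    radius  : radius of the circular mask around each nucleus pixel
    t       : brightness-difference threshold
    g_ratio : geometric threshold as a fraction of the mask area
              (0.5 is the usual choice for corners)
    """
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    in_mask = (ys ** 2 + xs ** 2) <= radius ** 2
    offsets = np.argwhere(in_mask) - radius        # (dy, dx) pairs
    g = g_ratio * len(offsets)                     # geometric threshold

    h, w = img.shape
    response = np.zeros((h, w))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            nucleus = img[y, x]
            # USAN area: mask pixels with brightness similar to the nucleus.
            usan = sum(np.exp(-((img[y + dy, x + dx] - nucleus) / t) ** 6)
                       for dy, dx in offsets)
            if usan < g:                           # small USAN -> corner
                response[y, x] = g - usan
    return response
```

Thresholding this response and applying non-maximum suppression, plus the texture filtering the abstract mentions, would be the natural way to thin the over-dense corner set found in high-resolution imagery.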

Strain-based structural condition assessment of an instrumented arch bridge using FBG monitoring data

  • Ye, X.W.;Yi, Ting-Hua;Su, Y.H.;Liu, T.;Chen, B.
    • Smart Structures and Systems
    • /
    • v.20 no.2
    • /
    • pp.139-150
    • /
    • 2017
  • Structural strain plays a significant role in the structural condition assessment of in-service bridges in terms of structural bearing capacity, structural reliability level, and overall safety redundancy. It has therefore been one of the most important parameters for researchers and engineers engaged in structural health monitoring (SHM) practice. In this paper, an SHM system instrumented on the Jiubao Bridge in Hangzhou, China is first introduced. This system involves nine subsystems and has been operated continuously for five years, since 2012. As part of the SHM system, a total of 166 fiber Bragg grating (FBG) strain sensors are installed on the bridge to measure the dynamic strain responses of key structural components. Based on the strain monitoring data acquired over the past two years, a strain-based structural condition assessment of the Jiubao Bridge is carried out. A wavelet multi-resolution algorithm is applied to separate the temperature effect from the raw strain data (a minimal sketch of this separation follows this entry). The strain data obtained under normal traffic and wind conditions and under typhoon conditions are examined for structural safety evaluation. The structural condition rating of the bridge, in accordance with the AASHTO specification for condition evaluation and load and resistance factor rating of highway bridges, is performed using the processed strain data in combination with finite element analysis. The analysis framework presented in this study can serve as a reference for the assessment, inspection, and maintenance of in-service bridges instrumented with long-term SHM systems.
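
A minimal sketch of the temperature-separation step, using NumPy and PyWavelets. The wavelet family ("db4"), decomposition level, and the synthetic one-day signal are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
import pywt

def split_temperature_strain(strain, wavelet="db4", level=8):
    """Separate a slowly varying, temperature-like trend from raw strain
    via wavelet multi-resolution analysis."""
    coeffs = pywt.wavedec(strain, wavelet, level=level)
    # Keep only the coarsest approximation -> slow thermal trend.
    trend = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    temperature = pywt.waverec(trend, wavelet)[: len(strain)]
    dynamic = strain - temperature   # traffic/wind-induced component
    return temperature, dynamic

# Synthetic example: one day of 1 Hz strain with a daily thermal cycle.
t = np.arange(86400.0)
raw = 30.0 * np.sin(2 * np.pi * t / 86400.0) + np.random.randn(t.size)
temp, dyn = split_temperature_strain(raw)
```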

SHVC-based Texture Map Coding for Scalable Dynamic Mesh Compression (스케일러블 동적 메쉬 압축을 위한 SHVC 기반 텍스처 맵 부호화 방법)

  • Naseong Kwon;Joohyung Byeon;Hansol Choi;Donggyu Sim
    • Journal of Broadcast Engineering
    • /
    • v.28 no.3
    • /
    • pp.314-328
    • /
    • 2023
  • In this paper, we propose a texture map compression method based on the hierarchical coding scheme of SHVC to support the scalability function of dynamic mesh compression. The proposed method generates multiple-resolution texture maps by downsampling a high-resolution texture map and encodes them with SHVC, effectively eliminating the redundancy among the resolutions (a sketch of the downsampling stage follows this entry). The dynamic mesh decoder supports the scalability of mesh data by decoding a texture map of appropriate resolution according to receiver performance and the network environment. To evaluate the proposed method, it is applied to the V-DMC (Video-based Dynamic Mesh Coding) reference software, TMMv1.0, and the scalable encoder/decoder proposed in this paper is compared with a TMMv1.0-based simulcast method. Experiments show that the proposed method achieves average BD-rate (point cloud-based, luma PSNR) gains of -7.7% and -5.7% under the AI and LD conditions, respectively, compared to the simulcast method, confirming that the proposed method can effectively support texture map scalability for dynamic mesh data.
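
The downsampling stage could look like the following sketch. The layer count and the simple 2x2 block-average filter are assumptions; the actual SHVC encoding of the resulting layers is done by the codec and is omitted here.

```python
import numpy as np

def build_texture_pyramid(tex, num_layers=3):
    """Dyadic multi-resolution texture maps for scalable coding.

    tex : H x W x 3 uint8 texture map, with H and W divisible by
          2 ** (num_layers - 1).
    Returns layers ordered coarse -> fine: the base layer first, then
    the enhancement layers an SHVC-style hierarchical encoder consumes.
    """
    layers = [tex.astype(np.float32)]
    for _ in range(num_layers - 1):
        f = layers[-1]
        # 2x2 block average = simple low-pass dyadic downsampling.
        f = (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2]) / 4.0
        layers.append(f)
    return [layer.astype(np.uint8) for layer in reversed(layers)]
```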

A Feature Map Compression Method for Multi-resolution Feature Map with PCA-based Transformation (PCA 기반 변환을 통한 다해상도 피처 맵 압축 방법)

  • Park, Seungjin;Lee, Minhun;Choi, Hansol;Kim, Minsub;Oh, Seoung-Jun;Kim, Younhee;Do, Jihoon;Jeong, Se Yoon;Sim, Donggyu
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.56-68
    • /
    • 2022
  • In this paper, we propose a compression method for multi-resolution feature maps for VCM. The proposed method removes the redundancy among the channels and resolution levels of the multi-resolution feature map through a PCA-based transformation (a sketch of this transform follows this entry). The basis vectors and mean vector used for the transformation, and the transform coefficients obtained from it, are compressed according to their characteristics using a VVC-based coder and DeepCABAC. To evaluate the proposed method, object detection performance was measured on the OpenImageV6 and COCO 2017 validation sets, and the BD-rate against the MPEG-VCM anchor and the feature map compression anchor proposed in this paper was computed using bpp and mAP. Experiments show that the proposed method achieves a 25.71% BD-rate improvement over the feature map compression anchor on OpenImageV6. Furthermore, for large objects in the COCO 2017 validation set, the BD-rate improves by up to 43.72% compared to the MPEG-VCM anchor.
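
A minimal sketch of a PCA-based channel transform of the kind described, using NumPy's SVD. The channel-as-observation layout and the num_basis parameter are illustrative assumptions, and the VVC/DeepCABAC compression of the outputs is omitted.

```python
import numpy as np

def pca_transform_feature_map(fmap, num_basis=32):
    """PCA-style transform across feature-map channels.

    fmap : C x H x W array; each channel is one observation, so
           num_basis must not exceed min(C, H * W).
    Returns (mean, basis, coeffs): the quantities that would then be
    compressed with a VVC-based coder and DeepCABAC.
    """
    C, H, W = fmap.shape
    X = fmap.reshape(C, H * W).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows = principal dirs
    basis = Vt[:num_basis]            # num_basis x (H*W)
    coeffs = Xc @ basis.T             # C x num_basis
    return mean, basis, coeffs

def pca_inverse(mean, basis, coeffs, shape):
    """Reconstruct the feature map from the transmitted quantities."""
    C, H, W = shape
    return (coeffs @ basis + mean).reshape(C, H, W)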

An Adaptive Block Matching Algorithm based on Temporal Correlations

  • Yoon, Hyo-Sun;Son, Nam-Rye;Lee, Guee-Sang;Kim, Soo-Hyung
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.188-191
    • /
    • 2002
  • Motion estimation techniques have been developed to reduce the bit-rate of video sequences by removing temporal redundancy. However, their high computational complexity makes them very difficult to apply to high-resolution applications in a real-time environment. For this reason, motion estimation algorithms with low computational complexity are viable solutions. If a priori knowledge about the motion of the current block is available before motion estimation, a better starting point for the search of an optimal motion vector can be selected, and the computational complexity will also be reduced. In this paper, we present an adaptive block matching algorithm based on the temporal correlation of consecutive image frames, which adaptively defines the search pattern and the location of the initial starting point to reduce computational complexity (a sketch of this predict-then-search idea follows this entry). Experiments show that, compared with the DS (Diamond Search) algorithm, the proposed algorithm is about 0.1-0.5 dB better in terms of PSNR and improves the average number of search points per motion estimation by as much as 50%.

  • PDF
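
A minimal sketch of the predict-then-search idea, assuming the co-located block's motion vector from the previous frame is available as the predictor. The small refinement window stands in for the paper's adaptive search pattern.

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and a reference patch."""
    h, w = block.shape
    return np.abs(ref[y:y + h, x:x + w] - block).sum()

def predict_then_search(cur, ref, by, bx, bsize, prev_mv, r=2):
    """Start at the co-located block's previous motion vector, then
    refine with a small local search."""
    block = cur[by:by + bsize, bx:bx + bsize].astype(np.int32)
    ref = ref.astype(np.int32)
    best_mv, best_cost = (0, 0), sad(block, ref, by, bx)  # co-located fallback
    py, px = by + prev_mv[0], bx + prev_mv[1]             # predicted start
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            y, x = py + dy, px + dx
            if 0 <= y <= ref.shape[0] - bsize and 0 <= x <= ref.shape[1] - bsize:
                cost = sad(block, ref, y, x)
                if cost < best_cost:
                    best_mv, best_cost = (y - by, x - bx), cost
    return best_mv, best_cost
```

A good predictor keeps the refinement window small, which is where the reduction in search points per block comes from.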

Motion-Compensated Layered Video Coding for Dynamic Adaptation (동적 적응을 위한 움직임 보상 계층형 동영상 부호화)

  • 이재용;박희라;고성제
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.10B
    • /
    • pp.1912-1920
    • /
    • 1999
  • In this paper, we propose a layered video coding scheme that can generate a multi-layered bitstream for heterogeneous environments. A new motion prediction structure with a temporal hierarchy of frames is developed to provide temporal resolution scalability (a sketch of such a dyadic layer assignment follows this entry), and wavelet decomposition is adopted to offer spatial scalability. The proposed scheme can achieve a higher compression ratio than replenishment schemes by using motion estimation and compensation, which further reduce temporal redundancy, and it handles dynamic adaptation and errors effectively using dispersive intra-subband update (DISU). Moreover, data rate scalability can be attained by employing the embedded zerotree wavelet (EZW) technique, which produces an embedded bitstream. The proposed scheme is therefore expected to be effective in heterogeneous environments such as the Internet, ATM, and mobile networks, where interoperability is required.

  • PDF
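
A small sketch of a dyadic temporal-layer assignment, the kind of frame hierarchy that yields temporal resolution scalability. The GOP size and the exact structure are assumptions, not necessarily the paper's prediction structure.

```python
def temporal_layer(frame_idx, gop=8):
    """Dyadic temporal-layer assignment within a GOP.

    Layer 0 holds every gop-th frame; each further layer doubles the
    frame rate, so dropping the top layer halves it, and so on.
    """
    pos = frame_idx % gop
    if pos == 0:
        return 0
    layer, step = 1, gop // 2
    while step > 1 and pos % step != 0:
        layer += 1
        step //= 2
    return layer
```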

An Enhanced Broadcasting Algorithm in Wireless Ad hoc Networks (무선 ad hoc 네트워크를 위한 향상된 방송 알고리즘)

  • Kim, Kwan-Woong;Bae, Sung-Hwan;Kim, Dae-Ik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.33 no.10A
    • /
    • pp.956-963
    • /
    • 2008
  • In a multi-hop wireless ad hoc network, broadcasting is an elementary operation used to support route discovery, address resolution, and other application tasks. Broadcasting by flooding may cause serious redundancy, contention, and collision in the network, which is referred to as the broadcast storm problem. Many broadcasting schemes have been proposed that perform better than simple flooding in wireless ad hoc networks. Deciding whether or not to re-broadcast also poses a dilemma between reachability and efficiency under different host densities (the classic counter-based decision sketched after this entry illustrates this trade-off). In this paper, we propose enhanced broadcasting schemes that can reduce re-broadcast packets without loss of reachability. Simulation results show that the proposed schemes offer better reachability as well as efficiency compared to previous schemes.
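
For illustration, here is the classic counter-based rebroadcast decision from the broadcast-storm literature. It is not the authors' proposed scheme, only the kind of baseline such schemes improve upon.

```python
import random

def should_rebroadcast(duplicates_heard, threshold=3, p=1.0):
    """Counter-based rebroadcast decision.

    While waiting a random assessment delay, a node counts duplicate
    copies of the same broadcast packet. Many duplicates mean its own
    rebroadcast would add little extra coverage, so it is suppressed;
    otherwise it rebroadcasts (optionally with probability p).
    """
    if duplicates_heard >= threshold:
        return False
    return random.random() < p
```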

Compressed Sensing Based Dynamic MR Imaging: A Short Survey (Compressed Sensing 기법을 이용한 Dynamic MR Imaging)

  • Jung, Hong;Ye, Jong-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.5
    • /
    • pp.25-31
    • /
    • 2009
  • The recently developed sampling theory of "compressed sensing" is gathering huge interest in the MR reconstruction area because it makes feasible the high spatio-temporal resolution of dynamic MRI, which has been limited in conventional methods based on the Nyquist sampling theory. Since dynamic MRI usually carries highly redundant information along the temporal direction, it can in most cases be represented very sparsely. Therefore, compressed sensing, which exploits the sparsity of the unknown images, can be applied effectively to most dynamic MRI. This review article briefly introduces the compressed sensing based dynamic MR imaging algorithms proposed to date, along with other methods exploiting sparsity (a generic l1-reconstruction sketch follows this entry). By comparing them with conventional methods, the reader may gain insight into how compressed sensing based methods can impact nearly every area of clinical dynamic MRI.
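
A generic iterative soft-thresholding (ISTA) solver for the l1-regularized recovery problem underlying these methods. The random sensing matrix and toy sparse signal are stand-ins for undersampled k-t space data; actual dynamic MRI methods use Fourier sampling operators and temporal sparsifying transforms.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))      # gradient step on data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x

# Toy example: recover a 5-sparse signal from 60 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_hat = ista(A, A @ x_true, lam=0.02)
```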

A Study on Effective Satellite Selection Method for Multi-Constellation GNSS

  • Taek Geun, Lee;Yu Dam, Lee;Hyung Keun, Lee
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.12 no.1
    • /
    • pp.11-22
    • /
    • 2023
  • In this paper, we propose an efficient satellite selection method for multi-constellation GNSS. The number of visible satellites has increased dramatically with multi-constellation GNSS, and this increased availability can improve overall GNSS performance. However, the larger number of visible satellites considerably increases the computational burden of advanced processing such as integer ambiguity resolution and fault detection. As is widely known, optimal satellite selection carries a very large computational burden, and its real-time implementation is practically impossible. To reduce the computational burden, several sub-optimal but efficient satellite selection methods have been proposed recently. However, these methods are prone to the local optimum problem and do not fully utilize the information redundancy between different constellation systems (a simple greedy GDOP-based selection, sketched after this entry, shows the flavor of such sub-optimal schemes). To solve this problem, the proposed method utilizes the inter-system biases and geometric assignments. As a result, the proposed method can be implemented in real time, avoids the local optimum problem, and does not exclude any single-satellite constellation. The performance of the proposed method is compared with the optimal method and two popular sub-optimal methods through a simulation and an experiment.
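
A minimal GDOP computation and greedy drop-one selection, sketched only to show what a sub-optimal selection scheme looks like. A single shared receiver clock state is assumed; the paper's method additionally handles inter-system biases between constellations.

```python
import numpy as np

def gdop(los):
    """GDOP from unit line-of-sight vectors (k x 3, k >= 4), assuming a
    single shared receiver clock state."""
    G = np.hstack([los, np.ones((len(los), 1))])   # geometry matrix
    return np.sqrt(np.trace(np.linalg.inv(G.T @ G)))

def greedy_select(los, n_select):
    """Drop, one at a time, the satellite whose removal degrades GDOP
    the least. Fast, but prone to local optima."""
    idx = list(range(len(los)))
    while len(idx) > n_select:
        costs = [gdop(los[[i for i in idx if i != j]]) for j in idx]
        idx.pop(int(np.argmin(costs)))
    return idx
```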

Site-Specific Error-Cross Correlation-Informed Quadruple Collocation Approach for Improved Global Precipitation Estimates

  • Alcantara, Angelika;Ahn Kuk-Hyun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.180-180
    • /
    • 2023
  • To improve global risk management, understanding the characteristics and distribution of precipitation is crucial. However, obtaining spatially and temporally resolved climatic data remains challenging due to sparse gauge observations and limited data availability, despite the use of satellite and reanalysis products. To address this challenge, merging the available precipitation products has been introduced to generate spatially and temporally reliable data by exploiting the strengths of the individual products. However, most existing studies utilize all the available products without considering the varying performance of each dataset across regions. Comprehensively considering the relative contribution of each parent dataset is necessary, since these contributions may vary significantly, and utilizing all available datasets for merging may lead to significant data redundancy issues. Hence, in this study we introduce a site-specific precipitation merging method that utilizes the Quadruple Collocation (QC) approach, which acknowledges the existence of error cross-correlation between the parent datasets, to create high-resolution global daily precipitation data for 2001-2020. The performance of multiple gridded precipitation products is first evaluated per region to determine the best combination of quadruplets for estimating the error variances through the QC approach and computing the merging weights. The merged precipitation is then computed by adding the precipitation from each dataset in the quadruplet, multiplied by its respective merging weight (a sketch of this weighted merge follows this entry). Our results show that our approach holds promise for generating reliable global precipitation data for data-scarce regions lacking spatially and temporally resolved precipitation data.

  • PDF
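
A sketch of the final weighted-merge step, assuming normalized inverse-error-variance weights. Deriving the error variances via the quadruple collocation analysis is the substantive step and is not reproduced here.

```python
import numpy as np

def merge_precipitation(products, error_vars):
    """Merge co-located precipitation estimates with normalized
    inverse-error-variance weights.

    products   : list of same-shape arrays, one per parent dataset
    error_vars : per-product error variances (here, from the QC step)
    """
    w = 1.0 / np.asarray(error_vars, dtype=float)
    w /= w.sum()                    # merging weights sum to one
    return np.tensordot(w, np.stack(products), axes=1)
```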