• Title/Summary/Keyword: Bilinear Interpolation


A Novel Least Square and Image Rotation based Method for Solving the Inclination Problem of License Plate in Its Camera Captured Image

  • Wu, ChangCheng;Zhang, Hao;Hua, JiaFeng;Hua, Sha;Zhang, YanYi;Lu, XiaoMing;Tang, YiChen
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.12, pp.5990-6008, 2019
  • Recognizing license plates from traffic camera images is one of the most important tasks in many traffic management systems. Despite the many sophisticated license plate recognition algorithms available, license plate recognition remains an active research topic, because license plates around the world lack a uniform format and their camera-captured images are often affected by multiple adverse factors, such as low resolution, poor illumination, and camera installation problems. This paper proposes a novel method to correct the inclination of license plates in camera-captured images in four steps. First, special edge pixels of the license plate are chosen to represent its main information. Second, a least squares method is used to compute the plate's inclination angle. Then, coordinate rotation is used to rotate the license plate. Finally, bilinear interpolation is used to improve the quality of the rotated plate. Experimental results demonstrate that the proposed method visibly corrects license plate inclination and improves the recognition rate when used as an image preprocessing step.
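
The last two steps above (coordinate rotation refined by bilinear interpolation) can be sketched in a few lines of NumPy. This is a generic inverse-mapping rotation, not the authors' code; the function and parameter names are illustrative:

```python
import numpy as np

def rotate_bilinear(img, angle_deg):
    """Rotate a 2-D grayscale image about its center, sampling the
    source with bilinear interpolation (inverse mapping)."""
    h, w = img.shape
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse rotation: for each output pixel, find its source location.
    sx = (xs - cx) * np.cos(theta) + (ys - cy) * np.sin(theta) + cx
    sy = -(xs - cx) * np.sin(theta) + (ys - cy) * np.cos(theta) + cy
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    valid = (x0 >= 0) & (x0 < w - 1) & (y0 >= 0) & (y0 < h - 1)
    fx, fy = sx - x0, sy - y0
    x0v, y0v = x0[valid], y0[valid]
    fxv, fyv = fx[valid], fy[valid]
    # Weighted average of the four neighbouring source pixels.
    out[valid] = (img[y0v, x0v] * (1 - fxv) * (1 - fyv)
                  + img[y0v, x0v + 1] * fxv * (1 - fyv)
                  + img[y0v + 1, x0v] * (1 - fxv) * fyv
                  + img[y0v + 1, x0v + 1] * fxv * fyv)
    return out
```

Because each output pixel blends its four nearest source pixels, the jagged edges produced by nearest-neighbor rotation are smoothed out, which is the quality improvement the abstract attributes to bilinear interpolation.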

Half-Pixel Accuracy Motion Estimation Algorithm in the Transform Domain for H.264 (H.264를 위한 주파수 영역에서의 반화소 정밀도 움직임 예측 알고리듬)

  • Kang, Min-Jung;Heo, Jae-Seong;Ryu, Chul
    • The Journal of Korean Institute of Communications and Information Sciences, v.33 no.11C, pp.917-924, 2008
  • Motion estimation and compensation in the spatial domain examine a search area of specified size in the previous frame to find the block that minimizes the difference with the current block. Scanning the search area consumes most of the encoding time because of its high complexity. This cost can be avoided by performing motion estimation with shifting matrices in the transform domain instead of the spatial domain. We derive the existing shifting matrix into a new recursive equation that further reduces computation, and we make simple modifications to the vertical and horizontal shifting matrices in the transform domain to achieve half-pixel-accuracy motion estimation, thereby avoiding the added computation of bilinear interpolation in the spatial domain. Simulation results show that motion estimation with the proposed algorithm in the DCT-based transform domain achieves higher PSNR with fewer bits than spatial-domain results.

A Fast Half Pixel Motion Estimation Method based on the Correlations between Integer pixel MVs and Half pixel MVs (정 화소 움직임 벡터와 반 화소 움직임 벡터의 상관성을 이용한 빠른 반 화소 움직임 추정 기법)

  • Yoon HyoSun;Lee GueeSang
    • The KIPS Transactions: Part B, v.12B no.2 s.98, pp.131-136, 2005
  • Motion estimation (ME) removes redundant data contained in a sequence of images and is an important part of video encoding systems, since it can significantly affect the quality of the encoded sequence. Generally, ME consists of two stages: integer-pixel motion estimation and half-pixel motion estimation. Many methods have been developed to reduce the computational complexity of the integer-pixel stage, but further work is needed to reduce the complexity of the half-pixel stage. In this paper, a method based on the correlations between integer-pixel motion vectors and half-pixel motion vectors is proposed for half-pixel motion estimation. The proposed method has lower computational complexity than the full half-pixel search method (FHSM), which requires bilinear interpolation of the half pixels and examines nine half-pixel points to find the half-pixel motion vector. Experimental results show that the proposed method is about 2.5~80 times faster than FHSM, with image quality degradation of only about 0.07~0.69 dB.
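
The baseline FHSM that this paper speeds up can be sketched as follows: around the best integer-pixel motion vector, evaluate the nine half-pixel candidates, generating each half-pixel patch by bilinear interpolation of the reference frame. This is a generic illustration of the strategy, with hypothetical names, not the paper's code:

```python
import numpy as np

def half_pixel_refine(ref, cur_block, int_mv, top_left):
    """Refine an integer-pixel MV over the nine half-pixel offsets
    around it (the FHSM baseline). Half-pixel samples come from
    bilinear interpolation of the reference frame `ref`."""
    by, bx = top_left          # block position in the current frame
    h, w = cur_block.shape
    best_cost, best_mv = None, int_mv
    for dy in (-0.5, 0.0, 0.5):
        for dx in (-0.5, 0.0, 0.5):
            my, mx = int_mv[0] + dy, int_mv[1] + dx
            y0, x0 = int(np.floor(by + my)), int(np.floor(bx + mx))
            fy, fx = (by + my) - y0, (bx + mx) - x0
            if y0 < 0 or x0 < 0 or y0 + h + 1 > ref.shape[0] or x0 + w + 1 > ref.shape[1]:
                continue   # candidate falls outside the reference frame
            # Bilinear blend of the four integer-pixel patches.
            patch = (ref[y0:y0 + h, x0:x0 + w] * (1 - fy) * (1 - fx)
                     + ref[y0:y0 + h, x0 + 1:x0 + w + 1] * (1 - fy) * fx
                     + ref[y0 + 1:y0 + h + 1, x0:x0 + w] * fy * (1 - fx)
                     + ref[y0 + 1:y0 + h + 1, x0 + 1:x0 + w + 1] * fy * fx)
            cost = np.abs(patch - cur_block).sum()   # SAD matching cost
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (my, mx)
    return best_mv, best_cost
```

The paper's contribution is precisely to avoid evaluating all nine candidates by exploiting the correlation between the integer-pixel and half-pixel motion vectors.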

Projection and Analysis of Future Temperature and Precipitation in East Asia Region Using RCP Climate Change Scenario (RCP 기반 동아시아 지역의 미래 기온 및 강수량 변화 분석)

  • Lee, Moon-Hwan;Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference, 2015.05a, pp.578-578, 2015
  • Most of East Asia is strongly influenced by the monsoon, so the seasonal variability of its water resources is large, and floods and droughts occur frequently. Changes in temperature and precipitation due to climate change may further aggravate this variability and increase water-related disaster damage. This study projects changes in temperature and precipitation over East Asia under climate change and analyzes their characteristics. Results from multiple GCMs for the two core CMIP5 experiments, the RCP4.5 and RCP8.5 scenarios, were used. The climate scenarios were spatially downscaled using bilinear interpolation, and bias correction was performed with the Delta method. To estimate the bias of the GCM simulations, APHRODITE temperature and precipitation data were used as observations. Although results differ by GCM, precipitation over Korea was underestimated by about 100~300 mm on average. For the projections, the historical period was set to 1976~2005 and the future periods to 2021~2050 (2040s) and 2061~2090 (2070s). Under RCP4.5, the annual mean temperature over Korea is projected to rise by about $1.4{\sim}1.7^{\circ}C$ (2040s) and $2.2{\sim}3.4^{\circ}C$ (2070s), and annual mean precipitation to increase by about 4.6~5.3% (2040s) and 8.4~10.5% (2070s). Under RCP8.5, the temperature rise is larger than under RCP4.5, while the precipitation changes are similar. Annual mean temperature and precipitation also increase over East Asia as a whole, but seasonal temperature and precipitation change in very different ways by region. In a region like East Asia, where seasonal precipitation patterns differ, this will play a very important role in floods and droughts. Therefore, seasonal precipitation changes should be analyzed regionally, and future work should directly assess flood and drought impacts based on runoff simulations.
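
The Delta-method bias correction mentioned in the abstract amounts to a one-line adjustment of the observed climatology by the model-projected change. The sketch below uses the generic formulation (function names are illustrative, not the authors' code); the additive form suits temperature, while a multiplicative form is commonly used for precipitation:

```python
def delta_correct_additive(obs_clim, gcm_hist, gcm_future):
    # Temperature: observed climatology plus the projected change.
    return obs_clim + (gcm_future - gcm_hist)

def delta_correct_multiplicative(obs_clim, gcm_hist, gcm_future):
    # Precipitation: observed climatology scaled by the projected ratio.
    return obs_clim * (gcm_future / gcm_hist)
```

Because only the change signal is taken from the GCM, any constant bias in the simulated historical climate (such as the 100~300 mm precipitation underestimate noted above) cancels out of the corrected projection.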


Implementation of Spatial Downscaling Method Based on Gradient and Inverse Distance Squared (GIDS) for High-Resolution Numerical Weather Prediction Data (고해상도 수치예측자료 생산을 위한 경도-역거리 제곱법(GIDS) 기반의 공간 규모 상세화 기법 활용)

  • Yang, Ah-Ryeon;Oh, Su-Bin;Kim, Joowan;Lee, Seung-Woo;Kim, Chun-Ji;Park, Soohyun
    • Atmosphere, v.31 no.2, pp.185-198, 2021
  • In this study, we examined a spatial downscaling method based on Gradient and Inverse Distance Squared (GIDS) weighting to produce high-resolution grid data from a numerical weather prediction model over the Korean Peninsula, a region with complex terrain. GIDS is a simple and effective geostatistical downscaling method using horizontal distance gradients and elevation. The predicted meteorological variables (e.g., temperature and 3-hr accumulated rainfall) from the Limited-area ENsemble prediction System (LENS; horizontal grid spacing of 3 km) are used by GIDS to produce a higher-resolution (1.5 km) data set, and the results are compared to those from bilinear interpolation. GIDS effectively produced high-resolution gridded temperature data with a continuous spatial distribution and a strong dependence on topography, and the results agreed better with observations as the search radius increased from 10 to 30 km. However, GIDS showed relatively lower performance for precipitation. Although GIDS is efficient at producing higher-resolution gridded temperature data, further study is required before it can be applied to rainfall events.
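
A minimal sketch of the GIDS idea follows: fit linear gradients of the variable with respect to x, y and elevation by least squares over the stations in the search radius, then combine gradient-adjusted station estimates with inverse-distance-squared weights. This follows the generic GIDS formulation; the function and variable names are illustrative, not the paper's code:

```python
import numpy as np

def gids_interpolate(xs, ys, zs, vals, xq, yq, zq):
    """GIDS estimate at query point (xq, yq) with elevation zq, from
    station coordinates (xs, ys), elevations zs and values vals."""
    # Least-squares fit: vals ~ b0 + gx*x + gy*y + gz*elev.
    A = np.column_stack([np.ones_like(xs), xs, ys, zs])
    beta = np.linalg.lstsq(A, vals, rcond=None)[0]   # [b0, gx, gy, gz]
    # Inverse-distance-squared weights (horizontal distance only).
    d2 = (xs - xq) ** 2 + (ys - yq) ** 2
    d2 = np.maximum(d2, 1e-12)                       # guard exact hits
    w = 1.0 / d2
    # Each station's value, corrected along the fitted gradients.
    est = (vals + beta[1] * (xq - xs)
                + beta[2] * (yq - ys)
                + beta[3] * (zq - zs))
    return float(np.sum(w * est) / np.sum(w))
```

The elevation term is what lets GIDS reproduce the topographic dependence of temperature that plain bilinear interpolation misses.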

A deep learning framework for wind pressure super-resolution reconstruction

  • Xiao Chen;Xinhui Dong;Pengfei Lin;Fei Ding;Bubryur Kim;Jie Song;Yiqing Xiao;Gang Hu
    • Wind and Structures, v.36 no.6, pp.405-421, 2023
  • Strong wind is a main cause of damage to high-rise buildings, often creating large economic losses and casualties, and wind pressure plays a critical role in wind effects on buildings. Obtaining a high-resolution wind pressure field usually requires a massive number of pressure taps. In this study, two traditional methods, bilinear and bicubic interpolation, and two deep learning techniques, Residual Networks (ResNet) and Generative Adversarial Networks (GANs), are employed to reconstruct the wind pressure field from limited pressure taps on the surface of an idealized building from the TPU database. The GANs model exhibits the best performance in reconstructing the wind pressure field, and selecting the retained pressure taps by k-means clustering as model input significantly improves its reconstruction ability. Finally, the generalization ability of the k-means-based GANs model is verified on an actual engineering structure. Importantly, with inputs selected by k-means clustering, the GANs model achieves satisfactory reconstruction of the wind pressure field even with only 20% of the pressure taps. The approach is therefore expected to save a large number of pressure taps while achieving timely and accurate reconstruction of the wind pressure field.
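
Of the four reconstruction methods compared, bilinear interpolation is the simplest baseline. A minimal NumPy sketch of bilinearly upsampling a coarse field sampled on a regular grid is shown below; the regular-grid assumption and the names are illustrative, since the paper's tap layout need not be regular:

```python
import numpy as np

def bilinear_upsample(field, factor):
    """Upsample a coarse 2-D field (e.g. coarse pressure readings on a
    regular grid) by an integer factor using bilinear interpolation."""
    h, w = field.shape
    H, W = (h - 1) * factor + 1, (w - 1) * factor + 1
    ys = np.linspace(0, h - 1, H)       # fine-grid coordinates in
    xs = np.linspace(0, w - 1, W)       # coarse-grid units
    y0 = np.minimum(ys.astype(int), h - 2)   # lower-left coarse cell
    x0 = np.minimum(xs.astype(int), w - 2)
    ty = (ys - y0)[:, None]             # fractional offsets in the cell
    tx = (xs - x0)[None, :]
    Y0, X0 = y0[:, None], x0[None, :]
    return ((1 - ty) * (1 - tx) * field[Y0, X0]
            + (1 - ty) * tx * field[Y0, X0 + 1]
            + ty * (1 - tx) * field[Y0 + 1, X0]
            + ty * tx * field[Y0 + 1, X0 + 1])
```

Being a purely local weighted average, this baseline cannot recover sharp pressure gradients between taps, which is the gap the ResNet and GANs models are meant to close.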

Development of Regularized Expectation Maximization Algorithms for Fan-Beam SPECT Data (부채살 SPECT 데이터를 위한 정칙화된 기댓값 최대화 재구성기법 개발)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Soo-Jin;Kim, Kyeong-Min;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine, v.39 no.6, pp.464-472, 2005
  • Purpose: SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For reconstruction from fan-beam projections, it is desirable to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances compared. Materials and Methods: The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step-late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into parallel data using various interpolation methods, such as nearest-neighbor, bilinear and bicubic interpolation, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms, and the reconstructed images were compared in terms of a percent error metric. Results: For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best results in both percent error and stability. Bilinear interpolation was the most effective rebinning method from the fan-beam to the parallel geometry when both accuracy and computational load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Conclusion: Direct fan-beam reconstruction algorithms were implemented and provided significantly improved reconstructions.

Evaluation of Damaged Stand Volume in Burned Area of Mt. Weol-A using Remotely Sensed Data (위성자료를 이용한 산화지의 입목 손실량 평가)

  • Ma, Ho-Seop;Chung, Young-Gwan;Jung, Su-Young;Choi, Dong-Wook
    • Journal of the Korean Association of Geographic Information Studies, v.2 no.2, pp.79-86, 1999
  • This study estimated the area of damaged forest and the volume of standing trees in a burned area on Mt. Weol-A in eastern Chinju, Korea, using digital maps derived from supervised classification of Landsat Thematic Mapper (TM) imagery as reference data. A Criterion laser estimator and the WinDENDRO$^{tm}$ (v. 6.3b) computer-aided tree-ring measuring system were used to measure the volume and age of sampled trees. The sample site was chosen in unburned areas with the same terrain condition and forest type as the burned areas. The tree age, diameter at breast height, tree height and volume of the sample trees were 27 years, 20.9 cm, 9.7 m and $0.1396m^3$, respectively. The total stand volume of the sample site was estimated at $2.9316m^3$/0.04 ha, and the damaged stand volume was evaluated at about $16,007m^3$ over the 218.4 ha burned area.
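
The damaged-volume figure follows from scaling the sample-plot volume to the burned area, which can be checked with one line of arithmetic:

```python
# Reproduce the damaged-volume figure reported in the abstract:
# scale the 0.04 ha sample-plot volume up to the 218.4 ha burned area.
volume_per_plot_m3 = 2.9316    # total stand volume of the 0.04 ha plot
plot_area_ha = 0.04
burned_area_ha = 218.4
damaged_volume_m3 = volume_per_plot_m3 / plot_area_ha * burned_area_ha
print(round(damaged_volume_m3))  # 16007, matching the reported ~16,007 m^3
```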


Oil Spill Visualization and Particle Matching Algorithm (유출유 이동 가시화 및 입자 매칭 알고리즘)

  • Lee, Hyeon-Chang;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society, v.11 no.3, pp.53-59, 2020
  • Initial response is important in marine oil spills such as the Hebei Spirit spill, but predicting the movement of oil in the ocean, where there are many variables, is very difficult. To address this problem, oil spill forecasting was carried out by extending earlier work that predicts the movement of floating objects on the sea using drifter data. From ocean data stored in HDF5 format, the current and wind velocities at a specific location were extracted using bilinear interpolation; the movement of numerous points was then predicted with particles, and the results were visualized using polygons and heat maps. In addition, we propose a spilled-oil particle matching algorithm to compensate for the lack of data and the discrepancy between the observed spilled oil and the predicted movement. The algorithm tracks the movement of particles by discretizing the observed surface oil into particles; the problem was segmented using principal component analysis and matched with a genetic algorithm so that the variance of the travel distances of the spilled oil is minimized. Verification against the spilled-oil visualization data confirmed that the particle matching algorithm using principal component analysis and a genetic algorithm showed the best performance, with a mean data error of 3.2%.
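
The extraction step above, sampling a gridded current or wind field at an arbitrary position, is a direct application of bilinear interpolation. A minimal sketch for a regularly spaced grid follows (the grid origin/spacing arguments and names are illustrative; the actual HDF5 layout is not specified here):

```python
import numpy as np

def bilinear_sample(grid, lat0, lon0, dlat, dlon, lat, lon):
    """Sample a regularly spaced 2-D field (e.g. a current or wind
    component read from an HDF5 file) at an arbitrary (lat, lon)."""
    fy = (lat - lat0) / dlat            # fractional row index
    fx = (lon - lon0) / dlon            # fractional column index
    y0, x0 = int(np.floor(fy)), int(np.floor(fx))
    ty, tx = fy - y0, fx - x0
    # Weighted average of the four surrounding grid values.
    return ((1 - ty) * (1 - tx) * grid[y0, x0]
            + (1 - ty) * tx * grid[y0, x0 + 1]
            + ty * (1 - tx) * grid[y0 + 1, x0]
            + ty * tx * grid[y0 + 1, x0 + 1])
```

Each particle's drift step would then combine the interpolated current and wind components at its current position.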

A modified U-net for crack segmentation by Self-Attention-Self-Adaption neuron and random elastic deformation

  • Zhao, Jin;Hu, Fangqiao;Qiao, Weidong;Zhai, Weida;Xu, Yang;Bao, Yuequan;Li, Hui
    • Smart Structures and Systems, v.29 no.1, pp.1-16, 2022
  • Despite recent breakthroughs in deep learning and computer vision, pixel-wise identification of tiny objects in high-resolution images with complex disturbances remains challenging. This study proposes a modified U-net for tiny crack segmentation in real-world steel-box-girder bridges. The modified U-net adopts the common U-net framework with a novel Self-Attention-Self-Adaption (SASA) neuron as the fundamental computing element. The Self-Attention module applies softmax and gate operations to obtain an attention vector, enabling the neuron to focus on the most significant receptive fields when processing large-scale feature maps. The Self-Adaption module consists of a multilayer perceptron subnet and achieves deeper feature extraction inside a single neuron. For data augmentation, a grid-based crack random elastic deformation (CRED) algorithm is designed to enrich the diversity and irregular shapes of distributed cracks: grid-based uniform control nodes are first set on both the input images and the binary labels, random offsets are then applied to these control nodes, and bilinear interpolation is performed for the remaining pixels. The proposed SASA neuron and CRED algorithm are deployed together to train the modified U-net. 200 raw images with a high resolution of 4928 × 3264 are collected, 160 for training and the remaining 40 for testing, and 512 × 512 patches are generated from the original images by a sliding window with an overlap of 256 as inputs. Results show that the average IoU between the recognized and ground-truth cracks reaches 0.409, which is 29.8% higher than the regular U-net. A five-fold cross-validation study verifies that the proposed method is robust to different training and test images, and ablation experiments further demonstrate the effectiveness of the SASA neuron and CRED algorithm: the IoU gains obtained with the SASA and CRED modules individually add up to the gain of the full model, indicating that the two modules contribute to different stages of the model and the data in the training process.
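
The grid-based deformation described for CRED (random offsets at uniform control nodes, bilinear interpolation for the remaining pixels) can be sketched as follows. The parameter names and the uniform offset distribution are assumptions for illustration, not the authors' code:

```python
import numpy as np

def random_elastic_field(h, w, step, max_offset, rng):
    """Dense displacement field for a grid-based random elastic
    deformation: random offsets drawn at coarse control nodes spaced
    `step` pixels apart, bilinearly interpolated to every pixel."""
    gy = np.arange(0, h + step, step)       # control-node rows
    gx = np.arange(0, w + step, step)       # control-node columns
    node_dy = rng.uniform(-max_offset, max_offset, (len(gy), len(gx)))
    node_dx = rng.uniform(-max_offset, max_offset, (len(gy), len(gx)))
    ys, xs = np.mgrid[0:h, 0:w]
    iy, ix = ys // step, xs // step          # enclosing control cell
    ty, tx = (ys % step) / step, (xs % step) / step

    def interp(node):
        # Bilinear blend of the four surrounding control-node offsets.
        return ((1 - ty) * (1 - tx) * node[iy, ix]
                + (1 - ty) * tx * node[iy, ix + 1]
                + ty * (1 - tx) * node[iy + 1, ix]
                + ty * tx * node[iy + 1, ix + 1])

    return interp(node_dy), interp(node_dx)
```

Applying the same displacement field to both the image and its binary label keeps the deformed crack and its ground truth aligned, which is what makes the augmentation usable for segmentation training.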