• Title/Summary/Keyword: Upsampling (업 샘플링)

77 search results

Epipolar Image Resampling from Kompsat-3 In-track Stereo Images (아리랑3호 스테레오 영상의 에피폴라 기하 분석 및 영상 리샘플링)

  • Oh, Jae Hong;Seo, Doo Chun;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.31 no.6_1
    • /
    • pp.455-461
    • /
    • 2013
  • Kompsat-3 is an optical high-resolution Earth-observation satellite launched in May 2012. The AEISS sensor of the Korean satellite provides 0.7 m panchromatic and 2.8 m multi-spectral images with a 16.8 km swath width from a sun-synchronous, near-circular orbit at 685 km altitude. Kompsat-3 is more advanced than Kompsat-2; its improvements include better agility, such as in-track stereo acquisition capability. This study investigated the characteristics of the epipolar curves of in-track Kompsat-3 stereo images. To this end, we used the RPCs (Rational Polynomial Coefficients) to derive the epipolar curves over the entire image area and found that a third-order polynomial equation is required to model the curves. In addition, we observed two different groups of curve patterns due to the dual CCDs of the AEISS sensor. From the experiment we concluded that a third-order polynomial-based RPC update is required to minimize image distortion in the sample direction. Finally, we carried out an epipolar resampling experiment; the third-order polynomial image transformation produced a y-parallax of less than 0.7 pixels.
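The core modeling step in this abstract can be illustrated with a hedged sketch: fitting first- and third-order polynomials to synthetic epipolar-curve samples (all coefficients and noise levels below are made-up illustrative values, not Kompsat-3 data) and comparing the residual y-parallax each model leaves.

```python
import numpy as np

# Synthetic epipolar-curve samples: y-parallax as a cubic function of the
# image sample (x) coordinate, plus small noise (illustrative values only).
rng = np.random.default_rng(0)
x = np.linspace(0, 16000, 50)                  # sample coordinates (pixels)
true_coeffs = [1e-12, -2e-8, 3e-4, 0.5]        # cubic ... constant terms
y = np.polyval(true_coeffs, x) + rng.normal(0, 0.05, x.size)

# Fit first- and third-order polynomials and compare worst-case residuals.
fit1 = np.polyfit(x, y, 1)
fit3 = np.polyfit(x, y, 3)
res1 = np.abs(np.polyval(fit1, x) - y).max()   # max y-parallax, linear model
res3 = np.abs(np.polyval(fit3, x) - y).max()   # max y-parallax, cubic model
print(res1, res3)
```

On data with genuine cubic curvature, the linear model leaves systematic residuals while the third-order fit reduces them to the noise level, mirroring the paper's conclusion that a third-order polynomial is needed.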

Underwater Moving Target Simulation by Transmission Line Matrix Modeling Approach (전달선로행렬 모델링에 의한 수중물체의 이동 시뮬레이션 방법에 대한 연구)

  • Park, Kyu-Chil;Yoon, Jong Rak
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.8
    • /
    • pp.1777-1783
    • /
    • 2013
  • We study the simulation of the Doppler effect caused by a target moving under the sea, using Transmission Line Matrix (TLM) modeling, a time-domain numerical method. To implement the effect, the input signal was injected at a node that moves according to the target's speed. The result had a maximum error of 2.47% compared with the theoretical value, and simulations in which the target's speed was varied likewise gave reasonable results within a 0.63% error range.
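The theoretical value the simulation is compared against is the classical Doppler shift; a minimal sketch (the sound speed, source frequency, and target speed below are assumed round numbers, not the paper's parameters):

```python
# Theoretical Doppler shift for a source moving toward a stationary
# receiver in water (all numbers assumed for illustration).
c = 1500.0      # sound speed in seawater (m/s), assumed
f0 = 10_000.0   # source frequency (Hz), hypothetical
v = 5.0         # target speed toward the receiver (m/s), hypothetical

f_received = f0 * c / (c - v)           # received frequency (Hz)
shift_pct = (f_received - f0) / f0 * 100
print(round(f_received, 1), round(shift_pct, 3))
```

A TLM simulation of the same scenario would be judged by how closely its spectral peak matches `f_received`.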

Temporally-Consistent High-Resolution Depth Video Generation in Background Region (배경 영역의 시간적 일관성이 향상된 고해상도 깊이 동영상 생성 방법)

  • Shin, Dong-Won;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.20 no.3
    • /
    • pp.414-420
    • /
    • 2015
  • The quality of depth images is important in a 3D video system for representing complete 3D content. However, the original depth image from a depth camera has low resolution and a flickering problem, in which depth values vibrate over time. This problem causes discomfort when viewing 3D content. To solve the low-resolution problem, we employ 3D warping and a depth-weighted joint bilateral filter. A temporal mean filter can be applied to solve the flickering problem, but it introduces a residual-spectrum problem in the depth image. Thus, after classifying foreground and background regions, we use an upsampled depth image for the foreground region and a temporal mean image for the background region. Test results show that the proposed method generates a temporally consistent depth video with high resolution.
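The final combination step described above can be sketched as follows; this is a toy version (nearest-neighbor upsampling stands in for the paper's 3D warping plus depth-weighted joint bilateral filter, and the foreground mask is hard-coded):

```python
import numpy as np

# Toy depth history: five noisy low-resolution 8x8 depth frames.
rng = np.random.default_rng(1)
frames = rng.normal(100.0, 2.0, size=(5, 8, 8))
current = frames[-1]

# 2x upsampling of the current frame (foreground source; nearest-neighbor
# here, a stand-in for the paper's warping + filtering pipeline).
up = np.repeat(np.repeat(current, 2, axis=0), 2, axis=1)

# Temporal mean suppresses flicker in static background regions.
temporal_mean = frames.mean(axis=0)
mean_up = np.repeat(np.repeat(temporal_mean, 2, axis=0), 2, axis=1)

# Toy foreground mask: treat the upper-left quadrant as moving foreground.
fg = np.zeros_like(up, dtype=bool)
fg[:8, :8] = True

# Foreground gets the upsampled current depth; background gets the mean.
result = np.where(fg, up, mean_up)
print(result.shape)
```

The key design point survives even in this sketch: temporal averaging is only safe where the scene is static, so the mask decides per pixel which source to trust.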

Multi-scale Texture Synthesis (다중 스케일 텍스처 합성)

  • Lee, Sung-Ho;Park, Han-Wook;Lee, Jung;Kim, Chang-Hun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.14 no.2
    • /
    • pp.19-25
    • /
    • 2008
  • We synthesize a texture with different structures at different scales. Our technique is based on deterministic parallel synthesis allowing real-time processing on a GPU. A new coordinate transformation operator is used to construct a synthesized coordinate map based on different exemplars at different scales. The runtime overhead is minimal because this operator can be precalculated as a small lookup table. Our technique is effective for upsampling texture-rich images, because the result preserves texture detail well. In addition, a user can design a texture by coloring a low-resolution control image. This design tool can also be used for the interactive synthesis of terrain in the style of a particular exemplar, using the familiar 'raise and lower' airbrush to specify elevation.
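The precalculated-lookup idea in this abstract can be sketched in a hedged, greatly simplified form (the coordinate map below is random rather than produced by the paper's coordinate transformation operator):

```python
import numpy as np

# Toy exemplar texture.
rng = np.random.default_rng(0)
exemplar = rng.random((8, 8))

# Precompute a coordinate map once (random tiling here for brevity; the
# paper derives it from its coordinate transformation operator).
lut_y = rng.integers(0, 8, size=(16, 16))
lut_x = rng.integers(0, 8, size=(16, 16))

# Runtime: a single gather per output pixel, cheap enough for a GPU shader.
synthesized = exemplar[lut_y, lut_x]
print(synthesized.shape)
```

This shows why the runtime overhead is minimal: once the map exists, synthesis is one table lookup per pixel regardless of how the map was built.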


Sequential Bayesian Updating Module of Input Parameter Distributions for More Reliable Probabilistic Safety Assessment of HLW Radioactive Repository (고준위 방사성 폐기물 처분장 확률론적 안전성평가 신뢰도 제고를 위한 입력 파라미터 연속 베이지안 업데이팅 모듈 개발)

  • Lee, Youn-Myoung;Cho, Dong-Keun
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.18 no.2
    • /
    • pp.179-194
    • /
    • 2020
  • A Bayesian approach was introduced to improve the belief of the prior distributions of input parameters for the probabilistic safety assessment of a radioactive waste repository. A GoldSim-based module was developed using the Markov chain Monte Carlo algorithm and implemented through GSTSPA (GoldSim Total System Performance Assessment), a GoldSim template for generic/site-specific safety assessment of the radioactive repository system. In this study, sequential Bayesian updating of prior distributions is comprehensively explained and used as a basis for conducting a reliable safety assessment of the repository. For several selected parameters associated with nuclide transport in the fractured rock medium, the prior distribution was updated to three sequential posterior distributions with assumed likelihood functions. The process was demonstrated through a probabilistic safety assessment of a conceptual repository for illustrative purposes. This study showed that even limited observed data can enhance the belief of the prior distributions of commonly available input parameter values, which are usually uncertain. This is particularly applicable to nuclide behavior in and around the repository system, which typically involves a long time span and a wide modeling domain.
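Sequential Bayesian updating of a single uncertain input parameter can be illustrated with a hedged sketch; a conjugate normal prior/likelihood replaces the paper's MCMC module, and all numerical values are hypothetical:

```python
# Sequential Bayesian updating of one uncertain parameter (conjugate
# normal-normal model; a simplification of the paper's MCMC approach).
mu, var = 1.0e-10, (0.5e-10) ** 2   # prior mean/variance (hypothetical)
obs_var = (0.2e-10) ** 2            # assumed observation variance

posteriors = []
for obs in [1.4e-10, 1.3e-10, 1.5e-10]:   # three sequential observations
    # Conjugate update: posterior precision is the sum of precisions,
    # and the mean is a precision-weighted average of prior and data.
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    mu = post_var * (mu / var + obs / obs_var)
    var = post_var
    posteriors.append((mu, var))

print(posteriors[-1])
```

Each observation tightens the variance and pulls the mean toward the data, which is the sense in which even a few observations "enhance the belief" of an initially vague prior.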

Super-Resolution Image Processing Algorithm Using Hybrid Up-sampling (하이브리드 업샘플링을 이용한 베이시안 초해상도 영상처리)

  • Park, Jong-Hyun;Kang, Moon-Gi
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.57 no.2
    • /
    • pp.294-302
    • /
    • 2008
  • In this paper, we present a new image up-sampling method that registers low-resolution images to the high-resolution grid when Bayesian super-resolution image processing is performed. The proposed up-sampling method interpolates high-resolution pixels using the high-frequency data present in all the low-resolution images, instead of up-sampling each low-resolution image separately. The interpolation is based on B-spline non-uniform re-sampling, adjusted for super-resolution image processing. The experimental results demonstrate the effects of applying commonly used up-sampling methods, such as zero-padding or bilinear interpolation, to super-resolution image reconstruction. We then show, with quantitative and qualitative assessment measures, that the proposed hybrid up-sampling method generates high-resolution images more accurately than conventional methods.
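The idea of registering several low-resolution images to one high-resolution grid (rather than up-sampling each alone) can be sketched in one dimension; this toy example is not the paper's B-spline scheme, and the half-pixel shift is an idealized assumption:

```python
import numpy as np

# "True" high-resolution 1-D signal.
hr = np.sin(np.linspace(0, 2 * np.pi, 16, endpoint=False))

# Two low-resolution samplings of the same scene, offset by half an LR
# pixel (an idealized registration; real offsets must be estimated).
lr_a = hr[0::2]          # LR image sampled at even HR positions
lr_b = hr[1::2]          # LR image sampled at odd HR positions

# Register each LR image to its position on the HR grid and interleave,
# instead of interpolating either LR image separately.
merged = np.empty(16)
merged[0::2] = lr_a
merged[1::2] = lr_b

print(np.allclose(merged, hr))
```

With perfect half-pixel registration the HR signal is recovered exactly, which no interpolation of a single LR image can achieve; the paper's B-spline non-uniform re-sampling generalizes this to arbitrary offsets.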

Joint Bilateral Upsampling using Variance (분산 값을 이용한 결합 양측 업샘플링)

  • Lee, Dong-Woo;Kim, Manbae
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2012.07a
    • /
    • pp.398-400
    • /
    • 2012
  • With the recent surge of interest in 3D, high-quality depth images are needed to obtain high-quality 3D video, and research to this end is actively underway. Time-of-Flight (ToF) depth sensors are used to acquire depth images; they can capture depth information in real time, but suffer from low resolution and noise. The low-resolution depth map produced by the depth sensor therefore has to be converted to high resolution. Joint Bilateral Upsampling (JBU) is mainly used to increase the resolution of depth images. This paper proposes a method that improves depth-image quality by extending JBU to weight the reference image differently per block according to its variance.


Sampling-based Super Resolution U-net for Pattern Expression of Local Areas (국소부위 패턴 표현을 위한 샘플링 기반 초해상도 U-Net)

  • Lee, Kyo-Seok;Gal, Won-Mo;Lim, Myung-Jae
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.5
    • /
    • pp.185-191
    • /
    • 2022
  • In this study, we propose a novel super-resolution neural network based on U-Net, a residual neural network, and sub-pixel convolution. To prevent the loss of detailed information caused by the max pooling in U-Net, we propose down-sampling and connection using sub-pixel convolution. This uses all pixels in the filter, unlike max pooling, which creates a new feature map from only the maximum value in the filter. As a 2×2 filter passes over the image, it creates feature maps consisting only of the pixels in the upper-left, upper-right, lower-left, and lower-right positions. This halves the spatial size and quadruples the number of feature maps. We also propose two methods to reduce computation. The first uses sub-pixel convolution, which requires no computation and performs better, instead of up-convolution. The second uses a layer that adds two feature maps instead of the connection layer of U-Net. Experiments show better PSNR values on all scales and benchmark datasets except for the Set5 data at scale 2, and the results represent local-area patterns well.
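The sub-pixel (space-to-depth) down-sampling described above can be sketched directly: a 2×2 block becomes four channels, so every pixel is preserved, whereas max pooling would keep only one value per block. The helper name is hypothetical:

```python
import numpy as np

def space_to_depth(x, r=2):
    """Rearrange (H, W, C) into (H/r, W/r, r*r*C): each rxr block
    becomes r*r channels, so no pixel is discarded."""
    h, w, c = x.shape
    x = x.reshape(h // r, r, w // r, r, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h // r, w // r, r * r * c)

img = np.arange(16, dtype=float).reshape(4, 4, 1)
out = space_to_depth(img)
print(out.shape)   # half the spatial size, four times the channels
```

The first output position collects the upper-left, upper-right, lower-left, and lower-right pixels of the first 2×2 block, exactly the grouping the abstract describes; the operation itself involves no arithmetic, only reindexing.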

Investigation of the Super-resolution Algorithm for the Prediction of Periodontal Disease in Dental X-ray Radiography (치주질환 예측을 위한 치과 X-선 영상에서의 초해상화 알고리즘 적용 가능성 연구)

  • Kim, Han-Na
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.2
    • /
    • pp.153-158
    • /
    • 2021
  • X-ray image analysis is a very important field for improving the early diagnosis rate and prediction accuracy of periodontal disease. Research on developing and applying artificial-intelligence-based algorithms to improve the quality of such dental X-ray images is being conducted widely worldwide. The aim of this study was therefore to design a super-resolution algorithm for predicting periodontal disease and to evaluate its applicability to dental X-ray images. The super-resolution algorithm was constructed from convolution layers and ReLU, and an image obtained by up-sampling a low-resolution image by a factor of 2 was used as the input. A total of 1,500 dental X-ray images were used for deep-learning training. Quantitative evaluation of the images used root mean square error and structural similarity, factors that measure similarity by comparing two images. In addition, the recently developed no-reference natural image quality evaluator and the blind/referenceless image spatial quality evaluator were also analyzed. According to the results, the average similarity and no-reference-based evaluation values improved by factors of 1.86 and 2.14, respectively, compared to the existing bicubic-based up-sampling method. In conclusion, the super-resolution algorithm for predicting periodontal disease proved useful for dental X-ray images and is expected to be highly applicable in various fields in the future.
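The input pipeline described above (2× up-sampling followed by convolution and ReLU) can be sketched as follows; the kernel is random and untrained, and nearest-neighbor up-sampling stands in for the interpolation step:

```python
import numpy as np

rng = np.random.default_rng(0)
lr = rng.random((32, 32))          # toy low-resolution grayscale image

# 2x up-sampling to the network's input size (nearest-neighbor here;
# the paper compares against bicubic interpolation).
up = np.repeat(np.repeat(lr, 2, axis=0), 2, axis=1)

# One 3x3 convolution followed by ReLU, the building block of the
# super-resolution network (random, untrained kernel for illustration).
k = rng.normal(0, 0.1, (3, 3))
pad = np.pad(up, 1)                # zero-pad so output size matches input
conv = np.zeros_like(up)
for dy in range(3):
    for dx in range(3):
        conv += k[dy, dx] * pad[dy:dy + 64, dx:dx + 64]
feat = np.maximum(conv, 0.0)       # ReLU activation
print(up.shape, feat.shape)
```

A trained network stacks several such conv + ReLU layers so that the output refines the up-sampled image rather than merely smoothing it.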

A Study on Adaptive Learning Model for Performance Improvement of Stream Analytics (실시간 데이터 분석의 성능개선을 위한 적응형 학습 모델 연구)

  • Ku, Jin-Hee
    • Journal of Convergence for Information Technology
    • /
    • v.8 no.1
    • /
    • pp.201-206
    • /
    • 2018
  • Recently, as technologies for realizing artificial intelligence have become common, machine learning is widely used. Machine learning provides insight by collecting large amounts of data, batch-processing them, and taking a final action, but the effects of that action are not immediately integrated into the learning process. This paper proposes an adaptive learning model to improve the performance of real-time stream analytics, a pressing business issue. Adaptive learning generates an ensemble by adapting to the complexity of the data set, and the algorithm determines the optimal data points to sample. In an experiment on six standard data sets, the adaptive learning model outperformed a simple machine learning model for classification in both learning time and accuracy. In particular, the support vector machine showed excellent performance at the end of all ensembles. Adaptive learning is expected to be applicable to a wide range of problems that require adaptive updating in response to changes in various parameters over time.
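Choosing "the optimal data point to sample" can be sketched in the spirit of the abstract with a toy uncertainty-driven rule: sample where a (here trivial) ensemble disagrees the most. All details below are assumptions, not the paper's algorithm:

```python
import numpy as np

# Unlabeled 1-D pool standing in for incoming stream points.
rng = np.random.default_rng(0)
pool = rng.random(100)

# Toy "ensemble": three threshold classifiers (hypothetical stand-in
# for the paper's adaptively generated ensemble).
thresholds = [0.3, 0.5, 0.7]
votes = np.stack([(pool > t).astype(int) for t in thresholds])

# Agreement is 0.5 when all members vote alike, 0 at a 50/50 split;
# the next point to sample is where disagreement is maximal.
agreement = np.abs(votes.mean(axis=0) - 0.5)
pick = int(np.argmin(agreement))
print(pool[pick])
```

The selected point always lies between the thresholds, i.e. in the region where the ensemble is least certain, which is where a new label is most informative.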