• Title/Summary/Keyword: Upsampling (업샘플링)

Search results: 77

Sequential Bayesian Updating Module of Input Parameter Distributions for More Reliable Probabilistic Safety Assessment of HLW Radioactive Repository (고준위 방사성 폐기물 처분장 확률론적 안전성평가 신뢰도 제고를 위한 입력 파라미터 연속 베이지안 업데이팅 모듈 개발)

  • Lee, Youn-Myoung;Cho, Dong-Keun
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT), v.18 no.2, pp.179-194, 2020
  • A Bayesian approach was introduced to improve the belief of the prior distributions of input parameters for the probabilistic safety assessment of a radioactive waste repository. A GoldSim-based module was developed using the Markov chain Monte Carlo algorithm and implemented through GSTSPA (GoldSim Total System Performance Assessment), a GoldSim template for generic/site-specific safety assessment of the radioactive repository system. In this study, sequential Bayesian updating of prior distributions is comprehensively explained and used as a basis for conducting a reliable safety assessment of the repository. For several selected parameters associated with nuclide transport in the fractured rock medium, the prior distribution was updated into three sequential posterior distributions using assumed likelihood functions. The process was demonstrated through a probabilistic safety assessment of a conceptual repository for illustrative purposes. This study showed that even insufficient observed data can enhance the belief of the commonly available, and usually uncertain, prior distributions of input parameter values. This is particularly applicable to nuclide behavior in and around the repository system, which typically involves a long time span and a wide modeling domain.
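The sequential updating idea, where the posterior from one batch of observations becomes the prior for the next, can be illustrated with a minimal conjugate-Normal sketch; this stands in for the paper's GoldSim/MCMC module, and all numbers below are hypothetical:

```python
import numpy as np

def bayes_update_normal(prior_mu, prior_var, data, lik_var):
    """Conjugate update of a Normal prior on a mean, given Normal observations."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / lik_var)
    post_mu = post_var * (prior_mu / prior_var + np.sum(data) / lik_var)
    return post_mu, post_var

mu, var = 0.0, 10.0  # diffuse prior belief about the parameter
# three sequential batches of (hypothetical) observations:
# each posterior becomes the prior for the next update
for batch in ([1.2, 0.8], [1.1], [0.9, 1.0, 1.05]):
    mu, var = bayes_update_normal(mu, var, batch, lik_var=0.5)
# the posterior variance shrinks after every batch, i.e. belief improves
```

With MCMC in place of the closed-form update, the same prior-to-posterior chaining applies to non-conjugate likelihoods.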

New Adaptive Interpolation Based on Edge Direction extracted from the DCT Coefficient Distribution (DCT 계수 분포를 이용해 추출한 edge 방향성에 기반한 새로운 적응적 보간 기법)

  • Kim, Jaehun;Kim, Kibaek;Jeon, Gwanggil;Jeong, Jechang
    • Journal of Broadcast Engineering, v.18 no.1, pp.10-20, 2013
  • Video technology has improved dramatically, and as it improves, multimedia devices and user demands diversify. A video codec used in these devices should therefore support displays with various resolutions. The technology that generates a high-resolution image from an associated low-resolution image is called interpolation. Interpolation is generally performed in either the spatial domain or the DCT domain. To exploit the advantages of both domains, we propose a new adaptive interpolation algorithm based on the edge direction extracted from the DCT coefficient distribution. The experimental results demonstrate that our algorithm performs well in terms of PSNR and reduces blocking artifacts.
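The DCT-domain half of such schemes typically upsamples by zero-padding the high-frequency coefficients; a minimal 1-D numpy sketch of that idea (not the paper's adaptive algorithm) is:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] *= 1.0 / np.sqrt(2)          # DC row scaling for orthonormality
    return m * np.sqrt(2.0 / n)

def dct_upsample(signal, factor):
    """Upsample a 1-D signal by zero-padding its high-frequency DCT coefficients."""
    n = len(signal)
    coeffs = dct_matrix(n) @ signal
    padded = np.zeros(n * factor)
    padded[:n] = coeffs * np.sqrt(factor)   # rescale so amplitudes are preserved
    return dct_matrix(n * factor).T @ padded  # inverse DCT of the padded spectrum

smooth = np.ones(4)
up = dct_upsample(smooth, 2)   # length 8, still (approximately) constant
```

A spatial-domain interpolator would instead operate on pixel neighborhoods directly; an adaptive scheme switches between the two based on local edge direction.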

Analysis of Optimal Resolution and Number of GCP Chips for Precision Sensor Modeling Efficiency in Satellite Images (농림위성영상 정밀센서모델링 효율성 재고를 위한 최적의 해상도 및 지상기준점 칩 개수 분석)

  • Choi, Hyeon-Gyeong;Kim, Taejung
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1445-1462, 2022
  • Compact Advanced Satellite 500-4 (CAS500-4), scheduled for launch in 2025, is a mid-resolution satellite with a 5 m resolution developed for wide-area agriculture and forest observation. To utilize satellite images, it is important to establish a precision sensor model with accurate geometric information. Previous research reported that a precision sensor model can be established automatically by matching ground control point (GCP) chips against satellite images. Therefore, to improve the geometric accuracy of satellite images, the GCP chip matching performance must be improved. This paper proposes an improved GCP chip matching scheme for precision sensor modeling of mid-resolution satellite images. When matching high-resolution GCP chips against mid-resolution satellite images, there are two major issues: handling the resolution difference between the GCP chips and the satellite images, and finding the optimal quantity of GCP chips. To address these issues, this study compared and analyzed chip matching performance for various satellite image upsampling factors and various numbers of chips. RapidEye images with a resolution of 5 m were used as the mid-resolution satellite images. GCP chips were prepared from aerial orthoimages with a resolution of 0.25 m and satellite orthoimages with a resolution of 0.5 m. Accuracy analysis was performed using manually extracted reference points. The experiments show that upsampling factors of two and three significantly improved sensor model accuracy, and that accuracy was maintained with a reduced number of GCP chips, around 100. These results confirm that high-resolution GCP chips can be applied to automated precision sensor modeling of mid-resolution satellite images with improved accuracy, and they are expected to support the establishment of a precise sensor model for CAS500-4.
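Upsampling the satellite image by an integer factor before chip matching can be as simple as nearest-neighbor replication; a toy numpy sketch (a real matching pipeline would more likely use bilinear or bicubic resampling):

```python
import numpy as np

def upsample_nearest(img, factor):
    """Nearest-neighbor upsampling by an integer factor along rows and columns."""
    # np.kron repeats each pixel as a factor x factor block
    return np.kron(img, np.ones((factor, factor), dtype=img.dtype))

img = np.array([[1, 2],
                [3, 4]])
up = upsample_nearest(img, 3)   # 2x2 -> 6x6
```

With factor 2 or 3, a 5 m satellite image reaches an effective 2.5 m or 1.7 m grid, reducing the scale gap to 0.25-0.5 m GCP chips before correlation-based matching.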

Detection of Zebra-crossing Areas Based on Deep Learning with Combination of SegNet and ResNet (SegNet과 ResNet을 조합한 딥러닝에 기반한 횡단보도 영역 검출)

  • Liang, Han;Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.39 no.3, pp.141-148, 2021
  • This paper presents a method to detect zebra crossings using deep learning that combines SegNet and ResNet. For the blind, a safe crossing system that knows exactly where zebra crossings are is important. Zebra-crossing detection by deep learning can be a good solution to this problem, and robotic vision-based assistive technologies have sprung up over the past few years, focusing on specific scene objects using monocular detectors. These traditional methods have achieved significant results, though with relatively long processing times, and have enhanced zebra-crossing perception to a large extent. However, running all detectors jointly incurs long latency and becomes computationally prohibitive on wearable embedded systems. In this paper, we propose a model for fast and stable segmentation of zebra crossings from captured images. The model combines SegNet and ResNet and consists of three steps. First, the input image is subsampled to extract image features, with a modified ResNet convolutional network serving as the new encoder. Second, the abstract features are restored to the original image size through SegNet's original upsampling network. Finally, the method classifies all pixels and calculates the accuracy of each pixel. The experimental results demonstrate the efficiency of the modified semantic segmentation algorithm at a relatively high computing speed.
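The per-pixel evaluation in the final step amounts to comparing the predicted class map against a reference label map; a minimal sketch with hypothetical 2×2 maps (1 = zebra crossing, 0 = background):

```python
import numpy as np

def pixel_accuracy(pred, label):
    """Overall fraction of correctly classified pixels."""
    return float(np.mean(pred == label))

def class_accuracy(pred, label, cls):
    """Accuracy restricted to pixels whose true class is `cls`."""
    mask = label == cls
    return float(np.mean(pred[mask] == cls))

pred  = np.array([[1, 1],
                  [0, 0]])
label = np.array([[1, 0],
                  [0, 0]])
overall = pixel_accuracy(pred, label)        # 3 of 4 pixels correct
zebra   = class_accuracy(pred, label, 1)     # the one true crossing pixel is found
```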

Durability Prediction for Concrete Structures Exposed to Chloride Attack Using a Bayesian Approach (베이지안 기법을 이용한 염해 콘크리트구조물의 내구성 예측)

  • Jung, Hyun-Jun;Zi, Goang-Seup;Kong, Jung-Sik;Kang, Jin-Gu
    • Journal of the Korea Concrete Institute, v.20 no.1, pp.77-88, 2008
  • This paper presents a new approach for predicting the corrosion resistance of reinforced concrete structures exposed to chloride attack. In this method, the prediction can be updated successively by Bayesian theory as additional data become available. The stochastic properties of the model parameters are explicitly taken into account in the model. To simplify the procedure, the probability of reaching the durability limit is determined from samples obtained with the Latin hypercube sampling technique. The new method may be very useful for designing important concrete structures and can help predict the remaining service life of existing concrete structures that have been monitored.
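Latin hypercube sampling stratifies each parameter's range so that every stratum is sampled exactly once per dimension; a minimal numpy sketch on the unit hypercube (mapping the samples through the actual parameter distributions is omitted):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Latin hypercube sample on [0, 1)^d: one point per stratum per dimension."""
    rng = np.random.default_rng(seed)
    # one uniform draw inside each of the n_samples equal-width strata
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    # independently permute the strata in every dimension to decorrelate them
    perms = np.stack([rng.permutation(n_samples) for _ in range(n_dims)], axis=1)
    return np.take_along_axis(u, perms, axis=0)

samples = latin_hypercube(10, 2, seed=0)
# each of the 10 strata [k/10, (k+1)/10) contains exactly one sample per dimension
```

Compared with plain Monte Carlo, this guarantees coverage of the whole parameter range with far fewer samples, which is why it simplifies the failure-probability estimate.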

Sampling-based Super Resolution U-net for Pattern Expression of Local Areas (국소부위 패턴 표현을 위한 샘플링 기반 초해상도 U-Net)

  • Lee, Kyo-Seok;Gal, Won-Mo;Lim, Myung-Jae
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.22 no.5, pp.185-191, 2022
  • In this study, we propose a novel super-resolution neural network based on U-Net, residual networks, and sub-pixel convolution. To prevent the loss of detailed information caused by U-Net's max pooling, we propose downsampling and skip connections based on sub-pixel convolution. Unlike max pooling, which builds a new feature map from only the maximum value in each filter window, this uses every pixel in the window: as a 2×2 filter passes over the input, it creates feature maps consisting of the upper-left, upper-right, lower-left, and lower-right pixels, respectively, halving the spatial size and quadrupling the number of feature maps. We also propose two methods to reduce computation. The first replaces up-convolution with sub-pixel convolution, which involves no extra computation and performs better. The second replaces U-Net's concatenation layer with a layer that adds the two feature maps. Experiments show better PSNR values on all scales and benchmark datasets except Set5 at scale 2, and show that the network represents local area patterns well.
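The downsampling described above is essentially a space-to-depth rearrangement (the inverse of sub-pixel/pixel-shuffle upsampling); a numpy sketch of the 2×2 case on a single-channel map:

```python
import numpy as np

def pixel_unshuffle(x, r=2):
    """Rearrange each r x r spatial block into r*r channels (space-to-depth).

    Unlike max pooling, every pixel is kept: an (H, W) map becomes
    (r*r, H//r, W//r) -- half the spatial size and, for r=2, four times
    the number of feature maps.
    """
    h, w = x.shape
    x = x.reshape(h // r, r, w // r, r)              # split into r x r blocks
    return x.transpose(1, 3, 0, 2).reshape(r * r, h // r, w // r)

x = np.arange(16).reshape(4, 4)
y = pixel_unshuffle(x)   # shape (4, 2, 2)
# y[0] holds the upper-left pixel of every 2x2 block, y[1] the upper-right, etc.
```

Because the mapping is a pure rearrangement it is lossless and invertible, which is what lets the skip connections preserve detail that max pooling would discard.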

A Study on Adaptive Learning Model for Performance Improvement of Stream Analytics (실시간 데이터 분석의 성능개선을 위한 적응형 학습 모델 연구)

  • Ku, Jin-Hee
    • Journal of Convergence for Information Technology, v.8 no.1, pp.201-206, 2018
  • Recently, as technologies for realizing artificial intelligence have become common, machine learning is widely used. Machine learning provides insight by collecting large amounts of data, batch processing them, and taking a final action, but the effects of that action are not immediately integrated into the learning process. In this paper, we propose an adaptive learning model to improve the performance of real-time stream analytics, a major business issue. Adaptive learning generates an ensemble by adapting to the complexity of the data set, and the algorithm uses the data needed to determine the optimal data points to sample. In experiments on six standard data sets, the adaptive learning model outperformed a simple machine learning model in both training time and classification accuracy. In particular, the support vector machine showed excellent performance in the final ensembles. Adaptive learning is expected to be applicable to a wide range of problems whose inference must be adaptively updated as various parameters change over time.
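The ensemble side of such a model can be sketched as majority voting over base learners; `ThresholdModel` here is a hypothetical stand-in for the paper's learners, and the data are invented:

```python
import numpy as np

class ThresholdModel:
    """Toy base learner: classifies by thresholding a single feature."""
    def __init__(self, feature, threshold):
        self.feature, self.threshold = feature, threshold
    def predict(self, X):
        return (X[:, self.feature] > self.threshold).astype(int)

def ensemble_predict(models, X):
    """Majority vote over the ensemble's base learners."""
    votes = np.stack([m.predict(X) for m in models])   # (n_models, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)

X = np.array([[0.2, 0.9],
              [0.8, 0.1]])
models = [ThresholdModel(0, 0.5), ThresholdModel(1, 0.5), ThresholdModel(0, 0.3)]
pred = ensemble_predict(models, X)
```

An adaptive variant would grow or replace members of `models` as the stream's complexity changes, rather than fixing the ensemble up front.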

TeT: Distributed Tera-Scale Tensor Generator (분산 테라스케일 텐서 생성기)

  • Jeon, ByungSoo;Lee, JungWoo;Kang, U
    • Journal of KIISE, v.43 no.8, pp.910-918, 2016
  • A tensor is a multi-dimensional array that can represent data such as (user, user, time) triples in a social network system. A tensor generator is an important tool for multi-dimensional data mining research, with applications including simulation, multi-dimensional data modeling and understanding, and sampling/extrapolation. However, existing tensor generators cannot generate sparse tensors that, like real-world tensors, obey a power law. They also limit the tensor sizes that can be processed and require additional time to upload a generated tensor to a distributed system for further analysis. In this study, we propose TeT, a distributed tera-scale tensor generator that solves these problems. TeT generates sparse random tensors as well as sparse R-MAT and Kronecker tensors without any limitation on tensor size. In addition, a TeT-generated tensor is immediately ready for further analysis on the same distributed system. The careful design of TeT provides nearly linear scalability in the number of machines.
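The simplest of the three generators, a sparse random tensor, can be sketched in COO (coordinate) form; this single-machine numpy version only illustrates the output format, not TeT's distributed R-MAT/Kronecker machinery:

```python
import numpy as np

def sparse_random_tensor(shape, nnz, seed=None):
    """Generate a sparse random tensor as a (indices, values) COO pair.

    Only the nnz nonzero entries are materialized, so arbitrarily large
    shapes cost memory proportional to nnz, not to the dense size.
    """
    rng = np.random.default_rng(seed)
    idx = np.stack([rng.integers(0, s, size=nnz) for s in shape], axis=1)
    vals = rng.random(nnz)
    return idx, vals

idx, vals = sparse_random_tensor((1000, 1000, 100), nnz=50, seed=0)
# idx: (50, 3) coordinate list; vals: the 50 nonzero entries
```

An R-MAT-style generator would instead pick each coordinate by recursively descending into quadrants with skewed probabilities, which is what produces power-law degree distributions.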

Development of an Image Retrieval Model based on Image2Vec using GAN (Generative Adversarial Network를 활용한 Image2Vec기반 이미지 검색 모델 개발)

  • Jo, Jaechoon;Lee, Chanhee;Lee, Dongyub;Lim, Heuiseok
    • Journal of Digital Convergence, v.16 no.12, pp.301-307, 2018
  • Most information retrieval (IR) research focuses on document search, so keyword-based IR systems cannot reflect the feature information of images. To overcome this limitation, we developed a system that searches for similar images based on the vector information of images, including retrieval from sketches. The proposed system uses a GAN to upsample a sketch to the image level, converts the image to a vector through a CNN, and then retrieves similar images using the vector space model. The model was trained on fashion images, and an image retrieval system was built on it. The resulting system showed meaningful performance.
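The retrieval step, ranking indexed image vectors by cosine similarity to the query vector, can be sketched as follows (the 2-D vectors are hypothetical; real Image2Vec embeddings would be much higher-dimensional):

```python
import numpy as np

def retrieve(query_vec, index_vecs, top_k=3):
    """Rank indexed image vectors by cosine similarity to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    m = index_vecs / np.linalg.norm(index_vecs, axis=1, keepdims=True)
    sims = m @ q                         # cosine similarity per indexed image
    order = np.argsort(-sims)[:top_k]    # best matches first
    return order, sims[order]

index = np.array([[1.0, 0.0],    # hypothetical CNN vectors of indexed images
                  [0.0, 1.0],
                  [1.0, 1.0]])
order, sims = retrieve(np.array([1.0, 0.2]), index, top_k=2)
```

In the full pipeline the query vector would come from the GAN-upsampled sketch passed through the same CNN as the indexed images, so both live in one vector space.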

Iterative Deep Convolutional Grid Warping Network for Joint Depth Upsampling (반복적인 격자 워핑 기법을 이용한 깊이 영상 초해상화 기술)

  • Kim, Dongsin;Yang, Yoonmo;Oh, Byung Tae
    • Journal of Broadcast Engineering, v.25 no.6, pp.965-972, 2020
  • Depth maps carry the distance information of objects and play an important role in organizing 3D information. Color and depth images are often obtained simultaneously; however, depth images have lower resolution than color images due to hardware limitations. It is therefore useful to upsample depth maps to the same resolution as the color images. In this paper, we propose a novel method that upsamples a depth map by shifting pixel positions instead of compensating pixel values. This approach moves the positions of pixels around an edge toward the center of the edge, and the process is carried out in several steps to restore the blurred depth map. The experimental results show that the proposed method improves both quantitative and visual quality compared to existing methods.
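The core idea, resampling at shifted grid positions so that a blurred depth edge becomes steeper without inventing new depth values, can be loosely illustrated in 1-D; this sketch is an interpretation of the warping step, not the paper's network:

```python
import numpy as np

def warp_toward_edge(depth, step=0.4):
    """One grid-warping iteration on a 1-D depth profile.

    Sample positions near an edge are shifted away from the edge center
    (down the slope of |gradient|), so resampling pulls flat-region values
    inward and the blurred transition becomes steeper. Running this for
    several iterations progressively sharpens the edge.
    """
    x = np.arange(len(depth), dtype=float)
    grad = np.gradient(depth)
    # |grad| peaks at the edge center; move sampling positions down its slope
    shift = -step * np.sign(np.gradient(np.abs(grad)))
    src = np.clip(x + shift, 0, len(depth) - 1)
    return np.interp(src, x, depth)

blurred = np.array([0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0])
sharper = warp_toward_edge(blurred)   # transition occupies fewer samples
```

Note that only positions move: the value range stays within the original depths, which is why warping avoids the intermediate "ghost" depths that value interpolation creates at object boundaries.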