• Title/Summary/Keyword: Kernel Parameter

Search results: 120

An Image Interpolation by Adaptive Parametric Cubic Convolution (3차 회선 보간법에 적응적 매개변수를 적용한 영상 보간)

  • Yoo, Jea-Wook; Park, Dae-Hyun; Kim, Yoon
    • Journal of the Korea Society of Computer and Information / v.13 no.6 / pp.163-171 / 2008
  • In this paper, we present an adaptive parametric cubic convolution technique for enlarging a low-resolution image into a high-resolution image. The proposed method consists of two steps. In the first interpolation step, we acquire adaptive parameters by introducing a new cost function that reflects frequency properties. In the second interpolation step, cubic convolution is performed with the parameters obtained from the first step. The enhanced interpolation kernel with adaptive parameters produces a better output image than the conventional kernel with a fixed parameter. Experimental results show that the proposed method not only provides improvements of 0.5~4 dB in PSNR but also exhibits better edge preservation and similarity to the original image than conventional methods in the enlarged images.

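The entry above builds on the parametric cubic convolution (Keys) kernel, whose single free parameter the authors adapt per region via a frequency-based cost function that the abstract does not spell out. The sketch below only illustrates the underlying parametric kernel and a 1-D interpolation with a tunable parameter a; the function names and the enlargement example are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Parametric cubic convolution kernel (Keys); 'a' is the free parameter
    the paper adapts locally (a = -0.5 is the conventional fixed choice)."""
    x = np.abs(x)
    w = np.zeros_like(x)
    m1 = x <= 1
    m2 = (x > 1) & (x <= 2)
    w[m1] = (a + 2) * x[m1]**3 - (a + 3) * x[m1]**2 + 1
    w[m2] = a * x[m2]**3 - 5*a * x[m2]**2 + 8*a * x[m2] - 4*a
    return w

def interpolate_1d(signal, positions, a=-0.5):
    """Evaluate a 1-D signal at fractional positions by 4-tap cubic convolution."""
    out = np.empty(len(positions))
    n = len(signal)
    for k, p in enumerate(positions):
        i = int(np.floor(p))
        taps = np.arange(i - 1, i + 3)                     # 4-sample support
        idx = np.clip(taps, 0, n - 1)                      # clamp at the borders
        out[k] = np.dot(signal[idx], cubic_kernel(p - taps, a))
    return out

# Example: enlarge a 1-D signal by 2x with two different kernel parameters.
sig = np.sin(np.linspace(0, np.pi, 16))
pos = np.linspace(0, 15, 31)
up_fixed = interpolate_1d(sig, pos, a=-0.5)
up_sharp = interpolate_1d(sig, pos, a=-1.0)  # sharper, more ringing-prone kernel
```

Setting a = -0.5 recovers the conventional fixed-parameter kernel; the paper's contribution is choosing a adaptively from local frequency content.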

A-priori Comparative Assessment of the Performance of Adjustment Models for Estimation of the Surface Parameters against Modeling Factors (표면 파라미터 계산시 모델링 인자에 따른 조정계산 추정 성능의 사전 비교분석)

  • Seo, Su-Young
    • Spatial Information Research / v.19 no.2 / pp.29-36 / 2011
  • This study performs a quantitative assessment of the performance of adjustment models through a-priori analysis of the statistics of surface parameter estimates against modeling factors. Lidar, airborne imagery, and SAR imagery have been used to acquire earth-surface elevation, where the shape properties of the surface need to be determined from neighboring observations around a target location. The parameters selected for estimation are elevation, slope, and the second-order coefficient. The factors that must be specified to compose an adjustment model are classified into three types: mathematical functions, kernel sizes, and weighting types. Accordingly, a-priori standard deviations of the parameters are computed for the varying adjustment models, and the corresponding confidence regions for both the standard deviation of each estimate and the estimate itself are calculated from the associated probability distributions. The resulting confidence regions are then compared against the factors constituting the adjustment models, and the quantitative performance of the adjustment models is ascertained.
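
The abstract compares adjustment models a-priori, i.e., from the design matrix, weights, and an assumed noise level alone, before any observations are used. A minimal sketch of that idea follows, assuming a full second-order polynomial surface, a square kernel window, and Gaussian or uniform distance weighting; the paper's exact functional forms and weighting schemes are not given in the abstract, so these choices are assumptions.

```python
import numpy as np

def apriori_parameter_std(kernel_size=5, sigma0=0.1, weighting="gaussian"):
    """A-priori standard deviations of locally fitted surface parameters
    (elevation, slopes, second-order terms) for one adjustment model.
    Assumes z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2 on a unit grid."""
    half = kernel_size // 2
    xs, ys = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    x, y = xs.ravel().astype(float), ys.ravel().astype(float)

    # Design matrix of the second-order polynomial surface.
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

    if weighting == "gaussian":
        w = np.exp(-(x**2 + y**2) / (2 * half**2))
    else:                                  # "uniform"
        w = np.ones_like(x)
    W = np.diag(w)

    # Parameter covariance: sigma0^2 * (A^T W A)^-1; its diagonal gives variances.
    cov = sigma0**2 * np.linalg.inv(A.T @ W @ A)
    return np.sqrt(np.diag(cov))

# Compare adjustment models a-priori by varying the kernel size.
for k in (3, 5, 7):
    print(k, apriori_parameter_std(kernel_size=k).round(4))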

Lightweight Attention-Guided Network with Frequency Domain Reconstruction for High Dynamic Range Image Fusion

  • Park, Jae Hyun; Lee, Keuntek; Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2022.06a / pp.205-208 / 2022
  • Multi-exposure high dynamic range (HDR) image reconstruction, the task of reconstructing an HDR image from multiple low dynamic range (LDR) images of a dynamic scene, often produces ghosting artifacts caused by camera motion and moving objects, and it also cannot deal with washed-out regions due to over- or under-exposure. While there have been many deep-learning-based methods with motion estimation to alleviate these problems, they still have limitations for severely moving scenes. They also require large parameter counts, especially the state-of-the-art methods that employ attention modules. To address these issues, we propose a frequency-domain approach based on the idea that transform-domain coefficients inherently carry global information from all image pixels and can therefore cope with large motions. Specifically, we adopt Residual Fast Fourier Transform (RFFT) blocks, which allow global interactions of pixels. Moreover, we employ Depthwise Overparametrized convolution (DO-conv) blocks, a convolution in which each input channel is convolved with its own 2D kernel, for faster convergence and performance gains. We call this LFFNet (Lightweight Frequency Fusion Network), and experiments on the benchmarks show reduced ghosting artifacts and improvements of up to 0.6 dB in tonemapped PSNR compared to recent state-of-the-art methods. Our architecture also requires fewer parameters and converges faster in training.

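The abstract's key ingredient is a residual block that mixes spatial convolutions with a frequency-domain branch, so that every output location sees global information. Below is a minimal PyTorch sketch of such a residual FFT block; it is not the authors' LFFNet architecture (their attention guidance, DO-conv layers, and fusion stages are omitted), and the class name and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ResidualFFTBlock(nn.Module):
    """Processes features both spatially and in the frequency domain, then adds
    the input back (residual). The 1x1 convs on stacked real/imaginary parts act
    on Fourier coefficients, giving every output pixel a global receptive field."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, 1))

    def forward(self, x):
        b, c, h, w = x.shape
        # Real FFT over the spatial dims; keep real/imag parts as extra channels.
        f = torch.fft.rfft2(x, norm="ortho")
        f = torch.cat([f.real, f.imag], dim=1)
        f = self.freq(f)
        real, imag = f.chunk(2, dim=1)
        back = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        return x + self.spatial(x) + back

# Example: y = ResidualFFTBlock(64)(torch.randn(1, 64, 32, 32))
```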

Gradient Estimation for Progressive Photon Mapping (점진적 광자 매핑을 위한 기울기 계산 기법)

  • Donghee Jeon; Jeongmin Gu; Bochang Moon
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.141-147 / 2024
  • Progressive photon mapping is a widely adopted rendering technique that performs kernel density estimation on photons progressively generated from light sources. Its hyperparameter, which controls the reduction rate of the density-estimation radius, strongly affects the quality of the rendered image due to the bias-variance tradeoff of the pixel estimates in photon-mapped results. We can minimize the errors of the rendered pixel estimates in progressive photon mapping by estimating the optimal parameter with gradient-based optimization techniques. To this end, we derive the gradients of the pixel estimates with respect to the parameter during progressive photon mapping and verify the estimated gradients against finite differences. The gradients estimated in this paper can be applied in future work to an online learning algorithm that simultaneously performs progressive photon mapping and parameter optimization.
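
The abstract checks analytically derived gradients against finite differences. As a much-reduced illustration of that check, the sketch below differentiates one common progressive-photon-mapping radius schedule, r_{i+1}^2 = r_i^2 (i + alpha)/(i + 1) (the Knaus-Zwicker form), with respect to its reduction parameter alpha and compares the result to a central finite difference; the paper's actual gradients are of full pixel estimates, and this schedule is an assumption, not necessarily the authors' formulation.

```python
import numpy as np

def ppm_radius_sq(alpha, r0_sq=1.0, n_iters=100):
    """Squared kernel radius after n_iters of the radius update
    r_{i+1}^2 = r_i^2 * (i + alpha) / (i + 1), plus its derivative w.r.t. alpha."""
    r_sq, d_r_sq = r0_sq, 0.0
    for i in range(1, n_iters + 1):
        ratio = (i + alpha) / (i + 1)
        # Product rule: d(r^2 * ratio)/dalpha = d_r^2 * ratio + r^2 * 1/(i + 1).
        d_r_sq = d_r_sq * ratio + r_sq / (i + 1)
        r_sq = r_sq * ratio
    return r_sq, d_r_sq

# Verify the analytic derivative against a central finite difference.
alpha, eps = 0.7, 1e-5
_, grad = ppm_radius_sq(alpha)
fd = (ppm_radius_sq(alpha + eps)[0] - ppm_radius_sq(alpha - eps)[0]) / (2 * eps)
print(grad, fd)   # the two values should agree to several significant digits
```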

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young; Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit-rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results based on financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper used a multi-class SVM for the prediction of the DEA-based efficiency ratings of venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is built on the following two ideas for classifying which companies are the more efficient venture companies: i) making DEA-based multi-class ratings for the sample companies and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision-making units (DMUs) using a linear-programming-based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies. It has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, i.e., the hyperplane with maximum separation between classes. Support vectors are the training points closest to the maximum-margin hyperplane. If the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, that is, the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time-series forecasting, and credit-rating estimation. In this study, we employed SVM to develop a data-mining-based efficiency prediction model. We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one approach among binary-classification-based methods, together with two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information of 154 companies listed on the KOSDAQ market of the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we constructed multi-class ratings from DEA efficiency scores and built a data-mining-based multi-class prediction model. Among the three multi-classification approaches, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when the exact class is difficult to determine in the actual market. So we also report accuracy within one-class errors, for which the Weston and Watkins method reached 85.7% on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification, regardless of the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we see the need to improve the variable selection process, the kernel-parameter selection, the generalization ability, and the sample size for multi-class classification.
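
A minimal sketch of the prediction step described above: a multi-class SVM with a Gaussian RBF kernel trained one-against-one, evaluated both by the exact hit ratio and by accuracy within a one-class error. The financial features, DEA-based labels, and hyperparameters here are placeholders, since the KOSDAQ data used in the paper are not reproduced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder features standing in for financial ratios of venture companies,
# and placeholder labels standing in for DEA-based efficiency classes (0..3).
rng = np.random.default_rng(0)
X = rng.normal(size=(154, 6))
y = rng.integers(0, 4, size=154)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
# RBF (Gaussian) kernel; decision_function_shape='ovo' exposes one-against-one scores.
clf = SVC(kernel="rbf", C=10.0, gamma="scale", decision_function_shape="ovo")
clf.fit(scaler.transform(X_tr), y_tr)

pred = clf.predict(scaler.transform(X_te))
exact_hit = np.mean(pred == y_te)                  # exact-class hit ratio
within_one = np.mean(np.abs(pred - y_te) <= 1)     # accuracy within one class
print(exact_hit, within_one)
```

With random placeholder labels both scores sit near chance level; with real DEA ratings the two metrics would correspond to the hit ratio and the within-one-class accuracy reported in the abstract.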

Suspension of Sediment over Swash Zone (Swash대역에서의 해빈표사 부유거동에 관한 연구)

  • Cho, Yong Jun; Kim, Kwon Soo; Ryu, Ha Sang
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.1B / pp.95-109 / 2008
  • We numerically analyzed the nonlinear shoaling, a plunging breaker and its accompanying energetic suspension of sediment at the bed, and the redistribution of suspended sediments by the down-rush of preceding waves and the following plunger, using SPH with a Gaussian kernel function, the Lagrangian Dynamic Smagorinsky model (LDS), and Van Rijn's pick-up function. In the process, we came to the conclusion that conventional models for the tractive force at the bottom, such as the quadratic law, cannot accurately describe the rapidly accelerating flow over a swash zone, and we propose a new methodology to accurately estimate the bottom tractive force. Using the wave model newly proposed in this study, we can successfully reproduce the severely deformed water-surface profile, free-falling water particles, the splash after water particles land on the free surface, a wave finger due to the structured vortex on the rear side of the wave crest (Narayanaswamy and Dalrymple, 2002), the circulation of suspended sediments over a swash zone, and the net offshore transfer of sediment clouds suspended over the swash zone, which so far have been regarded as very difficult features to mimic in computational fluid mechanics.
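
The full wave model (the SPH momentum equations, the LDS turbulence closure, and Van Rijn's pick-up function) is beyond a short sketch, but the Gaussian smoothing kernel and the SPH density summation it plugs into are standard and shown below; the 2-D normalization and the particle-cloud example are assumptions for illustration only.

```python
import numpy as np

def gaussian_kernel_2d(r, h):
    """Gaussian SPH smoothing kernel in 2-D: W(r, h) = exp(-(r/h)^2) / (pi h^2)."""
    return np.exp(-(r / h) ** 2) / (np.pi * h ** 2)

def sph_density(positions, masses, h):
    """SPH density estimate rho_i = sum_j m_j W(|x_i - x_j|, h) for every particle."""
    diff = positions[:, None, :] - positions[None, :, :]   # pairwise offsets
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * gaussian_kernel_2d(r, h)).sum(axis=1)

# Example: density of a small random particle cloud.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(200, 2))
m = np.full(200, 1.0 / 200)
rho = sph_density(pos, m, h=0.05)
```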

Multivariate Time Series Simulation With Component Analysis (독립성분분석을 이용한 다변량 시계열 모의)

  • Lee, Tae-Sam; Salas, Jose D.; Karvanen, Juha; Noh, Jae-Kyoung
    • Proceedings of the Korea Water Resources Association Conference / 2008.05a / pp.694-698 / 2008
  • In hydrology, it is a difficult task to deal with multivariate time series, such as modeling the streamflows of an entire complex river system. Normal-distribution-based models such as MARMA (Multivariate Autoregressive Moving Average) have been a major approach for modeling multivariate time series. There are some limitations to the normal-based models. One of them is the unfavorable data transformation that forces the data to follow the normal distribution. Furthermore, a high-dimensional multivariate model requires a very large parameter matrix. As an alternative, one might decompose the multivariate data into independent components and model them individually. In 1985, Lins used Principal Component Analysis (PCA): the five scores, i.e., the data decomposed from the original data, were taken and modeled individually, one with an AR(2) model and the others with AR(1) models. From the time series analysis using the scores of the five components, he noted that "principal component time series might provide a relatively simple and meaningful alternative to conventional large MARMA models". This study is inspired by that observation to develop a multivariate simulation model. The multivariate simulation model suggested here uses Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Three modeling steps are applied for simulation: (1) PCA is used to decompose the correlated multivariate data into uncorrelated data, while ICA decomposes the data into independent components; the autocorrelation structure of the decomposed data remains dominant, inherited from the data in the original domain. (2) Each component is resampled by block bootstrapping or K-nearest neighbor resampling. (3) The resampled components are brought back to the original domain. From the suggested approach one might expect that a) the simulated data differ from the historical data, b) no data transformation is required (in the case of ICA), and c) a complex system can be decomposed into independent components and modeled individually. The models with PCA and ICA are compared using various statistics, such as basic statistics (mean, standard deviation, skewness, autocorrelation), reservoir-related statistics, and the kernel density estimate.

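A minimal sketch of the three-step pipeline in the abstract above: decompose with PCA (or ICA), block-bootstrap each component so that short-range autocorrelation is retained, and transform back to the original domain. The K-nearest-neighbor resampling option and the reservoir statistics used for evaluation are omitted, and the synthetic data, block length, and function names are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def block_bootstrap(series, block_len, rng):
    """Resample a 1-D series by concatenating randomly chosen contiguous blocks,
    preserving the autocorrelation structure within each block."""
    n = len(series)
    starts = rng.integers(0, n - block_len + 1, size=int(np.ceil(n / block_len)))
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

def simulate_multivariate(X, block_len=24, use_ica=False, seed=0):
    """Decompose -> resample each component -> transform back (steps 1-3)."""
    rng = np.random.default_rng(seed)
    model = FastICA(random_state=seed) if use_ica else PCA()
    scores = model.fit_transform(X)                      # component scores
    resampled = np.column_stack(
        [block_bootstrap(scores[:, j], block_len, rng) for j in range(scores.shape[1])])
    return model.inverse_transform(resampled)            # back to the original domain

# Example with synthetic correlated "streamflow-like" data (3 sites, 600 months).
rng = np.random.default_rng(2)
base = np.cumsum(rng.normal(size=(600, 1)), axis=0)
X = base + rng.normal(scale=0.5, size=(600, 3))
sim = simulate_multivariate(X, use_ica=False)
```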

Comparison of Texture Images and Application of Template Matching for Geo-spatial Feature Analysis Based on Remote Sensing Data (원격탐사 자료 기반 지형공간 특성분석을 위한 텍스처 영상 비교와 템플레이트 정합의 적용)

  • Yoo Hee Young; Jeon So Hee; Lee Kiwon; Kwon Byung-Doo
    • Journal of the Korean Earth Science Society / v.26 no.7 / pp.683-690 / 2005
  • As remote sensing imagery with high spatial resolution (e.g. pixel resolution of 1 m or less) is used widely in specific application domains, the demand for advanced methods for this imagery is increasing. Among the many applicable methods, texture image analysis, which characterizes the spatial distribution of gray levels in a neighborhood, can be regarded as one useful method. For the texture images, we compared and analyzed the results obtained with various directions, kernel sizes, and parameter types of the GLCM algorithm, and then studied the spatial feature characteristics in each resulting image. In addition, a template matching program that can search for spatial patterns using template images selected from the original and texture images was also implemented and applied. Matching probabilities were examined on the basis of the results, which suggest effective applications for detecting and analyzing specifically shaped geological or other complex features using high-spatial-resolution imagery.
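
A from-scratch sketch of the GLCM computation the study varies over directions, kernel sizes, and parameter types: quantize the window, count co-occurrences for one offset, symmetrize and normalize, then derive second-order measures such as contrast and homogeneity. The quantization level, window size, and offset below are illustrative assumptions.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey Level Co-occurrence Matrix for one offset (dx, dy).
    The image is quantized to 'levels' grey levels; the matrix is
    symmetrized and normalized to co-occurrence probabilities."""
    q = np.floor(image.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    h, w = q.shape
    mat = np.zeros((levels, levels), dtype=float)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            mat[q[y, x], q[y + dy, x + dx]] += 1
    mat += mat.T                      # symmetric co-occurrence
    return mat / mat.sum()

# Example: contrast and homogeneity of one texture window from its GLCM.
rng = np.random.default_rng(3)
window = rng.integers(0, 256, size=(21, 21))      # stands in for an image kernel
P = glcm(window, dx=1, dy=0)
i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()
homogeneity = (P / (1.0 + np.abs(i - j))).sum()
```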

Effect of The Degree and Duration of Low Temperature on the Degeneration and Sterility of Spikelets in Rice (저온(低溫)의 정도(程度)와 기간(期間)이 수도(水稻)의 영화퇴화(穎花退化)와 불임(不稔)에 미치는 영향(影響))

  • Ahn, Su Bong
    • Korean Journal of Agricultural Science / v.7 no.1 / pp.1-5 / 1980
  • In order to evaluate cold tolerance and devise countermeasures against cold damage in newly released rice varieties, the effects of the degree and duration of low temperature at the meiotic stage on the sterility and ripening of rice spikelets were investigated, with the following results: 1. As the temperature was lowered and the duration of low temperature was extended during the meiotic stage, the heading dates were delayed and sterility increased. The main factor in the low yield due to low temperature was the increased sterility, and below 15°C the delayed heading was also responsible for the low yield. 2. The sterility and delayed kernel development of rice increased when grown at 15°C for six days. 3. The newly released rice varieties were highly sensitive to low-temperature damage during the meiotic stage. Treatment of rice at 15°C for four days might be used as a parameter to evaluate the low-temperature tolerance of rice varieties.


Implementation of GLCM/GLDV-based Texture Algorithm and Its Application to High Resolution Imagery Analysis (GLCM/GLDV 기반 Texture 알고리즘 구현과 고 해상도 영상분석 적용)

  • Lee Kiwon; Jeon So-Hee; Kwon Byung-Doo
    • Korean Journal of Remote Sensing / v.21 no.2 / pp.121-133 / 2005
  • Texture imaging, meaning the creation of texture images from co-occurrence relations, has been known as one of the useful image analysis methodologies. For this purpose, most commercial remote sensing software provides a texture analysis function based on the GLCM (Grey Level Co-occurrence Matrix). In this study, a texture-imaging program based on the GLCM algorithm is newly implemented, and texture-imaging modules for the GLDV (Grey Level Difference Vector) are also included in this program. The GLCM/GLDV texture-imaging parameters comprise six types of second-order texture functions: Homogeneity, Dissimilarity, Energy, Entropy, Angular Second Moment, and Contrast. For co-occurrence directionality in GLCM/GLDV, two direction modes newly implemented in this program, Omni mode and Circular mode, are provided alongside the basic eight-direction mode. Omni mode computes all directions to avoid directional complexity at the practical level, and Circular mode computes texture parameters along a circular path surrounding the target pixel within a kernel. In the second phase of this study, case studies with an artificial image and actual satellite imagery are carried out to analyze the texture images obtained with different parameters and modes by correlation matrix analysis. It is concluded that the selection of texture parameters and modes is the critical issue in applications based on texture image fusion.
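
To complement the GLCM sketch given earlier in this listing, the following sketch illustrates the GLDV (difference-vector) side and the abstract's Omni mode idea, i.e., averaging texture measures over all eight offsets so that no single direction dominates. The Circular mode and the full six-measure set are omitted, and the offsets, quantization, and function names are assumptions.

```python
import numpy as np

def gldv(image, dx, dy, levels=8):
    """Grey Level Difference Vector: normalized histogram of absolute grey-level
    differences between each pixel and its neighbor at offset (dx, dy)."""
    q = np.floor(image.astype(float) / 256.0 * levels).astype(int).clip(0, levels - 1)
    a = q[max(0, -dy):q.shape[0] - max(0, dy), max(0, -dx):q.shape[1] - max(0, dx)]
    b = q[max(0, dy):q.shape[0] - max(0, -dy), max(0, dx):q.shape[1] - max(0, -dx)]
    v = np.bincount(np.abs(a - b).ravel(), minlength=levels).astype(float)
    return v / v.sum()

def omni_mode_measures(image, levels=8):
    """'Omni mode' idea from the abstract: average texture measures over all
    eight offsets so the result does not depend on a single direction."""
    offsets = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
    d = np.arange(levels)
    contrast, entropy, asm = [], [], []
    for dx, dy in offsets:
        v = gldv(image, dx, dy, levels)
        contrast.append((d ** 2 * v).sum())             # GLDV contrast
        asm.append((v ** 2).sum())                      # angular second moment
        entropy.append(-(v[v > 0] * np.log(v[v > 0])).sum())
    return {"contrast": np.mean(contrast), "asm": np.mean(asm), "entropy": np.mean(entropy)}

# Example: rng = np.random.default_rng(4); omni_mode_measures(rng.integers(0, 256, (32, 32)))
```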