• Title/Summary/Keyword: gaussian weight


Implementation of Neural Filter Optimal Algorithms for Image Restoration (영상복원용 신경회로망 필터의 최적화 알고리즘 구현)

  • Lee, Bae-Ho;Mun, Byeong-Jin
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.7
    • /
    • pp.1980-1987
    • /
    • 1999
  • A restored image is always of lower quality than the original due to distortion and noise. The purpose of image restoration is to improve image quality by correcting for the noise or distortion. One category of spatial filters for image restoration is the linear filter. This filter is easy to implement and suppresses Gaussian noise effectively, but performs poorly on spot or impulse noise. In this paper, we propose a nonlinear spatial filter algorithm for image restoration called the optimal adaptive multistage filter (OAMF). The OAMF reduces filtering time, increases the noise suppression ratio, and preserves edge information. It optimizes the adaptive multistage filter (AMF) using a back-propagation weight-learning algorithm. Simulation results for this filter algorithm are presented and discussed.
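The abstract does not give the OAMF update rule, but the idea of learning filter weights by back-propagation can be sketched as gradient descent on the mean-squared error of a weighted combination of sub-filter outputs. All names and the toy data below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def train_filter_weights(subfilter_outputs, target, lr=1.0, epochs=2000):
    """Learn mixing weights for sub-filter outputs by gradient descent on MSE.

    subfilter_outputs: (k, n) array, one row per sub-filter's output pixels.
    target: (n,) clean reference pixels used for supervised weight learning.
    """
    k = subfilter_outputs.shape[0]
    w = np.full(k, 1.0 / k)  # start from a uniform combination
    for _ in range(epochs):
        pred = w @ subfilter_outputs                          # weighted combination
        grad = 2.0 * subfilter_outputs @ (pred - target) / target.size
        w -= lr * grad                                        # gradient (back-prop style) update
    return w

# toy example: the target equals the first sub-filter's output exactly,
# so the learned weights should approach [1, 0]
x = np.linspace(0.0, 1.0, 50)
outs = np.vstack([x, x ** 2])
w = train_filter_weights(outs, x)
```

With a clean reference available, the weights converge to the least-squares optimum; in practice a training set of clean/noisy image pairs would play the role of `target`.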


Modified Weighted Filter by Standard Deviation in S&P Noise Environments (S&P 잡음 환경에서 표준편차를 이용한 변형된 가중치 필터)

  • Baek, Ji-Hyeon;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.4
    • /
    • pp.474-480
    • /
    • 2020
  • With the advent of the Fourth Industrial Revolution, many new technologies are being utilized. In particular, video signals are used in various fields. However, when transmitting and receiving video signals, salt-and-pepper noise and additive white Gaussian noise (AWGN) arise for multiple reasons. Failure to remove such noise before image processing can cause problems. Noise is generally removed with filters such as the CWMF, MF, and AMF. However, these filters perform somewhat poorly in the high-density noise domain and cause smoothing, which slightly degrades the retention of edge components. In this paper, we propose an algorithm that effectively eliminates salt-and-pepper noise using a modified weighted filter based on the standard deviation. To demonstrate the noise reduction performance of the proposed algorithm, we compared it with existing algorithms using PSNR and magnified images.
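A minimal sketch of the general idea, assuming (since the abstract gives no formula) that corrupted pixels are detected as extreme values and replaced by a mean of uncorrupted neighbours weighted through a Gaussian of their standardized deviation; the exact weighting in the paper may differ.

```python
import numpy as np

def sp_weighted_filter(img, win=1):
    """Replace salt-and-pepper pixels (0 or 255) with a weighted mean of the
    non-noisy pixels in a (2*win+1)^2 window; each neighbour's weight decays
    with its deviation from the window mean, scaled by the window's std."""
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if img[i, j] not in (0, 255):
                continue  # only corrupted pixels are modified
            patch = img[max(i - win, 0):i + win + 1,
                        max(j - win, 0):j + win + 1].astype(float)
            good = patch[(patch != 0) & (patch != 255)]
            if good.size == 0:
                continue  # no reliable neighbours in this window
            sd = good.std() + 1e-9
            wts = np.exp(-0.5 * ((good - good.mean()) / sd) ** 2)  # Gaussian weights
            out[i, j] = np.sum(wts * good) / np.sum(wts)
    return out

noisy = np.full((5, 5), 100.0)
noisy[2, 2] = 255.0          # a single salt hit
clean = sp_weighted_filter(noisy)
```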

EDNN based prediction of strength and durability properties of HPC using fibres & copper slag

  • Gupta, Mohit;Raj, Ritu;Sahu, Anil Kumar
    • Advances in concrete construction
    • /
    • v.14 no.3
    • /
    • pp.185-194
    • /
    • 2022
  • The construction field has been encouraged to use industrial waste or secondary materials in producing cement and concrete, since this decreases the consumption of natural resources. At the same time, ensuring quality requires analysis of the strength and durability properties of such cement and concrete. Existing research has focused on predicting the strength and other properties of High-Performance Concrete (HPC) with optimization and machine learning algorithms, but suffers from error and accuracy issues. This work therefore uses an Enhanced Deep Neural Network (EDNN) to predict the strength and durability of HPC. First, the data is gathered. Then the data is pre-processed by eliminating missing entries and normalizing. Next, features are extracted from the pre-processed data and fed to the EDNN algorithm, which predicts the strength and durability properties of the given mix designs. The weight values of the EDNN are initialized with the Switched Multi-Objective Jellyfish Optimization (SMOJO) algorithm, and the Gaussian radial function is used as the activation function. The proposed EDNN's performance is examined against existing algorithms in the experimental analysis. Based on the RMSE, MAE, MAPE, and R2 metrics, the proposed EDNN is compared to existing DNN, CNN, ANN, and SVM methods, and performs better. Its effectiveness is further examined in terms of accuracy, precision, recall, and F-measure. The fitness of the proposed SMOJO algorithm is also compared with existing algorithms, i.e., JO, GWO, PSO, and GA, and SMOJO achieves a higher fitness value.
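The Gaussian radial function mentioned as the EDNN's activation is the standard radial basis kernel; a minimal sketch (names and values are illustrative):

```python
import numpy as np

def gaussian_rbf(x, center, sigma=1.0):
    """Gaussian radial basis activation: exp(-||x - c||^2 / (2 sigma^2)).
    Peaks at 1 when x equals the centre and decays radially with distance."""
    return np.exp(-np.sum((x - center) ** 2, axis=-1) / (2.0 * sigma ** 2))

# activation at the centre itself, and one unit away from it
vals = gaussian_rbf(np.array([[0.0, 0.0], [1.0, 0.0]]), np.zeros(2))
```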

Image-based Soft Drink Type Classification and Dietary Assessment System Using Deep Convolutional Neural Network with Transfer Learning

  • Rubaiya Hafiz;Mohammad Reduanul Haque;Aniruddha Rakshit;Amina khatun;Mohammad Shorif Uddin
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.158-168
    • /
    • 2024
  • There is hardly any person in modern times who has not taken soft drinks instead of drinking water. With the rate of soft-drink consumption surprisingly high, researchers around the world have repeatedly cautioned that these drinks lead to weight gain, raise the risk of non-communicable diseases, and so on. In this work, therefore, an image-based tool is developed to monitor the nutritional information of soft drinks using a deep convolutional neural network with transfer learning. First, visual saliency, mean-shift segmentation, thresholding, and noise reduction, collectively known as 'pre-processing', are applied to locate the drink region. After removing the background and segmenting out only the desired area of the image, a Discrete Wavelet Transform (DWT) based resolution enhancement technique is applied to improve image quality. A transfer learning model is then employed to classify the drinks. Finally, the nutrition value of each drink is estimated using Bag-of-Features (BoF) based classification and a Euclidean distance-based ratio calculation. To achieve this, a dataset was built from the ten most consumed soft drinks in Bangladesh, with images collected from the ImageNet dataset as well as the internet. The proposed method detects and recognizes different types of drinks with an accuracy of 98.51%.
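The BoF classification step with Euclidean distance can be sketched as a nearest-centroid assignment of a feature histogram. The class names, histograms, and function below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def nearest_class(hist, class_hists):
    """Assign a bag-of-features histogram to the class whose reference
    histogram is nearest in Euclidean distance."""
    dists = {c: np.linalg.norm(hist - h) for c, h in class_hists.items()}
    return min(dists, key=dists.get)

# hypothetical per-class reference histograms over 3 visual words
refs = {
    "cola":     np.array([0.7, 0.2, 0.1]),
    "lemonade": np.array([0.1, 0.3, 0.6]),
}
label = nearest_class(np.array([0.6, 0.3, 0.1]), refs)
```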

Mechanical Properties of Concrete with Statistical Variations (통계적 분산을 고려한 콘크리트의 역학적 특성)

  • Kim, Jee-Sang;Shin, Jeong-Ho
    • Journal of the Korea Concrete Institute
    • /
    • v.21 no.6
    • /
    • pp.789-796
    • /
    • 2009
  • The randomness in the strength of an RC member is caused mainly by the variability of the mechanical properties of concrete and steel, the dimensions of concrete cross sections, and the placement of reinforcing bars. Among these, the randomness and uncertainty of the mechanical properties of concrete, such as compressive strength, tensile strength, and elastic modulus, have the most significant influence and show relatively large statistical variation. In Korea, there has been little effort to construct domestic statistical models for the mechanical properties of concrete and steel, so foreign data have been used until now. In this paper, the variability of the compressive strength, tensile strength, and elastic modulus of normal-weight structural concrete at various specified design compressive strength levels is examined based on data obtained from a number of published and unpublished domestic sources and additional laboratory tests by the authors. Gaussian distributions are proposed as the inherent probabilistic models for the compressive and tensile strength of normal-weight concrete. The relationships between compressive and splitting tensile strength, and between compressive strength and elastic modulus, in the current KCI Code are also verified, and new ones are suggested based on local data.
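Modeling compressive strength as Gaussian amounts to fitting a mean and standard deviation to test data and reading probabilities from the normal CDF. The sample values below are made up for illustration; only the procedure reflects the abstract.

```python
import numpy as np
from math import erf, sqrt

# hypothetical compressive-strength test results (MPa)
samples = np.array([31.2, 28.5, 30.1, 29.4, 32.0, 27.8, 30.6, 29.9])
mu, sigma = samples.mean(), samples.std(ddof=1)  # Gaussian fit

def p_below(x, mu, sigma):
    """P(strength < x) under the fitted Gaussian model (normal CDF)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# probability that a specimen falls below a 27 MPa specification
under_spec = p_below(27.0, mu, sigma)
```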

Algorithms for Indexing and Integrating MPEG-7 Visual Descriptors (MPEG-7 시각 정보 기술자의 인덱싱 및 결합 알고리즘)

  • Song, Chi-Ill;Nang, Jong-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.1
    • /
    • pp.1-10
    • /
    • 2007
  • This paper proposes a new indexing mechanism for MPEG-7 visual descriptors, especially the Dominant Color and Contour Shape descriptors, that guarantees an efficient similarity search for a multimedia database whose visual metadata are represented with MPEG-7. Since the similarity metric used in the Dominant Color descriptor is based on a Gaussian mixture model, the descriptor itself can be transformed into a color histogram in which the distribution of the color values follows the Gaussian distribution. The transformed Dominant Color descriptor (i.e., the color histogram) is then indexed in the proposed mechanism. For indexing the Contour Shape descriptor, we use a two-pass algorithm. In the first pass, since the similarity of two shapes can be roughly measured with the global parameters of the Contour Shape descriptor, such as eccentricity and circularity, dissimilar image objects are first excluded using these global parameters. The similarities between the query and the remaining image objects are then measured with the peak parameters of the Contour Shape descriptor. This two-pass approach reduces the computation needed to measure the similarity of image objects with the Contour Shape descriptor. This paper also proposes two schemes for integrating visual descriptors for efficient retrieval. One uses the weight of a descriptor as a yardstick to determine the number of similar image objects selected with respect to that descriptor; the other uses the weight as the degree of importance of the descriptor in the global similarity measurement. Experimental results show that the proposed indexing and integration schemes produce a remarkable speed-up compared to exact similarity search, with some loss of accuracy due to the approximated computation in indexing. The proposed schemes can be used to build a multimedia database in MPEG-7 that guarantees efficient retrieval.
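The second integration scheme, where a descriptor's weight acts as its importance in the global similarity measurement, reduces to a weighted average of per-descriptor distances. The descriptor names and weight values below are illustrative, not from the paper.

```python
def integrate_similarity(distances, weights):
    """Combine per-descriptor distances into one global score, using each
    descriptor's weight as its degree of importance in the measurement."""
    total = sum(weights.values())
    return sum(weights[d] * distances[d] for d in distances) / total

# hypothetical distances and importance weights for two MPEG-7 descriptors
d = {"dominant_color": 0.2, "contour_shape": 0.6}
w = {"dominant_color": 2.0, "contour_shape": 1.0}
score = integrate_similarity(d, w)
```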

Coupled Finite Element Analysis of Partially Saturated Soil Slope Stability (유한요소 연계해석을 이용한 불포화 토사사면 안전성 평가)

  • Kim, Jae-Hong;Lim, Jae-Seong;Park, Seong-Wan
    • Journal of the Korean Geotechnical Society
    • /
    • v.30 no.4
    • /
    • pp.35-45
    • /
    • 2014
  • Limit equilibrium methods of slope stability analysis have been widely adopted, mainly for their simplicity and applicability. However, these conventional methods may not give reliable and convincing results for various geological conditions, such as nonhomogeneous and anisotropic soils. They also take into account neither the history of the soil slope nor the initial state of stress, for example from excavation or fill placement. In contrast to limit equilibrium analysis, finite element analysis of deformation and stress distribution can deal with complex loading sequences and the growth of the inelastic zone over time. This paper proposes a technique to determine the critical slip surface and to calculate the factor of safety for shallow failure of a partially saturated soil slope. Based on the effective stress field in the finite element analysis, all stresses are estimated at each Gaussian point of the elements. The search strategy for a noncircular critical slip surface along weak points is appropriate for rainfall-induced shallow slope failure. The change in unit weight due to seepage force affects the horizontal and vertical displacements of the soil slope. The Drucker-Prager failure criterion was adopted as the stress-strain relation to calculate the coupled hydraulic and mechanical behavior of the partially saturated soil slope.
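The "Gaussian points" at which element stresses are evaluated are the sampling points of Gauss-Legendre quadrature, the standard integration rule in finite element codes. A minimal one-dimensional sketch of the 2-point rule:

```python
from math import sqrt

# 2-point Gauss-Legendre rule on [-1, 1]: points at +/- 1/sqrt(3), weights 1
gauss_pts = (-1.0 / sqrt(3.0), 1.0 / sqrt(3.0))
gauss_wts = (1.0, 1.0)

def integrate(f):
    """Integrate f over [-1, 1] by sampling at the Gaussian points.
    The 2-point rule is exact for polynomials up to degree 3."""
    return sum(w * f(x) for x, w in zip(gauss_pts, gauss_wts))

# exact integral of x^2 + 1 over [-1, 1] is 8/3
area = integrate(lambda x: x ** 2 + 1.0)
```

In a 2-D element the same rule is applied in each local coordinate, and stresses are recovered at those sample points because that is where the quadrature, and hence the solution, is most accurate.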

Design of Data-centroid Radial Basis Function Neural Network with Extended Polynomial Type and Its Optimization (데이터 중심 다항식 확장형 RBF 신경회로망의 설계 및 최적화)

  • Oh, Sung-Kwun;Kim, Young-Hoon;Park, Ho-Sung;Kim, Jeong-Tae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.60 no.3
    • /
    • pp.639-647
    • /
    • 2011
  • In this paper, we introduce a design methodology for data-centroid Radial Basis Function (RBF) neural networks with extended polynomial functions. The two underlying design mechanisms of such networks are the K-means clustering method and Particle Swarm Optimization (PSO). The proposed algorithm uses K-means clustering for efficient processing of the data, and the model is optimized with PSO. As the connection weights of the RBF neural network, four types of polynomials can be used: simplified, linear, quadratic, and modified quadratic. Using K-means clustering, the center values of the Gaussian activation functions are selected. The PSO-based RBF neural network yields a structurally optimized network with a higher level of flexibility than conventional RBF neural networks. The PSO-based design procedure, applied at each node, selects preferred parameters with specific local characteristics (such as the number of input variables, the specific set of input variables, and the distribution constant of the activation function) within the RBF neural network. To evaluate the performance of the proposed data-centroid RBF neural network with extended polynomial functions, the model is tested on nonlinear process data (2-dimensional synthetic data and Mackey-Glass time series data) and machine learning datasets (NOx emission process data from a gas turbine plant, Automobile Miles per Gallon (MPG) data, and Boston housing data). For characteristic analysis of the whole dataset, including its nonlinearity, and for efficient construction and evaluation of the dynamic network model, the dataset is partitioned in two ways: Division I (training and testing datasets) and Division II (training, validation, and testing datasets). A comparative analysis shows that the proposed RBF neural network produces models with higher accuracy and better predictive capability than other intelligent models presented previously.
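The data-centroid idea, K-means placing the Gaussian centers and a linear solve fitting the output weights, can be sketched as follows (PSO tuning and the polynomial weight variants are omitted; the 1-D regression task and all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain k-means: place the RBF centres at data centroids."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers, sigma):
    """Gaussian activations of every sample against every centre."""
    d2 = ((X[:, None] - centers[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# 1-D toy regression: approximate y = sin(x) on [0, pi]
X = np.linspace(0.0, np.pi, 40)[:, None]
y = np.sin(X[:, 0])
C = kmeans(X, 5)
Phi = rbf_design(X, C, sigma=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear output weights
pred = Phi @ w
```

In the paper's full method, PSO would additionally search over the input subset and the distribution constant (here the fixed `sigma`) at each node.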

Denoising on Image Signal in Wavelet Basis with the VisuShrink Technique Using the Estimated Noise Deviation by the Monotonic Transform (웨이블릿 기저의 영상신호에서 단조변환으로 추정된 잡음편차를 사용한 VisuShrink 기법의 잡음제거)

  • 우창용;박남천
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.2
    • /
    • pp.111-118
    • /
    • 2004
  • Techniques based on thresholding of wavelet coefficients are gaining popularity for denoising data because of their reasonable performance at low complexity. VisuShrink, which removes noise with the universal threshold, is one such technique. The universal threshold is proportional to the noise deviation and to the number of data samples. Because the noise deviation is generally unknown, it must be estimated to determine the value of the universal threshold. However, a way of estimating the noise deviation is known only for the finest-scale wavelet coefficients, so noise at coarser scales cannot be removed with VisuShrink. We propose a new denoising method that removes the noise at each scale except the coarsest by the VisuShrink method. The noise deviation in each band is estimated by the monotonic transform, and the weighted deviation, the product of the estimated noise deviation and a weight, is applied to the universal threshold. Using this universal threshold together with soft thresholding, the noise in each band is removed. The denoising characteristics of the proposed method are compared with those of the traditional VisuShrink and SureShrink methods. The results show that the proposed method is effective in removing Gaussian noise and quantization noise.
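The standard VisuShrink machinery referenced here is the universal threshold sigma * sqrt(2 ln n) combined with soft thresholding, with sigma estimated from the median absolute deviation of the finest-scale detail coefficients. A minimal sketch on a synthetic detail band (the paper's monotonic-transform estimator for coarser scales is not reproduced):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink every coefficient towards zero by t,
    setting coefficients smaller than t in magnitude to exactly zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def visushrink_threshold(detail_coeffs):
    """Universal threshold sigma * sqrt(2 ln n); sigma is the usual MAD
    estimate from the detail coefficients (MAD of N(0,1) is ~0.6745)."""
    sigma = np.median(np.abs(detail_coeffs)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(detail_coeffs.size))

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, 1024)   # a pure-noise detail band
t = visushrink_threshold(noise)
denoised = soft_threshold(noise, t)  # almost all coefficients are zeroed
```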


Linear programming models using a Dantzig type risk for portfolio optimization (Dantzig 위험을 사용한 포트폴리오 최적화 선형계획법 모형)

  • Ahn, Dayoung;Park, Seyoung
    • The Korean Journal of Applied Statistics
    • /
    • v.35 no.2
    • /
    • pp.229-250
    • /
    • 2022
  • Since the publication of Markowitz's (1952) mean-variance portfolio model, research on portfolio optimization has been conducted in many fields. The classical mean-variance portfolio model is a nonlinear convex problem. By applying Dantzig's linear programming method, it can be converted to a linear form, which effectively reduces computation time. In this paper, we propose a Dantzig perturbation portfolio model that can reduce management and transaction costs by constructing a portfolio of stable and few (sparse) assets. The average return and risk are adjusted to the investor's purpose by a perturbation method in which a certain fraction is invested in the existing benchmark and the rest in the assets proposed by the portfolio optimization model. For covariance estimation, we propose a Gaussian kernel weight covariance that uses time-dependent weights to reflect the characteristics of time-series data. The performance of the proposed model was evaluated against the benchmark portfolio on five real data sets. Empirical results show that the proposed portfolios provide higher expected returns or lower risks than the benchmark, and that sparse and stable asset selection is obtained.
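A Gaussian kernel weight covariance of the kind described can be sketched as a weighted sample covariance whose weights decay with the age of each observation under a Gaussian kernel. The bandwidth, data, and exact weighting convention below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def gaussian_kernel_cov(returns, bandwidth):
    """Covariance with Gaussian kernel weights emphasising recent rows.

    returns: (T, p) matrix of asset returns; row T-1 is the most recent.
    bandwidth: kernel width in observations (larger = slower decay).
    """
    T = returns.shape[0]
    ages = np.arange(T)[::-1]                    # age 0 for the latest row
    w = np.exp(-0.5 * (ages / bandwidth) ** 2)   # Gaussian decay with age
    w /= w.sum()                                 # normalise to sum to 1
    mu = w @ returns                             # weighted mean return
    X = returns - mu
    return (X * w[:, None]).T @ X                # weighted covariance matrix

rng = np.random.default_rng(2)
R = rng.normal(0.0, 0.01, size=(250, 3))         # a year of 3-asset returns
S = gaussian_kernel_cov(R, bandwidth=60.0)
```

The resulting matrix is symmetric positive semi-definite by construction, so it can be dropped into the linear-programming portfolio model in place of the equal-weight sample covariance.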