• Title/Summary/Keyword: 일반화 성능 (generalization performance)

Search Result 590, Processing Time 0.031 seconds

Performance Analysis of Generalized Triangular QAM (일반화된 Triangular QAM의 성능 분석)

  • Cho, Kyong-Kuk;Lee, Jae-Yoon;Yoon, Dong-Weon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.11C
    • /
    • pp.885-888
    • /
    • 2010
  • It is known that triangular QAM (TQAM), with its triangular signal constellations, achieves a better symbol error rate (SER) than square QAM with rectangular signal constellations. In this paper, we propose an exact and general closed-form expression for the symbol error probability of generalized TQAM, a topic of current research. We verify that simulation results are consistent with the derived expression for generalized TQAM, and we also suggest optimal signal constellation schemes for generalized TQAM at given SNR values.

Hybrid census transform considering Gaussian noise and computational complexity (가우시안 잡음과 계산량을 고려한 하이브리드 센서스 변환)

  • Jeong, Seong-Hwan;Kang, Sung-Jin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.14 no.8
    • /
    • pp.3983-3991
    • /
    • 2013
  • The census transform is a stereo vision method that is robust to radiometric distortion and illumination change. This paper proposes a hybrid census transform that uses the mini census transform and the generalized census transform concurrently, combining the simplicity of the former with the noise robustness of the latter. To evaluate each method, we performed stereo matching including post-processing. The results show that the hybrid census transform performs similarly to the generalized census transform, at a computational cost about midway between those of the mini and generalized census transforms.
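As a concrete reference point, the plain census transform that the mini and generalized variants build on can be sketched as follows (an illustrative numpy sketch, not the paper's hybrid method; border pixels are handled by wrap-around for brevity):

```python
import numpy as np

def census_transform(img, win=3):
    """Basic census transform: encode each pixel as a bit string of
    comparisons between the window's neighbors and its center pixel."""
    r = win // 2
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            # Align each neighbor with the center (wrap-around at borders).
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming(a, b):
    # Matching cost between two census codes: number of differing bits.
    return bin(int(a) ^ int(b)).count("1")
```

Stereo matching then compares census codes across the two views with the Hamming distance, which is what makes the transform insensitive to monotonic intensity changes.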

k-Nearest Neighbor Learning with Varying Norms (놈(Norm)에 따른 k-최근접 이웃 학습의 성능 변화)

  • Kim, Doo-Hyeok;Kim, Chan-Ju;Hwang, Kyu-Baek
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06c
    • /
    • pp.371-375
    • /
    • 2008
  • k-nearest neighbor (k-NN) learning, a form of instance-based learning, is simple and achieves relatively high prediction accuracy, so it is widely applied as a base methodology for classification and regression. Algorithms for k-NN learning basically compute distances between training examples using the Euclidean distance, i.e., the 2-norm. In this paper, we study how using the p-norm, a generalization of the Euclidean distance, affects the performance of k-NN learning. Specifically, we empirically investigated generalization performance by applying various p-norms to synthetic data, a number of machine learning benchmark problems, and real-world data. Experimental results show that smaller values of p improved performance when the data were noisy or the problem was difficult.

  • PDF
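The distance computation the paper varies can be sketched as a majority-vote k-NN classifier with a Minkowski (p-norm) distance (a minimal numpy sketch, not the authors' experimental code):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3, p=2.0):
    """Majority-vote k-NN with a Minkowski (p-norm) distance.
    p=2 gives the usual Euclidean distance; the paper's finding is that
    smaller p can help on noisy or difficult problems."""
    d = np.sum(np.abs(X_train - x) ** p, axis=1) ** (1.0 / p)
    idx = np.argsort(d)[:k]                   # indices of the k nearest
    labels, counts = np.unique(y_train[idx], return_counts=True)
    return labels[np.argmax(counts)]          # majority label
```

Varying `p` changes only the distance, so the same routine reproduces the 1-norm, 2-norm, and other settings the paper compares.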

Learning Performance Improvement of Fuzzy RBF Network (퍼지 RBF 네트워크의 학습 성능 개선)

  • Kim Jae-Yong;Kim Kwang-Baek
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2005.04a
    • /
    • pp.335-339
    • /
    • 2005
  • In this paper, we propose an improved fuzzy RBF network that dynamically adjusts the learning rate by applying the Delta-bar-Delta algorithm, in order to improve the learning performance of fuzzy RBF networks. The proposed learning algorithm combines the fuzzy C-means algorithm with the generalized delta learning rule: nodes of the middle (hidden) layer are self-generated, and in training the middle and output layers the Delta-bar-Delta algorithm is applied to the generalized delta rule to dynamically adjust the learning rate, improving learning performance. To evaluate the learning performance of the proposed RBF network, we used 40 identifiers extracted from container images as training data; compared with the existing ART2-based RBF network and the existing fuzzy RBF network, the proposed network required less training time and showed improved learning convergence.

  • PDF
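The learning-rate adaptation applied here is the standard Delta-bar-Delta rule (Jacobs, 1988). A minimal per-weight sketch, assuming numpy and plain gradient descent rather than the paper's fuzzy RBF network:

```python
import numpy as np

def delta_bar_delta_step(w, lr, bar_delta, grad,
                         kappa=0.01, phi=0.1, theta=0.7):
    """One Delta-bar-Delta update (Jacobs, 1988): every weight keeps its
    own learning rate, which grows additively while the current gradient
    agrees in sign with an exponential average of past gradients
    (bar_delta) and shrinks multiplicatively when the signs disagree."""
    agree = bar_delta * grad > 0
    disagree = bar_delta * grad < 0
    lr = lr + kappa * agree                        # additive increase
    lr = lr * np.where(disagree, 1.0 - phi, 1.0)   # multiplicative decrease
    w = w - lr * grad                              # per-weight descent step
    bar_delta = (1.0 - theta) * grad + theta * bar_delta
    return w, lr, bar_delta
```

The constants `kappa`, `phi`, and `theta` are conventional defaults from the Delta-bar-Delta literature, not values taken from this paper.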

Cross-Validated Ensemble Methods in Natural Language Inference (자연어 추론에서의 교차 검증 앙상블 기법)

  • Yang, Kisu;Whang, Taesun;Oh, Dongsuk;Park, Chanjun;Lim, Heuiseok
    • Annual Conference on Human and Language Technology
    • /
    • 2019.10a
    • /
    • pp.8-11
    • /
    • 2019
  • Ensemble methods are machine learning techniques that combine multiple models to produce a final decision, and they guarantee performance improvements for deep learning models. Most such methods, however, require additional models or separate computation solely for the ensemble. We therefore propose a cross-validated ensemble method that combines ensembling with cross-validation, reducing the cost of the ensemble computation while improving generalization performance. To demonstrate its effectiveness, we show improved performance over existing ensemble methods on the MRPC and RTE datasets using BiLSTM, CNN, and BERT models. We additionally discuss the generalization principle that arises from cross-validation and how performance varies with the cross-validation parameters.

  • PDF
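The core idea — reusing the k models already trained during k-fold cross-validation as an ensemble, so no extra models are trained just for ensembling — can be sketched as follows (an illustrative numpy sketch; the `fit` and `predict_proba` callables are placeholders, not the paper's BiLSTM/CNN/BERT setup):

```python
import numpy as np

def cv_ensemble_predict(X, y, X_test, fit, predict_proba, k=5, seed=0):
    """Cross-validated ensemble: train one model per cross-validation
    fold on the other k-1 folds, then average (soft-vote) the k models'
    class probabilities on the test set."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    probs = []
    for i in range(k):
        train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
        model = fit(X[train_idx], y[train_idx])    # train on k-1 folds
        probs.append(predict_proba(model, X_test))
    return np.mean(probs, axis=0)                  # soft-vote average
```

Each of the k models also yields a validation score on its held-out fold, which is where the usual cross-validation estimate comes from at no extra cost.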

A Pruning Algorithm of Neural Networks Using Impact Factors (임팩트 팩터를 이용한 신경 회로망의 연결 소거 알고리즘)

  • 이하준;정승범;박철훈
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.2
    • /
    • pp.77-86
    • /
    • 2004
  • In general, small-sized neural networks, even though they show good generalization performance, tend to fail to learn the training data within a given error bound, whereas large-sized ones learn the training data easily but yield poor generalization. Therefore, a way of achieving good generalization is to find the smallest network that can learn the data, called the optimal-sized neural network. This paper proposes a new scheme for network pruning based on an 'impact factor', defined as the product of the variance of a neuron's output and the square of its outgoing weight. Simulation results on function approximation problems show that the proposed method is effective for regression.
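The impact factor as defined in the abstract (output variance times squared outgoing weight) is straightforward to compute. A minimal numpy sketch (the pruning fraction `frac` is an illustrative choice, not from the paper):

```python
import numpy as np

def impact_factors(hidden_outputs, W_out):
    """Impact factor of each hidden-unit -> output connection, as defined
    in the abstract: the variance of the neuron's output over the training
    set times the square of the connection's outgoing weight.
    hidden_outputs: (n_samples, n_hidden); W_out: (n_hidden, n_outputs)."""
    var = np.var(hidden_outputs, axis=0)   # per-neuron output variance
    return var[:, None] * W_out ** 2       # shape (n_hidden, n_outputs)

def prune_mask(hidden_outputs, W_out, frac=0.2):
    """Keep-mask that drops the fraction of connections with least impact."""
    imp = impact_factors(hidden_outputs, W_out)
    return imp > np.quantile(imp, frac)
```

A neuron whose output is constant over the training set has zero variance, hence zero impact on every outgoing connection, which matches the intuition that such a unit can be folded into a bias term.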

Performance of Generalized BER for Hierarchical MPSK Signal (계층적 MPSK 신호에 대한 일반화된 BER 성능)

  • Lee Jae-Yoon;Yoon Dong-Weon;Hyun Kwang-Min;Park Sang-Kyu
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.9C
    • /
    • pp.831-839
    • /
    • 2006
  • In this paper, we present an exact and general expression involving two-dimensional Gaussian Q-functions for the bit error rate (BER) of hierarchical MPSK with I/Q phase and amplitude imbalances over an additive white Gaussian noise (AWGN) channel. First, we derive a BER expression for the k-th bit of hierarchical 4-, 8-, and 16-PSK signal constellations when Gray-coded bit mapping is employed. Then, from the derived k-th-bit BER expression, we present the exact and general average BER expression for hierarchical MPSK with I/Q phase and amplitude imbalances. This result can readily be applied to numerical evaluation for various cases of practical interest in an I/Q-unbalanced hierarchical MPSK system, because the one- and two-dimensional Gaussian Q-functions can be easily and directly computed using commonly available mathematical software tools.
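The one- and two-dimensional Gaussian Q-functions such expressions rely on can indeed be computed with standard software. A minimal sketch assuming scipy:

```python
import numpy as np
from scipy.special import erfc
from scipy.stats import multivariate_normal

def q1(x):
    """One-dimensional Gaussian Q-function, Q(x) = P(N(0,1) > x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def q2(x, y, rho):
    """Two-dimensional Gaussian Q-function Q(x, y; rho) = P(X > x, Y > y)
    for standard bivariate normal (X, Y) with correlation rho. Since
    (-X, -Y) has the same correlation, this equals the bivariate CDF
    evaluated at (-x, -y)."""
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([-x, -y])
```

For rho = 0 the joint probability factorizes, so q2(x, y, 0) reduces to q1(x) * q1(y), a convenient sanity check.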

The Joint Effect of factors on Generalization Performance of Neural Network Learning Procedure (신경망 학습의 일반화 성능향상을 위한 인자들의 결합효과)

  • Yoon YeoChang
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.343-348
    • /
    • 2005
  • The goal of this paper is to study the joint effect of factors in the neural network learning procedure. Many factors may affect the generalization ability and learning speed of neural networks, such as the initial values of the weights, the learning rate, and the regularization coefficient. We apply a constructive training algorithm for neural networks, in which patterns are trained incrementally, one by one. First, we investigate the effect of these factors on generalization performance and learning speed. Based on these effects, we propose a joint method that simultaneously considers all three factors and dynamically tunes the learning rate and regularization coefficient. We then present experimental comparisons among these methods on several simulated nonlinear datasets. Finally, we draw conclusions and outline plans for future work.

Comparing the efficiency of dispersion parameter estimators in gamma generalized linear models (감마 일반화 선형 모형에서의 산포 모수 추정량에 대한 효율성 연구)

  • Jo, Seongil;Lee, Woojoo
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.1
    • /
    • pp.95-102
    • /
    • 2017
  • Gamma generalized linear models have received less attention than Poisson and binomial generalized linear models. Therefore, many long-established statistical techniques are still used in gamma generalized linear models. In particular, existing literature and textbooks still use approximate estimates for the dispersion parameter. In this paper we study the efficiency of various dispersion parameter estimators in gamma generalized linear models and perform numerical simulations. Numerical studies show that the maximum likelihood estimator and the Cox-Reid adjusted maximum likelihood estimator are recommended, and that approximate estimates should be avoided in practice.
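For the simplest special case (means already fitted, no covariate structure shown), the approximate Pearson-type estimator and the maximum-likelihood estimator of the gamma dispersion phi = 1/alpha can be sketched as follows (an illustrative numpy/scipy sketch; the Cox-Reid adjustment the paper also recommends is omitted):

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

def pearson_dispersion(y, mu, n_params):
    """Approximate (Pearson/moment) dispersion estimate: the sum of
    squared Pearson residuals over the residual degrees of freedom."""
    return np.sum(((y - mu) / mu) ** 2) / (len(y) - n_params)

def ml_dispersion(y, mu):
    """Maximum-likelihood dispersion phi = 1/alpha for gamma responses
    with fitted means mu. The score equation for the shape alpha is
    log(alpha) - digamma(alpha) = mean(y/mu - log(y/mu)) - 1."""
    rhs = np.mean(y / mu - np.log(y / mu)) - 1.0
    alpha = brentq(lambda a: np.log(a) - digamma(a) - rhs, 1e-6, 1e6)
    return 1.0 / alpha
```

Both estimators target the same quantity; the paper's point is that their efficiencies differ, with the likelihood-based estimators preferred.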

Performance Evaluation of the Extraction Method of Representative Keywords by Fuzzy Inference (퍼지추론 기반 대표 키워드 추출방법의 성능 평가)

  • Rho Sun-Ok;Kim Byeong Man;Oh Sang Yeop;Lee Hyun Ah
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.10 no.1
    • /
    • pp.28-37
    • /
    • 2005
  • In our previous work, we suggested a method that extracts representative keywords from a few positive documents and assigns weights to them. To show the usefulness of the method, in this paper we evaluate the performance of a well-known classification algorithm, GIS (Generalized Instance Set), when it is combined with our method. In the GIS algorithm, generalized instances are built from training documents by a generalization function, and the k-NN algorithm is then applied to them. Here, our method is used as the generalization function. For comparison, the Rocchio and Widrow-Hoff algorithms are also used as generalization functions. Experimental results show that our method outperforms the others when only positive documents are considered, but not when negative documents are considered as well.

  • PDF
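A Rocchio-style generalization function, one of the comparison baselines named in the abstract, can be sketched as follows (an illustrative numpy sketch; the `beta`/`gamma` weights are conventional defaults, not the paper's settings):

```python
import numpy as np

def rocchio_prototype(pos, neg=None, beta=16.0, gamma=4.0):
    """Rocchio-style generalization function: build a generalized
    instance as the centroid of the positive documents, optionally
    pushed away from the centroid of the negative documents.
    pos/neg: (n_docs, n_terms) term-weight matrices."""
    proto = beta * pos.mean(axis=0)
    if neg is not None and len(neg):
        proto = proto - gamma * neg.mean(axis=0)
    return np.maximum(proto, 0.0)   # clip negative term weights, as is common
```

In a GIS-style setup, prototypes built this way replace raw documents as the instances that k-NN is run against.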