• Title/Summary/Keyword: 활성화 함수 (activation function)


Radiation-Implemented Combustion of Homogeneous Solid Propellants (복사 열 속의 영향을 받는 고체 추진제의 연소반응)

  • 남삼식;이창진;김성인
    • Proceedings of the Korean Society of Propulsion Engineers Conference / 1998.10a / pp.5-5 / 1998
  • The combustion response characteristics to radiative heat-flux perturbations were examined using a modified combustion response function. To examine the combustion response of a catalyzed DB N5 propellant, the results were compared with the experiments of Son et al. Son et al.'s combustion response function reproduced the experimental data only at physically implausible activation energies, whereas the modified response function predicted the burning-rate response to radiative heat flux fairly well within the realistic activation-energy range of similar propellants. This is attributed to accounting for the influence of the radiative heat flux (f, J), which Son et al. had underestimated. The steady-combustion relations proposed by Ibiricu et al. were used to obtain the sensitivity parameters. The variation of the steady burning rate with surface temperature for an AP-based propellant showed trends qualitatively similar to Zanotti's experimental results for the AP2 propellant.


Comparison of Gradient Descent for Deep Learning (딥러닝을 위한 경사하강법 비교)

  • Kang, Min-Jae
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.2 / pp.189-194 / 2020
  • This paper analyzes gradient descent, the method most widely used for training neural networks. Learning means updating the parameters so that the loss function, which quantifies the difference between actual and predicted values, reaches its minimum. Gradient descent uses the slope of the loss function to update the parameters so as to minimize the error, and it underlies the libraries that currently provide the best deep learning algorithms. However, these algorithms are provided as black boxes, making it difficult to identify the advantages and disadvantages of the various gradient descent methods. This paper analyzes the characteristics of the stochastic gradient descent, momentum, AdaGrad, and Adadelta methods, which are the gradient descent methods currently in use (a minimal sketch of these four update rules is given below). The experiments used the Modified National Institute of Standards and Technology (MNIST) data set, which is widely used to verify neural networks. There are two hidden layers: the first with 500 neurons and the second with 300. The activation function of the output layer is the softmax function, and the rectified linear unit is used for the input and hidden layers. The loss function is the cross-entropy error.
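
The abstract names four update rules: stochastic gradient descent, momentum, AdaGrad, and Adadelta. As a rough, self-contained illustration (not the paper's MNIST experiment), the NumPy sketch below applies those update rules to a toy ill-conditioned quadratic loss; all hyperparameter values here are assumptions.

```python
import numpy as np

# Toy ill-conditioned quadratic loss f(w) = 0.5 * w^T A w; its gradient is A w.
A = np.diag([1.0, 25.0])
grad_f = lambda w: A @ w

def run(update, steps=200, w0=(3.0, 3.0)):
    w, state = np.array(w0, dtype=float), {}
    for _ in range(steps):
        w = update(w, grad_f(w), state)
    return w

def sgd(w, g, s, lr=0.03):
    return w - lr * g

def momentum(w, g, s, lr=0.03, beta=0.9):
    s["v"] = beta * s.get("v", 0.0) + g                   # velocity accumulates past gradients
    return w - lr * s["v"]

def adagrad(w, g, s, lr=0.5, eps=1e-8):
    s["r"] = s.get("r", 0.0) + g * g                      # per-parameter sum of squared gradients
    return w - lr * g / (np.sqrt(s["r"]) + eps)

def adadelta(w, g, s, rho=0.95, eps=1e-6):
    s["r"] = rho * s.get("r", 0.0) + (1 - rho) * g * g    # running average of squared gradients
    dx = -np.sqrt(s.get("d", 0.0) + eps) / np.sqrt(s["r"] + eps) * g
    s["d"] = rho * s.get("d", 0.0) + (1 - rho) * dx * dx  # running average of squared updates
    return w + dx

# Print how close each method gets to the minimum at (0, 0) after 200 steps.
for name, upd in [("SGD", sgd), ("Momentum", momentum), ("AdaGrad", adagrad), ("Adadelta", adadelta)]:
    print(name, run(upd))
```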

Neural Network Models with Gaussian Potential Functions (가우스 전위함수를 가지는 신경회로망 모델)

  • O, Sang-Hun;Kim, Meong-Won
    • Electronics and Telecommunications Trends / v.5 no.2 / pp.39-50 / 1990
  • Since it was reported that the multilayer perceptron can solve a variety of complex problems through backpropagation learning, applications of this model have been studied actively. However, the multilayer perceptron requires long training times, and because its decision boundaries are formed as combinations of hyperplanes determined by the weights between the input and hidden layers, the boundaries cannot be represented properly when the number of hidden neurons is insufficient. To overcome these drawbacks, networks have repeatedly been proposed in which the hidden-layer activation function is a Gaussian rather than a sigmoid and the output neurons are computed as linear sums of those Gaussians, i.e., the Gaussian serves as the potential function of the output layer (a minimal sketch of such a network follows below). This article reviews neural network models with Gaussian potential functions together with practical applications of these models.
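
As an illustration of the class of models surveyed (not a reproduction of any specific model from the article), the NumPy sketch below builds a network whose hidden units are Gaussian potential functions and whose output is their linear combination; choosing centers from the data and using a single shared width are simplifying assumptions.

```python
import numpy as np

def fit_gaussian_potential_network(X, y, n_centers=15, sigma=0.5, seed=0):
    """Gaussian hidden units, linear output layer solved by least squares."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]   # centers picked from the data

    def hidden(Xq):
        d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * sigma ** 2))                   # Gaussian activations

    H = np.hstack([hidden(X), np.ones((len(X), 1))])            # add a bias column
    w, *_ = np.linalg.lstsq(H, y, rcond=None)                   # linear output weights
    return lambda Xq: np.hstack([hidden(Xq), np.ones((len(Xq), 1))]) @ w

# Example: approximate a 1-D nonlinear function.
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * X[:, 0]) + 0.1 * np.random.default_rng(1).normal(size=200)
f = fit_gaussian_potential_network(X, y)
print("mean absolute error:", np.abs(f(X) - y).mean())
```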

The Study of Neural Networks Using Orthogonal Function System (직교함수를 사용한 신경회로망에 대한 연구)

  • 권성훈;최용준;이정훈;손동설;엄기환
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.11a / pp.214-217 / 1999
  • In this paper we propose a heterogeneous hidden layer consisting of both sigmoid functions and RBFs (radial basis functions) in multi-layered neural networks. Focusing on the orthogonal relationship between the sigmoid function and its derivative, a derived RBF that is the derivative of the sigmoid function is used as the RBF in the network, so the proposed network is called an ONN (orthogonal neural network). Identification results on a nonlinear function confirm the ONN's feasibility and characteristics by comparison with those obtained using a conventional neural network that has only sigmoid functions or RBFs in its hidden layer (a small sketch of such a mixed hidden layer follows below).
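
A minimal sketch of the idea described above: a hidden layer that mixes sigmoid units with units whose activation is the derivative of the sigmoid, a bell-shaped function that can play the role of the RBF. The half-and-half split and the random weights are assumptions, and training is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)                      # derivative of the sigmoid: bell-shaped, RBF-like

def heterogeneous_hidden_layer(X, W, b, n_sigmoid):
    """First n_sigmoid units use the sigmoid, the rest use its derivative."""
    Z = X @ W + b
    return np.hstack([sigmoid(Z[:, :n_sigmoid]), dsigmoid(Z[:, n_sigmoid:])])

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                   # 5 samples, 3 inputs
W = rng.normal(size=(3, 8))                   # 8 hidden units
b = rng.normal(size=8)
print(heterogeneous_hidden_layer(X, W, b, n_sigmoid=4).shape)   # (5, 8)
```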


Enhanced Time Measurement Function Using Virtual System Call (가상 시스템 호출을 이용한 시간 측정 함수의 성능 향상)

  • Kim, Chei-Yol;Son, Sung-Hoon;Jung, Sung-In
    • Proceedings of the Korea Information Processing Society Conference / 2003.11b / pp.1237-1240 / 2003
  • With the recent rapid growth of the Internet, streaming services that deliver multimedia data over the Internet have become widespread. A streaming server transmits data to each requesting client at a constant rate, and in doing so it calls time-measurement functions very frequently. This paper shows that the performance of these time-measurement functions can be improved by using a virtual system call mechanism (the microbenchmark below illustrates why the per-call cost matters).
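
The virtual-system-call mechanism itself is kernel-level and is not reproduced here; as a rough illustration of why the per-call cost of reading the clock matters to a server that paces every transmission, the Python microbenchmark below measures the overhead of a large number of clock reads.

```python
import time

# A pacing loop in a streaming server reads the clock on every send, so the per-call
# cost of the time-measurement function is multiplied by millions of calls.
N = 1_000_000
start = time.perf_counter()
for _ in range(N):
    time.monotonic()                          # the kind of call a pacing loop issues repeatedly
elapsed = time.perf_counter() - start
print(f"{N} clock reads took {elapsed:.3f} s ({elapsed / N * 1e9:.0f} ns per call)")
```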


Characteristics of Soil Pavement by Red Mud Content and Binder Type (레드머드 대체율에 따른 결합재별 흙포장재의 특성)

  • Kang, Suk-Pyo;Kang, Hye-Ju;Kim, Jae-Hwan;Kim, Byeong-Ki
    • Journal of the Korean Recycled Construction Resources Institute / v.5 no.1 / pp.37-44 / 2017
  • Red mud is an inorganic by-product of refining alumina from bauxite ores. The development of alkali-activated slag-red mud cement is a representative effort to recycle the strongly alkaline red mud as a construction material. This study investigates the optimum water content, compressive strength, water absorption, and efflorescence of alkali-activated slag-red mud soil pavement according to binder type. The results showed that the optimum water content, moisture absorption coefficient, and efflorescence area of the alkali-activated slag-red mud soil pavement increased, while its compressive strength decreased, as the red mud content increased.

Optimization of Fuzzy Learning Machine by Using Particle Swarm Optimization (PSO 알고리즘을 이용한 퍼지 Extreme Learning Machine 최적화)

  • Roh, Seok-Beom;Wang, Jihong;Kim, Yong-Soo;Ahn, Tae-Chon
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.1 / pp.87-92 / 2016
  • In this paper, particle swarm optimization was used to optimize the parameters of a fuzzy Extreme Learning Machine. While the learning speed of conventional neural networks is very slow, that of the Extreme Learning Machine is very fast. The fuzzy Extreme Learning Machine combines the Extreme Learning Machine, with its very fast learning speed, and fuzzy logic, which can represent the linguistic information of field experts. The usual sigmoid function serves as the activation function of the Extreme Learning Machine; in the fuzzy Extreme Learning Machine, however, the activation function is the membership function defined in the fuzzy C-means clustering procedure (a plain ELM sketch is given below for context). We optimize the parameters of the membership functions using particle swarm optimization. To validate the classification capability of the proposed classifier, we carried out experiments on various machine learning datasets.
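
For context, here is a minimal NumPy sketch of a plain Extreme Learning Machine: random, fixed hidden weights and output weights solved in closed form. This is the standard ELM recipe, not the paper's fuzzy variant, in which the sigmoid hidden activations would be replaced by fuzzy C-means membership degrees whose parameters are then tuned with PSO.

```python
import numpy as np

def elm_fit(X, y, n_hidden=40, seed=0):
    """Plain ELM: random fixed hidden layer, output weights solved in closed form."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y              # Moore-Penrose solution for the output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Tiny usage example on a toy regression task.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1]
W, b, beta = elm_fit(X, y)
print("mean absolute error:", np.abs(elm_predict(X, W, b, beta) - y).mean())
```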

Improvement of Learning Capability with Combination of the Generalized Cascade Correlation and Generalized Recurrent Cascade Correlation Algorithms (일반화된 캐스케이드 코릴레이션 알고리즘과 일반화된 순환 캐스케이드 코릴레이션 알고리즘의 결합을 통한 학습 능력 향상)

  • Lee, Sang-Wha;Song, Hae-Sang
    • The Journal of the Korea Contents Association / v.9 no.2 / pp.97-105 / 2009
  • This paper presents a combination of the generalized Cascade Correlation and generalized Recurrent Cascade Correlation learning algorithms. The resulting network can grow in the vertical or horizontal direction, with or without recurrent units, to solve pattern classification problems quickly. The learning capability of the proposed algorithm was tested with the sigmoid and hyperbolic tangent activation functions on the contact-lens and balance-scale standard benchmark problems, and the results were compared with those obtained with the Cascade Correlation and Recurrent Cascade Correlation algorithms. Through learning, the new network was built with a minimal number of created hidden units and showed fast learning speed, and it can therefore improve learning capability (a compact sketch of the basic cascade-correlation growth step follows below).
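
A compact sketch of the basic cascade-correlation growth step: retrain the output weights, then train and freeze a candidate hidden unit whose output covaries maximally with the residual error. This follows the standard Fahlman-style recipe rather than the generalized or recurrent variants proposed in the paper; the sigmoid activation, learning rate, and unit count are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def features(X, units):
    """Inputs + bias, plus the output of each frozen hidden unit (cascaded)."""
    F = np.hstack([X, np.ones((len(X), 1))])
    for v in units:
        F = np.hstack([F, sigmoid(F @ v)[:, None]])
    return F

def cascade_correlation(X, y, n_hidden=5, cand_iters=300, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    units = []                                           # frozen hidden-unit weight vectors
    for _ in range(n_hidden):
        F = features(X, units)
        w_out, *_ = np.linalg.lstsq(F, y, rcond=None)    # (re)train the output weights
        e = y - F @ w_out
        e = e - e.mean()                                 # centred residual error
        v = rng.normal(scale=0.1, size=F.shape[1])       # candidate unit's input weights
        for _ in range(cand_iters):                      # maximise |covariance(unit, residual)|
            a = sigmoid(F @ v)
            cov = np.sum((a - a.mean()) * e)
            grad = np.sign(cov) * (F * (e * a * (1 - a))[:, None]).sum(axis=0)
            v += lr * grad / len(X)
        units.append(v)                                  # freeze the winning candidate
    F = features(X, units)
    w_out, *_ = np.linalg.lstsq(F, y, rcond=None)
    return units, w_out

# Usage: fit a small network to a toy nonlinear target.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
units, w_out = cascade_correlation(X, y)
print("mean absolute error:", np.abs(features(X, units) @ w_out - y).mean())
```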

Analysis on the Accuracy of Building Construction Cost Estimation by Activation Function and Training Model Configuration (활성화함수와 학습노드 진행 변화에 따른 건축 공사비 예측성능 분석)

  • Lee, Ha-Neul;Yun, Seok-Heon
    • Journal of KIBIM / v.12 no.2 / pp.40-48 / 2022
  • It is very important to predict construction costs accurately in the early stages of a construction project, but this is difficult with the limited information available at that stage. In recent years, with the development of machine learning technology, it has become possible to predict construction costs more accurately than before using only schematic construction characteristics. Based on machine learning, this study analyzes how to predict construction costs more accurately using only the factors that influence them. To this end, the effect of the activation function and of the hidden-layer node configuration on the error rate was analyzed (a sketch of this kind of comparison is given below).
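
As a sketch of the kind of comparison the abstract describes (not the paper's data, features, or model), the scikit-learn snippet below varies the activation function and the hidden-layer node configuration of a small regressor and compares cross-validated errors; the synthetic data and the candidate configurations are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))                              # placeholder project attributes
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=300)    # placeholder construction cost

for activation in ["logistic", "tanh", "relu"]:
    for hidden in [(16,), (32, 16), (64, 32)]:
        model = MLPRegressor(hidden_layer_sizes=hidden, activation=activation,
                             max_iter=2000, random_state=0)
        mae = -cross_val_score(model, X, y, cv=3,
                               scoring="neg_mean_absolute_error").mean()
        print(f"{activation:8s} {str(hidden):10s} MAE={mae:.3f}")
```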

Learning Ability of Deterministic Boltzmann Machine with Non-Monotonic Neurons in Hidden Layer (은닉층에 비단조 뉴런을 갖는 결정론적 볼츠만 머신의 학습능력에 관한 연구)

  • 박철영
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.6 / pp.505-509 / 2001
  • In this paper, we evaluate the learning ability of a DBM (Deterministic Boltzmann Machine) network with non-monotonic neurons through numerical simulations. The simulation results show that the proposed system performs better than the monotonic DBM network model. The non-monotonic DBM network also exhibits the interesting property that the network itself adjusts the number of hidden-layer neurons. DBM networks can be realized with fewer components than other neural network models. These results support the use of non-monotonic neurons in the large-scale integration of neuro-chips.
