• Title/Summary/Keyword: 합성함수 (composite function)

Search Results: 667, Processing Time: 0.021 seconds

Understanding of the Composition of Functions and the Limits of Composite Functions in College Mathematics (대학수학에서 함수의 합성과 합성함수의 극한에 대한 이해)

  • Kim, Byeong-Mu
    • Communications of Mathematical Education / v.18 no.1 s.18 / pp.289-296 / 2004
  • This study builds a model for understanding concepts by using the Internet, rather than class time, to provide surveys and to give students opportunities and materials for self-directed study. A first survey of students' understanding of the composition of functions and their limits yielded a correct-answer rate of only 7.5%, so a second survey was conducted in which students studied on their own before answering the same questionnaire. To support conceptual understanding of function composition and the limits of composite functions, graph-based materials were collected to provide a clear and accessible learning opportunity, and a new teaching-learning method was developed.


A New Functional Synthesis Method for Macro Quantum Circuits Realized in Affine-Controlled NCV-Gates (의사-제어된 NCV 게이트로 실현된 매크로 양자회로의 새로운 함수 합성법)

  • Park, Dong-Young;Jeong, Yeon-Man
    • The Journal of the Korea institute of electronic communication sciences / v.9 no.4 / pp.447-454 / 2014
  • Recently, most functional synthesis methods for quantum circuit realization tend to adopt declarative functional expressions better suited to computer algorithms, which makes the synthesized quantum functions difficult to analyze. This paper presents a new functional representation of quantum circuits compatible with a simple architecture and intuitive thinking. The proposal is a new functional synthesis method that uses control functions as the powers of the corresponding affine-controlled quantum gates, based on mathematically substituting arithmetic and modulo-2 operations between power functions of unitary operators for the serial-product matrix operation over the target line. The functional synthesis algorithm proposed in this paper is useful for functional expression and synthesis using both reversible and irreversible affine-controlled NCV quantum gates.
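The serial-product view of gates acting on a target line can be illustrated with the V = √NOT gate of the NCV library; the snippet below is a generic numerical sketch of that gate algebra, not the paper's synthesis algorithm.

```python
import numpy as np

# NCV library basics: X (NOT) and V = sqrt(X), so the serial product V.V
# on a target line realizes X. Matrices compose by ordinary multiplication.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# Standard closed form of sqrt(X): (1/2) [[1+i, 1-i], [1-i, 1+i]]
V = 0.5 * np.array([[1 + 1j, 1 - 1j],
                    [1 - 1j, 1 + 1j]])

Vdag = V.conj().T  # V-dagger, the other square root of X used in NCV circuits

print(np.allclose(V @ V, X))             # V.V = X
print(np.allclose(V @ Vdag, np.eye(2)))  # V is unitary
```

Raising such unitaries to 0/1 powers given by control functions is the kind of power-function bookkeeping the abstract refers to.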

Performance Improvement Method of Convolutional Neural Network Using Combined Parametric Activation Functions (결합된 파라메트릭 활성함수를 이용한 합성곱 신경망의 성능 향상)

  • Ko, Young Min;Li, Peng Hang;Ko, Sun Woo
    • KIPS Transactions on Software and Data Engineering / v.11 no.9 / pp.371-380 / 2022
  • Convolutional neural networks are widely used to process data arranged in a grid, such as images. A typical convolutional neural network consists of convolutional layers and fully connected layers, and each layer applies a nonlinear activation function. This paper proposes a combined parametric activation function to improve the performance of convolutional neural networks. The combined parametric activation function is created by summing parametric activation functions whose parameters transform the scale and location of the activation function. Various nonlinear intervals can be created depending on the multiple scale and location parameters, and the parameters can be learned in the direction that minimizes the loss function computed from the given input data. Testing convolutional neural networks that use the combined parametric activation function on the MNIST, Fashion MNIST, CIFAR10, and CIFAR100 classification problems confirmed better performance than other activation functions.
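The construction can be sketched generically: summing scaled and shifted parametric units yields a piecewise nonlinearity whose breakpoints move with the parameters. The ReLU base, the function names, and the sum-combination below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def parametric_relu_unit(x, scale, loc):
    """One parametric unit: a ReLU whose input location and output scale
    are treated as learnable parameters (illustrative form)."""
    return scale * np.maximum(0.0, x - loc)

def combined_activation(x, params):
    """Combined parametric activation: a sum of shifted/scaled units,
    giving a piecewise-linear nonlinearity with one breakpoint per unit."""
    return sum(parametric_relu_unit(x, s, l) for s, l in params)

x = np.linspace(-2.0, 2.0, 5)
params = [(1.0, 0.0), (-0.5, 1.0)]   # two units -> breakpoints at 0 and 1
y = combined_activation(x, params)
```

In training, the scale/location pairs would be updated by gradient descent on the network loss along with the weights.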

Mathematical Analysis of Ladder Diagram Games for the introduction of the function (함수의 도입을 위한 사다리타기 게임의 수학적 분석)

  • Lee, Gwangyeon;Lee, Kwangsang;Yoo, Gijong
    • Communications of Mathematical Education / v.27 no.3 / pp.267-281 / 2013
  • In this paper, we explore the possibility that ladder diagram games can be used to introduce functions and composite functions. A ladder diagram with at most one rung is a bijection; thus a ladder diagram with r rungs is the composition of r one-to-one correspondences. We use ladder diagrams to give simple proofs of some fundamental facts about one-to-one correspondences. We also suggest storytelling approaches for introducing functions in middle school and high school. The ladder diagram approach to one-to-one correspondence not only grabs students' attention but also facilitates their understanding of the concept of a function.
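The composition argument can be made concrete: each rung is an adjacent transposition (a bijection), and running the ladder composes them, so the result is again a bijection. A minimal sketch, with names chosen for illustration:

```python
def apply_rung(perm, i):
    """A single rung swaps the outcomes of adjacent rails i and i+1 --
    each rung is itself a bijection (a transposition)."""
    perm = list(perm)
    perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return perm

def ladder(n, rungs):
    """Compose the rungs in order; the result is one-to-one because a
    composition of bijections is a bijection."""
    perm = list(range(n))
    for i in rungs:
        perm = apply_rung(perm, i)
    return perm

# three rails, three rungs: rail k ends up at position result[k]
result = ladder(3, [0, 1, 0])   # -> [2, 1, 0], a permutation of 0..2
```

Since `sorted(result)` always recovers `range(n)`, every ladder outcome is a permutation, which is exactly the fact the paper proves with diagrams.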

An Improved Function Synthesis Algorithm Using Genetic Programming (유전적 프로그램을 이용한 함수 합성 알고리즘의 개선)

  • Jung, Nam-Chae
    • Journal of the Institute of Convergence Signal Processing / v.11 no.1 / pp.80-87 / 2010
  • Function synthesis, which predicts a function satisfying the relation between input and output from given input-output data pairs, is essential when controlling systems whose characteristics are unknown. Since most systems behave nonlinearly, problems composed of combinations of parameters, constants, conditions, and so on arise easily. Genetic programming has been proposed as one function synthesis method: a function is converted into a tree structure, and genetic operations are applied to the function tree to search for a tree that satisfies the input-output relation. In this paper, we point out problems with function synthesis by existing genetic programming and propose four types of new, improved methods: controlling the growth of the function tree, selecting a local search method to avoid premature convergence, effectively eliminating redundancy in the function tree, and exploiting the characteristics of the target problem, all of which prevent the function from becoming overly complicated while the tree is searched. Computer simulations on the two-spirals problem confirmed that the improved methods obtain better structures in a shorter time than function synthesis by existing genetic programming.
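The function-tree representation that genetic programming searches over can be sketched minimally (node encoding and names are illustrative, not the paper's implementation): internal nodes are operators, leaves are inputs or constants, and evaluation is recursive.

```python
import operator

# Illustrative function-tree encoding for genetic programming:
# a tree is either the input symbol 'x', a numeric constant, or a
# tuple (op, left_subtree, right_subtree).
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(tree, x):
    """Recursively evaluate a function tree at input x."""
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

# f(x) = x*x + 1, encoded as a tree; GP would mutate/crossover such trees
tree = ('+', ('*', 'x', 'x'), 1)
value = evaluate(tree, 3)   # -> 10
```

Growth control and redundancy elimination, as proposed in the paper, operate on exactly this kind of structure (e.g., bounding tree depth, simplifying subtrees).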

Performance Improvement Method of Convolutional Neural Network Using Agile Activation Function (민첩한 활성함수를 이용한 합성곱 신경망의 성능 향상)

  • Kong, Na Young;Ko, Young Min;Ko, Sun Woo
    • KIPS Transactions on Software and Data Engineering / v.9 no.7 / pp.213-220 / 2020
  • A convolutional neural network is composed of convolutional layers and fully connected layers, and a nonlinear activation function is used in each of them. The activation function in a neural network simulates how a neuron transmits information: it passes a signal on when the input signal exceeds a certain threshold and does not otherwise. Conventional activation functions have no relationship with the loss function, so the process of finding the optimal solution is slow. To improve this, an agile activation function that generalizes the activation function is proposed. It can improve the performance of a deep neural network by selecting optimal agile parameters during learning, using the first derivative of the loss function with respect to the agile parameters in the backpropagation process. On the MNIST classification problem, we found that agile activation functions outperform conventional activation functions.
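The key mechanism, updating an activation parameter with the first derivative of the loss with respect to that parameter, can be sketched on a toy problem. The specific activation f(x; a) = a·x for x > 0, the squared-error loss, and all names below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def forward(x, a):
    """Toy parametric activation: slope a on the positive side, 0 otherwise."""
    return np.where(x > 0, a * x, 0.0)

def dloss_da(x, a, target):
    """First derivative of the squared-error loss L = sum((f - t)^2)
    with respect to the activation parameter a: dL/da = sum(2*(f-t)*x) on x>0."""
    f = forward(x, a)
    return np.sum(2.0 * (f - target) * np.where(x > 0, x, 0.0))

a, lr = 0.5, 0.01
x = np.array([1.0, 2.0])
target = np.array([1.0, 2.0])          # loss is minimized at a = 1
for _ in range(500):
    a -= lr * dloss_da(x, a, target)   # gradient step on the parameter itself
```

The parameter converges to the loss-minimizing value, which is the sense in which such an activation is coupled to the loss function rather than fixed.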

Overlap and Add Sinusoidal Synthesis Method of Speech Signal using Amplitude-weighted Phase Error Function (정현파 크기로 가중치 된 위상 오류 함수를 사용한 음성의 중첩합산 정현파 합성 방법)

  • Park, Jong-Bae;Kim, Gyu-Jin;Hyeok, Jeong-Gyu;Kim, Jong-Hark;Lee, In-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.12C / pp.1149-1155 / 2007
  • In this paper, we propose a new overlap and add speech synthesis method which demonstrates improved continuity performance. The proposed method uses a weighted phase error function and minimizes the wave discontinuity of the synthesis signal, rather than the phase discontinuity, to estimate the mid-point phase. Experimental results show that the proposed method improves the continuity between the synthesized signals relative to the existing method.
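With a squared error, an amplitude-weighted phase error criterion admits a closed-form minimizer: the amplitude-weighted mean of the candidate phases. The snippet below illustrates only this weighting idea (it ignores phase wrapping and is not the paper's estimator).

```python
import numpy as np

def weighted_phase(amps, phases):
    """Phase minimizing sum_k A_k * (phi_k - phi)^2: the amplitude-weighted
    mean. Real phase estimation must also handle 2*pi wrapping, which this
    simplified sketch ignores."""
    amps = np.asarray(amps, dtype=float)
    phases = np.asarray(phases, dtype=float)
    return np.sum(amps * phases) / np.sum(amps)

# the louder sinusoid (amplitude 3) dominates the mid-point phase estimate
phi = weighted_phase([3.0, 1.0], [0.0, 0.4])   # -> 0.1
```

Weighting by amplitude makes the estimate track the perceptually dominant partials, which is the motivation the abstract gives for replacing a plain phase-discontinuity criterion.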

Analysis of Random Sequences using Nonlinear Combining Functions (비선형 합성 함수를 이용한 랜덤 계열의 특성 분석)

  • 염흥열
    • Proceedings of the Korea Institutes of Information Security and Cryptology Conference / 1994.11a / pp.132-156 / 1994
  • This paper analyzes the properties of random sequences generated using nonlinear combining functions. We first define the trace function and related notions, derive the theory required for analyzing linear complexity and generator structure, and present a USR synthesis algorithm that, given a particular random sequence, can synthesize a minimum-length LFSR capable of generating it. We then analyze properties such as period and linear complexity for random sequences generated by nonlinearly combining phase-shifted versions of the same sequence and for those generated by nonlinearly combining different sequences, and present the structure of the generators.
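The LFSR building block the abstract refers to can be sketched directly; with a primitive feedback polynomial an n-stage register produces a maximal-length sequence of period 2^n - 1. The Fibonacci-style implementation below is a generic illustration, not the paper's synthesis algorithm.

```python
def lfsr(taps, state, n):
    """Generate n output bits from a Fibonacci LFSR.
    taps: 0-indexed stage positions XORed together to form the feedback bit."""
    state = list(state)
    out = []
    for _ in range(n):
        out.append(state[-1])          # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t]             # feedback = XOR of tapped stages
        state = [fb] + state[:-1]      # shift right, insert feedback
    return out

# A 3-stage register with a primitive feedback polynomial has maximal
# period 2^3 - 1 = 7; two periods of output are generated here.
seq = lfsr(taps=[0, 2], state=[1, 0, 0], n=14)
```

An m-sequence of period 7 contains four 1s and three 0s, one of the balance properties analyzed when such sequences are combined nonlinearly.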


Synthesizing a Boolean Function of an S-box with Integer Linear Programming (수리계획법을 이용한 S-box의 부울함수 합성)

  • 송정환;구본욱
    • Journal of the Korea Institute of Information Security & Cryptology / v.14 no.4 / pp.49-59 / 2004
  • The Boolean function synthesis problem is to find a Boolean expression matching the inputs and outputs of an original function. This problem can be modeled as a 0-1 integer program. As an example, this paper finds Boolean expressions for the S-boxes of DES, whose algebraic structure has been unknown for many years. The results can be used for efficient hardware implementation of a function and for cryptanalysis exploiting the algebraic structure of a block cipher.
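As a related sanity check (this is the classical Moebius-transform route to an algebraic normal form, not the paper's 0-1 ILP model), a Boolean expression can be recovered from a truth table as follows:

```python
def anf(truth_table):
    """Algebraic normal form (XOR of AND-monomials) of a Boolean function,
    computed from its truth table of length 2^n via the binary
    Moebius transform. Returns coefficient a[m]: 1 iff the monomial whose
    variable mask is m appears in the expression."""
    a = list(truth_table)
    step = 1
    while step < len(a):
        for i in range(len(a)):
            if i & step:
                a[i] ^= a[i ^ step]
        step <<= 1
    return a

# f(x1, x0) = x0 XOR x1: only the monomials x0 (mask 1) and x1 (mask 2) appear
coeffs = anf([0, 1, 1, 0])   # -> [0, 1, 1, 0]
```

For an S-box, applying this per output bit gives one exact expression; the ILP formulation in the paper instead searches for expressions optimized under additional constraints (e.g., gate cost).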

Convolution Interpretation of Nonparametric Kernel Density Estimate and Rainfall-Runoff Modeling (비매개변수 핵밀도함수와 강우-유출모델의 합성곱(Convolution)을 이용한 수학적 해석)

  • Lee, Taesam
    • Journal of Korean Society of Disaster and Security / v.8 no.1 / pp.15-19 / 2015
  • In the rainfall-runoff models employed in hydrological applications, the runoff amount is estimated by temporally delaying effective precipitation on the basis of a linear system; the amount results from the linearized ratio obtained by analyzing the convolution multiplier. Furthermore, in the kernel density estimate (KDE) used in probabilistic analysis, the definition of the kernel likewise comes from the convolution multiplier: individual data values are smoothed through the kernel to derive the KDE. The current study revisits the roles of the convolution multiplier in KDE and in rainfall-runoff models and investigates their similarities and dissimilarities in order to explore the mathematical applicability of the convolution multiplier.
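The linear-system view can be sketched numerically: runoff is the convolution of effective precipitation with a unit hydrograph, and the same convolution operation underlies kernel smoothing in KDE. The hydrograph and rainfall values below are illustrative, not from the paper.

```python
import numpy as np

# Unit hydrograph: the delayed runoff response to one unit of effective
# precipitation; it sums to 1 so that total volume is conserved.
unit_hydrograph = np.array([0.1, 0.5, 0.3, 0.1])

# Effective rainfall pulses for three time steps (illustrative values)
precip = np.array([2.0, 0.0, 1.0])

# Runoff hydrograph = convolution of rainfall with the unit response.
# KDE uses the same operation: data (as impulses) convolved with a kernel.
runoff = np.convolve(precip, unit_hydrograph)

# Mass balance check: total runoff equals total effective precipitation
print(runoff.sum())   # 3.0
```

The shared convolution structure, a weighting function slid across impulses, is exactly the similarity between the two settings that the paper investigates.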