• Title/Summary/Keyword: 오류함수 (error function)

Search Result 387, Processing Time 0.028 seconds

Analysis of the ability to interpret and draw a graph of the function to high school students (고등학생의 함수의 모양 그리기와 해석하는 능력 분석)

  • An, Jong-Su
    • Journal of the Korean School Mathematics Society
    • /
    • v.15 no.2
    • /
    • pp.299-316
    • /
    • 2012
  • In this paper, we examine high school students in order to assess their understanding of the fundamental functions learned in high school, such as polynomial, trigonometric, logarithmic, and exponential functions. The results of this study are as follows. More than half of the students are unable to draw the shapes of the given functions, except for polynomials. Most students do not fully understand function properties such as domain, codomain, range, and maximum and minimum values.

  • PDF

A Historical Study on the Continuity of Function - Focusing on Aristotle's Concept of Continuity and the Arithmetization of Analysis - (함수의 연속성에 대한 역사적 고찰 - 아리스토텔레스의 연속 개념과 해석학의 산술화 과정을 중심으로 -)

  • Baek, Seung Ju;Choi, Younggi
    • Journal of Educational Research in Mathematics
    • /
    • v.27 no.4
    • /
    • pp.727-745
    • /
    • 2017
  • This study investigated Aristotle's concept of continuity and the historical development of the continuity of functions to explore the differences between the mathematical concept and students' thinking about the continuity of functions. Aristotle, who sought the essence of continuity, characterized it as 'indivisible as a whole.' Before the nineteenth century, mathematicians considered the continuity of functions in spatial terms; after the arithmetization of analysis in the nineteenth century, the modern ε-δ definition appeared. Some scholars regarded this process as revolutionary. Students tended to think of the continuity of functions in a way similar to Aristotle and to mathematicians before the arithmetization, so it is inappropriate to regard students' conceptions simply as errors. This study on the continuity of functions shows that some conceptions that have been perceived as student misconceptions can be viewed as paradigmatic thoughts rather than as errors.
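For reference, the modern ε-δ definition of continuity mentioned in the abstract can be stated as:

```latex
f \text{ is continuous at } c \iff
\forall \varepsilon > 0 \;\, \exists \delta > 0 :
|x - c| < \delta \;\Rightarrow\; |f(x) - f(c)| < \varepsilon .
```

Unlike the earlier spatial picture of an "unbroken curve," this arithmetized definition refers only to inequalities between numbers.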

Quadratic Sigmoid Neural Equalizer (이차 시그모이드 신경망 등화기)

  • Choi, Soo-Yong;Ong, Sung-Hwan;You, Cheol-Woo;Hong, Dae-Sik
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.1
    • /
    • pp.123-132
    • /
    • 1999
  • In this paper, a quadratic sigmoid neural equalizer (QSNE) is proposed to improve the bit error probability of conventional neural equalizers by using a quadratic sigmoid function as the activation function of the neural network. Conventional neural equalizers, used to compensate for nonlinear distortions, adopt the sigmoid function; each such neuron has one linear decision boundary, so many neurons are required when the equalizer must separate a complicated structure. In the proposed QSNE and quadratic sigmoid neural decision feedback equalizer (QSNDFE), each neuron separates the decision region with two parallel lines. Therefore, QSNE and QSNDFE achieve better bit error probability with a simpler structure than conventional neural equalizers. When the proposed QSNDFE is applied to communication systems and digital magnetic recording systems, it provides an improvement of approximately 1.5 dB to 8.3 dB in signal-to-noise ratio (SNR) over the conventional decision feedback equalizer (DFE) and the neural decision feedback equalizer (NDFE). As intersymbol interference (ISI) and nonlinear distortions become more severe, QSNDFE shows an even larger SNR gain over the conventional equalizers at the same bit error probability.

  • PDF
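The abstract does not give the exact form of the quadratic sigmoid, but a minimal sketch illustrating the "two parallel boundaries per neuron" idea is a logistic sigmoid applied to a quadratic of the input (the form `x² - b` below is an assumption for illustration):

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid: a single neuron crosses 0.5 at one point (x = 0),
    i.e. one linear decision boundary."""
    return 1.0 / (1.0 + math.exp(-x))

def quadratic_sigmoid(x, b=1.0):
    """Hypothetical quadratic sigmoid: the logistic function applied to x^2 - b.
    The output crosses 0.5 at x = -sqrt(b) and x = +sqrt(b), so one neuron
    separates the input space with two parallel boundaries."""
    return 1.0 / (1.0 + math.exp(-(x * x - b)))

# A single quadratic-sigmoid neuron classifies |x| > 1 versus |x| <= 1:
inside = quadratic_sigmoid(0.0)    # between the two boundaries: below 0.5
outside = quadratic_sigmoid(2.0)   # beyond either boundary: above 0.5
```

A plain sigmoid neuron would need two units (and a combining layer) to carve out the same band-shaped region, which matches the abstract's claim of a simpler structure.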

A Software Release Policy with Testing Time and the Number of Corrected Errors (시험시간과 오류수정개수를 고려한 소프트웨어 출시 시점결정)

  • Yoo, Young Kwan
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.7 no.4
    • /
    • pp.49-54
    • /
    • 2012
  • In this paper, a software release policy considering testing time and the number of corrected errors is presented. The software is tested until a specified testing time elapses or a specified number of errors has been corrected, whichever comes first. The model includes the cost of error correction and software testing during the testing period, and the cost of error correction during operation. It is assumed that the software life cycle is unbounded and that error correction follows a non-homogeneous Poisson process. An expression for the total cost under the policy is derived, and the model is shown to include previous models as special cases.

  • PDF
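The abstract does not specify the mean value function; a minimal sketch of this kind of cost model, assuming the common Goel-Okumoto form m(t) = a(1 - e^(-bt)) and illustrative cost rates (all parameter values below are assumptions), is:

```python
import math

def mvf(t, a=100.0, b=0.1):
    """Goel-Okumoto NHPP mean value function: expected number of errors
    corrected by testing time t (a = total errors, b = detection rate)."""
    return a * (1.0 - math.exp(-b * t))

def expected_cost(T, a=100.0, b=0.1, c_fix_test=1.0, c_test=0.5, c_fix_op=5.0):
    """Expected total cost if testing stops at time T: correction during test,
    testing effort, and the dearer correction of residual errors in operation."""
    corrected = mvf(T, a, b)
    remaining = a - corrected
    return c_fix_test * corrected + c_test * T + c_fix_op * remaining

# The cost falls (fewer residual errors) and then rises (testing effort
# dominates) as T grows, so a simple grid search locates the optimal release time.
best_T = min(range(1, 200), key=expected_cost)
```

A policy that also stops at a corrected-error count k would simply truncate T at the time m(t) reaches k, which is how the two stopping rules interact in "whichever comes first" models.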

Reliability Assessment of A Redundant System with Maintenance Activity (보수를 고려한 병렬결합 시스템의 신뢰성 평가)

  • 제무성;이수경
    • Proceedings of the Korean Institute of Industrial Safety Conference
    • /
    • 1997.05a
    • /
    • pp.85-90
    • /
    • 1997
  • To ensure system safety, each component must be inspected periodically and replaced when necessary. System reliability is a function of the component failure rate, the inspection interval, the inspection time, and human error. Inspecting a system too frequently increases the chance of human error during maintenance and thus degrades reliability, while an inspection interval that is too long also reduces reliability because failed components are not found and replaced in time. Therefore, maintenance must be performed with an appropriate inspection interval and allowed outage time to improve both the efficiency and the safety of the system. In this paper, an analytic expression for system reliability is derived as a function of these influencing factors, and the derived expression is applied to a gas pressure regulator as an example system. (abridged)

  • PDF
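The trade-off described in the abstract can be sketched with the standard first-order approximation for the mean unavailability of a periodically inspected component (the paper's exact expression may differ; the parameter values below are illustrative assumptions):

```python
def avg_unavailability(lam, T, tau, q_h):
    """Approximate mean unavailability of a periodically inspected component.
    lam : failure rate (per hour)      -> undetected-failure term lam*T/2
    T   : inspection interval (hours)  -> test-downtime term tau/T
    tau : inspection duration (hours)
    q_h : human-error probability per inspection (constant contribution)"""
    return lam * T / 2.0 + tau / T + q_h

# Too short an interval inflates the test/human-error terms; too long an
# interval inflates the undetected-failure term, so an intermediate T
# minimizes unavailability.
U_short = avg_unavailability(1e-4, 10.0, 2.0, 1e-3)
U_long = avg_unavailability(1e-4, 5000.0, 2.0, 1e-3)
best_T = min((10.0 * k for k in range(1, 500)),
             key=lambda T: avg_unavailability(1e-4, T, 2.0, 1e-3))
```

With these numbers the optimum falls at T = sqrt(2*tau/lam) = 200 hours, illustrating why "an appropriate inspection interval" exists between the two extremes.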

A Debugger based on Selective Redex Trail (선택적 레덱스 트레일 기반의 디버거)

  • Park, Hee-Wan;Han, Tai-Sook
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.9
    • /
    • pp.973-985
    • /
    • 2000
  • Functional programming languages have many advantages over traditional procedural languages, but the practical debugging environments available to functional programmers are relatively poor. Many attempts have been made to build useful debuggers; as a result, algorithmic debuggers using a top-down approach and redex trail debuggers using a bottom-up approach have been studied. Both techniques share the drawback that the amount of debugging information to be maintained is too large for practical programs. This paper proposes selective redex trail debugging, in which the user sets a focus on the part of the program where an error is suspected, and a trail is generated only for the selected part. This approach reflects the user's expectation about the location of the error and reduces the amount of information required for debugging. The implemented debugging system consists of an abstract machine that generates the selective redex trail and a redex trail browser in which the actual debugging takes place.

  • PDF
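The selective-trail idea can be illustrated with a toy analogue (the actual system is an abstract machine for a functional language; `focus` and `TRAIL` here are hypothetical names): only calls inside the user-selected focus are recorded, so the trail stays small.

```python
import functools

TRAIL = []  # recorded (function, args, result) reduction steps

def focus(fn):
    """Mark a function as 'in focus': only its reductions are added to the
    trail, mirroring how a selective redex trail traces just the suspect
    region of the program."""
    @functools.wraps(fn)
    def wrapped(*args):
        result = fn(*args)
        TRAIL.append((fn.__name__, args, result))
        return result
    return wrapped

@focus
def square(x):     # suspected of containing the bug -> traced
    return x * x

def offset(x):     # trusted library code -> not traced
    return x + 1

value = offset(square(3))   # only the square(3) -> 9 step lands in the trail
```

A full redex trail would record every reduction (including `offset`); restricting recording to the focus is what cuts the stored debugging information.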

Optimal Criterion of Classification Accuracy Measures for Normal Mixture (정규혼합에서 분류정확도 측도들의 최적기준)

  • Yoo, Hyun-Sang;Hong, Chong-Sun
    • Communications for Statistical Applications and Methods
    • /
    • v.18 no.3
    • /
    • pp.343-355
    • /
    • 2011
  • For data assumed to follow a mixture distribution, it is important to find an appropriate threshold and evaluate its performance. Relationships are found among nine well-known classification accuracy measures: MVD, Youden's index, the closest-to-(0, 1) criterion, the amended closest-to-(0, 1) criterion, SSS, the symmetry point, accuracy area, TA, and TR. Conditions on these measures are then categorized into seven groups. Under the normal mixture assumption, we calculate thresholds based on these measures and obtain the corresponding type I and type II errors. We explore which classification measure yields the minimum type I and type II errors for the estimated mixture distribution, in order to understand the strengths and weaknesses of these classification measures.
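As a minimal sketch of one of the listed measures, the threshold maximizing Youden's index for a two-component normal mixture can be found by grid search (the component parameters below are illustrative assumptions, not values from the paper):

```python
import math

def norm_cdf(x, mu, sigma):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def youden_j(c, mu0=0.0, mu1=2.0, sigma=1.0):
    """Youden's index at threshold c for negatives ~ N(mu0, sigma^2) and
    positives ~ N(mu1, sigma^2): J(c) = specificity + sensitivity - 1."""
    specificity = norm_cdf(c, mu0, sigma)
    sensitivity = 1.0 - norm_cdf(c, mu1, sigma)
    return specificity + sensitivity - 1.0

# Grid search for the J-maximizing threshold; with equal variances the
# optimum sits midway between the two means.
grid = [i / 100.0 for i in range(-200, 400)]
c_star = max(grid, key=youden_j)
alpha = 1.0 - norm_cdf(c_star, 0.0, 1.0)   # type I error at the optimum
beta = norm_cdf(c_star, 2.0, 1.0)          # type II error at the optimum
```

Evaluating the other eight measures on the same fitted mixture and comparing the resulting (alpha, beta) pairs is exactly the kind of comparison the abstract describes.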

Fingerprint Verification using Cross-Correlation Function (상호상관함수를 이용한 지문인식)

  • 박중조;오영일
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.4
    • /
    • pp.248-255
    • /
    • 2003
  • This paper presents a fingerprint recognition algorithm using a cross-correlation function. The algorithm consists of minutiae extraction, minutiae alignment, and minutiae matching, and we propose a new minutiae alignment method. In our method, the rotation angle between two fingerprints is obtained from the cross-correlation function of the minutia directions, after which the displacement is obtained from the rotated fingerprint. This alignment method finds the rotation angle and displacement of two fingerprints without resorting to exhaustive search. Our fingerprint recognition algorithm has been tested on fingerprint images captured with an inkless scanner. The experimental results show a 17.299% false rejection rate (FRR) at a 2.086% false acceptance rate (FAR).
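The rotation-recovery step can be sketched as a circular cross-correlation over binned minutia directions (a simplified stand-in for the paper's alignment: the histograms and bin count below are illustrative assumptions):

```python
def circular_xcorr_peak(h1, h2):
    """Return the circular shift of h2 that best matches h1, found as the
    argmax of their circular cross-correlation. h1, h2 are histograms of
    minutia directions over equal angular bins."""
    n = len(h1)
    best_shift, best_score = 0, float("-inf")
    for s in range(n):
        score = sum(h1[i] * h2[(i - s) % n] for i in range(n))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# A direction histogram rotated by 3 bins is recovered directly, without
# exhaustively matching minutia pairs:
h = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0, 0]   # 12 bins of 30 degrees each
rotated = h[3:] + h[:3]                     # same print, rotated by 3 bins
shift = circular_xcorr_peak(h, rotated)     # recovers the 3-bin rotation
```

Once the rotation is fixed this way, only the translation remains to be estimated, which is why the method avoids an exhaustive search over rotation-translation pairs.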

Speaker Verification Performance Improvement Using Weighted Residual Cepstrum (가중된 예측 오차 파라미터를 사용한 화자 확인 성능 개선)

  • 위진우;강철호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.5
    • /
    • pp.48-53
    • /
    • 2001
  • In speaker verification based on LPC analysis, the prediction residues are usually ignored and only the LPC cepstrum (LPCC) is used to compose feature vectors. In this study, LPCC and the residual cepstrum (RCEP), extracted from the residues, are used together as feature parameters for speaker verification in various environments. We propose a weighting function that enlarges inter-speaker variation by weighting the pitch, a speaker-inherent feature contained in the residual cepstrum. Simulation results show that using RCEP together with LPCC improves the average speaker verification rate by 6%, and that the proposed weighted RCEP together with LPCC improves it by a further 2.45% compared with no weighting.

  • PDF

Design of Fluctuation Function to Improve BER Performance of Data Hiding in Encrypted Image (암호화된 영상의 데이터 은닉 기법의 오류 개선을 위한 섭동 함수 설계)

  • Kim, Young-Hun;Lim, Dae-Woon;Kim, Young-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.3
    • /
    • pp.307-316
    • /
    • 2016
  • Reversible data hiding is a technique for hiding data without permanently affecting the original image. Zhang proposed encrypting the original image and hiding data in the encrypted image. To extract the hidden data, the encrypted image is first decrypted, and a fluctuation function exploiting the spatial correlation of the decrypted image is applied. In this paper, a new fluctuation function is proposed to reduce the errors that arise in extracting the hidden data, and its performance is verified by simulation.
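A minimal sketch of the classic spatial-correlation fluctuation measure used in Zhang-style extraction (the improved function proposed in the paper refines this; the block contents below are illustrative):

```python
def fluctuation(block):
    """Sum over interior pixels of |pixel - mean(4-neighbours)|. A correctly
    decrypted block of a natural image is smooth and scores low; a wrongly
    decrypted (still noise-like) block scores high."""
    h, w = len(block), len(block[0])
    total = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neigh = (block[i - 1][j] + block[i + 1][j]
                     + block[i][j - 1] + block[i][j + 1]) / 4.0
            total += abs(block[i][j] - neigh)
    return total

smooth = [[100] * 4 for _ in range(4)]   # natural-image-like candidate block
noisy = [[(37 * (i + 1) * (j + 2)) % 256 for j in range(4)] for i in range(4)]
# The hidden bit is decided by which candidate decryption fluctuates less:
bit = 0 if fluctuation(smooth) < fluctuation(noisy) else 1
```

Extraction errors occur when a genuinely textured block happens to score high even under correct decryption, which is the failure mode a better-designed fluctuation function aims to reduce.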