• Title/Summary/Keyword: threshold learning

Search Results: 209

ART2 Neural Network for the Detection of Tool Breakage (공구파단 검출을 위한 ART2 신경회로망)

  • 고태조;김희술;조동우
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.04b
    • /
    • pp.451-456
    • /
    • 1995
  • This study investigates the feasibility of real-time detection of tool breakage in face milling operations. The proposed methodology, based on an ART2 neural network, avoids the cumbersome task of learning or determining a threshold value. The features used in the research are AR parameters modeled by recursive least squares (RLS), which experiments show to be good indicators of tool breakage. From the results of off-line application, we conclude that an ART2 neural network can be applied to clustering tool states in real time, owing to its unsupervised learning.

  • PDF
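The key property of the ART2 approach above is that clusters of tool states emerge without labeled data or a hand-tuned breakage threshold: a vigilance test decides whether a feature vector resonates with an existing prototype or spawns a new cluster. A minimal sketch of that vigilance mechanism (simplified from full ART2; the cosine match score, vigilance value, and learning rate are illustrative assumptions, not the paper's):

```python
import math

def art_cluster(samples, vigilance=0.9, lr=0.5):
    """Cluster feature vectors with an ART-style vigilance test.

    A sample joins the best-matching prototype only if its match
    score passes the vigilance threshold; otherwise a new cluster
    (a new tool state) is created on the fly -- no labels needed.
    """
    prototypes = []      # one prototype vector per cluster
    assignments = []
    for x in samples:
        best, best_sim = None, -1.0
        for i, p in enumerate(prototypes):
            # cosine similarity as the match score
            dot = sum(a * b for a, b in zip(x, p))
            norm = (math.sqrt(sum(a * a for a in x))
                    * math.sqrt(sum(b * b for b in p)))
            sim = dot / norm if norm else 0.0
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= vigilance:
            # resonance: nudge the winning prototype toward the sample
            prototypes[best] = [(1 - lr) * p + lr * a
                                for p, a in zip(prototypes[best], x)]
            assignments.append(best)
        else:
            prototypes.append(list(x))
            assignments.append(len(prototypes) - 1)
    return assignments, prototypes
```

Fed AR-parameter vectors frame by frame, a sudden breakage would fail the vigilance test against the "normal cutting" prototype and open a new cluster.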

Named entity recognition using transfer learning and small human- and meta-pseudo-labeled datasets

  • Kyoungman Bae;Joon-Ho Lim
    • ETRI Journal
    • /
    • v.46 no.1
    • /
    • pp.59-70
    • /
    • 2024
  • We introduce a high-performance named entity recognition (NER) model for written and spoken language. To overcome challenges related to labeled data scarcity and domain shifts, we use transfer learning to leverage our previously developed KorBERT as the base model. We also adopt a meta-pseudo-label method using a teacher/student framework with labeled and unlabeled data. Our model presents two modifications. First, the student model is updated with an average loss from both human- and pseudo-labeled data. Second, the influence of noisy pseudo-labeled data is mitigated by considering feedback scores and updating the teacher model only when below a threshold (0.0005). We achieve the target NER performance in the spoken language domain and improve that in the written language domain by proposing a straightforward rollback method that reverts to the best model based on scarce human-labeled data. Further improvement is achieved by adjusting the label vector weights in the named entity dictionary.
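The two modifications above are control-flow changes: the student averages the human- and pseudo-labeled losses, and the teacher update is gated by a feedback-score threshold (0.0005). A toy sketch of that gating logic, with scalar stand-ins for the networks (the feedback formula here is an illustrative assumption, not the paper's exact definition):

```python
def train_step(student, teacher, human_loss, pseudo_loss,
               feedback_threshold=5e-4, lr=0.1):
    """One step of a gated teacher/student pseudo-label scheme.

    The student is updated with the *average* of the human- and
    pseudo-labeled losses; the teacher is updated only when the
    feedback score stays below the threshold, filtering out the
    influence of noisy pseudo labels.
    """
    student_loss = 0.5 * (human_loss + pseudo_loss)  # averaged loss
    new_student = student - lr * student_loss        # gradient-like step

    # toy feedback score: student movement weighted by pseudo loss
    feedback = abs(new_student - student) * pseudo_loss
    if feedback < feedback_threshold:
        teacher = teacher - lr * pseudo_loss         # accept teacher update
    return new_student, teacher, feedback
```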

Performance Improvement of an Energy Efficient Cluster Management Based on Autonomous Learning (자율학습기반의 에너지 효율적인 클러스터 관리에서의 성능 개선)

  • Cho, Sungchul;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.4 no.11
    • /
    • pp.369-382
    • /
    • 2015
  • Energy-aware server clusters aim to reduce power consumption as much as possible while keeping QoS (quality of service) comparable to that of energy non-aware clusters. They adjust the power mode of each server at fixed or variable time intervals so that only the minimum number of servers needed to handle current user requests is active. Previous studies on energy-aware server clusters have put effort into reducing power consumption or heat dissipation, but they do not consider energy efficiency well. In this paper, we propose an energy-efficient cluster management method to improve not only performance per watt but also QoS over the existing autonomous-learning-based server power mode control method. Our method adjusts server power mode with a hybrid approach: an autonomous learning method with multi-level thresholds is applied under normal load, and a power consumption prediction method under abnormal load. The decision on whether the current load is normal or abnormal depends on the ratio of the number of current user requests to the average number of user requests over the past few minutes. A dynamic shutdown method is additionally applied to shorten the delay in turning servers off. We performed experiments with a cluster of 16 servers using three different load patterns. The multi-threshold-based learning method with prediction and dynamic shutdown shows the best result in terms of normalized QoS and performance per watt (valid responses). For the banking, real, and virtual load patterns, the number of good responses per watt in the proposed method increases by 1.66%, 2.9%, and 3.84%, respectively, and QoS increases by 0.45%, 1.33%, and 8.82%, respectively, compared to the existing autonomous learning method with a single-level threshold.
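The normal/abnormal dispatch described above reduces to a simple ratio test: compare current requests against the recent average and route control to one of the two methods. A minimal sketch (the band limits and function names are illustrative assumptions, not values from the paper):

```python
def choose_controller(request_history, current_requests,
                      low=0.5, high=2.0):
    """Decide which power-mode controller runs this interval.

    The load is 'normal' when the ratio of current requests to the
    recent average stays inside [low, high]; the multi-level-threshold
    learning controller then runs. Outside the band the load is
    'abnormal' (a surge or a collapse) and the power-consumption
    prediction controller takes over.
    """
    avg = sum(request_history) / len(request_history)
    ratio = current_requests / avg if avg else float("inf")
    if low <= ratio <= high:
        return "autonomous_learning"   # normal load
    return "prediction"                # abnormal load
```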

A Dynamic Channel Switching Policy Through P-learning for Wireless Mesh Networks

  • Hossain, Md. Kamal;Tan, Chee Keong;Lee, Ching Kwang;Yeoh, Chun Yeow
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.2
    • /
    • pp.608-627
    • /
    • 2016
  • Wireless mesh networks (WMNs) based on IEEE 802.11s have emerged as one of the prominent technologies in multi-hop communications. However, the deployment of WMNs suffers from a serious interference problem that severely limits system capacity. By equipping each mesh router with multiple radios operating over multiple channels, interference can be reduced and system capacity improved. Nevertheless, interference cannot be completely eliminated because of the limited number of available channels. An effective approach to mitigating interference is dynamic channel switching (DCS). Conventional DCS schemes trigger channel switching when interference is detected or exceeds a predefined threshold, which can cause unnecessary channel switching and long protocol overheads. In this paper, a P-learning-based dynamic switching algorithm, known as the learning automaton (LA)-based DCS algorithm, is proposed. Initially, an optimal channel for each communicating node pair is determined through the learning process. Then, a novel switching metric is introduced in the LA-based DCS algorithm to avoid unnecessary initiation of channel switching. The proposed algorithm thus enables each pair of communicating mesh nodes to communicate over the least loaded channels and consequently improves network performance.
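The learning-automaton core of such a scheme maintains a probability over channels and reinforces whichever channel yields a favorable response (e.g. low measured load). A minimal sketch of the classic linear reward-inaction (L_RI) update such an automaton could use (the scheme and step size are a textbook assumption, not the paper's exact metric):

```python
def la_update(probs, chosen, reward, a=0.1):
    """Linear reward-inaction update for a learning automaton.

    On a favorable response (reward=True) the probability of the
    chosen channel grows and all others shrink proportionally; on
    an unfavorable response the distribution is left unchanged.
    """
    if not reward:
        return list(probs)
    return [p + a * (1.0 - p) if i == chosen else p * (1.0 - a)
            for i, p in enumerate(probs)]
```

Repeated rewards concentrate probability on the least-loaded channel, so switching is only initiated when the learned preference actually changes, rather than on every interference event.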

Improved Parameter Estimation with Threshold Adaptation of Cognitive Local Sensors

  • Seol, Dae-Young;Lim, Hyoung-Jin;Song, Moon-Gun;Im, Gi-Hong
    • Journal of Communications and Networks
    • /
    • v.14 no.5
    • /
    • pp.471-480
    • /
    • 2012
  • Reliable detection of primary user activity increases the opportunity to access temporarily unused bands and prevents harmful interference to the primary system. By extracting a global decision from local sensing results, cooperative sensing achieves high reliability against multipath fading. For the effective combining of sensing results, which is generalized by a likelihood ratio test, the fusion center should learn some parameters, such as the probabilities of primary transmission, false alarm, and detection at the local sensors. During the training period in supervised learning, the on/off log of primary transmission serves as the output label of decision statistics from the local sensor. In this paper, we extend unsupervised learning techniques with an expectation-maximization algorithm for cooperative spectrum sensing, which does not require an external primary transmission log. Local sensors report binary hard decisions to the fusion center and adjust their operating points to enhance learning performance. As the number of sensors increases, the joint-expectation step classifies the primary transmission as confidently as supervised learning. Thereby, the proposed scheme provides accurate parameter estimates and a fast convergence rate even in low signal-to-noise-ratio regimes, where the primary signal is dominated by noise at the local sensors.
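The EM idea above can be illustrated compactly: the E-step forms a posterior over primary activity per time slot from the binary reports, and the M-step re-estimates the activity, detection, and false-alarm probabilities from those posteriors. A simplified single-iteration sketch, assuming identical, independent sensors (the paper's per-sensor operating-point adaptation is omitted):

```python
def em_step(reports, p_on, pd, pf):
    """One EM iteration on binary hard-decision sensing reports.

    reports: list of per-slot lists of 0/1 decisions from the local
    sensors. Returns updated (p_on, pd, pf) estimates.
    """
    post = []
    for slot in reports:
        k, n = sum(slot), len(slot)
        # likelihoods of the observed 1s/0s under on/off hypotheses
        l_on = p_on * (pd ** k) * ((1 - pd) ** (n - k))
        l_off = (1 - p_on) * (pf ** k) * ((1 - pf) ** (n - k))
        post.append(l_on / (l_on + l_off))   # E-step posterior
    # M-step: posterior-weighted re-estimates
    n_sensors = len(reports[0])
    p_on_new = sum(post) / len(reports)
    pd_new = (sum(g * sum(s) for g, s in zip(post, reports))
              / (n_sensors * sum(post)))
    pf_new = (sum((1 - g) * sum(s) for g, s in zip(post, reports))
              / (n_sensors * sum(1 - g for g in post)))
    return p_on_new, pd_new, pf_new
```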

Optimal Synthesis Method for Binary Neural Network using NETLA (NETLA를 이용한 이진 신경회로망의 최적 합성방법)

  • Sung, Sang-Kyu;Kim, Tae-Woo;Park, Doo-Hwan;Jo, Hyun-Woo;Ha, Hong-Gon;Lee, Joon-Tark
    • Proceedings of the KIEE Conference
    • /
    • 2001.07d
    • /
    • pp.2726-2728
    • /
    • 2001
  • This paper describes an optimal synthesis method for binary neural networks (BNN) on the approximation problem of a circular region, using a newly proposed learning algorithm [7]. Our objective is to minimize the number of connections and hidden-layer neurons by using the Newly Expanded and Truncated Learning Algorithm (NETLA) for the multilayer BNN. The synthesis method in NETLA is based on the extension principle of Expanded and Truncated Learning (ETL) and on the Expanded Sum of Products (ESP), one of the Boolean expression techniques. It can optimize a given BNN in binary space without the iterative training required by the conventional error back-propagation (EBP) algorithm [6]. Given only the true and false patterns, the connection weights and threshold values can be determined immediately by NETLA's optimal synthesis method without any tedious learning. Furthermore, the number of required hidden-layer neurons can be reduced, and fast learning of the BNN can be realized. The superiority of NETLA over other algorithms is demonstrated on the approximation problem of one circular region.

  • PDF

Continuous Digit Recognition Using the Weight Initialization and LR Parser

  • Choi, Ki-Hoon;Lee, Seong-Kwon;Kim, Soon-Hyob
    • The Journal of the Acoustical Society of Korea
    • /
    • v.15 no.2E
    • /
    • pp.14-23
    • /
    • 1996
  • This paper is a study on a neural network for recognizing phonemes, on weight initialization to reduce learning time, and on an LR parser for continuous speech recognition. The neural network spots phonemes in continuous speech, and the LR parser parses the network's output. The phonemes are divided into several groups by similarity, and each group has its own networks: those that recognize the phonemes of the group, and a VGNN (Verify Group Neural Network) that judges whether an input belongs to the group. The weights are initialized not with random values but from the learning data, to reduce learning time. The LR parsing method applied in this paper does not trace a unique path but several possible paths, because the output of the neural network is not exact. The parser processes continuous speech frame by frame, accumulating the network's output along each possible path; if an accumulated path value drops below a threshold, the path is deleted from the set of possible parsing paths. We apply the system to continuous Korean digit recognition. The recognition rate for isolated digits is 97% speaker-dependent and 75% speaker-independent, and the recognition rate for continuous digits is 74% speaker-dependent.

  • PDF
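The frame-by-frame path pruning described above is essentially beam pruning by accumulated score: every live parse path is extended with each frame hypothesis, and paths whose accumulated value falls below the threshold are discarded. A minimal sketch (the dictionary representation and threshold value are illustrative assumptions):

```python
def prune_paths(paths, threshold):
    """Drop parse paths whose accumulated score fell below threshold."""
    return {p: s for p, s in paths.items() if s >= threshold}

def step(paths, frame_scores, threshold):
    """Extend every live path with each frame hypothesis, then prune.

    paths: {tuple_of_symbols: accumulated_score}
    frame_scores: {symbol: per-frame score from the neural network}
    """
    new_paths = {}
    for path, score in paths.items():
        for symbol, s in frame_scores.items():
            new_paths[path + (symbol,)] = score + s
    return prune_paths(new_paths, threshold)
```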

A Study on Improving English Pronunciation and Intonation utilizing Fluency Improvement system (음성인식 학습 시스템활용 영어 발음 및 억양 개선방안에 관한 연구)

  • Yi, Jae-Il;Kim, Young-Kwon;Kim, Gui-Jung
    • Journal of the Korea Convergence Society
    • /
    • v.8 no.11
    • /
    • pp.1-6
    • /
    • 2017
  • This paper focuses on the development of a system that improves the convenience of foreign language learning and enhances the learner's command of the target language through the use of IT devices. Beyond basic grammar, pronunciation and intonation have a crucial effect on everyday communication. English pronunciation and intonation differ according to the characteristics of a learner's native language, and these differences often cause problems in communication. The proposed system judges acceptability during the English communication process and requests correction in real time. It minimizes system intervention by collecting various voice signals from foreign language learners and setting threshold points within which utterances can be considered acceptable. As a result, the learner can increase learning efficiency with minimal interruption of the utterance from unnecessary system intervention.

A Method to Improve the Performance of Adaboost Algorithm by Using Mixed Weak Classifier (혼합 약한 분류기를 이용한 AdaBoost 알고리즘의 성능 개선 방법)

  • Kim, Jeong-Hyun;Teng, Zhu;Kim, Jin-Young;Kang, Dong-Joong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.5
    • /
    • pp.457-464
    • /
    • 2009
  • The weak classifier of the AdaBoost algorithm is the central classification element; it uses a single criterion to separate positive and negative learning candidates. Finding the best criterion to separate the two feature distributions determines the learning capacity of the algorithm. A common way to classify the distributions is to threshold at the mean value of the features. However, the positive and negative distributions of a Haar-like feature used as an image descriptor are hard to separate with a single threshold. The poor classification ability of a single threshold also increases the number of boosting operations and finally results in a poor classifier. This paper proposes a weak classifier that uses multiple criteria, adding a probabilistic criterion on the positive candidate distribution to the conventional mean classifier: the positive distribution has low variation, with values close to its mean, while the negative distribution has large variation, with widely spread values. The difference in variance between the positive and negative distributions is used as the additional criterion. In the learning procedure, the new classifier selects the better of the two by switching between the mean and the standard deviation. We call this combined classifier the "mixed weak classifier". The proposed weak classifier is more robust than the mean classifier alone and decreases the number of boosting operations needed for convergence.
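The selective switching idea can be sketched in one dimension: build both a mean-midpoint classifier and a "within k standard deviations of the positive mean" classifier, then keep whichever separates the training sets better (the midpoint rule, k value, and accuracy-based selection here are illustrative assumptions, not the paper's exact formulation):

```python
import statistics

def make_mixed_weak_classifier(pos, neg):
    """Build a weak classifier mixing a mean and a deviation criterion.

    Criterion 1 thresholds at the midpoint of the class means (the
    conventional mean classifier). Criterion 2 exploits the tight
    positive distribution: a sample within k standard deviations of
    the positive mean is called positive. The better-performing
    criterion on the training data is selected.
    """
    mu_p, mu_n = statistics.mean(pos), statistics.mean(neg)
    sd_p = statistics.pstdev(pos)
    mid = 0.5 * (mu_p + mu_n)
    sign = 1 if mu_p > mu_n else -1

    def mean_clf(x):
        return sign * (x - mid) > 0

    def dev_clf(x, k=2.0):
        return abs(x - mu_p) <= k * sd_p

    def accuracy(clf):
        hits = sum(clf(x) for x in pos) + sum(not clf(x) for x in neg)
        return hits / (len(pos) + len(neg))

    return mean_clf if accuracy(mean_clf) >= accuracy(dev_clf) else dev_clf
```

When the negatives straddle the positives on both sides, no single mean threshold works, but the deviation criterion still separates them, which is exactly the case the mixed classifier is meant to rescue.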

A Simple Approach of Improving Back-Propagation Algorithm

  • Zhu, H.;Eguchi, K.;Tabata, T.;Sun, N.
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.1041-1044
    • /
    • 2000
  • The enhancement to the back-propagation algorithm presented in this paper arose from the need to extract sparsely connected networks from networks employing product terms. The enhancement works in conjunction with the back-propagation weight update process, so that the actions of weight zeroing and weight stimulation reinforce each other. It is shown that the error measure can also be interpreted as a rate of weight change (as opposed to ${\Delta}W_{ij}$) and consequently used to determine when weights have reached a stable state. Weights judged to be stable are then compared to a zero-weight threshold; should they fall below this threshold, the weight in question is zeroed. Simulation of such a system is shown to yield improved learning rates and reduced network connection requirements, with respect to the optimal network solution trained using the normal back-propagation algorithm, for Multi-Layer Perceptron (MLP), Higher-Order Neural Network (HONN), and Sigma-Pi networks.

  • PDF
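The stability-then-zeroing rule above amounts to a two-test prune applied alongside the weight update: a weight whose recent rate of change has settled and whose magnitude sits under the zero threshold is clamped to zero. A minimal sketch (both threshold values are illustrative assumptions, not the paper's):

```python
def prune_stable_weights(weights, deltas, stability_eps=1e-4,
                         zero_threshold=0.05):
    """Zero out stable near-zero weights during back-propagation.

    A weight is judged stable once its recent rate of change falls
    below stability_eps; a stable weight whose magnitude is under
    zero_threshold is clamped to zero, sparsifying the network.
    """
    pruned = []
    for w, dw in zip(weights, deltas):
        if abs(dw) < stability_eps and abs(w) < zero_threshold:
            pruned.append(0.0)   # stable and near zero: remove connection
        else:
            pruned.append(w)     # still learning, or too large to drop
    return pruned
```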