• Title/Summary/Keyword: Simulated speech

Search Results: 70

Developing an Embedded Method to Recognize Human Pilot Intentions In an Intelligent Cockpit Aids for the Pilot Decision Support System

  • Cha, U-Chang
    • Journal of the Ergonomics Society of Korea
    • /
    • v.17 no.3
    • /
    • pp.23-39
    • /
    • 1998
  • Several recent aircraft accidents occurred due to goal conflicts between human and machine actors. To facilitate the management of cockpit activities in light of these observations, a computational aid, the Agenda Manager (AM), has been developed for use in simulated cockpit environments. Accurate knowledge of pilot intentions during cockpit operations is important for improving AM performance; without accurate knowledge of pilot goals or intentions, the information from the AM may mislead the pilot who relies on it. To provide a reliable flight simulation environment with respect to goal conflicts, a pilot goal communication method (GCM) was developed to facilitate accurate recognition of pilot goals. Embedded within the AM, the GCM recognizes pilot goals and declares them to the AM. Two approaches to the recognition of pilot goals were considered: (1) the use of an Automatic Speech Recognition (ASR) system to recognize overtly or explicitly declared pilot goals, and (2) inference of covertly or implicitly declared pilot goals via an intent inferencing mechanism. An integrated mode combining the two methods can overcome misunderstanding of covert goals through the overt GCM and, conversely, the workload concerns of the overt mode through the covert GCM. Through simulated flight experiments with real pilot subjects, the proposed GCM demonstrated its capability to recognize pilot intentions with a reasonable degree of accuracy and to handle incorrectly declared goals, and it was validated in terms of subjective workload and pilot flight control performance. The GCM was implemented within the AM to provide a rich environment for the study of human-machine interaction in the supervisory control of complex dynamic systems.


Acoustic Feedback and Noise Cancellation of Hearing Aids by Deep Learning Algorithm (심층학습 알고리즘을 이용한 보청기의 음향궤환 및 잡음 제거)

  • Lee, Haeng-Woo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.6
    • /
    • pp.1249-1256
    • /
    • 2019
  • In this paper, we propose a new algorithm to remove acoustic feedback and noise in hearing aids. Instead of the conventional FIR structure, the algorithm uses a deep neural network as an adaptive prediction filter to improve feedback and noise reduction performance. The feedback canceller first removes the feedback signal from the microphone signal, and the noise is then removed using the Wiener filter technique. Noise elimination estimates the speech from the noisy speech signal using a linear prediction model that exploits the periodicity of the speech signal. To ensure stable convergence of the two adaptive systems operating in a loop, the coefficient updates of the feedback canceller and the noise canceller are separated and converged using the residual error signal generated after cancellation. To verify the performance of the proposed feedback and noise canceller, a simulation program was written and executed. Experimental results show that, compared with the conventional FIR structure, the proposed deep learning algorithm improves the signal-to-feedback ratio (SFR) by about 10 dB in the feedback canceller and the signal-to-noise-ratio enhancement (SNRE) by about 3 dB in the noise canceller.
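
The conventional baseline that the paper improves on, an adaptive FIR feedback canceller, can be sketched with a normalized LMS (NLMS) update. This is a minimal illustration, not the paper's deep learning algorithm; the function name, filter order, and step size are illustrative choices.

```python
import numpy as np

def nlms_feedback_canceller(mic, ref, order=32, mu=0.5, eps=1e-8):
    """Cancel an acoustic feedback path with an adaptive FIR filter
    updated by normalized LMS.

    mic : microphone samples (speech plus feedback)
    ref : loudspeaker (feedback reference) samples
    Returns the residual signal after feedback removal.
    """
    w = np.zeros(order)               # adaptive FIR coefficients
    buf = np.zeros(order)             # delay line of recent reference samples
    err = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = ref[n]
        y = w @ buf                   # estimated feedback component
        err[n] = mic[n] - y           # residual after cancellation
        w += mu * err[n] * buf / (buf @ buf + eps)   # NLMS update
    return err

# Toy check: the feedback path is a short FIR of the reference signal.
rng = np.random.default_rng(0)
ref = rng.standard_normal(4000)
feedback = np.convolve(ref, [0.5, -0.3, 0.2])[:4000]
speech = 0.1 * rng.standard_normal(4000)
residual = nlms_feedback_canceller(speech + feedback, ref)
```

The paper's contribution is to replace this FIR predictor with a neural network prediction filter and to follow the feedback canceller with Wiener-filter noise reduction.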

A study on end-to-end speaker diarization system using single-label classification (단일 레이블 분류를 이용한 종단 간 화자 분할 시스템 성능 향상에 관한 연구)

  • Jaehee Jung;Wooil Kim
    • The Journal of the Acoustical Society of Korea
    • /
    • v.42 no.6
    • /
    • pp.536-543
    • /
    • 2023
  • Speaker diarization, which labels "who spoke when?" in speech with multiple speakers, has been studied with deep neural network-based end-to-end methods for labeling overlapped speech and optimizing speaker diarization models. Most deep neural network-based end-to-end speaker diarization systems solve a multi-label classification problem that predicts the labels of all speakers speaking in each frame. However, the performance of multi-label-based models varies greatly depending on the threshold setting. In this paper, we study a speaker diarization system using single-label classification so that diarization can be performed without thresholds. The proposed model estimates labels from the model output by converting speaker labels into a single label. To account for speaker label permutations during training, the proposed model uses a combination of Permutation Invariant Training (PIT) loss and cross-entropy loss. In addition, we study how to add residual connections to the model for effective learning of speaker diarization models with deep structures. The experiments used the LibriSpeech database to generate simulated noisy data for two speakers. Compared with the baseline model in terms of Diarization Error Rate (DER), the proposed method labels speakers without a threshold and improves performance by about 20.7 %.
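
The permutation-invariant part of the training objective can be sketched for the two-speaker case as follows. This is a minimal NumPy illustration of PIT with binary cross-entropy; the paper combines PIT loss with cross-entropy over converted single labels, and the names here are illustrative.

```python
import itertools
import numpy as np

def pit_cross_entropy(probs, labels, eps=1e-8):
    """Permutation Invariant Training (PIT) loss for two speakers.

    probs  : (T, 2) per-frame speaker activity probabilities
    labels : (T, 2) binary reference activities
    Evaluates binary cross-entropy under both speaker orderings and
    returns the minimum, so the loss is blind to speaker permutation.
    """
    losses = []
    for perm in itertools.permutations(range(2)):
        p = probs[:, list(perm)]
        bce = -(labels * np.log(p + eps)
                + (1 - labels) * np.log(1 - p + eps))
        losses.append(bce.mean())
    return min(losses)

labels = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)
probs = np.clip(labels, 0.05, 0.95)     # confident, correctly ordered output
probs_swapped = probs[:, ::-1]          # identical output, speakers swapped
```

Both orderings of the same prediction yield the same loss, which is what allows training without tying speaker identities to fixed output channels.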

A Collaborative Framework for Discovering the Organizational Structure of Social Networks Using NER Based on NLP (NLP기반 NER을 이용해 소셜 네트워크의 조직 구조 탐색을 위한 협력 프레임 워크)

  • Elijorde, Frank I.;Yang, Hyun-Ho;Lee, Jae-Wan
    • Journal of Internet Computing and Services
    • /
    • v.13 no.2
    • /
    • pp.99-108
    • /
    • 2012
  • Many methods have been developed to improve the accuracy of extracting information from vast amounts of data. This paper combines a number of natural language processing methods, such as Named Entity Recognition (NER), sentence extraction, and part-of-speech tagging, to carry out text analysis. The data source comprises texts obtained from the web using a domain-specific data extraction agent. A framework for the extraction of information from unstructured data was developed using the aforementioned natural language processing methods. We simulated the performance of our work in the extraction and analysis of texts for the detection of organizational structures. The simulation shows that our study outperformed other NER classifiers, such as MUC and CoNLL, on information extraction.
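
As a toy illustration of the NER step, the sketch below tags runs of capitalized tokens as candidate entities and classifies them with keyword heuristics. It is not the paper's classifier; the regex, hint set, and labels are illustrative assumptions only.

```python
import re

# Keyword hints for a toy organization classifier (illustrative only).
ORG_HINTS = {"Inc", "Corp", "University", "Department"}

def toy_ner(text):
    """Tag runs of capitalized tokens as candidate entities and label
    each span ORG or PER with a simple keyword heuristic."""
    entities = []
    for m in re.finditer(r"(?:[A-Z][a-z]+\s?)+", text):
        span = m.group().strip()
        label = "ORG" if set(span.split()) & ORG_HINTS else "PER"
        entities.append((span, label))
    return entities
```

For example, `toy_ner("John Smith works at Acme Corp.")` yields `[("John Smith", "PER"), ("Acme Corp", "ORG")]`; a real pipeline would instead use a trained sequence tagger over POS-tagged text.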

Deep Level Situation Understanding for Casual Communication in Humans-Robots Interaction

  • Tang, Yongkang;Dong, Fangyan;Yoichi, Yamazaki;Shibata, Takanori;Hirota, Kaoru
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.15 no.1
    • /
    • pp.1-11
    • /
    • 2015
  • A concept of deep level situation understanding is proposed to realize human-like natural communication (called casual communication) among multiple agents (e.g., humans and robots/machines). Deep level situation understanding consists of surface level understanding (such as gesture/posture understanding, facial expression understanding, and speech/voice understanding), emotion understanding, intention understanding, and atmosphere understanding, achieved by applying the customized knowledge of each agent and by taking thoughtfulness into consideration. The proposal aims to reduce the burden on humans in human-robot interaction, so as to realize harmonious communication by excluding unnecessary troubles or misunderstandings among agents, and ultimately helps to create a peaceful, happy, and prosperous human-robot society. A simulated experiment is carried out to validate the deep level situation understanding system on a scenario where a meeting-room reservation is made between a human employee and a secretary robot. The proposed system is intended for application in service robot systems, smoothing communication and avoiding misunderstanding among agents.

Low Pass Filter Design using CMOS Floating Resister (CMOS Floating 저항을 이용한 저역통과 필터의 설계)

  • 이영훈
    • Journal of the Korea Society of Computer and Information
    • /
    • v.3 no.2
    • /
    • pp.77-84
    • /
    • 1998
  • Continuous-time signal processing systems have been receiving considerable attention with the development of CMOS technology. In this paper, a low-pass filter using a CMOS floating resistor has been designed with a cutoff frequency suitable for speech signal processing. In particular, a new floating resistor consisting entirely of CMOS devices in saturation has been developed. Linearity within ±0.04% is achieved through nonlinearity cancellation via current mirrors over an applied range of ±1 V. The frequency response exceeds 10 MHz, and the resistors are expected to be useful in implementing integrated-circuit active RC filters. The low-pass filter designed using this method has a simpler structure than a switched-capacitor filter, thus reducing chip area. The characteristics of the designed low-pass filter are simulated with the PSpice program.
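
The basic design relation for such a filter is the first-order RC cutoff, fc = 1/(2πRC). A small sketch, with illustrative component values chosen to place the cutoff near the upper edge of the telephone speech band (the paper's actual element values are not given here):

```python
import math

def rc_lowpass_gain_db(f_hz, r_ohm, c_farad):
    """Magnitude response (in dB) of a first-order RC low-pass section,
    |H(f)| = 1 / sqrt(1 + (f / fc)^2) with fc = 1 / (2*pi*R*C).
    Returns (gain_db, fc)."""
    fc = 1.0 / (2.0 * math.pi * r_ohm * c_farad)
    gain = 1.0 / math.sqrt(1.0 + (f_hz / fc) ** 2)
    return 20.0 * math.log10(gain), fc

# Illustrative values: R = 47 kohm, C = 1 nF gives fc of about 3.39 kHz,
# just above the ~3.4 kHz upper edge of the telephone speech band.
gain_db, fc = rc_lowpass_gain_db(3386.0, 47e3, 1e-9)
```

At the cutoff frequency the response is 3 dB down, the usual definition of the passband edge for such a section.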


An adaptive time-delay recurrent neural network for temporal learning and prediction (시계열패턴의 학습과 예측을 위한 적응 시간지연 회귀 신경회로망)

  • 김성식
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.2
    • /
    • pp.534-540
    • /
    • 1996
  • This paper presents an Adaptive Time-Delay Recurrent Neural Network (ATRN) for learning and recognition of temporal correlations of temporal patterns. The ATRN employs adaptive time-delays and recurrent connections, which are inspired by neurobiology. In the ATRN, the adaptive time-delays let the network choose optimal time-delay values for the temporal location of the important information in the input patterns, and the recurrent connections enable the network to encode and integrate temporal information of sequences with arbitrary interval times and arbitrary lengths of temporal context. The ATRN described in this paper, the ATNN proposed by Lin, and the TDNN introduced by Waibel were simulated and applied to the chaotic time series prediction of the Mackey-Glass delay-differential equation. The simulation results show that the normalized mean square error (NMSE) of the ATRN is 0.0026, while the NMSE values of the ATNN and TDNN are 0.014 and 0.0117, respectively; in temporal learning, employing recurrent links in the network is more effective than putting multiple time-delays into the neurons. The best performance is attained by the ATRN. The ATRN is well suited to temporally continuous domains, such as speech recognition, moving object recognition, motor control, and time-series prediction.
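
The Mackey-Glass benchmark used above is straightforward to reproduce. A minimal sketch using Euler integration with the classic chaotic setting τ = 17 (the step size and initial condition are illustrative choices):

```python
import numpy as np

def mackey_glass(n_samples, tau=17, a=0.2, b=0.1, dt=1.0, x0=1.2):
    """Generate the Mackey-Glass chaotic series,
        dx/dt = a*x(t - tau) / (1 + x(t - tau)**10) - b*x(t),
    by simple Euler integration. tau = 17 is the classic chaotic
    setting; dt and x0 are illustrative choices."""
    x = np.full(n_samples + tau, x0)          # constant history as warm-up
    for t in range(tau, n_samples + tau - 1):
        x_tau = x[t - tau]                    # delayed state
        x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau ** 10) - b * x[t])
    return x[tau:]

series = mackey_glass(1000)
```

A predictor such as the ATRN is then trained to map a window of past samples to a future sample of this series.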


A Study on Analysis of Variant Factors of Recognition Performance for Lip-reading at Dynamic Environment (동적 환경에서의 립리딩 인식성능저하 요인분석에 대한 연구)

  • 신도성;김진영;이주헌
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.5
    • /
    • pp.471-477
    • /
    • 2002
  • Recently, lip-reading has been studied actively as an auxiliary method for automatic speech recognition (ASR) in noisy environments. However, most research results have been obtained on databases constructed under indoor conditions, so it is unclear how robust the developed lip-reading algorithms are to dynamic variation of the image. We have developed a lip-reading system based on an image-transform-based algorithm. The system recognizes 22 words and achieves word recognition of up to 53.54%. In this paper we examine how stable the lip-reading system is under environmental variation and which factors are mainly responsible for degraded word-recognition performance. To study lip-reading robustness we consider spatial variance (translation, rotation, scaling) and illumination variance. Two kinds of test data are used: a simulated lip image database and a real dynamic database captured in a car environment. Our experiments show that spatial variance is one of the factors degrading lip-reading performance, but it is not the most important one. Illumination variance reduces recognition rates severely, by as much as 70%. In conclusion, lip-reading algorithms robust to illumination variance should be developed before lip reading can serve as a complementary method for ASR.
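
The spatial and illumination variances considered can be emulated on synthetic data with a simple perturbation model. A minimal sketch (integer-pixel translation plus a linear illumination change; the paper's real in-car data is of course richer than this model, and the function name is illustrative):

```python
import numpy as np

def perturb_image(img, shift=(0, 0), gain=1.0, offset=0.0):
    """Apply an integer-pixel translation plus a linear illumination
    change (gain and offset) to a grayscale image, clipped to [0, 255].
    A crude stand-in for the spatial and illumination variances
    discussed above."""
    out = np.roll(img, shift, axis=(0, 1)).astype(float)
    return np.clip(gain * out + offset, 0.0, 255.0)
```

Sweeping `shift`, `gain`, and `offset` over a labeled lip database and re-running recognition is one way to measure which perturbation degrades accuracy fastest.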

Modeling of Sensorineural Hearing Loss for the Evaluation of Digital Hearing Aid Algorithms (디지털 보청기 알고리즘 평가를 위한 감음신경성 난청의 모델링)

  • 김동욱;박영철
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.1
    • /
    • pp.59-68
    • /
    • 1998
  • Digital hearing aids offer many advantages over conventional analog hearing aids. With the advent of high-speed digital signal processing chips, new digital techniques have been introduced to digital hearing aids. In addition, the evaluation of new ideas in hearing aids is necessarily accompanied by intensive subject-based clinical tests, which require much time and cost. In this paper, we present an objective method to evaluate and predict the performance of hearing aid systems without the help of such subject-based tests. In the hearing impairment simulation (HIS) algorithm, a sensorineural hearing impairment model is established from auditory test data of the impaired subject being simulated. The nonlinear behavior of loudness recruitment is also defined using hearing loss functions generated from the measurements. To transform the natural input sound into the impaired one, a frequency sampling filter is designed. The filter is continuously refreshed with the level-dependent frequency response function provided by the impairment model. To assess its performance, the HIS algorithm was implemented in real time using a floating-point DSP. Signals processed with the real-time system were presented to normal subjects, and their auditory data as modified by the system were measured. The sensorineural hearing impairment was simulated and tested. The hearing threshold and speech discrimination tests exhibited the efficiency of the system for hearing impairment simulation. Using the HIS system we evaluated three typical hearing aid algorithms.
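
The loudness recruitment behavior described above can be caricatured with a simple level mapping: sound is inaudible below the elevated threshold, then perceived loudness grows abnormally fast until it catches up with normal loudness at a high level. The linear mapping and the parameter values below are illustrative assumptions; the paper derives its level-dependent responses from measured audiograms.

```python
def recruitment_output_db(input_db, threshold_db, uncomfortable_db=100.0):
    """Map an input level (dB) to the loudness level perceived by an
    ear with recruitment: inaudible below the elevated threshold, then
    growing abnormally fast so it catches up with normal loudness at
    the uncomfortable level."""
    if input_db <= threshold_db:
        return 0.0                        # below threshold: inaudible
    # Expand the narrowed [threshold, UCL] range onto the normal [0, UCL].
    slope = uncomfortable_db / (uncomfortable_db - threshold_db)
    return slope * (input_db - threshold_db)
```

With a 40 dB threshold, a 70 dB input is perceived at roughly 50 dB loudness: the dynamic range is compressed into the region between threshold and discomfort.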


Real data-based active sonar signal synthesis method (실데이터 기반 능동 소나 신호 합성 방법론)

  • Yunsu Kim;Juho Kim;Jongwon Seok;Jungpyo Hong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.1
    • /
    • pp.9-18
    • /
    • 2024
  • The importance of active sonar systems is growing due to the quietness of underwater targets and the increase in ambient noise caused by growing maritime traffic. However, the low signal-to-noise ratio of the echo signal, due to multipath propagation, various clutter, ambient noise, and reverberation, makes it difficult to identify underwater targets using active sonar. Attempts have been made to apply data-based methods such as machine learning or deep learning to improve the performance of underwater target recognition systems, but it is difficult to collect enough training data given the nature of sonar datasets. Methods based on mathematical modeling have mainly been used to compensate for insufficient active sonar data, but such methodologies have limitations in accurately simulating complex underwater phenomena. Therefore, in this paper, we propose a sonar signal synthesis method based on a deep neural network. To apply the neural network model to sonar signal synthesis, the proposed method appropriately adapts the attention-based encoder and decoder, the main modules of the Tacotron model widely used in speech synthesis, to the sonar signal. A signal more similar to the actual signal can be synthesized by training the proposed model on a dataset collected by deploying a simulated target in an actual marine environment. To verify the performance of the proposed method, a Perceptual Evaluation of Audio Quality test was conducted, showing a score difference within -2.3 compared with the actual signal in four different environments. These results demonstrate that the active sonar signal generated by the proposed method approximates the actual signal.