• Title/Summary/Keyword: Verbal Signal

Search Results: 9

Cerebrum Lateralization by Area based on the Intensity of BOLD Signal during Cognitive Performance (인지 기능 수행 시 BOLD 신호 크기에 기반 한 영역별 대뇌 편측화)

  • Chung Soon Cheol;Shon Jin Hun;Kim Ik Hyeon;Lee Soo Yeol
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.22 no.1
    • /
    • pp.183-192
    • /
    • 2005
  • This study compared a cerebral lateralization index based on the area of neural activation with one based on the intensity of neural activation. Eight right-handed male college students (mean age 23.5 years) and ten right-handed male college students (mean age 25.1 years) participated in the visuospatial and verbal task experiments, respectively. Functional brain images were acquired on a 3T MRI scanner using the single-shot EPI method. The lateralization index based on the area of neural activation indicated that the right hemisphere is dominant in visuospatial tasks and the left hemisphere in verbal tasks, but this dominance is not sufficient to locate the exact brain regions involved in these tasks. When the lateralization index was computed from the intensity of neural activation, the region most closely lateralized for visuospatial tasks was found to be the superior parietal lobe, and the regions most closely lateralized for verbal tasks the inferior and middle frontal lobes. Thus, the area-wise lateralization index based on activation intensity proposed in this study can determine cerebral dominance region by region, and is therefore helpful for accurate, quantitative determination of cerebral lateralization.
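The lateralization index referred to in the abstract is conventionally computed as (L − R)/(L + R), giving +1 for full left dominance and −1 for full right dominance. A minimal sketch, with hypothetical per-hemisphere values, showing how the same formula serves both the area-based (voxel-count) and intensity-based (summed BOLD change) variants:

```python
def laterality_index(left, right):
    """Standard laterality index: +1 fully left-dominant, -1 fully right-dominant."""
    return (left - right) / (left + right)

# Area-based variant: count suprathreshold voxels in each hemisphere's ROI.
area_li = laterality_index(left=420, right=510)        # hypothetical voxel counts

# Intensity-based variant: sum BOLD signal change over the same ROIs.
intensity_li = laterality_index(left=38.2, right=61.5) # hypothetical % signal sums
```

Both hypothetical examples come out negative, i.e. right-dominant, as the abstract reports for visuospatial tasks.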

A Development of a Real-time, Traffic Adaptive Control Scheme Through VIDs. (영상검지기를 이용한 실시간 교통신호 감응제어)

  • 김성호
    • Journal of Korean Society of Transportation
    • /
    • v.14 no.2
    • /
    • pp.89-118
    • /
    • 1996
  • The development and implementation of a real-time, traffic-adaptive control scheme based on fuzzy logic using Video Image Detector systems (VIDs) is presented. Through VID-based image processing, fuzzy logic can be applied to real-time traffic-adaptive signal control. Fuzzy control logic allows linguistic and inexact traffic data to be manipulated, making it a useful tool in designing signal timing plans. Fuzzy logic can comprehend linguistic instructions and generate a control strategy based on a priori verbal communication. The implementation of a fuzzy logic controller for a traffic network is introduced. Comparisons are made between the fuzzy logic controller and an actuated controller at an isolated intersection. The results obtained with the fuzzy logic controller are also compared with those of a pretimed controller for coordinated intersections. Simulation results from these comparisons indicate that system performance is better under the fuzzy logic controller. Integration of the aforementioned schemes into an ATMS framework will lead to real-time adjustment of traffic control signals, resulting in a significant reduction in traffic congestion.
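As a rough illustration of how fuzzy logic turns linguistic traffic observations into a timing decision, here is a minimal sketch with hypothetical membership functions and a two-rule base; the paper's actual rule base and input variables are not given in the abstract:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def green_extension(queue, arrivals):
    """Hypothetical two-rule fuzzy controller mapping linguistic traffic
    observations (queue length in vehicles, arrival rate in veh/min) to a
    green-time extension in seconds."""
    # Linguistic terms as membership degrees in [0, 1].
    queue_long = tri(queue, 5, 15, 25)
    arrivals_heavy = tri(arrivals, 4, 10, 16)
    # Rule strengths (min as fuzzy AND); each rule recommends an extension.
    rules = [
        (min(queue_long, arrivals_heavy), 10.0),  # long queue AND heavy flow -> extend a lot
        (max(1.0 - queue_long, 0.0), 2.0),        # queue not long -> extend a little
    ]
    # Weighted-average defuzzification.
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

With a long queue and heavy flow the controller recommends the full 10 s extension; with an empty approach it recommends 2 s.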


Bidirectional BLE enabled Audible Pedestrian Signal for Visual Impaired Pedestrian (양방향 통신 지원 시각장애인용 BLE 기반 음향신호기)

  • Kim, Ju-Wan;Kim, Jungsook;Na, Dong-Gil;Kim, Hyoung-Sun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.1
    • /
    • pp.99-106
    • /
    • 2017
  • An Audible Pedestrian Signal (APS) is a supplementary device that provides information about pedestrian signals in non-visual formats such as audible tones and verbal messages. It can inform pedestrians of the location and direction of a crosswalk and of the traffic signal status. In general, a pedestrian who is visually impaired uses a 358.5 MHz wireless remote controller to activate the APS and crosses the crosswalk with its help. The existing APS can only receive an activation message from the remote controller. Therefore, especially at intersections, all APSs within the communication range of the remote controller are activated simultaneously, which confuses visually impaired pedestrians. In this paper, we propose a BLE-enabled APS that supports two-way communication with a smartphone via Bluetooth. The proposed system solves this problem and provides additional information, so that visually impaired pedestrians can cross the crosswalk more conveniently.

Happy Applicants Achieve More: Expressed Positive Emotions Captured Using an AI Interview Predict Performances

  • Shin, Ji-eun;Lee, Hyeonju
    • Science of Emotion and Sensibility
    • /
    • v.24 no.2
    • /
    • pp.75-80
    • /
    • 2021
  • Do happy applicants achieve more? Although it is well established that happiness predicts desirable work-related outcomes, previous findings were primarily obtained in social settings. In this study, we extended the scope of the "happiness premium" effect to the artificial intelligence (AI) context. Specifically, we examined whether an applicant's happiness signal captured by an AI system effectively predicts his or her objective performance. Data from 3,609 job applicants showed that verbally expressed happiness (the frequency of positive words) during an AI interview predicts cognitive task scores, and this tendency was more pronounced among women than men. However, facially expressed happiness (the frequency of smiling) recorded by the AI could not predict performance. Thus, when AI is involved in a hiring process, verbal rather than facial cues of happiness provide a more valid marker of applicants' hiring chances.
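The verbal happiness signal described here is essentially a positive-word frequency measure over interview transcripts. A minimal sketch, with a small hypothetical lexicon standing in for whatever dictionary the study actually used:

```python
# Hypothetical positive-word lexicon; the study's actual dictionary is not given.
POSITIVE_WORDS = {"happy", "glad", "enjoy", "great", "excited", "love"}

def positive_word_frequency(transcript):
    """Fraction of tokens that are positive words: the verbal happiness signal."""
    tokens = transcript.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in POSITIVE_WORDS for t in tokens) / len(tokens)
```

Scores like these, computed per applicant, could then be regressed against cognitive task performance as in the study.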

Design of Model to Recognize Emotional States in a Speech

  • Kim Yi-Gon;Bae Young-Chul
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.6 no.1
    • /
    • pp.27-32
    • /
    • 2006
  • Verbal communication is the most commonly used means of communication. A spoken word carries a great deal of information about speakers and their emotional states. In this paper we designed a model to recognize emotional states in speech, the first of two phases in developing a toy machine that recognizes emotional states in speech. We conducted an experiment to extract and analyze the emotional state of a speaker from speech. To analyze the signal output we used three characteristics of sound as vector inputs: the frequency, intensity, and period of tones. We also used eight basic emotional parameters: surprise, anger, sadness, expectancy, acceptance, joy, hate, and fear, portrayed by five selected students. To facilitate the differentiation of spectral features, we used wavelet transform analysis. We applied ANFIS (Adaptive Neuro-Fuzzy Inference System) in designing an emotion recognition model from speech. The inference error was about 10%, and the experimental results indicate that the model is about 85% effective and reliable.
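A toy sketch of extracting the three input cues named in the abstract (frequency, intensity, and period of tones) from a raw waveform. This is an illustration only, using a zero-crossing rate as a crude frequency proxy; it omits the wavelet-transform and ANFIS stages the paper actually uses:

```python
import math

def speech_features(samples, sample_rate):
    """Toy extraction of the three cues used as model inputs:
    frequency (zero-crossing estimate), intensity (RMS), and duration."""
    n = len(samples)
    duration = n / sample_rate
    rms = math.sqrt(sum(s * s for s in samples) / n)
    # Each full cycle of a tone crosses zero twice, so crossings/2 per signal
    # length gives a rough fundamental-frequency estimate.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    freq_estimate = crossings * sample_rate / (2 * n)
    return freq_estimate, rms, duration
```

For a pure 100 Hz sine sampled at 8 kHz this recovers roughly 100 Hz and an RMS near 1/√2; real speech would need proper pitch tracking.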

Socio-Emotional Cues Can Help 10-Month-Olds Understand the Relationship Between Others' Words and Goals (타인의 단어와 행동 목표의 관계성에 대한 10개월 영아의 이해에 있어서 사회정서 단서의 영향)

  • Lee, Youn Mi Cathy;Kim, Min Ju;Song, Hyun-joo
    • Korean Journal of Child Studies
    • /
    • v.38 no.1
    • /
    • pp.205-215
    • /
    • 2017
  • Objective: The current study examined whether providing both an actor's eye gaze and emotional expressions can help 10-month-olds interpret a change in the actor's words as a signal of a change in the actor's goal object. Methods: Sixteen 10-month-olds participated in an experiment using the violation-of-expectation paradigm and were compared to sixteen 10-month-olds in a control condition. The infants in the experimental condition were familiarized to an event in which an actor looks at one of two novel objects, excitedly utters a sentence, "Wow, here's a modi!", and grasps the object. The procedure in the control condition was identical except that the infants heard the sentence without any emotional excitement and the agent's eye gaze was hidden by a visor. In the following test trial, the infants in both conditions heard the agent change her word (from modi to papu) and watched her grasp either the same object as before (old-goal event) or the new object (new-goal event). Results: The infants in the experimental condition looked at the old-goal event longer than at the new-goal event, suggesting that they expected the agent to change her goal object when she changed her word. However, the infants in the control condition looked at the two events about equally. Conclusion: When both eye gaze and emotional cues were provided, 10-month-olds were able to exploit the agent's verbal information when reasoning about whether the agent would pursue the same goal object as before.

A Study on Improving English Pronunciation and Intonation utilizing Fluency Improvement system (음성인식 학습 시스템활용 영어 발음 및 억양 개선방안에 관한 연구)

  • Yi, Jae-Il;Kim, Young-Kwon;Kim, Gui-Jung
    • Journal of the Korea Convergence Society
    • /
    • v.8 no.11
    • /
    • pp.1-6
    • /
    • 2017
  • This paper focuses on the development of a system that improves the convenience of foreign language learning and enhances the learner's command of the target language through the use of IT devices. Beyond basic grammar, pronunciation and intonation have a crucial effect on everyday communication. English pronunciation and intonation differ according to the basic characteristics of a learner's native language, and these differences often cause problems in communication. The proposed system judges acceptability during the English communication process and requests corrections in real time. It minimizes system intervention by collecting various voice signals from foreign language learners and setting threshold points that can be considered acceptable. As a result, learners can increase their learning efficiency with minimal interruption of their utterances caused by unnecessary system intervention.

Development of Neck-Type Electrolarynx Blueton and Acoustic Characteristic Analysis (경부형 전기인공후두 Blueton의 개발과 음향학적 성능 분석)

  • Choi, Seong-Hee;Park, Young-Jae;Park, Young-Kwan;Kim, Tae-Jung;Nam, Do-Hyun;Lim, Sung-Eun;Lee, Sung-Eun;Kim, Han-Soo;Choi, Hong-Shik;Kim, Kwang-Moon
    • Journal of the Korean Society of Laryngology, Phoniatrics and Logopedics
    • /
    • v.15 no.1
    • /
    • pp.37-42
    • /
    • 2004
  • The electrolarynx (EL), a battery-operated vibrator held against the neck and operated with an on-off button, has been widely used as a verbal communication method among post-laryngectomy patients. EL speech can be produced easily, without additional surgery or special training, and can be used together with other methods. This institute developed a neck-type EL named "Blueton" in cooperation with the EL company Linkus; it consists of three parts: a vibrator part, a control part, and a battery part. In this study we evaluated the acoustic characteristics of voices produced with Blueton compared with the Servox-inton using MDVP. Three EL users (two full-time users, one part-time user) participated. The results revealed that NHR was higher with Servox than with Blueton, and intensity was higher with Blueton than with Servox. The spectra of vowels produced by EL speakers are mixed signals combining the talker's vocal output and electrolarynx noise, and the spectral pattern is similar for the two ELs. High SPI indices and the vowel spectra from MDVP demonstrated the noise-related characteristics of both electrolarynxes. These findings suggest that Blueton provides a useful rehabilitation option for post-laryngectomy patients.


A Neurobiological Measure of General Intelligence in the Gifted (뇌기능영상 측정법을 이용한 영재성 평가의 타당성 연구)

  • Cho, Sun-Hee;Kim, Heui-Baik;Choi, Yu-Yong;Chae, Jeong-Ho;Lee, Kun-Ho
    • Journal of Gifted/Talented Education
    • /
    • v.15 no.2
    • /
    • pp.101-125
    • /
    • 2005
  • We applied functional magnetic resonance imaging (fMRI) techniques to examine whether general intelligence (g) could be assessed using a neurobiological signal of the brain. Participants were students at a national science academy and several local high schools. They were administered diverse intelligence tests (RAPM and WAIS) and creativity tests (TTCT-figural and TTCT-verbal). Forty of them were scanned with fMRI while performing complex and simple g tasks. In brain regions with greater blood flow during complex than during simple g tasks, the gifted group with an exceptional g level did not differ significantly from the average group with an ordinary g level: both activated the lateral prefrontal, anterior cingulate, and posterior parietal cortices. However, the activation levels of the gifted group were greater than those of the average group, particularly in the posterior parietal cortex. Correlation analysis showed that the activity of the posterior parietal cortex had the highest correlation (r = 0.73~0.74) with individual g levels, while the other regions showed moderate correlations (r = 0.53~0.66). On the other hand, a two-sample t test showed a striking contrast between the gifted and the average group in intelligence test scores, but not in creativity test scores. These results suggest that a neurobiological signal of the brain could possibly be used in the assessment of the gifted, and also that creativity should be given a great deal of weight in that assessment.
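The reported region-by-region values are ordinary Pearson correlation coefficients between ROI activity and individual g scores. A minimal self-contained sketch of that computation, on made-up numbers:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences,
    as used to relate ROI activation levels to individual g scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Applied per ROI across the forty scanned participants, this yields the r values the abstract compares (e.g. 0.73~0.74 for the posterior parietal cortex versus 0.53~0.66 elsewhere).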