• Title/Summary/Keyword: Visual Speech Recognition (시각 음성인식)

Development of intelligent IoT control-related AI distributed speech recognition module (지능형 IoT 관제 연계형 AI 분산음성인식 모듈개발)

  • Bae, Gi-Tae;Lee, Hee-Soo;Bae, Su-Bin
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.1212-1215 / 2017
  • This work reproduces the functions of currently available AI speakers, identifies their problems, and remedies them. In particular, as one way to ease the social problems arising from the rapid increase of single-person households in Korea, we built a multi-function AI speaker that offers an AI service capable of emotionally engaging conversation that approaches the user first via facial expression recognition, home IoT control that works regardless of the Internet environment, and the presentation of visual data.

AI Announcer : Information Transfer Software Using Artificial Intelligence Technology (AI 아나운서 : 인공지능 기술을 이용한 정보 전달 소프트웨어)

  • Kim, Hye-Won;Lee, Young-Eun;Lee, Hong-Chang
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.937-940 / 2020
  • This paper describes research on AI announcer software that automatically recognizes a text script using AI technology and visualizes the text information by applying video synthesis. The existing AI-based video information service, the AI anchor, required a long time to recognize text and synthesize video, and its use was limited because video could only be synthesized from images of specific people. The method proposed in this study trains and synthesizes a new voice with Tacotron and builds a natural video synthesis pipeline using a model trained on the LRW dataset. By improving on simple face-image synthesis and streamlining the process of producing varied imagery, it is expected to enable diverse contact-free video information services.
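
The two-stage structure this abstract describes (a Tacotron voice model feeding an LRW-trained lip-movement generator) can be outlined as below. This is only a minimal sketch: `TacotronTTS`, `LipSyncModel`, and the checkpoint names are hypothetical stand-ins, not the authors' code or a real library API.

```python
# Pipeline sketch: text -> synthesized speech -> lip-synced video.
# Both wrapper classes are hypothetical stubs standing in for a trained
# Tacotron model and an LRW-trained lip-movement generator.
from dataclasses import dataclass

@dataclass
class TacotronTTS:
    """Hypothetical wrapper around a trained Tacotron voice model."""
    checkpoint: str
    def synthesize(self, script: str) -> bytes:
        # Would run text -> mel spectrogram -> vocoder; returns waveform bytes.
        raise NotImplementedError

@dataclass
class LipSyncModel:
    """Hypothetical wrapper around a lip-movement model trained on LRW."""
    checkpoint: str
    def render(self, face_image: bytes, audio: bytes) -> bytes:
        # Would generate a talking-head video synced to the audio.
        raise NotImplementedError

def announce(script: str, face_image: bytes) -> bytes:
    tts = TacotronTTS(checkpoint="tacotron_ko.pt")       # assumed name
    lipsync = LipSyncModel(checkpoint="lrw_lipsync.pt")  # assumed name
    audio = tts.synthesize(script)
    return lipsync.render(face_image, audio)
```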

Smart Portable Navigation System Development and Implementation of 1:N service for Visually impaired person (Smart Portable Navigation System 개발 및 1:N 서비스 구현)

  • Kim, Jae-Kyung;Seo, Jae-Gil;Kim, Young-Kil
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.11 / pp.2424-2430 / 2012
  • Existing navigation systems for visually impaired people have short, limited communication ranges and cannot receive enough information from the user to provide direct assistance. In addition, because pedestrian routes are often hazardous and incompletely mapped for visually impaired people, moving about with a white cane remains inconvenient and dangerous. To address this problem, we implement a communication link that exchanges video, voice, and location information between the visually impaired user's Smart Portable Navigation System and an assistant's PC.
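
A minimal sketch of the 1:N link the title refers to: one assistant-side server accepting connections from several portable devices concurrently. The newline-delimited JSON wire format, port number, and field names here are illustrative assumptions, not the paper's actual protocol.

```python
# Assistant-side server: one thread per connected device gives a 1:N view.
import json
import socket
import threading

HOST, PORT = "0.0.0.0", 9000  # port is an assumption

def handle_device(conn: socket.socket, addr) -> None:
    """Receive newline-delimited JSON location updates from one device."""
    with conn, conn.makefile("r", encoding="utf-8") as stream:
        for line in stream:
            msg = json.loads(line)
            print(f"[{addr}] lat={msg['lat']} lon={msg['lon']}")

def serve() -> None:
    with socket.create_server((HOST, PORT)) as server:
        while True:
            conn, addr = server.accept()
            threading.Thread(target=handle_device, args=(conn, addr),
                             daemon=True).start()

if __name__ == "__main__":
    serve()
```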

Mobile Photo Shooting Guide System for A Blind Person (시각 장애인을 위한 모바일 사진촬영 가이드 시스템)

  • Kim, Taehyub;Kim, Doyun;Im, Donghyuk;Hong, Hyunki
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.7 / pp.167-174 / 2013
  • A wide variety of smartphone applications have been developed, and an intelligent photography service for visually impaired people is needed. Using auditory feedback or another person's help, blind users take pictures with their cameras to keep records or share information with other people. However, it is difficult for them to capture the picture they want because they cannot perceive the visual content in the viewfinder. This paper presents a novel guidance system that helps blind people take pictures with a smartphone. The proposed method identifies a person in the picture and detects eye blinks and blurring. The system then provides guidance, such as which direction to point the camera, conveyed to the blind user through an auditory interface.
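
Two of the checks the abstract mentions, blur detection and person detection, can be sketched with standard OpenCV primitives as below. This assumes variance-of-Laplacian blur scoring and a bundled Haar cascade; the threshold is an illustrative guess, and the paper's actual detectors may differ.

```python
# Frame checks: Laplacian-variance blur score and Haar-cascade face count.
import cv2

BLUR_THRESHOLD = 100.0  # assumed value; tune per camera

def analyze_frame(path: str) -> dict:
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Low variance of the Laplacian indicates a blurry frame.
    blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Detect frontal faces with the cascade file bundled with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    return {"blurry": blur_score < BLUR_THRESHOLD, "faces": len(faces)}
```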

Navigation system for the people who are visually impaired using ARM Cortex-A9 Platform (ARM Cortex-A9 Platform기반의 시각장애인을 위한 Navigation System 구현)

  • Lim, Ik-chan;Kim, Young-kil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2013.05a / pp.93-95 / 2013
  • Conventional assistive tools for visually impaired people provide only simple services, using ultrasound or RFID tags to identify obstacles. Because of their short recognition distance, they cannot give clear guidance and are vulnerable to unforeseen situations. The portable navigation system implemented here on an ARM Cortex-A9 platform, together with a service center, assists visually impaired people while walking; staffing the service center can also help offset the shortage of jobs that accompanies an aging population. The navigation system, which the visually impaired user carries, includes a camera, GPS, audio, and Ethernet, and transmits images of the user's surroundings, GPS coordinates, and sound over TCP/IP. Service center staff receive this information and give directions by talking with the user, so the system can provide effective guidance to visually impaired people.
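
A device-side sketch of the TCP/IP uplink described above, sending a GPS fix and a camera frame to the service center. The length-prefixed JSON framing is an illustrative assumption, not the system's actual protocol.

```python
# Device-side uplink: length-prefixed JSON header followed by JPEG bytes.
import json
import socket
import struct

def send_update(host: str, port: int, lat: float, lon: float,
                jpeg: bytes) -> None:
    header = json.dumps({"lat": lat, "lon": lon, "len": len(jpeg)}).encode()
    with socket.create_connection((host, port)) as sock:
        # 4-byte big-endian header length, then header, then the frame.
        sock.sendall(struct.pack("!I", len(header)) + header + jpeg)
```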

Personalized Speech Classification Scheme for the Smart Speaker Accessibility Improvement of the Speech-Impaired people (언어장애인의 스마트스피커 접근성 향상을 위한 개인화된 음성 분류 기법)

  • SeungKwon Lee;U-Jin Choe;Gwangil Jeon
    • Smart Media Journal / v.11 no.11 / pp.17-24 / 2022
  • With the spread of smart speakers based on voice recognition and deep learning, not only non-disabled people but also blind or physically impaired people can easily control home appliances such as lights and TVs by voice through linked home network services, greatly improving quality of life. Speech-impaired people, however, cannot use these services because articulation or speech disorders make their pronunciation inexact. In this paper, we propose a personalized voice classification technique that lets speech-impaired users operate a subset of the functions a smart speaker provides. The goal is to raise the recognition rate and accuracy for sentences spoken by speech-impaired people even with a small amount of data and a short training time, so that the smart speaker's services become genuinely usable. We fine-tune a ResNet18 model with data augmentation and the one-cycle learning-rate schedule. In experiments, after recording each of 30 smart speaker commands ten times and training for under three minutes, the speech classification recognition rate was about 95.2%.
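
The training recipe the abstract describes, fine-tuning ResNet18 with a one-cycle learning-rate schedule, might look roughly like the following PyTorch sketch. The loader, epoch count, and learning rates are assumptions, and the augmentation transforms are left to the data pipeline; the paper's exact hyperparameters and input features are not given here.

```python
# Fine-tune a pretrained ResNet18 on 30 command classes with OneCycleLR.
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import OneCycleLR
from torchvision import models

NUM_CLASSES, EPOCHS = 30, 10  # 30 commands; epoch count assumed

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train(loader):
    """`loader` yields (input_batch, label_batch); augmentation is
    assumed to be applied inside the loader's transforms."""
    scheduler = OneCycleLR(optimizer, max_lr=1e-2,
                           steps_per_epoch=len(loader), epochs=EPOCHS)
    model.train()
    for _ in range(EPOCHS):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            scheduler.step()  # one-cycle schedule steps once per batch
```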

A study on the lip shape recognition algorithm using 3-D Model (3차원 모델을 이용한 입모양 인식 알고리즘에 관한 연구)

  • 배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.3 no.1 / pp.59-68 / 1999
  • Recent communication systems have moved toward using the speaker's face image together with the voice, since this yields a higher recognition rate than voice data alone. We therefore present a lipreading method for speech image sequences that uses a 3-D facial shape model. The method relies on facial feature information such as the degree of lip opening, the movement of the jaw, and the protrusion of the lips. First, we fit the 3-D face model to the speaking face in each image of the sequence. We then compute the frame-to-frame variation of the fitted 3-D shape model and use these variations as recognition parameters. The intensity gradients obtained from the variation of the 3-D feature points are used to segment the sequence into recognition units. Recognition is performed with a discrete HMM algorithm over multiple observation sequences that fully reflect the variation of the 3-D feature points. In recognition experiments with 8 Korean vowels and 2 Korean consonants, we obtained a recognition rate of about 80% for the plosives and vowels. Since a recognition experiment on 10 Korean vowels with these recognition parameters also yielded a high recognition rate, we conclude that the proposed feature vectors are usable as a visual distinguishing factor.
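
The recognition stage relies on a discrete HMM over quantized lip-feature sequences. As a self-contained illustration, here is a standard Viterbi decoder for a discrete HMM in NumPy; the parameter shapes and the feature quantization are toy assumptions, not the paper's trained models.

```python
# Viterbi decoding for a discrete HMM: most likely hidden state path
# given a sequence of quantized observation symbols.
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """obs: list of symbol indices; start_p: (S,); trans_p: (S, S);
    emit_p: (S, num_symbols). Returns the most likely state path."""
    n_states, T = len(start_p), len(obs)
    logp = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logp[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logp[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            logp[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    # Backtrack from the best final state.
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```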

Development of Tennis Training Machine in Outdoor Environment with Human Tracking (사용자 추적 기능을 가진 야외용 테니스 훈련용 장치 개발)

  • Yang, Jeong-Yean
    • The Journal of the Korea Contents Association / v.20 no.3 / pp.424-431 / 2020
  • This paper focuses on the development of a sports robot that detects a human player and serves balls automatically. When robot technologies are applied to sports machines, domain-specific problems arise, such as the outdoor environment and the playing conditions under which visual and vocal modalities must be recognized. A Gaussian mixture model and a Kalman filter are used to detect the player's position in the left, right, and depth directions and to suppress the noise caused by variations in the player's posture near the net. The sports robot uses a pan-tilt structure to serve balls under pneumatic control, within a multi-layered software architecture. Finally, the proposed tracking and machine performance are discussed through experimental results.
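
The tracking combination the abstract names, a Gaussian mixture background model plus a Kalman filter, can be sketched with OpenCV as below. The constant-velocity state model and the noise settings are illustrative assumptions, not the paper's tuned values.

```python
# GMM background subtraction picks out the moving player; a Kalman
# filter smooths the measured centroid.
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

# 4 state variables (x, y, vx, vy), 2 measured (x, y): constant velocity.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2  # assumed noise

def track(frame):
    """Return the predicted (x, y) of the player in this frame."""
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    prediction = kf.predict()
    if contours:  # correct with the largest moving blob's centroid
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        kf.correct(np.array([[x + w / 2], [y + h / 2]], np.float32))
    return prediction[:2].ravel()
```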

Face Emotion Recognition using ResNet with Identity-CBAM (Identity-CBAM ResNet 기반 얼굴 감정 식별 모듈)

  • Oh, Gyutea;Kim, Inki;Kim, Beomjun;Gwak, Jeonghwan
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.559-561 / 2022
  • In the age of artificial intelligence, technologies that recognize and respond to human emotions are advancing rapidly in order to provide personalized environments. Human emotion can be recognized from the face, voice, body motion, or biosignals, but facial expression is the most intuitive and accessible of these. In this paper, we therefore propose an Identity-CBAM module for accurate facial emotion classification, built from the gates of the Convolutional Block Attention Module (CBAM), residual blocks, and skip connections. The CBAM gates and residual blocks emphasize the key feature information of each expression, making the model more context-aware, while the skip connection makes the module robust to vanishing and exploding gradients. Using the AI-HUB composite video dataset for Korean emotion recognition, we classify six classes; with the Identity-CBAM module applied, F1-score improves by 0.4-2.7% and accuracy by 0.18-2.03% over vanilla ResNet50 and ResNet101. Visualization with Guided Backpropagation and Guided Grad-CAM also shows that important feature points are represented in finer detail. These results demonstrate that, for the in-image expression classification task, using the Identity-CBAM module together with ResNet50 or ResNet101 is more suitable than using the vanilla networks alone.
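
A CBAM-style block with an identity skip connection, as the abstract describes, can be sketched in PyTorch as follows. This is a generic reconstruction from CBAM's channel and spatial gates; it is not the authors' exact Identity-CBAM module, and the reduction ratio and kernel size are assumptions.

```python
# Channel and spatial attention gates combined with a residual identity
# connection, in the spirit of CBAM.
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        return x * torch.sigmoid(avg + mx)[:, :, None, None]

class SpatialGate(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
    def forward(self, x):
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))

class IdentityCBAM(nn.Module):
    """Channel then spatial attention, plus a residual skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelGate(channels)
        self.spatial = SpatialGate()
    def forward(self, x):
        return x + self.spatial(self.channel(x))  # identity skip
```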

HunMinJeomUm: Text Extraction and Braille Conversion System for the Learning of the Blind (시각장애인의 학습을 위한 텍스트 추출 및 점자 변환 시스템)

  • Kim, Chae-Ri;Kim, Ji-An;Kim, Yong-Min;Lee, Ye-Ji;Kong, Ki-Sok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.5 / pp.53-60 / 2021
  • The number of visually impaired and blind people is increasing, but braille textbooks for them remain scarce, which infringes on their right to education despite their willingness to learn. To guarantee this right, this paper develops a learning system, HunMinJeomUm, that gives them access to textbooks, documents, and photographs that are not available in braille, without the assistance of others. In our system, a smartphone app and web pages are designed for accessibility, and a braille kit is built with an Arduino and braille modules. The system supports the following functions. First, users select the documents or pictures they want, and the system extracts the text using OCR. Second, the extracted text is converted into voice and braille. Third, a membership function lets users revisit previously extracted text. Experiments confirm that the system generates braille and audio output reliably and achieves high OCR recognition rates, and that even completely blind users can easily operate the smartphone app.
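
The first two processing steps, OCR extraction and braille conversion, can be sketched as below with pytesseract and a character-to-braille lookup. The three-letter mapping table is purely illustrative; a real system (especially for Korean braille) needs full braille rules, and the voice output step is omitted here.

```python
# OCR a document image, then map recognized characters to Unicode
# braille patterns.
import pytesseract
from PIL import Image

# Unicode braille cells for a few lowercase letters (illustrative only).
BRAILLE = {"a": "\u2801", "b": "\u2803", "c": "\u2809", " ": " "}

def image_to_braille(path: str) -> str:
    text = pytesseract.image_to_string(Image.open(path))
    # Unmapped characters fall back to "?" in this sketch.
    return "".join(BRAILLE.get(ch, "?") for ch in text.lower())
```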