• Title/Summary/Keyword: Voice recognition system

334 search results

Design and Implementation of a Language Identification System for Handwriting Input Data (필기 입력데이터에 대한 언어식별 시스템의 설계 및 구현)

  • Lim, Chae-Gyun; Kim, Kyu-Ho; Lee, Ki-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.10 no.1, pp.63-68, 2010
  • Recently, to accelerate the ubiquitous-computing era, input interfaces for mobile devices are being actively researched. In addition to existing interfaces such as the keyboard and cursor (mouse), other modalities, including handwriting, voice, vision, and touch, are under investigation as new interfaces. Especially for small mobile devices, there is an increasing need for an efficient input interface despite the small screens, because the installation of additional devices is strictly limited by size. Previous studies on handwriting recognition have generally been based either on two-dimensional images or on algorithms that identify handwritten data entered as vectors, and they have focused only on improving the accuracy of handwriting recognition algorithms. A problem arises, however, when actual handwriting is entered: the user must select the class of their characters (e.g., upper- or lower-case English, Hangul - the Korean alphabet, or numbers). To solve this problem, the current study presents a system that distinguishes languages by analyzing the shape of the handwritten characters. The proposed technique treats handwritten data as sets of vector units; by analyzing the correlation and directivity of each vector unit, a more efficient language identification system becomes possible.
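The vector-direction idea above can be illustrated with a minimal sketch, assuming hypothetical stroke vectors and per-script prototype histograms (this is not the authors' actual implementation): each stroke vector's direction is quantized into a histogram, which is then compared against script prototypes by cosine similarity.

```python
import math

def direction_histogram(strokes, bins=8):
    """Quantize each (dx, dy) stroke vector's direction into one of `bins` sectors."""
    hist = [0] * bins
    for dx, dy in strokes:
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi / bins)) % bins] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def correlation(h1, h2):
    """Cosine similarity between two direction histograms."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def classify(strokes, prototypes):
    """Pick the script whose prototype histogram is most similar to the input."""
    h = direction_histogram(strokes)
    return max(prototypes, key=lambda name: correlation(h, prototypes[name]))
```

In practice the prototypes would be learned from labeled handwriting samples per script rather than fixed by hand.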

The Character Recognition System of Mobile Camera Based Image (모바일 이미지 기반의 문자인식 시스템)

  • Park, Young-Hyun; Lee, Hyung-Jin; Baek, Joong-Hwan
    • Journal of the Korea Academia-Industrial cooperation Society, v.11 no.5, pp.1677-1684, 2010
  • Recently, with the development of mobile phones and the spread of smartphones, many kinds of content have been developed. In particular, since small cameras are built into mobile devices, people are interested in image-based content development, which has become an important part of its practical use. Among such applications, a character recognition system can be widely used in blind-guidance systems, automatic robot navigation, automatic video retrieval and indexing, and automatic text translation. This paper therefore proposes a system that extracts text areas from natural images captured by a smartphone camera; the individual characters are recognized, and the result is output as voice. Text areas are extracted using the AdaBoost algorithm, and individual characters are recognized using an error back-propagation neural network.
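The AdaBoost stage mentioned above can be sketched in miniature as AdaBoost over one-dimensional threshold stumps (a toy stand-in for the paper's text-area detector; the feature values and round count are hypothetical):

```python
import math

def stump_predict(x, feat, thresh, polarity):
    """Weak learner: thresholded feature test returning +1 or -1."""
    return polarity if x[feat] >= thresh else -polarity

def train_adaboost(samples, labels, rounds=5):
    """Each round picks the stump with lowest weighted error, then reweights
    the samples so misclassified ones count more next round."""
    n = len(samples)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None
        for feat in range(len(samples[0])):
            for thresh in sorted({x[feat] for x in samples}):
                for pol in (1, -1):
                    err = sum(wi for wi, x, y in zip(w, samples, labels)
                              if stump_predict(x, feat, thresh, pol) != y)
                    if best is None or err < best[0]:
                        best = (err, feat, thresh, pol)
        err, feat, thresh, pol = best
        err = max(err, 1e-10)                      # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)    # stump weight
        model.append((alpha, feat, thresh, pol))
        w = [wi * math.exp(-alpha * labels[i]
                           * stump_predict(samples[i], feat, thresh, pol))
             for i, wi in enumerate(w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def predict(model, x):
    """Sign of the alpha-weighted vote over all stumps."""
    score = sum(a * stump_predict(x, f, t, p) for a, f, t, p in model)
    return 1 if score >= 0 else -1
```

A real text detector would use image features (e.g. Haar-like responses over candidate windows) in place of these scalar toy features.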

Design and Implementation of Real Time Device Monitoring and History Management System based on Multiple devices in Smart Factory (스마트팩토리에서 다중장치기반 실시간 장비 모니터링 및 이력관리 시스템 설계 및 구현)

  • Kim, Dong-Hyun; Lee, Jae-min; Kim, Jong-Deok
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.1, pp.124-133, 2021
  • A smart factory is a future factory that collects, analyzes, and monitors various data in real time through sensors attached to the equipment in the factory. In a smart factory it is very important to query and record the status and history of equipment in real time, and the emergence of various smart devices allows this to be done more efficiently. This paper proposes a multi-device-based system that can create, search, and delete equipment status and history records in real time. The proposed system uses an Android system and a smart glass system at the same time, in consideration of the special environment of the factory. The smart glass system uses QR codes for equipment recognition and provides a more efficient work environment through a voice recognition function. We designed a system structure for real-time equipment monitoring based on multiple devices, and we show its practicality by implementing an Android system, a smart glass system, and a web application server.
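The QR-based equipment lookup could be sketched roughly as follows; this is a toy in-memory stand-in, and the `EQUIPMENT` schema and field names are assumptions for illustration, not the authors' design:

```python
import json

# In-memory stand-in for the equipment database (hypothetical schema).
EQUIPMENT = {
    "EQ-0001": {"name": "Press #1", "status": "running", "history": []},
}

def handle_qr_scan(qr_payload):
    """Resolve a scanned QR payload (equipment id) to its record,
    append a history entry, and return a JSON response."""
    record = EQUIPMENT.get(qr_payload)
    if record is None:
        return json.dumps({"error": "unknown equipment", "id": qr_payload})
    record["history"].append({"event": "scanned"})
    return json.dumps({"id": qr_payload, "name": record["name"],
                       "status": record["status"]})
```

In the described system this logic would live on the web application server, with the Android and smart glass clients posting the decoded QR payload over HTTP.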

Subtitle Automatic Generation System using Speech to Text (음성인식을 이용한 자막 자동생성 시스템)

  • Son, Won-Seob; Kim, Eung-Kon
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.16 no.1, pp.81-88, 2021
  • Recently, many videos, such as the online lecture videos prompted by COVID-19, have been produced. However, because of limited working hours and cost, only some of these videos have subtitles, which is emerging as an obstacle to information access for the deaf. In this paper, we develop a system that automatically generates subtitles using voice recognition, separating sentences by their endings and timing, to reduce the time and labor required for subtitle generation.
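The ending-and-time sentence splitting described above might look like the following sketch; the sentence-ending list and the (word, start, end) input format are assumptions for illustration, not the paper's actual design:

```python
def srt_time(seconds):
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600000)
    m, ms = divmod(ms, 60000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def build_cues(words, endings=("다.", "요.", "까?")):
    """Group (word, start, end) tuples from a recognizer into subtitle cues,
    closing a cue whenever a word carries a sentence-final ending
    (hypothetical list of Korean endings)."""
    cues, current = [], []
    for word, start, end in words:
        current.append((word, start, end))
        if word.endswith(endings):
            cues.append(current)
            current = []
    if current:
        cues.append(current)
    return [{"start": c[0][1], "end": c[-1][2],
             "text": " ".join(w for w, _, _ in c)} for c in cues]
```

Each cue's start/end times and text can then be serialized directly into an SRT file with `srt_time`.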

Knowledge Transfer Using User-Generated Data within Real-Time Cloud Services

  • Zhang, Jing; Pan, Jianhan; Cai, Zhicheng; Li, Min; Cui, Lin
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.1, pp.77-92, 2020
  • When automatic speech recognition (ASR) is provided as a cloud service, it is easy to collect voice and application-domain data from users, and harnessing these data facilitates more personalized services. In this paper, we demonstrate a transfer-learning-based knowledge service built with user-generated data collected through our novel system, which delivers a personalized ASR service. First, we discuss the motivation, challenges, and prospects of building such a knowledge-based service-oriented system. Second, we present a Quadruple Transfer Learning (QTL) method that can learn a classification model from a source domain and transfer it to a target domain. Third, we provide an overview of the architecture of our system, which collects voice data from mobile users, labels the data via crowdsourcing, uses the collected user-generated data to train different machine learning models, and delivers personalized real-time cloud services. Finally, we use e-book data collected from our system to train classification models and apply them in the smart-TV domain; the experimental results show that our QTL method is effective in two classification tasks, confirming that knowledge transfer provides a value-added service for upper-layer mobile applications in different domains.

A Study on Safety Management Improvement System Using Similarity Inspection Technique (유사도검사 기법을 이용한 안전관리 개선시스템 연구)

  • Park, Koo-Rack
    • Journal of the Korea Convergence Society, v.9 no.4, pp.23-29, 2018
  • To reduce the accident rate caused by delayed corrective action, which is common on construction sites, and to shorten the time from detection to correction compared with the existing system, this study uses a similarity check to inform inspectors of problems in real time and models the system so that corrective action can be taken immediately on site, yielding a system that can respond actively to safety accidents. The results show an opening effect of more than 90% and a safety-accident reduction rate of more than 60%. Building on this system, I will continue to study a more effective system combining voice recognition and deep learning.

The Conference Management System Architecture for Ontological Knowledge (지식의 온톨로지화를 위한 관리 시스템 아키텍처)

  • Hong, Hyun-Woo; Koh, Gwang-san; Kim, Chang-Soo; Jeong, Jae-Gil; Jung, Hoe-kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, v.9 no.2, pp.1115-1118, 2005
  • With the development of Internet technology, on-line conference systems have appeared, and they are now being extended with pattern recognition and voice recognition. Compared with off-line conferences, on-line conferences have the advantage of being free from distance limitations, but they also have unavoidable weak points: as in off-line conferences, the structure and consistency of the content are weak while the conference is in progress, so participants cannot grasp the flow of the conference. Therefore, in this paper we introduce the ontology concept and design a new architecture that uses ontology-mining techniques to make the conference content and conference knowledge ontological. To validate the new architecture, we design and implement a new knowledge-based conference management system.


Improved Transformer Model for Multimodal Fashion Recommendation Conversation System (멀티모달 패션 추천 대화 시스템을 위한 개선된 트랜스포머 모델)

  • Park, Yeong Joon; Jo, Byeong Cheol; Lee, Kyoung Uk; Kim, Kyung Sun
    • The Journal of the Korea Contents Association, v.22 no.1, pp.138-147, 2022
  • Recently, chatbots have been applied in various fields with good results, and many attempts to use chatbots in shopping-mall product recommendation services are being made on e-commerce platforms. In this paper, for a conversation system that recommends the fashion a user wants based on the dialogue between the user and the system together with fashion image information, we propose an improved multimodal transformer model, building on the transformer architecture that currently performs well in AI fields such as natural language processing, voice recognition, and image recognition; it improves recommendation accuracy by using dialogue (text) and fashion (image) information together in data preprocessing and data representation. We also propose a method to improve accuracy through data improvement based on data analysis. The proposed system achieves a recommendation accuracy of 0.6563 WKT (weighted Kendall's tau), a gain of 0.3191 WKT over the existing system's 0.3372 WKT.
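WKT is a weighted variant of Kendall's tau, a rank-correlation metric for comparing a recommended ordering against the ground-truth ordering. The unweighted core of the metric can be sketched as follows (the specific weighting scheme used in the paper is not reproduced here):

```python
def kendall_tau(rank_a, rank_b):
    """Plain Kendall's tau between two rankings of the same items:
    (concordant pairs - discordant pairs) / total pairs, in [-1, 1]."""
    n = len(rank_a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            sign = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
            if sign > 0:
                concordant += 1
            elif sign < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

The weighted variant additionally down-weights disagreements that occur lower in the ranking, so errors among the top recommendations cost more.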

Improved Motion-Recognizing Remote Controller for Realistic Contents (실감형 컨텐츠를 위한 향상된 동작 인식 리모트 컨트롤러)

  • Park, Gun-Hyuk; Kim, Sang-Ki; Yim, Sung-Hoon; Han, Gab-Jong; Choi, Seung-Moon; Choi, Seung-Jin; Eoh, Hong-Jun; Cho, Sun-Young
    • Proceedings of the HCI Society of Korea Conference, 2009.02a, pp.396-401, 2009
  • This paper describes improvements made to the hardware and software of a remote controller for realistic contents. The controller provides vibrotactile feedback using both a voice-coil actuator and a vibration motor. A vision tracking system for the 3D position of the controller is optimized with respect to the marker size and the camera parameters. We also present improvements in motion recognition achieved by effective motion segmentation and by fusing vision and acceleration data. We apply the developed controller to realistic contents and validate its usability.


Classification of Diphthongs using Acoustic Phonetic Parameters (음향음성학 파라메터를 이용한 이중모음의 분류)

  • Lee, Suk-Myung; Choi, Jeung-Yoon
    • The Journal of the Acoustical Society of Korea, v.32 no.2, pp.167-173, 2013
  • This work examines classification of diphthongs, as part of a distinctive feature-based speech recognition system. Acoustic measurements related to the vocal tract and the voice source are examined, and analysis of variance (ANOVA) results show that vowel duration, energy trajectory, and formant variation are significant. A balanced error rate of 17.8% is obtained for 2-way diphthong classification on the TIMIT database, and error rates of 32.9%, 29.9%, and 20.2% are obtained for /aw/, /ay/, and /oy/, for 4-way classification, respectively. Adding the acoustic features to widely used Mel-frequency cepstral coefficients also improves classification.
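The ANOVA testing mentioned above rests on the one-way F statistic, the ratio of between-class to within-class variance for a candidate feature such as vowel duration. A minimal sketch (the duration values below are illustrative, not from TIMIT):

```python
def f_statistic(groups):
    """One-way ANOVA F statistic over a list of per-class sample lists:
    (between-group mean square) / (within-group mean square)."""
    k = len(groups)                                 # number of classes
    n = sum(len(g) for g in groups)                 # total samples
    grand = sum(sum(g) for g in groups) / n         # grand mean
    means = [sum(g) / len(g) for g in groups]       # per-class means
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (compared against the F distribution's critical value) indicates the feature separates the diphthong classes, which is how a measurement like vowel duration is judged significant.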