• Title/Summary/Keyword: voice command


Design of Real-time Disaster Safety Management Solution in a Smart Environment (스마트 환경에서의 실시간 재난 안전 관리 솔루션 설계)

• Seo, Ssang-Hee; Kim, Bong-Hyun
    • Journal of Digital Convergence / v.18 no.7 / pp.31-36 / 2020
• In recent years, the variety of disasters and accidents that accompany large-scale damage has been increasing. Disasters are events marked by uncertainty, and they directly affect people's lives, safety, and property. It is therefore necessary to establish and operate safety management systems covering prevention, response, and recovery for various disasters. In this paper, a real-time disaster safety management solution for a smart environment was designed to respond systematically to disaster accidents. To this end, 1:1 or 1:N situation propagation is performed to the situation room, related organizations, and experts through smart devices, allowing the solution to respond quickly and appropriately through multi-party information sharing and communication. In other words, we designed a solution that applies functions such as real-time multi-party HD video transmission, mobile report management, voice/text situation propagation, location information sharing, recording and history management, and security.
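
The abstract describes the propagation mechanism only at a high level. As a purely illustrative aid, the following Python sketch shows one way the 1:N situation-propagation idea could be modeled; all names, fields, and recipients are hypothetical assumptions, not details from the paper.

```python
# Hypothetical sketch of 1:N situation propagation: one field report is
# fanned out to the situation room, related organizations, and experts,
# with a delivery record kept for history management. All names and
# fields are illustrative assumptions, not the paper's design.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SituationReport:
    reporter: str
    message: str                   # voice/text situation description
    location: tuple[float, float]  # shared GPS coordinates (lat, lon)
    created_at: datetime = field(default_factory=datetime.now)

def propagate(report: SituationReport, recipients: list[str]) -> list[str]:
    """Fan one report out to N recipients and return delivery records."""
    history = []
    for recipient in recipients:
        # A real system would push this over a secure channel; here we
        # only build the history entry that would be recorded.
        history.append(f"[{report.created_at:%H:%M:%S}] -> {recipient}: "
                       f"{report.message} @ {report.location}")
    return history

for entry in propagate(
        SituationReport("field officer", "Building fire, 3rd floor",
                        (37.5665, 126.9780)),
        ["situation room", "fire department", "structural expert"]):
    print(entry)
```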

Comparative Study on the Educational Use of Home Robots for Children

• Han, Jeong-Hye; Jo, Mi-Heon; Jones, Vicki; Jo, Jun-H.
    • Journal of Information Processing Systems / v.4 no.4 / pp.159-168 / 2008
• Human-Robot Interaction (HRI), which builds on the well-researched field of Human-Computer Interaction (HCI), has been under vigorous scrutiny since recent developments in robot technology. Robots may be more successful than traditional media in establishing common ground in project-based education or foreign language learning for children. Backed by its strong IT environment and advances in robot technology, Korea has developed the world's first available e-Learning home robot. This has demonstrated the potential for robots to serve as a new educational medium: robot-learning, referred to as 'r-Learning'. Robot technology is expected to become more interactive and user-friendly than computers, and robots can exhibit various forms of communication such as gestures, motions, and facial expressions. This study compared the effects of non-computer-based (NCB) media (a book with audiotape) and Web-Based Instruction (WBI) with the effects of Home Robot-Assisted Learning (HRL) for children. The robot gestured and spoke in English, and children could touch its monitor if it did not recognize their voice command. Compared to the other learning programs, HRL was superior in promoting and improving children's concentration, interest, and academic achievement. In addition, the children felt that the home robot was friendlier than the other types of instructional media. The HRL group had longer concentration spans than the other groups, and the p-value demonstrated a significant difference in concentration among the groups. In regard to interest in learning, the HRL group showed the highest level, followed by the NCB group and then the WBI group. Academic achievement was highest in the HRL group, followed by the WBI group and the NCB group, and a significant difference was likewise found in academic achievement among the groups. These results suggest that, for English as a foreign language, home robots are more effective than other types of instructional media (such as books with audiotape and WBI) with respect to children's learning concentration, learning interest, and academic achievement.
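
The abstract reports significant between-group p-values without naming the test. As a purely illustrative sketch, the following shows how a one-way ANOVA across the three media groups (HRL, WBI, NCB) could be computed; the scores are fabricated placeholders, not the study's data, and the actual test used by the authors is not specified in the abstract.

```python
# Illustrative one-way ANOVA across three instructional-media groups.
# The scores below are placeholders, NOT data from the study.
from scipy import stats

hrl = [8.1, 7.9, 8.4, 8.0, 8.3]  # Home Robot-Assisted Learning (placeholder)
wbi = [7.0, 6.8, 7.3, 7.1, 6.9]  # Web-Based Instruction (placeholder)
ncb = [6.2, 6.5, 6.1, 6.4, 6.0]  # book with audiotape (placeholder)

f_stat, p_value = stats.f_oneway(hrl, wbi, ncb)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```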

Implementation of Speech Recognition and Flight Controller Based on Deep Learning for Control to Primary Control Surface of Aircraft

• Hur, Hwa-La; Kim, Tae-Sun; Park, Myeong-Chul
    • Journal of the Korea Society of Computer and Information / v.26 no.9 / pp.57-64 / 2021
• In this paper, we propose a device that can control the primary control surfaces of an aircraft by recognizing speech commands. The command set consists of 19 commands, and the learning model is trained on a total of 2,500 speech samples. The model is a CNN built with the Sequential API of TensorFlow Keras, and features are extracted from the training speech files using the MFCC algorithm. It consists of two convolution layers for feature recognition and two dense (fully connected) layers for classification. Accuracy on the validation dataset was 98.4%, and performance evaluation on the test dataset showed an accuracy of 97.6%. In addition, normal operation was confirmed by designing and implementing a Raspberry Pi-based control device. In the future, the system can serve as a virtual training environment in the fields of voice-controlled automatic flight and aviation maintenance.
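
To make the described pipeline concrete, here is a minimal sketch of MFCC feature extraction plus a Keras Sequential CNN with two convolution layers and two dense layers, matching the architecture outlined in the abstract. The feature dimensions, layer widths, sample rate, and file handling are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch: MFCC features -> small Keras Sequential CNN with two
# convolution layers (feature extraction) and two dense layers
# (classification). Sizes and sample rate are illustrative assumptions.
import numpy as np
import librosa
import tensorflow as tf

NUM_COMMANDS = 19            # 19 speech commands, per the abstract
N_MFCC, MAX_FRAMES = 40, 44  # assumed feature-map dimensions

def extract_mfcc(path: str) -> np.ndarray:
    """Load a speech file and return a fixed-size MFCC feature map."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=N_MFCC)
    # Pad or truncate the time axis to a fixed frame count; add a trailing
    # channel axis (mfcc[..., np.newaxis]) before feeding the CNN.
    mfcc = np.pad(mfcc, ((0, 0), (0, max(0, MAX_FRAMES - mfcc.shape[1]))))
    return mfcc[:, :MAX_FRAMES]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_MFCC, MAX_FRAMES, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # feature extraction
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),     # classification head
    tf.keras.layers.Dense(NUM_COMMANDS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```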

Real-Time Comprehensive Assistance for Visually Impaired Navigation

• Amal Al-Shahrani; Amjad Alghamdi; Areej Alqurashi; Raghad Alzahrani; Nuha Imam
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.1-10 / 2024
• Individuals with visual impairments face numerous challenges in their daily lives, and navigating streets and public spaces is particularly daunting. The inability to identify safe crossing locations and to assess the feasibility of crossing significantly restricts their mobility and independence. Globally, an estimated 285 million people suffer from visual impairment, with 39 million categorized as blind and 246 million as visually impaired, according to the World Health Organization. In Saudi Arabia alone, there are approximately 159,000 blind individuals, according to unofficial statistics. The profound impact of visual impairment on daily activities underscores the urgent need for solutions that improve mobility and enhance safety. This study addresses this pressing issue by leveraging computer vision and deep learning techniques to enhance object detection capabilities. Two models were trained: one focused on street-crossing obstacles, the other on object search. The first model was trained on a dataset of 5,283 images of road obstacles and traffic signals, annotated to create a labeled dataset, using both YOLOv8 and YOLOv5; YOLOv5 achieved a satisfactory accuracy of 84%. The second model was trained on the COCO dataset using YOLOv5, yielding an accuracy of 94%. By improving object detection capabilities through advanced technology, this research seeks to empower individuals with visual impairments, enhancing their mobility, independence, and overall quality of life.
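
For readers unfamiliar with the training workflow mentioned above, the following is a minimal sketch, assuming the ultralytics Python package, of fine-tuning a YOLO detector on a custom labeled dataset such as the road-obstacle set. The dataset YAML, image file, model size, and epoch count are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of fine-tuning a YOLO detector with the ultralytics
# package. Paths, model size, and epochs are illustrative assumptions.
from ultralytics import YOLO

# Start from pretrained weights and fine-tune on a custom dataset whose
# images and labels are described by a hypothetical YAML config.
model = YOLO("yolov8n.pt")
model.train(data="road_obstacles.yaml", epochs=50, imgsz=640)

# Run detection on a (hypothetical) street image and read the labels back,
# e.g. to drive audio feedback for a visually impaired user.
detections = model("street_scene.jpg")[0]
for box in detections.boxes:
    cls_name = detections.names[int(box.cls)]
    print(f"{cls_name}: confidence {float(box.conf):.2f}")
```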