• Title/Summary/Keyword: Visual


A Study on the Realization of Virtual Simulation Face Based on Artificial Intelligence

  • Zheng-Dong Hou;Ki-Hong Kim;Gao-He Zhang;Peng-Hui Li
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.2
    • /
    • pp.152-158
    • /
    • 2023
  • In recent years, as computer-generated imagery has been applied to more industries, realistic facial animation has become an important research topic. The current solution is to create realistically rendered 3D characters, but characters created by traditional methods always differ from the actual person and require high costs in staff and time. Deepfake technology can achieve realistic faces and replicate facial animation: once the AI model is trained, the facial details and animation are generated automatically by the computer, and the model can be reused, reducing the human and time costs of realistic face animation. In addition, this study summarizes how human face information is captured, proposes a new workflow for video-to-image conversion, and demonstrates through no-reference image quality assessment that the new scheme obtains higher-quality images and better face-swap results.
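The abstract above evaluates frames with no-reference image quality assessment (NR-IQA), i.e., scoring an image without a pristine reference. As a minimal illustrative sketch (not the paper's actual metric), the variance-of-Laplacian sharpness score is a classic no-reference cue: blurrier frames yield lower Laplacian variance.

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Toy no-reference quality cue: variance of the discrete Laplacian.
    Higher values indicate sharper (less blurred) grayscale images."""
    # 4-neighbour Laplacian computed with shifted array views
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# A high-contrast checkerboard scores far above a flat grey image.
sharp = (np.indices((64, 64)).sum(axis=0) % 2) * 255.0
flat = np.full((64, 64), 128.0)
```

Real NR-IQA metrics used in such evaluations (e.g., BRISQUE or NIQE) model natural-scene statistics rather than raw sharpness, but the interface is the same: one image in, one quality score out.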

Analyzing Construction Workers' Recognition of Hazards by Estimating Visual Focus of Attention

  • Fang, Yihai;Cho, Yong K.
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.248-251
    • /
    • 2015
  • High injury and fatality rates remain a serious problem in the construction industry. Many construction injuries and fatalities could be prevented if workers recognized potential hazards and took action in time. Many efforts have been devoted to improving workers' hazard-recognition ability through various safety training and education methods, but a reliable approach for evaluating this ability is missing. Previous studies in the field of human behavior and psychology indicate that the visual focus of attention (VFOA) is a good indicator of a worker's actual focus. In this direction, this study introduces an automated approach for estimating the VFOA of equipment operators using a head-orientation-based VFOA estimation method. The proposed method is validated in a virtual reality scenario using an immersive head-mounted display. Results show that the proposed method can effectively estimate the VFOA of test subjects in different test scenarios. The findings broaden the knowledge of detecting the visual focus and distraction of construction workers, and point toward future work on improving workers' hazard-recognition ability.
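Head-orientation-based VFOA estimation, as described above, amounts to casting a gaze ray from the head pose and checking which scene object falls inside an attention cone. The sketch below illustrates that geometry with hypothetical positions and a 30° half-angle cone; the object names, coordinates, and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def vfoa_target(head_pos, yaw_deg, pitch_deg, objects, cone_half_angle=30.0):
    """Return the object closest to the head's forward ray, or None if
    nothing lies within the attention cone (hypothetical geometry)."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    # Head forward vector: +x is straight ahead at yaw=0, +z is up
    gaze = np.array([np.cos(pitch) * np.cos(yaw),
                     np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch)])
    best, best_angle = None, np.radians(cone_half_angle)
    for name, pos in objects.items():
        d = np.asarray(pos, float) - np.asarray(head_pos, float)
        cos_a = np.clip(d @ gaze / np.linalg.norm(d), -1.0, 1.0)
        angle = np.arccos(cos_a)
        if angle < best_angle:          # keep the object nearest the ray
            best, best_angle = name, angle
    return best

# Operator at eye height 1.7 m, glancing slightly right and down:
hazards = {"excavator": (5.0, 0.0, 0.0), "trench": (0.0, 5.0, 0.0)}
target = vfoa_target((0, 0, 1.7), yaw_deg=5.0, pitch_deg=-10.0,
                     objects=hazards)
```

In the study's setting the head pose would come from a tracked head-mounted display rather than hard-coded angles, but the cone test is the same.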


A Study on Unreal Engine Lumen Lighting System for Visual Storytelling in Games

  • Chenghao Wang;Jeanhun Chung
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.16 no.2
    • /
    • pp.75-80
    • /
    • 2024
  • This research on the visual narrative impact of Unreal Engine's Lumen lighting system in games examines how Lumen's lighting technology plays a crucial role in game design and gameplay experience, thereby enhancing the visual storytelling of games. Lumen, Unreal Engine's dynamic global illumination solution, calculates lighting and shadows in real time during gameplay, creating a more realistic and immersive environment. The analysis indicates that Lumen not only provides visually realistic and dynamic lighting effects but also significantly enriches the expressiveness and immersion of the game narrative through its changes in light and shadow.
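For context on how Lumen is switched on in practice, Unreal Engine 5 projects typically select it through renderer settings in `DefaultEngine.ini` (or the Project Settings UI). The fragment below is a hedged sketch of commonly used settings, not a configuration taken from the paper:

```ini
; DefaultEngine.ini — enable Lumen dynamic global illumination (UE5)
[/Script/Engine.RendererSettings]
r.DynamicGlobalIlluminationMethod=1   ; 1 = Lumen GI
r.ReflectionMethod=1                  ; 1 = Lumen reflections
r.GenerateMeshDistanceFields=True     ; required by Lumen software ray tracing
```

Because Lumen recomputes indirect lighting every frame, light sources can be moved or animated at runtime for narrative effect, which is the capability the study's analysis focuses on.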

Design of HCI System of Museum Guide Robot Based on Visual Communication Skill

  • Qingqing Liang
    • Journal of Information Processing Systems
    • /
    • v.20 no.3
    • /
    • pp.328-336
    • /
    • 2024
  • Visual communication is widely used and enhanced in modern society, where there is an increasing demand for spirituality. Museum robots are one of many service robots that can replace humans to provide services such as display, interpretation, and dialogue. To improve museum guide robots, this paper proposes a human-robot interaction system based on visual communication skills. The system is based on a deep neural network structure and draws on theoretical analysis in computer vision to introduce a Tiny+CBAM network structure in the gesture recognition component, combining basic gestures and gesture states to design and evaluate gesture actions. The test results indicated that the improved Tiny+CBAM network structure could raise the mean average precision value by 13.56% while losing less than 3 frames per second during static basic gesture recognition. In dynamic gesture tests, the system was over 95% accurate for all items except double click, and 100% accurate for the action displayed on the current page.
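The 13.56% improvement above is reported in mean average precision (mAP), the standard detection metric: for each class, precision is sampled at every correctly retrieved positive in score-ranked order and averaged, and mAP is the mean over classes (here, gesture classes). A minimal rank-based AP sketch, with made-up scores for illustration:

```python
def average_precision(scores, labels):
    """Rank-based AP for one class: mean of the precision values taken
    at each positive item when results are sorted by descending score."""
    ranked = [label for _, label in sorted(zip(scores, labels), reverse=True)]
    hits, precisions = 0, []
    for rank, label in enumerate(ranked, start=1):
        if label:                       # true positive at this rank
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(len(precisions), 1)

# Positives ranked 1st and 3rd -> precisions 1/1 and 2/3 -> AP = 5/6
ap = average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 0])
```

mAP would then be the mean of one such AP per gesture class; object-detection variants additionally require an IoU threshold for counting a detection as a true positive.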