• Title/Summary/Keyword: Camera Action

Search Results: 120

Study on Influencing Factors of Camera Balance in MOBA Games - Focused on (MOBA 게임 카메라 밸런스 개선을 위한 영향요소 분석 - 중심으로)

  • LI, JING;Cho, Dong-Min
    • Journal of Korea Multimedia Society / v.23 no.12 / pp.1565-1575 / 2020
  • This study examines the game balance of the MOBA genre, which was selected as a model event for the Asian Games. A bird's-eye view was used for a more efficient representation of the 3D modeling, and on that basis a statistical analysis was conducted to derive game camera settings and camera balance appropriate to the competitive structure of MOBA games. A review of the game camera settings reveals that an angle of 64° to 70° best minimizes the difference in vision between the two opposing teams. A one-way ANOVA showed that user ranking level and the SVB value are closely related; therefore, user ranking level must be included as a factor in the regression equation built on the SVB value. Based on the optimized camera focus analysis, the camera setting methods were classified into three types: for mainly action games, the recommended camera angle is 64°~66° and the recommended camera focus is 11.2 mm~19.3 mm; for mixed action and strategy games, 66°~68° and 19.3 mm~27.3 mm; and for mainly strategy games, 68°~70° and 27.3 mm~35.3 mm.
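
The three recommended ranges reduce to a simple lookup table. The sketch below encodes them in Python; the dictionary keys and function name are illustrative choices, not terminology from the paper.

```python
# Hypothetical encoding of the camera-setting ranges reported in the
# abstract; names and structure are illustrative, not from the paper.
RECOMMENDED_SETTINGS = {
    "action":          {"angle_deg": (64, 66), "focus_mm": (11.2, 19.3)},
    "action_strategy": {"angle_deg": (66, 68), "focus_mm": (19.3, 27.3)},
    "strategy":        {"angle_deg": (68, 70), "focus_mm": (27.3, 35.3)},
}

def recommend(game_type: str) -> dict:
    """Return the recommended camera angle/focus ranges for a MOBA sub-genre."""
    try:
        return RECOMMENDED_SETTINGS[game_type]
    except KeyError:
        raise ValueError(f"unknown game type: {game_type!r}")

print(recommend("action_strategy"))
# {'angle_deg': (66, 68), 'focus_mm': (19.3, 27.3)}
```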

Image Based Human Action Recognition System to Support the Blind (시각장애인 보조를 위한 영상기반 휴먼 행동 인식 시스템)

  • Ko, ByoungChul;Hwang, Mincheol;Nam, Jae-Yeal
    • Journal of KIISE / v.42 no.1 / pp.138-143 / 2015
  • In this paper we develop a novel human action recognition system, based on communication between an ear-mounted Bluetooth camera and an action recognition server, to aid scene recognition for the blind. First, when a blind user captures an image of a specific location with the ear-mounted camera, the image is transmitted to the recognition server by a smartphone synchronized with the camera. The recognition server sequentially performs human detection, object detection, and action recognition by analyzing human poses. The recognized action information is then sent back to the smartphone, and the user hears it through text-to-speech (TTS). Experiments with the proposed system showed 60.7% action recognition accuracy on test data captured in indoor and outdoor environments.
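
The described pipeline (capture, upload, recognize, speak) can be outlined client-side in a few lines. This is a schematic sketch only: the server URL, endpoint, and JSON response field are assumptions, not the paper's actual protocol.

```python
# Schematic client-side flow for the capture -> recognize -> TTS loop.
import requests   # HTTP upload to the recognition server
import pyttsx3    # offline text-to-speech

SERVER_URL = "http://recognition-server.example/recognize"  # hypothetical

def describe_scene(image_path: str) -> None:
    # 1. Send the captured image to the recognition server.
    with open(image_path, "rb") as f:
        resp = requests.post(SERVER_URL, files={"image": f}, timeout=10)
    resp.raise_for_status()
    # 2. The server runs human detection, object detection, and pose-based
    #    action recognition, then returns a text label (assumed format).
    action = resp.json()["action"]   # e.g. "a person is drinking water"
    # 3. Speak the result back to the user.
    engine = pyttsx3.init()
    engine.say(action)
    engine.runAndWait()
```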

Optical Vehicle to Vehicle Communications for Autonomous Mirrorless Cars

  • Jin, Sung Yooun;Choi, Dongnyeok;Kim, Byung Wook
    • Journal of Multimedia Information System / v.5 no.2 / pp.105-110 / 2018
  • Autonomous cars require the integration of multiple communication systems for driving safety. Many carmakers have unveiled mirrorless concept cars that replace rear-view and side-view mirrors with camera monitoring systems, which eliminate blind spots and reduce risk. This paper presents optical vehicle-to-vehicle (V2V) communications for autonomous mirrorless cars. Flicker-free light-emitting diode (LED) light sources, providing illumination and data transmission simultaneously, serve as transmitters, and a high-speed camera serves as the receiver in the optical camera communication (OCC) link. The vehicle behind transmits both future action data and vehicle type data using a headlamp or daytime running light, and the vehicle in front receives the OCC data with the camera that replaces its side mirrors, so as to prevent accidents while driving. Experimental results showed that action and vehicle type information was successfully sent by the LED light sources to the front vehicle's camera via the OCC link, demonstrating that OCC-based V2V communication can be a viable way to improve driving safety for mirrorless cars.
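
As a rough illustration of how a short V2V message could be keyed onto an LED for a camera receiver, the sketch below uses Manchester coding, one common way to hold the duty cycle at 50% and keep the light flicker-free; the paper's actual modulation scheme and bit fields may differ, and the field values here are invented.

```python
# Illustrative on-off keying of a V2V message for an LED light source.
def manchester_encode(bits: str) -> list[int]:
    """Map each bit to a pair of LED states: 1 -> (1, 0), 0 -> (0, 1)."""
    states = []
    for b in bits:
        states.extend((1, 0) if b == "1" else (0, 1))
    return states

# Hypothetical 4-bit fields for future action and vehicle type.
ACTIONS = {"keep_lane": "0001", "turn_left": "0010", "brake": "0100"}
VEHICLE_TYPES = {"sedan": "0001", "truck": "0010"}

frame_states = manchester_encode(ACTIONS["brake"] + VEHICLE_TYPES["truck"])
print(frame_states)  # per-frame LED on/off states sampled by the camera
```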

A Study on the Production Efficiency of Movie Filming Environment Using 360° VR (360VR을 활용한 영화촬영 환경을 위한 제작 효율성 연구)

  • Lee, Young-suk;Kim, Jungwhan
    • Journal of Korea Multimedia Society / v.19 no.12 / pp.2036-2043 / 2016
  • 360° virtual reality (VR) live-action movies are filmed by attaching multiple cameras to a rig and shooting omnidirectionally. For a live-action film that requires a variety of scenes in particular, the director of photography and staff usually have to operate the rigged cameras directly around the scene and edit the footage in post-production, so the entire process incurs considerable time and cost. However, high-quality omnidirectional images could be acquired with fewer staff if the camera rig could be controlled remotely, allowing more flexible camera movement. Thus, this study proposes a 360° VR filming system with a remote-controlled camera rig. With this system, filmmakers will be able to produce movies that provide greater immersion.
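
The paper does not specify a control protocol, but the idea of a remotely driven rig can be sketched as follows; the command names, transport, and addresses are all hypothetical.

```python
# Hypothetical remote controller for a 360° camera rig.
import json
import socket

class RigController:
    def __init__(self, host: str, port: int = 9000):
        # Persistent TCP connection to the rig's on-board controller.
        self.sock = socket.create_connection((host, port), timeout=5)

    def send(self, command: str, **params) -> None:
        msg = json.dumps({"cmd": command, **params}).encode() + b"\n"
        self.sock.sendall(msg)

# Move the rig along a track and start all cameras in sync.
rig = RigController("192.168.0.42")           # rig address is an assumption
rig.send("move", axis="dolly", speed_mps=0.2)
rig.send("record", action="start")
```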

Developing a first-person horror game using Unreal Engine and an action camera perspective (언리얼엔진과 액션 카메라 시점을 활용한 1인칭 공포 게임 개발)

  • Nam-Young Kim;Young-Min Joo;Won-Whoi Huh
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.75-81 / 2024
  • This paper focuses on developing a first-person 3D game that delivers extreme fear to players through realistic camera direction exploiting the characteristics of action cameras. As a new camera production technique, we introduce wide-angle-lens perspective distortion and camera shake during movement to provide greater immersion than existing games. The game's theme is a horror room escape. The player starts with a firearm; to offset the concern that firearms make the game too easy, the player is forced to ration firearm use through burdens such as chasing monsters and a limited number of magazines. The significance of this paper is the development of a new type of 3D game that maximizes the fear effect on players through realistic direction.
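
The two camera techniques named above can be illustrated outside the engine. The sketch below is plain Python rather than Unreal Engine C++/Blueprints, and the FOV value and shake parameters are assumptions, not the game's actual settings.

```python
# Conceptual action-camera parameters: a wide FOV for perspective
# distortion and a movement-scaled positional shake.
import math

ACTION_CAM_FOV_DEG = 110.0  # wide-angle FOV; the exact value is assumed

def shake_offset(t: float, speed: float, amplitude: float = 0.03):
    """Small pseudo-random camera offset, stronger at higher move speed."""
    strength = amplitude * min(speed / 6.0, 1.0)
    dx = strength * math.sin(t * 23.0) * math.sin(t * 7.3)
    dy = strength * math.sin(t * 31.0 + 1.7)
    return dx, dy

# Sampled once per frame with the elapsed time and player speed.
print(shake_offset(t=1.25, speed=4.0))
```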

Real-Time Cattle Action Recognition for Estrus Detection

  • Heo, Eui-Ju;Ahn, Sung-Jin;Choi, Kang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.2148-2161 / 2019
  • In this paper, we present a real-time cattle action recognition algorithm that detects the estrus phase of cattle from a live video stream. To classify cattle movement, and specifically to detect the mounting action, the most observable sign of the estrus phase, a simple yet effective feature description exploiting motion history images (MHI) is designed. By learning the proposed features within a support vector machine framework, various representative cattle actions, such as mounting, walking, tail wagging, and foot stamping, can be recognized robustly in complex scenes. Thanks to the low complexity of the proposed action recognition algorithm, multiple cattle in three enclosures can be monitored simultaneously using a single fisheye camera. Through extensive experiments with real video streams, we confirmed that the proposed algorithm outperforms a conventional human action recognition algorithm by 18% in recognition accuracy, even with a much lower-dimensional feature description.
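
A motion history image can be computed with a few lines of OpenCV and NumPy. The sketch below shows only the basic MHI update; the paper's feature description built on top of it is not reproduced, and the input file name is a placeholder.

```python
# Minimal motion history image (MHI) update loop.
import cv2
import numpy as np

TAU = 30      # frames of motion history to retain
THRESH = 32   # frame-difference threshold

cap = cv2.VideoCapture("barn_camera.mp4")   # hypothetical input stream
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
mhi = np.zeros(prev.shape, dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev) > THRESH
    prev = gray
    # New motion is stamped at the maximum value; old motion decays by 1.
    mhi = np.where(motion, TAU, np.maximum(mhi - 1, 0)).astype(np.float32)
    # mhi / TAU can then be pooled into a feature vector and classified
    # with an SVM (mounting / walking / tail wagging / foot stamping).
```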

Improved DT Algorithm Based Human Action Features Detection

  • Hu, Zeyuan;Lee, Suk-Hwan;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.21 no.4 / pp.478-484 / 2018
  • The choice of motion features directly influences the result of a human action recognition method. A single feature is often affected in different ways by factors such as the appearance of the human body, the environment, and the video camera, which restricts recognition accuracy. Considering both the representation and the recognition of human actions, and weighing the advantages and disadvantages of different features, the Dense Trajectories (DT) algorithm is a classic feature extraction algorithm in the field of action recognition, but it has some defects in its use of optical flow images. In this paper, we use the improved Dense Trajectories (iDT) algorithm to optimize and extract optical flow features from human motion, combine them with a support vector machine to recognize human actions, and use images from the KTH database for training and testing.
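
As a simplified stand-in for the flow-feature stage, the sketch below computes a per-clip histogram of dense optical-flow orientations and hands it to an SVM; iDT's camera-motion compensation and HOG/HOF/MBH trajectory descriptors are omitted.

```python
# Dense optical-flow orientation histogram as a simple clip feature.
import cv2
import numpy as np
from sklearn.svm import SVC

def flow_histogram(prev_gray, gray, bins: int = 8) -> np.ndarray:
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Magnitude-weighted histogram of flow directions.
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

# With one histogram per clip and KTH labels (walking, jogging, ...):
# clf = SVC(kernel="rbf").fit(features, labels)
```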

A Survey of Human Action Recognition Approaches that use an RGB-D Sensor

  • Farooq, Adnan;Won, Chee Sun
    • IEIE Transactions on Smart Processing and Computing / v.4 no.4 / pp.281-290 / 2015
  • Human action recognition from video has remained a challenging problem in computer vision and pattern recognition. The development of low-cost RGB-depth (RGB-D) cameras opens new opportunities to solve it. In this paper, we present a comprehensive review of recent approaches to human action recognition based on depth maps, skeleton joints, and other hybrid approaches. In particular, we focus on the advantages and limitations of the existing approaches and on future directions.

Human Action Recognition Using Deep Data: A Fine-Grained Study

  • Rao, D. Surendra;Potturu, Sudharsana Rao;Bhagyaraju, V
    • International Journal of Computer Science & Network Security / v.22 no.6 / pp.97-108 / 2022
  • Video-assisted human action recognition [1] is one of the most active fields in computer vision research. Since the depth data [2] obtained by Kinect cameras offers more benefits than traditional RGB data, research on human action recognition has increased since the Kinect camera's introduction. In this article, we conduct a systematic study of strategies for recognizing human activity based on depth data. All methods are grouped into depth-map approaches and skeleton approaches, and a comparison with some of the more traditional strategies is also covered. We then examine the specifics of different depth action databases, draw a clear distinction between them, and discuss the advantages and disadvantages of depth-based and skeleton-based techniques.

Spatio-temporal Semantic Features for Human Action Recognition

  • Liu, Jia;Wang, Xiaonian;Li, Tianyu;Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.10 / pp.2632-2649 / 2012
  • Most approaches to human action recognition are limited because they use simple action datasets recorded under controlled environments or focus on excessively localized features without sufficiently exploring spatio-temporal information. This paper proposes a framework for recognizing realistic human actions. Specifically, a new action representation is introduced based on computing a rich set of descriptors from keypoint trajectories. To obtain efficient and compact representations of actions, we develop a feature fusion method that combines spatio-temporal local motion descriptors according to the movement of the camera, which is detected from the distribution of spatio-temporal interest points in the clips. A new topic model called the Markov Semantic Model is proposed for semantic feature selection; it relies on the different kinds of dependencies between words produced by "syntactic" and "semantic" constraints, and informative features are selected collaboratively based on the dependencies produced by short-range and long-range constraints. Building on nonlinear SVMs, we validate the proposed hierarchical framework on several realistic action datasets.
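
The keypoint-trajectory stage described above can be approximated with KLT tracking, as in the sketch below; the descriptor computation, camera-motion-aware fusion, and Markov Semantic Model from the paper are not reproduced, and the input file name is a placeholder.

```python
# Keypoint trajectories via Shi-Tomasi corners + pyramidal Lucas-Kanade.
import cv2

cap = cv2.VideoCapture("action_clip.avi")   # hypothetical input clip
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)
trajectories = [[p.ravel()] for p in pts]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
    for traj, p, tracked in zip(trajectories, nxt, status.ravel()):
        if tracked:
            traj.append(p.ravel())   # extend this keypoint's trajectory
    prev, pts = gray, nxt
# Each trajectory is a sequence of (x, y) points from which the motion
# descriptors described above could be computed.
```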