• Title/Summary/Keyword: Camera Action

120 search results

Study on Influencing Factors of Camera Balance in MOBA Games - Focused on …

  • 이정;조동민
    • 한국멀티미디어학회논문지 / Vol. 23, No. 12 / pp. 1565-1575 / 2020
  • This study examines the game balance of the MOBA genre, which was selected as a demonstration event for the Asian Games. A bird's-eye view was adopted for a more efficient presentation of the 3D-modeled scene, and on that basis a statistical analysis was conducted to derive game camera settings and camera balance suited to the competitive structure of MOBA games. A review of the game camera settings reveals that angles from 64° to 70° minimize the difference in vision between the two opposing teams. A one-way ANOVA showed that user ranking level and SVB value are closely related; therefore, a regression equation built on the SVB value must include user ranking level as a factor. The optimized camera focus analysis classified the camera setting methods into three types: for action-oriented games, a camera angle of 64°~66° and a camera focus of 11.2 mm~19.3 mm are recommended; for mixed action-strategy games, 66°~68° and 19.3 mm~27.3 mm; and for strategy-oriented games, 68°~70° and 27.3 mm~35.3 mm.
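
The three recommended ranges above lend themselves to a simple lookup table. A minimal Python sketch, where the type keys and helper function are illustrative assumptions; only the numeric ranges come from the abstract:

```python
# Hypothetical lookup of the camera-setting ranges reported in the abstract.
# The keys and function name are illustrative; only the numbers are sourced.
RECOMMENDED_SETTINGS = {
    "action":          {"angle_deg": (64, 66), "focus_mm": (11.2, 19.3)},
    "action_strategy": {"angle_deg": (66, 68), "focus_mm": (19.3, 27.3)},
    "strategy":        {"angle_deg": (68, 70), "focus_mm": (27.3, 35.3)},
}

def recommend_camera(game_type: str) -> dict:
    """Return the recommended camera angle and focus range for a game type."""
    if game_type not in RECOMMENDED_SETTINGS:
        raise ValueError(f"unknown game type: {game_type!r}")
    return RECOMMENDED_SETTINGS[game_type]

print(recommend_camera("action_strategy"))
# -> {'angle_deg': (66, 68), 'focus_mm': (19.3, 27.3)}
```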

Image-Based Human Action Recognition System to Support the Blind

  • 고병철;황민철;남재열
    • 정보과학회 논문지 / Vol. 42, No. 1 / pp. 138-143 / 2015
  • In this paper, we propose a system that recognizes human actions through communication between an earring-type Bluetooth camera and an action recognition server, to assist visually impaired people with scene recognition. When a visually impaired user captures a scene at the desired position using the earring-type Bluetooth camera, the captured image is transmitted to the recognition server through a smartphone paired with the camera. The recognition server detects humans and objects using image analysis algorithms and recognizes the human action by analyzing the human pose. The recognized action information is then sent back to the smartphone, and the user hears the result via text-to-speech (TTS). The proposed system achieved a human action recognition rate of 60.7% on test data captured both indoors and outdoors.
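
The client side of this capture-send-speak pipeline is straightforward to sketch. A minimal Python illustration, assuming a hypothetical server URL and JSON response format (neither is specified by the paper):

```python
# A minimal client-side sketch of the capture -> server -> TTS loop described
# above. The server URL and response schema are hypothetical placeholders.
import requests
import pyttsx3  # offline text-to-speech; any TTS engine would do

SERVER_URL = "http://recognition-server.example/recognize"  # placeholder

def recognize_and_speak(image_path: str) -> None:
    # Send the captured image to the action recognition server.
    with open(image_path, "rb") as f:
        resp = requests.post(SERVER_URL, files={"image": f}, timeout=10)
    resp.raise_for_status()
    action = resp.json().get("action", "unknown action")  # assumed JSON schema

    # Speak the recognized action back to the user, mirroring the TTS step.
    engine = pyttsx3.init()
    engine.say(f"Detected action: {action}")
    engine.runAndWait()

recognize_and_speak("captured_scene.jpg")
```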

Optical Vehicle to Vehicle Communications for Autonomous Mirrorless Cars

  • Jin, Sung Yooun;Choi, Dongnyeok;Kim, Byung Wook
    • Journal of Multimedia Information System / Vol. 5, No. 2 / pp. 105-110 / 2018
  • Autonomous cars require the integration of multiple communication systems for driving safety. Many carmakers have unveiled mirrorless concept cars that replace the rear-view and side-view mirrors with camera monitoring systems, which eliminate blind spots and reduce risk. This paper presents optical vehicle-to-vehicle (V2V) communications for autonomous mirrorless cars. Flicker-free light-emitting diode (LED) light sources, which provide illumination and data transmission simultaneously, and a high-speed camera serve as the transmitters and the receiver of the optical camera communication (OCC) link, respectively. The vehicle behind transmits both future-action data and vehicle-type data using a headlamp or daytime running light, and the front vehicle receives the OCC data through the camera that replaces its side mirrors, helping to prevent accidents while driving. Experimental results showed that action and vehicle-type information was successfully delivered by the LED light sources to the front vehicle's camera over the OCC link, demonstrating that OCC-based V2V communication can be a viable solution for improving the driving safety of mirrorless cars.
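
At its core, a camera-sampled LED link like this is an on-off keying channel. A toy Python sketch of the idea, where the framing, threshold, and one-bit-per-frame timing are simplifying assumptions rather than the paper's actual modulation scheme:

```python
# A toy on-off keying (OOK) sketch of an LED-to-camera link: the transmitter
# maps a message to per-frame LED states, and the receiver thresholds the LED
# region brightness in each frame. Everything here is illustrative.
def encode_ook(message: bytes) -> list[int]:
    """Turn bytes into a flat list of 0/1 LED states, one per camera frame."""
    bits = []
    for byte in message:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits

def decode_ook(led_brightness: list[float], threshold: float = 0.5) -> bytes:
    """Threshold per-frame brightness samples back into bytes."""
    bits = [1 if b > threshold else 0 for b in led_brightness]
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

states = encode_ook(b"LEFT")                   # e.g., a future-action message
samples = [0.9 if s else 0.1 for s in states]  # idealized camera readings
assert decode_ook(samples) == b"LEFT"
```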

A Study on the Production Efficiency of a Movie Filming Environment Using 360° VR

  • 이영숙;김정환
    • 한국멀티미디어학회논문지 / Vol. 19, No. 12 / pp. 2036-2043 / 2016
  • 360° virtual reality (VR) live-action movies are filmed by attaching multiple cameras to a rig to capture images omnidirectionally. In particular, for a live-action film that requires a variety of scenes, the director of photography and staff usually have to operate the rigged cameras directly around the set and edit the footage in post-production, so the entire process can incur much time and high cost. However, high-quality omnidirectional images could be acquired with fewer staff if the camera rig could be controlled remotely, allowing more flexible camera work. This study therefore proposes a 360° VR filming system with a remotely controlled camera rig, with which producers can create movies that provide greater immersion.
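
The paper does not describe its control interface; purely as an illustration of what remote rig control might look like, a tiny Python sketch with an invented host, port, and command set:

```python
# Purely illustrative: a tiny command sender for a remotely controlled camera
# rig, as the proposed system implies. The transport, host, and command set
# are invented; the paper does not specify its control protocol.
import json
import socket

RIG_ADDR = ("rig.local", 9000)  # placeholder host/port

def send_rig_command(command: str, **params) -> None:
    """Send one JSON command (e.g., pan/tilt/record) to the rig controller."""
    msg = json.dumps({"cmd": command, **params}).encode()
    with socket.create_connection(RIG_ADDR, timeout=2) as sock:
        sock.sendall(msg + b"\n")

send_rig_command("pan", degrees=15)   # nudge the rig without touching it
send_rig_command("record", on=True)   # start omnidirectional capture
```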

Developing a First-Person Horror Game Using Unreal Engine and an Action Camera Perspective

  • 김남영;주영민;허원회
    • 한국인터넷방송통신학회논문지 / Vol. 24, No. 1 / pp. 75-81 / 2024
  • This paper focuses on developing a first-person 3D game that delivers extreme fear to the player through realistic camera direction exploiting the characteristics of an action camera. As new camera techniques, it introduces perspective distortion through a wide-angle lens and camera shake while moving, aiming to provide higher immersion than existing games. The game's theme is a horror room escape, and the player starts with a firearm. To address the concern that the firearm would make the game too easy, pressures such as pursuit by monsters and a dwindling magazine force the player to ration the firearm's use. The significance of this paper lies in developing a new style of 3D game that maximizes the horror effect through realistic camera direction.
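
The paper does not publish its implementation; as an engine-agnostic illustration of movement-driven camera shake, a minimal Python sketch of a speed-scaled head-bob offset (all amplitudes and frequencies are invented):

```python
# A minimal, engine-agnostic sketch of movement-driven camera shake (head bob).
# Parameters are invented for illustration; an engine such as Unreal would feed
# `speed` and `t` from its movement component and game clock.
import math

def head_bob_offset(t: float, speed: float,
                    freq_hz: float = 2.0,
                    vert_amp: float = 0.05,
                    horiz_amp: float = 0.025) -> tuple[float, float]:
    """Return (horizontal, vertical) camera offsets in meters at time t.

    The bob scales with movement speed, so the camera is steady when idle
    and shakes more while running, as the abstract describes.
    """
    phase = 2.0 * math.pi * freq_hz * t
    scale = min(speed / 5.0, 1.0)  # normalize against an assumed run speed
    horizontal = horiz_amp * scale * math.sin(phase)
    vertical = vert_amp * scale * abs(math.sin(phase))  # double-step bounce
    return horizontal, vertical

print(head_bob_offset(t=0.25, speed=4.0))
```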

Real-Time Cattle Action Recognition for Estrus Detection

  • Heo, Eui-Ju;Ahn, Sung-Jin;Choi, Kang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 4 / pp. 2148-2161 / 2019
  • In this paper, we present a real-time cattle action recognition algorithm that detects the estrus phase of cattle from a live video stream. To classify cattle movement, and specifically to detect the mounting action, the most observable sign of the estrus phase, a simple yet effective feature description exploiting motion history images (MHI) is designed. By learning the proposed features within a support vector machine framework, representative cattle actions such as mounting, walking, tail wagging, and foot stamping can be recognized robustly in complex scenes. Thanks to the low complexity of the proposed algorithm, multiple cattle in three enclosures can be monitored simultaneously using a single fisheye camera. Through extensive experiments with real video streams, we confirmed that the proposed algorithm outperforms a conventional human action recognition algorithm by 18% in recognition accuracy, even with a much lower-dimensional feature description.
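
A motion history image is simple to compute directly. A minimal Python sketch of the MHI update rule the feature builds on, where the threshold, duration, and video path are illustrative assumptions and the paper's actual descriptor and SVM training are not reproduced:

```python
# A minimal sketch of a motion history image (MHI): moving pixels are stamped
# with the current time, and stale pixels decay to zero after MHI_DURATION.
import cv2
import numpy as np

MHI_DURATION = 1.0   # seconds a motion pixel stays "hot" (assumed)
DIFF_THRESHOLD = 32  # frame-difference threshold (assumed)

def update_mhi(mhi: np.ndarray, prev_gray: np.ndarray,
               gray: np.ndarray, timestamp: float) -> np.ndarray:
    """Update the MHI in place from one pair of consecutive frames."""
    motion_mask = cv2.absdiff(gray, prev_gray) > DIFF_THRESHOLD
    mhi[motion_mask] = timestamp
    mhi[(~motion_mask) & (timestamp - mhi > MHI_DURATION)] = 0.0
    return mhi

cap = cv2.VideoCapture("cattle.mp4")  # placeholder video path
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
mhi = np.zeros(prev.shape, np.float32)
t = 0.0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    t += 1.0 / 30.0  # assume a 30 fps stream
    mhi = update_mhi(mhi, prev, gray, t)
    prev = gray
# A flattened, downsampled MHI could then feed an SVM, as the paper does.
```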

Improved DT Algorithm Based Human Action Features Detection

  • Hu, Zeyuan;Lee, Suk-Hwan;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 21, No. 4 / pp. 478-484 / 2018
  • The choice of motion features directly influences the result of a human action recognition method. A single feature is often affected by many factors, such as the appearance of the human body, the environment, and the video camera, which limits recognition accuracy. The Dense Trajectories (DT) algorithm is a classic feature-extraction method in the field of behavior recognition, but it has some defects in its use of optical-flow images. Building on studies of how human actions are represented and recognized, and giving full consideration to the advantages and disadvantages of different features, this paper uses the improved Dense Trajectories (iDT) algorithm to optimize and extract optical-flow features of human motion, combines them with a support vector machine to recognize human behavior, and uses images from the KTH database for training and testing.
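
A heavily simplified Python sketch in the same spirit: dense optical flow summarized into a per-clip orientation histogram, then an SVM. It omits the trajectory tracking and camera-motion compensation that distinguish DT/iDT, and the clip paths and labels are placeholders:

```python
# Dense optical flow (Farneback) -> magnitude-weighted orientation histogram
# per clip -> SVM classifier. A simplification of the DT/iDT pipeline.
import cv2
import numpy as np
from sklearn.svm import SVC

def flow_histogram(video_path: str, bins: int = 8) -> np.ndarray:
    """Histogram of dense optical-flow orientations over a whole clip."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hist = np.zeros(bins, np.float64)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        h, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
        hist += h
        prev = gray
    return hist / (hist.sum() + 1e-9)

# Placeholder training data: (clip path, action label) pairs.
clips = [("walk1.avi", "walking"), ("box1.avi", "boxing")]
X = np.stack([flow_histogram(path) for path, _ in clips])
y = [label for _, label in clips]
clf = SVC(kernel="rbf").fit(X, y)
```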

A Survey of Human Action Recognition Approaches that use an RGB-D Sensor

  • Farooq, Adnan;Won, Chee Sun
    • IEIE Transactions on Smart Processing and Computing / Vol. 4, No. 4 / pp. 281-290 / 2015
  • Human action recognition from a video scene has remained a challenging problem in the area of computer vision and pattern recognition. The development of low-cost RGB-depth (RGB-D) cameras has opened new opportunities to solve this problem. In this paper, we present a comprehensive review of recent approaches to human action recognition based on depth maps, skeleton joints, and other hybrid approaches. In particular, we focus on the advantages and limitations of the existing approaches and on future directions.

Human Action Recognition Using Deep Data: A Fine-Grained Study

  • Rao, D. Surendra;Potturu, Sudharsana Rao;Bhagyaraju, V
    • International Journal of Computer Science & Network Security / Vol. 22, No. 6 / pp. 97-108 / 2022
  • The video-assisted human action recognition [1] field is one of the most active in computer vision research. Since the depth data [2] obtained by Kinect cameras offer more benefits than traditional RGB data, research on human action detection has grown with the Kinect camera's availability. In this article, we conduct a systematic study of strategies for recognizing human activity based on depth data. All methods are grouped into depth-map tactics and skeleton tactics, and a comparison with some of the more traditional strategies is also covered. We then examine the specifics of different depth behavior databases and draw a straightforward distinction between them, and we discuss the advantages and disadvantages of depth-based and skeleton-based techniques.

Spatio-temporal Semantic Features for Human Action Recognition

  • Liu, Jia;Wang, Xiaonian;Li, Tianyu;Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 6, No. 10 / pp. 2632-2649 / 2012
  • Most approaches to human action recognition are limited by the use of simple action datasets recorded under controlled environments, or they focus on excessively localized features without sufficiently exploring spatio-temporal information. This paper proposes a framework for recognizing realistic human actions. Specifically, a new action representation is introduced based on computing a rich set of descriptors from keypoint trajectories. To obtain efficient and compact representations of actions, we develop a feature fusion method that combines spatio-temporal local motion descriptors according to the movement of the camera, which is detected from the distribution of spatio-temporal interest points in the clips. A new topic model called the Markov Semantic Model is proposed for semantic feature selection; it relies on the different kinds of dependencies between words produced by "syntactic" and "semantic" constraints, selecting informative features collaboratively from the short-range and long-range dependencies. Building on nonlinear SVMs, we validate the proposed hierarchical framework on several realistic action datasets.
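
The first stage, keypoint trajectories, can be sketched with standard tools. A minimal Python illustration using Shi-Tomasi corners tracked by pyramidal Lucas-Kanade flow; the descriptor computation, fusion, and topic model are beyond this sketch, and the video path and tracking parameters are placeholders:

```python
# Track keypoint trajectories through a clip: detect corners once, then follow
# them frame to frame with Lucas-Kanade optical flow, dropping lost points.
import cv2
import numpy as np

cap = cv2.VideoCapture("action_clip.avi")  # placeholder
ok, frame = cap.read()
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)
trajectories = [[p.ravel()] for p in pts]  # one (x, y) list per keypoint

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
    kept = []
    for traj, p, s in zip(trajectories, new_pts, status.ravel()):
        if s:  # keep only keypoints that were tracked into this frame
            traj.append(p.ravel())
            kept.append((traj, p))
    if not kept:
        break
    trajectories = [t for t, _ in kept]
    pts = np.array([p for _, p in kept]).reshape(-1, 1, 2)
    prev = gray
# Each trajectory (a list of (x, y) points over time) would then be turned
# into shape/motion descriptors for the fusion step the abstract describes.
```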