Title/Summary/Keyword: Camera Action


Vision-based garbage dumping action detection for real-world surveillance platform

  • Yun, Kimin;Kwon, Yongjin;Oh, Sungchan;Moon, Jinyoung;Park, Jongyoul
    • ETRI Journal
    • /
    • v.41 no.4
    • /
    • pp.494-505
    • /
    • 2019
  • In this paper, we propose a new framework for detecting the unauthorized dumping of garbage in real-world surveillance camera footage. Although several action/behavior recognition methods have been investigated, these studies are hardly applicable to real-world scenarios because they focus mainly on well-refined datasets. Because dumping actions in the real world take a variety of forms, building a new method that exposes these actions, rather than adapting previous approaches, is the better strategy. We detect the dumping action through the change in the relation between a person and the object they are holding. To find the held object, which has no definite form, we use a background subtraction algorithm and human joint estimation. The held object is then tracked, and a relation model between the joints and the object is built. Finally, the dumping action is detected through a voting-based decision module. In the experiments, we show the effectiveness of the proposed method on real-world videos containing various dumping actions. In addition, the proposed framework is implemented in a real-time monitoring system through a fast online algorithm.
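
A minimal sketch of the pipeline sequence the abstract describes (foreground extraction, locating the held object relative to a body joint, and a voting-based decision) might look as follows. This is an illustration only, not the authors' code: the hand keypoint is assumed to come from some pose estimator, and the largest-blob heuristic and DETACH_DIST threshold are assumptions.

```python
from collections import deque

import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
votes = deque(maxlen=30)    # sliding voting window (~1 s at 30 fps)
DETACH_DIST = 80.0          # px; hypothetical hand-object separation threshold

def process_frame(frame, hand_xy):
    """Return True when the voting window decides a dumping action occurred."""
    fg = subtractor.apply(frame)                       # foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # candidate held object = largest foreground blob (an assumption)
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        if m["m00"] > 0:
            obj = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])
            # relation feature: distance between the hand joint and object center
            detached = np.linalg.norm(obj - np.asarray(hand_xy)) > DETACH_DIST
            votes.append(detached)
    # voting-based decision: a majority of recent frames say "detached"
    return len(votes) == votes.maxlen and sum(votes) > votes.maxlen // 2
```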

A study on comparison between 3D computer graphics cameras and actual cameras (3D컴퓨터그래픽스 가상현실 애니메이션 카메라와 실제카메라의 비교 연구 - Maya, Softimage 3D, XSI 소프트웨어와 실제 정사진과 동사진 카메라를 중심으로)

  • Kang, Chong-Jin
    • Cartoon and Animation Studies
    • /
    • s.6
    • /
    • pp.193-220
    • /
    • 2002
  • The world created by computers, with its great expanses and complex, varied forms of expression, provides not simply a place for communication but a new civilization and a new creative world. Among these developments, 3D computer graphics, 3D animation, and virtual-reality technology were sublimated into a new culture and a new genre of art by joining graphic design with computer engineering. In this study, I diagnose the possibilities, limits, and differences of expression in virtual-reality computer graphics animation by comparing the camera actions and angles of actual still and film cameras with the virtual cameras of 3D computer graphics software: Maya, XSI, and Softimage 3D.
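
One concrete, checkable point of comparison between a real camera and a CG camera is field of view, which packages such as Maya derive from focal length and film aperture via the standard pinhole relation. A small sketch of that relation (my illustration, not from the paper):

```python
import math

def horizontal_fov_deg(focal_length_mm, aperture_width_mm):
    """Horizontal field of view from focal length and film-back width."""
    return math.degrees(2 * math.atan(aperture_width_mm / (2 * focal_length_mm)))

# A 50 mm lens on a 36 mm-wide full-frame back gives ~39.6 degrees;
# entering the same focal length and film aperture in the CG camera
# reproduces the real camera's framing.
print(horizontal_fov_deg(50, 36))   # -> 39.59...
```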


Traffic Safety Recommendation Using Combined Accident and Speeding Data

  • Onuean, Athita;Lee, Daesung;Jung, Hanmin
    • Journal of Information and Communication Convergence Engineering
    • /
    • v.18 no.1
    • /
    • pp.49-54
    • /
    • 2020
  • Speed enforcement is one of the major challenges in traffic safety. The increasing number of accidents and fatalities has led governments to respond by implementing intelligent control systems. For example, the Korean government deployed a speed camera system to maintain road safety. However, many drivers still speed in blackspot areas where speed cameras are not installed. Therefore, we propose a methodology that analyzes combined accident and speeding data to offer recommendations for maintaining traffic safety. We investigate three factors: "section," "existing speed camera location," and "over-speeding data." To interpret the results, we used the QGIS tool to visualize the spatial distribution of incidents. Finally, we provide four recommendations based on the three aforementioned factors: "investigate with experts," "no action," "install fixed speed cameras," and "deploy mobile speed cameras."
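
The mapping from the three factors to the four recommendations can be sketched as a simple decision rule. The rule shapes and thresholds below are my reconstruction from the abstract, not the paper's actual logic:

```python
def recommend(has_camera: bool, speeding: bool, accidents: bool) -> str:
    """Hypothetical per-section rule combining the three factors."""
    if not speeding and not accidents:
        return "no action"
    if speeding and not has_camera:
        # persistent speeding in an uncovered blackspot
        return "install fixed speed cameras" if accidents else "deploy mobile speed cameras"
    # problems persist despite an existing camera: needs expert review
    return "investigate with experts"
```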

Cooperative Robot for Table Balancing Using Q-learning (테이블 균형맞춤 작업이 가능한 Q-학습 기반 협력로봇 개발)

  • Kim, Yewon;Kang, Bo-Yeong
    • The Journal of Korea Robotics Society
    • /
    • v.15 no.4
    • /
    • pp.404-412
    • /
    • 2020
  • Everyday tasks such as moving tables and beds typically involve at least two people, and the balance of the object changes with each person's actions. However, many previous studies performed such tasks with robots alone, without factoring in human cooperation. In this paper, we therefore propose a cooperative table-balancing robot based on Q-learning that enables cooperative work between a human and a robot. The proposed robot recognizes the human's action from camera images of the table's state and performs the corresponding table-balancing action without high-performance equipment. Human-action classification uses a deep learning model, specifically AlexNet, and achieves an accuracy of 96.9% under 10-fold cross-validation. The Q-learning experiment was carried out over 2,000 episodes with 200 trials, and the results show that the Q-function converged stably at this number of episodes. This stable convergence determined the Q-learning policies for the robot's actions. A video of the robot cooperating with a human on the table-balancing task can be found at http://ibot.knu.ac.kr/videocooperation.html.
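
For readers unfamiliar with the method, a minimal tabular Q-learning loop using the episode and trial counts the abstract reports might look like this; the state/action sizes, reward, and hyperparameters are placeholders, not the paper's:

```python
import numpy as np

# states = recognized human actions, actions = robot balancing moves
N_STATES, N_ACTIONS = 4, 4
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    """Stand-in environment: reward 1 if the robot's move rebalances the table."""
    reward = 1.0 if action == state else -0.1
    return rng.integers(N_STATES), reward   # next recognized human action

for episode in range(2000):                 # abstract reports 2,000 episodes
    s = rng.integers(N_STATES)
    for _ in range(200):                    # and 200 trials
        # epsilon-greedy action selection
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # standard Q-learning temporal-difference update
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
```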

Object-Action and Risk-Situation Recognition Using Moment Change and Object Size's Ratio (모멘트 변화와 객체 크기 비율을 이용한 객체 행동 및 위험상황 인식)

  • Kwak, Nae-Joung;Song, Teuk-Seob
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.5
    • /
    • pp.556-565
    • /
    • 2014
  • This paper proposes a method for tracking an object in real-time video from a single web camera and recognizing human actions and risk situations. The proposed method recognizes basic actions that humans perform in daily life and detects risk situations such as fainting and falling down, thereby distinguishing usual actions from risk situations. The method models the background, obtains the difference image between the input image and the modeled background, extracts the human object from the input image, tracks the object's motion, and recognizes the action. Object tracking uses the moment information of the extracted object, and the recognition features are the change in moments and the ratio of the object's size between frames. Four of the most common daily-life actions are classified: walking, walking diagonally, sitting down, and standing up; a sudden fall is classified as a risk situation. To test the proposed method, we applied it to web-camera video of eight participants, classified the human actions, and recognized the risk situations. The proposed method achieved a recognition rate of more than 97% for each action and 100% for risk situations.
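
A rough sketch of the moment and size-ratio features the abstract describes, using OpenCV; the fall rule and threshold are illustrative assumptions, not the paper's exact criteria:

```python
import cv2
import numpy as np

def shape_features(mask):
    """Centroid (from image moments) and bounding-box size of the largest
    foreground blob; mask is the uint8 background-difference image."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    _, _, w, h = cv2.boundingRect(c)
    return np.array([cx, cy]), w, h

def classify(prev, curr, fall_ratio=1.5):
    """Illustrative rule: an upright blob whose width/height ratio suddenly
    flips to wide suggests falling down (a risk situation)."""
    (_, pw, ph), (_, cw, ch) = prev, curr
    if cw / ch > fall_ratio and pw / ph < 1.0:
        return "risk-situation (falling down)"
    return "usual action"
```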

An Implementation of Taekwondo Action Recognition System using Multiple Sensing (멀티플 센싱을 이용한 태권도 동작 인식 시스템 구현)

  • Lee, Byong Kwon
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.2
    • /
    • pp.436-442
    • /
    • 2016
  • In many sports, victory and defeat are left to the referee's subjective judgment. In Taekwondo poomsae in particular, how accurately a given action is performed is important. Technology is therefore required to evaluate objectively what would otherwise be a subjective judgment of victory and defeat, and to preserve the evaluation as evidence. This study implemented a system for recognizing Taekwondo movements through multiple motion-recognition devices; a step sensor was also used to detect the user's position. The study evaluated the matching rate between standard gesture data and the captured motion data, and the use of multiple gesture-recognition devices enabled a more accurate assessment of Taekwondo actions.
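
The matching rate between standard gesture data and captured motion data could be computed along these lines; the per-frame tolerance metric is my assumption, since the abstract does not specify the measure:

```python
import numpy as np

def matching_rate(standard, captured, tol=0.15):
    """Fraction of frames whose joint-feature vectors fall within a relative
    tolerance of the standard gesture. Both arrays are (frames, features)
    and are assumed to be time-aligned to the same length."""
    standard = np.asarray(standard, dtype=float)
    captured = np.asarray(captured, dtype=float)
    err = np.linalg.norm(captured - standard, axis=1)       # per-frame error
    scale = np.linalg.norm(standard, axis=1) + 1e-9          # avoid divide-by-zero
    return float(np.mean(err / scale <= tol))                # matching rate in [0, 1]
```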

A Miniature Humanoid Robot That Can Play Soccer

  • Lim, Seon-Ho;Cho, Jeong-San;Sung, Young-Whee;Yi, Soo-Yeong
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.628-632
    • /
    • 2003
  • An intelligent miniature humanoid robot system is designed and implemented as a platform for researching walking algorithms. The robot system consists of a mechanical robot body, a control system, a sensor system, and a human interface system. The robot has 6 dofs per leg, 3 dofs per arm, and 2 dofs for the neck, for a total of 20 dofs, giving it dexterous motion capability. In the control system, a supervisory controller runs on a remote host computer to plan high-level robot actions based on vision sensor data, a main controller implemented with a DSP chip generates walking trajectories for the robot to perform the commanded action, and an auxiliary controller implemented with an FPGA chip controls the 20 actuators. The robot has three types of sensors: a two-axis acceleration sensor and eight force-sensing resistors acquire information on the robot's walking status, and a color CCD camera acquires information on the surroundings. As an example of an intelligent robot action, experiments on playing soccer are performed.
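
The three-tier control split (supervisory planning on a host, trajectory generation on a DSP, actuator control on an FPGA) can be caricatured as a pipeline; the class and method names below are illustrative only, not from the paper:

```python
class SupervisoryController:          # remote host: vision data -> high-level action
    def plan(self, vision_data):
        return "kick" if vision_data.get("ball_near") else "walk_to_ball"

class MainController:                 # DSP: high-level action -> walking trajectory
    def trajectory(self, action):
        return [f"{action}_waypoint_{i}" for i in range(3)]

class AuxiliaryController:            # FPGA: trajectory waypoint -> 20 actuator commands
    def drive(self, waypoint):
        return {joint: waypoint for joint in range(20)}

sup, main, aux = SupervisoryController(), MainController(), AuxiliaryController()
action = sup.plan({"ball_near": True})
for wp in main.trajectory(action):
    aux.drive(wp)                     # one command per dof per waypoint
```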


Implementation of an Intelligent Action of a Small Biped Robot (소형 2족 보행 로봇의 지능형 동작의 구현)

  • Lim, Seun-Ho;Cho, Jung-San;Yi, Soo-Yeong;Ahn, Hee-Wook;Sung, Young-Whee
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.10 no.9
    • /
    • pp.825-832
    • /
    • 2004
  • A small biped robot system is designed and implemented. The robot system consists of a mechanical robot body, a control system, a sensor system, and a user interface system. The robot has 12 dofs for the two legs, 6 dofs for the two arms, and 2 dofs for the neck, for a total of 20 dofs, giving it dexterous motion capability. The implemented robot can perform intelligent actions such as playing soccer, resisting external forces, and walking on sloped terrain. In this paper, we focus on the robot's ability to play soccer, for which it uses a color CCD camera attached to its head as its only sensor. To enable the robot to play soccer with a single camera, an algorithm consisting of searching, localization, and motion planning is proposed and tested experimentally. The results show that the robot can play soccer successfully in the given environments.
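
The searching/localization/motion-planning loop the abstract names can be pictured as a small state machine; the states, thresholds, and transitions below are my reading of the abstract, not the authors' algorithm:

```python
def soccer_step(state, ball_visible, ball_bearing_deg, ball_dist_m):
    """Return (next_state, motor_action) for one control tick."""
    if state == "search":
        # rotate until the single head camera finds the ball
        return ("localize", "turn_in_place") if ball_visible else ("search", "turn_in_place")
    if state == "localize":
        # center the ball in the image to estimate bearing and distance
        if abs(ball_bearing_deg) > 10:
            return ("localize", "turn_toward_ball")
        return ("plan", "hold")
    if state == "plan":
        # walk in, then kick when close enough (threshold assumed)
        return ("search", "kick") if ball_dist_m < 0.15 else ("plan", "walk_forward")
    return ("search", "hold")
```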

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.730-744
    • /
    • 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, particularly video captured by body-worn cameras, which can be helpful for video surveillance, automatic search, and video indexing. It can also help assist elderly and frail persons and improve their lives. Recognizing human activities remains problematic because of the large variations in how actions are executed, especially when recognition is realized through an external device, similar to a robot, acting as a personal assistant; the inferred information is used both online to assist the person and offline to support the personal assistant. With the proposed method being robust against these factors of variability in action execution, the major purpose of this paper is to achieve efficient and simple recognition from egocentric camera data alone using a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera and several stationary cameras' data, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
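
A minimal convolutional classifier over egocentric frames, in the spirit of the method described (sketch in PyTorch; the architecture and class count are placeholders, not the authors' network):

```python
import torch
import torch.nn as nn

class EgoActivityNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):            # x: (batch, 3, H, W) egocentric frames
        return self.classifier(self.features(x).flatten(1))

logits = EgoActivityNet()(torch.randn(2, 3, 224, 224))  # -> shape (2, 10)
```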

Three-Dimensional Image Reconstruction from Compton Scattered Data Using the Row-Action Maximum Likelihood Algorithm (행작용 최대우도 알고리즘을 사용한 컴프턴 산란 데이터로부터의 3차원 영상재구성)

  • Lee, Mi-No;Lee, Soo-Jin;Nguyen, Van-Giang;Kim, Soo-Mee;Lee, Jae-Sung
    • Journal of Biomedical Engineering Research
    • /
    • v.30 no.1
    • /
    • pp.56-65
    • /
    • 2009
  • Compton imaging is often recognized as a potentially more valuable 3-D technique in nuclear medicine than conventional emission tomography. Due to inherent computational limitations, however, it has been difficult to reconstruct images with good accuracy. In this work we show that the row-action maximum likelihood algorithm (RAMLA), which has proven useful for conventional tomographic reconstruction, can also be applied to the 3-D reconstruction of cone-beam projections from Compton scattered data. The major advantage of RAMLA is that it converges to a true maximum likelihood solution an order of magnitude faster than the standard expectation maximization (EM) algorithm. For our simulations, we first model a Compton camera system consisting of three pairs of scatterer and absorber detectors placed on the x-, y-, and z-axes, and generate conical projection data using a software phantom. We then compare the quantitative performance of RAMLA and EM reconstructions in terms of percentage error. The net conclusion based on our experimental results is that RAMLA applied to Compton camera reconstruction significantly outperforms the EM algorithm in convergence rate; while the computational costs of one iteration of RAMLA and EM are about the same, one iteration of RAMLA performs as well as 128 iterations of EM.
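
A row-action ML update in the style of RAMLA can be sketched as follows: each measurement row updates the image immediately, with a decaying relaxation parameter, which is what makes it converge much faster per pass than EM. The dense system-matrix representation and the relaxation schedule here are assumptions for illustration, not the paper's setup:

```python
import numpy as np

def ramla(A, y, n_iters=10, lam0=1.0):
    """Row-action ML sketch: A is the (n_meas, n_vox) system matrix,
    y the measured counts; returns a nonnegative image estimate."""
    x = np.ones(A.shape[1])
    for n in range(n_iters):
        lam = lam0 / (n + 1)                  # decaying relaxation (assumed schedule)
        for i in range(A.shape[0]):           # one measurement row at a time
            ai = A[i]
            proj = ai @ x                     # forward projection of current estimate
            if proj > 0:
                # multiplicative-style row update toward the ML solution
                x += lam * x * ai * (y[i] / proj - 1.0)
            x = np.clip(x, 0, None)           # keep the image nonnegative
    return x
```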