• Title/Summary/Keyword: Camera Movement


Long-Term Monitoring of the Barrier Effect of the Wild Boar Fence

  • Lim, Sang Jin;Kwon, Ji Hyun;Namgung, Hun;Park, Joong Yeol;Kim, Eui Kyeong;Park, Yung Chul
    • Journal of Forest and Environmental Science
    • /
    • v.38 no.2
    • /
    • pp.128-132
    • /
    • 2022
  • Wild boars (Sus scrofa) not only cause crop damage and human casualties, but also facilitate the spread of many infectious diseases to domestic animals and humans. To determine the efficiency of a fencing system in blocking the movement of wild boars, long-term monitoring was performed in a fenced area of Bukhansan National Park using camera traps. Over the 46-month monitoring period, there was a 72.6% reduction in the number of wild boar appearances in the fence-enclosed area compared to the unenclosed area. For the first 20 months after fence installation, the blocking effect was strong, reducing wild boar appearances by 92.6% in the fence-enclosed area relative to the unenclosed area; after that point, its effectiveness decreased. Maintaining a fence for a long time is likely to lead to habitat fragmentation, and the fence can also block the movement of other wild animals, including the endangered long-tailed goral. This study therefore suggests a 20-month retention period for fences installed to inhibit the movement of wild boars in wide forests such as those of Gangwon-do, South Korea. To identify how long the blocking effect of such fences lasts, further studies are needed on fence length and height and on ground-surface conditions.

An Effect Analysis of Rearfoot Movement and Impact force by Different Design of Running Shoes Hardness (런닝화의 경도 차이가 후족 제어 및 충격력에 미치는 영향 분석)

  • Lee Dong-Choon;Lee Woo-Chang
    • Proceedings of the Society of Korea Industrial and System Engineering Conference
    • /
    • 2002.05a
    • /
    • pp.291-296
    • /
    • 2002
  • The midsole hardness of athletic footwear affects its capacity to absorb impact shock and to control rearfoot movement during running and walking. Prior studies focused either on finding the proper midsole hardness for rearfoot control or on finding an effective hardness for absorbing impact shock. The displacement of the maximal Achilles tendon angle, which describes the amount of pronation, decreases when the medial midsole is harder than the lateral midsole. Increasing midsole hardness reduces the maximum and initial pronation angles, but degrades shock absorption during heel strike. To determine an effective midsole hardness, therefore, a study that balances rearfoot control against impact absorption during footstrike is needed. The purpose of this study is to quantify rearfoot control and impact absorption for different medial and lateral midsole hardnesses in the heel region. The results are useful for defining a biomechanically appropriate midsole hardness when developing running shoes. As impact-shock variables, accelerations at the shank and knee were measured at four running speeds (5, 7, 9, and 11 km/h). The maximum and 10% pronation angles (Achilles tendon angle) were also measured using a high-speed camera.


Kinect-based Motion Recognition Model for the 3D Contents Control (3D 콘텐츠 제어를 위한 키넥트 기반의 동작 인식 모델)

  • Choi, Han Suk
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.1
    • /
    • pp.24-29
    • /
    • 2014
  • This paper proposes a Kinect-based human motion recognition model for controlling 3D contents after tracking body gestures through the camera of the infrared Kinect device. The proposed model computes the variation in distance from the shoulder to the left and right hand, wrist, arm, and elbow as the body moves. Motions are classified into movement directions: left movement, right movement, up, down, enlargement, downsizing, and selection. Compared with contact-type gesture recognition technologies and device-based gesture technologies that require expensive hardware, the proposed Kinect-based human motion recognition model is natural and low-cost.
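The distance-variation idea described in this abstract can be sketched as follows. The joint names, axis conventions, and thresholds below are illustrative assumptions for a two-frame skeleton comparison, not values from the paper:

```python
import math

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def classify(prev, curr, threshold=0.15):
    """Classify movement from two skeleton frames.

    Each frame maps a joint name -> (x, y, z) in meters,
    e.g. {'hand_l': ..., 'hand_r': ...} (hypothetical names).
    """
    dx = curr['hand_r'][0] - prev['hand_r'][0]   # horizontal change
    dy = curr['hand_r'][1] - prev['hand_r'][1]   # vertical change
    # change in the two-hand separation approximates enlarge/downsize
    ds = dist(curr['hand_l'], curr['hand_r']) - dist(prev['hand_l'], prev['hand_r'])

    if abs(ds) >= max(abs(dx), abs(dy), threshold):
        return 'enlargement' if ds > 0 else 'downsizing'
    if abs(dx) >= max(abs(dy), threshold):
        return 'right' if dx > 0 else 'left'
    if abs(dy) >= threshold:
        return 'up' if dy > 0 else 'down'
    return 'selection'  # little movement: treat as a select/hold gesture

prev = {'hand_l': (-0.3, 1.0, 2.0), 'hand_r': (0.3, 1.0, 2.0)}
curr = {'hand_l': (0.0, 1.0, 2.0), 'hand_r': (0.6, 1.0, 2.0)}
print(classify(prev, curr))  # both hands shifted right -> 'right'
```

A real implementation would smooth the joint stream over several frames before classifying, but the per-frame distance comparison is the core of the model the abstract describes.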

Visual Sensing of the Light Spot of a Laser Pointer for Robotic Applications

  • Park, Sung-Ho;Kim, Dong Uk;Do, Yongtae
    • Journal of Sensor Science and Technology
    • /
    • v.27 no.4
    • /
    • pp.216-220
    • /
    • 2018
  • In this paper, we present visual sensing techniques that can be used to teach a robot using a laser pointer. The light spot of an off-the-shelf laser pointer is detected and its movement is tracked on consecutive images from a camera. The three-dimensional position of the spot is calculated using stereo cameras. The light spot on the image is detected based on its color, brightness, and shape. The detection results in a binary image, and morphological processing steps are performed on the image to refine the detection. The movement of the laser spot is measured using two methods. The first is a simple method of specifying a region of interest (ROI) centered at the current location of the light spot and finding the spot within the ROI on the next image, under the assumption that the spot does not move far between two consecutive images. The second method uses a Kalman filter, which has been widely employed in trajectory estimation problems. In our simulation study of various cases, Kalman filtering mostly showed better results; however, fitting the filter's system model to the pattern of the spot movement remains a problem.
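The second method in this abstract, Kalman filtering of the spot trajectory, can be sketched with a standard constant-velocity model (not the authors' code; the noise levels q and r are assumed values):

```python
import numpy as np

def make_filter(q=1e-3, r=1.0, dt=1.0):
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)  # constant-velocity motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)   # only the spot position (x, y) is observed
    Q = q * np.eye(4)                      # process noise
    R = r * np.eye(2)                      # measurement noise
    return F, H, Q, R

def track(measurements, q=1e-3, r=1.0):
    """Filter a list of (x, y) spot detections; state is [x, y, vx, vy]."""
    F, H, Q, R = make_filter(q, r)
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    estimates = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the detected spot position
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return estimates

# A spot moving right at ~1 px/frame: the estimate follows the trend.
est = track([(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)])
print(est[-1])
```

The abstract's caveat shows up directly here: if the pointer's real motion is not well described by the chosen F (e.g. abrupt hand jerks), the filter lags, which is why fitting the system model to the spot's movement pattern is the hard part.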

Monosyllable Speech Recognition through Facial Movement Analysis (안면 움직임 분석을 통한 단음절 음성인식)

  • Kang, Dong-Won;Seo, Jeong-Woo;Choi, Jin-Seung;Choi, Jae-Bong;Tack, Gye-Rae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.6
    • /
    • pp.813-819
    • /
    • 2014
  • The purpose of this study was to extract accurate facial movement features using a 3-D motion capture system for lip-reading-based speech recognition. Instead of features obtained from traditional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The obtained facial movement data were calculated into 11 parameters and represented as patterns for each monosyllable vocalization. The parameter patterns were then learned and recognized for each monosyllable using speech recognition algorithms based on a Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy for the 11 monosyllables was 97.2%, which suggests the possibility of recognizing Korean speech through quantitative facial movement analysis.
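The HMM/Viterbi recognition step mentioned in this abstract can be illustrated with generic Viterbi decoding over a discrete HMM. The two-state toy model and all probabilities below are illustrative only, not the paper's trained parameters:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return (probability of best path, most likely state sequence)."""
    # V[t][s] = (best probability of reaching state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # backtrack from the best final state
    prob, last = max((V[-1][s][0], s) for s in states)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return prob, path[::-1]

# Toy "mouth shape" HMM: hidden states, observed lip-width symbols.
states = ('open', 'closed')
start_p = {'open': 0.5, 'closed': 0.5}
trans_p = {'open':   {'open': 0.7, 'closed': 0.3},
           'closed': {'open': 0.3, 'closed': 0.7}}
emit_p = {'open':   {'wide': 0.8, 'narrow': 0.2},
          'closed': {'wide': 0.1, 'narrow': 0.9}}
prob, path = viterbi(('wide', 'wide', 'narrow'), states,
                     start_p, trans_p, emit_p)
print(path)  # -> ['open', 'open', 'closed']
```

In the study's setting, one HMM per monosyllable would be trained on the 11 marker-derived parameters, and the model with the highest Viterbi score for an utterance gives the recognized syllable.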

Gaze Detection System by Wide and Narrow View Camera (광각 및 협각 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.12C
    • /
    • pp.1239-1249
    • /
    • 2003
  • Gaze detection locates, by computer vision, the position on a monitor screen where a user is looking. Previous gaze detection systems use a wide-view camera, which can capture the user's whole face; however, the image resolution of such a camera is too low to detect the fine movements of the user's eyes exactly. We therefore implement a gaze detection system with both a wide-view camera and a narrow-view camera. To follow the position of the user's eye as the face moves, the narrow-view camera provides auto focusing and auto pan/tilt based on the detected 3D facial feature positions. In experiments, we obtained the facial and eye gaze position on a monitor with an RMS error between the computed and actual positions of about 3.1 cm when facial movements were permitted and 3.57 cm when both facial and eye movements were permitted. The processing time is short enough for a real-time system (below 30 ms on a Pentium IV 1.8 GHz).

Automatic identification of ARPA radar tracking vessels by CCTV camera system (CCTV 카메라 시스템에 의한 ARPA 레이더 추적선박의 자동식별)

  • Lee, Dae-Jae
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.45 no.3
    • /
    • pp.177-187
    • /
    • 2009
  • This paper describes an automatic video surveillance system (AVSS) with long range and 360° coverage that is automatically rotated in an elevation-over-azimuth mode in response to the TTM (tracked target message) signals of vessels tracked by ARPA (automatic radar plotting aids) radar. The AVSS is a video security and tracking system, supported by ARPA radar, a CCTV (closed-circuit television) camera system, and other sensors, that automatically identifies and tracks vessels and detects potentially dangerous situations such as collision accidents at sea and berthing/de-berthing accidents in harbor. It can be used to monitor illegal fishing vessels in inshore and offshore fishing grounds and to improve the security and safety of domestic fishing vessels in the EEZ (exclusive economic zone). The movement of a target vessel chosen by the ARPA radar operator is automatically tracked by a CCTV camera system interfaced to an ECDIS (electronic chart display and information system), with functions such as graphic presentation of the CCTV image, camera position, camera azimuth, and angle of view on the ENC; automatic and manual control of the pan and tilt angles of the CCTV system; and continuous recording and replay of all information on a selected target. Test results showed that the experimentally developed AVSS can serve as an extra navigation aid for the bridge operator in confusing traffic situations, improve the detection efficiency of small targets in sea clutter, greatly enhance the operator's ability to visually identify vessels tracked by ARPA radar, and provide a recorded history for reference or evidentiary purposes in the EEZ.

Development of a Real-time Sensor-based Virtual Imaging System (센서기반 실시간 가상이미징 시스템의 구현)

  • 남승진;오주현;박성춘
    • Journal of Broadcast Engineering
    • /
    • v.8 no.1
    • /
    • pp.63-71
    • /
    • 2003
  • In sports programs, real-time virtual imaging systems have drawn attention as a new technology that can compose information such as team logos, scores, and distances directly onto the playing ground, compensating for the limitations of a general character generator. To synchronize graphics with camera movements, two methods are generally used: attaching sensors to the camera's moving axes, or analyzing the camera video itself. The KBS Technical Research Institute developed the real-time sensor-based virtual imaging system 'VIVA', which uses four sensors on the pan, tilt, zoom, and focus axes and controls a virtual graphic camera in three-dimensional coordinates in real time. In this paper, we introduce VIVA and its technology. For accurate camera tracking, we calculated the viewpoint movement caused by zooming from optical principal-point variation data, and we considered field-of-view variation due not only to zoom but also to focus. Because the system was developed in a three-dimensional graphics environment, many useful 3D graphics techniques such as keyframe animation can be used. VIVA was successfully used both in the Busan Asian Games and in the 2002 presidential election broadcast. We confirmed that it can be used not only in the field but also in studio programs in which the camera works at closer range.

A Study on Implementation of Motion Graphics Virtual Camera with AR Core

  • Jung, Jin-Bum;Lee, Jae-Soo;Lee, Seung-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.8
    • /
    • pp.85-90
    • /
    • 2022
  • In this study, to reduce the time and cost of the traditional motion graphics production method while reproducing virtual camera movement identical to that of the real camera, we propose a method for creating a motion graphics virtual camera from the real-time tracking data of an AR Core-based mobile device. The proposed method replaces the tracking operation normally performed on a video file after shooting: tracking proceeds on the AR Core-based mobile device simultaneously with shooting, so tracking success can be confirmed at the shooting stage. In experiments, the resulting motion graphics image showed no difference from the conventional method; however, whereas the conventional tracking step consumed 6 minutes and 10 seconds for a 300-frame image, the proposed method omits this step entirely and is therefore far more time-efficient. At a time when interest in image production using virtual and augmented reality is high and various studies are under way, this work can be applied to virtual camera creation and match moving.

A Study on Portable Green-algae Remover Device based on Arduino and OpenCV using Do Sensor and Raspberry Pi Camera (DO 센서와 라즈베리파이 카메라를 활용한 아두이노와 OpenCV기반의 이동식 녹조제거장치에 관한 연구)

  • Kim, Min-Seop;Kim, Ye-Ji;Im, Ye-Eun;Hwang, You-Seong;Baek, Soo-Whang
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.4
    • /
    • pp.679-686
    • /
    • 2022
  • In this paper, we implemented a device that recognizes and removes green algae in water using a Raspberry Pi camera and a DO (dissolved oxygen) sensor. The Raspberry Pi board recognizes the color of green algae by converting the RGB values obtained from the camera into HSV. The location of the algae is identified in this way, and when the DO sensor indicates that the decrease in dissolved oxygen at that location exceeds a reference value, the removal device is driven to spray an algae-removal solution. The Raspberry Pi camera pipeline uses OpenCV, and motor movement is controlled according to the DO sensor output and the camera's green-algae recognition result. Algae recognition and spraying of the removal solution were implemented with Arduino and Raspberry Pi, and the feasibility of the proposed portable algae-removal device was verified through experiments.
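The RGB-to-HSV green-detection step and the camera/DO-sensor gating described in this abstract can be sketched as follows. The hue window, saturation/value floors, and DO threshold are assumed values for illustration; the paper's actual OpenCV thresholds are not given in the abstract:

```python
import colorsys

GREEN_HUE = (0.22, 0.45)   # roughly 80-160 degrees on the 0-1 hue circle

def is_green(r, g, b, hue_range=GREEN_HUE, min_s=0.3, min_v=0.2):
    """Return True if an 8-bit RGB pixel falls in the assumed green band."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return hue_range[0] <= h <= hue_range[1] and s >= min_s and v >= min_v

def green_ratio(pixels):
    """Fraction of green pixels in an iterable of (r, g, b) tuples."""
    pixels = list(pixels)
    return sum(is_green(*p) for p in pixels) / len(pixels)

def should_spray(pixels, do_drop, do_threshold=2.0, green_threshold=0.5):
    """Spray only when the camera and the DO sensor agree, as in the abstract:
    enough of the region looks green AND the dissolved-oxygen drop (mg/L)
    at that location exceeds the reference value."""
    return green_ratio(pixels) >= green_threshold and do_drop >= do_threshold

print(is_green(40, 180, 60))   # a green-ish pixel
```

On the actual device the per-pixel test would be done in bulk with OpenCV (`cv2.cvtColor` to HSV plus a range mask), but the thresholding logic is the same.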