• Title/Summary/Keyword: Camera Action

Search results: 121

A vision-based system for inspection of expansion joints in concrete pavement

  • Jung Hee Lee;Ibragimov Eldor;Heungbae Gil;Jong-Jae Lee
    • Smart Structures and Systems / v.32 no.5 / pp.309-318 / 2023
  • The appropriate maintenance of highway roads is critical for the safe operation of road networks and reduces maintenance costs. Multiple methods have been developed to inspect road surfaces for various types of damage, such as cracks and potholes. Like road surface damage, the condition of expansion joints in concrete pavement is important for avoiding unexpected hazardous situations. Thus, in this study, a new system is proposed for autonomous expansion joint monitoring using a vision-based system. The system consists of three key parts: (1) a camera-mounted vehicle, (2) indication marks on the expansion joints, and (3) a deep learning-based automatic evaluation algorithm. Paired marks indicating the expansion joints in a concrete pavement allow the joints to be detected automatically. An inspection vehicle is equipped with an action camera that acquires images of the expansion joints in the road. You Only Look Once (YOLO) automatically detects the expansion joints from the indication marks, with a detection accuracy of 95%. The width of the detected expansion joint is calculated using an image processing algorithm, and based on the calculated width, the joint is classified as one of two types: normal or dangerous. The obtained results demonstrate that the proposed system is very efficient in terms of speed and accuracy.
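The width-to-class step described in this abstract can be sketched in a few lines. The mark coordinates, mm-per-pixel scale, and the 50 mm danger threshold below are illustrative assumptions, not values from the paper:

```python
# Sketch of the width-based joint classification step (assumed values).

def joint_width_mm(left_mark_x, right_mark_x, mm_per_pixel):
    """Convert the pixel gap between paired indication marks to millimetres."""
    return abs(right_mark_x - left_mark_x) * mm_per_pixel

def classify_joint(width_mm, danger_threshold_mm=50.0):
    """Label a joint 'dangerous' when its measured width exceeds the threshold."""
    return "dangerous" if width_mm > danger_threshold_mm else "normal"

# Example: marks detected 120 px apart at an assumed 0.5 mm/px scale -> 60 mm.
width = joint_width_mm(100, 220, 0.5)
label = classify_joint(width)
```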

Real-Time CCTV Based Garbage Detection for Modern Societies using Deep Convolutional Neural Network with Person-Identification

  • Syed Muhammad Raza;Syed Ghazi Hassan;Syed Ali Hassan;Soo Young Shin
    • Journal of information and communication convergence engineering / v.22 no.2 / pp.109-120 / 2024
  • Trash or garbage is one of the most serious health and environmental problems, and the pollution it causes affects nature, human life, and wildlife. In this paper, we propose a modern solution for ridding the environment of trash pollution by enforcing strict action against people who dump trash inappropriately on streets, outside homes, and in other unsuitable places. Artificial Intelligence (AI), especially Deep Learning (DL), has been used to automate and solve problems across the world, and we took this as an opportunity to develop a system that identifies trash using a deep convolutional neural network (CNN). This paper proposes a real-time garbage identification system based on a deep CNN architecture with eight distinct classes in the training dataset. After the garbage is identified, the CCTV camera captures a video of the individual placing the trash in the incorrect location and sends an alert notice to the relevant authority.
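The classify-then-alert logic described here can be sketched as a small decision step. The class names, score values, and confidence threshold are invented for illustration; the abstract does not enumerate the paper's eight classes:

```python
# Minimal sketch of the post-classification alert decision (assumed labels).

GARBAGE_CLASSES = {"plastic", "paper", "metal", "organic"}  # hypothetical labels

def top_class(scores):
    """Return the label with the highest CNN score."""
    return max(scores, key=scores.get)

def should_alert(scores, threshold=0.8):
    """Alert only when a garbage class is predicted with high confidence."""
    label = top_class(scores)
    return label in GARBAGE_CLASSES and scores[label] >= threshold

frame_scores = {"plastic": 0.92, "person": 0.05, "background": 0.03}
alert = should_alert(frame_scores)  # a real system would then notify the authority
```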

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications / v.35 no.12 / pp.780-788 / 2008
  • Modeling hand poses and tracking their movement are among the challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras from which images are captured: one captures images from multiple cameras or a stereo camera, while the other captures images from a single camera. The former approach is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper, we propose a method for reconstructing 3D hand poses from a 2D image sequence captured by a single camera by means of Belief Propagation in a graphical model, and for recognizing a finger clicking motion using a hidden Markov model. We define a graphical model with hidden nodes representing the joints of a hand and observable nodes holding the features extracted from the 2D input image sequence. To track hand poses in 3D, we use a Belief Propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract the motion information for each finger, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, and the results showed a high recognition rate of 94.66% on 300 test samples.
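The HMM decoding stage described above can be illustrated with a toy Viterbi decoder. The states ("rest"/"click"), observations, and probabilities below are invented for illustration; the paper's actual model is trained on extracted finger-motion features:

```python
# Toy Viterbi decoder for a two-state finger-action HMM (assumed parameters).

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at this step.
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(V[-1], key=V[-1].get)
    return path[best]

states = ("rest", "click")
start_p = {"rest": 0.8, "click": 0.2}
trans_p = {"rest": {"rest": 0.7, "click": 0.3},
           "click": {"rest": 0.4, "click": 0.6}}
emit_p = {"rest": {"still": 0.9, "move": 0.1},
          "click": {"still": 0.2, "move": 0.8}}
seq = viterbi(["still", "move", "move"], states, start_p, trans_p, emit_p)
```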

Multiple Pedestrians Detection using Motion Information and Support Vector Machine from a Moving Camera Image (이동 카메라 영상에서 움직임 정보와 Support Vector Machine을 이용한 다수 보행자 검출)

  • Lim, Jong-Seok;Park, Hyo-Jin;Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing / v.12 no.4 / pp.250-257 / 2011
  • In this paper, we propose a method for detecting multiple pedestrians in images from a moving camera using motion information and an SVM (Support Vector Machine). First, we detect moving pedestrians from the difference image and the projection histogram, which are compensated for the camera's ego-motion using corresponding feature sets. The difference image is a simple method, but it cannot detect motionless pedestrians. To address this problem, we detect motionless pedestrians using an SVM, which works particularly well in binary classification problems such as pedestrian detection. However, the SVM fails when pedestrians are adjacent to each other or move their arms and legs excessively in the image. Therefore, we propose a method that detects motionless and adjacent pedestrians, as well as people making excessive movements, by combining motion information with the SVM. Experiments on various test video sequences demonstrated the high efficiency of our approach, with an average detection rate of 94% and a false positive rate of 2.8%.
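The difference-image and projection-histogram steps mentioned above can be sketched in pure Python on tiny grayscale grids. Ego-motion compensation is omitted and the threshold is an illustrative value, not the paper's:

```python
# Sketch of frame differencing plus a column projection histogram
# (no ego-motion compensation; threshold is an assumed value).

def difference_mask(prev, curr, thresh=20):
    """Binary motion mask from two equally sized grayscale frames (2-D lists)."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def vertical_projection(mask):
    """Column-wise sum of the mask; peaks suggest candidate pedestrian columns."""
    return [sum(col) for col in zip(*mask)]

prev = [[0, 0, 0],
        [0, 0, 0]]
curr = [[0, 120, 0],
        [0, 130, 0]]   # motion only in the middle column
hist = vertical_projection(difference_mask(prev, curr))
```

In the paper's pipeline, columns with low projection values (e.g. a motionless pedestrian) would then be handed to the SVM classifier instead.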

Occluded Object Motion Tracking Method based on Combination of 3D Reconstruction and Optical Flow Estimation (3차원 재구성과 추정된 옵티컬 플로우 기반 가려진 객체 움직임 추적방법)

  • Park, Jun-Heong;Park, Seung-Min;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.5 / pp.537-542 / 2011
  • A mirror neuron is a neuron that fires both when an animal acts and when it observes the same action performed by another. We propose a 3D reconstruction method for tracking the motion of occluded objects, inspired by the way the Mirror Neuron System fires even when the action is hidden. To model a system that recognizes intention through such firing effects, we calculate depth information from the image pair of a stereo camera and reconstruct the scene in three dimensions. The movement direction of an object is estimated by computing optical flow over the reconstructed 3D image data. To enable tracking of occluded regions, stereo image data is first acquired from the camera, and the optical flow result is made robust to noise by a Kalman filter estimation algorithm. The reconstructed 3D images obtained during motion tracking are saved as a history. When the whole or part of an object is hidden from the stereo camera by other objects, the object is restored from the saved image history and its motion continues to be tracked.
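The Kalman-filter smoothing mentioned above can be illustrated with a minimal scalar filter. The constant-state model and the noise values are assumptions for illustration; the paper filters 2-D optical-flow vectors:

```python
# Minimal 1-D Kalman filter smoothing noisy flow measurements (assumed noise).

def kalman_smooth(measurements, q=1e-3, r=0.5):
    """Smooth a sequence of noisy scalar flow measurements."""
    x, p = measurements[0], 1.0     # state estimate and its variance
    out = [x]
    for z in measurements[1:]:
        p += q                      # predict: variance grows by process noise q
        k = p / (p + r)             # Kalman gain weighs estimate vs measurement
        x += k * (z - x)            # correct with the new measurement
        p *= (1 - k)
        out.append(x)
    return out

smoothed = kalman_smooth([1.0, 1.4, 0.6, 1.1, 0.9])
```

Each output is a convex combination of the previous estimate and the new measurement, so the smoothed track stays within the measured range while damping jitter.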

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security / v.24 no.6 / pp.161-170 / 2024
  • With technological advances, humans have made great progress in ease of living, and sight, motion, sound, and speech are now used to control various applications and software. In this paper, we explore a project in which gestures play the central role: controlling computer settings with hand gestures using computer vision. We create a module that acts as a volume control program, in which hand gestures adjust the system volume, implemented with OpenCV. The module uses the computer's web camera to record images or videos, processes them to extract the needed information, and then, based on the input, acts on the computer's volume settings; it can both increase and decrease the volume. The only setup required is a web camera to capture the input images and videos provided by the user. The program performs gesture recognition with OpenCV, Python, and their libraries, identifies the specified human gestures, and uses them to carry out the changes in the device settings. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV, a widely used tool for image processing and computer vision, enjoys extensive popularity in this domain: its community consists of over 47,000 individuals, and as of a survey conducted in 2020, the estimated number of downloads exceeded 18 million.
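A common way to implement such a gesture-to-volume mapping is to measure the thumb-to-index fingertip distance and map it linearly to a volume level. The pixel calibration range (20-200 px) below is an illustrative assumption, not a value from the paper:

```python
# Sketch of mapping a pinch distance to a volume level (assumed calibration).
import math

def fingertip_distance(p1, p2):
    """Euclidean distance between two (x, y) landmark points in pixels."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def distance_to_volume(d, d_min=20.0, d_max=200.0):
    """Linearly map the pinch distance to a 0-100 volume level, clamped."""
    t = (d - d_min) / (d_max - d_min)
    return round(100 * min(1.0, max(0.0, t)))

d = fingertip_distance((100, 100), (100, 210))  # fingertips 110 px apart
vol = distance_to_volume(d)
```

In the full system, the landmark points would come from per-frame hand detection on the webcam feed, and `vol` would be pushed to the OS audio mixer.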

Development of Intelligent CCTV System Using CNN Technology (CNN 기술을 사용한 지능형 CCTV 개발)

  • Do-Eun Kim;Hee-Jin Kong;Ji-Hu Woo;Jae-Moon Lee;Kitae Hwang;Inhwan Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.99-105 / 2023
  • In this paper, an intelligent CCTV system was designed and experimentally developed using an IoT device (a Raspberry Pi) and artificial intelligence technology. Object detection was used to count the number of people on the CCTV screen, and the action detection provided by OpenPose was used to detect emergency situations. The proposed system is structured as CCTV, server, and client: the CCTV node uses a Raspberry Pi with a USB camera, the server runs Linux, and the client is an iPhone. Communication between the subsystems was implemented using the MQTT protocol. The system, developed as a prototype, could transmit images at 2.7 frames per second and detect emergencies from the images at 0.2 frames per second.
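A detection result published over MQTT is typically a small serialized message. The topic layout and JSON fields below are assumptions; the abstract only states that the subsystems communicate via MQTT:

```python
# Sketch of the alert message a CCTV node might publish (assumed schema).
import json

def alert_payload(camera_id, person_count, emergency):
    """Serialize one detection result for publication to the server."""
    return json.dumps({
        "camera": camera_id,
        "persons": person_count,
        "emergency": emergency,
    }, sort_keys=True)

topic = "cctv/cam01/alerts"                 # hypothetical topic naming scheme
payload = alert_payload("cam01", 3, True)
# A client library (e.g. paho-mqtt) would then publish(topic, payload).
```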

Robust Control of Robot Manipulators using Vision Systems

  • Lee, Young-Chan;Jie, Min-Seok;Lee, Kang-Woong
    • Journal of Advanced Navigation Technology / v.7 no.2 / pp.162-170 / 2003
  • In this paper, we propose a robust controller for the trajectory control of n-link robot manipulators using feature-based visual feedback. In order to reduce the tracking error of the robot manipulator due to parametric uncertainties, integral action is included in the dynamic control part of the inner control loop. The desired trajectory for tracking is generated from features extracted by the camera mounted on the end effector. The stability of the robust state feedback control system is shown by the Lyapunov method. Simulation and experimental results on a 5-link robot manipulator with two degrees of freedom show that the proposed method has good tracking performance.
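The benefit of the integral action described above can be shown on a toy plant: under a constant disturbance, a proportional-only loop settles with a steady-state offset, while adding an integral term drives the error toward zero. The plant, gains, and disturbance below are illustrative, not the paper's manipulator model:

```python
# Toy simulation: integral action removes steady-state error (assumed plant).

def simulate(ki, kp=2.0, disturbance=1.0, steps=2000, dt=0.01):
    """First-order plant x' = u - disturbance, driven to setpoint 1.0."""
    x, integral = 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x                 # tracking error
        integral += e * dt          # integral of the error
        u = kp * e + ki * integral  # PI control law
        x += (u - disturbance) * dt # Euler step of the plant
    return 1.0 - x                  # final tracking error

err_p  = simulate(ki=0.0)   # proportional only: offset of disturbance/kp remains
err_pi = simulate(ki=5.0)   # with integral action: error driven toward zero
```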


Robot Fish Tracking Control using an Optical Flow Object-detecting Algorithm

  • Shin, Kyoo Jae
    • IEIE Transactions on Smart Processing and Computing / v.5 no.6 / pp.375-382 / 2016
  • This paper realizes motion control of a swimming robot fish in order to implement an underwater robot fish aquarium, including positional control along a two-axis trajectory path within the aquarium. The performance of the robot was verified through certified field tests, with experimental results showing excellent driving force, durability, and water resistance. The robot's motion is controlled by recognizing objects with an optical flow object-detecting algorithm that uses a video camera rather than image-detecting sensors inside the robot fish. The robot's position can be found and its motion controlled through a radio frequency (RF) modem driven from a personal computer. This paper thus proposes robot fish motion-tracking control using the optical flow object-detecting algorithm, verified via performance tests of lead-lag action control of the robot fish in the aquarium.
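The "lead-lag action control" mentioned above refers to a first-order lead-lag compensator. A discrete form can be sketched as a one-pole, one-zero difference equation; the pole and zero locations below are illustrative, not the paper's tuned values:

```python
# Sketch of a discrete first-order lead-lag compensator (assumed pole/zero).

def make_lead_lag(z0, p0):
    """Compensator y[k] = p0*y[k-1] + u[k] - z0*u[k-1] (zero z0, pole p0)."""
    state = {"u": 0.0, "y": 0.0}
    def step(u):
        y = p0 * state["y"] + u - z0 * state["u"]
        state["u"], state["y"] = u, y
        return y
    return step

comp = make_lead_lag(z0=0.5, p0=0.9)
out = [comp(1.0) for _ in range(3)]  # first samples of the step response
```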

Planar Motion of a Rigid Part Being Striked (타격되는 강체 부품의 평면 거동)

  • 박상욱;한인환
    • Proceedings of the Korean Society of Precision Engineering Conference / 1996.11a / pp.787-792 / 1996
  • The method of manipulating a part by striking it and letting it slide until it comes to rest has been studied very little, although such manipulation is not uncommon in our daily lives. We analyze the dynamic behavior of a rigid polygonal part that is struck and then slides on a horizontal surface under the action of friction. The problem has two parts: the impact problem and the sliding problem. We characterize the impact and sliding dynamics with friction for polygonal parts, and present the possibility of reverse calculation for the motion planning of striking operations. The computer simulation results are verified experimentally using a high-speed video camera.
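The sliding phase described above has a simple closed form for a pure translation: a part leaving the impact with speed v0 decelerates at mu*g and travels d = v0^2 / (2*mu*g) before resting. The friction coefficient and speed below are illustrative values, and rotation (which the paper also treats) is ignored:

```python
# Worked example of Coulomb-friction sliding to rest (translation only).

G = 9.81  # gravitational acceleration, m/s^2

def sliding_distance(v0, mu):
    """Distance travelled before rest: d = v0^2 / (2 * mu * g)."""
    return v0 ** 2 / (2 * mu * G)

def rest_time(v0, mu):
    """Time to stop under constant deceleration mu * g."""
    return v0 / (mu * G)

d = sliding_distance(2.0, 0.3)   # ~0.68 m for v0 = 2 m/s, mu = 0.3
t = rest_time(2.0, 0.3)          # ~0.68 s
```

Inverting these relations (given a desired resting distance, solve for v0 and hence the strike) is the "reverse calculation" the abstract alludes to.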
