• Title/Abstract/Keyword: Vision Information


Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • Vol. 19, No. 6
    • /
    • pp.730-744
    • /
    • 2023
  • This paper addresses the recognition of human activities from egocentric vision, in particular video captured by body-worn cameras, which is useful for video surveillance, automatic search, and video indexing. It can also assist elderly and frail persons, with the potential to substantially improve their lives. Activity recognition remains challenging because of the large variations in how actions are executed, especially when recognition is performed through an external device, such as a robot acting as a personal assistant: the inferred information is used both online, to assist the person, and offline, to support the personal assistant. The main purpose of this paper is to present a simple and efficient recognition method, robust to the variability of action execution, that uses only egocentric camera data with convolutional neural networks and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric plus several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
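
As a toy illustration of the convolutional pipeline (convolution with ReLU, global pooling, class scores), here is a minimal sketch in plain Python; the kernels, weights, and activity labels are assumptions for illustration, not the paper's network:

```python
def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation) with ReLU, on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(s, 0.0))  # ReLU activation
        out.append(row)
    return out

def global_average(feature_map):
    """Global average pooling: one scalar per feature map."""
    vals = [v for row in feature_map for v in row]
    return sum(vals) / len(vals)

def classify(frame, kernels, weights, labels):
    """One conv layer -> global average pooling -> linear scores -> argmax."""
    features = [global_average(conv2d(frame, k)) for k in kernels]
    scores = [sum(f * w for f, w in zip(features, ws)) for ws in weights]
    return labels[scores.index(max(scores))]
```

A real egocentric model would stack many such layers and learn the kernels and weights from labeled video.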

Tracking Control of a Moving Target Using a Robot Vision System

  • Kim, Dong-Hwan;Cheon, Gyung-Il
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 2001년도 ICCAS
    • /
    • pp.77.5-77
    • /
    • 2001
  • A robot vision system, with the visual skill to acquire information about an arbitrary target or object, has been applied to automatic inspection and assembly systems. It catches a moving target with the manipulator using information from the vision system; the robot needs to know where the moving object will be after a certain time. The camera is fixed on the robot manipulator rather than on a fixed support outside the robot, which secures a wider working area than a fixed camera and allows automatic scanning of the object. The system computes the object's center, angle, and speed from the vision data and can estimate the grasping spot from the arrival time. When the location ...
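
The prediction step described above can be sketched as a constant-velocity extrapolation from two observed object centers; the coordinates and timing below are hypothetical:

```python
def centroid(points):
    """Object center as the mean of its detected (x, y) pixel points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def predict_position(c0, c1, dt, arrival_time):
    """Where the target will be after arrival_time, assuming constant
    velocity estimated from two centers observed dt seconds apart."""
    vx = (c1[0] - c0[0]) / dt
    vy = (c1[1] - c0[1]) / dt
    return (c1[0] + vx * arrival_time, c1[1] + vy * arrival_time)
```

The manipulator would then be commanded toward the predicted grasp point rather than the target's current position.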


Intelligent User Pattern Recognition based on Vision, Audio and Activity for Abnormal Event Detections of Single Households

  • Jung, Ju-Ho;Ahn, Jun-Ho
    • 한국컴퓨터정보학회논문지
    • /
    • Vol. 24, No. 5
    • /
    • pp.59-66
    • /
    • 2019
  • According to KT telecommunication statistics, people stay inside their houses an average of 11.9 hours a day, and according to NSC statistics in the United States, people of all ages are injured for a variety of reasons in their homes. For the purposes of this research, we investigated an abnormal event detection algorithm to classify infrequently occurring behaviors, such as accidents and health emergencies, in daily life. We propose a fusion method that combines three classification algorithms, based on vision, audio, and activity patterns, to detect unusual user events. The vision pattern algorithm identifies people and objects in video data collected from home CCTV. The audio and activity pattern algorithms classify user audio and activity behaviors using data collected from the built-in sensors of the users' smartphones in their homes. We evaluated the proposed individual pattern algorithms and the fusion method on multiple scenarios.
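
A minimal sketch of such a fusion step, assuming each modality emits an abnormality score in [0, 1] and using hypothetical weights and threshold (the paper's actual combination rule may differ):

```python
def fuse(vision, audio, activity, weights=(0.5, 0.25, 0.25), threshold=0.5):
    """Weighted combination of per-modality abnormality scores.
    Each input is that modality's estimated probability of an abnormal event;
    the weights and threshold here are illustrative, not tuned values."""
    score = weights[0] * vision + weights[1] * audio + weights[2] * activity
    return "abnormal" if score >= threshold else "normal"
```

In practice the weights would be learned or tuned on labeled scenarios rather than fixed by hand.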

전방향 구동 로봇에서의 비젼을 이용한 이동 물체의 추적 (Moving Target Tracking using Vision System for an Omni-directional Wheel Robot)

  • 김산;김동환
    • 제어로봇시스템학회논문지
    • /
    • Vol. 14, No. 10
    • /
    • pp.1053-1061
    • /
    • 2008
  • In this paper, moving target tracking using binocular vision for an omni-directional mobile robot is addressed. In the binocular vision, three-dimensional information on the target is extracted through vision processes including calibration, image correspondence, and 3D reconstruction. The robot controller uses SPI (serial peripheral interface) to communicate effectively between the robot master controller and the wheel controllers.
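
The 3D reconstruction step in a rectified binocular setup reduces to triangulation from disparity; this sketch assumes rectified images and hypothetical camera parameters (focal length in pixels, baseline in meters), not the paper's actual calibration:

```python
def triangulate(xl, xr, y, f, baseline, cx=0.0, cy=0.0):
    """Recover a 3D point from a matched pixel in a rectified stereo pair.
    disparity d = xl - xr; depth Z = f * baseline / d; X and Y follow
    from the pinhole model with principal point (cx, cy)."""
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: point at or behind infinity")
    Z = f * baseline / d
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return (X, Y, Z)
```

Calibration supplies f, the baseline, and the principal point; image correspondence supplies the matched columns xl and xr.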

시나리오기반 IT 미래전략연구의 공학적 접근법 (Scenario based Information Technology Future Strategy approach to Engineering)

  • 류동현;박정용;이우진
    • 한국정보통신학회논문지
    • /
    • Vol. 14, No. 10
    • /
    • pp.2171-2179
    • /
    • 2010
  • Countries around the world, keeping pace with the 21st-century megatrend of technology convergence, are devoting themselves to establishing future visions and strategies in information and communication technology (IT). To increase national wealth and create new industries, they are conducting research based on engineering scenarios such as WWRF (Wireless World Research Forum), AmI (Ambient Intelligence), and mITF (mobile IT Forum). In Korea, however, vision research and strategy formulation are still at an early stage and have been judged insufficient for responding proactively to future strategy planning. This paper therefore analyzes cases of engineering-oriented future strategy research in advanced countries that use scenario techniques, and proposes an engineering scenario-based vision research method.

A Platform-Based SoC Design for Real-Time Stereo Vision

  • Yi, Jong-Su;Park, Jae-Hwa;Kim, Jun-Seong
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • Vol. 12, No. 2
    • /
    • pp.212-218
    • /
    • 2012
  • A stereo vision system can build three-dimensional maps of its environment. It provides much more complete information than a 2D image-based vision system but has to process, at least, that much more data. In the past decade, real-time stereo has become a reality: some solutions are based on reconfigurable hardware and others rely on specialized hardware, but they are designed for specific applications and their functionality is difficult to extend. This paper describes a vision system based on a System on a Chip (SoC) platform. A real-time stereo image correlator is implemented using the Sum of Absolute Differences (SAD) algorithm and is integrated into the vision system using the AMBA bus protocol. Since the system is designed on a pre-verified platform, its functionality can easily be extended, increasing design productivity. Simulation results show that the vision system is suitable for various real-time applications.
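
The SAD correlator at the heart of the system can be sketched in software as a window comparison along the epipolar line; the window size, image contents, and search range below are illustrative assumptions, not the hardware implementation:

```python
def sad(left, right, row, col_l, col_r, win):
    """Sum of absolute differences between two (2*win+1)^2 windows."""
    total = 0
    for dr in range(-win, win + 1):
        for dc in range(-win, win + 1):
            total += abs(left[row + dr][col_l + dc] - right[row + dr][col_r + dc])
    return total

def best_disparity(left, right, row, col, max_disp, win=1):
    """Pick the disparity minimizing SAD along the epipolar line (same row),
    searching leftward in the right image as is usual for rectified pairs."""
    costs = []
    for d in range(max_disp + 1):
        if col - d - win < 0:  # window would fall off the image
            break
        costs.append((sad(left, right, row, col, col - d, win), d))
    return min(costs)[1]
```

A hardware correlator evaluates all candidate disparities in parallel; the winner-take-all minimum is the same.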

원격 컴퓨터 비전 실습 사례연구 (A Case Study on Remote Computer Vision Laboratory)

  • 이성열
    • 한국산업정보학회논문지
    • /
    • Vol. 12, No. 2
    • /
    • pp.60-67
    • /
    • 2007
  • This study presents a case study of computer vision laboratory exercises for online education in image processing and pattern recognition techniques. The laboratory content covers remote image acquisition, basic image processing and pattern recognition methods, lens and lighting selection, and communication. As a case study of computer vision laboratory education in a remote learning environment, it introduces how to build a remote laboratory environment and presents image processing exercises. The emphasis of this study is on laboratory content and methods suited to a remote environment rather than on building the Internet infrastructure itself. Finally, ways to improve online computer vision laboratory work and topics for further research are suggested.


MEAN Stack 기반의 컴퓨터 비전 플랫폼 설계 (Computer Vision Platform Design with MEAN Stack Basis)

  • 홍선학;조경순;윤진섭
    • 디지털산업정보학회논문지
    • /
    • Vol. 11, No. 3
    • /
    • pp.1-9
    • /
    • 2015
  • In this paper, we implement a computer vision platform based on the MEAN stack on a Raspberry Pi 2, an open-source hardware platform. We experiment with face recognition and with temperature and humidity sensor data logging over WiFi on the Raspberry Pi 2, and we fabricate the platform enclosure directly with 3D printing. Face recognition uses OpenCV with a Haar-cascade feature-extraction machine learning algorithm, and wireless connectivity is extended with Bluetooth to interface with Android mobile devices. The platform thus identifies faces scanned with the Pi camera while gathering temperature and humidity sensor data in an IoT environment. We chose MongoDB to improve the platform's performance because working with MongoDB is more akin to working with objects in a programming language than with a conventional database. In future work, we plan to enhance the platform with cloud functionality.
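
The sensor-logging side described above, readings stored as documents in a MongoDB-style collection, can be sketched with an in-memory stand-in; the field names and the `MemoryCollection` class are illustrative assumptions, not the platform's actual schema or a real MongoDB driver:

```python
import time

def make_reading(temp_c, humidity_pct, face_detected):
    """One sensor/vision reading as a document (assumed field names),
    the object-like shape that maps naturally onto a MongoDB document."""
    return {"ts": time.time(), "temp_c": temp_c,
            "humidity_pct": humidity_pct, "face_detected": face_detected}

class MemoryCollection:
    """Minimal in-memory stand-in for a document collection
    (insert_one / find subset), used here only for illustration."""
    def __init__(self):
        self.docs = []

    def insert_one(self, doc):
        self.docs.append(doc)

    def find(self, predicate):
        return [d for d in self.docs if predicate(d)]
```

With a real deployment, the same documents would go to a MongoDB collection and be served to the web front end through the rest of the MEAN stack.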

Intelligent Hybrid Fusion Algorithm with Vision Patterns for Generation of Precise Digital Road Maps in Self-driving Vehicles

  • Jung, Juho;Park, Manbok;Cho, Kuk;Mun, Cheol;Ahn, Junho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 10
    • /
    • pp.3955-3971
    • /
    • 2020
  • Due to the significant increase in the use of autonomous car technology, it is essential to integrate this technology with high-precision digital map data containing more precise and accurate roadway information than existing conventional map resources, to ensure the safety of self-driving operations. While existing map technologies may assist vehicles in identifying their locations via the Global Positioning System, it is difficult to keep the environmental changes of roadways updated in these maps. Roadway vision algorithms can be useful for building autonomous vehicles that avoid accidents and detect real-time location changes. We incorporate a hybrid architectural design that combines unsupervised classification of vision data with supervised joint fusion classification to achieve a better noise-resistant algorithm. Via a deep learning approach, we identify an intelligent hybrid fusion algorithm for fusing multimodal vision feature data for roadway classification and characterize its improvement in accuracy over unsupervised identification using image processing and supervised vision classifiers. We analyzed over 93,000 vision frames collected from a test vehicle on real roadways. The performance of the proposed hybrid fusion algorithm is successfully evaluated for the generation of digital roadway maps for autonomous vehicles, with a recall of 0.94, precision of 0.96, and accuracy of 0.92.
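
The reported figures (recall 0.94, precision 0.96, accuracy 0.92) follow from standard confusion-matrix counts; the counts below are illustrative, chosen only to show the arithmetic, not the paper's data:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, and accuracy from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy
```

High precision with somewhat lower recall, as reported, means few false detections at the cost of some missed roadway features.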

Line Type 디지털 항공사진측량 카메라 영상의 컴퓨터비전 해석을 통한 고품질 공간정보 생성 (Generation of High Quality Geospatial Information Using Computer Vision Analysis of Line Type Digital Aerial Photogrammetry Camera Imagery)

  • 이현직
    • 한국지리정보학회지
    • /
    • Vol. 23, No. 1
    • /
    • pp.41-50
    • /
    • 2020
  • The National Geographic Information Institute of Korea acquires digital aerial photographs every two years for orthoimage production and the revision/updating of digital maps. The aerial photogrammetric cameras used to acquire such imagery are classified as frame type or line type, and computer vision analysis of aerial imagery had previously been possible only for frame-type cameras. This study therefore generated geospatial information from line-type aerial imagery through computer vision analysis and, as an application, produced forest geospatial information. The results showed that the RMSE of the horizontal and vertical position errors of the geospatial information generated by computer vision analysis of line-type imagery was within four times the GSD. Forest geospatial information was generated from these data, and it was confirmed that tree crown shapes could be extracted and tree heights estimated. This study is expected to enhance the usability of aerial photographic imagery.
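
The acceptance criterion above, positional RMSE within four times the ground sample distance (GSD), can be checked as follows; the error values and the factor parameter default are illustrative:

```python
def rmse(errors):
    """Root-mean-square error of a list of positional errors (same unit as GSD)."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

def within_tolerance(errors, gsd, factor=4.0):
    """Acceptance check used in the study: positional RMSE <= factor * GSD."""
    return rmse(errors) <= factor * gsd
```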