• Title/Summary/Keyword: intelligent vision


Smart Vision Sensor for Satellite Video Surveillance Sensor Network (위성 영상감시 센서망을 위한 스마트 비젼 센서)

  • Kim, Won-Ho;Im, Jae-Yoo
    • Journal of Satellite, Information and Communications / v.10 no.2 / pp.70-74 / 2015
  • In this paper, a satellite-communication-based video surveillance system composed of ultra-small aperture terminals with compact smart vision sensors is proposed. Events such as forest fire, smoke, and intruder movement are detected automatically in the field, and false alarms are minimized by using intelligent, highly reliable video analysis algorithms. The smart vision sensor must satisfy requirements for high confidence, hardware endurance, seamless communication, and easy maintenance. To meet these requirements, a real-time digital signal processor, a camera module, and a satellite transceiver are integrated into a smart-vision-sensor-based ultra-small aperture terminal, and high-performance video analysis and image coding algorithms are embedded. The video analysis functions and their performance were verified, and their practicality confirmed, through computer simulation and tests on a vision sensor prototype.
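The abstract describes the on-board video analysis only at a high level. As a rough illustration of the kind of motion/event detection stage such a smart vision sensor might run, here is a minimal frame-differencing sketch in Python; the threshold, frame size, and changed-area test are assumptions for the example, not values from the paper.

```python
import numpy as np

def motion_event(prev_frame: np.ndarray, frame: np.ndarray,
                 pixel_thresh: int = 25, area_ratio: float = 0.01) -> bool:
    """Return True if enough pixels changed between two grayscale frames.

    A crude stand-in for the paper's (unpublished) intelligent video
    analysis: per-pixel absolute difference, then a changed-area test.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed_fraction = (diff > pixel_thresh).mean()
    return changed_fraction > area_ratio

# Synthetic demo: a static scene, then a bright "intruder" blob appears.
prev = np.full((120, 160), 80, dtype=np.uint8)
curr = prev.copy()
curr[40:70, 60:100] = 200
print(motion_event(prev, curr))  # True
```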

Implementation of the Intelligent MUX System for Green USN (녹색 유비쿼터스 지능형 다중화장비의 구현)

  • Kang, Jeong-Jin;Chang, Hark-Sin;Lee, Young-Chul
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.10 no.1 / pp.1-5 / 2010
  • With the specialization of the Green Information Technology (G-IT) and Green Ubiquitous Technology (G-UT) industries, security and crime prevention at major national institutions and major industrial facilities have become increasingly important, and security systems linked with high technology are urgently required for government-related organizations, enterprises, and the military. This paper describes a green-USN intelligent unmanned guard MUX system that receives signals from alarm devices within a surveillance area over various communication techniques and then transmits them to a local control center and a remote control server through a TCP/IP network. The study realizes a total solution for an unmanned guard system, creating a mutual synergy effect, and contributes significantly to domestic and global market expansion. The system can be applied to crime prevention and security in Green Ubiquitous Environment businesses (surveillance-Home, Gu-City, Gu-Health, etc.), and is expected to help internationally competitive companies deliver the Green Ubiquitous Vision (Gu-Vision).
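The paper describes the relay step only architecturally: alarm signals collected in the surveillance area are forwarded to a local control center and a remote server over TCP/IP. The sketch below is a hypothetical, minimal version of that forwarding step; the host, port, and JSON message format are assumptions, not the system's actual protocol.

```python
import json
import socket

def forward_alarm(event: dict, host: str = "127.0.0.1", port: int = 9000) -> None:
    """Forward one alarm event to a control server as a JSON line over TCP.

    Purely illustrative: the real MUX multiplexes many sensor links and
    uses its own (unpublished) message format.
    """
    payload = (json.dumps(event) + "\n").encode("utf-8")
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)

# Example event; raises ConnectionRefusedError unless a server is listening.
# forward_alarm({"sensor_id": 7, "type": "intrusion", "ts": 1700000000})
```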

Monocular Vision Based Localization System using Hybrid Features from Ceiling Images for Robot Navigation in an Indoor Environment (실내 환경에서의 로봇 자율주행을 위한 천장영상으로부터의 이종 특징점을 이용한 단일비전 기반 자기 위치 추정 시스템)

  • Kang, Jung-Won;Bang, Seok-Won;Atkeson, Christopher G.;Hong, Young-Jin;Suh, Jin-Ho;Lee, Jung-Woo;Chung, Myung-Jin
    • The Journal of Korea Robotics Society / v.6 no.3 / pp.197-209 / 2011
  • This paper presents a localization system using ceiling images in a large indoor environment. For a system with low cost and complexity, we propose a single-camera-based system that utilizes ceiling images acquired from a camera installed to point upwards. For reliable operation, we propose a method using hybrid features, which include natural landmarks in the natural scene and artificial landmarks observable in the infrared domain. Compared with previous works utilizing only infrared-based features, our method reduces the required number of artificial features, as we exploit both natural and artificial features. In addition, compared with previous works using only the natural scene, our method has an advantage in convergence speed and robustness, as an observation of an artificial feature provides a crucial clue for robot pose estimation. In an experiment with challenging situations in a real environment, our method performed impressively in terms of robustness and accuracy. To our knowledge, our method is the first ceiling-vision-based localization method using features from both the visible and infrared domains. Our system can be easily utilized in a variety of service robot applications in a large indoor environment.
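The estimator used in the paper is not reproduced here, but the core step of any such ceiling-vision localizer, recovering a planar pose from landmarks whose map positions are known, can be illustrated generically. The sketch below aligns landmark positions measured in the robot frame with their known map coordinates using a 2D SVD-based rigid fit; the landmark coordinates are made up for the example.

```python
import numpy as np

def estimate_pose_2d(measured: np.ndarray, mapped: np.ndarray):
    """Least-squares 2D rotation R and translation t with mapped ~ R @ measured + t.

    measured: (N, 2) landmark positions observed in the robot frame.
    mapped:   (N, 2) the same landmarks' known map coordinates.
    """
    mu_m, mu_g = measured.mean(axis=0), mapped.mean(axis=0)
    H = (measured - mu_m).T @ (mapped - mu_g)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_g - R @ mu_m
    return R, t

# Toy check: robot rotated 30 degrees and shifted by (2, 1) in the map frame.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
mapped = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [4.0, 3.0]])
measured = (mapped - np.array([2.0, 1.0])) @ R_true    # inverse transform
R, t = estimate_pose_2d(measured, mapped)
print(np.degrees(np.arctan2(R[1, 0], R[0, 0])), t)     # ~30.0, ~[2, 1]
```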

Odor Cognition and Source Tracking of an Intelligent Robot based upon Wireless Sensor Network (센서 네트워크 기반 지능 로봇의 냄새 인식 및 추적)

  • Lee, Jae-Yeon;Kang, Geun-Taek;Lee, Won-Chang
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.1 / pp.49-54 / 2011
  • In this paper, we present a mobile robot that can recognize chemical odors, measure their concentration, and track their source indoors. The mobile robot has an olfactory function: it can classify several gases used in the experiments, such as ammonia, ethanol, and their mixture, with a neural network algorithm, and measure each gas concentration with fuzzy rules. In addition, it can not only navigate to a desired position with its vision system while avoiding obstacles, but also transmit odor information and warning messages obtained from its own operation to other nodes by multi-hop communication in a wireless sensor network. We propose a method of odor classification, concentration measurement, and source tracking for a mobile robot in a wireless sensor network, using a hybrid algorithm with a vision system and gas sensors. The experimental studies demonstrate the efficiency of the proposed algorithm for odor recognition, concentration measurement, and source tracking.
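As a hedged illustration of the gas-classification step only, the sketch below trains a small multilayer perceptron on synthetic four-channel sensor-array readings labelled ammonia, ethanol, and mixture. The sensor channels, class centroids, and network size are invented for the example; the paper's sensors, network topology, and fuzzy concentration rules are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic 4-channel gas-sensor responses (arbitrary units) per class.
def sample(center, n=60):
    return center + 0.1 * rng.standard_normal((n, 4))

X = np.vstack([sample([0.8, 0.2, 0.1, 0.3]),    # "ammonia"-like signature
               sample([0.2, 0.9, 0.3, 0.1]),    # "ethanol"-like signature
               sample([0.5, 0.6, 0.2, 0.2])])   # "mixture"-like signature
y = np.repeat(["ammonia", "ethanol", "mixture"], 60)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.78, 0.22, 0.12, 0.28]]))  # -> ['ammonia']
```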

Analysis on Lightweight Methods of On-Device AI Vision Model for Intelligent Edge Computing Devices (지능형 엣지 컴퓨팅 기기를 위한 온디바이스 AI 비전 모델의 경량화 방식 분석)

  • Hye-Hyeon Ju;Namhi Kang
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.1-8 / 2024
  • On-device AI technology, which runs AI models on edge devices to support real-time processing and enhance privacy, is attracting attention. As intelligent IoT is applied to various industries, services utilizing on-device AI technology are increasing significantly. However, general deep learning models require substantial computational resources for inference and training, so various lightweighting methods, such as quantization and pruning, have been suggested for running deep learning models on embedded edge devices. Among these methods, this paper analyzes how to make deep learning models lightweight and apply them to edge computing devices, focusing on pruning. In particular, we use dynamic and static pruning techniques to evaluate the inference speed, accuracy, and memory usage of a lightweight AI vision model. The analysis presented in this paper can be used for intelligent video control systems or video security systems in autonomous vehicles, where real-time processing is highly required, and is expected to be useful across various IoT services and industries.
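Of the two approaches evaluated in the paper, static (one-shot) pruning is the simpler to illustrate. The sketch below applies L1-magnitude unstructured pruning to the convolution layers of a toy PyTorch model and reports the resulting weight sparsity; the 30% pruning ratio and the model itself are assumptions for the example, not the paper's configuration.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy vision backbone standing in for the paper's (larger) AI vision model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

# Static pruning: zero the 30% smallest-magnitude weights of each conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the sparsity permanent

convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
total = sum(m.weight.numel() for m in convs)
zeros = sum((m.weight == 0).sum().item() for m in convs)
print(f"conv weight sparsity: {zeros / total:.1%}")   # ~30%
```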

A Robust Localization and Orientation Method for Vacuum Robot Generating a Vision System (비전 기반 청소로봇 시스템에 적합한 로봇의 이동 추적 방법)

  • Jeong, Moon-Seok;Nguyen, Viet Thang;Lee, Jun-Bae;Choi, Ho-Chul;Baik, Sung-Wook
    • Proceedings of the Korean Information Science Society Conference / 2007.06c / pp.482-487 / 2007
  • A cleaning robot whose main goal is complete coverage of an indoor space must be able to recognize its own position. If it cannot localize itself properly, it will finish cleaning without covering the entire room. For cleaning robots to be commercialized, low-cost boards are preferred, which means that existing algorithms requiring complex computation either cannot be used or run too slowly. When image frames are processed slowly, the robot cannot move during processing, making smooth motion impossible. This paper proposes a system that can perform self-localization on low-end hardware. We propose the preprocessing steps required for self-localization and a method that uses the preprocessed data to compute the travel distance and rotation angle needed to estimate the robot's position. Finally, the proposed methods are compared with and analyzed against actual motion values.
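The paper does not publish its distance/rotation computation, but the flavour of a cheap estimator suited to low-end hardware can be sketched: given two image features tracked between consecutive ceiling frames, the planar rotation and translation follow in closed form. The point coordinates below are invented for the example.

```python
import math

def motion_from_two_points(p_prev, q_prev, p_curr, q_curr):
    """Planar rotation (radians) and translation from two tracked image points.

    A deliberately cheap estimator in the spirit of the paper's low-cost
    constraint: rotation is the change in direction of the segment p->q,
    translation is whatever is left after rotating p_prev by that angle.
    """
    a_prev = math.atan2(q_prev[1] - p_prev[1], q_prev[0] - p_prev[0])
    a_curr = math.atan2(q_curr[1] - p_curr[1], q_curr[0] - p_curr[0])
    theta = a_curr - a_prev
    c, s = math.cos(theta), math.sin(theta)
    tx = p_curr[0] - (c * p_prev[0] - s * p_prev[1])
    ty = p_curr[1] - (s * p_prev[0] + c * p_prev[1])
    return theta, (tx, ty)

# Toy check: two features rotated by 10 degrees about the origin, then shifted by (5, -2).
th = math.radians(10)
def rot(p):
    return (p[0] * math.cos(th) - p[1] * math.sin(th),
            p[0] * math.sin(th) + p[1] * math.cos(th))

p0, q0 = (10.0, 0.0), (0.0, 10.0)
p1 = (rot(p0)[0] + 5.0, rot(p0)[1] - 2.0)
q1 = (rot(q0)[0] + 5.0, rot(q0)[1] - 2.0)
theta, t = motion_from_two_points(p0, q0, p1, q1)
print(math.degrees(theta), t)   # ~10.0, ~(5.0, -2.0)
```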


Real Time Motion Processing for Autonomous Navigation

  • Kolodko, J.;Vlacic, L.
    • International Journal of Control, Automation, and Systems / v.1 no.1 / pp.156-161 / 2003
  • An overview of our approach to autonomous navigation is presented, showing how motion information can be integrated into existing navigation schemes. Particular attention is given to our short-range motion estimation scheme, which utilises a number of unique assumptions regarding the nature of the visual environment, allowing a direct fusion of visual and range information. Graduated non-convexity is used to solve the resulting non-convex minimisation problem. Experimental results show the advantages of our fusion technique.
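Graduated non-convexity (GNC) itself is a general continuation strategy: start from a convex (or nearly convex) surrogate of a robust cost, solve it, and progressively restore the non-convexity while warm-starting each solve. The toy sketch below applies that idea to robust 1-D location estimation with a Geman-McClure-style weighting; the data, annealing schedule, and loss are illustrative choices, not the paper's formulation.

```python
import numpy as np

def gnc_robust_mean(x: np.ndarray, sigma_final: float = 1.0,
                    sigma_start: float = 100.0, shrink: float = 0.5) -> float:
    """Toy graduated non-convexity: robust location estimate of a 1-D sample.

    The Geman-McClure weight w = (sigma^2 / (r^2 + sigma^2))^2 is almost
    uniform for large sigma (a convex-like start) and sharply down-weights
    outliers as sigma is annealed toward sigma_final.
    """
    mu = float(np.mean(x))            # convex (least-squares) initialisation
    sigma = sigma_start
    while sigma >= sigma_final:
        r = x - mu
        w = (sigma**2 / (r**2 + sigma**2)) ** 2
        mu = float(np.sum(w * x) / np.sum(w))   # weighted least-squares step
        sigma *= shrink                          # restore non-convexity gradually
    return mu

rng = np.random.default_rng(1)
inliers = 5.0 + 0.2 * rng.standard_normal(50)
outliers = np.array([40.0, 55.0, -30.0])
print(gnc_robust_mean(np.concatenate([inliers, outliers])))  # close to 5.0
```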

Calibration of Active Binocular Vision Systems (능동 양안시 장치의 보정)

  • 도용태
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2000.11a / pp.432-435 / 2000
  • For a robot to perform intelligent tasks, the ability to sense three-dimensional space is essential, and when a vision system is used for this purpose, calibration must precede all other procedures. The important parts of the actual calibration process are acquiring control points and their image points, and using them to determine the optical and geometric parameters of the camera. This paper describes a method for conveniently acquiring a large number of 3D control points and their image coordinates using a beam projector and a computer-controlled binocular vision system. In addition, unlike a camera fixed in position, an active camera system has parameters that partly become variables; a linear calibration technique useful in this case is proposed.
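The paper's linear calibration for active cameras is not reproduced here, but the classical linear step it builds on, estimating a projection matrix from 3D control points and their image points, can be sketched with the standard direct linear transform; the synthetic camera and control points below are assumptions for the example.

```python
import numpy as np

def dlt_projection_matrix(X: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Estimate a 3x4 camera projection matrix P from n >= 6 correspondences.

    X: (n, 3) control points in world coordinates.
    x: (n, 2) their measured image coordinates.
    Classic direct linear transform: two equations per point, then take the
    right singular vector associated with the smallest singular value.
    """
    rows = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        rows.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
        rows.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)

# Toy check: project random control points with a known P, then recover it.
rng = np.random.default_rng(0)
P_true = np.array([[800.0, 0.0, 320.0, 10.0],
                   [0.0, 800.0, 240.0, 20.0],
                   [0.0, 0.0, 1.0, 2.0]])
X = rng.uniform(-1.0, 1.0, size=(20, 3))
Xh = np.hstack([X, np.ones((20, 1))])
proj = (P_true @ Xh.T).T
x = proj[:, :2] / proj[:, 2:3]
P_est = dlt_projection_matrix(X, x)
print(np.allclose(P_est / P_est[2, 3], P_true / P_true[2, 3], atol=1e-6))  # True
```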
