• Title/Summary/Keyword: Embedded Computer Vision

EVALUATION OF SPEED AND ACCURACY FOR COMPARISON OF TEXTURE CLASSIFICATION IMPLEMENTATION ON EMBEDDED PLATFORM

  • Tou, Jing Yi;Khoo, Kenny Kuan Yew;Tay, Yong Haur;Lau, Phooi Yee
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.89-93 / 2009
  • Embedded systems are becoming more popular as many embedded platforms have become more affordable, and they offer a compact solution for many different problems, including computer vision applications. Texture classification can be used to solve various problems, and implementing it on embedded platforms will help bring these applications to market. This paper proposes deploying texture classification algorithms onto an embedded computer vision (ECV) platform. Two algorithms are compared: grey level co-occurrence matrices (GLCM) and Gabor filters. Experimental results show that raw GLCM on MATLAB achieves 50 ms, making it the fastest algorithm on the PC platform. The classification speeds achieved in C on the PC and ECV platforms are 43 ms and 3708 ms, respectively. Raw GLCM achieves only 90.86% accuracy, compared with 91.06% for the combined GLCM and Gabor filter features. Overall, weighing classification speed against accuracy, raw GLCM is the more suitable choice for the ECV platform (a minimal GLCM sketch follows this entry).

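The raw GLCM approach favored above builds a co-occurrence matrix of quantized grey levels for a fixed pixel offset and then reduces it to scalar texture features. The sketch below illustrates that idea in NumPy; the paper does not state its quantization levels, offsets, or feature set, so those choices here are assumptions.

```python
# Minimal GLCM sketch; levels, offset, and features are assumed, not the paper's.
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Grey level co-occurrence matrix for one pixel offset (dx, dy)."""
    # Quantize the 8-bit image into `levels` grey levels.
    q = (image.astype(np.float64) / 256.0 * levels).astype(np.int64)
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()  # normalize to joint probabilities

def glcm_features(m):
    """Two classic Haralick-style features: contrast and energy."""
    i, j = np.indices(m.shape)
    contrast = np.sum(((i - j) ** 2) * m)
    energy = np.sum(m ** 2)
    return contrast, energy

# Usage with a synthetic 8-bit texture patch:
patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
print(glcm_features(glcm(patch)))
```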

Performance Evaluation of Efficient Vision Transformers on Embedded Edge Platforms (임베디드 엣지 플랫폼에서의 경량 비전 트랜스포머 성능 평가)

  • Minha Lee;Seongjae Lee;Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.3 / pp.89-100 / 2023
  • Recently, on-device artificial intelligence (AI) solutions using mobile devices and embedded edge devices have emerged in various fields, such as computer vision, to address network traffic burdens, low-energy operation, and security problems. Although vision transformer deep learning models have outperformed conventional convolutional neural network (CNN) models in computer vision, they require more computation and more parameters than CNN models, so they are not directly applicable to embedded edge devices with limited hardware resources. Many researchers have proposed model compression methods or lightweight architectures for vision transformers, but only a few studies have evaluated how such compression techniques affect performance. To address this gap, this paper presents a performance evaluation of vision transformers on embedded platforms. We investigated the behavior of three vision transformers: DeiT, LeViT, and MobileViT. Each model was evaluated by accuracy and inference time on edge devices using the ImageNet dataset. We assessed the effect of quantization on latency improvement and accuracy degradation by profiling the proportion of response time occupied by major operations, and we also evaluated each model on GPU- and EdgeTPU-based edge devices. In our experiments, LeViT showed the best performance on CPU-based edge devices, DeiT-small showed the highest performance improvement on GPU-based edge devices, and only the MobileViT models showed improvement on EdgeTPU. Summarizing the profiling results, the degree of improvement for each vision transformer model depended strongly on the proportion of operations that could be optimized on the target edge device. In summary, to apply vision transformers in on-device AI solutions, both proper operation composition and optimizations specific to the target edge device must be considered (a quantization-and-timing sketch follows this entry).
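
As a hedged illustration of the compression step evaluated above, the sketch below applies PyTorch post-training dynamic quantization to a generic transformer encoder and times CPU inference. The stand-in encoder is not DeiT, LeViT, or MobileViT, and the token shape merely mimics a DeiT-Ti-like input; both are assumptions for demonstration.

```python
# Dynamic quantization sketch; the model is a stand-in, not the paper's.
import time
import torch
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=192, nhead=3, dim_feedforward=768,
                               batch_first=True),
    num_layers=4,
).eval()

# Dynamic quantization swaps eligible Linear layers for int8 kernels.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

tokens = torch.randn(1, 197, 192)  # 196 patches + class token (assumed shape)

def latency_ms(m, x, iters=20):
    with torch.no_grad():
        m(x)  # warm-up
        t0 = time.perf_counter()
        for _ in range(iters):
            m(x)
    return (time.perf_counter() - t0) / iters * 1e3

print(f"fp32: {latency_ms(model, tokens):.1f} ms")
print(f"int8: {latency_ms(qmodel, tokens):.1f} ms")
```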

Kicks from The Penalty Mark of The Humanoid Robot using Computer Vision (컴퓨터 비전을 이용한 휴머노이드 로봇의 축구 승부차기)

  • Han, Chung-Hui;Lee, Jang-Hae;Jang, Se-In;Park, Choong-Shik;Lee, Ho-Jun;Moon, Seok-Hwan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.264-267 / 2009
  • Existing autonomous humanoid robot penalty-kick systems use both distance sensors and vision sensors. This paper proposes a human-like penalty-kick system that uses only a vision sensor. To this end, we adopt a robot assembly configuration that lets the vision sensor move flexibly, together with intelligent three-dimensional spatial analysis. Knowledge representation and inference use NEO, a knowledge processing system developed in-house, and the overall system, VisionNEO, embeds the OpenCV image processing library in NEO for intelligent processing (a ball-detection sketch follows this entry).

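The abstract names OpenCV but not the detection method itself, so the sketch below shows one plausible vision-only sensing step for such a system: color-based ball localization. The HSV range and the largest-contour heuristic are assumptions, not the VisionNEO pipeline.

```python
# Hypothetical ball-localization sketch; HSV range and heuristic are assumed.
import cv2
import numpy as np

def find_ball(frame_bgr, hsv_lo=(5, 120, 120), hsv_hi=(25, 255, 255)):
    """Return (cx, cy, radius) of the largest orange blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (cx, cy), r = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    return cx, cy, r

# Usage with a synthetic frame containing one orange disc:
frame = np.zeros((240, 320, 3), dtype=np.uint8)
cv2.circle(frame, (160, 120), 20, (0, 128, 255), -1)  # BGR orange
print(find_ball(frame))  # expected near (160, 120), r ~ 20
```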

Design Vision Box base on Embedded Platform (Embedded Platform을 기반으로 하는 Vision Box 설계)

  • Kim, Pan-Kyu;Hoang, Tae-Moon;Park, Sang-Su;Lee, Jong-Hyeok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.1103-1106 / 2005
  • The purpose of this research is to design a Vision Box that can capture images from a camera and understand the movement of objects in the captured images. The design aims to satisfy user requirements: the Vision Box analyzes object movement in camera images without additional instruments, communicates with a PLC, and can be operated by remote control. We verified its capability by applying it to automobile engine pattern analysis, and we expect the Vision Box to be used in various industrial fields (a frame-differencing sketch follows this entry).

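The box's actual motion-analysis algorithm is not described, so the following sketch shows a common baseline for detecting object movement between camera frames: frame differencing with a threshold. The synthetic frames and threshold value are assumptions.

```python
# Frame-differencing sketch; synthetic frames stand in for the camera.
import cv2
import numpy as np

def motion_mask(prev_gray, curr_gray, thresh=25):
    """Binary mask of pixels that changed between two grayscale frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

# Synthetic example: a bright square moves 10 pixels to the right.
f0 = np.zeros((240, 320), dtype=np.uint8)
f1 = f0.copy()
cv2.rectangle(f0, (50, 50), (90, 90), 255, -1)
cv2.rectangle(f1, (60, 50), (100, 90), 255, -1)
print("moving pixels:", int(np.count_nonzero(motion_mask(f0, f1))))
```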

A Survey on Vision Transformers for Object Detection Task (객체 탐지 과업에서의 트랜스포머 기반 모델의 특장점 분석 연구)

  • Jungmin, Ha;Hyunjong, Lee;Jungmin, Eom;Jaekoo, Lee
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.6 / pp.319-327 / 2022
  • Transformers are among the most prominent deep learning models; they have achieved great success in natural language processing and have also shown good performance in computer vision. In this survey, we categorize transformer-based models for computer vision, particularly for the object detection task, and perform comprehensive comparative experiments to understand the characteristics of each model. We evaluate the models, subdivided into standard transformers, transformers with keypoint attention, and transformers that add attention with coordinates, by comparing object detection accuracy and real-time performance. For the comparison we use two metrics: frames per second (FPS) and mean average precision (mAP). Finally, through various experiments we confirm the trends and relationships between the detection accuracy and real-time performance of several transformer models (an IoU sketch underlying the mAP metric follows this entry).
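
mAP scores detections by matching predicted boxes to ground truth with an intersection-over-union (IoU) test and then averaging precision over confidence thresholds and classes. The sketch below shows just the IoU matching step; the 0.5 threshold is the common convention, not a value taken from the survey.

```python
# IoU matching step behind mAP; the 0.5 threshold is the usual convention.
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction counts as a true positive at the common 0.5 threshold:
pred, gt = (10, 10, 50, 50), (12, 8, 48, 52)
print(iou(pred, gt), iou(pred, gt) >= 0.5)  # ~0.83, True
```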

Trends in Low-Power On-Device Vision SW Framework Technology (저전력 온디바이스 비전 SW 프레임워크 기술 동향)

  • Lee, M.S.;Bae, S.Y.;Kim, J.S.;Seok, J.S.
    • Electronics and Telecommunications Trends / v.36 no.2 / pp.56-64 / 2021
  • Many computer vision algorithms are computationally expensive and require substantial computing resources. Recently, owing to machine learning technology and high-performance embedded systems, vision processing applications such as object detection, face recognition, and visual inspection have come into wide use. On-device systems, however, must handle demanding vision workloads with low power consumption in heterogeneous environments. Consequently, global manufacturers try to lock developers into their ecosystems by providing integrated low-power chips and dedicated vision libraries. The Khronos Group, an international standards organization, has released the OpenVX standard for high-performance, low-power vision processing on heterogeneous on-device systems. This paper describes vision libraries for embedded systems and presents the OpenVX standard along with related trends in on-device vision systems (a graph-pipeline sketch follows this entry).
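
OpenVX is a C API in which an application declares a vision pipeline as a graph of nodes, verifies it, and then executes it as a whole so the runtime can fuse and schedule work for the target hardware. The Python sketch below only mimics that declare-then-execute idea as a conceptual analogue; it is not the OpenVX API, and every name in it is invented for illustration.

```python
# Conceptual analogue of an OpenVX-style graph pipeline; NOT the OpenVX API.
import numpy as np

class Graph:
    def __init__(self):
        self.nodes = []  # (function, input names, output name)

    def node(self, fn, inputs, output):
        self.nodes.append((fn, inputs, output))

    def run(self, **tensors):
        # Executing the whole declared graph at once is what lets a real
        # OpenVX runtime optimize across nodes before any data moves.
        for fn, inputs, output in self.nodes:
            tensors[output] = fn(*(tensors[k] for k in inputs))
        return tensors

def to_gray(img):   return img.mean(axis=2)
def blur(img):      return (img[:-1, :-1] + img[1:, :-1]
                             + img[:-1, 1:] + img[1:, 1:]) / 4.0
def threshold(img): return (img > 128).astype(np.uint8)

g = Graph()
g.node(to_gray, ["frame"], "gray")
g.node(blur, ["gray"], "smooth")
g.node(threshold, ["smooth"], "mask")
out = g.run(frame=np.random.randint(0, 256, (240, 320, 3)).astype(np.float64))
print(out["mask"].shape)  # (239, 319)
```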

Autonomous-flight Drone Algorithm use Computer vision and GPS (컴퓨터 비전과 GPS를 이용한 드론 자율 비행 알고리즘)

  • Kim, Junghwan;Kim, Shik
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.3 / pp.193-200 / 2016
  • This paper introduces an autonomous navigation flight algorithm for low- to mid-priced drones using computer vision and GPS. Existing drone operation mainly relies on methods such as loading a flight path into the drone's software before flight or following signals transmitted from a controller. This paper introduces a new algorithm that allows the autonomous navigation system to locate a specific place, shape, or space within an area the user wishes to search. Technology developed for military purposes was implemented on lower-cost hobby drones without changing their hardware, and the proposed algorithm was used to maximize performance. When the user supplies an image of the place to find, the camera mounted on the drone processes incoming imagery and searches for the matching region of interest. With this algorithm, autonomous navigation on low- to mid-priced drones is expected to be applicable to a variety of industries (a template-matching sketch follows this entry).
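
The abstract describes searching camera imagery for a user-supplied target image but does not name the matching method. Normalized cross-correlation template matching is one common baseline for that step; treat the OpenCV sketch below as a plausible stand-in, not the paper's algorithm.

```python
# Template-matching sketch; one plausible baseline, not the paper's method.
import cv2
import numpy as np

def locate(scene_gray, template_gray, min_score=0.8):
    """Return (x, y, score) of the best template match, or None."""
    result = cv2.matchTemplate(scene_gray, template_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, score, _, (x, y) = cv2.minMaxLoc(result)
    return (x, y, score) if score >= min_score else None

# Synthetic test: cut a patch out of a random scene and find it again.
scene = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
template = scene[100:140, 150:200].copy()
print(locate(scene, template))  # expected near x=150, y=100, score ~1.0
```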

Development of Stand-Alone Vision Processing Module Based on Linux OS in ARM CPU (ARM CUP를 이용한 리눅스기반 독립형 Vision 처리 모듈 개발)

  • Lee, Seok;Moon, Seung-Bin
    • Proceedings of the Korea Information Processing Society Conference / 2002.04a / pp.657-660 / 2002
  • Many companies currently adopt Linux for embedded systems, and embedded Linux is applied in many fields, from robot controllers that require a real-time operating system to PDAs and set-top boxes. This paper describes the development of a standalone vision module running Linux on an embedded system built around the StrongARM SA-1110 CPU. We evaluate the Linux-based standalone vision module by comparing its performance with a vision module developed on WinCE, and we suggest the applicability of Linux in the machine vision field (a timing-harness sketch follows this entry).

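The paper's Linux-versus-WinCE comparison suggests a simple benchmarking pattern: run the same vision kernel on each module and report throughput. The sketch below uses a pure-NumPy Sobel filter as a stand-in workload; the paper's actual test suite is not described, so the kernel and frame size are assumptions.

```python
# Throughput-benchmark sketch; the Sobel kernel is a stand-in workload.
import time
import numpy as np

def sobel_x(gray):
    """Horizontal Sobel gradient via explicit shifts (pure NumPy)."""
    p = np.pad(gray.astype(np.float64), 1, mode="edge")
    return (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
            - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])

frame = np.random.randint(0, 256, (480, 640)).astype(np.uint8)
n = 50
t0 = time.perf_counter()
for _ in range(n):
    sobel_x(frame)
elapsed = time.perf_counter() - t0
print(f"{n / elapsed:.1f} frames/s")
```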

Visual object tracking using inter-frame correlation of convolutional feature maps (컨볼루션 특징 맵의 상관관계를 이용한 영상물체추적)

  • Kim, Min-Ji;Kim, Sungchan
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.4 / pp.219-225 / 2016
  • Visual object tracking is one of the key tasks in computer vision. A robust tracker should address challenging issues such as fast motion, deformation, and occlusion. In this paper, we therefore propose a visual object tracking method that exploits inter-frame correlations of convolutional feature maps in a convolutional neural network (ConvNet). The proposed method predicts the location of a target by considering the inter-frame spatial correlation between target location proposals in the current frame and the target's location in the previous frame. Experimental results show that the proposed algorithm outperforms state-of-the-art methods, especially on hard-to-track sequences (a feature-map correlation sketch follows this entry).
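
The core operation the abstract describes is scoring candidate target locations in the current frame by correlating their convolutional feature patches with the target's patch from the previous frame. The sketch below illustrates that with normalized cross-correlation over random tensors standing in for ConvNet features; the proposal scheme and patch shapes are assumptions.

```python
# Feature-patch correlation sketch; random tensors stand in for ConvNet maps.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two flattened feature patches."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def best_proposal(prev_feat, prev_box, curr_feat, proposals):
    """Pick the proposal whose features correlate best with the target."""
    y, x, h, w = prev_box
    target = prev_feat[:, y:y + h, x:x + w]
    scores = [ncc(target, curr_feat[:, py:py + h, px:px + w])
              for py, px in proposals]
    return proposals[int(np.argmax(scores))], max(scores)

# Synthetic C x H x W feature maps for two consecutive frames:
prev_feat = np.random.rand(64, 32, 32)
curr_feat = prev_feat + 0.05 * np.random.rand(64, 32, 32)  # slight change
box = (10, 10, 8, 8)                                       # y, x, h, w
print(best_proposal(prev_feat, box, curr_feat, [(10, 10), (5, 20), (20, 5)]))
```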

Robust Vision-Based Autonomous Navigation Against Environment Changes (환경 변화에 강인한 비전 기반 로봇 자율 주행)

  • Kim, Jungho;Kweon, In So
    • IEMEK Journal of Embedded Systems and Applications / v.3 no.2 / pp.57-65 / 2008
  • Recently, many studies on intelligent robots have been conducted. An intelligent robot can recognize environments or objects from sensor readings in order to perform specific tasks autonomously. One of the fundamental problems in vision-based robot applications is recognizing where the robot is and deciding on a safe path for autonomous navigation. However, previous approaches consider only well-organized environments with no moving objects or environmental changes. In this paper, we introduce a novel navigation strategy that handles occlusions caused by moving objects using various computer vision techniques. Experimental results demonstrate the capability to overcome such difficulties in autonomous navigation.
