• Title/Summary/Keywords: Recognition Speed

769 search results, processing time 0.027 seconds

Implementation and Verification of Deep Learning-based Automatic Object Tracking and Handy Motion Control Drone System

  • 김영수;이준범;이찬영;전혜리;김승필
• IEMEK Journal of Embedded Systems and Applications
    • /
• Vol. 16 No. 5
    • /
    • pp.163-169
    • /
    • 2021
  • In this paper, we implemented a deep learning-based automatic object tracking and handy motion control drone system and analyzed its performance. The drone system automatically detects and tracks targets by analyzing images obtained from the drone's camera using deep learning algorithms, namely YOLO, MobileNet, and DeepSORT. These deep learning-based detection and tracking algorithms achieve both higher target detection accuracy and faster processing than CAMShift, a conventional color-based algorithm. In addition, to make it easy to control the drone by hand from the ground control station, we classified hand motions and generated flight control commands through motion recognition using the YOLO algorithm. We confirmed that this deep learning-based target tracking and handy motion control system tracks the target stably and allows the drone to be controlled easily.
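
The motion-recognition step above maps a recognized hand-motion class to a flight control command. A minimal sketch of such a mapping follows; the class labels, command names, and confidence threshold are illustrative assumptions, not the authors' actual protocol.

```python
# Hypothetical mapping from a recognized hand-motion class to a flight
# command. Labels and commands are assumptions for illustration only.
MOTION_TO_COMMAND = {
    "palm_up": "ASCEND",
    "palm_down": "DESCEND",
    "swipe_left": "YAW_LEFT",
    "swipe_right": "YAW_RIGHT",
    "fist": "HOVER",
}

def command_from_detection(label, confidence, threshold=0.6):
    """Return a flight command for a recognized motion.

    Falls back to HOVER when the detector's confidence is below the
    threshold or the label is unknown, so an uncertain recognition
    never moves the drone.
    """
    if confidence < threshold or label not in MOTION_TO_COMMAND:
        return "HOVER"
    return MOTION_TO_COMMAND[label]
```

In the real system the label and confidence would come from the YOLO detector running on each ground-station frame; here they are plain arguments.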

Artificial Intelligence-based Echocardiogram Video Classification by Aggregating Dynamic Information

  • Ye, Zi;Kumar, Yogan J.;Sing, Goh O.;Song, Fengyan;Ni, Xianda;Wang, Jin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
• Vol. 15 No. 2
    • /
    • pp.500-521
    • /
    • 2021
  • Echocardiography, an ultrasound scan of the heart, is regarded as the primary physiological test for diagnosing heart disease. How an echocardiogram is interpreted also relies heavily on determining the view. Some views are designated standard views because of how clearly they present the major cardiac structures and how easily those structures can be evaluated. However, finding valid cardiac views has traditionally been a time-consuming and laborious process, because medical images are interpreted manually by specialists. This study therefore aims to speed up the diagnostic process and reduce diagnostic error by providing automated identification of standard cardiac views based on deep learning technology. More importantly, using a new echocardiogram dataset of Asian patients, our research considers and assesses several new neural network architectures inspired by action recognition in video. Finally, the research concludes and verifies that these methods, which aggregate dynamic information, achieve a stronger classification effect.

An Encoding Scheme Considering Diffused Lights In a Visual Light Communication System

  • 은성배;김동규;차신
• Journal of Korea Multimedia Society
    • /
• Vol. 22 No. 2
    • /
    • pp.186-193
    • /
    • 2019
  • Visible light communication (VLC) technology is being studied and developed in various ways because of advantages such as high transmission speed, accurate positioning, and strong security. However, existing visible light communication systems have had difficulty entering the market because they require special transmitters and receivers. We can overcome this difficulty by developing a VLC system that uses a conventional LED light as the transmitter and a smartphone camera as the receiver. The problem is that LED lights include a diffusing filter to prevent glare to the human eye, so existing VLC methods cannot be applied. In this paper, we propose a method to encode data with On/Off patterns of the LEDs in a light fixture with M×N LEDs. We define parameters such as L-off-able and K-separated to facilitate recognition of the On/Off patterns in the diffused light. We conducted experiments using an LED light and smartphones to determine the parameter values. The maximum transmission rate of our encoding technique is also derived mathematically. Our encoding scheme can be applied to indoor and outdoor positioning or to the settlement of commercial transactions.
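
The constraint parameters above lend themselves to a small capacity calculation: counting the valid On/Off patterns bounds how many bits one captured frame can carry. The sketch below assumes one plausible reading of the parameters, at most L LEDs off ("L-off-able") and off LEDs pairwise at least K grid units apart ("K-separated"); the paper's exact definitions may differ.

```python
from itertools import combinations
from math import dist, log2

def count_valid_patterns(m, n, l_off, k_sep):
    """Count On/Off patterns of an m x n LED array in which at most
    l_off LEDs are off and every pair of off LEDs lies at least
    k_sep apart in Euclidean grid distance (assumed semantics)."""
    cells = [(r, c) for r in range(m) for c in range(n)]
    total = 0
    for k in range(l_off + 1):
        for off in combinations(cells, k):
            if all(dist(a, b) >= k_sep for a, b in combinations(off, 2)):
                total += 1
    return total

# Bits encodable per captured frame for a small 3x3 fixture
patterns = count_valid_patterns(3, 3, 2, 2)
bits_per_frame = log2(patterns)
```

Enumerating all subsets is exponential, so this brute force only illustrates the counting for small fixtures; the paper derives the rate analytically.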

Runway visual range prediction using Convolutional Neural Network with Weather information

  • Ku, SungKwan;Kim, Seungsu;Hong, Seokmin
    • International Journal of Advanced Culture Technology
    • /
• Vol. 6 No. 4
    • /
    • pp.190-194
    • /
    • 2018
  • The runway visual range is one of the important factors that determine whether airplanes can take off and land at local airports. It is affected by weather conditions such as fog and wind. Pilots and aviation workers check local weather forecasts, including the runway visual range, for safe flight. However, there are several local airfields where no such forecasts are provided because of practical problems such as deterioration, breakdown, or the high purchase cost of the measurement equipment. To this end, this study proposes a runway visual range prediction model for a local airport by applying a convolutional neural network, a technique most commonly used for image/video recognition, image classification, and natural language processing, to runway visual range prediction. To build the prediction model, we use past time-series data of wind speed, humidity, temperature, and runway visibility. This paper shows the usefulness of the proposed model by comparing its predictions with measured data.
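
The input construction described above, past wind speed, humidity, temperature, and visibility predicting a future runway visual range, can be sketched as a sliding-window transform. The window length, horizon, and column order are illustrative assumptions, and the CNN itself is omitted.

```python
def make_windows(series, window, horizon):
    """Turn a multivariate weather series into (input, target) pairs.

    series: time-ordered rows [wind_speed, humidity, temperature, rvr].
    Each input is `window` consecutive rows; the target is the runway
    visual range (last column) `horizon` steps after the window ends.
    """
    pairs = []
    for t in range(len(series) - window - horizon + 1):
        x = series[t:t + window]                 # past observations
        y = series[t + window + horizon - 1][3]  # future RVR
        pairs.append((x, y))
    return pairs

# Toy series: 10 hourly observations (values are made up)
toy = [[w, 80, 15, 2000 - 100 * w] for w in range(10)]
pairs = make_windows(toy, window=4, horizon=1)
```

Each `x` would be fed to the CNN as a (window × 4) input, with `y` as the regression target.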

A Distributed Real-time 3D Pose Estimation Framework based on Asynchronous Multiviews

  • Taemin, Hwang;Jieun, Kim;Minjoon, Kim
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
• Vol. 17 No. 2
    • /
    • pp.559-575
    • /
    • 2023
  • 3D human pose estimation is widely applied in various fields, including action recognition, sports analysis, and human-computer interaction. It has achieved significant progress with the introduction of convolutional neural networks (CNNs). Recently, several studies have proposed multiview approaches to avoid the occlusions that affect single-view approaches. However, as the number of cameras increases, a CNN-based 3D pose estimation system may run short of computational resources. In addition, when a single host system uses multiple cameras, the data transmission speed becomes inadequate owing to bandwidth limitations. To address this problem, we propose a distributed real-time 3D pose estimation framework based on asynchronous multiple cameras. The proposed framework comprises a central server and multiple edge devices. Each edge device estimates a 2D human pose from its view and sends it to the central server. The central server then synchronizes the received 2D human pose data based on their timestamps. Finally, the central server reconstructs a 3D human pose using geometric triangulation. We demonstrate that the proposed framework increases the percentage of detected joints and successfully estimates 3D human poses in real time.
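
The synchronization step above, grouping asynchronous 2D-pose packets by timestamp before triangulation, can be sketched as follows. The nearest-timestamp matching and the tolerance value are simplifying assumptions; the triangulation itself is omitted.

```python
def synchronize(streams, tolerance):
    """Group 2D-pose packets from asynchronous edge devices into frames.

    streams: {camera_id: [(timestamp, pose), ...]} in time order.
    A frame is formed around each packet of a reference camera by
    taking, from every other camera, the packet closest in time,
    provided it lies within `tolerance` seconds. Frames with at
    least two views can then be triangulated.
    """
    ref_id = min(streams)  # pick one camera as the time reference
    frames = []
    for ts, pose in streams[ref_id]:
        frame = {ref_id: pose}
        for cam, packets in streams.items():
            if cam == ref_id:
                continue
            best = min(packets, key=lambda p: abs(p[0] - ts))
            if abs(best[0] - ts) <= tolerance:
                frame[cam] = best[1]
        if len(frame) >= 2:  # need two or more views to triangulate
            frames.append((ts, frame))
    return frames

# Two cameras whose shutters are slightly out of step
streams = {
    0: [(0.00, "a0"), (0.10, "a1")],
    1: [(0.02, "b0"), (0.13, "b1")],
}
frames = synchronize(streams, tolerance=0.05)
```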

Improving Efficiency of Object Detection using Multiple Neural Networks

  • 박대흠;임종훈;장시웅
• Proceedings of the Korea Institute of Information and Communication Engineering Conference
    • /
• Korea Institute of Information and Communication Engineering 2022 Spring Conference
    • /
    • pp.154-157
    • /
    • 2022
  • In the conventional TensorFlow CNN environment, object detection is performed through TensorFlow's own object labeling and detection. With the advent of YOLO, however, the efficiency of image object detection has improved: deeper layers can be built than in earlier networks, and the image object recognition rate can be raised. In this paper, we therefore design an object detection system based on Darknet and YOLO, build and train multiple layers based on the previously used convolutional neural network, and compare and analyze detection capability and speed. From this, we present a neural network methodology that makes efficient use of Darknet training.

Implementation of Dynamic Context-Awareness Platform for IoT Loading Waste Fire-Prevention based on Universal Middleware

  • 이해준;황치곤;윤창표
• Proceedings of the Korea Institute of Information and Communication Engineering Conference
    • /
• Korea Institute of Information and Communication Engineering 2022 Spring Conference
    • /
    • pp.346-348
    • /
    • 2022
  • We built a monitoring system that identifies causes of ignition based on analysis data on the ignition factors of fermentation heat generated in loaded waste. Universal Middleware was used to provide a real-time runtime environment for composing fire early-warning scenarios by type and for rapid response. The system must dynamically recognize the loading height and pressure of the waste, as well as the drying and surface carbonization of wood, batteries, and plastics, the typical constituent wastes. Accordingly, we dynamically compose and present an IoT context-awareness platform for analyzing data on the possibility of low-temperature-ignition fires.

Design and Implementation of Web-Based Cooperative Learning System Co-Net

  • WANG, Kyungsu
    • Educational Technology International
    • /
• Vol. 6 No. 1
    • /
    • pp.103-119
    • /
    • 2005
  • This study designed and implemented the web-based collaborative learning system Co-Net and mapped out students' learning procedure using the system, based upon Student Team Achievement Division (STAD; Slavin, 1990, 1996). There are technical and instructional considerations to be made during the design process. The former concern equipment requirements and specifications and include ease of use, speed of access, and flexibility. Instructional considerations, on the other hand, concern the delivery and access of instructional materials and their outcomes for learners. They are cooperative interactions within groups and group heterogeneity, learner control, group incentives, individual accountability, equal opportunity for earning high scores and contributing to group effort, task specialization, and competition among groups. A web site for a virtual learning environment designed and built by the authors, known as Co-Net, is then explained along with the whole process learners follow inside the environment. The main page of Co-Net consists of 15 menus that implement the cooperative learning process. The cooperative learning activities using the 15 menus are composed of five phases: (1) preparation of the new knowledge, (2) presentation of the new knowledge, (3) knowledge assimilation and application, (4) team and individual evaluation, and (5) team and individual recognition. Throughout the five phases, the appropriate use of cooperative learning techniques has been shown to have both academic and social benefits for learners.

Real-time 3D multi-pedestrian detection and tracking using 3D LiDAR point cloud for mobile robot

  • Ki-In Na;Byungjae Park
    • ETRI Journal
    • /
• Vol. 45 No. 5
    • /
    • pp.836-846
    • /
    • 2023
  • Mobile robots are used in modern life; however, object recognition is still insufficient to realize robot navigation in crowded environments. Mobile robots must rapidly and accurately recognize the movements and shapes of pedestrians to navigate safely in pedestrian-rich spaces. This study proposes real-time, accurate, three-dimensional (3D) multi-pedestrian detection and tracking using a 3D light detection and ranging (LiDAR) point cloud in crowded environments. The pedestrian detection module quickly segments a sparse 3D point cloud into individual pedestrians using a lightweight convolutional autoencoder and a connected-component algorithm. The multi-pedestrian tracking module identifies the same pedestrians across consecutive frames using motion and appearance cues. In addition, it estimates pedestrians' dynamic movements with various patterns by adaptively mixing heterogeneous motion models. We evaluate the computational speed and accuracy of each module using the KITTI dataset. We demonstrate that our integrated system, which rapidly and accurately recognizes pedestrian movement and appearance using a sparse 3D LiDAR, is applicable to robot navigation in crowded spaces.
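
The connected-component step above, which splits a point cloud into individual pedestrians, can be sketched with a distance-threshold BFS. This is an O(n²) illustration only; the paper pairs it with a lightweight convolutional autoencoder, omitted here, and the radius value is an assumption.

```python
def connected_components(points, radius):
    """Group 3D points into clusters: two points share a cluster if
    they lie within `radius` of each other, directly or through a
    chain of neighbors (transitive closure via BFS)."""
    unvisited = set(range(len(points)))
    clusters = []

    def near(i, j):
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= radius ** 2

    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            neighbors = [j for j in unvisited if near(i, j)]
            for j in neighbors:
                unvisited.remove(j)
            cluster.extend(neighbors)
            frontier.extend(neighbors)
        clusters.append(sorted(cluster))
    return clusters

# Two nearby points form one pedestrian candidate; the far point another
points = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (5.0, 5.0, 5.0)]
clusters = connected_components(points, radius=0.5)
```

A production system would use a spatial index (e.g. a voxel grid) instead of the all-pairs distance test to keep this step real-time on sparse LiDAR scans.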

Real-time Color Recognition Based on Graphic Hardware Acceleration

  • 김구진;윤지영;최유주
• Journal of KIISE: Computing Practices and Letters
    • /
• Vol. 14 No. 1
    • /
    • pp.1-12
    • /
    • 2008
  • This paper presents a GPU (Graphics Processing Unit)-based algorithm that recognizes vehicle color in real time from vehicle images captured outdoors and indoors. In a preprocessing step, feature vectors are computed from sample images of vehicle colors and combined by color into a reference texture stored on the GPU. When a vehicle image is input, its feature vector is computed and sent to the GPU; the GPU measures per-color similarity against the sample feature vectors in the reference texture and returns the results to the CPU, which identifies the color name. The classification targets are seven colors chosen from the most common vehicle colors: three achromatic colors (black, silver, white) and four chromatic colors (red, yellow, blue, green). The feature vector of a vehicle image is constructed by applying the HSI (Hue-Saturation-Intensity) color model, building color histograms from hue-saturation and hue-intensity combinations, and weighting the saturation values. The proposed algorithm uses a large number of sample feature vectors captured in diverse environments, constructs feature vectors that clearly reflect per-color characteristics, and applies a suitable likelihood function for similarity measurement, achieving a color recognition success rate of 94.67%. Moreover, by using the GPU, the similarity measurement between the large set of sample feature vectors and the input image's feature vector, and the color recognition itself, are processed in parallel. In the experiments, the reference texture used on the GPU was built from 7,168 vehicle sample images, 1,024 per color. The time required to construct a feature vector depends on the input image size; for an input image of 150×113 resolution it averaged 0.509 ms. Color recognition using the computed feature vector took an average of 2.316 ms, 5.47 times faster than the same algorithm run on a CPU. Although the experiments targeted only vehicles, the proposed algorithm can be extended to color recognition of general objects.
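
The feature construction described above builds hue-based histograms and compares them with a likelihood function. A minimal sketch follows, with two stated substitutions: HSV from Python's colorsys stands in for the HSI model, and histogram intersection stands in for the paper's likelihood function; the bin count is also an assumption.

```python
import colorsys

def hs_histogram(pixels, bins=8):
    """Normalized hue-saturation histogram from (R, G, B) pixels in 0-255.

    HSV (via colorsys) approximates the HSI model used in the paper.
    """
    hist = [0.0] * (bins * bins)
    for r, g, b in pixels:
        h, s, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hi = min(int(h * bins), bins - 1)
        si = min(int(s * bins), bins - 1)
        hist[hi * bins + si] += 1.0
    total = sum(hist) or 1.0
    return [v / total for v in hist]

def similarity(a, b):
    """Histogram intersection: 1.0 for identical normalized histograms."""
    return sum(min(x, y) for x, y in zip(a, b))

# Compare a query image's feature against per-color reference features
red_ref = hs_histogram([(255, 0, 0)] * 16)
blue_ref = hs_histogram([(0, 0, 255)] * 16)
query = hs_histogram([(250, 10, 10)] * 16)
```

In the paper this comparison runs on the GPU against thousands of reference vectors in parallel; the sequential loop here only shows the feature and similarity definitions.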