• Title/Summary/Keyword: Tracking performance

Search Results: 3,316

Online Snapshot Method based on Directory and File Change Tracking for Virtual File System (가상파일시스템에서 디렉토리 및 파일 변경 추적에 기반한 온라인 스냅샷 방법)

  • Kim, Jinsu;Song, Seokil;Shin, Jae Ryong
    • The Journal of the Korea Contents Association / v.19 no.5 / pp.417-425 / 2019
  • Storage snapshot technology makes it possible to preserve data at a specific point in time and to recover and access data at a desired point in time; it is an essential technology for storage protection applications. Existing snapshot methods have the problem that they depend on the storage hardware vendor, the file system, or a virtual block device. In this paper, we propose a new snapshot method that solves these problems and creates snapshots online. The proposed method extracts log records of update operations at the virtual file system layer, which allows it to operate independently of file systems, virtual block devices, and storage hardware. In addition, the proposed method creates and manages snapshots for directories and files without interrupting the storage service. Finally, through experiments we measure the snapshot creation time and the performance degradation caused by snapshots.
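The core idea of the abstract, a snapshot defined by a position in a log of update operations rather than by copying blocks, can be sketched as follows. This is a minimal illustrative model in plain Python, not the paper's in-kernel VFS implementation; the class and method names are hypothetical.

```python
class SnapshotLog:
    """Sketch: record file-update operations as log records and reconstruct
    the directory/file state at any snapshot point by replaying them.
    A snapshot is just a log sequence number, so taking one is cheap and
    does not interrupt service (the property the paper targets)."""

    def __init__(self):
        self.records = []   # (seq, op, path, data)
        self.seq = 0

    def log(self, op, path, data=None):
        """Called for every update operation (create/write/delete)."""
        self.seq += 1
        self.records.append((self.seq, op, path, data))
        return self.seq

    def snapshot(self):
        # Online snapshot: remember the current log position only.
        return self.seq

    def restore(self, snap_seq):
        """Replay records up to the snapshot point to rebuild the state."""
        state = {}
        for seq, op, path, data in self.records:
            if seq > snap_seq:
                break
            if op in ("create", "write"):
                state[path] = data
            elif op == "delete":
                state.pop(path, None)
        return state
```

Replaying from the start is the simplest correct policy; a real system would checkpoint periodically so restores do not scan the whole log.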

Separation of Dynamic RCS using Hough Transform in Multi-target Environment (허프 변환을 이용한 다표적 환경에서 동적 RCS 분리)

  • Kim, Yu-Jin;Choi, Young-Jae;Choi, In-Sik
    • The Journal of Korean Institute of Information Technology / v.17 no.9 / pp.91-97 / 2019
  • When a radar tracks the warhead of a ballistic missile, the missile's decoys put a heavy burden on the radar resource management that tracks the targets. To reduce this burden, the radar must be able to separate the warhead's signal from the received dynamic radar cross section (RCS) signal. In this paper, we propose a method for separating the dynamic RCS of each target from the received signal using the Hough transform, which extracts straight lines from an image. The micro-motion of the targets was implemented using 3D CAD models of the warhead and decoys. We then calculated the dynamic RCS from the 3D CAD models with micro-motion and verified the performance by applying the proposed algorithm. Simulation results show that the proposed method can separate the signals of the warhead and decoys at a signal-to-noise ratio (SNR) of 10 dB.
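The line-extraction step the abstract relies on can be sketched with a minimal Hough transform: every point votes for all (theta, rho) line parameterizations passing through it, and the accumulator peak is the dominant line. This is an illustrative pure-Python version of the generic transform, not the authors' RCS-separation pipeline.

```python
import math

def hough_lines(points, n_theta=180, rho_res=1.0):
    """Vote each (x, y) point into (theta, rho) accumulator bins, where
    rho = x*cos(theta) + y*sin(theta), and return the peak, i.e. the
    dominant straight line (theta, rho, vote count)."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (t, round(rho / rho_res))      # quantize rho into bins
            acc[key] = acc.get(key, 0) + 1
    (t_best, r_best), votes = max(acc.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, r_best * rho_res, votes
```

For collinear points the winning bin collects one vote per point, so the peak count equals the number of points on the line; separating several targets would mean reading off several accumulator peaks.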

Extraction of Skin Regions through Filtering-based Noise Removal (필터링 기반의 잡음 제거를 통한 피부 영역의 추출)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.12 / pp.672-678 / 2020
  • Ultra-high-speed images that accurately capture the minute movements of objects have become common as low-cost, high-performance cameras capable of high-speed filming have emerged. The proposed method first removes unexpected noise contained in the high-speed input images and then extracts areas of interest that can reveal personal information, such as skin regions, from the denoised images. Noise generated by abnormal electrical signals is removed by applying a bilateral filter. A color model built through pre-learning is then used to extract the areas of interest containing personal information. Experimental results show that the introduced algorithm robustly removes noise from high-speed images and extracts the areas of interest. The approach presented in this paper is expected to be useful in various computer vision applications, such as image preprocessing, noise elimination, and tracking and monitoring of target areas.
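The bilateral filter named in the abstract weights each neighbour by both spatial distance and intensity difference, so flat noise is averaged away while sharp edges (such as a skin-region boundary) survive. A minimal 1-D sketch, illustrative only (the paper filters 2-D high-speed frames, typically via a library routine such as OpenCV's):

```python
import math

def bilateral_1d(signal, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Replace each sample by a weighted average of its neighbours;
    the weight decays with spatial distance (sigma_s) AND with
    intensity difference (sigma_r), which is what preserves edges."""
    out = []
    n = len(signal)
    for i in range(n):
        acc, wsum = 0.0, 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w_s = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2))            # spatial term
            w_r = math.exp(-((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2))  # range term
            w = w_s * w_r
            acc += w * signal[j]
            wsum += w
        out.append(acc / wsum)
    return out
```

On a step edge of height 100 with `sigma_r=10`, the range term across the edge is exp(-50), effectively zero, so the two sides do not mix; a plain Gaussian blur would smear them together.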

Design and Implementation of Human and Object Classification System Using FMCW Radar Sensor (FMCW 레이다 센서 기반 사람과 사물 분류 시스템 설계 및 구현)

  • Sim, Yunsung;Song, Seungjun;Jang, Seonyoung;Jung, Yunho
    • Journal of IKEEE / v.26 no.3 / pp.364-372 / 2022
  • This paper presents the design and implementation of a human and object classification system using a frequency modulated continuous wave (FMCW) radar sensor. Such a system requires radar signal processing for multi-target detection and deep learning for classifying humans and objects. Since deep learning demands a great amount of computation and data processing, a lightweight design is essential. Therefore, a binary neural network (BNN) structure was adopted, performing the convolutional neural network (CNN) computations with binary values. In addition, for real-time operation, a hardware accelerator was implemented and verified on an FPGA platform. The performance evaluation and verification results confirm a multi-target classification accuracy of 90.5%, memory usage reduced by 96.87% compared to a CNN, and a run time of 5 ms.
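The reason a BNN is so much lighter than a CNN is that with weights and activations constrained to ±1, a multiply-accumulate reduces to XNOR plus a popcount. A sketch of that core trick (the general BNN identity, not the authors' FPGA design):

```python
def binary_dot(a_bits, b_bits, n):
    """Dot product of two ±1 vectors of length n, each packed into an int
    (bit 1 -> +1, bit 0 -> -1).  Signs agree where XNOR gives 1, so
    dot = (#agreements) - (#disagreements) = 2*popcount(XNOR) - n."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)   # 1 where signs agree
    return 2 * bin(xnor).count("1") - n
```

On hardware this replaces an n-wide multiplier array with n XNOR gates and an adder tree, which is why BNN accelerators map so well onto FPGAs.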

A Study on Sensor-Based Upper Full-Body Motion Tracking on HoloLens

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.39-46 / 2021
  • In this paper, we propose a motion recognition method for the movements required at industrial sites in mixed reality. Industrial sites require movements (grasping, lifting, and carrying) across the entire upper body, from trunk movements to arm movements. Instead of heavy motion capture equipment or vision-based devices such as Kinect, we use sensors and wearable devices: two IMU sensors for trunk and shoulder movements and a Myo armband for arm movements. Real-time data from the four devices are fused to enable motion recognition over the entire upper body. In the experiment, the sensors were attached to actual clothing and objects were manipulated through synchronization; the synchronization-based method produced no errors for either large or small motions. Finally, the performance evaluation yielded an average of 50 frames for single-handed operation on the HoloLens and 60 frames for two-handed operation.

Object-based Compression of Thermal Infrared Images for Machine Vision (머신 비전을 위한 열 적외선 영상의 객체 기반 압축 기법)

  • Lee, Yegi;Kim, Shin;Lim, Hanshin;Choo, Hyon-Gon;Cheong, Won-Sik;Seo, Jeongil;Yoon, Kyoungro
    • Journal of Broadcast Engineering / v.26 no.6 / pp.738-747 / 2021
  • Today, with improvements in deep learning technology, computer vision tasks such as image classification, object detection, object segmentation, and object tracking have shown remarkable progress. Applications that combine deep learning, such as intelligent surveillance, robots, the Internet of Things, and autonomous vehicles, are being deployed in real industries. Accordingly, an efficient compression method for video data is necessary for machine consumption as well as for human consumption. In this paper, we propose object-based compression of thermal infrared images for machine vision. The input image is divided into object and background parts based on object detection results to achieve efficient compression and high neural network performance, and the separated images are encoded at different compression ratios. Experimental results show that the proposed method achieves superior compression efficiency, with a maximum BD-rate gain of -19.83% compared to whole-image compression with VVC.
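The split-and-encode-differently idea can be illustrated with a toy quantizer: pixels inside the detected object box get a fine quantization step, background pixels a coarse one, so bits are spent where the downstream network looks. This is a stand-in sketch only; the paper encodes the separated regions with VVC, and the function and parameter names here are hypothetical.

```python
def object_based_quantize(image, obj_box, q_obj=2, q_bg=32):
    """Quantize a 2-D list of pixel values with a fine step (q_obj)
    inside the object box (x0, y0, x1, y1) and a coarse step (q_bg)
    elsewhere, mimicking region-dependent compression ratios."""
    out = []
    x0, y0, x1, y1 = obj_box
    for y, row in enumerate(image):
        new_row = []
        for x, v in enumerate(row):
            q = q_obj if (x0 <= x < x1 and y0 <= y < y1) else q_bg
            new_row.append((v // q) * q)   # coarser q -> fewer levels, more error
        out.append(new_row)
    return out
```

The machine-vision rationale is that detection accuracy degrades little when only the background is coarsened, which is what lets the object-based scheme beat whole-image compression at equal quality.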

A novel computer vision-based vibration measurement and coarse-to-fine damage assessment method for truss bridges

  • Wen-Qiang Liu;En-Ze Rui;Lei Yuan;Si-Yi Chen;You-Liang Zheng;Yi-Qing Ni
    • Smart Structures and Systems / v.31 no.4 / pp.393-407 / 2023
  • To assess structural condition in a non-destructive manner, computer vision-based structural health monitoring (SHM) has become a focus. Compared to traditional contact-type sensors, computer vision-based measurement systems offer lower installation costs and broader measurement areas. In this study, we propose a novel computer vision-based vibration measurement and coarse-to-fine damage assessment method for truss bridges. First, the deep learning model FairMOT is introduced to track regions of interest (ROIs) that include joints, improving automation over traditional target tracking algorithms. To calculate the displacement of the tracked ROIs accurately, a normalized cross-correlation method is adopted to fine-tune the offset, while Harris corner matching is used to correct the vibration displacement errors caused by non-parallelism between the truss plane and the image plane. Then, the stochastic damage locating vector (SDLV) method and Bayesian inference-based stochastic model updating (BI-SMU) are combined, exploiting their respective advantages, to achieve coarse-to-fine localization of the truss bridge's damaged elements. Finally, the severity of the damaged components is quantified by BI-SMU. Experimental results show that the proposed method can accurately recognize the vibration displacement and evaluate the structural damage.
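The offset fine-tuning step named in the abstract, normalized cross-correlation (NCC), can be sketched in 1-D: slide a template over a signal and pick the offset with the highest correlation score. This illustrates the generic NCC matcher only, not the paper's 2-D ROI-displacement pipeline.

```python
import math

def ncc_best_offset(template, signal):
    """Return the offset at which the template best matches the signal,
    scored by normalized cross-correlation (mean-removed, unit-norm),
    which is invariant to brightness offset and contrast scaling."""
    def ncc(a, b):
        ma = sum(a) / len(a)
        mb = sum(b) / len(b)
        num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        da = math.sqrt(sum((x - ma) ** 2 for x in a))
        db = math.sqrt(sum((y - mb) ** 2 for y in b))
        return num / (da * db) if da and db else 0.0
    scores = [ncc(template, signal[o:o + len(template)])
              for o in range(len(signal) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

In the image case the same score is computed over 2-D patches (e.g. OpenCV's `TM_CCOEFF_NORMED` mode), and sub-pixel displacement is obtained by interpolating around the peak.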

3D Ultrasound Panoramic Image Reconstruction using Deep Learning (딥러닝을 활용한 3차원 초음파 파노라마 영상 복원)

  • SiYeoul Lee;Seonho Kim;Dongeon Lee;ChunSu Park;MinWoo Kim
    • Journal of Biomedical Engineering Research / v.44 no.4 / pp.255-263 / 2023
  • Clinical ultrasound (US) is a widely used imaging modality with various clinical applications. However, capturing a large field of view often requires specialized transducers, which have limitations in specific clinical scenarios. Panoramic imaging offers an alternative by sequentially aligning image sections acquired from freehand sweeps with a standard transducer. To reconstruct a 3D volume from these 2D sections, an external device can be employed to track the transducer's motion accurately, but optical or electrical interference in a clinical setting often leads to incorrect sensor measurements. In this paper, we propose a deep learning (DL) framework that predicts scan trajectories using only US data, eliminating the need for an external tracking device. Our approach incorporates diverse data types, including correlation volumes, optical flow, B-mode images, and raw IQ data. We develop a DL network capable of handling these data types effectively and introduce an attention technique to emphasize crucial local areas for precise trajectory prediction. Through extensive experimentation, we demonstrate the superiority of the proposed method over other DL-based approaches in long-trajectory prediction. Our findings highlight the potential of DL techniques for trajectory estimation in clinical ultrasound, offering a promising alternative for panoramic imaging.

Unity Engine-based Underwater Robot 3D Positioning Program Implementation (Unity Engine 기반 수중 로봇 3차원 포지셔닝 프로그램 구현)

  • Choi, Chul-Ho;Kim, Jong-Hun;Kim, Jun-Yeong;Park, Jun;Park, Sung-Wook;Jung, Se-Hoon;Sim, Chun-Bo
    • Smart Media Journal / v.11 no.9 / pp.64-74 / 2022
  • A number of studies on underwater robots are being conducted to utilize marine resources. However, unlike ordinary drones, underwater robots are difficult to localize because the medium is water rather than air. Existing monitoring and positioning programs for underwater robots are designed for large spaces and therefore have difficulty locating and monitoring robots in small spaces. In this paper, we propose a three-dimensional positioning program for continuous monitoring and command delivery in small spaces. The proposed program provides multi-dimensional position monitoring and the ability to control the travel path through a three-dimensional screen, so that the depth of the underwater robot can be identified. In the performance evaluation, the underwater robot could be monitored from various angles on the 3D screen, and the difference between the set path and the actual position was within 6.44 m on average, an error within the assumed range.

Object detection within the region of interest based on gaze estimation (응시점 추정 기반 관심 영역 내 객체 탐지)

  • Seok-Ho Han;Hoon-Seok Jang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.3 / pp.117-122 / 2023
  • Gaze estimation, which automatically recognizes where a user is currently looking, combined with object detection based on the estimated gaze point, can be a more accurate and efficient way to understand human visual behavior. In this paper, we propose a method to detect objects within a region of interest around the gaze point. Specifically, after estimating the 3D gaze point, a region of interest based on the estimated gaze point is created so that object detection occurs only within that region. In our experiments, we compared general object detection with the proposed region-of-interest-based detection and found processing times of 1.4 ms and 1.1 ms per frame, respectively, showing that the proposed method is faster.
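The ROI step described above is essentially box arithmetic: build a window centred on the gaze point, clamp it to the frame, and run (or filter) detection only inside it. A minimal sketch under those assumptions; the function names are hypothetical, and gaze estimation and the detector itself are outside this sketch.

```python
def gaze_roi(gaze_xy, roi_size, frame_w, frame_h):
    """Return an (x0, y0, x1, y1) ROI of side roi_size centred on the
    estimated gaze point, shifted as needed to stay inside the frame."""
    gx, gy = gaze_xy
    half = roi_size // 2
    x0 = min(max(gx - half, 0), frame_w - roi_size)
    y0 = min(max(gy - half, 0), frame_h - roi_size)
    return x0, y0, x0 + roi_size, y0 + roi_size

def detect_in_roi(detections, roi):
    """Keep only detection boxes (x0, y0, x1, y1) whose centre falls
    inside the ROI -- the cheap stand-in for running the detector on
    the cropped region only."""
    x0, y0, x1, y1 = roi
    return [d for d in detections
            if x0 <= (d[0] + d[2]) / 2 < x1 and y0 <= (d[1] + d[3]) / 2 < y1]
```

Running the detector on the cropped ROI rather than the full frame is what yields the per-frame speedup the abstract reports.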