• Title/Summary/Keyword: Fusion Model


Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera (CCD카메라와 적외선 카메라의 융합을 통한 효과적인 객체 추적 시스템)

  • Kim, Seung-Hun; Jung, Il-Kyun; Park, Chang-Woo; Hwang, Jung-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.17 no.3 / pp.229-235 / 2011
  • To make a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-light system and an infrared system is proposed. The proposed system separates the object by combining the ROIs (Regions of Interest) estimated from two different images based on a heterogeneous sensor that consolidates an ordinary CCD camera and an IR (Infrared) camera. The human body and face are detected in both images using different algorithms, such as histogram, optical flow, skin-color model, and Haar model. The pose of the human body is also estimated from the body detection result in the IR image using the PCA algorithm along with the AdaBoost algorithm. The results from each detection algorithm are then fused to extract the best detection result. To verify the heterogeneous sensor fusion system, a few experiments were conducted in various environments. The experimental results show that the system has good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robots or home systems; it also includes surveillance and military systems.
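
A minimal sketch of the ROI-fusion step described in this abstract: detections obtained independently from the CCD image and the IR image are merged by overlap, so only regions confirmed by both sensors survive. The box format, the IoU threshold, and the assumption that the two images are already co-registered are illustrative choices, not details taken from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def fuse_rois(ccd_boxes, ir_boxes, thr=0.3):
    """Keep CCD/IR detection pairs that agree and average them into one ROI."""
    fused = []
    for c in ccd_boxes:
        for r in ir_boxes:
            if iou(c, r) >= thr:
                fused.append(tuple((cv + rv) / 2 for cv, rv in zip(c, r)))
    return fused

# One person detected slightly differently by each camera.
print(fuse_rois([(100, 50, 180, 220)], [(105, 48, 185, 230)]))
```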

Implementation of a Sensor Fusion FPGA for an IoT System (사물인터넷 시스템을 위한 센서 융합 FPGA 구현)

  • Jung, Chang-Min; Lee, Kwang-Yeob; Park, Tae-Ryong
    • Journal of IKEEE / v.19 no.2 / pp.142-147 / 2015
  • In this paper, a Kalman filter-based sensor fusion filter that measures posture by calibrating and combining information obtained from acceleration and gyro sensors is proposed. Recent advances in sensor network technology have increased the need for sensor fusion technology. In the proposed approach, the nonlinear system model of the filter is converted to a linear system model through a Jacobian matrix operation, and the measurement value is predicted via Euler integration. The proposed filter was implemented at an operating frequency of 74 MHz using a Virtex-6 FPGA board from Xilinx Inc. The accuracy and reliability of the measured posture were validated by comparing the values obtained using the implemented filter with those from existing filters.
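
The abstract describes a Kalman filter that propagates posture with gyro rates (Euler integration) and corrects it with accelerometer information. Below is a minimal single-axis sketch of that idea; the state choice, noise values, and the reduction to one axis are assumptions for illustration, and this toy model is already linear, so the Jacobian linearization used in the paper is not needed here.

```python
import numpy as np

class AttitudeKF:
    """Toy gyro/accelerometer fusion for one rotation axis."""
    def __init__(self, dt):
        self.dt = dt
        self.x = np.zeros(2)                    # state: [angle (rad), gyro bias (rad/s)]
        self.P = np.eye(2)
        self.F = np.array([[1.0, -dt], [0.0, 1.0]])
        self.B = np.array([dt, 0.0])
        self.H = np.array([[1.0, 0.0]])
        self.Q = np.diag([1e-4, 1e-6])          # process noise (assumed values)
        self.R = np.array([[1e-2]])             # accelerometer noise (assumed value)

    def step(self, gyro_rate, accel_angle):
        # Predict: Euler integration of the bias-corrected gyro rate.
        self.x = self.F @ self.x + self.B * gyro_rate
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the angle derived from the accelerometer (gravity direction).
        y = accel_angle - (self.H @ self.x)[0]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K.flatten() * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]

kf = AttitudeKF(dt=0.01)
print(kf.step(gyro_rate=0.05, accel_angle=0.02))    # fused angle estimate
```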

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin; Lee, Hojun; Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.13 no.4 / pp.7-13 / 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time object image detection algorithm, YOLO, and an object tracking algorithm based on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track, and a classification ID is assigned. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
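
Two of the steps above lend themselves to a short sketch: projecting a YOLO bounding box into the vehicle frame with a homography, and assigning the vision classification to the LiDAR track with the closest bearing angle. The homography values, box format, and angle threshold below are placeholders, not the paper's calibration.

```python
import numpy as np

# Assumed pixel -> ground-plane homography (placeholder calibration).
H = np.array([[0.00, -0.05, 35.0],
              [0.01,  0.00, -3.7],
              [0.00,  0.00,  1.0]])

def bbox_to_vehicle_frame(box):
    """Map the bottom-center of a pixel-space box (x1, y1, x2, y2) to (x, y) in meters."""
    u, v = (box[0] + box[2]) / 2.0, box[3]
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

def match_by_angle(vision_xy, lidar_tracks, max_diff_rad=0.05):
    """Return the id of the LiDAR track whose bearing is closest to the vision track."""
    ang_v = np.arctan2(vision_xy[1], vision_xy[0])
    best, best_diff = None, max_diff_rad
    for track_id, (x, y) in lidar_tracks.items():
        diff = abs(np.arctan2(y, x) - ang_v)
        if diff < best_diff:
            best, best_diff = track_id, diff
    return best

xy = bbox_to_vehicle_frame((640, 300, 700, 460))
print(xy, match_by_angle(xy, {1: (12.0, 3.0), 2: (8.0, -4.0)}))
```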

8-port Coupled Transmission Line Modeling of KSTAR ICRF Antenna and Comparison with Measurement (커플링이 고려된 KSTAR ICRF 안테나의 8포트 전송선 회로 모델링 및 측정 결과 비교)

  • Kim, S.H.; Wang, S.J.; Hwang, C.K.; Kwak, J.G.
    • Journal of the Korean Vacuum Society / v.19 no.1 / pp.72-80 / 2010
  • It is very important to predict and analyze the changes in the voltage and current distributions of the current straps, the abnormal voltage distribution of the transmission line, and the resonance phenomena caused by coupling between current straps for more stable operation of the ICRF system. In this study, to understand these coupling phenomena, an 8-port coupled transmission line model is completed by applying S-parameters measured on the prototype KSTAR ICRF antenna to the model. The determined self-inductance, mutual inductance, and capacitance of the antenna straps are shown to be lower than those calculated from a 2D approximate model because of the finite length of the straps. The coupled transmission line model of the current straps will be utilized for the operation of the KSTAR ICRF system in the future.
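
As a back-of-the-envelope illustration of how strap inductances can be read off measured S-parameters, the sketch below converts a 2-port S-matrix to an impedance matrix with Z = Z0 (I + S)(I - S)^-1 and takes L = Im(Z)/ω. The S-matrix, frequency, and port count are made-up values, not KSTAR measurement data, and the paper's full model uses 8 ports.

```python
import numpy as np

Z0 = 50.0                                    # reference impedance of the measurement [ohm]
f = 30e6                                     # assumed ICRF-range test frequency [Hz]
w = 2 * np.pi * f

S = np.array([[0.10 + 0.92j, 0.02 + 0.05j],  # placeholder 2-port S-parameters
              [0.02 + 0.05j, 0.10 + 0.92j]])

I = np.eye(2)
Z = Z0 * (I + S) @ np.linalg.inv(I - S)      # impedance matrix of the coupled straps

L_self = np.imag(Z[0, 0]) / w                # self-inductance of one strap [H]
L_mutual = np.imag(Z[0, 1]) / w              # mutual inductance between straps [H]
print(f"L_self = {L_self * 1e9:.1f} nH, L_mutual = {L_mutual * 1e9:.1f} nH")
```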

Semantic Segmentation of Agricultural Crop Multispectral Image Using Feature Fusion (특징 융합을 이용한 농작물 다중 분광 이미지의 의미론적 분할)

  • Jun-Ryeol Moon; Sung-Jun Park; Joong-Hwan Baek
    • Journal of Advanced Navigation Technology / v.28 no.2 / pp.238-245 / 2024
  • In this paper, we propose a framework for improving the performance of semantic segmentation of agricultural multispectral images using feature fusion techniques. Most semantic segmentation models studied in the smart farm field are trained on RGB images and focus on increasing the depth and complexity of the model to improve performance. In this study, we go beyond the conventional approach and design and optimize a model with multispectral inputs and attention mechanisms. The proposed method fuses features from multiple channels collected from a UAV together with a single RGB image to improve feature extraction and to recognize complementary features that increase the learning effect. We study a model structure focused on feature fusion and compare its performance with other models by experimenting with channels and combinations favorable for crop images. The experimental results show that the model combining RGB and NDVI performs better than combinations with other channels.
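
A minimal sketch of the input-level fusion that performed best in the experiments above (RGB + NDVI): NDVI is computed from the NIR and red bands and stacked as a fourth channel for the segmentation network. Array shapes, band ordering, and normalization are assumptions about the UAV data layout, not the paper's exact pipeline.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index, clipped to [-1, 1]."""
    return np.clip((nir - red) / (nir + red + eps), -1.0, 1.0)

def fuse_rgb_ndvi(rgb, nir):
    """Return an H x W x 4 tensor (R, G, B, NDVI) as input for the segmentation model."""
    nd = ndvi(nir, rgb[..., 0])                        # red assumed to be channel 0
    return np.concatenate([rgb, nd[..., None]], axis=-1)

rgb = np.random.rand(256, 256, 3).astype(np.float32)   # dummy RGB image
nir = np.random.rand(256, 256).astype(np.float32)      # dummy NIR band
print(fuse_rgb_ndvi(rgb, nir).shape)                    # (256, 256, 4)
```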

Biomechanical Comparison of Spinal Fusion Methods Using Interspinous Process Compressor and Pedicle Screw Fixation System Based on Finite Element Method

  • Choi, Jisoo; Kim, Sohee; Shin, Dong-Ah
    • Journal of Korean Neurosurgical Society / v.59 no.2 / pp.91-97 / 2016
  • Objective : To investigate the biomechanical effects of a newly proposed Interspinous Process Compressor (IPC) and compare them with pedicle screw fixation at the surgical and adjacent levels of the lumbar spine. Methods : A three-dimensional finite element model of the intact lumbar spine was constructed, and two spinal fusion models using a pedicle screw fixation system and a new type of interspinous device, the IPC, were developed. Biomechanical effects such as range of motion (ROM) and facet contact force were analyzed at the surgical level (L3/4) and adjacent levels (L2/3, L4/5). In addition, the stress in the adjacent intervertebral discs (D2, D4) was investigated. Results : The overall results show that biomechanical parameters such as ROM, facet contact force, and stress in the adjacent intervertebral discs were similar between the PLIF and IPC models in all motions, based on the assumption that the implants were perfectly fused with the spine. Conclusion : The newly proposed fusion device, the IPC, had a similar fusion effect at the surgical level, and its biomechanical effects at the adjacent levels were also similar to those of the pedicle screw fixation system. However, for clinical applications, the real fusion effect between the spinous process and the hooks, the duration of fusion, and the influence on the spinous process need to be investigated through clinical study.

Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji; Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.635-644 / 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) are being used in various applications, and optical images are mainly used as training data for DL models. The major objective of this paper is object segmentation and building detection by utilizing multimodal datasets as well as optical images to train the Detectron2 model, which is one of the improved R-CNN (Region-based Convolutional Neural Network) frameworks. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features, which represent statistical texture information derived from the LiDAR data, were generated. The performance of DL models depends not only on the amount and characteristics of the training data but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings by applying hybrid fusion, a mixed method of early fusion and late fusion, resulted in a 32.65% improvement in the building detection rate compared to training with optical images only. The experiments demonstrated the complementary effect of training on multimodal data with unique characteristics and of the fusion strategy.
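
The early-fusion half of the hybrid strategy described above amounts to stacking co-registered modalities into one multi-band training image before it is fed to the network. The sketch below shows that step only; the channel names, normalization, and the omission of the late-fusion stage (merging per-modality predictions) are simplifying assumptions.

```python
import numpy as np

def normalize(band):
    """Scale a single band to [0, 1] so heterogeneous modalities are comparable."""
    lo, hi = float(band.min()), float(band.max())
    return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

def early_fusion(infrared, lidar_height, edges, haralick):
    """Stack co-registered bands into an H x W x 4 input tensor for training."""
    bands = [normalize(b.astype(np.float32))
             for b in (infrared, lidar_height, edges, haralick)]
    return np.stack(bands, axis=-1)

h, w = 512, 512
fused = early_fusion(np.random.rand(h, w), np.random.rand(h, w),
                     np.random.rand(h, w), np.random.rand(h, w))
print(fused.shape)   # (512, 512, 4)
```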

Spinal Fusion Based on Ex Vivo Gene Therapy Using Recombinant Human BMP Adenoviruses (사람 골 형성 단백질 Ex vivo 유전자 치료법을 이용한 척추 유합)

  • Kim, Gi-Beom; Kim, Jae-Ryong; Ahn, Myun-Hwan; Seo, Jae-Sung
    • Journal of Yeungnam Medical Science / v.24 no.2 / pp.262-274 / 2007
  • Purpose : Bone morphogenetic proteins (BMPs) play an important role in the formation of cartilage and bone, as well as in regulating the growth of chondroblasts and osteoblasts. In this study, we investigated whether recombinant human BMP adenoviruses are suitable for ex vivo gene therapy, using human fibroblasts and human bone marrow stromal cells in an animal spinal fusion model. Materials and Methods : Human fibroblasts and human bone marrow stromal cells were transduced with recombinant BMP-2 adenovirus (AdBMP-2) or recombinant BMP-7 adenovirus (AdBMP-7), referred to as AdBMP-7/BMSC, AdBMP-2/BMSC, AdBMP-7/HuFb, and AdBMP-2/HuFb. Alkaline phosphatase staining showed that each cell type secreted active BMPs. After the AdBMP-2- or AdBMP-7-transduced cells were injected into the paravertebral muscle of athymic nude mice, new bone formation and induction of spinal fusion were confirmed at 4 and 7 weeks on radiographs and by histochemical staining. Results : In the region where AdBMP-7/BMSC was injected, new bone formation was observed in all cases and spinal fusion was induced in two of these. AdBMP-2/BMSC induced bone formation, and spinal fusion occurred in one of five cases. However, in the regions where AdBMP/HuFb was injected, neither bone formation nor spinal fusion was observed. Conclusion : The osteoinductivity of AdBMP-7 was superior to that of AdBMP-2. In addition, the human bone marrow stromal cells were more efficient than the human fibroblasts for bone formation and spinal fusion. Therefore, the results of this study suggest that AdBMP-7/BMSC would be the most useful approach to ex vivo gene therapy for an animal spinal fusion model.


Evaluation of Spinal Fusion Using Bone Marrow Derived Mesenchymal Stem Cells with or without Fibroblast Growth Factor-4

  • Seo, Hyun-Sung; Jung, Jong-Kwon; Lim, Mi-Hyun; Hyun, Dong-Keun; Oh, Nam-Sik; Yoon, Seung-Hwan
    • Journal of Korean Neurosurgical Society / v.46 no.4 / pp.397-402 / 2009
  • Objective : In this study, the authors assessed the ability of rat bone marrow derived mesenchymal stem cells (BMDMSCs), in the presence of a growth factor, fibroblast growth factor-4 (FGF-4), and hydroxyapatite as a scaffold, to achieve posterolateral spinal fusion in a rat model. Methods : Using a rat posterolateral spine fusion model, the experimental study comprised three groups. Group 1 was composed of 6 animals implanted with 0.08 g of hydroxyapatite only. Group 2 was composed of 6 animals implanted with 0.08 g of hydroxyapatite containing 1×10^6/60 µL of rat BMDMSCs. Group 3 was composed of 6 animals implanted with 0.08 g of hydroxyapatite containing 1×10^6/60 µL of rat BMDMSCs and 1 µg of FGF-4 to induce bony differentiation of the BMDMSCs. Rats were assessed using radiographs obtained at 4, 6, and 8 weeks postoperatively. After sacrifice, the spines were explanted and assessed by manual palpation, high-resolution micro-computed tomography, and histological analysis. Results : Radiography, high-resolution micro-computed tomography, and manual palpation revealed spinal fusion in five rats (83%) in Group 2 at 8 weeks. In Group 1, however, three rats (60%) developed fusion at L4-L5 by radiographic examination and two (40%) by manual palpation. In Group 3, bone fusion was observed in only 50% of rats by manual palpation and radiographic examination at this time. Conclusion : The present study demonstrates that 0.08 g of hydroxyapatite with 1×10^6/60 µL of rat BMDMSCs induced bone fusion. FGF-4, added to differentiate the primitive BMDMSCs, did not induce fusion. Based on the histologic data, FGF-4 appears to induce fibrotic change rather than differentiation of the BMDMSCs into bone.

POSE-VIEWPOINT ADAPTIVE OBJECT TRACKING VIA ONLINE LEARNING APPROACH

  • Mariappan, Vinayagam; Kim, Hyung-O; Lee, Minwoo; Cho, Juphil; Cha, Jaesang
    • International journal of advanced smart convergence / v.4 no.2 / pp.20-28 / 2015
  • In this paper, we propose an effective tracking algorithm with an appearance model based on features extracted from video frames, adapting to posture variation and camera viewpoint changes by employing non-adaptive random projections that preserve the structure of the image feature space of objects. Existing online tracking algorithms update their models with features from recent video frames, and numerous issues remain to be addressed despite improvements in tracking. Data-dependent adaptive appearance models often encounter drift problems because online algorithms do not receive the required amount of data for online learning. Therefore, we propose an effective tracking algorithm whose appearance model is built from non-adaptive random projections of features extracted from video frames.
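
A short sketch of the non-adaptive random projection idea in this abstract: a random matrix is drawn once, never updated during tracking, and used to compress high-dimensional appearance features while approximately preserving distances between them. The dimensions and the sparse {-1, 0, +1} construction (Achlioptas-style) are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_random_matrix(n_components, n_features):
    """Entries are +1/-1 with probability 1/6 each and 0 with probability 2/3."""
    vals = rng.choice([-1.0, 0.0, 1.0], size=(n_components, n_features),
                      p=[1 / 6, 2 / 3, 1 / 6])
    return np.sqrt(3.0 / n_components) * vals

# Fixed (non-adaptive) projection used for every frame of the sequence.
R = sparse_random_matrix(n_components=50, n_features=32 * 32)

def project(patch):
    """Compress a 32x32 appearance patch into a 50-D feature for the tracker."""
    return R @ patch.reshape(-1)

a = rng.random((32, 32))
b = a + 0.01 * rng.random((32, 32))
# Nearby patches stay nearby after projection (the feature-space structure is preserved).
print(np.linalg.norm(project(a) - project(b)), np.linalg.norm(a - b))
```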