• Title/Summary/Keyword: vision model

Search results: 1,333

The Anti-inflammatory Effect of Cinnamomi Ramulus (A Study on the Anti-inflammatory Effect of Cinnamomi Ramulus)

  • Park Hi-Joon;Lee Ji-Suk;Lee Jae-Dong;Kim Nam-Jae;Pyo Ji-Hee;Kang Jun-Mo;Choe Il-Hwan;Kim Su-Young;Shim Bum-Sang;Lee Je-Hyun;Lim Sabina
    • The Journal of Korean Medicine
    • /
    • v.26 no.2 s.62
    • /
    • pp.140-151
    • /
    • 2005
  • Objectives: Cinnamomi Ramulus (CR), the young twig of Cinnamomum loureirii Nees, has been used to treat symptoms related to pain, rheumatic arthritis, and inflammation in Korean herbal medicine. This study was carried out to investigate the anti-inflammatory effect of CR in vivo and in vitro. Methods: Extracts of CR were prepared and their chemical components were examined by gas chromatography-mass spectrometry (GC-MS); the main components were cinnamaldehyde and coumarin. The extracts were administered in the carrageenan-induced rat paw edema model to evaluate the anti-inflammatory effect of CR. The expression of nitric oxide (NO), prostaglandin E2 (PGE2), inducible nitric oxide synthase (iNOS), and cyclooxygenase (COX)-2 was also quantified in lipopolysaccharide (LPS)-induced RAW 264.7 macrophages to survey the effect of CR in vitro. Results: We examined the anti-inflammatory activity of the 80% ethanol extract of Cinnamomi Ramulus in vivo using the carrageenan-induced rat paw edema model. Maximum inhibition of 54.91% was noted at a dose of 1000 mg/kg 2 hours after drug administration, indicating a potent anti-inflammatory effect. Conclusions: The results showed that Cinnamomi Ramulus dose-dependently suppressed LPS-induced NO production in RAW 264.7 macrophages and also decreased iNOS protein expression. Cinnamomi Ramulus also showed a significant inhibitory effect on LPS-induced PGE2 production and COX-2 expression.
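
The 54.91% figure above is a standard percent-inhibition calculation relative to the untreated control. A one-line sketch (the paw-edema volumes here are made-up numbers, not the paper's data):

```python
def percent_inhibition(control, treated):
    """Edema inhibition relative to the untreated control, in percent."""
    return (control - treated) / control * 100.0

# e.g., a control edema volume of 1.0 mL reduced to 0.4509 mL gives ~54.91%
```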


Image-based structural dynamic displacement measurement using different multi-object tracking algorithms

  • Ye, X.W.;Dong, C.Z.;Liu, T.
    • Smart Structures and Systems
    • /
    • v.17 no.6
    • /
    • pp.935-956
    • /
    • 2016
  • With the help of advanced image acquisition and processing technology, vision-based measurement methods have been broadly applied to structural monitoring and condition identification of civil engineering structures. Many noncontact approaches enabled by different digital image processing algorithms have been developed to overcome the problems of conventional structural dynamic displacement measurement. This paper presents three image processing algorithms for structural dynamic displacement measurement: the grayscale pattern matching (GPM) algorithm, the color pattern matching (CPM) algorithm, and the mean shift tracking (MST) algorithm. A vision-based system programmed with the three algorithms is developed for multi-point structural dynamic displacement measurement. The dynamic displacement time histories of multiple vision points are measured simultaneously by the vision-based system and a magnetostrictive displacement sensor (MDS) during laboratory shaking table tests of a three-story steel frame model. The comparative analysis indicates that the developed vision-based system exhibits excellent performance in structural dynamic displacement measurement with all three image processing algorithms. Field experiments are also carried out on an arch bridge, measuring displacement influence lines during loading tests to validate the effectiveness of the vision-based system.
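
The GPM idea above can be sketched as template matching by normalized cross-correlation: the template is searched across each frame, and the shift of the best match between frames is the pixel displacement. This pure-Python sketch uses a tiny made-up frame and template, not the paper's data or implementation:

```python
def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized 2-D patches."""
    n = len(template) * len(template[0])
    pm = sum(sum(r) for r in patch) / n
    tm = sum(sum(r) for r in template) / n
    num = sp = st = 0.0
    for pr, tr in zip(patch, template):
        for p, t in zip(pr, tr):
            num += (p - pm) * (t - tm)
            sp += (p - pm) ** 2
            st += (t - tm) ** 2
    return num / ((sp * st) ** 0.5 + 1e-12)

def match(frame, template):
    """Return (row, col) of the best-matching template position in frame."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for r in range(len(frame) - th + 1):
        for c in range(len(frame[0]) - tw + 1):
            patch = [row[c:c + tw] for row in frame[r:r + th]]
            score = ncc(patch, template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Tracking then reduces to calling `match` on successive frames and differencing the returned positions; a physical displacement follows from a pixel-to-millimeter scale factor.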

Analysis of AC PDP Dynamic False Contour based on the Model of the Human Vision System (Analysis of Dynamic False Contour Noise in AC PDPs Using a Human Visual Response Model)

  • Na Jung-Min;Kim Young-Hwan;Kang Bong-Gu
    • Proceedings of the IEEK Conference
    • /
    • 2000.11d
    • /
    • pp.85-88
    • /
    • 2000
  • We present a new approach to analyzing the dynamic false contour noise of AC plasma display panels (PDPs), which is known to severely degrade image quality. Compared with existing methods that consider only the amount of light emitted by the PDP during one field time, the proposed approach uses an impulse response model of the human vision system and estimates what human observers actually perceive as a function of time. Experimental results using various benchmark sub-field scan algorithms are included.
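
The core idea above is to weight the sub-field light pulses by a temporal impulse response rather than simply summing them over one field. A minimal sketch, assuming a decaying-exponential response (the actual response model, time constant, and pulse timings are illustrative, not the paper's):

```python
import math

def perceived_intensity(emissions, t, tau=20.0):
    """emissions: list of (time_ms, light) sub-field pulses.
    Perceived response at time t is the sum of past pulses weighted by an
    assumed exponential impulse response exp(-(t - t_i)/tau)."""
    return sum(light * math.exp(-(t - ti) / tau)
               for ti, light in emissions if ti <= t)
```

With this weighting, two sub-field codings that emit the same total light can produce different perceived intensities over time, which is exactly the dynamic-false-contour effect the paper analyzes.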


A novel computer vision-based vibration measurement and coarse-to-fine damage assessment method for truss bridges

  • Wen-Qiang Liu;En-Ze Rui;Lei Yuan;Si-Yi Chen;You-Liang Zheng;Yi-Qing Ni
    • Smart Structures and Systems
    • /
    • v.31 no.4
    • /
    • pp.393-407
    • /
    • 2023
  • To assess structural condition in a non-destructive manner, computer vision-based structural health monitoring (SHM) has become a focus. Compared to traditional contact-type sensors, computer vision-based measurement systems offer lower installation costs and broader measurement areas. In this study, we propose a novel computer vision-based vibration measurement and coarse-to-fine damage assessment method for truss bridges. First, the deep learning model FairMOT is introduced to track the regions of interest (ROIs) that include joints, improving automation over traditional target tracking algorithms. To calculate the displacement of the tracked ROIs accurately, a normalized cross-correlation method is adopted to fine-tune the offset, while Harris corner matching is utilized to correct the vibration displacement errors caused by non-parallelism between the truss plane and the image plane. Then, the stochastic damage locating vector (SDLV) method and Bayesian inference-based stochastic model updating (BI-SMU) are combined, exploiting their respective advantages, to achieve coarse-to-fine localization of the truss bridge's damaged elements. Finally, the severity of the damaged components is quantified by the BI-SMU. The experimental results show that the proposed method can accurately recognize the vibration displacement and evaluate the structural damage.
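
The Harris corner detector mentioned above scores each pixel by R = det(M) - k·trace(M)², where M is the local structure tensor of image gradients; corners have large R, edges negative R. A pure-Python sketch with central-difference gradients and a 3x3 window (the test image and constants are illustrative assumptions, not the paper's pipeline):

```python
def harris_response(img, k=0.04):
    """Map of Harris responses R = det(M) - k*trace(M)^2 over a 2-D image,
    with gradients left zero at the border for simplicity."""
    h, w = len(img), len(img[0])
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            ix[r][c] = (img[r][c + 1] - img[r][c - 1]) / 2.0
            iy[r][c] = (img[r + 1][c] - img[r - 1][c]) / 2.0
    resp = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            sxx = sxy = syy = 0.0
            for dr in (-1, 0, 1):          # 3x3 summation window
                for dc in (-1, 0, 1):
                    gx, gy = ix[r + dr][c + dc], iy[r + dr][c + dc]
                    sxx += gx * gx
                    sxy += gx * gy
                    syy += gy * gy
            det = sxx * syy - sxy * sxy
            resp[r][c] = det - k * (sxx + syy) ** 2
    return resp
```

On a bright-quadrant test image, the response is positive at the quadrant's corner and negative along its edges, which is how corner pixels are selected for matching.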

Chinese-clinical-record Named Entity Recognition using IDCNN-BiLSTM-Highway Network

  • Tinglong Tang;Yunqiao Guo;Qixin Li;Mate Zhou;Wei Huang;Yirong Wu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1759-1772
    • /
    • 2023
  • Chinese named entity recognition (NER) is a challenging task that seeks to find, recognize, and classify various types of information elements in unstructured text. Because Chinese text has no natural boundaries like the spaces in English text, Chinese named entity recognition is much more difficult. At present, most deep learning-based NER models are built on a bidirectional long short-term memory network (BiLSTM), yet their performance still leaves room for improvement. To further improve performance on Chinese NER tasks, we propose a new NER model, IDCNN-BiLSTM-Highway, which combines the BiLSTM, the iterated dilated convolutional neural network (IDCNN), and the highway network. In our model, IDCNN is used to achieve multiscale context aggregation over a long sequence of words. The highway network is used to effectively connect different layers, allowing information to pass through the network smoothly without attenuation. Finally, the globally optimal tag sequence is obtained by introducing a conditional random field (CRF). The experimental results show that, compared with other popular deep learning-based NER models, our model shows superior performance on two Chinese NER data sets, Resume and Yidu-S4k, with F1-scores of 94.98 and 77.59, respectively.
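
A highway layer, as used above to pass information between layers without attenuation, computes y = T(x)·H(x) + (1 - T(x))·x, where H is a transform and T a sigmoid gate. A minimal element-wise sketch with scalar weights (the tiny weights are illustrative, not trained values from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def highway(x, wh, bh, wt, bt):
    """Element-wise highway layer for a 1-D input vector x.
    H(x) = tanh(wh*x + bh) is the transform, T(x) = sigmoid(wt*x + bt)
    the gate; as T -> 0 the layer passes x through unchanged."""
    y = []
    for xi in x:
        h = math.tanh(wh * xi + bh)
        t = sigmoid(wt * xi + bt)
        y.append(t * h + (1.0 - t) * xi)
    return y
```

With a strongly negative gate bias the gate closes and the layer becomes an identity map, which is what lets deep stacks train without the signal attenuating.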

Autonomous Control System of Compact Model-helicopter

  • Kang, Chul-Ung;Jun Satake;Takakazu Ishimatsu;Yoichi Shimomoto;Jun Hashimoto
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1998.10a
    • /
    • pp.95-99
    • /
    • 1998
  • We introduce an autonomous flying system using a model-helicopter. A feature of the system is that autonomous flight is realized on a low-cost compact model-helicopter. Our helicopter system is divided into two parts: one on the helicopter and the other on the ground. The helicopter carries a vision sensor and an electronic compass that includes a tilt sensor. The control system on the ground monitors and controls the helicopter's movement. We first introduce the configuration of our helicopter system with its vision sensor and electronic compass. To determine the 3-D position and posture of the helicopter, a technique of image recognition using a monocular image is described, based on the fusion of the vision sensor and the electronic compass. Finally, we show experimental results obtained during hovering, which demonstrate the effectiveness of our system on the compact model-helicopter.
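
A simplified illustration of the vision-plus-tilt fusion idea above: if the tilt sensor supplies the camera's depression angle, a single monocular pixel measurement of a ground point fixes its horizontal distance. All symbols (height h, tilt theta, pixel offset v, focal length f) and values are illustrative assumptions, not the paper's method:

```python
import math

def ground_distance(h, theta, v, f):
    """Horizontal distance to a ground point imaged v pixels below the
    image center, for a camera at height h tilted down by theta (radians)
    with focal length f in pixels."""
    phi = math.atan2(v, f)          # extra depression angle from the pixel offset
    return h / math.tan(theta + phi)
```

For example, at 10 m altitude with the camera tilted 45° down, a point at the image center lies 10 m ahead; this is the kind of constraint that lets a monocular image plus compass/tilt readings resolve 3-D position.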


Vision-based Predictive Model on Particulates via Deep Learning

  • Kim, SungHwan;Kim, Songi
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.5
    • /
    • pp.2107-2115
    • /
    • 2018
  • Over recent years, high concentrations of particulate matter (a.k.a. fine dust) in South Korea have increasingly evoked considerable concerns about public health. It is intractable to track and report PM10 measurements to the public on a real-time basis. Even worse, such records merely amount to averaged particulate concentrations over particular regions. Under these circumstances, people are prone to being at risk from rapidly dispersing air pollution. To address this challenge, we attempt to build a deep learning model that predicts the concentration of particulates (PM10). The proposed method learns a binary decision rule on the basis of video sequences to predict whether the real-time level of particulates (PM10) is harmful (> 80 µg/m³) or not. To the best of our knowledge, no vision-based PM10 measurement method has previously been proposed in atmospheric research. In experimental studies, the proposed model is found to outperform other existing algorithms by virtue of convolutional deep learning networks. In this regard, we believe this vision-based predictive model has strong potential to handle upcoming challenges related to particulate measurement.
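
The binary decision rule above labels a frame harmful when PM10 exceeds 80 µg/m³. A minimal sketch of the labeling step plus a toy stand-in predictor that thresholds a single haze-like feature (mean pixel intensity); the feature and its threshold are illustrative assumptions, not the paper's convolutional network:

```python
HARMFUL_PM10 = 80.0  # µg/m^3, the cutoff used in the paper

def label(pm10):
    """Binary training label: 1 = harmful, 0 = not harmful."""
    return 1 if pm10 > HARMFUL_PM10 else 0

def predict(frame, haze_threshold=0.6):
    """Toy stand-in predictor: call a frame harmful when its mean pixel
    intensity (a crude haze proxy, values in [0, 1]) exceeds a threshold."""
    n = sum(len(row) for row in frame)
    mean = sum(sum(row) for row in frame) / n
    return 1 if mean > haze_threshold else 0
```

In the paper, the hand-set threshold rule is replaced by a CNN trained on video frames paired with `label(pm10)` targets.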

Preliminary Study for Vision A.I-based Automated Quality Supervision Technique of Exterior Insulation and Finishing System - Focusing on Form Bonding Method - (A Preliminary Study on A.I. Vision-based Automated Quality Supervision Technology for Exterior Insulation and Finishing Systems - Focusing on the Wet Adhesion Method for Insulation Boards -)

  • Yoon, Sebeen;Lee, Byoungmin;Lee, Changsu;Kim, Taehoon
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2022.04a
    • /
    • pp.133-134
    • /
    • 2022
  • This study proposed a vision artificial intelligence-based automated supervision technology for exterior insulation and finishing systems, and basic research was conducted toward it. The proposed technology consists of an object detection model (YOLOv5) and a component that derives the necessary information from the detection results and then determines whether the adhesion regulations for exterior insulation are complied with. In a test, the proposed model achieved a judgment accuracy of about 70%. The results of this study are expected to contribute to securing exterior insulation quality and, further, to realizing energy-saving, eco-friendly buildings. As further research, it is necessary to improve the accuracy of the object detection model by enlarging the training data set, and to cover additional related regulations such as the adhesive area ratio.
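
The rule-checking stage described above can be sketched as a function over detection results: given boxes from an object detector, count the adhesive dabs on a board and check an area-coverage rule. The class name `adhesive_dab`, the required dab count, and the coverage ratio are all hypothetical placeholders, not the actual regulations or YOLOv5 output format:

```python
def box_area(b):
    """Area of an axis-aligned box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = b
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def check_board(board_box, detections, min_dabs=8, min_ratio=0.2):
    """detections: list of (class_name, box). The board complies when at
    least min_dabs adhesive dabs are detected and their total area covers
    at least min_ratio of the board area (assumed rule)."""
    dabs = [b for cls, b in detections if cls == "adhesive_dab"]
    ratio = sum(box_area(b) for b in dabs) / box_area(board_box)
    return len(dabs) >= min_dabs and ratio >= min_ratio
```

Separating detection from rule checking like this also makes it straightforward to add further regulations (e.g., an edge-ribbon requirement) as extra predicates.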


An object tracking based robot manipulator built on fast stereo vision

  • Huang, Hua;Won, Sangchul
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.99.5-99
    • /
    • 2002
  • 3-D object tracking framework
  • A fast stereo vision system is used to obtain range images
  • The CONDENSATION algorithm is used to track objects
  • A superquadrics model is used for object recognition
  • The target objects resemble coils in steel works
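
CONDENSATION, listed above, is a particle filter: predict particles with noise, weight them by a measurement likelihood, and resample in proportion to the weights. A minimal 1-D sketch; the Gaussian motion/measurement models and all constants are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def condensation_step(particles, measurement, motion_noise=0.5, meas_noise=1.0):
    """One predict-weight-resample cycle for a 1-D state."""
    # 1) predict: diffuse each particle with motion noise
    predicted = [p + random.gauss(0.0, motion_noise) for p in particles]
    # 2) weight: Gaussian likelihood of the measurement given each particle
    weights = [math.exp(-0.5 * ((p - measurement) / meas_noise) ** 2)
               for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3) resample: draw particles in proportion to their weights
    return random.choices(predicted, weights=weights, k=len(predicted))

def estimate(particles):
    """Point estimate of the tracked state: the particle mean."""
    return sum(particles) / len(particles)
```

Because the posterior is kept as a particle cloud rather than a single Gaussian, the tracker can represent multi-modal hypotheses, which is why CONDENSATION suits cluttered scenes like coil yards.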
