• Title/Summary/Keyword: Camera Matrix


In vitro application of Angiographic PIV technique to blood flows (Angiographic PIV 기법을 이용한 혈액유동의 in-vitro 연구)

  • Kim, Guk-Bae;Lim, Nam-Yun;Jung, Sung-Yong;Lee, Sang-Joon
    • The Korean Society of Visualization: Conference Proceedings
    • /
    • 2007.11a
    • /
    • pp.105-108
    • /
    • 2007
  • To diagnose vascular diseases from the viewpoint of hemodynamics, we need detailed quantitative hemodynamic information on the related blood flows, with a high spatial resolution of tens of micrometers and a high temporal resolution on the order of milliseconds. To investigate in-vivo hemodynamic phenomena of vascular circulatory diseases, a new diagnostic technique combining medical radiography and the PIV method was developed. This technique, called the 'Angiographic PIV system', consists of a medical X-ray tube, an X-ray CCD camera, a shutter module for generating double-pulse X-rays, and a synchronizer. Through several preliminary tests, the feasibility of the Angiographic PIV technique was verified. For in-vivo applications to real blood flows, we developed tracer microcapsules optimized for this system, made of a contrast material of iodine and a matrix material of PVA (polyvinylpyrrolidone). In the near future, the Angiographic PIV technique will be used for understanding the hemodynamic phenomena of vascular diseases and for their early detection.
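
The abstract names PIV as the measurement principle but does not spell out the processing step. As a minimal sketch, the displacement between the two frames of a double-pulse pair is typically taken from the peak of a cross-correlation map of interrogation windows; the NumPy/SciPy snippet below illustrates that idea under these assumptions and is not code from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def piv_displacement(win_a, win_b):
    """Mean particle displacement between two interrogation windows
    (frame A and frame B of a double-pulse pair), taken from the peak
    of their cross-correlation map."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Cross-correlation computed as convolution with a flipped kernel
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Offset of the peak from the zero-shift position
    dy = peak[0] - (win_a.shape[0] - 1)
    dx = peak[1] - (win_a.shape[1] - 1)
    return dx, dy  # in pixels; divide by the pulse separation for velocity
```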


Autonomous-flight Drone Algorithm using Computer Vision and GPS (컴퓨터 비전과 GPS를 이용한 드론 자율 비행 알고리즘)

  • Kim, Junghwan;Kim, Shik
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.3
    • /
    • pp.193-200
    • /
    • 2016
  • This paper introduces an algorithm for an autonomous navigation flight system for middle- to low-priced drones using computer vision and GPS. Existing drone operation mainly relies on methods such as entering the flight path into the drone's software before the flight or following the signal transmitted from the controller. This paper instead introduces a new algorithm that allows the autonomous navigation flight system to locate a specific place, a specific shape of place, or a specific space in an area that the user wishes to discover. Technology developed for the military industry was implemented on a low-cost hobby drone without changing its hardware, and the proposed algorithm was used to maximize its performance. When the user inputs an image of the place to be found, the camera mounted on the drone processes its images and searches for the corresponding area of interest. With this algorithm, the autonomous navigation flight system for middle- to low-priced drones is expected to be applicable to a variety of industries.
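
The abstract does not name a specific matching method, so the sketch below is only one plausible reading of the image-search step: comparing a user-supplied reference image of the target place against a camera frame using ORB features and brute-force matching in OpenCV. The function name, feature counts, and thresholds are illustrative assumptions.

```python
import cv2

def target_visible(reference_path, frame_bgr, min_matches=30):
    """Return True when the user-supplied reference image appears to be
    visible in the current drone camera frame (ORB + brute-force matching)."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=1000)
    kp_r, des_r = orb.detectAndCompute(ref, None)
    kp_f, des_f = orb.detectAndCompute(gray, None)
    if des_r is None or des_f is None:
        return False

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_r, des_f)
    good = [m for m in matches if m.distance < 50]  # Hamming-distance cutoff
    return len(good) >= min_matches
```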

Fast High-throughput Screening of the H1N1 Virus by Parallel Detection with Multi-channel Microchip Electrophoresis

  • Zhang, Peng;Park, Guenyoung;Kang, Seong Ho
    • Bulletin of the Korean Chemical Society
    • /
    • v.35 no.4
    • /
    • pp.1082-1086
    • /
    • 2014
  • A multi-channel microchip electrophoresis (MCME) method with parallel laser-induced fluorescence (LIF) detection was developed for rapid screening of the H1N1 virus. The hemagglutinin (HA) and nucleocapsid protein (NP) genes of the H1N1 virus were amplified using the polymerase chain reaction (PCR). The amplified PCR products of the H1N1 virus DNA (HA, 116 bp and NP, 195 bp) were simultaneously detected within 25 s in three parallel channels using an expanded laser beam and a charge-coupled device camera. The parallel separations were demonstrated using a sieving gel matrix of 0.3% poly(ethylene oxide) (Mr = 8,000,000) in 1× TBE buffer (pH 8.4) with a programmed step electric field strength (PSEFS). The method was ~20 times faster than conventional slab gel electrophoresis, without any loss of resolving power or reproducibility. The proposed MCME/PSEFS assay technique provides a simple and accurate method for fast high-throughput screening of infectious virus DNA molecules under 400 bp.

Overlap Estimation for Panoramic Image Generation (중첩 영역 추정을 통한 파노라마 영상 생성)

  • Yang, Jihee;Jeon, Jihye;Park, Gooman
    • Journal of Satellite, Information and Communications
    • /
    • v.9 no.4
    • /
    • pp.32-37
    • /
    • 2014
  • The panorama is a good alternative for overcoming a narrow field of view (FOV), and is studied in robot vision, stereo cameras, and panoramic image registration and modeling. A panorama can present a view with angles wider than the human field of view and provide a realistic space that gives the feeling of being on the scene. If all correspondences are used, it becomes difficult to find strong features and correspondences and to estimate an accurate homography matrix under geometric changes between images, and the computational load increases. Accordingly, we used the SURF algorithm to detect features and estimated overlapping areas with high similarity by comparing and analyzing the histograms of the input images. We also solved the input-order problem, so that a panorama can be generated from input images given in arbitrary order.
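
A rough sketch of the two ingredients named above, histogram comparison for overlap estimation and feature-based homography estimation, is shown below. It assumes 8-bit grayscale inputs and substitutes ORB for SURF (SURF needs the opencv-contrib build); the bin count, match limit, and canvas size are placeholders rather than the paper's settings.

```python
import cv2
import numpy as np

def overlap_score(img_a, img_b):
    """Rough overlap likelihood between two grayscale images from
    histogram correlation (higher means more likely to overlap)."""
    h_a = cv2.calcHist([img_a], [0], None, [64], [0, 256])
    h_b = cv2.calcHist([img_b], [0], None, [64], [0, 256])
    cv2.normalize(h_a, h_a)
    cv2.normalize(h_b, h_b)
    return cv2.compareHist(h_a, h_b, cv2.HISTCMP_CORREL)

def stitch_pair(img_a, img_b):
    """Warp img_b onto img_a with a RANSAC-estimated homography."""
    detector = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = detector.detectAndCompute(img_a, None)
    kp_b, des_b = detector.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)
    matches = sorted(matches, key=lambda m: m.distance)[:200]
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))  # widen the canvas to the right
    canvas[0:h, 0:w] = img_a
    return canvas
```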

All-In-One Observing Software for Small Telescope

  • Han, Jimin;Pak, Soojong;Ji, Tae-Geun;Lee, Hye-In;Byeon, Seoyeon;Ahn, Hojae;Im, Myungshin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.43 no.2
    • /
    • pp.57.2-57.2
    • /
    • 2018
  • In astronomical observation, sequential device control and real-time data processing are important for maximizing observing efficiency. We have developed a series of automatic observing software packages (KAOS, KHU Automatic Observing Software), e.g. KAOS30 for the 30 inch telescope at the McDonald Observatory and KAOS76 for the 76 cm telescope at the KHAO. The series consists of four packages: the DAP (Data Acquisition Package) for CCD camera control, the TCP (Telescope Control Package) for telescope control, the AFP (Auto Focus Package) for focusing, and the SMP (Script Mode Package) for automation of sequences. In this poster, we introduce KAOS10, which is being developed to control a small telescope with an aperture of about 10 cm. The hardware components are the QHY8pro CCD, the QHY5-II CMOS, the iOptron CEM 25 mount, and the Stellarvue SV102ED telescope. The devices are controlled on the ASCOM Platform. In addition to the previous packages (DAP, SMP, TCP), KAOS10 has a QLP (Quick Look Package) and an astrometry function in the TCP. The QHY8pro CCD has an RGB Bayer matrix, and the QLP transforms RGB images into BVR images in real time. The TCP's astrometry function adjusts the telescope position by comparing the image with a star catalog. In the future, we expect KAOS10 to be used for research on transient objects such as variable stars.
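
The RGB-Bayer-to-BVR step of the QLP could look roughly like the sketch below, which demosaics a raw frame with OpenCV and treats the colour channels as crude proxies for the B, V, and R bands. The Bayer pattern and the lack of calibrated colour-term coefficients are assumptions; the actual KAOS10 transform is not described in the abstract.

```python
import cv2

def bayer_to_bvr(raw_frame):
    """Demosaic a raw Bayer frame and split it into rough B, V, R planes."""
    # The RGGB pattern below is an assumption about the sensor, not a spec.
    rgb = cv2.cvtColor(raw_frame, cv2.COLOR_BayerRG2BGR)
    b, g, r = cv2.split(rgb)
    # Crude proxy: blue -> B, green -> V, red -> R.  A calibrated pipeline
    # would apply colour-term coefficients derived from standard-star fields.
    return b, g, r
```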


Deep Learning Application of Gamma Camera Quality Control in Nuclear Medicine (핵의학 감마카메라 정도관리의 딥러닝 적용)

  • Jeong, Euihwan;Oh, Joo-Young;Lee, Joo-Young;Park, Hoon-Hee
    • Journal of radiological science and technology
    • /
    • v.43 no.6
    • /
    • pp.461-467
    • /
    • 2020
  • In the field of nuclear medicine, errors sometimes occur because the assessment of the uniformity of gamma cameras relies on the naked eye of the evaluator. To minimize these errors, we created an artificial intelligence model based on a CNN algorithm and assessed its usefulness. We produced 20,000 normal images and partial cold-region images using Python and trained a ResNet18 model. The training results showed accuracy, specificity, and sensitivity of 95.01%, 92.30%, and 97.73%, respectively. According to the confusion-matrix evaluation of the artificial intelligence and the expert groups, the artificial intelligence achieved accuracy, specificity, and sensitivity of 94.00%, 91.50%, and 96.80%, respectively, while the expert groups achieved 69.00%, 64.00%, and 74.00%, respectively. The results showed that the artificial intelligence performed better than the expert groups. In addition, by having the radiological technologist and the AI check the images together, errors that may occur during the quality control process can be reduced, providing a better examination environment for patients, convenience for radiologists, and improved work efficiency.
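
For reference, the three reported figures follow directly from a 2x2 confusion matrix; the short sketch below shows the arithmetic. The matrix layout and counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

def qc_metrics(conf_mat):
    """Accuracy, specificity and sensitivity from a 2x2 confusion matrix
    laid out as [[TN, FP], [FN, TP]] (normal vs. cold-region images)."""
    tn, fp = conf_mat[0]
    fn, tp = conf_mat[1]
    accuracy = (tp + tn) / conf_mat.sum()
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)
    return accuracy, specificity, sensitivity

# Illustrative counts only (not the study's data)
print(qc_metrics(np.array([[90.0, 10.0], [5.0, 95.0]])))
```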

Sidewalk Gaseous Pollutants Estimation Through UAV Video-based Model

  • Omar, Wael;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.1
    • /
    • pp.1-20
    • /
    • 2022
  • As unmanned aerial vehicle (UAV) technology has grown in popularity over the years, it has been introduced for air quality monitoring. It can be used to estimate sidewalk emission concentrations by calculating road traffic emission factors for different vehicle types. These calculations require simulating the spread of pollutants from one or more given sources. For this purpose, a Gaussian plume dispersion model was developed based on the US EPA Motor Vehicle Emission Simulator (MOVES), which provides an accurate estimate of fuel consumption and pollutant emissions from vehicles under a wide range of user-defined conditions. This paper describes a methodology for estimating the concentration on the sidewalk of emissions produced by different types of vehicles. The line source model considers vehicle parameters, wind speed and direction, and pollutant concentration, using a UAV equipped with a monocular camera; all quantities were sampled over an hourly interval. A YOLOv5 deep learning model is used for vehicle detection, Deep SORT (Simple Online and Realtime Tracking) for vehicle tracking, and a homography transformation matrix for vehicle localization, locating each vehicle and computing its speed and acceleration; finally, a Gaussian plume dispersion model estimates the CO and NOx concentrations at a sidewalk point. The results demonstrate that the estimated pollutant values give a fast and reasonable indication for any near-road receptor point using an inexpensive UAV, without installing air monitoring stations along the road.
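
The abstract names a Gaussian plume dispersion model; a minimal sketch of the classic ground-reflected plume equation is shown below. The dispersion curves, effective source height, and function name are illustrative assumptions, not the MOVES-calibrated coefficients used in the paper.

```python
import numpy as np

def plume_concentration(q, u, x, y, z, h=0.5):
    """Gaussian plume concentration with ground reflection at a receptor.
    q: emission rate (g/s), u: wind speed (m/s), (x, y, z): downwind,
    crosswind and vertical receptor offsets (m), h: effective source height.
    The Briggs-style dispersion curves below are placeholder assumptions."""
    sigma_y = 0.22 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.20 * x
    base = q / (2.0 * np.pi * u * sigma_y * sigma_z)
    crosswind = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2)) +
                np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return base * crosswind * vertical
```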

Lane Detection Based on Inverse Perspective Transformation and Machine Learning in Lightweight Embedded System (경량화된 임베디드 시스템에서 역 원근 변환 및 머신 러닝 기반 차선 검출)

  • Hong, Sunghoon;Park, Daejin
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.1
    • /
    • pp.41-49
    • /
    • 2022
  • This paper proposes a novel lane detection algorithm based on inverse perspective transformation and machine learning for a lightweight embedded system. The inverse perspective transformation obtains a bird's-eye view of the scene from a perspective image in order to remove perspective effects. This method requires only the internal and external parameters of the camera, rather than a homography matrix with 8 degrees of freedom (DoF) that maps points in one image to the corresponding points in another. To improve the accuracy and speed of lane detection in complex road environments, a machine learning algorithm is applied only to regions that pass a first classifier. The first classifier is applied to the bird's-eye view image to determine candidate lane regions, which improves the detection speed; a lane region that passes the first classifier is then detected more accurately through machine learning. The system has been tested on driving video of a vehicle in the embedded system. The experimental results show that the proposed method works well in various road environments and meets the real-time requirements: its lane detection is about 3.85 times faster than edge-based lane detection, and its detection accuracy is higher.
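
A common way to realize this is to build the image-to-road-plane homography directly from the intrinsic matrix K and the extrinsics R, t, taking the road as the world plane Z = 0; the sketch below shows that construction. The calibration values are placeholders, and the paper's exact formulation may differ.

```python
import numpy as np
import cv2

def ipm_homography(K, R, t):
    """Homography mapping image pixels to the road plane (Z = 0).
    For ground-plane points, s*[u, v, 1]^T = K [r1 r2 t] [X, Y, 1]^T,
    so the image-to-ground map is the inverse of K [r1 r2 t]."""
    P = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return np.linalg.inv(P)

# Assumed calibration values for illustration (not from the paper):
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
R, _ = cv2.Rodrigues(np.array([np.deg2rad(100.0), 0.0, 0.0]))  # camera pitched toward the road
t = np.array([0.0, 0.0, 1.5])                                  # camera 1.5 m above the ground
H = ipm_homography(K, R, t)
# bird_eye = cv2.warpPerspective(frame, H, (out_w, out_h))  # scale H to the output grid first
```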

Compression and Performance Evaluation of CNN Models on Embedded Board (임베디드 보드에서의 CNN 모델 압축 및 성능 검증)

  • Moon, Hyeon-Cheol;Lee, Ho-Young;Kim, Jae-Gon
    • Journal of Broadcast Engineering
    • /
    • v.25 no.2
    • /
    • pp.200-207
    • /
    • 2020
  • Recently, deep neural networks such as CNNs have shown excellent performance in various fields such as image classification, object recognition, and visual quality enhancement. However, as the model size and computational complexity of deep learning models increase for most applications, it is hard to apply neural networks to IoT and mobile environments. Therefore, neural network compression algorithms that reduce the model size while keeping the performance have been studied. In this paper, we apply several compression methods to CNN models and evaluate their performance in an embedded environment. To evaluate the performance, the classification performance and inference time of the original and compressed CNN models, on images captured by the camera, are measured on an embedded board equipped with the QCS605, a customized AI chip. A few CNN models, MobileNetV2, ResNet50, and VGG-16, are compressed by applying pruning and matrix decomposition. The experimental results show that the compressed models achieve not only a model size reduction of 1.3 to 11.2 times at a classification performance loss of less than 2% compared to the original models, but also an inference time reduction of 1.2 to 2.21 times and a memory reduction of 1.2 to 3.8 times on the embedded board.
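
As a sketch of the two compression methods named above, the PyTorch snippet below applies magnitude pruning to the convolution layers and a low-rank (SVD) decomposition to the final fully connected layer of a torchvision MobileNetV2. The pruning ratio and the rank are placeholders, not the settings used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision.models import mobilenet_v2

model = mobilenet_v2(pretrained=True)

# 1) Unstructured magnitude pruning of every Conv2d layer (30% is a placeholder).
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # make the sparsity permanent

# 2) Low-rank (SVD) decomposition of the final fully connected layer:
#    W (out x in) is approximated by two smaller factors of rank r.
fc = model.classifier[1]
U, S, Vh = torch.linalg.svd(fc.weight.data, full_matrices=False)
r = 64
first = nn.Linear(fc.in_features, r, bias=False)
second = nn.Linear(r, fc.out_features, bias=True)
first.weight.data = (torch.diag(S[:r]) @ Vh[:r]).contiguous()
second.weight.data = U[:, :r].contiguous()
second.bias.data = fc.bias.data.clone()
model.classifier[1] = nn.Sequential(first, second)
```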

Hierarchical manner of motion parameters for sports video mosaicking (스포츠 동영상의 모자익을 위한 이동계수의 계층적 향상)

  • Lee, Jae-Cheol;Lee, Soo-Jong;Ko, Young-Hoon;Noh, Heung-Sik;Lee, Wan-Ju
    • The Journal of Information Technology
    • /
    • v.7 no.2
    • /
    • pp.93-104
    • /
    • 2004
  • Sports scenes are characterized by a large amount of global motion due to camera pan and zoom, and include many small objects moving independently. Some short periods of sports games are thrilling to televiewers and important to producers, yet such scenes exhibit exceptionally dynamic motion and are very difficult to analyze with conventional algorithms. In this thesis, several algorithms are proposed for global motion analysis of these dynamic scenes, and the proposed algorithms are shown to work well for motion compensation and panorama synthesis. When cascading inter-frame motions, accumulated errors are unavoidable. To minimize these errors, an interpolation method for motion vectors is introduced. An affine transform or perspective projection transform is regarded as a square matrix, which can be factorized into a small number of motion vectors. To solve this factorization problem, we propose adapting the Newton-Raphson method to vector and matrix form, which is also computationally efficient. By combining multi-frame motion estimation and the corresponding interpolation in a hierarchical manner, an enhancement algorithm for motion parameters is proposed, which is suitable for motion compensation and panorama synthesis. The proposed algorithms are suitable for special-effect rendering in broadcast systems, video indexing, tracking in complex scenes, and other fields requiring global motion estimation.
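
The cascading step that produces the accumulated error discussed above can be sketched as chaining per-pair homographies back to a reference frame; the snippet below shows that step together with one common way of estimating the pairwise global motion (sparse optical flow plus RANSAC). The estimation method and parameters are assumptions, not the paper's algorithm.

```python
import numpy as np
import cv2

def estimate_pairwise(prev_gray, cur_gray):
    """Global motion between consecutive frames as a homography,
    estimated from sparse optical flow with RANSAC."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1
    H, _ = cv2.findHomography(pts[good], nxt[good], cv2.RANSAC, 3.0)
    return H  # maps frame k to frame k+1

def cascade_to_reference(pairwise_H):
    """Chain pairwise homographies H_k (frame k -> frame k+1) into
    transforms mapping every frame back to frame 0.  Errors accumulate
    along this chain, which is what the interpolation step corrects."""
    to_ref = [np.eye(3)]
    H = np.eye(3)
    for H_k in pairwise_H:
        H = H @ np.linalg.inv(H_k)   # frame k+1 -> frame k -> ... -> frame 0
        to_ref.append(H.copy())
    return to_ref
```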
