• Title/Summary/Keyword: camera model

An Improved RANSAC Algorithm Based on Correspondence Point Information for Calculating Correct Conversion of Image Stitching (이미지 Stitching의 정확한 변환관계 계산을 위한 대응점 관계정보 기반의 개선된 RANSAC 알고리즘)

  • Lee, Hyunchul;Kim, Kangseok
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.1
    • /
    • pp.9-18
    • /
    • 2018
  • Recently, as virtual-reality-based content has increased, the use of image stitching technology has also grown. Image stitching matches multiple images to produce a high-resolution, wide field-of-view image, and it is used in various fields to overcome the limitations of images captured with a single camera. Image stitching detects feature points and corresponding points to match multiple images and calculates the homography among the images using the RANSAC algorithm. Corresponding points are generally needed to calculate the transformation relation; however, they contain various types of noise caused by false assumptions or errors about the transformation, and this noise hinders accurate estimation of the transformation. Because matching methods often produce incorrect correspondences, the RANSAC algorithm is used to estimate an accurate transformation while rejecting the outliers that interfere with estimating the model parameters. In this paper, we propose an algorithm that extracts more accurate inliers and computes an accurate transformation by exploiting correspondence-point relation information within the RANSAC algorithm. The correspondence-point relation information uses the distance ratio between corresponding points obtained during image matching. This paper aims to reduce processing time while maintaining the same performance as RANSAC.
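
As a rough companion to this entry, the sketch below shows a generic RANSAC homography loop in which the random sampling is biased by a Lowe-style descriptor distance ratio. This is an illustrative simplification under assumed inputs, not the authors' exact use of correspondence-point relation information.

```python
import numpy as np
import cv2

def ransac_homography(src_pts, dst_pts, ratios, n_iter=1000, thresh=3.0):
    """Plain RANSAC over point correspondences.

    src_pts, dst_pts : (N, 2) float arrays of matched keypoint coordinates.
    ratios           : (N,) Lowe-style distance ratios in (0, 1); smaller means
                       a more reliable match (an illustrative stand-in for the
                       paper's correspondence-relation information).
    """
    n = len(src_pts)
    # Bias the random sampling toward low-ratio (more reliable) correspondences.
    weights = (1.0 - ratios) / np.sum(1.0 - ratios)
    best_inliers, best_H = None, None
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        idx = rng.choice(n, size=4, replace=False, p=weights)
        H = cv2.getPerspectiveTransform(src_pts[idx].astype(np.float32),
                                        dst_pts[idx].astype(np.float32))
        # Project all source points and measure reprojection error.
        proj = cv2.perspectiveTransform(src_pts.reshape(-1, 1, 2).astype(np.float32), H)
        err = np.linalg.norm(proj.reshape(-1, 2) - dst_pts, axis=1)
        inliers = err < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_H = inliers, H
    if best_inliers is None or best_inliers.sum() < 4:
        return best_H, best_inliers
    # Refit on all inliers with least squares for the final homography.
    H_final, _ = cv2.findHomography(src_pts[best_inliers], dst_pts[best_inliers], 0)
    return H_final, best_inliers
```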

Analysis of Heat Environment in Nursery Pig Behavior (자돈의 행동에 미치는 열환경 분석)

  • Sang, J.I.;Choi, H.L.;Jeon, J.H.;Jeon, B.S.;Kang, H.S.;Lee, E.S.;Park, K.H.
    • Journal of Animal Environmental Science
    • /
    • v.15 no.2
    • /
    • pp.131-138
    • /
    • 2009
  • This study was conducted to find ways to control the environment using the difference between body temperature and background temperature based on swine activity, and to apply the findings to the environmental control system of swine barns. The results are as follows. 1. Swine activity related to background temperature was captured as color images, and swine activity status was categorized into cold, comfortable, and hot periods with a visualization (thermal imaging) system. 2. The thermal imaging system consisted of an infrared CCD camera, an image processing board (DIF TH3100), and a main computer (400 MHz, 128 MB, 586 Pentium class) with a C++ program installed. 3. The thermal imaging system, categorizing temperatures into cold, comfortable, and hot, was applicable to the environmental control system of swine barns. 4. Feed intake was higher in the cold period, while finishing weight and daily weight gain in the cold period were lower than in the other periods (p<0.05).
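
A minimal sketch of the cold/comfortable/hot categorization step such a system performs on each thermal frame is shown below; the temperature thresholds are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

# Illustrative temperature bands (degrees C) for nursery pigs; the thresholds
# actually used with the TH3100 board are not given in the abstract.
COLD_MAX, HOT_MIN = 24.0, 30.0

def classify_thermal_frame(temp_map: np.ndarray) -> str:
    """Label one thermal frame as 'cold', 'comfortable', or 'hot'
    from the mean background temperature in the frame."""
    mean_t = float(np.mean(temp_map))
    if mean_t < COLD_MAX:
        return "cold"
    if mean_t > HOT_MIN:
        return "hot"
    return "comfortable"

# Example: a synthetic 240x320 thermal map at ~27 C reads as 'comfortable'.
frame = np.full((240, 320), 27.0)
print(classify_thermal_frame(frame))
```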

Application of EOC Images to Develop the GIUH (지형학적순간단위유랑도 분석을 위한 EOC 스테레오 영상 활용)

  • Choi, Hyun;Kang, In-Joon;Hong, Sun-Heun
    • Korean Journal of Remote Sensing
    • /
    • v.20 no.2
    • /
    • pp.91-102
    • /
    • 2004
  • This paper evaluates the use of EOC (Electro-Optical Camera) images to support the GIUH (geomorphological instantaneous unit hydrograph) approach. We analyzed the GIUH in terms of density and frequency distribution by creating a DEM (digital elevation model) for the sub-basin from the EOC images, and examined the topographical and hydrological applicability of the EOC images. Because conventional basin characteristic analysis is fairly complex and more time-consuming than other methods, we analyzed the topographical basin characteristics with remote sensing techniques, studying the DEM creation process from the EOC stereo images together with basic topographical hydrology analysis. We performed a statistical analysis of basin size and river length using the frequency function after dividing the sub-basin derived from the image data and the digital map into lattice spacings at 10 m intervals, ranging from 10 m to 100 m. After comparing the peak and time to peak of the GIUH, we carried out a lattice-by-lattice comparative analysis of the topographical bifurcation ratio, area ratio, and length ratio. The peak and time to peak of the GIUH vary nonlinearly with the lattice dimension as well as with the basin factors. It was shown that the lattice dimension is one of the important factors affecting the peak and time to peak of the GIUH.
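
A purely illustrative sketch of the lattice-spacing sweep described above: a synthetic 10 m DEM is coarsened to spacings from 10 m to 100 m and a simple terrain statistic (mean slope) is recomputed at each spacing, showing why results depend on lattice dimension. The real study derives the DEM from EOC stereo imagery and computes the GIUH peak and time to peak in a GIS/hydrology workflow not reproduced here.

```python
import numpy as np

def mean_slope(dem: np.ndarray, cell_size: float) -> float:
    """Mean slope magnitude (rise over run) of a square-lattice DEM."""
    gy, gx = np.gradient(dem, cell_size)
    return float(np.mean(np.hypot(gx, gy)))

# Synthetic 10 m DEM: a tilted plane plus small-scale roughness (assumed data).
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:1000, 0:1000]
dem_10m = 0.5 * xx + rng.normal(scale=2.0, size=(1000, 1000))

for spacing in range(10, 110, 10):
    step = spacing // 10
    coarse = dem_10m[::step, ::step]        # nearest-neighbour coarsening
    print(f"{spacing:3d} m lattice: mean slope ~ {mean_slope(coarse, spacing):.4f}")
```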

Design and Strength Analysis of a Mast and Mounting Part of Dummy Gun for Multi-Mission Unmanned Surface Vehicle (복합임무 무인수상정의 마스트 및 특수임무장비 장착부 설계 및 강도해석)

  • Son, Juwon;Kim, Donghee;Choi, Byungwoong;Lee, Youngjin
    • The Journal of Korean Institute of Information Technology
    • /
    • v.16 no.11
    • /
    • pp.51-59
    • /
    • 2018
  • The Multi-Mission Unmanned Surface Vehicle (MMUSV), which is manufactured from glass fiber reinforced plastic (FRP), is designed to perform surveillance and reconnaissance at sea. Various navigation sensors, such as RADAR, LIDAR, and a camera, are mounted on a mast to enable autonomous navigation, and a dummy gun is mounted on the deck of the MMUSV for target tracking and disposal. Because the MMUSV performs missions under severe sea states, a strength analysis of the structures mounted on the deck is necessary. In this paper, a strength analysis of the mast structure under static loads and lateral external loads is performed through a series of simulations to verify the adequacy of the designed mast. Based on the results of captive model tests, a strength analysis for the heave motion of the mast structure is conducted using a simulation tool. In addition, a simulation and a fatigue test of the mounting part between the MMUSV and the dummy gun are performed using a specimen. The simulation and test results show that the structures of the mast and the mounting part of the dummy gun are appropriately designed.
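
As a back-of-the-envelope companion to the static and lateral-load checks described above, the sketch below treats the mast as a cantilever tube under a lateral tip load and computes the peak bending stress. All dimensions and loads are assumptions for illustration and are not the paper's values.

```python
import math

# Assumed illustrative values -- not from the paper.
F_lateral = 2000.0        # lateral external load at the mast top [N]
L = 2.5                   # mast height [m]
d_out, d_in = 0.20, 0.18  # tubular mast outer/inner diameters [m]

# Section properties of a hollow circular tube.
I = math.pi * (d_out**4 - d_in**4) / 64.0   # second moment of area [m^4]
c = d_out / 2.0                             # distance to extreme fibre [m]

# Cantilever fixed at the deck: maximum bending moment at the root.
M_max = F_lateral * L                       # [N*m]
sigma_max = M_max * c / I                   # bending stress [Pa]
print(f"max bending stress ~ {sigma_max / 1e6:.1f} MPa")
```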

White striping degree assessment using computer vision system and consumer acceptance test

  • Kato, Talita;Mastelini, Saulo Martiello;Campos, Gabriel Fillipe Centini;Barbon, Ana Paula Ayub da Costa;Prudencio, Sandra Helena;Shimokomaki, Massami;Soares, Adriana Lourenco;Barbon, Sylvio Jr.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.32 no.7
    • /
    • pp.1015-1026
    • /
    • 2019
  • Objective: The objective of this study was to evaluate three different degrees of white striping (WS), addressing their automatic assessment and consumer acceptance. The WS classification was performed with a computer vision system (CVS), exploring different machine learning (ML) algorithms and the most important image features, and the results were further examined through consumer acceptance and purchase intent. Methods: The samples for image analysis were classified by trained specialists according to severity degree with regard to visual and firmness aspects. Sample images were obtained with a digital camera, and 25 features were extracted from these images. ML algorithms were applied to induce a model capable of classifying the samples into the three severity degrees. In addition, two sensory analyses were performed: 75 properly grilled samples were used for the first sensory test, and 9 photos for the second. All tests used a 10-cm hybrid hedonic scale (acceptance test) and a 5-point scale (purchase intention). Results: The information gain metric ranked 13 attributes; however, a single type of image feature was not enough to describe the phenomenon. The classification models support vector machine, fuzzy-W, and random forest showed the best results, with similar overall accuracy (86.4%). The worst performance was obtained by the multilayer perceptron (70.9%), with a high error rate in normal (NORM) sample predictions. The acceptance sensory analysis verified that WS myopathy negatively affects the texture of broiler breast fillets when grilled and the appearance of the raw samples, which influenced the purchase intention scores of raw samples. Conclusion: The proposed system proved to be adequate (fast and accurate) for the classification of WS samples. The acceptance sensory analysis showed that WS myopathy negatively affects the tenderness of broiler breast fillets when grilled, while the appearance of the raw samples influenced purchase intentions.
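
The abstract names support vector machine, random forest, and multilayer perceptron among the ML algorithms compared on 25 image features (fuzzy-W has no scikit-learn counterpart and is omitted here). A minimal sketch of that kind of comparison is below; the feature matrix and labels are random placeholders, so the printed accuracies are meaningless and only the workflow is illustrated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 225 samples x 25 image features, 3 severity degrees
# (NORM / moderate / severe). Real features would come from the CVS pipeline.
rng = np.random.default_rng(0)
X = rng.normal(size=(225, 25))
y = rng.integers(0, 3, size=225)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "MLP": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.3f} cross-validated accuracy")
```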

A vision-based system for long-distance remote monitoring of dynamic displacement: experimental verification on a supertall structure

  • Ni, Yi-Qing;Wang, You-Wu;Liao, Wei-Yang;Chen, Wei-Huan
    • Smart Structures and Systems
    • /
    • v.24 no.6
    • /
    • pp.769-781
    • /
    • 2019
  • The dynamic displacement response of civil structures is an important index for in-construction and in-service structural condition assessment. However, accurately measuring the displacement of large-scale civil structures such as high-rise buildings remains a challenging task. To cope with this problem, a vision-based system using an industrial digital camera and image processing has been developed for long-distance, remote, and real-time monitoring of the dynamic displacement of supertall structures. Instead of acquiring full image signals, the proposed system traces only the coordinates of the target points, thereby enabling real-time monitoring and display of displacement responses at a relatively high sampling rate. This study addresses the in-situ experimental verification of the developed vision-based system on the 600 m high Canton Tower. To facilitate the verification, a GPS system is used to calibrate and verify the structural displacement responses measured by the vision-based system, while an accelerometer deployed in the vicinity of the target point provides frequency-domain information for comparison. Special attention has been given to understanding the influence of the surrounding light on the monitoring results. For this purpose, the experimental tests are conducted in daytime and nighttime by placing the vision-based system outside the tower (in a bright environment) and inside the tower (in a dark environment), respectively. The results indicate that the displacement response time histories monitored by the vision-based system not only match well with those acquired by the GPS receiver, but also have higher fidelity and are less noise-corrupted. In addition, the low-order modal frequencies of the building identified from the data obtained by the vision-based system are all in good agreement with those obtained from the accelerometer, the GPS receiver, and an elaborate finite element model. In particular, the vision-based system placed at the bottom of the enclosed elevator shaft offers better monitoring data than the system placed outside the tower. Based on a wavelet filtering technique, the displacement response time histories obtained by the vision-based system are readily decomposed into two parts: a quasi-static component primarily resulting from temperature variation and a dynamic component mainly caused by fluctuating wind load.
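
The final step in the abstract decomposes the measured displacement into a quasi-static part (temperature-driven) and a dynamic part (wind-driven) with wavelet filtering. A generic sketch of such a split using PyWavelets is shown below; the wavelet, decomposition level, and synthetic signal are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pywt  # PyWavelets

def split_quasi_static(displacement: np.ndarray, wavelet: str = "db4", level: int = 6):
    """Split a displacement time history into a slowly varying (quasi-static)
    part and a dynamic remainder via wavelet decomposition -- a generic
    stand-in for the wavelet filtering step described in the abstract."""
    coeffs = pywt.wavedec(displacement, wavelet, level=level)
    # Keep only the coarsest approximation; zero out all detail bands.
    approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    quasi_static = pywt.waverec(approx_only, wavelet)[: len(displacement)]
    dynamic = displacement - quasi_static
    return quasi_static, dynamic

# Synthetic example: slow thermal drift plus a wind-induced oscillation.
t = np.linspace(0, 3600, 36000)                  # one hour sampled at 10 Hz
x = 5.0 * np.sin(2 * np.pi * t / 3600) + 0.5 * np.sin(2 * np.pi * 0.2 * t)
qs, dyn = split_quasi_static(x)
```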

Design and Implementation of BNN based Human Identification and Motion Classification System Using CW Radar (연속파 레이다를 활용한 이진 신경망 기반 사람 식별 및 동작 분류 시스템 설계 및 구현)

  • Kim, Kyeong-min;Kim, Seong-jin;NamKoong, Ho-jung;Jung, Yun-ho
    • Journal of Advanced Navigation Technology
    • /
    • v.26 no.4
    • /
    • pp.211-218
    • /
    • 2022
  • Continuous wave (CW) radar has advantages in reliability and accuracy compared with other sensors such as cameras and lidar. In addition, a binarized neural network (BNN) dramatically reduces memory usage and complexity compared with other deep learning networks. Therefore, this paper proposes a BNN-based human identification and motion classification system using CW radar. After receiving the signal from the CW radar, a spectrogram is generated through a short-time Fourier transform (STFT). Based on this spectrogram, we propose an algorithm that detects whether a person is approaching the radar. We also designed an optimized BNN model that achieves an accuracy of 90.0% for human identification and 98.3% for motion classification. To accelerate the BNN operation, we designed a BNN hardware accelerator on a field-programmable gate array (FPGA). The accelerator was implemented with 1,030 logic elements, 836 registers, and 334.904 Kbit of block memory, and it was confirmed that real-time operation is possible with a total computation time of 6 ms from inference to transferring the result.
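
The pipeline above starts by turning the CW radar return into a spectrogram via a short-time Fourier transform before the BNN classifies it. A minimal sketch of that front-end step with SciPy is given below; the sampling rate, window length, and synthetic Doppler signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

def radar_spectrogram(iq: np.ndarray, fs: float, nperseg: int = 256):
    """Short-time Fourier transform of a CW radar baseband signal, giving the
    micro-Doppler spectrogram a classifier would consume. Window length and
    overlap are illustrative choices, not the paper's parameters."""
    f, t, Z = stft(iq, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    return f, t, 20 * np.log10(np.abs(Z) + 1e-12)   # magnitude in dB

# Synthetic example: a target with a slowly varying Doppler shift.
fs = 2000.0
t = np.arange(0, 2.0, 1 / fs)
doppler = 100 + 50 * np.sin(2 * np.pi * 0.5 * t)           # time-varying Doppler [Hz]
signal = np.exp(1j * 2 * np.pi * np.cumsum(doppler) / fs)  # phase-modulated return
f, frames, spec_db = radar_spectrogram(signal, fs)
```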

A Deep Learning-based Real-time Deblurring Algorithm on HD Resolution (HD 해상도에서 실시간 구동이 가능한 딥러닝 기반 블러 제거 알고리즘)

  • Shim, Kyujin;Ko, Kangwook;Yoon, Sungjoon;Ha, Namkoo;Lee, Minseok;Jang, Hyunsung;Kwon, Kuyong;Kim, Eunjoon;Kim, Changick
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.3-12
    • /
    • 2022
  • Image deblurring aims to remove image blur, which can be introduced while shooting by object motion, camera shake, defocus, and so forth. With the rise in popularity of smartphones, it is now common to carry a portable digital camera daily, so image deblurring techniques have become more significant. Image deblurring was originally studied with traditional optimization techniques; with the recent attention on deep learning, deblurring methods based on convolutional neural networks have been actively proposed. However, most of them have been developed with a focus on better restoration quality, so they are not easy to use in real situations because of the speed of their algorithms. To tackle this problem, we propose a novel deep learning-based deblurring algorithm that can operate in real time at HD resolution. In addition, we improved the training and inference process so that the performance of our model could be increased without any significant effect on its speed, and its speed increased without any significant effect on its performance. As a result, our algorithm achieves real-time performance by processing 33.74 frames per second at 1280×720 resolution. Furthermore, it shows excellent performance relative to its speed, with a PSNR of 29.78 and an SSIM of 0.9287 on the GoPro dataset.
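
The figure of 33.74 frames per second at 1280×720 is a throughput measurement. The sketch below shows how such a measurement is typically taken in PyTorch on a single HD frame; the tiny residual network is an illustrative stand-in, not the paper's architecture.

```python
import time
import torch
import torch.nn as nn

class TinyDeblur(nn.Module):
    """A deliberately small residual CNN, used here only to show how
    HD-resolution throughput is timed -- not the model from the paper."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # predict the residual, add it back to the blurry input

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyDeblur().to(device).eval()
frame = torch.rand(1, 3, 720, 1280, device=device)   # one 1280x720 frame

with torch.no_grad():
    for _ in range(5):            # warm-up iterations
        model(frame)
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.time()
    for _ in range(50):
        model(frame)
    if device == "cuda":
        torch.cuda.synchronize()
print(f"~{50 / (time.time() - t0):.1f} frames per second at 1280x720")
```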

A Study on the Application of Task Offloading for Real-Time Object Detection in Resource-Constrained Devices (자원 제약적 기기에서 자율주행의 실시간 객체탐지를 위한 태스크 오프로딩 적용에 관한 연구)

  • Jang Shin Won;Yong-Geun Hong
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.12
    • /
    • pp.363-370
    • /
    • 2023
  • Object detection technology that accurately recognizes the road and surrounding conditions is a key technology in autonomous driving, where inference services require real-time performance as well as accuracy. To achieve both accuracy and real-time performance on resource-constrained devices rather than high-performance machines, task offloading should be utilized. In this paper, experiments comparing the performance of task offloading, performance according to input image resolution, and performance according to camera object resolution were conducted, and the results were analyzed with respect to applying task offloading for real-time object detection in autonomous driving on resource-constrained devices. In these experiments, low-resolution images achieved a performance improvement through the task-offloading structure that met the real-time requirements of autonomous driving. High-resolution images, although improved in performance, did not meet the real-time requirements because of the increase in communication time. These experiments confirm that object recognition in autonomous driving is affected by various conditions, such as the input images and the communication environment, along with the object recognition model used.
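
A minimal sketch of the offloading round trip such experiments measure: a camera frame is resized, JPEG-encoded, and posted to a remote detection server, and the round-trip time is compared between a low and a high input resolution. The endpoint URL, payload format, and response schema are hypothetical assumptions for illustration, not the paper's setup.

```python
import time
import cv2
import requests

# Hypothetical offloading endpoint -- the paper's actual server is not specified.
SERVER_URL = "http://192.168.0.10:8000/detect"

def offload_frame(frame, resolution):
    """Resize, JPEG-encode, and send one frame to a remote detector,
    returning the (assumed JSON list of) detections and the round-trip time."""
    resized = cv2.resize(frame, resolution)
    ok, buf = cv2.imencode(".jpg", resized)
    t0 = time.time()
    resp = requests.post(SERVER_URL, data=buf.tobytes(),
                         headers={"Content-Type": "image/jpeg"}, timeout=5)
    rtt = time.time() - t0
    return resp.json(), rtt

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    # Compare a low-resolution and a high-resolution offload, as in the experiments.
    for res in [(640, 360), (1920, 1080)]:
        detections, rtt = offload_frame(frame, res)
        print(f"{res}: round trip {rtt * 1000:.1f} ms, {len(detections)} objects")
cap.release()
```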

Influence of Mixture Non-uniformity on Methane Explosion Characteristics in a Horizontal Duct (수평 배관의 메탄 폭발특성에 있어서 불균일성 혼합기의 영향)

  • Ou-Sup Han;Yi-Rac Choi;HyeongHk Kim;JinHo Lim
    • Korean Chemical Engineering Research
    • /
    • v.62 no.1
    • /
    • pp.27-35
    • /
    • 2024
  • Fuel gases such as methane and propane are used in explosion-hazardous areas of domestic plants and can form non-uniform mixtures under the influence of process conditions when leaks occur. When fire and explosion risk assessments rely on literature data measured for uniform mixtures, damage predictions can differ from actual explosion accidents caused by gas leaks. Explosion characteristics such as explosion pressure and flame velocity were therefore examined for non-uniform gas mixtures whose concentration changes resemble those of a facility leak. The experiments were conducted in a closed 0.82 m long stainless steel duct, with observations recorded by a color high-speed camera and a piezoelectric pressure sensor. We also propose a method for quantifying mixture non-uniformity from a regression analysis model of the change in concentration difference over time in the explosion duct. Under the non-uniform conditions of this study, the flame surface area during methane flame propagation enlarged with increasing concentration non-uniformity and resembled the wrinkled flame structure found in turbulent flames. The time to peak pressure of methane decreased, and the explosion pressure increased, as the non-uniformity increased. The KG (deflagration index) of methane ranged from 1.30 to 1.58 MPa·m/s over the tested concentration non-uniformity, and the increase in KG from uniform to non-uniform conditions was 17.7%.
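
The deflagration index quoted above follows the cubic law KG = (dP/dt)max · V^(1/3). The sketch below computes it from a pressure-time trace; the logistic pressure curve and duct volume are synthetic stand-ins for the piezoelectric sensor data, so the printed value is illustrative only.

```python
import numpy as np

def deflagration_index(pressure_pa: np.ndarray, time_s: np.ndarray, volume_m3: float) -> float:
    """Cubic-law deflagration index K_G = (dP/dt)_max * V**(1/3), in MPa*m/s."""
    dpdt_max = np.max(np.gradient(pressure_pa, time_s))   # maximum rate of pressure rise [Pa/s]
    return dpdt_max * volume_m3 ** (1.0 / 3.0) / 1e6       # convert to MPa*m/s

# Synthetic example: pressure rising to ~0.7 MPa in a ~0.02 m^3 duct (assumed values).
t = np.linspace(0, 0.2, 2001)
p = 0.7e6 / (1 + np.exp(-(t - 0.08) / 0.03))
print(f"K_G ~ {deflagration_index(p, t, 0.02):.2f} MPa*m/s")
```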