• Title/Summary/Keyword: multi-camera


Multi-view Human Recognition based on Face and Gait Features Detection

  • Nguyen, Anh Viet;Yu, He Xiao;Shin, Jae-Ho;Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.11 no.12 / pp.1676-1687 / 2008
  • In this paper, we propose a new multi-view human recognition method based on face and gait feature detection. To locate the moving object, we use the difference between two consecutive frames. Then, based on the extracted object, the first key characteristic, the walking direction, is determined from the contour of the head and shoulder region. If the person appears in the camera with a frontal orientation, face features are used for recognition. Face detection combines skin color with Haar-like features, while eigen-images and PCA are used in the recognition stage. Otherwise, when the walking direction is not frontal, gait features are used. To evaluate the proposed method and compare it with other approaches, we present simulation results obtained in indoor and outdoor environments. Experimental results show that the proposed algorithm has better recognition efficiency than the conventional single-view recognition method.
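
The frame-differencing step described above is standard enough to sketch. The following is a minimal illustration, assuming OpenCV; the function name and threshold value are not from the paper:

```python
# Minimal sketch of moving-object extraction from the difference of two
# consecutive frames, as the abstract describes. Threshold is illustrative.
import cv2
import numpy as np

def extract_moving_region(prev_frame, curr_frame, thresh=25):
    """Return a binary mask of pixels that changed between two frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, curr_gray)      # inter-frame difference
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening removes isolated noise pixels from the mask.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```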

A cavitation performance prediction method for pumps: Part2-sensitivity and accuracy

  • Long, Yun;Zhang, Yan;Chen, Jianping;Zhu, Rongsheng;Wang, Dezhong
    • Nuclear Engineering and Technology / v.53 no.11 / pp.3612-3624 / 2021
  • At present, fast pump optimization requires rapid, accurate, and effective prediction of cavitation performance. In "A Cavitation Performance Prediction Method for Pumps Part 1 - Proposal and Feasibility" [1], a new cavitation performance prediction method was proposed, and its feasibility was demonstrated with experiments on a mixed-flow pump. However, whether this method is applicable to vane pumps with different specific speeds, and whether its predictions are accurate, still merit further study. Combined with the experimental results, this study evaluates the sensitivity and accuracy of the method at different flow rates. For a given operating condition, the method shows good sensitivity across flow rates, making it suitable for multi-parameter, multi-objective optimization of pump impellers. For the tested mixed-flow pump, the method is most accurate when the area ratios are 13.718% and 13.826%. The cavitation vortex flow is captured with a high-speed camera, and a correlation between the cavitation flow structure and cavitation performance is established to provide more scientific support for cavitation performance prediction. The method is suitable not only for cavitation performance prediction of the mixed-flow pump but can also be extended to blade-type hydraulic machinery, solving the problem of rapid prediction of hydraulic machinery cavitation performance.

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.422-424 / 2021
  • In this paper, we present an approach that fuses multiple RGB cameras, for visual object recognition based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate object distance and position in a 3D point cloud map. The goal of multi-camera perception is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, which helps the AV navigate toward its goal. Running object detection on numerous cameras can slow real-time processing, so the chosen convolutional neural network must also suit the capacity of the hardware. The localization of classified detected objects is derived from the 3D point cloud environment: the LiDAR point cloud data are first parsed, and the algorithm used is based on 3D Euclidean clustering, which localizes the objects accurately. We evaluated the method on our own dataset collected from a VLP-16 and multiple cameras, and the results demonstrate the effectiveness of the method and the multi-sensor fusion strategy.
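
The 3D Euclidean clustering step can be sketched as KD-tree region growing over the parsed point cloud. This is a generic rendition, not the authors' code; the distance tolerance and minimum cluster size are assumptions:

```python
# Euclidean clustering sketch: group LiDAR points whose neighbors lie
# within a fixed distance tolerance. Parameters are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, tol=0.5, min_size=10):
    """points: (N, 3) array of x, y, z. Returns a list of index arrays."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:
            idx = queue.pop()
            # Grow the cluster with every unvisited neighbor within `tol`.
            for nb in tree.query_ball_point(points[idx], tol):
                if nb in unvisited:
                    unvisited.remove(nb)
                    queue.append(nb)
                    cluster.append(nb)
        if len(cluster) >= min_size:
            clusters.append(np.array(cluster))
    return clusters
```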

A Study on the Factors Influencing the Number of Enforcement on Traffic Signal Violation - Focused on Daegu City - (신호위반 단속건수에 영향을 미치는 요인에 관한 연구 - 대구시를 중심으로 -)

  • Kim, Ki-Hyuk;Jung, Youn-Jae
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.6D / pp.553-560 / 2010
  • This study examined the factors influencing the number of enforcement cases for traffic signal violations. Enforcement data on traffic signal violations were collected from multi-functional unmanned cameras across the Daegu metropolitan area and analyzed to determine how various factors, including the physical dimensions and engineering characteristics of the roadway, influence the number of violations. The resulting analysis yielded a general model for assessing the applicability of multi-functional unmanned cameras and offered helpful insights on the efficient use of such traffic signal enforcement equipment in terms of installation thresholds and locations. By identifying violation-prone features and the corresponding enforcement considerations, this research also supports safety efforts to reduce traffic accidents. A simple regression sketch of this kind of factor analysis follows below.
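
The feature names below are hypothetical, since the abstract does not enumerate the paper's variables; this only illustrates the general shape of such an analysis:

```python
# Hedged sketch: ordinary least squares relating enforcement counts to
# roadway characteristics. Feature columns are hypothetical placeholders.
import numpy as np

def fit_violation_model(X, y):
    """X: (n_sites, n_features) roadway factors; y: (n_sites,) counts.
    Returns the intercept followed by one coefficient per factor."""
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef
```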

Application of Smartphone Camera Calibration for Close-Range Digital Photogrammetry (근접수치사진측량을 위한 스마트폰 카메라 검보정)

  • Yun, MyungHyun;Yu, Yeon;Choi, Chuluong;Park, Jinwoo
    • Korean Journal of Remote Sensing / v.30 no.1 / pp.149-160 / 2014
  • Recently, studies on application development using the sensors embedded in smartphones have flourished at home and abroad. This study analyzes the accuracy with which smartphone images can determine the three-dimensional position of close objects, prior to developing a smartphone-based photogrammetric system, and evaluates their feasibility. First, camera calibration was conducted for autofocus and infinite focus. Distortion models with balanced and unbalanced systems were used to determine the lens distortion coefficients; calibration across 16 projects showed that all cases were within an RMS error of less than 1 mm after bundle adjustment. For both autofocus and infinite focus on the S and S2 models, the distortion curves were almost identical, so the change in distortion pattern with focus mode can be judged to be very small. Comparisons between autofocus and infinite focus, and between the software packages used for multi-image processing, showed that all cases had standard deviations of less than ±3 mm, indicating little difference in three-dimensional positioning between focus modes and distortion models. Finally, checkpoint coordinates measured by total station were taken as the most probable values, and the checkpoint coordinates determined in each project were treated as observed values to compute residual statistics for the individual methods. All projects had relatively large errors in the Y direction, the object-distance direction, compared to the X and Z directions. In terms of accuracy of three-dimensional positioning for close objects, smartphone cameras are thus feasible enough to use.
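
The chessboard-based calibration the study performs follows a well-known recipe; a minimal OpenCV sketch is shown below. The board size and image folder are assumptions, and the actual study additionally compared balanced/unbalanced distortion models and focus modes:

```python
# Camera calibration sketch: recover intrinsics and lens distortion from
# checkerboard photos. "calib/*.jpg" and the 9x6 pattern are hypothetical.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner grid of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib/*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# rms: reprojection error; K: intrinsics; dist: (k1, k2, p1, p2, k3).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, size, None, None)
```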

DEVELOPMENT OF AN AMPHIBIOUS ROBOT FOR VISUAL INSPECTION OF APR1400 NPP IRWST STRAINER ASSEMBLY

  • Jang, You Hyun;Kim, Jong Seog
    • Nuclear Engineering and Technology / v.46 no.3 / pp.439-446 / 2014
  • An amphibious inspection robot system (hereafter AIROS) is being developed to visually inspect the in-containment refueling water storage tank (hereafter IRWST) strainers in APR1400 in place of a human diver. Four IRWST strainers are located in the IRWST, which is filled with boric acid water. Each strainer has 108 strainer fin sub-assembly modules that should be inspected with the VT-3 method according to Reg. Guide 1.82 and the operation manual. AIROS has 6 thrusters for underwater navigation and 4 legs for walking on top of the strainer. An inverse kinematic algorithm was implemented in the robot controller for precise walking on top of the IRWST strainer. The IRWST strainer has several top cross braces, which protrude from the top of the strainer to maintain its frame and can obstruct walking on it, so a robot leg should land beside a top cross brace. For this reason, we used an image processing technique to find the top cross brace in the sole camera image: the image is processed in real time with a cross edge detection algorithm to detect the presence of the brace. A 5-DOF robot arm with multiple camera modules, for simultaneous inspection of both sides, can penetrate the narrow gaps. For intuitive presentation of inspection results and for management of inspection data, inspection images are stored in the control PC together with camera angles and positions so the images can be synthesized and merged; the synthesized images are then mapped onto a 3D CAD model of the IRWST strainer with the location information. An IRWST strainer mock-up was fabricated to teach the robot arm scanning and gaiting. It is important to arrive at the designated position for inserting the robot arm into each gap, but exact position control without an anchor under water is not easy; therefore, we designed the multi-legged robot to serve for both anchoring and positioning. The quadruped design with sole-mounted cameras is a new approach for exact, stable position control on the IRWST strainer, unlike traditional robots for underwater facility inspection. The developed robot will be used in practice to enhance the efficiency and reliability of inspections of nuclear power plant components.
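
The cross edge detection step, locating the top cross brace in the sole camera image, can be approximated generically with an edge detector plus a line transform. This is a stand-in sketch, not the authors' algorithm, and all thresholds are illustrative:

```python
# Sketch of brace detection in a sole-camera image: Canny edges followed
# by a probabilistic Hough transform to pick out straight brace edges.
import cv2
import numpy as np

def find_brace_segments(sole_image):
    """Return (x1, y1, x2, y2) segments that may belong to the cross brace."""
    gray = cv2.cvtColor(sole_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    return [] if lines is None else lines.reshape(-1, 4)
```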

Spectral Reflectance Estimation based on Similar Training Set using Correlation Coefficient (상관 계수를 이용한 유사 모집단 기반의 분광 반사율 추정)

  • Yo, Ji-Hoon;Ha, Ho-Gun;Kim, Dae-Chul;Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.10 / pp.142-149 / 2013
  • In general, the color of an image is represented with red, green, and blue channels in an RGB camera system. However, the information from only three channels is insufficient to estimate the spectral reflectance of a real scene, so an RGB camera system cannot represent color accurately. To overcome this limitation and represent accurate color, research on estimating spectral reflectance with multi-channel camera systems is actively underway. Recently, a reflectance estimation method was introduced that adaptively constructs a similar training set from a traditional training set according to the camera response, using a spectral similarity measure. In that method, however, the accuracy of the similar training set is reduced because the spectral similarity is based on average and maximum distances. In this paper, a reflectance estimation method that applies a spectral similarity based on the correlation coefficient is proposed to improve the accuracy of the similar training set. First, the correlation coefficient between each member of the traditional training set and the spectral reflectance obtained by the Wiener estimation method is calculated. Second, the similar training set is constructed from the traditional training set according to the correlation coefficient. Finally, Wiener estimation with the similar training set is performed to estimate the spectral reflectance. Experimental results comparing the proposed method with previous methods show that the proposed method performs best.
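
The two-stage procedure reads directly as linear algebra. Below is a minimal sketch of Wiener estimation followed by correlation-based selection of the similar training set; the matrix shapes and the subset size k are assumptions:

```python
# Sketch of the proposed pipeline: initial Wiener estimate, similar
# training set chosen by correlation coefficient, then Wiener again.
import numpy as np

def wiener_matrix(R, C):
    """Least-squares Wiener filter from training reflectances R (n, waves)
    and corresponding camera responses C (n, channels)."""
    return (R.T @ C) @ np.linalg.pinv(C.T @ C)

def estimate_reflectance(c, R_train, C_train, k=50):
    # Stage 1: first estimate from the full (traditional) training set.
    r0 = wiener_matrix(R_train, C_train) @ c
    # Stage 2: correlation coefficient against every training spectrum.
    corr = np.array([np.corrcoef(r0, r)[0, 1] for r in R_train])
    similar = np.argsort(corr)[-k:]          # the k most correlated samples
    # Stage 3: Wiener estimation restricted to the similar training set.
    return wiener_matrix(R_train[similar], C_train[similar]) @ c
```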

Multi-View Video System using Single Encoder and Decoder (단일 엔코더 및 디코더를 이용하는 다시점 비디오 시스템)

  • Kim Hak-Soo;Kim Yoon;Kim Man-Bae
    • Journal of Broadcast Engineering / v.11 no.1 s.30 / pp.116-129 / 2006
  • The progress of data transmission technology over the Internet has spread a variety of realistic contents. One such content type is multi-view video acquired from multiple camera sensors. In general, multi-view video processing requires as many encoders and decoders as there are cameras, and this complexity makes practical implementation difficult. To solve this problem, this paper considers a simple multi-view system using a single encoder and a single decoder. On the encoder side, the input multi-view YUV sequences are combined on GOP units by a video mixer, and the mixed sequence is compressed by a single H.264/AVC encoder. Decoding consists of a single decoder and a scheduler controlling the decoding process. The goal of the scheduler is to assign an approximately equal number of decoded frames to each view sequence by estimating the decoder utilization of a GOP and then applying frame skip algorithms. Furthermore, efficient frame selection algorithms for the frame skip are studied for the H.264/AVC baseline and main profiles, based on a cost function related to perceived video quality. The proposed method was tested on various multi-view test sequences adopted by MPEG 3DAV. Experimental results show that approximately equal decoder utilization is achieved for each view sequence, so that each view is displayed fairly. The performance of the proposed method is also examined in terms of bit rate and PSNR using a rate-distortion curve.
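
The scheduler's fairness goal can be illustrated with a toy frame-skip allocator. The paper's actual frame selection is driven by a perceived-quality cost function; this sketch only shows the equal-share bookkeeping, with an invented budget parameter:

```python
# Toy scheduler sketch: per mixed GOP, give each view an equal share of
# the decode budget and skip frames uniformly. Cost model omitted.
def schedule_gop(num_views, frames_per_view, decode_budget):
    """Return {view: frame indices to decode} for one mixed GOP."""
    per_view = max(1, decode_budget // num_views)  # fair share per view
    step = max(1, frames_per_view // per_view)     # uniform frame skip
    return {v: list(range(0, frames_per_view, step))[:per_view]
            for v in range(num_views)}

# e.g. 4 views, 30 frames each, budget of 60 decoded frames per GOP:
# every view gets 15 frames, decoded at every second frame index.
```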

Distributed Multi-view Video Coding Based on Illumination Compensation (조명보상 기반 분산 다시점 비디오 코딩)

  • Park, Sea-Nae;Sim, Dong-Gyu;Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.17-26 / 2008
  • In this paper, we propose a distributed multi-view video coding method employing illumination compensation. Distributed multi-view video coding (DMVC) methods can be classified as temporal or inter-view interpolation-based, according to how the side information is generated. DMVC with inter-view interpolation exploits the characteristics of multi-view video to improve coding efficiency by generating side information through inter-view interpolation. However, mismatched camera parameters and illumination changes between two views can lead to inaccurate side information. Here, a modified distributed multi-view coding method is presented that applies illumination compensation when generating the side information. In the proposed encoder, DC coefficients are transmitted to the decoder in addition to the parity bits for AC coefficients. The decoder can then generate more accurate side information by compensating for illumination changes with the transmitted DC coefficients. We found that the proposed algorithm is 0.1~0.2 dB better than the conventional algorithm without illumination compensation.
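
The decoder-side compensation can be sketched as aligning each side-information block's mean with the DC coefficient received from the encoder. The block size and value range here are assumptions:

```python
# Sketch of DC-based illumination compensation of side information:
# shift each block so its mean matches the transmitted DC value.
import numpy as np

def compensate_illumination(side_info, dc_coeffs, block=8):
    """side_info: (H, W) luma; dc_coeffs: per-block means from the encoder."""
    out = side_info.astype(np.float32).copy()
    h, w = out.shape
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = out[by:by + block, bx:bx + block]
            target = dc_coeffs[by // block, bx // block]
            blk += target - blk.mean()   # align block mean with DC
    return np.clip(out, 0, 255).astype(np.uint8)
```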

A New Mapping Algorithm for Depth Perception in 3D Screen and Its Implementation (3차원 영상의 깊이 인식에 대한 매핑 알고리즘 구현)

  • Ham, Woon-Chul;Kim, Seung-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.95-101 / 2008
  • In this paper, we present a new smoothing algorithm for variable depth mapping in real-time stereoscopic imaging for 3D display. The proposed algorithm is based on a physical concept, the Laplace equation, and we also discuss the mapping of depth from the scene to the displayed image. Our approach to the stereoscopic imaging problem is similar to the multi-region algorithm proposed by N. Holliman; the main difference is that we use the Laplace equation while considering the distance between the viewer and the object. We implement real-time stereoscopic image generation in OpenGL on a circularly polarized LCD screen to demonstrate its functioning in the human visual system. Although we simulated the proposed algorithm with artificial objects rendered in OpenGL, we expect this technology to be applicable to stereoscopic camera systems, not only for personal computers but also for public broadcast systems.
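
The Laplace-equation smoothing at the heart of the algorithm can be illustrated by iterative relaxation of a depth map. The viewer-object distance weighting the paper adds is omitted here, and the iteration count is arbitrary:

```python
# Sketch of discrete Laplace-equation smoothing over a depth map:
# interior values relax toward the average of their four neighbors.
import numpy as np

def smooth_depth(depth, iters=200):
    """depth: (H, W) array. Boundary values are held fixed."""
    d = depth.astype(np.float32).copy()
    for _ in range(iters):
        d[1:-1, 1:-1] = 0.25 * (d[:-2, 1:-1] + d[2:, 1:-1] +
                                d[1:-1, :-2] + d[1:-1, 2:])
    return d
```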