• Title/Summary/Keyword: Camera Performance

Efficient Intermediate Joint Estimation using the UKF based on the Numerical Inverse Kinematics (수치적인 역운동학 기반 UKF를 이용한 효율적인 중간 관절 추정)

  • Seo, Yung-Ho;Lee, Jun-Sung;Lee, Chil-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.6 / pp.39-47 / 2010
  • Research on image-based articulated pose estimation faces several problems, such as detecting human features, estimating the pose precisely, and achieving real-time performance. In particular, various methods have been presented for recovering the many joints of the human body. We propose a novel numerical inverse kinematics method improved with the UKF (unscented Kalman filter) to estimate the human pose in real time. Existing numerical inverse kinematics requires many iterations to reach the optimal estimate and suffers from problems such as singularity of the Jacobian matrix and local minima. To solve these problems, we combine the UKF, as a tool for optimal state estimation, with numerical inverse kinematics. Combining the numerical inverse kinematics solution with the sampling-based UKF provides stability and rapid convergence to the optimal estimate. To estimate the human pose, we extract the human body of interest using both background subtraction and a skin-color detection algorithm, and localize its 3D position using the camera geometry. Then, using the UKF-based numerical inverse kinematics, we generate the intermediate joints that are not detected in the images. The proposed method compensates for the shortcomings of numerical inverse kinematics, namely its computational complexity and estimation accuracy.
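
The core idea, replacing Jacobian-based iteration with a UKF whose measurement function is the forward kinematics, can be illustrated on a toy problem. Below is a minimal sketch for a planar two-link arm; the link lengths, noise covariances, and the use of the `filterpy` library are illustrative assumptions, not the paper's implementation.

```python
# Sketch: UKF-driven inverse kinematics for a planar 2-link arm.
# Assumptions (not from the paper): filterpy for the UKF, fixed link lengths,
# identity process model, end-effector position as the measurement.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

L1, L2 = 0.3, 0.25  # link lengths [m] (hypothetical)

def fk(theta):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    x = L1 * np.cos(theta[0]) + L2 * np.cos(theta[0] + theta[1])
    y = L1 * np.sin(theta[0]) + L2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def fx(theta, dt):
    """Process model: joint angles assumed locally constant."""
    return theta

points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=2, dt=1.0, fx=fx, hx=fk, points=points)
ukf.x = np.array([0.1, 0.1])        # initial joint-angle guess
ukf.P *= 0.5                        # initial uncertainty
ukf.Q = np.eye(2) * 1e-4            # small process noise
ukf.R = np.eye(2) * 1e-4            # measurement (detected end-point) noise

target = np.array([0.35, 0.30])     # detected end-point to reach (hypothetical)
for _ in range(20):                 # iterate until the estimate converges
    ukf.predict()
    ukf.update(target)

print("estimated joint angles:", ukf.x, "-> end effector:", fk(ukf.x))
```

Because the sigma points propagate through the forward kinematics directly, no Jacobian has to be inverted, which is the property the abstract exploits to avoid singularities.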

A Study on Iris Recognition by Iris Feature Extraction from Polar Coordinate Circular Iris Region (극 좌표계 원형 홍채영상에서의 특징 검출에 의한 홍채인식 연구)

  • Jeong, Dae-Sik;Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.3 / pp.48-60 / 2007
  • In previous research on iris feature extraction, the original iris image is transformed into a rectangular one by stretching and interpolation, which distorts the iris patterns and consequently reduces iris recognition accuracy. We therefore propose a method that extracts iris features in polar coordinates without distorting the iris patterns. Our proposed method has three strengths compared with previous research. First, we extract iris features directly from the circular iris image in polar coordinates. Although this requires slightly more processing time, there is no degradation of iris recognition accuracy, and we compare the recognition performance of the polar-coordinate and rectangular representations using the Hamming, cosine, and Euclidean distances. Second, in general, the center position of the pupil differs from that of the iris due to the camera angle and the user's head position and gaze direction. We therefore propose an iris feature detection method based on the polar-coordinate circular iris region that uses the pupil and iris positions and radii at the same time. Third, with the polar-coordinate circular method, overlapped points occur in the iris patterns because each overlapped point is extracted from the same position of the iris region. To overcome this problem, we modify the Gabor filter's size and frequency on the first track to account for the low-frequency iris patterns caused by the overlapped points. Experimental results showed an EER of 0.29% and d' of 5.9 for the conventional rectangular image, and an EER of 0.16% and d' of 6.4 for the proposed method.
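
As an illustration of the polar-coordinate idea, the sketch below samples iris tracks along circles directly (no rectangular unwrapping), binarizes a crude Gabor-like response per track, and compares two codes with a Hamming distance. The radii, filter parameters, pupil center, and file names are assumptions for the example, not the paper's configuration.

```python
# Sketch: circular iris tracks sampled in place, sign-binarized responses,
# and a Hamming-distance comparison of two iris codes.
import cv2
import numpy as np

def iris_track(gray, cx, cy, r, n_samples=256):
    """Sample gray values along a circle of radius r around (cx, cy)."""
    ang = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip((cx + r * np.cos(ang)).astype(int), 0, gray.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(ang)).astype(int), 0, gray.shape[0] - 1)
    return gray[ys, xs].astype(np.float32)

def iris_code(gray, cx, cy, radii):
    """Filter each track with a 1-D cosine carrier and keep the sign as bits."""
    bits = []
    for r in radii:
        track = iris_track(gray, cx, cy, r)
        carrier = np.cos(2.0 * np.pi * np.arange(31) / 32.0)   # coarse frequency
        response = np.convolve(track - track.mean(), carrier, mode="same")
        bits.append(response > 0)
    return np.concatenate(bits)

def hamming_distance(code_a, code_b):
    return np.count_nonzero(code_a != code_b) / code_a.size

img_a = cv2.imread("iris_a.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
img_b = cv2.imread("iris_b.png", cv2.IMREAD_GRAYSCALE)
radii = range(40, 90, 10)                                 # tracks between pupil and sclera
hd = hamming_distance(iris_code(img_a, 160, 120, radii),
                      iris_code(img_b, 160, 120, radii))
print("Hamming distance:", hd)                            # small -> same iris
```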

S-FDS : a Smart Fire Detection System based on the Integration of Fuzzy Logic and Deep Learning (S-FDS : 퍼지로직과 딥러닝 통합 기반의 스마트 화재감지 시스템)

  • Jang, Jun-Yeong;Lee, Kang-Woon;Kim, Young-Jin;Kim, Won-Tae
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.4 / pp.50-58 / 2017
  • Recently, methods that fuse heterogeneous fire-sensor data have been proposed for effective fire detection, but rule-based methods have low adaptability and accuracy, and fuzzy inference methods suffer in detection speed and accuracy because they do not consider images. In addition, a few image-based deep learning methods have been researched, but in practice they cannot rapidly recognize a fire event when cameras are absent or the fire is outside a camera's field of view. In this paper, we propose a novel fire detection system combining a CNN-based deep learning algorithm with a fuzzy inference engine driven by heterogeneous fire-sensor data, including temperature, humidity, gas, and smoke density. We show that the proposed system can detect fire rapidly by utilizing images and can decide on a fire reliably by utilizing multi-sensor data. We also apply a distributed computing architecture to the fire detection algorithm in order to avoid concentrating computing power on a single server and thereby enhance scalability. Finally, we demonstrate the performance of the system through two experiments with NIST's fire dynamics simulator, covering both an explosively spreading fire and a gradually growing fire.
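
A minimal sketch of the fusion idea, combining a CNN image score with a small hand-rolled fuzzy inference over sensor readings, is shown below. The membership breakpoints, rules, fusion weights, and decision threshold are illustrative assumptions, not the values used in S-FDS.

```python
# Sketch: fuse a CNN image score with a hand-rolled min/max fuzzy inference
# over multi-sensor readings. All numeric values are illustrative assumptions.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_fire_risk(temp_c, smoke_density, gas_ppm):
    """Max over rules, each rule being the min of its antecedent memberships."""
    hot   = tri(temp_c, 40, 80, 120)
    smoky = tri(smoke_density, 0.2, 0.6, 1.0)
    gassy = tri(gas_ppm, 50, 200, 400)
    rules = [
        min(hot, smoky),     # hot AND smoky -> fire
        min(smoky, gassy),   # smoky AND gassy -> fire
        min(hot, gassy),     # hot AND gassy -> fire
    ]
    return max(rules)

def decide_fire(cnn_score, temp_c, smoke_density, gas_ppm):
    """Simple late fusion: weighted combination of image and sensor evidence."""
    sensor_risk = fuzzy_fire_risk(temp_c, smoke_density, gas_ppm)
    combined = 0.6 * cnn_score + 0.4 * sensor_risk   # weights are assumptions
    return combined > 0.5, combined

is_fire, score = decide_fire(cnn_score=0.82, temp_c=75, smoke_density=0.7, gas_ppm=180)
print(is_fire, round(score, 3))
```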

A Real-time Hand Pose Recognition Method with Hidden Finger Prediction (은닉된 손가락 예측이 가능한 실시간 손 포즈 인식 방법)

  • Na, Min-Young;Choi, Jae-In;Kim, Tae-Young
    • Journal of Korea Game Society / v.12 no.5 / pp.79-88 / 2012
  • In this paper, we present a real-time hand pose recognition method that provides an intuitive user interface through hand poses or movements without a keyboard and mouse. First, the areas of the right and left hands are segmented from the depth camera image and noise is removed. Then, the rotation angle and the centroid of each hand area are calculated. Subsequently, a circle is expanded at regular intervals from the centroid of the hand, and the joint points and end points of the fingers are detected as the midway points where the circle crosses the hand boundary. Lastly, this hand information is matched against the hand model of the previous frame, and the hand model is recognized and updated for the next frame. This method enables hidden fingers to be predicted from the hand-model information of the previous frame, exploiting temporal coherence between consecutive frames. In experiments on various two-handed poses with hidden fingers, the accuracy was over 95% and the frame rate exceeded 32 fps. The proposed method can be used as a contactless input interface in presentation, advertisement, education, and game applications.
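
The circle-expansion step can be sketched as follows: circles of increasing radius are sampled around the hand centroid on a binary hand mask, and the angular midpoints of the runs lying inside the mask are kept as candidate joint and end points. The radius step and mask source are assumptions for the example.

```python
# Sketch: expand circles from the hand centroid over a binary hand mask and
# keep the angular midpoints of the in-hand runs as candidate joint/end points.
import numpy as np

def circle_crossing_midpoints(mask, cx, cy, radius, n_samples=360):
    """Return midpoints of contiguous runs of the circle lying inside the mask."""
    ang = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    xs = np.clip((cx + radius * np.cos(ang)).astype(int), 0, mask.shape[1] - 1)
    ys = np.clip((cy + radius * np.sin(ang)).astype(int), 0, mask.shape[0] - 1)
    inside = mask[ys, xs] > 0
    midpoints, i = [], 0
    while i < n_samples:                 # find runs of consecutive "inside" samples
        if inside[i]:
            j = i
            while j < n_samples and inside[j]:
                j += 1
            mid = (i + j - 1) // 2       # middle sample of the run
            midpoints.append((xs[mid], ys[mid]))
            i = j
        else:
            i += 1
    return midpoints

def finger_candidates(mask, cx, cy, r_step=10):
    """Collect crossing midpoints on circles of increasing radius."""
    points = []
    for r in range(r_step, min(mask.shape) // 2, r_step):
        points.append(circle_crossing_midpoints(mask, cx, cy, r))
    return points  # inner circles -> joint candidates, outermost hits -> fingertips
```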

A Method of Hand Recognition for Virtual Hand Control of Virtual Reality Game Environment (가상 현실 게임 환경에서의 가상 손 제어를 위한 사용자 손 인식 방법)

  • Kim, Boo-Nyon;Kim, Jong-Ho;Kim, Tae-Young
    • Journal of Korea Game Society / v.10 no.2 / pp.49-56 / 2010
  • In this paper, we propose a method of controlling a virtual hand by recognizing the user's hand in a virtual reality game environment. The virtual hand is displayed on the game screen after the movement and direction of the user's hand are obtained from camera input images, so that the movement of the user's hand can serve as an input interface for selecting and moving objects with the virtual hand. As a vision-based hand recognition method, the proposed approach transforms the input image from RGB color space to HSV color space and then segments the hand area using a double threshold on the H and S values together with connected component analysis. Next, the center of gravity of the hand area is calculated from the zeroth and first moments of the segmented area. Since the center of gravity lies near the center of the hand, the pixels of the segmented image farthest from the center of gravity can be recognized as fingertips. Finally, the axis of the hand is obtained as the vector from the center of gravity to the fingertips. To increase recognition stability and performance, a method using a history buffer and a bounding box is also presented. Experiments on various input images show that our hand recognition method provides highly accurate and relatively fast, stable results.
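
A minimal sketch of this pipeline with OpenCV is shown below; the HSV thresholds and the input image are illustrative assumptions, not the paper's values.

```python
# Sketch: HSV double-threshold hand segmentation, centroid from image moments,
# and fingertip as the mask pixel farthest from the centroid.
import cv2
import numpy as np

frame = cv2.imread("hand.png")                       # hypothetical input frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower = np.array([0, 40, 60])                        # assumed skin-color bounds
upper = np.array([25, 180, 255])
mask = cv2.inRange(hsv, lower, upper)

# keep the largest connected component as the hand region
num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
hand = np.uint8(labels == largest) * 255

# center of gravity from the zeroth and first moments
m = cv2.moments(hand, binaryImage=True)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

# fingertip candidate: hand pixel farthest from the center of gravity
ys, xs = np.nonzero(hand)
d = (xs - cx) ** 2 + (ys - cy) ** 2
tip = (xs[d.argmax()], ys[d.argmax()])

axis = np.array([tip[0] - cx, tip[1] - cy])          # hand-axis vector
print("centroid:", (cx, cy), "fingertip:", tip, "axis:", axis)
```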

Comparative Performance Analysis of Feature Detection and Matching Methods for Lunar Terrain Images (달 지형 영상에서 특징점 검출 및 정합 기법의 성능 비교 분석)

  • Hong, Sungchul;Shin, Hyu-Soung
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.4 / pp.437-444 / 2020
  • A lunar rover's optical camera is used to provide navigation and terrain information in an exploration zone. However, because the Moon has almost no atmosphere, its terrain is homogeneous with dark soil, and in this extreme environment the rover has limited data storage and low computation capability. Thus, for successful exploration, feature detection and matching methods that are robust to lunar terrain and environmental characteristics need to be examined. In this research, SIFT, SURF, BRISK, ORB, and AKAZE are comparatively analyzed with lunar terrain images from a lunar rover. Experimental results show that SIFT and AKAZE are the most robust to the characteristics of lunar terrain. AKAZE detects fewer feature points than SIFT, but its feature points are detected and matched with high precision and the least computational cost, making it adequate for fast and accurate navigation. Although SIFT has the highest computational cost, it stably detects and matches the largest number of feature points. Since the rover periodically sends terrain images to Earth, SIFT is suitable for global 3D terrain map construction, where a large amount of terrain imagery can be processed on Earth. The study results are expected to provide a guideline for utilizing feature detection and matching methods in future lunar exploration rovers.
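
A comparison of this kind can be sketched with OpenCV as below; SURF is omitted because it requires a contrib build, and the image paths and Lowe ratio threshold are assumptions for the example.

```python
# Sketch: compare feature detectors/matchers on an image pair by keypoint count,
# surviving matches after a ratio test, and wall-clock time.
import time
import cv2

img1 = cv2.imread("lunar_1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical images
img2 = cv2.imread("lunar_2.png", cv2.IMREAD_GRAYSCALE)

detectors = {
    "SIFT": cv2.SIFT_create(),
    "ORB": cv2.ORB_create(),
    "BRISK": cv2.BRISK_create(),
    "AKAZE": cv2.AKAZE_create(),
}

for name, det in detectors.items():
    t0 = time.perf_counter()
    kp1, des1 = det.detectAndCompute(img1, None)
    kp2, des2 = det.detectAndCompute(img2, None)
    norm = cv2.NORM_L2 if name == "SIFT" else cv2.NORM_HAMMING
    matches = cv2.BFMatcher(norm).knnMatch(des1, des2, k=2)
    good = []
    for pair in matches:                                   # Lowe ratio test
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    dt = time.perf_counter() - t0
    print(f"{name}: {len(kp1)} keypoints, {len(good)} good matches, {dt:.3f} s")
```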

Design of FPGA Camera Module with AVB based Multi-viewer for Bus-safety (AVB 기반의 버스안전용 멀티뷰어의 FPGA 카메라모듈 설계)

  • Kim, Dong-jin;Shin, Wan-soo;Park, Jong-bae;Kang, Min-goo
    • Journal of Internet Computing and Services / v.17 no.4 / pp.11-17 / 2016
  • In this paper, we propose a multi-viewer system for bus safety that connects multiple HD cameras over AVB (Audio Video Bridging) Ethernet cabling with IP networking and is implemented on an FPGA (Xilinx Zynq 702). The AVB (IEEE 802.1BA) system is designed for low latency on the FPGA and transmits HD video and audio signals in real time over the in-vehicle network. The proposed multi-viewer platform multiplexes H.264 video from four wide-angle HD cameras over existing 1 Gbps Ethernet and 2-wire 100 Mbps cables. A low-latency Zynq 702 based design of the H.264 AVC codec is also proposed to minimize the delay of HD video transmission in the car area network. The PSNR (peak signal-to-noise ratio) of the encoding and decoding results of the H.264 AVC codec is analyzed against the JM reference model, and the measured values agree with the theoretical and hardware results for the Zynq 702 based multi-viewer with multiple cameras. As a result, the proposed AVB multi-viewer platform with multiple cameras can be used for audio and video surveillance around a bus for safety, owing to the low latency of the H.264 AVC codec design.
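
The PSNR metric used in the comparison against the JM reference model can be computed as in the sketch below; the frame files are illustrative assumptions.

```python
# Sketch: PSNR in dB between a reference frame and a decoded frame.
import cv2
import numpy as np

def psnr(reference, decoded):
    """PSNR for 8-bit frames: 10 * log10(255^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)

ref = cv2.imread("frame_ref.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
dec = cv2.imread("frame_dec.png", cv2.IMREAD_GRAYSCALE)
print(f"PSNR: {psnr(ref, dec):.2f} dB")
```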

Multi-View Video System using Single Encoder and Decoder (단일 엔코더 및 디코더를 이용하는 다시점 비디오 시스템)

  • Kim Hak-Soo;Kim Yoon;Kim Man-Bae
    • Journal of Broadcast Engineering / v.11 no.1 s.30 / pp.116-129 / 2006
  • Progress in data transmission technology over the Internet has spread a variety of realistic contents. One such content type is multi-view video, acquired from multiple camera sensors. In general, multi-view video processing requires as many encoders and decoders as there are cameras, and this complexity makes practical implementation difficult. To solve this problem, this paper considers a simple multi-view system using a single encoder and a single decoder. On the encoder side, the input multi-view YUV sequences are combined on GOP units by a video mixer, and the mixed sequence is then compressed by a single H.264/AVC encoder. The decoding side is composed of a single decoder and a scheduler controlling the decoding process. The goal of the scheduler is to assign an approximately identical number of decoded frames to each view sequence by estimating the decoder utilization of a GOP and subsequently applying frame-skip algorithms. Furthermore, for the frame skip, efficient frame selection algorithms are studied for the H.264/AVC baseline and main profiles based on a cost function related to perceived video quality. The proposed method has been tested on various multi-view test sequences adopted by MPEG 3DAV. Experimental results show that approximately identical decoder utilization is achieved for each view sequence, so that each view is displayed fairly. The performance of the proposed method is also examined in terms of bit rate and PSNR using a rate-distortion curve.
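
The scheduler idea, keeping the decoded-frame counts of all views approximately equal by skipping frames of views that are ahead of the others, can be sketched as below; the number of views and the interleaving of the mixed sequence are illustrative assumptions.

```python
# Sketch: a scheduler that balances decoded-frame counts across views by
# skipping frames of views that are ahead of the slowest view.
from collections import defaultdict

class ViewScheduler:
    def __init__(self, n_views):
        self.decoded = defaultdict(int)   # decoded-frame count per view
        self.n_views = n_views

    def should_decode(self, view_id):
        """Decode only if this view is not ahead of the slowest view."""
        slowest = min(self.decoded[v] for v in range(self.n_views))
        return self.decoded[view_id] <= slowest

    def mark_decoded(self, view_id):
        self.decoded[view_id] += 1

sched = ViewScheduler(n_views=4)
# mixed sequence: frames arrive interleaved per view within a GOP (hypothetical order)
for view in [0, 1, 2, 3, 0, 1, 0, 2, 3, 1, 2, 3] * 2:
    if sched.should_decode(view):
        sched.mark_decoded(view)          # decode this frame
    # else: frame skipped to keep the views balanced
print(dict(sched.decoded))                # roughly equal counts per view
```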

Calibration of a UAV Based Low Altitude Multi-sensor Photogrammetric System (UAV기반 저고도 멀티센서 사진측량 시스템의 캘리브레이션)

  • Lee, Ji-Hun;Choi, Kyoung-Ah;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.1 / pp.31-38 / 2012
  • The geo-referencing accuracy of images acquired by a UAV-based multi-sensor system is affected by the accuracy of the mounting parameters describing the relationship between the camera and the GPS/INS system, as well as by the performance of the GPS/INS system itself. Accurate estimation of the mounting parameters of a multi-sensor system is therefore important. We are currently developing a UAV-based low-altitude multi-sensor system that can monitor target areas in real time for rapid response to emergency situations such as natural disasters and accidents. In this study, we suggest a system calibration method for estimating the mounting parameters of such a multi-sensor system. We also generate simulation data based on the sensor specifications of our system and, by applying the proposed method to the simulated data, derive an effective flight configuration and the number of ground control points required for accurate and efficient system calibration. The experimental results indicate that the proposed method can estimate accurate mounting parameters using five or more ground control points and a flight configuration composed of six strips. In the near future, we plan to estimate the mounting parameters of our system using the proposed method and to evaluate the geo-referencing accuracy of the acquired sensory data.
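
The mounting parameters in question are the boresight rotation and lever-arm offset between the camera and the GPS/INS body frame. The sketch below estimates them by simply averaging per-image relative transforms between GPS/INS poses and independently resected camera poses; this averaging is an illustration of the concept, not the paper's calibration adjustment, and the example poses are synthetic.

```python
# Sketch: boresight rotation and lever arm from paired body (GPS/INS) and
# camera poses, averaged over images. Pose data here are synthetic assumptions.
import numpy as np
from scipy.spatial.transform import Rotation as R

def estimate_mounting(body_R, body_t, cam_R, cam_t):
    """body_R/cam_R: 3x3 rotations (world <- frame); body_t/cam_t: frame origins in world."""
    boresights, lever_arms = [], []
    for Rb, tb, Rc, tc in zip(body_R, body_t, cam_R, cam_t):
        boresights.append(Rb.T @ Rc)           # camera orientation in body frame
        lever_arms.append(Rb.T @ (tc - tb))    # camera position in body frame
    mean_rot = R.from_matrix(boresights).mean().as_matrix()  # quaternion-based mean
    mean_lever = np.mean(lever_arms, axis=0)
    return mean_rot, mean_lever

# Hypothetical two-image example with a known boresight and lever arm:
Rb = [R.from_euler("z", a, degrees=True).as_matrix() for a in (0, 30)]
tb = [np.array([0.0, 0.0, 100.0]), np.array([10.0, 0.0, 100.0])]
true_bore = R.from_euler("x", 2.0, degrees=True).as_matrix()
true_lever = np.array([0.05, 0.0, -0.10])
Rc = [r @ true_bore for r in Rb]
tc = [t + r @ true_lever for r, t in zip(Rb, tb)]
print(estimate_mounting(Rb, tb, Rc, tc))       # recovers the assumed values
```

With well-distributed control points, the spread of the per-image estimates also indicates how stable the recovered mounting parameters are.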

The Kinematical Analysis of Li Xiaopeng Motion in Horse Vaulting (도마운동 Li Xiaopeng 동작의 운동학적 분석)

  • Park, Jong-Hoon;Yoon, Sang-Moon
    • Korean Journal of Applied Biomechanics / v.13 no.3 / pp.81-98 / 2003
  • The purpose of this study is to examine the kinematic characteristics of each jump phase of the Li Xiaopeng motion in horse vaulting and to provide training data. Kinematic variables were analyzed through three-dimensional cinematography using a high-speed video camera for the Li Xiaopeng motion first performed at the men's vault competition of the 14th Busan Asian Games, and the following conclusions were obtained. 1. In the post-flight, the increases in flight time, flight height, and twisting rotational velocity have a decisive effect on the increase in twist displacement. The Li Xiaopeng motion showed a longer flight time and greater flight height than the Ropez motion with the same twist displacement over the entire movement. Also, the rotational displacement of the trunk at the peak of the COG was well short of $360^{\circ}$ (one rotation), whereas the twist displacement was $606^{\circ}$; thus the Li Xiaopeng motion concentrates the twist movement in the early flight. 2. At landing, the Li Xiaopeng motion moves the hip back, raises the trunk, and slows the horizontal velocity of the COG. This is considered a well-executed landing, resulting from the large rotational and twist displacements secured while airborne. 3. At board contact, the Li Xiaopeng motion makes a rapid rotation while raising the trunk to recover the velocity lost by jumping with the horse at the back, and the trunk is already twisted by nearly $40^{\circ}$ at board contact. Provided that elasticity is generated without changing the position of the feet on the board, this aids the rotation and twist of the pre-flight. Thus, in the round-off phase, the whip of the waist through flexion and extension of the hip joint and the arm push are considered very important. 4. In the pre-flight, the Li Xiaopeng motion showed larger movement toward the horse than the techniques in previous studies and overcomes the concern of relatively low jumping power through rapid rotation of the trunk. The Li Xiaopeng motion secured a large twist distance and increased the rotational distance with the trunk bent forward, producing the effect of rushing toward the horse. 5. At horse contact, the Li Xiaopeng motion makes a short-duration contact and maintains a horse take-off angle close to vertical, contributing to the increase in post-flight time and height. This is considered to result from the rapid movement in the direction of motion, together with the rotational velocity of the trunk gained just before horse contact, and from the small shift of the rotation axis during the twisting motion owing to effective twisting in the same direction.