• Title/Abstract/Keyword: motion estimation detection

Search results: 161 items (processing time: 0.031 s)

차선검출 기반 카메라 포즈 추정 (Lane Detection-based Camera Pose Estimation)

  • 정호기;서재규
    • 한국자동차공학회논문집 / Vol. 23 No. 5 / pp.463-470 / 2015
  • When a camera installed on a vehicle is used, estimating the camera pose, including the tilt, roll, and pan angles with respect to the world coordinate system, is important for associating camera coordinates with world coordinates. Previous approaches using huge calibration patterns have the disadvantage that such patterns are costly to make and install, and approaches exploiting multiple vanishing points detected in a single image are not suitable for automotive applications, since scenes in which a front camera can capture multiple vanishing points are rare in everyday environments. This paper proposes a camera pose estimation method that collects multiple images of lane markings while the horizontal angle with respect to the markings changes. One vanishing point, the intersection of the left and right lane markings, is detected in each image, and the vanishing line is estimated from the detected vanishing points. Finally, the camera pose is estimated from the vanishing line. The proposed method is based on the fact that planar motion does not change the vanishing line of the plane, and that the normal vector of the plane can be estimated from the vanishing line. Experiments with both large and small tilt and roll angles show that the proposed method produces accurate estimates in each case. The pan-angle estimate is verified by checking that the lane markings appear upright in the bird's-eye-view image after compensation.
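The geometry described in the abstract above can be sketched in a few lines. This is a minimal illustration under a pinhole model, not the authors' implementation: the intrinsic matrix `K`, the sample vanishing points, and the sign conventions for tilt and roll are all assumed for the sketch. The key fact used is that the ground-plane normal in camera coordinates is proportional to `K^T l`, where `l` is the vanishing line fitted to the collected vanishing points.

```python
import numpy as np

def pose_from_vanishing_points(vps, K):
    """Estimate (tilt, roll) in degrees from vanishing points of lane markings.

    Fits the vanishing line to the points by total least squares, then uses
    the fact that the ground-plane normal in camera coordinates is n ~ K^T l.
    """
    pts_h = np.hstack([np.asarray(vps, float), np.ones((len(vps), 1))])
    _, _, Vt = np.linalg.svd(pts_h)          # null vector = line through points
    l = Vt[-1]                               # vanishing line (a, b, c): ax+by+c=0
    n = K.T @ l                              # plane normal, up to scale and sign
    n /= np.linalg.norm(n)
    if n[1] < 0:                             # fix the sign (image y points down)
        n = -n
    tilt = np.degrees(np.arctan2(-n[2], n[1]))   # rotation about the camera x axis
    roll = np.degrees(np.arctan2(n[0], n[1]))    # rotation about the optical axis
    return tilt, roll

# Illustrative intrinsics and vanishing points (made up, not from the paper).
K = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 360.0], [0.0, 0.0, 1.0]])
vps = [(600.0, 300.0), (640.0, 298.0), (700.0, 301.0)]
print(pose_from_vanishing_points(vps, K))
```

The pan angle is then compensated separately, e.g. by checking that the lane markings come out upright in the bird's-eye view, as the abstract describes.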

A Novel Interaction Method for Mobile Devices Using Low Complexity Global Motion Estimation

  • Nguyen, Toan Dinh;Kim, JeongHwan;Kim, SooHyung;Yang, HyungJeong;Lee, GueeSang;Chang, JuneYoung;Eum, NakWoong
    • ETRI Journal / Vol. 34 No. 5 / pp.734-742 / 2012
  • A novel interaction method for mobile phones using their built-in cameras is presented. By estimating the path connecting the center points of frames captured by the camera phone, objects of interest can be easily extracted and recognized. To estimate the movement of the mobile phone, corners and the corresponding Speeded-Up Robust Features (SURF) descriptors are used to calculate the spatial transformation parameters between the previous and current frames. These parameters are then used to map the locations of the center points of the previous frame into the current frame. Experimental results on real image sequences show that the proposed system is efficient, flexible, and able to provide accurate and stable results.
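The center-point propagation step above can be illustrated with a least-squares transform fit. This is a simplified sketch, not the ETRI authors' SURF pipeline: the matched corner coordinates are made up, feature detection and matching are assumed to have been done already, and a 4-parameter similarity transform stands in for whatever transformation model the paper actually uses.

```python
import numpy as np

def similarity_from_matches(src, dst):
    """Least-squares similarity transform (scaled rotation + translation).

    Solves for p = (a, b, tx, ty) in x' = a*x - b*y + tx, y' = b*x + a*y + ty,
    given matched points src -> dst, each of shape (N, 2).
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]; A[1::2, 1] = src[:, 0];  A[1::2, 3] = 1.0
    p, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
    return p

def warp_point(p, pt):
    a, b, tx, ty = p
    x, y = pt
    return (a * x - b * y + tx, b * x + a * y + ty)

# Made-up matches: the camera pans so that scene points shift by (-5, -3) px.
prev_pts = np.array([[100.0, 50.0], [200.0, 80.0], [150.0, 200.0], [60.0, 160.0]])
curr_pts = prev_pts + np.array([-5.0, -3.0])

p = similarity_from_matches(prev_pts, curr_pts)
# Re-locate the previous frame's center (e.g. of a 320x240 frame) in the current one.
print(warp_point(p, (160.0, 120.0)))        # approximately (155.0, 117.0)
```

Chaining these per-frame warps of the center point yields the path that the interaction method tracks.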

어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM (Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image)

  • 최윤원;최정원;대염염;이석규
    • 제어로봇시스템학회논문지 / Vol. 20 No. 8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM based on extracting obstacle features with Lucas-Kanade optical flow (LKOF) motion detection from images obtained through fish-eye lenses mounted on a robot. Omni-directional image sensors suffer from distortion because they use a fish-eye lens or mirror, but real-time image processing for mobile robots is feasible because all information around the robot is captured at once. Previous omni-directional vision SLAM research used feature points from fully corrected fish-eye images, whereas the proposed algorithm corrects only the obstacle feature points, which yields faster processing than previous systems. The core of the proposed algorithm may be summarized as follows. First, instantaneous $360^{\circ}$ panoramic images around the robot are captured through a fish-eye lens mounted facing downward. Second, feature points on the floor surface are removed using a histogram filter, and the extracted obstacle candidates are labeled. Third, the locations of obstacles are estimated from motion vectors computed with LKOF. Finally, the robot position is estimated using an Extended Kalman Filter based on the obstacle positions obtained by LKOF, and a map is created. The reliability of the mapping algorithm using fish-eye-image-based motion estimation is confirmed by comparing maps obtained with the proposed algorithm against real maps.
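The motion-vector step (LKOF) in the pipeline above can be sketched as a single-window Lucas-Kanade solve. This is a textbook illustration, not the authors' code: the synthetic frames and window size are made up, and a real system would run this per obstacle candidate, typically pyramidally, on fish-eye imagery.

```python
import numpy as np

def lk_flow(prev, curr, y, x, win=7):
    """Single-window Lucas-Kanade flow estimate at pixel (y, x).

    Solves the least-squares system [Ix Iy] @ [u v]^T = -It over the window.
    """
    h = win // 2
    P = prev[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    C = curr[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    Iy, Ix = np.gradient(P)                 # spatial gradients (axis 0 is y)
    It = C - P                              # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v                             # displacement in x and y (pixels)

# Synthetic frames: a smooth pattern whose content shifts 1 px to the right.
xx, yy = np.meshgrid(np.arange(32, dtype=float), np.arange(32, dtype=float))
prev = np.sin(0.3 * xx) + np.cos(0.2 * yy)
curr = np.sin(0.3 * (xx - 1.0)) + np.cos(0.2 * yy)

print(lk_flow(prev, curr, 16, 16))          # approximately (1.0, 0.0)
```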

운전자 피로 감지를 위한 얼굴 동작 인식 (Facial Behavior Recognition for Driver's Fatigue Detection)

  • 박호식;배철수
    • 한국통신학회논문지 / Vol. 35 No. 9C / pp.756-760 / 2010
  • This paper proposes a method for effectively recognizing facial behavior for driver fatigue detection. Facial behavior manifests in facial features such as expression, pose, gaze, and wrinkles. However, clearly distinguishing a single behavior state from facial features is a very difficult problem, because human behavior is complex and the face expressing it is too ambiguous to provide sufficient information. The proposed facial behavior recognition system first uses an infrared camera to detect facial features through eye detection, head orientation estimation, head motion estimation, face tracking, and wrinkle detection, and represents the acquired features as FACS Action Units (AUs). Based on the acquired AUs, the probability of each state is inferred through a Dynamic Bayesian Network.
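The inference step can be illustrated with a single-slice Bayes update over FACS AUs. The prior and the AU likelihood tables below are invented for this sketch; the paper's Dynamic Bayesian Network would learn such tables and additionally propagate the state estimate over time.

```python
# Toy single-slice Bayes update: P(fatigued | observed AUs). The prior and the
# AU likelihoods are invented for this sketch, not taken from the paper.
p_fatigued = 0.2                       # prior probability of fatigue
likelihood = {                         # P(AU present | state): (fatigued, alert)
    "AU43": (0.7, 0.10),               # AU43: eye closure
    "AU27": (0.5, 0.05),               # AU27: mouth stretch (yawning)
}
observed = ["AU43", "AU27"]

num = p_fatigued                       # unnormalized P(fatigued, evidence)
den = 1.0 - p_fatigued                 # unnormalized P(alert, evidence)
for au in observed:
    lf, la = likelihood[au]
    num *= lf
    den *= la
posterior = num / (num + den)
print(round(posterior, 3))             # 0.946
```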

서비스 자동화 시스템을 위한 물체 자세 인식 및 동작 계획 (Object Pose Estimation and Motion Planning for Service Automation System)

  • 권영우;이동영;강호선;최지욱;이인호
    • 로봇학회논문지 / Vol. 19 No. 2 / pp.176-187 / 2024
  • Recently, automated solutions using collaborative robots have been emerging in various industries. Their primary functions include pick-and-place, peg-in-hole insertion, fastening and assembly, welding, and more, and they are being applied and researched in various fields. How these robots are applied depends on the characteristics of the gripper attached to the end of the collaborative robot; grasping a variety of objects requires a gripper with a high degree of freedom. In this paper, we propose a service automation system using a multi-degree-of-freedom gripper, collaborative robots, and vision sensors. Assuming various products are placed at a checkout counter, we use three cameras to recognize the objects, estimate their pose, and create grasping points. The grasping points are grasped by the multi-degree-of-freedom gripper, and experiments are conducted on barcode recognition, a key task in service automation. To recognize objects, we use a CNN (Convolutional Neural Network)-based algorithm and a point cloud to estimate each object's 6D pose. Using the recognized 6D pose information, we create grasping points for the multi-degree-of-freedom gripper and perform re-grasping in a direction that facilitates barcode scanning. The experiment was conducted with four selected objects, progressing through identification, 6D pose estimation, and grasping, and the success and failure of barcode recognition were recorded to demonstrate the effectiveness of the proposed system.

Voting based Cue Integration for Visual Servoing

  • Cho, Che-Seung;Chung, Byeong-Mook
    • 제어로봇시스템학회 학술대회논문집 / ICCAS 2003 / pp.798-802 / 2003
  • The robustness and reliability of vision algorithms is the key issue in robotic research and industrial applications. In this paper, robust real-time visual tracking in complex scenes is considered. A common approach to increasing the robustness of a tracking system is to use different models (a CAD model, etc.) known a priori. Fusion of multiple features also facilitates robust detection and tracking of objects in scenes of realistic complexity. Because voting needs only a very simple model, or none at all, for fusion, voting-based fusion of cues is applied. The approach is tested on a 3D Cartesian robot that tracks a toy vehicle moving along a 3D rail, and a Kalman filter is used to estimate the motion parameters, namely the system state vector of a moving object with unknown dynamics. Experimental results show that fusing cues and motion estimation gives the tracking system robust performance.
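The two stages above, cue voting followed by Kalman filtering, can be sketched as follows. All the numbers (grid size, cue weights, noise covariances) are made up for illustration, and a 1D constant-velocity filter stands in for the paper's full motion model.

```python
import numpy as np

def vote(cue_hypotheses, grid_shape):
    """Accumulate weighted position votes from several cues; return argmax cell."""
    acc = np.zeros(grid_shape)
    for hypotheses in cue_hypotheses:          # one list of ((row, col), weight) per cue
        for (r, c), w in hypotheses:
            acc[r, c] += w
    return np.unravel_index(np.argmax(acc), grid_shape)

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
    """One predict+update of a 1D constant-velocity Kalman filter; x = [pos, vel]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
    H = np.array([[1.0, 0.0]])                 # position-only measurement
    x = F @ x
    P = F @ P @ F.T + q * np.eye(2)
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + r                        # innovation covariance
    K = P @ H.T / S                            # Kalman gain
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Made-up hypotheses from three cues (color, edge, motion) on an 8x8 grid.
cues = [
    [((3, 4), 1.0), ((3, 5), 0.4)],            # color cue
    [((3, 4), 0.8)],                           # edge cue
    [((2, 4), 0.6), ((3, 4), 0.5)],            # motion cue
]
row, col = vote(cues, (8, 8))                  # the cell most cues agree on
x, P = kalman_step(np.array([3.0, 0.0]), np.eye(2), z=float(row))
print((int(row), int(col)), x)
```

The winning cell acts as the fused measurement; the filter smooths it over time even though each individual cue may be noisy or occasionally wrong.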


칼만필터를 이용한 3-D 이동물체의 강건한 시각추적 (Robust Visual Tracking for 3-D Moving Object using Kalman Filter)

  • 조지승;정병묵
    • 한국정밀공학회 학술대회논문집 / 2003 Spring Conference / pp.1055-1058 / 2003
  • The robustness and reliability of vision algorithms is the key issue in robotic research and industrial applications. In this paper, robust real-time visual tracking in complex scenes is considered. A common approach to increasing the robustness of a tracking system is the use of different models (a CAD model, etc.) known a priori. Fusion of multiple features also facilitates robust detection and tracking of objects in scenes of realistic complexity. Voting-based fusion of cues is adopted; in voting, a very simple model, or none at all, is used for fusion. The approach is tested on a 3D Cartesian robot that tracks a toy vehicle moving along a 3D rail, and a Kalman filter is used to estimate the motion parameters, namely the system state vector of a moving object with unknown dynamics. Experimental results show that fusing cues and motion estimation gives the tracking system robust performance.


움직임 물체 검출 기반 학원 통학차량 승하차 위험 경고 시스템 (Motion Object Detection Based Hagwon-Bus Boarding Danger Warning System)

  • 송영철;박성령;양승한
    • 전기학회논문지 / Vol. 63 No. 6 / pp.810-812 / 2014
  • In this paper, a computer-vision-based boarding danger warning system for hagwon (private academy) buses is proposed to protect children from accidents causing injury or death. Three zones are defined, and different algorithms are applied to detect moving objects in each. In zone 1, a block-based entropy value is calculated from the absolute difference image between the background image and the incoming video frame. In zone 2, an effective and robust moving-object tracking algorithm based on the particle filter is performed. Experimental results demonstrate the efficiency and effectiveness of the algorithm for moving-object detection in each zone.
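The zone-1 measure can be sketched directly: compute the absolute difference against the background, then the Shannon entropy of each block's histogram. The frame size, block size, and bin count below are assumptions for the illustration, not values from the paper.

```python
import numpy as np

def block_entropy(diff, block=8, bins=16):
    """Shannon entropy (bits) of each block of an absolute-difference image.

    Blocks with varied difference values (motion) get high entropy; blocks
    that are near-uniform (static background, diff ~ 0) get entropy near 0.
    """
    h, w = diff.shape
    ent = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            blk = diff[i * block:(i + 1) * block, j * block:(j + 1) * block]
            hist, _ = np.histogram(blk, bins=bins, range=(0, 255))
            p = hist / hist.sum()
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

# Synthetic frames: a dim static background plus one bright "moving object".
rng = np.random.default_rng(0)
bg = rng.integers(0, 20, size=(64, 64)).astype(np.uint8)
frame = bg.copy()
frame[8:24, 8:24] = rng.integers(150, 255, size=(16, 16)).astype(np.uint8)

diff = np.abs(frame.astype(int) - bg.astype(int))   # cast first to avoid uint8 wrap
ent = block_entropy(diff)
print(ent[1, 1], ent[6, 6])   # object block: high entropy; background block: 0
```

Thresholding the per-block entropy then flags the blocks containing motion.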

다중 균열을 갖는 신장 보의 균열 에너지와 지배방정식 (Crack Energy and Governing Equation of an Extensible Beam with Multiple Cracks)

  • 손수덕
    • 한국공간구조학회논문집 / Vol. 24 No. 1 / pp.65-72 / 2024
  • This paper aims to advance our understanding of extensible beams with multiple cracks by presenting a crack energy and motion equation, and mathematically justifying the energy functions of axial and bending deformations caused by cracks. Utilizing an extended form of Hamilton's principle, we derive a normalized governing equation for the motion of the extensible beam, taking into account crack energy. To achieve a closed-form solution of the beam equation, we employ a simple approach that incorporates the crack's patching condition into the eigenvalue problem associated with the linear part of the governing equation. This methodology not only yields a valuable eigenmode function but also significantly enhances our understanding of the dynamics of cracked extensible beams. Furthermore, we derive a governing equation that is an ordinary differential equation concerning time, based on orthogonal eigenmodes. This research lays the foundation for further studies, including experimental validations, applications, and the study of damage estimation and detection in the presence of cracks.

조기 화재인식을 위한 화염 및 연기 검출 (Flame and Smoke Detection for Early Fire Recognition)

  • 박장식;김현태;최수영;강창순
    • 한국정보통신학회 학술대회논문집 / 한국해양정보통신학회 2007 Fall Conference / pp.427-430 / 2007
  • This paper proposes a method for detecting fire at an early stage using image-processing techniques, in order to minimize the human and material damage caused by fire. To distinguish flame from artificial lighting, candidate flame regions are identified using the characteristic color information of flame; for regions that are not flame candidates, candidate smoke regions are identified by measuring the saturation and the brightness difference between the background and the current frame. Since classification based only on simple brightness and color information often leads to misrecognition, however, motion is measured in the flame and smoke candidate regions. When the typical motion is detected in a candidate region, flame is finally classified using activity information, while for smoke an edge-detection method is applied to extract the final smoke region. Simulations applying the proposed method to video signals from real CCTV cameras confirmed that flame and smoke can be detected simultaneously and effectively.
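The color-based candidate step can be illustrated with a common flame-color heuristic (R > G > B with R above a brightness threshold). Both the rule and the threshold are assumptions for this sketch, not necessarily the exact color model used in the paper; the motion and activity checks described above would then filter these candidates.

```python
import numpy as np

def flame_candidates(rgb, r_thresh=190):
    """Boolean mask of flame-colored pixels via a simple RGB rule.

    Heuristic: flame pixels are bright and ordered R > G > B, which rejects
    white artificial lighting (where R, G, and B are nearly equal).
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > r_thresh) & (r > g) & (g > b)

# Tiny synthetic frame: one flame-colored pixel, one white-lamp pixel.
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (230, 160, 40)     # orange: accepted as a flame candidate
frame[0, 1] = (250, 250, 250)    # white light: rejected (R is not > G)
mask = flame_candidates(frame)
print(mask)
```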
