• Title/Abstract/Keyword: 2D Vision Sensors

Search results: 39

Viewer Tracking in 3D Environment and Bare-hand Interaction using the Binocular Augmented Reality System with Smartphones

  • 황재인;이진우;노승민;이윤아;임용완;김준호
    • 한국HCI학회논문지 / Vol. 10, No. 2 / pp.65-71 / 2015
  • This paper introduces the technologies required for binocular augmented reality on smartphones: smartphone-based viewpoint tracking and interaction. With recent improvements in smartphone resolution and performance, smartphone-based virtual reality devices have become commonplace, and the development of binocular augmented reality for smartphones is urgently needed. We describe how we implemented the 3D tracking and bare-hand interaction that such a system requires. A 3D feature-point map is generated by analyzing images of the environment and is then used for 3D spatial tracking. We also cover how the built-in sensors are used alongside vision to cope with tracking failures, and discuss how bare-hand 3D interaction is implemented in this setting.
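
As an illustration of the sensor-assisted fallback this abstract describes, here is a minimal Python sketch. It assumes a hypothetical feature-map tracker that reports a 4x4 camera pose and an inlier count, and a gyroscope whose integrated incremental rotation is available as a 3x3 matrix; none of these names or the fusion rule come from the paper.

```python
import numpy as np

def update_pose(visual_pose, inlier_count, gyro_delta, last_pose, min_inliers=30):
    """Return the current camera pose, falling back to gyroscope data when
    visual tracking fails (hypothetical fusion rule, not the paper's)."""
    if visual_pose is not None and inlier_count >= min_inliers:
        return visual_pose                # feature-map tracking is reliable
    # Tracking lost: keep the last known position but update orientation
    # with the gyroscope's incremental rotation (integrated elsewhere).
    fallback = last_pose.copy()
    fallback[:3, :3] = gyro_delta @ last_pose[:3, :3]
    return fallback
```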

Three-dimensional Head Tracking Using Adaptive Local Binary Pattern in Depth Images

  • Kim, Joongrock;Yoon, Changyong
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 16, No. 2 / pp.131-139 / 2016
  • Recognition of human motion has become a major area of computer vision due to its potential for human-computer interfaces (HCI) and surveillance. Among existing recognition techniques for human motion, head detection and tracking is the basis for all human motion recognition. Various approaches have been tried to precisely detect and track the position of the human head in two-dimensional (2D) images. However, it remains a challenging problem because human appearance varies greatly with pose and images are affected by illumination changes. To enhance the performance of head detection and tracking, real-time three-dimensional (3D) data acquisition sensors such as time-of-flight cameras and the Kinect depth sensor have recently been used. In this paper, we propose an effective feature extraction method, called adaptive local binary pattern (ALBP), for depth-image-based applications. In contrast to the well-known conventional local binary pattern (LBP), the proposed ALBP not only extracts shape information without texture in depth images, but is also invariant to distance changes in range images. We apply the proposed ALBP to head detection and tracking in depth images to show its effectiveness and usefulness.
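
To make the contrast with conventional LBP concrete, the sketch below computes an LBP code over a depth image with a threshold that scales with the center pixel's depth. The scaling rule (k times the center depth) is our assumption for illustration; the paper's exact ALBP definition may differ.

```python
import numpy as np

def adaptive_lbp(depth, k=0.01):
    """Depth-image LBP with a distance-adaptive threshold (a sketch of the
    ALBP idea). A neighbor sets its bit only if it is deeper than the center
    by more than k * center_depth, so the code stays stable as absolute
    depth (and depth noise) grows with distance."""
    d = depth.astype(np.float64)
    c = d[1:-1, 1:-1]                     # center pixels
    t = k * c                             # per-pixel adaptive threshold
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        n = d[1+dy:d.shape[0]-1+dy, 1+dx:d.shape[1]-1+dx]  # shifted neighbor
        code |= ((n - c) > t).astype(np.uint8) << bit
    return code
```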

Distortion Removal and False Positive Filtering for Camera-based Object Position Estimation

  • 진실;송지민;최지호;진용식;정재진;이상준
    • 대한임베디드공학회논문지 / Vol. 19, No. 1 / pp.1-8 / 2024
  • Robotic arms have been widely utilized in various labor-intensive industries such as manufacturing, agriculture, and food services, contributing to increased productivity. In the development of industrial robotic arms, camera sensors have many advantages due to their cost-effectiveness and small size. However, estimating object positions is a challenging problem, and it critically affects the robustness of object manipulation functions. This paper proposes a method for estimating the 3D positions of objects, applied to a pick-and-place task. A deep learning model is utilized to detect 2D bounding boxes in the image plane, and the pinhole camera model is employed to compute the object positions. To improve the robustness of measuring the 3D positions of objects, we analyze the effect of lens distortion and introduce a false positive filtering process. Experiments were conducted on a real-world scenario in which medicine bottles are moved by a camera-based manipulator. Experimental results demonstrated that the distortion removal and false positive filtering are effective in improving the position estimation precision and the manipulation success rate.
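
The back-projection step the abstract refers to follows directly from the pinhole model. The sketch below converts a detected 2D bounding box plus a known object distance into a 3D position in the camera frame; the variable names and the choice of the box center as the measurement point are ours, and distortion is assumed to have been removed beforehand (e.g., with OpenCV's cv2.undistortPoints).

```python
import numpy as np

def bbox_to_position(bbox, depth, K):
    """Back-project a 2D bounding-box center to a 3D camera-frame point.
    bbox = (x1, y1, x2, y2) in pixels, depth = object distance Z in meters,
    K = 3x3 camera intrinsic matrix."""
    u = (bbox[0] + bbox[2]) / 2.0         # box center, pixel coordinates
    v = (bbox[1] + bbox[3]) / 2.0
    fx, fy = K[0, 0], K[1, 1]             # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]             # principal point
    x = (u - cx) * depth / fx             # X in camera frame
    y = (v - cy) * depth / fy             # Y in camera frame
    return np.array([x, y, depth])
```

For example, with fx = fy = 600 px, principal point (320, 320), a box centered at (380, 320), and Z = 0.5 m, the object lies at X = (380 - 320) * 0.5 / 600 = 0.05 m to the right of the optical axis.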

A Framework for Cognitive Agents

  • Petitt, Joshua D.;Braunl, Thomas
    • International Journal of Control, Automation, and Systems / Vol. 1, No. 2 / pp.229-235 / 2003
  • We designed a family of completely autonomous mobile robots with local intelligence. Each robot has a number of on-board sensors, including vision, and does not rely on global positioning systems. The on-board embedded controller is sufficient to analyze several low-resolution color images per second. This enables our robots to perform several complex tasks such as navigation, map generation, and intelligent group behavior. Since the robots are completely autonomous and not limited to playing the game of soccer, we are also looking at a number of other interesting scenarios. The robots can communicate with each other, e.g., to exchange positions, information about objects, or simply the local states they are currently in (e.g., sharing their current objectives with other robots in the group). We are particularly interested in the differences between a behavior-based approach and a traditional control algorithm at this still very low level of action.

Object Recognition Method for Industrial Intelligent Robot

  • 김계경;강상승;김중배;이재연;도현민;최태용;경진호
    • 한국정밀공학회지 / Vol. 30, No. 9 / pp.901-908 / 2013
  • The introduction of intelligent industrial robots using vision sensors has attracted interest in factory automation. 2D and 3D vision sensors have been used to recognize objects and estimate their poses, for example when assembling parts into a complete product. This is not a trivial task, however, owing to illumination effects and the variety of object types: object images are distorted by illumination, which lowers recognition reliability. In this paper, a recognition method for objects of complex shape is proposed. An accurate object region is detected from a combined binary image obtained using a DoG (Difference of Gaussians) filter and local adaptive binarization. The object is recognized using a neural network trained on object classes sub-divided according to object type and rotation angle. A predefined shape model of the object and the maximal slope are used to estimate the object's pose. The performance was evaluated on an ETRI database, and a recognition rate of 96% was obtained.
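
The DoG-plus-local-binarization step can be sketched in a few lines of OpenCV. Everything below (sigma values, block size, the OR combination rule) is illustrative; the paper's actual parameters are not given here.

```python
import cv2
import numpy as np

def object_region(gray, s1=1.0, s2=2.0, block=31, C=5):
    """Combine a DoG band-pass map with locally adaptive binarization to get
    an illumination-robust object mask (a sketch of the idea only).
    `gray` is an 8-bit single-channel image."""
    g1 = cv2.GaussianBlur(gray, (0, 0), s1)
    g2 = cv2.GaussianBlur(gray, (0, 0), s2)
    dog = cv2.normalize(g1.astype(np.float32) - g2.astype(np.float32),
                        None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, dog_bin = cv2.threshold(dog, 0, 255,
                               cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    local = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY_INV, block, C)
    return cv2.bitwise_or(dog_bin, local)   # combined binary image
```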

A hardware architecture based on the NCC algorithm for fast disparity estimation in 3D shape measurement systems

  • 배경렬;권순;이용환;이종훈;문병인
    • 센서학회지 / Vol. 19, No. 2 / pp.99-111 / 2010
  • This paper proposes an efficient hardware architecture to estimate disparities between 2D images for generating 3D depth images in a stereo vision system. Stereo matching methods are classified into global and local methods. The local matching method uses cost functions based on pixel windows, such as SAD (sum of absolute differences), SSD (sum of squared differences), and NCC (normalized cross correlation). The NCC-based cost function is less susceptible to differences in noise and lighting conditions between the left and right images than subtraction-based functions such as SAD and SSD, and for this reason the NCC is preferred. However, software-based implementations are not adequate for NCC-based real-time stereo matching due to its numerous complex operations. Therefore, we propose a fast pipelined hardware architecture suitable for real-time operation of the NCC function. By adopting a block-based box-filtering scheme to perform NCC operations in parallel, the proposed architecture improves processing speed compared with previous work. In this architecture, it takes almost the same number of cycles to process all the pixels, irrespective of the window size. Simulation results also show that its disparity estimation has a low error rate.
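
The NCC cost the abstract describes can be prototyped in software with box filters, which is exactly what makes the hardware parallelization cheap: every window sum becomes a separable running sum, so the cost per pixel is independent of window size. A minimal NumPy/SciPy sketch of the matching (not the hardware design itself) follows; window size and disparity range are placeholder values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ncc_disparity(left, right, max_d=64, win=9, eps=1e-6):
    """Window-based NCC stereo matching using box filters.
    NCC(x, y, d) = sum(L*R) / sqrt(sum(L^2) * sum(R^2)) over a win x win
    window; uniform_filter returns window means, but the ratio is the same."""
    L = left.astype(np.float64)
    disp = np.zeros(L.shape, dtype=np.int32)
    best = np.full(L.shape, -np.inf)
    sL = uniform_filter(L * L, win)                    # box filter of L^2
    for d in range(max_d):
        # Align right-image pixel (x - d) with left-image pixel x.
        # (np.roll wraps at the border; a simplification for brevity.)
        R = np.roll(right.astype(np.float64), d, axis=1)
        num = uniform_filter(L * R, win)
        den = np.sqrt(sL * uniform_filter(R * R, win)) + eps
        score = num / den                              # NCC per window
        better = score > best
        disp[better] = d
        best[better] = score[better]
    return disp
```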

Biomimetic Object Approach Detection Sensor Using Multiple Images

  • 최명훈;김민;정재훈;박원현;이동헌;변기식;김관형
    • 한국정보통신학회 Conference Proceedings / 2016 Spring Conference / pp.91-93 / 2016
  • Extracting 3D information from 2D images is a crucially important step. There are two approaches: monocular vision, which uses a single camera, and binocular vision, which uses two; the latter is commonly called "stereo vision." In the automatic object-tracking systems now widely deployed in CCTV and other media, a stereo camera that mimics the two human eyes lets an operator grasp the situation in the field and the progress of a task far more clearly, maximizing the efficiency of avoidance/control maneuvers and various other operations. Conventional object-tracking systems based on 2D images cannot perceive distance and therefore could not recognize an object's motion in depth; by using the disparity between the stereo images and marking the object, an observer will be able to exercise more effective control.
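
Once a disparity d is available for a tracked object, its distance follows from standard stereo triangulation, Z = f * B / d, which is precisely the quantity a purely 2D tracker cannot observe. A one-function illustration (all parameter values hypothetical):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereo triangulation: Z = f * B / d (focal length f in pixels,
    baseline B in meters, disparity d in pixels)."""
    return focal_px * baseline_m / disparity_px

# e.g. a 700 px focal length, 12 cm baseline, and 14 px disparity give 6.0 m
print(depth_from_disparity(14, 700, 0.12))
```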

Design of range measurement systems using a sonar and a camera

  • 문창수;도용태
    • 센서학회지 / Vol. 14, No. 2 / pp.116-124 / 2005
  • In this paper, range measurement systems are designed using an ultrasonic sensor and a camera. An ultrasonic sensor provides the range to a target quickly and simply, but its low resolution is a disadvantage. We tackle this problem by employing a camera. Instead of using a stereoscopic sensor, which is widely used for 3D sensing but requires computationally intensive stereo matching, the range is measured by focusing and by structured lighting. For focusing, a straightforward focus measure named MMDH (min-max difference in histogram) is proposed and compared with existing techniques. In the structured lighting method, light stripes projected by a beam projector are used; compared with systems using a laser beam projector, the designed system can be constructed easily and on a low budget. The system equation is derived by analyzing the sensor geometry. A sensing scenario using the designed systems proceeds in two steps. First, when better accuracy is required, measurements from ultrasonic sensing and camera focusing are fused by MLE (maximum likelihood estimation). Second, when the target is in a range of particular interest, a range map of the target scene is obtained using the structured lighting technique. In experiments, the designed systems showed measurement accuracy of up to approximately 0.3 mm.
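
For independent Gaussian measurement noise, the MLE fusion in the first stage reduces to inverse-variance weighting, sketched below. The sensors' actual noise variances are not reproduced here; the values would come from calibration.

```python
def fuse_mle(z_sonar, var_sonar, z_focus, var_focus):
    """Maximum-likelihood fusion of two independent Gaussian range
    measurements (standard inverse-variance weighting)."""
    w1, w2 = 1.0 / var_sonar, 1.0 / var_focus
    z = (w1 * z_sonar + w2 * z_focus) / (w1 + w2)   # fused range estimate
    var = 1.0 / (w1 + w2)                           # fused variance
    return z, var
```

The fused variance is always smaller than either input variance, which is why combining the coarse sonar reading with the focus-based measurement improves accuracy.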

Object Pose Estimation and Motion Planning for Service Automation System

  • 권영우;이동영;강호선;최지욱;이인호
    • 로봇학회논문지 / Vol. 19, No. 2 / pp.176-187 / 2024
  • Recently, automated solutions using collaborative robots have been emerging in various industries. Their primary functions include pick-and-place, peg-in-hole, fastening and assembly, welding, and more, and they are being utilized and researched in various fields. The application of these robots varies depending on the characteristics of the gripper attached to the end of the collaborative robot. To grasp a variety of objects, a gripper with a high degree of freedom is required. In this paper, we propose a service automation system using a multi-degree-of-freedom gripper, collaborative robots, and vision sensors. Assuming various products are placed at a checkout counter, we use three cameras to recognize the objects, estimate their poses, and generate grasping points. The grasping points are grasped by the multi-degree-of-freedom gripper, and experiments are conducted on barcode recognition, a key task in service automation. To recognize objects, we used a CNN (convolutional neural network)-based algorithm and a point cloud to estimate each object's 6D pose. Using the recognized object's 6D pose information, we create grasping points for the multi-degree-of-freedom gripper and perform re-grasping in a direction that facilitates barcode scanning. The experiment was conducted with four selected objects, progressing through identification, 6D pose estimation, and grasping, and the success and failure of barcode recognition were recorded to demonstrate the effectiveness of the proposed system.
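
For readers unfamiliar with point-cloud pose estimation, the sketch below shows a generic baseline: translation from the cloud's centroid and orientation from PCA of the segmented points. This is a simplification we supply for illustration only, not the paper's CNN-based 6D pose pipeline.

```python
import numpy as np

def pose_from_point_cloud(points):
    """Estimate a coarse 6D pose from an object's segmented point cloud
    (N x 3 array): translation = centroid, rotation = PCA axes."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    R = vt.T                                  # columns = principal axes
    if np.linalg.det(R) < 0:                  # enforce a right-handed frame
        R[:, 2] *= -1
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, centroid
    return T                                  # 4x4 homogeneous pose
```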