• Title/Abstract/Keyword: Unknown Object

Search results: 192 items (processing time: 0.03 sec)

컴퓨터 비젼을 이용한 원기둥형 물체의 3차원 측정 (3-Dimensional Measurement of the Cylindrical Object Using Computer Vision)

  • 장택준;주기세;한민홍
    • 한국정밀공학회지 / Vol. 12, No. 12 / pp.38-44 / 1995
  • This paper presents a method to measure the position and orientation of a cylindrical object (of unknown diameter and length) lying on a floor, using a camera. The two extreme cross sections of the cylinder are viewed as distorted ellipses or circular arcs, while its limb edges appear as two straight lines. The diameter of the cylinder is determined from the geometric properties of the two straight lines, which in turn provides information regarding the length of the cylinder. From the 3-dimensional measurement, the 3D coordinates of the center points of the two extreme cross sections are determined to give the position and orientation of the cylinder. This method can be used for automated pick-and-place operations on cylinders, such as sheet coils or drums in warehouses.
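The diameter recovery from the two limb-edge lines can be sketched under a simplifying assumption the abstract does not state: an approximately orthographic view in which the limb edges project as parallel lines, so the diameter is the perpendicular gap between them divided by the image scale. The function names and the `pixels_per_mm` calibration factor here are hypothetical:

```python
import math

def line_from_points(p, q):
    """Return normalized line coefficients (a, b, c) with a*x + b*y + c = 0."""
    a = q[1] - p[1]
    b = p[0] - q[0]
    n = math.hypot(a, b)
    a, b = a / n, b / n
    c = -(a * p[0] + b * p[1])
    return a, b, c

def cylinder_diameter(limb1_pts, limb2_pts, pixels_per_mm):
    """Estimate the cylinder diameter (mm) from two parallel limb edges,
    each given as a pair of image points (pixels)."""
    a, b, c = line_from_points(*limb1_pts)
    x, y = limb2_pts[0]
    gap_px = abs(a * x + b * y + c)   # perpendicular distance between the limb lines
    return gap_px / pixels_per_mm
```

With a perspective camera the limb lines are not parallel and the full geometric construction of the paper is needed; this sketch only illustrates the scale relation.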


3차원 물체의 인식 성능 향상을 위한 감각 융합 시스템 (Sensor Fusion System for Improving the Recognition Performance of 3D Object)

  • Kim, Ji-Kyoung;Oh, Yeong-Jae;Chong, Kab-Sung;Wee, Jae-Woo;Lee, Chong-Ho
    • 대한전기학회:학술대회논문집 / 대한전기학회 2004년도 학술대회 논문집 정보 및 제어부문 / pp.107-109 / 2004
  • In this paper, the authors propose a sensor fusion system that can recognize multiple 3D objects from 2D projection images and tactile information. The proposed system focuses on improving the recognition performance for 3D objects. Unlike conventional object recognition systems that use an image sensor alone, the proposed method uses tactile sensors in addition to a visual sensor. A neural network is used to fuse this information. Tactile signals are obtained from the reaction force measured by pressure sensors at the fingertips when unknown objects are grasped by a four-fingered robot hand. The experiment evaluates the recognition rate and the number of learning iterations for various objects. The merits of the proposed system are not only its strong learning ability but also its reliability: tactile information allows various objects to be recognized even when the visual information is defective. The experimental results show that the proposed system can improve the recognition rate and reduce learning time. These results verify the effectiveness of the proposed sensor fusion system as a recognition scheme for 3D objects.
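The fusion step described above can be sketched in its simplest form: concatenate the visual and tactile feature vectors and score object classes with a linear layer plus softmax. This is an illustrative stand-in, not the paper's trained network; the feature layout and weights are assumptions:

```python
import math

def fuse_and_classify(visual, tactile, weights):
    """Fuse visual and tactile feature vectors by concatenation, then score
    each object class with one linear layer followed by softmax."""
    x = list(visual) + list(tactile)          # sensor fusion by concatenation
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    m = max(scores)                           # stabilize the exponentials
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]          # class membership probabilities
```

In the paper the mapping from fused features to classes is learned by the neural network; here the weight rows are fixed purely for illustration.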


근접 센서를 이용한 로봇 손의 파지 충격 개선 (Grasping Impact-Improvement of Robot Hands using Proximate Sensor)

  • 홍예선;진성무
    • 한국정밀공학회지 / Vol. 16, No. 1 (Whole No. 94) / pp.42-48 / 1999
  • A control method is proposed for a robot hand grasping an object in a partially unknown environment, in which a proximity sensor detecting the distance between the fingertip and the object is used. In this study, the finger joints were driven servo-pneumatically. Based on the proximity sensor signal, the finger motion controller plans the grasping process in three phases: fast approach, slow transitional contact, and contact force control. That is, the fingertip approaches the object at full speed until the output signal of the proximity sensor begins to change. Within the operating range of the proximity sensor, the finger joint is moved by a state-variable feedback position controller in order to obtain smooth contact with the object. The contact force of the fingertip is then controlled using the blocked-line pressure sensitivity of the flow-control servovalve for finger joint control. In this way, the grasping impact can be reduced without reducing the approach speed. The performance of the proposed grasping method was experimentally compared with that of an open-loop-controlled one.
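The three-phase switching logic above can be sketched as a small phase selector. The sensor range and force threshold are assumed values, and the phase names are hypothetical labels, not the paper's notation:

```python
def grasp_phase(proximity, contact_force, sensor_range=10.0):
    """Select the control phase from the proximity-sensor reading (mm)
    and the measured fingertip force (N), mirroring the three phases:
    fast approach -> slow transitional contact -> contact force control."""
    if contact_force > 0.0:
        return "contact-force-control"       # object touched: regulate force
    if proximity > sensor_range:
        return "fast-approach"               # sensor not yet responding: full speed
    return "slow-transitional-contact"       # within sensor range: position control
```

In the actual system each phase maps to a different controller (open-loop drive, state-variable feedback, and servovalve pressure control, respectively); this sketch only shows the switching condition.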


Strategy of Object Search for Distributed Autonomous Robotic Systems

  • Kim Ho-Duck;Yoon Han-Ul;Sim Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6, No. 3 / pp.264-269 / 2006
  • This paper presents a strategy for searching for a hidden object in an unknown area using multiple distributed autonomous robotic systems (DARS). To search for the target in a Markovian space, the DARS robots should recognize their surroundings wherever they are located and generate rules to act upon by themselves. First of all, each robot obtains six distances from itself to the environment using infrared sensors allocated hexagonally around it. Second, it calculates six areas from those distances and then takes an action, i.e., turns and moves toward where the widest space is guaranteed. After the action is taken, the Q-value at that state is updated by the corresponding formula. We set up an experimental environment with five small mobile robots, obstacles, and a target object, and searched for the target object while navigating an unknown hallway in which some obstacles were placed. At the end of this paper, we present the results of three algorithms: a random search, an area-based action-making process to determine the next action of the robot, and hexagon-based Q-learning to enhance the area-based action-making process.
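The area-based action step and the Q-update can be sketched as follows. The sector-area formula assumes the six IR rays are exactly 60 degrees apart, so the area between adjacent rays i and i+1 is 0.5·d_i·d_{i+1}·sin 60°; the learning-rate and discount values are illustrative, not taken from the paper:

```python
import math

def widest_direction(distances):
    """Given six IR distances at 60-degree spacing, return the index of the
    sector with the largest area (the direction of the widest free space)."""
    s = math.sin(math.radians(60))
    areas = [0.5 * distances[i] * distances[(i + 1) % 6] * s for i in range(6)]
    return max(range(6), key=areas.__getitem__)

def q_update(q, state, action, reward, next_max, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update on a dict-backed Q-table."""
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * next_max - old)
    return q[(state, action)]
```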

Human Tracking using Multiple-Camera-Based Global Color Model in Intelligent Space

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 6, No. 1 / pp.39-46 / 2006
  • We propose a global color model based method for tracking the motions of multiple humans using a networked multiple-camera system in intelligent space, a human-robot coexistent system. An intelligent space is a space in which many intelligent devices, such as computers and sensors (color CCD cameras, for example), are distributed. Human beings can be a part of intelligent space as well. One of the main goals of intelligent space is to assist humans and to provide various services for them. To be capable of doing that, intelligent space must be able to perform various human-related tasks. One of them is to identify and track multiple objects seamlessly. In an environment where many camera modules are distributed on a network, it is important to identify an object in order to track it, because different cameras may be needed as the object moves throughout the space, and intelligent space should determine the appropriate one. This paper describes appearance-based unknown-object tracking with the distributed vision system in intelligent space. First, we discuss how object color information is obtained and how the color appearance based model is constructed from these data. Then, we discuss the global color model based on the local color information. The process of learning within the global model and the experimental results are also presented.
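A minimal sketch of a color appearance model of the kind described above: quantize an object's pixels into a normalized color histogram, then re-identify the object in another camera by histogram intersection. The bin count and the intersection measure are assumptions for illustration, not the paper's exact model:

```python
def color_histogram(pixels, bins=4):
    """Quantize (r, g, b) pixels (0-255 per channel) into a normalized
    joint color histogram serving as the object's appearance model."""
    hist = {}
    for r, g, b in pixels:
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        hist[key] = hist.get(key, 0) + 1
    n = len(pixels)
    return {k: v / n for k, v in hist.items()}

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions.
    Used to match an object's local model against the global model."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))
```

A global model would be built by accumulating such local histograms from each camera as the person moves through the space.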

Multiple Human Recognition for Networked Camera based Interactive Control in IoT Space

  • Jin, Taeseok
    • 한국산업융합학회 논문집 / Vol. 22, No. 1 / pp.39-45 / 2019
  • We propose an active color model based method for tracking the motions of multiple humans using a networked multiple-camera system in IoT space, a human-robot coexistent system. An IoT space is a space in which many intelligent devices, such as computers and sensors (color CCD cameras, for example), are distributed. Human beings can be a part of IoT space as well. One of the main goals of IoT space is to assist humans and to provide various services for them. To be capable of doing that, IoT space must be able to perform various human-related tasks. One of them is to identify and track multiple objects seamlessly. In an environment where many camera modules are distributed on a network, it is important to identify an object in order to track it, because different cameras may be needed as the object moves throughout the space, and IoT space should determine the appropriate one. This paper describes appearance-based unknown-object tracking with the distributed vision system in IoT space. First, we discuss how object color information is obtained and how the color appearance based model is constructed from these data. Then, we discuss the global color model based on the local color information. The process of learning within the global model and the experimental results are also presented.

옵셋 보정 주기에 따른 망원경 시스템 관측 성능 분석 (Observation Performance Analysis of the Telescope System according to the Offset Compensation Cycle)

  • 이호진;현철;이상욱
    • 한국정보통신학회논문지 / Vol. 24, No. 1 / pp.15-21 / 2020
  • In this paper, an M&S (Modeling & Simulation) analysis of the observation performance of a telescope system, an electro-optical observation asset, is carried out for the surveillance of unknown space objects. An operational concept for observing unknown space objects with two telescope systems is considered, and an M&S model is constructed. Based on this observation concept, initial orbit determination is performed to generate an estimated orbit, offset compensation is applied to the estimated orbit, and the observation performance is analyzed according to the compensation cycle. The M&S results show that the shorter the offset compensation cycle, the higher the observation performance; as the cycle grows longer, performance decreases because opportunities for error correction become fewer. Therefore, to raise the observation performance of a telescope system for unknown-space-object surveillance, the observation system should be configured either to refine the initial orbit estimate or to perform offset compensation continuously.
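The qualitative finding above (shorter compensation cycle, lower tracking error) can be reproduced with a toy error model: pointing error drifts between corrections and is reset at each offset compensation. The linear-drift assumption and all numeric values are illustrative, not taken from the paper's M&S model:

```python
def mean_tracking_error(compensation_cycle, horizon=100, drift_per_step=0.5):
    """Pointing error grows linearly between corrections and is reset to
    zero at every offset compensation; return the time-averaged error
    over the observation horizon."""
    error, total = 0.0, 0.0
    for t in range(1, horizon + 1):
        error += drift_per_step          # uncorrected drift accumulates
        total += error
        if t % compensation_cycle == 0:
            error = 0.0                  # offset compensation applied
    return total / horizon
```

Even this crude model shows the monotone trend the M&S analysis reports: the averaged error grows with the compensation interval.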

GrabCut을 이용한 IR 영상 분할 (IR Image Segmentation using GrabCut)

  • 이희열;이은영;구은혜;최일;최병재;류강수;박길흠
    • 한국지능시스템학회논문지 / Vol. 21, No. 2 / pp.260-267 / 2011
  • This paper proposes a method for segmenting an object from the background in infrared (IR) images based on the GrabCut algorithm. The GrabCut algorithm requires a window enclosing the object of interest, which is normally set by the user. To apply the algorithm to object recognition in image sequences, however, the window location must be determined automatically. To this end, this paper roughly segments the unknown object of interest in an image with the Otsu algorithm and locates the window automatically through blob analysis. The GrabCut algorithm must estimate the probability distributions of the object of interest and of the background. Here, the object distribution is estimated from the pixels inside the automatically located window, and the background distribution from a region of equal area surrounding the object window. Segmentation experiments on various IR images show that the proposed method is suitable for IR image segmentation, and comparison with existing IR segmentation methods demonstrates its superior segmentation performance.
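The rough object/background split that seeds the window placement is Otsu's method, which can be sketched in a few lines. This is a generic Otsu implementation on a flat list of 8-bit gray levels, not the paper's code; in practice one would use a library routine such as OpenCV's `cv2.threshold` with the Otsu flag:

```python
def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance for a flat
    list of 8-bit values (the rough split used to place the GrabCut window)."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_bg += hist[t]                      # background = levels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var = w_bg * w_fg * (m_bg - m_fg) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Blob analysis on the thresholded mask then yields the bounding window handed to GrabCut.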

스캐닝 프로브를 이용한 미지의 자유곡면 점군 획득에 관한 연구 (Digitization of Unknown Sculptured Surface Using a Scanning Probe)

  • 권기복;김재현;이정근;박정환;고태조
    • 한국정밀공학회지 / Vol. 21, No. 4 / pp.57-63 / 2004
  • This paper describes a method for digitizing compound surfaces composed of several unknown feature shapes, such as base surfaces and draft walls. From the reverse engineering point of view, the main step is to digitize, or gather, three-dimensional points on an object rapidly and precisely. As is well known, a non-contact digitizing apparatus using a laser or structured light can rapidly obtain a great bulk of digitized points, while a touch or scanning probe gives higher accuracy by directly contacting its stylus to the part surface. By combining these two methods, unknown features can be digitized efficiently. The paper proposes a digitizing methodology using the approximated surface model obtained from laser-scanned data, followed by the use of a scanning probe. Each surface boundary curve and its confining area are investigated to select the most suitable digitizing path topology, in a manner similar to generating NC tool-paths. The methodology was tested with a simple physical model whose shape comprises a base surface, draft walls, and cavity volumes.
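One common path topology of the NC tool-path kind mentioned above is a boustrophedon (zigzag) raster. The sketch below generates such a probing path over a rectangular patch; the rectangular-patch assumption and the function name are illustrative, since the paper selects among topologies per boundary curve:

```python
def zigzag_path(x_min, x_max, y_min, y_max, step):
    """Generate a boustrophedon (zigzag) probing path over a rectangular
    patch: scan lines at the given y spacing, alternating direction, the
    same topology as a raster NC tool-path."""
    path, reverse = [], False
    y = y_min
    while y <= y_max + 1e-9:
        row = [(x_min, y), (x_max, y)]
        if reverse:
            row.reverse()                # alternate direction each scan line
        path.extend(row)
        reverse = not reverse
        y += step
    return path
```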

Calibration 모형을 이용한 판별분석 (Discriminant analysis based on a calibration model)

  • 이석훈;박래현;복혜영
    • 응용통계연구 / Vol. 10, No. 2 / pp.261-274 / 1997
  • Most of the data handled by previously proposed discriminant analysis techniques have been limited to cases where each object belongs entirely to one specific group. However, as binary (0-1) logic is extended to fuzzy concepts and multi-valued logic, the view that confines an object to exactly one group also needs to change. This paper therefore considers the situation in which an object belongs to several groups with certain membership probabilities, and aims to develop a discriminant rule from a training sample composed of such objects. Methodologically, the relation between the objects' feature vectors and their membership states is expressed by a calibration model, and the membership state is estimated given the feature vector of an object to be classified, using a Bayesian method and the Metropolis algorithm. Criteria for evaluating the proposed discriminant rule are also proposed, and two data sets are analyzed together with existing rules to compare the results.
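The key object of the paper, a vector of group-membership probabilities rather than a hard label, can be illustrated with a deliberately simple surrogate: softmax of negative squared distance to each group centroid. This is not the paper's calibration-model/Metropolis estimate, only a sketch of what a soft membership output looks like; the centroid representation is an assumption:

```python
import math

def membership_probabilities(x, centroids):
    """Soft membership of feature vector x in each group: softmax of the
    negative squared distance to each group centroid. An illustrative
    surrogate for the calibration-model estimate, not the paper's method."""
    d2 = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in centroids]
    m = min(d2)                              # stabilize the exponentials
    w = [math.exp(-(d - m)) for d in d2]
    s = sum(w)
    return [wi / s for wi in w]              # probabilities sum to 1
```

A training object in this setting carries such a probability vector as its "label", which is what distinguishes the calibration approach from classical one-group-per-object discriminant analysis.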
