• Title/Summary/Keyword: object-orientation

Search results: 307

Carrying pose optimization by using wrench space (렌치 스페이스를 이용한 물체 들기 자세 최적화)

  • Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society
    • /
    • v.21 no.4
    • /
    • pp.19-26
    • /
    • 2015
  • This paper presents a method for optimizing the carrying pose of a human body for a given object. The inputs are an articulated human body model and an arbitrarily shaped object. We assume that the object is big and heavy, so that both arms must be used to carry it. Unlike small and light objects, big and heavy objects can be held by only a small range of body poses while maintaining physical stability. We first introduce an algorithm that evaluates the physical stability of a given human body pose and object state (position and orientation). Then, we define a configuration space and search it for the most stable carrying pose using the evaluation algorithm. Finally, to demonstrate the usability of our method, we present results from experiments with differently shaped objects and additional user conditions.
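
As a rough illustration of the search stage described in the entry above, the sketch below discretizes a small configuration space of object states and keeps the most stable one. The `evaluate_stability` function is a hypothetical placeholder, not the paper's wrench-space measure, and the grid ranges are invented for the example.

```python
# Hypothetical sketch of the configuration-space search: discretize the object's
# position/orientation relative to the body and keep the most stable state.
# evaluate_stability() stands in for the paper's wrench-space stability measure.
import itertools
import numpy as np

def evaluate_stability(position, orientation):
    # Placeholder score (higher = more stable); only the interface is illustrated here.
    return -np.linalg.norm(position) - abs(orientation)

def search_carrying_pose(x_range, y_range, z_range, yaw_range):
    best_score, best_state = -np.inf, None
    for x, y, z, yaw in itertools.product(x_range, y_range, z_range, yaw_range):
        score = evaluate_stability(np.array([x, y, z]), yaw)
        if score > best_score:
            best_score, best_state = score, (x, y, z, yaw)
    return best_state, best_score

# Example: coarse grid of object states around the chest area of the body model.
state, score = search_carrying_pose(
    np.linspace(-0.2, 0.2, 5), np.linspace(0.2, 0.5, 4),
    np.linspace(0.8, 1.4, 4), np.linspace(-np.pi, np.pi, 8))
```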

Recognition Direction Improvement of Target Object for Machine Vision based Automatic Inspection (머신비전 자동검사를 위한 대상객체의 인식방향성 개선)

  • Hong, Seung-Beom;Hong, Seung-Woo;Lee, Kyou-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.11
    • /
    • pp.1384-1390
    • /
    • 2019
  • This paper proposes a technological solution for improving the recognition direction of target objects for automatic vision inspection by machine vision. This enables the automatic machine vision inspection to detect the image of the inspection object regardless of the position and orientation of the object, eliminating the need for a separate inspection jig and improving the automation level of the inspection process. This study develops a technology and method that can be applied to the wire harness manufacturing process as the inspection object and presents the results of a real system. The system implementation was evaluated by an accredited institution. The evaluation confirmed successful measurement of accuracy, detection recognition, reproducibility, and positioning success rate, and achievement of the goals for discrimination of ten colors, inspection time within one second, four automatic mode settings, etc.
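
For a concrete picture of position- and orientation-independent detection, the sketch below shows one generic way to localize an inspection object and rotate it into a canonical orientation with OpenCV (assuming OpenCV 4 and an 8-bit grayscale input). It is not the paper's method; the thresholding and contour steps are only illustrative.

```python
# Generic illustration: find the inspection object at an arbitrary position/orientation
# and normalize its orientation before running the actual inspection checks.
import cv2
import numpy as np

def normalize_object_pose(gray):
    # Binarize and take the largest contour as the inspection object.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(contour)  # position, size, orientation
    # Rotate the image so the object appears in a canonical orientation.
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    return cv2.warpAffine(gray, rot, (gray.shape[1], gray.shape[0]))
```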

A Method for Indoor Positioning Utilizing Depth Camera (깊이 측정 카메라를 이용한 실내 위치결정 방법)

  • Seokjin Kim;Seunghyeon Jeon;Taegwan Lee;Seungo Kim;Chaelyn Park;Bongen Gu
    • Journal of Platform Technology
    • /
    • v.12 no.1
    • /
    • pp.44-54
    • /
    • 2024
  • Existing indoor positioning methods using beacons or tags suffer from issues such as occasional non-detection or increased errors due to noise. In this paper, we propose a method for determining the indoor position of a robot using the distance to a target object whose position is known and the angle between the target object's frontal direction and the direction in which the robot views it. The proposed method utilizes a depth camera to measure distance and calculate angles: distance is measured from the depth information captured by the camera, while angles are determined from the captured images and the orientation of the target object. The method calculates coordinate displacements from the distance and angle, and then determines the position of the mobile robot from these displacements and the coordinates of the target object. To show the applicability of the proposed method to indoor positioning, we implemented it experimentally and compared the measured displacements. The results showed errors within 50 mm; considering the size of the mobile robot, we judge that the proposed method can be used for indoor positioning.

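A minimal sketch of the displacement computation described in the entry above: given the measured distance to a landmark with known coordinates and the angle between the landmark's frontal axis and the line of sight toward the robot, the robot's position is recovered. The angle convention used here is an assumption; the paper defines its own geometry.

```python
# Recover the robot position from (distance, angle) to a known landmark.
import math

def robot_position(target_xy, target_heading, distance, view_angle):
    # Direction from the landmark toward the robot, in world coordinates
    # (view_angle = offset from the landmark's frontal axis; assumed convention).
    direction = target_heading + view_angle
    dx = distance * math.cos(direction)   # coordinate displacements
    dy = distance * math.sin(direction)
    return target_xy[0] + dx, target_xy[1] + dy

# Landmark at (2.0 m, 3.0 m) facing along +x; robot 1.5 m away, 10 degrees off that axis.
print(robot_position((2.0, 3.0), 0.0, 1.5, math.radians(10.0)))
```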

A study on vision system based on Generalized Hough Transform 2-D object recognition (Generalized Hough Transform을 이용한 이차원 물체인식 비젼 시스템 구현에 대한 연구)

  • Koo, Bon-Cheol;Park, Jin-Soo;Chien, Sung-Il
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.67-78
    • /
    • 1996
  • The purpose of this paper is object recognition, even in the presence of occlusion, using the generalized Hough transform (GHT). The GHT can be considered a kind of model-based object recognition algorithm and is executed in two stages. The first stage stores the information of the model in the form of an R-table (reference table). The next stage identifies the existence of objects in the image by using the R-table. An improved GHT method is proposed for a practical vision system. First, in constructing the R-table, we extract a partial arc from a portion of the whole object boundary, and this partial arc is used to construct the R-table. A clustering algorithm is also employed to compensate for errors arising from digitizing the object image. Second, an efficient method is introduced to avoid Ballard's use of a 4-D array, which would otherwise be necessary for estimating the position, orientation, and scale change of an object; a 2-D array is enough to recognize an object. In particular, a scale-token method is introduced for calculating the scale change, which is easily affected by camera zoom. Our test results show that the improved hierarchical GHT method operates stably in realistic vision situations, even in the case of object occlusion.

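The sketch below illustrates the two GHT stages named in the abstract above: building an R-table from the model boundary, then letting scene edge pixels vote in a 2-D accumulator for the reference point. Rotation and scale handling (including the paper's scale-token idea) are deliberately omitted, so this is only the baseline technique.

```python
# Baseline generalized Hough transform: R-table construction and 2-D voting.
from collections import defaultdict
import numpy as np

def build_r_table(model_edges, model_orientations, reference_point):
    # model_edges: (N, 2) integer edge-pixel coordinates of the model boundary;
    # model_orientations: N gradient angles in radians at those pixels.
    r_table = defaultdict(list)
    for (x, y), theta in zip(model_edges, model_orientations):
        key = int(np.round(np.degrees(theta))) % 360          # quantized edge orientation
        r_table[key].append((reference_point[0] - x, reference_point[1] - y))
    return r_table

def ght_vote(scene_edges, scene_orientations, r_table, accumulator_shape):
    acc = np.zeros(accumulator_shape, dtype=np.int32)          # 2-D accumulator only
    for (x, y), theta in zip(scene_edges, scene_orientations):
        key = int(np.round(np.degrees(theta))) % 360
        for dx, dy in r_table.get(key, ()):
            xr, yr = x + dx, y + dy
            if 0 <= xr < accumulator_shape[0] and 0 <= yr < accumulator_shape[1]:
                acc[xr, yr] += 1
    return acc                                                  # peak = likely reference point
```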

Multiple Texture Image Recognition with Unsupervised Block-based Clustering (비교사 블록-기반 군집에 의한 다중 텍스쳐 영상 인식)

  • Lee, Woo-Beom;Kim, Wook-Hyun
    • The KIPS Transactions:PartB
    • /
    • v.9B no.3
    • /
    • pp.327-336
    • /
    • 2002
  • Texture analysis is an important technique in many image-understanding areas, such as the perception of surface, object, shape, and depth. However, previous works address only texture segmentation and cannot acquire recognition information, and no unsupervised method has been based on the recognition of texture in an image. We propose a novel approach for efficient texture image analysis that uses unsupervised learning schemes for texture recognition. The self-organizing neural network for multiple texture image identification is based on block-based clustering and merging. The texture features used are the angle and magnitude of the orientation field, which may differ across the sample textures. To show the performance of the proposed system, we built a variety of test texture images. The final segmentation is achieved by applying an efficient edge-detection algorithm to block-based dilation. The experimental results show that the performance of the system is very successful.
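
As a hedged illustration of block-wise orientation-field features (angle and magnitude) of the kind named above, the sketch below computes them from plain image gradients; the paper's exact feature definition and its self-organizing clustering network are not reproduced here.

```python
# Block-based orientation-field features: one (dominant angle, mean magnitude) pair per block.
import numpy as np

def block_orientation_features(image, block=16):
    gy, gx = np.gradient(image.astype(float))   # image gradients (rows, cols)
    angle = np.arctan2(gy, gx)                  # local orientation
    magnitude = np.hypot(gx, gy)                # local strength
    h, w = image.shape
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            a = angle[r:r + block, c:c + block]
            m = magnitude[r:r + block, c:c + block]
            # Dominant orientation via doubled-angle averaging (orientation is pi-periodic).
            dominant = np.arctan2(np.sin(2 * a).mean(), np.cos(2 * a).mean()) / 2
            feats.append((dominant, m.mean()))
    return np.array(feats)
```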

Adjustment of Exterior Orientation Parameters for Geometric Registration of Aerial Images and LIDAR Data (항공영상과 라이다데이터의 기하학적 정합을 위한 외부표정요소의 조정)

  • Hong, Ju-Seok;Lee, Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.27 no.5
    • /
    • pp.585-597
    • /
    • 2009
  • This research aims to develop a registration method to remove the geometric inconsistency between aerial images and LIDAR data acquired from an airborne multi-sensor system. The proposed method mainly includes registration-primitive extraction, correspondence establishment, and EOP (Exterior Orientation Parameters) adjustment. As the registration primitives, we extract planar patches and intersection edges from the LIDAR data, and object points and linking edges from the aerial images. The extracted primitives are then categorized into horizontal and vertical ones, and their correspondences are established. These corresponding pairs are incorporated as stochastic constraints into the bundle block adjustment, which finally adjusts the exterior orientation parameters of the images precisely. According to the experimental results from applying the proposed method to real data, the attitude parameters of the EOPs were meaningfully adjusted, and the geometric inconsistency of the primitives used for the adjustment was reduced from 2 m before registration to 2 cm after. Hence, the results of this research can contribute to data fusion for high-quality 3D spatial information.
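
A minimal numerical sketch of the "stochastic constraint" mechanism mentioned above: constraint equations are stacked with the ordinary observation equations, each with their own weight, and solved in one least-squares step. The matrix names and weights are illustrative assumptions, not the paper's formulation.

```python
# Weighted least squares with additional constraint equations folded into the normal equations.
import numpy as np

def constrained_adjustment(A, l, C, d, w_obs=1.0, w_con=100.0):
    # A x ~ l : ordinary observation equations (e.g., image observations of the EOPs)
    # C x ~ d : constraint equations (e.g., correspondences with LIDAR primitives)
    N = w_obs * A.T @ A + w_con * C.T @ C
    u = w_obs * A.T @ l + w_con * C.T @ d
    return np.linalg.solve(N, u)
```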

The Effects of the Initial Crack Length and Fiber Orientation on the Interlaminar Delamination of the CFRP/GFRP Hybrid Laminate (초기 균열길이 및 섬유방향이 CFRP/GFRP 하이브리드 적층재의 층간 파괴에 미치는 영향)

  • Kwon, Oh-Heon;Kwon, Woo-Deok;Kang, Ji-Woong
    • Journal of the Korean Society of Safety
    • /
    • v.28 no.1
    • /
    • pp.12-17
    • /
    • 2013
  • Considering a wind power system, the rotor blades would be among the most dangerous parts in an accident, because they revolve at high speed and weigh dozens of tons; therefore, lightweight composite materials have been adopted as substitute materials. The object of this study is to examine the delamination and damage of a CFRP/GFRP hybrid composite that is used to improve the strength of a wind power blade. The influence of the initial crack length and fiber orientation on interlaminar delamination was examined for blade safety. Plain-woven CFRP instead of GFRP was inserted into the layers of the box spar to improve the strength and blade life. DCB (Double Cantilever Beam) specimens were used for evaluating the fracture toughness and damage of interlaminar delamination. The materials used in the experiment are a commercial plain-woven carbon prepreg known as CF 3327 EPC (Hankuk Carbon Co.) and a UD glass fiber prepreg (Hyundai Fiber Co.). From the results, the crack growth rate does not differ much with the initial crack length. The mode I interlaminar fracture toughness for the $0^{\circ}$ fiber direction is higher than that for $45^{\circ}$. Interlaminar fracture is affected by the fiber direction, and K decreases as the initial crack length increases. The energy-release-rate fracture toughness was also evaluated because the CFRP/GFRP hybrid composite with different thicknesses is under mixed-mode loading; the interlaminar fracture was governed almost entirely by mode I, even under the mixed mode.
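
For reference, the textbook simple-beam-theory estimate of the mode I energy release rate for a DCB specimen is given below; the paper may apply a corrected or experimentally calibrated form, so this is only the baseline expression.

```latex
% P: applied load, \delta: load-point opening displacement,
% B: specimen width, a: crack length (uncorrected beam-theory form)
G_{I} = \frac{3 P \delta}{2 B a}
```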

Analysis on 3D Positioning Precision Using Mobile Mapping System Images in Photogrammetric Perspective (사진측량 관점에서 차량측량시스템 영상을 이용한 3차원 위치의 정밀도 분석)

  • 조우석;황현덕
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.6
    • /
    • pp.431-445
    • /
    • 2003
  • In this paper, we experimentally investigated the precision of 3D positioning using 4S-Van images from a photogrammetric perspective. A 3D calibration target was set up on a building facade outdoors and was captured separately by the two CCD cameras installed in the 4S-Van. We then determined the interior orientation parameters for each CCD camera through a self-calibration technique. With the interior orientation parameters computed, a bundle adjustment was performed to obtain the exterior orientation parameters simultaneously for the two CCD cameras, using the calibration target images and object coordinates. Reverse lens-distortion coefficients were computed by the least-squares method so as to introduce lens distortion into the epipolar line. It was shown that the reverse lens-distortion coefficients could transform image coordinates into lens-distorted image coordinates to within about 0.5 pixel. The proposed semi-automatic matching scheme, incorporating the lens-distorted epipolar line, was applied to scene images captured by the 4S-Van while moving. The experimental results showed that the precision of 3D positioning from 4S-Van images, from a photogrammetric perspective, is within 2 cm in the range of 20 m from the camera.
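
The sketch below gives a hedged picture of the reverse lens-distortion idea mentioned above: starting from a two-term radial correction model, coefficients are fitted by least squares so that ideal coordinates can be mapped back to distorted ones (e.g., to draw an epipolar line in the raw image). The model and coefficients are illustrative assumptions, not the 4S-Van calibration.

```python
# Fit "reverse" radial distortion coefficients by least squares.
import numpy as np

def undistort(xy_dist, k1, k2):
    # Correction model: ideal = distorted * (1 + k1 r^2 + k2 r^4), r measured in the raw image.
    r2 = np.sum(xy_dist**2, axis=1, keepdims=True)
    return xy_dist * (1.0 + k1 * r2 + k2 * r2**2)

def fit_reverse_coefficients(xy_dist, k1, k2):
    # Find c1, c2 so that ideal * (1 + c1 r^2 + c2 r^4) reproduces the distorted coordinates.
    xy_ideal = undistort(xy_dist, k1, k2)
    r2 = np.sum(xy_ideal**2, axis=1, keepdims=True)
    A = np.column_stack([(xy_ideal * r2).reshape(-1), (xy_ideal * r2**2).reshape(-1)])
    b = (xy_dist - xy_ideal).reshape(-1)
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c  # (c1, c2)

# Example fit over a grid of normalized image coordinates with assumed k1, k2.
pts = np.stack(np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9)), -1).reshape(-1, 2)
c1, c2 = fit_reverse_coefficients(pts, k1=-0.1, k2=0.02)
```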

Control of a mobile robot supporting a task robot on the top

  • Lee, Jang M.
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1996.10a
    • /
    • pp.1-7
    • /
    • 1996
  • This paper addresses the control problem of a mobile robot supporting a task robot that needs to be positioned precisely. The main difficulty in the precise control of a mobile robot supporting a task robot is providing an accurate and stable base for the task robot. That is, the end-plate of the mobile robot, which is the base of the task robot, cannot be positioned accurately without external position sensors. This difficulty is resolved in this paper through vision information obtained from a camera attached at the end of the task robot. First of all, the camera parameters were measured using images of a fixed object captured by the camera. The measured parameters include the rotation, the position, the scale factor, and the focal length of the camera. These parameters could be measured using the features of each vertex of a hexagonal object and the pin-hole model of the camera. Using the measured pose (position and orientation) of the camera and the given kinematics of the task robot, we calculate the pose of the end-plate of the mobile robot, which is used for the precise control of the mobile robot. Experimental results for the pose estimation are shown.

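As a generic stand-in for the pose measurement described above (not the paper's own derivation), the sketch below recovers a pin-hole camera pose from the known 3-D vertices of a hexagonal object and their pixel locations using OpenCV's solvePnP; the hexagon size, intrinsics, and test pose are invented for the example.

```python
# Camera pose from known object vertices via the pin-hole model (PnP).
import cv2
import numpy as np

# Regular hexagon (radius 0.1 m) in the object plane z = 0, standing in for the fixed object.
angles = np.deg2rad(np.arange(0, 360, 60))
object_points = np.column_stack([0.1 * np.cos(angles), 0.1 * np.sin(angles), np.zeros(6)])

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])  # assumed intrinsics

# Synthesize pixel observations from a known pose, then recover that pose with solvePnP.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.02, 0.01, 1.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)          # camera orientation (object frame -> camera frame)
camera_position = -R.T @ tvec       # camera position expressed in the object frame
print(ok, camera_position.ravel())
```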

Language (Meaning) and Cognitive Science (언어(특히 의미)와 인지과학)

  • Lee, Chung-Min
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2005.05a
    • /
    • pp.23-27
    • /
    • 2005
  • Humans perceptually segment events, but models that predict where events will be segmented are limited. Developing a detailed model may be hard because of the overlapping quality of events (i.e., one can smile and walk at the same time, but the endpoint of each event can be different). However, some aspects of events appear to be universally represented in the world's languages. For example, path, the trajectory of an object's movement, is one of the most universally encoded event features. Although it is generally encoded in the prepositions of English (e.g., up), in other languages it is encoded in the verbs (e.g., descendere). Linguistic universals may represent basic levels of event perception. Here we consider how one of these, path, might be parsed. Because the spatiotemporal projection of paths to an observation point is similar to the spatial projection of objects, we tested the hypothesis that path segmentation and object segmentation would be based on similar image properties, such as discontinuities in orientation.
