• Title/Summary/Keyword: Object Position

Search Results: 1,228

Relative Quantifier Scope and Object Shift

  • Lee, Chang-Su
    • Korean Journal of English Language and Linguistics / v.2 no.1 / pp.97-121 / 2002
  • Aoun and Li (1989) and Hornstein (1995) suggest that the cross-linguistic contrast in quantifier scope between English and East Asian languages is attributed to a parametric difference in the base subject position, viz. the VP-internal position in English and Spec IP in East Asian languages. This paper argues that their suggestion is untenable, and that the cross-linguistic contrast in question is instead due to the parametric difference that English permits (overt) object shift while East Asian languages do not.

  • PDF

Magnetic Substance Search Using Finite Element Method and Neural Network (유한요소법과 인공지능을 이용할 자성체 탐사)

  • Lee, Kang-Woo; Park, Il-Han
    • Proceedings of the KIEE Conference / 1997.07a / pp.198-200 / 1997
  • This paper considers a simple nondestructive testing (NDT) problem involving the eddy-current effect. We analyzed a two-dimensional model of an alternating magnetic field with eddy currents driven by a voltage source. The current magnitude and phase data obtained at each of several frequencies for five object positions are used to train a neural network, so that an object-position pattern can then be recognized from new input current magnitude and phase data. (An illustrative training sketch follows this entry.)

  • PDF
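
The abstract describes training a neural network on current magnitude and phase data gathered at several frequencies for five object positions. The authors' finite-element data and network are not available here, so the following is only a minimal sketch of that kind of classifier, with made-up feature dimensions and scikit-learn's MLPClassifier standing in for whatever network the paper actually used.

```python
# A minimal sketch (not the authors' code): classifying one of five object
# positions from eddy-current magnitude/phase measured at several frequencies.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_frequencies = 5               # hypothetical number of excitation frequencies
n_samples_per_position = 40     # hypothetical number of simulated samples per position

# Feature vector: [magnitude_f1..f5, phase_f1..f5]; label: position index 0..4.
X, y = [], []
for position in range(5):
    base = rng.normal(loc=position + 1.0, scale=0.05,
                      size=(n_samples_per_position, 2 * n_frequencies))
    X.append(base)
    y.append(np.full(n_samples_per_position, position))
X, y = np.vstack(X), np.concatenate(y)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# A new magnitude/phase measurement is mapped to the most likely object position.
new_measurement = X[0] + rng.normal(scale=0.01, size=X.shape[1])
print("predicted position index:", clf.predict([new_measurement])[0])
```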

Object Detection and 3D Position Estimation based on Stereo Vision (스테레오 영상 기반의 객체 탐지 및 객체의 3차원 위치 추정)

  • Son, Haengseon; Lee, Seonyoung; Min, Kyoungwon; Seo, Seongjin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.4 / pp.318-324 / 2017
  • We introduced a stereo camera on an aircraft to detect flying objects and estimate their 3D positions. A saliency map algorithm based on PCT was proposed to detect a small object between clouds, and a stereo matching algorithm was then applied to find the disparity between the left and right images. To extract accurate disparity, the cost aggregation region was made variable so that it adapts to the detected object; in this paper, the detection result is used as the cost aggregation region. To extract more precise disparity, sub-pixel interpolation is used to obtain a floating-point disparity at the sub-pixel level. We also proposed a method to estimate the spatial position of an object using the camera parameters. The approach is expected to be applicable to image-based object detection and collision avoidance systems for autonomous aircraft in the future. (A sketch of the disparity-to-position step follows this entry.)
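
The final step the abstract describes, estimating a spatial position from a sub-pixel disparity and the camera parameters, follows standard rectified-stereo geometry. The sketch below is not the paper's implementation; the focal length, principal point, and baseline values are hypothetical.

```python
# A minimal sketch: recovering a 3D point from a sub-pixel disparity using
# standard pinhole / rectified-stereo geometry.
def disparity_to_3d(u, v, disparity, fx, fy, cx, cy, baseline):
    """u, v: pixel coordinates in the left image; disparity in pixels (float)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    z = fx * baseline / disparity          # depth along the optical axis
    x = (u - cx) * z / fx                  # lateral offset
    y = (v - cy) * z / fy                  # vertical offset
    return x, y, z

# Hypothetical camera parameters (focal length in pixels, baseline in metres).
print(disparity_to_3d(u=700.0, v=380.0, disparity=12.4,
                      fx=1000.0, fy=1000.0, cx=640.0, cy=360.0, baseline=0.3))
```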

Wearable Robot System Enabling Gaze Tracking and 3D Position Acquisition for Assisting a Disabled Person with Disabled Limbs (시선위치 추적기법 및 3차원 위치정보 획득이 가능한 사지장애인 보조용 웨어러블 로봇 시스템)

  • Seo, Hyoung Kyu; Kim, Jun Cheol; Jung, Jin Hyung; Kim, Dong Hwan
    • Transactions of the Korean Society of Mechanical Engineers A / v.37 no.10 / pp.1219-1227 / 2013
  • A new type of wearable robot is developed for a person with disabled limbs, that is, a person who cannot intentionally move his/her legs and arms. The robot enables the disabled person to grip an object using eye movements. A gaze tracking algorithm is employed to detect the pupil movements with which the person looks at the object to be gripped. Using this 2D gaze-tracking information, the object is identified and the distance to it is measured with a Kinect device installed on the robot's shoulder. Through several coordinate transformations and a matching scheme, the 3D information about the object with respect to the base frame is clearly identified, and the final position data are transmitted to the DSP-controlled robot controller, which enables the target object to be gripped successfully. (A sketch of the coordinate-transformation step follows this entry.)
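
The abstract mentions chaining several coordinate transformations to express the Kinect measurement in the robot's base frame. A minimal sketch of that idea is given below; the frame names, mounting offsets, and numbers are assumptions, not the authors' calibration.

```python
# A minimal sketch (hypothetical frames and values): expressing an object point
# measured in the Kinect camera frame in the robot base frame by chaining
# homogeneous transformations.
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Shoulder frame expressed in the base frame (assumed known from the robot model).
T_base_shoulder = homogeneous(np.eye(3), np.array([0.0, 0.0, 0.45]))
# Kinect mounting pose on the shoulder (assumed from calibration).
T_shoulder_kinect = homogeneous(np.eye(3), np.array([0.05, 0.0, 0.10]))

# Object point in the Kinect frame (metres), e.g. from the depth image at the gaze point.
p_kinect = np.array([0.2, -0.1, 0.8, 1.0])

# Chain the transforms: base <- shoulder <- kinect.
p_base = T_base_shoulder @ T_shoulder_kinect @ p_kinect
print("object position in base frame:", p_base[:3])
```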

Extraction of Object 3-Dimension Position Coordinates using CCD-Camera (CCD-Camera를 이용한 목적대상의 3차원 위치좌표 추출)

  • Kim, Moo-Hyun; Lee, Ji-Hyun; Kim, Young-Hee; Park, Mu-Hun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2010.05a / pp.245-249 / 2010
  • In a stereo vision system, information about an object can be gained by searching through images. Edges derived from this information are used to find the position of the object and to send its position coordinates to an unmanned crane. This thesis proposes an algorithm that finds the center point of the object surface to be connected to the unmanned crane's arm and recognizes the shape of the object using two CCD cameras. First, edge information is obtained and each edge's characteristics are distinguished according to the user's options; the location information is then found from the set of proposed positions. This work is expected to contribute to the development of automation systems for unmanned moving equipment. (A single-camera edge/centroid sketch follows this entry.)

  • PDF
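
The thesis itself is not reproduced here, but the edge-then-centre-point idea in the abstract can be illustrated with standard OpenCV calls on a synthetic image. The Canny thresholds and the assumption that the largest contour is the object are illustrative choices, not the authors'.

```python
# A minimal sketch (not the thesis code): finding the centre point of an object's
# visible surface from edge information in a single CCD image using OpenCV.
import cv2
import numpy as np

# Synthetic test image standing in for a CCD frame: a bright rectangular object.
image = np.zeros((240, 320), dtype=np.uint8)
cv2.rectangle(image, (100, 80), (220, 170), 255, thickness=-1)

edges = cv2.Canny(image, 50, 150)                  # edge map of the object boundary
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)       # assume the largest contour is the object

m = cv2.moments(largest)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid of the object outline
print("surface centre point (pixels):", (cx, cy))
```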

A Development of Object Position Information Extraction Algorithm using Stereo Vision (스테레오 비전을 이용한 물체의 위치정보 추출 알고리즘 개발)

  • Kim, Moo-Hyun; Lee, Ji-Hyun; Lee, Seung-Kuy; Kim, Young-Hee; Park, Mu-Hun
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.8 / pp.1767-1775 / 2010
  • As factory automation becomes more widespread, there has been considerable research on stereo vision systems as part of automation systems with unmanned moving equipment. In a stereo vision system, information about an object can be gained by searching through images. Edges derived from this information are used to find the position of the object and to send its position coordinates to an unmanned crane. This thesis proposes an algorithm that finds the center point of the object surface to be connected to the unmanned crane's hook block and recognizes the shape of the object using two CCD cameras. First, edge information is obtained and each edge's characteristics are distinguished according to the user's options; the location information is then found from the set of proposed positions. This work is expected to contribute to the development of automation systems for unmanned moving equipment. (A two-camera triangulation sketch follows this entry.)
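
Once the surface centre point has been matched in both CCD images, a 3D coordinate can be recovered by triangulation. The sketch below uses OpenCV's cv2.triangulatePoints with hypothetical intrinsics and baseline; it is a generic stereo step, not the paper's specific algorithm.

```python
# A minimal sketch (hypothetical calibration): triangulating the matched centre
# point from the left and right CCD images into a 3D position for the crane.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])      # assumed intrinsic matrix for both cameras
baseline = 0.25                      # assumed baseline in metres

P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-baseline], [0.0], [0.0]])])

def project(P, X):
    """Project a homogeneous 3D point with projection matrix P to pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

# Matched object-centre points in each image, generated here from a known 3D point.
point_3d_true = np.array([[0.1], [0.05], [2.0], [1.0]])
pt_left = project(P_left, point_3d_true)
pt_right = project(P_right, point_3d_true)

X_h = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
X = (X_h[:3] / X_h[3]).ravel()       # back to Euclidean coordinates
print("triangulated 3D position (metres):", X)
```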

The Image Position Measurement for the Selected Object out of the Center using the 2 Points Polar Coordinate Transform (2 포인트 극좌표계 변환을 이용한 중심으로부터의 목표물 영상 위치 측정)

  • Seo, Choon Weon
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.11 / pp.147-155 / 2015
  • For an image processing system to classify a selected object in a natural scene, rotation-, scale-, and translation-invariant features are necessary. Many studies obtain such information using the log-polar transform, which yields features invariant to scale and rotation. In this paper, we propose a 2-point polar coordinate transform, combined with a centroid method, to measure the position of a selected object relative to the center of the input image. The proposed system produces good object position results, with a similarity ratio of 99~104% for the object coordinate values. (A sketch of centroid-based polar position measurement follows this entry.)
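
The exact form of the proposed 2-point polar coordinate transform is not given in the abstract, so the sketch below only illustrates the underlying idea of measuring an object's offset from the image centre in polar coordinates via its centroid; it should not be read as the paper's method.

```python
# A minimal sketch: the object centroid's offset from the image centre,
# expressed in polar coordinates (radius, angle).
import numpy as np

def object_position_polar(binary_image):
    """Return (radius, angle in degrees) of the object's centroid from the image centre."""
    ys, xs = np.nonzero(binary_image)
    centroid_x, centroid_y = xs.mean(), ys.mean()        # centroid of the object pixels
    centre_y, centre_x = (np.array(binary_image.shape) - 1) / 2.0
    dx, dy = centroid_x - centre_x, centroid_y - centre_y
    radius = np.hypot(dx, dy)
    angle = np.degrees(np.arctan2(dy, dx))
    return radius, angle

# Synthetic input: a small object placed away from the centre of a 200x200 image.
image = np.zeros((200, 200), dtype=np.uint8)
image[150:160, 30:40] = 1
print(object_position_polar(image))
```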

Effect of object position in the field of view and application of a metal artifact reduction algorithm on the detection of vertical root fractures on cone-beam computed tomography scans: An in vitro study

  • Nikbin, Ava; Kajan, Zahra Dalili; Taramsari, Mehran; Khosravifard, Negar
    • Imaging Science in Dentistry / v.48 no.4 / pp.245-254 / 2018
  • Purpose: To assess the effects of object position in the field of view (FOV) and application of a metal artifact reduction (MAR) algorithm on the diagnostic accuracy of cone-beam computed tomography (CBCT) for the detection of vertical root fractures (VRFs). Materials and Methods: Sixty human single-canal premolars received root canal treatment. VRFs were induced in 30 endodontically treated teeth. The teeth were then divided into 4 groups, with 2 groups receiving metal posts and the remaining 2 only having an empty post space. The roots from the different groups were mounted in a phantom made of cow rib bone, and CBCT scans were obtained for the 4 groups. Three observers evaluated the images independently. Results: The highest frequency of correct diagnoses of VRFs was obtained with the object positioned centrally in the FOV, using the MAR algorithm. Peripheral positioning of the object without the MAR algorithm yielded the highest sensitivity for the first observer (66.7%). For the second and third observers, a central position improved sensitivity, with or without the MAR algorithm. In the presence of metal posts, central positioning of the object in the FOV significantly increased the diagnostic sensitivity and accuracy compared to peripheral positioning. Conclusion: Diagnostic accuracy was higher with central positioning than with peripheral positioning, irrespective of whether the MAR algorithm was applied. However, the effect of the MAR algorithm was more significant with central positioning than with peripheral positioning of the object in the FOV. The clinical experience and expertise of the observers may serve as a confounder in this respect.

Object Recognition Using Local Binary Pattern Based on Confidence Measure (신뢰 척도 기반 지역 이진 패턴을 이용한 객체 인식)

  • Yonggeol Lee
    • Journal of Advanced Navigation Technology / v.27 no.1 / pp.126-132 / 2023
  • Object recognition is a technology that detects and identifies various objects in images and videos. LBP is a descriptor that is robust to illumination variations and is actively used in object recognition. LBP depends on the range of neighboring pixels, the order in which the neighbors are combined after the comparison operation, and the starting position of the combination. In particular, the starting position of the LBP becomes the most significant bit, so it strongly affects object recognition performance. In this paper, based on N starting positions, the data most similar to the input data are searched in each of the N feature spaces. Object recognition is then performed using a confidence measure that can compare the different results from each feature space under the same criterion and select the most reliable one. The experimental results confirm that performance differs depending on the starting position of the LBP, and the proposed method improves recognition performance by up to 12.66% over the existing LBP. (A sketch of this starting-position/confidence scheme follows this entry.)
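
The abstract's idea of computing LBP features for N starting positions and keeping the most confident nearest-neighbour result can be sketched as follows. The 3x3 neighbourhood, histogram distance, and margin-based confidence are assumptions for illustration; the paper's actual confidence measure may differ.

```python
# A minimal sketch (not the paper's implementation): 8-neighbour LBP histograms
# computed for different starting positions, with the most reliable feature
# space chosen by a simple nearest-neighbour margin confidence.
import numpy as np

# 8 neighbour offsets in circular order around the centre pixel.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_histogram(image, start):
    """LBP histogram where the neighbour at index `start` supplies the most significant bit."""
    h, w = image.shape
    centre = image[1:h-1, 1:w-1]
    code = np.zeros_like(centre, dtype=np.uint8)
    for bit, k in enumerate(range(start, start + 8)):
        dy, dx = OFFSETS[k % 8]
        neighbour = image[1+dy:h-1+dy, 1+dx:w-1+dx]
        code |= (neighbour >= centre).astype(np.uint8) << (7 - bit)
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def recognise(query, gallery, n_starts=8):
    """For each starting position, find the closest gallery entry; keep the most confident one."""
    best = None
    for start in range(n_starts):
        q = lbp_histogram(query, start)
        dists = sorted((np.linalg.norm(q - lbp_histogram(g, start)), label)
                       for label, g in gallery.items())
        confidence = dists[1][0] - dists[0][0]   # margin between best and second-best match
        if best is None or confidence > best[0]:
            best = (confidence, dists[0][1])
    return best[1]

rng = np.random.default_rng(1)
gallery = {name: rng.integers(0, 256, (32, 32), dtype=np.uint8) for name in ("a", "b", "c")}
query = gallery["b"].copy()
print(recognise(query, gallery))
```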

Study on the Error Compensation in Strain Measurement of Sheet Metal Forming (박판성형 변형률 측정 오차보정에 관한 연구)

  • 한병엽; 차지혜; 금영탁
    • Proceedings of the Korean Society for Technology of Plasticity Conference / 2003.05a / pp.270-273 / 2003
  • The strain measurement of panels in sheet metal forming is essential work that provides experimental data needed for die design, process design, and product inspection. To measure complex-geometry strains efficiently, a three-dimensional automated strain measurement system is often used; it is highly accurate in theory but shows errors of about 3~5% in practice. The object of this study is to develop an error compensation technique that eliminates the strain errors arising when formed panels are measured with such a system. To this end, a position error calibration method is suggested that corrects the coordinates of grid nodes recognized by a camera using error functions. The position errors were found by calculating the difference between the real and measured coordinates of the cube nodes, and the error calibration equations were derived by regressing the position errors. To validate the suggested position error calibration method, finite element analysis and the current calibration method were applied to the formed initial blank. (A sketch of the regression-based calibration follows this entry.)

  • PDF
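
The abstract describes deriving error calibration equations by regressing the position errors of measured grid-node coordinates. A minimal one-dimensional sketch of that idea, using a low-order polynomial fit in place of the paper's unspecified error functions, is shown below.

```python
# A minimal sketch: regress the position error of measured grid-node coordinates
# against the measured coordinate, then subtract the fitted error from new measurements.
import numpy as np

rng = np.random.default_rng(0)
real_x = np.linspace(0.0, 100.0, 50)                 # true node x-coordinates (mm)
# Synthetic measurements with a systematic error of a few percent plus noise.
measured_x = real_x + 0.03 * real_x + 0.5 + rng.normal(scale=0.1, size=real_x.size)

# Error calibration equation: fit the position error as a low-order polynomial.
error = measured_x - real_x
coeffs = np.polyfit(measured_x, error, deg=2)

def calibrate(x_measured):
    """Subtract the regressed position error from a measured coordinate."""
    return x_measured - np.polyval(coeffs, x_measured)

print("corrected:", calibrate(measured_x[:5]))
print("true     :", real_x[:5])
```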