• Title/Summary/Keyword: 2D Vision

2D Map Generation Using Omnidirectional Image Sensor and Stereo Vision for Mobile Robot MAIRO (자율이동로봇 MAIRO의 전방향 이미지센서와 스테레오 비전 시스템을 이용한 2차원 지도 생성)

  • Kim, Kyung-Ho;Lee, Hyung-Kyu;Son, Young-Jun;Song, Jae-Keun
    • Proceedings of the KIEE Conference
    • /
    • 2002.11c
    • /
    • pp.495-500
    • /
    • 2002
  • Recently, the service robot industry has emerged as a promising industry of the next generation. In particular, there has been much research on self-steering movement (SSM). To implement SSM, a robot must effectively recognize its surroundings, detect objects, and build a map of the environment with its sensors. Many robots therefore carry sonar, infrared, and similar sensors. However, these sensors provide only the distance between the robot and an object, and their resolution is poor. In this paper, we introduce a new algorithm that recognizes objects around the robot and builds a two-dimensional map of the surroundings using an omnidirectional vision camera and two stereo vision cameras.
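The map-building geometry this abstract describes can be sketched with the classic stereo relations (a generic illustration; the focal length, baseline, grid resolution, and detections below are placeholders, not values from the paper):

```python
# Sketch: turning a stereo disparity into a depth estimate, then placing
# the observed obstacle into a 2-D grid map in the world frame.
# All numeric parameters here are illustrative assumptions.
import math

def stereo_depth(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Classic pinhole-stereo relation: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def to_map_cell(robot_x, robot_y, robot_heading_rad, depth_m, bearing_rad,
                cell_size_m=0.05):
    """Project an observed obstacle into a 2-D occupancy-grid cell."""
    wx = robot_x + depth_m * math.cos(robot_heading_rad + bearing_rad)
    wy = robot_y + depth_m * math.sin(robot_heading_rad + bearing_rad)
    return int(round(wx / cell_size_m)), int(round(wy / cell_size_m))

# An object seen with 42 px disparity, dead ahead of a robot at the origin:
z = stereo_depth(42.0)                 # depth in metres
cell = to_map_cell(0.0, 0.0, 0.0, z, 0.0)
```

A real system would fuse many such cells over time and mark free space along each ray, but the per-observation geometry is essentially the two functions above.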

Test bed for autonomous controlled space robot (우주로봇 자율제어 테스트 베드)

  • 최종현;백윤수;박종오
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1997.10a
    • /
    • pp.1828-1831
    • /
    • 1997
  • This paper, in order to approximately represent robot motion in space, deals with an algorithm for 2-D position recognition of a space robot, its target, and obstacles using a vision system. It also presents algorithms for precise distance measurement and calibration using a laser displacement system, for trajectory selection to optimize movement toward an object, and for robot locomotion with an air-thrust valve. Software synthesizing these algorithms helps the operator grasp the situation with certainty and perform the job without difficulty.

Construction Site Scene Understanding: A 2D Image Segmentation and Classification

  • Kim, Hongjo;Park, Sungjae;Ha, Sooji;Kim, Hyoungkwan
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.333-335
    • /
    • 2015
  • A computer vision-based scene recognition algorithm is proposed for monitoring construction sites. The system analyzes images acquired from a surveillance camera to separate regions and classify them as building, ground, or hole. A mean shift image segmentation algorithm is tested for separating meaningful regions of construction site images. The system would benefit current monitoring practices in that the information extracted from images could capture environmental context.
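Mean shift segmentation of the kind tested here can be sketched as follows (a simplified, colour-only variant using scikit-learn's MeanShift clustering; the full algorithm typically clusters in a joint colour-spatial domain, and the image below is a synthetic placeholder, not site imagery):

```python
# Sketch: separating image regions by mean shift clustering of pixel
# colours. Bandwidth and image contents are illustrative assumptions.
import numpy as np
from sklearn.cluster import MeanShift

def segment_by_color(image, bandwidth=30.0):
    """Cluster pixels by colour; return a per-pixel region label map."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(pixels)
    return labels.reshape(h, w)

# Toy scene: dark "ground" on the left, bright "building" on the right.
img = np.zeros((16, 16, 3), dtype=np.uint8)
img[:, 8:] = 220
segments = segment_by_color(img)
```

A classifier (by colour, texture, or learned features) would then assign each segmented region a class such as building, ground, or hole.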

Visual Performances of the Corrected Navarro Accommodation-Dependent Finite Model Eye (안구의 굴절능 조절을 고려한 수정된 Navarro 정밀모형안의 시성능 분석)

  • Choi, Ka-Ul;Song, Seok-Ho;Kim, Sang-Gee
    • Korean Journal of Optics and Photonics
    • /
    • v.18 no.5
    • /
    • pp.337-344
    • /
    • 2007
  • In recent years, there has been rapid progress in different areas of vision science, such as refractive surgical procedures, contact lenses and spectacles, and near vision. This progress requires highly accurate modeling of the optical performance of the human eye in different accommodation states. A novel model eye was designed based on the Navarro accommodation-dependent finite model eye. Using the new model eye, ocular wavefront error, accommodative response, and visual acuity were calculated for six vergence stimuli: -0.17D, 1D, 2D, 3D, 4D and -5D. In addition, 3rd- and 4th-order aberrations, the modulation transfer function, and the visual acuity of the accommodation-dependent model eye were analyzed. These results are well matched to anatomical, biometric, and optical realities. Our corrected accommodation-dependent model eye may provide a more accurate way to evaluate optical transfer functions and the optical performance of the human eye.

Pose and Expression Invariant Alignment based Multi-View 3D Face Recognition

  • Ratyal, Naeem;Taj, Imtiaz;Bajwa, Usama;Sajid, Muhammad
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.4903-4929
    • /
    • 2018
  • In this study, a fully automatic pose and expression invariant 3D face alignment algorithm is proposed to handle frontal and profile face images, based on a two-pass coarse-to-fine alignment strategy. The first pass of the algorithm coarsely aligns the face images to an intrinsic coordinate system (ICS) through a single 3D rotation, and the second pass aligns them at a fine level using a minimum nose tip-scanner distance (MNSD) approach. For face recognition, multi-view faces are synthesized to exploit real 3D information and test the efficacy of the proposed system. Owing to its optimal separating hyperplane (OSH), a Support Vector Machine (SVM) is employed in the multi-view face verification (FV) task. In addition, a multi-stage unified classifier based face identification (FI) algorithm is employed, which combines results from seven base classifiers, two parallel face recognition algorithms, and an exponential rank combiner, all in a hierarchical manner. The performance figures of the proposed methodology are corroborated by extensive experiments performed on four benchmark datasets: GavabDB, Bosphorus, UMB-DB and FRGC v2.0. Results show marked improvement in alignment accuracy and recognition rates. Moreover, a computational complexity analysis has been carried out for the proposed algorithm, which reveals its superiority in terms of computational efficiency as well.
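The role the SVM plays in a verification task like this can be illustrated as a generic two-class problem over feature-difference vectors, genuine pairs versus impostor pairs (a sketch with synthetic random features, not the paper's descriptors or data):

```python
# Sketch: SVM face verification as binary classification of
# feature-difference vectors. Feature dimensions and distributions
# are synthetic stand-ins for real face descriptors.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Genuine pairs yield small feature differences; impostor pairs, large ones.
genuine = rng.normal(0.0, 0.1, size=(50, 16))
impostor = rng.normal(1.0, 0.1, size=(50, 16))
X = np.vstack([genuine, impostor])
y = np.array([1] * 50 + [0] * 50)      # 1 = same identity, 0 = different

clf = SVC(kernel="linear").fit(X, y)   # learns an optimal separating hyperplane
pred = clf.predict(np.vstack([rng.normal(0.0, 0.1, (5, 16)),
                              rng.normal(1.0, 0.1, (5, 16))]))
```

The margin-maximizing hyperplane is what the abstract's "OSH" refers to; real systems differ mainly in how the pairwise feature vectors are built.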

Accurate Pose Measurement of Label-attached Small Objects Using a 3D Vision Technique (3차원 비전 기술을 이용한 라벨부착 소형 물체의 정밀 자세 측정)

  • Kim, Eung-su;Kim, Kye-Kyung;Wijenayake, Udaya;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.10
    • /
    • pp.839-846
    • /
    • 2016
  • Bin picking is the task of picking a small object from a bin. For accurate bin picking, the 3D pose information (position and orientation) of a small object is required, because the object is mixed with other objects of the same type in the bin. Using this 3D pose information, a robotic gripper can pick an object using exact distance and orientation measurements. In this paper, we propose a 3D vision technique for accurate measurement of the 3D position and orientation of small objects on whose surface a paper label is stuck. We use a maximally stable extremal regions (MSER) algorithm to detect the label areas in the left bin image acquired from a stereo camera. In each label area, image features are detected and their correspondences with the right image are determined by a stereo vision technique. Then, the 3D position and orientation of the objects are measured accurately using a transformation from the camera coordinate system to a new label coordinate system. For stable measurement during a bin picking task, the pose information is filtered by averaging at fixed time intervals. Our experimental results indicate that the proposed technique yields pose accuracy between 0.4~0.5 mm in position measurements and 0.2~0.6° in angle measurements.
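The camera-to-label transformation step can be illustrated by fitting a plane to triangulated label points: the label's centroid gives its position and the plane normal gives its orientation (a generic least-squares sketch with synthetic coordinates, not the paper's exact procedure):

```python
# Sketch: recovering a planar label's pose from triangulated 3-D feature
# points. The corner coordinates below are synthetic, not measured data.
import numpy as np

def label_pose(points_3d):
    """Fit a plane to label points; return (centroid, unit normal)."""
    centroid = points_3d.mean(axis=0)
    # SVD of the centred points: the right singular vector with the
    # smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points_3d - centroid)
    normal = vt[-1]
    if normal[2] < 0:          # orient the normal toward the camera
        normal = -normal
    return centroid, normal

# Four corners of a 10 cm x 5 cm label lying flat at z = 0.5 m:
corners = np.array([[0.0, 0.0, 0.5],
                    [0.1, 0.0, 0.5],
                    [0.1, 0.05, 0.5],
                    [0.0, 0.05, 0.5]])
centre, n = label_pose(corners)
```

With the normal and one in-plane axis, a full label coordinate frame (rotation matrix) follows directly, which is the transformation the abstract describes.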

From Broken Visions to Expanded Abstractions (망가진 시선으로부터 확장된 추상까지)

  • Hattler, Max
    • Cartoon and Animation Studies
    • /
    • s.49
    • /
    • pp.697-712
    • /
    • 2017
  • In recent years, film and animation for cinematic release have embraced stereoscopic vision and the three-dimensional depth it creates for the viewer. The maturation of consumer-level virtual reality (VR) technology simultaneously spurred a wave of media productions set within 3D space, ranging from computer games to pornographic videos, to Academy Award-nominated animated VR short film Pearl. All of these works rely on stereoscopic fusion through stereopsis, that is, the perception of depth produced by the brain from left and right images with the amount of binocular parallax that corresponds to our eyes. They aim to emulate normal human vision. Within more experimental practices however, a fully rendered 3D space might not always be desirable. In my own abstract animation work, I tend to favour 2D flatness and the relative obfuscation of spatial relations it affords, as this underlines the visual abstraction I am pursuing. Not being able to immediately understand what is in front and what is behind can strengthen the desired effects. In 2015, Jeffrey Shaw challenged me to create a stereoscopic work for Animamix Biennale 2015-16, which he co-curated. This prompted me to question how stereoscopy, rather than hyper-defining space within three dimensions, might itself be used to achieve a confusion of spatial perception. And in turn, how abstract and experimental moving image practices can benefit from stereoscopy to open up new visual and narrative opportunities, if used in ways that break with, or go beyond stereoscopic fusion. Noteworthy works which exemplify a range of non-traditional, expanded approaches to binocular vision will be discussed below, followed by a brief introduction of the stereoscopic animation loop III=III which I created for Animamix Biennale. The techniques employed in these works might serve as a toolkit for artists interested in exploring a more experimental, expanded engagement with stereoscopy.

Tele-operating System of Field Robot for Cultivation Management - Vision based Tele-operating System of Robotic Smart Farming for Fruit Harvesting and Cultivation Management

  • Ryuh, Youngsun;Noh, Kwang Mo;Park, Joon Gul
    • Journal of Biosystems Engineering
    • /
    • v.39 no.2
    • /
    • pp.134-141
    • /
    • 2014
  • Purposes: This study aimed to validate the Robotic Smart Work System, which can provide better working conditions and high productivity in unstructured environments like bio-industry, based on a tele-operation system for fruit harvesting with a low-cost 3-D positioning system at the laboratory level. Methods: For the Robotic Smart Work System for fruit harvesting and cultivation management in agriculture, a vision-based tele-operating system and 3-D position information are key elements. This study proposed Robotic Smart Farming, an agricultural version of the Robotic Smart Work System, and validated a 3-D position information system with a low-cost omni camera and a laser marker system in the lab environment in order to obtain a vision-based tele-operating system and 3-D position information. Results: Tasks such as harvesting a fixed target and cultivation management were accomplished even when there was a short time delay (30 ms ~ 100 ms). Although automatic conveyor work requiring accurate timing and positioning yields high productivity, tele-operation guided by the user's intuition will be more efficient in unstructured environments that require target selection and judgment. Conclusions: This system increased work efficiency and stability by incorporating ancillary intelligence as well as the user's experience and know-how. In addition, senior and female workers will be able to operate the system easily because it reduces labor and minimizes user fatigue.

Relationship between Surface Sag Error and Optical Power of Progressive Addition Lens

  • Liu, Zhiying;Li, Dan
    • Current Optics and Photonics
    • /
    • v.1 no.5
    • /
    • pp.538-543
    • /
    • 2017
  • Progressive addition lenses (PAL) have very wide application in the modern glasses market. The unique progressive surface gives a lens progressive refractive power, which can meet the human eye's different needs for distance vision and near vision. According to the national glasses fabrication standard, the difference between the actual optical power after fabrication and the nominal design value should be less than 0.1D over the lens's effective area. The optical power distribution of a PAL is determined directly by its surface. Consequently, a surface processing accuracy requirement is proposed. Beginning from the surface expressions of progressive addition lenses, the relationship equations between the surface sag and the optical power distribution are derived. They are demonstrated through tolerance analysis and testing of an example progressive addition lens with an addition of 2.09D (5.46D-7.55D). The example addition surface is fabricated to the given accuracy on a single-point diamond ultra-precision machine. The optical power of the PAL example is tested with a focimeter after fabrication. The difference in optical power addition between the test result and the design nominal value is 0.09D, which is less than 0.1D. The derived relationship between surface error and optical power is verified by the PAL example simulation and test result. It can provide a theoretical tolerance analysis basis for the PAL surface fabrication process.
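The sag-to-power relationship this abstract builds on can be sketched in paraxial form (a standard textbook approximation, not the paper's exact derivation): for a rotationally symmetric surface of vertex radius $R$ and refractive index $n$,

```latex
% Paraxial sketch: sag of a surface of vertex radius R, its curvature,
% and the resulting surface power (refraction from air).
\[
  s(r) \approx \frac{r^{2}}{2R},
  \qquad
  \frac{\partial^{2} s}{\partial r^{2}} \approx \frac{1}{R},
  \qquad
  P = \frac{n-1}{R} \approx (n-1)\,\frac{\partial^{2} s}{\partial r^{2}} .
\]
% A local sag error \Delta s therefore propagates to a power error
% \Delta P \approx (n-1)\, \partial^{2}(\Delta s)/\partial r^{2},
% which is why a sag tolerance can bound the 0.1 D power tolerance.
```

In words: optical power depends on the second derivative (curvature) of the sag, so tolerancing the sag error directly constrains the achievable power accuracy.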

DEVELOPMENT OF A MACHINE VISION SYSTEM FOR WEED CONTROL USING PRECISION CHEMICAL APPLICATION

  • Lee, Won-Suk;David C. Slaughter;D.Ken Giles
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.802-811
    • /
    • 1996
  • Farmers need alternatives for weed control due to the desire to reduce the chemicals used in farming. However, conventional mechanical cultivation cannot selectively remove weeds located in the seedline between crop plants, and there are no selective herbicides for some crop/weed situations. Since hand labor is costly, an automated weed control system could be feasible. A robotic weed control system can also reduce or eliminate the need for chemicals. Currently no such system exists for removing weeds located in the seedline between crop plants. The goal of this project is to build a real-time, machine vision weed control system that can detect crop and weed locations, remove weeds, and thin crop plants. In order to accomplish this objective, a real-time robotic system was developed to identify and locate outdoor plants using machine vision technology, pattern recognition techniques, knowledge-based decision theory, and robotics. The prototype weed control system is composed of a real-time computer vision system, a uniform illumination device, and a precision chemical application system. The prototype system is mounted on the UC Davis Robotic Cultivator, which finds the center of the seedline of crop plants. Field tests showed that the robotic spraying system correctly targeted simulated weeds (metal coins of 2.54 cm diameter) with an average error of 0.78 cm and a standard deviation of 0.62 cm.
