• Title/Summary/Keyword: PROCESS VISION


Position Estimation of the Welding Panels for Sub-assembly line in Shipbuilding by Vision System (시각 장치를 사용한 조선 소조립 라인에서의 용접부재 위치 인식)

  • 노영준;고국원;조형석;윤재웅;전자롬
    • Proceedings of the Korean Society of Precision Engineering Conference / 1997.04a / pp.719-723 / 1997
  • Welding automation in the ship manufacturing process, especially in the sub-assembly line, is considered a difficult job because the welding parts are too large, varied, and unstructured for a welding robot to weld fully automatically. The welding process at the sub-assembly line in ship manufacturing is to join various stiffeners onto the base panel. In order to realize automatic robot welding in the sub-assembly line, the robot has to be equipped with a sensing system to recognize the position of the parts. In this research, we developed a vision system to detect the position of the base panel for the sub-assembly line in the shipbuilding process. The vision system is composed of one CCD camera attached to the base of the robot and two 500 W halogen lamps for active illumination. In the image processing algorithm, the base panel is represented by two sets of lines located at its two corners through the Hough transform. However, the captured image contains various noise lines caused by highlights, scratches, stiffeners, rollers in the conveyor, and so on; this noise can be eliminated by region segmentation and thresholding in the Hough transform domain. The matching process to recognize the position of the weld panel is executed by finding patterns in the Hough transform domain. A set of experiments performed in the sub-assembly line shows the effectiveness of the proposed algorithm.

  • PDF
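The noise-rejection step the abstract describes — voting edge points into the (rho, theta) Hough domain and keeping only cells whose vote count exceeds a threshold — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the image size, threshold value, and toy edge set are assumptions.

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180, threshold=0):
    """Accumulate (y, x) edge points into the (rho, theta) Hough domain,
    then keep only cells whose votes exceed `threshold` -- this is the
    thresholding that suppresses short noise lines (highlights,
    scratches, conveyor rollers) while long panel edges survive."""
    h, w = img_shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in edge_points:
        # rho = x*cos(theta) + y*sin(theta), one vote per theta bin
        rhos = (x * np.cos(thetas) + y * np.sin(thetas)).round().astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    peaks = np.argwhere(acc > threshold)
    return [(int(r - diag), thetas[t]) for r, t in peaks]

# Toy example: a horizontal edge along y == 5 in a 10x10 image
pts = [(5, x) for x in range(10)]
lines = hough_lines(pts, (10, 10), threshold=8)
```

With 10 collinear points and a threshold of 8, only accumulator cells representing the full edge (e.g. rho = 5 at theta = pi/2) survive; an isolated scratch contributing a few votes would fall below the threshold.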

Active Peg-in-hole of Chamferless Parts Using Multi-sensors (다중센서를 사용한 챔퍼가 없는 부품의 능동적인 삽입작업)

  • Jeon, Hun-Jong;Kim, Kab-Il;Kim, Dae-Won;Son, Yu-Seck
    • Proceedings of the KIEE Conference / 1993.07a / pp.410-413 / 1993
  • A chamferless peg-in-hole process for cylindrical parts using a force/torque sensor and a vision sensor is analyzed and simulated in this paper. The peg-in-hole process is classified into the normal mode (position error only) and the tilted mode (position and orientation error). The tilted mode is sub-classified into the small and big tilted modes according to the relative orientation error. Since the big tilted mode happens very rarely, most papers have dealt only with the normal or the small tilted mode. However, most errors in the peg-in-hole process occur in the big tilted mode. This problem is analyzed and simulated in this paper using the force/torque sensor and vision sensor. In the normal mode, fuzzy logic is introduced to combine the data of the force/torque sensor and the vision sensor. The whole processing algorithms and simulations are also presented.

  • PDF
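The fuzzy combination of force/torque and vision data that the abstract mentions for the normal mode might look like the following sketch. The membership functions, their breakpoints, and the sensor readings are illustrative assumptions, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuse_position(vision_err_mm, contact_force_n):
    """Blend the two sensors into normalized confidence weights:
    trust vision when its reported position error is small, trust the
    force/torque sensor when contact forces indicate the peg is
    touching the hole edge. All breakpoints are made-up values."""
    w_vision = tri(vision_err_mm, -1.0, 0.0, 2.0)   # peaks at zero error
    w_ft = tri(contact_force_n, 0.0, 5.0, 10.0)     # peaks at 5 N contact
    total = w_vision + w_ft
    if total == 0.0:
        return 0.5, 0.5  # no evidence either way: split evenly
    return w_vision / total, w_ft / total

w_v, w_f = fuse_position(0.0, 5.0)  # both memberships peak -> equal weights
```

A real controller would feed these weights into a weighted average of the position estimates from each sensor before commanding the insertion motion.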

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31 (다중 TMS320C31 DSP를 사용한 3-D 비젼센서 Implementation)

  • Oksenhendler, V.;Bensrhair, Abdelaziz;Miche, Pierre;Lee, Sang-Goog
    • Journal of Sensor Science and Technology / v.7 no.2 / pp.124-130 / 1998
  • High-speed 3D vision systems are essential for autonomous robot or vehicle control applications. In our study, a stereo vision process has been developed. It consists of three steps: extraction of edges in the right and left images, matching of corresponding edges, and calculation of the 3D map. This process is implemented in a VME 150/40 Imaging Technology vision system. It is a modular system composed of a display card, an acquisition card, a 4-Mbyte image frame memory, and three computational cards. The programmable accelerator computational modules run at 40 MHz and are based on the TMS320C31 DSP with a $64{\times}32$ bit instruction cache and two $1024{\times}32$ bit internal RAMs. Each is equipped with 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and a serial port. Data transfers and communications between modules are provided by three 8-bit global video buses and three local configurable pipeline 8-bit video buses. The VME bus is dedicated to system management. Tasks are distributed among the DSPs as follows: two DSPs perform edge detection, one for the right image and the other for the left, while the last processor computes the matching process and the 3D calculation. With $512{\times}512$ pixel images, this sensor generates dense 3D maps at a rate of about 1 Hz depending on the scene complexity. The results could surely be improved by using specially suited multiprocessor cards.

  • PDF
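The three-step pipeline the abstract splits across the DSPs (per-image edge extraction, edge matching, 3D computation) can be sketched in scalar form for a single rectified scanline. The focal length, baseline, gradient threshold, and nearest-neighbour matching rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def edges_1d(row, thresh=10):
    """Edge positions along one scanline via a gradient-magnitude
    threshold -- a stand-in for the two edge-detection DSPs."""
    g = np.abs(np.diff(row.astype(int)))
    return np.where(g > thresh)[0]

def match_and_depth(left_row, right_row, f=500.0, baseline=0.1):
    """Match each left edge to the nearest right edge at smaller or
    equal x (valid for rectified cameras) and convert the disparity d
    to depth via Z = f * B / d -- the third DSP's job."""
    le, re = edges_1d(left_row), edges_1d(right_row)
    depths = []
    for xl in le:
        cands = re[re <= xl]
        if len(cands) == 0:
            continue
        d = xl - cands[np.argmin(xl - cands)]
        if d > 0:
            depths.append(f * baseline / d)
    return depths

left = np.zeros(64); left[30:] = 100.0    # step edge at column 30
right = np.zeros(64); right[20:] = 100.0  # same edge shifted 10 px
depths = match_and_depth(left, right)
```

A 10-pixel disparity with f = 500 (pixels) and a 0.1 m baseline yields a depth of 5 m for the matched edge.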

A Study on the Elliptical Gear Inspection System Using Machine Vision (머신비전을 이용한 타원형 기어 검사 시스템에 관한 연구)

  • Park, Jin Joo;Kim, Gi Hwan;Lee, Eung Seok
    • Transactions of the Korean Society of Mechanical Engineers A / v.38 no.1 / pp.59-63 / 2014
  • Elliptical gears are used in oval flowmeters, which measure the volume of water by means of the space formed by the elliptical shape. The purpose of this study is to judge the machining accuracy of elliptical gears and to develop an inspection system using machine vision. Demand for machine vision is increasing as factory automation spreads, and in-process inspection is a principal factor in it. However, gear inspection using machine vision is rarely applied because of the complex shape of gears. This study shows that elliptical gears can be inspected by inspection software using machine vision, and that the inspection program can judge the machining accuracy of the elliptical gear designed in this study.

Implementation of Virtual Instrumentation based Realtime Vision Guided Autopilot System and Onboard Flight Test using Rotary UAV (가상계측기반 실시간 영상유도 자동비행 시스템 구현 및 무인 로터기를 이용한 비행시험)

  • Lee, Byoung-Jin;Yun, Suk-Chang;Lee, Young-Jae;Sung, Sang-Kyung
    • Journal of Institute of Control, Robotics and Systems / v.18 no.9 / pp.878-886 / 2012
  • This paper investigates the implementation and flight test of a realtime vision-guided autopilot system based on a virtual instrumentation platform. A graphical design process via the virtual instrumentation platform is fully used for the image processing, communication between systems, vehicle dynamics control, and vision-coupled guidance algorithms. A significant objective of the algorithm is to achieve an environment-robust autopilot despite wind and irregular image acquisition conditions. For robust vision-guided path tracking and hovering performance, the flight path guidance logic is combined on a multi-conditional basis with the position estimation algorithm coupled with the vehicle attitude dynamics. An onboard flight test equipped with the developed realtime vision-guided autopilot system was carried out using a rotary UAV system with full attitude control capability. The outdoor flight test demonstrated that the designed vision-guided autopilot system succeeded in hovering the UAV above the ground target to within several meters under a generally windy environment.
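The multi-conditional guidance idea — steer from the vision target when the image is valid, fall back when acquisition is irregular — can be sketched as a tiny proportional hover controller. The gains, image resolution, and axis conventions below are made-up illustrations, not the paper's guidance law.

```python
def guidance_cmd(target_px, img_center=(320, 240), k_p=0.002,
                 last_cmd=(0.0, 0.0)):
    """Conditional hover guidance sketch: when the vision system
    delivers a target position in the image, command attitude
    proportional to the pixel error; when the image is lost (None),
    hold the previous command instead of chasing noise."""
    if target_px is None:
        return last_cmd  # irregular image acquisition: hold last command
    ex = target_px[0] - img_center[0]
    ey = target_px[1] - img_center[1]
    # pitch forward for targets above center, roll toward lateral offset
    return (-k_p * ey, k_p * ex)

cmd = guidance_cmd((340, 240))  # target 20 px right of image center
```

A full implementation would blend this with the position estimate from the attitude-coupled filter the abstract describes, rather than using raw pixel error alone.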

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems / v.19 no.6 / pp.730-744 / 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, particularly as captured by body-worn cameras, which could be helpful for video surveillance, automatic search, and video indexing. It could also be helpful in assisting elderly and frail persons, revolutionizing and improving their lives. Human activity recognition remains problematic because of the important variations in how actions are executed, especially when recognition is realized through an external device, similar to a robot, acting as a personal assistant. The inferred information is used both online to assist the person and offline to support the personal assistant. With the proposed method being robust against the various factors of variability in action execution, the major purpose of this paper is to provide an efficient and simple recognition method from egocentric camera data only, using a convolutional neural network and deep learning. In terms of accuracy, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera and several stationary cameras' data, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
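A minimal forward pass of the kind of convolutional network the paper applies to egocentric frames can be sketched in plain NumPy. The frame size, kernel count, five activity classes, and random weights are placeholders; a real recogniser would be trained on labelled video.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kern):
    """Valid 2-D cross-correlation (single channel) -- the core
    operation of a convolutional layer."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def forward(frame, kernels, w_fc):
    """Toy CNN: conv -> ReLU -> global average pool -> linear ->
    softmax over hypothetical activity classes. Weights are random
    placeholders, so the probabilities are meaningless but valid."""
    feats = np.array([np.maximum(conv2d(frame, k), 0).mean()
                      for k in kernels])
    logits = w_fc @ feats
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

frame = rng.random((16, 16))            # stand-in for an egocentric frame
kernels = rng.standard_normal((4, 3, 3))
w_fc = rng.standard_normal((5, 4))      # 5 hypothetical activity classes
probs = forward(frame, kernels, w_fc)
```

Training such a network replaces the random kernels and `w_fc` with weights learned by backpropagation over labelled activity clips.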

The Components of the Child-care Teachers' Professional Vision Through the Video-based Learning Community: Focusing on Selective Attention and Knowledge-based Reasoning (비디오 활용 학습공동체를 통해 나타난 보육교사의 전문적 시각의 구성 요소: 선택적 주의와 인지 기반 추론을 중심으로)

  • Kim, Soo Jung;Lee, Young Shin;Lee, Min Joo
    • Korean Journal of Child Education & Care / v.19 no.1 / pp.27-43 / 2019
  • Objective: The purpose of this study is to investigate how child-care teachers experience professional vision development through participating in video clubs with their peers while watching videos of their interactions with children in the classroom. Methods: We selected three child-care teachers at a day care center in the Seoul area and conducted a qualitative case study. The video clubs were designed to support the quality of teacher-child interaction by developing the teachers' professional vision, and they used self-reflection and cooperative reflection as important educational methods. Results: Through four video club sessions, the teachers experienced a change of attention while watching their interaction scenes and had opportunities for knowledge-based reasoning. In particular, through participation in the video club, the teachers could pay attention to the teacher's intention, the teacher's decision-making process, and the child's intention. In addition, the teachers experienced educational interpretation based on children's thinking and interest, and reasoning through reflective thinking about the results of their teaching behavior. This change of professional vision was made possible by mutual scaffolding through cooperative reflection among the participating teachers. Conclusion/Implications: Based on the results of this study, we discuss the importance of child-care teachers' professional vision development and the effectiveness of the video club in supporting it.

A Study on the Vision Sensor Using Scanning Beam for Welding Process Automation (용접자동화를 위한 주사빔을 이용한 시각센서에 관한 연구)

  • You, Won-Sang;Na, Suck-Joo
    • Transactions of the Korean Society of Mechanical Engineers A / v.20 no.3 / pp.891-900 / 1996
  • A vision sensor based on optical triangulation with a laser as an auxiliary light source can detect not only the seam position but also the shape of the seam. In this study, a vision sensor using a scanning laser beam was investigated. To design a vision sensor which considers the reflectivity of the sensed object and satisfies the desired resolution and measuring range, three relations were formulated: first, the equation of the focused laser beam, which has a Gaussian irradiance profile; second, the image forming sequence; and third, the relation between a displacement on the measuring surface and the corresponding displacement in the camera plane. From these, the focused beam diameter over the measuring range could be determined, and the influence of the relative location between the laser and the camera plane could be estimated. The measuring range and the resolution of the vision sensor, which was based on the Scheimpflug condition, could also be calculated. From the results mentioned above, a vision sensor was developed and an adequate calibration technique was proposed. An image processing algorithm which recognizes the center of the joint and its shape information was also investigated. Using the developed vision sensor and image processing algorithm, the shape information of vee, butt, and lap joints was extracted.
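The displacement relation the abstract formulates — between a point on the measuring surface and its image on the camera plane — reduces, in a paraxial approximation that ignores the full Scheimpflug geometry, to a simple triangulation formula. The magnification and triangulation angle below are illustrative values, not the paper's sensor parameters.

```python
import math

def height_from_shift(dx_image_mm, magnification, theta_deg):
    """Paraxial triangulation approximation (not the paper's full
    Scheimpflug derivation): a surface height change dz shifts the
    imaged laser spot laterally by dx = m * dz * sin(theta), where m
    is the optical magnification and theta the angle between the
    laser axis and the camera's viewing direction. Hence
    dz = dx / (m * sin(theta))."""
    return dx_image_mm / (magnification * math.sin(math.radians(theta_deg)))

# A 0.1 mm image-plane shift at magnification 0.5 and a 30 degree
# triangulation angle corresponds to about a 0.4 mm height change.
dz = height_from_shift(0.1, 0.5, 30.0)
```

The sensitivity `m * sin(theta)` shows the design trade-off the paper analyzes: a larger triangulation angle improves resolution but shrinks the usable measuring range.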

An Analysis on Conceptual Sequence and Representations of Eye Vision in Korean Science Textbooks and a Suggestion of Contents Construct Considering Conceptual Sequence in the Eye Vision (초 . 중등학교 과학 교과서에서의 시각(eye vision) 개념의 연계성과 표현 방식 분석 및 연계성을 고려한 시각 개념 구성의 한 가지 제안)

  • Kim, Young-Min
    • Journal of The Korean Association For Science Education / v.27 no.5 / pp.456-464 / 2007
  • The aims of this research are to analyze the representations and conceptual sequence of eye vision in Korean science textbooks and to suggest a contents construct for eye vision in which the conceptual sequence is considered. The research method was literature review; the literature used for analysis comprised the 7th Korean science curriculum, revised in 1997, and the science and physics textbooks developed based on it. The research results are as follows: 1) Although the science curriculum seems to have no problem with sequence in the eye vision concepts, the science and physics textbooks based on the curriculum reveal problems in that sequence; 2) Some Korean science textbooks explain retinal image formation according to Alhazen's idea, except for the inverted image; 3) Some Korean science textbooks explain the causes of near- and far-sightedness without consistency between the textbooks for 7th and 8th grade students; 4) A few Korean science textbooks give an inappropriate explanation of the principle of eyesight correction by eyeglasses; 5) According to the analysis results, the concepts related to eye vision should be presented in the following order: light refraction phenomena, the image formation process of a convex lens, the structure of the human eye and the retinal image formation process, and the correction of eyesight using lenses.

Stereo Vision Neural Networks with Competition and Cooperation for Phoneme Recognition

  • Kim, Sung-Ill;Chung, Hyun-Yeol
    • The Journal of the Acoustical Society of Korea / v.22 no.1E / pp.3-10 / 2003
  • This paper describes two kinds of neural networks for stereoscopic vision, which have been applied to the identification of human speech. In speech recognition based on the stereoscopic vision neural networks (SVNN), similarities are first obtained by comparing input vocal signals with standard models. They are then given to a dynamic process in which both competitive and cooperative processes are conducted among neighboring similarities. Through these dynamic processes, only one winner neuron is finally detected. In a comparative study, the average phoneme recognition accuracy of the two-layered SVNN was 7.7% higher than that of a Hidden Markov Model (HMM) recognizer with a single-mixture, three-state structure, and that of the three-layered SVNN was 6.6% higher. It was therefore observed that SVNN outperformed the existing HMM recognizer in phoneme recognition.
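The competitive-cooperative dynamics the abstract describes — neighboring similarity neurons excite each other while all units mutually inhibit until one winner survives — can be sketched as follows. The gain values, circular neighborhood, and iteration count are assumptions for illustration, not the paper's SVNN equations.

```python
import numpy as np

def winner_take_all(similarities, excite=0.2, inhibit=0.1, steps=50):
    """Iterate competitive-cooperative dynamics over a ring of
    similarity neurons: each unit is excited by its two neighbors
    (cooperation) and inhibited in proportion to the total activity
    of all other units (competition), until a clear winner emerges."""
    a = np.array(similarities, dtype=float)
    for _ in range(steps):
        neigh = np.roll(a, 1) + np.roll(a, -1)            # cooperation
        a = a + excite * neigh - inhibit * (a.sum() - a)  # competition
        a = np.maximum(a, 0.0)                            # no negative firing
        if a.max() > 0:
            a = a / a.max()                               # keep bounded
    return int(np.argmax(a))

sims = [0.2, 0.5, 0.9, 0.4, 0.1]  # similarities to standard phoneme models
winner = winner_take_all(sims)
```

Under these dynamics the initially strongest similarity is reinforced while weakly matching units are suppressed toward zero, so the returned index identifies the recognized phoneme model.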