• Title/Summary/Keyword: Vision-based


Real-Time Pipe Fault Detection System Using Computer Vision

  • Kim Hyoung-Seok;Lee Byung-Ryong
    • International Journal of Precision Engineering and Manufacturing / v.7 no.1 / pp.30-34 / 2006
  • Recently, there has been increasing demand for computer-vision-based inspection and measurement systems as part of factory automation equipment. In general, it is almost impossible to inspect every part coming from a part-feeding system by manual inspection alone because of time limitations. Therefore, manual inspection is usually applied to selected samples rather than all incoming parts, and it neither guarantees consistent measuring accuracy nor reduces working time. Thus, to improve the speed and accuracy of inspection, a computer-aided measurement and analysis method is highly needed. In this paper, a computer-vision-based pipe inspection system is proposed, in which the front- and side-view profiles of three different kinds of pipes coming from a forming line are acquired by computer vision. Edge detection is performed using the Laplacian operator. To reduce the vision processing time, a modified Hough transform combined with a clustering method is used for straight-line detection. The center points and diameters of the inner and outer circles are then found to determine the eccentricity of the parts. An inspection system has also been built so that the data and images of faulty parts are stored as files and transferred to a server.
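The processing chain named in this abstract (Laplacian edge detection, Hough-based straight-line detection, inner/outer circle fitting for eccentricity) can be illustrated with a short OpenCV sketch. This is not the authors' implementation: the file name, thresholds, and Hough parameters below are assumptions, and the paper's clustering-accelerated Hough variant is approximated by OpenCV's standard probabilistic transform.

```python
# Illustrative sketch of the described pipeline; parameters are assumptions.
import cv2
import numpy as np

img = cv2.imread("pipe_front_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Edge detection with the Laplacian operator.
lap = cv2.Laplacian(img, cv2.CV_16S, ksize=3)
_, edges = cv2.threshold(cv2.convertScaleAbs(lap), 40, 255, cv2.THRESH_BINARY)

# Straight-line detection (the paper clusters Hough votes to cut processing
# time; the standard probabilistic Hough transform stands in here).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=50, maxLineGap=5)

# Inner and outer circle detection for the eccentricity check.
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
                           param1=100, param2=40, minRadius=10, maxRadius=200)
if circles is not None and len(circles[0]) >= 2:
    by_radius = sorted(circles[0], key=lambda c: c[2])
    (x_i, y_i, r_i), (x_o, y_o, r_o) = by_radius[0], by_radius[-1]
    # Eccentricity taken as the offset between inner and outer centers.
    ecc = np.hypot(x_o - x_i, y_o - y_i)
    print(f"inner r={r_i:.1f}, outer r={r_o:.1f}, eccentricity={ecc:.2f} px")
```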

Visualizations of Relational Capital for Shared Vision

  • Russell, Martha G.;Still, Kaisa;Huhtamaki, Jukka;Rubens, Neil
    • World Technopolis Review / v.5 no.1 / pp.47-60 / 2016
  • In today's digital, non-linear, global business environment, innovation initiatives are influenced by inter-organizational, political, economic, environmental, and technological systems, as well as by decisions made individually by key actors in these systems. Network-based structures emerge from social linkages and collaborations among various actors, creating innovation ecosystems: complex adaptive systems in which entities co-create value. A shared vision of value co-creation allows people operating individually to arrive together at the same future. Yet relationships are difficult to see, continually changing, and challenging to manage. The Innovation Ecosystem Transformation Framework construct includes three core components for making innovation relationships visible and articulating networks of relational capital for the wellbeing, sustainability, and business success of innovation ecosystems: data-driven visualizations, storytelling, and shared vision. Access to data facilitates building evidence-based visualizations from relational data. This has dramatically altered the way leaders can use data-driven analysis to develop insights and provide the ongoing feedback needed to orchestrate relational capital and build a shared vision for high-quality decisions about innovation. Enabled by a shared vision, relational capital can guide decisions that catalyze, support, and sustain an ecosystemic milieu conducive to innovation for business growth.

Smart Vision Sensor for Satellite Video Surveillance Sensor Network (위성 영상감시 센서망을 위한 스마트 비젼 센서)

  • Kim, Won-Ho;Im, Jae-Yoo
    • Journal of Satellite, Information and Communications / v.10 no.2 / pp.70-74 / 2015
  • In this paper, a satellite-communication-based video surveillance system consisting of ultra-small-aperture terminals with small smart vision sensors is proposed. Events such as forest fires, smoke, and intruder movement are detected automatically in the field, and false alarms are minimized by using intelligent, highly reliable video analysis algorithms. The smart vision sensor must meet requirements for high confidence, hardware endurance, seamless communication, and easy maintenance. To satisfy these requirements, a real-time digital signal processor, a camera module, and a satellite transceiver are integrated into a smart-vision-sensor-based ultra-small-aperture terminal, and high-performance video analysis and image coding algorithms are embedded. The video analysis functions and performance were verified, and their practicality confirmed, through computer simulation and vision sensor prototype tests.
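The abstract does not detail the embedded video-analysis algorithms, so the following is only a minimal sketch of one common building block for intruder-movement detection: frame differencing with an area gate to suppress noise. All thresholds are assumptions.

```python
# Minimal frame-differencing motion detector (illustrative; not the paper's
# algorithm). Thresholds are assumptions.
import cv2

cap = cv2.VideoCapture(0)  # hypothetical camera source
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                       # inter-frame change
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # Gate on the changed area so sensor noise does not raise an event,
    # mirroring the abstract's emphasis on minimizing false alarms.
    if cv2.countNonZero(mask) > 0.01 * mask.size:
        print("motion event: candidate intruder movement")
    prev = gray
```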

Evaluation of Robot Vision Control Scheme Based on EKF Method for Slender Bar Placement in the Appearance of Obstacles (장애물 출현 시 얇은 막대 배치작업에 대한 EKF 방법을 이용한 로봇 비젼제어기법 평가)

  • Hong, Sung-Mun;Jang, Wan-Shik;Kim, Jae-Meung
    • Journal of the Korean Society for Precision Engineering / v.32 no.5 / pp.471-481 / 2015
  • This paper presents robot vision control schemes using the Extended Kalman Filter (EKF) method for slender-bar placement when obstacles appear during robot movement. The vision system model used in this study involves six camera parameters ($C_1{\sim}C_6$). To develop the robot vision control scheme, the six parameters are first estimated. Then, based on the estimated parameters, the robot's joint angles are estimated for the slender-bar placement. In particular, the robot trajectory affected by obstacles is divided into three obstacle regions: the beginning region, the middle region, and the near-target region. Finally, the effect of the number of obstacles is investigated in each obstacle region by performing slender-bar placement experiments with the proposed robot vision control schemes.
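Since the abstract names the EKF as the estimator for the six camera parameters, a generic predict/update step is sketched below. The measurement model h and its Jacobian H_jac are placeholders, because the paper's vision-system model is not reproduced in the abstract.

```python
# Generic EKF step for a static parameter vector x (the six camera
# parameters C1..C6); h and H_jac are placeholder callables.
import numpy as np

def ekf_step(x, P, z, h, H_jac, Q, R):
    """One EKF iteration with identity dynamics (parameters assumed constant)."""
    x_pred = x                      # predict: constant-parameter model
    P_pred = P + Q                  # process noise inflates uncertainty
    H = H_jac(x_pred)               # Jacobian of the measurement model at x_pred
    y = z - h(x_pred)               # innovation from the image measurement z
    S = H @ P_pred @ H.T + R        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```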

Development of a Vision-based Blank Alignment Unit for Press Automation Process (프레스 자동화 공정을 위한 비전 기반 블랭크 정렬 장치 개발)

  • Oh, Jong-Kyu;Kim, Daesik;Kim, Soo-Jong
    • Journal of Institute of Control, Robotics and Systems / v.21 no.1 / pp.65-69 / 2015
  • A vision-based blank alignment unit for a press automation line is introduced in this paper. A press is a machine tool that changes the shape of a blank by applying pressure and is widely used in industries requiring mass production. In traditional press automation lines, a mechanical centering unit, which consists of guides and ball bearings, is employed to align a blank before a robot inserts it into the press. However, it can only align blanks of limited sizes and shapes. Moreover, it cannot be applied to processes where two or more blanks are inserted simultaneously. To overcome these problems, we developed a press centering unit based on vision sensors for press automation lines. The specification of the vision system is determined by considering information about the blank and the required accuracy. Vision application software with pattern recognition, camera calibration, and monitoring functions is designed to reliably detect multiple blanks. Through real experiments with an industrial robot, we validated that the proposed system can align blanks of various sizes and shapes and successfully detect two or more simultaneously inserted blanks.
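As a rough illustration of the blank-detection step, the sketch below locates each blank's center and orientation from contours, one simple way to realize the pattern-recognition function described. The actual software's algorithms, image source, and thresholds are not given in the abstract, so everything here is an assumption.

```python
# Illustrative multi-blank detection via contour analysis (not the authors'
# software); file name, area gate, and thresholding are assumptions.
import cv2

img = cv2.imread("blank_stage.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) < 1000:   # ignore small debris
        continue
    # An oriented bounding box yields each blank's center and rotation,
    # the pose a robot needs before inserting the blank into the press.
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)
    print(f"blank at ({cx:.1f}, {cy:.1f}), angle {angle:.1f} deg")
```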

Calibration for Color Measurement of Lean Tissue and Fat of the Beef

  • Lee, S.H.;Hwang, H.
    • Agricultural and Biosystems Engineering / v.4 no.1 / pp.16-21 / 2003
  • In the agricultural field, machine vision systems have been widely used to automate inspection processes, especially in quality grading. Although machine vision is very effective at quantifying geometrical quality factors, it is deficient at quantifying color information. This study was conducted to evaluate the color of beef using a machine vision system. Although measuring the color of beef with machine vision has the advantage of covering the whole lean tissue area at once, compared to a colorimeter, it suffers from sensitivity to system components such as the type of camera, lighting conditions, and so on. The effect of the camera's color balancing control was investigated, and a color calibration process based on a multilayer back-propagation (BP) neural network was developed. The color calibration network model was trained using reference color patches and showed high correlation with the L*a*b* coordinates of a colorimeter. The proposed calibration process adapted successfully to various measurement environments, such as different types of cameras and light sources. Results comparing the proposed calibration process with multiple-linear-regression (MLR) based calibration are also presented. The color calibration network was successfully applied to measuring the color of beef. However, it is suggested that the reflectance properties of the reference materials used for calibration and of the test materials should be considered to achieve more accurate color measurement.
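The calibration idea, mapping camera RGB readings of reference patches to colorimeter L*a*b* values with a small multilayer network, can be sketched as follows. The patch data below is a random placeholder, and scikit-learn's MLPRegressor stands in for the paper's BP network.

```python
# Sketch of RGB -> L*a*b* calibration with a small multilayer network.
# Patch values are random placeholders; real use needs measured patches.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
camera_rgb = rng.random((24, 3))        # placeholder: camera RGB of 24 patches
colorimeter_lab = rng.random((24, 3))   # placeholder: colorimeter L*a*b* values

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(camera_rgb, colorimeter_lab)    # learn the camera-to-colorimeter map

# Calibrate a new lean-tissue measurement (placeholder pixel value).
print(net.predict([[0.55, 0.30, 0.25]]))
```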


Implementation of Virtual Instrumentation based Realtime Vision Guided Autopilot System and Onboard Flight Test using Rotary UAV (가상계측기반 실시간 영상유도 자동비행 시스템 구현 및 무인 로터기를 이용한 비행시험)

  • Lee, Byoung-Jin;Yun, Suk-Chang;Lee, Young-Jae;Sung, Sang-Kyung
    • Journal of Institute of Control, Robotics and Systems / v.18 no.9 / pp.878-886 / 2012
  • This paper investigates the implementation and flight testing of a real-time vision-guided autopilot system based on a virtual instrumentation platform. A graphical design process on the virtual instrumentation platform is fully used for the image processing, inter-system communication, vehicle dynamics control, and vision-coupled guidance algorithms. A significant objective of the algorithm is to achieve an environment-robust autopilot despite wind and irregular image-acquisition conditions. For robust vision-guided path tracking and hovering performance, the flight-path guidance logic is combined, on a multi-conditional basis, with a position-estimation algorithm coupled with the vehicle attitude dynamics. An onboard flight test of the developed real-time vision-guided autopilot system was performed using a rotary UAV with full attitude-control capability. The outdoor flight test demonstrated that the designed vision-guided autopilot system succeeded in hovering the UAV over a ground target to within several meters in a generally windy environment.
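The abstract's multi-conditional guidance logic is not spelled out, so the sketch below shows only the simplest vision-coupled piece one might assume: converting the target's pixel offset from the image center into a saturated horizontal velocity command for hovering. The gain and limits are illustrative.

```python
# Illustrative proportional guidance from image offset to a velocity
# command (not the paper's multi-conditional logic); gains are assumptions.
def hover_velocity_command(target_px, image_size, k_p=0.002, v_max=1.0):
    """Map the target's pixel offset from image center to body-frame m/s."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex, ey = target_px[0] - cx, target_px[1] - cy   # pixel error from center
    # Saturate so commands stay within the vehicle's safe envelope.
    vx = max(-v_max, min(v_max, k_p * ex))
    vy = max(-v_max, min(v_max, k_p * ey))
    return vx, vy

print(hover_velocity_command((400, 260), (640, 480)))  # small corrective command
```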

Analysis of Vision based Technology for Smart Railway Station System (스마트 철도역사시스템 구축을 위한 영상기반 기술 분석)

  • Lee, Sang-Hak
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.5 / pp.1065-1070 / 2018
  • Recently, there has been much research on vision-based technology using deep learning. Many studies on the intelligent operation and maintenance of railway station systems have used technologies with vision-analysis functions. This paper analyzes studies of intelligent station systems that use vision analysis for passenger and facility monitoring, platform monitoring, fire monitoring, and effective operation and design. It also proposes research that applies more powerful deep-learning-based vision technology to smart railway station systems.

Vision-sensor-based Drivable Area Detection Technique for Environments with Changes in Road Elevation and Vegetation (도로의 높낮이 변화와 초목이 존재하는 환경에서의 비전 센서 기반)

  • Lee, Sangjae;Hyun, Jongkil;Kwon, Yeon Soo;Shim, Jae Hoon;Moon, Byungin
    • Journal of Sensor Science and Technology / v.28 no.2 / pp.94-100 / 2019
  • Drivable area detection is a major task in advanced driver assistance systems. Several studies have proposed vision-sensor-based approaches to drivable area detection. However, conventional methods that use vision sensors are not suitable for environments with changes in road elevation. In addition, when the boundary between the road and vegetation is unclear, vegetation areas may be misjudged as drivable. Therefore, this study proposes an accurate method of detecting drivable areas in environments where road elevation changes and vegetation exists. Experimental results show that, compared to the conventional method, the proposed method improves the average accuracy and recall of drivable area detection on the KITTI vision benchmark suite by 3.42%p and 8.37%p, respectively. When the proposed vegetation area removal method is also applied, the average accuracy and recall improve further, by 6.43%p and 9.68%p, respectively.
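The reported gains are in percentage points of accuracy and recall, which can be computed pixel-wise as in the sketch below against KITTI-style binary ground-truth masks; the arrays here are tiny placeholders.

```python
# Pixel-wise accuracy and recall of a predicted drivable-area mask against
# ground truth; placeholder arrays stand in for KITTI-style masks.
import numpy as np

def accuracy_recall(pred, gt):
    """pred, gt: boolean arrays where True marks drivable pixels."""
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    accuracy = (tp + tn) / gt.size
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, recall

pred = np.zeros((4, 4), bool); pred[2:, :] = True   # placeholder prediction
gt = np.zeros((4, 4), bool); gt[1:, :] = True       # placeholder ground truth
print(accuracy_recall(pred, gt))                    # -> (0.75, 0.666...)
```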

A Study on the Analysis of the Vision Achievement and Social Status of the ABEEK (한국공학교육인증원의 2020 비전 달성도 및 사회적 위상 분석)

  • Han, Jiyoung
    • Journal of Engineering Education Research / v.27 no.2 / pp.3-12 / 2024
  • The purpose of this study was to evaluate how well the 2020 vision presented by the Accreditation Board for Engineering Education of Korea (ABEEK) had been achieved, and to objectively examine its social status. Diagnosing how well ABEEK, one of the major engineering education communities, was achieving its own vision was necessary for the development of engineering education in Korea, as it identifies room for improvement. To achieve the objectives of the study, research methods such as literature review, survey research, and an expert advisory committee were used. To evaluate the level of achievement of ABEEK's Vision 2020, the analysis was based on the responses of 61 people with experience as members of the steering committee. In addition, the visions and missions of the 23 countries that are currently signatory members of the Washington Accord were surveyed, and the social responsibility and financial independence of the 20 countries that became signatory members before 2020 were compared. As a result of the analysis, securing international equivalence in engineering education received the most positive evaluation, while social compensation efforts for graduates of accredited programs received the least positive evaluation. ABEEK was evaluated as having a medium level of social responsibility and a low level of financial independence. Based on these results, we propose ways for ABEEK to contribute to the improvement of Korean engineering education.