• Title/Summary/Keyword: industrial computer vision

151 search results

Multiple Object Tracking with Color-Based Particle Filter for Intelligent Space (공간지능화를 위한 색상기반 파티클 필터를 이용한 다중물체추적)

  • Jin, Tae-Seok;Hashimoto, Hideki
    • The Journal of Korea Robotics Society / v.2 no.1 / pp.21-28 / 2007
  • The Intelligent Space (ISpace) provides challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space in which many intelligent devices, such as computers and sensors, are distributed. Because these devices cooperate to serve users in the environment, it is very important that the system knows location information in order to offer useful services. To achieve this goal, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. The article also presents the integration of color distributions into particle filtering. Particle filters provide a robust tracking framework under ambiguous conditions. We propose to track moving objects by generating hypotheses not in the image plane but on a top-view reconstruction of the scene. Comparative results on real video sequences show the advantage of our method for multi-object tracking. The method is also applied to the intelligent environment, and its performance is verified by experiments.

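For readers who want to see the color-plus-particle-filter idea concretely, below is a minimal Python sketch (NumPy/OpenCV) of one predict/weight/resample cycle of a color-based particle filter. The patch size, 16-bin hue histogram, motion noise, and weighting constant are illustrative assumptions, not values from the paper, and the top-view hypothesis generation described in the abstract is omitted.

```python
import numpy as np
import cv2

def hue_hist(frame_hsv, cx, cy, half=15, bins=16):
    """Hue histogram of a small patch around (cx, cy), L1-normalized."""
    h, w = frame_hsv.shape[:2]
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    patch = frame_hsv[y0:y1, x0:x1, 0]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def particle_filter_step(particles, frame_bgr, ref_hist, motion_std=8.0):
    """One predict/weight/resample cycle of a color-based particle filter."""
    frame_hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, w = frame_hsv.shape[:2]

    # Predict: random-walk motion model.
    particles = particles + np.random.normal(0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, w - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, h - 1)

    # Weight: Bhattacharyya similarity between candidate and reference histograms.
    weights = np.empty(len(particles))
    for i, (x, y) in enumerate(particles.astype(int)):
        cand = hue_hist(frame_hsv, x, y)
        bhattacharyya = np.sum(np.sqrt(cand * ref_hist))
        weights[i] = np.exp(-20.0 * (1.0 - bhattacharyya))
    weights /= weights.sum()

    # Estimate the target position and resample particles by weight.
    estimate = weights @ particles
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate
```

Tracking several pedestrians would run one such filter per target, initialized with a reference histogram taken from each object.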

DEVELOPMENT OF VIRTUAL PLAYGROUND SYSTEM BY MARKERLESS AUGMENTED REALITY AND PHYSICS ENGINE

  • Takahashi, Masafumi;Miyata, Kazunori
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.834-837 / 2009
  • Augmented Reality (AR) is a useful technology for various industrial systems. This paper proposes a new playground system that uses markerless AR technology. We developed a virtual playground system through which people can learn physics and kinematics from physical play. The virtual playground is a space in which real scenes and CG are mixed. The CG objects obey real-world physics, realized by a physics engine; it is therefore necessary to analyze information from the cameras so that the CG reflects the real world. Various game options are possible in the virtual playground by combining real-world images with physics simulation. We believe the system is effective for education: because the CG behaves according to the physics simulation, users can learn physics and kinematics from it, and the system can take its place in education through entertainment.

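As a rough illustration of how camera input can drive physics in such a playground, the following sketch (an assumed pipeline, not the authors' implementation) extracts the user's silhouette with simple background subtraction and lets a CG ball bounce off it using plain Euler integration. A real system would replace both pieces with markerless tracking and a full physics engine.

```python
import numpy as np
import cv2

def silhouette_mask(frame_bgr, bg_bgr, thresh=30):
    """Foreground mask via background subtraction (stand-in for markerless tracking)."""
    diff = cv2.absdiff(frame_bgr, bg_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return mask > 0  # True where the user's body is; it acts as an obstacle for CG objects

def step_ball(pos, vel, mask, dt=1 / 30, gravity=980.0, restitution=0.7):
    """One Euler step of a CG ball that bounces off the user's silhouette."""
    pos, vel = np.asarray(pos, float), np.asarray(vel, float)
    vel = vel + np.array([0.0, gravity]) * dt          # gravity in pixels per second squared
    nxt = pos + vel * dt
    h, w = mask.shape
    x = int(np.clip(nxt[0], 0, w - 1))
    y = int(np.clip(nxt[1], 0, h - 1))
    if mask[y, x]:                                     # ball would enter the silhouette: bounce
        vel = -restitution * vel
        nxt = pos
    return nxt, vel
```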

Real Time Eye and Gaze Tracking

  • Min Jin-Kyoung;Cho Hyeon-Seob
    • Proceedings of the KAIS Fall Conference / 2004.11a / pp.234-239 / 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system, based on active IR illumination, for real-time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often assume a static head to work well and require a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to individuals not used in training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments involving gaze-contingent interactive graphic display.

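A GRNN is essentially kernel-weighted regression over stored calibration samples, which makes the mapping from pupil parameters to screen coordinates easy to sketch. Below is a minimal NumPy version; the 6-dimensional feature vector, the bandwidth sigma, and the random calibration data are illustrative assumptions rather than details from the paper.

```python
import numpy as np

class GRNN:
    """Generalized Regression Neural Network: kernel-weighted average of training targets."""

    def __init__(self, sigma=0.3):
        self.sigma = sigma

    def fit(self, X, Y):
        # X: (n, d) pupil/head feature vectors; Y: (n, 2) screen coordinates.
        self.X, self.Y = np.asarray(X, float), np.asarray(Y, float)
        return self

    def predict(self, Xq):
        Xq = np.atleast_2d(Xq).astype(float)
        # Squared distances between query features and stored training samples.
        d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))        # pattern-layer activations
        w /= w.sum(axis=1, keepdims=True) + 1e-12        # normalization in the summation layer
        return w @ self.Y                                # weighted average of screen points

# Hypothetical usage: 6-D pupil parameters mapped to 2-D screen coordinates.
pupil_features = np.random.rand(200, 6)                  # placeholder calibration data
screen_points = np.random.rand(200, 2) * [1920, 1080]
gaze = GRNN(sigma=0.3).fit(pupil_features, screen_points)
print(gaze.predict(pupil_features[:3]))
```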

Object Recognition and Pose Estimation Based on Deep Learning for Visual Servoing (비주얼 서보잉을 위한 딥러닝 기반 물체 인식 및 자세 추정)

  • Cho, Jaemin;Kang, Sang Seung;Kim, Kye Kyung
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.1-7 / 2019
  • Recently, smart factories have attracted much attention as a result of the 4th Industrial Revolution. Existing factory automation technologies are generally designed for simple repetition without vision sensors, and even small object assemblies still depend on manual work. To replace the existing systems with new technologies such as bin picking and visual servoing, precision and real-time operation are essential. In our work we therefore focus on these core elements, using a deep learning algorithm to detect and classify the target object in real time and analyzing the object's features. Although there are many good deep learning algorithms, such as Mask R-CNN and Fast R-CNN, we chose the YOLO CNN, which works in real time and combines the two tasks mentioned above. Then, from the line and interior features extracted from the target object, we obtain the final outline and estimate the object's pose.
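
The pose step described above (orientation from line features inside a detected region) can be approximated as in the hedged sketch below, which is not the paper's method. The detection itself is left to any YOLO-style detector; the in-plane angle is taken as the length-weighted dominant direction of Hough line segments in the cropped box.

```python
import numpy as np
import cv2

def estimate_inplane_angle(crop_bgr):
    """Estimate an object's in-plane rotation (degrees) from its dominant line features."""
    gray = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=20, maxLineGap=5)
    if lines is None:
        return None
    angles, lengths = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        angles.append(np.arctan2(y2 - y1, x2 - x1))
        lengths.append(np.hypot(x2 - x1, y2 - y1))
    angles, lengths = np.array(angles), np.array(lengths)
    # Length-weighted circular mean over 180-degree-periodic line directions.
    mean = 0.5 * np.arctan2((lengths * np.sin(2 * angles)).sum(),
                            (lengths * np.cos(2 * angles)).sum())
    return float(np.degrees(mean))

# Hypothetical usage: `boxes` would come from a YOLO-style detector.
# for (x, y, w, h) in boxes:
#     angle = estimate_inplane_angle(frame[y:y + h, x:x + w])
```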

Application of artificial intelligence-based technologies to the construction sites (이미지 기반 인공지능을 활용한 현장 적용성 연구)

  • Na, Seunguk;Heo, Seokjae;Roh, Youngsook
    • Proceedings of the Korean Institute of Building Construction Conference / 2022.04a / pp.225-226 / 2022
  • The construction industry, which is labour-intensive and conservative by nature, has been reluctant to adopt new technologies. Nevertheless, it is introducing the 4th Industrial Revolution technologies represented by artificial intelligence, the Internet of Things, robotics, and unmanned transportation in order to transform itself into a smart industry. Image-based artificial intelligence is a field of computer vision in which machines mimic human visual recognition of objects in pictures or videos. The purpose of this article is to explore image-based artificial intelligence technologies that could be applied to construction sites. In this study, we show two examples: a construction waste classification model and a cast-in-situ anchor bolt defect detection model. Image-based intelligence technologies could be used for the various measurement, classification, and detection tasks that occur in construction projects.

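A common way to build such an image classifier for site photos is transfer learning; the sketch below (PyTorch/torchvision) trains only the classification head of a pretrained ResNet-18. The backbone choice, the four waste categories, and the dummy batch are assumptions for illustration, not details from the study.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical waste categories, e.g. wood, concrete, metal, mixed

def build_waste_classifier(num_classes=NUM_CLASSES):
    """ResNet-18 backbone with a fresh classification head for construction-site images."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():                     # freeze the pretrained backbone
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # trainable head
    return model

model = build_waste_classifier()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 site photos.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

The anchor bolt defect detection example from the abstract would follow the same pattern with a detection head instead of a classifier.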

Image Processing and Deep Learning-based Defect Detection Theory for Sapphire Epi-Wafer in Green LED Manufacturing

  • Suk Ju Ko;Ji Woo Kim;Ji Su Woo;Sang Jeen Hong;Garam Kim
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.81-86 / 2023
  • Recently, demand for light-emitting diodes (LEDs) has increased due to the growing emphasis on environmental protection. However, the use of GaN-based sapphire in LED manufacturing leads to defects, such as dislocations caused by lattice mismatch, which ultimately reduce the luminous efficiency of LEDs. Moreover, most inspections of LED semiconductors evaluate luminous efficiency only after packaging. To address these challenges, this paper aims to detect defects at the wafer stage, which could improve the manufacturing process and reduce costs. To achieve this, image processing and deep learning-based defect detection techniques for the sapphire epi-wafers used in green LED manufacturing were developed and compared. Performance evaluation of each algorithm showed that the deep learning approach outperformed the image processing approach in detection accuracy and efficiency.

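The classical image-processing side of such a comparison is typically thresholding plus blob analysis. Below is a minimal OpenCV sketch of that kind of baseline; the blur size, adaptive-threshold parameters, and minimum defect area are illustrative guesses rather than the paper's settings.

```python
import cv2

def detect_defects_classical(wafer_gray, blur=5, block=51, c=5, min_area=20):
    """Classical baseline: adaptive threshold plus contour filtering on a grayscale wafer image."""
    smooth = cv2.medianBlur(wafer_gray, blur)                  # suppress sensor noise
    binary = cv2.adaptiveThreshold(smooth, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, block, c)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to be plausible defects.
    return [cv2.boundingRect(cnt) for cnt in contours
            if cv2.contourArea(cnt) >= min_area]
```

The deep learning counterpart in the comparison would replace this hand-tuned pipeline with a learned detector or classifier over wafer image patches.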

Classification of Mouse Lung Metastatic Tumor with Deep Learning

  • Lee, Ha Neul;Seo, Hong-Deok;Kim, Eui-Myoung;Han, Beom Seok;Kang, Jin Seok
    • Biomolecules & Therapeutics / v.30 no.2 / pp.179-183 / 2022
  • Traditionally, pathologists microscopically examine tissue sections to detect pathological lesions; the many slides that must be evaluated impose severe work burdens, and diagnostic accuracy varies with pathologist training and experience, so better diagnostic tools are required. Given the rapid development of computer vision, automated deep learning is now used to classify microscopic images, including medical images. Here, we used an Inception-v3 deep learning model to detect mouse lung metastatic tumors via whole slide imaging (WSI); the images were cropped to 151 by 151 pixels and divided into training (53.8%) and test (46.2%) sets (21,017 and 18,016 images, respectively). When images from lung tissue containing tumor tissue were evaluated, the model accuracy was 98.76%; when images from normal lung tissue were evaluated, the accuracy ("no tumor") was 99.87%. Thus, the deep learning model distinguished metastatic lesions from normal lung tissue. Our approach will allow the rapid and accurate analysis of various tissues.
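
A transfer-learning setup for 151x151 tiles with an Inception-v3 backbone can be sketched as follows (TensorFlow/Keras). The frozen backbone, the sigmoid binary head, and the optimizer are assumptions for illustration; the abstract does not specify the training configuration.

```python
import tensorflow as tf

def build_tile_classifier(input_size=151):
    """Inception-v3 backbone with a binary head for tumor / no-tumor WSI tiles."""
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet",
        input_shape=(input_size, input_size, 3))
    base.trainable = False                               # start by training only the head
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_tile_classifier()
# model.fit(train_tiles, train_labels, validation_data=(test_tiles, test_labels))
```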

Color Pattern Recognition and Tracking for Multi-Object Tracking in Artificial Intelligence Space (인공지능 공간상의 다중객체 구분을 위한 컬러 패턴 인식과 추적)

  • Tae-Seok Jin
    • Journal of the Korean Society of Industry Convergence / v.27 no.2_2 / pp.319-324 / 2024
  • In this paper, the Artificial Intelligence Space (AI-Space) for human-robot interfaces is presented, which can enable human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. We present a method for representing, tracking, and following objects (humans, robots, chairs) by fusing distributed multiple vision systems in AI-Space. The article presents the integration of color distributions into particle filtering. Particle filters provide a robust tracking framework under ambiguous conditions. We propose to track the moving objects (humans, robots, chairs) by generating hypotheses not in the image plane but on a top-view reconstruction of the scene.
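
Placing tracking hypotheses on a top-view reconstruction, as both this entry and the first one describe, amounts to mapping image points onto the floor plane with a homography. The sketch below uses four hypothetical image-to-floor correspondences; the coordinates are made up for illustration and would come from camera calibration in practice.

```python
import numpy as np
import cv2

# Hypothetical calibration: four image points with known floor positions (in meters).
image_pts = np.float32([[102, 540], [1180, 560], [980, 210], [260, 205]])
floor_pts = np.float32([[0.0, 0.0], [6.0, 0.0], [6.0, 8.0], [0.0, 8.0]])

H, _ = cv2.findHomography(image_pts, floor_pts)   # image plane -> top view

def to_top_view(foot_points_px):
    """Map detected foot points (pixels) onto the top-view floor map (meters)."""
    pts = np.float32(foot_points_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

print(to_top_view([[640, 400]]))   # e.g. where a tracked person stands on the floor
```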

Video Motion Analysis for Sudden Death Detection During Sleeping (수면 중 돌연사 감지를 위한 비디오 모션 분석 방법)

  • Lee, Seung Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.10 / pp.603-609 / 2018
  • Sudden death during sleep occurs across age groups. To prevent an unexpected sudden death, sleep monitoring is required. This paper presents a video analysis method to detect sudden death without using any attachable sensors. In the proposed method, a motion magnification technique detects even very subtle motion during sleep; if no motion is detected after magnification, the method decides on an abnormal status (possibly sudden death). Experimental results on two kinds of sleep video show that motion magnification-based video analysis can discriminate sleep (with very subtle motion) from sudden death.
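
True Eulerian motion magnification is more involved, but the core signal it exposes, tiny periodic intensity changes over time, can be approximated with a temporal band-pass filter. The sketch below (NumPy) flags a window of grayscale frames as abnormal when band-pass energy in an assumed breathing-rate band falls below an arbitrary threshold; the band limits and threshold are illustrative, not the paper's parameters.

```python
import numpy as np

def motion_energy(frames_gray, low=0.2, high=1.0, fps=30.0):
    """Band-pass temporal energy of a frame window (simplified stand-in for motion magnification)."""
    stack = np.stack([np.asarray(f, dtype=np.float32) for f in frames_gray])  # (T, H, W)
    spectrum = np.fft.rfft(stack, axis=0)
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)
    band = (freqs >= low) & (freqs <= high)        # keep breathing-rate frequencies only
    spectrum[~band] = 0
    filtered = np.fft.irfft(spectrum, n=stack.shape[0], axis=0)
    return float(np.mean(np.abs(filtered)))        # higher energy -> subtle motion is present

def is_abnormal(frames_gray, threshold=0.05):
    """Flag a window as abnormal (no detectable motion) when band-pass energy is tiny."""
    return motion_energy(frames_gray) < threshold
```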

Target Object Detection Based on Robust Feature Extraction (강인한 특징 추출에 기반한 대상물체 검출)

  • Jang, Seok-Woo;Huh, Moon-Haeng
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.12 / pp.7302-7308 / 2014
  • Robustly detecting target objects in natural environments is a difficult problem in the computer vision and image processing fields. This paper suggests a method for robustly detecting target objects in environments where reflection exists. The suggested algorithm first captures scenes with a stereo camera and extracts the line and corner features representing the target objects. It then eliminates the reflected features from among the extracted ones using a homographic transform, and finally detects the target objects by clustering only the real features. The experimental results show that the suggested algorithm detects target objects in reflective environments more effectively than existing algorithms.
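
One way to realize homography-based reflection rejection like the method described above is to fit a single homography between the matched stereo features with RANSAC and treat its inliers as points on the reflective plane; the sketch below does that and then clusters the surviving features with a simple distance rule. Treating the homography inliers as reflections, and the clustering parameters, are assumptions for illustration rather than the paper's exact procedure.

```python
import numpy as np
import cv2

def remove_reflected_features(pts_left, pts_right, ransac_thresh=3.0):
    """Drop matched features consistent with a single planar (reflective) surface.

    pts_left / pts_right: (N, 2) matched corner coordinates from a stereo pair.
    Features lying on a mirror-like plane fit one homography between the views;
    RANSAC inliers of that homography are treated as reflections and removed.
    """
    pts_l, pts_r = np.float32(pts_left), np.float32(pts_right)
    H, inlier_mask = cv2.findHomography(pts_l, pts_r, cv2.RANSAC, ransac_thresh)
    if H is None:
        return pts_l, pts_r                  # too few matches to fit a plane; keep everything
    keep = inlier_mask.ravel() == 0          # keep only features off the reflective plane
    return pts_l[keep], pts_r[keep]

def cluster_real_features(pts, eps=40.0, min_pts=3):
    """Greedy distance-based clustering of the remaining (real) features."""
    pts = np.asarray(pts, float)
    labels = -np.ones(len(pts), dtype=int)
    current = 0
    for i in range(len(pts)):
        if labels[i] != -1:
            continue
        member = (np.linalg.norm(pts - pts[i], axis=1) < eps) & (labels == -1)
        if member.sum() >= min_pts:
            labels[member] = current
            current += 1
    return labels   # one label per feature; -1 means unclustered
```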