• Title/Abstract/Keyword: Automation Target Recognition

Search results: 11 (processing time: 0.027 sec)

An Improved ViBe Algorithm of Moving Target Extraction for Night Infrared Surveillance Video

  • Feng, Zhiqiang; Wang, Xiaogang; Yang, Zhongfan; Guo, Shaojie; Xiong, Xingzhong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 12 / pp. 4292-4307 / 2021
  • In night infrared surveillance video, target imaging is easily affected by illumination due to the characteristics of active infrared cameras, and the classical ViBe algorithm suffers from background misjudgment, noise interference, ghost shadows, and similar problems when extracting moving targets. This paper therefore proposes an improved ViBe algorithm (I-ViBe) for moving target extraction in night infrared surveillance video. First, the video frames are sampled and classified by the degree of illumination influence into three situations: no illumination change, small illumination change, and severe illumination change. When there is no illumination change, the standard ViBe algorithm extracts the moving target. When the illumination change is small, the segmentation factor of the ViBe algorithm is adapted to reduce the impact of illumination. When the illumination changes drastically, the moving target is extracted from the difference image between the current frame and the background model using a region-growing algorithm improved with image entropy. Simulation results show that the proposed I-ViBe algorithm is more robust to illumination changes; it extracts moving targets at night more accurately and provides more effective data for further night behavior recognition and target tracking.
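The illumination-adaptive idea in the abstract can be sketched as follows; the thresholds, scale factors, and the simple per-pixel distance test below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def classify_illumination(frame, background, small_thr=10.0, severe_thr=30.0):
    """Classify the illumination change by the mean absolute difference
    between the current frame and the background model (hypothetical thresholds)."""
    diff = np.mean(np.abs(frame.astype(float) - background.astype(float)))
    if diff < small_thr:
        return "none"
    elif diff < severe_thr:
        return "small"
    return "severe"

def segmentation_factor(level, base=20.0):
    """Adapt the ViBe segmentation (matching) threshold to the illumination level."""
    scale = {"none": 1.0, "small": 1.5, "severe": 2.5}
    return base * scale[level]

def foreground_mask(frame, background, radius):
    """Mark pixels whose distance to the background model exceeds the adapted radius."""
    return np.abs(frame.astype(float) - background.astype(float)) > radius
```

Under severe illumination change the paper switches to entropy-guided region growing on the difference image instead, which this sketch does not cover.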

비주얼 서보잉을 위한 딥러닝 기반 물체 인식 및 자세 추정 (Object Recognition and Pose Estimation Based on Deep Learning for Visual Servoing)

  • 조재민; 강상승; 김계경
    • 로봇학회논문지 / Vol. 14, No. 1 / pp. 1-7 / 2019
  • Recently, smart factories have attracted much attention as a result of the 4th Industrial Revolution. Existing factory automation technologies are generally designed for simple repetition without vision sensors, and even small object assemblies still depend on manual work. To replace existing systems with technologies such as bin picking and visual servoing, precision and real-time operation are essential. We therefore focused on these core elements, using a deep learning algorithm to detect and classify the target object in real time and to analyze its features. Although there are many strong deep learning algorithms such as Mask R-CNN and Fast R-CNN, we chose the YOLO CNN, which is capable of real-time operation and combines the two tasks mentioned above. Then, from the line and interior features extracted from the target object, we obtain its final outline and estimate its posture.
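The pose-estimation step that follows detection could look roughly like the following; second-order image moments are used here as a generic stand-in for the paper's line/inside-feature analysis, not their exact method:

```python
import numpy as np

def object_orientation(mask):
    """Estimate the in-plane orientation (radians) of a binary object mask
    from its central second-order moments: the angle of the principal axis
    of the pixel distribution."""
    ys, xs = np.nonzero(mask)
    x_c, y_c = xs.mean(), ys.mean()
    mu20 = ((xs - x_c) ** 2).mean()
    mu02 = ((ys - y_c) ** 2).mean()
    mu11 = ((xs - x_c) * (ys - y_c)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```

In a visual servoing loop, an angle like this (plus the detected bounding box center) would feed the pose error driving the robot toward the target.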

머신비전 자동검사를 위한 대상객체의 인식방향성 개선 (Recognition Direction Improvement of Target Object for Machine Vision based Automatic Inspection)

  • 홍승범; 홍승우; 이규호
    • 한국정보통신학회논문지 / Vol. 23, No. 11 / pp. 1384-1390 / 2019
  • This paper studies improving the recognition directionality of a target object for machine-vision-based automatic inspection, proposing a method to overcome the limited recognition direction of the target object in automatic vision inspection with an imaging camera. This allows the inspection image to be captured regardless of the position and orientation of the test object, eliminating the need for a separate inspection jig and raising the level of automation of the inspection process. We developed techniques and methods applicable to an actual wire-harness manufacturing process and present the results of implementing them in a real system. Through evaluation by an accredited institution, the implemented system achieved successful measurement results in precision, detection recognition rate, recall, and position-adjustment success rate, and also met the initially set targets of distinguishing 10 colors, an inspection time within 1 second, and 4 automatic mode settings.
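A minimal sketch of orientation-independent matching, assuming a square template and only 90-degree rotation steps (a real inspection system would search finer angles and positions, but the principle of scoring the template over candidate orientations is the same):

```python
import numpy as np

def best_rotation_match(image, template):
    """Score the template against the image at 0/90/180/270 degrees and
    return (best_angle_deg, best_score), using a plain correlation sum
    as the match score."""
    best = (0, -np.inf)
    for k in range(4):
        t = np.rot90(template, k)
        if t.shape != image.shape:
            continue  # non-square templates change shape under 90-degree rotation
        score = float(np.sum(image * t))
        if score > best[1]:
            best = (k * 90, score)
    return best
```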

항공 영상 융합의 성능 향상을 위한 적응 가이디드 필터 (An Adaptive Guided Filter for Performance Improvement of Aviation Image Fusion)

  • 김선영; 강창호; 박찬국
    • 한국항공우주학회지 / Vol. 44, No. 5 / pp. 407-415 / 2016
  • This paper proposes an adaptive guided filter based algorithm for optimal aviation image fusion. The proposed adaptive guided filter adjusts the regularization parameter, one of the guided filter design factors, according to the characteristics of the input image, keeping the PSNR (peak signal-to-noise ratio) at a predetermined value. Because the proposed method removes noise while maintaining the predetermined PSNR regardless of the input image characteristics, it yields optimal image fusion performance. The filter performance was verified through simulation and analyzed using widely used image fusion quality metrics.
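The adaptation loop described in the abstract, holding PSNR at a preset value by tuning a regularization parameter, can be sketched with a toy smoother standing in for the actual guided filter; the smoother, the bisection search, and all parameter names here are assumptions for illustration:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def smooth(img, eps):
    """Toy smoother: blend toward the global mean with strength eps
    (a stand-in for the guided filter's regularization parameter)."""
    return (1 - eps) * img + eps * img.mean()

def adapt_eps(img, target_psnr, lo=0.0, hi=1.0, iters=30):
    """Bisect eps so that PSNR(img, smooth(img, eps)) ~= target_psnr;
    with this toy smoother, PSNR decreases monotonically as eps grows."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if psnr(img, smooth(img, mid)) > target_psnr:
            lo = mid  # still above target: allow more smoothing
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The monotone relation between smoothing strength and PSNR is what makes a simple one-dimensional search sufficient; the paper's filter presumably exploits the same trade-off.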

Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon; Lee, Hoonyong; Ahn, Changbum R.; Jung, Minhyuk; Park, Moonseo
    • 국제학술발표논문집 / The 9th International Conference on Construction Engineering and Project Management / pp. 877-885 / 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies have focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, workers of various trades perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information alone. To this end, this research exploited the concept of human-object interaction, the interaction between a worker and their surrounding objects, considering that trade workers interact with specific objects (e.g., working tools or construction materials) relevant to their trades. The developed approach understands the context of sequential image frames from four features: posture, object, spatial, and temporal. The posture and object features are used to analyze the interaction between the worker and the target object, while the other two features detect movements over the entire image frame in the temporal and spatial domains. The approach uses convolutional neural networks (CNNs) as feature extractors and activity classifiers, with long short-term memory (LSTM) also used as an activity classifier. It achieved an average accuracy of 85.96% in classifying 12 target construction tasks performed by workers of two trades, higher than two benchmark models. This result indicates that integrating the concept of human-object interaction offers great benefits for activity recognition when workers of various trades coexist in a scene.
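The sequence-classification end of such a pipeline can be sketched with a single NumPy LSTM cell; treating the fused per-frame features (posture + object + spatial + temporal, concatenated) as one input vector, as well as all dimensions and weight layouts here, are assumptions about the design rather than the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: x is the input, (h, c) the recurrent state; W, U, b
    hold the stacked input/forget/output/candidate gate parameters
    (4 * hidden rows)."""
    z = W @ x + U @ h + b
    n = h.size
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2 * n]), sigmoid(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def classify_sequence(frame_feats, W, U, b, W_out):
    """Run the per-frame fused feature vectors through the LSTM and
    classify the activity from the final hidden state."""
    n = W_out.shape[1]
    h, c = np.zeros(n), np.zeros(n)
    for x in frame_feats:
        h, c = lstm_step(x, h, c, W, U, b)
    return int(np.argmax(W_out @ h))
```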


자동차 부품의 로봇 처리 시스템을 위한 3D 비전 구현 (3D Vision Implementation for Robotic Handling System of Automotive Parts)

  • 남지훈; 양원옥; 박수현; 김남국; 송철기; 이호성
    • 한국기계가공학회지 / Vol. 21, No. 4 / pp. 60-69 / 2022
  • To keep pace with Industry 4.0, companies must redesign their working environments by adopting robotic automation systems. Automation lines incorporate the latest cutting-edge technologies, such as 3D vision and industrial robots, to outdo competitors by reducing costs. Given the nature of the manufacturing industry, a time-saving workflow and smooth linkage between processes are vital. At Dellics, without any new installation in the automation lines, a few improvements to the working process could raise productivity. Three requirements were identified: development of gripping technology that uses a 3D vision system to recognize material shape and location, research on lighting projectors for long distances and high illumination, and testing of algorithms/software to improve measurement accuracy and identify products. With the functional requisites mentioned above, the improved robotic automation system should provide a better working environment and maximize overall production efficiency. This article discusses how such a system can become the groundwork for establishing an unmanned working infrastructure.

A Study on Infra-Technology of RCP Mobility System

  • Kim, Seung-Woo; Choe, Jae-Il; Im, Chan-Young
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2004 / pp. 1435-1439 / 2004
  • Most recently, the CP (Cellular Phone) has become one of the most important technologies in the IT (Information Technology) field, holding a position of great industrial and economic importance. To produce the best CP in the world, a new technological concept and advanced implementation techniques are required, due to the extreme level of competition in the world market. RT (Robot Technology) has been developed as a future next-generation technology. Unlike the industrial robots of the past, current robots require advanced technology such as soft computing, human-friendly interfaces, interaction techniques, speech recognition, and object recognition. Therefore, this paper describes conceptual research on the development of the RCP (Robotic Cellular Phone), a new technological concept in which a synergy effect is generated by merging IT and RT. The RCP infrastructure consists of RCP-Mobility, RCP-Interaction, and RCP-Integration technologies. For RCP-Mobility, human-friendly motion automation and personal services with walking and arm capabilities are developed. RCP-Interaction is achieved by modeling an emotion-generating engine, and RCP-Integration, which recognizes environmental and self conditions, is developed. By joining intelligent algorithms and the CP communication network with these three base modules, an RCP system is constructed. This paper focuses in particular on the RCP mobility system. RCP-Mobility applies mobility, a popular robot technology, to the CP and combines human-friendly motion and navigation functions with it. It develops new application systems such as auto-charging and real-world entertainment functions, and can turn a CP into a companion pet robot. It automates human-friendly motions such as opening and closing the CP, rotating the antenna, manipulation, and wheel-walking. Its target is the implementation of wheel and manipulator functions that can serve humans with human-friendly motion. This paper presents the definition, basic theory, and experimental results of the RCP mobility system, and the experimental results confirm its good performance.


자동화된 변전소의 주변압기 사고복구를 위한 패턴인식기법에 기반한 실시간 모선재구성 전략 개발 (Real-Time Bus Reconfiguration Strategy for the Fault Restoration of Main Transformer Based on Pattern Recognition Method)

  • 고윤석
    • 대한전기학회논문지:전력기술부문A / Vol. 53, No. 11 / pp. 596-603 / 2004
  • This paper proposes an expert system based on a pattern recognition method that can enhance the accuracy and effectiveness of the real-time bus reconfiguration strategy for transferring faulted load when a main transformer fault occurs in an automated substation. The minimum distance classification method is adopted as the pattern recognition method of the expert system. The training pattern set is designed per main transformer (MTr) to minimize the search time for the target load pattern most similar to the real-time load pattern. The control pattern set, which determines the bus reconfiguration strategy corresponding to the trained load patterns, is designed as a single table for efficiency of knowledge base design, because its size is small. A training load pattern generator based on load level and another based on load profile are designed, which can reduce the size of each training pattern set from a maximum of L^(m+f) to an effective size, where L is the number of load levels and m and f are the numbers of main transformers and feeders, respectively. The former reduces the number of trained load patterns by mapping similar patterns to the same pattern; the latter reduces it by considering only the load patterns occurring within a given period. A control pattern generator based on an exhaustive search method with a breadth limit is designed, which generates the bus reconfiguration strategy corresponding to the trained load patterns. The inference engine of the expert system, the substation database, and the knowledge base are implemented using MFC in Visual C++. Finally, the performance and effectiveness of the proposed expert system are verified by comparing the best-first search solution with the pattern recognition solution in diverse event simulations for a typical distribution substation.
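The minimum distance classification named in the abstract reduces to nearest-prototype search over the stored training load patterns; this generic sketch illustrates the method, not the paper's exact implementation:

```python
import numpy as np

def minimum_distance_classify(sample, prototypes):
    """Minimum distance classification: return the index of the stored
    prototype (e.g., a trained load pattern of feeder loads) closest to
    the real-time sample in Euclidean distance. The matched index would
    then look up the corresponding bus reconfiguration strategy in the
    control pattern table."""
    dists = np.linalg.norm(prototypes - sample, axis=1)
    return int(np.argmin(dists))
```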

광 페룰 에폭시 자동주입 시스템 설계 및 성능시험에 관한 연구 (A Study on the Design and Performance Test of Optical Ferrule Epoxy Injection System)

  • 곽이구
    • 한국공작기계학회논문집 / Vol. 17, No. 6 / pp. 118-123 / 2008
  • In optical connector manufacturing, the ferrule array and epoxy filling steps are the weakest processes, and many problems arise when they are treated as ordinary manufacturing steps at the initial investment stage. This work targeted the problems occurring in ferrule arraying and epoxy injection, aiming to improve both the process and the workers' environment from the planning phase. Improving ferrule sorting and the epoxy filling process raises productivity and, by reducing the defect rate, improves product reliability. On the other hand, the working jigs in an optical connector manufacturing line are varied, and there have been many practical restrictions because such a system is applicable only to a single type of model, even though the linkage between lines should be seamless. Thus, what is needed is not only recognition of this necessity on the industrial line but also the development of an automation system for arraying ferrules and filling epoxy in the manufacturing process. The developed system was found to enhance productivity considerably and to prevent industrial accidents in the optical connector manufacturing system.

Gyro-Mouse for the Disabled: 'Click' and 'Position' Control of the Mouse Cursor

  • Eom, Gwang-Moon; Kim, Kyeong-Seop; Kim, Chul-Seung; Lee, James; Chung, Soon-Cheol; Lee, Bong-Soo; Higa, Hiroki; Furuse, Norio; Futami, Ryoko; Watanabe, Takashi
    • International Journal of Control, Automation, and Systems / Vol. 5, No. 2 / pp. 147-154 / 2007
  • This paper describes a 'gyro-mouse', a new human-computer interface (HCI) that provides the mouse-click and mouse-move functions for persons with upper-extremity disabilities. We adopted an artificial neural network to recognize a quick-nodding pattern of the user as the gyro-mouse click. The performance of the gyro-mouse was evaluated by three indices: 'click recognition rate', 'error in cursor position control', and 'click rate per minute' on a target box appearing at random positions. Although the average error in cursor position control was 1.4-1.5 times larger than that of an optical mouse and the average click rate per minute was 40% of the optical mouse, the overall click recognition rate was 93%. Moreover, the click rate per minute increased from 35.2% to 44% with repetitive trials. Hence, the suggested gyro-mouse system can serve as a new user interface tool, especially for persons who do not have full use of their upper extremities.
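Click detection from the head-mounted gyro signal could be sketched as below; the paper uses an artificial neural network for this, so the threshold heuristic and its parameters here are purely hypothetical stand-ins for the learned recognizer:

```python
import numpy as np

def detect_nod(gyro_pitch, rate_thr=1.5, window=5):
    """Detect a quick nod in a pitch-rate signal: a sharp positive peak
    followed within `window` samples by a sharp negative peak. A toy
    heuristic standing in for the paper's neural-network click recognizer;
    thresholds and window length are assumed values."""
    g = np.asarray(gyro_pitch, dtype=float)
    for i in range(len(g) - window + 1):
        if g[i] > rate_thr and np.min(g[i:i + window]) < -rate_thr:
            return True
    return False
```

A learned classifier has the advantage of adapting to each user's nod dynamics, which is presumably why the authors chose a neural network over fixed thresholds like these.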