• Title/Abstract/Keyword: Robot vision

Search Result 880, Processing Time 0.026 seconds

Hole Identification Method Based on Template Matching for the Ear-Pins Insertion Automation System (이어핀 삽입 자동화 시스템을 위한 템플릿 매칭 기반 삽입 위치 판별 방법)

  • Baek, Jonghwan;Lee, Jaeyoul;Jung, Myungsoo;Jang, Minwoo;Shin, Dongho;Seo, Kapho;Hong, Sungho
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.1
    • /
    • pp.7-14
    • /
    • 2021
  • In the jewelry industry, the proportion of labor costs is high, and production time and product quality vary widely depending on the workers' skill, so there is demand from the industry for automation. The ear-pin insertion automation system is a robot that automatically inserts ear pins into a silicone mold, and this automated system requires an accurate and fast hole detection method. In this paper, we propose an optimal binarization method and a template matching method that can be applied to the ear-pin insertion automation system. A performance test showed that the proposed method has an accuracy of 98.5% and a processing speed 0.5 seconds faster than the Otsu binarization method. This automation system can therefore contribute to cost reduction, shorter working time, and improved productivity.
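The paper's own binarization and matching routines are not reproduced here; as a generic illustration of template matching, the sketch below does a brute-force search by sum of squared differences (SSD) over a tiny grayscale grid (all pixel values are made-up example data). In practice a library routine such as OpenCV's `cv2.matchTemplate` would be used instead.

```python
def match_template(image, template):
    """Return (row, col) of the best match by sum of squared differences (SSD)."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_ssd, best_pos = None, None
    for r in range(ih - th + 1):          # slide the template over every position
        for c in range(iw - tw + 1):
            ssd = sum(
                (image[r + i][c + j] - template[i][j]) ** 2
                for i in range(th)
                for j in range(tw)
            )
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos

# Toy 5x5 binarized image with a bright 2x2 "hole" whose top-left is (row 2, col 1)
image = [[0] * 5 for _ in range(5)]
for i, j in [(2, 1), (2, 2), (3, 1), (3, 2)]:
    image[i][j] = 255
template = [[255, 255], [255, 255]]
print(match_template(image, template))  # (2, 1)
```

A real system would run this on the binarized camera frame, which is why the choice of binarization method directly affects both accuracy and speed.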

LiDAR Static Obstacle Map based Vehicle Dynamic State Estimation Algorithm for Urban Autonomous Driving (도심자율주행을 위한 라이다 정지 장애물 지도 기반 차량 동적 상태 추정 알고리즘)

  • Kim, Jongho;Lee, Hojoon;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.14-19
    • /
    • 2021
  • This paper presents a LiDAR static obstacle map based vehicle dynamic state estimation algorithm for urban autonomous driving. In autonomous driving, state estimation of the host vehicle is important for accurate prediction of ego motion and of perceived objects. Therefore, in situations where noise exists in the control input of the vehicle, state estimation using sensors such as LiDAR and vision is required. However, it is difficult to obtain a measurement of the vehicle state because the perception sensors of an autonomous vehicle also observe dynamic objects. The proposed algorithm consists of two parts. First, a Bayesian rule-based static obstacle map is constructed from the continuous LiDAR point cloud input. Second, vehicle odometry over the time interval is calculated by matching against the static obstacle map using the Normal Distributions Transform (NDT) method, and the velocity and yaw rate of the vehicle are estimated by an Extended Kalman Filter (EKF) that uses the odometry as its measurement. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and is verified with data obtained from actual driving on urban roads. The test results show a more robust and accurate dynamic state estimation when there is a bias in the chassis IMU sensor.
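The paper's full EKF (velocity and yaw rate, with a vehicle model) is beyond a short sketch, but the core fusion step it describes, treating map-matching odometry as a measurement, can be shown with a minimal scalar Kalman filter. All noise values and measurements below are illustrative, not from the paper.

```python
def kalman_update(v, P, z, Q=0.01, R=0.25):
    """One predict/update cycle of a scalar Kalman filter.

    v, P : current velocity estimate and its variance
    z    : velocity measurement derived from map-matching odometry
    Q, R : process and measurement noise variances (illustrative values)
    """
    P = P + Q                # predict: constant-velocity model, variance grows
    K = P / (P + R)          # Kalman gain
    v = v + K * (z - v)      # correct with the odometry-derived measurement
    P = (1.0 - K) * P        # variance shrinks after the update
    return v, P

# Velocities (displacement / dt) from successive NDT matches (toy data, ~5 m/s)
measurements = [5.2, 4.8, 5.1, 4.9, 5.0, 5.1, 4.9, 5.0]
v, P = 0.0, 1.0
for z in measurements:
    v, P = kalman_update(v, P, z)
print(round(v, 2))  # converges toward ~5 m/s
```

The paper's point is that this measurement channel comes from the static obstacle map, so a bias in the chassis IMU does not corrupt the estimate the way pure dead reckoning would.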

Transcutaneous medial fixation sutures for free flap inset after robot-assisted nipple-sparing mastectomy

  • Kim, Bong-Sung;Kuo, Wen-Ling;Cheong, David Chon-Fok;Lindenblatt, Nicole;Huang, Jung-Ju
    • Archives of Plastic Surgery
    • /
    • v.49 no.1
    • /
    • pp.29-33
    • /
    • 2022
  • The application of minimally invasive mastectomy has allowed surgeons to perform nipple-sparing mastectomy via a shorter, inconspicuous incision under clear vision and with more precise hemostasis. However, it poses new challenges in microsurgical breast reconstruction, such as vascular anastomosis and flap insetting, which are considerably more difficult to perform through the shorter incision on the lateral breast border. We propose an innovative technique of transcutaneous medial fixation sutures to help with flap insetting and with creating and maintaining the medial breast border. The sutures are placed after mastectomy and before flap transfer. Three 4-0 nylon suture loops are placed transcutaneously and into the pocket at the markings of the preferred lower medial border of the reconstructed breast. After microvascular anastomosis and temporary shaping of the flap on top of the mastectomy skin, the three corresponding points for the sutures are identified. The three nylon loops are then sutured to the dermis at the corresponding medial points of the flap. The flap is placed into the pocket by a simultaneous gentle pull on the three sutures and a combined lateral push. The stitches are then tied and buried after completion of flap inset.

Monovision Charging Terminal Docking Method for Unmanned Automatic Charging of Autonomous Mobile Robots (자율이동로봇의 무인 자동 충전을 위한 모노비전 방식의 충전단자 도킹 방법)

  • Keunho Park;Juhwan Choi;Seonhyeong Kim;Dongkil Kang;Haeseong Jo;Joonsoo Bae
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.47 no.3
    • /
    • pp.95-103
    • /
    • 2024
  • The diversity of smart EV (electric vehicle)-related industries is increasing due to the growth of battery-based eco-friendly electric vehicle component and material technology, and labor-intensive industries such as logistics, manufacturing, food, agriculture, and service have long invested in and studied automation. Accordingly, various types of robots such as autonomous mobile robots and collaborative robots are being utilized in each process to improve industrial engineering concerns such as optimization, productivity management, and work management. A technology that must accompany this unmanned industry is unmanned automatic charging: if autonomous mobile robots have to be charged manually, their utility cannot be maximized. In this paper, we studied unmanned charging of autonomous mobile robots through charging terminal docking and undocking, using an unmanned charging system composed of hardware such as a monocular camera, a multi-joint robot, a gripper, and a server. In an experiment to evaluate the performance of the system, the average charging terminal recognition rate was 98%, and the average charging terminal recognition speed was 0.0099 seconds. An experiment was also conducted to evaluate the docking and undocking success rate of the charging terminal, with an average success rate of 99%.
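The paper's detection pipeline is not detailed in the abstract; one common monovision pattern it could follow is to locate the terminal in the image and steer from the pixel offset between the terminal and the image center. The sketch below is a minimal, hypothetical version of that idea: a bright-blob centroid stands in for whatever marker or feature the real system detects, and the threshold value is made up.

```python
def terminal_offset(gray, threshold=200):
    """Centroid of bright pixels (assumed charging-terminal marker) and its
    offset from the image center, usable as a monocular visual-servo error."""
    h, w = len(gray), len(gray[0])
    xs, ys = [], []
    for r in range(h):
        for c in range(w):
            if gray[r][c] >= threshold:
                ys.append(r)
                xs.append(c)
    if not xs:
        return None                              # terminal not detected
    cx = sum(xs) / len(xs)                       # blob centroid
    cy = sum(ys) / len(ys)
    return cx - (w - 1) / 2, cy - (h - 1) / 2    # (dx, dy) steering error

# 5x5 toy frame: bright terminal blob in the upper-right quadrant
frame = [[0] * 5 for _ in range(5)]
frame[1][3] = frame[1][4] = 255
dx, dy = terminal_offset(frame)
print(dx, dy)  # 1.5 -1.0
```

Driving `dx` and `dy` toward zero centers the terminal in the camera view before the multi-joint robot attempts the docking motion.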

A Study on a Real-Time Aerial Image-Based UAV-USV Cooperative Guidance and Control Algorithm (실시간 항공영상 기반 UAV-USV 간 협응 유도·제어 알고리즘 개발)

  • Do-Kyun Kim;Jeong-Hyeon Kim;Hui-Hun Son;Si-Woong Choi;Dong-Han Kim;Chan Young Yeo;Jong-Yong Park
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.61 no.5
    • /
    • pp.324-333
    • /
    • 2024
  • This paper focuses on cooperation between an Unmanned Aerial Vehicle (UAV) and an Unmanned Surface Vessel (USV). It aims to develop efficient guidance and control algorithms for the USV based on obstacle identification and path planning from aerial images captured by the UAV. Various obstacle scenarios were implemented using the Robot Operating System (ROS) and the Gazebo simulation environment. The aerial images transmitted in real time from the UAV to the USV are processed using the computer vision-based deep learning model You Only Look Once (YOLO) to classify and recognize elements such as the water surface, obstacles, and ships. The recognized data is used to create a two-dimensional grid map. Algorithms such as A* and Rapidly-exploring Random Tree star (RRT*) were used for path planning. This process enhances the guidance and control strategies within the UAV-USV collaborative system, especially improving the navigational capabilities of the USV in complex and dynamic environments. This research offers significant insights into obstacle avoidance and path planning in maritime environments and proposes new directions for the integrated operation of UAVs and USVs.
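Once YOLO detections are rasterized into a two-dimensional occupancy grid, a planner such as A* searches it for a collision-free route. The sketch below is a standard 4-connected A* with a Manhattan heuristic on a toy grid; the grid contents are made-up example data, not the paper's map.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle). Returns a path or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path so far)
    best_g = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(
                        open_set, (ng + h((r, c)), ng, (r, c), path + [(r, c)])
                    )
    return None

# Toy 4x4 grid map: a wall of obstacles with a single gap on the right
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar(grid, (0, 0), (3, 0))
```

The path threads the single gap at column 3, which is why the route is 9 steps long even though start and goal are only 3 cells apart; RRT*, the paper's other planner, trades this optimality guarantee for faster exploration of large spaces.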

An Image Processing System for the Harvesting Robot (포도수확용 로봇 개발을 위한 영상처리시스템)

  • Lee, Dae-Weon;Kim, Dong-Woo;Kim, Hyun-Tae;Lee, Yong-Kuk;Si-Heung
    • Journal of Bio-Environment Control
    • /
    • v.10 no.3
    • /
    • pp.172-180
    • /
    • 2001
  • Harvesting grapes in Korea requires a great deal of labor, since the fruit is currently cut and grabbed by hand. In other countries, especially France, grape harvesters have been developed for processing grapes into wine, not for eating fresh grapes; a harvester for fresh table grapes has not yet been developed. Therefore, in this study an image processing system for a fresh grape harvester was designed and constructed. Its development involved the integration of a vision system with a personal computer and two cameras. Grape recognition, which must find the accurate three-dimensional cutting position for the end-effector, needs to separate the object from the background using the two different images from the two cameras. Based on the results of this research, the following conclusions were made. The model grape was located and measured within 1,100 mm of the camera center, meaning the midpoint between the two cameras. The calculated distance had an error within 5 mm using model images in the laboratory, and the image processing system proved to be reliable for measuring the distance between the camera center and the grape fruit. The difference between actual and calculated distance was also within 5 mm using the stereo vision system in the field. Therefore, the image processing system could be mounted on a grape harvester to find the position of a grape fruit.
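The paper's calibration details are not given, but two-camera distance measurement of this kind rests on the standard rectified-stereo relation, depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A minimal sketch with illustrative numbers (not the paper's rig parameters):

```python
def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth of a matched point from a rectified stereo pair: Z = f * B / d."""
    disparity = x_left_px - x_right_px       # pixel shift between the two views
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity

# Illustrative numbers: 700 px focal length, 120 mm baseline, 84 px disparity
z = stereo_depth(700.0, 0.12, 400.0, 316.0)
print(z)  # ~1.0 (metres)
```

Since depth error grows as disparity shrinks, the reported 5 mm accuracy at distances up to 1,100 mm is consistent with the grapes being matched at fairly large disparities close to the cameras.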


A Study on the Selection and Applicability Analysis of 3D Terrain Modeling Sensor for Intelligent Excavation Robot (지능형 굴삭 로봇의 개발을 위한 로컬영역 3차원 모델링 센서 선정 및 현장 적용성 분석에 관한 연구)

  • Yoo, Hyun-Seok;Kwon, Soon-Wook;Kim, Young-Suk
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.33 no.6
    • /
    • pp.2551-2562
    • /
    • 2013
  • Since 2006, an Intelligent Excavation Robot that automatically performs earthwork without an operator has been developed in Korea. Technologies for automatically recognizing the terrain of the work environment and detecting objects such as obstacles or dump trucks are essential for its work quality and safety. In several countries, terrestrial 3D laser scanners and stereo vision cameras have been used to model the local area around the workspace of automated construction equipment. However, these attempts have problems: high cost to build the sensor system, or long processing time to eliminate noise from the 3D model output. The objectives of this study are to analyze the advantages of the existing 3D modeling sensors and to examine their applicability for practical use with the Analytic Hierarchy Process (AHP). In this study, the 3D modeling quality and accuracy of the sensors were tested in a real earthwork environment.
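AHP turns pairwise expert judgments between criteria into numeric priority weights. The sketch below uses the common column-normalization approximation of the principal eigenvector; the criteria and comparison values are hypothetical, not the paper's actual judgments.

```python
def ahp_weights(pairwise):
    """Approximate AHP priority weights by averaging the normalized columns."""
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    return [
        sum(pairwise[r][c] / col_sums[c] for c in range(n)) / n
        for r in range(n)
    ]

# Hypothetical 3-criteria comparison for sensor selection:
# modeling accuracy vs. system cost vs. processing time (values illustrative)
matrix = [
    [1.0, 3.0, 5.0],   # accuracy moderately preferred to cost, strongly to time
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(matrix)
```

Each sensor is then scored against the weighted criteria; a real AHP study would also compute the consistency ratio to check that the pairwise judgments are not self-contradictory.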

Machine Classification in Ship Engine Rooms Using Transfer Learning (전이 학습을 이용한 선박 기관실 기기의 분류에 관한 연구)

  • Park, Kyung-Min
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.27 no.2
    • /
    • pp.363-368
    • /
    • 2021
  • Ship engine rooms have improved automation systems owing to the advancement of technology. However, there are many variables at sea, such as wind, waves, vibration, and equipment aging, which cause loosening, cuts, and leakage that are not measured by automated systems, and in some cases only one engineer is available for patrolling. This entails many risk factors in the engine room, where rotating equipment operates at high temperature and high pressure. When the engineer patrols, he uses his five senses, with a particularly high dependence on vision. We hereby present a preliminary study toward an engine-room patrol robot that detects and reports the state of the machinery while patrolling the engine room. Images of ship engine-room equipment were classified using a convolutional neural network (CNN). After constructing an image dataset of the ship engine room, the network was trained with a pre-trained CNN model. The classification performance of the trained model showed high reproducibility, and the images were visualized with a class activation map. Although the results cannot be generalized because the amount of data was limited, if the data of each ship were learned through transfer learning, a model suited to the characteristics of each ship could be constructed with little time and cost.
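A real implementation would fine-tune a pre-trained CNN in a framework such as PyTorch; to keep a sketch self-contained, the toy below substitutes a frozen hand-written "feature extractor" for the pre-trained backbone and trains only a small logistic-regression head, which is the essential transfer-learning pattern the paper relies on. All data and functions here are illustrative stand-ins.

```python
import math

def features(x):
    """Stand-in for a frozen, pre-trained backbone: never updated during training."""
    return [x, abs(x)]

def train_head(data, lr=0.5, epochs=200):
    """Train only the new classification head (logistic regression) on top."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            p = 1.0 / (1.0 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))
            err = p - y                                    # log-loss gradient
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

def predict(x, w, b):
    f = features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0

# Toy binary task standing in for two equipment classes
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = train_head(data)
```

Because only the small head is trained, a per-ship model can be adapted from limited data quickly, which is exactly the cost argument the abstract makes for transfer learning.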

Design and Implementation of Real-time Digital Twin in Heterogeneous Robots using OPC UA (OPC UA를 활용한 이기종 로봇의 실시간 디지털 트윈 설계 및 구현)

  • Jeehyeong Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.4
    • /
    • pp.189-196
    • /
    • 2023
  • As the manufacturing paradigm shifts, various collaborative robots are creating new markets. Demand for collaborative robots is increasing in all industries because, compared to existing industrial robots, they offer easy operation, productivity improvement, and the replacement of manpower for simple tasks. However, accidents caused by collaborative robots occur frequently at industrial sites, threatening the safety of workers. To build an industrial site around robots in a human-centered environment, the safety of workers must be guaranteed, and a collaborative robot guard system that provides reliable communication needs to be developed. It is necessary to doubly prevent accidents that occur within the working radius of cobots and to reduce the risk of safety accidents through sensors and computer vision. We build a system based on OPC UA, an international protocol for communication with various industrial equipment, and propose a collaborative robot guard system using ultrasonic sensors and image analysis with a CNN (Convolutional Neural Network). The proposed system evaluates the possibility of robot control in situations that are unsafe for a worker.
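The "double prevention" idea is that either sensing channel, ultrasonic ranging or CNN image analysis, can independently trigger a safety action sent to the robot over OPC UA. A minimal decision-fusion sketch, with thresholds and the state names chosen for illustration rather than taken from the paper:

```python
def guard_decision(ultrasonic_m, person_in_zone, slow_m=1.5, stop_m=0.5):
    """Redundant safety decision: either sensing channel alone can stop the robot.

    ultrasonic_m   : nearest distance from the ultrasonic sensor (metres)
    person_in_zone : CNN image-analysis verdict for the working radius
    Thresholds are illustrative, not from the paper.
    """
    if ultrasonic_m < stop_m or person_in_zone:
        return "STOP"          # worker inside the danger zone -> halt the cobot
    if ultrasonic_m < slow_m:
        return "SLOW"          # something close -> reduced speed
    return "RUN"

print(guard_decision(2.0, False))  # RUN
print(guard_decision(1.0, False))  # SLOW
print(guard_decision(2.0, True))   # STOP (vision channel alone triggers)
```

In the proposed system the resulting state would be written to the heterogeneous robots as an OPC UA node value, which is what makes the guard logic vendor-independent.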

An Observation System of Hemisphere Space with Fish eye Image and Head Motion Detector

  • Sudo, Yoshie;Hashimoto, Hiroshi;Ishii, Chiharu
• Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.663-668
    • /
    • 2003
  • This paper presents a new observation system that is useful for observing the scene from a remotely controlled robot's vision. The system is composed of a motionless camera and a head motion detector with a motion sensor. The motionless camera has a fish-eye lens for observing a hemispherical space. The head motion detector uses its motion sensor to define an arbitrary subspace of the hemispherical space seen through the fish-eye lens; by processing the angular information from the motion sensor appropriately, the direction of the face is estimated. However, since the fish-eye image is distorted, it is unclear. The partial domain of the fish-eye image selected by head motion is therefore converted to a perspective image. Since this conversion enlarges the original image spatially and is based on discrete data, gaps are generated in the converted image. To solve this problem, interpolation based on image intensity is performed over the gaps in the converted image. This paper provides experimental results for the proposed observation system with the head motion detector and perspective image conversion using the proposed conversion and interpolation methods, and the adequacy and points for improvement of the proposed techniques are discussed.
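The geometric core of the conversion can be sketched with the equidistant fish-eye model (r = f·θ), which is an assumption here since the paper does not state its lens model. Mapping each perspective-image radius back to a fish-eye radius gives the inverse mapping; sampling the fish-eye image at those inverse-mapped coordinates (with bilinear interpolation) fills every output pixel, which is one standard way to avoid the gaps the paper interpolates over.

```python
import math

def perspective_to_fisheye_radius(r_persp, f):
    """Map a radius in the perspective image to the corresponding radius in an
    equidistant fish-eye image: theta = atan(r / f), r_fish = f * theta."""
    theta = math.atan2(r_persp, f)   # off-axis angle of the viewing ray
    return f * theta

# A ray 45 degrees off-axis lands at r = f in the perspective image
f = 300.0
r_fish = perspective_to_fisheye_radius(f, f)
print(r_fish)  # f * pi / 4, about 235.6
```

The forward direction (fish-eye pixel to perspective pixel) is what produces sparse, gappy output; iterating over destination pixels and applying this inverse mapping is the usual remedy.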
