• Title/Abstract/Keyword: Robotic camera

93 search results (processing time: 0.024 s)

Study of Intelligent Vision Sensor for the Robotic Laser Welding

  • Kim, Chang-Hyun;Choi, Tae-Yong;Lee, Ju-Jang;Suh, Jeong;Park, Kyoung-Taik;Kang, Hee-Shin
    • 한국산업융합학회 논문집 / Vol. 22, No. 4 / pp. 447-457 / 2019
  • An intelligent sensory system is required to ensure accurate welding performance. This paper describes the development of an intelligent vision sensor for robotic laser welding. The sensor system consists of a PC-based vision camera and a stripe-type laser diode, and a set of robust image processing algorithms is implemented. The laser-stripe sensor measures the profile of the welding object and extracts the seam line. Moreover, the working distance of the sensor can be changed, and the rest of the configuration is adjusted accordingly. A robot, the seam tracking system, and a CW Nd:YAG laser make up the laser welding robot system. A simple and efficient control scheme for the whole system is also presented. Profile measurement and seam tracking experiments were carried out to validate the operation of the system.
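As a rough illustration of how a stripe-type laser sensor yields a profile and a seam estimate, the sketch below takes the intensity-weighted centroid of the stripe in each image column and marks the column that deviates most from a fitted straight baseline as the seam. The function names, threshold, and synthetic image are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of laser-stripe profile extraction and seam picking
# (illustrative only; not the paper's image-processing pipeline).
import numpy as np

def extract_profile(stripe_image: np.ndarray) -> np.ndarray:
    """Return the stripe row position (sub-pixel) for every image column.

    stripe_image: 2-D grayscale array where the laser stripe is the
    brightest structure in each column.
    """
    rows = np.arange(stripe_image.shape[0])[:, None]
    weights = stripe_image.astype(float)
    weights = np.where(weights > weights.max(axis=0) * 0.5, weights, 0.0)
    # Intensity-weighted centroid per column ~ stripe centre line.
    denom = weights.sum(axis=0) + 1e-9
    return (rows * weights).sum(axis=0) / denom

def find_seam_column(profile: np.ndarray) -> int:
    """Pick the column where the profile deviates most from a straight
    baseline fitted to the whole stripe (a crude groove/seam cue)."""
    cols = np.arange(profile.size)
    slope, intercept = np.polyfit(cols, profile, 1)
    residual = np.abs(profile - (slope * cols + intercept))
    return int(np.argmax(residual))

if __name__ == "__main__":
    # Synthetic stripe with a groove around column 80 standing in for a seam.
    img = np.zeros((120, 160))
    centre = 60 + np.where(np.abs(np.arange(160) - 80) < 6, 15, 0)
    img[centre, np.arange(160)] = 255.0
    prof = extract_profile(img)
    print("seam near column", find_seam_column(prof))
```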

로봇팔을 지닌 물류용 자율주행 전기차 플랫폼 개발 (Development of Autonomous Driving Electric Vehicle for Logistics with a Robotic Arm)

  • 정의정;박성호;전광우;신현석;최윤용
    • 로봇학회논문지 / Vol. 18, No. 1 / pp. 93-98 / 2023
  • In this paper, the development of an autonomous electric vehicle for logistics with a robotic arm is introduced. A manually driven electric vehicle was converted into an electric vehicle platform capable of autonomous driving. For autonomous driving, an encoder is installed on the driving wheels, and an electronic power steering system is applied for automatic steering. The electric vehicle is equipped with a lidar sensor, a depth camera, and an ultrasonic sensor to recognize the surrounding environment, create a map, and localize the vehicle. The odometry is calculated using the bicycle motion model, and the map is created using a SLAM algorithm. To estimate the location of the platform on the generated map, the AMCL algorithm using the lidar is applied. A user interface was developed to create and modify waypoints so that the vehicle can move to predetermined places according to the logistics process. An A*-based global path is generated to reach the destination, and a DWA-based local path is generated to follow the global path. The autonomous electric vehicle developed in this paper was tested in a warehouse and its utility was verified.
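For context, a minimal sketch of the kind of bicycle-model dead-reckoning described above is shown here; the wheelbase value, encoder step, and function names are illustrative assumptions rather than parameters of the actual platform.

```python
# Minimal sketch of dead-reckoning with the kinematic bicycle model
# (the wheelbase and encoder scaling below are made-up values).
import math

WHEELBASE_M = 1.2  # assumed distance between front and rear axles

def update_pose(x, y, theta, distance_m, steering_rad):
    """Advance the rear-axle pose (x, y, heading) by one encoder step.

    distance_m   : distance travelled by the driving wheels since the
                   last update (from the wheel encoder).
    steering_rad : current front-wheel steering angle (from the EPS).
    """
    x += distance_m * math.cos(theta)
    y += distance_m * math.sin(theta)
    theta += distance_m * math.tan(steering_rad) / WHEELBASE_M
    return x, y, theta

if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    # Drive 5 m in 0.05 m encoder increments with a constant 5 deg steer.
    for _ in range(100):
        pose = update_pose(*pose, distance_m=0.05,
                           steering_rad=math.radians(5.0))
    print("final pose:", pose)
```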

실시간 햅틱 렌더링 기술을 통한 시각 장애인을 위한 원격현장감(Telepresence) 로봇 기술 (Telepresence Robotic Technology for Individuals with Visual Impairments Through Real-time Haptic Rendering)

  • 박정혁;아야나 하워드
    • 로봇학회논문지 / Vol. 8, No. 3 / pp. 197-205 / 2013
  • This paper presents a robotic system that provides telepresence to visually impaired users by combining real-time haptic rendering with multi-modal interaction. A virtual-proxy-based haptic rendering process using an RGB-D sensor is developed and integrated into a unified framework for control of, and feedback from, the telepresence robot. We discuss the challenging problem of presenting environmental perception to a user with visual impairments and our solution based on multi-modal interaction. We also describe the experimental design, protocols, and results with human subjects both with and without visual impairments. A discussion of the system's performance and our future goals concludes the paper.
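As a much-simplified illustration of virtual-proxy haptic rendering, the sketch below keeps a proxy point on a constraint surface while the device point penetrates it and renders a spring force pulling the device back toward the proxy. A single plane stands in for the RGB-D point-cloud surface used in the paper, and the stiffness value is an assumption.

```python
# Much-simplified sketch of virtual-proxy haptic rendering: the proxy is
# held on the constraint surface while the device point penetrates it, and
# the rendered force is a spring pulling the device toward the proxy.
import numpy as np

STIFFNESS_N_PER_M = 400.0                   # assumed virtual spring constant
PLANE_NORMAL = np.array([0.0, 0.0, 1.0])    # floor plane z = 0 as the surface
PLANE_POINT = np.zeros(3)

def proxy_position(device_pos: np.ndarray) -> np.ndarray:
    """Project the device point back onto the surface when it penetrates."""
    depth = np.dot(device_pos - PLANE_POINT, PLANE_NORMAL)
    if depth >= 0.0:
        return device_pos.copy()                 # free space: proxy follows device
    return device_pos - depth * PLANE_NORMAL     # clamp proxy onto the plane

def haptic_force(device_pos: np.ndarray) -> np.ndarray:
    proxy = proxy_position(device_pos)
    return STIFFNESS_N_PER_M * (proxy - device_pos)

if __name__ == "__main__":
    # Device pushed 5 mm below the surface -> ~2 N push-back along +z.
    print(haptic_force(np.array([0.1, 0.2, -0.005])))
```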

방사선 분포 모니터링 시스템 (Radiation level distribution monitoring system)

  • 최영수;박순용;이종민
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1996년도 한국자동제어학술회의논문집(국내학술편); 포항공과대학교, 포항; 24-26 Oct. 1996 / pp. 828-831 / 1996
  • A radiation monitoring system is needed at nuclear power plants and nuclear facilities. Manual survey techniques are commonly used, but they are time-consuming and somewhat inaccurate. Automatic radiation surveys are important because they provide significant savings in man-rem exposure and labor costs. An unmanned, remote automatic radiation measurement system should be small and lightweight so that it can be mounted on a robotic system. The system we have developed consists of a detection part, a signal processing part, an interface, and software. Position information is obtained using a collimator. The measurement is performed by scanning the detector, and image processing techniques are used to display the radiation levels. We designed the collimators, detectors, and signal processing circuit, and constructed a prototype system. The goal of the system is to map the radiation level distribution onto the camera image.
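The final goal quoted above, mapping the radiation level distribution onto the camera image, can be pictured as overlaying a coarse, scanned dose grid on the camera frame. The sketch below shows one simple way to do that by upsampling the grid to the image size and blending it into one color channel; the array shapes and blending weight are illustrative assumptions, not details of the 1996 prototype.

```python
# Illustrative sketch: overlay a coarse scanned radiation grid on a camera
# frame by upsampling the grid to the image size and blending it into the
# red channel.
import numpy as np

def overlay_radiation(image: np.ndarray, dose_grid: np.ndarray,
                      alpha: float = 0.4) -> np.ndarray:
    """image: HxWx3 uint8 camera frame; dose_grid: small 2-D array of
    dose-rate readings from the detector scan."""
    h, w = image.shape[:2]
    gy = np.linspace(0, dose_grid.shape[0] - 1, h)
    gx = np.linspace(0, dose_grid.shape[1] - 1, w)
    # Nearest-neighbour upsampling keeps the sketch dependency-free.
    upsampled = dose_grid[np.round(gy).astype(int)[:, None],
                          np.round(gx).astype(int)[None, :]]
    norm = (upsampled - upsampled.min()) / (np.ptp(upsampled) + 1e-9)
    out = image.astype(float)
    out[..., 0] = (1 - alpha) * out[..., 0] + alpha * 255.0 * norm
    return out.astype(np.uint8)

if __name__ == "__main__":
    frame = np.full((240, 320, 3), 80, dtype=np.uint8)
    scan = np.random.default_rng(0).random((6, 8))   # fake 6x8 detector scan
    print(overlay_radiation(frame, scan).shape)
```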


레이저 비전 센서를 이용한 용접선 추적에 관한 시뮬레이션 (Computer simulation for seam tracking algorithm using laser vision sensor in robotic welding)

  • 정택민;성기은;이세헌
    • 한국레이저가공학회지 / Vol. 13, No. 2 / pp. 17-23 / 2010
  • Tracking a complicated weld seam is very important for welding automation. Recently, the laser vision sensor has become a useful sensing tool for finding seams. Until now, however, studies of welding automation using a laser vision sensor have focused on either image processing or feature recognition from the CCD camera. Although a simple algorithm can track a simple seam, it is extremely difficult to develop a seam-tracking algorithm when the seam is more complex. To overcome these difficulties, this study introduces a simulation system for developing seam tracking algorithms. Experiments verified that this method reduces the time and effort needed to develop a seam tracking algorithm and to implement the sensing device.


Backstepping-Based Control of a Strapdown Boatboard Camera Stabilizer

  • Setoodeh, Peyman;Khayatian, Alireza;Farjah, Ebrahim
    • International Journal of Control, Automation, and Systems / Vol. 5, No. 1 / pp. 15-23 / 2007
  • In surveillance, monitoring, and target tracking operations, high-resolution images should be obtained even if the target is far away. Frequent movements of vehicles such as boats degrade the image quality of onboard camera systems. Therefore, stabilizer mechanisms are required to stabilize the line of sight of boatboard camera systems against boat movements. This paper addresses the design and implementation of a strapdown boatboard camera stabilizer. A two-degree-of-freedom (DOF) pan/tilt robot performs the stabilization task. The main problem is divided into two subproblems dealing with attitude estimation and attitude control. It is assumed that an exact estimate of the boat movement is available from an attitude estimation system. Estimates obtained in this way are transformed into the robot coordinate frame to provide the desired trajectories, which the robot must track to compensate for the boat movements. Such a practical robotic system includes actuators with fast (electrical) dynamics and has more degrees of freedom than control inputs. The backstepping method is employed to deal with this problem by extending the control effectiveness.
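For readers unfamiliar with backstepping, the sketch below applies the standard two-step recursion to a plain double integrator tracking a sinusoidal reference. The paper extends the same idea through the actuators' electrical dynamics, so the plant, gains, and reference here are purely illustrative.

```python
# Generic illustration of two-step backstepping on a double integrator
# (theta_dot = omega, omega_dot = u) tracking a sinusoidal reference.
import math

K1, K2 = 4.0, 4.0          # assumed backstepping gains
DT = 0.001                 # Euler integration step [s]

def backstepping_u(theta, omega, t):
    # Reference trajectory and its first two derivatives (assumed sinusoid).
    th_d  = 0.5 * math.sin(t)
    th_d1 = 0.5 * math.cos(t)
    th_d2 = -0.5 * math.sin(t)
    e1 = theta - th_d
    omega_d = th_d1 - K1 * e1              # step 1: virtual control for omega
    e2 = omega - omega_d
    omega_d_dot = th_d2 - K1 * (omega - th_d1)
    return omega_d_dot - e1 - K2 * e2      # step 2: actual control input

if __name__ == "__main__":
    theta, omega = 0.3, 0.0
    for step in range(int(5.0 / DT)):       # 5 s simulation
        t = step * DT
        u = backstepping_u(theta, omega, t)
        theta += DT * omega
        omega += DT * u
    print("tracking error after 5 s:", theta - 0.5 * math.sin(5.0))
```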

기계학습 기반의 파이썬 모듈을 이용한 밀양아리랑우주천문대 전천 영상의 운량 모니터링 프로그램 개발 (Development of the Cloud Monitoring Program using Machine Learning-based Python Module from the MAAO All-sky Camera Images)

  • 임구;김도형;김동현;박근홍
    • 한국지구과학회지 / Vol. 45, No. 2 / pp. 111-120 / 2024
  • Cloud cover is one of the important factors in sustaining astronomical observations. In the past, observers had no choice but to judge the weather themselves, but with the development of remote and automated observation systems, the observer's role has become relatively smaller. Moreover, because clouds take various forms and move quickly, judging cloud cover automatically is not easy. In this study, we developed a cloud monitoring program by applying "cloudynight", a machine-learning-based Python module, to all-sky camera images from the Miryang Arirang Astronomical Observatory (MAAO). The all-sky images were divided into subregions, and a machine learning model was built by training on 16 features of each of 39,996 subregions. The F1 score obtained on the validation sample was 0.97, showing that the model performs very well. Cloudiness is computed as the fraction of subregions identified as cloud among all subregions, and the automatic observation program was configured to suspend observations when the cloudiness exceeds 0.6 over the last 30 minutes. Under this rule, cases in which the model misjudged the cloudiness badly enough to affect observations rarely occurred. With this machine learning model, we expect successful automated observations with the 0.7 m telescope of the Miryang Arirang Astronomical Observatory.
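Based on the quantities quoted above (cloudiness as the cloudy-subregion fraction, suspension when it exceeds 0.6 over the last 30 minutes), a minimal sketch of the decision rule might look like the following; the data structures and the "every sample in the window" reading of the rule are illustrative assumptions, not the cloudynight API.

```python
# Minimal sketch of the cloudiness metric and the observation-stop rule
# described above. Not the cloudynight API; one possible reading of the rule.
from collections import deque
from datetime import datetime, timedelta

def cloudiness(subregion_labels):
    """Fraction of subregions classified as cloud (labels are booleans)."""
    labels = list(subregion_labels)
    return sum(labels) / len(labels) if labels else 0.0

class ObservationGate:
    def __init__(self, threshold=0.6, window=timedelta(minutes=30)):
        self.threshold = threshold
        self.window = window
        self.history = deque()   # (timestamp, cloudiness) pairs

    def update(self, timestamp: datetime, value: float) -> bool:
        """Record a new cloudiness value; return True if observing may continue."""
        self.history.append((timestamp, value))
        while self.history and timestamp - self.history[0][0] > self.window:
            self.history.popleft()
        # Suspend only if every sample in the window exceeds the threshold
        # (one interpretation of "exceeds 0.6 over the last 30 minutes").
        return not all(v > self.threshold for _, v in self.history)

if __name__ == "__main__":
    gate = ObservationGate()
    start = datetime(2024, 1, 1, 22, 0)
    labels = [True] * 70 + [False] * 30          # 0.7 cloudiness
    for minute in range(40):
        ok = gate.update(start + timedelta(minutes=minute), cloudiness(labels))
    print("continue observing?", ok)
```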

로봇 환경에서의 2DPCA 기반 알고리즘의 비교 연구 (Performance Comparison of 2DPCA based Face Recognition algorithm under Robotic Environments)

  • 박범철;곽근창;윤호섭
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2007년도 하계종합학술대회 논문집 / pp. 217-218 / 2007
  • Face recognition is one of the most important techniques for building intelligent robots that provide useful services to humans. In this paper, we present a comparative study of the original PCA, 2DPCA, 2DPCA-based algorithms, and LDA in a robotic environment. The database was collected through the robot's camera in a laboratory arranged like a home environment, and it takes into account the distance variations that can arise in such an environment.
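As a reference point for the comparison above, the core of 2DPCA can be sketched in a few lines: build the image covariance matrix directly from the 2-D face images, keep its leading eigenvectors, and project each image onto them to get a small feature matrix. The array shapes and component count below are illustrative assumptions, not the paper's experimental settings.

```python
# Sketch of the core of 2DPCA for face images (not the paper's full
# comparison pipeline).
import numpy as np

def fit_2dpca(images: np.ndarray, n_components: int = 5) -> np.ndarray:
    """images: (M, h, w) stack of grayscale faces; returns (w, d) projection."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image covariance matrix G = (1/M) * sum_i (A_i - Abar)^T (A_i - Abar)
    G = np.einsum('ihw,ihv->wv', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)            # ascending eigenvalues
    return eigvecs[:, ::-1][:, :n_components]       # top-d eigenvectors

def project(images: np.ndarray, components: np.ndarray) -> np.ndarray:
    """Feature matrices Y_i = A_i X, shape (M, h, d)."""
    return images @ components

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    faces = rng.random((20, 32, 32))                # fake 32x32 face images
    X = fit_2dpca(faces, n_components=4)
    feats = project(faces, X)
    print(feats.shape)                              # (20, 32, 4)
```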


퍼지신경망을 이용한 로보트의 비쥬얼서보제어 (Visual servo control of robots using fuzzy-neural-network)

  • 서은택;정진현
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1994년도 Proceedings of the Korea Automatic Control Conference, 9th (KACC); Taejeon, Korea; 17-20 Oct. 1994 / pp. 566-571 / 1994
  • This paper presents an image-based visual servo control scheme for tracking a workpiece with a hand-eye coordinated robotic system using a fuzzy neural network. The goal is to control the relative position and orientation between the end-effector and a moving workpiece using a single camera mounted on the end-effector of the robot manipulator. We developed a fuzzy neural network that consists of a network-model fuzzy system and supervised learning rules. The fuzzy neural network is applied to approximate the nonlinear mapping that transforms the image features and their changes into the desired camera motion. In addition, a control strategy for real-time relative motion control based on this approximation is presented. Computer simulation results illustrate the effectiveness of the fuzzy-neural-network method for visual servoing of a robot manipulator.
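For context, the classical analytic counterpart of the mapping that the fuzzy neural network approximates is the image-based visual servoing law v = -λ L⁺ e built from the point-feature interaction matrix; a sketch is given below, with the gain, depths, and feature values as illustrative assumptions.

```python
# Classical image-based visual servoing with the point-feature interaction
# matrix: v = -lambda * pinv(L) @ e. Shown for context only; the paper
# replaces this analytic mapping with a fuzzy-neural-network approximation.
import numpy as np

def interaction_matrix(x: float, y: float, Z: float) -> np.ndarray:
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, targets, depths, gain=0.5):
    """Stack per-point interaction matrices and return the 6-DOF camera
    velocity [vx, vy, vz, wx, wy, wz] that drives features toward targets."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(L) @ e

if __name__ == "__main__":
    current = [(0.12, 0.05), (-0.10, 0.07), (0.02, -0.11), (-0.05, -0.04)]
    desired = [(0.10, 0.00), (-0.10, 0.00), (0.00, -0.10), (0.00, -0.10)]
    print(ibvs_velocity(current, desired, depths=[0.8] * 4))
```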


로봇 환경의 템플릿 기반 얼굴인식 알고리즘 성능 비교 (Performance Comparison of Template-based Face Recognition under Robotic Environments)

  • 반규대;곽근창;지수영;정연구
    • 로봇학회논문지 / Vol. 1, No. 2 / pp. 151-157 / 2006
  • This paper is concerned with template-based face recognition from robot camera images with illumination and distance variations. The approaches considered are Eigenface, Fisherface, and ICAface, which are among the most representative template-based recognition techniques. These approaches are based on popular unsupervised and supervised statistical techniques, respectively, that find useful image representations. We therefore focus on comparing their performance on robot camera images with such unwanted variations. Comprehensive experiments are conducted on a database with illumination and distance variations.
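As a reminder of what the template-based pipeline shared by these methods looks like, the sketch below implements the Eigenface step: vectorize the gallery faces, run PCA via an SVD, and match a probe to the nearest projected gallery face. Image sizes and the number of components are illustrative assumptions.

```python
# Sketch of the Eigenface step shared by the compared template methods:
# PCA on vectorized faces, then nearest-neighbour matching in the subspace.
import numpy as np

def train_eigenfaces(gallery: np.ndarray, n_components: int = 10):
    """gallery: (M, h, w) grayscale faces -> (mean, basis, projections)."""
    flat = gallery.reshape(len(gallery), -1).astype(float)
    mean = flat.mean(axis=0)
    centered = flat - mean
    # SVD of the centered data gives the principal axes (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                     # (d, h*w)
    return mean, basis, centered @ basis.T        # gallery projections (M, d)

def recognize(probe: np.ndarray, mean, basis, gallery_proj) -> int:
    """Return the index of the nearest gallery face in eigenface space."""
    p = (probe.ravel().astype(float) - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(gallery_proj - p, axis=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    gallery = rng.random((15, 24, 24))            # fake 24x24 face images
    mean, basis, proj = train_eigenfaces(gallery, n_components=8)
    print("matched gallery index:", recognize(gallery[3], mean, basis, proj))
```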
