• Title/Summary/Keyword: a range finder

Search results: 154 items (processing time: 0.027 s)

펄스 위상차와 스트럭춰드 라이트를 이용한 이동 로봇 시각 장치 구현 (Implementation of vision system for a mobile robot using pulse phase difference & structured light)

  • 방석원;정명진;서일홍;오상록
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 제어로봇시스템학회 1991년도 한국자동제어학술회의논문집(국내학술편); KOEX, Seoul; 22-24 Oct. 1991
    • /
    • pp.652-657
    • /
    • 1991
  • To date, the application areas of mobile robots have expanded, and many types of LRF (Laser Range Finder) systems have been developed to acquire three-dimensional information about unknown environments. In the real world, however, various noise sources (sunlight, fluorescent light) make it difficult to separate the reflected laser light from the noise. To overcome this restriction, we have developed a new type of vision system that enables a mobile robot to measure the distance to an object located 1-5 m ahead with an error of less than 2%. The separation and detection algorithm used in this system consists of a pulse phase difference method and multi-stripe structured light. The effectiveness and feasibility of the proposed vision system are demonstrated by 3-D maps of detected objects and an analysis of computation time.

  • PDF
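
The pulse phase-difference method mentioned above recovers range from the phase shift between the emitted and returned modulated light. Below is a minimal sketch of that general principle; the modulation frequency, helper name, and example values are illustrative assumptions, not the paper's implementation.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def range_from_phase(delta_phi_rad: float, mod_freq_hz: float) -> float:
    """Range from the phase shift of an amplitude-modulated laser signal.

    The light travels to the target and back, so the round trip covers 2*d
    and d = c * delta_phi / (4 * pi * f). The result is unambiguous only
    for delta_phi < 2*pi, i.e. d < c / (2 * f).
    """
    return C * delta_phi_rad / (4.0 * math.pi * mod_freq_hz)

if __name__ == "__main__":
    # a 10 MHz modulation and a 90-degree phase shift give roughly 3.75 m
    print(range_from_phase(math.pi / 2, 10e6))
```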

신뢰성 높은 위치 인식을 위하여 방향을 고려한 확률적 스캔 매칭 기법 (Direction Augmented Probabilistic Scan Matching for Reliable Localization)

  • 최민용;최진우;정완균
    • 제어로봇시스템학회논문지
    • /
    • Vol. 17, No. 12
    • /
    • pp.1234-1239
    • /
    • 2011
  • Scan matching is widely used in the localization and mapping of mobile robots. This paper presents a probabilistic scan matching method. To improve the performance of scan matching, the direction of each data point is incorporated into the matching; the direction of a data point is calculated from a line fitted to its neighboring points. This incorporation improved the matching performance: the number of iterations in the scan matching decreased, and the tolerance to large rotations between scans increased. Experiments based on real laser range finder data verified the performance of the proposed direction augmented probabilistic scan matching algorithm.
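
The key step in the abstract is assigning each scan point a direction obtained from a line fitted to its neighboring points. A minimal sketch of one way to compute such directions is shown below; the neighborhood size k and the eigenvector-based line fit are assumptions, not the authors' exact formulation.

```python
import numpy as np

def point_directions(points: np.ndarray, k: int = 5) -> np.ndarray:
    """Estimate a unit direction (local tangent) for every 2D scan point.

    For each point, a line is fitted to its k nearest neighbours; the
    dominant eigenvector of the neighbourhood covariance is taken as the
    point's direction.
    """
    dirs = np.zeros_like(points, dtype=float)
    for i, p in enumerate(points):
        d2 = np.sum((points - p) ** 2, axis=1)
        nbrs = points[np.argsort(d2)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        eigval, eigvec = np.linalg.eigh(cov)
        dirs[i] = eigvec[:, np.argmax(eigval)]  # axis of largest variance
    return dirs
```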

Mixed-input neural networks for daylight prediction

  • Thanh Luan LE;Sung-Ah KIM
    • 국제학술발표논문집
    • /
    • The 10th International Conference on Construction Engineering and Project Management
    • /
    • pp.973-979
    • /
    • 2024
  • In this research, we present the implementation of a mixed-input neural network for daylight prediction in the architectural design process. This approach harnesses the advantages of both image and numerical inputs to construct a robust neural network model. The hybrid model consists of two branches, each handling in-depth information about the building. Consequently, this model can effectively accommodate a wide range of building layouts, incorporating additional information for enhanced predictions. The building data were created using PlanFinder in Rhino Grasshopper, while simulation data were generated using Honeybee and Ladybug. Weather data were collected from three distinct localities in Vietnam: Ha Noi, Da Nang, and Ho Chi Minh City. The neural network demonstrates outstanding performance, achieving an R-squared (R2) value of 0.95, and the overall percentage difference on the testing dataset ranges from 0 to 20.7%.
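
A two-branch (mixed-input) model of the kind described can be sketched with Keras as follows; the input shapes, layer sizes, and single regression output are illustrative assumptions rather than the paper's architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Image branch: a building layout rendered as a small grayscale raster
img_in = keras.Input(shape=(64, 64, 1), name="layout_image")
x = layers.Conv2D(16, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Flatten()(x)

# Numeric branch: scalar design and weather parameters
num_in = keras.Input(shape=(8,), name="numeric_features")
y = layers.Dense(32, activation="relu")(num_in)

# Merge both branches and regress a daylight metric
z = layers.Concatenate()([x, y])
z = layers.Dense(64, activation="relu")(z)
out = layers.Dense(1, name="daylight_metric")(z)

model = keras.Model(inputs=[img_in, num_in], outputs=out)
model.compile(optimizer="adam", loss="mse")
```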

초음파 센서 오차 감소를 위한 실내 환경의 거리 자료 분석 (Distance Data Analysis of Indoor Environment for Ultrasonic Sensor Error Decrease)

  • 임병현;고낙용;황종선;김영민;박현철
    • 한국전기전자재료학회:학술대회논문집
    • /
    • 한국전기전자재료학회 2003년도 춘계학술대회 논문집 기술교육전문연구회
    • /
    • pp.62-65
    • /
    • 2003
  • When a mobile robot moves around autonomously without man-made landmarks, it is essential to recognize the placement of surrounding objects, especially for self-localization, obstacle avoidance, and target classification and localization. To recognize the environment, many kinds of sensors are used, such as ultrasonic sensors, laser range finders, and CCD cameras. Among these sensors, ultrasonic sensors (sonar) are inexpensive and easy to use. In this paper, we analyze sonar data and propose a method to recognize features of an indoor environment. It is assumed that the environment consists of planes, edges, and corners. For the analysis, sonar data of a plane, an edge, and a corner are accumulated over several given ranges. The data are filtered with a Kalman filter algorithm to eliminate noise. Then the data for each feature are compared with each other to extract the characteristics of each feature. We demonstrate the applicability of the proposed method using sonar data obtained from a sonar transducer rotating and scanning range information around an indoor environment.

  • PDF
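
The abstract filters the accumulated sonar ranges with a Kalman filter before comparing features. A minimal one-dimensional sketch under a constant-range (random-walk) model is given below; the noise variances and the function name are illustrative assumptions, not the paper's filter.

```python
def kalman_smooth(ranges, q=1e-4, r=4e-2):
    """One-dimensional Kalman filter over a sequence of sonar range readings.

    q: process noise variance (how fast the true range may drift)
    r: measurement noise variance of the sonar
    """
    x, p = ranges[0], 1.0          # initial state estimate and covariance
    smoothed = []
    for z in ranges:
        p = p + q                   # predict (constant-range model)
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # update with measurement z
        p = (1.0 - k) * p
        smoothed.append(x)
    return smoothed
```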

실외 자율주행 로봇을 위한 다수의 동적 장애물 탐지 및 선속도 기반 장애물 회피기법 개발 (Multiple Target Tracking and Forward Velocity Control for Collision Avoidance of Autonomous Mobile Robot)

  • 김선도;노치원;강연식;강성철;송재복
    • 제어로봇시스템학회논문지
    • /
    • Vol. 14, No. 7
    • /
    • pp.635-641
    • /
    • 2008
  • In this paper, we use a laser range finder (LRF) to detect both static and dynamic obstacles for the safe navigation of a mobile robot. LRF measurements containing the obstacles' geometric information are first processed to extract characteristic points of the obstacles in the sensor field of view. The dynamic states of the characteristic points are then approximated with a kinematic model and tracked by associating the measurements using a Probabilistic Data Association Filter. Finally, a collision avoidance algorithm is developed using a fuzzy decision-making algorithm that depends on the states of the obstacles tracked by the proposed obstacle tracking algorithm. The performance of the proposed algorithm is evaluated through experiments with an experimental mobile robot.
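
Tracking the characteristic points with a kinematic model typically starts from a constant-velocity prediction step, which a PDAF-style gated update would then follow. A minimal sketch of such a prediction is below; the state layout and process-noise model are assumptions, not the authors' exact filter.

```python
import numpy as np

def cv_predict(x: np.ndarray, P: np.ndarray, dt: float, q: float = 0.5):
    """Constant-velocity prediction step for a tracked obstacle point.

    State x = [px, py, vx, vy]; P is its 4x4 covariance. A PDAF-style
    tracker would follow this with a gated, probability-weighted update.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    G = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]], dtype=float)
    Q = q * G @ G.T                      # white-acceleration process noise
    return F @ x, F @ P @ F.T + Q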

영상처리와 거리센서를 융합한 객체 추적용 드론의 연구 (A Study of Object Tracking Drones Combining Image Processing and Distance Sensor)

  • 양우석;천명현;장건우;김상훈
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2017년도 추계학술발표대회
    • /
    • pp.961-964
    • /
    • 2017
  • As drones have become more popular, the risk of accidents has grown, and research on safe piloting methods has become necessary. Autonomous flight control technology that does not depend on the operator's piloting skill is therefore required; to implement it more reliably, we used the Robot Operating System (ROS), which has drawn attention as a software platform for autonomous driving. Based on ROS, we aim to implement a stable autonomous flight control system that tracks objects autonomously using a Laser Range Finder (LRF) and a Particle Filter and flies while intelligently avoiding obstacles.
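
A particle filter for LRF-based target tracking, as mentioned in the abstract, can be illustrated with a single weight-update and resampling step; the Gaussian range likelihood, the resampling threshold, and the function names are assumptions, not the system's actual ROS implementation.

```python
import numpy as np

def pf_step(particles, weights, measured_range, expected_range_fn, sigma=0.1):
    """One update/resample step of a particle filter tracking a target.

    particles: (N, 2) candidate target positions; expected_range_fn maps a
    position to the range the LRF should report for it.
    """
    expected = np.array([expected_range_fn(p) for p in particles])
    weights = weights * np.exp(-0.5 * ((measured_range - expected) / sigma) ** 2)
    weights = weights / (weights.sum() + 1e-12)
    # multinomial resampling when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = np.random.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```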

Performance Comparison of Sensor-Programming Schemes According to the Shapes of Obstacles

  • Chung, Jong-In;Chae, Yi-Geun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • Vol. 13, No. 3
    • /
    • pp.56-62
    • /
    • 2021
  • MSRDS (Microsoft Robotics Developer Studio) provides the ability to simulate these technologies. SPL (Simple Programming Language) in MSRDS provides many functions for sensor programming to control autonomous robots. Sensor programming in SPL can be implemented with two types of scheme: a sensor-notification procedure and a while-loop. After studying the advantages and disadvantages of the sensor-notification procedure and the while-loop scheme, we considered three programming schemes to control the robot's movement. We also created simulation environments to evaluate the performance of the three schemes when applied to four different mazes. The simulation environment consisted of a maze and a robot equipped with the most powerful sensor, the LRF (Laser Range Finder). We measured the travel time and the robot actions (number of turns and number of collisions) needed to escape each maze and compared the performance of the three schemes across the four mazes.
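
The two SPL schemes compared above amount to polling a sensor in a loop versus reacting to sensor notifications. A generic Python analogue of the two patterns is sketched below; it is not SPL code, and the helper names and avoidance rule are illustrative assumptions.

```python
import time

def polling_loop(read_lrf, drive):
    """While-loop (polling) scheme: the control loop keeps reading the sensor."""
    while True:
        distances = read_lrf()          # blocking read of the range scan
        drive(avoid(distances))         # steer away from close returns
        time.sleep(0.05)

def on_lrf_notification(distances, drive):
    """Notification scheme: this handler runs only when a new scan arrives."""
    drive(avoid(distances))

def avoid(distances, threshold=0.5):
    """Turn toward the side with more free space when something is near."""
    half = len(distances) // 2
    left, right = distances[:half], distances[half:]
    if min(distances) < threshold:
        return "turn_left" if sum(left) > sum(right) else "turn_right"
    return "forward"
```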

평면 구조물의 단일점 일치를 이용한 2차원 레이저 거리감지센서의 자동 캘리브레이션 (Autonomous Calibration of a 2D Laser Displacement Sensor by Matching a Single Point on a Flat Structure)

  • 정지훈;강태선;신현호;김수종
    • 제어로봇시스템학회논문지
    • /
    • Vol. 20, No. 2
    • /
    • pp.218-222
    • /
    • 2014
  • In this paper, we introduce an autonomous calibration method for a 2D laser displacement sensor (e.g., a laser vision sensor or laser range finder) that matches a single point on a flat structure. Many arc welding robots are fitted with a 2D laser displacement sensor to expand their application by recognizing the environment (e.g., base metal and seam). In such systems, sensing data must be transformed into the robot's coordinates, which requires knowing the geometric relation (i.e., rotation and translation) between the robot's coordinate frame and the sensor's coordinate frame. Calibration is the process of inferring this geometric relation between the sensor and the robot. Generally, more than three matched points are required to infer the geometric relation; we introduce a novel method that calibrates with only one matched point, using a specific flat structure (a circular hole) that makes this possible. By moving the robot to a specific pose, the rotation component of the calibration result is held constant so that only a single point is needed. The flat structure can be installed easily in a manufacturing site because it has almost no volume (i.e., it is nearly a 2D structure). The calibration process is fully autonomous and needs no manual operation. The robot carrying the sensor moves to the specific pose by sensing features of the circular hole, such as the chord length and the chord's center position. We show the precision of the proposed method through repeated experiments in various situations. Furthermore, we applied the result of the proposed method to sensor-based seam tracking with a robot and report the deviation of the robot's TCP (Tool Center Point) trajectory. This experiment shows that the proposed method ensures precision.
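
With the rotation fixed by moving the robot to the specific pose, a single matched point determines the remaining translation. A minimal sketch of that step is below; the argument names and frame conventions are illustrative assumptions.

```python
import numpy as np

def translation_from_single_match(p_robot, p_sensor, R):
    """Recover the sensor-to-robot translation from one matched point.

    With the rotation R already known (held constant by moving the robot
    to a specific pose), p_robot = R @ p_sensor + t, so a single
    correspondence gives t = p_robot - R @ p_sensor.
    """
    return np.asarray(p_robot) - np.asarray(R) @ np.asarray(p_sensor)
```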

보행보조시스템의 조작 편리성 향상을 위한 사용자의 선속도 및 회전각속도 검출 알고리즘 (An Algorithm for Detecting Linear Velocity and Angular Velocity for Improve Convenience of Assistive Walking System)

  • 김병철;이원영;엄수홍;장문석;김평수;이응혁
    • 재활복지공학회논문지
    • /
    • Vol. 10, No. 4
    • /
    • pp.321-328
    • /
    • 2016
  • In this paper, to improve the operating convenience of a motorized walking-assist system used by elderly people whose walking is limited by aging, we propose a walking-state method that can be fused with the existing walking-intention method. The walking-intention method detects whether the user is gripping the handlebar and uses this as a simple trigger signal for judging the user's intention to walk. For the walking-state part, the user's linear and angular velocities detected with a laser range finder are used as the linear and angular velocities of the walking-assist system's center; for this purpose, we propose a technique that estimates the center points of both of the user's legs and derives a virtual body center point from them. In the experiments, the linear and angular velocities of the user were compared with those of the walking-assist system's center. The error rates of the linear and angular velocities between the user and the system's center were 1% and 2.77%, respectively, confirming that the user's linear and angular velocities can be fed to the system's dual drive units and velocity calculator. Accordingly, fusing the proposed walking-intention and walking-state methods is expected to prevent the user from being dragged along by the walking-assist system and to prevent malfunctions caused by unskilled operation.
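
The user's linear and angular velocities can be derived from the virtual body center estimated from the two detected leg centers, roughly as sketched below; the midpoint definition, the heading source, and the scan-interval handling are assumptions, not the paper's exact algorithm.

```python
import math

def body_center(left_leg, right_leg):
    """Virtual body center as the midpoint of the two detected leg centers."""
    return ((left_leg[0] + right_leg[0]) / 2.0,
            (left_leg[1] + right_leg[1]) / 2.0)

def velocities(prev_center, curr_center, prev_heading, curr_heading, dt):
    """Linear and angular velocity of the user between two LRF scans."""
    v = math.hypot(curr_center[0] - prev_center[0],
                   curr_center[1] - prev_center[1]) / dt
    # wrap the heading change into [-pi, pi) before differentiating
    dth = (curr_heading - prev_heading + math.pi) % (2 * math.pi) - math.pi
    return v, dth / dt
```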

실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법 (A 2D / 3D Map Modeling of Indoor Environment)

  • 조상우;박진우;권용무;안상철
    • 한국HCI학회:학술대회논문집
    • /
    • 한국HCI학회 2006년도 학술대회 1부
    • /
    • pp.355-361
    • /
    • 2006
  • In large-scale environments like airports, museums, large warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will provide surveillance information about such large-scale environments and communicate with a human operator using that data, for example whether an object is present or a window is open. Both for visualization of information and as a human-machine interface for remote control, a 3D model can give much more useful information than the typical 2D maps used in many robotic applications today. It is easier to understand and makes the user feel present at the robot's location, so the user can interact with the robot more naturally in a remote setting and see structures such as windows and doors that cannot be seen in a 2D model. In this paper we present a simple and easy-to-use method to obtain a 3D textured model. To convey reality, the 3D models must be integrated with real scenes. Most other 3D modeling methods use two data acquisition devices: one for building the 3D model and another for obtaining realistic textures; here, the former is a 2D laser range finder and the latter a common camera. Our algorithm consists of building a measurement-based 2D metric map acquired by the laser range finder, texture acquisition and stitching, and texture mapping onto the corresponding 3D model. The algorithm is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering textures. Our geometric 3D model consists of planes that model the floor and walls, and the geometry of the planes is extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two 1394 cameras with a wide field of view. Image stitching and cutting are used to generate texture images corresponding to the 3D model. The algorithm is applied to two cases: a corridor and a four-walled, room-like space. The generated 3D map model of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the WWW.

  • PDF
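
The geometric model described above extrudes wall segments from the laser-built 2D metric map into vertical planes. A minimal sketch of that extrusion is below; the uniform wall height and the quad representation are assumptions, not the authors' exact modeling code.

```python
def extrude_wall(p0, p1, height):
    """Turn a 2D wall segment from the metric map into a 3D quad (4 vertices).

    p0 and p1 are (x, y) endpoints extracted from the laser-built 2D map;
    the wall is assumed vertical with a uniform ceiling height, ready to be
    textured and exported (e.g. to VRML).
    """
    (x0, y0), (x1, y1) = p0, p1
    return [(x0, y0, 0.0), (x1, y1, 0.0), (x1, y1, height), (x0, y0, height)]
```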