• Title/Summary/Keyword: Mobile Robots


Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variance. Human users must be very cooperative to obtain reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the shortcomings of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective at detecting the movements of the human body while their performance is not limited by environmental context such as lighting conditions or a camera's field of view. Moreover, accelerometers are now widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 letters of the English alphabet, an essential repertoire for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with sets of 8-10 simple, easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for successful pattern recognition. To improve discriminative power over the complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3-5% better than those using raw features, e.g., the acceleration signal itself or statistical summaries. To minimize trajectory distortion, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture vary greatly across performers. To tackle this problem, online incremental learning is applied, making the system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which not only slows down classification but also degrades recall performance. Regarding the latter, we observed that as the number of reference patterns grows, some reference patterns contribute more to false-positive classification. We therefore devised an algorithm that optimizes the reference pattern set based on the positive and negative contribution of each reference pattern; it runs periodically and removes reference patterns with a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30-50. Each letter was performed 5 times per participant using a Nintendo® Wii™ remote, with the acceleration signal sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and very high pairwise confusion rates; the major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed heavily to the false-positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior given its larger number of pattern classes and their complexity. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the iPhone™. The participating children exhibited improved concentration and reacted actively to the service with our gesture interface. To verify the effectiveness of the interface, the children took a test after experiencing an English teaching service; those who used the gesture interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for enabling real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g., touch screen, vision, and voice.
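
The abstract names the reference-pattern pruning scheme but not its exact form. Below is a minimal Python sketch of instance-based learning with the described positive/negative contribution bookkeeping; the 1-NN matcher, the fixed-length trajectory assumption, and the two thresholds are illustrative assumptions, not details from the paper.

```python
import numpy as np

class IBLGestureClassifier:
    """Instance-based learner with periodic reference-pattern pruning,
    sketching the scheme described in the abstract (thresholds assumed)."""

    def __init__(self, pos_min=0.1, neg_max=0.5):
        self.refs = []   # list of (trajectory, label); trajectories are
                         # assumed resampled to a common fixed length
        self.pos = []    # times each ref matched the correct class
        self.neg = []    # times each ref caused a false positive
        self.pos_min, self.neg_max = pos_min, neg_max

    def add_reference(self, traj, label):
        self.refs.append((np.asarray(traj, dtype=float), label))
        self.pos.append(0)
        self.neg.append(0)

    def classify(self, traj, true_label=None):
        """1-NN over Euclidean distance; assumes at least one reference."""
        traj = np.asarray(traj, dtype=float)
        dists = [np.linalg.norm(traj - r) for r, _ in self.refs]
        i = int(np.argmin(dists))
        pred = self.refs[i][1]
        if true_label is not None:            # online incremental feedback
            if pred == true_label:
                self.pos[i] += 1
            else:
                self.neg[i] += 1
        return pred

    def prune(self):
        """Periodically drop refs with very low positive or high negative
        contribution, as the abstract describes."""
        kept = [
            (r, p, n) for (r, p, n) in zip(self.refs, self.pos, self.neg)
            if p + n == 0 or (p / (p + n) >= self.pos_min
                              and n / (p + n) <= self.neg_max)
        ]
        self.refs = [r for r, _, _ in kept]
        self.pos = [p for _, p, _ in kept]
        self.neg = [n for _, _, n in kept]
```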

Analysis of Defective Causes in Real Time and Prediction of Facility Replacement Cycle based on Big Data (빅데이터 기반 실시간 불량품 발생 원인 분석 및 설비 교체주기 예측)

  • Hwang, Seung-Yeon;Kwak, Kyung-Min;Shin, Dong-Jin;Kwak, Kwang-Jin;Rho, Young-J;Park, Kyung-won;Park, Jeong-Min;Kim, Jeong-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.6 / pp.203-212 / 2019
  • Along with the recent fourth industrial revolution, the world's manufacturing powerhouses are pursuing national strategies to revive their sluggish manufacturing industries. In line with this trend, the Moon Jae-in government announced a strategy under which the advancement of science and technology leads the fourth industrial revolution. Intelligent information technologies such as IoT, cloud, big data, mobile, and AI, the key technologies of the fourth industrial revolution, are promoting the emergence of new industries such as robots and 3D printing and making the existing major manufacturing industries smarter. Advances in technologies such as smart factories have enabled IoT-based sensing to measure data that could not be collected before, and the data generated by each process has exploded accordingly. This paper therefore uses data generators to produce virtual data of the kind that can occur in smart factories, and uses it to analyze the causes of defects in real time and to predict facility replacement cycles.
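
As a rough illustration of the pipeline the abstract outlines, the following Python sketch generates virtual process records and attributes defects to the most deviant variable over a sliding window; every field name, distribution, and threshold is invented for illustration and is not from the paper.

```python
import random
from collections import deque

def generate_virtual_reading():
    """Emit one virtual smart-factory record (all fields are hypothetical)."""
    temp = random.gauss(70, 5)     # process temperature, deg C
    vib = random.gauss(0.3, 0.1)   # spindle vibration, mm/s
    defective = temp > 80 or vib > 0.55
    return {"temp": temp, "vib": vib, "defective": defective}

window = deque(maxlen=500)         # sliding window over the live stream
cause = "none observed"

for _ in range(2000):
    window.append(generate_virtual_reading())
    defects = [r for r in window if r["defective"]]
    if defects:
        # naive cause attribution: which variable deviates most (in sigma)
        # from its nominal value among the defective records?
        t_dev = (sum(r["temp"] for r in defects) / len(defects) - 70) / 5
        v_dev = (sum(r["vib"] for r in defects) / len(defects) - 0.3) / 0.1
        cause = "temperature" if t_dev > v_dev else "vibration"

print(f"dominant defect cause in the last window: {cause}")
```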

Object Detection Method on Vision Robot using Sensor Fusion (센서 융합을 이용한 이동 로봇의 물체 검출 방법)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.14B no.4 / pp.249-254 / 2007
  • A mobile robot equipped with various sensors and a wireless camera is introduced. We show that this mobile robot can detect objects well by combining the results of active sensors with an image processing algorithm. First, to detect objects, active sensors such as infrared and ultrasonic sensors are employed together, and the distance between the object and the robot is calculated in real time from the sensors' output; the difference between the measured and calculated values is less than 5%. We focus on how to detect an object region well with the image processing algorithm, because this gives robots the ability to work for humans. This paper suggests an effective visual detection system for moving objects with specified color and motion information. The proposed method includes an object extraction and definition process that uses color transformation and AWUPC computation to decide whether a moving object exists. Shape information and a signature algorithm are used to segment objects from the background regardless of shape changes. We add weighting values to the results from the sensors and the camera, and the final results are combined into a single value representing the probability of an object within the limited distance. The sensor fusion technique improves the detection rate by at least 7% over techniques using an individual sensor.
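
The abstract describes weighting each sensor's result and combining them into a single object probability. A minimal sketch of such a weighted fusion follows; the weights, the 2 m range, and the linear distance-to-score mapping are assumptions, not the paper's values.

```python
def fuse_detections(ir_dist, us_dist, vision_conf,
                    w_ir=0.3, w_us=0.3, w_cam=0.4, max_range=2.0):
    """Combine active-sensor distances (m) and camera confidence [0, 1]
    into one object-presence probability (weights and range assumed)."""
    # map each distance to a presence score in [0, 1]: nearer -> higher
    ir_score = max(0.0, 1.0 - ir_dist / max_range)
    us_score = max(0.0, 1.0 - us_dist / max_range)
    return w_ir * ir_score + w_us * us_score + w_cam * vision_conf

# e.g. IR reads 0.8 m, ultrasonic reads 0.9 m, vision is 70% confident:
p = fuse_detections(0.8, 0.9, 0.7)
print(f"object probability: {p:.2f}")
```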

Mobile Sensor Velocity Optimization for Chemical Detection and Response in Chemical Plant Fence Monitoring (사업장의 경계면에서 화학물질 감지 및 대응을 위한 이동식 센서 배치 최적화)

  • Park, Myeongnam;Kim, Hyunseung;Cho, Jaehoon;Lulu, Addis;Shin, Dongil
    • Journal of the Korean Institute of Gas / v.21 no.2 / pp.41-49 / 2017
  • Recently, as the number of facilities using chemicals increases, the amounts handled are rapidly increasing as well. Chemical spills nevertheless occur steadily, and if large quantities of chemicals leak in a short time, they are likely to cause major damage. Industrial complexes use information obtained from numerous sensors to detect and monitor leak areas, and fixed-sensor approaches are being extended in the field to robots and drones. It is therefore necessary to propose a sensor placement method at the plant boundary for rapid detection and response, based on various leak scenarios reflecting the leak conditions and environmental conditions of the chemical handling process. In this study, COMSOL was used to analyze realistic accident scenarios for chemical leaks by applying medium parameters. Based on the accident scenarios, an objective function was chosen so that the velocity of each robot is calculated by weighting sensor detection probability, sensing time, and the number of sensed scenarios. We also confirmed the feasibility of the method through a reliability analysis for unexpected leak accidents. Based on these results, the approach is expected to help trace the leak source from the concentration data of portable sensors in future applications.
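
The abstract's objective function weights detection probability, sensing time, and the number of sensed scenarios. The toy Python sketch below shows that structure for choosing a patrol speed; the dispersion model, weights, and scenario format are invented stand-ins for the paper's COMSOL-based scenarios.

```python
def evaluate(velocity, scenario):
    """Toy model: a faster patrol along the fence shortens time-to-detection
    but lowers per-pass detection probability (purely illustrative)."""
    detect_prob = max(0.0, 1.0 - 0.05 * velocity)      # slower -> more reliable
    detect_time = scenario["fence_length"] / velocity  # one full sweep, s
    return detect_prob, detect_time

def objective(velocity, scenarios, w_prob=0.5, w_time=0.3, w_cover=0.2):
    """Weighted sum of the three criteria named in the abstract."""
    probs, times, covered = [], [], 0
    for sc in scenarios:
        p, t = evaluate(velocity, sc)
        probs.append(p)
        times.append(t)
        covered += p > 0.5          # scenario counted as "sensed"
    return (w_prob * sum(probs) / len(probs)
            - w_time * (sum(times) / len(times)) / 100.0
            + w_cover * covered / len(scenarios))

scenarios = [{"fence_length": 200.0}, {"fence_length": 350.0}]
best = max(range(1, 16), key=lambda v: objective(float(v), scenarios))
print(f"best patrol speed: {best} m/s")
```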

A 2D / 3D Map Modeling of Indoor Environment (실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법)

  • Jo, Sang-Woo;Park, Jin-Woo;Kwon, Yong-Moo;Ahn, Sang-Chul
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.355-361 / 2006
  • In large-scale environments such as airports, museums, large warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will provide surveyed information about large-scale environments and communicate with a human operator using such data, e.g., whether an object is present or a window is open. Both for the visualization of information and as a human-machine interface for remote control, a 3D model can give much more useful information than the typical 2D maps used in many robotic applications today. It is easier to understand, makes the user feel present at the robot's location so that interaction is more natural in a remote circumstance, and shows structures such as windows and doors that cannot be seen in a 2D model. In this paper we present a simple, easy-to-use method to obtain a 3D textured model. To convey reality, the 3D model must be integrated with real scenes. Most other 3D modeling methods use two data acquisition devices: one for the 3D model, here a 2D laser range finder, and another for realistic textures, here a common camera. Our algorithm consists of building a measurement-based 2D metric map acquired by the laser range finder, texture acquisition/stitching, and texture mapping onto the corresponding 3D model. The implementation uses a laser sensor for the 2D/3D metric map and two cameras for gathering texture. Our geometric 3D model consists of planes that model the floor and walls, with the plane geometry extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two 1394 cameras with wide field-of-view angles; image stitching and image cutting are used to generate textured images matching the 3D model. The algorithm is applied to two cases: a corridor and a four-walled, room-like space in a building. The generated 3D map model of the indoor environment is output in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the WWW.
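
The plane-based model the abstract describes, with floor and walls extracted from a 2D metric map, can be illustrated by extruding 2D wall segments into 3D quads, as in the sketch below; the segment format and the fixed wall height are assumptions.

```python
import numpy as np

def extrude_walls(segments_2d, wall_height=2.5):
    """Turn 2D wall segments (e.g. from a laser metric map) into 3D wall
    quads, as in a plane-based floor/wall model; each quad is four corners
    in counter-clockwise order."""
    quads = []
    for (x1, y1), (x2, y2) in segments_2d:
        quads.append(np.array([
            [x1, y1, 0.0], [x2, y2, 0.0],                   # floor edge
            [x2, y2, wall_height], [x1, y1, wall_height],   # top edge
        ]))
    return quads

# a toy corridor: two parallel walls, 10 m long and 2 m apart
corridor = [((0, 0), (10, 0)), ((0, 2), (10, 2))]
for quad in extrude_walls(corridor):
    print(quad)
```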


An SoC-based Context-Aware System Architecture (SoC 기반 상황인식 시스템 구조)

  • Sohn, Bong-Ki;Lee, Keon-Myong;Kim, Jong-Tae;Lee, Seung-Wook;Lee, Ji-Hyong;Jeon, Jae-Wook;Cho, Jun-Dong
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.4 / pp.512-516 / 2004
  • Context-aware computing has been attracting attention as an approach to alleviating the inconvenience of human-computer interaction. This paper proposes a context-aware system architecture to be implemented on an SoC (System-on-a-Chip). The proposed architecture supports sensor abstraction, a notification mechanism for context changes, modular development, easy service composition using if-then rules, and flexible context-aware service implementation. It consists of a communication unit, a processing unit, a blackboard, and a rule-based system unit; the first three components reside in the microprocessor part of the SoC, while the rule-based system unit is implemented in hardware. An SoC system based on the proposed architecture has been designed and tested on an SoC development platform, SystemC, and the feasibility of the behavior modules for the microprocessor part has been evaluated by implementing software modules on a conventional computer platform. This SoC-based context-aware system architecture is intended for mobile intelligent robots that would assist elderly people at home in a context-aware manner.
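
A minimal sketch of the blackboard-plus-rules pattern the abstract describes is shown below, with abstracted sensor values posted as facts and if-then rules reacting to context changes; the facts and the example rule are invented, and the hardware rule engine is not modeled.

```python
class Blackboard:
    """Shared context store: sensors post abstracted values, and
    registered rules are notified on every context change."""
    def __init__(self):
        self.facts = {}
        self.listeners = []

    def post(self, key, value):
        self.facts[key] = value
        for rule in self.listeners:    # notification mechanism
            rule(self.facts)

def make_rule(condition, action):
    """Compose a service as an if-then rule over blackboard facts."""
    def rule(facts):
        if condition(facts):
            action(facts)
    return rule

bb = Blackboard()
bb.listeners.append(make_rule(
    lambda f: f.get("user_location") == "bedroom" and f.get("time") == "night",
    lambda f: print("service: dim the lights")))

bb.post("time", "night")           # no rule fires yet
bb.post("user_location", "bedroom")  # -> "service: dim the lights"
```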

Effective Utilization of Domain Knowledge for Relational Reinforcement Learning (관계형 강화 학습을 위한 도메인 지식의 효과적인 활용)

  • Kang, MinKyo;Kim, InCheol
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.141-148 / 2022
  • Recently, reinforcement learning combined with deep neural network technology has achieved remarkable success in various fields, such as board games like Go and chess, computer games like Atari and StarCraft, and robotic object manipulation tasks. However, such deep reinforcement learning describes states, actions, and policies in vector representations, so it has limitations in the generality and interpretability of the learned policy, and it is difficult to effectively incorporate domain knowledge into policy learning. On the other hand, dNL-RRL, a new relational reinforcement learning framework proposed to solve these problems, uses vector representations for sensor input data and lower-level motion control as in existing deep reinforcement learning, but represents states, actions, and learned policies relationally, with logic predicates and rules. In this paper, we present dNL-RRL-based policy learning for transport mobile robots in a manufacturing environment. In particular, this study proposes an effective method of utilizing the prior domain knowledge of human experts to improve the efficiency of relational reinforcement learning. Through various experiments, we demonstrate the performance improvement of relational reinforcement learning when domain knowledge is used as proposed.
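
To make the relational representation concrete, the sketch below encodes a warehouse state as ground predicates and an expert preference as a rule over them; the predicates and the rule are hypothetical examples of the kind of prior knowledge the paper injects, not the paper's actual dNL-RRL rules.

```python
# ground predicates describing a toy warehouse state
facts = {("at", "robot", "dock"),
         ("carrying", "robot", "none"),
         ("at", "pallet1", "zoneA")}

def prefer_move(facts):
    """Expert rule as a prior: if the robot carries nothing and a pallet
    is waiting somewhere, moving toward that pallet's zone is preferred."""
    idle = ("carrying", "robot", "none") in facts
    targets = [f[2] for f in facts
               if f[0] == "at" and f[1].startswith("pallet")]
    if idle and targets:
        return ("move_to", targets[0])
    return None

print(prefer_move(facts))   # -> ('move_to', 'zoneA')
```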

A Study on the Efficient Human-Robot Interaction Style for a Map Building Process of a Home-service Robot (홈서비스로봇의 맵빌딩을 위한 효율적인 휴먼-로봇 상호작용방식에 대한 연구)

  • Lee, Woo-Hun;Kim, Yeon-Ji;Kim, Hyun-Jin;Yang, Gyun-Hye;Park, Yong-Kuk;Bang, Seok-Won
    • Archives of Design Research / v.18 no.2 s.60 / pp.155-164 / 2005
  • Home-service robots need sufficient spatial information about their surroundings to interact with humans intelligently and to perform services efficiently. It is thus very important to investigate interaction styles that efficiently support the map building task through human-robot collaboration. We first analyzed the map building task with a cleaning robot and derived 4 design factors with tentative solutions: map building procedure (task-preferred procedure / space-preferred procedure), LCD display installation (robot / robot + remote control), navigation method (push type / pull type), and feedback modality (GUI / GUI + TTS). These design factors and tentative solutions were defined as independent variables and levels, and we investigated how they affect human task performance and behavior in the map building task. Eight experiment prototypes were built, and a usability test with 16 housewives was conducted to acquire empirical data. In terms of map building procedure, the space-preferred procedure showed better task performance than the task-preferred procedure, as we expected. For the LCD display installation factor, a remote control with an LCD display showed higher task performance and subjective satisfaction. For the navigation method, contrary to our expectation, no significant performance difference was found between push type and pull type; in fact, the push type showed higher subjective satisfaction. For feedback modality, we received negative feedback on the additional TTS operation guidance. It seems that before acquiring spatial information the robot's autonomy is rudimentary, so users are effectively interacting with a mobile appliance; they therefore prefer a remote-control-based interaction style in the robot map building process, as used in traditional appliance control.


Development and Performance Evaluation of Multi-sensor Module for Use in Disaster Sites of Mobile Robot (조사로봇의 재난현장 활용을 위한 다중센서모듈 개발 및 성능평가에 관한 연구)

  • Jung, Yonghan;Hong, Junwooh;Han, Soohee;Shin, Dongyoon;Lim, Eontaek;Kim, Seongsam
    • Korean Journal of Remote Sensing / v.38 no.6_3 / pp.1827-1836 / 2022
  • Disasters occur unexpectedly and are difficult to predict, and their scale and damage are increasing compared to the past; sometimes one disaster develops into another. Among the four stages of disaster management, search and rescue are carried out in the response stage when an emergency occurs, so personnel such as firefighters deployed to the scene are exposed to great risk. In this respect, robots are a technology with high potential to reduce damage to human life and property during the initial response at a disaster site. In addition, Light Detection and Ranging (LiDAR) can acquire 3D information over a relatively wide range using a laser, and its high accuracy and precision make it a very useful sensor given the characteristics of a disaster site. Therefore, in this study, development and experiments were conducted so that a robot could perform real-time monitoring at a disaster site. A multi-sensor module was developed by combining LiDAR, an Inertial Measurement Unit (IMU) sensor, and a computing board; this module was mounted on the robot, and a customized Simultaneous Localization and Mapping (SLAM) algorithm was developed. A method for stably mounting the multi-sensor module on the robot to maintain optimal accuracy at disaster sites was studied. To check the performance of the module, SLAM was tested inside a disaster building, and various SLAM algorithms and distance comparisons were performed. As a result, PackSLAM, developed in this study, showed lower error than the other algorithms, demonstrating its potential for application at disaster sites. In the future, to further enhance usability at disaster sites, various experiments will be conducted in a rough terrain environment with many obstacles.
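
The abstract reports distance comparisons between SLAM algorithms. One common such measure is the absolute trajectory error; a minimal sketch follows, with the alignment step between trajectories omitted for brevity and toy numbers rather than the paper's results.

```python
import numpy as np

def absolute_trajectory_error(estimated, ground_truth):
    """RMSE between an estimated SLAM trajectory and ground truth,
    the kind of distance comparison the abstract reports (trajectories
    assumed already aligned and time-synchronized)."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1))))

# toy 2D trajectories, one pose per row
gt  = [[0, 0], [1, 0], [2, 0], [3, 0]]
est = [[0, 0], [1.02, 0.01], [2.05, -0.02], [3.10, 0.03]]
print(f"ATE: {absolute_trajectory_error(est, gt):.3f} m")
```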

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on speed-ups to meet growing data demands. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented/virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To provide those services, reduced latency and high reliability, on top of high data rates, are critical for real-time services. Thus, 5G targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X), such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting indoor use; under existing networks it is difficult to overcome these constraints. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs therefore need to be split at a certain scale into a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN networks, where cars pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor, since the link is fast enough and contributes less than 1 ms of delay, but the information change cycle and the SDN data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50-250 m and vehicle speeds of 30-200 km/h in order to examine the network architecture that minimizes the delay.
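
Two quantities the abstract ties to delay can be sketched directly: how long a vehicle dwells in one small cell at the stated radii and speeds, and a delay budget summing the named components. The straight-line diameter crossing and the example millisecond values below are simplifying assumptions, not the paper's simulation model.

```python
def dwell_time_s(cell_radius_m, speed_kmh):
    """Time a vehicle spends crossing one 5G small cell, assuming a
    straight pass through the diameter (a deliberate simplification)."""
    speed_ms = speed_kmh / 3.6
    return 2 * cell_radius_m / speed_ms

def total_delay_ms(info_cycle_ms, sdn_proc_ms, rtd_ms=1.0):
    """Delay components named in the abstract: information change cycle,
    SDN data processing time, and round-trip delay (RTD < 1 ms)."""
    return info_cycle_ms + sdn_proc_ms + rtd_ms

for r in (50, 150, 250):                     # stated cell radii, m
    for v in (30, 200):                      # stated speed range, km/h
        print(f"radius {r:>3} m, {v:>3} km/h -> "
              f"dwell {dwell_time_s(r, v):6.2f} s")

# example budget with hypothetical 10 ms info cycle and 5 ms SDN processing
print(f"delay budget example: {total_delay_ms(10, 5):.1f} ms")
```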