• Title/Summary/Keyword: robot face

Search results: 187

Avoidance Algorithm of a Robot about Moving Obstacle on Two Dimension Path (2차원 경로상에서 이동물체에 대한 로봇의 회피 알고리즘)

  • 방시현;원태현;이만형
    • Proceedings of the Korean Society of Precision Engineering Conference / 1995.10a / pp.327-330 / 1995
  • When a mobile robot is used in a real environment, it must face moving obstacles, so a collision avoidance algorithm for moving obstacles is an indispensable element of mobile robot control. We carried out research to find and evaluate an improved algorithm for mobile robots. First, a continuous path is generated for the mobile robot; then, by creating a curved avoidance path, the robot can change its path smoothly. The smoothed path allowed the robot to adapt more effectively to changes in the path. Computer simulations under time-varying conditions were performed to validate the proposed algorithm.

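The avoidance scheme above relies on replacing an abrupt detour with a curve the robot can follow without sharp heading changes. Below is a minimal sketch of that idea, not the authors' algorithm: a straight 2-D path is bent around an obstacle by a raised-cosine lateral offset. The function name, the parameters, and the bump shape are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): bend a straight 2-D reference
# path around an obstacle with a smooth raised-cosine lateral offset so the
# robot can rejoin the original path without sharp heading changes.
import numpy as np

def smooth_avoidance_path(start, goal, obstacle, clearance=0.5, width=2.0, n=200):
    """Return an (n, 2) array of waypoints from start to goal that bulges
    smoothly around `obstacle` (all arguments are 2-D points)."""
    start, goal, obstacle = map(np.asarray, (start, goal, obstacle))
    direction = goal - start
    length = np.linalg.norm(direction)
    unit = direction / length
    normal = np.array([-unit[1], unit[0]])          # left-hand normal of the path

    s = np.linspace(0.0, length, n)                 # arc length along the straight path
    base = start + np.outer(s, unit)                # straight reference path

    s_obs = np.dot(obstacle - start, unit)          # obstacle projected onto the path
    # Raised-cosine bump centred on the obstacle: zero (with zero slope)
    # outside +/- width, equal to `clearance` right at the obstacle.
    u = np.clip((s - s_obs) / width, -1.0, 1.0)
    offset = clearance * 0.5 * (1.0 + np.cos(np.pi * u)) * (np.abs(u) < 1.0)
    return base + np.outer(offset, normal)

# Example: a path from (0, 0) to (10, 0) detouring around an obstacle near (5, 0.2).
path = smooth_avoidance_path(start=(0, 0), goal=(10, 0), obstacle=(5, 0.2))
```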

Point Number Algorithm for Position Identification of Mobile Robots (로봇의 위치계산을 위한 포인트 개수 알고리즘)

  • Liu, Jiang;Son, Young-Ik;Kim, Kab-Il
    • Proceedings of the KIEE Conference / 2005.10b / pp.427-429 / 2005
  • This paper presents the Point Number Algorithm (PNA), a real-time image processing method for position identification of a mobile robot. PNA counts the points visible in the image obtained from the robot's vision system and calculates the distance between the robot and the wall from that count, enabling the vision system to identify where the robot is in the workspace. In this workspace, the walls consist of a white background covered evenly with black points, and the camera's viewing angle is fixed; therefore, the more black points appear in the view, the longer the distance from the robot to the wall. When the robot does not face the wall directly, the number of visible black points differs, and the smallest count is obtained when the robot faces the wall head-on. Simulation results are presented at the end of the paper.

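As a rough illustration of the idea behind the Point Number Algorithm summarized above (not the authors' implementation), the sketch below counts dark dots in a view of the dotted wall and converts that count to a distance through a calibration table. The threshold, the minimum blob area, the calibration pairs, and the file name are assumptions.

```python
# Sketch of the point-counting idea: threshold the dots, count the blobs,
# and look the count up in a (count, distance) calibration table.
import cv2
import numpy as np

def count_wall_points(gray):
    """Count dark dots in a grayscale view of the dotted wall."""
    _, binary = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)  # dots -> white
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    # Skip label 0 (background) and ignore tiny noise blobs.
    return sum(1 for i in range(1, num_labels) if stats[i, cv2.CC_STAT_AREA] >= 4)

def distance_from_count(count, calibration):
    """Interpolate distance from a (count, distance) calibration table.
    With a fixed field of view, more visible dots means a longer distance."""
    counts, distances = zip(*sorted(calibration))
    return float(np.interp(count, counts, distances))

# Hypothetical calibration measured once for a given camera and wall pattern.
calibration = [(20, 0.5), (80, 1.0), (180, 2.0), (320, 3.0)]
frame = cv2.imread("wall_view.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
print(distance_from_count(count_wall_points(frame), calibration))
```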

METHODS OF EYEBROW REGION EXTRACTION AND MOUTH DETECTION FOR FACIAL CARICATURING SYSTEM PICASSO-2 EXHIBITED AT EXPO2005

  • Tokuda, Naoya;Fujiwara, Takayuki;Funahashi, Takuma;Koshimizu, Hiroyasu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.425-428 / 2009
  • We have researched and developed the caricature generation system PICASSO, which outputs a deformed facial caricature by comparing an input face with a prepared mean face. We specialized it as PICASSO-2 for a robot exhibited at Aichi EXPO2005; this robot, driven by PICASSO-2, drew facial caricatures on shrimp rice crackers with a laser pen. We have recently been exhibiting another revised robot characterized by brush drawing. The system takes a couple of facial images with a CCD camera, extracts the facial features from the images, and generates the facial caricature in real time. We experimentally evaluated the performance of the caricatures using a large amount of data collected at Aichi EXPO2005. The results made it clear that the system was not sufficiently accurate in eyebrow region extraction and mouth detection. In this paper, we propose improved methods for eyebrow region extraction and mouth detection.

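The abstract above mentions that PICASSO deforms a caricature by comparing the input face with a prepared mean face. The snippet below is a minimal sketch of that classic exaggeration step, not the PICASSO-2 code: corresponding landmark points are pushed away from the mean face by a gain factor. The landmark representation and the gain value are assumptions.

```python
# Sketch of mean-face-based caricature exaggeration.
import numpy as np

def caricature_landmarks(input_pts, mean_pts, gain=1.8):
    """Exaggerate the difference between input landmarks and the mean face.

    input_pts, mean_pts: (N, 2) arrays of corresponding feature points.
    gain > 1 pushes each point further away from the mean face, producing
    the characteristic caricature deformation.
    """
    input_pts = np.asarray(input_pts, dtype=float)
    mean_pts = np.asarray(mean_pts, dtype=float)
    return mean_pts + gain * (input_pts - mean_pts)
```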

Hand Raising Pose Detection in the Images of a Single Camera for Mobile Robot (주행 로봇을 위한 단일 카메라 영상에서 손든 자세 검출 알고리즘)

  • Kwon, Gi-Il
    • The Journal of Korea Robotics Society / v.10 no.4 / pp.223-229 / 2015
  • This paper proposes a novel method for detecting hand raising poses in images acquired from a single camera attached to a mobile robot that navigates unknown dynamic environments. Due to unconstrained illumination, a high level of variance in human appearances, and unpredictable backgrounds, detecting hand raising gestures from an image acquired from a camera attached to a mobile robot is very challenging. The proposed method first detects faces to determine the region of interest (ROI), and in this ROI we detect hands by using a HOG-based hand detector. Each candidate in the detected hand region is then evaluated by using the color distribution of the face region. To deal with cases in which face detection fails, we also use a HOG-based hand raising pose detector. Unlike other hand raising pose detection systems, we evaluate our algorithm with images acquired from the camera and with images obtained from the Internet that contain unknown backgrounds and unconstrained illumination; the level of variance in hand raising poses in these images is very high. Our experimental results show that the proposed method robustly detects hand raising poses in complex backgrounds and under unknown lighting conditions.
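
A rough pipeline sketch of the approach summarized above, not a reproduction of it: a face detector defines the region of interest, and hand candidates found there are verified against the face's skin-colour distribution. The Haar cascade, the hue-histogram comparison, the ROI geometry, and the stub hand detector are all assumptions standing in for the paper's HOG-based detectors.

```python
# Face -> ROI -> hand candidates -> skin-colour verification (sketch only).
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def hue_histogram(bgr_patch):
    """Normalized hue histogram used as a simple skin-colour model."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    return cv2.normalize(hist, hist).flatten()

def detect_hand_candidates(roi):
    """Placeholder for a trained HOG-based hand detector.
    Returns a list of (x, y, w, h) boxes relative to `roi`."""
    return []

def find_raised_hands(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    raised = []
    for (fx, fy, fw, fh) in face_cascade.detectMultiScale(gray, 1.1, 5):
        face_hist = hue_histogram(frame[fy:fy + fh, fx:fx + fw])
        # Search the area above and beside the face for a raised hand.
        top = max(0, fy - 2 * fh)
        left, right = max(0, fx - fw), min(frame.shape[1], fx + 2 * fw)
        roi = frame[top:fy + fh, left:right]
        for (hx, hy, hw, hh) in detect_hand_candidates(roi):
            cand_hist = hue_histogram(roi[hy:hy + hh, hx:hx + hw])
            # Keep candidates whose colour matches the face's skin colour.
            if cv2.compareHist(face_hist, cand_hist, cv2.HISTCMP_CORREL) > 0.5:
                raised.append((left + hx, top + hy, hw, hh))
    return raised
```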

Identification of Hazard for Securing the Safety of Unmanned Parcel Storage Device System Using Robot Technology

  • Park, Jae Min;Kim, Young Min
    • International Journal of Internet, Broadcasting and Communication / v.14 no.4 / pp.132-139 / 2022
  • The development of Fourth Industrial Revolution and Logistics 4.0 technologies, the growth of the e-commerce market, and the transition to a non-face-to-face society due to the pandemic are accelerating the growth of the logistics industry. With this growth, various services are emerging to meet market requirements, and research and technology development related to parcel storage devices, an important element of last-mile service, is also underway. In the past, when goods could not be delivered directly to the recipient, a parcel storage unit installed near the delivery location was used, but its usability was poor and the goods that could be stored were limited. In addition, existing parcel storage units have many functional limitations compared with advanced logistics technology, so a device that improves on them needs to be developed. Therefore, this study was conducted to secure the safety of an unmanned parcel storage device that uses robot technology to improve usability and functionality in line with the advanced logistics industry. Based on ISO 10218, an industrial robot safety standard, hazard identification was carried out to derive results that contribute to the development of the device.

A Study on Drilling with Multi-Articulated Robot (다관절 로봇에 의한 드릴가공에 관한 연구)

  • 최은환;박효흥;이기원;정선환;노승훈;최성대
    • Proceedings of the Korean Society of Precision Engineering Conference / 2002.05a / pp.438-441 / 2002
  • Industry today is increasingly focused on automation, and industrial robots have provided the flexibility this demands. Using an industrial robot for drilling can therefore yield high productivity. In this work, an electric drill was mounted on a six-degree-of-freedom articulated robot, and five faces of a fixed workpiece were drilled by the robot. This study investigated the feasibility of such drilling by analyzing the experimentally measured frequencies, the natural vibration of the robot, and the vibration that appears during actual drilling.

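The feasibility study above rests on comparing the vibration measured during drilling with the robot's natural vibration. The sketch below shows the kind of frequency analysis that comparison implies, not the authors' procedure: an FFT of an accelerometer signal returns its dominant frequency components. The sampling rate and the synthetic signal are assumptions.

```python
# Sketch: amplitude spectrum of a vibration signal and its strongest peaks.
import numpy as np

def dominant_frequencies(signal, sample_rate, top=3):
    """Return the `top` strongest frequency components of a vibration signal."""
    signal = np.asarray(signal, dtype=float)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate)
    order = np.argsort(spectrum)[::-1][:top]
    return sorted(zip(freqs[order], spectrum[order]))

# Synthetic example: a 25 Hz structural mode plus a 120 Hz drilling component.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
accel = np.sin(2 * np.pi * 25 * t) + 0.4 * np.sin(2 * np.pi * 120 * t)
print(dominant_frequencies(accel, fs))
```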

A Development of Single Action Press Robot (프레스 단동로봇의 개발)

  • 허성창;황병복
    • Proceedings of the Korean Society for Technology of Plasticity Conference / 1997.03a / pp.261-264 / 1997
  • A single action press robot, which consists of a driving unit, a rotator, an up-down feed base, and a feed bar, was developed and applied to press automation. The driving unit is made up of a face cam and a blade cam that have a phase angle between them. The feeding system consists of a double speed-up apparatus and linear motion guides, and has fast motion characteristics: the horizontal feeding speed of the feed bar is doubled by the double speed-up apparatus. The driving mechanism could be simplified thanks to the speed-up of the feeding unit.


Robot vision system for face tracking using color information from video images (로봇의 시각시스템을 위한 동영상에서 칼라정보를 이용한 얼굴 추적)

  • Jung, Haing-Sup;Lee, Joo-Shin
    • Journal of Advanced Navigation Technology / v.14 no.4 / pp.553-561 / 2010
  • This paper proposes a face tracking method that can be applied effectively to a robot's vision system. The proposed algorithm tracks facial areas after detecting the region of motion in the video. Motion is detected by taking the difference image of two consecutive frames and then removing noise with a median filter and erosion/dilation operations. To extract skin color from the moving area, the color information of sample images is used: membership functions are generated with MIN-MAX values as fuzzy data, and their similarity is evaluated to separate the skin-color region from the background. Within the face candidate region, the eyes are detected from the C channel of the CMY color space and the mouth from the Q channel of the YIQ color space, and the face region is tracked using the eye and mouth features obtained from the knowledge base. The experiment used 1,500 frames of video from 10 subjects (150 frames per subject). The motion area was detected in 1,435 frames, a detection rate of 95.7%, and 1,401 of those faces were tracked correctly, a tracking rate of 97.6%.
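
The first stage of the tracking pipeline above is motion detection by frame differencing followed by noise removal. The sketch below illustrates that stage only, not the authors' code; the threshold and kernel sizes are assumptions.

```python
# Sketch: difference image of two consecutive frames, then median filtering
# and erosion/dilation to obtain a clean motion mask.
import cv2
import numpy as np

def motion_mask(prev_bgr, curr_bgr, thresh=25):
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr, prev)                       # difference image
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.medianBlur(mask, 5)                       # remove salt-and-pepper noise
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)         # clean isolated pixels
    mask = cv2.dilate(mask, kernel, iterations=2)        # restore the moving region
    return mask
```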

Design and Implementation of Real-time High Performance Face Detection Engine (고성능 실시간 얼굴 검출 엔진의 설계 및 구현)

  • Han, Dong-Il;Cho, Hyun-Jong;Choi, Jong-Ho;Cho, Jae-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.2 / pp.33-44 / 2010
  • This paper proposes a real-time face detection hardware architecture for robot vision applications. The proposed architecture is robust against illumination changes and operates at no less than 60 frames per second. It uses the Modified Census Transform (MCT) to obtain face features that are robust to illumination changes, and the AdaBoost algorithm is adopted to learn and generate the face classifier data with which faces are finally detected. The paper describes the face detection hardware, composed of a Memory Interface, Image Scaler, MCT Generator, Candidate Detector, Confidence Comparator, Position Resizer, Data Grouper, and Detected Result Display, and the implementation is verified on a Xilinx Virtex5 LX330 FPGA. Verification with camera images showed that up to 32 faces per frame can be detected at speeds of up to 149 frames per second.
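
The engine above builds on the Modified Census Transform, which compares each pixel of a 3x3 window with the window mean to obtain a 9-bit, illumination-robust code. Below is a minimal software sketch of that transform (the paper implements it in hardware); the vectorized formulation is this sketch's choice, not the FPGA design.

```python
# Sketch of the Modified Census Transform: one 9-bit code per interior pixel.
import numpy as np

def modified_census_transform(gray):
    """Return the 9-bit MCT code for every interior pixel of a grayscale image."""
    gray = np.asarray(gray, dtype=np.float32)
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint16)
    # Collect the nine shifted views covering each interior pixel's 3x3 window.
    neighbours = [gray[dy:dy + h - 2, dx:dx + w - 2]
                  for dy in range(3) for dx in range(3)]
    mean = sum(neighbours) / 9.0
    for bit, patch in enumerate(neighbours):
        # Set the bit where the window pixel is brighter than the window mean.
        codes |= ((patch > mean).astype(np.uint16) << bit)
    return codes
```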

Study on Facial Expression Factors as Emotional Interaction Design Factors (감성적 인터랙션 디자인 요소로서의 표정 요소에 관한 연구)

  • Heo, Seong-Cheol
    • Science of Emotion and Sensibility / v.17 no.4 / pp.61-70 / 2014
  • Verbal communication has limits in the interaction between robots and humans, so nonverbal communication is required for smoother and more efficient communication and even for the emotional expression of the robot. This study derived seven items of nonverbal information based on shopping behavior, using a robot designed to support shopping; selected facial expression as the means of conveying the derived nonverbal information; and coded the face components through 2D analysis. The study then analyzed the significance of the nonverbal expressions using 3D animation that combines the face-component codes. The analysis showed that the proposed expression method achieved a high level of significance, suggesting the potential of this study as baseline data for research on nonverbal information. However, the case of 'embarrassment' showed limits in applying the coded face components to the facial shape and requires more systematic study.
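
The study above codes face components and combines the codes to express nonverbal information. The sketch below shows one possible data structure for such a code table, not the study's actual coding scheme; the component names, code values, and example expressions are purely hypothetical.

```python
# Sketch: each nonverbal message maps to a combination of face-component codes
# that an animation layer could render.
from dataclasses import dataclass

@dataclass(frozen=True)
class FaceExpressionCode:
    eyebrow: str   # e.g. "raised", "neutral", "lowered"
    eye: str       # e.g. "open", "narrowed", "closed"
    mouth: str     # e.g. "open", "smile", "pressed"

# Hypothetical code table for two expressions.
EXPRESSION_TABLE = {
    "greeting":      FaceExpressionCode(eyebrow="raised",  eye="open",     mouth="smile"),
    "embarrassment": FaceExpressionCode(eyebrow="lowered", eye="narrowed", mouth="pressed"),
}

def render_codes(message):
    """Look up the component codes the animation layer would combine."""
    return EXPRESSION_TABLE.get(message)
```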