• Title/Summary/Keyword: Vision assistant

Teaching Assistant System using Computer Vision (컴퓨터 비전을 이용한 강의 도우미 시스템)

  • Kim, Tae-Jun; Park, Chang-Hoon; Choi, Kang-Sun
    • Journal of Practical Engineering Education / v.5 no.2 / pp.109-115 / 2013
  • In this paper, a teaching assistant system using computer vision is presented. Using the proposed system, lecturers can utilize various lecture contents, such as lecture notes and related video clips, easily and seamlessly. To switch between different lecture contents and to control multimedia contents, lecturers simply draw pre-defined symbols on the board without pausing the class. In the proposed teaching assistant system, a feature descriptor, the so-called shape context, is used to recognize the pre-defined symbols.
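
The abstract names the shape-context descriptor but gives no implementation detail. The following is a minimal sketch of the standard log-polar formulation (a histogram, per contour point, of the relative positions of all other points); the bin counts, radial range, and mean-distance normalization are plausible defaults, not the paper's settings.

```python
import numpy as np

def shape_context(points, n_radial=5, n_angular=12):
    """Log-polar shape-context histograms: for each contour point, bin the
    relative positions of all other points by log distance and angle."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    diff = points[None, :, :] - points[:, None, :]      # pairwise offsets
    dist = np.linalg.norm(diff, axis=2)
    angle = np.arctan2(diff[..., 1], diff[..., 0])      # in [-pi, pi]
    dist = dist / dist[dist > 0].mean()                 # scale invariance
    r_edges = np.logspace(np.log10(0.125), np.log10(2.0), n_radial + 1)
    r_bin = np.digitize(dist, r_edges) - 1              # -1 / n_radial = outside
    a_bin = ((angle + np.pi) / (2 * np.pi) * n_angular).astype(int) % n_angular
    desc = np.zeros((n, n_radial, n_angular))
    for i in range(n):
        for j in range(n):
            if i != j and 0 <= r_bin[i, j] < n_radial:
                desc[i, r_bin[i, j], a_bin[i, j]] += 1
    return desc.reshape(n, -1)                          # one histogram per point

# Toy usage: descriptors for points sampled along an L-shaped contour.
pts = [(x, 0) for x in range(10)] + [(9, y) for y in range(1, 10)]
D = shape_context(pts)        # shape (19, 60): one 5x12 histogram per point
```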

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache; Badra Nawal Benmoussat
    • Journal of Information Processing Systems / v.19 no.6 / pp.730-744 / 2023
  • This paper addresses the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which is useful for video surveillance, automatic search, and video indexing. It can also assist elderly and frail persons, for example when the camera is carried by an external device such as a personal-assistant robot: the inferred information is used both online to assist the person and offline to improve the assistant. Activity recognition remains difficult because of large variations in how actions are executed. The purpose of this paper is therefore a simple and efficient recognition method, robust to this variability, that works from egocentric camera data only, using a convolutional neural network and deep learning. In terms of accuracy improvement, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when combining egocentric and several stationary cameras, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
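
The abstract does not specify the network architecture, so the sketch below is only one plausible minimal setup: a small frame-level CNN in PyTorch with clip-level prediction by averaging per-frame logits. The layer sizes, class count, and input resolution are placeholders.

```python
import torch
import torch.nn as nn

class EgoActivityCNN(nn.Module):
    """Small frame-level CNN; the paper's actual architecture is not given
    in the abstract, so all layer sizes here are assumptions."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):                # x: (batch, 3, H, W) RGB frames
        return self.classifier(self.features(x).flatten(1))

# Clip-level prediction by averaging per-frame logits over 16 sampled frames.
model = EgoActivityCNN(n_classes=10)
frames = torch.randn(16, 3, 224, 224)    # stand-in for a real egocentric clip
label = model(frames).mean(dim=0).argmax().item()
```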

Automation for Oyster Hinge Breaking System

  • So, J.D.; Wheaton, F.W.
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 1996.06c / pp.658-667 / 1996
  • A computer vision system was developed to automatically detect and locate the oyster hinge line, one step in shucking an oyster. The computer vision system consisted of a personal computer, a color frame grabber, a color CCD video camera with a zoom lens, two video monitors, a specially designed fixture to hold the oyster, a lighting system to illuminate the oyster, and the system software. The software combined commercially available programs with custom programs developed in Microsoft C. Test results showed that image resolution was the most important variable influencing hinge detection efficiency. Whether the trimmed-off flat white surface area was dry or wet, the oyster size relative to the selected image size, and the image processing methods used all influenced the hinge locating efficiency. The best combination of computer software and hardware successfully located 97% of the oyster hinge lines tested. This efficiency was achieved using a camera field of view of 1.9 by 1.5 cm, a 180 by 170 pixel image window, and a dry trimmed-off oyster hinge end surface.
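
The original implementation combined commercial packages with custom Microsoft C code. As a rough modern analogue, the sketch below locates a dominant line inside a 180 by 170 pixel window (the window size reported in the abstract) using OpenCV edge detection and a probabilistic Hough transform; the Canny and Hough thresholds are assumptions.

```python
import cv2
import numpy as np

def locate_hinge_line(gray, window=(180, 170)):
    """Find the strongest line segment in a centered window of a grayscale
    oyster image; returns endpoints in full-image coordinates or None."""
    h, w = window
    y0 = (gray.shape[0] - h) // 2
    x0 = (gray.shape[1] - w) // 2
    roi = gray[y0:y0 + h, x0:x0 + w]
    edges = cv2.Canny(roi, 50, 150)                     # thresholds assumed
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    if lines is None:
        return None
    # Keep the longest detected segment as the hinge-line candidate.
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return (x1 + x0, y1 + y0, x2 + x0, y2 + y0)
```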

A Study on Robot Arm Control System using Detection of Foot Movement (발 움직임 검출을 통한 로봇 팔 제어에 관한 연구)

  • Ji, H.; Lee, D.H.
    • Journal of Rehabilitation Welfare Engineering & Assistive Technology / v.9 no.1 / pp.67-72 / 2015
  • A system for controlling a robotic arm through foot motion detection was implemented for disabled people who cannot freely use their arms. To capture images of foot movement, two cameras were set up in front of both feet. After defining multiple regions of interest in the acquired images using the LabVIEW-based Vision Assistant, foot movement could be detected from left/right and up/down edge detection within the left and right image areas. Control data, obtained from the left/right and up/down edge-detection counts in the two foot images, were transferred via serial communication, and the control system moved a 6-joint robotic arm up/down and left/right by foot. Experiments showed a reaction time within 0.5 seconds and an operation recognition rate above 88%.
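
The paper built its detection in LabVIEW Vision Assistant. A rough Python/OpenCV equivalent of the idea, counting edge activity in left/right/up/down regions of interest and sending a one-byte command to the arm controller over serial, might look like the sketch below; the ROI rectangles, activity threshold, serial port, and command bytes are all hypothetical.

```python
import cv2
import numpy as np
import serial  # pyserial

def edge_count(gray, roi):
    """Count edge pixels inside a rectangular region of interest (x, y, w, h)."""
    x, y, w, h = roi
    return int(np.count_nonzero(cv2.Canny(gray[y:y + h, x:x + w], 50, 150)))

# Hypothetical ROIs for one 320x240 foot camera; the paper defines several
# regions per image via Vision Assistant.
ROIS = {"left": (0, 60, 80, 120), "right": (240, 60, 80, 120),
        "up": (80, 0, 160, 60), "down": (80, 180, 160, 60)}

def foot_command(gray, threshold=500):
    """Map the ROI with the most edge activity to a one-byte command
    (L/R/U/D); the byte encoding and threshold are assumptions."""
    counts = {name: edge_count(gray, roi) for name, roi in ROIS.items()}
    best = max(counts, key=counts.get)
    return best[0].upper().encode() if counts[best] > threshold else None

# Send the command to the 6-joint arm controller (port name is a placeholder).
arm = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    cmd = foot_command(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if cmd is not None:
        arm.write(cmd)
```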

Trend of Vehicle Vision Processor (자동차 비전 프로세서 동향)

  • Han, J.H.; Byun, K.G.; Eum, N.W.
    • Electronics and Telecommunications Trends / v.30 no.4 / pp.102-109 / 2015
  • In the automotive field, vision-based Advanced Driver Assistant Systems (ADAS) are being developed for driver safety and safe driving. Object recognition based on vision systems is used for lane detection, pedestrian detection, and vehicle detection to sense vehicle position and collision risk, and the object recognition requirements at the vehicle level are growing daily. Vehicle vision processors have evolved to support this, from an initial compute capability of about 50 GOPS to nearly 400 GOPS, enough to process 720p images at a frame rate of 30 fps. This article reviews trends in vision processors with strengthened vision computation for vehicle vision systems.

DEVELOPMENT OF A MACHINE VISION SYSTEM FOR WEED CONTROL USING PRECISION CHEMICAL APPLICATION

  • Lee, Won-Suk; David C. Slaughter; D. Ken Giles
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 1996.06c / pp.802-811 / 1996
  • Farmers need alternatives for weed control, driven by the desire to reduce chemicals used in farming. However, conventional mechanical cultivation cannot selectively remove weeds located in the seedline between crop plants, and there are no selective herbicides for some crop/weed situations. Since hand labor is costly, an automated weed control system could be feasible. A robotic weed control system can also reduce or eliminate the need for chemicals. Currently no such system exists for removing weeds located in the seedline between crop plants. The goal of this project is to build a real-time, machine-vision weed control system that can detect crop and weed locations, remove weeds, and thin crop plants. To accomplish this objective, a real-time robotic system was developed to identify and locate outdoor plants using machine vision technology, pattern recognition techniques, knowledge-based decision theory, and robotics. The prototype weed control system is composed of a real-time computer vision system, a uniform illumination device, and a precision chemical application system. The prototype is mounted on the UC Davis Robotic Cultivator, which finds the center of the seedline of crop plants. Field tests showed that the robotic spraying system correctly targeted simulated weeds (metal coins of 2.54 cm diameter) with an average error of 0.78 cm and a standard deviation of 0.62 cm.
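
The abstract names machine vision, pattern recognition, and knowledge-based decision theory without implementation detail. The sketch below covers only the first stage, under stated assumptions: excess-green segmentation of plant material and extraction of blob centroids as candidate spray targets. The ExG threshold and minimum blob area are guesses, and the crop/weed discrimination itself is beyond this sketch.

```python
import cv2
import numpy as np

def plant_mask(bgr, thresh=0.1):
    """Segment green plant material with the excess-green index
    ExG = 2g - r - b on chromaticity; the 0.1 threshold is an assumption."""
    img = bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    s = b + g + r + 1e-6
    exg = 2 * g / s - r / s - b / s
    return (exg > thresh).astype(np.uint8) * 255

def spray_targets(mask, min_area=50):
    """Return centroids of plant blobs as candidate spray-target coordinates.
    Separating crop from weed (pattern recognition and knowledge-based rules
    in the paper) would filter this list further."""
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)       # label 0 = background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```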

Life Companion Robots (반려 로봇)

  • Kim, J.H.; Seo, B.S.; Cho, J.I.; Choi, J.D.
    • Electronics and Telecommunications Trends / v.36 no.1 / pp.12-21 / 2021
  • This article presents the future vision and core technologies of the "Life Companion Robot," which is one of the 12 future concepts introduced in the ETRI Technology Roadmap published in November 2020. Assistant robots, care robots, and life support robots were proposed as the development stages of life companion robots. Further, core technologies for each of the ten major roles that must be directly or indirectly performed by life companion robots are introduced. Finally, this article describes in detail three major artificial intelligence technologies for autonomous robots.

Assessment and Reliability Validation of Lane Departure Assistance System Based on DGPS-GIS Using Camera Vision (카메라영상에 의한 DGPS-GIS기반 차선변경 지원시스템의 평가 및 신뢰성 검증)

  • Moon, Sangchan; Lee, Soon-Geul; Kim, Minwoo; Joo, Dani
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.6 / pp.49-58 / 2014
  • This paper proposes a new assessment and reliability validation method for a lane departure assistance system (LDAS) based on DGPS-GIS, using camera vision to measure the lanes. Lane departure is assessed with the yaw-rate measurement and false-alarm determination method of ISO 17361, and performance is validated after generating a departure-warning boundary line that accounts for the deviation error of the LDAS measured with DGPS. The distance between the wheel and the lane is obtained by extracting the lane line from the camera image with the Hough transform, and validation compares this value with the distance reported by the LDAS. Experimental results show that the error of the extracted LDAS distance is within 5 cm, which proves the performance of the DGPS-GIS-based LDAS and confirms the effectiveness of the proposed camera-vision validation method for system reliability.
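
The wheel-to-lane distance extraction via Hough transform described above could be sketched as follows; the Canny and Hough parameters and the pixel-to-metre scale are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def wheel_to_lane_distance(bgr, wheel_px, m_per_px):
    """Distance in metres from a wheel point to the nearest lane line found
    by Canny edges and a probabilistic Hough transform."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                            minLineLength=80, maxLineGap=10)
    if lines is None:
        return None
    px, py = wheel_px
    best = None
    for x1, y1, x2, y2 in lines[:, 0]:
        # Perpendicular distance from the wheel point to the line through
        # (x1, y1) and (x2, y2).
        d = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
        d /= np.hypot(x2 - x1, y2 - y1)
        best = d if best is None else min(best, d)
    return best * m_per_px    # camera calibration (m_per_px) is assumed given
```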

Implementation of Network System for Bio-physical signal Communication

  • Kim, Jeong Lae; Kang, Jeong Jin; Rothwell, Edward J.
    • International Journal of Advanced Culture Technology / v.1 no.1 / pp.1-5 / 2013
  • This network system for home care realizes communication of bio-physical signals to convey physical rhythm. Four displacement functions are tracked: Vision, Somatosensory, Vestibular, and CNS. The bio-physical signal is designed around maximum and minimum points with a 0.01-unit reference level, and is checked against the compound physical condition of body posture for each sensory organ. Measurements of Vision, Somatosensory, Vestibular, CNS, and BMI are detected. The home network service can be used to support a health care system for health assistants in a health care center, and is expected to manage physical parameters over network communication.

Development of Computer Vision System for Individual Recognition and Feature Information of Cow (II) - Analysis of body parameters using stereo image - (젖소의 개체인식 및 형상 정보화를 위한 컴퓨터 시각 시스템 개발(II) - 스테레오 영상을 이용한 체위 분석 -)

  • 이종환
    • Journal of Biosystems Engineering / v.28 no.1 / pp.65-76 / 2003
  • The analysis of cow body parameters is important for providing useful information for cow management and evaluation. Present methods put considerable stress on cows because they are invasive and constrain cow postures during measurement of body parameters. This study was conducted to develop a stereo vision system for non-invasive analysis of cow body features. Body feature parameters of 16 heads at two farms (A, B) were measured using scales, and nineteen stereo images of the cows in walking postures were captured under outdoor illumination. In this study, camera calibration and an inverse perspective transformation technique were established for the stereo vision system. Two calibration results are presented, one per farm, because the setup distance from camera to cow was 510 cm at farm A and 630 cm at farm B. Calibration errors for the stereo vision system were within 2 cm for farm A and less than 4.9 cm for farm B. Eleven feature points of the cow body were extracted on the stereo images interactively, and five assistant points were determined by the computer program. The 3D world coordinates of these 15 points were calculated and used to compute cow body parameters such as withers height, pelvic arch height, body length, slope body length, chest depth, and chest width. Measurement errors for body parameters were less than 10% for most cows. For a few cows, errors for slope body length and chest width exceeded 10% due to search errors for feature points at inside-body positions. An equation estimating chest girth from chest depth and chest width is presented; the maximum estimation error was within 10% of the real values, and the mean error was 8.2 cm. The analysis of cow body parameters using the stereo vision system was successful, although body shape in the binocular stereo images was distorted by cow movements.
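
The 3D reconstruction step described above (computing world coordinates of feature points from a calibrated stereo pair) is commonly done by linear triangulation; a minimal sketch follows, with projection matrices and image coordinates as placeholders rather than the paper's calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one feature point observed by two
    calibrated cameras with 3x4 projection matrices P1 and P2."""
    def rows(P, xy):
        x, y = xy
        return np.vstack([x * P[2] - P[0], y * P[2] - P[1]])
    A = np.vstack([rows(P1, x1), rows(P2, x2)])
    _, _, vt = np.linalg.svd(A)        # least-squares null vector of A
    X = vt[-1]
    return X[:3] / X[3]                # Euclidean world coordinates

# Placeholder calibration: identity intrinsics, 30 cm horizontal baseline,
# and normalized image coordinates of one feature point in each view.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.3], [0.0], [0.0]])])
withers = triangulate(P1, P2, (0.312, 0.140), (0.298, 0.140))
```

A body parameter such as withers height then follows from the difference between two triangulated points (e.g., withers and ground contact) along the vertical axis.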