• Title/Summary/Keyword: Wireless Camera


Vision Based Sensor Fusion System of Biped Walking Robot for Environment Recognition (영상 기반 센서 융합을 이용한 이족로봇에서의 환경 인식 시스템의 개발)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Seo, Sam-Jun;Park, Gwi-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2006.04a
    • /
    • pp.123-125
    • /
    • 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since such robots are ultimately developed not only for research but for use in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by the biped robot, as well as a human-robot interaction (HRI) system. For carrying out specific tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, both fed by a wireless vision camera, are implemented and fused with the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed.

  • PDF
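
    The obstacle recognition stage above builds on classical window-based template matching. As a minimal, hedged sketch of that underlying technique (the paper's "enhanced" variant and the hierarchical SVM are not reproduced here), the following pure-Python example slides a template over a grayscale image and returns the best match position; all names and data are illustrative.

    ```python
    # Minimal sum-of-squared-differences (SSD) template matching sketch.
    # Images are plain 2D lists of grayscale values; this is the classical
    # baseline, not the paper's enhanced algorithm.

    def ssd_match(image, template):
        """Slide the template over the image and return the (row, col)
        of the window with the smallest sum of squared differences."""
        ih, iw = len(image), len(image[0])
        th, tw = len(template), len(template[0])
        best, best_pos = None, None
        for r in range(ih - th + 1):
            for c in range(iw - tw + 1):
                ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                          for i in range(th) for j in range(tw))
                if best is None or ssd < best:
                    best, best_pos = ssd, (r, c)
        return best_pos

    # Example: a 5x5 image containing a bright 2x2 patch at (2, 3)
    img = [[0] * 5 for _ in range(5)]
    img[2][3] = img[2][4] = img[3][3] = img[3][4] = 9
    patch = [[9, 9], [9, 9]]
    print(ssd_match(img, patch))  # -> (2, 3)
    ```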

TELE-OPERATIVE SYSTEM FOR BIOPRODUCTION - REMOTE LOCAL IMAGE PROCESSING FOR OBJECT IDENTIFICATION -

  • Kim, S. C.;H. Hwang;J. E. Son;Park, D. Y.
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2000.11b
    • /
    • pp.300-306
    • /
    • 2000
  • This paper introduces a new concept of automation for bio-production using a tele-operative system. The proposed system showed a practical and feasible way of automating the volatile bio-production process. Based on this proposition, recognition of the job environment with object identification was performed using a computer vision system. A man-machine interactive hybrid decision-making scheme, built on the concept of tele-operation, was proposed to overcome the limitations of the computer in image processing and feature extraction from complex environment images. Identifying watermelons in the outdoor scene of a cultivation field was selected to realize the proposed concept. Identifying a watermelon from a camera image of the outdoor cultivation field is very difficult because of the ambiguity among stems, leaves, shades, and especially fruits partly covered by leaves or stems. The analog signal of the outdoor image was captured and transmitted wirelessly to the host computer by an RF module. A localized window was formed from the outdoor image by pointing on the touch screen, and then a sequence of algorithms to identify the location and size of the watermelon was performed on the local window image. The effects of the light reflectance of fruits, stems, ground, and leaves were also investigated.

  • PDF
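
    The localized-window step above can be sketched as follows: the operator's touch point defines a window, and the object's location and size are estimated inside that window only. In this hedged, illustrative version the image is a 2D grayscale list and "fruit" pixels are simply the darker ones; the paper's actual feature extraction under outdoor reflectance is far more involved.

    ```python
    # Illustrative local-window object localization: crop a window around
    # the touch point, classify dark pixels as the object, and report the
    # centroid and pixel count. Thresholds and sizes are assumptions.

    def locate_in_window(image, touch, half=2, dark_thresh=50):
        """Return centroid (row, col) and pixel count of dark pixels
        inside a (2*half+1)-sized window centred on the touch point."""
        tr, tc = touch
        pts = [(r, c)
               for r in range(max(0, tr - half), min(len(image), tr + half + 1))
               for c in range(max(0, tc - half), min(len(image[0]), tc + half + 1))
               if image[r][c] < dark_thresh]
        if not pts:
            return None, 0
        cr = sum(r for r, _ in pts) / len(pts)
        cc = sum(c for _, c in pts) / len(pts)
        return (cr, cc), len(pts)

    # Bright field (value 200) with a dark 3x3 "fruit" around (4, 4)
    field = [[200] * 9 for _ in range(9)]
    for r in range(3, 6):
        for c in range(3, 6):
            field[r][c] = 20
    print(locate_in_window(field, (4, 4)))  # -> ((4.0, 4.0), 9)
    ```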

Adaptive Cloud Offloading of Augmented Reality Applications on Smart Devices for Minimum Energy Consumption

  • Chung, Jong-Moon;Park, Yong-Suk;Park, Jong-Hong;Cho, HyoungJun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.8
    • /
    • pp.3090-3102
    • /
    • 2015
  • The accuracy of an augmented reality (AR) application is highly dependent on the resolution of the object's image and the device's computational processing capability. Naturally, a mobile smart device equipped with a high-resolution camera becomes the best platform for portable AR services. AR applications require significant energy consumption and very fast response time, which are big burdens on the smart device, and there are very few ways to overcome these burdens. Computation offloading via mobile cloud computing has the potential to provide energy savings and enhance the performance of applications executed on smart devices. Therefore, in this paper, adaptive mobile computation offloading of mobile AR applications is considered in order to determine optimal offloading points that satisfy the required quality of experience (QoE) while consuming minimum energy on the smart device. AR feature extraction based on the SURF algorithm is partitioned into sub-stages in order to determine the optimal AR cloud computational offloading point based on the conditions of the smart device, the wireless and wired networks, and the AR service cloud servers. Tradeoffs between energy savings and processing time are explored, also taking network congestion and server load conditions into account.
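
    The partition-point search described above can be sketched as a small optimization: the pipeline is split into sub-stages, and the offloading point is chosen to minimize device energy while meeting a response-time bound. All numbers and the cost model below are illustrative assumptions, not the paper's measured parameters or its actual QoE model.

    ```python
    # Hedged sketch of choosing an offloading point for a staged pipeline.
    # Offloading after stage k runs stages 0..k locally, transmits stage
    # k's output, and runs the remaining stages remotely.

    def best_offload_point(stages, uplink_bps, tx_j_per_byte, deadline_s):
        """stages: list of (local_s, local_J, remote_s, out_bytes).
        Returns (k, energy_J) of the feasible split with minimum device
        energy, or None if no split meets the deadline."""
        best = None
        for k in range(len(stages)):
            t = sum(s[0] for s in stages[:k + 1])      # local compute time
            e = sum(s[1] for s in stages[:k + 1])      # local compute energy
            t += stages[k][3] * 8 / uplink_bps         # transmit time
            e += stages[k][3] * tx_j_per_byte          # transmit energy
            t += sum(s[2] for s in stages[k + 1:])     # remote compute time
            if t <= deadline_s and (best is None or e < best[1]):
                best = (k, e)
        return best

    # (local_s, local_J, remote_s, output_bytes) per sub-stage (made up)
    stages = [(0.10, 0.5, 0.02, 200_000),
              (0.30, 1.5, 0.05, 50_000),
              (0.20, 1.0, 0.04, 5_000)]
    print(best_offload_point(stages, 10e6, 2e-5, 1.0))
    ```

    With these assumed numbers, offloading after the second stage wins: its smaller intermediate output makes transmission cheap enough to offset the extra local computation.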

Implementation of A Hospital Information System in Ubiquitous and Mobile Environment

  • Jang, Jae-Hyuk;Sim, Gab-Sig
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.12
    • /
    • pp.53-59
    • /
    • 2015
  • In this paper, we developed a hospital information system in which the business process is formalized and a wired/wireless integrated solution is used. The system consists of the administration office program, the medical office program, the ward management program, and the rounds management program. The administration office program enrolls and accepts patients and issues and reissues RFID cards. The medical office program records medical examinations and treatments, outputs diagnoses, requests hospitalizations, retrieves examination and treatment records, assigns accepted patients to the corresponding examination room, and updates the number of waiting patients and the current patient number for each examination room in real time. The ward management program handles admissions and discharges, nurses' notes, and isolation ward monitoring. The rounds management program handles medical examinations, treatments, and discharges using a PDA. The developed system can be built at low cost and can greatly improve the quality of medical services by automating medical administration. Also, a small number of medical staff can manage inpatients efficiently by using the monitoring functions.

Visible Light Identification System for Smart Door Lock Application with Small Area Outdoor Interface

  • Song, Seok-Jeong;Nam, Hyoungsik
    • Current Optics and Photonics
    • /
    • v.1 no.2
    • /
    • pp.90-94
    • /
    • 2017
  • Visible light identification (VLID) is a user identification system for a door lock application that uses a smartphone and adopts visible light communication (VLC) technology, with the objectives of high security, small form factor, and cost effectiveness. The user is verified by the identification application program of a smartphone via fingerprint recognition or password entry. If the authentication succeeds, the corresponding encoded visible light signals are transmitted by the light emitting diode (LED) camera flash. Then, only a small, low-cost photodiode, serving as the outdoor interface together with a comparator, converts the light signal to digital data, runs the authentication process, and releases the lock. VLID can utilize the powerful state-of-the-art hardware and software of smartphones. Furthermore, the door lock system can be easily upgraded with advanced technologies without modification or replacement: it is upgraded by simply updating the smartphone application software or replacing the smartphone with the latest model. Additionally, a wireless connection between the smartphone and a smart home hub is established automatically via Bluetooth for updating the password and controlling home devices. In this paper, we demonstrate a prototype VLID door lock system built with LEGO blocks, a photodiode, a comparator circuit, a Bluetooth module, and an FPGA board.
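
    The signalling idea above can be sketched as simple on-off keying: the phone encodes the authenticated user's ID as on/off light pulses from the LED flash, and the photodiode side samples the pulses back into bits. The framing below (one start pulse plus 8 data bits per byte) is an illustrative assumption, not the paper's actual protocol.

    ```python
    # Hedged on-off-keying sketch of the VLID light channel. Each byte is
    # framed as a start pulse followed by its 8 bits, MSB first; the real
    # system's modulation, clocking, and encryption are not reproduced.

    def encode(data: bytes):
        """Return a list of 0/1 light levels for the LED flash."""
        pulses = []
        for b in data:
            pulses.append(1)                       # start pulse
            pulses.extend((b >> i) & 1 for i in range(7, -1, -1))
        return pulses

    def decode(pulses):
        """Inverse of encode(): strip each start pulse, reassemble bytes."""
        out = bytearray()
        for i in range(0, len(pulses), 9):
            assert pulses[i] == 1, "missing start pulse"
            bits = pulses[i + 1:i + 9]
            out.append(sum(bit << (7 - j) for j, bit in enumerate(bits)))
        return bytes(out)

    msg = b"user42"            # hypothetical authenticated user ID
    assert decode(encode(msg)) == msg
    print(encode(b"\x05"))     # -> [1, 0, 0, 0, 0, 0, 1, 0, 1]
    ```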

Stairs Walking of a Biped Robot (2족 보행 로봇의 계단 보행)

  • 성영휘;안희욱
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.1
    • /
    • pp.46-52
    • /
    • 2004
  • In this paper, we introduce a case study of developing a miniature humanoid robot with 16 degrees of freedom, a height of 42 cm, and a weight of 1.5 kg. For easy implementation, integrated RC servo motors are adopted as actuators, and a digital camera is mounted on its head, so the robot can transmit vision data to a remote host computer via a wireless modem. The robot can perform staircase walking as well as straight walking and turning in any direction. The user interface program running on the host computer contains a robot graphic simulator and a motion editor, which are used to generate and verify the robot's walking motions. The experimental results show that the robot has various walking capabilities, including straight walking, turning, and stairs walking.

  • PDF

Improve utilization of Drone for Private Security (Drone의 민간 시큐리티 활용성 제고)

  • Gong, Bae Wan
    • Convergence Security Journal
    • /
    • v.16 no.3_2
    • /
    • pp.25-32
    • /
    • 2016
  • A drone is an unmanned aerial system operated by remote control: a system piloted remotely from the ground, or flown automatically or semi-automatically without a pilot on board. Drones were originally developed and used for military purposes. However, they are currently utilized in a variety of areas, such as logistics and the distribution of relief supplies to disaster areas, wireless Internet connectivity, TV and video shooting, disaster observation, and tracking criminals. In particular, drones can be actively used in activities such as search and rescue at disaster sites, and may be able to detect the movement of people and an attacker at night using an infrared camera. Drones are therefore very effective for private security.

A Study on the Blue-green algae Monitoring System using HSV Color Model (HSV 색상 모델을 활용한 녹조 모니터링 시스템에 관한 연구)

  • Kim, Tae-hyeon;Choi, Jun-seok;Kim, Kyung-min;Kim, Dong-ju;Kim, Kyung-min
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2015.10a
    • /
    • pp.553-555
    • /
    • 2015
  • In this paper, we propose a blue-green algae monitoring system using the HSV (Hue, Saturation, Value) color model. The proposed system extracts image data from the camera of a Raspberry Pi server over a wireless network, and the data are analyzed with the HSV color model. We implemented a web server to provide, as XML data, the information analyzed on the Raspberry Pi server. A mobile app was also developed to view the XML data on smart devices.

  • PDF
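
    The HSV analysis step above can be sketched with only the standard library: each RGB pixel is converted to HSV and counted as "algae" when its hue falls in the green band with enough saturation and brightness. The thresholds below are illustrative assumptions; the paper's tuned ranges are not given in the abstract.

    ```python
    # Hedged sketch of HSV-based green-pixel classification for algae
    # monitoring, using colorsys from the standard library.

    import colorsys

    def algae_fraction(pixels, hue_band=(0.20, 0.45), min_sat=0.3, min_val=0.2):
        """pixels: list of (r, g, b) tuples in 0..255.
        Returns the fraction classified as green (blue-green algae)."""
        hits = 0
        for r, g, b in pixels:
            # colorsys works in 0..1; hue 1/3 corresponds to pure green
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            if hue_band[0] <= h <= hue_band[1] and s >= min_sat and v >= min_val:
                hits += 1
        return hits / len(pixels)

    sample = [(30, 180, 60)] * 3 + [(90, 110, 200)]   # 3 green, 1 blue pixel
    print(algae_fraction(sample))  # -> 0.75
    ```

    In the full system this fraction (or a per-region version of it) would be what the Raspberry Pi server reports to the web server as XML.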

Development of Gas Sensor Monitoring Services using Smart Phone and Web Server (스마트폰과 웹 서버를 활용한 가스 센서 모니터링 서비스 개발)

  • Roh, Jae-Sung;Lee, Sang-Geun;Hwang, In-Gyu;Lee, Jeong-Moo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.10a
    • /
    • pp.1048-1050
    • /
    • 2013
  • Mobile devices and smartphones are rapidly becoming the central computing and communication devices. Recent smartphones are programmable and come with a growing set of cheap, powerful embedded sensors, such as an accelerometer, digital compass, gyroscope, GPS, microphone, and camera. In this paper, we discuss the wireless gas sensing service architecture and develop gas sensor monitoring services using a smartphone and a web server.

  • PDF
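
    The sensing-to-server path above can be sketched as a small reporting message: a gas reading is packaged as JSON for the web server and flagged when it exceeds an alarm threshold. The field names and the 400 ppm threshold are illustrative assumptions; the paper's actual message format is not given in the abstract.

    ```python
    # Hedged sketch of the message a smartphone/gateway would POST to the
    # monitoring web server. All field names and thresholds are assumed.

    import json
    import time

    ALARM_PPM = 400  # assumed alarm threshold

    def make_report(sensor_id, ppm, ts=None):
        """Build the JSON report for one gas sensor reading."""
        return json.dumps({
            "sensor": sensor_id,
            "ppm": ppm,
            "alarm": ppm >= ALARM_PPM,
            "ts": ts if ts is not None else int(time.time()),
        })

    msg = make_report("kitchen-01", 512, ts=1700000000)
    print(msg)
    print(json.loads(msg)["alarm"])  # -> True
    ```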

Gesture based Natural User Interface for e-Training

  • Lim, C.J.;Lee, Nam-Hee;Jeong, Yun-Guen;Heo, Seung-Il
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.4
    • /
    • pp.577-583
    • /
    • 2012
  • Objective: This paper describes the process and results related to the development of a gesture recognition-based natural user interface (NUI) for a vehicle maintenance e-Training system. Background: E-Training refers to education and training that builds and improves the capabilities necessary to perform tasks by using information and communication technology (simulation, 3D virtual reality, and augmented reality), devices (PC, tablet, smartphone, and HMD), and environments (wired/wireless Internet and cloud computing). Method: Palm movement from a depth camera is used as a pointing device, and finger movement, extracted using the OpenCV library, serves as the selection protocol. Results: The proposed NUI allows trainees to control objects, such as cars and engines, on a large screen through gesture recognition. In addition, it includes a learning environment for understanding the procedure of assembling or disassembling certain parts. Conclusion: Future work concerns the implementation of gesture recognition technology for multiple trainees. Application: The results of this interface can be applied not only in e-Training systems but also in other systems, such as digital signage, tangible games, and controlling 3D content.
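
    The pointing side of the method above can be sketched in a few lines: the palm position in the depth-camera frame is mapped to large-screen coordinates, and a "click" fires when the pointer dwells in place. Note the paper's actual selection cue is finger movement extracted with OpenCV, which is not reproduced here; the dwell rule and all parameters below are illustrative assumptions.

    ```python
    # Hedged sketch of gesture pointing: scale camera-frame palm
    # coordinates to screen pixels, and fire a dwell-based selection.

    def palm_to_screen(palm_xy, cam_res=(640, 480), screen_res=(1920, 1080)):
        """Scale a palm coordinate in the camera frame to screen pixels."""
        x, y = palm_xy
        return (int(x * screen_res[0] / cam_res[0]),
                int(y * screen_res[1] / cam_res[1]))

    def dwell_select(points, radius=10, frames=5):
        """Return True when the last `frames` pointer samples stay within
        `radius` pixels of the first of them (a dwell 'click')."""
        if len(points) < frames:
            return False
        x0, y0 = points[-frames]
        return all(abs(x - x0) <= radius and abs(y - y0) <= radius
                   for x, y in points[-frames:])

    # Five nearly-still palm samples around the camera-frame centre
    track = [palm_to_screen((320 + i, 240)) for i in (0, 1, 0, 2, 1)]
    print(track[0], dwell_select(track))  # -> (960, 540) True
    ```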