• Title/Summary/Keyword: Web-based Camera system

A Fire Detection System based on YOLOv5 using Web Camera (웹카메라를 이용한 YOLOv5 기반 화재 감지 시스템)

  • Park, Dae-heum;Jang, Si-woong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.69-71
    • /
    • 2022
  • Today, the AI market has grown substantially with the development of AI, and image detection is among its most advanced areas, so many object detection models are built on YOLOv5. However, most object detection work focuses on objects with fixed, well-defined shapes. To recognize unstructured data such as fire, the object must first be learned and filtered before it can be detected. Therefore, in this paper, a fire monitoring system using YOLOv5 was designed to detect and analyze fires as unstructured data, and ways to improve the fire object detection model are suggested.

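The paper itself is not included in this listing, but the loop such a system implies is straightforward: webcam frames are read continuously and each one is passed through a YOLOv5 detector fine-tuned on a fire class. A minimal sketch follows, assuming the public ultralytics/yolov5 hub loader; the weights file name and confidence threshold are illustrative, not values from the paper.

```python
import cv2
import torch

# Load a YOLOv5 model; "fire_best.pt" stands in for weights fine-tuned on a fire class.
model = torch.hub.load("ultralytics/yolov5", "custom", path="fire_best.pt")
model.conf = 0.4  # minimum confidence before a box counts as a detection

cap = cv2.VideoCapture(0)  # default web camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = model(rgb)                 # run detection on the current frame
    boxes = results.xyxy[0]              # tensor of (x1, y1, x2, y2, conf, cls)
    if len(boxes):
        print(f"possible fire: {len(boxes)} region(s) above threshold")
    annotated = cv2.cvtColor(results.render()[0], cv2.COLOR_RGB2BGR)
    cv2.imshow("fire monitor", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```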

Monitoring system for grain sorting using embedded Linux-based servers and Web applications (임베디드 리눅스 기반의 서버와 웹 어플리케이션을 이용한 곡물 선별 모니터링 시스템)

  • Park, Se-hyun;Geum, Young-wook;Kim, Hyun-jae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.12
    • /
    • pp.2341-2347
    • /
    • 2016
  • In this paper, we implement a monitoring system for grain sorting using a high-speed FPGA and embedded Linux. Whereas the existing system was designed to run in stand-alone mode, the proposed system is built around a web server and web-based applications, and the interface between the web server and the high-speed FPGA hardware is designed into the implemented monitoring system. The proposed system combines the advantages of multi-tasking on the Linux web server with real-time, high-speed processing on the FPGA. The control logic for a high-speed line-scan CCD camera, the center-of-gravity method, HSL decoding, and the web-server interface are implemented in the FPGA. The implemented monitoring system also makes it possible to control grain monitoring, system failure, and recovery remotely through the web application. As a result, sorting quality is improved compared with the existing system.
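
The abstract attributes a center-of-gravity method to the FPGA for locating grains in the line-scan images. As a rough illustration of that calculation only (the real implementation is hardware logic, not Python), the sketch below computes an intensity-weighted centroid of the pixels above a threshold; the threshold and the synthetic strip are assumptions.

```python
import numpy as np

def center_of_gravity(gray: np.ndarray, threshold: int = 128):
    """Intensity-weighted centroid of the pixels above `threshold`.

    Illustrates the center-of-gravity calculation the abstract attributes to
    the FPGA; the threshold value here is arbitrary, not from the paper.
    """
    mask = gray > threshold
    if not mask.any():
        return None                       # no grain visible in this strip
    ys, xs = np.nonzero(mask)
    weights = gray[ys, xs].astype(np.float64)
    cx = np.sum(xs * weights) / weights.sum()
    cy = np.sum(ys * weights) / weights.sum()
    return cx, cy

# Example on a synthetic 8-bit line-scan strip (4 rows x 16 columns):
strip = np.zeros((4, 16), dtype=np.uint8)
strip[1:3, 5:9] = 200                     # a bright "grain" blob
print(center_of_gravity(strip))           # -> (6.5, 1.5)
```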

Development of Retina Healthcare Service System Using Smart Phone

  • Park, Gi Hun;Han, Ju Hyuck;Kim, Yong Suk
    • International Journal of Advanced Culture Technology
    • /
    • v.7 no.2
    • /
    • pp.227-237
    • /
    • 2019
  • In this paper, we developed a Retina Healthcare Service System through which patients can manage their own retinal health. With conventional portable ophthalmic cameras, patients cannot check their eye health by themselves, because most such cameras are operated by doctors in environments where fundus photography cannot otherwise be performed properly. The system consists of web, app, and camera modules: when a patient mounts the fundus-photography camera module on a smartphone and photographs the fundus through the app, the image is transmitted to a server, where a fundus-image reading model trained using deep learning assesses the condition of the patient's fundus. When the doctor gives an opinion on the patient's eye condition based on the reading result and the fundus photograph, the patient can check it through the app and decide whether to seek ophthalmologic treatment.
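
The described flow is app-to-server image upload followed by a deep-learning read of the fundus image. A minimal server-side sketch is shown below, assuming a Flask endpoint and a Keras model; the framework choice, the fundus_reader.h5 weights file, and the normal/abnormal label set are hypothetical stand-ins, not details from the paper.

```python
import io

import numpy as np
from flask import Flask, jsonify, request
from PIL import Image
from tensorflow import keras

app = Flask(__name__)
model = keras.models.load_model("fundus_reader.h5")   # hypothetical weights
LABELS = ["normal", "abnormal"]                        # hypothetical classes

@app.route("/fundus", methods=["POST"])
def read_fundus():
    # The app uploads the photograph as multipart form data under "image".
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    x = np.asarray(img.resize((224, 224)), dtype=np.float32)[None] / 255.0
    probs = model.predict(x)[0]
    # The reading would be stored for the doctor's review; here it is returned.
    return jsonify({"reading": LABELS[int(np.argmax(probs))],
                    "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```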

A Study on Web-based Mobile Mapping System Using Real-Time GPS/INS System (실시간 GPS/INS 시스템을 이용한 웹기반 모바일 매핑시스템 연구)

  • 이종기;김병국;권재현
    • Spatial Information Research
    • /
    • v.11 no.3
    • /
    • pp.291-299
    • /
    • 2003
  • The mobile mapping system collects geographic information at regular distance or time intervals through mounted sensors such as a pair of CCD cameras, GPS, an IMU (Inertial Measurement Unit), and an odometer. The advantage of such a system is that the positions and geographic information of mobile objects can be identified easily in real time. Among the many wireless channels for delivering real-time position and geographic information from the mobile mapping system to the user, such as PDA, wireless modem, cellular phone, and the web, the web is considered more stable, effective, and economical than the other methods. In this paper, a study on a web-based real-time mobile mapping platform that identifies the user's position is presented, using the real-time NovAtel BDS.

A Study on Development of Visual Navigation System based on Neural Network Learning

  • Shin, Suk-Young;Lee, Jang-Hee;You, Yang-Jun;Kang, Hoon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.2 no.1
    • /
    • pp.1-8
    • /
    • 2002
  • Visual navigation has been integrated into several navigation systems. This paper shows that the system recognizes difficult indoor roads without any specific marks such as painted guide lines or tape. In this method the robot navigates with visual sensors, using visual information to guide itself along the road. A neural network system was used to learn the driving pattern and decide where to move. In this paper, we present a vision-based process for an AMR (Autonomous Mobile Robot) that is able to navigate on indoor roads with simple computation. We used a single USB web camera instead of an expensive CCD camera to build a smaller and cheaper navigation system.
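
The abstract's core idea is that low-resolution web-camera frames feed a neural network that outputs a steering decision. The sketch below shows the shape of such a pipeline with a small Keras network and a three-way left/straight/right output; the architecture, input size, and the random stand-in training data are assumptions, not the paper's actual setup.

```python
import numpy as np
from tensorflow import keras

ACTIONS = ["left", "straight", "right"]

# A deliberately small network: downsampled grey frame in, steering class out.
model = keras.Sequential([
    keras.layers.Input(shape=(32, 32, 1)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(len(ACTIONS), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Training would use frames recorded while a human drives the robot, paired
# with the action taken at each frame (random data stands in for that here).
frames = np.random.rand(100, 32, 32, 1).astype("float32")
labels = np.random.randint(0, len(ACTIONS), size=100)
model.fit(frames, labels, epochs=1, verbose=0)

decision = ACTIONS[int(np.argmax(model.predict(frames[:1], verbose=0)))]
print("steering decision:", decision)
```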

MONITORING CONSTRUCTION PROCESSES: A SOLUTION USING WIRELESS TECHNOLOGY AND ONLINE COLLABORATIVE ENVIRONMENT

  • Sze-wing Leung;Stephen Mak;Bill L.P. Lee
    • International conference on construction engineering and project management
    • /
    • 2007.03a
    • /
    • pp.50-60
    • /
    • 2007
  • This paper focuses on designing a monitoring system that provides a cost-effective solution for quality assurance on construction projects. The construction site monitoring system integrates a long-range wireless network, network cameras, and a web-based collaborative platform. Users of the system can obtain the most up-to-date status of construction sites, such as worker behavior, project progress, and site events, anywhere with Internet connectivity. The system was carefully configured to maintain reliability under the conditions at construction sites. This paper reports the architecture of the monitoring system and reviews the related technologies. The system has been implemented and tested on a construction site, and promising results were obtained.

Server and Client Simulator for Web-based 3D Image Communication

  • Ko, Jung-Hwan;Lee, Sang-Tae;Kim, Eun-Soo
    • Journal of Information Display
    • /
    • v.5 no.4
    • /
    • pp.38-44
    • /
    • 2004
  • In this paper, a server and client simulator for a web-based multi-view 3D image communication system is implemented using IEEE 1394 digital cameras, an Intel Xeon server computer, and Microsoft's DirectShow programming library. In the proposed system, a two-view image is first captured with the IEEE 1394 stereo camera; this data is then compressed by extracting its disparity information on the Intel Xeon server and transmitted to the client system, where multi-view images are generated through an intermediate-view reconstruction method and finally displayed on the 3D display monitor. Experiments show that the proposed system can display 8-view images with an 8-bit grey level at a frame rate of 15 fps.
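
The key server-side step described is extracting disparity information from the stereo pair so that only one view plus disparities need be transmitted. The sketch below uses OpenCV's block matcher as a stand-in for the paper's own disparity estimation; the file names and matcher parameters are illustrative.

```python
import cv2

# Hypothetical stereo pair captured from the left and right cameras.
left = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_view.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)          # 16x fixed-point disparities

# The client could shift left-view pixels by a fraction of the disparity to
# synthesize intermediate views between the two captured ones.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```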

An Interactive Aerobic Training System Using Vision and Multimedia Technologies

  • Chalidabhongse, Thanarat H.;Noichaiboon, Alongkot
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1191-1194
    • /
    • 2004
  • We describe the development of an interactive aerobic training system using vision-based motion capture and multimedia technology. Unlike traditional one-way aerobic training on TV, the proposed system allows the virtual trainer to observe and interact with the user in real time. The system consists of a web camera connected to a PC that watches the user move. First, the animated character on the screen makes a move and instructs the user to follow it. The system applies a robust statistical background subtraction method to extract a silhouette of the moving user from the captured video. Principal body parts of the extracted silhouette are then located using a model-based approach, and the motion of these body parts is analyzed and compared with the motion of the animated character. The system gives the user audio feedback according to the result of the motion comparison. All animation and video processing run in real time on a PC with a consumer-grade camera. The proposed system is a good example of applying vision algorithms and multimedia technology to intelligent interactive home entertainment.

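The silhouette-extraction step described ("a robust statistical background subtraction method") can be approximated with OpenCV's MOG2 subtractor, which is used below purely as a stand-in for the paper's method; the learning history and minimum-area threshold are illustrative values.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                     # 255 = moving user
    mask = cv2.medianBlur(mask, 5)                     # clean up speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    big = [c for c in contours if cv2.contourArea(c) > 5000]
    if big:
        silhouette = max(big, key=cv2.contourArea)     # user's body outline
        x, y, w, h = cv2.boundingRect(silhouette)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("silhouette", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```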

Development of A Prototype Device to Capture Day/Night Cloud Images based on Whole-Sky Camera Using the Illumination Data (정밀조도정보를 이용한 전천카메라 기반의 주·야간 구름영상촬영용 원형장치 개발)

  • Lee, Jaewon;Park, Inchun;Cho, Jungho;Ki, GyunDo;Kim, Young Chul
    • Atmosphere
    • /
    • v.28 no.3
    • /
    • pp.317-324
    • /
    • 2018
  • In this study, we review a ground-based whole-sky camera (WSC) developed to continuously capture day and night cloud images with high temporal resolution, using illumination data from a precision Lightmeter. The WSC combines a precision Lightmeter, developed during the IYA (International Year of Astronomy) for analyzing artificial light pollution at night, with a DSLR camera equipped with a fish-eye lens, as widely used in observational astronomy. The WSC is designed to adjust the shutter speed and ISO of the camera according to the illumination data in order to capture cloud images stably. A Raspberry Pi automatically controls the process of taking cloud and sky images every minute, around the clock, under the varying conditions indicated by the Lightmeter illumination data; it is also used to post-process and store the cloud images and to upload the data to a web page in real time. Finally, by analyzing the cloud images captured with the developed device, we assess the technical feasibility of observing cloud distribution (cover, type, height) quantitatively and objectively with this optical system.
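
The control logic described, choosing shutter speed and ISO from the Lightmeter reading before each once-a-minute exposure, might look like the sketch below. The illuminance breakpoints, exposure settings, and the stubbed capture and readout functions are assumptions for illustration, not the paper's actual lookup table or camera interface.

```python
import time

def exposure_for(lux: float):
    """Map an illuminance reading to (shutter_seconds, iso)."""
    if lux > 1000:       # bright daylight
        return 1 / 1000, 100
    if lux > 10:         # twilight / overcast
        return 1 / 30, 400
    return 30.0, 1600    # night sky

def capture_image(shutter_s: float, iso: int) -> None:
    # Placeholder for triggering the DSLR (e.g. via a camera-tethering library).
    print(f"capturing: shutter={shutter_s}s ISO={iso}")

def read_lightmeter() -> float:
    # Placeholder for the precision Lightmeter readout.
    return 0.5

while True:
    shutter, iso = exposure_for(read_lightmeter())
    capture_image(shutter, iso)
    time.sleep(60)       # one frame per minute, around the clock
```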

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.6
    • /
    • pp.161-170
    • /
    • 2024
  • With recent technological advances, humans have made great progress in ease of living, and sight, motion, sound, and speech are now used to control various applications and software. In this paper, we explore a project in which gestures play a central role: controlling computer settings with hand gestures using computer vision, a topic under active research that continues to evolve. We create a module that acts as a volume-control program, using hand gestures to adjust the computer's system volume; the implementation uses OpenCV. The module uses the computer's web camera to record images or video, processes them to extract the needed information, and then, based on that input, adjusts the computer's volume settings, with the ability to both increase and decrease the volume. The only setup required is a web camera to capture the user's input images and video. The program performs gesture recognition with OpenCV, Python, and its libraries, identifies the specified hand gestures, and uses them to carry out changes to the device settings. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV, a widely used tool for image processing and computer vision, enjoys extensive popularity in this domain: its community numbers over 47,000 members, and as of a 2020 survey its estimated downloads exceed 18 million.
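
A common way to build the loop the abstract outlines is to track hand landmarks in each webcam frame and map the thumb-to-index-fingertip distance to a volume level. The sketch below uses MediaPipe Hands for the landmark step (the abstract names only OpenCV and Python) and simply prints the target level; the mapping constants are illustrative, and actually setting the system volume would require a platform audio API such as pycaw on Windows.

```python
import math

import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        thumb, index = lm[4], lm[8]                    # fingertip landmarks
        dist = math.hypot(thumb.x - index.x, thumb.y - index.y)
        # Map the normalized distance to a 0-100% level (constants are arbitrary).
        volume = max(0, min(100, int((dist - 0.02) / 0.25 * 100)))
        print(f"target volume: {volume}%")
    cv2.imshow("gesture volume", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```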