• Title/Summary/Keyword: polling signal


An Information Transmission Method of Photovoltaic Power System for 10kW Class (10kW급 태양광 발전시스템의 정보 전송방식)

  • Lee H.D.;Jeon S.B.;Lee S.B.;Kim J.G.;Ryu S.P.
    • Proceedings of the KIPE Conference / 2006.06a / pp.228-230 / 2006
  • This paper describes an information transmission method that uses communication networks between the devices in a photovoltaic power system, and presents test results obtained in the field. In this test, the transmission signal waveform, polling time, response time and request/response frames were measured between the devices. The measured results show excellent transmission characteristics.

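The entry above reports measuring polling time and response time between the devices over the communication network. As a rough illustration only, the sketch below times a poll/response exchange on a serial link using pyserial; the port name, baud rate, `REQUEST_FRAME` bytes, and `measure_response_time` helper are assumptions, not the paper's actual protocol.

```python
# Rough sketch of timing a poll/response exchange on a serial link (pyserial).
# The port, baud rate, and REQUEST_FRAME bytes are hypothetical; the paper's
# actual request/response frame format is not reproduced here.
import time
import serial  # pip install pyserial

REQUEST_FRAME = bytes([0x02, 0x10, 0x01, 0x03])   # hypothetical poll frame

def measure_response_time(port: str, baud: int = 9600, n_polls: int = 100):
    """Send the poll frame repeatedly and return the mean response latency."""
    latencies = []
    with serial.Serial(port, baud, timeout=1.0) as link:
        for _ in range(n_polls):
            link.reset_input_buffer()         # drop any stale bytes
            t0 = time.monotonic()
            link.write(REQUEST_FRAME)
            reply = link.read(16)             # read up to 16 response bytes
            if reply:                         # count only answered polls
                latencies.append(time.monotonic() - t0)
    return sum(latencies) / len(latencies) if latencies else None

# Example call (requires the device to be attached):
# print(measure_response_time("/dev/ttyUSB0"))
```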

Design and Development of Network for Housing Estate Security System

  • Nachin, Awacharin;Mitatha, Somsak;Dejhan, Kobchai;Kirdpipat, Patchanon;Miyanaga, Yoshikazu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2003.10a / pp.1480-1484 / 2003
  • This paper presents the design and development of a network for a housing estate security system. The system can cover up to 961 houses over a line up to 1,200 meters long at a transfer rate of 9,600 bps. The system checks for and warns of abnormal situations, and it can also switch electrical equipment in each house on and off via an AC line control system. The system consists of four parts. The first part is the security system of each house, which uses an MCS-51 microcontroller as the central processing unit to scan 32 sensors, control 8 appliances and send alarms; the microcontroller also receives control signals over the telephone line through a DTMF circuit. The second part is a distributed two-level master/slave network implemented over the RS-485 serial communication standard. The protocol is based on the OSI (Open Systems Interconnection) 7-layer model, and its design focuses on the speed, reliability and security of the transferred data. Network security uses DES encryption, message sequencing, time-stamp checking and an authentication system applied when a user accesses the system and when a new device is connected. Flow control in the system uses the Poll/Select and Stop-and-Wait methods. The third part is a central server based on a microcomputer whose main functions are storing event data in a database and reviewing the event history. The final part is an internet system through which users can access their own homes via the Internet; this web service is based on a combination of SOAP, HTTP and TCP/IP protocols, and messages are exchanged in XML format [6]. To conserve IP addresses, the system uses one IP address for the whole village, and all homes and appliances in the village are addressed using internal identification numbers. The proposed system achieves a data transfer accuracy of over 99.8% and a maximum polling time of 1,120 ms.

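The entry above describes Poll/Select flow control with Stop-and-Wait on a two-level RS-485 master/slave network. The sketch below is a minimal illustration of that pattern under assumed details: the control codes, frame layout, and the use of a local socket pair in place of the RS-485 bus are all hypothetical.

```python
# Minimal sketch of Poll/Select with Stop-and-Wait flow control.
# Control codes, frame layout, and the transport (a local socket pair instead
# of the RS-485 bus) are assumptions for illustration only.
import socket
import threading

POLL, ACK = 0x05, 0x06                        # hypothetical control codes

def slave(sock, address, reading=b"\x01\x02"):
    """Answer every poll addressed to this slave with ACK + sensor data."""
    while True:
        frame = sock.recv(256)
        if not frame:
            break
        if frame[0] == POLL and frame[1] == address:
            sock.sendall(bytes([ACK, address]) + reading)

def stop_and_wait_poll(sock, address, retries=3, timeout=0.5):
    """Send one poll frame and wait for the ACK; retransmit on timeout."""
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendall(bytes([POLL, address]))
        try:
            reply = sock.recv(256)
        except socket.timeout:
            continue                          # lost frame: retransmit
        if reply and reply[0] == ACK and reply[1] == address:
            return reply[2:]                  # payload returned by the slave
    return None

master_end, slave_end = socket.socketpair()
threading.Thread(target=slave, args=(slave_end, 0x10), daemon=True).start()
print("slave 0x10 data:", stop_and_wait_poll(master_end, 0x10))
```

In a Stop-and-Wait scheme the master sends one frame at a time and retransmits it until it is acknowledged, which is what `stop_and_wait_poll` models for a single slave address.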

Development of Vehicle LDW Application Service using AUTOSAR Platform on Multi-Core MCU (멀티코어 상의 AUTOSAR 플랫폼을 활용한 차량용 LDW 응용 서비스 개발)

  • Park, Mi-Ryong;Kim, Dongwon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.4 / pp.113-120 / 2014
  • In this paper, we examine an asymmetric multi-processing environment to provide an LDW (Lane Departure Warning) service. The asymmetric multi-processing environment consists of a high-speed MCU that supports rapid image processing and a low-speed MCU that communicates with other ECUs in the control domain. We also designed a rapid image processing application and an LDW application Software Component (SW-C) according to the AUTOSAR development process. For communication between the two MCUs, a timer-based polling IPC was designed. To communicate with other ECUs (Electronic Control Units), we designed CAN messages that provide alarm information and a receiving CAN message that captures the turn signal. We confirm the feasibility of developing various ADAS functions using an asymmetric multi-processing environment and the AUTOSAR platform, and we also expect the approach to support ISO 26262 functional safety.
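The entry above designs a timer-based polling IPC between the high-speed and low-speed MCUs. The sketch below models that idea in plain Python under assumed details: a shared mailbox stands in for the inter-core channel, and the 10 ms poll period and message fields are illustrative, not values from the paper.

```python
# Minimal model of a timer-driven polling IPC between two processing units.
# A shared mailbox stands in for the inter-core channel; the 10 ms poll period
# and the message fields are illustrative assumptions.
import queue
import threading
import time

mailbox = queue.Queue()          # stands in for the shared-memory mailbox

def image_processing_unit():
    """High-speed unit: posts lane-departure results into the mailbox."""
    for frame_id in range(5):
        mailbox.put({"frame": frame_id, "lane_departure": frame_id % 2 == 0})
        time.sleep(0.03)

def control_unit(poll_period_s=0.01, duration_s=0.3):
    """Low-speed unit: polls the mailbox on a fixed timer and raises alarms."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        try:
            msg = mailbox.get_nowait()        # non-blocking poll
        except queue.Empty:
            pass
        else:
            if msg["lane_departure"]:
                print(f"frame {msg['frame']}: send LDW alarm message on CAN")
        time.sleep(poll_period_s)             # wait for the next timer tick

producer = threading.Thread(target=image_processing_unit)
producer.start()
control_unit()
producer.join()
```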

CNN-LSTM-based Upper Extremity Rehabilitation Exercise Real-time Monitoring System (CNN-LSTM 기반의 상지 재활운동 실시간 모니터링 시스템)

  • Jae-Jung Kim;Jung-Hyun Kim;Sol Lee;Ji-Yun Seo;Do-Un Jeong
    • Journal of the Institute of Convergence Signal Processing / v.24 no.3 / pp.134-139 / 2023
  • Rehabilitation patients undergo outpatient treatment and perform daily rehabilitation exercises to recover physical function, with the aim of returning to society quickly after surgical treatment. Unlike exercising in a hospital with the help of a professional therapist, performing rehabilitation exercises alone on a daily basis poses many difficulties for the patient. In this paper, we propose a CNN-LSTM-based real-time monitoring system for upper-limb rehabilitation so that patients can exercise efficiently and with correct posture in daily life. The proposed system measures biological signals through shoulder-mounted hardware equipped with EMG and IMU sensors, performs preprocessing and normalization, and uses the signals as a training dataset. The implemented model consists of three convolutional stacks, each followed by a pooling layer, for feature detection and two LSTM layers for classification, and it achieved an accuracy of 97.44% on the validation data. We then conducted a comparative evaluation against the Teachable Machine: our model achieved 93.6% and the Teachable Machine 94.4%, and both models showed similar classification performance.
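The entry above describes a model with three convolutional stacks with pooling for feature detection and two LSTM layers for classification. The Keras sketch below shows one plausible layout of such a CNN-LSTM; the window length, channel count, filter sizes, and number of classes are assumptions for illustration, not the paper's actual configuration.

```python
# Illustrative CNN-LSTM layout: three Conv1D stacks, each followed by a
# pooling layer, then two LSTM layers and a softmax classifier.
# Shapes and sizes below are assumptions, not the paper's configuration.
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, CHANNELS, NUM_CLASSES = 200, 7, 4     # e.g. 1 EMG + 6 IMU channels (assumed)

inputs = tf.keras.Input(shape=(WINDOW, CHANNELS))
x = layers.Conv1D(32, 5, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(2)(x)                 # pooling after first conv stack
x = layers.Conv1D(64, 5, padding="same", activation="relu")(x)
x = layers.MaxPooling1D(2)(x)                 # pooling after second conv stack
x = layers.Conv1D(128, 5, padding="same", activation="relu")(x)
x = layers.MaxPooling1D(2)(x)                 # pooling after third conv stack
x = layers.LSTM(64, return_sequences=True)(x)  # first LSTM layer
x = layers.LSTM(32)(x)                         # second LSTM layer
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```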

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning technology. In general, an image contains many types of characters, and optical character recognition technology extracts all character information in the image; however, some applications need to ignore characters that are not of interest and focus only on specific types of characters. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Character strings that are not of interest, such as the device type, manufacturer, manufacturing date and specification, are not valuable information to the application. Thus, the application has to analyze only the point-of-interest regions and specific types of characters to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the point-of-interest regions for selective character information extraction. We built three neural networks for the application system. The first is a convolutional neural network that detects the point-of-interest regions of the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a point-of-interest region into spatial-sequential feature vectors; and the third is a bi-directional long short-term memory network that converts the spatial-sequential information into character strings by mapping the feature vectors to characters through time-series analysis. In this research, the point-of-interest character strings are the device ID and the gas usage amount: the device ID consists of 12 Arabic numerals and the gas usage amount consists of 4 to 5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes the reading request from the mobile device into an input queue with a FIFO (First In First Out) structure. The slave process consists of the three types of deep neural networks that conduct the character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. If there are requests from the master process in the input queue, the slave process converts the image in the input queue into the device ID character string, the gas usage amount character string and the position information of the strings, returns this information to the output queue, and switches to idle mode to poll the input queue again. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation and testing of the three deep neural networks: 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale and slant): normal data are clean images, noise means images with a noise signal, reflex means images with light reflection in the gasometer region, scale means images with a small object size due to long-distance capturing, and slant means images that are not horizontally flat. The final character string recognition accuracies for the device ID and the gas usage amount of the normal data are 0.960 and 0.864 respectively.
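The entry above describes a master process that pushes requests into a FIFO input queue and a slave process that polls the queue, runs the recognition networks, and posts results to an output queue. The sketch below mimics that loop with Python's standard `queue` module; the `recognize` stub, sentinel shutdown, and poll interval are assumptions for illustration, not the paper's AWS implementation.

```python
# Minimal sketch of the master/slave queue polling loop described above.
# The recognize() stub, poll interval, and sentinel shutdown are assumptions;
# the real system runs the CNN/CRNN networks on a GPU worker in AWS.
import queue
import threading

input_queue = queue.Queue()     # FIFO of reading requests pushed by the master
output_queue = queue.Queue()    # recognition results returned to the master

def recognize(image: bytes) -> dict:
    """Stand-in for the CNN detection + CRNN recognition pipeline."""
    return {"device_id": "000000000000", "usage": "1234", "bytes": len(image)}

def slave_worker(poll_interval_s: float = 0.05) -> None:
    """Poll the input queue; process a request if one is present, else idle."""
    while True:
        try:
            image = input_queue.get(timeout=poll_interval_s)  # poll with timeout
        except queue.Empty:
            continue                        # idle: go back to polling
        if image is None:                   # sentinel from the master: stop
            break
        output_queue.put(recognize(image))  # return result to the master

worker = threading.Thread(target=slave_worker)
worker.start()
input_queue.put(b"\x00" * 1024)             # master: enqueue one image request
print(output_queue.get())                   # master: collect the result
input_queue.put(None)                       # master: shut the worker down
worker.join()
```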