• Title/Summary/Keyword: Intelligent Network Camera


Real-time Multiple Pedestrians Tracking for Embedded Smart Visual Systems

  • Nguyen, Van Ngoc Nghia;Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.2
    • /
    • pp.167-177
    • /
    • 2019
  • Although much progress has been achieved in Multiple Object Tracking (MOT), most reported MOT methods are still not satisfactory for commercial embedded products such as Pan-Tilt-Zoom (PTZ) cameras. In this paper, we propose a real-time multiple-pedestrian tracking method for embedded environments. First, we design a new lightweight convolutional neural network (CNN)-based pedestrian detector, constructed to detect even small pedestrians. To further save processing time, the detector is applied only on every other frame, and a Kalman filter predicts pedestrians' positions in the frames where the CNN-based detector is not applied. Pose orientation information is incorporated to enhance object association without further computational cost. Experiments on Nvidia's embedded computing board, the Jetson TX2, verify that the designed detector detects even small pedestrians quickly and accurately compared with many state-of-the-art detectors, and that the proposed method tracks pedestrians in real time with accuracy comparable to many state-of-the-art tracking methods that do not target embedded systems.
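The detect-every-other-frame scheme above can be sketched with a constant-velocity Kalman filter: a minimal illustration with assumed noise parameters and a simulated detection stream, not the authors' implementation.

```python
import numpy as np

# Sketch (assumed parameters): state = [x, y, vx, vy]; the detector refreshes
# the filter on even frames, odd frames use prediction only.
class SimpleKalman:
    def __init__(self, x, y):
        self.state = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, 1, 0],
                           [0, 1, 0, 1],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)   # constant velocity
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # observe position
        self.Q = np.eye(4) * 0.01   # process noise (assumed)
        self.R = np.eye(2) * 1.0    # measurement noise (assumed)

    def predict(self):
        self.state = self.F @ self.state
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.state[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.state
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.state = self.state + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = SimpleKalman(100.0, 50.0)
for frame in range(6):
    pos = kf.predict()                              # position for this frame
    if frame % 2 == 0:                              # detector runs on even frames
        kf.update(100.0 + 2 * frame, 50.0 + frame)  # simulated detection
```

In a full tracker, one such filter would run per pedestrian, with detections associated to filters (here augmented by pose orientation) before each update.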

IoT Enabled Smart Emergency LED Exit Sign controller Design using Arduino

  • Jung, Joonseok;Kwon, Jongman;Mfitumukiza, Joseph;Jung, Soonho;Lee, Minwoo;Cha, Jaesang
    • International journal of advanced smart convergence
    • /
    • v.6 no.1
    • /
    • pp.76-81
    • /
    • 2017
  • This paper presents a low-cost, flexible IoT-enabled smart LED controller for emergency exit signs using Arduino. The Internet of Things (IoT) has become a global network that connects physical objects through network communications for inter-device communication, access to information on the internet, interaction with users, and a permanently connected environment. A crucial point of this paper is the potential of the Arduino platform, as a low-cost, easy-to-use microcontroller, combined with the various sensors applied in IoT technology to facilitate the development of intelligent products. To demonstrate the feasibility and effectiveness of the system, an LED strip, various sensors, an Arduino, a power plug, and a ZigBee module were integrated to set up the smart emergency exit sign system. The general concept of the proposed design is the combination of sensors, such as smoke detectors, humidity and temperature sensors, glass-break sensors, and camera sensors, connected to the main controller (Arduino) to communicate with the LED exit sign displays and dedicated PC monitors of the integrated monitoring system (control room) through gateway devices using the ZigBee module. A critical appraisal of the approach concludes the paper.

Building the Quality Management System for Compact Camera Module(CCM) Assembly Line (휴대용 카메라 모듈(CCM) 제조 라인에 대한 데이터마이닝 기반 품질관리시스템 구축)

  • Yu, Song-Jin;Kang, Boo-Sik;Hong, Han-Kook
    • Journal of Intelligence and Information Systems
    • /
    • v.14 no.4
    • /
    • pp.89-101
    • /
    • 2008
  • The most widely used tool for quality control in the manufacturing industry is the control chart. However, it has limitations in the current situation, where most manufacturing facilities are automated and several manufacturing processes are interdependent, as in a CCM assembly line. To solve these problems, we propose a data mining-based quality management system consisting of a monitoring system, which monitors the flow of processes in a single window, and a feature extraction system, which predicts the yield of the final product and identifies which processes affect its quality. The quality management system uses decision trees, neural networks, and self-organizing maps for data mining. We hope the proposed system helps the manufacturing process produce products of stable quality and provides engineers with useful information, such as the predicted yield for the current status and identification of the processes causing lot abnormalities.
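The yield-prediction and causal-process idea above can be sketched with a decision tree on synthetic data: the four "process metrics" and the dominant stage are assumptions for illustration, not the paper's CCM line data, and feature importances stand in for the causal-process identification.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Sketch: predict pass/fail of a lot from per-process measurements, then rank
# the processes by feature importance to see which stages drive final quality.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))       # 4 hypothetical process metrics per lot
y = (X[:, 2] > 0).astype(int)       # in this toy data, stage 2 decides quality

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
ranking = np.argsort(tree.feature_importances_)[::-1]   # most impactful first
```

On real line data the same ranking step would point engineers at the processes most responsible for abnormal lots, which is the role the paper assigns to its feature extraction system.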


Performance Analysis of Face Recognition by Face Image Resolution Using CNN without Backpropagation and LDA (역전파가 제거된 CNN과 LDA를 이용한 얼굴 영상 해상도별 얼굴 인식률 분석)

  • Moon, Hae-Min;Park, Jin-Won;Pan, Sung Bum
    • Smart Media Journal
    • /
    • v.5 no.1
    • /
    • pp.24-29
    • /
    • 2016
  • To satisfy the needs of a high-level intelligent surveillance system, it must be able to extract objects and classify them to identify precise information about each object. A representative method of identifying a person is face recognition, whose recognition rate changes with environmental factors such as illumination, background, and camera angle. In this paper, we analyze the robustness of face recognition as the capture distance changes, through a variety of experiments conducted on real face images taken at 1 m to 5 m. Face recognition based on Linear Discriminant Analysis (LDA) shows the best performance, 75.4% on average, when a large number of face images per person is used for training, whereas face recognition based on a Convolutional Neural Network (CNN) performs best, 69.8% on average, when the number of face images per person is fewer than five. In addition, the recognition rate for low-resolution faces drops rapidly when the face image is smaller than 15×15 pixels.
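The per-resolution comparison can be sketched with scikit-learn's LDA on flattened synthetic "face" vectors: the identity templates, noise level, and sample counts below are illustrative assumptions, not the paper's 1 m to 5 m face set.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Sketch: train an LDA classifier on flattened images at two sizes and compare
# held-out accuracy as the resolution changes.
rng = np.random.default_rng(0)

def make_faces(templates, n):
    # hypothetical generator: one template per identity plus pixel noise
    labels = rng.integers(0, templates.shape[0], n)
    X = templates[labels] + rng.normal(scale=1.0, size=(n, templates.shape[1]))
    return X, labels

accuracy = {}
for size in (15, 30):                                 # image side -> size*size features
    templates = rng.normal(size=(5, size * size))     # 5 synthetic identities
    X_train, y_train = make_faces(templates, 200)
    X_test, y_test = make_faces(templates, 100)
    model = LinearDiscriminantAnalysis().fit(X_train, y_train)
    accuracy[size] = model.score(X_test, y_test)
```

The paper's experiment follows the same shape, but with real downsampled face crops, so the accuracy gap between sizes there reflects genuine information loss rather than synthetic noise.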

Class 1·3 Vehicle Classification Using Deep Learning and Thermal Image (열화상 카메라를 활용한 딥러닝 기반의 1·3종 차량 분류)

  • Jung, Yoo Seok;Jung, Do Young
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.19 no.6
    • /
    • pp.96-106
    • /
    • 2020
  • To overcome the limitations of traffic monitoring with embedded sensors such as loop and piezo sensors, a thermal imaging camera was installed on the roadside. As Class 1 vehicles (passenger cars) are getting longer, it is becoming difficult to distinguish them from Class 3 vehicles (2-axle trucks) using embedded sensors. The collected images were labeled to generate training data: a total of 17,536 vehicle images (640×480 pixels). A convolutional neural network (CNN) was used to classify vehicles from the thermal images. Despite the limited data volume and quality, a classification accuracy of 97.7% was achieved, showing the feasibility of an AI-based traffic monitoring system. If more training data are collected in the future, 12-class classification will become possible, and AI-based monitoring will be able to identify not only the 12 standard classes but also new categories such as eco-friendly vehicles, vehicles in violation, and motorcycles, which can serve as statistical data for national policy, research, and industry.
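The CNN classification step rests on convolution, nonlinearity, and pooling. A minimal NumPy sketch of these building blocks, run on a toy 8×8 "thermal" patch with an assumed edge kernel (the paper's actual network architecture is not given here):

```python
import numpy as np

# Building blocks of a CNN, written out explicitly for illustration.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)

def max_pool(x, size=2):
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy "thermal" patch: cold left half, warm right half; a horizontal edge
# kernel responds at the boundary.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
edge = np.array([[-1.0, 1.0]])
feature_map = max_pool(relu(conv2d(patch, edge)))
```

In a trained network, kernels like `edge` are learned rather than hand-set, and stacks of such layers feed a classifier head that outputs the vehicle class.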

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.19-28
    • /
    • 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location of the audience, or to improve and deploy interaction between exhibits and audience by analyzing usage patterns. To provide multimodal interaction services to an audience at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used: it obtains the real-time location of fast-moving subjects and is therefore one of the key technologies in fields requiring location tracking services. However, because GPS tracks location via satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, studies on indoor location tracking use very-short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have shortcomings: the audience needs additional sensor devices, and the system becomes difficult and expensive as the density of the target area increases. In addition, a typical exhibition environment has many obstacles for the network, which degrades system performance. Above all, interaction based on these older technologies cannot provide a natural service to users, and because the system relies on sensor recognition, every user must carry a device, which limits the number of users who can use the system simultaneously.
To make up for these shortcomings, in this study we suggest a technology that obtains exact user location information through location mapping using Wi-Fi and 3D cameras together with users' smartphones. We apply the signal amplitude of wireless LAN access points to build a low-cost indoor location tracking system: an AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the device itself can serve directly as the tracking client. We used the Microsoft Kinect sensor as the 3D camera; Kinect can discriminate depth and human information inside the shooting area, so it is appropriate for extracting users' body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users; they are connected to a Camera Client, which calculates the mapping information aligned to each cell and extracts the users' exact location as well as their status and pattern information. The location mapping technique of the Camera Client decreases the error rate of indoor location services, increases the accuracy of individual discrimination in the area through body information, and establishes a foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive appropriate interaction services through the main server.

Design of Near Real-Time land Monitoring System over the Korean Peninsula

  • Lee, Kyu-Sung;Yoon, Jong-Suk
    • Spatial Information Research
    • /
    • v.16 no.4
    • /
    • pp.411-420
    • /
    • 2008
  • To provide a technological foundation for periodic, real-time land monitoring over the Korean peninsula, where land cover changes are prevalent, the Land Monitoring Research project was initiated as one of five core projects within the Intelligent National Land Information Technology Innovation Project operated by the Korean Land Spatialization Group (KLSG). This four-year project is categorized into two research themes with nine sub-projects. The first theme deals with real-time data acquisition from aerial platforms and in-situ measurements by a ubiquitous sensor network (USN), ground video cameras, and automobile-based data collection systems. The second theme focuses on developing application systems that can be directly utilized by public organizations dealing with land monitoring nationwide. The Moderate Resolution Imaging Spectroradiometer (MODIS)-based land monitoring system currently under development is one such application system, designed to provide necessary information on the status and condition of land cover in near real time.


Design and Implementation of a Broadcasting System Using a Mobile Camera (모바일 카메라를 이용한 방송 시스템 설계 및 구현)

  • Park, Youngha;Seong, Kiyoung;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.10
    • /
    • pp.1330-1336
    • /
    • 2021
  • Media sharing platforms such as YouTube have grown significantly in the mobile environment. Such platforms allow users to select and view broadcast programs, once available only on TV, through network-connected PCs and mobile devices, and to share their own media content. Although mobile and TV broadcasts can now be viewed equally, there is a time difference between the real-time scene and the video transmitted to TV and mobile, so what is shown differs from the actual situation. We want this time difference reduced so that viewing matches real time, and there is a need for a system that can broadcast from any environment at any time. Therefore, in this paper, a broadcasting system for the mobile environment was designed and implemented, and the delay time difference was reduced by improving the processing method.

Detection of Number and Character Area of License Plate Using Deep Learning and Semantic Image Segmentation (딥러닝과 의미론적 영상분할을 이용한 자동차 번호판의 숫자 및 문자영역 검출)

  • Lee, Jeong-Hwan
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.1
    • /
    • pp.29-35
    • /
    • 2021
  • License plate recognition plays a key role in intelligent transportation systems, so efficiently detecting the number and character areas is a very important step. In this paper, we propose a method to effectively detect the license plate number area by applying deep learning and a semantic image segmentation algorithm. The proposed method detects number and text areas directly from the license plate, without preprocessing such as pixel projection. The license plate images were acquired from a fixed camera installed on the road and reflect various real situations, including weather and lighting changes. The input images were normalized to reduce color variation, and the deep neural networks used in the experiment were VGG16, VGG19, ResNet18, and ResNet50. To examine the performance of the proposed method, we experimented with 500 license plate images: 300 were used for training and 200 for testing. As a result of computer simulation, ResNet50 performed best, achieving 95.77% accuracy.
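Downstream of a semantic segmentation network, per-character regions can be recovered from the predicted mask. A minimal sketch using BFS connected components (the tiny mask below is a toy stand-in, not the paper's network output):

```python
from collections import deque

def component_boxes(mask):
    """Return one (x0, y0, x1, y1) bounding box per 4-connected component."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                q = deque([(sy, sx)])
                seen[sy][sx] = True
                y0 = y1 = sy
                x0 = x1 = sx
                while q:                       # BFS flood fill
                    y, x = q.popleft()
                    y0, y1 = min(y0, y), max(y1, y)
                    x0, x1 = min(x0, x), max(x1, x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes

# Toy binary mask: two "character" blobs.
mask = [[0, 1, 1, 0, 0, 1],
        [0, 1, 1, 0, 0, 1],
        [0, 0, 0, 0, 0, 1]]
boxes = component_boxes(mask)
```

This is exactly the step that pixel-projection preprocessing would otherwise perform; with per-pixel segmentation it reduces to grouping the predicted foreground pixels.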

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them, but some applications need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images to bill users; strings such as the device type, manufacturer, manufacturing date, and specification are not valuable to the application. Thus, the application has to analyze only the region of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network)-based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest. We built three neural networks for the application system.
The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory network that converts the sequential information into character strings through time-series analysis, mapping feature vectors to characters. In this research, the strings of interest are the device ID, which consists of 12 Arabic numerals, and the gas usage amount, which consists of 4-5 Arabic numerals. All system components are implemented on Amazon Web Services with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient, fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request from a mobile device into an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks conducting character recognition and runs on the NVIDIA GPU; it continuously polls the input queue for recognition requests. When requests from the master process are present in the input queue, the slave process converts the image into the device ID character string, the gas usage amount character string, and the position information of the strings, returns the information to an output queue, and switches back to idle mode to poll the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks.
Of these, 22,985 images were used for training and validation and 4,135 for testing. We randomly split the 22,985 images at an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into five types: normal (clean images), noise (images with noise signal), reflex (light reflection in the gasometer region), scale (small object size due to long-distance capture), and slant (images that are not horizontally flat). The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
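The master-slave queueing flow described above can be sketched with Python's thread-safe FIFO queues; the recognition step is stubbed out here, whereas the real slave runs the three neural networks on a GPU.

```python
import queue
import threading

input_q = queue.Queue()    # master pushes reading requests (FIFO)
output_q = queue.Queue()   # slave returns recognition results

def slave_worker():
    # Slave: poll the input queue, "recognize" the image, return the result.
    while True:
        request = input_q.get()
        if request is None:          # sentinel tells the worker to stop
            break
        image_id, image_bytes = request
        # Stub standing in for the CNN detector + CRNN recognizer pipeline.
        result = {"image_id": image_id, "device_id": "stub", "usage": "stub"}
        output_q.put(result)
        input_q.task_done()

worker = threading.Thread(target=slave_worker)
worker.start()

# Master: enqueue two mock requests, then stop the worker.
for i in range(2):
    input_q.put((i, b"raw-image-bytes"))
input_q.put(None)
worker.join()

results = [output_q.get() for _ in range(2)]
```

Scaling this pattern to the stated 700,000 requests per day is a matter of running many slave workers against the same input queue, which is what the GPU-backed slave processes do in the described architecture.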