• Title/Summary/Keyword: Network Camera

645 search results

Implementation of a USB Camera Interface Based on an Embedded Linux System (임베디드 LINUX 시스템 기반 USB 카메라 인터페이스 구현)

  • Song Sung-Hee;Kim Jeong-Hyeon;Kim Tae-Hyo
    • Journal of the Institute of Convergence Signal Processing / v.6 no.4 / pp.169-175 / 2005
  • Recently, embedded system implementation has increasingly drawn the attention of information technology (IT) engineers worldwide. Until now, practical real-time systems have been limited with respect to image acquisition and processing. In this paper, a USB camera interface system based on the embedded Linux OS is implemented using a low-cost USB 2.0 camera. The system acquires image signals into memory via an X-Hyper255B processor from the USB camera. The USB camera is initialized through Video4Linux for the kernel device driver, after which image capturing and image processing can be performed. It is confirmed that the image data can be transformed into Network File System (NFS) packets and transmitted over the network, so that the data can be monitored from a client computer connected to the internetwork.
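Video4Linux webcams of this kind typically deliver frames in packed YUYV (4:2:2) format; before any processing, the luma channel is usually extracted. A minimal sketch of that step (frame layout and dimensions are assumptions; no camera hardware is involved):

```python
def yuyv_to_gray(frame: bytes, width: int, height: int) -> list:
    """Extract the luma (Y) channel from a YUYV-packed frame.

    V4L2 webcams commonly deliver 4:2:2 YUYV: each 4-byte group
    encodes two pixels as Y0 U Y1 V.  Keeping only Y yields a
    grayscale image, a typical first step before image processing.
    """
    expected = width * height * 2  # 2 bytes per pixel in YUYV
    if len(frame) != expected:
        raise ValueError(f"expected {expected} bytes, got {len(frame)}")
    gray = list(frame[0::2])  # Y samples sit at every even byte offset
    return [gray[r * width:(r + 1) * width] for r in range(height)]

# Synthetic 4x2 frame: two identical rows of pixels 10, 20, 30, 40.
frame = bytes([10, 128, 20, 128, 30, 128, 40, 128] * 2)
img = yuyv_to_gray(frame, width=4, height=2)
```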


Video Camera Model Identification System Using Deep Learning (딥 러닝을 이용한 비디오 카메라 모델 판별 시스템)

  • Kim, Dong-Hyun;Lee, Soo-Hyeon;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology / v.17 no.8 / pp.1-9 / 2019
  • With the development of imaging and information communication technology in modern society, image acquisition and mass production technologies have advanced rapidly. However, crimes exploiting these technologies have also increased, and forensic studies are conducted to counter them. Identification techniques for image acquisition devices have been studied extensively, but the field has been limited to still images. In this paper, a camera model identification technique for video, rather than still images, is proposed. Video frames are analyzed using a model trained on images. Through training and analysis that consider the frame characteristics of video, we show the superiority of a model using P frames. We then present a video camera model identification system that applies a majority-based decision algorithm. In an experiment with five video camera models, we obtained up to 96.18% accuracy for individual frame identification, and the proposed video camera model identification system achieved a 100% identification rate for each camera model.
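The majority-based decision over per-frame predictions described above can be sketched as follows (the model labels are hypothetical, and the per-frame classifier itself is assumed to exist):

```python
from collections import Counter

def identify_camera_model(frame_predictions):
    """Decide a video's camera model by majority vote over the
    per-frame model predictions, as in the paper's decision stage."""
    if not frame_predictions:
        raise ValueError("no frame predictions given")
    counts = Counter(frame_predictions)
    model, _ = counts.most_common(1)[0]
    return model

# Hypothetical per-frame outputs for one video.
preds = ["GalaxyS8", "GalaxyS8", "iPhone7", "GalaxyS8", "iPhone7"]
video_model = identify_camera_model(preds)
```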

A MNN(Modular Neural Network) for Robot Endeffector Recognition (로봇 Endeffector 인식을 위한 모듈라 신경회로망)

  • 김영부;박동선
    • Proceedings of the IEEK Conference / 1999.06a / pp.496-499 / 1999
  • This paper describes a modular neural network (MNN) for a vision system that tracks a given object using a sequence of images from a camera unit. The MNN is used to precisely recognize the given robot endeffector and to minimize processing time. Since the robot endeffector can appear in many different shapes in 3-D space, an MNN structure containing a set of feedforward neural networks can be more attractive for recognizing the given object. Each single neural network learns the endeffector from a cluster of training patterns. The training patterns for each network share similar characteristics, so they can be trained easily. The trained MNN is less sensitive to noise and shows better performance in recognizing the endeffector; its recognition rate is enhanced by 14% over a single neural network. A vision system with the MNN can precisely recognize the endeffector and place it at the center of a display for a remote operator.
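The routing idea behind such a modular network, one specialist per view cluster with the most confident specialist deciding, can be sketched as below; the prototype-similarity "modules" are simple stand-ins for the trained feedforward networks, and the threshold is an assumption:

```python
def make_module(prototype):
    """Stand-in for one trained feedforward network: scores an input
    by similarity to the view cluster it specialized on."""
    def score(x):
        d = sum((a - b) ** 2 for a, b in zip(prototype, x))
        return 1.0 / (1.0 + d)  # 1.0 at the prototype, decaying with distance
    return score

def mnn_recognize(modules, x, threshold=0.5):
    """Run every specialist module and take the maximum score; the
    endeffector is recognized if some specialist is confident enough."""
    scores = [m(x) for m in modules]
    best = max(range(len(scores)), key=scores.__getitem__)
    return scores[best] >= threshold, best

# Three modules, each specialized on one 3-D view of the endeffector.
modules = [make_module(p) for p in ([0.0, 0.0], [1.0, 0.0], [0.0, 1.0])]
recognized, winner = mnn_recognize(modules, [0.9, 0.1])
```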


Study on the implementation of a home network system using a ubiquitous sensor network (유비쿼터스 센서 네트워크을 이용한 홈네트워크 시스템 구현에 관한 연구)

  • Nam, Sang-Yep;Park, Chun-Myoung
    • Proceedings of the IEEK Conference / 2007.07a / pp.479-480 / 2007
  • This paper presents the implementation of a home network system using a ubiquitous sensor network and an embedded system. A PXA270 and a CC2420 were used to build the home server of the wireless sensor home network system. The wireless control system consists of a gas valve, a DC motor, a lamp, and a door lock. The wireless detection system consists of a gas detection sensor, a movement detection sensor, and an intrusion detection sensor. The environment-sensing system was composed of temperature, humidity, microphone, illuminance, acceleration, and infrared temperature sensing modules; in addition, an RFID module and a USB camera were installed to complete the ubiquitous home network.


A Study of Electrical and Optical Method of Safety Standards for diagnosis of Power Facility using UV-IR Camera (UV-IR 카메라를 이용한 전력설비 진단을 위한 전기 및 광학적 안전 기준 설정 연구)

  • Kim, Young-Seok;Kim, Chong-Min;Choi, Myeong-Il;Bang, Sun-Bae;Shong, Kil-Mok;Kwag, Dong-Soon
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.27 no.4 / pp.54-61 / 2013
  • The UV-IR camera is used for predictive maintenance of high-voltage equipment, together with measurement of the temperature of localized heating and corona discharge. This paper suggests judgment methods, namely the discharge count, the UV image pattern, and the discharge matching rate, for applying the UV-IR camera to power facilities. The discharge count method counts UV image pixel values. The UV image pattern method determines the UV image shape using a neural network algorithm, classifying it as Sunflower, Jellyfish, or Amoeba. The UV discharge matching method compares the UV image size at breakdown with the measured UV image size according to distance.
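The discharge-count criterion above amounts to counting UV pixels that exceed an intensity threshold. A minimal sketch (the threshold value and image contents are assumptions for illustration):

```python
def discharge_count(uv_image, threshold=200):
    """Count UV image pixels at or above an intensity threshold, in the
    spirit of the paper's discharge-count judgment method."""
    return sum(1 for row in uv_image for px in row if px >= threshold)

# Hypothetical 8-bit UV frame with a small bright discharge region.
uv_frame = [
    [0,   0, 250, 0],
    [0, 220, 255, 0],
    [0,   0,   0, 0],
]
count = discharge_count(uv_frame)
```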

EpiLoc: Deep Camera Localization Under Epipolar Constraint

  • Xu, Luoyuan;Guan, Tao;Luo, Yawei;Wang, Yuesong;Chen, Zhuo;Liu, WenKai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.6 / pp.2044-2059 / 2022
  • Recent works have shown that geometric constraints can be harnessed to boost the performance of CNN-based camera localization. However, the existing strategies are limited to imposing image-level constraints between pose pairs, which is weak and coarse-grained. In this paper, we introduce a pixel-level epipolar geometry constraint into a vanilla localization framework without ground-truth 3D information. Dubbed EpiLoc, our method establishes the geometric relationship between pixels in different images by utilizing epipolar geometry, thus forcing the network to regress more accurate poses. We also propose a variant called EpiSingle to cope with non-sequential training images, which can construct the epipolar geometry constraint from a single image in a self-supervised manner. Extensive experiments on the public indoor 7Scenes and outdoor RobotCar datasets show that the proposed pixel-level constraint is valuable, and helps our EpiLoc achieve state-of-the-art results in the end-to-end camera localization task.
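The pixel-level constraint penalizes violations of the epipolar relation x'ᵀ F x = 0 between corresponding pixels. A minimal residual, assuming normalized camera coordinates (identity intrinsics) and a fundamental matrix F implied by the regressed relative pose (a simplified stand-in for the paper's full loss):

```python
def epipolar_residual(F, x, x_prime):
    """Point-wise epipolar error |x'^T F x| for a pixel correspondence
    (x, x_prime) given a 3x3 fundamental matrix F; zero when the
    correspondence is consistent with the relative pose."""
    xh = (x[0], x[1], 1.0)            # homogeneous coordinates
    xph = (x_prime[0], x_prime[1], 1.0)
    Fx = [sum(F[i][j] * xh[j] for j in range(3)) for i in range(3)]
    return abs(sum(xph[i] * Fx[i] for i in range(3)))

# Pure translation along x with identity rotation: F = [t]_x, t = (1,0,0).
F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]
ok = epipolar_residual(F, (0.3, 0.2), (0.5, 0.2))   # same row: consistent
bad = epipolar_residual(F, (0.3, 0.2), (0.5, 0.4))  # row shifted: violation
```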

Collaborative Control Method of Underwater, Surface and Aerial Robots Based on Sensor Network (센서네트워크 기반의 수중, 수상 및 공중 로봇의 협력제어 기법)

  • Man, Dong-Woo;Ki, Hyeon-Seung;Kim, Hyun-Sik
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.1 / pp.135-141 / 2016
  • Recently, the need for the development and application of marine robots has been increasing as marine accidents occur frequently. However, it is very difficult to acquire information using marine robots in the marine environment. Therefore, the need for research on sensor networks composed of underwater, surface, and aerial robots is growing, since information from heterogeneous robots suffers fewer limitations in terms of coverage and connectivity. Although various studies of sensor networks based on marine robots have been carried out, underwater, surface, and aerial robots have not all been considered together in a single sensor network. To solve this problem, a collaborative control method is proposed based on the acoustic information and images from the sonars of the underwater robot, the acoustic information from the sonar of the surface robot, and the optical images from the camera of the static-floating aerial robot. To verify the performance of the proposed method, collaborative control of a MUR (Micro Underwater Robot) with an OAS (Obstacle Avoidance Sonar) and an SSS (Side Scan Sonar), an MSR (Micro Surface Robot) with an OAS, and a BMAR (Balloon-based Micro Aerial Robot) with a camera is executed. The test results show the possibility of real applications and the need for additional studies.

A License Plate Recognition Algorithm using Multi-Stage Neural Network for Automobile Black-Box Image (다단계 신경 회로망을 이용한 블랙박스 영상용 차량 번호판 인식 알고리즘)

  • Kim, Jin-young;Heo, Seo-weon;Lim, Jong-tae
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.1 / pp.40-48 / 2018
  • This paper proposes a license plate recognition algorithm for automobile black-box images, which are obtained from a camera moving with the automobile. The algorithm aims to increase the overall recognition rate of the license plate by improving the Korean character recognition rate with a multi-stage neural network, since black-box images involve substantial camera movement and variations in light intensity. The proposed algorithm recognizes the vowel and consonant of the Korean characters on the license plate separately. First, a first-stage neural network recognizes the vowels, and the recognized vowels are classified as vertical vowels ('ㅏ', 'ㅓ') or horizontal vowels ('ㅗ', 'ㅜ'). Then the consonant is classified by a second-stage neural network for each vowel group. Simulations of license plate recognition were performed on images obtained from a real black-box system, and the results show that the proposed algorithm provides a higher recognition rate than existing neural-network-based algorithms.
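The two-stage dispatch described above, vowel first, then a consonant network chosen by vowel group, can be sketched as follows; the `*_net` callables are stubs standing in for the trained neural networks:

```python
def recognize_korean_char(vowel_net, consonant_nets, glyph):
    """Two-stage recognition as in the paper: a first-stage network
    classifies the vowel, the vowel is grouped as vertical ('ㅏ','ㅓ')
    or horizontal ('ㅗ','ㅜ'), and the second-stage network trained
    for that group classifies the consonant."""
    vowel = vowel_net(glyph)
    group = "vertical" if vowel in ("ㅏ", "ㅓ") else "horizontal"
    consonant = consonant_nets[group](glyph)
    return consonant, vowel

# Stub networks standing in for trained models (assumptions).
vowel_net = lambda glyph: "ㅏ"
consonant_nets = {"vertical": lambda g: "ㄱ", "horizontal": lambda g: "ㄴ"}
result = recognize_korean_char(vowel_net, consonant_nets, glyph=None)
```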

MPEG Video Segmentation using Two-stage Neural Networks and Hierarchical Frame Search (2단계 신경망과 계층적 프레임 탐색 방법을 이용한 MPEG 비디오 분할)

  • Kim, Joo-Min;Choi, Yeong-Woo;Chung, Ku-Sik
    • Journal of KIISE: Software and Applications / v.29 no.1_2 / pp.114-125 / 2002
  • In this paper, we propose a hierarchical segmentation method that first segments video data into shots by detecting cuts and dissolves, and then decides the type of camera operation or object movement in each shot. As in our previous work [1], each picture group is classified into one of three categories, Shot (scene change), Move (camera operation or object movement), or Static (almost no change between images), by analyzing the DC (Direct Current) components of I (Intra) frames. For this step, we designed a two-stage hierarchical neural network whose inputs combine various multiple features. The system then detects the exact shot position and the type of camera operation or object movement by searching the P (Predicted) and B (Bi-directional) frames of the current picture group selectively and hierarchically. The statistical distribution of macroblock types in P and B frames is used for accurate detection of cut positions, and another neural network with macroblock types and motion vectors as inputs is used to detect dissolves, camera operations, and object movements. The proposed method reduces processing time by using only the DC coefficients of I frames without full decoding and by searching P and B frames selectively and hierarchically. It classified picture groups with 93.9-100.0% accuracy and cuts with 96.1-100.0% accuracy on three different types of video data, and classified the types of camera operations or object movements with 90.13% and 89.28% accuracy on two different types of video data.
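The first-stage Shot / Move / Static decision compares the DC images of consecutive I frames. The paper uses a two-stage neural network for this; the fixed-threshold rule below is only an illustrative simplification of the same idea, with hypothetical thresholds:

```python
def classify_gop(dc_prev, dc_curr, shot_th=40.0, move_th=10.0):
    """Label a picture group Shot, Move, or Static from the mean
    absolute difference between the DC images of consecutive I frames.
    Thresholds are assumptions; the paper trains a neural network
    on multiple combined features instead of fixed rules."""
    diff = sum(abs(a - b) for a, b in zip(dc_prev, dc_curr)) / len(dc_prev)
    if diff >= shot_th:
        return "Shot"    # large change: likely a scene change
    if diff >= move_th:
        return "Move"    # moderate change: camera/object motion
    return "Static"      # almost no change between images
```

Given the coarse label, the method then searches the P and B frames of that picture group to localize the exact cut or motion type.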

Development of Monitoring Robot with Quadruped Link Mechanism (4족 링크 구조의 감시용 로봇 시스템 개발)

  • 정기범;박병훈;전병준;김동환
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2000.10a / pp.46-46 / 2000
  • A quadruped monitoring robot is introduced. The robot has several features: it can assume arbitrary poses thanks to a 4-wheel drive mechanism, it transmits image and command data via RF wireless communication, and, moreover, the image data are transferred through a network connection. The robot monitors what is happening around it and covers a wide range owing to a moving camera operated by the 4-wheel mechanism. The robot system can be applied to versatile models based on the techniques introduced in this paper.
