• Title/Abstract/Keywords: Network Camera

Search results: 645 items, processing time 0.028 s

Implementation of an Embedded System for Image Tracking Using Web Camera (ICCAS 2005)

  • Nam, Chul;Ha, Kwan-Yong;Kim, Hie-Sik
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • ICROS 2005 ICCAS
    • /
    • pp.1405-1408
    • /
    • 2005
  • Embedded systems have been applied to many fields, including households and industrial sites. In the past, user-interface products with simple functions were commercialized, but user demands are now increasing, and the high penetration rate of the Internet has opened up more diverse application fields, so demand for embedded systems tends to rise. In this paper, we implement an embedded system for image tracking. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Real-time broadcasting of video images on the Internet was developed using a USB camera on an embedded Linux system. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server. Each frame from the web camera is compared to measure the displacement vector, using a block-matching algorithm and an edge-detection algorithm for fast speed. The displacement vector is then used for pan/tilt motor control through an RS-232 serial cable. The embedded board uses the S3C2410 MPU, which is based on Samsung's ARM920T core. The operating system was ported to an embedded Linux kernel and the root file system was mounted. The stored images are sent to the client PC through a web browser, using the network functions of Linux and a program developed with the TCP/IP protocol.

  • PDF
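The displacement-vector measurement described in the abstract can be sketched with a minimal full-search block-matching routine. This is a hedged illustration only: the frame representation, block size, and search range below are assumptions, not the paper's documented parameters.

```python
def block_match(prev, curr, bx, by, bs=8, search=4):
    """Find the displacement of the block at (bx, by) in `prev`
    by minimizing the sum of absolute differences (SAD) over a
    small search window in `curr`. Frames are 2-D lists of
    grayscale pixel values."""
    h, w = len(prev), len(prev[0])
    best, best_dxdy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Skip candidate windows that fall outside the frame.
            if not (0 <= by + dy and by + dy + bs <= h and
                    0 <= bx + dx and bx + dx + bs <= w):
                continue
            sad = sum(abs(prev[by + r][bx + c] -
                          curr[by + dy + r][bx + dx + c])
                      for r in range(bs) for c in range(bs))
            if best is None or sad < best:
                best, best_dxdy = sad, (dx, dy)
    return best_dxdy  # displacement vector (dx, dy)
```

In a system like the one described, the resulting vector would then be translated into pan/tilt commands written to the serial port.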

딥러닝 기반 카메라 모델 판별 (Camera Model Identification Based on Deep Learning)

  • 이수현;김동현;이해연
    • KIPS Transactions on Software and Data Engineering
    • /
    • Vol. 8, No. 10
    • /
    • pp.411-420
    • /
    • 2019
  • In the field of multimedia forensics, research on identifying the camera model used to capture an image has been ongoing. Among increasingly sophisticated crimes, illegal filming accounts for a large number of offenses because the miniaturization of cameras makes it difficult for victims to notice. Therefore, if a technique that can determine which camera captured a specific image were available, it could serve as evidence of guilt when a suspect denies the crime. In this paper, we propose a deep learning model for identifying the camera model used to capture an image. The proposed model consists of four convolutional layers and two fully connected layers, and a high-pass filter is used for data preprocessing. To verify the performance of the proposed model, the Dresden Image Database was used, and the dataset was generated by a sequential-partition method. The proposed model was compared with existing studies, such as a three-layer model and a GLCM-based model, demonstrating its superiority, and it achieved 98% accuracy, the level reported by state-of-the-art research.
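The high-pass preprocessing mentioned in the abstract suppresses scene content so that camera-specific noise traces stand out before they reach the CNN. A minimal sketch of such a filter follows; the paper's exact kernel is not given here, so this Laplacian-style 3x3 kernel is an assumption.

```python
def high_pass(img):
    """Apply a 3x3 Laplacian-style high-pass filter to a 2-D
    grayscale image (list of lists). Scene content (low frequency)
    is suppressed; sensor/processing noise (high frequency) remains.
    Border pixels are left at zero for simplicity."""
    k = [[-1, -1, -1],
         [-1,  8, -1],
         [-1, -1, -1]]
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[i][j] * img[y - 1 + i][x - 1 + j]
                            for i in range(3) for j in range(3))
    return out
```

On a perfectly flat region the response is zero, which is exactly the point: only deviations from the local average survive.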

CCTV카메라를 활용한 선로전환감시시스템의 신뢰성 향상에 관한 연구 (A Study on the Improvement of Reliability of Line Conversion Monitoring System using CCTV Camera)

  • 문채영;김세민;류광기
    • Korea Institute of Information and Communication Engineering: Conference Proceedings
    • /
    • KIICE 2019 Spring Conference
    • /
    • pp.400-402
    • /
    • 2019
  • The electric point machine, which controls the turnout used to switch railroad tracks, is managed as one of the most critical components of a railway system. Various wired and wireless real-time monitoring systems are used to check the status of point machines, but malfunctions remain possible due to sensor or network errors. In this paper, we design a redundant monitoring system that integrates a point-machine monitoring system with a CCTV camera surveillance system to doubly verify the operating status of point machines. The point-machine monitoring system monitors the operating state of the point machine, raises alarms, and transmits the information over the network. The CCTV camera system that receives this information then captures the state of the corresponding point machine and turnout and sends the video to the manager. The track manager first checks the switching status on the monitoring screen and then directly confirms it through the CCTV video, improving the reliability of point-machine operation. In addition, maintenance personnel can be deployed more safely and efficiently. This is expected to contribute to preventing derailment accidents at turnouts caused by point-machine malfunctions.

  • PDF
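The dual-check workflow above (electronic monitor first, CCTV confirmation second) amounts to a simple agreement rule between two independent channels. The sketch below is purely illustrative; the function and status names are hypothetical and the real system's interfaces are not described at this level of detail.

```python
def verify_switch(monitor_status, cctv_status):
    """Combine the point-machine monitor reading with the CCTV-based
    visual check. The reported state is trusted only when both
    channels agree; any disagreement is flagged for manual
    inspection, which is how the redundancy catches sensor or
    network errors."""
    if monitor_status == cctv_status:
        return monitor_status  # both channels agree
    return "inspect"           # mismatch: escalate to the manager
```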

Backward Explicit Congestion Control in Image Transmission on the Internet

  • Kim, Jeong-Ha;Kim, Hyoung-Bae;Lee, Hak-No;Nam, Boo-Hee
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • ICROS 2003 ICCAS
    • /
    • pp.2106-2111
    • /
    • 2003
  • In this paper we discuss an algorithm for real-time transmission of moving color images on a TCP/IP network using the wavelet transform and a neural network. The image frames received from the camera are two-level wavelet-transformed in the server and transmitted to the client over the network. The client then performs the inverse wavelet transform using only the received pieces of each image frame within the prescribed time limit to display the moving images. When the TCP/IP network is busy, only a fraction of each image frame is delivered; when the line is free, the whole frame of each image is transferred to the client. The receiver warns the sender of traffic congestion in the network by sending a special short frame for this specific purpose. The sender responds to this warning by reducing the data rate, which is adjusted by a back-propagation neural network. In this way we can send a stream of moving images while adaptively adjusting to the network traffic condition.

  • PDF
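The congestion response described above (receiver sends a warning frame, sender cuts the rate) can be sketched as follows. Note that the paper adjusts the rate with a back-propagation neural network; the simple AIMD-style rule below is a stand-in for that controller, not the paper's method.

```python
def adjust_rate(rate, congestion_warned, floor=0.1, ceil=1.0):
    """Adjust the fraction of each wavelet-coded frame to transmit.
    On a congestion warning from the receiver the rate is cut
    multiplicatively; otherwise it recovers additively, so the
    sender probes for spare bandwidth when the line is free."""
    if congestion_warned:
        rate *= 0.5   # back off sharply on congestion
    else:
        rate += 0.05  # recover gently when traffic is light
    return max(floor, min(ceil, rate))
```

With two-level wavelet coding, a reduced rate naturally maps to dropping the finest-detail subbands first, which is why partial frames still decode to a viewable image.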

기계 시각과 인공 신경망을 이용한 파란의 판별 (Detection of Surface Cracks in Eggshell by Machine Vision and Artificial Neural Network)

  • 이수환;조한근;최완규
    • Journal of Biosystems Engineering
    • /
    • Vol. 25, No. 5
    • /
    • pp.409-414
    • /
    • 2000
  • A machine vision system was built to obtain a single stationary image of an egg. The system includes a CCD camera, an image processing board, and a lighting system. A computer program was written to acquire and enhance the image and compute its histogram. To minimize evaluation time, an artificial neural network fed with the image histogram was used for eggshell evaluation. Various artificial neural networks with different parameters were trained and tested. The best networks (64-50-1 and 128-10-1) showed an accuracy of 87.5% in evaluating eggshells. A comparison of the elapsed processing time per egg between this method (image processing and artificial neural network) and the previous method (image processing only) revealed that it was reduced to about a half (from 10.6 s to 5.5 s) for cracked eggs and to about one fifth (from 21.1 s to 5.5 s) for normal eggs. This indicates that a fast eggshell evaluation system can be developed using machine vision and an artificial neural network.

  • PDF
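The speed-up above comes from feeding the network a compact histogram instead of the raw image. A minimal sketch of that feature-extraction step follows; the bin count and normalization are assumptions, since the abstract only says a histogram was used.

```python
def gray_histogram(img, bins=64):
    """Compute a normalized intensity histogram of a grayscale
    image (pixel values 0-255, as a 2-D list). The histogram,
    not the raw image, becomes the neural network's input vector,
    which is what keeps per-egg evaluation fast."""
    hist = [0] * bins
    n = 0
    for row in img:
        for p in row:
            hist[p * bins // 256] += 1  # map 0-255 into `bins` buckets
            n += 1
    return [h / n for h in hist]  # normalize so bins sum to 1
```

A 64-bin histogram matches the 64-50-1 network topology reported as one of the best performers.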

Trends in Leopard Cat (Prionailurus bengalensis) Research through Co-word Analysis

  • Park, Heebok;Lim, Anya;Choi, Taeyoung;Han, Changwook;Park, Yungchul
    • Journal of Forest and Environmental Science
    • /
    • Vol. 34, No. 1
    • /
    • pp.46-49
    • /
    • 2018
  • This study aims to explore the knowledge structure of leopard cat (Prionailurus bengalensis) research during the period 1952-2017. Data were collected from Google Scholar and the Research Information Service System (RISS), and a total of 482 author keywords from 125 papers in peer-reviewed scholarly journals were retrieved. Co-word analysis was applied to examine patterns and trends in leopard cat research by measuring the association strengths of the author keywords, along with a descriptive analysis of the keywords. The results show that the most commonly used keywords in leopard cat research, apart from its English and scientific names, were Felidae, Iriomote cat, and camera trap, and that camera traps have been a frequent keyword since 2005. Co-word analysis also reveals that leopard cat research has been actively conducted in Southeast Asia in conjunction with studies of other carnivores using camera traps. Through this understanding of the patterns and trends, the findings of this study could provide an opportunity to explore neglected areas in leopard cat research and conservation.
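The "association strength" at the heart of co-word analysis can be computed from per-paper keyword lists. The sketch below uses the equivalence index e(i,j) = c_ij² / (c_i · c_j), a standard co-word measure; the abstract does not state which measure the authors used, so treat this choice as an assumption.

```python
from itertools import combinations

def coword_strengths(papers):
    """Compute co-word association strengths from a list of per-paper
    keyword lists, using the equivalence index
    e(i, j) = c_ij**2 / (c_i * c_j), where c_i counts papers
    containing keyword i and c_ij counts papers containing both."""
    single, pair = {}, {}
    for kws in papers:
        kws = set(kws)  # count each keyword once per paper
        for k in kws:
            single[k] = single.get(k, 0) + 1
        for a, b in combinations(sorted(kws), 2):
            pair[(a, b)] = pair.get((a, b), 0) + 1
    return {p: c * c / (single[p[0]] * single[p[1]])
            for p, c in pair.items()}
```

Keywords that always appear together score 1.0; keywords that never co-occur simply produce no pair entry.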

Human Tracking using Multiple-Camera-Based Global Color Model in Intelligent Space

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • Vol. 6, No. 1
    • /
    • pp.39-46
    • /
    • 2006
  • We propose a global-color-model-based method for tracking the motions of multiple humans using a networked multiple-camera system in an intelligent space, a human-robot coexistent system. An intelligent space is a space in which many intelligent devices, such as computers and sensors (for example, color CCD cameras), are distributed. Human beings can be a part of the intelligent space as well. One of the main goals of an intelligent space is to assist humans and provide various services for them. To be capable of this, the intelligent space must be able to perform various human-related tasks, one of which is to identify and track multiple objects seamlessly. In an environment where many camera modules are distributed on a network, it is important to identify an object in order to track it, because different cameras may be needed as the object moves through the space, and the intelligent space should determine the appropriate one. This paper describes appearance-based tracking of unknown objects with the distributed vision system in the intelligent space. First, we discuss how object color information is obtained and how the color-appearance-based model is constructed from these data. Then, we discuss the global color model based on the local color information. The learning process within the global model and the experimental results are also presented.
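The idea of fusing per-camera (local) color information into one global appearance model, then re-identifying the object as it enters another camera's view, can be sketched as follows. The averaging fusion and L1 matching here are illustrative assumptions; the paper's actual model construction and learning procedure are more involved.

```python
def global_color_model(local_hists):
    """Fuse per-camera normalized color histograms of the same object
    into one global appearance model by averaging bin-wise."""
    n = len(local_hists)
    return [sum(h[i] for h in local_hists) / n
            for i in range(len(local_hists[0]))]

def identify(model, candidates):
    """Return the index of the candidate histogram closest to the
    global model (L1 distance): the same object as seen by a
    different camera should match best."""
    def dist(h):
        return sum(abs(a - b) for a, b in zip(model, h))
    return min(range(len(candidates)), key=lambda i: dist(candidates[i]))
```

Because the model pools observations from several cameras, it is less sensitive to any single camera's lighting than one local histogram alone, which is the motivation for a global model.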

모션벡터를 이용한 가상현실 체험 시스템의 구현 (Implementation of Virtual Reality Immersion System using Motion Vectors)

  • 서정만;정순기
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 8, No. 3
    • /
    • pp.87-93
    • /
    • 2003
  • In this paper, we implement a virtual-reality immersion system that lets users experience virtual reality through vision, one of the five human senses. Using the three-step search (TSS) method, the block corresponding to the current frame is searched among adjacent blocks, and motion vectors are extracted from the two frames. To evaluate the performance of the implemented system, the acceleration values of the simulator axes measured with sensors were compared with the motion-vector values extracted from the images. The results demonstrate that the proposed system can drive the simulator more closely in accordance with the motion in the images.

  • PDF
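The three-step search named in the abstract is a classic fast motion-vector search: instead of testing every displacement, it tests nine candidates around the current best and halves the step each round. A sketch under assumed parameters (step sequence 4, 2, 1; SAD matching; 2-D list frames) follows.

```python
def sad(prev, curr, bx, by, dx, dy, bs):
    """Sum of absolute differences between the block at (bx, by) in
    `prev` and the window displaced by (dx, dy) in `curr`."""
    h, w = len(curr), len(curr[0])
    if not (0 <= by + dy and by + dy + bs <= h and
            0 <= bx + dx and bx + dx + bs <= w):
        return float("inf")  # out-of-frame candidates never win
    return sum(abs(prev[by + r][bx + c] - curr[by + dy + r][bx + dx + c])
               for r in range(bs) for c in range(bs))

def three_step_search(prev, curr, bx, by, bs=8):
    """Three Step Search: evaluate 9 candidates around the current
    best displacement, halving the step (4 -> 2 -> 1), so roughly 25
    SAD evaluations replace the 225 of a full +/-7 search."""
    dx = dy = 0
    step = 4
    while step >= 1:
        _, dx, dy = min(
            (sad(prev, curr, bx, by, dx + sx, dy + sy, bs), dx + sx, dy + sy)
            for sy in (-step, 0, step) for sx in (-step, 0, step))
        step //= 2
    return (dx, dy)
```

The fraction of candidates evaluated is what makes TSS attractive for a real-time immersion system, at the cost of occasionally missing the true minimum on textureless or repetitive content.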

헤드마운티드 디스플레이를 활용한 전방위 카메라 기반 영상 렌더링 동기화 시스템 (Omnidirectional Camera-based Image Rendering Synchronization System Using Head Mounted Display)

  • 이승준;강석주
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • Vol. 67, No. 6
    • /
    • pp.782-788
    • /
    • 2018
  • This paper proposes a novel omnidirectional camera-based image rendering synchronization system using a head-mounted display. The proposed system has two main processes. The first is rendering remotely captured 360-degree images on the head-mounted display. This method is based on the transmission control protocol/internet protocol (TCP/IP): the sequential images are rapidly captured and transmitted to the server over TCP/IP in a byte-array format. The server then collects the byte-array data and assembles it back into images. Finally, the observer can view them while wearing the head-mounted display. The second process is displaying the specific region corresponding to the user's head rotation. After extracting the user's head Euler angles from the head-mounted display's inertial measurement unit (IMU) sensor, the proposed system displays the region based on these angles. The experimental results show that rendering the original image at the same resolution in the given network environment causes loss of frame rate, and rendering at the same frame rate results in loss of resolution. Therefore, it is necessary to select optimal parameters considering the environmental requirements.
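Because TCP delivers a continuous byte stream with no message boundaries, sending images as byte arrays (as described above) requires some framing so the server can reassemble whole images. The length-prefix scheme below is a common approach and an assumption on my part; the paper's actual wire format is not specified in the abstract.

```python
import struct

def frame_image(jpeg_bytes):
    """Prefix an image byte array with its 4-byte big-endian length
    so the receiver can split the TCP byte stream back into whole
    images."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def read_frames(stream):
    """Split a received byte stream into complete images, leaving any
    incomplete trailing frame for the next read."""
    images, pos = [], 0
    while pos + 4 <= len(stream):
        (n,) = struct.unpack_from(">I", stream, pos)
        if pos + 4 + n > len(stream):
            break  # partial frame: wait for more data
        images.append(stream[pos + 4:pos + 4 + n])
        pos += 4 + n
    return images
```

The same framing works regardless of image size, which matters here since resolution is one of the parameters the authors trade off against frame rate.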

영상 기반 강아지의 이상 행동 탐지 (Camera-based Dog Unwanted Behavior Detection)

  • 오스만;이종욱;박대희;정용화
    • Korea Information Processing Society: Conference Proceedings
    • /
    • KIPS 2019 Spring Conference
    • /
    • pp.419-422
    • /
    • 2019
  • The recent increase in single-person households and family income has led to an increase in the number of pet owners. However, because owners cannot attend to them around the clock, pets, and especially dogs, tend to display unwanted behavior that can be harmful to themselves and their environment when left alone. Therefore, detecting those behaviors when the owner is absent is necessary to suppress them and prevent any damage. In this paper, we propose a camera-based system that uses deep learning algorithms to detect a set of normal and unwanted behaviors in order to monitor dogs left alone at home. The frames collected from the camera are arranged into sequences of RGB frames and their corresponding optical-flow sequences, and features are extracted from each data stream using pre-trained VGG-16 models. The extracted features from each sequence are concatenated and input to a bi-directional LSTM network that classifies the dog's action into one of the targeted classes. The experimental results show that our method achieves good performance, exceeding 0.9 in precision, recall, and F1-score.
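The precision, recall, and F1-score reported above are standard per-class metrics. A generic sketch of their computation follows (the label names in the usage are hypothetical, chosen only to fit the dog-behavior setting).

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Compute precision, recall, and F1 for one target class from
    parallel lists of true and predicted labels."""
    tp = sum(t == positive and q == positive for t, q in zip(y_true, y_pred))
    fp = sum(t != positive and q == positive for t, q in zip(y_true, y_pred))
    fn = sum(t == positive and q != positive for t, q in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For a multi-class behavior detector, these would typically be computed per class and then averaged; the abstract does not say which averaging the authors used.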