• Title/Summary/Keyword: Video image processing system


A Study On Development of Fast Image Detector System (고속 영상 검지기 시스템 개발에 관한 연구)

  • Kim Byung Chul;Ha Dong Mun;Kim Yong Deak
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.41 no.1
    • /
    • pp.25-32
    • /
    • 2004
  • Image processing has become highly useful in several areas of traffic applications, for two main reasons: the system can be constructed at a low price, and improvements in hardware processing power allow the data to be processed faster. In the traffic field, the development of image-based systems is an interesting issue because they have the advantage of a low installation price and do not obstruct traffic during installation. In this study, I propose a traffic monitoring system implemented in an embedded system environment. The whole system consists of two main parts: a host controller board and an image processing board. The host controller board takes charge of controlling the total system, the interface to the external environment, and the OSD (on-screen display). The image processing board takes charge of image input and output using a video encoder and decoder, image classification and memory control using an FPGA, and control of the mouse signal. Finally, for stable operation of the host controller board, the uC/OS-II operating system is ported onto the board.

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city that satisfies human beings' desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE or IoT), and it includes many video cameras networked together. The networked video cameras, together with sensors, support many U-City services as one of the main input sources. They continuously generate a huge amount of video information: real big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. It is also often necessary to analyze the accumulated video data to detect an event or find a figure in them, which requires a lot of computational power and usually takes a lot of time. Current research tries to reduce the processing time of big video data, and cloud computing is a good way to address this matter. Among the many applicable cloud computing methodologies, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day, and their resolution improves sharply, which leads to exponential growth of the data produced by the networked cameras; we are dealing with real big data when we process video from high-quality cameras. Video surveillance systems are now spreading widely in U-Cities because useful cloud computing methodologies have been found. However, video data are unstructured, so good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance: a cloud computing based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming-IN component. The video monitor consists of a video translator and a protocol manager, and the storage contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The streaming-IN component receives the video data from the networked video cameras and delivers them to the storage client; it also manages network bottlenecks to smooth the data stream. The storage client receives the video data from the streaming-IN component and stores them in the storage; it also helps other components access the storage. The video monitor component streams the video data smoothly and manages the protocols: its video translator sub-component enables users to manage the resolution, codec, and frame rate of the video image, while its protocol sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage; Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We propose our own methodology for analyzing video images with MapReduce: the workflow of the video analysis is presented and explained in detail in this paper. In the performance evaluation, we found that the proposed system worked well, and the results are presented with analysis. On our cluster system, we used compressed 1920×1080 (FHD) resolution video data with the H.264 codec and HDFS as the video storage, and we measured the processing time according to the number of frames per mapper. By tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly.
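The MapReduce workflow sketched in the abstract above, frames grouped into splits, a mapper run per split, and a reducer aggregating the results, can be illustrated with a minimal single-process sketch. The frame data, the labels, and the `detect()` rule below are illustrative assumptions, not the paper's actual analyzer; only the map/reduce structure and the frames-per-mapper knob mirror the description.

```python
from collections import defaultdict

def detect(frame):
    # stand-in for the real per-frame analysis (e.g. event detection);
    # the threshold rule here is an assumption for illustration
    return "event" if sum(frame) > 10 else "background"

def mapper(split):
    # emit one (label, 1) pair per frame in the split
    return [(detect(frame), 1) for frame in split]

def reducer(pairs):
    # aggregate the per-frame labels into overall counts
    counts = defaultdict(int)
    for label, n in pairs:
        counts[label] += n
    return dict(counts)

frames = [[0, 1, 2], [9, 9, 9], [1, 1, 1], [8, 8, 8]]
frames_per_mapper = 2            # the tuning knob measured in the paper
splits = [frames[i:i + frames_per_mapper]
          for i in range(0, len(frames), frames_per_mapper)]

pairs = [p for split in splits for p in mapper(split)]
print(reducer(pairs))            # {'background': 2, 'event': 2}
```

In a real Hadoop deployment the splits would be HDFS input splits and the mapper/reducer would run on separate nodes; the single-process version only shows the shape of the computation.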

Fire detection in video surveillance and monitoring system using Hidden Markov Models (영상감시시스템에서 은닉마코프모델을 이용한 불검출 방법)

  • Zhu, Teng;Kim, Jeong-Hyun;Kang, Dong-Joong;Kim, Min-Sung;Lee, Ju-Seoup
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2009.04a
    • /
    • pp.35-38
    • /
    • 2009
  • This paper presents an effective method to detect fire in a video surveillance and monitoring system. The main contribution of this work is the successful use of Hidden Markov Models in the fire-detection process, with only a few preprocessing steps. First, moving pixels detected by image differencing, color values obtained from fire flames, and clustering of these pixels are combined to obtain image regions labeled as fire candidates. Second, massive training data, including fire videos and non-fire videos, are used to create Hidden Markov Models of fire and non-fire, which make the final decision, in both temporal and spatial analysis, as to whether a frame of the real-time video contains fire. Experimental results demonstrate that the method is not only robust but also has a very low false alarm rate; moreover, because the HMM training, which takes up most of the time of the whole procedure, is computed off-line, real-time detection and alarming can be implemented well compared with other existing methods.
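The preprocessing stage described above, frame differencing for moving pixels combined with a flame-color test to produce fire candidates, can be sketched roughly as follows. The motion threshold, the red-dominance color rule, and the toy RGB frames are assumptions for illustration; the HMM decision stage that follows the preprocessing is not shown.

```python
MOTION_T = 30          # min total per-channel change to count as "moving" (assumed)

def is_fire_color(r, g, b):
    # crude flame-color rule: bright and red-dominant (an assumption,
    # not the paper's actual color model)
    return r > 180 and r > g > b

def fire_candidates(prev, curr):
    # compare two frames pixel by pixel; keep moving, fire-colored pixels
    candidates = []
    for idx, ((pr, pg, pb), (r, g, b)) in enumerate(zip(prev, curr)):
        moving = abs(r - pr) + abs(g - pg) + abs(b - pb) > MOTION_T
        if moving and is_fire_color(r, g, b):
            candidates.append(idx)
    return candidates

# two toy 4-pixel frames: pixel 1 flickers within a flame, pixel 2 ignites
prev = [(10, 10, 10), (200, 120, 40), (10, 10, 10), (50, 50, 50)]
curr = [(12, 11, 10), (250, 150, 60), (220, 140, 50), (52, 50, 49)]
print(fire_candidates(prev, curr))   # [1, 2]
```

In the paper the surviving pixels are further clustered into regions before the HMMs classify them; this sketch stops at the candidate-pixel level.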

Lane and Obstacle Recognition Using Artificial Neural Network (신경망을 이용한 차선과 장애물 인식에 관한 연구)

  • Kim, Myung-Soo;Yang, Sung-Hoon;Lee, Sang-Ho;Lee, Suk
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.16 no.10
    • /
    • pp.25-34
    • /
    • 1999
  • In this paper, an algorithm is presented to recognize lanes and obstacles in highway road images. The road images, obtained by a video camera, undergo pre-processing that includes filtering, edge detection, and identification of lanes. After this pre-processing, part of the image is grouped into 27 sub-windows and fed into a three-layer feed-forward neural network. The neural network is trained to indicate the road direction and the presence or absence of an obstacle. The proposed algorithm has been tested with images different from the training images and has demonstrated its efficacy for recognizing lanes and obstacles. Based on the test results, the algorithm successfully combines traditional image processing with neural network principles toward a simpler and more efficient driver warning or assistance system.
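The forward pass of a three-layer feed-forward network like the one described above, 27 sub-window features in, one hidden layer, and outputs for road direction and obstacle presence, can be sketched as below. The hidden-layer size, the random weights, and the sigmoid activation are assumptions; the paper's trained weights are not reproduced.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # one fully connected layer with sigmoid activation
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

random.seed(0)
n_in, n_hidden, n_out = 27, 10, 2    # 27 sub-windows; 2 outputs: direction, obstacle
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
w2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

sub_windows = [0.5] * n_in           # one feature per sub-window (assumed encoding)
hidden = layer(sub_windows, w1, b1)
direction, obstacle = layer(hidden, w2, b2)
print(round(direction, 3), round(obstacle, 3))
```

With trained weights, `direction` would encode the road heading and `obstacle` would be thresholded to a presence/absence flag; here the untrained outputs merely show the data flow.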


Implementation of Fish Robot Tracking-Control Methods (물고기 로봇 추적 제어 구현)

  • Lee, Nam-Gu;Kim, Byeong-Jun;Shin, Kyoo-Jae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.885-888
    • /
    • 2018
  • This paper investigates a way of detecting fish robots moving in an aquarium. The fish robot was designed and developed for interaction with humans in aquariums. Because the positions of the moving fish robots must be found, the study focuses simply on detecting moving objects in the aquarium. The goal is to recognize the location of each robotic fish using an image processing technique and a video camera. The method obtains the velocity of each pixel in an image and, assuming constant velocity in each video frame, derives the positions of the fish robots by comparing sequential video frames. Using this positional data, we compute the distance between fish robots with a mathematical expression and determine which fish robot is leading and which one is lagging. The lead robot then waits for the lagging robot until it catches up, and the process runs continuously. This system is exhibited in the Busan Science Museum and has passed a performance test of this algorithm.
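The tracking logic described above, comparing robot positions across consecutive frames to estimate velocity, computing the distance between the two robots, and deciding which one leads, can be sketched as follows. The coordinates and the leadership rule (larger x coordinate leads) are assumptions for illustration, not the paper's actual criteria.

```python
import math

def velocity(prev, curr):
    # displacement per frame, assuming constant velocity between frames
    return (curr[0] - prev[0], curr[1] - prev[1])

def distance(a, b):
    # Euclidean distance between two robot centroids
    return math.hypot(a[0] - b[0], a[1] - b[1])

# centroids extracted from two sequential frames (toy values)
frame1 = {"robot_a": (10, 20), "robot_b": (40, 22)}
frame2 = {"robot_a": (14, 20), "robot_b": (42, 22)}

v_a = velocity(frame1["robot_a"], frame2["robot_a"])   # (4, 0): faster
v_b = velocity(frame1["robot_b"], frame2["robot_b"])   # (2, 0): slower
gap = distance(frame2["robot_a"], frame2["robot_b"])
leader = "robot_b" if frame2["robot_b"][0] > frame2["robot_a"][0] else "robot_a"

print(leader, round(gap, 2))   # robot_b 28.07
```

In the exhibited system this comparison would run on every frame pair, and the lead robot's controller would be told to hold position whenever `gap` exceeds a chosen threshold.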

Implementation of a Remote Image Based Metering System bridging with PCS Network (PCS망을 연동한 원격영상 검침시스템 구현)

  • Lee, Chang-Su;Na, Jong-Ray;Hwang, Jin-Kwon
    • The KIPS Transactions:PartD
    • /
    • v.10D no.6
    • /
    • pp.1041-1048
    • /
    • 2003
  • This paper implements a remote image-based metering (IBM) system that captures a meter image, recognizes the numbers automatically, and sends the data wirelessly through a PCS data network. We use existing gas/water meters and obtain an NTSC camera image by installing a small monochrome CMOS camera close to the meter. For remote data transfer, we use the SMS (short message service) provided by the commercial PCS network. We developed a DVR (digital video recorder) for capturing the meter image and a character recognition algorithm; in addition, hardware and software for SMS and a meter selector were developed.
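One common way to recognize meter digits from a binarized camera image is template matching, which can stand in here for the character recognition step mentioned above. The 3×5 templates, the matching score, and the noisy glyph below are illustrative assumptions, not the paper's actual recognition algorithm.

```python
# binary 3x5 glyph templates, one string per row ("1" = dark pixel);
# only two digits are shown to keep the sketch short
TEMPLATES = {
    "1": ["010", "010", "010", "010", "010"],
    "7": ["111", "001", "001", "001", "001"],
}

def score(glyph, template):
    # count matching pixels across the 3x5 grid
    return sum(g == t
               for grow, trow in zip(glyph, template)
               for g, t in zip(grow, trow))

def recognize(glyph):
    # pick the digit whose template matches the most pixels
    return max(TEMPLATES, key=lambda d: score(glyph, TEMPLATES[d]))

glyph = ["110", "010", "010", "010", "010"]   # a slightly noisy "1"
print(recognize(glyph))   # 1
```

A deployed reader would first locate and binarize each digit cell of the meter window before matching; this sketch assumes that segmentation has already happened.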

Image Contents Encryption Technique for Digital Hologram Broadcasting Service (디지털 홀로그램 방송을 위한 영상 콘텐츠의 암호화)

  • Ha, Jun;Choi, Hyun-Jun
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.818-819
    • /
    • 2013
  • This paper proposes a content-security technique for a digital holographic display service. The digital holographic video system assumes the existing service frame for 2-dimensional or 3-dimensional video, which includes data acquisition, processing, transmission, reception, and reconstruction. In this paper, we perform encryption of the RGB image and the depth map for such a system. The experimental results show that encrypting only 0.048% of the entire data is enough to hide the contents of the RGB image and depth map.
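The idea of hiding content by ciphering only a tiny fraction of the payload can be illustrated with a partial XOR encryption sketch: one byte per fixed-size block is XORed with a keystream, so only about 1/block of the data is touched. The keystream construction, block size, and byte selection below are assumptions for illustration, not the paper's actual cipher or its 0.048% selection rule.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # simple counter-mode keystream from SHA-256 (an illustrative choice)
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def partial_encrypt(data: bytes, key: bytes, block: int = 64) -> bytes:
    # XOR one byte per block -> roughly 1/block of the data is ciphered
    positions = range(0, len(data), block)
    ks = keystream(key, len(positions))
    buf = bytearray(data)
    for k, pos in zip(ks, positions):
        buf[pos] ^= k
    return bytes(buf)

data = bytes(range(256))
key = b"demo-key"                    # hypothetical key for the sketch
enc = partial_encrypt(data, key)
dec = partial_encrypt(enc, key)      # XOR with the same keystream inverts it
print(dec == data)                   # True
```

For the scheme to hide content as effectively as the paper reports, the ciphered bytes would have to target perceptually critical data (for example, significant transform coefficients) rather than evenly spaced raw bytes as in this sketch.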


A LabVIEW-based Video Dehazing using Dark Channel Prior (Dark Channel Prior을 이용한 LabVIEW 기반의 동영상 안개제거)

  • Roh, Chang Su;Kim, Yeon Gyo;Chong, Ui Pil
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.2
    • /
    • pp.101-107
    • /
    • 2017
  • LabVIEW code for video dehazing was developed. The dark channel prior proposed by K. He was applied to remove fog based on a single image, K. B. Gibson's median dark channel prior was also applied, and both were implemented in LabVIEW. In other words, we improved the image processing speed by porting the existing fog-removal algorithm, the dark channel prior, to the LabVIEW system. As a result, we have developed a real-time fog-removal system that can be commercialized. Although an existing algorithm is used, its real-time performance has been verified, so the system should be highly applicable in academic and industrial fields. In addition, fog removal can be performed not only on the entire image but also on a selected partial region. As an application example, we developed a system that acquires clear video at long distances by connecting a laptop running the LabVIEW software developed in this paper to a telescope with 100-300x zoom.
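The core of the dark channel prior that the paper ports to LabVIEW is a double minimum: for each pixel, take the minimum over the RGB channels, then take the minimum over a local patch. Haze-free regions have a dark channel near zero while hazy regions stay bright. The 1-D "image" and the patch size below are toy assumptions, and the later transmission-estimation and recovery steps are omitted.

```python
def dark_channel(pixels, patch=3):
    # pixels: list of (r, g, b) tuples along a 1-D scanline;
    # returns the per-pixel dark channel value
    per_pixel_min = [min(p) for p in pixels]      # min over color channels
    half = patch // 2
    out = []
    for i in range(len(per_pixel_min)):
        lo = max(0, i - half)
        hi = min(len(per_pixel_min), i + half + 1)
        out.append(min(per_pixel_min[lo:hi]))     # min over the local patch
    return out

# a haze-free region (one channel near zero) next to a hazy region (all high)
pixels = [(200, 10, 30), (220, 15, 40), (180, 170, 175), (190, 185, 180)]
print(dark_channel(pixels))   # [10, 10, 15, 170]
```

In the full algorithm this dark channel drives the transmission-map estimate, and Gibson's variant replaces the patch minimum with a median to reduce halo artifacts; a 2-D implementation applies the same double minimum over square patches.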

XCRAB : A Content and Annotation-based Multimedia Indexing and Retrieval System (XCRAB :내용 및 주석 기반의 멀티미디어 인덱싱과 검색 시스템)

  • Lee, Soo-Chelo;Rho, Seung-Min;Hwang, Een-Jun
    • The KIPS Transactions:PartB
    • /
    • v.11B no.5
    • /
    • pp.587-596
    • /
    • 2004
  • In recent years, a new framework has been developed that aims at a unified, global approach to indexing, browsing, and querying various digital multimedia data such as audio, video, and images. This new system partitions each media stream into smaller units based on actual physical events; these physical events within each media stream can then be effectively indexed for retrieval. In this paper, we present a new approach that exploits audio, image, and video features to segment and analyze audio-visual data. Integrating audio and visual analysis can overcome the weakness of previous approaches based on image or video analysis alone. We implement a web-based multimedia data retrieval system called XCRAB and report on its experimental results.

Development of an Interactive Video Installation Based on Zhuangzi's Butterfly Dream (장자 나비의 꿈을 소재로 한 인터렉티브 비디오 구현)

  • Kim, Tae-Hee
    • Journal of Korea Game Society
    • /
    • v.11 no.2
    • /
    • pp.29-37
    • /
    • 2011
  • As a field of digital art, interactive video introduced the mirror metaphor to the foundation of media, given its characteristic as a medium that extracts an audience image from a particular perspective. The interactive video work introduced in this paper addresses conceptual topics extending Zhuangzi's Butterfly Dream and illustrates the technological approach: intensity-based computer vision processing obtains the silhouette of the audience so that multiple graphical butterflies can draw the audience's image. Users generate narratives through interaction with the projected image. Sound is used so that the system provides augmented perception of the space and adds further room for narratives. The computer vision and graphics methods introduced in this paper are suggested as tools for interactive video.
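The intensity-based silhouette extraction mentioned above can be sketched as a simple threshold on a grayscale frame: pixels darker than the bright projection background are kept as the audience silhouette, which the graphical butterflies then sample. The threshold value and the toy 3×4 frame are assumptions for illustration.

```python
def silhouette(gray, threshold=80):
    # gray: 2-D list of intensities (0-255); returns a binary mask
    # where 1 marks audience pixels darker than the threshold
    return [[1 if px < threshold else 0 for px in row] for row in gray]

# a bright projected background (~200) with a dark audience shape
frame = [
    [200, 210,  40, 205],
    [198,  35,  30, 201],
    [202,  38, 199, 203],
]
mask = silhouette(frame)
for row in mask:
    print(row)
# [0, 0, 1, 0]
# [0, 1, 1, 0]
# [0, 1, 0, 0]
```

In the installation, each butterfly would be attracted to nearby mask pixels so that the swarm collectively redraws the audience's outline.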