• Title/Summary/Keyword: video to images


Embedded Web Server for Monitoring and Control of a Mobile Robot

  • Sin, Yonggak;Kwak, Jaehyuk;Lim, Joonhong
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.132.2-132
    • /
    • 2001
  • In this paper, we propose an efficient system configuration for the remote control of a mobile robot. The interface provides video feedback and runs in standard web environments. The control servers for the mobile robot and the CCD camera run in an embedded web server environment. A dedicated program, developed with Microsoft Visual C++, grabs the images: the external camera sends its video signal to a frame grabber in the PC, and the program places the captured images in shared memory in BMP format. For video feedback, we use image feedback based on the client-pull technique supported by Netscape and Internet Explorer.

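The client-pull feedback described in the abstract above can be illustrated with a short sketch: a small server hands the browser the most recent grabbed frame and asks it to pull again. This is a minimal illustration, not the authors' code; the frame path, port, and refresh interval are assumptions.

```python
# Minimal sketch of client-pull image feedback (not the authors' code).
# Assumption: a frame-grabber process keeps writing the latest frame to
# LATEST_FRAME; the browser re-requests it on a short timer.
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

LATEST_FRAME = Path("latest_frame.bmp")   # hypothetical shared location
REFRESH_SECONDS = 1                       # client-pull polling interval

class FrameHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/frame.bmp"):
            data = LATEST_FRAME.read_bytes()
            self.send_response(200)
            self.send_header("Content-Type", "image/bmp")
            # "Refresh" asks the client to pull the image again shortly.
            self.send_header("Refresh", str(REFRESH_SECONDS))
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)
        else:
            # Simple page that polls /frame.bmp from JavaScript.
            page = (b"<html><body><img id='v' src='/frame.bmp'>"
                    b"<script>setInterval(function(){"
                    b"document.getElementById('v').src='/frame.bmp?'+Date.now();"
                    b"}, 1000);</script></body></html>")
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(page)))
            self.end_headers()
            self.wfile.write(page)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), FrameHandler).serve_forever()
```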

A Perception-based Color Correction Method for Multi-view Images

  • Shao, Feng;Jiang, Gangyi;Yu, Mei;Peng, Zongju
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.2
    • /
    • pp.390-407
    • /
    • 2011
  • Three-dimensional (3D) video technologies are becoming increasingly popular, as they can provide users with high-quality, immersive experiences. However, color inconsistency between camera views is an urgent problem to be solved in multi-view imaging. In this paper, a perception-based color correction method for multi-view images is proposed. In the proposed method, human visual sensitivity (VS) and visual attention (VA) models are incorporated into the correction process. First, the VS property is used to reduce the computational complexity by excluding visually insensitive regions. Second, the VA property is used to improve the perceptual quality of local VA regions by performing VA-dependent color correction. Experimental results show that, compared with other color correction methods, the proposed method greatly improves the perceptual quality of local VA regions, reduces the computational complexity, and obtains higher coding performance.
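The abstract does not give the VS/VA models themselves, so the sketch below only illustrates the general idea of perception-weighted correction: a binary mask stands in for the visual-attention regions, and simple per-channel mean/std transfer stands in for the actual color correction.

```python
# Hedged sketch of region-weighted color correction between two views.
# The paper's VS/VA models are not reproduced; all names are illustrative.
import numpy as np

def color_transfer(src, ref):
    """Match per-channel mean/std of src to ref (simple global correction)."""
    out = np.empty_like(src, dtype=np.float64)
    for c in range(src.shape[2]):
        s, r = src[..., c].astype(np.float64), ref[..., c].astype(np.float64)
        out[..., c] = (s - s.mean()) / (s.std() + 1e-6) * r.std() + r.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

def attention_weighted_correction(view, ref, attention_mask):
    """Apply full correction inside attention regions, weaker correction outside."""
    corrected = color_transfer(view, ref)
    w = attention_mask.astype(np.float64)[..., None]   # 1 inside VA regions
    blended = w * corrected + (1.0 - w) * (0.5 * corrected + 0.5 * view)
    return np.clip(blended, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref  = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
    view = rng.integers(0, 256, (120, 160, 3), dtype=np.uint8)
    mask = np.zeros((120, 160), dtype=bool)
    mask[30:90, 40:120] = True                         # toy VA region
    out = attention_weighted_correction(view, ref, mask)
    print(out.shape, out.dtype)
```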

Super Resolution Image Reconstruction using the Maximum A-Posteriori Method

  • Kwon Hyuk-Jong;Kim Byung-Guk
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.115-118
    • /
    • 2004
  • Images with high resolution are desired and often required in many visual applications. When resolution cannot be improved by replacing sensors, either because of cost or physical hardware limits, super-resolution image reconstruction is the method to resort to. Super-resolution image reconstruction refers to image processing algorithms that produce high-quality, high-resolution images from a set of low-quality, low-resolution images. The method has proved useful in many practical cases where multiple frames of the same scene can be obtained, including satellite imaging, video surveillance, video enhancement and restoration, digital mosaicking, and medical imaging. The method can follow either a frequency-domain or a spatial-domain approach. Much of the earlier work concentrated on the frequency-domain formulation, but as more general degradation models were considered, later research focused almost exclusively on spatial-domain formulations. The spatial-domain method has three stages: i) motion estimation or image registration, ii) interpolation onto a high-resolution grid, and iii) deblurring. This paper discusses the high-resolution grid construction in the second stage. We applied the Maximum A-Posteriori (MAP) reconstruction method, one of the major methods for super-resolution grid construction. Based on this method, we reconstructed high-resolution images from a set of low-resolution images and compared the results with those from other known interpolation methods.

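A minimal sketch of MAP-style super-resolution on the high-resolution grid is given below. It assumes the low-resolution frames are already registered, models the degradation as a 2x2 box blur followed by factor-2 decimation, and uses a simple smoothness prior; none of these specifics are taken from the paper.

```python
# Hedged sketch of MAP-style super-resolution by gradient descent.
# Assumed degradation: 2x2 box blur + factor-2 decimation; assumed prior:
# Tikhonov smoothness with weight LAMBDA. Illustrative values throughout.
import numpy as np

FACTOR, LAMBDA, STEP, ITERS = 2, 0.05, 0.5, 50

def blur(x):
    """2x2 box blur (assumed degradation model)."""
    p = np.pad(x, ((0, 1), (0, 1)), mode="edge")
    return 0.25 * (p[:-1, :-1] + p[1:, :-1] + p[:-1, 1:] + p[1:, 1:])

def down(x):
    return x[::FACTOR, ::FACTOR]

def up(y, shape):
    """Adjoint of decimation: place LR samples back on the HR grid."""
    x = np.zeros(shape)
    x[::FACTOR, ::FACTOR] = y
    return x

def laplacian(x):
    p = np.pad(x, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * x

def map_sr(lr_frames, hr_shape):
    x = np.zeros(hr_shape)
    for _ in range(ITERS):
        grad = np.zeros(hr_shape)
        for y in lr_frames:                        # data-fidelity gradient
            residual = down(blur(x)) - y
            grad += blur(up(residual, hr_shape))   # approximate adjoint of blur
        grad -= LAMBDA * laplacian(x)              # smoothness-prior gradient
        x -= STEP * grad / max(len(lr_frames), 1)
    return x

if __name__ == "__main__":
    truth = np.kron(np.arange(16).reshape(4, 4), np.ones((8, 8)))
    frames = [down(blur(truth)) + 0.01 * np.random.randn(16, 16)
              for _ in range(4)]
    est = map_sr(frames, truth.shape)
    print("reconstruction MSE:", float(np.mean((est - truth) ** 2)))
```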

Effective teaching using textbooks and AI web apps (교과서와 AI 웹앱을 활용한 효과적인 교육방식)

  • Sobirjon, Habibullaev;Yakhyo, Mamasoliev;Kim, Ki-Hawn
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.01a
    • /
    • pp.211-213
    • /
    • 2022
  • Images in textbooks influence the learning process. Students often see pictures before reading the text, and these pictures can enhance the students' imagination. The findings of some research show that the images in textbooks can increase students' creativity. However, when learning major subjects, reading a textbook or looking at a picture alone may not be enough to understand the topics and fully grasp the concepts. Studies show that viewers remember 95% of a message when they watch it in a video, compared with reading it as text. Combining textbooks and videos therefore promises an effective teaching method: the "TEXT + IMAGE + VIDEO (Animation)" concept could be more beneficial than conventional ones. We approached this with machine-learning image classification. This paper covers the features, approach, and detailed objectives of our project. So far, we have developed a prototype as a web app that works only when accessed via smartphone. Once the web app is opened on a smartphone, it asks for permission to use the camera. When the camera is brought close to a picture in the textbook, the app displays the video related to that picture.

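A rough sketch of the "picture-to-video" lookup idea follows. The project's own classifier and label-to-video table are not public here, so a generic pretrained MobileNetV2 stands in for the image classifier and VIDEO_FOR_LABEL is a purely hypothetical mapping.

```python
# Hedged sketch of the "picture -> related video" lookup described above.
# A stock ImageNet classifier substitutes for the project's own model;
# the label-to-video table is illustrative only.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

VIDEO_FOR_LABEL = {                      # hypothetical mapping
    "volcano": "https://example.com/videos/volcano-eruption.mp4",
    "microscope": "https://example.com/videos/using-a-microscope.mp4",
}

model = MobileNetV2(weights="imagenet")

def video_for_snapshot(path):
    """Classify a snapshot of a textbook figure and return a linked video URL."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    label = decode_predictions(model.predict(x), top=1)[0][0][1]
    return label, VIDEO_FOR_LABEL.get(label)   # None if no video is linked

if __name__ == "__main__":
    print(video_for_snapshot("textbook_figure.jpg"))  # assumed sample image
```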

Fashion Accessory Design Suggestions Using Firework Images with the OLED Display Platform (불꽃놀이 형상과 OLED를 기반으로 한 패션 액세서리 디자인 제안)

  • Kim, Sun-Young
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.35 no.10
    • /
    • pp.1188-1198
    • /
    • 2011
  • This study proposes the use of firework shapes in fashion accessory design, judging them appropriate for expressing creative images given that firework displays serve as both entertainment and a festive symbol. The study promotes the sustained application of firework shapes to develop fashion culture items with a distinctive personality and uniqueness. The proposed fashion accessory design was intended to create an entertaining new atmosphere using an Organic Light Emitting Diode (OLED), which is drawing attention as a futuristic display. In terms of methodology, a literature review of firework shapes and OLED was conducted; in addition, Adobe Illustrator CS2 and Adobe Photoshop CS2 were used to develop six standard motif designs with formative design elements drawn from a variety of firework shapes. Each of the six motifs was further expanded with different color combinations. Rich images are produced with pink, blue, purple, green, yellow, orange, and red, in conjunction with various OLED effects, to express three-dimensional images of fireworks. The motifs are applied to three types of items: bags, bracelets, and necklaces; for the video images, evening and tote bags, pendants, and bangles were used. Shifting images and lights should produce unique impressions as well as satisfy the consumer desire for entertainment. Adobe ImageReady was used to present the firework motifs applied to the accessory designs as video images rather than still-cut images, owing to the physical constraints of this paper.

Digital Watermarking Technique of Compressed Multi-view Video with Layered Depth Image (계층적 깊이 영상으로 압축된 다시점 비디오에 대한 디지털 워터마크 기술)

  • Lim, Joong-Hee;Shin, Jong-Hong;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.9 no.1
    • /
    • pp.1-9
    • /
    • 2009
  • In this paper, a digital image watermarking technique based on the lifting wavelet transform is proposed. The technique can easily be extended to video content. We therefore apply it to the layered depth image structure, which is an efficient compression method for multi-view video with depth images. The application steps are very simple, because the watermark is inserted only into the reference image, and the watermarks of the other view images are derived from the reference image. Each view image of the multi-view video can thus be guaranteed authentication and copyright protection.

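The sketch below illustrates marking only the reference view in the wavelet domain, as the abstract describes. PyWavelets' standard 2-D DWT stands in for the paper's lifting implementation, and the additive spread-spectrum embedding rule and strength are illustrative assumptions.

```python
# Hedged sketch: watermark the reference view only, in the wavelet domain.
# Standard DWT substitutes for the paper's lifting wavelet transform.
import numpy as np
import pywt

ALPHA = 2.0        # embedding strength (assumed)
WAVELET = "haar"   # assumed wavelet

def embed(reference_view, watermark_bits, key=0):
    ll, (lh, hl, hh) = pywt.dwt2(reference_view.astype(np.float64), WAVELET)
    pattern = np.random.default_rng(key).standard_normal(lh.shape)
    # Spread each bit (+1/-1) over the LH sub-band with a keyed pattern.
    signs = np.where(np.resize(watermark_bits, lh.shape) > 0, 1.0, -1.0)
    lh_marked = lh + ALPHA * signs * pattern
    return pywt.idwt2((ll, (lh_marked, hl, hh)), WAVELET)

def detect(view, watermark_bits, key=0):
    _, (lh, _, _) = pywt.dwt2(view.astype(np.float64), WAVELET)
    pattern = np.random.default_rng(key).standard_normal(lh.shape)
    signs = np.where(np.resize(watermark_bits, lh.shape) > 0, 1.0, -1.0)
    return float(np.mean(lh * signs * pattern))   # clearly > 0 suggests presence

if __name__ == "__main__":
    view = np.random.default_rng(1).integers(0, 256, (128, 128)).astype(float)
    bits = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    marked = embed(view, bits)
    print("marked vs. unmarked correlation:", detect(marked, bits), detect(view, bits))
```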

Optimal Coding Model for Screen Contents Applications from the Coding Performance Analysis of High Efficient Coding Tools in HEVC (HEVC 고성능 압축 도구들의 성능 분석을 통한 스크린 콘텐츠 응용 최적 부호화 모델)

  • Han, Chan-Hee;Lee, Si-Woong
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.12
    • /
    • pp.544-554
    • /
    • 2012
  • Screen content refers to images or videos generated by electronic devices such as computers or mobile phones, whereas natural content refers to images captured by cameras. Screen content shows different statistical characteristics from natural images, so conventional video codecs, which were developed mainly for coding natural videos, cannot guarantee good coding performance for screen content. Recently, research on efficient SCC (Screen Content Coding) has been actively conducted, and SCC issues are being discussed steadily at the ongoing JCT-VC (Joint Collaborative Team on Video Coding) meetings for the HEVC (High Efficiency Video Coding) standard. In this paper, we analyze the performance of the high-efficiency coding tools in the HM (HEVC Test Model) on screen content and present an optimized SCC model based on the analysis results. We also discuss the characteristics of screen content and future research issues.

Multi-view Video Coding using View Interpolation (영상 보간을 이용한 다시점 비디오 부호화 방법)

  • Lee, Cheon;Oh, Kwan-Jung;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.12 no.2
    • /
    • pp.128-136
    • /
    • 2007
  • Since multi-view video is a set of video sequences captured by multiple camera arrays for the same three-dimensional scene, it can provide multiple viewpoint images using geometric manipulation and intermediate view generation. Although multi-view video allows us to experience a more realistic feeling with a wide range of images, the amount of data to be processed increases in proportion to the number of cameras; therefore, we need to develop efficient coding methods. One possible approach to multi-view video coding is to generate an intermediate image using a view interpolation method and to use the interpolated image as an additional reference frame. The previous view interpolation method for multi-view video coding employs fixed-size block matching over a pre-determined disparity search range; however, if the disparity search range is not proper, disparity errors may occur. In this paper, we propose an efficient view interpolation method using initial disparity estimation, variable block-based estimation, and pixel-level estimation with adjusted search ranges. In addition, we propose a multi-view video coding method based on H.264/AVC that exploits the intermediate image. With the proposed method, intermediate images are improved by about 1~4 dB compared to the previous view interpolation method, and the coding efficiency is improved by about 0.5 dB compared to the reference model.
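The basic building block, block-based disparity estimation followed by averaging of disparity-compensated pixels, can be sketched as below. The paper's variable block sizes, adjusted search ranges, and pixel-level refinement are not reproduced; the block size and search range here are arbitrary.

```python
# Hedged sketch of view interpolation from block-based disparity estimation.
import numpy as np

BLOCK, MAX_DISP = 8, 16   # illustrative block size and disparity search range

def estimate_disparity(left, right):
    """Fixed-size block matching with a SAD cost."""
    h, w = left.shape
    disp = np.zeros((h // BLOCK, w // BLOCK), dtype=int)
    for by in range(h // BLOCK):
        for bx in range(w // BLOCK):
            y, x = by * BLOCK, bx * BLOCK
            block = left[y:y + BLOCK, x:x + BLOCK].astype(np.float64)
            best, best_d = np.inf, 0
            for d in range(0, min(MAX_DISP, x) + 1):
                cand = right[y:y + BLOCK, x - d:x - d + BLOCK].astype(np.float64)
                cost = np.abs(block - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[by, bx] = best_d
    return disp

def interpolate_middle_view(left, right, disp):
    """Place the average of matched left/right blocks at the halfway position.
    Occlusion holes are left unfilled in this sketch."""
    mid = np.zeros_like(left, dtype=np.float64)
    for by in range(disp.shape[0]):
        for bx in range(disp.shape[1]):
            y, x, d = by * BLOCK, bx * BLOCK, disp[by, bx]
            xm = x - d // 2                    # halfway between the two views
            avg = 0.5 * (left[y:y + BLOCK, x:x + BLOCK].astype(np.float64) +
                         right[y:y + BLOCK, x - d:x - d + BLOCK])
            mid[y:y + BLOCK, xm:xm + BLOCK] = avg
    return mid.astype(left.dtype)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    left = rng.integers(0, 256, (64, 64)).astype(np.uint8)
    right = np.roll(left, -4, axis=1)          # toy horizontal shift
    d = estimate_disparity(left, right)
    mid = interpolate_middle_view(left, right, d)
    print(float(d.mean()), mid.shape)
```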

Application of Video Photogrammetry for Generating and Updating Digital Maps (수치지도 생성 및 갱신을 위한 Video Photogrammetry 적용)

  • Yoo, Hwan-Hee;Sung, Jae-Ryeol
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.6 no.2 s.12
    • /
    • pp.11-20
    • /
    • 1998
  • Although aerial photogrammetry has been used to generate and update digital maps, it is difficult to obtain the spatial and attribute data for every kind of object on the ground with aerial photogrammetry alone; such information is therefore collected through on-the-spot surveys. In order to improve the accuracy and reliability of on-the-spot surveys, in this study we obtained stereo images from a high-resolution digital camera (1152*864 pixels) and developed a video photogrammetry system that determines three-dimensional coordinates from the stereo images by applying the DLT (Direct Linear Transformation). The developed video photogrammetry system can also generate and update the spatial and attribute data in digital maps by means of a function that links the three-dimensional coordinates with the attribute data.

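The DLT step can be sketched as follows: each camera's 11 DLT parameters give two linear equations in (X, Y, Z), and the stereo pair yields an overdetermined system solved by least squares. The calibration values in the example are synthetic, not from the paper.

```python
# Hedged sketch of 3-D coordinate determination by the Direct Linear
# Transformation (DLT) from a stereo pair. The 11 DLT parameters of each
# camera are assumed to be known from calibration.
import numpy as np

def triangulate_dlt(L1, L2, uv1, uv2):
    """Solve for (X, Y, Z) from two 11-parameter DLT cameras.

    Each camera contributes two linear equations derived from
      u = (L1*X + L2*Y + L3*Z + L4) / (L9*X + L10*Y + L11*Z + 1)
      v = (L5*X + L6*Y + L7*Z + L8) / (L9*X + L10*Y + L11*Z + 1)
    """
    rows, rhs = [], []
    for L, (u, v) in ((L1, uv1), (L2, uv2)):
        rows.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        rhs.append(u - L[3])
        rows.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        rhs.append(v - L[7])
    xyz, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return xyz

if __name__ == "__main__":
    # Synthetic cameras and a synthetic ground point, purely for illustration.
    rng = np.random.default_rng(3)
    cam1, cam2 = rng.normal(size=11), rng.normal(size=11)
    point = np.array([1.0, 2.0, 3.0])

    def project(L, p):
        den = L[8] * p[0] + L[9] * p[1] + L[10] * p[2] + 1.0
        u = (L[0] * p[0] + L[1] * p[1] + L[2] * p[2] + L[3]) / den
        v = (L[4] * p[0] + L[5] * p[1] + L[6] * p[2] + L[7]) / den
        return u, v

    print(triangulate_dlt(cam1, cam2, project(cam1, point), project(cam2, point)))
```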

Video Camera Model Identification System Using Deep Learning (딥 러닝을 이용한 비디오 카메라 모델 판별 시스템)

  • Kim, Dong-Hyun;Lee, Soo-Hyeon;Lee, Hae-Yeoun
    • The Journal of Korean Institute of Information Technology
    • /
    • v.17 no.8
    • /
    • pp.1-9
    • /
    • 2019
  • With the development of imaging and information communication technology in modern society, image acquisition and mass-production technologies have developed rapidly. However, crimes exploiting these technologies have increased, and forensic studies are being conducted to counter them. Identification techniques for image acquisition devices have been studied extensively, but the field has been limited to still images. In this paper, a camera model identification technique for video, rather than still images, is proposed. We analyzed video frames using a model trained on images. Through training and analysis that consider the frame characteristics of video, we show the superiority of the model that uses P frames. We then present a video camera model identification system that applies a majority-based decision algorithm. In an experiment using 5 video camera models, we obtained a maximum accuracy of 96.18% for individual frame identification, and the proposed video camera model identification system achieved a 100% identification rate for each camera model.
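The majority-based decision stage can be sketched very simply: each frame (e.g., each P frame) is classified independently and the most frequent label wins. The trained CNN itself is represented by a placeholder callable.

```python
# Hedged sketch of the majority-vote decision over per-frame predictions.
# The camera-model CNN and the P-frame extraction step are not reproduced;
# `classify_frame` is a placeholder for the paper's trained network.
from collections import Counter
from typing import Callable, Iterable, List

CAMERA_MODELS = ["model_A", "model_B", "model_C", "model_D", "model_E"]

def identify_camera(frames: Iterable, classify_frame: Callable[[object], str]) -> str:
    """Classify each (P-)frame independently, then take the majority label."""
    votes: List[str] = [classify_frame(f) for f in frames]
    label, _count = Counter(votes).most_common(1)[0]
    return label

if __name__ == "__main__":
    # Stand-in classifier: pretend most frames point to model_B.
    fake_predictions = ["model_B", "model_B", "model_A", "model_B", "model_C"]
    frames = range(len(fake_predictions))
    print(identify_camera(frames, lambda i: fake_predictions[i]))   # -> model_B
```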