• Title/Summary/Keyword: video to images


A STUDY ON THE READABILITY OF PERIAPICAL RADIOGRAPH WITH THE DIGITAL RADIOGRAPHY (Digital radiography를 이용한 치근단 X선 사진의 판독능에 관한 실험적 연구)

  • Lee Kon;Lee Sang Rae
    • Journal of Korean Academy of Oral and Maxillofacial Radiology / v.22 no.1 / pp.117-127 / 1992
  • This investigation was performed to test the readability of video-based digital radiography, which can be applied clinically, compared with periapical radiographs. The experiments were performed with an IBM-PC/AT compatible, a video camera, and an ADC (analog-to-digital converter); the spatial resolution was 512 × 480 with 256 gray levels (8 bits). Radiographs obtained using various exposure times were digitized, and the digital images were then analyzed. The results were as follows: 1. There was no remarkable difference in readability between the radiographs and their digital images; however, under over-exposure the digital images were superior to the radiographs in readability, and vice versa. 2. As the exposure time increased, the gray level of the digital image decreased proportionally. 3. The correlation between the regions of interest and the aluminum step wedges was relatively close: R = 0.9965 (p < 0.001).
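
The reported step-wedge correlation (R = 0.9965) is an ordinary Pearson coefficient; as a minimal sketch, with made-up ROI gray levels and wedge thicknesses standing in for the paper's measurements:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: mean ROI gray level vs. aluminum step-wedge thickness (mm).
thickness = [1, 2, 3, 4, 5, 6]
gray = [230, 201, 175, 148, 120, 95]   # gray level falls as thickness rises
r = pearson_r(thickness, gray)
```

A strong negative r here reflects the abstract's second finding: gray level decreases as exposure (attenuation) increases.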


Probabilistic Background Subtraction in a Video-based Recognition System

  • Lee, Hee-Sung;Hong, Sung-Jun;Kim, Eun-Tai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.4 / pp.782-804 / 2011
  • In video-based recognition systems, stationary cameras are used to monitor an area of interest. These systems focus on segmenting the foreground in the video stream and recognizing the events occurring in that area. The usual approach to discriminating the foreground from the video sequence is background subtraction. This paper presents a novel background subtraction method based on a probabilistic approach. We represent the posterior probability of the foreground based on the current image and all past images, and derive an update method. Furthermore, we present an efficient fusion method for color and edge information in order to overcome the difficulties of existing background subtraction methods that use only color information. The suggested method is applied to synthetic data and real video streams, and its robust performance is demonstrated experimentally.
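
The paper's full posterior model is not reproduced here, but the baseline idea it builds on (per-pixel background subtraction against a running background estimate) can be sketched as follows; the threshold `tau` and learning rate `alpha` are illustrative assumptions:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average of the per-pixel background model."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, tau=25.0):
    """Pixels deviating from the background model by more than tau are foreground."""
    return np.abs(frame.astype(float) - bg) > tau

# Toy example: a static background with one bright "object" pixel appearing.
bg = np.full((4, 4), 100.0)
frame = bg.copy()
frame[1, 1] = 200.0                 # moving object enters the scene
mask = foreground_mask(bg, frame)   # True only at (1, 1)
bg = update_background(bg, frame)   # background slowly absorbs the change
```

The paper replaces the fixed threshold with a posterior probability and fuses an edge channel with the color channel; the update step above is the recursive part that any such model needs.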

A New Details Extraction Technique for Video Sequence Using Morphological Laplacian (수리형태학적 Laplacian 연산을 이용한 새로운 동영상 Detail 추출 기법)

  • 김희준;어진우
    • Proceedings of the IEEK Conference / 1998.10a / pp.911-914 / 1998
  • In this paper, the importance of including small image features at the initial levels of a progressive second-generation video coding scheme is presented. It is shown that a number of meaningful small features, called details, should be coded in order to match their perceptual significance to the human visual system. We propose a method for extracting, perceptually selecting, and coding visual details in a video sequence using the morphological Laplacian operator and a modified post-it transform, which is very efficient for improving the quality of the reconstructed images.
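
The morphological Laplacian the abstract refers to is the difference of the dilation and erosion residues around the signal; a 1-D sketch (the window radius and test signal are illustrative) showing how it responds to a small bright detail:

```python
import numpy as np

def dilate(f, r=1):
    """Grayscale dilation of a 1-D signal with a flat window of radius r."""
    pad = np.pad(f, r, mode='edge')
    return np.array([pad[i:i + 2 * r + 1].max() for i in range(len(f))])

def erode(f, r=1):
    """Grayscale erosion with the same flat window."""
    pad = np.pad(f, r, mode='edge')
    return np.array([pad[i:i + 2 * r + 1].min() for i in range(len(f))])

def morph_laplacian(f, r=1):
    """Morphological Laplacian: (dilation - f) - (f - erosion)."""
    return (dilate(f, r) - f) - (f - erode(f, r))

signal = np.array([10, 10, 10, 50, 10, 10, 10])  # one small bright detail
lap = morph_laplacian(signal)
# Strong negative response on the peak, positive on its flanks, zero on flats.
```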


Integrating Video Image into Digital Map (동영상과 수치지도의 결합에 관한 연구)

  • Kim, Yong-Il;Pyeon, Mu-Wook
    • Journal of Korean Society for Geospatial Information Science / v.4 no.2 s.8 / pp.161-172 / 1996
  • The objective of this research is to develop a process for integrating video images into a digital map. To reach this objective, the work includes the development of a georeferencing technique for video images, the development of a pilot system, and an assessment process. The georeferencing technique for video images is composed of DGPS positioning, filtering of abnormal points, map conflation, indexing locations for key frames via time tags, and indexing locations for all frames. Using the proposed building process, we found that the accuracy of capturing test points in the images was 92.8% (±2 frames). The eventual meaning of this study is that it opens up a new conception of the digital map, one which overcomes the limitations of the existing two-dimensional digital map.
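
The time-tag indexing step can be illustrated by linearly interpolating a position for every frame from sparse key-frame fixes; the DGPS coordinates and frame rate below are hypothetical, not from the paper:

```python
def interpolate_frame_positions(key_frames, n_frames, fps=30.0):
    """Linearly interpolate an (x, y) position for every frame from sparse
    key frames tagged with (time_sec, x, y)."""
    positions = []
    for i in range(n_frames):
        t = i / fps
        # Find the pair of key frames bracketing this frame's time tag.
        for (t0, x0, y0), (t1, x1, y1) in zip(key_frames, key_frames[1:]):
            if t0 <= t <= t1:
                w = (t - t0) / (t1 - t0)
                positions.append((x0 + w * (x1 - x0), y0 + w * (y1 - y0)))
                break
        else:
            # Before the first / after the last key frame: clamp to the ends.
            t0, x0, y0 = key_frames[0]
            tN, xN, yN = key_frames[-1]
            positions.append((x0, y0) if t < t0 else (xN, yN))
    return positions

# Hypothetical DGPS fixes for two key frames, one second apart.
keys = [(0.0, 100.0, 200.0), (1.0, 130.0, 200.0)]
pos = interpolate_frame_positions(keys, n_frames=31, fps=30.0)
```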


A Tile-Image Merging Algorithm of Tiled-Display Recorder using Time-stamp (타임 스탬프를 이용한 타일드 디스플레이 기록기의 타일 영상 병합 알고리즘)

  • Choe, Gi-Seok;Nang, Jong-Ho
    • Journal of KIISE: Computer Systems and Theory / v.36 no.5 / pp.327-334 / 2009
  • The tiled-display system provides a high-resolution display that can be used in various applications in co-working areas. Systems used in the co-working field usually save user logs, and this log information not only makes maintenance of the tiled-display system easier but can also be used to check the progress of the co-working. There are three main steps in the proposed tiled-display log recorder. The first step is to capture screen shots of the tiles and send them for merging. The second step is to merge the captured tile images to form a single screen shot of the tiled display. The final step is to encode the merged tile images into a compressed video stream. This video stream can be stored as a log of the co-working or streamed to remote users. Since there can be differences in the capture times of the tile images, the quality of the merged tiled-display image can be degraded. This paper proposes a time-stamp-based metric to evaluate the quality of the video stream, and a merging algorithm that improves the quality of the video stream with respect to the proposed quality metric.
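
One plausible reading of the time-stamp-based merging is: for each tile, pick the capture closest to a common target time, then score the merged frame by the residual time skew. This sketch is an assumption about the mechanism, not the paper's exact algorithm:

```python
def pick_tiles(captures, target_ts):
    """For each tile, pick the capture whose time stamp is closest to target_ts.
    captures: {tile_id: [(timestamp_ms, image), ...]}"""
    chosen = {}
    for tile, shots in captures.items():
        chosen[tile] = min(shots, key=lambda s: abs(s[0] - target_ts))
    return chosen

def time_skew(chosen):
    """Quality metric: spread between earliest and latest chosen time stamps.
    Smaller skew means the merged frame is closer to a single instant."""
    stamps = [ts for ts, _ in chosen.values()]
    return max(stamps) - min(stamps)

# Two tiles, each with two captures (time stamps in ms, placeholder images).
captures = {
    0: [(100, "img0a"), (133, "img0b")],
    1: [(110, "img1a"), (140, "img1b")],
}
chosen = pick_tiles(captures, target_ts=130)
skew = time_skew(chosen)
```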

Decision on Compression Ratios for Real-Time Transfer of Ultrasound Sequences

  • Lee, Jae-Hoon;Sung, Min-Mo;Kim, Hee-Joung;Yoo, Sun-Kwook;Kim, Eun-Kyung;Kim, Dong-Keun;Jung, Suk-Myung;Yoo, Hyung-Sik
    • Proceedings of the Korean Society of Medical Physics Conference / 2002.09a / pp.489-491 / 2002
  • The need for video diagnosis in medicine has increased, and real-time transfer of digital video will be an important component of PACS and telemedicine. However, network environments have limitations such that the required throughput cannot always satisfy quality of service (QoS). MPEG-4, ratified as a moving-video standard by the ISO/IEC, provides very efficient video coding covering various ranges of low bit rates in network environments. We implemented an MPEG-4 CODEC (coder/decoder) and applied various compression ratios to moving ultrasound images. These images were displayed in random order on a client monitor after passing through the network. Radiologists gave subjective opinion scores to evaluate clinically acceptable image quality, and these scores were statistically processed with the t-test. Moreover, the MPEG-4 decoded images were quantitatively analyzed by computing the peak signal-to-noise ratio (PSNR) to objectively evaluate image quality. The bit rate needed to maintain clinically acceptable image quality was up to 0.8 Mbps. We successfully implemented adaptive throughput (bit rate) relative to the image quality of ultrasound sequences using MPEG-4, which can be applied to diagnostic performance in real time.
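
The PSNR used for the objective evaluation is the standard definition; a minimal sketch over flat pixel lists (the sample values are made up):

```python
import math

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images
    given as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

orig = [100, 120, 140, 160]
dec = [101, 119, 141, 159]    # decoded frame with a small coding error
quality_db = psnr(orig, dec)
```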


Adaptive Video Watermarking based on 3D-DCT Using Image Characteristics (영상 특성을 이용한 3D-DCT 기반의 적응적인 비디오 워터마킹)

  • Park Hyun;Lee Sung-Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.3 s.309 / pp.68-75 / 2006
  • In this paper, we propose an adaptive video watermarking method using the human visual system (HVS) and the characteristics of three-dimensional discrete cosine transform (3D-DCT) cubes. We classify 3D-DCT cubes into three patterns according to the distribution of coefficients in the cube: cubes with motion and textures, cubes with high textures and little motion, and cubes with little textures and little motion. Images are also classified into three types according to the ratio of these patterns: images with motion and textures, images with high textures and little motion, and images with little textures and little motion. The proposed watermarking method adaptively inserts the watermark into the mid-range coefficients of the 3D-DCT cube, using an appropriately learned sensitivity table and proportional constants depending on the pattern of the 3D-DCT cube and the type of image. Experimental results show that the proposed method achieves better performance in terms of invisibility and robustness than the previous method.
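
The mid-range-coefficient embedding can be sketched with a separable 3D DCT built from an orthonormal DCT-II matrix; the cube size, target coefficient, strength `alpha`, and the non-blind extractor below are illustrative assumptions, not the paper's learned sensitivity tables:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II transform matrix (rows = frequencies)."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def dct3(cube, M):
    """Separable 3D DCT applied along all three axes of a cube."""
    return np.einsum('ai,bj,ck,ijk->abc', M, M, M, cube)

def idct3(C, M):
    """Inverse 3D DCT (M is orthonormal, so its transpose inverts it)."""
    return np.einsum('ai,bj,ck,abc->ijk', M, M, M, C)

def embed_bit(cube, bit, M, coeff=(2, 2, 2), alpha=4.0):
    """Shift one mid-range coefficient up or down to carry a single bit."""
    C = dct3(cube, M)
    C[coeff] += alpha if bit else -alpha
    return idct3(C, M)

M = dct_matrix(8)
rng = np.random.default_rng(0)
cube = rng.uniform(0.0, 255.0, (8, 8, 8))   # 8x8x8 spatio-temporal cube
marked = embed_bit(cube, 1, M)
# Non-blind extraction (needs the original cube), for illustration only:
bit = int(dct3(marked, M)[2, 2, 2] > dct3(cube, M)[2, 2, 2])
```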

A Study on Gender Identity Expressed in Fashion in Music Video

  • Jeong, Ha-Na;Choy, Hyon-Sook
    • International Journal of Costume and Fashion / v.6 no.2 / pp.28-42 / 2006
  • In modern society, media contributes more to the construction of personal identity than any other medium. Music video, a postmodernist branch among a variety of media, offers a complex experience of sound combined with visual images. In particular, fashion in music videos helps convey context effectively and functions as a medium of immediate communication through visual effect. Considering the socio-cultural effects of music video, the gender identity represented in its fashion can be of great importance. Therefore, this study reconsiders the gender identity represented through costume in music videos by analyzing the fashion in them. Gender identity in the socio-cultural category is classified as masculinity, femininity, and the third sex. By examining fashion based on this classification, this study will help to create new design concepts and to understand gender identity in fashion. The results of this study are as follows: First, masculinity in music video fashion was categorized into stereotyped masculinity, sexual masculinity, and metrosexual masculinity. Second, femininity in music video fashion was categorized into stereotyped femininity, sexual femininity, and contrasexual femininity. Third, the third sex in music video fashion was categorized into transvestism, masculinization of the female, and feminization of the male; this phenomenon is presented in music videos through females in male attire and males in female attire. Through this research, the gender identity represented in music video fashion was demonstrated, and the importance of the relationship between the representation of identity through fashion and the socio-cultural environment was reconfirmed.

Image Resizing in an Arbitrary Block Transform Domain Using the Filters Suitable to Image Signal (임의의 직교 블록 변환 영역에서 영상 특성에 적합한 필터를 사용한 영상 해상도 변환)

  • Oh, Hyung-Suk;Kim, Won-Ha;Koo, Jun-Mo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.5 / pp.52-62 / 2008
  • In this paper, we develop a method that changes the resolution of images in an arbitrary block transform domain by using a filter suited to the characteristics of the underlying images. To accomplish this, we represent each procedure for resizing images in an arbitrary transform domain as matrix multiplications, and from these we derive the matrix that scales the image resolution. The resolution-scaling matrix is also designed so that the up/down-sampling filter can be selected to suit the characteristics of the image. Experiments show that the proposed method performs reliably when applied to various transforms and to images containing a mix of predicted and non-predicted blocks generated during video coding.
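
The idea of expressing resizing as matrix multiplications can be shown in the spatial domain, where a simple averaging filter becomes a rectangular matrix applied on both sides of the image; the transform-domain version in the paper composes such a matrix with the block transform matrices. The factor-2 averaging filter below is an illustrative choice:

```python
import numpy as np

def downsample_matrix(n, factor=2):
    """Rectangular matrix that reduces resolution by averaging groups of
    adjacent samples; resizing then becomes S @ image @ S.T."""
    m = n // factor
    S = np.zeros((m, n))
    for i in range(m):
        S[i, factor * i:factor * (i + 1)] = 1.0 / factor
    return S

img = np.arange(16.0).reshape(4, 4)   # toy 4x4 image
S = downsample_matrix(4)
small = S @ img @ S.T                  # 2x2 result, each pixel a 2x2 average
```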

A study on lighting angle for improvement of 360 degree video quality in metaverse (메타버스에서 360° 영상 품질향상을 위한 조명기 투사각연구)

  • Kim, Joon Ho;An, Kyong Sok;Choi, Seong Jhin
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.499-505 / 2022
  • Recently, the metaverse has been receiving a lot of attention. The metaverse is a virtual space, and various events can be held in it. In particular, 360-degree video, a format optimized for the metaverse space, is attracting attention. A 360-degree video is created by stitching together images taken with multiple cameras or lenses covering all 360 degrees. When shooting a 360-degree video, the shooting staff and equipment around the camera would normally appear in the footage, so everything except the subject has to be hidden from the camera. This shooting method has several problems, of which lighting is the biggest: it is very difficult to install a fixture that focuses light on the subject from behind the camera, as in conventional video shooting. This study is an experimental study to find the optimal lighting angle for 360-degree video by adjusting the angle of the indoor lighting. We propose a method to record 360-degree video without installing additional lighting. Based on the results of this study, we expect that experiments with more varied projection angles will be conducted in the future, and that the results will be helpful when using 360-degree video in the metaverse space.