• Title/Summary/Keyword: consistent video quality


Interframe Coding of 3-D Medical Image Using Warping Prediction (Warping을 이용한 움직임 보상을 통한 3차원 의료 영상의 압축)

  • So, Yun-Sung;Cho, Hyun-Duck;Kim, Jong-Hyo;Ra, Jong-Beom
    • Journal of Biomedical Engineering Research / v.18 no.3 / pp.223-231 / 1997
  • In this paper, an interframe coding method for volumetric medical images is proposed. By treating interslice variations as the motion of bones or tissues, we use the motion compensation (MC) technique to predict the current frame from the previous frame. Instead of a block matching algorithm (BMA), which is the most common motion estimation (ME) algorithm in video coding, image warping with a bilinear transformation is suggested to predict complex interslice object variation in medical images. When an object disappears between slices, however, warping prediction performs poorly. To overcome this drawback, an overlapped block motion compensation (OBMC) technique is combined with warping prediction. Motion-compensated residual images are then encoded using an embedded zerotree wavelet (EZW) coder with a small modification for consistent quality of reconstructed images. The experimental results show that interframe coding using warping prediction outperforms BMA-based interframe coding, and that the OBMC scheme gives an additional improvement over the warping-only MC method.
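The bilinear warping prediction described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the coefficient layout and the nearest-neighbour sampling are assumptions made for brevity (a real coder would use proper interpolation and estimate the coefficients per control-grid patch).

```python
import numpy as np

def bilinear_transform(x, y, a):
    """Map (x, y) through the bilinear model
    x' = a0 + a1*x + a2*y + a3*x*y,  y' = a4 + a5*x + a6*y + a7*x*y."""
    xp = a[0] + a[1] * x + a[2] * y + a[3] * x * y
    yp = a[4] + a[5] * x + a[6] * y + a[7] * x * y
    return xp, yp

def warp_predict(prev_slice, a):
    """Predict the current slice by sampling the previous slice at
    bilinearly transformed coordinates (nearest-neighbour sampling
    for brevity)."""
    h, w = prev_slice.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xp, yp = bilinear_transform(xs, ys, a)
    xp = np.clip(np.round(xp).astype(int), 0, w - 1)
    yp = np.clip(np.round(yp).astype(int), 0, h - 1)
    return prev_slice[yp, xp]
```

The residual that the EZW coder would then compress is simply `current_slice - warp_predict(prev_slice, a)`.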


Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.89-89 / 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve its integrity, soybean yield and seed quality must be protected from various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only directly reduces yields but also causes disorders and diseases in plant growth. Unfortunately, no resistant soybean resources have been reported. It is therefore necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage it causes. Conventionally, agronomic traits related to pest outbreaks have been diagnosed by the human eye; because human vision is subjective and inconsistent, this is time-consuming, labor-intensive, and requires specialist assistance. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized with a 3D model from the perspective of artificial intelligence. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In object tracking implemented with a YOLO-series model, the movement paths of the pests show a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z axes of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the mixture R treatment. These studies are also being conducted in soybean fields, and applying a pest-control platform at the early growth stage should help preserve soybean yield.
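The final classification step, labeling a tracked 3D path as avoiding or approaching the scent source, could be sketched as below. This is a hypothetical illustration only: the function name, the distance-based criterion, and the labels are assumptions, not the authors' actual pipeline (which uses YOLO-based tracking upstream).

```python
import numpy as np

def classify_response(track, source, threshold=0.0):
    """Label a tracked path as 'avoid' or 'approach' relative to a scent
    source, by comparing the distances of the first and last positions.
    `track` is an (N, 3) array-like of x, y, z positions over time."""
    track = np.asarray(track, dtype=float)
    source = np.asarray(source, dtype=float)
    d_start = np.linalg.norm(track[0] - source)
    d_end = np.linalg.norm(track[-1] - source)
    return "avoid" if d_end - d_start > threshold else "approach"
```

Aggregating such labels over all tracked subjects would yield the kind of percentage reported in the abstract (e.g. the share of subjects avoiding the treated location).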


3D Object Extraction Mechanism from Informal Natural Language Based Requirement Specifications (비정형 자연어 요구사항으로부터 3D 객체 추출 메커니즘)

  • Hyuntae Kim;Janghwan Kim;Jihoon Kong;Kidu Kim;R. Young Chul Kim
    • The Transactions of the Korea Information Processing Society / v.13 no.9 / pp.453-459 / 2024
  • Recent advances in generative AI technologies using natural language processing have had a critical impact on text, image, and video production. Despite these innovations, the consistency and reusability of AI-generated outputs still need to improve. These issues are critical in cartoon creation, where the inability to consistently replicate characters and specific objects can degrade a work's quality. To solve this, we propose an integrated adaptation of language-analysis-based requirements engineering and cartoon engineering. The proposed method applies the linguistic frameworks of Chomsky and Fillmore to analyze natural language and utilizes UML sequence models to generate consistent 3D representations of object interactions. It systematically interprets the creator's intentions from textual inputs, ensuring that each character or object, once conceptualized, is accurately replicated across panels and episodes to preserve visual and contextual integrity. This technique enhances the accuracy and consistency of character portrayals in animated contexts, aligning closely with the initial specifications. Consequently, the method is potentially applicable in other domains that require translating complex textual descriptions into visual representations.
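The pipeline this abstract describes, from a Fillmore-style case frame to a UML sequence-model message, might be sketched as follows. The class fields, role names, and output syntax here are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class CaseFrame:
    """Fillmore-style case frame for one requirement sentence
    (role names are illustrative, not the paper's schema)."""
    agentive: str   # who performs the action
    objective: str  # what is acted upon
    action: str     # the verb

def to_sequence_message(frame: CaseFrame) -> str:
    """Render a case frame as a PlantUML-style sequence-diagram message,
    so each named entity becomes a reusable lifeline (and, downstream,
    a consistently replicated 3D object)."""
    return f"{frame.agentive} -> {frame.objective} : {frame.action}"
```

Reusing the same lifeline names across sentences is what would let a later rendering stage keep each character or object visually consistent across panels and episodes.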