• Title/Summary/Keyword: Harmful Video Images


A Method for Identification of Harmful Video Images Using a 2-Dimensional Projection Map

  • Kim, Chang-Geun; Kim, Soung-Gyun; Kim, Hyun-Ju
    • Journal of Information and Communication Convergence Engineering / v.11 no.1 / pp.62-68 / 2013
  • This paper proposes a method for identifying harmful video images based on the degree of harmfulness in the video content. To extract harmful candidate frames from the video effectively, we used a video color extraction method applying a projection map. The procedure for identifying harmful video has five steps: first, extract the I-frames from the video and map them onto the projection map; next, calculate the similarity and select the potentially harmful frames; then identify the harmful images by comparing the similarity measurement values. The method estimates the similarity between the extracted frames and normative images using the critical value of the projection map. Based on our experimental tests, we show how the harmful candidate frames are extracted and compared with normative images. The experimental data show that the image identification method based on the 2-dimensional projection map outperforms the color histogram technique in harmful image detection.
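
The abstract does not spell out how the projection map or the similarity measure is computed. Below is a minimal sketch under the assumption that the 2-dimensional projection map is formed from row- and column-wise sums of a single color channel and compared by histogram intersection; the function names, the channel choice, and the 0.85 threshold are illustrative, not taken from the paper.

```python
# Sketch: 2-D projection-map comparison (assumed interpretation, not the paper's exact method).
import numpy as np

def projection_map(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Project a single-channel frame onto its rows and columns."""
    rows = frame.sum(axis=1).astype(np.float64)
    cols = frame.sum(axis=0).astype(np.float64)
    # Normalise so frames of different overall brightness are comparable.
    return rows / (rows.sum() + 1e-9), cols / (cols.sum() + 1e-9)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Histogram-intersection style similarity between two projections."""
    return float(np.minimum(a, b).sum())

def is_harmful_candidate(frame, normative_frames, threshold=0.85):
    """Flag a frame whose projections are close to any normative (harmful) reference image."""
    fr, fc = projection_map(frame)
    for ref in normative_frames:
        rr, rc = projection_map(ref)
        if 0.5 * (similarity(fr, rr) + similarity(fc, rc)) >= threshold:
            return True
    return False

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, (120, 160), dtype=np.uint8)
    refs = [rng.integers(0, 256, (120, 160), dtype=np.uint8) for _ in range(3)]
    print(is_harmful_candidate(frame, refs))
```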

Detection of Harmful Images Based on Color and Geometrical Features (색상과 기하학적인 특징 기반의 유해 영상 탐지)

  • Jang, Seok-Woo; Park, Young-Jae; Huh, Moon-Haeng
    • Journal of the Korea Academia-Industrial Cooperation Society / v.14 no.11 / pp.5834-5840 / 2013
  • Along with the development of high-speed wired and wireless Internet technology, various harmful images in the form of photos and video clips have become prevalent. In this paper, we suggest a method for automatically detecting adult images by extracting the woman's nipple areas, which represent the obscenity of the image. The suggested algorithm first segments skin color areas in the $YC_bC_r$ color space from input images and extracts nipple candidate areas from the segmented skin areas through the suggested nipple map. We then select real nipple areas using geometrical information and determine input images to be harmful if they contain nipples. Experimental results show that the suggested nipple map-based method effectively detects adult images.
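
As a rough illustration of the first stage described above, here is a minimal sketch of skin-color segmentation in the YCbCr space. The Cb/Cr ranges are commonly used textbook values rather than the paper's thresholds, and the nipple-map and geometric-verification stages are omitted.

```python
# Sketch: skin-colour segmentation in YCbCr (thresholds are generic, not the paper's).
import cv2
import numpy as np

def skin_mask_ycbcr(bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of pixels whose Cb/Cr values fall in a typical skin range."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)   # OpenCV orders channels Y, Cr, Cb
    _, cr, cb = cv2.split(ycrcb)
    mask = ((cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)).astype(np.uint8) * 255
    # Morphological opening removes small, isolated false detections.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    img = np.zeros((100, 100, 3), dtype=np.uint8)
    img[:] = (120, 140, 180)                 # a skin-like BGR patch
    print(skin_mask_ycbcr(img).mean())       # close to 255: the patch is classified as skin
```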

A Technique to Select Key-Frame for Identifying Harmful Video Images (동영상의 유해성 판별을 위한 대표 프레임 선정 기법)

  • Kim, Seong-Gyun; Park, Myeong-Chul; Ha, Seok-Wun
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.10 / pp.1822-1828 / 2006
  • Key-frames must be selected efficiently to identify harmful content in videos. Previous key-frame selection techniques are mostly centered on scene transitions, so for harmful videos characterized by continuous change they reduce overall performance by selecting unnecessary key-frames. This paper suggests a technique that selects key-frames, the input of the identification system, by using the amount of change between frames. In experiments, over 90% of the harmful content was identified from the selected key-frames, and the number of frames was reduced by 68% compared with using all I-frames, demonstrating the time efficiency of the technique. The technique therefore makes harmful-content identification more efficient and can contribute to the distribution of wholesome video content.
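
The key-frame criterion itself is not detailed in the abstract. The following is a minimal sketch of change-driven key-frame selection, assuming the inter-frame change is measured as a grey-level histogram difference; the measure, the 0.4 threshold, and the file path are illustrative assumptions.

```python
# Sketch: key-frame selection driven by inter-frame change (assumed change measure).
import cv2
import numpy as np

def frame_change(prev_gray, curr_gray, bins=64):
    """L1 distance between normalised grey-level histograms of two frames."""
    h1 = cv2.calcHist([prev_gray], [0], None, [bins], [0, 256]).ravel()
    h2 = cv2.calcHist([curr_gray], [0], None, [bins], [0, 256]).ravel()
    h1 /= h1.sum() + 1e-9
    h2 /= h2.sum() + 1e-9
    return float(np.abs(h1 - h2).sum())

def select_key_frames(video_path, threshold=0.4):
    """Keep only frames whose change from the previously kept frame exceeds the threshold."""
    cap = cv2.VideoCapture(video_path)
    key_frames, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is None or frame_change(prev, gray) > threshold:
            key_frames.append((idx, frame))
            prev = gray
        idx += 1
    cap.release()
    return key_frames

# Usage (hypothetical path): key_frames = select_key_frames("movie.mp4")
```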

Development of Workplace Risk Assessment System Based on AI Video Analysis

  • Park, Jeong-In
    • Journal of the Korea Society of Computer and Information / v.29 no.1 / pp.151-161 / 2024
  • In this paper, we develop a 'Danger Map' of a workplace that identifies risk and harmful factors by analyzing video of each process within a manufacturing plant using artificial intelligence (AI). We propose a system that automatically derives risk and safety levels from the frequency and intensity obtained from this Danger Map in accordance with actual field conditions, and that can be applied to similar manufacturing industries. In contrast to the traditional approach of manually evaluating workplace risk in Excel, the system automatically calculates and evaluates the risk level for each risk and harmful factor acquired from the video and computes a safety level, so that the company can take appropriate countermeasures. To automate the safety calculation and evaluation, Heinrich's law was used as the model, and a 5×4-point evaluation scale was applied to risky behavior patterns. To demonstrate the system, we applied it to a casting factory, where it saved the monthly time and labor of two people previously required to calculate safety.
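
Below is a minimal sketch of the 5×4 frequency-intensity scoring mentioned above. The band boundaries that map a score to a risk level are illustrative assumptions; in the paper, frequency and intensity are derived from the AI-analysed Danger Map rather than entered by hand.

```python
# Sketch: 5x4 frequency-intensity risk scoring (level bands are assumed, not the paper's).
def risk_score(frequency: int, intensity: int) -> int:
    """Risk = frequency (1-5) x intensity (1-4), giving scores from 1 to 20."""
    if not (1 <= frequency <= 5 and 1 <= intensity <= 4):
        raise ValueError("frequency must be 1-5 and intensity 1-4")
    return frequency * intensity

def risk_level(score: int) -> str:
    """Map a 1-20 score to a coarse level (assumed banding)."""
    if score >= 15:
        return "very high"
    if score >= 9:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

if __name__ == "__main__":
    for freq, inten in [(5, 4), (3, 2), (1, 1)]:
        s = risk_score(freq, inten)
        print(freq, inten, s, risk_level(s))
```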

Toward Occlusion-Free Depth Estimation for Video Production

  • Park, Jong-Il; Inoue, Seiki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1997.06a / pp.131-136 / 1997
  • We present a method to estimate a dense and sharp depth map using multiple cameras for application to flexible video production. A key issue in obtaining a sharp depth map is how to overcome the harmful influence of occlusion. We therefore first propose to selectively use the depth information from the multiple cameras. With a simple sort-and-discard technique, the occlusion problem is resolved considerably at a slight sacrifice of noise tolerance. However, boundary overreach from more textured areas into less textured areas at object boundaries still remains. We observed that the amount of boundary overreach is less than half the size of the matching window and that, unlike usual stereo matching, the boundary overreach with the proposed occlusion-overcoming method shows a very abrupt transition. Based on these observations, we propose a hierarchical estimation scheme that attempts to reduce boundary overreach so that edges of the depth map coincide with object boundaries on the one hand, and to reduce noisy estimates due to an insufficient matching-window size on the other. We show that the hierarchical method can produce a sharp depth map for a variety of images.
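
The following is a minimal sketch of the sort-and-discard idea described above: for each depth hypothesis, per-camera matching errors are sorted and the largest ones (assumed to come from occluded views) are dropped before aggregation. The array shapes, the discard count, and the winner-take-all step are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: occlusion handling by sorting per-camera matching errors and discarding the worst.
import numpy as np

def aggregate_costs(per_camera_costs: np.ndarray, keep: int) -> np.ndarray:
    """per_camera_costs: (n_cameras, n_depths, H, W) matching errors.
    Returns (n_depths, H, W) costs using only the `keep` smallest errors
    per pixel and depth, discarding the rest as likely occlusions."""
    sorted_costs = np.sort(per_camera_costs, axis=0)      # smallest error first
    return sorted_costs[:keep].mean(axis=0)

def winner_take_all(aggregated: np.ndarray) -> np.ndarray:
    """Pick the depth index with the minimum aggregated cost at each pixel."""
    return aggregated.argmin(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    costs = rng.random((6, 32, 40, 50))    # 6 cameras, 32 depth hypotheses, 40x50 image
    depth_map = winner_take_all(aggregate_costs(costs, keep=3))
    print(depth_map.shape, depth_map.min(), depth_map.max())
```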


Automatic identification and analysis of multi-object cattle rumination based on computer vision

  • Wang, Yueming; Chen, Tiantian; Li, Baoshan; Li, Qi
    • Journal of Animal Science and Technology / v.65 no.3 / pp.519-534 / 2023
  • Rumination in cattle is closely related to their health, which makes the automatic monitoring of rumination an important part of smart pasture operations. However, manual monitoring of cattle rumination is laborious, and wearable sensors are often harmful to the animals. Thus, we propose a computer vision-based method to automatically identify multi-object cattle rumination and to calculate the rumination time and number of chews for each cow. The heads of the cattle in the video were initially tracked with a multi-object tracking algorithm, which combined the You Only Look Once (YOLO) algorithm with the kernelized correlation filter (KCF). Images of the head of each cow were saved at a fixed size and numbered. Then, a rumination recognition algorithm was constructed with parameters obtained using the frame difference method, and the rumination time and number of chews were calculated. The rumination recognition algorithm was used to analyze the head image of each cow to automatically detect multi-object cattle rumination. To verify the feasibility of this method, the algorithm was tested on multi-object cattle rumination videos, and the results were compared with those produced by human observation. The experimental results showed that the average error in rumination time was 5.902% and the average error in the number of chews was 8.126%. Rumination identification and the calculation of rumination information are performed automatically by computer with no manual intervention, providing a new contactless rumination identification method for multiple cattle and technical support for smart pasture operations.
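
Below is a minimal sketch of the frame-difference stage described above, applied to a sequence of fixed-size head crops for one cow: chewing produces a periodic motion signal whose peaks can be counted. The peak-detection parameters are illustrative assumptions, and the YOLO + KCF detection/tracking stage is omitted.

```python
# Sketch: chew counting from the inter-frame difference of head crops (assumed parameters).
import numpy as np
from scipy.signal import find_peaks

def motion_signal(head_crops: np.ndarray) -> np.ndarray:
    """head_crops: (n_frames, H, W) grey-scale crops of one cow's head.
    Returns the mean absolute inter-frame difference for each frame pair."""
    diffs = np.abs(np.diff(head_crops.astype(np.float32), axis=0))
    return diffs.mean(axis=(1, 2))

def count_chews(head_crops: np.ndarray, fps: float, min_gap_s: float = 0.4):
    """Count motion peaks as chews and report the spanned time in seconds."""
    signal = motion_signal(head_crops)
    peaks, _ = find_peaks(signal, height=signal.mean(),
                          distance=max(1, int(min_gap_s * fps)))
    return len(peaks), len(head_crops) / fps

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.arange(300)                        # 300 frames at 25 fps = 12 s
    chew = (np.sin(2 * np.pi * t / 20) > 0)   # synthetic chewing rhythm
    crops = rng.integers(0, 50, (300, 64, 64)).astype(np.uint8) + chew[:, None, None] * 60
    print(count_chews(crops, fps=25.0))
```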