• Title/Summary/Keyword: Object technology

Search results: 3,903

Fundamental Research for Video-Integrated Collision Prediction and Fall Detection System to Support Navigation Safety of Vessels

  • Kim, Bae-Sung; Woo, Yun-Tae; Yu, Yung-Ho; Hwang, Hun-Gyu
    • Journal of Ocean Engineering and Technology / v.35 no.1 / pp.91-97 / 2021
  • Marine accidents caused by ships have brought about economic and social losses as well as human casualties. Most of these accidents involve small- and medium-sized ships, owing to their poor conditions and insufficient equipment compared with larger vessels, and measures to improve these conditions are urgently needed. This paper discusses a video-integrated collision prediction and fall detection system to support the safe navigation of small- and medium-sized ships. The system predicts ship collisions and detects falls by crew members using CCTV, displays the analyzed information integrated with automatic identification system (AIS) messages, and provides alerts for the identified risks. The design consists of an object recognition algorithm, an interface module, an integrated display module, a collision prediction and fall detection module, and an alarm management module. As basic research, we implemented a deep learning algorithm to recognize ships and crew from images, and an interface module to manage AIS messages. To verify the implemented algorithm, we conducted tests using 120 images. Object recognition performance is calculated as mAP by comparing pre-defined objects with the objects recognized by the algorithm. As a result, the object recognition performance for ships and crew was approximately 50.44 mAP and 46.76 mAP, respectively. The interface module showed that messages from the installed AIS were accurately converted according to the international standard. Therefore, we implemented the object recognition algorithm and interface module of the designed collision prediction and fall detection system and validated their usability through testing.
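
The mAP-style evaluation described in this abstract can be sketched as follows. This is a minimal single-IoU-threshold sketch; the box format, function names, and the simple rank-averaged AP are assumptions for illustration, not the paper's actual evaluation code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, ground_truth, iou_thr=0.5):
    """AP for one class. detections: list of (score, box); each
    ground-truth box may be matched by at most one detection."""
    detections = sorted(detections, key=lambda d: -d[0])
    matched = set()
    tp, fp = [], []
    for score, box in detections:
        best, best_i = 0.0, -1
        for i, gt in enumerate(ground_truth):
            if i in matched:
                continue
            o = iou(box, gt)
            if o > best:
                best, best_i = o, i
        if best >= iou_thr:
            matched.add(best_i)
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    # Sum precision at every rank where a true positive occurs,
    # then normalize by the number of ground-truth boxes.
    ap, tp_cum, fp_cum = 0.0, 0, 0
    for t, f in zip(tp, fp):
        tp_cum += t; fp_cum += f
        if t:
            ap += tp_cum / (tp_cum + fp_cum)
    return ap / max(len(ground_truth), 1)
```

Averaging this AP over the classes (here, ship and crew) would give the mAP figures the abstract reports.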

Object detection within the region of interest based on gaze estimation (응시점 추정 기반 관심 영역 내 객체 탐지)

  • Seok-Ho Han; Hoon-Seok Jang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.3 / pp.117-122 / 2023
  • Gaze estimation, which automatically recognizes where a user is currently staring, combined with object detection based on the estimated gaze point, can be a more accurate and efficient way to understand human visual behavior. In this paper, we propose a method to detect objects within a region of interest around the gaze point. Specifically, after estimating the 3D gaze point, a region of interest based on the estimated gaze point is created so that object detection occurs only within that region. In our experiments, we compared the performance of general object detection with the proposed region-of-interest-based object detection and found that the processing time per frame was 1.4 ms and 1.1 ms, respectively, indicating that the proposed method is faster in terms of processing speed.
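
The construction of a region of interest around an estimated gaze point can be illustrated with a small sketch. The square ROI, the clamping behavior at frame borders, and all names here are assumptions for illustration, not the authors' implementation:

```python
def roi_around_gaze(gaze_xy, frame_shape, roi_size):
    """Return a square ROI (x1, y1, x2, y2) centred on the gaze point,
    clamped so it stays fully inside the frame."""
    gx, gy = gaze_xy
    h, w = frame_shape  # frame height and width in pixels
    half = roi_size // 2
    x1 = max(0, min(gx - half, w - roi_size))
    y1 = max(0, min(gy - half, h - roi_size))
    return x1, y1, x1 + roi_size, y1 + roi_size

def to_frame_coords(box_in_roi, roi):
    """Shift a box detected inside the ROI back into frame coordinates."""
    x1, y1, _, _ = roi
    bx1, by1, bx2, by2 = box_in_roi
    return bx1 + x1, by1 + y1, bx2 + x1, by2 + y1
```

Running the detector only on the ROI crop, then mapping boxes back with `to_frame_coords`, is one plausible way the per-frame speedup reported above could arise.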

Proposal and Implementation of Intelligent Omni-directional Video Analysis System (지능형 전방위 영상 분석 시스템 제안 및 구현)

  • Jeon, So-Yeon; Heo, Jun-Hak; Park, Goo-Man
    • Journal of Broadcast Engineering / v.22 no.6 / pp.850-853 / 2017
  • In this paper, we propose an image analysis system based on omnidirectional images and object-tracking display using super-wide-angle cameras. To generate spherical images, a projection process converting two wide-angle images into an equirectangular panoramic image was performed, and the spherical image was expressed by converting from rectangular to spherical coordinates. Object tracking was performed by initially selecting the desired object, and the KCF (Kernelized Correlation Filter) algorithm was used so that robust tracking is possible even when the object's shape changes. In the initial dialog, the file and mode are selected, and the result is then displayed in a new dialog. If the object tracking mode is selected, the ROI is set by dragging the desired area in the new window.
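
The mapping from equirectangular pixel coordinates to unit-sphere directions that underlies the spherical display can be sketched as follows. The axis conventions and names are assumptions for illustration, not the authors' code:

```python
import math

def equirect_to_sphere(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit-sphere direction.
    Longitude spans [-pi, pi] across the width, latitude [-pi/2, pi/2]
    down the height."""
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z
```

Evaluating this per pixel (or its inverse per sphere vertex) is the standard way an equirectangular panorama is wrapped onto a sphere for viewing.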

Salient Object Detection via Adaptive Region Merging

  • Zhou, Jingbo; Zhai, Jiyou; Ren, Yongfeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.9 / pp.4386-4404 / 2016
  • Most existing salient object detection algorithms employ segmentation techniques to eliminate background noise and reduce computation by treating each segment as a processing unit. However, individual small segments provide little information about global contents, so such schemes have limited capability to model global perceptual phenomena. In this paper, a novel salient object detection algorithm based on region merging is proposed. An adaptive merging scheme is developed to reassemble regions based on their color dissimilarities. The merging strategy can be described as follows: a region R is merged with its adjacent region Q if R has the lowest dissimilarity with Q among all Q's adjacent regions. To guide the merging process, superpixels located at the boundary of the image are treated as seeds. However, it is possible for an image boundary to be occupied by the foreground object; to avoid this case, we optimize the boundary influences by locating and eliminating erroneous boundaries before region merging. We show that even though only three simple region saliency measurements are adopted for each region, encouraging performance can be obtained. Experiments on four benchmark datasets, including MSRA-B, SOD, SED and iCoSeg, show that the proposed method yields uniform object enhancement and achieves state-of-the-art performance compared with nine existing methods.
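
The core merge decision, picking the neighbouring region with the lowest color dissimilarity, can be illustrated with a small sketch. The mean-color region representation, the Euclidean dissimilarity, and the names are assumptions for illustration, not the paper's exact formulation:

```python
def colour_dissimilarity(c1, c2):
    """Euclidean distance between two mean colours (e.g. RGB triples)."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def best_merge_candidate(q, mean_colour, adjacency):
    """Among region q's neighbours, pick the one whose mean colour is
    closest to q's, i.e. the region q would absorb next."""
    return min(adjacency[q],
               key=lambda r: colour_dissimilarity(mean_colour[q], mean_colour[r]))
```

In a full implementation, boundary superpixels would serve as seeds `q`, and merging would repeat (updating mean colors and adjacency) until no neighbour falls below an adaptive dissimilarity threshold.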

Object Identification and Localization for Image Recognition (이미지 인식을 위한 객체 식별 및 지역화)

  • Lee, Yong-Hwan; Park, Je-Ho; Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.11 no.4 / pp.49-55 / 2012
  • This paper proposes an efficient method of object identification and localization for image recognition. The proposed algorithm utilizes correlogram back-projection in the YCbCr chromaticity components to handle the problem of sub-region querying. Utilizing similar spatial color information enables users to detect and locate the primary location and candidate regions accurately, without needing additional information about the number of objects. Compared with existing methods, experimental results show an improvement of 21%, revealing that the color correlogram is markedly more effective than the color histogram for this task. The main contribution of this paper is that a different way of treating color spaces and a histogram measure that incorporates spatial color information are applied to object localization. This approach opens up new opportunities for object detection for use in interactive imaging and 2-D based augmented reality.
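
The color auto-correlogram that distinguishes this approach from a plain histogram can be sketched for a small quantized single-channel image. The Chebyshev-ring neighbourhood and the names are assumptions for illustration; the paper itself works in the YCbCr chromaticity components:

```python
def auto_correlogram(img, levels, d):
    """For each quantised colour c, estimate the probability that a pixel
    at Chebyshev distance d from a colour-c pixel also has colour c.
    img: 2D list of ints in range(levels)."""
    h, w = len(img), len(img[0])
    same = [0] * levels
    total = [0] * levels
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            for dy in range(-d, d + 1):
                for dx in range(-d, d + 1):
                    # Only pixels exactly on the ring at distance d.
                    if max(abs(dy), abs(dx)) != d:
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total[c] += 1
                        if img[ny][nx] == c:
                            same[c] += 1
    return [s / t if t else 0.0 for s, t in zip(same, total)]
```

Unlike a histogram, this statistic captures the spatial coherence of each color, which is what back-projecting it exploits for localization.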

Simulation of Deformable Objects using GLSL 4.3

  • Sung, Nak-Jun; Hong, Min; Lee, Seung-Hyun; Choi, Yoo-Joo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.8 / pp.4120-4132 / 2017
  • In this research, we implement a deformable object simulation system using OpenGL's shading language, GLSL 4.3. The simulation is implemented with a volumetric mass-spring system, which is well suited to real-time simulation among the methods for simulating deformable objects. The compute shader in GLSL 4.3, which provides access to GPU resources, is used to parallelize the operations of existing deformable object simulation systems. The proposed system uses a compute shader for parallel processing and includes a bounding-box-based collision detection solution. In general, collision detection is one of the severe computational bottlenecks in simulating multiple deformable objects. To validate the efficiency of the system, we performed experiments using 3D volumetric objects and compared the performance of multiple deformable object simulations on the CPU and GPU to analyze the effectiveness of parallel processing with GLSL. Moreover, we measured the computation time of the bounding-box-based collision detection to show that it can be processed in real time. Experiments using 3D volumetric models with 10K faces showed that the GPU-based parallel simulation improves performance by 98% over the CPU-based simulation, and the overall pipeline, including collision detection and rendering, runs at a real-time frame rate of 218.11 FPS.
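
A CPU-side sketch of one step of a mass-spring simulation may clarify what the compute shader parallelizes per particle. Semi-implicit Euler integration, uniform particle mass, and the names are assumptions for illustration; the paper's system runs the equivalent per-particle update on the GPU in GLSL:

```python
def step(positions, velocities, springs, dt, k, mass=1.0,
         gravity=(0.0, 0.0, 0.0)):
    """One semi-implicit Euler step of a 3D mass-spring system.
    springs: list of (i, j, rest_length). Mutates positions/velocities."""
    # Start from the external force (gravity) on every particle.
    forces = [[gravity[a] * mass for a in range(3)] for _ in positions]
    for i, j, rest in springs:
        dx = [positions[j][a] - positions[i][a] for a in range(3)]
        length = sum(c * c for c in dx) ** 0.5
        f = k * (length - rest)  # Hooke's law along the spring axis
        for a in range(3):
            forces[i][a] += f * dx[a] / length
            forces[j][a] -= f * dx[a] / length
    for p in range(len(positions)):
        for a in range(3):
            velocities[p][a] += forces[p][a] / mass * dt
            positions[p][a] += velocities[p][a] * dt
    return positions, velocities
```

On the GPU, each compute-shader invocation would handle one particle (or one spring's force accumulation), which is where the reported 98% speedup over the CPU comes from.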

Visual Object Tracking using Surface Fitting for Scale and Rotation Estimation

  • Wang, Yuhao; Ma, Jun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1744-1760 / 2021
  • Since correlation filters appeared in the field of object tracking, they have played an increasingly vital role due to their excellent performance. Although many sophisticated trackers track objects accurately, very few of them attach importance to scale and rotation estimation. To address this limitation, we propose a novel method that combines the Fourier-Mellin transform with a confidence evaluation strategy for robust object tracking. First, we construct a correlation filter to locate the target object precisely. Then, a log-polar technique is used in the Fourier-Mellin transform to cope with rotation and scale changes. To achieve subpixel accuracy, we introduce an efficient surface fitting mechanism to obtain the optimal result. In addition, we introduce a confidence evaluation strategy modeled on the output response, which decreases the impact of image noise and serves as a criterion for evaluating the stability of the target model. Experiments on OTB100 demonstrate that the proposed algorithm achieves superior results in the success and precision plots of OPE, which are 10.8 and 8.6 percentage points higher, respectively, than those of KCF. Besides, our method performs favorably against the others under the SRE and TRE validation schemes, which shows the superiority of the proposed algorithm in scale and rotation estimation.
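
The subpixel refinement of a correlation-response peak via surface fitting can be illustrated with a simple separable parabola fit. The one-parabola-per-axis form and the names are assumptions for illustration; the paper's surface fitting mechanism may differ in detail:

```python
def subpixel_peak(response, py, px):
    """Refine an integer peak (py, px) in a 2D correlation response by
    fitting a parabola through the peak and its two neighbours on each
    axis, and returning the fractional peak location."""
    def refine(minus, centre, plus):
        # Vertex of the parabola through (-1, minus), (0, centre), (1, plus).
        denom = minus - 2.0 * centre + plus
        return 0.0 if denom == 0 else 0.5 * (minus - plus) / denom
    dy = refine(response[py - 1][px], response[py][px], response[py + 1][px])
    dx = refine(response[py][px - 1], response[py][px], response[py][px + 1])
    return py + dy, px + dx
```

Applied to the log-polar correlation response, such a refinement turns integer-bin scale/rotation estimates into subpixel ones.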

Effect of Object Manipulation Ability and Basic Movement Ability on Mathematical Ability of Young Children

  • Park, Ji-Hee
    • International Journal of Advanced Culture Technology / v.10 no.1 / pp.265-273 / 2022
  • The aim of this study is to investigate the relationship between object manipulation ability, basic movement ability, and mathematical ability in young children, and to examine the influence of object manipulation ability and basic movement ability on mathematical ability. The subjects were 80 five-year-old children. As research tools, the basic movement development test scale (covering non-mobile and mobile movement) and the young children's picture-mathematical ability test scale were used. The survey was conducted from October 2018 to January 2019. For data analysis, correlation analysis and hierarchical multiple regression analysis were performed using the SPSS program. The study found a significant positive correlation between young children's non-mobile movement ability, mobile movement ability, object manipulation ability, and mathematical ability. Object manipulation ability and non-mobile movement ability had a positive effect on mathematical ability. Mobile movement ability had both negative and positive effects on mathematical ability, but these were not statistically significant. This study is meaningful in that it investigated the effects on young children's mathematical ability of non-mobile movement, mobile movement, and object manipulation abilities, the sub-abilities of basic movement ability, which had not been investigated so far.

A study of duck detection using deep neural network based on RetinaNet model in smart farming

  • Jeyoung Lee; Hochul Kang
    • Journal of Animal Science and Technology / v.66 no.4 / pp.846-858 / 2024
  • In a duck cage, ducks are found in various states. In particular, if a duck is overturned, falls, or dies, it adversely affects the growing environment, so the cage must be continuously managed to support duck growth. This study proposes a method using an object detection algorithm to improve this process. Object detection refers to performing classification and localization of all objects present in a given input image. To use an object detection algorithm in a duck cage, data for learning must be prepared and augmented to secure enough training examples. In addition, both the time required for object detection and its accuracy are important. The study collected, processed, and augmented image data from the duck cage over a total of two years, 2021 and 2022. Based on the objects to be detected, the collected data were divided at a ratio of 9:1, and training and verification were performed. The final results were visually confirmed using images different from those used for training. The proposed method is expected to minimize the human resources required in the growing process in duck cages and to help turn duck cages into smart farms.
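
The 9:1 train/verification split described above can be sketched as follows. The shuffling, the fixed seed, and the names are assumptions for illustration, not the authors' pipeline:

```python
import random

def split_dataset(items, ratio=0.9, seed=42):
    """Shuffle the items deterministically and split them into
    training and verification sets at the given ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * ratio)
    return items[:cut], items[cut:]
```

A fixed seed keeps the split reproducible across runs, which matters when comparing augmentation strategies on the same data.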

Automatic identification and analysis of multi-object cattle rumination based on computer vision

  • Yueming Wang; Tiantian Chen; Baoshan Li; Qi Li
    • Journal of Animal Science and Technology / v.65 no.3 / pp.519-534 / 2023
  • Rumination in cattle is closely related to their health, which makes automatic monitoring of rumination an important part of smart pasture operations. However, manual monitoring of cattle rumination is laborious, and wearable sensors are often harmful to the animals. Thus, we propose a computer vision-based method to automatically identify multi-object cattle rumination and to calculate the rumination time and number of chews for each cow. The heads of the cattle in the video are initially tracked with a multi-object tracking algorithm that combines the You Only Look Once (YOLO) algorithm with the kernelized correlation filter (KCF). Images of each cow's head are saved at a fixed size and numbered. Then, a rumination recognition algorithm is constructed with parameters obtained using the frame difference method, and rumination time and number of chews are calculated. The rumination recognition algorithm analyzes the head image of each cow to automatically detect multi-object cattle rumination. To verify the feasibility of this method, the algorithm was tested on multi-object cattle rumination videos, and the results were compared with those produced by human observation. The experimental results showed an average error of 5.902% in rumination time and 8.126% in the number of chews. Rumination identification and the calculation of rumination information are performed automatically by computer with no manual intervention. This could provide a new contactless rumination identification method for multiple cattle, offering technical support for smart pastures.
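
The frame-difference chew counting can be illustrated with a small sketch. Treating each head crop as a flattened grayscale frame, using a fixed threshold, and counting rising edges are assumptions for illustration, not the authors' exact parameters:

```python
def motion_signal(frames):
    """Mean absolute pixel difference between consecutive frames.
    frames: list of equally sized flattened grayscale frames."""
    sig = []
    for a, b in zip(frames, frames[1:]):
        diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
        sig.append(diff)
    return sig

def count_chews(signal, threshold):
    """Count chews as rising edges: one chew each time the motion
    signal crosses above the threshold."""
    chews, above = 0, False
    for v in signal:
        if v > threshold and not above:
            chews += 1
        above = v > threshold
    return chews
```

Summing the duration of above-threshold intervals over the tracked head sequence would give the per-cow rumination time the abstract describes.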