• Title/Summary/Keyword: COCO Dataset


Dataset Construction of Taekwondo Beginner AI (태권도 초심자를 위한 AI의 DataSet 구축)

  • Cho, Kyu Cheol;Kim, Ju Yeon
    • Proceedings of the Korean Society of Computer Information Conference / 2022.01a / pp.249-252 / 2022
  • The World Taekwondo federation now has as many member nations as FIFA, and taekwondo continues to spread around the world. Its teaching methods, however, remain unchanged: the master or instructor of a dojang must observe each student's posture by eye, judge it, and correct it. This study was conducted to develop a more varied and engaging way to learn taekwondo as technology advances and changes. In this paper, we photograph a subject model, extract images, label the human joint KeyPoints in each image, and build a COCO-format DataSet from the results. If a machine is then trained on this DataSet, an educational taekwondo AI for beginners can be created. After training, the AI can be applied in actual teaching settings and used directly in the curriculum, and we expect it can also be applied in various other directions, such as developing educational games.

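The COCO-format keypoint DataSet the abstract describes can be illustrated with a minimal sketch. The field names follow the COCO keypoint annotation format; the file name and the three-keypoint subset are hypothetical (the full COCO person category defines 17 keypoints):

```python
import json

# Minimal COCO-keypoints-style structure; file_name and the keypoint
# subset are hypothetical illustrations, not from the paper.
dataset = {
    "images": [{"id": 1, "file_name": "front_kick_001.jpg",
                "width": 1920, "height": 1080}],
    "categories": [{
        "id": 1,
        "name": "person",
        "keypoints": ["nose", "left_shoulder", "right_shoulder"],
        "skeleton": [[2, 3]],  # limb connections, 1-indexed into keypoints
    }],
    "annotations": [{
        "id": 1,
        "image_id": 1,
        "category_id": 1,
        # Flat [x, y, visibility] triplets, one per keypoint (v=2: labeled, visible)
        "keypoints": [960, 200, 2, 880, 320, 2, 1040, 320, 2],
        "num_keypoints": 3,
    }],
}
coco_json = json.dumps(dataset)
```

Each labeled image contributes one entry to `images` and one or more entries to `annotations`; the resulting JSON is what keypoint-estimation trainers consume.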

Estimation of Traffic Volume Using Deep Learning in Stereo CCTV Image (스테레오 CCTV 영상에서 딥러닝을 이용한 교통량 추정)

  • Seo, Hong Deok;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.3 / pp.269-279 / 2020
  • Traffic volume estimation mainly relies on surveying equipment such as automatic vehicle classification, vehicle detection systems, toll collection systems, and personnel surveys through CCTV (Closed Circuit TeleVision), but these require substantial manpower and cost. In this study, we proposed a method of estimating traffic volume using deep learning and stereo CCTV to overcome the limitation of a single CCTV, which cannot detect every vehicle. The COCO (Common Objects in Context) dataset was used to train deep learning models to detect vehicles, and each vehicle was detected in the left and right CCTV images in real time. Then, vehicles that could not be detected in one image were additionally detected using an affine transformation to improve the accuracy of the traffic volume. Experiments were conducted separately for a normal road environment and for foggy weather conditions. In the normal road environment, vehicle detection improved by 6.75% and 5.92% in the left and right images, respectively, compared with a single CCTV image. In the foggy road environment, vehicle detection improved by 10.79% and 12.88% in the left and right images, respectively.
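The affine-transfer step can be sketched as follows: estimate a 2×3 affine matrix from point correspondences between the two views, then map a box detected in only one image into the other. The correspondences and the pure-translation offset below are made up for illustration; the paper does not specify how its matrix is estimated:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares estimate of a 2x3 affine matrix mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding points (N >= 3).
    """
    n = src.shape[0]
    # Design matrix for x' = a*x + b*y + c and y' = d*x + e*y + f
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

def transfer_point(M, pt):
    """Map a point from the left image into the right image."""
    x, y = pt
    return M @ np.array([x, y, 1.0])

# Hypothetical correspondences between the left and right CCTV views
left_pts = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
right_pts = left_pts + np.array([40.0, 5.0])  # pure translation, for illustration

M = estimate_affine(left_pts, right_pts)
center_right = transfer_point(M, (50, 50))  # a box center detected only on the left
```

A box transferred this way gives the second view a search region for a vehicle that its own detector missed.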

Improvement of Mask-RCNN Performance Using Deep-Learning-Based Arbitrary-Scale Super-Resolution Module (딥러닝 기반 임의적 스케일 초해상도 모듈을 이용한 Mask-RCNN 성능 향상)

  • Ahn, Young-Pill;Park, Hyun-Jun
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.3 / pp.381-388 / 2022
  • In instance segmentation, Mask-RCNN is mostly used as a base model. Improving the performance of Mask-RCNN is meaningful because it affects the performance of derived models. Mask-RCNN has a transform module for unifying the size of input images. In this paper, to improve Mask-RCNN, we apply deep-learning-based ASSR (arbitrary-scale super-resolution) to the resizing step in the transform module and inject the computed scale information into the model using an IM (Integration Module). The proposed IM improves instance segmentation performance by 2.5 AP over Mask-RCNN on the COCO dataset, and in the experiment for optimizing the IM location, the best performance was obtained when it was located at the 'Top', before the FPN and backbone are combined. Therefore, the proposed method can improve the performance of models that use Mask-RCNN as a base model.
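The scale information that the transform module computes (and that the IM injects) comes from Mask-RCNN's standard resizing rule; a sketch of that rule, using the default `min_size`/`max_size` values from torchvision's implementation as an assumption, looks like this:

```python
def rcnn_scale(h, w, min_size=800, max_size=1333):
    """Scale factor used by Mask-RCNN-style transforms to unify input size.

    The short side is scaled up to min_size, unless that would push the
    long side past max_size (defaults mirror torchvision's Mask-RCNN).
    """
    scale = min_size / min(h, w)
    if scale * max(h, w) > max_size:
        scale = max_size / max(h, w)
    return scale

# A 600x400 image: short side 400 -> x2.0 reaches 800, long side stays under 1333
s1 = rcnn_scale(600, 400)
# A 400x1000 image: x2.0 would make the long side 2000, so max_size caps it
s2 = rcnn_scale(400, 1000)
```

It is this per-image scale factor that the paper feeds to the IM, rather than discarding it after resizing.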

Dual Attention Based Image Pyramid Network for Object Detection

  • Dong, Xiang;Li, Feng;Bai, Huihui;Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.12 / pp.4439-4455 / 2021
  • Compared with two-stage object detection algorithms, one-stage algorithms provide a better trade-off between real-time performance and accuracy. However, these methods treat intermediate features equally, lacking the flexibility to emphasize information that is meaningful for classification and localization. They also ignore the interaction of contextual information across scales, which is important for detecting medium and small objects. To tackle these problems, we propose an image pyramid network based on a dual attention mechanism (DAIPNet), which builds an image pyramid to enrich spatial information while emphasizing multi-scale informative features for one-stage object detection. Our framework utilizes a pre-trained backbone as the standard detection network, where the designed image pyramid network (IPN) serves as an auxiliary network providing complementary information. Here, the dual attention mechanism is composed of the adaptive feature fusion module (AFFM) and the progressive attention fusion module (PAFM). AFFM automatically attends to feature maps of differing importance from the backbone and auxiliary network, while PAFM adaptively learns channel-attentive information in the context transfer process. Furthermore, in the IPN, we build an image pyramid to extract scale-wise features from downsampled images of different scales, where the features are further fused at different stages to enrich scale-wise information and learn more comprehensive feature representations. Experimental results are reported on the MS COCO dataset. Our proposed detector with a 300 × 300 input achieves a superior performance of 32.6% mAP on MS COCO test-dev compared with state-of-the-art methods.
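The "channel attentive" weighting that PAFM learns can be illustrated with a squeeze-and-excitation-style sketch: global-average-pool each channel, pass through a small bottleneck, and gate the channels with a sigmoid. The weight shapes and reduction ratio are assumptions for illustration, not the paper's exact module:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention over a (C, H, W) map.

    w1: (C//r, C) and w2: (C, C//r) are the two FC-layer weights
    (hypothetical shapes with reduction ratio r).
    """
    squeeze = feat.mean(axis=(1, 2))               # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid -> per-channel weight
    return feat * gate[:, None, None]              # re-weight each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))   # a toy (C=8, H=4, W=4) feature map
w1 = rng.standard_normal((2, 8)) * 0.1  # reduction ratio r=4
w2 = rng.standard_normal((8, 2)) * 0.1
out = channel_attention(feat, w1, w2)
```

Because the gate lies in (0, 1), each channel is attenuated in proportion to its learned importance, which is the mechanism AFFM and PAFM build on.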

Multiple Binarization Quadtree Framework for Optimizing Deep Learning-Based Smoke Synthesis Method

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.47-53 / 2021
  • In this paper, we propose a quadtree-based optimization technique that enables fast super-resolution (SR) computation by efficiently classifying and dividing the physics-based simulation data required to calculate SR. The proposed method reduces the time required for quadtree computation by downscaling the smoke simulation data used as input. By binarizing the smoke density in this process, a quadtree is constructed while mitigating the numerical loss of density caused by downscaling. The training data is the COCO 2017 dataset, and the artificial neural network is a VGG19-based network. To prevent data loss when passing through the convolutional layers, the output of the previous layer is added and learned, similar to the residual method. For smoke, the proposed method achieved a speedup of about 15 to 18 times over the previous approach.
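The binarize-then-subdivide idea can be sketched in a few lines: threshold the density field, then recursively split any cell that mixes smoke and empty space, stopping at uniform cells. The grid size, blob placement, and threshold below are made up for illustration:

```python
import numpy as np

def build_quadtree(mask, x=0, y=0, size=None):
    """Recursively subdivide a binarized density grid; return leaf cells.

    A cell becomes a leaf when it is uniform (all smoke or all empty).
    `mask` is assumed to be a square (2^k x 2^k) boolean array.
    Each leaf is (x, y, size, is_smoke).
    """
    if size is None:
        size = mask.shape[0]
    block = mask[y:y + size, x:x + size]
    if block.all() or not block.any() or size == 1:
        return [(x, y, size, bool(block[0, 0]))]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += build_quadtree(mask, x + dx, y + dy, half)
    return leaves

density = np.zeros((8, 8))
density[2:6, 2:6] = 0.7      # a toy smoke blob
mask = density > 0.5         # binarization step, as in the paper
leaves = build_quadtree(mask)
```

Only the cells flagged as smoke need the expensive SR pass, which is where the classification saves time.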

Real-Time Comprehensive Assistance for Visually Impaired Navigation

  • Amal Al-Shahrani;Amjad Alghamdi;Areej Alqurashi;Raghad Alzahrani;Nuha Imam
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.1-10 / 2024
  • Individuals with visual impairments face numerous challenges in their daily lives, with navigating streets and public spaces being particularly daunting. The inability to identify safe crossing locations and assess the feasibility of crossing significantly restricts their mobility and independence. Globally, an estimated 285 million people suffer from visual impairment, with 39 million categorized as blind and 246 million as visually impaired, according to the World Health Organization. In Saudi Arabia alone, there are approximately 159 thousand blind individuals, as per unofficial statistics. The profound impact of visual impairments on daily activities underscores the urgent need for solutions to improve mobility and enhance safety. This study aims to address this pressing issue by leveraging computer vision and deep learning techniques to enhance object detection capabilities. Two models were trained to detect objects: one focused on street crossing obstacles, and the other aimed to search for objects. The first model was trained on a dataset comprising 5283 images of road obstacles and traffic signals, annotated to create a labeled dataset. Subsequently, it was trained using the YOLOv8 and YOLOv5 models, with YOLOv5 achieving a satisfactory accuracy of 84%. The second model was trained on the COCO dataset using YOLOv5, yielding an impressive accuracy of 94%. By improving object detection capabilities through advanced technology, this research seeks to empower individuals with visual impairments, enhancing their mobility, independence, and overall quality of life.
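Detection accuracies like the 84% and 94% reported above rest on IoU matching: a prediction counts as a true positive when its intersection-over-union with a ground-truth box clears a threshold (0.5 is the conventional choice; the boxes below are hypothetical):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A slightly offset prediction still overlaps its ground truth well
pred = (10, 10, 50, 50)
gt = (12, 12, 48, 52)
hit = iou(pred, gt) >= 0.5   # true positive at the usual 0.5 threshold
```

Precision is then the fraction of predictions that are hits, aggregated over the test set.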

A Study on the System for AI Service Production (인공지능 서비스 운영을 위한 시스템 측면에서의 연구)

  • Hong, Yong-Geun
    • KIPS Transactions on Computer and Communication Systems / v.11 no.10 / pp.323-332 / 2022
  • As various services using AI technology are being developed, much attention is being paid to AI service production. As AI technology has recently come to be recognized as an ICT service, a great deal of research is being conducted on general-purpose AI service production. In this paper, I describe research results from a systems perspective on AI service production, focusing on the distribution and serving of machine learning models, which are the final steps of the general machine learning development procedure. Three different Ubuntu systems were built, and experiments were conducted on them using the COCO 2017 validation dataset with combinations of different AI models (RFCN, SSD-Mobilenet) and different communication methods (gRPC, REST) to request and perform AI services through TensorFlow Serving. Through various experiments, it was found that the type of AI model has a greater influence on AI service inference time than the communication method, and for object detection services, the number and complexity of objects in an image matter more than the file size of the image. In addition, it was confirmed that if the AI service is performed remotely rather than locally, inference takes more time, even on a machine with good performance. Based on these results, system design suited to service goals, AI model development, and efficient AI service production are expected to become possible.
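The REST path of such an experiment can be sketched as follows. TensorFlow Serving exposes prediction over REST at `/v1/models/<name>:predict` (gRPC runs on a separate port); the host, model name, and the toy pixel payload here are assumptions:

```python
import json

# Hypothetical serving host and model name; port 8501 is TensorFlow
# Serving's default REST port.
host = "localhost:8501"
model = "ssd_mobilenet"
url = f"http://{host}/v1/models/{model}:predict"

# A COCO validation image would be encoded into the "instances" list;
# this single-pixel tensor is a placeholder.
payload = json.dumps({"instances": [{"input_tensor": [[[0, 0, 0]]]}]})

# The actual request (requires a running server):
# import urllib.request
# req = urllib.request.Request(url, payload.encode(),
#                              {"Content-Type": "application/json"})
# response = json.load(urllib.request.urlopen(req))
```

Timing this round trip against the equivalent gRPC call, across models and across local versus remote hosts, is essentially the experiment the paper runs.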

A Computer-Aided Diagnosis of Brain Tumors Using a Fine-Tuned YOLO-based Model with Transfer Learning

  • Montalbo, Francis Jesmar P.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4816-4834 / 2020
  • This paper proposes transfer learning and fine-tuning techniques for a deep learning model to detect three distinct brain tumors in Magnetic Resonance Imaging (MRI) scans. In this work, the recent YOLOv4 model was trained using a collection of 3064 T1-weighted Contrast-Enhanced (CE)-MRI scans that were pre-processed and labeled for the task. This work trained the partial 29-layer YOLOv4-Tiny and fine-tuned it to work optimally and run efficiently on most platforms with reliable performance. With the help of transfer learning, the model had initial leverage to train faster with pre-trained weights from the COCO dataset, generating a robust set of features required for brain tumor detection. The results yielded the highest mean average precision of 93.14%, a 90.34% precision, an 88.58% recall, and an 89.45% F1-score, outperforming previous versions of the YOLO detection models and other studies that used bounding-box detection for the same task, such as Faster R-CNN. In conclusion, YOLOv4-Tiny can detect brain tumors automatically and efficiently at a rapid pace with the help of proper fine-tuning and transfer learning. This work mainly contributes to assisting medical experts in the diagnosis of brain tumors.
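The reported F1-score follows directly from the precision and recall figures, since F1 is their harmonic mean; plugging in the abstract's numbers reproduces the 89.45%:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Figures reported in the abstract
f1 = f1_score(0.9034, 0.8858)  # ~0.8945, matching the paper's 89.45%
```

Checking this identity is a quick sanity test on any reported precision/recall/F1 triple.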

Center point prediction using Gaussian elliptic and size component regression using small solution space for object detection

  • Yuantian Xia;Shuhan Lu;Longhe Wang;Lin Li
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.1976-1995 / 2023
  • The anchor-free object detector CenterNet regards an object as a center point and predicts it based on a Gaussian circular region. For each object's center point, CenterNet directly regresses the width and height of the object and finally obtains its boundary. However, the critical range of the object's center point cannot be accurately limited by using the Gaussian circular region to constrain the prediction region, resulting in many low-quality center predictions. In addition, because the widths and heights of different objects differ greatly, directly regressing them makes the model difficult to converge and loses the intrinsic relationship between them, thereby reducing the stability and consistency of accuracy. For these problems, we proposed a center point prediction method based on a Gaussian elliptical region and a size-component regression method based on a small solution space. First, we constructed a Gaussian elliptical region that can accurately predict the object's center point. Second, we recode the width and height of the objects, which significantly reduces the regression solution space and improves the convergence speed of the model. Finally, we jointly decode the predicted components, enhancing the internal relationship between the size components and improving accuracy consistency. Experiments show that when using CenterNet as the baseline and Hourglass-104 as the backbone on the MS COCO dataset, our improved model achieved 44.7% AP, which is 2.6% higher than the baseline.
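The circle-versus-ellipse distinction can be sketched as a heatmap target: CenterNet's circular Gaussian uses one radius, while an elliptical region uses separate spreads along x and y, letting a wide, short box produce a wide, short peak. The sigma values and their tie to box size below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def elliptical_gaussian(shape, center, sigma_x, sigma_y):
    """Heatmap with an elliptical Gaussian peak at `center` (x, y).

    sigma_x == sigma_y reduces to CenterNet's circular target;
    unequal sigmas give the elliptical region.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    return np.exp(-(((xs - cx) ** 2) / (2 * sigma_x ** 2)
                    + ((ys - cy) ** 2) / (2 * sigma_y ** 2)))

# A wide, short object gets a wide, short peak (sigmas chosen for illustration)
heat = elliptical_gaussian((64, 64), center=(40, 20), sigma_x=8.0, sigma_y=3.0)
```

Predicted centers falling inside this tighter vertical band are penalized less, which is how the elliptical target filters out low-quality center candidates for elongated objects.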

Instance segmentation with pyramid integrated context for aerial objects

  • Juan Wang;Liquan Guo;Minghu Wu;Guanhai Chen;Zishan Liu;Yonggang Ye;Zetao Zhang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.701-720 / 2023
  • Aerial objects are more challenging to segment than normal objects, as they are usually smaller and have less textural detail. During segmentation, target objects are easily missed or misdetected, which is problematic. To alleviate these issues, we propose local aggregation feature pyramid networks (LAFPNs) and pyramid integrated context modules (PICMs) for aerial object segmentation. First, using an LAFPN, while strengthening the deep features, the extent to which low-level features interfere with high-level features is reduced, and numerous dense, small aerial targets are prevented from being mistakenly detected as a whole. Second, the PICM uses global information to guide local features, which enhances the network's comprehensive understanding of the entire image and reduces the missed detection of small aerial objects due to insufficient texture information. We evaluate our network on the MS COCO dataset using three categories: airplanes, birds, and kites. Compared with Mask R-CNN, our network achieves performance improvements of 1.7%, 4.9%, and 7.7% in terms of the AP metric for the three categories. Without pretraining or any postprocessing, the segmentation performance of our network on aerial objects is superior to that of several recent methods based on classic algorithms.