• Title/Abstract/Keyword: Image training


영상처리 및 머신러닝 기술을 이용하는 운동 및 식단 보조 애플리케이션 (Application for Workout and Diet Assistant using Image Processing and Machine Learning Skills)

  • 이치호;김동현;최승호;황인웅;한경숙
    • 한국인터넷방송통신학회논문지 / Vol.23 No.5 / pp.83-88 / 2023
  • In this paper, a workout and diet assistant application was developed to meet the growing demand for exercise and diet assistance services that has accompanied the rise of the home-training population. The application analyzes the user's exercise posture, captured in real time through the camera, and guides the user toward correct form using guide lines and voice prompts. When the user takes a photograph, the application also classifies the foods in the photo, estimates the amount of each food, and calculates and presents nutritional information such as calories. The nutritional calculation is performed on an external server: the server sends the computed result to the application, which receives it and displays it visually. In addition, workout results and nutritional information are stored by date and can be reviewed later.
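The server-side nutrition calculation described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the food names, per-100 g calorie table, and function names are all assumptions.

```python
# Hypothetical sketch of the server-side nutrition calculation: given
# classified foods and estimated amounts, look up per-100 g calorie
# values and scale them. All names and values are illustrative.

CALORIES_PER_100G = {  # kcal per 100 g (illustrative values)
    "rice": 130,
    "chicken_breast": 165,
    "salad": 20,
}

def estimate_nutrition(detections):
    """detections: list of (food_name, estimated_grams) pairs.
    Returns (total kcal, per-food breakdown)."""
    breakdown = {}
    for food, grams in detections:
        kcal = CALORIES_PER_100G[food] * grams / 100.0
        breakdown[food] = round(kcal, 1)
    return round(sum(breakdown.values()), 1), breakdown
```

A client would send the classification result to such a server endpoint and render the returned breakdown by date.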

A Method of Extracting Features of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Sanyeon Won
    • 한국컴퓨터정보학회논문지 / Vol.28 No.12 / pp.191-199 / 2023
  • In this paper, we propose a method of extracting the features of five sensor-only facilities, built as infrastructure for autonomous cooperative driving, from point cloud data acquired by LiDAR. A LiDAR sensor was adopted because the image sensors mounted on autonomous vehicles yield inconsistent data due to weather conditions and camera characteristics. In addition, high-intensity reflective sheets, designed for each facility's purpose, were attached to make the facilities easy to distinguish from other existing structures with LiDAR. From the point cloud data acquired for these five facilities with the data acquisition system, feature points were extracted at each measurement distance based on the mean reflection intensity of the attached reflective sheets, clustered with DBSCAN, and then converted to 2D coordinates by projection. The features of a facility at each distance consist of 3D point coordinates, 2D projected coordinates, and reflection intensity, and will be used as training data for a facility recognition model to be developed later.
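The pipeline above (intensity thresholding → DBSCAN clustering → 2D projection) can be sketched as follows. This is a simplified illustration, not the authors' code: the `eps`/`min_pts` values and the projection axis are assumptions, and a minimal O(n²) DBSCAN is inlined so the sketch stays self-contained.

```python
import numpy as np

def select_feature_points(points, intensity, mean_thresh):
    """Keep points whose reflection intensity is at or above the
    mean-reflectivity threshold of the reflective sheet (assumption)."""
    return points[intensity >= mean_thresh]

def dbscan(pts, eps=0.3, min_pts=3):
    """Minimal O(n^2) DBSCAN; returns one cluster label per point (-1 = noise)."""
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    neighbors = [np.where(dist[i] <= eps)[0].tolist() for i in range(n)]
    labels, visited, cluster = [-1] * n, [False] * n, 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        if len(neighbors[i]) < min_pts:
            continue  # not a core point; may remain noise
        labels[i] = cluster
        queue = list(neighbors[i])
        while queue:
            j = queue.pop()
            if not visited[j]:
                visited[j] = True
                if len(neighbors[j]) >= min_pts:
                    queue.extend(neighbors[j])  # expand from core points
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1
    return labels

def project_to_2d(pts):
    """Orthographic projection: drop the range (x) axis, keeping (y, z)."""
    return pts[:, 1:]
```

In practice the clustered points of each facility, together with their projected coordinates and intensities, would form one training sample for the recognition model.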

Gesture Control Gaming for Motoric Post-Stroke Rehabilitation

  • Andi Bese Firdausiah Mansur
    • International Journal of Computer Science & Network Security / Vol.23 No.10 / pp.37-43 / 2023
  • The hospital situation, timing, and patient restrictions have become obstacles to an optimal therapy session. Hospital crowding can lead to tight schedules and shorter therapy periods, leaving post-stroke patients in a dilemma: they need regular treatment to recover their nervous system. In this work, we propose an in-house, uncomplicated serious game system that can be used for physical therapy. A Kinect camera captures the depth image stream of the human skeleton, after which users control the game with hand gestures; voice recognition is deployed to ease play. Users must complete the given challenges to obtain a more significant outcome from the therapy system: subjects use their upper limbs and hands to capture 3D objects presented at different speeds and positions. As the challenge grows more substantial, speed and position are increased and randomized, and each captured object raises the score. The scores are then evaluated for correlation with therapy progress. Users were delighted with the system and eager to use it as their daily exercise. The experimental studies compare score against difficulty, which together characterize the user and the game. Users tend to adapt quickly to the easy and medium levels, while the high level requires better focus and proper hand-eye synchronization to capture the 3D objects. Statistical analysis of the usability test at a significance level of α = 0.05 shows that the proposed game is accessible even without specialized training. It is suitable not only for therapy but also for fitness, since it can be used for general body exercise. The experimental results are very satisfying: most users enjoyed the game and familiarized themselves with it quickly, and the evaluation study demonstrates user satisfaction and positive perception during testing. Future work on the proposed serious game might involve haptic devices to stimulate physical sensation.
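The difficulty-scaled scoring loop described above can be sketched as follows. The point values and multipliers are illustrative assumptions; the paper does not publish its exact scoring rules.

```python
# Illustrative sketch of difficulty-scaled scoring: points for each
# captured 3D object grow with the difficulty level and object speed.
# All constants here are assumptions, not the paper's actual values.

DIFFICULTY_MULTIPLIER = {"easy": 1.0, "medium": 1.5, "hard": 2.0}

def score_capture(base_points, difficulty, object_speed):
    """Points for capturing one object: base points scaled by the
    difficulty level and by how fast the object was moving."""
    return base_points * DIFFICULTY_MULTIPLIER[difficulty] * (1.0 + 0.1 * object_speed)

def session_score(captures):
    """captures: list of (base_points, difficulty, speed) tuples
    accumulated over one therapy session."""
    return sum(score_capture(*c) for c in captures)
```

Session scores accumulated this way could then be plotted against time to track therapy progress, as the abstract suggests.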

안면 백반증 치료 평가를 위한 딥러닝 기반 자동화 분석 시스템 개발 (Development of a Deep Learning-Based Automated Analysis System for Facial Vitiligo Treatment Evaluation)

  • 이세나;허연우;이솔암;박성빈
    • 대한의용생체공학회:의공학회지 / Vol.45 No.2 / pp.95-100 / 2024
  • Vitiligo is a condition characterized by the destruction or dysfunction of melanin-producing cells in the skin, resulting in a loss of skin pigmentation. Facial vitiligo, specifically affecting the face, significantly impacts patients' appearance, thereby diminishing their quality of life. Evaluating the efficacy of facial vitiligo treatment typically relies on subjective assessments, such as the Facial Vitiligo Area Scoring Index (F-VASI), which can be time-consuming and subjective due to its reliance on clinical observations such as lesion shape and distribution. Various machine learning and deep learning methods have been proposed for segmenting vitiligo areas in facial images, with promising results; however, these methods often struggle to accurately segment vitiligo lesions irregularly distributed across the face. Our study therefore introduces a framework that improves the segmentation of facial vitiligo lesions and provides an evaluation of them. The framework consists of three main steps. First, we perform face detection on high-quality ultraviolet photographs to minimize background areas and identify the facial region of interest. Second, we extract facial area masks and vitiligo lesion masks using a semantic segmentation network trained on the generated dataset. Third, we automatically calculate the vitiligo area relative to the facial area. We evaluated facial and vitiligo lesion segmentation on an independent test dataset that was not included in training or validation, with excellent results. The proposed framework can serve as a useful tool for evaluating the diagnosis and treatment efficacy of vitiligo.
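The third step above, relating lesion area to facial area, reduces to simple mask arithmetic. The sketch below assumes binary masks where 1 marks a pixel belonging to the class; it is an illustration of the idea, not the study's code.

```python
import numpy as np

# Minimal sketch of computing the vitiligo area relative to the facial
# area from two binary segmentation masks (1 = pixel belongs to the class).

def vitiligo_area_ratio(face_mask, lesion_mask):
    """Percentage of the detected facial area covered by vitiligo lesions."""
    face_px = np.count_nonzero(face_mask)
    if face_px == 0:
        return 0.0
    # Only count lesion pixels that fall inside the facial region.
    lesion_px = np.count_nonzero(np.logical_and(face_mask, lesion_mask))
    return 100.0 * lesion_px / face_px
```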

YOLO v8을 활용한 컴퓨터 비전 기반 교통사고 탐지 (Computer Vision-Based Car Accident Detection using YOLOv8)

  • 마르와 차차 안드레아;이충권;김양석;노미진;문상일;신재호
    • 한국산업정보학회논문지 / Vol.29 No.1 / pp.91-105 / 2024
  • Car accidents occur when vehicles collide, causing vehicle damage along with human and property losses. In this study, a car accident detection model was developed based on 2,550 image frames extracted from vehicle accident videos captured by CCTV and uploaded to YouTube. For preprocessing, bounding boxes were annotated using roboflow.com, and the dataset was augmented by flipping the images at various angles. For training, the You Only Look Once version 8 (YOLOv8) model was used, achieving an average accuracy of 0.954 in accident detection. The proposed model has the practical significance of facilitating alert transmission in emergencies. It also contributes to research on developing effective and efficient accident detection mechanisms and can be deployed on devices such as smartphones. Future work will refine the detection capability, including the integration of additional data such as sound.
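The flip-based augmentation step described above can be sketched as follows. The exact augmentations applied via roboflow.com are not specified, so the horizontal and vertical flips here are illustrative, and the bounding box is assumed to be in normalized YOLO `(x_center, y_center, w, h)` format.

```python
import numpy as np

# Sketch of flip augmentation for an image plus one YOLO-format box.
# Flipping the image requires mirroring the box centre accordingly.

def hflip(image, box):
    """Horizontally flip an image and its normalized bounding box."""
    x, y, w, h = box
    return image[:, ::-1], (1.0 - x, y, w, h)

def vflip(image, box):
    """Vertically flip an image and its normalized bounding box."""
    x, y, w, h = box
    return image[::-1, :], (x, 1.0 - y, w, h)

def augment(image, box):
    """Return the original sample plus flipped copies."""
    return [(image, box), hflip(image, box), vflip(image, box)]
```

Applied to every annotated frame, this triples the dataset before it is passed to YOLOv8 for training.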

회랑 감시를 위한 딥러닝 알고리즘 학습 및 성능분석 (Deep Learning Algorithm Training and Performance Analysis for Corridor Monitoring)

  • 정우진;홍석민;최원혁
    • 한국항행학회논문지 / Vol.27 No.6 / pp.776-781 / 2023
  • K-UAM is expected to be commercialized after reaching maturity by 2035. Since UAM corridors will be created by vertically separating existing helicopter corridors, corridor usage is expected to increase, so a system for monitoring the corridors is also required. Object detection algorithms have advanced considerably in recent years; they are divided into one-stage and two-stage detection models. Two-stage models are too slow to be suitable for real-time object detection. One-stage models previously had accuracy problems, but their performance has improved through version upgrades. Among the one-stage models, YOLO-V5 improves small-object detection performance through the mosaic technique, so it is judged to be the most suitable for real-time monitoring of wide corridors. In this paper, the YOLO-V5 algorithm is trained and its suitability for a corridor monitoring system is ultimately analyzed.
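The mosaic technique mentioned above stitches four training images into one composite, exposing the detector to objects at smaller scales. The sketch below illustrates the core idea for grayscale arrays; the fixed tile size and nearest-neighbour resize are simplifying assumptions (YOLO-V5 additionally jitters the mosaic centre and remaps labels).

```python
import numpy as np

def resize_nearest(img, h, w):
    """Nearest-neighbour resize of a 2D array to (h, w)."""
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def mosaic(imgs, tile=32):
    """Stitch four HxW arrays into one (2*tile)x(2*tile) 2x2 mosaic,
    shrinking each source image (and hence its objects) by roughly half."""
    assert len(imgs) == 4
    tiles = [resize_nearest(im, tile, tile) for im in imgs]
    top = np.hstack(tiles[:2])
    bottom = np.hstack(tiles[2:])
    return np.vstack([top, bottom])
```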

A Novel, Deep Learning-Based, Automatic Photometric Analysis Software for Breast Aesthetic Scoring

  • Joseph Kyu-hyung Park;Seungchul Baek;Chan Yeong Heo;Jae Hoon Jeong;Yujin Myung
    • Archives of Plastic Surgery / Vol.51 No.1 / pp.30-35 / 2024
  • Background Breast aesthetics evaluation often relies on subjective assessments, leading to the need for objective, automated tools. We developed the Seoul Breast Esthetic Scoring Tool (S-BEST), a photometric analysis software that utilizes a DenseNet-264 deep learning model to automatically evaluate breast landmarks and asymmetry indices. Methods S-BEST was trained on a dataset of frontal breast photographs annotated with 30 specific landmarks, divided into an 80-20 training-validation split. The software requires the sternal-notch-to-nipple or nipple-to-nipple distance as input and performs image preprocessing steps, including ratio correction and 8-bit normalization. Breast asymmetry indices and centimeter-based measurements are provided as the output. The accuracy of S-BEST was validated using a paired t-test and Bland-Altman plots, comparing its measurements to those obtained from physical examinations of 100 females diagnosed with breast cancer. Results S-BEST demonstrated high accuracy in automatic landmark localization, with most distances showing no statistically significant difference from physical measurements. However, the nipple-to-inframammary-fold distance showed a significant bias, with coefficients of determination of 0.3787 and 0.4234 for the left and right sides, respectively. Conclusion S-BEST provides a fast, reliable, and automated approach for breast aesthetic evaluation based on 2D frontal photographs. While limited by its inability to capture volumetric attributes or multiple viewpoints, it serves as an accessible tool for both clinical and research applications.
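The measurement pipeline above rests on two simple computations: converting pixel distances to centimeters via the user-supplied reference distance, and deriving an asymmetry index from paired left/right measurements. The sketch below is hypothetical; the abstract does not publish S-BEST's exact formulas, so the relative-difference definition used here is an assumption.

```python
# Hypothetical sketch of S-BEST-style measurement arithmetic.
# The asymmetry-index formula is a common convention, assumed here.

def px_to_cm(pixel_len, ref_pixel_len, ref_cm):
    """Convert a pixel distance to centimeters using a known reference
    distance (e.g. the measured sternal-notch-to-nipple length)."""
    return pixel_len * ref_cm / ref_pixel_len

def asymmetry_index(left_cm, right_cm):
    """Relative left-right difference as a percentage; 0 = symmetric."""
    return 100.0 * abs(left_cm - right_cm) / max(left_cm, right_cm)
```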

A Comparative Study of Deep Learning Techniques for Alzheimer's disease Detection in Medical Radiography

  • Amal Alshahrani;Jenan Mustafa;Manar Almatrafi;Layan Albaqami;Raneem Aljabri;Shahad Almuntashri
    • International Journal of Computer Science & Network Security / Vol.24 No.5 / pp.53-63 / 2024
  • Alzheimer's disease is a brain disorder that worsens over time and affects millions of people around the world. It leads to a gradual deterioration in memory, thinking ability, and behavioral and social skills until the person loses the ability to adapt to society. Technological progress in medical imaging and the use of artificial intelligence have made it possible to detect Alzheimer's disease from medical images such as magnetic resonance imaging (MRI). Deep learning algorithms, especially convolutional neural networks (CNNs), have shown great success in analyzing medical images for disease diagnosis and classification: CNNs can recognize patterns and objects in images, which makes them ideally suited for this study. In this paper, we compare the performance of Alzheimer's disease detection using two deep learning methods: You Only Look Once (YOLO), a CNN-based object recognition algorithm, and Visual Geometry Group (VGG16), a deep convolutional neural network primarily used for image classification. We compare our results using these modern models instead of a plain CNN as in previous research. The results showed different levels of accuracy for the various YOLO versions and the VGG16 model. YOLO v5 reached 56.4% accuracy at 50 epochs and 61.5% accuracy at 100 epochs. YOLO v8, used for classification, reached an overall accuracy of 84% at 100 epochs. YOLO v9, used for object detection, reached an overall accuracy of 84.6%. The VGG16 model reached 99% accuracy in training after 25 epochs but only 78% accuracy in testing. Hence, the best model overall is YOLO v9, with the highest overall accuracy of 86.1%.

Effects of Backward Walking Training with a Weighted Bag Carried on the Front on Craniocervical Alignment and Gait Parameters in Young Adults with Forward Head Posture: A case series

  • Byoung-Ha Hwang;Han-Kyu Park
    • 대한통합의학회지 / Vol.12 No.3 / pp.83-91 / 2024
  • Purpose: This case study aimed to investigate the effects of backward walking exercises with a front-loaded bag on craniovertebral angle (CVA), craniorotational angle (CRA), and gait variables in subjects with forward head posture (FHP). Methods: Two individuals in their twenties with FHP performed backward walking exercises on a treadmill while carrying a front-loaded bag weighing 20% of their body weight, for 30 minutes per day, three times a week, over two weeks. CVA and CRA were measured before and after the intervention using side-view photographs taken from 1.5 meters away, with markers placed on C7, the tragus of the ear, and the outer canthus of the eye; Image J software was used for angle analysis, with each measurement taken three times and averaged. Gait variables such as step length and cadence were recorded at baseline and after the two-week intervention using a step-analysis treadmill and its bundled software. Results: Both participants demonstrated notable improvements in the CVA, indicating enhanced head alignment relative to the cervical spine, and a marked decrease in the CRA, suggesting reduced rotational misalignment. Differences were observed in gait variables such as step length and cadence, but these changes were not consistent across measurements. The results suggest that backward walking exercises with a load carried in front can positively influence postural adjustment by aligning the cervical spine in individuals with FHP. Conclusion: The findings of this case study indicate that backward walking exercises with a front-loaded bag can effectively improve cervical spine alignment in individuals with FHP. Future studies should explore these effects more comprehensively and consider optimizing the exercise protocol for better therapeutic outcomes.
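The angle measurement above can be sketched numerically. The CVA is conventionally the angle between the horizontal line through C7 and the line from C7 to the tragus; that standard definition, and the assumption that coordinates are (x, y) pixels with y increasing upward, are this sketch's assumptions rather than details stated in the abstract.

```python
import math

# Sketch of an Image J-style craniovertebral angle (CVA) computation
# from two marked landmarks, plus the three-measurement averaging
# described in the protocol above.

def craniovertebral_angle(c7, tragus):
    """Angle (degrees) between the horizontal through C7 and the
    C7-to-tragus line; larger values indicate less forward head posture."""
    dx = tragus[0] - c7[0]
    dy = tragus[1] - c7[1]
    return math.degrees(math.atan2(dy, dx))

def mean_of_three(angles):
    """The protocol averages three repeated measurements."""
    return sum(angles) / 3.0
```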

Real-time semantic segmentation of gastric intestinal metaplasia using a deep learning approach

  • Vitchaya Siripoppohn;Rapat Pittayanon;Kasenee Tiankanon;Natee Faknak;Anapat Sanpavat;Naruemon Klaikaew;Peerapon Vateekul;Rungsun Rerknimitr
    • Clinical Endoscopy / Vol.55 No.3 / pp.390-400 / 2022
  • Background/Aims: Previous artificial intelligence (AI) models attempting to segment gastric intestinal metaplasia (GIM) areas have failed to be deployed in real-time endoscopy due to their slow inference speeds. Here, we propose a new GIM segmentation AI model with an inference speed faster than 25 frames per second that maintains a high level of accuracy. Methods: Investigators from Chulalongkorn University obtained 802 histologically proven GIM images for AI model training. Four strategies were used to improve model accuracy. First, transfer learning from public colon datasets was employed. Second, the image preprocessing technique contrast-limited adaptive histogram equalization (CLAHE) was applied to produce clearer GIM areas. Third, data augmentation was applied for a more robust model. Lastly, the bilateral segmentation network model was applied to segment GIM areas in real time. The results were analyzed using different validity values. Results: On the internal test set, our AI model achieved an inference speed of 31.53 frames per second. For GIM detection, the sensitivity, specificity, positive predictive value, negative predictive value, accuracy, and mean intersection over union in GIM segmentation were 93%, 80%, 82%, 92%, 87%, and 57%, respectively. Conclusions: The bilateral segmentation network combined with transfer learning, contrast-limited adaptive histogram equalization, and data augmentation can provide high sensitivity and good accuracy for GIM detection and segmentation.
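The validity values reported above are standard pixel-level metrics that can be computed from a predicted binary mask against a ground-truth mask, as sketched below; this is a generic illustration of the metric definitions, not the study's evaluation code.

```python
import numpy as np

# Sketch of the pixel-level validity values (sensitivity, specificity,
# accuracy, and intersection over union) for a binary segmentation mask.

def segmentation_metrics(pred, truth):
    """pred, truth: binary arrays of the same shape (1 = GIM pixel)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)    # correctly flagged GIM pixels
    tn = np.count_nonzero(~pred & ~truth)  # correctly flagged background
    fp = np.count_nonzero(pred & ~truth)
    fn = np.count_nonzero(~pred & truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "iou": tp / (tp + fp + fn),
    }
```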