• Title/Summary/Keyword: Computer vision technology


Development of Mobile Type Computer Vision System and Lean Tissue Extraction Algorithm for Beef Quality Grading (쇠고기 등급판정을 위한 이동형 컴퓨터시각 장치 및 살코기 추출 알고리즘 개발)

  • Choi S.;Huan Le Ngoc;Hwang H.
    • Journal of Biosystems Engineering
    • /
    • v.30 no.6 s.113
    • /
    • pp.340-346
    • /
    • 2005
  • Major quality features of the beef carcass in most countries, including Korea, are the size and marbling state of the lean tissue, the color of the fat and lean tissue, and the thickness of the back fat at the 13th rib. To evaluate beef quality, extracting the loin from the sectional image at the 13th rib is the crucial first step. However, because of the inhomogeneous distribution and fuzzy pattern of fat and lean tissue on the beef cut, it is difficult to automatically extract a proper contour of the lean tissue. In this paper, a prototype mobile beef quality measurement system that can be deployed practically at the beef processing site was developed. The system is composed of a hand-held image acquisition unit and a mobile processing unit with a touch-pad screen. Algorithms to extract the boundary of the lean tissue, and a tool to evaluate marbling status, were developed using color image processing. The boundary extraction algorithm showed successful results for beef cuts with simple and moderate patterns of lean tissue and fat, but had some difficulty eliminating complex patterns of extraneous tissue adhering to the lean tissue. The developed algorithms were implemented on the prototype mobile processing unit.
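The color-based lean/fat separation described above can be sketched roughly as follows. This is a minimal illustration assuming a simple red-dominance threshold followed by largest-connected-component selection; the `red_margin` value and the 4-connectivity choice are invented here, not the authors' calibrated algorithm.

```python
import numpy as np
from collections import deque

def lean_mask(rgb, red_margin=40):
    """Mark pixels as lean tissue when red clearly dominates green and blue.

    rgb: H x W x 3 uint8 array. The threshold is an illustrative assumption.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r - np.maximum(g, b)) > red_margin

def largest_component(mask):
    """Keep only the largest 4-connected True region (BFS flood fill),
    discarding small specks such as adhered extraneous tissue."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    best = np.zeros_like(mask, dtype=bool)
    best_size = 0
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp = []
                q = deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > best_size:
                    best_size = len(comp)
                    best = np.zeros_like(mask, dtype=bool)
                    ys, xs = zip(*comp)
                    best[list(ys), list(xs)] = True
    return best
```

On a real beef-cut image the threshold would need calibration, and the paper's contour extraction is more involved; this only shows the overall shape of such a pipeline.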

Associative Interactive play Contents for Infant Imagination (유아 상상력을 위한 연상 인터렉티브 놀이 콘텐츠)

  • Jang, Eun-Jung;Lim, Chan
    • The Journal of the Convergence on Culture Technology
    • /
    • v.5 no.1
    • /
    • pp.371-376
    • /
    • 2019
  • Creative thinking appears even before it is expressed in language; its existence is revealed through emotion, intuition, imagery, and bodily sensation before the rules of logic or linguistics come into play. This study presents experimental interactive play content for children in which computer vision based on image-processing techniques is applied to Lego. For infants, the main purpose of the content is developing hand muscles and the ability to realize their imagination. Object recognition is implemented with analysis algorithms from the OpenCV library and with image processing built as 'Nodes' in the visual programming environment 'VVVV': a webcam films what the child has built with Lego, the system recognizes it and derives matching results, and the interactive content is completed by the user's participation. The research displays what the children have made with Lego; children can create things themselves and develop creativity. Furthermore, we expect that diverse and individual ways of thinking can be inferred based on more data.

Post-processing Algorithm Based on Edge Information to Improve the Accuracy of Semantic Image Segmentation (의미론적 영상 분할의 정확도 향상을 위한 에지 정보 기반 후처리 방법)

  • Kim, Jung-Hwan;Kim, Seon-Hyeok;Kim, Joo-heui;Choi, Hyung-Il
    • The Journal of the Korea Contents Association
    • /
    • v.21 no.3
    • /
    • pp.23-32
    • /
    • 2021
  • Semantic image segmentation in computer vision classifies an image by dividing it into pixel-level regions. Its performance has been improving rapidly with machine learning methods, and the high potential of per-pixel information is drawing attention. However, from its early days until recently, this technology has suffered from a 'lack of detailed segmentation' problem. Since this problem is caused by enlarging the label map, we expected that the label map could be improved by using the edge map of the original image, which carries detailed edge information. Therefore, in this paper, we propose a post-processing algorithm that keeps the learning-based semantic segmentation but modifies the resulting label map based on the edge map of the original image. Applying the algorithm to an existing method and comparing results before and after, pixel accuracy improved by approximately 1.74% and IoU (Intersection over Union) by approximately 1.35%, and analysis of the results showed that fine segmentation of target boundaries was improved.
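The general idea of constraining label-map changes with an edge map can be illustrated with a toy version. The sketch below is my own edge-aware majority-vote smoothing, not the paper's algorithm: off-edge pixels adopt their neighbourhood's majority label, while pixels on image edges are left untouched, so label boundaries survive only where the image itself has edges. The gradient threshold is arbitrary.

```python
import numpy as np

def edge_map(gray, thresh=30):
    """Binary edge map from a finite-difference gradient magnitude."""
    gy = np.abs(np.diff(gray.astype(int), axis=0, prepend=gray[:1]))
    gx = np.abs(np.diff(gray.astype(int), axis=1, prepend=gray[:, :1]))
    return (gx + gy) > thresh

def refine_labels(labels, edges, iters=1):
    """Edge-aware cleanup of a label map: each pixel NOT lying on an image
    edge takes the majority label of its 3x3 neighbourhood; edge pixels
    keep their label, preserving boundaries aligned with real edges."""
    lab = labels.copy()
    h, w = lab.shape
    for _ in range(iters):
        out = lab.copy()
        for y in range(h):
            for x in range(w):
                if edges[y, x]:
                    continue
                y0, y1 = max(0, y - 1), min(h, y + 2)
                x0, x1 = max(0, x - 1), min(w, x + 2)
                vals, counts = np.unique(lab[y0:y1, x0:x1], return_counts=True)
                out[y, x] = vals[np.argmax(counts)]
        lab = out
    return lab
```

On a synthetic image with one true intensity edge, this removes an isolated mislabeled pixel while keeping the label boundary that coincides with the edge.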

A Study on the Deep Learning-Based Textbook Questionnaires Detection Experiment (딥러닝 기반 교재 문항 검출 실험 연구)

  • Kim, Tae Jong;Han, Tae In;Park, Ji Su
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.513-520
    • /
    • 2021
  • Recently, research on edutech, which combines education and technology in the e-learning field of learning, education, and training, has been active. However, collecting and utilizing data tailored to individual learners, based on learning-activity data that can be gathered automatically from digital devices, is still insufficient. Therefore, this study attempts to detect the questions in textbooks or question papers using artificial intelligence computer vision technology, which plays the same role as human eyes. The proposed textbook and questionnaire item detection model can help collect, store, and analyze offline learning-activity data in connection with intelligent education services, without requiring digital conversion of the textbooks or questionnaires, so that learners can receive personalized learning services even in offline learning.

Fashion Image Searching Website based on Deep Learning Image Classification (딥러닝 기반의 이미지 분류를 이용한 패션 이미지 검색 웹사이트)

  • Lee, Hak-Jae;Lee, Seok-Jun;Choi, Moon-Hyuk;Kim, So-Yeong;Moon, Il-Young
    • Journal of Practical Engineering Education
    • /
    • v.11 no.2
    • /
    • pp.175-180
    • /
    • 2019
  • Existing fashion websites show search results for only one type of clothing per item category, such as tops or bottoms. As the fashion market grows, consumers demand a platform for finding a variety of fashion information. To solve this problem, we devised the idea of linking deep-learning image classification with a website and integrating SNS functions. Users upload their own images to the website, and a deep learning server identifies, classifies, and stores each image's characteristics. Users can then search the stored images in various attribute combinations. In addition, the SNS function enables active communication between users. In this way, we addressed the shortcomings of existing fashion-related sites.
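The "search in various combinations" step can be sketched as a simple inverted index over classifier-assigned attribute tags. This is a hypothetical minimal back-end with made-up tag names; the actual site presumably stores richer features from the deep learning server.

```python
from collections import defaultdict

def build_index(items):
    """Invert a mapping of image id -> attribute tags into
    tag -> set of image ids, for combination queries."""
    index = defaultdict(set)
    for img_id, tags in items.items():
        for t in tags:
            index[t].add(img_id)
    return index

def search(index, tags):
    """Return images matching ALL requested tags (combination search)."""
    sets = [index.get(t, set()) for t in tags]
    return set.intersection(*sets) if sets else set()
```

For example, combining a category tag with a color tag narrows results to images carrying both attributes.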

Vehicle Detection in Aerial Images Based on Hyper Feature Map in Deep Convolutional Network

  • Shen, Jiaquan;Liu, Ningzhong;Sun, Han;Tao, Xiaoli;Li, Qiangyi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.4
    • /
    • pp.1989-2011
    • /
    • 2019
  • Vehicle detection in aerial images is an interesting and challenging research topic. Most traditional vehicle detection methods are based on sliding-window search, but these methods extract object features insufficiently and carry heavy computational costs. Recent studies have shown that convolutional neural networks have made significant progress in computer vision, especially Faster R-CNN. However, that algorithm mainly detects objects in natural scenes and is not suitable for detecting small objects in aerial views. In this paper, an accurate and effective vehicle detection algorithm based on Faster R-CNN is proposed. Our method fuses a hyper feature map network with Eltwise and Concat models, which is more conducive to extracting small-object features. Moreover, our model sets suitable anchor boxes based on object size, which also effectively improves detection performance. We evaluate the detection performance of our method on the Munich dataset and on our own collected dataset, with improvements in accuracy and efficiency over other methods. Our model achieves an 82.2% recall rate and a 90.2% accuracy rate on the Munich dataset, increases of 2.5 and 1.3 percentage points, respectively, over state-of-the-art methods.
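Sizing anchor boxes to the objects, as mentioned above, follows the standard Faster R-CNN recipe of laying a grid of anchors over the feature map. The scales and aspect ratios below are placeholders, not the values the authors tuned to small aerial vehicles.

```python
import numpy as np

def make_anchors(fm_h, fm_w, stride, scales, ratios):
    """Generate (cx, cy, w, h) anchors for every feature-map cell.

    Each cell's centre is mapped back to image coordinates via `stride`;
    each (scale, ratio) pair yields one box of area scale**2 with the
    given width/height ratio. Returns an array of shape
    (fm_h * fm_w * len(scales) * len(ratios), 4).
    """
    anchors = []
    for y in range(fm_h):
        for x in range(fm_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w = s * np.sqrt(r)
                    h = s / np.sqrt(r)
                    anchors.append((cx, cy, w, h))
    return np.array(anchors)
```

For small objects such as vehicles seen from above, the key choice is small scales relative to the stride; the fused hyper feature map then supplies finer features at those locations.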

Recognition of Events by Human Motion for Context-aware Computing (상황인식 컴퓨팅을 위한 사람 움직임 이벤트 인식)

  • Cui, Yao-Huan;Shin, Seong-Yoon;Lee, Chang-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.4
    • /
    • pp.47-57
    • /
    • 2009
  • Event detection and recognition is a recently active and challenging topic in computer vision. This paper describes a new method for recognizing events caused by human motion in video sequences from an office environment. The proposed approach analyzes human motion using Motion History Image (MHI) sequences and is invariant to body shapes, types and colors of clothes, and positions of target objects. The proposed method has two advantages: it is less sensitive to illumination changes than methods using the color information of objects of interest, and it is scale invariant compared with methods using prior knowledge such as the appearance or shape of objects of interest. Combined with edge detection, geometrical characteristics of the human shape in the MHI sequences are used as features. A further advantage is that the event detection framework is easy to extend by inserting descriptions of new events. The proposed method can serve as a core technology for event detection systems based on context-aware computing as well as for surveillance systems based on computer vision techniques.
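The MHI representation underlying the method can be sketched in a few lines. This is a generic Bobick-and-Davis-style update driven by frame differencing, not the authors' exact feature pipeline; the difference threshold is arbitrary.

```python
import numpy as np

def motion_mask(prev_gray, cur_gray, thresh=25):
    """Frame-differencing silhouette used to drive the MHI."""
    return np.abs(cur_gray.astype(int) - prev_gray.astype(int)) > thresh

def update_mhi(mhi, mask, timestamp, duration):
    """Motion History Image update: pixels moving now take the current
    timestamp; pixels whose last motion is older than `duration` are
    cleared. Recent motion thus appears brightest, fading with age."""
    mhi = mhi.copy()
    mhi[mask] = timestamp
    mhi[mhi < timestamp - duration] = 0
    return mhi
```

Geometric features (as in the paper, combined with edge detection) would then be computed on the resulting history image, which encodes where and how recently motion occurred independently of clothing color.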

Real-Time Comprehensive Assistance for Visually Impaired Navigation

  • Amal Al-Shahrani;Amjad Alghamdi;Areej Alqurashi;Raghad Alzahrani;Nuha imam
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.5
    • /
    • pp.1-10
    • /
    • 2024
  • Individuals with visual impairments face numerous challenges in their daily lives, and navigating streets and public spaces is particularly daunting. The inability to identify safe crossing locations and assess the feasibility of crossing significantly restricts their mobility and independence. Globally, an estimated 285 million people suffer from visual impairment, with 39 million categorized as blind and 246 million as visually impaired, according to the World Health Organization. In Saudi Arabia alone, there are approximately 159,000 blind individuals, according to unofficial statistics. The profound impact of visual impairments on daily activities underscores the urgent need for solutions that improve mobility and enhance safety. This study addresses this pressing issue by leveraging computer vision and deep learning techniques to enhance object detection capabilities. Two models were trained: one focused on street-crossing obstacles, and the other on searching for objects. The first model was trained on a labeled dataset of 5,283 annotated images of road obstacles and traffic signals, using both the YOLOv8 and YOLOv5 models, with YOLOv5 achieving a satisfactory accuracy of 84%. The second model was trained on the COCO dataset using YOLOv5, yielding an accuracy of 94%. By improving object detection through advanced technology, this research seeks to empower individuals with visual impairments, enhancing their mobility, independence, and overall quality of life.
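Detection quality in evaluations like these is typically scored by the IoU between predicted and ground-truth boxes. A minimal reference implementation follows; the accuracy figures in the abstract come from the authors' YOLO runs, not from this helper.

```python
def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction is usually counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5, which is how per-class recall and precision figures are derived.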

Analysis of the Current Status of Information Fusion Systems (정보 융합체계 현황 분석)

  • Jo, Dong-Rae;Choe, Jeung-Won;Ju, Jae-U
    • Defense and Technology
    • /
    • no.12 s.274
    • /
    • pp.46-53
    • /
    • 2001
  • To prepare for future warfare, the U.S. Joint Chiefs of Staff's Joint Vision 2020 seeks overall force superiority built on information superiority, through dominant maneuver, precision engagement, focused logistics, and full-dimensional protection; to secure that information superiority, it presents as a goal the construction of an integrated system based on the C4ISR (Command, Control, Communication and Computer, Intelligence, Surveillance, Reconnaissance) concept.

Development of VIN Character Recognition System for Motor (자동차 VIN 문자 인식 시스템 개발)

  • 이용중;이화춘;류재엽
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 2000.10a
    • /
    • pp.68-73
    • /
    • 2000
  • This study embodies the automatic recognition of VIN (Vehicle Identification Number) characters by a computer vision system. The automatic character recognition method consists of thinning processing and recognition of each character. VIN characters and background are classified by counting the sizes of connected pixel regions. Thinning, by Hilditch's algorithm, is applied to segment the connected fundamental strokes. The contour of each VIN character is traced using Freeman's direction-tracing algorithm.
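Freeman's direction coding mentioned above assigns each unit step along a contour one of eight direction codes. The tiny helper below shows just the code assignment, not a full contour tracer; the (y, x) image-coordinate convention with y growing downward is an assumption.

```python
# Freeman 8-direction codes (image coordinates, y grows downward):
# 0:E  1:NE  2:N  3:NW  4:W  5:SW  6:S  7:SE
FREEMAN = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
           (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(points):
    """Freeman chain code of a closed contour given as ordered (y, x)
    points with unit steps between neighbours."""
    codes = []
    for (y0, x0), (y1, x1) in zip(points, points[1:] + points[:1]):
        codes.append(FREEMAN[(y1 - y0, x1 - x0)])
    return codes
```

The resulting code sequence is rotation-sensitive but compact, which is why chain codes are a common intermediate representation for character-contour matching.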
