• Title/Summary/Keyword: industrial computer vision


Reinforced Feature of Dynamic Search Area for the Discriminative Model Prediction Tracker based on Multi-domain Dataset (다중 도메인 데이터 기반 구별적 모델 예측 트레커를 위한 동적 탐색 영역 특징 강화 기법)

  • Lee, Jun Ha;Won, Hong-In;Kim, Byeong Hak
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.6
    • /
    • pp.323-330
    • /
    • 2021
  • Visual object tracking is a challenging area of study in computer vision due to many difficult problems, including fast variation of target shape, occlusion, and arbitrary ground-truth object designation. In this paper, we focus on reinforcing features in the dynamic search area to outperform conventional discriminative model prediction trackers in conditions where accuracy deteriorates because of low feature discrimination. We propose a reinforced input-feature method that acts like a spotlight effect on the dynamic search area during target tracking. The method can improve the performance of deep-learning-based discriminative model prediction trackers, as well as other trackers that infer the center of the target in visual object tracking. The proposed method improves tracking performance over the baseline trackers, achieving a relative gain of 38%, from an F-score of 0.433 to 0.601, in visual object tracking evaluation.
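The spotlight-style reinforcement described above can be illustrated with a minimal NumPy sketch (not the authors' implementation; the Gaussian mask, its width `sigma`, and the feature-map shape are illustrative assumptions): weight the search-area feature map so that features near the previous target estimate are emphasized and distractors elsewhere are attenuated.

```python
import numpy as np

def spotlight_mask(h, w, center, sigma):
    """Gaussian weight map centered on the previous target estimate."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def reinforce_search_area(feature_map, center, sigma=16.0):
    """Emphasize features near the expected target location and
    attenuate distractors elsewhere in the dynamic search area."""
    h, w = feature_map.shape[:2]
    mask = spotlight_mask(h, w, center, sigma)
    return feature_map * mask[..., None]  # broadcast over channels

rng = np.random.default_rng(0)
features = rng.random((64, 64, 256), dtype=np.float32)  # H x W x C search-area features
boosted = reinforce_search_area(features, center=(32, 32))
```

Because the mask peaks at 1.0 over the predicted center, features there pass through unchanged while responses far from the target are suppressed, which is one simple way to raise feature discrimination in the search area.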

Real-time Tooth Region Detection in Intraoral Scanner Images with Deep Learning (딥러닝을 이용한 구강 스캐너 이미지 내 치아 영역 실시간 검출)

  • Na-Yun Park;Ji-Hoon Kim;Tae-Min Kim;Kyeong-Jin Song;Yu-Jin Byun;Min-Ju Kang;Kyungkoo Jun;Jae-Gon Kim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.3
    • /
    • pp.1-6
    • /
    • 2023
  • In the realm of dental prosthesis fabrication, obtaining accurate impressions has historically been a challenging and inefficient process, often hindered by hygiene concerns and patient discomfort. Addressing these limitations, Company D recently introduced a cutting-edge solution by harnessing the potential of intraoral scan images to create 3D dental models. However, the complexity of these scan images, encompassing not only teeth and gums but also the palate, tongue, and other structures, posed a new set of challenges. In response, we propose a sophisticated real-time image segmentation algorithm that selectively extracts pertinent data, specifically focusing on teeth and gums, from oral scan images obtained through Company D's oral scanner for 3D model generation. A key challenge we tackled was the detection of the intricate molar regions, common in dental imaging, which we effectively addressed through intelligent data augmentation for enhanced training. By placing significant emphasis on both accuracy and speed, critical factors for real-time intraoral scanning, our proposed algorithm demonstrated exceptional performance, boasting an impressive accuracy rate of 0.91 and an unrivaled FPS of 92.4. Compared to existing algorithms, our solution exhibited superior outcomes when integrated into Company D's oral scanner. This algorithm is scheduled for deployment and commercialization within Company D's intraoral scanner.

Development of an intelligent edge computing device equipped with on-device AI vision model (온디바이스 AI 비전 모델이 탑재된 지능형 엣지 컴퓨팅 기기 개발)

  • Kang, Namhi
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.5
    • /
    • pp.17-22
    • /
    • 2022
  • In this paper, we design a lightweight embedded device that supports intelligent edge computing and show that the device detects objects in images from a camera in real time. The proposed system can be applied to environments without pre-installed infrastructure, such as intelligent video surveillance for industrial sites or military areas, or video security systems mounted on autonomous vehicles such as drones. On-device AI (artificial intelligence) technology is increasingly required for the widespread application of intelligent vision recognition systems. Offloading computation from an image-acquisition device to a nearby edge device enables faster service with fewer network and system resources than AI services performed in the cloud. In addition, such systems are expected to be safely applicable across industries, since they reduce the attack surface exposed to hacking and minimize the disclosure of sensitive data.

End to End Autonomous Driving System using Out-layer Removal (Out-layer를 제거한 End to End 자율주행 시스템)

  • Seung-Hyeok Jeong;Dong-Ho Yun;Sung-Hun Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.1
    • /
    • pp.65-70
    • /
    • 2023
  • In this paper, we propose an autonomous driving system that uses an end-to-end model to reduce lane departure and misrecognition of traffic lights in vision-sensor-based systems. End-to-end learning can be extended to a variety of environmental conditions. Driving data were collected using a vision-sensor-based model car, and two datasets were constructed: the original data and the data with outliers removed. With camera images as input and speed and steering values as output, an end-to-end model was trained on each dataset, and the reliability of the trained models was verified. The learned end-to-end model was then applied to the model car to predict steering angles from image data. The driving results show that the model trained on outlier-removed data outperforms the model trained on the original data.
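Assuming the paper's "out-layer" removal refers to filtering outlier driving samples, a minimal sketch of that preprocessing step could look like the following (the k-sigma rule, the synthetic data, and all names are illustrative assumptions, not the paper's method):

```python
import numpy as np

def remove_outliers(images, steering, k=2.5):
    """Drop frames whose steering angle lies more than k standard
    deviations from the mean -- a simple outlier filter applied
    before end-to-end training (illustrative only)."""
    steering = np.asarray(steering, dtype=np.float32)
    mu, sigma = steering.mean(), steering.std()
    keep = np.abs(steering - mu) <= k * sigma
    return images[keep], steering[keep]

rng = np.random.default_rng(0)
angles = rng.normal(0.0, 5.0, size=1000)      # plausible steering angles (deg)
angles[::100] = 90.0                          # inject spurious extreme angles
frames = np.zeros((1000, 64, 64, 3), np.uint8)  # stand-in camera frames
frames_f, angles_f = remove_outliers(frames, angles)
```

Training the end-to-end model on `(frames_f, angles_f)` rather than the raw data is the kind of comparison the abstract describes.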

Indoor Surveillance Camera based Human Centric Lighting Control for Smart Building Lighting Management

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Mariappan, Vinayagam;Lee, Min Woo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Advanced Culture Technology
    • /
    • v.8 no.1
    • /
    • pp.207-212
    • /
    • 2020
  • Human centric lighting (HCL) control is a major focus of smart lighting system design, providing energy-efficient, mood-rhythmic lighting in smart buildings. This paper proposes HCL control using indoor surveillance cameras to improve occupants' motivation and well-being in indoor environments such as residential and industrial buildings. In the proposed approach, indoor surveillance camera video streams are analyzed with advanced computer vision techniques to estimate daylight, occupancy, and occupant-specific emotional features, and these human-centric features are transmitted to the smart building light management system. The light management system is connected to Internet of Things (IoT) lighting devices and controls the illumination of the lighting devices relevant to each occupant. An experimental model of the proposed concept was implemented using RGB LED lighting devices connected to an IoT-featured open-source controller, together with a networked video surveillance solution. The results were verified with a custom automatic lighting control daemon application that uses OpenCV-based computer vision methods to estimate the human-centric features and, based on those estimates, automatically control the illumination level and color of the lights. The results obtained from the daemon system were analyzed and used to develop a real-time lighting control strategy.
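As a toy illustration of the control idea only (not the paper's system; the mapping, value ranges, and function name are assumptions), a daylight- and occupancy-aware dimming rule could look like:

```python
def lighting_level(daylight, occupied, max_level=255):
    """Map a camera-estimated daylight fraction (0.0..1.0) and an
    occupancy flag to an LED illumination level: switch off when the
    zone is unoccupied, and dim proportionally as daylight increases."""
    if not occupied:
        return 0
    return int(round(max_level * (1.0 - daylight)))

# Full artificial light at night with an occupant; lights off otherwise.
night = lighting_level(0.0, True)    # 255
noon = lighting_level(1.0, True)     # 0
empty = lighting_level(0.3, False)   # 0
```

A real HCL controller would additionally adjust color temperature from the estimated mood/emotion features, which this sketch omits.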

A Review on Meat Quality Evaluation Methods Based on Non-Destructive Computer Vision and Artificial Intelligence Technologies

  • Shi, Yinyan;Wang, Xiaochan;Borhan, Md Saidul;Young, Jennifer;Newman, David;Berg, Eric;Sun, Xin
    • Food Science of Animal Resources
    • /
    • v.41 no.4
    • /
    • pp.563-588
    • /
    • 2021
  • Increasing meat demand in terms of both quality and quantity, in conjunction with feeding a growing population, has led regulatory agencies to impose stringent guidelines on meat quality and safety. Objective, accurate, and rapid non-destructive detection methods and evaluation techniques based on artificial intelligence have become a research hotspot in recent years and have been widely applied in the meat industry. This review therefore surveys the key technologies of non-destructive detection for meat quality, mainly including ultrasonic technology, machine (computer) vision, near-infrared spectroscopy, hyperspectral imaging, Raman spectroscopy, and the electronic nose/tongue. The technical characteristics and evaluation methods are compared and analyzed; the practical applications of non-destructive detection technologies in meat quality assessment are explored; and current challenges and future research directions are discussed. The literature presented in this review clearly demonstrates that previous research on non-destructive technologies is of great significance in meeting consumers' demand for high-quality meat by promoting automatic, real-time inspection and quality control in meat production. In the near future, with ever-growing application requirements and research developments, the integration of such systems into effective solutions for various meat quality evaluation applications is a clear trend.

Determination and evaluation of dynamic properties for structures using UAV-based video and computer vision system

  • Rithy Prak;Ji Ho Park;Sanggi Jeong;Arum Jang;Min Jae Park;Thomas H.-K. Kang;Young K. Ju
    • Computers and Concrete
    • /
    • v.31 no.5
    • /
    • pp.457-468
    • /
    • 2023
  • Buildings, bridges, and dams are examples of civil infrastructure that play an important role in public life. These structures are prone to structural variations over time as a result of external forces that may disrupt their operation, cause structural integrity issues, and raise safety concerns for occupants. Monitoring the state of a structure, known as structural health monitoring (SHM), is therefore essential. With the emergence of the fourth industrial revolution, next-generation sensors such as wireless sensors, UAVs, and video cameras have recently been utilized to improve the quality and efficiency of building forensics. This study presents a target-based method that estimates the dynamic displacement, and the corresponding dynamic properties, of structures from UAV-based video. A laboratory experiment verified the tracking technique by exciting an SDOF specimen on a shaking table and comparing the results against a laser distance sensor, an accelerometer, and a fixed camera; a field test was then conducted to validate the proposed framework. One target marker is placed on the specimen, and another is attached to the ground as a stationary reference to account for undesired UAV movement. The displacements from the UAV and the stationary camera showed a root mean square (RMS) error of 2.02%, and after post-processing the displacement data with an OMA method, the identified natural frequency and damping ratio showed high accuracy and close agreement. The findings illustrate the capability and reliability of the methodology for evaluating the dynamic properties of structures using a UAV.
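The stationary-reference correction described above can be sketched with a synthetic NumPy example (an illustration under stated assumptions, not the authors' pipeline; the pixel scale, signal frequencies, and function names are invented): subtract the apparent motion of the ground marker from the tracked target marker to cancel UAV ego-motion, then convert pixels to millimetres.

```python
import numpy as np

def correct_uav_motion(target_px, reference_px, scale_mm_per_px):
    """Remove UAV ego-motion by subtracting the apparent displacement
    of the stationary ground marker from the tracked target marker,
    then convert to millimetres and zero the initial offset."""
    corrected = (target_px - reference_px) * scale_mm_per_px
    return corrected - corrected[0]

def rms_error_percent(measured, reference):
    """RMS error as a percentage of the reference's full range."""
    return 100.0 * np.sqrt(np.mean((measured - reference) ** 2)) / np.ptp(reference)

t = np.linspace(0.0, 5.0, 500)
truth = 10.0 * np.sin(2 * np.pi * 1.5 * t)   # true specimen motion (mm)
drift = 2.0 * np.sin(2 * np.pi * 0.2 * t)    # slow UAV drift (mm)
target_px = (truth + drift) / 0.5            # both markers see the drift
reference_px = drift / 0.5                   # scale: 0.5 mm per pixel
displacement = correct_uav_motion(target_px, reference_px, 0.5)
```

Here `displacement` recovers the true specimen motion despite the simulated drift; the corrected signal would then be passed to an OMA method to identify natural frequency and damping ratio.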

A Study on the Construction Equipment Object Extraction Model Based on Computer Vision Technology (컴퓨터 비전 기술 기반 건설장비 객체 추출 모델 적용 분석 연구)

  • Sungwon Kang;Wisung Yoo;Yoonseok Shin
    • Journal of the Society of Disaster Information
    • /
    • v.19 no.4
    • /
    • pp.916-923
    • /
    • 2023
  • Purpose: According to the 2022 Industrial Accident Status Supplementary Statistics, 27.8% of all fatal accidents in the construction industry involve construction equipment. To overcome the limitations of on-foot patrols and inspections as sites grow larger and buildings taller, we build a model that extracts construction equipment using computer vision technology and analyze its accuracy and field applicability. Method: In this study, deep learning is used to train on image data of excavators, dump trucks, and mobile cranes; the training results are then evaluated, analyzed, and applied to construction sites. Result: At site 'A', excavator and dump truck objects were extracted with average accuracies of 81.42% and 78.23%, respectively; the mobile crane at site 'B' showed an average accuracy of 78.14%. Conclusion: The model is expected to increase the efficiency of on-site safety management and minimize risk factors for disasters. This study can also serve as baseline data for introducing smart construction technology at construction sites.

A Study on Adaptive Skin Extraction using a Gradient Map and Saturation Features (경사도 맵과 채도 특징을 이용한 적응적 피부영역 검출에 관한 연구)

  • Hwang, Dae-Dong;Lee, Keun-Soo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.7
    • /
    • pp.4508-4515
    • /
    • 2014
  • Real-time body detection has been researched actively. However, the detection rate on color-distorted images is low because most existing detection methods use a static skin color model. This paper therefore proposes a new method for detecting skin regions using a gradient map and saturation features. The proposed method sequentially creates a gradient map, extracts the gradient features of skin regions, removes noise using the saturation features of skin, clusters the extracted regions, detects skin regions using the cluster information, and verifies the results. Because it uses features other than color, the method strengthens skin detection against variations in lighting, race, age, and individual features. The detection rate of the proposed method was more than 10% higher than that of traditional methods.
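The first stages of the pipeline above can be sketched in NumPy (illustrative only: the thresholds, the grayscale gradient proxy, and the function names are assumptions, and the clustering and verification stages are omitted):

```python
import numpy as np

def gradient_map(gray):
    """Gradient magnitude of the image -- skin regions tend to be
    smooth, so their gradient response stays low."""
    gy, gx = np.gradient(gray.astype(np.float32))
    return np.hypot(gx, gy)

def saturation(rgb):
    """HSV-style saturation computed directly from an RGB array."""
    mx = rgb.max(axis=-1).astype(np.float32)
    mn = rgb.min(axis=-1).astype(np.float32)
    return (mx - mn) / np.maximum(mx, 1.0)  # guard against divide-by-zero

def skin_candidates(rgb, grad_thresh=8.0, sat_lo=0.1, sat_hi=0.7):
    """Keep pixels that are both smooth (low gradient) and moderately
    saturated -- a rough stand-in for the paper's noise-removal step."""
    gray = rgb.mean(axis=-1)
    smooth = gradient_map(gray) < grad_thresh
    sat = saturation(rgb)
    return smooth & (sat > sat_lo) & (sat < sat_hi)

img = np.zeros((32, 32, 3), np.uint8)
img[...] = (200, 150, 120)  # a uniform skin-tone patch
mask = skin_candidates(img)
```

A full implementation would then cluster the surviving pixels into connected regions and verify each region, as the abstract describes.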

Parallel Implementations of Digital Focus Indices Based on Minimax Search Using Multi-Core Processors

  • Kim, HyungTae;Lee, Duk-Yeon;Choi, Dongwoon;Kang, Jaehyeon;Lee, Dong-Wook
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.2
    • /
    • pp.542-558
    • /
    • 2023
  • A digital focus index (DFI) is a value used to determine image focus in scientific apparatus and smart devices. Automatic focus (AF) is an iterative and time-consuming procedure; however, its processing time can be reduced using a graphics processing unit (GPU) or a multi-core processor (MCP). In this study, parallel architectures of a minimax search algorithm (MSA) are applied to two DFIs: the range algorithm (RA) and image contrast (CT). The DFIs are based on a histogram, but parallel computation of a histogram is conventionally inefficient because of bank conflicts in shared memory. The parallel architectures of RA and CT are instead constructed with a parallel reduction for the MSA, which compares image pixel pairs in parallel and halves the array in every step. The array size eventually decreases to one, and the minimax is determined at the final reduction. Kernels for the architectures are written with open-source software to keep them relatively platform independent. The kernels were tested on a hexa-core PC and an embedded device using Lenna images of various sizes matching the resolutions of industrial cameras. The performance of the kernels was investigated in terms of processing speed and computational acceleration; the maximum acceleration was 32.6× in the best case, and the MCP exhibited the higher performance.
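The pairwise minimax reduction can be sketched sequentially in NumPy, where each loop iteration corresponds to one parallel step on the GPU/MCP (an illustration of the reduction idea only, not the paper's kernels):

```python
import numpy as np

def minimax_reduce(values):
    """Tree-style reduction: each step pairs the first half of the
    array with the second half, keeping the pairwise min and max and
    halving the array until one value remains. On a GPU or MCP every
    pairwise comparison within a step runs in parallel."""
    lo = np.asarray(values, dtype=np.float32)
    hi = lo.copy()
    while lo.size > 1:
        if lo.size % 2:  # pad odd-length arrays with a neutral element
            lo = np.append(lo, lo[-1])
            hi = np.append(hi, hi[-1])
        half = lo.size // 2
        lo = np.minimum(lo[:half], lo[half:])
        hi = np.maximum(hi[:half], hi[half:])
    return float(lo[0]), float(hi[0])

hist = np.array([3, 7, 1, 9, 4, 4, 8, 2])
print(minimax_reduce(hist))  # (1.0, 9.0)
```

A range-style DFI would then be computed from the pair, e.g. `hi - lo`, without ever building the full histogram in shared memory, which is the motivation the abstract gives for this architecture.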