• Title/Summary/Keyword: Computer vision technology


Intelligent Robust Base-Station Research in Harsh Outdoor Wilderness Environments for Wildsense

  • Ahn, Junho;Mysore, Akshay;Zybko, Kati;Krumm, Caroline;Lee, Dohyeon;Kim, Dahyeon;Han, Richard;Mishra, Shivakant;Hobbs, Thompson
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.3
    • /
    • pp.814-836
    • /
    • 2021
  • Wildlife ecologists and biologists recapture deer to collect tracking data from their collars, or wait for the collar's drop-off mechanism to detach automatically. Research teams must either manage a base camp with medical trailers, helicopters, and airplanes to capture deer, or wait several months until the collar drops off the deer's neck. We propose an intelligent, robust base-station design that offers a low-cost, time-saving way to transfer recorded sensor data from the collars to a listener node, so that readings are obtained without opening the weatherproof deer collar. We successfully designed and implemented a robust base-station system that automatically collects data from the collars and listener motes in harsh wilderness environments. Intelligent solutions were also analyzed for improved data collection and pattern prediction with drone-based detection and tracking algorithms.

Vehicle Displacement Estimation By GPS and Vision Sensor (영상센서/GPS에 기반한 차량의 이동변위 추정)

  • Kim, Min-Woo;Lim, Joon-Hoo;Park, Je-Doo;Kim, Hee-Sung;Lee, Hyung-Keun
    • Journal of Advanced Navigation Technology
    • /
    • v.16 no.3
    • /
    • pp.417-425
    • /
    • 2012
  • It is well known that GPS cannot provide positioning results if a sufficient number of visible satellites is not available. To overcome this weakness, attention has recently shifted to hybrid positioning methods that augment GPS with other sensors. As an extension of hybrid positioning methods, this paper proposes a new method that combines GPS and a vision sensor to improve the availability and accuracy of land-vehicle positioning. The proposed method does not require any external map information and can provide position solutions whenever two or more navigation satellites are visible. To evaluate the performance of the proposed method, an experimental result with real measurements is provided; it shows that the accumulated error in the test section is about 2.5 meters along the n-axis and about 3 meters along the e-axis.
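The abstract does not give the fusion equations, so the following is only an illustrative sketch of the general idea of augmenting GPS with vision: propagate the last position with a vision-derived displacement, then blend in a GPS fix when one is available. All names and the blending weight are assumptions, not the paper's method.

```python
# Hypothetical sketch of GPS/vision hybrid positioning.
# fuse_position, vision_delta, and gps_weight are illustrative names only.

def fuse_position(prev_pos, vision_delta, gps_fix=None, gps_weight=0.8):
    """Dead-reckon with the vision displacement, then blend in a GPS fix."""
    # Prediction from the vision sensor (works during GPS outages)
    pred = (prev_pos[0] + vision_delta[0], prev_pos[1] + vision_delta[1])
    if gps_fix is None:
        return pred
    # Weighted blend of the GPS measurement and the vision prediction
    return (gps_weight * gps_fix[0] + (1 - gps_weight) * pred[0],
            gps_weight * gps_fix[1] + (1 - gps_weight) * pred[1])

pos = (0.0, 0.0)
pos = fuse_position(pos, (1.0, 0.5))               # GPS outage: vision only
pos = fuse_position(pos, (1.0, 0.5), (2.1, 1.1))   # GPS fix available
```

A real system would replace the fixed weight with a filter (e.g. a Kalman filter) whose gain reflects the relative sensor accuracies.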

NTGST-Based Parallel Computer Vision Inspection for High Resolution BLU (NTGST 병렬화를 이용한 고해상도 BLU 검사의 고속화)

  • 김복만;서경석;최흥문
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.6
    • /
    • pp.19-24
    • /
    • 2004
  • A novel fast parallel NTGST is proposed for high-resolution computer vision inspection of the BLUs in an LCD production line. The conventional computation-intensive NTGST algorithm is modified and its C code is optimized into a fast NTGST adapted to the SIMD parallel architecture. The input inspection image is then partitioned and allocated to each of the P processors in a multi-threaded implementation, and the NTGST is executed on a SIMD architecture over N data items simultaneously in each thread. Thus, the proposed inspection system can achieve a speedup of O(NP). Experiments on a dual Pentium III processor with its MMX and extended MMX SIMD technology show that the proposed parallel NTGST is about Sp = 8 times faster than the conventional NTGST, which demonstrates the scalability of the proposed implementation for fast, high-resolution computer vision inspection of the various-sized BLUs in LCD production lines.
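The data-parallel layout described above (partition the image across P workers, apply the same kernel everywhere) can be sketched as follows. The per-row kernel here is a placeholder, since the NTGST computation itself is not given in the abstract.

```python
# Sketch of partitioning an inspection image across worker threads.
# `kernel` is a stand-in for NTGST, whose details the abstract omits.
from concurrent.futures import ThreadPoolExecutor

def kernel(row):
    # Placeholder per-row computation (distance from mid-gray)
    return [abs(v - 128) for v in row]

def inspect_parallel(image, workers=4):
    """Apply the kernel to every row, distributed over a thread pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(kernel, image))

image = [[0, 128, 255]] * 8          # toy 8-row "image"
result = inspect_parallel(image)
```

The abstract's N-way SIMD factor would come on top of this thread-level parallelism, inside the kernel itself (e.g. MMX intrinsics in the original C code).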

Accurate Pose Measurement of Label-attached Small Objects Using a 3D Vision Technique (3차원 비전 기술을 이용한 라벨부착 소형 물체의 정밀 자세 측정)

  • Kim, Eung-su;Kim, Kye-Kyung;Wijenayake, Udaya;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.22 no.10
    • /
    • pp.839-846
    • /
    • 2016
  • Bin picking is the task of picking a small object from a bin. For accurate bin picking, the 3D pose information (position and orientation) of a small object is required, because the object is mixed with other objects of the same type in the bin. Using this 3D pose information, a robotic gripper can pick an object using exact distance and orientation measurements. In this paper, we propose a 3D vision technique for accurate measurement of the 3D position and orientation of small objects to whose surface a paper label is attached. We use the maximally stable extremal regions (MSER) algorithm to detect the label areas in the left bin image acquired from a stereo camera. In each label area, image features are detected and their correspondences with the right image are determined by a stereo vision technique. Then, the 3D position and orientation of the objects are measured accurately using a transformation from the camera coordinate system to a new label coordinate system. For stable measurement during a bin-picking task, the pose information is filtered by averaging over fixed time intervals. Our experimental results indicate that the proposed technique yields pose accuracy of 0.4~0.5 mm in position measurements and 0.2~0.6° in angle measurements.
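The abstract's stabilization step, averaging pose measurements over fixed time intervals, can be written as a short filter. The interval size and the (x, y, z, angle) pose layout below are assumptions for illustration, not the paper's actual parameters.

```python
# Minimal fixed-interval pose averaging filter (illustrative only).

def average_poses(poses, interval=5):
    """Average (x, y, z, angle) pose tuples over fixed-size intervals."""
    filtered = []
    for i in range(0, len(poses), interval):
        window = poses[i:i + interval]
        # Component-wise mean over the window
        filtered.append(tuple(sum(c) / len(window) for c in zip(*window)))
    return filtered

poses = [(10, 0, 5, 90), (12, 0, 5, 92), (11, 0, 5, 91)]
smoothed = average_poses(poses, interval=3)   # one averaged pose per window
```

Note that naive averaging of angles only works when the measurements stay well away from the ±180° wrap-around; a robust version would average on the unit circle.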

A technique for predicting the cutting points of fish for the target weight using AI machine vision

  • Jang, Yong-hun;Lee, Myung-sub
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.4
    • /
    • pp.27-36
    • /
    • 2022
  • In this paper, to improve conditions at fish processing sites, we propose a method to predict the cutting point of a fish for a target weight using AI machine vision. The proposed method first photographs the top and front views of the input fish and performs image-based preprocessing. RANSAC (RANdom SAmple Consensus) is then used to extract the fish contour, and 3D external information of the fish is obtained using 3D modeling. Next, machine learning is performed on the extracted three-dimensional feature information and the measured weight information to generate a neural network model. Subsequently, the fish is cut at the cutting point predicted by the proposed technique, and the weight of the cut piece is measured. We compared the measured weight with the target weight and evaluated the performance using metrics such as MAE (Mean Absolute Error) and MRE (Mean Relative Error). The obtained results indicate that an average error rate of less than 3% was achieved relative to the target weight. The proposed technique is expected to contribute greatly to the development of the fishery industry when linked to automation systems.
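The evaluation step above compares the cut-piece weight to the target weight with MAE and MRE; the two metrics can be written directly (the sample weights below are made up for illustration):

```python
# Mean Absolute Error and Mean Relative Error, as used to evaluate
# the predicted cutting points (sample data is illustrative).

def mae(targets, measured):
    return sum(abs(t - m) for t, m in zip(targets, measured)) / len(targets)

def mre(targets, measured):
    return sum(abs(t - m) / t for t, m in zip(targets, measured)) / len(targets)

targets  = [100.0, 200.0, 150.0]   # target weights (g)
measured = [ 98.0, 204.0, 151.5]   # weights of the actual cut pieces (g)

err_mae = mae(targets, measured)
err_mre = mre(targets, measured)   # below 0.03, i.e. under the 3% threshold
```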

Integrated Object Detection and Blockchain Framework for Remote Safety Inspection at Construction Sites

  • Kim, Dohyeong;Yang, Jaehun;Anjum, Sharjeel;Lee, Dongmin;Pyeon, Jae-ho;Park, Chansik;Lee, Doyeop
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.136-144
    • /
    • 2022
  • Construction sites are characterized by dangerous situations and environments that cause fatal accidents. Potential risk detection needs to be improved by continuously monitoring site conditions. However, the current labor-intensive inspection practice has many limitations in monitoring dangerous conditions at construction sites. Computer vision technology that can quickly analyze and collect site conditions from images has been in the spotlight as a solution. Nonetheless, inspection results obtained via computer vision are still stored and managed in centralized systems vulnerable to tampering by the central node. Blockchain has been used as a reliable and efficient decentralized information management system. Despite its potential, only limited research has been conducted on integrating computer vision and blockchain. Therefore, to solve the current safety management problems, the authors propose a framework for construction site inspection that integrates object detection with a blockchain network, enabling efficient and reliable remote inspection. Object detection is applied to enable automatic analysis of site safety conditions. As a result, the workload of safety managers can be reduced, with inspection results stored and distributed reliably through the blockchain network. In addition, errors or forgery in the inspection process can be automatically prevented and verified through a smart contract. As site safety conditions are reliably shared with project participants, they can remotely inspect site conditions and make safety-related decisions with confidence.
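The tamper-evidence property that the blockchain layer provides can be illustrated with a toy hash chain: each inspection record commits to the hash of its predecessor, so altering any past record invalidates the chain. This is only a sketch of the underlying principle, not the framework's actual network or smart-contract logic.

```python
# Toy hash chain for inspection records (illustrative only; a real
# deployment would use a full blockchain network and smart contracts).
import hashlib
import json

GENESIS = "0" * 64

def add_record(chain, result):
    """Append an inspection result, linked to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"result": result, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any tampered record breaks verification."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else GENESIS
        body = {"result": block["result"], "prev": prev}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
    return True

chain = []
add_record(chain, {"site": "A", "helmet_violations": 0})
add_record(chain, {"site": "A", "helmet_violations": 2})
```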


Classifying Indian Medicinal Leaf Species Using LCFN-BRNN Model

  • Kiruba, Raji I;Thyagharajan, K.K;Vignesh, T;Kalaiarasi, G
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3708-3728
    • /
    • 2021
  • Indian herbal plants are used in agriculture and in the food, cosmetics, and pharmaceutical industries. Laboratory-based tests are routinely used to identify and classify similar herb species by analyzing their internal cell structures. In this paper, we apply computer vision techniques to the same task. The original leaf image was preprocessed using the Chan-Vese active contour segmentation algorithm to remove the background, setting the contraction bias (v) to -1 and the smoothing factor (µ) to 0.5 and bringing the initial contour close to the image boundary. The segmented grayscale image was then fed to a leaky capacitance fired neuron model (LCFN), which differentiates between similar herbs by combining different groups of pixels in the leaf image. The LCFN's decay constants (f, g) and threshold (h) were empirically assigned as 0.7, 0.6, and 18, respectively, to generate the 1D feature vector. The LCFN time sequence identified the internal leaf structure at different iterations. Our proposed framework was tested on newly collected natural images of herbal species, including images that vary geometrically in size, orientation, and position. The 1D sequence and shape features of aloe, betel, Indian borage, bittergourd, grape, insulin herb, guava, mango, nilavembu, nithiyakalyani, sweet basil, and pomegranate were fed into a 5-fold Bayesian regularization neural network (BRNN), K-nearest neighbors (KNN), support vector machine (SVM), and an ensemble classifier, obtaining a highest classification accuracy of 91.19%.

A Lightweight Pedestrian Intrusion Detection and Warning Method for Intelligent Traffic Security

  • Yan, Xinyun;He, Zhengran;Huang, Youxiang;Xu, Xiaohu;Wang, Jie;Zhou, Xiaofeng;Wang, Chishe;Lu, Zhiyi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.12
    • /
    • pp.3904-3922
    • /
    • 2022
  • As a research hotspot in recent years, pedestrian detection has a wide range of applications in the field of computer vision. However, current pedestrian detection methods suffer from problems such as insufficient detection accuracy and models too large for large-scale deployment. In view of these problems, this paper proposes a lightweight pedestrian detection and early-warning method based on You Only Look Once (Yolov5), which exploits the advantages of the Yolov5s model to achieve accurate and fast pedestrian recognition. In addition, this paper optimizes the loss function of the batch normalization (BN) layer. After sparsity training, pruning, and fine-tuning, the model becomes small enough to be deployed on edge devices with limited computing power. Finally, the experimental data presented in this paper show that, when trained on the road pedestrian dataset we collected and processed independently, the Yolov5s model has advantages in precision and other indicators over the traditional single shot multibox detector (SSD) model and the fast region-based convolutional neural network (Fast R-CNN) model. After pruning and lightweighting, the size of the trained model is greatly reduced without a significant reduction in accuracy: the final precision reaches 87%, while the model size is reduced to 7,723 KB.
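BN-based pruning of the kind described above typically ranks channels by the magnitude of their BN scale factors (gamma) after sparsity training and keeps only the largest. The selection step can be sketched as follows; the keep ratio and gamma values are assumptions for illustration, not the paper's settings.

```python
# Channel selection by |gamma| magnitude, the core of BN-based pruning
# (keep_ratio and the sample gammas are illustrative assumptions).

def prune_channels(gammas, keep_ratio=0.5):
    """Return the indices of channels with the largest |gamma|."""
    ranked = sorted(range(len(gammas)), key=lambda i: -abs(gammas[i]))
    n_keep = max(1, int(len(gammas) * keep_ratio))
    return sorted(ranked[:n_keep])

# After sparsity training, many gammas are driven toward zero:
gammas = [0.9, 0.01, 0.5, 0.002, 0.7, 0.03]
kept = prune_channels(gammas)   # indices of surviving channels
```

In practice the convolution weights feeding the dropped channels are removed as well, and the network is fine-tuned to recover accuracy, matching the sparsify-prune-fine-tune pipeline the abstract describes.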

Performance Analysis of Vision-based Positioning Assistance Algorithm (비전 기반 측위 보조 알고리즘의 성능 분석)

  • Park, Jong Soo;Lee, Yong;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.3
    • /
    • pp.101-108
    • /
    • 2019
  • Due to recent improvements in computer processing speed and image processing technology, research is being actively carried out on combining information from cameras with existing GNSS (Global Navigation Satellite System) and dead reckoning. In this study, we developed a vision-based positioning assistance algorithm that estimates the distance to an object from stereo images. In addition, a GNSS/on-board vehicle sensor/vision-based positioning algorithm was developed by combining the vision-based positioning algorithm with an existing positioning algorithm. For the performance analysis, the velocity calculated from an actual driving test was used to correct the navigation solution, and simulation tests were performed to analyze the effects of velocity precision. The analysis confirms that position accuracy improves by about 4% when vision information is added, compared with the existing GNSS/on-board-sensor-based positioning algorithm.

The Change of Near Point of Convergence and Fusional Reserves after Computer Gaming with Different Direction of Eye Movement (안구의 운동방향이 다른 컴퓨터 게임 후 폭주근점과 융합여력의 변화)

  • Kim, Se Il;Kwon, Ki-Il;Lee, Jiye;Lee, Hyo Jin;Park, Mijung;Kim, So Ra
    • Journal of Korean Ophthalmic Optics Society
    • /
    • v.18 no.1
    • /
    • pp.37-43
    • /
    • 2013
  • Purpose: The present study investigated whether the direction of eye movement while playing computer games for a certain period affects the near point of convergence (NPC) and fusional reserves (FR). Methods: A total of 40 subjects in their 20s, with visual acuity of 1.0 or higher and without any ocular disease or accommodative dysfunction, were asked to play computer games continuously. After the subjects moved their eyes in horizontal and vertical directions for 40 and 90 minutes, their horizontal fusional reserves, vertical fusional vergence, and near point of convergence were measured. Results: The near point of convergence tended to recede after computer gaming in both the horizontal and vertical directions, and both horizontal and vertical fusional reserves were significantly reduced. The decline in fusional reserves and the recession of the near point of convergence after 90 minutes of computer gaming were smaller than after 40 minutes. When analyzed by direction of eye movement, the change in binocular vision was affected by horizontal eye movement more than by vertical movement. Conclusions: This study revealed that the changes in FR and NPC differed according to the dominant direction of eye movement during visual display terminal (VDT) tasks. Therefore, VDT working time should be adjusted according to the dominant direction of eye movement to prevent binocular vision dysfunction.