• Title/Summary/Keyword: real-time vision

Efficient Object Recognition by Masking Semantic Pixel Difference Region of Vision Snapshot for Lightweight Embedded Systems (경량화된 임베디드 시스템에서 의미론적인 픽셀 분할 마스킹을 이용한 효율적인 영상 객체 인식 기법)

  • Yun, Heuijee;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.6
    • /
    • pp.813-826
    • /
    • 2022
  • AI-based image processing technologies have been studied widely across many fields. However, the more resource-constrained the board, the harder it is to slim down an image processing algorithm, because of the heavy computation involved. In this paper, we propose a deep learning method for object recognition on lightweight embedded boards. A region of interest is first determined with a semantic segmentation network that requires relatively little computation. After masking that region, a more accurate deep learning detector is applied to it, so that the efficient neural network (ENet) and You Only Look Once (YOLO) together achieve improved accuracy while keeping object recognition real-time on lightweight embedded boards. This research is expected to serve autonomous driving applications, which must be much lighter and cheaper than existing object recognition approaches.
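
A minimal sketch of the two-stage pipeline this abstract describes (cheap segmentation proposes a region, the masked frame goes to a heavier detector). The `segment_regions` and `detect_objects` callables are hypothetical stand-ins for ENet- and YOLO-style models, not the paper's code:

```python
import numpy as np

def mask_semantic_regions(frame: np.ndarray, seg_mask: np.ndarray) -> np.ndarray:
    """Zero out pixels outside the semantically relevant region so the
    downstream detector only processes the masked area."""
    return frame * seg_mask[..., None].astype(frame.dtype)

def recognize(frame, segment_regions, detect_objects):
    # Stage 1: cheap semantic segmentation (ENet-like) proposes the region.
    seg_mask = segment_regions(frame)   # HxW boolean mask (hypothetical helper)
    # Stage 2: heavier, more accurate detector (YOLO-like) runs on the masked input.
    masked = mask_semantic_regions(frame, seg_mask)
    return detect_objects(masked)       # e.g. list of (label, box, score)
```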

A Study on Sensor-Based Upper Full-Body Motion Tracking on HoloLens

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.39-46
    • /
    • 2021
  • In this paper, we propose a motion recognition method for the industrial field in mixed reality. Industrial sites demand movements (grasping, lifting, and carrying) across the entire upper body, from trunk motion to arm motion. Instead of heavy motion capture equipment or vision-based devices such as Kinect, we use a setup composed of sensors and wearable devices: two IMU sensors for trunk and shoulder movement, and a Myo armband for arm movements. Real-time data streaming from the four sensors are fused to enable motion recognition over the whole upper-body area. In our experiments, the sensors were attached to actual clothing and objects were manipulated through synchronization; with this synchronization scheme, neither large nor small motions produced errors. Finally, in the performance evaluation, the average result was 50 frames for one-handed operation on the HoloLens and 60 frames for two-handed operation.
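
A rough sketch of the orientation fusion such a setup implies: each wearable streams a quaternion, and chaining trunk → shoulder → arm gives world-space joint poses. The kinematic chain and the Hamilton-product helper below are standard conventions assumed for illustration, not the paper's implementation:

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def fuse_upper_body(trunk_q, shoulder_q, arm_q):
    """Chain local sensor orientations into world-space joint orientations:
    each segment's pose is its parent's pose composed with its local reading."""
    shoulder_world = quat_mul(trunk_q, shoulder_q)
    arm_world = quat_mul(shoulder_world, arm_q)
    return trunk_q, shoulder_world, arm_world
```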

Real-time traffic light information recognition based on object detection models (객체 인식 모델 기반 실시간 교통신호 정보 인식)

  • Joo, Eun-Oh;Kim, Min-Soo
    • Journal of Cadastre & Land InformatiX
    • /
    • v.52 no.1
    • /
    • pp.81-93
    • /
    • 2022
  • Recently, there have been many studies on recognizing objects around the vehicle, as well as traffic signs and traffic lights, in autonomous driving. Traffic light recognition in particular is one of the core technologies of autonomous driving, so many studies have addressed it, and work based on various deep learning models has grown significantly of late. In addition, with high-quality AI training datasets for voice, vision, and autonomous driving released on AIHub, it has become possible to develop a traffic light recognition model suited to the domestic environment from those data. In this study, we developed a traffic light recognition model usable in Korea using the AIHub training dataset. In particular, to improve recognition performance, we employed several YOLOv4 and YOLOv5 models and ran recognition experiments with various class definitions over the training data. In conclusion, YOLOv5 showed better recognition performance than YOLOv4, and comparing the two architectures confirmed the reason.
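
A minimal sketch of inference with a fine-tuned YOLOv5 model through the public ultralytics/yolov5 torch.hub interface; the `traffic_light.pt` checkpoint name is a placeholder for weights trained on the AIHub classes:

```python
import torch

# Load a YOLOv5 model; 'traffic_light.pt' is a hypothetical checkpoint
# fine-tuned on the AIHub traffic light classes.
model = torch.hub.load("ultralytics/yolov5", "custom", path="traffic_light.pt")

def recognize_traffic_lights(frame):
    """Run one inference pass and keep confident detections."""
    results = model(frame)        # accepts an image path or an HxWx3 ndarray
    det = results.xyxy[0]         # tensor rows: x1, y1, x2, y2, conf, cls
    return [(model.names[int(c)], float(conf))
            for *_, conf, c in det if conf > 0.5]
```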

Protective effects of Panax ginseng berry extract on blue light-induced retinal damage in ARPE-19 cells and mouse retina

  • Hye Mi Cho;Sang Jun Lee;Se-Young Choung
    • Journal of Ginseng Research
    • /
    • v.47 no.1
    • /
    • pp.65-73
    • /
    • 2023
  • Background: Age-related macular degeneration (AMD) is a significant visual disease that causes impaired vision and irreversible blindness in the elderly. However, the effects of ginseng berry extract (GBE) on the retina have not been studied. Therefore, this study aimed to investigate the protective effects of GBE on blue light (BL)-induced retinal damage and elucidate its underlying mechanisms in human retinal pigment epithelial cells (ARPE-19 cells) and the Balb/c retina. Methods: To investigate the effects and underlying mechanisms of GBE on retinal damage in vitro, we performed a cell viability assay, pre- and post-treatment of the sample, a reactive oxygen species (ROS) assay, quantitative real-time PCR (qRT-PCR), and western immunoblotting using A2E-laden ARPE-19 cells with BL exposure. In addition, Balb/c mice were irradiated with BL to induce retinal degeneration and orally administered GBE (50, 100, 200 mg/kg). Using the harvested retina, we performed histological analysis (thickness of retinal layers), qRT-PCR, and western immunoblotting to elucidate the effects and mechanisms of GBE against retinal damage in vivo. Results: GBE significantly inhibited BL-induced cell damage in ARPE-19 cells by activating the SIRT1/PGC-1α pathway and regulating NF-κB translocation, caspase 3 activation, PARP cleavage, the expression of apoptosis-related factors (BAX/BCL-2, LC3-II, and p62), and ROS production. Furthermore, GBE prevented BL-induced retinal degeneration by restoring the thickness of the retinal layers, and it suppressed inflammation and apoptosis in vivo via regulation of the NF-κB and SIRT1/PGC-1α pathways, cleavage of caspase 3 and PARP, and expression of apoptosis-related factors. Conclusions: GBE could be a potential agent to prevent dry AMD and its progression to wet AMD.

AprilTag and Stereo Visual Inertial Odometry (A-SVIO) based Mobile Assets Localization at Indoor Construction Sites

  • Khalid, Rabia;Khan, Muhammad;Anjum, Sharjeel;Park, Junsung;Lee, Doyeop;Park, Chansik
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.344-352
    • /
    • 2022
  • Accurate indoor localization of construction workers and mobile assets is essential to safety management. Existing positioning methods based on GPS, wireless, vision, or sensor-based RTLS are error-prone or expensive in large-scale indoor environments. Tightly coupled sensor fusion mitigates these limitations. This paper proposes a state-of-the-art positioning methodology that addresses them by integrating Stereo Visual Inertial Odometry (SVIO) with fiducial landmarks called AprilTags. SVIO determines the relative position of a moving asset or worker from its starting point; this relative position is transformed into an absolute position whenever an AprilTag placed at an entry point is decoded. The proposed solution was tested in the NVIDIA ISAAC SIM virtual environment, where the trajectory of an indoor moving forklift was estimated. The results show accurate localization of the moving asset within any indoor or underground environment. The system can be employed in various use cases to increase productivity and improve safety at construction sites, contributing to 1) indoor monitoring of man-machinery coactivity for collision avoidance and 2) precise real-time knowledge of who is doing what, and where.
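
A compact sketch of the coordinate handoff this approach rests on: odometry composes relative motion, and decoding a tag with a known (surveyed) world pose re-anchors the estimate. The 4x4 homogeneous-matrix convention below is standard practice assumed for illustration, not code from the paper:

```python
import numpy as np

# Known absolute poses (world_T_tag) of AprilTags placed at entry points.
TAG_POSES = {0: np.eye(4)}   # illustrative: tag 0 sits at the world origin

def absolute_pose(world_T_tag: np.ndarray,
                  tag_T_camera: np.ndarray) -> np.ndarray:
    """When a tag is decoded, the camera's absolute pose is the tag's
    surveyed world pose composed with the observed tag-to-camera transform."""
    return world_T_tag @ tag_T_camera

def track(world_T_camera: np.ndarray, camera_T_new: np.ndarray) -> np.ndarray:
    """Between tag sightings, SVIO increments propagate the absolute pose."""
    return world_T_camera @ camera_T_new
```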

Joint Reasoning of Real-time Visual Risk Zone Identification and Numeric Checking for Construction Safety Management

  • Ali, Ahmed Khairadeen;Khan, Numan;Lee, Do Yeop;Park, Chansik
    • International conference on construction engineering and project management
    • /
    • 2020.12a
    • /
    • pp.313-322
    • /
    • 2020
  • Recognizing risk hazards is a vital step in effectively preventing accidents on a construction site. Advances in computer vision systems and the availability of large visual databases of construction sites have made it possible to act quickly on human error and disaster situations that may occur during management supervision. It is therefore necessary to analyze the risk factors that need to be managed on site and to review appropriate, effective technical methods for each. This research analyzes the Occupational Safety and Health Administration (OSHA) rules related to risk zone identification that image recognition technology can adopt, and classifies their risk factors by the applicable technical method; from this, it develops a pattern-oriented classification of OSHA rules that supports large-scale safety hazard recognition. The approach jointly reasons over risk zone identification and numeric input, using a stereo camera integrated with an image detection algorithm (YOLOv3) and the Pyramid Stereo Matching Network (PSMNet). The result identifies risk zones and raises an alarm if a target object enters one; it also extracts numeric information about a target, recognizing its length, spacing, and angle. Applying such joint-logic image detection algorithms may improve the speed and accuracy of hazard detection, since more than one factor is merged to prevent accidents on the job site.
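
A simplified sketch of the geometric half of this joint check: a detection box plus stereo depth back-projects to a 3D point, which is then tested against a risk zone. The axis-aligned zone and pinhole intrinsics are illustrative assumptions:

```python
import numpy as np

def box_to_point(box, depth_map, fx, fy, cx, cy):
    """Back-project the detection box center to a 3D camera-frame point
    using the stereo depth map and pinhole intrinsics."""
    u = (box[0] + box[2]) / 2.0
    v = (box[1] + box[3]) / 2.0
    z = float(depth_map[int(v), int(u)])
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def in_risk_zone(point, zone_min, zone_max) -> bool:
    """Axis-aligned 3D risk zone test; an alarm is raised on entry."""
    return bool(np.all(point >= zone_min) & np.all(point <= zone_max))
```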

Development of Wideband Frequency Modulated Laser for High Resolution FMCW LiDAR Sensor (고분해능 FMCW LiDAR 센서 구성을 위한 광대역 주파수변조 레이저 개발)

  • Jong-Pil La;Ji-Eun Choi
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.18 no.6
    • /
    • pp.1023-1030
    • /
    • 2023
  • This paper addresses an FMCW LiDAR system with robust target detection capability even under adverse operating conditions such as snow, rain, and fog. Our focus is primarily on enhancing FMCW LiDAR performance by improving the characteristics of the frequency-modulated laser, which directly influence the LiDAR's range resolution, coherence length, and maximum measurement range. We describe the use of an unbalanced Mach-Zehnder laser interferometer to measure real-time changes of the lasing frequency and to correct frequency modulation errors through an optical phase-locked loop technique. To extend the laser's coherence length, we employ an extended-cavity laser diode as the source and implement the laser interferometer on a photonic integrated circuit to miniaturize the optical system. The developed FMCW LiDAR system exhibits a bandwidth of 10.045 GHz and a remarkable distance resolution of 0.84 mm.
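
As a back-of-the-envelope illustration of FMCW ranging (not the paper's implementation), the beat frequency between the transmitted chirp and its return maps linearly to range via R = c·f_beat·T/(2B). Only the 10.045 GHz bandwidth comes from the abstract; the chirp duration below is an assumed value:

```python
C = 299_792_458.0          # speed of light, m/s

def range_from_beat(f_beat_hz: float,
                    bandwidth_hz: float = 10.045e9,  # paper's modulation bandwidth
                    chirp_time_s: float = 1e-3) -> float:
    """FMCW range: R = c * f_beat * T / (2 * B). The 1 ms chirp duration
    is an assumed value; only the 10.045 GHz bandwidth is from the paper."""
    return C * f_beat_hz * chirp_time_s / (2.0 * bandwidth_hz)

# Example: a 670 kHz beat over a 1 ms, 10.045 GHz chirp -> ~10 m target range.
print(range_from_beat(670e3))
```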

Development of AI and IoT-based smart farm pest prediction system: Research on application of YOLOv5 and Isolation Forest models (AI 및 IoT 기반 스마트팜 병충해 예측시스템 개발: YOLOv5 및 Isolation Forest 모델 적용 연구)

  • Mi-Kyoung Park;Hyun Sim
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.4
    • /
    • pp.771-780
    • /
    • 2024
  • In this study, we implemented a real-time pest detection and prediction system for a strawberry farm using a computer vision model based on the YOLOv5 architecture and an Isolation Forest Classifier. The model performance evaluation showed that the YOLOv5 model achieved a mean average precision (mAP 0.5) of 78.7%, an accuracy of 92.8%, a recall of 90.0%, and an F1-score of 76%, indicating high predictive performance. This system was designed to be applicable not only to strawberry farms but also to other crops and various environments. Based on data collected from a tomato farm, a new AI model was trained, resulting in a prediction accuracy of over 85% for major diseases such as late blight and yellow leaf curl virus. Compared to the previous model, this represented an improvement of more than 10% in prediction accuracy.
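
A minimal sketch of the anomaly-flagging half of such a system using scikit-learn's IsolationForest; the feature layout (temperature, humidity, daily detection count) is an illustrative assumption, not the paper's schema:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical hourly farm records: [temperature_C, humidity_pct, detections].
history = np.array([[24.1, 68.0, 0], [23.8, 70.5, 1], [24.5, 67.2, 0],
                    [25.0, 66.8, 0], [23.9, 71.0, 2], [24.3, 69.4, 1]])

clf = IsolationForest(contamination=0.1, random_state=0).fit(history)

def is_outbreak_risk(record) -> bool:
    """IsolationForest labels anomalies as -1; treat them as risk alerts."""
    return clf.predict(np.asarray(record).reshape(1, -1))[0] == -1

print(is_outbreak_risk([31.5, 90.0, 8]))   # unusual reading -> likely True
```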

Thermographic Assessment on Temperature Change of Eye Surface in Cataract Surgery Observation (백내장수술 안에서 열화상카메라를 이용한 안구표면 온도의 변화)

  • Park, Chang Won;An, Young-Ju;Kim, Hyojin
    • The Korean Journal of Vision Science
    • /
    • v.20 no.4
    • /
    • pp.497-504
    • /
    • 2018
  • Purpose: The purpose of this study was to investigate changes in ocular surface temperature before and after cataract surgery using thermography from a thermal imaging camera. Methods: The study included 75 patients (75 eyes) aged 50 to 79 years who underwent cataract surgery. Patients with a history of corneal surgery, contact lens wear, tear secretion disorders, or medication for systemic disease were excluded. Ocular surface temperature changes were measured in real time with a thermal imager (Cox CX series, Answer, Korea) following the Tear Break-Up Time (TBUT) test, the McMonnies questionnaire, and Schirmer's test. Results: The preoperative ocular surface temperature was 35.20 ± 0.54 °C and the postoperative temperature was 35.30 ± 0.53 °C; the difference was not significant. The rate of temperature change on the ocular surface was statistically significant, at -0.12 ± 0.08 Δ(°C/sec) before surgery and -0.18 ± 0.07 Δ(°C/sec) after surgery. By age group, the surface temperature change went from -0.19 ± 0.05 to -0.14 ± 0.09 Δ(°C/sec) in the 50s group, from -0.12 ± 0.08 to -0.15 ± 0.07 Δ(°C/sec) in the 60s group, and from -0.18 ± 0.07 to -0.12 ± 0.08 Δ(°C/sec) in the 70s group, showing significant changes in ocular surface temperature at all ages. Conclusion: Following cataract surgery, all indicators of dry eye syndrome decreased, and the changes in eye surface temperature were significant. Thermography of the ocular surface is expected to be useful for evaluating various dry eye syndromes, since it assesses dry eye noninvasively, easily, and quantifiably.
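
For orientation, the Δ(°C/sec) figures above are cooling rates of the ocular surface over time; under an assumed sampling rate, a least-squares slope over a short run of thermal readings reproduces this kind of metric:

```python
import numpy as np

def cooling_rate(temps_c, fps: float = 1.0) -> float:
    """Least-squares slope of surface temperature over time, in °C/sec.
    A negative value means the tear film is cooling between blinks."""
    t = np.arange(len(temps_c)) / fps
    slope, _ = np.polyfit(t, np.asarray(temps_c, dtype=float), 1)
    return float(slope)

# Illustrative 1 Hz readings after a blink: cools ~0.12 °C/sec.
print(cooling_rate([35.30, 35.18, 35.06, 34.94]))
```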

Training of a Siamese Network to Build a Tracker without Using Tracking Labels (샴 네트워크를 사용하여 추적 레이블을 사용하지 않는 다중 객체 검출 및 추적기 학습에 관한 연구)

  • Kang, Jungyu;Song, Yoo-Seung;Min, Kyoung-Wook;Choi, Jeong Dan
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.5
    • /
    • pp.274-286
    • /
    • 2022
  • Multi-object tracking has long been studied in computer vision and plays a critical role in applications such as autonomous driving and driving assistance. Multi-object tracking techniques generally consist of a detector that finds objects and a tracker that follows the detected objects. Various publicly available datasets let us train a detector model without much effort, but relatively few public datasets exist for training a tracker model, and building one's own tracking dataset takes far longer than building a detection dataset. Hence, the detector is often developed separately from the tracker module; the drawback is that the separate tracker must be re-tuned whenever the upstream detector model changes. This study proposes a system that trains a model performing detection and tracking simultaneously using only detector training datasets. In particular, a Siamese network with augmentation composes the detector and tracker. Experiments on public datasets verify that the proposed algorithm yields a real-time multi-object tracker comparable to state-of-the-art tracker models.
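
A condensed sketch of the label-free training idea: augmenting a frame produces a pseudo "second view" in which object identities are known for free from the detection labels, so a Siamese embedding can be trained contrastively. The tiny network and augmentations below are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbed(nn.Module):
    """Tiny embedding net applied to detection crops from both views."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def pseudo_pair(crops: torch.Tensor) -> torch.Tensor:
    """Augment detection crops (flip + noise) to fake a 'next frame' view;
    each object's identity is known for free from its detection label."""
    view = torch.flip(crops, dims=[-1])
    return view + 0.02 * torch.randn_like(view)

def contrastive_step(model, crops, temperature: float = 0.1):
    """Pull each crop toward its augmented self, push apart other objects."""
    a = model(crops)                  # anchors: N x D, L2-normalized
    b = model(pseudo_pair(crops))     # positives from augmentation
    sim = a @ b.t()                   # N x N cosine similarities
    target = torch.arange(len(crops))
    return F.cross_entropy(sim / temperature, target)   # InfoNCE-style loss
```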