• Title/Summary/Keyword: Object Recognition Technology

Robust Dynamic Projection Mapping onto Deforming Flexible Moving Surface-like Objects (유연한 동적 변형물체에 대한 견고한 다이내믹 프로젝션맵핑)

  • Kim, Hyo-Jung;Park, Jinho
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.6 / pp.897-906 / 2017
  • Projection mapping, also known as Spatial Augmented Reality (SAR), has attracted much attention recently and is used in many fields to augment physical objects with various projected virtual content. However, conventional approaches to projection mapping face several limitations: the geometric deformation of target objects is not considered, movements of flexible objects such as paper are hard to handle under natural interactions such as folding and bending, and precise registration and tracking have been cumbersome. While there has been much research on projection mapping onto static objects, dynamic projection mapping that keeps tracking a moving flexible target and aligns the projection at interactive rates remains a challenge. Therefore, this paper proposes a new method using Unity3D and ARToolKit for high-speed, robust tracking and dynamic projection mapping onto non-rigid deforming objects. The method consists of four stages: forming a cubic Bezier surface, rendering transformation values, multiple-marker recognition and tracking, and real-time webcam imaging. Users can fold, curve, bend, and twist the object to interact with it. The method achieves three results. First, the system can detect strong deformation of objects. Second, it reduces the occlusion error that causes misalignment between the target object and the projected video. Lastly, its accuracy and robustness allow the rendered content to be projected exactly onto the target object in real time with high-speed, precise transformation tracking.
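
A small illustrative sketch (Python/NumPy, not the authors' Unity3D/ARToolKit implementation) of the first stage, forming a cubic Bezier surface: a bicubic patch is evaluated from a 4x4 grid of control points of the kind the tracked markers would supply, giving the surface samples onto which content could be warped.

```python
# Sketch: evaluate a bicubic Bezier patch from a 4x4 grid of 3D control points.
import numpy as np

def bernstein3(t):
    """Cubic Bernstein basis values B0..B3 at parameter t in [0, 1]."""
    return np.array([(1 - t) ** 3,
                     3 * t * (1 - t) ** 2,
                     3 * t ** 2 * (1 - t),
                     t ** 3])

def bezier_patch_point(ctrl, u, v):
    """Evaluate a bicubic Bezier surface at (u, v).

    ctrl: (4, 4, 3) array of 3D control points, e.g. tracked marker positions
    plus interpolated interior points (placeholder data below).
    """
    bu, bv = bernstein3(u), bernstein3(v)
    return np.einsum("i,j,ijk->k", bu, bv, ctrl)

# Example: sample a 10x10 grid of surface points for texture mapping.
ctrl = np.random.rand(4, 4, 3)          # placeholder control points
samples = np.array([[bezier_patch_point(ctrl, u, v)
                     for v in np.linspace(0, 1, 10)]
                    for u in np.linspace(0, 1, 10)])
print(samples.shape)                    # (10, 10, 3)
```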

A Flexible Model-Based Face Region Detection Method (유연한 모델 기반의 얼굴 영역 검출 방법)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.5 / pp.251-256 / 2021
  • Unlike general cameras, a high-speed camera that captures a large number of frames per second can enable advances in image processing technologies that have so far been limited. This paper proposes a method for removing undesirable noise from a high-speed input color image and then detecting a human face in the noise-free image. First, noise pixels contained in the high-speed input image are removed by applying a bilateral filter. Then, using RetinaFace, the region corresponding to the person's face is robustly detected in the denoised image. Experimental results show that the described algorithm removes noise from the input image and then robustly detects a human face using the trained model. The model-based face detection method presented in this paper is expected to serve as a base technology for many practical applications of image processing and pattern recognition, such as indoor and outdoor building monitoring, door access management, and mobile biometric authentication.
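
A rough sketch of the two-stage pipeline, assuming OpenCV's bilateral filter for edge-preserving noise removal and the open-source retina-face package for detection; the paper's actual camera pipeline and model may differ.

```python
import cv2
from retinaface import RetinaFace  # pip install retina-face

frame = cv2.imread("high_speed_frame.png")           # one frame from the camera

# 1) Edge-preserving noise removal with a bilateral filter.
denoised = cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)

# 2) Face detection on the denoised frame.
faces = RetinaFace.detect_faces(denoised)
if isinstance(faces, dict):                           # empty result if no face found
    for face in faces.values():
        x1, y1, x2, y2 = face["facial_area"]
        cv2.rectangle(denoised, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("detected.png", denoised)
```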

YOLO-based lane detection system (YOLO 기반 차선검출 시스템)

  • Jeon, Sungwoo;Kim, Dongsoo;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.3 / pp.464-470 / 2021
  • Automobiles were once simple means of transportation, but as vehicles rapidly become intelligent and smart and consumer expectations grow, research on converging IT technology with automobiles is under way, and high-performance basic functions for driver convenience and safety are required. As a result, autonomous and semi-autonomous vehicles have been developed, yet these systems sometimes leave the lane because of environmental conditions, situations the vehicle cannot judge, or failures of the lane detector to recognize the lane. To improve the lane-keeping performance of the lane detection system in autonomous vehicles, this paper exploits the fast inference of YOLO (You Only Look Once) together with images from a CSI camera, and proposes a lane detection system that recognizes the surrounding environment and collects driving data to extract the region of interest.
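
A minimal sketch of such an inference loop, assuming the ultralytics YOLO API, a hypothetical lane-trained weight file ("lane_yolo.pt"), and an ordinary OpenCV capture standing in for the CSI camera.

```python
import cv2
from ultralytics import YOLO

model = YOLO("lane_yolo.pt")        # hypothetical weights trained on lane data
cap = cv2.VideoCapture(0)           # stand-in for the CSI camera stream

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    roi = frame[h // 2:, :]         # region of interest: lower half (road surface)
    results = model(roi, verbose=False)
    for box in results[0].boxes.xyxy:           # detected lane segments
        x1, y1, x2, y2 = map(int, box.tolist())
        cv2.rectangle(roi, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("lanes", frame)
    if cv2.waitKey(1) == 27:        # Esc to quit
        break
cap.release()
```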

A Method for 3D Human Pose Estimation based on 2D Keypoint Detection using RGB-D information (RGB-D 정보를 이용한 2차원 키포인트 탐지 기반 3차원 인간 자세 추정 방법)

  • Park, Seohee;Ji, Myunggeun;Chun, Junchul
    • Journal of Internet Computing and Services / v.19 no.6 / pp.41-51 / 2018
  • Recently, in the field of video surveillance, deep learning based methods have been applied to intelligent video surveillance systems, allowing various events such as crime, fire, and abnormal phenomena to be detected robustly. However, because occlusion arises from the loss of 3D information when the 3D real world is projected onto a 2D image, the occlusion problem must be considered in order to detect objects accurately and estimate their pose. Therefore, in this paper, we detect moving objects by adding depth information to the existing RGB information, which resolves occlusion in the object detection process. Then, using a convolutional neural network on the detected region, the positions of the 14 keypoints of the human joints are predicted. Finally, to address the self-occlusion that occurs during pose estimation, a method for 3D human pose estimation is described that extends the estimation into 3D space using the predicted 2D keypoints and a deep neural network. In the future, the 2D and 3D pose estimation results of this research can serve as base data for human behavior recognition and contribute to the development of related industrial technology.
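
The depth-based lifting step can be illustrated with standard pinhole back-projection; the intrinsics below (fx, fy, cx, cy) are placeholder values, not the paper's calibration.

```python
import numpy as np

def backproject_keypoints(keypoints_2d, depth_map, fx, fy, cx, cy):
    """keypoints_2d: (N, 2) pixel coords (u, v); depth_map: HxW depths in metres."""
    points_3d = []
    for u, v in keypoints_2d:
        z = float(depth_map[int(v), int(u)])   # depth at the joint location
        x = (u - cx) * z / fx                  # pinhole back-projection
        y = (v - cy) * z / fy
        points_3d.append((x, y, z))
    return np.array(points_3d)

# 14 joints predicted by the 2D network, plus an aligned depth frame (dummy data).
joints_2d = np.random.randint(0, 480, size=(14, 2))
depth = np.full((480, 640), 2.0)               # 2 m everywhere, for illustration
print(backproject_keypoints(joints_2d, depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0))
```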

Anomaly Detection Methodology Based on Multimodal Deep Learning (멀티모달 딥 러닝 기반 이상 상황 탐지 방법론)

  • Lee, DongHoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.101-125 / 2022
  • Recently, with the development of computing technology and the improvement of the cloud environment, deep learning technology has advanced, and attempts to apply deep learning to various fields are increasing. A typical example is anomaly detection, a technique for identifying values or patterns that deviate from normal data. Among the representative types of anomaly, contextual anomalies, which require an understanding of the overall situation, are particularly difficult to detect. In general, anomaly detection in image data is performed using a model pre-trained on large datasets. However, because such pre-trained models focus on object classification, they are of limited use for anomaly detection that must understand complex situations created by multiple objects. Therefore, in this study, we propose a new two-step pre-trained model for detecting abnormal situations. Our methodology performs additional learning through image captioning to understand not only individual objects but also the complex situations they create. Specifically, the proposed methodology transfers the knowledge of a pre-trained model that has learned object classification on ImageNet to an image captioning model, and uses captions that describe the situation represented by each image. Afterwards, the weights obtained by learning situational characteristics from images and captions are extracted and fine-tuned to produce an anomaly detection model. To evaluate the proposed methodology, an anomaly detection experiment was performed on 400 situational images; the results show that the proposed methodology outperforms the conventional pre-trained model in terms of anomaly detection accuracy and F1-score.
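
A simplified sketch of the final fine-tuning step in PyTorch/torchvision: an ImageNet-pretrained encoder (standing in for the weights transferred through the captioning stage, which is omitted here) receives a new normal/abnormal head and is fine-tuned on situational images. All hyperparameters are illustrative.

```python
import torch
import torch.nn as nn
from torchvision import models

encoder = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
encoder.fc = nn.Linear(encoder.fc.in_features, 2)   # normal vs. abnormal head

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One fine-tuning step on a batch of situational images."""
    optimizer.zero_grad()
    loss = criterion(encoder(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch: 4 RGB images at 224x224 with binary anomaly labels.
print(fine_tune_step(torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 0, 1])))
```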

Effect of Microcurrent Wave Superposition on Cognitive Improvement in Alzheimer's Disease Mice Model (알츠하이머 질환 마우스에서 중첩주파수를 활용한 미세전류가 인지능력 개선에 미치는 효과)

  • Kim, Min Jeong;Lee, Ah Young;Cho, Dong Shik;Cho, Eun Ju
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.5 / pp.241-251 / 2019
  • In the present study, we investigated the effect of microcurrent against cognitive impairment in an Alzheimer's disease (AD) mouse model. Cognitive impairment was induced by intracerebroventricular injection of amyloid beta (Aβ) into ICR mouse brains, and four kinds of microcurrent waves were applied to the AD mice. Compared with the Aβ-injected control group, microcurrent-treated AD mice showed improved cognitive ability in the novel object recognition test and the Morris water maze test. The brain content of malondialdehyde generated by Aβ was also reduced by microcurrent application. These effects were related to the modulation of Aβ production and brain-derived neurotrophic factor (BDNF). Microcurrent down-regulated β-secretase, presenilin 1, and presenilin 2, which are related to the amyloidogenic pathway, and up-regulated BDNF in the mouse brain, especially in the Wave4 group [step-form wave (0, 1.5, 3, 5 V) with wave superposition]. These results suggest that microcurrent application could, at least in part, help improve learning and memory ability.

A Study on the Design and Implementation of Multi-Disaster Drone System Using Deep Learning-Based Object Recognition and Optimal Path Planning (딥러닝 기반 객체 인식과 최적 경로 탐색을 통한 멀티 재난 드론 시스템 설계 및 구현에 대한 연구)

  • Kim, Jin-Hyeok;Lee, Tae-Hui;Han, Yamin;Byun, Heejung
    • KIPS Transactions on Computer and Communication Systems / v.10 no.4 / pp.117-122 / 2021
  • In recent years, casualties and property losses caused by disasters such as typhoons, earthquakes, forest fires, landslides, and wars have occurred steadily, and preventing and recovering from them requires considerable manpower and funding. In this paper, we designed and developed an artificial-intelligence-based disaster drone system to monitor these disaster situations in advance and to recognize and respond quickly when a disaster occurs. In this study, multiple disaster drones are deployed in areas that are difficult for humans to monitor, and each drone searches efficiently along an optimal path computed by a deep learning-based path planning algorithm. In addition, to cope with limited battery capacity, a fundamental constraint of drones, the optimal route of each drone is determined using Ant Colony Optimization (ACO). To demonstrate the proposed system, it was applied to a forest fire scenario: a forest fire map was created from the transmitted data and displayed visually to dispatched firefighters by a drone equipped with a beam projector. In the proposed system, multiple drones can detect a disaster situation in a short time by performing optimal path search and object recognition simultaneously. Based on this research, the system can be used to build disaster drone infrastructure, to search for victims (at sea, in mountains, or in jungles), to extinguish fires with drones, and for security drones.
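
An illustrative Ant Colony Optimization sketch for ordering a drone's search waypoints into a short route; the colony size, alpha, beta, and evaporation rate are arbitrary values, not those used in the paper.

```python
import numpy as np

def aco_route(points, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, q=1.0):
    """Return a short visiting order over waypoints using basic ACO."""
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1) + 1e-9
    pher = np.ones((n, n))                      # pheromone trails
    best_route, best_len = None, np.inf
    rng = np.random.default_rng(0)

    for _ in range(n_iters):
        routes = []
        for _ in range(n_ants):
            route = [rng.integers(n)]
            while len(route) < n:
                i = route[-1]
                mask = np.ones(n, bool)
                mask[route] = False             # forbid already-visited nodes
                w = (pher[i] ** alpha) * ((1.0 / dist[i]) ** beta) * mask
                route.append(rng.choice(n, p=w / w.sum()))
            routes.append(route)
        pher *= (1 - rho)                       # evaporation
        for route in routes:
            length = sum(dist[route[k], route[k + 1]] for k in range(n - 1))
            if length < best_len:
                best_route, best_len = route, length
            for k in range(n - 1):              # deposit pheromone along the route
                pher[route[k], route[k + 1]] += q / length
    return best_route, best_len

waypoints = np.random.rand(8, 2) * 100          # dummy coordinates in metres
print(aco_route(waypoints))
```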

A Study on the Real-time Recognition Methodology for IoT-based Traffic Accidents (IoT 기반 교통사고 실시간 인지방법론 연구)

  • Oh, Sung Hoon;Jeon, Young Jun;Kwon, Young Woo;Jeong, Seok Chan
    • The Journal of Bigdata / v.7 no.1 / pp.15-27 / 2022
  • In the past five years, the fatality rate of single-vehicle accidents has been 4.7 times higher than that of accidents overall, so a system that can detect single-vehicle accidents and respond to them immediately is needed. The IoT (Internet of Things)-based real-time traffic accident recognition system proposed in this study works as follows: an IoT sensor that detects vehicle entry and impacts is attached to the guardrail, and when an impact on the guardrail occurs, images of the accident site are analyzed with artificial intelligence technology and transmitted to a rescue organization so that rescue operations can start quickly and damage is minimized. We implemented an IoT sensor module that recognizes vehicles entering the monitoring area and detects guardrail impacts, an AI-based object detection module trained on vehicle image data, and a monitoring and operation module that manages sensor information and image data in an integrated manner. To validate the system, we measured the impact-detection transmission speed, the object detection accuracy for vehicles and people, and the sensor-failure detection accuracy, and confirmed that all target values were met. In the future, we plan to apply the system to actual roads to verify its validity with real data and to commercialize it. This system will contribute to improving road safety.
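
A high-level sketch of the event flow described above: an impact reading above a threshold triggers image capture, object detection, and an alert. The sensor reading, detector stub, and endpoint URL are hypothetical placeholders, not the system's actual interfaces.

```python
import cv2
import requests

IMPACT_THRESHOLD_G = 3.0
ALERT_URL = "https://example.invalid/api/accident-alert"   # placeholder endpoint

def detect_objects(frame):
    """Stand-in for the AI object-detection module trained on vehicle images."""
    return [{"label": "vehicle", "confidence": 0.9}]

def on_sensor_reading(impact_g, camera_index=0):
    """Handle one impact reading from the guardrail IoT sensor."""
    if impact_g < IMPACT_THRESHOLD_G:
        return
    cap = cv2.VideoCapture(camera_index)        # capture the accident-site image
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return
    detections = detect_objects(frame)
    requests.post(ALERT_URL,                    # forward to the rescue organization
                  json={"impact_g": impact_g, "objects": detections},
                  timeout=5)
```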

Design And Implementation of Zone Based Location Tracking System Using ZigBee in Indoor Environment (실내 환경에서 ZigBee를 이용한 Zone 기반 위치추적 시스템 설계 및 구현)

  • Nam, Jin-Woo;Chung, Yeong-Jee
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2009.10a / pp.1003-1006 / 2009
  • Recently, ubiquitous computing has increased the need for object recognition and location tracking technologies to support various applications. Location tracking is fundamental to context awareness of users in a ubiquitous environment, and its efficiency should be improved using IEEE 802.15.4 ZigBee, which is already deployed in infrastructure such as ubiquitous sensor networks. However, because the IEEE 802.15.4 ZigBee protocol limits the use of location tracking techniques such as ToA and TDoA, a zone-based location tracking technique using RSSI is needed. This paper suggests an RSSI-based IEEE 802.15.4 ZigBee local positioning protocol to support a location tracking service in a ubiquitous environment, and designs a zone-based location tracking system for an actual indoor location tracking service.
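
A minimal sketch of the zone-based idea: the tag is assigned to the zone of the anchor reporting the strongest RSSI, with a log-distance path-loss model giving a rough range estimate. The anchor readings and radio constants are illustrative, not measurements from the paper.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-45.0, path_loss_exp=2.7):
    """Log-distance path-loss model: d = 10 ** ((TxPower - RSSI) / (10 * n))."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def locate_zone(readings):
    """readings: {anchor_id: rssi_dbm} collected from fixed ZigBee anchor nodes."""
    anchor, rssi = max(readings.items(), key=lambda kv: kv[1])   # strongest signal
    return anchor, rssi_to_distance(rssi)

readings = {"zone_A": -62, "zone_B": -71, "zone_C": -80}   # dummy RSSI values (dBm)
zone, approx_dist = locate_zone(readings)
print(zone, round(approx_dist, 1), "m")
```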

Thermal imaging and computer vision technologies for the enhancement of pig husbandry: a review

  • Md Nasim Reza;Md Razob Ali;Samsuzzaman;Md Shaha Nur Kabir;Md Rejaul Karim;Shahriar Ahmed;Hyunjin Kyoung;Gookhwan Kim;Sun-Ok Chung
    • Journal of Animal Science and Technology / v.66 no.1 / pp.31-56 / 2024
  • Pig farming, a vital industry, necessitates proactive measures for early disease detection and crush symptom monitoring to ensure optimum pig health and safety. This review explores advanced thermal sensing technologies and computer vision-based thermal imaging techniques employed for pig disease and piglet crush symptom monitoring on pig farms. Infrared thermography (IRT) is a non-invasive and efficient technology for measuring pig body temperature, providing advantages such as non-destructive, long-distance, and high-sensitivity measurements. Unlike traditional methods, IRT offers a quick and labor-saving approach to acquiring physiological data impacted by environmental temperature, crucial for understanding pig body physiology and metabolism. IRT aids in early disease detection, respiratory health monitoring, and evaluating vaccination effectiveness. Challenges include body surface emissivity variations affecting measurement accuracy. Thermal imaging and deep learning algorithms are used for pig behavior recognition, with the dorsal plane effective for stress detection. Remote health monitoring through thermal imaging, deep learning, and wearable devices facilitates non-invasive assessment of pig health, minimizing medication use. Integration of advanced sensors, thermal imaging, and deep learning shows potential for disease detection and improvement in pig farming, but challenges and ethical considerations must be addressed for successful implementation. This review summarizes the state-of-the-art technologies used in the pig farming industry, including computer vision algorithms such as object detection, image segmentation, and deep learning techniques. It also discusses the benefits and limitations of IRT technology, providing an overview of the current research field. This study provides valuable insights for researchers and farmers regarding IRT application in pig production, highlighting notable approaches and the latest research findings in this field.