• Title/Summary/Keyword: smart camera

Search Result 560, Processing Time 0.033 seconds

Human Skeleton Keypoints based Fall Detection using GRU (PoseNet과 GRU를 이용한 Skeleton Keypoints 기반 낙상 감지)

  • Kang, Yoon Kyu;Kang, Hee Yong;Weon, Dal Soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.127-133 / 2021
  • Recent work on human falls has analyzed fall motions with recurrent neural networks (RNNs), using deep learning to detect 2D human poses from a single color image with good results. In this paper, we investigate a detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their positional change, using skeletal keypoint information extracted by PoseNet from images captured with a low-cost 2D RGB camera, to increase the accuracy of fall judgments. In particular, we propose a fall detection method based on the characteristics of post-fall posture. A public data set was used to extract human skeletal features, and in an experiment to find a feature-extraction method that achieves high classification accuracy, the proposed method detected falls with a 99.8% success rate, outperforming a conventional method that uses raw skeletal data.
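The kinematic cue the abstract describes (position of the head/shoulder keypoints and the acceleration of their change) can be sketched as finite differences over a keypoint's vertical track. This is an illustrative sketch only: the function names, frame rate, and velocity threshold are assumptions, not the paper's values.

```python
# Hypothetical sketch of the keypoint-kinematics idea: track a head
# keypoint's y-coordinate over frames and flag a fall when the downward
# velocity spikes. Thresholds and names are illustrative, not the paper's.

def finite_differences(ys, fps=30.0):
    """First and second differences (velocity, acceleration) of a
    keypoint's vertical position, in pixels/s and pixels/s^2."""
    dt = 1.0 / fps
    vel = [(ys[i + 1] - ys[i]) / dt for i in range(len(ys) - 1)]
    acc = [(vel[i + 1] - vel[i]) / dt for i in range(len(vel) - 1)]
    return vel, acc

def looks_like_fall(head_ys, fps=30.0, vel_thresh=300.0):
    """Flag a fall if the head keypoint moves down (y grows in image
    coordinates) faster than vel_thresh pixels/s at any point."""
    vel, _ = finite_differences(head_ys, fps)
    return any(v > vel_thresh for v in vel)

# Slow sitting motion vs. an abrupt drop (y in pixels, 30 fps)
sitting = [100, 103, 106, 109, 112, 115]
falling = [100, 105, 125, 170, 240, 320]
print(looks_like_fall(sitting))  # slow descent stays below threshold
print(looks_like_fall(falling))  # abrupt drop exceeds it
```

In practice such a kinematic trigger would be combined with a GRU classifier over the full keypoint sequence, as the paper does.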

Intelligent Hospital Information System Model for Medical AI Research/Development and Practical Use (의료인공지능 연구/개발 및 실용화를 위한 지능형 병원정보시스템 모델)

  • Shon, Byungeun;Jeong, Sungmoon
    • Journal of the Korea Convergence Society / v.13 no.3 / pp.67-75 / 2022
  • Medical information is generated not only by medical devices but also by a variety of electronic devices. Convergence technologies, from big-data collection in healthcare to medical AI products for analyzing a patient's condition, are increasing rapidly, but their independent development procedures make them difficult to apply. In this paper, we propose an intelligent hospital information system (iHIS) model that simplifies and integrates the research, development, and application of medical AI technology. The proposed model includes (1) real-time patient data management, (2) specialized data management for medical AI development, and (3) real-time patient monitoring. Using the model, we introduce real-time biometric data collection and medical-AI-specialized data generation from patient monitoring devices, as well as two concrete AI applications: camera-based patient gait analysis and brain MRA-based cerebrovascular disease analysis. The proposed model is expected to improve the HIS by strengthening the security of data management and improving practical use through a consistent interface platform.

An Experimental Study on Assessing Precision and Accuracy of Low-cost UAV-based Photogrammetry (저가형 UAV 사진측량의 정밀도 및 정확도 분석 실험에 관한 연구)

  • Yun, Seonghyeon;Lee, Hungkyu;Choi, Woonggyu;Jeong, Woochul;Jo, Eonjeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.3 / pp.207-215 / 2022
  • This research focuses on assessing the precision and accuracy of UAV (Unmanned Aerial Vehicle)-derived 3-D surveying coordinates. To this end, a highly precise and accurate test control network was established through a GNSS (Global Navigation Satellite Systems) campaign and its network adjustment. The coordinates of the ground control points and check points were estimated to within 1 cm at the 95% confidence level. An FC330 camera mounted on a DJI Phantom 4 repeatedly took aerial photos of the experimental area seven times, which were then processed by two widely used software packages. To evaluate the precision and accuracy of the aerial surveys, the 3-D coordinates of ten check points automatically extracted by the software were compared with the GNSS solutions. At the 95% confidence level, the standard deviations of the two packages' results are within 1 cm, 2 cm, and 4 cm in the north-south, east-west, and height directions, while the RMSE (Root Mean Square Error) is within 9 cm and 8 cm for the horizontal and vertical components, respectively. Notably, the standard deviation is much smaller than the RMSE. An F-ratio test was performed to confirm the statistical difference between the two processing results. For the standard deviation and RMSE of most positional components, except the RMSE of the height, the null hypothesis of the one-tailed test was rejected, indicating that UAV photogrammetry results can differ statistically depending on the processing software.
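The gap between standard deviation and RMSE noted in the abstract arises because RMSE is taken about zero and therefore absorbs any systematic bias, while the standard deviation is taken about the mean. A minimal sketch with hypothetical check-point errors (the numbers are illustrative, not the paper's data):

```python
# Why std << RMSE: errors with small scatter around a constant offset.
import math

def stats(errors):
    n = len(errors)
    mean = sum(errors) / n                                   # bias
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    rmse = math.sqrt(sum(e ** 2 for e in errors) / n)        # about zero
    return mean, std, rmse

# Hypothetical height-component errors (m): tight scatter around ~7 cm,
# the kind of pattern a biased but precise UAV block might show.
errs = [0.068, 0.072, 0.066, 0.074, 0.070, 0.071, 0.069]
mean, std, rmse = stats(errs)
print(f"bias={mean:.3f} m  std={std:.4f} m  rmse={rmse:.3f} m")
```

With a constant offset, RMSE ≈ bias while the standard deviation only reflects the millimetre-level scatter, reproducing the pattern the paper reports.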

A Study on Tire Surface Defect Detection Method Using Depth Image (깊이 이미지를 이용한 타이어 표면 결함 검출 방법에 관한 연구)

  • Kim, Hyun Suk;Ko, Dong Beom;Lee, Won Gok;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.5 / pp.211-220 / 2022
  • Research on smart factories, triggered by the 4th industrial revolution, has recently been active, and the manufacturing industry is conducting various studies to improve productivity and quality based on deep learning with robust performance. This paper studies a method of detecting tire surface defects in the visual inspection stage of the tire manufacturing process, using depth images acquired with a 3D camera. The tire-surface depth images in this study suffer from low contrast, caused by the shallow depth of the tire surface, and from differences in the reference depth value due to the acquisition environment. Moreover, the manufacturing setting requires an algorithm that delivers real-time processing speed along with detection performance. We therefore studied relatively simple methods for normalizing the depth image, so that the defect-detection pipeline does not become a complex chain of algorithms, and compared a general normalization method with the method proposed in this paper using YOLO V3, which satisfies both detection performance and speed requirements. The experiments confirm that the proposed normalization method improves performance by about 7% in mAP@0.5, demonstrating its effectiveness.
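The two problems the abstract names, a varying reference depth and shallow surface relief, suggest a per-image normalization that removes the image-specific reference depth and stretches the residual relief to the full 8-bit range. The sketch below is illustrative only; the choice of the median as the reference plane is an assumption, not necessarily the paper's method.

```python
import numpy as np

# Minimal per-image depth normalization: subtract a reference depth
# (here the median, an illustrative choice) and map the shallow relief
# symmetrically onto 0..255 to restore contrast.
def normalize_depth(depth):
    ref = np.median(depth)                  # image-specific reference depth
    rel = depth.astype(np.float64) - ref    # relief relative to the surface
    span = np.abs(rel).max()
    if span == 0:
        return np.zeros_like(depth, dtype=np.uint8)
    out = (rel / span + 1.0) * 127.5        # map [-span, span] -> [0, 255]
    return out.astype(np.uint8)

# Two captures of the same relief at different reference depths map to
# the same normalized image.
relief = np.array([[0, 1, 2], [1, 5, 1], [2, 1, 0]], dtype=np.float64)
a = normalize_depth(1000.0 + relief)
b = normalize_depth(1500.0 + relief)
print(np.array_equal(a, b))
```

Because the reference depth cancels out, a detector such as YOLO V3 sees consistent inputs regardless of the acquisition distance.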

A Study on the Relationship between Camera and Subject for Visualization of Image - A Focus on the Status of Watch a Movie with Small Mobile Device - (영상의 시각화를 위한 카메라와 피사체의 상관관계 연구 - 스마트폰 사용자의 영상 시청 현황을 중심으로 -)

  • Ko, Hyun-Wook
    • Journal of Korea Entertainment Industry Association / v.13 no.5 / pp.119-126 / 2019
  • Movies have traditionally been watched on a big screen, in a theater or on a large TV; nowadays, viewing on small platforms such as mobile devices is increasing rapidly. This change is closely related to the advent of Internet-based video streaming services such as OTT. OTT provides video viewing without a set-top box and is therefore free of the limitations of time and space. The market leader is Netflix[1], which started with an Internet-based DVD rental service; growing in tandem with the mobile market, Netflix had 193.26[2] million members as of the end of 2018. Other OTT participants include the content-based Pooq and TVing and the platform-based Olleh TV Mobile, Oksusu, and LTE video portal. This new market has grown steadily, with 25.4 percent of all smartphone users now watching video content on small mobile devices. Given this shift away from large screens, a visual language suited to viewing on small, OTT-capable mobile devices is needed. To this end, this paper identifies problems in viewing popular video content on small mobile devices and surveys, through a questionnaire, their impact on viewers.

Examination of Aggregate Quality Using Image Processing Based on Deep-Learning (딥러닝 기반 영상처리를 이용한 골재 품질 검사)

  • Kim, Seong Kyu;Choi, Woo Bin;Lee, Jong Se;Lee, Won Gok;Choi, Gun Oh;Bae, You Suk
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.255-266 / 2022
  • Quality control of coarse aggregate, a main ingredient of concrete, is currently carried out by SPC (Statistical Process Control) through sampling. Toward a smart factory for manufacturing innovation, we replace the current sieve analysis with image-based inspection of coarse aggregates using camera-acquired images. First, the obtained images are preprocessed, and HED (Holistically-Nested Edge Detection), an edge filter learned by deep learning, segments each object. Each aggregate particle is then analyzed by image processing of the segmentation result, and the fineness modulus and the aggregate shape rate are determined from the analysis. The fineness modulus and aggregate shape rate computed from the video agreed with sieve-analysis results with more than 90% accuracy. Furthermore, the aggregate shape rate, which cannot be examined by conventional methods, can be measured with the approach in this paper; verified against model lengths, it showed a difference of ±4.5%. When measuring aggregate length, the algorithm's result and the actual length differed by ±6%. Because three-dimensional objects are analyzed in two-dimensional video, a difference from the actual data remains, which requires further research.
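The fineness modulus the abstract computes is a standard aggregate metric: the sum of the cumulative percentages retained on the standard sieves, divided by 100. A minimal sketch with an illustrative gradation (the numbers are hypothetical, not the paper's data):

```python
# Fineness modulus: sum of cumulative % retained on the standard sieve
# series, divided by 100. The gradation below is illustrative only.

STANDARD_SIEVES_MM = [75, 37.5, 19, 9.5, 4.75, 2.36, 1.18, 0.6, 0.3, 0.15]

def fineness_modulus(cum_retained_pct):
    """cum_retained_pct: cumulative % retained on each standard sieve,
    listed from the largest opening to the smallest."""
    return sum(cum_retained_pct) / 100.0

# Hypothetical coarse-aggregate gradation (cumulative % retained),
# aligned with STANDARD_SIEVES_MM above.
cum = [0, 0, 10, 55, 95, 100, 100, 100, 100, 100]
print(fineness_modulus(cum))  # 6.6, a typical coarse-aggregate value
```

Whether the cumulative percentages come from a physical sieve stack or from particle sizes measured in segmented images, the same formula applies, which is what makes the image-based substitution in the paper comparable to sieve analysis.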

Improvement of Face Recognition Algorithm for Residential Area Surveillance System Based on Graph Convolution Network (그래프 컨벌루션 네트워크 기반 주거지역 감시시스템의 얼굴인식 알고리즘 개선)

  • Tan Heyi;Byung-Won Min
    • Journal of Internet of Things and Convergence / v.10 no.2 / pp.1-15 / 2024
  • The construction of smart communities is a new and important measure for ensuring the security of residential areas. To address the low face-recognition accuracy caused by facial-feature distortion from surveillance-camera angles and other external factors, this paper proposes the following optimizations in designing a face recognition network. First, a global graph convolution module encodes facial features as graph nodes, and a multi-scale feature-enhancement residual module extracts facial keypoint features in conjunction with it. Second, the facial keypoints are constructed as a directed graph, and graph attention mechanisms enhance the representational power of the graph features. Finally, tensor computations are performed on the graph features of two faces, and the aggregated features are extracted and discriminated by a fully connected layer to determine whether the two identities are the same. In experiments, the proposed network achieves an AUC of 85.65% for facial keypoint localization on the 300W public dataset and 88.92% on a self-built dataset. For face recognition accuracy, it achieves 83.41% on the IBUG public dataset and 96.74% on a self-built dataset. These results demonstrate high detection and recognition accuracy for faces in surveillance videos.
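The graph-attention step named in the abstract can be illustrated with a minimal single-head version: project node features, compute pairwise attention logits, mask them to the directed edges, softmax-normalize, and mix neighbor features. This is a generic sketch of the mechanism, not the paper's actual network; the toy graph and identity projection are assumptions.

```python
import numpy as np

# Minimal single-head graph attention over keypoint nodes (illustrative).
def graph_attention(feats, adj, w):
    h = feats @ w                                 # project node features
    logits = h @ h.T                              # pairwise attention logits
    logits = np.where(adj > 0, logits, -1e9)      # keep only graph edges
    logits = logits - logits.max(axis=1, keepdims=True)  # numeric stability
    alpha = np.exp(logits)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)     # softmax per node
    return alpha @ h                              # attention-weighted mix

# Toy directed keypoint graph of three nodes with self-loops; a real
# facial-keypoint graph would have one node per detected landmark.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
w = np.eye(2)
out = graph_attention(feats, adj, w)
print(out.shape)  # one enhanced feature vector per keypoint node
```

In the paper's pipeline the enhanced node features of two faces are then aggregated and passed to a fully connected layer for same/different discrimination.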

A Study on Atmospheric Turbulence-Induced Errors in Vision Sensor based Structural Displacement Measurement (대기외란시 비전센서를 활용한 구조물 동적 변위 측정 성능에 관한 연구)

  • Junho Gong
    • Journal of the Korea institute for structural maintenance and inspection / v.28 no.3 / pp.1-9 / 2024
  • This study proposes a multi-scale template matching technique with image pyramids (TMI) to measure structural dynamic displacement with a vision sensor under atmospheric turbulence, and evaluates its measurement performance. To evaluate performance as a function of distance, a three-story shear structure was designed and an FHD camera was used to measure the structural response. The measurement distance started at 10 m and increased in 10 m increments up to 40 m. Atmospheric disturbance was generated with a heating plate under indoor illuminance conditions, so that optical turbulence distorted the image. In preliminary experiments, the feasibility of a feature-point-based displacement measurement method and of the proposed method under atmospheric disturbance was compared and verified, and the proposed method showed a lower measurement error rate. In the main evaluation, the displacement measurement performance of TMI with an artificial target showed no significant difference with or without atmospheric disturbance. With natural targets, however, the RMSE increased significantly at shooting distances of 20 m or more, revealing an operating limitation of the proposed technique: as the shooting distance increases, the resolution of the natural target decreases, and image distortion from atmospheric disturbance causes errors in template-image estimation, resulting in high displacement measurement error.
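The pyramid idea behind TMI can be sketched as a two-level search: locate the template coarsely on downsampled copies, then refine at full resolution in a small window around the upscaled estimate. The sketch below uses plain normalized cross-correlation; the two-level pyramid, decimation by slicing, and search radius are illustrative choices, not the paper's implementation.

```python
import numpy as np

# Two-level template matching with an image pyramid (illustrative sketch).
def ncc(patch, tmpl):
    p = patch - patch.mean()
    t = tmpl - tmpl.mean()
    d = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / d if d > 0 else 0.0

def match(img, tmpl, ys, xs):
    th, tw = tmpl.shape
    best, pos = -2.0, (next(iter(ys)), next(iter(xs)))
    for y in ys:
        for x in xs:
            s = ncc(img[y:y + th, x:x + tw], tmpl)
            if s > best:
                best, pos = s, (y, x)
    return pos

def pyramid_match(img, tmpl, radius=2):
    th, tw = tmpl.shape
    # Level 1: exhaustive search on 2x-downsampled image and template.
    small_img, small_tmpl = img[::2, ::2], tmpl[::2, ::2]
    sh, sw = small_tmpl.shape
    cy, cx = match(small_img, small_tmpl,
                   range(small_img.shape[0] - sh + 1),
                   range(small_img.shape[1] - sw + 1))
    # Level 0: refine in a small window around the upscaled estimate.
    ys = range(max(0, 2 * cy - radius), min(img.shape[0] - th, 2 * cy + radius) + 1)
    xs = range(max(0, 2 * cx - radius), min(img.shape[1] - tw, 2 * cx + radius) + 1)
    return match(img, tmpl, ys, xs)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
tmpl = img[6:14, 8:16].copy()     # template cut at a known location
print(pyramid_match(img, tmpl))   # recovers the template's position
```

The coarse pass shrinks the search space quadratically, which is what makes template tracking feasible per video frame; the refinement pass restores full-resolution accuracy.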

Implementation of Radiotherapy Educational Contents Using Virtual Reality (가상현실 기술을 활용한 방사선치료 교육 콘텐츠 제작 구현)

  • Kwon, Soon-Mu;Shim, Jae-Goo;Chon, Kwon-Su
    • Journal of the Korean Society of Radiology / v.12 no.3 / pp.409-415 / 2018
  • The development of smart devices has brought significant changes to daily life, one of the most significant being virtual reality. Virtual reality is a technology that uses a display device to create the illusion of actually being inside a 3D high-resolution image. Subjects that cannot be practiced directly must rely on audiovisual materials, which lowers concentration during practice and the quality of classes; we therefore used virtual reality to develop effective teaching materials for radiology students. To produce the virtual-reality video, a radiology clinic was selected and filming was carried out twice between July and September 2017. The video was produced taking the radiotherapy workflow into account, and filming took place in two locations: the computed tomography unit and the LINAC room. The scenario and the filming route were checked in advance to facilitate editing. Modeling and mapping were performed in a PC environment running the Windows XP operating system. Filming used two leading virtual-reality cameras (GoPro Hero) in 4K UHD, and the footage was edited with Adobe CC at 8-megapixel resolutions of 3,840×2,160/4,096×2,160. Total playback time was kept to about five minutes to prevent nausea and dizziness. The developed virtual-reality radiotherapy educational content is being promoted so that it can be used by various institutions, and the researchers will investigate user satisfaction with the content and carry out supplementary work depending on the results.

A Road Luminance Measurement Application based on Android (안드로이드 기반의 도로 밝기 측정 어플리케이션 구현)

  • Choi, Young-Hwan;Kim, Hongrae;Hong, Min
    • Journal of Internet Computing and Services / v.16 no.2 / pp.49-55 / 2015
  • Statistics on traffic accidents over the past five years show that more accidents occur at night than during the day. Among their various causes, one major cause is inappropriate or missing street lights, which confuse drivers' sight. In this paper, we designed and implemented a smartphone application that measures lane luminance and stores the driver's location, driving information, and lane luminance in a database in real time, in order to identify inadequate street-light facilities and areas without street lights. The application is implemented in a native C/C++ environment using the Android NDK, which improves its speed over code written in Java or other languages. To measure road luminance, the input image in RGB color space is converted to YCbCr color space, whose Y channel gives the luminance. The application detects the road lane and records the lane luminance in the database server. It receives road video through the smartphone's camera and reduces computational cost by restricting processing to an ROI (Region Of Interest) of the input images. The ROI is converted to grayscale and the Canny edge detector is applied to extract lane outlines; a Hough line transform then yields a group of candidate lanes, and a lane-detection algorithm using the gradients of the candidates selects the two sides of the lane. When both lanes are detected, a triangular area is set up extending 20 pixels down from the intersection of the lanes, and the road luminance is estimated from this area: a Y value is calculated from the R, G, and B values of each pixel in the triangle, the average Y value is rescaled to the range 0 to 100 to express the road luminance, and each pixel value is rendered in a color between black and green. Every 10 minutes, after analyzing the lane video for the road luminance about 60 meters ahead, the application stores the car's GPS location in the database server over wireless communication. We expect the collected road-luminance information to help warn drivers about safe driving and to improve renovation plans for road-lighting management.
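The Y computation in the pipeline above is the standard luma weighting of the RGB channels. A minimal sketch, with the 0-100 rescaling of the road-luminance score modeled after the abstract's description (the ROI pixel list and function names are illustrative):

```python
# BT.601 luma from RGB, the Y of the RGB -> YCbCr conversion the paper
# uses, followed by the abstract's 0..100 road-luminance rescaling.

def luma(r, g, b):
    """BT.601 luma from 8-bit RGB components, in the range 0..255."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def road_luminance_score(pixels):
    """Average luma of the triangular-ROI pixels, rescaled to 0..100."""
    ys = [luma(r, g, b) for (r, g, b) in pixels]
    return (sum(ys) / len(ys)) / 255.0 * 100.0

# Hypothetical ROI pixels (R, G, B) from the triangle below the lane
# intersection; darker pixels pull the score toward 0.
roi = [(30, 30, 30), (80, 80, 80), (40, 42, 38)]
print(round(road_luminance_score(roi), 1))
```

Note that the green channel dominates the weighting, which matches the human eye's sensitivity and explains why the app can render the score on a black-to-green scale.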