• Title/Abstract/Keywords: Object classification

Search results: 850 items (processing time: 0.026 s)

Statistical Analysis of Domestic Laboratory Accidents using Classification Criteria of KCD 7 and OIICS

  • 나예지; 장남권; 원정훈
    • 한국안전학회지 / Vol. 34, No. 3 / pp.42-49 / 2019
  • This study statistically analyzed laboratory accidents by investigating 806 laboratory accident survey reports officially submitted to the government from 2013 to June 2017. After comparing domestic and foreign accident classification criteria, the laboratory accidents were classified using the KCD 7 (Korean Standard Classification of Diseases) and OIICS (Occupational Injury and Illness Classification System) criteria. The KCD 7 criteria were adopted for the type and part of injury, and the OIICS criteria for the cause and occurrence type of the accidents. Most injuries happened to the wrist and hand and were caused by sharp or chemical materials. The analysis of accident causes showed that accidents arising from medical practice, and accidents from hand tools and chemical materials such as acids and alkalis, occurred frequently. The major occurrence type of laboratory accidents was body exposure to chemical materials such as hydrochloric acid and sulfuric acid. In addition, accidents resulting in the destruction of a grasped object or in falling objects were frequently reported.
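
As a hedged illustration of the two-axis tabulation the abstract above describes, the following pandas sketch cross-tabulates accident records by KCD 7 injury code and OIICS cause category. The column names and code values are invented for illustration and are not the actual fields of the government survey reports.

```python
import pandas as pd

# Hypothetical accident records; the column names and code values are
# illustrative stand-ins, not the actual fields of the survey reports.
records = pd.DataFrame({
    "injury_part_kcd7": ["S61", "S61", "T20", "S61"],      # KCD 7 injury codes
    "cause_oiics": ["hand tool", "chemical", "chemical", "hand tool"],
})

# Cross-tabulate injury classification against accident cause, mirroring
# the two-criteria analysis described in the abstract.
table = pd.crosstab(records["injury_part_kcd7"], records["cause_oiics"])
print(table)
```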

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali; A. Sri Nagesh
    • International Journal of Computer Science & Network Security / Vol. 23, No. 11 / pp.67-72 / 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, largely due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. Real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In autonomous vehicles especially, efficient fusion of data from these two sensor types is important for estimating the depth of objects and for classifying objects at short and long distances. This paper presents the classification of objects using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image up-sampling theory. The LiDAR point cloud is up-sampled and converted into pixel-level depth information, which is then combined with red-green-blue (RGB) data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment from the integrated vision and LiDAR data, and is designed to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
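
A minimal PyTorch sketch of the early-fusion pipeline the abstract describes: up-sample the projected LiDAR depth to pixel level, stack it with the RGB channels, and feed the four-channel input to a CNN classifier. The layer sizes and the bilinear up-sampling are assumptions; the paper does not specify its exact architecture here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionCNN(nn.Module):
    """Early fusion of RGB and up-sampled LiDAR depth (assumed layer sizes)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, rgb, sparse_depth):
        # Up-sample the projected LiDAR depth to pixel-level resolution,
        # then stack it with the RGB channels (early fusion).
        depth = F.interpolate(sparse_depth, size=rgb.shape[-2:],
                              mode="bilinear", align_corners=False)
        x = torch.cat([rgb, depth], dim=1)  # (B, 4, H, W)
        return self.classifier(self.features(x).flatten(1))

rgb = torch.rand(2, 3, 128, 128)   # camera image batch (synthetic)
depth = torch.rand(2, 1, 32, 32)   # low-resolution projected LiDAR depth
print(FusionCNN()(rgb, depth).shape)  # torch.Size([2, 10])
```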

Generation of Large-scale Map of Surface Sedimentary Facies in Intertidal Zone by Using UAV Data and Object-based Image Analysis (OBIA)

  • 김계림; 유주형
    • 대한원격탐사학회지 / Vol. 36, No. 2-2 / pp.277-292 / 2020
  • In this study, a large-scale map of surface sedimentary facies was generated for the Hwangdo tidal flat in Cheonsu Bay using UAV data and object-based image analysis (OBIA), and accuracy validation was performed to demonstrate the feasibility of precise surface sedimentary facies classification and to suggest a more accurate classification method. To this end, factors that affect sedimentary facies classification, such as the visible-band orthoimage, the digital elevation model (DEM), and tidal channel density, were extracted from high-resolution UAV data, and the principal components of these factors with respect to the sedimentary facies were analyzed using statistical methods. Based on the principal component factors, the input data for facies classification were divided into (1) visible-band spectra, (2) topographic elevation and tidal channel density, and (3) visible-band spectra together with topographic elevation and tidal channel density, and these input data were applied to the OBIA classification method to extract large-scale maps of surface sedimentary facies. Classification under these input-data conditions yielded six surface sedimentary facies following the Folk classification scheme, and the combination of visible-band spectra, topographic elevation, and tidal channel density classified the facies most effectively, with an overall accuracy of 63.04% and a Kappa coefficient of 0.54.
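
A hedged sketch of the statistical workflow the abstract describes: per-object features (visible-band statistics, DEM elevation, tidal channel density) are examined with PCA and then classified into six facies. The data below are synthetic and the random-forest classifier is a stand-in; the paper applies an OBIA classifier to image objects, not this exact model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Per-object feature table (all values synthetic).
X = np.column_stack([
    rng.random((200, 3)),   # mean R, G, B per image object
    rng.random(200),        # DEM elevation
    rng.random(200),        # tidal channel density
])
y = rng.integers(0, 6, 200)  # six Folk sedimentary facies classes

# Principal component analysis of the candidate factors, as in the paper.
print(PCA(n_components=3).fit(X).explained_variance_ratio_)

# Classify objects into facies from the combined inputs (stand-in model).
clf = RandomForestClassifier(random_state=0).fit(X[:150], y[:150])
print("holdout accuracy:", clf.score(X[150:], y[150:]))
```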

Doppler Velocity-based Dynamic Object Tracking and Rejection for Increasing Reliability of Radar Ego-Motion Estimation

  • 박영상; 민경욱; 최정단
    • 한국ITS학회 논문지 / Vol. 21, No. 5 / pp.218-232 / 2022
  • Research is under way on using the radar sensor, traditionally employed for object recognition in vehicles, for localization. In particular, methods have been studied that use the Doppler velocity output by the radar sensor to separate dynamic from static objects and compute ego-motion from the static objects alone. Previous work on dynamic object classification used RANSAC, but localization, where a single algorithm failure can have a large impact, calls for a classification method with higher performance. This paper proposes a method that improves classification performance over the existing approach by tracking and filtering dynamic objects, and additionally maximizes tracking performance with a GMPHD filter. Compared with the existing method, the proposed method showed higher classification accuracy and, in particular, is shown to prevent algorithm failures.
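
The abstract above separates static from dynamic radar detections through their Doppler velocities. A minimal numpy sketch of that underlying idea follows, using a plain least-squares fit and a residual threshold; the GMPHD-based tracking the paper adds on top is not reproduced, and the velocity values and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi / 2, np.pi / 2, 100)  # azimuth of each detection
v_ego = np.array([10.0, 0.5])                    # true sensor velocity (assumed)

# For a static point, the measured radial (Doppler) velocity satisfies
#   v_r = -(vx * cos(theta) + vy * sin(theta))
v_r = -(v_ego[0] * np.cos(theta) + v_ego[1] * np.sin(theta))
v_r[:10] += rng.uniform(3, 6, 10)                # inject dynamic (moving) targets

# Least-squares ego-motion estimate from all detections.
A = -np.column_stack([np.cos(theta), np.sin(theta)])
v_est, *_ = np.linalg.lstsq(A, v_r, rcond=None)

# Detections whose Doppler disagrees with the ego-motion model are dynamic.
residual = np.abs(A @ v_est - v_r)
dynamic = residual > 1.0                          # threshold is an assumption
print("estimated ego velocity:", v_est, "| dynamic detections:", dynamic.sum())
```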

Object/Non-object Image Classification Based on the Detection of Objects of Interest

  • 김성영
    • 한국컴퓨터정보학회논문지 / Vol. 11, No. 2 / pp.25-33 / 2006
  • This paper proposes a method for automatically classifying images into object and non-object images. An object image is an image that contains an object, where an object is defined as a set of regions located near the center of the image whose color distributions differ from the surrounding area. Four criteria based on object characteristics are defined for image classification. The first criterion, the saliency of the central region, is computed from the difference in color distribution between the central and surrounding regions. The second criterion is the variance of salient pixels in the image, where salient pixels are defined by color pairs of mutually adjacent pixels that appear more frequently near the center of the image than in the surrounding region. The third criterion is the average edge strength of the central object; it yields the best classification performance among the criteria, but extracting its feature value entails the costly step of extracting the central object. A fourth criterion with similar characteristics, the average edge strength in the central region of the image, was therefore chosen. The fourth criterion classifies slightly less accurately than the third but allows fast feature extraction, making it applicable to large-scale image databases where large amounts of data must be processed quickly. To classify images, these criteria were combined using a neural network and an SVM, and the classification performance of the two was compared.
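
As a sketch of the first criterion described above, the following code measures the color-distribution difference between the central region and its surround (approximated here by the top and bottom strips) and feeds the resulting feature to an SVM. The histogram size, the L1 distance, the synthetic images and labels, and the SVM parameters are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def center_saliency(image: np.ndarray, bins: int = 8) -> float:
    """Color-distribution difference between the center and its surround."""
    h, w, _ = image.shape
    cy, cx = h // 4, w // 4
    center = image[cy:h - cy, cx:w - cx].reshape(-1, 3)
    # Simplification: top and bottom strips stand in for the full surround.
    border = np.concatenate([image[:cy].reshape(-1, 3),
                             image[h - cy:].reshape(-1, 3)])
    hist = lambda px: np.histogramdd(px, bins=bins, range=[(0, 256)] * 3)[0].ravel()
    hc, hb = hist(center), hist(border)
    hc, hb = hc / hc.sum(), hb / hb.sum()
    return 0.5 * np.abs(hc - hb).sum()  # L1 histogram distance

rng = np.random.default_rng(2)
images = [rng.integers(0, 256, (64, 64, 3)) for _ in range(20)]  # synthetic
X = np.array([[center_saliency(im)] for im in images])
y = rng.integers(0, 2, 20)   # 1 = object image, 0 = non-object (synthetic labels)
clf = SVC().fit(X, y)        # the paper also compares a neural network
print("training accuracy:", clf.score(X, y))
```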


Deeper SSD: Simultaneous Up-sampling and Down-sampling for Drone Detection

  • Sun, Han; Geng, Wen; Shen, Jiaquan; Liu, Ningzhong; Liang, Dong; Zhou, Huiyu
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 12 / pp.4795-4815 / 2020
  • Drone detection can be considered a specific sort of small-object detection, which has always been challenging because of the small size and few features of the targets. To improve the detection rate of drones, we design a Deeper SSD network, which uses a large-scale input image and a deeper convolutional network to obtain more features that benefit small-object classification. At the same time, to improve object classification performance, we implement up-sampling modules that increase the number of features in the low-level feature maps. In addition, to improve object localization performance, we adopt down-sampling modules so that context information can be used directly by the high-level feature maps. The proposed Deeper SSD and its variants are successfully applied to self-designed drone datasets. Our experiments demonstrate the effectiveness of the Deeper SSD and its variants, which are useful for small-drone detection and recognition. The proposed methods can also detect small and large objects simultaneously.
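
A minimal PyTorch sketch of the two fusion directions named above: an up-sampling module that adds high-level semantics to a low-level feature map (helping classification), and a down-sampling module that passes low-level context to a high-level map (helping localization). The channel sizes and the additive fusion are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpSampleFuse(nn.Module):
    """Enrich a low-level map with up-sampled high-level features."""
    def __init__(self, high_ch, low_ch):
        super().__init__()
        self.reduce = nn.Conv2d(high_ch, low_ch, 1)

    def forward(self, low, high):
        up = F.interpolate(self.reduce(high), size=low.shape[-2:],
                           mode="bilinear", align_corners=False)
        return low + up  # low-level map gains semantic features

class DownSampleFuse(nn.Module):
    """Pass strided low-level context into a high-level map."""
    def __init__(self, low_ch, high_ch):
        super().__init__()
        self.proj = nn.Conv2d(low_ch, high_ch, 3, stride=2, padding=1)

    def forward(self, low, high):
        return high + self.proj(low)  # high-level map gains spatial context

low = torch.rand(1, 128, 64, 64)   # earlier, finer feature map
high = torch.rand(1, 256, 32, 32)  # later, coarser feature map
print(UpSampleFuse(256, 128)(low, high).shape,
      DownSampleFuse(128, 256)(low, high).shape)
```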

Popular Object Detection Algorithms in Deep Learning

  • 강동연
    • 한국정보처리학회: 학술대회논문집 / 한국정보처리학회 2019년도 춘계학술발표대회 / pp.427-430 / 2019
  • Object detection is applied in various fields, such as autonomous driving, surveillance, OCR (optical character recognition), and aerial imagery. We review the algorithms that are used for object detection. These algorithms are divided into two families: R-CNN algorithms [2], [5], [6], which are based on region proposals, and YOLO [7] and SSD [8], which are one-stage object detectors based on regression/classification.
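
For readers who want to try one of the two families named above, a minimal inference sketch with torchvision's pretrained two-stage (region-proposal) detector follows. It assumes torchvision >= 0.13 for the weights="DEFAULT" argument, and the random tensor stands in for a real image.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Pretrained two-stage detector (downloads COCO weights on first use).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = torch.rand(3, 480, 640)        # stand-in for a real image tensor
with torch.no_grad():
    out = model([image])[0]            # dict of boxes, labels, scores
print(out["boxes"].shape, out["scores"][:3])
```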

Image classification and captioning model considering a CAM-based disagreement loss

  • Yoon, Yeo Chan; Park, So Young; Park, Soo Myoung; Lim, Heuiseok
    • ETRI Journal / Vol. 42, No. 1 / pp.67-77 / 2020
  • Image captioning has received significant interest in recent years, and notable results have been achieved. Most previous approaches have focused on generating visual descriptions from images, whereas a few approaches have exploited visual descriptions for image classification. This study demonstrates that good performance can be achieved for both description generation and image classification through an end-to-end joint learning approach with a loss function that encourages each task to reach a consensus. When given images and visual descriptions, the proposed model learns a multimodal intermediate embedding, which can represent both the textual and visual characteristics of an object. The performance can be improved for both tasks by sharing the multimodal embedding. Through a novel loss function based on class activation mapping, which localizes the discriminative image region of a model, we achieve a higher score when the captioning and classification models reach a consensus on the key parts of the object. Using the proposed model, we achieved substantially improved performance for each task on the UCSD Birds and Oxford Flowers datasets.
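
A sketch of the mechanism behind the loss described above: compute a class activation map (CAM) from the last convolutional features and the head weights, then penalize disagreement between two maps. The MSE penalty and all tensor shapes are assumptions; the paper's actual disagreement loss and captioning branch are more involved.

```python
import torch
import torch.nn.functional as F

def cam(features: torch.Tensor, fc_weight: torch.Tensor, cls: int) -> torch.Tensor:
    # features: (C, H, W) from the last conv layer; fc_weight: (num_classes, C).
    m = torch.einsum("c,chw->hw", fc_weight[cls], features)
    m = F.relu(m)
    return m / (m.max() + 1e-8)   # normalize to [0, 1]

feats = torch.rand(64, 7, 7)      # assumed feature map size
w_cls = torch.rand(10, 64)        # classification head weights (illustrative)
w_cap = torch.rand(10, 64)        # captioning-side weights (illustrative)

cam_cls, cam_cap = cam(feats, w_cls, 3), cam(feats, w_cap, 3)
disagreement = F.mse_loss(cam_cls, cam_cap)  # penalize lack of consensus
print(disagreement.item())
```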

Sparse Feature Convolutional Neural Network with Cluster Max Extraction for Fast Object Classification

  • Kim, Sung Hee; Pae, Dong Sung; Kang, Tae-Koo; Kim, Dong W.; Lim, Myo Taeg
    • Journal of Electrical Engineering and Technology / Vol. 13, No. 6 / pp.2468-2478 / 2018
  • We propose the Sparse Feature Convolutional Neural Network (SFCNN) to reduce the volume of convolutional neural networks (CNNs). Despite the superior classification performance of CNNs, their enormous network volume requires high computational cost and long processing time, making real-time applications such as online training difficult. We propose an advanced network that reduces the volume of conventional CNNs by producing a region-based sparse feature map. To produce the sparse feature map, two complementary region-based value extraction methods, cluster max extraction and local value extraction, are proposed. Cluster max is selected as the main function based on experimental results. To evaluate SFCNN, we conduct an experiment with two conventional CNNs. The network trains 59 times faster and tests 81 times faster than the VGG network, with a 1.2% loss of accuracy in multi-class classification using the Caltech101 dataset. In vehicle classification using the GTI Vehicle Image Database, the network trains 88 times faster and tests 94 times faster than the conventional CNNs, with a 0.1% loss of accuracy.
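
A numpy/scipy sketch of the cluster max extraction idea named above: threshold a feature map into connected regions and keep only the maximum activation of each region, yielding a sparse map. The thresholding and connected-component clustering are assumptions about how the regions are formed, not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

feature_map = np.random.default_rng(3).random((16, 16))
active = feature_map > 0.8            # threshold into candidate regions
labels, n = ndimage.label(active)     # connected-component clusters

sparse = np.zeros_like(feature_map)
for k in range(1, n + 1):
    mask = labels == k
    # Keep only the single maximum activation of each cluster.
    idx = np.unravel_index(np.argmax(np.where(mask, feature_map, -np.inf)),
                           feature_map.shape)
    sparse[idx] = feature_map[idx]

print(f"{n} clusters, {np.count_nonzero(sparse)} retained activations")
```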

Fast Scene Understanding in Urban Environments for an Autonomous Vehicle Equipped with 2D Laser Scanners

  • 안승욱; 최윤근; 정명진
    • 로봇학회논문지 / Vol. 7, No. 2 / pp.92-100 / 2012
  • A map of a complex environment can be generated by a robot carrying sensors. However, representing the environment directly by integrating sensor data conveys only spatial occupancy. To execute high-level applications, robots need semantic knowledge of their environments. This research investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The proposed system is decomposed into five steps: sequential LIDAR scanning, point classification, ground detection and elimination, segmentation, and object classification. The method can classify the various objects found in urban environments, such as cars, trees, buildings, and posts. Simple methods that minimize time-consuming processing are developed to guarantee real-time performance and to classify the data on the fly as it is acquired. To evaluate the performance of the proposed methods, computation time and recognition rate are analyzed. Experimental results demonstrate that the proposed algorithm is efficient at quickly understanding the semantics of a dynamic urban environment.
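
A sketch of two of the five steps listed above, ground elimination and segmentation, on a synthetic point cloud. The flat-ground height threshold and the DBSCAN-based Euclidean clustering are stand-ins for the paper's own fast heuristics, and all coordinates below are invented.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)
# Synthetic scene: a flat ground plane plus two compact objects.
ground = np.column_stack([rng.uniform(-10, 10, (500, 2)),
                          rng.normal(0, 0.05, 500)])
car = rng.normal([3, 2, 0.8], 0.3, (100, 3))
tree = rng.normal([-4, 5, 2.0], 0.4, (100, 3))
cloud = np.vstack([ground, car, tree])

# Ground elimination: drop points near the z = 0 plane (assumed flat ground).
objects = cloud[cloud[:, 2] > 0.2]

# Segmentation: Euclidean clustering of the remaining points.
labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(objects)
print("segments found:", len(set(labels) - {-1}))
```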