• Title/Summary/Keyword: Object Classification

Deep Learning in Radiation Oncology

  • Cheon, Wonjoong; Kim, Haksoo; Kim, Jinsung
    • Progress in Medical Physics / v.31 no.3 / pp.111-123 / 2020
  • Deep learning (DL) is a subset of machine learning and artificial intelligence that uses deep neural networks, structured loosely like the human nervous system and trained on big data. DL narrows the gap between data acquisition and meaningful interpretation without explicit programming. It has so far outperformed most classification and regression methods and can automatically learn data representations for specific tasks. The application areas of DL in radiation oncology include classification, semantic segmentation, object detection, image translation and generation, and image captioning. This article examines the potential role of DL in radiation oncology and what more can be achieved by applying it. With the advances in DL, studies contributing to the development of radiation oncology were investigated comprehensively. The radiation treatment workflow was divided into six consecutive stages: patient assessment, simulation, target and organs-at-risk segmentation, treatment planning, quality assurance, and beam delivery. Studies using DL were classified and organized according to each stage, state-of-the-art studies were identified, and their clinical utility was examined. DL models can provide faster and more accurate solutions to problems oncologists face. While the benefit of a data-driven approach to the quality of care for cancer patients is clear, implementing these methods will require cultural changes at both the professional and institutional levels. We believe this paper will serve as a guide for both clinicians and medical physicists on issues that need to be addressed in time.

Defect Diagnosis and Classification of Machine Parts Based on Deep Learning

  • Kim, Hyun-Tae; Lee, Sang-Hyeop; Wesonga, Sheilla; Park, Jang-Sik
    • Journal of the Korean Society of Industry Convergence / v.25 no.2_1 / pp.177-184 / 2022
  • Automatic defect sorting of machine parts is being introduced into manufacturing-process automation. In the final stage of such automation, computer vision must replace human visual judgment in deciding whether a part is defective. In this paper, we introduce deep learning methods, supported by experimental results, that improve classification performance for typical machine parts such as welded parts, galvanized round plugs, and electro-galvanized nuts. For weld defects, increasing the depth of the layers of the baseline deep learning model was effective. For the round plug, data surrounding the defect target area degraded performance, which was resolved with an appropriate pre-processing technique. Finally, because the zinc-plated nut has a three-dimensional structure and is imaged by multiple cameras, it is strongly affected by lighting, and the background image also interferes. To solve this problem, methods such as two-dimensional connectivity were applied in the object segmentation preprocessing step. Although the experiments suggest that the proposed methods are effective, most of the provided good/defective image data sets are relatively small, which may cause a learning-balance problem for the deep learning model, so we plan to secure more data in the future.
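
The two-dimensional connectivity preprocessing mentioned in this abstract can be illustrated with a short sketch. This is a hedged illustration, not the authors' code: it assumes OpenCV's connected-component labeling is used to keep only the largest foreground blob, discarding background regions that lighting would otherwise contaminate.

```python
# Minimal sketch (not the paper's implementation) of 2D-connectivity
# preprocessing: keep only the largest connected foreground component.
import cv2
import numpy as np

def isolate_largest_component(gray: np.ndarray) -> np.ndarray:
    """Keep the largest connected foreground blob of an 8-bit grayscale image."""
    # Otsu threshold gives a rough foreground/background mask.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Label 4-connected (two-dimensional connectivity) components.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=4)
    if n <= 1:
        return gray  # nothing found; return the input unchanged
    # Index 0 is the background; pick the largest remaining component by area.
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return np.where(labels == largest, gray, 0).astype(np.uint8)
```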

Shipping Container Load State and Accident Risk Detection Techniques Based on Deep Learning (딥러닝 기반 컨테이너 적재 정렬 상태 및 사고 위험도 검출 기법)

  • Yeon, Jeong Hum; Seo, Yong Uk; Kim, Sang Woo; Oh, Se Yeong; Jeong, Jun Ho; Park, Jin Hyo; Kim, Sung-Hee; Youn, Joosang
    • KIPS Transactions on Computer and Communication Systems / v.11 no.11 / pp.411-418 / 2022
  • Incorrectly loaded containers can easily be knocked down by strong winds, and container collapse accidents can lead to material damage and paralysis of the port system. In this paper, we propose a deep learning-based technique for detecting the container loading state and the risk of an accident. Using Darknet-based YOLO, the container loading state is identified in real time from the corner castings on the top and bottom of the container, and the manager is notified of the accident risk. We present criteria for classifying container alignment states and select an efficient learning algorithm by comparing inference speed, classification accuracy, detection accuracy, and FPS on real embedded devices in the same environment. The study found that YOLOv4 had lower inference speed and FPS than YOLOv3 but performed better in classification accuracy and detection accuracy.
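
For readers unfamiliar with the pipeline this abstract describes, here is a minimal, assumption-level sketch of Darknet-format YOLO inference through OpenCV's DNN module; the config/weights paths, input size, and threshold are placeholders rather than the authors' settings.

```python
# Hedged sketch of Darknet-based YOLO inference with OpenCV's DNN module.
import cv2
import numpy as np

# Hypothetical file names; the paper's corner-casting model is not public here.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

def detect(frame: np.ndarray, conf_thresh: float = 0.5):
    """Return [(class_id, confidence, (cx, cy, w, h)), ...] in pixel units."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    for output in net.forward(layer_names):
        for row in output:  # row = [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            cls = int(np.argmax(scores))
            conf = float(scores[cls])
            if conf >= conf_thresh:
                cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
                detections.append((cls, conf, (cx, cy, bw, bh)))
    return detections
```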

Design of weighted federated learning framework based on local model validation

  • Kim, Jung-Jun; Kang, Jeon Seong; Chung, Hyun-Joon; Park, Byung-Hoon
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.13-18 / 2022
  • In this paper, we propose VW-FedAVG (Validation-based Weighted FedAVG), which updates the global model by weighting the model of each device participating in training according to its verified performance. The first design is a server-side validation structure, in which each local client model is validated on a validation dataset before the global model is updated. The second is a client-side validation structure, in which the validation dataset is distributed evenly across the clients and the global model is updated after validation. Using MNIST and CIFAR-10 for image classification under both IID and Non-IID distributions, the proposed method obtained higher accuracy than previous studies.
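
A minimal sketch of the aggregation idea, under stated assumptions (this is not the authors' implementation, and the exact weighting function in VW-FedAVG may differ): client parameters are averaged with coefficients proportional to each client model's validation accuracy, where plain FedAvg would instead weight by local dataset size.

```python
# Validation-weighted federated averaging, in the spirit of VW-FedAVG.
from typing import Dict, List
import numpy as np

def vw_fedavg(client_weights: List[Dict[str, np.ndarray]],
              val_accuracies: List[float]) -> Dict[str, np.ndarray]:
    """Aggregate client parameter dicts, weighted by validation accuracy."""
    total = sum(val_accuracies)
    coeffs = [acc / total for acc in val_accuracies]  # normalize to sum to 1
    return {
        name: sum(c * w[name] for c, w in zip(coeffs, client_weights))
        for name in client_weights[0]
    }
```

In the server-side variant the accuracies would come from a server-held validation set; in the client-side variant, from each client's share of the evenly distributed validation data.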

Frontal Face Video Analysis for Detecting Fatigue States

  • Cha, Simyeong; Ha, Jongwoo; Yoon, Soungwoong; Ahn, Chang-Won
    • Journal of the Korea Society of Computer and Information / v.27 no.6 / pp.43-52 / 2022
  • We can sense when somebody feels fatigued, which suggests that fatigue can be detected by sensing human biometric signals. Most existing research on assessing fatigue focuses on diagnosing fatigue at the edge of disease level. In this study, we adopt quantitative analysis approaches for estimating this qualitative state and propose video analysis models for measuring the fatigue state. The three proposed deep learning-based classification models selectively include the stages of video analysis (object detection, feature extraction, and time-series frame analysis) to evaluate each stage's effect on discriminating the state of fatigue. On frontal face videos collected from various fatigue situations, our CNN model shows 0.67 accuracy, which empirically demonstrates that video analysis models can meaningfully detect the fatigue state. We also suggest how to adapt the model when training and validating video data for fatigue classification.
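
The staged pipeline the abstract lists (detection, per-frame feature extraction, time-series frame analysis) could be wired together roughly as follows. This is a speculative sketch, not the authors' models: the face detector of the first stage is assumed to run upstream, and all layer sizes are arbitrary.

```python
# Hypothetical per-frame CNN + time-series model for fatigue classification.
import torch
import torch.nn as nn

class FatigueClassifier(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden: int = 32):
        super().__init__()
        # Stage 2: per-frame feature extraction on face crops (the upstream
        # face detector of stage 1 is assumed, not shown).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Stage 3: time-series analysis across the frame sequence.
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # fatigued vs. not fatigued

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.rnn(feats)
        return self.head(h[-1])
```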

Highly Flexible Piezoelectric Tactile Sensor based on PZT/Epoxy Nanocomposite for Texture Recognition (텍스처 인지를 위한 PZT/Epoxy 나노 복합소재 기반 유연 압전 촉각센서)

  • Yulim Min; Yunjeong Kim; Jeongnam Kim; Saerom Seo; Hye Jin Kim
    • Journal of Sensor Science and Technology / v.32 no.2 / pp.88-94 / 2023
  • Recently, piezoelectric tactile sensors have garnered considerable attention in the field of texture recognition owing to their high sensitivity and high-frequency detection capability. Despite their remarkable potential, improving their mechanical flexibility so they can attach to complex surfaces remains challenging. In this study, we present a flexible piezoelectric sensor that can be bent to an extremely small radius of 2.5 mm while maintaining good electrical performance. The proposed sensor was fabricated by controlling the thickness that governs internal stress under external deformation. The fabricated piezoelectric sensor exhibited a high sensitivity of 9.3 nA/kPa over the 0-10 kPa range and a wide frequency range of up to 1 kHz. To demonstrate real-time texture recognition by rubbing the sensor over an object's surface, nine sets of fabric plates were prepared, reflecting different material properties and surface roughness. To extract features of the objects from the sensing data, we converted the analog dataset into short-time Fourier transform images. Texture recognition was then performed using a convolutional neural network, with a classification accuracy of 97%.
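
The conversion from raw waveform to short-time Fourier transform image might look like the following sketch (an assumption-level illustration, not the authors' pipeline; the 2 kHz sampling rate is a placeholder chosen only to cover the reported 1 kHz detection range).

```python
# Convert a 1-D sensor waveform into a normalized STFT "image" for a CNN.
import numpy as np
from scipy.signal import stft

def to_stft_image(signal: np.ndarray, fs: float = 2000.0) -> np.ndarray:
    """Return a log-magnitude STFT image of a 1-D sensor waveform."""
    # nperseg trades time vs. frequency resolution; 256 is an arbitrary choice.
    f, t, Z = stft(signal, fs=fs, nperseg=256)
    img = 20 * np.log10(np.abs(Z) + 1e-12)  # dB scale; epsilon avoids log(0)
    # Normalize to [0, 1] so it can be fed to a CNN like a grayscale image.
    return (img - img.min()) / (img.max() - img.min() + 1e-12)
```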

Extracting Road Points from LiDAR Data for Urban Area (도심지역 LiDAR자료로부터 도로포인트 추출기법 연구)

  • Jang, Young Woon; Choi, Yun Woong; Cho, Gi Sung
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.2D / pp.269-276 / 2008
  • Road network databases have recently become a key input to many social operations, such as transportation, management, security, disaster assessment, and city planning. However, constructing such data is expensive and labor-intensive. This study proposes a classification method that discriminates between road and building points using entropy theory, and then detects candidate road classes from the classified point groups using the standard reflectance intensity of roads and the constrained characteristics of roads. The main objective of this study is thus to develop a method that can detect roads in urban areas using only LiDAR data.
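
As a toy illustration of how an entropy measure can separate road from building points (this is our sketch, not the paper's exact formulation): the Shannon entropy of the local elevation histogram is low over flat road cells and higher over cells mixing ground and building returns.

```python
# Shannon entropy of the elevation distribution within one LiDAR grid cell.
import numpy as np

def height_entropy(z: np.ndarray, bins: int = 16) -> float:
    """Entropy of the elevation histogram of the points in one grid cell."""
    hist, _ = np.histogram(z, bins=bins)
    if hist.sum() == 0:
        return 0.0                 # empty cell
    p = hist / hist.sum()
    p = p[p > 0]                   # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())
```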

A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현; 김상욱
    • Journal of KIISE: Computing Practices and Letters / v.9 no.3 / pp.322-331 / 2003
  • As the complexity of a 3D game increases with the various factors of the game scenario, controlling the interrelations of the game objects becomes a problem. A game system therefore needs to coordinate the responses of the game objects and to control the animation behaviors of the game objects in terms of the game scenario. To produce realistic game simulations, the system has to include a structure for designing the interactions among the game objects. This paper presents a method for designing a dynamic control mechanism for the interaction of the game objects in the game scenario. As a framework for the method, we suggest a game agent system based on intelligent agents that make decisions using specific rules. The game agent system is used to manage environment data, to simulate the game objects, to control interactions among game objects, and to support a visual authoring interface that can define the various interrelations of the game objects. These techniques can handle the autonomy level of the game objects, the associated collision avoidance, and so on, and they enable coherent decision-making by the game objects when the scene changes. The rule-based behavior control was designed to guide the simulation of the game objects; the rules are pre-defined by the user through the visual interface for designing their interactions. The Agent State Decision Network, composed of the visual elements, passes information and infers the current state of the game objects. All of these methods can monitor and check changes in the motion state between game objects in real time. Finally, we present a validation of the control method together with a simple case-study example.
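
A rule-based behavior controller of the kind described could be sketched as follows; this is a hypothetical reconstruction (the names Rule and AgentStateMachine are ours, not the paper's Agent State Decision Network), showing only how user-defined rules can drive state transitions each frame.

```python
# Rule-based state transitions for a game object, evaluated once per frame.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    condition: Callable[[Dict], bool]  # predicate over the observed world state
    next_state: str                    # state to enter when the predicate holds

class AgentStateMachine:
    def __init__(self, initial: str, rules: Dict[str, List[Rule]]):
        self.state = initial
        self.rules = rules             # rules grouped by current state

    def step(self, world: Dict) -> str:
        """Fire the first matching rule for the current state, if any."""
        for rule in self.rules.get(self.state, []):
            if rule.condition(world):
                self.state = rule.next_state
                break
        return self.state

# Example: an NPC flees when a player gets too close, otherwise patrols.
npc = AgentStateMachine("patrol", {
    "patrol": [Rule(lambda w: w["player_dist"] < 5.0, "flee")],
    "flee":   [Rule(lambda w: w["player_dist"] > 12.0, "patrol")],
})
```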

Human Tracking Technology using Convolutional Neural Network in Visual Surveillance (서베일런스에서 회선 신경망 기술을 이용한 사람 추적 기법)

  • Kang, Sung-Kwan; Chun, Sang-Hun
    • Journal of Digital Convergence / v.15 no.2 / pp.173-181 / 2017
  • In this paper, we study tracking as a learning problem: estimating the position and scale of a person given its previous position and scale together with the image patches of the previous and current frames. Unlike other learning methods, the CNN learns by combining temporal and spatial features from the two consecutive frames. We introduce multiple pathways in the CNN to better fuse local and global information. A shift-variant CNN architecture is designed to alleviate the drift problem when distracting objects similar to the target appear in a cluttered environment. Furthermore, we employ CNNs to estimate the scale through accurate localization of some key points. These techniques are object-independent, so the proposed method can be applied to track other types of objects. The tracker's capability of handling complex situations is demonstrated in many test sequences. The accuracy of an SVM classifier using the features learned by the CNN is equivalent to that of the CNN itself, which confirms the importance of automatically optimized features. However, the computation time for classifying a person with the convolutional neural network classifier is less than approximately 1/40 of the SVM computation time, regardless of the type of features used.
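
The two-frame input the abstract describes can be made concrete with a small sketch (our assumption-level reconstruction, not the paper's multi-pathway architecture): the previous and current crops are stacked channel-wise so the convolutions see both spatial and temporal structure.

```python
# Two consecutive RGB crops stacked channel-wise for a tracking CNN.
import torch
import torch.nn as nn

class TwoFrameTracker(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 5, stride=2, padding=2), nn.ReLU(),  # 6 = 2 RGB frames
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 4)  # predicted (dx, dy, dw, dh) of the target

    def forward(self, prev_crop: torch.Tensor, curr_crop: torch.Tensor) -> torch.Tensor:
        x = torch.cat([prev_crop, curr_crop], dim=1)  # (B, 6, H, W)
        return self.head(self.features(x))
```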

Quantitative Evaluations of Deep Learning Models for Rapid Building Damage Detection in Disaster Areas (재난지역에서의 신속한 건물 피해 정도 감지를 위한 딥러닝 모델의 정량 평가)

  • Ser, Junho; Yang, Byungyun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.5 / pp.381-391 / 2022
  • This paper aims to identify which of the prevailing deep learning models, a type of AI (Artificial Intelligence), best helps rapidly detect damaged buildings where disasters occur. The models selected are SSD-512, RetinaNet, and YOLOv3, which have been widely used for object detection in recent years. All three are one-stage detector networks suitable for rapid object detection; they are often used for general object detection because of their structural advantages and high speed, but rarely for damaged-building detection in disaster management. In this study, we first trained each algorithm on the xBD dataset, which provides post-disaster imagery with damage classification labels. Next, the three models were quantitatively evaluated with mAP (mean Average Precision) and FPS (Frames Per Second). YOLOv3 recorded an mAP of 34.39% and reached 46 FPS. RetinaNet recorded an mAP of 36.06%, 1.67 percentage points higher than YOLOv3, but its FPS was one-third that of YOLOv3. SSD-512 scored significantly lower than YOLOv3 on both quantitative indicators. In a disaster situation, a rapid and precise investigation of damaged buildings is essential for effective disaster response, so the results obtained through this study are expected to be useful for rapid response in disaster management.
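
The mAP figures quoted above are means of per-class average precision. As a reference, here is a generic sketch (not the paper's evaluation code) of the PASCAL-VOC-style AP computation for one class.

```python
# Average precision for one class; mAP is the mean of this over all classes.
import numpy as np

def average_precision(scores: np.ndarray, is_tp: np.ndarray, n_gt: int) -> float:
    """AP: area under the interpolated precision-recall curve.

    scores: confidence of each detection; is_tp: boolean, True if the detection
    matched a ground-truth box; n_gt: number of ground-truth boxes of the class.
    """
    order = np.argsort(-scores)              # rank detections by confidence
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # Interpolate precision: at each rank, use the best precision to the right.
    interp = np.maximum.accumulate(precision[::-1])[::-1]
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, interp):
        ap += (r - prev_r) * p               # step-wise area under P(R)
        prev_r = r
    return float(ap)
```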