• Title/Summary/Keyword: Deep Learning based System


A Study on the system in the Theory of 'Syndrome Differentiation' from the Viewpoint of Yoon Gilyeong (윤길영의 변증체계 고찰)

  • Kim, Gyeong Cheol;Hong, Dong Gyun
    • The Journal of the Society of Korean Medicine Diagnostics
    • /
    • v.20 no.1
    • /
    • pp.15-26
    • /
    • 2016
  • Objectives Syndrome differentiation and treatment (辨證論治) is one of the core theories of Korean medicine, and syndrome differentiation (辨證) constitutes a branch of disease diagnosis in Korean medicine. Yoon Gil-Young, one of the outstanding modern scholars of basic medical science in Korean medicine, wrote on the basic theories of Korean medicine such as physiology, pathology, and formula science. Here we analyze and discuss his works to understand his view of the historical changes in syndrome differentiation. Methods We examined two of Yoon Gil-Young's works, "The Clinical Formula Science of Eastern Medicine (東醫臨床方劑學)" and "The Theory of Four-Constitution Medicine (四象體質醫學論)". From Yoon's academic standpoint, which connects basic medical science with clinical medicine, we analyzed his view of the system in the theory of syndrome differentiation. Results According to Yoon's research, the system of syndrome differentiation has its deep roots in the theory of Yin and Yang (陰陽) and the theory of the five circuit phases (五運) and six atmospheric influences (六氣) of the "Huangdi's Internal Classic (黃帝內經)". Conclusions Yoon Gil-Young's theory of syndrome differentiation and treatment is both wide-ranging, spanning much of the field of Traditional Korean Medicine, and ingenious. He explains the main principles of syndrome differentiation on the basis of the "Huang Di Nei Jing", and his system of syndrome differentiation is grounded in Traditional Korean Medical physiology.

A Study on Mechanism of Intelligent Cyber Attack Path Analysis (지능형 사이버 공격 경로 분석 방법에 관한 연구)

  • Kim, Nam-Uk;Lee, Dong-Gyu;Eom, Jung-Ho
    • Convergence Security Journal
    • /
    • v.21 no.1
    • /
    • pp.93-100
    • /
    • 2021
  • Damage caused by intelligent cyber attacks not only disrupts system operations and leaks information, but also entails massive economic damage. Recent cyber attacks have a distinct goal and use advanced attack tools and techniques to infiltrate the target precisely. To minimize the damage caused by such intelligent cyber attacks, it is necessary to block the attack at its beginning, or while it is in progress, before it invades the target's core system. Recently, technologies for predicting cyber attack paths and analyzing the risk level of cyber attacks using big data or artificial intelligence have been studied. In this paper, a cyber attack path analysis method using an attack tree and RFI is proposed as a basic algorithm for the development of an automated cyber attack path prediction system. The attack path is visualized using the attack tree, and at each attack step the priority of the paths that can advance to the next step is determined using the RFI technique. The proposed mechanism can contribute to the development of an automated cyber attack path prediction system using big data and deep learning technology.
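The path-priority idea above can be sketched in a few lines. This is a generic illustration, not the paper's algorithm: the abstract does not define RFI, so the per-edge `risk` score below is a hypothetical stand-in for the RFI-based priority, and a path's priority is simply taken as the product of its edge scores.

```python
# Sketch of attack-tree path prioritization. Each edge carries an assumed
# numeric risk score; a path's priority is the product of its edge scores.

def enumerate_paths(tree, node, path=None):
    """Depth-first enumeration of root-to-leaf attack paths."""
    path = (path or []) + [node]
    children = tree.get(node, [])
    if not children:  # leaf: one complete attack path
        return [path]
    paths = []
    for child, _risk in children:
        paths.extend(enumerate_paths(tree, child, path))
    return paths

def path_priority(tree, path):
    """Multiply the risk scores along a path (higher = more likely next step)."""
    score = 1.0
    for parent, child in zip(path, path[1:]):
        score *= dict(tree[parent])[child]
    return score

# Hypothetical attack tree: node -> [(child, risk score)]
tree = {
    "recon": [("phishing", 0.6), ("vuln_scan", 0.4)],
    "phishing": [("credential_theft", 0.8)],
    "vuln_scan": [("exploit", 0.5)],
    "credential_theft": [],
    "exploit": [],
}

paths = enumerate_paths(tree, "recon")
best = max(paths, key=lambda p: path_priority(tree, p))
```

An automated predictor would re-rank these priorities at each attack step as new observations arrive.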

CNN Based Face Tracking and Re-identification for Privacy Protection in Video Contents (비디오 컨텐츠의 프라이버시 보호를 위한 CNN 기반 얼굴 추적 및 재식별 기술)

  • Park, TaeMi;Phu, Ninh Phung;Kim, HyungWon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.1
    • /
    • pp.63-68
    • /
    • 2021
  • Recently there has been sharply increasing interest in watching and creating video content such as YouTube videos. However, creating such content without a privacy protection technique can publicly expose other people in the background, violating their privacy rights. This paper seeks to remedy this problem and proposes a technique that identifies faces and protects portrait rights by blurring them. The key contribution of this paper lies in a deep-learning technique with low detection error and high computation speed, which allows portrait rights to be protected in real-time video. To reduce errors, an efficient tracking algorithm is used in this system together with the face detection and face recognition algorithms. This paper compares the performance of the proposed system with and without the tracking algorithm. We believe this system can be used wherever video is used.
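The blurring step itself is simple once a face bounding box is available. A minimal sketch, assuming the box coordinates come from the face detector/tracker (the CNN detection stage is out of scope here), using a plain box blur on a grayscale image represented as a 2D list:

```python
# Replace the pixels inside a detected face box with a local box-blur
# average so the face is no longer identifiable.

def blur_region(img, box, k=1):
    """Box-blur `img` (2D list of grayscale values) inside
    box = (top, left, bottom, right), with a (2k+1)x(2k+1) neighborhood."""
    h, w = len(img), len(img[0])
    top, left, bottom, right = box
    out = [row[:] for row in img]  # leave pixels outside the box untouched
    for y in range(top, bottom):
        for x in range(left, right):
            vals = [img[j][i]
                    for j in range(max(0, y - k), min(h, y + k + 1))
                    for i in range(max(0, x - k), min(w, x + k + 1))]
            out[y][x] = sum(vals) // len(vals)
    return out

img = [[0] * 4 for _ in range(4)]
img[1][1] = 255  # a single bright pixel inside the "face" box
blurred = blur_region(img, (0, 0, 4, 4))
```

In a real pipeline the same operation would be applied per frame with an optimized kernel (e.g. a Gaussian blur), which is where the tracking algorithm pays off: re-detecting every frame is avoided.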

Design of YOLO-based Removable System for Pet Monitoring (반려동물 모니터링을 위한 YOLO 기반의 이동식 시스템 설계)

  • Lee, Min-Hye;Kang, Jun-Young;Lim, Soon-Ja
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.1
    • /
    • pp.22-27
    • /
    • 2020
  • Recently, as the number of households raising pets increases with the rise in single-person households, there is a need for a system for monitoring the status and behavior of pets. Monitoring pets with fixed household CCTV has regional limitations: it requires a large number of cameras or restricts the pet's movement. In this paper, we propose a mobile system for detecting and tracking cats using deep learning to overcome these limitations. We use YOLO (You Only Look Once), an object detection neural network model, to learn the characteristics of pets and run it on a Raspberry Pi to track the objects detected in an image. We have designed a mobile monitoring system that connects the Raspberry Pi to a laptop via wireless LAN and can check the movement and condition of cats in real time.
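Tracking an object across frames from per-frame YOLO detections is commonly done by bounding-box overlap. The following is a generic intersection-over-union (IoU) association sketch, a stand-in for whatever tracker the paper uses, with illustrative box values:

```python
# Match a new frame's detections to the previous frame's box by IoU.

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def match_track(prev_box, detections, threshold=0.3):
    """Pick the detection that best overlaps the previous box, if any."""
    best = max(detections, key=lambda d: iou(prev_box, d), default=None)
    if best is not None and iou(prev_box, best) >= threshold:
        return best
    return None  # track lost; fall back to re-detection

prev = (10, 10, 50, 50)
dets = [(12, 11, 52, 49), (200, 200, 240, 240)]
matched = match_track(prev, dets)
```

On a Raspberry Pi, such an association step is far cheaper than running the detector, which is why detection can run at a lower rate than tracking.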

T-Commerce Sale Prediction Using Deep Learning and Statistical Model (딥러닝과 통계 모델을 이용한 T-커머스 매출 예측)

  • Kim, Injung;Na, Kihyun;Yang, Sohee;Jang, Jaemin;Kim, Yunjong;Shin, Wonyoung;Kim, Deokjung
    • Journal of KIISE
    • /
    • v.44 no.8
    • /
    • pp.803-812
    • /
    • 2017
  • T-commerce is a technology-fusion service in which the user can make purchases via data broadcasting technology based on bi-directional digital TVs. To achieve the best revenue in an environment limited in the number of channels and the variety of sales goods, broadcast programs must be organized to maximize expected sales, considering the selling power of each product in each time slot. To this end, this paper proposes a method to predict the sales of a product when it is assigned to a given time slot. The proposed method predicts the sales of a product in a time slot given the week-of-year and weather of the target day. Additionally, it is combined with a statistical prediction model applying SVD (Singular Value Decomposition) to mitigate the sparsity problem caused by the bias in the sales record. In experiments on the sales data of W-shopping, a T-commerce company, the proposed method achieved an NMAE (Normalized Mean Absolute Error) of 0.12 between the predicted and actual sales, which confirms its effectiveness. The proposed method has been applied in practice to the T-commerce system of W-shopping and is used for broadcast organization.
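The reported accuracy metric is NMAE. A common definition, used here as an assumption since the paper may normalize differently, is the mean absolute error divided by the mean of the actual values:

```python
# NMAE = MAE / mean(actual); lower is better, 0.12 means the average
# prediction error is 12% of the average sales amount.

def nmae(actual, predicted):
    """Normalized mean absolute error."""
    n = len(actual)
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / n
    return mae / (sum(actual) / n)

actual = [100.0, 200.0, 300.0]
predicted = [110.0, 190.0, 320.0]
error = nmae(actual, predicted)
```

Normalizing by the mean makes the error comparable across products with very different sales volumes, which matters when ranking products per time slot.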

Comparison of performance of automatic detection model of GPR signal considering the heterogeneous ground (지반의 불균질성을 고려한 GPR 신호의 자동탐지모델 성능 비교)

  • Lee, Sang Yun;Song, Ki-Il;Kang, Kyung Nam;Ryu, Hee Hwan
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.4
    • /
    • pp.341-353
    • /
    • 2022
  • Pipelines are buried under urban areas, and the position (depth and orientation) of a buried pipeline should be clearly identified before ground excavation. Although various geophysical methods can be used to detect buried pipelines, it is not easy to identify the exact position of a pipeline because of heterogeneous ground conditions. Among the various non-destructive geo-exploration methods, ground penetrating radar (GPR) can survey the subsurface rapidly and at relatively low cost compared to other exploration methods. However, interpreting the exploration data obtained from GPR requires considerable experience because it is not intuitive. Recently, automated detection technologies for GPR data using deep learning have been studied, but the lack of GPR data, which is essential for training, makes it difficult to build a reliable detection model. To overcome this problem, we conducted a preliminary study to improve the performance of the detection model using finite difference time domain (FDTD)-based numerical analysis. First, numerical analysis was performed with homogeneous soil media having a single permittivity; for heterogeneous ground, the analysis considered the ground heterogeneity using a fractal technique. Second, deep learning was carried out using a convolutional neural network. Detection Model-A was trained on a data set obtained from homogeneous ground, while detection Model-B was trained on data sets obtained from both homogeneous and heterogeneous ground. As a result, detection Model-B, whose training included heterogeneous ground, shows better performance than detection Model-A. This indicates that ground heterogeneity should be considered to increase the performance of an automated detection model for GPR exploration.

The Effect of Ground Heterogeneity on the GPR Signal: Numerical Analysis (지반의 불균질성이 GPR탐사 신호에 미치는 영향에 대한 수치해석적 분석)

  • Lee, Sangyun;Song, Ki-il;Ryu, Heehwan;Kang, Kyungnam
    • Journal of the Korean GEO-environmental Society
    • /
    • v.23 no.8
    • /
    • pp.29-36
    • /
    • 2022
  • The importance of subsurface information is becoming crucial in urban areas due to the increase in underground construction. The position of underground facilities should be identified precisely before excavation work. Geophysical exploration methods such as ground penetrating radar (GPR) can be useful for investigating subsurface facilities. GPR transmits electromagnetic waves into the ground and analyzes the reflected signals to determine the location and depth of subsurface facilities. Unfortunately, the readability of the GPR signal is not favorable. To overcome this deficiency and automate GPR signal processing, deep learning techniques have recently been introduced. The accuracy of a deep learning model can be improved with abundant training data. The ground is inherently heterogeneous, and the spatially variable ground properties can affect the GPR signal. However, the effect of ground heterogeneity on the GPR signal has yet to be fully investigated. In this study, ground heterogeneity is simulated based on fractal theory, and GPR simulation is carried out using gprMax. It is found that as the fractal dimension increases beyond 2.0, the error of the fitting parameter reduces significantly, and that the range of water content should be less than 0.14 to secure the validity of the analysis.
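Both GPR studies above generate heterogeneous ground with a fractal technique. As an illustrative stand-in for such a generator (not the papers' exact method, nor gprMax's built-in fractal commands), the following sketches 1D midpoint displacement, a classic fractal noise scheme in which the displacement amplitude shrinks by 2^-H per subdivision level; the Hurst exponent H controls the roughness, with fractal dimension D = 2 - H for a 1D profile:

```python
# Midpoint displacement: repeatedly subdivide the profile and perturb each
# midpoint by a random amount whose scale decays geometrically with level.
import random

def midpoint_displacement(levels, hurst=0.5, seed=0):
    """Generate a fractal profile with 2**levels + 1 points."""
    rng = random.Random(seed)
    pts = [0.0, 0.0]  # flat initial profile between two endpoints
    scale = 1.0
    for _ in range(levels):
        scale *= 2 ** -hurst  # roughness decays with each subdivision
        nxt = []
        for a, b in zip(pts, pts[1:]):
            mid = (a + b) / 2 + rng.uniform(-scale, scale)
            nxt.extend([a, mid])
        nxt.append(pts[-1])
        pts = nxt
    return pts

profile = midpoint_displacement(levels=6, hurst=0.7)
```

Such a profile could then be mapped to a spatially varying water content or permittivity field for the FDTD model, which is conceptually what the fractal-based heterogeneity in these studies provides.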

Development of System for Real-Time Object Recognition and Matching using Deep Learning at Simulated Lunar Surface Environment (딥러닝 기반 달 표면 모사 환경 실시간 객체 인식 및 매칭 시스템 개발)

  • Jong-Ho Na;Jun-Ho Gong;Su-Deuk Lee;Hyu-Soung Shin
    • Tunnel and Underground Space
    • /
    • v.33 no.4
    • /
    • pp.281-298
    • /
    • 2023
  • Continuous research efforts are being devoted to unmanned mobile platforms for lunar exploration. There is an ongoing demand for real-time information processing to accurately determine the positioning and mapping of areas of interest on the lunar surface. To apply deep learning processing and analysis techniques to practical rovers, research on software integration and optimization is imperative. In this study, a foundational investigation was conducted on the real-time analysis of images of a virtual lunar base construction site, aimed at automatically quantifying the spatial information of key objects. The study involved transitioning from an existing region-based object recognition algorithm to a bounding-box-based algorithm, thereby enhancing object recognition accuracy and inference speed. To facilitate object matching training on extensive data, the batch hard triplet mining technique was introduced, and both the training and inference processes were optimized. Furthermore, an improved software system for object recognition and identical-object matching was integrated, accompanied by the development of visualization software for automatically matching identical objects within the input images. Using simulated satellite-view video data for training and video data captured from a moving platform for inference, training and inference for identical-object matching were successfully executed. The outcomes of this research suggest the feasibility of building 3D spatial information from continuously captured video data of mobile platforms and using it to position objects within regions of interest. These findings are expected to contribute to an integrated, automated on-site system for video-based construction monitoring and the control of significant target objects within future lunar base construction sites.
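The batch hard triplet mining named in the abstract has a standard form: within a batch of embeddings, each anchor is paired with its hardest positive (the farthest sample of the same label) and hardest negative (the closest sample of a different label), and a hinge loss with a margin is averaged over anchors. A minimal plain-Python sketch, with Euclidean distances standing in for the framework tensors used in practice:

```python
# Batch-hard triplet loss: loss_i = max(0, d(a_i, hardest_pos) -
# d(a_i, hardest_neg) + margin), averaged over valid anchors.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    losses = []
    for i, (anchor, la) in enumerate(zip(embeddings, labels)):
        pos = [dist(anchor, e)
               for j, (e, l) in enumerate(zip(embeddings, labels))
               if l == la and j != i]
        neg = [dist(anchor, e)
               for e, l in zip(embeddings, labels) if l != la]
        if not pos or not neg:
            continue  # anchor has no valid triplet in this batch
        # hardest positive = max same-class distance,
        # hardest negative = min different-class distance
        losses.append(max(0.0, max(pos) - min(neg) + margin))
    return sum(losses) / len(losses)

# Two well-separated classes: every hinge term is negative, so the loss is 0.
embeddings = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (1.1, 1.0)]
labels = [0, 0, 1, 1]
loss = batch_hard_triplet_loss(embeddings, labels)
```

Mining the hardest pairs inside each batch is what makes the technique data-efficient for identical-object matching: easy triplets, which contribute no gradient, are skipped implicitly.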

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated rapidly with the Fourth Industrial Revolution, and AI research is actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving, and the field has achieved more technological advances than ever due to recent interest and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence: it aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. Increasingly, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various areas of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires much expert effort. In recent years, much knowledge-based AI research and technology uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of an article's unifying aspects. This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema, defined in the DBpedia Extraction Framework. In this way, DBpedia can achieve high reliability in terms of knowledge accuracy by generating knowledge from the semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model trained on Wikipedia infoboxes according to the DBpedia ontology schema. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into RDF triple structure. The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through this proposed process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required from experts to construct instances according to the ontology schema.
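The BIO tagging used to build the training data follows a standard scheme: the first token of an attribute value gets a `B-` tag, subsequent tokens of the same value get `I-` tags, and everything else gets `O`. A minimal sketch, assuming whitespace tokenization and a single known value per sentence (the paper's actual data generation from infobox values is more involved):

```python
# Tag the first occurrence of a known attribute value inside a sentence.

def bio_tag(tokens, value_tokens, label):
    """Return one B-/I-/O tag per token for the given attribute label."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = "B-" + label  # value starts here
            for j in range(i + 1, i + n):
                tags[j] = "I-" + label  # value continues
            break
    return tags

tokens = "Seoul is the capital of South Korea".split()
tags = bio_tag(tokens, ["South", "Korea"], "country")
```

A sequence labeler such as CRF or Bi-LSTM-CRF is then trained to predict these tags, so that at inference time the value span can be recovered from unseen sentences and converted into an RDF triple.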

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing an individual user's simple body movements to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including the accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep-learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status is defined as a redefined subset of user interaction behavior: whether the user is accompanied by an acquaintance at close distance, and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of the multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of the data collected from different sensors. Normalization was performed for each x, y, and z axis value of the sensor data, and the sequence data was generated with a sliding window. The sequence data then became the input of the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of three convolutional layers and had no pooling layer, to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data was collected from a total of 18 subjects. Using this data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of the majority vote classifier, the support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize the timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to transfer to evaluation data that follows a different distribution. We expect this to yield a model with robust recognition performance against changes in the data that are not considered at the training stage.
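The sequence-generation step of the preprocessing above can be sketched directly: the synchronized, normalized sensor stream is cut into overlapping fixed-length windows, each of which becomes one input sequence for the CNN. The window size and stride below are illustrative, not the paper's settings:

```python
# Sliding-window sequence generation over a per-timestep sample stream.

def sliding_windows(samples, window, stride):
    """Cut a list of per-timestep samples into overlapping sequences."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, stride)]

# Each sample would be, e.g., the (accel, magnetic, gyro) values at one
# timestep; integers stand in here for 10 synchronized timesteps.
stream = list(range(10))
seqs = sliding_windows(stream, window=4, stride=2)
```

Overlapping windows (stride smaller than the window size) multiply the number of training sequences from the same recording, which helps when, as here, data from only 18 subjects is available.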