• Title/Summary/Keyword: neural network learning

Radiation Dose Reduction in Digital Mammography by Deep-Learning Algorithm Image Reconstruction: A Preliminary Study (딥러닝 알고리즘을 이용한 저선량 디지털 유방 촬영 영상의 복원: 예비 연구)

  • Su Min Ha;Hak Hee Kim;Eunhee Kang;Bo Kyoung Seo;Nami Choi;Tae Hee Kim;You Jin Ku;Jong Chul Ye
    • Journal of the Korean Society of Radiology / v.83 no.2 / pp.344-359 / 2022
  • Purpose To develop a denoising convolutional neural network-based image processing technique and to investigate its efficacy for diagnosing breast cancer with low-dose mammography. Materials and Methods A total of six breast radiologists participated in this prospective study. All radiologists independently evaluated low-dose images for lesion detection and rated their diagnostic quality on a qualitative scale. After application of the denoising network, the same radiologists evaluated lesion detectability and image quality. For the clinical application, a consensus on lesion type and localization on preoperative mammographic examinations of breast cancer patients was reached after discussion. Thereafter, coded low-dose, reconstructed full-dose, and full-dose images were presented and assessed in random order. Results Lesions were better perceived on 40% reconstructed full-dose images than on low-dose images, with mastectomy specimens used as the reference. In the clinical application, compared with the 40% reconstructed images, full-dose images received higher scores for resolution (p < 0.001), diagnostic quality for calcifications (p < 0.001), and diagnostic quality for masses, asymmetry, or architectural distortion (p = 0.037). The 40% reconstructed images were comparable to 100% full-dose images for overall quality (p = 0.547), lesion visibility (p = 0.120), and contrast (p = 0.083), without significant differences. Conclusion Effective denoising and image reconstruction techniques can enable breast cancer diagnosis with a substantial reduction in radiation dose.
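
The abstract does not specify the network architecture, so the following is a minimal sketch of a DnCNN-style residual denoiser in PyTorch, the kind of denoising CNN commonly used for low-dose image restoration; the layer counts, feature widths, and training details are illustrative assumptions rather than the paper's method.

```python
# Minimal DnCNN-style residual denoiser (illustrative sketch; the paper's
# actual reconstruction network is not described in the abstract).
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    def __init__(self, channels=1, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Residual learning: the network estimates the noise map, which is
        # subtracted from the low-dose input to give the restored image.
        return x - self.body(x)

# Training would pair low-dose inputs with full-dose targets (e.g. L1 loss).
model = DenoisingCNN()
low_dose = torch.randn(1, 1, 128, 128)   # stand-in for a low-dose patch
restored = model(low_dose)               # same shape as the input
```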

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.111-131 / 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, compared with research on bankruptcy prediction. The few that exist focus mainly on audited firms, because financial data are easier to collect for them. In reality, however, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which are mostly small and medium-sized. The purpose of this paper is to classify distressed non-audited firms according to their financial ratios using a data mining technique, the Self-Organizing Map (SOM). A SOM is a type of artificial neural network trained with unsupervised learning to produce a lower-dimensional, discretized representation of the input space of the training samples, called a map. It differs from other artificial neural networks in that it applies competitive learning rather than error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space. It is one of the most popular and successful clustering algorithms. In this study, we classify types of financially distressed firms, specifically non-audited firms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms under distress in 2004 for the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fell into this pattern. In pattern 3, the growth ratio was the worst among all patterns; the firms of this pattern may have been under distress due to severe competition in their industries, and approximately 30% of the firms fell into this group. In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio did not keep pace with it; these firms appear to have become distressed while expanding their business, and about 25% of the firms were in this pattern. Finally, pattern 5 encompassed very solvent firms; these firms were perhaps distressed by a bad short-term strategic decision or by problems with the entrepreneur, and approximately 18% of the firms fell under this pattern. This study makes both an academic and an empirical contribution. Academically, it classifies non-audited companies, which tend to go bankrupt easily and whose financial data are unstructured or easily manipulated, using a data mining technique (the Self-Organizing Map), rather than large audited firms with well-prepared and reliable financial data. Empirically, although only the financial data of non-audited firms were analyzed, the results are useful for detecting the first symptoms of financial distress, which supports bankruptcy prediction and early-warning signals. A limitation of this research is that only 100 firms were analyzed, owing to the difficulty of collecting financial data for non-audited firms; this made it hard to break the analysis down by category or size. Moreover, non-financial qualitative data are crucial for the analysis of bankruptcy, so non-financial qualitative factors should be taken into account in future work. This study sheds some light on distress prediction for non-audited small and medium-sized firms.
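
The abstract's description of the SOM (competitive learning plus a neighborhood function that preserves topology) maps directly onto a few lines of NumPy. Below is a from-scratch sketch; the 5 x 5 grid, decay schedules, and the synthetic 100 x 10 ratio matrix are illustrative assumptions, not the study's settings.

```python
# Minimal Self-Organizing Map: competitive learning with a Gaussian
# neighborhood over a 2-D grid of neurons.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))             # 100 firms x 10 financial ratios
grid_w, grid_h = 5, 5                      # 5x5 map of neurons
W = rng.normal(size=(grid_w * grid_h, 10)) # one weight vector per neuron
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])

for t in range(1000):
    lr = 0.5 * np.exp(-t / 1000)           # decaying learning rate
    sigma = 2.0 * np.exp(-t / 1000)        # decaying neighborhood radius
    x = X[rng.integers(len(X))]            # random training sample
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)   # grid distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))               # neighborhood function
    W += lr * h[:, None] * (x - W)         # pull BMU and neighbors toward x

# Each firm maps to its BMU; groups of nearby BMUs give the distress patterns.
assignments = np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
```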

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. When reducing dimensionality, we should consider the density of the data, which has a significant influence on the performance of sentence classification: higher-dimensional data require many computations and can eventually lead to high computational cost and overfitting in the model. A dimension reduction process is therefore necessary to improve the performance of the model. Diverse methods have been proposed, from merely reducing noise in the data, such as misspellings and informal text, to incorporating semantic and syntactic information. In addition, how text features are represented and selected affects the performance of the classifier in sentence classification, one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data from the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. Beyond these algorithms, word embeddings, which learn low-dimensional vector-space representations of words and can capture semantic and syntactic information from data, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once the feature selection algorithm identifies unimportant words, we assume that words similar to them also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules, and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally remove words that are similar to the low-information-gain words and build word embeddings. Finally, the filtered text and word embeddings are fed to the deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets, and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful reviews; since Yelp shows only the number of helpful votes, we extracted 100,000 reviews with more than five helpful votes from 750,000 Yelp reviews by random sampling. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that used all the words, and showed that one of the proposed methods outperforms the embeddings with all words: removing unimportant words improves performance, although removing too many words lowers it. For future research, diverse preprocessing methods and an in-depth analysis of word co-occurrence should be considered for measuring similarity between words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, making it possible to explore the combinations of word embedding methods and elimination methods.
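
A compact sketch of the two-step elimination follows, assuming scikit-learn's mutual information for the information gain scores and gensim's Word2Vec for cosine similarity; the toy corpus, the bottom-quartile cutoff, and the similarity threshold are illustrative choices, not the paper's settings.

```python
# Step 1: drop words with low information gain. Step 2: also drop words
# similar (by Word2Vec cosine similarity) to those already dropped.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

docs = ["great book loved it", "terrible plot wasted money",
        "loved the story", "terrible book boring plot"]
labels = np.array([1, 0, 1, 0])            # 1 = helpful review (toy labels)

vec = CountVectorizer()
X = vec.fit_transform(docs)
ig = mutual_info_classif(X, labels, discrete_features=True)
words = np.array(vec.get_feature_names_out())
low_ig = set(words[ig < np.percentile(ig, 25)])    # bottom-quartile words

w2v = Word2Vec([d.split() for d in docs], vector_size=50, min_count=1, seed=0)
similar = {s for w in low_ig if w in w2v.wv
             for s, sim in w2v.wv.most_similar(w, topn=2) if sim > 0.5}
to_drop = low_ig | similar

# The filtered texts (and the pruned embedding vocabulary) would then feed
# the CNN / attention-based BiLSTM classifiers described above.
filtered = [" ".join(t for t in d.split() if t not in to_drop) for d in docs]
```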

The Churchlands' Theory of Representation and the Semantics (처칠랜드의 표상이론과 의미론적 유사성)

  • Park, Je-Youn
    • Korean Journal of Cognitive Science / v.23 no.2 / pp.133-164 / 2012
  • Paul Churchland (1989) develops a theory of representation from the results of cognitive biology and connectionist AI studies. According to the theory, our representations of the diverse phenomena in the world can be represented as positions in phase-state spaces arising from the activity of neurons or assemblies of neurons. He argues that connectionist neural networks can possess the semantic category systems needed to recognize the world. Fodor and Lepore (1996), however, do not find this prospect promising. In their view, Churchland's theory of representation rests on Quine's holism, and network semantics cannot explain how criteria of semantic content similarity are possible; therefore, neither can the theory. This paper aims to determine which of the two perspectives is better supported. On my understanding of the state-space theory of representation, artificial networks can establish criteria of content similarity through their learning algorithms. On this basis, I argue that Fodor and Lepore's objections do not penetrate the Churchlands' theory. From the viewpoint of the theory, we can see how future artificial systems could come to have conceptual systems for recognizing the world, and thus what cognitive scientists should focus on.

Estimating Gastrointestinal Transition Location Using CNN-based Gastrointestinal Landmark Classifier (CNN 기반 위장관 랜드마크 분류기를 이용한 위장관 교차점 추정)

  • Jang, Hyeon Woong;Lim, Chang Nam;Park, Ye-Suel;Lee, Gwang Jae;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering / v.9 no.3 / pp.101-108 / 2020
  • Since the performance of deep learning techniques has recently been proven in the field of image processing, there are many attempts to perform classification, analysis, and detection of images using such techniques in various fields. Among them, expectations are rising for medical image analysis software that can serve as a diagnostic assistant. In this study, we focus on capsule endoscopy images, which form large datasets and take a long time to review. The purpose of this paper is to distinguish the gastrointestinal landmarks and to estimate the gastrointestinal transition locations, tasks that are common to all patients in capsule endoscopy reading and that consume much of the reading time. To do this, we designed a CNN-based classifier that can identify gastrointestinal landmarks, and used it to estimate the gastrointestinal transition locations by filtering its per-frame results. The estimated transition locations of seven of the eight patients fell within the suspected gastrointestinal transition area; for the transition from the stomach to the small intestine (pylorus) and from the small intestine to the large intestine (ileocecal valve), all eight patients' estimates were confirmed to lie within the suspected transition area. The suspected transition area could be narrowed to a range of 100 frames, so if the reader plays the images at 10 frames per second, the gastrointestinal transition can be found within 10 seconds.
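
The filtering step that turns noisy per-frame classifier outputs into a transition estimate can be sketched as below; the three-class labeling, the median-filter window, and the synthetic prediction sequence are assumptions for illustration, not the paper's exact procedure.

```python
# Per-frame organ labels (0 = stomach, 1 = small intestine, 2 = large
# intestine) are median-filtered to suppress isolated misclassifications;
# the indices where the smoothed label changes are the suspected transitions.
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(0)
true_labels = np.concatenate([np.zeros(300), np.ones(500), np.full(400, 2)]).astype(int)
noisy = true_labels.copy()
flip = rng.random(len(noisy)) < 0.05       # 5% of frames misclassified
noisy[flip] = rng.integers(0, 3, flip.sum())

smoothed = medfilt(noisy, kernel_size=51)  # odd window length, in frames
transitions = np.flatnonzero(np.diff(smoothed)) + 1
print(transitions)  # e.g. near frames 300 (pylorus) and 800 (ileocecal valve)
```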

The Study on The Identification Model of Friend or Foe on Helicopter by using Binary Classification with CNN

  • Kim, Tae Wan;Kim, Jong Hwan;Moon, Ho Seok
    • Journal of the Korea Society of Computer and Information / v.25 no.3 / pp.33-42 / 2020
  • There have been difficulties in identifying objects by the naked eye in various surveillance systems, and there is a growing need for automated surveillance systems to replace soldiers in military surveillance operations. Although object detection technology is developing rapidly in the civilian domain, research applying it to the military is insufficient due to a lack of data and interest. Thus, in this paper, we applied Convolutional Neural Network-based binary classification, one of the deep learning approaches, to develop an automated identification model for friend and foe helicopters (AH-64 and Mi-17) among military weapon systems, and evaluated the model's performance in terms of accuracy, precision, recall, and F-measure. The identification model achieved 97.8% accuracy, 97.3% precision, 98.5% recall, and a 97.8% F-measure. In addition, we analyzed the feature maps of the model's convolutional layers to check which areas of the imagery were weighted most heavily. In general, the rotor shaft, wheels, and air intake of both friendly and enemy helicopters played a major role in the model's performance. This is the first study to attempt to classify images of helicopters among military weapon systems using a CNN, and the proposed model shows higher accuracy than existing classification models for other weapon systems.
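
A minimal sketch of CNN-based binary classification with the four reported metrics follows, in PyTorch; the architecture, the 64 x 64 input size, and the random tensors standing in for helicopter images are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class HelicopterClassifier(nn.Module):
    """Tiny CNN emitting one logit: > 0 is read as friend, <= 0 as foe."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):                    # x: (N, 3, 64, 64) images
        return self.head(self.features(x))

model = HelicopterClassifier()
x = torch.randn(4, 3, 64, 64)                # stand-ins for AH-64 / Mi-17 images
y = torch.tensor([[1.], [0.], [1.], [0.]])   # 1 = friend, 0 = foe
loss = nn.BCEWithLogitsLoss()(model(x), y)

# Accuracy, precision, recall, and F-measure from the confusion counts:
pred = (model(x) > 0).float()
tp = ((pred == 1) & (y == 1)).sum()
fp = ((pred == 1) & (y == 0)).sum()
fn = ((pred == 0) & (y == 1)).sum()
precision = tp / (tp + fp + 1e-9)
recall = tp / (tp + fn + 1e-9)
f_measure = 2 * precision * recall / (precision + recall + 1e-9)
accuracy = (pred == y).float().mean()
```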

Optimized Feature Selection using Feature Subset IG-MLP Evaluation based Machine Learning Model for Disease Prediction (특징집합 IG-MLP 평가 기반의 최적화된 특징선택 방법을 이용한 질환 예측 머신러닝 모델)

  • Kim, Kyeongryun;Kim, Jaekwon;Lee, Jongsik
    • Journal of the Korea Society for Simulation / v.29 no.1 / pp.11-21 / 2020
  • Cardio-cerebrovascular diseases (CCD) account for 24% of deaths among Koreans, the highest proportion after cancer. Currently, the cardiovascular risk of domestic patients is assessed with the Framingham risk score (FRS), but its accuracy tends to be lower because it is a foreign guideline, and it cannot score the risk of cerebrovascular disease. CCD is hard to predict because the features of its early symptoms are difficult to analyze for prevention, so a prediction method appropriate for Koreans is needed. The purpose of this paper is to validate, through simulation on CCD data, a feature selection method based on feature-subset IG-MLP (Information Gain - Multilayer Perceptron) evaluation. The proposed method uses the raw data of the 4th-7th Korea National Health and Nutrition Examination Survey (KNHANES). To select the important features of CCD, the attributes are analyzed using IG-MLP, and finally a CCD-prediction ANN model using the optimized feature set is provided. The proposed method can identify important features for CCD prediction in Koreans, and the ANN model can predict CCD for Koreans more accurately.
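
A sketch of the feature-subset evaluation loop, under stated assumptions: features are ranked by information gain (mutual information in scikit-learn) and an MLP is cross-validated on growing top-k subsets, keeping the best; the synthetic dataset stands in for the KNHANES variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for KNHANES attributes and a binary CCD label.
X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=0)
order = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1]

best_k, best_score = None, -np.inf
for k in range(2, 31, 4):                   # candidate top-k feature subsets
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    score = cross_val_score(mlp, X[:, order[:k]], y, cv=3).mean()
    if score > best_score:
        best_k, best_score = k, score

print(best_k, round(best_score, 3))         # size of the optimized feature set
```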

A design of Optimized Vehicle Routing System(OVRS) based on RSU communication and deep learning (RSU 통신 및 딥러닝 기반 최적화 차량 라우팅 시스템 설계)

  • Son, Su-Rak;Lee, Byung-Kwan;Sim, Son-Kweon;Jeong, Yi-Na
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.2 / pp.129-137 / 2020
  • The autonomous vehicle market is currently researching and developing level-4 autonomous vehicles, beyond the commercialization of level-3 vehicles. Because a level-4 autonomous vehicle, unlike a level-3 one, must handle emergencies directly, its most important quality is stability. In this paper, we propose an Optimized Vehicle Routing System (OVRS) that determines the route to the vehicle's destination with the lowest probability of an accident, rather than relying on an immediate response once an emergency occurs. The OVRS analyzes road and surrounding-vehicle information collected through RSU communication to predict road hazards, and sets the route along safer and faster roads. The OVRS can improve vehicle stability by providing route guidance suited to the road situation through RSUs along the road, much like routing in a network. As a result, the RPNN of the ASICM, one of the OVRS modules, performed about 17% better than a CNN and 40% better than an LSTM. However, because the study was conducted in a virtual environment on a PC, the accident-prediction capability of the VPDM was not verified in practice. Therefore, future experiments should be conducted with real vehicles and RSUs, collecting accident data on actual roads, to achieve high VPDM accuracy.
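
The core routing idea (prefer the path with the lowest predicted accident probability rather than only the fastest) can be sketched as a risk-weighted shortest-path search; the toy graph, edge weights, and blending factor alpha are assumptions, and in the OVRS the risk values would come from the deep-learning modules fed by RSU data.

```python
import heapq

# Edges: (neighbor, travel_time_s, predicted_accident_risk in [0, 1]).
graph = {
    "A": [("B", 60, 0.02), ("C", 45, 0.30)],
    "B": [("D", 50, 0.05)],
    "C": [("D", 40, 0.25)],
    "D": [],
}

def safest_route(graph, src, dst, alpha=200.0):
    """Dijkstra on cost = time + alpha * risk; larger alpha favors safety."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t, risk in graph[node]:
            if nxt not in seen:
                heapq.heappush(pq, (cost + t + alpha * risk, nxt, path + [nxt]))
    return float("inf"), []

print(safest_route(graph, "A", "D"))  # picks A-B-D: slower but far safer
```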

Development of Bone Metastasis Detection Algorithm on Abdominal Computed Tomography Image using Pixel Wise Fully Convolutional Network (픽셀 단위 컨볼루션 네트워크를 이용한 복부 컴퓨터 단층촬영 영상 기반 골전이암 병변 검출 알고리즘 개발)

  • Kim, Jooyoung;Lee, Siyoung;Kim, Kyuri;Cho, Kyeongwon;You, Sungmin;So, Soonwon;Park, Eunkyoung;Cho, Baek Hwan;Choi, Dongil;Park, Hoon Ki;Kim, In Young
    • Journal of Biomedical Engineering Research / v.38 no.6 / pp.321-329 / 2017
  • This paper presents a bone metastasis detection algorithm for abdominal computed tomography images, aimed at early detection, using fully convolutional neural networks. The images were taken from patients with various cancers (such as lung cancer, breast cancer, and colorectal cancer), so the locations of the lesions varied. To overcome the lack of data, we augmented the data by adjusting the brightness of the images or flipping them. Before augmentation, when 70% of the whole dataset was used in a pre-test, we obtained an average pixel-wise sensitivity of 18.75% and specificity of 99.97% on the test dataset. With augmentation, we obtained a sensitivity of 30.65% and a specificity of 99.96%; the increase in sensitivity shows that the augmentation was effective. Using the whole dataset, we obtained a pixel-wise sensitivity of 38.62%, specificity of 99.94%, and accuracy of 99.81%; lesion-wise sensitivity was 88.89%, with 0.5 false alarms per case. The results of this study do not yet reach a level that could substitute for the clinician, but the algorithm may be helpful to radiologists as a screening tool.
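
The augmentation described (brightness adjustment and flipping) is straightforward to sketch in NumPy; the brightness offsets and the 0-255 value range are illustrative assumptions, since the paper does not state its exact parameters and CT data would normally be handled in Hounsfield units with windowing.

```python
import numpy as np

def augment(slice_2d):
    """Return the original CT slice plus brightness-shifted and flipped copies."""
    variants = [slice_2d]
    for delta in (-30, 30):                       # assumed brightness shifts
        variants.append(np.clip(slice_2d + delta, 0, 255))
    variants.append(np.fliplr(slice_2d))          # horizontal flip
    return variants

rng = np.random.default_rng(0)
ct = rng.integers(0, 256, size=(512, 512)).astype(np.float32)  # toy slice
augmented = augment(ct)                           # 1 original + 3 variants
```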

Study on High-speed Cyber Penetration Attack Analysis Technology based on Static Feature Base Applicable to Endpoints (Endpoint에 적용 가능한 정적 feature 기반 고속의 사이버 침투공격 분석기술 연구)

  • Hwang, Jun-ho;Hwang, Seon-bin;Kim, Su-jeong;Lee, Tae-jin
    • Journal of Internet Computing and Services / v.19 no.5 / pp.21-31 / 2018
  • Cyber penetration attacks can not only damage cyberspace but also strike entire infrastructures such as electricity, gas, water, and nuclear power, causing enormous damage to people's lives. Cyberspace has already been defined as the fifth battlefield, and strategic responses are very important. Most recent cyber attacks are carried out through malicious code, and since more than 1.6 million samples appear per day, automated analysis technology for coping with large volumes of malicious code is essential. However, encryption, obfuscation, and packing make malicious code difficult to handle, and dynamic analysis is limited not only by its performance requirements but also by virtual-environment evasion techniques. In this paper, we propose a machine learning-based malicious code analysis technique that improves on the detection weaknesses of existing analysis technologies while maintaining the lightweight, high-speed analysis performance applicable to commercial endpoints. On 71,000 normal and malicious files in a commercial environment, the proposed technique achieved 99.13% accuracy, 99.26% precision, and 99.09% recall, and it can analyze more than five files per second in a PC environment. It can operate independently in the endpoint environment and is expected to work in a complementary fashion with existing antivirus technology and static and dynamic analysis technologies. It is also expected to serve as a core element of EDR technology and malware variant analysis.
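
The abstract does not enumerate the static features used, so the sketch below substitutes a simple byte-histogram feature and a random-forest classifier to illustrate lightweight, high-speed static classification; every feature and model choice here is an assumption, not the paper's method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def byte_histogram(data: bytes) -> np.ndarray:
    """256-bin normalized byte-frequency histogram; cheap enough for endpoints."""
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / max(len(data), 1)

# Toy stand-ins for benign and malicious file contents.
rng = np.random.default_rng(0)
benign = [rng.integers(0, 128, 4096, dtype=np.uint8).tobytes() for _ in range(50)]
malicious = [rng.integers(0, 256, 4096, dtype=np.uint8).tobytes() for _ in range(50)]

X = np.array([byte_histogram(b) for b in benign + malicious])
y = np.array([0] * 50 + [1] * 50)                 # 0 = normal, 1 = malicious
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))                            # accuracy on the toy data
```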