• Title/Summary/Keyword: Unsupervised algorithm

Search Results: 279

Detection of Small Green Space in an Urban Area Using Airborne Hyperspectral Imagery and Spectral Angle Mapper (분광각매퍼 기법을 적용한 항공기 탑재 초분광영상의 소규모 녹지공간 탐지)

  • Kim, Tae-Woo; Choi, Don-Jeong; We, Gwang-Jae; Suh, Yong-Cheol
    • Journal of the Korean Association of Geographic Information Studies, v.16 no.2, pp.88-100, 2013
  • Urban green space is one of the most important elements of urban infrastructure for improving the quality of life of city dwellers, as it reduces the heat island effect and is used for recreation and relaxation. However, no systematic management of urban green space has been introduced in Korea, as past practices focused on efficient development. To preserve urban green space and to support 'regulations determining the total amount of greenery', a way to calculate the amount of green space needed to complement an urban area must be developed. In recent years, various studies have quantified urban green space and infrastructure using remotely sensed data; however, given the spatial resolution of the data used in existing research, it is difficult to detect the myriad of small green spaces in a city effectively. In this paper, we quantified small urban green spaces using CASI-1500 airborne hyperspectral imagery. We calculated MCARI, a vegetation index for hyperspectral imagery, to evaluate the greenness of small green spaces. In addition, we applied both unsupervised and supervised image classification, the ISODATA algorithm and the Spectral Angle Mapper respectively, to detect small green spaces. This allowed land cover to be categorized into four classes: unclassified, impervious, suspected green, and vegetation green.
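The abstract names both MCARI and the Spectral Angle Mapper. As a rough illustration only, here is a minimal NumPy sketch of the two ideas: the standard MCARI formula (Daughtry et al., 2000) and SAM classification of pixel spectra against reference spectra. The paper's actual CASI-1500 pipeline, band choices, and thresholds are not reproduced, and the `max_angle` cutoff is a hypothetical parameter.

```python
import numpy as np

def mcari(r550, r670, r700):
    """Modified Chlorophyll Absorption Ratio Index (Daughtry et al., 2000)."""
    return ((r700 - r670) - 0.2 * (r700 - r550)) * (r700 / r670)

def sam_classify(image, references, max_angle=0.10):
    """Classify each pixel by the Spectral Angle Mapper rule.

    image:      (rows, cols, bands) hyperspectral cube
    references: (n_classes, bands) endmember spectra from training areas
    max_angle:  pixels whose best angle (radians) exceeds this stay unclassified (-1)
    """
    rows, cols, bands = image.shape
    pixels = image.reshape(-1, bands).astype(float)

    # cos(theta) = <t, r> / (|t||r|) for every pixel/reference pair
    dots = pixels @ references.T
    norms = np.linalg.norm(pixels, axis=1, keepdims=True) * \
            np.linalg.norm(references, axis=1)
    angles = np.arccos(np.clip(dots / norms, -1.0, 1.0))

    labels = angles.argmin(axis=1)              # closest reference spectrum
    labels[angles.min(axis=1) > max_angle] = -1 # too dissimilar: unclassified
    return labels.reshape(rows, cols)

# Toy demo: a 2x2 "cube" with two bands and two reference spectra
img = np.array([[[0.1, 0.8], [0.7, 0.2]],
                [[0.1, 0.9], [0.6, 0.3]]])
refs = np.array([[0.1, 0.85],   # vegetation-like spectrum
                 [0.7, 0.25]])  # impervious-like spectrum
print(sam_classify(img, refs, max_angle=0.3))
```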

A Study on the UAV-based Vegetation Index Comparison for Detection of Pine Wilt Disease Trees (소나무재선충병 피해목 탐지를 위한 UAV기반의 식생지수 비교 연구)

  • Jung, Yoon-Young; Kim, Sang-Wook
    • Journal of Cadastre & Land InformatiX, v.50 no.1, pp.201-214, 2020
  • This study aimed at the early detection of trees damaged by pine wilt disease using vegetation indices derived from UAV images. Location data for 193 pine-wilt-diseased trees were collected through field surveys, and vegetation index analyses of NDVI, GNDVI, NDRE, and SAVI were performed using multispectral UAV images acquired at the same time. The K-Means algorithm was adopted to classify damaged trees, and a confusion matrix was used to compare and analyze classification accuracy. The results are summarized as follows. First, the overall classification accuracy was, in order, NDVI (88.04%, Kappa coefficient 0.76) > GNDVI (86.01%, Kappa coefficient 0.72) > NDRE (77.35%, Kappa coefficient 0.55) > SAVI (76.84%, Kappa coefficient 0.54), with NDVI showing the highest accuracy. Second, K-Means unsupervised classification using NDVI or GNDVI can identify damaged trees to some extent. In particular, this technique can help with early detection of damaged trees thanks to its straightforward operation, low user intervention, and relatively simple analysis process. In the future, the use of time-series images or the application of deep learning techniques is expected to increase classification accuracy.
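For readers unfamiliar with this pipeline, below is a small, self-contained sketch of the NDVI-plus-K-Means step the abstract describes. It assumes scikit-learn's KMeans as the K-Means implementation and uses synthetic band values in place of the UAV orthomosaic; the paper's preprocessing, calibration, and accuracy assessment are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
    return (nir - red) / (nir + red + 1e-9)

def flag_damaged(nir, red, n_clusters=2, seed=0):
    """Cluster NDVI values with K-Means and flag the low-NDVI cluster.

    Returns a boolean mask marking pixels in the cluster whose mean NDVI
    is lowest, i.e. the suspected damaged trees.
    """
    index = ndvi(nir, red)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(index.reshape(-1, 1)).reshape(index.shape)
    damaged = int(np.argmin(km.cluster_centers_.ravel()))
    return labels == damaged

# Synthetic 100x100 scene: healthy canopy with a low-NDVI patch.
rng = np.random.default_rng(0)
nir = rng.uniform(0.4, 0.6, (100, 100))
red = rng.uniform(0.05, 0.15, (100, 100))
red[40:60, 40:60] = 0.45          # stressed trees reflect more red light
mask = flag_damaged(nir, red)
print("flagged pixels:", mask.sum())
```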

Lesion Detection in Chest X-ray Images Based on a Coreset of Patch Features (패치 특징 코어세트 기반의 흉부 X-Ray 영상에서의 병변 유무 감지)

  • Kim, Hyun-bin; Chun, Jun-Chul
    • Journal of Internet Computing and Services, v.23 no.3, pp.35-45, 2022
  • Even in recent years, the treatment of emergency patients is still often delayed due to a shortage of medical resources in marginalized areas. Research on automating the analysis of medical data, to address both the inaccessibility of medical services and the shortage of medical personnel, is ongoing. Computer-vision-based automation of medical inspection requires substantial cost for collecting and labeling training data. These problems stand out when classifying lesions that are rare, or pathological features and pathogenesis that are difficult to define clearly by visual inspection. Anomaly detection is attracting attention as a method that can significantly reduce the cost of data collection by adopting an unsupervised learning strategy. In this paper, building on existing anomaly detection techniques, we propose a method for detecting abnormal chest X-ray images as follows. (1) Normalize the brightness range of medical images resampled to an optimal resolution. (2) Select feature vectors with high representative power from the set of patch features extracted at an intermediate level from lesion-free images. (3) Measure the difference from the feature vectors of lesion-free data selected via a nearest-neighbor search algorithm. The proposed system can simultaneously perform anomaly classification and localization for each image. The anomaly detection performance of the proposed system on chest X-ray images of PA projection is measured and presented under detailed conditions. We demonstrate the effectiveness of anomaly detection for medical images by showing a classification AUROC of 0.705 on a random subset extracted from the PadChest dataset. The proposed system can be used to improve the clinical diagnosis workflow of medical institutions and can effectively support early diagnosis in medically underserved areas.
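The three numbered steps resemble coreset-based nearest-neighbor anomaly detection (in the spirit of PatchCore-style methods). Below is a minimal NumPy sketch of steps (2) and (3): a greedy k-center coreset over normal patch features, followed by nearest-neighbor scoring. The CNN feature extractor, resolutions, and normalization of step (1) are omitted, and all data here are synthetic stand-ins.

```python
import numpy as np

def greedy_coreset(features, m, seed=0):
    """Greedy k-center selection: pick m features that cover the normal set.

    features: (N, D) patch features from lesion-free images
    Returns indices of the selected subset.
    """
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(features)))]
    # Distance of every feature to the nearest selected one so far
    d = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(m - 1):
        nxt = int(d.argmax())        # farthest point joins the coreset
        selected.append(nxt)
        d = np.minimum(d, np.linalg.norm(features - features[nxt], axis=1))
    return np.array(selected)

def anomaly_scores(test_feats, coreset):
    """Score each test patch by its distance to the nearest coreset feature."""
    d = np.linalg.norm(test_feats[:, None, :] - coreset[None, :, :], axis=2)
    return d.min(axis=1)

# Synthetic stand-in for mid-level patch features
rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, (2000, 64))
core = normal[greedy_coreset(normal, m=100)]
test = np.vstack([rng.normal(0, 1, (10, 64)),    # normal-looking patches
                  rng.normal(4, 1, (10, 64))])   # abnormal patches
scores = anomaly_scores(test, core)
print(scores.round(2))   # an image-level score could be scores.max()
```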

Forest Fire Area Extraction Method Using VIIRS (VIIRS를 활용한 산불 피해 범위 추출 방법 연구)

  • Chae, Hanseong; Ahn, Jaeseong; Choi, Jinmu
    • Korean Journal of Remote Sensing, v.38 no.5_2, pp.669-683, 2022
  • The frequency and damage of forest fires have tended to increase over the past 20 years. To respond to forest fires effectively, information on forest fire damage should be well managed; however, information on the extent of damage is not. This study presents a method of extracting the area affected by a forest fire in real time and near-real time using Visible Infrared Imaging Radiometer Suite (VIIRS) images. VIIRS data observing the Korean Peninsula were obtained and visualized for the time of the East Coast forest fire of March 2022. The VIIRS images were classified without supervision using the Iterative Self-Organizing Data Analysis (ISODATA) algorithm, and the results were reclassified using the relationship between the burned area and the locations of flames to extract the extent of the fire. The final results were compared against verification and reference data. For large forest fires, classifying VIIRS images and extracting the burned area proved more accurate than estimating it from forest fire occurrence records. This method can be used to create spatial data for forest fire management; furthermore, if the research method is automated, daily VIIRS-based monitoring of forest fire damage is expected to become possible.
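ISODATA is essentially k-means with adaptive splitting and merging of clusters, so the number of classes adjusts to the data. The sketch below is a bare-bones NumPy version under simplifying assumptions (fixed iteration count, hypothetical split/merge thresholds, and at least one sufficiently large cluster surviving each pass); operational implementations add more bookkeeping, and the paper's reclassification by flame location is not shown.

```python
import numpy as np

def isodata(x, k_init=8, max_iter=20, min_size=20,
            split_std=1.5, merge_dist=1.0, seed=0):
    """Minimal ISODATA-style clustering of pixel vectors x with shape (N, D)."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), k_init, replace=False)]
    for _ in range(max_iter):
        # Assign each pixel to its nearest center
        d = np.linalg.norm(x[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        new = []
        for c in range(len(centers)):
            members = x[labels == c]
            if len(members) < min_size:    # discard tiny clusters
                continue
            mu, sd = members.mean(axis=0), members.std(axis=0).max()
            if sd > split_std:             # split an over-dispersed cluster
                new += [mu + sd / 2, mu - sd / 2]
            else:
                new.append(mu)
        centers = np.array(new)
        # Merge centers that are closer than merge_dist
        keep = np.ones(len(centers), bool)
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                if keep[j] and np.linalg.norm(centers[i] - centers[j]) < merge_dist:
                    centers[i] = (centers[i] + centers[j]) / 2
                    keep[j] = False
        centers = centers[keep]
    d = np.linalg.norm(x[:, None] - centers[None], axis=2)
    return d.argmin(axis=1), centers

# Toy demo: two well-separated 1-D brightness populations
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 0.3, (300, 1)),
                    rng.normal(5, 0.3, (300, 1))])
labels, centers = isodata(x, k_init=6)
print(len(centers), "clusters found")
```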

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim; Kim, Ji Hui; Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.1-21, 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market or technology opportunities and to support rational decision-making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand across fields for market information at the level of specific products. However, such information has generally been provided at the industry level, or in broad categories based on classification standards, making specific and suitable information difficult to obtain. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than those previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales figures for the extracted products are summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then used a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently: product names similar to KSIC index words were extracted based on cosine similarity. The market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or on multiple assumptions. In addition, the level of market category can be adjusted easily and efficiently, according to the purpose of the information use, by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application, since it can resolve unmet needs for detailed market size information in the public and private sectors.
Specifically, it can be utilized in the technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs improvement in accuracy and reliability. The semantics-based word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec. The product-group clustering could also be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed here.
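Since the paper reports its tuned Word2Vec parameters (vector dimension 300, window size 15), a minimal gensim sketch of the embed-then-group step might look as follows. The corpus here is a toy stand-in for the Statistics Korea microdata, and the similarity threshold is a placeholder for the tuned value.

```python
from gensim.models import Word2Vec

# Product names tokenized into word lists (toy stand-in for the
# Statistics Korea microdata used in the paper).
corpus = [
    ["stainless", "steel", "kitchen", "sink"],
    ["steel", "sink", "bowl"],
    ["cotton", "t", "shirt"],
    ["polyester", "t", "shirt"],
]

# Parameters follow the paper's tuned values: 300 dimensions, window 15.
model = Word2Vec(corpus, vector_size=300, window=15, min_count=1, seed=0)

# Group products by cosine similarity to an index word; the similarity
# threshold controls how fine-grained the product groups are.
threshold = 0.0   # toy placeholder; the paper tunes this value
group = [w for w, s in model.wv.most_similar("sink", topn=10) if s > threshold]
print(group)

# The market size of the group would then be the sum of sales over the
# matched products, taken from the companies' sales microdata.
```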

Evaluation of Endothelium-dependent Myocardial Perfusion Reserve in Healthy Smokers: Cold Pressor Test using H₂¹⁵O PET (흡연자에서 관상동맥 내피세포 의존성 심근 혈류 예비능: H₂¹⁵O PET 찬물자극 검사에 의한 평가)

  • Hwang, Kyung-Hoon; Lee, Dong-Soo; Lee, Byeong-Il; Lee, Jae-Sung; Lee, Ho-Young; Chung, June-Key; Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine, v.38 no.1, pp.21-29, 2004
  • Purpose: Much evidence suggests that long-term cigarette smoking alters the coronary vascular endothelial response. In this study, we applied nonnegative matrix factorization (NMF), an unsupervised learning algorithm, to CO-less H₂¹⁵O PET to investigate coronary endothelial dysfunction caused by smoking noninvasively. Materials and Methods: This study enrolled eighteen young male volunteers: 9 smokers (23.8±1.1 yr; 6.5±2.5 pack-years) and 9 non-smokers (23.8±2.9 yr), none of whom had any cardiovascular risk factor or disease history. Myocardial H₂¹⁵O PET was performed at rest, during cold (5°C) pressor stimulation, and during adenosine infusion. The left ventricular blood pool and myocardium were segmented from the dynamic PET data by the NMF method. Myocardial blood flow (MBF) was calculated from the input and tissue functions by a single-compartment model with correction for partial volume and spillover effects. Results: There was no significant difference in resting MBF between the two groups (smokers: 1.43±0.41 ml/g/min; non-smokers: 1.37±0.41 ml/g/min; p=NS). During cold pressor stimulation, MBF in smokers was significantly lower than that in non-smokers (1.25±0.34 ml/g/min vs. 1.59±0.29 ml/g/min; p=0.019). The difference in the ratio of cold pressor MBF to resting MBF between the two groups was also significant (90±24% in smokers vs. 122±28% in non-smokers; p=0.024). During adenosine infusion, however, hyperemic MBF did not differ significantly between smokers and non-smokers (5.81±1.99 ml/g/min vs. 5.11±1.31 ml/g/min; p=NS). Conclusion: In smokers, MBF during cold pressor stimulation was significantly lower compared with non-smokers, reflecting smoking-induced endothelial dysfunction. However, there was no significant difference in MBF during adenosine-induced hyperemia between the two groups.
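As a rough sketch of the segmentation idea, rank-2 nonnegative matrix factorization can separate a dynamic PET matrix (voxels × time frames) into two nonnegative time-activity curves and their voxel loadings. The toy curves below are assumptions standing in for real blood-pool and myocardial kinetics, scikit-learn's NMF stands in for whatever implementation the authors used, and the paper's compartmental MBF modeling is not reproduced.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy dynamic PET: 500 voxels x 30 time frames, mixing two sources
# (an early-peaking blood-pool-like curve and a slower tissue curve).
rng = np.random.default_rng(0)
t = np.arange(30, dtype=float)
blood = np.exp(-t / 5.0) * t            # early, sharp input-like curve
tissue = 1.0 - np.exp(-t / 10.0)        # slower tissue uptake curve
mix = rng.uniform(0, 1, (500, 2))
frames = mix @ np.vstack([blood, tissue]) + rng.uniform(0, 0.05, (500, 30))

# Rank-2 NMF: rows of H approximate the two time-activity curves,
# columns of W give each voxel's loading on blood pool vs. myocardium.
nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(frames)
H = nmf.components_

# Voxels are segmented by their dominant factor loading.
segmentation = W.argmax(axis=1)
print(np.bincount(segmentation))
```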

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo; Park, Byeonghwa
    • Journal of Intelligence and Information Systems, v.21 no.1, pp.1-13, 2015
  • As opinion mining in big data applications has been highlighted, much research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, often in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses; as a result, it is very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can produce results far from users' intentions. Although much progress has been made over the years in improving search engines so that they return appropriate results, there is still much room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in the area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. We test the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean Standard Unabridged Dictionary and the Sejong Corpus. The Korean Standard Unabridged Dictionary has approximately 57,000 sentences; the Sejong Corpus has about 790,000 sentences tagged with both parts of speech and senses. For the experiments, the dictionary and the corpus were evaluated both combined and separately, using cross-validation. Only nouns, the targets of word sense disambiguation here, were selected: 93,522 word senses among 265,655 nouns, and 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the dictionary because it is tagged with the sense indices defined by the Korean Standard Unabridged Dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named-entity dictionary of a Korean morphological analyzer. Using the extended named-entity dictionary, term vectors were extracted from the input sentences, and term vectors for the sentences were created. Given an extracted term vector and the sense vector model built during preprocessing, the sense-tagged terms were determined by vector-space-model-based word sense disambiguation. In addition, this study shows the effectiveness of merging the examples in the Korean Standard Unabridged Dictionary with the Sejong Corpus: the experiments show better precision and recall with the merged corpus. This suggests the method can practically enhance the performance of Internet search engines and help extract more accurate sentence meanings in natural language processing tasks relevant to search, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem, under the assumption that all senses are independent.
Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research is needed to consider all possible combinations, or partial combinations, of the senses in a sentence. The effectiveness of word sense disambiguation might also be improved if rhetorical structures or morphological dependencies between words were analyzed through syntactic analysis.
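The classifier the abstract describes reduces to choosing the sense s that maximizes log P(s) + Σ_w log P(w|s) over the context words w. A minimal sketch with add-one smoothing follows, using a toy English sense-tagged corpus as a stand-in for the merged Korean corpus.

```python
import math
from collections import Counter, defaultdict

# Toy sense-tagged corpus: each example pairs a sense label for the
# ambiguous noun "bat" with its context words (standing in for dictionary
# example sentences merged with a sense-tagged corpus).
corpus = [
    ("bat_animal", ["cave", "night", "wing", "fly"]),
    ("bat_animal", ["cave", "fruit", "wing"]),
    ("bat_sports", ["baseball", "swing", "hit"]),
    ("bat_sports", ["swing", "pitch", "baseball"]),
]

sense_count = Counter(s for s, _ in corpus)
word_count = defaultdict(Counter)
vocab = set()
for sense, words in corpus:
    word_count[sense].update(words)
    vocab.update(words)

def disambiguate(context):
    """Pick the sense maximizing log P(s) + sum_w log P(w|s), add-one smoothed."""
    best, best_lp = None, -math.inf
    for sense, n_s in sense_count.items():
        lp = math.log(n_s / len(corpus))              # prior P(s)
        total = sum(word_count[sense].values())
        for w in context:
            lp += math.log((word_count[sense][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

print(disambiguate(["wing", "night"]))      # -> bat_animal
print(disambiguate(["swing", "baseball"]))  # -> bat_sports
```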

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun; Park, Jung Min; Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems, v.21 no.4, pp.111-131, 2015
  • Only a handful of studies have been conducted on the pattern analysis of corporate distress, compared with research on bankruptcy prediction, and the few that exist mainly focus on audited firms, for which financial data are easier to collect. In reality, however, corporate financial distress is a far more common and critical phenomenon among non-audited firms, which are mainly small and medium-sized. The purpose of this paper is to classify distressed non-audited firms according to their financial ratios using a data mining technique, the Self-Organizing Map (SOM). A SOM is a type of artificial neural network trained with unsupervised learning to produce a lower-dimensional discretized representation of the input space of the training samples, called a map. It differs from other artificial neural networks in that it applies competitive learning, as opposed to error-correction learning such as backpropagation with gradient descent, and in that it uses a neighborhood function to preserve the topological properties of the input space. It is a popular and successful clustering algorithm. In this study, we classify the types of financially distressed firms, specifically non-audited firms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms that were in distress in 2004, covering the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fall into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fall into this pattern. In pattern 3, the growth ratio was the worst among all patterns; the firms of this pattern may have been under distress due to severe competition in their industries, and approximately 30% of the firms fell into this group. In pattern 4, the growth ratio was higher than in any other pattern, but the cash and profitability ratios did not keep up with it; the firms of this pattern appear to have become distressed while expanding their business, and about 25% of the firms are in this pattern. Finally, pattern 5 encompassed very solvent firms; these were perhaps distressed by a bad short-term strategic decision or by problems with the firms' entrepreneurs, and approximately 18% of the firms fall under this pattern. This study makes both academic and empirical contributions. Academically, non-audited companies, which tend to go bankrupt easily and whose financial data are unstructured or easily manipulated, are classified with a data mining technique (the Self-Organizing Map), rather than large audited firms with well-prepared and reliable financial data. Empirically, even though only the financial data of non-audited firms are analyzed, the results are useful for spotting the first symptoms of financial distress, which helps forecast bankruptcy and manage early-warning signals. A limitation of this research is that only 100 firms could be analyzed, owing to the difficulty of collecting financial data on non-audited firms, which made analysis by category or firm size impractical.
Non-financial qualitative data are also crucial for bankruptcy analysis, so non-financial qualitative factors should be taken into account in a follow-up study. This study sheds some light on distress prediction for non-audited small and medium-sized firms.
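As an illustration of the clustering step, the sketch below uses the third-party MiniSom library (an assumption; the paper does not name its implementation) to map 10 standardized financial ratios onto a small grid and read off per-unit firm counts, analogous to the distress patterns reported. The data are random stand-ins for the 100 firms' ratios.

```python
import numpy as np
from minisom import MiniSom   # pip install minisom

# Toy stand-in for 10 financial ratios of 100 distressed firms,
# z-scored so that each ratio contributes comparably to the map.
rng = np.random.default_rng(0)
ratios = rng.normal(0, 1, (100, 10))
ratios = (ratios - ratios.mean(axis=0)) / ratios.std(axis=0)

# A small rectangular map; competitive learning with a neighborhood
# function preserves the topology of the input space.
som = MiniSom(3, 2, input_len=10, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(ratios)
som.train_random(ratios, num_iteration=1000)

# Each firm maps to its best-matching unit; units act as distress patterns.
patterns = np.array([som.winner(r) for r in ratios])
units, counts = np.unique(patterns, axis=0, return_counts=True)
for u, c in zip(units, counts):
    print(tuple(u), c, "firms")
```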

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok; Lee, Hyun Jun; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing mass of content is becoming ever more important as information generation accelerates. In this flood of information, efforts are being made to better reflect the intention of the user in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft are also focusing on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful and promising, because new information is constantly generated, and the earlier information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as finance, where the information flow is vast and new information continues to emerge, but it faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of the knowledge grow and its patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limitations and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, the study offers three contributions. First, it presents a practical and simple automatic knowledge-extraction method that can be applied directly. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis, without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the other 45% as the testing set. Before constructing the model, all reports in the training set are grouped by stock, and their entities are extracted using the KKMA named-entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained.
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock of the function with the highest score is predicted as the item related to the entity. To evaluate the presented model, we measure its predictive power, and check whether the score functions are well constructed, by calculating the hit ratio over all reports in the testing set. In the empirical study, the presented model shows 69.3% hit accuracy on a testing set of 2,526 reports; this hit ratio is meaningfully high despite some constraints on the research. Looking at the model's performance per stock, only three stocks (LG ELECTRONICS, KiaMtr, and Mando) show performance far below average, possibly because of interference from other, similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, that are needed to search for related information according to a user's investment intention. Graph data are generated using only the named-entity recognition tool and fed to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limitations remain: notably, the markedly poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new textual information semantically with the related stocks.
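For reference, the scoring function of a neural tensor network (Socher et al., 2013) computes u^T tanh(e1^T W[1:k] e2 + V[e1; e2] + b) for a pair of entity vectors. The PyTorch sketch below is a generic version of one such score function (the paper trains one per stock); the dimensions, the training loop, and the report pipeline are assumptions, and the entity index used in the demo is hypothetical.

```python
import torch
import torch.nn as nn

class NeuralTensorNetwork(nn.Module):
    """NTN score: u^T tanh(e1^T W[1:k] e2 + V [e1; e2] + b).

    The paper trains one such score function per stock; a new entity is
    scored by every function, and the highest-scoring stock is predicted.
    """
    def __init__(self, dim, k=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.1)  # bilinear slices
        self.V = nn.Linear(2 * dim, k)                          # linear term (adds b)
        self.u = nn.Linear(k, 1, bias=False)

    def forward(self, e1, e2):
        # Bilinear term: one scalar per tensor slice -> (batch, k)
        bilinear = torch.einsum("bd,kde,be->bk", e1, self.W, e2)
        hidden = torch.tanh(bilinear + self.V(torch.cat([e1, e2], dim=1)))
        return self.u(hidden).squeeze(-1)                       # (batch,)

# Toy usage: a one-hot entity vector (the paper one-hot encodes the top-100
# entities per stock) scored against a randomly initialized representation.
dim = 100
ntn = NeuralTensorNetwork(dim)
entity = torch.zeros(1, dim)
entity[0, 7] = 1.0                 # hypothetical entity index
other = torch.randn(1, dim)        # stand-in for a paired representation
print(ntn(entity, other).item())
```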