• Title/Summary/Keyword: Sampling-Based Algorithm

Search Results: 480

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies. There has been continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and appropriate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, the data related to product information is collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products is summed to estimate the market size of each product group. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 for further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to the KSIC index words were extracted based on cosine similarity. The market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional sampling-based methods and methods requiring multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted to the purpose of the information by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application since it can address unmet needs for detailed market size information in the public and private sectors.
Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantics-based word embedding module can be advanced by imposing a proper ordering on the preprocessed dataset or by combining another algorithm, such as Jaccard similarity, with Word2Vec. The product group clustering step can also be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
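
As a minimal sketch of the pipeline described above (embedding product names with Word2Vec, grouping names around a seed term by cosine similarity, then summing their sales), the Python snippet below uses gensim; the toy corpus, seed word, similarity threshold, and sales table are illustrative assumptions rather than the authors' data.

```python
# Illustrative sketch only: Word2Vec embedding of product names, cosine-similarity
# grouping around a seed keyword, and summation of sales for the resulting group.
# The corpus, seed term, threshold, and sales table are assumptions for illustration.
import pandas as pd
from gensim.models import Word2Vec

# Tokenized product names (one token list per record)
corpus = [["stainless", "water", "bottle"], ["plastic", "water", "bottle"],
          ["laptop", "cooling", "pad"], ["thermos", "bottle"]]

# Parameters reported in the paper: vector dimension 300, window size 15
model = Word2Vec(corpus, vector_size=300, window=15, min_count=1, sg=1)

# Hypothetical per-product sales figures
sales = pd.DataFrame({"product": ["bottle", "thermos", "laptop"],
                      "sales": [1_000, 500, 2_000]})

seed = "bottle"      # a KSIC-style index word (illustrative)
threshold = 0.5      # cosine-similarity cutoff; raise or lower to narrow/widen the group
similar = {w for w, s in model.wv.most_similar(seed, topn=50) if s >= threshold}
similar.add(seed)

market_size = sales.loc[sales["product"].isin(similar), "sales"].sum()
print(f"Estimated market size for the '{seed}' group: {market_size}")
```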

Efficient Deep Learning Approaches for Active Fire Detection Using Himawari-8 Geostationary Satellite Images (Himawari-8 정지궤도 위성 영상을 활용한 딥러닝 기반 산불 탐지의 효율적 방안 제시)

  • Sihyun Lee;Yoojin Kang;Taejun Sung;Jungho Im
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.979-995
    • /
    • 2023
  • As wildfires are difficult to predict, real-time monitoring is crucial for a timely response. Geostationary satellite images are very useful for active fire detection because they can monitor a vast area with high temporal resolution (e.g., 2 min). Existing satellite-based active fire detection algorithms detect thermal outliers using threshold values based on the statistical analysis of brightness temperature. However, the difficulty of establishing suitable thresholds for such threshold-based methods hinders their ability to detect low-intensity fires and to achieve generalized performance. In light of these challenges, machine learning has emerged as a potential solution. Until now, relatively simple techniques such as random forest, vanilla convolutional neural networks (CNN), and U-Net have been applied to active fire detection. Therefore, this study proposed an active fire detection algorithm using state-of-the-art (SOTA) deep learning techniques with data from the Advanced Himawari Imager and evaluated it over East Asia and Australia. The SOTA model was developed by applying EfficientNet and the Lion optimizer, and the results were compared with a model using the vanilla CNN structure. EfficientNet outperformed the CNN with F1-scores of 0.88 and 0.83 in East Asia and Australia, respectively. Performance improved further when weighted loss, equal sampling, and image augmentation techniques were applied to address the data imbalance, resulting in F1-scores of 0.92 in East Asia and 0.84 in Australia. It is anticipated that timely responses facilitated by the SOTA deep learning-based approach for active fire detection will effectively mitigate the damage caused by wildfires.
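
One of the imbalance-handling techniques mentioned above, a class-weighted loss, can be sketched as follows in PyTorch; the class weight, tensor shapes, and random inputs are placeholders, not the paper's EfficientNet configuration.

```python
# Minimal sketch of up-weighting the rare fire class in a per-pixel binary loss.
# The weight, shapes, and random tensors below are illustrative placeholders.
import torch
import torch.nn as nn

# Per-pixel labels: 1 = active fire (rare), 0 = background (dominant)
logits = torch.randn(4, 1, 64, 64, requires_grad=True)   # raw model outputs for a batch
labels = torch.zeros(4, 1, 64, 64)
labels[0, 0, 10:12, 10:12] = 1.0                          # a few fire pixels

# pos_weight up-weights the positive (fire) class, e.g., the background/fire pixel ratio
pos_weight = torch.tensor([50.0])                         # assumed ratio; tune per dataset
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

loss = criterion(logits, labels)
loss.backward()
print(loss.item())
```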

Analysis of Research Trends in SIAM Journal on Applied Mathematics Using Topic Modeling (토픽모델링을 활용한 SIAM Journal on Applied Mathematics의 연구 동향 분석)

  • Kim, Sung-Yeun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.7
    • /
    • pp.607-615
    • /
    • 2020
  • The purpose of this study was to analyze the research status and trends related to industrial mathematics based on text mining techniques, using a sample of 4,910 papers published in the SIAM Journal on Applied Mathematics from 1970 to 2019. The R program was used to collect titles, abstracts, and keywords from the papers and to perform topic modeling based on the LDA algorithm. Based on coherence scores for the collected papers, 20 topics were determined to be optimal using Gibbs sampling. The main results were as follows. First, studies on industrial mathematics were conducted in a variety of mathematical fields, including computational mathematics, geometry, mathematical modeling, topology, discrete mathematics, and probability and statistics, with a focus on analysis and algebra. Second, 5 hot topics (mathematical biology, nonlinear partial differential equations, discrete mathematics, statistics, topology) and 1 cold topic (probability theory) were found based on time series regression analysis. Third, among the fields not reflected in the 2015 revised mathematics curriculum, numeral systems, matrices, vectors in space, and complex numbers were extracted as contents to be covered in the high school mathematics curriculum. Finally, this study suggested strategies to promote industrial mathematics in Korea, described the study's limitations, and proposed directions for future research.
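
The topic-count selection step can be illustrated with a short Python sketch; the study used R with Gibbs sampling, so the gensim variational LDA, the u_mass coherence measure, the toy documents, and the small search range below are stand-ins for illustration only.

```python
# Illustrative sketch of choosing the number of LDA topics by coherence score.
# The study used R and Gibbs-sampled LDA; this gensim version, the toy documents,
# and the small topic range are placeholders for illustration.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

docs = [["nonlinear", "partial", "differential", "equation"],
        ["random", "graph", "discrete", "mathematics"],
        ["bayesian", "inference", "statistics", "sampling"]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

best_k, best_score = None, float("-inf")
for k in range(2, 4):                      # the paper scanned a wider range and settled on 20
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                   passes=10, random_state=0)
    score = CoherenceModel(model=lda, corpus=corpus, dictionary=dictionary,
                           coherence="u_mass").get_coherence()
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)
```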

A Study of Establishment and application Algorithm of Artificial Intelligence Training Data on Land use/cover Using Aerial Photograph and Satellite Images (항공 및 위성영상을 활용한 토지피복 관련 인공지능 학습 데이터 구축 및 알고리즘 적용 연구)

  • Lee, Seong-hyeok;Lee, Moung-jin
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.871-884
    • /
    • 2021
  • The purpose of this study was to determine ways to increase efficiency in constructing and verifying artificial intelligence learning data on land cover using aerial and satellite images, and in applying the data to AI learning algorithms. To this end, multi-resolution datasets of 0.51 m and 10 m for 8 land cover categories were constructed using high-resolution aerial images and satellite images obtained from the Sentinel-2 satellites. Furthermore, fine data (a total of 17,000 pieces) and coarse data (a total of 33,000 pieces) were constructed simultaneously to achieve two goals: precise detection of land cover changes and the establishment of large-scale learning datasets. To secure the accuracy of the learning data, verification was performed in three steps: data refining, annotation, and sampling. The learning data that was finally verified was applied to the semantic segmentation algorithms U-Net and DeepLabV3+, and the results were analyzed. Based on the analysis, the average accuracy for land cover based on aerial imagery was 77.8% for U-Net and 76.3% for DeepLabV3+, while for land cover based on satellite imagery it was 91.4% for U-Net and 85.8% for DeepLabV3+. The artificial intelligence learning datasets on land cover constructed using high-resolution aerial and satellite images in this study can be used as reference data to help classify land cover and identify relevant changes. Therefore, it is expected that this study's findings can be used in various fields of artificial intelligence research on land cover and in constructing an artificial intelligence learning dataset covering the whole of Korea.
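
A common way to report averages like those quoted above is per-category pixel accuracy; the short sketch below computes it for a segmentation result, with random arrays and the 8-class assumption standing in for the study's actual rasters.

```python
# Sketch of per-category pixel accuracy for a semantic-segmentation result, the kind
# of metric summarized above. The random arrays are placeholders, not the study's data.
import numpy as np

num_classes = 8                                              # the study used 8 land cover categories
true = np.random.randint(0, num_classes, size=(512, 512))    # reference labels
pred = np.random.randint(0, num_classes, size=(512, 512))    # model predictions

per_class_acc = []
for c in range(num_classes):
    mask = (true == c)
    if mask.any():
        per_class_acc.append((pred[mask] == c).mean())
print("mean per-class accuracy:", float(np.mean(per_class_acc)))
```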

Detection of Zebra-crossing Areas Based on Deep Learning with Combination of SegNet and ResNet (SegNet과 ResNet을 조합한 딥러닝에 기반한 횡단보도 영역 검출)

  • Liang, Han;Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.3
    • /
    • pp.141-148
    • /
    • 2021
  • This paper presents a method to detect zebra-crossings using deep learning that combines SegNet and ResNet. For the blind, a safe crossing system that indicates exactly where zebra-crossings are located is important. Zebra-crossing detection by deep learning can be a good solution to this problem, and robotic vision-based assistive technologies focusing on specific scene objects with monocular detectors have sprung up over the past few years. These traditional methods have achieved significant results, albeit with relatively long processing times, and have enhanced zebra-crossing perception to a large extent. However, running all detectors jointly incurs a long latency and becomes computationally prohibitive on wearable embedded systems. In this paper, we propose a model for fast and stable segmentation of zebra-crossings from captured images. The model is built on a combination of SegNet and ResNet and consists of three steps. First, the input image is subsampled to extract image features, and the convolutional neural network of ResNet is modified to serve as the new encoder. Second, through SegNet's original up-sampling network, the abstract features are restored to the original image size. Finally, the method classifies all pixels and calculates the accuracy of each pixel. The experimental results demonstrate the efficiency of the modified semantic segmentation algorithm at a relatively high computing speed.
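
The idea of swapping a ResNet backbone in as the encoder of an encoder-decoder segmentation network can be sketched in PyTorch as below; the truncated ResNet-18 backbone and the single-stage bilinear decoder are simplifications assumed for illustration, not the paper's exact architecture.

```python
# Simplified sketch of a ResNet encoder feeding a small up-sampling decoder, in the
# spirit of the SegNet+ResNet combination described above. The backbone choice and
# one-stage decoder are illustrative simplifications.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ResNetSegmenter(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = resnet18(weights=None)
        # Keep everything up to the last residual stage as the encoder (1/32 resolution, 512 channels)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Simplified decoder: project, upsample back to input resolution, classify each pixel
        self.decoder = nn.Sequential(
            nn.Conv2d(512, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
            nn.Conv2d(64, num_classes, kernel_size=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ResNetSegmenter(num_classes=2)           # zebra-crossing vs. background
out = model(torch.randn(1, 3, 224, 224))         # -> (1, 2, 224, 224)
print(out.shape)
```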

Determinants of Consumer Preference by type of Accommodation: Two Step Cluster Analysis (이단계 군집분석에 의한 농촌관광 편의시설 유형별 소비자 선호 결정요인)

  • Park, Duk-Byeong;Yoon, Yoo-Shik;Lee, Min-Soo
    • Journal of Global Scholars of Marketing Science
    • /
    • v.17 no.3
    • /
    • pp.1-19
    • /
    • 2007
  • 1. Purpose Rural tourism is undertaken by individuals with different characteristics, needs, and wants. It is important to have information on the characteristics and preferences of the consumers of the different types of existing rural accommodation. The study aims to identify the determinants of consumer preference by type of accommodation. 2. Methodology 2.1 Sample Data were collected from 1,000 people by telephone survey with three-stage stratified random sampling in seven metropolitan areas in Korea. Respondents were chosen at sampling intervals from the telephone book published in 2006. We surveyed from 4:00 to 10:30 p.m. to allow systematic sampling that considers respondents' life cycles. 2.2 Two-step Cluster Analysis Our study uses a two-step cluster method to classify the accommodations into a reduced number of groups, so that each group constitutes a type. This method has been suggested as appropriate for clustering large data sets with mixed attributes. The method is based on a distance measure that enables data with both continuous and categorical attributes to be clustered. This is derived from a probabilistic model in which the distance between two clusters is equivalent to the decrease in the log-likelihood function as a result of merging them. 2.3 Multinomial Logit Analysis The estimation of a Multinomial Logit model determines the characteristics of the tourists who are most likely to opt for each type of accommodation. The Multinomial Logit model constitutes an appropriate framework to explore and explain choice processes where the choice set consists of more than two alternatives. Due to its ease and speed of parameter estimation, the Multinomial Logit model has been used in many empirical studies of choice in tourism. 3. Findings The auto-clustering algorithm indicated that a five-cluster solution was the best model, because it minimized the BIC value and the change in BIC between adjacent numbers of clusters. The accommodation establishments can be classified into five types: Traditional House, Typical Farmhouse, Farmstay House for Group Tours, Log Cabin for Family, and Log Cabin for Individuals. Group 1 (Traditional House) mainly includes large accommodation establishments of original construction style, i.e., those with an ondol-style room providing meals and one shower room for family tourists. Group 2 (Typical Farmhouse) encompasses accommodation establishments with ondol rooms, each with a bathroom, providing meals; in other words, the tourist accommodations known as "rural houses." Group 3 (Farmstay House for Group Tours) has accommodation establishments with ondol rooms, no meals, self-cooking facilities, and large rooms for more than five persons. Group 4 (Log Cabin for Family) mainly includes popular accommodation establishments of western-style log construction, i.e., those with an ondol-style room and one shower room for family tourists. While the accommodations in this group are not defined by type of construction, the group does include all of the original Korean-style construction. Finally, Group 5 (Log Cabin for Individuals) includes accommodations that are western-style wooden houses with bedrooms, each with a bathroom. First, a Multinomial Logit model is estimated including all the explanatory variables considered, taking accommodation group 2 as the base alternative.
The results show the variables and the estimated parameter values of the model, which gives the probability of choosing each of the five types of accommodation available in rural tourism villages in Korea, according to the socio-economic and trip-related characteristics of the individuals. An initial observation of the analysis reveals that none of the variables income, number of journeys, distance, and residential housing style is explanatory in the choice of rural accommodation. The age and accompaniment variables are significant for accommodation establishments of group 1. The education and rural residential experience variables are significant for accommodation establishments of groups 4 and 5. The expenditure and marital status variables are significant for accommodation establishments of group 4. The gender and occupation variables are significant for accommodation establishments of group 3. The loyalty variable is significant for accommodation establishments of groups 3 and 4. The study indicates that significant differences exist among the individuals who choose each type of accommodation at a destination. From this investigation it is evident that several profiles of tourists can be attracted to a rural destination according to the types of accommodation existing at that destination. In addition, the tourist profiles may be used as the basis for investment and promotion policies for each type of accommodation, making use in each case of the variables that indicate a greater likelihood of influencing the tourist's choice of accommodation.
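
A Multinomial Logit estimation of this kind can be sketched with statsmodels as below; the covariates, coding, and synthetic data are illustrative assumptions (statsmodels treats the lowest-coded category as the base alternative, whereas the study used group 2), not the survey data.

```python
# Illustrative sketch of a Multinomial Logit model of accommodation choice.
# The covariates and synthetic data are placeholders, not the study's survey data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(20, 70, n),
    "education": rng.integers(1, 5, n),
    "expenditure": rng.normal(100, 30, n),
    "choice": rng.integers(0, 5, n),          # 5 accommodation types, coded 0-4
})

X = sm.add_constant(df[["age", "education", "expenditure"]])
result = sm.MNLogit(df["choice"], X).fit(disp=False)
print(result.summary())
```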


Deep Learning based Estimation of Depth to Bearing Layer from In-situ Data (딥러닝 기반 국내 지반의 지지층 깊이 예측)

  • Jang, Young-Eun;Jung, Jaeho;Han, Jin-Tae;Yu, Yonggyun
    • Journal of the Korean Geotechnical Society
    • /
    • v.38 no.3
    • /
    • pp.35-42
    • /
    • 2022
  • The N-value from the Standard Penetration Test (SPT), one of the representative in-situ tests, is an important index that provides basic geological information and the depth of the bearing layer for the design of geotechnical structures. For time and cost-effectiveness, testing has to be carried out at representative sampling points. However, considerable variability and uncertainty exist in soil layers, so it is difficult to grasp the characteristics of the entire field from the limited test results. Thus, spatial interpolation techniques such as Kriging and IDW (inverse distance weighting) have been used to predict unknown points from existing data. Recently, in order to increase the accuracy of interpolation results, studies combining geotechnical engineering and deep learning methods have been conducted. In this study, based on the SPT results of about 22,000 boreholes from ground surveys, a comparative study was conducted to predict the depth of the bearing layer using deep learning methods and IDW. The average error of the bearing layer predictions was 3.01 m for IDW, and 3.22 m and 2.46 m for the fully connected network and PointNet, respectively. The standard deviations were 3.99 m for IDW, and 3.95 m and 3.54 m for the fully connected network and PointNet. As a result, the PointNet deep learning algorithm showed improved results compared to IDW and the other deep learning method.
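
The IDW baseline compared above can be sketched in a few lines of Python; the borehole coordinates, depths, and power parameter below are synthetic placeholders, not the survey data.

```python
# Sketch of inverse distance weighted (IDW) interpolation, the baseline compared
# against the deep learning models above. All values below are synthetic placeholders.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Predict values at xy_query as distance-weighted averages of the known points."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ z_known) / w.sum(axis=1)

# Known boreholes: (x, y) location and depth to the bearing layer (m)
xy_known = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
z_known = np.array([12.0, 15.0, 9.0, 20.0])

xy_query = np.array([[50.0, 50.0], [10.0, 90.0]])
print(idw(xy_known, z_known, xy_query))
```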

Application study of random forest method based on Sentinel-2 imagery for surface cover classification in rivers - A case of Naeseong Stream - (하천 내 지표 피복 분류를 위한 Sentinel-2 영상 기반 랜덤 포레스트 기법의 적용성 연구 - 내성천을 사례로 -)

  • An, Seonggi;Lee, Chanjoo;Kim, Yongmin;Choi, Hun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.5
    • /
    • pp.321-332
    • /
    • 2024
  • Understanding the status of surface cover in riparian zones is essential for river management and flood disaster prevention. Traditional survey methods rely on expert interpretation of vegetation through vegetation mapping or indices. However, these methods are limited in their ability to accurately reflect dynamically changing river environments. Against this backdrop, this study utilized satellite imagery and applied the Random Forest method to assess the distribution of vegetation in rivers over multiple years, focusing on the Naeseong Stream as a case study. Remote sensing data from Sentinel-2 imagery were combined with ground truth data on the Naeseong Stream surface cover in 2016. The Random Forest machine learning algorithm was used to extract and train 1,000 samples per surface cover class from ten predetermined sampling areas, followed by validation. A sensitivity analysis, an annual surface cover analysis, and an accuracy assessment were conducted to evaluate its applicability. The results showed an accuracy of 85.1% based on the validation data. The sensitivity analysis indicated the highest efficiency with 30 trees, 800 samples, and the downstream river section. The surface cover analysis accurately reflected the actual river environment. The accuracy analysis identified 14.9% boundary and internal errors, with high accuracy observed in six categories, excluding scattered and herbaceous vegetation. Although this study focused on a single river, applying the surface cover classification method to multiple rivers is necessary to obtain more accurate and comprehensive data.
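
The Random Forest classification step can be sketched with scikit-learn as below, using the tree and sample counts the sensitivity analysis favored; the spectral features and labels are synthetic placeholders, not the Sentinel-2 training data.

```python
# Sketch of the Random Forest classification step described above, with the settings
# the sensitivity analysis favored (30 trees, 800 samples per class). The features
# and labels are synthetic placeholders, not the Sentinel-2 training data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_classes, n_per_class, n_bands = 8, 800, 10    # class and band counts assumed for illustration
X = rng.normal(size=(n_classes * n_per_class, n_bands))
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=30, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```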

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data requires many computations and can eventually cause high computational cost and overfitting of the model. Thus, a dimension reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, from merely lessening noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data from the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once the feature selection algorithm identifies unimportant words, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words that have comparatively low information gain values from the raw text and form word embeddings. Second, we additionally eliminate words that are similar to the low-information-gain words and form word embeddings. In the end, the filtered text and word embeddings are applied to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes was over 70% were classified as helpful reviews. Since Yelp only shows the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes by random sampling from among 750,000 reviews. Minimal preprocessing was applied to each dataset, such as removing numbers and special characters from the text data. To evaluate the proposed methods, we compared them against Word2Vec and GloVe word embeddings that use all the words, and showed that one of the proposed methods outperforms the embeddings with all the words. By removing unimportant words, we can obtain better performance; however, removing too many words lowers the performance.
For future research, diverse preprocessing methods and an in-depth analysis of word co-occurrence should be considered in measuring similarity values among words. Also, we only applied the proposed method with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be applied with the proposed methods, and the possible combinations of word embedding and elimination methods can be explored.
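
The first proposed step, dropping low-importance words before building the embedding, can be sketched as below; information gain is computed here via mutual information in scikit-learn, and the toy reviews, labels, and cutoff are illustrative assumptions, not the paper's data or thresholds.

```python
# Sketch of the first proposed step: score words against the class label (information
# gain, computed here as mutual information), drop the lowest-scoring words, and train
# Word2Vec on the filtered text. The toy reviews, labels, and cutoff are assumptions.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

reviews = ["great battery life and screen", "terrible battery drained fast",
           "screen is sharp and bright", "awful support and slow shipping"]
labels = np.array([1, 0, 1, 0])                  # 1 = helpful, 0 = not helpful (toy labels)

vec = CountVectorizer(binary=True)
X = vec.fit_transform(reviews)
ig = mutual_info_classif(X, labels, discrete_features=True, random_state=0)

# Keep only the words above an assumed information-gain cutoff
keep = {w for w, s in zip(vec.get_feature_names_out(), ig) if s > 0.05}
filtered = [[w for w in r.split() if w in keep] for r in reviews]

model = Word2Vec(filtered, vector_size=50, window=5, min_count=1)
print(sorted(keep))
```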

Optimal Design of Water Distribution System considering the Uncertainties on the Demands and Roughness Coefficients (수요와 조도계수의 불확실성을 고려한 상수도관망의 최적설계)

  • Jung, Dong-Hwi;Chung, Gun-Hui;Kim, Joong-Hoon
    • Journal of the Korean Society of Hazard Mitigation
    • /
    • v.10 no.1
    • /
    • pp.73-80
    • /
    • 2010
  • Optimal design of water distribution systems began with least-cost design of a single objective function using fixed hydraulic variables, e.g., fixed water demand and pipe roughness. However, a more adequate design is achieved by considering the uncertainties inherent in water distribution systems, such as uncertain future water demands, resulting in a more successful estimation of the real network's behavior. Thus, many researchers have suggested a variety of approaches to account for uncertainty in water distribution systems using uncertainty quantification methods, and optimal design with multi-objective functions has also been studied. This paper suggests a new multi-objective optimization approach seeking the minimum cost and maximum robustness of the network based on two uncertain variables: nodal demands and pipe roughness. The overall design procedure consists of two stages: least-cost design and final optimal design under uncertainty. The uncertainties of demands and roughness are considered using the Latin Hypercube sampling technique with beta probability density functions, and a multi-objective genetic algorithm (MOGA) is used for the optimization process. The suggested approach is tested in a case study of a real network, the New York Tunnels, and its applicability is checked. As the computation proceeds, the initial population of the multi-objective genetic algorithm spreads toward the lower-right section of the solution space and yields Pareto-optimal solutions that form the Pareto front.
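
The uncertainty sampling step described above can be sketched with SciPy; the demand range, beta shape parameters, node count, and sample size below are illustrative assumptions, not the values used in the New York Tunnels case study.

```python
# Sketch of generating uncertain nodal demands with Latin Hypercube sampling mapped
# through a beta distribution, as described above. The demand range, beta shape
# parameters, and sample count are illustrative assumptions.
import numpy as np
from scipy.stats import beta, qmc

n_samples, n_nodes = 100, 3
sampler = qmc.LatinHypercube(d=n_nodes, seed=0)
u = sampler.random(n_samples)                  # uniform LHS samples in [0, 1)

a, b = 2.0, 2.0                                # assumed beta shape parameters
demand_low, demand_high = 80.0, 120.0          # assumed nodal demand range
demands = demand_low + (demand_high - demand_low) * beta.ppf(u, a, b)
print(demands.mean(axis=0), demands.std(axis=0))
```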