• Title/Summary/Keyword: 인공지능기반 (artificial intelligence-based)


Introduction and Evaluation of the Production Method for Chlorophyll-a Using Merging of GOCI-II and Polar Orbit Satellite Data (GOCI-II 및 극궤도 위성 자료를 병합한 Chlorophyll-a 산출물 생산방법 소개 및 활용 가능성 평가)

  • Hye-Kyeong Shin;Jae Yeop Kwon;Pyeong Joong Kim;Tae-Ho Kim
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1255-1272 / 2023
  • Satellite-based chlorophyll-a concentration, produced as a long-term time series, is crucial for global climate change research, and producing gap-free data by merging time-composited or multi-satellite observations is essential. However, studies on satellite-based chlorophyll-a concentration in the waters around the Korean Peninsula have mainly focused on evaluating seasonal characteristics or proposing algorithms suited to specific research areas using a single ocean color sensor. In this study, a merged dataset of remote sensing reflectance from the geostationary sensor GOCI-II and polar-orbiting sensors (MODIS, VIIRS, OLCI) was used to achieve high spatial coverage of chlorophyll-a concentration in the waters around the Korean Peninsula. The spatial coverage of our results increased by approximately 30% compared to polar-orbiting sensor data, effectively compensating for gaps caused by clouds. We also aimed to quantitatively assess accuracy by comparison with the global chlorophyll-a composite data provided by the Ocean Colour Climate Change Initiative (OC-CCI) and GlobColour, along with in-situ observations. Because of the limited number of in-situ observations, we could not obtain statistically significant results; nevertheless, we observed a tendency toward underestimation relative to the global products. Furthermore, to evaluate practical applicability to marine disasters such as red tides, we qualitatively compared our results with a red tide event in the East Sea in 2013; the results resembled OC-CCI more closely than the standalone geostationary sensor output. We plan to use the generated data in future research on artificial intelligence models for prediction and anomaly analysis, and we anticipate that the results will be useful for monitoring chlorophyll-a events in the coastal waters around Korea.
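
As a concrete illustration of the merging step, the sketch below shows how per-pixel, NaN-aware averaging across sensors fills cloud gaps and raises spatial coverage. It is a minimal numpy sketch with made-up values; the actual product merges remote sensing reflectance with inter-sensor calibration, not a plain mean of chlorophyll-a grids.

```python
import numpy as np

# Hypothetical 2-D chlorophyll-a grids (mg/m^3) from each sensor;
# np.nan marks pixels lost to clouds or sensor gaps.
goci2 = np.array([[0.8, np.nan], [1.2, 0.9]])
modis = np.array([[np.nan, 1.1], [1.0, np.nan]])
viirs = np.array([[0.7, np.nan], [np.nan, 1.0]])

stack = np.stack([goci2, modis, viirs])

# Simple merge: per-pixel mean over whichever sensors have valid data.
merged = np.nanmean(stack, axis=0)

def coverage(grid):
    """Fraction of pixels with a valid retrieval."""
    return np.count_nonzero(~np.isnan(grid)) / grid.size

print(f"GOCI-II coverage : {coverage(goci2):.0%}")
print(f"Merged coverage  : {coverage(merged):.0%}")
```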

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong;SangYoun Kim;SungKu Heo;Shahzeb Tariq;MinHyeok Shin;ChangKyoo Yoo
    • Korean Chemical Engineering Research / v.61 no.4 / pp.523-541 / 2023
  • As accessibility to 3D printers increases, exposure to the chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of in silico toxicity prediction is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in their molecular descriptors. First, the MissForest algorithm was used to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four machine learning models (decision tree, random forest, XGBoost, and SVM), an ML-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated with Tree-SHAP (SHapley Additive exPlanations), one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure dataset approximately 2.5-fold compared to the original data. Based on the imputed molecular descriptor dataset, the data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Finally, Tree-SHAP analysis showed that the data-centric QSAR model achieved high prediction performance for toxicity information by identifying the key molecular descriptors most strongly correlated with the toxicity indices. The proposed QSAR model based on the data-centric XAI approach can therefore be extended to predict the toxicity of potential pollutants from emerging printing chemicals and from chemical, semiconductor, or display processes.
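
To make the pipeline concrete, here is a hedged sketch of the three stages on synthetic data: MissForest-style imputation (approximated with scikit-learn's IterativeImputer wrapping a random forest, since MissForest is itself an iterative random-forest imputer), an ML QSAR regressor, and Tree-SHAP. All data, sizes, and hyperparameters are illustrative, not the paper's.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy molecular-descriptor matrix (rows: chemicals, cols: descriptors),
# a stand-in for the incomplete 3D-printing chemical data.
X = rng.normal(size=(200, 8))
y = X[:, 0] * 1.5 - X[:, 3] + rng.normal(scale=0.3, size=200)  # e.g., Log P
X[rng.random(X.shape) < 0.2] = np.nan  # ~20% missing entries

# MissForest-style imputation: iterative imputation with a
# random-forest estimator (a scikit-learn stand-in for MissForest).
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=0),
    max_iter=5, random_state=0)
X_imp = imputer.fit_transform(X)

# ML-based QSAR model trained on the imputed descriptors.
X_tr, X_te, y_tr, y_te = train_test_split(X_imp, y, random_state=0)
qsar = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2:", qsar.score(X_te, y_te))

# Tree-SHAP: identify the descriptors driving the predictions.
explainer = shap.TreeExplainer(qsar)
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per descriptor:", np.abs(shap_values).mean(axis=0).round(3))
```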

Determinants Affecting Organizational Open Source Software Switch and the Moderating Effects of Managers' Willingness to Secure SW Competitiveness (조직의 오픈소스 소프트웨어 전환에 영향을 미치는 요인과 관리자의 SW 경쟁력 확보의지의 조절효과)

  • Sanghyun Kim;Hyunsun Park
    • Information Systems Review / v.21 no.4 / pp.99-123 / 2019
  • The software industry is a high value-added industry in the knowledge information age, and its importance is growing as it not only plays a key role in knowledge creation and utilization but also underpins global competitiveness. Among the various software available in today's business environment, open source software (OSS) is rapidly expanding its reach, not only leading software development but also integrating with new information technologies. The purpose of this research is therefore to empirically examine the factors affecting organizational switching behavior toward OSS. To this end, we propose a research model based on the push-pull-mooring framework and empirically examine two categories of antecedents of switching to OSS. A survey was administered to employees of firms that had already switched to OSS; a total of 268 responses were collected and analyzed using structural equation modeling. The results are as follows. First, continuous maintenance cost, vendor dependency, functional indifference, and SW resource inefficiency are significantly related to the switch to OSS. Second, network-oriented support, testability, and strategic flexibility are significantly related to the switch to OSS. Finally, managers' willingness to secure SW competitiveness moderates the relationships between the push and pull factors (with the exception of improved knowledge) and the switch to OSS. These results contribute to OSS-related fields both theoretically and practically.
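
The moderation effect reported here is, in its simplest form, an interaction term. The sketch below illustrates that idea with simulated data and an OLS interaction model; the study itself uses full structural equation modeling with multi-item survey measures, so this is only a conceptual stand-in, with all variable names and coefficients invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 268  # same sample size as the survey; the data themselves are simulated

df = pd.DataFrame({
    "push": rng.normal(size=n),         # e.g., continuous maintenance cost
    "pull": rng.normal(size=n),         # e.g., strategic flexibility
    "willingness": rng.normal(size=n),  # willingness to secure SW competitiveness
})
# Simulated outcome with a built-in push x willingness interaction.
df["switch_intention"] = (0.4 * df["push"] + 0.5 * df["pull"]
                          + 0.3 * df["push"] * df["willingness"]
                          + rng.normal(scale=0.5, size=n))

# Moderation = significance of the push:willingness interaction term.
model = smf.ols("switch_intention ~ push * willingness + pull", data=df).fit()
print(model.summary().tables[1])
```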

Automatic Fracture Detection in CT Scan Images of Rocks Using Modified Faster R-CNN Deep-Learning Algorithm with Rotated Bounding Box (회전 경계박스 기능의 변형 FASTER R-CNN 딥러닝 알고리즘을 이용한 암석 CT 영상 내 자동 균열 탐지)

  • Pham, Chuyen;Zhuang, Li;Yeom, Sun;Shin, Hyu-Soung
    • Tunnel and Underground Space / v.31 no.5 / pp.374-384 / 2021
  • In this study, we propose a new approach for automatic fracture detection in CT scan images of rock specimens. The approach is built on top of the two-stage object detection algorithm Faster R-CNN, with one major modification: the use of rotated bounding boxes. The rotated bounding box plays a key role in future work to overcome several inherent difficulties of fracture segmentation related to the heterogeneity of the uninteresting background (i.e., minerals) and the variation in fracture size and shape. Compared to the commonly used axis-aligned bounding box, the rotated bounding box adapts far better to the elongated shape of a fracture, minimizing the proportion of background within the box. An additional benefit is that the rotated bounding box provides information on the orientation and length of a fracture without a further segmentation and measurement step. To validate the applicability of the proposed approach, we train and test it on CT image sets of fractured granite specimens with highly heterogeneous backgrounds, as well as other rocks such as sandstone and shale. The results demonstrate that our approach yields encouraging fracture detection, with a mean average precision (mAP) of up to 0.89, and that it outperforms the conventional approach in terms of the background-to-object ratio within the bounding box.
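
The geometric advantage of the rotated bounding box can be seen with plain OpenCV, independent of the detection network. The sketch below (not the authors' modified Faster R-CNN) fits both box types to a synthetic elongated "fracture" mask: the rotated box yields orientation and a length proxy directly, and encloses far less background than the axis-aligned box.

```python
import cv2
import numpy as np

# Toy binary mask standing in for a detected fracture region in a CT slice.
mask = np.zeros((100, 100), dtype=np.uint8)
cv2.line(mask, (10, 80), (90, 20), color=255, thickness=3)  # elongated "fracture"

# Fit a rotated bounding box to the fracture contour; it directly exposes
# the center, orientation angle, and side lengths.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
(cx, cy), (w, h), angle = cv2.minAreaRect(contours[0])

length = max(w, h)  # fracture length proxy (pixels)
print(f"center=({cx:.1f},{cy:.1f}) length={length:.1f}px angle={angle:.1f} deg")

# Compare enclosed area against the axis-aligned box around the same contour:
# a smaller ratio means less background inside the box.
x, y, bw, bh = cv2.boundingRect(contours[0])
print("rotated/axis-aligned box area:", (w * h) / (bw * bh))
```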

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, internet news feeds, microblogs, and social media. This enormous amount of easily obtained information, however, lacks organization, which has drawn the interest of many researchers seeking to manage it; doing so requires classifying relevant information, and hence text classification was introduced. Text classification is a challenging task in modern data analysis: a text document must be assigned to one or more predefined categories or classes. Various techniques are available, such as k-nearest neighbor, the naïve Bayes algorithm, support vector machines, decision trees, and artificial neural networks. However, with huge amounts of text data, model performance and accuracy become a challenge, and the performance of a text classification model varies with the type of words used in the corpus and the features created for classification. Most previous attempts have proposed new algorithms or modified existing ones, and this line of research has arguably reached its limits. In this study, rather than proposing or modifying an algorithm, we focus on modifying how the data are used. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built. Real-world datasets usually contain noise, which can affect the decisions made by classifiers built from them. We posit that data from different domains, i.e., heterogeneous data, have noise-like characteristics that can be exploited in the classification process. Machine learning algorithms are built on the assumption that the characteristics of the training data and the target data are the same or very similar. For unstructured data such as text, however, the features are determined by the vocabulary of the documents; if the learning data and the target data are written from different viewpoints, their features may differ. We therefore attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into its construction. Data coming from various sources are likely to be formatted differently, which poses difficulties for traditional machine learning algorithms, since they are not designed to recognize different data representations at once and combine them in a single generalization. Therefore, in order to use heterogeneous data in the learning process of a document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier, so we further propose the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data; the most confident classification rules are then selected and applied for the final decision. Three types of real-world data sources were used: news, Twitter, and blogs.
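
A minimal sketch of the semi-supervised idea, using scikit-learn's SelfTrainingClassifier as a generic stand-in for RSESLA (which additionally builds multiple views and selects confident rules across ensemble members). The corpus, labels, and confidence threshold below are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Tiny illustrative corpus: labeled news-like documents plus unlabeled
# documents from other domains (a tweet and a blog post), marked with -1.
docs = [
    "stock market closes higher on tech earnings",    # label 1 (finance)
    "central bank raises interest rates again",       # label 1
    "team wins the big game after overtime thriller", # label 0 (sports)
    "star striker transfers to rival club",           # label 0
    "lol the market is mooning today #stocks",        # unlabeled tweet
    "my blog post about last night's big game",       # unlabeled blog
]
y = np.array([1, 1, 0, 0, -1, -1])  # -1 marks unlabeled documents

X = TfidfVectorizer().fit_transform(docs)

# Self-training: pseudo-label only the unlabeled documents the base
# classifier is confident about, echoing RSESLA's rule-selection idea.
clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.5)
clf.fit(X, y)
print("pseudo-labels for the heterogeneous documents:", clf.transduction_[4:])
```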

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Given a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, with the development of deep learning, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used for neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between objects that are entered sequentially (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts must be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large, increasing model complexity. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set, and for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more prone to errors in the decomposition process (Lankinen et al., 2016). This paper therefore proposes a phoneme-level language model for Korean based on LSTMs. A phoneme, such as a vowel or a consonant, is the smallest unit of Korean text. We construct language models with three or four LSTM layers, trained with the stochastic gradient algorithm and with more advanced optimizers such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend. After preprocessing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We constructed input vectors of 20 consecutive characters, with the following 21st character as output, yielding a total of 1,023,411 input-output pairs, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were run on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly improved and under some conditions even worsened. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although the completeness of the generated sentences differed slightly between models, sentence generation was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and postposition usage and verb conjugation were almost grammatically perfect. The results of this study are expected to be widely used in Korean language processing and speech recognition, which form the basis of artificial intelligence systems.
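
For reference, a model of the kind described might look like the following in modern tf.keras (the paper used Keras on a Theano backend). The layer width of 256 units is my assumption; the sequence length (20) and vocabulary size (74) follow the abstract.

```python
import tensorflow as tf

SEQ_LEN, VOCAB = 20, 74  # 20 input characters, 74 unique symbols

# A 3-layer LSTM phoneme-level language model in the spirit of the paper.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, VOCAB)),  # one-hot encoded input window
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),  # next-symbol distribution
])

# Adam was among the optimizers that clearly beat plain SGD in the study.
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()

# Training would use sliding windows: 20 consecutive symbols as input and
# the 21st as the target, e.g. model.fit(x_train, y_train, ...).
```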

Impact of Shortly Acquired IPO Firms on ICT Industry Concentration (ICT 산업분야 신생기업의 IPO 이후 인수합병과 산업 집중도에 관한 연구)

  • Chang, YoungBong;Kwon, YoungOk
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.51-69 / 2020
  • It is now a stylized fact that a small number of technology firms, such as Apple, Alphabet, Microsoft, Amazon, and Facebook, have become large, dominant players in their industries. Alongside the rise of these leading firms, a large number of young firms have become acquisition targets in their early post-IPO stages, resulting in a sharp decline in the number of new entries on public exchanges, even though a series of policy reforms have been promulgated to foster competition by encouraging new entries. Given this trend, a number of studies have reported increased concentration in most developed countries over recent decades, but what causes the increase in industry concentration is less well understood. In this paper, we uncover the mechanisms by which industries have become concentrated by tracing changes in industry concentration associated with a firm's status change in its early post-IPO stages, with emphasis on firms acquired shortly after going public. Especially with the transition to digital economies, it is imperative for incumbent firms to keep pace with new ICT and related intelligent systems. For instance, after acquiring a young firm equipped with AI-based solutions, an incumbent may respond better to changes in customer tastes and preferences by integrating the acquired AI solutions and analytics skills into multiple business processes. Accordingly, it is not unusual for young ICT firms to become attractive acquisition targets. To examine the role of M&As involving young firms in reshaping industry concentration, we identify a firm's early post-IPO status over sample periods spanning 1990 to 2016 as: (i) delisted, (ii) standalone, or (iii) acquired. According to our analysis, firms that conducted IPOs since the 2000s have been acquired by incumbents relatively more quickly than those of previous generations, and IPO firms in the ICT sector show a higher acquisition rate than their counterparts in other sectors. Our multinomial logit results suggest that a large number of IPO firms are acquired early in their post-IPO lives despite financial soundness: IPO firms are more likely to be acquired than delisted due to financial distress when they are more profitable, more mature, or less leveraged, and firms with venture capital backing become acquisition targets more frequently. As more firms are acquired shortly after their IPOs, our results show increased concentration. While we provide only limited evidence on the role of large incumbents in explaining changes in industry concentration overall, the large-firm effect on concentration is pronounced in the ICT sector, possibly capturing the current trend in which a few tech giants such as Alphabet, Apple, and Facebook continue to increase their market shares. In addition, compared with acquisitions of non-ICT firms, the concentration impact of early-stage IPO firm acquisitions is larger when ICT firms are the targets. Our study makes new contributions: to the best of our knowledge, it is one of only a few studies linking a firm's post-IPO status to associated changes in industry concentration. Although some prior studies have addressed concentration, their primary focus was on market power or proprietary software. In contrast to earlier studies, we uncover the mechanism by which industries have become concentrated by focusing on M&As involving young IPO firms. Interestingly, the concentration impact of IPO firm acquisitions is magnified when a large incumbent is the acquirer, which suggests why industries have become more concentrated in favor of large firms in recent decades. Overall, our study sheds new light on the literature by providing a plausible explanation of why industries have become concentrated.
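
Industry concentration in studies like this is typically measured with an index such as the Herfindahl-Hirschman Index (HHI); whether this paper uses HHI specifically is not stated in the abstract. The toy numbers below simply illustrate the core mechanism the paper describes: when a large incumbent absorbs a young IPO firm, measured concentration jumps.

```python
import pandas as pd

# Toy industry: firm revenues before and after a young IPO firm (firm D)
# is absorbed by the largest incumbent (firm A). Values are illustrative.
before = pd.Series({"A": 40.0, "B": 25.0, "C": 20.0, "D": 15.0})
after = pd.Series({"A": 55.0, "B": 25.0, "C": 20.0})  # A acquired D

def hhi(revenues):
    """Herfindahl-Hirschman Index: sum of squared market shares (0-10000 scale)."""
    shares = 100 * revenues / revenues.sum()
    return float((shares ** 2).sum())

print(f"HHI before acquisition: {hhi(before):.0f}")  # 2850
print(f"HHI after acquisition : {hhi(after):.0f}")   # 4050, concentration rises
```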

The Impact of O4O Selection Attributes on Customer Satisfaction and Loyalty: Focusing on the Case of Fresh Hema in China (O4O 선택속성이 고객만족도 및 고객충성도에 미치는 영향: 중국 허마셴셩 사례를 중심으로)

  • Cui, Chengguo;Yang, Sung-Byung
    • Knowledge Management Research / v.21 no.3 / pp.249-269 / 2020
  • As the online market has matured, it has begun to face problems that hinder further growth, the most common being the homogenization of online products, which makes it difficult to attract additional customers. Moreover, although the online market's share has increased significantly, expanding offline has become essential for further development. In response, many online firms have recently sought to expand their businesses and marketing channels by securing offline spaces that complement the limitations of online platforms, on top of the existing advantages of their online channels. Leveraging their competitive advantage in analyzing large volumes of customer data with information technologies such as big data and artificial intelligence, they are reinforcing their offline influence through this online-for-offline (O4O) business model. Most existing research, however, has focused on the online-to-offline (O2O) business model, and there is still a lack of research on O4O business models, which have been actively attempted in various industries in recent years. Since the few O4O-related studies have been conducted only in experience-marketing settings using case study methods, an empirical study of O4O selection attributes and their impact on customer satisfaction and loyalty is needed. Therefore, focusing on China's representative O4O business model, 'Fresh Hema,' this study identifies key selection attributes specialized for O4O services from the customers' viewpoint and examines their impact on customer satisfaction and loyalty. The results of structural equation modeling (SEM) with 300 customers experienced with O4O (Fresh Hema) reveal that four of the seven O4O selection attributes (mobile app quality, mobile payment, product quality, and store facilities) affect customer satisfaction, which in turn leads to customer loyalty (reuse intention, recommendation intention, and brand attachment). This study can help managers in the O4O domain adapt to rapidly changing customer needs and provides guidelines for enhancing customer satisfaction and loyalty by allocating more resources to the more significant selection attributes.
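
A hedged sketch of the satisfaction-to-loyalty path model using semopy (a lavaan-style SEM package for Python). The real study measures each construct with multi-item survey scales and latent variables; this stand-in uses simulated single-indicator composites, and all coefficients are invented.

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(2)
n = 300  # same sample size as the survey; the data are simulated

# Simulated composites for the four significant selection attributes.
df = pd.DataFrame(rng.normal(size=(n, 4)),
                  columns=["app_quality", "payment", "product_quality", "facilities"])
df["satisfaction"] = (0.3 * df["app_quality"] + 0.2 * df["payment"]
                      + 0.3 * df["product_quality"] + 0.2 * df["facilities"]
                      + rng.normal(scale=0.5, size=n))
df["loyalty"] = 0.7 * df["satisfaction"] + rng.normal(scale=0.5, size=n)

# Path model mirroring the paper's structure:
# selection attributes -> satisfaction -> loyalty.
desc = """
satisfaction ~ app_quality + payment + product_quality + facilities
loyalty ~ satisfaction
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # path estimates and p-values
```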

Generation of Daily High-resolution Sea Surface Temperature for the Seas around the Korean Peninsula Using Multi-satellite Data and Artificial Intelligence (다종 위성자료와 인공지능 기법을 이용한 한반도 주변 해역의 고해상도 해수면온도 자료 생산)

  • Jung, Sihun;Choo, Minki;Im, Jungho;Cho, Dongjin
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.707-723 / 2022
  • Although satellite-based sea surface temperature (SST) is advantageous for monitoring large areas, spatiotemporal data gaps frequently occur due to various environmental or mechanical causes, so filling these gaps is crucial to maximizing its usability. In this study, daily SST composite fields with a resolution of 4 km were produced through a two-step machine learning approach using polar-orbiting and geostationary satellite SST data. The first step reconstructed SST using the Data Interpolating Convolutional AutoEncoder (DINCAE) on multi-satellite-derived SST data. The second step refined the reconstructed SST against in situ measurements using a light gradient boosting machine (LGBM) to produce the final daily SST composite fields. The DINCAE model was validated using random masks for 50 days, whereas the LGBM model was evaluated using leave-one-year-out cross-validation (LOYOCV). The SST reconstruction accuracy was high, with an R2 of 0.98 and a root-mean-square error (RMSE) of 0.97℃. The second step also increased accuracy substantially relative to in situ measurements, decreasing RMSE by 0.21-0.29℃ and MAE by 0.17-0.24℃. The SST composite fields generated using all in situ data were comparable to existing data-assimilated SST composite fields. In addition, the LGBM model in the second step greatly reduced the overfitting that had been reported as a limitation of a previous study using random forest. The spatial distribution of the corrected SST was similar to that of existing high-resolution SST composites, showing that spatial details of oceanic phenomena such as fronts, eddies, and SST gradients were well simulated. This research demonstrates the potential of producing high-resolution, seamless SST composite fields from multi-satellite data and artificial intelligence.
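
The second step's evaluation scheme can be sketched as follows: an LGBM regressor corrected against "in situ" targets, validated with leave-one-year-out splits via scikit-learn's LeaveOneGroupOut. The features, sample sizes, and noise levels are invented; the paper's actual predictors (reconstructed satellite SSTs and related variables) differ.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(3)
n = 2000

# Toy stand-in for step two: predict in situ SST from the DINCAE-
# reconstructed SST plus a simple seasonal feature.
years = rng.integers(2015, 2021, size=n)       # grouping variable for LOYOCV
recon_sst = rng.uniform(0, 28, size=n)         # reconstructed SST (deg C)
doy = rng.integers(1, 366, size=n)             # day of year
X = np.column_stack([recon_sst, np.sin(2 * np.pi * doy / 365)])
y = recon_sst + rng.normal(scale=0.5, size=n)  # simulated "in situ" target

# Leave-one-year-out cross-validation (LOYOCV), as in the paper.
for tr, te in LeaveOneGroupOut().split(X, y, groups=years):
    model = LGBMRegressor(n_estimators=200).fit(X[tr], y[tr])
    rmse = mean_squared_error(y[te], model.predict(X[te])) ** 0.5
    print(f"held-out year {years[te][0]}: RMSE = {rmse:.2f} deg C")
```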

Characteristics and Implications of 4th Industrial Revolution Technology Innovation in the Service Industry (서비스 산업의 4차 산업혁명 기술 혁신 특성과 시사점)

  • Pyoung Yol Jang
    • Journal of Service Research and Studies
    • /
    • v.13 no.2
    • /
    • pp.114-129
    • /
    • 2023
  • In the era of the 4th industrial revolution, 4th-industrial-revolution technologies are becoming increasingly important in the service industry. The purpose of this study is to identify the development and utilization status of these technologies in the service industry and to derive the characteristics and implications of 4th-industrial-revolution technology innovation there. The study analyzes Business Activity Survey data, examining the technologies in terms of the share of companies involved, technology development and utilization rates, the technologies developed or used, application fields, and development methods. It also analyzes trends in 4th-industrial-revolution technology in the service industry and compares the sector's development and utilization status with that of other industries. In particular, service industries were classified into four innovation types based on the share of 4th-industrial-revolution companies and the growth rate of that share, and each service industry was assigned a type. The characteristics and implications of 4th-industrial-revolution technology innovation in the service industry are then presented from nine perspectives. The study finds that service-industry companies are developing or using 4th-industrial-revolution technologies more actively than companies in other industries, and that the gap is widening. By sector, information and communication, finance and insurance, and education services show relatively high rates of developing or utilizing these technologies, while the sectors where the share of 4th-industrial-revolution companies increased the most are real estate, education services, and health and social welfare services. Cloud, big data, and artificial intelligence emerge as the three core technologies of the 4th industrial revolution. Since service industries fall into four types in terms of company share and its growth rate, innovation measures reflecting the differentiated characteristics of each type are needed.
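
The four-type classification can be reproduced mechanically from two variables: the share of 4th-industrial-revolution companies and the growth rate of that share. Below is a hypothetical pandas sketch; the industry values, median thresholds, and type names ("leading", "mature", "emerging", "lagging") are my own illustrative choices, not the paper's.

```python
import pandas as pd

# Hypothetical shares of 4th-industrial-revolution companies per service
# industry and growth rates of those shares (illustrative, not survey values).
df = pd.DataFrame({
    "industry": ["ICT", "finance/insurance", "education", "real estate", "logistics"],
    "share": [14.0, 9.0, 7.5, 3.0, 2.0],   # % of 4IR companies
    "growth": [1.2, 0.8, 2.5, 3.0, 0.5],   # growth rate of that share
})

# Split each axis at its median to form a 2x2 typology.
share_med, growth_med = df["share"].median(), df["growth"].median()

def innovation_type(row):
    hi_share = row["share"] >= share_med
    hi_growth = row["growth"] >= growth_med
    return {(True, True): "leading", (True, False): "mature",
            (False, True): "emerging", (False, False): "lagging"}[(hi_share, hi_growth)]

df["type"] = df.apply(innovation_type, axis=1)
print(df[["industry", "type"]])
```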