• Title/Summary/Keyword: 극복 (overcoming)

Search Results: 11,182

Classification and identification of organic aerosols in the atmosphere over Seoul using two dimensional gas chromatography-time of flight mass spectrometry (GC×GC/TOF-MS) data (GC×GC/TOF-MS를 이용한 서울 대기 중 유기 에어로졸의 분류 및 동정)

  • Jeon, So Hyeon;Lim, Hyung Bae;Choi, Na Rae;Lee, Ji Yi;Ahn, Yun Kyong;Kim, Yong Pyo
    • Particle and Aerosol Research
    • /
    • v.14 no.4
    • /
    • pp.153-169
    • /
    • 2018
  • To identify the wide variety of organic compounds in ambient aerosols, the two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC/TOF-MS) system (2DGC) has been applied. While 2DGC resolves more peaks, it also generates a huge amount of data. A two-step approach has been proposed to interpret the organic aerosol analysis data efficiently. The two-dimensional chromatographic data were first divided into 6 chemical groups according to volatility and polarity. Using these classification standards, all the peaks were subjected to both qualitative and quantitative analyses and then classified into 8 classes. Aerosol samples collected in Seoul in summer 2013 and winter 2014 were used as the test case. Some chemical classes, such as furanone, showed seasonal variation in the high-polarity volatile organic compounds (HP-VOC) group. Also, for some chemical classes, qualitative and quantitative analyses showed different trends. Limitations of the proposed method are discussed.
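
A rough sketch of the grouping idea in step one, for readers unfamiliar with 2DGC data: the first-dimension retention time tracks volatility and the second-dimension retention time tracks polarity, so each detected peak can be binned by simple thresholds. The cutoff values and the four-quadrant simplification below are illustrative assumptions, not the paper's six-group boundaries.

```python
# Hypothetical two-dimensional peak grouping: bin each peak by first-dimension
# retention time (volatility proxy) and second-dimension retention time
# (polarity proxy). Thresholds are placeholders, not the paper's values.

def classify_peak(rt1_sec, rt2_sec, rt1_cut=1800.0, rt2_cut=2.5):
    """Assign a peak to a volatility/polarity quadrant."""
    volatility = "VOC" if rt1_sec < rt1_cut else "SVOC"  # earlier elution = more volatile
    polarity = "HP" if rt2_sec > rt2_cut else "LP"       # longer 2nd-dim retention = more polar
    return f"{polarity}-{volatility}"

peaks = [(1200.0, 3.1), (2400.0, 1.2), (900.0, 1.0)]
print([classify_peak(rt1, rt2) for rt1, rt2 in peaks])   # ['HP-VOC', 'LP-SVOC', 'LP-VOC']
```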

Technological Diversities Observed in Bronze Objects of the Late Goryo Period - Case Study on the Bronze Bowls Excavated from the Burial Complex at Deobu-gol in Goyang - (고려 말 청동용기에 적용된 제작기술의 다양성 연구 - 고양 더부골 고분군 출토 청동용기를 중심으로 -)

  • Jeon, Ik Hwan;Lee, Jae Sung;Park, Jang Sik
    • Korean Journal of Heritage: History & Science
    • /
    • v.46 no.1
    • /
    • pp.208-227
    • /
    • 2013
  • Twenty-seven bronze bowls excavated from the Goryo burial complex at Deobu-gol were examined for their microstructure and chemical composition to characterize the bronze technology practiced by commoners at the time. The objects examined fall into four groups: 1) objects forged out of alloys of near Cu-22% Sn and then quenched; 2) objects cast from leaded alloys of below Cu-10% Sn; 3) objects cast from leaded alloys of Cu-10~20% Sn and then quenched; 4) objects forged out of leaded alloys of Cu-10~20% Sn and then quenched. This study revealed that the fabrication technique, as determined by alloy composition, plays an important role in bronze technology. The use of lead was clearly associated with the selection of quenching temperatures, the character of inclusions, and the color characteristics of bronze surfaces. Objects containing lead were quenched at temperatures of 520~586°C, while those without lead were quenched in the range of 586~799°C. Selenium in impurity inclusions was detected only in alloys containing lead, suggesting that the raw materials, Cu and Sn, used in making the lead-free alloys of the first group were carefully selected from those smelted from ores without lead contamination. Furthermore, the addition of lead was found to have significant effects on the color of the bronze surface when it corrodes during interment: corrosion turns leaded alloys light or dark green, and unleaded alloys dark brown or black. In fabrication, the wall thickness of the bronze bowls varies with the application of quenching; most quenched objects have walls 1 mm thick or less, while unquenched objects have walls 1 mm thick or more. Fabrication techniques in bronze making usually reflect the social environment of a community. It is likely that in the late Goryo period, which suffered a lack of skilled bronze workers, the increased demand for bronze was met in two ways: by the use of cheap lead instead of expensive tin, and by the use of casting, which is suitable for mass production. These results show that Goryo bronze workers tried to overcome a resource-limited environment through technological innovation, as is apparent in the use of different fabrication techniques for different alloys. Numerous bronze objects have recently been excavated and are available for investigation. This study shows that, with proper analytical techniques, they can serve as a valuable source of information for characterizing the associated technology as well as the social environment that led to its establishment.
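
The four technology groups reported above reduce to a simple decision rule on tin content, lead, fabrication route, and quenching. The sketch below encodes that rule as stated in the abstract; the ±2% tolerance around 22% Sn is my assumption, and objects outside the stated ranges are left unclassified.

```python
# Classify a bronze object into the four Deobu-gol technology groups described
# in the abstract. Boundaries follow the abstract; the tolerance is assumed.

def technology_group(sn_pct, leaded, fabrication, quenched):
    if not leaded and fabrication == "forged" and quenched and abs(sn_pct - 22) <= 2:
        return 1  # forged near Cu-22% Sn, unleaded, quenched
    if leaded and fabrication == "cast" and sn_pct < 10:
        return 2  # cast low-tin leaded bronze
    if leaded and fabrication == "cast" and quenched and 10 <= sn_pct <= 20:
        return 3  # cast then quenched, leaded
    if leaded and fabrication == "forged" and quenched and 10 <= sn_pct <= 20:
        return 4  # forged then quenched, leaded
    return None   # outside the reported groups

print(technology_group(22, False, "forged", True))  # -> 1
print(technology_group(8, True, "cast", False))     # -> 2
```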

A Study on Setup for Preliminary Decision Criterion of Continuum Rock Mass Slope with Fair to Good Rating (양호한 연속체 암반사면의 예비 판정기준 설정 연구)

  • Kim, Hyung-Min;Lee, Su-gon;Lee, Byok-Kyu;Woo, Jae-Gyung
    • The Journal of Engineering Geology
    • /
    • v.29 no.2
    • /
    • pp.85-97
    • /
    • 2019
  • It can be observed that steep slopes (65° to 80°) consisting of rock masses have remained stable for a long time. In rock-mass slopes with similar ground conditions, slopes steeper than 1:0.5 (63°) may be applied if the discontinuities are distributed in a direction favorable to the stability of the slope. In deciding the slope angle, if the preliminary rock mass conditions applicable to steep slopes are quantitatively set up, they may serve as guidance in design practice. In this study, such a rock mass was defined as a good continuum rock mass, and quantitative criterion ranges were proposed using the RMR, SMR, and GSI classifications, so as to provide an engineering standard for good continuum rock mass conditions. The methods of the study are as follows. Stable steep slopes (65° to 80°) for each rock type were selected as the study area, and RMR, SMR, and GSI were classified to reflect the face-mapping results. The results were reviewed by applying the calculated shear strength to stability analyses of the current state of the rock-mass slopes using the Hoek-Brown failure criterion, in order to verify the validity of the preliminary criterion as a rock mass condition that remains stable on a steep slope. Based on this analysis and review, a good continuum rock mass slope can be set to Basic RMR ≥ 50 (45 in sedimentary rock) and GSI, SMR ≥ 45. The safety factor from the limit equilibrium method (LEM) is between Fs = 14.08 and 67.50 (average 32.9), and the displacement from the finite element method (FEM) is 0.13 to 0.64 mm (average 0.27 mm). This quantitatively represents and verifies the stability of good continuum rock mass slopes that have long remained stable at steep angles (65° to 80°). The setup guideline for a good continuum rock mass slope can be refined into a more detailed standard as data accumulate, which is a topic for further study. Where slopes are stable even at 1:0.1 to 0.3, the upper limit for steep slopes is set at 1:0.3 with reference to overseas design standards and reports, giving the benefit of economy and eco-friendliness. The development of excavation and planting technologies and of various eco-friendly slope design techniques will also help overcome the psychological anxiety and the rapid weathering and relaxation associated with steep slope construction.
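
The proposed preliminary criterion is a simple screening rule, sketched below as stated in the abstract (Basic RMR ≥ 50, or 45 for sedimentary rock, together with GSI ≥ 45 and SMR ≥ 45); in practice it would be followed by the Hoek-Brown based stability analysis the authors describe, which this sketch does not attempt.

```python
# Preliminary screen for a "good continuum rock mass" slope per the abstract:
# Basic RMR >= 50 (45 in sedimentary rock) and GSI >= 45 and SMR >= 45.

def is_good_continuum_rock_mass(basic_rmr, gsi, smr, sedimentary=False):
    rmr_threshold = 45 if sedimentary else 50
    return basic_rmr >= rmr_threshold and gsi >= 45 and smr >= 45

print(is_good_continuum_rock_mass(52, 48, 47))                    # True
print(is_good_continuum_rock_mass(46, 48, 47, sedimentary=True))  # True
print(is_good_continuum_rock_mass(46, 48, 47))                    # False
```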

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.43-62
    • /
    • 2019
  • At one time, the anomaly detection field relied on determining whether an abnormality existed from statistics derived from specific data. This was possible because data used to be low-dimensional, so classical statistical methods worked effectively. However, as data characteristics have grown complex in the era of big data, it has become difficult to accurately analyze and predict the data generated across industry in the conventional way. Supervised learning algorithms such as SVM and decision trees were therefore adopted. However, a supervised model predicts test data accurately only when the class distribution is balanced, whereas most data generated in industry have imbalanced classes, so its predictions are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model built from convolutional neural networks that performs anomaly detection on medical images. By contrast, research on anomaly detection for sequence data using generative adversarial networks is scarce compared to that for image data. Li et al. (2018) used LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but their model was not applied to categorical sequence data, nor did it use the feature matching method of Salimans et al. (2016). This suggests that much remains to be explored in anomaly classification of sequence data with generative adversarial networks. To learn sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-stacked LSTM with 32-dimensional and 64-dimensional hidden layers, and the discriminator is an LSTM with a 64-dimensional hidden layer. Existing papers on anomaly detection for sequence data derive anomaly scores from the entropy of the probability assigned to the actual data, but in this paper, as mentioned earlier, anomaly scores are derived using the feature matching technique. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from real categorical sequence data, it is not swayed by a single normal example, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the adversarial network 96%; in terms of sensitivity, the autoencoder scored 40% and the adversarial network 51%.
In this paper, experiments were also conducted to show how much performance changes with the optimization structure of the latent variables; sensitivity improved by about 1%. These results offer a new perspective on latent-variable optimization, which had previously received relatively little attention.
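
To make the architecture concrete, here is a minimal PyTorch reconstruction of the model shape described above: a 2-stacked-LSTM generator with 32- and 64-dimensional hidden layers, a 64-dimensional LSTM discriminator, and an anomaly score from feature matching between a real sequence and its best generated counterpart. This is my sketch, not the authors' code; vocabulary size, sequence length, and the plain gradient-descent search over the latent code (the paper designs this optimization itself with an LSTM) are all illustrative assumptions, and GAN training is omitted.

```python
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, Z_DIM = 50, 20, 16  # assumed sizes for a categorical-sequence toy setup

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm1 = nn.LSTM(Z_DIM, 32, batch_first=True)  # 2-stacked LSTM: 32-dim ...
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)     # ... then 64-dim hidden layer
        self.out = nn.Linear(64, VOCAB)

    def forward(self, z):                      # z: (batch, SEQ_LEN, Z_DIM)
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return torch.softmax(self.out(h), -1)  # soft one-hot categorical sequence

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(VOCAB, 64, batch_first=True)   # 64-dim hidden layer
        self.head = nn.Linear(64, 1)

    def features(self, x):                     # feature vector used for feature matching
        h, _ = self.lstm(x)
        return h[:, -1]                        # last hidden state

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

def anomaly_score(x, G, D, steps=50, lr=0.05):
    """Search a latent code so G(z) matches x; score = feature-matching distance."""
    z = torch.randn(x.size(0), SEQ_LEN, Z_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((D.features(G(z)) - D.features(x)) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()

G, D = Generator(), Discriminator()            # untrained here; training loop omitted
x = torch.eye(VOCAB)[torch.randint(VOCAB, (1, SEQ_LEN))]  # one-hot toy sequence
print(anomaly_score(x, G, D))
```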

Carbon Reduction by and Quantitative Models for Landscape Tree Species in Southern Region - For Camellia japonica, Lagerstroemia indica, and Quercus myrsinaefolia - (남부지방 조경수종의 탄소저감과 계량모델 - 동백나무, 배롱나무 및 가시나무를 대상으로 -)

  • Jo, Hyun-Kil;Kil, Sung-Ho;Park, Hye-Mi;Kim, Jin-Young
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.47 no.3
    • /
    • pp.31-38
    • /
    • 2019
  • This study quantified, through a direct harvesting method, the storage and annual uptake of carbon by open-grown trees of three landscape tree species frequently planted in the southern region of Korea, and developed quantitative models to easily estimate the carbon reduction by tree growth for each species. The species studied were Camellia japonica, Lagerstroemia indica, and Quercus myrsinaefolia, for which no information on carbon storage and uptake was available. Ten individuals of each species (30 in total) were sampled across a range of stem diameter sizes at given intervals. The study measured the biomass of each part of the sample trees to quantify total carbon storage per tree. Annual carbon uptake per tree was computed by analyzing the radial growth rates of stem samples at breast height or ground level. Quantitative models using stem diameter as the independent variable were developed to easily calculate the storage and annual uptake of carbon per tree for each species. All the models showed high fitness, with r² values of 0.94-0.98. The storage and annual uptake of carbon by a Q. myrsinaefolia tree with a dbh of 10 cm were 24.0 kg and 4.5 kg/yr, respectively. A C. japonica tree and an L. indica tree with a dg of 10 cm stored 11.2 kg and 8.1 kg of carbon and annually sequestered 2.6 kg and 1.2 kg, respectively. This carbon storage equals the carbon emitted by the consumption of about 42 L of gasoline for Q. myrsinaefolia, 20 L for C. japonica, and 14 L for L. indica. A tree with a diameter of 10 cm annually offsets the carbon emissions from roughly 8 L of gasoline for Q. myrsinaefolia, 5 L for C. japonica, and 2 L for L. indica. This study pioneers the quantification of biomass and carbon reduction for landscape tree species in the southern region, despite the difficulties of directly cutting and digging up the roots of planted trees.
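
The quantitative models are diameter-based regressions; a common way to build one (my assumption of the functional form, since the abstract does not state it) is a log-log power-law fit, C = a·D^b. The numbers below are invented for illustration, not the study's measurements.

```python
import numpy as np

# Made-up sample: stem diameter (cm) and measured carbon storage (kg C) per tree.
diam = np.array([4, 6, 8, 10, 12, 15, 18, 22, 26, 30], dtype=float)
carbon = np.array([2.1, 5.0, 9.8, 16.5, 26.0, 44.0, 70.0, 115.0, 170.0, 240.0])

# Fit log C = log a + b log D, i.e. the allometric form C = a * D^b.
b, log_a = np.polyfit(np.log(diam), np.log(carbon), 1)
a = np.exp(log_a)

pred = a * diam ** b
ss_res = np.sum((np.log(carbon) - np.log(pred)) ** 2)
ss_tot = np.sum((np.log(carbon) - np.log(carbon).mean()) ** 2)
print(f"C = {a:.3f} * D^{b:.3f}, r2 = {1 - ss_res/ss_tot:.3f}")
```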

History and Future Direction for the Development of Rice Growth Models in Korea (벼 작물생육모형 국내 도입 활용과 앞으로의 연구 방향)

  • Kim, Junhwan;Sang, Wangyu;Shin, Pyeong;Baek, Jaekyeong;Cho, Chongil;Seo, Myungchul
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.21 no.3
    • /
    • pp.167-174
    • /
    • 2019
  • A process-oriented crop growth model can simulate the biophysical processes of rice under diverse environmental and management conditions, which makes it more versatile than an empirical crop model. In the present study, we examined the chronology and background of the development of rice growth models in Korea, to provide insight into where the models need improvement. Rice crop growth models were introduced in Korea in the late 1980s. Until the 2000s, these models were used to simulate yield in specific areas of Korea. Since then, they have been improved to take the biological characteristics of rice growth and development into account in more detail. Still, their use has been largely limited to assessing the impact of climate change on crop production. Efforts have been made to apply a crop growth model, e.g., the CERES-Rice model, to develop decision support systems for farm-level crop management. However, decision support systems based on crop growth models have attracted only a small number of stakeholders, most likely because of the scarcity of on-site weather data and of reliable parameter sets for the cultivars grown in Korea. Wider use of crop growth models would be facilitated by approaches that extend the spatial availability of reliable weather data, either measured on-site or estimated by spatial interpolation. New approaches to calibrating cultivar parameters for new cultivars would also help lower the hurdles to adopting crop growth models.

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.1-17
    • /
    • 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Such research is classified into studies using structured data and studies using unstructured data. With structured data such as historical stock prices and financial statements, past studies usually used technical and fundamental analysis. In the big data era, the amount of information has increased rapidly, and artificial intelligence methodologies that can quantify string information, the unstructured data that makes up a large share of that information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a stock price with the news of the target company. However, according to previous research, not only news of a target company but also news of related companies can affect its stock price. Finding a highly relevant company is not easy, though, because of market-wide effects and random signals. Existing studies have therefore identified relevant companies primarily through pre-determined international industry classification standards. However, recent research shows that the homogeneity within sectors of the global industry classification standard is uneven, so forecasting stock prices by grouping firms wholesale, without isolating the truly relevant companies, can hurt predictive performance. To overcome this limitation, we are the first to combine random matrix theory with text mining for stock prediction. When the dimension of the data is large, classical limit theorems are no longer suitable because statistical efficiency is reduced; a simple correlation analysis in the financial market therefore does not reveal the true correlation. To solve this, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlation between companies. With the true correlation, we perform cluster analysis to find relevant companies. Based on the clustering, we use multiple kernel learning, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously: each kernel predicts stock prices from features of the financial news of the target firm or one of its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that forecasting stock prices using news from relevant companies is effective. (2) Looking for relevant companies in the wrong way can lower AI prediction performance. (3) The proposed approach with random matrix theory performs better than previous studies when cluster analysis is based on the true correlation, with market-wide effects and random signals removed. The contributions of this study are as follows. First, it shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce good methodologies. This suggests that it is important not only to develop AI algorithms but also to adopt theory from physics, extending earlier work that integrated artificial intelligence with complex-system theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue: it is important not only to study AI algorithms but also to theoretically justify the input values. Third, we confirmed that firms grouped together under the Global Industry Classification Standard (GICS) may have low relevance, and we suggest that relevance should be defined theoretically rather than simply read off the GICS.
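
The random-matrix filtering step can be sketched compactly: eigenvalues of the return-correlation matrix lying inside the Marchenko-Pastur noise band are discarded, the largest eigenvalue is treated as the market-wide mode, and the correlation matrix is rebuilt from the remaining eigenmodes before clustering. The sketch below uses simulated returns and my own parameter choices; it illustrates the technique, not the paper's exact pipeline.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

N, T = 50, 500                         # firms, return observations (assumed)
rng = np.random.default_rng(0)
returns = rng.standard_normal((T, N))  # simulated stand-in for real returns
C = np.corrcoef(returns, rowvar=False)

q = T / N
lam_max = (1 + 1 / np.sqrt(q)) ** 2    # Marchenko-Pastur upper edge
vals, vecs = np.linalg.eigh(C)

signal = vals > lam_max                # eigenmodes above the noise band
signal[np.argmax(vals)] = False        # drop the market-wide mode too
C_filtered = (vecs[:, signal] * vals[signal]) @ vecs[:, signal].T
np.fill_diagonal(C_filtered, 1.0)

# Hierarchical clustering on the filtered correlation, d_ij = sqrt(2(1 - rho_ij)).
dist = np.sqrt(np.clip(2 * (1 - C_filtered), 0, None))
np.fill_diagonal(dist, 0.0)
labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=5, criterion="maxclust")
print(labels[:10])                     # cluster id per firm
```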

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • With the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining research focused on the second step. However, with the realization that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been studied to preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly into a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can understand. Mapping arbitrary objects into a fixed-dimensional space while preserving their algebraic properties is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as demand for document embedding grows rapidly, many supporting algorithms have been developed. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document from the whole corpus of the document, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, which makes it difficult to accurately represent a complex document with multiple subjects. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords. For a document without keywords, the method can be applied after extracting keywords through various analysis methods; since this is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the traditional method's sensitivity to miscellaneous words, the vectors corresponding to the keywords of each document are extracted and assembled into a set of keyword vectors per document.
Next, clustering is conducted on each document's keyword set to identify the multiple subjects in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector traditional approach cannot properly map complex documents because of interference among subjects within each vector. With the proposed multi-vector method, we ascertained that complex documents can be vectorized more accurately by eliminating this interference.
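
The five-step pipeline can be illustrated end to end on a toy document; the sketch below uses gensim's Word2Vec for step (2) and k-means for step (4), and averages the keyword vectors within each cluster as a stand-in for step (5), whose exact generation rule the abstract does not spell out. Corpus, keywords, and dimensions are invented.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

docs = [{"body": "deep learning improves image classification and object detection",
         "keywords": ["learning", "classification", "detection"]}]

tokenized = [d["body"].split() for d in docs]                    # (1) parsing
w2v = Word2Vec(tokenized, vector_size=50, min_count=1, seed=1)   # (2) word embedding

for d in docs:
    kw_vecs = np.array([w2v.wv[k] for k in d["keywords"]])       # (3) keyword vectors
    n_subjects = min(2, len(kw_vecs))                            # assumed cluster count
    km = KMeans(n_clusters=n_subjects, n_init=10,
                random_state=1).fit(kw_vecs)                     # (4) keyword clustering
    multi_vec = [kw_vecs[km.labels_ == c].mean(axis=0)           # (5) one vector per subject
                 for c in range(n_subjects)]
    print(len(multi_vec), multi_vec[0].shape)                    # 2 vectors of dim 50
```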

Impact of Entrepreneurial Business Start-up Motivation, Entrepreneurial Spirit, and Entrepreneurial Competence Characteristics on Start-up Companies' Sustainability: Focusing on the Mediating Effect of the Start-up Companies' Business Performance (창업자의 창업동기, 창업가정신 그리고 창업가 역량특성이 창업기업 지속가능성에 미치는 영향: 창업기업 경영성과를 매개로 하여)

  • Hyuk, Kang Han;Park, Woojin;Yun, Bae byung
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.14 no.3
    • /
    • pp.59-71
    • /
    • 2019
  • The domestic unemployment rate is rising seriously under the international trend of businesses downsizing and operating with small workforces despite high per-worker costs. The government has adopted start-up promotion policies to overcome this problem, but with an excessive number of entrepreneurs concentrated in specific areas, their survival is uncertain. This study therefore aimed to provide a useful direction for successful start-ups by identifying how entrepreneurial motivation and entrepreneurship (internal characteristics of entrepreneurs) and competency characteristics (external characteristics) affect sustainability through business performance. Hypothesis testing showed that entrepreneurial motivation, entrepreneurship, and competency characteristics each had a positive effect on business performance, and that business performance in turn had a positive effect on sustainability. In addition, business performance mediated the effects of entrepreneurial motivation, entrepreneurship, and competency characteristics on sustainability. Studies to date on the factors affecting business performance and corporate sustainability have examined the independent variables of this study only individually or in pairs. If future work verifies entrepreneurs' internal and external characteristics from various angles more comprehensively, together with background characteristics such as the nature of start-up preparation, more in-depth results can be expected.
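
The mediation claim above follows the classic three-regression pattern; the schematic below (synthetic data, statsmodels, one of the three independent variables) shows the shape of such a test, not the paper's actual measurement model or estimates.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
motivation = rng.standard_normal(n)                       # X: entrepreneurial motivation
performance = 0.6 * motivation + rng.standard_normal(n)   # M: business performance
sustainability = (0.5 * performance + 0.1 * motivation
                  + rng.standard_normal(n))               # Y: sustainability

def ols(y, *xs):
    return sm.OLS(y, sm.add_constant(np.column_stack(xs))).fit()

path_a = ols(performance, motivation)                  # X -> M
total = ols(sustainability, motivation)                # X -> Y (total effect)
direct = ols(sustainability, motivation, performance)  # X -> Y controlling for M

# Mediation is indicated when X predicts M, M predicts Y given X, and the
# direct X coefficient shrinks relative to the total effect.
print(path_a.params[1], total.params[1], direct.params[1], direct.params[2])
```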

A Study on the Effect of Person-Job Fit and Organizational Justice Recognition on the Job Competency of Small and Medium Enterprises Workers (중소기업 종사자들의 직무 적합성과 조직 공정성 인식이 직무역량에 미치는 영향에 관한 연구)

  • Jung, Hwa;Ha, Kyu Soo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.14 no.3
    • /
    • pp.73-84
    • /
    • 2019
  • Despite decades of work experience, workers at small and medium-sized enterprises (SMEs) in Korea have found it hard to move after retirement into self-employment that draws on the job competency they accumulated at work. Unlike large companies, SMEs focus on immediate performance and lack a proper system for improving their employees' long-term job competency. Analyzing the independent variables affecting the job competency of SME employees is therefore needed to derive practical implications for SME personnel management. Preceding studies have analyzed the variables affecting job competency in specialized industries such as health care, public service, and IT, but analyses of SME workers are insufficient. Based on prior studies, this study set person-job fit and organizational justice as the independent variables affecting the job competency (the dependent variable) of general SME workers. The sub-variables were knowledge, skills, experience, and desire for person-job fit, and distributive, procedural, and deployment justice for organizational justice. A survey of SME employees in Korea was conducted from February to March 2019 using 5-point Likert scales; 323 responses were collected and analyzed empirically with the SPSS and AMOS statistical packages. Of the four sub-variables of person-job fit, knowledge, skills, and experience had a significant impact on job competency, while desire did not. Of the three sub-variables of organizational justice, deployment justice had a significant impact on job competency, while distributive and procedural justice did not. Personnel managers of SMEs should improve their employees' job competency by appropriately applying variables such as knowledge, skills, experience, and deployment at each stage, including recruitment, deployment, and promotion. Future job competency modeling studies are needed to overcome the limitation of this study, which could not measure job competency objectively.