• Title/Summary/Keyword: Flow Learning

Search Result 753

A Framework Development for Sketched Data-Driven Building Information Model Creation to Support Efficient Space Configuration and Building Performance Analysis (효율적 공간 형상화 및 건물성능분석을 위한 스케치 정보 기반 BIM 모델 자동생성 프레임워크 개발)

  • Kong, ByungChan;Jeong, WoonSeong
    • Korean Journal of Construction Engineering and Management / v.25 no.1 / pp.50-61 / 2024
  • The market for compact houses is growing due to demand for floor plans that prioritize user needs. However, clients often have difficulty communicating their spatial requirements to professionals such as architects, because they lack the means to provide supporting evidence such as spatial configurations or cost estimates. This research aims to create a framework that translates sketched data-driven spatial requirements into 3D building components in BIM models, in order to facilitate spatial understanding and to provide building performance analysis that aids budgeting in the early design phase. The research process includes developing a process model, implementing the framework, and validating it. The process model describes the data flow within the framework and identifies the required functionality. Implementation involves creating the component systems and the user interfaces that integrate them. The validation verifies that the framework can automatically convert sketched space requirements into walls, floors, and roofs in a BIM model. The framework can also automatically calculate material and energy costs based on the BIM model. The developed framework enables clients to efficiently create 3D building components from sketched data and helps users understand the space and analyze building performance through the created BIM models.
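The paper's implementation is not reproduced here; purely as a minimal sketch of the idea, the Python below turns hypothetical sketched rectangular spaces into wall components and a rough material-cost figure. The Space/Wall types and the unit cost are illustrative assumptions, not the framework's actual data model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Space:
    """A rectangular space taken from a sketched floor plan (metres). Hypothetical."""
    name: str
    width: float
    depth: float
    height: float = 2.4

@dataclass
class Wall:
    length: float
    height: float

def spaces_to_walls(spaces: List[Space]) -> List[Wall]:
    """Generate the four perimeter walls for each sketched space."""
    walls: List[Wall] = []
    for s in spaces:
        walls += [Wall(s.width, s.height), Wall(s.width, s.height),
                  Wall(s.depth, s.height), Wall(s.depth, s.height)]
    return walls

def material_cost(walls: List[Wall], unit_cost_per_m2: float) -> float:
    """Rough material cost from total wall area (assumed unit cost)."""
    return sum(w.length * w.height for w in walls) * unit_cost_per_m2

rooms = [Space("living", 4.5, 3.6), Space("bedroom", 3.0, 3.0)]
print(round(material_cost(spaces_to_walls(rooms), unit_cost_per_m2=35.0), 2))
```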

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing content becomes more important as information keeps being generated. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating the information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields where text data analysis is expected to be useful and promising, because new information is constantly generated and the earlier the information is obtained, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the flow of information is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm, and it is hard to extract high-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and to improve the semantic performance of searching stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to address the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three contributions. First, it presents a practical and simple automatic knowledge extraction method that can actually be applied. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% are designated as the test set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, the same number of score functions as stocks is trained.
Thus, when a new entity from the test set appears, its score can be calculated by feeding it into every score function, and the stock whose function gives the highest score is predicted as the item related to the entity. To evaluate the presented model, we measure its predictive power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the test set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the test set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average. This result may be due to interference with other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding key entities, or combinations of them, needed to search related information according to the user's investment intention. Graph data are generated using only the named entity recognition tool and fed to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, there are also limits and points to complement: most notably, the markedly poor performance for only a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
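The abstract does not give the exact form of the per-stock score functions; the sketch below is one plausible reading, a neural-tensor-style score function applied to a one-hot entity vector, with one (untrained) instance per stock. The class name, dimensions, and number of tensor slices k are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class NTNScore(nn.Module):
    """One neural-tensor-style score function (one instance per stock):
    scores how strongly a one-hot entity vector relates to that stock."""
    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, dim, dim) * 0.01)  # bilinear tensor slices
        self.V = nn.Linear(dim, k)                               # standard linear term
        self.u = nn.Linear(k, 1, bias=False)                     # output combination

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        bilinear = torch.einsum('d,kde,e->k', e, self.W, e)      # e^T W_i e for each slice
        return self.u(torch.tanh(bilinear + self.V(e))).squeeze()

# Hypothetical usage: 100 one-hot entity features; the stock whose score
# function gives the new entity the highest score is predicted as related.
entity = torch.zeros(100)
entity[17] = 1.0
scores = {name: NTNScore(100)(entity).item() for name in ["StockA", "StockB"]}
print(max(scores, key=scores.get))
```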

Problems with ERP Education at College and How to Solve the Problems (대학에서의 ERP교육의 문제점 및 개선방안)

  • Kim, Mang-Hee;Ra, Ki-La;Park, Sang-Bong
    • Management &amp; Information Systems Review / v.31 no.2 / pp.41-59 / 2012
  • ERP is a new technique of process innovation. It stands for enterprise resource planning, whose purpose is the integrated, total management of enterprise resources. ERP can also be seen as one of the latest management systems: it uses computers to organically connect all business processes, including marketing, production, and delivery, and controls those processes on a real-time basis. Currently, however, it is not easy for local enterprises to find operators to take charge of ERP programs, even when they want to introduce the resource management system. This suggests that training such operators through ERP education at school is urgently needed. In the field of education, however, the lack of professional ERP instructors and the limited effectiveness of learning programs for industrial applications of ERP are obstacles to producing ERP workers as competent as enterprises require. In ERP, accounting is more important than any other area, and accountants are taking on more and more roles in ERP, so the demand for experts in ERP accounting is increasing rapidly. This study examined previous research and literature concerning ERP education, identified problems with current ERP education at college, and proposed how to solve them. The proposed improvements are as follows. First, prerequisite learning for ERP, that is, education in the principles of accounting, should be intensified so that students acquire sufficient basic theoretical knowledge of ERP. Second, many different scenarios designed for trying ERP programs in business should be created; students should then be educated to understand the incidents and events occurring in those scenarios and to apply that understanding when trying ERP for themselves. Third, as mentioned earlier, ERP is a system that integrates all enterprise resources, such as marketing, procurement, personnel management, remuneration, and production, under the framework of accounting. It should be noted that under ERP, business activities are organically connected with accounting modules; more importantly, those modules should be recognized not individually but as parts of a whole flow of accounting. This study has a limitation in that it is a literature study relying heavily on previous studies, publications, and reports. This suggests the need to compare the efficiency of ERP education before and after applying what this study proposes. It is also necessary to examine students' and professors' perceived effectiveness of current ERP education and to compare and analyze the differences in perception between the two groups.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • The data center is a physical facility for accommodating computer systems and related components and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. IT facilities in particular behave irregularly because of their interdependence, which makes it difficult to identify the cause of a failure. Previous studies predicting failures in data centers treated each server as a single, isolated state, without assuming interaction between devices. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Failures outside the server include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring within servers, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a failure can trigger failures in other servers, or be triggered by them. In other words, while existing studies analyzed failures on the assumption that a single server does not affect other servers, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures for each device are sorted in chronological order, and when a failure occurs on one piece of equipment and another failure occurs on other equipment within 5 minutes, the failures are defined as occurring simultaneously. After constructing sequences for the devices that failed at the same time, five devices that frequently fail simultaneously within those sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, reflecting the fact that the level of complex failure differs for each server. This algorithm improves prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data were treated once as a single-server state and once as a multi-server state, and the results were compared. The second experiment improved the prediction accuracy for complex server failures by optimizing the threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multi-server assumption, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another. The study confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
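As one way to picture the "simultaneous failure" definition described above (failures on different equipment within 5 minutes grouped into one complex-failure event), here is a small pandas sketch over invented log rows; the paper's exact grouping rule may differ, for example in how the 5-minute window is anchored.

```python
import pandas as pd

# Hypothetical failure log: one row per (timestamp, equipment, failure_type).
log = pd.DataFrame({
    "time": pd.to_datetime(["2020-01-01 09:00", "2020-01-01 09:03",
                            "2020-01-01 09:20", "2020-01-01 09:21"]),
    "equipment": ["server-A", "server-B", "server-C", "server-A"],
    "failure":   ["Server Down", "DBMS Service Down",
                  "Network Node Down", "Server Down"],
}).sort_values("time")

def simultaneous_groups(df: pd.DataFrame, window_min: int = 5):
    """Group failures: events whose gap to the previous event is within
    `window_min` minutes are treated as one simultaneous (complex) failure."""
    groups, current, last_time = [], [], None
    for _, row in df.iterrows():
        if last_time is not None and (row.time - last_time) > pd.Timedelta(minutes=window_min):
            groups.append(current)
            current = []
        current.append((row.equipment, row.failure))
        last_time = row.time
    if current:
        groups.append(current)
    return groups

print(simultaneous_groups(log))
```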

The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan;Jung, Woo-Jin;Lee, Sang-Yong Tom
    • Asia pacific journal of information systems / v.24 no.2 / pp.233-253 / 2014
  • Can the stock market really be predicted? Stock market prediction has attracted much attention from many fields, including business, economics, statistics, and mathematics. Early research on stock market prediction was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, the stock market is largely driven by new information rather than by present and past prices; since new information is unpredictable, stock prices will follow a random walk. Despite these theories, Schumaker [2010] noted that people keep trying to predict the stock market using artificial intelligence, statistical estimates, and mathematical models. Mathematical approaches include percolation methods, log-periodic oscillations, and wavelet transforms to model future prices. Examples of artificial intelligence approaches, which deal with optimization and machine learning, are genetic algorithms, support vector machines (SVM), and neural networks. Statistical approaches typically predict the future using past stock market data. Recently, financial engineers have started to predict stock price movement patterns using SNS data. SNS is a place where people's opinions and ideas flow freely and affect others' beliefs. Through word-of-mouth in SNS, people share product usage experiences, subjective feelings, and the accompanying sentiment or mood with others. An increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web. Opinion mining is a domain of data mining that extracts public opinions expressed in SNS. There have been many studies on opinion mining from web sources such as product reviews, forum posts, and blogs. In relation to this literature, we try to understand the effect of firms' SNS exposure on stock prices in Korea. Similarly to Bollen et al. [2011], we empirically analyze the impact of SNS exposure on stock return rates. We use Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics provides trends and public opinions from Twitter and blogs using natural language processing and analysis tools. It collects sentences circulating on Twitter in real time, breaks them down into word units, and extracts keywords. In this study, we classify firms' SNS exposure into two groups: positive and negative. To test the correlation and causation between SNS exposure and stock returns, we first collect the stock prices of 252 firms and the KRX100 index from the Korea Stock Exchange (KRX) from May 25, 2012 to September 1, 2012. We also gather the public attitudes (positive, negative) toward these firms from Social Metrics over the same period. We conduct regression analysis between stock prices and the number of SNS exposures. Having checked the correlation between the two variables, we perform a Granger causality test to determine the direction of causation. The results show that the number of total SNS exposures is positively related to stock market returns. The number of positive mentions also has a positive relationship with stock market returns. Conversely, the number of negative mentions has a negative relationship with stock market returns, but this relationship is not statistically significant. This means that the impact of positive mentions is statistically larger than the impact of negative mentions.
We also investigate whether the impacts are moderated by industry type and firm size. We find that the impact of SNS exposure is larger for IT firms than for non-IT firms, and larger for small firms than for large firms. The Granger causality test shows that changes in stock returns are caused by SNS exposure, while causation in the other direction is not significant. Therefore, the relationship between SNS exposure and stock prices has one-directional causality: the more a firm is exposed in SNS, the more its stock price is likely to increase, while stock price changes may not cause more SNS mentions.
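The causality check described above can be reproduced in spirit with statsmodels' Granger causality test. The sketch below uses simulated return and mention series (all data hypothetical) and asks whether mentions Granger-cause returns; it is an illustration of the test, not the paper's actual data pipeline.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical daily series: a firm's stock return and its SNS mention count,
# simulated so that returns follow the previous day's mentions.
rng = np.random.default_rng(0)
mentions = rng.poisson(50, 101).astype(float)
returns = 0.001 * (mentions[:-1] - 50) + rng.normal(0, 0.01, 100)

# grangercausalitytests checks whether the SECOND column helps predict the FIRST,
# so this asks: do SNS mentions Granger-cause stock returns?
data = np.column_stack([returns, mentions[1:]])
grangercausalitytests(data, maxlag=3)  # prints the F tests and p-values per lag
```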

Syllabus Design and Pronunciation Teaching

  • Amakawa, Yukiko
    • Proceedings of the KSPS conference / 2000.07a / pp.235-240 / 2000
  • In the age of global communication, more human exchange is taking place at the grass-roots level. In the past, language policy and language planning were based on one nation-state with one language, but the high waves of globalization now allow extensive human exchange beyond national borders on a daily basis. Under such circumstances, homogeneity in Japan may no longer allow Japanese people to speak and communicate only in Japanese and only with other Japanese. In Japan, an advisory report on what education should be like in the 21st century was submitted to the Ministry of Education in June 1996. In this report, the introduction of English at public elementary schools was proposed for the first time, and a basic policy for English instruction at the elementary school level was revealed. Under this policy, English instruction is not required at the elementary school level, but each school may choose to introduce English into its curriculum starting April 2002. Baker, Colin (1996) indicates the age of three as the threshold dividing whether a child becomes bilingual naturally or by formal instruction. There is a movement towards making second language acquisition more naturalistic in educational settings, developing communicative competence in a more or less formal way. From the lessons of the Canadian immersion success, Genesee (1987) stresses the importance of early language instruction. From a psycholinguistic perspective, most children acquire basic communication skills in their first language apparently effortlessly, without systematic and formal instruction, during the first six or seven years of life. This innate capacity diminishes with age, making language learning increasingly difficult. The author, being a returnee, experienced considerable difficulty acquiring an L2, especially in achieving native-like competence. There will be many hurdles to clear before Japanese students can reach at least a communicative level in English. It has been said that English should be taught not to pass the college entrance examination but to communicate; however, the Japanese college entrance examination still makes students focus on the grammar-translation method. This is expected to shift to a more communication-oriented approach. Japan does not have to aim at becoming an officially bilingual country, but at least communicative English should be taught at every level in school. Mito College is a small two-year co-ed college in Japan with only one department, for business and economics, and English is required for all freshmen. Students at Mito College are generally not good at English, so it is necessary to make classes enjoyable and attractive enough that students are at least motivated to learn English. The main target is communicative English, so that students may be prepared to use English in various business settings. As an experiment in introducing more communicative English, the author designed the following syllabus. The program aims at training students to speak and enjoy English. The 90-minute class (a single 90-minute session per week is most common in Japanese colleges) is divided into two parts: the first half trains students orally using the Graded Direct Method, and the latter half uses different materials each time so that students can learn and enjoy English culture and language simultaneously. There are no quizzes or examinations in this one-academic-year program.
However, all students are required to write an original English poem by the end of the spring semester, with 2-6 students working together in a group on one poem. Students coming to Mito College have among the lowest English levels in Japan; however, the attached example of a poem made by one group shows that students can improve their creativity as long as they are kept encouraged. At the end of the fall semester, each student is required to give a 3-minute original English speech. An example from that speech contest will be presented at the Convention in Seoul.

Evaluation of Endothelium-dependent Myocardial Perfusion Reserve in Healthy Smokers; Cold Pressor Test using $H_2^{15}O\;PET$ (흡연자에서 관상동맥 내피세포 의존성 심근 혈류 예비능: $H_2^{15}O\;PET$ 찬물자극 검사에 의한 평가)

  • Hwang, Kyung-Hoon;Lee, Dong-Soo;Lee, Byeong-Il;Lee, Jae-Sung;Lee, Ho-Young;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.38 no.1 / pp.21-29 / 2004
  • Purpose: Much evidence suggests that long-term cigarette smoking alters the coronary vascular endothelial response. In this study, we applied nonnegative matrix factorization (NMF), an unsupervised learning algorithm, to CO-less $H_2^{15}O-PET$ to noninvasively investigate coronary endothelial dysfunction caused by smoking. Materials and Methods: This study enrolled eighteen young male volunteers: 9 smokers $(23.8{\pm}1.1\;yr;\;6.5{\pm}2.5$ pack-years) and 9 nonsmokers $(23.8{\pm}2.9\;yr)$. None had any cardiovascular risk factor or disease history. Myocardial $H_2^{15}O-PET$ was performed at rest, during cold ($5^{\circ}C$) pressor stimulation, and during adenosine infusion. The left ventricular blood pool and myocardium were segmented on the dynamic PET data by the NMF method. Myocardial blood flow (MBF) was calculated from the input and tissue functions by a single-compartment model with correction for partial volume and spillover effects. Results: There was no significant difference in resting MBF between the two groups (smokers: $1.43{\pm}0.41$ ml/g/min vs. non-smokers: $1.37{\pm}0.41$ ml/g/min; p=NS). During cold pressor stimulation, MBF in smokers was significantly lower than that in non-smokers ($1.25{\pm}0.34$ ml/g/min vs. $1.59{\pm}0.29$ ml/g/min; p=0.019). The difference in the ratio of cold pressor MBF to resting MBF between the two groups was also significant ($90{\pm}24%$ in smokers vs. $122{\pm}28%$ in non-smokers; p=0.024). During adenosine infusion, however, hyperemic MBF did not differ significantly between smokers and non-smokers ($5.81{\pm}1.99$ ml/g/min vs. $5.11{\pm}1.31$ ml/g/min; p=NS). Conclusion: In smokers, MBF during cold pressor stimulation was significantly lower than in nonsmokers, reflecting smoking-induced endothelial dysfunction. However, there was no significant difference in MBF during adenosine-induced hyperemia between the two groups.
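For readers unfamiliar with the segmentation step, the following is a minimal, simulated sketch of rank-2 NMF on dynamic PET time-activity curves, in the spirit of the method described (separating a blood-pool-like and a myocardium-like component). The data, curve shapes, and parameters are invented for illustration and do not reproduce the paper's processing.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical dynamic PET data: each row is a voxel's time-activity curve,
# simulated as a mixture of a fast "blood pool" curve and a slow "tissue" curve.
t = np.linspace(0, 120, 30)                       # 30 frames over 2 minutes
blood = np.exp(-t / 20.0)                         # fast washout (input-function-like)
tissue = 1.0 - np.exp(-t / 60.0)                  # slower tissue uptake
rng = np.random.default_rng(0)
mix = rng.random((500, 2))                        # 500 voxels, random mixing weights
voxels = mix @ np.vstack([blood, tissue]) + 0.01 * rng.random((500, 30))

# Rank-2 NMF: rows of H approximate the two underlying time-activity curves,
# columns of W give each voxel's loading, usable to separate blood pool vs myocardium.
model = NMF(n_components=2, init="nndsvda", max_iter=500)
W = model.fit_transform(voxels)
H = model.components_
dominant_factor = np.argmax(W, axis=1)            # crude segmentation by dominant factor
print(H.shape, np.bincount(dominant_factor))
```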

Effects and Roles of Korean Community Dance (한국 커뮤니티 댄스의 효과와 역할)

  • Park, Sojung
    • Trans- / v.9 / pp.37-66 / 2020
  • Entering the 21st century, the flow of society and culture has produced a cultural phenomenon in which people directly experience and enjoy things on their own. This trend has emerged as community dance, which has been active since 2010. Community dance is open to anyone and can be divided into children's, adult, and senior citizens' dance depending on the characteristics and age of the group, allowing work across various age groups. It refers to all kinds of dance for the happiness and self-fulfillment of everyone, from preschoolers to senior citizens, regardless of gender, race, or religion, whether to promote health, to meet the need for expression, or to improve physical strength at meetings organized by age group. Community dance is a dance activity in which everyone uses their leisure time and voluntarily participates in joyful activity, making it extensible to lifelong education and social learning. It is a voluntary community gathering conducted by experts for the general public. Community dance can be defined as the aggregate of physical activities that enrich an individual's daily life and enhance their sense of community to create a brighter society, while individuals achieve the goals of health promotion and aesthetic education. In contemporary community dance, the bodily and creative dance experience as self-expression reflects a happiness perspective, explored through the positive psychological experiences and influences of participants in the process of participation, and participants have continued networking both online and offline to enjoy dance culture. Although research has been conducted in various fields during the ten years since the community dance boom began, concrete program methodology has remained insufficient. This study therefore presents the Feldenkrais Method, hoping that it will be used as a methodology for local community dance and as part of artists' educational and choreography-creation methods that can improve the physical and functional aspects of dance and provide psychological stability.

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been conducted actively. Stock price forecasting research can be classified into work using structured data and work using unstructured data. With structured data such as historical stock prices and financial statements, past studies usually took technical analysis and fundamental analysis approaches. In the big data era, the amount of information has increased rapidly, and artificial intelligence methodologies that can find meaning by quantifying textual information, an unstructured form of data that accounts for a large share of information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a stock price using news about the target company. According to previous research, however, not only news about the target company but also news about related companies can affect its stock price. Finding highly relevant companies is not easy, because of market-wide effects and random signals. Thus, existing studies have identified highly relevant companies based mainly on pre-determined international industry classification standards. Recent research shows, however, that the Global Industry Classification Standard has varying homogeneity within its sectors, so forecasting stock prices by taking all firms in a sector together, without considering only the truly relevant companies, can hurt predictive performance. To overcome this limitation, we first apply random matrix theory together with text mining for stock prediction. When the dimension of the data is large, the classical limit theorems are no longer suitable because statistical efficiency is reduced; therefore, a simple correlation analysis in the financial market does not reveal the true correlation. To solve this, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlation between companies. With the true correlation, we perform cluster analysis to find relevant companies. Based on the clustering, we use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously; each kernel predicts stock prices with features from the financial news of the target firm or of its relevant firms. The results of this paper are as follows. (1) Following the existing research flow, we confirm that using news from relevant companies is an effective way to forecast stock prices. (2) Identifying relevant companies in the wrong way can lower the prediction performance of AI models. (3) The proposed approach with random matrix theory performs better than previous studies when cluster analysis is based on the true correlation obtained by removing market-wide effects and random signals. The contributions of this study are as follows. First, this study shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce good methodologies.
This suggests that it is important not only to develop AI algorithms but also to adopt theory from physics, extending existing research that integrated artificial intelligence with complex systems theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue; it is important not only to study artificial intelligence algorithms but also to adjust the input values theoretically. Third, we confirm that firms classified together under the Global Industry Classification Standard (GICS) may have low relevance, and we suggest that relevance should be defined theoretically rather than simply taken from the GICS.
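A minimal sketch of the random-matrix-theory filtering step alluded to above, assuming the standard Marchenko-Pastur bound is used to separate signal eigenvalues from noise and the largest eigenvalue is treated as the market-wide mode; the paper's exact procedure may differ.

```python
import numpy as np

def rmt_filtered_correlation(returns: np.ndarray) -> np.ndarray:
    """Remove market-wide and random components from a correlation matrix
    using the Marchenko-Pastur bound on eigenvalues (random matrix theory).
    `returns` has shape (T observations, N assets)."""
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    eigval, eigvec = np.linalg.eigh(corr)
    q = T / N
    lambda_max = (1 + 1 / np.sqrt(q)) ** 2           # Marchenko-Pastur upper edge
    # Keep only eigenvalues above the random band, excluding the largest
    # ("market mode"); everything else is treated as noise.
    keep = eigval > lambda_max
    keep[np.argmax(eigval)] = False                   # drop the market-wide mode
    filtered = (eigvec[:, keep] * eigval[keep]) @ eigvec[:, keep].T
    np.fill_diagonal(filtered, 1.0)
    return filtered

# Hypothetical usage: 250 days of returns for 50 stocks; the result would then
# feed a clustering step to find each target firm's relevant companies.
rng = np.random.default_rng(0)
C = rmt_filtered_correlation(rng.normal(0, 0.01, size=(250, 50)))
print(C.shape)
```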

A Study on the Art Education Program Based on Cultural Diversity: Focused on the Case of National Museum of Modern and Contemporary Art, Korea (서울어젠다 기반 문화다양성 미술관교육 프로그램 분석 및 방향 - 국립현대미술관 사례를 중심으로 -)