• Title/Summary/Keyword: systems approach method


On the Use of Modal Derivatives for Reduced Order Modeling of a Geometrically Nonlinear Beam (모드 미분을 이용한 기하비선형 보의 축소 모델)

  • Jeong, Yong-Min;Kim, Jun-Sik
    • Journal of the Computational Structural Engineering Institute of Korea / v.30 no.4 / pp.329-334 / 2017
  • Structures that are assembled from substructures and contain a huge number of degrees of freedom are highly complex. To increase computational efficiency, the analysis models have to be simplified. Many substructuring techniques have been developed to simplify large-scale engineering problems; these techniques are very powerful for solving nonlinear problems, which require many iterative calculations. In this paper, a modal-derivatives-based model order reduction method, which is able to capture the stretching-bending coupling behavior of geometrically nonlinear systems, is adopted and its performance is evaluated. The quadratic terms in nonlinear beam theory, such as the Green-Lagrange strains, can be explained by the modal derivatives. They are obtained by taking the modal directional derivatives of the eigenmodes and form the second-order terms of the modal reduction basis. The method is then applied to a co-rotational finite element formulation that is well suited for geometrically nonlinear problems. Numerical results reveal that the end-shortening effect is so important that a conventional modal reduction method does not work unless the full model is used. It is demonstrated that the modal derivative approach yields the best compromise and is very promising for substructuring large-scale geometrically nonlinear problems.
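The second-order basis terms described above can be sketched numerically. The snippet below is a minimal NumPy illustration, not the paper's co-rotational implementation: it computes the lowest eigenmodes of a discrete system and approximates the static modal derivatives by finite differences of the tangent stiffness, assuming a user-supplied `K_func(u)` that returns the tangent stiffness at displacement `u`.

```python
import numpy as np

def modal_derivative_basis(K_func, M, n_modes, h=1e-4):
    """Reduction basis = lowest eigenmodes plus finite-difference modal
    derivatives (static form K0 * d_ij = -(dK/d eta_j) * phi_i)."""
    n = M.shape[0]
    K0 = K_func(np.zeros(n))
    # generalized eigenproblem via Cholesky of the mass matrix
    L = np.linalg.cholesky(M)
    Linv = np.linalg.inv(L)
    w2, Y = np.linalg.eigh(Linv @ K0 @ Linv.T)   # eigenvalues ascend
    Phi = Linv.T @ Y[:, :n_modes]                # lowest n_modes modes
    derivs = []
    for i in range(n_modes):
        for j in range(i, n_modes):
            # directional derivative of the tangent stiffness along mode j
            dK = (K_func(h * Phi[:, j]) - K0) / h
            derivs.append(np.linalg.solve(K0, -dK @ Phi[:, i]))
    V = np.column_stack([Phi] + derivs)
    Q, _ = np.linalg.qr(V)   # orthonormalize the combined basis
    return Q
```

The quadratic (stretching-bending) coupling enters through the derivative columns, which a plain eigenmode basis omits.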

A Study on Temperature Analysis for Smart Electrical Power Devices (스마트 전력 기기의 온도 분석에 관한 연구)

  • Vasanth, Ragu;Lee, Myeongbae;Kim, Younghyun;Park, Myunghye;Lee, Seungbae;Park, Jwangwoo;Cho, Yongyun;Shin, Changsun
    • KIPS Transactions on Computer and Communication Systems / v.6 no.8 / pp.353-358 / 2017
  • An electrical power utility, such as an electrical power pole, includes various kinds of sensors for smart services. Temperature is considered one of the important factors that can influence the smart operation of such a utility. This study suggests a method of temperature data analysis for deciding the status of smart electrical power utilities by using a Kalman filter and an ensemble model. The suggested approach separates the temperature data according to the positions of the temperature sensors on a utility and then uses the Kalman filter and the ensemble model to analyze the characteristics of the temperature variation. In detail, the method explains the variation between an external factor, such as the weather temperature, and the sensed temperature data, and then analyzes the temperature data from each position on the electrical power utilities. In this process, the suggested method uses the Kalman filter to remove erroneous data and the ensemble model to find the mean value of the electrical data for every hour. The results of the temperature analysis are described together with the analyzed electrical data. Finally, we were able to check the working condition of the power devices and the range of the temperature data for each device, which may help to indicate any causalities with respect to the devices on the utility pole.
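As a rough illustration of the two analysis steps, the sketch below (hypothetical, not the authors' implementation) applies a scalar random-walk Kalman filter to a temperature series and then takes ensemble means by hour of day; the process and measurement variances `q` and `r` are assumed values.

```python
import numpy as np

def kalman_filter_1d(z, q=1e-3, r=0.5):
    """Scalar random-walk Kalman filter over a temperature series;
    q (process variance) and r (measurement variance) are assumed."""
    x, p = float(z[0]), 1.0
    filtered = []
    for meas in z:
        p = p + q                # predict: state is a slow random walk
        k = p / (p + r)          # Kalman gain
        x = x + k * (meas - x)   # update with the new measurement
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)

def hourly_ensemble_mean(values, hours):
    """Ensemble-style summary: mean filtered reading per hour of day."""
    values, hours = np.asarray(values, float), np.asarray(hours)
    return {int(h): float(values[hours == h].mean()) for h in np.unique(hours)}
```

Readings whose residual against the filtered estimate is large could then be flagged as the "error data" the abstract mentions.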

Blockchain Based Financial Portfolio Management Using A3C (A3C를 활용한 블록체인 기반 금융 자산 포트폴리오 관리)

  • Kim, Ju-Bong;Heo, Joo-Seong;Lim, Hyun-Kyo;Kwon, Do-Hyung;Han, Youn-Hee
    • KIPS Transactions on Computer and Communication Systems / v.8 no.1 / pp.17-28 / 2019
  • In financial investment management, the strategy of distributing an investment by selecting and combining various financial assets is called portfolio management. In recent years, blockchain-based financial assets such as cryptocurrencies have been traded on several well-known exchanges, and an efficient portfolio management approach is required for investors to steadily raise their return on investment in cryptocurrencies. Meanwhile, deep learning has shown remarkable results in various fields, and research on applying deep reinforcement learning algorithms to portfolio management has begun. In this paper, we propose an efficient financial portfolio management method based on Asynchronous Advantage Actor-Critic (A3C), a representative asynchronous reinforcement learning algorithm. In addition, since the conventional cross-entropy function cannot be applied to portfolio management directly, we propose a modified cross-entropy that fits the portfolio investment setting. Finally, we compare the proposed A3C model with an existing reinforcement-learning-based cryptocurrency portfolio investment algorithm and show that the performance of the proposed A3C model is better.
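A full A3C agent is beyond a short example, but the portfolio-specific objective the paper motivates can be sketched. The function below is an illustrative stand-in, not the paper's modified cross-entropy: it assumes a policy that emits one logit per asset, softmax-normalized into portfolio weights, and scores one step by log portfolio growth minus a proportional transaction cost; `cost=0.0025` is an assumed commission rate.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def portfolio_step(logits, price_relatives, prev_weights, cost=0.0025):
    """One-step portfolio reward: weights = softmax(logits); reward =
    log growth of the portfolio minus a proportional cost on turnover."""
    w = softmax(np.asarray(logits, float))
    turnover = np.abs(w - np.asarray(prev_weights, float)).sum()
    growth = float(w @ np.asarray(price_relatives, float))
    return np.log(growth) - cost * turnover, w
```

In an actor-critic setup this reward would feed the advantage estimate that trains the asynchronous workers.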

Evaluation of hydropower dam water supply capacity (I): individual and integrated operation of hydropower dams in Bukhan river (발전용댐 이수능력 평가 연구(I): 북한강수계 개별 댐 및 댐군 용수공급능력 분석)

  • Jeong, Gimoon;Choi, Jeongwook;Kang, Doosun;Ahn, Jeonghwan;Kim, Taesoon
    • Journal of Korea Water Resources Association / v.55 no.7 / pp.505-513 / 2022
  • Recently, uncertainty in predicting available water resources is gradually increasing due to climate change and extreme weather. Social interest in water management, such as flood and drought prevention, is also increasing, and after the unification of water management implemented in 2018, domestic water management is facing a major turning point. As part of this strengthening of water management capabilities, various studies are being conducted on utilizing hydropower dams, which have mainly been operated for hydroelectric power generation, for flood control and water supply purposes. However, since dam evaluation methods developed for multi-purpose dams are currently applied to hydropower dams, an additional evaluation approach that can consider the characteristics of hydropower dams is required. In this study, a new water supply capacity evaluation method is presented that considers the operational characteristics of hydropower dams in terms of water supply, and a connected reservoir simulation method is proposed to evaluate the comprehensive water supply capacity of a dam group operating in a river basin. The presented method was applied to the hydropower dams located in the Bukhan River basin, and the water supply yields of the individual dams and of the multi-reservoir system were compared and analyzed. In the future, the role of hydropower dams in water supply during droughts is expected to become more important, and this study can be used for sustainable domestic water management research using hydropower dams.
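For intuition, the individual-dam side of such an evaluation can be sketched as a sequent mass-balance over a historical inflow series; the connected multi-reservoir simulation of the paper would chain several such balances, with upstream releases feeding downstream inflows. This is a simplified illustration, not the authors' method.

```python
def supply_reliability(inflows, capacity, demand, s0=None):
    """Sequent mass-balance for one reservoir: each period the storage
    plus inflow either meets the demand (surplus stored up to capacity,
    the rest spills) or fails. Returns the fraction of periods met."""
    s = capacity if s0 is None else s0   # start full unless specified
    met = 0
    for q in inflows:
        available = s + q
        if available >= demand:
            met += 1
            s = min(available - demand, capacity)  # excess spills
        else:
            s = 0.0   # partial supply; storage is emptied
    return met / len(inflows)
```

Sweeping `demand` until reliability drops below a target level would give a firm-yield estimate for the dam.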

Study on User Characteristics based on Conversation Analysis between Social Robots and Older Adults: With a focus on phenomenological research and cluster analysis (소셜 로봇과 노년층 사용자 간 대화 분석 기반의 사용자 특성 연구: 현상학적 분석 방법론과 군집 분석을 중심으로)

  • Na-Rae Choi;Do-Hyung Park
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.211-227 / 2023
  • Personal service robots, a type of social robot that has emerged with the aging population and technological advancements, are being transformed around technologies that can extend independent living for older adults in their homes. For older adults to accept and use social robot innovations in their daily lives on a long-term basis, it is crucial to understand user perspectives, contexts, and emotions more deeply. This research aims to understand older adults comprehensively by using a mixed-method approach that integrates quantitative and qualitative data. Specifically, we employ the Van Kaam phenomenological methodology to group conversations into nine categories, with emotional cues and conversation participants as the key variables, using voice conversation records between older adults and social robots. We then characterize the conversations by frequency and weight, allowing for user segmentation. Additionally, we conduct a profiling analysis using demographic data and health indicators obtained from pre-survey questionnaires. Furthermore, based on the conversation analysis, we perform a K-means cluster analysis to classify older adults into three groups and examine their respective characteristics. The proposed model is expected to contribute to the growth of businesses that understand users and derive insights by providing a methodology for segmenting older adults, which is essential for the future provision of social robots with caregiving functions in everyday life.
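The clustering step can be illustrated with a plain Lloyd's-algorithm k-means over per-user frequency vectors for the nine conversation categories; this is a generic sketch, not the study's exact configuration.

```python
import numpy as np

def kmeans(X, k=3, iters=50, seed=0):
    """Plain Lloyd's k-means (k=3 groups as in the study)."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each user vector to its nearest center
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # move each center to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Each row of `X` would hold one older adult's (possibly weighted) frequencies over the nine conversation categories; the resulting labels define the three user groups to be profiled.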

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers use voice conversation for human-computer interaction, and voice recognition errors often occur in conversational situations. Recognition errors in user utterance records can be categorized into two types. The first type is a misrecognition error, where the agent fails to recognize the user's speech entirely. The second type is a misinterpretation error, where the user's speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation errors. For each of these methods, the similarity of consecutive speech pairs was computed using word-embedding and document-embedding techniques, which convert words and documents into vectors. This approach goes beyond simple word-based similarity calculation to explore a new method for detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model by applying patterns of misinterpretation error causes. The results revealed that the most significant improvement was obtained through initial-consonant extraction for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are difficult to detect, the study proposed diverse text separation methods and found a novel method that improved performance remarkably. Second, if this is applied to conversational agents or voice recognition services that require neologism detection, the patterns of errors occurring from the voice recognition stage can be specified. The study also proposed and verified that, even for utterances not categorized as errors, services can be provided according to the results users desire.
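The syllable-separation idea rests on standard Hangul Unicode arithmetic: each precomposed syllable in U+AC00..U+D7A3 encodes an (initial, medial, final) jamo triple. The sketch below decomposes words into jamo and computes a Jaccard similarity over jamo bigrams, an illustrative stand-in for the paper's word-similarity measure rather than its exact formula.

```python
def decompose(word):
    """Split precomposed Hangul syllables (U+AC00-U+D7A3) into
    (initial, medial, final) jamo indices; other characters pass through."""
    jamo = []
    for ch in word:
        code = ord(ch) - 0xAC00
        if 0 <= code <= 11171:
            jamo += [('I', code // 588),          # initial consonant (choseong)
                     ('M', (code % 588) // 28),   # medial vowel (jungseong)
                     ('F', code % 28)]            # final consonant (jongseong)
        else:
            jamo.append(('C', ch))
    return jamo

def jamo_similarity(a, b):
    """Jaccard similarity over jamo bigrams of the decomposed words."""
    def bigrams(seq):
        return set(zip(seq, seq[1:]))
    A, B = bigrams(decompose(a)), bigrams(decompose(b))
    if not A and not B:
        return 1.0
    return len(A & B) / len(A | B)
```

Initial-consonant extraction, which the study found most effective for neologisms, corresponds to keeping only the `('I', …)` entries of the decomposition.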

Automated Finite Element Analyses for Structural Integrated Systems (통합 구조 시스템의 유한요소해석 자동화)

  • Chongyul Yoon
    • Journal of the Computational Structural Engineering Institute of Korea / v.37 no.1 / pp.49-56 / 2024
  • An automated dynamic structural analysis module is a crucial element of a structural integrated mitigation system. This module must deliver prompt real-time responses to enable timely actions, such as evacuation or warnings, in response to the severity of the hazard posed to the structural system. The finite element method, an approximate structural analysis approach widely adopted globally, owes its popularity in part to its user-friendly nature. However, the computational efficiency and accuracy of the results depend on the user-provided finite element mesh, with the number of elements and their quality playing pivotal roles. This paper introduces a computationally efficient adaptive mesh generation scheme that optimally combines the r-method of node movement and the h-method of element division for mesh refinement. Adaptive mesh generation schemes create finite element meshes automatically; here, representative strain values for a given mesh are employed for the error estimates. When applied to dynamic problems analyzed in the time domain, meshes need to be modified at each of a few hundred or thousand time steps. The specifics of the algorithm are demonstrated on a standard cantilever beam subjected to a concentrated load at the free end, and a portal frame example showcases the generation of various robust meshes. These examples illustrate the adaptive algorithm's capability to produce robust meshes with reasonable accuracy and efficient computing time. Moreover, the study highlights the scheme's potential for complex structural dynamic problems, such as those subjected to seismic or erratic wind loads, and its suitability for general nonlinear analysis problems, establishing the versatility and reliability of the proposed adaptive mesh generation scheme.
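The two refinement mechanisms can be sketched in one dimension, assuming a per-element error indicator (such as a representative strain measure): element division adds nodes where the indicator exceeds a tolerance, and node movement pulls interior nodes toward the neighboring element with the larger indicator. This is a schematic illustration, not the paper's scheme.

```python
def refine_mesh(nodes, error, tol):
    """h-type pass: bisect every element whose error indicator > tol."""
    new_nodes = [nodes[0]]
    for i in range(len(nodes) - 1):
        if error[i] > tol:
            new_nodes.append(0.5 * (nodes[i] + nodes[i + 1]))
        new_nodes.append(nodes[i + 1])
    return new_nodes

def relocate_nodes(nodes, error):
    """r-type pass: move each interior node to an error-weighted average
    of the centers of its two neighboring elements."""
    out = list(nodes)
    for i in range(1, len(nodes) - 1):
        left, right = error[i - 1], error[i]
        out[i] = (left * 0.5 * (nodes[i - 1] + nodes[i]) +
                  right * 0.5 * (nodes[i] + nodes[i + 1])) / (left + right)
    return out
```

In a time-domain analysis these passes would run (or be checked) at each step, so keeping each pass cheap matters as much as the resulting mesh quality.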

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis are continuously increasing, which means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has activated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is expected to be performed by the requesting parties themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with particular attention focused on text data. The emergence of new platforms and techniques on the web has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques utilized for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is regarded as very useful in that it reflects the semantic elements of the documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection, so it is essential to analyze all documents at once to identify the topic of each document. This makes the analysis process time-consuming when topic modeling is applied to a large number of documents and creates a scalability problem: the processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents are divided into sub-units, and topics are derived by repeating topic modeling for each unit. This method enables topic modeling on a large number of documents with limited system resources and improves the processing speed of topic modeling. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first being combined. However, despite its many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear; local topics can be identified in each document set, but global topics cannot. Second, a method for measuring the accuracy of the methodology must be established; that is, assuming that the global topics are the ideal answer, the difference of each local topic from its global topic needs to be measured. Because of these difficulties, this approach has not been studied as extensively as other topic modeling approaches. In this paper, we propose a topic modeling approach that solves the above two problems.
First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) that consists of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling over the entire collection, and we proposed a reasonable method for comparing the results of both methods.
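The mapping between RGS topics and local topics can be sketched generically: if each topic is represented by its topic-term weight vector, every local topic can be assigned to its most similar global topic by cosine similarity. The representation is an assumption made for illustration, not necessarily the paper's exact mapping rule.

```python
import numpy as np

def map_local_to_global(local_topics, global_topics):
    """Assign each local topic to its most similar global (RGS) topic
    by cosine similarity of topic-term weight vectors."""
    L = np.asarray(local_topics, dtype=float)
    G = np.asarray(global_topics, dtype=float)
    # normalize rows so the dot products below are cosine similarities
    Ln = L / np.linalg.norm(L, axis=1, keepdims=True)
    Gn = G / np.linalg.norm(G, axis=1, keepdims=True)
    return np.argmax(Ln @ Gn.T, axis=1).tolist()
```

The accuracy check described above would then count how often a document's local-topic assignment, routed through this mapping, agrees with its global-topic assignment.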

Analyzing the Issue Life Cycle by Mapping Inter-Period Issues (기간별 이슈 매핑을 통한 이슈 생명주기 분석 방법론)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.25-41 / 2014
  • Recently, the number of social media users has increased rapidly because of the prevalence of smart devices. As a result, the amount of real-time data has been increasing exponentially, which, in turn, is generating more interest in using such data to create added value. For instance, several attempts are being made to analyze the search keywords frequently used on news portal sites and the words regularly mentioned on various social media in order to identify social issues. The technique of topic analysis is employed to identify topics and themes in a large collection of text documents. As one of the most prevalent applications of topic analysis, issue tracking investigates changes in the social issues that topic analysis identifies. Currently, traditional issue tracking is conducted by identifying the main topics of the documents covering an entire period at once and then analyzing the occurrence of each topic by period. However, this traditional approach has two limitations. First, when a new period is added, topic analysis must be repeated for the documents of the entire period rather than only for the new documents of the added period. This creates practical limitations in the form of significant time and cost burdens, so the traditional approach is difficult to apply in most settings that require analysis of additional periods. Second, issues are not only created and terminated constantly; one issue can sometimes be split into several issues, or multiple issues can be merged into a single issue. In other words, each issue has a life cycle that consists of the stages of creation, transition (merging and segmentation), and termination. Existing issue tracking methods do not address the connections and effect relationships between these issues.
The purpose of this study is to overcome these two limitations of existing issue tracking, one concerning the analysis method and the other concerning the lack of consideration of the changeability of issues. Suppose we perform separate topic analyses for multiple periods. It is then essential to map the issues of different periods in order to trace issue trends, but it is not easy to discover connections between the issues of different periods because the issues derived for each period are mutually heterogeneous. In this study, to overcome these limitations, the analysis is performed independently for each period, without analyzing the documents of all periods simultaneously, and issue mapping is performed to link the identified issues of adjacent periods. An integrated view of the detailed periods is presented, and the issue flow over the entire integrated period is depicted. Thus, the entire issue life cycle, including creation, transition (merging and segmentation), and extinction, is identified and examined systematically, and the changeability of issues is analyzed. The proposed methodology is highly efficient in terms of time and cost and sufficiently considers the changeability of issues, so its results can be adapted to practical situations. By applying the proposed methodology to actual Internet news, its potential practical applications are analyzed. Consequently, the proposed methodology was able to extend the period of the analysis and to follow the course of each issue's life cycle, facilitating a clearer understanding of complex social phenomena through topic analysis.
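The transition stage of the life cycle can be read off an inter-period link structure. The sketch below assumes that topic pairs of adjacent periods have already been linked (e.g. by a similarity threshold) and labels creation, termination, segmentation, and merging from the link degrees; the thresholding itself is out of scope here, so this is an illustrative skeleton rather than the paper's mapping procedure.

```python
def classify_life_cycle(prev_ids, next_ids, links):
    """Label life-cycle events from inter-period topic links: an unlinked
    old topic is a termination, an unlinked new topic is a creation,
    one-to-many links mean segmentation, many-to-one links mean merging."""
    out_deg = {p: sum(1 for a, _ in links if a == p) for p in prev_ids}
    in_deg = {n: sum(1 for _, b in links if b == n) for n in next_ids}
    events = []
    for p in prev_ids:
        if out_deg[p] == 0:
            events.append((p, 'termination'))
        elif out_deg[p] > 1:
            events.append((p, 'segmentation'))
    for n in next_ids:
        if in_deg[n] == 0:
            events.append((n, 'creation'))
        elif in_deg[n] > 1:
            events.append((n, 'merging'))
    return events
```

Chaining these labels across consecutive period pairs yields the issue-flow picture of the whole analysis horizon.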

Response Modeling for the Marketing Promotion with Weighted Case Based Reasoning Under Imbalanced Data Distribution (불균형 데이터 환경에서 변수가중치를 적용한 사례기반추론 기반의 고객반응 예측)

  • Kim, Eunmi;Hong, Taeho
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.29-45 / 2015
  • Response modeling is a well-known research issue for those who seek superior performance in predicting customers' responses to marketing promotions. A response model reduces marketing cost by identifying prospective customers in a very large customer database and predicting the purchasing intention of the selected customers, whereas a promotion derived from an undifferentiated marketing strategy incurs unnecessary cost. In addition, the big data environment has accelerated the development of response models with data mining techniques such as case-based reasoning (CBR), neural networks, and support vector machines. CBR is one of the major tools in business because it is simple and robust to apply to response models, even though it has not shown high performance compared with other machine learning techniques. Thus many studies have tried to improve CBR for business data mining with enhanced algorithms or the support of other techniques such as genetic algorithms, decision trees, and AHP (Analytic Hierarchy Process). Ahn and Kim (2008) utilized logit, neural networks, and CBR to predict which customers would purchase the items promoted by a marketing department, and tried to optimize the value of k for the k-nearest-neighbor search with a genetic algorithm to improve the performance of the integrated model. Hong and Park (2009) noted that an integrated approach combining CBR with logit, neural networks, and support vector machines (SVM) predicted customers' responses to marketing promotions better than the individual models. This paper presents an approach to predicting customers' responses to a marketing promotion with case-based reasoning, in which the model applies different weights to each feature.
We deployed a logit model on a database containing the promotion and purchasing data of bath soap, and the resulting coefficients were used as the feature weights of the CBR model. We empirically compared the performance of the proposed weighted CBR model with neural networks and a pure CBR model and found that the proposed weighted CBR model showed superior performance to the pure CBR model. Imbalanced data is a common problem when building classification models on real data, as in bankruptcy prediction, intrusion detection, fraud detection, churn management, and response modeling. Imbalanced data means that the number of instances in one class is remarkably small or large compared with the number of instances in the other classes. Classification models such as response models have great trouble learning patterns from such data because they tend to ignore the minority class while classifying the majority class correctly. Sampling, which can be categorized into under-sampling and over-sampling, is one of the most representative approaches to the problem of imbalanced data distributions. However, CBR is not sensitive to the data distribution because, unlike machine learning algorithms, it does not learn a model from the data. In this study, we investigated the robustness of the proposed model while changing the ratio of responding to non-responding customers, because the customers who respond to a promotion are always a small fraction of those who do not in the real world. We simulated the proposed model 100 times with different ratios of responding to non-responding customers to validate its robustness under imbalanced data distributions and found that the proposed CBR-based model showed superior performance to the compared models on the imbalanced data sets.
Our study is expected to improve the performance of response models for promotion programs with CBR under imbalanced data distributions in the real world.
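The weighting idea can be sketched directly: use (for example) the absolute logit coefficients as feature weights inside the nearest-neighbor distance and take a majority vote of the retrieved cases. The helper below is a hypothetical minimal version, not the authors' implementation.

```python
import numpy as np

def weighted_cbr_predict(X_train, y_train, x, weights, k=3):
    """Weighted case-based reasoning: feature weights (e.g. absolute
    logit coefficients) scale the squared distance per feature; the
    prediction is the majority vote of the k nearest stored cases."""
    X_train = np.asarray(X_train, float)
    y_train = np.asarray(y_train)
    w = np.asarray(weights, float)
    d = np.sqrt((w * (X_train - np.asarray(x, float)) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return int(y_train[nearest].sum() * 2 >= k)   # 1 = predicted responder
```

Because the case base itself is the "model", re-running this prediction over resampled case bases with different responder ratios reproduces the kind of robustness simulation the study describes.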