• Title/Summary/Keyword: Complex task

576 search results.

A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.207-228
    • /
    • 2007
  • Measurement of Information Technology (IT) organizations' activities has long been limited mainly to financial indicators. However, as information systems have taken on increasingly diverse functions, a number of studies have explored new measurement methodologies that combine financial measures with non-financial ones. In particular, recent research has examined the IT Balanced Scorecard (IT BSC), a concept derived from the BSC for measuring IT activities. The BSC provides more than the mere integration of non-financial measures into a performance measurement system. Its core rests on the cause-and-effect relationships between measures, which allow prediction of value chain performance, communication and realization of corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance, and to test the validity of these hypothesized effects to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy, and Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard. The cause-and-effect chain is also central to the IT BSC. However, the relationship between information systems and enterprise strategy, as well as the connections among various IT performance measurement indicators, has received little prior study. Ittner et al. [2003] report that 77% of surveyed companies with an implemented BSC place little or no emphasis on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming can be explained by one theoretical and one practical reason [Blumenberg and Hinz, 2006]. From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical point of view, modeling corporate causalities is a complex task due to tedious data acquisition and subsequent reliability maintenance. Nevertheless, cause-and-effect relationships are an essential part of BSCs, because they differentiate performance measurement systems like the BSC from simple key performance indicator (KPI) lists: KPI lists present an ad-hoc collection of measures to managers but do not allow a comprehensive view of corporate performance, whereas performance measurement systems like BSCs try to model the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in both theory and practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships among those CSFs. For this purpose, we define four perspectives of the BSC for IT organizations following Van Grembergen's study [2000]: the Future Orientation perspective represents the human and technology resources needed by IT to deliver its services; the Operational Excellence perspective represents the IT processes employed to develop and deliver the applications; the User Orientation perspective represents the user evaluation of IT; and the Business Contribution perspective captures the business value of the IT investments. Each of these perspectives has to be translated into corresponding metrics and measures that assess the current situation. This study suggests 12 CSFs for the IT BSC based on previous IT BSC studies and COBIT 4.1; these CSFs comprise 51 KPIs. We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective, which in turn has positive effects on the User Orientation perspective, which finally has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling has properties that make it more appropriate than techniques such as multiple regression and LISREL when analyzing small samples, and its use has been gaining interest among IS researchers in recent years because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC are partially significant.
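As a rough illustration of this hypothesized chain, the sketch below forms equally weighted composite scores from standardized indicator blocks for the four perspectives and estimates each path coefficient by OLS. This is a deliberate simplification of PLS path modeling, which iteratively estimates outer weights; a real analysis would use dedicated PLS-PM software, and all data here are synthetic.

```python
# Simplified sketch of the hypothesized IT BSC causal chain.
# NOTE: this approximates PLS path modeling with equally weighted
# composites and OLS path coefficients; not full PLS-PM estimation.
import numpy as np

rng = np.random.default_rng(0)
n = 60  # small sample, the setting where PLS is typically preferred

# Hypothetical KPI blocks for the four perspectives, wired so each
# perspective partly drives the next (as the hypotheses state).
future_orientation = rng.normal(size=(n, 3))
operational_excel = 0.6 * future_orientation.mean(axis=1, keepdims=True) + 0.8 * rng.normal(size=(n, 4))
user_orientation = 0.6 * operational_excel.mean(axis=1, keepdims=True) + 0.8 * rng.normal(size=(n, 3))
business_contrib = 0.6 * user_orientation.mean(axis=1, keepdims=True) + 0.8 * rng.normal(size=(n, 2))

def composite(block):
    """Equally weighted composite of standardized indicators."""
    z = (block - block.mean(axis=0)) / block.std(axis=0)
    score = z.mean(axis=1)
    return (score - score.mean()) / score.std()

def path_coefficient(x, y):
    """OLS slope between standardized composites (= correlation here)."""
    return float(np.dot(x, y) / np.dot(x, x))

fo, oe = composite(future_orientation), composite(operational_excel)
uo, bc = composite(user_orientation), composite(business_contrib)

print("FO -> OE:", round(path_coefficient(fo, oe), 3))
print("OE -> UO:", round(path_coefficient(oe, uo), 3))
print("UO -> BC:", round(path_coefficient(uo, bc), 3))
```

With standardized composites, each printed slope equals the correlation between adjacent perspectives, which conveys the sign and rough magnitude that a PLS path coefficient would.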

The Sensitivity Analyses of Initial Condition and Data Assimilation for a Fog Event using the Mesoscale Meteorological Model (중규모 기상 모델을 이용한 안개 사례의 초기장 및 자료동화 민감도 분석)

  • Kang, Misun;Lim, Yun-Kyu;Cho, Changbum;Kim, Kyu Rang;Park, Jun Sang;Kim, Baek-Jo
    • Journal of the Korean earth science society
    • /
    • v.36 no.6
    • /
    • pp.567-579
    • /
    • 2015
  • The accurate simulation of micro-scale weather phenomena such as fog using mesoscale meteorological models is a very complex task. In particular, the uncertainty arising from the initial input data has a decisive effect on the accuracy of numerical models, and data assimilation is required to reduce it. In this study, the limitations of a mesoscale meteorological model were examined using the WRF (Weather Research and Forecasting) model for a summer fog event around the Nakdong River in Korea. Sensitivity analyses of the model's simulation accuracy were conducted using two different sets of initial and boundary conditions: KLAPS (Korea Local Analysis and Prediction System) and LDAPS (Local Data Assimilation and Prediction System) data. In addition, the improvement in model performance from FDDA (Four-Dimensional Data Assimilation) using observational data from AWS (Automatic Weather Station) sites was investigated. The sensitivity analysis showed that the accuracy of simulated air temperature, dew point temperature, and relative humidity with LDAPS data was higher than with KLAPS data, whereas the accuracy of wind speed with LDAPS was lower than with KLAPS. A significant difference was found for relative humidity, where the RMSE (Root Mean Square Error) for LDAPS and KLAPS was 15.7% and 35.6%, respectively. After incorporating FDDA, the RMSE for air temperature, wind speed, and relative humidity was improved by approximately 0.3°C, 0.2 m/s, and 2.2%, respectively.
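The RMSE metric behind these comparisons is easy to state precisely; here is a minimal sketch using hypothetical stand-in arrays for the AWS observations and the two model runs, not the study's actual data.

```python
# Minimal sketch of the RMSE metric used to compare KLAPS- and
# LDAPS-driven WRF runs against observations. All values hypothetical.
import numpy as np

def rmse(obs, sim):
    """Root Mean Square Error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

obs_rh   = [92.0, 95.0, 97.0, 99.0, 98.0]   # observed relative humidity (%)
ldaps_rh = [88.0, 90.0, 93.0, 95.0, 96.0]   # hypothetical LDAPS-driven run
klaps_rh = [70.0, 72.0, 75.0, 78.0, 80.0]   # hypothetical KLAPS-driven run

print("LDAPS RH RMSE: %.1f%%" % rmse(obs_rh, ldaps_rh))
print("KLAPS RH RMSE: %.1f%%" % rmse(obs_rh, klaps_rh))
```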

Intelligent Optimal Route Planning Based on Context Awareness (상황인식 기반 지능형 최적 경로계획)

  • Lee, Hyun-Jung;Chang, Yong-Sik
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.117-137
    • /
    • 2009
  • Recently, intelligent traffic information systems have enabled people to forecast traffic conditions before hitting the road. These convenient systems operate on the basis of data reflecting current road and traffic conditions as well as distance-based data between locations. Thanks to the rapid development of ubiquitous computing, tremendous context data have become readily available, making vehicle route planning easier than ever. Previous research on the optimization of vehicle route planning focused merely on finding the optimal distance between locations. Contexts reflecting road and traffic conditions were not seriously treated as a way to resolve optimal routing problems in distance-based route planning, because this kind of information does not have a significant impact on traffic routing until a complex traffic situation arises. Further, it was not easy to take the traffic contexts fully into account, because predicting dynamic traffic situations was regarded as a daunting task. However, with the rapid increase in traffic complexity, the importance of contexts reflecting data related to moving costs has emerged. Hence, this research proposes a framework designed to resolve the optimal route planning problem by taking full account of additional moving costs such as road traffic cost and weather cost. Recent technological development, particularly in the ubiquitous computing environment, has facilitated the collection of such data. This framework is based on the contexts of time, traffic, and environment, and addresses the following issues. First, we clarify and classify the diverse contexts that affect a vehicle's velocity, and estimate the optimal moving cost based on dynamic programming that accounts for the context cost according to the variance of contexts. Second, the velocity reduction rate is applied to find the optimal route (shortest path) using context data on the current traffic condition. The velocity reduction rate indicates the degree to which a vehicle's possible velocity is reduced under the relevant road and traffic contexts, and is derived from statistical or experimental data. Knowledge generated in this paper can be referenced by organizations that deal with road and traffic data. Third, in experimentation, we evaluate the effectiveness of the proposed context-based optimal route (shortest path) between locations by comparing it to the previously used distance-based shortest path. A vehicle's optimal route might change because its velocity varies with unexpected but potential dynamic situations depending on road conditions. This study includes context variables such as 'road congestion', 'work', 'accident', and 'weather', which can alter the traffic condition and affect a moving vehicle's velocity. Since these context variables, except for 'weather', are related to road conditions, the relevant data were provided by the Korea Expressway Corporation; the 'weather'-related data were obtained from the Korea Meteorological Administration. The aware contexts are classified as contexts causing a reduction of vehicles' velocity, which determines the velocity reduction rate. To find the optimal route (shortest path), we introduced the velocity reduction rate into the calculation of a vehicle's velocity, reflecting composite contexts when one event synchronizes with another. We then proposed a context-based optimal route (shortest path) algorithm based on dynamic programming (see the sketch after this abstract). The algorithm is composed of three steps. In the first, initialization step, the departure and destination locations are given, and the path step is initialized to 0. In the second step, the moving costs between locations on the path, taking composite contexts into account, are estimated using the per-context velocity reduction rate as the path step increases. In the third step, the optimal route (shortest path) is retrieved through back-tracking. In the provided research model, we designed a framework to account for context awareness, moving cost estimation (taking both composite and single contexts into account), and the optimal route (shortest path) algorithm based on dynamic programming. Through illustrative experimentation using the Wilcoxon signed rank test, we showed that context-based route planning is much more effective than distance-based route planning. In addition, we found that the optimal solution (shortest path) from distance-based route planning might not be optimal in real situations, because road conditions are dynamic and unpredictable and affect most vehicles' moving costs. Although further study and more information are needed for a more accurate estimation of moving costs, this study remains viable for applications that reduce moving costs through effective route planning; for instance, it could support deliverers' decision making when they meet unpredictable dynamic situations on the road. Overall, we conclude that taking contexts into account as part of the cost is a meaningful and sensible approach to resolving the optimal route problem.
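A minimal sketch of this idea, under assumptions not taken from the paper: each edge's cost is travel time with its base speed discounted by the velocity reduction rates of the active contexts (composed multiplicatively when contexts co-occur), and the shortest-time route is found with a standard shortest-path algorithm (Dijkstra here, with back-tracking as in the third step). The graph, speeds, contexts, and reduction rates are all hypothetical.

```python
# Sketch of context-based route planning: each edge's cost is travel time
# with its speed discounted by the velocity reduction rates of the active
# contexts (congestion, work, accident, weather). All numbers hypothetical.
import heapq

# edge: (neighbor, distance_km, base_speed_kmh, active_contexts)
graph = {
    "A": [("B", 10, 100, ["rain"]), ("C", 15, 100, [])],
    "B": [("D", 12, 80, ["congestion", "rain"])],
    "C": [("D", 10, 80, [])],
    "D": [],
}

# Velocity reduction rate per context (fraction of speed lost).
reduction = {"congestion": 0.4, "work": 0.3, "accident": 0.6, "rain": 0.2}

def travel_time(dist, speed, contexts):
    # Compose reductions multiplicatively when contexts co-occur.
    factor = 1.0
    for c in contexts:
        factor *= 1.0 - reduction.get(c, 0.0)
    return dist / (speed * max(factor, 0.05))  # hours; floor avoids /0

def shortest_time_path(src, dst):
    """Dijkstra over travel time instead of raw distance."""
    pq, best, prev = [(0.0, src)], {src: 0.0}, {}
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            break
        if t > best.get(u, float("inf")):
            continue
        for v, dist, speed, ctx in graph[u]:
            nt = t + travel_time(dist, speed, ctx)
            if nt < best.get(v, float("inf")):
                best[v], prev[v] = nt, u
                heapq.heappush(pq, (nt, v))
    # Back-track the path, as in the paper's third step.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], best[dst]

print(shortest_time_path("A", "D"))  # context-aware shortest-time route
```

In this toy network, the distance-based shortest path (A-B-D, 22 km) loses to the context-aware choice (A-C-D, 25 km) once congestion and rain discount the speeds, which is exactly the effect the paper measures.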

An Interface Technique for Avatar-Object Behavior Control using Layered Behavior Script Representation (계층적 행위 스크립트 표현을 통한 아바타-객체 행위 제어를 위한 인터페이스 기법)

  • Choi Seung-Hyuk;Kim Jae-Kyung;Lim Soon-Bum;Choy Yoon-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.9
    • /
    • pp.751-775
    • /
    • 2006
  • In this paper, we suggested an avatar control technique using high-level behaviors. We separated behaviors into three levels according to their level of abstraction and defined layered scripts. Layered scripts provide the user with control over avatar behaviors at the abstract level as well as reusability of scripts. As the 3D environment gets more complicated, the number of required avatar behaviors increases accordingly, and controlling avatar-object behaviors becomes even more challenging. To solve this problem, we embed avatar behaviors into each environment object, which informs how the avatar can interact with that object. Even with a large number of environment objects, our system can manage avatar-object interactions in an object-oriented manner. Finally, we suggest an easy-to-use interface technique that allows the user to control avatars through context menus: using the avatar behavior information embedded in the object, the system analyzes the object state and filters the behaviors, so the context menu shows only the behaviors the avatar can currently perform, and the user need not track the object or avatar state. We built a virtual presentation environment and applied our model to the system.
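A minimal sketch of the object-embedded behavior idea, with hypothetical class and behavior names (the paper's actual script syntax is not given in the abstract): each environment object carries the avatar behaviors it supports, and the context menu is populated by filtering those behaviors against the object's current state.

```python
# Sketch of object-embedded avatar behaviors with state-based filtering,
# as used to populate the context menu. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Behavior:
    name: str
    required_state: str  # object state in which this behavior is valid

@dataclass
class EnvironmentObject:
    name: str
    state: str
    behaviors: list = field(default_factory=list)  # embedded avatar behaviors

    def context_menu(self):
        """Filter embedded behaviors by the object's current state."""
        return [b.name for b in self.behaviors if b.required_state == self.state]

door = EnvironmentObject(
    name="door", state="closed",
    behaviors=[Behavior("open", "closed"), Behavior("close", "open"),
               Behavior("knock", "closed")],
)

print(door.context_menu())   # ['open', 'knock'] -> shown to the user
door.state = "open"
print(door.context_menu())   # ['close']
```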

A Study on Advancing Strategy for National Environmental Geographic Informations - Focused on the National Environmental Assessment Map, Ecological Map and Land Cover Map - (국토환경지리정보 고도화 전략 연구 - 국토환경성평가지도, 생태자연도, 토지피복지도를 중심으로 -)

  • Lee, Chong-Soo
    • Journal of Environmental Policy
    • /
    • v.6 no.2
    • /
    • pp.97-122
    • /
    • 2007
  • In 2006, the Ministry of Environment of the Republic of Korea completed the construction of national environmental geographic information, including the National Environmental Assessment Map, the Ecological Map, and the Land Cover Map. At this point, it is necessary to establish an advancement strategy for national environmental geographic information that takes its complicated characteristics into account. This study therefore suggests such a strategy, reflecting an analysis of the given conditions and of informatization trends. National environmental geographic information has the particular characteristic of being managed in a dispersed manner, by department or by operational unit. Because of this, the requirements of users who need policy support based on national environmental geographic information and composite information are not satisfied, and the information system centered on administrative processes should be converted into one that puts decision support first. This study therefore sets "the realization of a sustainable land management system by advancing national environmental geographic information" as the vision of the strategy. To accomplish this vision, the study establishes the following goals: constructing strategic, knowledge-based geographic information; laying the foundation for opening information to the public transparently; building expanded and integrated national environmental geographic information; embodying environmental administration based on national environmental geographic information; enhancing the efficiency of national environmental geographic information; and efficiently supporting administrative processes. The study also suggests executive plans to achieve the vision and goals: developing a quality control program to verify information reliability, building a system to integrate and provide environmental information, collecting information, readjusting laws and institutions concerning the construction, application, and management of the system, and operating the task processes, human resources, organization, and information technology. This study emphasizes providing a policy turning point, advancing from the quantitative expansion of information toward assurance of its qualitative reliability. However, this strategy does not reflect all national environmental geographic information or the current status of environmental administration. Therefore, to apply the results of this study to actual environmental administration, it will be necessary to regularly discuss the systematic categorization of national environmental geographic information, to consult with the relevant parties, and so on.


Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • Technological development in artificial intelligence has been rapidly accelerating with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a lot of effort from experts. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into RDF triple structure. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through this proposed process, it is possible to utilize structured knowledge extracted from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort experts spend constructing instances according to the ontology schema.
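The BIO-tagging step can be illustrated with a minimal sketch: given a sentence and a known infobox value for some attribute, mark the value's tokens with B-/I- tags so a sequence model (CRF or Bi-LSTM-CRF) can learn to extract it. The sentence, attribute, and value below are hypothetical.

```python
# Sketch of BIO-tag training-data generation: mark where a known infobox
# value occurs in a sentence so a sequence model can learn to extract it.
def bio_tag(tokens, value_tokens, attribute):
    """Tag tokens as B-/I-<attribute> where the value occurs, else O."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = f"B-{attribute}"
            for j in range(i + 1, i + n):
                tags[j] = f"I-{attribute}"
            break
    return list(zip(tokens, tags))

sentence = "Barack Obama was born in Honolulu Hawaii".split()
print(bio_tag(sentence, ["Honolulu", "Hawaii"], "birthPlace"))
# [('Barack','O'), ..., ('Honolulu','B-birthPlace'), ('Hawaii','I-birthPlace')]
```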

Digital Humanities, and Applications of the "Successful Exam Passers List" (과거 합격자 시맨틱 데이터베이스를 활용한 디지털 인문학 연구)

  • LEE, JAE OK
    • (The)Study of the Eastern Classic
    • /
    • no.70
    • /
    • pp.303-345
    • /
    • 2018
  • This article discusses how the Bangmok (榜目) documents, which are essentially lists of successful passers of the civil competitive examination system of the Chosŏn dynasty, can serve, when rendered into digital formats, as a source of information that not only tells us about Chosŏn individuals' social backgrounds and bloodlines but also enables us to understand the intricate nature of the Yangban network. In digital humanities studies, the Bangmok materials, literally lists of the leading elites of the Chosŏn period, constitute a very interesting and important source of information. Based on these materials, we can see what the society, as well as the Yangban community, was like. Currently, all data inside these Bangmok lists are rendered in XML (eXtensible Markup Language) format and are served through a DBMS (Database Management System), so anyone who wants to examine the statistics can freely do so. Also, by connecting the data in the Bangmok materials with data from genealogy records, we can identify an individual's marital relationships, home town, and political affiliation, and therefore create a complex narrative that effectively describes that individual's life. The result is a graph database that, once Bangmok data is entered, shows successful passers as individual nodes and displays blood and marital relations in a very visible way. Clicking on the nodes provides access to all kinds of relationships formed among more than 90 thousand successful passers, and even to the overall marital network once the genealogical data is input. In Korea, from 2005 through now, the task of digitizing data from the Civil exam Bangmok (Mun-gwa Bangmok), the Military exam Bangmok (Mu-gwa Bangmok), the "Sa-ma" Bangmok, and the "Jab-gwa" Bangmok materials has been completed. They can be accessed through a website (http://people.aks.ac.kr/index.aks) that has information on numerous notable Korean individuals of the past. With this kind of source, we are now able to extract professional Jung-in figures from these lists. However, meaningful and practical studies using this data are yet to be announced. This article argues that this information should be used as a window through which we can see not only the lives of individuals but also the society.
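A minimal sketch of such a graph database's core structure, using the networkx library with hypothetical names and relations (not records from the actual Bangmok database): passers are nodes, blood or marital ties are typed edges, and the kinship network exposed by clicking a node is simply the connected component around it.

```python
# Sketch of the passer-relationship graph: exam passers as nodes, blood
# and marital relations as typed edges. All names/relations hypothetical.
import networkx as nx

G = nx.Graph()
G.add_node("Kim Mun-su", exam="Mun-gwa", year=1588)
G.add_node("Yi Seong-ho", exam="Mun-gwa", year=1592)
G.add_node("Kim Yeong-su", exam="Sa-ma", year=1610)

G.add_edge("Kim Mun-su", "Kim Yeong-su", relation="father-son")
G.add_edge("Kim Mun-su", "Yi Seong-ho", relation="son-in-law")

# Everything reachable from one passer through blood/marital ties,
# i.e., the kinship network a node click would expose.
print(nx.node_connected_component(G, "Kim Mun-su"))
for u, v, d in G.edges(data=True):
    print(u, "--", d["relation"], "--", v)
```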

Recommender system using BERT sentiment analysis (BERT 기반 감성분석을 이용한 추천시스템)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.1-15
    • /
    • 2021
  • When it is difficult for us to make decisions, we ask for advice from friends or the people around us; when we decide to buy products online, we read anonymous reviews before buying. With the advent of the data-driven era, the development of IT is producing vast amounts of data, from individuals to objects. Companies and individuals have accumulated, processed, and analyzed such large amounts of data that they can now make decisions directly from data where they once depended on experts. Nowadays, the recommender system plays a vital role in determining users' preferences for purchasing goods, and web services (Facebook, Amazon, Netflix, Youtube) use recommender systems to induce clicks. For example, Youtube's recommender system, used by 1 billion people worldwide every month, draws on videos that users have "liked" and videos they have watched. Recommender system research is deeply linked to practical business, so many researchers are interested in building better solutions. Recommender systems use the information obtained from their users to generate recommendations, because developing them requires information on the items each user is likely to prefer. Through recommender systems, we have come to trust patterns and rules derived from data rather than empirical intuition, and the growing capacity of data has pushed machine learning toward deep learning. However, such recommender systems are not a complete solution: they require sufficient data without scarcity, as well as detailed information about the individual, and they work correctly only when these conditions hold. Recommendation becomes a complex problem for both consumers and sellers when the interaction log is insufficient, because sellers need to make recommendations to consumers at a personal level, and consumers need to receive appropriate recommendations backed by reliable data. In this paper, to improve accuracy and deliver "appropriate recommendations" to consumers, a recommender system is proposed in combination with context-based deep learning. This research combines user-based data to create a hybrid recommender system. The hybrid approach developed is not a purely collaborative recommender system, but a collaborative extension that integrates user data with deep learning. Customer review data were used as the data set: consumers buy products in online shopping malls and then write product reviews, and ratings from buyers who have already purchased give users confidence before purchasing a product. However, recommender systems mainly use scores or ratings, rather than reviews, to suggest items purchased by many users. In fact, consumer reviews contain product opinions and user sentiment that bear on evaluation, and by incorporating these, this paper aims to improve the recommender system. The proposed algorithm is for use when individuals have difficulty selecting an item; consumer reviews and record patterns make it possible to rely on recommendations appropriately. The algorithm implements a recommendation system through collaborative filtering, and predictive accuracy is measured by Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). Netflix has strategically used its recommender system in its programs, running yearly competitions to reduce RMSE and thereby making full use of predictive accuracy. Research on hybrid recommender systems that combine NLP approaches and deep learning for personalization has been increasing. Among NLP studies, sentiment analysis began to take shape in the mid-2000s as user review data increased. Sentiment analysis is a text classification task based on machine learning, but conventional machine learning-based sentiment analysis has the disadvantage that it struggles to capture the characteristics of text and therefore to identify the information expressed in a review. In this study, we propose a deep learning recommender system that utilizes BERT's sentiment analysis to minimize these disadvantages of machine learning. The comparison models were recommender systems based on Naive-CF (collaborative filtering), SVD (singular value decomposition)-CF, MF (matrix factorization)-CF, BPR-MF (Bayesian personalized ranking matrix factorization)-CF, LSTM, CNN-LSTM, and GRU (Gated Recurrent Units). In the experiments, the BERT-based recommender system performed best.
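One plausible reading of the hybrid idea, sketched below under assumptions not confirmed by the abstract: blend a collaborative-filtering rating estimate with a sentiment score derived from the user's review via a pretrained BERT-family classifier (here the Hugging Face transformers sentiment pipeline), then evaluate with RMSE and MAE. The blend weight, ratings, and reviews are hypothetical, and the paper's actual architecture may differ.

```python
# Sketch of a hybrid prediction: blend a CF rating estimate with a BERT
# sentiment score from the review text, then score with RMSE/MAE.
# All ratings, reviews, and the blend weight are hypothetical.
import numpy as np
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English BERT-family model

def sentiment_rating(review, scale=5.0):
    """Map a POSITIVE/NEGATIVE confidence onto the 1..scale rating range."""
    out = sentiment(review)[0]
    p_pos = out["score"] if out["label"] == "POSITIVE" else 1.0 - out["score"]
    return 1.0 + (scale - 1.0) * p_pos

def hybrid_predict(cf_rating, review, alpha=0.7):
    """Weighted blend of the CF prediction and the review sentiment."""
    return alpha * cf_rating + (1 - alpha) * sentiment_rating(review)

cf_preds = [4.2, 2.8, 3.9]                      # hypothetical CF outputs
reviews  = ["Great quality, would buy again.",
            "Broke after two days, disappointing.",
            "Does the job, nothing special."]
actual   = np.array([4.5, 2.0, 3.5])

preds = np.array([hybrid_predict(r, t) for r, t in zip(cf_preds, reviews)])
print("RMSE:", np.sqrt(np.mean((preds - actual) ** 2)))
print("MAE: ", np.mean(np.abs(preds - actual)))
```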

Development of Electret to Improve Output and Stability of Triboelectric Nanogenerator (마찰대전 나노발전기의 출력 및 안정성 향상을 위한 일렉트렛 개발)

  • Kam, Dongik;Jang, Sunmin;Yun, Yeongcheol;Bae, Hongeun;Lee, Youngjin;Ra, Yoonsang;Cho, Sumin;Seo, Kyoung Duck;Cha, Kyoung Je;Choi, Dongwhi
    • Korean Chemical Engineering Research
    • /
    • v.60 no.1
    • /
    • pp.93-99
    • /
    • 2022
  • With the rapid development of ultra-small and wearable device technology, a continuous electricity supply without spatiotemporal limitations is required for driving electronic devices. Accordingly, the triboelectric nanogenerator (TENG), which utilizes static electricity generated by the contact and separation of two different materials, is being used as a means of effectively harvesting various types of dispersed energy; thanks to its simple principle, it requires no complex processes or designs. However, to apply the TENG in real life, its electrical output must be increased, and the stable generation of that output, not just its increase, is a problem that must be solved for the commercialization of TENG. In this study, we proposed a method that not only improves the output of the TENG but also maintains the improved output stably. This was achieved by using the contact layer, one of the components of the TENG, as an electret for improved output and stability. The electret was manufactured by sequentially performing corona charging, thermal annealing, and corona charging on a fluorinated ethylene propylene (FEP) film. Charges artificially injected by corona charging enter deep traps during thermal annealing, so an electret that minimizes charge escape was fabricated and used in the TENG. The output performance of the manufactured electret was verified by measuring the voltage output of the TENG in vertical contact-separation mode: the corona-charged electret showed an output voltage 12 times higher than that of the pristine FEP film. The temporal and humidity stability of the electret was confirmed by measuring the output voltage of the TENG after exposing the electret to an ordinary external environment and to an extremely humid environment. In addition, real-life applicability was demonstrated by operating LEDs with a clap-motif TENG incorporating the electret.

Research on the Measures and Driving Force behind the Three Major Works of Daesoon Jinrihoe in North Korea in Case of the Respective Types of Unification on the Korean Peninsula (한반도 통일 유형별 북한지역의 대순진리회 3대 중요사업 추진 여건과 방안 연구)

  • Park, Young-taek
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.39
    • /
    • pp.137-174
    • /
    • 2021
  • The main theme of this paper is how to promote the Three Major Works of Daesoon Jinrihoe (charity aid, social welfare, and education projects) during a period of unification. Determining the best methods of promotion is crucial because the Three Major Works must be carried out after unification and must remain based on the practice of the philosophy of Haewon-sangsaeng (the Resolution of Grievances for Mutual Beneficence), an idea in line with the preamble of the U.N. Charter and the aim of world peace. North Korean residents are suffering from starvation under a devastated economy, which is certain to face a crisis of material deficiency during reunification. In this study, the peaceful unification of Germany, the unification of Yemen amid a period of sudden change, and the militarized unification of Vietnam were taken as case studies to diagnose and analyze the conditions that would affect the implementation of the Three Major Works. All three styles of unification required a considerable budget and other forms of support to carry out such works. In particular, unification after a period of sudden change would require solutions for food, shelter, and medical support, due to the loss of numerous lives and the destruction of infrastructure. The UNHCR model was also analyzed for its implications: securing well-prepared and sufficiently qualified professionals, reorganizing standard organizations for complex situations, directing tasks, preparing sufficient relief goods, budgeting, securing bases in areas bordering North Korea, and establishing networks for sponsorship. Based on this, eight detailed tasks in the field of system construction were identified that the operators of the Three Major Works could use to prepare for unification. Additionally, nine tasks for review were presented in consideration of the timing of unification and the current situation between South and North Korea. In conclusion, in the event of unification, the Three Major Works should not be neglected during the transition period. A manual on the "Three Major Works during the Unification Period" should include strategic points on organizational formation and mission implementation, forward bases and base operation, security and logistics preparation, public relations and external cooperation, safety measures, and transportation and contact systems.