• Title/Summary/Keyword: FLOW model


Research on the Ethical Characteristics of 'Mutual Beneficence' Shown in the Principle of 'Guarding against Self-deception' in Daesoon Thought: in Comparison to Kantian and Utilitarian Ethical Views (대순사상의 무자기(無自欺)에 나타난 상생윤리 - 칸트와 밀의 윤리관과의 대비를 중심으로 -)

  • Kim, Tae-soo
    • Journal of the Daesoon Academy of Sciences
    • /
    • v.27
    • /
    • pp.283-317
    • /
    • 2016
  • This research details the multi-layered ethical characteristics of 'mutual beneficence' shown in the principle of 'guarding against self-deception' in Daesoon Thought, focusing on its major differences from, as well as its similarities with, Kantian and Utilitarian ethical views. In these Western ethical perspectives, the concept of self-deception has received considerable attention, centering on the context of natural rights and contract theory. In Daesoon Thought, meanwhile, 'guarding against self-deception' is presented as one of the principal objectives as well as the method, or deontological ground, for practice. It further encompasses the features of a virtue ethics oriented toward the perfection of Dao. Here, the deontological aspect is interlinked with the concept of cultivation and the pursuit of ethics and morals. This makes it a necessary condition for achieving the perfection of Dao, and likewise renders the practice of 'guarding against self-deception' more active by facilitating mutual relations based on an expansion model wherein human nature is characterized as possessing innate goodness. With regard to the tenet of 'resolution of grievances for mutual beneficence,' this concept is presented as a positive ground for practicing virtues toward others without forming grudges. Furthermore, insofar as it reveals the great principle of humanity built on conscience, it comes to harmonize practitioners with others and with spirits in an expression of beneficence. Moreover, originating in the Dao of Deities, guarding against self-deception is expressed as a form of life ethics and can be suggested as a new alternative to the model of virtue ethics proposed by Nussbaum. All in all, there is a natural causal relationship by which 'guarding against self-deception' in accordance with one's own conscience and the principle of humanity, as a pursuit of perfect virtue in Dao, results in the fulfillment of mutual beneficence. This is readily akin to how gravity causes water to flow from high ground to low ground. Consequently, these relational features of mutual beneficence can serve as an effective alternative to Western ethical views that likewise address the need to overcome the egoistic mind, which is liable to self-interest and alienation.

The Relationship between Cultural Self-construal of Korean and Alexithymia: A Serial Mediation Process Model of Ambivalence over Emotion Expression and Emotion Suppression Moderated by Generation (한국인의 문화적 자기관과 감정표현불능증의 관계: 세대에 의해 조절된 정서표현양가성 및 정서억제 연속매개과정 모형)

  • Haejin Kim;Soyoung Kwon;Sunho Jung;Donghoon Lee
    • Korean Journal of Culture and Social Issue
    • /
    • v.29 no.2
    • /
    • pp.171-197
    • /
    • 2023
  • Traditional Korean society has been classified as an Eastern collectivist culture, but in the flow of globalization and digitalization, along with the post-Cold War era beginning in the 1970s, Western individualistic culture and values quickly permeated the younger Korean generation. Since these rapid changes occurred within a short period, there may be differences in cultural self-construal between generations living in the same era, and psychological problems related to emotional expression and suppression may accordingly differ across generations. In the current study, 1,000 Korean adult men and women in their 20s to 60s were therefore surveyed on their levels of independent and interdependent self-construal, alexithymia, ambivalence over emotional expression (AEE), and emotional suppression (ES). We then examined the relationship between the variables (self-construal and alexithymia) and the mediating process of AEE and ES. Participants were divided by generation into an industrialization cohort (birth year before 1970) and a digitalization cohort (birth year 1970 or later). Using the PROCESS macro (Hayes, 2022), we tested a serial mediation model of AEE and ES between relative independent self-construal (RIS) and alexithymia. The results indicate that as RIS decreases, the level of alexithymia increases through serial increases in AEE and ES. Next, we examined the moderating effect of generation on the mediation process of AEE and ES, and found that generation moderates the relationship between ES and alexithymia: the effect of ES on alexithymia is significant for the digitalization cohort, but not for the industrialization cohort.
The current results imply that emotion regulation strategies of Koreans have been differently developed according to prevailing cultural values in each generation, and that the negative influence of emotion suppression could be different according to the cultural background of each generation.
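The serial mediation structure described above (RIS → AEE → ES → alexithymia) can be sketched without the PROCESS macro: the serial indirect effect is the product of the path coefficients a1 (RIS → AEE), d21 (AEE → ES, controlling for RIS), and b2 (ES → alexithymia, controlling for RIS and AEE). The sketch below is a minimal illustration on synthetic data with hypothetical path values; the variable names mirror the abstract, not the study's actual dataset or estimates.

```python
import random

def ols(X, y):
    """Least squares with intercept via normal equations (Gaussian elimination)."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):  # forward elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p], c[i], c[p] = A[p], A[i], c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
            c[r] -= f * c[i]
    beta = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        beta[i] = (c[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta  # [intercept, slopes...]

# Synthetic data: lower RIS raises AEE, AEE raises ES, ES raises alexithymia.
# Path values (-0.5, 0.6, 0.7) are hypothetical, chosen only for illustration.
random.seed(0)
ris = [random.gauss(0, 1) for _ in range(500)]
aee = [-0.5 * x + random.gauss(0, 0.3) for x in ris]
es = [0.6 * m1 + 0.1 * x + random.gauss(0, 0.3) for x, m1 in zip(ris, aee)]
alx = [0.7 * m2 + 0.05 * m1 + random.gauss(0, 0.3) for m1, m2 in zip(aee, es)]

a1 = ols([[x] for x in ris], aee)[1]                 # RIS -> AEE
d21 = ols(list(zip(ris, aee)), es)[2]                # AEE -> ES | RIS
b2 = ols(list(zip(ris, aee, es)), alx)[3]            # ES -> alexithymia | RIS, AEE
serial_indirect = a1 * d21 * b2
print(round(serial_indirect, 3))  # negative: lower RIS -> higher alexithymia
```

In practice the indirect effect would be tested with bootstrap confidence intervals, as the PROCESS macro does; here the point estimate alone illustrates the serial chain.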

Semantic Visualization of Dynamic Topic Modeling (다이내믹 토픽 모델링의 의미적 시각화 방법론)

  • Yeon, Jinwook;Boo, Hyunkyung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.131-154
    • /
    • 2022
  • Recently, research on unstructured data analysis has been actively conducted alongside the development of information and communication technology. In particular, topic modeling is a representative technique for discovering core topics in massive text data. In the early stages of topic modeling, most studies focused only on topic discovery. As the field matured, studies began to examine how topics change over time, and interest in dynamic topic modeling, which handles changes in the keywords constituting each topic, is accordingly increasing. Dynamic topic modeling identifies major topics in the data of the initial period and manages the change and flow of topics by utilizing topic information from the previous period to derive the topics of subsequent periods. However, the results of dynamic topic modeling are very difficult to understand and interpret: traditional results simply reveal changes in keywords and their rankings, which is insufficient to represent how the meaning of a topic has changed. Therefore, in this study, we propose a method to visualize topics by period that reflects the meaning of the keywords in each topic, together with a method for intuitively interpreting changes in topics and the relationships among them. The detailed procedure is as follows. In the first step, dynamic topic modeling is applied to derive the top keywords of each period and their weights from text data. In the second step, we obtain vectors for the top keywords of each topic from a pre-trained word embedding model and perform dimension reduction on the extracted vectors. We then formulate a semantic vector for each topic by computing the weighted sum of the keyword vectors, using each keyword's topic weight. In the third step, we visualize the semantic vector of each topic using matplotlib and analyze the relationships among topics based on the visualized result. The change of a topic can be interpreted as follows: from the dynamic topic modeling results, we identify the top five rising and top five falling keywords for each period to show how the topic changes. Many existing topic visualization studies visualize the keywords of each topic; the approach proposed in this study differs in that it attempts to visualize each topic itself. To evaluate the practical applicability of the proposed methodology, we performed an experiment on 1,847 abstracts of artificial intelligence-related papers, divided into three periods (2016-2017, 2018-2019, 2020-2021). We selected seven topics based on the consistency score and utilized a pre-trained Word2vec embedding model trained on 'Wikipedia', an Internet encyclopedia. Based on the proposed methodology, we generated a semantic vector for each topic and, by reflecting the meaning of its keywords, visualized and interpreted the topics by period. These experiments confirmed that the rise and fall of a keyword's topic weight can be usefully employed to interpret the semantic change of the corresponding topic and to grasp the relationships among topics. In this study, to overcome the limitations of dynamic topic modeling results, we used word embedding and dimension reduction techniques to visualize topics by period. The results are meaningful in that they broaden the scope of topic understanding through the visualization of dynamic topic modeling results. In addition, an academic contribution can be acknowledged in that this work lays the foundation for follow-up studies that use various word embeddings and dimensionality reduction techniques to improve the performance of the proposed methodology.
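The semantic-vector construction in the second step amounts to a weighted sum of keyword embedding vectors, using each keyword's topic weight. The sketch below uses a toy, hand-made 2-D "embedding" table as a stand-in for dimension-reduced Word2vec vectors; all keywords and weights are hypothetical.

```python
# Hypothetical pre-reduced keyword vectors (real ones would come from Word2vec
# followed by dimension reduction to 2-D for plotting).
embedding = {
    "neural":   (0.9, 0.2),
    "network":  (0.8, 0.3),
    "training": (0.7, 0.1),
    "ethics":   (0.1, 0.9),
}

def topic_vector(keyword_weights):
    """Semantic vector of a topic: weighted sum of its keywords' embeddings,
    normalized by the total topic weight."""
    total = sum(keyword_weights.values())
    dims = len(next(iter(embedding.values())))
    vec = [0.0] * dims
    for word, weight in keyword_weights.items():
        for d, component in enumerate(embedding[word]):
            vec[d] += (weight / total) * component
    return tuple(vec)

# A topic from one period: top keywords with their (hypothetical) topic weights
t1 = topic_vector({"neural": 0.5, "network": 0.3, "training": 0.2})
print(t1)
```

Plotting each period's topic vector (e.g. with `matplotlib.pyplot.scatter`) then shows topic drift as movement in the reduced embedding space.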

Contrast Media in Abdominal Computed Tomography: Optimization of Delivery Methods

  • Joon Koo Han;Byung Ihn Choi;Ah Young Kim;Soo Jung Kim
    • Korean Journal of Radiology
    • /
    • v.2 no.1
    • /
    • pp.28-36
    • /
    • 2001
  • Objective: To provide a systematic overview of the effects of various parameters on contrast enhancement within the same population, an animal experiment as well as a computer-aided simulation study was performed. Materials and Methods: In the animal experiment, single-level dynamic CT through the liver was performed at 5-second intervals just after the injection of contrast medium for 3 minutes. Combinations of three different amounts (1, 2, 3 mL/kg), concentrations (150, 200, 300 mgI/mL), and injection rates (0.5, 1, 2 mL/sec) were used. The CT number of the aorta (A), portal vein (P) and liver (L) was measured in each image, and time-attenuation curves for A, P and L were thus obtained. The degree of maximum enhancement (Imax) and time to reach peak enhancement (Tmax) of A, P and L were determined, and times to equilibrium (Teq) were analyzed. In the computer-aided simulation model, a program based on the amount, flow, and diffusion coefficient of body fluid in various compartments of the human body was designed. The input variables were the concentrations, volumes and injection rates of the contrast media used. The program generated the time-attenuation curves of A, P and L, as well as liver-to-hepatocellular carcinoma (HCC) contrast curves. On each curve, we calculated and plotted the optimal temporal window (the time period above the lower threshold, which in this experiment was 10 Hounsfield units), the total area under the curve above the lower threshold, and the area within the optimal range. Results: A. Animal Experiment: At a given concentration and injection rate, an increased volume of contrast medium led to increases in Imax A, P and L. In addition, Tmax A, P, L and Teq were prolonged in parallel with increases in injection time. The time-attenuation curve shifted upward and to the right. For a given volume and injection rate, an increased concentration of contrast medium increased the degree of aortic, portal and hepatic enhancement, though Tmax A, P and L remained the same; the time-attenuation curve shifted upward. For a given volume and concentration of contrast medium, changes in the injection rate had a prominent effect on aortic enhancement; portal vein and hepatic parenchymal enhancement also increased somewhat, though the effect was less prominent. An increase in the rate of contrast injection shifted the time-enhancement curve to the left and upward. B. Computer Simulation: At a faster injection rate, there was minimal change in the degree of hepatic attenuation, though the duration of the optimal temporal window decreased. The area between 10 and 30 HU was greatest when contrast medium was delivered at a rate of 2-3 mL/sec. Although the total area under the curve increased in proportion to the injection rate, most of this increase was above the upper threshold, and thus the temporal window narrowed and the optimal area decreased. Conclusion: Increases in volume, concentration and injection rate all resulted in improved arterial enhancement. If cost is disregarded, increasing the injection volume is the most reliable way of obtaining good quality enhancement. The optimal way of delivering a given amount of contrast medium can be calculated using a computer-based mathematical model.
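The simulation's key quantities can be sketched directly from sampled time-attenuation points: the optimal temporal window is the time the curve spends above the 10 HU lower threshold, and the optimal area is the (trapezoidal) area of the curve clipped to the 10-30 HU band. The sample curve below is hypothetical, not data from the study.

```python
def temporal_window(samples, lower=10.0):
    """Duration (s) over which enhancement stays above the lower threshold,
    summed over sampling intervals whose endpoints both exceed it."""
    return sum(t2 - t1 for (t1, h1), (t2, h2) in zip(samples, samples[1:])
               if h1 >= lower and h2 >= lower)

def area_in_band(samples, lower=10.0, upper=30.0):
    """Trapezoidal area of the enhancement curve clipped to the [lower, upper]
    band (an approximation: endpoints are clipped rather than interpolated)."""
    area = 0.0
    for (t1, h1), (t2, h2) in zip(samples, samples[1:]):
        c1 = min(max(h1, lower), upper) - lower
        c2 = min(max(h2, lower), upper) - lower
        area += 0.5 * (c1 + c2) * (t2 - t1)
    return area

# Hypothetical liver time-attenuation samples at 5-second intervals
# (time in seconds, enhancement in HU above baseline)
curve = [(0, 0), (5, 4), (10, 12), (15, 25), (20, 35), (25, 28), (30, 14), (35, 6)]
print(temporal_window(curve))  # seconds spent above the 10 HU threshold
print(area_in_band(curve))     # HU-seconds within the 10-30 HU band
```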


A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.23-46
    • /
    • 2017
  • Although the valuation of specific companies or projects has been practiced since the early 2000s, mainly in the developed countries of North America and Europe, systems and methodologies for estimating the economic value of individual technologies or patents have only gradually become established. There are, of course, several online systems that qualitatively evaluate a technology's grade or patent rating, such as 'KTRS' of the KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. However, a web-based technology valuation system, the 'STAR-Value system', which calculates the quantitative value of a subject technology for various purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is now spreading. In this study, we introduce the types of methodology and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, the system offers six valuation methods, including the discounted cash flow (DCF) method, a representative income-approach method that discounts anticipated future economic income to present value, and the relief-from-royalty method, which calculates the present value of royalties, where the royalty rate represents the contribution of the subject technology to the business value created. We examine how these models and their supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications such as the International Patent Classification (IPC) or Korea Standard Industry Classification (KSIC) of the technology to be evaluated, the STAR-Value system automatically returns metadata such as technology cycle time (TCT), the sales growth rate and profitability of similar companies or industry sectors, the weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the calculated technology value has high reliability and objectivity. Furthermore, if information on the potential market size of the target technology and the market share of the commercializing entity draws on data-driven sources, or if the estimated value ranges of similar technologies by industry sector are provided from completed evaluation cases accumulated in the database, the STAR-Value system is anticipated to present highly accurate value ranges in real time by intelligently linking its various support modules. Beyond the explanation of the various valuation models and their primary variables presented in this paper, the STAR-Value system aims to operate more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, and a market share prediction module based on similar company selection. In addition, this research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system that validates and applies the theoretical underpinnings of technology valuation in practice, and it is expected to be utilized in various fields of technology commercialization.
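Two of the six valuation methods named above reduce to short present-value computations: DCF discounts projected cash flows at a discount rate (such as the WACC), and relief-from-royalty discounts the royalties implied by applying a royalty rate to projected revenues over the technology's remaining life. A minimal sketch with hypothetical figures, not STAR-Value outputs:

```python
def dcf_value(cash_flows, wacc):
    """Discounted cash flow: present value of projected free cash flows,
    discounted one period per year at the given rate."""
    return sum(cf / (1 + wacc) ** t for t, cf in enumerate(cash_flows, start=1))

def relief_from_royalty(revenues, royalty_rate, discount_rate):
    """Present value of the royalties the owner is 'relieved' from paying;
    the royalty rate stands in for the technology's contribution."""
    royalties = [r * royalty_rate for r in revenues]
    return dcf_value(royalties, discount_rate)

# Hypothetical 5-year projections over the technology's remaining life
cash_flows = [120.0, 150.0, 180.0, 200.0, 210.0]   # free cash flow per year
revenues = [800.0, 900.0, 1000.0, 1050.0, 1100.0]  # revenue per year

print(round(dcf_value(cash_flows, wacc=0.12), 1))
print(round(relief_from_royalty(revenues, royalty_rate=0.03, discount_rate=0.12), 1))
```

A full valuation would also apply the adjustment factors the system derives (TCT-based technology life, industry profitability, industrial technology factor indices); the sketch shows only the core discounting step.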

A Study on the Use of GIS-based Time Series Spatial Data for Streamflow Depletion Assessment (하천 건천화 평가를 위한 GIS 기반의 시계열 공간자료 활용에 관한 연구)

  • YOO, Jae-Hyun;KIM, Kye-Hyun;PARK, Yong-Gil;LEE, Gi-Hun;KIM, Seong-Joon;JUNG, Chung-Gil
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.4
    • /
    • pp.50-63
    • /
    • 2018
  • Rapid urbanization has distorted the natural hydrological cycle. This change in the hydrological cycle structure is causing streamflow depletion and altering existing patterns of water resource use. To manage such phenomena, a streamflow depletion impact assessment technology that can forecast depletion is required. Building GIS-based spatial data as the fundamental input for such technology is indispensable, yet related research is scarce. Therefore, this study examined the use of GIS-based time series spatial data for streamflow depletion assessment. GIS data covering decades of change on a national scale were constructed for six streamflow depletion impact factors (weather, soil depth, forest density, road network, groundwater usage and land use), and the data were used as the basic input for a continuous hydrologic model. Focusing on these impact factors, the causes of streamflow depletion were analyzed over the time series. Then, using DrySAT, a distributed continuous hydrologic model, the annual runoff associated with each streamflow depletion impact factor was estimated and a depletion assessment was conducted. As a result, the baseline annual runoff was estimated at 977.9mm under the given weather conditions without considering the other factors. When separately considering the decrease in soil depth, the increase in forest density, road development, groundwater usage, and the change in land use and development, annual runoff was estimated at 1,003.5mm, 942.1mm, 961.9mm, 915.5mm, and 1,003.7mm, respectively. The results showed that the major causes of streamflow depletion were: lowered soil depth, which decreases infiltration volume and surface runoff, thereby decreasing streamflow; increased forest density, which decreases surface runoff; an expanded road network, which decreases sub-surface flow; increased groundwater use from indiscriminate development, which decreases baseflow; and increased impervious areas, which increase surface runoff. Each standard watershed was also graded for depletion, based on the definition of streamflow depletion and the defined grade ranges. Considering the weather, the decrease in soil depth, the increase in forest density, road development, groundwater usage, and the change in land use and development, the grades of depletion were 2.1, 2.2, 2.5, 2.3, 2.8, and 2.2, respectively. Among the five streamflow depletion impact factors other than rainfall, the change in groundwater usage showed the biggest influence on depletion, followed by the changes in forest density, road construction, land use, and soil depth. In conclusion, it is anticipated that a national streamflow depletion assessment system to be developed in the future will provide customized depletion management and prevention plans based on assessments of future changes in the six impact factors and the projected progress of depletion.
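Using the runoff figures reported above, each factor's influence can be read off as its departure from the weather-only baseline of 977.9mm. The sketch below computes those departures and picks the factor with the largest runoff change; note that the study itself grades depletion per standard watershed rather than ranking raw runoff deltas, so this is only an illustration of the reported numbers.

```python
BASELINE = 977.9  # weather-only annual runoff (mm), from the study

scenario_runoff = {          # annual runoff (mm) when each factor is considered
    "soil depth":      1003.5,
    "forest density":   942.1,
    "road network":     961.9,
    "groundwater use":  915.5,
    "land use":        1003.7,
}

# Departure of each scenario from the baseline; the sign shows whether the
# factor raises or lowers simulated annual runoff.
deltas = {k: round(v - BASELINE, 1) for k, v in scenario_runoff.items()}
biggest = max(deltas, key=lambda k: abs(deltas[k]))
print(deltas)
print(biggest)  # groundwater use shows the largest runoff change
```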

A Feasibility Study on GMC (Geo-Multicell-Composite) of the Leachate Collection System in Landfill (폐기물 매립시설의 배수층 및 보호층으로서의 Geo-Multicell-Composite(GMC)의 적합성에 관한 연구)

  • Jung, Sung-Hoon;Oh, Seungjin;Oh, Minah;Kim, Joonha;Lee, Jai-Young
    • Journal of the Korean Geosynthetics Society
    • /
    • v.12 no.4
    • /
    • pp.67-76
    • /
    • 2013
  • Landfills require special care due to the danger of polluting nearby surface water and groundwater through leakage of leachate. A geomembrane prevents leachate from leaking, but sharp wastes or landfill equipment can damage it, so a means of protecting the geomembrane is required. In Korea, in accordance with the Wastes Control Act as amended in 1999, protecting the geosynthetic liner on the slope of a landfill and installing a drainage layer to drain leachate smoothly became mandatory, and technologies that both protect the geomembrane and quickly drain leachate are being researched. Therefore, this research studies the leachate drainage and geomembrane protection functions of Geo-Multicell-Composite (GMC), in order to examine its applicability as a Leachate Collection, Removal and Protection System (LCRPs) on the slope above the landfill geomembrane, using a method of inserting a highly permeable filler into the drainage net. GMC's horizontal permeability coefficient is $8.0{\times}10^{-4}m^2/s$, satisfying the legal standard. The crushed gravel used as filler showed a vertical permeability of 5.0 cm/s and a puncture strength of 140.2 kgf. In a storm-drainage test using artificial rain on a GMC model facility, about 92~97% of the sprayed water penetrated without surface runoff, even at a maximum flow rate of 1,120 L/hr. In further studies, using recycled aggregate instead of crushed gravel is expected to increase resource reuse and reduce construction costs.

Effect of Light-Induced ROS Generation Unit on Inactivation of Foodborne Pathogenic Bacteria in Water (광유도 ROS 발생장치의 세척용수 중 식중독 세균에 대한 불활성화 효과)

  • Choi, Jaehyuk;Kim, Dawoon;Jung, Kyu-Seok;Roh, Eunjung;Ryu, Kyoung-Yul;Ryu, Jae-Gee
    • Journal of Food Hygiene and Safety
    • /
    • v.34 no.6
    • /
    • pp.583-590
    • /
    • 2019
  • Even as the consumption of fresh fruits and vegetables increases, food poisoning caused by foodborne pathogen contamination is not decreasing. To prevent the contamination of produce, a quick, easy, low-cost, environmentally safe disinfection method that does not affect produce freshness or quality is needed. This study demonstrates a new-concept circulating-water disinfection system that purifies water using a newly developed 'LED-PS (photosensitizer)-induced ROS generation unit'. Using various types of LED-PS-induced ROS generation units, we investigated the conditions for reducing the density of various pathogenic bacteria by more than 3 log CFU/mL in 1 hour, and analyzed the major operational factors affecting the density reduction achieved by the unit. The density reduction rate varied with the bacterial species: the units were highly effective in reducing the density of Bacillus cereus and Pectobacterium carotovorum subsp. carotovorum, but relatively less effective against foodborne bacteria such as Escherichia coli. In this circulating-water disinfection system, the density reduction effect tended to increase as the flow rate increased and the initial bacterial density decreased. As the amount of PS absorbed on the beads increased, the density reduction effect increased exponentially for some bacteria. Model 3280, a double cylindrical unit connecting two single cylindrical units, could completely inactivate more than 3 log CFU/mL of B. cereus and P. carotovorum subsp. carotovorum within 30 minutes of LED irradiation.
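The "more than 3 log CFU/mL" criterion used above is a log10 reduction in viable count. A minimal sketch with hypothetical before/after counts:

```python
import math

def log_reduction(n0, n1):
    """Log10 reduction in viable count (CFU/mL) after treatment."""
    return math.log10(n0 / n1)

# Hypothetical counts: 10^6 CFU/mL before, 5x10^2 CFU/mL after 1 h of treatment
before, after = 1e6, 5e2
r = log_reduction(before, after)
print(round(r, 2), r >= 3.0)  # meets the >3-log reduction criterion
```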

The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan;Jung, Woo-Jin;Lee, Sang-Yong Tom
    • Asia pacific journal of information systems
    • /
    • v.24 no.2
    • /
    • pp.233-253
    • /
    • 2014
  • Can the stock market really be predicted? Stock market prediction has attracted much attention from many fields, including business, economics, statistics, and mathematics. Early research on stock market prediction was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, stock markets are largely driven by new information rather than by present and past prices; being unpredictable, the stock market will follow a random walk. Despite these theories, Schumaker [2010] asserted that people keep trying to predict the stock market using artificial intelligence, statistical estimates, and mathematical models. Mathematical approaches include percolation methods, log-periodic oscillations and wavelet transforms to model future prices. Examples of artificial intelligence approaches that deal with optimization and machine learning are genetic algorithms, support vector machines (SVM) and neural networks. Statistical approaches typically predict the future using past stock market data. Recently, financial engineers have started to predict stock price movement patterns using SNS data. SNS is a place where people's opinions and ideas flow freely and affect others' beliefs. Through word-of-mouth in SNS, people share product usage experiences, subjective feelings, and the commonly accompanying sentiment or mood with others. An increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web. Opinion mining is a domain of data mining that extracts public opinions expressed in SNS. There have been many studies on opinion mining from Web sources such as product reviews, forum posts and blogs. In relation to this literature, we try to understand the effects of firms' SNS exposures on stock prices in Korea. Similarly to Bollen et al. [2011], we empirically analyze the impact of SNS exposures on stock return rates. We use Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics provides trends and public opinions in Twitter and blogs using natural language processing and analysis tools: it collects sentences circulating in Twitter in real time, breaks them down into word units, and extracts keywords. In this study, we classify firms' exposures in SNS into two groups: positive and negative. To test the correlation and causal relationship between SNS exposures and stock price returns, we first collect 252 firms' stock prices and the KRX100 index from the Korea Exchange (KRX) from May 25, 2012 to September 1, 2012. We also gather the public attitudes (positive, negative) toward these firms from Social Metrics over the same period. We conduct regression analysis between stock prices and the number of SNS exposures and, having checked the correlation between the two variables, perform a Granger causality test to determine the direction of causation. The results show that the number of total SNS exposures is positively related to stock market returns, as is the number of positive mentions. Conversely, the number of negative mentions has a negative relationship with stock market returns, but this relationship is not statistically significant; the impact of positive mentions is thus statistically bigger than that of negative mentions. We also investigate whether these impacts are moderated by industry type and firm size, and find that the SNS exposure impacts are bigger for IT firms than for non-IT firms, and bigger for small firms than for large firms. The Granger causality test shows that changes in stock price returns are caused by SNS exposures, while causation in the other direction is not significant. Therefore, the relationship between SNS exposures and stock prices exhibits unidirectional causality: the more a firm is exposed in SNS, the more its stock price is likely to increase, while stock price changes may not cause more SNS mentions.
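The Granger causality test used above asks whether lagged SNS exposures improve the prediction of returns beyond lagged returns alone, via an F-test comparing a restricted regression (past returns only) with an unrestricted one (past returns plus past exposures). The lag-1 sketch below uses synthetic series in which x leads y by construction; it illustrates the test, not the paper's estimation.

```python
import random

def ols_rss(X, y):
    """Fit y on X (with intercept) via normal equations; return residual SS."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for i in range(k):  # Gaussian elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p], c[i], c[p] = A[p], A[i], c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
            c[r] -= f * c[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (c[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return sum((yi - sum(b * r for b, r in zip(beta, row))) ** 2
               for row, yi in zip(rows, y))

def granger_f(x, y):
    """Lag-1 Granger test: does past x help predict y beyond past y? Returns F."""
    y_t, y_lag, x_lag = y[1:], y[:-1], x[:-1]
    rss_r = ols_rss([[yl] for yl in y_lag], y_t)       # restricted: y_t ~ y_{t-1}
    rss_u = ols_rss(list(zip(y_lag, x_lag)), y_t)      # unrestricted: + x_{t-1}
    n = len(y_t)
    return (rss_r - rss_u) / (rss_u / (n - 3))

# Synthetic series: x ("SNS exposure") leads y ("return") by one period
random.seed(1)
x = [random.gauss(0, 1) for _ in range(300)]
y = [0.0]
for t in range(1, 300):
    y.append(0.2 * y[t - 1] + 0.5 * x[t - 1] + random.gauss(0, 0.5))

print(granger_f(x, y) > granger_f(y, x))  # x Granger-causes y, not vice versa
```

In practice the F statistic is compared against an F(1, n-3) critical value, and lag length is chosen by an information criterion rather than fixed at one.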

Patent Production and Technological Performance of Korean Firms: The Role of Corporate Innovation Strategies (특허생산과 기술성과: 기업 혁신전략의 역할)

  • Lee, Jukwan;Jung, Jin Hwa
    • Journal of Technology Innovation
    • /
    • v.22 no.1
    • /
    • pp.149-175
    • /
    • 2014
  • This study analyzed the effect of corporate innovation strategies on patent production and, ultimately, on the technological change and new product development of firms in South Korea. The intent was to derive efficient strategies for enhancing the technological performance of firms. For the empirical analysis, three sources of data were combined: four waves of the Human Capital Corporate Panel Survey (HCCP) collected by the Korea Research Institute for Vocational Education and Training (KRIVET), corporate financial data obtained from the Korea Information Service (KIS), and corporate patent data provided by the Korean Intellectual Property Office (KIPO). The patent production function was estimated by zero-inflated negative binomial (ZINB) regression. The technological performance function was estimated by two-stage regression, taking into account the endogeneity of patent production, with an ordered logit model applied in the second stage. Empirical results confirmed the critical role of corporate innovation strategies in patent production and in facilitating the technological change and new product development of firms. In patent production, the firms' R&D investment and human resources were key determinants: higher R&D intensity led to more patents, though with decreasing marginal productivity, and a larger stock of registered patents led to a larger flow of new patents. Firms were more prolific in patent production when they had high-quality personnel, invested intensively in human resource development, and adopted a market-leading or fast-follower strategy rather than a stability strategy. In technological performance, the firms' human resources played a key role in accelerating technological change and new product development, and R&D intensity expedited new product development. Firms adopting a market-leading or fast-follower strategy had an advantage in technological performance over those with a stability strategy. Firms prolific in patent production were also advanced in terms of technological change and new product development. However, the nexus between patent production and technological performance measures was substantially reduced when controlling for the endogeneity of patent production. These results suggest that firms need to strengthen the linkage between patent production and technological performance, and adopt strategies that address each firm's capacities and needs.
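The ZINB model used for the patent production function mixes a point mass at zero (capturing firms that never patent) with a negative binomial count process. A minimal sketch of the ZINB probability mass function under the common NB2 parameterization; the parameter values are hypothetical, not the study's estimates.

```python
import math

def nb_pmf(k, mu, alpha):
    """Negative binomial pmf with mean mu and dispersion alpha (NB2 form):
    variance = mu + alpha * mu^2."""
    r = 1.0 / alpha
    p = r / (r + mu)
    return math.exp(math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
                    + r * math.log(p) + k * math.log(1 - p))

def zinb_pmf(k, mu, alpha, pi):
    """Zero-inflated NB: extra probability mass pi at zero for structural
    non-patenters, on top of the NB count process."""
    base = nb_pmf(k, mu, alpha)
    return pi + (1 - pi) * base if k == 0 else (1 - pi) * base

# Hypothetical parameters: mean 2 patents, dispersion 0.8, 30% structural zeros
probs = [zinb_pmf(k, mu=2.0, alpha=0.8, pi=0.3) for k in range(50)]
print(round(zinb_pmf(0, 2.0, 0.8, 0.3), 3))  # inflated zero probability
print(round(sum(probs), 3))                  # pmf sums to (approximately) one
```

Estimation then maximizes the log-likelihood of this pmf over firm-level covariates entering mu (via a log link) and pi (via a logit link), which is what a ZINB regression routine does.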