• Title/Summary/Keyword: a modeling


Determinants Affecting Organizational Open Source Software Switch and the Moderating Effects of Managers' Willingness to Secure SW Competitiveness (조직의 오픈소스 소프트웨어 전환에 영향을 미치는 요인과 관리자의 SW 경쟁력 확보의지의 조절효과)

  • Sanghyun Kim;Hyunsun Park
    • Information Systems Review
    • /
    • v.21 no.4
    • /
    • pp.99-123
    • /
    • 2019
  • The software industry is a high value-added industry in the knowledge information age, and its importance is growing as it not only plays a key role in knowledge creation and utilization but also secures global competitiveness. Among the various types of software available in today's business environment, Open Source Software (OSS) is rapidly expanding its reach, both leading software development and integrating with new information technologies. The purpose of this research is therefore to empirically examine the factors affecting the switch to OSS. To accomplish this purpose, we propose a research model based on the "Push-Pull-Mooring" framework and empirically examine two categories of antecedents of switching behavior toward OSS. A survey was administered to employees at various firms that had already switched to OSS. A total of 268 responses were collected and analyzed using structural equation modeling. The results are as follows. First, continuous maintenance cost, vendor dependency, functional indifference, and SW resource inefficiency are significantly related to the switch to OSS. Second, network-oriented support, testability, and strategic flexibility are significantly related to the switch to OSS. Finally, managers' willingness to secure SW competitiveness moderates the relationships between the push and pull factors (with the exception of improved knowledge) and the switch to OSS. These results contribute to OSS-related fields both theoretically and practically.
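
A minimal sketch of how such a PPM-style model with a moderating term might be estimated in Python with semopy; the construct names, the data file, and the single product-term moderation check are hypothetical simplifications, not the authors' actual specification.

```python
# Hedged sketch: PPM-style SEM with one illustrative moderation term.
# Construct columns and the file name are hypothetical placeholders.
import pandas as pd
import semopy

data = pd.read_csv("oss_survey.csv")  # assumed: one column per construct score
# Product term for moderation of a push factor by managers' willingness
data["COST_x_WILL"] = data["PUSH_COST"] * data["WILL"]

desc = """
SWITCH ~ PUSH_COST + PUSH_VENDOR_DEP + PULL_NET_SUPPORT + PULL_FLEXIBILITY
SWITCH ~ WILL + COST_x_WILL
"""

model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```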

Analysis of the Impact of Generative AI based on Crunchbase: Before and After the Emergence of ChatGPT (Crunchbase를 바탕으로 한 Generative AI 영향 분석: ChatGPT 등장 전·후를 중심으로)

  • Nayun Kim;Youngjung Geum
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.19 no.3
    • /
    • pp.53-68
    • /
    • 2024
  • Generative AI is receiving a lot of attention around the world, and ways to utilize it effectively in the business environment are being explored. In particular, since the public release of the ChatGPT service, built on the GPT-3.5 large language model developed by OpenAI, it has attracted even more attention and has had a significant impact on the entire industry. This study focuses on the emergence of generative AI, especially ChatGPT, to investigate its impact on the startup industry and to compare the changes that occurred before and after its appearance. It aims to shed light on the actual application and impact of generative AI in the business environment by examining in detail how generative AI is being used in the startup industry and analyzing the impact of ChatGPT's emergence on that industry. To this end, we collected company information on generative AI-related startups that appeared before and after the ChatGPT announcement and analyzed changes in industry, business content, and investment information. Through keyword analysis, topic modeling, and network analysis, we identified trends in the startup industry and how the introduction of generative AI has transformed it. We found that the number of startups related to generative AI has increased since the emergence of ChatGPT and, in particular, that the total and average funding for generative AI-related startups has increased significantly. We also found that various industries are attempting to apply generative AI technology, and that the development of services and products such as enterprise applications and SaaS using generative AI has been actively promoted, influencing the emergence of new business models. These findings confirm the impact of generative AI on the startup industry and contribute to our understanding of how the emergence of this innovative technology can change the business ecosystem.
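
The study's keyword and topic analysis could be sketched along the following lines; this is an illustrative pipeline with scikit-learn's LDA, and the file name, column name, and topic count are assumptions rather than the authors' setup.

```python
# Hedged sketch: topic modeling over startup descriptions.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

df = pd.read_csv("crunchbase_startups.csv")  # hypothetical export with a 'description' column

vec = CountVectorizer(stop_words="english", max_features=5000)
X = vec.fit_transform(df["description"].fillna(""))

lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(X)

# Print top words per topic; run separately on pre- and post-ChatGPT cohorts
terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[-8:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```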


Spatial effect on the diffusion of discount stores (대형할인점 확산에 대한 공간적 영향)

  • Joo, Young-Jin;Kim, Mi-Ae
    • Journal of Distribution Research
    • /
    • v.15 no.4
    • /
    • pp.61-85
    • /
    • 2010
  • Introduction: Diffusion is the process by which an innovation is communicated through certain channels over time among the members of a social system (Rogers 1983). Bass (1969) proposed the Bass model to describe the diffusion process. The Bass model assumes that potential adopters of an innovation are influenced by mass media and by word-of-mouth from previous adopters. Various extensions of the Bass model have been developed: some introduce a third factor affecting diffusion, while others propose multinational diffusion models that stress interactive effects on diffusion among several countries. We add a spatial factor to the Bass model as a third communication factor. Because we cannot control the interaction between markets, we need to allow diffusion within a given market to be influenced by diffusion in contiguous markets. The expansion of a particular retail format within a market can be described by the retail life cycle, and retail diffusion follows the three phases of spatial diffusion: adoption of the innovation happens first near the diffusion center, spreads to the vicinity of the diffusion center, and is completed in peripheral areas in the saturation stage. We therefore expect spatial effects to be important in describing the diffusion of domestic discount stores. We define a spatial diffusion model based on the multinational diffusion model and apply it to the diffusion of discount stores. Modeling: To define the spatial diffusion model, we extend the learning model (Kumar and Krishnan 2002) and separate the diffusion process in the diffusion center (market A) from the diffusion process in the vicinity of the diffusion center (market B). The proposed spatial diffusion model is given in equations (1a) and (1b), where (1a) describes diffusion in the diffusion center and (1b) describes diffusion in its vicinity: $$S_{i,t}=\left(p_i+q_i\frac{Y_{i,t-1}}{m_i}\right)(m_i-Y_{i,t-1}),\quad i\in\{1,\ldots,I\}\qquad(1a)$$ $$S_{j,t}=\left(p_j+q_j\frac{Y_{j,t-1}}{m_j}+\sum_{i=1}^{I}\gamma_{ij}\frac{Y_{i,t-1}}{m_i}\right)(m_j-Y_{j,t-1}),\quad j\in\{I+1,\ldots,I+J\}\qquad(1b)$$ We raise two research questions: (1) the proposed spatial diffusion model is more effective than the Bass model in describing the diffusion of discount stores; (2) the more similar the retail environment of the diffusion center is to that of the contiguous market, the larger the spatial effect of the diffusion center on diffusion in the contiguous market. To examine these questions, we first estimate the diffusion of discount stores with the Bass model, then with the spatial diffusion model in which the spatial factor is added, and compare the two models to determine which describes the diffusion of discount stores better. In addition, we investigate the relationship between similarity of retail environments (conceptual distance) and the spatial effect using correlation analysis. Result and Implication: To examine the proposed spatial diffusion model, 347 domestic discount stores are used, and we divide the nation into 5 districts: Seoul-Gyeongin (SG), Busan-Gyeongnam (BG), Daegu-Gyeongbuk (DG), Gwangju-Jeonla (GJ), and Daejeon-Chungcheong (DC). The results are as follows.

    In the Bass model (I), the estimates of the innovation coefficient (p) and imitation coefficient (q) are 0.017 and 0.323, respectively, and the estimate of market potential is 384. The Bass model (II), estimated separately for each district, shows that the innovation coefficient (p) in SG is 0.019, the lowest among the five districts; this is because SG is the diffusion center. The imitation coefficient (q) in BG is 0.353, the highest: the imitation coefficient in the vicinity of the diffusion center, such as BG, is higher than in the diffusion center itself because more information flows through various paths as diffusion progresses. In the spatial diffusion model (IV), we can see changes between the coefficients of the Bass model and those of the spatial diffusion model: except for GJ, the estimates of the innovation and imitation coefficients in Model IV are lower than those in Model II, and these changes are reflected in the spatial coefficient ($\gamma$). From the spatial coefficient ($\gamma$) we can infer that diffusion in the vicinity of the diffusion center is influenced by diffusion in the diffusion center. The difference between the Bass model (II) and the spatial diffusion model (IV) is statistically significant, with a $\chi^2$-distributed likelihood ratio statistic of 16.598 (p=0.0023), which implies that the spatial diffusion model is more effective than the Bass model in describing the diffusion of discount stores; research question (1) is therefore supported. In addition, correlation analysis shows a statistically significant relationship between similarity of retail environments and the spatial effect, so research question (2) is also supported.
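
Equations (1a) and (1b) can be simulated directly; the sketch below uses one diffusion center and one contiguous market with illustrative parameter values (only p=0.017 and q=0.323 echo the paper's Model I estimates; the rest are made up).

```python
# Hedged sketch: forward simulation of the spatial Bass model (1a)-(1b).
import numpy as np

def simulate(T=30, p_a=0.017, q_a=0.323, m_a=200.0,
             p_b=0.010, q_b=0.250, m_b=150.0, gamma_ab=0.05):
    Y_a = Y_b = 0.0                  # cumulative adopters in A (center) and B
    out = []
    for _ in range(T):
        S_a = (p_a + q_a * Y_a / m_a) * (m_a - Y_a)                         # (1a)
        S_b = (p_b + q_b * Y_b / m_b + gamma_ab * Y_a / m_a) * (m_b - Y_b)  # (1b)
        Y_a += S_a
        Y_b += S_b
        out.append((S_a, S_b))
    return np.array(out)

print(simulate()[:5])  # period sales in both markets; B lags the center
```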

  • The Research on Online Game Hedonic Experience - Focusing on Moderate Effect of Perceived Complexity - (온라인 게임에서의 쾌락적 경험에 관한 연구 - 지각된 복잡성의 조절효과를 중심으로 -)

    • Lee, Jong-Ho;Jung, Yun-Hee
      • Journal of Global Scholars of Marketing Science
      • /
      • v.18 no.2
      • /
      • pp.147-187
      • /
      • 2008
    • Online game researchers have focused on flow and the factors influencing it. Flow is conceptualized as an optimal experience state and is useful for explaining the online game experience. Many game studies have examined customer loyalty and flow in online game play; in describing the specific game experience, however, they do not examine the multidimensional experience process. Flow is a construct that captures the result of absorption, not the absorbing process itself, and is therefore not adequate for examining the multidimensional experience of games. Online games belong to hedonic consumption. Hedonic consumption is a relatively new field in consumer research that explores the consumption experience from an experiential view (Hirschman and Holbrook 1982): not as an information-processing event but from a phenomenological, experiential perspective, which is a primarily subjective state. It includes playful leisure activities, sensory pleasures, daydreams, esthetic enjoyment, and emotional responses. The online game experience, therefore, is best approached through the experiential view of hedonic consumption. The objective of this paper is to make up for gaps in our understanding of the online game experience by developing a framework for better insight into the hedonic experience of online games. We developed this framework by integrating and extending existing research in marketing, online games, and hedonic responses; we then discussed several expectations for the framework, and concluded by discussing the results of this study and providing general recommendations and directions for future research. In hedonic response research, Lacher's work (1994) and that of Jongho Lee and Yunhee Jung (2005; 2006) served as the fundamental starting point of our research. A common element in this research stream is the repeated identification of four hedonic responses: sensory response, imaginal response, emotional response, and analytical response. The validity of these four constructs has been established in research on music (Lacher 1994) and movies (Jongho Lee and Yunhee Jung 2005; 2006). However, previous research on hedonic responses did not show cause-effect relations among the constructs, and although hedonic responses may differ by stimulus properties, the effects of stimulus properties were not shown. To fill this gap, while largely building on Lacher (1994) and Jongho Lee and Yunhee Jung (2005, 2006), we made several important adaptations with the primary goal of bringing the model into the online game context and compensating for the shortcomings of previous research. We maintained the four constructs of hedonic response proposed by Lacher et al. (1994): sensory response, imaginal response, emotional response, and analytical response. In this study, the sensory response is typified by physical movement (Yingling 1962); the imaginal response is typified by images, memories, or situations that the game evokes (Myers 1914); the emotional response represents the feelings one experiences when playing the game, such as pleasure, arousal, and dominance; and the analytical response means that the game player engages in cognition-seeking while playing (Myers 1912). This paper, however, has several important differences: we attempt to describe a multidimensional experience process in online games and the cause-effect relations among hedonic responses, and we investigate the moderating effects of perceived complexity.
Previous studies of hedonic responses did not show the influence of stimulus properties. According to Berlyne's theory of aesthetic response (1960, 1974), perceived complexity is an important construct because it affects pleasure: pleasure in response to an object increases with complexity up to an optimal level and then declines as complexity increases further. We therefore expected perceived complexity to influence hedonic responses in the game experience. We discussed the rationale for these changes and the assumptions of the resulting framework, and developed expectations based on its application in the online game context. In the first stage of the methodology, questions were developed to measure the constructs. We constructed a survey measuring our theoretical constructs based on a combination of sources, including Yingling (1962), Hargreaves (1962), Lacher (1994), Jongho Lee and Yunhee Jung (2005, 2006), Mehrabian and Russell (1974), and Pucely et al. (1987). Based on comments received in the pretest, we made several revisions to arrive at our final survey. We investigated the proposed framework through a convenience sample, soliciting participation in a self-report survey from respondents with different levels of knowledge. All respondents participated, to different degrees, in these habitually practiced activities and received no compensation. Questionnaires were distributed to graduates, and 381 completed questionnaires were used for analysis. The sample consisted of more men (n=225) than women (n=156). All constructs were measured with multi-item scales based on previous studies. We analyzed the data using structural equation modeling (LISREL-VIII; Joreskog and Sorbom 1993). First, we used the entire sample (n=381) to refine the measures and test their convergent and discriminant validity; the evidence from both the factor analysis and the reliability analysis supports the scales' internal consistency and construct validity. Second, we tested the hypothesized structural model; we then divided the sample into two complexity groups and analyzed the hypothesized structural model for each group. The analysis suggests that hedonic responses play roles different from those hypothesized in our study. The results indicate that the hedonic responses (sensory, imaginal, emotional, and analytical) are positively related to respondents' level of game satisfaction, and game satisfaction is related to higher levels of game loyalty. Additionally, we found that perceived complexity matters for the online game experience: the importance of each hedonic response differs by perceived game complexity. Understanding the role of perceived complexity in hedonic responses enables a better understanding of the underlying mechanisms of the game experience. If a game has high complexity, the analytical response becomes important, so game producers or marketers should provide more cognitive stimuli; conversely, if a game has low complexity, the sensory response becomes more important. Finally, we discussed several limitations of our study, suggested directions for future research, and concluded with a discussion of managerial implications. Our study provides managers with a basis for game strategies.
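
One simplified way to probe the moderation the study tests is a regression with interaction terms, as sketched below; the column names and the OLS simplification (instead of the authors' multi-group LISREL analysis) are assumptions for illustration.

```python
# Hedged sketch: does perceived complexity moderate each hedonic response's
# effect on satisfaction? OLS interaction terms as a simple stand-in for
# multi-group SEM; column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("game_survey.csv")  # assumed per-respondent construct scores

model = smf.ols(
    "satisfaction ~ sensory * complexity + imaginal * complexity"
    " + emotional * complexity + analytical * complexity",
    data=df,
).fit()
print(model.summary())  # significant ':complexity' terms suggest moderation
```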


    Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

    • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
      • Journal of Intelligence and Information Systems
      • /
      • v.20 no.1
      • /
      • pp.163-176
      • /
      • 2014
    • Social media has become a platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena and analyzing the chatter of microblogs; measuring television ratings, however, has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch, and microblog users interact with each other while watching television or movies or visiting a new place. In measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings; modeling the time-related characteristics of features is key when measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vitally necessary for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set; after excluding data such as advertising and promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum level on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings, implying that a simple tweet rate does not reflect satisfaction with or response to the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings; further, some emoticons and newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find a time-dependency in the correlation of features between the periods before and after the broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectation of the program or their disappointment over not being able to watch it, and the highly correlated features before the broadcast differ from those after broadcasting; the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words have their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words that express the impossibility of watching the program show high relevance despite their negative meaning.
Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to or satisfaction with broadcast programs using the time-dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
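
The before/after correlation analysis could be sketched as follows; the per-episode feature table and its column names are hypothetical, standing in for the paper's 336 candidate word features.

```python
# Hedged sketch: correlate a word feature with ratings, split by phase.
import pandas as pd
from scipy.stats import pearsonr

# Assumed layout: one row per (episode, phase) with the word's tweet count
# aggregated before or after air time, plus the episode's rating.
df = pd.read_csv("word_counts_by_phase.csv")  # episode, phase, count, rating

for phase in ("before", "after"):
    sub = df[df["phase"] == phase]
    r, p = pearsonr(sub["count"], sub["rating"])
    print(f"{phase:>6}: r={r:.3f} (p={p:.3f})")
```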

    Cooperative Sales Promotion in Manufacturer-Retailer Channel under Unplanned Buying Potential (비계획구매를 고려한 제조업체와 유통업체의 판매촉진 비용 분담)

    • Kim, Hyun Sik
      • Journal of Distribution Research
      • /
      • v.17 no.4
      • /
      • pp.29-53
      • /
      • 2012
    • As marketers increasingly use diverse sales promotion methods, manufacturers and retailers in a channel often use them as well. In this context, diverse issues in sales promotion management arise; one of them is unplanned buying. Consumers' unplanned buying clearly benefits the retailer but not the manufacturer. This asymmetric influence of unplanned buying should be dealt with prudently because it can provoke channel conflict. However, there have been few studies on sales promotion management strategy that consider unplanned buying and its asymmetric effect on the retailer and manufacturer. In this paper, we look for a better way for a manufacturer in a channel to improve performance through the retailer's sales promotion efforts when there is potential for an unplanned buying effect, investigating via game-theoretic modeling the optimal cost sharing level between the manufacturer and retailer. We investigate the following issues: (1) What cost sharing mechanism should the manufacturer and retailer in a channel choose when the unplanned buying effect is strong (or weak)? (2) How much payoff do the manufacturer and retailer get when the unplanned buying effect is strong (or weak)? We focus on the impact of the unplanned buying effect on the optimal cost sharing mechanism for sales promotions between a manufacturer and a retailer, so we consider two players interacting in the same distribution channel. The model is a complete-information game in which the manufacturer is the Stackelberg leader and the retailer is the follower. (The notation table defining the model's variables is not reproduced here.) The manufacturer's objective function in the basic game is $\Pi=\Pi_1+\Pi_2$, where $\Pi_1=w_1(1+L-p_1)-\psi^2$ and $\Pi_2=w_2(1-\epsilon L-p_2)$; the retailer's is $\pi=\pi_1+\pi_2$, where $\pi_1=(p_1-w_1)(1+L-p_1)-L(L-\psi)+p_u(b+L-p_u)$ and $\pi_2=(p_2-w_2)(1-\epsilon L-p_2)$. The model has four stages over two periods: (Stage 1) the manufacturer sets the wholesale price of the first period ($w_1$) and the cost sharing level of channel sales promotion ($\psi$); (Stage 2) the retailer sets the retail price of the focal brand ($p_1$), the price of the unplanned buying item ($p_u$), and the sales promotion level ($L$); (Stage 3) the manufacturer sets the wholesale price of the second period ($w_2$); (Stage 4) the retailer sets the retail price of the second period ($p_2$). Since the model is a dynamic game, we derive a subgame perfect equilibrium to obtain theoretical and managerial implications, using backward induction: we solve the problems backward from stage 4 to stage 1, and by knowing the follower's optimal reaction to the leader's potential actions, we can fold the game tree backward. (The equilibrium values of the basic game are tabulated in the original article.) We also analyzed an additional game that incorporates a procurement cost for the unplanned buying item. The manufacturer's objective function in the additional game is the same as in the basic game: $\Pi=\Pi_1+\Pi_2$, where $\Pi_1=w_1(1+L-p_1)-\psi^2$ and $\Pi_2=w_2(1-\epsilon L-p_2)$.
The retailer's objective function, however, differs from the basic game: $\pi=\pi_1+\pi_2$, where $\pi_1=(p_1-w_1)(1+L-p_1)-L(L-\psi)+(p_u-c)(b+L-p_u)$ and $\pi_2=(p_2-w_2)(1-\epsilon L-p_2)$. (The equilibrium values of the additional game are likewise tabulated in the original article.) The major findings are as follows: (1) as the unplanned buying effect gets stronger, the manufacturer and retailer should increase spending on sales promotion; (2) as the unplanned buying effect gets stronger, the manufacturer should decrease its share of the total sales promotion cost; (3) the manufacturer's profit is an increasing function of the unplanned buying effect; and (4) all of the results (1)-(3) are attenuated as the retailer's procurement cost for unplanned buying items increases. The authors discuss the implications of these results for marketers at manufacturers and retailers. The current study is the first to suggest managerial implications for how a manufacturer should share sales promotion costs with a retailer in a channel under high or low consumer unplanned buying potential.
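
The backward induction of the last two stages can be reproduced symbolically from the stated profit functions; the sketch below (sympy, illustrative only) recovers the stage-4 and stage-3 solutions, and stages 2 and 1 fold back the same way.

```python
# Hedged sketch: backward induction over stages 4 and 3 of the basic game.
import sympy as sp

w2, p2, L, eps = sp.symbols("w2 p2 L epsilon", positive=True)

pi2 = (p2 - w2) * (1 - eps * L - p2)         # retailer's second-period profit
p2_star = sp.solve(sp.diff(pi2, p2), p2)[0]  # stage 4: p2* = (1 - eps*L + w2)/2

Pi2 = w2 * (1 - eps * L - p2_star)           # manufacturer's second-period profit
w2_star = sp.solve(sp.diff(Pi2, w2), w2)[0]  # stage 3: w2* = (1 - eps*L)/2

print(sp.simplify(p2_star), sp.simplify(w2_star))
# Substituting back and repeating over (p1, p_u, L) and then (w1, psi)
# yields the subgame perfect equilibrium of the full four-stage game.
```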


    Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

    • Cha, Sungjae;Kang, Jungseok
      • Journal of Intelligence and Information Systems
      • /
      • v.24 no.4
      • /
      • pp.1-32
      • /
      • 2018
    • Corporate defaults affect not only stakeholders of the bankrupt companies, including managers, employees, creditors, and investors, but also have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, analysis of past corporate defaults focused on specific variables, and when the government restructured immediately after the global financial crisis, it focused only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the Lehman Brothers case of the global financial crisis, where total collapse comes in a single moment. The key variables driving corporate defaults vary over time: comparing the analyses of Beaver (1967, 1968) and Altman (1968) with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed, and Grice (2001) likewise found shifts in the importance of predictive variables using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model using data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data covering the financial crisis period (2007~2008); the resulting model shows patterns similar to the training results and excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined training and validation data (2000~2008), applying the optimal parameters from the validation step. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), which demonstrates the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy is the same as in Lee (2015). Independent variables include financial information, such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The performance of the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms is compared. Corporate data pose the problems of nonlinear variables, multicollinearity, and lack of data: the logit model handles nonlinearity, the Lasso regression model addresses the multicollinearity problem, and the deep learning time series algorithm, combined with a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis and toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and delivers better predictive power. Amid the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into the everyday lives of their nations and societies, yet deep learning time series research for the financial industry remains insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and we hope it will serve as comparative material for non-specialists starting studies that combine financial data with deep learning time series algorithms.
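
A minimal sketch of the deep learning time series classifier described here, using a Keras LSTM over a few years of financial ratios; the shapes, synthetic data, and hyperparameters are placeholders, not the paper's configuration.

```python
# Hedged sketch: LSTM default classifier over multi-year financial ratios.
import numpy as np
import tensorflow as tf

T, F = 5, 12                                      # years, ratios per year
X = np.random.rand(1000, T, F).astype("float32")  # synthetic panel data
y = np.random.randint(0, 2, size=(1000,))         # 1 = default

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, F)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, validation_split=0.2, epochs=10, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, AUC] on the synthetic data
```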

    Study of Coherent High-Power Electromagnetic Wave Generation Based on Cherenkov Radiation Using Plasma Wakefield Accelerator with Relativistic Electron Beam in Vacuum (진공 내 상대론적인 영역의 전자빔을 이용한 플라즈마 항적장 가속기 기반 체렌코프 방사를 통한 결맞는 고출력 전자파 발생 기술 연구)

    • Min, Sun-Hong;Kwon, Ohjoon;Sattorov, Matlabjon;Baek, In-Keun;Kim, Seontae;Hong, Dongpyo;Jang, Jungmin;Bhattacharya, Ranajoy;Cho, Ilsung;Kim, Byungsu;Park, Chawon;Jung, Wongyun;Park, Seunghyuk;Park, Gun-Sik
      • The Journal of Korean Institute of Electromagnetic Engineering and Science
      • /
      • v.29 no.6
      • /
      • pp.407-410
      • /
      • 2018
    • As the operating frequency of an electromagnetic wave increases, its wavelength decreases and the maximum achievable output power falls, because the size of the interaction circuit must shrink with the wavelength. As a result, fabricating a high-power circuit (on the order of kW or greater) in the terahertz frequency band is limited by the problem of circuit size, which is on the order of μm to mm. To overcome these limitations, we propose a source design technique for a 0.1 THz, 0.3 GW-level device with a cylindrical shape (diameter ~2.4 cm). Modeling and computational simulations were performed to optimize the design of the high-power electromagnetic source, which is based on Cherenkov radiation generated using the principle of plasma wakefield acceleration with ponderomotive force and artificial dielectrics. An effective, objectively verified design guideline is proposed to facilitate the fabrication of large-diameter, high-power terahertz vacuum devices that are less restricted by circuit size.

    Modeling of Sensorineural Hearing Loss for the Evaluation of Digital Hearing Aid Algorithms (디지털 보청기 알고리즘 평가를 위한 감음신경성 난청의 모델링)

    • Kim, Dong-Wook;Park, Young-Cheol
      • Journal of Biomedical Engineering Research
      • /
      • v.19 no.1
      • /
      • pp.59-68
      • /
      • 1998
    • Digital hearing aids offer many advantages over conventional analog hearing aids. With the advent of high-speed digital signal processing chips, new digital techniques have been introduced to digital hearing aids. The evaluation of new ideas in hearing aids, however, is necessarily accompanied by intensive subject-based clinical tests, which require much time and cost. In this paper, we present an objective method to evaluate and predict the performance of hearing aid systems without such subject-based tests. In the hearing impairment simulation (HIS) algorithm, a sensorineural hearing impairment model is established from auditory test data of the impaired subject being simulated, and the nonlinear behavior of loudness recruitment is defined using hearing loss functions generated from the measurements. To transform the natural input sound into its impaired version, a frequency sampling filter is designed; the filter is continuously refreshed with the level-dependent frequency response function provided by the impairment model. To assess performance, the HIS algorithm was implemented in real time on a floating-point DSP. Signals processed with the real-time system were presented to normal-hearing subjects, and their auditory data, as modified by the system, were measured. The simulated sensorineural hearing impairment was thus tested: hearing threshold and speech discrimination tests demonstrated the effectiveness of the system for hearing impairment simulation. Using the HIS system, we evaluated three typical hearing aid algorithms.
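
The level-dependent filtering idea can be sketched as below: per frame, estimate the input level, derive a gain curve from a hearing loss function, design an FIR filter with that response, and filter the frame. The loss curve and frame parameters are made-up placeholders, not the paper's measured functions.

```python
# Hedged sketch: frame-by-frame level-dependent filtering (HIS-style).
import numpy as np
from scipy.signal import firwin2, lfilter

fs = 16000
freqs = np.array([0, 500, 1000, 2000, 4000, fs / 2])  # design grid (Hz)

def loss_gain(level_db):
    # Placeholder recruitment-like curve: high-frequency loss that
    # shrinks as the input frame gets louder (gains capped at 0 dB).
    base = np.array([0, -10, -25, -40, -55, -60], dtype=float)
    relief = np.clip((level_db + 60) / 60, 0, 1) * 20
    return 10 ** (np.minimum(base + relief, 0) / 20.0)

def simulate_impairment(x, frame=512):
    y = np.zeros_like(x)
    for i in range(0, len(x) - frame + 1, frame):
        seg = x[i:i + frame]
        level = 20 * np.log10(np.sqrt(np.mean(seg ** 2)) + 1e-9)
        taps = firwin2(101, freqs, loss_gain(level), fs=fs)
        y[i:i + frame] = lfilter(taps, [1.0], seg)
    return y

x = np.random.randn(fs)          # 1 s of noise as a stand-in for speech
y = simulate_impairment(x)
```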


    A Three-Dimensional Modeling Study of Lake Paldang for Spatial and Temporal Distributions of Temperature, Current, Residence Time, and Spreading Pattern of Incoming Flows (팔당호 수온, 유속, 체류시간의 시.공간적 분포 및 유입지류 흐름에 관한 3차원 모델 연구)

    • Na, Eun-Hye;Park, Seok-Soon
      • Journal of Korean Society of Environmental Engineers
      • /
      • v.27 no.9
      • /
      • pp.978-988
      • /
      • 2005
    • A three-dimensional dynamic model was applied to Lake Paldang on the Han River. The model was calibrated and verified using data measured under different ambient conditions, and the model results were in reasonable agreement with field measurements in both calibration and verification. Using the validated model, we analyzed the spatial and temporal distributions of temperature, current, and residence time, and the spreading pattern of incoming flows within the lake. Relatively low velocity and high temperature were computed at the surface layer in the southern region of Sonae island. The longest residence time was predicted in the southern region of Sonae island and the downstream region of the South Branch, which can be attributed to the back currents caused by dam blocking occurring mainly in these regions. Vertical thermal profiles indicated that weak thermal stratification would occur in early summer and winter. During early spring and fall, there appeared to be no discernible differences in the vertical temperature profiles across the entire lake; vertical overturns, however, do not occur during these periods because of the high discharge flows from the dam. During the midsummer monsoon season with high precipitation, thermal stratification was disrupted by high incoming flow rates and dam discharges, resulting in very short residence times throughout the lake. Under these circulation patterns, the plume of the Kyoungan stream, which has the smallest flow rate and higher water temperature, tends to travel downstream horizontally along the eastern shore of the south island and vertically in the top surface layer. The model results suggest that Lake Paldang is a highly dynamic water body with large spatial and temporal variations.

