
A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to appear in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology: two stock traders could handle the work of 600, and analysis that had taken 15 people four weeks could be processed in five minutes. In particular, big data analysis through machine learning is actively applied throughout the financial industry, and stock market analysis and investment modeling through machine learning theory are widely studied. Machine learning models such as artificial intelligence prediction models overcome the linearity limits of traditional financial time series studies. Quantitative studies based on past market data widely use artificial intelligence to forecast future movements of stock prices or indices, and other studies predict the future direction of the market or of individual stocks by learning from large amounts of text data such as news and comments related to the stock market. Investment in commodity assets, one class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio, yet there is relatively little research on investment models for commodity assets compared with mainstream assets such as equities and bonds. Machine learning techniques have recently been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial area. In this study we built an investment model using the Support Vector Machine (SVM), one of the machine learning models. 
Some research on commodity assets focuses on price prediction for specific commodities, but research on commodity investment models for asset allocation using machine learning is hard to find. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture, and metals) that are actively traded on the CME market with sufficient liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We formed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. Because commodity assets are closely related to macroeconomic activity, we used 19 macroeconomic indicators as model inputs, including stock market indices, export and import trade data, labor market data, and composite leading indicators: 14 US, two Chinese, and two Korean economic indicators. The data period runs from January 1990 to May 2017; the first 195 monthly observations serve as training data and the remaining 125 as test data. We verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the other commodity indices. The model's prediction accuracy for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas its accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metals sectors. 
The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should be similar despite variations in the data period, so we also used odd-numbered years as training data and even-numbered years as test data and confirmed that the results are similar. As a result, when allocating commodity assets to a traditional portfolio of stocks, bonds, and cash, more effective investment performance comes from investing in commodity futures rather than commodity indices, and especially from a commodity futures portfolio rebalanced by the SVM model.
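The study's forecasting setup (19 macroeconomic inputs, a chronological 195/125 monthly split, and an SVM direction classifier) can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' code; the variable names and the random inputs are ours.

```python
# Minimal sketch of SVM direction classification, assuming synthetic data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_months, n_indicators = 320, 19        # 19 macroeconomic inputs, as in the study
X = rng.normal(size=(n_months, n_indicators))
y = (rng.random(n_months) > 0.5).astype(int)  # 1 = up next month, 0 = down (toy labels)

# chronological split: first 195 months for training, remaining 125 for testing
X_train, X_test = X[:195], X[195:]
y_train, y_test = y[:195], y[195:]

scaler = StandardScaler().fit(X_train)  # scale using training statistics only
model = SVC(kernel="rbf")               # the kernel is one of the compared settings
model.fit(scaler.transform(X_train), y_train)
accuracy = (model.predict(scaler.transform(X_test)) == y_test).mean()
```

Swapping `kernel="rbf"` for `"linear"` or `"poly"` reproduces the kind of kernel comparison the abstract describes.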

The Research on Online Game Hedonic Experience - Focusing on Moderate Effect of Perceived Complexity - (온라인 게임에서의 쾌락적 경험에 관한 연구 - 지각된 복잡성의 조절효과를 중심으로 -)

  • Lee, Jong-Ho;Jung, Yun-Hee
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.2
    • /
    • pp.147-187
    • /
    • 2008
  • Online game researchers have focused on flow and the factors influencing it. Flow is conceptualized as an optimal experience state and is useful for explaining the online game experience. Many game studies have examined customer loyalty and flow in online game play; however, flow describes the result of absorption, not the process of becoming absorbed, so it is not adequate for examining the multidimensional experience of games. Online gaming is a form of hedonic consumption. Hedonic consumption is a relatively new field in consumer research that explores the consumption experience from an experiential view (Hirschman and Holbrook 1982): not as an information-processing event but as a phenomenological, primarily subjective state. It includes playful leisure activities, sensory pleasures, daydreams, esthetic enjoyment, and emotional responses. The online game experience is therefore best approached through the experiential view of hedonic consumption. The objective of this paper is to make up for gaps in our understanding of the online game experience by developing a framework for better insight into its hedonic dimension. We developed this framework by integrating and extending existing research in marketing, online games, and hedonic responses, discussed several expectations for it, and concluded with the results of this study, general recommendations, and directions for future research. In hedonic response research, Lacher's work (1994) and that of Jongho Lee and Yunhee Jung (2005, 2006) served as the fundamental starting point for our research. 
A common element in this line of research is the repeated identification of four hedonic responses: sensory, imaginal, emotional, and analytical. The validity of these four constructs has been established for music (Lacher 1994) and movies (Jongho Lee and Yunhee Jung 2005, 2006). However, previous research has not shown cause-and-effect relations among the constructs, and although hedonic responses can differ by stimulus properties, the effects of those properties have not been demonstrated. To fill this gap, while largely following Lacher (1994) and Jongho Lee and Yunhee Jung (2005, 2006), we made several important adaptations with the primary goal of bringing the model into online games and compensating for the gaps in previous research. We retained the four constructs of hedonic response proposed by Lacher (1994): sensory, imaginal, emotional, and analytical. In this study, the sensory response is typified by physical movement (Yingling 1962); the imaginal response by images, memories, or situations that the game evokes (Myers 1914); the emotional response by the feelings one experiences while playing, such as pleasure, arousal, and dominance; and the analytical response by the cognition the player engages in while playing (Myers 1912). This paper, however, has several important differences. We attempted to model a multidimensional experience process in online games and cause-and-effect relations among hedonic responses, and we investigated the moderating effect of perceived complexity, since previous studies of hedonic responses did not show the influence of stimulus properties. According to Berlyne's theory of aesthetic response (1960, 1974), perceived complexity is an important construct because it affects pleasure. 
Pleasure in response to an object increases with complexity up to an optimal level; beyond that point, further increases in complexity reduce pleasure (an inverted-U relationship). We therefore expected perceived complexity to influence hedonic responses in the game experience. We discussed the rationale for these changes and the assumptions of the resulting framework, and developed expectations based on its application in the online game context. In the first stage of the methodology, questions were developed to measure the constructs. We constructed a survey based on a combination of sources, including Yingling (1962), Hargreaves (1962), Lacher (1994), Jongho Lee and Yunhee Jung (2005, 2006), Mehrabian and Russell (1974), and Pucely et al. (1987). Based on comments received in the pretest, we made several revisions to arrive at the final survey. We investigated the proposed framework through a convenience sample, soliciting participation in a self-report survey from respondents with different levels of knowledge. All respondents participated, to different degrees, in these habitually practiced activities and received no compensation. Questionnaires were distributed to graduates, and 381 completed questionnaires were used in the analysis. The sample consisted of more men (n=225) than women (n=156). The study used multi-item scales based on previous studies. We analyzed the data using structural equation modeling (LISREL VIII; Joreskog and Sorbom 1993). First, we used the entire sample (n=381) to refine the measures and test their convergent and discriminant validity; both the factor analysis and the reliability analysis support the scales' internal consistency and construct validity. Second, we tested the hypothesized structural model. 
We then divided the sample into two perceived-complexity groups and analyzed the hypothesized structural model for each. The analysis suggests that hedonic responses play roles different from those hypothesized. The results indicate that the hedonic responses (sensory, imaginal, emotional, and analytical) are positively related to respondents' game satisfaction, and that game satisfaction is related to higher game loyalty. Additionally, we found that perceived complexity matters to the online game experience: the importance of each hedonic response differs by perceived game complexity. Understanding the role of perceived complexity in hedonic responses gives a better understanding of the underlying mechanisms of the game experience. If a game is highly complex, the analytical response becomes important, so game producers and marketers should consider more cognitive stimuli; conversely, if a game has low complexity, the sensory response becomes important. Finally, we discussed several limitations of the study, suggested directions for future research, and concluded with managerial implications. Our study provides managers with a basis for game strategies.
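The reliability check mentioned above (internal consistency of multi-item scales) is conventionally computed as Cronbach's alpha. Below is a minimal sketch with simulated responses; the data, seed, and scale sizes are illustrative, not the study's.

```python
# Cronbach's alpha for a multi-item scale, on simulated Likert-style data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of responses to one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the scale total
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(381, 1))                # 381 respondents, as in the study
items = latent + 0.5 * rng.normal(size=(381, 4))  # four correlated items (assumed)
alpha = cronbach_alpha(items)
```

Because the four simulated items share a common latent factor, alpha comes out high; uncorrelated items would drive it toward zero.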


Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults affect not only the stakeholders of the bankrupt company, including managers, employees, creditors, and investors, but also ripple through the local and national economy. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large 'chaebol' corporations went bankrupt. Even afterwards, analysis of corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid a sudden total collapse like the Lehman Brothers case of the global financial crisis. The key variables in corporate defaults vary over time: comparing Beaver (1967, 1968) and Altman (1968) with Deakin (1972) confirms that the major factors affecting corporate failure have changed, and Grice (2001) likewise found changes in the importance of the predictive variables in Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models and mostly do not consider changes over the course of time. Therefore, to construct consistent prediction models, the time-dependent bias must be compensated by a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009, divided into training, validation, and test data of 7, 2, and 1 years respectively. 
To construct a bankruptcy model that is consistent over time, we first train a deep learning time series model on data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted on validation data covering the financial crisis period (2007~2008), yielding models that show patterns similar to the training results along with excellent predictive power. Each bankruptcy prediction model is then retrained on the combined training and validation data (2000~2008) with the optimal parameters from validation. Finally, each corporate default prediction model trained on the nine years of data is evaluated and compared on the test data (2009), demonstrating the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model is useful for robust corporate default prediction across all three bundles of variables. The definition of bankruptcy follows Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. We compare the multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms. Corporate data suffer from nonlinear variables, multicollinearity among variables, and lack of data. 
The logit model handles nonlinearity, the Lasso regression model mitigates multicollinearity, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis, to automated AI analysis, and toward intertwined AI applications. Although the study of corporate default prediction using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and more effective in prediction power. Through the Fourth Industrial Revolution, the Korean government and governments overseas are working hard to integrate such systems into everyday life, yet deep learning time series research for the financial industry remains insufficient. As an initial study of deep learning time series analysis of corporate defaults, we hope it serves as comparative material for non-specialists beginning to combine financial data with deep learning time series algorithms.
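The recurrent unit behind the study's time series models can be illustrated with a single LSTM step unrolled over a firm's annual observations. This is a from-scratch numpy sketch under our own assumed shapes (8 financial ratios, 4 hidden units, a 7-year window matching the training split), not the paper's implementation, and the read-out is a toy, untrained classifier head.

```python
# One LSTM time step in plain numpy, unrolled over 7 annual observations.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """x: (d,) one year's financial ratios; h_prev, c_prev: (m,) states;
    W: (4m, d), U: (4m, m), b: (4m,) stacked gate parameters [i, f, o, g]."""
    m = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:m])          # input gate
    f = sigmoid(z[m:2*m])       # forget gate
    o = sigmoid(z[2*m:3*m])     # output gate
    g = np.tanh(z[3*m:])        # candidate cell state
    c = f * c_prev + i * g      # new cell state
    h = o * np.tanh(c)          # new hidden state
    return h, c

rng = np.random.default_rng(0)
d, m, T = 8, 4, 7               # 8 ratios, 4 hidden units, 7-year window (assumed)
W, U, b = rng.normal(size=(4*m, d)), rng.normal(size=(4*m, m)), np.zeros(4*m)
h = c = np.zeros(m)
for t in range(T):              # feed the 7 annual observations in order
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
default_prob = sigmoid(h.sum())  # toy read-out; the study trains a classifier head
```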

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on chart graphs rather than complex analyses such as corporate intrinsic value analysis and technical indicator analysis. However, pattern analysis is difficult and less computerized than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but whether the patterns found are suitable for trading is a separate matter, so these approaches can be vulnerable in practice. When such studies find a meaningful pattern, they find a point matching it and measure performance n days later, assuming a purchase at that point; since this calculates virtual revenues, it can diverge considerably from reality. Rather than searching for patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Despite reports that some patterns have price predictability, there were no performance reports from the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern recognition accuracy. 
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups for easy implementation, and only the one pattern with the highest success rate in each group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement assumes that both the buy and the sell are actually executed, reflecting a real trading situation. We tested three ways to calculate the turning points. The first, the minimum change rate zig-zag method, removes price movements below a certain percentage and then finds the vertices. In the second, the high-low line zig-zag method, a high that meets the n-day high line is taken as a peak, and a low that meets the n-day low line as a valley. In the third, the swing wave method, a central high that is higher than the n highs on its left and right is taken as a peak, and a central low that is lower than the n lows on its left and right as a valley. The swing wave method was superior to the others in our tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still incomplete. Because the number of cases was far too large to find high-success-rate patterns exhaustively in this simulation, genetic algorithms (GA) were the most suitable solution. We also ran the simulation using Walk-forward Analysis (WFA), which separates the test section from the application section, so we could respond appropriately to market changes. We optimized at the portfolio level because optimizing the variables for each individual stock risks over-optimization. 
We therefore set the number of constituent stocks to 20 to gain the benefit of diversification while avoiding over-optimization, and tested the KOSPI market divided into six categories. The small-cap portfolio was the most successful and the high-volatility portfolio the second best, which shows that some price volatility is needed for patterns to form, but that more volatility is not always better.
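The swing wave rule described above (a central high exceeding the n highs on each side is a peak; a central low below the n lows on each side is a valley) can be sketched directly. The price series and the choice of `n` here are illustrative, not the study's data.

```python
# Swing wave turning-point detection: compare each bar to its n neighbors per side.
def swing_points(highs, lows, n):
    peaks, valleys = [], []
    for t in range(n, len(highs) - n):
        window = range(t - n, t + n + 1)
        if all(highs[t] > highs[j] for j in window if j != t):
            peaks.append(t)               # local maximum of the high series
        if all(lows[t] < lows[j] for j in window if j != t):
            valleys.append(t)             # local minimum of the low series
    return peaks, valleys

highs = [10, 11, 13, 12, 11, 12, 14, 13, 12]   # toy daily highs
lows  = [ 9, 10, 12, 11, 10, 11, 13, 12, 11]   # toy daily lows
peaks, valleys = swing_points(highs, lows, n=2)  # peaks at bars 2 and 6, valley at 4
```

Because a peak or valley is only confirmed n bars after it occurs, this rule naturally implements the "trade after the pattern completes" idea the abstract credits for the method's superiority.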

Preliminary Report of the 1998~1999 Patterns of Care Study of Radiation Therapy for Esophageal Cancer in Korea (식도암 방사선 치료에 대한 Patterns of Care Study (1998~1999)의 예비적 결과 분석)

  • Hur, Won-Joo;Choi, Young-Min;Lee, Hyung-Sik;Kim, Jeung-Kee;Kim, Il-Han;Lee, Ho-Jun;Lee, Kyu-Chan;Kim, Jung-Soo;Chun, Mi-Son;Kim, Jin-Hee;Ahn, Yong-Chan;Kim, Sang-Gi;Kim, Bo-Kyung
    • Radiation Oncology Journal
    • /
    • v.25 no.2
    • /
    • pp.79-92
    • /
    • 2007
  • Purpose: For the first time, a nationwide survey was conducted in the Republic of Korea to determine the basic parameters of esophageal cancer treatment and to offer a solid cooperative system for the Korean Patterns of Care Study (PCS) database. Materials and Methods: During 1998~1999, 246 biopsy-confirmed esophageal cancer patients who received radiotherapy were enrolled from 23 institutions in South Korea. Random sampling was based on the power allocation method. Patient parameters and specific information on tumor characteristics and treatment methods were collected and registered through the web-based PCS system. The data were analyzed using the Chi-squared test. Results: The median age of the patients was 62 years. The male to female ratio was about 91 to 9, an absolute male predominance. The performance status was ECOG 0 to 1 in 82.5% of the patients. Diagnostic procedures included an esophagogram (228 patients, 92.7%), endoscopy (226 patients, 91.9%), and a chest CT scan (238 patients, 96.7%). Squamous cell carcinoma was diagnosed in 96.3% of the patients; mid-thoracic esophageal cancer was most prevalent (110 patients, 44.7%), and 135 patients presented with clinical stage III disease. Fifty-seven patients received radiotherapy alone and 37 received surgery with adjuvant postoperative radiotherapy. Half of the patients (123) received chemotherapy together with RT, and 70 of them (56.9%) received it as concurrent chemoradiotherapy. The most frequently used chemotherapy was a combination of cisplatin and 5-FU. Most patients received radiotherapy with either 6 MV (116 patients, 47.2%) or 10 MV photons (87 patients, 35.4%). Radiotherapy was delivered through a conventional AP-PA field in 206 patients (83.7%) without a CT plan, and the median delivered dose was 3,600 cGy. 
The median total dose of postoperative radiotherapy was 5,040 cGy, while for non-operative patients it was 5,970 cGy. Thirty-four patients received intraluminal brachytherapy with high-dose-rate Iridium-192, delivered at a median dose of 300 cGy per fraction, typically 3~4 times. The most frequent complication during radiotherapy was esophagitis, in 155 patients (63.0%). Conclusion: This study will provide guidelines and benchmark data for the evaluation and treatment of esophageal cancer patients at radiation facilities in Korea and for the solid cooperative system of the Korean PCS. Although some differences were noted between institutions, there was no major difference in treatment modalities and RT techniques.

Development of a Device for Estimating the Optimal Artificial Insemination Time of Individually Stalled Sows Using Image Processing (영상처리기법을 이용한 스톨 사육 모돈의 인공수정적기 예측 장치 개발)

  • Kim, D.J.;Yeon, S.C.;Chang, H.H.
    • Journal of Animal Science and Technology
    • /
    • v.49 no.5
    • /
    • pp.677-688
    • /
    • 2007
  • Most animals, including pigs, are spontaneous ovulators that ovulate at a fixed point in a regular estrous cycle, whereas females of species such as rabbits, cats, and mink are induced ovulators, in which ovulation is triggered by mating stimuli. Animals are also divided into monoestrous species, which come into estrus once a year, and polyestrous species, which do so several times a year. The sow is polyestrous and shows behaviors during estrus that differ from the non-estrous period (Diehl et al., 2001). To maximize a pig farm's profit, non-productive days must be minimized, and one way to reduce a sow's non-productive days is successful breeding, which requires accurate prediction of the optimal insemination time. If the optimal time is misjudged and conception fails, non-productive days increase and losses follow, so accurate timing is a key element of successful artificial insemination. The optimal insemination time is 10 to 12 hours before ovulation; measured from the onset of estrus, it is 26 to 34 hours for sows and 18 to 26 hours for gilts (Evans et al., 2001). Currently, estrus is commonly checked twice a day, using boar exposure or visual observation. This requires skilled technique and extensive experience, and consumes about 30% of total labor (Perez et al., 1986). Because estrus is checked only twice a day, its exact onset cannot be determined, and since most estrus begins in the early morning, judging the optimal insemination time is very difficult. Even when estrus is detected, failing to inseminate at the right time lowers the conception rate and causes economic loss. Because of these problems, insemination is currently performed two or three times, but the resulting cost and labor burden pig farms. During estrus, pigs show behaviors absent in the non-estrous period, such as sniffing the vulva, pricking up the ears, and standing to be mounted (Diehl et al., 2001), and they are more active than in the non-estrous period (Altman, 1941; Erez and Hartsock, 1990). Freson et al. (1998) reported detecting estrus with up to 86% accuracy by measuring the activity of individually stalled sows with infrared sensors, but that study only detected estrus and did not provide a criterion for the optimal insemination time, the most important factor in breeding management. This study was therefore conducted to develop a device that detects the onset of estrus by measuring the activity of stalled sows and predicts the optimal artificial insemination time from that onset, and to test its performance in an on-farm experiment.
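The timing rule in the abstract (insemination 26 to 34 hours after estrus onset for sows, 18 to 26 hours for gilts) can be sketched as a function of an activity series. The onset criterion below, a jump above a multiple of baseline activity, is our own simplification of activity-based detection, not the device's actual algorithm, and the data are toy values.

```python
# Hypothetical sketch: detect estrus onset from an activity rise, then place the
# insemination window per the 26-34 h (sow) / 18-26 h (gilt) rule from the text.
def insemination_window(activity, baseline, is_gilt=False, factor=2.0):
    """activity: hourly activity counts. Returns (onset_hour, (lo, hi)) or None."""
    for hour, a in enumerate(activity):
        if a > factor * baseline:        # assumed onset criterion: sharp activity rise
            lo, hi = (18, 26) if is_gilt else (26, 34)
            return hour, (hour + lo, hour + hi)
    return None                          # no estrus detected in this series

activity = [10, 11, 9, 12, 30, 35, 33, 28]   # toy counts per hour; onset at hour 4
result = insemination_window(activity, baseline=10)
```

Here the rise at hour 4 marks the onset, so the sow's window spans hours 30 to 38; continuous monitoring avoids the twice-daily checks' uncertainty about when estrus actually began.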

The Role of Social Capital and Identity in Knowledge Contribution in Virtual Communities: An Empirical Investigation (가상 커뮤니티에서 사회적 자본과 정체성이 지식기여에 미치는 역할: 실증적 분석)

  • Shin, Ho Kyoung;Kim, Kyung Kyu;Lee, Un-Kon
    • Asia pacific journal of information systems
    • /
    • v.22 no.3
    • /
    • pp.53-74
    • /
    • 2012
  • A challenge in fostering virtual communities is the continuous supply of knowledge, namely members' willingness to contribute knowledge to their communities. Previous research argues that giving away knowledge eventually causes the possessors of that knowledge to lose their unique value to others, benefiting all except the contributor. Furthermore, communication within virtual communities involves a large number of participants with different social backgrounds and perspectives. The establishment of mutual understanding to comprehend conversations and foster knowledge contribution in virtual communities is inevitably more difficult than face-to-face communication in a small group. In spite of these arguments, evidence suggests that individuals in virtual communities do engage in social behaviors such as knowledge contribution. It is important to understand why individuals provide their valuable knowledge to other community members without a guarantee of returns. In virtual communities, knowledge is inherently rooted in individual members' experiences and expertise. This personal nature of knowledge requires social interactions between virtual community members for knowledge transfer. This study employs the social capital theory in order to account for interpersonal relationship factors and identity theory for individual and group factors that may affect knowledge contribution. First, social capital is the relationship capital which is embedded within the relationships among the participants in a network and available for use when it is needed. Social capital is a productive resource, facilitating individuals' actions for attainment. Nahapiet and Ghoshal (1997) identify three dimensions of social capital and explain theoretically how these dimensions affect the exchange of knowledge. Thus, social capital would be relevant to knowledge contribution in virtual communities. 
Second, existing research has addressed the importance of identity in facilitating knowledge contribution in a virtual context. Identity in virtual communities has been described as playing a vital role in the establishment of personal reputations and in the recognition of others. For instance, reputation systems that rate participants in terms of the quality of their contributions provide a readily available inventory of experts to knowledge seekers. Despite the growing interest in identities, however, there is little empirical research about how identities in the communities influence knowledge contribution. Therefore, the goal of this study is to better understand knowledge contribution by examining the roles of social capital and identity in virtual communities. Based on a theoretical framework of social capital and identity theory, we develop and test a theoretical model and evaluate our hypotheses. Specifically, we propose three variables such as cohesiveness, reciprocity, and commitment, referring to the social capital theory, as antecedents of knowledge contribution in virtual communities. We further posit that members with a strong identity (self-presentation and group identification) contribute more knowledge to virtual communities. We conducted a field study in order to validate our research model. We collected data from 192 members of virtual communities and used the PLS method to analyse the data. The tests of the measurement model confirm that our data set has appropriate discriminant and convergent validity. The results of testing the structural model show that cohesion, reciprocity, and self-presentation significantly influence knowledge contribution, while commitment and group identification do not significantly influence knowledge contribution. Our findings on cohesion and reciprocity are consistent with the previous literature. Contrary to our expectations, commitment did not significantly affect knowledge contribution in virtual communities. 
This result may be due to the fact that knowledge contribution was voluntary in the virtual communities in our sample. Another plausible explanation for this result may be the self-selection bias for the survey respondents, who are more likely to contribute their knowledge to virtual communities. The relationship between self-presentation and knowledge contribution was found to be significant in virtual communities, supporting the results of prior literature. Group identification did not significantly affect knowledge contribution in this study, inconsistent with the wealth of research that identifies group identification as an important factor for knowledge sharing. This conflicting result calls for future research that examines the role of group identification in knowledge contribution in virtual communities. This study makes a contribution to theory development in the area of knowledge management in general and virtual communities in particular. For practice, the results of this study identify the circumstances under which individual factors would be effective for motivating knowledge contribution to virtual communities.


Home Economics Teachers' Perception of Cultural Diversity Education (문화다양성 교육에 대한 가정과교사의 인식)

  • Si, Se-In;Lee, Eun-Hee
    • Journal of Korean Home Economics Education Association
    • /
    • v.26 no.4
    • /
    • pp.115-128
    • /
    • 2014
The purpose of this study was to investigate home economics teachers' perception of cultural diversity education and to provide effective educational material for multicultural education in teacher education and teacher retraining. A total of 160 home economics teachers answered the survey questionnaire. The data were analyzed with SPSS 19.0 for Windows using frequency analysis, factor analysis, reliability analysis, t-tests, one-way ANOVA, and Duncan's multiple comparison. The results were as follows. Factor analysis yielded four dimensions of cultural diversity education: cultural equality, diversity implementation, diversity value, and comfort with diversity. Awareness of cultural diversity education was highest for cultural equality, followed by diversity implementation, diversity value, and comfort with diversity. The groups differed significantly according to demographic variables. For overall awareness of cultural diversity education and for diversity implementation, teachers in their 40s showed higher awareness than other groups. Furthermore, teachers outside the Jeonbuk area showed higher awareness of cultural equality, diversity implementation, and diversity value than those in Jeonbuk, which has the third-highest proportion of multicultural families in the nation. For cultural equality and diversity implementation, teachers with over 15 years of experience showed higher awareness than other groups. Teachers certified through a college of education showed higher awareness of cultural equality, diversity value, and comfort with diversity than teachers from other colleges. Teachers who perceived a need for multicultural education showed higher awareness of cultural equality, diversity implementation, and diversity awareness than those who did not.
These results imply that home economics education needs more systematic research on school field education and related educational programs in order to revitalize multicultural education. In addition, for teachers with a high awareness of cultural diversity to conduct systematic multicultural education more effectively, both systematic pre-service programs at the college level and in-service programs for practicing teachers should address cultural diversity education.
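The group comparisons above were run in SPSS. As a minimal sketch of one of those tests, a one-way ANOVA across teacher groups can be reproduced with SciPy; the scores below are hypothetical, and Duncan's multiple comparison itself requires a dedicated post-hoc routine not shown here:

```python
from scipy.stats import f_oneway

# Hypothetical cultural-diversity awareness scores (5-point scale)
# for three teacher groups, e.g., split by age band.
group_40s = [4.2, 4.5, 4.1, 4.6, 4.3]
group_30s = [3.8, 3.6, 3.9, 3.7, 3.5]
group_50s = [3.9, 4.0, 3.8, 4.1, 3.7]

# One-way ANOVA: do the group means differ significantly?
f_stat, p_value = f_oneway(group_40s, group_30s, group_50s)
```

A p-value below the chosen significance level (conventionally 0.05) indicates that at least one group mean differs, after which a post-hoc test such as Duncan's or Tukey's identifies which pairs differ.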


Characteristics and breeding of a new cultivar of Pleurotus ostreatus that is tolerant to adverse environments (느타리 신품종 불량환경내성 '고솔'의 육성 및 자실체 특성)

  • Shin, Pyung-Gyun;Oh, Min-Ji;Kim, Eun-Sun;Oh, Youn-Lee;Jang, Kab-Yeul;Kong, Won-Sik;Yoo, Young-Bok
    • Journal of Mushroom
    • /
    • v.14 no.2
    • /
    • pp.59-63
    • /
    • 2016
  • A new commercial strain of oyster mushroom ('Gosol') was developed by hyphal anastomosis and improved by hybridization between a monokaryotic strain derived from Pleurotus ostreatus ASI 0635 (Gonji 7ho) and a dikaryotic strain derived from P. ostreatus ASI 0666 (Mongdol). The optimum temperatures for mycelial growth and fruiting body development were 25~30°C and 12~18°C, respectively. When PDA (potato dextrose agar) and MCM (mushroom complete medium) were compared, mycelial growth was faster on MCM. Similar results were observed with the control strain P. ostreatus ASI 2504 (Suhan 1ho). Analysis of the genetic characteristics of the new cultivar ('Gosol') showed a DNA profile different from that of the control ASI 2504 strain when RAPD (random amplified polymorphic DNA) primers URP1, 2, 3, and 7 were used. Fruiting body production per bottle was approximately 116 g in a production performance test. In addition, yields in a farm field trial were achieved stably under an inadequate production environment. The color of the pileus was blackish gray, and the stipe was long and thick. Therefore, we expect that this new strain will satisfy consumer demand for high-quality mushrooms.

A Field Survey and Analysis of Ground Water Level and Soil Moisture in A Riparian Vegetation Zone (식생사주 역에서 지하수위와 토양수분의 현장 조사·분석)

  • Woo, Hyo-Seop;Chung, Sang-Joon;Cho, Hyung-Jin
    • Journal of Korea Water Resources Association
    • /
    • v.44 no.10
    • /
    • pp.797-807
    • /
    • 2011
  • Vegetation recruitment on sand bars is increasing rapidly in the streams and rivers of Korea. In the 1960s, before industrialization and urbanization, most streams consisted of sand and gravel, the so-called 'White River'. Owing to dam construction, stream maintenance, and similar works carried out since the 1970s, flow duration and sediment transport characteristics have been disturbed, leading to an abundance of vegetation along the waterfront; that is, a transition to the 'Green River' is in progress. This study aimed to identify the correlations among water level, water temperature, rainfall, soil moisture, and soil texture, the factors affecting vegetation recruitment on the sand bars of an unregulated stream. For this purpose, the study selected the downstream reach of Naeseong Stream, one of the sand-bed rivers in Korea, as the test section and conducted monitoring and analysis for 289 days. In addition, aerial photos taken from 1970 to 2009 were analyzed to identify the long-term change in vegetation. The test section was 361 m in transverse length and about 2 km in longitudinal length. According to the survey analysis, the test section of Naeseong Stream was a gaining river, with a groundwater level 20~30 m higher than the stream water level. The difference in groundwater temperature was less than 5°C by day and season, and the stream temperature did not fall below 10°C from May, when vegetation germination begins in earnest. Soil moisture was controlled by the groundwater level in the lower layer and by rainfall in the upper layer, and both layers were influenced by soil particle size.
At the six soil moisture measurement points, the soil from the surface to a depth of 1 m was sand with a D50 of 0.07~1.37 mm, and the capillary rise achievable for this particle size is estimated at around 14~43 cm. In addition, according to the spatial analysis of the test section of the unregulated stream over 40 years, artificial disturbance and drought promoted vegetation recruitment, while flooding resulted in the frequent extinction of vegetation communities. Although small and large episodes of vegetation recruitment and extinction have alternated since 1970, the present vegetation area has clearly increased compared to the past, and the vegetation area is found to be gradually increasing over time.
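The abstract does not state how the 14~43 cm capillary height was estimated from the grain size. One common first approximation is Jurin's law applied with an assumed effective pore radius (itself a fraction of the grain diameter). The sketch below is illustrative only; the pore radius is a hypothetical input, not a value from the study:

```python
import math

GAMMA = 0.0728   # surface tension of water at ~20 °C, N/m
RHO = 1000.0     # water density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def capillary_rise(pore_radius_m, contact_angle_deg=0.0):
    """Jurin's law: h = 2 * gamma * cos(theta) / (rho * g * r)."""
    theta = math.radians(contact_angle_deg)
    return 2 * GAMMA * math.cos(theta) / (RHO * G * pore_radius_m)

# Assumed effective pore radius of 0.1 mm for a medium sand:
h_m = capillary_rise(1e-4)   # about 0.15 m, i.e., within the 14~43 cm range
```

Because rise is inversely proportional to pore radius, the finer end of the reported D50 range (0.07 mm) yields a substantially larger capillary height than the coarser end (1.37 mm), consistent with the 14~43 cm spread.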