
Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body while their performance is not limited by environmental context such as lighting conditions or a camera's field of view. Moreover, accelerometers are now widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, an essential repertoire for robot-based education services. Recognizing 26 handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of the 10 handwritten digits; most earlier studies dealt with sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for successful pattern recognition. To improve discriminative power over the complex alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Preliminary experiments showed that trajectory-based classifiers performed 3%~5% better than those using raw features such as the acceleration signal itself or statistical figures. To minimize distortion of the trajectories, we applied a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture differ greatly among performers. To tackle this problem, online incremental learning is applied so that the system adapts to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed that as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. We therefore devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; it is run periodically to remove reference patterns with a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each alphabet letter was performed 5 times per participant using a Nintendo® Wii™ remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate. The major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%); although W was recalled perfectly, it contributed heavily to the false positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the iPhone™. The participating children exhibited improved concentration and active reactions to the service with our gesture interface. To verify the effectiveness of the gesture interface, a test was taken by the children after they experienced an English teaching service. Those who played with the gesture interface-based robot content scored 10% better than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content, complementing the limits of today's conventional interfaces such as touch screens, vision, and voice.
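
The reference-pattern pruning idea described above can be summarized in a short sketch. The sketch below assumes a plain nearest-neighbor (instance-based) classifier over trajectory features; the counters, thresholds, and names are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of periodic reference-pattern pruning for instance-based
# learning (IBL), as described in the abstract.  Thresholds and field names
# are hypothetical; the authors' actual bookkeeping may differ.
from dataclasses import dataclass
from typing import List

@dataclass
class ReferencePattern:
    label: str                    # alphabet letter this trajectory represents
    trajectory: List[float]       # motion-trajectory feature from the acceleration signal
    positive: int = 0             # times this pattern supported a correct classification
    negative: int = 0             # times this pattern caused a false positive

def record_result(best_match: ReferencePattern, true_label: str) -> None:
    """Update contribution counters after each classification."""
    if best_match.label == true_label:
        best_match.positive += 1
    else:
        best_match.negative += 1

def prune(references: List[ReferencePattern],
          min_positive: int = 1, max_negative: int = 3) -> List[ReferencePattern]:
    """Periodically drop patterns with very low positive or high negative contribution."""
    return [r for r in references
            if r.positive >= min_positive and r.negative <= max_negative]
```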

The Risk Assessment of Butachlor for the Freshwater Aquatic Organisms (Butachlor의 수서생물에 대한 위해성 평가)

  • Park, Yeon-Ki;Bae, Chul-Han;Kim, Byung-Seok;Lee, Jea-Bong;You, Are-Sun;Hong, Soon-Sung;Park, Kyung-Hoon;Shin, Jin-Sup;Hong, Moo-Ki;Lee, Kyu-Seung;Lee, Jung-Ho
    • The Korean Journal of Pesticide Science / v.13 no.1 / pp.1-12 / 2009
  • To assess the effect of butachlor on freshwater aquatic organisms, acute toxicity studies on algae, an invertebrate, and fishes were conducted. Algal growth inhibition studies were carried out to determine the growth inhibition effects of butachlor (Tech. 93.4%) on Pseudokirchneriella subcapitata (formerly known as Selenastrum capricornutum), Desmodesmus subspicatus (formerly known as Scenedesmus subspicatus), and Chlorella vulgaris over an exposure period of 72 hours. The toxicological responses of P. subcapitata, D. subspicatus, and C. vulgaris to butachlor, expressed as individual ErC50 values, were 0.002, 0.019, and 10.4 mg/L, respectively, and the NOEC values were 0.0008, 0.0016, and 5.34 mg/L, respectively. P. subcapitata was more sensitive than the other algal species. Butachlor is very highly toxic to algae such as P. subcapitata and D. subspicatus. In the acute immobilisation test on Daphnia magna, the 24 h- and 48 h-EC50 values were 2.55 and 1.50 mg/L, respectively. In the acute toxicity tests on Cyprinus carpio, Oryzias latipes, and Misgurnus anguillicaudatus, the 96 h-LC50 values were 0.62, 0.41, and 0.24 mg/L, respectively. An ecological risk assessment of butachlor was then performed on the basis of the toxicological data for algae, invertebrate, and fish and the exposure concentrations in rice paddy, drain, and river water. When a butachlor formulation is applied to a rice paddy field according to the label recommendation, the measured concentration of butachlor in paddy water was 0.41 mg/L and the predicted environmental concentration (PEC) of butachlor in drain water was 0.03 mg/L. Residues of butachlor detected in major rivers between 1997 and 1998 ranged from 0.0004 mg/L to 0.0029 mg/L. The toxicity exposure ratios (TERs) for algae in rice paddy, drain, and river water were 0.004, 0.05, and 0.36, respectively, indicating that butachlor poses a risk to algae in rice paddies, drains, and rivers. On the other hand, the TERs for the invertebrate in rice paddy, drain, and river water were 3.6, 50, and 357, respectively, well above 2, indicating no risk to the invertebrate. The TERs for fish in rice paddy, drain, and river water were 0.58, 8, and 57, respectively, indicating that butachlor poses a risk to fish in rice paddies but not in agricultural drains and rivers. In conclusion, butachlor poses a minimal risk to algae in agricultural drains and rivers exposed to rice drainage but poses no risk to the invertebrate and fish.
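
The risk figures above follow directly from the toxicity exposure ratio, TER = acute toxicity endpoint / environmental exposure concentration. A minimal check using the fish values quoted in the abstract reproduces the reported paddy and drain TERs; this is only an illustration, not the authors' computation.

```python
# TER = acute toxicity endpoint / environmental exposure concentration (both in mg/L).
# Values below are taken from the abstract; the river case is omitted because the
# abstract reports a range of river residues rather than a single concentration.
fish_lc50 = 0.24                                  # lowest 96 h LC50 (M. anguillicaudatus)
exposure = {"rice paddy": 0.41, "drain": 0.03}    # exposure concentrations from the abstract

for site, conc in exposure.items():
    ter = fish_lc50 / conc
    print(f"TER for fish in {site}: {ter:.3f}")   # ~0.585 and 8.0; reported as 0.58 and 8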

The Role of Control Transparency and Outcome Feedback on Security Protection in Online Banking (계좌 이용 과정과 결과의 투명성이 온라인 뱅킹 이용자의 보안 인식에 미치는 영향)

  • Lee, Un-Kon;Choi, Ji Eun;Lee, Ho Geun
    • Information Systems Review / v.14 no.3 / pp.75-97 / 2012
  • Fostering trusting beliefs in financial transactions is a challenging task in Internet banking services. The authenticated certificate has been regarded as an effective method to guarantee trusting beliefs for online transactions. However, previous research claims that this method has loopholes that can be exploited by abusers such as hackers who attack the financial accounts of innocent users on the Internet. Two types of methods have been suggested as alternatives for securing user identification and activity in online financial services. Control transparency uses information about the transaction process to verify and control transactions. Outcome feedback, which refers to specific information about exchange outcomes, provides information about final transaction results. By using these two methods, financial service providers can signal to the involved parties the robustness of their security mechanisms. Both methods, control transparency and outcome feedback, have been widely used in the IS field to enhance the quality of IS services. In this research, we intend to verify that these two methods can also be used to reduce risks and increase security protection in online banking services. The purpose of this paper is to empirically test the effects of control transparency and outcome feedback on risk perceptions in Internet banking services. Our assumption is that these two methods can reduce the perceived risks involved with online financial transactions while increasing perceived trust in financial service providers. These changes in user attitudes can increase user satisfaction, which may lead to increased user loyalty as well as users' willingness to pay for financial transactions. Previous IS research suggests that an increased level of transparency about the process and result of transactions can enhance the information quality and decision quality of IS users. Transparency helps IS users acquire the information needed to control the transaction counterpart and thus complete the transaction successfully. It is also argued that transparency can reduce perceived transaction risks in IS usage. Many IS researchers have also argued that trust can be generated by institutional mechanisms. Trusting belief refers to the truster's belief that the trustee has attributes beneficial to the truster. Institution-based trust plays an important role in enhancing the probability of achieving a successful outcome. When a transactor regards certain conditions as crucial for transaction success, he or she considers the providers of those conditions trustworthy and eventually trusts the other parties involved with them. In this process, transparency helps the transactor complete the transaction successfully. Based on these studies, we expect that control transparency and outcome feedback can reduce risk perceptions about the transaction and enhance trust in the service provider. Building on a theoretical framework of transparency and institution-based trust, we propose a research model and test its hypotheses. We conducted a laboratory experiment to validate our research model; since the transparency artifacts (control transparency and outcome feedback) are not yet adopted in online banking services, a general survey method could not be employed.
We collected data from 138 experimental subjects who had experience with online banking services. PLS was used to analyze the experimental data. The measurement model confirms that our data set has appropriate convergent and discriminant validity. The results of testing the structural model indicate that control transparency significantly enhances trust and significantly reduces the risk perception of online banking users, and that outcome feedback significantly enhances users' trust. We also found that the reduced risk and the increased trust significantly improve service satisfaction, and the increased satisfaction finally leads to increased loyalty and willingness to pay for the financial services.

A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.127-146 / 2017
  • Services using artificial intelligence have begun to appear in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology; for example, two stock traders could handle the work of 600, and analytical work that once took 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning is actively applied throughout the financial industry, and stock market analysis and investment modeling through machine learning are actively studied. The linearity limitations of conventional financial time series studies are overcome by machine learning methods such as artificial intelligence prediction models. Quantitative studies based on past stock market data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of individual stock prices by learning from large amounts of text data such as news and comments related to the stock market. Investment in commodity assets, one class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio, yet there is relatively little research on investment models for commodity assets compared with mainstream assets like equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial field. In this study we built an investment model using the Support Vector Machine (SVM), one of the machine learning models. Some research on commodity assets focuses on price prediction for a specific commodity, but it is hard to find research on commodity investment models for asset allocation that use machine learning. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture, and metals) that are actively traded on the CME market and have sufficient liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We formed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. We used 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, as the input data of the model, because commodity assets are closely related to macroeconomic activity; they comprise 14 US, two Chinese, and two Korean economic indicators. The data period is from January 1990 to May 2017; the first 195 monthly observations were used as training data and the remaining 125 as test data.
In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors, and the individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar despite variations in the data period, so we also used the odd-numbered year data as training data and the even-numbered year data as test data and confirmed that the results are similar. As a result, when we allocate commodity assets to a traditional portfolio composed of stocks, bonds, and cash, we can obtain more effective investment performance by investing in commodity futures rather than in commodity indices; in particular, better performance comes from the rebalanced commodity futures portfolio designed by the SVM model.
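
A minimal sketch of the SVM direction-forecasting setup described above is shown below. The CSV file, column names, and the 195/125 split follow the abstract's description but are placeholders; this is not the authors' data or code.

```python
# Sketch of an SVM model that predicts next month's direction of a commodity
# futures portfolio from macroeconomic indicators; file and column names are hypothetical.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

data = pd.read_csv("commodity_macro_monthly.csv")          # hypothetical monthly data
X = data.drop(columns=["portfolio_return"])                 # 19 macroeconomic indicators
y = (data["portfolio_return"].shift(-1) > 0).astype(int)    # next month's direction (up=1)

X, y = X.iloc[:-1], y.iloc[:-1]                              # drop the last unlabeled row
X_train, X_test = X.iloc[:195], X.iloc[195:]                 # first 195 months for training
y_train, y_test = y.iloc[:195], y.iloc[195:]

scaler = StandardScaler().fit(X_train)
model = SVC(kernel="rbf")                                    # kernel choice is tunable
model.fit(scaler.transform(X_train), y_train)
print("directional accuracy:", model.score(scaler.transform(X_test), y_test))
```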

The Research on Online Game Hedonic Experience - Focusing on Moderate Effect of Perceived Complexity - (온라인 게임에서의 쾌락적 경험에 관한 연구 - 지각된 복잡성의 조절효과를 중심으로 -)

  • Lee, Jong-Ho;Jung, Yun-Hee
    • Journal of Global Scholars of Marketing Science / v.18 no.2 / pp.147-187 / 2008
  • Online game researchers have focused on flow and the factors influencing it. Flow is conceptualized as an optimal experience state and is useful for explaining online game experience. Many game studies have examined customer loyalty and flow in online game play; however, flow does not capture the multidimensional process of game experience, because it describes the result of absorption rather than the process. Hence, flow is not adequate for examining the multidimensional experience of games. Online gaming is a form of hedonic consumption. Hedonic consumption is a relatively new field of study in consumer research that explores the consumption experience from an experiential view (Hirschman and Holbrook 1982): not as an information-processing event but as a primarily subjective, phenomenological state. It includes various playful leisure activities, sensory pleasures, daydreams, esthetic enjoyment, and emotional responses. Online game experience is therefore best approached through the experiential view of hedonic consumption. The objective of this paper is to address gaps in our understanding of online game experience by developing a framework for better insight into the hedonic experience of online games. We developed this framework by integrating and extending existing research in marketing, online games, and hedonic responses, and we discuss several expectations for it. We conclude by discussing the results of this study and providing general recommendations and directions for future research. In hedonic response research, Lacher's (1994) work and the research of Jongho Lee and Yunhee Jung (2005, 2006) served as the fundamental starting point for our study. A common element in this research stream is the repeated identification of four hedonic responses: the sensory, imaginal, emotional, and analytical responses. The validity of these four constructs has been established in research on music (Lacher 1994) and movies (Lee and Jung 2005, 2006). However, previous research on hedonic responses did not show cause-and-effect relations among the constructs, and although hedonic responses can differ by stimulus properties, the effects of stimulus properties were not examined. To fill this gap, while largely following Lacher (1994) and Lee and Jung (2005, 2006), we made several important adaptations with the primary goal of bringing the model into the online game context and compensating for the gaps in previous research. We maintained the four hedonic response constructs proposed by Lacher (1994): sensory, imaginal, emotional, and analytical. In this study, the sensory response is typified by physical movement (Yingling 1962); the imaginal response is typified by the images, memories, or situations that the game evokes (Myers 1914); the emotional response represents the feelings one experiences when playing the game, such as pleasure, arousal, and dominance; and the analytical response is the player's engagement in cognition seeking while playing the game (Myers 1912). This paper, however, has several important differences: we model the multidimensional experience process in online games and the cause-and-effect relations among hedonic responses, and we investigate the moderating effect of perceived complexity.
Previous studies of hedonic responses did not examine the influence of stimulus properties. According to Berlyne's (1960, 1974) theory of aesthetic response, perceived complexity is an important construct because it affects pleasure: pleasure in response to an object increases with complexity up to an optimal level, beyond which further increases in complexity reduce pleasure. We therefore expected perceived complexity to influence hedonic responses in game experience. We discuss the rationale for these suggested changes and the assumptions of the resulting framework, and we develop expectations based on its application in the online game context. In the first stage of the methodology, questions were developed to measure the constructs. We constructed a survey measuring our theoretical constructs based on a combination of sources, including Yingling (1962), Hargreaves (1962), Lacher (1994), Jongho Lee and Yunhee Jung (2005, 2006), Mehrabian and Russell (1974), and Pucely et al. (1987). Based on comments received in the pretest, we made several revisions to arrive at our final survey. We investigated the proposed framework through a convenience sample, soliciting participation in a self-report survey from respondents with different levels of knowledge. All respondents participated to different degrees in these habitually practiced activities and received no compensation for their participation. Questionnaires were distributed to graduates, and 381 completed questionnaires were used for the analysis. The sample consisted of more men (n=225) than women (n=156). The study used multi-item scales based on previous studies. We analyzed the data using structural equation modeling (LISREL 8; Joreskog and Sorbom 1993). First, we used the entire sample (n=381) to refine the measures and test their convergent and discriminant validity; the evidence from both the factor analysis and the reliability analysis supports the internal consistency and construct validity of the scales. Second, we tested the hypothesized structural model, and we then divided the sample into two complexity groups and estimated the hypothesized structural model for each group. The analysis suggests that hedonic responses play roles different from those hypothesized in our study. The results indicate that the hedonic responses (sensory, imaginal, emotional, and analytical) are positively related to respondents' level of game satisfaction, and that game satisfaction is related to higher levels of game loyalty. Additionally, we found that perceived complexity is important to online game experience: the importance of each hedonic response differs by perceived game complexity. Understanding the role of perceived complexity in hedonic responses enables a better understanding of the underlying mechanisms of game experience. If a game has high complexity, the analytical response becomes important, so game producers or marketers should consider more cognitive stimuli; conversely, if a game has low complexity, the sensory response becomes important. Finally, we discuss several limitations of our study, suggest directions for future research, and conclude with a discussion of managerial implications. Our study provides managers with a basis for game strategies.
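
As a simplified illustration of the moderation idea (not the LISREL structural equation analysis the paper actually uses), one can test whether perceived complexity changes the effect of a hedonic response on satisfaction with an interaction term and a median split; all variable names and the data file below are hypothetical.

```python
# Simplified moderation check, assuming a survey DataFrame with the columns named
# below.  This is a regression sketch, not the multi-group LISREL analysis above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("game_survey.csv")                     # hypothetical survey data

# A significant interaction coefficient suggests that perceived complexity
# moderates the analytical response's effect on satisfaction.
interaction = smf.ols("satisfaction ~ analytical_response * perceived_complexity",
                      data=df).fit()
print(interaction.summary())

# The paper instead splits respondents into low- and high-complexity groups and
# estimates the structural model separately; a median split mimics that grouping.
high = df["perceived_complexity"] > df["perceived_complexity"].median()
for name, group in (("low complexity", df[~high]), ("high complexity", df[high])):
    fit = smf.ols("satisfaction ~ analytical_response", data=group).fit()
    print(name, round(fit.params["analytical_response"], 3))
```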

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • Corporate defaults have a ripple effect on the local and national economy, in addition to affecting stakeholders such as managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol' enterprises, went bankrupt. Even after that, analyses of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on a few main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to protect diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables used in predicting corporate defaults vary over time: Deakin's (1972) study shows that the major factors affecting corporate failure identified in Beaver's (1967, 1968) and Altman's (1968) analyses have changed, and Grice's (2001) study, using Zmijewski's (1984) and Ohlson's (1980) models, also found that the importance of predictive variables shifts. However, past studies use static models and mostly do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series analysis algorithm that reflects dynamic change. Focusing on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a time series deep learning model on the data before the financial crisis (2000~2006). Parameter tuning of the existing models and of the deep learning time series algorithm is conducted on validation data that include the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the training results and excellent prediction power. After that, each bankruptcy prediction model is re-estimated by combining the training and validation data (2000~2008) and applying the optimal parameters found in validation. Finally, each corporate default prediction model is evaluated and compared on the test data (2009) using the models trained over the nine years, which demonstrates the usefulness of the corporate default prediction model based on the deep learning time series algorithm. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model based on these three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). The independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data pose the problems of nonlinear variables, multi-collinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and has greater prediction power. Through the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study of deep learning time series analysis of corporate defaults, and we hope it will serve as comparative analysis material for non-specialists who begin studies combining financial data with deep learning time series algorithms.
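
A minimal sketch of a deep learning time-series default model in the spirit of the study is shown below, using a Keras LSTM over several years of annual financial ratios per firm; the shapes, placeholder data, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Illustrative LSTM default-prediction model; input shapes and data are placeholders.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

timesteps, n_features = 7, 20                            # e.g. 7 years of annual ratios per firm
X_train = np.random.rand(1000, timesteps, n_features)    # placeholder sequences
y_train = np.random.randint(0, 2, size=1000)             # 1 = default, 0 = non-default

model = Sequential([
    LSTM(32, input_shape=(timesteps, n_features)),       # summarizes each firm's history
    Dense(1, activation="sigmoid"),                       # probability of default
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2)
```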

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical auxiliary indices. However, pattern analysis is difficult and has been computerized less than users need. In recent years there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be vulnerable in practice because whether the discovered patterns are suitable for trading is a separate matter. In those studies, when a meaningful pattern is found, a point matching the pattern is located and performance is measured after n days, assuming a purchase at that point; since this approach calculates virtual revenues, there can be many discrepancies with reality. Existing research tries to find patterns with stock price prediction power, whereas this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be identified by five turning points. Despite reports that some of these patterns have price predictability, there are no performance reports from actual market use. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern recognition accuracy. In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement is realistic because it assumes that both the buy and the sell have actually been executed. We tested three ways to calculate the turning points. The first method, the minimum change rate zig-zag method, removes price movements below a certain percentage and calculates the vertices. In the second method, the high-low line zig-zag, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third method, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in the test results, which we interpret as meaning that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of cases in this simulation was too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the test section and the application section separately, so we were able to respond appropriately to market changes. In this study we optimize at the portfolio level, because optimizing the variables for each individual stock carries a risk of over-optimization; we therefore selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap stock portfolio was the most successful and the high-volatility stock portfolio was the second best. This shows that some price volatility is needed for patterns to form, but that higher volatility is not always better.
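
The minimum change rate zig-zag method described above can be sketched as follows; the threshold and the way a turning point is "confirmed" are assumptions for illustration, not the authors' exact implementation.

```python
# Sketch of the minimum change rate zig-zag turning-point method: price moves
# smaller than `threshold` (a fraction, e.g. 3%) are ignored, and the surviving
# reversal points become candidate turning points for M and W patterns.
def zigzag_turning_points(prices, threshold=0.03):
    pivots = [0]            # indices of confirmed turning points
    direction = 0           # +1 on a rising leg, -1 on a falling leg, 0 undecided
    extreme = 0             # index of the running high/low of the current leg
    for i in range(1, len(prices)):
        change = (prices[i] - prices[extreme]) / prices[extreme]
        if direction >= 0 and prices[i] > prices[extreme]:
            extreme, direction = i, 1          # new high on a rising leg
        elif direction <= 0 and prices[i] < prices[extreme]:
            extreme, direction = i, -1         # new low on a falling leg
        elif direction == 1 and change <= -threshold:
            pivots.append(extreme)             # peak confirmed by a big enough drop
            extreme, direction = i, -1
        elif direction == -1 and change >= threshold:
            pivots.append(extreme)             # valley confirmed by a big enough rise
            extreme, direction = i, 1
    return pivots

# Example usage: indices of detected turning points in a toy price series.
print(zigzag_turning_points([100, 105, 101, 108, 104, 110, 105]))
```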

Preliminary Report of the 1998~1999 Patterns of Care Study of Radiation Therapy for Esophageal Cancer in Korea (식도암 방사선 치료에 대한 Patterns of Care Study (1998~1999)의 예비적 결과 분석)

  • Hur, Won-Joo;Choi, Young-Min;Lee, Hyung-Sik;Kim, Jeung-Kee;Kim, Il-Han;Lee, Ho-Jun;Lee, Kyu-Chan;Kim, Jung-Soo;Chun, Mi-Son;Kim, Jin-Hee;Ahn, Yong-Chan;Kim, Sang-Gi;Kim, Bo-Kyung
    • Radiation Oncology Journal / v.25 no.2 / pp.79-92 / 2007
  • Purpose: For the first time, a nationwide survey was conducted in the Republic of Korea to determine the basic parameters of esophageal cancer treatment and to offer a solid cooperative system for the Korean Patterns of Care Study database. Materials and Methods: During 1998~1999, 246 biopsy-confirmed esophageal cancer patients who received radiotherapy were enrolled from 23 institutions in South Korea. Random sampling was based on the power allocation method. Patient parameters and specific information regarding tumor characteristics and treatment methods were collected and registered through the web-based PCS system. The data were analyzed with the chi-squared test. Results: The median age of the enrolled patients was 62 years. The male to female ratio was about 91 to 9, an absolute male predominance. The performance status was ECOG 0 to 1 in 82.5% of the patients. Diagnostic procedures included an esophagogram (228 patients, 92.7%), endoscopy (226 patients, 91.9%), and a chest CT scan (238 patients, 96.7%). Squamous cell carcinoma was diagnosed in 96.3% of the patients; mid-thoracic esophageal cancer was most prevalent (110 patients, 44.7%), and 135 patients presented with clinical stage III disease. Fifty-seven patients received radiotherapy alone and 37 patients received surgery with adjuvant postoperative radiotherapy. Half of the patients (123) received chemotherapy together with RT, and 70 of them (56.9%) received it as concurrent chemoradiotherapy. The most frequently used chemotherapy was a combination of cisplatin and 5-FU. Most patients received radiotherapy with either 6 MV (116 patients, 47.2%) or 10 MV photons (87 patients, 35.4%). Radiotherapy was delivered through a conventional AP-PA field in 206 patients (83.7%) without a CT-based plan, and the median delivered dose was 3,600 cGy. The median total dose of postoperative radiotherapy was 5,040 cGy, while for the non-operative patients the median total dose was 5,970 cGy. Thirty-four patients received intraluminal brachytherapy with high-dose-rate Iridium-192; brachytherapy was delivered at a median dose of 300 cGy per fraction, typically in 3~4 fractions. The most frequent complication during radiotherapy was esophagitis, in 155 patients (63.0%). Conclusion: This study will provide guidelines and benchmark data for the evaluation and treatment of esophageal cancer patients at radiation facilities in Korea and for the solid cooperative systems of the Korean PCS. Although some differences were noted between institutions, there was no major difference in treatment modalities and RT techniques.

Development of a Device for Estimating the Optimal Artificial Insemination Time of Individually Stalled Sows Using Image Processing (영상처리기법을 이용한 스톨 사육 모돈의 인공수정적기 예측 장치 개발)

  • Kim, D.J.;Yeon, S.C.;Chang, H.H.
    • Journal of Animal Science and Technology / v.49 no.5 / pp.677-688 / 2007
  • Most animals, including pigs, are spontaneous ovulators that have a regular estrous cycle and ovulate at a fixed time, whereas females of species such as rabbits, cats, and mink are induced ovulators in which ovulation is triggered by mating stimulation. There are also monoestrous animals that come into estrus only once a year and polyestrous animals that come into estrus several times a year. Sows are polyestrous animals that come into estrus several times a year, and during estrus they show behaviors different from those in the non-estrous period (Diehl et al., 2001). To maximize the profit of pig farms, non-productive days must be minimized. One way to reduce the non-productive days of sows is successful breeding, and for breeding to succeed, the optimal insemination time must be predicted accurately. If conception fails because the optimal insemination time is misjudged, non-productive days increase and losses occur. Accurately determining the optimal insemination time is therefore a key factor in the successful artificial insemination of sows. The optimal insemination time is 10 to 12 hours before ovulation; measured from the onset of estrus, it is 26 to 34 hours for multiparous sows and 18 to 26 hours for gilts (Evans et al., 2001). Currently, it is common to check sows for estrus twice a day, judging estrus by boar contact or visual observation. This approach requires not only skilled technique and extensive experience but also about 30% of the total labor (Perez et al., 1986). Because estrus is detected only twice a day, the exact onset of estrus cannot be known, and since most estrus begins at dawn, it is very difficult to determine the optimal insemination time accurately. Even if estrus is detected, failure to inseminate at the right time lowers the conception rate and causes economic losses. Because of these problems, artificial insemination is currently performed two or three times, but the resulting costs and labor add to the burden on pig farms. During estrus, pigs show behaviors not seen in the non-estrous period, such as sniffing the vulva, pricking up the ears, and standing to be mounted (Diehl et al., 2001), and pigs are more active during estrus than during the non-estrous period (Altman, 1941; Erez and Hartsock, 1990). Freson et al. (1998) reported detecting up to 86% of estrus events by measuring the activity of individually stalled sows with infrared sensors; however, that study only detected estrus and did not provide a criterion for determining the optimal insemination time, the most important element of breeding management. Therefore, this study was conducted to develop a device that detects the onset of estrus by measuring the activity of individually stalled sows, predicts the optimal artificial insemination time based on that onset, and then to test the device's performance in an on-farm experiment.
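
A hedged sketch of the activity-measurement idea follows: consecutive video frames of a stall are differenced and the count of changed pixels is used as an activity score, whose sustained rise marks the onset of estrus. The video source, threshold, and decision rule are assumptions for illustration; the actual device in the paper may work differently.

```python
# Sketch of measuring a stalled sow's activity by frame differencing with OpenCV.
import cv2

cap = cv2.VideoCapture("sow_stall.avi")            # hypothetical recording of one stall
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
activity = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                          # pixel-wise frame difference
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    activity.append(cv2.countNonZero(motion))               # changed pixels = activity score
    prev = gray
cap.release()

# If the activity level rises sharply and stays high, treat that time as the onset
# of estrus; the optimal AI window for a multiparous sow would then be roughly
# 26~34 hours later, per Evans et al. (2001) as cited in the abstract.
```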

The Role of Social Capital and Identity in Knowledge Contribution in Virtual Communities: An Empirical Investigation (가상 커뮤니티에서 사회적 자본과 정체성이 지식기여에 미치는 역할: 실증적 분석)

  • Shin, Ho Kyoung;Kim, Kyung Kyu;Lee, Un-Kon
    • Asia pacific journal of information systems / v.22 no.3 / pp.53-74 / 2012
  • A challenge in fostering virtual communities is the continuous supply of knowledge, namely members' willingness to contribute knowledge to their communities. Previous research argues that giving away knowledge eventually causes the possessors of that knowledge to lose their unique value to others, benefiting all except the contributor. Furthermore, communication within virtual communities involves a large number of participants with different social backgrounds and perspectives. The establishment of mutual understanding to comprehend conversations and foster knowledge contribution in virtual communities is inevitably more difficult than face-to-face communication in a small group. In spite of these arguments, evidence suggests that individuals in virtual communities do engage in social behaviors such as knowledge contribution. It is important to understand why individuals provide their valuable knowledge to other community members without a guarantee of returns. In virtual communities, knowledge is inherently rooted in individual members' experiences and expertise. This personal nature of knowledge requires social interactions between virtual community members for knowledge transfer. This study employs the social capital theory in order to account for interpersonal relationship factors and identity theory for individual and group factors that may affect knowledge contribution. First, social capital is the relationship capital which is embedded within the relationships among the participants in a network and available for use when it is needed. Social capital is a productive resource, facilitating individuals' actions for attainment. Nahapiet and Ghoshal (1997) identify three dimensions of social capital and explain theoretically how these dimensions affect the exchange of knowledge. Thus, social capital would be relevant to knowledge contribution in virtual communities. Second, existing research has addressed the importance of identity in facilitating knowledge contribution in a virtual context. Identity in virtual communities has been described as playing a vital role in the establishment of personal reputations and in the recognition of others. For instance, reputation systems that rate participants in terms of the quality of their contributions provide a readily available inventory of experts to knowledge seekers. Despite the growing interest in identities, however, there is little empirical research about how identities in the communities influence knowledge contribution. Therefore, the goal of this study is to better understand knowledge contribution by examining the roles of social capital and identity in virtual communities. Based on a theoretical framework of social capital and identity theory, we develop and test a theoretical model and evaluate our hypotheses. Specifically, we propose three variables such as cohesiveness, reciprocity, and commitment, referring to the social capital theory, as antecedents of knowledge contribution in virtual communities. We further posit that members with a strong identity (self-presentation and group identification) contribute more knowledge to virtual communities. We conducted a field study in order to validate our research model. We collected data from 192 members of virtual communities and used the PLS method to analyse the data. The tests of the measurement model confirm that our data set has appropriate discriminant and convergent validity. 
The results of testing the structural model show that cohesion, reciprocity, and self-presentation significantly influence knowledge contribution, while commitment and group identification do not significantly influence knowledge contribution. Our findings on cohesion and reciprocity are consistent with the previous literature. Contrary to our expectations, commitment did not significantly affect knowledge contribution in virtual communities. This result may be due to the fact that knowledge contribution was voluntary in the virtual communities in our sample. Another plausible explanation for this result may be the self-selection bias for the survey respondents, who are more likely to contribute their knowledge to virtual communities. The relationship between self-presentation and knowledge contribution was found to be significant in virtual communities, supporting the results of prior literature. Group identification did not significantly affect knowledge contribution in this study, inconsistent with the wealth of research that identifies group identification as an important factor for knowledge sharing. This conflicting result calls for future research that examines the role of group identification in knowledge contribution in virtual communities. This study makes a contribution to theory development in the area of knowledge management in general and virtual communities in particular. For practice, the results of this study identify the circumstances under which individual factors would be effective for motivating knowledge contribution to virtual communities.
