

Seasonal Variations of Microphytobenthos in Sediments of the Estuarine Muddy Sandflat of Gwangyang Bay: HPLC Pigment Analysis (광합성색소 분석을 통한 광양만 갯벌 퇴적물 중 저서미세조류의 계절변화)

  • Lee, Yong-Woo;Choi, Eun-Jung;Kim, Young-Sang;Kang, Chang-Keun
    • The Sea: Journal of the Korean Society of Oceanography, v.14 no.1, pp.48-55, 2009
  • Seasonal variations of microalgal biomass and community composition in both the sediment and the seawater were investigated by HPLC pigment analysis in an estuarine muddy sandflat of Gwangyang Bay from January to November 2002. Among the photosynthetic pigments, fucoxanthin, diadinoxanthin, and diatoxanthin were the most dominant all year round, indicating that diatoms were the predominant algal group in both the sediment and the seawater of Gwangyang Bay. Algal pigments other than the diatom-marker pigments showed relatively low concentrations. Microphytobenthic chlorophyll a concentrations in the upper layer (0.5 cm) of sediments ranged from 3.44 (March at the middle site of the tidal flat) to 169 (July at the upper site) mg m⁻², with annual mean concentrations of 68.4±45.5, 21.3±14.3, and 22.9±15.6 mg m⁻² at the upper, middle, and lower tidal sites, respectively. Depth-integrated chlorophyll a concentrations in the overlying water column ranged from 1.66 (November) to 11.7 (July) mg m⁻², with an annual mean of 6.96±3.04 mg m⁻². Microphytobenthic biomass was about 3~10 times higher than the depth-integrated phytoplankton biomass in the overlying water column. The physical characteristics of this shallow estuarine tidal flat, the similarity in taxonomic composition of the phytoplankton and microphytobenthos, and the similar seasonal patterns in their biomasses suggest that resuspended microphytobenthos are an important component of phytoplankton biomass in Gwangyang Bay. Therefore, considering the importance of microphytobenthos as a possible food source for estuarine benthic and pelagic consumers, consistent monitoring of the behavior of microphytobenthos in tidal flat ecosystems is needed.
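The benthic-to-planktonic biomass ratio reported above can be reproduced with a short calculation. The monthly values below are illustrative placeholders (the abstract gives only ranges and annual means), not the study's data:

```python
import numpy as np

# Hypothetical monthly chlorophyll-a values (mg m^-2), chosen only to fall
# inside the ranges the abstract reports; they are not the measured data.
benthic_upper = np.array([40.2, 3.44, 55.0, 90.1, 169.0, 120.5, 30.3, 38.9])
water_column = np.array([5.1, 1.66, 6.3, 9.8, 11.7, 10.2, 4.0, 6.9])

# Annual mean +/- sample standard deviation, in the abstract's format
mean, sd = benthic_upper.mean(), benthic_upper.std(ddof=1)
print(f"benthic chl-a: {mean:.1f} +/- {sd:.1f} mg m^-2")

# Ratio of benthic biomass to depth-integrated planktonic biomass,
# the "about 3~10 times higher" comparison made in the abstract
ratio = benthic_upper.mean() / water_column.mean()
print(f"benthic/planktonic ratio: {ratio:.1f}")
```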

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems, v.20 no.2, pp.63-86, 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflowing is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, for example over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of a need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous studies of information privacy factors in context-aware applications have at least two shortcomings. First, there has been little overview of the technical characteristics of context-aware computing; existing studies have focused on only a small subset of those characteristics. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limits of users' knowledge of and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge of context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Therefore, conducting a survey on the assumption that the participants have sufficient experience with or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider overall technology characteristics in order to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for Delphi studies proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For the brainstorming round only, experts were treated as individuals, not as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid this, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor in order to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important of the main factors and sub-factors, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ in the following ways from those proposed in other studies. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. Our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent of users' privacy concerns. A traditional user questionnaire was not selected because users were considered to lack understanding of and experience with this new technology. To understand users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensory networks as the most important of the technological characteristics of context-aware personalized services.
In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and of deciding which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, presupposing the continued development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following the evaluation of the sub-factors, additional studies would be necessary on approaches for reducing users' privacy concerns toward technological characteristics such as a highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next highest in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display presenting services to users in context-aware personalized services, under the anywhere-anytime-any-device concept, have come to be regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
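The concordance analysis described above is commonly implemented as Kendall's coefficient of concordance W; a minimal sketch (the abstract does not name the exact statistic, so W is an assumption here, and the panel rankings below are invented for illustration):

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for m raters ranking n items.

    ranks: (m, n) array where each row is one expert's ranking (1..n).
    W ranges from 0 (no agreement) to 1 (perfect agreement).
    """
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)                       # column totals
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()     # spread of totals
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Illustrative rankings by 3 experts over 4 privacy-concern factors
panel = [[1, 2, 3, 4],
         [1, 3, 2, 4],
         [2, 1, 3, 4]]
print(kendalls_w(panel))            # high W -> strong group consensus
mean_rank = np.mean(panel, axis=0)  # mean rank per factor, as in round 3
print(mean_rank)
```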

Study on the Genetic Variations of the Economic Traits by Backcrossing in Commercial Chickens (실용계군에 있어서 누진퇴교배에 의한 주요경제형질의 유전적 변이에 관한 연구)

  • 이종극;오봉국
    • Korean Journal of Poultry Science, v.16 no.2, pp.61-71, 1989
  • The purpose of this study was to investigate the genetic variation introduced by backcrossing in commercial chickens. Backcrossing was carried out successively back to the parent stock (P.S.). Heritabilities and genetic correlation coefficients were estimated to verify the genetic variation. The data, obtained from a breeding programme with commercial chickens (I strain), were collected from 1955 to 1987 at the Poultry Breeding Farm, Seoul National University, and came from a total of 1,230 female offspring. The results are summarized as follows: 1. The general performance (mean ± standard deviation) of each trait was 663.94±87.11 g for 8-week body weight, 1579.1±155.43 g for 20-week body weight, 2124.1±215.3 g for 40-week body weight, 2269.1±242.94 g for 60-week body weight, 168.43±12.94 days for age at sexual maturity (SM), 214.52±29.82 eggs for total egg number to 60 weeks of age (TEN), 61.45±3.48 g for average egg weight (AEW), and 13180.7±1823.22 g for total egg mass to 60 weeks of age (TEM). All traits, except 10-week body weight and AEW, were significant for the degree of backcross (p<0.01). 2. The pooled estimates of heritabilities derived from the sire, dam, and combined variance components were 0.47~0.52 for age at sexual maturity (SM), 0.07~0.37 for total egg number (TEN), 0.40~0.54 for average egg weight (AEW), and 0.18~0.27 for total egg mass (TEM). High heritability estimates were found for SM and AEW, while TEN and TEM were estimated to be lowly heritable traits. Heritability estimates from dam components were higher than those from sire components; these differences might be due to non-additive genetic effects and maternal effects. 3.
The estimates of heritabilities and standard errors derived from the combined variance components for different degrees of backcross were 0.47±0.11 (BC0), 0.42±0.16 (BC1), and 0.51±0.29 (BC2) for TEN; 0.59±0.20 (BC0), 0.43±0.17 (BC1), and 0.35±0.18 (BC2) for AEW; and 0.28±0.12 (BC0), 0.20±0.11 (BC1), and 0.18±0.14 (BC2) for TEM. Heritability estimates for AEW and TEM decreased with backcrossing, while those for SM and TEN remained constant. Since backcrossing contributes to increased homozygosity, the genetic variation of these traits (AEW and TEM) decreased. 4. The pooled estimates of the genetic correlation coefficients were -0.55 between SM and TEN, 0.20 between SM and AEW, -0.29 between TEN and AEW, 0.82 between TEM and TEN, 0.31 between TEM and AEW, and -0.42 between TEM and SM. The genetic correlation between TEM and TEN was higher than that between TEM and AEW, suggesting that egg mass was strongly affected by egg number. Age at sexual maturity (SM) also contributes to egg mass (TEM). 5. With successive backcrosses, the genetic correlation between TEM and TEN increased (BC0: 0.79, BC1: 0.82, BC2: 0.91) while that between TEM and SM decreased (BC0: -0.54, BC1: -0.36, BC2: -0.09).
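The sire, dam, and combined heritability estimates above follow the standard half-sib variance-component formulas; a minimal sketch (the variance components below are invented for illustration, not the paper's data, and the paper's exact estimation model is not spelled out in the abstract):

```python
# Heritability from a sire-dam variance-component analysis.
# sigma_P^2 = sigma_sire^2 + sigma_dam^2 + sigma_residual^2

def h2_sire(var_sire, var_dam, var_residual):
    """h^2 from the sire component: 4 * sigma_s^2 / sigma_P^2."""
    return 4.0 * var_sire / (var_sire + var_dam + var_residual)

def h2_dam(var_sire, var_dam, var_residual):
    """h^2 from the dam component: 4 * sigma_d^2 / sigma_P^2.
    Inflated by maternal and non-additive effects, which is why dam
    estimates exceed sire estimates, as noted in the abstract."""
    return 4.0 * var_dam / (var_sire + var_dam + var_residual)

def h2_combined(var_sire, var_dam, var_residual):
    """h^2 from combined components: 2 * (sigma_s^2 + sigma_d^2) / sigma_P^2."""
    return 2.0 * (var_sire + var_dam) / (var_sire + var_dam + var_residual)

# Illustrative components for an egg-weight-like trait (sigma_P^2 = 100)
print(h2_sire(10.0, 14.0, 76.0))      # 0.40
print(h2_dam(10.0, 14.0, 76.0))       # 0.56  (> sire estimate)
print(h2_combined(10.0, 14.0, 76.0))  # 0.48
```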


Home Economics teachers' concern on creativity and personality education in Home Economics classes: Based on the concerns based adoption model(CBAM) (가정과 교사의 창의.인성 교육에 대한 관심과 실행에 대한 인식 - CBAM 모형에 기초하여-)

  • Lee, In-Sook;Park, Mi-Jeong;Chae, Jung-Hyun
    • Journal of Korean Home Economics Education Association, v.24 no.2, pp.117-134, 2012
  • The purpose of this study was to identify the stages of concern, the levels of use, and the innovation configuration of Home Economics teachers regarding creativity and personality education in Home Economics (HE) classes. Survey questionnaires were sent by mail and e-mail to middle-school HE teachers across the country, selected by systematic sampling and convenience sampling. The questionnaires on the stages of concern and the levels of use developed by Hall (1987) were used, and 187 responses were used for the final analysis with the SPSS/Windows (12.0) program. The results of the study were as follows. First, for the stages of concern of HE teachers about creativity and personality education, the information stage of concern (85.51) scored highest, followed by the awareness stage (82.15), the management stage (81.88), the refocusing stage (68.80), the collaboration stage (61.97), and the consequence stage (59.76). Second, the levels of use of HE teachers regarding creativity and personality education were highest at the mechanical level (level 3; 21.4%), followed by the orientation level (level 1; 20.9%), the refinement level (level 5; 17.1%), the non-use level (level 0; 15.0%), the preparation level (level 2; 10.2%), the integration level (level 6; 5.9%), the renewal level (level 7; 4.8%), and the routine level (level 4; 4.8%). Third, regarding the innovation configuration of HE teachers on creativity and personality education, more than half of the HE teachers (56.1%) mainly focused on personality education in their HE classes; 31.0% performed both creativity and personality education; a small number (6.4%) focused on creativity education; and the same number (6.4%) responded that they focus on neither of the two.
Examining the level and type of performance HE teachers applied, the average score on the performance of creativity and personality education was 3.76 out of 5.00; the mean of the creativity component was 3.59 and that of the personality component was 3.94, higher than the standard. For creativity education, openness/sensitivity (3.97) was performed most, followed by problem-solving skill (3.79), curiosity/interest (3.73), critical thinking (3.63), problem-finding skill (3.61), originality (3.57), analogy (3.47), fluency/adaptability (3.46), precision (3.46), imagination (3.37), and focus/sympathy (3.37). For personality education, the components were performed in the following order, from most to least: power of execution (4.07), cooperation/consideration/justice (4.06), self-management skill (4.04), civic consciousness (4.04), career development ability (4.03), environmental adaptability (3.95), responsibility/ownership (3.94), decision making (3.89), trust/honesty/promise (3.88), autonomy (3.86), and global competency (3.55). Regarding what makes performing creativity and personality education difficult, most HE teachers (64.71%) chose the lack of instructional materials, and 40.11% chose the lack of seminar and workshop opportunities; 38.5% chose the difficulty of developing evaluation criteria or an evaluation tool, while 25.67% responded that they do not know any means of performing creativity and personality education.
Regarding the better way to support for creativity and personality education, the HE teachers chose in order from most to least: 'expansion of hands-on activities for students related to education on creativity and personality'(4.34), 'development of HE classroom culture putting emphasis on creativity and personality'(4.29), 'a proper curriculum on creativity and personality education that goes along with students' developmental stages'(4.27), 'securing enough human resource and number of professors who will conduct creativity and personality education'(4.21), 'establishment of the concept and value of the education on creativity and personality'(4.09), and 'educational promotion on creativity and personality education supported by local communities and companies'(3.94).


Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.47-67, 2017
  • Steel plate faults are among the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks for faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, and it can cause judgment errors above 30%. Therefore, an accurate steel plate faults diagnosis system has been continuously required in the industry. To meet this need, this study proposed a new steel plate faults diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis-Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification because of its low accuracy there; the reason is that only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' implies comparing Mahalanobis distances at the same time. The proposed steel plate faults diagnosis system was developed in four main stages. In the first stage, after the various reference groups and related variables are defined, data on the steel plate faults are collected and used to establish the individual Mahalanobis space for each reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups; the appropriateness of the spaces is then verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the Signal-to-Noise (SN) ratio of the dynamic type are applied for variable optimization.
The overall SN ratio gain is then derived from the SN ratios. If the derived overall SN ratio gain is negative, the variable should be removed; a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale is reconstructed from the selected useful variables. Next, an experimental test is implemented to verify the multi-class classification ability and thus obtain the classification accuracy; if the accuracy is acceptable, the diagnosis system can be used for future applications. This study also compared the accuracy of the proposed steel plate faults diagnosis system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. As a result, the proposed diagnosis system based on S-MTS shows 90.79% classification accuracy, 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has enough classification performance to be applied in the industry. In addition, the variable optimization process allows the proposed system to reduce the number of measurement sensors installed in the field. These results show that the proposed system not only performs well at steel plate fault diagnosis but can also reduce operation and maintenance costs.
In future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve its accuracy based on the results.
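The per-class Mahalanobis-space idea at the heart of S-MTS can be sketched as follows. This is a simplified illustration: the class names and data are invented, and the orthogonal-array/SN-ratio variable optimization described above is omitted.

```python
import numpy as np

def fit_spaces(groups):
    """groups: {label: (n_samples, n_vars) array of reference data}.
    Builds one Mahalanobis space (mean vector, inverse covariance) per class,
    as S-MTS does for each reference group."""
    spaces = {}
    for label, X in groups.items():
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        spaces[label] = (mu, np.linalg.pinv(cov))  # pinv guards singularity
    return spaces

def classify(x, spaces):
    """Compare squared Mahalanobis distances simultaneously across all
    classes and assign the sample to the nearest space."""
    def md2(label):
        mu, icov = spaces[label]
        d = x - mu
        return d @ icov @ d
    return min(spaces, key=md2)

# Two toy fault classes with well-separated reference groups
rng = np.random.default_rng(0)
groups = {
    "scratch": rng.normal(0.0, 1.0, (60, 4)),
    "bump":    rng.normal(3.0, 1.0, (60, 4)),
}
spaces = fit_spaces(groups)
print(classify(np.array([2.9, 3.1, 3.0, 2.8]), spaces))  # bump
```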

Comparison of the Operative Results of Performing Endoscopic Robot Assisted Minimally Invasive Surgery Versus Conventional Cardiac Surgery (수술용 내시경 로봇(AESOP)을 이용한 최소 침습적 개심술과 동 기간에 시행된 전통적인 개심술의 결과에 대한 비교)

  • Lee, Young-Ook;Cho, Joon-Yong;Lee, Jong-Tae;Kim, Gun-Jik
    • Journal of Chest Surgery, v.41 no.5, pp.598-604, 2008
  • Background: Improvements in endoscopic equipment and surgical robots have encouraged the performance of minimally invasive cardiac operations, yet only a few Korean studies have compared this procedure with the sternotomy approach. Material and Method: Between December 2005 and July 2007, 48 patients (group A) underwent minimally invasive cardiac surgery with AESOP through a small right thoracotomy. During the same period, 50 patients (group B) underwent conventional surgery. We compared the operative time, the operative results, the post-operative pain, and the recovery of both groups. Result: There was no hospital mortality, and there were no significant differences in the incidence of operative complications between the two groups. The operative time (292.7±61.7 vs 264.0±47.9 min; p=0.01) and CPB time (128.4±37.6 vs 101.7±32.5 min; p<0.01) were longer for group A, whereas there was no difference in the aortic cross-clamp time (82.1±35.0 vs 87.8±113.5 min; p=0.74) or ventilator time (18.0±18.4 vs 19.7±9.7 hr; p=0.57) between the groups. The ICU stay (53.2±40.2 vs 72.8±42.1 hr; p=0.02) and the hospitalization time (9.7±7.2 vs 14.8±11.9 days; p=0.01) were shorter for group A. The patients in group B had more transfusions, but the difference was not significant. Over the postoperative intervals, which ranged from one to four weeks, the pain score was significantly lower for the patients of group A than for those of group B. In terms of postoperative activity, measured by the Duke Activity Scale questionnaire, the functional status score was clearly higher for group A than for group B.
The analysis showed no difference in the severity of either post-repair mitral regurgitation (0.7±1.0 vs 0.9±0.9; p=0.60) or tricuspid regurgitation (1.0±0.9 vs 1.1±1.0; p=0.89). In both groups there were no valve-related complications, except for one patient with paravalvular leakage in each group. Conclusion: These results show that, compared with the median sternotomy patients, the patients who underwent minimally invasive surgery enjoyed significant postoperative advantages such as less pain, a more rapid return to full activity, improved cosmesis, and a reduced hospital stay. Minimally invasive surgery can be performed with clinical safety similar to that of conventional surgery through a median sternotomy.
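The group comparisons above report means, standard deviations, and p-values. The abstract does not name the test used; Welch's two-sample t-test from summary statistics is one plausible reconstruction, shown here for the ICU-stay comparison with the reported group sizes of 48 and 50:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics.
    (Which test the study actually applied is an assumption here.)"""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# ICU stay (hours): group A 53.2 +/- 40.2 (n=48) vs group B 72.8 +/- 42.1 (n=50)
t, df = welch_t(53.2, 40.2, 48, 72.8, 42.1, 50)
print(round(t, 2), round(df, 1))  # t near -2.4, consistent with p = 0.02
```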

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems, v.18 no.4, pp.59-77, 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. We approach building the predictive models from the perspective of two different analyses. The first is the analysis period: we divide it into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon: in order to predict when firms will increase capital by issuing new stocks, the prediction time is categorized as one, two, or three years ahead. In total, therefore, six prediction models are developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method; it builds decision trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, the most recently developed, which yields better performance than the others. We obtained the rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, comprising 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices.
For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build the C5.0 decision trees for the six prediction models. A total of 84 variables from the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experimental results also show that the stability-related indices have a major impact on conducting rights issues in the case of short-term prediction, whereas long-term prediction of rights issues is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different industries show different patterns of rights issues. We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider range of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy obtained by using other data mining techniques such as neural networks, logistic regression, and SVM.
Second, new prediction models should be developed and evaluated that include variables which research on capital structure theory has identified as relevant to rights issues.
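C5.0 itself is a proprietary algorithm, but the core of decision tree induction it shares with CHAID/CART/QUEST is choosing splits by information gain. A minimal one-level "stump" sketch, with an invented toy dataset in which column 0 stands in for a stability-like index and label 1 means a rights issue:

```python
import numpy as np

def entropy(y):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_split(X, y):
    """Return (feature, threshold, gain) maximizing information gain,
    scanning every observed value of every feature as a candidate split."""
    best = (None, None, -1.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue  # degenerate split
            gain = entropy(y) - (len(left) * entropy(left)
                                 + len(right) * entropy(right)) / len(y)
            if gain > best[2]:
                best = (j, t, gain)
    return best

# Toy data: low values of the stability-like index co-occur with issuing
X = np.array([[0.2, 5.0], [0.3, 4.0], [0.8, 5.5], [0.9, 4.5]])
y = np.array([1, 1, 0, 0])
feat, thr, gain = best_split(X, y)
print(feat, thr, round(gain, 3))  # feature 0 separates the labels perfectly
```

A full tree recurses this split selection on each branch; C5.0 additionally uses gain ratio, pruning, and optional boosting.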

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.101-124, 2018
  • Recently, most technologies have developed in various forms, through the advancement of a single technology or through interaction with other technologies. In particular, these technologies have the characteristic of convergence caused by the interaction between two or more techniques. In addition, efforts to respond to technological change in advance, by forecasting the promising convergence technologies that will emerge in the near future, are continuously increasing. Accordingly, many researchers are attempting various analyses for forecasting promising convergence technologies. A convergence technology has the characteristics of the various technologies from which it is generated; therefore, forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been confirmed in attempts to forecast promising technologies using big data analysis and social network analysis. Studies of convergence technology through data analysis are actively conducted on the themes of discovering new convergence technologies and analyzing their trends, and as a result, information about new convergence technologies is being provided more abundantly than in the past. However, existing methods of analyzing convergence technology have some limitations. First, most studies dealing with convergence technology analyze data through predefined technology classifications. Technologies appearing recently tend to have the characteristics of convergence and thus consist of technologies from various fields; in other words, a new convergence technology may not belong to the defined classification. Therefore, the existing method does not properly reflect the dynamic change of the convergence phenomenon.
Second, to forecast promising convergence technologies, most existing methods rely on general-purpose indicators, which do not fully exploit the specificity of the convergence phenomenon. A new convergence technology depends heavily on the existing technologies from which it originates; depending on how those technologies change, it may grow into an independent field or disappear rapidly. In existing analyses, the growth potential of a convergence technology is judged through traditional indicators designed for general purposes, which do not reflect the principle of convergence: that new technologies emerge from two or more mature technologies, and that technologies which have grown in turn spur the creation of further technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. Because of the complexity of the field, forecasting promising convergence technologies has received relatively little attention, and it is therefore difficult to find an accepted way to evaluate model accuracy. To activate this line of research, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study. To overcome these limitations, we propose a new method for analyzing convergence technologies. First, through topic modeling, we derive a new technology classification in terms of text content, one that reflects the dynamic change of the actual technology market rather than a fixed classification standard. 
In addition, we identify influence relationships between technologies through the topic correspondence weights of each document and structure them into a network. We then devise a centrality indicator, potential growth centrality (PGC), to forecast the future growth of a technology from its centrality information; it reflects the convergence characteristics of each technology in terms of technology maturity and the interdependence between technologies. We also propose a method for evaluating the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality across periods. We conduct experiments with 13,477 patent documents dealing with technical content to evaluate the performance and practical applicability of the proposed method. The results confirm that the forecasting model based on the proposed centrality indicator achieves forecast accuracy up to about 2.88 times higher than that of models based on currently used network indicators.
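The pipeline the abstract describes (topic weights per document, an influence network between topics, then a centrality score per technology) can be sketched as follows. This is only an illustrative toy, not the authors' PGC formula: the topic labels, the weight vectors, and the weighted-degree score below are all hypothetical stand-ins.

```python
# Sketch: build a technology influence network from per-document topic
# weights, then rank technologies by a simple centrality score.
from itertools import combinations
from collections import defaultdict

# Hypothetical topic-weight vectors, as a topic model (e.g., LDA) might
# assign to patent documents: doc -> {topic: correspondence weight}.
doc_topics = {
    "patent_1": {"AI": 0.6, "IoT": 0.3, "Bio": 0.1},
    "patent_2": {"AI": 0.5, "IoT": 0.4, "Bio": 0.1},
    "patent_3": {"Bio": 0.7, "AI": 0.2, "IoT": 0.1},
}

# Link two topics whenever they co-occur in a document; the edge weight
# accumulates the product of their correspondence weights.
edges = defaultdict(float)
for weights in doc_topics.values():
    for (a, wa), (b, wb) in combinations(weights.items(), 2):
        edges[frozenset((a, b))] += wa * wb

# Toy "growth" score: weighted degree, i.e. how strongly a topic is tied
# to its neighbors. The real PGC indicator additionally factors in
# technology maturity and interdependence over time.
score = defaultdict(float)
for pair, w in edges.items():
    for topic in pair:
        score[topic] += w

ranking = sorted(score, key=score.get, reverse=True)
print(ranking)  # → ['AI', 'IoT', 'Bio']
```

Tracking how such a score shifts between periods would then give a growth-rate signal of the kind the paper uses to evaluate forecast accuracy.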

The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (재몰유선택적정황하공동특성대우고객희호적영향(在没有选择的情况下共同特性对于顾客喜好的影响): 조절초점적조절작용(调节焦点的调节作用))

  • Park, Jong-Chul;Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.1
    • /
    • pp.89-97
    • /
    • 2010
  • This study researches the effects of common features on the no-choice option with respect to regulatory focus theory. The primary interest is in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal elimination-by-aspects (EBA) model, according to which consumers focus only on unique features during comparison processing and dismiss common features as redundant information. More recently, however, provocative findings have challenged the EBA model by asserting that common features do affect consumer judgment. Chernev (1997) first reported that adding common features narrows the choice gap by increasing the perceived similarity among alternatives. Later, however, Chernev (2001) argued against his earlier perspective, proposing that common features may impose a cognitive load on consumers, who may then prefer heuristic over systematic processing. This brings one question to the forefront: do common features affect consumer choice, and if so, what are the concrete effects? This study tries to answer that question with respect to the no-choice option and regulatory focus. Second, some researchers hold that the no-choice option is itself a best alternative for consumers, who are likely to avoid choosing in the context of knotty trade-offs or mental conflicts. Hope for the future may also increase selection of the no-choice option, whether from optimism or from the expectation that a more satisfactory alternative will appear later. 
Other issues reported in this domain include time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effect between common features and regulatory focus. Third, regulatory focus theory is a very popular theme in recent marketing research. It suggests that consumers pursue two opposing focal goals: promotion versus prevention. A promotion focus involves hope, inspiration, achievement, or gain, whereas a prevention focus involves duty, responsibility, safety, or loss aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold for those with a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast body of marketing and psychology articles. Exploring consumer choice and common features from this perspective is a somewhat novel viewpoint in the regulatory focus literature. These reviews motivate our study of the possible interaction between regulatory focus and common features in the presence of a no-choice option. Specifically, adding common features rather than omitting them may raise the no-choice ratio for prevention-focused consumers but lower it for promotion-focused consumers. The reasoning is that prevention-focused consumers who encounter common features may perceive greater similarity among the alternatives, and the resulting conflict among similar options would increase the no-choice ratio. Promotion-focused consumers, in contrast, may treat common features as a cue for confirmation bias; their confirmatory processing would make their prior preference more robust, so the no-choice ratio may shrink. This logic is verified in two experiments. 
The first is a $2{\times}2$ between-subjects design (common features present or absent X regulatory focus) using digital cameras as the stimulus, a product very familiar to the young subjects. The regulatory focus variable was median-split from an eleven-item measure. Common features included zoom, weight, memory, and battery, whereas the other two attributes (pixels and price) were unique features. The results supported our hypothesis: adding common features raised the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus, confirming the interactive effect between regulatory focus and common features. Prior research had suggested that including common features has an effect on consumer choice; this study shows that common features affect choice differently across consumer segments. The second experiment was run to replicate the first and differed in only two respects: a priming manipulation and a different stimulus. For the promotion focus condition, subjects wrote an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, and pursuit; for prevention, they used words such as persistence, safety, protection, aversion, loss, responsibility, and stability. The stimulus, a room for rent, had common features (sunshine, facilities, ventilation) and unique features (travel distance in time and building condition). These attributes were given various levels and valences to replicate the prior experiment. The results again supported our hypothesis, with a significant interaction between regulatory focus and common features. Together, these studies reveal the dual effects of common features on consumer choice of the no-choice option: adding common features may either enhance or mitigate no-choice, contradictory as that may sound. 
Under a prevention focus, adding common features tends to raise the no-choice ratio by increasing mental conflict; under a promotion focus, it tends to shrink the ratio, perhaps because of confirmation bias. The research has practical and theoretical implications: marketers may need to consider common features carefully in display contexts according to consumer segment (i.e., promotion vs. prevention focus). Theoretically, the results identify a meaningful moderating variable between common features and no-choice, in that the effect on the no-choice option depends in part on regulatory focus; this variable holds under both a chronic and a situational perspective in our hypothesized domain. Finally, in light of some shortcomings of the research, such as overlooked attribute importance, the low overall no-choice ratio, and external validity concerns, we hope it encourages future studies to explore the little-known world of the no-choice option.
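The crossover pattern the two experiments report can be made concrete with a small numerical sketch of the $2{\times}2$ design. The cell counts below are synthetic, invented only to mirror the described pattern; they are not the paper's data, and the simple difference-in-differences stands in for the full statistical test.

```python
# Sketch of the 2x2 analysis: (regulatory focus) x (common features
# present/absent), with the no-choice count out of n per cell.
cells = {
    # (focus, common_features): (no_choice_count, n) -- hypothetical data
    ("prevention", "absent"):  (12, 60),
    ("prevention", "present"): (27, 60),  # no-choice rises with common features
    ("promotion",  "absent"):  (15, 60),
    ("promotion",  "present"): (9, 60),   # no-choice shrinks with common features
}

def no_choice_ratio(focus, features):
    k, n = cells[(focus, features)]
    return k / n

# Simple effect of adding common features within each focus condition.
prevention_effect = no_choice_ratio("prevention", "present") - no_choice_ratio("prevention", "absent")
promotion_effect = no_choice_ratio("promotion", "present") - no_choice_ratio("promotion", "absent")

# The interaction is the difference of the simple effects; opposite signs
# reproduce the crossover the experiments found.
interaction = prevention_effect - promotion_effect
print(prevention_effect, promotion_effect, interaction)
```

With these made-up counts the prevention effect is positive (+0.25), the promotion effect negative (-0.10), and the interaction term nonzero, which is the qualitative shape of the reported result.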

Seeking a Better Place: Sustainability in the CPG Industry (추심경호적지방(追寻更好的地方): 유포장적소비품적산업적가지속발전(有包装的消费品的产业的可持续发展))

  • Rapert, Molly Inhofe;Newman, Christopher;Park, Seong-Yeon;Lee, Eun-Mi
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.2
    • /
    • pp.199-207
    • /
    • 2010
  • "For us, there is virtually no distinction between being a responsible citizen and a successful business... they are one and the same for Wal-Mart today." ~ Lee Scott, Wal-Mart CEO, after the 2005 Katrina disaster; cited in Green to Gold (Esty and Winston 2006). Lee Scott's statement signaled a new era in sustainability as manufacturers and retailers around the globe watched the world's largest mass merchandiser confirm its intentions with respect to sustainability. For decades, the environmental movement has grown, slowly bleeding over into the corporate world. Companies have been born, products have been created, academic journals have been launched, and government initiatives have been undertaken - all in the pursuit of sustainability (Peattie and Crane 2005). While progress has admittedly been slower than some may desire, the emergence of environmentally concerned mass merchandisers has done much to advance sustainable efforts. To better understand this movement, we incorporate the perspectives of both executives and consumers in the consumer packaged goods (CPG) industry. This research rests on three underlying themes: (1) conceptual and anecdotal evidence suggests that companies undertake sustainability initiatives for a plethora of reasons; (2) the number of sustainability initiatives continues to increase in the consumer packaged goods industries; and (3) it is therefore necessary to explore the role that sustainability plays in the minds of consumers. In light of these themes, surveys were administered to and completed by 143 college students and 101 business executives to assess a number of sustainability-related variables, including willingness-to-pay, behavioral intentions, attitudes, and preferences. 
Survey results indicate that the top three reasons executives believe sustainability to be important are (1) the opportunity for profitability, (2) the fulfillment of an obligation to the environment, and (3) a responsibility to customers and shareholders. College students identified the top three reasons as (1) a responsibility to the environment, (2) an indebtedness to future generations, and (3) effective management of resources. While the rationale for supporting sustainability efforts differed between college students and executives, the two groups reported similar responses on the majority of the remaining sustainability issues. Furthermore, when we asked consumers to assess the importance of six key issues (healthcare, the economy, education, crime, government spending, and the environment) previously identified as important to consumers by a Gallup Poll, protecting the environment ranked only fourth of the six (Carlson 2005). While all six issues were identified as important, the three that emerged as most important were (1) improvements in education, (2) the economy, and (3) health care. As the pursuit and incorporation of sustainability continues to evolve, so too will the expected outcomes. New definitions of performance that reflect the social and business benefits, as well as the lengthened implementation period, are relevant and warranted (Ehrenfeld 2005; Hitchcock and Willard 2006). From a literature review of both anecdotal and conceptual expectations of sustainability, we identified three primary categories of outcomes: (1) improvements in constituent satisfaction, (2) differentiation opportunities, and (3) financial rewards. Within these categories, several specific outcomes were identified, yielding eleven distinct outcomes of sustainability initiatives. 
Our survey results indicate that the top five most likely outcomes for companies that pursue sustainability are: (1) green consumers will be more satisfied, (2) company image will improve, (3) corporate responsibility will be enhanced, (4) energy costs will be reduced, and (5) products will be more innovative. Additionally, to better understand the interesting intersection between a consumer's environmental "identity" and the willingness to manifest that identity through marketplace purchases, we extended prior research developed by Experian Research (2008). Accordingly, respondents were categorized as one of four types of green consumers (Behavioral Greens, Think Greens, Potential Greens, or True Browns) to gain a better understanding of the green consumer and to support a more effective interpretation of the results. We assessed these consumers' willingness to engage in eco-friendly behavior by evaluating three options: (1) shopping at retailers that support environmental initiatives, (2) paying more for products that protect the environment, and (3) paying higher taxes so the government can support environmental initiatives. Think Greens expressed the greatest willingness to change, followed by Behavioral Greens, Potential Greens, and True Browns; all of these differences were significant at p<.01. Conclusions and implications: we have undertaken a descriptive study that seeks to enhance our understanding of the strategic domain of sustainability. Specifically, this research fills a gap in the literature by comparing and contrasting the sustainability views of business executives and consumers with specific regard to preferences, intentions, willingness-to-pay, behavior, and attitudes. For practitioners, much can be gained from a strategic standpoint. In addition to the many results already reported, respondents also reported being willing to pay more for products that protect the environment. 
Other specific results indicate that female respondents consistently communicate a stronger willingness than males to pay more for these products and to shop at eco-friendly retailers. With this additional information, practitioners have a more specific market to target when communicating their sustainability efforts. While this research is only an initial step toward understanding the similarities and differences between practitioners and consumers regarding sustainability, it presents original findings that contribute to both practice and research. Future research should examine other variables affecting this relationship, as well as other specific industries.
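The segment comparison above (four green-consumer types ordered by willingness to change) amounts to computing a per-segment mean and ranking the segments. A minimal sketch, with entirely invented response values: only the segment labels and their reported ordering come from the study, and the 1-7 scale scores below are hypothetical.

```python
# Sketch: mean willingness-to-change per green-consumer segment,
# then the resulting ordering. All numbers are made up for illustration.
from statistics import mean

willingness = {  # hypothetical 1-7 scale responses per segment
    "Think Greens":      [6, 7, 6, 5, 7],
    "Behavioral Greens": [5, 6, 5, 6, 5],
    "Potential Greens":  [4, 4, 5, 3, 4],
    "True Browns":       [2, 3, 2, 1, 2],
}

segment_means = {seg: mean(vals) for seg, vals in willingness.items()}
order = sorted(segment_means, key=segment_means.get, reverse=True)
print(order)
# → ['Think Greens', 'Behavioral Greens', 'Potential Greens', 'True Browns']
```

In the actual study the pairwise differences between segments were tested and found significant at p<.01; a real analysis would add such a test rather than compare means alone.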