• Title/Summary/Keyword: Management Performance Evaluation


Detection Ability of Occlusion Object in Deep Learning Algorithm depending on Image Qualities (영상품질별 학습기반 알고리즘 폐색영역 객체 검출 능력 분석)

  • LEE, Jeong-Min;HAM, Geon-Woo;BAE, Kyoung-Ho;PARK, Hong-Ki
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.3
    • /
    • pp.82-98
    • /
    • 2019
  • The importance of spatial information is rising rapidly. In particular, the construction and modeling of 3D spatial information for real-world objects, as in smart cities and digital twins, has become a core technology. The constructed 3D spatial information is used in various fields such as land management, landscape analysis, environment, and welfare services. Three-dimensional modeling with imagery achieves high visibility and realism by texturing the objects. However, texturing inevitably contains occlusion areas caused by physical obstructions such as roadside trees, adjacent objects, vehicles, and banners present at the time of image acquisition. Such occlusion areas are a major cause of degraded realism and accuracy in the constructed 3D model. Various studies have been conducted to resolve occlusion areas, and recently deep learning algorithms have been applied to detect and resolve them. Deep learning requires sufficient training data, and the quality of the collected training data directly affects the performance and results of the trained model. This study therefore analyzed occlusion-area detection ability on images of varying quality, in order to verify how the quality of the training data affects deep learning performance. Images containing occlusion-causing objects were generated at artificially quantified quality levels and applied to the implemented deep learning algorithm. The study found that, for brightness adjustment, the detection ratio dropped to 0.56 on brighter images, and that for pixel size and artificial noise adjustment, detection ability decreased rapidly once images were degraded from the original to a middle level. Under the F-measure evaluation method, the change for noise-adjusted image resolution was the largest at 0.53 points.
The occlusion-area detection ability by image quality can serve as a valuable criterion for future practical applications of deep learning. By indicating the level of image quality required at acquisition time, these results are expected to contribute substantially to the practical application of deep learning.
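The F-measure used in the evaluation above is the harmonic mean of precision and recall; a minimal sketch of the metric (the detection counts below are illustrative, not the paper's data):

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: a detector that finds 40 of 50 true occlusion regions (recall 0.8)
# while 40 of its 64 detections are correct (precision 0.625).
print(round(f_measure(0.625, 0.8), 3))  # 0.702
```

Because the harmonic mean is dominated by the smaller of the two values, a quality degradation that hurts either precision or recall alone still pulls the F-measure down sharply.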

Automatic Speech Style Recognition Through Sentence Sequencing for Speaker Recognition in Bilateral Dialogue Situations (양자 간 대화 상황에서의 화자인식을 위한 문장 시퀀싱 방법을 통한 자동 말투 인식)

  • Kang, Garam;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.17-32
    • /
    • 2021
  • Speaker recognition is generally divided into speaker identification and speaker verification. It plays an important role in automatic voice systems, and its importance is becoming more prominent as portable devices, voice technology, and audio content fields continue to expand. Previous speaker recognition studies have aimed to automatically determine who the speaker is from voice files and to improve accuracy. Speech style is an important sociolinguistic subject: it contains very useful information that reveals the speaker's attitude, conversational intention, and personality, and it can therefore be an important clue for speaker recognition. The final ending used in a speaker's utterance determines the sentence type and carries information such as the speaker's intention, psychological attitude, or relationship to the listener. Because the distribution of sentence-final endings varies with the characteristics of the speaker, the type and distribution of endings used by an unidentified speaker can help in recognizing that speaker. However, few existing text-based speaker recognition studies have considered speech style, and adding speech-style information to speech-signal-based speaker recognition techniques could further improve accuracy. Hence, the purpose of this paper is to propose a novel method that uses speech style, expressed as sentence-final endings, to improve the accuracy of Korean speaker recognition. To this end, a method called sentence sequencing is proposed, which generates vector values from the type and frequency of the sentence-final endings appearing in a specific person's utterances. To evaluate the proposed method, training and performance evaluation were conducted with an actual drama script.
The method proposed in this study can be used to improve the performance of Korean speaker recognition services.
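The sentence-sequencing idea of vectorizing the type and frequency of sentence-final endings can be sketched as follows; the romanized ending inventory and suffix-matching rule here are illustrative assumptions, not the paper's actual feature extraction over Korean endings:

```python
from collections import Counter

# Illustrative, romanized inventory of sentence-final endings; the paper's
# actual Korean ending set and extraction rules are not specified here.
ENDINGS = ["da", "yo", "ni", "ra", "ne", "gun"]

def ending_vector(sentences):
    """Relative frequency of each sentence-final ending across a speaker's
    utterances -- one fixed-length feature vector per speaker."""
    counts = Counter()
    for s in sentences:
        stripped = s.rstrip(".?! ")
        for e in ENDINGS:
            if stripped.endswith(e):
                counts[e] += 1
                break
    total = sum(counts.values()) or 1   # avoid division by zero
    return [counts[e] / total for e in ENDINGS]
```

Speaker vectors built this way can then be compared (e.g. by cosine similarity) against the vectors of known speakers, or concatenated with acoustic features.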

A Study on the Determinants of Patent Citation Relationships among Companies : MR-QAP Analysis (기업 간 특허인용 관계 결정요인에 관한 연구 : MR-QAP분석)

  • Park, Jun Hyung;Kwahk, Kee-Young;Han, Heejun;Kim, Yunjeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.21-37
    • /
    • 2013
  • Recently, with the advent of the knowledge-based society, interest in intellectual property has grown. In particular, the ICT companies leading the high-tech industry are striving for systematic management of intellectual property. Patent information represents the intellectual capital of a company, and quantitative analysis of the continuously accumulated patent information has now become possible. Using patent information, analysis is possible at various levels, ranging from the individual patent to the enterprise, industry, and country level. Through patent information, we can identify the state of a technology and analyze its impact on performance; through network analysis, we can trace the flow of knowledge, thereby not only identifying changes in technology but also predicting the direction of future research. In network-based studies, two important analyses utilize patent citation information: citation indicator analysis, which uses citation frequency, and network analysis based on citation relationships. Building on these, this study analyzes whether company size affects patent citation relationships. For this study, 74 S&P 500 companies that provide IT and communication services were selected. To determine the patent citation relationships between the companies, patent citations in 2009 and 2010 were collected, and sociomatrices representing the citation relationships between the companies were created. In addition, each company's total assets were collected as an index of company size. The distance between two companies is defined as the absolute value of the difference between their total assets, and the simple (signed) difference is taken to describe the hierarchy between them.
QAP correlation analysis and MR-QAP analysis were carried out using the distance and hierarchy between companies, together with the sociomatrices of patent citations in 2009 and 2010. The QAP correlation analysis shows that the 2009 and 2010 company patent citation networks have the highest correlation with each other. In addition, a positive correlation is found between the patent citation relationships and the distance between companies; that is, citation relationships increase when there is a difference in size between companies. A negative correlation is found between the patent citation relationships and the hierarchy between companies, indicating that the patents of higher-tier companies are relatively highly valued and exert influence toward lower-tier companies. The MR-QAP analysis was carried out as follows: the sociomatrix generated from the 2010 patent citation relationships was used as the dependent variable, and the 2009 citation network together with the distance and hierarchy networks were used as independent variables, in order to find the main factors influencing the 2010 patent citation relationships. The results show that all independent variables positively influenced the 2010 citation relationships. In particular, the 2009 citation relationships have the most significant impact on those of 2010, which indicates continuity in patent citation relationships. In sum, the QAP correlation and MR-QAP analyses show that patent citation relationships between companies are affected by company size,
but that the most significant factor is past patent citation relationships. Maintaining patent citation relationships matters because examining the relationships through which companies share intellectual property can be strategically important, and can also serve as a useful auxiliary signal for selecting partner companies to cooperate with.
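The QAP correlation step can be sketched as follows: correlate the off-diagonal cells of two sociomatrices, then assess significance by repeatedly permuting the rows and columns of one matrix in tandem so that the dyadic structure is preserved under the null hypothesis. This is a simplified sketch; the study's actual procedure and its MR-QAP regression extension are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def qap_correlation(A, B, n_perm=1000):
    """Correlation between two sociomatrices with a QAP permutation test.
    Rows and columns of B are permuted together, keeping actor identity
    attached to both its row and its column."""
    n = A.shape[0]
    off = ~np.eye(n, dtype=bool)                 # ignore self-ties
    def corr(X, Y):
        return np.corrcoef(X[off], Y[off])[0, 1]
    observed = corr(A, B)
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        if abs(corr(A, B[np.ix_(p, p)])) >= abs(observed):
            hits += 1
    return observed, hits / n_perm               # correlation, p-value
```

In this study's terms, A would be a citation sociomatrix and B, for example, the matrix of absolute total-asset differences between companies.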

Accuracy evaluation of microwave water surface current meter for measurement angles in middle flow condition (전자파표면유속계의 측정 각도에 따른 평수기 유속 측정 정확도 분석)

  • Son, Geunsoo;Kim, Dongsu;Kim, Kyungdong;Kim, Jongmin
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.1
    • /
    • pp.15-27
    • /
    • 2020
  • Streamflow discharge, a fundamental riverine quantity, plays a crucial role in water resources management and therefore requires accurate in-situ measurement. Recent advances in instrumentation for streamflow discharge measurement have complemented or substituted classical devices and methods. Among the various candidates, the microwave surface current meter has increasingly been applied not only to flood but also to normal flow discharge measurement, enabling practitioners to measure flow velocity remotely and safely without direct contact. With minimal field preparation, this method eases discharge measurement in difficult in-situ conditions such as extreme floods, actively emitting a 24.125 GHz microwave without relying on natural light. In South Korea, a rectangular instrument named the Microwave Water Surface Current Meter (MWSCM) was developed and commercially released around 2010, and the domestic agencies in charge of streamflow observation have regarded it as a potential substitute for conventional methods. Despite the attention this device has received for efficient flow measurement, however, there have been few noticeable efforts to evaluate its performance systematically and comprehensively across various measurement and riverine conditions, which has limited its widespread practical use. This study evaluated the MWSCM with respect to its monitoring configuration, particularly the tilt and yaw angles. When pointing the instrument at a measurement spot in a given cross-section, differing tilt and yaw angles inevitably raise accuracy issues, which are conventionally a major source of error for this type of instrument.
Focusing on instrument configuration, the MWSCM was tested in a controlled outdoor channel at the KICT River Experiment Center under a fixed flow condition of about 1 m/s flow speed with steady flow supply, 6 m channel width, and less than 1 m flow depth; detailed velocity measurements with a SonTek micro-ADV were used for validation. As a result, tilt angles of less than 15 degrees produced much larger deviations, and larger yaw angles proportionally increased the coefficient of variation. Yaw angle also affected accuracy through its effect on the measurement area.
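Why mounting angles matter can be illustrated with a simplified geometric model: a Doppler instrument measures only the flow component along its beam, so the surface speed is recovered by dividing by the cosine of each angle. This is an illustrative assumption; the MWSCM's actual internal angle correction is not described in the abstract.

```python
import math

def surface_speed(radial_speed, tilt_deg, yaw_deg):
    """Recover the surface flow speed from the along-beam (radial) Doppler
    speed under a simple cosine projection model."""
    return radial_speed / (math.cos(math.radians(tilt_deg)) *
                           math.cos(math.radians(yaw_deg)))

# A 1 m/s flow seen at 30 deg tilt and 0 deg yaw projects to ~0.866 m/s
# along the beam; the cosine correction recovers the true speed.
radial = 1.0 * math.cos(math.radians(30))
print(round(surface_speed(radial, 30, 0), 3))  # 1.0
```

Under this model, as either angle grows the cosine shrinks, so any noise in the radial measurement is amplified after division; this is consistent with the larger coefficient of variation observed at larger yaw angles.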

Diagnosis and Improvements Plan Study of CIPP Model-based Vocational Competency Development Training Teacher Qualification Training (Training Course) (CIPP 모형 기반 직업능력개발훈련교사 자격연수(양성과정) 진단 및 개선 방안 연구)

  • Bae, Gwang-Min;Woo, Hye-Jung;Choi, Myung-Ran;Yoon, Gwan-Sik
    • Journal of vocational education research
    • /
    • v.36 no.2
    • /
    • pp.95-121
    • /
    • 2017
  • A vocational competency development training teacher must complete the designated teacher training course and obtain the qualification from the Ministry of Employment & Labor under the criteria set by Presidential Decree. H University, the only institution in Korea that trains vocational competency development training teachers at scale for the labor market, therefore exerts great influence on vocational competency development training through its educational performance. The purpose of this study is to identify problems through an analysis of the actual condition of this qualification training based on the CIPP model, and furthermore to suggest improvement plans for the qualification training. To accomplish this purpose, the present situation of the trainees in the qualification training course was examined. A survey was conducted to draw out improvement plans, and 173 returned copies were used in the analysis. For in-depth analysis, eight subjects were selected and interviewed to better understand the survey results. As a result, among the context factors, positive responses were obtained for the educational objectives and educational resources, while negative opinions were given on whether the curriculum reflects learner and social needs. Among the input factors, positive opinions were obtained on the educational objectives and training requirements, but there were many negative opinions about the achievement of learners' educational goals and about the online content education. Among the process factors, evaluation was highly positive for the class-related part, learner attendance management, and institutional support.
However, negative opinions were drawn on the comprehensive evaluation during the qualification training period, with the learners' burden due to the short learning period cited as the main reason. Among the product factors, positive opinions were obtained on the applicability of the curriculum for trainees in charge of education and training in industrial occupations, but there were negative opinions regarding learning time, concentration of learning, and communication with instructors. Based on these results, the suggestions for improving the operation of the qualification training are as follows. First, the training schedule of the weekly training course should be managed flexibly. Second, the online education curriculum should be improved with a consumer-centered approach. Third, access to qualification training for local residents should be sought. Fourth, pre-education support for qualified applicants is required. Finally, follow-up care of qualified trainees is necessary.

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation), a major public institution for financial policy in Korea, is strongly committed to backing export companies through various systems. Nevertheless, there are still few cases of realized business models based on big-data analysis. In this situation, this paper aims to develop a new business model for the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare performance among predictive models including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Research on predicting financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and widely used in both research and practice to this day; it utilizes five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts financial analysis of savings and loans.
Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model, and Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and a logit model, and Kim and Kim (2001) utilized artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with diverse models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample; specifically, the LightGBM model shows the highest accuracy of 71.1% and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In the business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. Examining the accuracy for each interval, the logit model has the highest accuracy of 100% for the 0~10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90~100% interval.
On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: they achieve higher accuracy in both the 0~10% and 90~100% intervals, with lower accuracy around the 50% interval of predicted default probability. Regarding the distribution of samples across intervals, both the LightGBM and XGBoost models place relatively large numbers of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy over a small number of cases, LightGBM or XGBoost may be more desirable because they classify large numbers of cases into the two extreme intervals, even allowing for their somewhat lower classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression shows the worst performance. Each predictive model nevertheless has a comparative advantage under different evaluation standards; for instance, the Random Forest model shows almost 100% accuracy on samples expected to have a high probability of default. Collectively, a more comprehensive ensemble could be constructed that combines multiple classification models and conducts majority voting to maximize overall performance.
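The interval-wise comparison described above (splitting the predicted default probability into ten equal bins and checking accuracy in each) can be sketched as follows; the variable names are illustrative:

```python
import numpy as np

def accuracy_by_interval(y_true, y_prob, threshold=0.5):
    """Classification accuracy within ten equal intervals of the predicted
    default probability: [0, 0.1), [0.1, 0.2), ..., [0.9, 1.0]."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    y_pred = (y_prob >= threshold).astype(int)
    out = {}
    for k in range(10):
        lo, hi = k / 10, (k + 1) / 10
        in_bin = (y_prob >= lo) & ((y_prob < hi) if k < 9 else (y_prob <= hi))
        if in_bin.any():                       # skip empty intervals
            out[(lo, hi)] = float((y_pred[in_bin] == y_true[in_bin]).mean())
    return out
```

Applied to each model's predicted probabilities, this exposes the pattern reported above: high accuracy in the extreme bins and lower accuracy near 0.5, which a single whole-sample accuracy figure hides.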

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher ranks to more useful resources or experts. What resources are considered useful in a folksonomic system? Is there a standard superior to frequency or freshness? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of this paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to more high-scored pages, but HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in the active or the passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS.
The concept of mutual interactions, originally proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link direction. The weight of a property representing the mutual interaction between classes is assigned according to the relative significance of the property to the resource importance of each class. This class-oriented approach rests on the fact that the Semantic Web contains many heterogeneous classes, so applying a different appraisal standard to each class is more reasonable. This is similar to human evaluation, where different items are assigned specific weights that are then summed into a weighted average. With this approach, we can also check for missing properties more easily than with predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and more than one tag can share the same subject and object. When many users assign similar tags to the same resource, it becomes necessary to grade the users differently depending on the order of assignment. This idea comes from studies in psychology showing that expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his or her collection. Such documents are identified by the number, as well as the expertise, of the users who hold the same documents in their collections; in other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities that are more closely related to a certain entity. Considering that in social media the popularity of a topic is temporary, recent data should carry more weight than old data.
We propose a comprehensive folksonomy ranking framework in which all these considerations are addressed and which can be easily customized to each folksonomy site. To examine the validity of our ranking algorithm and to show the mechanism of adjusting the property, time, and expertise weights, we first use a dataset designed to analyze the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction appears preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking; the expertise weights of the previous study can obstruct time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework, whereas the previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference between the two algorithms in calculation time and memory use:
while the matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with ours. In our framework, various folksonomy ranking policies can be expressed by combining the ranking factors, and our approach works even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
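The flavor of a link-direction-agnostic, mutual-interaction ranking can be sketched with power iteration over a symmetric interaction matrix; the weights and entity classes below are toy assumptions, not the paper's actual property-weight scheme:

```python
import numpy as np

def mutual_interaction_rank(W, n_iter=100):
    """Power iteration: each entity's score is the weighted sum of the
    scores of the entities it interacts with, regardless of which side
    of the underlying property link it sits on."""
    s = np.ones(W.shape[0]) / W.shape[0]
    for _ in range(n_iter):
        s = W @ s
        s /= s.sum()            # L1-normalize at each step
    return s

# Toy symmetric interaction matrix over 4 entities (e.g. two users, a
# resource, and a tag); symmetry makes link direction irrelevant.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
scores = mutual_interaction_rank(W)  # entities 1 and 2 rank highest
```

Time-valued ranking as described above would enter this sketch by scaling each cell of W with a decay factor based on the age of the corresponding tagging event.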

A Study on Outplacement Countermeasure and Retention Level Examination Analysis about Outplacement Competency of Special Security Government Official (특정직 경호공무원의 전직역량에 대한 보유수준 분석 및 전직지원방안 연구)

  • Kim, Beom-Seok
    • Korean Security Journal
    • /
    • no.33
    • /
    • pp.51-80
    • /
    • 2012
  • This study summarizes the main contents of Beomseok Kim's doctoral dissertation. It focuses on presenting outplacement countermeasures and an analysis of the retention levels of the outplacement competencies of special security government officials, based on a questionnaire survey. To achieve this purpose effectively, a retention-level questionnaire covering four groups of outplacement competencies and twenty-two subcategories was administered to six hundred special security government officials relevant to outplacement, aged over forty and at administrative grade five or higher, including outplacement successes, outplacement failures, and those expecting outplacement. The four competency groups and twenty-two subcategories are: the knowledge competency group with four subcategories (expert knowledge, outplacement knowledge, self comprehension, and organization comprehension); the skill competency group with nine subcategories (job skill competency, job performance skill, problem-solving skill, reforming skill, communication skill, organization management skill, crisis management skill, career development skill, and human network application skill); the attitude-emotion competency group with seven subcategories (positive attitude, active attitude, responsibility, professionalism, devoting-sacrificing attitude, affinity, and self-controlling ability); and the value-ethics competency group with two subcategories (ethical consciousness and morality).
The respondents highly regard the twenty-two outplacement competencies and consider themselves well qualified in the subcategories rated over 4.0, such as professional knowledge, active attitude, responsibility, ethics, and morality, while the other subcategories, rated below average, still need improvement. Thus, the following suggestions are made for successful outplacement. First, individual effort is essential to strengthen capabilities based on accurate self-evaluation, for which awareness and concepts need to be redefined to help officials face reality by readjusting career goals to a realistic level. Second, an active career development plan is required to improve shortcomings in outplacement competency. Third, it is necessary to establish outplacement training infrastructure, such as on/offline training systems and learning facilities, to reinforce user-oriented outplacement training as a regular course before, during, and after retirement.


Evaluation of lines of NERICA 1 introgressed with Gn1a and WFP for yield and yield components as affected by nitrogen fertilization in Kenya

  • Makihara, Daigo;Samejima, Hiroaki;Kikuta, Mayumi;Kimani, John M.;Ashikari, Motoyuki;Angeles-Shim, Rosalyn;Sunohara, Hidehiko;Jena, Kshirod K.;Yamauchi, Akira;Doi, Kazuyuki
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2017.06a
    • /
    • pp.323-323
    • /
    • 2017
  • In many sub-Saharan African countries, boosting rice production is a pressing food security issue. To contribute to the increase in rice production, we have developed lines of NERICA 1 introgressed with the gene for spikelet number, Gn1a, and the gene for primary rachis-branch number, WFP by cross breeding. The performance of rice lines introgressed with the genes for yield related traits can be affected by cultivation environment and management. Thus, in this study, we aimed to evaluate the lines of NERICA 1 introgressed with Gn1a or/and WFP for yield and yield components under different nitrogen fertilization conditions in Kenya. A field trial was conducted at a paddy field in Kenya Agricultural and Livestock Research Organization-Mwea, Kirinyaga County ($0^{\circ}39^{\prime}S$, $0^{\circ}20^{\prime}E$) from August 2016 to January 2017. Eight lines of NERICA 1 introgressed with Gn1a and/or WFP, and their parents, NERICA 1 and ST12, were grown under 0 (NF) and $75(SF)kg\;N\;ha^{-1}$. At maturity, five hills per plot were harvested to determine the yield and yield components. The number of primary and secondary rachis-branches per panicle was measured on the longest panicle in each hill. Under SF, the introgression of WFP to NERICA 1 increased the number of primary and secondary rachis-branches by 27 and 25%, respectively. On the other hand, Gn1a did not increase the number of primary rachis-branches, whereas the number of secondary rachis-branches was increased by 38% on average. The number of primary and secondary rachis-branches of the lines introgressed with both genes increased by 25 and 56%, respectively. Although grain number per panicle increased 33% by Gn1a, 34% by WFP, and 43% by Gn1a+WFP, the yield increase by Gn1a, WFP, and Gn1a+WFP was only 14, 7, and 14%, respectively. The suppression of the yield increase was mainly attributed to the decline in the filled grain ratio. 
Under NF, WFP increased the numbers of primary and secondary rachis-branches by 20 and 19%, respectively, and the introgression of both genes increased them by 19 and 35%, respectively, whereas Gn1a alone did not change them. Thus, even under NF, grain yield increased by 11% with WFP and by 24% with Gn1a+WFP owing to the increased grain number, although the filled grain ratio declined. Our findings suggest that the introgression of Gn1a and WFP could contribute to improving rice productivity in sub-Saharan Africa even under low-fertility conditions. Improving the filled grain ratio of the lines introgressed with these genes through further breeding and fertilization management will be the focus of subsequent work.


A Case Study of the Performance and Success Factors of ISMP(Information Systems Master Plan) (정보시스템 마스터플랜(ISMP) 수행 성과와 성공요인에 관한 사례연구)

  • Park, So-Hyun;Lee, Kuk-Hie;Gu, Bon-Jae;Kim, Min-Seog
    • Information Systems Review
    • /
    • v.14 no.1
    • /
    • pp.85-103
    • /
    • 2012
  • ISMP (Information Systems Master Plan) is a method for stating user requirements clearly in the RFP (Request for Proposal) of IS development projects. Unlike conventional methods of RFP preparation, which describe the user requirements of target systems rather superficially, ISMP systematically identifies business needs and the status of information technology, analyzes user requirements in detail, and defines the specific functions of the target systems. By increasing the clarity of the RFP, the scale and complexity of the work can be estimated accurately, responding companies can prepare clearer proposals, and the fairness of proposal evaluation can be improved. Above all, the chronic problems of this field, such as misunderstanding and conflict between users and developers and the excessive burden placed on developers, can be resolved. This study is a case study analyzing the execution process, accomplishments, problems, and success factors of the two pilot projects that first introduced ISMP. The actual ISMP execution procedures were examined, along with how user requirements were described in the resulting RFPs. Satisfaction with the ISMP-based RFPs was found to be high compared with conventional RFPs. Although some problems occurred, such as difficulties in RFP preparation and increased workload stemming from the lack of understanding of, and experience with, ISMP, positive effects were also observed overall: a clearer scope for the target systems, improved information sharing and cooperation between users and developers, seamless communication between the customer corporations issuing the RFP and the IT service companies, and fewer changes in user requirements.
Through in-depth, action-research-style interviews with the persons in charge of the actual work, three ISMP success factors were derived: prior consensus on the need for ISMP, the securing of execution resources through the support of the CEO and CIO, and the selection of an appropriate specification level for the user requirements. The results of this study provide useful field information to corporations considering the adoption of ISMP and to IT service firms, and suggest meaningful future research directions to researchers in the field of IT service competitive advantage.
