• Title/Summary/Keyword: complexity control


Variation of Hospital Costs and Product Heterogeneity

  • Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.11 no.1
    • /
    • pp.123-127
    • /
    • 1978
  • The major objective of this research is to identify those hospital characteristics that best explain cost variation among hospitals and to formulate linear models that can predict hospital costs. Specific emphasis is placed on hospital output, that is, the identification of diagnosis-related patient groups (DRGs) which are medically meaningful and demonstrate similar patterns of hospital resource consumption. A casemix index is developed based on the DRGs identified. Considering the common problems encountered in previous hospital cost research, the following study requirements are established for fulfilling the objectives of this research: 1. Selection of hospitals that exercise similar medical and fiscal practices. 2. Identification of an appropriate data collection mechanism from which demographic and medical characteristics of individual patients, as well as accurate and comparable cost information, can be derived. 3. Development of a patient classification system in which all the patients treated in hospitals can be split into mutually exclusive categories with consistent and stable patterns of resource consumption. 4. Development of a cost-finding mechanism through which patient groups' costs can be made comparable across hospitals. A data set of Medicare patients prepared by the Social Security Administration was selected for the study analysis. The data set contained 27,229 record abstracts of Medicare patients discharged from all but one short-term general hospital in Connecticut during the period from January 1, 1971, to December 31, 1972. Each record abstract contained demographic and diagnostic information, as well as charges for specific medical services received. The 'AUTOGRP System' was used to generate 198 DRGs in which the entire range of Medicare patients was split into mutually exclusive categories, each of which shows a consistent and stable pattern of resource consumption. 
The 'Departmental Method' was used to generate cost information for the groups of Medicare patients that would be comparable across hospitals. To fulfill the study objectives, an extensive analysis was conducted in the following areas: 1. Analysis of DRGs, in which the level of resource use of each DRG was determined, the length of stay or death rate of each DRG in relation to resource use was characterized, and underlying patterns of the relationships among DRG costs were explained. 2. Exploration of resource use profiles of hospitals, in which the magnitude of differences in resource use or death rates incurred in the treatment of Medicare patients among the study hospitals was explored. 3. Casemix analysis, in which four types of casemix-related indices were generated and the significance of these indices in explaining hospital costs was examined. 4. Formulation of linear models to predict hospital costs of Medicare patients, in which nine independent variables (i.e., casemix index, hospital size, complexity of service, teaching activity, location, casemix-adjusted death rate index, occupancy rate, and casemix-adjusted length of stay index) were used as determinants of hospital costs. Results from the study analysis indicated that: 1. The system of 198 DRGs for Medicare patient classification was demonstrated to be a strong tool not only for determining the pattern of hospital resource utilization of Medicare patients, but also for categorizing patients by severity of illness. 2. The weighted mean total case cost (TOTC) of the study hospitals for Medicare patients during the study years was $1,127.02 with a standard deviation of $117.20. The hospital with the highest average TOTC ($1538.15) was 2.08 times more expensive than the hospital with the lowest average TOTC ($743.45). The weighted mean per diem total cost (DTOC) of the study hospitals for Medicare patients during the study years was $107.98 with a standard deviation of $15.18. 
The hospital with the highest average DTOC ($147.23) was 1.87 times more expensive than the hospital with the lowest average DTOC ($78.49). 3. The linear models for each of the six types of hospital costs were formulated using the casemix index and the eight other hospital variables as the determinants. These models explained variance to the extent of 68.7 percent of total case cost (TOTC), 63.5 percent of room and board cost (RMC), 66.2 percent of total ancillary service cost (TANC), 66.3 percent of per diem total cost (DTOC), 56.9 percent of per diem room and board cost (DRMC), and 65.5 percent of per diem ancillary service cost (DTANC). The casemix index alone explained approximately one half of interhospital cost variation: 59.1 percent for TOTC and 44.3 percent for DTOC. These results demonstrate that the casemix index is the most important determinant of interhospital cost variation. Future research and policy implications of the results of this study are envisioned in the following three areas: 1. Utilization of casemix-related indices in the Medicare data systems. 2. Refinement of data for hospital cost evaluation. 3. Development of a system for reimbursement and cost control in hospitals.

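The study's core modeling step, regressing a hospital cost measure on a casemix index and other hospital characteristics, can be sketched as an ordinary least-squares fit. All numbers and variable names below are synthetic stand-ins, not the study's data:

```python
# Sketch of a hospital cost model: OLS regression of mean total case cost
# (TOTC) on a casemix index plus other hospital characteristics.
# Everything here is illustrative; no figures come from the study.
import numpy as np

rng = np.random.default_rng(0)
n_hospitals = 35

casemix = rng.normal(1.0, 0.15, n_hospitals)      # DRG-based casemix index
beds = rng.integers(100, 800, n_hospitals)        # hospital size
occupancy = rng.uniform(0.6, 0.95, n_hospitals)   # occupancy rate
totc = 200 + 900 * casemix + 0.1 * beds + rng.normal(0, 50, n_hospitals)

# Design matrix with an intercept column
X = np.column_stack([np.ones(n_hospitals), casemix, beds, occupancy])
beta, *_ = np.linalg.lstsq(X, totc, rcond=None)

# R^2: share of interhospital cost variation explained by the model
pred = X @ beta
r2 = 1 - np.sum((totc - pred) ** 2) / np.sum((totc - totc.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

The fitted R^2 plays the same role as the explained-variance percentages the study reports for TOTC, DTOC, and the other cost measures.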

A Scalable and Modular Approach to Understanding of Real-time Software: An Architecture-based Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE) (실시간 소프트웨어의 조절적·단위적 이해 방법 : ARSU(Architecture-based Software Understanding)와 SRE(Software Re/reverse-engineering Environment))

  • Lee, Moon-Kun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3159-3174
    • /
    • 1997
  • This paper reports research to develop a methodology and a tool for understanding very large and complex real-time software. The methodology and the tool, mostly developed by the author, are called the Architecture-based Real-time Software Understanding (ARSU) and the Software Re/reverse-engineering Environment (SRE), respectively. Due to their size and complexity, such software is commonly very hard to understand during the reengineering process. This research facilitates scalable re/reverse-engineering of real-time software based on the architecture of the software in three perspectives: structural, functional, and behavioral views. Firstly, the structural view reveals the overall architecture, the specification (outline) view, and the algorithm (detail) view of the software, based on hierarchically organized parent-child relationships. The basic building block of the architecture is a Software Unit (SWU), generated by user-defined criteria. The architecture facilitates navigation of the software in a top-down or bottom-up way. It captures the specification and algorithm views at different levels of abstraction, and also shows the functional and behavioral information at these levels. Secondly, the functional view includes graphs of data/control flow, input/output, definition/use, variable/reference, etc. Each feature of the view captures a different kind of functionality of the software. Thirdly, the behavioral view includes state diagrams, interleaved event lists, etc. This view shows the dynamic properties of the software at runtime. Besides these views, there are a number of other documents: capabilities, interfaces, comments, code, etc. One of the most powerful characteristics of this approach is the capability of abstracting and exploding this dimensional information in the architecture through navigation. These capabilities establish the foundation for scalable and modular understanding of the software. This approach also allows engineers to extract reusable components from the software during the reengineering process.

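The hierarchical parent-child organization of software units described above can be sketched with a minimal tree structure supporting top-down navigation. The class and field names here are illustrative, not taken from the SRE tool:

```python
# A minimal sketch of ARSU-style navigation: Software Units (SWUs) in a
# parent-child hierarchy, each carrying a view at its level of abstraction.
from dataclasses import dataclass, field

@dataclass
class SWU:
    name: str
    spec: str = ""                       # specification (outline) view
    children: list = field(default_factory=list)

    def top_down(self, depth=0):
        """Yield (depth, unit) pairs, walking the architecture top-down."""
        yield depth, self
        for child in self.children:
            yield from child.top_down(depth + 1)

root = SWU("scheduler", "dispatch real-time tasks",
           [SWU("queue", "priority ordering"), SWU("timer", "tick handling")])
for depth, unit in root.top_down():
    print("  " * depth + unit.name)
```

Bottom-up navigation would walk the same tree with a post-order traversal; the per-unit `spec` field stands in for the outline view attached at each abstraction level.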

A Preliminary Study for Nonlinear Dynamic Analysis of EEG in Patients with Dementia of Alzheimer's Type Using Lyapunov Exponent (리아프노프 지수를 이용한 알쯔하이머형 치매 환자 뇌파의 비선형 역동 분석을 위한 예비연구)

  • Chae, Jeong-Ho;Kim, Dai-Jin;Choi, Sung-Bin;Bahk, Won-Myong;Lee, Chung Tai;Kim, Kwang-Soo;Jeong, Jaeseung;Kim, Soo-Yong
    • Korean Journal of Biological Psychiatry
    • /
    • v.5 no.1
    • /
    • pp.95-101
    • /
    • 1998
  • The changes of electroencephalogram (EEG) in patients with dementia of Alzheimer's type are most commonly studied by analyzing power or magnitude in traditionally defined frequency bands. However, because of the absence of an identified metric that quantifies the complex amount of information, such linear methods have many limitations. According to chaos theory, irregular EEG signals can also result from low-dimensional deterministic chaos. Chaotic nonlinear dynamics in the EEG can be studied by calculating the largest Lyapunov exponent ($L_1$). The authors have analyzed EEG epochs from three patients with dementia of Alzheimer's type and three matched control subjects. The largest $L_1$ is calculated from EEG epochs consisting of 16,384 data points per channel in 15 channels. The results showed that patients with dementia of Alzheimer's type had significantly lower $L_1$ than non-demented controls on 8 channels. Topographic analysis showed that $L_1$ was significantly lower in patients with Alzheimer's disease over all the frontal, temporal, central, and occipital head regions. These results show that the brains of patients with dementia of Alzheimer's type have a decreased chaotic quality of electrophysiological behavior. We conclude that nonlinear analysis, such as calculating $L_1$, can be a promising tool for detecting relative changes in the complexity of brain dynamics.

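The quantity the authors estimate from EEG, the largest Lyapunov exponent, can be illustrated on a system where the answer is known analytically. The sketch below computes $L_1$ for the chaotic logistic map (theoretical value ln 2); a real EEG analysis would instead use a delay-embedding estimator such as Wolf's or Rosenstein's method:

```python
# Estimate the largest Lyapunov exponent L1 of the logistic map
# x -> r*x*(1-x) at r=4 as the orbit average of ln|f'(x)|.
# This is an illustration only, not the paper's EEG procedure.
import math

def largest_lyapunov_logistic(r=4.0, x0=0.1, n=100_000, transient=1000):
    """Average log-derivative along the orbit: L1 = <ln |f'(x)|>."""
    x = x0
    for _ in range(transient):          # discard transient behaviour
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

L1 = largest_lyapunov_logistic()
print(f"L1 ~ {L1:.3f}  (theory: ln 2 = 0.693...)")
```

A positive $L_1$ indicates chaotic dynamics; the paper's finding is that this value is reduced in Alzheimer-type dementia, i.e., the EEG becomes dynamically less complex.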

The Method of Multi-screen Service using Scene Composition Technology based on HTML5 (HTML5 기반 장면구성 기술을 통한 멀티스크린 서비스 제공 방법)

  • Jo, Minwoo;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.18 no.6
    • /
    • pp.895-910
    • /
    • 2013
  • A multi-screen service is a service that consumes more than one medium on a number of terminals, either simultaneously or selectively. Such services have become practical with the spread of smart TVs and smart terminals. In a hybrid broadcasting environment, that is, the convergence of broadcasting and communication environments, various user experiences can also be provided through content consumed on multiple screens. In a hybrid broadcasting environment, scene composition technology can be used as an element technology for multi-screen service. Using scene composition technology, multiple media can be consumed in combination through specified presentation times and spaces. Thus, a multi-screen service based on scene composition technology can provide spatial and temporal control and consumption of multiple media through linkage between terminals. However, existing scene composition technologies are not easy to use in hybrid broadcasting because of environmental constraints, the difficulty of supporting various terminals, and their complexity. To address these problems, HTML5 can be considered. HTML5 is expected to be commonly supported on various smart terminals and provides consumption of diverse media. Therefore, this paper proposes scene composition and multi-screen service technology based on HTML5 that is expected to be usable on the various smart terminals of a hybrid broadcasting environment. To this end, the paper includes an introduction to HTML5 and multi-screen services, a method of providing information related to scene composition and multi-screen service through the extension of elements and attributes in HTML5, media signaling between terminals, and a method of synchronization. In addition, the proposed HTML5-based scene composition and multi-screen service technology was verified through implementation and experiments.

Designing Mobile Framework for Intelligent Personalized Marketing Service in Interactive Exhibition Space (인터랙티브 전시 환경에서 개인화 마케팅 서비스를 위한 모바일 프레임워크 설계)

  • Bae, Jong-Hwan;Sho, Su-Hwan;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.59-69
    • /
    • 2012
  • The exhibition industry, one of the government's 17 new growth engines, is related to other industries such as tourism, transportation, and finance, so it has a significant ripple effect on them. Exhibition is a knowledge-intensive, eco-friendly, and high value-added industry. Over 13,000 exhibitions are held every year around the world, which contributes to earning foreign currency. The exhibition industry is closely related to culture and tourism, could be utilized in local and national development strategies, and can improve national brand image as well. Many countries make various efforts to invigorate the exhibition industry by arranging related laws and support systems. In Korea, more than 200 exhibitions are held every year, but only two or three are hosted with over 400 exhibitors, and apart from these, most exhibitions have few foreign exhibitors. The main weakness of domestic trade shows is that there are no agencies managing exhibition-related statistics and no specific, reliable evaluation. This makes it impossible to provide buyers or sellers with reliable data, limits the qualitative growth of exhibitions, and thus prevents the service quality of trade shows from improving. Hosting many visitors (public/buyer/exhibitor) is crucial to the development of the domestic exhibition industry. In order to attract many visitors, the service quality of exhibitions and visitor satisfaction should be enhanced. For this purpose, a variety of real-time customized services through digital media, as well as services for creating new customers and retaining existing customers, should be provided. In addition, by providing visitors with personalized information services, they can manage their time and space efficiently, avoiding the complexity of the exhibition space. 
The exhibition industry can gain competitiveness and an industrial foundation through building up exhibition-related statistics, creating new information, and enhancing research ability. Therefore, this paper deals with customized services delivered to the visitor's smartphone in the exhibition space and designs a mobile framework that enables exhibition devices to interact with other devices. The mobile server framework is composed of three different systems: the multi-interaction server, the client, and the display device. By building a knowledge pool of the exhibition environment, the data accumulated for each visitor can be provided as a personalized service. In addition, based on the reactions of visitors, each piece of information is in turn utilized as customized information, so a cyclic chain structure is designed. The multi-interaction server is designed to handle events, to process interaction between exhibition devices and visitors' smartphones, and to manage data. The client is an application running on the visitor's smartphone that can be driven on a variety of platforms. The client functions as the interface presenting customized services to individual visitors and handling event input and output for simultaneous participation. The exhibition device consists of a display system to show visitors contents and information, an interaction input-output system to receive events from visitors and translate input into action, and finally a control system to connect the above two systems. The proposed mobile framework provides individual visitors with customized and active services using their information profiles and advanced knowledge. In addition, a user participation service is suggested as well, using the interaction connection system between the server, the client, and the exhibition devices. The suggested mobile framework is a technology that could be applied to cultural industries such as performances, shows, and exhibitions. 
Thus, this builds up the foundation to improve visitors' participation in exhibitions and to bring about the development of the exhibition industry by raising visitors' interest.
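The event flow between the server, clients, and exhibition devices described above can be sketched as a minimal publish-subscribe dispatcher. The class, method, and event names are illustrative, not taken from the paper's framework:

```python
# A minimal sketch of the framework's event flow: an interaction server
# relaying events between exhibition devices and smartphone clients.
class InteractionServer:
    def __init__(self):
        self.handlers = {}          # event type -> list of callbacks

    def subscribe(self, event_type, callback):
        self.handlers.setdefault(event_type, []).append(callback)

    def publish(self, event_type, payload):
        """Dispatch an event from a device or client to all subscribers."""
        for cb in self.handlers.get(event_type, []):
            cb(payload)

server = InteractionServer()
log = []
# A display device reacts when a visitor's smartphone reports proximity
server.subscribe("visitor.nearby",
                 lambda p: log.append(f"show profile of {p['id']}"))
server.publish("visitor.nearby", {"id": "visitor-17"})
print(log)
```

In a real deployment the callbacks would cross process and network boundaries, but the cyclic structure (visitor event in, personalized content out) is the same.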

Research on Earthquake Occurrence Characteristics Through the Comparison of the Yangsan-ulsan Fault System and the Futagawa-Hinagu Fault System (양산-울산 단층계와 후타가와-히나구 단층계의 비교를 통한 지진발생특성 연구)

  • Lee, Jinhyun;Gwon, Sehyeon;Kim, Young-Seog
    • The Journal of the Petrological Society of Korea
    • /
    • v.25 no.3
    • /
    • pp.195-209
    • /
    • 2016
  • The geometric complexity of a strike-slip fault system can be an important factor controlling fault reactivation and surface rupture propagation under the regional stress regime. The Kumamoto earthquake was caused by dextral reactivation of the Futagawa-Hinagu Fault system under the E-W maximum horizontal principal stress. The sequence comprises a set of earthquakes, including a magnitude 6.2 foreshock at the northern tip of the Hinagu Fault on April 14, 2016 and a magnitude 7.0 mainshock generated at the intersection of the two faults on April 16, 2016. The hypocenters of the mainshock and aftershocks migrated in the NE direction along the Futagawa Fault and terminated in the Mt. Aso area. The intersection of the two faults has a configuration similar to a ${\lambda}$-fault. The geometries and kinematics of these faults are comparable to the Yangsan-Ulsan Fault system in SE Korea, although the slip rates differ slightly. The results of age dating show that the Quaternary faults distributed along the northern segment of the Yangsan Fault and the Ulsan Fault are younger than those along the southern segment of the Yangsan Fault. This result is well consistent with a previous study using a Coulomb stress model. Thus, seismic activity along the middle and northern segments of the Yangsan Fault and the Ulsan Fault might be relatively high compared with that of the southern segment of the Yangsan Fault. Therefore, more detailed seismic hazard and paleoseismic studies should be carried out in this area.

A study on the management of drawings of Metropolitan Rapid Transit (도시철도 도면 관리에 관한 연구 -서울시 도시철도공사를 중심으로-)

  • Kim, Miyon
    • The Korean Journal of Archival Studies
    • /
    • no.11
    • /
    • pp.181-214
    • /
    • 2005
  • A metropolitan rapid transit system plays an essential role in the public transportation system of any large city, and its managing agency is usually charged with the responsibility of storing and managing the design drawings of the system. The drawings are important and historically valuable documents that must be kept permanently because they contain comprehensive data used to manage and maintain the system. However, no study has been performed in Korea on how well agencies are preserving and managing these records. Seoul Metropolitan Rapid Transit Corporation (SMRT) is the managing agency established by the city of Seoul to operate subway lines 5, 6, 7, and 8 more efficiently to serve its citizens. By the Act on Records Management in Public Institutions (ARMPI), SMRT should establish a records center to manage its records. Furthermore, all drawings produced by SMRT and other third-party entities should be in compliance with the Act. However, SMRT, as a form of local public corporation, can establish a records center in its own way. Accordingly, the National Archives & Records Service (NARS) has very little control over SMRT. Therefore, the purpose of this study is to research and analyze the present state of storage and management of the drawings of metropolitan rapid transit in SMRT and to find a desirable method of preservation and management for such drawings. In the process of the study, it was found that SMRT is considering a records center that would manage only general official documents, not the drawings, as required by ARMPI. SMRT does not have a records center, and the environment for managing the drawings is very poor. Although there is a plan to develop a new management system for the drawings, it will be non-compliant with ARMPI. 
The situation at SMRT does not necessarily reflect the state of all other cities' metropolitan rapid transit records management systems, but the state of records center establishment at other local public corporations is much the same as at SMRT. Continuous education and many further studies are needed in order to manage the drawings of metropolitan rapid transit efficiently through a records management system. This study proposes a records center based on both professional records centers and union records centers. Although metropolitan rapid transit is constructed and managed by each local public corporation, the overall characteristics and processes of metropolitan rapid transit projects are similar in nature. In consideration of the huge quantity, complexity, and specialized nature of the drawings produced and used during construction and operation of metropolitan rapid transit, and the overlap of each local public corporation's effort and cost in storing and managing the drawings, they need to be managed in a professional and united way. An example of a professional records center is the National Personnel Records Center (NPRC) in St. Louis, Missouri. NPRC is one of the National Archives and Records Administration's largest operations and a central repository of personnel-related records on former and present federal employees and the military. It provides extensive information to government agencies, military veterans, former federal employees, and family members, as well as researchers and historians. An example of a union records center is the Chinese union dangansil. It was established by several institutions and organizations, so records can be managed jointly and human effort and facilities can be saved. We should establish a professional and united records center that manages drawings of metropolitan rapid transit and provides service to researchers and the public as well as members of the related institutions. 
This study can be an impetus to improve interest in the management of not only drawings of metropolitan rapid transit but also drawings of various public facilities.

Identifying sources of heavy metal contamination in stream sediments using machine learning classifiers (기계학습 분류모델을 이용한 하천퇴적물의 중금속 오염원 식별)

  • Min Jeong Ban;Sangwook Shin;Dong Hoon Lee;Jeong-Gyu Kim;Hosik Lee;Young Kim;Jeong-Hun Park;ShunHwa Lee;Seon-Young Kim;Joo-Hyon Kang
    • Journal of Wetlands Research
    • /
    • v.25 no.4
    • /
    • pp.306-314
    • /
    • 2023
  • Stream sediments are an important component of water quality management because they are receptors of various pollutants, such as heavy metals and organic matter emitted from upland sources, and can be secondary pollution sources, adversely affecting the water environment. To effectively manage stream sediments, identification of the primary sources of sediment contamination and source-associated control strategies are required. We evaluated the performance of machine learning models in identifying primary sources of sediment contamination based on the physico-chemical properties of stream sediments. A total of 356 stream sediment data sets of 18 quality parameters, including 10 heavy metal species (Cd, Cu, Pb, Ni, As, Zn, Cr, Hg, Li, and Al), 3 soil parameters (clay, silt, and sand fractions), and 5 water quality parameters (water content, loss on ignition, total organic carbon, total nitrogen, and total phosphorus), were collected near abandoned metal mines and industrial complexes across the four major river basins in Korea. Two machine learning algorithms, linear discriminant analysis (LDA) and support vector machine (SVM) classifiers, were used to classify the sediments into four cases of different combinations of sampling period and location (i.e., mine in dry season, mine in wet season, industrial complex in dry season, and industrial complex in wet season). Both models showed good performance in the classification, with SVM outperforming LDA; the accuracy values of LDA and SVM were 79.5% and 88.1%, respectively. An SVM ensemble model was used for multi-label classification of the multiple contamination sources, including land uses in the upland areas within a 1 km radius of the sampling sites. The results showed that the multi-label classifier had performance comparable to the single-label SVM in classifying mines and industrial complexes, but was less accurate in classifying dominant land uses (50~60%). 
The poor performance of the multi-label SVM is likely due to overfitting caused by the small data set relative to the complexity of the model. A larger data set might increase the performance of the machine learning models in identifying contamination sources.
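The single-label setup can be illustrated with a from-scratch two-class Fisher linear discriminant (the core of LDA) on synthetic stand-ins for the sediment features. The data, class structure, and feature names below are illustrative only:

```python
# Two-class Fisher LDA sketch: separating "mine" vs "industrial complex"
# sediments from synthetic 3-dimensional feature vectors (stand-ins for
# the 18 measured quality parameters).
import numpy as np

rng = np.random.default_rng(42)
mine = rng.normal([2.0, 1.0, 0.5], 0.5, size=(100, 3))        # e.g. Cd, Pb, Zn
industrial = rng.normal([0.5, 2.0, 1.5], 0.5, size=(100, 3))
X = np.vstack([mine, industrial])
y = np.array([0] * 100 + [1] * 100)

# Within-class scatter matrix and the Fisher discriminant direction
mu0, mu1 = mine.mean(axis=0), industrial.mean(axis=0)
Sw = (mine - mu0).T @ (mine - mu0) + (industrial - mu1).T @ (industrial - mu1)
w = np.linalg.solve(Sw, mu1 - mu0)

# Classify by projecting onto w and thresholding at the projected midpoint
threshold = 0.5 * (mu0 @ w + mu1 @ w)
pred = (X @ w > threshold).astype(int)
accuracy = (pred == y).mean()
print(f"accuracy = {accuracy:.3f}")
```

An SVM instead maximizes the margin around the separating boundary, which is what gave the paper its higher accuracy; the multi-label extension trains one such classifier per source label.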

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information in the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a certain page highly important if it is referred to by many other pages. The degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it. A page with a high hub score links to many authoritative pages. As mentioned above, the link-structure-based ranking method has played an essential role in the World Wide Web (WWW), and nowadays many people recognize its effectiveness and efficiency. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed with an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links similar to the Web graph. As a result, the link-structure-based ranking method seems to be highly applicable to ranking Semantic Web resources. 
However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, which has only a recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, in contrast, encompasses various kinds of classes and properties, and consequently, ranking methods used in the WWW should be modified to reflect the complexity of the information space in the Semantic Web. Previous research addressed the ranking problem of query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to the authority score and the hub score of Kleinberg's algorithm, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of a resource on another resource depending on the characteristic of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several kinds of Semantic Web systems in order to validate their technique and reported experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain. In other words, the ratio of links to nodes should be high, or overall resources should be described in detail to a certain degree, for their algorithm to work properly. 
Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important yet densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may have a high score not because it is actually important, but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm which can solve the problems of the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach entertained by the previous research, a user, under our approach, determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are supposed to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. This approach closely reflects the way that people, in the real world, evaluate something, and will turn out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on the other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which have not been employed even when they have some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research. The mathematical analysis enabled us to simplify the calculation procedure. 
Finally, we summarize our experimental results and discuss further research issues.
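The link-structure scoring that the paper adapts to RDF graphs can be illustrated with a minimal power-iteration PageRank over a toy graph. Node names are arbitrary, and the class-oriented property weights the paper proposes are omitted:

```python
# Minimal PageRank by power iteration over a toy directed link graph.
# In the paper's setting, nodes would be RDF resources and each link's
# contribution would additionally carry a property weight.
import numpy as np

nodes = ["A", "B", "C", "D"]
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}

n = len(nodes)
idx = {u: i for i, u in enumerate(nodes)}
M = np.zeros((n, n))
for u, outs in links.items():
    for v in outs:
        M[idx[v], idx[u]] = 1 / len(outs)   # column-stochastic transition

d = 0.85                                    # damping factor
rank = np.full(n, 1 / n)
for _ in range(100):
    rank = (1 - d) / n + d * M @ rank

for name, score in sorted(zip(nodes, rank), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```

Node C, referred to by three other nodes, ends up ranked highest, which is exactly the "many in-links means importance" behavior, and also the source of the paper's third criticism: a merely common resource accumulates score the same way.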

Studies on Combining Ability and Inheritance of Major Agronomic Characters in Naked Barley (과맥의 주요형질에 대한 조합능력 및 유전에 관한 연구)

  • Kyung-Soo Min
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.23 no.2
    • /
    • pp.1-24
    • /
    • 1978
  • To obtain basic information on the breeding of early maturing, short-culm naked-barley varieties, the following 10 varieties, Ehime # 1, Shikoku # 42, Yamate hadaka, Eijo hadaka, Kagawa # 1, Jangjubaeggwa, Baegdong, Cheongmaeg, Seto hadaka and Mokpo # 42, were used in diallel crosses in 1974. Heading date, culm length and grain yield per plant for the parents, $F_1$'s and $F_2$'s of the 10x10 partial diallel crosses were measured in 1976 for analysis of their combining ability, heritability and inheritance. The results obtained are summarized below: 1. Heritabilities in the broad sense for heading date, culm length and grain yield per plant were 0.7831, 0.7599 and 0.6161, respectively. Narrow-sense heritabilities for heading date were 0.3972 in $F_1$ and 0.7789 in $F_2$, and for culm length 0.6567 in $F_1$ and 0.6414 in $F_2$. These values suggest that earliness and culm length could be successfully selected for in the early generations. Narrow-sense heritability for grain yield was 0.3775 in $F_1$ and 0.4170 in $F_2$. 2. GCA effects of the $F_1$ and $F_2$ generations for days to heading were high in the early direction for early-heading varieties, while for late-heading varieties the GCA effects were high in the late direction. Absolute values for GCA effects in $F_1$ were higher than in $F_2$. SCA effects of the $F_1$ and $F_2$ generations were high in the early-heading direction for Shikoku # 42 x Mokpo # 42, Ehime # 1 x Yamate hadaka, Shikoku # 42 x Yamate hadaka and Shikoku # 42 x Eijo hadaka. 3. The GCA effects for culm length in the $F_1$ and $F_2$ generations for tall varieties were high in the tall direction, while those for short varieties were high in the short direction. Absolute values for the GCA effects in $F_1$ were higher than in $F_2$. SCA effects were high in the short direction for the combinations of Mokpo # 42 with Ehime # 1, Yamate hadaka and Eijo hadaka. 4. 
The GCA effects for grain yields per plant in the $F_1$ and $F_2$ generations for varieties with high yields per plant were high in the high yielding direction, while varieties with low yields per plant were high in the low yielding direction. Absolute values of the $F_1$ GCA effects were higher than the $F_2$ effects. The combinations with high SCA effects were Mokpo # 42 x Shikoku # 42, Mokpo # 42 x Seto hadaka and Mokpo # 42 x Cheongmaeg. 5. Mean heading dates of the $F_1$ and $F_2$ generations were earlier than those of mean mid-parent. Mean heading date of the $F_1$ generation was earlier than the $F_2$ generation. Crosses involving early-heading varieties showed a greater $F_1, $ mid-parent difference than crosses involving late-heading varieties. 6. Heading date was controlled by a partial dominance effect. Nine varieties excluding Mokpo # 42 showed allelic gene action. Ehime # 1, Shikoku # 42, Kagawa # 1 and Mokpo # 42 were recessive to the other tested varieties. 7. The $F_2$ segregations of the 45 crosses for days to heading showed that 33 cosses were of such complexity that they could not be explained by simple genetic inheritance. One cross showed a 3 : 1 ratio where earliness was dominant. Another cross showed a 3 : 1 ratio where lateness was dominant. Four other crosses showed a 9 : 7 ratio for earliness while six crosses showed a 9 : 7 ratio for lateness. 8. Many transgressive segregants for earliness were found in the following crosses; Eijo hadaka x Baegdong, Ehime # 1 x Seto hadaka, Yamate had aka x Kagawa # 1, Kagawa # 1 x Sato hadaka, Shikoku # 42 x Kagawa # 1, Ehime # 1 x Kagawa # 1, Ehime # 1 x Shikoku # 42, Ehime # 1 x Eijo hadaka. 9. Mean culm length of the F, and F. generations were usually taller than the mid-parent where tall parent were used. These trends were high in the short varieties, but low in the tall varieties. 10. Culm length was controlled by partial dominace which was gonverned by allelic gene(s). 
Culm length showed a high degree of control by additive genes. Mokpo # 42 was recessive while Baegdong was dominant. 11. The F_2 frequency for culm length was in large part normally distributed around the midparent value. However, some combinations showed transgressive segregation for either tall or short culm length. From combinations between medium tall varieties, Ehime # 1, Shikoku # 42, Eijo hadaka and Seto hadaka, many short segregants could be found. 12. Mean grain yields per plant of the F_1 and F_2 generations were 6% and 5% higher than those of mid-parents, respectively. The varieties with high yields per plant showed a low rate of yield increase in their F_1's and F_2's while the varieties with low yields per plant showed a high rate of yield increase in their F_1's and F_1's. 13. Grain yields per plant showed over-dominnee effects, governed by non-allelic genes. Mokpo # 42 showed recessive genetic control of grain yield per plant. It remains difficult to clarify the inheritance of grain yields per plant from these data.
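The GCA and SCA effects reported in points 2-4 are combining-ability estimates computed from a table of cross means. As a minimal sketch, the Griffing Method 4 estimators (one set of F1 crosses only, with no parents or reciprocals — an assumption, since the abstract does not state which Griffing method the authors used) can be written as follows; the parent labels and values here are illustrative, not data from the study:

```python
from itertools import combinations

def griffing_method4(crosses, p):
    """Estimate GCA and SCA effects from F1 cross means (Griffing Method 4).

    crosses: dict mapping frozenset({i, j}) -> mean value of cross i x j,
             for all p*(p-1)/2 unordered parent pairs
    p:       number of parents (p > 2)
    Returns (gca, sca): a list of GCA effects per parent and a dict of
    SCA effects per cross.
    """
    x_tot = sum(crosses.values())  # grand total over all crosses
    # row totals: sum of all crosses involving parent i
    x_row = [sum(v for pair, v in crosses.items() if i in pair)
             for i in range(p)]
    # GCA_i = (p * x_i. - 2 * x..) / (p * (p - 2))
    gca = [(p * x_row[i] - 2 * x_tot) / (p * (p - 2)) for i in range(p)]
    # SCA_ij = x_ij - (x_i. + x_j.)/(p-2) + 2*x.. / ((p-1)*(p-2))
    sca = {}
    for i, j in combinations(range(p), 2):
        pair = frozenset({i, j})
        sca[pair] = (crosses[pair]
                     - (x_row[i] + x_row[j]) / (p - 2)
                     + 2 * x_tot / ((p - 1) * (p - 2)))
    return gca, sca

# Purely additive toy data: cross mean = 50 + a_i + a_j with
# hypothetical additive effects a = [2, -1, 0, -1] for 4 parents.
a = [2.0, -1.0, 0.0, -1.0]
crosses = {frozenset({i, j}): 50.0 + a[i] + a[j]
           for i, j in combinations(range(4), 2)}
gca, sca = griffing_method4(crosses, 4)
# With purely additive data, GCA recovers a and every SCA is 0.
```

Under this additive toy model the estimator returns the additive effects exactly and all SCA effects vanish, which is why the high SCA crosses in point 4 (e.g. Mokpo #42 × Shikoku #42) indicate non-additive (dominance or epistatic) action in those specific combinations.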
