• Title/Summary/Keyword: Probability method

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risks. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolved the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because the model was trained only on corporate information that is also available for unlisted companies, the default risks of unlisted companies without stock price information can be derived appropriately. This makes it possible to provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although prediction of corporate default risk using machine learning has been actively studied recently, model bias issues exist because most studies make predictions with a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is used very widely in the market and sensitivity to differences in default risk is high. Strict standards are also required for the method of calculation. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience on credit ratings and changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture the complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default risk prediction models, which take less time to compute. To generate the sub-model forecasts used as input data for the Stacking Ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets to produce forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the Stacking Ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the Stacking Ensemble model's forecasts and those of each individual model, pairs between the Stacking Ensemble model and each individual model were constructed.
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, we used the nonparametric Wilcoxon rank-sum test to check whether the two models' forecasts making up each pair showed statistically significant differences. The analysis showed that the forecasts of the Stacking Ensemble model differed in a statistically significant way from those of the MLP model and the CNN model. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The Stacking Ensemble techniques proposed in this study can also help in designing models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical adoption by overcoming the limitations of existing machine learning-based models.
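
The stacking procedure described in this abstract follows a standard pattern: out-of-fold forecasts from the sub-models become the training features of a meta-model. Below is a minimal sketch under stated assumptions (scikit-learn and SciPy, synthetic stand-in data, CNN sub-model and the paper's actual hyperparameters omitted), not the authors' implementation:

```python
# Minimal stacking sketch (assumptions: scikit-learn/SciPy; synthetic data as a
# stand-in for the 10,545 x 160 financial dataset; CNN sub-model omitted).
import numpy as np
from scipy import stats
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=1000, n_features=160, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

sub_models = [
    RandomForestRegressor(n_estimators=200, random_state=0),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0),
]

# Out-of-fold forecasts with cv=7, mirroring the paper's division of the
# training data into seven pieces for sub-model prediction.
oof = np.column_stack([cross_val_predict(m, X_tr, y_tr, cv=7) for m in sub_models])
meta = LinearRegression().fit(oof, y_tr)  # meta-model combines sub-model forecasts

# Refit each sub-model on the full training data, then stack test forecasts.
test_cols = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in sub_models])
stacked = meta.predict(test_cols)

# Normality check on paired forecasts, then a nonparametric comparison.
# (The abstract names the rank-sum test; scipy.stats.wilcoxon is the paired
# signed-rank variant and stats.ranksums the two-sample rank-sum variant.)
_, p_norm = stats.shapiro(stacked - test_cols[:, 0])
_, p_diff = stats.wilcoxon(stacked, test_cols[:, 0])
print(p_norm, p_diff)
```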

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a landmark victory against Lee Sedol. Many people thought machines could not beat a human at Go because, unlike chess, the number of possible move paths exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence was spotlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled; in contrast, deep learning research on traditional business data and structured data analysis is hard to find. In this study, we examined whether the deep learning techniques studied so far can be used not only for recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, against MLP models, a traditional artificial neural network. However, since all network design alternatives cannot be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate model performance, showing how well the models classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique are as follows. The CNN algorithm recognizes features by reading values adjacent to a given value, but in business data the distance between fields carries little meaning because the fields are usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of each field's position.
For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best was the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it, and generally classify better. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well on a binary classification problem, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
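
As an illustration of the whole-record CNN-plus-dropout design described above, here is a hedged sketch in Keras: the filter spans all input fields at once and dropout is applied at 0.5, but the data, layer widths, and training settings are placeholders rather than the paper's exact configuration.

```python
# Sketch of a whole-record CNN with dropout for tabular binary classification
# (assumptions: Keras; random data as a stand-in for the Portuguese bank set).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import f1_score

n_fields = 16                                  # illustrative field count
X = np.random.rand(1000, n_fields, 1).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

model = keras.Sequential([
    keras.Input(shape=(n_fields, 1)),
    # kernel_size equals the number of fields, so each filter reads the whole
    # record at once instead of assuming locality between fields.
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),       # extra hidden layer for the decision
    layers.Dropout(0.5),                       # neurons dropped with probability 0.5
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# F1 score on the class of interest, as in the paper's evaluation.
pred = (model.predict(X, verbose=0).ravel() > 0.5).astype(int)
print(f1_score(y, pred))
```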

The Evaluation of Attenuation Difference and SUV According to Arm Position in Whole Body PET/CT (전신 PET/CT 검사에서 팔의 위치에 따른 감약 정도와 SUV 변화 평가)

  • Kwak, In-Suk;Lee, Hyuk;Choi, Sung-Wook;Suk, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.21-25 / 2010
  • Purpose: Accurate PET imaging requires a transmission scan for attenuation correction. Attenuation is affected by the acquisition conditions and patient position, so quantitative accuracy may be degraded in the emission scan image. This study measures how attenuation varies with the position of the patient's arms in whole-body PET/CT and comparatively analyzes the resulting SUV changes. Materials and Methods: A NEMA 1994 PET phantom was filled with $^{18}F$-FDG at a 4:1 concentration ratio between the insert cylinder and the background water. Phantom images were acquired by a 4 min emission scan after a CT transmission scan. To simulate the condition in which the patient's arms are lowered beside the body, images were also acquired with two Teflon inserts fixed to both sides of the phantom. The acquired images were reconstructed by the iterative reconstruction method (iterations: 2, subsets: 28) with CT-based attenuation correction, and a VOI was drawn on each image plane to measure the CT number and SUV and to comparatively analyze the axial uniformity (A.U = standard deviation / average SUV) of the PET images. Results: In the phantom test, comparing the cases with the Teflon inserts fixed and removed, the CT number of the cylinder increased from -5.76 HU to 0 HU, while the SUV decreased from 24.64 to 24.29 and the A.U from 0.064 to 0.052. The CT number of the background water increased from -6.14 HU to -0.43 HU, whereas the SUV decreased from 6.3 to 5.6 and the A.U from 0.12 to 0.10. For the patient images, the CT number increased from 53.09 HU to 58.31 HU and the SUV decreased from 24.96 to 21.81 when the patient's arms were positioned over the head rather than lowered. Conclusion: With the arms-up protocol, the SUV of the phantom and patient images decreased by 1.4% and 9.2%, respectively. The study concluded that the arm position is not highly significant in whole-body PET/CT scanning. However, scanning with the arms raised over the head increases the probability of patient movement because of the long scanning time, increases $^{18}F$-FDG uptake by brown fat around the shoulders, and imposes shoulder pain and discomfort on the patient. Considering all of these factors, it is reasonable to perform whole-body PET/CT scanning with the subject's arms lowered.
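
For reference, the two derived quantities used above reduce to one-line computations; the sketch below (numpy assumed, with an illustrative per-plane SUV list) reproduces the reported 1.4% phantom SUV decrease:

```python
# Axial uniformity and SUV-change calculations as defined above
# (numpy assumed; the per-plane SUV values are illustrative, not measured data).
import numpy as np

def axial_uniformity(suv_per_plane):
    """A.U = standard deviation / average SUV over the image planes."""
    suv = np.asarray(suv_per_plane, dtype=float)
    return suv.std() / suv.mean()

def percent_decrease(before, after):
    return (before - after) / before * 100

print(round(percent_decrease(24.64, 24.29), 1))    # 1.4 (% phantom SUV decrease)
print(axial_uniformity([24.1, 24.6, 24.3, 24.9]))  # example A.U over four planes
```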

Herbicidal Phytotoxicity under Adverse Environments and Countermeasures (불량환경하(不良環境下)에서의 제초제(除草劑) 약해(藥害)와 경감기술(輕減技術))

  • Kwon, Y.W.;Hwang, H.S.;Kang, B.H.
    • Korean Journal of Weed Science / v.13 no.4 / pp.210-233 / 1993
  • The herbicide has become as indispensable as nitrogen fertilizer in Korean agriculture from 1970 onwards. It is estimated that in 1991 more than 40 herbicides were registered for the rice crop and treated to an area 1.41 times the rice acreage; more than 30 herbicides were registered for field crops and treated to 89% of the crop area; and the treatment acreage of 3 non-selective foliar-applied herbicides reached 2,555 thousand hectares. During the last 25 years, herbicides have benefited Korean farmers substantially in labor, cost, and time of farming. Any herbicide that causes crop injury in ordinary use is not allowed to be registered in most countries. Herbicides, however, can cause crop injury when they are misused, abused, or used under adverse environments. Herbicide use exceeding 100% of the crop acreage implies an increased probability that herbicides are used incorrectly or under adverse conditions. This is evidenced by the authors' nationwide surveys in 1992 and 1993, in which about 25% of farmers reported having experienced herbicide-caused crop injury more than once during the preceding 10 years; one half of the injury incidences involved crop yield losses greater than 10%. Crop injury caused by herbicides did not occur to a serious extent in the 1960s, when fewer than 5 herbicides were used by farmers on less than 12% of the total acreage. Farmers ascribed about 53% of the herbicidal injury incidences in their fields to their own misuse, such as overdose, careless or improper application, off-time application, or wrong choice of herbicide, while 47% of the incidences were mainly due to adverse natural conditions. Such misuse can be reduced to a minimum through enhanced education and extension services for correct use and, although undesirable, through farmers' increased experience of phytotoxicity. The most difficult primary problem arises from the lack of countermeasures for farmers to cope with various adverse environmental conditions. At present almost all herbicides carry "Do not use!" instructions on the label to avoid crop injury under adverse environments. These "Do not use!" situations include sandy, highly percolating, or infertile soils, paddies fed by cool gushing water, poorly draining paddies, terraced paddies, too wet or dry soils, days of abnormally cool or high air temperature, etc. Meanwhile, the cultivated lands are in poor condition: the average organic matter content ranges from 2.5 to 2.8% in paddy soil and 2.0 to 2.6% in upland soil; the cation exchange capacity ranges from 8 to 12 meq; approximately 43% of paddies and 56% of uplands have sandy to sandy-gravel soil; and only 42% of paddies and 16% of upland fields are on flat land. This situation means that about 40 to 50% of soil-applied herbicides are used on fields where the label instructs "Do not use!". Yet no positive effort has been made for 25 years by the government or by companies to develop countermeasures. It is a genuinely complicated social problem. In the 1960s and 1970s a subsidy program to incorporate hillside red clayish soil into sandy paddies, as well as a campaign for increased application of compost to the fields, had been operating; yet the majority of the sandy soils remain sandy, and the program and campaign were stopped. With regard to this sandy soil problem, the authors have developed a method of split application of a herbicide onto sandy soil fields. A model case study has been carried out with success and is introduced with its key procedure in this paper. Climate is variable in its nature.
Among the climatic components, a sudden fall or rise in temperature is hardly avoidable for a crop plant. Korean spring air temperature fluctuates widely; for example, the daily mean air temperature of Inchon city on April 20, an early seeding time for crops, varied from 6.31 to $16.81^{\circ}C$ within a ${\pm}2$ SD range of the 30-year records. Seeding early in the season means an increased liability to phytotoxicity, and this is more evident in direct water-seeding of rice. About 20% of farmers depend on cold pumped groundwater for rice irrigation. If the well is deeper than 70 m, the fresh water may be as cold as about $10^{\circ}C$; it should be warmed to about $20^{\circ}C$ before irrigation, but this is not well practiced by farmers. In addition to the aforementioned adverse conditions, many other aspects need to be amended. Among them, the worst for liquid-spray herbicides is an almost total lack of proper knowledge of nozzle types, and of concern for even spraying, among administrators, rural extension officers, companies, and farmers. Nozzles and sprayers appropriate for herbicide spraying are not even available in the market. Most people perceive all pesticide sprayers as the same and care more about the speed and ease of spraying than about correct spraying. Many points need improvement to minimize herbicidal phytotoxicity in Korea, and there are many ways to achieve the goal. First of all, it is suggested that 1) the present evaluation of a new herbicide at standard and double doses in registration trials be expanded to standard, double, and triple doses, exploiting the response slope when deciding approval and recommending different doses for different situations on the label; 2) the government recognize the facts and nature of the present problem, correct the present misperceptions, and develop an appropriate national program for improvement of soil conditions, spray equipment, and extension manpower and services; 3) researchers enhance research on countermeasures; and 4) herbicide makers and dealers correct their misperceptions and sales policies, develop a database on the detailed use conditions of each consumer, and serve consumers with direct counsel based on that database.

Surgery Alone and Surgery Plus Postoperative Radiation Therapy for Patients with pT3N0 Non-small Cell Lung Cancer Invading the Chest Wall (흉벽을 침범한 pT3N0 비소세포폐암 환자에서 수술 단독과 수술 후 방사선치료)

  • 박영제;임도훈;김관민;김진국;심영목;안용찬
    • Journal of Chest Surgery / v.37 no.10 / pp.845-855 / 2004
  • Background: No general consensus is available regarding the necessity of postoperative radiation therapy (PORT), or its optimal technique, in patients with chest wall invasion (pT3cw) and node-negative (N0) non-small cell lung cancer (NSCLC). We retrospectively analyzed pT3cwN0 NSCLC patients who received PORT because of a presumed inadequate resection margin on surgical findings, and compared them with pT3cwN0 NSCLC patients who did not receive PORT during the same period. Material and Method: From August 1994 to June 2002, 22 pT3cwN0 NSCLC patients received PORT (the PORT (+) group) and 16 patients did not (the PORT (-) group). The radiation target volume for the PORT (+) group was confined to the tumor bed plus the immediately adjacent tissue only; no regional lymphatics were included. Prognostic factors were analyzed for all patients, and survival rates and failure patterns were compared between the two groups. Result: Age, tumor size, depth of chest wall invasion, and postoperative morbidities were greater in the PORT (-) group than in the PORT (+) group. In the PORT (-) group, four patients who were referred for PORT did not receive it, because of refusal (3 patients) or delayed wound repair (1 patient). For all patients, the overall survival (OS), disease-free survival (DFS), loco-regional recurrence-free survival (LRFS), and distant metastasis-free survival (DMFS) rates at 5 years were 35.3%, 30.3%, 80.9%, and 36.3%, respectively. In univariate and multivariate analyses, only PORT significantly affected survival. The 5-year OS rates were 43.3% in the PORT (+) group and 25.0% in the PORT (-) group (p=0.03). The DFS, LRFS, and DMFS rates were 36.9%, 84.9%, and 43.1% in the PORT (+) group and 18.8%, 79.4%, and 21.9% in the PORT (-) group, respectively. Three patients in the PORT (-) group died of intercurrent disease without evidence of recurrence. Few patients suffered acute or late radiation side effects, all of which were RTOG grade 2 or lower. Conclusion: The strategy of adding PORT to surgery to improve the probability not only of local control but also of survival can be justified, considering that local control is the most important component of successful treatment of pT3cw NSCLC patients, especially when the resection margin is not adequate. The authors markedly reduced the incidence and severity of acute and late side effects of PORT, without incurring too high a risk of regional failure, by eliminating the regional lymphatics from the radiation target volume.
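
The survival comparison reported above (5-year OS by group, p=0.03) is the standard Kaplan-Meier/log-rank setup. The sketch below uses the lifelines package with placeholder follow-up data; the authors' actual statistical software and patient data are not given, so this is only an illustration of the method, not a reproduction of the analysis.

```python
# Kaplan-Meier estimate and log-rank comparison between two groups
# (assumptions: lifelines package; synthetic follow-up times and events).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# Placeholder follow-up times (months) and 0/1 event indicators per group.
t_port,   e_port   = rng.exponential(60, 22), rng.integers(0, 2, 22)
t_noport, e_noport = rng.exponential(35, 16), rng.integers(0, 2, 16)

km = KaplanMeierFitter()
km.fit(t_port, e_port, label="PORT (+)")
print(km.predict(60))   # estimated survival probability at 60 months (5 years)

# Log-rank test between the groups, analogous to the reported p=0.03 comparison.
res = logrank_test(t_port, t_noport,
                   event_observed_A=e_port, event_observed_B=e_noport)
print(res.p_value)
```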

SURFACE ROUGHNESS OF COMPOSITE RESIN ACCORDING TO FINISHING METHODS (복합레진 표면의 연마방법에 따른 표면조도)

  • Min, Jeong-Bum;Cho, Kong-Chul;Cho, Young-Gon
    • Restorative Dentistry and Endodontics / v.32 no.2 / pp.138-150 / 2007
  • The purpose of this study was to evaluate differences in the surface roughness of composite resin according to composite resin type, polishing method, and use of a resin sealant. Two hundred rectangular specimens, sized $8{\times}3{\times}2mm$, were made of Micronew (Bisco, Inc., Schaumburg, IL, U.S.A.) and Filtek Supreme (3M ESPE Dental Products, St. Paul, MN, U.S.A.) and divided into two groups: Micronew (M group) and Filtek Supreme (S group). Specimens in each composite group were subdivided into five groups by the finishing and polishing instruments used: M1 & S1 (polyester strip), M2 & S2 (Sof-Lex disc), M3 & S3 (Enhance disc and polishing paste), M4 & S4 (Astropol), and M5 & S5 (finishing bur). After application of a resin surface sealant (Biscover), the letter B was appended to the polished groups' names, e.g., M1B and S1B. After the specimens were stored in distilled water for 24 hr, average surface roughness (Ra) was measured with a surface roughness tester. Representative specimens of each group were examined by FE-SEM (S-4700; Hitachi High Technologies Co., Tokyo, Japan). The data were analyzed using paired t-tests, ANOVA, and Duncan's test at the 0.05 probability level. The results were as follows: 1. The lowest Ra was achieved in all groups using the polyester strip, and the highest Ra in the M5, S5, and M5B groups using the finishing bur. On FE-SEM, the M1 and S1 groups showed the smoothest surfaces, while the M5 and S5 groups showed the roughest surfaces, with voids caused by debonding of filler on the polished specimens. 2. There was no significant difference in Ra between Micronew and Filtek Supreme before the application of resin sealant, but Micronew was smoother than Filtek Supreme after its application. 3. There was a significant interaction between composite resin type and polishing method before the application of resin sealant (p=0.000), but no significant interaction after its application. On FE-SEM, most composite resin surfaces were smooth after the application of resin sealant to the polished specimens. 4. Comparing before and after the application of resin sealant within the same composite and polishing method, the Ra of M4B and M5B was statistically lower than that of M4 and M5, and that of S5B was lower than that of S5 (p<0.05). In conclusion, the surface roughness produced by the polishing instruments differed according to the type of composite resin. Overall, the polyester strip produced the smoothest surface and the finishing bur the roughest. Application of resin sealant provided smooth surfaces in specimens polished with Enhance, Astropol, and the finishing bur, but not in specimens polished with the Sof-Lex disc.
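
The statistical workflow named above (paired t-test, ANOVA, post-hoc multiple comparison at the 0.05 level) maps onto a few SciPy calls. A hedged sketch with placeholder Ra values follows; note that Duncan's multiple range test has no standard SciPy/statsmodels implementation, so Tukey's HSD is shown as a stand-in for the post-hoc step.

```python
# Paired t-test, one-way ANOVA, and a post-hoc comparison for Ra data
# (assumptions: SciPy >= 1.8 for tukey_hsd; Ra values are placeholders).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ra_before = rng.normal(0.30, 0.05, 20)               # Ra before sealant (um)
ra_after  = ra_before - rng.normal(0.05, 0.01, 20)   # same specimens after sealant

# Paired t-test: same specimens measured before vs after sealant application.
t, p_paired = stats.ttest_rel(ra_before, ra_after)

# One-way ANOVA across the five polishing methods at the 0.05 level.
groups = [rng.normal(mu, 0.04, 20) for mu in (0.10, 0.22, 0.25, 0.28, 0.55)]
f, p_anova = stats.f_oneway(*groups)

# Post-hoc pairwise comparison (Tukey HSD as a stand-in for Duncan's test).
tukey = stats.tukey_hsd(*groups)
print(p_paired, p_anova, tukey.pvalue[0, 4])  # e.g., strip vs finishing bur
```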

Policy Direction for The Farmland Sizing Suitable to Regional Trait (지역특성을 반영한 영농규모화사업의 발전방향-충남지역을 중심으로-)

  • Shim, Jae-Sung
    • The Journal of Natural Sciences / v.14 no.1 / pp.83-121 / 2004
  • This study was carried out to examine how solid the production foundation of rice in Chung-Nam Province is and, if it is not, to probe alternative measures through the sizing of farms specializing in rice, the direction of which would be a pivot of rice industry-oriented policy. The results obtained can be summarized as follows: 1. The amount of rice production in Chung-Nam Province is the highest in Korea and its paddy field area is the second largest, implying that rice production in Chung-Nam Province would be severely influenced by global market conditions. Farms specializing in rice, the core group of rice farming, account for 7.7 percent of the total number of farm households in Korea. The average field area and the financial support provided to farm households by the government had a noticeable effect on the improvement of the farm-size program. 2. The farm-size program in Chung-Nam Province, operating from 1980 to 2002, increased the cultivated paddy field area to 19,484 hectares, and the program enhanced the buying and selling of farmland; farmland transactions reached 6,431 households and 16,517 hectares in 1995-2002. Meanwhile, long-term letting and hiring of farmland was so active that the transacted acreage reached 6,970 hectares, involving 7,059 farm households; however, the farm-exchange-and-unity program did not meet expectations, because retiring farm operators were reluctant to sell their farms. Another reason for the delay in farmland transactions was the social complications attendant upon the exchange-and-unity operation for scattered farms. Such difficulties work against achieving the targets of farm-size work in general. 3. The following measures are presented to propel the farm-size promotion program: a. An occupation-shift project, together with a social security program for retiring and elderly farm operators, should be promptly established, and incentives for promoting the letting-and-hiring work and the farm-exchange-and-unity program should also be set up. b. To establish an effective system of rice production, all farm operators should increase the unit-area yield of rice and lower the production cost. To do so, a great number of rice production teams equipped with managerial techniques and capabilities need to be organized, along with appropriate arrays of facilities including an information system. This plan should be in line with the structural implementation of regional integration based on farm system building. c. To extend farm size and improve farm management, we have to devise both the enlargement of individual farm size for maximized management and the use of farm-size grouping methods. In conclusion, the farm-size project in Chung-Nam Province, which has continued since the 1980s, was satisfactorily achieved. However, many problems remain to be solved to break down the barriers to attaining the desirable farm-size operation.
The farm-size project is closely related to farm specialization in rice; thus, positive support for farm households, including an integrated program for both retiring farmers and off-farm operators, should be considered to pursue the progressive development of the farm-size program, which is a key means to the successful reinforcement of rice farming in Chung-Nam Province.

Current Status and Perspectives in Varietal Improvement of Rice Cultivars for High-Quality and Value-Added Products (쌀 품질 고급화 및 고부가가치화를 위한 육종현황과 전망)

  • 최해춘
    • KOREAN JOURNAL OF CROP SCIENCE / v.47 / pp.15-32 / 2002
  • Efforts to enhance the grain quality of high-yielding japonica rice continued steadily during the 1980s and 1990s, along with self-sufficiency in rice production and increasing demand for high-quality rice. During this time, considerable progress and success were achieved in the development of high-quality japonica cultivars and quality evaluation techniques, including elucidation of the interrelationship between the physicochemical properties of the rice grain and the physical or palatability components of cooked rice. In the 1990s, some high-quality japonica rice cultivars and special rices adaptable for food processing, such as large-kernel, chalky-endosperm, aromatic, and colored rices, were developed, and their objective preference and utility were examined by a palatability meter, rapid visco analyzer, and texture analyzer. Recently, new special rices such as extremely low-amylose dull or opaque non-glutinous endosperm mutants were developed, as was a high-lysine rice variety for higher nutritional utility. The water uptake rate and the maximum water absorption ratio showed significantly negative correlations with the K/Mg ratio and alkali digestion value (ADV) of milled rice. Rice materials showing a higher amount of hot-water absorption exhibited larger volume expansion of cooked rice. Harder rices with lower moisture content revealed a higher rate of water uptake at twenty minutes after soaking and a higher ratio of maximum water uptake at room temperature. These water uptake characteristics were not associated with the protein and amylose contents of milled rice or the palatability of cooked rice. The water/rice ratio (w/w) for optimum cooking averaged 1.52 in dry milled rices (12% wet basis), with a varietal range from 1.45 to 1.61, and the expansion ratio of milled rice after proper boiling averaged 2.63 (v/v). The major physicochemical components of the rice grain associated with the palatability of cooked rice were examined using japonica rice materials showing narrow varietal variation in grain size and shape, alkali digestibility, gel consistency, and amylose and protein contents, but considerable difference in the appearance and texture of cooked rice. The glossiness and gross palatability score of cooked rice were closely associated with the peak, hot-paste, and consistency viscosities, with year-to-year differences. The high-quality rice variety "IIpumbyeo" showed a smaller portion of amylose in the outer layer of the milled rice grain and a smaller, slower change in the iodine blue value of the extracted paste during twenty minutes of boiling. This highly palatable rice also exhibited a very fine net structure in the outer layer and fine, spongy, well-swollen gelatinized starch granules in the inner layer and core of the cooked rice kernel, compared with a poorly palatable rice, in scanning electron microscope images. The gross sensory score of cooked rice could be estimated, with a high coefficient of determination, by a multiple linear regression formula deduced from the relationship between the rice quality components mentioned above and the eating quality of cooked rice. The $\alpha$-amylose-iodine method was adopted for checking varietal differences in the retrogradation of cooked rice. The rice cultivars revealing relatively slow retrogradation in aged cooked rice were IIpumbyeo, Chucheongbyeo, Sasanishiki, Jinbubyeo, and Koshihikari.
A Tongil-type rice, Taebaegbyeo, and a japonica cultivar, Seomjinbyeo, showed relatively fast deterioration of cooked rice. Generally, rice cultivars with better eating quality showed less retrogradation and more sponginess in cooled cooked rice. The rice varieties exhibiting less retrogradation in cooled cooked rice also revealed higher hot viscosity and lower cool viscosity of rice flour in the amylogram. The sponginess of cooled cooked rice was closely associated with the magnesium content and volume expansion of cooked rice. The ratio of hardness change of cooked rice on cooling was negatively correlated with the amount of solids extracted during boiling and the volume expansion of cooked rice. The major physicochemical properties of the rice grain closely related to the palatability of cooked rice may be directly or indirectly associated with its retrogradation characteristics. Softer gel consistency and lower amylose content in milled rice gave a higher ratio of popped rice and larger bulk density of popping. Rice grains with stronger hardness showed a relatively higher ratio of popping, and more chalky or less translucent rice exhibited a lower ratio of intact popped brown rice. The potassium and magnesium contents of milled rice were negatively associated with the gross score for noodle making when mixed half-and-half with wheat flour, and the rices better for noodle making revealed relatively less solid extraction during boiling. Greater volume expansion of the batter for brown rice bread resulted in better loaf formation and more springiness in the rice bread. Higher-protein rices produced relatively moister white rice bread. The springiness of rice bread was also significantly correlated with high amylose content and hard gel consistency. Completely chalky and large-grain rices showed better suitability for fermentation and brewing. The glutinous rices were classified into nine varietal groups based on various physicochemical and structural characteristics of the endosperm. There were close associations among these grain properties, and large varietal differences in suitability for various traditional food processing. Our future breeding efforts to improve rice quality for high palatability and processing utility or value-added products should focus not only on continuous enhancement of marketing and eating qualities but also on diversification of the morphological, physicochemical, and nutritional characteristics of rice grain suitable for processing various value-added rice foods.
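
Since the abstract mentions estimating the gross sensory score by a multiple linear regression formula with a high coefficient of determination, here is a minimal sketch of that kind of fit, assuming scikit-learn; the predictor names, values, and coefficients are illustrative placeholders, not the paper's actual components or formula.

```python
# Multiple linear regression of a sensory score on grain-quality components
# (assumptions: scikit-learn; all predictors and targets are synthetic).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 40
# Placeholder components: amylose %, protein %, K/Mg ratio, peak viscosity.
X = np.column_stack([
    rng.normal(18, 1.5, n),
    rng.normal(6.5, 0.5, n),
    rng.normal(0.8, 0.1, n),
    rng.normal(250, 20, n),
])
# Synthetic sensory score built from assumed (illustrative) coefficients.
score = X @ np.array([-0.2, -0.5, -1.0, 0.01]) + rng.normal(0, 0.3, n)

reg = LinearRegression().fit(X, score)
print(reg.coef_, reg.score(X, score))  # coefficients and R^2 (coefficient of determination)
```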