• Title/Summary/Keyword: Image Processing Method


A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reduce the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and flexibility of design have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation can be time-intensive is the expense of computing the cost function, which must Fourier-transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process spends much time Fourier-transforming the encoded parameters on the hologram into the value to be solved; depending on the speed of the computer, it can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial process, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed: the initial population contains fewer random holograms and is compensated with approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms that are in fairly good agreement with the theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and that of the GA are the same except for the modified initialization step. Hence, the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the genetic algorithm and the neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still allowing holograms of high diffraction efficiency and uniformity to be achieved. This work was supported by grant No. M01-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
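
As a rough illustration of the hybrid initialization described above, here is a minimal Python sketch. It assumes a binary phase hologram stored as a flat 0/1 array, a hypothetical `ann_predict` stand-in for the trained network of step one, and a fitness defined as the fraction of Fourier-plane energy landing in the target region; the paper's actual encoding, cost function, and operator details may differ.

```python
import numpy as np

# Hypothetical stand-in for the trained ANN of step 1: maps a desired
# target image to an approximate binary phase hologram (0 or 1 per pixel).
def ann_predict(target):
    rng = np.random.default_rng(0)
    return (rng.random(target.shape) > 0.5).astype(np.uint8)  # placeholder

def fitness(hologram, target):
    # Cost evaluation: Fourier-transform the binary 0/pi phase pattern and
    # measure how much energy lands inside the desired target region.
    field = np.exp(1j * np.pi * hologram)
    recon = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return recon[target > 0].sum() / recon.sum()   # diffraction efficiency

def hybrid_ga(target, pop_size=30, iters=2000, p_cross=0.75, p_mut=0.001,
              seed_fraction=0.5, rng=np.random.default_rng(1)):
    n = target.size
    # Hybrid initialization: part ANN-seeded, part random -- the key
    # change from the classical GA described in the abstract.
    n_seed = int(pop_size * seed_fraction)
    pop = [ann_predict(target).ravel() for _ in range(n_seed)]
    pop += [rng.integers(0, 2, n, dtype=np.uint8)
            for _ in range(pop_size - n_seed)]
    for _ in range(iters):
        scores = [fitness(h.reshape(target.shape), target) for h in pop]
        def pick():                       # tournament selection, size 2
            i, j = rng.integers(0, pop_size, 2)
            return pop[i] if scores[i] >= scores[j] else pop[j]
        new_pop = []
        while len(new_pop) < pop_size:
            a, b = pick().copy(), pick().copy()
            if rng.random() < p_cross:    # one-point crossover
                cut = rng.integers(1, n)
                a[cut:], b[cut:] = b[cut:].copy(), a[cut:].copy()
            for child in (a, b):
                flip = rng.random(n) < p_mut   # bit-flip mutation
                child[flip] ^= 1
                new_pop.append(child)
        pop = new_pop[:pop_size]
    best = max(pop, key=lambda h: fitness(h.reshape(target.shape), target))
    return best.reshape(target.shape)
```

With a real trained network in place of `ann_predict`, the seeded individuals start near the optimum, which is exactly why fewer random holograms (and thus fewer costly FFT evaluations) are needed.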


A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science
    • /
    • v.57 no.1
    • /
    • pp.82-108
    • /
    • 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted within the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried-cultural-heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics utilizing open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning methods. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, this method uses only open-source software throughout the entire process. The results confirm that, in quantitative evaluation, the deviation between measurements of the actual artifact and of the 3D model was minimal. In addition, the quantitative quality analyses from the open-source software and the commercial software showed high similarity. However, data processing was overwhelmingly faster in the commercial software, which is believed to result from the higher computational speed of its improved algorithms. In qualitative evaluation, some differences in mesh and texture quality occurred: in the 3D models generated by open-source software, there was noise on the mesh surface, the mesh surface was harsh, and it was difficult to confirm the production marks and the expression of patterns on relics. Nevertheless, some of the open-source software did generate quality comparable to that of commercial software in both quantitative and qualitative evaluations. Open-source software for editing 3D models was able not only to post-process, match, and merge the 3D models, but also to adjust scale, produce the join surface, and render the images necessary for the actual measurement of relics. The final completed drawing was traced in a CAD program that is also open-source software. In archaeological research, photogrammetry is applicable to various processes, including excavation, report writing, and research on numerical data from 3D models. With the breakthrough development of computer vision, the types of open-source software have diversified and their performance has significantly improved. With such digital technology now highly accessible, the acquisition of 3D model data in archaeology will serve as basic data for the preservation of and active research on cultural heritage.
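
The quantitative evaluation step above (deviation between the actual artifact and the 3D model) can be sketched with the open-source trimesh library; the file names here are hypothetical, and the units depend on how the model was scaled.

```python
import trimesh

# Load the photogrammetry mesh and a reference mesh
# (hypothetical file names; any format trimesh supports works).
scan = trimesh.load("artifact_photogrammetry.ply")
ref = trimesh.load("artifact_reference.ply")

# Sample points on the scanned mesh and measure the distance from each
# to the closest point on the reference surface (cloud-to-mesh deviation).
points, _ = trimesh.sample.sample_surface(scan, 10000)
closest, distances, _ = trimesh.proximity.closest_point(ref, points)

print(f"mean deviation: {distances.mean():.4f}")
print(f"max deviation:  {distances.max():.4f}")
```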

A Study on Fast Iris Detection for Iris Recognition in Mobile Phone (휴대폰에서의 홍채인식을 위한 고속 홍채검출에 관한 연구)

  • Park Hyun-Ae;Park Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.19-29
    • /
    • 2006
  • As the security of personal information becomes more important in mobile phones, iris recognition technology is starting to be applied to these devices. Conventional iris recognition requires magnified iris images, which has meant capturing them with a camera having a large zoom-and-focus lens; but given the size and cost constraints of mobile phones, such lenses are difficult to use. However, with the rapid development and multimedia convergence trends in mobile phones, more and more companies have built mega-pixel cameras into their handsets. These make it possible to capture a magnified iris image without a zoom-and-focus lens: although the facial image is captured far from the user, the captured iris region possesses sufficient pixel information for iris recognition. In this case, however, the eye region must first be detected in the facial image for accurate iris recognition. So, we propose a new fast iris detection method based on corneal specular reflection, which is appropriate for mobile phones. To detect the specular reflection robustly, we present the theoretical background for estimating the size and brightness of the specular reflection based on eye, camera, and illuminator models. In addition, we use a successive On/Off scheme of the illuminator to detect optical/motion blurring and sunlight effects in the input image. Experimental results show that the total processing time for detecting the iris region averages 65 ms on a Samsung SCH-S2300 mobile phone (with a 150-MHz ARM 9 CPU). The rate of correct iris detection is 99% for indoor images and 98.5% for outdoor images.
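
A minimal sketch of the On/Off illuminator idea in OpenCV, assuming grayscale frame pairs and hand-picked area and intensity bounds standing in for the paper's model-based size and brightness estimates.

```python
import cv2

def detect_specular_reflection(frame_on, frame_off,
                               min_area=5, max_area=200, thresh=80):
    """Locate the corneal specular highlight from a successive
    illuminator-On/Off frame pair (grayscale uint8 images).
    min_area/max_area are assumed pixel-size bounds standing in for
    the model-based size estimate described in the abstract."""
    # The highlight appears only while the illuminator is on, so the
    # difference image suppresses ambient bright spots (e.g. sunlight).
    diff = cv2.subtract(frame_on, frame_off)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = [c for c in contours
                  if min_area <= cv2.contourArea(c) <= max_area]
    if not candidates:
        return None   # possible blurring or failed capture; retry
    (x, y), _ = cv2.minEnclosingCircle(max(candidates, key=cv2.contourArea))
    return int(x), int(y)   # approximate eye position for the iris search
```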

THE CURRENT STATUS OF BIOMEDICAL ENGINEERING IN THE USA

  • Webster, John G.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1992 no.05
    • /
    • pp.27-47
    • /
    • 1992
  • Engineers have developed new instruments that aid in diagnosis and therapy. Ultrasonic imaging has provided a nondamaging method of imaging internal organs: a complex transducer emits ultrasonic waves at many angles and reconstructs a map of internal anatomy, and also of blood velocities in vessels. Fast computed tomography permits reconstruction of the 3-dimensional anatomy and perfusion of the heart at 20-Hz rates. Positron emission tomography uses certain isotopes that produce positrons, which react with electrons to simultaneously emit two gamma rays in opposite directions; it locates the region of origin by using a ring of discrete scintillation detectors, each in electronic coincidence with an opposing detector. In magnetic resonance imaging, the patient is placed in a very strong magnetic field. The precession of the hydrogen atoms is perturbed by an interrogating field to yield two-dimensional images of soft tissue having exceptional clarity. As an alternative to radiology image processing, film archiving, and retrieval, picture archiving and communication systems (PACS) are being implemented: images from computed radiography, magnetic resonance imaging (MRI), nuclear medicine, and ultrasound are digitized, transmitted, and stored in computers for retrieval at distributed workstations. In electrical impedance tomography, electrodes are placed around the thorax; a 50-kHz current is injected between two electrodes, voltages are measured on all the other electrodes, and a computer processes the data to yield an image of the resistivity of a 2-dimensional slice of the thorax. During fetal monitoring, a corkscrew electrode is screwed into the fetal scalp to measure the fetal electrocardiogram; correlation with uterine contractions yields information on the status of the fetus during delivery. To measure cardiac output by thermodilution, cold saline is injected into the right atrium, and a thermistor in the right pulmonary artery yields temperature measurements from which we can calculate cardiac output. In impedance cardiography, we measure the changes in electrical impedance as the heart ejects blood into the arteries; motion artifacts are large, so signal averaging is useful during monitoring. An intraarterial blood gas monitoring system permits monitoring in real time: light is sent down optical fibers inserted into the radial artery, where it is absorbed by dyes that reemit it at a different wavelength, and the emitted light travels back up the fibers to an external instrument that determines O2, CO2, and pH. Therapeutic devices include the electrosurgical unit, in which a high-frequency electric arc is drawn between the knife and the tissue; the arc cuts while the heat coagulates, preventing blood loss. Hyperthermia has demonstrated antitumor effects in patients in whom all conventional modes of therapy have failed; methods of raising tumor temperature include focused ultrasound, radio-frequency power through needles, or microwaves. When the heart stops pumping, we use the defibrillator to restore normal pumping: a brief, high-current pulse through the heart synchronizes all cardiac fibers to restore normal rhythm. When the cardiac rhythm is too slow, we implant the cardiac pacemaker, whose electrode within the heart stimulates the cardiac muscle to contract at the normal rate. When the cardiac valves are narrowed or leak, we implant an artificial valve; silicone rubber and Teflon are used for biocompatibility. Artificial hearts powered by pneumatic hoses have been implanted in humans.
However, the quality of life gradually degrades, and death ensues. When kidney stones develop, lithotripsy is used: a spark creates a pressure wave that is focused on the stone and fragments it, and the pieces pass out normally. When the kidneys fail, the blood is cleansed by hemodialysis, in which urea passes through a porous membrane into a dialysate bath to lower its concentration in the blood. The blind are able to read by scanning the Optacon with their fingertips: a camera scans letters and converts them to an array of vibrating pins. The deaf are able to hear using a cochlear implant: a microphone detects sound and divides it into frequency bands, and 22 electrodes within the cochlea stimulate the acoustic nerve to provide sound patterns. For those who have lost muscle function in the limbs, researchers are implanting electrodes to stimulate the muscles; sensors in the legs and arms feed signals back to a computer that coordinates the stimulators to provide limb motion. For those with high spinal cord injury, a puff-and-sip switch can control a computer, permitting the disabled person to operate it and communicate with the outside world.
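
The thermodilution calculation mentioned above is conventionally expressed by the Stewart-Hamilton relation; the abstract does not spell it out, so this is the standard textbook form rather than the author's notation.

```latex
% Stewart-Hamilton equation for thermodilution cardiac output:
% V_i: injectate volume; T_b, T_i: blood and injectate temperatures;
% K: correction constant for catheter dead space and specific heats;
% \Delta T_b(t): measured blood-temperature change over time.
CO = \frac{V_i \, (T_b - T_i) \, K}{\int_0^{\infty} \Delta T_b(t)\, dt}
```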


A Real-Time Head Tracking Algorithm Using Mean-Shift Color Convergence and Shape Based Refinement (Mean-Shift의 색 수렴성과 모양 기반의 재조정을 이용한 실시간 머리 추적 알고리즘)

  • Jeong Dong-Gil;Kang Dong-Goo;Yang Yu Kyung;Ra Jong Beom
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.6
    • /
    • pp.1-8
    • /
    • 2005
  • In this paper, we propose a two-stage head tracking algorithm adequate for a real-time active camera system with pan-tilt-zoom functions. In the color-convergence stage, we first assume that the shape of a head is an ellipse, and its model color histogram is acquired in advance. Then, the mean-shift method is applied to roughly estimate the target position by examining the histogram similarity between the model and a candidate ellipse. To reflect the temporal change of object color and enhance the reliability of mean-shift-based tracking, the target histogram obtained in the previous frame is considered when updating the model histogram. In the updating process, to alleviate error accumulation due to outliers in the target ellipse of the previous frame, the target histogram of the previous frame is computed within an ellipse adaptively shrunken on the basis of the model histogram. In addition, to further enhance tracking reliability, we set the initial position closer to the true position by compensating for the global motion, which is rapidly estimated from two 1-D projection datasets. In the subsequent stage, we refine the position and size of the ellipse obtained in the first stage by using shape information; here, we define a robust shape-similarity function based on the gradient direction. Extensive experimental results show that the proposed algorithm performs head tracking well even when a person moves fast, the head size changes drastically, or the background contains heavy clutter and distracting colors. The proposed algorithm also runs at a processing speed of about 30 fps on a standard PC.
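
For orientation, a minimal OpenCV sketch of the color-convergence idea using built-in histogram back-projection and cv2.meanShift. The paper's ellipse model, adaptive histogram update, global-motion compensation, and shape-based refinement are all omitted; the video path and initial box are assumptions.

```python
import cv2

cap = cv2.VideoCapture("head.avi")       # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 200, 100, 80, 100           # assumed initial head box

# Model color histogram (hue channel), acquired in advance as in stage 1.
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
model_hist = cv2.calcHist([hsv_roi], [0], None, [32], [0, 180])
cv2.normalize(model_hist, model_hist, 0, 255, cv2.NORM_MINMAX)

term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Back-project the model histogram, then let mean-shift converge
    # to the nearby region whose color distribution matches the model.
    backproj = cv2.calcBackProject([hsv], [0], model_hist, [0, 180], 1)
    _, track_window = cv2.meanShift(backproj, track_window, term)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:            # Esc to quit
        break
```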

Analysis of Waterbody Changes in Small and Medium-Sized Reservoirs Using Optical Satellite Imagery Based on Google Earth Engine (Google Earth Engine 기반 광학 위성영상을 이용한 중소규모 저수지 수체 변화 분석)

  • Younghyun Cho;Joonwoo Noh
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.363-375
    • /
    • 2024
  • Waterbody change detection using satellite images has recently been carried out in various regions of South Korea, utilizing multiple types of sensors. This study uses optical satellite images from Landsat and Sentinel-2, processed in Google Earth Engine (GEE), to analyze long-term changes in the surface water area of four monitored small and medium-sized water-supply dams and agricultural reservoirs in South Korea. The analysis covers 19 years for the water-supply dams and 27 years for the agricultural reservoirs. By employing image-analysis methods such as the normalized difference water index (NDWI), Canny edge detection, and Otsu's thresholding for waterbody detection, the study reliably extracted water surface areas, allowing clear annual changes in waterbodies to be observed. When the time series of surface water areas derived from the satellite images was compared to measured water levels, a high correlation coefficient, above 0.8, was found for the water-supply dams. The agricultural reservoirs, however, showed a lower correlation, between 0.5 and 0.7, attributed to the characteristics of agricultural reservoir management and the inadequacy of the comparative data rather than to the satellite image analysis itself. The analysis also revealed several inconsistencies in the results for smaller reservoirs, indicating the need for further studies on them. The changes in surface water area calculated using GEE provide valuable spatial information on waterbody changes across the entire watershed, which cannot be identified solely by measuring water levels, and highlight the usefulness of efficiently processing extensive long-term satellite imagery. Based on these findings, future research could apply this method to a larger number of dam reservoirs of varying sizes, shapes, and monitoring statuses, potentially yielding additional insights into different reservoir groups.
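
A minimal sketch of the core detection chain (NDWI followed by Otsu's thresholding) outside GEE, using rasterio and scikit-image; the Sentinel-2 band file paths are hypothetical, and the Canny-based edge refinement is omitted.

```python
import rasterio
from skimage.filters import threshold_otsu

# Sentinel-2 band files (hypothetical paths): B3 = green, B8 = NIR.
with rasterio.open("S2_B03.tif") as g, rasterio.open("S2_B08.tif") as n:
    green = g.read(1).astype("float64")
    nir = n.read(1).astype("float64")
    pixel_area = abs(g.transform.a * g.transform.e)   # m^2 per pixel

# McFeeters NDWI: water pixels push toward +1, land toward -1.
ndwi = (green - nir) / (green + nir + 1e-10)

# Otsu's thresholding separates the bimodal water/land distribution,
# replacing a hand-picked NDWI cutoff.
t = threshold_otsu(ndwi)
water = ndwi > t
area_km2 = water.sum() * pixel_area / 1e6
print(f"threshold={t:.3f}, surface water area = {area_km2:.3f} km^2")
```

Repeating this per scene over the archive yields the surface-area time series that the study correlates with measured water levels.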

Evaluation of Magnetization Transfer Ratio Imaging by Phase Sensitive Method in Knee Joint (슬관절 부위에서 자화전이 위상감도법에 의한 자화전이율 영상 평가)

  • Yoon, Moon-Hyun;Seung, Mi-Sook;Choe, Bo-Young
    • Progress in Medical Physics
    • /
    • v.19 no.4
    • /
    • pp.269-275
    • /
    • 2008
  • Although MR imaging is generally applicable for depicting knee joint deterioration, common knee joint diseases are sometimes misread and misdiagnosed. In this study, we employed the magnetization transfer ratio (MTR) method to improve the diagnosis of various knee joint diseases. Spin-echo (SE) T2-weighted images (TR/TE 3,400-3,500/90-100 ms) were obtained in seven cases of knee joint deterioration; FSE T2-weighted images (TR/TE 4,500-5,000/100-108 ms) were obtained in seven cases; gradient-echo (GRE) T2-weighted images (TR/TE 9/4.56 ms, flip angle 50°, NEX 1) were obtained in three cases; and in six cases fat suppression was performed using a T2-weighted short-TI inversion recovery (STIR) sequence (TR/TE 2,894-3,215 ms/70 ms, NEX 3, ETL 9). The MTR was calculated for individual pixels after registration of the unsaturated and saturated images. After processing, the MTR images were displayed in gray scale. To improve diagnosis, three-dimensional isotropic volume images, MR tristimulus color mapping, and the MTR map were employed. The MTR images showed diagnostic image quality for assessing the patients' pathologies. The intensity difference between the MTR images and conventional MRI was seen on the color bar. The profile graph of the MT imaging effect provided a quantitative measure of the relative decrease in signal intensity due to the MT pulse; to diagnose pathologies of the knee joint, the profile-graph data were indicated on the image as a small cross. The present study indicates that MTR imaging of the knee joint is feasible. Investigating the physical changes in MTR imaging gives more insight into the physical and technical basis of the method. MTR images could be useful for rapid assessment of diseases in which unambiguous contrast is seen in MT images of patients with knee disorders.
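
The per-pixel MTR computation follows the standard definition MTR = (M0 - Msat) / M0 x 100%; a minimal numpy sketch, assuming the two images are already registered as the abstract describes.

```python
import numpy as np

def mtr_map(m0, msat, mask_threshold=50):
    """Per-pixel magnetization transfer ratio (percent) from registered
    unsaturated (m0) and MT-saturated (msat) images.
    mask_threshold is an assumed intensity cutoff to suppress background."""
    m0 = m0.astype("float64")
    msat = msat.astype("float64")
    mtr = np.zeros_like(m0)
    valid = m0 > mask_threshold          # avoid dividing by noise pixels
    # Standard definition: MTR = (M0 - Msat) / M0 * 100 [%]
    mtr[valid] = (m0[valid] - msat[valid]) / m0[valid] * 100.0
    return mtr
```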


Simultaneous Removal of NO and SO2 using Microbubble and Reducing Agent (마이크로버블과 환원제를 이용한 습식 NO 및 SO2의 동시제거)

  • Song, Dong Hun;Kang, Jo Hong;Park, Hyun Sic;Song, Hojun;Chung, Yongchul G.
    • Clean Technology
    • /
    • v.27 no.4
    • /
    • pp.341-349
    • /
    • 2021
  • In combustion facilities, the nitrogen and sulfur in fossil fuels react with oxygen to generate air pollutants such as nitrogen oxides (NOX) and sulfur oxides (SOX), which are harmful to the human body and cause environmental pollution. Regulations worldwide aim to reduce NOX and SOX, and various technologies are being applied to meet them. Commercialized methods to reduce NOX and SOX emissions include selective catalytic reduction (SCR), selective non-catalytic reduction (SNCR), and wet flue gas desulfurization (WFGD), but due to the disadvantages of these methods, many studies have sought to remove NOX and SOX simultaneously. Even the simultaneous removal methods, however, face problems: wastewater generated by the oxidants and absorbents, costs incurred by the catalysts and electrolysis needed to activate specific oxidants, and the harmfulness of gaseous oxidants themselves. Therefore, in this research, microbubbles generated in a high-pressure disperser and reducing agents were used to reduce costs and facilitate wastewater treatment, compensating for the shortcomings of existing simultaneous NOX/SOX treatment methods. It was confirmed through image processing and ESR (electron spin resonance) analysis that the disperser generates true microbubbles. NOX and SOX removal tests at various temperatures were also conducted using only microbubbles. With a reducing agent and microbubbles to reduce wastewater, the removal efficiencies of NOX and SOX were about 75% and 99%, respectively. When a small amount of oxidizing agent was added to this microbubble system, both NOX and SOX removal rates reached 99% or more. Based on these findings, the suggested method is expected to help solve the cost and environmental problems associated with the wet oxidation removal method.
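
A hedged sketch of one way the image-processing check could look, estimating bubble diameters with OpenCV's Hough circle transform; the image path, detector parameters, and the micrometer-per-pixel calibration are all assumptions, not the paper's setup.

```python
import cv2

UM_PER_PIXEL = 2.0   # assumed microscope calibration (micrometers per pixel)

img = cv2.imread("bubbles.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
blur = cv2.medianBlur(img, 5)

# Detect circular bubble outlines; parameters need tuning per setup.
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                           param1=100, param2=30, minRadius=2, maxRadius=60)
if circles is not None:
    diam_um = circles[0, :, 2] * 2 * UM_PER_PIXEL
    print(f"{len(diam_um)} bubbles, mean diameter {diam_um.mean():.1f} um")
    # Microbubbles are commonly defined as bubbles under ~50 um diameter.
    print(f"fraction below 50 um: {(diam_um < 50).mean():.2%}")
```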

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high-value-added business has been growing steadily in the culture and art area. To generate high value from a performance, audience satisfaction is necessary; flow is a critical factor in satisfaction, and it should be induced in the audience and measured. To evaluate the interest and emotion of an audience toward contents, producers or investors need an index for measuring flow, but it is neither easy to define flow quantitatively nor to collect an audience's reactions immediately. Previous studies evaluated group flow as the sum or average of each person's reaction: the flow or "good feeling" of each audience member was extracted from the face, especially changes of expression, and from body movement. But it was not easy to handle the large amount of real-time data from the individual sensor signals, and it was difficult to set up the experimental devices, in terms of both cost and environment, because every participant needed a personal sensor to check physical signals and a camera located in front of the head to capture looks. A simpler system for analyzing group flow is therefore needed. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure the synchronization, we built a real-time processing system using differential images and a Group Emotion Analysis (GEA) program for the flow judgment model. A differential image is obtained from the camera by subtracting the previous frame from the present frame, which yields the movement variation of the audience's reaction. After measuring the audience's reaction, the synchronization is divided into Dynamic State Synchronization and Static State Synchronization. Dynamic State Synchronization accompanies the audience's active reaction, while Static State Synchronization corresponds to little movement of the audience. Dynamic State Synchronization can be caused by the audience's surprised reactions to scary, creepy, or reversal scenes, while Static State Synchronization is triggered by impressive or sad scenes. We therefore showed the audience several short movies containing such scenes, which made them sad, made them clap, gave them the creeps, and so on. To check the movement of the audience, we defined two critical points, α and β: Dynamic State Synchronization is meaningful when the movement value is over the critical point β, while Static State Synchronization is effective under the critical point α. β was derived from the clapping movements of 10 teams instead of from the average amount of movement. After checking the reactive movement of the audience, the percentage ratio was calculated by dividing the number of "people having a reaction" by the "total people". In total, 37 teams were formed at the "2012 Seoul DMC Culture Open" and took part in the experiments. First, the staff induced them to clap; second, a basic scene was shown to neutralize the audience's emotion; third, a flow scene was displayed; fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, and the other 10 teams were shown a sad scene. The audience clapped and laughed at the amusing scene, shook their heads or hid by closing their eyes at the creepy scene, and fell silent at the sad or touching scene.
If the ratio was over about 80%, the group could be judged as synchronized and the flow as achieved. As a result, the audience showed similar reactions to similar stimulation at the same time and place. With additional normalization and experiments, the flow factor could be obtained through synchronization in much bigger groups, which should be useful for planning contents.
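
A minimal sketch of the differential-image pipeline under stated assumptions: per-person audience regions are given as boxes, and the critical points α and β are placeholders for the experimentally derived values rather than the paper's numbers.

```python
import cv2

ALPHA, BETA = 5.0, 25.0   # assumed critical points (per-region mean diff)

def movement_values(prev_frame, frame, regions):
    """Mean absolute pixel change per audience region.
    regions: list of (x, y, w, h) boxes, one per person (assumed layout)."""
    diff = cv2.absdiff(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY))
    return [diff[y:y + h, x:x + w].mean() for x, y, w, h in regions]

def group_synchronization(values):
    """Classify the group state via the reacting-people ratio,
    using the ~80% criterion described in the abstract."""
    dynamic = sum(v > BETA for v in values) / len(values)
    static = sum(v < ALPHA for v in values) / len(values)
    if dynamic >= 0.8:
        return "dynamic-state synchronization", dynamic
    if static >= 0.8:
        return "static-state synchronization", static
    return "no synchronization", max(dynamic, static)
```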

Current Status and Perspectives in Varietal Improvement of Rice Cultivars for High-Quality and Value-Added Products (쌀 품질 고급화 및 고부가가치화를 위한 육종현황과 전망)

  • 최해춘
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.47
    • /
    • pp.15-32
    • /
    • 2002
  • Efforts to enhance the grain quality of high-yielding japonica rice continued steadily through the 1980s and 1990s, along with self-sufficiency in rice production and increasing demand for high-quality rices. During this time, considerable progress and success were achieved in the development of high-quality japonica cultivars and quality-evaluation techniques, including the elucidation of the interrelationships between the physicochemical properties of the rice grain and the physical or palatability components of cooked rice. In the 1990s, some high-quality japonica rice cultivars and special rices adaptable to food processing, such as large-kernel, chalky-endosperm, aromatic, and colored rices, were developed, and their objective preference and utility were examined with a palatability meter, a rapid visco analyzer, and a texture analyzer. Recently, new special rices such as extremely low-amylose dull or opaque non-glutinous endosperm mutants were developed, and a high-lysine rice variety was developed for higher nutritional utility. The water-uptake rate and the maximum water-absorption ratio showed significantly negative correlations with the K/Mg ratio and the alkali digestion value (ADV) of milled rice. Rice materials showing a higher amount of hot-water absorption exhibited larger volume expansion of cooked rice. Harder rices with lower moisture content revealed a higher rate of water uptake at twenty minutes after soaking and a higher maximum water-uptake ratio at room temperature. These water-uptake characteristics were not associated with the protein and amylose contents of milled rice or with the palatability of cooked rice. The water/rice ratio (w/w) for optimum cooking averaged 1.52 in dry milled rices (12% wet basis), with a varietal range from 1.45 to 1.61, and the expansion ratio of milled rice after proper boiling averaged 2.63 (v/v). The major physicochemical components of the rice grain associated with the palatability of cooked rice were examined using japonica rice materials showing narrow varietal variation in grain size and shape, alkali digestibility, gel consistency, and amylose and protein contents, but considerable difference in the appearance and texture of cooked rice. The glossiness and gross palatability score of cooked rice were closely associated with the peak, hot-paste, and consistency viscosities, with some year-to-year differences. The high-quality rice variety "Ilpumbyeo" showed a smaller portion of amylose in the outer layer of the milled rice grain and a smaller, slower change in the iodine blue value of the extracted paste during twenty minutes of boiling. In scanning electron microscope images, this highly palatable rice also exhibited a very fine net structure in the outer layer and fine, spongy, well-swollen gelatinized starch granules in the inner layer and core of the cooked kernel, compared with a poorly palatable rice. The gross sensory score of cooked rice could be estimated, with a high coefficient of determination, by a multiple linear regression formula deduced from the relationship between the rice-quality components mentioned above and the eating quality of cooked rice. The α-amylose-iodine method was adopted for checking varietal differences in the retrogradation of cooked rice. The cultivars showing relatively slow retrogradation in aged cooked rice were Ilpumbyeo, Chucheongbyeo, Sasanishiki, Jinbubyeo, and Koshihikari.
A Tongil-type rice, Taebaegbyeo, and a japonica cultivar, Seomjinbyeo, showed relatively fast deterioration of cooked rice. In general, the cultivars with better eating quality showed less retrogradation and more sponginess in cooled cooked rice. The varieties exhibiting less retrogradation in cooled cooked rice also revealed higher hot viscosity and lower cool viscosity of rice flour in the amylogram. The sponginess of cooled cooked rice was closely associated with the magnesium content and volume expansion of cooked rice. The ratio of hardness change in cooked rice upon cooling was negatively correlated with the amount of solids extracted during boiling and with the volume expansion of cooked rice. The major physicochemical properties of the rice grain closely related to palatability may thus be directly or indirectly associated with the retrogradation characteristics of cooked rice. Softer gel consistency and lower amylose content in milled rice gave a higher ratio of popped rice and a larger bulk density of popping. Rice grains of stronger hardness showed a relatively higher popping ratio, and more chalky or less translucent rices exhibited a lower ratio of intact popped brown rice. The potassium and magnesium contents of milled rice were negatively associated with the gross score for noodle making (mixed half-and-half with wheat flour), and the rices better for noodle making released relatively less solid matter during boiling. Greater volume expansion of the batter for brown-rice bread resulted in better loaf formation and more springiness in the bread; higher-protein rices produced moister white rice bread, and the springiness of rice bread was also significantly correlated with high amylose content and hard gel consistency. Completely chalky and large-grain rices showed better suitability for fermentation and brewing. The glutinous rices were classified into nine varietal groups based on various physicochemical and structural characteristics of the endosperm, and there were close associations among these grain properties along with large varietal differences in suitability for various traditional food processing. Our future breeding efforts toward high palatability and processing utility or value-added products should focus not only on continuous enhancement of marketing and eating qualities but also on diversifying the morphological, physicochemical, and nutritional characteristics of rice grain suitable for processing various value-added rice foods.
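
The multiple-linear-regression estimation of the gross sensory score mentioned above can be sketched with scikit-learn; the feature set and the numbers below are hypothetical illustrations, not the paper's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: rows = rice samples, columns = physicochemical
# components (e.g. amylose %, protein %, peak viscosity, K/Mg ratio).
X = np.array([[18.2, 6.5, 250.0, 0.45],
              [19.0, 7.1, 238.0, 0.52],
              [17.5, 6.2, 262.0, 0.40],
              [20.3, 7.8, 225.0, 0.58],
              [18.8, 6.9, 244.0, 0.48]])
y = np.array([82.0, 74.5, 86.0, 68.0, 77.5])  # gross sensory scores

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)
print("R^2:", model.score(X, y))   # the coefficient of determination
print("predicted score:", model.predict([[18.0, 6.4, 255.0, 0.44]])[0])
```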