• Title/Summary/Keyword: Performance increase

Search Results: 11,667

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are verified in computer simulation and demonstrated experimentally. Computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have recently been developed for many applications, such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular owing to its capability of reaching nearly global optima. However, there is a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation is time intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into a fitness value. To remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications demanding high precision. We therefore attempt to combine the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is an iterative process comprising selection, crossover, and mutation operators [2]. The evaluation of the cost function, which selects the better holograms, plays an important role in the implementation of the GA; however, this evaluation consumes much time because the parameters encoded on the hologram must be Fourier transformed into the fitness value, and depending on the speed of the computer, this process can last up to ten minutes. It is more effective if, instead of merely generating random holograms for the initial population, a set of approximately desired holograms is employed; the initial population then needs fewer trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initialize the GA's procedure is proposed: the initial population contains fewer random holograms, compensated by approximately desired ones. Figure 1 shows the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure for synthesizing a hologram on a computer is divided into two steps. First, hologram simulation based on the ANN method [1] is carried out to acquire approximately desired holograms. Trained with a teaching data set of 9 characters obtained from the classical GA, using 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the artificial neural network yields approximately desired holograms that are in fairly good agreement with the theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm is the same as that of the GA except for the modified initialization step. Hence, the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured; the simulation and experimental results are in fairly good agreement with each other. In this paper, a genetic algorithm and a neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
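
To make the hybrid scheme concrete, here is a minimal Python sketch (not the authors' code) in which part of the GA's initial population is seeded by an ANN instead of being purely random, and fitness is the FFT-based diffraction efficiency of a binary phase hologram. The `ann_predict` helper, the target pattern, the number of seeded individuals, and the tournament/crossover details are hypothetical placeholders; only the population size (30), iteration count (2000), crossover probability (0.75), and mutation probability (0.001) come from the abstract.

```python
import numpy as np

# Hypothetical stand-in for the trained ANN of Ref. [1]: it should map a
# desired reconstruction pattern to an approximate binary phase hologram.
def ann_predict(target):
    return (np.random.rand(*target.shape) > 0.5).astype(np.uint8)  # placeholder

def fitness(hologram, target):
    """Diffraction efficiency of a binary (0, pi) phase hologram with respect
    to a binary target pattern, evaluated with a single FFT."""
    field = np.exp(1j * np.pi * hologram)        # encode 0/1 -> 0/pi phase
    recon = np.abs(np.fft.fft2(field)) ** 2      # far-field intensity
    recon /= recon.sum()
    return recon[target > 0].sum()               # energy landing on the target

def tournament(scores, rng, k=2):
    """Tournament selection of size k (assumed; the paper follows Ref. [2])."""
    idx = rng.integers(0, len(scores), k)
    return int(idx[scores[idx].argmax()])

def hybrid_ga(target, pop_size=30, n_iter=2000, p_cross=0.75, p_mut=0.001,
              n_seeded=10, rng=np.random.default_rng(0)):
    shape = target.shape
    # Hybrid step: part of the initial population is ANN-seeded, the rest random.
    pop = [ann_predict(target) for _ in range(n_seeded)]
    pop += [rng.integers(0, 2, shape, dtype=np.uint8) for _ in range(pop_size - n_seeded)]
    for _ in range(n_iter):
        scores = np.array([fitness(h, target) for h in pop])
        new_pop = []
        while len(new_pop) < pop_size:
            a, b = pop[tournament(scores, rng)], pop[tournament(scores, rng)]
            child = a.copy()
            if rng.random() < p_cross:               # uniform crossover (simplified)
                mask = rng.random(shape) < 0.5
                child[mask] = b[mask]
            child ^= (rng.random(shape) < p_mut).astype(np.uint8)  # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    scores = np.array([fitness(h, target) for h in pop])
    return pop[int(scores.argmax())]

# Tiny illustrative run (a small target and few iterations, for speed only).
target = np.zeros((64, 64)); target[28:36, 28:36] = 1
best = hybrid_ga(target, n_iter=50)
print(fitness(best, target))
```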


Recent Changes in Bloom Dates of Robinia pseudoacacia and Bloom Date Predictions Using a Process-Based Model in South Korea (최근 12년간 아까시나무 만개일의 변화와 과정기반모형을 활용한 지역별 만개일 예측)

  • Kim, Sukyung;Kim, Tae Kyung;Yoon, Sukhee;Jang, Keunchang;Lim, Hyemin;Lee, Wi Young;Won, Myoungsoo;Lim, Jong-Hwan;Kim, Hyun Seok
    • Journal of Korean Society of Forest Science / v.110 no.3 / pp.322-340 / 2021
  • Due to climate change and the consequent rise in spring temperatures, the flowering time of Robinia pseudoacacia has advanced, and simultaneous blooming has occurred in different regions of South Korea. These changes in flowering time have become a major crisis for the domestic beekeeping industry, and the demand for accurate prediction of the flowering time of R. pseudoacacia is increasing. In this study, we developed and compared the performance of four models predicting the flowering time of R. pseudoacacia for the entire country: a Single Model for the whole country (SM), a Modified Single Model (MSM) using correction factors derived from the SM, a Group Model (GM) estimating parameters for each region, and a Local Model (LM) estimating parameters for each site. To this end, bloom dates observed at 26 sites across the country over the past 12 years (2006-2017) and daily temperature data were used. Bloom dates for the north-central region, where the spring temperature increase was more than two-fold that of the southern regions, have advanced, and the difference from the southwest region decreased by 0.7098 days per year (p-value=0.0417). Model comparisons showed that the MSM and LM performed better than the other models, with 24% and 15% lower RMSE than the SM, respectively. Furthermore, validation with 16 additional sites over 4 years revealed that co-kriging of the LM performed better than nationwide expansion of the MSM (RMSE: p-value=0.0118, bias: p-value=0.0471). This study improved predictions of bloom dates for R. pseudoacacia and proposed methods for reliable expansion to the entire nation.
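
The abstract does not spell out the internals of the process-based models; as a generic illustration of how such a model turns daily temperatures into a predicted bloom date, the sketch below accumulates growing degree days above an assumed base temperature until an assumed forcing requirement is met. The base temperature, start day, and forcing requirement are placeholder parameters that a real SM/MSM/GM/LM would fit against observed bloom dates, not values from the paper.

```python
import numpy as np

def predict_bloom_doy(daily_tmean, t_base=5.0, start_doy=1, forcing_req=350.0):
    """Predict a bloom day-of-year from a 365-element array of daily mean
    temperatures (deg C) by accumulating degree days above `t_base`, starting
    at `start_doy`, until the forcing requirement is met.

    `t_base`, `start_doy`, and `forcing_req` are illustrative only; a real
    process-based model would fit them per site, per region, or nationally.
    """
    forcing = np.maximum(daily_tmean[start_doy - 1:] - t_base, 0.0)
    cumulative = np.cumsum(forcing)
    hits = np.nonzero(cumulative >= forcing_req)[0]
    if hits.size == 0:
        return None                       # requirement never met that year
    return start_doy + int(hits[0])       # predicted bloom day of year

# Example with synthetic temperatures rising through spring:
doy = np.arange(1, 366)
tmean = 5 + 12 * np.sin((doy - 100) / 365 * 2 * np.pi)
print(predict_bloom_doy(tmean))
```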

Performance Evaluation of Reconstruction Algorithms for DMIDR (DMIDR 장치의 재구성 알고리즘 별 성능 평가)

  • Kwak, In-Suk;Lee, Hyuk;Moon, Seung-Cheol
    • The Korean Journal of Nuclear Medicine Technology / v.23 no.2 / pp.29-37 / 2019
  • Purpose: DMIDR (Discovery Molecular Imaging Digital Ready, General Electric Healthcare, USA) is a PET/CT scanner designed to allow application of PSF (Point Spread Function), TOF (Time of Flight), and the Q.Clear algorithm. In particular, Q.Clear is a reconstruction algorithm that can overcome the limitations of OSEM (Ordered Subset Expectation Maximization) and reduce image noise at the voxel level. The aim of this paper is to evaluate the performance of the reconstruction algorithms and to optimize the algorithm combination for accurate SUV (Standardized Uptake Value) measurement and improved lesion detectability. Materials and Methods: A PET phantom was filled with $^{18}$F-FDG so that the hot-to-background radioactivity concentration ratio was 2:1, 4:1, and 8:1. Scans were performed using the NEMA protocols. Scan data were reconstructed using combinations of (1) VPFX (VUE Point FX (TOF)), (2) VPHD-S (VUE Point HD+PSF), (3) VPFX-S (TOF+PSF), (4) QCHD-S-400 (VUE Point HD+Q.Clear($\beta$-strength 400)+PSF), (5) QCFX-S-400 (TOF+Q.Clear($\beta$-strength 400)+PSF), (6) QCHD-S-50 (VUE Point HD+Q.Clear($\beta$-strength 50)+PSF), and (7) QCFX-S-50 (TOF+Q.Clear($\beta$-strength 50)+PSF). CR (Contrast Recovery) and BV (Background Variability) were compared, and the SNR (Signal to Noise Ratio) and RC (Recovery Coefficient) of counts and SUV were compared, respectively. Results: VPFX-S showed the highest CR value for the 10 and 13 mm spheres, and QCFX-S-50 showed the highest value for spheres larger than 17 mm. In the comparison of BV and SNR, QCFX-S-400 and QCHD-S-400 showed good results. The SUV measurements were proportional to the H/B ratio. The RC for SUV is inversely proportional to the H/B ratio, and QCFX-S-50 showed the highest value; in addition, the Q.Clear reconstructions using a $\beta$-strength of 400 showed lower values. Conclusion: When a higher $\beta$-strength was applied, Q.Clear showed better image quality by reducing noise. Conversely, when a lower $\beta$-strength was applied, Q.Clear showed increased sharpness and decreased PVE (Partial Volume Effect), so SUV can be measured with a higher RC than under conventional reconstruction conditions. An appropriate choice among these reconstruction algorithms can improve accuracy and lesion detectability; for this reason, the algorithm parameters should be optimized according to the purpose.
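
For readers unfamiliar with the phantom metrics compared here, the following sketch shows one common NEMA NU 2-style formulation of contrast recovery and background variability; the exact definitions used in the paper may differ, and the numbers below are illustrative, not measured values.

```python
def contrast_recovery_hot(c_hot, c_bkg, activity_ratio):
    """Percent contrast recovery for a hot sphere (NEMA NU 2-style):
    CR = ((C_hot / C_bkg) - 1) / (a_hot / a_bkg - 1) * 100,
    where C_* are mean ROI counts and `activity_ratio` is the true
    hot-to-background activity ratio (e.g. 2, 4, or 8)."""
    return ((c_hot / c_bkg) - 1.0) / (activity_ratio - 1.0) * 100.0

def background_variability(sd_bkg, c_bkg):
    """Percent background variability: BV = SD_bkg / C_bkg * 100."""
    return sd_bkg / c_bkg * 100.0

def signal_to_noise(c_hot, c_bkg, sd_bkg):
    """One common SNR definition for phantom spheres: (C_hot - C_bkg) / SD_bkg."""
    return (c_hot - c_bkg) / sd_bkg

# Illustrative ROI statistics only (not values from the paper):
cr = contrast_recovery_hot(c_hot=180.0, c_bkg=100.0, activity_ratio=4.0)
bv = background_variability(sd_bkg=6.0, c_bkg=100.0)
print(f"CR = {cr:.1f}%, BV = {bv:.1f}%")
```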

A Case Study on Improvement of Records Management Reference Table by Reorganizing BRM: The Case of Reorganization of Seoul's BRM and Records Management Reference Table (BRM 정비를 통한 기록관리기준표 개선사례 서울시 BRM 및 기록관리기준표 정비사례를 중심으로)

  • Lee, Se-Jin;Kim, Hwa-Kyoung
    • The Korean Journal of Archival Studies / no.50 / pp.273-309 / 2016
  • Unlike other government agencies, the city of Seoul experienced a three-year gap between the establishment of a function classification system and the introduction of a business management system. As a result, the city was unable to manage the current status of the function classification system, and this impeded the establishment of standards for records management. In September 2012, the Seoul Metropolitan Government integrated the department in charge of the standard sheet for records management with the department in charge of the function classification system into a new department, the "Information Disclosure Policy Division." This new department is mainly responsible for records management and information disclosure, and taking this as an opportunity, the city government pushed ahead with the maintenance project on BRM and Standards for Records Management (hereafter the "BRM maintenance project") over two years, from 2013 to 2014. This study was conducted to introduce the improvement of standards for records management through the BRM maintenance project, focusing on the case of Seoul. During the BRM maintenance project, Seoul established its own methodology to minimize both the disruption to the operation of the business management system and the burden on the staff in charge of the project. Furthermore, after the introduction of the business management system, the city government developed its own processes and applied the maintenance results to the system in close cooperation with the related departments, despite the lack of precedent for maintaining the classification system. In addition, training for the departments' BRM managers took place twice, before and after the maintenance, to ensure the successful performance of the BRM maintenance project and its stable operation in the future. During the maintenance period, newsletters were distributed to all employees to encourage their active participation and raise awareness of the importance of records management. To sustain the results of the maintenance project and systematically manage BRM in the future, the city government has mapped out several plans for improvement: to apply the "BRM classification system of each purpose" to the "Seoul Open Data Plaza" service; to reinforce the task management functions in the business management system; and to develop records management system functions for unit tasks. The researchers hope that this study will serve as a helpful reference so that organizations planning to introduce BRM or to maintain their classification systems experience fewer trials and errors.

Monthly HPLC Measurements of Pigments from an Intertidal Sediment of Geunso Bay Highlighting Variations of Biomass, Community Composition and Photo-physiology of Microphytobenthos (HPLC를 이용한 근소만 조간대 퇴적물내의 저서미세조류 현존량, 군집 및 광생리의 월 변화 분석)

  • KIM, EUN YOUNG;AN, SUNG MIN;CHOI, DONG HAN;LEE, HOWON;NOH, JAE HOON
    • The Sea: Journal of the Korean Society of Oceanography / v.24 no.1 / pp.1-17 / 2019
  • In this study, surveys were carried out from October 2016 to October 2017 along the tidal flat of Geunso Bay on the Taean Peninsula at the western edge of Korea. Sampling trips were made a total of 16 times, once or twice a month. To investigate their monthly variation, the biomass, community composition, and photo-physiology of the microphytobenthos (MPB) were analyzed by HPLC (high performance liquid chromatography). The total chlorophyll a (TChl a) concentration, used as an indicator of MPB biomass in the upper 1 cm sediment layer, ranged from 40.4 to 218.9 mg m$^{-2}$ throughout the sampling period. TChl a concentrations reached their maximum on the 24th of February and remained high throughout March, after which they started to decline; MPB biomass was thus high in winter and low in summer. The monthly variation of phaeophorbide a concentrations suggested that low grazing intensity by predators in winter may have partly contributed to the MPB winter bloom. In the monthly variation of the MPB community composition based on the major marker pigments, the concentration of fucoxanthin, the marker pigment of benthic diatoms, was the highest throughout the year. The concentrations of most marker pigments except chlorophyll b (chlorophytes) and peridinin (dinoflagellates) increased in winter; however, fucoxanthin increased the most, and the relative ratios of the major marker pigments to TChl a, except fucoxanthin, decreased during the same period. The vertical distributions of Chl a and oxygen in the sediments, measured with a fluorometer and an oxygen micro-optode, showed that Chl a and oxygen concentrations decreased with increasing depth of the sediment layers, and this tendency became more apparent in winter. Chl a was vertically uniform down to 12 mm from May to July, but the oxygen concentration in May decreased sharply below 1 mm. The increase in phaeophorbide a concentration observed at this time is likely caused by the increased oxygen consumption of grazing zoobenthos, which suggests that MPB cells are transported downward by bioturbation. The relative ratio DT/(DD+DT), obtained from diadinoxanthin (DD) and diatoxanthin (DT) and often used as an indicator of MPB photo-adaptation, decreased from October to March and increased in May, indicating that there were monthly differences in the activity of the xanthophyll cycle as well.
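
As a small illustration of the two ratio calculations used above, the sketch below computes the xanthophyll-cycle de-epoxidation state DT/(DD+DT) and marker-pigment-to-TChl a ratios from HPLC concentrations; the concentration values are placeholders, not data from the study.

```python
def deepoxidation_state(dd, dt):
    """Xanthophyll-cycle de-epoxidation ratio DT / (DD + DT) from
    diadinoxanthin (DD) and diatoxanthin (DT) concentrations."""
    return dt / (dd + dt)

def marker_ratios(pigments, tchl_a):
    """Ratio of each marker pigment concentration to total chlorophyll a."""
    return {name: conc / tchl_a for name, conc in pigments.items()}

# Placeholder concentrations in mg m^-2 (not values from the study):
pigments = {"fucoxanthin": 55.0, "chlorophyll b": 4.0, "peridinin": 1.5}
print(deepoxidation_state(dd=3.2, dt=0.8))     # -> 0.2
print(marker_ratios(pigments, tchl_a=120.0))
```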

Comparison of Models for Stock Price Prediction Based on Keyword Search Volume According to the Social Acceptance of Artificial Intelligence (인공지능의 사회적 수용도에 따른 키워드 검색량 기반 주가예측모형 비교연구)

  • Cho, Yujung;Sohn, Kwonsang;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.103-128 / 2021
  • Recently, investors' interest and the dissemination of stock-related information have been considered significant factors explaining stock returns and trading volume. In addition, for companies that develop, distribute, or utilize innovative new technologies such as artificial intelligence, it is difficult to accurately predict future stock returns and volatility because of macro-environmental and market uncertainty. Market uncertainty is recognized as an obstacle to the activation and spread of artificial intelligence technology, so research is needed to mitigate it. Hence, the purpose of this study is to propose a machine learning model that predicts the volatility of a company's stock price using the internet search volume of artificial intelligence-related technology keywords as a measure of investor interest. To this end, we use VAR (Vector Auto Regression) and an LSTM (Long Short-Term Memory) deep neural network to predict the stock market, and the stock price prediction performance using keyword search volume is compared according to the technology's stage of social acceptance. We also analyze sub-technologies of artificial intelligence to examine how the search volume of detailed technology keywords changes with the acceptance stage and how interest in a specific technology affects the stock market forecast. For this purpose, the words artificial intelligence, deep learning, and machine learning were selected as keywords, and we investigated how often each keyword appeared in online documents each week over five years, from January 1, 2015, to December 31, 2019. The stock price and transaction volume data of KOSDAQ-listed companies were also collected and used for the analysis. As a result, we found that the keyword search volume for artificial intelligence technology increased as the social acceptance of the technology increased; in particular, starting from the AlphaGo shock, the search volume for artificial intelligence itself and for detailed technologies such as machine learning and deep learning increased. The prediction models showed high accuracy, and it was confirmed that the acceptance stage yielding the best prediction performance differed for each keyword. In the stock price prediction based on keyword search volume for each social acceptance stage of the artificial intelligence technologies classified in this study, the prediction accuracy of the awareness stage was the highest, and the accuracy differed according to the keywords used in the prediction model for each stage. Therefore, when constructing a stock price prediction model using technology keywords, the social acceptance of the technology and the sub-technology classification should be considered. The results of this study provide the following implications. First, to predict the return on investment for companies based on innovative technology, it is most important to capture the recognition stage, in which public interest rapidly increases, in the social acceptance of the technology. Second, the fact that changes in keyword search volume and the accuracy of the prediction model vary with the social acceptance of a technology should be considered when developing decision support systems for investment, such as the big data-based robo-advisors recently introduced by the financial sector.
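
A minimal sketch, assuming PyTorch, of the LSTM side of the methodology: a window of weekly features (keyword search volumes plus lagged market variables) is mapped to a next-week volatility estimate. The window length, network size, feature set, and synthetic data are placeholders rather than the paper's configuration, and the VAR baseline is omitted.

```python
import torch
import torch.nn as nn

class SearchVolumeLSTM(nn.Module):
    """LSTM that maps a window of weekly features (keyword search volumes and
    lagged returns/volume) to a one-step-ahead volatility estimate."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # use the last time step

# Synthetic stand-in data: 200 samples, 12-week windows, 4 features
# (e.g. search volume for "artificial intelligence", "deep learning",
#  "machine learning", plus a lagged return) -- placeholders only.
torch.manual_seed(0)
X = torch.randn(200, 12, 4)
y = torch.rand(200, 1)                     # next-week volatility (target)

model = SearchVolumeLSTM(n_features=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```

In the study's setup, a model like this would be trained separately for the keyword sets of each social-acceptance stage and compared against the VAR baseline.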

Effect of Corn Silage and Soybean Silage Mixture on Rumen Fermentation Characteristics In Vitro, and Growth Performance and Meat Grade of Hanwoo Steers (옥수수 사일리지와 대두 사일리지의 혼합급여가 In Vitro 반추위 발효성상 및 거세한우의 성장과 육질등급에 미치는 영향)

  • Kang, Juhui;Lee, Kihwan;Marbun, Tabita Dameria;Song, Jaeyong;Kwon, Chan Ho;Yoon, Duhak;Seo, Jin-Dong;Jo, Young Min;Kim, Jin Yeoul;Kim, Eun Joong
    • Journal of The Korean Society of Grassland and Forage Science / v.42 no.2 / pp.61-72 / 2022
  • The present study was conducted to examine the effect of soybean silage as a crude protein supplement to corn silage in the diet of Hanwoo steers. The first experiment evaluated the effect of replacing corn silage with soybean silage at different levels on rumen fermentation characteristics in vitro. Commercially purchased corn silage was replaced with 0, 4, 8, or 12% soybean silage. Half a gram of substrate was added to 50 mL of buffer and rumen fluid from Hanwoo cows and then incubated at 39℃ for 0, 3, 6, 12, 24, and 48 h. At 24 h, the pH of the control (corn silage only) was lower (p<0.05) than that of the soybean-supplemented silages, and the pH increased numerically with increasing proportions of soybean silage. Other rumen parameters, including gas production, ammonia nitrogen, and total volatile fatty acids, were variable but tended to increase with increasing proportions of soybean silage. In the second experiment, 60 Hanwoo steers were allocated to one of three dietary treatments: CON (concentrate with Italian ryegrass), CS (concentrate with corn silage), and CS4% (concentrate with corn silage and 4% soybean silage). The animals were offered the experimental diets for 110 days during the growing period and then finished on commercially available beef diets to evaluate the effect of soybean silage on animal performance and meat quality. With soybean silage, the weight gain and feed efficiency of the animals were greater than those of the other treatments during the growing period (p<0.05); however, the dietary treatments had little effect on meat quality except for meat color. In conclusion, corn silage mixed with soybean silage, even at a low level, provided a better ruminal environment and animal performance, particularly increased carcass weight and feed efficiency during the growing period.

A Comparative Study on the Growth Performance of Korean Indigenous Chicken Pure Line by Sex and Twelve Strains (토종닭 순계 12계통과 성별에 따른 성장능력 비교 연구)

  • Kim, Kigon;Park, Byoungho;Jeon, Iksoo;Choo, Hyojun;Ham, Jinjoo;Park, Keon;Cha, Jaebeom
    • Korean Journal of Poultry Science / v.48 no.4 / pp.193-206 / 2021
  • This study aimed to characterize the growth performance, by sex, of twelve strains of Korean indigenous chicken pure lines conserved at the Poultry Research Institute, National Institute of Animal Science, Rural Development Administration. The effects of sex and strain on body weight were significant in every period, with males being heavier than females in all periods. For biweekly weight gain, the tendency to increase rapidly from hatch to six weeks of age and to decrease from twelve to fourteen weeks of age was common across all sexes and strains, although the age and number of weight-gain peaks differed significantly by sex and strain. Regardless of sex and strain, the coefficient of determination and adjusted coefficient of determination showed a high goodness of fit (99.1~99.9%) for the growth functions; however, the goodness of fit of each model varied by sex and strain. The von Bertalanffy function fit the growth curves best in all female strains except strain D, whereas the Gompertz function fit best in all male strains except strain C. The Logistic function showed the lowest goodness of fit across all sexes and strains. Mature weights were in the order of the von Bertalanffy, Gompertz, and Logistic models, while the growth ratio and maturing rate followed the order of the Logistic, Gompertz, and von Bertalanffy functions. This information could be useful for Korean indigenous chicken management and for designing crossbreeding tests and breeding programs.
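
The three growth functions compared above are commonly written as follows (parameterizations vary slightly between papers); the sketch fits each to a synthetic biweekly weight series with SciPy and reports R², standing in for the study's actual measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Common parameterizations (A = mature weight, b = shape, k = maturing rate);
# the exact forms used in the paper may differ slightly.
def von_bertalanffy(t, A, b, k):
    return A * (1.0 - b * np.exp(-k * t)) ** 3

def gompertz(t, A, b, k):
    return A * np.exp(-b * np.exp(-k * t))

def logistic(t, A, b, k):
    return A / (1.0 + b * np.exp(-k * t))

# Synthetic biweekly body weights (weeks 0-20), placeholders only.
t = np.arange(0, 22, 2, dtype=float)
w = 2200 / (1 + 18 * np.exp(-0.35 * t)) + np.random.default_rng(1).normal(0, 20, t.size)

models = [
    ("von Bertalanffy", von_bertalanffy, [2500, 0.9, 0.2]),
    ("Gompertz", gompertz, [2500, 4.0, 0.2]),
    ("Logistic", logistic, [2500, 15.0, 0.3]),
]
for name, f, p0 in models:
    params, _ = curve_fit(f, t, w, p0=p0, maxfev=20000)
    resid = w - f(t, *params)
    r2 = 1.0 - np.sum(resid**2) / np.sum((w - w.mean())**2)
    print(f"{name}: A={params[0]:.0f}, k={params[2]:.3f}, R2={r2:.4f}")
```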

Prediction of patent lifespan and analysis of influencing factors using machine learning (기계학습을 활용한 특허수명 예측 및 영향요인 분석)

  • Kim, Yongwoo;Kim, Min Gu;Kim, Young-Min
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.147-170 / 2022
  • Although the number of patents, one of the core outputs of technological innovation, continues to increase, the number of low-value patents has also increased sharply, so efficient evaluation of patents has become important. Estimation of patent lifespan, which represents the private value of a patent, has been studied for a long time, but in most cases it has relied on linear models, and even when machine learning methods were used, the interpretation or explanation of the relationship between the explanatory variables and patent lifespan was insufficient. In this study, patent lifespan (the number of renewals) is predicted based on the idea that patent lifespan represents the value of the patent. For this research, 4,033,414 patents applied for between 1996 and 2017 and eventually granted were collected from the USPTO (US Patent and Trademark Office). To predict patent lifespan, we use variables that reflect the characteristics of the patent, the patent owner, and the inventors. We build four different models (Ridge Regression, Random Forest, Feed-Forward Neural Network, and Gradient Boosting Models) and perform hyperparameter tuning through 5-fold cross-validation. The performance of the resulting models is then evaluated, and the relative importance of the predictors is presented. In addition, based on the Gradient Boosting Model, which showed excellent performance, Accumulated Local Effects plots are presented to visualize the relationship between the predictors and patent lifespan. Finally, we apply Kernel SHAP (SHapley Additive exPlanations) to present the rationale behind the evaluation of individual patents and discuss the applicability of the approach to patent evaluation systems. This study is academically meaningful in that it contributes cumulatively to existing studies estimating patent lifespan and supplements the limitations of linear models, and it is practically meaningful in that it suggests a method for deriving the evaluation basis for individual patent value and examines the applicability to patent evaluation systems.
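
A minimal sketch, assuming scikit-learn and the shap package, of the workflow described above: a gradient boosting regressor tuned with 5-fold cross-validation and explained with Kernel SHAP. The feature names are hypothetical examples of patent, owner, and inventor characteristics, and the data is synthetic; they are not the study's variables.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Hypothetical features standing in for patent / owner / inventor characteristics.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "n_claims": rng.integers(1, 60, 1000),
    "n_backward_citations": rng.integers(0, 40, 1000),
    "family_size": rng.integers(1, 15, 1000),
    "owner_portfolio_size": rng.integers(1, 5000, 1000),
    "n_inventors": rng.integers(1, 10, 1000),
})
y = rng.integers(0, 4, 1000)            # number of renewals (placeholder target)

# 5-fold cross-validated tuning of a gradient boosting model.
search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3],
                "learning_rate": [0.05, 0.1]},
    cv=5, scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
model = search.best_estimator_

# Kernel SHAP explanation of individual predictions (model-agnostic, so the
# same call works for any of the four model families compared in the paper).
background = shap.sample(X, 100, random_state=0)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X.iloc[:5])
print(shap_values.shape)                # per-feature contributions for 5 patents
```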

A comparative study of risk according to smoke control flow rate and methods in case of train fire at subway platform (지하철 승강장에서 열차 화재 시 제연풍량 및 방식에 따른 위험도 비교 연구)

  • Ryu, Ji-Oh;Lee, Hu-Yeong
    • Journal of Korean Tunnelling and Underground Space Association / v.24 no.4 / pp.327-339 / 2022
  • The purpose of this study is to present an effective smoke control flow rate and mode (air supply or exhaust) for securing safety, based on a quantitative risk assessment of platform smoke control when a train fire occurs at a subway platform. To this end, fire scenarios were created using a side platform with a central staircase as the model, and fire analysis was performed for each scenario to compare fire propagation characteristics and the ASET; evacuation analysis was then performed to predict the number of deaths. In addition, a fire accident rate (F) / number of deaths (N) diagram (F/N diagram) was prepared for each scenario to compare and evaluate the risk according to the smoke control flow rate and mode. In the ASET analysis of the harmful factors defined by the performance-oriented design methods and standards for firefighting facilities (carbon monoxide, temperature, and visible distance), the effect of visible distance was the largest. When the delay in the burning train entering the platform was not taken into account, the ASET was found to be about 800 seconds at an air flow rate of 4 × 833 m³/min. The estimated number of deaths varied greatly depending on the location of the burning car: a fire in a car adjacent to the stairs produced up to three times as many deaths as a fire in the lead car. Moreover, as the smoke control flow rate increased, the number of fatalities decreased, and the reduction was greater for the air supply mode than for the exhaust mode; at a supply flow rate of 4 × 833 m³/min, the expected number of deaths was reduced to 13% of that without ventilation. The risk assessment showed that the current societal risk criteria are satisfied when smoke control is performed, and the expected number of deaths, 29.9 per 10,000 years without smoke control, was analyzed to decrease to 4.36 when smoke control was performed at a flow rate of 4 × 833 m³/min.
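
As an illustration of how the societal-risk F/N diagram mentioned above is assembled, the sketch below computes, for each fatality level N, the cumulative annual frequency F of scenarios with at least N deaths; the scenario frequencies and death counts are placeholders, not the study's results.

```python
import numpy as np

def fn_curve(frequencies, deaths):
    """Return (N, F) pairs for a societal-risk F/N curve: for each fatality
    level N, F is the total annual frequency of scenarios with >= N deaths."""
    deaths = np.asarray(deaths, dtype=float)
    frequencies = np.asarray(frequencies, dtype=float)
    levels = np.unique(deaths[deaths > 0])
    cum_freq = [frequencies[deaths >= n].sum() for n in levels]
    return levels, np.array(cum_freq)

# Placeholder scenarios: (annual fire frequency, predicted number of deaths)
# for different smoke-control flow rates and fire-car positions.
freq   = [1e-4, 1e-4, 5e-5, 5e-5]
deaths = [12,   35,   4,    9]

N, F = fn_curve(freq, deaths)
for n, f in zip(N, F):
    print(f"N >= {n:>4.0f}: F = {f:.2e} /year")
```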