• Title/Summary/Keyword: The Combined Model

A Study on Developing a Model for Cancer Damage Cost Due to Risk from Benzene in Ulsan Metropolitan City (울산 지역에서 대기중 벤젠으로 인한 암 사망 손실비용 추정 모형에 관한 연구)

  • Lee, Yong-Jin;Kim, Ye-Shin;Shin, Dong-Chun;Shin, Young-Chul
    • Environmental and Resource Economics Review / v.13 no.1 / pp.49-82 / 2004
  • The study aimed to evaluate the cancer damage cost due to risk from benzene inhalation. We performed a health risk assessment based on US EPA guidelines to estimate the annual population risk in Ulsan metropolitan city. We also estimated the willingness-to-pay (WTP) for reducing the cancer mortality rate in order to evaluate the value of statistical life (VSL). We combined the annual population risk and the VSL to calculate the cancer damage cost. In the health risk assessment, we applied the benzene unit risk (2.2×10⁻⁶ ~ 7.8×10⁻⁶) from the US EPA's Integrated Risk Information System to assess the annual population risk. The average concentration of benzene in ambient air was 7.88 μg/m³ (min: 1.16, max: 23.32 μg/m³). We targeted an exposed population of 516,641 persons aged over 30 years. Using a Monte Carlo simulation for uncertainty analysis, we estimated that the population risk of benzene over ten years in Ulsan city is 2.90 persons (5th percentile: 0.32, 95th percentile: 9.11 persons). The monthly average WTP for a 5/1,000 cancer mortality reduction over ten years is 14,852 Won (95% CI: 13,135~16,794 Won), and the implied VSL is 36 million Won (95% CI: 30~40 million Won). The cancer damage cost due to risk from benzene inhalation over 10 years in Ulsan city is about 104 million Won (5th percentile: 13, 95th percentile: 328 million Won). The health benefit of reducing the cancer mortality risk of benzene is about 50 million Won in Ulsan metropolitan city. It is important to note, however, that this figure does not cover the entire health damage cost of cancer mortality in the area. We have only proposed a model for evaluating cancer risk reduction, so an integrated application to the damage cost of total VOCs, including benzene, remains to be evaluated.
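
As a rough illustration of how the combined model multiplies population risk by VSL, the following sketch reruns the calculation. The unit-risk range, concentration statistics, population, and VSL come from the abstract, while the distribution shapes and the crude 10-year scaling are assumptions, not the ones fitted in the study.

```python
# Minimal Monte Carlo sketch of the combined model: excess cancer cases
# from the unit-risk range, times the implied VSL. Distribution shapes
# and the 10-year scaling below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

unit_risk = rng.uniform(2.2e-6, 7.8e-6, n)     # per ug/m^3, US EPA IRIS range
conc = rng.triangular(1.16, 7.88, 23.32, n)    # ambient benzene, ug/m^3 (assumed shape)
population = 516_641                           # exposed persons aged over 30

lifetime_cases = unit_risk * conc * population # lifetime excess cancer cases
cases_10yr = lifetime_cases * 10.0 / 70.0      # crude 10-year exposure scaling

vsl = 36e6                                     # implied VSL in Won
damage_cost = cases_10yr * vsl

print(f"10-yr cases, median (5th-95th pct): {np.median(cases_10yr):.2f} "
      f"({np.percentile(cases_10yr, 5):.2f}-{np.percentile(cases_10yr, 95):.2f})")
print(f"damage cost, median: {np.median(damage_cost)/1e6:.0f} million Won")
```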

Estimation of Genetic Parameters for Milk Production Traits in Holstein Dairy Cattle (홀스타인의 유생산형질에 대한 유전모수 추정)

  • Cho, Chungil;Cho, Kwanghyeon;Choy, Yunho;Choi, Jaekwan;Choi, Taejeong;Park, Byoungho;Lee, Seungsu
    • Journal of Animal Science and Technology / v.55 no.1 / pp.7-11 / 2013
  • The purpose of this study was to estimate (co)variance components of three milk production traits for genetic evaluation using a multiple-lactation model. Each of the first five lactations was treated as a different trait. For the parameter estimation, a data set was built from lactations of cows that calved from 2001 to 2009. The total number of raw lactation records in the first to fifth parities reached 1,416,589. At least 10 cows were required for each contemporary group (herd-year-season effect), sires with fewer than 10 daughters were discarded, and lactations with a 305-d milk yield exceeding 15,000 kg were removed. In total, 1,456 sires of cows remained after all selection steps. A complete pedigree consisting of 292,382 records was used. A sire model containing herd-year-season, calving age, and sire additive genetic effects was applied to the selected lactation data and pedigree to estimate (co)variance components via VCE. Heritabilities and genetic or residual correlations were then derived from the (co)variance estimates using an R package. Genetic correlations between lactations ranged from 0.76 to 0.98 for milk yield, 0.79~1.00 for fat yield, and 0.75~1.00 for protein yield. On an individual-lactation basis, relatively low heritability values were obtained: 0.14~0.23, 0.13~0.20, and 0.14~0.19 for milk, fat, and protein yields, respectively. For the combined lactations, heritability values were 0.29, 0.28, and 0.26 for milk, fat, and protein yields. The estimated parameters will be used in national genetic evaluations for production traits.
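
For readers unfamiliar with sire models, the step from (co)variance components to the reported heritabilities and genetic correlations is a short calculation: the sire variance captures one quarter of the additive genetic variance. The sketch below uses invented component values, not the VCE estimates from the paper.

```python
# Deriving heritability and genetic correlation from sire-model
# (co)variance components. Under a sire model h^2 = 4*s2_s / (s2_s + s2_e).
# The numbers are made up for illustration.
import numpy as np

# sire (co)variance matrix for milk yield in lactations 1 and 2 (kg^2)
S = np.array([[40_000.0, 35_000.0],
              [35_000.0, 45_000.0]])
# residual variances per lactation
e = np.array([760_000.0, 860_000.0])

h2 = 4.0 * np.diag(S) / (np.diag(S) + e)    # per-lactation heritability
r_g = S[0, 1] / np.sqrt(S[0, 0] * S[1, 1])  # genetic correlation

print(h2)   # ~ [0.20 0.20]
print(r_g)  # ~ 0.82
```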

A Study on Change in Cement Mortar Characteristics under Carbonation Based on Tests for Hydration and Porosity (수화물 및 공극률 관측 실험을 통한 시멘트모르타르의 탄산화 특성 변화에 대한 연구)

  • Kwon, Seung-Jun;Song, Ha-Won;Park, Sang-Soon
    • Journal of the Korea Concrete Institute / v.19 no.5 / pp.613-621 / 2007
  • Owing to the increasing significance of durability, much research on carbonation, one of the major deterioration phenomena, has been carried out. However, conventional research based on fully hardened concrete focuses on predicting carbonation depth and sometimes causes errors. In contrast with steel members, behaviors in early-aged concrete such as porosity and hydrates (calcium hydroxide) are very important and may change under the carbonation process. Because the transport of deteriorating factors depends mainly on porosity and saturation, it is desirable to consider these changes in early-aged concrete under carbonation for a reasonable analysis of durability under long-term exposure or combined deterioration. As for porosity, unless the decrease in CO₂ diffusion due to the change in porosity is considered, the predicted results are overestimated. The carbonation depth and the characteristics of pore water are mainly determined by the amount of calcium hydroxide, and the bound chloride content in carbonated concrete is also affected. Analyses based on tests for hydration and porosity have therefore recently been carried out to evaluate carbonation characteristics. In this study, changes in porosity and hydrate (Ca(OH)₂) content under the carbonation process were measured through tests: Mercury Intrusion Porosimetry (MIP) for the changed porosity and Thermogravimetric Analysis (TGA) for the amount of Ca(OH)₂. An analysis technique for porosity and hydrates under carbonation was developed utilizing models of early-age concrete behavior, namely the multi-component hydration heat model (MCHHM) and the micro pore structure formation model (MPSFM). The results from the developed technique are in reasonable agreement with the experimental data and are judged to be usable for analyzing chloride behavior in carbonated concrete.
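
The point that neglecting the porosity decrease overestimates carbonation can be illustrated with a toy square-root-of-time diffusion model; every coefficient below is an invented placeholder, and the paper's actual MCHHM/MPSFM analysis is far more detailed than this sketch.

```python
# Toy sqrt-time carbonation-depth model where CO2 diffusivity drops as
# carbonation densifies the pore structure. All coefficients are invented
# for illustration; this is not the paper's model.
import numpy as np

def carbonation_depth(years, D0=1.0e-8, shrink=0.6):
    """Depth (m) after `years`, with and without porosity reduction.

    D0     : initial CO2 diffusivity (m^2/s), assumed
    shrink : fraction of D0 remaining after carbonation fills pores, assumed
    """
    t = years * 365.25 * 24 * 3600.0
    c_ratio = 5.0e-5  # assumed ratio of surface CO2 to bound-CO2 capacity
    d_const = np.sqrt(2.0 * D0 * c_ratio * t)             # constant diffusivity
    d_reduced = np.sqrt(2.0 * D0 * shrink * c_ratio * t)  # reduced diffusivity
    return d_const, d_reduced

for y in (5, 10, 20):
    const, reduced = carbonation_depth(y)
    print(f"{y:2d} yr: {const*1000:.1f} mm vs {reduced*1000:.1f} mm with porosity feedback")
```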

Prediction of Entrance Surface Dose in Chest Digital Radiography (흉부 디지털촬영에서 입사표면선량 예측)

  • Lee, Won-Jeong;Jeong, Sun-Cheol
    • Journal of the Korean Society of Radiology / v.13 no.4 / pp.573-579 / 2019
  • The purpose of this study was to predict the entrance surface dose (ESD) in chest digital radiography in a simple way. We used two detector types: a flat-panel detector (FP) and an imaging plate detector (IP). ESD was measured with a dosimeter attached to a human phantom at each exposure condition combining tube voltage and tube current, and each measurement was repeated three times. Phantom images were evaluated independently by three chest radiologists after blinding. Dose-area product (DAP) or exposure index (EI) was read from the Digital Imaging and Communications in Medicine (DICOM) header of the phantom images. Statistical analysis was performed by linear regression using SPSS ver. 19.0. ESD differed significantly between FP and IP (85.7 μGy vs. 124.6 μGy, p=0.017). ESD was positively correlated with image quality for FP as well as IP. For FP, the adjusted R-square was 0.978 (97.8%) and the linear regression model was ESD = 0.407 + 68.810 × DAP; a DAP of 4.781 was obtained by calculating DAP = 0.021 + 0.014 × 340 μGy. For IP, the adjusted R-square was 0.645 (64.5%) and the linear regression model was ESD = -63.339 + 0.188 × EI; an EI of 1748.97 was obtained by calculating EI = 565.431 + 3.481 × 340 μGy. In chest digital radiography, the ESD can thus be easily predicted from DICOM header information.
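
Using the coefficients reported above, the two regression models can be wrapped as small prediction helpers; the printed values simply evaluate the published equations at the DAP and EI computed in the abstract.

```python
# The published regression models as helpers, so an ESD can be predicted
# directly from the DICOM header value. Coefficients are the ones reported
# in the abstract.
def esd_from_dap(dap):
    """FP detector: ESD (uGy) from dose-area product (adj. R^2 = 0.978)."""
    return 0.407 + 68.810 * dap

def esd_from_ei(ei):
    """IP detector: ESD (uGy) from exposure index (adj. R^2 = 0.645)."""
    return -63.339 + 0.188 * ei

print(esd_from_dap(4.781))   # ~ 329.4 uGy
print(esd_from_ei(1748.97))  # ~ 265.5 uGy
```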

Renal Effects of a Low Protein Diet and Antihypertensive Drugs on the Progression of Early Chronic Renal Failure in 5/6 Nephrectomized Rats (저단백 식이 및 항고혈압제의 투여가 만성신부전증의 진행에 미치는 영향에 관한 실험적 연구)

  • Kim, Kyo-Sun;Kim, Kee-Hyuk;Kim, Sang-Yun;Kang, Yong-Joo;Maeng, Won-Jae
    • Childhood Kidney Diseases / v.2 no.2 / pp.125-132 / 1998
  • Purpose: To study whether a low protein diet increases the efficacy of antihypertensive therapy on the progression of renal failure, we conducted an experimental study using 5/6 nephrectomized rats (n=63). Methods: At 7 days after surgery, rats were randomly assigned to three groups according to the antihypertensive drug received: no antihypertensive drug (U), enalapril (E), and nicardipine (N); all were fed a low protein diet (6% protein). Proteinuria, mesangial matrix expansion score, and glomerular volume were assessed at 4, 12, and 16 weeks after renal ablation. Results: Group U rats on a low protein diet developed progressive hypertension (140±8, 162±5, 171±5, and 184±11 mmHg at 4, 8, 12, and 16 weeks), which was controlled by E and N. Group U rats on a low protein diet developed proteinuria (74±15 mg/day at 16 weeks), which was decreased by E (42±12 mg/day) or N (48±8 mg/day) (p<0.05). Mesangial matrix expansion score and glomerular volume did not differ between groups U, E, and N on a low protein diet, regardless of the antihypertensive drug administered. Conclusion: A low protein diet did not affect blood pressure. Enalapril- and nicardipine-treated rats on a low protein diet did not have different mesangial matrix expansion or glomerular volumes from rats on a low protein diet alone at 12 and 16 weeks, in spite of better control of systemic hypertension and lessening of proteinuria. Thus, combined treatment with a low protein diet and antihypertensive drugs did not appear to show any additional effect in attenuating glomerular injury.

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.173-198 / 2020
  • For a long time, many academic studies have been conducted on predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded due to the rapid growth of online business, companies carry out campaigns of various types on a scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows. From a corporate standpoint, there is also the problem that the effectiveness of the campaign itself is decreasing, for example through rising campaign investment costs, which leads to a low actual campaign success rate. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. The ultimate purpose of a campaign system is to increase the success rate of campaigns by collecting and analyzing customer-related data and using it for campaigns. In particular, recent attempts have been made to predict campaign responses using machine learning. Selecting appropriate features is very important because campaign data have many features. If all input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or by correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set. Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features, their classification performance is limited and learning takes a long time. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in the search for the feature subsets that underpin machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features that strongly influence performance are derived first and features with a negative effect are removed; the sequential method is then applied to increase search efficiency and enable generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm: campaign success prediction was higher than with the original data set, the greedy algorithm, a genetic algorithm (GA), or recursive feature elimination (RFE). In addition, the improved feature selection algorithm was found to be helpful in analyzing and interpreting prediction results by providing the importance of the derived features.
Among these were features such as age, customer rating, and sales, which were already known statistically to be important. Unexpectedly, features that campaign planners had rarely used to select campaign targets, such as the combined product name, the average 3-month data consumption rate, and the last 3-month wireless data usage, were also selected as important features for campaign response. This confirmed that base attributes can be very important features depending on the type of campaign, making it possible to analyze and understand the important characteristics of each campaign type.
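
The general shape of such a two-stage selector, a statistical pre-filter followed by a sequential search, can be sketched with scikit-learn; the pre-filter size, the classifier, and the synthetic data below are illustrative stand-ins, not the algorithm proposed in the paper.

```python
# Two-stage selection sketch: statistically pre-rank features, drop those
# close to noise, then run a sequential forward search on the survivors.
# Choices of filter, classifier, and data are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import (SelectKBest, SequentialFeatureSelector,
                                       f_classif)

X, y = make_classification(n_samples=2000, n_features=40,
                           n_informative=8, random_state=0)

# step 1: statistical pre-filter removes features that behave like noise
prefilter = SelectKBest(f_classif, k=15).fit(X, y)
X_kept = prefilter.transform(X)

# step 2: sequential forward search over the reduced candidate set
clf = RandomForestClassifier(n_estimators=100, random_state=0)
sfs = SequentialFeatureSelector(clf, n_features_to_select=8,
                                direction="forward", cv=3)
sfs.fit(X_kept, y)

print("kept after pre-filter:", prefilter.get_support().sum())
print("final subset size:", sfs.get_support().sum())
```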

Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.71-90 / 2020
  • Recently, the global insurance industry has been rapidly pursuing digital transformation through artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, more and more foreign insurers have succeeded with AI-based InsurTech and platform businesses. Ping An Insurance Group Ltd., China's largest private company, is leading China's fourth industrial revolution with remarkable achievements in InsurTech and digital platforms as a result of constant innovation, with 'finance and technology' and 'finance and ecosystem' as its corporate keywords. This study therefore analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing the AI-based businesses of domestic insurers. The ser-M analysis model is a framework in which the vision and leadership of the CEO (subject), the historical environment of the enterprise (environment), the utilization of various resources (resource), and the company's unique mechanisms (mechanism) can be interpreted in an integrated manner. The case analysis shows that Ping An Insurance Group Ltd. has achieved cost reduction and better customer service by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core AI technologies such as face, voice, and facial expression recognition. In addition, 'online data in China' and 'the vast offline data and insights accumulated by the company' were combined with new technologies such as artificial intelligence and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. pursued constant innovation and, as of 2019, reached $155 billion in sales, ranking seventh among all companies in the Global 2000 list selected by Forbes Magazine. Analyzing the background of this success from the ser-M perspective, founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and demographic change in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital-technology-focused leadership. Based on this strong leadership in response to environmental change, the company successfully led InsurTech and platform businesses through innovation of internal resources, such as investment in AI technology, securing excellent professionals, and strengthening big data capabilities, combined with external absorption capabilities and strategic alliances across various industries. This success story of Ping An Insurance Group Ltd. offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the industrial paradigm shift driven by digital technology and quickly arm themselves with digital-technology-oriented leadership to spearhead the digital transformation of their enterprises.
Second, the Korean government should urgently overhaul related laws and systems to promote the use of data between different industries and provide bold support, such as deregulation, tax benefits, and platform provision, to help the domestic insurance industry secure global competitiveness. Third, Korean companies also need to invest more boldly in developing AI technology so that the systematic securing of internal and external data, the training of technical personnel, and patent applications can be expanded, and they should quickly establish digital platforms so that diverse customer experiences can be integrated through trained AI technology. Finally, since generalization from a single case of an overseas insurer has its limits, I hope that future research will examine various AI-related management strategies more extensively by analyzing cases across multiple industries or companies, or through empirical research.

Mediating Roles of Attachment for Information Sharing in Social Media: Social Capital Theory Perspective (소셜 미디어에서 정보공유를 위한 애착의 매개역할: 사회적 자본이론 관점)

  • Chung, Namho;Han, Hee Jeong;Koo, Chulmo
    • Asia Pacific Journal of Information Systems / v.22 no.4 / pp.101-123 / 2012
  • Social media has become a widely known keyword, and its related social trends and businesses have quickly spread into various contexts. Social media has become an important research area for scholars interested in online technologies, cyberspace, and their social impacts. Social media includes not only web-based services but also mobile application services that allow people to share various kinds of information and knowledge through online connections. Social media users tend to form common identity- and bond-attachment through interactions such as 'thumbs up', replies, and forwarding, which may be driven by various factors and may result in delivering information, sharing knowledge, and sharing specific experiences. Furthermore, almost all social media sites connect strangers on the basis of shared interests, political views, or enjoyable activities, and support the creation of content, which provides benefits to users. With the rapid development of digital devices, including smartphones, tablet PCs, internet-based blogging, and photo and video clips, scholars have begun to study diverse issues connecting human motivations with their behavioral outcomes, articulated as antecedents and consequences of the content people create via social media. Users of social media such as Facebook, Twitter, or Cyworld are getting closer to each other and building relationships in new ways. In this sense, people use social media as a tool to maintain pre-existing networks and to meet new people, and at the same time to explicitly find business opportunities through personal and public networks. In terms of theory, social capital is a concept that describes the benefits one receives from one's relationships with others. Social media use is closely related to how people connect: it is a bridge to the informational benefits of a heterogeneous network of people, and to the common identity- and bonding-attachment that delivers emotional benefits from community members or friend groups. Social capital comprises resources accumulated through relationships among people, which can be considered an investment in social relations with expected returns, yielding benefits from greater access to and use of the resources embedded in social networks. Using social media to build social capital has been widely adopted in the cyber world, yet there has been little theoretical explanation of how people take advantage of opportunities through interaction, or why people willingly help others or answer their questions. Individuals consciously express themselves in online space through so-called common identity- or bonding-attachments. Common-identity attachment concerns weak ties, loose connections between individuals who may provide useful information or new perspectives for one another but typically not emotional support, whereas common-bonding attachment describes individuals in tightly knit, emotionally close relationships such as family and close friends. Common identity- and bonding-attachment have mainly been studied in on- and offline settings where individuals convey an impression of their own interests to others.
Individuals thus expect to meet other people and engage in self-presentation toward their counterparts accordingly. As social media develops, individuals are motivated to make open and honest self-disclosures using diverse cues, verbal, nonverbal, pictorial, and video, to friends as well as passing strangers. In the social media context, common identity- and bond-attachment for self-presentation seem different from the face-to-face context. In social media, users seek self-presentation by posting text messages, pictures, and video files. In digital environments, people interact to work, shop, learn, entertain, and play, and social media increasingly shapes such intentions and behavior online. Identity and bond social capital through self-presentation is the intentional and tangible component of identity. On social media, people try to engage others via a desired impression, which they maintain through coherent and complementary communications, including displaying signs, symbols, and brands made of digital material (information, interests, pictures, etc.). In marketing, consumers traditionally express common identity by selecting clothes, hairstyles, automobiles, logos, and so on, to impress others in a given context such as a shopping mall or the opera. To examine social capital and attachment, we combined social capital theory and attachment theory in our research model. The model focuses on how common identity- and bond-attachment are formed through social capital (cognitive, structural, and relational capital) and individual characteristics. We examined whether individual online kindness, self-rated expertise, and social relations influence the building of common identity- and bond-attachment, and whether these attachments affect the willingness to help; common bond-attachment, however, did not show a direct impact on information sharing. As a result, we find that social capital and attachment theories apply well to the context of social media and to usage in individual networks. We collected data from 256 users of social media such as Facebook, Twitter, and Cyworld and tested the hypotheses with a structural equation model in AMOS. This study analyzes the direct and indirect relationships between social network service usage and its outcomes. The antecedents of kindness, confidence of knowledge, and social relations significantly affected the mediators, common identity- and bond-attachment; interestingly, network externality did not, which we attribute to network size acting negatively, since group members would not contribute significantly if they do not intend to interact actively with each other. The mediating variables had a positive effect on willingness to help, and common identity-attachment had the stronger effect on shared information.
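
The hypothesized mediation structure can be expressed compactly as a structural equation model. The study used AMOS, so the semopy sketch below is only a rough stand-in, and all variable and file names are hypothetical.

```python
# Minimal SEM sketch of the paper's mediation structure, assuming a pandas
# DataFrame of survey item scores. semopy stands in for AMOS here, and
# every variable/file name is hypothetical.
import pandas as pd
from semopy import Model

desc = """
# structural part: social-capital antecedents -> attachments -> outcomes
identity_attachment ~ kindness + expertise + social_relation
bond_attachment     ~ kindness + expertise + social_relation
willingness_to_help ~ identity_attachment + bond_attachment
info_sharing        ~ identity_attachment
"""

df = pd.read_csv("survey_responses.csv")  # hypothetical file of item scores
model = Model(desc)
model.fit(df)
print(model.inspect())  # path coefficients and p-values
```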

Effects of Dietary Salt Restriction on the Development of Renal Failure in the Excision Remnant Kidney Model (식이 sodium 제한 및 식이 sodium 제한에 따른 항고혈압제의 투여가 만성신부전증의 진행에 미치는 영향에 관한 실험적 연구)

  • Kim Kee-Hyuk;Kim Sang-Yun;Kang Yong-Joo;Maeng Won-Jae;Kim Kyo-Sun
    • Childhood Kidney Diseases / v.3 no.2 / pp.170-179 / 1999
  • Purpose: To evaluate whether sodium restriction has its own beneficial effect and increases the efficiency of antihypertensive drugs on the progression of renal failure. Methods: We used the excision remnant kidney model. Treatment groups were as follows: 5/6 nephrectomy with a 0.49% (normal-high) sodium diet (NN); 5/6 nephrectomy with a 0.25% (normal-low) sodium diet (LN); 5/6 nephrectomy, a 0.49% sodium diet and enalapril (NNE); 5/6 nephrectomy, a 0.49% sodium diet and nicardipine (NNN); 5/6 nephrectomy, a 0.25% sodium diet and enalapril (LNE); and 5/6 nephrectomy, a 0.25% sodium diet and nicardipine (LNN). Both diets were isocaloric and had the same content of protein, phosphorus, and calcium. Proteinuria, remnant kidney weight, mesangial expansion scores, and glomerular volume were assessed. Results: Blood pressure tended to be lower in LN than in NN (P<0.05). NN rats developed progressive hypertension. LNE, LNN, NNE, and NNN reduced blood pressure. LNE, LNN, NNE, NNN, and LN had significantly less proteinuria than NN at 16 weeks (P<0.05). At 24 weeks, LN developed proteinuria (82 mg/day), which was lessened in LNE (54 mg/day) but not in LNN (76 mg/day). Mesangial expansion scores were significantly lower in LN rats than in NN rats. Glomerular volumes at 24 weeks in LN rats were significantly smaller than those at 16 weeks in NN rats. Mesangial expansion scores and glomerular volumes at 4, 12, and 24 weeks did not differ among the LN, LNE, and LNN groups. Conclusion: Dietary salt restriction lessens renal damage, at least in part, by inhibiting compensatory renal growth and reducing blood pressure. Enalapril was particularly successful in reducing proteinuria and glomerular injury when combined with dietary salt restriction.

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation is time intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt to combine the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. Optimizing a CGH with the genetic algorithm is a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that evaluating the cost function, with the aim of selecting better holograms, plays an important role in implementing the GA. However, this evaluation wastes much time Fourier transforming the encoded hologram parameters into the value to be solved; depending on the speed of the computer, this process can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed: the initial population contains fewer random holograms and is complemented by approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure for synthesizing a hologram on a computer is divided into two steps. First, holograms are simulated with the ANN method [1] to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network attains approximately desired holograms in fairly good agreement with the theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step. Hence, the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
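
The hybrid initialization the authors describe, replacing part of the GA's random initial population with ANN-proposed holograms, has roughly the control flow below. The fitness function and the `ann_propose` stub are placeholders, not the authors' implementations, so this only shows the structure of the idea.

```python
# Schematic of the hybrid initialisation: part of the initial population is
# seeded with holograms proposed by a trained network instead of being
# all random. `ann_propose` stands in for the trained ANN of Ref. [1] and
# the fitness function is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
SIZE = 64  # binary phase hologram, SIZE x SIZE

def fitness(holo, target):
    """Placeholder cost: mismatch between |FFT|^2 and the target image."""
    recon = np.abs(np.fft.fft2(np.where(holo, 1.0, -1.0))) ** 2
    return -np.sum((recon / recon.max() - target) ** 2)

def ann_propose(target, n):
    """Stand-in for the trained ANN: returns n approximate holograms."""
    return [rng.random((SIZE, SIZE)) < 0.5 for _ in range(n)]  # hypothetical

def init_population(target, pop_size=30, seeded_fraction=0.5):
    n_seed = int(pop_size * seeded_fraction)
    seeded = ann_propose(target, n_seed)  # ANN-seeded part of the population
    random_part = [rng.random((SIZE, SIZE)) < 0.5
                   for _ in range(pop_size - n_seed)]
    return seeded + random_part

target = np.zeros((SIZE, SIZE))
target[20:44, 20:44] = 1.0
population = init_population(target)
best = max(population, key=lambda h: fitness(h, target))
# ...selection, crossover, and mutation then proceed exactly as in the GA.
```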
