• Title/Summary/Keyword: Visual Models

A study of Brachytherapy for Intraocular Tumor (안구내 악성종양에 대한 저준위 방사선요법에 관한 연구)

  • Ji, Gwang-Su; Yu, Dae-Heon; Lee, Seong-Gu; Kim, Jae-Hyu; Ji, Yeong-Hun
    • The Journal of Korean Society for Radiation Therapy, v.8 no.1, pp.19-27, 1996
  • I. Project Title: A Study of Brachytherapy for Intraocular Tumor. II. Objective and Importance of the Project: Eye enucleation and external-beam radiation therapy, which have commonly been used to treat intraocular tumors, have the drawbacks of visual loss and an insufficient effective tumor dose. Recently, brachytherapy using plaques containing a radioisotope, a treatment method that reduces these drawbacks and increases the treatment effect, has been introduced and performed in several countries. The purpose of this research is to design a plaque shape suitable for ophthalmic brachytherapy and to measure the absorbed dose of an Ir-192 ophthalmic plaque, thereby calculating the exact radiation dose to the tumor and its adjacent normal tissue. III. Scope and Contents of the Project: For brachytherapy of intraocular tumors, the aims were 1. to determine the eye model and select a suitable radioisotope, 2. to design a suitable plaque shape, 3. to measure the transmission factor and dose distribution of the custom-made plaques, and 4. to compare these data with the results of computer dose-calculation models. IV. Results and Proposal for Applications: The results were as follows. 1. The eye model was determined to be a 25 mm diameter sphere, and Ir-192 was considered the most appropriate radioisotope for brachytherapy because of its size, half-life, energy, and availability. 2. Considering the biological response of human tissue and protection against radiation exposure, we made the plaques from gold, with diameters of 15 mm, 17 mm, and 20 mm and a thickness of 1.5 mm. 3. The transmission factors of the plaques were all 0.71 at the plaque surface as measured with TLD and film dosimetry, and 0.45 and 0.49, respectively, at 1.5 mm from the surface. 4. Comparing the measured data for the plaque with Ir-192 seeds to the results of the computer dose-calculation model of Gary Luxton et al. and of CAP-PLAN (a radiation treatment planning system), the absorbed doses agreed within ±10% and the distance deviations within 0.4 mm; the maximum errors were -11.3% and 0.8 mm, respectively. Consequently, intraocular tumors can be treated more effectively using custom-made gold plaques and Ir-192 seeds.
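
As a rough illustration of the comparison reported in result 4, the sketch below computes percent dose deviations between measured and model-calculated absorbed doses at matched points. The distances, dose values, and names are invented for the example and are not the study's data.

```python
# Illustrative comparison of measured vs. calculated absorbed dose at matched
# points around an Ir-192 ophthalmic plaque (all values are made up, not the
# study's measurements).
measured = {5.0: 120.0, 7.5: 64.0, 10.0: 38.0}      # distance (mm) -> dose rate
calculated = {5.0: 112.0, 7.5: 66.5, 10.0: 41.5}    # dose-calculation model output

for distance_mm, d_meas in measured.items():
    d_calc = calculated[distance_mm]
    pct_dev = 100.0 * (d_calc - d_meas) / d_meas     # percent deviation
    print(f"{distance_mm:5.1f} mm: measured {d_meas:6.1f}, "
          f"calculated {d_calc:6.1f}, deviation {pct_dev:+5.1f}%")
```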

A Study on the School Health Education Programs Performed by School Nurses in Seoul Area (서울 시내 일부 국민학교에서 양호교사가 실시하고 있는 보건교육의 실태조사. (교실 수업을 중심으로))

  • 방에스터
    • Korean Journal of Health Education and Promotion, v.5 no.2, pp.26-40, 1988
  • This survey was conducted in September 1988 to determine the present status of the health education programs provided in primary schools, focusing on their planning, operation, and contents and on the attitudes of school nurses. A total of 413 school nurses currently working in Seoul were surveyed by mail, and the 167 who responded to the questionnaire were analysed. The following results were obtained. 1. General characteristics of the school nurses surveyed: the 30-40 age group was the largest at 60.4%, and the mean age was 30.13 years. As for educational attainment, 71.9% had graduated from a junior nursing college; 68.3% of those surveyed were married and 43.1% had 5-10 years of working experience. As for the schools where the nurses were working, 31.7% had 2,000-3,000 students, 22.8% had 50-60 classes, and 5 schools had more than 80 classes. 2. Planning of school health education: school health education was planned every semester in 55.7% of cases, the highest proportion. As for the materials used as references when planning school health education, 86.8% of the respondents used the guidelines published by the Seoul City School Nurses' Association, followed by the administrative guidelines for school health, textbooks, school health statistics, and articles related to school health, in that order. Whether the number of references used was related to working experience was also examined; the shorter the experience, the more materials were used. Teaching plans for health education were prepared by the school nurses themselves (95.2%) and were differentiated into three levels: first and second grades, third and fourth grades, and fifth and sixth grades. 3. Contents of school health education: 16 subjects offered to the 6 grades of students were surveyed. For the fifth and sixth grades, content on growth and development was most widely provided, at 54.5% and 68.9%, respectively. Next to this subject, dental health education was frequently offered to the second, third, and fourth grades, at 50.9%, 68.9%, and 47.3%, respectively. 4. Operation of school health education: health education provided by school nurses was conducted formally in 36.6% of schools, and formally or informally according to grade in 43.9%. In 50.3% of the schools surveyed, health education had started in 1987, when the plan for the activation of school health was ordered by the Educational Committee. The teaching load of school nurses was 6 hours in 32.9% of cases, the highest proportion. Lessons were provided to class units in 77.2% of cases, and sex education was sometimes offered to male and female students separately. As for support from health personnel outside the school, 79.0% did not receive any support; where there was outside assistance, 62.9% received it from other related agencies and 74.3% answered that it was provided once a semester. As for expenses, 57.3% did not allocate any of the overall school health budget to health education. As audio-visual materials, slides were used most frequently, followed by models and charts. 5. Awareness of school nurses regarding the operation of school health education: when the school nurses subjectively evaluated their own quality as health educators, 60-70% rated themselves as average in the four domains of knowledge, educational skill, ability to prepare a teaching plan, and cooperation.
As for awareness of the support and cooperation of higher institutions, 46.4%-61.8% answered "so-so" toward the Ministry of Education and the Ministry of Affairs, and 13%-37% answered "not supportive". Teachers of the corresponding schools were rated "so-so" by 55.9%-56.7% and "very supportive" by 33.34%. There was a significant difference in the formality of lessons according to the support of the superintendent.

A Tool Box to Evaluate the Phased Array Coil Performance Using Retrospective 3D Coil Modeling (3차원 코일 모델링을 통해 위상배열코일 성능을 평가하기 위한 프로그램)

  • Perez, Marlon; Hernandez, Daniel; Michel, Eric; Cho, Min Hyoung; Lee, Soo Yeol
    • Investigative Magnetic Resonance Imaging, v.18 no.2, pp.107-119, 2014
  • Purpose: To efficiently evaluate phased array coil performance using a software tool box with which we can make a visual comparison of the sensitivity of every coil element between the real experiment and the EM simulation. Materials and Methods: We have developed a C++- and MATLAB-based software tool called Phased Array Coil Evaluator (PACE). PACE has the following functions: building 3D models of the coil elements, importing the FDTD simulation results, and visualizing the coil sensitivity of each coil element in both the ordinary Cartesian coordinate system and the relative coil-position coordinate system. To build a 3D model of the phased array coil, we used an electromagnetic 3D tracker in stylus form. After making the 3D model, we imported it into the FDTD electromagnetic field simulation tool. Results: An accurate comparison between the coil sensitivity simulation and the real experiment was made on the tool box platform through fine matching of the simulation and the experiment with the aid of the 3D tracker. In both the simulation and the experiment, we used a 36-channel helmet-style phased array coil. For the 3D MRI data acquisition using the spoiled gradient echo sequence, we used a uniform cylindrical phantom with the same geometry as the one in the FDTD simulation. In the tool box, we can conveniently choose the coil element of interest and compare the coil sensitivities of the phased array coil element by element. Conclusion: We expect the tool box to be highly useful for developing phased array coils of new geometry and for the periodic maintenance of phased array coils in a more accurate and consistent manner.
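
To make the element-by-element comparison concrete, here is a minimal sketch (not part of the PACE tool; the array layout, function name, and synthetic data are assumptions) that normalizes simulated and measured sensitivity maps per coil element and reports their relative deviation.

```python
import numpy as np

def compare_coil_sensitivities(sim_maps, meas_maps):
    """Compare simulated vs. measured sensitivity maps, element by element.

    sim_maps, meas_maps: arrays of shape (n_elements, nx, ny) holding the
    magnitude sensitivity of each coil element on a common grid (hypothetical
    layout; the actual PACE data format is not specified in the abstract).
    Returns (element index, mean deviation, max deviation) per element.
    """
    stats = []
    for k, (sim, meas) in enumerate(zip(sim_maps, meas_maps)):
        # Normalize each map to its peak so that scaling differences between
        # simulation and experiment do not dominate the comparison.
        sim_n = sim / sim.max()
        meas_n = meas / meas.max()
        diff = np.abs(sim_n - meas_n)
        stats.append((k, diff.mean(), diff.max()))
    return stats

# Example with synthetic data for a 36-element array on a 64x64 grid.
rng = np.random.default_rng(0)
sim = rng.random((36, 64, 64)) + 0.5
meas = sim * (1.0 + 0.05 * rng.standard_normal(sim.shape))  # 5% "measurement" noise
for element, mean_dev, max_dev in compare_coil_sensitivities(sim, meas)[:3]:
    print(f"element {element}: mean dev {mean_dev:.3f}, max dev {max_dev:.3f}")
```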

A Study on Animation Music Video Production for Viral Marketing Purposes: A Case Study of the <일어나라 대한민국> Project (바이럴 마케팅용 애니메이션 뮤직비디오 제작 연구 : 월드컵 응원가 <일어나라 대한민국> 사례를 중심으로)

  • Han, Sang-Gyun; Kim, Tak-Hoon; Kim, Yu-Mi
    • Cartoon and Animation Studies, s.22, pp.47-63, 2011
  • Contemporary cultural contents have recently shown diverse changes following the birth of new media platforms and consumers' new needs in the global market, and the development of the Internet and computer networks has been the main contributor to the speed of these changes. This study aims to show how such new media platforms can be used effectively, taking the stop-motion animation music video <일어나라 대한민국> as an example and analyzing its production and marketing process. The production was designed to be completed with high quality by organizing the process economically, despite the relatively short period (less than one month) from crank-up to the deadline. Because the production was planned so that the main characters lead the whole story, the creative team reduced production hours by using a common mold for the original clay models, exploiting the similarities in the characters' appearances; the visual monotony caused by these similarities was overcome with CG techniques. Also, because of the repeated rhythm of the music video, similar background scenes were reused by copying the original scene. In directing, the creative team considered both the economic and artistic aspects of the work: they divided the scenes into foreground and background and removed unnecessary parts, saving production hours and budget while still creating depth of field in the scenes. Beyond its viral marketing purpose, the project also looked for ways to recoup the production cost: the characters in the music video wore T-shirts bearing the World Cup logo, which were designed to be sold after the music video's release. Although the sales results were not satisfactory, the project was regarded as a meaningful attempt for the domestic animation industry.

99mTc-Glucarate Uptake in Ischemic Tissue of Experimental Models of Cerebral Ischemia (실험적 뇌허혈증 모델에서 허혈 조직의 99mTc-glucarate 섭취)

  • Jeong, Jae-Min; Kim, Young-Ju; Choi, Seok-Rye; Kim, Chae-Kyun; Mar, Woong-Chun; Chung, June-Key; Lee, Myung-Chul; Koh, Chang-Soon; Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine, v.30 no.4, pp.484-492, 1996
  • To detect ischemic tissue in an experimental model of cerebral ischemia made by middle cerebral artery (MCA) occlusion, we acquired triple images of 99mTc-glucarate, [18F]fluorodeoxyglucose (FDG), and 2,3,5-triphenyltetrazolium chloride (TTC) staining. We produced cerebral infarction either with reperfusion (after occlusion for 2 hours) or without reperfusion in 10 Sprague-Dawley rats by inserting a thread into the MCA through the internal carotid artery. After 22 hours, we injected 740 MBq of 99mTc-glucarate and 55.5 MBq of [18F]FDG through the tail vein. Each 1 mm slice of the rat brains was frozen and exposed to an imaging plate for 20 minutes in a freezer to obtain the [18F]FDG image. After 20 hours, long enough for the [18F]FDG radioactivity to decay, the slices were imaged again with a BAS1500 for 99mTc-glucarate uptake. Finally, the brain tissues were stained with TTC. Semi-quantitative visual analysis was done by grading from 0 to 3 points according to the degree of uptake (99mTc-glucarate) or decreased uptake ([18F]FDG and TTC). All ten rats survived with neurologic symptoms. TTC staining confirmed the development of infarction, and the infarcts were relatively larger in the group without reperfusion. The [18F]FDG images were similar to the TTC-stained images; however, we found regions with intermediate uptake that were not stained with TTC, as well as regions with intermediate [18F]FDG uptake where TTC staining was normal. 99mTc-glucarate uptake was found only in TTC-unstained regions; in the TTC-stained regions there was no 99mTc-glucarate uptake. We could not find a clear relation between 99mTc-glucarate uptake and [18F]FDG uptake, partly because the percent uptake of 99mTc-glucarate was very small (less than 1 percent of the injected dose) and partly because the patterns of [18F]FDG uptake and TTC staining were quite heterogeneous. From these findings, we conclude that 99mTc-glucarate is taken up only in the part of ischemic tissue proven to be nonviable. Establishing the MCA-occluded rat model with or without reperfusion, together with triple imaging of 99mTc, 18F, and TTC, helped characterize 99mTc-glucarate uptake. Further work is needed to clarify the meaning of the diverse [18F]FDG and TTC patterns and their relation to 99mTc-glucarate.

Investigating the Cognitive Process of a Student's Modeling on a Modeling-Emphasized Argument-Based General Chemistry Experiment (모델링을 강조한 논의 기반 일반화학실험에서 학생들의 모델링에 대한 인지과정 탐색)

  • Lee, Dongwon; Cho, Hey Sook; Nam, Jeonghee
    • Journal of The Korean Association For Science Education, v.35 no.2, pp.313-323, 2015
  • The purpose of this study is to investigate the cognitive processes of students' modeling in a modeling-emphasized, argument-based general chemistry experiment. The participants were twenty-one freshman students. Six topics were carried out during the first semester, and semi-structured interviews were conducted at the end of the semester. The interview questions were used to elicit the elements of an effective model, modeling strategies, the difficulties students experienced during modeling, and how they resolved those difficulties. All student interview data were collected and transcribed. The results of this study are summarized as follows: (1) The elements of an effective model were considered to be visual expression, persuasive explanation, and rhetorical structure. (2) Modeling strategies included arranging important keywords or writing an outline; during the modeling process, students used various data, presented data after reconstructing them, presented definitions and explanations of core concepts, used meta-cognition, and considered rhetorical structure. (3) The difficulties students experienced during modeling could be categorized as a lack of modeling strategies and a lack of understanding. (4) The ways of resolving these difficulties could likewise be categorized as modeling strategies and understanding: students learned strategies through feedback, modeling experience, evaluation of experimental reports, previously constructed models, and references, while understanding of the contents was achieved through the arguments that occurred during classes and through the process of writing the experimental reports. These results suggest that, when using modeling in teaching and learning, an argument-based learning strategy can be effective in enhancing students' modeling by helping them to understand meta-modeling along with scientific concepts.

Digital Documentation and Short-term Monitoring on Original Rampart Wall of the Gyejoksanseong Fortress in Daejeon, Korea (대전 계족산성 원형성벽의 디지털기록화 및 단기모니터링 연구)

  • Kim, Sung Han; Lee, Chan Hee; Jo, Young Hoon
    • Economic and Environmental Geology, v.52 no.2, pp.169-188, 2019
  • This study carried out unmanned aerial photography and terrestrial laser scanning to establish a digital database of the original wall of the Gyejoksanseong fortress, and measured ground control points to ensure the continuity of monitoring. It also performed precise examination with the naked eye, unmanned aerial photogrammetry, endoscopy, total station surveying, and handy measurement to examine the structural stability of the original walls. The ground control points were located where a clear visual field could be secured, and three points were selected around each of the south and north walls. For the right side of the southern original wall, aerial photogrammetry was conducted using drones, and a deviation analysis of the 3-dimensional digital models was performed for short-term monitoring. As a result, the two models of the original wall matched within 5 mm, and no difference indicating displacement of stones was found except for partial deviations. Regular monitoring of the areas with structural deformation, such as bulging and weak and fractured zones, by precise examination with the naked eye and high-resolution photographic data, revealed no distinct change. Endoscopy of the inner foundation showed that the filling stones of the original walls still remained, while most of the filling soil had been lost. Total station measurements centered on the points with structural deformation showed that the maximum displacements of the north and south walls were somewhat high, at 6.6 mm and 3.8 mm, respectively, while the final displacements were relatively stable at below 2.9 mm and 1.4 mm, respectively. Handy measurement likewise revealed no clear structural deformation, with displacements below 0.82 mm at all points. Even though the displacement monitoring results for the original walls are stable, structural stability cannot be guaranteed because of the characteristics of ramparts, in which sudden brittle fracture can occur. Therefore, it is necessary to conduct conservation-scientific diagnosis, precise monitoring, and structural analysis based on the 3-dimensional figuration information obtained in this research.
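
As an illustration of the kind of model-to-model deviation analysis used for short-term monitoring, the sketch below compares two already-registered point clouds with nearest-neighbor distances and reports the share of points within a 5 mm tolerance. The data loading and registration steps, the function name, and the synthetic data are assumptions, not the study's workflow.

```python
import numpy as np
from scipy.spatial import cKDTree

def deviation_analysis(reference, follow_up, tolerance_mm=5.0):
    """Nearest-neighbor deviation between two registered point clouds.

    reference, follow_up: (N, 3) arrays in millimetres, already aligned
    (e.g., via ground control points). Returns per-point distances and the
    fraction of follow-up points lying within the tolerance.
    """
    tree = cKDTree(reference)
    distances, _ = tree.query(follow_up)   # distance to closest reference point
    within = np.mean(distances <= tolerance_mm)
    return distances, within

# Synthetic example: a slightly perturbed copy of the reference cloud.
rng = np.random.default_rng(1)
ref = rng.uniform(0, 1000, size=(20000, 3))
new = ref + rng.normal(scale=1.0, size=ref.shape)   # ~1 mm noise
d, frac = deviation_analysis(ref, new)
print(f"median deviation {np.median(d):.2f} mm, {frac:.1%} of points within 5 mm")
```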

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin; Kwon, Do Young; Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems, v.20 no.4, pp.89-105, 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish contents involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and these contents are released in real time on the Internet. For that reason, many researchers and marketers regard social media contents as a source of information for business analytics, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text contents, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conducting opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media; each target medium requires a different means of access, such as open APIs, searching tools, DB-to-DB interfaces, purchasing contents, and so on. The second phase is pre-processing to generate useful materials for meaningful analysis: if garbage data are not removed, the results of social media analysis will not provide meaningful and useful business insights, so natural language processing techniques should be applied to clean the data. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated contents but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis, and there are further applications such as stock prediction, product recommendation, and sales forecasting. The last phase is visualization and presentation of the analysis results. The major purpose of this phase is to explain the results and help users comprehend their meaning; therefore, to the extent possible, deliverables from this phase should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds a 66.5% market share and has kept the No. 1 position in the Korean ramen business for several decades.
We collected a total of 11,869 pieces of content including blogs, forum contents, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing, and we classified the contents into more detailed categories such as marketing features, environment, and reputation. In these phases, we used free software such as the tm, KoNLP, ggplot2, and plyr packages of the R project. As the result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-colored examples built with open library packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. The heat map shows the movement of sentiment or volume over a category-by-time matrix, with color density indicating intensity in each time period. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the "big picture" business situation, since a tree map can present buzz volume and sentiment for a certain period in a hierarchical structure. This case study offers real-world business insights from market sensing and demonstrates to practically minded business users how they can use these results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
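
The study itself used R packages (tm, KoNLP, ggplot2, plyr); the sketch below is a Python analogue, not the authors' code, showing the qualifying, analyzing, and visualizing phases in miniature with a hypothetical lexicon and a few stand-in posts.

```python
import re
from collections import defaultdict

# Tiny hypothetical lexicon; a real study would build a domain-specific one.
LEXICON = {"delicious": 1, "tasty": 1, "love": 1, "bland": -1, "expensive": -1, "bad": -1}

posts = [  # (period, raw text) -- stand-ins for collected blog/forum contents
    ("2014-01", "The ramen was delicious, I love the spicy soup! <p>html junk</p>"),
    ("2014-01", "Too expensive for what you get, and a bit bland."),
    ("2014-02", "Tasty but the new packaging looks bad."),
]

def clean(text):
    """Qualifying phase: strip markup and non-word noise before analysis."""
    text = re.sub(r"<[^>]+>", " ", text)           # drop HTML remnants
    return re.sub(r"[^\w\s]", " ", text).lower()

def score(text):
    """Analyzing phase: lexicon-based polarity, summed over matched terms."""
    return sum(LEXICON.get(tok, 0) for tok in clean(text).split())

# Visualizing phase (here just a tabular volume/sentiment summary per period).
by_period = defaultdict(lambda: {"volume": 0, "sentiment": 0})
for period, text in posts:
    by_period[period]["volume"] += 1
    by_period[period]["sentiment"] += score(text)

for period, stats in sorted(by_period.items()):
    print(f"{period}: volume={stats['volume']}, net sentiment={stats['sentiment']:+d}")
```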

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.1-19, 2018
  • A large amount of data is now available for research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methodology. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through toward the outputs. Its layer structure is well suited for image classification, as it comprises convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of feature maps, and fully-connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of a professional model wearing the apparel. Such images may not be effective for training a classification model when one wants to classify street fashion or walking images, which are taken in uncontrolled conditions and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset, which captures this mobility. This allows the classification model to be trained with far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply Transfer Learning to our training network. As Transfer Learning in CNNs is composed of pre-training and fine-tuning stages, we divide the training step into two. First, we pre-train our architecture with a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it has achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Because we could not find any previously and publicly available runway image dataset, we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest the idea of training the model with images capturing all possible postures, which we denote as mobility, by using our own runway apparel image dataset.
Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow Slim, we could reduce the time spent training the classification model to 6 minutes per experiment. This model can be used in many business applications where the query image is a runway image, a product image, or a street fashion image. To be specific, runway query images can be used in a mobile application service during fashion week to facilitate brand search, street style query images can be classified during fashion editorial tasks to label the brand or style, and website query images can be processed by an e-commerce multi-complex service providing item information or recommending similar items.
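
The paper fine-tunes an ImageNet-pretrained GoogLeNet with TensorFlow Slim; the sketch below illustrates the same two-stage transfer learning idea in PyTorch (an editorial substitution, not the authors' setup): load pretrained GoogLeNet weights, replace the 1000-class head with a 32-brand head, and fine-tune only the later layers.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_BRANDS = 32  # 32 fashion brands in the runway dataset

# Stage 1 (pre-training) is replaced here by loading ImageNet weights.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)

# Stage 2 (fine-tuning): swap the 1000-class ImageNet head for a 32-class head.
model.fc = nn.Linear(model.fc.in_features, NUM_BRANDS)

# Freeze early layers so only the new head and the late inception blocks adapt.
for name, param in model.named_parameters():
    if not name.startswith(("fc", "inception5")):
        param.requires_grad = False

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of runway images (tensors assumed)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```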

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems, v.19 no.4, pp.55-79, 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks; of these, only one collection center, one remanufacturing center, one redistribution center, and one secondary market should be opened in the network. Several assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market: for example, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively) and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiments, the proposed HGA and a conventional competing approach are compared using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; this GA has no local search technique such as the IHCM of the proposed HGA. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two types of RLNCC were programmed in Visual Basic 6.0, and the computing environment was an IBM-compatible PC with a 3.06 GHz CPU and 1 GB of RAM running Windows XP. The parameters used in the HGA and GA approaches are a total number of generations of 10,000, a population size of 20, a crossover rate of 0.5, a mutation rate of 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs were made to eliminate the randomness of the HGA and GA searches.
Based on the performance comparisons, the network representations by opening/closing decisions, and the convergence processes for the two types of RLNCC, the experimental results show that the HGA has significantly better performance than the GA in terms of the optimal solution, though the GA is slightly quicker than the HGA in terms of CPU time. Finally, the proposed HGA approach proves more efficient than the conventional GA approach on the two types of RLNCC, since the former has both a GA search process and an additional local search process, while the latter has a GA search process alone. In a future study, much larger RLNCC instances will be tested for the robustness of our approach.
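
A minimal sketch of the HGA ingredients described above: the 0/1 bit-string representation, two-point crossover, random bit-flip mutation, and an iterative hill-climbing local search applied to the best individual of each generation. The cost function, facility count, and generation count are made up so the sketch runs; only the crossover rate (0.5), mutation rate (0.1), and population size (20) follow the paper.

```python
import random

random.seed(0)
N_FACILITIES = 12          # open/close decisions encoded as a 0/1 bit string
POP_SIZE, GENERATIONS = 20, 200
CROSSOVER_RATE, MUTATION_RATE = 0.5, 0.1   # rates reported in the paper

def cost(chromosome):
    """Hypothetical stand-in for the MIP objective.

    The real objective sums transportation, fixed, and handling costs of the
    reverse logistics network; here each open facility adds a fixed cost and
    each closed facility adds a penalty, purely to make the sketch runnable.
    """
    fixed = sum(10 + (i % 3) for i, bit in enumerate(chromosome) if bit)
    penalty = 25 * chromosome.count(0)
    return fixed + penalty

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def mutate(chromosome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in chromosome]

def hill_climb(chromosome):
    """Iterative local search: flip one bit at a time, keep improvements."""
    best, best_cost = chromosome[:], cost(chromosome)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            trial = best[:]
            trial[i] = 1 - trial[i]
            if cost(trial) < best_cost:
                best, best_cost, improved = trial, cost(trial), True
    return best

population = [[random.randint(0, 1) for _ in range(N_FACILITIES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=cost)
    population[0] = hill_climb(population[0])   # hybrid step on the elite individual
    next_gen = population[:2]                   # elitism
    while len(next_gen) < POP_SIZE:
        p1, p2 = random.sample(population[:10], 2)
        if random.random() < CROSSOVER_RATE:
            c1, c2 = two_point_crossover(p1, p2)
        else:
            c1, c2 = p1[:], p2[:]
        next_gen += [mutate(c1), mutate(c2)]
    population = next_gen[:POP_SIZE]

best = min(population, key=cost)
print("best open/close pattern:", best, "cost:", cost(best))
```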