

Clinical Utility of the MMPI-A-RF's Internalization and Externalization Higher-Order Scales: Comparison With the K-CBCL's Internalization and Externalization Scales (MMPI-A-RF의 내재화 및 외현화 상위 척도의 임상적 유용성: K-CBCL의 내재화 및 외현화 척도와의 비교)

  • Shin, Eun-Bin;Park, Eun-Hee;Hong, Hyun-Joo
    • Korean Journal of Psychosomatic Medicine / v.30 no.2 / pp.119-126 / 2022
  • Objectives : The purpose of this study was to examine the clinical utility of the internalization and externalization higher-order scales of the Minnesota Multiphasic Personality Inventory-Adolescent Restructured Form (MMPI-A-RF), compared with the corresponding scales of the Korean Child Behavior Checklist (K-CBCL). Methods : Forty-three adolescents with internalizing disorders, 44 adolescents with externalizing disorders, and their parents completed the MMPI-A-RF and the K-CBCL, respectively. To verify the difference between the internalization and externalization scales of the MMPI-A-RF and K-CBCL for each group, independent-samples t tests were performed. To compare the agreement between the MMPI-A-RF and K-CBCL, correlation analysis was also conducted. Lastly, to identify which scales best predict each of the internalizing and externalizing disorders, logistic regression analysis was conducted. Results : The internalization scales of the MMPI-A-RF and K-CBCL were significantly higher in the internalizing disorder group, and the externalization scales were significantly higher in the externalizing disorder group. A significant positive correlation between the two measures was found only for internalization problems, in both groups (r=0.360, p<0.05 and r=0.572, p<0.05, respectively). In addition, the scales that significantly predicted internalizing and externalizing disorders were the internalization and externalization scales of the MMPI-A-RF, followed by the externalization scale of the K-CBCL (R2=0.407, p<0.05). Conclusions : The internalization and externalization higher-order scales of the MMPI-A-RF were found to reliably reflect the characteristics of each disorder in adolescents and to be useful scales for differentiating the disorders. Moreover, when adolescents show externalization problems, additional information from the K-CBCL can be especially useful for differential diagnosis.
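The agreement analysis described in this abstract rests on a Pearson correlation between paired scale scores from the two instruments. A minimal pure-Python sketch follows; the paired T-scores are invented for illustration and are not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired internalization T-scores (MMPI-A-RF vs. K-CBCL)
mmpi = [65, 70, 58, 72, 61, 68]
cbcl = [62, 74, 55, 70, 60, 71]
r = pearson_r(mmpi, cbcl)  # positive r indicates agreement between measures
```

In the study, correlations of this kind (r=0.360 and r=0.572) were significant only for the internalization scales.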

A study on the rock mass classification in boreholes for a tunnel design using machine learning algorithms (머신러닝 기법을 활용한 터널 설계 시 시추공 내 암반분류에 관한 연구)

  • Lee, Je-Kyum;Choi, Won-Hyuk;Kim, Yangkyun;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association / v.23 no.6 / pp.469-484 / 2021
  • Rock mass classification results have a great influence on construction schedule and budget, as well as on tunnel stability, in tunnel design. A total of 3,526 tunnels have been constructed in Korea, and the associated tunnel design and construction techniques have been continuously developed; however, few studies have examined how to assess rock mass quality and grade more accurately, so results often differ widely according to inspectors' experience and judgement. Hence, this study aims to suggest a more reliable rock mass classification (RMR) model using machine learning algorithms, whose availability is surging, through analyses of the various rock and rock mass information collected from boring investigations. For this, 11 learning parameters (depth, rock type, RQD, electrical resistivity, UCS, Vp, Vs, Young's modulus, unit weight, Poisson's ratio, RMR) from 13 local tunnel cases were selected, 337 training data sets and 60 test data sets were prepared, and 6 machine learning algorithms (DT, SVM, ANN, PCA & ANN, RF, XGBoost) were tested over various hyperparameter settings for each algorithm. The results show that the mean absolute errors in RMR value for the five algorithms other than Decision Tree were less than 8, and that a Support Vector Machine model performed best. The applicability of the model established through this study was confirmed, and the prediction model can be applied for more reliable rock mass classification as additional data are continuously accumulated.
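The model comparison above hinges on mean absolute error (MAE) in predicted RMR values, with MAE below 8 as the reported benchmark. A minimal sketch, using hypothetical RMR values rather than the study's borehole data:

```python
def mean_absolute_error(actual, predicted):
    """Mean absolute error between observed and predicted RMR values."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical RMR values for a handful of borehole test intervals
actual_rmr    = [45, 62, 38, 71, 55]
predicted_rmr = [49, 58, 41, 66, 60]

mae = mean_absolute_error(actual_rmr, predicted_rmr)
acceptable = mae < 8  # the study reports MAE below 8 for five of six algorithms
```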

Predicting the Effects of Rooftop Greening and Evaluating CO2 Sequestration in Urban Heat Island Areas Using Satellite Imagery and Machine Learning (위성영상과 머신러닝 활용 도시열섬 지역 옥상녹화 효과 예측과 이산화탄소 흡수량 평가)

  • Minju Kim;Jeong U Park;Juhyeon Park;Jisoo Park;Chang-Uk Hyun
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.481-493 / 2023
  • In high-density urban areas, the urban heat island effect increases urban temperatures, leading to negative impacts such as worsened air pollution, increased cooling energy consumption, and increased greenhouse gas emissions. In urban environments where it is difficult to secure additional green spaces, rooftop greening is an efficient greenhouse gas reduction strategy. In this study, we not only analyzed the current status of the urban heat island effect but also utilized high-resolution satellite data and spatial information to estimate the available rooftop greening area within the study area. We evaluated the mitigation effect of the urban heat island phenomenon and carbon sequestration capacity through temperature predictions resulting from rooftop greening. To achieve this, we utilized WorldView-2 satellite data to classify land cover in the urban heat island areas of Busan city. We developed a prediction model for temperature changes before and after rooftop greening using machine learning techniques. To assess the degree of urban heat island mitigation due to changes in rooftop greening areas, we constructed a temperature change prediction model with temperature as the dependent variable using the random forest technique. In this process, we built a multiple regression model to derive high-resolution land surface temperatures for training data using Google Earth Engine, combining Landsat-8 and Sentinel-2 satellite data. Additionally, we evaluated carbon sequestration based on rooftop greening areas using a carbon absorption capacity per plant. The results of this study suggest that the developed satellite-based urban heat island assessment and temperature change prediction technology using Random Forest models can be applied to urban heat island-vulnerable areas with potential for expansion.
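The carbon sequestration estimate described above is a simple product of greened rooftop area, planting density, and per-plant CO2 uptake. A sketch with hypothetical figures (none taken from the study):

```python
def co2_sequestration_kg(green_area_m2, plants_per_m2, absorption_kg_per_plant):
    """Annual CO2 uptake estimated from greened rooftop area,
    planting density, and per-plant absorption capacity."""
    return green_area_m2 * plants_per_m2 * absorption_kg_per_plant

# Hypothetical figures: 1,200 m2 of available rooftop, 9 plants per m2,
# 1.5 kg CO2 absorbed per plant per year
total = co2_sequestration_kg(1200, 9, 1.5)  # kg CO2 per year
```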

A Study on the Social Venture Startup Phenomenon Using the Grounded Theory Approach (근거이론 접근법을 이용한 소셜벤처 창업 현상에 관한 고찰)

  • Seol, Byung Moon;Kim, Young Lag
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.1 / pp.67-83 / 2023
  • The social venture start-up phenomenon can be viewed from the perspectives of both social enterprise and for-profit enterprise. This study aims to fundamentally explore the start-up phenomenon of social ventures from these two perspectives. Given the lack of prior research examining the social and commercial perspectives at the same time, this paper uses the grounded theory approach of Strauss and Corbin (1998), an inductive research method that builds analysis on prior research and interview data. To collect data for this study, eight representatives of companies currently operating social ventures were interviewed, and the data and phenomena were analyzed until theoretical saturation was reached, the point at which no additional information emerged. The analysis results of this study using the grounded theory approach are as follows. As a result of open coding and axial coding, 147 concepts and 70 subcategories were derived, and 18 categories were derived through the final abstraction process. In selective coding, 'expansion of social venture entry in the social domain' and 'expansion of the social function of for-profit companies' were selected as key categories, and a story line was formed around them. This study argues that academic research and analysis are needed on the competitive factors that companies pursuing two conflicting sets of values, such as social ventures, require in order to survive competitively. On the practical side, concepts such as collaboration with for-profit companies, value combination, entrepreneurship competency and performance improvement, reinforcement of social value execution competency, communication strategy, for-profit enterprise value investment, and entrepreneur management competency were derived. This study explains the social venture phenomenon for social enterprises, commercial enterprises, and entrepreneurs who want to enter the social venture field, and is expected to provide the implications necessary for successful social venture startups.


Dental Assistant and Dental Hygienist-comparison with U.S. (치과 보조 인력과 치과위생사-미국의 제도 비교)

  • Youngyuhn Choi
    • Journal of Korean Dental Hygiene Science / v.6 no.2 / pp.65-77 / 2023
  • Background: The shortage of dental hygienists working as assistants is a great concern to dental clinics, while dental hygienists themselves increasingly pursue oral hygiene management and preventive treatment, the main role of dental hygienists in the United States. The dental hygienist and dental assistant system in the United States can serve as a reference in these discussions. Methods: Educational requirements for licensure and the scopes of work of dental hygienists and dental assistants were investigated through information provided by the American Dental Association (ADA), the American Dental Hygienists' Association, the National Board Dental Hygiene Examination (NBDHE), the American Dental Assistants Association (ADAA), and the Dental Assistants National Board (DANB). Results: In the United States, each state has a different system, but in general, dental hygienists obtain licenses after completing 2~3-year associate degree programs in dental hygiene, having first acquired basic learning skills, and mainly perform tasks related to patient screening procedures, oral hygiene management, and preventive care. Dental assistants can take the license test after completing a 9~11-month training course to obtain a dental assistant certification. Additional expanded work typically requires passing state qualification tests, completing a training program, obtaining a degree, or gaining clinical experience for a certain period of time, depending on the state. Conclusion: The scope of work of dental hygienists designated by the Medical Engineer Act and its Enforcement Decree in Korea includes the work of both dental hygienists and dental assistants in the United States. If a dental assistant system like that of the United States is introduced to address the current shortage of dental assistants, institutional supplementation, such as adjustment of the scope of work and expansion of the role of dental hygienists in oral hygiene management and preventive work, is needed, and in-depth discussion is necessary.

EU's Space Code of Conduct: Right Step Forward (EU의 우주행동강령의 의미와 평가)

  • Park, Won-Hwa
    • The Korean Journal of Air & Space Law and Policy / v.27 no.2 / pp.211-241 / 2012
  • The Draft International Code of Conduct for Outer Space Activities, officially proposed by the European Union on the occasion of the 55th Session of the United Nations Committee on the Peaceful Uses of Outer Space in June 2012 in Vienna, Austria, is intended to fill the lacunae in the norms applicable to human activities in outer space and thus merits our attention. The missing elements of the norms span from the prohibition of an arms race, and the safety and security of space objects, including measures to reduce space debris, to the exchange of information on space activities among space-faring nations. The EU's initiatives, when implemented, cover, or will eventually prepare the forum to deal with, such issues of interest to the international community. The EU's initiatives, begun at the end of 2008, included unofficial contacts with major space powers, including in particular the USA, whose position is believed to have been reflected in the Draft, with the aim of having it adopted in 2013. Although the Code is made up of soft law rather than hard law for the subscribing countries, the USA seems to fear the eventuality whereby its strategic advantages in outer space would be affected by prohibiting norms against placing weapons in outer space, possibly to be pursued by the Code beyond its current non-binding character. It is with this trepidation that the USA has been opposing the adoption of United Nations General Assembly resolutions on the prevention of an arms race in outer space (PAROS) and, in the same context, the setting-up of a working group on the arms race in outer space in the framework of the Conference on Disarmament. China and Russia, who together put forward a draft Treaty on Prevention of the Placement of Weapons in Outer Space and of the Threat or Use of Force against Outer Space Objects (PPWT) in 2008, would not feel comfortable either, because the EU initiatives will steal the limelight. Consequently, their reactions toward the Draft Code are understandably passive, while the reaction of the USA to the PPWT was a clear-cut "No". Against this background, the future of the EU Code is uncertain. Nevertheless, the purpose of the Code, to reduce space debris, to allow exchange of information on space activities, and to protect space objects through safety and security, all to maximize the principle of the peaceful use and exploration of outer space, represents a laudable effort on the part of the EU. When detailed negotiations are held, some problems, including the cost of setting up an office for the clerical work, could be discussed with a view to an efficient and economical mechanism. For example, the new clerical work envisaged in the Draft Code could be discharged by the current UN OOSA (Office for Outer Space Affairs) with minimal additional resources. The EU's initiatives are another meaningful contribution, following its role in the adoption of the Kyoto Protocol of 1997 to the UNFCCC (UN Framework Convention on Climate Change), and deserve praise from the thoughtful international community.


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a momentous victory over Lee Sedol. Many people thought that machines would never beat a human at Go because, unlike chess, the number of possible paths of play exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in the field of image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, areas where it was difficult to obtain good performance using existing machine learning techniques. In contrast, however, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we tried to find out whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, together with a binary target variable recording whether the customer intends to open an account or not.
In this study, to evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network model. However, since not all network design alternatives can be tested, given the nature of artificial neural networks, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate model performance, to show how well the models classify the class of interest rather than overall accuracy. The detailed methods for applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features from them, but in business data the distance between fields carries no meaning, because the fields are usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields, so as to learn the characteristics of the whole record at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer is reversed relative to the first layer in order to reduce the influence of each field's position. For the dropout technique, we set neurons to drop out with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using the dropout technique, and the next best was the MLP model with two hidden layers using the dropout technique.
In this study, we obtained some findings as the experiment proceeded. First, models using the dropout technique make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well on binary classification problems, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems, because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve business binary classification problems.
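The F1 score used to compare the models above can be computed directly from confusion-matrix counts. A minimal sketch with invented labels, not the bank telemarketing data:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 score for the class of interest:
    the harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical predictions on an imbalanced binary target
y_true = [1, 0, 0, 1, 1, 0, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 0]
score = f1_score(y_true, y_pred)
```

Unlike overall accuracy, this metric ignores the easy negative majority and rewards correct identification of the positive class only.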

Interpreting Bounded Rationality in Business and Industrial Marketing Contexts: Executive Training Case Studies (阐述工商业背景下的有限合理性: 执行官培训案例研究)

  • Woodside, Arch G.;Lai, Wen-Hsiang;Kim, Kyung-Hoon;Jung, Deuk-Keyo
    • Journal of Global Scholars of Marketing Science / v.19 no.3 / pp.49-61 / 2009
  • This article provides training exercises for executives in interpreting subroutine maps of executives' thinking in processing business and industrial marketing problems and opportunities. This study builds on premises that Schank proposes about learning and teaching, including (1) learning occurs by experiencing, and the best instruction offers learners opportunities to distill their knowledge and skills from interactive stories in the form of goal-based scenarios, team projects, and understanding stories from experts; and (2) telling does not lead to learning, because learning requires action: training environments should emphasize active engagement with stories, cases, and projects. Each training case study includes executive exposure to decision system analysis (DSA). The training case requires the executive to write a "Briefing Report" of a DSA map. Instructions to the executive trainee in writing the briefing report include coverage of (1) details of the essence of the DSA map and (2) a statement of warnings and opportunities that the executive map reader interprets within the DSA map. The maximum length for a briefing report is 500 words, an arbitrary rule that works well in executive training programs. Following this introduction, section two of the article briefly summarizes relevant literature on how humans think within contexts in response to problems and opportunities. Section three illustrates the creation and interpretation of DSA maps using a training exercise in pricing a chemical product for different OEM (original equipment manufacturer) customers. Section four presents a training exercise in pricing decisions by a petroleum manufacturing firm. Section five presents a training exercise in marketing strategies by an office furniture distributor along with buying strategies by business customers.
Each of the three training exercises is based on research into information processing and decision making of executives operating in marketing contexts. Section six concludes the article with suggestions for use of this training case and for developing additional training cases for honing executives' decision-making skills. Todd and Gigerenzer propose that humans use simple heuristics because they enable adaptive behavior by exploiting the structure of information in natural decision environments. "Simplicity is a virtue, rather than a curse". Bounded rationality theorists emphasize the centrality of Simon's proposition, "Human rational behavior is shaped by a scissors whose blades are the structure of the task environments and the computational capabilities of the actor". Gigerenzer's view is relevant to Simon's environmental blade and to the environmental structures in the three cases in this article, "The term environment, here, does not refer to a description of the total physical and biological environment, but only to that part important to an organism, given its needs and goals." The present article directs attention to research that combines reports on the structure of task environments with the use of adaptive toolbox heuristics of actors. The DSA mapping approach here concerns the match between strategy and an environment-the development and understanding of ecological rationality theory. Aspiration adaptation theory is central to this approach. Aspiration adaptation theory models decision making as a multi-goal problem without aggregation of the goals into a complete preference order over all decision alternatives. The three case studies in this article permit the learner to apply propositions in aspiration level rules in reaching a decision. Aspiration adaptation takes the form of a sequence of adjustment steps. An adjustment step shifts the current aspiration level to a neighboring point on an aspiration grid by a change in only one goal variable. 
An upward adjustment step is an increase and a downward adjustment step is a decrease of a goal variable. Creating and using aspiration adaptation levels is integral to bounded rationality theory. The present article increases understanding and expertise of both aspiration adaptation and bounded rationality theories by providing learner experiences and practice in using propositions in both theories. Practice in ranking CTSs and writing TOP gists from DSA maps serves to clarify and deepen Selten's view, "Clearly, aspiration adaptation must enter the picture as an integrated part of the search for a solution." The body of "direct research" by Mintzberg, Gladwin's ethnographic decision tree modeling, and Huff's work on mapping strategic thought are suggestions on where to look for research that considers both the structure of the environment and the computational capabilities of the actors making decisions in these environments. Such research on bounded rationality permits both further development of theory in how and why decisions are made in real life and the development of learning exercises in the use of heuristics occurring in natural environments. The exercises in the present article encourage learning skills and principles of using fast and frugal heuristics in contexts of their intended use. The exercises respond to Schank's wisdom, "In a deep sense, education isn't about knowledge or getting students to know what has happened. It is about getting them to feel what has happened. This is not easy to do. Education, as it is in schools today, is emotionless. This is a huge problem." The three cases and accompanying set of exercise questions adhere to Schank's view, "Processes are best taught by actually engaging in them, which can often mean, for mental processing, active discussion."
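An aspiration adaptation adjustment step, as described above, changes exactly one goal variable up or down on the aspiration grid, without aggregating the goals into a single preference order. A minimal sketch; the goal variables and step sizes are hypothetical:

```python
def adjust_aspiration(levels, goal, step):
    """One aspiration adaptation step: shift the aspiration level to a
    neighboring grid point by changing a single goal variable.
    A positive step is an upward adjustment, a negative one downward."""
    adjusted = dict(levels)  # leave the current aspiration level intact
    adjusted[goal] += step
    return adjusted

# Hypothetical two-goal aspiration level (profit target, market-share target)
current = {"profit": 100, "share": 0.15}
upward = adjust_aspiration(current, "profit", 10)      # upward adjustment step
downward = adjust_aspiration(current, "share", -0.05)  # downward adjustment step
```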


Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being entered into the models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise at this point. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, with its high homonym ratio? Will text preprocessing, such as correcting spelling or spacing errors, affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize these issues in three central research questions. First, which is more effective as the initial input to a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? As an approach to these research questions, we generate various types of morpheme vectors reflecting the questions and then compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data both from the same domain as the target and from another domain: about 2 million cosmetics product reviews from Naver Shopping, and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ on the following three criteria. First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing, namely, only splitting sentences, or additionally correcting spelling and spacing after sentence separation. Third, they vary in the form of the input fed into the word vector model: whether the morphemes themselves are entered, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. The results suggest that using same-domain text even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the 'incomprehensible' category, lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency standard for inclusion of a morpheme seem to have no definite influence on classification accuracy.
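The CBOW model mentioned above learns each morpheme from its surrounding context within a symmetric window. The sketch below shows how (context, target) training pairs are formed from a toy morpheme sequence; the window is shortened to 2 for readability, whereas the study uses a window of 5 and 300-dimensional vectors:

```python
def cbow_training_pairs(morphemes, window=5):
    """Build (context, target) pairs as the CBOW model would,
    using a symmetric context window around each target morpheme."""
    pairs = []
    for i, target in enumerate(morphemes):
        lo, hi = max(0, i - window), min(len(morphemes), i + window + 1)
        context = morphemes[lo:i] + morphemes[i + 1:hi]
        pairs.append((context, target))
    return pairs

# A toy morpheme sequence ('예쁘' + '고', as in the example above)
tokens = ["예쁘", "고", "좋", "다"]
pairs = cbow_training_pairs(tokens, window=2)
```

CBOW then learns to predict each target from the average of its context vectors; libraries such as gensim expose this as their default Word2Vec training mode.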

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.55-79 / 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to build the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid aspect of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these centers and secondary markets, only one collection center, one remanufacturing center, one redistribution center, and one secondary market should be opened in the network. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively), and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiment, the proposed HGA and a conventional competing approach are compared with each other using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; this GA approach has no local search technique such as the IHCM of the proposed HGA approach. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC, with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets, are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two types of RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM on Windows XP. The parameters used in the HGA and GA approaches are: total number of generations 10,000, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 iterations are made to eliminate the randomness of the HGA and GA searches. With performance comparisons, network representations by opening/closing decisions, and convergence processes for the two types of RLNCC, the experimental results show that the HGA performs significantly better than the GA in terms of the optimal solution, though the GA is slightly quicker in terms of CPU time. Finally, it has been shown that the proposed HGA approach is more efficient than the conventional GA approach on the two types of RLNCC, since the former has both a GA search process and a local search process as an additional search scheme, while the latter has a GA search process alone.
In future research, much larger RLNCCs will be tested to verify the robustness of our approach.
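The IHCM inserted into the HGA loop is a local search around the solution to which the GA has converged. The sketch below is a generic hill-climbing step on a toy one-dimensional cost surface, not the authors' IHCM implementation or their MIP objective; the cost function, step size, and iteration count are all hypothetical:

```python
import random

def iterative_hill_climb(cost, start, step=2.0, iterations=200, seed=42):
    """Simple hill-climbing local search: repeatedly sample a neighbor
    within +/- step of the incumbent and keep any improvement found."""
    rng = random.Random(seed)
    best = start
    best_cost = cost(best)
    for _ in range(iterations):
        neighbor = best + rng.uniform(-step, step)
        c = cost(neighbor)
        if c < best_cost:  # accept only improving moves
            best, best_cost = neighbor, c
    return best, best_cost

# Toy cost surface standing in for the total (transport + fixed + handling) cost
total_cost = lambda x: (x - 3.0) ** 2 + 5.0
x_best, c_best = iterative_hill_climb(total_cost, start=10.0)
```

In the HGA, a search of this kind refines each promising GA solution, which is why the hybrid finds better optima than the GA alone at some extra CPU cost.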