• Title/Summary/Keyword: Process models

Search Results: 5,625 (processing time: 0.04 seconds)

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung; Ga, Myeong Hyeon; Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.83-98 / 2015
  • Many TV viewers rely mainly on portal sites to retrieve broadcast-related information while watching TV. However, finding the information they want takes considerable time, because the internet returns a great deal of irrelevant material; consequently, this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated as a solution. An interactive video provides clickable objects, areas, or hotspots with which users can interact: when users click an object in the video, they instantly see additional information related to it. Making an interactive video with an authoring tool involves three basic steps: (1) create an augmented object; (2) set the object's area and the time it is displayed on the video; and (3) set an interactive action linked to pages or hyperlinks. Users of existing authoring tools such as Popcorn Maker and Zentrick spend a great deal of time on step (2). With wireWAX, which uses a vision-based annotation method, they can save much of the time needed to set an object's location and display time, but they must wait while objects are detected and tracked. It is therefore desirable to reduce the time spent on step (2) by effectively combining the benefits of manual and vision-based annotation. This paper proposes a novel annotation method that allows an annotator to annotate easily based on face areas. The method comprises two stages: a pre-processing stage and an annotation stage. Pre-processing is needed so that the system can detect shots for users who want to browse the video's content easily. The pre-processing stage proceeds as follows: 1) extract shots from the video frames using a color-histogram-based shot boundary detection method; 2) cluster shots by similarity and align them into shot sequences; and 3) detect and track faces across all shots of each shot sequence and save the results into the shot sequence metadata. After pre-processing, the user annotates objects as follows: 1) the annotator selects a shot sequence and then a keyframe of a shot in that sequence; 2) the annotator places objects at positions relative to an actor's face in the selected keyframe, and the same objects are then annotated automatically through the end of the shot sequence wherever the face area is detected; and 3) the user assigns additional information to the annotated objects. In addition, this paper designs a feedback model to compensate for defects that may occur after annotation: misaligned shots, wrongly detected faces, and inaccurate object locations. Users can also apply an interpolation method to restore the positions of objects deleted during feedback. After feedback, the annotated object data can be saved as interactive object metadata. Finally, the paper presents an interactive video authoring system implemented to verify the performance of the proposed annotation method using the presented models. The experiments analyze object annotation time and report a user evaluation. First, the average object annotation time shows that the proposed tool is twice as fast as existing authoring tools; occasionally the proposed tool took longer than existing tools because wrong shots were detected during pre-processing. The usefulness and convenience of the system were measured through a user evaluation aimed at users experienced with interactive video authoring systems. Nineteen recruited experts answered 11 questions drawn from the CSUQ (Computer System Usability Questionnaire), which was designed by IBM for evaluating systems. The evaluation showed that the proposed tool was rated about 10% higher than the other interactive video authoring systems for usefulness in authoring interactive video.
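
For illustration, a minimal Python/OpenCV sketch of the color-histogram-based shot boundary detection used in the pre-processing stage; the HSV channels, bin count, and similarity threshold are assumptions, not the paper's settings:

```python
import cv2

def detect_shot_boundaries(video_path, threshold=0.5, bins=32):
    """Detect shot boundaries by comparing HSV color histograms of
    consecutive frames; low correlation marks a shot change."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [0], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # 2-D histogram over hue and saturation (bin count is illustrative)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                boundaries.append(idx)  # new shot starts at this frame
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```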

Detection of Phantom Transaction using Data Mining: The Case of Agricultural Product Wholesale Market (데이터마이닝을 이용한 허위거래 예측 모형: 농산물 도매시장 사례)

  • Lee, Seon Ah; Chang, Namsik
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.161-177 / 2015
  • With the rapid evolution of technology, the size, number, and types of databases have increased concomitantly, so data mining faces many challenging applications. One such application is the discovery of fraud patterns in agricultural product wholesale transactions. The agricultural product wholesale market in Korea is huge, and vast numbers of transactions are made every day. Demand for agricultural products continues to grow, and electronic auction systems have raised the operational efficiency of the wholesale market. The number of unusual transactions can be assumed to grow in proportion to the trading volume, and an unusual transaction is often the first sign of fraud. However, it is very difficult to identify and detect these transactions, and the corresponding fraud, because the types of fraud are more sophisticated than ever before. Fraud can be detected by manually verifying the overall transaction records, but this requires a significant amount of human resources and is ultimately impractical. Fraud can also be revealed by a victim's report or complaint, but there are usually no victims in agricultural product wholesale fraud because it is committed through collusion between an auction company and an intermediary wholesaler. Nevertheless, transaction records must be monitored continuously to prevent fraud, because fraud not only disturbs the fair trading order of the market but also rapidly erodes the market's credibility. Data mining is very useful in such an environment, since it can discover unknown fraud patterns or features in a large volume of transaction data. The objective of this research is to empirically investigate the factors necessary to detect fraudulent transactions in an agricultural product wholesale market by developing a data-mining-based fraud detection model. One major type of fraud is the phantom transaction, a colluding transaction between the seller (auction company or forwarder) and the buyer (intermediary wholesaler): they pretend to fulfill a transaction by recording false data in the online transaction processing system without actually selling products, and the seller receives money from the buyer. This leads to overstated sales performance and illegal money transfers, which reduce the market's credibility. The paper reviews the wholesale market environment, including transaction types, the roles of market participants, and the various types and characteristics of fraud, and introduces the whole process of developing the phantom transaction detection model. The process consists of four modules: (1) data cleaning and standardization; (2) statistical analysis such as distribution and correlation analysis; (3) construction of a classification model using decision-tree induction; and (4) verification of the model in terms of hit ratio. We collected real data from six associations of agricultural producers in metropolitan markets. The final decision-tree model revealed that the monthly average trading price of an item offered by forwarders is a key variable in detecting phantom transactions, and the verification procedure confirmed the suitability of the results. However, even though the performance of the model is satisfactory, sensitive issues remain for improving classification accuracy and the conciseness of rules. One such issue is the robustness of the data mining model: data mining is strongly data-oriented, so its models tend to be very sensitive to changes in data or circumstances, and this non-robustness requires continuous remodeling as data or situations change. We hope this paper offers a valuable guideline to organizations and companies that consider introducing or constructing a fraud detection model in the future.
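
For illustration only, the sketch below trains a small decision tree of the kind described in module (3). The feature names and toy records are hypothetical stand-ins, not the study's data; only `monthly_avg_price` mirrors the key variable the final model identified:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical cleaned transaction records (module 1 output); the real
# data came from the market's online transaction processing system.
df = pd.DataFrame({
    "monthly_avg_price": [120, 95, 310, 290, 130, 305],  # key variable in the paper
    "quantity":          [10, 14, 2, 3, 12, 2],
    "phantom":           [0, 0, 1, 1, 0, 1],             # fraud label
})

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(df[["monthly_avg_price", "quantity"]], df["phantom"])
print(export_text(tree, feature_names=["monthly_avg_price", "quantity"]))
```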

The influence of perceived usefulness and perceived ease of use of experience store on satisfaction and loyalty (체험매장의 지각된 용이성과 유용성이 만족과 충성도에 미치는 영향)

  • Lee, Ji-Hyun
    • Journal of Distribution Science / v.9 no.3 / pp.5-14 / 2011
  • One of the new roles of modern retail stores is to give consumers a memorable experience. In Korea, enhancing a store's environment so that customers remember a unique shopping experience is recognized as a sound strategy for strengthening the store's competitiveness, and with this incentive, awareness of the experience-store concept is starting to spread across various categories of the retail industry. However, most experience stores, except in a few cases, have yet to turn a significant profit, which explains why Korean consumers remain somewhat unfamiliar with, yet fascinated by, the experience stores that now exist in the country. Consumer satisfaction directly and indirectly affects a company's future profit and potential financial gain, and it also affects loyalty; knowing the significant factors that increase satisfaction and loyalty is therefore essential for any company, in any field, that wants to differentiate itself effectively from the competition. Facing increased competition, most Korean companies have adopted experience-store marketing strategies. When establishing processes for increasing sales and achieving a sustainable competitive advantage with a new concept, companies should consider the factors that influence consumers' acceptance of new concepts and ideas. The Technology Acceptance Model (TAM) models how people accept new concepts; it proposes two factors that influence a person's decisions about how and when to use a new product: "perceived usefulness" and "perceived ease of use." Much of the existing research suggests that a person's character also affects the acceptance process: personal attributes such as individual preferences, self-confidence, values, traits, and skills influence the willingness to try something new. It is therefore meaningful to establish how the TAM components, as well as personal character, affect acceptance of the experience-store concept. The first goal of the study is to examine the influence of the innovative factors (perceived usefulness and perceived ease of use) of an experience store on satisfaction and loyalty; the second is to identify the moderating effect of consumers' personal characteristics on the model. The proposed model was tested on 149 respondents who engage in leisure sports and buy outdoor sports garments and equipment. According to the findings, satisfaction with and loyalty to an experience store can be explained by perceived usefulness and perceived ease of use, with perceived ease of use being the stronger of the two factors. The study did not find significant effects of personal character on the model. In conclusion, when companies that operate experience stores execute their marketing and promotion strategies, they should stress the stores' ease-of-use components. The data also suggest that, since the experience-store idea is still relatively unfamiliar to Korean consumers, most customers cannot yet evaluate, or take a position on, their attitudes toward experience stores.
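
A rough sketch of how the two innovative factors could be regressed on satisfaction, assuming survey items are averaged into per-respondent scores; the synthetic data and coefficients are placeholders, not the study's measurements:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-respondent scores for 149 respondents
rng = np.random.default_rng(1)
peou = rng.normal(size=149)                        # perceived ease of use
pu = 0.6 * peou + rng.normal(size=149)             # perceived usefulness (TAM: PEOU drives PU)
satisfaction = 0.5 * peou + 0.3 * pu + rng.normal(size=149)

# OLS of satisfaction on PU and PEOU; compare the two coefficients
X = sm.add_constant(np.column_stack([pu, peou]))
print(sm.OLS(satisfaction, X).fit().summary())
```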

A Study on the Effect of University Library User's Sense of Community on User Satisfaction and Loyalty (대학도서관 이용자의 공동체의식이 이용자 만족도 및 충성도에 미치는 영향 연구)

  • Roh, Hyo Jin; Chang, Woo Kwon
    • Journal of the Korean Society for Information Management / v.36 no.1 / pp.137-168 / 2019
  • This study measures and analyzes university library users' sense of community, service quality assessment, user satisfaction, and loyalty, and investigates the effect of users' sense of community on satisfaction and loyalty as mediated by the assessment of service quality. Based on the results, directions and implications for library development to improve user satisfaction and loyalty are presented. To achieve the purpose of the study, precedent research and literature were reviewed, and the study model and hypotheses were established on this theoretical background. To verify the hypotheses, a total of 300 questionnaires were distributed to undergraduate students at C National University who had experience using the Central Library, and the final 282 samples were used for analysis. Differences by the general characteristics of the sample were analyzed with independent-sample t-tests and one-way ANOVA. The mediation analyses for hypothesis testing, conducted with models 4 and 6 of Hayes' PROCESS macro, produced the following results. First, university library users' sense of community (perception of and satisfaction with service benefits, and a sense of mutual influence) affects user satisfaction with the university library, mediated by the service quality assessment, at a statistically significant level: the higher the users' sense of community, the higher the service quality assessment and, in turn, user satisfaction. Second, users' sense of community affects user loyalty to the university library, mediated by the service quality assessment and user satisfaction: the higher the sense of community, the higher the service quality assessment, user satisfaction, and ultimately user loyalty. These results show that university library users' sense of community has direct and indirect effects on enhancing user satisfaction and loyalty through the service quality assessment.
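
Hayes' PROCESS macro runs in SPSS/SAS/R; the sketch below reproduces the core of its simple mediation model (model 4) in Python via two OLS regressions. The variable names and synthetic data are hypothetical, and a real analysis would add bootstrap confidence intervals for the indirect effect:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical scores: community (X), service_quality (M), satisfaction (Y)
rng = np.random.default_rng(0)
community = rng.normal(size=282)
service_quality = 0.5 * community + rng.normal(size=282)
satisfaction = 0.4 * service_quality + 0.2 * community + rng.normal(size=282)

# Path a: X -> M
a = sm.OLS(service_quality, sm.add_constant(community)).fit().params[1]
# Path b and direct effect c': M, X -> Y
Xmat = sm.add_constant(np.column_stack([service_quality, community]))
fit = sm.OLS(satisfaction, Xmat).fit()
b, c_prime = fit.params[1], fit.params[2]

print(f"indirect effect a*b = {a*b:.3f}, direct effect c' = {c_prime:.3f}")
```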

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun; Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Technologies in artificial intelligence have been developing rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on cognitive problems related to human intelligence, such as learning and problem solving, and the field has achieved more technological progress than ever thanks to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to let AI agents make decisions using machine-readable, processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical AI techniques such as machine learning. Today the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and such knowledge bases support intelligent processing in many areas of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. Much recent research in knowledge-based AI uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, user-created summaries of some unifying aspect of an article. This knowledge is created via mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework; because the knowledge is generated from semi-structured, user-created infobox data, DBpedia can expect high reliability in terms of accuracy. However, since only about 50% of all pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate its appropriateness, we describe a knowledge extraction model that follows the DBpedia ontology schema and is trained on Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying the document into ontology classes, classifying the sentences appropriate for triple extraction, and selecting values and transforming them into RDF triple structure. Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes; we then classify the appropriate sentences according to the attributes belonging to that class; finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training dataset from a Wikipedia dump by adding BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we comparatively evaluated CRF and Bi-LSTM-CRF for the knowledge extraction process. Through this proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents, and the methodology can significantly reduce the expert effort needed to construct instances according to the ontology schema.
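
A minimal sketch of the BIO-tagging step used to generate such training data: tokens matching an infobox value are tagged B-/I-<relation>, everything else O. The example sentence and relation name are hypothetical:

```python
def bio_tag_sentence(tokens, value_tokens, relation):
    """Assign BIO tags: the token span matching the infobox value gets
    B-/I-<relation>; all other tokens get O."""
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = f"B-{relation}"
            for j in range(i + 1, i + n):
                tags[j] = f"I-{relation}"
            break
    return list(zip(tokens, tags))

# Example: tagging a birthPlace value in a sentence (illustrative)
tokens = "Yi Sun-sin was born in Hanseong in 1545".split()
print(bio_tag_sentence(tokens, ["Hanseong"], "birthPlace"))
# [('Yi', 'O'), ..., ('Hanseong', 'B-birthPlace'), ('in', 'O'), ('1545', 'O')]
```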

A Study on Training Dataset Configuration for Deep Learning Based Image Matching of Multi-sensor VHR Satellite Images (다중센서 고해상도 위성영상의 딥러닝 기반 영상매칭을 위한 학습자료 구성에 관한 연구)

  • Kang, Wonbin; Jung, Minyoung; Kim, Yongil
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1505-1514 / 2022
  • Image matching is a crucial preprocessing step for the effective utilization of multi-temporal and multi-sensor very high resolution (VHR) satellite images. Deep learning (DL), which is attracting widespread interest, has proven an efficient approach for measuring the similarity between image pairs quickly and accurately by extracting complex, detailed features from satellite images. However, image matching of VHR satellite images remains challenging because the results of DL models depend on the quantity and quality of the training dataset, and creating training datasets from VHR satellite images is difficult. This study therefore examines the feasibility of a DL-based method for matching-pair extraction, the most time-consuming process during image registration, and analyzes the factors that affect accuracy depending on how the training dataset is configured when it is developed, with bias, from an existing multi-sensor VHR image database. For this purpose, the training dataset was composed of correct and incorrect matching pairs, created by assigning true and false labels to image pairs extracted with a grid-based Scale Invariant Feature Transform (SIFT) algorithm from a total of 12 multi-temporal and multi-sensor VHR images. A Siamese convolutional neural network (SCNN), proposed for matching-pair extraction, was trained on the constructed dataset; it measures similarity by passing the two images in parallel through two identical convolutional neural network branches. The results confirm that data acquired from a VHR satellite image database can be used as a DL training dataset and indicate the potential to improve the efficiency of the matching process through appropriate configuration of multi-sensor images. Given its stable performance, DL-based image matching with multi-sensor VHR satellite images is expected to replace existing manual feature extraction methods and to develop further into an integrated DL-based image registration framework.
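
The grid-based SIFT extraction can be sketched with OpenCV as below: keypoints are bucketed into grid cells and only the strongest responses per cell are kept, so candidate pairs cover the image evenly. The grid size and per-cell count are assumptions, not the study's settings; the resulting patch pairs would then be scored by the Siamese CNN:

```python
import cv2
import numpy as np

def grid_sift_keypoints(image, grid=8, per_cell=5):
    """Detect SIFT keypoints on a grayscale image and keep the strongest
    few per grid cell, spreading candidates evenly across the scene."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)
    h, w = image.shape[:2]
    cells = {}
    for kp, desc in zip(keypoints, descriptors):
        cell = (int(kp.pt[1] * grid / h), int(kp.pt[0] * grid / w))
        cells.setdefault(cell, []).append((kp.response, kp, desc))
    kept_kp, kept_desc = [], []
    for items in cells.values():
        # keep the highest-response keypoints in each cell
        for _, kp, desc in sorted(items, key=lambda t: -t[0])[:per_cell]:
            kept_kp.append(kp)
            kept_desc.append(desc)
    return kept_kp, np.array(kept_desc)
```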

Preliminary Results of Marine Heat Flow Measurements in the Chukchi Abyssal Plain, Arctic Ocean, and Constraints on Crustal Origin (북극 척치 해저평원의 해양지열관측 초기결과와 지각기원에 대한 의미)

  • Kim, Young-Gyun; Hong, Jong Kuk; Jin, Young Keun; Jang, Minseok; So, Byung Dal
    • The Journal of Engineering Geology / v.32 no.1 / pp.113-126 / 2022
  • The tectonic history of the Chukchi Abyssal Plain in the Amerasia Basin, Arctic Ocean, has not been fully explored because harsh sea-ice conditions prevent detailed observation. Existing models of the region's tectonic history give contrasting interpretations of the timing of crust formation (Mesozoic to Cenozoic), the crust type (from hyper-extended continental crust to oceanic crust), and the formation process (from parallel/fan-shaped rifting to transform faulting). To help determine the age of the oceanic crust, the geothermal gradient was measured at three stations in the south of the abyssal plain at depths of 2,160-2,250 m below sea level; the stations were located perpendicular to the spreading axis along a 40 km-long transect. In-situ thermal conductivity measurements, corrected against laboratory tests, gave observed marine heat flows of 55 to 61 mW/m2. All measurements were taken during Arctic expeditions in 2018 (ARA09C) and 2021 (ARA12C) by the Korean ice-breaking research vessel (IBRV) Araon. Under the assumption of oceanic crust, the results correspond to formation in the Late Cretaceous (Mesozoic). The inferred age supports the hypothesis that formation was activated by the opening of the Makarov Basin during the Late Mesozoic-Cenozoic, which would make it contemporaneous with the rifting of the Chukchi Borderland immediately east of the abyssal plain. The heat flow data indicate that the base of the gas hydrate stability zone lies 332-367 m below the seafloor, which will help identify the gas-hydrate-related bottom simulating reflector in future seismic surveys, as already identified on the Chukchi Plateau. Further geophysical surveys, including heat flow measurements, are required to improve our understanding of the formation process and thermal mantle structure of the abyssal plain.
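
The abstract does not state how the age was inferred; a common first-order check inverts the half-space cooling relation q ≈ C/√t. With a Parsons-and-Sclater-type constant of roughly 480 mW·m⁻²·Myr^(1/2) (an assumption here, not the paper's method), the observed 55-61 mW/m2 gives ages of roughly 62-76 Myr, broadly consistent with a Late Cretaceous inference:

```python
C = 480.0  # mW/m^2 * sqrt(Myr); assumed half-space cooling constant

def crust_age_myr(heat_flow_mw_m2):
    """Invert q = C / sqrt(t) for the age t of oceanic crust in Myr."""
    return (C / heat_flow_mw_m2) ** 2

for q in (55.0, 61.0):
    print(f"q = {q:.0f} mW/m^2 -> age ~ {crust_age_myr(q):.0f} Myr")
# q = 55 -> ~76 Myr; q = 61 -> ~62 Myr
```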

A Study on the Decision Factors for AI-based SaMD Adoption Using Delphi Surveys and AHP Analysis (델파이 조사와 AHP 분석을 활용한 인공지능 기반 SaMD 도입 의사결정 요인에 관한 연구)

  • Byung-Oh Woo; Jay In Oh
    • The Journal of Bigdata / v.8 no.1 / pp.111-129 / 2023
  • With the diffusion of digital innovation, the adoption of AI-based innovative medical technologies is increasing in the medical field. This is driving the launch and adoption of AI-based SaMD (Software as a Medical Device), but research on the factors that influence the adoption of SaMD by medical institutions is lacking. The purpose of this study is to identify the key factors that influence medical institutions' decisions to adopt AI-based SaMD and to analyze the weights and priorities of these factors. To this end, we conducted Delphi surveys based on literature studies of technology acceptance models in the healthcare industry, medical AI, and SaMD, and developed a research model combining the HOTE (Human, Organization, Technology, and Environment) framework and the HABIO (Holistic Approach: Business, Information, Organizational) framework. Based on this research model, with 5 main criteria and 22 sub-criteria, we conducted an AHP (Analytic Hierarchy Process) analysis among experts from domestic medical institutions and SaMD providers to empirically analyze SaMD adoption factors. The results show that the main criteria for the adoption of AI-based SaMD rank, in order of priority: technical factors, economic factors, human factors, organizational factors, and environmental factors. The sub-criteria rank, in order of priority: reliability, cost reduction, medical staff's acceptance, safety, top management's support, security, and licensing and regulatory levels. In particular, technical factors such as reliability, safety, and security were found to be the most important for SaMD adoption. In addition, comparisons of the weights and priorities across groups showed that SaMD adoption factors vary by type of institution, type of medical institution, and job type within the medical institution.
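
A brief sketch of the AHP computation behind such rankings: priorities are the principal eigenvector of a pairwise comparison matrix, checked with a consistency ratio. The 3x3 matrix below is illustrative, not the study's 5-criteria/22-sub-criteria hierarchy:

```python
import numpy as np

# Illustrative pairwise comparison matrix on Saaty's 1-9 scale (values assumed)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                  # priority vector (principal eigenvector)

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
print("weights:", weights.round(3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable
```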

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong; SangYoun Kim; SungKu Heo; Shahzeb Tariq; MinHyeok Shin; ChangKyoo Yoo
    • Korean Chemical Engineering Research / v.61 no.4 / pp.523-541 / 2023
  • As accessibility to 3D printers increases, exposure to the chemicals associated with 3D printing grows more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of toxicity prediction using in silico techniques is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in molecular descriptors. First, the MissForest algorithm was used to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four machine learning models (decision tree, random forest, XGBoost, and SVM), an ML-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated with Tree-SHAP (SHapley Additive exPlanations), one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure data approximately 2.5-fold compared to the existing data. On the imputed molecular descriptor dataset, the data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Finally, Tree-SHAP analysis demonstrated that the data-centric QSAR model attains high prediction performance for toxicity information by identifying the key molecular descriptors highly correlated with the toxicity indices. The proposed QSAR model based on the data-centric XAI approach can therefore be extended to predict the toxicity of potential pollutants in emerging printing chemicals and in chemical, semiconductor, or display processes.
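
MissForest originates as an R package; a common Python approximation, sketched below under that assumption, chains scikit-learn's IterativeImputer with a random forest estimator, then fits an XGBoost regressor explained with Tree-SHAP. The arrays are synthetic placeholders for a molecular descriptor matrix and a target such as Log P:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import IterativeImputer
import xgboost as xgb
import shap

# Synthetic stand-in: descriptor matrix with ~20% missing values
rng = np.random.default_rng(0)
X = rng.random((200, 10))
X[rng.random(X.shape) < 0.2] = np.nan
y = rng.random(200)

# MissForest-style imputation: iterative imputation with random forests
imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=100),
                           max_iter=10, random_state=0)
X_imp = imputer.fit_transform(X)

# QSAR-style regressor plus Tree-SHAP attributions per descriptor
model = xgb.XGBRegressor(n_estimators=200).fit(X_imp, y)
shap_values = shap.TreeExplainer(model).shap_values(X_imp)
```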

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.55-79 / 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). The proposed HGA uses a genetic algorithm (GA) as its main algorithm. To implement the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to build the GA's initial population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop; the IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC comprises collection centers, remanufacturing centers, redistribution centers, and secondary markets in a reverse logistics network, and exactly one collection center, remanufacturing center, redistribution center, and secondary market may be opened. Several assumptions are made to implement the RLNCC effectively. The RLNCC is represented by a mixed integer programming (MIP) model with indexes, parameters, and decision variables, whose objective function minimizes the total cost, consisting of transportation cost, fixed cost, and handling cost. The transportation cost arises from transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market: if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively) and collection center 1 is opened while the others are closed, the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In numerical experiments, the proposed HGA and a conventional competing approach are compared using various measures of performance; the competing approach is the GA approach of Yun (2013), which has no local search technique such as the IHCM. CPU time, optimal solution, and optimal setting are used as measures of performance. Two types of RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented to compare the HGA and GA approaches. The MIP models for the two RLNCC types were programmed in Visual Basic 6.0 and run on an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM under Windows XP. The parameters used in the HGA and GA approaches are: 10,000 generations in total, a population size of 20, a crossover rate of 0.5, a mutation rate of 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs were made to eliminate the randomness of the HGA and GA searches. The performance comparisons, network representations by opening/closing decision, and convergence processes for the two RLNCC types show that the HGA performs significantly better than the GA in terms of the optimal solution, although the GA is slightly quicker in CPU time. Finally, the proposed HGA approach proved more efficient than the conventional GA approach on both RLNCC types, since the former has a local search process in addition to the GA search, while the latter has the GA search alone. In a future study, much larger RLNCCs will be tested for the robustness of our approach.
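
A skeletal Python version of the hybrid scheme, assuming a caller supplies a `fitness` function that decodes the bit string into opening/closing decisions and returns the total cost. The operators are deliberately simplified: plain elitism and single-bit-flip hill climbing stand in for the enlarged-sampling-space strategy of Gen and Chang (1997) and the full IHCM of Michalewicz (1994):

```python
import random

def hga(fitness, n_bits, pop_size=20, generations=1000,
        cx_rate=0.5, mut_rate=0.1, hill_steps=10):
    """Hybrid GA sketch: bit-string GA plus hill-climbing local search
    applied to the best individual of each generation (minimization)."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[0][:]
        for _ in range(hill_steps):            # local search around the GA's best
            cand = elite[:]
            cand[random.randrange(n_bits)] ^= 1  # flip one open/close bit
            if fitness(cand) < fitness(elite):
                elite = cand
        nxt = [elite]                          # elitism: carry the improved best
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)
            if random.random() < cx_rate:      # two-point crossover
                i, j = sorted(random.sample(range(n_bits), 2))
                child = p1[:i] + p2[i:j] + p1[j:]
            else:
                child = p1[:]
            child = [b ^ (random.random() < mut_rate) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)
```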