• Title/Summary/Keyword: Output


Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae; Lee, Bomi; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, Google DeepMind's Go-playing artificial intelligence program, won a landmark victory against Lee Sedol. Many people had assumed that a machine could not beat a human at Go because, unlike chess, the number of possible moves exceeds the number of atoms in the universe, yet the result was the opposite of what was predicted. After the match, artificial intelligence was highlighted as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core technique behind the AlphaGo algorithm, has drawn much interest. Deep learning is already being applied to many problems and shows especially strong performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where traditional machine learning techniques struggled to achieve good performance. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigate whether existing deep learning techniques can be used not only for recognizing high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compare the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. The data set has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of models using CNN, LSTM, and dropout, which are widely used deep learning algorithms and techniques, with that of MLP models, a traditional artificial neural network model. Because not all network design alternatives can be tested, the experiment was conducted under restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the conditions under which dropout was applied. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than overall accuracy. The methods for applying each deep learning technique were as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but business data fields are usually independent, so the distance between fields carries little meaning. We therefore set the CNN filter size to the number of fields so that the whole record is learned at once, and added a hidden layer so that decisions are made on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of the position of each field. For the dropout technique, neurons in each hidden layer were dropped with probability 0.5. The experimental results show that the model with the highest F1 score was the CNN model with dropout, followed by the MLP model with two hidden layers and dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally classify better. Second, CNN models classify better than MLP models; this is interesting because CNN has rarely been applied to binary classification problems, in contrast to the fields where its effectiveness is already proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to business binary classification problems.
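
The following is a minimal, hypothetical sketch (in Keras, which the paper does not specify) of the kind of 1-D CNN-with-dropout classifier described in the abstract: the convolution filter spans all input fields at once, an extra hidden layer acts on the extracted features, dropout is applied with probability 0.5, and the model is scored with the F1 measure. Data, field count, layer sizes, and training settings are illustrative assumptions only.

```python
# Hedged sketch, not the authors' code: 1-D CNN + dropout binary classifier
# evaluated with the F1 score, assuming a preprocessed feature matrix
# X (n_samples, n_fields) with binary labels y.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

n_fields = 16  # hypothetical number of input fields after encoding

model = keras.Sequential([
    keras.Input(shape=(n_fields, 1)),
    # Filter size equal to the number of fields: the whole record is
    # convolved at once, mirroring the setting described in the abstract.
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),   # extra hidden layer on the extracted features
    layers.Dropout(0.5),                   # dropout probability 0.5
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Synthetic stand-in for the Portuguese bank telemarketing data used in the paper.
X = np.random.rand(1000, n_fields, 1)
y = np.random.randint(0, 2, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model.fit(X_tr, y_tr, epochs=5, batch_size=64, verbose=0)

# F1 score on the positive (interesting) class, as in the paper's evaluation.
pred = (model.predict(X_te, verbose=0) > 0.5).astype(int).ravel()
print("F1:", f1_score(y_te, pred))
```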

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.79-92 / 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method for integrating national R&D data and helping users navigate the integrated data through a knowledge map service. The knowledge map service is built using a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center; that is, other R&D data such as research papers, patents, and reports are connected to the research project as its outputs. The lightweight ontology represents the simple relationships among the integrated data, such as project-output, document-author, and document-topic relationships, and the knowledge map then allows further relationships, such as co-author and co-topic relationships, to be inferred. To extract the relationships among the integrated data, a relational-data-to-triples transformer was implemented, and a topic modeling approach was introduced to extract document-topic relationships. A triple store manages and processes the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: maps used in knowledge management to store, manage, and process an organization's data as knowledge, and maps for analyzing and representing knowledge extracted from science and technology documents. This research focuses on the latter. A knowledge map service is introduced for integrating the national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map; it lets us represent and process knowledge as a simple network, which fits the navigation and visualization characteristics of a knowledge map. The ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process it. In the ontologies, researchers are implicitly connected through the national R&D data by authorship and project-performer relationships. A knowledge map displaying the researchers' network is created from the co-authoring relationships of national R&D documents and the co-participation relationships of national R&D projects. In sum, a knowledge map service system based on topic modeling and ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system aims 1) to integrate the national R&D data obtained from NDSL and NTIS, 2) to provide semantic and topic-based search over the integrated data, and 3) to provide knowledge map services based on semantic analysis and knowledge processing. S&T information such as research papers, research reports, patents, and GTB data are updated daily from NDSL, and R&D project information, including participants and outputs, is updated from NTIS; both are obtained and merged into an integrated database. The knowledge base is constructed by transforming the relational data into triples that reference the R&D ontology. In addition, a topic modeling method is employed to extract the relationships between the S&T documents and the topic keywords representing them; this approach extracts relationships and topic keywords based on semantics rather than simple keyword matching. Lastly, we present an experiment on constructing the integrated knowledge base using the lightweight ontology and topic modeling, and introduce the knowledge map services created on top of it.
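
As a rough illustration of the two building blocks described above, the hedged sketch below shows (1) transforming relational project-output rows into RDF triples with rdflib and (2) extracting document-topic relationships with LDA topic modeling via gensim. The URIs, predicates, and toy documents are assumptions for illustration, not the paper's actual schema or data.

```python
# Minimal sketch (assumptions, not the paper's system) of relational
# data -> triples and LDA-based document-topic extraction.
from rdflib import Graph, Namespace, URIRef
from gensim.corpora import Dictionary
from gensim.models import LdaModel

EX = Namespace("http://example.org/rnd/")  # hypothetical namespace

# (1) Relational rows linking a project to its outputs and a paper to an author.
rows = [
    ("project/P001", "hasOutput", "paper/DOC1"),
    ("project/P001", "hasOutput", "patent/PT9"),
    ("paper/DOC1", "hasAuthor", "researcher/R42"),
]
g = Graph()
for subj, pred, obj in rows:
    g.add((URIRef(EX + subj), URIRef(EX + pred), URIRef(EX + obj)))
print(g.serialize(format="turtle"))

# (2) Document-topic relationships via LDA on tokenized abstracts.
docs = [
    ["ontology", "knowledge", "map", "triple", "store"],
    ["topic", "modeling", "lda", "document", "keyword"],
]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

# Link each document to its dominant topic, analogous to the
# document-topic relationships stored in the knowledge map.
for i, bow in enumerate(corpus):
    topic, weight = max(lda.get_document_topics(bow), key=lambda t: t[1])
    print(f"doc{i} -> topic{topic} ({weight:.2f})")
```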

Evaluation of Cryptosporidium Disinfection by Ozone and Ultraviolet Irradiation Using Viability and Infectivity Assays (크립토스포리디움의 활성/감염성 판별법을 이용한 오존 및 자외선 소독능 평가)

  • Park Sang-Jung; Cho Min; Yoon Je-Yong; Jun Yong-Sung; Rim Yeon-Taek; Jin Ing-Nyol; Chung Hyen-Mi
    • Journal of Life Science / v.16 no.3 s.76 / pp.534-539 / 2006
  • In the ozone disinfection unit process, using a piston-type batch reactor with continuous ozone analysis by a flow injection analysis (FIA) system, the CT values for 1-log inactivation of Cryptosporidium parvum by the DAPI/PI and excystation viability assays were 1.8~2.2 mg/L·min at 25°C and 9.1 mg/L·min at 5°C, respectively. At the low temperature, the ozone requirement rises 4~5 times to achieve the same level of disinfection as at room temperature. In a 40 L pilot plant with continuous flow and a constant 5-minute retention time, disinfection effects were evaluated simultaneously with the excystation, DAPI/PI, and cell infection methods. About 0.2-log inactivation of Cryptosporidium by the DAPI/PI and excystation assays, and 1.2-log inactivation by the cell infectivity assay, were estimated at a CT value of about 8 mg/L·min. The difference between the DAPI/PI and excystation assays was not significant when evaluating CT values for ozone in either the piston or the pilot reactor experiments. However, in the pilot study there was a significant difference between the viability assays, which are based on intact cell wall structure and function, and the infectivity assay, which is based on the development of oocysts into sporozoites and merozoites. The developmental stage appears to be more sensitive to ozone oxidation than the cell wall intactness of the oocysts. The difference in CT values estimated by the viability assays between the two studies may partly come from underestimation of the residual ozone concentration due to manual monitoring in the pilot study, or from the difference in reactor scale (50 mL vs 40 L) and type (batch vs continuous). In the UV irradiation process, the adequate UV dose (It value) for 1-log and 2-log disinfection of Cryptosporidium was 25 mW·s/cm² and 50 mW·s/cm², respectively, at 25°C by DAPI/PI. At 5°C, 40 mW·s/cm² was required for 1-log disinfection and 80 mW·s/cm² for 2-log disinfection. The roughly 60% increase in the required dose to compensate for the 20°C decrease in temperature is thought to be due to the low-voltage, low-output lamp emitting weaker UV at lower temperatures.
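
For readers unfamiliar with the two disinfection metrics used above, the short sketch below computes a CT value (residual disinfectant concentration multiplied by contact time) and a log inactivation (log10 of the ratio of oocyst counts before and after treatment); the numbers are hypothetical and chosen only to land near the magnitudes reported in the abstract.

```python
# Illustrative calculation (not from the paper) of CT value and log inactivation.
import math

def ct_value(residual_mg_per_L: float, contact_min: float) -> float:
    """CT in mg/L·min: disinfectant residual times contact time."""
    return residual_mg_per_L * contact_min

def log_inactivation(n0: float, n: float) -> float:
    """Log reduction = log10(N0 / N), where N0 and N are oocyst counts
    before and after disinfection."""
    return math.log10(n0 / n)

# Hypothetical numbers: 1.6 mg/L residual ozone for 5 minutes, and a count
# dropping from 10,000 to 630 oocysts (~1.2-log reduction).
print(ct_value(1.6, 5.0))             # 8.0 mg/L·min
print(log_inactivation(10_000, 630))  # ~1.2 log
```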

Operative Treatment of Congenitally Corrected Transposition of the Great Arteries(CCTGA) (교정형 대혈관 전위증의 수술적 치료)

  • 이정렬; 조광리; 김용진; 노준량; 서결필
    • Journal of Chest Surgery / v.32 no.7 / pp.621-627 / 1999
  • Background: Sixty-five patients with congenitally corrected transposition of the great arteries (CCTGA) indicated for biventricular repair were operated on between 1984 and September 1998. The results of the conventional (classic) connection (LV-PA) and of anatomic repair were compared. Material and Method: A retrospective review was carried out based on the patients' medical records. Operative procedures, complications, and long-term results according to the combined anomalies were analysed. Result: Mean age was 5.5±4.8 years (range, 2 months to 18 years). Thirty-nine patients were male and 26 were female. Situs solitus {S,L,L} was present in 53 and situs inversus {I,D,D} in 12. There was no left ventricular outflow tract obstruction (LVOTO) in 13 (20%) cases; LVOTO resulted from pulmonary stenosis (PS) in 26 (40%) patients and from pulmonary atresia (PA) in 26 (40%) patients. Twenty-five (38.5%) patients had preoperative tricuspid valve regurgitation (TR) greater than mild degree. Twenty-two patients had previously undergone 24 systemic-pulmonary shunts. In the 13 patients without LVOTO, 7 simple closures of VSD or ASD, 3 tricuspid valve replacements (TVR), and 3 anatomic corrections (3 double switch operations: 1 Senning+Rastelli, 1 Senning+REV-type, and 1 Senning+arterial switch operation) were performed. In the 26 patients with CCTGA+VSD or ASD+LVOTO (PS), 24 classic repairs and 2 double switch operations (1 Senning+Rastelli, 1 Mustard+REV-type) were done. In the 26 cases with CCTGA+VSD+LVOTO (PA), 19 classic repairs (18 Rastelli, 1 REV-type) and 7 double switch operations (7 Senning+Rastelli) were done. The degree of tricuspid regurgitation increased during follow-up from 1.3±1.4 to 2.2±1.0 in the classic repair group (p<0.05), but not in the double switch group. Two patients had complete AV block preoperatively, and an additional 7 (10.8%) developed complete AV block after the operation. Other complications were recurrent LVOTO (10), thromboembolism (4), persistent chest tube drainage over 2 weeks (4), chylothorax (3), bleeding (3), acute renal failure (2), and mediastinitis (2). Mean follow-up was 54±49 months (0-177 months). Thirteen patients died after the operation (operative mortality rate 20.0%, 13/65), and there were 3 additional deaths during follow-up (overall mortality 24.6%, 16/65). The operative mortality in patients who underwent anatomic repair was 33.3% (4/12). The actuarial survival rates at 1, 5, and 10 years were 75.0±5.6%, 75.0±5.6%, and 69.2±7.6%. Common causes of death were low cardiac output syndrome (8) and heart failure from TR (5). Conclusion: Although our study could not demonstrate the superiority of either the classic or the anatomic repair, we found that the anatomic repair has the merit of preventing deterioration of tricuspid valve regurgitation. Meticulous patient selection and longer follow-up are needed to establish the selective advantages of both strategies.


The Cox-Maze Procedure for Atrial Fibrillation Concomitant with Mitral Valve Disease (승모판막질환에 동반된 심방세동에서 Cox-Maze 술식)

  • Kim, Ki-Bong; Cho, Kwang-Ree; Ahn, Hyuk
    • Journal of Chest Surgery / v.31 no.10 / pp.939-944 / 1998
  • Background: The surgical results of the Cox-Maze procedure (CMP) for lone atrial fibrillation (AF) have proven to be excellent; however, results for AF associated with mitral valve (MV) disease have been reported to be somewhat inferior. Materials and methods: To assess the efficacy and safety of the CMP as a procedure combined with MV operation, we retrospectively reviewed our experience. Between April 1994 and October 1997, we performed 70 CMPs (23 males, 47 females) concomitantly with MV operations. Results: The etiology of MV disease was rheumatic in 67 and degenerative in 3 cases. The mean duration of AF before surgery was 66±70 months. Fifteen patients had a history of thromboembolic complications, and left atrial thrombi were identified at operation in 24 patients. Twelve cases were reoperations. Aortic cross-clamp (ACC) time was a mean of 151±44 minutes, and cardiopulmonary bypass (CPB) time a mean of 246±65 minutes. Concomitant procedures were mitral valve replacement (MVR) in 19, MVR and aortic valve replacement (AVR) in 14, MVR and tricuspid annuloplasty (TAP) in 8, MVR with AV repair in 3, MV repair in 11, MVR and coronary artery bypass grafting (CABG) in 2, MVR and AVR and CABG in 1, redo-MVR in 10, and redo-MVR and redo-AVR in 2 patients. The hospital mortality rate was 1.4% (1/70). Perioperative recurrence of AF was seen in 44 (62.9%) patients, atrial tachyarrhythmias in 10 (14.3%), low cardiac output syndrome in 4 (5.7%), and postoperative bleeding requiring mediastinal exploration in 4 (5.7%). Other complications were acute renal failure in 2, aggravation of preoperative hemiplegia in 1, and transient delirium in 1 patient. We followed all survivors for an average of 16.4 months (3-44 months). Sinus rhythm was restored in 65 (94.2%) patients; AF was controlled by operation alone in 73.9% and by operation plus medication in 20.3%. Two patients needed permanent pacemaker implantation, one for sick sinus syndrome and the other for tachycardia-bradycardia syndrome. Only two patients remained in AF. We followed our patients with transthoracic echocardiography to assess atrial contractility and other cardiac functions: right atrial contractility could be demonstrated in 92% and left atrial contractility in 53%. We compared our non-redo cases with the redo cases; although the duration of AF was significantly longer in redo cases, there were no differences in ACC time, CPB time, postoperative bleeding amount, or sinus conversion rate. Conclusions: The CMP concomitant with MV operation demonstrated a high sinus conversion rate with acceptable operative risk, even in reoperations.


A Study on Risk Factors for Early Major Morbidity and Mortality in Multiple-valve Operations (중복판막수술후 조기성적에 영향을 미치는 인자에 관한 연구)

  • 한일용; 조용길; 황윤호; 조광현
    • Journal of Chest Surgery / v.31 no.3 / pp.233-241 / 1998
  • To define the risk factors affecting early major morbidity and mortality after multiple-valve operations, preoperative, intraoperative, and postoperative information was retrospectively collected on 124 consecutive patients undergoing multiple-valve operations between October 1985 and July 1996 at the Department of Thoracic and Cardiovascular Surgery of Pusan Paik Hospital. The study population consisted of 53 men and 71 women with a mean age of 37.9±11.5 (mean±SD) years. Using the New York Heart Association (NYHA) classification, 41 patients (33.1%) were in functional class II, 60 (48.4%) in class III, and 20 (16.1%) in class IV preoperatively. Seven patients (5.6%) had undergone previous cardiac operations. Atrial fibrillation was present in 76 patients (61.3%), a history of cerebral embolism in 5 (4.0%), and left atrial thrombus in 13 (10.5%). The overall early mortality rate and postoperative morbidity were 8.1% and 21.8%, respectively. Among the 124 multiple-valve operations, there were 57 (46.0%) combined mitral valve replacement (MVR) and aortic valve replacement (AVR), 48 (38.7%) combined MVR and tricuspid annuloplasty (TVA), 12 (9.7%) combined MVR, AVR, and TVA, 3 (2.4%) combined MVR and aortic valvuloplasty, 2 (1.6%) combined MVR and tricuspid valve replacement, and others. The patients were classified according to postoperative outcome: Group A (27 cases) comprised patients who had early death or major morbidity such as low cardiac output syndrome, mediastinitis, cardiac rupture, ventricular arrhythmia, or sepsis; Group B (97 cases) comprised patients with good postoperative outcomes. The patients were also classified into early deaths and survivors. Comparing group A and group B, there were significant differences in aortic cross-clamping time (ACT; group A 153.4±42.4 minutes, group B 134.0±43.7 minutes, p=0.042), total bypass time (TBT; group A 187.4±65.5 minutes, group B 158.1±50.6 minutes, p=0.038), and NYHA functional class (I: 33.3%, II: 9.7%, III: 20%, IV: 50%, p=0.004). Comparing early deaths (n=10) with survivors (n=114), there were significant differences in age (early death 45.2±8.7 years, survivor 37.2±11.6 years, p=0.036), sex (female 12.7%, male 1.9%, p=0.043), ACT (early death 167.1±38.4 minutes, survivor 135.7±43.7 minutes, p=0.030), and NYHA functional class (I: 0%, II: 4.9%, III: 1.7%, IV: 35%, p=0.001). In conclusion, early major morbidity and mortality were influenced by preoperative clinical status, so earlier surgical intervention should be recommended whenever possible. Improved methods of myocardial protection and operative technique may also reduce the risk in patients undergoing multiple-valve operations. A sketch of the kind of two-group comparison reported here is shown below.
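
As an illustration of such a two-group comparison (the abstract does not state which statistical test was used), the hedged sketch below runs an independent-samples t-test on synthetic aortic cross-clamp times generated only to match the reported group means and standard deviations.

```python
# Illustrative sketch, not the authors' analysis: compare ACT between an
# adverse-outcome group and a good-outcome group with Welch's t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=153.4, scale=42.4, size=27)   # early death / major morbidity
group_b = rng.normal(loc=134.0, scale=43.7, size=97)   # good postoperative outcome

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```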


The study of thermal change by chemoport in radiofrequency hyperthermia (고주파 온열치료시 케모포트의 열적 변화 연구)

  • Lee, Seung Hoon; Lee, Sun Young; Gim, Yang Soo; Kwak, Keun Tak; Yang, Myung Sik; Cha, Seok Yong
    • The Journal of Korean Society for Radiation Therapy / v.27 no.2 / pp.97-106 / 2015
  • Purpose: This study evaluated the thermal changes caused by a chemoport, used for drug administration and blood sampling, during radiofrequency hyperthermia. Materials and Methods: A radiofrequency hyperthermia unit (EHY-2000, Oncotherm KFT, Hungary) with a 20 cm electrode was used. The chemoports currently used for therapy at our hospital are made of plastic, metal-containing epoxy, and titanium; each was inserted into a self-made cylindrical agar phantom (diameter 20 cm, height 20 cm) to measure temperature. Temperatures measured with a thermoscope (TM-100, Oncotherm KFT, Hungary) were compared with simulations in Sim4Life (Ver 2.0, Zurich, Switzerland). Measurement positions were on the central axis of the electrode and 1.5 cm lateral to it, at depths of 0 cm (surface), 0.5 cm, 1.8 cm, and 2.8 cm. Room temperature during measurement was 24.5~25.5°C and humidity 30%~32%. Temperatures were recorded at five-minute intervals for 60 minutes at an output power of 100 W. Results: At 2.8 cm depth on the electrode central axis, the maximum temperatures without a chemoport and with the plastic, epoxy, and titanium chemoports were 39.51°C, 39.11°C, 38.81°C, and 40.64°C, respectively; the simulated values were 42.20°C, 41.50°C, 40.70°C, and 42.50°C. At 2.8 cm depth, 1.5 cm lateral to the central axis, the measured values were 39.37°C, 39.32°C, 39.20°C, and 39.46°C, and the simulated values were 42.00°C, 41.80°C, 41.20°C, and 42.30°C. Conclusion: The thermal changes caused by the radiofrequency electromagnetic field around the chemoport were lower than in the no-chemoport case for the non-conductive plastic and epoxy materials, while the titanium chemoport, made of a conductive material, showed only a slight difference. This is attributed to the metal content and geometry of the chemoport and to the low radiofrequency bandwidth of the equipment used. In other words, because the thermal change is insignificant, use of the chemoports examined in this study does not significantly affect the surrounding tissue, and their hazard need not be considered.


A Conceptual Review of the Transaction Costs within a Distribution Channel (유통경로내의 거래비용에 대한 개념적 고찰)

  • Kwon, Young-Sik; Mun, Jang-Sil
    • Journal of Distribution Science / v.10 no.2 / pp.29-41 / 2012
  • This paper undertakes a conceptual review of transaction costs to broaden understanding of the transaction cost analysis (TCA) approach. More than 40 years have passed since Coase's fundamental insight that transaction, coordination, and contracting costs must be considered explicitly in explaining the extent of vertical integration. Coase (1937) forced economists to identify previously neglected constraints on the trading process to foster efficient intrafirm, rather than interfirm, transactions. The transaction cost approach to the study of economic organization regards transactions as the basic units of analysis and holds that understanding transaction cost economizing is central to organizational study. The approach applies to determining efficient boundaries, as between firms and markets, and to the organization of internal transactions, including the design of employment relations. TCA, developed principally by Oliver Williamson (1975, 1979, 1981a), blends institutional economics, organizational theory, and contract law. Further progress in transaction cost research awaits the identification of the critical dimensions in which transaction costs differ and an examination of the economizing properties of alternative institutional modes for organizing transactions. The crucial investment distinction is: to what degree are transaction-specific (non-marketable) expenses incurred? Unspecialized items pose few hazards, since buyers can turn to alternative sources and suppliers can sell output intended for one order to other buyers. Non-marketability problems arise when the identities of specific parties have important cost-bearing consequences; transactions of this kind are labeled idiosyncratic. The summarized results of the review are as follows. First, firms' distribution decisions often prompt examination of the make-or-buy question: should a marketing activity be performed within the organization by company employees or contracted to an external agent? Second, manufacturers introducing an industrial product to a foreign market face a difficult decision: should the product be marketed primarily by captive agents (the company sales force and distribution division) or by independent intermediaries (outside sales agents and distributors)? Third, the authors develop a theoretical extension to the basic transaction cost model by combining insights from various theories with the TCA approach. Fourth, other such extensions are likely required for the general model to be applied to different channel situations; it is naive to assume the basic model applies across markedly different channel contexts without modifications and extensions. Although this study contributes to scholarly research, it is limited by several factors. First, the theoretical perspective of TCA has attracted considerable recent interest in the area of marketing channels; the analysis aims to match the properties of efficient governance structures with the attributes of the transaction. Second, empirical evidence about TCA's basic propositions is sketchy. Apart from Anderson's (1985) study of the vertical integration of the selling function and John's (1984) study of opportunism by franchised dealers, virtually no marketing studies involving the constructs implicated in the analysis have been reported. We hope, therefore, that further research will clarify the distinctions between the different aspects of specific assets. Another important line of future research is the integration of efficiency-oriented TCA with organizational approaches that emphasize the conceptual definition of specific assets and industry structure. Finally, research on transaction costs, uncertainty, opportunism, and switching costs is critical to future study.


Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan; Jeon, Ho-Cheol; Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.63-77 / 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query, and the most relevant documents do not necessarily appear at the top of the query output. Current search tools also cannot retrieve documents related to a retrieved document from the enormous volume of documents available. The most important problem for many current search systems is therefore to increase the quality of search: to provide related documents and to keep the number of unrelated documents in the results as low as possible. To address this problem, CiteSeer proposed Autonomous Citation Indexing (ACI) of articles on the World Wide Web. A citation index indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this setting, the references contained in academic articles give credit to previous work and provide a link between the citing and cited articles; a citation index indexes the citations that an article makes, linking the article with the cited works. Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independently of language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article). However, CiteSeer cannot index links between articles that researchers do not make explicitly, because it indexes only the links created when researchers cite other articles, and for the same reason it does not scale easily. These problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in a document. A document is transformed into a tabular form in which the extracted predicates are checked against the possible subjects and objects. From this table we build a hierarchical graph of the document and then integrate the graphs of multiple documents. Against the graph of the entire document collection, we calculate the area of each document relative to the integrated documents and mark the relationships among documents by comparing their areas. We also propose a method for structural integration of documents that retrieves documents from the graph, making it easier for the user to find information. We compared the performance of the proposed approach with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, which is about 15% better.
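
For reference, the F-measure used in this evaluation is the harmonic mean of precision and recall over retrieved versus relevant documents; a minimal sketch with made-up document IDs follows.

```python
# Hedged sketch of the F-measure: F1 = 2PR / (P + R), with
# P = |retrieved ∩ relevant| / |retrieved| and R = |retrieved ∩ relevant| / |relevant|.
def f_measure(retrieved: set, relevant: set) -> float:
    hits = len(retrieved & relevant)
    if hits == 0:
        return 0.0
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    return 2 * precision * recall / (precision + recall)

# Purely illustrative document IDs, not results from the paper.
retrieved = {"doc1", "doc2", "doc3", "doc5"}
relevant = {"doc1", "doc3", "doc4"}
print(f"F-measure: {f_measure(retrieved, relevant):.2f}")  # ~0.57
```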

Developmental Plans and Research on Private Security in Korea (한국 민간경비 실태 및 발전방안)

  • Kim, Tea-Hwan; Park, Ok-Cheol
    • Korean Security Journal / no.9 / pp.69-98 / 2005
  • The private security industry was first introduced to Korea via the US Army's security system in the early 1960s. Official police laws were enforced shortly afterwards, in 1973, and private security began to develop in earnest with the passing of the security service industry law in 1976. Korea's private security industry grew rapidly in the 1980s with the support of foreign funds and products, and there are now thought to be approximately 2,000 private security enterprises operating in Korea. Nowadays, however, the majority of these enterprises experience difficulties such as lack of funds, insufficient management, and lack of control over employees; as a result, some find it difficult to avoid low output and bankruptcy. Such enterprises often deal with these problems improperly, for example through excessive price dumping or by hiring employees who lack the right skills or qualifications for the job. A fundamental problem is that it is so easy to enter the private security service market; all of these factors inhibit market growth and impede qualitative development. Based on these observations, I researched this area, analyzing and critiquing the present condition of Korea's private security and presenting a possible development plan by referring to cases from the US and Japan. The research method was to investigate related documentary records and articles and to interview people for the necessary evidence. The theoretical study involved reviewing books and dissertations published inside and outside the country, the complete collection of laws and regulations, internet data, various study reports, and the documentary records and statistical data of institutions such as the National Police Office, the judicial training institute, and private security enterprises. In addition, professionals in charge of practical affairs in the field were consulted in order to overcome the limitations of documentary records. To grasp the problems and difficulties experienced in these enterprises, I interviewed the workers, asking, for example, how they feel in the workplace and which factors impede development. I also interviewed police officers in charge of supervising private security enterprises in an effort to identify the problems and differences of opinion between the domestic private security service and the police. From this investigation and research, I pinpoint the major problems of private security and present a developmental plan. First, the private police law and the private security service law should be unified. Second, a specialty certificate system should be introduced to improve the quality of private security services. Third, a new private security market should be opened up by improving the old system. Fourth, the competitive power of security service enterprises should be built up on the basis of efficient management. Fifth, special marketing strategies are needed to retain customers. Sixth, positive research based on theoretical studies is needed. Seventh, consistent and even training according to effective market demand is needed. Eighth, interrelationships with the police department must be maintained. Ninth, the system of the Korean private security service association must be reinforced. Tenth, a private security laboratory must be established. Implementing these suggestions should improve private security services.
