• Title/Summary/Keyword: 수준 (level)


A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities, and it spends more than 10 trillion won annually on defense improvement. As the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry out the program smoothly. It is even known that many officials only discover related regulations, of which they were unaware, once they push ahead with their work. In addition, statutory statements related to the Defense Acquisition Program tend to cause serious issues even if only a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system to correct this issue in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses a Siamese network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences that are likely to appear in Defense Acquisition Program-related documents and those from related statutory provisions, to determine and classify the risk of illegality, and to make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentence" (sentences described in actual statutes) and "Edited Sentence" (edited sentences derived from an "Original Sentence").
Among the many statutes related to the Defense Acquisition Program, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. "Original Sentence" consists of the 83 main clauses that actually appear in these Acts and are most frequently encountered by working-level officials in their work. For each clause, "Edited Sentence" comprises 30 to 50 similar sentences that are likely to appear, in modified form, in a military report. The edited sentences were produced by modifying the original sentences according to 12 predefined rules, in proportion to the number of original sentences covered by each rule. After conducting 1:1 sentence similarity performance evaluation experiments, it was possible to classify each "Edited Sentence" as legal or illegal with considerable accuracy. The "Edited Sentence" dataset used to train the neural network models reflects a variety of actual statutory statements ("Original Sentence") as characterized by the 12 rules. On the other hand, when trained only on the "Original Sentence" and "Edited Sentence" dataset, the models could not effectively classify other sentences that appear in actual military reports; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, the performance of the models was reassessed on an additional 120 newly written sentences that more closely resemble those in actual military reports while still being associated with the original sentences. We were able to confirm that the models' performance surpassed a certain level even when they were trained merely on the "Original Sentence" and "Edited Sentence" data.
If sufficient model learning is achieved by improving and expanding the full training dataset with sentences that actually appear in reports, the models will be able to better classify sentences from military reports as legal or illegal. Based on the experimental results, this study confirms the feasibility and value of building a "Real-Time Automated Comparison System Between Military Documents and Related Laws". The approach examined in this experiment can identify which specific clause, among the several that appear in the related laws, is most similar to a sentence appearing in a Defense Acquisition Program-related military report. This helps determine whether the contents of the report sentences are at risk of illegality when compared with the law clauses.
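The classification step described above, scoring an edited sentence against its statutory original and flagging risky drift, can be sketched as follows. This is a minimal stand-in that uses a surface string-similarity score in place of the paper's trained Siamese Bi-LSTM network; the sentences and the 0.9 threshold are hypothetical.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Surface similarity in [0, 1]; a crude stand-in for a learned Siamese score."""
    return SequenceMatcher(None, a, b).ratio()

def classify_edit(original: str, edited: str, threshold: float = 0.9) -> str:
    """Flag an edited sentence as a risk when it drifts too far from the statutory original."""
    return "legal" if similarity(original, edited) >= threshold else "risk"

# Hypothetical statutory sentence and two candidate report sentences
original = "The project manager shall report the result to the minister without delay."
edited_ok = "The project manager shall report the results to the minister without delay."
edited_bad = "The contractor may delay the report for up to one year."

print(classify_edit(original, edited_ok))
print(classify_edit(original, edited_bad))
```

In the actual system, the learned similarity function would replace `similarity`, and the threshold would be tuned on labeled "Original Sentence"/"Edited Sentence" pairs.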

A Methodology to Develop a Curriculum based on National Competency Standards - Focused on Methodology for Gap Analysis - (국가직무능력표준(NCS)에 근거한 조경분야 교육과정 개발 방법론 - 갭분석을 중심으로 -)

  • Byeon, Jae-Sang;Ahn, Seong-Ro;Shin, Sang-Hyun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.43 no.1
    • /
    • pp.40-53
    • /
    • 2015
  • To train manpower that meets the requirements of industry, the introduction of the National Qualification Frameworks (hereinafter NQF) based on National Competency Standards (hereinafter NCS) was decided in 2001, led by the Office for Government Policy Coordination. For landscape architecture in the construction field, the "NCS - Landscape Architecture" pilot was developed in 2008 and test-operated for three years starting in 2009. In particular, as the 'realization of a competence-based society, not one based on educational background' was adopted as one of the major projects of the Park Geun-hye government (inaugurated in 2013), the NCS system was constructed on a nationwide scale as a concrete means of realizing this goal. However, because the NCS developed by the state specifies ideal job-performing abilities, it has weaknesses: it cannot reflect differences in student levels between universities, problems in securing equipment and professors, or constraints in the number of existing courses. For a soft landing into a practical curriculum, a clear analysis of the gap between the current curriculum and the NCS must come first. Gap analysis is the initial-stage methodology for reorganizing an existing curriculum into an NCS-based curriculum: based on the ability-unit elements and performance standards of each NCS ability unit, the degree of coincidence (or discrepancy) with the existing departmental curriculum is rated and analyzed on a 1-to-5 Likert scale. Universities wishing to operate NCS in the future can thereby measure the level of coincidence and the gap between their current curriculum and the NCS, securing a basic tool to verify the applicability of NCS and the effectiveness of further development and operation.
The advantages of reorganizing a curriculum through gap analysis are, first, that it provides a quantitative index of the NCS adoption rate for each department, which can be connected to government financial support projects, and second, that it provides an objective standard of sufficiency or insufficiency when reorganizing into an NCS-based curriculum. In other words, when introducing the relevant NCS subdivisions, the insufficient ability units and ability-unit elements can be extracted, and at the same time the supplementary matters for each ability-unit element within existing subjects can be identified, providing direction for detailed class programs and the opening of foundational subjects. The Ministry of Education and the Ministry of Employment and Labor must gather people from industry to actively develop and supply NCS standards at a practical level, so that the requirements of the industrial field are systematically reflected in education, training, and qualification; universities wishing to apply NCS must reorganize their curricula to connect work and qualification based on NCS. To enable this, universities must consider the prospects of the relevant industry and the relationship between faculty resources and local industry in order to clearly select the NCS subdivisions to be applied. Afterwards, gap analysis should be used in the NCS-based curriculum reorganization to establish the direction of reorganization more objectively and rationally, so as to participate efficiently in the process-evaluation-type qualification system.
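The gap-analysis step described above can be sketched numerically. The ability-unit elements, the 1-to-5 Likert ratings, and the sufficiency cutoff below are hypothetical illustrations, not values from the study:

```python
# 1-5 Likert coincidence ratings of the existing curriculum against each
# NCS ability-unit element; the element names and scores are hypothetical.
ratings = {
    "landscape design drawing": 5,
    "planting plan": 4,
    "construction supervision": 2,
    "maintenance cost estimation": 1,
}
TARGET = 4  # minimum coincidence level considered "sufficient" (an assumed cutoff)

# Gap = how far each element falls short of the target level
gaps = {element: TARGET - score for element, score in ratings.items() if score < TARGET}
insufficient = sorted(gaps, key=gaps.get, reverse=True)
print(insufficient)  # elements to supplement, largest gap first
```

Elements with a positive gap are the ones to supplement when reorganizing the curriculum; elements at or above the cutoff are considered already covered.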

THE EFFECT OF INTERMITTENT COMPOSITE CURING ON MARGINAL ADAPTATION (복합레진의 간헐적 광중합 방법이 변연적합도에 미치는 영향)

  • Yun, Yong-Hwan;Park, Sung-Ho
    • Restorative Dentistry and Endodontics
    • /
    • v.32 no.3
    • /
    • pp.248-259
    • /
    • 2007
  • The aim of this research was to study the effect of intermittent polymerization on marginal adaptation by comparing the marginal adaptation of intermittently polymerized composite with that of continuously polymerized composite. The materials used for this study were Pyramid (Bisco Inc., Schaumburg, U.S.A.) and Heliomolar (Ivoclar Vivadent, Liechtenstein). The experiment was carried out in Class II MOD cavities prepared in 48 extracted human maxillary premolars. The samples were divided into 4 groups by light-curing method: group 1, continuous curing (60 s light on with no light off); group 2, intermittent curing (cycles of 3 s with 2 s light on and 1 s light off, for 90 s); group 3, intermittent curing (cycles of 2 s with 1 s light on and 1 s light off, for 120 s); group 4, intermittent curing (cycles of 3 s with 1 s light on and 2 s light off, for 180 s). Consequently, the total amount of light energy radiated was the same in all groups. Each specimen went through thermo-mechanical loading (TML), which consisted of mechanical loading (720,000 cycles, 5.0 kg) at a speed of 120 rpm for 100 hours and thermocycling (6,000 thermocycles in water alternating between $5^{\circ}C$ and $55^{\circ}C$). The continuous margin (CM) (%) of the total margin and of the regional margins, occlusal enamel (OE), vertical enamel (VE), and cervical enamel (CE), was measured before and after TML under a $\times200$ digital light microscope. Three-way ANOVA and Duncan's Multiple Range Test were performed at the 95% level of confidence to test the effect of 3 variables on CM (%) of the total margin: light-curing condition, composite material, and TML. Within each group, one-way ANOVA and Duncan's Multiple Range Test were additionally performed to compare CM (%) of the regions (OE, VE, CE). The results indicated that all three variables were statistically significant (p < 0.05). Before TML, in groups using Pyramid, groups 3 and 4 showed higher CM (%) than groups 1 and 2, and in groups using Heliomolar, groups 3 and 4 showed higher CM (%) than group 1 (p < 0.05). After TML, in both the Pyramid and Heliomolar groups, group 3 showed higher CM (%) than group 1 (p < 0.05). CM (%) of the regions was significantly different in each group (p < 0.05). Before TML, no statistical difference was found between groups within the VE and CE regions. In the OE region, group 4 of Pyramid showed higher CM (%) than group 2, and groups 2 and 4 of Heliomolar showed higher CM (%) than group 1 (p < 0.05). After TML, no statistical difference was found among groups within the VE and CE regions. In the OE region, group 3 of Pyramid showed higher CM (%) than groups 1 and 2, and groups 2, 3, and 4 of Heliomolar showed higher CM (%) than group 1 (p < 0.05). It was concluded that intermittent polymerization may be effective in reducing marginal gap formation.
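As a minimal illustration of the one-way ANOVA used for the regional comparisons above, the F statistic can be computed by hand from group data. The CM (%) values below are invented for illustration and are not the study's measurements:

```python
from statistics import mean

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group mean square / within-group mean square."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative CM (%) values for the four curing groups (not the study's data)
cm = [
    [78, 80, 76, 79],   # group 1: continuous curing
    [80, 82, 79, 81],   # group 2: intermittent, 2 s on / 1 s off
    [88, 90, 87, 89],   # group 3: intermittent, 1 s on / 1 s off
    [87, 89, 86, 90],   # group 4: intermittent, 1 s on / 2 s off
]
print(round(one_way_anova_F(cm), 2))
```

A large F relative to the critical value at the chosen confidence level indicates that at least one group mean differs; post-hoc tests such as Duncan's then locate which groups differ.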

Effect of Hydrogen Peroxide Enema on Recovery of Carbon Monoxide Poisoning (과산화수소 관장이 급성 일산화탄소중독의 회복에 미치는 영향)

  • Park, Won-Kyun;Chae, E-Up
    • The Korean Journal of Physiology
    • /
    • v.20 no.1
    • /
    • pp.53-63
    • /
    • 1986
  • Carbon monoxide (CO) poisoning has been one of the major environmental problems because of the tissue hypoxia, especially brain tissue hypoxia, caused by the great affinity of CO for hemoglobin. Inhalation of pure oxygen $(O_2)$ under high atmospheric pressure has been considered the best treatment of CO poisoning, both by supplying $O_2$ to hypoxic tissues in dissolved form in plasma and by rapidly eliminating CO from carboxyhemoglobin (HbCO). Hydrogen peroxide $(H_2O_2)$ is rapidly decomposed to water and $O_2$ in the presence of catalase in the blood, but intravenous administration of $H_2O_2$ is hazardous because of the formation of methemoglobin and air embolism. However, it was reported that an enema of $H_2O_2$ solution below 0.75% could continuously supply $O_2$ to hypoxic tissues without the hazards mentioned above. This study was performed to evaluate the effect of $H_2O_2$ enema on the elimination of CO from HbCO during recovery from acute CO poisoning. Rabbits weighing about 2.0 kg were exposed to a CO gas mixture in room air for 30 minutes. After the acute CO poisoning, 30 rabbits were divided into three groups according to the recovery condition. The first group was exposed to room air, and the second group inhaled 100% $O_2$ at 1 atmosphere. The third group was administered 10 ml of 0.5% $H_2O_2$ solution per kg body weight by enema immediately after CO poisoning and was exposed to room air during the recovery period. Arterial blood was sampled before and after CO poisoning and at 15, 30, 60 and 90 minutes of the recovery period. Blood pH, $Pco_2$ and $Po_2$ were measured anaerobically with a blood gas analyzer, and the saturation percentage of HbCO was measured by the spectrophotometric method. The effect of $H_2O_2$ enema on recovery from acute CO poisoning was observed and compared with the room air group and the 100% $O_2$ inhalation group.
The results obtained from the experiment are as follows. The pH of arterial blood was significantly decreased after CO poisoning and through the first 15 minutes of the recovery period in all groups. Thereafter it slowly increased toward the pre-poisoning level, but the recovery of pH in the $H_2O_2$ enema group was more delayed than in the other groups during the recovery period. $Paco_2$ was significantly decreased after CO poisoning in all groups. During the recovery period, $Paco_2$ of the room air group recovered completely to the pre-poisoning level, but that of the 100% $O_2$ inhalation group and the $H_2O_2$ enema group had not recovered by 90 minutes of the recovery period. $Pao_2$ was slightly decreased after CO poisoning. During the recovery period, it was markedly increased in the first 15 minutes and remained above the pre-poisoning level in all groups. Furthermore, $Pao_2$ of the $H_2O_2$ enema group was 102 to 107 mmHg, about 10 mmHg higher than that of the room air group during the recovery period. The saturation percentage of HbCO increased to the range of 54 to 72 percent after CO poisoning and in general diminished during the recovery period. However, in the $H_2O_2$ enema group the diminution of the HbCO saturation percentage was generally faster than in the 100% $O_2$ inhalation group and the room air group, and its diminution in the 100% $O_2$ inhalation group was also slightly faster than in the room air group in the relatively later part of the recovery period. In conclusion, an enema of 0.5% $H_2O_2$ solution seems to facilitate the elimination of CO from HbCO in the blood and simultaneously increase $Pao_2$ during recovery from acute CO poisoning.
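The washout of CO from HbCO under different recovery conditions, as compared above, is commonly modeled as first-order decay with a shorter half-life at higher inspired O2. The sketch below uses textbook-order half-lives (roughly 320 min on room air, 80 min on 100% O2 at 1 atmosphere); these are illustrative assumptions, not values measured in this study:

```python
import math

def hbco_percent(initial: float, t_min: float, half_life_min: float) -> float:
    """First-order HbCO washout: saturation (%) remaining after t minutes."""
    return initial * math.exp(-math.log(2) * t_min / half_life_min)

initial = 60.0  # % HbCO right after poisoning (illustrative, within the 54-72% range reported)
for label, t_half in [("room air", 320.0), ("100% O2", 80.0)]:
    print(label, round(hbco_percent(initial, 90.0, t_half), 1))
```

Under this model, a faster observed fall in HbCO saturation, as reported for the $H_2O_2$ enema group, corresponds to a shorter effective half-life of elimination.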


Light and Electron Microscopy of Gill and Kidney on Adaptation of Tilapia(Oreochromis niloticus) in the Various Salinities (틸라피아의 해수순치시(海水馴致時) 아가미와 신장(腎臟)의 광학(光學) 및 전자현미경적(電子顯微鏡的) 관찰(觀察))

  • Yoon, Jong-Man;Cho, Kang-Yong;Park, Hong-Yang
    • Applied Microscopy
    • /
    • v.23 no.2
    • /
    • pp.27-40
    • /
    • 1993
  • This study was undertaken to examine the light microscopic and ultrastructural changes of the gill and kidney of female tilapia (Oreochromis niloticus) adapted to 0‰, 10‰, 20‰, and 30‰ salt concentrations, respectively, by light, scanning, and transmission electron microscopy. The results obtained in these experiments were summarized as follows. Gill chloride cell hyperplasia, gill lamellar epithelial separation, kidney glomerular shrinkage, blood congestion in the kidneys, and deposition of hyaline droplets in kidney glomeruli and tubules were the histological alterations observed in Oreochromis niloticus. The incidence and severity of gill chloride cell hyperplasia increased rapidly with increasing salinity, and the number of chloride cells in the gill lamellae increased rapidly in response to high external NaCl concentrations. Scanning electron microscopy (SEM) indicated that the gill secondary lamellae of tilapia exposed to seawater were characterized by rough convoluted surfaces during adaptation. Transmission electron microscopy (TEM) indicated that mitochondria in chloride cells exposed to seawater were both large and elongate and contained well-developed cristae. TEM also showed an increased number of chloride cells in seawater-exposed fish. The presence of two mitochondria-rich cell types is discussed with regard to their possible role in the hypoosmoregulatory changes that occur during seawater adaptation. Most Oreochromis niloticus adapted to seawater had an occasional glomerulus completely filling Bowman's capsule in the kidney; glomerular shrinkage occurred more frequently in the kidney tissues of individuals living in 10‰, 20‰, and 30‰ seawater than in those living in 0‰ freshwater, and blood congestion was more severe in individuals living in 20‰ and 30‰ seawater than in those living in 10‰ seawater.
There were decreases in the glomerular area and in the nuclear area of the main segments of the nephron, and the nuclear areas of the nephron cells in seawater-adapted tilapia were smaller than those of freshwater-adapted fish. Our findings demonstrated that Oreochromis niloticus tolerated a moderately saline environment, and that the increase in body weight of fish living in 30‰ was relatively greater than that of fish living in 10‰ in spite of the histopathological changes.


Effects of Recipient Oocytes and Electric Stimulation Condition on In Vitro Development of Cloned Embryos after Interspecies Nuclear Transfer with Caprine Somatic Cell (수핵난자와 전기적 융합조건이 산양의 이종간 복제수정란의 체외발달에 미치는 영향)

  • 이명열;박희성
    • Reproductive and Developmental Biology
    • /
    • v.28 no.1
    • /
    • pp.21-27
    • /
    • 2004
  • This study was conducted to investigate the developmental ability of caprine embryos after interspecies somatic cell nuclear transfer. Recipient bovine and porcine oocytes were obtained from a slaughterhouse and matured in vitro according to established protocols. Donor cells were obtained from a caprine ear-skin biopsy, digested with 0.25% trypsin-EDTA in PBS, and primary fibroblast cultures were established in TCM-199 with 10% FBS. The matured oocytes were dipped in D-PBS plus 10% FBS, 7.5 $\mu$g/ml cytochalasin B, and 0.05 M sucrose. Enucleation was accomplished by aspirating the first polar body and the partial cytoplasm containing the metaphase II chromosomes, using a micropipette with an outer diameter of 20∼30 $\mu$m. A single donor cell was transferred into the perivitelline space of each enucleated oocyte. The reconstructed oocytes were electrofused in 0.3 M mannitol fusion medium. After electrofusion, embryos were activated by electric stimulation. Interspecies nuclear transfer embryos with bovine cytoplasts were cultured in TCM-199 medium supplemented with 10% FBS, including bovine oviduct epithelial cells, for 7∼9 days, and those with porcine cytoplasts were cultured in NCSU-23 medium supplemented with 10% FBS for 6∼8 days at $39^{\circ}C$, 5% $CO_2$ in air. In interspecies nuclear transfer with recipient bovine oocytes, fusion was performed at field strengths of 1.95 kV/cm and 2.10 kV/cm. There was no significant difference between the two field strengths in fusion rate (47.7% and 44.6%) or cleavage rate (41.9% and 54.5%). Using field strengths of 1.95 kV/cm and 2.10 kV/cm in caprine-porcine NT oocytes, there was also no significant difference between the two treatments in fusion rate (51.3% and 46.1%) or cleavage rate (75.0% and 84.9%). The caprine-bovine NT oocyte fusion rate was lower (P<0.05) with 1 pulse of 60 $\mu$sec (19.3%) than with 1 pulse of 30 $\mu$sec (50.8%) or 2 pulses of 30 $\mu$sec (31.0%).
The cleavage rate was higher (P<0.05) with 1 pulse of 30 $\mu$sec (53.3%) and 2 pulses of 30 $\mu$sec (50.0%) than with 1 pulse of 60 $\mu$sec (18.2%). The caprine-porcine NT oocyte fusion rate was 48.1% with 1 pulse of 30 $\mu$sec, 45.2% with 2 pulses of 30 $\mu$sec, and 48.6% with 1 pulse of 60 $\mu$sec. The cleavage rate was higher (P<0.05) with 1 pulse of 30 $\mu$sec (78.4%) and 1 pulse of 60 $\mu$sec (79.4%) than with 2 pulses of 30 $\mu$sec (53.6%). In caprine-bovine NT embryos, the developmental rates to the morula and blastocyst stages were 22.6% for interspecies nuclear transfer and 30.6% for parthenotes, which were not significantly different. The developmental rate to the morula and blastocyst stages of caprine-porcine NT embryos was lower (P<0.05) for interspecies nuclear transfer (5.1%) than for parthenotes (37.4%).
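Comparisons of proportions such as the fusion rates above are typically tested with a chi-square test on the underlying counts. The sketch below hand-rolls the Pearson statistic for a 2x2 table; the counts are hypothetical, chosen only to match the scale of the reported 50.8% vs 19.3% rates, and are not the study's raw data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]] (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Fused vs not-fused counts under two pulse settings (illustrative counts:
# 30/59 ~ 50.8% fusion vs 11/57 ~ 19.3% fusion)
stat = chi_square_2x2(30, 29, 11, 46)
print(round(stat, 2))
```

A statistic above the 3.84 critical value (1 degree of freedom, alpha = 0.05) would indicate a significant difference between the two fusion rates, consistent with the reported P<0.05.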

Correlation analysis of radiation therapy position and dose factors for left breast cancer (좌측 유방암의 방사선치료 자세와 선량인자의 상관관계 분석)

  • Jeon, Jaewan;Park, Cheolwoo;Hong, Jongsu;Jin, Seongjin;Kang, Junghun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.29 no.1
    • /
    • pp.37-48
    • /
    • 2017
  • Purpose: One of the most basic requirements of radiation therapy is to prevent unnecessary exposure of normal tissue. In radiation therapy for breast cancer, it is important to evaluate the dose delivered to the lung and heart. Therefore, this study compares the dose factors of normal tissues according to radiation treatment position and seeks an effective radiation treatment for breast cancer through correlation analysis. Materials and Methods: Computed tomography was conducted on 30 patients with left breast cancer in the supine and prone positions. Computerized treatment plans were established in the Eclipse Treatment Planning System (ver. 11). Using DVHs, the dose delivered to normal tissue was compared by position. Based on the results, the dose factors of each normal tissue were analyzed using SPSS (ver. 18), and the associations were examined through correlation analysis between variables and independent-samples tests. Finally, the HI and CI values in the supine and prone positions were compared using MIRADA RTx (ver. ad 1.6). Results: In the computerized treatment plans for breast cancer in the supine position, the lung doses were V20, $16.5{\pm}2.6%$; V30, $13.8{\pm}2.2%$; and mean dose, $779.1{\pm}135.9cGy$ (absolute value). In the prone position, the corresponding values were $3.1{\pm}2.2%$, $1.8{\pm}1.7%$, and $241.4{\pm}138.3cGy$. The prone position showed a lower dose overall; the average radiation dose delivered was 537.7 cGy less. For the heart, V30 was $8.1{\pm}2.6%$ and $5.1{\pm}2.5%$, and the mean dose was $594.9{\pm}225.3$ and $408{\pm}183.6cGy$, in the supine and prone positions respectively. In the statistical analysis, the Cronbach's alpha reliability index was 0.563. In the correlation analysis between variables, the correlation between position and the lung dose factors was about 0.89 or more, which indicates a high correlation. For the heart, on the other hand, the correlation was lower for V30 (0.488) and mean dose (0.418).
Finally, in the independent-samples t-test, the differences by position in the dose factors of the lung and heart were both significant at the 99% confidence level. Conclusion: Radiation therapy currently benefits from state-of-the-art linear accelerators and a variety of treatment planning technologies, developments premised on protecting normal tissue around the PTV. Of course, treating a breast cancer patient in the prone position takes more time and raises problems with set-up reproducibility. Nevertheless, as the experimental results show, the prone position can reduce the dose delivered to the lungs and the heart. In conclusion, given sufficient treatment time and correct position verification, radiation treatment in the prone position will be more effective for the patient.
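As a quick check of the position effect reported above, a t statistic can be computed directly from the abstract's summary statistics for mean lung dose. This is a sketch using Welch's formulation from means, SDs, and group sizes; the paper itself ran an independent-samples t-test in SPSS:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from group means, standard deviations, and sizes."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Mean lung dose (cGy) reported in the abstract:
# supine 779.1 +/- 135.9, prone 241.4 +/- 138.3, with 30 patients per position.
t = welch_t(779.1, 135.9, 30, 241.4, 138.3, 30)
print(round(t, 1))
```

A t statistic this large, far beyond the critical value near 2.7 for the 99% confidence level at these sample sizes, is consistent with the abstract's finding that the position effect on lung dose is highly significant.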


A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik;Kwon, Jong Gu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.125-140
    • /
    • 2013
  • We call a data set in which the number of records belonging to a certain class far outnumbers the number of records belonging to the other class an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When we evaluate the performance of a certain classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records account for the majority class, and 'churn' records account for the minority class. Sensitivity measures the proportion of actual retentions which are correctly identified as such. Specificity measures the proportion of churns which are correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to the low value of specificity. Many previous studies on imbalanced data sets employed the 'oversampling' technique, where members of the minority class are sampled more than those of the majority class in order to make a relatively balanced data set. When a classification model is constructed using this oversampled balanced data set, specificity can be improved but sensitivity will be decreased. In this research, we developed a hybrid model of support vector machine (SVM), artificial neural network (ANN) and decision tree that improves specificity while maintaining sensitivity. We named this hybrid model the 'hybrid SVM model.' The process of construction and prediction of our hybrid SVM model is as follows. By oversampling from the original imbalanced data set, a balanced data set is prepared. SVM_I model and ANN_I model are constructed using the imbalanced data set, and SVM_B model is constructed using the balanced data set. SVM_I model is superior in sensitivity and SVM_B model is superior in specificity. For a record on which both SVM_I model and SVM_B model make the same prediction, that prediction becomes the final solution.
If they make different prediction, the final solution is determined by the discrimination rules obtained by ANN and decision tree. For a record on which SVM_I model and SVM_B model make different predictions, a decision tree model is constructed using ANN_I output value as input and actual retention or churn as target. We obtained the following two discrimination rules: 'IF ANN_I output value <0.285, THEN Final Solution = Retention' and 'IF ANN_I output value ${\geq}0.285$, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research. The result we present in this research is the structure or framework of our hybrid SVM model, not a specific threshold value such as 0.285. Therefore, the threshold value in the above discrimination rules can be changed to any value depending on the data. In order to evaluate the performance of our hybrid SVM model, we used the 'churn data set' in UCI Machine Learning Repository, that consists of 85% retention customers and 15% churn customers. Accuracy of the hybrid SVM model is 91.08% that is better than that of SVM_I model or SVM_B model. The points worth noticing here are its sensitivity, 95.02%, and specificity, 69.24%. The sensitivity of SVM_I model is 94.65%, and the specificity of SVM_B model is 67.00%. Therefore the hybrid SVM model developed in this research improves the specificity of SVM_B model while maintaining the sensitivity of SVM_I model.
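The combination logic of the hybrid SVM model described above can be sketched directly. The function below is a schematic reconstruction from the abstract, not the authors' code; as the abstract notes, the 0.285 threshold is the value optimized for their data and should be re-tuned for any other data set:

```python
def hybrid_predict(svm_i_pred: str, svm_b_pred: str, ann_i_output: float,
                   threshold: float = 0.285) -> str:
    """Combine SVM_I and SVM_B; on disagreement, fall back to the ANN-derived rule.

    Rule from the abstract: IF ANN_I output < threshold THEN Retention, ELSE Churn.
    """
    if svm_i_pred == svm_b_pred:
        # Agreement between the two SVMs becomes the final solution
        return svm_i_pred
    return "Retention" if ann_i_output < threshold else "Churn"

print(hybrid_predict("Churn", "Churn", 0.9))       # agreement wins regardless of ANN output
print(hybrid_predict("Retention", "Churn", 0.10))  # disagreement, low ANN output
print(hybrid_predict("Retention", "Churn", 0.52))  # disagreement, high ANN output
```

In the full model, `svm_i_pred`, `svm_b_pred`, and `ann_i_output` would come from the trained SVM_I, SVM_B, and ANN_I models respectively; only the arbitration logic is shown here.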

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.35-52
    • /
    • 2019
  • As public services are provided in various forms, including e-government, public demand for public service quality is increasing. Although continuous measurement and improvement of the quality of public services is needed, traditional surveys are costly and time-consuming and have limitations. Therefore, there is a need for an analytical technique that can measure the quality of public services quickly and accurately at any time, based on the data generated by the services themselves. In this study, we analyzed the quality of public services from data using process mining techniques, focusing on the building licensing complaint service of N city. This service was chosen because it can secure the data necessary for the analysis, and because the approach can be spread to other institutions through public service quality management. This study conducted process mining on a total of 3,678 building licensing complaints in N city over two years from January 2014, and identified the process maps and the departments with high frequency and long processing times. According to the analysis, some departments were congested at certain points in time while others handled relatively few cases, and there was reasonable suspicion that an increase in the number of complaints lengthens the time required to complete them. The time required to complete a complaint varied from the same day to a year and 146 days. The cumulative frequency of the top four departments, the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division, exceeded 50%, and the cumulative frequency of the top nine departments exceeded 70%. The heavily used departments were few, and the load among departments was highly unbalanced. Most complaint services follow a variety of different process patterns.
The analysis shows that the number of 'complement' decisions has the greatest impact on the duration of a complaint. This is interpreted as requiring a lengthy period until the completion of the entire complaint, because a 'complement' decision entails a physical period in which the complainant supplements the documents and submits them again. The overall processing time of complaints could therefore be drastically reduced if applicants prepared their filings thoroughly in advance, drawing on the causes and solutions of 'complement' decisions in other complaints. By clarifying and disclosing these causes and solutions, one of the important kinds of data in the system, complainants can prepare in advance and be reasonably confident that documents prepared from the disclosed information will pass, making complaint processing sufficiently predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by eliminating the need for renegotiation or duplicated tasks. The results of this study can be used to find departments with a high burden of complaints at certain points in time and to manage workforce allocation between departments flexibly. In addition, by analyzing the patterns of the departments participating in consultation according to the characteristics of the complaints, the results can be used for automation or recommendation when selecting a consultation department. Furthermore, by using the various data generated during the complaint process together with machine learning techniques, the patterns of the complaint process can be found, and applying such algorithms to the system can support the automation and intelligence of complaint processing.
This study is expected to be used to suggest future improvements in public service quality through process mining analysis of civil services.
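The core measurements described above, per-case throughput time and per-department consultation load, can be sketched from an event log in a few lines of code. The following is a minimal illustration over a hypothetical toy log; the case IDs, dates, and log layout are assumptions for illustration, not the study's actual data:

```python
from collections import Counter
from datetime import datetime

# Toy event log: (case_id, department, timestamp) rows — hypothetical data
# standing in for the N-city building-license complaint log.
events = [
    ("c1", "Sewage Treatment Division", "2014-01-06"),
    ("c1", "Waterworks Division",       "2014-01-20"),
    ("c2", "Urban Design Division",     "2014-02-03"),
    ("c2", "Green Growth Division",     "2014-02-03"),
    ("c2", "Sewage Treatment Division", "2014-03-10"),
]

def throughput_days(events):
    """Per-case processing time: last event minus first event, in days."""
    spans = {}
    for case, _, ts in events:
        t = datetime.strptime(ts, "%Y-%m-%d")
        lo, hi = spans.get(case, (t, t))
        spans[case] = (min(lo, t), max(hi, t))
    return {case: (hi - lo).days for case, (lo, hi) in spans.items()}

def department_load(events):
    """How often each department appears in the log (consultation load)."""
    return Counter(dept for _, dept, _ in events)

print(throughput_days(events))              # {'c1': 14, 'c2': 35}
print(department_load(events).most_common(2))  # busiest departments first
```

On a real log, the same per-case spans and department counts feed directly into the frequency and processing-time views that a process-mining tool builds.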

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.79-104
    • /
    • 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform excellently when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of deep learning for text and images, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates a relevant caption for a given image by handling image comprehension and text generation simultaneously. Despite its high entry barrier, which requires analysts to process both image and text data, image captioning has established itself as one of the key fields in A.I. research owing to its wide applicability, and many studies have been conducted to improve its performance in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in it more sophisticatedly. Despite these efforts, it is difficult to find studies that interpret images from the perspective of domain experts rather than that of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person viewing it, and the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic, general perspective, that is, by identifying its constituent objects and their relationships. 
By contrast, domain experts tend to recognize an image by focusing on the specific elements needed to interpret it in light of their expertise. This implies that the meaningful parts of an image differ with the viewer's perspective even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method that generates domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, expertise in the field is transplanted through transfer learning with a small amount of expertise data. However, a naive application of transfer learning to expertise data can introduce another problem: learning simultaneously from captions with various characteristics can cause so-called 'inter-observation interference', which makes it difficult to learn each characteristic point of view purely. When learning from a vast amount of data, most of this interference is self-purified and has little impact on the results; in fine-tuning on a small amount of data, however, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each character. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 pairs of images and expertise captions were created and used for the expertise-transplantation experiments. 
The experiments confirmed that captions generated by the proposed methodology reflect the perspective of the transplanted expertise, whereas captions generated by learning on general data alone contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation and present a method that uses transfer learning to generate captions specialized for a specific domain. By applying the proposed methodology to expertise transplantation in various fields, we expect much follow-up research to address the lack of expertise data and to improve the performance of image captioning.
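The idea behind Character-Independent Transfer-learning, freezing a representation pre-trained on large general data and then fine-tuning a separate small head independently for each caption character, can be illustrated with a deliberately tiny numerical sketch. Everything below (the one-parameter linear "encoder", the per-style heads, the data) is a hypothetical stand-in for illustration, not the paper's actual captioning model:

```python
def features(x):
    # Stand-in for the frozen, pre-trained encoder (e.g. trained on a
    # large general corpus such as MSCOCO); it is never updated below.
    return x * 0.5

def finetune_head(data, lr=0.1, steps=200):
    # Fit y ~ w * features(x) by per-sample gradient descent, using one
    # caption character's (style's) small dataset only.
    w = 0.0
    for _ in range(steps):
        for x, y in data:
            pred = w * features(x)
            w -= lr * (pred - y) * features(x)
    return w

# Tiny "expertise" datasets, one per caption character (style).
style_a = [(2.0, 2.0), (4.0, 4.0)]  # style A follows y = x   -> head w ~ 2
style_b = [(2.0, 4.0), (4.0, 8.0)]  # style B follows y = 2x  -> head w ~ 4

# Character-independent fine-tuning: one head per style, trained alone.
heads = {s: finetune_head(d) for s, d in [("A", style_a), ("B", style_b)]}

# Naive joint fine-tuning: one head sees both styles mixed together.
joint = finetune_head(style_a + style_b)

print(heads)  # each independent head recovers its own style's mapping
print(joint)  # the joint head settles between the two styles
```

Fine-tuning the heads independently recovers each style's own mapping, while the single jointly fine-tuned head lands between the two styles, a toy analogue of the inter-observation interference that motivates the character-independent scheme.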