• Title/Summary/Keyword: Systems Improvement


Automatic Target Recognition Study using Knowledge Graph and Deep Learning Models for Text and Image data (지식 그래프와 딥러닝 모델 기반 텍스트와 이미지 데이터를 활용한 자동 표적 인식 방법 연구)

  • Kim, Jongmo;Lee, Jeongbin;Jeon, Hocheol;Sohn, Mye
    • Journal of Internet Computing and Services
    • /
    • v.23 no.5
    • /
    • pp.145-154
    • /
    • 2022
  • Automatic Target Recognition (ATR) technology is emerging as a core technology of Future Combat Systems (FCS). Conventional ATR is performed on IMINT (imagery intelligence) collected from SAR sensors, using various image-based deep learning models. However, even though the development of IT and sensing technology has expanded ATR-related data to HUMINT (human intelligence) and SIGINT (signals intelligence), ATR still relies on image-oriented IMINT data alone. In complex and diversified battlefield situations, it is difficult to guarantee high ATR accuracy and generalization performance with image data alone. Therefore, in this paper we propose a knowledge graph-based ATR method that can utilize image and text data simultaneously. The main idea of the knowledge graph and deep model-based ATR method is to convert ATR images and text into graphs according to the characteristics of each data type, align them to the knowledge graph, and connect the heterogeneous ATR data through the knowledge graph. To convert an ATR image into a graph, an object-tag graph consisting of object tags as nodes is generated from the image using a pre-trained image object recognition model and the vocabulary of the knowledge graph. The ATR text, on the other hand, uses a pre-trained language model, TF-IDF, a co-occurrence word graph, and the vocabulary of the knowledge graph to generate a word graph composed of nodes carrying key ATR vocabulary. The two generated graphs are connected to the knowledge graph using an entity alignment model to improve ATR performance on images and texts. To prove the superiority of the proposed method, 227 web documents and 61,714 RDF triples from DBpedia were collected, and comparison experiments were performed on precision, recall, and F1-score from the perspective of entity alignment.
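The word-graph construction described for the ATR text (TF-IDF keyword selection plus co-occurrence edges) can be sketched as follows; the whitespace tokenizer, window size, keyword cutoff, and IDF smoothing are illustrative assumptions rather than the authors' exact pipeline:

```python
from collections import Counter
import math

def build_word_graph(docs, top_k=5, window=3):
    """Build a co-occurrence word graph from raw documents.

    Nodes are the top_k TF-IDF terms of each document; an edge links two
    keyword terms that appear within `window` tokens of each other.
    """
    tokenized = [d.lower().split() for d in docs]
    n_docs = len(tokenized)
    df = Counter()                        # document frequency per term
    for toks in tokenized:
        df.update(set(toks))

    nodes, edges = set(), Counter()
    for toks in tokenized:
        tf = Counter(toks)
        # smoothed TF-IDF score per term in this document (assumed variant)
        tfidf = {t: tf[t] * math.log(n_docs / df[t] + 1) for t in tf}
        keywords = {t for t, _ in sorted(tfidf.items(),
                                         key=lambda kv: -kv[1])[:top_k]}
        nodes |= keywords
        # co-occurrence edges within a sliding window
        for i, t in enumerate(toks):
            if t not in keywords:
                continue
            for u in toks[i + 1:i + window]:
                if u in keywords and u != t:
                    edges[tuple(sorted((t, u)))] += 1
    return nodes, dict(edges)
```

The resulting nodes can then be matched against the knowledge-graph vocabulary before the entity alignment step.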

Establishing Optimal Conditions for LED-Based Speed Breeding System in Soybean [Glycine max (L.) Merr.] (LED 기반 콩[Glycine max (L.) Merr.] 세대단축 시스템 구축을 위한 조건 설정)

  • Gyu Tae Park;Ji-Hyun Bae;Ju Seok Lee;Soo-Kwon Park;Dool-Yi Kim;Jung-Kyung Moon;Mi-Suk Seo
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.68 no.4
    • /
    • pp.304-312
    • /
    • 2023
  • Plant breeding is a time-consuming process, mainly due to the limited number of generations that can be advanced per year. A speed breeding system using LED light sources has been applied to accelerate generational progression in various crops, but detailed protocols applicable to soybeans are still insufficient. In this study, we report optimized protocols for a speed breeding system, established using 12 soybean varieties of various maturity ecotypes. We investigated the effects of two light qualities (RGB ratios), three levels of light intensity (PPFD), and two soil conditions on the flowering time and development of soybeans. Our results showed that a higher proportion of red in the light spectrum delayed flowering. Furthermore, as light intensity increased, flowering time, average internode length, and plant height decreased, while the numbers of nodes, branches, and pods increased. Compared to agronomic soil, horticultural soil increased the numbers of nodes, branches, and pods by more than 50%. Consequently, the optimal conditions were determined as follows: a 10-hour short-day photoperiod, an equal RGB ratio (1:1:1), light intensity exceeding 1,300 PPFD, and the use of horticultural soil. Under these conditions, the average flowering time was 27.3±2.48 days, with an average seed yield of 7.9±2.67. The speed breeding system thus reduced flowering time by more than 40 days compared to the average flowering time of Korean soybean resources (approximately 70 days). By using a controlled growth chamber unaffected by external environmental conditions, up to 6 generations can be achieved per year. The use of LED illumination and streamlined facilities further contributes to cost savings. This study highlights the substantial potential of integrating modern crop breeding techniques, such as digital breeding and gene editing, with generation-shortening systems to accelerate crop improvement.
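The generations-per-year figure follows from simple arithmetic; the ~60-day seed-to-seed cycle below is an assumption (the reported 27.3 days to flowering plus pod fill, drying, and replanting), since the abstract does not state the full cycle length:

```python
DAYS_PER_YEAR = 365

def generations_per_year(cycle_days):
    """Whole generations achievable per year for a given seed-to-seed cycle."""
    return DAYS_PER_YEAR // cycle_days

# Assumed ~60-day speed breeding cycle -> 6 generations, matching the paper's
# "up to 6 generations per year"; a ~120-day conventional cycle gives far fewer.
speed_breeding = generations_per_year(60)
```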

Herbal Medicines for the Improvement of Immune Function in Patients with Cancer: A Protocol for Systematic Review and Meta-Analysis (한약의 암 환자에 대한 면역기능 개선 효과 : 체계적 문헌고찰과 메타분석 프로토콜)

  • Young-Min Cho;Soobin Jang;Mi Mi Ko;Han-eum Joo;Hwa-Seung Yoo;Mi-Kyung Jeong
    • The Journal of Internal Korean Medicine
    • /
    • v.45 no.3
    • /
    • pp.335-341
    • /
    • 2024
  • Objectives: Patients with cancer eventually fail to respond to therapy when malignant cells develop effective ways to evade immunosurveillance. Conventional cancer treatments, such as radiation therapy and chemotherapy, aim to cure the disease or prolong the patient's life, but their toxicity and side effects limit their efficacy. Herbal medicine is a typical complementary and integrative form of medicine for cancer treatment in Asia. This protocol describes a systematic review and meta-analysis to evaluate the effectiveness of herbal medicines in improving the immune function of patients with cancer. Methods: The following electronic databases will be searched: MEDLINE via PubMed, EMBASE via Elsevier, Cochrane Central Register of Controlled Trials, China National Knowledge Infrastructure (CNKI), and Korean databases including Regional Information Sharing Systems (RISS), National Digital Science Library (NDSL), and Oriental Medicine Advanced Searching Integrated System (OASIS). Prospective randomized controlled trials that evaluate the effectiveness of herbal medicines on immune function in patients with cancer will be included in this review. All outcomes related to the immune function of patients with cancer (e.g., CD3, CD4, CD8, CD4/CD8 ratio, CD19 (B cells), dendritic cells (CD11), CD56 (NK cells), and macrophages) will be included. Results: This review is expected to provide data on the effectiveness of herbal medicines in improving immune function in patients with cancer. Conclusion: This systematic review will help patients and clinicians establish new management options for cancer treatment.

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and also differ from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system is needed that SW companies, SW job seekers, and job sites can all understand. The purpose of this study is to establish a standard job classification system that reflects market demand, by analyzing the SQF against the job posting information of major job sites and the NCS (National Competency Standards). To this end, association analysis is conducted between the occupations of major job sites, and association rules between the SQF and those occupations are derived. Using these rules, we propose an intelligent job classification system based on data, mapping the job classification systems of the major job sites to the SQF. First, major job sites are selected to obtain information on the job classification systems of the SW market. We then identify ways to collect job information from each site and collect the data through open APIs. Focusing on the relationships within the data, only job postings listed on multiple job sites at the same time are retained; the rest are deleted. Next, the job classification systems of the sites are mapped to one another using the association rules derived from the association analysis. We complete this mapping between the market systems, discuss it with experts, further map it to the SQF, and finally propose a new job classification system. As a result, more than 30,000 job postings were collected in XML format through the open APIs of 'WORKNET,' 'JOBKOREA,' and 'saramin,' the main job sites in Korea. After filtering down to about 900 job postings simultaneously listed on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern-mining method.
Based on the 800 derived rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth classification levels. In the new job taxonomy, the first primary class, the job system related to IT consulting, computer systems, networks, and security, consisted of three secondary, five tertiary, and five fourth-level classifications. The second primary class, the job system related to databases and system operation, consisted of three secondary, three tertiary, and four fourth-level classifications. The third primary class, covering web planning, web programming, web design, and games, was composed of four secondary, nine tertiary, and two fourth-level classifications. The last primary class, the job system related to ICT management and computer and communication engineering technology, consisted of three secondary and six tertiary classifications. Notably, the new job classification system has a relatively flexible classification depth, unlike existing systems: WORKNET divides jobs into three levels, JOBKOREA into two levels with finer jobs expressed as keywords, and saramin likewise into two levels with keyword-form subdivisions. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs. In this system, some jobs stop at the second level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down to the same depth. The proposal combines association rules derived from the collected market data with experts' opinions.
Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by mapping occupations to one another based on data, through association analysis, rather than on the intuition of a few experts. However, this study has a limitation in that it cannot fully reflect market demand as it changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries building on its success in the SW industry.
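The Apriori step described above can be sketched in miniature. This pure-Python fragment mines only 2-itemset rules (the study applied full Apriori over about 900 multi-site postings); the transaction sets and the support and confidence thresholds below are illustrative assumptions:

```python
from itertools import combinations
from collections import Counter

def pair_association_rules(transactions, min_support=0.3, min_conf=0.6):
    """Tiny Apriori-style miner restricted to 2-itemsets.

    Each transaction is the set of job categories one posting was filed
    under across the job sites; a rule (a -> b) suggests the two sites'
    categories describe the same job and can be mapped to each other.
    """
    n = len(transactions)
    item_count = Counter()
    pair_count = Counter()
    for t in transactions:
        item_count.update(t)
        pair_count.update(combinations(sorted(t), 2))

    rules = []
    for (a, b), c in pair_count.items():
        if c / n < min_support:
            continue                      # prune infrequent pairs (Apriori step)
        for x, y in ((a, b), (b, a)):
            conf = c / item_count[x]      # confidence of rule x -> y
            if conf >= min_conf:
                rules.append((x, y, c / n, conf))
    return rules
```

Each returned tuple is (antecedent, consequent, support, confidence), the quantities used to accept or reject a category mapping.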

A Study on the Location of Retail Trade in Kwangju-si and Its Efficient Utilization by Inhabitants (광주시 소매업의 입지와 주민의 효율적 이용에 관한 연구)

  • Jeon, Kyung-sook
    • Journal of the Korean Geographical Society
    • /
    • v.30 no.1
    • /
    • pp.68-92
    • /
    • 1995
  • Recently the structure of retail trade has been changing along with its environment, so studies of this changing process and of the fundamental structure of retail trade are needed. This study analyzes the location of retail trade in Kwangju-si, inhabitants' shopping behavior, and a desirable scheme for utilization. The methods, contents, and results are as follows: 1. Retail trade can be classified into independent stores, chain stores (supermarkets, voluntary chains, franchise systems, and convenience stores), department stores, cooperative associations, traditional markets, mail-order marketing, automatic vending, and others, by service level, selling items, prices, management, retailing method, and store or non-store type. 2. In Kwangju, the environment of retail trade is related to the consumer population structure: changes in consumption patterns, trends toward aging and the nuclear family, increased leisure time, and female advances into society. Rapid structural shifts in retail trade have also occurred due to these social changes. Traditional and premodern markets of the 1970s gave way to supermarkets and department stores in the 1980s, and various new types, large enterprises, and foreign capital appeared in the 1990s. 3. The locational characteristics of retail trade result from a spatial analysis of the total population distribution and from the calculation of a segregation index in light of potential demand. Densely populated areas occur in newly built apartment housing complexes distributed in a ring-shaped pattern around the old urban core. The numbers and rates of people aged over sixty are larger and higher in Kwangsan-gu and the area around Mt. Moodeung, where rural elements are remarkable. The relation between population distribution and retail trade is analyzed by the index of population per shop.
The index of population per shop is lower in the urban center, which is, as a whole, more convenient for consumers. In newly formed apartment complex areas, on the other hand, the index exceeds 1,000 per shop, failing to meet consumer demand. Because both the young and the aged are numerous in these areas, a retail trade pattern suited to both is needed. Urban fringes, including Kwangsan-gu and the vicinity of Mt. Moodeung, have problems owing to the highest population per shop (more than 1,500) spread over the most extensive areas. 4. The regional characteristics of retail trade are analyzed through the location quotient of shops by locational pattern and a centrality index. Chungkum-dong is the highest-order central place in the CBD. It is the core of retail trade, with higher-order specialty stores including three big department stores, supermarkets, and large stores. Taegum-dong, Chungsu-dong, Taeui-dong, and Numun-dong, which neighbor Chungkum-dong, fall into the second group. They form a central commercial section where large chain stores, specialty shopping streets, narrow-line retailing shops (furniture, amusement services, and galleries), supermarkets, and daily markets are located. The third group is formed on the axes of the state roads linking to Naju-kun, Changseong-kun, Tamyang-kun, Hwasun-kun, and former Songjeong-eup. It is related to newly rising apartment housing complexes along trunk roads and is characterized by markets and specialty stores. The fourth group comprises neighborhood shopping centers in older residential areas and the Songjeong-eup area, with independent stores and supermarkets as the main retailing functions. The last group covers inner residential areas and the outer part of the city including Songjeong-eup; the outer part, where miscellaneous shops are only occasionally found, is rural rather than urban (Fig. 7). 5. Residents' retail behavior is analyzed by factors of goods and facilities.
Department stores are strongly preferred for higher-order shopping goods such as formal clothing, in view of both the diversity and quality of goods (28.9%). But they suffer severe traffic congestion and high competition for market ranges caused by their sma…. 64.0% of respondents make combined-purpose trips together with banking and shopping. 6. For more efficient retail trading, it is necessary to introduce a spatial distribution policy with regard to the opportunity frequency of goods selection by central place, frontier regions, and age groups. We must also analyze competition among different types of retail trade and the consumption behaviors of working females and younger age groups, in aspects of time and space. Service improvement and the rationalization of management should be accomplished; cooperative location must be considered in relation to other functions such as finance, leisure and sports, and culture centers. Various service systems newly used by enterprises, such as installment plans, credit cards, and premium tickets, must also be carried into service improvement. The rationalization and professionalization of commercial goods handling are basically requested.
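The two indices used in the analysis above have simple closed forms; this sketch states them directly (the numbers in the test calls are hypothetical illustrations, not values from the paper):

```python
def location_quotient(local_sector, local_total, region_sector, region_total):
    """LQ = (local sector share) / (region-wide sector share).

    LQ > 1 means a district has proportionally more of that retail type
    than the city-wide average; the study uses it to group dongs by
    their retail mix.
    """
    return (local_sector / local_total) / (region_sector / region_total)

def population_per_shop(population, shops):
    """Index used above: residents served per retail shop; higher values
    indicate less convenient provision for consumers."""
    return population / shops
```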


The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the beginning of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the E-commerce industry, in which Amazon and eBay stand out, is growing explosively. As E-commerce grows, customers can easily find what they want to buy while comparing products, because more products are registered at online shopping malls. However, a problem has arisen with this growth: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a generalized keyword, too many products come out; conversely, few products are found when customers type in product details, because concrete product attributes are rarely registered. In this situation, automatically recognizing the text in images can be a solution. Because the bulk of product details is written in catalogs in image format, most product information cannot be found through text input in the current text-based search system. If the information in images can be converted to text, customers can search for products by product details, which lets them shop more conveniently. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail in certain circumstances, such as when the text is not big enough or the fonts are inconsistent. Therefore, this research suggests a way to recognize keywords in catalogs with deep learning algorithms, which have been the state of the art in image recognition since the 2010s.
The Single Shot MultiBox Detector (SSD), a model credited for its object-detection performance, can be used with its structure re-designed to account for the differences between text and general objects. However, the SSD model needs a large amount of labeled training data, because, like other deep learning models, it is trained by supervised learning. To collect such data, one could manually label the location and class of each text in catalogs, but manual collection raises many problems: some keywords would be missed because humans make mistakes while labelling, collection becomes too time-consuming given the scale of data needed, or too costly if many workers are hired to shorten the time. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program composes catalog-like images containing various keywords and pictures while saving the location information of each keyword. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves: the model recorded a recognition rate of 81.99% with 20,000 samples created by the program. Moreover, this research tested the SSD model on varied datasets to analyze which features of the data influence text-recognition performance. As a result, the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spacing among keywords, and differences in background images all proved related to the performance of the SSD model. These findings can guide performance improvements of the SSD model and other deep-learning-based text recognizers through higher-quality training data.
The SSD model re-designed to recognize text in images and the program developed for creating training data are expected to contribute to improved search systems in E-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the details written in catalogs.
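The automatic training-data generator described above can be sketched without an image library. This fragment models only the labelling side (keyword plus SSD-style bounding box) under an assumed monospace glyph size and page geometry; the keyword pool is made up for illustration, and the real program additionally renders the catalog-like image itself:

```python
import random

# Hypothetical keyword pool and page geometry (assumptions for the sketch).
KEYWORDS = ["cotton", "waterproof", "polyester", "machine-washable"]
CHAR_W, CHAR_H = 12, 20          # assumed glyph size in pixels (monospace)
PAGE_W, PAGE_H = 600, 800        # assumed catalog page size in pixels

def make_sample(n_words, seed=None):
    """Return one synthetic training sample: keywords with their boxes.

    Each record pairs a keyword with the (x0, y0, x1, y1) box where it
    would be drawn, i.e. exactly the label an SSD-style detector needs.
    """
    rng = random.Random(seed)
    labels = []
    for _ in range(n_words):
        word = rng.choice(KEYWORDS)
        w, h = CHAR_W * len(word), CHAR_H
        x = rng.randrange(0, PAGE_W - w)      # keep the word on the page
        y = rng.randrange(0, PAGE_H - h)
        labels.append({"keyword": word,
                       "bbox": (x, y, x + w, y + h)})
    return labels

sample = make_sample(5, seed=42)
```

A rendering step (drawing each word at its recorded box over a background image) would turn these records into the image/annotation pairs used for training.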

Performance Improvement on Short Volatility Strategy with Asymmetric Spillover Effect and SVM (비대칭적 전이효과와 SVM을 이용한 변동성 매도전략의 수익성 개선)

  • Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.119-133
    • /
    • 2020
  • Fama asserted that in an efficient market, we cannot make a trading rule that consistently outperforms average stock market returns. This study suggests a machine learning algorithm to improve the trading performance of an intraday short volatility strategy that exploits the asymmetric volatility spillover effect, and analyzes the resulting improvement. Generally, stock market volatility has a negative relation with stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: upward and downward moves in US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, VIX, KOSPI 200 index, and V-KOSPI 200 from 2008 to 2018. We found the negative relation between the S&P 500 and the VIX, and between the KOSPI 200 and the V-KOSPI 200, and documented a strong volatility spillover effect from the VIX to the V-KOSPI 200. Interestingly, the spillover was indeed asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall is only partially reflected at the open, and its influence lasts until the Korean market close. If the stock market were efficient, there would be no reason for this asymmetric volatility spillover effect to exist; it is a counterexample to the efficient market hypothesis. To utilize this anomalous spillover pattern, we analyzed an intraday volatility selling strategy that sells the Korean volatility market short in the morning after US stock market volatility closes down and takes no position after the VIX closes up. It produced a profit every year between 2008 and 2018, with a percent profitable of 68%.
The strategy showed a higher average annual return of 129%, relative to the benchmark's 33%. Its maximum drawdown (MDD) of -41% was also milder than the benchmark's -101%. The Sharpe ratio of the SVS strategy, 0.32, is much greater than the benchmark's 0.08. The Sharpe ratio, calculated as return divided by risk, considers return and risk simultaneously, so a high Sharpe ratio indicates better performance when comparing strategies with different risk and return structures. Real-world trading incurs trading costs, including brokerage and slippage; when these costs are considered, the performance difference between average annual returns of 76% and -10% becomes clear. To improve the performance of the suggested volatility trading strategy, we used the well-known SVM algorithm. Input variables are the VIX close-to-close return at day t-1, the VIX open-to-close return at day t-1, and the V-KOSPI 200 open return at day t; the output is the up/down classification of the V-KOSPI 200 open-to-close return at day t. The training period is 2008 to 2014 and the testing period is 2015 to 2018, with linear, radial basis function, and polynomial kernels. We suggested a modified short volatility (m-SVS) strategy that sells the V-KOSPI 200 in the morning when the SVM output is Down and takes no position when the output is Up. The trading performance was remarkably improved: the testing-period results of the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy, with an annual return of 123%, higher than that of the SVS strategy, and an MDD significantly improved from -41% to -29%.
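The base SVS trading rule described above can be sketched in a few lines. This is an illustrative reconstruction of the rule as stated (short volatility after a VIX down-close, flat after an up-close), not the authors' code; the SVM refinement simply replaces the sign rule with the classifier's Up/Down output:

```python
def svs_positions(vix_prev_close_returns):
    """Daily position of the short volatility strategy (SVS).

    Sell the Korean volatility index short (-1) on mornings after the
    VIX closed down; stay flat (0) after the VIX closed up.
    """
    return [-1 if r < 0 else 0 for r in vix_prev_close_returns]

def strategy_returns(positions, vk_open_to_close_returns):
    """Daily P&L: a short position (-1) profits when the V-KOSPI 200
    falls from open to close."""
    return [p * r for p, r in zip(positions, vk_open_to_close_returns)]
```

In the m-SVS variant, the position list would instead come from the SVM's predicted Down/Up labels for the V-KOSPI 200 open-to-close move.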

Studies on the Improvement of the Cropping System (I) (작부체계(作付體系) 개선(改善)에 관(關)한 조사연구(調査硏究)(I))

  • Choi, Chang Yeol
    • Korean Journal of Agricultural Science
    • /
    • v.10 no.1
    • /
    • pp.61-73
    • /
    • 1983
  • This study was conducted to obtain fundamental information for improving the cropping system so as to increase the land utilization rate and crop production. To group areas by their characteristics, Chungnam province was classified into four classes: suburb (Daedeog Gun, Cheonwon Gun), plain (Nonsan Gun, Dangjin Gun), coastal (Seosan Gun, Boryeong Gun), and hilly region (Gongju Gun, Cheongyang Gun). One hundred farm households were sampled from each region, and the cropping systems and utilization of paddy and upland fields in 1982 were surveyed. The results are summarized as follows: 1. The average utilization rate of upland was 161.9%; the rate was highest in the plain (188.9%) and lowest in the suburbs (152.0%). 2. Thirty-two kinds of crops were cultivated on upland. Among them, soybean had the highest planting-area rate (18.8%), followed by barley (15.4%), red pepper (13.1%), and Chinese cabbage (10.1%); red pepper had the highest rate in the suburbs, barley in the hilly region, and soybean in the plain and coastal regions. 3. The average utilization rate of paddy was 115.6%, highest in the suburbs (140.0%) and lowest in the coastal region (108.2%). 4. Twelve kinds of crops were cultivated at paddy before or after rice. Among them, barley had the highest cultivated-area rate (5.0%), followed by strawberry; strawberry had the highest rate in the suburbs and barley in the other regions. 5. The cropping systems at upland were divided into single cropping and double cropping, and the double-cropping types at upland were classified into 38 types by crop combination.
Among these double-cropping types, soybean after barley accounted for 35.0% of the cultivated area, but in the suburbs this combination was rare and vegetable combinations showed a high rate. 6. The double-cropping types at paddy were classified into six types. As a whole, barley after rice showed the highest rate of cultivated area (42.8%) among crop combinations, but in the suburbs this type was rare and fruit vegetables after rice showed the highest rate. Post-rice cropping accounted for 76.3% of the whole double-cropping area at paddy, significantly higher than pre-rice cropping. 7. Some crop combinations consisted of crops from the same family or closely related crops, whose rotation characteristics are almost identical; the area under these unreasonable combinations was 19.09 ha. 8. On upland, the planting areas of cereal, vegetable, and industrial crops were 88.92 ha, 93.70 ha, and 21.80 ha, respectively; the area of cereal crops was significantly less than that of vegetable crops. 9. Most research reports on the cropping system from 1910 to 1980 concerned post-cropping after the rice harvest. The research objectives could be classified into 14 kinds, the important ones being planting time, amount of manuring, seeding quantity, transplanting time, ridging method, sowing method, and variety testing.
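The land utilization rate reported above has a simple definition: total area planted across all croppings in the year divided by the arable area, so double cropping pushes the rate above 100%. A minimal sketch (the areas in the test are hypothetical, not survey figures):

```python
def utilization_rate(planted_areas, arable_area):
    """Land utilization rate (%): total planted area summed over all
    croppings in the year, divided by the arable land area.

    A field cropped twice contributes its area twice, which is how
    rates such as the 161.9% upland average above arise.
    """
    return 100 * sum(planted_areas) / arable_area
```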


A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities, spending more than 10 trillion won annually on defense improvement. As the Defense Acquisition Program is directly related to national security as well as the lives and property of the people, it must be carried out transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the program has made it challenging for many working-level officials to carry it out smoothly; many reportedly discover relevant regulations they were unaware of only after their work is underway. In addition, statutory statements related to the Defense Acquisition Program tend to cause serious issues even if only a single expression within a sentence is wrong. Despite this, efforts to establish a sentence comparison system that corrects this issue in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses Siamese-network-based artificial neural networks, models from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program documents and sentences from related statutory provisions, in order to determine and classify the risk of illegality and make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentence" (taken from actual statutes) and "Edited Sentence" (sentences derived by editing an "Original Sentence").
Among the many statutes related to the Defense Acquisition Program, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentence" set consists of the 83 clauses from these statutes that are most accessible to working-level officials in their work. For each clause, the "Edited Sentence" set comprises 30 to 50 similar sentences likely to appear in modified form in military reports. The edited sentences were produced by modifying the original sentences according to 12 fixed rules, in proportion to the number of applicable rules. After 1:1 sentence similarity evaluation experiments, each "Edited Sentence" could be classified as legal or illegal with considerable accuracy. However, the "Edited Sentence" dataset used to train the neural models is characterized by those same 12 rules, so models fed only the "Original Sentence" and "Edited Sentence" data cannot effectively classify other sentences that appear in actual military reports; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, the models were reassessed on an additional 120 newly written sentences that better resemble those in actual military reports while remaining associated with the original sentences, and we confirmed that their performance surpassed a certain level even when trained merely with the "Original Sentence" and "Edited Sentence" data.
If sufficient model learning is achieved by improving and expanding the full training dataset with sentences that actually appear in reports, the models should classify sentences from military reports as legal or illegal even more accurately. Based on the experimental results, this study confirms the feasibility and value of building a "Real-Time Automated Comparison System Between Military Documents and Related Laws". The proposed approach can identify which of the several related law clauses is most similar to a sentence appearing in a Defense Acquisition Program-related military report, and thus helps determine whether the contents of the report sentence carry a risk of illegality when compared with those law clauses.

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.35-52
    • /
    • 2019
  • As public services are provided in various forms, including e-government, public demand for service quality is increasing. Continuous measurement and improvement of public service quality is needed, but traditional surveys are costly, time-consuming, and limited in scope. An analytical technique is therefore needed that can measure the quality of public services quickly and accurately at any time, based on the data the services themselves generate. In this study, we analyzed the quality of public services from data using process mining techniques, focusing on the building licensing complaint service of N city. This service was chosen because it can secure the data necessary for analysis, and the approach can be spread to other institutions for public service quality management. We applied process mining to a total of 3,678 building license complaints filed in N city over two years from January 2014, and identified the process maps and the departments with high frequency and long processing times. The analysis showed that some departments were crowded at certain points in time while others handled relatively few cases, and gave reasonable grounds to suspect that an increase in the number of complaints increases the time required to complete them. The time to complete a complaint ranged from the same day to one year and 146 days. The cumulative frequency of the top four departments (Sewage Treatment Division, Waterworks Division, Urban Design Division, and Green Growth Division) exceeded 50%, and that of the top nine departments exceeded 70%; participation was concentrated in a small number of departments, and the workload among departments was highly unbalanced. Most complaints followed a wide variety of process patterns.
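The frequency and processing-time measures described above can be computed directly from an event log. The sketch below uses the standard library on a toy log; the department names echo the abstract, but the case data and durations are invented for illustration only (the paper's own analysis used dedicated process mining tools over 3,678 real cases).

```python
from collections import Counter
from datetime import date

# Toy event log: (case_id, department, start_date, end_date).
event_log = [
    ("c1", "Sewage Treatment Division", date(2014, 1, 6),  date(2014, 1, 20)),
    ("c1", "Waterworks Division",       date(2014, 1, 21), date(2014, 2, 3)),
    ("c2", "Sewage Treatment Division", date(2014, 2, 1),  date(2014, 2, 1)),
    ("c2", "Urban Design Division",     date(2014, 2, 2),  date(2014, 3, 15)),
]

# Department workload: how often each department appears in the log.
frequency = Counter(dept for _, dept, _, _ in event_log)

# Mean processing time (in days) per department.
totals, counts = Counter(), Counter()
for _, dept, start, end in event_log:
    totals[dept] += (end - start).days
    counts[dept] += 1
mean_days = {d: totals[d] / counts[d] for d in totals}
```

Sorting `frequency` and `mean_days` surfaces the most heavily loaded and slowest departments, which is the core of the workload-imbalance finding.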
The analysis shows that the number of 'complement' (supplementation) decisions has the greatest impact on the length of a complaint: each 'complement' decision requires a physical period in which the complainant revises and resubmits documents, which lengthens the time until the entire complaint is completed. These delays could be reduced drastically if complainants prepared their documents thoroughly before filing. By clarifying and disclosing in the system the common causes of 'complement' decisions and how to resolve them, the administration can help complainants prepare in advance and give them confidence that documents prepared according to the disclosed information will pass, making complaint processing far more predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by eliminating repeated consultations and duplicated tasks. The results of this study can be used to find departments that bear a heavy complaint load at certain points in time and to manage workforce allocation between departments flexibly. By analyzing the patterns of the departments consulted for each type of complaint, the results can also support automating or recommending the choice of consultation department. Furthermore, by applying machine learning techniques to the various data generated during complaint handling, the patterns of the complaint process can be discovered and, once expressed as an algorithm and applied to the system, used for the automation and intelligence of civil complaint processing.
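The claimed relation between 'complement' decisions and completion time can be checked by grouping cases by their number of 'complement' decisions and comparing mean completion times. The records below are invented for illustration; the grouping logic, not the numbers, is the point.

```python
from statistics import mean

# Toy records: (number_of_complement_decisions, days_to_complete).
# Values are assumptions chosen only to illustrate the grouping.
cases = [(0, 12), (0, 20), (1, 45), (1, 60), (2, 130), (2, 180)]

def mean_days_by_complements(records):
    """Average completion time grouped by number of 'complement' decisions."""
    groups = {}
    for n, days in records:
        groups.setdefault(n, []).append(days)
    return {n: mean(v) for n, v in sorted(groups.items())}

summary = mean_days_by_complements(cases)
```

A monotonically increasing `summary` across groups would support the finding that 'complement' decisions dominate complaint duration.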
This study is expected to inform future public service quality improvement through process mining analysis of civil complaint services.