
A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami; Kim, Jaeseok; Kim, Gi-Nam; Heo, Jong-Uk; On, Byung-Won; Kang, Mijung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.1-23 / 2013
  • To discover significant social issues such as unemployment, economic crisis, and social welfare, which are urgent problems to be solved in modern society, the existing approach is for researchers to collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Because of the expense involved, a large number of survey replies is seldom gathered. In some cases, it is also hard to find experts dealing with specific social issues. Thus, the sample set is often small and may carry some bias. Furthermore, regarding a single social issue, several experts may reach totally different conclusions because each expert has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which of them are really important. To overcome the shortcomings of the current approach, in this paper, we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models. The goal of our proposed matching algorithm is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 as "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society. In other words, looking only at social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop a matching algorithm that computes the probability value of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. For instance, given a set of text documents, we segment each text document into paragraphs. In the meantime, using LDA, we extract a set of topics from the text documents. Based on our matching process, each paragraph is assigned to a topic, indicating that the paragraph best matches the topic. Finally, each topic has several best-matched paragraphs. Furthermore, suppose there are a topic (e.g., Unemployment Problem) and its best-matched paragraph (e.g., "Up to 300 workers lost their jobs at XXX company in Seoul"). In this case, we can grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly. Using this prototype system, we have detected various social issues appearing in our society and have also shown the effectiveness of our proposed methods through experimental results. Note that our proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
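The abstract describes the matching step only at a high level. As a minimal sketch of one way such a generative matching could work, the snippet below assumes topics are available as term-probability dictionaries (for example, the top terms of each labeled LDA topic) and scores each paragraph by the log-likelihood of its tokens under each topic; the topic contents and smoothing constant are illustrative assumptions, not the authors' exact algorithm.

```python
import math
from collections import Counter

# Hypothetical topics: label -> {term: probability}, e.g. the top terms of an
# LDA topic after human labeling, as described in the abstract.
topics = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Welfare": {"welfare": 0.5, "pension": 0.3, "budget": 0.2},
}

def match_paragraph(paragraph_tokens, topics, smoothing=1e-6):
    """Assign a paragraph to the topic with the highest log-likelihood
    of its tokens under that topic's term distribution."""
    counts = Counter(paragraph_tokens)
    best_topic, best_score = None, float("-inf")
    for label, term_probs in topics.items():
        score = sum(n * math.log(term_probs.get(tok, smoothing))
                    for tok, n in counts.items())
        if score > best_score:
            best_topic, best_score = label, score
    return best_topic, best_score

tokens = "up to 300 workers lost their jobs layoff unemployment".split()
print(match_paragraph(tokens, topics))
```

In this sketch every paragraph is assigned to its highest-scoring topic, so each labeled topic accumulates the set of paragraphs that best match it, as described in the abstract.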

Clinical Study on Prenatal care, and Dietary Intakes for Pregnant Women and new Mothers (임산부의 산전관리와 산욕기 영양실태에 관한 연구)

  • Chia, Soon-Hyang; Park, Chai-Soon
    • Journal of Nutrition and Health / v.9 no.4 / pp.36-46 / 1976
  • This study was designed to provide basic data on prenatal care for future direction in maternity and child care, and also to investigate the diet of women during pregnancy and the period directly afterwards in order to offer mothers appropriate advice for the improvement of nutritional standards. A clinical study on prenatal care was based on 1054 delivery cases. A nutritional survey was performed on 174 mothers admitted to the department of obstetrics at St. Mary's Hospital during the period of March, 1975 to February, 1976. The results obtained are summarized as follows; I. Clinical study on prenatal care 1) The age distribution showed 59.4% of the mothers were between the ages of 25 to 29 years old. 2) The gestational period was highest between the 37th and 40th gestational weeks. 33.7% of the mothers were primigravidae and 31.8% of them primiparae. 3) 41.3% of the mothers had not received prenatal care or had received it only once before. 4) Induced deliveries were 61.8% and spontaneous deliveries 38.2%. 61.9% of the mothers had received prenatal care, while those without prenatal care accounted for 61.6% of the total induced deliveries. 5) Low birth weights were 7.7%; the rate was 5.0% among mothers who had received prenatal care and 11.5% among those who had not. 6) Stillbirths were 1.13%; 0.32% of these mothers had received prenatal care and the remainder had not. 7) Of those receiving prenatal care, 2.1% of the newborns were in the 0~3 Apgar score group, 6.3% in the 4~6 Apgar score group, and 91.6% in the 7~10 Apgar score group. Among the group without prenatal care, 5.0% of the newborns were in the 0~3 Apgar score group, 9.7% in the 4~6 Apgar score group, and 85.3% in the 7~10 Apgar score group. 8) Obstetrical complications developed in 11.86% of the pregnant women when they were hospitalized. Among the group receiving prenatal care, 8.1% of the mothers had obstetrical complications; in the group without prenatal care, 17.16% had obstetrical complications. The most common obstetrical complication was malpresentation. 9) The first prenatal care was received between the 37th and 40th gestational weeks. II. Food intake during pregnancy The following are the results from the questionnaires of the mothers concerning diets during pregnancy; 1) Main meals and snacks In 32.2% of the cases, the main meals during pregnancy amounted to more than was usually eaten at other times; in 67.8% of the cases, the main meals were the same as usually eaten. In 22.4% of the cases, snacks during pregnancy amounted to more than usually eaten at other times; in 77.6% of the cases, snacks were the same as usually eaten. 2) Itemized list The mothers made a special effort to include certain items in their diets; the following is a breakdown of those items: a. egg, meat, fish 33.3% b. fruit, vegetables 32.2% c. milk, fruit juice 18.4% d. cake, bread 2.9% e. nothing special 13.2% 3) Milk 44.8% of the mothers had at least one cup of milk every day, 33.4% had at least one cup of milk on occasion, and 15.5% did not have any milk. 4) Vitamins 39.7% of the mothers had vitamins every day, 24.7% had vitamins occasionally, and 35.6% did not have any vitamins. 5) Anemic symptoms 9.2% of the mothers very often had anemic symptoms during pregnancy, 39.1% often had anemic symptoms, and 51.7% did not have anemic symptoms at all. 6) Taboos on food 23% of the mothers recognized 'taboos' on food during pregnancy, 27% displayed uncertainty about the 'taboos', and 50% displayed indifference toward the taboos. III. Nutritional survey on the new mothers' diet 1) The diets for new mothers can be divided into four categories: general diet, low sodium diet, soft diet and liquid diet. 2) Cooked rice and seaweed soup were the main foods for the new mothers, as has been the traditional diet for Korean mothers. 3) The average diet contained 1,783 g, and the average consumption of the basic food groups per capita per day was 1,265 g for cereals and grains, 456 g for meats and legumes, 58 g for fruits and vegetables, 0 g for milk and fish, and 4 g for fats and oils. 4) In addition to the 1,783 g of food in the main diet there was also 142.8 g of food taken as snacks. 5) The average daily consumption of calories and nutrients was 2,697 kcal, with 123.4 g of protein, 44.9 g of fat, 718.2 mg of calcium, 14 mg of iron, 2,101.4 I.U. of vitamin A, 0.43 mg of thiamine, 1.02 mg of riboflavin, 15.88 mg of niacin, and 5.26 mg of ascorbic acid. When these figures are compared with the recommended allowances for new mothers in Korea, the intake of calories and nutrients was generally satisfactory, but the intake of minerals and vitamins was below the recommended allowance.


On Listening, Reflection and Meditation in Vedānta (베단따의 '듣기·숙고하기·명상하기'(문·사·수)에 관하여)

  • Park, Hyo-yeop
    • Journal of Korean Philosophical Society / v.116 / pp.155-180 / 2010
  • The three means of listening, reflection and meditation (śravaṇa, manana and nididhyāsana), which are the central devices of practice in Vedānta philosophy, should be understood not as a continuative step but as a methodological extension on condition of having one and the same purpose. In other words, the three means should be interpreted in a listening-oriented manner, in which the process has to be methodologically extended to reflection and meditation only when direct knowledge of the reality is not gained in listening. This kind of interpretation can be further justified by displaying significant characteristics of Indian philosophy implied in the three means. It can easily be said that Vedānta, belonging to the liberation-centric tradition, is a project of 'regaining essential self' through which the self becomes essential self by knowing that self. In this case the listening-oriented interpretation coincides with the basic teachings of Vedānta, since listening alone can be a sufficient means for obtaining knowledge of the original self. Further, as the project of 'regaining essential self' is carried out by the three means, these can be called a sort of 'event' that is carried out according to the scenario of Vedāntic metaphysics. In this case listening is a course of comprehending the scenario of the event in which one participates, and the participant can accomplish the project by way of listening to the scenario alone, which is judged as somewhat more effective for liberation. However, in the later Vedānta there arises a meditation-oriented interpretation in which the three means are regarded not as a methodological extension but as a continuative step, because of the emphasis on meditation under the lasting influence of other philosophical systems. This is a result of the epistemic desire that tries to convert what is heard into what is specially perceived, or what is given into what is accepted. It may be said that this interpretation, emphasizing the phased transition from indirect to direct knowledge, is an attempt to rationalize the repetitive delay of the event as the actual failure of the project. Furthermore, an assertion of the later Vedānta which refers to the fourth means called samādhi is based on the logic that self-realization is possible apart from and outside the text, and accordingly it is incompatible with the assertion of the early Vedānta that self-realization is a reproduction, as it is, of the scenario guided by the absolute text. After all, the standard interpretation of the three means in Vedānta has to be listening-oriented, not meditation-oriented or samādhi-oriented.

Determination of shear wave velocity profiles in soil deposit from seismic piezo-cone penetration test (탄성파 피에조콘 관입 시험을 통한 국내 퇴적 지반의 전단파 속도 결정)

  • Sun Chung Guk; Jung Gyungja; Jung Jong Hong; Kim Hong-Jong; Cho Sung-Min
    • 한국지구물리탐사학회:학술대회논문집 / 2005.09a / pp.125-153 / 2005
  • It has been widely known that the seismic piezo-cone penetration test (SCPTU) is one of the most useful techniques for investigating geotechnical characteristics including dynamic soil properties. As practical applications in Korea, SCPTU was carried out at two sites in Busan and four sites in Incheon, which are mainly composed of alluvial or marine soil deposits. From the SCPTU waveform data obtained at the testing sites, the first arrival times of shear waves and the corresponding time differences with depth were determined using the cross-over method, and the shear wave velocity (VS) profiles were derived by the refracted ray path method based on Snell's law; the VS profiles showed trends similar to those of the cone tip resistance (qt) profiles. In the Incheon area, the testing depths of SCPTU were deeper than those of conventional down-hole seismic tests. Moreover, for the application of the conventional CPTU to earthquake engineering practice, correlations between VS and CPTU data were deduced based on the SCPTU results. For the empirical evaluation of VS for all soils, together with the clays and sands classified unambiguously in this study by the soil behavior type classification index (IC), the authors suggest VS-CPTU data correlations expressed as a function of four parameters (qt, fs, σv0 and Bq) determined by multiple statistical regression modeling. Despite the incompatible strain levels of the down-hole seismic test during SCPTU and the conventional CPTU, it is shown that the VS-CPTU data correlations for all soils, clays and sands suggested in this study are applicable to the preliminary estimation of VS for Korean deposits and are more reliable than the previous correlations proposed by other researchers.
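The abstract does not report the functional form of the suggested VS-CPTU correlations. As a hedged illustration only, a multiple regression of VS on the four parameters (qt, fs, σv0, Bq) could be set up as below, assuming a power-law form and using purely hypothetical numbers and column names; the actual correlations and coefficients are those given in the paper, not this sketch.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical SCPTU dataset; values and column names are illustrative only.
df = pd.DataFrame({
    "qt":  [1.2, 2.5, 4.1, 6.3, 9.8, 12.0, 15.5],   # cone tip resistance (MPa)
    "fs":  [15, 30, 52, 70, 110, 130, 160],          # sleeve friction (kPa)
    "sv0": [40, 80, 120, 160, 200, 240, 280],        # total vertical stress (kPa)
    "Bq":  [0.45, 0.30, 0.22, 0.15, 0.08, 0.05, 0.03],
    "Vs":  [90, 130, 170, 210, 260, 300, 340],       # measured shear wave velocity (m/s)
})

# Assumed power-law form: Vs = a * qt^b1 * fs^b2 * sv0^b3 * (1 + Bq)^b4
X = np.column_stack([
    np.log(df["qt"]), np.log(df["fs"]), np.log(df["sv0"]), np.log1p(df["Bq"])
])
y = np.log(df["Vs"])

model = LinearRegression().fit(X, y)
print("a =", np.exp(model.intercept_), "exponents =", model.coef_)
print("R^2 =", model.score(X, y))
```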


Function of the Korean String Indexing System for the Subject Catalog (주제목록을 위한 한국용어열색인 시스템의 기능)

  • Yoon Kooho
    • Journal of the Korean Society for Library and Information Science / v.15 / pp.225-266 / 1988
  • Various theories and techniques for the subject catalog have been developed since Charles Ammi Cutter first tried to formulate rules for the construction of subject headings in 1876. However, they do not seem to be appropriate to the Korean language because the syntax and semantics of Korean are different from those of English and other European languages. This study therefore attempts to develop a new Korean subject indexing system, namely the Korean String Indexing System (KOSIS), in order to increase the use of subject catalogs. For this purpose, the advantages and disadvantages of the classed subject catalog and the alphabetical subject catalog, which are the typical subject catalogs in libraries, are investigated, and most of the notable subject indexing systems, in particular PRECIS developed by the British National Bibliography, are reviewed and analysed. KOSIS is a string indexing system based purely on the syntax and semantics of the Korean language, even though considerable principles of PRECIS are applied to it. The outlines of KOSIS are as follows: 1) KOSIS is based on the fundamentals of natural language and an ingenious conjunction of human indexing skills and computer capabilities. 2) KOSIS is a string indexing system based on the 'principle of context-dependency.' A string of terms organized according to this principle shows remarkable affinity with certain patterns of words in ordinary discourse. From that point onward, natural language rather than classificatory terms becomes the basic model for indexing schemes. 3) KOSIS uses 24 role operators. One or more operators should be allocated to the index string, which is organized manually by the indexer's intellectual work, in order to establish the most explicit syntactic relationship of index terms. 4) Traditionally, a single-line entry format is used in which a subject heading or index entry is presented as a single sequence of words, consisting of the entry terms plus, in some cases, an extra qualifying term or phrase. KOSIS instead employs a two-line entry format which contains three basic positions for the production of index entries: the 'lead' serves as the user's access point, the 'display' contains those terms which are themselves context-dependent on the lead, and the 'qualifier' sets the lead term into its wider context. 5) Each of the KOSIS entries is co-extensive with the initial subject statement prepared by the indexer, since it displays all the subject specificities. Compound terms are always presented in their natural language order, and inverted headings are not produced in KOSIS. Consequently, the precision ratio of information retrieval can be increased. 6) KOSIS uses 5 relational codes for the system of references among semantically related terms. Semantically related terms are handled by a different set of routines, leading to the production of 'See' and 'See also' references. 7) KOSIS was originally developed for a classified catalog system which requires a subject index, that is, an index which 'translates' subjects expressed in natural language into the appropriate classification numbers. However, KOSIS can also be used for a dictionary catalog system. Accordingly, KOSIS strings can be manipulated to produce either appropriate subject indexes for a classified catalog system, or acceptable subject headings for a dictionary catalog system. 8) KOSIS is able to maintain consistency of index entries and cross-references by means of a routine identification of the established index strings and reference system. For this purpose, an individual Subject Indicator Number and Reference Indicator Number are allocated to each new index string and each new index term, respectively. The system can produce all the index entries, cross-references, and authority cards by either manual or mechanical methods. Thus, detailed algorithms for the machine production of various outputs are provided for institutions which can use computer facilities.
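The abstract does not reproduce the role-operator table or the entry-generation rules. The sketch below illustrates only the basic 'shunting' idea behind the two-line lead/qualifier/display format described in item 4, assuming a plain, already-ordered context-dependent string of terms and ignoring role operators, relational codes, and Korean-specific syntax; the terms used are hypothetical.

```python
# Minimal sketch of PRECIS-style "shunting": every term takes a turn as the
# lead, the terms preceding it form the qualifier (wider context, nearest
# first), and the terms following it form the display (context-dependent terms).

def two_line_entries(terms):
    entries = []
    for i, lead in enumerate(terms):
        qualifier = ". ".join(reversed(terms[:i]))  # wider context of the lead
        display = ". ".join(terms[i + 1:])          # terms dependent on the lead
        entries.append((lead, qualifier, display))
    return entries

# Hypothetical indexing string in context-dependent order.
string_terms = ["Korea", "Libraries", "Catalogs", "Subject indexing"]
for lead, qualifier, display in two_line_entries(string_terms):
    header = lead.upper() + (f". {qualifier}" if qualifier else "")
    print(header)
    print(f"    {display}")
    print()
```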


Decreased White Matter Structural Connectivity in Psychotropic Drug-Naïve Adolescent Patients with First Onset Major Depressive Disorder (정신과적 투약력이 없는 초발 주요 우울장애 청소년 환아들에서의 백질 구조적 연결성 감소)

  • Suh, Eunsoo; Kim, Jihyun; Suh, Sangil; Park, Soyoung; Lee, Jeonho; Lee, Jongha; Kim, In-Seong; Lee, Moon-Soo
    • Korean Journal of Psychosomatic Medicine / v.25 no.2 / pp.153-165 / 2017
  • Objectives : Recent neuroimaging studies focus on dysfunctions in the connectivity between cognitive and emotional circuits, i.e., the anterior cingulate cortex, which connects the dorsolateral prefrontal cortex and orbitofrontal cortex to the limbic system. Previous studies on pediatric depression using DTI have reported decreased neural connectivity in several brain regions, including the amygdala, the anterior cingulate cortex, and the superior longitudinal fasciculus. We compared the neural connectivity of psychotropic drug-naïve adolescent patients with a first onset of major depressive episode with that of healthy controls using DTI. Methods : Adolescent psychotropic drug-naïve patients (n=26; 10 male, 16 female; age range, 13-18 years) who visited the Korea University Guro Hospital and were diagnosed with first onset major depressive disorder were registered. Healthy controls (n=27; 5 male, 22 female; age range, 12-17 years) were recruited. Psychiatric interviews, complete psychometrics including IQ and HAM-D, and MRI including diffusion-weighted image acquisition were conducted prior to antidepressant administration to the patients. Fractional anisotropy (FA) and radial, mean, and axial diffusivity were estimated using DTI. FMRIB Software Library Tract-Based Spatial Statistics (TBSS) was used for statistical analysis. Results : We did not observe any significant difference in the whole-brain analysis. However, ROI analysis of the right superior longitudinal fasciculus revealed 3 clusters with a significant decrease of FA in the patient group. Conclusions : The patients with adolescent major depressive disorder showed a statistically significant FA decrease in the DTI-based structure compared with healthy controls. Therefore, we suggest that DTI can be used as a biomarker in psychotropic drug-naïve adolescent patients with first onset major depressive disorder.
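For readers unfamiliar with the DTI scalars mentioned above, they follow directly from the three eigenvalues of the diffusion tensor. The sketch below shows only the standard formulas; it is a generic illustration, not the study's FSL/TBSS pipeline, and the example eigenvalues are made up.

```python
import numpy as np

def dti_scalars(eigenvalues):
    """Standard DTI scalars from the three diffusion tensor eigenvalues."""
    l1, l2, l3 = np.sort(eigenvalues)[::-1]          # l1 >= l2 >= l3
    md = (l1 + l2 + l3) / 3.0                        # mean diffusivity
    ad = l1                                          # axial diffusivity
    rd = (l2 + l3) / 2.0                             # radial diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = np.sqrt(1.5 * num / den) if den > 0 else 0.0  # fractional anisotropy
    return fa, md, ad, rd

# Illustrative eigenvalues (mm^2/s) for a fairly anisotropic white matter voxel.
print(dti_scalars([1.7e-3, 0.4e-3, 0.3e-3]))
```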

Characteristics and Seasonal Variations in the Structure of Coleoptera Communities (갑충군집(甲蟲群集)의 구조적(構造的) 특성(特性)과 계절적(季節的) 발생소장(發生消長))

  • Kim, Ho Jun
    • Journal of Korean Society of Forest Science / v.80 no.1 / pp.82-96 / 1991
  • This study was carried out to investigate the structural characteristics of Coleoptera communities inhabiting the crowns of the Korean pine (Pinus koraiensis S. et Z.). Four plantations of the Korean pine, stand A (11 years old), stand B (21 years old), stand C (31 years old), and stand D (46 years old), were selected in Sudong-myen, Namyangju-gun, Kyeonggi-do. Sampling was done by knock-down methods using an insecticide (DDVP), conducted from April 1986 to September 1987, except for the winter season. The following major conclusions are drawn from this study: 1. The total number of Coleoptera was 107 species of 85 genera in 35 families: 83 species of 66 genera in 27 families in 1986 and 74 species of 52 genera in 30 families in 1987. 2. The abundant families, based on the number of species, were Staphylinidae (16.8%), Coccinellidae (7.5%), Chrysomelidae (6.5%), Curculionidae (6.5%), and Cerambycidae (5.6%). These five families accounted for 43.0% of the total number of species. 3. The important families, based on the number of individuals, were Cantharidae (28.2%), Catopidae (27.7%), and Coccinellidae (23.0%). These three families accounted for 78.9% of the total number of individuals. 4. The important species, based on the number of individuals, were Podabrus sp. (22.6%, Cantharidae), Catops sp. 1 (21.7%, Catopidae), and Anatis halonis (15.2%, Coccinellidae). The dominant species were Podabrus sp. (25.2%) in 1986 and Catops sp. 1 (24.9%) in 1987. 5. Generally, more species and individuals were found in older stands than in younger ones. 6. The Coleoptera communities decreased in the thinned stand (stand C). Such a phenomenon in the thinned stand was likely to last two or more years. 7. The Coleoptera communities reached their peak of abundance in May and decreased thereafter.


A Study on the Effect of Students' Problem Solving Ability and Satisfactions in Woodworking Product Making Program Using Design Thinking (목공 제품 제작 활동에서 디자인 씽킹의 활용이 학생들의 만족도와 문제해결력에 미치는 영향)

  • Kim, SeongIl
    • 대한공업교육학회지 / v.44 no.2 / pp.142-163 / 2019
  • The purpose of this study is to analyze the problem-solving ability and satisfaction of university students, who are pre-service technology teachers, in a woodworking product (birdhouse) making program using design thinking. Survey responses on satisfaction, confidence in problem solving, difficulties, and causes of difficulties were analyzed with a statistical program (SPSS ver. 20) according to the gender and grade of the 33 students who participated in the extracurricular experience program designed to improve creativity and problem-solving ability. The main conclusions of this study are as follows: First, the average of total satisfaction with the experience program is 4.39, which is somewhat high. The highest average responses are 'feeling of accomplishment' and 'advice from the people around' (M = 4.46). There is no significant difference by gender or grade. The students were more interested in making different group-based birdhouses through the design thinking process, with the help of the people around them, than in simply following a set practice. Because the program produced high self-confidence, a sense of accomplishment, and satisfaction, it can be recommended to other students. Second, the total average response for the students' self-confidence in problem solving in the group-based making experience program using design thinking is 3.80. As a result of the group activities, the students gained self-confidence in their problem-solving ability and in dealing with difficult situations. In future making programs, addressing the difficulties of making can further enhance the satisfaction of the students. Third, among the survey items related to confidence in problem-solving ability, 'I have the ability to solve many problems' and 'I always have the ability to cope with new and difficult business situations' show the highest correlation. Therefore, in order to improve self-confidence in problem-solving ability, it is necessary to prepare teaching-learning programs that can strengthen problem-solving ability. Fourth, in designing and making a new product rather than working from a given product design, the most difficult step is 'the process of reworking and modifying the idea product'. The main reason that students have difficulty in the production process is 'lack of knowledge and ability to produce'. To make various woodworking products using the design thinking process, sufficient training in woodworking and design thinking before product making is helpful. The students' satisfaction with team-based learning using design thinking, which helps improve creativity and problem-solving ability, is high. Therefore, the results of this research can be applied to and analyzed in other making activity programs that use design thinking in order to improve students' problem-solving ability.

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis is being actively conducted, and it is showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label among two classes, multi-class classification with one label among several classes, and multi-label classification with multiple labels among several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of the characteristic of having multiple labels. In addition, since the number of labels to be predicted increases as the number of labels and classes increases, performance improvement becomes difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, in which (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) training is performed to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only the linear relationship between labels or compress the labels by random transformation, it is difficult for them to capture the non-linear relationship between labels, so they cannot create a latent label space that sufficiently contains the information of the original label space. Recently, there have been increasing attempts to improve performance by applying deep learning technology to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding has a limitation in that a large amount of information is lost when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing gradient problem that occurs during backpropagation. To solve this problem, skip connections were devised: by adding the input of a layer to its output, gradients are preserved during backpropagation, and efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. In addition, the proposed methodology was applied to actual paper keywords to derive the high-dimensional keyword label space and the low-dimensional latent label space. Using this, we conducted an experiment in which the compressed keyword vector in the latent label space was predicted from the paper abstract, and the predicted keyword vector was restored to the original label space to evaluate multi-label classification. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods. This shows that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately led to the improvement of the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance according to the domain characteristics and the number of dimensions of the latent label space.
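The abstract does not give layer sizes or the exact placement of the skip connections. A minimal PyTorch sketch of the general idea, with residual (skip) blocks inside both the encoder and decoder and purely illustrative dimensions, is shown below; at prediction time a separate model would map the paper abstract to the latent vector, which the trained decoder then restores to the original keyword label space, as described above.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Fully connected block whose input is added back to its output,
    i.e. a skip connection that eases gradient flow."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, x):
        return torch.relu(x + self.net(x))

class LabelAutoencoder(nn.Module):
    """Compresses a high-dimensional multi-hot label vector into a
    low-dimensional latent label space and reconstructs it."""
    def __init__(self, n_labels=1000, hidden=256, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_labels, hidden), nn.ReLU(),
            ResidualBlock(hidden),
            nn.Linear(hidden, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(),
            ResidualBlock(hidden),
            nn.Linear(hidden, n_labels))

    def forward(self, y):
        z = self.encoder(y)
        return self.decoder(z), z

# Training loop sketch: reconstruct dummy multi-hot keyword label vectors.
model = LabelAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
y = (torch.rand(64, 1000) < 0.01).float()
for _ in range(5):
    recon, _ = model(y)
    loss = loss_fn(recon, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```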

Development of the Regulatory Impact Analysis Framework for the Convergence Industry: Case Study on Regulatory Issues by Emerging Industry (융합산업 규제영향분석 프레임워크 개발: 신산업 분야별 규제이슈 사례 연구)

  • Song, Hye-Lim; Seo, Bong-Goon; Cho, Sung-Min
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.199-230 / 2021
  • Innovative new products and services are being launched through convergence between heterogeneous industries, and social interest and investment in convergence industries such as AI, big-data-based future cars, and robots are continuously increasing. However, in the process of commercializing new convergence products and services, there are many cases where they do not conform to the existing regulatory and legal system, which causes many difficulties for companies launching their products and services into the market. In response to these industrial changes, the current government is promoting the improvement of the existing regulatory mechanisms applied to the relevant industries, along with the expansion of investment in new industries. Against this backdrop of convergence industry trends, this study aimed to analyze the existing regulatory system that is an obstacle to market entry of innovative new products and services, in order to preemptively predict regulatory issues that will arise in emerging industries. In addition, it was intended to establish a regulatory impact analysis system to evaluate adequacy and prepare improvement measures. The flow of this study is divided into three parts. In the first part, previous studies on regulatory impact analysis and evaluation systems are investigated. These were used as basic data for the development direction of the regulatory impact framework and its indicators and items. In the second part, on the development of the regulatory impact analysis framework, indicators and items are developed based on the previously investigated data and applied to each stage of the framework. In the last part, a case study is presented in which the developed regulatory impact analysis framework is applied to solve regulatory issues faced by actual companies. The case study covered the autonomous/electric vehicle industry and the Internet of Things (IoT) industry, because these are among the emerging industries in which the Korean government has recently shown the most interest and are judged to be most relevant to the realization of an intelligent information society. Specifically, the regulatory impact analysis framework proposed in this study consists of a total of five steps. The first step is to identify the industrial size of the target products and services, related policies, and regulatory issues. In the second stage, regulatory issues are discovered through a review of regulatory improvement items for each stage of commercialization (planning, production, commercialization). In the next step, factors related to regulatory compliance costs are derived and the costs incurred for existing regulatory compliance are calculated. In the fourth stage, an alternative is prepared by gathering opinions from the relevant industry and experts in the field, and the necessity, validity, and adequacy of the alternative are reviewed. In the final stage, the adopted alternatives are formulated so that they can be applied to legislation, and the alternatives are reviewed by legal experts. The implications of this study are summarized as follows. From a theoretical point of view, it is meaningful in that it clearly presents a series of procedures for regulatory impact analysis as a framework. Although previous studies mainly discussed the importance and necessity of regulatory impact analysis, this study presents a systematic framework that takes into consideration the various factors required for regulatory impact analysis suggested by prior studies.
From a practical point of view, this study has significance in that it was applied to actual regulatory issues based on the regulatory impact analysis framework proposed above. The results of this study show that proposals related to regulatory issues were submitted to government departments and finally the current law was revised, suggesting that the framework proposed in this study can be an effective way to resolve regulatory issues. It is expected that the regulatory impact analysis framework proposed in this study will be a meaningful guideline for technology policy researchers and policy makers in the future.