• Title/Summary/Keyword: Systems Analysis (시스템분석)


Functional Expression of an Anti-GFP Camel Heavy Chain Antibody Fused to Streptavidin (Streptavidin이 융합된 GFP항원 특이적인 VHH 항체의 기능적 발현)

  • Han, Seung Hee; Kim, Jin-Kyoo
    • Journal of Life Science / v.28 no.12 / pp.1416-1423 / 2018
  • With its strong biotin binding affinity ($K_D=10^{-14}M$), the tetrameric nature of streptavidin can be used to increase the antigen binding activity of a camel heavy chain (VHH) antibody through fusion; here, the fusion protein was detected with biotinylated horseradish peroxidase in subsequent ELISA and Western blot immunoassays. For this application, we cloned the streptavidin gene amplified from the Streptomyces avidinii chromosome by PCR and fused it to the gene of the 8B9 VHH antibody, which is specific to the green fluorescent protein (GFP) antigen. To express a soluble fusion protein in Escherichia coli, we used a pUC119 plasmid-based expression system carrying the lacZ promoter for induction by IPTG, the pelB leader sequence at the N-terminus for secretion into the periplasmic space, and a hexahistidine tag at the C-terminus for purification of the expressed protein on a $Ni^{2+}$-NTA-agarose column. Although streptavidin is toxic to E. coli because of its strong biotin binding, the soluble fusion protein was expressed successfully. In SDS-PAGE, the purified fusion protein was 122.4 kDa under native conditions and 30.6 kDa once denatured by boiling, suggesting tetramerization of the monomeric subunit by non-covalent association through the streptavidin moiety fused to the 8B9 VHH antibody. In addition, the fusion protein showed biotin binding activity similar to that of streptavidin, as well as GFP antigen binding activity, in both ELISA and Western blot analysis. In conclusion, the 8B9 VHH-streptavidin fusion protein was successfully expressed and purified as a soluble tetramer in E. coli and showed both biotin and GFP antigen binding activity, suggesting the possible production of a tetrameric, bifunctional VHH antibody.

On Utilization of Inactive Storage in Dam during Drought Period (가뭄 극복을 위한 댐의 비활용용량 활용 방안 연구)

  • Joo, Hongjun; Kim, Deokhwan; Kim, Jungwook; Bae, Younghye; Kim, Hung Soo
    • Journal of Wetlands Research / v.20 no.4 / pp.353-362 / 2018
  • The purpose of this study is to suggest a structural plan for improving the utilization of the inactive storage of a dam to overcome drought. The inactive storage of a dam consists of emergency storage and dead storage. Emergency storage can be used in emergencies such as drought, but dead storage, which is reserved for sedimentation, is generally not used even in emergencies. This study therefore considers that the part of the dead storage in which sedimentation has not yet progressed can be used during a severe drought period, and calls it the "drought storage" of a dam. Computing the drought storage requires an accurate sediment level (SL) analysis, so the present and future SL in the dam reservoir were estimated using the SED-2D model linked with the RMA-2 model of SMS. The drought storage was then determined from the additionally available storage capacity based on the estimated SL. The present SL was estimated from historical data, and the future SL from climate factors predicted under the Representative Concentration Pathways (RCP) 8.5 scenario. Dam inflows were then determined using the TANK model, and the future SL and drought storage were estimated. The results show that the available drought storage will be smaller in the future than at present, owing to the increased variability brought by climate change. Further study is therefore needed on how to increase the available drought storage in the future.
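The TANK model used above for dam inflows is a conceptual rainfall-runoff model that routes precipitation through storage tanks with side (runoff) and bottom (infiltration) outlets. As a rough illustration only, here is a minimal single-tank sketch in Python; the study uses the full multi-tank formulation calibrated to the basin, and every parameter value below is a hypothetical placeholder.

```python
# Minimal single-tank rainfall-runoff sketch (illustrative only; the study
# uses the full multi-tank TANK model, and these parameters are made up).
def tank_runoff(rainfall, evaporation, a=0.2, b=0.05, h0=5.0, storage=20.0):
    """Simulate daily runoff (mm) from one conceptual tank.

    rainfall, evaporation : daily series (mm)
    a  : side-outlet (direct runoff) coefficient per day
    b  : bottom-outlet (infiltration) coefficient per day
    h0 : side-outlet height (mm); runoff occurs only above this level
    """
    runoff = []
    for p, e in zip(rainfall, evaporation):
        storage = max(storage + p - e, 0.0)      # water balance for the day
        q = a * max(storage - h0, 0.0)           # direct runoff over the outlet
        f = b * storage                          # loss through the tank bottom
        storage = max(storage - q - f, 0.0)
        runoff.append(q)
    return runoff

# Example: five days of rainfall/evaporation (mm)
print(tank_runoff([0, 12, 30, 5, 0], [2, 2, 3, 3, 2]))
```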

A Study on the Records Management for the National Assembly Members (국회의원 기록관리 방안 연구)

  • Kim, Jang-hwan
    • The Korean Journal of Archival Studies / no.55 / pp.39-71 / 2018
  • The purpose of this study is to examine the reality of records management for National Assembly members and to suggest a desirable alternative. Until the Public Records Management Act was enacted in 1999, records management in the National Assembly did not go beyond document management, in either the administration or the legislature. Even so, the National Assembly has maintained a tradition of systematically managing its minutes and bills since the Constitutional Assembly. After the Act took effect in 2000, the National Assembly Records Management Regulation was enacted and enforced, and the National Assembly Archives was established as a subsidiary organ of the Secretariat of the National Assembly, even though its establishment was not obligatory. In addition, an archivist was appointed as a records and archives researcher for the first time in Korea, and because the Archives responds promptly according to the records schedule of the National Assembly, its service is faster than that of the administration. However, the authority over records management held by the National Assembly Archives within the Secretariat was greatly reduced, so the revision of the regulations to reflect the 2007 revision of the Act was not completed until 2011. Because the executive branch has little direct influence on the National Assembly, the records management innovation of the Roh Moo-hyun administration had little positive effect on it. Even within the National Assembly, records management by its members has drawn little attention in both practice and theory, and since the members are excluded from the Act, there is no legal basis to require them to manage their records. This study analyzes the records management problems of National Assembly members, focusing on the records management plan for the National Assembly established by the National Archives, and proposes three measures: the legislation and revision of regulations, records management consulting for National Assembly members, and the transfer of datasets from administrative information systems and websites.

Epidemiological investigation on the outbreak of foodborne and waterborne disease due to Norovirus with delayed notification (노로바이러스에 기인한 수인성·식품매개감염병 집단발생의 지연신고에 대한 역학조사)

  • Ha, Mikyung; Kim, Hyeongsu; Kim, Yong Ho; Na, Min Sun; Yu, Mi Jung
    • Journal of agricultural medicine and community health / v.43 no.4 / pp.258-269 / 2018
  • Objectives: There was an outbreak of foodborne and waterborne disease among high school students at Okcheon in June 2018. The first case occurred on June 5th, but the outbreak was not notified until seven days later. The purpose of this investigation was to identify the pathogen of the outbreak and the cause of the delayed notification. Methods: First, we conducted a questionnaire survey of 61 cases and 122 controls to determine what symptoms they had and whether they had eaten foods or drunk water from June 2nd to June 12th. Second, we investigated the environment of the cafeteria and the drinking water. Third, we examined specimens from the cases and the environment to identify bacteria or viruses. Results: The attack rate of this outbreak was 7.8%. The questionnaire survey strongly suggested drinking water as the source of infection, but we could not determine the exact time of exposure. Norovirus was identified in specimens from the cases (2 students), the drinking water (at the main building and dormitory), and the cafeteria (knife, dishtowel, and the chef's hands). Conclusions: We concluded that norovirus was the pathogen of this outbreak, based on the clinical features of the cases (diarrhea, vomiting, abdominal pain, and recovery within 2 or 3 days after onset), the waterborne pattern of the outbreak, and the microbiological examination. The causes of the delayed notification appear to be the absence of the school nurse at the time and the teachers' lack of understanding of the duty of immediate notification during an outbreak. To prevent delayed notification, the notification system for outbreaks of foodborne and waterborne disease in schools needs to be improved.
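For readers unfamiliar with the measures reported in this abstract, the sketch below shows the basic arithmetic behind an attack rate and an exposure odds ratio in a case-control survey. Only the 7.8% attack rate and the 61/122 case/control counts come from the abstract; the school population (chosen so that 61 cases yield roughly 7.8%) and the 2x2 exposure table are hypothetical placeholders.

```python
# Attack rate and case-control odds ratio: textbook formulas applied to
# partly hypothetical numbers (see the note above).
def attack_rate(cases, population):
    return cases / population

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio for one exposure (e.g., drinking water) in a case-control study."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

print(f"attack rate = {attack_rate(61, 782):.1%}")              # ~7.8%; 782 is a derived, approximate population
print(f"OR(drinking water) = {odds_ratio(55, 6, 70, 52):.2f}")  # hypothetical 2x2 counts
```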

Successful Management and Operating System of a UNESCO World Heritage Site - A Case Study on the Wadi Al-Hitan of Egypt - (유네스코 세계자연유산의 성공적인 관리와 운영체계 - 『이집트 Wadi Al-Hitan』의 사례 -)

  • Lim, Jong Deock
    • Korean Journal of Heritage: History & Science / v.44 no.1 / pp.106-121 / 2011
  • The number of World Natural Heritage Sites is smaller than the number of World Cultural Heritage Sites. As of 2010, there were 180 natural sites in total, less than one third of the number of cultural sites. The smaller number of natural sites can be attributed to the evaluation criteria for outstanding universal value (OUV). Only 9 fossil-related sites have been designated among the 180 natural sites. This study compares the OUVs, academic values, and characteristics of these 9 sites to provide data and reference material for the application of the Korean Cretaceous Dinosaur Coast (KCDC) as a World Natural Heritage Site. The study was carried out to obtain information and data on the Wadi Al-Hitan of Egypt, a designated World Natural Heritage Site, and included field investigation of whale fossils, interviews with site paleontologists and staff, and inspections of facilities. Three factors appear to account for its successful management and operating system. First, there is a system for comprehensive research and a monitoring plan. Second, experts have been recruited and hired, and professional training for staff members has been conducted properly. Finally, the Wadi Al-Hitan has developed local resources with conservation and construction-design techniques that match the whale fossils and the environment of the site. The Wadi Al-Hitan put a master plan into practice and achieved the goals of its action plans. To have a future World Natural Heritage Site designated in Korea, it is important to be recognized by international experts, including IUCN specialists, as the best in one's field in terms of OUV. Full-time, regular-status research staff are necessary from the preparation stage of a UNESCO World Heritage nomination. Local governments and related organizations must do their best to carry out monitoring plans and to enhance academic value after designation. As experienced during the designation of Jeju Volcanic Island and Lava Tubes, the first Korean World Natural Heritage Site, participation by various scholars and specialists needs to be in harmony with active efforts by local governments and NGOs.

Development Process for User Needs-based Chatbot: Focusing on Design Thinking Methodology (사용자 니즈 기반의 챗봇 개발 프로세스: 디자인 사고방법론을 중심으로)

  • Kim, Museong; Seo, Bong-Goon; Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.221-238 / 2019
  • Recently, companies and public institutions have been actively introducing chatbot services for customer counseling and response. Introducing a chatbot service not only saves labor costs for companies and organizations but also enables rapid communication with customers. Advances in data analytics and artificial intelligence are driving the growth of these services: current chatbots can understand users' questions and offer the most appropriate answers through machine learning and deep learning. The advancement of core chatbot technologies such as NLP, NLU, and NLG has made it possible to understand words, paragraphs, meanings, and even emotions, and the value of chatbots therefore continues to rise. However, technology-driven chatbots can diverge from what users inherently want, so chatbots need to be addressed in the area of user experience, not just technology. The Fourth Industrial Revolution highlights the importance of user experience alongside advances in artificial intelligence, big data, cloud, and IoT technologies. The development of IT and the growing importance of user experience have given people diverse environments and changed lifestyles, which means that experiences in interactions with people, services (products), and environments have become very important. It is therefore time to develop user needs-based services (products) that can provide new experiences and values. This study proposes a chatbot development process based on user needs, applying the design thinking approach, a representative methodology in the field of user experience, to chatbot development. The proposed process consists of four steps. The first step, 'Setting up the knowledge domain', establishes the chatbot's area of expertise. The second step, 'Knowledge accumulation and insight identification', accumulates the information corresponding to the configured domain and derives insights. The third step, 'Opportunity development and prototyping', is where full-scale development begins. Finally, the 'User feedback' step collects feedback from users on the developed prototype, producing a user needs-based service (product) that meets the objectives of the process. Beginning with fact gathering through user observation, the process moves through abstraction to derive insights and explore opportunities; it then materializes the findings to structure the desired information and provide functions that fit the user's mental model, so that the resulting chatbot meets the user's needs. To confirm the effectiveness of the proposed process, we present an actual construction example for the domestic cosmetics market, chosen because user experience plays a strong role there, allowing user responses to be understood quickly. This study has a theoretical implication in that it proposes a new chatbot development process by incorporating the design thinking methodology, and it differs from existing chatbot development research in that it focuses on user experience rather than technology. It also has practical implications in that it proposes realistic methods that companies or institutions can apply immediately. In particular, the proposed process can be accessed and utilized by anyone, since user needs-based chatbots can be developed even by non-experts. Because only one field was examined, further studies are needed; in addition to the cosmetics market, research should be conducted in other fields where user experience matters strongly, such as the smartphone and automotive markets. Through this, the process can mature into a general process for user experience-centered, rather than technology-centered, chatbot development.

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.43-62 / 2019
  • Anomaly detection was once dominated by methods that judged abnormality from statistics derived from the data. This worked because data used to be low-dimensional, so classical statistical methods were effective. However, as the characteristics of data have grown complex in the era of big data, it has become difficult to analyze and predict industrial data accurately in the conventional way, and supervised learning algorithms such as SVM and decision trees came into use. Supervised models, however, predict test data accurately only when the class distribution is balanced, while most data generated in industry is class-imbalanced, so their predictions are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by the class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model built on convolutional neural networks that performs anomaly detection on medical images. Compared with image data, anomaly detection for sequence data with generative adversarial networks has received little research attention. Li et al. (2018) proposed a model based on LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it has not been applied to categorical sequence data, nor has it used the feature matching method of Salimans et al. (2016). Much therefore remains to be explored in anomaly classification of sequence data with generative adversarial networks. To learn sequence data, both parts of the generative adversarial network are built on LSTMs: the generator consists of two stacked LSTM layers with 32-dimensional and 64-dimensional hidden units, and the discriminator consists of an LSTM layer with 64-dimensional hidden units. Whereas existing work on sequence anomaly detection derives anomaly scores from the entropy of the probability assigned to the actual data, this paper derives anomaly scores using the feature matching technique, as mentioned earlier. In addition, the latent variable optimization process was designed with an LSTM to improve model performance. The modified generative adversarial model was more accurate than the autoencoder in all experiments in terms of precision and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from real categorical sequence data, it is not swayed by any single normal sample, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% versus 96% for the generative adversarial network, and the sensitivity was 40% versus 51%. Experiments were also conducted to show how much performance changes with the structure used to optimize the latent variable; sensitivity improved by about 1%. These results offer a new perspective on latent variable optimization, which had previously received little attention.
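As a concrete illustration of the architecture and scoring rule described in the abstract, here is a minimal PyTorch sketch: an LSTM generator with two stacked layers (32, then 64 hidden units), an LSTM discriminator with 64 hidden units, and a feature-matching anomaly score obtained by optimizing a latent code. Everything the abstract does not state is an assumption: the sequence length and feature dimension, the use of plain Adam for the latent search (the paper designs this step with an LSTM), and all training details.

```python
# LSTM-GAN with a feature-matching anomaly score: a sketch of the setup
# described in the abstract, with assumed dimensions and no training loop.
import torch
import torch.nn as nn

SEQ_LEN, N_FEATURES, LATENT_DIM = 20, 8, 16  # hypothetical sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm1 = nn.LSTM(LATENT_DIM, 32, batch_first=True)  # 32-dim hidden layer
        self.lstm2 = nn.LSTM(32, 64, batch_first=True)          # 64-dim hidden layer
        self.out = nn.Linear(64, N_FEATURES)

    def forward(self, z):                    # z: (batch, seq_len, latent_dim)
        h, _ = self.lstm1(z)
        h, _ = self.lstm2(h)
        return torch.tanh(self.out(h))       # generated sequence

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, 64, batch_first=True)   # 64-dim hidden layer
        self.out = nn.Linear(64, 1)

    def features(self, x):                   # intermediate features for matching
        h, _ = self.lstm(x)
        return h[:, -1, :]                   # last hidden state: (batch, 64)

    def forward(self, x):
        return torch.sigmoid(self.out(self.features(x)))

def anomaly_score(d, g, x, n_steps=100, lr=0.01):
    """Feature matching: fit a latent code z so G(z) mimics x, then score x
    by the squared distance between discriminator features of x and G(z)."""
    z = torch.randn(x.size(0), SEQ_LEN, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)       # assumption: plain Adam latent search
    loss = None
    for _ in range(n_steps):
        opt.zero_grad()
        loss = torch.mean((d.features(x) - d.features(g(z))) ** 2)
        loss.backward()
        opt.step()
    return loss.item()                       # higher score -> more anomalous

# Usage with untrained stand-in models (in practice, train g and d first)
g, d = Generator(), Discriminator()
x = torch.randn(1, SEQ_LEN, N_FEATURES)      # stand-in for one real sequence
print(anomaly_score(d, g, x))
```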

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • In line with the rapidly increasing demand for text data analysis, research on and investment in text mining are being actively conducted not only in academia but also across industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining studies focused on the second step; however, with the recognition that the structuring process substantially influences the quality of the results, various embedding methods have been actively studied to improve analysis quality by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be fed directly into a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form the computer can understand. Mapping arbitrary objects into a space of fixed dimension while preserving algebraic properties, for the purpose of structuring text data, is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding grows rapidly, many algorithms have been developed to support it; among them, doc2Vec, which extends word2Vec and embeds each document into a single vector, is the most widely used. However, traditional document embedding methods represented by doc2Vec generate a vector for each document from the entire text of the document, so the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional schemes map each document to a single vector, which makes it difficult to accurately represent a complex document covering multiple subjects. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords with any of various analysis methods, but since keyword extraction is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text. The proposed method consists of (1) parsing, (2) word embedding, (3) keyword vector extraction, (4) keyword clustering, and (5) multiple-vector generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to avoid the influence of miscellaneous words, the vectors corresponding to the keywords of each document are extracted and form a set of keyword vectors for the document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the traditional single-vector approach cannot properly map complex documents because of interference among subjects within a single vector, whereas the proposed multi-vector method vectorizes complex documents more accurately by eliminating that interference.
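To make the five-step pipeline concrete, here is a minimal Python sketch using gensim's word2Vec for step (2) and k-means for step (4). The abstract names the steps but not the specific algorithms or sizes, so the toy corpus, keyword lists, 50-dimensional embedding, and cluster count are all illustrative assumptions.

```python
# Multi-vector document embedding sketch: parse -> embed -> extract keyword
# vectors -> cluster keywords -> one vector per cluster (toy data throughout).
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# (1) Parsing: tokenized documents with predefined keywords
docs = [
    {"tokens": "deep learning model improves text classification accuracy".split(),
     "keywords": ["deep", "learning", "text", "classification"]},
    {"tokens": "sediment level analysis supports dam storage planning".split(),
     "keywords": ["sediment", "dam", "storage"]},
]

# (2) Word embedding over the whole corpus
w2v = Word2Vec([d["tokens"] for d in docs], vector_size=50, min_count=1, seed=0)

def multi_vectors(doc, n_clusters=2):
    # (3) Keyword vector extraction
    kv = np.array([w2v.wv[k] for k in doc["keywords"] if k in w2v.wv])
    if len(kv) == 0:
        return []
    # (4) Keyword clustering to identify the document's subjects
    n = min(n_clusters, len(kv))
    labels = KMeans(n_clusters=n, n_init=10, random_state=0).fit_predict(kv)
    # (5) Multiple-vector generation: one centroid vector per subject cluster
    return [kv[labels == c].mean(axis=0) for c in range(n)]

for d in docs:
    vectors = multi_vectors(d)
    print(len(vectors), "subject vectors of shape", vectors[0].shape)
```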

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung; Won, Ha-Ram; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.201-220 / 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussion and research on how to solve the problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection is a form of document classification, so document classification techniques have been widely used in this research, whereas document summarization techniques have been inconspicuous. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. The integration of document summarization technology therefore needs to be studied in the domestic news data environment. To examine the effect of extractive summarization on fake news detection models, we first summarized news articles through extractive summarization, then built a detection model based on the summarized news, and finally compared it with a full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) showed little difference in performance between the two models; for DT (Decision Tree), the full-text-based model performed somewhat better; and for LR (Logistic Regression), the summary-based model performed better. Nonetheless, the differences between the summary-based and full-text-based models were not statistically significant. This suggests that summarization preserves at least the core information of the fake news, and that the LR-based model shows the potential for performance improvement. This study is an experimental application of extractive summarization to fake news detection research employing various machine learning algorithms. Its limitations are the relatively small amount of data and the lack of comparison among various summarization technologies; an in-depth analysis that applies various techniques to a larger data volume would be helpful in the future.
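The experimental pipeline (extractive summarization first, then a classifier trained on the summaries) can be sketched as below. The abstract does not name a particular summarizer or feature set, so the TF-IDF sentence-scoring summarizer, the TF-IDF features, and the toy labeled articles are all illustrative assumptions.

```python
# Extractive summarization + logistic-regression fake news detection sketch
# (toy data; the scoring rule is an assumption, not the paper's summarizer).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def extractive_summary(article, n_sentences=2):
    """Keep the n highest-weight sentences, in their original order."""
    sents = [s.strip() for s in article.split(".") if s.strip()]
    if len(sents) <= n_sentences:
        return article
    weights = np.asarray(TfidfVectorizer().fit_transform(sents).sum(axis=1)).ravel()
    top = sorted(np.argsort(weights)[-n_sentences:])
    return ". ".join(sents[i] for i in top)

articles = [  # toy labeled news articles: 0 = real, 1 = fake
    "Officials confirmed the budget today. The plan funds road repairs. Weather was mild.",
    "Miracle pill cures all disease overnight. Doctors hate this trick. Buy now before the ban.",
    "The council approved the new library. Construction starts in May. Local cafes are busy.",
    "Aliens endorse a candidate in a secret tape. Sources refuse to be named. Share this widely.",
]
labels = [0, 1, 0, 1]

summaries = [extractive_summary(a) for a in articles]
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(summaries), labels)
print(clf.predict(vec.transform([extractive_summary(articles[1])])))  # expect [1]
```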

A Development and Validation Study of the Web-based Korean Version of the Eating Disorder Diagnostic Scale DSM-5 (웹 기반 한국판 섭식장애진단척도 DSM-5의 개발 및 타당화 연구)

  • Lee, Hye Rin; Kwag, Kyung Hwa; Lee, You Kyung; Han, Soo Wan; Kim, Youl-Ri
    • Korean Journal of Psychosomatic Medicine / v.28 no.2 / pp.185-193 / 2020
  • Objectives: The aim of this study was to develop and validate the Korean version of the Eating Disorder Diagnostic Scale DSM-5 (K-EDDS) as a web-based diagnostic system enabling rapid diagnosis of patients for early intervention. Methods: A total of 119 persons participated in the study, including patients with eating disorders (n=38) and college students (n=81). Along with the paper-and-pencil SCOFF, all participants completed the web-based K-EDDS, the Eating Disorder Examination-Questionnaire (EDE-Q), and the Clinical Impairment Assessment questionnaire (CIA). A semi-structured interview using the Eating Disorder Examination (EDE) was conducted with participants whose SCOFF score was two or higher. Within two weeks, the web-based K-EDDS, the EDE-Q, and the CIA were re-administered. Results: Exploratory factor analysis extracted four factors: body dissatisfaction, binge behaviors, binge frequency, and compensatory behaviors. The four subscales of the web-based K-EDDS correlated significantly with the corresponding four subscales of the EDE-Q. The internal consistency of the web-based K-EDDS was highly satisfactory (Cronbach's alpha=0.93). The diagnostic agreement between the web-based K-EDDS and the EDE was excellent (96.83%), and the test-retest diagnostic agreement of the web-based K-EDDS was fairly good (92.86%). The web-based K-EDDS and the CIA also showed significant differences between the patients and the general population, supporting discriminant validity. Conclusions: This study suggests that the web-based K-EDDS is a valid tool for assisting the diagnosis of eating disorders based on DSM-5 in clinical and research settings.
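Cronbach's alpha, the internal-consistency statistic reported above (0.93), is computed from an item-response matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The sketch below applies this standard formula to randomly generated correlated item scores; nothing here reflects the actual K-EDDS data.

```python
# Cronbach's alpha from an (n_respondents x n_items) score matrix
# (standard formula; the data below are simulated, not from the study).
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=(100, 1))                       # shared trait drives all items
scores = trait + rng.normal(scale=0.5, size=(100, 6))   # 6 correlated items
print(f"alpha = {cronbach_alpha(scores):.2f}")
```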