• Title/Summary/Keyword: Deleted Data

Search Results: 219

Detecting and Extracting Changed Objects in Ground Information (지반정보 변화객체 탐지·추출 시스템 개발)

  • Kim, Kwangsoo;Kim, Bong Wan;Jang, In Sung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.515-523 / 2021
  • An integrated underground spatial map consists of underground facilities, underground structures, and ground information, and is periodically updated. In this paper, we design and implement a system for detecting and extracting only changed ground objects in order to shorten the map update time. To find the changed objects, all objects included in the newly input map are compared with those in the reference map of the integrated map. Since the entire process of comparing objects and generating results is organized by function, the implemented system is composed of several modules: an object comparer, a changed object detector, a history data manager, a changed object extractor, a changed type classifier, and a changed object saver. We use two metrics, detection rate and extraction rate, to evaluate the performance of the system. As a result of applying the system to boreholes, ground wells, soil layers, and rock floors in Pyeongtaek, 100% of the inserted, deleted, and updated objects in each layer were detected. In addition, the system ensures the up-to-dateness of the reference map by downloading it whenever maps are compared. In the future, additional research is needed to confirm the stability and effectiveness of the developed system on various data so it can be applied in the field.
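
The change-detection flow described above (comparing every object in a newly input layer against the reference layer and classifying the result as inserted, deleted, or updated) can be illustrated with a minimal sketch. This is not the authors' implementation; the object IDs, attributes, and dictionary representation are assumptions for illustration only.

```python
# A minimal sketch, not the authors' implementation: classify ground objects as
# inserted, deleted, or updated by comparing a newly input layer against the
# reference layer. Object IDs and attribute dicts are illustrative assumptions.
from typing import Any, Dict, Set, Tuple

def detect_changes(reference: Dict[str, Dict[str, Any]],
                   new_map: Dict[str, Dict[str, Any]]
                   ) -> Tuple[Set[str], Set[str], Set[str]]:
    """Return the IDs of inserted, deleted, and updated objects."""
    ref_ids, new_ids = set(reference), set(new_map)
    inserted = new_ids - ref_ids                 # only in the new map
    deleted = ref_ids - new_ids                  # only in the reference map
    updated = {oid for oid in ref_ids & new_ids  # present in both, attributes differ
               if reference[oid] != new_map[oid]}
    return inserted, deleted, updated

# Hypothetical borehole objects
reference = {"BH-01": {"depth": 12.0}, "BH-02": {"depth": 8.5}}
new_map = {"BH-01": {"depth": 14.0}, "BH-03": {"depth": 6.0}}
print(detect_changes(reference, new_map))  # ({'BH-03'}, {'BH-02'}, {'BH-01'})
```

A production system would additionally persist and classify the results (the paper's history data manager, changed type classifier, and changed object saver modules), which this sketch omits.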

Investigating Dynamic Mutation Process of Issues Using Unstructured Text Analysis (비정형 텍스트 분석을 활용한 이슈의 동적 변이과정 고찰)

  • Lim, Myungsu;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.1-18 / 2016
  • Owing to the extensive use of Web media and the development of the IT industry, a large amount of data has been generated, shared, and stored. Nowadays, various types of unstructured data such as image, sound, video, and text are distributed through Web media. Therefore, many attempts have been made in recent years to discover new value through an analysis of these unstructured data. Among these types of unstructured data, text is recognized as the most representative method for users to express and share their opinions on the Web. In this sense, demand for obtaining new insights through text analysis is steadily increasing. Accordingly, text mining is increasingly being used for different purposes in various fields. In particular, issue tracking is being widely studied not only in the academic world but also in industry because it can be used to extract various issues from texts such as news and SNS (Social Network Services) and to analyze the trends of these issues. Conventionally, issue tracking is used to identify major issues sustained over a long period of time through topic modeling and to analyze the detailed distribution of documents involved in each issue. However, because conventional issue tracking assumes that the content composing each issue does not change throughout the entire tracking period, it cannot represent the dynamic mutation process of detailed issues that can be created, merged, divided, and deleted between these periods. Moreover, because only keywords that appear consistently throughout the entire period can be derived as issue keywords, concrete issue keywords such as "nuclear test" and "separated families" may be concealed by more general issue keywords such as "North Korea" in an analysis over a long period of time. This implies that many meaningful but short-lived issues cannot be discovered by conventional issue tracking. Note that detailed keywords are preferable to general keywords because the former can be clues for providing actionable strategies. To overcome these limitations, we performed an independent analysis on the documents of each detailed period. We generated an issue flow diagram based on the similarity of each issue between two consecutive periods. The issue transition pattern among categories was analyzed by using the category information of each document. In this study, we then applied the proposed methodology to a real case of 53,739 news articles. We derived an issue flow diagram from the articles. We then proposed the following useful application scenarios for the issue flow diagram presented in the experiment section. First, we can identify an issue that actively appears during a certain period and promptly disappears in the next period. Second, the preceding and following issues of a particular issue can be easily discovered from the issue flow diagram. This implies that our methodology can be used to discover the association between inter-period issues. Finally, an interesting pattern of one-way and two-way transitions was discovered by analyzing the transition patterns of issues through category analysis. Thus, we discovered that a pair of mutually similar categories induces two-way transitions. In contrast, one-way transitions can be recognized as an indicator that issues in a certain category tend to be influenced by other issues in another category. For practical application of the proposed methodology, high-quality word and stop word dictionaries need to be constructed.
In addition, not only the number of documents but also additional meta-information such as the read counts, written time, and comments of documents should be analyzed. A rigorous performance evaluation or validation of the proposed methodology should be performed in future works.
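
The issue flow diagram described in this abstract links issues across consecutive periods by their similarity. The sketch below illustrates one plausible way to do this, using cosine similarity over keyword-weight vectors; the similarity measure, threshold, and data are assumptions, not the paper's actual method.

```python
# Illustrative only: link issues in consecutive periods by the cosine similarity
# of their keyword-weight vectors. The similarity measure, threshold, and data
# below are assumptions, not the paper's actual settings.
import math

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_issues(period_t: dict, period_t_next: dict, threshold: float = 0.3):
    """Return (issue_t, issue_t+1, similarity) edges of the issue flow diagram."""
    edges = []
    for i, vi in period_t.items():
        for j, vj in period_t_next.items():
            sim = cosine(vi, vj)
            if sim >= threshold:
                edges.append((i, j, round(sim, 3)))
    return edges

# Hypothetical keyword distributions for two consecutive periods
period_1 = {"issue_A": {"nuclear": 0.6, "test": 0.4}}
period_2 = {"issue_B": {"nuclear": 0.5, "sanction": 0.5},
            "issue_C": {"separated": 0.7, "families": 0.3}}
print(link_issues(period_1, period_2))  # issue_A flows into issue_B
```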

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC / v.10C no.6 / pp.683-694 / 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor massive volumes of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, our algorithm should be very efficient in speed and use of memory space. X-tree Diff uses a special ordered labeled tree, X-tree, to represent XML/HTML documents. X-tree nodes have a special field, tMD, which stores a 128-bit hash value representing the structure and data of the subtree, so that identical subtrees from the old and new versions can be matched. During this process, X-tree Diff uses the Rule of Delaying Ambiguous Matchings: it performs exact matching only where a node in the old version has a one-to-one correspondence with the corresponding node in the new version, delaying all the others. This drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upwards in Step 2, and obtains more matchings downwards from the roots in Step 3. In Step 4, nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as in the average case. This result is even better than that of the BULD Diff algorithm, which is O(n log(n)) in the worst case. We experimented with X-tree Diff on real data, about 11,000 home pages from about 20 web sites, rather than on synthetic documents manipulated for experimentation. Currently, the X-tree Diff algorithm is being used in a commercial hacking detection system, called WIDS (Web-Document Intrusion Detection System), which finds changes occurring in registered websites and reports suspicious changes to users.
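
A minimal sketch of the subtree-hashing idea behind the tMD field: each node stores a 128-bit digest computed from its label, text, and its children's digests, so identical subtrees in the old and new versions can be found by digest lookup. This illustrates only that first matching step, not the full X-tree Diff algorithm (Steps 2-4 and the Rule of Delaying Ambiguous Matchings are omitted), and MD5 is used simply as a convenient 128-bit hash.

```python
# A minimal sketch of the subtree-hash matching idea only, not the original
# X-tree Diff code. Each node's digest (analogous to the tMD field) is a 128-bit
# hash over its label, text, and children's digests.
import hashlib
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    text: str = ""
    children: List["Node"] = field(default_factory=list)
    digest: str = ""

def compute_digest(node: Node) -> str:
    """Post-order hash of the subtree rooted at this node."""
    child_digests = "".join(compute_digest(c) for c in node.children)
    node.digest = hashlib.md5(
        (node.label + "\x00" + node.text + "\x00" + child_digests).encode()
    ).hexdigest()
    return node.digest

def index_by_digest(root: Node, table=None):
    """Map digest -> nodes, so identical subtrees across versions can be matched."""
    table = {} if table is None else table
    table.setdefault(root.digest, []).append(root)
    for c in root.children:
        index_by_digest(c, table)
    return table

old = Node("html", children=[Node("p", "hello")])
new = Node("html", children=[Node("p", "hello"), Node("p", "world")])
compute_digest(old)
compute_digest(new)
shared = set(index_by_digest(old)) & set(index_by_digest(new))
print(len(shared))  # 1: the unchanged <p>hello</p> subtree matches across versions
```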

Korean Food Exchange Lists for Diabetes: Revised 2010 (2010 당뇨병 환자를 위한 식품교환표 개정)

  • Ju, Dal-Lae;Jang, Hak-Chul;Cho, Young-Yun;Cho, Jae-Won;Yoo, Hye-Sook;Choi, Kyung-Suk;Woo, Mi-Hye;Sohn, Cheong-Min;Park, Yoo-Kyoung;Choue, Ryo-Won
    • Journal of Nutrition and Health / v.44 no.6 / pp.577-591 / 2011
  • A food exchange system for diabetes is a useful tool for meal planning and nutritional education. The first edition of the Korean food exchange lists was developed in 1988 and the second edition was revised in 1995. With recent changes in the food marketplace and eating patterns of Koreans, the third edition of the food exchange lists was revised in 2010 by the Korean Diabetes Association, the Korean Nutrition Society, the Korean Society of Community Nutrition, the Korean Dietetic Association and the Korean Association of Diabetes Dietetic Educators through a joint research effort. The third edition is based on nutritional recommendations for people with diabetes and focuses on adding foods to implement personalized nutrition therapy considering individual preferences in a diverse dietary environment. Foods were selected based on scientific evidence including the 2007 Korea National Health and Nutrition Examination Survey data analysis and survey responses from 53 diabetes dietetic educators. While a few foods were deleted, a number of foods were added, resulting in 313 food items in the food group lists and 339 food items in the appendix. Consistent with previous editions, the third edition of the food exchange lists includes six food categories (grains, meat, vegetables, fats and oils, milk, and fruits). The milk group was subdivided into a whole milk group and a low fat milk group. The standard nutrient content in one exchange from each food group was kept almost the same as in the previous edition. The Korea Food & Drug Administration's FANTASY (Food And Nutrient daTA SYstem) database was used to obtain nutrient values for each individual food and to determine the serving size most appropriate for matching the reference nutrient values of each food group. The revised food exchange lists were subjected to a public hearing by experts. The third edition of the food exchange lists will be a helpful tool for educating people with diabetes to select the kinds and amounts of foods for glycemic control, which will eventually lead to preventing complications while maintaining the pleasure of eating.

Characterization of the Monoclonal Antibody Specific to Human S100A6 Protein (인체 S100A6 단백질에 특이한 단일클론 항체)

  • Kim, Jae Wha;Yoon, Sun Young;Joo, Joung-Hyuck;Kang, Ho Bum;Lee, Younghee;Choe, Yong-Kyung;Choe, In Seong
    • IMMUNE NETWORK / v.2 no.3 / pp.175-181 / 2002
  • Background: S100A6 is a calcium-binding protein overexpressed in several tumor cell lines, including melanoma with high metastatic activity, and involved in various cellular processes such as cell division and differentiation. To detect S100A6 protein in patient samples (e.g., blood or tissue), it is essential to produce a monoclonal antibody specific to the protein. Methods: First, cDNA coding for the ORF region of the human S100A6 gene was amplified and cloned into an expression vector for GST fusion protein. We produced recombinant S100A6 protein and, subsequently, monoclonal antibodies to the protein. The specificity of the anti-S100A6 monoclonal antibody was confirmed using recombinant proteins of other S100A family members (GST-S100A1, GST-S100A2 and GST-S100A4) and the cell lysates of several human cell lines. Also, to identify the specific recognition site of the monoclonal antibody, we performed immunoblot analysis with serially deleted S100A6 recombinant proteins. Results: GST-S100A6 recombinant protein was induced and purified. The S100A6 protein without the GST portion was then obtained, and a monoclonal antibody to the protein was produced. The monoclonal antibody (K02C12-1; patent number 330311) shows no cross-reaction with several other S100 family proteins. It appears that the anti-S100A6 monoclonal antibody reacts with the region containing amino acids 46 to 61 of the S100A6 protein. Conclusion: These data suggest that the anti-S100A6 monoclonal antibody produced here can be very useful in the development of a diagnostic system for S100A6 protein.

A Study on the GUI Design of Fashion Customizing Web : Centered on Custom Knitware (패션 커스터마이징 웹 GUI디자인연구 : 커스텀 니트웨어를 중심으로)

  • Jang, Hui-Su;Nam, Won-Suk
    • The Journal of the Korea Contents Association / v.20 no.4 / pp.124-137 / 2020
  • The need for customized products has been increasing in recent years as more active consumers purchase according to their own values. Accordingly, fashion customizing web services are becoming popular, but their degree of customization freedom is low, so this study seeks to increase that freedom by applying knitwear. To this end, a theoretical review was conducted through prior research and literature on customization, knit design, and GUI, and based on this, a case analysis was conducted focusing on knit-making programs and fashion customizing web services. Knit design involves more considerations than other fashion design processes, resulting in more UI elements, so the interface should use visual elements that users can easily recognize. Therefore, draft assessment items were derived based on the preceding research, and three Delphi surveys were conducted with experts based on the draft. Items were modified or deleted during the Delphi process to produce the Custom Knitware Web GUI Design Guide. Through this study, we were able to identify the need for intuitive understanding and application of knit custom functions in the GUI design of custom knitwear web services. It is expected that these findings will be used to improve the usability of custom knitwear websites and to serve as a reference for knit design fields that utilize knitting machines.

Development of Grocery Shopping Skills Enhancement Program for Chronic Schizophrenia Using Delphi Study (만성조현병 환자를 위한 식료품 쇼핑 기술 강화 프로그램 기초연구: 델파이기법)

  • Kim, Yong-Sub;Lee, Seong-A
    • The Journal of Korean society of community based occupational therapy / v.10 no.1 / pp.17-30 / 2020
  • Objective: The purpose of this study is to provide basic data, using the Delphi method, for the development of an instrumental activities of daily living training program on grocery shopping for patients with schizophrenia. Methods: The final program items and contents were completed through the first to third Delphi surveys, conducted from August 2018 to March 2019. The expert panel comprised 26 occupational therapists related to mental health; three surveys were conducted and 23 experts participated in the Delphi survey. The second questionnaire, created from an open-ended questionnaire, was designed to rate the degree of importance on a 5-point Likert scale. Based on the responses to the third questionnaire, the level of expert consensus was reconfirmed by analyzing the mean, standard deviation, and content validity ratio (CVR). Results: Three rounds of Delphi research revealed four categories of questions: grocery shopping views, product purchase strategies, necessary functions, and expert knowledge and experience on how to make purchase decisions; 24 items were selected. Through the second and third Delphi surveys, 4 items that did not meet the goodness-of-fit criteria or duplicated other content were deleted, and 20 items were finally extracted. Conclusion: Expert agreement on grocery shopping skills was drawn from an occupational therapy perspective so that patients with schizophrenia living in the community can recover and participate as members of society.
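
The abstract reconfirms expert consensus with the content validity ratio (CVR). Assuming the standard Lawshe formula CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item essential and N is the panel size, a minimal illustration with hypothetical numbers looks like this:

```python
# Lawshe's content validity ratio, assumed to be the CVR referred to above:
# CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an
# item "essential" and N is the panel size (23 experts in this study).
def content_validity_ratio(n_essential: int, n_panel: int) -> float:
    return (n_essential - n_panel / 2) / (n_panel / 2)

# Hypothetical item rated essential by 18 of the 23 participating experts
print(round(content_validity_ratio(18, 23), 2))  # 0.57
```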

Taste Compounds of Fresh-Water Fishes 8. Taste Compounds of Crucian Carp Meat (담수어의 정미성분에 관한 연구 8. 붕어의 정미성분)

  • YANG Syng-Taek;LEE Eung-Ho
    • Korean Journal of Fisheries and Aquatic Sciences / v.17 no.3 / pp.170-176 / 1984
  • This study was directed at defining the taste compounds of crucian carp, Carassius carassius. Free amino acids, nucleotides and their related compounds, organic bases, sugars, organic acids and minerals in the extracts of crucian carp were analyzed, followed by sensory evaluation of synthetic extracts prepared from 44 pure chemicals on the basis of the analytical data. Taste panel assessments of synthetic extracts prepared with each extractive component omitted were carried out by a triangle difference test, and changes in the taste profile were assessed. In the free amino acid composition, histidine was dominant, occupying 46% of the total free amino acids. The other abundant free amino acids were glycine, lysine, alanine and taurine. As for the nucleotides, IMP was dominant, accounting for about 80% of the total nucleotides. The most abundant organic base was total creatinine. The content of betaine was low, and TMAO was present only in trace amounts. The main organic acids were succinic, propionic, butyric and valeric acid. Small amounts of glucose, fructose and inositol were detected, while ribose and arabinose were present only in trace amounts. K⁺, Na⁺, PO₄³⁻ and Cl⁻ were found to be the major ions, and small amounts of Ca²⁺ and Mg²⁺ were detected. Judging from the results of the omission test, the major components contributing to the taste were serine, glutamic acid, lysine, arginine, tyrosine, phenylalanine, IMP, Na⁺, K⁺ and PO₄³⁻.


Deriving Priorities of Competences Required for Digital Forensic Experts using AHP (AHP 방법을 활용한 디지털포렌식 전문가 역량의 우선순위 도출)

  • Yun, Haejung;Lee, Seung Yong;Lee, Choong C.
    • The Journal of Society for e-Business Studies / v.22 no.1 / pp.107-122 / 2017
  • Nowadays, digital forensic experts are not only computer experts who restore and find deleted files, but also general experts who possess various capabilities including knowledge about processes/laws, communication skills, and ethics. However, there have been few studies about the qualifications or competencies required for digital forensic experts compared with their importance. Therefore, in this study, AHP questionnaires were distributed to digital forensic experts and analyzed to derive the priorities of competencies; the first tier consisted of knowledge, technology, and attitude, and the second tier comprised 20 items. The findings showed that the most important first-tier competency was knowledge, followed by technology and attitude, but no significant difference was found among them. Among the 20 second-tier competencies, the most important was "digital forensics equipment/tool program utilization skill," followed by "data extraction and imaging skill from storage devices." Attitude-related items such as "judgment," "morality," "communication skill," and "concentration" followed. The least critical one was "substantial law related to actual cases." Previous studies on training/education for digital forensics experts focused on law, IT knowledge, and the usage of analytic tools, while attitude-related competencies have not been given proper attention. We hope this study can provide helpful implications for designing curricula and qualifying exams to foster digital forensic experts.
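
As a rough illustration of how AHP turns pairwise comparisons into competency priorities, the sketch below applies the geometric-mean approximation to a hypothetical comparison matrix for the first-tier criteria; the judgments shown are invented and are not the study's data.

```python
# A rough AHP illustration, not the study's data: derive priority weights for the
# first-tier criteria from a pairwise comparison matrix using the geometric-mean
# approximation. The comparison judgments below are invented.
import math

criteria = ["knowledge", "technology", "attitude"]
# comparison[i][j] = how strongly criterion i is preferred over criterion j
comparison = [
    [1.0, 2.0, 3.0],
    [1 / 2, 1.0, 2.0],
    [1 / 3, 1 / 2, 1.0],
]

geo_means = [math.prod(row) ** (1 / len(row)) for row in comparison]
weights = [g / sum(geo_means) for g in geo_means]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")  # knowledge > technology > attitude, as in the study
```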

Construction and Validation of Infection Control Practice Scale for Dental Hygienist (치과위생사의 감염관리 실천도 측정도구의 개발과 타당화)

  • Cho, Young-Sik;Jun, Bo-Hye;Choi, Young-Suk
    • Journal of dental hygiene science / v.9 no.1 / pp.53-59 / 2009
  • Infection control is now recognized as an important quality indicator in dental health service settings. The purpose of this study was to develop and validate a Dental Hygienist's Infection Control Practice Scale for quality management of dental health services in Korea. Data from 254 dental hygienists were subjected to exploratory factor analysis using SPSS 16.0 and confirmatory factor analysis using AMOS 16.0. The preliminary scale consisted of 21 items in 5 subscales. Principal component analysis was completed with Varimax rotation; the results showed a change in factor structure from a 5-factor solution to a 4-factor solution. The confirmatory factor analysis confirmed four subscales (immunization and periodic tests, clinical procedure, handwashing, personal protection) with a total of 12 items. After items with low factor loadings were deleted, the measurement model was tested. The measurement model showed the following fit indices: χ² = 79.593 (df = 38, p = 0.000), RMR = 0.045, GFI = 0.940, CFI = 0.904, AGFI = 0.896, NFI = 0.837, TLI = 0.861, RMSEA = 0.67. The squared correlations between the four constructs were less than the average variance extracted (AVE) of the constructs. Multiple regression analysis was also completed; the dependent variable was infection control practice as perceived by dental hygienists, and the independent variables were the four summated subscales (R = 0.552, R² = 0.304, adjusted R² = 0.431, F = 25.813, p = 0.000). The unstandardized coefficients of three of the independent variables were statistically significant.
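
The discriminant validity check mentioned above compares the average variance extracted (AVE) of each construct with the squared correlations between constructs (the Fornell-Larcker criterion). A minimal sketch with hypothetical loadings, not the study's values:

```python
# Hypothetical illustration of the Fornell-Larcker check mentioned above: the AVE
# of each construct (mean of squared standardized loadings) should exceed the
# squared correlation between constructs. Loadings and correlation are invented.
def average_variance_extracted(loadings):
    return sum(l * l for l in loadings) / len(loadings)

ave_handwashing = average_variance_extracted([0.78, 0.81, 0.74])
ave_protection = average_variance_extracted([0.70, 0.76, 0.72])
squared_correlation = 0.36  # between the two constructs

print(round(ave_handwashing, 3), round(ave_protection, 3))  # 0.604 0.529
print(ave_handwashing > squared_correlation and
      ave_protection > squared_correlation)                 # True: discriminant validity
```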
