• Title/Summary/Keyword: Query type


Excisional lipectomy versus liposuction in HIV-associated lipodystrophy

  • Barton, Natalie;Moore, Ryan;Prasad, Karthik;Evans, Gregory
    • Archives of Plastic Surgery
    • /
    • v.48 no.6
    • /
    • pp.685-690
    • /
    • 2021
  • Background Human immunodeficiency virus (HIV)-associated lipodystrophy is a known consequence of long-term highly active antiretroviral therapy (HAART), and a significant number of patients on HAART are left with the stigmata of its complications, including fat redistribution. Few studies have described the removal of focal areas of lipohypertrophy with successful outcomes. This manuscript reviews the outcomes of excisional lipectomy versus liposuction for HIV-associated cervicodorsal lipodystrophy. Methods We performed a 15-year retrospective review of HIV-positive patients with lipodystrophy. Patients were identified by querying secure operative logs. Data collected included demographics, medications, comorbidities, duration of HIV, type of surgical intervention, pertinent laboratory values, and the amount of tissue removed. Results Nine male patients with HIV-associated lipodystrophy underwent a total of 17 procedures. Of the patients who underwent liposuction initially (n=5), 60% (n=3) experienced a recurrence. There were three cases of primary liposuction followed by excisional lipectomy; all three had a postoperative recurrence, and there was one case of seroma formation. Of the subjects who underwent excisional lipectomy (n=4), there were no documented recurrences, although one patient's postoperative course was complicated by seroma formation. Conclusions HIV-associated lipodystrophy is a disfiguring complication of HAART with significant morbidity. Given the limitations of liposuction alone as the primary intervention, excisional lipectomy is recommended as the primary treatment, with liposuction used for finer contouring and for subsequent procedures. While excisional lipectomy carries a slightly higher risk of complications, adjunctive techniques such as quilting sutures and drain placement may be used in conjunction with it.

Index-based Searching on Timestamped Event Sequences (타임스탬프를 갖는 이벤트 시퀀스의 인덱스 기반 검색)

  • Park, Sang-Hyun;Won, Jung-Im;Yoon, Jee-Hee;Kim, Sang-Wook
    • Journal of KIISE:Databases
    • /
    • v.31 no.5
    • /
    • pp.468-478
    • /
    • 2004
  • It is essential in various application areas of data mining and bioinformatics to effectively retrieve the occurrences of interesting patterns from sequence databases. For example, consider a network event management system that records the types and timestamp values of events occurring in a specific network component (e.g., a router). A typical query for discovering temporal causal relationships among network events is as follows: 'Find all occurrences of CiscoDCDLinkUp that are followed by MLMStatusUP and subsequently by TCPConnectionClose, under the constraint that the interval between the first two events is no larger than 20 seconds and the interval between the first and third events is no larger than 40 seconds.' This paper proposes an indexing method that answers such queries efficiently. Unlike previous methods, which rely on inefficient sequential scans or on data structures not easily supported by DBMSs, the proposed method uses a multi-dimensional spatial index, proven to be efficient in both storage and search, to find the answers quickly without false dismissals. Given a sliding window W, the input to the multi-dimensional spatial index is an n-dimensional vector whose i-th element is the interval between the first event of W and the first occurrence of the event type Ei in W (a small sketch of this mapping follows). Here, n is the number of event types that can occur in the system of interest. The 'curse of dimensionality' problem may arise when n is large; therefore, we use dimension selection or event-type grouping to avoid it. The experimental results reveal that the proposed technique can be a few orders of magnitude faster than the sequential scan and ISO-Depth index methods.
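The window-to-vector mapping above is easy to make concrete. Below is a minimal Python sketch, not the paper's implementation: the event-type list, the window representation, and the use of infinity for absent types are illustrative assumptions.

```python
from typing import List, Sequence, Tuple

# Assumed set of indexable event types E_1..E_n (illustrative).
EVENT_TYPES = ["CiscoDCDLinkUp", "MLMStatusUP", "TCPConnectionClose"]

def window_vector(window: Sequence[Tuple[str, float]],
                  event_types: List[str] = EVENT_TYPES) -> List[float]:
    """Map a sliding window of (event_type, timestamp) pairs, sorted by
    timestamp, to the n-dimensional vector stored in the spatial index:
    element i is the offset of the first occurrence of E_i from the
    window's first event (infinity if E_i never occurs in the window)."""
    vec = [float("inf")] * len(event_types)
    if not window:
        return vec
    t0 = window[0][1]  # timestamp of the window's first event
    for etype, ts in window:
        if etype in event_types:
            i = event_types.index(etype)
            vec[i] = min(vec[i], ts - t0)  # keep only the FIRST occurrence
    return vec

# The example query then becomes a range predicate over these vectors
# (v[0] == 0, v[1] <= 20, v[1] <= v[2] <= 40), answerable by an R-tree.
events = [("CiscoDCDLinkUp", 0.0), ("MLMStatusUP", 12.0), ("TCPConnectionClose", 35.0)]
print(window_vector(events))  # [0.0, 12.0, 35.0]
```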

Synthetic Trajectory Generation Tool for Indoor Moving Objects (실내공간 이동객체 궤적 생성기)

  • Ryoo, Hyung Gyu;Kim, Soo Jin;Li, Ki Joune
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.24 no.4
    • /
    • pp.59-66
    • /
    • 2016
  • For performance experiments on moving object databases, we need data sets of moving object trajectories; for example, benchmark trajectory data sets are required for experiments on query processing in moving object databases. For this reason, several tools have been developed to generate moving objects in Euclidean spaces or road network spaces. Indoor spaces differ from outdoor spaces in many respects, and a moving object generator for indoor space should reflect these differences. Although some tools have been developed to produce virtual moving object trajectories in indoor space, the movements they generate are not realistic. In this paper, we present a moving object generation tool for indoor space. The tool generates trajectories for pedestrians in an indoor space, and it provides parametric generation of trajectories considering not only speed, number of pedestrians, and minimum distance between pedestrians, but also type of spaces, time constraints, and type of pedestrians, as sketched below. We try to reflect the movement patterns of pedestrians in indoor space as realistically as possible. For interoperability, several geospatial standards are used in the development of the tool.
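As a rough illustration of the parametric interface such a generator exposes, here is a minimal Python sketch with hypothetical names; a real indoor generator would constrain movement to corridors, rooms, and pedestrian types rather than using the bounded random walk below.

```python
import random
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GenerationParams:
    num_pedestrians: int = 10
    speed_m_per_s: float = 1.3     # typical walking speed
    min_separation_m: float = 0.5  # minimum distance between pedestrians
    duration_s: float = 60.0
    step_s: float = 1.0

def generate(params: GenerationParams,
             bounds: Tuple[float, float] = (50.0, 30.0)) -> List[List[Tuple[float, float, float]]]:
    """Return, per pedestrian, a list of (t, x, y) samples from a bounded
    random walk at the configured speed. A real indoor generator would
    additionally respect space types and time constraints, as the paper
    describes."""
    trajectories = []
    for _ in range(params.num_pedestrians):
        x, y = random.uniform(0, bounds[0]), random.uniform(0, bounds[1])
        traj, t = [], 0.0
        while t <= params.duration_s:
            traj.append((t, x, y))
            dx, dy = random.uniform(-1, 1), random.uniform(-1, 1)
            norm = (dx * dx + dy * dy) ** 0.5 or 1.0
            step = params.speed_m_per_s * params.step_s
            x = min(max(x + step * dx / norm, 0.0), bounds[0])  # stay in bounds
            y = min(max(y + step * dy / norm, 0.0), bounds[1])
            t += params.step_s
        trajectories.append(traj)
    return trajectories

print(len(generate(GenerationParams(num_pedestrians=3))))  # 3 trajectories
```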

The Effect of Deal-Proneness in the Searching Pattern on the Purchase Probability of Customer in Online Travel Services (소비자 키워드광고 탐색패턴에 나타난 촉진지향성이 온라인 여행상품 구매확률에 미치는 영향)

  • Kim, Hyun Gyo;Lee, Dong Il
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.39 no.1
    • /
    • pp.29-48
    • /
    • 2014
  • Recent keyword advertising does not reflect individual customers' searching patterns because it is managed at the aggregate level of each keyword. The purpose of this research is to observe the processes of customer searching patterns, focusing on individual deal-proneness; location specificity is incorporated as a control variable. This paper examines the relationship between customers' searching patterns and their probability of purchase. A customer searching session, the collection of a sequence of keyword queries, is utilized as the unit of analysis. The degree of deal-proneness is measured from the customer behavior revealed by the keywords searched within the session: the deal-proneness function discounts the leverage of deal-prone keywords according to their position in the customer's searching order, and location searching specificity is calculated by the same logic (a sketch of such a position-discounted score follows). The analyzed data are narrowed down to customer query sessions containing more than two keyword queries, yielding 218,305 sessions, derived from the advertisement management data of an Internet advertising agency (COMAS) and the advertisers' travel business advertisement revenue data. The analysis identified three types of deal-prone customer. The first is the unconditionally active deal-prone customer, who employs deal-prone keywords from the beginning of the search. The second has lower deal-proneness, starting with general keywords and finally purchasing an appropriate product by utilizing deal-prone keywords in the last phase. These two types of customers have similar purchase rates. The last type has middle deal-proneness, utilizing deal-prone keywords in the middle of the process: such a customer drills into the information by employing deal-prone keywords, but when no appropriate alternative is found, modifies the keywords to look for other alternatives, which is why the purchase probability decreases in this case. This research also confirmed a loyalty effect using location searching specificity: a customer with higher trip loyalty to a specific location responds to targeted promotions rather than general promotions, and thus has a lower probability of purchase.
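The abstract does not give the measuring function itself, so the following Python sketch only illustrates the general idea of a position-discounted deal-proneness score; the discount factor, the deal-keyword markers, and the normalization are assumptions.

```python
from typing import List

DEAL_TERMS = ("할인", "특가", "sale", "discount")  # assumed deal-prone markers

def is_deal_prone(query: str) -> bool:
    return any(term in query.lower() for term in DEAL_TERMS)

def deal_proneness(session: List[str], discount: float = 0.8) -> float:
    """Score in [0, 1]; higher when deal-prone queries appear EARLY in the
    session, since query weights decay as discount**position."""
    if not session:
        return 0.0
    weights = [discount ** i for i in range(len(session))]
    hits = [w for q, w in zip(session, weights) if is_deal_prone(q)]
    return sum(hits) / sum(weights)

# An 'unconditionally active' deal-prone session scores near 1; a session
# that only turns to deal keywords at the end scores much lower.
print(deal_proneness(["제주 특가 항공권", "제주 호텔 discount"]))    # 1.0
print(deal_proneness(["제주 여행", "제주 호텔", "제주 호텔 특가"]))  # ~0.26
```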

Application of Geographic Information Systems for Effective Management of University Forests (대학연습림의 효율적 관리를 위한 지리정보시스템의 활용방안)

  • Kwon, Taeho;Kim, Taekyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.2 no.3
    • /
    • pp.81-90
    • /
    • 1999
  • The changing functions of university forests have created a need for more sophisticated forest management techniques and for more information about forests and the natural environment. Systematic tools such as a so-called Forest Information System, applying the techniques of geographic information systems, are therefore required to collect, edit, manage, and analyze the various data about forests and the environment, and to support decision-making. Digital mapping, a primary step in constructing such a Forest Information System, was carried out using many kinds of thematic spatial data for the Seongju Experimental Forest of Taegu University. As a result, various digital maps, including forest type and soil type, were constructed. We then built a user-interface system to link the attribute data in the management plan to the thematic spatial data (a minimal sketch of this kind of linkage follows). The system proved to be an effective tool for more rapid query, analysis, and update of related data in the systematic management of a university forest, and it should also be useful for decision-making in devising, assessing, and operating forest management and development plans. There remains much room for supplementation and improvement to make the system more convenient and powerful for external demands, so continued effort in collecting, revising, and updating the data is required.
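As a rough illustration of the attribute-to-map linkage described above, here is a minimal Python sketch; the field names, stand IDs, and geometry placeholders are hypothetical, and the paper's system is a GIS user interface rather than code like this, so this only mirrors the join logic.

```python
from typing import Any, Dict

# Thematic spatial layer: stand ID -> geometry placeholder (illustrative).
forest_type_layer: Dict[str, Any] = {"S-01": "POLYGON(...)", "S-02": "POLYGON(...)"}

# Management-plan attribute table keyed by the same stand ID.
plan_attributes = {
    "S-01": {"species": "Pinus densiflora", "age_class": 4, "planned_work": "thinning"},
    "S-02": {"species": "Quercus variabilis", "age_class": 3, "planned_work": "none"},
}

def query_layer(layer: Dict[str, Any], attrs: Dict[str, Dict], **conditions):
    """Return the map features whose linked attributes satisfy all conditions,
    i.e., an attribute query resolved against the spatial layer via the key."""
    return {
        sid: geom for sid, geom in layer.items()
        if all(attrs.get(sid, {}).get(k) == v for k, v in conditions.items())
    }

print(query_layer(forest_type_layer, plan_attributes, planned_work="thinning"))
# {'S-01': 'POLYGON(...)'}
```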


A Search Method for Components Based-on XML Component Specification (XML 컴포넌트 명세서 기반의 컴포넌트 검색 기법)

  • Park, Seo-Young;Shin, Yoeng-Gil;Wu, Chi-Su
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.2
    • /
    • pp.180-192
    • /
    • 2000
  • Recently, component technology has played a main role in software reuse. It has changed code-based reuse into binary-code-based reuse, because components can easily be combined into the software under development through component interfaces alone. Since components and component users have increased rapidly, users need to search for the most suitable components for their needs among the enormous number of components on the Internet. It is desirable to use web-document-typed specifications for component specifications on the Internet. This paper proposes using XML component specifications instead of HTML specifications, because HTML cannot represent the semantics of contexts. We also propose an XML context-based search method built on XML component specifications. In their queries, component users give contexts for the component properties and terms for the values of those properties. The index structure for the context-based search method is an inverted-file structure over term-context-component-specification entries (sketched below). Besides the XML context-based search method, a variety of search methods built on it, such as keyword search, faceted search, and browsing, are provided for the convenience of users. The search engine uses a 3-layer architecture, with an interface layer, a query expansion layer, and an XML search engine layer, for an efficient index scheme. In this paper, an XML DTD (Document Type Definition) for component specifications is defined, and experimental results comparing the search performance of XML with that of HTML are discussed.
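Here is a minimal Python sketch of the term-context inverted index described above, with illustrative property names and component IDs; the paper's engine additionally layers query expansion and an XML search engine on top of such an index.

```python
from collections import defaultdict
from typing import Dict, Set, Tuple

# (term, context) -> ids of component specifications matching it
index: Dict[Tuple[str, str], Set[str]] = defaultdict(set)

def add_specification(component_id: str, spec: Dict[str, str]) -> None:
    """Index an XML component specification given as {context: text value}."""
    for context, value in spec.items():
        for term in value.lower().split():
            index[(term, context)].add(component_id)

def context_search(term: str, context: str) -> Set[str]:
    """Context-based search: the term must occur in the named property."""
    return index.get((term.lower(), context), set())

def keyword_search(term: str) -> Set[str]:
    """Plain keyword search degenerates to ignoring the context key, which
    is why context search is strictly more precise."""
    return {cid for (t, _), cids in index.items() if t == term.lower() for cid in cids}

add_specification("Cart-1.0", {"function": "shopping cart checkout", "platform": "EJB"})
print(context_search("checkout", "function"))  # {'Cart-1.0'}
print(keyword_search("ejb"))                   # {'Cart-1.0'}
```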


Rule Discovery and Matching for Forecasting Stock Prices (주가 예측을 위한 규칙 탐사 및 매칭)

  • Ha, You-Min;Kim, Sang-Wook;Won, Jung-Im;Park, Sang-Hyun;Yoon, Jee-Hee
    • Journal of KIISE:Databases
    • /
    • v.34 no.3
    • /
    • pp.179-192
    • /
    • 2007
  • This paper addresses an approach that recommends investment types to stock investors by discovering useful rules from past patterns of stock price changes in databases. First, we define a new rule model for recommending stock investment types. For a frequent pattern of stock prices, if its subsequent stock prices match a condition of an investor, the model recommends a corresponding investment type for this stock. The frequent pattern is regarded as a rule head, and the subsequent part as a rule body. We observed that the conditions on rule bodies differ considerably depending on the dispositions of investors, while rule heads are independent of investors' characteristics in most cases. With this observation, we propose a new method that discovers and stores only the rule heads rather than whole rules in the rule discovery process. This allows investors to define various conditions on rule bodies flexibly, and also improves the performance of rule discovery by reducing the number of rules (a sketch of this head/body split follows). For efficient discovery and matching of rules, we propose methods for discovering frequent patterns, constructing a frequent pattern base, and indexing them. We also suggest a method that finds the rules matched to a query issued by an investor from the frequent pattern base, and a method that recommends an investment type using those rules. Finally, we verify the superiority of our approach via various experiments using real-life stock data.
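A minimal Python sketch of the head/body split described above, with an assumed up/down pattern encoding and an illustrative investor condition; the paper's frequent-pattern discovery and indexing are replaced here by a hard-coded pattern base.

```python
from typing import Callable, List, Sequence

def to_symbols(prices: Sequence[float]) -> str:
    """Encode a price series as a string of U(p)/D(own) moves."""
    return "".join("U" if b >= a else "D" for a, b in zip(prices, prices[1:]))

# Frequent-pattern base of rule HEADS, discovered offline (hard-coded here).
rule_heads: List[str] = ["UUD", "UDU"]

def recommend(prices: Sequence[float],
              body: Callable[[Sequence[float]], bool],
              horizon: int = 3) -> List[str]:
    """Emit a signal where a stored head matches the price series and the
    investor-supplied BODY predicate holds on the prices that follow;
    follow[0] is the closing price of the matched pattern."""
    symbols = to_symbols(prices)
    signals = []
    for head in rule_heads:
        i = symbols.find(head)
        if i >= 0 and body(prices[i + len(head): i + len(head) + horizon]):
            signals.append(f"BUY: head {head} matched")
    return signals

def rises_two_percent(follow: Sequence[float]) -> bool:
    """One investor's body condition: a subsequent rise of at least 2%."""
    return len(follow) >= 2 and max(follow[1:]) >= follow[0] * 1.02

print(recommend([10, 11, 12, 11.5, 11.8, 12.3], rises_two_percent))
```

Storing only the heads is what lets two investors with different risk dispositions reuse the same mined pattern base with different `body` predicates.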

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple web sources, in order to expand a knowledge base. The proposed methodology proceeds in the following steps. 1) Collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries, and classify the suitable documents. 2) Determine whether each sentence is suitable for information extraction and derive a confidence score. 3) Based on the predicate feature, extract the information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; the proposed model shows a higher performance index than the baseline model. The contribution of this study is a sequence tagging model based on a bidirectional LSTM-CRF that uses the predicate feature of the query (a generic sketch of such a tagger follows below); with this we developed a robust model that maintains high recall even on the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take the heterogeneous characteristics of source-specific document types into account, and the proposed methodology proved to extract information effectively from various document types compared to the baseline model, whereas previous research performs poorly when extracting information from document types that differ from the training data. In addition, by predicting the suitability of documents and sentences for extraction before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer information, providing a way to maintain precision even in a real web environment. Because the target is unstructured documents on the real web, there is no guarantee that a document contains the correct answer; when question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents with no correct answer. The policy of predicting document and sentence suitability for extraction is therefore meaningful in that it helps maintain extraction performance in a real web environment.
The limitations of this study and future research directions are as follows. First, data preprocessing: in this study, the units of knowledge extraction are produced through morphological analysis based on the open-source KoNLPy Python package, and extraction can be performed improperly when the morphological analysis is wrong; to improve extraction results, a more advanced morphological analyzer needs to be developed. Second, entity ambiguity: the information extraction system of this study cannot distinguish entities that share the same name but have different referents. If several people with the same name appear in the news, the system may not extract information about the intended query; future research needs measures to disambiguate persons with the same name. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the system, and developed an evaluation data set of 2,800 documents (400 questions x 7 documents per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging whether each contains a correct answer. To ensure the external validity of the study, it is desirable to evaluate the system on more queries, which is a costly activity that must be done manually; future research needs to evaluate the system for more queries. It is also necessary to develop a Korean benchmark data set for information extraction from multi-source web documents, to build an environment in which results can be evaluated more objectively.
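For readers unfamiliar with the tagging architecture named above, here is a generic bidirectional LSTM-CRF sketch in PyTorch using the pytorch-crf package; the dimensions, tag set, and the way the study's predicate feature would be injected (e.g., concatenated to the token embeddings) are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRFTagger(nn.Module):
    def __init__(self, vocab_size: int, num_tags: int,
                 embed_dim: int = 100, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2, batch_first=True,
                            bidirectional=True)
        self.emit = nn.Linear(hidden_dim, num_tags)  # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)   # transition-aware layer

    def loss(self, tokens, tags, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)  # negative log-likelihood

    def decode(self, tokens, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions, mask=mask)  # best tag sequences

# Toy usage: batch of 2 sentences, 5 tokens, 4 tags (e.g., BIO answer spans).
model = BiLSTMCRFTagger(vocab_size=1000, num_tags=4)
tokens = torch.randint(1, 1000, (2, 5))
tags = torch.randint(0, 4, (2, 5))
mask = torch.ones(2, 5, dtype=torch.bool)
print(model.loss(tokens, tags, mask))   # training objective
print(model.decode(tokens, mask))       # predicted tag ids per sentence
```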

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One role of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of that meta-model. Next, we needed a sizable number of queries with corresponding correct answers in the Process Handbook. Many previous studies devised artificial data sets composed of randomly generated numbers without real meaning, and used subjective ratings for correct answers and for similarity values between processes. To generate a semantics-preserving test data set, we instead create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants serve as the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes. We use simple text-retrieval similarity measures such as TF-IDF and Levenshtein edit distance, and utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions; since we can identify the relationships between a semantic process and its subcomponents, this information can be utilized to calculate similarities between processes. Dice's coefficient and the Jaccard similarity measure are utilized to calculate the portion of overlap between processes in diverse ways (a sketch of one such combination follows this abstract). We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures, while TF-IDF and the method combining TF-IDF with Levenshtein edit distance, both focused on similarity between process names and descriptions, perform better than the other devised methods. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within each mutation set. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show a greater coefficient than measures based on the values of process attributes.
However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably good performance in both experiments. For retrieving semantic processes, it therefore seems better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. In summary, we generate semantic process data and a retrieval-experiment data set from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the retrieval results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with data sets from other domains, and since diverse measures yield many similarity values, we may find better ways to identify relevant processes by applying these values simultaneously.
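As an illustration of mixing attribute-value and structural similarity in the spirit of the measures compared above, here is a minimal Python sketch; the weighting, the choice of structural fields, and the combination rule are assumptions, not the paper's exact Lev-TFIDF-JaccardAll definition.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def name_similarity(a: str, b: str) -> float:
    """Levenshtein distance normalized into a [0, 1] similarity."""
    m = max(len(a), len(b)) or 1
    return 1.0 - levenshtein(a, b) / m

def jaccard(x: set, y: set) -> float:
    return len(x & y) / len(x | y) if x | y else 1.0

def process_similarity(p: dict, q: dict, w_name: float = 0.5) -> float:
    """Weighted mix of attribute-value similarity (process name) and
    structural similarity (Jaccard over parts and goals)."""
    structural = jaccard(set(p["parts"]) | set(p["goals"]),
                         set(q["parts"]) | set(q["goals"]))
    return w_name * name_similarity(p["name"], q["name"]) + (1 - w_name) * structural

sell = {"name": "Sell product", "parts": {"identify customer", "deliver"}, "goals": {"revenue"}}
variant = {"name": "Sell item", "parts": {"identify customer", "ship"}, "goals": {"revenue"}}
print(round(process_similarity(sell, variant), 3))
```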

S-XML Transformation Method for Efficient Distribution of Spatial Information on u-GIS Environment (u-GIS 환경에서 효율적인 공간 정보 유통을 위한 S-XML 변환 기법)

  • Lee, Dong-Wook;Baek, Sung-Ha;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.1
    • /
    • pp.55-62
    • /
    • 2009
  • In a u-GIS environment, the needed spatial data are collected through sensor networks and provided as information that is processed in real time or stored. When information is requested over the Internet by web-based applications, it is transmitted in XML; in particular, when the requested information includes spatial data, GML, S-XML, and other document formats that can carry spatial data are used. In this process, real-time stream data processed in a DSMS is transformed into S-XML documents, and web-based spatial information services receive the S-XML documents through the Internet. Because most spatial application services use an existing spatial DBMS as their storage system, the data must be transformed between S-XML and the SDBMS. In this paper, we propose an S-XML transformation method that uses caching of spatial data. The proposed method caches the spatial-data part of S-XML when transforming between S-XML and a relational spatial database, and it reuses the cached data without additional transformation cost when a transformation involving data in the same region is required (a minimal sketch follows). Through the proposed method and its performance evaluation, we show that the cost of transformation between S-XML documents and web-based spatial information services is reduced and that query processing performance is increased when providing spatial information in a u-GIS environment.
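A minimal Python sketch of the caching idea: the converted spatial fragment is stored under a region key, so a repeated transformation for the same region is served from the cache; the key scheme, the parsing, and the WKT target are illustrative assumptions.

```python
from typing import Dict

geometry_cache: Dict[str, str] = {}  # region key -> converted geometry (e.g., WKT)

def convert_geometry(sxml_fragment: str) -> str:
    """Placeholder for the expensive S-XML -> spatial-DBMS conversion."""
    return f"WKT({sxml_fragment})"

def transform(region_key: str, sxml_fragment: str) -> str:
    """Transform an S-XML spatial fragment, reusing the cache on a hit so the
    same region is never converted twice."""
    if region_key not in geometry_cache:                  # miss: convert once
        geometry_cache[region_key] = convert_geometry(sxml_fragment)
    return geometry_cache[region_key]                     # hit: no re-conversion

transform("cell-42", "<Point><pos>127.0 37.5</pos></Point>")  # converted
transform("cell-42", "<Point><pos>127.0 37.5</pos></Point>")  # served from cache
```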
