• Title/Summary/Keyword: retrieval

Pharmaco-mechanical Thrombectomy and Stent Placement in Patients with May-Thurner Syndrome and Lower Extremity Deep Venous Thrombosis (May-Thurner 증후군과 동반된 하지 심부정맥혈전환자에서 혈전제거술과 스텐트삽입술)

  • Jeon, Yonh-Sun;Kim, Yong-Sam;Cho, Jung-Soo;Yoon, Yong-Han;Baek, Wan-Ki;Kim, Kwang-Ho;Kim, Joung-Taek
    • Journal of Chest Surgery
    • /
    • v.42 no.6
    • /
    • pp.757-762
    • /
    • 2009
  • Background: Compression of the left common iliac vein by the overriding common iliac artery is frequently combined with acute deep vein thrombosis in patients with May-Thurner syndrome. We evaluated the results of treatment with thrombolysis and thrombectomy followed by stenting in 34 patients with May-Thurner syndrome combined with lower extremity deep venous thrombosis. Material and Method: The authors retrospectively reviewed the records of 34 patients (mean age: 65±14 years) who had undergone stent insertion for acute deep vein thrombosis caused by May-Thurner syndrome. After thrombectomy and thrombolysis, insertion of a wall stent and balloon angioplasty were performed to relieve the compression of the left common iliac vein. Urokinase at a rate of 80,000 to 120,000 U/hour was infused into the thrombosed vein via a multi-side-hole thrombolysis catheter. A retrievable inferior vena cava (IVC) filter was placed to protect against pulmonary embolism in 30 patients (88%). Oral anticoagulation with warfarin was maintained for 3 months, and follow-up multidetector computed tomography (MDCT) angiography was performed at hospital discharge and at the 6-month follow-up. Result: The symptoms of deep venous thrombosis disappeared in two patients (4%), there was clinical improvement within 48 hours in twenty-eight patients (82%), and there was no improvement in four patients (8%). The MDCT angiography at discharge showed no thrombus in 9 patients (26%) and partial thrombus in 21 (62%), whereas the follow-up MDCT at 6.4±5.5 months (32 patients) revealed no thrombus in 23 patients (72%) and partial thrombus in 9 patients (26%). Two patients (6%) had recurrence of DVT and underwent retreatment. Conclusion: Stent insertion with catheter-directed thrombolysis and thrombectomy is an effective treatment for May-Thurner syndrome combined with acute deep vein thrombosis of the lower extremity.

Probabilistic Anatomical Labeling of Brain Structures Using Statistical Probabilistic Anatomical Maps (확률 뇌 지도를 이용한 뇌 영역의 위치 정보 추출)

  • Kim, Jin-Su;Lee, Dong-Soo;Lee, Byung-Il;Lee, Jae-Sung;Shin, Hee-Won;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.317-324
    • /
    • 2002
  • Purpose: The use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as a standard anatomical framework. While most researchers consult the Talairach atlas to report the localization of activations detected by the SPM program, there is a significant disparity between the MNI templates and the Talairach atlas. That disparity between Talairach and MNI coordinates makes the interpretation of SPM results time consuming, subjective, and inaccurate. The purpose of this study was to develop a program that provides objective anatomical information for each x-y-z position in the ICBM coordinate system. Materials and Methods: The program was designed to provide the anatomical information for a given x-y-z position in MNI coordinates based on the Statistical Probabilistic Anatomical Map (SPAM) images of the ICBM. When an x-y-z position was given to the program, the names of the anatomical structures with non-zero probability and the probabilities that the given position belongs to those structures were tabulated. The program was coded in the IDL and JAVA languages for easy porting to any operating system or platform. The utility of this program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying this program to the analysis of a PET brain activation study of human memory in which the anatomical information on the activated areas was previously known. Results: Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved using the programs. The validation study showed the relevance of this program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. Conclusion: These programs will be useful for interpreting the results of image analyses performed in MNI coordinates, as done in the SPM program.
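A minimal Python sketch of the coordinate lookup described above, assuming the SPAM volumes are available as NumPy arrays and an MNI-to-voxel affine is known; the array shapes, affine values, and function names are illustrative, not the authors' IDL/Java implementation:

```python
# Hedged sketch: list the structures with non-zero probability at an MNI coordinate,
# given statistical probabilistic anatomical maps stored as 3-D NumPy arrays.
import numpy as np

def mni_to_voxel(x, y, z, affine):
    """Convert an MNI (mm) coordinate to voxel indices using a voxel-to-mm affine."""
    ijk = np.linalg.inv(affine) @ np.array([x, y, z, 1.0])
    return tuple(np.round(ijk[:3]).astype(int))

def label_probabilities(x, y, z, spam_maps, affine):
    """Return {structure name: probability} for all structures with P > 0 at (x, y, z)."""
    i, j, k = mni_to_voxel(x, y, z, affine)
    result = {}
    for name, volume in spam_maps.items():
        p = float(volume[i, j, k])
        if p > 0:
            result[name] = p
    return dict(sorted(result.items(), key=lambda kv: kv[1], reverse=True))

# Toy example on a 91x109x91 grid with an assumed 2 mm MNI affine.
affine = np.array([[-2, 0, 0, 90],
                   [0, 2, 0, -126],
                   [0, 0, 2, -72],
                   [0, 0, 0, 1]], dtype=float)
spam_maps = {"hippocampal formation": np.random.rand(91, 109, 91)}
print(label_probabilities(-28, -20, -12, spam_maps, affine))
```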

Some Instances of Manchurian Naturalization and Settlement in Choson Dynasty (향화인의 조선 정착 사례 연구 - 여진 향화인을 중심으로 -)

  • Won, Chang-Ae
    • (The)Study of the Eastern Classic
    • /
    • no.37
    • /
    • pp.33-61
    • /
    • 2009
  • In the late Koryo period, up to the 14th century, there were at least two groups of Manchurians who were granted naturalization. One group had long lived as original inhabitants of the coastal area in the northeastern part of the Korean peninsula and numbered over one thousand households. The other came down from the inland area east of the Yoha River to the Tuman River region to settle, and numbered at least around one hundred and sixty households, including such tribes as Al-tha-ry, Ol-lyang-hap, Ol-jok-hap, and others. From the early days of the Choson dynasty they were treated courteously under governmental policies in economic, political, and social terms. They were given, for instance, a house, land, household furniture, and clothes. They were allowed to marry native Koreans in order to settle down, and were taught how to cultivate their land. It was also possible for them to be given an official position or to take the National Civil Official Examination. The fact that they could take such an examination, in particular, means they were treated fairly and equally, because they had the same opportunity as commoners to improve their social position through the formal system. Two representative families are scrutinized in this paper: the Chong-hae Lee family and the Chon-ju Ju family. Both settled down successfully, though from different backgrounds. The former descended from a headman, Lee Jee-ran, who led a tribe of over five hundred households. He was honored three times as a meritorious retainer: at the founding of the Chosun dynasty, at the withdrawal (retrieval) of the armies, and as an enshrined retainer. His son, Lee Wha-yong, was also recognized as a vassal of merit and successfully kept close ties with the king's family through marriage. Building on the foundation laid by their ancestors, their grandsons, the Lee Hyo-yang and Lee Hyo-gang families, each took solid root as aristocratic yangban. The former became a family of high officers generation after generation, while the latter became a family of civil officials through the Civil Official Examinations. They lived mainly around Seoul and Kyong-gi Province, and some lived in their original home, Ham-kyong Province. Chu-man, the first ancestor of the Ju family, was recognized as a meritorious retainer at the founding of the dynasty, and Chu-in was also given a high office by the government. They kept living in their original place, Ham-heung in Ham-kyong Province, and became an outstanding local family there. They began to pass the Civil Official Examinations: from the 17th century on, 17 members passed the Civil Official Examinations and 40 passed the lower civil examinations. The positions they attained in government were usually remonstrance posts, which were particularly prohibited to northwestern people at that time. The Chosun dynasty was widely open to Manchurians through the systems of envoy, convoy, and naturalization. The intent was to build an enclosure policy through friendly diplomatic relations with them against any possible invasion from outside. This is one reason why they were supported so fully and in so many ways.

The Effect of Bilateral Eye Movements on Face Recognition in Patients with Schizophrenia (양측성 안구운동이 조현병 환자의 얼굴 재인에 미치는 영향)

  • Lee, Na-Hyun;Kim, Ji-Woong;Im, Woo-Young;Lee, Sang-Min;Lim, Sanghyun;Kwon, Hyukchan;Kim, Min-Young;Kim, Kiwoong;Kim, Seung-Jun
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.24 no.1
    • /
    • pp.102-108
    • /
    • 2016
  • Objectives: A deficit of recognition memory has been found to be one of the common neurocognitive impairments in patients with schizophrenia. In addition, these patients have been reported to fail to show enhanced memory for emotional stimuli. Previous studies have shown that bilateral eye movements enhance memory retrieval. Therefore, this study was conducted to investigate memory enhancement by bilaterally alternating eye movements in schizophrenic patients. Methods: Twenty-one patients with schizophrenia participated in this study. The participants learned faces (angry or neutral faces), and then performed a recognition memory task on the faces after bilateral eye movements and after central fixation. Recognition accuracy, response bias, and mean response time for hits were compared and analysed. A two-way repeated-measures analysis of variance was performed for statistical analysis. Results: There was a significant effect of the bilateral eye movement condition on mean response time (F=5.812, p<0.05) and response bias (F=10.366, p<0.01). Statistically significant interaction effects were not observed between eye movement condition and facial emotion type. Conclusions: Irrespective of the emotional content of the facial stimuli, recognition memory processing was more enhanced after bilateral eye movements in patients with schizophrenia. Further study will be needed to investigate the underlying neural mechanism of bilateral eye movement-induced memory enhancement in patients with schizophrenia.
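For readers unfamiliar with the analysis named above, a minimal Python sketch of a two-way repeated-measures ANOVA (eye-movement condition x face emotion) on synthetic data; the column names and values are assumptions, not the study data:

```python
# Hedged sketch: two within-subject factors, one observation per subject per cell,
# analysed with statsmodels' AnovaRM.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects, conditions, emotions = 21, ["bilateral", "fixation"], ["angry", "neutral"]
rows = [
    {"subject": s, "condition": c, "emotion": e,
     "response_time": rng.normal(850 if c == "bilateral" else 900, 50)}
    for s in range(subjects) for c in conditions for e in emotions
]
df = pd.DataFrame(rows)

# AnovaRM requires exactly one value per subject per (condition, emotion) cell.
aov = AnovaRM(df, depvar="response_time", subject="subject",
              within=["condition", "emotion"]).fit()
print(aov)
```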

Estimation of Ground-level PM10 and PM2.5 Concentrations Using Boosting-based Machine Learning from Satellite and Numerical Weather Prediction Data (부스팅 기반 기계학습기법을 이용한 지상 미세먼지 농도 산출)

  • Park, Seohui;Kim, Miae;Im, Jungho
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.2
    • /
    • pp.321-335
    • /
    • 2021
  • Particulate matter (PM10 and PM2.5, with diameters less than 10 and 2.5 ㎛, respectively) can be absorbed by the human body and adversely affect human health. Although most PM monitoring is based on ground observations, these are limited to point-based measurement sites, which leads to uncertainty in PM estimation for regions without observation sites. It is possible to overcome this spatial limitation by using satellite data. In this study, we developed a machine learning-based retrieval algorithm for ground-level PM10 and PM2.5 concentrations using aerosol parameters from the Geostationary Ocean Color Imager (GOCI) satellite and various meteorological parameters from a numerical weather prediction model from January to December 2019. Gradient Boosted Regression Trees (GBRT) and Light Gradient Boosting Machine (LightGBM) were used to estimate PM concentrations. The model performances were examined for two types of feature sets: all input parameters (Feature set 1) and a subset of input parameters without meteorological and land-cover parameters (Feature set 2). Both models showed higher accuracy (about 10% higher in R2) using Feature set 1 than Feature set 2. The GBRT model using Feature set 1 was chosen as the final model for further analysis (PM10: R2 = 0.82, nRMSE = 34.9%; PM2.5: R2 = 0.75, nRMSE = 35.6%). The spatial distribution of the seasonal and annual-averaged PM concentrations was similar to in-situ observations, except for the northeastern part of China with bright surface reflectance. Their spatial distribution and seasonal changes were well matched with in-situ measurements.
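A minimal Python sketch of this kind of boosting-based PM retrieval, using LightGBM on synthetic predictors; the feature names and data are assumptions for illustration, not the GOCI/NWP inputs used in the paper:

```python
# Hedged sketch: regress ground-level PM on aerosol and meteorological predictors
# with a gradient-boosted tree model, then report R2 and normalized RMSE.
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
n = 5000
X = pd.DataFrame({
    "aod": rng.uniform(0, 2, n),                     # aerosol optical depth (illustrative)
    "boundary_layer_height": rng.uniform(100, 2500, n),
    "relative_humidity": rng.uniform(10, 100, n),
    "wind_speed": rng.uniform(0, 15, n),
    "temperature": rng.uniform(-10, 35, n),
})
# Synthetic PM10 target loosely tied to AOD and boundary-layer height.
pm10 = 40 * X["aod"] + 0.01 * (2500 - X["boundary_layer_height"]) + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, pm10, test_size=0.2, random_state=42)
model = LGBMRegressor(n_estimators=500, learning_rate=0.05, num_leaves=31)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2 = {r2_score(y_te, pred):.2f}, nRMSE = {100 * rmse / y_te.mean():.1f} %")
```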

A Study on the Design of the Grid-Cell Assessment System for the Optimal Location of Offshore Wind Farms (해상풍력발전단지의 최적 위치 선정을 위한 Grid-cell 평가 시스템 개념 설계)

  • Lee, Bo-Kyeong;Cho, Ik-Soon;Kim, Dae-Hae
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.24 no.7
    • /
    • pp.848-857
    • /
    • 2018
  • Recently, around the world, active development of new renewable energy sources, including solar power, waves, and fuel cells, has taken place. In particular, floating offshore wind farms have been developed to save costs through large-scale production, to use high-quality wind resources, and to minimize noise damage in ocean areas. The development of floating wind farms requires an evaluation under the Maritime Safety Audit Scheme of the Maritime Safety Act in Korea. Floating wind farms shall be assessed by applying the line and area concept for the systematic development, management, and utilization of specified sea areas. The development of appropriate evaluation methods and standards is also required. In this study, proper standards for marine traffic surveys and assessments were established, and a systematic treatment for assessing marine spatial areas was studied. First, a marine traffic data collector using AIS or radar was designed to conduct marine traffic surveys. In addition, assessment methods were proposed, such as historical tracks, traffic density, and marine traffic pattern analysis applying the line and area concept. Marine traffic density can be evaluated by spatial and temporal means, with an adjustable grid-cell scale. Marine traffic pattern analysis was proposed for assessing ship movement patterns for transit or work in sea areas. Finally, the conceptual design of a Marine Traffic and Safety Assessment Solution (MaTSAS) that can automatically collect and assess marine traffic data was completed. It could be possible to minimize inaccurate estimation due to human errors, such as data omission or misprints, through automated and systematic collection, analysis, and retrieval of marine traffic data. This study could provide reliable assessment results, reflecting the line and area concept, according to sea area usage.
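A minimal Python sketch of the grid-cell traffic-density idea described above, binning AIS position reports into cells of adjustable size; the coordinates and cell scale are assumptions, not part of the MaTSAS design:

```python
# Hedged sketch: count AIS position reports per grid cell on an adjustable scale.
import numpy as np

def traffic_density(lats, lons, lat_range, lon_range, cell_deg=0.01):
    """Return a 2-D array of report counts per grid cell of size cell_deg degrees."""
    lat_bins = np.arange(lat_range[0], lat_range[1] + cell_deg, cell_deg)
    lon_bins = np.arange(lon_range[0], lon_range[1] + cell_deg, cell_deg)
    density, _, _ = np.histogram2d(lats, lons, bins=[lat_bins, lon_bins])
    return density  # rows = latitude cells, columns = longitude cells

# Toy example: random ship positions in a small sea area.
rng = np.random.default_rng(1)
lats = rng.uniform(34.0, 34.5, 10_000)
lons = rng.uniform(128.0, 128.5, 10_000)
density = traffic_density(lats, lons, (34.0, 34.5), (128.0, 128.5), cell_deg=0.05)
print(density.shape, density.max())
```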

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has been accelerating with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems such as learning and problem solving related to human intelligence. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence, and it aims to enable artificial intelligence agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. Recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task and still requires a lot of effort from experts. In recent years, much research and technology in knowledge-based artificial intelligence uses DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from the Wikipedia infobox, which presents a summary of some unifying aspect created by users. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of knowledge by generating knowledge from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. In order to demonstrate the appropriateness of this method, we explain a knowledge extraction model based on the DBpedia ontology schema, learned from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the proper sentences from which to extract triples, and selecting values and transforming them into RDF triple structure. The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences that are classified as appropriate, and convert the knowledge into a form of triples.
In order to train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through this proposed process, it is possible to utilize structured knowledge by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
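A minimal Python sketch of the final step described above, assembling triples from a BIO-tagged sentence; the tag scheme, property names, and example sentence are assumptions for illustration, not the authors' trained model:

```python
# Hedged sketch: collect contiguous B-/I- spans from token-level BIO tags and
# emit (subject, property, value) triples in a DBpedia-like form.
def bio_to_triples(subject, tokens, tags, prop_prefix="dbo:"):
    """Return a list of (subject, property, value) triples from BIO-tagged tokens."""
    triples, span, current = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if span:
                triples.append((subject, prop_prefix + current, " ".join(span)))
            current, span = tag[2:], [token]
        elif tag.startswith("I-") and current == tag[2:]:
            span.append(token)
        else:
            if span:
                triples.append((subject, prop_prefix + current, " ".join(span)))
            span, current = [], None
    if span:
        triples.append((subject, prop_prefix + current, " ".join(span)))
    return triples

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]
tags   = ["O",     "O",  "O",   "O",       "O",  "B-country", "I-country", "O"]
print(bio_to_triples("dbr:Seoul", tokens, tags))
# -> [('dbr:Seoul', 'dbo:country', 'South Korea')]
```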

A Study on Improvement of Collaborative Filtering Based on Implicit User Feedback Using RFM Multidimensional Analysis (RFM 다차원 분석 기법을 활용한 암시적 사용자 피드백 기반 협업 필터링 개선 연구)

  • Lee, Jae-Seong;Kim, Jaeyoung;Kang, Byeongwook
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.139-161
    • /
    • 2019
  • Use of the e-commerce market has become a common part of everyday life. It has become important for customers to know where and how to make reasonable purchases of good-quality products. This change in purchase psychology tends to make it difficult for customers to make purchasing decisions amid vast amounts of information. In this case, a recommendation system has the effect of reducing the cost of information retrieval and improving satisfaction by analyzing the purchasing behavior of the customer. Amazon and Netflix are considered well-known examples of sales marketing using recommendation systems. In the case of Amazon, 60% of recommendations resulted in the purchase of goods, and a 35% increase in sales was achieved. Netflix, on the other hand, found that 75% of movies viewed were chosen through its recommendation service. This personalization technique is considered one of the key strategies for one-to-one marketing, which can be useful in online markets where salespeople do not exist. Recommendation techniques that are mainly used in recommendation systems today include collaborative filtering and content-based filtering. Furthermore, hybrid techniques and association rules that use these techniques in combination are also being used in various fields. Of these, collaborative filtering is the most popular today. Collaborative filtering recommends products preferred by neighbors who have similar preferences or purchasing behavior, based on the assumption that users who have exhibited similar tendencies in purchasing or evaluating products in the past will show similar tendencies for other products. However, most existing systems make recommendations only within the same category of products, such as books or movies. This is because the recommendation system estimates the purchase satisfaction for a new item, which has never been bought, using the customer's purchase ratings of similar products based on transaction data. In addition, the reliability of the purchase ratings used in recommendation systems is causing serious problems. In particular, a 'compensatory review' refers to the intentional manipulation of a customer purchase rating through company intervention. In fact, Amazon has been cracking down on these compensatory reviews since 2016 and has worked hard to reduce false information and increase credibility. A survey showed that the average rating for products with compensatory reviews was higher than for those without. It also turns out that compensatory reviews are about 12 times less likely to give the lowest rating and about 4 times less likely to leave a critical opinion. As such, customer purchase ratings are full of various kinds of noise. This problem is directly related to the performance of recommendation systems aimed at maximizing profits by attracting highly satisfied customers in most e-commerce transactions. In this study, we propose new indicators that can objectively substitute for existing customer purchase ratings by using the RFM multidimensional analysis technique to solve this series of problems. RFM multidimensional analysis is the most widely used analytical method in customer relationship management (CRM) marketing, and is a data analysis method for selecting customers who are likely to purchase goods.
As a result of verifying the actual purchase history data using the proposed index, the accuracy was as high as about 55%. This is the result of recommending a total of 4,386 different types of products that had never been bought before, so the verification result represents relatively high accuracy and utilization value. This study also suggests the possibility of a general recommendation system that can be applied to various offline product data. If additional data are acquired in the future, the accuracy of the proposed recommendation system can be improved.
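A minimal Python sketch of deriving an RFM-based implicit score from purchase history, as a stand-in for explicit ratings; the column names and the 1-5 mapping are assumptions, not the paper's exact index:

```python
# Hedged sketch: aggregate recency, frequency, and monetary value per user and
# map their percentile ranks onto a 1-5 scale usable as an implicit rating.
import pandas as pd

def rfm_scores(transactions, now):
    """transactions: DataFrame with user_id, item_id, timestamp, amount columns."""
    grouped = transactions.groupby("user_id").agg(
        recency=("timestamp", lambda t: (now - t.max()).days),
        frequency=("item_id", "count"),
        monetary=("amount", "sum"),
    )
    # Percentile ranks: more recent, more frequent, higher spending -> higher score.
    grouped["r"] = 1 - grouped["recency"].rank(pct=True)
    grouped["f"] = grouped["frequency"].rank(pct=True)
    grouped["m"] = grouped["monetary"].rank(pct=True)
    grouped["rfm_score"] = 1 + 4 * grouped[["r", "f", "m"]].mean(axis=1)
    return grouped

tx = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "item_id": ["a", "b", "a", "c", "d", "e"],
    "timestamp": pd.to_datetime(["2019-01-05", "2019-02-01", "2018-11-20",
                                 "2019-02-10", "2019-02-12", "2019-02-15"]),
    "amount": [12.0, 30.0, 8.0, 50.0, 20.0, 15.0],
})
print(rfm_scores(tx, now=pd.Timestamp("2019-03-01")))
```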

Development of a Retrieval Algorithm for Adjustment of Satellite-viewed Cloudiness (위성관측운량 보정을 위한 알고리즘의 개발)

  • Son, Jiyoung;Lee, Yoon-Kyoung;Choi, Yong-Sang;Ok, Jung;Kim, Hye-Sil
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.3
    • /
    • pp.415-431
    • /
    • 2019
  • The satellite-viewed cloudiness, the ratio of cloudy pixels to total pixels (C_sat,prev), inevitably differs from the ground-viewed cloudiness (C_grd) because of the different viewpoints. Here we develop an algorithm to retrieve a satellite-viewed cloudiness adjusted toward C_grd (C_sat,adj). The key process of the algorithm is to convert the cloudiness projected on the plane surface into the cloudiness on the celestial hemisphere seen by the observer. For this conversion, supplementary satellite retrievals such as cloud detection and cloud top pressure are used, as they provide the locations of cloudy pixels and cloud base height information, respectively. The algorithm is tested on Himawari-8 Level 1B data. C_sat,adj and C_sat,prev are retrieved and validated against C_grd from SYNOP stations over Korea (22 stations) and China (724 stations), for daytime only, during the first seven days of every month from July 2016 to June 2017. As a result, the mean error of C_sat,adj (0.61) is less than that of C_sat,prev (1.01). The percentage of detection for the 'Cloudy' scenario is higher for C_sat,adj (73%) than for C_sat,prev (60%). The percent correct (accuracy) of C_sat,adj is 61%, while that of C_sat,prev is 55% for all seasons. For the December-January-February period, when cloudy pixels are readily overestimated, the percent correct of C_sat,adj is 60%, while that of C_sat,prev is 56%. Therefore, we conclude that the present algorithm can effectively bring the satellite-retrieved cloudiness close to the ground-viewed cloudiness.
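A loosely related toy sketch in Python of projecting plane cloudiness onto an observer's hemisphere by solid-angle weighting of cloudy pixels placed at their cloud base height; the geometry, pixel size, and normalization are this sketch's assumptions, not the paper's algorithm:

```python
# Hedged sketch: weight each pixel of a cloud mask by the solid angle its
# cloud-base patch subtends from a ground observer at the grid centre, so
# near-zenith cloud counts more than cloud near the horizon.
import numpy as np

def hemispheric_cloudiness(cloud_mask, base_height_m, pixel_km=2.0):
    """cloud_mask: 2-D boolean array centred on the observer; base_height_m: same shape."""
    ny, nx = cloud_mask.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Horizontal distance of each pixel centre from the observer, in metres.
    dist = np.hypot(y - ny // 2, x - nx // 2) * pixel_km * 1000.0
    elevation = np.arctan2(base_height_m, dist)          # elevation angle of the cloud base
    pixel_area = (pixel_km * 1000.0) ** 2
    slant2 = dist**2 + base_height_m**2
    # Solid angle of a horizontal patch seen from below ~ area * sin(elevation) / slant^2.
    omega = pixel_area * np.sin(elevation) / slant2
    total = omega.sum()
    return float((omega * cloud_mask).sum() / total) if total > 0 else 0.0

# Toy example: one cloudy quadrant with a 2 km cloud base over a 51x51 pixel scene.
mask = np.zeros((51, 51), dtype=bool)
mask[:25, :25] = True
heights = np.full(mask.shape, 2000.0)
print(f"plane cloudiness  = {mask.mean():.2f}")
print(f"hemispheric value = {hemispheric_cloudiness(mask, heights):.2f}")
```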

Conditional Generative Adversarial Network based Collaborative Filtering Recommendation System (Conditional Generative Adversarial Network(CGAN) 기반 협업 필터링 추천 시스템)

  • Kang, Soyi;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.157-173
    • /
    • 2021
  • With the development of information technology, the amount of available information increases daily. However, having access to so much information makes it difficult for users to easily find the information they seek. Users want a system that reduces information retrieval and learning time, saving them from personally reading and judging all available information. As a result, recommendation systems are an increasingly important technology that is essential to business. Collaborative filtering is used in various fields with excellent performance because recommendations are made based on similar user interests and preferences. However, limitations do exist. Sparsity occurs when user-item preference information is insufficient, and it is the main limitation of collaborative filtering. The evaluation values in the user-item matrix may be distorted depending on the popularity of the product, or there may be new users who have not yet evaluated anything. The lack of historical data for identifying consumer preferences is referred to as data sparsity, and various methods have been studied to address this problem. However, most attempts to solve the sparsity problem are not optimal because they can only be applied when additional data, such as users' personal information, social networks, or characteristics of items, are included. Another problem is that real-world score data are mostly biased toward high scores, resulting in severe imbalance. One cause of this imbalanced distribution is purchasing bias: mainly users who rate products highly purchase them, so users with low opinions are less likely to purchase and thus do not leave negative product reviews. Due to these characteristics, unlike most users' actual preferences, reviews by users who purchase products are more likely to be positive. Therefore, the actual rating data are over-learned in the few classes with high incidence due to their biased characteristics, distorting the results. Applying collaborative filtering to these imbalanced data leads to poor recommendation performance due to excessive learning of the biased classes. Traditional oversampling techniques for addressing this problem are likely to cause overfitting because they repeat the same data, which acts as noise in learning and reduces recommendation performance. In addition, pre-processing methods for most existing data imbalance problems are designed for and applied to binary classes. Binary-class imbalance techniques are difficult to apply to multi-class problems because they cannot model multi-class situations, such as objects at cross-class boundaries or objects overlapping multiple classes. To solve this, research has been conducted on converting multi-class problems into binary-class problems. However, simplifying a multi-class problem can cause potential classification errors when the results of classifiers learned from other sub-problems are combined, resulting in the loss of important information about relationships beyond the selected items. Therefore, it is necessary to develop more effective methods to address multi-class imbalance problems. We propose a collaborative filtering model that uses a CGAN to generate realistic virtual data to populate the empty entries of the user-item matrix. A conditional vector y identifies the distributions of minority classes and generates data reflecting their characteristics. Collaborative filtering then maximizes the performance of the recommendation system via hyperparameter tuning.
This process should improve the accuracy of the model by addressing the sparsity problem of collaborative filtering implementations while mitigating the data imbalance arising from real data. Our model shows superior recommendation performance over existing oversampling techniques on real-world data with data sparsity. SMOTE, Borderline-SMOTE, SVM-SMOTE, ADASYN, and GAN were used as comparative models, and our model achieved the highest prediction accuracy on the RMSE and MAE evaluation metrics. Through this study, oversampling based on deep learning may further refine the performance of recommendation systems using real data and be used to build business recommendation systems.
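A minimal PyTorch sketch of a conditional GAN whose generator produces synthetic user-rating rows conditioned on a class vector y, in the spirit of the model described above; the layer sizes and the single training step shown are assumptions, not the authors' architecture:

```python
# Hedged sketch: generator and discriminator both receive the condition vector y
# (e.g. the minority rating class to oversample) concatenated with their input.
import torch
import torch.nn as nn

N_ITEMS, N_CLASSES, LATENT = 100, 5, 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + N_CLASSES, 128), nn.ReLU(),
            nn.Linear(128, N_ITEMS), nn.Sigmoid(),   # synthetic ratings scaled to [0, 1]
        )
    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ITEMS + N_CLASSES, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1), nn.Sigmoid(),
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

# One illustrative generator update (a full training loop would alternate D and G steps).
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCELoss()

y = nn.functional.one_hot(torch.randint(0, N_CLASSES, (16,)), N_CLASSES).float()
z = torch.randn(16, LATENT)
fake = G(z, y)
loss_g = bce(D(fake, y), torch.ones(16, 1))   # generator tries to fool the discriminator
loss_g.backward()
opt_g.step()
```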