
Net Primary Production Changes over Korea and Climate Factors (위성영상으로 분석한 장기간 남한지역 순 일차생산량 변화: 기후인자의 영향)

  • Hong, Ji-Youn;Shim, Chang-Sub;Lee, Moung-Jin;Baek, Gyoung-Hye;Song, Won-Kyong;Jeon, Seong-Woo;Park, Yong-Ha
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.4
    • /
    • pp.467-480
    • /
    • 2011
  • Spatial and temporal variability of NPP (Net Primary Production) retrieved from two satellite instruments, AVHRR (Advanced Very High Resolution Radiometer, 1981-2000) and MODIS (MODerate-resolution Imaging Spectroradiometer, 2000-2006), was investigated. The ranges of mean NPP from AVHRR and MODIS were estimated to be 894-1068 g·C/m²/yr and 610-694.9 g·C/m²/yr, respectively. The discrepancy between the two instruments is about 325 g·C/m²/yr, and the MODIS product is generally closer to ground measurements than AVHRR, despite limitations on direct comparison such as differences in spatial resolution and vegetation classification. Higher NPP values over South Korea are associated with regions of higher biomass (e.g., mountains) and higher annual temperature. Interannual NPP trends were computed from the two satellite products, and both mean annual trends show a continuous NPP increase over South Korea: 2.14 g·C/m²/yr from AVHRR (1981-2000) and 6.08 g·C/m²/yr from MODIS (2000-2006). In particular, the higher increasing trends over the southwestern region are likely due to the increasing productivity of crop fields under sufficient irrigation and fertilizer use. The retrieved NPP shows a close relationship with monthly temperature and precipitation, with maximum correlation during the summer monsoon. Differences in detection wavelengths and in the model schemes used during retrieval can produce significant differences between the satellite products, and better accuracy in the meteorological and land-use data and in the modeling applications will be necessary to improve satellite-based NPP data.
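The interannual trend figures quoted in this abstract (e.g., 6.08 g·C/m²/yr from MODIS) are slopes of annual mean NPP regressed against time. A minimal sketch of that computation, using hypothetical annual values rather than the actual retrievals, is:

```python
import numpy as np

# Hypothetical annual mean NPP values (g.C/m2/yr) standing in for a
# MODIS-era series; the real values come from the satellite retrievals.
years = np.arange(2000, 2007)
npp = np.array([610.0, 625.0, 638.0, 651.0, 660.0, 678.0, 694.9])

# The least-squares slope is the interannual trend in g.C/m2/yr per year.
slope, intercept = np.polyfit(years, npp, 1)
print(f"NPP trend: {slope:.2f} g.C/m2/yr per year")
```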

An Analysis of IT Trends Using Tweet Data (트윗 데이터를 활용한 IT 트렌드 분석)

  • Yi, Jin Baek;Lee, Choong Kwon;Cha, Kyung Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.143-159
    • /
    • 2015
  • Predicting IT trends has long been an important subject for information systems research. IT trend prediction makes it possible to recognize emerging eras of innovation and to allocate budgets in preparation for rapidly changing technological trends. Toward the end of each year, various domestic and global organizations predict and announce IT trends for the following year. For example, Gartner predicts the top 10 IT trends for the next year, and these predictions affect IT and industry leaders and shape organizations' basic assumptions about technology and the future of IT, but the accuracy of these reports is difficult to verify. Social media data can be a useful tool for such verification. As social media services have gained in popularity, they are used in a variety of ways, from posting about personal daily life to keeping up to date with news and trends. In recent years, rates of social media activity in Korea have reached unprecedented levels: hundreds of millions of users now participate in online social networks and share their opinions and thoughts with colleagues and friends. In particular, Twitter is currently the major microblog service; its central function, the 'tweet', lets users report their current thoughts and actions, comment on news, and engage in discussions. For an analysis of IT trends, we chose tweet data because it not only produces massive unstructured textual data in real time but also serves as an influential channel for opinion leading on technology. Previous studies found that tweet data provides useful information and detects societal trends effectively; these studies also showed that Twitter can track issues faster than other media such as newspapers. Therefore, this study investigates how frequently the IT trends predicted for the following year by public organizations are mentioned on social network services like Twitter. 
IT trend predictions for 2013, announced near the end of 2012 by two domestic organizations, the National IT Industry Promotion Agency (NIPA) and the National Information Society Agency (NIA), were used as the basis for this research. The present study analyzes Twitter data generated in Seoul (Korea) against the predictions of the two organizations to examine the differences. Twitter data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, for processing various unrefined forms of unstructured data. To overcome these challenges, we used SAS IRS (Information Retrieval Studio), developed by SAS, to capture trends while processing big streaming Twitter datasets in real time. The system offers a framework for crawling, normalizing, analyzing, indexing, and searching tweet data. As a result, we crawled the entire Twitter sphere in the Seoul area and obtained 21,589 tweets in 2013 to review how frequently the IT trend topics announced by the two organizations were mentioned by people in Seoul. The results show that most IT trends predicted by NIPA and NIA were frequently mentioned on Twitter, except for some topics such as 'new types of security threat', 'green IT', and 'next generation semiconductor'; since these topics are non-generalized compound words, they may be mentioned on Twitter in other words. To answer whether IT trend tweets from Korea are related to the following year's IT trends in the real world, we compared Twitter's trending topics with those in Nara Market, Korea's online e-Procurement system, a nationwide web-based procurement system dealing with the whole procurement process of all public organizations in Korea. The correlation analysis shows that tweet frequencies on the IT trend topics predicted by NIPA and NIA are significantly correlated with the frequencies of IT topics mentioned in project announcements by Nara Market in 2012 and 2013. 
The main contributions of our research are as follows: i) the IT topic predictions announced by NIPA and NIA can provide an effective guideline to IT professionals and researchers in Korea who are looking for verified IT topic trends for the following year, and ii) researchers can use Twitter to get useful ideas for detecting and predicting dynamic trends in technological and social issues.
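The core of the analysis described above is counting topic mentions in tweets and correlating those counts with frequencies from another source. A stdlib-only sketch of that pipeline, with a hypothetical mini-corpus and made-up procurement counts standing in for the 21,589 tweets and the Nara Market data:

```python
from collections import Counter
import math

# Hypothetical tweets and topic keywords; the real inputs are the crawled
# Seoul tweets and the NIPA/NIA trend predictions.
tweets = [
    "big data is everywhere this year",
    "cloud computing and big data workshop",
    "new smartphone with cloud computing backup",
    "green it is rarely mentioned",
]
topics = ["big data", "cloud computing", "green it"]

tweet_counts = Counter()
for tweet in tweets:
    for topic in topics:
        if topic in tweet:
            tweet_counts[topic] += 1

# Hypothetical project-announcement frequencies from the procurement system.
procurement_counts = {"big data": 40, "cloud computing": 35, "green it": 2}

# Pearson correlation between the two frequency vectors.
xs = [tweet_counts[t] for t in topics]
ys = [procurement_counts[t] for t in topics]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
r = cov / math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
print(dict(tweet_counts), round(r, 3))
```

In practice the counting step would follow the stop-word removal and noun extraction mentioned in the abstract rather than raw substring matching.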

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • Development of technologies in artificial intelligence has been rapidly accelerating with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable artificial intelligence agents to make decisions by using machine-readable and processible knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task and still requires a great deal of effort from experts. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article. 
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of its knowledge, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of the appropriate sentences from which to extract triples, and value selection and transformation into an RDF triple structure. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into the form of triples. To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process. 
Through this proposed process, it is possible to utilize structured knowledge by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required from experts to construct instances according to the ontology schema.
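The final step described above, turning a BIO-tagged sentence into an RDF triple, can be sketched as follows. The tag names, the DBpedia-style identifiers, and the helper function are all illustrative assumptions, not the paper's actual scheme:

```python
# Minimal sketch: collect the tokens tagged as the value span (hypothetical
# B-VAL/I-VAL tags) and emit a subject-predicate-object triple.
def bio_to_triple(tokens, tags, subject, predicate):
    """Join B-VAL/I-VAL tokens into the object of an RDF-style triple."""
    value_tokens = [tok for tok, tag in zip(tokens, tags) if tag in ("B-VAL", "I-VAL")]
    return (subject, predicate, " ".join(value_tokens))

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea"]
tags   = ["O", "O", "O", "O", "O", "B-VAL", "I-VAL"]
triple = bio_to_triple(tokens, tags, "dbr:Seoul", "dbo:country")
print(triple)  # ('dbr:Seoul', 'dbo:country', 'South Korea')
```

In the paper's pipeline the tags would come from the trained CRF or Bi-LSTM-CRF model rather than being given by hand.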

Sequential use of Intramuscular and Oral Progesterone for Luteal Phase Support in in vitro Fertilization (체외수정시술 환자에서 황체기 보강 시 근주 투여와 경구 투여의 연속적 이용)

  • Kim, Sang-Don;Jee, Byung-Chul;Lee, Jung-Ryeol;Suh, Chang-Suk;Kim, Seok-Hyun;Moon, Shin-Yong
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.37 no.1
    • /
    • pp.41-48
    • /
    • 2010
  • Objectives: The aim of this study was to assess the appropriate time to convert intramuscular progesterone support to oral administration for luteal phase support in in vitro fertilization (IVF). Methods: Seventy-six IVF cycles in which a fetal heartbeat was identified after treatment were included. Patients underwent controlled ovarian hyperstimulation with a GnRH agonist long protocol (n=7) or a GnRH antagonist protocol (n=66). Cryopreserved embryo transfer was performed in three cycles. Luteal support was initiated by daily intramuscular injection of progesterone and, after confirmation of the fetal heartbeat, converted to oral micronized progesterone (Utrogestan, Laboratoires Besins International, France) 300 mg daily, before or after 8 gestational weeks. The oral progesterone was continued until 11 weeks. Results: The overall clinical abortion rate was 3.9% (3/76), and the mean time to conversion was 8+4 gestational weeks (46±5.8 days after oocyte retrieval). The abortion rate was 5.6% (1/17) and 3.4% (2/59) in patients converted before 7 weeks and after 8 weeks, respectively, a difference that was not statistically significant (p=0.678). The miscarriages occurred at 9+4 weeks, 11+3 weeks, and 11+4 weeks. Conclusion: Sequential luteal support using intramuscular and oral progesterone yields a relatively low clinical abortion rate. Once the fetal heartbeat is confirmed, the sequential regimen appears to be a safe and convenient method to reduce the patient discomfort caused by multiple injections.
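The comparison of abortion rates between the two conversion groups can be illustrated with a two-sided Fisher exact test on the 2x2 outcome table. The abstract does not state which test produced p=0.678, so this stdlib-only sketch is illustrative rather than a reproduction of the paper's analysis:

```python
from math import comb

# 2x2 table from the abstract: conversion before 7 weeks (1 abortion / 16
# ongoing) vs. after 8 weeks (2 abortions / 57 ongoing).
def fisher_exact_two_sided(a, b, c, d):
    n, r1, c1 = a + b + c + d, a + b, a + c
    def p_table(x):  # hypergeometric probability of the table with cell a = x
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)
    p_obs = p_table(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    # two-sided p: sum over all tables no more probable than the observed one
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

p = fisher_exact_two_sided(1, 16, 2, 57)
print(f"before 7wk: {1/17:.1%}, after 8wk: {2/59:.1%}, p={p:.3f}")
```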

A Study of Sound Expression in Webtoon (웹툰의 사운드 표현에 관한 연구)

  • Mok, Hae Jung
    • Cartoon and Animation Studies
    • /
    • s.36
    • /
    • pp.469-491
    • /
    • 2014
  • Webtoons have developed methods that make it possible to express sound visually, and through advances in web technology we can now also hear sound in webtoons. It is natural to analyze the sound we can hear, but we can also analyze the sound we cannot hear. This study is based on the 'dual code' theory in cognitive psychology: cartoonists can create visual expressions on the basis of auditory impression and memory, and readers can recall the sound through the processes of memory and memory retrieval. This study analyzes both audible and inaudible sound, borrowing its analytic method from film sound theory. In acoustics, the three main factors (volume, pitch, and tone) are recognized by frequency; in comics, they are expressed by the thickness and size of lines and by the image of the sound source. The visual expression of in-screen and off-screen sound is related to the comics frame. Generally, the outside of the frame signifies off-screen sound, but some off-screen sound appears inside the frame. In addition, horror comics use sound heavily for genre effect, as horror films do. When analyzing comics sound with this kind of film sound analysis, we find that webtoons have developed creative expression methods compared with the simple ones of early comics. In particular, arranging frames and expressing sound along the vertical scroll movement are techniques new to webtoons, and the types and arrangement of frames have become more varied. BGM was the first use of audible sound, and recently BGM mixed with sound effects is being used. Programs have also been developed that let readers hear sound as the scroll moves; the horror genre especially heightens its genre effects with this technology. Various methods of visualizing sound are being created, and these changes show that the webtoon can be a model of convergence in content.

A Study on Improvement of Collaborative Filtering Based on Implicit User Feedback Using RFM Multidimensional Analysis (RFM 다차원 분석 기법을 활용한 암시적 사용자 피드백 기반 협업 필터링 개선 연구)

  • Lee, Jae-Seong;Kim, Jaeyoung;Kang, Byeongwook
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.139-161
    • /
    • 2019
  • Use of the e-commerce market has become part of everyday life, and knowing where and how to make reasonable purchases of good-quality products has become important for customers. This change in purchase psychology tends to make it difficult for customers to make purchasing decisions amid vast amounts of information. Here, a recommendation system can reduce the cost of information retrieval and improve satisfaction by analyzing the customer's purchasing behavior. Amazon and Netflix are well-known examples of sales marketing using recommendation systems: in the case of Amazon, 60% of recommendations reportedly led to purchases and a 35% increase in sales was achieved, while Netflix found that 75% of movie selections were made using its recommendation service. This personalization technique is considered one of the key strategies for one-to-one marketing, which is useful in online markets where salespeople do not exist. Recommendation techniques mainly used today include collaborative filtering and content-based filtering; hybrid techniques and association rules that combine them are also used in various fields. Of these, collaborative filtering is the most popular today. Collaborative filtering recommends products preferred by neighbors with similar preferences or purchasing behavior, based on the assumption that users who showed similar tendencies in purchasing or evaluating products in the past will show similar tendencies toward other products. However, most existing systems recommend only within the same category of products, such as books or movies. 
This is because the recommendation system estimates purchase satisfaction with a new, never-bought item using the customer's purchase ratings for similar commodities in the transaction data. In addition, the reliability of the purchase ratings used in recommendation systems is a serious problem. In particular, a 'compensatory review' is a customer purchase rating intentionally manipulated through company intervention. Amazon has in fact cracked down on such compensatory reviews since 2016 and has worked hard to reduce false information and increase credibility. Surveys show that the average rating for products with compensatory reviews is higher than for those without, that compensatory reviews are about 12 times less likely to give the lowest rating, and that they are about 4 times less likely to leave a critical opinion. As such, customer purchase ratings are full of various kinds of noise. This problem directly affects the performance of recommendation systems, which aim to maximize profits by attracting highly satisfied customers in most e-commerce transactions. In this study, we propose new indicators that can objectively substitute for existing customer purchase ratings, using the RFM multidimensional analysis technique to solve this series of problems. RFM multidimensional analysis is the most widely used analytical method in customer relationship management (CRM) marketing and is a data analysis method for selecting customers who are likely to purchase goods. Verifying actual purchase history data with the proposed index yielded an accuracy of about 55%. Since this result comes from recommending a total of 4,386 different types of products that had never been bought before, it represents relatively high accuracy and utilization value. 
This study also suggests the possibility of a general recommendation system applicable to various offline product data. If additional data are acquired in the future, the accuracy of the proposed recommendation system can be improved further.
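The RFM idea underlying the abstract, scoring each customer on Recency, Frequency, and Monetary value from the transaction log instead of relying on explicit ratings, can be sketched as follows. The transaction data and field layout are hypothetical, not the study's dataset:

```python
from datetime import date

# Hypothetical transaction log: (customer, purchase date, amount spent).
purchases = [
    ("A", date(2024, 1, 5), 120.0),
    ("A", date(2024, 3, 1), 80.0),
    ("B", date(2023, 6, 10), 300.0),
    ("B", date(2023, 7, 2), 50.0),
    ("B", date(2023, 8, 20), 40.0),
]
today = date(2024, 4, 1)

def rfm(customer):
    rows = [p for p in purchases if p[0] == customer]
    recency = min((today - d).days for _, d, _ in rows)  # days since last buy
    frequency = len(rows)                                # number of purchases
    monetary = sum(m for _, _, m in rows)                # total spend
    return recency, frequency, monetary

print(rfm("A"), rfm("B"))
```

In practice the three raw values would be binned (e.g., into quantile scores) to form the implicit-feedback indicator that feeds the collaborative filtering step.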

Effects of Gonadotropin Releasing Hormone on Steroidogenesis and Apoptosis of Human Granulosa-Lutein Cells (생식샘자극호르몬분비호르몬이 사람 과립-황체화 세포의 스테로이드 생성과 세포자연사에 미치는 영향)

  • Lee, Hyo-Jin;Yang, Hyun-Won
    • Development and Reproduction
    • /
    • v.13 no.4
    • /
    • pp.353-362
    • /
    • 2009
  • GnRH and its receptor are known to be expressed locally in the ovary and to regulate ovarian function by acting on granulosa and lutein cells. GnRH has been reported to directly cause apoptosis in the granulosa and lutein cells of the ovary. However, whether this apoptosis can be reversed by FSH, an anti-apoptotic factor, is not yet known. In this study, we evaluated apoptosis and the production of progesterone (P4) and estradiol (E2) after treatment with 5, 50, and 100 ng/ml GnRH and 1 IU/ml FSH in granulosa-lutein cells obtained during oocyte retrieval for IVF-ET. DNA fragmentation analysis and TUNEL assay demonstrated that DNA fragmentation and the rate of apoptotic cells increased in a dose-dependent manner, with a significant increase in cells treated with 100 ng/ml GnRH. In addition, we found that FSH suppresses the apoptosis induced by GnRH. Chemiluminescence assays for P4 and E2 showed that P4 production was decreased by GnRH treatment, whereas E2 production was unchanged. We also demonstrated that FSH inhibits the suppressive effect of GnRH on P4 production resulting from apoptosis. These results suggest that the GnRH agonist used in ovarian hyperstimulation protocols might induce ovarian dysfunction, but that this function could be recovered by FSH. They are also expected to serve as basic data for elucidating the physiological role of GnRH and for developing new ovarian hyperstimulation protocols for IVF-ET.


A Performance Comparison of the Mobile Agent Model with the Client-Server Model under Security Conditions (보안 서비스를 고려한 이동 에이전트 모델과 클라이언트-서버 모델의 성능 비교)

  • Han, Seung-Wan;Jeong, Ki-Moon;Park, Seung-Bae;Lim, Hyeong-Seok
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.3
    • /
    • pp.286-298
    • /
    • 2002
  • Remote Procedure Call (RPC) has traditionally been used for inter-process communication (IPC) among processes in distributed computing environments. As distributed applications have become more and more complicated, the Mobile Agent paradigm for IPC has emerged. Because several IPC paradigms exist, research evaluating and comparing the performance of each has appeared recently. But the performance models used in previous research did not reflect real distributed computing environments correctly, because they did not consider the elements required for providing security services. Since real distributed environments are open, they are highly vulnerable to a variety of attacks. To execute applications securely in a distributed computing environment, security services that protect applications and information against these attacks must be considered. In this paper, we evaluate and compare the performance of the Remote Procedure Call and Mobile Agent IPC paradigms. We examine the security services needed to execute applications securely and propose new performance models that take those services into account. We design performance models, describing an information retrieval system accessing N database services, using Petri nets. We compare the performance of the two paradigms by assigning numerical values to the parameters and measuring the execution time of each. The comparison of the two performance models with security services for secure communication shows that the execution time of the RPC performance model increases sharply because of the many communications between hosts under heavyweight cryptography, while the execution time of the Mobile Agent model increases only gradually because the Mobile Agent paradigm reduces the quantity of communication between hosts.
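The qualitative result above, RPC cost growing sharply with N while agent cost grows gently, can be sketched with a toy cost model. All parameter values and the message-count assumption are hypothetical; the paper's actual Petri net models are far more detailed:

```python
# Illustrative cost model: RPC pays an encrypted message exchange per
# database service, while a mobile agent pays one secured migration per
# host and then queries locally.
def rpc_time(n_services, msgs_per_service=4, net=10.0, crypto=20.0):
    # every RPC message crosses the network and is encrypted/decrypted
    return n_services * msgs_per_service * (net + crypto)

def agent_time(n_services, net=10.0, crypto=20.0, local_query=2.0):
    # one secured migration per host, then the database query runs locally
    return n_services * (net + crypto + local_query)

for n in (1, 5, 10):
    print(n, rpc_time(n), agent_time(n))
```

With these assumed costs the gap widens linearly in N; the paper's point is that the cryptographic overhead multiplies the per-message cost, which the agent pays far fewer times.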

Probabilistic Anatomical Labeling of Brain Structures Using Statistical Probabilistic Anatomical Maps (확률 뇌 지도를 이용한 뇌 영역의 위치 정보 추출)

  • Kim, Jin-Su;Lee, Dong-Soo;Lee, Byung-Il;Lee, Jae-Sung;Shin, Hee-Won;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.6
    • /
    • pp.317-324
    • /
    • 2002
  • Purpose: Use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as a standard anatomical framework. While most researchers consult the Talairach atlas to report the localization of activations detected in the SPM program, there is a significant disparity between MNI templates and the Talairach atlas. That disparity between Talairach and MNI coordinates makes the interpretation of SPM results time-consuming, subjective, and inaccurate. The purpose of this study was to develop a program providing objective anatomical information for each x-y-z position in the ICBM coordinate system. Materials and Methods: The program was designed to provide anatomical information for a given x-y-z position in MNI coordinates based on the Statistical Probabilistic Anatomical Map (SPAM) images of the ICBM. When an x-y-z position is given to the program, the names of the anatomical structures with non-zero probability and the probabilities that the given position belongs to those structures are tabulated. The program was coded in IDL and Java for easy porting to any operating system or platform. The utility of this program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying the program to the analysis of a PET brain activation study of human memory in which the anatomical information on the activated areas was previously known. Results: Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved using the program. The validation study showed the relevance of this program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. Conclusion: These programs will be useful for interpreting the results of image analyses performed in MNI coordinates, as done in the SPM program.
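The lookup described in Materials and Methods amounts to indexing one probability volume per structure at the queried voxel and tabulating the non-zero entries. A minimal sketch, with toy volume shapes, structure names, and probabilities standing in for the actual SPAM images:

```python
import numpy as np

# Hypothetical SPAM volumes: one probability map per anatomical structure.
shape = (4, 4, 4)
spam = {
    "hippocampal formation": np.zeros(shape),
    "parahippocampal gyrus": np.zeros(shape),
}
spam["hippocampal formation"][1, 2, 3] = 0.85
spam["parahippocampal gyrus"][1, 2, 3] = 0.10

def label(x, y, z):
    """Tabulate structures with non-zero probability at a voxel, highest first."""
    probs = {name: float(vol[x, y, z]) for name, vol in spam.items() if vol[x, y, z] > 0}
    return dict(sorted(probs.items(), key=lambda kv: -kv[1]))

print(label(1, 2, 3))  # {'hippocampal formation': 0.85, 'parahippocampal gyrus': 0.1}
```

The real program works on 1 mm-resolution ICBM volumes and converts the MNI x-y-z coordinate to a voxel index first.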

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.1-19
    • /
    • 2018
  • Large amounts of data are now available for research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images and can be analyzed by deep learning methodology. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through to the outputs. A CNN has a layer structure well suited for image classification: convolutional layers generate feature maps, pooling layers reduce the dimensionality of the feature maps, and fully-connected layers classify the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of a professional model wearing it. Such images may not be an effective way to train a classification model when one wants to classify street fashion or walking images, which are taken in uncontrolled conditions and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained with far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply transfer learning to our training network. As transfer learning in CNNs is composed of pre-training and fine-tuning stages, we divide the training step into two. 
First, we pre-train our architecture with a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Since we could not find any previously published runway image dataset, we collected one via Google Image Search, attaining 2,426 images from 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We performed 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest the idea of training the model with images capturing all possible postures, denoted as mobility, using our own runway apparel image dataset. Moreover, by applying transfer learning and using the checkpoint and parameters provided by TensorFlow-Slim, we saved time training the classification model, taking only 6 minutes per experiment to train the classifier. This model can be used in many business applications where the query image may be a runway image, product image, or street fashion image. 
Specifically, runway query images can serve a mobile application during fashion week to facilitate brand search, street style query images can be classified and labeled by brand or style during fashion editorial tasks, and website query images can be processed by an e-commerce multi-complex service that provides item information or recommends similar items.
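The three layer roles the abstract describes (convolution producing a feature map, pooling reducing its dimensionality, and a fully-connected layer producing class scores) can be sketched as a single numpy forward pass. The sizes and random weights are illustrative and bear no relation to GoogLeNet itself:

```python
import numpy as np

# Toy forward pass through one conv layer, one pooling layer, and one
# fully-connected layer; all shapes and weights are illustrative.
rng = np.random.default_rng(0)
image = rng.random((8, 8))   # toy grayscale input image
kernel = rng.random((3, 3))  # one convolutional filter

# convolutional layer (valid padding) -> 6x6 feature map
fmap = np.array([[(image[i:i + 3, j:j + 3] * kernel).sum()
                  for j in range(6)] for i in range(6)])

# 2x2 max pooling -> 3x3, reducing the feature map's dimensionality
pooled = fmap.reshape(3, 2, 3, 2).max(axis=(1, 3))

# fully-connected layer over the flattened features -> 32 class scores,
# one per brand in the runway dataset
weights = rng.random((32, pooled.size))
scores = weights @ pooled.ravel()
print(scores.shape)  # (32,)
```

In the paper the feature extractor is the pre-trained GoogLeNet and only the classifier head is fine-tuned on the runway images; this sketch shows only the layer mechanics.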