• Title/Summary/Keyword: Performance Goal (성능목표)


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing contents is becoming ever more important as content generation continues. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is one of the fields expected to benefit from text data analysis, because it constantly generates new information and the earlier the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract high-quality triples. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and to improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their quality. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports (about 55% of the total) are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we confirm its predictive power and determine whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set, which consists of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show much lower performance than average; this result may be due to interference with other similar items and the generation of new knowledge. In this paper, we propose a methodology to find key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network, without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limitations remain: most notably, the markedly poor performance for only some stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to semantically match new text information with the related stocks.
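
To make the prediction step concrete, the following is a minimal sketch, not the authors' code, of per-stock neural tensor network score functions applied to a one-hot entity vector: the entity is scored by every stock's function, and the argmax gives the predicted stock. The tensor slice count, the tanh nonlinearity, and the random initialization follow the standard NTN formulation and are assumptions here.

```python
import numpy as np

class NTNScorer:
    """One neural-tensor-network score function, trained per stock."""
    def __init__(self, dim, k, rng):
        # Bilinear tensor W, linear term V, bias b, and output weights u.
        self.W = rng.normal(scale=0.1, size=(k, dim, dim))
        self.V = rng.normal(scale=0.1, size=(k, dim))
        self.b = np.zeros(k)
        self.u = rng.normal(scale=0.1, size=k)

    def score(self, e):
        # e: one-hot entity vector of length `dim` (top-100 entities per stock).
        bilinear = np.einsum("d,kde,e->k", e, self.W, e)
        return self.u @ np.tanh(bilinear + self.V @ e + self.b)

rng = np.random.default_rng(0)
dim, k, n_stocks = 100, 4, 30                 # 30 stocks as in the study
scorers = [NTNScorer(dim, k, rng) for _ in range(n_stocks)]

entity = np.zeros(dim); entity[17] = 1.0      # one-hot vector for a new entity
predicted_stock = int(np.argmax([s.score(entity) for s in scorers]))
print(predicted_stock)                        # index of the highest-scoring stock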

The Classification System and Information Service for Establishing a National Collaborative R&D Strategy in Infectious Diseases: Focusing on the Classification Model for Overseas Coronavirus R&D Projects (국가 감염병 공동R&D전략 수립을 위한 분류체계 및 정보서비스에 대한 연구: 해외 코로나바이러스 R&D과제의 분류모델을 중심으로)

  • Lee, Doyeon;Lee, Jae-Seong;Jun, Seung-pyo;Kim, Keun-Hwan
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.127-147 / 2020
  • The world is suffering numerous human and economic losses due to the novel coronavirus infection (COVID-19). The Korean government established a strategy to overcome the national infectious disease crisis through research and development. However, it is difficult to find distinctive features and changes in a specific R&D field when using the existing technical classification or the standard science and technology classification. Recently, a few studies have been conducted to establish a classification system that provides information about infectious disease research investment areas in Korea through a comparative analysis of government-funded research projects. These studies, however, did not provide the information necessary for establishing cooperative research strategies among countries in the infectious disease field, which is required as an execution plan to achieve the goals of national health security and fostering new growth industries. Therefore, it is necessary to study information services based on a classification system and a classification model for establishing a national collaborative R&D strategy. Seven classification categories - Diagnosis_biomarker, Drug_discovery, Epidemiology, Evaluation_validation, Mechanism_signaling pathway, Prediction, and Vaccine_therapeutic antibody - were derived by reviewing South Korea's government-funded research projects related to infectious diseases. A classification model was trained by combining Scopus data with a bidirectional RNN model, and the final model secured robust classification performance with an accuracy of over 90%. For the empirical study, the infectious disease classification system was applied to the coronavirus-related R&D projects of major countries, drawn from STAR Metrics (National Institutes of Health) and the NSF (National Science Foundation) of the United States (US), CORDIS (Community Research & Development Information Service) of the European Union (EU), and KAKEN (Database of Grants-in-Aid for Scientific Research) of Japan. The coronavirus R&D trends of these major countries are mostly concentrated in Prediction, which deals with predicting success in clinical trials at the new drug development stage or predicting toxicity that causes side effects. An intriguing result is that, for all of these nations, the share of national investment in Vaccine_therapeutic antibody, the area aimed at developing vaccines and treatments, was very small (5.1%), which indirectly explains the slow development of vaccines and treatments. Comparative analysis of the investment status of coronavirus-related research projects by country showed that the US and Japan invest relatively evenly across all infectious disease research areas, while Europe invests relatively heavily in specific areas such as Diagnosis_biomarker. Moreover, the classification system provided information on major coronavirus-related research organizations in these countries, thereby supporting the establishment of international collaborative R&D projects.
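
As an illustration of the classifier described above, the following is a minimal sketch, assuming a Keras-style implementation, of a bidirectional RNN that assigns project texts to the seven categories; the vocabulary size, sequence length, and layer widths are illustrative assumptions, not values from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 7                 # Diagnosis_biomarker ... Vaccine_therapeutic antibody
VOCAB, EMB = 20000, 128         # assumed vocabulary size and embedding width

model = tf.keras.Sequential([
    layers.Embedding(VOCAB, EMB),
    layers.Bidirectional(layers.LSTM(64)),   # reads the text in both directions
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)
```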

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a way to solve problems in various fields. Deep learning is known to perform especially well on unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. In spite of its high entry barrier, since analysts must be able to process both image and text data, image captioning has established itself as one of the key fields in AI research owing to its wide applicability. Many studies have been conducted to improve the performance of image captioning in various respects; recent work attempts to create advanced captions that not only describe an image accurately, but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person viewing it. Moreover, the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. Domain experts, in contrast, tend to recognize an image by focusing on the specific elements necessary to interpret it based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the field expertise is transplanted through transfer learning with a small amount of expertise data. However, naive transfer learning with expertise data may cause another problem: simultaneous learning with captions of various characteristics may invoke a so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning from vast amounts of data, most of this interference averages out and has little impact on the results; in fine-tuning on a small amount of data, by contrast, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 'image / expertise caption' pairs were created and used for the expertise transplantation experiments. The experiments confirmed that captions generated by the proposed methodology reflect the perspective of the transplanted expertise, whereas captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. By applying the proposed methodology to expertise transplantation in various fields, we expect further research addressing the lack of expertise data and improving the performance of image captioning.
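
The following is a minimal PyTorch sketch of the transfer-learning step described above: a captioning model pre-trained on general data is fine-tuned on a small expertise set with the visual encoder frozen, so that only the language decoder adapts. The tiny model, its sizes, and the fake batch are placeholder assumptions; the closing comment notes how the character-independent variant would differ.

```python
import torch
import torch.nn as nn

# Dummy stand-in for a pre-trained captioner: a visual encoder plus a
# recurrent language decoder. A real model would be loaded from a checkpoint.
class Captioner(nn.Module):
    def __init__(self, vocab=1000, emb=64):
        super().__init__()
        self.encoder = nn.Linear(2048, emb)          # visual features -> context
        self.embed = nn.Embedding(vocab, emb)
        self.decoder = nn.GRU(emb, emb, batch_first=True)
        self.out = nn.Linear(emb, vocab)

    def forward(self, feats, tokens):
        h = self.encoder(feats).unsqueeze(0)         # image context as initial state
        x, _ = self.decoder(self.embed(tokens), h)
        return self.out(x)

model = Captioner()                                  # pretend: pre-trained on MSCOCO
for p in model.encoder.parameters():                 # freeze general visual features
    p.requires_grad = False

opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
loss_fn = nn.CrossEntropyLoss(ignore_index=0)        # 0 = padding token

feats = torch.randn(4, 2048)                         # fake "expertise" batch (~300 pairs in the paper)
caps = torch.randint(1, 1000, (4, 12))
logits = model(feats, caps[:, :-1])                  # teacher forcing
loss = loss_fn(logits.reshape(-1, 1000), caps[:, 1:].reshape(-1))
opt.zero_grad(); loss.backward(); opt.step()
# The 'Character-Independent Transfer-learning' variant would repeat this
# fine-tuning loop separately for each caption characteristic instead of
# mixing all characteristics in one run.
```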

Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean;Rho, Kyung-Taeg;Park, Myong-Soon
    • The KIPS Transactions:PartC / v.10C no.5 / pp.625-634 / 2003
  • More than a decade after its initial proposal, deployment of IP multicast has been limited by problems with traffic control in multicast routing, multicast address allocation on the global Internet, reliable multicast transport techniques, and so on. Recently, with the growth of multicast application services such as Internet broadcasting and real-time security information services, overlay multicast has been developed as a new Internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that considers switching of the tree. To find a potential parent, the existing search algorithm descends the tree from the root one level at a time, which causes long join latency. It also tries to select the nearest node as a potential parent, but the node's degree limit may prevent this, so the generated tree has low efficiency. To reduce the long join latency and improve the efficiency of the tree, we propose searching two levels of the tree at a time. Each node forwards the join request message to its own children, so there is no overhead for maintaining the tree in ordinary times; when a join request arrives, the increased number of search messages shortens the join latency, and searching more nodes helps construct more efficient trees. To evaluate the performance of our fast join mechanism, we measure metrics such as search latency, the number of searched nodes, and the number of switchings, varying the number of members and the degree limit. The simulation results show that the performance of our mechanism is superior to that of the existing mechanism.
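
The following is a minimal sketch of the proposed two-level search, under the assumption that each probed node can report itself, its children, and its grandchildren in a single exchange; the distance function and degree limit are illustrative, not parameters from the paper.

```python
class Node:
    def __init__(self, name, degree_limit=4):
        self.name, self.degree_limit = name, degree_limit
        self.children = []

    def has_room(self):
        # A node can accept a new child only below its degree limit.
        return len(self.children) < self.degree_limit

def find_parent(root, distance_to):
    """Descend two levels at a time toward the nearest eligible parent."""
    current = root
    while True:
        # One "search message" now covers current, its children, and grandchildren.
        candidates = [current] + current.children + \
                     [g for c in current.children for g in c.children]
        eligible = [n for n in candidates if n.has_room()]
        best = min(eligible, key=distance_to)
        if best is current:
            return best              # no closer eligible node below this point
        current = best               # keep descending from the best candidate

root = Node("root")
a, b = Node("a"), Node("b")
root.children = [a, b]
a.children = [Node("a1"), Node("a2")]
dist = {"root": 5, "a": 2, "b": 4, "a1": 1, "a2": 3}   # illustrative distances
print(find_parent(root, lambda n: dist[n.name]).name)  # "a1", found in one probe
```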

An Energy Efficient Cluster Management Method based on Autonomous Learning in a Server Cluster Environment (서버 클러스터 환경에서 자율학습기반의 에너지 효율적인 클러스터 관리 기법)

  • Cho, Sungchul;Kwak, Hukeun;Chung, Kyusik
    • KIPS Transactions on Computer and Communication Systems / v.4 no.6 / pp.185-196 / 2015
  • Energy-aware server clusters aim to reduce power consumption as much as possible while keeping QoS (Quality of Service), compared to energy-unaware server clusters. They adjust the power mode of each server at fixed or variable time intervals so that only the minimum number of servers needed to handle current user requests stays ON. Previous studies on energy-aware server clusters tried to reduce power consumption further or to keep QoS, but they did not consider energy efficiency well. In this paper, we propose an energy-efficient cluster management method based on autonomous learning for energy-aware server clusters. Using parameters optimized through autonomous learning, our method adjusts server power modes to achieve maximum performance with respect to power consumption. The method repeats the following procedure. First, according to the current load and traffic pattern, it classifies the current workload into a predetermined pattern type. Second, it searches a learning table to check whether learning has already been performed for that workload pattern type; if so, it uses the stored parameters, and otherwise it performs learning for the pattern type to find the best parameters in terms of energy efficiency and stores them. Third, it adjusts the server power modes with those parameters. We implemented the proposed method and performed experiments on a cluster of 16 servers using three different kinds of load patterns. The experimental results show that the proposed method is more energy-efficient than the existing methods: the number of good responses per unit of power consumed in the proposed method is 99.8%, 107.5%, and 141.8% of those in the existing static method, and 102.0%, 107.0%, and 106.8% of those in the existing prediction method, for the banking, real, and virtual load patterns, respectively.
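
The following is a minimal sketch of the three-step control procedure described above: classify the workload pattern, look up or learn parameters for it, and apply them. The classification rule, the candidate parameters, and the efficiency measure are illustrative stand-ins for the paper's autonomous-learning components.

```python
import random

learning_table = {}   # workload pattern type -> optimized parameters

def classify_pattern(load, traffic):
    # Illustrative stand-in: bucket by load level and traffic trend.
    level = "high" if load > 0.7 else "low"
    trend = "rising" if traffic[-1] > traffic[0] else "falling"
    return (level, trend)

def simulate_efficiency(pattern, params):
    # Stand-in for "good responses per unit power" measured in experiments.
    return random.random()

def autonomous_learning(pattern):
    # Placeholder parameter search: try candidates, keep the most efficient.
    candidates = [{"interval_s": i, "margin": m}
                  for i in (5, 10, 30) for m in (1, 2)]
    return max(candidates, key=lambda p: simulate_efficiency(pattern, p))

def manage_cluster(load, traffic):
    pattern = classify_pattern(load, traffic)        # step 1: classify workload
    if pattern not in learning_table:                # step 2: learn on first encounter
        learning_table[pattern] = autonomous_learning(pattern)
    return learning_table[pattern]                   # step 3: parameters to apply

print(manage_cluster(load=0.8, traffic=[10, 14, 18]))
```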

A Study of Guide System for Cerebrovascular Intervention (뇌혈관 중재시술 지원 가이드 시스템에 관한 연구)

  • Lee, Sung-Gwon;Jeong, Chang-Won;Yoon, Kwon-Ha;Joo, Su-Chong
    • Journal of Internet Computing and Services / v.17 no.1 / pp.101-107 / 2016
  • Owing to recent advances in digital imaging technology, the development of intervention equipment has become widespread. An image-guided intervention is a procedure in which a tiny catheter and a guide wire are inserted into the body, so high-quality X-ray images should be used to enhance the effectiveness and safety of the treatment; the resulting increase in radiation exposure, however, is a problem, and studies to improve the performance of X-ray detectors are actively under way. Moreover, such interventions rely on angiographic imaging and 3D medical image processing as references. In this paper, we propose a guide system to support cerebrovascular intervention. It addresses the limitations of representing cerebrovascular disease with existing 2D medical images of vessels, and it guides the intervention catheter and guide wire with real-time tracking and an optimal route to the target lesion. The system consists of a medical image acquisition unit, an image processing unit, and a display device. In the experimental environment, the guide services provided by the proposed system were tested with X-ray images of a brain phantom (complete intracranial model with aneurysms, ref H+N-S-A-010). For image processing, a reference image was generated from the cerebral blood vessel model by applying a Laplacian-based algorithm to the DICOM data with a volume ray casting technique. The A* algorithm was used to provide a tracking path for the catheter and guide wire. Finally, the results show the locations of the catheter and guide wire in the proposed system, which is expected to provide a useful guide for future intervention services.
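
The following is a minimal sketch of the A* step mentioned above, finding a route through a vessel map discretized as a grid (1 = vessel voxel, 0 = blocked). The grid, 4-neighborhood, and Manhattan heuristic are illustrative; the paper applies A* to the segmented 3D cerebrovascular model.

```python
import heapq

def a_star(grid, start, goal):
    def h(p):                                    # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]      # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:                    # already expanded
            continue
        came_from[node] = parent
        if node == goal:                         # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node); node = came_from[node]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 1
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, node))
    return None                                  # no route through the vessel map

vessel = [[1, 1, 0],
          [0, 1, 1],
          [0, 0, 1]]
print(a_star(vessel, (0, 0), (2, 2)))   # [(0,0), (0,1), (1,1), (1,2), (2,2)]
```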

Development of a Ranging Inspection Technique in a Sodium-cooled Fast Reactor Using a Plate-type Ultrasonic Waveguide Sensor (판형 웨이브가이드 초음파 센서를 이용한 소듐냉각고속로 원격주사 검사기법 개발)

  • Kim, Hoe Woong;Kim, Sang Hwal;Han, Jae Won;Joo, Young Sang;Park, Chang Gyu;Kim, Jong Bum
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.25 no.1 / pp.48-57 / 2015
  • In a sodium-cooled fast reactor, a Generation-IV reactor, refueling is conducted by rotating, but not opening, the reactor head to prevent a reaction between the sodium and water or air. Therefore, prior to refueling, an inspection technique is essential for checking for any obstacles between the reactor core and the upper internal structure that could disturb the rotation of the reactor head. An ultrasound-based technique must be employed because the opacity of sodium prevents conventional optical inspection techniques from being applied to obstacle monitoring. In this study, a ranging inspection technique using a plate-type ultrasonic waveguide sensor was developed to monitor the presence of obstacles between the reactor core and the upper internal structure in the opaque sodium. Because the waveguide sensor places its ultrasonic transducer in a relatively cold region and transmits the ultrasonic waves into the hot radioactive liquid sodium through a long waveguide, it offers better reliability and is less susceptible to thermal or radiation damage. A 10 m horizontal-beam waveguide sensor capable of radiating an ultrasonic wave horizontally was developed, and beam profile measurements and basic experiments were carried out to investigate its characteristics. The beam width and propagation distance of the radiated ultrasonic wave were assessed from the experimental results. Finally, a feasibility test using cylindrical targets (corresponding to the shape of possible obstacles) was conducted to evaluate the applicability of the developed ranging inspection technique to actual use.
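
As a worked example of the pulse-echo ranging principle underlying the technique, the distance to an obstacle follows from half the round-trip time of the ultrasonic pulse. The sound speed in liquid sodium used below is an assumed illustrative value, not a figure from the paper.

```python
SOUND_SPEED_SODIUM = 2400.0   # m/s, rough value for hot liquid sodium (assumption)

def obstacle_distance(round_trip_s, speed=SOUND_SPEED_SODIUM):
    """Pulse-echo range: the pulse travels to the target and back."""
    return speed * round_trip_s / 2.0

# An echo received 5 ms after transmission corresponds to a target ~6 m away:
print(obstacle_distance(5e-3))   # 6.0
```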

Carbon Monoxide Dispersion in an Urban Area Simulated by a CFD Model Coupled to the WRF-Chem Model (WRF-Chem 모델과 결합된 CFD 모델을 활용한 도시 지역의 일산화탄소 확산 연구)

  • Kwon, A-Rum;Park, Soo-Jin;Kang, Geon;Kim, Jae-Jin
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.679-692 / 2020
  • We coupled a CFD model to the WRF-Chem model (WRF-CFD model) and investigated the characteristics of flows and carbon monoxide (CO) distributions in a building-congested district, validating the simulated results against measured wind speeds, wind directions, and CO concentrations. The WRF-Chem model simulated winds from southwesterly to southeasterly and overestimated the measured wind speeds. Statistical validation showed that the WRF-CFD model simulated the measured wind speeds more realistically than the WRF-Chem model. The WRF-Chem model significantly underestimated the measured CO concentrations, and the WRF-CFD model improved the CO concentration predictions. Based on the statistical validation, the WRF-CFD model improved the prediction of CO concentrations by taking into account the complicated distribution of buildings and the mobile sources of CO. At 04 KST on May 22, there was a downdraft around the AQMS, and airflow with a relatively low CO concentration was advected from the upper layer, so the CO concentration at the AQMS was lower than in the surrounding area. At 15 KST on May 22, there was an updraft around the AQMS, which resulted in a slightly higher CO concentration than the surroundings. The WRF-CFD model transported CO emitted from mobile sources up to the AQMS measurement altitude, reproducing the measured CO concentration well. At 18 KST on May 22, the WRF-CFD model simulated high CO concentrations because of high CO emissions, a broad updraft area, and an increase in turbulent diffusion caused by increased wind shear near the ground.
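
The following is a minimal sketch of the kind of statistical validation referred to above, comparing simulated and measured values with common verification scores. The abstract does not list the specific metrics used, so bias and RMSE, and all numbers, are illustrative assumptions.

```python
import numpy as np

def bias(sim, obs):
    # Mean error: negative values indicate underestimation.
    return float(np.mean(sim - obs))

def rmse(sim, obs):
    # Root-mean-square error between simulation and measurement.
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

obs      = np.array([0.62, 0.55, 0.71])   # measured CO (illustrative values, ppm)
wrf_chem = np.array([0.20, 0.18, 0.25])   # strong underestimation, as reported
wrf_cfd  = np.array([0.58, 0.57, 0.66])   # closer to the measurements

for name, sim in (("WRF-Chem", wrf_chem), ("WRF-CFD", wrf_cfd)):
    print(name, "bias:", round(bias(sim, obs), 3), "RMSE:", round(rmse(sim, obs), 3))
```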

On the Meteorological Requirements for the Geostationary Communication, Oceanography, and Meteorological Satellite (정지궤도 통신해양기상위성의 기상분야 요구사항에 관하여)

  • Ahn, Myung-Hwan;Kim, Kum-Lan
    • Atmosphere / v.12 no.4 / pp.20-42 / 2002
  • Based on the "Mid to Long Term Plan for Space Development", a project to launch COMeS (Communication, Oceanography, and Meteorological Satellite) into geostationary orbit is under way. Accordingly, KMA (Korea Meteorological Administration) has defined the meteorological missions and prepared the user requirements to fulfill them. To make the user requirements realistic, we prepared a first draft based on the ideal meteorological products derivable from a geostationary platform and sent an RFI (request for information) to sensor manufacturers. Based on the responses to the RFI and other considerations, we revised the user requirements into a realistic plan for the 2008 launch of the satellite. This manuscript briefly introduces the revised user requirements. The major mission defined therein is to augment the ability to detect and predict severe weather phenomena, especially around the Korean Peninsula. The required payload is an enhanced Imager that includes the major observation channels of the current geostationary sounder. To derive the required meteorological products from the Imager, at least 12 channels are required, with an optimum of 16. The minimum 12 channels comprise the 6 wavelength bands used by current geostationary satellites plus additional channels in two visible bands, one near-infrared band, two water vapor bands, and one ozone absorption band. From these enhanced channel observations, we plan to derive and utilize information on water vapor, stability indices, and wind fields, and to analyze special weather phenomena such as yellow sand (Asian dust) events, in addition to the standard products derived from current geostationary Imager data. For better temporal coverage, the Imager is required to acquire full-disk data within 15 minutes and to have a rapid scan mode for limited-area coverage. The required threshold spatial resolutions are 1 km for visible and 2 km for infrared channels, while the target resolutions are 0.5 km and 1 km, respectively.
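
The channel and coverage requirements quoted above can be restated compactly as a configuration structure; the sketch below mirrors the figures in the text, with the field names chosen for illustration only.

```python
# Imager user requirements as stated in the abstract (field names are illustrative).
IMAGER_REQUIREMENTS = {
    "channels_minimum": 12,
    "channels_optimum": 16,
    "minimum_channel_set": {
        "current_geo_bands": 6,      # bands used by current geostationary satellites
        "visible": 2,
        "near_infrared": 1,
        "water_vapor": 2,
        "ozone_absorption": 1,
    },
    "full_disk_minutes": 15,         # full-disk acquisition time
    "rapid_scan_mode": True,         # for limited-area coverage
    "resolution_km": {
        "visible":  {"threshold": 1.0, "target": 0.5},
        "infrared": {"threshold": 2.0, "target": 1.0},
    },
}

assert sum(IMAGER_REQUIREMENTS["minimum_channel_set"].values()) == 12
```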

A Study on the Governance of U.S. Global Positioning System (미국 글로벌위성항법시스템(GPS)의 거버넌스에 관한 연구 - 한국형위성항법시스템 거버넌스를 위한 제언 -)

  • Jung, Yung-Jin
    • The Korean Journal of Air & Space Law and Policy / v.35 no.3 / pp.127-150 / 2020
  • The Basic Plan for the Promotion of Space Development (hereinafter referred to as the "Basic Plan"), which prescribes mid- and long-term policy objectives and basic directions for space development every five years, is one of the matters deliberated by the National Space Committee. Confirmed by the Committee in February 2018, the 3rd Basic Plan contains an item absent from the 2nd Basic Plan: the construction of the "Korean Positioning System (KPS)". Almost every country in the world, including Korea, has been relying on GPS. Following the shooting down of Korean Air Flight 007 by the Soviet Union, the GPS Standard Positioning Service was opened to the world. Due to technical errors of GPS or conflicts of interest between countries, however, the service can be interrupted at any time, and such a cessation could bring extensive damage to the social, economic, and security domains of every country. This is why several countries have been constructing independent global or regional satellite navigation systems: the EU (Galileo), Russia (GLONASS), India (NavIC), Japan (QZSS), and China (BeiDou). South Korea is doing the same. Once KPS is built, it is expected to be used in various areas such as transportation, aviation, disaster response, construction, defense, ocean, distribution, and telecommunications. For this, a pan-governmental governance structure needs to be established, and this governance must be based on law. Korea has rich experience in developing and operating individual satellites, but little experience in the simultaneous development and operation of the satellite, ground, and user segments of a system such as KPS. Therefore, we need to review overseas cases in order to minimize trial and error, and the U.S. GPS is a classic example.