
Quantitative Comparisons between CT and $^{68}Ge$ Transmission Attenuation Corrected $^{18}F-FDG$ PET Images: Measured Attenuation Correction vs. Segmented Attenuation Correction

  • Choi, Joon-Young;Woo, Sang-Keun;Choi, Yong;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Byung-Tae
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.1
    • /
    • pp.49-53
    • /
    • 2007
  • Purpose: It has been reported that CT-based measured attenuation correction (CT-MAC) produces radioactivity concentration values significantly higher than those from $^{68}Ge$-based segmented attenuation correction (Ge-SAC) in PET images. However, it was unknown whether this difference resulted from the source (CT vs. Ge) or the type (MAC vs. SAC) of attenuation correction (AC). We evaluated the influence of the source and type of AC on the radioactivity concentration differences between reconstructed PET images in normal subjects and patients. Materials and Methods: Five normal subjects and 35 patients with known or suspected cancer underwent $^{18}F-FDG$ PET/CT. For each subject, attenuation-corrected PET images were reconstructed with the OSEM algorithm (28 subsets, 2 iterations) using 4 methods: CT-MAC, CT-SAC, Ge-MAC, and Ge-SAC. Physiological uptake in normal subjects and pathological uptake in patients were quantitatively compared between the PET images according to the source and type of AC. Results: The SUVs of physiological uptake measured in CT-MAC PET images were significantly higher than those in the other 3 differently corrected PET images. The maximum SUVs of the 145 foci with abnormal FDG uptake were significantly higher in CT-MAC images than in the other 3 corrected PET images, with differences of 2.4% to 5.1% (p<0.001). The SUVs of pathological uptake in Ge-MAC images were significantly higher than those in CT-SAC and Ge-SAC PET images (p<0.001). Conclusion: Quantitative radioactivity values were highest in CT-MAC PET images. The adoption of MAC may contribute more to these differences than the adoption of the CT attenuation map.
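
As a reading aid, here is a minimal sketch of the body-weight-normalized SUV computation that such comparisons rest on; the function name, ROI activity values, dose, and weight are illustrative assumptions, not data from the paper:

```python
# Minimal sketch: body-weight-normalized SUV, the quantity compared
# across the four attenuation-correction methods. Values are illustrative.

def suv_bw(roi_activity_bq_per_ml: float, injected_dose_bq: float,
           body_weight_g: float) -> float:
    """SUV = tissue activity concentration / (injected dose / body weight)."""
    return roi_activity_bq_per_ml / (injected_dose_bq / body_weight_g)

# Hypothetical activity concentrations for one lesion under each AC method.
roi_by_method = {"CT-MAC": 9050.0, "CT-SAC": 8700.0,
                 "Ge-MAC": 8830.0, "Ge-SAC": 8610.0}  # Bq/mL
dose, weight = 370e6, 70000.0  # 370 MBq injected, 70 kg patient

for method, activity in roi_by_method.items():
    print(f"{method}: SUV = {suv_bw(activity, dose, weight):.2f}")
```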

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.131-150
    • /
    • 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is closely related to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is its use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification on all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected. However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed for antecedent search in the text; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM which receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus in which gold-standard answers, such as zero anaphors and their possible antecedents, are provided. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified. Thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor; this is based on binary classification using a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique. It is expected that future work enabling the system to utilize semantic information can lead to a significant performance improvement.
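
When exactly one noun phrase per sequence is the antecedent, the sequence-labeling view described above reduces at inference time to an argmax over candidate positions, trainable with a Pegasos-style subgradient method. A minimal sketch on toy data, assuming placeholder features rather than the paper's feature set:

```python
import numpy as np

# Sketch of antecedent search as sequence labeling with a Pegasos-style
# structural SVM: one noun phrase per candidate sequence is the antecedent,
# so inference is an argmax over positions. Features are placeholders.

def predict(w, np_feats):
    """Pick the position whose feature vector scores highest under w."""
    return int(np.argmax(np_feats @ w))

def train(sequences, gold, dim, lam=0.01, epochs=20, seed=0):
    """Modified-Pegasos subgradient descent on the structured hinge loss."""
    rng = np.random.default_rng(seed)
    w, t = np.zeros(dim), 0
    for _ in range(epochs):
        for i in rng.permutation(len(sequences)):
            t += 1
            eta = 1.0 / (lam * t)           # Pegasos step size
            feats, y = sequences[i], gold[i]
            # Loss-augmented inference: add 0/1 loss for wrong positions.
            scores = feats @ w + (np.arange(len(feats)) != y)
            y_hat = int(np.argmax(scores))
            w *= (1 - eta * lam)            # shrink (regularizer subgradient)
            if y_hat != y:
                w += eta * (feats[y] - feats[y_hat])
    return w

# Toy usage: 3 candidate noun phrases per zero anaphor, 4-dim features.
rng = np.random.default_rng(1)
seqs = [rng.normal(size=(3, 4)) for _ in range(50)]
gold = [int(rng.integers(3)) for _ in range(50)]
w = train(seqs, gold, dim=4)
print(predict(w, seqs[0]), "vs gold", gold[0])
```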

Online news-based stock price forecasting considering homogeneity in the industrial sector

  • Seong, Nohyoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.1-19
    • /
    • 2018
  • Since forecasting stock movements is an important issue both academically and practically, studies on stock price prediction have been actively conducted. Stock price forecasting research can be classified by whether it uses structured or unstructured data, and in detail into technical analysis, fundamental analysis, and media-effect analysis. In the big data era, research on stock price prediction combining big data is actively underway, and it mainly focuses on machine learning techniques. Methods that incorporate media effects have attracted particular attention recently, among which studies that analyze online news and use it to forecast stock prices have become mainstream. Previous studies that predict stock prices through online news mostly perform sentiment analysis of the news, building a different corpus for each company and constructing a dictionary that predicts stock prices by recording responses to past price movements. These studies therefore examine the impact of online news on individual companies; for example, stock movements of Samsung Electronics are predicted using only online news about Samsung Electronics. A method of considering influences among highly related companies has also been studied recently; for example, stock movements of Samsung Electronics are predicted using news about both Samsung Electronics and a highly related company such as LG Electronics. Such studies examine the effect of news from a homogeneous industrial sector on an individual company, with homogeneous industries classified according to the Global Industry Classification Standard (GICS). In other words, existing studies assume that the sectors defined by GICS are homogeneous. However, they neither account for influential companies with high relevance outside the sector nor reflect heterogeneity within the same GICS sector. Our examination of various sectors shows that some industrial sectors are not homogeneous groups. To overcome this limitation, our study proposes a methodology that reflects the heterogeneous effects of the industrial sector on the stock price by applying k-means clustering. Multiple Kernel Learning (MKL) is mainly used to integrate data with various characteristics; it has several kernels, each of which receives and makes predictions from different data. To incorporate the effects of the target firm and its relevant firms simultaneously, we used MKL, with each kernel assigned to predict stock prices from financial-news variables of the industrial group to which the target firm was assigned by k-means cluster analysis. To show that the suggested methodology is appropriate, experiments were conducted on three years of online news and stock prices. The results of this study are as follows. (1) We confirmed that information from the industrial sectors related to the target company contains meaningful information for predicting its stock movements, and that the machine learning algorithm has better predictive power when the news of relevant companies is considered together with the target company's news. (2) It is important to predict stock movements with a number of clusters that varies according to the level of homogeneity in the industrial sector. In other words, when stock prices within an industrial sector are homogeneous, it is better to use the relational effect at the level of the whole industry group, without clustering or with a small number of clusters; when stock prices within the sector are heterogeneous, it is important to cluster the firms into groups. This study contributes by verifying that firms classified under GICS can be heterogeneous, and by suggesting that relevance should be defined through machine learning and statistical analysis rather than taken directly from GICS. It also demonstrates the efficiency of a prediction model that reflects this heterogeneity.
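
A minimal sketch of the two ingredients on synthetic data: k-means clustering over price series to form the peer group, and a fixed-weight sum of kernels as a simple stand-in for full Multiple Kernel Learning. The feature construction and kernel weights here are assumptions for illustration, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

# (1) k-means groups firms whose price series move together, replacing the
# fixed GICS sector. (2) A fixed-weight kernel sum fuses target-firm news
# features with peer-group news features. All data below are synthetic.

rng = np.random.default_rng(0)
returns = rng.normal(size=(30, 250))             # 30 firms x 250 daily returns
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(returns)
peers = np.flatnonzero(clusters == clusters[0])  # firms grouped with firm 0
print(f"{len(peers)} firms share firm 0's cluster")

n_days = 200
x_target = rng.normal(size=(n_days, 50))  # daily news features, target firm
x_peers = rng.normal(size=(n_days, 50))   # aggregated news features of peers
y = rng.integers(0, 2, size=n_days)       # next-day up/down label

# Combined kernel: weighted sum of per-source RBF kernels (weights assumed).
K = 0.6 * rbf_kernel(x_target) + 0.4 * rbf_kernel(x_peers)
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```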

The Korean Cough Guideline: Recommendation and Summary Statement

  • Rhee, Chin Kook;Jung, Ji Ye;Lee, Sei Won;Kim, Joo-Hee;Park, So Young;Yoo, Kwang Ha;Park, Dong Ah;Koo, Hyeon-Kyoung;Kim, Yee Hyung;Jeong, Ina;Kim, Je Hyeong;Kim, Deog Kyeom;Kim, Sung-Kyoung;Kim, Yong Hyun;Park, Jinkyeong;Choi, Eun Young;Jung, Ki-Suck;Kim, Hui Jung
    • Tuberculosis and Respiratory Diseases
    • /
    • v.79 no.1
    • /
    • pp.14-21
    • /
    • 2016
  • Cough is one of the most common symptoms of many respiratory diseases. The Korean Academy of Tuberculosis and Respiratory Diseases organized a cough guideline committee, which developed this guideline. Its purpose is to help clinicians correctly diagnose and efficiently treat patients with cough. In this article, we present the recommendations and a summary of the Korean cough guideline, together with algorithms for acute, subacute, and chronic cough. For chronic cough, upper airway cough syndrome (UACS), cough variant asthma (CVA), and gastroesophageal reflux disease (GERD) should be considered. If UACS is suspected, a first-generation antihistamine and a nasal decongestant can be used empirically. In CVA, an inhaled corticosteroid is recommended to improve cough; in GERD, a proton pump inhibitor is recommended. Chronic bronchitis, bronchiectasis, bronchiolitis, lung cancer, aspiration, angiotensin-converting enzyme inhibitors, habit, psychogenic cough, interstitial lung disease, environmental and occupational factors, tuberculosis, obstructive sleep apnea, peritoneal dialysis, and idiopathic cough can also be considered as causes of chronic cough. The level of evidence for treatment is mostly low; thus, many recommendations in this guideline are based on expert opinion. Further studies on the treatment of cough are needed.
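
For readers tracing the chronic-cough branch, the empiric first-line recommendations stated above can be summarized as a simple lookup. This sketch is purely a reading aid for the text, not a clinical decision tool:

```python
# Chronic-cough branch of the guideline, as stated in the abstract:
# suspected cause -> empiric first-line therapy. Reading aid only.

EMPIRIC_THERAPY = {
    "UACS": "first-generation antihistamine + nasal decongestant",
    "CVA": "inhaled corticosteroid",
    "GERD": "proton pump inhibitor",
}

def empiric_therapy(suspected_cause: str) -> str:
    # Fall through to the guideline's longer list of other causes.
    return EMPIRIC_THERAPY.get(suspected_cause,
                               "evaluate other causes of chronic cough")

print(empiric_therapy("CVA"))
```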

Context Prediction Using Right and Wrong Patterns to Improve Sequential Matching Performance for More Accurate Dynamic Context-Aware Recommendation

  • Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.19 no.3
    • /
    • pp.51-67
    • /
    • 2009
  • Developing an agile recommender system for nomadic users has been regarded as a promising application in mobile and ubiquitous settings. To increase the quality of personalized recommendation in terms of accuracy and elapsed time, estimating the user's future context correctly is crucial. Traditionally, time series analysis and Markovian processes have been adopted for such forecasting. However, these methods are not adequate for predicting context data, because most context data are represented on a nominal scale. To resolve this limitation, the alignment-prediction algorithm has been suggested for context prediction, especially for predicting future context from low-level context. Recently, an ontological approach has been proposed for guided context prediction without context history. However, due to the variety of context information, acquiring sufficient context prediction knowledge a priori is not easy in most service domains. Hence, the purpose of this paper is to propose a novel context prediction methodology that does not require a priori knowledge, increases accuracy, and decreases elapsed time for service response. To do so, we have developed a new pattern-based context prediction approach. First of all, a set of individual rules is derived from each context attribute using context history. Then a pattern, consisting of the results of reasoning with the individual rules, is built for pattern learning. If at least one context property matches, say R, the pattern is regarded as right: if the pattern is new, it is added as a right pattern with the mismatched properties set to 0, frequency set to 1, and weight w(R, 1); otherwise, the frequency of the matched right pattern is increased by 1 and the weight set to w(R, freq). After training, right patterns whose frequency is greater than a threshold value are saved in the knowledge base. On the other hand, if at least one context property mismatches, say W, the pattern is regarded as wrong: if the pattern is new, the result is modified into the wrong answer and the pattern is added as a wrong pattern with frequency 1 and weight w(W, 1); otherwise, the matched wrong pattern's frequency is increased by 1 and the weight set to w(W, freq). After training, wrong patterns whose frequency is greater than a threshold level are likewise saved in the knowledge base. Context prediction is then performed with combinatorial rules as follows: first, identify the current context; second, search for a match among the right patterns; if no right pattern matches, search among the wrong patterns; if no matching pattern is found at all, choose the context property whose predictability is higher than that of any other property. To show the feasibility of the proposed methodology, we collected actual context history from travelers who had visited the largest amusement park in Korea; 400 context records were collected in 2009. We randomly selected 70% of the records as training data and used the rest as testing data. To examine the performance of the methodology, prediction accuracy and elapsed time were chosen as measures, and the performance was compared with case-based reasoning (CBR) and voting methods. Through a simulation test, we conclude that our methodology is clearly better than the CBR and voting methods in terms of both accuracy and elapsed time, which shows that the methodology is valid and scalable. In a second round of experiments, we compared a full model to a partial model. The full model uses both right and wrong patterns for reasoning about the future context, whereas the partial model reasons only with right patterns, as is generally done in the legacy alignment-prediction method. The full model turned out to be better in terms of accuracy, while the partial model is better when considering elapsed time. In a last experiment, we took into consideration potential privacy problems that might arise among users. To mitigate such concerns, we excluded context properties such as the date of the tour and user profile attributes such as gender and age. The outcome shows that the cost of preserving privacy is endurable. The contributions of this paper are as follows. First, academically, we have improved sequential matching methods in prediction accuracy and service time by considering individual rules for each context property and by learning from wrong patterns. Second, the proposed method is quite effective for privacy-preserving applications, which are frequently required by B2C context-aware services; a privacy-preserving system applying the proposed method can also decrease elapsed time, so the method is very practical for establishing privacy-preserving context-aware services. Future research issues, reflecting some limitations of this paper, are as follows. First, user acceptance and usability will be tested with actual users to prove the value of the prototype system. Second, we will apply the proposed method to more general application domains, as this paper focused on tourism in an amusement park.
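
A minimal sketch of the right/wrong pattern store described above, with placeholder rules and toy data; the threshold value and the fallback behavior are simplified assumptions:

```python
from collections import Counter

# Sketch: a pattern is the tuple of outputs from the per-attribute rules.
# Patterns that predicted correctly are tallied as "right", mispredictions
# as "wrong"; only patterns whose frequency clears a threshold survive
# training. Rules and context data below are placeholders.

THRESHOLD = 2
right, wrong = Counter(), Counter()

def learn(pattern: tuple, actual: tuple) -> None:
    (right if pattern == actual else wrong)[pattern] += 1

def finish_training():
    keep = lambda counts: {p for p, f in counts.items() if f >= THRESHOLD}
    return keep(right), keep(wrong)

def predict(current: tuple, right_kb: set, wrong_kb: set):
    if current in right_kb:   # matched right pattern: trust it
        return current
    if current in wrong_kb:   # known-bad pattern: flag for correction
        return None
    return current            # fall back to per-property prediction

# Toy training: (rule-output pattern, actual next context) pairs.
history = [(("rain", "pm"), ("rain", "pm"))] * 3 + \
          [(("sun", "am"), ("rain", "am"))] * 2
for pat, actual in history:
    learn(pat, actual)
rkb, wkb = finish_training()
print(predict(("rain", "pm"), rkb, wkb))
```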

Accuracy Evaluation of CT-Based Attenuation Correction in SPECT with Different Energy of Radioisotopes

  • Kim, Seung Jeong;Kim, Jae Il;Kim, Jung Soo;Kim, Tae Yeop;Kim, Soo Mee;Woo, Jae Ryong;Lee, Jae Sung;Kim, Yoo Kyeong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.17 no.1
    • /
    • pp.25-29
    • /
    • 2013
  • Purpose: In this study, we evaluated the accuracy of CT-based attenuation correction (AC) under a conventional CT protocol (140 kVp, effective energy of about 50-60 keV) by comparing SPECT image quality for radioisotopes of different energies: $^{201}Tl$, $^{99m}Tc$, and $^{131}I$. Materials and Methods: Using a cylindrical phantom, three SPECT scans were performed with $^{201}Tl$ (70 keV, 55.5 MBq), $^{99m}Tc$ (140 keV, 281.2 MBq), and $^{131}I$ (364 keV, 96.2 MBq). The CT image was obtained at 140 kVp and 2.5 mA on a GE Hawkeye 4. OSEM reconstruction was performed with 2 iterations and 10 subsets. Images were reconstructed under 4 conditions: no AC and no scatter correction (SC), AC only, SC only, and both AC and SC, and were evaluated in terms of uniformity and the center-to-peripheral ratio (CPR). Results: The uniformity was calculated over the uniform whole region of the reconstructed images. For $^{201}Tl$ and $^{99m}Tc$, uniformity improved by about 10-20% when AC was applied, but decreased by about 2% when SC was applied. The uniformity of $^{131}I$ was slightly increased only when both AC and SC were applied. The CPR of the reconstructed image was close to 1 when AC was applied for the $^{201}Tl$ and $^{99m}Tc$ scans, whereas for $^{131}I$ it deviated from 1 when only AC was applied. Conclusion: Image uniformity was improved by AC for low-energy radioisotopes such as $^{201}Tl$ and $^{99m}Tc$. However, for a high-energy radioisotope such as $^{131}I$, image uniformity improved only when both AC and SC were applied.
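
A short sketch of the two quality measures on a synthetic slice, using common phantom-QC definitions (integral uniformity, and CPR as a center-over-edge count ratio); the ROI sizes and array values are assumptions:

```python
import numpy as np

# Sketch of the two image-quality measures on a synthetic reconstructed
# slice of a uniform phantom. CPR is ~1 when attenuation is fully corrected.

def uniformity(roi: np.ndarray) -> float:
    """Integral uniformity: (max - min) / (max + min) over the ROI."""
    return (roi.max() - roi.min()) / (roi.max() + roi.min())

def cpr(img: np.ndarray, r_center: int = 8, width: int = 8) -> float:
    """Mean counts in a central ROI over mean counts near the edge."""
    cy, cx = np.array(img.shape) // 2
    center = img[cy - r_center:cy + r_center, cx - r_center:cx + r_center]
    edge = np.concatenate([img[:width].ravel(), img[-width:].ravel()])
    return center.mean() / edge.mean()

rng = np.random.default_rng(0)
slice_ac = rng.normal(100, 5, size=(64, 64))  # e.g. AC applied: flat profile
print(f"uniformity={uniformity(slice_ac):.3f}, CPR={cpr(slice_ac):.3f}")
```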


Operational Ship Monitoring Based on Multi-platforms (Satellite, UAV, HF Radar, AIS)

  • Kim, Sang-Wan;Kim, Donghan;Lee, Yoon-Kyung;Lee, Impyeong;Lee, Sangho;Kim, Junghoon;Kim, Keunyong;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.2_2
    • /
    • pp.379-399
    • /
    • 2020
  • The detection of illegal ships is one of the key factors in building a marine surveillance system. Effective marine surveillance requires the means for continuous monitoring over a wide area. In this study, the possibility of ship detection monitoring based on the integration of satellite SAR, HF radar, UAV, and AIS was investigated. Considering the temporal and spatial resolution characteristics of each platform, the ship monitoring scenario consists of a regular surveillance system using HF radar and AIS data, and an event monitoring system using satellites and UAVs. The regular surveillance system still has limitations in detecting small ships, and in accuracy, due to the low spatial resolution of HF radar data. However, the event monitoring system effectively detects illegal ships by cross-checking satellite SAR detections against AIS data, and the ship speed and heading estimated from SAR images, or ship tracking information from HF radar data, can serve as the main information for handing over to UAV monitoring. To validate the monitoring scenario, a comprehensive field experiment was conducted from June 25 to June 26, 2019, on the west side of Hongwon Port in Seocheon. KOMPSAT-5 SAR images, UAV data, HF radar data, and AIS data were successfully collected and analyzed by applying each developed algorithm. The developed system will be the basis for the regular and event ship monitoring scenarios, as well as for the visualization of data and analysis results collected from the multiple platforms.
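
A minimal sketch of the SAR-AIS cross-check that yields illegal-ship candidates for UAV follow-up; the distance gate and coordinates are toy assumptions, and a real system would match in time as well as space:

```python
import numpy as np

# Sketch: any SAR-detected ship position with no AIS report within a
# distance gate is flagged as a "dark ship" candidate for UAV tasking.

def flag_dark_ships(sar_xy: np.ndarray, ais_xy: np.ndarray,
                    gate_m: float = 500.0) -> np.ndarray:
    """Return indices of SAR detections with no AIS track within gate_m."""
    d = np.linalg.norm(sar_xy[:, None, :] - ais_xy[None, :, :], axis=2)
    return np.flatnonzero(d.min(axis=1) > gate_m)

sar = np.array([[1000.0, 2000.0], [5000.0, 5200.0]])  # detections (m, planar)
ais = np.array([[1100.0, 2050.0]])                    # reported AIS positions
print("dark-ship candidates:", flag_dark_ships(sar, ais))
```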

R-lambda Model based Rate Control for GOP Parallel Coding in A Real-Time HEVC Software Encoder

  • Kim, Dae-Eun;Chang, Yongjun;Kim, Munchurl;Lim, Woong;Kim, Hui Yong;Seok, Jin Wook
    • Journal of Broadcast Engineering
    • /
    • v.22 no.2
    • /
    • pp.193-206
    • /
    • 2017
  • In this paper, we propose a rate control method based on the $R-{\lambda}$ model that supports a parallel encoding structure at the GOP level or IDR-period level for 4K UHD input video in real time. For this, a slice-level bit allocation method is proposed for parallel encoding instead of sequential encoding. When a rate control algorithm is applied with GOP-level or IDR-period-level parallelism, the information on how many bits have been consumed cannot be shared among frames belonging to the same frame level, except at the lowest frame level of the hierarchical-B structure. Therefore, the bit budget cannot be managed with the existing bit allocation method. To solve this problem, we improve on conventional bit allocation procedures, which allocate target bits sequentially in encoding order: the proposed strategy first assigns the target bits to GOPs, then distributes each GOP's target bits from the lowest depth level to the highest depth level of the HEVC hierarchical-B structure. In addition, we propose a preprocessing method that improves subjective image quality by allocating bits according to the coding complexity of each frame. Experimental results show that the proposed bit allocation method works well for frame-level parallel HEVC software encoders, and confirm that the performance of our rate controller can be improved by a more elaborate bit allocation strategy using the preprocessing results.
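
A minimal sketch of the two-stage allocation on made-up numbers: per-GOP budgets first (so parallel GOPs need no cross-GOP bit accounting), then a weighted split across hierarchical-B depth levels; the level weights and the $\alpha$, $\beta$ of the $R-{\lambda}$ mapping are illustrative assumptions, not the paper's values:

```python
# Sketch of GOP-parallel bit allocation plus the R-lambda mapping.

def allocate_gop_bits(total_bits: int, n_gops: int) -> list[int]:
    """Stage 1: per-GOP budgets, so each GOP can be coded in parallel."""
    return [total_bits // n_gops] * n_gops

def allocate_in_gop(gop_bits: int, level_weights: list[float]) -> list[int]:
    """Stage 2: split a GOP's budget over depth levels by fixed weights,
    lowest depth (most referenced frames) first."""
    s = sum(level_weights)
    return [int(gop_bits * w / s) for w in level_weights]

def bpp_to_lambda(bpp: float, alpha: float = 3.2, beta: float = -1.367) -> float:
    """R-lambda model: lambda = alpha * bpp^beta (alpha, beta illustrative)."""
    return alpha * bpp ** beta

gops = allocate_gop_bits(total_bits=8_000_000, n_gops=4)
levels = allocate_in_gop(gops[0], level_weights=[8, 4, 2, 1])  # depth 0..3
print(levels, f"lambda@0.05bpp={bpp_to_lambda(0.05):.1f}")
```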

Design of a Crowd-Sourced Fingerprint Mapping and Localization System

  • Choi, Eun-Mi;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.9
    • /
    • pp.595-602
    • /
    • 2013
  • WiFi fingerprinting is well known as an effective localization technique for indoor environments. However, this technique requires a large number of pre-built fingerprint maps over the entire space, and due to environmental changes, these maps have to be rebuilt or updated periodically by experts. As a way to avoid this problem, crowd-sourced fingerprint mapping has attracted much interest from researchers. This approach lets many volunteer users share the WiFi fingerprints collected in a specific environment, so crowd-sourced fingerprinting can automatically keep fingerprint maps up to date. In most previous systems, however, individual users were asked to enter their positions manually to build their local fingerprint maps. Moreover, these systems have no principled mechanism to keep fingerprint maps clean by detecting and filtering out erroneous fingerprints collected from multiple users. In this paper, we present the design of a crowd-sourced fingerprint mapping and localization (CMAL) system. The proposed system can not only automatically build and/or update WiFi fingerprint maps from fingerprint collections provided by multiple smartphone users, but also simultaneously track their positions using the up-to-date maps. The CMAL system consists of multiple clients running on individual smartphones to collect fingerprints, and a central server maintaining a database of fingerprint maps. Each client contains a particle filter-based WiFi SLAM engine that tracks the smartphone user's position and builds a local fingerprint map. The server adopts a Gaussian interpolation-based error filtering algorithm to maintain the integrity of the fingerprint maps. Through various experiments, we show the high performance of our system.
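
A minimal sketch of Gaussian interpolation-based error filtering, assuming a single access point and illustrative kernel width and rejection threshold:

```python
import numpy as np

# Sketch: the expected RSSI at a new fingerprint's position is interpolated
# from nearby accepted fingerprints with Gaussian distance weights, and the
# new sample is rejected as erroneous if it deviates too far from the map.

def expected_rssi(pos, map_pos, map_rssi, sigma=2.0):
    """Gaussian-weighted interpolation of RSSI at pos from map samples."""
    d2 = np.sum((map_pos - pos) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return np.sum(w * map_rssi) / np.sum(w)

def accept(pos, rssi, map_pos, map_rssi, max_dev_db=10.0):
    """Keep the fingerprint only if it agrees with the interpolated map."""
    return abs(rssi - expected_rssi(pos, map_pos, map_rssi)) <= max_dev_db

map_pos = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])  # accepted samples
map_rssi = np.array([-50.0, -58.0, -55.0])                # dBm for one AP
print(accept(np.array([1.0, 1.0]), -53.0, map_pos, map_rssi))  # plausible
print(accept(np.array([1.0, 1.0]), -90.0, map_pos, map_rssi))  # outlier
```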

Estimation of river discharge using satellite-derived flow signals and artificial neural network model: application to imjin river

  • Li, Li;Kim, Hyunglok;Jun, Kyungsoo;Choi, Minha
    • Journal of Korea Water Resources Association
    • /
    • v.49 no.7
    • /
    • pp.589-597
    • /
    • 2016
  • In this study, we investigated the use of satellite-derived flow (SDF) signals and a data-based model to estimate outflow for river reaches where in situ measurements are either completely unavailable or difficult to access for hydraulic and hydrologic analysis, such as the upper basin of the Imjin River. Many studies have demonstrated that SDF signals can be used as estimates of river width, and that the correlation between SDF signals and river width is related to the shape of the cross section. To extract the nonlinear relationship between SDF signals and river outflow, an Artificial Neural Network (ANN) model with SDF signals as its inputs was applied to compute the flow discharge at Imjin Bridge on the Imjin River. 15 pixels were considered for extracting SDF signals, and the Partial Mutual Information (PMI) algorithm was applied to identify the most relevant input variables among the 150 candidate SDF signals (including observations lagged by 0 to 10 days). The discharges estimated by the ANN model were compared with those measured at the Imjin Bridge gauging station; the correlation coefficients for training and validation were 0.86 and 0.72, respectively. When the discharge at Imjin Bridge from 1 day earlier was included as an additional input variable for the ANN model, the correlation coefficients improved to 0.90 and 0.83, respectively. Based on these results, SDF signals, along with some locally measured data, can play a useful role in river flow estimation, and especially in flood forecasting for data-scarce regions, as they can simulate the peak discharge and peak time of flood events with satisfactory accuracy.
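
A minimal sketch of the modeling setup on synthetic data: lagged SDF values feed a small feed-forward network; the PMI input-selection step is replaced here by simply using all lags of one pixel, and the network size and data are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch: discharge estimated from 0-10 day lagged SDF signals of one pixel
# with a small ANN. All series below are synthetic stand-ins.

rng = np.random.default_rng(0)
n_days, max_lag = 400, 10
sdf = rng.normal(size=n_days + max_lag)            # one pixel's SDF signal
q = np.convolve(sdf, np.ones(5) / 5, "same")[max_lag:] + \
    0.1 * rng.normal(size=n_days)                  # synthetic discharge target

# Inputs: SDF at lags 0..10 days; target: discharge on the current day.
X = np.column_stack([sdf[max_lag - k:n_days + max_lag - k]
                     for k in range(max_lag + 1)])
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X[:300], q[:300])
r = np.corrcoef(model.predict(X[300:]), q[300:])[0, 1]
print(f"validation correlation: {r:.2f}")
```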