• Title/Summary/Keyword: data algorithm system

Search Result 6,169, Processing Time 0.043 seconds

LI-RADS Treatment Response versus Modified RECIST for Diagnosing Viable Hepatocellular Carcinoma after Locoregional Therapy: A Systematic Review and Meta-Analysis of Comparative Studies (국소 치료 후 잔존 간세포암의 진단을 위한 LI-RADS 치료 반응 알고리즘과 Modified RECIST 기준 간 비교: 비교 연구를 대상으로 한 체계적 문헌고찰과 메타분석)

  • Dong Hwan Kim;Bohyun Kim;Joon-Il Choi;Soon Nam Oh;Sung Eun Rha
    • Journal of the Korean Society of Radiology
    • /
    • v.83 no.2
    • /
    • pp.331-343
    • /
    • 2022
  • Purpose To systematically compare the performance of liver imaging reporting and data system treatment response (LR-TR) with the modified Response Evaluation Criteria in Solid Tumors (mRECIST) for diagnosing viable hepatocellular carcinoma (HCC) treated with locoregional therapy (LRT). Materials and Methods Original studies of intra-individual comparisons between the diagnostic performance of LR-TR and mRECIST using dynamic contrast-enhanced CT or MRI were searched in MEDLINE and EMBASE, up to August 25, 2021. The reference standard for tumor viability was surgical pathology. The meta-analytic pooled sensitivity and specificity of the viable category using each criterion were calculated using a bivariate random-effects model and compared using bivariate meta-regression. Results For five eligible studies (430 patients with 631 treated observations), the pooled per-lesion sensitivities and specificities were 58% (95% confidence interval [CI], 45%-70%) and 93% (95% CI, 88%-96%) for the LR-TR viable category and 56% (95% CI, 42%-69%) and 86% (95% CI, 72%-94%) for the mRECIST viable category, respectively. The LR-TR viable category provided significantly higher pooled specificity (p < 0.01) than the mRECIST but comparable pooled sensitivity (p = 0.53). Conclusion The LR-TR algorithm demonstrated better specificity than mRECIST, without a significant difference in sensitivity for the diagnosis of pathologically viable HCC after LRT.
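As a rough illustration of the pooling step described in this abstract, the sketch below computes a fixed-effect inverse-variance pooled sensitivity on the logit scale from hypothetical per-study counts. Note that the paper itself uses a bivariate random-effects model, which jointly pools sensitivity and specificity; this simpler univariate form is only meant to show the mechanics.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def pooled_sensitivity(studies):
    """Fixed-effect inverse-variance pooling on the logit scale.

    studies: list of (true_positives, false_negatives) per study.
    A 0.5 continuity correction guards against zero cells.
    """
    num = den = 0.0
    for tp, fn in studies:
        tp, fn = tp + 0.5, fn + 0.5
        sens = tp / (tp + fn)
        var = 1 / tp + 1 / fn          # approx. variance of logit(sens)
        w = 1 / var                    # inverse-variance weight
        num += w * logit(sens)
        den += w
    return inv_logit(num / den)

# Hypothetical per-study viable-lesion counts (TP, FN), not the paper's data
studies = [(30, 22), (25, 18), (40, 28), (15, 11), (20, 14)]
print(round(pooled_sensitivity(studies), 3))
```

The bivariate model used in the paper additionally estimates between-study variance and the correlation between sensitivity and specificity, which this fixed-effect sketch omits.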

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition accelerates and the complexity of change grows, a variety of studies have been conducted on improving firms' short-term performance and enhancing their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. The discovery of promising technologies depends on how a firm evaluates the value of technologies, and many evaluation methods have accordingly been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. Although this approach provides in-depth analysis and ensures the validity of its results, it is usually costly and time-consuming and is limited to qualitative evaluation. Considerable research has attempted to forecast the value of technology using patent information to overcome these limitations. Patent-based technology evaluation has served as a valuable approach to technological forecasting because a patent contains a full and practical description of a technology in a uniform structure, and it provides information that is not divulged in any other source. Although patent-based approaches have contributed to our understanding of how promising technologies can be predicted, they have limitations: predictions are made from past patent information, and the interpretation of patent analyses is not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates patent-information analysis with an artificial intelligence method. The methodology consists of three modules: evaluation of technological promise, implementation of a technology-value prediction model, and recommendation of promising technologies. In the first module, technological promise is evaluated from three different and complementary dimensions: impact, fusion, and diffusion.
The impact of a technology refers to its influence on the development and improvement of future technologies and is closely associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies and represents the breadth of search underlying it. Fusion can be calculated per technology or per patent, so this study measures two fusion indexes: a fusion index per technology and a fusion index per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology-value prediction model is implemented using an artificial intelligence method. This study uses the values of five indexes (impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (t-n, t-n-1, t-n-2, …) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, final promising technologies are recommended based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, yielding a final promisingness score for each technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, but slightly higher for the remaining indexes.
These unexpected results may be explained, in part, by the small number of patents: since this study uses only patent data in class G06F, the sample is relatively small, leading to incomplete learning of the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technologies. This study extends existing knowledge by proposing a new methodology for predicting technology value that integrates patent-information analysis with an artificial intelligence network. It helps managers who plan technology development and policy makers who implement technology policy by providing a quantitative prediction methodology. In addition, it offers other researchers a deeper understanding of the complex field of technological forecasting.
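The second module's learning step can be sketched as follows, under stated assumptions: the data below are synthetic, and the network is a minimal one-hidden-layer design trained by plain backpropagation, since the abstract describes the real model's indexes, lags, and architecture only at that level of detail. Five index values at time t are predicted from the same five indexes at two earlier times (10 inputs).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's data: five index values at time t
# predicted from the same five indexes at t-1 and t-2 (10 inputs).
X = rng.random((200, 10))                # lagged index values (hypothetical)
Y = 0.6 * X[:, :5] + 0.4 * X[:, 5:]     # synthetic targets for the demo

W1 = rng.normal(0, 0.1, (10, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0, 0.1, (8, 5));  b2 = np.zeros(5)   # linear output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)
    return H, H @ W2 + b2

mae_start = np.abs(forward(X)[1] - Y).mean()

lr = 0.5
for _ in range(3000):
    H, P = forward(X)
    err = P - Y                          # dLoss/dP for squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * H * (1 - H)      # backpropagate through the sigmoid
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mae_end = np.abs(forward(X)[1] - Y).mean()
print(mae_start, mae_end)   # mean absolute error before and after training
```

Mean absolute error is printed before and after training because it is the comparison metric the abstract reports against multiple regression.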

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a great deal of research on unstructured data has been conducted. Social media on the Internet generate unstructured or semi-structured data every second, often in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can produce results far from users' intentions. Even though much progress has been made in recent years in improving search engines so that they provide users with appropriate results, there is still much room for improvement. Word sense disambiguation can play a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. The effectiveness of the method is tested with a Naïve Bayes model, a supervised learning algorithm, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary contains approximately 57,000 sentences; the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and sense information. For the experiments, the two resources were tested both combined and separately, using cross validation. Only nouns, the target subjects in word sense disambiguation, were selected.
93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because it is tagged with the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating the sense vectors were added to the named-entity dictionary of the Korean morphological analyzer. Using this extended named-entity dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense-vector model built during pre-processing, sense-tagged terms were determined by vector-space-model-based word sense disambiguation. The experiments also show the effectiveness of the corpus merged from the examples in the Korean standard unabridged dictionary and the Sejong Corpus: better precision and recall are obtained with the merged corpus. The results suggest the method can practically enhance the performance of Internet search engines and help capture the meaning of sentences more accurately in natural language processing tasks such as search, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem and assumes that all senses are independent. Even though this independence assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of the senses in a sentence.
Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
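The Naïve Bayes sense classifier described above can be sketched as a bag-of-context-words model with Laplace smoothing. The training pairs below are invented for illustration (glossed in English for the ambiguous Korean noun "배", ship vs. pear); the paper's actual features and corpus are far larger.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesWSD:
    """Naive Bayes word-sense classifier over bags of context words."""

    def fit(self, examples):
        # examples: iterable of (context_words, sense_id) pairs
        self.sense_counts = Counter()
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for words, sense in examples:
            self.sense_counts[sense] += 1
            for w in words:
                self.word_counts[sense][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, words):
        total = sum(self.sense_counts.values())
        best, best_lp = None, -math.inf
        for sense, n in self.sense_counts.items():
            lp = math.log(n / total)                 # log prior P(sense)
            denom = sum(self.word_counts[sense].values()) + len(self.vocab)
            for w in words:                          # Laplace-smoothed likelihoods
                lp += math.log((self.word_counts[sense][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = sense, lp
        return best

# Toy sense-tagged examples (hypothetical)
train = [(["sail", "sea", "harbor"], "ship"),
         (["sweet", "fruit", "tree"], "pear"),
         (["deck", "sea"], "ship"),
         (["juice", "fruit"], "pear")]
model = NaiveBayesWSD().fit(train)
print(model.predict(["sea", "deck"]))   # → ship
```

The independence assumption the abstract discusses appears here as the simple sum of per-word log likelihoods.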

Development of the Risk Evaluation Model for Rear End Collision on the Basis of Microscopic Driving Behaviors (미시적 주행행태를 반영한 후미추돌위험 평가모형 개발)

  • Chung, Sung-Bong;Song, Ki-Han;Park, Chang-Ho;Chon, Kyung-Soo;Kho, Seung-Young
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.6
    • /
    • pp.133-144
    • /
    • 2004
  • A model and a measure that can evaluate the risk of rear-end collision are developed. Most traffic accidents involve multiple causes, including human factors, vehicle factors, and highway elements, and these factors should be considered when analyzing accident risk and developing safety models. Although most risky situations and accidents on the road result from a driver's poor response to various stimuli, many researchers have modeled risk or accidents by analyzing only the stimuli, without considering the driver's response; consequently, the reliability of those models has turned out to be low. In developing the present model, driver behaviors such as reaction time and deceleration rate are therefore considered. In the past, most studies tried to analyze the relationship between risk and accidents directly, but because of the difficulty of identifying the directional relationships between these factors, they developed models based on indirect factors such as volume and speed. However, examined in detail, risk and accidents are linked by driver behavior: depending on the driver, the risk present in the road-vehicle system may be ignored or may draw the driver's attention. An accident thus depends on how a driver handles risk, so what is most closely related to accident occurrence is not the risk itself but the risk as responded to by the driver. In this study, driver behaviors are therefore incorporated into the model, and three accident-related concepts are introduced to reflect them. Safe stopping distance and accident occurrence probability are used for better understanding and more reliable modeling of risk.
The index representing risk is developed based on measures used in evaluating noise level, and for risk comparison between various situations, an equivalent risk level that considers both intensity and duration is developed by means of a weighted average. Validation is performed with field surveys on an expressway in Seoul, using a test vehicle to collect traffic-flow data such as deceleration rate, speed, and spacing. Based on these data, the risk by section, lane, and traffic-flow condition is evaluated and compared with accident data and traffic conditions. The evaluated risk level corresponds closely to the patterns of actual traffic conditions and accident counts. The model and method developed in this study can be applied to various fields, such as safety testing of traffic flow, establishment of operation and management strategies for reliable traffic flow, and safety testing of control algorithms in advanced safety vehicles.
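Two of the quantities named above can be sketched directly. The safe stopping distance follows the standard reaction-plus-braking form using the driver-behavior parameters (reaction time, deceleration rate) the model considers; the energy-averaged form of the equivalent risk level is an assumption modeled on the equivalent noise level L_eq that the abstract cites as its basis. All numeric values are illustrative.

```python
import math

def safe_stopping_distance(v, reaction_time, decel):
    """Reaction distance plus braking distance.
    v in m/s, reaction_time in s, decel in m/s^2 (driver parameters)."""
    return v * reaction_time + v * v / (2 * decel)

def equivalent_risk_level(levels_durations):
    """Duration-weighted average analogous to the equivalent noise level
    L_eq: intensities are energy-averaged over time (assumed form).
    levels_durations: list of (level, duration) pairs."""
    total_t = sum(t for _, t in levels_durations)
    energy = sum(t * 10 ** (L / 10) for L, t in levels_durations)
    return 10 * math.log10(energy / total_t)

# ~100 km/h, 1.0 s reaction, comfortable 3.4 m/s^2 deceleration
d = safe_stopping_distance(v=27.8, reaction_time=1.0, decel=3.4)
print(round(d, 1))   # stopping distance in metres
```

A constant level over any set of durations averages to itself, which is a quick sanity check on the equivalent-level form.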

K-DEV: A Borehole Deviation Logging Probe Applicable to Steel-cased Holes (철재 케이싱이 설치된 시추공에서도 적용가능한 공곡검층기 K-DEV)

  • Yoonho, Song;Yeonguk, Jo;Seungdo, Kim;Tae Jong, Lee;Myungsun, Kim;In-Hwa, Park;Heuisoon, Lee
    • Geophysics and Geophysical Exploration
    • /
    • v.25 no.4
    • /
    • pp.167-176
    • /
    • 2022
  • We designed a borehole deviation survey tool applicable to steel-cased holes, K-DEV, and developed a prototype rated to a depth of 500 m, aiming at the in-house equipment needed to secure deep subsurface characterization technology. K-DEV is equipped with sensors that provide digital output with verified high performance, and it is compatible with the logging winch systems used in Korea. The K-DEV prototype has a nonmagnetic stainless-steel housing with an outer diameter of 48.3 mm, which was tested in the laboratory for water resistance up to 20 MPa and for durability by running it into a 1-km-deep borehole. We confirmed the operational stability and data repeatability of the prototype by logging continuously up and down to a depth of 600 m. A high-precision micro-electro-mechanical system (MEMS) gyroscope was used as the gyro sensor, which is crucial for azimuth determination in cased holes. Additionally, we devised an accurate trajectory survey algorithm employing unscented Kalman filtering and data fusion for optimization. A borehole test with K-DEV and a commercial logging tool produced sufficiently similar results. Furthermore, the accumulation of error due to MEMS gyro drift over time was successfully overcome by compensating with stationary measurements taken in the same attitude at the wellhead before and after logging, as demonstrated by results nearly identical to those in the open hole. These test applications confirmed the soundness of the K-DEV development methodology as well as the operational stability and data reliability of the prototype.
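The pre/post stationary compensation described above can be sketched as a linear drift correction: if the drift accumulated between the two wellhead measurements is assumed to grow linearly over the run, it can be interpolated and subtracted sample by sample. This is a simplified stand-in; the paper's actual algorithm uses unscented Kalman filtering and data fusion.

```python
def correct_drift(samples, pre_bias, post_bias):
    """Remove linearly accumulating gyro drift from a logging run.

    pre_bias / post_bias: stationary readings taken at the wellhead in
    the same attitude before and after logging; drift is assumed to
    grow linearly with sample index (a simplifying assumption).
    """
    n = len(samples)
    corrected = []
    for i, s in enumerate(samples):
        frac = i / (n - 1) if n > 1 else 0.0
        bias = pre_bias + (post_bias - pre_bias) * frac
        corrected.append(s - bias)
    return corrected

# Synthetic run: the true rate is zero, but the sensor reading drifts
# from +0.1 at the start of the run to +0.5 at the end.
raw = [0.1 + 0.4 * i / 9 for i in range(10)]
print(correct_drift(raw, pre_bias=0.1, post_bias=0.5))
```

With a purely linear synthetic drift the correction recovers the true zero rate exactly, which is the idealized case; real MEMS drift is only approximately linear over a run.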

Liver Splitting Using 2 Points for Liver Graft Volumetry (간 이식편의 체적 예측을 위한 2점 이용 간 분리)

  • Seo, Jeong-Joo;Park, Jong-Won
    • The KIPS Transactions:PartB
    • /
    • v.19B no.2
    • /
    • pp.123-126
    • /
    • 2012
  • This paper proposes a method to separate the liver into left and right lobes for simple and exact volumetry of the liver graft on abdominal MDCT (Multi-Detector Computed Tomography) images before living-donor liver transplantation. Using this algorithm, a medical team can evaluate the liver graft accurately with minimal interaction with the system, helping to ensure the safety of both donor and recipient. On the segmented liver image, 2 points (PMHV, a point in the middle hepatic vein, and PPV, a point at the beginning of the right branch of the portal vein) are selected to separate the liver into left and right lobes. The middle hepatic vein is automatically segmented using PMHV, and the cutting line is decided on the basis of the segmented vein. The liver is separated by connecting the cutting line and PPV, and the volume and ratio of the liver graft are estimated. To verify its accuracy, the volume estimated using the 2 points is compared with the volume from manual segmentation by a diagnostic radiologist and with the graft weight measured during surgery. The mean ± standard deviation of the differences between actual weights and estimated volumes was 162.38 ± 124.39 cm³ for manual segmentation and 107.69 ± 97.24 cm³ for the 2-point method. The correlation coefficient between actual weight and manually estimated volume is 0.79, while that between actual weight and the 2-point estimate is 0.87. After selection of the 2 points, the time required to separate the liver into left and right lobes and compute their volumes was measured to confirm that the algorithm can be used in real time during surgery: the mean ± standard deviation of the processing time was 57.28 ± 32.81 s per data set (149.17 ± 55.92 images).
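The correlation coefficients reported above (0.79 and 0.87) are Pearson correlations between measured graft weight and estimated volume. A minimal sketch, with entirely hypothetical weight/volume pairs standing in for the clinical data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

weights = [650, 720, 580, 810, 690]   # intraoperative graft weight (g), hypothetical
volumes = [700, 800, 640, 900, 760]   # 2-point estimated volume (cm^3), hypothetical
print(round(pearson_r(weights, volumes), 3))
```

A higher correlation for the 2-point method than for manual segmentation is the abstract's evidence that the automated split tracks the true graft size more closely.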

NIRS Calibration Equation Development and Validation for Total Nitrogen Contents Field Analysis in Fresh Rice Leaves (벼 생엽의 질소함량 현장분석을 위한 NIRS 검량식 개발 및 검증)

  • Song, Young-Eun;Lee, Deok-Ryeol;Cho, Seong-Hyun;Lee, Ki-Kwon;Jeong, Jong-Seong;Gwon, Yeong-Rip;Cho, Kyu Chae
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.58 no.3
    • /
    • pp.301-307
    • /
    • 2013
  • This study compared a high-end research-grade near-infrared reflectance spectrophotometer (NIRS) with a field-grade NIRS for rapid on-site analysis of fresh rice leaves, using 238 fresh-leaf samples collected in Jeollabuk-do during 2012 to evaluate the accuracy and precision between instruments. A database of fresh rice leaves was first built on the research-grade NIRS (400 nm ~ 2,500 nm) over seven years, from 2003 to 2009; it was then trimmed and fitted to the field-grade NIRS range (1,200 nm ~ 2,400 nm), and a calibration was built and transferred with a dedicated transfer algorithm. The difference between instruments was 0.005 %, and total nitrogen in fresh rice leaves could be analyzed on site within 5 minutes, with results equivalent to laboratory data. Nevertheless, because the samples collected over more than 8 years for building the calibration are organic materials that differ by location and year, population evaluation techniques and constant calibration updating and maintenance are needed; a competent control laboratory should manage database accumulation, verify analyses, and distribute calibration updates if on-site NIRS analysis of fresh rice leaves is to remain usable in the long term. Agricultural products such as rice change continuously, and unless these changes are detected and the calibration updated routinely, the NIRS will soon become worthless. Much NIRS-related research has been short-term rather than long-term, which has limited the practical use of NIRS, so the system should offer simple, instant checks with local-language support using the global distance (GD) and neighbour distance (ND) algorithms.
Finally, multiple popular field-grade instruments should give the same results, not only relative to research-grade instruments but also among themselves, which requires easy calibration transfer and maintenance between instruments via Internet networking techniques.
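The calibration transfer described in this and the following abstract is proprietary, but one common, simple form of transfer between a master and a field instrument is a least-squares slope/bias correction, sketched below as an illustrative stand-in with hypothetical paired total-nitrogen readings.

```python
def slope_bias_transfer(master_vals, field_vals):
    """Least-squares slope/bias correction mapping field-instrument
    predictions onto the master instrument's scale (illustrative
    stand-in for the papers' proprietary transfer algorithm)."""
    n = len(master_vals)
    mx = sum(field_vals) / n
    my = sum(master_vals) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(field_vals, master_vals))
    sxx = sum((x - mx) ** 2 for x in field_vals)
    slope = sxy / sxx
    bias = my - slope * mx
    return slope, bias

# Hypothetical paired total-nitrogen readings (%) on the same samples
master = [2.10, 2.45, 2.80, 3.05, 3.40]
field  = [2.00, 2.33, 2.68, 2.90, 3.25]
slope, bias = slope_bias_transfer(master, field)
corrected = [slope * x + bias for x in field]
print(max(abs(a - b) for a, b in zip(corrected, master)))
```

After correction, the residual disagreement between instruments is what the abstracts report as the between-instrument difference (0.005 % here, 0.000 %~0.343 % in the silage study).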

Transfer and Validation of NIRS Calibration Models for Evaluating Forage Quality in Italian Ryegrass Silages (이탈리안 라이그라스 사일리지의 품질평가를 위한 근적외선분광 (NIRS) 검량식의 이설 및 검증)

  • Cho, Kyu Chae;Park, Hyung Soo;Lee, Sang Hoon;Choi, Jin Hyeok;Seo, Sung;Choi, Gi Jun
    • Journal of Animal Environmental Science
    • /
    • v.18 no.sup
    • /
    • pp.81-90
    • /
    • 2012
  • This study compared a high-end research-grade near-infrared spectrophotometer (NIRS) with low-end popular field-grade NIRS instruments for rapid on-site analysis of forage quality, using 241 samples of Italian ryegrass silage collected nationwide over 3 years to evaluate the accuracy and precision between instruments. A database was first built on the research-grade Unity Scientific Model 2500X (650 nm~2,500 nm); it was then trimmed and fitted to the field-grade Unity Scientific Model 1400 (1,400 nm~2,400 nm), and a calibration was built and transferred with a dedicated transfer algorithm. The differences between instruments were 0.000 %~0.343 %, and chemical constituents (NDF, ADF, crude protein, and crude ash), fermentation parameters (moisture, pH, and lactic acid), and forage-quality parameters (TDN, DMI, and RFV) could be analyzed on site within 5 minutes, with results equivalent to laboratory data. Nevertheless, because the samples collected over 3 years for building the calibration are organic materials that differ by location and year, population evaluation techniques and constant calibration updating and maintenance are needed; a competent control laboratory should manage database accumulation, verify analyses, and distribute calibration updates if on-site NIRS forage analysis is to remain usable in the long term. Agricultural products such as forage change continuously, and unless these changes are detected and the calibration updated routinely, the NIRS will soon become worthless. Much NIRS-related research has been short-term rather than long-term, which has limited the practical use of NIRS, so the system should offer simple, instant checks with local-language support using the Global Distance (GD) and Neighbour Distance (ND) algorithms.
Finally, multiple popular field-grade instruments should give the same results, not only relative to research-grade instruments but also among themselves, which requires easy calibration transfer and maintenance between instruments via Internet networking techniques.

RPC Correction of KOMPSAT-3A Satellite Image through Automatic Matching Point Extraction Using Unmanned AerialVehicle Imagery (무인항공기 영상 활용 자동 정합점 추출을 통한 KOMPSAT-3A 위성영상의 RPC 보정)

  • Park, Jueon;Kim, Taeheon;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_1
    • /
    • pp.1135-1147
    • /
    • 2021
  • In order to geometrically correct high-resolution satellite imagery, a sensor modeling process that restores the geometric relationship between the satellite sensor and the ground surface at image acquisition time is required. High-resolution satellites generally provide RPC (Rational Polynomial Coefficient) information, but the vendor-provided RPC includes geometric distortion caused by the position and orientation of the satellite sensor. GCPs (Ground Control Points) are generally used to correct RPC errors, and the representative way of acquiring GCPs is a field survey that obtains accurate ground coordinates. However, it can be difficult to locate GCPs in the satellite image because of image quality, land-cover change, relief displacement, etc. By using image maps acquired from various sensors as reference data, GCP collection can be automated through image matching algorithms. In this study, the RPC of a KOMPSAT-3A satellite image was corrected using matching points extracted from UAV (Unmanned Aerial Vehicle) imagery. We propose a pre-processing method for extracting matching points between the UAV imagery and the KOMPSAT-3A satellite image. To this end, we compared the characteristics of matching points extracted by independently applying SURF (Speeded-Up Robust Features) and phase correlation, which are representative feature-based and area-based matching methods, respectively. The RPC adjustment parameters were calculated using the matching points extracted by each algorithm. To verify the performance and usability of the proposed method, it was compared with a GCP-based RPC correction result. The GCP-based method improved the correction accuracy by 2.14 pixels in the sample direction and 5.43 pixels in the line direction compared to the vendor-provided RPC.
The proposed method using SURF and phase correlation improved the sample accuracy by 0.83 and 1.49 pixels, and the line accuracy by 4.81 and 5.19 pixels, respectively, compared to the vendor-provided RPC. These experimental results show that the proposed method using UAV imagery is a possible alternative to the GCP-based method for RPC correction.
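Phase correlation, the area-based matching method named above, estimates the translation between two image patches from the phase of their cross-power spectrum. A minimal NumPy sketch on a synthetic image with a known integer shift (real use on UAV/satellite patches would add windowing and subpixel peak refinement):

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation between two images
    via phase correlation: the normalized cross-power spectrum of a
    shifted pair is a pure phase ramp whose inverse FFT peaks at the
    shift."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12        # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))   # known translation
print(phase_correlation_shift(ref, moved))          # → (5, -3)
```

Because the sharp correlation peak is found globally over the patch, phase correlation is robust to uniform intensity changes, which complements the local-feature matching that SURF provides.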

The Correction Effect of Motion Artifacts in PET/CT Images Using a Motion Correction System (PET/CT 검사 시 움직임 보정 기법의 유용성 평가)

  • Yeong-Hak Jo;Se-Jong Yoo;Seok-Hwan Bae;Jong-Ryul Seon;Seong-Ho Kim;Won-Jeong Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.1
    • /
    • pp.45-52
    • /
    • 2024
  • In this study, an AI-based algorithm was developed to prevent image-quality deterioration and reading errors caused by patient movement in PET/CT examinations, which use radioisotopes in medical institutions to test for cancer and other diseases. Using the Motion Free software, we checked the degree of correction of respiratory movement, evaluated its usefulness, and conducted a study for clinical application. In the experiment, the radioisotope 18F-FDG was injected into a vacuum vial and into spheres of different sizes in a NEMA IEC body phantom mounted on an RPM phantom, and images were produced while the phantom motion simulated a lesion moving with respiration. The vacuum vials had different degrees of movement at different positions, and the differently sized spheres of the NEMA IEC body phantom represented lesions of different sizes. From the acquired images, the lesion volume, maximum SUV, and average SUV were each measured to quantitatively evaluate the degree of motion correction achieved by Motion Free. The average-SUV error rate of vacuum vial A, with a large degree of movement, was reduced by 23.36 %, and that of vacuum vial B, with a small degree of movement, by 29.3 %. The average-SUV error rates for the 37 mm and 22 mm spheres of the NEMA IEC body phantom were reduced by 29.3 % and 26.51 %, respectively. Across the four measurements for which error rates were calculated, the average error rate decreased by 30.03 %, indicating a more accurate average SUV value. Only two-dimensional movements could be produced in this study; a phantom that embodies the actual breathing movement of the human body, together with a wider range of motions, would allow a more accurate evaluation of usability.
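The error rates quoted above are percentage deviations of a measured SUV from its reference. A minimal sketch with hypothetical readings for one sphere (static reference, moving acquisition, and motion-corrected acquisition):

```python
def suv_error_rate(measured, reference):
    """Percentage error of a measured SUV against a reference value,
    the quantity reported before and after motion correction."""
    return abs(measured - reference) / reference * 100

# Hypothetical mean-SUV readings for one sphere
reference, moving, corrected = 4.0, 2.8, 3.7
before = suv_error_rate(moving, reference)      # error without correction
after = suv_error_rate(corrected, reference)    # error after Motion Free-style correction
print(before, after, before - after)
```

The difference `before - after` corresponds to the per-target error-rate reductions the abstract reports.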