• Title/Summary/Keyword: Log System

Search Result 1,510, Processing Time 0.029 seconds

Targetoid Primary Liver Malignancy in Chronic Liver Disease: Prediction of Postoperative Survival Using Preoperative MRI Findings and Clinical Factors

  • So Hyun Park;Subin Heo;Bohyun Kim;Jungbok Lee;Ho Joong Choi;Pil Soo Sung;Joon-Il Choi
    • Korean Journal of Radiology
    • /
    • v.24 no.3
    • /
    • pp.190-203
    • /
    • 2023
  • Objective: We aimed to assess and validate the radiologic and clinical factors that were associated with recurrence and survival after curative surgery for heterogeneous targetoid primary liver malignancies in patients with chronic liver disease and to develop scoring systems for risk stratification. Materials and Methods: This multicenter retrospective study included 197 consecutive patients with chronic liver disease who had a single targetoid primary liver malignancy (142 hepatocellular carcinomas, 37 cholangiocarcinomas, 17 combined hepatocellular carcinoma-cholangiocarcinomas, and one neuroendocrine carcinoma) identified on preoperative gadoxetic acid-enhanced MRI and subsequently surgically removed between 2010 and 2017. Of these, 120 patients constituted the development cohort, and 77 patients from a separate institution served as an external validation cohort. Factors associated with recurrence-free survival (RFS) and overall survival (OS) were identified using Cox proportional hazards analysis, and risk scores were developed. The discriminatory power of the risk scores in the external validation cohort was evaluated using the Harrell C-index. Kaplan-Meier curves were used to estimate RFS and OS for the different risk-score groups. Results: RFS model 1, which excluded features accessible only on the hepatobiliary phase (HBP), included a tumor size of 2-5 cm or > 5 cm and thin-rim arterial phase hyperenhancement (APHE). RFS model 2 included a tumor size of > 5 cm, tumor in vein (TIV), and HBP hypointense nodules without APHE. The OS model included a tumor size of > 5 cm, thin-rim APHE, TIV, and tumor vascular involvement other than TIV. The risk scores of the models showed good discriminatory performance in the external validation set (C-index, 0.62-0.76). The scoring system categorized the patients into three risk groups: favorable, intermediate, and poor, each with a distinct survival outcome (all log-rank p < 0.05).
Conclusion: Risk scores based on rim arterial enhancement pattern, tumor size, HBP findings, and radiologic vascular invasion status may help predict postoperative RFS and OS in patients with targetoid primary liver malignancies.
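The external validation above relies on the Harrell C-index, a concordance statistic for right-censored survival data. A minimal sketch of how it is computed; the cohort values below are hypothetical illustrations, not the study's data.

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter time
    had an observed event; the pair is concordant when the higher risk
    score belongs to the subject with the shorter time.  Ties in risk
    score count as half-concordant.
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical cohort: higher score should mean earlier recurrence.
times  = [5, 10, 12, 20, 30]        # months to event or censoring
events = [1, 1, 0, 1, 0]            # 1 = recurrence observed
scores = [0.9, 0.4, 0.5, 0.7, 0.1]  # model risk scores
print(harrell_c_index(times, events, scores))   # 0.75
```

A C-index of 0.5 indicates no discrimination and 1.0 perfect ranking, so the 0.62-0.76 reported above sits in the moderate-to-good range.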

Percutaneous Biliary Metallic Stent Insertion in Patients with Malignant Duodenobiliary Obstruction: Outcomes and Factors Influencing Biliary Stent Patency

  • Ji Hye Kwon;Dong Il Gwon;Jong Woo Kim;Hee Ho Chu;Jin Hyoung Kim;Gi-Young Ko;Hyun-Ki Yoon;Kyu-Bo Sung
    • Korean Journal of Radiology
    • /
    • v.21 no.6
    • /
    • pp.695-706
    • /
    • 2020
  • Objective: To investigate the technical and clinical efficacy of percutaneous insertion of a biliary metallic stent, and to identify the factors associated with biliary stent dysfunction in patients with malignant duodenobiliary obstruction. Materials and Methods: The medical records of 70 patients (39 men and 31 women; mean age, 63 years; range, 38-90 years) who were treated for malignant duodenobiliary obstruction at our institution between April 2007 and December 2018 were retrospectively reviewed. Variables found significant by univariate log-rank analysis (p < 0.2) were considered suitable candidates for a multiple Cox proportional hazards model. Results: Biliary stents were successfully placed in all 70 study patients. Biliary stent insertion with subsequent duodenal stent insertion was performed in 33 patients, and duodenal stent insertion with subsequent biliary stent insertion was performed in the other 37 study subjects. The median patient survival and stent patency times were 107 days (95% confidence interval [CI], 78-135 days) and 270 days (95% CI, 95-444 days), respectively. Biliary stent dysfunction was observed in 24 (34.3%) cases. Multiple Cox proportional hazards analysis revealed that the location of the distal biliary stent was the only independent factor affecting biliary stent patency (hazard ratio, 3.771; 95% CI, 1.157-12.283). The median biliary stent patency was significantly longer in patients in whom the distal end of the biliary stent was beyond the distal end of the duodenal stent (median, 327 days; 95% CI, 249-450 days) rather than within the duodenal stent (median, 170 days; 95% CI, 115-225 days). Conclusion: Percutaneous insertion of a biliary metallic stent appears to be a technically feasible, safe, and effective method of treating malignant duodenobiliary obstruction.
In addition, a biliary stent system with a distal end located beyond the distal end of the duodenal stent will contribute towards longer stent patency in these patients.

Development of a Ship's Logbook Data Extraction Model Using OCR Program (OCR 프로그램을 활용한 선박 항해일지 데이터 추출 모델 개발)

  • Dain Lee;Sung-Cheol Kim;Ik-Hyun Youn
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.30 no.1
    • /
    • pp.97-107
    • /
    • 2024
  • Despite rapid advances in image recognition technology, perfect digitization of tabular and handwritten documents remains challenging. The purpose of this study is to improve the accuracy of digitizing a logbook by correcting errors using the association rules that hold between logbook entries. Through this, we expect to enhance the accuracy and reliability of data extracted from logbooks by OCR programs. The model improves the accuracy of digitizing the logbook of the training ship "Saenuri" of Mokpo Maritime University by correcting errors identified after Optical Character Recognition (OCR). The model identified and corrected errors by utilizing the association rules observed during logbook entry. To evaluate the effect of the model, the data before and after correction were divided by feature, and comparisons were made between the same sailing number and the same feature. Using this model, approximately 10.6% of errors out of the total estimated error rate of about 11.8% were identified, and 56 out of 123 errors were corrected. A limitation of this study is that it focuses only on the Dist. Run to Stand Course sections of the logbook, which contain navigational information. Future research will aim to correct more information from the logbook, including weather information, to overcome this limitation.
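The rule-based correction idea can be sketched as follows: one natural association rule in a deck log is that the cumulative distance run must equal the previous total plus the leg distance, so an OCR'd total that violates the rule can be recomputed. The field names, one-decimal precision, and sample rows below are illustrative assumptions, not the paper's actual schema.

```python
def correct_dist_run(rows):
    """Flag and correct OCR'd cumulative 'Dist. Run' values using the
    running-total rule: total[i] == total[i-1] + leg[i].  When an OCR'd
    total violates the rule but the leg distances are trusted, the total
    is recomputed.  The first row is taken as trusted ground truth."""
    corrected = [dict(rows[0])]
    total = rows[0]["total"]
    for row in rows[1:]:
        expected = round(total + row["leg"], 1)   # logs keep one decimal
        fixed = dict(row)
        if row["total"] != expected:              # rule violated: likely misread
            fixed["total"] = expected
            fixed["corrected"] = True
        total = fixed["total"]
        corrected.append(fixed)
    return corrected

# Hypothetical OCR output: '1' misread as '7' in the third total.
log = [
    {"leg": 0.0,  "total": 120.0},
    {"leg": 35.2, "total": 155.2},
    {"leg": 28.1, "total": 783.3},
]
print(correct_dist_run(log)[2]["total"])   # 183.3 after correction
```

Cross-field rules of this kind (distance = speed x time, course continuity, and so on) are what let the model detect errors that a character-level OCR confidence score would miss.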

Analysis of Joint Characteristics and Rock Mass Classification using Deep Borehole and Geophysical Logging (심부 시추공 회수코어와 물리검층 자료를 활용한 절리 및 암반등급 평가)

  • Dae-Sung Cheon;Seungbeom Choi;Won-Kyong Song;Seong Kon Lee
    • Tunnel and Underground Space
    • /
    • v.34 no.4
    • /
    • pp.330-354
    • /
    • 2024
  • For site characterization for high-level radioactive waste disposal, discontinuity (joint) distribution and rock mass classification, which are key evaluation parameters in rock engineering, were evaluated using deep boreholes in the Wonju granite and Chuncheon granite, both of Mesozoic Jurassic age. To evaluate joint distribution characteristics, fracture zones and joint surfaces extracted from ATV data were used, and the major joint sets were evaluated along with joint frequency according to depth, dip direction, and dip. Both the Wonju and Chuncheon granites showed a tendency for joint frequency to increase linearly with depth, and high-angle joints were relatively widely distributed. In addition, relatively extensive weathering tended to occur even at great depth due to groundwater inflow through the high-angle joints. RQD values remained consistently low even at considerable depth. Meanwhile, low-angle joint sets showed different joint characteristics from the high-angle joint sets. Rock mass classification was performed based on the RMR system; along with rock mass classification of the 50 m intervals where uniaxial compressive strength testing was performed, continuous rock mass classification according to depth was performed using velocity log data and geostatistical techniques. The Wonju granite exhibited a superior rock mass class compared to the Chuncheon granite. In both the 50 m interval and the continuous rock mass classification, the shallow part of the Wonju granite showed a higher class than the deep part, while the deep part of the Chuncheon granite showed a higher class than its shallow part.
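The RQD values discussed above come from a standard core-logging calculation (Deere's definition): the percentage of a core run made up of intact pieces at least 10 cm long. A minimal sketch with hypothetical core-piece lengths:

```python
def rqd(core_piece_lengths_cm, run_length_cm):
    """Rock Quality Designation: the percentage of a core run consisting
    of intact core pieces at least 10 cm long (Deere's definition)."""
    sound = sum(p for p in core_piece_lengths_cm if p >= 10.0)
    return 100.0 * sound / run_length_cm

# Hypothetical 100 cm core run recovered as six pieces (95 cm total).
pieces = [25.0, 8.0, 15.0, 5.0, 30.0, 12.0]
print(rqd(pieces, 100.0))   # 82.0
```

The resulting percentage feeds directly into the RMR rating scheme mentioned above, alongside strength, joint spacing and condition, and groundwater terms.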

Acoustic Scattering Characteristics of the Individual Fish (어체의 초음파 산란특성에 관한 연구)

  • 신형일
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.27 no.1
    • /
    • pp.21-30
    • /
    • 1991
  • The estimation of fish biomass density or fish size by means of acoustic equipment is an important part of the quantitative assessment of fisheries resources. The precision of such estimates depends upon the target strength of the fish and the accuracy to which the acoustic equipment has been calibrated. This paper examines the accuracy of a digital measurement system manufactured on a trial basis to measure the target strength of fish; the system was calibrated with an ogive and an ellipsoid made of aluminum and epoxy, respectively. Furthermore, target strength measurements for eight species of fish were made at 25, 50, and 100 kHz. The accuracy of the digital measurement system was assessed by comparing theory with the measurements on the ogive and ellipsoid, and the agreement was reasonable. The target strength-to-fish length and target strength-to-fish weight regressions obtained from the measurements provide a basis for interpreting acoustic measurements of fish abundance for the eight species examined.
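Target strength-to-length regressions of the kind established above are conventionally fitted in the form TS = a·log10(L) + b. A sketch of the least-squares fit on synthetic data; the coefficients 20 and -68 are illustrative placeholders, not the paper's results.

```python
import math

def fit_ts_length(lengths_cm, ts_db):
    """Least-squares fit of the standard target-strength relation
    TS = a * log10(L) + b, with TS in dB and length L in cm."""
    x = [math.log10(L) for L in lengths_cm]
    n = len(x)
    mx, my = sum(x) / n, sum(ts_db) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ts_db))
         / sum((xi - mx) ** 2 for xi in x))
    b = my - a * mx
    return a, b

# Synthetic measurements generated from TS = 20*log10(L) - 68.
lengths = [10.0, 20.0, 40.0, 80.0]
ts = [20 * math.log10(L) - 68 for L in lengths]
a, b = fit_ts_length(lengths, ts)
print(round(a, 2), round(b, 2))   # 20.0 -68.0
```

Once a and b are fixed for a species, an echo-sounder TS measurement can be inverted to an estimated fish length, which is how such regressions support abundance interpretation.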


An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.157-173
    • /
    • 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against systems connected to the network occur frequently. This means that fatal damage can be caused by these intrusions at government agencies, public offices, and companies operating various systems. For such reasons, there is growing interest in and demand for intrusion detection systems (IDS)-security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These kinds of intrusion detection models perform well under normal situations. However, they show poor performance when they meet a new or unknown pattern of network attack. For this reason, several recent studies have tried to adopt various artificial intelligence techniques that can proactively respond to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have some intrinsic limitations, such as the risk of overfitting, the requirement of a large sample size, and the lack of insight into the prediction process (i.e., the black box problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model in order to improve the predictive ability of IDS. Our model is also designed to consider asymmetric error costs by optimizing the classification threshold. Generally, there are two common forms of errors in intrusion detection.
The first error type is the False-Positive Error (FPE), in which normal activity is misjudged as an intrusion; this may result in an unnecessary response. The second error type is the False-Negative Error (FNE), which mainly misjudges malware as a normal program. Compared to FPE, FNE is more fatal. Thus, when considering the total cost of misclassification in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. Therefore, we designed our proposed intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. In this case, conventional SVM cannot be applied because it is designed to generate discrete output (i.e., a class). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which is able to generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log entries in total and selected 1,000 samples from them using random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, and ANN was run using Neuroshell 4.0. For SVM, LIBSVM v2.90-freeware for training SVM classifiers-was used. Empirical results showed that our proposed model based on SVM outperformed all the other comparative models in detecting network intrusions from the accuracy perspective. They also showed that our model reduced the total misclassification cost compared to the ANN-based intrusion detection model.
As a result, it is expected that the intrusion detection model proposed in this paper would not only enhance the performance of IDS, but also lead to better management of FNE.
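The threshold-optimization step above can be sketched as a search over candidate cutoffs on the Platt-scaled SVM probabilities, scoring each cutoff by the asymmetric cost. The probabilities, labels, and 5:1 cost ratio below are hypothetical illustrations, not the study's data or exact procedure.

```python
def optimal_threshold(probs, labels, cost_fne, cost_fpe):
    """Pick the classification threshold on predicted intrusion
    probabilities that minimizes total misclassification cost,
    weighting false negatives (missed attacks) more heavily than
    false positives (false alarms)."""
    best_t, best_cost = 0.5, float("inf")
    candidates = sorted(set(probs)) + [1.01]   # cutoffs worth trying
    for t in candidates:
        cost = 0
        for p, y in zip(probs, labels):
            pred = 1 if p >= t else 0          # 1 = classified as attack
            if pred == 1 and y == 0:
                cost += cost_fpe               # false positive
            elif pred == 0 and y == 1:
                cost += cost_fne               # false negative
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Hypothetical Platt-scaled SVM outputs and true labels (1 = attack).
probs  = [0.1, 0.3, 0.45, 0.6, 0.8, 0.9]
labels = [0,   0,   1,    0,   1,   1]
print(optimal_threshold(probs, labels, cost_fne=5, cost_fpe=1))   # (0.45, 1)
```

Note how the heavy FNE weight pulls the chosen threshold below the default 0.5: tolerating one extra false alarm is cheaper than missing an attack.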

Comparative Analysis of ViSCa Platform-based Mobile Payment Service with other Cases (스마트카드 가상화(ViSCa) 플랫폼 기반 모바일 결제 서비스 제안 및 타 사례와의 비교분석)

  • Lee, June-Yeop;Lee, Kyoung-Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.163-178
    • /
    • 2014
  • This research proposes "Virtualization of Smart Cards (ViSCa)," a security system that aims to provide a multi-device platform for the deployment of services that require a strong security protocol, both for access & authentication and for the execution of applications, and focuses on analyzing the ViSCa platform-based mobile payment service by comparing it with other similar cases. At present, the appearance of new ICT, the diffusion of new user devices (such as smartphones, tablet PCs, and so on), and the growth of the Internet penetration rate are creating many world-shaking services, yet in most of these applications private information has to be shared, which means that security breaches and illegal access to that information are real threats that have to be solved. Mobile payment service, one of these innovative services, has the same issues, which are real threats for users, because mobile payment sometimes requires user identification, an authentication procedure, and confidential data sharing. Thus, an extra layer of security is needed in their communication and execution protocols. The Virtualization of Smart Cards (ViSCa) concept is a holistic approach and centralized management for a security system that seeks to provide a ubiquitous multi-device platform for the deployment of mobile payment services that demand a powerful security protocol, both for access & authentication and for the execution of applications. In this sense, ViSCa offers full interoperability and full access from any user device without any loss of security. The concept prevents possible attacks by third parties, guaranteeing the confidentiality of personal data, bank accounts, and private financial information.
The Virtualization of Smart Cards (ViSCa) concept is split into two different phases: the execution of the user authentication protocol on the user device, and the cloud architecture that executes the secure application. Thus, secure service access is guaranteed at any time, anywhere, and through any device supporting the required security mechanisms. The security level is improved by using virtualization technology in the cloud: terminal virtualization is used to virtualize the smart card hardware, and the virtualized smart cards are managed as a whole through mobile cloud technology in the ViSCa platform-based mobile payment service. This entire process is referred to as Smart Card as a Service (SCaaS). The ViSCa platform-based mobile payment service virtualizes the smart card, which is used as the payment instrument, and loads it into the mobile cloud. Authentication takes place through an application and allows the user to log on to the mobile cloud and choose one of the virtualized smart cards as a payment method. To decide the scope of the research, which compares the ViSCa platform-based mobile payment service with other similar cases, we categorized the mobile payment services from prior research into groups by distinct feature and service type. Both groups store credit card data in the mobile device and settle the payment process at the offline market. By the location where the electronic financial transaction information (data) is stored, the groups can be categorized into two main service types: the first is the "App Method," which loads the data in a server connected to the application; the second, the "Mobile Card Method," stores its data in an Integrated Circuit (IC) chip, which holds the financial transaction data and is built into the mobile device's secure element (SE).
Through prior research on the acceptance factors of mobile payment services and their market environment, we came up with six key factors for comparative analysis: economy, generality, security, convenience (ease of use), applicability, and efficiency. Within the chosen group, we compared and analyzed the selected cases and the Virtualization of Smart Cards (ViSCa) platform-based mobile payment service.

X-tree Diff: An Efficient Change Detection Algorithm for Tree-structured Data (X-tree Diff: 트리 기반 데이터를 위한 효율적인 변화 탐지 알고리즘)

  • Lee, Suk-Kyoon;Kim, Dong-Ah
    • The KIPS Transactions:PartC
    • /
    • v.10C no.6
    • /
    • pp.683-694
    • /
    • 2003
  • We present X-tree Diff, a change detection algorithm for tree-structured data. Our work is motivated by the need to monitor a massive volume of web documents and detect suspicious changes, called defacement attacks, on web sites. In this context, the algorithm must be very efficient in both speed and memory use. X-tree Diff uses a special ordered labeled tree, the X-tree, to represent XML/HTML documents. X-tree nodes have a special field, tMD, which stores a 128-bit hash value representing the structure and data of the subtree, so that identical subtrees from the old and new versions can be matched. During this process, X-tree Diff applies the Rule of Delaying Ambiguous Matchings: it performs exact matching where a node in the old version has a one-to-one correspondence with the corresponding node in the new version, delaying all the others. This drastically reduces the possibility of wrong matchings. X-tree Diff propagates such exact matchings upwards in Step 2, and obtains more matchings downwards from the roots in Step 3. In Step 4, nodes to be inserted or deleted are decided. We also show that X-tree Diff runs in O(n), where n is the number of nodes in the X-trees, in the worst case as well as the average case. This result is even better than that of the BULD Diff algorithm, which is O(n log(n)) in the worst case. We experimented with X-tree Diff on real data, about 11,000 home pages from about 20 web sites, instead of synthetic documents manipulated for experimentation. Currently, the X-tree Diff algorithm is being used in a commercial hacking detection system, WIDS (Web-Document Intrusion Detection System), which finds changes that occur in registered websites and reports suspicious changes to users.
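The tMD mechanism described above can be sketched as a bottom-up subtree hash: identical digests let a diff algorithm match whole unchanged subtrees without node-by-node comparison. MD5 is used here simply as a readily available 128-bit digest, and the node layout is illustrative rather than the paper's exact structure.

```python
import hashlib

class Node:
    def __init__(self, label, data="", children=None):
        self.label = label
        self.data = data
        self.children = children or []
        self.tMD = None   # 128-bit subtree digest, as in X-tree's tMD field

def compute_tmd(node):
    """Bottom-up 128-bit hash of each subtree's structure and data.
    Two nodes with equal tMD digests root identical subtrees, so the
    diff can match them in O(1) per node."""
    child_digests = [compute_tmd(c) for c in node.children]
    h = hashlib.md5()
    h.update(node.label.encode())
    h.update(node.data.encode())
    for d in child_digests:
        h.update(d)
    node.tMD = h.digest()
    return node.tMD

old     = Node("html", children=[Node("body", children=[Node("p", "hello")])])
new     = Node("html", children=[Node("body", children=[Node("p", "hello")])])
defaced = Node("html", children=[Node("body", children=[Node("p", "hacked")])])

print(compute_tmd(old) == compute_tmd(new))       # True: identical subtrees
print(compute_tmd(old) == compute_tmd(defaced))   # False: content changed
```

Because a leaf's change propagates into every ancestor's digest, a single mismatched root digest is enough to signal a defacement, while equal digests prune entire subtrees from further comparison.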

Groundwater Flow Analysis in Fractured Rocks Using Zonal Pumping Tests and Water Quality Logs (구간양수시험과 수질검층자료에 의한 균열암반내 지하수 유동 분석)

  • Hamm, Se-Yeong;Sung, Ig-Hwan;Lee, Byeong-Dae;Jang, Seong;Cheong, Jae-Yeol;Lee, Jeong-Hwan
    • The Journal of Engineering Geology
    • /
    • v.16 no.4 s.50
    • /
    • pp.411-427
    • /
    • 2006
  • This study aimed to characterize groundwater flow in fractured bedrocks based on zonal pumping tests, slug tests, water quality logs, and borehole TV camera logs conducted on two boreholes (NJ-11 and SJ-8) in the city of Naju. In particular, the zonal pumping tests using a single packer were executed to reveal groundwater flow characteristics in the fractured bedrocks with depth. In borehole NJ-11, the zonal pumping tests yielded a flow dimension of 1.6 at a packer depth of 56.9 meters. They also yielded lower flow dimensions at shallower packer depths, reaching a flow dimension of 1 at a packer depth of 24 meters. This indicates that uniformly permissive fractures occur in the deeper zones of the borehole. In borehole SJ-8, a flow dimension of 1.7 was determined at the deepest packer level (50 m); next, a dimension of 1.8 was obtained at a packer depth of 32 meters, and lastly a dimension of 1.4 at a packer depth of 19 meters. The variation of flow dimension with packer depth is interpreted as reflecting the variability of permissive fractures with depth. The zonal pumping tests led to the use of the Moench (1984) dual-porosity model because the hydraulic characteristics of the test holes were most suitable for fractured bedrocks. Water quality logs displayed increasing geothermal temperature, increasing pH, and decreasing dissolved oxygen. In addition, electrical conductance tended to increase and dissolved oxygen to decrease at most fracture zones.
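For reference, the flow dimension reported above comes from generalized radial flow analysis (Barker, 1988), in which, for flow dimension n < 2, the late-time drawdown varies as t^(1 - n/2). A simplified sketch that recovers n from the log-log slope of synthetic drawdown data; the study itself fitted full type curves (the Moench dual-porosity model), not just a late-time slope.

```python
import math

def flow_dimension(times, drawdowns):
    """Estimate the flow dimension n from late-time drawdown data using
    Barker's generalized radial flow model: for n < 2 the late-time
    drawdown grows as s ~ t**(1 - n/2), so the log-log slope v of
    s versus t gives n = 2 * (1 - v)."""
    x = [math.log(t) for t in times]
    y = [math.log(s) for s in drawdowns]
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return 2.0 * (1.0 - slope)

# Synthetic drawdown following s = t**0.2, i.e. a flow dimension of 1.6.
times = [10.0, 100.0, 1000.0, 10000.0]
s = [t ** 0.2 for t in times]
print(round(flow_dimension(times, s), 2))   # 1.6
```

A flow dimension of 1 corresponds to linear (fracture-channel) flow and 2 to classical radial flow, which is why the depth trend in NJ-11 is read as a change in fracture connectivity.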

Prognostic Value of TNM Staging in Small Cell Lung Cancer (소세포폐암의 TNM 병기에 따른 예후)

  • Park, Jae-Yong;Kim, Kwan-Young;Chae, Sang-Cheol;Kim, Jeong-Seok;Kim, Kwon-Yeop;Park, Ki-Su;Cha, Seung-Ik;Kim, Chang-Ho;Kam, Sin;Jung, Tae-Hoon
    • Tuberculosis and Respiratory Diseases
    • /
    • v.45 no.2
    • /
    • pp.322-332
    • /
    • 1998
  • Background: Accurate staging is important in determining treatment modalities and predicting prognosis for patients with lung cancer. The simple two-stage system of the Veterans Administration Lung Cancer Study Group has been used for staging small cell lung cancer (SCLC) because treatment usually consists of chemotherapy with or without radiotherapy. However, this system does not accurately segregate patients into homogeneous prognostic groups. Therefore, a variety of new staging systems have been proposed as more intensive treatments, including intensive radiotherapy or surgery, enter clinical trials. We evaluated the prognostic importance of TNM staging, which has the advantage of providing a uniform, detailed classification of tumor spread, in patients with SCLC. Methods: The medical records of 166 patients diagnosed with SCLC between January 1989 and December 1996 were reviewed retrospectively. The influence of TNM stage on survival was analyzed in the 147 of these patients who had complete TNM staging data. Results: Three patients were classified in stage I/II, 15 in stage IIIa, 78 in stage IIIb, and 48 in stage IV. Survival rates at 1 and 2 years for these patients were as follows: stage I/II, 75% and 37.5%; stage IIIa, 46.7% and 25.0%; stage IIIb, 34.3% and 11.3%; and stage IV, 2.6% and 0%. The 2-year survival rates for the 84 patients who received chemotherapy (more than 2 cycles) with or without radiotherapy were as follows: stage I/II, 37.5%; stage IIIa, 31.3%; stage IIIb, 13.5%; and stage IV, 0%. Overall outcome according to TNM stage was significantly different whether or not patients received treatment. However, there was no significant difference between stage IIIa and stage IIIb, though median survival and the 2-year survival rate were higher in stage IIIa than in stage IIIb. Conclusion: These results suggest that the TNM staging system may be helpful in predicting the prognosis of patients with SCLC.
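Survival rates of the kind reported above are typically obtained with the Kaplan-Meier product-limit estimator, which handles censored follow-up. A minimal sketch on hypothetical follow-up data, not the study's patients:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate: at each observed event time
    t, multiply the running survival by (1 - d/n), where d is the number
    of deaths at t and n is the number still at risk just before t.
    events[i] = 1 if the death was observed, 0 if censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    for t in sorted(set(times)):
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= sum(1 for tt, _ in data if tt == t)
    return curve

# Hypothetical survival times in months (1 = death observed, 0 = censored).
times  = [6, 12, 12, 18, 24, 24]
events = [1, 1,  0,  1,  0,  0]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

Reading a 1- or 2-year survival rate off such a curve is simply evaluating the step function at 12 or 24 months, and the log-rank test compares these curves between stage groups.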
