• Title/Summary/Keyword: multi-time step

Image Registration for PET/CT and CT Images with Particle Swarm Optimization (Particle Swarm Optimization을 이용한 PET/CT와 CT영상의 정합)

  • Lee, Hak-Jae;Kim, Yong-Kwon;Lee, Ki-Sung;Moon, Guk-Hyun;Joo, Sung-Kwan;Kim, Kyeong-Min;Cheon, Gi-Jeong;Choi, Jong-Hak;Kim, Chang-Kyun
    • Journal of radiological science and technology / v.32 no.2 / pp.195-203 / 2009
  • Image registration is a fundamental task in image processing used to match two or more images. It gives radiologists new information by matching images from different modalities. The objective of this study is to develop a 2D image registration algorithm for PET/CT and CT images acquired by different systems at different times. We first matched the two CT images (one from a standalone CT and the other from PET/CT), which contain rich anatomical information, and then geometrically transformed the PET image according to the transformation parameters calculated in the previous step. An affine transform was used to match the target and reference images, with mutual information as the similarity measure. A particle swarm optimization algorithm found the best-matched parameter set within a reasonable amount of time. The results show good agreement between the PET/CT and CT images. We expect the proposed algorithm to be applicable not only to PET/CT and CT image registration but also to other multi-modality imaging systems such as SPECT/CT and MRI/PET.
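A minimal sketch of the registration loop this abstract describes, assuming NumPy/SciPy; the restricted affine parameterization (translation, rotation, isotropic scale), the search bounds, and the swarm settings are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
from scipy import ndimage

def mutual_information(a, b, bins=32):
    # Joint histogram of the two images -> mutual information in nats.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def transform(img, params):
    # params = (tx, ty, angle_rad, scale): a restricted affine model where
    # tx/ty are row/column shifts about the image center.
    tx, ty, theta, s = params
    c, sn = np.cos(theta) / s, np.sin(theta) / s
    matrix = np.array([[c, -sn], [sn, c]])
    center = np.array(img.shape) / 2
    offset = center - matrix @ center + np.array([tx, ty])
    return ndimage.affine_transform(img, matrix, offset=offset)

def pso_register(reference, target, n_particles=30, n_iter=50):
    lo = np.array([-20, -20, -np.pi / 8, 0.9])   # assumed search bounds
    hi = np.array([ 20,  20,  np.pi / 8, 1.1])
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, 4))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.full(n_particles, -np.inf)
    gbest, gbest_val = x[0].copy(), -np.inf
    for _ in range(n_iter):
        for i in range(n_particles):
            fit = mutual_information(reference, transform(target, x[i]))
            if fit > pbest_val[i]:
                pbest_val[i], pbest[i] = fit, x[i].copy()
            if fit > gbest_val:
                gbest_val, gbest = fit, x[i].copy()
        # Standard PSO velocity update: inertia + cognitive + social terms.
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
    return gbest, gbest_val
```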

The Study on Optimal Image Processing and Identifying Threshold Values for Enhancing the Accuracy of Damage Information from Natural Disasters (자연재해 피해정보 산출의 정확도 향상을 위한 최적 영상처리 및 임계치 결정에 관한 연구)

  • Seo, Jung-Taek;Kim, Kye-Hyun
    • Spatial Information Research / v.19 no.5 / pp.1-11 / 2011
  • This study focused on a method of accurately extracting damage information through imagery change detection using high-resolution aerial imagery. Bongwha-gun in Gyungsangbuk-do, which was severely damaged by a localized torrential downpour at the end of July 2008, was selected as the study area. The study used aerial imagery consisting of a 30 cm grayscale image from before the disaster and a 40 cm color image from after it. To correct errors arising from the differences in image resolution and acquisition time between the pre- and post-disaster images, preliminary image processing techniques such as normalizing, contrast enhancement, and equalizing were applied. The extent of the damage was calculated by a one-to-one comparison of the intensity of each pixel in the pre- and post-disaster images. In this step, threshold values on the difference in pixel intensity between the pre- and post-disaster images were applied to extract the damage extents the investigator wants. The accuracy of the optimal image processing and threshold values was verified using an error matrix. The results of the study enabled the early extraction of damage extents from aerial imagery with identical characteristics. Imagery change detection could also be applied to various damage items when multi-band imagery is utilized. Furthermore, more quantitative estimation of the damages would be possible with the use of GIS layers such as land cover and cadastral maps.
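The pixel-by-pixel differencing and error-matrix verification described above can be sketched as follows; the min-max normalization standing in for the paper's pre-processing and the threshold value are illustrative assumptions.

```python
import numpy as np

def normalize(img):
    # Simple min-max normalization to [0, 1], standing in for the paper's
    # pre-processing (normalizing, contrast enhancement, equalizing).
    img = img.astype(np.float64)
    return (img - img.min()) / (img.max() - img.min())

def detect_damage(pre, post, threshold=0.2):
    # One-to-one comparison of pixel intensities; pixels whose difference
    # exceeds the threshold are flagged as damaged.
    diff = np.abs(normalize(post) - normalize(pre))
    return diff >= threshold

def error_matrix(predicted, truth):
    # 2x2 error (confusion) matrix used to verify the threshold choice
    # against reference (ground-truth) damage polygons rasterized as a mask.
    tp = int(np.sum(predicted & truth))
    fp = int(np.sum(predicted & ~truth))
    fn = int(np.sum(~predicted & truth))
    tn = int(np.sum(~predicted & ~truth))
    return np.array([[tp, fp], [fn, tn]])
```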

Rapid Detection of Pathogens Associated with Dental Caries and Periodontitis by PCR Using a Modified DNA Extraction Method (PCR을 이용한 치아우식증 및 치주염 연관 병원체의 빠른 검출)

  • Kim, Jaehwan;Kim, Miah;Lee, Daewoo;Baik, Byeongju;Yang, Yeonmi;Kim, Jaegon
    • Journal of the Korean Academy of Pediatric Dentistry / v.41 no.4 / pp.292-297 / 2014
  • DNA extraction is a prerequisite for the identification of pathogens in clinical samples. Commercial DNA extraction kits generally involve time-consuming and laborious multi-step procedures. In the present study, our modified DNA isolation method for saliva samples allows pathogens associated with dental caries or periodontitis to be detected by PCR within 1 h. One minute of boiling was adequate to release DNA from the bacteria, and the resulting isolated DNA can be reused many times and is suitable for long-term storage of at least 13 months at 4°C, and even longer at -20°C. In conclusion, our modified DNA extraction method is simple, rapid, and cost-effective, and is suitable for preparing DNA from clinical saliva samples for the rapid PCR detection of oral pathogens.

Thickness Evaluation of the Aluminum Using Pulsed Eddy Current (펄스 와전류를 이용한 알루미늄 두께 평가)

  • Lee, Jeong-Ki;Suh, Dong-Man;Lee, Seung-Seok
    • Journal of the Korean Society for Nondestructive Testing / v.25 no.1 / pp.15-19 / 2005
  • Conventional eddy current testing has been used to detect defects such as fatigue cracks in conductive materials like aluminum; it uses a sinusoidal signal with a very narrow frequency bandwidth, whereas the pulsed eddy current method uses a pulse signal with a broad bandwidth. This allows multi-frequency eddy current testing, and the penetration depth is greater than that of conventional eddy current testing. In this work, a pulsed eddy current instrument was developed for evaluating metal loss. The instrument was composed of a pulse generator producing a square pulse of up to 40 V, an amplifier with gain adjustable up to 52 dB, a 16-bit A/D converter with a sampling frequency of 20 MHz, and an industrial personal computer running Windows. The pulsed eddy current probe was designed as a pancake type in which the sensing coil is located inside the driving coil. The output signal of the sensing coil rises rapidly when the step pulse driving voltage is turned off, and the latter part of the sensing coil output voltage decreases exponentially with time. The decrement value of the output signals increased as the thickness of the aluminum test piece increased.
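The exponential late-time decay described above suggests a simple way to extract a thickness-sensitive decrement from the sensing-coil signal; the log-linear fit window and the interpolation against a calibration set below are illustrative assumptions, not the instrument's actual processing.

```python
import numpy as np

def decay_decrement(t, signal, t_start, t_end):
    # Fit log(V) against t over the late-time window where the sensing-coil
    # voltage decays exponentially; the slope magnitude is the decrement.
    mask = (t >= t_start) & (t <= t_end) & (signal > 0)
    slope, _intercept = np.polyfit(t[mask], np.log(signal[mask]), 1)
    return -slope

def thickness_from_decrement(decrement, cal_decrements, cal_thicknesses):
    # Look up thickness on a calibration curve built from test pieces of
    # known thickness (the abstract reports that the decrement increases
    # with thickness, so the calibration arrays are assumed monotonic).
    return np.interp(decrement, cal_decrements, cal_thicknesses)
```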

An Automatic Mobile Cell Counting System for the Analysis of Biological Image (생물학적 영상 분석을 위한 자동 모바일 셀 계수 시스템)

  • Seo, Jaejoon;Chun, Junchul;Lee, Jin-Sung
    • Journal of Internet Computing and Services / v.16 no.1 / pp.39-46 / 2015
  • This paper presents an automatic method to detect and count cells in microorganism images in a mobile environment. Cell counting is an important process in the field of biological and pathological image analysis. In the past, cell counting was done manually, which is a tedious and time-consuming process; moreover, manual cell counting can lead to inconsistent and imprecise results. It is therefore necessary to have an automatic method to detect and count cells in biological images so as to obtain accurate and consistent results. The proposed multi-step cell counting method automatically segments the cells from the image of the cultivated microorganism and labels the cells by topological analysis of the segmented regions. To improve the accuracy of the cell counting, we adopt the watershed algorithm to separate agglomerated cells from each other, and morphological operations to enhance the individual cell objects in the image. The system is developed with availability in mobile environments in mind: the cell images can be acquired with a mobile phone, and the processed statistical data on the microorganism can be delivered to mobile devices in a ubiquitous smart space. In our experiments, comparing the results of manual counting and the proposed automatic cell counting demonstrates the efficiency of the developed system.
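A minimal sketch of such a segmentation-and-counting pipeline (Otsu threshold, morphological opening, distance-transform watershed, connected-component labeling), assuming scikit-image and SciPy; the structuring-element radius, peak spacing, and bright-cells-on-dark-background convention are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu
from skimage.morphology import opening, disk
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def count_cells(gray):
    # Segment foreground cells (assumed brighter than background).
    binary = gray > threshold_otsu(gray)
    binary = opening(binary, disk(3))  # morphological cleanup of small noise
    # Distance transform + local maxima give one marker per cell, so the
    # watershed can split agglomerated cells apart.
    distance = ndimage.distance_transform_edt(binary)
    regions, _ = ndimage.label(binary)
    coords = peak_local_max(distance, min_distance=7, labels=regions)
    marker_mask = np.zeros(distance.shape, dtype=bool)
    marker_mask[tuple(coords.T)] = True
    markers, _ = ndimage.label(marker_mask)
    labels = watershed(-distance, markers, mask=binary)
    return int(labels.max()), labels  # cell count and the label image
```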

Analysis System of School Life Records Based on Data Mining for College Entrance (데이터 마이닝 기반 대학입시를 위한 학교생활기록부 분석시스템)

  • Yang, Jinwoo;Kim, Donghyun;Lim, Jongtae;Yoo, Jaesoo
    • The Journal of the Korea Contents Association / v.21 no.2 / pp.49-58 / 2021
  • The Korean curriculum and admission system have evolved through numerous changes. Currently, the nation's college entrance rate stands at nearly 70 percent, the highest among OECD members. In this environment, the importance of school life records is increasing for the large share of students who intend to go on to college. Rather than treating happiness as a ranking of grades, students can find both their future path and happiness through an active school life. Through an analysis system for school life records, students can identify the interests and career paths that suit them, and can analyze and supplement the factors relevant to the university and department they want to enter, taking a step closer to successful advancement. Each item in the school records is divided into three categories so that necessary and unnecessary words can be analyzed. By visualizing and quantifying the analyzed data, an analysis system is established that shows what can be supplemented during school life. Following an existing prior study, the data mining analysis system applies a multi-topic minutes-summary approach that uses word frequency and similarity analysis to concisely summarize sentences of different elements and extract words.
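The word-frequency and similarity analysis mentioned above can be sketched as follows; the whitespace tokenization, stopword filtering, and cosine similarity over raw counts are illustrative choices, not the paper's exact pipeline.

```python
from collections import Counter
import math

def word_frequencies(text, stopwords=frozenset()):
    # Count words, dropping "unnecessary" stopwords as the filtering step.
    words = [w for w in text.lower().split() if w not in stopwords]
    return Counter(words)

def cosine_similarity(freq_a, freq_b):
    # Similarity between, e.g., a student's record text and a target
    # department's keyword profile (both as word-frequency vectors).
    common = set(freq_a) & set(freq_b)
    dot = sum(freq_a[w] * freq_b[w] for w in common)
    na = math.sqrt(sum(v * v for v in freq_a.values()))
    nb = math.sqrt(sum(v * v for v in freq_b.values()))
    return dot / (na * nb) if na and nb else 0.0
```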

Implementation of Git's Commit Message Classification Model Using GPT-Linked Source Change Data

  • Ji-Hoon Choi;Jae-Woong Kim;Seong-Hyun Park
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.123-132 / 2023
  • Git's commit messages record the history of source changes made while a project is developed or operated. By utilizing this historical data, project risks and project status can be identified, reducing costs and improving time efficiency. A lot of related research is in progress; among these research areas, there are studies that classify commit messages into software maintenance types, with a maximum reported classification accuracy of 95%. In this paper, we set out to build solutions that use such a commit classification model, and worked to remove the limitation that the most accurate model among existing studies can only be applied to programs written in the JAVA language. To this end, we designed and implemented an additional step that standardizes source change data into natural language using GPT. This paper explains the process of extracting commit messages and source change data from Git, standardizing the source change data with GPT, and training with the DistilBERT model. Verification measured an accuracy of 91%. The proposed model was implemented and verified to maintain accuracy while classifying commits without depending on a specific programming language. In the future, we plan to study a classification model using Bard and a project management tool that builds on the proposed classification model.
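A hedged sketch of the classification step only, assuming the Hugging Face transformers library; the maintenance-type labels, the [SEP]-style concatenation of the GPT-standardized change description, and the use of base DistilBERT weights are illustrative assumptions (the paper's fine-tuned checkpoint is presumed to exist).

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed maintenance-type labels; the paper's exact label set may differ.
LABELS = ["corrective", "adaptive", "perfective"]

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(LABELS))  # fine-tuned weights assumed

def classify(commit_message, standardized_change_text):
    # Combine the commit message with the GPT-standardized natural-language
    # description of the source change into one classification input.
    text = commit_message + " [SEP] " + standardized_change_text
    inputs = tokenizer(text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]
```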

Interactive analysis tools for the wide-angle seismic data for crustal structure study (Technical Report) (지각 구조 연구에서 광각 탄성파 자료를 위한 대화식 분석 방법들)

  • Fujie, Gou;Kasahara, Junzo;Murase, Kei;Mochizuki, Kimihiro;Kaneda, Yoshiyuki
    • Geophysics and Geophysical Exploration / v.11 no.1 / pp.26-33 / 2008
  • The analysis of wide-angle seismic reflection and refraction data plays an important role in lithospheric-scale crustal structure studies. However, it is extremely difficult to develop an appropriate velocity structure model directly from the observed data, and we have to improve the structure model step by step, because crustal structure analysis is an intrinsically non-linear problem. There are several subjective processes in wide-angle crustal structure modelling, such as phase identification and trial-and-error forward modelling. Because these subjective processes reduce the uniqueness and credibility of the resultant models, it is important to reduce subjectivity in the analysis procedure. From this point of view, we describe two software tools, PASTEUP and MODELING, for developing crustal structure models. PASTEUP is an interactive application that facilitates the plotting of record sections, the analysis of wide-angle seismic data, and the picking of phases. It is equipped with various filters and analysis functions to enhance the signal-to-noise ratio and to help phase identification. MODELING is an interactive application for editing velocity models and for ray tracing. Synthetic traveltimes computed by MODELING can be directly compared with the observed waveforms in PASTEUP. This reduces subjectivity in crustal structure modelling, because traveltime picking, one of the most subjective processes in crustal structure analysis, is not required. MODELING can also convert an editable layered structure model into two-way traveltimes that can be compared with time sections of Multi Channel Seismic (MCS) reflection data. Direct comparison between the structure model from the wide-angle data and the reflection data gives the model more credibility. In addition, both PASTEUP and MODELING are efficient tools for handling large datasets. These software tools help us develop more plausible lithospheric-scale structure models from wide-angle seismic data.
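The conversion of a layered velocity model into two-way traveltimes for comparison with MCS time sections reduces, in the simplest vertical-incidence case, to summing 2h/v over the layers; the sketch below assumes that simplification rather than MODELING's actual computation.

```python
def two_way_traveltime(thicknesses, velocities):
    """Vertical-incidence two-way time through a stack of layers:
    t = sum(2 * h_i / v_i), with h in km and v in km/s giving t in s."""
    return sum(2.0 * h / v for h, v in zip(thicknesses, velocities))

# Example: 2 km of 2.0 km/s sediment over 5 km of 6.0 km/s crust.
print(two_way_traveltime([2.0, 5.0], [2.0, 6.0]))  # ~3.67 s
```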

A Study on the Applicability of Soilremediation Technology for Contaminated Sediment in Agro-livestock Reservoir (농축산저수지 오염퇴적토의 토양정화기술에 대한 적용성 연구)

  • Jung, Jaeyun;Chang, Yoonyoung
    • Journal of Environmental Impact Assessment / v.29 no.3 / pp.157-181 / 2020
  • Sediments in rivers, lakes, and marine ports serve as end points for pollutants discharged into the water and, at the same time, as sources of pollutants that are continuously released back into the water. Until now, contaminated sediments have been landfilled or dumped at sea; landfilling, however, is expensive, and dumping at sea has been completely banned under the London Convention. This study therefore applied soil remediation methods to the contaminated sediment of the 'Royal Palace Livestock Complex'. Pretreatment, composting, soil washing, electrokinetics, and thermal desorption were selected based on overseas application cases and domestically applicable technologies. A survey of the site's pollutant characteristics showed that Dissolved Oxygen (DO), Suspended Solids (SS), Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP) exceeded the discharge water quality standards, with SS, COD, TN, and TP exceeding the standards by several tens to several hundreds of times. The soil showed high concentrations of copper and zinc, which are added to pig feed to promote growth, and cadmium exceeded the Region 1 standard of the Soil Environment Conservation Act. In the pretreatment step, a hydrocyclone was used for particle size separation, and more than 80% of the fine soil was separated. Composting was performed on the organic- and Total Petroleum Hydrocarbon (TPH)-contaminated soils: TPH was treated to within the standard of concern, E. coli was found to be high in the organic matter, and the fertilizer specification was satisfied by applying the optimal composting conditions at 70℃, although the organic matter content remained below the fertilizer specification. In the sequential washing test, Cd in the fine soil was present mainly as residual material (stage 5), whereas Cu and Zn mostly consisted of the ion-exchangeable (stage 1), carbonate (stage 2), and iron/manganese oxide (stage 3) fractions, which are easy to separate. Applying acid dissolution and multi-stage washing step by step, hydrochloric acid at 1.0 M, a 1:3 ratio, 200 rpm, and 60 min was found to be the optimal washing condition, and most of the washed sediments satisfied the Soil Environment Conservation Act standards. The applicability tests of this study therefore showed that soil with high heavy metal contamination can be used as aggregate after pretreatment and soil washing, and that organic- and oil-contaminated soil can be used efficiently as compost once contaminants and E. coli have been removed by composting.

Research on ITB Contract Terms Classification Model for Risk Management in EPC Projects: Deep Learning-Based PLM Ensemble Techniques (EPC 프로젝트의 위험 관리를 위한 ITB 문서 조항 분류 모델 연구: 딥러닝 기반 PLM 앙상블 기법 활용)

  • Hyunsang Lee;Wonseok Lee;Bogeun Jo;Heejun Lee;Sangjin Oh;Sangwoo You;Maru Nam;Hyunsik Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.11 / pp.471-480 / 2023
  • Construction order volume in South Korea grew significantly from 91.3 trillion won in 2013 to a total of 212 trillion won in 2021, particularly in the private sector. As the domestic and overseas markets grew, the scale and complexity of EPC (Engineering, Procurement, Construction) projects increased, and risk management of project management and ITB (Invitation to Bid) documents became a critical issue. The time granted to construction companies in the bidding process following an EPC project tender is limited, and it is extremely challenging to review all the risk terms in an ITB document due to manpower and cost constraints. Previous research attempted to categorize the risk terms in EPC contract documents and detect them with AI, but problems with the data, such as the limited availability of labeled data and class imbalance, restricted practical use. Therefore, this study aims to develop an AI model that categorizes contract terms in detail based on the FIDIC Yellow 2017 (Federation Internationale Des Ingenieurs-Conseils) standard, rather than defining and classifying risk terms as in previous research. A multi-text classification function is necessary because the contract terms that need detailed review vary with the scale and type of the project. To enhance the performance of the multi-text classification model, we developed an ELECTRA PLM (Pre-trained Language Model) capable of efficiently learning the context of text data from the pre-training stage, and conducted a four-step experiment to validate the model's performance. As a result, the ensemble of the self-developed ITB-ELECTRA model and Legal-BERT achieved the best performance, with a weighted average F1-score of 76% in the classification of 57 contract terms.
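A minimal soft-voting ensemble sketch in the spirit of combining ITB-ELECTRA with Legal-BERT, assuming the Hugging Face transformers library; the checkpoint paths are placeholders, and averaging softmax outputs is one common ensembling choice, not necessarily the paper's exact method.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

N_CLASSES = 57  # FIDIC Yellow 2017 contract-term classes, per the abstract

def load(name):
    tok = AutoTokenizer.from_pretrained(name)
    mdl = AutoModelForSequenceClassification.from_pretrained(
        name, num_labels=N_CLASSES)
    return tok, mdl

# Placeholder checkpoint paths for the two fine-tuned ensemble members.
models = [load("path/to/itb-electra"), load("path/to/legal-bert")]

def predict(clause_text):
    # Average the class probabilities from each model (soft voting),
    # then take the most likely contract-term class.
    probs = []
    for tok, mdl in models:
        inputs = tok(clause_text, truncation=True, max_length=512,
                     return_tensors="pt")
        with torch.no_grad():
            probs.append(torch.softmax(mdl(**inputs).logits, dim=-1))
    return int(torch.stack(probs).mean(dim=0).argmax(dim=-1))
```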