• Title/Summary/Keyword: Data Storage (데이터 저장)


Comparison of Center Error of X-ray Field and Light Field Size of Diagnostic Digital X-ray Unit according to the Hospital Grade (병원 등급에 따른 X선조사야와 광조사야 간의 면적 및 중심점 오차 비교)

  • Lee, Won-Jeong;Song, Gyu-Ri;Shin, Hyun-yi
    • Journal of the Korean Society of Radiology / v.14 no.3 / pp.245-252 / 2020
  • The purpose of this study was to highlight the importance of quality control (QC) for reducing exposure and improving image quality by comparing the center-point (CP) error and the difference between X-ray field (XF) and light field (LF) size of diagnostic digital X-ray devices according to hospital grade. XF and LF size and CP were measured for 12 digital X-ray devices at 10 hospitals located in 00 metropolitan cities. A phantom was made with 0.8 mm wires of different widths attached to standardized graph paper on a transparent plastic plate, with a cross wire marked at the center of the phantom. After placing the phantom on the table of the digital X-ray device, images were obtained by exposing each field vertically. All images were acquired under the same exposure conditions at a focus-to-detector distance of 100 cm. XF and LF size and CP error were measured using the picture archiving communication system. Data were expressed as mean with standard error and analyzed using SPSS ver. 22.0. The difference between the XF and LF size was smallest in clinics, followed by university hospitals, hospitals, and general hospitals. Compared with university hospitals, which had the smallest CP error, there was a statistically significant difference in CP error between university hospitals and clinics (p=0.024). The group less than 36 months after QC had significantly smaller errors than the group of 36 months or more (0.26 vs. 0.88, p=0.036). The difference between the XF and LF size was lowest in clinics, and CP error was lowest in university hospitals. Moreover, hospitals with a shorter period since QC had smaller CP errors, which means that timely introduction of QC according to the QC items is essential.
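
As a rough illustration of the geometry behind the reported metrics (not code from the paper), the sketch below computes a field-size difference and a center-point error from hypothetical corner coordinates read off a PACS image; the coordinate values and function name are invented.

```python
# Hypothetical illustration of quantifying X-ray field (XF) vs. light field (LF)
# discrepancies from corner coordinates measured on an image (values invented).

def field_metrics(xf_corners, lf_corners):
    """Return (area difference in cm^2, center-point error in cm).

    Each argument is ((x_min, y_min), (x_max, y_max)) in cm on the image plane.
    """
    (xf_x0, xf_y0), (xf_x1, xf_y1) = xf_corners
    (lf_x0, lf_y0), (lf_x1, lf_y1) = lf_corners

    xf_area = (xf_x1 - xf_x0) * (xf_y1 - xf_y0)
    lf_area = (lf_x1 - lf_x0) * (lf_y1 - lf_y0)

    # Center points of each field
    xf_cx, xf_cy = (xf_x0 + xf_x1) / 2, (xf_y0 + xf_y1) / 2
    lf_cx, lf_cy = (lf_x0 + lf_x1) / 2, (lf_y0 + lf_y1) / 2

    cp_error = ((xf_cx - lf_cx) ** 2 + (xf_cy - lf_cy) ** 2) ** 0.5
    return abs(xf_area - lf_area), cp_error

# Example with invented measurements (cm)
area_diff, cp_err = field_metrics(((0.0, 0.0), (35.2, 43.0)),
                                  ((0.4, 0.3), (35.0, 42.6)))
print(f"field-size difference: {area_diff:.1f} cm^2, CP error: {cp_err:.2f} cm")
```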

Development of a Small Animal Positron Emission Tomography Using Dual-layer Phoswich Detector and Position Sensitive Photomultiplier Tube: Preliminary Results (두층 섬광결정과 위치민감형광전자증배관을 이용한 소동물 양전자방출단층촬영기 개발: 기초실험 결과)

  • Jeong, Myung-Hwan;Choi, Yong;Chung, Yong-Hyun;Song, Tae-Yong;Jung, Jin-Ho;Hong, Key-Jo;Min, Byung-Jun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine / v.38 no.5 / pp.338-343 / 2004
  • Purpose: The purpose of this study was to develop a small animal PET using a dual-layer phoswich detector to minimize the parallax error that degrades spatial resolution at the outer part of the field-of-view (FOV). Materials and Methods: The simulation tool GATE (Geant4 Application for Tomographic Emission) was used to derive the optimal parameters of the small PET, and the PET was developed employing those parameters. Lutetium Oxyorthosilicate (LSO) and Lutetium-Yttrium Aluminate-Perovskite (LuYAP) were used to construct the dual-layer phoswich crystal. 8 × 8 arrays of LSO and LuYAP pixels, 2 mm × 2 mm × 8 mm in size, were coupled to a 64-channel position sensitive photomultiplier tube. The system consisted of 16 detector modules arranged in one ring configuration (ring inner diameter 10 cm, FOV of 8 cm). The data from the phoswich detector modules were fed into an ADC board in the data acquisition and preprocessing PC via sockets, a decoder block, an FPGA board, and a bus board. These were linked to the master PC, which stored the event data on a hard disk. Results: In a preliminary test of the system, reconstructed images were obtained using a pair of detectors, and sensitivity and spatial resolution were measured. Spatial resolution was 2.3 mm FWHM and sensitivity was 10.9 cps/μCi at the center of the FOV. Conclusion: The radioactivity distribution patterns were accurately represented in the sinograms and images obtained by the PET with a pair of detectors. These preliminary results indicate that it is promising to develop a high performance small animal PET.
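
The spatial resolution quoted above is a full width at half maximum (FWHM) of a point-source profile. As a loose, hedged illustration of how such a figure can be read off a sampled 1-D profile (this is not the authors' reconstruction or analysis code; the Gaussian test profile and helper name are assumptions):

```python
import numpy as np

def fwhm(x, y):
    """Estimate full width at half maximum of a single-peaked profile
    by linear interpolation of the half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the left crossing (y rises through half max)
    xl = np.interp(half, [y[left - 1], y[left]], [x[left - 1], x[left]])
    # interpolate the right crossing (y falls through half max)
    xr = np.interp(half, [y[right + 1], y[right]], [x[right + 1], x[right]])
    return xr - xl

# Synthetic point-source profile with ~2.3 mm FWHM (sigma = FWHM / 2.355)
x = np.linspace(-10, 10, 201)                  # position in mm
y = np.exp(-x**2 / (2 * (2.3 / 2.355) ** 2))   # Gaussian response
print(f"estimated FWHM: {fwhm(x, y):.2f} mm")
```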

COMPARISON OF SCREW-IN EFFECT FOR SEVERAL NICKEL-TITANIUM ROTARY INSTRUMENTS IN SIMULATED RESIN ROOT CANAL (모형 레진 근관에서 수종의 전동 니켈-티타늄 파일에 대한 screw-in effect 비교)

  • Ha, Jung-Hong;Jin, Myoung-Uk;Kim, Young-Kyung;Kim, Sung-Kyo
    • Restorative Dentistry and Endodontics / v.35 no.4 / pp.267-272 / 2010
  • The screw-in effect is one of the unintended phenomena that occur during root canal preparation with nickel-titanium rotary files. The aim of this study was to compare the screw-in effect among various nickel-titanium rotary file systems. Six different nickel-titanium rotary instruments (ISO 20/.06 taper) were used: K3™ (SybronEndo, Glendora, CA, USA), Mtwo (VDW GmbH, Munchen, Germany), NRT with safe tip and with active tip (Mani Inc., Shioya-gun, Japan), ProFile® (Dentsply-Maillefer, Ballaigues, Switzerland), and ProTaper® (Dentsply-Maillefer, Ballaigues, Switzerland). For ProTaper®, S2 was selected because it has size 20. Root canal instrumentation was performed in sixty simulated single-curved resin root canals with a rotational speed of 300 rpm and a single pecking motion. A special device was designed to measure the force of the screw-in effect. A dynamometer on the device recorded the screw-in force during simulated canal preparation, and the recorded data were stored in a computer with dedicated software (LCV-USE-VS, Lorenz Messtechnik GmbH, Alfdorf, Germany). The data were subjected to one-way ANOVA and Tukey's multiple range test as the post-hoc test. A p value of less than 0.05 was regarded as significant. ProTaper® produced significantly more screw-in effect than any other instrument in the study (p < 0.001). K3™ produced significantly more screw-in effect than Mtwo and ProFile® (p < 0.001). There was no significant difference among Mtwo, NRT, and ProFile® (p > 0.05), nor between NRT with active tip and NRT with safe tip (p > 0.05). From the results of the present study, it was therefore concluded that there are significant differences in screw-in effect among the tested nickel-titanium rotary instruments. The radial lands and rake angle of the nickel-titanium rotary instruments might be the cause of the difference.
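
The statistical procedure named above (one-way ANOVA followed by Tukey's multiple comparison at α = 0.05) can be reproduced in outline with standard Python libraries. This is only a sketch of the analysis pattern; the force values below are invented placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented screw-in force measurements (arbitrary units), 10 canals per system
rng = np.random.default_rng(0)
groups = {
    "ProTaper": rng.normal(6.0, 0.8, 10),
    "K3":       rng.normal(4.0, 0.8, 10),
    "Mtwo":     rng.normal(2.5, 0.8, 10),
    "ProFile":  rng.normal(2.4, 0.8, 10),
}

# One-way ANOVA across all groups
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey's multiple comparison as the post-hoc test (alpha = 0.05)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), 10)
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```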

Study of Geological Log Database for Public Wells, Jeju Island (제주도 공공 관정 지질주상도 DB 구축 소개)

  • Pak, Song-Hyon;Koh, Giwon;Park, Junbeom;Moon, Dukchul;Yoon, Woo Seok
    • Economic and Environmental Geology / v.48 no.6 / pp.509-523 / 2015
  • This study introduces a newly implemented geological well log database for Jeju public water wells, built for a research project focusing on an integrated hydrogeology database of Jeju Island. A detailed analysis of the existing 1,200 Jeju Island geological logs for public wells developed since 1970 revealed major issues to be addressed before their use in constructing the Jeju geological log DB: (1) lack of uniformity in rock name classification, (2) poor definitions of pyroclastic deposits and of sand and gravel layers, (3) lack of well borehole aquifer information, (4) lack of information on well screen installation in many water wells, and (5) person-to-person differences in geological logging descriptions. A new Jeju geological log DB enabling standardized input and output formats has been implemented to overcome the above issues by reestablishing the names of Jeju volcanic and sedimentary rocks and utilizing a commercial, database-driven geological log program with a structured input format. The newly designed database structure of the geological log program enables users to store large amounts of geology, well drilling, and test data in a standardized DB input structure. Also, well borehole groundwater and aquifer test data can easily be added without modifying the existing database structure. Thus, the newly implemented geological log DB could serve as a standardized DB for the large number of existing Jeju public wells and for new wells to be developed on Jeju Island in the future. The new geological log DB will also be a basis for the ongoing project 'Developing GIS-based integrated interpretation system for Jeju Island hydrogeology'.
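
As a loose illustration of the kind of standardized, extensible structure the abstract describes (not the actual schema of the commercial log program), per-well test results can be kept in a generic table keyed by test type so that new aquifer or groundwater tests are added as rows rather than schema changes. All table and column names below are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory DB for illustration
cur = conn.cursor()

# Hypothetical standardized structure: wells, lithologic intervals, and a
# generic test-result table, so later tests need no schema modification.
cur.executescript("""
CREATE TABLE well (
    well_id   TEXT PRIMARY KEY,
    name      TEXT,
    drilled   TEXT        -- completion date (ISO string)
);
CREATE TABLE lithology (
    well_id   TEXT REFERENCES well(well_id),
    top_m     REAL,
    bottom_m  REAL,
    rock_name TEXT        -- standardized volcanic/sedimentary rock name
);
CREATE TABLE test_result (
    well_id   TEXT REFERENCES well(well_id),
    test_type TEXT,       -- e.g. 'aquifer_test', 'groundwater_level'
    parameter TEXT,
    value     REAL,
    unit      TEXT
);
""")

cur.execute("INSERT INTO well VALUES ('JJ-0001', 'Example public well', '1998-05-01')")
cur.execute("INSERT INTO lithology VALUES ('JJ-0001', 0.0, 12.5, 'trachybasalt')")
cur.execute("INSERT INTO test_result VALUES ('JJ-0001', 'aquifer_test', 'transmissivity', 350.0, 'm2/day')")
conn.commit()

for row in cur.execute("SELECT rock_name, top_m, bottom_m FROM lithology WHERE well_id = 'JJ-0001'"):
    print(row)
```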

Quantitative Microbial Risk Assessment of Pathogenic Vibrio through Sea Squirt Consumption in Korea (우렁쉥이에 대한 병원성 비브리오균 정량적 미생물 위해평가)

  • Ha, Jimyeong;Lee, Jeeyeon;Oh, Hyemin;Shin, Il-Shik;Kim, Young-Mog;Park, Kwon-Sam;Yoon, Yohan
    • Journal of Food Hygiene and Safety / v.35 no.1 / pp.51-59 / 2020
  • This study evaluated the risk of foodborne illness from pathogenic Vibrio spp. (Vibrio vulnificus and Vibrio cholerae) through sea squirt consumption. The prevalence of V. vulnificus and V. cholerae in sea squirt was evaluated, and predictive models describing the kinetic behavior of Vibrio in sea squirt were developed. Distribution temperatures and times were collected and fitted to probabilistic distributions to determine the appropriate distributions. The raw data from the Korea National Health and Nutrition Examination Survey 2016 were used to estimate the consumption rate and amount of sea squirt consumed. For hazard characterization, the Beta-Poisson model for V. vulnificus and V. cholerae infection was used. With the collected data, a simulation model was prepared and run with @RISK to estimate the probability of foodborne illness from pathogenic Vibrio spp. through sea squirt consumption. Among 101 sea squirt samples, there were no V. vulnificus-positive samples, but V. cholerae was detected in one sample. The developed predictive models appropriately described the fates of Vibrio spp. in sea squirt during distribution and storage, with R² of 0.815-0.907 and RMSE of 0.28. The consumption rate of sea squirt was 0.26%, and the daily consumption amount was 68.84 g per person. The Beta-Poisson model [P = 1 - (1 + Dose/β)^(-α)] was selected as the dose-response model. With these data, a simulation model was developed, and the estimated risks of V. vulnificus and V. cholerae foodborne illness from sea squirt consumption were 2.66 × 10⁻¹⁵ and 1.02 × 10⁻¹², respectively. These results suggest that the risk of pathogenic Vibrio spp. in sea squirt can be considered low in Korea.
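
The dose-response step can be illustrated with a short sketch: the Beta-Poisson form P = 1 - (1 + Dose/β)^(-α) converts an ingested dose into an illness probability, and a Monte Carlo loop (here with NumPy rather than @RISK) propagates uncertainty in the dose. The dose distribution and (α, β) values below are placeholders, not the study's fitted inputs.

```python
import numpy as np

def beta_poisson(dose, alpha, beta):
    """Beta-Poisson dose-response: probability of illness for a given dose (CFU)."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

rng = np.random.default_rng(1)
n_iter = 100_000

# Placeholder inputs: log-normal contamination level (CFU/g) and a fixed
# serving size similar to the 68.84 g/person reported in the abstract.
conc_cfu_per_g = rng.lognormal(mean=-4.0, sigma=1.0, size=n_iter)
serving_g = 68.84
dose = conc_cfu_per_g * serving_g

# Placeholder dose-response parameters (alpha, beta) -- not the study's values.
risk = beta_poisson(dose, alpha=9.3e-6, beta=110_000.0)
print(f"mean probability of illness per serving: {risk.mean():.2e}")
```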

A Knowledge Management System for Supporting Development of the Next Generation Information Appliances (차세대 정보가전 신제품 개발 지원을 위한 지식관리시스템 개발)

  • Park, Ji-Soo;Baek, Dong-Hyun
    • Information Systems Review / v.6 no.2 / pp.137-159 / 2004
  • The next generation information appliances are those that can be connected to other appliances through a wired or wireless network, so that they can transmit and receive data between them and be remotely controlled from inside or outside the home. Many electronics companies have invested aggressively in developing new information appliances to take the initiative in the upcoming home networking era. They require systematic methods for developing new information appliances and for sharing the knowledge acquired through those methods. This paper stored the knowledge acquired from developing information appliances and developed a knowledge management system that supports companies in using the knowledge to develop their own information appliances. In order to acquire the knowledge, this paper applied two User-Centered Design methods instead of the general methods for knowledge acquisition. This paper suggested new product ideas by analyzing and observing user actions and stored the knowledge in knowledge bases, which included Knowledge from Analyzing User Actions and Knowledge from Observing User Actions. Seven new product ideas suggested by the User-Centered Design process were made into design mockups, and videos were produced to show the real situations where they would be used in the home of the future; these were stored in the knowledge base of Knowledge from Producing New Emotive Life Videos. Finally, data on the current state of future-home development in Europe and Japan and articles from domestic newspapers were collected and stored in the knowledge base of Knowledge from Surveying Technology Developments. This paper developed a web-based knowledge management system that supports companies in using the acquired knowledge. Knowledge users can obtain the knowledge required for developing new information appliances and suggest their own product ideas by using the knowledge management system. This extends the results of this research beyond a single case study of product development to facilitating the development of the next generation information appliances.

Interactions and Changes between Sapflow Flux, Soil Water Tension, and Soil Moisture Content at the Artificial Forest of Abies holophylla in Gwangneung, Gyeonggido (광릉 전나무인공림에서 수액이동량, 토양수분장력 그리고 토양함수량의 변화와 상호작용)

  • Jun, Jaehong;Kim, Kyongha;Yoo, Jaeyun;Jeong, Yongho;Jeong, Changgi
    • Journal of Korean Society of Forest Science / v.94 no.6 / pp.496-503 / 2005
  • This study was conducted to investigate the influence of sapflow flux on soil water tension and soil moisture content at the Abies holophylla plots in Gwangneung, Gyeonggido, from September to October 2004. The Abies holophylla had been planted in 1976, and thinning and pruning were carried out in 1996 and 2004. Sapflow flux was measured by the heat pulse method, and soil water tension was measured by tensiometers at the hillslope and streamside. Time domain reflectometry (TDR) probes were positioned horizontally at depths of 10, 30, and 50 cm to measure soil moisture content. All data were recorded every 30 minutes with dataloggers. The sapflow flux responded sensitively to rainfall, so little sapflow was detected on rainy days. The average daily sapflow flux of the sample trees was 10.16 L, the maximum was 15.09 L, and the minimum was 0.0 L. The diurnal changes showed that sapflow flux increased from 9 am up to 0.74 L/30 min. The highest sapflow flux was maintained until 3 pm and decreased to almost 0.0 L/30 min after 7 pm. The average soil water tensions were low (-141.3 cmH₂O, -52.9 cmH₂O, and -134.2 cmH₂O) at the hillslope and high (-6.1 cmH₂O, -18.0 cmH₂O, and -3.7 cmH₂O) at the streamside. When the soil moisture content decreased after rainfall, the soil water tension at the hillslope responded sensitively to the sapflow flux. The soil water tension decreased as the sapflow flux increased during the daytime, whereas it increased during the nighttime when sapflow flux was not detected. On the other hand, there was no significant relationship between soil water tension and sapflow flux at the streamside. Soil moisture content at the hillslope decreased continuously after rain and showed a negative correlation with sapflow flux, like the soil water tension at the hillslope. Considering the results above, it was confirmed that the responses of soil water tension to sapflow flux at the hillslope and streamside were different.
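
The negative relationship reported between sapflow flux and hillslope soil water tension can, in principle, be checked with a simple correlation over the 30-minute records. The synthetic series below only illustrate the calculation; they are not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 48 * 7  # one week of 30-minute records
t = np.arange(n)

# Synthetic diurnal sapflow (L/30 min): zero at night, rising from ~9 am,
# peaking around 0.74 L/30 min in mid-afternoon (mirrors the abstract's pattern)
sapflow = np.clip(0.74 * np.sin(2 * np.pi * ((t % 48) - 18) / 48), 0, None)

# Synthetic hillslope soil water tension (cmH2O): more negative when sapflow is high
tension = -60.0 - 80.0 * sapflow + rng.normal(0.0, 2.0, n)

r = np.corrcoef(sapflow, tension)[0, 1]
print(f"Pearson correlation between sapflow and soil water tension: {r:.2f}")
```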

Derivation of Green Infrastructure Planning Factors for Reducing Particulate Matter - Using Text Mining - (미세먼지 저감을 위한 그린인프라 계획요소 도출 - 텍스트 마이닝을 활용하여 -)

  • Seok, Youngsun;Song, Kihwan;Han, Hyojoo;Lee, Junga
    • Journal of the Korean Institute of Landscape Architecture / v.49 no.5 / pp.79-96 / 2021
  • Green infrastructure planning represents landscape planning measures to reduce particulate matter. This study aimed to derive factors that may be used in planning green infrastructure for particulate matter reduction using text mining techniques. A range of analyses were carried out by focusing on keywords such as 'particulate matter reduction plan' and 'green infrastructure planning elements'. The analyses included Term Frequency-Inverse Document Frequency (TF-IDF) analysis, centrality analysis, related word analysis, and topic modeling analysis, carried out via text mining on collected previous related research, policy reports, and laws. First, the TF-IDF analysis results were used to classify major keywords relating to particulate matter and green infrastructure into three groups: (1) environmental issues (e.g., particulate matter, environment, carbon, and atmosphere), (2) target spaces (e.g., urban, park, and local green space), and (3) application methods (e.g., analysis, planning, evaluation, development, ecological aspect, policy management, technology, and resilience). Second, the centrality analysis results were found to be similar to those of TF-IDF; it was confirmed that the central connectors to the major keywords were 'Green New Deal' and 'vacant land'. The results of the related word analysis verified that planning green infrastructure for particulate matter reduction requires planning of forests and ventilation corridors. Additionally, moisture must be considered for microclimate control. It was also confirmed that utilizing vacant space, establishing mixed forests, introducing particulate matter reduction technology, and understanding the system may be important for the effective planning of green infrastructure. Topic analysis was used to classify the planning elements of green infrastructure into ecological, technological, and social functions. The planning elements of the ecological function were classified into morphological aspects (e.g., urban forest, green space, wall greening) and functional aspects (e.g., climate control, carbon storage and absorption, provision of habitats, and biodiversity for wildlife). The planning elements of the technological function were classified into various themes, including the disaster prevention functions of green infrastructure, buffer effects, stormwater management, water purification, and energy reduction. The planning elements of the social function were classified into themes such as community function, improving the health of users, and scenery improvement. These results suggest that green infrastructure planning for particulate matter reduction requires approaches related to key concepts, such as resilience and sustainability. In particular, there is a need to apply green infrastructure planning elements in order to reduce exposure to particulate matter.
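
As a rough sketch of the keyword-extraction step (TF-IDF over a document collection), the snippet below applies scikit-learn's TfidfVectorizer to a few toy sentences; the example documents are invented stand-ins, not the corpus of papers, policy reports, and laws used in the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for the collected documents (papers, policy reports, laws)
docs = [
    "green infrastructure urban forest reduces particulate matter exposure",
    "ventilation corridor planning improves urban air and microclimate",
    "vacant land can host mixed forests for particulate matter reduction",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)

# Top-weighted terms per document
terms = vectorizer.get_feature_names_out()
for i, row in enumerate(tfidf.toarray()):
    top = sorted(zip(terms, row), key=lambda p: p[1], reverse=True)[:3]
    print(f"doc {i}: {[t for t, _ in top]}")
```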

A Study on Applied to Optimal Diagnostic Device in Portal Vein Visualization: Focused on MRI and CT (간문맥 묘출을 위한 최적의 영상진단 장치에 관한 연구: MRI, CT 중심으로)

  • Goo, Eun-Hoe
    • Journal of the Korean Society of Radiology / v.13 no.2 / pp.217-225 / 2019
  • The purpose of this study was to quantify the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the portal vein using CT and 3.0T MRI and to investigate the optimal imaging device. Twenty patients who underwent both CT and 3.0T MRI between February 2018 and April 2018 were randomly selected, and their data were obtained from the picture archiving communication system. The SNR and CNR values were evaluated by measuring the mean and standard deviation of the region of interest in four regions of the portal vein (the main portal vein, the right vein, the left vein, and the middle vein). On CT, SNR was 9.18±0.72 for the right vein, 9.41±0.84 for the left vein, 9.54±0.59 for the middle vein, and 9.55±0.75 for the main portal vein; on 3.0T MRI, SNR was 22.29±2.03, 25.89±3.19, 24.39±2.87, and 26.64±2.30, respectively (p<0.05). On CT, CNR was 3.79±0.68 for the right vein, 3.74±0.65 for the left vein, 3.71±0.39 for the middle vein, and 3.79±0.68 for the main portal vein; on 3.0T MRI, CNR was 9.49±0.65, 11.00±1.90, 12.70±1.75, and 10.01±0.98, respectively, so 3.0T MRI was higher than CT (p<0.05). In conclusion, SNR and CNR values were higher on 3.0T MRI than on CT in all four portal regions. Therefore, 3.0T MRI, which uses non-ionizing radiation, was superior to CT as imaging equipment for portal vein visualization.
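
The SNR/CNR figures above come from ROI statistics. A commonly used form is SNR = ROI mean / noise SD and CNR = (ROI mean - background mean) / noise SD, although the abstract does not spell out the exact definitions the authors used; the sketch below and its numbers are therefore only an illustration under that assumption.

```python
# Hedged illustration of ROI-based SNR/CNR; definitions and numbers are
# assumptions, not taken from the paper.

def snr(roi_mean, noise_sd):
    """Signal-to-noise ratio: ROI mean over noise standard deviation."""
    return roi_mean / noise_sd

def cnr(roi_mean, background_mean, noise_sd):
    """Contrast-to-noise ratio: ROI/background mean difference over noise SD."""
    return (roi_mean - background_mean) / noise_sd

# Invented ROI statistics for one portal vein branch
portal_mean, liver_mean, noise_sd = 310.0, 185.0, 13.0
print(f"SNR = {snr(portal_mean, noise_sd):.1f}, "
      f"CNR = {cnr(portal_mean, liver_mean, noise_sd):.1f}")
```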

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.49-71 / 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed with their work by cross-wise and sequential exploration of associated knowledge related to each other in terms of certain criteria, such as content relevance. A knowledge map is a diagram or taxonomy giving an overview of the status of currently managed knowledge in a knowledge-base, and it supports users' knowledge exploration based on certain relationships between knowledge. A knowledge map, therefore, must be expressed in a networked form by linking related knowledge based on certain types of relationships, and it should be implemented by deploying proper technologies or tools specialized in defining and inferring them. To this end, this study suggests a methodology for developing a knowledge graph-based knowledge map using a Graph DB, which is known to be well suited for expressing and inferring the relationships between entities stored in a knowledge-base. The procedures of the proposed methodology are modeling graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge. Among the various Graph DBs, Neo4j is used in this study for its credibility and applicability, proven through a wide range of application cases. To examine the validity of the proposed methodology, a knowledge graph-based knowledge map is implemented with the Graph DB, and a performance comparison test is performed by applying the previous research's data to check whether this study's knowledge map can yield the same level of performance as the previous one did. The previous research's case concerns building a process-based knowledge map using ontology technology, which identifies links between related knowledge based on the sequences of tasks producing or being activated by knowledge. In other words, since a task not only is activated by knowledge as an input but also produces knowledge as an output, input and output knowledge are linked as a flow by the task. Also, since a business process is composed of affiliated tasks fulfilling the purpose of the process, the knowledge networks within a business process can be derived from the sequences of the tasks composing the process. Therefore, using Neo4j, processes, tasks, and knowledge, as well as the relationships among them, are defined as nodes and relationships so that knowledge links can be identified based on the sequences of tasks. The knowledge network resulting from aggregating the identified knowledge links is a knowledge map equipped with the functionality of a knowledge graph, so its performance needs to be tested against the level of the previous research's validation results. The performance test examines two aspects, the correctness of knowledge links and the possibility of inferring new types of knowledge: the former is examined using 7 questions, and the latter is checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one and handled knowledge definition as well as knowledge relationship inference in a more efficient manner. Furthermore, compared with the previous research's ontology-based approach, this study's Graph DB-based approach showed more beneficial functionality in intensively managing only the knowledge of interest, dynamically defining knowledge and relationships to reflect various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating new relationships by aggregating existing ones. This study's artifacts can be applied to implement a user-friendly knowledge exploration function reflecting the user's cognitive process toward associated knowledge, and they can further underpin the development of an intelligent knowledge-base that expands autonomously through the discovery of new knowledge and relationships by inference. Moreover, this study has an immediate effect on implementing the networked knowledge map essential to satisfying contemporary users seeking proper knowledge to use.
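
A minimal sketch of how process, task, and knowledge nodes might be modeled and queried with the Neo4j Python driver and Cypher is shown below. The node labels, relationship types, and connection details are assumptions for illustration, not the schema used in the paper; the query demonstrates the idea of inferring a knowledge-to-knowledge link through the task that consumes one knowledge item and produces another.

```python
from neo4j import GraphDatabase

# Connection details are placeholders
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

setup = """
CREATE (k1:Knowledge {name: 'Requirements spec'})
CREATE (k2:Knowledge {name: 'Design document'})
CREATE (t1:Task {name: 'Design', seq: 1})
CREATE (t2:Task {name: 'Implement', seq: 2})
CREATE (t1)-[:USES]->(k1)
CREATE (t1)-[:PRODUCES]->(k2)
CREATE (t2)-[:USES]->(k2)
"""

# Infer knowledge-to-knowledge links through the task that uses one
# knowledge item as input and produces another as output (the 'flow'
# described in the abstract).
infer = """
MATCH (a:Knowledge)<-[:USES]-(t:Task)-[:PRODUCES]->(b:Knowledge)
RETURN a.name AS source, t.name AS via_task, b.name AS target
"""

with driver.session() as session:
    session.run(setup)
    for record in session.run(infer):
        print(record["source"], "->", record["via_task"], "->", record["target"])

driver.close()
```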