• Title/Summary/Keyword: automatic recognition (자동인지)


Standardization and Management of Interface Terminology regarding Chief Complaints, Diagnoses and Procedures for Electronic Medical Records: Experiences of a Four-hospital Consortium (전자의무기록 표준화 용어 관리 프로세스 정립)

  • Kang, Jae-Eun;Kim, Kidong;Lee, Young-Ae;Yoo, Sooyoung;Lee, Ho Young;Hong, Kyung Lan;Hwang, Woo Yeon
    • Journal of the Korea Academia-Industrial cooperation Society, v.22 no.3, pp.679-687, 2021
  • The purpose of the present study was to document the standardization and management process for interface terminology regarding chief complaints, diagnoses, and procedures, including surgery, in a four-hospital consortium. The process was proposed, discussed, modified, and finalized in 2016 by the Terminology Standardization Committee (TSC), consisting of personnel from the four hospitals. A request regarding interface terminology was classified into one of four categories: 1) registration of a new term, 2) revision, 3) deletion of an old term and registration of a new term, and 4) deletion. A request was processed in the following order: 1) collecting testimony from the related departments and 2) voting by the TSC, where at least five of the seven members of the voting pool had to approve the request. Mapping to the reference terminology was performed by three independent medical information managers. All processes were performed online, and the voting and mapping results were collected automatically. This made the decision-making process clear and fast, and made users receptive to the TSC's decisions. In the 16 months after the process was adopted, there were 126 registrations of new terms, 131 revisions, 40 deletions of an old term with registration of a new term, and 1,235 deletions.
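The four request categories and the five-of-seven approval vote described above can be sketched in a few lines of Python; the category numbering and vote threshold come from the abstract, while all identifiers and the sample request are illustrative assumptions:

```python
# Hedged sketch of the TSC terminology-request workflow: the four request
# categories and the five-of-seven approval rule described in the abstract.
# Identifiers and the sample request are made up for illustration.

CATEGORIES = {
    1: "registration of a new term",
    2: "revision",
    3: "deletion of an old term and registration of a new term",
    4: "deletion",
}

def approved(votes_for, pool_size=7, required=5):
    """A request passes when at least 5 of the 7 voting-pool members approve."""
    return required <= votes_for <= pool_size

request = {"category": 1, "term": "example term", "votes_for": 6}
verdict = "approved" if approved(request["votes_for"]) else "rejected"
print(CATEGORIES[request["category"]], "->", verdict)
```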

Trends in QA/QC of Phytoplankton Data for Marine Ecosystem Monitoring (해양생태계 모니터링을 위한 식물플랑크톤 자료의 정도 관리 동향)

  • YIH, WONHO;PARK, JONG WOO;SEONG, KYEONG AH;PARK, JONG-GYU;YOO, YEONG DU;KIM, HYUNG SEOP
    • The Sea: Journal of the Korean Society of Oceanography, v.26 no.3, pp.220-237, 2021
  • Since the functional importance of marine phytoplankton was first advocated in the early 1880s, massive data on species composition and abundance have been produced by classical microscopic observation and by advanced auto-imaging technologies. Recently, pigment composition obtained from direct chemical analysis of phytoplankton samples, or indirectly by remote sensing, has been used for group-specific quantification, leading to more diversified data-production methods and improved spatiotemporal access to target data-gathering points. In quite a few long-term marine ecosystem monitoring programs, phytoplankton species composition and abundance are included as basic monitoring items. These data can serve as crucial evidence of long-term change in phytoplankton community structure and ecological functioning at the monitoring stations. The usability of phytoplankton data is, however, sometimes limited by changes of data producers over the monitoring period: methods of sample treatment, analysis, and species identification can be inconsistent among different data producers and monitoring years. In-depth study to determine precise quantitative values of phytoplankton species composition and abundance may be traced back to Victor Hensen in the late 1880s. International discussion on the quality assurance of marine phytoplankton data began in 1969 with SCOR Working Group 33 of ICSU. The Working Group's final report in 1974 (UNESCO Technical Papers in Marine Science 18) was later revised and published as UNESCO Monographs on Oceanographic Methodology 6. The BEQUALM project, the forerunner of the IPI (International Phytoplankton Intercomparison) for marine phytoplankton data QA/QC under the ISO standard, was initiated in the late 1990s. The IPI promotes international collaboration so that all participating countries can apply the QA/QC standard established through 20 years of experience and practice. In Korea, however, no such QA/QC standard for marine phytoplankton species composition and abundance data has been established by law, whereas one for marine chemical measurement and analysis data has already been set up and is being managed. The first priority should be to establish a QA/QC standard system for species composition and abundance data of marine phytoplankton, which could then be extended to other functional groups at higher consumer levels of marine food webs.

Effects of Halogen and Light-Shielding Curtains on Acquisition of Hyperspectral Images in Greenhouses (온실 내 초분광 영상 취득 시 할로겐과 차광 커튼이 미치는 영향)

  • Kim, Tae-Yang;Ryu, Chan-Seok;Kang, Ye-seong;Jang, Si-Hyeong;Park, Jun-Woo;Kang, Kyung-Suk;Baek, Hyeon-Chan;Park, Min-Jun;Park, Jin-Ki
    • Korean Journal of Agricultural and Forest Meteorology, v.23 no.4, pp.306-315, 2021
  • This study analyzed the effects of light-shielding curtains and halogen lamps on the spectrum when acquiring hyperspectral images in a greenhouse. Image data of a tarp (1.4 × 1.4 m, 12%) at an angle of 30 degrees were acquired three times under four conditions at 14 heights, using the automatic image acquisition system installed in the greenhouse of the National Institute of Crop Science, Department of Southern Area Crop Science. When images were acquired without either a light-shielding curtain or a halogen lamp, the spectral tendencies of the direct-light and shadow parts differed around 550 nm. The average coefficient of variation (CV) was 1.8% for the direct-light parts and 4.2% for the shadow parts, and increased to 12.5% when computed regardless of shadow. When images were acquired using only a halogen lamp, the average CV of the direct-light and shadow parts was 2.6% and 10.6%, respectively, and the variation of the spectrum widened because the amount of halogen light changed with height. When only the shading curtain was used, the average CV was 1.6%, and the distinction between direct light and shadow disappeared. When images were acquired using both a shading curtain and a halogen lamp, the average CV increased to 10.2% because the amount of halogen light differed with height. With halogen and light-shielding curtains, the average CV by height was 1.4% at 0.1 m, 1.9% at 0.2 m, 2.6% at 0.3 m, and 3.3% at 0.4 m. When hyperspectral imagery is acquired, a shading curtain should be used to minimize the effect of shadows. Moreover, supplementary lighting with a halogen lamp is judged to be effective when the object is smaller than 0.2 m and the distance between the object and the sensor housing is kept constant.
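The coefficient of variation used throughout the comparison above is the standard deviation divided by the mean; a minimal sketch, with made-up reflectance readings standing in for the tarp measurements:

```python
import math

# Hedged sketch: coefficient of variation (CV) of reflectance values at one
# wavelength, the statistic used in the abstract to compare imaging
# conditions. The reflectance values below are illustrative, not the paper's.

def coefficient_of_variation(values):
    """CV (%) = standard deviation / mean * 100 (population standard deviation)."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mean * 100.0

reflectance = [0.120, 0.118, 0.123, 0.119, 0.121]  # illustrative tarp readings
print(f"CV: {coefficient_of_variation(reflectance):.2f}%")
```

A lower CV across heights, as reported for the curtain-only condition, indicates a more stable spectrum.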

Smart farm development strategy suitable for the domestic situation: Focusing on ICT technical characteristics for the development of the sixth industry (국내 실정에 적합한 스마트팜 개발 전략 -6차산업의 발전을 위한 ICT 기술적 특성을 중심으로-)

  • Han, Sang-Ho;Joo, Hyung-Kun
    • Journal of Digital Convergence, v.20 no.4, pp.147-157, 2022
  • This study proposes a smart farm technology strategy suited to the domestic situation, focusing on differentiating ICT technology for domestic conditions. Advanced countries in the agricultural industry were confirmed to focus on developing the specific stages that reflect each country's geographical characteristics, the characteristics of its agricultural industry, and the characteristics of public demand, whereas no such stage-specific development was confirmed domestically. Therefore, in response to problems such as the rapid decrease and aging of the domestic rural population, the loss of agricultural price competitiveness, the increase in fallow land, and the declining utilization rate of arable land, this study suggests that future smart farm ICT technology be promoted with attention to excellent performance, ease of use for an aging labor force, and economic feasibility suitable for small business scale, so as to produce quality agricultural products with price competitiveness. First, in terms of economic feasibility, it was suggested that configuring ICT technology with only the functions needed in the small farm household (primary-industry) business environment, and gradually updating the functions actually required by farms through a smooth communication channel with them, may contribute to cost reduction. Second, in terms of performance, it was suggested that operational accuracy can be increased by improving the communication function of ICT, for example by adjusting the difficulty of big data use to suit Korea's aging farm population, using language familiar to them, and setting algorithms that reflect their prediction tendencies. Third, regarding ease of use: smart farms based on ICT technology for the development of the sixth industry (1.0 (agriculture and forestry) + 2.0 (agricultural and fishery product processing) + 3.0 (services, rural experience, SCM)) perform operations according to specific commands, and it was finally suggested that ease of use can be promoted by presetting and standardizing devices based on big data configurations customized for each regional environment.

Analysis of the Effect of Objective Functions on Hydrologic Model Calibration and Simulation (목적함수에 따른 매개변수 추정 및 수문모형 정확도 비교·분석)

  • Lee, Gi Ha;Yeon, Min Ho;Kim, Young Hun;Jung, Sung Ho
    • Journal of Korean Society of Disaster and Security, v.15 no.1, pp.1-12, 2022
  • An automatic optimization technique is used to estimate the optimal parameters of a hydrologic model, and different hydrologic responses can result depending on the objective function. In this study, the parameters of an event-based rainfall-runoff model were estimated using various objective functions, the reproducibility of the hydrograph under each objective function was evaluated, and appropriate objective functions were proposed. As the rainfall-runoff model, the storage function model (SFM), a lumped hydrologic model used for runoff simulation in the current Korean flood forecasting system, was selected. To evaluate hydrograph reproducibility for each objective function, 9 rainfall events were selected for the Cheoncheon basin, upstream of Yongdam Dam, and 7 widely used objective functions were selected for parameter estimation of the SFM for each rainfall event. The reproducibility of the hydrographs simulated using the optimal parameter sets from the different objective functions was then analyzed. As a result, RMSE, NSE, and RSR, whose objective functions include an error-squared term, showed the highest accuracy for all rainfall events except Event 7. PBIAS and VE, which include an error term relative to the observed flow, also showed relatively stable hydrograph reproducibility. However, MIA, which adjusts parameters sensitive to high flow and low flow simultaneously, showed very low hydrograph reproducibility.
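Several of the objective functions named above have standard textbook definitions; a minimal sketch of four of them (RMSE, NSE, RSR, PBIAS) applied to observed versus simulated discharge, with made-up flow values and illustrative variable names:

```python
import math

# Hedged sketch: standard definitions of four objective functions mentioned
# in the abstract (RMSE, NSE, RSR, PBIAS), applied to observed vs. simulated
# discharge. The flow values are illustrative, not the study's data.

def rmse(obs, sim):
    """Root mean square error."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def rsr(obs, sim):
    """RMSE normalized by the standard deviation of observations."""
    mean_obs = sum(obs) / len(obs)
    sd_obs = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / len(obs))
    return rmse(obs, sim) / sd_obs

def pbias(obs, sim):
    """Percent bias: positive values indicate underestimation."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

obs = [10.0, 25.0, 60.0, 40.0, 15.0]  # observed discharge (illustrative)
sim = [12.0, 22.0, 55.0, 42.0, 14.0]  # simulated discharge (illustrative)
print(rmse(obs, sim), nse(obs, sim), rsr(obs, sim), pbias(obs, sim))
```

Note that RMSE, NSE, and RSR all contain the same error-squared term, which is why they tend to rank parameter sets similarly, as the study observed.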

Knowledge graph-based knowledge map for efficient expression and inference of associated knowledge (연관지식의 효율적인 표현 및 추론이 가능한 지식그래프 기반 지식지도)

  • Yoo, Keedong
    • Journal of Intelligence and Information Systems, v.27 no.4, pp.49-71, 2021
  • Users who intend to utilize knowledge to actively solve given problems proceed by cross-wise and sequential exploration of knowledge associated by certain criteria, such as content relevance. A knowledge map is a diagram or taxonomy giving an overview of the knowledge currently managed in a knowledge base, and it supports users' knowledge exploration based on the relationships between knowledge items. A knowledge map must therefore be expressed in a networked form, linking related knowledge by certain types of relationships, and should be implemented with technologies or tools specialized in defining and inferring those relationships. To this end, this study suggests a methodology for developing a knowledge graph-based knowledge map using a graph DB, known for its functionality in expressing and inferring entities and relationships stored in a knowledge base. The procedures of the proposed methodology are modeling graph data; creating nodes, properties, and relationships; and composing knowledge networks by combining the identified links between knowledge. Among the various graph DBs, Neo4j is used in this study for its credibility and applicability, demonstrated through wide and various application cases. To examine the validity of the proposed methodology, a knowledge graph-based knowledge map was implemented with the graph DB, and a performance comparison test was performed by applying a previous study's data to check whether this study's knowledge map yields the same level of performance. The previous study built a process-based knowledge map using ontology technology, identifying links between related knowledge based on the sequences of tasks that produce, or are activated by, knowledge. In other words, since a task is activated by knowledge as an input and also produces knowledge as an output, input and output knowledge are linked as a flow by the task. And since a business process is composed of affiliated tasks fulfilling the purpose of the process, the knowledge networks within a business process can be derived from the sequences of its tasks. Therefore, using Neo4j, processes, tasks, and knowledge, as well as the relationships among them, were defined as nodes and relationships so that knowledge links could be identified from the task sequences. The resulting knowledge network, aggregating the identified knowledge links, is a knowledge map with the functionality of a knowledge graph, so its performance was tested against the previous study's validation results. The performance test examined two aspects, the correctness of knowledge links and the possibility of inferring new types of knowledge: the former was examined using 7 questions, and the latter was checked by extracting two new types of knowledge. As a result, the knowledge map constructed through the proposed methodology showed the same level of performance as the previous one, and handled knowledge definition and knowledge relationship inference more efficiently. Furthermore, compared with the previous study's ontology-based approach, this study's graph DB-based approach showed additional benefits: intensively managing only the knowledge of interest, dynamically defining knowledge and relationships to reflect various meanings from situations to purposes, agilely inferring knowledge and relationships through Cypher-based queries, and easily creating new relationships by aggregating existing ones. The artifacts of this study can be applied to implement user-friendly knowledge exploration that reflects users' cognitive processes toward associated knowledge, and can further underpin the development of an intelligent knowledge base that expands autonomously through the inference-based discovery of new knowledge and relationships. Beyond these, this study has an immediate effect on implementing the networked knowledge map essential to contemporary users eager to find the proper knowledge to use.
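The task-mediated linking idea (input knowledge flows through a task to its output knowledge) can be sketched in plain Python without a graph DB; the toy process, task names, and knowledge identifiers below are illustrative assumptions, not the study's data:

```python
# Hedged sketch (assumed toy data): derive knowledge-to-knowledge links from
# an ordered sequence of tasks, where each task consumes input knowledge and
# produces output knowledge, as described for the process-based knowledge map.

tasks = [  # ordered tasks of one hypothetical business process
    {"task": "receive_order", "inputs": ["K_order_form"], "outputs": ["K_order_record"]},
    {"task": "check_stock", "inputs": ["K_order_record"], "outputs": ["K_stock_status"]},
    {"task": "ship_goods", "inputs": ["K_stock_status"], "outputs": ["K_shipping_note"]},
]

def knowledge_links(task_seq):
    """Return (source_knowledge, task, target_knowledge) triples:
    each task links its input knowledge to its output knowledge."""
    links = []
    for t in task_seq:
        for src in t["inputs"]:
            for dst in t["outputs"]:
                links.append((src, t["task"], dst))
    return links

for src, task, dst in knowledge_links(tasks):
    print(f"{src} -[{task}]-> {dst}")
```

In Neo4j, the same derivation would be expressed as a Cypher pattern match over task nodes and their input/output relationships; the sketch merely makes the link-derivation rule explicit.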

Topic Modeling Insomnia Social Media Corpus using BERTopic and Building Automatic Deep Learning Classification Model (BERTopic을 활용한 불면증 소셜 데이터 토픽 모델링 및 불면증 경향 문헌 딥러닝 자동분류 모델 구축)

  • Ko, Young Soo;Lee, Soobin;Cha, Minjung;Kim, Seongdeok;Lee, Juhee;Han, Ji Yeong;Song, Min
    • Journal of the Korean Society for Information Management, v.39 no.2, pp.111-129, 2022
  • Insomnia is a chronic disease in modern society, with the number of new patients increasing by more than 20% in the last 5 years. Insomnia is a serious disease that requires diagnosis and treatment, because the individual and social problems arising from lack of sleep are serious and the triggers of insomnia are complex. This study collected 5,699 posts from 'insomnia', a community on the social media site 'Reddit' where opinions are freely expressed. Based on the International Classification of Sleep Disorders (ICSD-3) standard and guidelines prepared with the help of experts, an insomnia corpus was constructed by tagging the posts as insomnia-tendency or non-insomnia-tendency documents. Five deep learning language models (BERT, RoBERTa, ALBERT, ELECTRA, XLNet) were trained on the constructed insomnia corpus. In the performance evaluation, RoBERTa showed the highest performance, with an accuracy of 81.33%. For an in-depth analysis of the insomnia social data, topic modeling was performed using the recently introduced BERTopic method, which addresses weaknesses of LDA, the method widely used in the past. The analysis identified 8 topic groups ('Negative emotions', 'Advice, help, and gratitude', 'Insomnia-related diseases', 'Sleeping pills', 'Exercise and eating habits', 'Physical characteristics', 'Activity characteristics', 'Environmental characteristics'). Users expressed negative emotions and sought help and advice from the Reddit insomnia community. They also mentioned diseases related to insomnia, shared discourse on the use of sleeping pills, and expressed interest in exercise and eating habits. As insomnia-related characteristics, we found physical characteristics such as breathing, pregnancy, and the heart; activity characteristics such as zombies, hypnic jerks, and grogginess; and environmental characteristics such as sunlight, blankets, temperature, and naps.

Evaluation of the usefulness of IGRT(Image Guided Radiation Therapy) for markerless patients using SGPS(Surface-Guided Patient Setup) (표면유도환자셋업(Surface-Guided Patient Setup, SGPS)을 활용한 Markerless환자의 영상유도방사선치료(Image Guided Radiation Therapy, IGRT)시 유용성 평가)

  • Lee, Kyeong-jae;Lee, Eung-man;Lee, Jeong-su;Kim, Da-yeon;Ko, Hyeon-jun;Choi, Shin-cheol
    • The Journal of Korean Society for Radiation Therapy, v.33, pp.109-116, 2021
  • Purpose: The purpose of this study is to evaluate the usefulness of Surface-Guided Patient Setup (SGPS) by comparing patient positioning accuracy when image-guided radiation therapy (IGRT) was used for Markerless patients (no marks on the skin) set up with SGPS and for Marker patients (marks on the skin) set up with Laser-Based Patient Setup (LBPS). Materials and Methods: The position error during IGRT was compared between Markerless patients initially set up with SGPS, using an optical surface scanning system with three cameras, and Marker patients initially set up with LBPS, which aligns the lasers with the markers drawn on the patient's skin. Both SGPS and LBPS were performed on 20 prostate cancer patients and 10 stereotactic radiosurgery (SRS) patients, and SGPS was performed on an additional 60 breast cancer patients. IGRT was performed on all patients using CBCT or OBI. Position errors in 6 degrees of freedom were obtained using the Auto-Matching System, and comparison and analysis were performed using Offline-Review in the treatment planning system. Results: The difference between the root mean square (RMS) of SGPS and that of LBPS was Vrt -0.02 cm, Log -0.02 cm, Lat 0.01 cm, Pit -0.01°, Rol -0.01°, Rtn -0.01° in prostate cancer patients, and Vrt 0.02 cm, Log -0.05 cm, Lat 0.00 cm, Pit -0.30°, Rol -0.15°, Rtn -0.33° in SRS patients; there was no significant difference in either region. For breast cancer patients, the RMS relative to IGRT was Vrt 0.26, Log 0.21, Lat 0.15, Pit 0.81, Rol 0.49, Rtn 0.59. Conclusion: The position error of SGPS compared to LBPS showed no significant difference in prostate cancer and SRS patients. For the additionally examined SGPS breast cancer patients, the position error relative to IGRT was not large. Therefore, replacing LBPS with SGPS, which has the great advantage of not requiring patient skin marking, is considered useful.
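The RMS values compared above are the root mean square of per-fraction setup errors on each axis; a minimal sketch with made-up error values standing in for the measured Vrt offsets:

```python
import math

# Hedged sketch: root-mean-square (RMS) of per-fraction setup errors for one
# axis, the statistic used to compare SGPS and LBPS in the abstract.
# The error values below are made up for illustration.

def rms(errors):
    """Root mean square of a list of signed setup errors."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

vrt_errors_sgps = [0.12, -0.08, 0.05, 0.10, -0.04]  # cm, illustrative
vrt_errors_lbps = [0.14, -0.09, 0.07, 0.11, -0.06]  # cm, illustrative

diff = rms(vrt_errors_sgps) - rms(vrt_errors_lbps)
print(f"RMS difference (Vrt): {diff:.3f} cm")
```

A near-zero RMS difference per axis, as reported for the prostate and SRS groups, is what supports the conclusion that SGPS can replace LBPS.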

Comparisons of Soil Water Retention Characteristics and FDR Sensor Calibration of Field Soils in Korean Orchards (노지 과수원 토성별 수분보유 특성 및 FDR 센서 보정계수 비교)

  • Lee, Kiram;Kim, Jongkyun;Lee, Jaebeom;Kim, Jongyun
    • Journal of Bio-Environment Control, v.31 no.4, pp.401-408, 2022
  • As research on controlled environment systems based on crop growth environment sensing has become important for the sustainable production of horticultural crops and its industrial use, research on how to properly utilize soil moisture sensors for outdoor cultivation is being actively conducted. This experiment was conducted to suggest a proper method of utilizing the TEROS 12, an FDR (frequency domain reflectometry) sensor frequently used in industry and research, for orchard soils from three regions in Korea. We collected soils from orchards where fruit trees were grown, investigated the soil characteristics and soil water retention curves, and compared TEROS 12 calibration equations relating the sensor output to the corresponding soil volumetric water content through linear and cubic regressions for each soil sample. Estimates from the calibration equation provided by the manufacturer were also compared. The soils collected from the three orchards differed in soil characteristics and in volumetric water content at each soil water retention level. The cubic calibration equation for the TEROS 12 sensor showed the highest coefficient of determination, above 0.95, and the lowest RMSE for all soil samples. When volumetric water contents were estimated from TEROS 12 output using the manufacturer's calibration equation, they were lower than the actual volumetric water contents, by up to 0.09-0.17 m³·m⁻³ depending on the soil sample, indicating that calibration for each soil should precede FDR sensor utilization. Also, the range of soil volumetric water content corresponding to the soil water retention levels differed across the soil samples, suggesting that soil water retention information is required to properly interpret the volumetric water content of a soil. Moreover, soil with a high sand content had a relatively narrow range of volumetric water contents for irrigation, reducing the accuracy of FDR sensor measurement. In conclusion, analyzing the soil water retention characteristics of the target soil and performing soil-specific calibration are necessary to properly quantify soil water status and determine an adequate irrigation point with an FDR sensor.
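The linear-versus-cubic calibration comparison above can be sketched with ordinary least-squares polynomial fits; the raw sensor counts and volumetric water contents below are made up for illustration and do not reproduce the study's measurements:

```python
import numpy as np

# Hedged sketch (made-up readings): fitting linear and cubic calibration
# curves that map raw FDR sensor output to volumetric water content
# (VWC, m^3·m^-3), as compared in the abstract for the TEROS 12 sensor.

raw = np.array([1500.0, 1800.0, 2100.0, 2400.0, 2700.0, 3000.0])  # sensor counts
vwc = np.array([0.05, 0.12, 0.20, 0.29, 0.37, 0.44])  # measured VWC (illustrative)

lin = np.polyfit(raw, vwc, 1)  # linear calibration coefficients
cub = np.polyfit(raw, vwc, 3)  # cubic calibration coefficients

def r_squared(coeffs, x, y):
    """Coefficient of determination of a polynomial fit on its data."""
    pred = np.polyval(coeffs, x)
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

print("linear R^2:", r_squared(lin, raw, vwc))
print("cubic  R^2:", r_squared(cub, raw, vwc))
```

On its own training data a cubic fit can never have a lower R² than a linear one, which is consistent with the study's finding that the cubic equation gave the highest coefficient of determination.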

Analysis of Optimal Resolution and Number of GCP Chips for Precision Sensor Modeling Efficiency in Satellite Images (농림위성영상 정밀센서모델링 효율성 재고를 위한 최적의 해상도 및 지상기준점 칩 개수 분석)

  • Choi, Hyeon-Gyeong;Kim, Taejung
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1445-1462, 2022
  • Compact Advanced Satellite 500-4 (CAS500-4), scheduled for launch in 2025, is a mid-resolution satellite with 5 m resolution developed for wide-area agriculture and forest observation. To utilize satellite images, it is important to establish a precision sensor model with accurate geometric information. Previous research reported that a precision sensor model can be established automatically by matching ground control point (GCP) chips to satellite images. Therefore, to improve the geometric accuracy of satellite images, the GCP chip matching performance must be improved. This paper proposes an improved GCP chip matching scheme for precision sensor modeling of mid-resolution satellite images. When matching high-resolution GCP chips against mid-resolution satellite images, there are two major issues: handling the resolution difference between GCP chips and satellite images, and finding the optimal number of GCP chips. To address these issues, this study compared and analyzed chip matching performance for various satellite image upsampling factors and various numbers of chips. RapidEye images with a resolution of 5 m were used as the mid-resolution satellite images. GCP chips were prepared from aerial orthoimages with a resolution of 0.25 m and satellite orthoimages with a resolution of 0.5 m. Accuracy analysis was performed using manually extracted reference points. The experiments show that upsampling factors of two and three significantly improved sensor model accuracy, and that accuracy was maintained with the number of GCP chips reduced to around 100. The results confirm the possibility of applying high-resolution GCP chips to automated precision sensor modeling of mid-resolution satellite images with improved accuracy, and are expected to be used in establishing a precise sensor model for CAS500-4.
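The resolution-bridging step above (upsampling the mid-resolution image by an integer factor before matching against higher-resolution chips) can be illustrated with the simplest possible resampler; this nearest-neighbor sketch is an assumption for illustration only, and the paper's actual resampling method is not specified here:

```python
import numpy as np

# Hedged sketch: nearest-neighbor upsampling of a mid-resolution image by an
# integer factor, illustrating how an upsampling factor of 2 or 3 narrows the
# resolution gap between a 5 m satellite image and sub-meter GCP chips.
# This is not the paper's resampling algorithm, only the general idea.

def upsample(image, factor):
    """Repeat each pixel `factor` times along both axes (nearest neighbor)."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

img = np.array([[1, 2],
                [3, 4]])
up2 = upsample(img, 2)  # 2x2 -> 4x4; 5 m pixels behave like ~2.5 m pixels
print(up2)
```

In practice a smoother interpolator (e.g. bilinear or bicubic) would typically be preferred for image matching, but the factor-of-two or factor-of-three geometry is the same.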