• Title/Summary/Keyword: actual system

Search Results: 11,632

Prefetching based on the Type-Level Access Pattern in Object-Relational DBMSs (객체관계형 DBMS에서 타입수준 액세스 패턴을 이용한 선인출 전략)

  • Han, Wook-Shin;Moon, Yang-Sae;Whang, Kyu-Young
    • Journal of KIISE:Databases / v.28 no.4 / pp.529-544 / 2001
  • Prefetching is an effective method to minimize the number of roundtrips between the client and the server in database management systems. In this paper, we propose the new notions of the type-level access pattern and type-level access locality and develop an efficient prefetching policy based on them. A type-level access pattern is a sequence of attributes that are referenced in accessing objects; type-level access locality is the phenomenon that regular and repetitive type-level access patterns exist. Existing prefetching methods are based on object-level or page-level access patterns, which consist of the object-ids or page-ids of the objects accessed. The drawback of these methods is that they work only when exactly the same objects or pages are accessed repeatedly. In contrast, even when the same objects are not accessed repeatedly, our technique effectively prefetches objects if the same attributes are referenced repeatedly, i.e., if there is type-level access locality. Many navigational applications in Object-Relational Database Management Systems (ORDBMSs) have type-level access locality, so our technique can be employed in ORDBMSs to effectively reduce the number of roundtrips and thereby significantly enhance performance. We have conducted extensive experiments in a prototype ORDBMS to show the effectiveness of our algorithm. Experimental results using the OO7 benchmark and a real GIS application show that our technique provides orders-of-magnitude improvements in the number of roundtrips and severalfold improvements in overall performance over on-demand fetching and context-based prefetching, a state-of-the-art prefetching method. These results indicate that our approach is significant and is a practical method that can be implemented in commercial ORDBMSs.

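As an illustration of the idea in this abstract, the sketch below shows how a client-side cache might learn a type-level access pattern (a sequence of attributes referenced for a type) and then prefetch those attributes in a single round trip. The class, its methods, and the fetch_attributes callback are hypothetical; this is only a minimal sketch of the notion, not the paper's algorithm.

```python
from collections import defaultdict

class TypeLevelPrefetcher:
    """Toy illustration: learn which attributes of a type are typically
    referenced together, then fetch them in one round trip next time."""

    def __init__(self, fetch_attributes):
        # fetch_attributes(type_name, oid, attrs) -> {attr: value}
        # (assumed to issue a single client/server round trip)
        self.fetch_attributes = fetch_attributes
        self.pattern = defaultdict(list)   # type name -> learned attribute sequence
        self.cache = {}                    # (oid, attr) -> value

    def record_access(self, type_name, attr):
        """Remember the order in which attributes of a type are referenced."""
        if attr not in self.pattern[type_name]:
            self.pattern[type_name].append(attr)

    def get(self, type_name, oid, attr):
        """Return an attribute value, prefetching the learned pattern on a miss."""
        self.record_access(type_name, attr)
        if (oid, attr) not in self.cache:
            # Fetch the whole learned pattern in one round trip, not just `attr`.
            wanted = self.pattern[type_name] or [attr]
            values = self.fetch_attributes(type_name, oid, wanted)
            for a, v in values.items():
                self.cache[(oid, a)] = v
        return self.cache[(oid, attr)]
```

Even when a navigational application touches a new object on every request, the learned attribute sequence for its type is reused, which is the source of the round-trip savings the abstract describes.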

Comparison of Measured and Calculated Carboxylation Rate, Electron Transfer Rate and Photosynthesis Rate Response to Different Light Intensity and Leaf Temperature in Semi-closed Greenhouse with Carbon Dioxide Fertilization for Tomato Cultivation (반밀폐형 온실 내에서 탄산가스 시비에 따른 광강도와 엽온에 반응한 토마토 잎의 최대 카복실화율, 전자전달율 및 광합성율 실측값과 모델링 방정식에 의한 예측값의 비교)

  • Choi, Eun-Young;Jeong, Young-Ae;An, Seung-Hyun;Jang, Dong-Cheol;Kim, Dae-Hyun;Lee, Dong-Soo;Kwon, Jin-Kyung;Woo, Young-Hoe
    • Journal of Bio-Environment Control / v.30 no.4 / pp.401-409 / 2021
  • This study aimed to estimate the photosynthetic capacity of tomato plants grown in a semi-closed greenhouse using temperature response models of plant photosynthesis, by calculating the ribulose-1,5-bisphosphate carboxylase/oxygenase maximum carboxylation rate (Vcmax), maximum electron transport rate (Jmax), thermal breakdown (high-temperature inhibition), and leaf respiration, in order to predict the optimal conditions of a CO2-controlled greenhouse for maximizing the photosynthetic rate. Gas exchange measurements of the A-Ci curve response to CO2 level at different light intensities (PAR, photosynthetically active radiation, of 200 to 1,500 µmol·m-2·s-1) and leaf temperatures (20℃ to 35℃) were conducted with a portable infrared gas analyzer system. The Arrhenius function, net CO2 assimilation (An), thermal breakdown, and daytime leaf respiration (Rd) were also calculated using the modeling equations. The estimated Jmax, An, Arrhenius function value, and thermal breakdown decreased in response to increased leaf temperature (> 30℃), and the optimum leaf temperature for the estimated Jmax was 30℃. The CO2 saturation point of the fifth leaf from the apical region was reached at 600 ppm for PAR of 200 and 400 µmol·m-2·s-1, at 800 ppm for PAR of 600 and 800 µmol·m-2·s-1, at 1,000 ppm for PAR of 1,000 µmol·m-2·s-1, and at 1,500 ppm for PAR of 1,200 and 1,500 µmol·m-2·s-1. The results suggest that the optimal CO2 concentration for improving the photosynthetic rates of fruit vegetables grown in greenhouses can be determined using the photosynthetic model equations.
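The temperature responses mentioned above (Vcmax, Jmax, the Arrhenius function, and thermal breakdown) are commonly modeled with the Arrhenius equation and its peaked variant. The sketch below shows these standard forms with placeholder parameters; the activation energy, deactivation energy, and entropy values are illustrative assumptions, not the values fitted in this study.

```python
import math

R = 8.314       # universal gas constant, J mol^-1 K^-1
T_REF = 298.15  # reference temperature (25 degC), K

def arrhenius(k25, Ea, T_leaf_c):
    """Arrhenius scaling of a rate (e.g. Vcmax or Jmax) from 25 degC to T_leaf."""
    Tk = T_leaf_c + 273.15
    return k25 * math.exp(Ea * (Tk - T_REF) / (T_REF * R * Tk))

def peaked_arrhenius(k25, Ea, Hd, dS, T_leaf_c):
    """Peaked (thermal-breakdown) form: rises with temperature, then declines
    above an optimum because of high-temperature deactivation."""
    Tk = T_leaf_c + 273.15
    rise = arrhenius(k25, Ea, T_leaf_c)
    numerator = 1.0 + math.exp((T_REF * dS - Hd) / (T_REF * R))
    denominator = 1.0 + math.exp((Tk * dS - Hd) / (Tk * R))
    return rise * numerator / denominator

# Placeholder parameters (illustrative only, not the fitted values from the study)
if __name__ == "__main__":
    for t in (20, 25, 30, 35):
        print(t, round(peaked_arrhenius(100.0, 65000.0, 200000.0, 650.0, t), 1))
```

With these placeholder values the peaked form reaches its maximum near 30℃, which is the kind of optimum the abstract reports for the estimated Jmax.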

A Study on the Characteristics of Seoul Olympic Organizing Committee's Official Documents (서울올림픽대회 조직위원회 공문서의 성격에 관한 연구)

  • Cheon, Ho-Jun
    • The Korean Journal of Archival Studies / no.24 / pp.113-171 / 2010
  • The purpose of this study was to examine the characteristics of the Seoul Olympic Organizing Committee's official documents. To this end, the fundamentals of records production were examined by analyzing the structure and management of the Seoul Olympic Organizing Committee and the structure of its official document production. The overall characteristics of the committee's official documents were then presented through a combined analysis of this production context and of the relationship between records management and the surviving archives. The results of this study are as follows. First, the Organizing Committee had a bicameral organizational structure and a matrix organizational format consisting of functional departments and project departments. Regarding the decision-making bodies and the development of decision making in the committee, most of the bodies existed in name only, and many problems arose in the decision-making procedure because the president of the committee exercised all of the authority. Second, it was found that the surviving official documents of the committee are partial and severely fragmented because of an unsystematic archival management department and regulations. Moreover, by investigating the production procedure and management of the official documents, the procedures for their production, distribution, preservation, and disposal were specifically verified. Third, it was verified that official documents were disposed of arbitrarily because of the unsystematic archival management department and insufficient regulations. As for the actual state of management, filing and description, which are essential for making the official documents usable, had not yet been carried out. Based on these facts, the characteristics of the Seoul Olympic Organizing Committee's official documents can be summarized as follows. The committee's official archives have multiple origins and a severe degree of fragmentation that damages the provenance and integrity of the archives, and the formats of the surviving archives are unbalanced. In addition, because the records have been neither appraised nor arranged, they are difficult to use as archives and little related research exists; it is therefore hard to grasp their present and future value, and their potential uses are limited.

A Study on Transfer Process Model for long-term preservation of Electronic Records (전자기록의 장기보존을 위한 이관절차모형에 관한 연구)

  • Cheon, Kwon-ju
    • The Korean Journal of Archival Studies / no.16 / pp.39-96 / 2007
  • Traditionally, the concept of transfer has been that physical records such as paper documents, videos, and photos are delivered to archives or records centers on the basis of transfer guidelines. However, with the automation of the records management environment and the spread of new applications for creating and managing records, records can now be created and managed in cyberspace. For these reasons, the existing transfer system, in which filed records are moved to archives or records centers in paper boxes, needs to be changed. Given the need for a new transfer paradigm, it is desirable and proper that the revised Records Act includes provisions on electronic records management and transfer. Nevertheless, these electronic transfer provisions are too conceptual to be applied directly to records management practice, so detailed methods and processes have to be developed. In this context, this paper suggests an electronic records transfer process model based on international standards and foreign cases. Transferring records is one of the records management activities that makes valuable records usable in the future, so both the producer and the archive have to transfer the records themselves, together with their context information, to a long-term preservation repository according to the transfer guidelines. In the long run, transfer means that records are moved to the archive through a formal transfer process while proper record protection steps are taken. To accomplish these purposes, I analyzed the OAIS Reference Model and the Producer-Archive Interface Methodology Abstract Standard (CCSDS Blue Book) published by the CCSDS (Consultative Committee for Space Data Systems). However, as the words 'Reference Model' and 'Standard' suggest, these documents are not suitable for direct application to business practice. To solve this problem, I also analyzed the transfer cases of foreign countries. Through the analysis of theory and cases, I suggest an Electronic Records Transfer Process Model consisting of five sub-processes, 'Ingest prepare → Ingest → Validation → Preservation → Archival storage', each of which has its own transfer elements. In particular, to confirm the feasibility of the new process model, I classified Korean transfers into two types, one from a public records center to a public archive and the other from a civil records center to a public or civil archive, and applied the new transfer model to both types of cases.
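The five sub-processes named in the abstract can be read as a simple, ordered pipeline. The sketch below only illustrates that sequencing under assumed, simplified responsibilities for each step; the function names and the contents of the submission package are hypothetical and do not reproduce the transfer elements defined in the paper's model.

```python
# Illustrative only: a submission package (a dict) passed through the five
# sub-processes named in the abstract. Each step's body is a placeholder.

def ingest_prepare(package):
    # Agree on formats, schedules, and required context information.
    package.setdefault("context", {})
    return package

def ingest(package):
    # Receive the package from the producer (records center).
    package["received"] = True
    return package

def validate(package):
    # Check fixity, completeness, and metadata before acceptance.
    if not package.get("checksum_ok", True):
        raise ValueError("fixity check failed; package must be re-transferred")
    return package

def preserve(package):
    # Normalize formats and attach preservation metadata.
    package["preservation_format"] = package.get("format", "unknown")
    return package

def archival_storage(package):
    # Place the package in the long-term preservation repository.
    package["stored"] = True
    return package

def transfer(package):
    """Run the package through Ingest prepare -> Ingest -> Validation ->
    Preservation -> Archival storage, in order."""
    for step in (ingest_prepare, ingest, validate, preserve, archival_storage):
        package = step(package)
    return package

print(transfer({"format": "PDF/A", "checksum_ok": True}))
```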

Data issue and Improvement Direction for Marine Spatial Planning (해양공간계획 지원을 위한 정보 현안 및 개선 방향 연구)

  • CHANG, Min-Chol;PARK, Byung-Moon;CHOI, Yun-Soo;CHOI, Hee-Jung;KIM, Tae-Hoon;LEE, Bang-Hee
    • Journal of the Korean Association of Geographic Information Studies / v.21 no.4 / pp.175-190 / 2018
  • Recently, the policies of advanced maritime countries have shifted from preemptive use of the ocean toward development based on prior planning. In this study, we identify pending issues and suggest improvements derived from building a GIS-based database of marine spatial information for the Korean Marine Spatial Planning (KMSP). More than 250 spatial information items covering the seas of Korea were processed in the order of data collection, GIS conversion, data analysis and processing, data grouping, and spatial mapping. Several problems occurred during this process, including coordinate-system errors, digitizing required to make up for missing spatial information, and overlaps within the original marine spatial information. In addition, solutions are needed for data processing methods that exclude personal information, which is necessary when producing spatial data to analyze the status of marine use, and for minimizing the differences between the GIS-based spatial information and the underlying real-world information. Therefore, the system for collecting and securing the missing marine spatial information should be strengthened for marine spatial planning, and it is necessary to link and expand the marine fisheries survey system. Marine spatial planning also requires evaluation indexes of marine space and detailed marine spatial maps. Furthermore, marine spatial planning needs a standard guideline and a quality management system: the standard guideline should cover the phases of production, processing, analysis, and utilization, and the quality management system should improve the quality of the marine spatial information. Finally, we suggest that further in-depth study is needed on opening and extending marine spatial information and on deriving application models.
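One recurring issue named in the abstract, coordinate-system errors when overlaying layers from different sources, is typically handled by reprojecting every layer to a common CRS before analysis. The sketch below assumes the pyproj library and an example Korean projected CRS (EPSG:5186); the coordinates and the CRS choice are illustrative, not taken from the study.

```python
from pyproj import Transformer

# Reproject a point from a Korean projected CRS (EPSG:5186, Korea 2000 / Central Belt 2010)
# to WGS84 longitude/latitude so that layers from different sources overlay correctly.
transformer = Transformer.from_crs("EPSG:5186", "EPSG:4326", always_xy=True)

x, y = 200000.0, 500000.0          # illustrative easting/northing in meters
lon, lat = transformer.transform(x, y)
print(f"lon={lon:.5f}, lat={lat:.5f}")
```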

Palliative Care Practitioners' Perception toward Pediatric Palliative Care in the Republic of Korea (소아완화의료에 대한 호스피스 완화의료 전문기관 종사자의 인식)

  • Moon, Yi Ji;Shin, Hee Young;Kim, Min Sun;Song, In Gyu;Kim, Cho Hee;Yu, Juyoun;Park, Hye Yoon
    • Journal of Hospice and Palliative Care / v.22 no.1 / pp.39-47 / 2019
  • Purpose: This study was performed to investigate the current status of pediatric palliative care provision and how it is perceived by palliative care experts. Methods: A descriptive study was conducted with 61 hospice institutions. From September through October 2017, a questionnaire was completed by experts from the participating institutions. Data were analyzed using SPSS 21.0. Results: Among the 61 institutions, palliative care is currently provided for pediatric cancer patients by 11 institutions (18.0%), all of which are concentrated in Seoul, Incheon, Gyeonggi, and Gyeongsang provinces; 85.2% of all institutions do not plan to provide specialized pediatric palliative care in the future. According to the experts, the main barrier to providing pediatric palliative care was the insufficient number of trained specialists, regardless of the delivery type. Experts said that it was appropriate to intervene when children were diagnosed with cancer that was less likely to be cured (33.7%) and to move them to palliative care institutions when their conditions worsened (38.2%), and that it was necessary to establish a specialized pediatric palliative care system independent of the existing institutions for adult patients (73.8%). Conclusion: It is necessary to develop an education program to establish nationwide pediatric palliative care centers. Pediatric palliative care intervention should be provided upon diagnosis rather than at the point of death, and patients should be transferred to palliative care institutions after intervention by the existing pediatric palliative care team at their hospital has started.

Development Process for User Needs-based Chatbot: Focusing on Design Thinking Methodology (사용자 니즈 기반의 챗봇 개발 프로세스: 디자인 사고방법론을 중심으로)

  • Kim, Museong;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.221-238 / 2019
  • Recently, companies and public institutions have been actively introducing chatbot services in the field of customer counseling and response. The introduction of chatbot services not only brings labor cost savings to companies and organizations but also enables rapid communication with customers. Advances in data analytics and artificial intelligence are driving the growth of these chatbot services: current chatbots can understand users' questions and offer the most appropriate answers through machine learning and deep learning, and the advancement of core chatbot technologies such as NLP, NLU, and NLG has made it possible to understand words, paragraphs, meanings, and emotions. For this reason, the value of chatbots continues to rise. However, technology-oriented chatbots can be inconsistent with what users inherently want, so chatbots need to be addressed in the area of user experience, not just in the area of technology. The Fourth Industrial Revolution highlights the importance of user experience as well as the advancement of artificial intelligence, big data, cloud, and IoT technologies. The development of IT technology and the growing importance of user experience have provided people with a variety of environments and changed lifestyles, which means that experiences in interactions with people, services (products), and the environment have become very important. It is therefore time to develop user needs-based services (products) that can provide new experiences and values to people. This study proposes a chatbot development process based on user needs by applying design thinking, a representative methodology in the field of user experience, to chatbot development. The proposed process consists of four steps. The first step, 'Setting up the knowledge domain', establishes the chatbot's area of expertise. The second step, 'Knowledge accumulation and insight identification', accumulates the information corresponding to the configured domain and derives insights. The third step is 'Opportunity development and prototyping', where full-scale development begins. Finally, the 'User feedback' step collects feedback from users on the developed prototype. This produces a user needs-based service (product) that meets the objectives of the process. Beginning with fact gathering through user observation, the process abstracts the observations to derive insights and explore opportunities; it then concretizes them by structuring the desired information and providing functions that fit the users' mental model, so that a chatbot meeting users' needs can be developed. In this study, we present an actual construction example for the domestic cosmetics market to confirm the effectiveness of the proposed process. The domestic cosmetics market was chosen as the case because user-experience characteristics are pronounced there, so responses from users can be understood quickly. This study has a theoretical implication in that it proposes a new chatbot development process by incorporating the design thinking methodology, and it differs from existing chatbot development research in that it focuses on user experience rather than technology. It also has practical implications in that it proposes realistic methods that companies or institutions can apply immediately. In particular, the process proposed in this study can be accessed and utilized by anyone, since user needs-based chatbots can be developed even by non-experts. This study also notes that further research is needed because only one domain was examined; in addition to the cosmetics market, research should be conducted in other fields where user experience matters, such as the smartphone and automotive markets. Through such work, the proposal can become a general process for developing chatbots centered on user experience rather than on technology.

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving, and the field has achieved more technological advances than ever due to recent interest and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, for example the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of an article's unifying aspects. This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we explain a knowledge extraction model that follows the DBpedia ontology schema and is trained on Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into RDF triples. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify an input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we carried out comparative experiments between CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
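To illustrate the final step of the model described above, the sketch below collapses a BIO-tagged sentence into a (subject, predicate, object) triple. The tag scheme, the example sentence, and the helper function are hypothetical; they stand in for the output of the paper's trained CRF or Bi-LSTM-CRF taggers rather than reproducing them.

```python
def bio_to_triple(tokens, tags, predicate):
    """Collapse BIO-tagged tokens into one (subject, predicate, object) triple.

    Assumed tag scheme: B-SUBJ/I-SUBJ marks the entity, B-OBJ/I-OBJ marks the
    attribute value, and everything else is tagged O.
    """
    def collect(prefix):
        spans, current = [], []
        for token, tag in zip(tokens, tags):
            if tag == f"B-{prefix}":
                if current:
                    spans.append(" ".join(current))
                current = [token]
            elif tag == f"I-{prefix}" and current:
                current.append(token)
            else:
                if current:
                    spans.append(" ".join(current))
                    current = []
        if current:
            spans.append(" ".join(current))
        return spans[0] if spans else None

    subject, value = collect("SUBJ"), collect("OBJ")
    if subject and value:
        return (subject, predicate, value)
    return None

# Hypothetical example sentence and gold tags (not from the paper's data set)
tokens = ["The", "capital", "of", "South", "Korea", "is", "Seoul", "."]
tags   = ["O", "O", "O", "B-SUBJ", "I-SUBJ", "O", "B-OBJ", "O"]
print(bio_to_triple(tokens, tags, predicate="dbo:capital"))
# -> ('South Korea', 'dbo:capital', 'Seoul')
```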

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists have steadily increased. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and in the same period empirical studies on recidivism factors were started in Korea as well. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of recidivism prediction, it is important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not reoffend as one who will is lower than the cost of misclassifying a person who will reoffend as one who will not, because the former only increases additional monitoring costs, while the latter increases social and economic costs. Therefore, in this paper, we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, which is recognized as a high-performance ensemble method in the field of data mining, was applied, and its results were compared with various prediction models such as LOGIT (logistic regression), DT (decision tree), ANN (artificial neural network), and SVM (support vector machine). In the next step, the decision threshold is optimized to minimize the total misclassification cost, which is the weighted average of FNE (false negative error) and FPE (false positive error). To verify the usefulness of the model, it was applied to a real recidivism prediction dataset. As a result, it was confirmed that the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
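A minimal sketch of the cost-sensitive threshold step described in the abstract is shown below, assuming the scikit-learn style wrapper of the xgboost library. The cost weights, synthetic data, and model settings are placeholders, not those used in the study.

```python
import numpy as np
from xgboost import XGBClassifier

def cost_sensitive_threshold(model, X_val, y_val, cost_fn=5.0, cost_fp=1.0):
    """Sweep the decision threshold and keep the one that minimizes the
    weighted misclassification cost (false negatives cost more here)."""
    prob = model.predict_proba(X_val)[:, 1]
    best_t, best_cost = 0.5, float("inf")
    for t in np.linspace(0.05, 0.95, 91):
        pred = (prob >= t).astype(int)
        fn = np.sum((pred == 0) & (y_val == 1))
        fp = np.sum((pred == 1) & (y_val == 0))
        cost = cost_fn * fn + cost_fp * fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Placeholder data: replace with a real recidivism data set.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X[:400], y[:400])
print("chosen threshold:", cost_sensitive_threshold(model, X[400:], y[400:]))
```

Because false negatives are weighted more heavily than false positives, the chosen threshold typically falls below 0.5, which is exactly the asymmetry the abstract argues for.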

A Case Study: Improvement of Wind Risk Prediction by Reclassifying the Detection Results (풍해 예측 결과 재분류를 통한 위험 감지확률의 개선 연구)

  • Kim, Soo-ock;Hwang, Kyu-Hong
    • Korean Journal of Agricultural and Forest Meteorology / v.23 no.3 / pp.149-155 / 2021
  • Early warning systems for weather risk management in the agricultural sector have been developed to predict potential wind damage to crops. These systems use the daily maximum wind speed together with the critical wind speed that causes fruit drop to provide weather risk information to farmers. In an effort to increase the accuracy of wind risk predictions, an artificial neural network for binary classification was implemented. In the present study, the daily wind speed and other weather data, measured in 2019 at weather stations at sites of interest in Jeollabuk-do and Jeollanam-do as well as Gyeongsangbuk-do and part of Gyeongsangnam-do provinces, were used for training the neural network. These weather stations include 210 synoptic and automated weather stations operated by the Korea Meteorological Administration (KMA). The wind speed data collected at the same locations between January 1 and December 12, 2020 were used to validate the neural network model, and the data collected from December 13, 2020 to February 18, 2021 were used to evaluate the wind risk prediction performance before and after the use of the artificial neural network. The critical wind speed for damage risk was set to 11 m/s, the wind speed reported to cause fruit drop and damage. Furthermore, the maximum wind speeds were expressed using a Weibull probability density function for wind damage warning. It was found that the accuracy of wind damage risk prediction improved from 65.36% to 93.62% after reclassification using the artificial neural network, although the error rate also increased from 13.46% to 37.64%. The machine learning approach used in the present study is likely to benefit cases where a failure of the risk warning system to predict damage is a relatively serious issue.
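As an illustration of the Weibull-based warning mentioned in the abstract, the sketch below computes the probability that a Weibull-distributed daily maximum wind speed exceeds the 11 m/s critical value. The shape and scale parameters are placeholders, not values fitted in the study.

```python
import math

def wind_exceedance_probability(v_crit, shape_k, scale_lambda):
    """P(V > v_crit) for a Weibull-distributed maximum wind speed V,
    i.e. 1 - F(v_crit) with F the Weibull CDF."""
    return math.exp(-((v_crit / scale_lambda) ** shape_k))

# Placeholder parameters for an illustrative windy day.
p_damage = wind_exceedance_probability(v_crit=11.0, shape_k=2.0, scale_lambda=8.0)
print(f"probability of exceeding 11 m/s: {p_damage:.3f}")
```

A warning system of the kind described above could compare such an exceedance probability against a chosen alert level before issuing a wind damage warning to farmers.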