• Title/Summary/Keyword: systems

Requirement Analysis for Agricultural Meteorology Information Service Systems based on the Fourth Industrial Revolution Technologies (4차 산업혁명 기술에 기반한 농업 기상 정보 시스템의 요구도 분석)

  • Kim, Kwang Soo;Yoo, Byoung Hyun;Hyun, Shinwoo;Kang, DaeGyoon
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.3 / pp.175-186 / 2019
  • Efforts have been made to introduce climate smart agriculture (CSA) for adaptation to future climate conditions, which requires the collection and management of site-specific meteorological data. The objectives of this study were to identify requirements for the construction of an agricultural meteorology information service system (AMISS) using the technologies that are driving the fourth industrial revolution, e.g., the internet of things (IoT), artificial intelligence, and cloud computing. IoT sensors with low cost and low operating current would be useful for organizing a wireless sensor network (WSN) for the collection and analysis of weather measurement data, which would help assess the productivity of an agricultural ecosystem. It would be advisable to extend the spatial extent of the WSN to a rural community, which would benefit a greater number of farms. Building big data for agricultural meteorology would be preferable in order to produce and evaluate site-specific data in rural areas. Digital climate maps can be improved using artificial intelligence such as deep neural networks. Furthermore, cloud computing and fog computing would help reduce costs and enhance the user experience of the AMISS. In addition, it would be advantageous to combine environmental data with farm management data, e.g., price data for the produce of interest. It would also be necessary to develop a mobile application whose user interface meets the needs of stakeholders. These fourth industrial revolution technologies would facilitate the development of the AMISS and wide application of the CSA.
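The abstract above sketches an architecture in which low-cost IoT weather nodes in a wireless sensor network feed site-specific data products. The snippet below is a minimal sketch, not taken from the paper, of how such readings might be aggregated into daily site summaries; the WeatherReading fields and the node identifiers are assumptions for illustration only.

```python
# Illustrative sketch: aggregating readings from hypothetical low-cost IoT weather
# nodes into site-specific daily summaries, as an AMISS back end might.
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

@dataclass
class WeatherReading:
    node_id: str         # WSN node identifier (assumed field)
    date: str            # observation date, YYYY-MM-DD
    air_temp_c: float    # air temperature, degrees Celsius
    rel_humidity: float  # relative humidity, percent

def daily_site_summary(readings):
    """Group raw WSN readings by (node, date) and return daily means."""
    grouped = defaultdict(list)
    for r in readings:
        grouped[(r.node_id, r.date)].append(r)
    return {
        key: {
            "mean_temp_c": mean(r.air_temp_c for r in group),
            "mean_rh_pct": mean(r.rel_humidity for r in group),
        }
        for key, group in grouped.items()
    }

if __name__ == "__main__":
    sample = [
        WeatherReading("paddy-01", "2019-07-01", 24.1, 78.0),
        WeatherReading("paddy-01", "2019-07-01", 27.5, 65.0),
    ]
    print(daily_site_summary(sample))
```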

Conservation Scientific Diagnosis and Evaluation of Bird Track Sites from the Haman Formation at Yongsanri in Haman, Korea (함안 용산리 함안층 새발자국 화석산지의 보존과학적 진단 및 평가)

  • Lee, Gyu Hye;Park, Jun Hyoung;Lee, Chan Hee
    • Korean Journal of Heritage: History & Science / v.52 no.3 / pp.74-93 / 2019
  • The bird track site in the Haman Formation at Yongsanri (Natural Monument No. 222) has yielded the bird tracks named Koreanaornis hamanensis and Jindongornipes kimi, the sauropod footprint Brontopodus, and the ichnospecies Ochlichnus formed by nematodes. This site has outstanding academic value because it is where the second-highest number of bird tracks in the world has been reported. However, only 25% of the site remains after its designation as a natural monument in 1969, owing to artificial damage caused by its worldwide fame and by quarrying for the flat stone used in Korean floor heating systems. The Haman Formation, including this fossil site, shows lithofacies of alternating reddish-grey siltstone and black shale. The boundary between the two rocks is gradational, and sedimentary structures such as ripple marks and sun cracks are clearly visible. The site was divided into seven formations according to sedimentary sequences and structures. The results of a nondestructive deterioration evaluation showed that chemical and biological damage rates were very low for all formations. Physical damage rates were also low, at 0.49% for exfoliation, 0.04% for blistering, and 0.28% for break-out; however, the joint crack index was high at 6.20. Additionally, efflorescence was observed on outcrops at the backside and the northwestern side. Physical properties measured by indirect ultrasonic analysis corresponded to a moderately weathered (MW) grade. The southeastern side was much fresher overall, though some areas around the column of the protection facility appeared more weathered. Furthermore, five kinds of discontinuity surface are found at this site, with the bedding plane accounting for the highest share. Stereographic projection analysis indicated a possibility of toppling failure at this site, but stability against plane and wedge failure. We concluded that the overall levels of deterioration and stability were relatively acceptable. However, continuous monitoring, conservation treatment, and management should be performed, as conditions such as the physicochemical weathering of the fossil layer and the efflorescence of the mortar adjoining the protection facility's column appear difficult to control.

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.1-17 / 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. This research is classified into studies using structured data and studies using unstructured data. With structured data such as historical stock prices and financial statements, past studies usually used technical analysis and fundamental analysis. In the big data era, the amount of information has rapidly increased, and artificial intelligence methodologies that can find meaning by quantifying text, an unstructured data type that accounts for a large share of this information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining to stock price forecasting. The methodology adopted in many papers is to forecast a stock price with news about the target company to be forecasted. However, according to previous research, not only news about a target company affects its stock price; news about companies related to that company can also affect the stock price. Finding highly relevant companies is not easy, though, because of market-wide effects and random signals. Thus, existing studies have identified highly relevant companies primarily based on pre-determined international industry classification standards. However, according to recent research, the Global Industry Classification Standard exhibits varying homogeneity within its sectors, so forecasting stock prices by taking all companies in a sector together, without restricting attention to truly relevant companies, can adversely affect predictive performance. To overcome this limitation, we are the first to use random matrix theory together with text mining for stock prediction. When the dimension of the data is large, the classical limit theorems are no longer suitable because statistical efficiency is reduced; therefore, a simple correlation analysis in the financial market does not capture the true correlation. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlation between companies. With the true correlation, we perform cluster analysis to find relevant companies. Based on the clustering, we use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously. Each kernel is assigned to predict stock prices with features from the financial news of the target firm or one of its relevant firms. The results of this study are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) When looking for relevant companies, looking in the wrong way can lower the prediction performance. (3) The proposed approach with random matrix theory shows better performance than previous studies when cluster analysis is performed on the true correlation obtained by removing market-wide effects and random signals. The contributions of this study are as follows. First, this study shows that random matrix theory, which is used mainly in econophysics, can be combined with artificial intelligence to produce a good methodology. This suggests that it is important not only to develop AI algorithms but also to adopt theory from physics, and it extends existing research that integrated artificial intelligence with complex-system theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue; it is not only important to study artificial intelligence algorithms, but also how to construct the input values on a theoretical basis. Third, we confirmed that firms classified together under the Global Industry Classification Standard (GICS) may have low relevance to one another, and suggested that relevance should be defined theoretically rather than taken directly from the GICS.
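The core preprocessing step described above, removing market-wide and noise components from the correlation matrix before clustering, can be illustrated with a short sketch. This is a minimal reconstruction under standard Marchenko-Pastur assumptions, not the authors' code; the clustering choice (average-linkage hierarchical clustering) and all variable names are assumptions for illustration.

```python
# Sketch of random-matrix-theory filtering of a stock correlation matrix, followed
# by clustering to find groups of related firms.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def rmt_filtered_correlation(returns):
    """returns: T x N array of standardized stock returns (T observations, N firms)."""
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    lam_plus = (1.0 + np.sqrt(N / T)) ** 2           # Marchenko-Pastur upper edge
    w, v = np.linalg.eigh(corr)                      # eigenvalues in ascending order
    keep = w > lam_plus                              # modes above the noise band
    keep[np.argmax(w)] = False                       # drop the market-wide mode
    filtered = (v[:, keep] * w[keep]) @ v[:, keep].T
    np.fill_diagonal(filtered, 1.0)
    return filtered

def relevant_groups(filtered_corr, n_groups=5):
    """Cluster firms on a correlation-derived distance to find related companies."""
    dist = np.sqrt(np.clip(2.0 * (1.0 - filtered_corr), 0.0, None))
    condensed = dist[np.triu_indices_from(dist, k=1)]
    return fcluster(linkage(condensed, method="average"), n_groups, criterion="maxclust")
```

Firms sharing a cluster label with the target firm would then supply the additional news features for the multiple kernel learning step.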

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.201-220 / 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that using news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the need to study the integration of document summarization technology in the domestic (Korean) news data environment has become evident. In order to examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization. Second, we created a summarized-news-based detection model. Finally, we compared our model with a full-text-based detection model. The study found that BPN (back propagation neural network) and SVM (support vector machine) did not exhibit a large difference in performance; for DT (decision tree), however, the full-text-based model demonstrated somewhat better performance. In the case of LR (logistic regression), our model exhibited superior performance, although the results did not show a statistically significant difference between our model and the full-text-based model. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model even suggests the possibility of a performance improvement. This study features an experimental application of extractive summarization in fake news detection research employing various machine-learning algorithms. The study's limitations are, essentially, the relatively small amount of data and the lack of comparison between various summarization technologies. Therefore, an in-depth analysis that applies various analytical techniques to a larger data volume would be helpful in the future.
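As a rough illustration of the summarize-then-classify pipeline the paper evaluates, the sketch below pairs a simple frequency-based extractive summarizer with a TF-IDF logistic regression detector. The scoring rule, feature extractor, and function names are assumptions for illustration; the paper's actual summarizer and its Korean-text preprocessing are not reproduced here.

```python
# Sketch of an extractive-summary-based fake news detector.
import re
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def extractive_summary(article, n_sentences=3):
    """Keep the n highest-scoring sentences, scored by word-frequency sums."""
    sentences = re.split(r"(?<=[.!?])\s+", article.strip())
    freq = Counter(w.lower() for w in re.findall(r"\w+", article))
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w.lower()] for w in re.findall(r"\w+", s)),
        reverse=True,
    )
    top = set(scored[:n_sentences])
    return " ".join(s for s in sentences if s in top)   # preserve original order

def train_detector(articles, labels):
    """labels: 1 for fake, 0 for real; the detector is trained on summarized text."""
    summaries = [extractive_summary(a) for a in articles]
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(summaries, labels)
    return model
```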

A Case Study on the UK Park and Green Space Policies for Inclusive Urban Regeneration (영국의 포용적 도시재생을 위한 공원녹지 정책 사례 연구)

  • Kim, Jung-Hwa;Kim, Yong-Gook
    • Journal of the Korean Institute of Landscape Architecture / v.47 no.5 / pp.78-90 / 2019
  • The purpose of this study is to explore directions for developing park and green space policies for inclusive urban planning and regeneration. By reviewing the status, budget, and laws pertaining to urban parks in Korea, and by assessing the inclusivity of urban parks, this study revealed the following problems and limitations. First, the urban park system, which relies on indicators such as park area per capita and green space ratio, is focused only on quantitative expansion. Second, the distribution of urban parks is unequal; the higher the number of vulnerable residents in an area, the lower the quality of its urban parks and green spaces. This study then focused on the UK central government along with five local governments, London, Edinburgh, Cardiff, Belfast, and Liverpool, analyzing the contexts and contents of UK park and green space policies that can reduce socioeconomic inequalities while increasing inclusiveness. The study discovered the following. The government's awareness of the need to tackle socioeconomic inequalities in order to build an inclusive society, the shift in urban regeneration policy from physical redevelopment to neighborhood renewal, and survey research on the correlations among parks and green spaces, inequality, health, and well-being provided the background for policy establishment. As a result, the creation of an inclusive society has been reflected in the stated goals of the UK's national plan and in strategies for park and green space supply and qualitative improvement. Deprived areas and vulnerable groups have been included in many local governments' park and green space policies. Tools for analyzing deficiencies in parks and methods for the qualitative evaluation of parks have also been developed. In addition, for the sustainability of each project, various funding programs have been set up, such as fund-raising and fund-matching schemes, and different ways of supporting partnerships have been arranged, such as the establishment of collaborative bodies for government organizations that allow the participation of private organizations. The study results suggest five policy schemes: conducting research on inequality and inclusiveness for parks and green spaces, developing strategies for improving the quality of park services, identifying tools for analyzing policy areas, developing park project models for urban regeneration, and building partnerships and support systems.

A Study on the Consciousness Survey of Improvement of Emergency Rescue Training -Based on the Fire Fighting Organizations in Gangwon Province- (긴급구조훈련 개선에 관한 의식조사 연구 -강원도 소방조직을 중심으로-)

  • Choi, Yunjung;Koo, Wonhoi;Baek, Minho
    • Journal of the Society of Disaster Information / v.15 no.3 / pp.440-449 / 2019
  • Purpose: Fire-fighting organizations are the first agencies to take action at a disaster scene, and emergency rescue training is carried out for prompt and systematic response. However, there is a need for change due to limitations in emergency rescue training, such as perfunctory training or training that does not consider regional or environmental characteristics. Method: This study conducts a theoretical review of emergency rescue training and presents measures to improve it through an attitude survey targeting fire-fighting organizations in the Gangwon area. Result: The facilities causing the most difficulty during emergency rescue activities were hazardous material storage and processing facilities. Regarding the level of emergency rescue and response capability, most respondents answered that emergency rescue was insufficient. Respondents generally answered that emergency rescue training was helpful, but some responses indicated that the training was not helpful because it was rigidly scenario-bound, staged for appearances, repeated in similar form every year, unrealistic, and marked by a lack of interest and perfunctory participation from the competent authorities. Regarding the appropriateness of emergency rescue training and evaluation, most respondents answered that they were satisfied; however, they were dissatisfied with evaluation methods irrelevant to the type of training, evaluation methods requiring an unnecessarily large training scale, and evaluation methods that lead to perfunctory participation by the competent authorities. Lastly, regarding demands for improving emergency rescue training, most respondents answered that training reflecting various damage situations is necessary. Conclusion: The improvement measures for emergency rescue training are as follows. First, it is necessary to prepare diverse training contents in accordance with regional characteristics by reviewing the major disasters that have occurred in the region. Second, it is necessary to revise the emergency rescue training guidelines and manuals so that each fire station can plan appropriate training, to provide education and training for working-level staff, and to design training so that the types, tactics, and strategies of emergency rescue can be applied in practice. Third, it is necessary to prepare a scheme that encourages participation, with incentives or penalties, from the planning stage of training, in order to increase the participation of supporting and competent authorities when an actual disaster occurs. Fourth, it is necessary to establish support arrangements and cooperative systems by authority, through training by fire station or by zone, in preparation for disaster situations that may occur simultaneously. Fifth, emergency rescue training and evaluation should emphasize the training process rather than the result, paying attention to identifying points for improvement in each disaster situation. In particular, the type and form of training should be considered rather than evaluating the execution status of detailed procedures, and evaluation measures should reflect the completeness (proficiency) of training and the fulfillment of roles rather than the scale of training. Sixth, the type and method of training should be improved in accordance with the characteristics of each fire station by identifying the demands of working-level staff for efficient emergency rescue training.

Effect of abutment superimposition process of dental model scanner on final virtual model (치과용 모형 스캐너의 지대치 중첩 과정이 최종 가상 모형에 미치는 영향)

  • Yu, Beom-Young;Son, Keunbada;Lee, Kyu-Bok
    • The Journal of Korean Academy of Prosthodontics / v.57 no.3 / pp.203-210 / 2019
  • Purpose: The purpose of this study was to verify the effect of the abutment superimposition process on the final virtual model when scanning single-unit and 3-unit bridge models with a dental model scanner. Materials and methods: Gypsum models for a single crown and a 3-unit bridge were manufactured for evaluation, and working casts with removable dies were made using the Pindex system. A dental model scanner (3Shape E1 scanner) was used to obtain a CAD reference model (CRM) and a CAD test model (CTM). The CRM was scanned with the abutments sectioned in the working cast but not removed. The CTM was then scanned with the sectioned abutments separated and superimposed onto the CRM (n=20). Finally, three-dimensional analysis software (Geomagic Control X) was used to analyze the root mean square (RMS) deviation, and the Mann-Whitney U test was used for statistical analysis (α = .05). Results: The mean RMS was 10.93 μm for the single full-crown preparation abutment and 6.9 μm for the 3-unit bridge preparation abutments, and the difference between the two groups was statistically significant (P<.001). In addition, the mean positive and negative errors were 9.83 μm and -6.79 μm for the single abutment and 6.22 μm and -3.3 μm for the 3-unit bridge abutments, respectively; both were significantly lower for the 3-unit bridge abutments (P<.001). Conclusion: Although the number of abutments increased during the scanning of the working cast with removable dies, the error due to the superimposition of abutments did not increase. The single abutment showed a significantly higher error, but it remained within the range of clinically acceptable scan accuracy.
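The RMS figure reported above is a standard deviation-magnitude metric. The sketch below shows, under the simplifying assumption that the reference and test scans are already aligned and point-paired (the commercial software performs the alignment itself), how such an RMS and the group comparison could be computed. It is illustrative only, not the authors' analysis script.

```python
# Sketch: RMS deviation between paired points of a reference (CRM) and test (CTM)
# scan, plus a Mann-Whitney U comparison of per-specimen RMS values.
import numpy as np
from scipy.stats import mannwhitneyu

def rms_deviation(crm_points, ctm_points):
    """Distances between paired CRM/CTM points and their RMS (same unit as inputs)."""
    diff = np.asarray(ctm_points, float) - np.asarray(crm_points, float)
    d = np.linalg.norm(diff, axis=1)
    return d, float(np.sqrt(np.mean(d ** 2)))

def compare_groups(rms_single, rms_bridge, alpha=0.05):
    """Nonparametric comparison of RMS values between the two abutment groups."""
    stat, p = mannwhitneyu(rms_single, rms_bridge, alternative="two-sided")
    return stat, p, p < alpha
```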

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has been accelerating with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved greater technological advances than ever before, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, for example the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article. This knowledge is created through mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of knowledge, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the appropriate sentences from which to extract triples, and selecting values and transforming them into the RDF triple structure. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we carried out comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction step. Through the proposed process, structured knowledge can be obtained by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort experts need to construct instances according to the ontology schema.
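The final step described above, converting BIO-tagged sentences into triples, can be illustrated with a short sketch. The tag names, the example properties (birthPlace, birthYear), and the subject identifier are hypothetical and chosen only to show the mechanics; the paper's CRF/Bi-LSTM-CRF taggers are not reproduced here.

```python
# Sketch: collect B-X/I-X spans from a BIO-tagged sentence and emit triples.
def spans_from_bio(tokens, tags):
    """Collect contiguous B-X/I-X token spans as (property, value) pairs."""
    spans, current_prop, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_prop:
                spans.append((current_prop, " ".join(current_tokens)))
            current_prop, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_prop == tag[2:]:
            current_tokens.append(token)
        else:
            if current_prop:
                spans.append((current_prop, " ".join(current_tokens)))
            current_prop, current_tokens = None, []
    if current_prop:
        spans.append((current_prop, " ".join(current_tokens)))
    return spans

def to_triples(subject, tokens, tags):
    """Turn tagged property values into (subject, predicate, object) triples."""
    return [(subject, prop, value) for prop, value in spans_from_bio(tokens, tags)]

# Hypothetical example for an article classified under a "Person" class.
tokens = ["Kim", "was", "born", "in", "Seoul", "in", "1970", "."]
tags   = ["O",   "O",   "O",    "O",  "B-birthPlace", "O", "B-birthYear", "O"]
print(to_triples("dbr:Kim", tokens, tags))
```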

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists have steadily increased. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and empirical studies on recidivism factors were also started in Korea in the same period. Although most recidivism prediction studies have so far focused on the factors of recidivism or on the accuracy of recidivism prediction, it is important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying a person who will not reoffend as a recidivist is lower than the cost of misclassifying a person who will reoffend as a non-recidivist, because the former increases only additional monitoring costs while the latter increases social and economic costs. Therefore, in this paper, we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, which is recognized as a high-performance ensemble method in the field of data mining, was applied, and its results were compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold is optimized to minimize the total misclassification cost, which is the weighted average of the FNE (false negative error) and FPE (false positive error). To verify the usefulness of the model, it was applied to a real recidivism prediction dataset. As a result, we confirmed that the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
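The threshold-optimization step described above can be sketched as a simple scan over candidate cutoffs on the predicted probabilities. The cost weights below are placeholders, not the paper's values, and the dataset is hypothetical; the helper works with any classifier exposing predict_proba, e.g. xgboost.XGBClassifier.

```python
# Sketch: choose the classification threshold minimizing an asymmetric total cost.
import numpy as np

def optimal_threshold(y_true, y_prob, cost_fn=5.0, cost_fp=1.0):
    """cost_fn: cost of missing a future recidivist (false negative);
       cost_fp: cost of flagging a non-recidivist (false positive)."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    best_t, best_cost = 0.5, np.inf
    for t in np.linspace(0.01, 0.99, 99):
        pred = (y_prob >= t).astype(int)
        fn = np.sum((y_true == 1) & (pred == 0))
        fp = np.sum((y_true == 0) & (pred == 1))
        cost = cost_fn * fn + cost_fp * fp
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Usage with an already-fitted classifier (e.g. xgboost.XGBClassifier):
#   proba = model.predict_proba(X_valid)[:, 1]
#   threshold, _ = optimal_threshold(y_valid, proba)
```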

Comparison of the wall clock time for extracting remote sensing data in Hierarchical Data Format using Geospatial Data Abstraction Library by operating system and compiler (운영 체제와 컴파일러에 따른 Geospatial Data Abstraction Library의 Hierarchical Data Format 형식 원격 탐사 자료 추출 속도 비교)

  • Yoo, Byoung Hyun;Kim, Kwang Soo;Lee, Jihye
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.1 / pp.65-73 / 2019
  • MODIS (Moderate Resolution Imaging Spectroradiometer) data in Hierarchical Data Format (HDF) have been processed using the Geospatial Data Abstraction Library (GDAL). Because of the relatively large data size, it would be preferable to build and install the data analysis tool for the greatest computing performance, which can differ by operating system and by the form of distribution, e.g., source code or binary package. The objective of this study was to examine the performance of GDAL for processing HDF files, which would guide the construction of a computer system for remote sensing data analysis. Differences in execution time were compared between the environments under which GDAL was installed. The wall clock time was measured after extracting the data for each variable in a MODIS data file using a tool built by linking against GDAL under combinations of operating system (Ubuntu and openSUSE), compiler (GNU and Intel), and distribution form. The MOD07 product, which contains atmosphere data, was processed for eight 2-D variables and two 3-D variables. GDAL compiled with the Intel compiler under Ubuntu had the shortest computation time. For openSUSE, the GDAL builds compiled with the GNU and Intel compilers showed greater performance for the 2-D and 3-D variables, respectively. The wall clock time was considerably longer for GDAL compiled with the "--with-hdf4=no" configuration option or installed via the RPM package manager under openSUSE. These results indicate that the choice of environment under which GDAL is installed, e.g., operating system or compiler, has a considerable impact on the performance of a system for processing remote sensing data. Applying parallel computing approaches would further improve the performance of data processing for HDF files, which merits further evaluation of these computational methods.
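The timing measurement described above can be approximated with the GDAL Python bindings, as in the sketch below: open the HDF container, open each subdataset (variable), read it fully, and record the wall clock time. The file name is a placeholder; gdal.Open, GetSubDatasets, and ReadAsArray are standard GDAL API calls, but the paper's benchmark tool itself (built by linking against the C/C++ library) is not reproduced here.

```python
# Sketch: wall-clock timing of full reads of each subdataset in a MODIS HDF file.
import time
from osgeo import gdal

def time_subdataset_reads(hdf_path):
    """Return (subdataset name, seconds to read full array) for each variable."""
    ds = gdal.Open(hdf_path)
    timings = []
    for sub_name, _desc in ds.GetSubDatasets():
        start = time.perf_counter()
        sub = gdal.Open(sub_name)
        _array = sub.ReadAsArray()          # full extraction, as in the benchmark
        timings.append((sub_name, time.perf_counter() - start))
    return timings

if __name__ == "__main__":
    # Placeholder file name for illustration only.
    for name, seconds in time_subdataset_reads("MOD07_L2.A2019001.0000.061.hdf"):
        print(f"{seconds:8.3f} s  {name}")
```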