• Title/Summary/Keyword: engineering documents

Search Result 1,094

A Study on Automated Fake News Detection Using Verification Articles (검증 자료를 활용한 가짜뉴스 탐지 자동화 연구)

  • Han, Yoon-Jin;Kim, Geun-Hyung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.12
    • /
    • pp.569-578
    • /
    • 2021
  • Thanks to the development of the web, we can easily access online news through various media. Precisely because online news is so accessible, we frequently encounter fake news that masquerades as genuine reporting. As fake news has become a global problem, fact-checking services are now provided domestically as well. However, these rely on expert-driven manual detection, and research on technologies that automate fake news detection is being actively conducted. Existing research performs detection based on the contextual characteristics of an article or a comparison between its title and body, but such approaches have a limit: detection becomes difficult when articles are manipulated with high precision. Therefore, this study proposes using verification articles to judge whether a news item is genuine, so that the decision is not affected by manipulation of the article itself. To further improve detection precision, the study adds a step that condenses both the subject article and the verification article through a summarization model. To validate the proposed algorithm, this study evaluated the document summarization method, the retrieval method for verification articles, and the fake news detection precision of the final algorithm. The proposed algorithm can help establish the truth of an article before it spreads online through various media.
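The verification step described above can be illustrated with a minimal sketch: compare the summarized subject article against the summarized verification article and flag low agreement. The tokenizer, the cosine-similarity measure, and the 0.5 threshold are illustrative assumptions, not the paper's actual summarization model or decision rule.

```python
import math
import re
from collections import Counter

def bag_of_words(text):
    """Tokenize on word characters and count term frequencies."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def verdict(subject_summary, verification_summary, threshold=0.5):
    """Label the subject article by its agreement with the verification article."""
    score = cosine_similarity(bag_of_words(subject_summary),
                              bag_of_words(verification_summary))
    return ("genuine" if score >= threshold else "suspect"), score

label, score = verdict(
    "the company reported record profits this quarter",
    "official filings confirm the company reported record profits this quarter",
)
print(label, round(score, 2))  # prints: genuine 0.84
```

In the study the comparison is made on model-generated summaries of both articles; here raw strings stand in for those summaries.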

A Study on the Development of H2 Fuel Cell Education Platform: Meta-Fuelcell (연료전지 교육 플랫폼 Meta-Fuelcell 개발에 관한 연구)

  • Duong, Thuy Trang;Gwak, Kyung-Min;Shin, Hyun-Jun;Rho, Young-J.
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.5
    • /
    • pp.29-35
    • /
    • 2022
  • This paper proposes a fuel cell education framework built on a Metaverse environment, intended to reduce the cost of education and improve the effectiveness of teaching and learning. The Meta-Fuelcell platform is based on Unity 3D for the web and enables not only theoretical education but also hands-on training. The platform was designed and developed to accommodate a variety of unit education contents, such as PPT documents and videos. It therefore integrates PPT and video materials for theoretical education, as well as the software content "STACK-Up" for hands-on training. The theoretical education section provides general knowledge on hydrogen, including renewable energy, the hydrogen economy, and fuel cells. The "STACK-Up" software provides hands-on practice in assembling the parts of a stack, the core component of a fuel cell. The Meta-Fuelcell platform overcomes the limitations of face-to-face education: it gives educators the opportunity to teach remotely without restrictions on place, time, or occupancy, while learners can choose educational themes, their order, and so on. It offers both educators and learners an engaging, active experience in the metaverse space. The platform is being applied experimentally to an education project for developing advanced manpower in the fuel cell industry, and its improvement is in progress.

A Study on the Blockchain based Frequency Allocation Process for Private 5G (블록체인 기반 5G 특화망 주파수 할당 프로세스 연구)

  • Won-Seok Yoo;Won-Cheol Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.1
    • /
    • pp.24-32
    • /
    • 2023
  • The current Private 5G licensing procedure goes through application, examination, use, and usage inspection: application and examination take place before frequency allocation, while use and usage inspection take place after it. Various documents are required to apply for Private 5G, and because of the document screening process and the radio station inspection required to use Private 5G frequencies, the procedure applicants must follow is complicated and takes a considerable amount of time. In this paper, we propose a frequency allocation process for Private 5G that uses a blockchain platform and is faster and simpler than the current procedure. Through the blockchain platform and NFTs (Non-Fungible Tokens), the reliability and integrity of the data required in the frequency allocation process are secured, the security of frequency usage information is maintained, and a trustworthy Private 5G frequency allocation process is established. By applying an RPA system that minimizes human intervention, fairness is also secured in the allocation process. Finally, the proposed Private 5G frequency allocation process was demonstrated through a simulation on the Ethereum blockchain.
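The record-keeping idea behind the proposal can be sketched with a toy hash-linked log in plain Python: each stage of the allocation workflow commits to the previous entry's hash, so tampering with any stored stage is detectable. The stage names, applicant, and band fields are hypothetical placeholders; a real deployment would use Ethereum smart contracts and NFTs as the paper describes, not this stdlib stand-in.

```python
import hashlib
import json

def make_block(prev_hash, payload):
    """Append-only record: each entry commits to the previous one by hash."""
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    return {"prev": prev_hash, "payload": payload, "hash": digest}

# Hypothetical stages of the allocation workflow described above.
chain = []
prev = "0" * 64
for stage in ["application", "examination", "allocation", "usage-inspection"]:
    block = make_block(prev, {"stage": stage, "applicant": "factory-A",
                              "band": "4.7 GHz"})
    chain.append(block)
    prev = block["hash"]

def verify(chain):
    """Recompute every hash along the chain; any edit breaks the link."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"prev": prev, "payload": block["payload"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(verify(chain))  # prints: True
```

Changing any stored field after the fact makes `verify` fail, which is the integrity property the paper obtains from the blockchain platform.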

Yun Chi-Ho's Garden Plan for the Anglo-Korean School in Gaeseong (윤치호의 개성 한영서원 정원 계획)

  • Kim, Jung-Hwa
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.2
    • /
    • pp.81-93
    • /
    • 2023
  • The purpose of this study is to clarify the background of the plans for, and the spatial characteristics of, the garden at the Anglo-Korean School, an educational institution established in Gaeseong in 1906 by Yun Chi-ho and the American Methodist Church. The temporal scope of the study runs from 1906, when the school opened, to the early 1920s, when the basic building structure of the school was completed. The spatial scope is the school complex in Gaeseong and its affiliated facilities. The study covers the planning background and purpose, the spatial layout, and the plants used in the school garden. It reviews Yun Chi-ho's papers and Warren A. Candler's papers at Emory University, along with documents, photographs, and maps produced in the early 20th century. The results show that the school garden was first mentioned at the school's opening and that Yun Chi-ho insisted, with strong will, on establishing it. The garden was located around the engineering department building and was divided into several sections and lots. It was composed of economic plants, such as fruit trees, sourced from the Methodist Church of the South, USA. This study reveals that the garden at the Anglo-Korean School functioned as a training ground for agriculture and horticulture education, differentiating it from the Seowon, the traditional Korean academy of the Joseon Dynasty that symbolically embodied Neo-Confucianism and emphasized views of the surrounding nature.

The Experimental Study on the Transient Brake Time of Vehicles by Road Pavement and Friction Coefficient (노면 포장별 차량의 제동경과시간 및 마찰계수에 관한 실험적 연구)

  • Lim, Chang-Sik;Choi, Yang-Won
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.6D
    • /
    • pp.587-597
    • /
    • 2010
  • When a car accident occurs, the parties involved face civil and criminal issues, so the accident investigator must reconstruct and analyze the accident situation accurately. In addition, the documents obtained through the analysis of accidents and their related factors must be used to improve and complement areas where accidents occur frequently. Vehicle speed, accelerating force, and braking power are currently known as the factors that most affect car accidents, traffic facilities, road design, and so on, and a vehicle's performance and the road surface friction coefficient are the quantities most closely related to this field. In particular, estimating the speed at the moment of an accident relating to the eleven major provisions of the Traffic Accident Exemption Law is very important and demands accuracy; however, research on these matters has not yet been carried out systematically in Korea. Reflecting this situation, this study accurately estimates the transient brake time and road friction coefficient by measuring, with a precision speed detector (Vericom VC2000PC), the interval from the onset of sudden braking until the skid history appears. The analysis of the experimental results calculated the transient brake time and friction coefficient for various special-purpose asphalt pavements and slip-prevention pavements, providing fundamental data for this purpose.
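Under the standard constant-deceleration skid model (a common simplification, not the paper's measured transient-brake-time procedure with the Vericom VC2000PC), friction coefficient and pre-braking speed are linked by v^2 = 2*mu*g*d. A minimal sketch with hypothetical numbers:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def friction_coefficient(speed_mps, skid_distance_m):
    """Back out the effective friction coefficient from a full stop:
    v^2 = 2 * mu * g * d  =>  mu = v^2 / (2 * g * d)."""
    return speed_mps ** 2 / (2 * G * skid_distance_m)

def speed_from_skid(mu, skid_distance_m):
    """Estimate pre-braking speed from skid-mark length and friction:
    v = sqrt(2 * mu * g * d)."""
    return math.sqrt(2 * mu * G * skid_distance_m)

KMH = 1 / 3.6  # km/h -> m/s conversion factor

# Hypothetical case: a full stop from 80 km/h over a 35 m skid.
mu = friction_coefficient(80 * KMH, 35.0)
print(round(mu, 2))                              # prints: 0.72
print(round(speed_from_skid(mu, 35.0) / KMH, 1)) # prints: 80.0
```

The second call inverts the first, which is how accident investigators recover the speed at the moment of braking once a pavement's friction coefficient is known; the study's tabulated coefficients per pavement type feed exactly this kind of calculation.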

Development of SVM-based Construction Project Document Classification Model to Derive Construction Risk (건설 리스크 도출을 위한 SVM 기반의 건설프로젝트 문서 분류 모델 개발)

  • Kang, Donguk;Cho, Mingeon;Cha, Gichun;Park, Seunghee
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.43 no.6
    • /
    • pp.841-849
    • /
    • 2023
  • Construction projects carry risks arising from factors such as construction delays and construction accidents. With these risks in mind, the construction period of a project is usually calculated through subjective judgment that relies on the supervisor's experience. In addition, unreasonably shortening construction to meet schedules delayed by construction delays and disasters causes negative consequences such as poor construction quality, and delayed schedules cause economic losses through the absence of infrastructure. Data-based scientific approaches and statistical analysis are needed to address these project risks. Because data collected in actual construction projects is stored as unstructured text, applying data-based risk management directly requires extensive manpower and cost for pre-processing; basic data produced by a text mining classification model is therefore required. In this study, we developed a document classification model for generating risk management data by collecting construction project documents and applying an SVM (Support Vector Machine) classifier built with text mining. Through quantitative analysis in future research, the results are expected to serve as efficient and objective basic data for construction project process management, enabling risk management.
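The text-mining front end of such a pipeline can be sketched in plain Python: turn tokenized documents into a TF-IDF term-document matrix. The example documents and risk wording are hypothetical, and the SVM itself is omitted to keep the sketch self-contained; in practice these rows would feed a classifier such as scikit-learn's `svm.SVC`.

```python
import math
from collections import Counter

def tfidf_matrix(docs):
    """Build a TF-IDF term-document matrix from tokenized documents.
    Rows are documents, columns follow the sorted vocabulary."""
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for doc in docs for term in set(doc))
    vocab = sorted(df)
    matrix = []
    for doc in docs:
        tf = Counter(doc)
        # TF (normalized count) times IDF (log of inverse doc frequency).
        matrix.append([tf[t] / len(doc) * math.log(n / df[t]) for t in vocab])
    return vocab, matrix

# Hypothetical construction-document snippets, one per risk-related event.
docs = [
    "crane outage delayed the concrete pour".split(),
    "schedule delayed by heavy rain".split(),
    "worker injured in scaffold accident".split(),
]
vocab, X = tfidf_matrix(docs)
print(len(vocab), len(X), len(X[0]))  # prints: 15 3 15
```

Terms shared by every document get an IDF of zero and thus carry no weight, which is why this representation highlights the risk-specific vocabulary the study wants the classifier to pick up.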

Analysis of Causes of Increase in Construction Cost and Minimize Increase of Cost through Performance Evaluation of Public Construction Projects (공공건설사업의 성과평가를 통한 공사비 증가 원인 분석 및 증가 최소화 방안)

  • Moon, Hyunseok;Ryoo, Geunho;Hong, Hyunki
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.4
    • /
    • pp.567-575
    • /
    • 2024
  • The government and local governments utilize regulations, policies, and various project management techniques at each project stage to manage public construction projects effectively and systematically. Nevertheless, increases in construction costs continue to occur. To address this problem, this study collected cases of existing public construction projects, analyzed the historical data, and derived the factors that drive construction cost increases. Based on the results, a plan was proposed to prevent these causes and minimize cost growth. The main reasons for cost increases were civil petitions, differences between design documents and site conditions, and changes due to owner requests and revisions to the business plan. To solve these problems, this study proposed institutional improvements for each cause, informed by interviews with experts and project owners. The results are significant in that they improve the existing public construction process and present key inspection and review items for each major inspection stage, addressing the problems identified through the performance analysis of public construction projects.

NIRS AS AN ESSENTIAL TOOL IN FOOD SAFETY PROGRAMS: FEED INGREDIENTS PREDICTION IN COMMERCIAL COMPOUND FEEDING STUFFS

  • Garrido-Varo, Ana;Perez-Marin, Maria Dolores;Gomez-Cabrera, Augusto;Guerrero Ginel, Jose Emilio;de Paz, Felix;Delgado, Natividad
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1153-1153
    • /
    • 2001
  • Directive 79/373/EEC on the marketing of compound feeding stuffs provided for a flexible declaration arrangement confined to the indication of the feed materials, without stating their quantity, and retained the possibility of declaring categories of feed materials instead of the feed materials themselves. However, the BSE (Bovine Spongiform Encephalopathy) and dioxin crises demonstrated the inadequacy of the current provisions and the need for detailed qualitative and quantitative information. On 10 January 2000 the Commission submitted to the Council a proposal for a Directive on the marketing of compound feeding stuffs, and the Council adopted a Common Position (EC No. 6/2001), published in the Official Journal of the European Communities of 2.2.2001. According to the Common Position (EC No. 6/2001), the feed materials contained in compound feeding stuffs intended for animals other than pets must be declared by their percentage by weight, in descending order of weight and within the following brackets (I: >30%; II: >15 to 30%; III: >5 to 15%; IV: 2 to 5%; V: <2%). For practical reasons, the declarations of feed materials included in compound feeding stuffs may be provided on an ad hoc label or accompanying document. However, documents alone will not be sufficient to restore public confidence in the animal feed industry. The objective of the present work is to obtain calibration equations for the instantaneous and simultaneous prediction of the chemical composition and the percentage of ingredients of unground compound feeding stuffs. A total of 287 samples of unground compound feeds marketed in Spain were scanned in a FOSS NIRSystems 6500 monochromator using a rectangular cup with a quartz window (16 × 3.5 cm).
Calibration equations were obtained for the prediction of moisture ($R^2$ = 0.84, SECV = 0.54), crude protein ($R^2$ = 0.96, SECV = 0.75), fat ($R^2$ = 0.86, SECV = 0.54), crude fiber ($R^2$ = 0.97, SECV = 0.63), and ashes ($R^2$ = 0.86, SECV = 0.83). The same set of spectroscopic data was used to predict the ingredient composition of the compound feeds. The preliminary results show that NIRS has an excellent ability ($R^2 \geq 0.9$; RPD $\geq 3$) to predict the percentage of inclusion of alfalfa, sunflower meal, gluten meal, sugar beet pulp, palm meal, poultry meal, total meat meal (meat and bone meal plus poultry meal), and whey. Equations with good predictive performance ($R^2 \geq 0.7$; $2 \leq RPD \leq 3$) were obtained for the prediction of soya bean meal, corn, molasses, animal fat, and lupin meal. The equations obtained for the prediction of the other constituents (barley, bran, rice, manioc, meat and bone meal, fish meal, calcium carbonate, ammonium chloride, and salt) are accurate enough to fulfill the requirements laid down by the Common Position (EC No. 6/2001). NIRS technology should be considered an essential tool in food safety programs.
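The RPD figures quoted above are simply the ratio of the reference data's standard deviation to the SECV, which is easy to reproduce. In this sketch the protein reference values are hypothetical; only the SECV of 0.75 is taken from the crude-protein equation above.

```python
import statistics

def rpd(reference_values, secv):
    """RPD = SD of the reference values / SECV.
    Rules of thumb used above: RPD >= 3 excellent, 2 <= RPD < 3 good."""
    return statistics.stdev(reference_values) / secv

# Hypothetical crude-protein reference values (%) for a validation set.
protein = [14.2, 18.5, 21.0, 16.8, 24.3, 19.9, 15.5, 22.7]
print(round(rpd(protein, secv=0.75), 2))  # prints: 4.7
```

A wide-ranging reference set with a small SECV yields a high RPD, which is why the same SECV can correspond to different predictive quality depending on the spread of the constituent being calibrated.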

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.123-138
    • /
    • 2017
  • Since the stock market is driven by traders' expectations, studies have attempted to predict stock price movements through analysis of various sources of text data. Research has been conducted not only on the relationship between text data and stock price fluctuations, but also on trading stocks based on news articles and social media responses. Studies that predict stock price movements have applied classification algorithms built on a term-document matrix, in the same way as other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building the term-document matrix: based on frequency, words of too little frequency or importance are removed, and words are also selected by measuring how much each contributes to correctly classifying a document. The conventional approach to constructing a term-document matrix has been to collect all the documents to be analyzed and to select and use the words that influence classification. In this study, we instead analyze the documents for each individual stock and select words that are irrelevant for all categories as neutral words. We then extract the words surrounding each selected neutral word and use them to generate the term-document matrix. The underlying idea is that a stock's movement is only weakly related to the presence of the neutral words themselves, while the words surrounding a neutral word are more likely to affect stock price movements. The resulting term-document matrix is then fed to an algorithm that classifies stock price fluctuations. Concretely, we first removed stop words and selected neutral words for each stock, and then excluded any selected words that also appeared in news articles about other stocks.
Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. Three months of news data served as training data, and the remaining month of articles was applied to the model to predict the next day's stock price movements; models were built with SVM, Boosting, and Random Forest. The stock market was open for a total of 80 days during the four months (2016/02/01-2016/05/31), with the first 60 days used as the training set and the remaining 20 days as the test set. The proposed word-selection algorithm showed better classification performance than word selection based on sparsity. The proposed method differs from the conventional word extraction method in that it uses not only the news articles for the corresponding stock but also news about other stocks to determine which words to extract: it removed not only words that appeared across both rises and falls, but also words that appeared commonly in the news for other stocks. When prediction accuracy was compared, the proposed method was more accurate. The limitations of this study are that stock price prediction was framed as classifying rises and falls, and that the experiment covered only the top ten stocks, which do not represent the entire market. In addition, it is difficult to demonstrate investment performance, because stock price fluctuation and profit rate may differ.
Therefore, further research using more stocks, and yield prediction through trading simulation, is needed.
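The neutral-word idea (use the words *around* a neutral word, rather than the neutral word itself, as features) can be sketched as a context-window extractor. The token stream, the neutral word "announced", and the window size of 2 are illustrative assumptions, not the paper's actual parameters.

```python
def context_terms(tokens, neutral_words, window=2):
    """Collect terms within `window` positions of each neutral word;
    these, rather than the neutral words themselves, become the
    term-document matrix features."""
    selected = set()
    for i, tok in enumerate(tokens):
        if tok in neutral_words:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            selected.update(tokens[lo:hi])
    # The neutral words anchor the window but are excluded as features.
    return selected - neutral_words

tokens = "the company announced record earnings beating analyst forecasts".split()
print(sorted(context_terms(tokens, {"announced"}, window=2)))
# prints: ['company', 'earnings', 'record', 'the']
```

Running this per stock, then dropping any surviving term that also appears in other stocks' news, reproduces the two-stage filtering the study describes before the SVM/Boosting/Random Forest classifiers are trained.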

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.219-240
    • /
    • 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting the subjective content embedded in text documents. Sentiment analysis methods have recently been widely used in many fields: for example, data-driven surveys analyze the subjectivity of text posted by users, and market research quantifies a product's reputation by analyzing users' review posts. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words differs across domains. For example, the sentiment word 'sad' indicates negative meaning in most fields, but not in movie reviews. To perform accurate sentiment analysis, we therefore need to build a sentiment dictionary for the given domain. However, building such a lexicon is time-consuming, and without a general-purpose sentiment lexicon as a starting point, many sentiment vocabularies are not covered. To address this problem, several studies have constructed domain-specific sentiment lexicons based on the general-purpose lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer being serviced, and SentiWordNet does not work well because of the language gap introduced when converting Korean words into English. Such general-purpose sentiment lexicons are therefore of limited use as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons.
The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built so that a sentiment dictionary for a target domain can be constructed quickly. In particular, it constructs sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure. First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having positive or negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model reaches 89.45%. In addition, the sentiment dictionary is extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603, and we add sentiment information about frequently used coined words and emoticons found mainly on the Web. KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing them has gradually declined. However, a recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, yielding higher sentiment analysis accuracy (Teng, Z., 2016). This indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features that improve deep learning models.
The proposed dictionary can serve as basic data for constructing the sentiment lexicon of a particular domain and as features for deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
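The third step of the procedure above (turning classifier outputs into lexicon entries) can be sketched minimally. The Bi-LSTM itself is omitted here; the glosses, labels, and the ±1 polarity scores are illustrative assumptions rather than KNU-KSL's actual scoring scheme.

```python
def build_lexicon(classified_glosses):
    """Turn (headword, predicted_label) pairs into lexicon entries.
    Labels are assumed to come from the Bi-LSTM gloss classifier."""
    lexicon = {}
    for word, label in classified_glosses:
        # Simple polarity scores: +1 positive, -1 negative.
        lexicon[word] = 1 if label == "positive" else -1
    return lexicon

# Hypothetical classifier outputs for a few dictionary glosses.
predictions = [("thank you", "positive"),
               ("worthy", "positive"),
               ("dreadful", "negative")]
lex = build_lexicon(predictions)
print(lex["worthy"], lex["dreadful"])  # prints: 1 -1
```

A dictionary built this way can then be merged with external sources (SentiWordNet, SenticNet, and so on) as the abstract describes, or exported as features for a downstream deep learning model.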