• Title/Summary/Keyword: field reliability data


Development of Time-Cost Trade-Off Algorithm for JIT System of Prefabricated Girder Bridges (Nodular Girder) (프리팹 교량 거더 (노듈러 거더)의 적시 시공을 위한 공기-비용 알고리즘 개발)

  • Kim, Dae-Young;Chung, Taewon;Kim, Rang-Gyun
    • Korean Journal of Construction Engineering and Management
    • /
    • v.24 no.3
    • /
    • pp.12-19
    • /
    • 2023
  • In the construction industry, process and cost must be balanced so that the finished product can be delivered at minimum cost within the construction period. In doing so, the size of the bridge, the construction method, the environment and production capacity of the factory, and the transport distance must all be considered. However, for various reasons that arise during the construction period, problems such as construction delays, cost increases, and degraded quality and reliability occur. A systematic, scientific construction technique and process-management technology are therefore needed to move beyond the conventional method. Prefab (pre-fabrication) is a representative OSC (Off-Site Construction) method in which components are manufactured in a factory and assembled on site. This study develops a resource and process plan optimization system for the process management of the Nodular girder, a prefab bridge girder. A simulation algorithm was developed to automatically test the variables of the personnel and equipment mobilization plan and derive the optimal values. The algorithm was applied to the Dohwa 4 Bridge on the Paju-Pocheon Expressway Construction (Section 3), currently under construction, and the results were compared. Based on standard construction productivity rates, the actual input manpower and the equipment types and quantities were applied to the Activity Card, and the amount of work from quantity take-off, resource planning, and resource requirements was reflected. In the future, we plan to improve the accuracy of the program by applying forecasting techniques that incorporate various field data.
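
The abstract outlines, without implementation detail, a simulation that sweeps personnel/equipment mobilization plans to find the cheapest plan that still meets the construction period. The minimal Python sketch below illustrates that idea; the crew/crane options, productivity figures, and costs are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the time-cost trade-off idea: enumerate candidate
# personnel/equipment mobilization plans, simulate duration and cost for
# each, and keep the cheapest plan that meets the deadline. All numbers
# and option names are illustrative assumptions.
from itertools import product

WORK_QUANTITY = 1200.0   # girder work volume (assumed units)
DEADLINE_DAYS = 90       # contractual construction period (assumed)

crews = {"small": (8, 2_000_000), "large": (14, 3_800_000)}   # (output/day, cost/day)
cranes = {"one": (1.0, 1_500_000), "two": (1.8, 2_900_000)}   # (productivity factor, cost/day)

def simulate(crew, crane):
    """Return (duration_days, total_cost) for one mobilization plan."""
    output, crew_cost = crews[crew]
    factor, crane_cost = cranes[crane]
    duration = WORK_QUANTITY / (output * factor)
    cost = duration * (crew_cost + crane_cost)
    return duration, cost

best = None
for crew, crane in product(crews, cranes):
    duration, cost = simulate(crew, crane)
    if duration <= DEADLINE_DAYS and (best is None or cost < best[2]):
        best = (crew, crane, cost, duration)

print("optimal plan:", best)
```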

A Study on the Information Behavior of Students in Specialized High School - A Case Study of B Specialized High School (특성화고등학교 학생들의 정보이용행태 연구- B 특성화고등학교 사례 분석)

  • Euikyung Oh
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.415-423
    • /
    • 2023
  • The purpose of this study was to gather basic data for improving school library information services by investigating the information-use behavior of specialized high school students. Preferred information sources for each situation requiring information, and the level of solving information problems using those sources, were investigated, and difference analyses were conducted by department and grade. The survey found that a high percentage of students preferred Internet portal services, personal information sources (teachers, friends, parents), and social media, while very few preferred traditional print sources and mass media. The average information problem-solving score was 3.55, and the level was relatively low in the areas of employment and career/admission. Preferred information sources were similar regardless of grade and department; the difference in information problem-solving level between departments was not statistically significant, but the difference between grades was. The study also contributes academically by adding concrete examples of adolescents' information-use behavior. Based on the results, librarians should work to verify the reliability of Internet portal information, improve and promote library information sources, and expand library-use education. The development of customized information services was suggested for future research.

Development of Hazard-Level Forecasting Model using Combined Method of Genetic Algorithm and Artificial Neural Network at Signalized Intersections (유전자 알고리즘과 신경망 이론의 결합에 의한 신호교차로 위험도 예측모형 개발에 관한 연구)

  • Kim, Joong-Hyo;Shin, Jae-Man;Park, Je-Jin;Ha, Tae-Jun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.4D
    • /
    • pp.351-360
    • /
    • 2010
  • In 2010, the number of registered vehicles in Korea reached almost 17.48 million. This dramatic increase in vehicles has driven up the number of traffic accidents, one of Korea's serious social problems, and has inflated personal and economic losses. In this research, an enhanced intersection hazard prediction model combining a Genetic Algorithm and an Artificial Neural Network was developed in order to obtain data important for devising traffic-accident countermeasures and, ultimately, for reducing traffic accidents in Korea. First, this research investigated how road geometric features influence the traffic volume of each approach at intersections where accidents and congestion frequently occur, and a linear regression model relating traffic accidents to traffic conflicts was developed by examining their relationship through statistical significance tests. Second, an intersection hazard prediction model was developed by combining a Genetic Algorithm and an Artificial Neural Network, using intersection traffic volume, road geometric features, and traffic-conflict variables as inputs. Finally, by comparing the actual number of traffic accidents with the number predicted by the model, this research found that the developed model outperforms existing forecasting models in reliability and accuracy. In conclusion, the cost-effectiveness of traffic safety improvement projects can be expected to be maximized if the developed intersection hazard prediction model is used in practice in the field.
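
The abstract does not specify how the Genetic Algorithm and the Artificial Neural Network are coupled. One common reading is that the GA searches the network's hyperparameters with validation error as the fitness; the hedged Python sketch below shows that variant on synthetic stand-in data (the genome, settings, and features are all assumptions, not the paper's design).

```python
# Hedged sketch of one common GA + ANN combination: a genetic algorithm
# searches neural-network hyperparameters, scored by validation error.
# The genome below (hidden size, learning rate) and all data are
# illustrative assumptions.
import random
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = random.Random(0)
np_rng = np.random.default_rng(0)
X = np_rng.random((300, 5))            # stand-ins for volume/geometry/conflict features
y = X @ np.array([3.0, 1.5, 0.5, 2.0, 1.0]) + 0.1 * np_rng.standard_normal(300)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fitness(genome):
    """Higher is better: negative validation MSE of the configured ANN."""
    hidden, lr = genome
    model = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                         max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    return -np.mean((model.predict(X_va) - y_va) ** 2)

def mutate(genome):
    hidden, lr = genome
    return (max(2, hidden + rng.choice([-2, 0, 2])),
            min(0.5, max(1e-4, lr * rng.uniform(0.5, 2.0))))

population = [(rng.randint(4, 32), 10 ** rng.uniform(-4, -1)) for _ in range(8)]
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:4]                                # truncation selection
    children = [mutate(rng.choice(parents)) for _ in range(4)]
    population = parents + children

print("best genome:", max(population, key=fitness))
```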

Research on optimal safety ship-route based on artificial intelligence analysis using marine environment prediction (해양환경 예측정보를 활용한 인공지능 분석 기반의 최적 안전항로 연구)

  • Dae-yaoung Eeom;Bang-hee Lee
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2023.05a
    • /
    • pp.100-103
    • /
    • 2023
  • Recently, with the development of maritime autonomous surface ships and eco-friendly ships, and with expanding demand for accurate, detailed real-time marine environment prediction information, research on generating and evaluating optimal routes under various marine environments is needed. An algorithm that can calculate an optimal route for smart ships while reducing marine-environment risk and the uncertainty of energy consumption was developed in two stages. In the first stage, a profile was created by combining marine environmental information with ship position and status information from the Automatic Identification System (AIS). In the second stage, a model was developed that defines a marine environment energy map from the constructed profiles; a regression model was generated by applying Random Forest, a machine learning technique, to about 600,000 data points. The Random Forest coefficient of determination (R²) was 0.89, showing very high reliability. The Dijkstra shortest-path algorithm was then applied to the marine environment predictions for June 1-3, 2021 to calculate the optimal safe route and display it on a map. The route calculated by the Random Forest regression model was streamlined, and it was derived in consideration of the state of the marine environment prediction information. The concept of route calculation based on real-time marine environment prediction information presented in this study is expected to yield realistic, safe routes that reflect ships' movement tendencies, and to be extended to a range of economic, safety, and eco-friendliness evaluation models in the future.
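
As a rough illustration of the two-stage design, the sketch below trains a Random Forest to predict a per-cell environmental "energy" cost and then runs Dijkstra over the resulting grid. The grid, features, and cost model are illustrative assumptions; the paper's actual inputs are AIS profiles and marine environment predictions.

```python
# Hedged sketch: (1) Random Forest predicts an energy cost per grid cell,
# (2) Dijkstra finds the minimum-cost route across the grid.
import heapq
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
N = 20                                    # assumed 20x20 routing grid
features = rng.random((N * N, 3))         # stand-ins for wave height, wind, current
energy = features @ np.array([2.0, 1.0, 0.5]) + 0.1 * rng.random(N * N)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(features, energy)
cost = model.predict(features).reshape(N, N)   # predicted energy map

def dijkstra(cost, start, goal):
    """Minimum cumulative-cost path over a 4-connected grid."""
    dist = {start: 0.0}
    prev, heap = {}, [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < N and 0 <= nc < N:
                nd = d + cost[nr, nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:                  # walk predecessors back to start
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

route = dijkstra(cost, (0, 0), (N - 1, N - 1))
print("route cells:", len(route), "total cost:", sum(cost[r, c] for r, c in route))
```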


The Development of Instruments to Assess Attitudes Toward Science of Students and Their Parents (학생과 부모의 과학에 대한 태도 측정 도구 개발)

  • Choi, Sung-Youn;Kim, Sung-Yeon;Kim, Sung-Won
    • Journal of The Korean Association For Science Education
    • /
    • v.27 no.3
    • /
    • pp.272-284
    • /
    • 2007
  • The purpose of this study was to describe scales of attitudes toward science and to validate instruments for students and their parents. The instruments comprise three scales: cognition of the value of science, affection toward science and science learning, and conative participation in scientific activities. A sample of middle school students (N = 198) and their parents (N = 153) was selected. Data analysis indicated that the instruments developed in this study had adequate validity and reliability (Cronbach's α = 0.93 for the student questionnaire, α = 0.88 for the parent questionnaire). The results reveal that both students and parents were well aware of the academic/vocational and social value of science, but had low awareness of its individual value. Although students had positive feelings of enjoyment toward science and science learning, their self-concept and self-efficacy were low. Parents' responses showed that they supported their children in general but not in science. In particular, female students participated little in scientific activities, and their parents likewise gave little support for scientific activities (p < .01). Finally, there were positive correlations between students' attitudes toward science and their parents' affection toward science and science learning and conative participation in science activities.
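
The reported reliability figures are internal-consistency coefficients (Cronbach's α). For reference, here is a minimal sketch of how α is computed from a response matrix; the data below are made up for illustration.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
latent = rng.normal(size=(198, 1))                      # shared attitude factor
responses = latent + 0.5 * rng.normal(size=(198, 10))   # 10 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```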

Developing and Applying the Questionnaire to Measure Science Core Competencies Based on the 2015 Revised National Science Curriculum (2015 개정 과학과 교육과정에 기초한 과학과 핵심역량 조사 문항의 개발 및 적용)

  • Ha, Minsu;Park, HyunJu;Kim, Yong-Jin;Kang, Nam-Hwa;Oh, Phil Seok;Kim, Mi-Jum;Min, Jae-Sik;Lee, Yoonhyeong;Han, Hyo-Jeong;Kim, Moogyeong;Ko, Sung-Woo;Son, Mi-Hyun
    • Journal of The Korean Association For Science Education
    • /
    • v.38 no.4
    • /
    • pp.495-504
    • /
    • 2018
  • This study was conducted to develop items measuring scientific core competencies based on the statements presented in the 2015 revised national science curriculum, and to examine the validity and reliability of the newly developed items. Based on the curriculum's descriptions of scientific reasoning, scientific inquiry ability, scientific problem-solving ability, scientific communication ability, and participation/lifelong learning in science, 25 items were developed by five science education experts. To explore the validity and reliability of the developed items, data were collected from 11,348 elementary, middle, and high school students nationwide. The content validity, substantive validity, internal structure validity, and generalization validity proposed by Messick (1995) were examined through various statistical tests. The MNSQ analysis showed no misfitting items among the 25. Confirmatory factor analysis using structural equation modeling revealed the five-factor model to be suitable. Differential item functioning analyses by gender and school level found nonconforming DIF values in only two of 175 cases. A multivariate analysis of variance by gender and school level showed significant differences in test scores between school levels and between genders, and the interaction effect was also significant. The science core competency assessment items based on the 2015 revised national science curriculum are valid from a psychometric point of view and can be used in science education practice.

A Study on the Establishment Case of Technical Standard for Electronic Record Information Package (전자문서 정보패키지 구축 사례 연구 - '공인전자문서보관소 전자문서 정보패키지 기술규격 개발 연구'를 중심으로-)

  • Kim, Sung-Kyum
    • The Korean Journal of Archival Studies
    • /
    • no.16
    • /
    • pp.97-146
    • /
    • 2007
  • The days when people drew up and managed all kinds of documents on paper in the course of their work are gone; electronic documents have replaced paper. Unlike paper documents, electronic ones maximize job efficiency through their convenience in production and storage. But they have disadvantages as well: it is difficult to distinguish originals from copies; it is not easy to detect whether a document has been altered or damaged; they are prone to alteration and damage from external influences in the electronic environment; and they require enormous workforce and cost so that immediate measures can be taken as the S/W and H/W environment changes. Despite these weaknesses, electronic documents account for an ever-larger share of today's work environment thanks to their convenience and production-cost efficiency. Both the government and the private sector have sought ways to maximize their advantages while minimizing their risks. One such method is the Authorized Retention Center described in this study. Its smooth operation has two prerequisites: the legal validity of electronic documents must be guaranteed administratively, and their reliability and authenticity must be secured technologically. Responding to those needs, the Ministry of Commerce, Industry and Energy and the Korea Institute for Electronic Commerce, the two main bodies driving the Authorized Retention Center project, revised the Electronic Commerce Act and supplemented its provisions to guarantee the legal validity of electronic documents in 2005, and in 2006 conducted research on the long-term preservation of electronic documents and on securing their reliability, as demanded by the center's users. To fulfill the goals of the Authorized Retention Center, this study investigated the center's technical standard for electronic record information packages and applied the ISO 14721 information package model, the standard for long-term preservation of digital data. It also proposed a process for producing and managing information packages so that SIP, AIP, and DIP metadata would exist for the production, preservation, and user-access stages of electronic documents and could be implemented according to the center's policies. Building on that earlier work, this study presented flow charts of the production and processing stages, application methods, and package structures of the center's technical standard for electronic record information packages, and raised issues that should be continuously researched in the field of records management.
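
The abstract applies the ISO 14721 (OAIS) information package model with its SIP, AIP, and DIP stages. The sketch below shows one minimal way that flow could be modeled in code; the field names and the checksum/provenance choices are illustrative assumptions, not the center's actual metadata schema.

```python
# Hedged sketch of the ISO 14721 (OAIS) package flow: a Submission
# Information Package (SIP) is ingested into an Archival Information
# Package (AIP), from which a Dissemination Information Package (DIP)
# is derived for users. Field names are illustrative assumptions.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SIP:                      # producer-submitted package
    document_id: str
    content: bytes
    producer: str

@dataclass
class AIP:                      # preserved package with fixity and provenance
    document_id: str
    content: bytes
    checksum: str
    ingested_at: str
    provenance: list = field(default_factory=list)

def ingest(sip: SIP) -> AIP:
    """Fix the content with a checksum and record provenance on ingest."""
    return AIP(
        document_id=sip.document_id,
        content=sip.content,
        checksum=hashlib.sha256(sip.content).hexdigest(),
        ingested_at=datetime.now(timezone.utc).isoformat(),
        provenance=[f"submitted by {sip.producer}"],
    )

def disseminate(aip: AIP) -> dict:
    """DIP: the user-facing view derived from the preserved AIP."""
    return {"document_id": aip.document_id, "content": aip.content,
            "verified_checksum": aip.checksum}

dip = disseminate(ingest(SIP("DOC-001", b"contract body", "producer-A")))
print(dip["document_id"], dip["verified_checksum"][:12])
```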

The Effect of Influencer's Characteristics and Contents Quality on Brand Attitude and Purchase Intention: Trust and Self-congruity as a Mediator (소셜미디어 인플루언서의 개인특성과 콘텐츠 특성이 브랜드 태도와 구매의도에 미치는 영향: 신뢰와 자아일치성을 매개로)

  • Lee, Myung Jin;Lee, Sang Won
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.16 no.5
    • /
    • pp.159-175
    • /
    • 2021
  • This study analyzed how influencer characteristic factors (professionalism, authenticity, and interactivity) and content quality factors (accuracy, completeness, and diversity) affect brand attitude and purchase intention through trust and self-consistency. To reveal the structural relationships between the main variables, a survey was conducted with 201 users. EFA, CFA, and reliability analyses were performed to confirm reliability and validity, and structural equation modeling was conducted to verify the hypotheses. The main results are as follows. First, professionalism and interactivity had a significant positive effect on trust, and accuracy, completeness, and diversity all had a significant positive effect on trust. Second, regarding individual characteristics and self-consistency, professionalism and authenticity had a significant positive effect on self-consistency; regarding content quality and self-consistency, accuracy, completeness, and diversity had a positive effect on self-consistency, as they did on trust. Third, regarding the effects of trust and self-consistency on brand attitude and purchase intention, both trust and self-consistency had a statistically significant positive effect on brand attitude, while only self-consistency and brand attitude had a statistically significant positive effect on purchase intention. These findings show that when users perceive an influencer's professionalism and interactivity, trust increases, and that professionalism and authenticity increase self-consistency with the influencer. For content quality, trust and self-consistency responded positively when quality was perceived through the accuracy, completeness, and diversity of the content. Trust and self-consistency, in turn, strengthened attitudes toward brands and could influence consumption behavior such as purchase intention. Therefore, for effective influencer marketing, which carries strong product and brand information, not only personal characteristics such as professionalism, authenticity, and interactivity but also content quality should be considered. These results are expected to provide basic data and implications for marketing strategies and practice.

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.1-21
    • /
    • 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been applied practically across industries. In business intelligence, they have been employed to discover new market and/or technology opportunities and to support rational decision-making. Market information such as market size, growth rate, and market share is essential when companies set business strategies, and there is continuous demand across fields for market information at the specific-product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making specific, appropriate information difficult to obtain. We therefore propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied Word2Vec, a neural-network-based semantic word embedding model, to enable automatic, bottom-up market size estimation from individual companies' product information. The overall process is as follows. First, product-related data are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales figures of the extracted products are summed to estimate the market size of each product group. As experimental data, product-name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We optimized the training parameters and then used a vector dimension of 300 and a window size of 15 in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as the product-name dataset to cluster product groups more efficiently. Product names similar to KSIC index words were extracted based on cosine similarity, and the market size of each extracted product group was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; Pearson's correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional sampling-based methods and methods requiring multiple assumptions. In addition, the level of market category can be adjusted easily and efficiently by changing the cosine similarity threshold according to the purpose of the information. Furthermore, the approach has high practical potential, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be used in technology evaluation and technology commercialization support programs run by government institutions, as well as in business strategy consulting and market analysis reports published by private firms. A limitation of our study is that the model still needs improvement in accuracy and reliability. The semantics-based word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec, and the product-group clustering could be switched to other unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect to further improve the performance of the basic model conceptually proposed in this study.
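
A hedged sketch of the bottom-up pipeline described above: train Word2Vec on product-name tokens, gather names similar to a KSIC index word by cosine similarity, and sum their sales. The toy corpus, sales table, and similarity threshold are illustrative assumptions; only the vector dimension (300) and window size (15) follow the paper.

```python
# Hedged sketch of Word2Vec-based market size estimation. The corpus and
# sales figures are made up; real inputs are Statistics Korea microdata.
from gensim.models import Word2Vec

corpus = [["frozen", "dumpling"], ["frozen", "pizza"], ["steel", "pipe"],
          ["stainless", "steel", "pipe"], ["dumpling", "snack"]]
sales = {"dumpling": 120.0, "pizza": 90.0, "pipe": 300.0, "snack": 45.0}

# Paper-reported settings: vector_size=300, window=15 (optimized there on
# 345,103 product names; this toy corpus is far too small for real use).
model = Word2Vec(sentences=corpus, vector_size=300, window=15, min_count=1, seed=0)

def market_size(index_word: str, threshold: float = 0.0) -> float:
    """Sum sales of products whose name tokens are cosine-similar to index_word."""
    similar = model.wv.most_similar(index_word, topn=10)
    group = {word for word, score in similar if score >= threshold} | {index_word}
    return sum(sales.get(word, 0.0) for word in group)

print("estimated market size for 'dumpling' group:", market_size("dumpling"))
```

Raising the `threshold` argument narrows the product group, which mirrors the paper's point that the market-category level can be tuned via the cosine-similarity cutoff.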

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Owing to recent interest and research on various algorithms, the field has achieved greater technological advances than ever before. The knowledge-based system is a sub-domain of artificial intelligence that aims to let AI agents make decisions using machine-readable, processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently has been used together with statistical AI such as machine learning. More recently, knowledge bases have aimed to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and they are used for intelligent processing in many AI applications, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires great expert effort. Much recent research in knowledge-based AI uses DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of an article's unifying aspects. This knowledge is generated by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. Because the knowledge is generated from semi-structured, user-created infobox data, DBpedia can achieve high reliability in terms of knowledge accuracy. However, since only about 50% of all wiki pages in the Korean Wikipedia contain an infobox, DBpedia is limited in knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the method's appropriateness, we describe a knowledge extraction model that follows the DBpedia ontology schema, trained on Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for extracting triples, and selecting values and transforming them into RDF triple structures. Wikipedia infobox structures are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify an input document into an infobox category, i.e., an ontology class. After determining the document's classification, we classify the sentences appropriate to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training dataset from a Wikipedia dump by adding BIO tags to sentences, training on about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the expert effort needed to construct instances according to the ontology schema.
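
The three-step pipeline (document classification, sentence selection, BIO-based triple extraction) can be sketched as follows. The keyword-overlap classifier and the hard-coded BIO tags merely stand in for the paper's trained CRF / Bi-LSTM-CRF models; all names and examples are illustrative assumptions.

```python
# Hedged sketch of the three-step extraction pipeline: (1) classify a
# document into an ontology class, (2) keep sentences likely to carry an
# attribute, (3) decode BIO tags into an RDF-style triple.

CLASS_KEYWORDS = {"Person": {"born", "actor"}, "City": {"population", "mayor"}}

def classify_document(text: str) -> str:
    """Step 1: assign the ontology class whose keywords overlap most."""
    tokens = set(text.lower().split())
    return max(CLASS_KEYWORDS, key=lambda c: len(CLASS_KEYWORDS[c] & tokens))

def sentence_has_attribute(sentence: str, ontology_class: str) -> bool:
    """Step 2: keep sentences that mention a keyword of the class."""
    return bool(CLASS_KEYWORDS[ontology_class] & set(sentence.lower().split()))

def decode_bio(tokens, tags, subject, predicate):
    """Step 3: turn B/I/O tags into (subject, predicate, object) triples."""
    triples, span = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if span:
                triples.append((subject, predicate, " ".join(span)))
            span = [token]
        elif tag == "I" and span:
            span.append(token)
        else:
            if span:
                triples.append((subject, predicate, " ".join(span)))
            span = []
    if span:
        triples.append((subject, predicate, " ".join(span)))
    return triples

doc = "Alice was born in Seoul . She is an actor ."
cls = classify_document(doc)                           # -> "Person"
sentence = "Alice was born in Seoul ."
if sentence_has_attribute(sentence, cls):
    tokens = sentence.split()
    tags = ["O", "O", "O", "O", "B", "O"]              # stand-in for model output
    print(decode_bio(tokens, tags, "Alice", "birthPlace"))
```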