• Title/Summary/Keyword: Reliability of artificial intelligence


Development of Optimum Traffic Safety Evaluation Model Using the Back-Propagation Algorithm (역전파 알고리즘을 이용한 최적의 교통안전 평가 모형개발)

  • Kim, Joong-Hyo; Kwon, Sung-Dae; Hong, Jeong-Pyo; Ha, Tae-Jun
    • KSCE Journal of Civil and Environmental Engineering Research, v.35 no.3, pp.679-690, 2015
  • To minimize damage from traffic accidents, the causes of accidents must be eliminated through engineering improvements to vehicles and road systems. Accidents are more likely on roads that lack safety measures, and such roads can be improved only at great cost and over long periods of time. In particular, accidents at intersections are rising due to inappropriate environmental factors and impose heavy losses on the nation as a whole. This study develops an intersection traffic safety evaluation model in order to present safety countermeasures against the causes of accidents. It diagnoses vulnerable locations using the back-propagation algorithm (BPA), an artificial neural network technique recently investigated in the field of artificial intelligence, with the further aim of supporting more efficient traffic safety improvement projects in the operation of signalized intersections and the establishment of traffic safety policies. The mean squared error between the traffic accident values predicted by the BPA and the actual measured values was approximately 3.89, and the BPA showed better traffic safety evaluation performance than a multiple regression model. In other words, the BPA can be effectively utilized in diagnosing the safety of actual signalized intersections and in establishing practical transportation policy.
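
The abstract above names the back-propagation algorithm (BPA) as its evaluation engine. As a rough illustration of the technique only, and not of the authors' model, the following NumPy sketch trains a one-hidden-layer network by back-propagation on hypothetical intersection features and accident counts; the feature set, network size, and data are placeholder assumptions.

```python
# Minimal back-propagation sketch (illustrative only; not the authors' model).
# Input features (e.g., traffic volume, signal cycle length, conflict points)
# and the accident-count target are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 intersections x 3 features, one accident-count target.
X = rng.random((200, 3))
y = 2.0 * X[:, :1] + 1.5 * X[:, 1:2] - X[:, 2:3] + 0.1 * rng.standard_normal((200, 1))

# One hidden layer with sigmoid activation, linear output.
W1 = rng.standard_normal((3, 8)) * 0.1
b1 = np.zeros((1, 8))
W2 = rng.standard_normal((8, 1)) * 0.1
b2 = np.zeros((1, 1))

lr = 0.05
for epoch in range(2000):
    # Forward pass
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    y_hat = h @ W2 + b2
    err = y_hat - y

    # Backward pass: gradients of the mean squared error
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0, keepdims=True)
    dh = err @ W2.T * h * (1.0 - h)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0, keepdims=True)

    # Gradient descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(np.mean((y_hat - y) ** 2))
print(f"training MSE: {mse:.4f}")  # the paper reports an MSE of about 3.89 on its own data
```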

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim; Kim, Ji Hui; Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.1-21, 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand in various fields for market information at the level of specific products. However, such information has generally been provided at the industry level or for broad categories based on classification standards, making it difficult to obtain specific and appropriate information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information is collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products is summed to estimate the market size of each product group. As experimental data, product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training. We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as the optimized parameters for further experiments. We employed index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or on multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application since it can meet unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis reporting by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module can be advanced by imposing a proper ordering on the preprocessed dataset or by combining Word2Vec with another measure such as Jaccard similarity, and the product group clustering can be replaced with other unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect they will further improve the performance of the basic model conceptually proposed in this study.
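
As a rough sketch of the pipeline this abstract describes (not the authors' code), the snippet below embeds tokenized product names with gensim's Word2Vec using the reported settings (300-dimensional vectors, window size 15), groups names by cosine similarity to a seed keyword, and sums the corresponding sales. The corpus, seed term, similarity threshold, and sales table are hypothetical placeholders.

```python
# Illustrative sketch: Word2Vec embedding of product names, cosine-similarity
# grouping, and bottom-up market size summation (toy data throughout).
from gensim.models import Word2Vec
import pandas as pd

# Each "sentence" is a tokenized product name from company microdata (placeholder).
corpus = [
    ["wireless", "earbuds"],
    ["bluetooth", "earbuds"],
    ["wired", "earphone"],
    ["usb", "cable"],
]

# Parameters follow the abstract: 300-dimensional vectors, window size 15.
model = Word2Vec(corpus, vector_size=300, window=15, min_count=1, epochs=50, seed=0)

# Products whose embeddings are cosine-similar to a seed keyword form a group.
seed = "earbuds"
similar = [w for w, score in model.wv.most_similar(seed, topn=10) if score > 0.3]
group = set([seed] + similar)

# Sum sales of companies whose product tokens fall in the group (toy sales table).
sales = pd.DataFrame({"product": ["wireless earbuds", "usb cable"], "sales": [120.0, 45.0]})
in_group = sales["product"].apply(lambda p: any(tok in group for tok in p.split()))
print("estimated market size:", sales.loc[in_group, "sales"].sum())
```

In this setup the cosine-similarity threshold (0.3 here) plays the role the abstract mentions: raising or lowering it widens or narrows the product group, and hence the granularity of the estimated market.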

A Study on the Korea Future Internet Promotion Plan for Cyber Security Enhancement (사이버 보안 강화를 위한 한국형 미래 인터넷 추진 방안에 관한 연구)

  • Lim, Gyoo-Gun; Jin, Hai-Yan; Ahn, Jae-Ik
    • Informatization Policy, v.29 no.1, pp.24-37, 2022
  • Amid rapid changes in the ICT environment driven by the 4th Industrial Revolution, the development of information and communication technology, and COVID-19, the existing internet was developed without considering security, mobility, manageability, QoS, and similar requirements. As a result, the structure of the internet has become complicated, and vulnerabilities in security, stability, and reliability continue to occur. There is also demand for a new concept of the internet that can provide the stability and reliability required by digital transformation technologies such as artificial intelligence and IoT. To suggest a way of implementing a Korean future internet that strengthens cybersecurity, this study derives the direction and strategy for promoting a future internet suited to the Korean cyber environment by analyzing the key factors in future internet implementation and evaluating the trends and suitability of domestic and foreign future internet research. The key implementation factors rank in importance in the order of security, integrity, availability, stability, and confidentiality. Future internet projects are currently being studied in various forms around the world; among them, the Bright Internet most adequately satisfies the key elements of future internet implementation and was evaluated as the technology most suitable for Korea's cyber environment. Technical, strategic, and legal issues must all be addressed to promote the Bright Internet as the leading Korean future internet. On the technical side, it is necessary to adopt SAVA IPv6-NID when selecting the Bright Internet as the Korean future internet standard, to integrate data management at the data center level, and then to establish a cooperative system between countries. On the strategic side, a secure management system and the establishment of a responsible institution are needed. Lastly, on the legal side, the requirements of the GDPR must be fulfilled, along with compliance with domestic laws such as Korea's revised Data 3 Act.

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye; Kim, Min-Seung; Lee, Chan-Ho; Choi, Jung-Hwan; Lee, Jeong-Hee; Sung, Tae-Eung
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.131-145, 2020
  • In line with the trend of industrial innovation, IoT technology, used in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services in combination with big data. The data accumulated from Internet-of-Things (IoT) devices is being used in many ways to build convenience-oriented smart systems, as it enables customized intelligent services through analysis of user environments and patterns. Recently, it has been applied to innovation in the public domain, for example in smart city and smart transportation services such as solving traffic and crime problems with CCTV. In particular, the ease of securing real-time service data and the stability of security must be considered comprehensively when planning underground services or establishing passenger flow control information systems to enhance the convenience of citizens and commuters amid the congestion of public transportation such as subways and urban railways. However, previous studies that utilize image data face privacy issues and degraded object detection performance under abnormal conditions. The IoT device-based sensor data used in this study is free from privacy issues because it does not identify individuals, and can therefore be effectively utilized to build intelligent public services for an unspecified large number of people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in metro services, which many people use daily, with the temperature data measured by the sensors transmitted in real time. The experimental environment for real-time data collection was established at the equally spaced midpoints of a 4×4 grid on the ceiling of subway entrances where passenger movement is high, and the temperature change of objects entering and leaving the detection spots was measured. The measured data were preprocessed by setting reference values for the 16 areas and calculating, per unit of time, the difference between each area's temperature and its reference value, a methodology that highlights movement within the detection area. In addition, the values were scaled up by a factor of 10 to reflect temperature differences between areas more sensitively; for example, a sensor reading of 28.5℃ was converted to 285 for analysis. The data collected from the sensors thus have the characteristics of both time series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, which excels at image classification, with an LSTM, which is especially suitable for analyzing time series data. In this study, the CNN-LSTM algorithm is used to predict the number of passing persons in one of the 4×4 detection areas. We validated the proposed model through performance comparison with other artificial intelligence algorithms, namely the Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). In the experiments, the proposed CNN-LSTM hybrid model showed the best predictive performance among these models. By utilizing the proposed devices and models, various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response services, are expected to be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances and over a short period of time, so applicability to other environments remains to be verified. In the future, the reliability of the proposed model is expected to improve if experimental data are collected in a wider variety of environments or if training data are augmented with measurements from other sensors.
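
A minimal Keras sketch of the kind of hybrid this abstract describes is shown below: a CNN applied to each 4×4 sensor frame via TimeDistributed layers, followed by an LSTM over the frame sequence and a regression head for the passing-person count. The layer sizes, sequence length, and training data are assumptions, not the authors' configuration.

```python
# Illustrative CNN-LSTM sketch for sequences of 4x4 sensor "frames"
# (a minimal sketch under stated assumptions; not the authors' architecture).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

timesteps = 10           # length of each temperature sequence (assumed)
frame_shape = (4, 4, 1)  # 4x4 detection grid, one temperature channel

model = models.Sequential([
    tf.keras.Input(shape=(timesteps, *frame_shape)),
    # CNN part: extract spatial features from each 4x4 frame independently.
    layers.TimeDistributed(layers.Conv2D(16, (2, 2), padding="same", activation="relu")),
    layers.TimeDistributed(layers.Flatten()),
    # LSTM part: model the temporal pattern across frames.
    layers.LSTM(32),
    layers.Dense(1)      # predicted number of passing persons in a detection area
])
model.compile(optimizer="adam", loss="mse")

# Toy data standing in for the preprocessed, 10x-scaled temperature differences.
X = np.random.rand(64, timesteps, *frame_shape).astype("float32")
y = np.random.randint(0, 5, size=(64, 1)).astype("float32")
model.fit(X, y, epochs=2, batch_size=16, verbose=0)
print(model.predict(X[:1], verbose=0))
```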

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong; SangYoun Kim; SungKu Heo; Shahzeb Tariq; MinHyeok Shin; ChangKyoo Yoo
    • Korean Chemical Engineering Research, v.61 no.4, pp.523-541, 2023
  • As accessibility to 3D printers increases, exposure to chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of toxicity prediction using in silico techniques is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in molecular descriptors. First, the MissForest algorithm was utilized to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four different machine learning models (decision tree, random forest, XGBoost, and SVM), a machine learning (ML)-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated through the Tree-SHAP (SHapley Additive exPlanations) method, one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure data approximately 2.5-fold compared to the existing data. Based on the imputed molecular descriptor dataset, the developed data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Lastly, Tree-SHAP analysis demonstrated that the data-centric QSAR model achieved high prediction performance for toxicity information by identifying key molecular descriptors highly correlated with the toxicity indices. Therefore, the proposed QSAR model based on the data-centric XAI approach can be extended to predict the toxicity of potential pollutants in emerging printing chemicals and in chemical, semiconductor, or display processes.
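
The following sketch illustrates the described workflow under stated assumptions: MissForest is approximated here with scikit-learn's IterativeImputer wrapping a random forest (a related but not identical algorithm), a single random-forest QSAR model stands in for the four models compared in the paper, and Tree-SHAP attributes predictions to descriptors. The descriptor matrix and Log P targets are random placeholders.

```python
# Sketch: forest-based imputation of missing molecular descriptors,
# a random-forest QSAR model, and Tree-SHAP attribution (toy data).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
import shap

rng = np.random.default_rng(0)
X = rng.random((150, 8))                                  # 150 chemicals x 8 descriptors (toy)
y = X @ rng.random(8) + 0.1 * rng.standard_normal(150)    # stand-in for Log P values
X[rng.random(X.shape) < 0.2] = np.nan                     # 20% missing descriptor values

# 1) Impute missing descriptors with a forest-based iterative imputer (MissForest-like).
imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=0),
                           max_iter=5, random_state=0)
X_imp = imputer.fit_transform(X)

# 2) Fit a random-forest QSAR model on the imputed descriptors.
X_tr, X_te, y_tr, y_te = train_test_split(X_imp, y, test_size=0.2, random_state=0)
qsar = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out chemicals:", round(qsar.score(X_te, y_te), 3))

# 3) Tree-SHAP: identify descriptors driving the predictions.
explainer = shap.TreeExplainer(qsar)
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per descriptor:", np.abs(shap_values).mean(axis=0).round(3))
```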

A Basic Study for Sustainable Analysis and Evaluation of Energy Environment in Buildings : Focusing on Energy Environment Historical Data of Residential Buildings (빌딩의 지속가능 에너지환경 분석 및 평가를 위한 기초 연구 : 주거용 건물의 에너지환경 실적정보를 중심으로)

  • Lee, Goon-Jae
    • Journal of the Korea Academia-Industrial cooperation Society, v.18 no.1, pp.262-268, 2017
  • The energy consumption of buildings is approximately 20.5% of total energy consumption, and interest in building energy efficiency and low consumption is increasing. Several studies have performed energy analysis and evaluation, which is most effective when applied in the initial design phase. In the initial design phase, however, energy performance is evaluated using general-level information such as glazing area and surface area, so the results differ from those of the detailed design stage, which is based on drawings containing detailed information on materials and facilities. Thus far, most studies have reported analysis and evaluation at the detailed design stage, where detailed information about the materials installed in the building is available. The accuracy of energy environment analysis at the initial design stage could therefore be improved if the energy environment information generated over the life cycle of buildings were systematically established and supplied to the analysis through probabilistic/statistical methods. However, historical data on energy use has not yet been established in Korea. This study therefore performed an energy environment analysis aimed at constructing such energy environment historical data. As a result, an information classification system, an information model, and a service model for acquiring and providing energy environment information over the building life cycle are presented as basic data. The results can be utilized in a historical data management system so that the reliability of analysis can be improved by supplementing the input information at the initial design stage. As historical data accumulate, they can also serve as training data for probabilistic/statistical or artificial intelligence methods for energy environment analysis in the initial design stage.
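
As a small illustration of the closing idea, once historical records are accumulated, even a simple statistical model can supplement the coarse inputs available at the initial design stage (the abstract mentions glazing area and surface area as such inputs). The sketch below fits a linear regression on hypothetical historical data; it is not the study's information model, and all values are placeholders.

```python
# Sketch: using accumulated historical energy records to estimate annual
# energy use from early-design inputs (all data below are random placeholders).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Historical records: [glazing area m^2, surface area m^2] -> annual energy use (kWh)
X_hist = rng.uniform([20, 300], [200, 2000], size=(500, 2))
y_hist = 15.0 * X_hist[:, 0] + 4.0 * X_hist[:, 1] + rng.normal(0, 500, 500)

model = LinearRegression().fit(X_hist, y_hist)

# Early-design estimate for a new residential building (hypothetical inputs).
new_building = np.array([[80.0, 900.0]])
print("estimated annual energy use (kWh):", round(model.predict(new_building)[0], 1))
```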

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob; Rim, BeanBonyka; Sung, Nak-Jun; Hong, Min
    • Journal of Internet Computing and Services, v.21 no.4, pp.17-23, 2020
  • Biometric information, which measures human physical characteristics, has attracted great attention as a highly reliable security technology because there is no fear of theft or loss. Among such biometric information, fingerprints are mainly used in fields such as identity verification and identification. If a fingerprint image has a problem such as a wound, wrinkle, or moisture that makes authentication difficult, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to the problem to resolve it. By implementing artificial intelligence software that distinguishes fingerprint images containing cuts and wrinkles, it becomes easy to check whether such defects are present and, by selecting an appropriate algorithm, to improve the fingerprint image. In this study, we built a database of 17,080 fingerprints in total by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 prints from the Sokoto open dataset, and the fingerprints of 98 Korean students. Criteria were established to determine whether injuries or wrinkles are present in the collected images, and the data were validated by experts. The training and test datasets consisted of the Cambodian and Sokoto data, split at a ratio of 8:2, and the data of the 98 Korean students were set aside as a validation dataset. Using the constructed dataset, five CNN-based architectures (a classic CNN, AlexNet, VGG-16, ResNet50, and YOLOv3) were implemented, and a study was conducted to find the model that performed best on this discrimination task. Among the five architectures, ResNet50 showed the best performance, at 81.51%.
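
As an illustrative sketch in the spirit of the comparison above (not the authors' training setup), the snippet below uses a frozen ImageNet-pretrained ResNet50 with a binary head to separate fingerprints with scars or wrinkles from clean ones; the directory layout, image size, and hyperparameters are assumptions.

```python
# Sketch: ResNet50 as a frozen feature extractor with a binary classification
# head for "scar/wrinkle" vs. "clean" fingerprint images (paths are hypothetical).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pretrained backbone

model = models.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    layers.Lambda(preprocess_input),        # ResNet50-specific preprocessing
    base,
    layers.Dense(1, activation="sigmoid")   # 1 = scar/wrinkle present, 0 = clean print
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: fingerprints/{train,val}/{damaged,clean}/*.png
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fingerprints/train", image_size=(224, 224), batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "fingerprints/val", image_size=(224, 224), batch_size=32, label_mode="binary")
model.fit(train_ds, validation_data=val_ds, epochs=5)
```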