• Title/Abstract/Keyword: Artificial-Intelligence


Agricultural Applicability of AI based Image Generation (AI 기반 이미지 생성 기술의 농업 적용 가능성)

  • Seungri Yoon;Yeyeong Lee;Eunkyu Jung;Tae In Ahn
    • Journal of Bio-Environment Control / v.33 no.2 / pp.120-128 / 2024
  • Since ChatGPT was released in 2022, the generative artificial intelligence (AI) industry has seen massive growth and is expected to bring significant innovations to cognitive tasks. AI-based image generation, in particular, is leading major changes in the digital world. This study investigates the technical foundations of three notable AI image generation tools (Midjourney, Stable Diffusion, and Firefly) and compares their effectiveness by examining the images they produce. The results show that these AI tools can generate realistic images of tomatoes, strawberries, paprikas, and cucumbers, typical crops grown in greenhouses. In particular, Firefly stood out for its ability to produce very realistic images of greenhouse-grown crops. However, all tools struggled to fully capture the environmental context of the greenhouses in which these crops grow. The process of refining prompts and using reference images proved effective in accurately generating images of strawberry fruits and their cultivation systems. For cucumber images, the AI tools produced results very close to real photographs, with no significant differences found in their evaluation scores. This study demonstrates how AI-based image generation technology can be applied in agriculture, suggesting a bright future for its use in this field.

An Efficient Dual Queue Strategy for Improving Storage System Response Times (저장시스템의 응답 시간 개선을 위한 효율적인 이중 큐 전략)

  • Hyun-Seob Lee
    • Journal of Internet of Things and Convergence / v.10 no.3 / pp.19-24 / 2024
  • Recent advances in large-scale data processing technologies such as big data, cloud computing, and artificial intelligence have increased the demand for high-performance storage devices in data centers and enterprise environments. In particular, the fast data response speed of storage devices is a key factor that determines overall system performance. Solid-state drives (SSDs) based on the Non-Volatile Memory Express (NVMe) interface are gaining traction, but new bottlenecks are emerging in the process of handling large data input and output requests from multiple hosts simultaneously. SSDs typically process host requests by sequentially stacking them in an internal queue. When requests with long transfer lengths are processed first, shorter requests wait longer, increasing the average response time. To solve this problem, data transfer timeout and data partitioning methods have been proposed, but they do not provide a fundamental solution. In this paper, we propose a dual queue based scheduling scheme (DQBS), which maintains one queue ordered by request arrival and another ordered by transfer length. The scheme then considers both the request arrival time and the transfer length to determine an efficient data transfer order. This enables the balanced processing of long and short requests, thus reducing the overall average response time. The simulation results show that the proposed method outperforms the existing sequential processing method. This study presents a scheduling technique that maximizes data transfer efficiency in a high-performance SSD environment, which is expected to contribute to the development of next-generation high-performance storage systems.
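To make the dual-queue idea concrete, here is a minimal sketch of a scheduler that keeps one FIFO queue (arrival order) and one priority queue (transfer length) and dispatches whichever head scores as more urgent. The scoring rule below (waiting time minus weighted transfer length) and the weights are hypothetical illustration choices, not the paper's exact DQBS rule.

```python
import heapq
from collections import deque

class DualQueueScheduler:
    """Sketch of a dual-queue scheduler in the spirit of DQBS: a FIFO queue
    preserves arrival order, a heap orders requests by transfer length, and
    the next request is picked from whichever queue head is more urgent."""

    def __init__(self, wait_weight=1.0, length_weight=0.1):
        self.fifo = deque()        # (arrival, length, req_id) in arrival order
        self.by_length = []        # heap keyed by transfer length
        self.dispatched = set()    # req_ids already sent to the device
        self.wait_weight = wait_weight
        self.length_weight = length_weight

    def submit(self, arrival, length, req_id):
        self.fifo.append((arrival, length, req_id))
        heapq.heappush(self.by_length, (length, arrival, req_id))

    def _score(self, arrival, length, now):
        # Longer waits raise urgency; longer transfers lower it.
        return self.wait_weight * (now - arrival) - self.length_weight * length

    def next_request(self, now):
        # Skip entries that were already dispatched via the other queue.
        while self.fifo and self.fifo[0][2] in self.dispatched:
            self.fifo.popleft()
        while self.by_length and self.by_length[0][2] in self.dispatched:
            heapq.heappop(self.by_length)
        if not self.fifo:
            return None
        oldest = self.fifo[0]                      # (arrival, length, id)
        shortest = self.by_length[0]               # (length, arrival, id)
        score_old = self._score(oldest[0], oldest[1], now)
        score_short = self._score(shortest[1], shortest[0], now)
        rid = oldest[2] if score_old >= score_short else shortest[2]
        self.dispatched.add(rid)
        return rid

# Example: a short request submitted after a long one is dispatched first,
# because the long transfer's earlier arrival does not outweigh its length.
sched = DualQueueScheduler()
sched.submit(arrival=0, length=1024, req_id=1)   # long request
sched.submit(arrival=1, length=8, req_id=2)      # short request
print(sched.next_request(now=2), sched.next_request(now=3))   # -> 2 1
```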

Temperature Prediction and Control of Cement Preheater Using Alternative Fuels (대체연료를 사용하는 시멘트 예열실 온도 예측 제어)

  • Baasan-Ochir Baljinnyam;Yerim Lee;Boseon Yoo;Jaesik Choi
    • Resources Recycling / v.33 no.4 / pp.3-14 / 2024
  • The preheating and calcination processes in cement manufacturing, which are crucial for producing clinker, the intermediate product of cement, require a substantial quantity of fossil fuels to generate high-temperature thermal energy. However, owing to the ever-increasing severity of environmental pollution, considerable efforts are being made to reduce carbon emissions from fossil fuels in the cement industry. Several preliminary studies have focused on increasing the usage of alternative fuels such as refuse-derived fuel (RDF). Alternative fuels offer several advantages, such as reduced carbon emissions, mitigated generation of nitrogen oxides, and incineration in preheaters and kilns instead of landfilling. However, owing to the diverse compositions of alternative fuels, estimating their calorific value is challenging. This makes it difficult to maintain preheater stability, thereby limiting the usage of alternative fuels. Therefore, in this study, a model based on deep neural networks is developed to accurately predict the preheater temperature and propose optimal fuel input quantities using explainable artificial intelligence. Utilizing the proposed model at actual preheating process sites resulted in a 5% reduction in fossil fuel usage, a 5%p increase in the alternative-fuel substitution rate, and a 35% reduction in preheater temperature fluctuations.
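As a rough illustration of the kind of neural-network regression the abstract describes, the sketch below fits a small multilayer perceptron to predict a temperature target from process features. The feature names and synthetic data are placeholders; the paper's actual architecture, inputs, and explainability method are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical process features: fossil fuel feed, RDF feed, raw-meal feed,
# fan speed (placeholder variables, not the paper's actual inputs).
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))
# Synthetic target standing in for preheater temperature readings (deg C).
y = 850 + 60 * X[:, 0] + 40 * X[:, 1] - 25 * X[:, 2] + rng.normal(0, 5, 500)

# Small fully connected network; scaling the inputs first is standard practice.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), solver="lbfgs",
                 max_iter=2000, random_state=0),
)
model.fit(X[:400], y[:400])
print("held-out R^2:", round(model.score(X[400:], y[400:]), 3))
```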

Development of an AI Model to Determine the Relationship between Cerebrovascular Disease and the Work Environment as well as Analysis of Consistency with Expert Judgment (뇌심혈관 질환과 업무 환경의 연관성 판단을 위한 AI 모델의 개발 및 전문가 판단과의 일치도 분석)

  • Juyeon Oh;Ki-bong Yoo;Ick Hoon Jin;Byungyoon Yun;Juho Sim;Heejoo Park;Jongmin Lee;Jian Lee;Jin-Ha Yoon
    • Journal of Korean Society of Occupational and Environmental Hygiene / v.34 no.3 / pp.202-213 / 2024
  • Introduction: Acknowledging the global issue of diseases potentially caused by overwork, this study aims to develop an AI model to help workers understand the connection between cerebrocardiovascular diseases and their work environment. Materials and methods: The model was trained using medical and legal expertise along with data from 2021 occupational disease adjudication certificates issued by the Industrial Accident Compensation Insurance and Prevention Service. The Polyglot-ko-5.8B model, which is effective for processing Korean, was utilized. Model performance was evaluated through accuracy, precision, sensitivity, and F1-score metrics. Results: The model trained on a comprehensive dataset, including expert knowledge and actual case data, outperformed the others with an accuracy, precision, sensitivity, and F1-score of 0.91, 0.89, 0.84, and 0.87, respectively. However, it still had limitations in responding to certain scenarios. Discussion: The comprehensive model proved most effective in diagnosing work-related cerebrocardiovascular diseases, highlighting the significance of integrating actual case data in AI model development. Despite its efficacy, the model showed limitations in handling diverse cases and offering health management solutions. Conclusion: The study succeeded in creating an AI model to discern the link between work factors and cerebrocardiovascular diseases, showcasing the highest efficacy with the comprehensively trained model. Future enhancements towards a template-based approach and the development of a user-friendly chatbot web UI for workers are recommended to address the model's current limitations.
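For readers who want to see how the four reported metrics relate, the sketch below computes accuracy, precision, sensitivity (recall), and F1-score with scikit-learn. The confusion-matrix counts are invented solely to roughly reproduce the reported figures; they are not the study's data.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Invented counts chosen only so the scores land near the reported values
# (accuracy 0.91, precision 0.89, sensitivity 0.84, F1 0.87).
tp, fn, fp, tn = 84, 16, 10, 190
y_true = [1] * (tp + fn) + [0] * (fp + tn)
y_pred = [1] * tp + [0] * fn + [1] * fp + [0] * tn

print("accuracy :", round(accuracy_score(y_true, y_pred), 2))
print("precision:", round(precision_score(y_true, y_pred), 2))
print("recall   :", round(recall_score(y_true, y_pred), 2))   # sensitivity
print("f1-score :", round(f1_score(y_true, y_pred), 2))
```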

A Study on Information Bias Perceived by Users of AI-driven News Recommendation Services: Focusing on the Establishment of Ethical Principles for AI Services (AI 자동 뉴스 추천 서비스 사용자가 인지하는 정보 편향성에 대한 연구: AI 서비스의 윤리 원칙 수립을 중심으로)

  • Minjung Park;Sangmi Chai
    • Knowledge Management Research / v.25 no.3 / pp.47-71 / 2024
  • AI-driven news recommendation systems are widely used today, providing personalized news consumption experiences. However, there are significant concerns that these systems might increase users' information bias by mainly showing information from limited perspectives. This lack of diverse information access can prevent users from forming well-rounded viewpoints on specific issues, leading to social problems such as filter bubbles or echo chambers. These issues can deepen social divides and information inequality. This study aims to explore how AI-based news recommendation services affect users' perceived information bias and to create a foundation for ethical principles in AI services. Specifically, the study looks at the impact of ethical principles such as accountability, the right to explanation, the right to choose, and privacy protection on users' perceptions of information bias in AI news systems. The findings emphasize the need for AI service providers to strengthen ethical standards to improve service quality and build user trust for long-term use. By identifying which ethical principles should be prioritized in the design and implementation of AI services, this study aims to help develop corporate ethical frameworks, internal policies, and national AI ethics guidelines.

A Study of Influencing Factors for Intentional Inaccurate Information Provision in Conversations with Chatbots: In the Context of Online Dating Services (챗봇과의 대화에서 의도적인 부정확한 정보 제공에 대한 영향 요인 연구: 온라인 데이팅 서비스 이용 상황에서)

  • Chanhee Kwak;Junyeong Lee;Jinyoung Min;HanByeol Stella Choi
    • Knowledge Management Research / v.25 no.3 / pp.73-98 / 2024
  • Chatbots are becoming increasingly popular as interactive communication tools that provide not only convenience but also a friendly and humanized experience. Due to the interactive nature of chatbots, they can exchange information with users to perform various tasks, and users sometimes intentionally provide inaccurate information. Considering the social presence of conversational agents, the perceived risk of providing personal information, and trust in algorithms as key influencing factors, this study explores the effects of those factors on the intention to provide inaccurate information in the context of online dating services and examines whether these effects vary across types of conversational agents. We conducted a structural equation model analysis using data collected from Amazon Mechanical Turk (MTurk). The results showed significant relationships between the factors and the intention to provide inaccurate information and empirically confirmed that those relationships vary by type of conversational agent. Our findings have academic implications for the behavior of providing inaccurate information in online environments and practical implications for designing chatbots to reduce such intentions. We also discuss the ethical implications of the consequences of inaccurate information online.

Development and Validation of a Korean Generative AI Literacy Scale (한국형 생성 인공지능 리터러시 척도 개발 및 타당화)

  • Hwan-Ho Noh;Hyeonjeong Kim;Minjin Kim
    • Knowledge Management Research / v.25 no.3 / pp.145-171 / 2024
  • Literacy initially referred to the ability to read, understand, and process written information. With the advancement of digital technology, the scope of literacy expanded to include the access and use of digital information, evolving into the concept of digital literacy. The application and purpose of digital literacy vary across different fields, leading to the use of various terminologies. This study focuses on generative artificial intelligence (AI), which is gaining increasing importance in the AI era, to assess users' literacy levels. The research aimed to extend the concept of literacy proposed in previous studies and develop a tool suitable for Korean users. Through exploratory factor analysis, we identified that generative AI literacy consists of four factors: AI utilization ability, critical evaluation, ethical use, and creative application. Subsequently, confirmatory factor analysis validated the statistical appropriateness of the model structure composed of these four factors. Additionally, correlation analyses between the newly developed literacy tool and existing AI literacy scales and AI service evaluation tools revealed significant relationships, confirming the validity of the tool. Finally, the implications, limitations, and directions for future research are discussed.
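The exploratory-factor-analysis step described here can be sketched with the factor_analyzer package. The survey data below is a random placeholder rather than the study's actual instrument, and the four-factor varimax setup is an assumption mirroring the four reported dimensions.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Placeholder responses: 300 respondents x 16 Likert-type items
# (the real scale items are in the paper; these are random stand-ins).
rng = np.random.default_rng(42)
items = pd.DataFrame(rng.integers(1, 6, size=(300, 16)),
                     columns=[f"item{i+1}" for i in range(16)])

# Exploratory factor analysis with four factors, echoing the reported
# dimensions (utilization, critical evaluation, ethical use, creative
# application); varimax rotation is an assumed choice.
fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["F1", "F2", "F3", "F4"])
print(loadings.round(2))
```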

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.139-157 / 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or did not (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable against bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for preventing overfitting. The prediction accuracy on this second portion was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
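Below is a minimal sketch of the technique the abstract describes: a KNN random-subspace ensemble in which each base classifier's k value and feature subset are encoded in a chromosome and tuned against a hold-out split. A deliberately simplified, mutation-only evolutionary loop stands in for the paper's genetic algorithm, and the dataset, population size, and fitness settings are placeholder choices, not the study's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

N_BASE, N_FEAT = 10, 24   # base classifiers; 24 features echo the abstract
X, y = make_classification(n_samples=600, n_features=N_FEAT, random_state=0)
X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)

def random_chromosome():
    ks = rng.integers(1, 16, size=N_BASE)                     # k per base model
    masks = rng.integers(0, 2, size=(N_BASE, N_FEAT)).astype(bool)
    masks[~masks.any(axis=1), 0] = True                       # no empty subsets
    return ks, masks

def ensemble_predict(ks, masks, X_fit, y_fit, X_eval):
    votes = []
    for k, m in zip(ks, masks):
        clf = KNeighborsClassifier(n_neighbors=int(k)).fit(X_fit[:, m], y_fit)
        votes.append(clf.predict(X_eval[:, m]))
    return (np.mean(votes, axis=0) >= 0.5).astype(int)        # majority vote

def fitness(chrom):
    # Hold-out accuracy is the fitness, used to curb overfitting.
    ks, masks = chrom
    pred = ensemble_predict(ks, masks, X_tr, y_tr, X_hold)
    return (pred == y_hold).mean()

def mutate(chrom):
    ks, masks = chrom[0].copy(), chrom[1].copy()
    i = rng.integers(N_BASE)
    ks[i] = rng.integers(1, 16)                                # perturb one k
    masks[i, rng.integers(N_FEAT)] ^= True                     # flip one feature bit
    if not masks[i].any():
        masks[i, 0] = True
    return ks, masks

# Tiny elitist evolutionary loop (mutation only, for brevity).
population = [random_chromosome() for _ in range(8)]
for _ in range(15):
    population.sort(key=fitness, reverse=True)
    population = population[:4] + [mutate(c) for c in population[:4]]

best = max(population, key=fitness)
print("best hold-out accuracy:", round(fitness(best), 3))
```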

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • The deep learning framework is software designed to help develop deep learning models. Some of its important functions include "automatic differentiation" and "utilization of GPU". The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook appear to have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare some of the deep learning frameworks. We therefore compare three deep learning frameworks that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is the ability to perform automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained. With these partial derivatives, the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, in terms of coding convenience, the order is CNTK, Tensorflow, and then Theano. The criterion is simply based on the length of the code; the learning curve and the ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. The reason that CNTK and Tensorflow are easier to implement with is that these frameworks provide more abstraction than Theano. We need to mention, however, that low-level coding is not always bad; it gives us flexibility in coding. With low-level coding such as in Theano, we can implement and test any new deep learning model or any new search method that we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept identical: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN.
If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. And for those who are still learning deep learning models, the availability of sufficient examples and references matters as well.
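The automatic-differentiation idea described in this abstract can be made concrete with a toy example: represent the expression as a computational graph of nodes and edges, store the local partial derivative on each edge, and apply the chain rule backwards from the output. The sketch below is a didactic, minimal reverse-mode implementation in plain Python; it is not the internal mechanism of Theano, Tensorflow, or CNTK, and it skips details such as topological ordering of the graph.

```python
class Node:
    """Tiny reverse-mode autodiff node: each operation records its parents
    together with the local partial derivative along each incoming edge,
    and backward() accumulates gradients via the chain rule."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # tuples of (parent_node, local_derivative)
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Node(self.value * other.value,
                    ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# f(x, w, b) = x * w + b  ->  df/dx = w, df/dw = x, df/db = 1
x, w, b = Node(3.0), Node(2.0), Node(1.0)
f = x * w + b
f.backward()
print(f.value, x.grad, w.grad, b.grad)   # 7.0 2.0 3.0 1.0
```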

Development on Early Warning System about Technology Leakage of Small and Medium Enterprises (중소기업 기술 유출에 대한 조기경보시스템 개발에 대한 연구)

  • Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.143-159 / 2017
  • Due to the rapid development of IT in recent years, the leakage not only of personal information but also of the key technologies and information that companies possess has become an important issue. For an enterprise, its core technology is vital to its survival and to sustaining a competitive advantage, and there have recently been many cases of technology infringement. Technology leaks not only cause tremendous financial losses, such as falling stock prices, but also damage corporate reputation and delay corporate development. For SMEs, where core technology accounts for a larger share of the enterprise than in large corporations, preparing for technology leakage can be seen as indispensable to the firm's survival. As the necessity and importance of Information Security Management (ISM) emerge, enterprises need to check for and prepare against the threat of technology infringement early on. Nevertheless, previous studies are dominated by policy alternatives, which account for about 90% of the literature; as a research method, literature analysis accounted for 76%, while empirical and statistical analysis accounted for a relatively low 16%. For this reason, management models and prediction models that prevent technology leakage and fit the characteristics of SMEs need to be studied. In this study, before the empirical analysis, we drew on many previous studies of factors affecting technology leakage to define technical characteristics from a technology-value perspective and organizational factors from a technology-control perspective. A total of 12 related variables were selected for the two factors, and the analysis was performed with these variables. We use three-year data from the "Small and Medium Enterprise Technical Statistics Survey" conducted by the Small and Medium Business Administration. The analysis data cover 30 industries based on the 2-digit KSIC classification, and the number of companies affected by technology leakage is 415 over the three years. From this data, we conducted randomized sampling within the same industry (based on KSIC) and the same year, and prepared and analyzed 1:1 matched samples of affected companies (n = 415) and unaffected firms (n = 415). In this research, we conduct an empirical analysis to identify factors influencing technology leakage and propose an early warning system based on data mining. Specifically, based on the questionnaire survey of SMEs conducted by the Small and Medium Business Administration, we classified the factors that affect the technology leakage of SMEs into two groups (technology characteristics and organization characteristics). We then propose a model that signals the possibility of technology infringement by applying the Support Vector Machine (SVM), one of the various data mining techniques, to the factors proven significant through statistical analysis. Unlike previous studies, this study covers cases from various industries over several years, and an artificial intelligence model was developed through it. In addition, since the factors are derived empirically from actual cases of SME technology leakage, it will be possible to suggest to policy makers which companies should be managed from the viewpoint of technology protection.
Finally, the early warning model on the possibility of technology leakage proposed in this study is expected to give enterprises and the government an opportunity to prevent technology leakage in advance.
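A minimal sketch of the SVM-based warning model described in this abstract appears below: a classifier trained on 1:1 matched leakage and non-leakage samples whose predicted probability can be read as a leakage-risk score. The feature matrix is synthetic, the 12 variables are placeholder columns standing in for the technology and organization characteristics, and the RBF kernel and 0.7 threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the 1:1 matched sample: 415 leakage-affected and
# 415 unaffected SMEs described by 12 technology/organization variables.
rng = np.random.default_rng(1)
n_per_class, n_vars = 415, 12
X = np.vstack([
    rng.normal(0.3, 1.0, size=(n_per_class, n_vars)),    # affected firms
    rng.normal(-0.3, 1.0, size=(n_per_class, n_vars)),   # unaffected firms
])
y = np.r_[np.ones(n_per_class), np.zeros(n_per_class)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                           stratify=y, random_state=1)

# RBF-kernel SVM with probability estimates, so the output can serve as
# an early-warning risk score against a chosen threshold.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]
print("test accuracy:", round(model.score(X_te, y_te), 3))
print("firms above 0.7 risk threshold:", int((risk > 0.7).sum()))
```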