• Title/Summary/Keyword: AI / Artificial Intelligence


A Study on the Potential Use of ChatGPT in Public Design Policy Decision-Making (공공디자인 정책 결정에 ChatGPT의 활용 가능성에 관한연구)

  • Son, Dong Joo; Yoon, Myeong Han
    • Journal of Service Research and Studies / v.13 no.3 / pp.172-189 / 2023
  • This study investigated the potential contribution of ChatGPT, a massive language and information model, to the decision-making process of public design policies, focusing on the characteristics inherent to public design. Public design applies the principles and approaches of design to address societal issues and aims to improve public services. Formulating public design policies and plans must be based on extensive data, including the general status of the area, population demographics, infrastructure, resources, safety, existing policies, legal regulations, landscape, spatial conditions, the current state of public design, and regional issues. Public design is therefore a field of design research that encompasses a vast amount of data and language. Considering the rapid advancement of artificial intelligence technology and the significance of public design, this study explores how massive language and information models like ChatGPT can contribute to public design policies. We also reviewed the concepts and principles of public design and its role in policy development and implementation, and examined the features of ChatGPT, its application cases, and preceding research to determine its utility in the decision-making process of public design policies. The study found that ChatGPT could offer substantial language information during the formulation of public design policies and assist in decision-making; in particular, it proved useful in providing diverse perspectives and swiftly supplying information necessary for policy decisions. The trend of utilizing artificial intelligence in government policy development was also confirmed through various studies. However, the use of ChatGPT unveiled ethical, legal, and personal privacy issues; notably, ethical dilemmas were raised, along with issues of bias and fairness. To apply ChatGPT practically in the decision-making process of public design policies, it is necessary, first, to enhance the capacities of policy developers and public design experts and, second, to create a provisional regulation, an 'Ordinance on the Use of AI in Policy', to continuously refine its utilization until legal adjustments are made. Consequently, employing massive language and information models like ChatGPT in the public design field, which harbors a vast amount of language, holds substantial value.
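As a concrete illustration of the kind of language-model query such a policy workflow would involve, a minimal sketch using the OpenAI Python SDK (v1+) follows. The prompt, the system role, and the model name are illustrative assumptions, not details from the study.

```python
# Illustrative sketch only: how a policy team might query a large language
# model for background information during public design policy formulation.
# Assumes the OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY in the
# environment; the model name and prompt are hypothetical, not from the study.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

prompt = (
    "List the demographic, infrastructure, safety, and landscape factors a "
    "local government should review before drafting a public design policy."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        {"role": "system", "content": "You are a public design policy analyst."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```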

Analysis of the Impact of Satellite Remote Sensing Information on the Prediction Performance of Ungauged Basin Stream Flow Using Data-driven Models (인공위성 원격 탐사 정보가 자료 기반 모형의 미계측 유역 하천유출 예측성능에 미치는 영향 분석)

  • Seo, Jiyu; Jung, Haeun; Won, Jeongeun; Choi, Sijung; Kim, Sangdan
    • Journal of Wetlands Research / v.26 no.2 / pp.147-159 / 2024
  • A lack of streamflow observations makes model calibration difficult and limits model performance improvement. Satellite-based remote sensing products offer a new alternative, as they can be actively utilized to obtain hydrological data. Recently, several studies have shown that artificial intelligence-based solutions are more appropriate than traditional conceptual and physical models. In this study, a data-driven approach combining various recurrent neural networks and decision tree-based algorithms is proposed, and the use of satellite remote sensing information for AI training is investigated. The satellite imagery used in this study is from MODIS and SMAP. The proposed approach is validated using publicly available data from 25 watersheds. Inspired by the traditional regionalization approach, a strategy is adopted of training one data-driven model on data integrated from all basins, and the potential of the approach is evaluated in a leave-one-out cross-validation regionalization setting, predicting streamflow in each basin with a model trained on the others. The GRU + LightGBM combination was found to be suitable for the target basins and showed good streamflow prediction performance in ungauged basins (the average model efficiency coefficient for predicting daily streamflow in the 25 ungauged basins was 0.7187), except for periods when streamflow is very small. The influence of satellite remote sensing information was found to be up to 10%, and the additional satellite information had a greater impact on streamflow prediction during low-flow or dry seasons than during wet or normal seasons.
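A minimal sketch of the leave-one-out regionalization setting described above follows, interpreting the model efficiency coefficient as the Nash-Sutcliffe efficiency. For brevity the GRU + LightGBM combination is reduced to LightGBM alone, and the per-basin data are random placeholders; both simplifications are assumptions, not the study's setup.

```python
# Sketch of leave-one-out regionalization: for each target basin, one model is
# trained on all other basins and evaluated on the held-out (ungauged) target.
import numpy as np
import lightgbm as lgb

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency (the 'model efficiency coefficient')."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(42)
# Placeholder data: per-basin arrays of daily forcing features and streamflow.
basins = {f"basin_{i:02d}": (rng.normal(size=(365, 8)),
                             rng.gamma(2.0, 1.0, size=365)) for i in range(25)}

scores = {}
for target in basins:
    # Pool training data from every basin except the held-out target.
    X_train = np.vstack([X for b, (X, _) in basins.items() if b != target])
    y_train = np.concatenate([y for b, (_, y) in basins.items() if b != target])

    model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05)
    model.fit(X_train, y_train)

    X_test, y_test = basins[target]
    scores[target] = nse(y_test, model.predict(X_test))

print("mean NSE over 25 held-out basins:", np.mean(list(scores.values())))
```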

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong; SangYoun Kim; SungKu Heo; Shahzeb Tariq; MinHyeok Shin; ChangKyoo Yoo
    • Korean Chemical Engineering Research / v.61 no.4 / pp.523-541 / 2023
  • As accessibility to 3D printers increases, exposure to the chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of toxicity prediction using in silico techniques is limited by missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in molecular descriptors. First, the MissForest algorithm was used to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four machine learning models (decision tree, random forest, XGBoost, and SVM), an ML-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated through Tree-SHAP (SHapley Additive exPlanations), one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure dataset approximately 2.5-fold compared with the existing data. Based on the imputed molecular descriptor dataset, the data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Lastly, Tree-SHAP analysis demonstrated that the model attains its high prediction performance by identifying the key molecular descriptors most strongly correlated with the toxicity indices. The proposed QSAR model based on the data-centric XAI approach can therefore be extended to predict the toxicity of potential pollutants from emerging 3D printing chemicals and from chemical, semiconductor, or display processes.
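The imputation-then-predict-then-explain pipeline can be sketched as below. sklearn's IterativeImputer with a random-forest estimator is used as a MissForest-style stand-in (an assumption; the study uses MissForest itself), the descriptor matrix and toxicity target are synthetic placeholders, and XGBoost with Tree-SHAP stands in for the four-model comparison.

```python
# Sketch: MissForest-style imputation of missing molecular descriptors,
# a tree-based QSAR regressor, and Tree-SHAP attribution. Data are placeholders.
import numpy as np
import shap
import xgboost as xgb
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # placeholder descriptor matrix
X[rng.random(X.shape) < 0.2] = np.nan     # inject ~20% missing values
y = rng.normal(size=200)                  # placeholder toxicity index (e.g. Log P)

# MissForest-style imputation: round-robin regression on each incomplete
# descriptor, using a random forest as the per-column estimator.
imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50),
                           max_iter=5, random_state=0)
X_imputed = imputer.fit_transform(X)

# Tree-based QSAR model on the imputed descriptor set.
model = xgb.XGBRegressor(n_estimators=300, max_depth=4)
model.fit(X_imputed, y)

# Tree-SHAP: per-descriptor attributions for the model's predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_imputed)
print("mean |SHAP| per descriptor:", np.abs(shap_values).mean(axis=0))
```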

The study of heavy rain warning in Gangwon State using threshold rainfall (침수유발 강우량을 이용한 강원특별자치도 호우특보 기준에 관한 연구)

  • Lee, Hyeonji; Kang, Dongho; Lee, Iksang; Kim, Byungsik
    • Journal of Korea Water Resources Association / v.56 no.11 / pp.751-764 / 2023
  • Gangwon State is centered on the Taebaek Mountains, with climate characteristics that differ greatly from region to region, and localized heavy rainfall occurs frequently. Heavy rain disasters have short durations and high spatial and temporal variability, causing many casualties and much property damage. In the last 10 years (2012~2021), Gangwon State experienced 28 heavy rain disasters, with an average damage cost of 45.6 billion won. To reduce heavy rain disasters, a disaster management plan must be established at the local level; in particular, the current criteria for heavy rain warnings are uniform and do not consider local characteristics. Therefore, this study proposes heavy rain warning criteria that consider the threshold rainfall for the advisory areas in Gangwon State. In the analysis of representative values of threshold rainfall by advisory area, the mean value was similar to the current criterion for issuing a heavy rain warning and was selected as the warning criterion in this study. The rainfall events of Typhoon Mitag in 2019, Typhoons Maysak and Haishen in 2020, and Typhoon Khanun in 2023 were applied to review the criteria; hit-rate verification showed that the proposed criteria reflect the actual warnings well, with 72% accuracy in the Gangneung Plain and 98% in Wonju. The warning criteria in this study correspond to the crisis warning stages (Attention, Caution, Alert, and Danger), which should make preemptive heavy rain disaster response possible. The results are expected to complement the uniform decision-making system for responding to heavy rain disasters and can serve as a basis for heavy rain warnings that consider regional disaster risk.
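For reference, the abstract does not state the hit-rate formula; the standard forecast-verification definition, assumed here, computes it from a 2x2 contingency table of issued warnings versus observed events:

```latex
% Hit rate (probability of detection):
%   a = hits   (warning issued and heavy rain event occurred)
%   c = misses (no warning issued but heavy rain event occurred)
\[
  \mathrm{HR} \;=\; \frac{a}{a + c}, \qquad 0 \le \mathrm{HR} \le 1
\]
```

Under this reading, the reported 72% (Gangneung Plain) and 98% (Wonju) values are the fractions of observed events for which the proposed criteria would have issued a warning.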

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo; Lee, Junyeong; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have released their internally developed AI technologies to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework whose technological factors are perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability; whose organizational factors are management support and knowledge and expertise; and whose environmental factors are the availability of technology skills and services and platform long-term viability. A case study analysis of three companies' adoption cases (two successes and one failure) revealed that seven of the eight TOE factors, along with several factors concerning the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to adopt a deep learning open source framework successfully, at the stage of using the framework, first, a hardware (GPU) environment for the AI R&D group must be provided to support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers should be supported by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three conditions at the usage stage, companies will increase the number of deep learning research developers, their ability to use the framework, and the available GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves the developers' research efficiency and effectiveness, for example by automatically optimizing the hardware (GPU) environment. Fifth, a deep learning framework tool service team complements the developers' expertise by sharing information from the external open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting a deep learning framework is proposed: defining the project problem, confirming that the deep learning methodology is the right method, confirming that the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework; once they are clear, the next two steps can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five success factors must be realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Application of Responsive Identity Design in Sejong City: Focusing on Minimalism (세종특별자치시 반응형 아이덴티티 디자인 적용: 미니멀리즘을 중심으로)

  • Cha, Hyun-Ji
    • The Journal of the Korea Contents Association / v.20 no.11 / pp.656-668 / 2020
  • Sejong City was launched in July 2012 with an initial focus on the relocation of central administrative agencies, but since 2019 it has been transforming from an administrative city into a fourth-industry smart city, a shift reinforced by the implementation of the Korean New Deal in 2020. Its identity design needs to be re-evaluated accordingly. In particular, the web environment calls for an optimized identity design in response to rapid changes in information technology such as wearables and the Internet of Things. As responsive web sites, which present screens optimized for each device, have increased in number, identity must be communicated to users intuitively and designs applied so that the city is clearly differentiated from other cities and easy to empathize with. Prior to the study, we reviewed earlier work on changes in the web environment and the responsive web, analyzed responsive identity design, and applied the characteristics of minimalism step by step. Based on this, we surveyed experts and non-experts on proposals applying the minimalist characteristics of responsive identity (simplicity, repetition, and spatiality) and found that the identity was easily and intuitively recognizable even in a small web environment such as mobile. We therefore hope that Sejong City's identity will continue to be studied in various ways and managed efficiently, so that an identity appropriate to the changing times can be established.

An Ontology-based Generation of Operating Procedures for Boiler Shutdown : Knowledge Representation and Application to Operator Training (온톨로지 기반의 보일러 셧다운 절차 생성 : 지식표현 및 훈련시나리오 활용)

  • Park, Myeongnam; Kim, Tae-Ok; Lee, Bongwoo; Shin, Dongil
    • Journal of the Korean Institute of Gas / v.21 no.4 / pp.47-61 / 2017
  • The usefulness of an operator safety training model for large plants depends on the versatility and accuracy of its operating procedures, obtained through detailed analysis of the various types of risk associated with operation, and on a systematic representation of knowledge. In this study, we consider artificial intelligence planning methods for the generation of operating procedures; classify procedure knowledge into general actions, operator actions, and operator technical terms; and define a knowledge representation ontology that takes the sharing and reuse of knowledge into account. To expand and refine the general operations of a procedure, we apply a Hierarchical Task Network (HTN). Case studies of an actual boiler plant are classified according to operating conditions, states, and the operating objectives of each unit, and general emergency shutdown procedures are generated to confirm the applicability of the proposed method. These results, based on systematic knowledge representation, can be readily applied to general plant operating procedures and operator safety training scenarios, and will be used for the automatic generation of safety training scenarios.
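A minimal sketch of the HTN decomposition named above follows: compound tasks are recursively expanded by methods until only primitive operator actions remain. The task and method names are invented for illustration and are not taken from the paper's boiler ontology.

```python
# Sketch of hierarchical task network (HTN) decomposition for an operating
# procedure. Task/method names below are illustrative placeholders.
from typing import Dict, List

# Methods map a compound task to an ordered list of subtasks.
methods: Dict[str, List[str]] = {
    "shutdown_boiler": ["reduce_load", "stop_fuel", "purge_furnace", "secure_boiler"],
    "stop_fuel": ["close_fuel_valve", "verify_flame_out"],
    "secure_boiler": ["close_steam_valve", "open_drain_valve"],
}

def decompose(task: str) -> List[str]:
    """Expand a task into a flat sequence of primitive operator actions."""
    if task not in methods:          # primitive action: no further expansion
        return [task]
    plan: List[str] = []
    for subtask in methods[task]:
        plan.extend(decompose(subtask))
    return plan

print(decompose("shutdown_boiler"))
# -> ['reduce_load', 'close_fuel_valve', 'verify_flame_out', 'purge_furnace',
#     'close_steam_valve', 'open_drain_valve']
```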

The Effect of Planned Behavior of University Student who Participates in Education for Starting Agricultural Business on Entrepreneurship and Will to Start the Business (창업농교육 참여대학생의 계획적행동이 기업가정신과 창업의지에 미치는 영향)

  • Lee, So-Young
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.13 no.1 / pp.145-155 / 2018
  • Cultivating the entrepreneurship and start-up will of university students majoring in agriculture and life sciences, and of college students majoring in agriculture, as future leaders of the sector is a very important subject of study. However, entrepreneurship and the founding of businesses and ventures based on creative technology and innovative management have scarcely been discussed, because the majority of agricultural businesses have traditionally been small, simple operations run by smallholder farmers; education for starting an agricultural business has been neglected even in developed countries. ICT- and AI (artificial intelligence)-based smart agriculture in the age of the 4th Industrial Revolution is emerging as a new growth engine for the agriculture industry, so interest in agricultural start-ups and venture agriculture is growing. Accordingly, this study derives the factors through which the planned behavior of university students participating in education for starting agricultural businesses and agricultural ventures influences their entrepreneurship and will to start a business, and conducts an empirical analysis. Businesspeople newly entering the agriculture industry should pursue technical innovation and creative business activities in order to compete in the industry.

Artificial Intelligence Algorithms, Model-Based Social Data Collection and Content Exploration (소셜데이터 분석 및 인공지능 알고리즘 기반 범죄 수사 기법 연구)

  • An, Dong-Uk; Leem, Choon Seong
    • The Journal of Bigdata / v.4 no.2 / pp.23-34 / 2019
  • Recently, crime that exploits digital platforms has been increasing continuously: about 140,000 cases occurred in 2015 and about 150,000 in 2016. There is therefore a limit to handling these online crimes with old-fashioned investigation techniques. The manual online searches and cognitive investigation methods that investigators broadly use today are not enough to cope proactively with rapidly changing crimes, and the fact that content is posted to unspecified users of social media makes investigation still more difficult. Considering the characteristics of the online media in which infringement crimes occur, this study suggests site-based collection and the Open API among web content collection methods. Because illegal content is posted and deleted quickly, and new words and altered spellings are coined quickly and in great variety, manually registered dictionary-based morphological analysis cannot recognize them promptly. To solve this problem, we propose augmenting dictionary-based morphological analysis with a tokenizing method based on WPM (Word Piece Model), a data preprocessing approach for quickly recognizing and responding to illegal content posted in online infringement crimes. In the data analysis, optimal precision is verified through a vote-based ensemble of supervised classification models for the investigation of illegal content. This study applies the classification model to cases of illegal multilevel businesses to proactively recognize crimes that infringe on the public economy, and presents an empirical study on effectively handling social data collection and content investigation.
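The two mechanisms named in the abstract, WordPiece (WPM) subword tokenization and a vote-based ensemble classifier, can be sketched as follows; the toy corpus, labels, and model choices are placeholders, not the study's data or models.

```python
# Sketch: WordPiece subword tokenization (so newly coined or altered terms
# still decompose into known subword units) feeding a vote-based ensemble.
from tokenizers import Tokenizer
from tokenizers.models import WordPiece
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordPieceTrainer
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

corpus = ["high return multilevel scheme join now",
          "weekly community gardening newsletter",
          "guaranteed profit recruit members fast",
          "local library opening hours update"]
labels = [1, 0, 1, 0]  # 1 = suspicious, 0 = benign (toy labels)

# Train a small WordPiece vocabulary on the corpus.
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
tokenizer.train_from_iterator(
    corpus, WordPieceTrainer(vocab_size=200, special_tokens=["[UNK]"]))

def to_subwords(text: str) -> str:
    return " ".join(tokenizer.encode(text).tokens)

X = TfidfVectorizer().fit_transform(to_subwords(t) for t in corpus)

# Vote-based ensemble: majority vote over heterogeneous classifiers.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("rf", RandomForestClassifier(n_estimators=100))],
    voting="hard",
)
ensemble.fit(X, labels)
print(ensemble.predict(X))
```

Because unseen coinages decompose into known subword units rather than falling out of a fixed dictionary, the downstream classifier can still score posts containing newly invented terms.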


Thermal Compression of Copper-to-Copper Direct Bonding by Copper films Electrodeposited at Low Temperature and High Current Density (저온 및 고전류밀도 조건에서 전기도금된 구리 박막 간의 열-압착 직접 접합)

  • Lee, Chae-Rin; Lee, Jin-Hyeon; Park, Gi-Mun; Yu, Bong-Yeong
    • Proceedings of the Korean Institute of Surface Engineering Conference / 2018.06a / pp.102-102 / 2018
  • The electronics industry requires ever finer device sizes and higher device performance. Therefore, 3-D die stacking technologies such as TSV (through silicon via) and micro-bumps have been used, and 3-D structures such as chip-to-chip (C2C) and chip-to-wafer (C2W) have become practicable. These technologies led to the appearance of HBM (high bandwidth memory), a type of memory composed of several stacked layers of memory chips connected by TSVs and micro-bumps; HBM therefore has lower RC delay and higher data-processing performance than conventional memory. Moreover, the growth of IT fields such as AI (artificial intelligence), IoT (internet of things), and VR (virtual reality) demands smaller pitch sizes and higher densities in microelectronics. To obtain fine pitches, methods such as copper pillars, nickel diffusion barriers, and tin-silver or tin-silver-copper bumps have been utilized. TCB (thermal compression bonding) and reflow (thermal aging) are the conventional methods for bonding tin-silver or tin-silver-copper caps in the temperature range of 200 to 300 °C. However, because operating temperatures above the melting point of tin (232 °C) cause tin overflow, there is a danger of bump-bridge failure in fine-pitch bonding. Furthermore, controlling the phase of the IMC (intermetallic compound) located between the nickel diffusion barrier and the bump poses many problems; for example, an excess of Kirkendall voids, which provide sites for brittle fracture, forms in the IMC layer after reflow. The essential solution to these difficulties is copper-to-copper direct bonding below 300 °C. In this study, copper-to-copper direct bonding was performed below 300 °C. The driving force of the bonding was the self-annealing behavior of electrodeposited Cu with high defect density, which originates in the high defect density and the non-equilibrium grain boundaries at triple junctions. Cu films were electrodeposited at high current density and low bath temperature onto copper-coated silicon wafers, and the copper-copper bonding experiments were conducted using a thermal pressing machine. The thermal and pressure parameters were varied to obtain properly bonded specimens. The bonded interface was characterized by SEM (scanning electron microscopy) and OM (optical microscopy), and the density of grain boundaries and defects was examined by TEM (transmission electron microscopy).
