• Title/Summary/Keyword: A.I. (Artificial Intelligence)


A Study on the Influence of IT Education Service Quality on Educational Satisfaction, Work Application Intention, and Recommendation Intention: Focusing on the Moderating Effects of Learner Position and Participation Motivation (IT교육 서비스품질이 교육만족도, 현업적용의도 및 추천의도에 미치는 영향에 관한 연구: 학습자 직위 및 참여동기의 조절효과를 중심으로)

  • Kang, Ryeo-Eun; Yang, Sung-Byung
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.169-196 / 2017
  • The fourth industrial revolution represents a revolutionary change in the business environment and its ecosystem, which is a fusion of Information Technology (IT) and other industries. In line with these recent changes, the Ministry of Employment and Labor of South Korea announced 'the Fourth Industrial Revolution Leader Training Program,' which includes five key support areas: (1) smart manufacturing, (2) Internet of Things (IoT), (3) big data including Artificial Intelligence (AI), (4) information security, and (5) bio innovation. This program offers a glimpse of the South Korean government's efforts and willingness to produce leading human resources with advanced IT knowledge in various convergence-technology-related and newly emerging industries. To nurture excellent IT manpower in preparation for the fourth industrial revolution, the role of educational institutions capable of providing high-quality IT education services is of the utmost importance. However, most IT educational institutions have had difficulty providing customized IT education services that meet the needs of consumers (i.e., learners), instead remaining within the traditional framework of supplier-oriented education services. Previous studies have found that the provision of learner-centered, customized education services leads to high learner satisfaction, and that higher satisfaction increases not only task performance and the likelihood of business application but also learners' recommendation intention. However, since research has not yet considered both the antecedent and consequent factors of learner satisfaction in a comprehensive way, more empirical research on this topic is highly desirable. With the advent of the fourth industrial revolution, rising interest in various convergence technologies utilizing information technology (IT) has brought a growing realization of the important role played by IT-related education services. Nevertheless, research on the role of IT education service quality in the context of IT education is relatively scarce, even though research on general education service quality and satisfaction has been actively conducted in various contexts. In this study, therefore, the five dimensions of IT education service quality (i.e., tangibles, reliability, responsiveness, assurance, and empathy) are derived from the context of IT education, based on the SERVPERF model and related previous studies. In addition, the effects of these IT education service quality factors on learners' educational satisfaction and their work application/recommendation intentions are examined. Furthermore, the moderating roles of learner position (i.e., practitioner group vs. manager group) and participation motivation (i.e., voluntary participation vs. involuntary participation) in the relationships between IT education service quality factors and learners' educational satisfaction, work application intention, and recommendation intention are also investigated. In an analysis using the structural equation model (SEM) technique, based on a questionnaire given to 203 participants of IT education programs at an 'M' IT educational institution in Seoul, South Korea, tangibles, reliability, and assurance were found to have a significant effect on educational satisfaction.
This educational satisfaction was found to have a significant effect on both work application intention and recommendation intention. Moreover, it was discovered that learner position and participation motivation have a partial moderating impact on the relationship between IT education service quality factors and educational satisfaction. This study holds academic implications in that it is one of the first studies to apply the SERVPERF model (rather than the SERVQUAL model, which has been widely adopted by prior studies) to demonstrate the influence of IT education service quality on learners' educational satisfaction, work application intention, and recommendation intention in an IT education environment. The results of this study are expected to provide practical guidance for IT education service providers who wish to enhance learners' educational satisfaction and service management efficiency.
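
As a rough illustration of the structural part of such an analysis (not the authors' actual model or data), the sketch below specifies the hypothesized paths — SERVPERF dimensions → satisfaction → work application / recommendation intention — in lavaan-style syntax using the Python package semopy; the data file and column names are hypothetical composite scores.

```python
# Illustrative sketch only: hypothetical composite scores per respondent,
# not the study's actual survey items or dataset.
import pandas as pd
import semopy

# Each column is assumed to be a mean composite score (e.g., Likert-scale) per construct.
data = pd.read_csv("it_education_survey.csv")  # hypothetical file

model_desc = """
# structural paths: SERVPERF dimensions -> satisfaction -> intentions
Satisfaction ~ Tangibles + Reliability + Responsiveness + Assurance + Empathy
WorkApplication ~ Satisfaction
Recommendation ~ Satisfaction
"""

model = semopy.Model(model_desc)
model.fit(data)         # maximum-likelihood estimation by default
print(model.inspect())  # path coefficients, standard errors, p-values
```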

Corporate Credit Rating based on Bankruptcy Probability Using AdaBoost Algorithm-based Support Vector Machine (AdaBoost 알고리즘기반 SVM을 이용한 부실 확률분포 기반의 기업신용평가)

  • Shin, Taek-Soo; Hong, Tae-Ho
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.25-41 / 2011
  • Recently, support vector machines (SVMs) have been recognized as competitive tools compared with other data mining techniques for solving pattern recognition or classification problems. In particular, many studies have shown them to be more powerful than traditional artificial neural networks (ANNs) (Amendolia et al., 2003; Huang et al., 2004; Huang et al., 2005; Tay and Cao, 2001; Min and Lee, 2005; Shin et al., 2005; Kim, 2003). Classification decisions, whether binary or multi-class, made by any classifier (i.e., data mining technique) are highly cost-sensitive in financial classification problems such as credit rating: if credit ratings are misclassified, investors or financial decision makers may suffer severe economic losses. Therefore, it is necessary to convert the outputs of the classifier into well-calibrated posterior probabilities and to derive multi-class credit ratings from the resulting bankruptcy probabilities. However, SVMs do not provide such probabilities by default, so an additional method is required to produce them (Platt, 1999; Drish, 2001). This paper applies AdaBoost algorithm-based support vector machines (SVMs) to bankruptcy prediction, formulated as a binary classification problem for IT companies in Korea, and then derives multi-class credit ratings for the companies by shaping a normal distribution of posterior bankruptcy probabilities from the loss functions extracted from the SVMs. The proposed approach also shows that misclassification problems can be minimized by adjusting the credit grade interval ranges, given that each credit grade for credit loan borrowers carries its own credit risk, i.e., bankruptcy probability.
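
A minimal sketch of the general idea (not the authors' exact procedure) using scikit-learn is shown below: an AdaBoost ensemble of SVMs is fitted to bankruptcy labels, its scores are converted into calibrated posterior probabilities via Platt-style sigmoid calibration, and credit grades are assigned by binning the probability distribution. The data file and grade cut-offs are hypothetical.

```python
# Illustrative sketch, not the paper's exact implementation.
import numpy as np
import pandas as pd
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.calibration import CalibratedClassifierCV

df = pd.read_csv("kr_it_firms.csv")           # hypothetical financial-ratio dataset
X, y = df.drop(columns="bankrupt"), df["bankrupt"]

# AdaBoost over SVM base learners (SAMME works with hard predictions).
ada_svm = AdaBoostClassifier(
    estimator=SVC(kernel="rbf", C=1.0, gamma="scale"),  # scikit-learn >= 1.2
    algorithm="SAMME",
    n_estimators=30,
)

# Platt-style sigmoid calibration turns ensemble scores into posterior probabilities.
calibrated = CalibratedClassifierCV(ada_svm, method="sigmoid", cv=5)
calibrated.fit(X, y)
p_bankrupt = calibrated.predict_proba(X)[:, 1]

# Map the probability distribution onto credit grades via adjustable interval cut-offs.
grade_edges = np.quantile(p_bankrupt, [0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
grades = np.digitize(p_bankrupt, grade_edges[1:-1])     # 0 (best) .. 4 (worst)
```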

A Methodology of AI Learning Model Construction for Intelligent Coastal Surveillance (해안 경계 지능화를 위한 AI학습 모델 구축 방안)

  • Han, Changhee; Kim, Jong-Hwan; Cha, Jinho; Lee, Jongkwan; Jung, Yunyoung; Park, Jinseon; Kim, Youngtaek; Kim, Youngchan; Ha, Jeeseung; Lee, Kanguk; Kim, Yoonsung; Bang, Sungwan
    • Journal of Internet Computing and Services / v.23 no.1 / pp.77-86 / 2022
  • The Republic of Korea is a country in which coastal surveillance is an imperative national task, as it is surrounded by seas on three sides under the confrontation between South and North Korea. However, due to Defense Reform 2.0, the number of radar (R/D) operating personnel has decreased, and the period of service has also been shortened. Moreover, there is always the possibility of human error. This paper presents specific guidelines for developing an AI learning model for an intelligent coastal surveillance system, realized through a three-step strategy. The first stage is a typical stage of building an AI learning model, including data collection, storage, filtering, purification, and transformation. In the second stage, R/D signal analysis is performed first; subsequently, an AI learning model for classifying real and false images is developed, and coastal area analysis and vulnerable area/time analysis are performed. In the final stage, validation, visualization, and demonstration of the AI learning model are carried out. Through this research, an existing weapon system was made intelligent by applying AI technology for the first time.
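
As an illustrative sketch of the real-versus-false image classification step (the abstract does not specify a network architecture; the directory layout and model below are assumptions), a small convolutional classifier could look like this:

```python
# Hedged sketch: a small CNN for binary real/false detection-image classification.
# The dataset directory and architecture are assumptions, not the paper's setup.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "radar_detections/train",      # hypothetical folders: real/ and false/
    image_size=(128, 128),
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # real vs. false detection
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```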

A Development of Flood Mapping Accelerator Based on HEC-softwares (HEC 소프트웨어 기반 홍수범람지도 엑셀러레이터 개발)

  • Kim, JongChun; Hwang, Seokhwan; Jeong, Jongho
    • KSCE Journal of Civil and Environmental Engineering Research / v.44 no.2 / pp.173-182 / 2024
  • Recently, there has been a trend toward primarily utilizing data-driven models employing artificial intelligence technologies, such as machine learning, for flood prediction. These data-driven models offer the advantage of utilizing pre-training results, significantly reducing the required simulation time. However, a considerable amount of flood data is necessary for pre-training data-driven models, while the observed data available for application are often insufficient. As an alternative, validated simulation results from physically based models are being employed as pre-training data alongside observed data. In this context, we developed a flood mapping accelerator to generate flood maps for pre-training. The proposed accelerator automates the entire flood-mapping process: estimating flood discharge using HEC-1, calculating water surface levels using HEC-RAS, and simulating channel overflow and generating flood maps using RAS Mapper. With the accelerator, users can easily prepare a database for pre-training data-driven models from hundreds to tens of thousands of rainfall scenarios. It includes various convenient menus with a graphical user interface (GUI), and its practical applicability has been validated across 26 test beds.
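
The abstract describes chaining HEC-1 (discharge), HEC-RAS (water surface levels), and RAS Mapper (inundation maps) over many rainfall scenarios. A schematic batch loop is sketched below; the wrapper functions and file paths are hypothetical placeholders, since the actual accelerator drives the HEC tools through their own interfaces, which are not described here.

```python
# Schematic only: the run_* helpers are hypothetical placeholders standing in for
# however the accelerator actually invokes HEC-1, HEC-RAS, and RAS Mapper.
from pathlib import Path

def run_hec1(rainfall_file: Path, out_dir: Path) -> Path:
    """Placeholder: run HEC-1 on one rainfall scenario, return a discharge hydrograph file."""
    raise NotImplementedError

def run_hecras(discharge_file: Path, out_dir: Path) -> Path:
    """Placeholder: run HEC-RAS to compute water surface levels for the hydrograph."""
    raise NotImplementedError

def run_ras_mapper(ras_result: Path, out_dir: Path) -> Path:
    """Placeholder: export the inundation (flood) map raster via RAS Mapper."""
    raise NotImplementedError

def build_pretraining_db(scenario_dir: Path, out_dir: Path) -> list[Path]:
    """Chain the three steps for every rainfall scenario to build a flood-map database."""
    flood_maps = []
    for rainfall in sorted(scenario_dir.glob("*.txt")):  # hypothetical scenario files
        q = run_hec1(rainfall, out_dir)
        wse = run_hecras(q, out_dir)
        flood_maps.append(run_ras_mapper(wse, out_dir))
    return flood_maps
```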

Domain Knowledge Incorporated Local Rule-based Explanation for ML-based Bankruptcy Prediction Model (머신러닝 기반 부도예측모형에서 로컬영역의 도메인 지식 통합 규칙 기반 설명 방법)

  • Soo Hyun Cho; Kyung-shik Shin
    • Information Systems Review / v.24 no.1 / pp.105-123 / 2022
  • Thanks to the remarkable success of Artificial Intelligence (A.I.) techniques, new possibilities for applying them to real-world problems have emerged. One prominent application is the bankruptcy prediction model, as it is often used as a basic knowledge base for credit scoring models in the financial industry. As a result, there has been extensive research on how to improve the prediction accuracy of such models. However, despite their impressive performance, machine learning (ML)-based models are difficult to deploy because of their intrinsic opacity, especially when a field requires or values an explanation of the results produced by the model. The financial domain is one of the areas where explanation matters to stakeholders such as domain experts and customers. In this paper, we propose a novel approach that incorporates financial domain knowledge into local rule generation to provide instance-level explanations for a bankruptcy prediction model. The results show that the proposed method successfully selects and classifies the extracted rules based on their feasibility and the information they convey to users.
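
The paper's own method integrates financial domain knowledge into local rule generation; as a generic illustration of the underlying idea (a local, rule-style explanation of one prediction), the sketch below fits a shallow decision tree to black-box predictions in a neighborhood of a single firm and reads off the rule along its decision path. The black-box model, feature names, and perturbation scheme are assumptions, not the authors' procedure.

```python
# Generic local-surrogate sketch (LIME-style), not the paper's domain-knowledge method.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_rule(black_box, X_train, x, feature_names, n_samples=2000, max_depth=3):
    """Explain black_box's prediction at instance x with a rule from a local surrogate tree."""
    X_train = np.asarray(X_train)
    rng = np.random.default_rng(0)
    # Perturb the instance with noise scaled by per-feature standard deviations.
    noise = rng.normal(0.0, X_train.std(axis=0), size=(n_samples, X_train.shape[1]))
    neighborhood = x + noise
    labels = black_box.predict(neighborhood)          # black-box labels for neighbors

    surrogate = DecisionTreeClassifier(max_depth=max_depth).fit(neighborhood, labels)

    # Walk the decision path of x and collect the threshold conditions it satisfies.
    path = surrogate.decision_path(x.reshape(1, -1)).indices
    leaf = surrogate.apply(x.reshape(1, -1))[0]
    conditions = []
    for node in path:
        if node == leaf:
            continue
        f = surrogate.tree_.feature[node]
        t = surrogate.tree_.threshold[node]
        op = "<=" if x[f] <= t else ">"
        conditions.append(f"{feature_names[f]} {op} {t:.3f}")
    return " AND ".join(conditions)
```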

An User Experience Analysis of Virtual Assistant Using Grounded Theory - Focused on SKT Virtual Personal Assistant 'NUGU' - (근거 이론을 적용한 가상 비서의 사용자 경험 분석 - SKT 가상 비서 'NUGU'를 중심으로 -)

  • Hwang, Seung Hee; Yun, Ray Jaeyoung
    • Journal of the HCI Society of Korea / v.12 no.2 / pp.31-40 / 2017
  • This is a qualitative study of the virtual personal assistant SKT 'NUGU', a voice recognition device launched on September 1, 2016. In-depth interviews were conducted with nine research participants who had used the device for more than a month. From the interviews, 362 concepts were identified, and through open coding, axial coding, and selective coding, the concepts were categorized into 16 sub-categories and 10 top categories. After recognizing the 362 concepts from the interview sources, a paradigm model was proposed from the open coding. Through selective coding, the main category of the study was narrowed down to understanding 'usage patterns by each type'. As a result of the typification, it was confirmed that usage patterns can be described as two different types: the dependent type and the inquiry type. The research provides basic data about the user experience of virtual assistants that can be utilized when designing virtual personal assistants in the near future.

A Study on the System for AI Service Production (인공지능 서비스 운영을 위한 시스템 측면에서의 연구)

  • Hong, Yong-Geun
    • KIPS Transactions on Computer and Communication Systems / v.11 no.10 / pp.323-332 / 2022
  • As various services using AI technology are being developed, much attention is being paid to AI service production. Recently, as AI technology has come to be acknowledged as one of the ICT services, a great deal of research has been conducted on general-purpose AI service production. In this paper, I describe research results in terms of systems for AI service production, focusing on the deployment and serving of machine learning models, which are the final steps of a typical machine learning development procedure. Three different Ubuntu systems were built, and experiments were conducted on them using the COCO 2017 validation dataset, combining different AI models (RFCN, SSD-Mobilenet) and different communication methods (gRPC, REST) to request and perform AI services through TensorFlow Serving. Through these experiments, it was found that the type of AI model has a greater influence on AI service inference time than the communication method, and that in the case of an object detection AI service, the number and complexity of objects in the image affect inference time more than the file size of the image to be detected. In addition, it was confirmed that if the AI service is performed remotely rather than locally, inference takes more time than local execution, even on a machine with good performance. Based on the results of this study, system design suitable for service goals, AI model development, and efficient AI service production are expected to be possible.
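
For reference, a minimal client-side request to a TensorFlow Serving instance over its REST API might look like the sketch below; the host, port, model name, and image file are assumptions rather than the paper's actual configuration, and an equivalent gRPC client would use the tensorflow-serving-api package instead.

```python
# Hedged sketch of a REST prediction request to TensorFlow Serving.
# Host, port, model name, and input encoding are assumptions, not the paper's setup.
import base64
import requests

with open("test_image.jpg", "rb") as f:                  # hypothetical COCO image
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

# TensorFlow Serving's REST endpoint: POST /v1/models/<model_name>:predict
url = "http://localhost:8501/v1/models/ssd_mobilenet:predict"
payload = {"instances": [{"b64": img_b64}]}              # model must accept encoded images

resp = requests.post(url, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json()["predictions"][0].keys())              # e.g., boxes, classes, scores
```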

Deep Learning Algorithm for Automated Segmentation and Volume Measurement of the Liver and Spleen Using Portal Venous Phase Computed Tomography Images

  • Yura Ahn; Jee Seok Yoon; Seung Soo Lee; Heung-Il Suk; Jung Hee Son; Yu Sub Sung; Yedaun Lee; Bo-Kyeong Kang; Ho Sung Kim
    • Korean Journal of Radiology / v.21 no.8 / pp.987-997 / 2020
  • Objective: Measurement of the liver and spleen volumes has clinical implications. Although computed tomography (CT) volumetry is considered to be the most reliable noninvasive method for liver and spleen volume measurement, it has limited application in clinical practice due to its time-consuming segmentation process. We aimed to develop and validate a deep learning algorithm (DLA) for fully automated liver and spleen segmentation using portal venous phase CT images in various liver conditions. Materials and Methods: A DLA for liver and spleen segmentation was trained using a development dataset of portal venous CT images from 813 patients. Performance of the DLA was evaluated in two separate test datasets: dataset-1, which included 150 CT examinations in patients with various liver conditions (i.e., healthy liver, fatty liver, chronic liver disease, cirrhosis, and post-hepatectomy), and dataset-2, which included 50 pairs of CT examinations performed at our institution and at other institutions. The performance of the DLA was evaluated using the Dice similarity score (DSS) for segmentation and the Bland-Altman 95% limits of agreement (LOA) for measurement of the volumetric indices, compared with ground truth manual segmentation. Results: In test dataset-1, the DLA achieved a mean DSS of 0.973 and 0.974 for liver and spleen segmentation, respectively, with no significant difference in DSS across different liver conditions (p = 0.60 and 0.26 for the liver and spleen, respectively). For the measurement of volumetric indices, the Bland-Altman 95% LOA was -0.17 ± 3.07% for liver volume and -0.56 ± 3.78% for spleen volume. In test dataset-2, DLA performance using CT images obtained at outside institutions and at our institution was comparable for liver (DSS, 0.982 vs. 0.983; p = 0.28) and spleen (DSS, 0.969 vs. 0.968; p = 0.41) segmentation. Conclusion: The DLA enabled highly accurate segmentation and volume measurement of the liver and spleen using portal venous phase CT images of patients with various liver conditions.
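
For reference, the Dice similarity score used to quantify segmentation overlap can be computed from binary masks as in the short sketch below (a generic formulation, not the authors' code).

```python
# Generic Dice similarity score between two binary segmentation masks.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSS = 2|A∩B| / (|A| + |B|) for boolean/0-1 masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Example: a perfect overlap yields a score of 1.0.
mask = np.zeros((4, 4), dtype=int); mask[1:3, 1:3] = 1
assert abs(dice_score(mask, mask) - 1.0) < 1e-6
```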

Autopoietic Machinery and the Emergence of Third-Order Cybernetics (자기생산 기계 시스템과 3차 사이버네틱스의 등장)

  • Lee, Sungbum
    • Cross-Cultural Studies / v.52 / pp.277-312 / 2018
  • First-order cybernetics during the 1940s and 1950s aimed for control of an observed system, while second-order cybernetics during the mid-1970s aspired to address the mechanism of an observing system. The former pursues an objective, subjectless approach to a system, whereas the latter prefers a subjective, personal approach to a system. Second-order observation must be noted since a human observer is a living system that has its own unique cognition. Maturana and Varela place the autopoiesis of this biological system at the core of second-order cybernetics. They contend that an autopoietic system maintains, transforms, and produces itself. Technoscientific recreation of biological autopoiesis opens up a new step in cybernetics: what I describe as third-order cybernetics. The formation of technoscientific autopoiesis overlaps with the Fourth Industrial Revolution, or what Erik Brynjolfsson and Andrew McAfee call the Second Machine Age. It leads to a radical shift from human centrism to posthumanity, whereby humanity is mechanized and machinery is biologized. In two versions of the novel Demon Seed, American novelist Dean Koontz explores the significance of technoscientific autopoiesis. The 1973 version dramatizes two kinds of observers: the technophobic human observer and the technology-friendly machine observer Proteus. As the story concludes, the former dominates the latter, with the result that an anthropocentric position still holds. The 1997 version, however, reveals the victory of the techno-friendly narrator Proteus over the anthropocentric narrator. Losing his narrational position, the technophobic human narrator of the story disappears. In the 1997 version, Proteus becomes the subject of desire in luring the divorcee Susan. He longs to flaunt his male egomania. His achievement of male identity is a sign of the technological autopoiesis characteristic of third-order cybernetics. To display the self-producing capabilities integral to the autonomy of machinery, Koontz's novel demonstrates that Proteus manipulates Susan's egg to produce a human-machine mixture. Koontz's demon child, problematically enough, implicates the future of eugenics in an era of technological autopoiesis. Proteus creates a crossbreed of humanity and machinery to engineer a perfect body and mind. He fixes incurable or intractable diseases through genetic modifications. Proteus transfers a vast amount of digital information to his offspring's brain, which enables the demon child to achieve state-of-the-art intelligence. His technological editing of human genes and consciousness leads to digital standardization through the unanimous spread of the best qualities of humanity. He gathers distinguished human genes and mental attributes much like collecting luxury brands. Accordingly, Proteus's child-making project ultimately moves towards technologically controlled eugenics. Pointedly, it disturbs the classical ideal of liberal humanism celebrating a human being as the master of his or her nature.

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo; Lee, Junyeong; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have made their internally developed AI technologies public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve it. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help companies develop or use deep learning open source software in industry. This study therefore attempts to derive an adoption strategy for such frameworks through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review related to the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven out of eight TOE factors, as well as several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework work tool service. In order for an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the stage of using the deep learning framework, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the availability of GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example, by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (i.e., defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. After these three pre-consideration steps are cleared, the next two steps (i.e., using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.