• Title/Summary/Keyword: Artificial intelligence in Design


Model-Based Intelligent Framework Interface for UAV Autonomous Mission (무인기 자율임무를 위한 모델 기반 지능형 프레임워크 인터페이스)

  • Son Gun Joon;Lee Jaeho
    • The Transactions of the Korea Information Processing Society / v.13 no.3 / pp.111-121 / 2024
  • Recently, thanks to advances in artificial intelligence technologies such as image recognition, research on unmanned aerial vehicles (UAVs) has been actively conducted. Related research is growing especially in the field of military drones, where training professional pilots is costly; one example is the study of intelligent frameworks for the autonomous mission performance of reconnaissance drones. In this study, we designed an intelligent framework for UAVs using a methodology originally developed for service-robot intelligent frameworks. For autonomous mission performance, the intelligent framework and the UAV module must be smoothly linked. However, the model-based interfaces of existing service-robot intelligent frameworks could not interoperate with drones that use periodic message protocols, for two reasons: the message model lacked the expressive power to describe periodic messages, and the periodic data exchange of the protocol was incompatible with the asynchronous data exchange of the intelligent framework. To solve these problems, this paper proposes a message-model extension that describes message periodicity, securing the model's expressive power for periodic messages, and proposes periodic and asynchronous data exchange methods based on the extended model to provide interoperability between the two exchange styles.
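The idea of extending a message model with periodicity and bridging a periodic stream into an asynchronous framework can be illustrated with a small sketch. The class and field names here are hypothetical (the paper does not publish its schema); change-detection is only one possible bridging policy and stands in for the paper's actual interworking method:

```python
import queue
from dataclasses import dataclass
from typing import Optional

# Hypothetical message-model extension: a `period_ms` field marks a message
# as periodic; None means event-driven (asynchronous).
@dataclass
class MessageSpec:
    name: str
    fields: tuple
    period_ms: Optional[int] = None

class PeriodicToAsyncBridge:
    """Republishes samples from a periodic stream as asynchronous events,
    so an event-driven framework sees a message only when its value changes."""
    def __init__(self):
        self._last = {}
        self.events = queue.Queue()

    def on_periodic_sample(self, spec: MessageSpec, payload):
        if self._last.get(spec.name) != payload:  # suppress unchanged samples
            self._last[spec.name] = payload
            self.events.put((spec.name, payload))

telemetry = MessageSpec("gps_fix", ("lat", "lon"), period_ms=100)
bridge = PeriodicToAsyncBridge()
bridge.on_periodic_sample(telemetry, (37.5, 127.0))
bridge.on_periodic_sample(telemetry, (37.5, 127.0))  # duplicate sample, dropped
bridge.on_periodic_sample(telemetry, (37.6, 127.0))  # changed, forwarded
```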

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.111-124 / 2018
  • This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the major chronic diseases. Prior studies applying data mining techniques to disease prediction can be divided into studies that design prediction models and studies that compare prediction results. In the foreign literature, disease-prediction studies using data mining have concentrated on cardiovascular disease; domestic studies are broadly similar but have focused mainly on hypertension and diabetes. Since hyperlipidemia is a chronic disease of comparable importance to hypertension and diabetes, this study selected it as the disease to be analyzed. We developed a prediction model using SVM and meta-learning algorithms, which are known to have excellent predictive power. The data come from the 2012 Korea Health Panel, an annual survey conducted since 2008 that produces basic data on health expenditure, health status, and health behavior. From the hospitalization, outpatient, emergency, and chronic-disease records of the 2012 panel, 1,088 hyperlipidemia patients and 1,088 non-patients were randomly selected, for a total of 2,176 subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression: among the 17 candidate variables, the categorical variables (except length of smoking) were expressed as dummy variables against a reference group and analyzed.
Six variables (age, BMI, education level, marital status, smoking status, gender) were selected at the 0.1 significance level, excluding income level and smoking period. Second, the C4.5 decision-tree algorithm was used; the significant input variables were age, smoking status, and education level. Third, genetic algorithms were used: for SVM, the selected input variables were age, marital status, education level, economic activity, smoking period, and physical activity status, while for the artificial neural network they were age, marital status, and education level. Based on the selected variables, we compared SVM, meta-learning algorithms, and other prediction models for hyperlipidemia, using TP rate and precision as classification performance measures. The main results are as follows. First, the accuracy of the SVM was 88.4% and that of the artificial neural network was 86.7%. Second, models using the input variables selected through the stepwise method were slightly more accurate than models using all variables. Third, when only the three variables selected by the decision tree were used as inputs, the precision of the artificial neural network was higher than that of the SVM. With the input variables selected by the genetic algorithm, the classification accuracy of the SVM was 88.5% and that of the artificial neural network was 87.9%. Finally, stacking, the meta-learning approach proposed in this study, performed best when the predicted outputs of SVM and MLP were used as input variables for an SVM meta-classifier. The purpose of this study was to predict hyperlipidemia, one of the representative chronic diseases.
To do this, we used SVM and meta-learning algorithms, which are known to have high accuracy. As a result, stacking as a meta-learner classified hyperlipidemia more accurately than the other meta-learning algorithms, although its predictive performance only matched that of the best single model, an SVM at 88.6%. The limitations of this study are as follows. Although various variable-selection methods were tried, most variables used in the study were categorical dummy variables; with so many categorical variables, models suited to them, such as decision trees, may fit better than models such as neural networks, so the results could differ if continuous variables were used. Despite these limitations, this study is significant in that it predicts hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously, and in that it improves model accuracy by applying various variable-selection techniques. We also expect the proposed model to be useful for the prevention and management of hyperlipidemia.
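The stacking arrangement described above (SVM and MLP predictions fed into an SVM meta-classifier) can be sketched with scikit-learn. The data here are synthetic and the hyperparameters illustrative, not the study's actual panel data or settings:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the panel data (the study used 2,176 subjects,
# 6 selected variables); dimensions chosen only for a quick demo
X, y = make_classification(n_samples=400, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners: SVM and MLP; meta-classifier: SVM, as in the stacking result
stack = StackingClassifier(
    estimators=[("svm", SVC(random_state=0)),
                ("mlp", MLPClassifier(max_iter=1000, random_state=0))],
    final_estimator=SVC(),
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

`StackingClassifier` fits the base learners with cross-validation and trains the final estimator on their out-of-fold predictions, which is the standard form of the stacking the abstract describes.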

The Need and Improvement Direction of New Computer Media Classes in Landscape Architectural Education in University (대학 내 조경전공 교육과정에 있어 새로운 컴퓨터 미디어 수업의 필요와 개선방향)

  • Na, Sungjin
    • Journal of the Korean Institute of Landscape Architecture / v.49 no.1 / pp.54-69 / 2021
  • In 2020, society's overall lifestyle showed a distinct shift from consumable analog media, such as paper, to digital media, with the increased penetration of cloud computing, and from wired to wireless media. Against these social changes, this work examines whether computer media are being appropriately applied in the field of landscape architecture, and gives directions for new computer media classes in landscape architectural education in the era of the 4th Industrial Revolution. Landscape architecture directly proposes the realization of a positive lifestyle and the creation of a living environment, and is closely connected with social change. Yet while the digital infrastructure of the 4th Industrial Revolution, such as Artificial Intelligence (AI), Big Data, autonomous vehicles, cloud networks, and the Internet of Things, is changing contemporary society technologically, culturally, and economically, there is no clear evidence that landscape architectural education is making any comparable change. It is therefore necessary to review the current use of computer technology and media in landscape architectural education and to examine alternative directions for the curriculum in the new digital era. First, the basis for discussion was laid by studying the trends of computational design in modern landscape architecture. Next, the changes and current status of computer media classes in domestic and overseas landscape education were analyzed based on prior research and curricula. The number and types of computer media classes in foreign landscape architecture departments increased significantly between a 1994 study and the 2020 situation, whereas domestic departments showed no obvious change, indicating that domestic landscape education is coping passively with the digital era. Lastly, based on these discussions, this study examined alternatives to the curriculum that landscape architecture departments should pursue in the new digital world.

A Study on Project-based Smart Learning Tool Model (프로젝트 기반 스마트 학습 도구 모델에 관한 연구)

  • Lee, Keun-Ho
    • Journal of Internet of Things and Convergence / v.8 no.5 / pp.93-98 / 2022
  • With the development of new digital technologies, research on various learning tools is being actively conducted. These learning tools are being adapted to diverse environments by applying artificial intelligence or smart functions based on big data technology, and they contribute substantially to educational effectiveness and learning efficiency. Recently, universities have adopted various learning tools, introducing solutions ranging from smart attendance to smart learning in order to improve students' learning efficiency. This study proposes the design of a smart learning tool that increases the efficiency of carrying out company-customized projects through a university's smart learning platform and improves the scalability of the results. Because company-customized projects that build practical skills can be used smoothly as learning material, the proposed tool is expected to help students adapt easily to real business projects. In future work, the proposed project-based smart learning tool model will be built into an LMS and applied to actual projects to verify its utility, and the model will be revised and supplemented to strengthen its project-based smart learning functions.

The Meta-Analysis on Effects of Living Lab-Based Education (리빙랩 기반 교육 프로그램의 효과에 대한 메타분석)

  • So Hee Yoon
    • Journal of Practical Engineering Education / v.14 no.3 / pp.505-512 / 2022
  • The purpose of this study is to synthesize the effects of living lab-based education through meta-analysis. Seven primary studies reporting the effect of living lab-based education were selected for data analysis. The research questions are as follows. First, what is the overall effect size of living lab-based education, covering both the cognitive and affective domains? Second, what is the effect size of living lab-based education according to categorical variables, namely outcome characteristics, study characteristics, and design characteristics? The results are summarized as follows. First, the overall effect size of living lab-based education was 0.347. Second, the effect size by cognitive-domain outcome was 1.244 for information processing, 0.593 for communication, 0.261 for problem solving, and 0.260 for creativity. Third, the effect size by subject area was 1.146 for electrical and electronic engineering, 0.489 for technology and home economics, 0.379 for artificial intelligence, and 0.168 for practical arts. Fourth, the effect size by school level was 1.058 for high school, 0.312 for middle school, and 0.217 for elementary school. Fifth, the effect size by grade level was 0.295 when two or more grades were integrated and 0.294 for a single grade.
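Effect sizes like those above are typically pooled by inverse-variance weighting. A minimal fixed-effect sketch, using illustrative study-level values (the abstract reports only pooled effect sizes, not study-level variances, so these numbers are made up for the demonstration):

```python
import math

# Hypothetical (effect size g, variance) pairs for individual studies;
# purely illustrative, not the paper's study-level data
effects = [(1.244, 0.09), (0.593, 0.07), (0.261, 0.05), (0.260, 0.06)]

# Fixed-effect inverse-variance pooling
weights = [1.0 / v for _, v in effects]
pooled = sum(w * g for (g, _), w in zip(effects, weights)) / sum(weights)

# Standard error of the pooled estimate and its 95% confidence interval
se = math.sqrt(1.0 / sum(weights))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
```

A random-effects model, which most educational meta-analyses (likely including this one) prefer, would additionally estimate between-study variance and add it to each study's weight denominator.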

Characterizing Strategy of Emotional sympathetic Robots in Animation and Movie - Focused on Appearance and Behavior tendency Analysis - (애니메이션 및 영화에 등장하는 정서교감형 로봇의 캐릭터라이징 전략 - 외형과 행동 경향성 분석을 중심으로 -)

  • Ryu, Beom-Yeol;Yang, Se-Hyeok
    • Cartoon and Animation Studies / s.48 / pp.85-116 / 2017
  • The purpose of this study is to analyze the conditions under which robots depicted in cinematographic works such as animations and movies sympathize and form attachments with the main character, and to organize characterizing strategies for emotionally sympathetic robots. With the development of technology, artificial intelligence and robots are no longer considered science fiction but realistic issues. This author therefore assumes that the expressive characteristics of emotionally sympathetic robots in cinematographic works can serve as meaningful factors in the expressive embodiment of the human-friendly service robots to be distributed widely hereafter, that is, in establishing the features of their characters; this research began to lay the grounds for that. As subjects of analysis, robot characters were chosen whose emotional intimacy with the main character is clearly observable, from movies and animations produced after 1920, when the modern concept of the robot was introduced. To understand the robots' appearance and behavioral tendencies, this study (1) classified the robots' external impressions into five types (human-like, cartoon, tool-like, artificial being, pet or creature) and (2) classified behavioral tendencies, regarded as the outward embodiment of personality, using DiSC, a tool for diagnosing behavioral patterns. It was observed that robots with high emotional intimacy are all strongly independent in their duties and show great emotional acceptance. The 'Influence' and 'Steadiness' types show great emotional acceptance, the 'Influence' type tends to be highly independent, and the 'Conscientiousness' type generally shows less emotional acceptance and independence. According to the analysis of external impressions, however, appearance factors have hardly any significant relationship with emotional sympathy.
This implies that, for robots with great emotional sympathy, sympathy grounded in communication matters more than first impressions, much as in forming interpersonal relationships in reality. Lastly, studying robot characters requires consilient competence that embraces many different areas; design factors or personality factors alone are not enough to characterize robot characters or to analyze the vast amount of information involved in sympathizing with humans. This thesis is offered as a foundation for such work, in the expectation that the general artistic value of animations can later be put to good use in developing robots, which must be studied interdisciplinarily.

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.241-254 / 2011
  • Financial time-series forecasting is one of the most important issues because it is essential for the risk management of financial institutions. Researchers have therefore tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have been popular in this research area because they do not require huge training data and have a low risk of overfitting. However, a user must determine several design factors heuristically in order to use an SVM, chiefly the choice of kernel function and its parameters and proper feature subset selection. Beyond these factors, proper selection of the instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances; nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection chooses proper instance subsets from the original training data; it may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs that uses a genetic algorithm (GA) to optimize the instance selection process and the kernel parameters simultaneously. We call this model ISVM (SVM with Instance Selection). Experiments on stock market data are implemented using ISVM: the GA searches for optimal or near-optimal kernel parameter values and relevant instances for the SVM, so the GA chromosomes encode two sets of parameters, the codes for the kernel parameters and the codes for instance selection.
For the controlling parameters of the GA search, the population size is set at 50, the crossover rate at 0.7, and the mutation rate at 0.1; the search stops after 50 generations. The application data consist of technical indicators and the direction of change of the daily Korea stock price index (KOSPI), with 2,218 trading days in total. The data are separated into training, test, and hold-out subsets of 1,056, 581, and 581 observations respectively. This study compares ISVM with several benchmark models: logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with kernel parameters optimized by the genetic algorithm (PSVM). The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data, while using only 556 of the 1,056 original training instances. In addition, a two-sample test for proportions is used to examine whether ISVM significantly outperforms the benchmarks: ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and Logit, SVM, and PSVM at the 5% level.
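The chromosome encoding described above (kernel parameters plus an instance-selection mask) can be sketched in a few lines. This is a deliberately simplified toy (smaller population, truncation selection, mutation only, no crossover, synthetic data) rather than the paper's GA, which uses a population of 50, crossover 0.7, and mutation 0.1 over 50 generations:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=8, random_state=0)
X_tr, y_tr, X_val, y_val = X[:120], y[:120], X[120:], y[120:]

def fitness(chrom):
    # chrom = [log2(C), log2(gamma), 120 instance-selection bits]
    C, gamma = 2.0 ** chrom[0], 2.0 ** chrom[1]
    mask = chrom[2:].astype(bool)
    if mask.sum() < 10 or len(np.unique(y_tr[mask])) < 2:
        return 0.0                               # degenerate training subset
    clf = SVC(C=C, gamma=gamma).fit(X_tr[mask], y_tr[mask])
    return clf.score(X_val, y_val)               # validation accuracy as fitness

# Population: 2 real-valued genes + 120 binary instance-selection genes
pop = np.column_stack([rng.uniform(-2, 4, (20, 2)),
                       rng.integers(0, 2, (20, 120))])
for gen in range(10):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]      # keep the best half
    children = parents[rng.integers(0, 10, 10)].copy()
    flip = rng.random(children[:, 2:].shape) < 0.1
    children[:, 2:] = np.where(flip, 1 - children[:, 2:], children[:, 2:])
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
```

Each chromosome simultaneously evolves the SVM hyperparameters and the subset of training instances, which is the joint optimization ISVM performs.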

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia Pacific Journal of Information Systems / v.18 no.1 / pp.79-96 / 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) version of the MIT Process Handbook, an electronic repository of best-practice business processes intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format, modeling the Handbook's meta-model in OWL and exporting the processes as instances of that meta-model. Next, we needed a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and for similarity values between processes. To generate a semantics-preserving test data set, we instead created 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants serve as the correct answers for the target process. We devised diverse similarity algorithms based on the values of process attributes and the structures of business processes.
We use simple text-retrieval similarity algorithms such as TF-IDF and Levenshtein edit distance, and also a tree edit distance measure, because semantic processes appear to have a graph structure. We further design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions: since the relationships between a semantic process and its subcomponents can be identified, this information can be used when calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are used to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF, and the method combining TF-IDF with Levenshtein edit distance, perform better than the other devised methods; these two measures focus on the similarity of process names and descriptions. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within each mutation set. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, performs reasonably well in both experiments. For retrieving semantic processes, it therefore seems better to consider diverse aspects of process similarity, including both process structure and the values of process attributes.
We generate semantic process data and a retrieval-experiment dataset from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, experiments with datasets from other domains are needed, and since the diverse measures produce many similarity values, better ways may be found to identify relevant processes by applying these values simultaneously.
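The text-based measures named above are easy to state concretely. A small sketch of Levenshtein edit distance plus the Jaccard and Dice set-overlap coefficients, applied to token sets of process names (the process names are made up for illustration):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def jaccard(s1: set, s2: set) -> float:
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 1.0

def dice(s1: set, s2: set) -> float:
    return 2 * len(s1 & s2) / (len(s1) + len(s2)) if s1 or s2 else 1.0

# Illustrative process names tokenized into word sets
a = set("approve purchase order".split())
b = set("approve order shipment".split())
```

Structure-aware measures like the paper's JaccardAll variants would apply these set coefficients not to name tokens but to sets of subcomponents (part processes, goals, exceptions).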

In-service teacher's perception on the mathematical modeling tasks and competency for designing the mathematical modeling tasks: Focused on reality (현직 수학 교사들의 수학적 모델링 과제에 대한 인식과 과제 개발 역량: 현실성을 중심으로)

  • Hwang, Seonyoung;Han, Sunyoung
    • The Mathematical Education / v.62 no.3 / pp.381-400 / 2023
  • As we enter an era in which diverse and complex real-world problems are solved with artificial intelligence and big data, problem-solving competencies that can address realistic problems through a mathematical approach are required. Indeed, the 2015 and 2022 revised mathematics curricula emphasize mathematical modeling as an activity and competency for solving real-world problems. However, the real-world problems presented in domestic and international textbooks are largely artificial problems that rarely occur in reality. Accordingly, researchers at home and abroad are paying attention to the reality of mathematical modeling tasks and calling for authentic tasks that reflect students' daily lives. Previous studies, however, focused mainly on theoretical proposals about reality; studies analyzing teachers' perceptions of reality and their competency to reflect reality in tasks are lacking. This study therefore analyzes in-service mathematics teachers' perception of reality, among the characteristics of mathematical modeling tasks, and their competency for designing such tasks. First, five criteria for reality were established through a literature review. Teacher training was then conducted on the theme of mathematical modeling, and pre- and post-surveys of the 41 participating in-service mathematics teachers were conducted to identify changes in their perception of reality. The surveys presented a task that did not reflect reality; the teachers judged whether it did, and selected one of the five criteria as the reason for their judgment.
The survey responses were then coded and subjected to frequency analysis, and pre- and post-survey frequencies were compared to identify changes in the teachers' perception of reality. In addition, the mathematical modeling tasks designed by the in-service teachers were evaluated against the reality criteria to assess their competency for designing tasks that reflect reality. The results show that the teachers moved from an insufficient perception that considered only fragmentary criteria to one that considered all five criteria of reality. In particular, among teachers whose judgment of reality reversed between the pre- and post-surveys, analysis of the stated reasons confirmed that teachers who had not treated certain criteria as criteria for reality in the pre-survey did so in the post-survey. The evaluation of the designed tasks showed that the teachers were able to reflect reality in their tasks, although three of the five criteria, 'situations that can occur in students' daily lives', 'need to solve the task', and 'require conclusions in a real-world situation', were reflected relatively less. It was also found that teachers with low task-development competency were more common in the group that could not judge the reality of the given task correctly than in the group that could. Based on these results, this study provides implications for teacher education that enables mathematics teachers to apply mathematical modeling lessons in their classes.

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolutional Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.221-241 / 2018
  • Deep learning has been getting attention recently. Convolutional Neural Networks (CNNs) are the deep learning technique behind results in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and behind AlphaGo. A CNN divides the input image into small sections, recognizes partial features, and combines them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but so far their applications have been limited to image recognition and natural language processing, and their use for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraud transaction detection, and bankruptcy prediction. It is therefore a meaningful experiment to test whether deep learning can solve business problems, using the case of online shopping companies, which have big data, relatively identifiable customer behavior, and high utilization value. In online shopping especially, the competitive environment is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is increasingly important. In this study, we propose a 'CNN model of heterogeneous information integration' that uses a CNN to improve the prediction of customer behavior in online shopping enterprises.
The proposed model combines structured and unstructured information and learns with a convolutional neural network on top of a multi-layer perceptron structure. To optimize its performance, we evaluate three architectural components, 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churn, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual transaction, customer, and VOC (voice of customer) data of a specific online shopping company in Korea. The data cover 47,947 customers who registered at least one VOC in January 2011 (one month): their customer profiles, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month. The experiment has two stages. In the first stage, we evaluate the three architectural components that affect the performance of the proposed model and select optimal parameters; we then evaluate the performance of the proposed model. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that unstructured information contributes to predicting customer behavior, and that CNNs can be applied to business problems as well as to image recognition and natural language processing.
The experiments confirm that the CNN is effective in understanding and interpreting the meaning of context in textual VOC data, and this empirical research on actual e-commerce data shows that very meaningful information can be extracted from VOCs written directly by customers when predicting customer behavior. Finally, the various experiments provide useful information for future research on parameter selection and model performance.
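The core mechanics of a text CNN over VOC tokens, concatenated with structured customer features for one binary target, can be sketched in plain NumPy. All dimensions, embeddings, and weights below are random and purely illustrative, not the paper's trained architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy VOC text: a token-id sequence embedded into vectors
# (vocabulary size and dimensions are illustrative)
seq_len, emb_dim, n_filters, k = 12, 8, 4, 3
tokens = rng.integers(0, 100, seq_len)
embedding = rng.normal(size=(100, emb_dim))
x_text = embedding[tokens]                      # (seq_len, emb_dim)

# 1-D convolution + max-over-time pooling: the core of a text CNN
filters = rng.normal(size=(n_filters, k, emb_dim))
conv = np.array([[np.sum(x_text[i:i + k] * f) for i in range(seq_len - k + 1)]
                 for f in filters])             # (n_filters, seq_len - k + 1)
text_feat = np.maximum(conv, 0).max(axis=1)     # ReLU, then max pooling

# Concatenate with structured customer features, then a logistic output unit
x_struct = rng.normal(size=5)                   # e.g. recency, frequency, spend
h = np.concatenate([text_feat, x_struct])
w, b = rng.normal(size=h.size), 0.0
p = 1.0 / (1.0 + np.exp(-(h @ w + b)))          # probability for one binary target
```

The 'heterogeneous information integration' step in the paper corresponds to the concatenation above; a trained model would learn the embedding, filters, and output weights jointly for each of the six binary targets.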