• Title/Summary/Keyword: design domain

Search Results: 2,330

Spatial Factors' Analysis of Affecting on Automated Driving Safety Using Spatial Information Analysis Based on Level 4 ODD Elements (Level 4 자율주행서비스 ODD 구성요소 기반 공간정보분석을 통한 자율주행의 안전성에 영향을 미치는 공간적 요인 분석)

  • Tagyoung Kim;Jooyoung Maeng;Kyeong-Pyo Kang;SangHoon Bae
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.22 no.5
    • /
    • pp.182-199
    • /
    • 2023
  • Since 2021, government departments have been promoting the Automated Driving Technology Development and Innovation Project as a national research and development (R&D) project. The automated vehicles and service technologies developed under this project are planned to be provided to the public in a selected Living Lab City. It is therefore important to determine a spatial area and operating section that enable safe and stable automated driving, depending on the purpose and characteristics of the target service. In this study, the static Operational Design Domain (ODD) elements for Level 4 automated driving services were reclassified by reviewing previously published papers and related literature and by investigating field data. Spatial analysis techniques were used to apply the reclassified Level 4 ODD elements to a real operating area of Level 3 automated driving services, because it is important to reflect the spatial factors affecting the safety of real automated driving technologies and services. Consequently, a total of six driving-mode changes (disengagements) were derived through spatial information analysis, and the factors affecting the safety of automated driving were crosswalks, traffic lights, intersections, bicycle roads, pocket lanes, caution signs, and median strips. This spatial factor analysis method is expected to be useful for determining suitable areas for automated driving services.

A Fundamental Study of VIV Fatigue Analysis Procedure for Dynamic Power Cables Subjected to Severely Sheared Currents (강한 전단 해류 환경에서 동적 전력케이블의 VIV 피로해석 절차에 관한 기초 연구)

  • Chunsik Shim;Min Suk Kim;Chulmin Kim;Yuho Rho;Jeabok Lee;Kwangsu Chea;Kangho Kim;Daseul Jeong
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.60 no.5
    • /
    • pp.375-387
    • /
    • 2023
  • Subsea power cables are increasingly important for harvesting renewable energy as offshore wind farms are developed farther from shore. In particular, the continuous flexural motion of the inter-array dynamic power cable of a floating offshore wind turbine causes tremendous fatigue damage to the cable. Because a subsea power cable consists of helical structures with various components, unlike a mooring line or a steel pipe riser, fatigue analysis of the cable should be performed using special procedures that consider the stick/slip phenomenon. This phenomenon occurs between inner helically wound components when they are tensioned or compressed by environmental loads and floater motions. In particular, vortex-induced vibration (VIV) can be generated by currents and has a significant impact on the fatigue life of the cable. In this study, a procedure for VIV fatigue analysis of the dynamic power cable has been established, and the respective roles of the programs employed, together with their required inputs and outputs, are explained in detail. Case studies are demonstrated under severely sheared currents to investigate the influence of high-mode-number excitation on the amplitude variations of dynamic power cables. Finally, sensitivity studies have been performed to compare dynamic cable design parameters, specifically the structural damping ratio, higher-order harmonics, and lift coefficient tables. In the future, one of the fundamental assumptions of the VIV response assessment will be examined in detail, namely that the VIV amplitudes follow a narrow-banded Gaussian process. Although this approach is consistent with current industry standards, the consistency and potential errors between the Gaussian process and the fatigue damage obtained from deterministic time-domain results must be confirmed to verify the VIV fatigue analysis procedure for slender marine structures.
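The narrow-band Gaussian assumption mentioned in the abstract has a standard closed-form consequence for fatigue damage: with Rayleigh-distributed stress ranges and an S-N curve N(S) = K·S^(-m), Miner's damage sum integrates in closed form. The sketch below is that textbook formula only, not the paper's own procedure or software chain; the numeric values in the usage note are illustrative assumptions.

```python
import math

def narrowband_fatigue_damage(sigma, nu0, T, m, K):
    """Closed-form narrow-band Gaussian fatigue damage (Miner's rule).

    Assumes Rayleigh-distributed stress ranges and an S-N curve
    N(S) = K * S**(-m), giving D = (nu0*T/K) * (2*sqrt(2)*sigma)**m
    * Gamma(1 + m/2).

    sigma : standard deviation of the stress process [MPa]
    nu0   : zero up-crossing frequency [Hz]
    T     : exposure duration [s]
    m, K  : S-N curve slope and intercept
    """
    return (nu0 * T / K) * (2.0 * math.sqrt(2.0) * sigma) ** m \
        * math.gamma(1.0 + m / 2.0)
```

For example, `narrowband_fatigue_damage(10.0, 0.2, 3600.0, 3.0, 1e12)` gives the damage accrued over one hour for an illustrative stress process; doubling K halves the damage, as the formula implies.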

Analysis of the Types of Scientific Models in the Life Domain of Science Textbooks (중등 과학 교과서의 생명 영역에 제시된 과학적 모형들의 유형 분석)

  • Kim, Mi-Young;Kim, Heui-Baik
    • Journal of The Korean Association For Science Education
    • /
    • v.29 no.4
    • /
    • pp.423-436
    • /
    • 2009
  • This study aims to develop an analytic framework for classifying scientific models in science textbooks according to their modes and attributes of representation, and to investigate the types of scientific models presented in the biology sections of science textbooks for the 7th to 10th grades. The results showed that the modes of representation of scientific models are related to the nature of the sub-areas of the biology sections. In general, the iconic model and the symbolic model were dominant, including drawings of organs and explanations of the workings of systems. However, the chapters on 'The Organization of Life' and 'The Continuity of Life' showed a relatively high frequency of the actual model. The theoretical model was presented in part of 'The Continuity of Life' because of its highly abstract characteristics, while the gestural model and the analogical model appeared very rarely. From the perspective of attributes of representation, the frequency of the static model was very high, while that of the dynamic model was very low. Therefore, efforts are required to recognize the properties of scientific concepts more clearly and to develop diverse types of models that can represent those concepts adequately. Such an analysis of model types can reveal the usefulness and limitations of models in representing concepts or phenomena, and can help in designing models that depict particular properties of given concepts. It may also motivate researchers to clarify the correct uses and limits of the scientific models presented in existing science textbooks, and provide useful information for organizing science textbooks according to the revised 7th national science curriculum.

A Frequency Domain Motion Response Analysis of Substructure of Floating Offshore Wind Turbine with Varying Trim (부유식 해상풍력발전기 하부구조물의 종경사각에 따른 주파수 영역 운동응답 분석)

  • In-hyuk Nam;Young-Myung Choi;Ikseung Han;Chaeog Lim;Jinuk Kim;Sung-chul Shin
    • Journal of Navigation and Port Research
    • /
    • v.48 no.3
    • /
    • pp.155-163
    • /
    • 2024
  • As the demand for reducing carbon emissions increases, efforts to reduce fossil fuel usage and research on renewable energy are also increasing. Among the various renewable energy harvesting techniques, the floating offshore wind turbine has several advantages: compared with other technologies, it faces fewer installation limitations due to interference with human activity, and a large wind farm can be constructed in the open ocean. It is therefore important to conduct motion analysis of floating offshore wind turbines in waves during the conceptual design stage. In this study, a frequency-domain motion analysis was conducted on a semi-submersible floating offshore wind turbine, focusing on the effects of varying trim on the motion responses in heave, roll, and pitch. Natural period analysis confirmed that changing the trim angle did not significantly affect the heave and pitch motions, but it did have a regular impact on the roll motion.
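The natural-period analysis above rests on a standard relation: in heave, the restoring stiffness of a floating body is the hydrostatic term ρ·g·A_wp, so the natural period follows from the body mass plus added mass. The sketch below uses that textbook relation only; the function name and example values are illustrative assumptions, not the platform studied in the paper.

```python
import math

RHO_SEAWATER = 1025.0  # seawater density [kg/m^3]
G = 9.81               # gravitational acceleration [m/s^2]

def heave_natural_period(mass, added_mass, waterplane_area,
                         rho=RHO_SEAWATER, g=G):
    """Heave natural period of a floating body.

    T = 2*pi*sqrt((m + a33) / (rho*g*Awp)), where the heave restoring
    coefficient is the hydrostatic stiffness rho*g*Awp.
    """
    stiffness = rho * g * waterplane_area
    return 2.0 * math.pi * math.sqrt((mass + added_mass) / stiffness)
```

Increasing the added mass lengthens the natural period, which is why trim-induced changes in hull immersion can shift the periods of individual motion modes.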

Development of Content System in Practical Problem-Based Home Economics Global Citizenship Education Program for Elementary School Students (실천적 문제 중심 프로그램 개발 과정에 따른 초등 가정과 세계시민교육 내용체계 개발)

  • Kwon, Boeun;Yu, Nan Sook
    • Journal of Korean Home Economics Education Association
    • /
    • v.35 no.4
    • /
    • pp.81-98
    • /
    • 2023
  • The purpose of this study is to develop a content system for a global citizenship education program in the context of the 2022 revised home economics curriculum, recognizing the need for global citizenship education. The study is grounded in a practical problem-based program development process. In the analysis stage, the educational trend calling for global citizenship education in the national curriculum was elucidated by analyzing the 2022 revised curriculum documents and future-education documents related to global citizenship education, and a framework for global citizenship education goals within the home economics curriculum was designed. Accordingly, the educational goal of the program, 'cultivation of family competencies linked to global citizenship competencies,' was established by posing the perennial question, 'As a global citizen, what actions should I take to make sustainable choices related to the living environment?' In the design stage, four key ideas in the 'Living Environment and Sustainable Choices' domain of the 2022 revised elementary home economics curriculum were analyzed to derive practical problems. In the development stage, content elements were derived based on four criteria. This study is significant in that it analyzes the need for global citizenship education within the home economics context by examining documents related to global citizenship competencies for the future, the general framework of the 2022 revised curriculum, and the home economics curriculum.

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.1-16
    • /
    • 2015
  • Traffic accidents have been one of the major causes of death worldwide for the last several decades. According to statistics from the World Health Organization, approximately 1.24 million deaths occurred on the world's roads in 2010. To reduce future traffic accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, and driver training programs, and records of traffic accidents are generated and maintained for this purpose. To make these records meaningful and effective, it is necessary to analyze the relationships between traffic accidents and related factors, including vehicle design, road design, weather, and driver behavior; insights derived from such analyses can inform accident prevention. Traffic accident data mining is the activity of finding useful, not-yet-well-known knowledge about such relationships. Many studies on mining accident data have been reported over the past two decades. Most have focused on predicting accident risk from accident-related factors, using supervised learning methods such as decision trees, logistic regression, k-nearest neighbor, and neural networks. However, the prediction models derived by these algorithms are too complex for people to understand, because the main purpose of these algorithms is prediction, not explanation of the data. Some studies use unsupervised clustering algorithms to divide the data into several groups, but the derived groups themselves are still not easy to interpret, so additional analytic work is necessary. Rule-based learning methods are adequate when we want to derive knowledge about the target domain in a comprehensible form: they derive a set of if-then rules that represent the relationship between the target feature and the other features.
Rules are fairly easy for people to interpret, so they can provide insight and comprehensible results. Association rule learning and subgroup discovery are representative rule-based learning methods for descriptive tasks. These two families of algorithms have been used in a wide range of areas, from transaction analysis and accident data analysis to the detection of statistically significant patient risk groups and the discovery of key persons in social communities. We use both association rule learning and subgroup discovery to find useful patterns in a traffic accident dataset with many features, including driver profile, accident location, accident type, vehicle information, and regulation violations. Association rule learning, an unsupervised method, searches for frequent item sets in the data and translates them into rules. In contrast, subgroup discovery is a supervised method that discovers rules for user-specified concepts satisfying a certain degree of generality and unusualness. Depending on which aspect of the data we focus on, we may combine multiple relevant features of interest into a synthetic target feature and give it to the rule learning algorithms. After a set of rules is derived, postprocessing steps make the ruleset more compact and easier to understand by removing uninteresting or redundant rules. We conducted a set of experiments mining our traffic accident data in both unsupervised and supervised modes to compare these rule-based learning algorithms. The experiments reveal that association rule learning, in its purely unsupervised mode, can discover some hidden relationships among the features.
Under a supervised setting with a combinatorial target feature, however, subgroup discovery finds good rules much more easily than association rule learning, which requires considerable effort to tune its parameters.
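The two rule-learning families compared above can be illustrated in a few lines: frequent itemsets are translated into if-then rules scored by confidence, while subgroup discovery scores a condition against a chosen target with a quality measure such as weighted relative accuracy (WRAcc). This is a minimal pure-Python sketch on toy accident-like records, not the authors' implementation; the feature names are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_len=2):
    """Support of every itemset up to max_len; keep those >= min_support."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_len + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

def association_rules(freq, min_conf):
    """Turn frequent pairs into rules a -> b with conf = supp(a,b)/supp(a)."""
    rules = []
    for itemset, supp in freq.items():
        if len(itemset) != 2:
            continue
        for a, b in (itemset, itemset[::-1]):
            conf = supp / freq[(a,)]   # (a,) is frequent whenever (a, b) is
            if conf >= min_conf:
                rules.append((a, b, supp, conf))
    return rules

def wracc(transactions, condition, target):
    """Subgroup quality: WRAcc = p(cond) * (p(target | cond) - p(target))."""
    n = len(transactions)
    covered = [t for t in transactions if condition in t]
    p_cond = len(covered) / n
    p_target = sum(1 for t in transactions if target in t) / n
    p_target_cond = (sum(1 for t in covered if target in t) / len(covered)
                     if covered else 0.0)
    return p_cond * (p_target_cond - p_target)
```

On records such as `["night", "speeding", "injury"]`, `association_rules` recovers patterns like speeding → night with their support and confidence, while `wracc` ranks a single condition against a user-chosen target, mirroring the unsupervised/supervised contrast drawn in the abstract.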

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.241-254
    • /
    • 2011
  • Financial time-series forecasting is one of the most important issues because it is essential for the risk management of financial institutions. Researchers have therefore tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have been popularly applied in this research area because they do not require huge training data and have a low possibility of overfitting. However, a user must determine several design factors heuristically in order to use an SVM, chiefly the selection of an appropriate kernel function and its parameters and proper feature subset selection. Beyond these factors, proper selection of an instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection tries to choose a proper instance subset from the original training data; it may be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the kernel parameters simultaneously; we call this model ISVM (SVM with instance selection). Experiments on stock market data were implemented using ISVM, in which the GA searches for optimal or near-optimal kernel parameter values and relevant instances for the SVM. The chromosomes therefore contain two sets of codes: one for the kernel parameters and one for instance selection.
For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1; 50 generations are permitted as the stopping condition. The application data consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), with a total of 2,218 trading days. We separate the data into training, test, and hold-out subsets of 1,056, 581, and 581 samples, respectively. This study compares ISVM with several comparative models: logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with kernel parameters optimized by the genetic algorithm (PSVM). The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data. For ISVM, only 556 of the 1,056 original training instances are used to produce this result. In addition, the two-sample test for proportions is used to examine whether ISVM significantly outperforms the other models: ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM, and PSVM at the 5% level.
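The instance-selection chromosome described above, one bit per training instance (the kernel-parameter codes are omitted here), can be sketched with a simple genetic algorithm. As an assumption for brevity, a 1-NN classifier stands in for the SVM, and smaller defaults replace the paper's GA settings (population 50, crossover 0.7, mutation 0.1, 50 generations); the elitism, one-point crossover, and bit-flip mutation below are a generic GA, not the authors' exact procedure.

```python
import random

def nn_predict(train, x):
    """1-NN prediction on 2-D points; train is [((x1, x2), label), ...]."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(train, key=lambda t: dist2(t[0], x))[1]

def fitness(mask, train, valid):
    """Validation accuracy of 1-NN restricted to the selected instances."""
    subset = [t for t, keep in zip(train, mask) if keep]
    if not subset:
        return 0.0
    correct = sum(nn_predict(subset, x) == y for x, y in valid)
    return correct / len(valid)

def ga_instance_selection(train, valid, pop_size=20, gens=30,
                          cx_rate=0.7, mut_rate=0.1, seed=0):
    """Evolve a boolean mask over training instances (one gene per instance)."""
    rng = random.Random(seed)
    n = len(train)
    pop = [[rng.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda m: fitness(m, train, valid),
                        reverse=True)
        nxt = scored[:2]                      # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = rng.sample(scored[:10], 2)  # parents from the fitter half
            if rng.random() < cx_rate:         # one-point crossover
                cut = rng.randrange(1, n)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [(not g) if rng.random() < mut_rate else g
                     for g in child]           # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=lambda m: fitness(m, train, valid))
```

On toy data with a deliberately mislabeled training point, the evolved mask tends to drop the distorting instance, which is exactly the refinement effect the abstract attributes to instance selection.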

Study for making movie poster applied Augmented Reality (증강현실 영화포스터 제작연구)

  • Lee, Ki Ho
    • Cartoon and Animation Studies
    • /
    • s.48
    • /
    • pp.359-383
    • /
    • 2017
  • Since the first known poster appeared in Egypt 3,000 years ago, the invention of printing and the development of civilization have accelerated poster production technology. Poster expression has likewise developed, from attempts to express artistic sensibility in a simple arrangement of characters to an art form that is now the domain of professional designers. However, technological development in poster expression has remained two-dimensional and dependent on printing alone, unconnected to the changes in the modern multimedia-based ICT environment. In particular, among the many kinds of posters, movie posters, the only kind whose subject is video, are still printed on paper; many attempts have been made so far, but the movie industry still does not consider ICT integration at all. This study started from the observation that movie posters deal with video, and attempted to introduce augmented reality to apply the dynamic images of a movie to the static poster. For the graduation exhibition of a media design major at a Korean university, posters promoting the students' video works were designed and printed in the form of commercial film posters. Among them, six works considered suitable for augmented reality were selected, and augmented reality was introduced and exhibited. The content matched to the poster through a mobile device plays a scene from the video over the poster while the text of the original poster is kept as it is, producing a moving poster like the 'wanted' posters in the movie "Harry Potter". To produce this augmented reality poster, we applied augmented reality to posters of existing commercial films produced in two different formats and found ways to strengthen the characteristics of the AR content.
Through this, we came to understand poster design suitable for AR representation, and the technical expression needed for stable operation of augmented reality can be summarized in the matching process of augmented reality content production.

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieval of similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of that meta-model. Next, we need a sizable number of queries and their corresponding correct answers from the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes.
We use simple text-retrieval similarity measures such as TF-IDF and Levenshtein edit distance, and we utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify relationships between a semantic process and its subcomponents, this information can be utilized in calculating similarities between processes; Dice's coefficient and the Jaccard similarity measure are used to calculate the overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F-measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance, both focused on the similarity of process names and descriptions, perform better than the other devised methods. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show a greater coefficient than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in both experiments. For retrieving semantic processes, it is therefore better to consider diverse aspects of process similarity, such as process structure and the values of process attributes.
We generate semantic process data and a retrieval-experiment dataset from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the retrieval results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with datasets from other domains, and since there are many similarity values from diverse measures, we may find better ways to identify relevant processes by applying these values simultaneously.
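The string and set measures named above have compact standard definitions: Levenshtein edit distance via dynamic programming over prefix lengths, and the Jaccard and Dice coefficients over element sets (here standing in for process subcomponents). This is a minimal sketch of those textbook measures, not the paper's Lev-TFIDF-JaccardAll combination.

```python
def levenshtein(a, b):
    """Edit distance between strings a and b, two-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def jaccard(s, t):
    """Jaccard coefficient |s & t| / |s | t| over two iterables of elements."""
    s, t = set(s), set(t)
    return len(s & t) / len(s | t) if s | t else 1.0

def dice(s, t):
    """Dice coefficient 2|s & t| / (|s| + |t|) over two iterables."""
    s, t = set(s), set(t)
    return 2 * len(s & t) / (len(s) + len(t)) if s or t else 1.0
```

For example, `levenshtein("kitten", "sitting")` is 3, while Jaccard and Dice score the same pair of overlapping sets differently because Dice weights the intersection twice.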

Design of Translator for generating Secure Java Bytecode from Thread code of Multithreaded Models (다중스레드 모델의 스레드 코드를 안전한 자바 바이트코드로 변환하기 위한 번역기 설계)

  • 김기태;유원희
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2002.06a
    • /
    • pp.148-155
    • /
    • 2002
  • Multithreaded models improve the efficiency of parallel systems by combining the inner parallelism, asynchronous data availability, and locality of the von Neumann model. Such a model executes thread code generated by a compiler, and the quality of that code is determined by the method of generation. However, multithreaded models have the drawback that the execution model is restricted to a specific platform. Java, by contrast, is platform independent, so if we can translate thread code into Java bytecode, we can use the advantages of multithreaded models on many platforms. Java executes Java bytecode, the intermediate language format for the Java virtual machine; the bytecode plays the role of an intermediate language in the translator, and the Java virtual machine works as the translator's back end. However, Java bytecode translated from multithreaded models has the drawback that it is not secure. In this paper, platform-independent thread code is made executable on the Java virtual machine: we design and implement a translator that converts thread code of multithreaded models into Java bytecode and checks the resulting bytecode for security problems.
