• Title/Summary/Keyword: 모델기반 설계

Search Results: 4,434 · Processing Time: 0.034 seconds

An analysis of students' online class preference depending on the gender and levels of school using Apriori Algorithm (Apriori 알고리즘을 활용한 학습자의 성별과 학교급에 따른 온라인 수업 유형 선호도 분석)

  • Kim, Jinhee;Hwang, Doohee;Lee, Sang-Soog
    • Journal of Digital Convergence
    • /
    • v.20 no.1
    • /
    • pp.33-39
    • /
    • 2022
  • This study investigates students' online class preferences depending on gender and school level. To this end, a survey was conducted on 4,803 elementary, middle, and high school students in 17 regions nationwide. The 4,524 valid responses were then analyzed using the Apriori algorithm to identify association patterns between online class preference, gender, and school level. As a result, a total of 16 rules were derived: 7 from elementary school students, 4 from middle school students, and 5 from high school students. Specifically, elementary school male students preferred software-based classes, whereas elementary school female students preferred maker-based classes. In middle school, both male and female students preferred virtual experience-based classes. High school students, on the other hand, showed a higher preference for subject-specific lecture-based classes. The findings can serve as empirical evidence for explaining the needs of online classes as perceived by K-12 students, and as basic research suggesting areas of improvement for diversifying online classes. Future studies could conduct in-depth analyses of the development of various online class activities and models, the design of online class platforms, and female students' career motivation in science and technology.
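The survey data itself is not reproduced here, but the kind of Apriori-style rule mining described can be sketched on toy transactions (all item names and thresholds below are hypothetical):

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.5, min_confidence=0.9):
    """Derive single-antecedent association rules X -> Y (toy Apriori sketch)."""
    n = len(transactions)
    counts = {}
    for t in transactions:
        items = sorted(t)
        for size in (1, 2):          # itemsets up to size 2 suffice for this sketch
            for combo in combinations(items, size):
                counts[combo] = counts.get(combo, 0) + 1
    frequent = {k: v / n for k, v in counts.items() if v / n >= min_support}
    rules = []
    for itemset, support in frequent.items():
        if len(itemset) < 2:
            continue
        for antecedent in itemset:   # singleton support >= pair support, so key exists
            confidence = support / frequent[(antecedent,)]
            if confidence >= min_confidence:
                consequent = next(i for i in itemset if i != antecedent)
                rules.append((antecedent, consequent, support, confidence))
    return rules

# Toy transactions: {gender, school level, preferred class type}
transactions = [
    {"male", "elementary", "software-based"},
    {"male", "elementary", "software-based"},
    {"female", "elementary", "maker-based"},
]
rules = apriori_rules(transactions)
```

On these three toy records the miner recovers, among others, the rule male → software-based with confidence 1.0, mirroring the shape (not the content) of the 16 rules reported.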

3D Architecture Modeling and Quantity Estimation using SketchUp (스케치업을 활용한 3D 건축모델링 및 물량산출)

  • Kim, Min Gyu;Um, Dae Yong
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.6
    • /
    • pp.701-708
    • /
    • 2017
  • Construction cost is estimated from the drawings at the design stage, and the constructor seeks construction methods that are both efficient and appropriate to the budget. Accurate quantity estimation and budgeting are critical to determining whether a project is profitable. However, since this process mostly depends on manpower and 2D drawings, errors are likely to occur, and BIM (Building Information Modeling) software, which can automate it, is very expensive and difficult to apply in the field. In this study, 3D architectural modeling was performed using SketchUp, a 3D modeling package, and a methodology for quantity estimation is suggested. As a result, 3D modeling was performed effectively using the 2D drawings of buildings, and based on the modeling results it was possible to calculate the difference between the quantities estimated from 2D drawings and from the 3D model. The research suggests that 3D modeling with SketchUp and the associated quantity calculation can prevent the errors of the conventional 2D method. If the applicability of the method is verified through continuous research, it will contribute to increasing the efficiency of architectural modeling and quantity estimation work.
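The take-off workflow described — read member geometry from the 3D model, sum volumes, compare with the manual 2D estimate — can be illustrated in a language-agnostic sketch (SketchUp itself is scripted through its Ruby API; all dimensions and the 2D take-off figure below are hypothetical):

```python
# Quantity take-off sketch: sum member volumes from a 3D model's component list
# and compare against a manual estimate from 2D drawings (figures hypothetical).

def concrete_volume(members):
    """Total concrete volume in m^3 from (width, depth, length) tuples in metres."""
    return sum(w * d * l for w, d, l in members)

model_members = [
    (0.4, 0.4, 3.0),   # column
    (0.4, 0.4, 3.0),   # column
    (0.3, 0.5, 6.0),   # beam
]
model_qty = concrete_volume(model_members)        # 1.86 m^3 from the 3D model
manual_qty = 1.95                                 # hypothetical 2D take-off, m^3
discrepancy_pct = abs(model_qty - manual_qty) / manual_qty * 100
```

Flagging members whose model and drawing quantities diverge beyond a tolerance is one way such a comparison catches the manual-take-off errors the paper describes.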

Corona 19 Crisis and Data-State: Korean Data-State and Health Crisis Governance (코로나19 위기와 데이터 국가: 한국의 데이터 국가와 보건위기 거버넌스)

  • Jang, Hoon
    • Korean Journal of Legislative Studies
    • /
    • v.26 no.3
    • /
    • pp.125-159
    • /
    • 2020
  • Amid the global pandemic of COVID-19, the Korean government's response has drawn wide attention among social scientists as well as medical researchers, with the roles of the Korean state and civil society attracting particular attention. Yet this paper criticizes extant studies of the Korean case that focus on the extensive intervention of a strong state and the subjective attitude of Korean citizens in coping with COVID-19: the concept of the strong state lacks social-scientific specification, and the image of subjective citizens does not match Korean realities. This article argues that the Korean state's capacity to collect and mobilize digital data offers a better explanation of the successful response to the pandemic. First, the Korean state is the ultimate coordinator in collecting, analyzing, and applying big data about the spread of COVID-19 through its huge dataveillance network. This role has been based on a relevant legal framework, well-prepared manuals, and cooperation with civic actors and companies; in other words, Korean digital dataveillance has demonstrated transparency and cooperative governance. Second, this dataveillance capacity has deep roots in the long-term development of the Korean state's big data management: the Korean state has evolved over about thirty years while extending digital data networks across government, companies, and the private sector. Third, the relationship between the Korean state's dataveillance and civil society can be characterized as a state-centered push model. This model produces highly effective governmental responses to the COVID-19 crisis but falls short of building social consensus balancing individual freedom, human rights, and effective containment policies. That is, communitarian solidarity among citizens has not yet been a major factor in Korea's successful response.

A Study on Intelligent Self-Recovery Technologies for Cyber Assets to Actively Respond to Cyberattacks (사이버 공격에 능동대응하기 위한 사이버 자산의 지능형 자가복구기술 연구)

  • Se-ho Choi;Hang-sup Lim;Jung-young Choi;Oh-jin Kwon;Dong-kyoo Shin
    • Journal of Internet Computing and Services
    • /
    • v.24 no.6
    • /
    • pp.137-144
    • /
    • 2023
  • Cyberattack technology is evolving to an unpredictable degree, and an attack is something that can happen 'at any time' rather than 'someday'. Infrastructure that is becoming hyper-connected and global through cloud computing and the Internet of Things is an environment where cyberattacks can be more damaging than ever, and such attacks are ongoing. Even when damage occurs from external causes such as cyberattacks or natural disasters, intelligent self-recovery must evolve from a cyber-resilience perspective to minimize the downtime of cyber assets (OS, WEB, WAS, DB). In this paper, we propose an intelligent self-recovery technology to ensure sustainable cyber resilience when cyber assets fail to function properly due to a cyberattack. The original state and update history of cyber assets are managed in real time using a timeslot design and snapshot backup technology. It is necessary to automatically detect damage in conjunction with a commercial file-integrity-monitoring program and to minimize the downtime of cyber assets by intelligently analyzing the correlation between backup data and damaged files so that assets self-recover to an optimal state. In future work, we plan to research a pilot system that applies the functions of the proposed self-recovery technology, together with an operating model that can learn and analyze self-recovery strategies appropriate to cyber assets in damaged states.
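The timeslot/snapshot idea — keep hashed snapshots per timeslot, detect tampering by hash mismatch, restore the newest intact copy — can be sketched as follows (a simplified stand-in for the paper's design; the class and all names are hypothetical):

```python
import hashlib

# Sketch of hash-based integrity monitoring with snapshot restore -- a simplified
# stand-in for the paper's timeslot/snapshot design, not its actual implementation.

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class SelfRecoveringStore:
    """Keeps per-timeslot snapshots; restores the newest intact one on tampering."""

    def __init__(self, initial: bytes):
        self.current = initial
        self.snapshots = [(0, initial, fingerprint(initial))]  # (slot, data, hash)

    def snapshot(self, timeslot: int):
        self.snapshots.append((timeslot, self.current, fingerprint(self.current)))

    def verify_and_recover(self) -> bool:
        """Return True if tampering was detected and the asset was restored."""
        if fingerprint(self.current) == self.snapshots[-1][2]:
            return False                          # asset intact, nothing to do
        for _, data, digest in reversed(self.snapshots):
            if fingerprint(data) == digest:       # newest snapshot that checks out
                self.current = data
                return True
        raise RuntimeError("no intact snapshot available")

store = SelfRecoveringStore(b"config v1")
store.current = b"config v2"
store.snapshot(timeslot=1)
store.current = b"tampered by attacker"           # simulated cyberattack
recovered = store.verify_and_recover()            # detects mismatch, rolls back
```

In the paper's setting the mismatch signal would come from the commercial file-integrity monitor rather than an in-process hash check.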

Dynamic Shear Behavior Characteristics of PHC Pile-cohesive Soil Ground Contact Interface Considering Various Environmental Factors (다양한 환경인자를 고려한 PHC 말뚝-사질토 지반 접촉면의 동적 전단거동 특성)

  • Kim, Young-Jun;Kwak, Chang-Won;Park, Inn-Joon
    • Journal of the Korean Geotechnical Society
    • /
    • v.40 no.1
    • /
    • pp.5-14
    • /
    • 2024
  • PHC piles demonstrate superior resistance to compression and bending moments, and their factory-based production enhances quality assurance and management processes. Despite these advantages that have resulted in widespread use in civil engineering and construction projects, the design process frequently relies on empirical formulas or N-values to estimate the soil-pile friction, which is crucial for bearing capacity, and this reliance underscores a significant lack of experimental validation. In addition, environmental factors, e.g., the pH levels in groundwater and the effects of seawater, are commonly not considered. Thus, this study investigates the influence of vibrating machine foundations on PHC pile models in consideration of the effects of varying pH conditions. Concrete model piles were subjected to a one-month conditioning period in different pH environments (acidic, neutral, and alkaline) and under the influence of seawater. Subsequent repeated direct shear tests were performed on the pile-soil interface, and the disturbed state concept was employed to derive parameters that effectively quantify the dynamic behavior of this interface. The results revealed a descending order of shear stress in neutral, acidic, and alkaline conditions, with the pH-influenced samples exhibiting a more pronounced reduction in shear stress than those affected by seawater.

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.129-143
    • /
    • 2012
  • Recently, the rapid progress of standardized web technologies and the worldwide proliferation of web users have brought an explosive increase in the production and consumption of documents on the web. In addition, most companies produce, share, and manage a huge number of documents needed to perform their business, and they also selectively collect, store, and manage web documents published on the web. Along with this growth in the documents that companies must manage, the need for solutions that locate documents accurately among huge information sources has increased, and the search engine solution market is expanding accordingly. The most important function a search engine provides is locating accurate documents within a huge information source. The principal metric for evaluating search accuracy is relevance, which consists of two measures: precision and recall. Precision is a measure of exactness - what percentage of the retrieved results are actually relevant - whereas recall is a measure of completeness - what percentage of the truly relevant documents are retrieved. The two measures are weighted differently by domain: for exhaustive searches such as patent documents and research papers it is better to increase recall, whereas for small-scale collections it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the keywords entered by a user. This method has the virtue of locating all matching documents quickly, even when many search words are entered.
However, it has a fundamental limitation: it does not consider the search intention of the user, and therefore retrieves irrelevant results along with relevant ones, so additional time and effort are needed to sort the relevant results out of everything the engine returns. That is, keyword search can increase recall, but it makes it difficult to locate the documents a user actually wants because it provides no means of understanding the user's intention and reflecting it in the search. This research therefore suggests a new method that combines an ontology-based search solution with the core functionality of existing search engines, enabling a search engine to provide better results by inferring the search intention of the user. To that end, we build an ontology containing the concepts of a specific domain and the relationships among them. The ontology is used to infer synonyms of the search keywords entered by a user, so that the user's intention is reflected in the search more actively than in existing engines. Based on the proposed method we implement a prototype search system and test it in the patent domain, searching for documents relevant to a patent. The experiment shows that our system improves both recall and precision and raises search productivity through an improved user interface that lets a user interact with the system effectively. In future research, we will validate the performance of our prototype by comparison with other search engine solutions and extend the applied domain to other information search settings such as portals.
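The two relevance measures defined in the abstract are easy to make concrete (document IDs and counts below are hypothetical):

```python
def precision_recall(retrieved, relevant):
    """Precision = |retrieved ∩ relevant| / |retrieved|; recall divides by |relevant|."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    return len(hits) / len(retrieved), len(hits) / len(relevant)

# Hypothetical patent search: 8 documents returned, 10 truly relevant, 6 in common.
retrieved = [f"doc{i}" for i in range(8)]        # doc0 .. doc7
relevant = [f"doc{i}" for i in range(2, 12)]     # doc2 .. doc11
p, r = precision_recall(retrieved, relevant)     # p = 6/8 = 0.75, r = 6/10 = 0.6
```

A patent searcher tuning for exhaustiveness would accept a lower precision to push recall toward 1; a small-collection user typically wants the opposite trade-off.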

Dual Codec Based Joint Bit Rate Control Scheme for Terrestrial Stereoscopic 3DTV Broadcast (지상파 스테레오스코픽 3DTV 방송을 위한 이종 부호화기 기반 합동 비트율 제어 연구)

  • Chang, Yong-Jun;Kim, Mun-Churl
    • Journal of Broadcast Engineering
    • /
    • v.16 no.2
    • /
    • pp.216-225
    • /
    • 2011
  • Following the proliferation of three-dimensional video content and displays, many terrestrial broadcasters have been preparing stereoscopic 3DTV services. In terrestrial stereoscopic broadcasting, it is difficult to code and transmit two video sequences while sustaining quality as high as 2DTV broadcast, because of the limited bandwidth defined by existing digital TV standards such as ATSC. Thus, a terrestrial 3DTV system with a heterogeneous video codec, in which the left and right images are coded with MPEG-2 and H.264/AVC respectively, is considered in order to achieve both high-quality service and compatibility with existing 2DTV viewers. Without significant changes to current terrestrial broadcasting systems, we propose a joint rate control scheme for stereoscopic 3DTV service based on this heterogeneous dual-codec system. The proposed scheme applies to the MPEG-2 encoder the quadratic rate-quantization model adopted in H.264/AVC. The controller is then designed so that the sum of the left and right bitstreams meets the bandwidth requirement of the broadcasting standards while the sum of image distortions is minimized by adjusting the quantization parameters obtained from the proposed optimization scheme. In addition, the optimization includes a condition that keeps the quality difference between the left and right images around a desired level, in order to mitigate negative effects on the human visual system. Experimental results demonstrate that the proposed scheme outperforms rate control in which each codec runs its own algorithm independently: PSNR increases by 2.02%, the average absolute quality difference decreases by 77.6%, and the variance of the quality difference is reduced by 74.38%.
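The quadratic rate-quantization model mentioned above relates frame bits to the quantization step roughly as R(Q) = a/Q + b/Q²; given a bit budget, the controller inverts the model to choose Q. A minimal sketch (the coefficients a, b are hypothetical, not values fitted in the paper):

```python
import math

# Sketch of the quadratic rate-quantization model R(Q) = a/Q + b/Q^2 used for
# rate control in H.264/AVC; coefficients below are hypothetical illustrations.

def bits_for_q(q: float, a: float, b: float) -> float:
    return a / q + b / (q * q)

def q_for_target_bits(target: float, a: float, b: float) -> float:
    """Solve b*x^2 + a*x - target = 0 with x = 1/Q, taking the positive root."""
    x = (-a + math.sqrt(a * a + 4 * b * target)) / (2 * b)
    return 1 / x

a, b = 1200.0, 4000.0         # hypothetical model coefficients
target = 100.0                # bit budget assigned to the frame
q = q_for_target_bits(target, a, b)
```

The joint controller would split the total 3DTV budget between the MPEG-2 and H.264/AVC streams and invert each stream's model this way, subject to the quality-difference constraint.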

A stratified random sampling design for paddy fields: Optimized stratification and sample allocation for effective spatial modeling and mapping of the impact of climate changes on agricultural system in Korea (농지 공간격자 자료의 층화랜덤샘플링: 농업시스템 기후변화 영향 공간모델링을 위한 국내 농지 최적 층화 및 샘플 수 최적화 연구)

  • Minyoung Lee;Yongeun Kim;Jinsol Hong;Kijong Cho
    • Korean Journal of Environmental Biology
    • /
    • v.39 no.4
    • /
    • pp.526-535
    • /
    • 2021
  • Spatial sampling design plays an important role in GIS-based modeling studies because it increases modeling efficiency while reducing the cost of sampling. In the field of agricultural systems, research demand for modeling based on high-resolution spatial data to predict and evaluate climate change impacts is growing rapidly, and with it the need for and importance of spatial sampling design. The purpose of this study was to design a spatial sample of the paddy fields of Korea (11,386 grids at 1 km resolution) for use in agricultural spatial modeling. A stratified random sampling design was developed and applied to the 2030s, 2050s, and 2080s under the RCP 4.5 and RCP 8.5 scenarios. Twenty-five weather and four soil characteristics were used as stratification variables. Stratification and sample allocation were optimized to obtain the minimum sample size under given precision constraints for 16 target variables, such as crop yield, greenhouse gas emission, and pest distribution. The precision and accuracy of the sampling were evaluated through sampling simulations based on the coefficient of variation (CV) and relative bias, respectively. As a result, the paddy fields could be optimally stratified into 5 to 21 strata with 46 to 69 samples. The evaluation showed that the target variables stayed within the precision constraints (CV < 0.05, except for crop yield) with low bias (below 3%). These results can contribute to reducing sampling cost and computation time while retaining high predictive power, and the design is expected to be widely used as a representative sample grid in agricultural spatial modeling studies.
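The study's optimizer is more elaborate, but the core step of allocating a fixed sample budget across strata can be sketched with classical Neyman allocation (the stratum sizes and standard deviations below are hypothetical, not the study's optimized strata for the 11,386 paddy grids):

```python
# Neyman allocation sketch: distribute a fixed sample budget across strata in
# proportion to N_h * S_h (stratum size times within-stratum standard deviation).
# All stratum figures are hypothetical illustrations.

def neyman_allocation(sizes, stds, total_samples):
    weights = [n_h * s_h for n_h, s_h in zip(sizes, stds)]
    total = sum(weights)
    return [round(total_samples * w / total) for w in weights]

sizes = [6000, 4000, 1386]    # 1-km grids per stratum (hypothetical split)
stds = [0.8, 1.5, 2.5]        # within-stratum SD of a target variable
alloc = neyman_allocation(sizes, stds, total_samples=60)
```

Variable strata get proportionally more of the budget; a small, heterogeneous stratum can out-draw a large, homogeneous one, which is what keeps the CV of each target variable under its constraint at a small total sample size.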

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • Accurate stock market forecasting has long been studied in academia, and various forecasting models using diverse techniques now exist; recently, many attempts have been made to predict stock indices with machine learning methods, including deep learning. While both fundamental analysis and technical analysis are used in traditional stock trading, technical analysis is better suited to short-term prediction and to statistical and mathematical modeling. Most studies based on technical indicators have framed prediction as a binary classification - rising or falling - of future market movement (usually the next trading day). However, binary classification has many drawbacks for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by expanding the existing binary scheme into a multi-class one over stock index trends (upward, boxed, downward). To solve this multi-classification problem, rather than relying on techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA), or artificial neural networks (ANN), we propose an optimization model that uses a genetic algorithm as a wrapper around multi-class support vector machines (MSVM), which have shown superior prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel parameters of the MSVM but also the selection of input variables (feature selection) and of training cases (instance selection).
To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method is more effective than conventional multi-class SVM, previously known to give the best prediction performance, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, instance selection proved to play a very important role in predicting the stock index trend, contributing more to the improvement of the model than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's KOSPI200 stock index. Our research primarily aims at predicting trend segments in order to capture trading signals or short-term trend transition points. The experimental data set includes technical indicators such as price and volatility indices (2004-2017) and macroeconomic data (interest rates, exchange rates, the S&P 500, etc.) for the KOSPI200 index. Using statistical methods including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for validation. Comparative experiments were conducted against MDA, MLOGIT, CBR, ANN, and MSVM; the MSVM adopted the one-against-one (OAO) approach, known as the most accurate among MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed GA-MSVM performs at a significantly higher level than all comparative models.
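As a rough illustration of the GA-wrapper idea, a chromosome can encode which of the 15 candidate indicators to keep; the real GA-MSVM scores each chromosome by MSVM prediction performance, which the sketch below replaces with a synthetic fitness (the "informative" feature set and all GA parameters are hypothetical):

```python
import random

# GA-wrapper sketch: a chromosome is a bitmask over the 15 candidate indicators.
# A synthetic fitness stands in for MSVM cross-validation accuracy.

random.seed(42)
N_FEATURES = 15
INFORMATIVE = {0, 3, 4, 7, 11}        # hypothetical truly useful indicators

def fitness(mask):
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.1 * sum(mask)     # reward hits, penalize bloated feature sets

def evolve(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:              # bit-flip mutation
                child[random.randrange(N_FEATURES)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
selected = {i for i, bit in enumerate(best) if bit}
```

Instance selection works the same way with a second bitmask over training cases concatenated to the feature mask, and kernel parameters can be appended as encoded genes.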

Modeling and Intelligent Control for Activated Sludge Process (활성슬러지 공정을 위한 모델링과 지능제어의 적용)

  • Cheon, Seong-pyo;Kim, Bongchul;Kim, Sungshin;Kim, Chang-Won;Kim, Sanghyun;Woo, Hae-Jin
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.22 no.10
    • /
    • pp.1905-1919
    • /
    • 2000
  • The main motivation of this research is to develop an intelligent control strategy for the Activated Sludge Process (ASP). ASP is a complex, nonlinear dynamic system because of the characteristics of the wastewater, changes in influent flow rate, weather conditions, and so on. The mathematical model of ASP also includes uncertainties that are ignored or not considered by the process engineer or controller designer. ASP is generally controlled by a PID controller with fixed proportional, integral, and derivative gains, adjusted by experts with long experience of the process. In this paper, an ASP model based on Matlab® 5.3/Simulink® 3.0 is developed. The performance of the model is tested against IWA (International Water Association) and COST (European Cooperation in the field of Scientific and Technical Research) data, which include steady-state results over 14 days. An advantage of the developed model is that the user can easily modify or change the controller through the graphical user interface, and as a typical nonlinear system the model can be used to simulate and test controllers for educational purposes. Various control methods are applied to the ASP model and their results are compared in order to apply the proposed intelligent control strategy to a real ASP. Three control methods are designed and tested: a conventional PID controller, a fuzzy logic approach that modifies setpoints, and a fuzzy-PID method. The proposed fuzzy-logic-based setpoint changer shows better performance and robustness under disturbances. An objective function can be defined and included in the proposed control strategy to improve effluent water quality and reduce operating cost in a real ASP.
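The PID baseline in the comparison can be sketched on a toy plant; the first-order process, gains, and setpoint below are hypothetical stand-ins, not the paper's Simulink model or the IWA benchmark:

```python
# Minimal discrete PID loop on a toy first-order plant, in the spirit of the
# paper's conventional-PID baseline. All numbers are hypothetical illustrations.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt, x = 0.05, 0.0                       # time step and initial plant state
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=dt)
setpoint = 2.0                          # e.g. a dissolved-oxygen target, mg/L
for _ in range(400):                    # 20 s of simulated time
    u = pid.step(setpoint, x)
    x += dt * (-x + u)                  # first-order plant x' = -x + u (Euler)
```

The fuzzy setpoint changer proposed in the paper would sit above such a loop, adjusting the setpoint from rules on influent conditions rather than leaving it fixed.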
