• Title/Summary/Keyword: optimization-based framework


Analysis on General High School Locations for Opening Common Curriculum Courses based on High School Credit System: Focusing on Seoul (고교학점제에 따른 일반고의 공동교육과정 과목 개설학교 입지 분석: 서울시를 중심으로)

  • Kim, Sung-Yeun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.3
    • /
    • pp.148-159
    • /
    • 2021
  • This study searched for optimal locations of general high schools to offer common curriculum courses under the high school credit system, minimizing students' travel distance and maximizing student coverage, using Seoul as an illustration. The main results were as follows. First, the P-median results showed that students' average travel distance fell below 625 m when more than 30% of general high schools offered the common curriculum courses, and the MCLP results indicated that all students could be accommodated. Second, even if every university located in Seoul opened common curriculum courses, they could not accommodate all students; however, the MCLP showed that more than 20% of the universities could cover the same number of students as all of them. In addition, the Office of Education should support travel to the course-offering universities for students attending high schools in southeastern Seoul and in areas with poor transportation. By suggesting a spatial problem-solving framework based on spatial optimization methods, the study results can serve as basic data for selecting schools to offer common curriculum courses.
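For readers unfamiliar with the two models named in this abstract, the sketch below is a minimal P-median formulation in Python using PuLP; the sizes, distances, and value of p are hypothetical placeholders, not the study's Seoul data. The MCLP (maximal covering location problem) variant would instead maximize the number of students covered within a distance standard.

```python
# Minimal P-median sketch (hypothetical data), in the spirit of the
# facility-location models (P-median, MCLP) used in the study above.
import random
import pulp

random.seed(0)
n_students, n_schools, p = 50, 10, 3  # toy sizes, not the study's data
dist = [[random.uniform(100, 2000) for _ in range(n_schools)]
        for _ in range(n_students)]

prob = pulp.LpProblem("p_median", pulp.LpMinimize)
open_ = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n_schools)]
assign = [[pulp.LpVariable(f"x{i}_{j}", cat="Binary")
           for j in range(n_schools)] for i in range(n_students)]

# Objective: total (equivalently, average) travel distance.
prob += pulp.lpSum(dist[i][j] * assign[i][j]
                   for i in range(n_students) for j in range(n_schools))
for i in range(n_students):
    prob += pulp.lpSum(assign[i]) == 1          # each student assigned once
    for j in range(n_schools):
        prob += assign[i][j] <= open_[j]        # only to an open school
prob += pulp.lpSum(open_) == p                  # exactly p schools open

prob.solve(pulp.PULP_CBC_CMD(msg=False))
avg = pulp.value(prob.objective) / n_students
print(f"average travel distance: {avg:.0f} m")
```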

Regionalization of rainfall-runoff model parameters based on the correlation of regional characteristic factors (지역특성인자의 상호연관성을 고려한 강우-유출모형 매개변수 지역화)

  • Kim, Jin-Guk;Sumyia, Uranchimeg;Kim, Tae-Jeong;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.11
    • /
    • pp.955-968
    • /
    • 2021
  • Water resources planning is routinely based on natural flow, which can be estimated from observed streamflow data or with a long-term continuous rainfall-runoff model. However, watersheds with natural flow are largely limited to areas upstream of dams. For ungauged watersheds in particular, a rainfall-runoff model is calibrated on gauged watersheds and then applied to the ungauged watershed by transferring the associated parameters. In this study, the GR4J rainfall-runoff model was used to regionalize parameters estimated from 14 dam watersheds via an optimization process. To optimize the parameters, a Bayesian approach was applied to quantify parameter uncertainty, and parameter samples drawn from the posterior distribution were used for the regionalization. The relationship between the estimated parameters and topographical factors was first identified, and the dependencies between them were effectively modeled with a Copula function approach to obtain the regionalized parameters. Streamflow predicted with the regionalized parameters agreed well with observations, with a correlation of about 0.8. The proposed regionalization framework can effectively simulate streamflow for ungauged watersheds using the regionalized parameters, along with the associated uncertainty, informed by basin characteristics.
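As a rough illustration of the copula step described above, here is a minimal Python sketch that links one calibrated GR4J parameter to one catchment attribute through a Gaussian copula and samples the parameter for an ungauged basin. The data, the Gaussian copula choice, and the attribute (catchment area) are assumptions for illustration; the study uses 14 dam watersheds, Bayesian posterior samples, and its own topographical factors.

```python
# Sketch of copula-based parameter regionalization (hypothetical data):
# link a GR4J parameter to a catchment attribute via a Gaussian copula,
# then draw the parameter for an "ungauged" basin.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical calibrated values from 14 gauged (dam) watersheds.
area = rng.uniform(100, 2000, 14)              # catchment attribute (km^2)
x1 = 200 + 0.4 * area + rng.normal(0, 60, 14)  # GR4J x1 storage (mm)

def z_scores(v):
    """Rank-transform a margin to standard normal scores."""
    ranks = stats.rankdata(v) / (len(v) + 1)
    return stats.norm.ppf(ranks)

z_area, z_x1 = z_scores(area), z_scores(x1)
rho = np.corrcoef(z_area, z_x1)[0, 1]          # Gaussian-copula dependence

# For an ungauged basin, condition on its attribute and sample x1.
target_area = 800.0
u = stats.percentileofscore(area, target_area) / 100
z_t = stats.norm.ppf(np.clip(u, 0.01, 0.99))
z_samp = rng.normal(rho * z_t, np.sqrt(1 - rho**2), 1000)
# Map copula samples back through the empirical margin of x1.
x1_samp = np.quantile(x1, stats.norm.cdf(z_samp))
print(f"rho={rho:.2f}, regionalized x1 ~ {x1_samp.mean():.0f} mm "
      f"(90% band {np.quantile(x1_samp, 0.05):.0f}"
      f"-{np.quantile(x1_samp, 0.95):.0f})")
```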

Process Alignment between MND-AF and ADDMe for Products Reusability (산출물 재사용성을 위한 MND-AF와 ADDMe 프로세스 정렬)

  • Bu, Yong-Hee;Lee, Tae-Gong
    • Journal of the military operations research society of Korea
    • /
    • v.32 no.2
    • /
    • pp.131-142
    • /
    • 2006
  • Nowadays, most enterprises have introduced both EA methodology, to optimize the enterprise as a whole, and CBD methodology, to improve software reusability. The Korean government has not only developed many EA guiding products, such as an EA framework, reference models, and guidelines, but has also instituted a law to optimize the government-wide enterprise. The Ministry of National Defense (MND) has developed MND-AF as a standard methodology for EA and ADDMe as a standard methodology for CBD. However, products of MND-AF and ADDMe may be developed redundantly because the two processes are not fully aligned. The purpose of this paper is to present a scheme by which ADDMe can reuse the artifacts of MND-AF, based on an analysis of the relationships between the two processes. To identify these relationships, we first relate the 'definition' parts of the two processes and then relate the 'attribute' parts based on the relation of the 'detailed definition' parts. As a result, we found that 113 attributes of MND-AF are related to 49 attributes of ADDMe. The proposed approach will therefore decrease development cost and time, and serves as a good example of aligning EA and CBD processes.

A Model-based Methodology for Application Specific Energy Efficient Data path Design Using FPGAs (FPGA에서 에너지 효율이 높은 데이터 경로 구성을 위한 계층적 설계 방법)

  • Jang, Ju-Wook;Lee, Mi-Sook;Mohanty, Sumit;Choi, Seonil;Prasanna, Viktor K.
    • The KIPS Transactions:PartA
    • /
    • v.12A no.5 s.95
    • /
    • pp.451-460
    • /
    • 2005
  • We present a methodology to design energy-efficient data paths using FPGAs. Our methodology integrates domain-specific modeling, coarse-grained performance evaluation, design space exploration, and low-level simulation to understand the tradeoffs between energy, latency, and area. The domain-specific modeling technique defines a high-level model by identifying the components and parameters specific to a domain that affect system-wide energy dissipation. A domain is a family of architectures and corresponding algorithms for a given application kernel. The high-level model also provides functions for estimating energy, latency, and area that facilitate tradeoff analysis. Design space exploration (DSE) analyzes the design space defined by the domain and selects a set of designs. Low-level simulations are used for accurate performance estimation of the designs selected by DSE, and for final design selection. We illustrate the methodology using a family of architectures and algorithms for matrix multiplication. The designs identified by our methodology demonstrate tradeoffs among energy, latency, and area. To demonstrate the effectiveness of the methodology, we compare our designs with a vendor-specified matrix multiplication kernel, using average power density E/AT, i.e., energy/(area × latency), as the comparison metric. For various problem sizes, designs obtained using our methodology are on average 25% superior with respect to the E/AT metric compared with state-of-the-art designs by Xilinx. We also discuss the implementation of our methodology within the MILAN framework.
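The comparison metric is simple enough to state in code. Below is a minimal sketch that ranks candidate designs by E/AT = energy/(area × latency); all design names and numbers are hypothetical placeholders, and "lower is better" is our reading of the metric.

```python
# Rank candidate datapath designs by E/AT = energy / (area x latency).
# The designs and figures below are hypothetical, not from the paper.
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    energy_nj: float    # estimated energy per kernel invocation (nJ)
    area_slices: int    # FPGA area estimate (slices)
    latency_us: float   # kernel latency estimate (us)

    @property
    def e_at(self) -> float:
        # Energy normalized by the area-latency product (lower is better
        # under this reading of the metric).
        return self.energy_nj / (self.area_slices * self.latency_us)

candidates = [
    Design("systolic_8x8", 420.0, 1800, 6.4),
    Design("linear_array", 510.0, 1100, 9.8),
    Design("vendor_kernel", 610.0, 1500, 8.1),
]
for d in sorted(candidates, key=lambda d: d.e_at):
    print(f"{d.name:14s} E/AT = {d.e_at:.4f}")
```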

Robust parameter set selection of unsteady flow model using Pareto optimums and minimax regret approach (파레토 최적화와 최소최대 후회도 방법을 이용한 부정류 계산모형의 안정적인 매개변수 추정)

  • Li, Li;Chung, Eun-Sung;Jun, Kyung Soo
    • Journal of Korea Water Resources Association
    • /
    • v.50 no.3
    • /
    • pp.191-200
    • /
    • 2017
  • A robust parameter set (ROPS) selection framework for an unsteady flow model was developed by combining Pareto optimums, obtained from model calibration against multi-site observations, with the minimax regret approach (MRA). The multi-site calibration problem, a multi-objective problem, was solved with an aggregation approach that combines the weighted criteria from different sites into one measure and then performs a large number of individual optimization runs with different weight combinations to obtain Pareto solutions. A roughness parameter structure that describes the variation of Manning's n with discharge and sub-reach was proposed, and the related coefficients were optimized as model parameters. Applying the MRA as a decision criterion, the Pareto solutions were ranked by the regret associated with each solution, and the top-ranked one, having the lowest aggregated regret over both calibration and validation, was selected as the single ROPS. It was found that the determination of variable roughness and the corresponding standardized RMSEs at the two gauging stations vary considerably depending on the combination of weights on the two sites. This method can provide robust parameter sets for multi-site calibration problems in hydrologic and hydraulic models.
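As an illustration of the MRA ranking step, the following minimal Python sketch computes regrets for a few hypothetical Pareto parameter sets and picks the minimax-regret one. The RMSE values and the use of four criteria (two stations × calibration/validation) are assumptions for illustration.

```python
# Minimax regret approach (MRA) sketch for ranking Pareto parameter
# sets. The standardized RMSE values below are hypothetical.
import numpy as np

# Rows: Pareto parameter sets; columns: criteria (e.g., standardized
# RMSE at two gauging stations, for calibration and validation).
rmse = np.array([
    [0.42, 0.55, 0.47, 0.60],
    [0.50, 0.44, 0.52, 0.49],
    [0.46, 0.48, 0.45, 0.55],
])

# Regret of a set on a criterion: distance from the best achievable
# value on that criterion across all sets.
regret = rmse - rmse.min(axis=0)
max_regret = regret.max(axis=1)        # worst-case regret per set
rops = int(np.argmin(max_regret))      # minimax-regret choice
print(f"robust parameter set: #{rops}, max regret {max_regret[rops]:.3f}")
```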

Poststructural Curriculum and Topic-centered Framework of The New Science Curriculum (후기 구조주의 교육과정과 새 과학과 교육과정의 주제 중심 내용 구성)

  • Kwak, Young-Sun;Lee, Yang-Rak
    • Journal of the Korean earth science society
    • /
    • v.28 no.2
    • /
    • pp.169-178
    • /
    • 2007
  • In this research, we diagnosed the actual status of the 7th national science elective curriculum and suggested a way to select and organize the content of the new science elective curriculum. The first science education reform was grounded in structuralism, in which the structure of the discipline was valued above everything else. The second science education reform, by contrast, suggested alternative interpretations of students' opportunity to learn, putting a brake on structuralist thinking. According to the survey results, the majority of the science elective courses are in need of revision because their contents are overcrowded, are too difficult in light of students' learning readiness, fail to draw students' interest in science, and overlap with and repeat material in 10th grade science and high school science I and II. In particular, Earth Science II and Physics II are the courses least favored by students. We therefore recommended that a fundamental change be made in the new curriculum in addition to optimization of the content. In this paper, we suggest 'topic-centered content organization' for the science elective courses I, i.e., Physics I, Chemistry I, Biology I, and Earth Science I, which are designed for both science-track and non-science-track students. Since a curriculum provides students with an 'opportunity to learn', curriculum studies should focus on what 'opportunity to learn' students ought to be offered. Based on the results of this study, we recommended one way to select and organize the content of the high school elective curriculum.

An Empirical Study on the Influencing Factors for Big Data Intented Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang;Kim, Jin-soo
    • Asia pacific journal of information systems
    • /
    • v.24 no.4
    • /
    • pp.443-472
    • /
    • 2014
  • To survive in the global competitive environment, an enterprise should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems and improving competitiveness through its diverse problem-solving and advanced predictive capabilities. Owing to this remarkable potential, implementations of big data systems have increased across enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to confer competitive superiority. Big data is in the limelight because, where conventional IT technology has lagged in what it makes possible, big data goes beyond mere technical feasibility and can be used to create new value, such as business optimization and new business creation, through analysis. However, because big data has often been introduced hastily, without first deriving the strategic value achievable through it, enterprises face difficulties in strategic value derivation and data utilization. According to a survey of 1,800 IT professionals from 18 countries, only 28% of corporations were utilizing big data well, and many respondents reported difficulties in deriving strategic value and operating big data systems. To introduce big data, strategic value should be derived and environmental aspects, such as internal and external regulations and systems, should be considered, but these factors have not been well reflected: big data was introduced in response to IT trends and the surrounding environment, hastily and before the conditions for introduction were in place. Successful introduction requires clearly understanding the strategic value obtainable through big data and systematically analyzing the environment and applicability; however, since corporations consider only partial achievements and technological aspects, successful introductions are not being made. Previous studies show that most big data research focuses on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying the factors influencing successful big data systems implementation, and analyzing empirical models. To this end, factors that can affect the intention to adopt big data were derived by reviewing information systems success factors, strategic value perception factors, environmental considerations for information system introduction, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to, and the statistical analysis performed on, the people in charge of big data inside corporations.
According to the statistical analysis, the strategic value perception factors and the intra-industry environmental factors positively affected the intention to adopt big data. The theoretical, practical, and policy implications of the results are as follows. The first theoretical implication is that this study proposes factors affecting big data adoption intention, derived from a review of strategic value perception, environmental factors, and prior big data studies, together with variables and measurement items that were empirically analyzed and verified; it measures the influence of each variable on adoption intention by verifying the relationships between independent and dependent variables through a structural equation model. Second, this study defines the independent variables (strategic value perception, environment), the dependent variable (adoption intention), and the moderating variables (type of business and corporate size) for big data adoption intention, and lays a theoretical foundation for subsequent empirical research by developing measurement items with demonstrated reliability and validity. Third, by verifying the significance of the strategic value perception and environmental factors proposed in prior studies, this study can aid later empirical work on the factors affecting big data adoption. The practical implications are as follows. First, the study establishes an empirical basis for the big data field by investigating the cause-and-effect relationship between strategic value perception and environmental factors and adoption intention, and by proposing measurement items with demonstrated reliability and validity. Second, the finding that strategic value perception positively affects big data adoption intention underscores the importance of strategic value perception. Third, the study proposes that a corporation introducing big data should do so only after a precise analysis of its industry's internal environment. Fourth, by showing that the effective factors differ with corporate size and business type, the study proposes that these should be considered when introducing big data. The policy implications are as follows. First, greater variety in big data utilization is needed. The strategic value of big data can be approached in various ways, across products, services, productivity, and decision making, and can be utilized in all business fields on that basis, but major domestic corporations currently consider only parts of the product and service fields. Accordingly, when introducing big data, it is necessary to review utilization in detail and to design the system in a form that maximizes utilization. Second, the study notes that, in the introduction phase, corporations report burdens from system introduction costs, difficulty of use, and a lack of credibility of supplier corporations.
Since global IT corporations dominate the big data market, the big data adoption of domestic corporations cannot but depend on foreign corporations. Considering that Korea, despite being a world-leading IT country, lacks global IT corporations, big data can be seen as a chance to foster world-class corporations, and the government should support leading corporations through active policy measures. Third, corporations lack internal and external professionals for big data introduction and operation. In big data, deriving valuable insights from data matters more than the system construction itself; this requires talent equipped with academic knowledge and experience in fields such as IT, statistics, strategy, and management, and such talent should be trained through systematic education. By identifying and verifying the main variables that affect big data adoption intention, this study lays a theoretical foundation for empirical studies in big data-related fields and is expected to offer useful guidelines for corporations and policy makers considering big data implementation.

Re-Analysis of Clark Model Based on Drainage Structure of Basin (배수구조를 기반으로 한 Clark 모형의 재해석)

  • Park, Sang Hyun;Kim, Joo Cheol;Jeong, Dong Kug;Jung, Kwan Sue
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.33 no.6
    • /
    • pp.2255-2265
    • /
    • 2013
  • This study presents a width function-based Clark model. To this end, a rescaled width function distinguishing hillslope and channel velocities is used as the time-area curve and routed through a linear storage, using an analytical expression for linear storage routing rather than the finite difference scheme of the original Clark model. Three parameters are of interest in this study: the storage coefficient, hillslope velocity, and channel velocity. SCE-UA, a popular global optimization method, is applied to estimate them. The shapes of the resulting IUHs are evaluated in terms of three statistical moments of hydrologic response functions: the mean, the variance, and the third moment about the center of the IUH. The correlation coefficients between the three statistical moments simulated in this study and those of observed hydrographs were 0.995 for the mean, 0.993 for the variance, and 0.983 for the third moment about the center of the IUH. The resulting IUHs give satisfactory simulation results in terms of the mean and variance, but the third moment about the center tends to be overestimated. The Clark model proposed here is superior to one accounting only for the mean and variance of the IUH with respect to the skewness, peak discharge, and peak time of the runoff hydrograph. This confirms that the suggested method is a useful tool for reflecting the heterogeneity of drainage paths and hydrodynamic parameters. The variation of the statistical moments of the IUH is mainly influenced by the storage coefficient, and the effect of channel velocity is greater than that of hillslope velocity. Therefore, the storage coefficient and channel velocity are the crucial factors shaping the IUH and should be considered carefully when applying the proposed Clark model.
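A minimal sketch of the routing idea, under our own simplifying assumptions: a hypothetical time-area histogram is convolved with the analytical linear-reservoir response u(t) = (1/K) exp(-t/K), and the three moments used in the paper are computed from the resulting IUH. The ordinates and K below are placeholders, not the study's data.

```python
# Clark-type routing sketch: time-area curve routed through a linear
# reservoir via its analytical impulse response (hypothetical data).
import numpy as np

dt = 0.5                                    # time step (h)
time_area = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.10])  # sums to 1
K = 3.0                                     # storage coefficient (h)

t = np.arange(0, 24, dt)
kernel = (1.0 / K) * np.exp(-t / K) * dt    # linear-reservoir response
iuh = np.convolve(time_area, kernel)[: len(t)]

# Statistical moments used in the paper to judge the IUH shape.
w = iuh / iuh.sum()
mean = (t * w).sum()
var = ((t - mean) ** 2 * w).sum()
m3 = ((t - mean) ** 3 * w).sum()
print(f"mean={mean:.2f} h, variance={var:.2f} h^2, "
      f"3rd central moment={m3:.2f} h^3")
```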

Opportunity Tree Framework Design For Optimization of Software Development Project Performance (소프트웨어 개발 프로젝트 성능의 최적화를 위한 Opportunity Tree 모델 설계)

  • Song, Ki-Won;Lee, Kyung-Whan
    • The KIPS Transactions:PartD
    • /
    • v.12D no.3 s.99
    • /
    • pp.417-428
    • /
    • 2005
  • Today, IT organizations perform projects with visions tied to marketing and financial profit. The objective of realizing such a vision is to improve project-performing ability in terms of QCD, and organizations have made considerable efforts to achieve this through process improvement. Large companies such as IBM, Ford, and GE have achieved over 80% of their success through business process re-engineering using information technology, rather than through business improvement effects from computers alone. Collecting, analyzing, and managing data on performed projects is important to this objective, but quantitative measurement is difficult because software is invisible and the effects and efficiency of process change are not visibly identifiable; it is therefore not easy to extract an improvement strategy. This paper measures and analyzes project performance, focusing on organizations' external effectiveness and internal efficiency (Quality, Delivery, Cycle time, and Waste). Based on the measured project performance scores, an Opportunity Tree (OT) model was designed for optimizing project performance. The design process is as follows. First, metadata are derived from projects and analyzed by a quantitative GQM (Goal-Question-Metric) questionnaire. Then, the project performance model is designed with the data obtained from the questionnaire, and the organization's performance score for each area is calculated. The value is revised by integrating the measured scores for each area with vision weights from all stakeholders (CEO, middle managers, developers, investors, and customers). Through this, routes for improvement are presented and an optimized improvement method is suggested. Existing methods for improving the software process have been highly effective in dividing processes, but somewhat unsatisfactory in their structural capacity to develop and systematically manage strategies when applying the processes to projects. The proposed OT model provides a solution to this problem. The OT model offers an optimal improvement method in line with the organization's goals and, applied with the proposed methods, can reduce the risks that may occur in the course of process improvement. In addition, satisfaction with the improvement strategy can be increased by obtaining vision weights from all stakeholders through the qualitative questionnaire and reflecting them in the calculation. The OT is also useful for optimizing market expansion and financial performance by controlling Quality, Delivery, Cycle time, and Waste.
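As a toy illustration of the stakeholder-weighted scoring step, the sketch below combines hypothetical per-area scores with hypothetical vision weights and ranks improvement routes by their weighted gap. The OT model's actual revision formula is not specified in the abstract, so this is only one plausible reading.

```python
# Stakeholder-weighted scoring sketch (hypothetical numbers only).
import numpy as np

areas = ["Quality", "Delivery", "Cycle time", "Waste"]
scores = np.array([72.0, 65.0, 58.0, 80.0])   # measured per area (0-100)

# Vision weights over the areas from each stakeholder group.
weights = np.array([
    [0.40, 0.30, 0.20, 0.10],   # CEO
    [0.25, 0.35, 0.25, 0.15],   # middle managers
    [0.20, 0.25, 0.35, 0.20],   # developers
    [0.35, 0.30, 0.15, 0.20],   # investors
    [0.30, 0.35, 0.20, 0.15],   # customers
])
w = weights.mean(axis=0)
w /= w.sum()                                  # combined weight profile

overall = (scores * w).sum()                  # weight-revised score
gap = (100 - scores) * w                      # weighted room to improve
print(f"overall weighted performance: {overall:.1f}")
for i in np.argsort(gap)[::-1]:               # candidate improvement routes
    print(f"  improve {areas[i]:10s} (weighted gap {gap[i]:.1f})")
```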

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognition of an individual user's simple body movements to recognition of low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far, and previous research on interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including the accelerometer, magnetic field sensor, and gyroscope, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning-based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status is defined as a redefined part of user interaction behavior: whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks is proposed for classifying accompaniment and conversation. First, a data preprocessing method is introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation synchronizes the timestamps of data collected from different sensors, normalization is performed on each x, y, and z axis value of the sensor data, and sequence data are generated with a sliding window. The sequence data then become the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of 3 convolutional layers and has no pooling layer, to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is cross entropy, and the weights are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128, and dropout is applied to the inputs of the LSTM networks to prevent overfitting. The initial learning rate is 0.001 and decays exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with accuracies of 98.74% and 98.83%, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will study transfer learning methods that adapt models trained on the training data to evaluation data drawn from a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in the data not considered at the model learning stage.
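The abstract pins down the architecture fairly precisely (3 convolutional layers without pooling, a 2-layer LSTM with 128 cells, dropout on the LSTM inputs, cross entropy, ADAM with batch size 128, learning rate 0.001 decayed by 0.99 per epoch). Below is a minimal PyTorch sketch consistent with that description; the channel counts, kernel sizes, window length, and 9 input channels (3 sensors × 3 axes) are our assumptions, not values stated in the paper.

```python
# CNN + LSTM sketch matching the description above (assumed dims noted).
import torch
import torch.nn as nn

class AccompanyNet(nn.Module):
    def __init__(self, in_channels=9, n_classes=2):  # 3 sensors x 3 axes
        super().__init__()
        self.cnn = nn.Sequential(            # 3 conv layers, no pooling,
            nn.Conv1d(in_channels, 64, 5, padding=2),  # time steps kept
            nn.ReLU(),
            nn.Conv1d(64, 64, 5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, 5, padding=2),
            nn.ReLU(),
        )
        self.drop = nn.Dropout(0.5)          # dropout on the LSTM input
        self.lstm = nn.LSTM(64, 128, num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, n_classes)  # softmax via CrossEntropyLoss

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.cnn(x).transpose(1, 2)      # -> (batch, time, features)
        out, _ = self.lstm(self.drop(h))
        return self.fc(out[:, -1])           # classify from last time step

model = AccompanyNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

# One toy training step on a random mini-batch of sliding windows.
x = torch.randn(128, 9, 100)
y = torch.randint(0, 2, (128,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
scheduler.step()                             # call once per epoch in practice
print(f"toy loss: {loss.item():.3f}")
```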