• Title/Summary/Keyword: analysis and computation


The Comparison of Basic Science Research Capacity of OECD Countries

  • Lim, Yang-Taek;Song, Choong-Han
    • Journal of Technology Innovation / v.11 no.1 / pp.147-176 / 2003
  • This paper presents a new measurement technique to derive the level of the BSRC (Basic Science and Research Capacity) index by use of factor analysis, extended with the assumption that the selected explanatory variables follow the standard normal probability distribution. The new measurement method is used to forecast the gap between Korea's BSRC level and those of major OECD countries in terms of time lag, and to make an international comparison over the period 1981∼1999, based on the assumption that each country's BSRC progress function takes the form of a logistic curve. The US BSRC index is estimated to be 0.9878 in 1981, 0.9996 in 1990 and 0.99991 in 1999, taking first place. The US BSRC level has consistently been the top among the 16 selected countries, followed by Japan, Germany, France and the United Kingdom, in that order. Korea's BSRC is estimated at 0.2293 in 1981, the lowest among the 16 OECD countries. However, Korea's BSRC indices are estimated to have increased to 0.3216 (in 1990) and 0.44652 (in 1999), taking 10th place. Meanwhile, Korea's 1999 BSRC level (0.44652) is estimated to reach those of the US and Japan in 2233 and 2101, respectively. This means that Korea falls 234 years behind the US and 102 years behind Japan. Korea is also estimated to lag 34 years behind Germany, 16 years behind France and the UK, 15 years behind Sweden, 11 years behind Canada, 7 years behind Finland, and 5 years behind the Netherlands. For the period 1981∼1999, the BSRC development speed of the US is estimated to be 0.29700, the highest among the selected OECD countries, followed by Japan (0.12800), Korea (0.04443), and Germany (0.04029). The US BSRC development speed (0.2970) is estimated to be 2.3 times higher than that of Japan (0.1280), and 6.7 times higher than that of Korea.
Germany's BSRC development speed (0.04029) is estimated to be the fastest in Europe, but 7.4 times slower than that of the US. The estimated BSRC development speeds of Belgium, Finland, Italy, Denmark and the UK stand between 0.01 and 0.02, which is very slow. In particular, the BSRC development speed of Spain is estimated to be minus 0.0065, staying at almost the same BSRC level over the period 1981∼1999. Since Korea's BSRC development speed is much slower than those of the US and Japan but relatively faster than those of the other countries, the gaps in BSRC level between Korea and the other countries may narrow considerably, and Korea may even surpass several countries in BSRC level as time goes by. Korea's BSRC level had held 10th place until 1993. However, it is estimated to reach 6th place in 2010 by catching up with the UK, Sweden, Finland and the Netherlands, and 4th place in 2020 by catching up with France and Canada. The empirical results are consistent with OECD (2001a)'s computation that Korea had the highest R&D expenditure growth among all OECD countries during 1991∼1999, and that the value added of ICT industries as a share of total business-sector value added is 12% in Korea but only 8% in Japan. OECD (2001b) also observed that Korea, together with the US, Sweden, and Finland, is already among the four most knowledge-based countries. Here, the rank of a knowledge-based country was measured by investment in knowledge, defined as public and private spending on higher education, expenditure on R&D, and investment in software.
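The time-lag computation described above can be sketched with a logistic progress function and its inverse, which gives the year at which a country's curve reaches a target BSRC level. The parameter values below are illustrative only; the paper's fitted parameters are not given in the abstract.

```python
import math

def bsrc_logistic(t, k, t0):
    """Logistic BSRC progress function: index in (0, 1) at year t,
    with development speed k and inflection year t0."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

def year_reaching(level, k, t0):
    """Invert the logistic: the year at which the index attains `level`."""
    return t0 - math.log(1.0 / level - 1.0) / k

# Illustrative (not fitted) parameters: the time lag between two countries
# is the difference between the years their curves reach the same level.
lag = year_reaching(0.9, 0.04443, 2040.0) - year_reaching(0.9, 0.297, 1985.0)
```

The lag is computed at a common target level, mirroring how the abstract compares Korea's 1999 level against the years the US and Japan reached it.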


A Template-based Interactive University Timetabling Support System (템플릿 기반의 상호대화형 전공강의시간표 작성지원시스템)

  • Chang, Yong-Sik;Jeong, Ye-Won
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.121-145 / 2010
  • University timetabling, which depends on the educational environment of each university, is an NP-hard problem in which the amount of computation required to find solutions increases exponentially with problem size. For many years there have been many studies on university timetabling, driven by the need for automatic timetable generation for students' convenience and effective lessons, and for the effective allocation of subjects, lecturers, and classrooms. Timetables are classified into course timetables and examination timetables; this study focuses on the former. In general, a course timetable for liberal arts is scheduled by the office of academic affairs, and a course timetable for major subjects is scheduled by each department of a university. We found several problems in the analysis of current course timetabling in departments. First, it is time-consuming and inefficient for each department to do the routine and repetitive timetabling work manually. Second, many classes are concentrated into a few time slots in a timetable, which decreases the effectiveness of students' classes. Third, several major subjects might overlap required liberal-arts subjects in the same time slots, in which case students must choose only one of the overlapping subjects. Fourth, many subjects are lectured by the same lecturers every year, and most lecturers prefer the same time slots for their subjects as in the previous year. This means it is helpful for departments to reuse previous timetables. To solve these problems and support effective course timetabling in each department, this study proposes a university timetabling support system based on two phases. In the first phase, each department generates a timetable template from the most similar previous timetable case, based on case-based reasoning.
In the second phase, the department schedules a timetable with the help of an interactive user interface under the timetabling criteria, based on a rule-based approach. This study provides an illustration using Hanshin University. We classified timetabling criteria into intrinsic and extrinsic criteria. The intrinsic criteria comprise three criteria related to lecturer, class, and classroom, all of which are hard constraints. The extrinsic criteria comprise four criteria: 'the number of lesson hours' per lecturer, 'prohibition of lecture allocation to specific day-hours' for committee members, 'the number of subjects in the same day-hour,' and 'the use of common classrooms.' Within 'the number of lesson hours' per lecturer there are three sub-criteria: 'minimum number of lesson hours per week,' 'maximum number of lesson hours per week,' and 'maximum number of lesson hours per day.' The extrinsic criteria are also all hard constraints, except for 'minimum number of lesson hours per week,' which is treated as a soft constraint. In addition, we propose two indices: one for measuring similarity between the subjects of the current semester and those of previous timetables, and one for evaluating the distribution degree of a scheduled timetable. Similarity is measured by comparing two attributes, subject name and lecturer, between the current semester and a previous semester. The distribution-degree index, based on information entropy, indicates how subjects are distributed across the timetable. To show this study's viability, we implemented a prototype system and performed experiments with real data from Hanshin University. The average similarity of the most similar cases across all departments was estimated at 41.72%, which suggests that a timetable template generated from the most similar case will be helpful. Sensitivity analysis shows that the distribution degree will increase if we set 'the number of subjects in the same day-hour' to more than 90%.
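The entropy-based distribution index described above can be sketched as normalized Shannon entropy of subject counts over day-hour slots; this is a minimal reading of the idea, not necessarily the paper's exact formula.

```python
import math
from collections import Counter

def distribution_degree(slot_assignments, n_slots):
    """Normalized Shannon entropy of subject counts over day-hour slots.
    Returns 1.0 when subjects are spread evenly over all n_slots,
    and 0.0 when they are concentrated in a single slot."""
    counts = Counter(slot_assignments)
    total = len(slot_assignments)
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(n_slots) if n_slots > 1 else 0.0

# Each entry is the day-hour slot a subject was scheduled into.
even = distribution_degree(["Mon9", "Tue9", "Wed9", "Thu9"], 4)
packed = distribution_degree(["Mon9", "Mon9", "Mon9", "Mon9"], 4)
```

A higher value indicates a less concentrated timetable, matching the paper's goal of avoiding classes packed into a few slots.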

A Case Study for the Utilization of Food Safety Health Indicators in Korea: Computation of Composite Indices to Verify Important Indicators and Understand Correlations with Socioeconomic Status (우리나라 식품안전보건지표를 활용한 사례연구: 다양한 통합지수 산출을 통한 주요 지표 확인 및 사회경제적 지위와의 상관성 파악)

  • Choi, Giehae;Byun, Garam;Lee, Jong-Tae
    • Journal of Food Hygiene and Safety / v.30 no.3 / pp.227-235 / 2015
  • Food-health indicators have been developed and utilized internationally in the 'Food' domain of environment and health indicators. In Korea, however, the Food Safety Health Indicators, which are still at an introductory stage, were developed separately from the Environmental Health Indicators. The aim of the current study is to suggest feasible applications of the domestic Food Safety Health Indicators as a case study. We introduce three possible applications: 1) production of two types of Integrated Food Safety Health Index; 2) correlation analysis between the Integrated Food Safety Health Index and the Food Safety Health Indicators; and 3) regression analysis to evaluate the relationship between the Integrated Food Safety Health Index and socioeconomic status. As a result, we provide the calculated Integrated Food Safety Health Index I and Integrated Food Safety Health Index II, which represent the regional food safety level in relative and absolute terms, respectively. Integrated Food Safety Health Index I was significantly correlated with outbreaks of food-borne diseases (caused by Campylobacter jejuni, Bacillus cereus, Salmonella spp. and unknown causes) and the incidence of E. coli infections. Integrated Food Safety Health Index II significantly decreased as the proportion of foreigners and women increased, and increased as population density increased. Utilization of such integrated indices may be helpful in understanding the overall domestic food safety level and in identifying the indicators that must be prioritized to enhance food safety levels regionally and domestically. Furthermore, analyzing the association between the Integrated Food Safety Health Index and factors other than food safety could be useful for risk management and for identifying susceptible populations.
The Food Safety Health Indicators can be useful in other applications as well, and may serve as supporting material in establishing or modifying policy plans to enhance food safety. Therefore, keen interest from researchers, accompanied by further studies on food safety health indicators, is needed.
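A composite regional index of the kind described can be sketched as a weighted sum of standardized indicators; this is a generic illustration with hypothetical data, not the paper's exact aggregation method.

```python
from statistics import mean, stdev

def composite_index(regions, weights=None):
    """regions: {region: [indicator values]}. Each indicator is z-scored
    across regions, then combined with the given (default equal) weights."""
    n = len(next(iter(regions.values())))
    weights = weights or [1.0 / n] * n
    cols = list(zip(*regions.values()))   # one column per indicator
    mu = [mean(c) for c in cols]
    sd = [stdev(c) for c in cols]
    return {region: sum(w * (v - m) / s
                        for v, w, m, s in zip(vals, weights, mu, sd))
            for region, vals in regions.items()}

# Hypothetical indicator values for three regions:
idx = composite_index({"A": [1.0, 2.0], "B": [2.0, 4.0], "C": [3.0, 6.0]})
```

With equal weights the z-scored composites sum to zero across regions, so each region's sign shows whether it is above or below the national average.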

Comparison of the wall clock time for extracting remote sensing data in Hierarchical Data Format using Geospatial Data Abstraction Library by operating system and compiler (운영 체제와 컴파일러에 따른 Geospatial Data Abstraction Library의 Hierarchical Data Format 형식 원격 탐사 자료 추출 속도 비교)

  • Yoo, Byoung Hyun;Kim, Kwang Soo;Lee, Jihye
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.1 / pp.65-73 / 2019
  • MODIS (Moderate Resolution Imaging Spectroradiometer) data in Hierarchical Data Format (HDF) have been processed using the Geospatial Data Abstraction Library (GDAL). Because of the relatively large data size, it is preferable to build and install the data analysis tool for the greatest computing performance, which differs by operating system and by the form of distribution, e.g., source code or binary package. The objective of this study was to examine the performance of GDAL for processing HDF files, which would guide the construction of a computer system for remote sensing data analysis. Differences in execution time were compared between the environments under which GDAL was installed. The wall clock time was measured after extracting the data for each variable in a MODIS data file using a tool built by linking against GDAL, under combinations of operating system (Ubuntu and openSUSE), compiler (GNU and Intel), and distribution form. The MOD07 product, which contains atmosphere data, was processed for eight 2-D variables and two 3-D variables. GDAL compiled with the Intel compiler under Ubuntu had the shortest computation time. For openSUSE, GDAL compiled with the GNU and Intel compilers had greater performance for the 2-D and 3-D variables, respectively. The wall clock time was considerably longer for GDAL compiled with the "--with-hdf4=no" configuration option or installed via the RPM package manager under openSUSE. These results indicate that the choice of environment under which GDAL is installed, e.g., operating system or compiler, has a considerable impact on the performance of a system for processing remote sensing data. Application of parallel computing approaches would further improve data-processing performance for HDF files, which merits further evaluation.
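The wall-clock measurement can be reproduced with a small timing harness like the one below; the GDAL call in the comment is a hypothetical example of an HDF4 subdataset open, and the harness itself is demonstrated on a plain computation.

```python
import time

def wall_clock(fn, *args, repeats=3, **kwargs):
    """Run fn repeatedly and return (best wall-clock seconds, last result)."""
    best, result = float("inf"), None
    for _ in range(repeats):
        t0 = time.perf_counter()
        result = fn(*args, **kwargs)
        best = min(best, time.perf_counter() - t0)
    return best, result

# Hypothetical use against a MOD07 HDF4 subdataset (requires the osgeo
# package built with HDF4 support; the dataset name is illustrative):
# from osgeo import gdal
# secs, arr = wall_clock(
#     lambda: gdal.Open('HDF4_SDS:UNKNOWN:"MOD07_L2.hdf":0').ReadAsArray())

secs, total = wall_clock(lambda: sum(range(1000)))
```

Taking the best of several repeats reduces noise from caching and scheduling, which matters when comparing builds across operating systems and compilers.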

Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.137-148 / 2014
  • Recommender systems have become one of the most important technologies in e-commerce. For many consumers, the ultimate reason to shop online is to reduce the effort of information search and purchase, and recommender systems are a key technology for serving these needs. Many past studies of recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful approach. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. In order to generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who do not have any such information, CF cannot come up with recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty; this sparse dataset makes computation for recommendation extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate if there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes social network analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality in SNA refers to the number of direct links to and from a node. In a network of users who are connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from them. Therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two parts, gray sheep and others, based on the degree centrality of the users.
Then, different similarity measures and recommendation methods are applied to these two datasets. The detailed algorithm is as follows. Step 1: Convert the initial data, which is a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate those nodes whose degree centrality is lower than a pre-set threshold. The threshold value is determined by simulations such that the accuracy of CF on the remaining dataset is maximized. Step 3: An ordinary CF algorithm is applied to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them; a 'popular item' method is used instead. The F measures of the two datasets are weighted by the numbers of nodes and summed to form the final performance metric. To test the performance improvement of the new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data by the GroupLens research team. We used 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm utilizing the 'Best-N-neighbors' and 'Cosine' similarity methods. The empirical results show that the F measure improved by about 11% on average when the proposed algorithm was used. Past studies to improve CF performance typically used additional information other than users' evaluations, such as demographic data; some applied SNA techniques as a new similarity metric. This study is novel in that it uses SNA to separate the dataset, and it shows that the performance of CF can be improved, without any additional information, when SNA techniques are used as proposed. This study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance. It also opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, this study provides guidelines for improving the performance of CF recommender systems with a simple modification.
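The gray sheep separation in Steps 1-2 can be sketched as follows; the co-rating rule used here (users are linked when they rate at least one common item) is a simplifying assumption for illustration.

```python
from collections import defaultdict
from itertools import combinations

def split_gray_sheep(user_items, threshold):
    """user_items: {user: set of rated items}. Builds the one-mode
    user-to-user network, computes degree centrality (number of direct
    links), and separates users below the threshold as gray sheep."""
    degree = defaultdict(int)
    for u, v in combinations(user_items, 2):
        if user_items[u] & user_items[v]:  # shared preference -> link
            degree[u] += 1
            degree[v] += 1
    gray = {u for u in user_items if degree[u] < threshold}
    return gray, set(user_items) - gray

gray, rest = split_gray_sheep(
    {"a": {1, 2}, "b": {2, 3}, "c": {1, 3}, "d": {9}}, threshold=1)
```

Ordinary CF would then run on `rest`, while a popular-item fallback serves the isolated users in `gray`, as in Steps 3-4.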

  • DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

    • 박만배
      • Proceedings of the KOR-KST Conference / 1995.02a / pp.101-113 / 1995
    • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using origin-destination (OD) surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network, providing the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the OD survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting OD trip length frequency (OD TLF) distributions by trip type are compared with comparable TLFs from the gravity model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: internal-internal (I-I), internal-external (I-E), and external-external (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to ground counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, selected link based (SELINK) analyses are used to adjust the productions and attractions and, if necessary, recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and vehicle miles of travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the state highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results, and no specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As with the screenline and volume range analyses, the %RMSE is inversely related to the average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic, while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners seeking to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
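The SELINK link adjustment factor described above can be sketched as follows; the flat trip-table shape is a simplified assumption for illustration.

```python
def selink_adjust(ground_count, assigned_volume, od_trips):
    """Compute the link adjustment factor (ground count / assigned volume)
    for a selected link and scale the trips of every origin-destination
    pair that was assigned through that link."""
    factor = ground_count / assigned_volume
    adjusted = {od: trips * factor for od, trips in od_trips.items()}
    return factor, adjusted

# Hypothetical selected link: 120 trucks counted, 100 assigned through it.
factor, adjusted = selink_adjust(120.0, 100.0, {("zone1", "zone2"): 50.0})
```

Repeating this over all selected links, then re-running the assignment, is the iterative adjustment the abstract describes stopping after about four repetitions.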


    A Study on the Intelligent Quick Response System for Fast Fashion(IQRS-FF) (패스트 패션을 위한 지능형 신속대응시스템(IQRS-FF)에 관한 연구)

    • Park, Hyun-Sung;Park, Kwang-Ho
      • Journal of Intelligence and Information Systems / v.16 no.3 / pp.163-179 / 2010
    • Recently, the concept of fast fashion is drawing attention as customer needs diversify and supply lead times get shorter in the fashion industry. As competition has intensified, how quickly and efficiently customer needs can be satisfied has been emphasized as one of the critical success factors in the fashion industry. Because fast fashion is inherently susceptible to trends, it is very important for fashion retailers to make quick decisions regarding which items to launch, the quantity based on demand prediction, and the time to respond. The planning decisions must also be executed through the business processes of procurement, production, and logistics in real time. In order to adapt to this trend, the fashion industry urgently needs support from an intelligent quick response (QR) system. However, the traditional functions of QR systems have not been able to completely satisfy such demands of the fast fashion industry. This paper proposes an intelligent quick response system for fast fashion (IQRS-FF). Presented are models for QR process, QR principles and execution, and QR quantity and timing computation. The IQRS-FF models support decision makers by providing useful information with automated, rule-based algorithms. If the predefined conditions of a rule are satisfied, the actions defined in the rule are automatically taken or communicated to the decision makers. In IQRS-FF, QR decisions are made in two stages: pre-season and in-season. In pre-season, master demand prediction is first performed based on macro-level analysis such as the local and global economy, fashion trends, and competitors. The prediction proceeds to the master production and procurement planning. Checking the availability and delivery of materials for production, decision makers must make reservations or request procurements. For outsourced materials, they must check the availability and capacity of partners.
With the master plans, the performance of QR during the in-season is greatly enhanced, and the decision to select the QR items is made with full consideration of the availability of materials in the warehouse as well as partners' capacity. During in-season, the decision makers must find the right time for QR as actual sales occur in stores. Then they decide which items to QR based not only on qualitative criteria, such as opinions from sales staff, but also on quantitative criteria, such as sales volume, the recent sales trend, inventory level, the remaining period, the forecast for the remaining period, and competitors' performance. To calculate QR quantity in IQRS-FF, two calculation methods are designed: QR Index based calculation and attribute similarity based calculation using demographic clusters. In the early period of a new season, the attribute similarity based QR quantity calculation is preferred because there are not enough historical sales data; by analyzing the sales trends of categories or items that have similar attributes, the QR quantity can be computed. On the other hand, when there is enough information to analyze the sales trends or produce forecasts, the QR Index based calculation method can be used. Having defined the models for QR decision making, we design KPIs (Key Performance Indicators) to test the reliability of the models in critical decision making: the difference in sales volume between QR items and non-QR items; the accuracy rate of QR; and the lead time spent on QR decision making. To verify the effectiveness and practicality of the proposed models, a case study was performed for a representative fashion company which recently developed and launched the IQRS-FF. The case study shows that the average sales rate of QR items increased by 15%, the difference in sales rate between QR items and non-QR items increased by 10%, the QR accuracy was 70%, and the lead time for QR decreased dramatically from 120 hours to 8 hours.
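The rule-based decision support described above can be sketched as follows; the rule conditions, field names, and thresholds are hypothetical illustrations, not the paper's.

```python
def qr_actions(item, rules):
    """Evaluate rule-based QR triggers: each rule is (condition, action);
    actions whose conditions hold are returned to the decision maker."""
    return [action for condition, action in rules if condition(item)]

# Hypothetical in-season rules on sales trend, inventory and timing:
rules = [
    (lambda it: it["sales_trend"] > 1.2 and it["inventory"] < it["forecast"],
     "recommend QR launch"),
    (lambda it: it["remaining_weeks"] < 2, "too late for QR"),
]
actions = qr_actions(
    {"sales_trend": 1.5, "inventory": 10, "forecast": 30,
     "remaining_weeks": 6},
    rules)
```

Keeping conditions as data rather than hard-coded branches mirrors the paper's point that rules either act automatically or merely inform the decision maker.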

    Topographic Factors Computation in Island: A Comparison of Different Open Source GIS Programs (오픈소스 GIS 프로그램의 지형인자 계산 비교: 도서지역 경사도와 지형습윤지수 중심으로)

    • Lee, Bora;Lee, Ho-Sang;Lee, Gwang-Soo
      • Korean Journal of Remote Sensing
      • /
      • v.37 no.5_1
      • /
      • pp.903-916
      • /
      • 2021
    • An area's topography refers to the shape of the earth's surface, described by its elevation, slope, and aspect, among other features. Topographical conditions determine the energy flows that move water and energy from higher to lower elevations, such as how much solar energy an area will receive and how much wind or rain will affect it. Another common factor, the topographic wetness index (TWI), is calculated from digital elevation models as the tendency to accumulate water per slope and unit area; it is one of the most widely referenced hydrologic topographic factors and helps explain the location of forest vegetation. Topographical factors can be calculated using a geographic information system (GIS) program based on digital elevation model (DEM) data. Recently, a large number of free and open source software (FOSS) GIS programs have become available and are being developed for researchers, industries, and governments. FOSS GIS programs provide opportunities for flexible algorithms customized for specific user needs. The majority of biodiversity in island areas exists at about 20% higher elevations than in land ecosystems, playing an important role in ecological processes and therefore being of high ecological value. However, island areas are vulnerable to disturbance and damage, such as through climate change, environmental pollution, development, and human intervention, and lack systematic investigation due to geographical limitations (e.g. remoteness and difficulty of access). More than 4,000 of Korea's islands are within a few hours of its coast; 88% are uninhabited, and 52% of them are forested. The forest ecosystems of islands experience less human interaction than those on land, and therefore most of their topographical conditions are formed naturally and affected more directly by weather conditions or the environment.
Therefore, the analysis of forest topography in island areas can be done more precisely than for its land counterparts, and it has become a major focus of attention in Korea. This study focuses on comparing the computation of topographical factors across FOSS GIS programs. The test area is the island forests in Korea's south, and the DEM of the target area was processed with GRASS GIS and SAGA GIS. The final slope and TWI maps were produced to compare the differences between the topographic factor calculations of each respective FOSS GIS program. Finally, the merits of each FOSS GIS program used to calculate the topographic factors are discussed.
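The slope and TWI computations compared in the paper can be sketched outside any GIS package. A minimal NumPy version follows, assuming the flow accumulation grid (upslope cell counts) has already been produced by a hydrology tool such as GRASS's r.watershed or SAGA's catchment-area module; the exact stencils and flow-routing choices of those programs differ, which is precisely what the paper compares:

```python
import numpy as np

def slope_radians(dem: np.ndarray, cellsize: float) -> np.ndarray:
    """Slope of a DEM grid from finite differences of elevation."""
    dzdy, dzdx = np.gradient(dem, cellsize)
    return np.arctan(np.hypot(dzdx, dzdy))

def twi(flow_accum_cells: np.ndarray, slope_rad: np.ndarray,
        cellsize: float) -> np.ndarray:
    """Topographic wetness index: TWI = ln(a / tan(beta)), where a is the
    specific catchment area (upslope area per unit contour width) and
    beta is the local slope."""
    a = (flow_accum_cells + 1) * cellsize       # include the cell itself
    tan_beta = np.maximum(np.tan(slope_rad), 1e-6)  # guard flat cells
    return np.log(a / tan_beta)
```

On a uniformly tilted plane the slope comes out constant, and TWI grows with accumulated flow while shrinking with steepness, matching the index's interpretation as a tendency to accumulate water.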

    Geographic Distribution of Physician Manpower by Gini Index (GINI계수에 의한 의사의 지역간 분포양상)

    • Moon, Byung-Wook;Park, Jae-Yong
      • Journal of Preventive Medicine and Public Health
      • /
      • v.20 no.2 s.22
      • /
      • pp.301-311
      • /
      • 1987
    • The purpose of this study is to analyze the degree of geographic maldistribution of physicians and changes in the distributional pattern in Korea over the years 1980-1985. In assessing the degree of disparity in physician distribution and in identifying changes in the distributional pattern, the Gini index of concentration was used. The geographical units selected for computation of the Gini index in this analysis are districts (Gu), cities (Si), and counties (Gun). Locational data for 1980 and 1985 were obtained from the population census data of the Economic Planning Board and the regular reports of physicians of the Korean Medical Association. The proportions of physicians located in counties among all physicians were 10.4% in 1980 and 9.6% in 1985. In terms of the ratio of physicians per 100,000 population, rural areas had 9.18 physicians in 1980 and 12.95 in 1985; 7.13 general practitioners in 1980 and 7.29 in 1985; and 2.05 specialists in 1980 and 5.66 in 1985. Only specialists in general surgery and preventive medicine had more than 10% of their number located in counties, and the county-level share of every specialty except chest surgery increased in 1985 compared with the rates of 1980. The Gini indices computed to measure inequality of physician distribution in 1985 were as follows: physicians 0.3466, general practitioners 0.5479, and specialists 0.5092. The Gini indices for physicians and specialists fell by 15.40% and 10.42%, respectively, from 1980 to 1985, indicating more even distribution. The changes in the Gini index over the period for specialists, from 0.3639 to 0.4542 for districts, from 0.2510 to 0.1949 for cities, and from 0.5303 to 0.5868 for counties, indicate distributional changes of 24.81%, -22.35%, and 10.65%, respectively. The Gini indices for specialists in neurosurgery, chest surgery, plastic surgery, ophthalmology, tuberculosis, preventive medicine, and anatomical pathology in 1985 were higher than their Gini indices in 1980.
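The Gini index of concentration used above has a standard closed form. A minimal sketch follows; the input would be, for instance, physicians per 100,000 population in each district, city, or county (the data layout is assumed here, not taken from the paper):

```python
def gini(values):
    """Gini concentration index: 0 for a perfectly even distribution,
    approaching 1 as physicians concentrate in a few geographic units."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Mean-difference form: G = sum_i (2i - n - 1) * x_i / (n * sum(x)),
    # with x sorted ascending and i running from 1 to n.
    return sum((2 * i - n - 1) * x
               for i, x in enumerate(xs, start=1)) / (n * total)
```

Four units with equal physician ratios give an index of 0, while all physicians concentrated in one of four units give 0.75, consistent with the paper's reading of higher indices as greater maldistribution.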


    Estimation of river discharge using satellite-derived flow signals and artificial neural network model: application to imjin river (Satellite-derived flow 시그널 및 인공신경망 모형을 활용한 임진강 유역 유출량 산정)

    • Li, Li;Kim, Hyunglok;Jun, Kyungsoo;Choi, Minha
      • Journal of Korea Water Resources Association
      • /
      • v.49 no.7
      • /
      • pp.589-597
      • /
      • 2016
    • In this study, we investigated the use of satellite-derived flow (SDF) signals and a data-based model to estimate outflow for river reaches where in situ measurements are either completely unavailable or difficult to access for hydraulic and hydrologic analysis, such as the upper basin of the Imjin River. Many studies have demonstrated that SDF signals can serve as river width estimates and that the correlation between SDF signals and river width is related to the shape of the cross sections. To extract the nonlinear relationship between SDF signals and river outflow, an Artificial Neural Network (ANN) model with SDF signals as its inputs was applied to compute flow discharge at Imjin Bridge on the Imjin River. 15 pixels were considered for extracting SDF signals, and the Partial Mutual Information (PMI) algorithm was applied to identify the most relevant input variables among 150 candidate SDF signals (including 0-10 day lagged observations). The discharges estimated by the ANN model were compared with those measured at the Imjin Bridge gauging station; the correlation coefficients for training and validation were 0.86 and 0.72, respectively. It was found that if the discharge at Imjin Bridge one day earlier is included as an input variable for the ANN model, the correlation coefficients improve to 0.90 and 0.83, respectively. Based on these results, SDF signals along with some locally measured data can play a useful role in river flow estimation, and especially in flood forecasting for data-scarce regions, as they can simulate the peak discharge and peak time of flood events with satisfactory accuracy.
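The abstract does not specify the ANN architecture or training scheme. As an illustrative stand-in only, a one-hidden-layer network fit by full-batch gradient descent can map lagged input signals to a discharge estimate; the layer size, learning rate, activation, and synthetic data below are all assumptions, not the authors' setup:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.1, epochs=3000, seed=0):
    """Minimal one-hidden-layer ANN (tanh) fit by full-batch gradient
    descent on 0.5 * MSE, standing in for an SDF-signals -> discharge model.
    Returns a prediction function for new inputs."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden);      b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)       # hidden activations
        pred = h @ W2 + b2             # linear output (discharge estimate)
        err = pred - y                 # gradient of 0.5 * MSE w.r.t. pred
        gW2 = h.T @ err / n;  gb2 = err.mean()
        dh = np.outer(err, W2) * (1.0 - h ** 2)   # backprop through tanh
        gW1 = X.T @ dh / n;   gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2
```

In practice the columns of `X` would be the PMI-selected SDF signals (plus, per the paper's finding, the one-day-lagged discharge), and model quality would be judged by the correlation between predicted and gauged discharge, as in the study.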


    (34141) Korea Institute of Science and Technology Information, 245, Daehak-ro, Yuseong-gu, Daejeon
    Copyright (C) KISTI. All Rights Reserved.