• Title/Summary/Keyword: e-Business Standard


A practical analysis approach to the functional requirements standards for electronic records management system (기록관리시스템 기능요건 표준의 실무적 해석)

  • Yim, Jin-Hee
    • The Korean Journal of Archival Studies
    • /
    • no.18
    • /
    • pp.139-178
    • /
    • 2008
  • The functional requirements standards for electronic records management systems (ERMS) that have been published recently describe their specifications very precisely, covering not only the core functions of records management but also system management functions and optional modules. The fact that these standards are similar to one another in the functions they describe reflects the global standardization trend in electronic records practice. In addition, because these standards have been built through collaboration among archivists from many national archives, IT specialists, consultants, and records management application vendors, they are not only of high quality but also well positioned to serve as certification criteria. Though there are many ways to benchmark the functional requirements standards developed from advanced electronic records management practice, this paper shows the possibility of gaining useful practical ideas, with meaningful business cases, by examining electronic records management practices in relation to these standards. The business cases explore central functions of records management and the intellectual control of records, such as classification schemes and disposal schedules. The first example concerns the classification scheme: should the classification be fixed at the same number of levels, and should a record item be filed only at the last node of the scheme? The second example addresses a precise disposition schedule that can impose an event-driven chronological retention period on records and that can be operated using an inheritance concept between parent and child nodes in the classification scheme. The third example shows the use of the function that holds (or freezes) and releases records that must be kept as evidence, in order to meet legal compliance requirements such as e-Discovery or the risk management needs of organizations, under the premise that records management should be the basis for legal compliance. The last example shows cases of bulk and batch operations required if records managers are to use the ERMS as a practical tool. Records managers need to be able to understand and interpret the specifications of the functional requirements standards for ERMS from a practical viewpoint, and to review the standards and extract the specifications required for upgrading their own ERMS. The National Archives of Korea should provide stakeholders with a sound basis for implementing effective and efficient electronic records management by expanding the scope of use of the functional requirements standard for ERMS and building a common understanding of its implications.
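
    The second example above (an event-driven retention period inherited from parent to child nodes of the classification scheme) can be made concrete with a small sketch. The following Python data structure is hypothetical and not drawn from the paper or any ERMS product; the node names and the retention tuple format are illustrative assumptions.

        from dataclasses import dataclass, field
        from typing import List, Optional, Tuple

        @dataclass
        class ClassNode:
            """A node in a records classification scheme (hypothetical sketch)."""
            name: str
            # e.g. ("contract_terminated", 5) = keep 5 years after the triggering event
            retention: Optional[Tuple[str, int]] = None
            children: List["ClassNode"] = field(default_factory=list)
            parent: Optional["ClassNode"] = None

            def add_child(self, child: "ClassNode") -> "ClassNode":
                child.parent = self
                self.children.append(child)
                return child

            def effective_retention(self) -> Optional[Tuple[str, int]]:
                """Walk up the tree: a child inherits its parent's disposal
                schedule unless it declares one of its own."""
                node = self
                while node is not None:
                    if node.retention is not None:
                        return node.retention
                    node = node.parent
                return None

        # Usage: the child folder inherits the event-driven schedule of its parent.
        root = ClassNode("Contracts", retention=("contract_terminated", 5))
        child = root.add_child(ClassNode("Leases"))   # declares no schedule of its own
        print(child.effective_retention())            # ('contract_terminated', 5)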

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information has been recognized in the information society, the usage and collection of information have become important. Like an artistic painting, a facial expression contains a wealth of information that can be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, a leading organization in this research area, has developed a human emotion prediction model and has applied its studies to commercial business. In the academic area, a number of conventional methods such as Multiple Regression Analysis (MRA) or Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy. This is inevitable since MRA can only explain a linear relationship between the dependent and independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generated more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) in order to increase prediction accuracy. SVR is an extension of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ${\varepsilon}$) to the model prediction. Using SVR, we tried to build a model that can measure the level of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions while providing appropriate visually stimulating content, and extracted features from the data. Next, preprocessing steps were taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ${\varepsilon}$-insensitive loss function and a grid search technique to find the optimal values of the parameters C, d, ${\sigma}^2$, and ${\varepsilon}$. In the case of ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set. ANN also outperformed MRA, but it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who are willing to build models for recognizing human emotions.
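
    As a minimal sketch of the modeling pipeline described in this abstract, the Python code below uses scikit-learn (an assumption; the paper does not name its tooling) to fit an ${\varepsilon}$-insensitive SVR tuned by grid search over C, the RBF kernel width, and ${\varepsilon}$, and compares it against multiple regression by MAE on a hold-out set. The data are simulated placeholders for the facial-feature variables.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import GridSearchCV, train_test_split
        from sklearn.metrics import mean_absolute_error

        # Illustrative data: facial-feature vectors X and an arousal score y.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(297, 10))
        y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=297)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Grid search over C, the RBF width (gamma ~ 1/(2*sigma^2)), and epsilon.
        grid = GridSearchCV(
            SVR(kernel="rbf"),
            param_grid={"C": [1, 10, 100],
                        "gamma": [0.01, 0.1, 1.0],
                        "epsilon": [0.01, 0.1, 0.5]},
            scoring="neg_mean_absolute_error", cv=5)
        grid.fit(X_tr, y_tr)

        # Compare against multiple regression (MRA) on the hold-out data by MAE.
        mra = LinearRegression().fit(X_tr, y_tr)
        print("SVR MAE:", mean_absolute_error(y_te, grid.predict(X_te)))
        print("MRA MAE:", mean_absolute_error(y_te, mra.predict(X_te)))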

The Application of Operations Research to Librarianship : Some Research Directions (운영연구(OR)의 도서관응용 -그 몇가지 잠재적응용분야에 대하여-)

  • Choi Sung Jin
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.4
    • /
    • pp.43-71
    • /
    • 1975
  • Operations research has developed rapidly since its origins in World War II. Practitioners of O.R. have contributed to almost every aspect of government and business. More recently, a number of operations researchers have turned their attention to library and information systems, and the author believes that significant research has resulted. It is the purpose of this essay to introduce the library audience to some of these accomplishments, to present some of the author's hypotheses on the subject of library management to which he believes O.R. has great potential, and to suggest some future research directions. Some problem areas in librarianship where O.R. may play a part have been discussed and are summarized below. (1) Library location. It is usually necessary to strike a balance between accessibility and cost in location problems. Many mathematical methods are available for identifying the optimal locations once the balance between these two criteria has been decided. The major difficulties lie in relating cost to size and in taking future change into account when discriminating among possible solutions. (2) Planning new facilities. Standard approaches to using mathematical models for simple investment decisions are well established. If the problem is one of choosing the most economical way of achieving a certain objective, one may compare the alternatives by using one of the discounted cash flow techniques (a small worked example follows this abstract). In other situations it may be necessary to use a cost-benefit approach. (3) Allocating library resources. In order to allocate resources to best advantage, the librarian needs to know how the effectiveness of the services he offers depends on the way he deploys his resources. The O.R. approach to such problems is to construct a model representing effectiveness as a mathematical function of the levels of different inputs (e.g., numbers of people in different jobs, acquisitions of different types, physical resources). (4) Long term planning. Resource allocation problems are generally concerned with up to one and a half years ahead. The longer term certainly offers both greater freedom of action and greater uncertainty, so it is difficult to generalize about long term planning problems. In other fields, however, O.R. has made a significant contribution to long range planning, and it is likely to make one in librarianship as well. (5) Public relations. It is generally accepted that actual and potential users are too ignorant both of the range of library services provided and of how to make use of them. How should services be brought to the attention of potential users? The answer seems to lie in obtaining empirical evidence through controlled experiments in which a group of libraries participates. (6) Acquisition policy. In comparing alternative policies for the acquisition of materials one needs to know, first, the implications of each policy for each service, which depend on the stock, and, second, the relative importance to be ascribed to each service for each class of user. By reducing the burden of the first, formal models will allow the librarian to concentrate his attention upon the value judgements necessary for the second. (7) Loan policy. The approach to choosing between loan policies is much the same as the previous approach. (8) Manpower planning. For large library systems one should consider constructing models which will permit comparison of the skills necessary in the future with predictions of the skills that will be available, so as to allow informed decisions. (9) Management information systems for libraries. A great deal of data can be made available in libraries as a by-product of all recording activities. It is particularly tempting, when procedures are computerized, to make summary statistics available as a management information system. The value of information to particular decisions that may have to be taken in the future is best assessed in terms of a model of the relevant problem. (10) Management gaming. One of the most common uses of a management game is as a means of developing staff's ability to take decisions. The value of such exercises depends upon the validity of the computerized model. If the model were sufficiently simple to take the form of a mathematical equation, decision-makers would probably be able to learn adequately from a graph. More complex situations require simulation models. (11) Diagnostic tools. Libraries are sufficiently complex systems that it would be useful to have available simple means of telling whether performance could be regarded as satisfactory and which, if it could not, would also provide pointers to what was wrong. (12) Data banks. It would appear to be worth considering establishing a bank for certain types of data. If certain items on questionnaires were to take a standard form, a greater pool of data would be available for various analyses. (13) Effectiveness measures. The meaning of a library performance measure is not readily interpreted. Each measure must be assessed in relation to the corresponding measures for earlier periods of time and to a standard that may be the corresponding measure in another library, the 'norm', 'best practice', or user expectations.
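
    As a worked example of the discounted cash flow comparison mentioned in item (2) above, the sketch below compares two hypothetical ways of achieving the same objective by their net present cost; the figures and the 8% discount rate are invented purely for illustration.

        def npv(rate, cashflows):
            """Net present value of a series of yearly cash flows (year 0 first)."""
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        # Two hypothetical ways to provide the same facility over five years
        # (costs expressed as negative cash flows).
        build_new   = [-500_000] + [-20_000] * 5    # high capital cost, low running cost
        lease_space = [0] + [-120_000] * 5          # no capital cost, high running cost

        rate = 0.08  # assumed discount rate
        print("Build NPV:", round(npv(rate, build_new)))
        print("Lease NPV:", round(npv(rate, lease_space)))
        # The alternative with the less negative NPV is the more economical choice.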

The Effect of Characteristics of Web-site Usability on Trust and Purchase Intention of Social Commerce Sites (웹사용성 특성이 소셜커머스의 신뢰와 구매의도에 미치는 영향)

  • Jung, Lee-Sang
    • Management & Information Systems Review
    • /
    • v.34 no.1
    • /
    • pp.1-20
    • /
    • 2015
  • This study analyses the relationship between web-site usability characteristics and social commerce. Specifically, it investigates how the characteristics of web-site usability are connected, through trust in a social commerce site, to purchase intention. Existing research on information systems and electronic commerce is reviewed, usability factors for social commerce sites are derived, and the relationships between these factors, users' trust in the social commerce site, and purchase intention are verified through an empirical study. The results are as follows. First, the visual characteristics of a social commerce site were confirmed to have significant explanatory power for trust. Second, the information characteristics of a social commerce site were confirmed to have significant explanatory power for trust. Third, the relational characteristics of a social commerce site were confirmed to have significant explanatory power for trust. Through this research it can be seen that web-site usability should be considered first in building the trust of social commerce users, and these findings can serve as a useful standard when designing a social commerce site.

Groundwater control measures for deep urban tunnels (도심지 대심도 터널의 지하수 변동 영향 제어 방안)

  • Jeong, Jae-Ho;Kim, Kang-Hyun;Song, Myung-Kyu;Shin, Jong-Ho
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.23 no.6
    • /
    • pp.403-421
    • /
    • 2021
  • Most urban tunnels in Korea, represented by the 1st to 3rd subway lines, are drainage tunnels constructed by NATM. Recently, as construction projects that actively utilize large-scale deep urban space are promoted, negative effects that do not conform to the existing empirical rules of urban tunnelling may occur. In particular, groundwater fluctuations and changes in hydrodynamic behavior are highly likely, owing to the tunnelling practice in Korea, which has mainly applied the drainage tunnel. To solve the problems of the drainage tunnel, attempts are being made to control groundwater fluctuations. For this, a concept for tunnel groundwater management standards was established and the hydraulic behavior of tunnels was analyzed. To prevent groundwater fluctuation problems caused by the construction of large-scale tunnels in urban areas, it is suggested that the empirical technical practice, currently applied only at the underground safety impact assessment stage, be conceptually transformed toward controlling the inflow into the tunnel. In addition, the relationship between the groundwater level and the tunnel inflow, required for setting the allowable inflow when planning a tunnel, was derived. The introduction of a tunnel groundwater management concept is expected to help solve problems such as groundwater fluctuations, ground settlement, depletion of groundwater resources, and decline of maintenance performance in the various deep urban tunnel construction projects to be promoted in the future.

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and contain a lot of noise. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which consists of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models, and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH or GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility results. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4% and that of the MLE-based IVTS is +150.2%. SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored in search of better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage costs. IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
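
    The IVTS entry rules quoted above translate directly into a small decision function. The sketch below is illustrative Python, not code from the paper: it takes series of today's and forecasted volatility values and returns +1/-1 positions, holding the previous position when the forecast direction does not change.

        import numpy as np

        def ivts_positions(forecast_vol, current_vol):
            """+1 = long volatility, -1 = short volatility, per the IVTS entry rules:
            buy today if tomorrow's forecasted volatility is higher than today's,
            sell if lower, and otherwise keep the existing position."""
            positions = []
            pos = 0
            for f, c in zip(forecast_vol, current_vol):
                if f > c:
                    pos = 1
                elif f < c:
                    pos = -1
                # f == c: hold the previous position
                positions.append(pos)
            return np.array(positions)

        # Illustrative use with dummy series of historical volatility values.
        current  = np.array([12.0, 12.5, 12.5, 11.8, 11.8])
        forecast = np.array([12.5, 12.5, 11.8, 11.5, 12.2])
        print(ivts_positions(forecast, current))   # [ 1  1 -1 -1  1]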

Understanding the Mismatch between ERP and Organizational Information Needs and Its Responses: A Study based on Organizational Memory Theory (조직의 정보 니즈와 ERP 기능과의 불일치 및 그 대응책에 대한 이해: 조직 메모리 이론을 바탕으로)

  • Jeong, Seung-Ryul;Bae, Uk-Ho
    • Asia pacific journal of information systems
    • /
    • v.22 no.2
    • /
    • pp.21-38
    • /
    • 2012
  • Until recently, successful implementation of ERP systems has been a popular topic among ERP researchers, who have attempted to identify its various contributing factors. None of these efforts, however, explicitly recognizes the need to identify disparities that can exist between organizational information requirements and ERP systems. Since ERP systems are in fact "packages" (that is, software programs developed by independent software vendors for sale to the organizations that use them), they are designed to meet the general needs of numerous organizations rather than the unique needs of a particular organization, as is the case with custom-developed software. By adopting standard packages, organizations can substantially reduce many of the potential implementation risks commonly associated with custom-developed software. However, it is also true that the nature of the package itself can be a risk factor, as the features and functions of the ERP system may not completely match a particular organization's informational requirements. In this study, based on the organizational memory mismatch perspective derived from organizational memory theory and cognitive dissonance theory, we define the nature of these disparities, which we call "mismatches," and propose that the mismatch between organizational information requirements and ERP systems is one of the primary determinants of the successful implementation of ERP systems. Furthermore, we suggest that customization efforts, as a coping strategy for mismatches, can play a significant role in increasing the possibility of success. In order to examine the contention we propose in this study, we employed a survey-based field study of ERP project team members, resulting in a total of 77 responses. The results of this study show that, as anticipated from the organizational memory mismatch perspective, the mismatch between organizational information requirements and ERP systems has a significantly negative impact on the implementation success of ERP systems. This finding confirms our hypothesis that the greater the mismatch, the more difficult successful ERP implementation is, and thus more attention needs to be drawn to mismatch as a major failure source in ERP implementation. This study also found that, as a coping strategy for mismatch, the effects of customization are significant. In other words, utilizing the appropriate customization method could lead to the implementation success of ERP systems. This is somewhat interesting because it runs counter to the argument of some literature and ERP vendors that minimized customization (or even the lack thereof) is required for successful ERP implementation. In many ERP projects, there is a tendency among ERP developers to adopt default ERP functions without any customization, adhering to the slogan of "the introduction of best practices." However, this study asserts that we cannot expect successful implementation if we do not attempt to customize ERP systems when mismatches exist. For a more detailed analysis, we identified three types of mismatches: Non-ERP, Non-Procedure, and Hybrid. Among these, only Non-ERP mismatches (a situation in which ERP systems cannot support existing information needs that are currently fulfilled) were found to have a direct influence on the implementation of ERP systems. Neither Non-Procedure nor Hybrid mismatches were found to have a significant impact in the ERP context. These findings provide meaningful insights since they could serve as the basis for discussing how the ERP implementation process should be defined and what activities should be included in the implementation process. They show that ERP developers may not want to include organizational (or business process) changes in the implementation process, suggesting that doing so could lead to failed implementation. And in fact, this suggestion eventually turned out to be true when we found that the application of process customization led to a higher possibility of failure. From these discussions, we are convinced that Non-ERP is the only type of mismatch we need to focus on during the implementation process, implying that organizational changes must be made before, rather than during, the implementation process. Finally, this study found that among the various customization approaches, bolt-on development methods in particular seemed to have significantly positive effects. Interestingly again, this finding is not in line with the thinking of vendors in the ERP industry. The vendors' recommendation is to apply as many best practices as possible, thereby minimizing customization and, when customization is needed, utilizing bolt-on development methods. They particularly advise against changing the source code and rather recommend employing, when necessary, the method of programming additional software code using the computer language of the vendor. As previously stated, however, our study found active customization, especially bolt-on development methods, to have positive effects on ERP, and found source code changes in particular to have the most significant effects. Moreover, our study found programming additional software to be ineffective, suggesting there is much difference between ERP developers and vendors in their viewpoints and strategies toward ERP customization. In summary, mismatches are inherent in the ERP implementation context and play an important role in determining its success. Considering the significance of mismatches, this study proposes a new model for successful ERP implementation, developed from the organizational memory mismatch perspective, and provides many insights by empirically confirming the model's usefulness.

The Effect of Subject Well-being on the Consumer's Pricing of Alternatives (주관적 행복이 대안에 대한 소비자의 가격 책정에 미치는 영향)

  • Kim, Moon-Seop;Choi, Jong-An
    • Journal of Distribution Science
    • /
    • v.10 no.4
    • /
    • pp.29-36
    • /
    • 2012
  • Research on subjective well-being (SWB) has flourished in recent years. As SWB shapes cognitive and motivational processes, including social comparison and cognitive dissonance, it influences how consumers make decisions, including the comparison and evaluation of alternatives. Considering that the comparison and evaluation of alternatives is related to social comparison and cognitive dissonance, the influence of SWB on the comparison and evaluation of alternatives needs to be investigated. This research aims to examine the effect of SWB on the comparison and evaluation of alternatives, especially when people acquire additional information about their chosen or non-chosen alternatives, leading to a change in the absolute or relative value of the alternatives. The reasonable price of an alternative as evaluated by individuals is used as a measure reflecting the perceived value of an alternative. Putting all of this together, the current study investigated the influence of absolute and relative value on the reasonable price of an alternative depending on SWB. Participants were randomly assigned to one of two experimental groups (deterioration of the non-chosen alternative vs. improvement of the non-chosen alternative). After reading consumer report ratings of alternatives shown on monitor screens, participants chose one of the alternatives, which was followed by a change in the consumer report ratings (deterioration vs. improvement of the non-chosen alternative). Participants then evaluated the reasonable price of their chosen alternative based on the provided price of the non-chosen alternative. Two weeks after the experiment, they were asked to answer a survey questionnaire on SWB measures. A regression was performed on the reasonable price with experimental group, mean-centered SWB, and their interaction. There were significant simple effects of group and SWB. More importantly, these effects were qualified by the predicted interaction of group and SWB. To interpret this interaction further, simple slope tests were performed on the price with SWB centered at one standard deviation above (i.e., happy people) and below (i.e., unhappy people) the mean. As predicted, happy people rated the reasonable price of the chosen alternative higher in the improvement group than in the deterioration group. Conversely, unhappy people showed no price difference between groups. These results show that happy people pay attention to the absolute value of the alternative, whereas unhappy people give more weight to the relative value as well as to the absolute value of a chosen alternative, indicating that unhappy people are more sensitive to negative information about a non-chosen alternative compared to happy people. The present research expands the existing research stream on SWB by showing the influence of SWB on consumers' evaluation of alternatives. Furthermore, this study adds to previous research on SWB and social comparison by suggesting that unhappy people tend to be more sensitive to negative social comparison information about alternatives even when a target of social comparison is not explicitly present. Moreover, these results yield some managerial implications on how to provide product information based on SWB in order to make products more attractive among the alternatives available to consumers.
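
    A minimal sketch of the analysis described above, assuming statsmodels and pandas (the paper does not name its statistical software) and simulated data: an OLS regression of the reasonable price on the experimental group, mean-centered SWB, and their interaction, followed by simple slopes of the group effect at +/- 1 SD of SWB. Variable names and coefficients are illustrative.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated data standing in for the experiment (names are illustrative).
        rng = np.random.default_rng(1)
        n = 200
        df = pd.DataFrame({
            "group": rng.integers(0, 2, n),   # 0 = deterioration, 1 = improvement
            "swb": rng.normal(size=n),
        })
        df["swb_c"] = df["swb"] - df["swb"].mean()    # mean-center SWB
        df["price"] = (100 + 5 * df["group"] + 2 * df["swb_c"]
                       + 6 * df["group"] * df["swb_c"]
                       + rng.normal(scale=10, size=n))

        # Regression with the group x SWB interaction.
        model = smf.ols("price ~ group * swb_c", data=df).fit()
        print(model.summary().tables[1])

        # Simple slopes of the group effect at +/- 1 SD of centered SWB.
        b = model.params
        sd = df["swb_c"].std()
        print("group effect at +1 SD:", b["group"] + b["group:swb_c"] * sd)
        print("group effect at -1 SD:", b["group"] - b["group:swb_c"] * sd)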

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems
    • /
    • v.21 no.2
    • /
    • pp.89-116
    • /
    • 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing purposes. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher ranking to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are somewhat heterogeneous (i.e., users, resources, and tags). Therefore, uniform application of the voting notion of PageRank and HITS based on the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is expressed in the active or passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to the evaluation method of humans, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. In the case that many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collections. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections. In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should have more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and which can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, applying the time concept to the expertise weights, as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms. While the multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed with the ranking factors combined, and our approach can work even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
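
    To illustrate the mutual-interaction idea in a runnable form, here is a deliberately simplified, hypothetical sketch rather than the authors' full algorithm: importance scores of users, resources, and tags reinforce each other through property-weighted links regardless of link direction, and each class's score vector is normalized on every iteration. The toy data and the property weights w_ur and w_rt are assumptions for illustration.

        import numpy as np

        # Toy folksonomy: adjacency between users/resources and resources/tags.
        users, resources, tags = ["u0", "u1"], ["r0", "r1", "r2"], ["t0", "t1"]
        UR = np.array([[1, 1, 0],        # which user bookmarked which resource
                       [0, 1, 1]], float)
        RT = np.array([[1, 0],           # which tags annotate which resource
                       [1, 1],
                       [0, 1]], float)

        # Hypothetical property weights: how strongly each relation contributes
        # to the importance of the class on the other end of the link.
        w_ur, w_rt = 0.6, 0.4

        u = np.ones(len(users)); r = np.ones(len(resources)); t = np.ones(len(tags))
        for _ in range(50):                            # iterate to a fixed point
            r_new = w_ur * UR.T @ u + w_rt * RT @ t    # resources gain from both sides
            u_new = w_ur * UR @ r                      # users gain from their resources
            t_new = w_rt * RT.T @ r                    # tags gain from tagged resources
            u, r, t = (x / np.linalg.norm(x) for x in (u_new, r_new, t_new))

        print(dict(zip(resources, np.round(r, 3))))    # r1 scores highest: most connected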

A Study on Experimental Construction of Community Garden - A Case Study on Rooftop of SAHA Disabled Welfare House - (커뮤니티 가든 조성을 위한 실험 연구 - 사하 장애인복지관 옥상을 대상으로 -)

  • Kim, Seung-Hwan;Yoon, Sung-Yung;Cha, Min-Jun;Yoo, Yeon-Seo;Cho, Ji-Young;Kim, Yoon-Sun
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.40 no.2
    • /
    • pp.24-37
    • /
    • 2012
  • In this study, based on a review of various domestic and international community garden practices and trends, the community garden was defined as a form of urban renewal movement in which a group voluntarily participates in producing safe food while securing space for the community and actively securing urban green space in a new form. Furthermore, with the aim of improving a poor welfare environment, a community garden was experimentally constructed on the rooftop of a welfare house for the disabled, and its planning and construction process, partnership involvement, business process, and cost sharing were investigated. The whole process, including the budget for this case, was carried out by the Busan Green Trust. Half of the funding came from Standard Chartered (SC) First Bank through the Community Chest, together with the participation of volunteers, the support of Busan City and Saha-gu, and cost sharing or trial participation by a diverse partnership of enterprises, public corporations, and laboratories; these were the keys to developing the community garden model. The established community garden provides users of the welfare center for the disabled with food production, participation in urban agriculture experience programs, horticultural therapy, and a combined community and cultural space. Furthermore, the rooftop community garden is meaningful as a low-cost garden built with movable and fixed planters. This study is useful for developing an experimental community garden model on a welfare house rooftop that improves the urban environment, secures community space and urban green areas, and contributes to urban regeneration.