• Title/Summary/Keyword: Multiple-view approach

Search Results: 88

Multi-Criteria Group Decision Making under Imprecise Preference Judgments: Using Fuzzy Logic with Linguistic Quantifier

  • Choi, Duke-Hyun;Ahn, Byeong-Seok;Kim, Soung-Hie
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2005.11a
    • /
    • pp.557-567
    • /
    • 2005
  • The increasing complexity of socio-economic environments makes it less and less possible for a single decision-maker to consider all relevant aspects of a problem. Therefore, many organizations employ groups in decision making. In this paper, we present a multiperson decision-making method using fuzzy logic with a linguistic quantifier, for the case where each group member specifies imprecise judgments, possibly both on the performance evaluations of alternatives with respect to the multiple criteria and on the criteria themselves. Inexact or vague preferences have appeared in the decision-making literature with a view to relaxing the burden of preference specification imposed on decision-makers and thus accounting for the vagueness of human judgments. Allowing for these types of imprecise judgments in the model, however, makes a clear selection of the alternative(s) a group wants more difficult. Further interactions with the decision-makers may therefore be needed, to the point of offsetting the initial ease of preference specification, and even these interactions may not guarantee that the best alternative is selected for implementation. To circumvent this deadlock, we present a procedure for obtaining a satisfying solution by means of linguistic quantifier guided aggregation, which embodies the notion of fuzzy majority. The approach combines a prescriptive decision method based on mathematical programming with a well-established approximate method for aggregating multiple objects.
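
As an illustration of the quantifier-guided aggregation described in the abstract above, the sketch below derives ordered weights from a relative linguistic quantifier such as "most" (Yager-style OWA weights, which encode a fuzzy majority) and uses them to aggregate group members' evaluations of one alternative. The quantifier parameters, member scores, and function names are assumptions for demonstration, not the paper's exact procedure.

```python
# Illustrative sketch (not the paper's exact procedure): OWA weights derived
# from a relative linguistic quantifier ("most") and used to aggregate the
# evaluations of one alternative given by several group members.

def quantifier_most(r: float, a: float = 0.3, b: float = 0.8) -> float:
    """Piecewise-linear RIM quantifier Q(r) for 'most'; a and b are illustrative."""
    if r <= a:
        return 0.0
    if r >= b:
        return 1.0
    return (r - a) / (b - a)

def owa_weights(n: int, quantifier=quantifier_most) -> list[float]:
    """w_i = Q(i/n) - Q((i-1)/n), i = 1..n  (fuzzy-majority weights)."""
    return [quantifier(i / n) - quantifier((i - 1) / n) for i in range(1, n + 1)]

def owa_aggregate(scores: list[float], quantifier=quantifier_most) -> float:
    """Sort the scores in descending order and take the OWA-weighted sum."""
    ordered = sorted(scores, reverse=True)
    weights = owa_weights(len(ordered), quantifier)
    return sum(w * s for w, s in zip(weights, ordered))

if __name__ == "__main__":
    # Four group members' hypothetical evaluations of one alternative in [0, 1].
    member_scores = [0.9, 0.7, 0.6, 0.4]
    print(round(owa_aggregate(member_scores), 3))  # degree to which "most" members are satisfied
```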

Identifying Research Trends in Big data-driven Digital Transformation Using Text Mining (텍스트마이닝을 활용한 빅데이터 기반의 디지털 트랜스포메이션 연구동향 파악)

  • Minjun, Kim
    • Smart Media Journal
    • /
    • v.11 no.10
    • /
    • pp.54-64
    • /
    • 2022
  • A big data-driven digital transformation is defined as a process that aims to innovate companies by triggering significant changes to their capabilities and designs through the use of big data and various technologies. For a successful big data-driven digital transformation, reviewing related literature, which enhances the understanding of research statuses and the identification of key research topics and relationships among key topics, is necessary. However, understanding and describing literature is challenging, considering its volume and variety. Establishing a common ground for central concepts is essential for science. To clarify key research topics on the big data-driven digital transformation, we carry out a comprehensive literature review by performing text mining of 439 articles. Text mining is applied to learn and identify specific topics, and the suggested key references are manually reviewed to develop a state-of-the-art overview. A total of 10 key research topics and relationships among the topics are identified. This study contributes to clarifying a systematized view of dispersed studies on big data-driven digital transformation across multiple disciplines and encourages further academic discussions and industrial transformation.
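
As a concrete, though simplified, illustration of the text-mining step described above, the sketch below extracts candidate topics from article abstracts using TF-IDF features and non-negative matrix factorization. The tiny sample corpus, the library choice (scikit-learn), and the number of topics are assumptions for demonstration; the study's actual pipeline and its 439 articles are not reproduced here.

```python
# Minimal topic-extraction sketch: TF-IDF + NMF over article abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

abstracts = [
    "big data analytics for digital transformation of manufacturing firms",
    "organizational capabilities and data-driven business model innovation",
    "machine learning platforms enabling digital service transformation",
    # ... in practice, the full set of article abstracts would go here
]

vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
tfidf = vectorizer.fit_transform(abstracts)

n_topics = 3  # the study identifies 10 key topics; 3 keeps this toy example small
nmf = NMF(n_components=n_topics, random_state=0)
doc_topic = nmf.fit_transform(tfidf)

terms = vectorizer.get_feature_names_out()
for k, component in enumerate(nmf.components_):
    top_terms = [terms[i] for i in component.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top_terms)}")
```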

A PLS Path Modeling Approach on the Cause-and-Effect Relationships among BSC Critical Success Factors for IT Organizations (PLS 경로모형을 이용한 IT 조직의 BSC 성공요인간의 인과관계 분석)

  • Lee, Jung-Hoon;Shin, Taek-Soo;Lim, Jong-Ho
    • Asia Pacific Journal of Information Systems
    • /
    • v.17 no.4
    • /
    • pp.207-228
    • /
    • 2007
  • For a long time, measurement of Information Technology (IT) organizations' activities was limited mainly to financial indicators. However, as the functions of information systems have diversified, a number of studies have examined new measurement methodologies that combine financial measures with new, non-financial ones. In particular, research on the IT Balanced Scorecard (BSC), a concept derived from the BSC for measuring IT activities, has been conducted in recent years. The BSC offers more than the mere integration of non-financial measures into a performance measurement system. Its core rests on the cause-and-effect relationships between measures, which allow prediction of value-chain performance, communication and realization of the corporate strategy, and incentive-controlled actions. More recently, BSC proponents have focused on the need to tie measures together into a causal chain of performance and to test the validity of these hypothesized effects in order to guide the development of strategy. Kaplan and Norton [2001] argue that one of the primary benefits of the balanced scorecard is its use in gauging the success of strategy. Norreklit [2000] insists that the cause-and-effect chain is central to the balanced scorecard, and it is equally central to the IT BSC. However, the relationship between information systems and enterprise strategies, as well as the connections among various IT performance measurement indicators, has received little prior study. Ittner et al. [2003] report that 77% of all surveyed companies with an implemented BSC place no or only little emphasis on soundly modeled cause-and-effect relationships, despite the importance of cause-and-effect chains as an integral part of the BSC. This shortcoming can be explained by one theoretical and one practical reason [Blumenberg and Hinz, 2006]. From a theoretical point of view, causalities within the BSC method and their application are only vaguely described by Kaplan and Norton. From a practical point of view, modeling corporate causalities is a complex task due to tedious data acquisition and the subsequent maintenance of reliability. Nevertheless, cause-and-effect relationships are an essential part of BSCs because they differentiate performance measurement systems like the BSC from simple key performance indicator (KPI) lists. KPI lists present an ad hoc collection of measures to managers but do not allow a comprehensive view of corporate performance, whereas performance measurement systems like the BSC try to model the relationships of the underlying value chain as cause-and-effect relationships. Therefore, to overcome the deficiencies of causal modeling in the IT BSC, sound and robust causal modeling approaches are required in both theory and practice. The purpose of this study is to suggest critical success factors (CSFs) and KPIs for measuring the performance of IT organizations and to empirically validate the causal relationships among those CSFs. For this purpose, we define four BSC perspectives for IT organizations following Van Grembergen's study [2000]: the Future Orientation perspective represents the human and technology resources needed by IT to deliver its services; the Operational Excellence perspective represents the IT processes employed to develop and deliver the applications; the User Orientation perspective represents the user evaluation of IT; and the Business Contribution perspective captures the business value of the IT investments. Each of these perspectives has to be translated into corresponding metrics and measures that assess the current situation. This study suggests 12 CSFs for the IT BSC based on previous IT BSC studies and COBIT 4.1; these CSFs comprise 51 KPIs. We define the cause-and-effect relationships among the BSC CSFs for IT organizations as follows: the Future Orientation perspective has positive effects on the Operational Excellence perspective, the Operational Excellence perspective has positive effects on the User Orientation perspective, and the User Orientation perspective has positive effects on the Business Contribution perspective. This research tests the validity of these hypothesized causal effects and the sub-hypothesized causal relationships. For this purpose, we used the Partial Least Squares approach to Structural Equation Modeling (PLS path modeling) to analyze the multiple IT BSC CSFs. PLS path modeling has properties that make it more appropriate than other techniques, such as multiple regression and LISREL, when analyzing small sample sizes, and its use has been gaining interest among IS researchers because of its ability to model latent constructs under conditions of non-normality and with small to medium sample sizes (Chin et al., 2003). The empirical results of our study using PLS path modeling show that the hypothesized causal effects in the IT BSC partially hold.
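
The hypothesized causal chain (Future Orientation → Operational Excellence → User Orientation → Business Contribution) can be sketched in a simplified form as below, where each construct is approximated by a standardized composite of its indicators and each path coefficient is estimated from the composites. This is only a stand-in for full PLS path modeling; the indicator names, sample size, and synthetic data are illustrative assumptions, not the study's data.

```python
# Simplified sketch of testing the hypothesized chain
#   Future Orientation -> Operational Excellence -> User Orientation -> Business Contribution
# using equally weighted composite scores rather than full PLS estimation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 120  # hypothetical number of survey responses

# Hypothetical indicator blocks (KPIs) for each BSC perspective.
data = pd.DataFrame(rng.normal(size=(n, 8)),
                    columns=["fo1", "fo2", "oe1", "oe2", "uo1", "uo2", "bc1", "bc2"])

def composite(df: pd.DataFrame, cols: list[str]) -> np.ndarray:
    """Equally weighted, standardized composite score for one latent construct."""
    block = (df[cols] - df[cols].mean()) / df[cols].std()
    return block.mean(axis=1).to_numpy()

scores = {
    "FO": composite(data, ["fo1", "fo2"]),
    "OE": composite(data, ["oe1", "oe2"]),
    "UO": composite(data, ["uo1", "uo2"]),
    "BC": composite(data, ["bc1", "bc2"]),
}

def path_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized simple-regression slope of y on x (equals the correlation)."""
    return float(np.corrcoef(x, y)[0, 1])

for cause, effect in [("FO", "OE"), ("OE", "UO"), ("UO", "BC")]:
    beta = path_coefficient(scores[cause], scores[effect])
    print(f"{cause} -> {effect}: path coefficient = {beta:.3f}")
```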

Method of Measuring Color Difference Between Images using Corresponding Points and Histograms (대응점 및 히스토그램을 이용한 영상 간의 컬러 차이 측정 기법)

  • Hwang, Young-Bae;Kim, Je-Woo;Choi, Byeong-Ho
    • Journal of Broadcast Engineering
    • /
    • v.17 no.2
    • /
    • pp.305-315
    • /
    • 2012
  • Color correction between two or more images is crucial for the development of subsequent algorithms and for stereoscopic 3D camera systems. Although various color correction methods have been proposed recently, few methods exist for measuring their performance, and when two images exhibit view variation due to different camera positions, previous performance measures may not be appropriate. In this paper, we propose a method for measuring the color difference between corresponding images for color correction. The method finds matching points that should have the same colors in the two scenes, using correspondence search to account for the view variation, and then computes statistics over the neighborhoods of these matching points to measure the color difference. This approach can tolerate misalignment of corresponding points, unlike a conventional geometric transformation based on a single homography. To handle the case in which matching points do not cover the whole image, we also compute color-difference statistics over the entire image region. The final color difference is the weighted sum of the correspondence-based and whole-region-based measures, where the weight is determined by the ratio of the image area covered by the correspondence-based comparison.
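
A rough sketch of this kind of measurement is given below: feature matches provide corresponding patches whose mean colors are compared, a whole-image term covers unmatched regions, and the two terms are blended by the estimated coverage. The detector choice (ORB), patch radius, and weighting rule are illustrative assumptions, not the paper's exact metric.

```python
# Illustrative color-difference measure between two views of the same scene.
import cv2
import numpy as np

def patch_mean(img, pt, r=7):
    """Mean BGR colour of a (2r+1)^2 patch centred on a keypoint location."""
    x, y = int(pt[0]), int(pt[1])
    patch = img[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
    return patch.reshape(-1, 3).mean(axis=0)

def color_difference(img1, img2, r=7):
    """Blend correspondence-based and whole-image colour differences."""
    orb = cv2.ORB_create(nfeatures=1000)
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)

    # Correspondence-based term: mean absolute difference of matched patch colours.
    diffs = [np.abs(patch_mean(img1, k1[m.queryIdx].pt, r) -
                    patch_mean(img2, k2[m.trainIdx].pt, r)).mean() for m in matches]
    corr_term = float(np.mean(diffs)) if diffs else 0.0

    # Whole-image term: difference of the global mean colours.
    whole_term = float(np.abs(img1.reshape(-1, 3).mean(axis=0) -
                              img2.reshape(-1, 3).mean(axis=0)).mean())

    # Weight by a rough estimate of the image fraction covered by match patches.
    coverage = min(len(matches) * (2 * r + 1) ** 2 /
                   float(img1.shape[0] * img1.shape[1]), 1.0)
    return coverage * corr_term + (1.0 - coverage) * whole_term

# diff = color_difference(cv2.imread("left.png"), cv2.imread("right.png"))
```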

Multi-Criteria Group Decision Making under Imprecise Preference Judgments : Using Fuzzy Logic with Linguistic Quantifier (불명료한 선호정보 하의 다기준 그룹의사결정 : Linguistic Quantifier를 통한 퍼지논리 활용)

  • Choi, Duke Hyun;Ahn, Byeong Seok;Kim, Soung Hie
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.3
    • /
    • pp.15-32
    • /
    • 2006
  • The increasing complexity of socio-economic environments makes it less and less possible for a single decision-maker to consider all relevant aspects of a problem. Therefore, many organizations employ groups in decision making. In this paper, we present a multiperson decision-making method using fuzzy logic with a linguistic quantifier, for the case where each group member specifies imprecise judgments, possibly both on the performance evaluations of alternatives with respect to the multiple criteria and on the criteria themselves. Inexact or vague preferences have appeared in the decision-making literature with a view to relaxing the burden of preference specification imposed on decision-makers and thus accounting for the vagueness of human judgments. Allowing for these types of imprecise judgments in the model, however, makes a clear selection of the alternative(s) a group wants more difficult. Further interactions with the decision-makers may therefore be needed, to the point of offsetting the initial ease of preference specification, and even these interactions may not guarantee that the best alternative is selected for implementation. To circumvent this deadlock, we present a procedure for obtaining a satisfying solution by means of linguistic quantifier guided aggregation, which embodies the notion of fuzzy majority. The approach combines a prescriptive decision method based on mathematical programming with a well-established approximate method for aggregating multiple objects.

Real-Time Hierarchical Techniques for Rendering of Translucent Materials and Screen-Space Interpolation (반투명 재질의 렌더링과 화면 보간을 위한 실시간 계층화 알고리즘)

  • Ki, Hyun-Woo;Oh, Kyoung-Su
    • Journal of Korea Game Society
    • /
    • v.7 no.1
    • /
    • pp.31-42
    • /
    • 2007
  • In the natural world, most materials, such as skin, marble, and cloth, are translucent: their appearance is smooth and soft compared with metals or mirrors. In this paper, we propose a new GPU-based hierarchical rendering technique for translucent materials, based on the dipole diffusion approximation, that runs at interactive rates. Incident-light information on the surfaces (position, normal, and irradiance) is stored into 2D textures by rendering from the primary light's view. Huge numbers of pixel photons are clustered into quad-tree image pyramids. For each pixel, we select clusters (sets of photons) and approximate the multiple subsurface scattering term with those clusters. We also introduce a novel hierarchical screen-space interpolation technique that exploits spatial coherence with early-z culling on the GPU. We build image pyramids of the screen using mipmaps and a pixel shader; each pixel of the pyramids stores the position, normal, and spatial similarity of its child pixels. If a pixel's similarity is high, we render that pixel and interpolate its value across multiple pixels. Result images show that our method can interactively render deformable translucent objects by approximating hundreds of thousands of photons with only hundreds of clusters, without any preprocessing. Because we use an image-space approach for the entire process on the GPU, our method is less dependent on scene complexity.
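
For reference, the quantity being accumulated per pixel is the dipole diffusion approximation of Jensen et al. [2001]: outgoing multiple scattering is a sum of R_d(r)-weighted irradiance over clustered surface samples. The CPU-side sketch below evaluates that sum for one shaded point; the material parameters and cluster data are illustrative assumptions, and the paper's GPU pipeline (textures, quad-trees, shaders) is not reproduced.

```python
# Dipole diffusion approximation sketch: sum cluster irradiance * area * R_d(r).
import math

def dipole_rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Diffuse reflectance R_d at distance r under the dipole approximation."""
    sigma_t_prime = sigma_a + sigma_s_prime
    alpha_prime = sigma_s_prime / sigma_t_prime
    sigma_tr = math.sqrt(3.0 * sigma_a * sigma_t_prime)
    fdr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    a_coeff = (1.0 + fdr) / (1.0 - fdr)
    z_r = 1.0 / sigma_t_prime                  # real source depth
    z_v = z_r * (1.0 + 4.0 / 3.0 * a_coeff)    # virtual (mirrored) source depth
    d_r = math.sqrt(r * r + z_r * z_r)
    d_v = math.sqrt(r * r + z_v * z_v)
    return (alpha_prime / (4.0 * math.pi)) * (
        z_r * (1.0 + sigma_tr * d_r) * math.exp(-sigma_tr * d_r) / d_r**3 +
        z_v * (1.0 + sigma_tr * d_v) * math.exp(-sigma_tr * d_v) / d_v**3)

def multiple_scattering(point, clusters, sigma_a=0.01, sigma_s_prime=1.0):
    """Accumulate irradiance * area * R_d(distance) over the selected clusters."""
    total = 0.0
    for center, irradiance, area in clusters:  # each cluster: (position, E, area)
        r = math.dist(point, center)
        total += irradiance * area * dipole_rd(max(r, 1e-4), sigma_a, sigma_s_prime)
    return total

# Example: two hypothetical clusters near the shaded point at the origin.
clusters = [((0.1, 0.0, 0.0), 2.0, 0.05), ((0.5, 0.2, 0.0), 1.5, 0.20)]
print(multiple_scattering((0.0, 0.0, 0.0), clusters))
```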

Stem Cells and Cell-Cell Communication in the Understanding of the Role of Diet and Nutrients in Human Diseases

  • Trosko James E.
    • Journal of Food Hygiene and Safety
    • /
    • v.22 no.1
    • /
    • pp.1-14
    • /
    • 2007
  • The term "food safety" has traditionally been viewed as a practical science aimed at preventing acute illnesses caused by biological microorganisms and, only to a minor extent, chronic diseases caused by chronic low-level exposures to natural and synthetic chemicals or pollutants. "Food safety" meant preventing microbiological agents/toxins in or on foods, arising from contamination anywhere from "farm to fork", from causing acute health effects, especially in the young, the immune-compromised, the genetically predisposed, and the elderly. Today, however, a broader view must also include the fact that diet per se (nutrients, vitamins/minerals, calories), as well as low-level toxins and pollutants or supplemented synthetic chemicals, can alter gene expression in stem, progenitor, and terminally differentiated cells, leading to chronic inflammation and other malfunctions that could lead to diseases such as cancer, diabetes, atherogenesis, and possibly reproductive and neurological disorders. Understanding the mechanisms by which natural or synthetic chemical toxins/toxicants in or on food interact with the pathogenesis of acute and chronic diseases should lead to a "systems" approach to food safety. Clearly, the interactions of diet/food with the genetic background, gender, and developmental state of the individual must be integrated into such a "systems" approach, together with (a) interactions of other endogenous/exogenous chemicals/drugs; (b) the specific biology of the cells being affected; (c) the mechanisms by which the presence or absence of toxins/toxicants and nutrients cause toxicities; and (d) how those mechanisms affect the pathogenesis of acute and/or chronic diseases. Mechanisms of how toxins/toxicants cause cellular toxicities, such as mutagenesis, cytotoxicity, and altered gene expression, must take into account (a) irreversible or reversible changes caused by these toxins or toxicants; (b) concepts of thresholds or no thresholds of action; and (c) concepts of differential effects on stem cells, progenitor cells, and terminally differentiated cells in different organs. This brief commentary tries to illustrate this complex interaction between what is on or in foods and one disease, namely cancer, since the understanding of cancer, while still incomplete, can shed light on the multiple ways that toxins/toxicants, as well as dietary modulation of nutrients/vitamins/metals/calories, can either enhance or reduce the risk of cancer. In particular, diets that alter the embryo-fetal micro-environment might dramatically alter disease formation later in life. In effect, "food safety" cannot be assessed without understanding how food could be 'toxic', or how that mechanism of toxicity interacts with the pathogenesis of any disease.

Bilayer Segmentation of Consistent Scene Images by Propagation of Multi-level Cues with Adaptive Confidence (다중 단계 신호의 적응적 전파를 통한 동일 장면 영상의 이원 영역화)

  • Lee, Soo-Chahn;Yun, Il-Dong;Lee, Sang-Uk
    • Journal of Broadcast Engineering
    • /
    • v.14 no.4
    • /
    • pp.450-462
    • /
    • 2009
  • So far, many methods for segmenting single images or video have been proposed, but few methods have dealt with multiple images with analogous content. These images, which we term consistent scene images, include concurrent images of a scene and gathered images of a similar foreground, and may be collectively utilized to describe a scene or as input images for multi-view stereo. In this paper, we present a method to segment these images with minimum user input, specifically manual segmentation of one image, by iteratively propagating information via multi-level cues with adaptive confidence depending on the nature of the images. Propagated cues are used as the basis for computing multi-level potentials in an MRF framework, and segmentation is performed by energy minimization. Both cues and potentials are classified as low-, mid-, or high-level according to whether they pertain to pixels, patches, or shapes. A major aspect of our approach is utilizing mid-level cues to compute low- and mid-level potentials, and high-level cues to compute low-, mid-, and high-level potentials, thereby making use of inherent information. Through this process, the proposed method attempts to maximize the amount of both extracted and utilized information in order to maximize the consistency of the segmentation. We demonstrate the effectiveness of the proposed method on several sets of consistent scene images and provide a comparison with results based only on mid-level cues [1].
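
To make the MRF formulation concrete, the toy sketch below poses bilayer (foreground/background) labelling as binary energy minimization with unary (color-likelihood) and pairwise (smoothness) potentials, solved here by simple iterated conditional modes. The paper's multi-level cues, adaptive confidence, and actual optimizer are much richer; every name and parameter below is illustrative.

```python
# Toy binary MRF for bilayer segmentation, minimized with ICM sweeps.
import numpy as np

def segment(image, fg_mean, bg_mean, smoothness=0.5, iters=5):
    """image: HxWx3 float array; fg_mean/bg_mean: reference colours from cues."""
    h, w, _ = image.shape
    # Unary potentials: squared distance to each layer's reference colour.
    unary_fg = ((image - fg_mean) ** 2).sum(axis=2)
    unary_bg = ((image - bg_mean) ** 2).sum(axis=2)
    labels = (unary_fg < unary_bg).astype(np.int32)   # 1 = foreground

    for _ in range(iters):                            # ICM sweeps
        for y in range(h):
            for x in range(w):
                # Pairwise (Potts) cost: number of disagreeing 4-neighbours.
                nbrs = [labels[yy, xx]
                        for yy, xx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= yy < h and 0 <= xx < w]
                cost_fg = unary_fg[y, x] + smoothness * sum(1 for n in nbrs if n != 1)
                cost_bg = unary_bg[y, x] + smoothness * sum(1 for n in nbrs if n != 0)
                labels[y, x] = 1 if cost_fg <= cost_bg else 0
    return labels

# Example on a tiny synthetic image: left half reddish (fg), right half bluish (bg).
img = np.zeros((8, 8, 3)); img[:, :4] = [0.9, 0.1, 0.1]; img[:, 4:] = [0.1, 0.1, 0.9]
print(segment(img, fg_mean=np.array([1.0, 0.0, 0.0]), bg_mean=np.array([0.0, 0.0, 1.0])))
```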

Political Geography of Ulsan Oil Refinery (울산공업단지의 서막, 정유공장 건설의 정치지리)

  • Gimm, Dong-Wan;Kim, Min-Ho
    • Journal of the Korean Geographical Society
    • /
    • v.49 no.2
    • /
    • pp.139-159
    • /
    • 2014
  • This study problematizes the dominance of developmental state theory and its negative influence on the field of Korean studies, in particular on work dealing with the industrialization of the developmental era of the 1960s~70s. As is generally known, the theory has held a position of unchallenged authority regarding the industrialization experience of East Asian countries, including South Korea. At the same time, however, it has also misled us into overlooking the strategic relations that articulated state forms at multiple scales. This study aims to reconstruct the historical context through the theorizing prompted by recent work on state space. I shed light on the multiscalar strategic relations that shaped the Ulsan refinery plant as a representative state space of South Korean industrialization during the two decades after liberation. Specifically, the study illustrates the features and roles of Cold War networks and multiscalar agents such as Nam Goong-Yeon. By identifying the plant as the result of sequential articulations between Ulsan and other scales, this study concludes by suggesting a reframing of strategic relational spaces, beyond the view of methodological nationalism, from a multiscalar perspective.

Software development project management using Agile methodology (Agile 방법론을 이용한 소프트웨어 개발 프로젝트관리)

  • kim, tai-dal
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.1
    • /
    • pp.155-162
    • /
    • 2016
  • In recent years, the values of agile development have come to the fore: individuals and interactions over software development processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan. In view of this, software development requires a management approach that gives autonomy and motivation to the project team rather than being process-oriented, and that is centered on passion, vision, and human relations. Agile development processes have also drawn the attention of experts in recent years, as gains in productivity, quality, efficiency, and customer satisfaction have been demonstrated when such methodologies are selected to drive projects. To meet contemporary demands on the chosen methodology and to solve the problems of product-based cross-functional teams within the organization, this paper proposes the Feature Team model, in which a cross-functional team is organized not around an outcome (product) but as a functional unit that progresses development across multiple products; we examine this model against a value- and plan-driven, agile-technique-based model and discuss the differences. For domain analysis and requirements extraction, we propose holding conventional JAD (joint application development) meetings, performing object-oriented modeling of the targets, organizing and reviewing the models in advance, and proceeding with the project using UML structure and behavior diagrams.