• Title/Summary/Keyword: systems approach method


Personal Information Overload and User Resistance in the Big Data Age (빅데이터 시대의 개인정보 과잉이 사용자 저항에 미치는 영향)

  • Lee, Hwansoo;Lim, Dongwon;Zo, Hangjung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.125-139
    • /
    • 2013
  • Big data refers to data that cannot be processed with conventional data technologies. As smart devices and social network services produce vast amounts of data, big data has attracted much attention from researchers. There is strong demand from governments and industries for big data, as it can create new value by drawing business insights from data. Since various new technologies to process big data have been introduced, academic communities have also shown much interest in the big data domain. Notable advances related to big data technology have been made in various fields. Big data technology makes it possible to access, collect, and store individuals' personal data. These technologies enable the analysis of huge amounts of data at lower cost and in less time than is possible with traditional methods. They can even detect personal information that people do not want to disclose. Therefore, people using information technology such as the Internet or online services have some level of privacy concern, and such feelings can hinder the continued use of information systems. For example, SNS offer various benefits, but users are sometimes highly exposed to privacy intrusions because they post too much personal information on them. Even though users post their personal information on the Internet themselves, the data is sometimes not under their control. Once private data is posted on the Internet, it can be transferred anywhere with a few clicks and can be abused to create a fake identity. In this way, privacy intrusion happens. This study aims to investigate how perceived personal information overload in SNS affects users' risk perception and information privacy concerns. It also examines the relationship between those concerns and user resistance behavior. A survey approach and the structural equation modeling method are employed for data collection and analysis. This study contributes meaningful insights for academic researchers and policy makers who are planning to develop guidelines for privacy protection. The study shows that information overload on social network services can significantly increase users' perceived level of privacy risk. In turn, the perceived privacy risk leads to an increased level of privacy concerns. If privacy concerns increase, users may form a negative or resistant attitude toward system use. This resistant attitude may lead users to discontinue the use of social network services. Furthermore, the effect of information overload on privacy concerns is mediated by perceived risk rather than being direct. This implies that resistance to system use can be diminished by reducing users' perceived risks. Given that users' resistance behavior becomes salient when they have high privacy concerns, measures to alleviate users' privacy concerns should be devised. This study makes an academic contribution by integrating traditional information overload theory and user resistance theory to investigate perceived privacy concerns in current IS contexts. Little big data research has examined the technology with an empirical, behavioral approach, as the research topic has only recently emerged. The study also makes practical contributions. Information overload is linked to an increased level of perceived privacy risk and, ultimately, to discontinued use of the information system. To keep users from abandoning the system, organizations should develop systems in which private data can be controlled and managed with ease. This study suggests that actions to lower the level of perceived risk and privacy concerns should be taken to sustain information systems continuance.
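
As a rough illustration of the survey-plus-SEM analysis described above, the sketch below fits the hypothesized path (overload → perceived risk → privacy concerns → resistance) on synthetic construct scores using the third-party semopy package; the variable names and data are hypothetical, not the study's.

```python
# Minimal SEM sketch, assuming the semopy package; constructs and data are synthetic.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 300
overload = rng.normal(size=n)
risk = 0.6 * overload + rng.normal(scale=0.8, size=n)
concern = 0.7 * risk + 0.1 * overload + rng.normal(scale=0.8, size=n)
resist = 0.5 * concern + rng.normal(scale=0.8, size=n)
df = pd.DataFrame({"info_overload": overload, "perceived_risk": risk,
                   "privacy_concern": concern, "resistance": resist})

# Hypothesized structural model (lavaan-style syntax used by semopy).
model_desc = """
perceived_risk  ~ info_overload
privacy_concern ~ perceived_risk + info_overload
resistance      ~ privacy_concern
"""
model = semopy.Model(model_desc)
model.fit(df)
print(model.inspect())   # path estimates, standard errors, p-values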

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. In practice, however, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost-efficient. These financial variables include the ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are the most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems that require accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared toward multiclass classification problems such as credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification. Hitherto, a variety of techniques to extend standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature; however, only a few types of MSVM have been tested in prior studies that apply MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of these techniques to a real-world case of credit rating in Korea, namely corporate bond rating, which assesses specific debt issues or other financial obligations and is the most frequently studied area of credit rating. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
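
As a rough illustration of two of the MSVM strategies compared above (One-Against-One and One-Against-All), the sketch below trains both on a synthetic multiclass data set with scikit-learn; the bond-rating data themselves are not reproduced here.

```python
# Minimal sketch of two multiclass SVM decompositions, assuming scikit-learn;
# the data are synthetic stand-ins for rating classes.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("one-against-one", OneVsOneClassifier(SVC(kernel="rbf", C=10))),
                  ("one-against-all", OneVsRestClassifier(SVC(kernel="rbf", C=10)))]:
    clf.fit(X_tr, y_tr)                       # one binary SVM per pair / per class
    print(f"{name}: accuracy = {clf.score(X_te, y_te):.3f}")
```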

A Study on Effective Methods of Polygon Modeling through Modeling Process-Related System (모델링 공정 연계 시스템을 통한 효율적 폴리곤 모델링 기법에 대한 탐구)

  • Kim, Sang-Don;Lee, Hyun-Seok
    • Cartoon and Animation Studies
    • /
    • s.37
    • /
    • pp.143-158
    • /
    • 2014
  • In the modeling processes of 3D computer animation, methods for building optimal working conditions to realize realistic forms more efficiently have been advanced. ZBrush, a digital sculpting package released in 1999, has established itself as an essential tool for character modeling work that requires realistic description, thanks to production methods that differ from previous modeling workflows and to its ease of shape realization, and its functional areas keep expanding. Therefore, in this production case study, the efficiency of a production method linking digital sculpting software (ZBrush) and animation production software (Maya) was examined as a way to produce better-optimized animation character models, and its consequences and implications are presented. To this end, first, the technical features of polygon modeling and retopology were reviewed. Second, based on this review, the efficiency of the animation character modeling workflow proposed in this paper, in which ZBrush and Maya are linked step by step, was analyzed. Third, based on the features identified earlier, the production process of the character Dumvee from the short animation film 'Cula & Mina' was analyzed as an example in order to validate the proposed modeling optimization method. Through this study, it was found that ease of technical approach and a high level of completion could be achieved through the linked two-software workflow. This study is expected to serve as a reference for optimizing production processes in related industries and in modeling-related courses by examining different systems that link modeling processes.

Bilayer Segmentation of Consistent Scene Images by Propagation of Multi-level Cues with Adaptive Confidence (다중 단계 신호의 적응적 전파를 통한 동일 장면 영상의 이원 영역화)

  • Lee, Soo-Chahn;Yun, Il-Dong;Lee, Sang-Uk
    • Journal of Broadcast Engineering
    • /
    • v.14 no.4
    • /
    • pp.450-462
    • /
    • 2009
  • So far, many methods for segmenting single images or video have been proposed, but few methods have dealt with multiple images with analogous content. These images, which we term consistent scene images, include concurrent images of a scene and gathered images of a similar foreground, and may be collectively utilized to describe a scene or as input images for multi-view stereo. In this paper, we present a method to segment these images with minimum user input, specifically, manual segmentation of one image, by iteratively propagating information via multi-level cues with adaptive confidence depending on the nature of the images. Propagated cues are used as the bases to compute multi-level potentials in an MRF framework, and segmentation is done by energy minimization. Both cues and potentials are classified as low, mid, and high level based on whether they pertain to pixels, patches, or shapes. A major aspect of our approach is utilizing mid-level cues to compute low- and mid-level potentials, and high-level cues to compute low-, mid-, and high-level potentials, thereby making use of inherent information. Through this process, the proposed method attempts to maximize the amount of both extracted and utilized information in order to maximize the consistency of the segmentation. We demonstrate the effectiveness of the proposed method on several sets of consistent scene images and provide a comparison with results based only on mid-level cues [1].
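
For readers unfamiliar with MRF energy minimization, the sketch below segments a synthetic image with pixel-level (low-level) unary costs plus a smoothness prior, minimized by a simple parallel ICM scheme; it only illustrates the general mechanism, not the paper's multi-level potentials or solver.

```python
# Minimal binary MRF segmentation sketch (unary + pairwise energy, parallel ICM);
# data and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(0)
h, w = 60, 80
truth = np.zeros((h, w), dtype=int)
truth[15:45, 20:60] = 1                          # synthetic foreground mask
obs = truth + 0.8 * rng.standard_normal((h, w))  # noisy observation

# Unary (data) cost of assigning label 0 or 1 to every pixel.
unary = np.stack([(obs - 0.0) ** 2, (obs - 1.0) ** 2])

lam = 1.5                                        # smoothness weight
labels = (obs > 0.5).astype(int)                 # initial labelling
for _ in range(10):                              # parallel ICM sweeps
    # Number of 4-neighbours currently labelled 1 (wrap-around borders, fine for a sketch).
    ones_nb = sum(np.roll(labels, s, axis=a)
                  for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
    pairwise = np.stack([ones_nb, 4 - ones_nb])  # disagreements if the pixel were 0 / 1
    labels = np.argmin(unary + lam * pairwise, axis=0)

print("pixel accuracy vs. ground truth:", (labels == truth).mean())
```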

Sensing NO3-N and K Ions in Hydroponic Solution Using Ion-Selective Membranes (이온선택성 멤브레인을 이용한 양액 내 질산태 질소 및 칼륨 측정)

  • Kim, Won-Kyung;Park, Tu-San;Kim, Young-Joo;Roh, Mi-Young;Cho, Seong-In;Kim, Hak-Jin
    • Journal of Biosystems Engineering
    • /
    • v.35 no.5
    • /
    • pp.343-349
    • /
    • 2010
  • Rapid on-site sensing of nitrate-nitrogen and potassium ions in hydroponic solution would increase the efficiency of nutrient use for greenhouse crops cultivated in closed hydroponic systems while reducing the potential for environmental pollution of water and soil. Ion-selective electrodes (ISEs) are a promising approach because of their small size, rapid response, and ability to directly measure the analyte. The capability of ISEs for sensing nitrate and potassium in hydroponic solution can be affected by the presence of other ions, such as calcium, magnesium, sulfate, sodium, and chloride, in the solution itself. This study was conducted to investigate the applicability of two ISEs, consisting of TDDA-NPOE and valinomycin-DOS PVC membranes, for quantitative determination of $NO_3$-N and K in hydroponic solution. Nine hydroponic solutions were prepared by diluting a highly concentrated paprika hydroponic solution to provide a concentration range of 3 to 400 mg/L for $NO_3$-N and K. Two of the calibration curves relating membrane response and nutrient concentration provided coefficients of determination ($R^2$) > 0.98 and standard errors of calibration (SEC) < 3.79 mV. The use of the direct potentiometry method, in conjunction with a one-point EMF compensation technique, was feasible for measuring $NO_3$-N and K in paprika hydroponic solution, owing to almost 1:1 relationships and high coefficients of determination ($R^2$ > 0.97) between the levels of $NO_3$-N and K obtained with the ion-selective electrodes and with standard instruments. However, even though there were strong linear relationships ($R^2$ > 0.94) between the $NO_3$-N and K concentrations determined by the Gran's plot-based multiple standard addition method and by standard instruments, hydroponic $NO_3$-N concentrations measured with the ISEs were, on average, about 10% higher than those obtained with the automated analyzer, whereas the K ISE predicted about 59% lower K than did the ICP spectrometer, probably because no compensation was made for the difference between the actual and expected concentrations of the directly prepared standard solutions.
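
A rough sketch of the calibration-and-prediction workflow described above (Nernst-type calibration line, one-point EMF compensation, inversion to concentration) is given below; all numbers are illustrative, not the paper's measurements.

```python
# Minimal ISE calibration sketch: fit EMF vs. log10(concentration), apply a
# one-point offset correction, then invert the line. Values are hypothetical.
import numpy as np

# Hypothetical calibration standards (mg/L) and measured electrode EMFs (mV),
# roughly -59 mV per decade as expected for a nitrate (anion) electrode.
conc_std = np.array([3, 10, 30, 100, 400], dtype=float)
emf_std = np.array([205.0, 175.5, 148.0, 117.5, 82.0])

slope, intercept = np.polyfit(np.log10(conc_std), emf_std, 1)

# One-point EMF compensation: shift the intercept so the line passes exactly
# through a freshly measured check standard (simple drift correction).
check_conc, check_emf = 30.0, 150.2
intercept += check_emf - (slope * np.log10(check_conc) + intercept)

def predict_conc(emf_mv):
    """Invert the calibration line to get concentration in mg/L."""
    return 10 ** ((emf_mv - intercept) / slope)

print(predict_conc(np.array([160.0, 120.0, 95.0])))
```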

Multi-Criteria Group Decision Making under Imprecise Preference Judgments : Using Fuzzy Logic with Linguistic Quantifier (불명료한 선호정보 하의 다기준 그룹의사결정 : Linguistic Quantifier를 통한 퍼지논리 활용)

  • Choi, Duke Hyun;Ahn, Byeong Seok;Kim, Soung Hie
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.3
    • /
    • pp.15-32
    • /
    • 2006
  • The increasing complexity of socio-economic environments makes it less and less possible for a single decision-maker to consider all relevant aspects of a problem. Therefore, many organizations employ groups in decision making. In this paper, we present a multiperson decision-making method using fuzzy logic with a linguistic quantifier for the case in which each group member specifies imprecise judgments, possibly both on the performance evaluations of alternatives with respect to multiple criteria and on the criteria themselves. Inexact or vague preferences have appeared in the decision-making literature with a view to relaxing the burden of preference specification imposed on decision-makers and thus taking into account the vagueness of human judgments. Allowing for these types of imprecise judgments in the model, however, makes it more difficult for a group to clearly select an alternative (or alternatives). Thus, further interactions with the decision-makers may be required, to the extent that they offset the initial convenience of imprecise preference specification; moreover, these interactions may not guarantee the selection of the best alternative to implement. To circumvent this deadlock, we present a procedure for obtaining a satisfying solution through linguistic-quantifier-guided aggregation, which embodies the notion of fuzzy majority. This approach combines a prescriptive decision method based on mathematical programming with a well-established approximate solution method for aggregating multiple objects.
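
The sketch below illustrates the quantifier-guided aggregation step (fuzzy majority via OWA weights derived from a linguistic quantifier such as "most"); the quantifier parameters and satisfaction degrees are illustrative only.

```python
# Minimal sketch of quantifier-guided OWA aggregation (Yager-style weights);
# the quantifier shape and the group's scores are hypothetical.
import numpy as np

def quantifier_most(r, a=0.3, b=0.8):
    """Piecewise-linear fuzzy quantifier for 'most' (a common choice of a, b)."""
    return np.clip((r - a) / (b - a), 0.0, 1.0)

def owa_weights(n, q):
    i = np.arange(1, n + 1)
    return q(i / n) - q((i - 1) / n)          # w_i = Q(i/n) - Q((i-1)/n)

def owa(values, weights):
    return np.sort(values)[::-1] @ weights    # weight values in descending order

# Degrees to which each of five decision-makers is satisfied with one alternative.
satisfaction = np.array([0.9, 0.7, 0.65, 0.4, 0.3])
w = owa_weights(len(satisfaction), quantifier_most)
print("weights:", np.round(w, 3),
      "| 'most'-guided group score:", round(owa(satisfaction, w), 3))
```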

The Development Method of IFC Extension Elements using Work Breakdown Structure in River Fields (작업분류체계를 활용한 하천분야 IFC 확장 개발방안)

  • Won, Jisun;Shin, Jaeyoung;Moon, Hyoun-Seok;Ju, Ki-Beom
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.19 no.4
    • /
    • pp.77-84
    • /
    • 2018
  • As the application of BIM (Building Information Modeling) to the civil sector has become practical, and mandatory for road projects, standardization and system development for the application and operation of BIM are required. In particular, it is important to develop BIM data standards for producing, sharing, and managing the lifecycle data of civil facilities, because these are typically national public facilities. BIM data standards have been developed by utilizing or extending IFC (Industry Foundation Classes), an international standard, but a schema extension for river facilities has not yet been developed. This study proposes an approach to an IFC extension for river facilities using the WBS (Work Breakdown Structure), as a fundamental study for IFC-based schema extension in the river field. For this purpose, the research was carried out as follows. First, an IFC extension development method suited to representing river facilities was selected by analyzing the existing IFC structure and previous IFC extension research. Second, extended elements of the river facilities were identified through an analysis of the WBS and classified according to the high-level structure of the IFC schema. Third, the classified elements were arranged based on the IFC hierarchy, and the IFC schema extension for river facilities was established. Based on the suggested IFC schema extension method, this study developed the schema by defining the element components and parts of river facilities, such as distribution flow elements, and deriving their detailed types and properties.
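
The sketch below is a purely illustrative rendering of the classification step, in which WBS-derived river-facility components are grouped under high-level IFC entities before new subtypes are proposed; the element and entity names are hypothetical placeholders, not the schema developed in the paper.

```python
# Illustrative sketch only: mapping WBS work items to IFC supertypes and to
# hypothetical extended entity names (not the paper's actual extension).
from dataclasses import dataclass

@dataclass
class ExtensionCandidate:
    wbs_item: str          # work item taken from the river-field WBS (hypothetical)
    ifc_supertype: str     # high-level IFC entity it might specialize
    proposed_entity: str   # hypothetical new (extended) entity name

candidates = [
    ExtensionCandidate("Levee embankment", "IfcCivilElement",            "IfcLevee"),
    ExtensionCandidate("Sluice gate",      "IfcDistributionFlowElement", "IfcSluiceGate"),
    ExtensionCandidate("Revetment block",  "IfcCivilElement",            "IfcRevetment"),
]

for c in candidates:
    print(f"{c.wbs_item:18s} -> {c.ifc_supertype} :: {c.proposed_entity}")
```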

Efficient Algorithms for Multicommodity Network Flow Problems Applied to Communications Networks (다품종 네트워크의 효율적인 알고리즘 개발 - 정보통신 네트워크에의 적용 -)

  • 윤석진;장경수
    • The Journal of Information Technology
    • /
    • v.3 no.2
    • /
    • pp.73-85
    • /
    • 2000
  • Efficient algorithms are suggested in this study for solving multicommodity network flow problems applied to communications systems. These are typical NP-complete optimization problems that require integer solutions and whose computational complexity grows rapidly with problem size. Although the suggested algorithms are not guaranteed to be optimal, they are designed to be computationally efficient and to produce near-optimal, primal integer solutions. We supplement the traditional Lagrangian method with a price-directive decomposition, which proceeds as follows. First, a primal heuristic from which good initial feasible solutions can be obtained is developed. Second, the dual is initialized using marginal values from the primal heuristic. Generally, the Lagrangian optimization is started from a naive dual solution, $\lambda = 0$; in that case the dual optimization converges very slowly because such values are far from the optimum. Better dual solutions improve the primal solution, and better primal bounds improve the step size used by the dual optimization. Third, a limitation of the Lagrangian decomposition approach is dealt with: because the method is dual-based, the solution need not converge to the optimal solution of the multicommodity network problem. To adjust the relaxed solution to a feasible one, we devised an efficient re-allocation heuristic. In addition, the computational performance of various versions of the developed algorithms is compared and evaluated. Commercial LP software, the LINGO 4.0 extended version for the LINDO system, is utilized for a robust and efficient implementation, and test problem sets are generated randomly. Numerical results on the randomly generated examples demonstrate that our algorithm is near-optimal (< 2% from the optimum) and quite computationally efficient.
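
The sketch below illustrates the price-directive Lagrangian decomposition on a toy network: capacity constraints are relaxed, each commodity is routed by a shortest path under cost-plus-price arc lengths, and the prices are updated by a subgradient step. The paper's primal heuristic, dual initialization, and re-allocation heuristic are not reproduced.

```python
# Minimal Lagrangian-relaxation sketch for a toy multicommodity flow problem,
# assuming the networkx package; the network and demands are illustrative.
import networkx as nx

# arc -> (unit cost, capacity); the arc ("m", "n") is the shared bottleneck.
arcs = {("s1", "m"): (1.0, 10), ("s2", "m"): (1.0, 10), ("m", "n"): (1.0, 5),
        ("n", "t1"): (1.0, 10), ("n", "t2"): (1.0, 10),
        ("s1", "t1"): (5.0, 10), ("s2", "t2"): (5.0, 10)}
commodities = [("s1", "t1", 4.0), ("s2", "t2", 4.0)]   # (origin, destination, demand)

prices = {a: 0.0 for a in arcs}                        # multipliers on capacity constraints
flow = {a: 0.0 for a in arcs}
for it in range(50):
    G = nx.DiGraph()
    for (u, v), (cost, cap) in arcs.items():
        G.add_edge(u, v, weight=cost + prices[(u, v)])
    flow = {a: 0.0 for a in arcs}
    for orig, dest, demand in commodities:             # decomposed per-commodity subproblem
        path = nx.shortest_path(G, orig, dest, weight="weight")
        for u, v in zip(path, path[1:]):
            flow[(u, v)] += demand
    step = 1.0 / (it + 1)                              # diminishing subgradient step size
    for a, (cost, cap) in arcs.items():
        prices[a] = max(0.0, prices[a] + step * (flow[a] - cap))

print("arc prices:", {a: round(p, 2) for a, p in prices.items() if p > 0})
print("arc flows :", {a: f for a, f in flow.items() if f > 0})
# The relaxed flows may still violate capacities; a re-allocation step like the
# paper's heuristic would be needed to repair them into a feasible solution.
```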

Liquefaction-Induced Uplift of Geotechnical Buried Structures: Centrifuge Modeling and Seismic Performance-Based Design (지반 액상화에 의한 지중 매설구조물의 부상: 원심모형시험 및 내진성능설계)

  • Kang, Gi-Chun;Iai, Susumu
    • Journal of the Korean Geotechnical Society
    • /
    • v.28 no.10
    • /
    • pp.5-16
    • /
    • 2012
  • Relatively lightweight buried geotechnical structures have suffered uplift damage due to liquefaction in past earthquakes. The factor-of-safety approach of Koseki et al. (1997a), which is widely used in seismic design, predicts the triggering of uplift; however, a method for "quantitative" estimates of the uplift displacement has yet to be established. Estimation of the uplift displacement may be an important factor to consider when designing underground structures within the framework of performance-based design (ISO23469, 2005). Therefore, evaluation of the uplift displacement of buried structures in liquefied ground during earthquakes is needed for performance-based design as a practical application. In order to predict the uplift displacement quantitatively, a simplified method is derived based on the equilibrium of vertical forces acting on buried structures in backfill during earthquakes (Tobita et al., 2012). The method is verified through comparisons with results of centrifuge model tests and with sewerage systems damaged after the 2004 Niigata-ken Chuetsu, Japan, earthquake. The proposed flow diagram for performance-based design includes estimation of the uplift displacement as well as the liquefaction limit of the backfill.
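
The sketch below shows the kind of vertical-force balance such a simplified method rests on: the uplift force from pore-water pressure (hydrostatic plus a liquefaction-induced excess component) under a buried conduit is compared with the resisting weight of the structure and its soil cover. It is a generic, textbook-style balance with hypothetical numbers, not the specific formulation of Koseki et al. (1997a) or Tobita et al. (2012).

```python
# Illustrative uplift factor-of-safety sketch; simplified assumptions (water
# table at ground surface, rectangular conduit, per-metre-run quantities).
GAMMA_W = 9.81      # unit weight of water, kN/m^3
GAMMA_SAT = 18.0    # saturated unit weight of backfill, kN/m^3 (hypothetical)

def uplift_safety_factor(width, height, cover_depth, structure_weight, ru):
    """Fs against uplift per unit length; ru is the excess pore pressure ratio
    (1.0 corresponds to full liquefaction of the backfill)."""
    depth_base = cover_depth + height
    hydrostatic = GAMMA_W * depth_base
    excess = ru * (GAMMA_SAT - GAMMA_W) * depth_base            # liquefaction-induced
    uplift = (hydrostatic + excess) * width                     # base pressure x width
    resisting = structure_weight + GAMMA_SAT * cover_depth * width  # structure + cover soil
    return resisting / uplift

# 3 m x 3 m conduit with 2 m of cover and a self-weight of 60 kN per metre run.
for ru in (0.0, 0.5, 1.0):
    print(f"ru = {ru:.1f} -> Fs = {uplift_safety_factor(3.0, 3.0, 2.0, 60.0, ru):.2f}")
```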

An Adaptive Time Delay Estimation Method Based on Canonical Correlation Analysis (정준형 상관 분석을 이용한 적응 시간 지연 추정에 관한 연구)

  • Lim, Jun-Seok;Hong, Wooyoung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.32 no.6
    • /
    • pp.548-555
    • /
    • 2013
  • The localization of sources has numerous applications. To estimate the position of a source, the relative delay between two or more received signals of the direct signal must be determined. Although the generalized cross-correlation method is the most popular technique, an approach based on eigenvalue decomposition (EVD), which utilizes the eigenvector associated with the minimum eigenvalue, is also popular. The performance of the EVD-based method degrades at low SNR and in correlated noise environments, because it is difficult to select a single eigenvector for the minimum eigenvalue. In this paper, we propose a new adaptive algorithm based on Canonical Correlation Analysis (CCA) in order to extend the operating range to lower SNRs and correlated noise environments. The proposed algorithm uses the eigenvector corresponding to the maximum eigenvalue in the generalized eigenvalue decomposition (GEVD). The estimated eigenvector contains all the information needed for time delay estimation. We have performed simulations with uncorrelated and correlated noise at several SNRs, showing that the CCA-based algorithm can estimate time delays more accurately than the adaptive EVD algorithm.
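
The sketch below illustrates the core idea in batch form: tapped-delay-line frames of the two sensor signals are formed, CCA is posed as a generalized eigenvalue problem, and the delay is read off the canonical weight vectors belonging to the largest eigenvalue. The paper's adaptive (recursive) algorithm is not reproduced.

```python
# Minimal batch sketch of CCA-based time-delay estimation on synthetic data,
# assuming NumPy/SciPy; signal and noise parameters are illustrative.
import numpy as np
from scipy.linalg import block_diag, eigh

rng = np.random.default_rng(0)
N, true_delay, L = 20000, 3, 8
s = rng.standard_normal(N)
x1 = s + 0.1 * rng.standard_normal(N)
x2 = np.roll(s, true_delay) + 0.1 * rng.standard_normal(N)   # x2[n] = s[n - d] + noise

def frames(x, L):
    """Rows are [x[n], x[n-1], ..., x[n-L+1]] for n = L-1 .. N-1."""
    return np.stack([x[L - 1 - k: len(x) - k] for k in range(L)], axis=1)

U, V = frames(x1, L), frames(x2, L)
Ruu, Rvv, Ruv = U.T @ U / len(U), V.T @ V / len(V), U.T @ V / len(U)

# CCA as a GEVD: the eigenvector of the LARGEST eigenvalue gives the canonical
# weight pair (a, b); their peaks are offset by the relative delay.
A = np.block([[np.zeros((L, L)), Ruv], [Ruv.T, np.zeros((L, L))]])
B = block_diag(Ruu, Rvv)
_, vecs = eigh(A, B)                      # eigenvalues in ascending order
a, b = vecs[:L, -1], vecs[L:, -1]

lags = np.arange(-(L - 1), L)
d_hat = lags[np.argmax(np.correlate(a, b, mode="full"))]
print("estimated delay:", d_hat)          # expected: 3
```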