• Title/Summary/Keyword: Information System Integration


Exploring Influence of Network Structure, Organizational Learning Culture, and Knowledge Management Participation on Individual Creativity and Performance: Comparison of SI Proposal Team and R&D Team (네트워크 구조와 조직학습문화, 지식경영참여가 개인창의성 및 성과에 미치는 영향에 관한 실증분석: SI제안팀과 R&D팀의 비교연구)

  • Lee, Kun-Chang;Seo, Young-Wook;Chae, Seong-Wook;Song, Seok-Woo
    • Asia Pacific Journal of Information Systems
    • /
    • v.20 no.4
    • /
    • pp.101-123
    • /
    • 2010
  • Recently, firms have been operating a number of teams to accomplish organizational performance. Ad hoc teams like proposal preparation teams are quite different from permanent teams like R&D teams in how they form network structures and deal with organizational learning culture and knowledge management participation. Moreover, depending on team characteristics, individual creativity will differ across teams, which eventually leads to differences in organizational performance. Previous studies in the field of creativity have paid little attention to this issue, so the main objectives of this study are as follows. First, the question of how to improve individual creativity and organizational performance is analyzed empirically, separately for the two team types, ad hoc and permanent. The antecedents adopted for this objective are cultural and knowledge factors, namely organizational learning culture and knowledge management participation. Second, network structure measures such as degree centrality and structural holes are used to analyze their influence on individual creativity and organizational performance. SI (System Integration) companies face severely tough requirements from clients to submit very creative proposals. R&D teams, meanwhile, are widely regarded as relatively creative teams because they are responsible for suggesting innovative techniques that keep their companies competitive in the market. SI proposal teams are usually ad hoc, while R&D teams are, on average, permanent. By taking advantage of these characteristics of the two kinds of teams, we examine the proposed research questions. To obtain the survey data, we accessed 7 SI teams (74 members) and 6 R&D teams (63 members), collecting 137 valid questionnaires. The PLS technique was applied to analyze the survey data. Results are as follows.
First, in the case of SI teams, organizational learning culture affects individual creativity significantly. Meanwhile, knowledge management participation has a significant influence on individual creativity for the permanent teams. Second, degree centrality influences individual creativity significantly in the case of SI teams. This contrasts with the fact that structural holes have a significant impact on individual creativity for the R&D teams. Practical implications can be summarized as follows. First, the network structure of an ad hoc team should be designed differently from that of a permanent team. An ad hoc team is expected to show high creativity within a rather short period, implying that network density among team members should be improved, and members with high degree centrality should be encouraged to show their individual creativity and take a leading role by getting heavily engaged in knowledge sharing and diffusion. In contrast, a permanent team should be designed to take advantage of structural holes instead of focusing on network density. Since structural holes can be utilized very effectively in a permanent team, the merits of strong brokers increase, which helps improve both network efficiency and effectiveness. In this way, individual creativity in the permanent team is likely to lead to organizational creativity in a seamless way. Second, ways of increasing individual creativity should be sought from the perspective of organizational culture and knowledge management. The organization should provide a cultural atmosphere in which innovative idea suggestions and active discussion among team members are encouraged. In this way, trust builds up among team members, facilitating the formation of an organizational learning culture. Third, in the ad hoc team, the organizational learning culture should be built in such a way that individual creativity can grow quickly in a rather short period.
Since time is tight, a reasonable compensation policy, the leader's initiatives, and learning culture formation should be established in a short period so that mutual trust is built among members quickly and the necessary knowledge and information can be learned rapidly. Fourth, in the permanent team, it should be kept in mind that the degree of participation in knowledge management determines the level of individual creativity. Therefore, the team ought to facilitate the knowledge circulation process, including knowledge creation, storage, sharing, utilization, and learning among team members, which will lead to team performance. In this way, firms should manage knowledge networks in permanent teams and ad hoc teams as described above so that individual creativity as well as team performance can be maximized.
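The two network measures the study relies on, degree centrality and structural holes (via Burt's constraint), can be computed directly from a team's tie structure. The sketch below is a minimal pure-Python illustration on an invented five-member network, not the study's survey data; `constraint` follows Burt's formula with equal tie weights.

```python
def degree_centrality(adj):
    """Degree centrality: a node's tie count divided by (n - 1)."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def constraint(adj, i):
    """Burt's constraint for node i (lower = more structural holes).
    c_ij = (p_ij + sum_q p_iq * p_qj)^2 summed over neighbours j,
    where p_ij is the proportion of i's ties invested in j."""
    nbrs = adj[i]
    if not nbrs:
        return float("nan")
    p = {j: 1 / len(nbrs) for j in nbrs}   # equal tie weights
    total = 0.0
    for j in nbrs:
        indirect = sum(p[q] * (1 / len(adj[q])) for q in nbrs
                       if q != j and j in adj[q])
        total += (p[j] + indirect) ** 2
    return total

# Toy team network: A bridges two otherwise separate pairs (a broker),
# while B, C and D, E each sit in a dense local cluster.
adj = {
    "A": {"B", "C", "D", "E"},
    "B": {"A", "C"}, "C": {"A", "B"},
    "D": {"A", "E"}, "E": {"A", "D"},
}
dc = degree_centrality(adj)
print(dc["A"])               # A has the highest degree centrality (1.0)
print(constraint(adj, "A"))  # broker A: lower constraint, more holes
print(constraint(adj, "B"))  # B is locked into its cluster: higher constraint
```

A member like A matches the ad hoc team prescription (high degree centrality, dense local sharing), while a low-constraint broker is exactly the arbitrator position the permanent-team prescription exploits.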

Improvement Plan to Facilitate a Landscape Architectural Promotion Facility and Complex System (조경진흥시설과 조경진흥단지 제도 활성화 방안 연구)

  • Kim, Yong-Gook;Kim, Shin-Sung
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.46 no.1
    • /
    • pp.9-16
    • /
    • 2018
  • Landscape architecture is an indispensable professional service in building sustainable land and urban environments. The landscape architecture industry is closely related to the promotion of the health and welfare of the people, urban revitalization, and residential environment improvement, as well as job creation. Despite the various public interest values of landscape architecture, the growth engine of the landscape architecture industry, which is supposed to improve the quality of landscape services, has stagnated. In 2015, the Landscape Architecture Promotion Act was enacted to provide a landscape architectural promotion facility and complex system to support revitalization through the integration of the landscape architecture industry. The purpose of this study is to suggest an improvement plan to enhance the effectiveness of the landscape architectural promotion facility and complex system. The results of the analysis are as follows. First, workers and experts in landscape architecture recognized the need for policies and projects to promote the landscape architecture industry. Second, the industrial types suitable for a landscape architectural promotion facility were landscape design, landscape maintenance and management, and the landscape construction industry, while the types suitable for a landscape architectural promotion complex were the production and distribution of landscape trees and landscape facilities. Third, the expected effect of the designation of a landscape architectural promotion facility was 'the increase of business opportunities through the expansion of the network', whereas that of a landscape architectural promotion complex was 'the activation of various information sharing'. Fourth, 'the size of the local government landscape architecture industry and the capacity to cultivate it' was the most important among the designation criteria for a landscape architectural promotion facility.
As for the landscape architectural promotion complex, the 'feasibility of the promotion plan' was the most crucial. Fifth, 'tax benefits and deductible exemptions' were considered a necessary support measure for the activation of the landscape architectural promotion facility, while 'maintenance and management fee support' was recognized in the case of the landscape architectural promotion complex.

Analysis of Mutant Chinese Cabbage Plants Using Gene Tagging System (Gene Tagging System을 이용한 돌연변이 배추의 분석)

  • Yu, Jae-Gyeong;Lee, Gi-Ho;Lim, Ki-Byung;Hwang, Yoon-Jung;Woo, Eun-Taek;Kim, Jung-Sun;Park, Beom-Seok;Lee, Youn-Hyung;Park, Young-Doo
    • Horticultural Science & Technology
    • /
    • v.28 no.3
    • /
    • pp.442-448
    • /
    • 2010
  • The objectives of this study were to analyze mutant lines of Chinese cabbage (Brassica rapa ssp. pekinensis) using a gene tagging system (plasmid rescue and inverse polymerase chain reaction) and to observe their phenotypic characteristics. Insertional mutants were derived by transferring the transfer DNA (T-DNA) of Agrobacterium for a functional genomics study in Chinese cabbage. Hypocotyls of Chinese cabbage 'Seoul' were used to obtain transgenic plants with Agrobacterium tumefaciens harboring the pRCV2 vector. To tag the T-DNA in the Chinese cabbage genomic DNA, plasmid rescue and inverse PCR were applied to multiple-copy and single-copy insertional mutants. These techniques were successfully applied to Chinese cabbage plants with high efficiency, and as a result, the T-DNA of the pRCV2 vector showed various distinct integration patterns in the transgenic plant genomes. Polyploidy level analysis showed that the change in phenotypic characteristics of 13 mutant lines was not due to variation in somatic chromosome number. Compared with the wild type, the T1 progenies showed varied phenotypes, such as decreased stamen numbers, larger or smaller flowers, upright growth habit, hairless leaves, chlorosis symptoms, narrow leaves, and deeply serrated leaves. Mutants that showed distinct phenotypic differences from the wild type, carried one copy of T-DNA by Southern blot analysis, and had a chromosome number of 2n = 20 were selected. The flanking DNA of these selected mutant lines was sequenced, their genomic loci were mapped, and the genome information of the lines is being recorded in a specially developed database.
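The tagging logic behind inverse PCR, recovering the unknown genomic sequence that flanks a known insert from a self-ligated (circular) restriction fragment, can be illustrated with a toy string model. The border motif and fragment below are invented for illustration and are not the actual pRCV2 sequences.

```python
# Hypothetical T-DNA border motif, for illustration only (not from pRCV2).
T_DNA_BORDER = "GGCAGGATATATTGTGGTG"

def flanking_from_circle(circular_fragment, border=T_DNA_BORDER):
    """Inverse PCR sketch: the digested genomic fragment self-ligates into
    a circle, and outward-facing primers inside the known T-DNA amplify
    around the circle, so the product reads from the T-DNA end into the
    unknown genomic flank. We model that by rotating the circle to the
    border and returning the sequence that follows it."""
    doubled = circular_fragment * 2            # simulate circularity
    i = doubled.find(border)
    if i == -1:
        return None                            # border not present
    start = i + len(border)
    return doubled[start:start + len(circular_fragment) - len(border)]

# Toy circle: genomic flank on both sides of the known border.
circle = "AACCTT" + T_DNA_BORDER + "GATTACAGATTACA"
print(flanking_from_circle(circle))   # flank downstream, then upstream, of the border
```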

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms. It finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention from the machine learning and artificial intelligence fields because of its remarkable performance improvement and flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. Among these studies, DT ensembles have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown improvements as remarkable as those of DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when its classifiers are highly correlated, resulting in a multicollinearity problem that leads to performance degradation of the ensemble, and they have proposed differentiated learning strategies to cope with this problem. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) showed that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable performance improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers. Therefore, ensembles of unstable learning algorithms can guarantee some diversity among the classifiers.
To the contrary, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, and thus the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared bankruptcy prediction performance on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT; meanwhile, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically proves that the performance degradation of the ensemble is due to the multicollinearity problem, and it proposes that optimization of the ensemble is needed to cope with such a problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package called Evolver.
Experiments on company failure prediction have shown that CO-NN is effective for the stable performance enhancement of NN ensembles through a choice of classifiers that considers the correlations within the ensemble. The classifiers with potential multicollinearity problems are removed by the coverage optimization process of CO-NN, and thereby CO-NN has shown higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered in further research. Second, various learning strategies to deal with data noise should be introduced in more advanced future research.
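A minimal sketch of the kind of GA-based coverage optimization described above, using synthetic classifier outputs rather than the paper's bankruptcy data, and a simple crossover/bit-flip GA in place of the Evolver package. The VIF of a candidate sub-ensemble is read off the diagonal of the inverse correlation matrix of its members' outputs, and chromosomes exceeding an assumed VIF cap are penalized in the fitness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 8 base classifiers' scores on 200 cases.
# Classifiers 0-3 are near-duplicates (high correlation -> high VIF);
# classifiers 4-7 are independent and moderately accurate.
y = rng.integers(0, 2, 200)
base = y + rng.normal(0, 0.8, 200)
outputs = np.empty((8, 200))
for k in range(4):
    outputs[k] = base + rng.normal(0, 0.05, 200)   # correlated clones
for k in range(4, 8):
    outputs[k] = y + rng.normal(0, 0.9, 200)       # diverse classifiers

def max_vif(subset):
    """Largest VIF among the selected outputs: diagonal of the
    inverse correlation matrix."""
    X = outputs[subset]
    if len(X) < 2:
        return 1.0
    return float(np.diag(np.linalg.inv(np.corrcoef(X))).max())

def accuracy(subset):
    vote = (outputs[subset].mean(axis=0) > 0.5).astype(int)
    return float((vote == y).mean())

def fitness(bits, vif_cap=10.0):
    subset = [i for i, b in enumerate(bits) if b]
    if not subset:
        return 0.0
    acc = accuracy(subset)
    return acc if max_vif(subset) <= vif_cap else acc - 1.0  # VIF constraint

# Tiny GA: tournament selection, one-point crossover, bit-flip mutation.
pop = rng.integers(0, 2, (30, 8))
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    new = []
    for _ in range(len(pop)):
        a, b = rng.integers(0, len(pop), 2)
        p1 = pop[a] if scores[a] >= scores[b] else pop[b]
        a, b = rng.integers(0, len(pop), 2)
        p2 = pop[a] if scores[a] >= scores[b] else pop[b]
        cut = rng.integers(1, 8)
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(8) < 0.05
        new.append(np.where(flip, 1 - child, child))
    pop = np.array(new)

best = max(pop, key=fitness)
chosen = [i for i, b in enumerate(best) if b]
print(chosen, accuracy(chosen), max_vif(chosen))
```

The clone block (0-3) exhibits exactly the multicollinearity the paper measures, so the VIF penalty steers the search toward sub-ensembles built from the diverse classifiers.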

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first approach is to use application specific integrated circuit (ASIC) technology, implementing the fuzzy inference method directly in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both using a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM memory. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM).
On-chip fuzzification operation by a table lookup method. On-chip defuzzification operation by a centroid method. Reconfigurable architecture for processing two rule formats. RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, and researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer. The programmer treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules by the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we can achieve as high as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a small, fixed set of programs, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured speed of the inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds of 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes; an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME WITH 51 RULES

                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences  125 s                  49 s                        0.0038 s
    1 inference      20.8 ms                8.2 ms                      6.4 µs
    FLIPS            48                     122                         156,250
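The max-min inference mechanism the chips implement, min for the fuzzy AND of antecedents and max for combining rule outputs, plus centroid defuzzification, can be sketched in software over 64-element membership arrays (the same fuzzy-set resolution as the measurements above). This is an illustrative simulation with invented rules, not the chip's actual microarchitecture.

```python
# Minimal Mamdani-style max-min inference over 64-element membership
# arrays, mirroring the rule format "IF A and B THEN Do E" described above.
N = 64

def tri(center, width):
    """Triangular membership function sampled at N points on [0, 1)."""
    return [max(0.0, 1.0 - abs(i / N - center) / width) for i in range(N)]

def infer(rules, a_val, b_val):
    """rules: list of (mu_A, mu_B, mu_E). The firing strength is the min
    of the antecedent memberships at the crisp inputs (fuzzy AND); each
    consequent is clipped by min and the rules are combined by max."""
    ia, ib = int(a_val * N), int(b_val * N)
    out = [0.0] * N
    for mu_a, mu_b, mu_e in rules:
        w = min(mu_a[ia], mu_b[ib])                           # fuzzy AND
        out = [max(o, min(w, e)) for o, e in zip(out, mu_e)]  # fuzzy union
    return out

def centroid(mu):
    """Centroid defuzzification, normalized to [0, 1)."""
    s = sum(mu)
    return sum(i * m for i, m in enumerate(mu)) / (s * N) if s else 0.5

rules = [
    (tri(0.2, 0.3), tri(0.2, 0.3), tri(0.2, 0.3)),   # low,  low  -> low
    (tri(0.8, 0.3), tri(0.8, 0.3), tri(0.8, 0.3)),   # high, high -> high
]
print(centroid(infer(rules, 0.2, 0.25)))   # close to 0.2
print(centroid(infer(rules, 0.8, 0.75)))   # close to 0.8
```

The inner min/max sweep over the 64-element arrays is exactly the loop that dedicated min and max instructions would accelerate on a RISC core.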


Trends of Study and Classification of Reference on Occupational Health Management in Korea after Liberation (해방 이후 우리나라 산업보건관리에 관한 문헌분류 및 연구동향)

  • Ha, Eun-Hee;Park, Hye-Sook;Kim, Young-Bok;Song, Hyun-Jong
    • Journal of Preventive Medicine and Public Health
    • /
    • v.28 no.4 s.51
    • /
    • pp.809-844
    • /
    • 1995
  • The purposes of this study are to define the scope of occupational health management and to classify occupational health management through a review of related journals from 1945 to 1994 in Korea. The steps of this study were as follows: (1) search of secondary references; (2) collection and review of primary references; (3) survey; and (4) analysis and discussion. The results were as follows. 1. Most of the respondents majored in occupational health (71.6%), were working in universities (68.3%), and were male and over the age of 40. Seventy percent of the respondents agreed with the idea that a classification of occupational health management is necessary, and 10% disagreed. 2. After integrating the ideas of the respondents, we reclassified the scope of occupational health management into three parts: occupational health systems, occupational health services, and others (such as assessment, epidemiology, cost-effectiveness analysis, and so on). 3. The number of journal articles on occupational health management was 510. It increased slightly from 1986 and abruptly after 1991. The journals publishing articles related to occupational health management were The Korean Journal of Occupational Medicine (18.2%), various medical college journals (17.0%), The Korean Journal of Occupational Health (15.1%), The Korean Journal of Preventive Medicine (15.1%), and others (34.6%). As for the contents, the number of articles on occupational health management systems was 33 (6.5%) and on occupational health services 477 (93.5%). Of the articles on occupational health management systems, the number on the occupational health resource system was 15 (45.5%), the occupational finance system 8 (24.2%), the occupational health management system 6 (18.2%), occupational organization 3 (9.1%), and the occupational health delivery system 1 (3.0%).
Of the articles on occupational health services, the number on disease management was 269 (57.2%), health management 116 (24.7%), and working environment management 85 (18.1%). As for the subjects, the number of articles on general workers was 185 (71.1%), followed by women workers, white collar workers, and so on. 4. Respondents made occupational health services (such as health management, working environment management, and health education) the first priority of occupational health management. Tied for second were quality analysis (such as the education, training, and job contents of occupational health managers) and occupational health systems (such as the recommendation of systems for occupational and general disease and occupational health organization). 5. Thirty-seven respondents suggested 48 ideas about future research on occupational health management, as follows: (1) study of occupational health services, 40.5%; (2) study of organization systems, 27.1%; (3) study of occupational health systems (e.g., information networks), 8.3%; (4) study of working conditions, 6.2%; and (5) study of occupational health service analysis, 4.2%.


Soil Loss and Pollutant Load Estimation in Sacheon River Watershed using a Geographic Information System (GIS를 이용한 동해안 하천유역의 토양유실량과 오염부하량 평가 -사천천을 중심으로-)

  • Cho, Jae-Heon;Yeon, Je-Chul
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.22 no.7
    • /
    • pp.1331-1343
    • /
    • 2000
  • Through the integration of the USLE and GIS, a methodology to estimate soil loss was developed and applied to the Sacheon River in Gangrung. Using GIS, spatial analyses such as watershed boundary determination, flow routing, and slope steepness calculation were done, and the resulting spatial information was assigned to each grid cell. With soil and land use maps, information about soil classification and land use was also assigned to each grid cell. Based upon these data, thematic maps for the factors of the USLE were made, and soil loss was estimated by overlaying the thematic maps. In this manner, we can assess the degree of soil loss for each grid cell using GIS. The annual average soil loss of the Sacheon River watershed is 1.36 ton/ha/yr. Soil loss in forest, dry field, and paddy field is 0.15 ton/ha/yr, 27.04 ton/ha/yr, and 0.78 ton/ha/yr, respectively. The area of dry field, which is 4% of the total area, is 2.4 km². But the total soil loss from dry fields is 6,561 ton/yr, which accounts for 84.9% of the total soil loss eroded in the Sacheon River watershed. Compared with the average soil loss tolerance of 11.2 ton/ha/yr for cropland, countermeasures against soil loss in dry fields are necessary. Run-off and water quality of the Sacheon River were measured twice in the flood season: from July 24 to July 28, 1998, and from September 29 to October 1. As the run-off of the river increased, SS, TN, and TP concentrations and pollutant loadings increased. The SS, TN, and TP loads of the Sacheon River discharged during the two heavy rains were 21%, 39%, and 19% of the total pollutant loadings generated in the Sacheon River watershed in one year. We can see that a large share of the pollutant load is discharged in the short flood season.
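The grid-based USLE overlay described above multiplies the factor maps cell by cell, A = R · K · LS · C · P, and aggregates by land use. The sketch below uses a 2×2 toy grid with invented factor values (not the Sacheon calibration) to show the mechanics; note how the dry-field cover factor dominates the totals, mirroring the study's finding.

```python
import numpy as np

# Grid-based USLE sketch: A = R * K * LS * C * P per cell (ton/ha/yr).
# All factor values below are illustrative, not the study's calibrated ones.
R = 250.0                                    # rainfall erosivity, uniform
K = np.array([[0.25, 0.25], [0.30, 0.20]])   # soil erodibility per cell
LS = np.array([[0.8, 1.5], [2.0, 0.5]])      # slope length-steepness factor
# Cover factor from a land-use map: forest is far less erodible than dry field.
landuse = np.array([["forest", "dryfield"], ["dryfield", "paddy"]])
C = np.vectorize({"forest": 0.003, "dryfield": 0.30, "paddy": 0.01}.get)(landuse)
P = np.ones_like(K)                          # no conservation practice

A = R * K * LS * C * P                       # soil loss per cell, ton/ha/yr
cell_ha = 4.0                                # e.g. a 200 m x 200 m cell
total = float((A * cell_ha).sum())           # watershed total, ton/yr

by_use = {u: float(A[landuse == u].mean()) for u in ("forest", "dryfield", "paddy")}
print(A)
print(by_use)    # dry field dominates, as in the Sacheon results
```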


Change Detection of land-surface Environment in Gongju Areas Using Spatial Relationships between Land-surface Change and Geo-spatial Information (지표변화와 지리공간정보의 연관성 분석을 통한 공주지역 지표환경 변화 분석)

  • Jang Dong-Ho
    • Journal of the Korean Geographical Society
    • /
    • v.40 no.3 s.108
    • /
    • pp.296-309
    • /
    • 2005
  • In this study, we investigated future land-surface change and its relationship with geo-spatial information, using a Bayesian prediction model based on a likelihood ratio function, to analyse land-surface change in the Gongju area. We classified land-surface satellite images and then extracted the changed areas using post-classification comparison. Land-surface information related to the change was constructed in a GIS environment, and a land-surface change prediction map was made using the likelihood ratio function. As the results of this study, the thematic maps that strongly influence land-surface change in rural or urban areas are elevation, water system, population density, roads, population movement, the number of establishments, land price, etc. The thematic maps that strongly influence land-surface change in forest areas are elevation, slope, population density, population movement, land price, etc. The land-surface change analysis shows that the growth of the old and new downtown centers is concentrated near the Gum River, and the downtown area will spread around the local roads and interchange areas. In the case of agricultural areas, small tributaries of the Gum River and areas along local roads connected to adjacent areas showed a high probability of change. Most of the forest areas with high change probability are located in the southeast; this is consistent with the wide chestnut-tree cultivation complex located in these areas, where the potential for forest damage is very high. As a result of validation using a prediction rate curve, within the top 10% of predicted change probability the prediction rate is 80% for urban areas, 55% for agricultural areas, and 40% for forest areas.
This integration model is unsatisfactory for predicting forest areas in the study area; thus, as future work, it is necessary to apply new thematic maps or prediction models. In conclusion, we expect that this approach can become one of the essential methods for land-surface change studies in the coming years.
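A likelihood-ratio model of this kind scores each cell by how much more often its thematic-map classes occur among changed cells than among unchanged ones. The sketch below uses invented counts, not the Gongju layers, and assumes conditional independence between layers so the log-ratios can simply be summed.

```python
import math

# Likelihood-ratio sketch in the weights-of-evidence style: for each
# thematic-map class, P(class | change) / P(class | no change) measures
# how strongly that evidence layer favours land-surface change.
def likelihood_ratio(changed_in_class, changed_total,
                     unchanged_in_class, unchanged_total):
    return ((changed_in_class / changed_total) /
            (unchanged_in_class / unchanged_total))

# Evidence for one candidate cell: counts of changed / unchanged cells
# that fall in this cell's class of each thematic map (invented numbers).
layers = {
    "near_local_road": (120, 200, 300, 1800),   # roads strongly favour change
    "low_elevation":   (150, 200, 900, 1800),
    "high_land_price": (80, 200, 240, 1800),
}
# Conditional independence assumed: combine layers by summing log-ratios.
score = sum(math.log(likelihood_ratio(*c)) for c in layers.values())
print(score)   # > 0 means the combined evidence favours change at this cell
```

Ranking all cells by this score and taking the top 10% is what the prediction rate curve above evaluates.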

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.201-220
    • /
    • 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that the use of news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the need to study the integration of document summarization technology in the domestic news data environment has become evident. In order to examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization. Second, we created a summarized-news-based detection model. Finally, we compared our model with the full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance; however, for DT (Decision Tree), the full-text-based model demonstrated somewhat better performance. In the case of LR (Logistic Regression), our model exhibited superior performance. Nonetheless, the results did not show a statistically significant difference between our model and the full-text-based model. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model shows the possibility of performance improvement.
This study features an experimental application of extractive summarization in fake news detection research by employing various machine-learning algorithms. The study's limitations are, essentially, the relatively small amount of data and the lack of comparison between various summarization technologies. Therefore, an in-depth analysis that applies various analytical techniques to a larger data volume would be helpful in the future.
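A frequency-based extractive summarizer of the sort such studies compare can be sketched in a few lines: score each sentence by the average document-level frequency of its words and keep the top k sentences in their original order. The article text below is invented, and this simple scorer is a stand-in illustration, not the summarizer actually used in the study.

```python
import re
from collections import Counter

def extractive_summary(text, k=2):
    """Frequency-based extractive summarizer: score each sentence by the
    mean document frequency of its words, keep the top-k sentences in
    original order. Stop-word handling is omitted for brevity."""
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(w.lower() for w in re.findall(r"[A-Za-z']+", text))
    def score(s):
        words = re.findall(r"[A-Za-z']+", s)
        return sum(freq[w.lower()] for w in words) / max(len(words), 1)
    top = sorted(sorted(sents, key=score, reverse=True)[:k], key=sents.index)
    return " ".join(top)

article = ("The election results were announced on Monday. "
           "Officials confirmed the results after a full recount. "
           "A viral post claimed the results were fabricated. "
           "The weather that day was unusually warm.")
summary = extractive_summary(article, k=2)
print(summary)   # keeps the result-focused sentences, drops the weather one
```

The summary would then replace the full text as input to the downstream classifier (LR, SVM, etc.), which is the comparison the study carries out.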

A Study on the Profitability Enhancement of SI Business in Public and Finance Sector (공공(公共)/금융분야(金融分野) SI사업(事業)의 수익성(收益性) 향상(向上) 방안(方案)에 관한 연구(硏究))

  • Joo, Jeong-Soo;Jahng, Jung-Joo;Cho, Hurn-Jin
    • Information Systems Review
    • /
    • v.12 no.1
    • /
    • pp.165-188
    • /
    • 2010
  • Recently, the public and finance SI (system integration) industry has been called a 4D (difficult, dangerous, dirty, dreamless) industry because of low profits, overtime work, and poor motivation of employees. Some people even regard the SI industry as a labor-intensive industry rather than a high-technology industry. The current study considers outside environmental change in the SI industry as well as inside capability enhancement of SI companies. The study adopted an action research method drawing on the author's expertise and experience as a head of a major SI company in Korea. The research framework suggests 5 areas of profitability enhancement and offers propositions and implications: (1) policy improvement, (2) business portfolio innovation, (3) sales capability reinforcement, (4) delivery capability reinforcement, and (5) cost management innovation. The five areas include 11 propositional factors and 21 implementation plans, which were chosen from the profitability perspective of SI companies. In order to successfully execute the propositions and implementation plans of the framework, 3 years are needed, after which profitability is expected to be 10% higher than the current level. The framework, propositions, and suggestions in this study are expected to offer a real contribution for SI companies that want to enhance competitiveness and profitability. A future extension of the current study to benchmark competitiveness and profitability between local and global companies will draw solid attention from industry and academics.