
Age-related Changes of the Finger Photoplethysmogram in Frequency Domain Analysis (연령증가에 따른 지첨용적맥파의 주파수 영역에서의 변화)

  • Nam, Tong-Hyun;Park, Young-Bae;Park, Young-Jae;Shin, Sang-Hoon
    • The Journal of the Society of Korean Medicine Diagnostics
    • /
    • v.12 no.1
    • /
    • pp.42-62
    • /
    • 2008
  • Objectives: It is well known that some parameters of the photoplethysmogram (PPG) acquired by time-domain contour analysis can be used as markers of vascular aging, but previous studies of frequency-domain analysis of the PPG have provided only restrictive and fragmentary information. The aim of the present investigation was to determine whether the harmonics extracted from the PPG using a fast Fourier transform could be used as an index of vascular aging. Methods: The PPG was measured in 600 recruited subjects for 30-second durations. To grasp the gross age-related change of the PPG waveform, we grouped subjects according to gender and age and averaged the PPG signal over one pulse cycle. To calculate the conventional indices of vascular aging, we selected 5-6 pulse cycles in which the baseline was relatively stable and then acquired the coordinates of the inflection points. For the frequency-domain analysis we performed a power spectral analysis on the 30-second PPG signals using a fast Fourier transform and dissociated the harmonic components from the PPG signals. Results: A final number of 390 subjects (174 males and 216 females) were included in the statistical analysis. The normalized power of the harmonics decreased with age, and on a logarithmic scale the reduction of the normalized power in the third (r=-0.492, P<0.0001), fourth (r=-0.621, P<0.0001) and fifth harmonics (r=-0.487, P<0.0001) was prominent. In a multiple linear regression analysis, stiffness index, reflection index and corrected up-stroke time influenced the normalized power of the harmonics on a logarithmic scale. Conclusions: The normalized harmonic power decreased with age in healthy subjects and may be less error-prone owing to the essential attributes of frequency-domain analysis. We therefore expect that the normalized harmonic power density can be useful as a vascular aging marker.

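The frequency-domain step described in the abstract (power spectrum via FFT, harmonic powers normalized by total power) can be sketched in a few lines. The code below is an illustrative reconstruction, not the authors' implementation: it assumes the fundamental frequency is the strongest spectral peak and reads harmonic power at the nearest FFT bins.

```python
import numpy as np

def normalized_harmonic_power(signal, fs, n_harmonics=5):
    """Power of the first n harmonics of a quasi-periodic signal,
    normalized by total spectral power (illustrative sketch; function
    and parameter names are not from the paper)."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    f0 = freqs[np.argmax(spectrum)]        # fundamental = strongest peak
    total = spectrum.sum()
    powers = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))  # nearest bin to k*f0
        powers.append(spectrum[idx] / total)
    return f0, powers

# Synthetic "pulse" at 1.2 Hz with decaying harmonics, 30 s at 100 Hz,
# matching the 30-second recordings described above.
fs = 100.0
t = np.arange(0, 30, 1 / fs)
ppg = sum((0.5 ** k) * np.sin(2 * np.pi * 1.2 * k * t) for k in range(1, 6))
f0, p = normalized_harmonic_power(ppg, fs)
```

With a real PPG trace, baseline drift would first need to be removed, which is why the sketch subtracts the mean before transforming.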

Elementary School Teachers' Perception of Gifted Education (영재교육에 대한 초등학교 교사들의 인식)

  • Choi, Moon-Kyung;Park, Jung-Ok
    • Journal of Gifted/Talented Education
    • /
    • v.14 no.4
    • /
    • pp.125-149
    • /
    • 2004
  • The purpose of this study is to provide basic information on the current status of elementary school teachers' perception of gifted education. For this purpose, this study will analyze elementary school teachers' perception of gifted education (i.e., general perception of gifted education, characteristics of gifted children, identification of gifted children, programs for gifted education, and teachers of gifted education). A questionnaire survey was used for the purpose and research questions of this study. The questionnaire was constructed by taking into account the results of surveys conducted in previous studies and the literature on gifted education. Before conducting the research, a preliminary inquiry was made to identify problems that might occur while the subjects were participating in the survey, as well as to determine the appropriateness of the questionnaire and the amount of time needed. The preliminary inquiry was conducted with ten randomly selected elementary school teachers who did not participate as subjects in the actual research. The results were later used as initial data for the actual research. The subjects of this study were teachers who were teaching in 8 elementary schools under each office of education in Seoul. This process was conducted for 180 elementary school teachers from April to May 2004. The results were analyzed using SPSS (Statistical Package for the Social Sciences) Ver. 10.1, a software program for statistical research. After the data were analyzed, the following conclusions were arrived at: 1. The results of the general perception of gifted education by elementary school teachers were positive and reasonably high. The level of their perception of detailed information or knowledge, however, was relatively low. 2. As for their perceptions of the emotional characteristics of gifted children, the results showed a low level of understanding of the characteristics of gifted children. 3. 
As for their perceptions of the identification of gifted children, the results showed a high level of understanding of the appropriate time to provide special education to gifted children and of the methods used to identify them. On the other hand, their understanding of the identification of gifted children in an actual class was poor. 4. The respondents' level of perception of programs for gifted education was very low, since many subjects did not have any experience with such programs. 5. The results showed a very positive response to receiving training on gifted education, though respondents were very reluctant to be assigned as teachers of gifted education because of the excessive work associated with the role and their perceived lack of capability in handling gifted children.

BOLD Responses to Acupuncture on Each Side of ST36 (족삼리 좌우측 자침에 대한 BOLD 반응)

  • Yeo, Sujung;Bae, Seong-In;Choe, Ilwhan;Jahng, Geon-Ho;Lim, Sabina
    • Korean Journal of Acupuncture
    • /
    • v.31 no.1
    • /
    • pp.20-32
    • /
    • 2014
  • Objectives: There has been some controversy about the modulatory effects on brain function of acupuncture on each side of the same acupoint. This study was designed to investigate and compare the blood oxygen level-dependent (BOLD) responses to acupuncture on each side of ST36. Methods: Fourteen healthy subjects were recruited for imaging and received acupuncture or placebo stimulation on either the left or the right acupoint of ST36 in each scan. For the voxel-wise statistical analysis, a one-sample t-test and a within-subject analysis of variance (ANOVA) test were performed using SPM8 software. Results: Acupuncture on each side of ST36 showed different BOLD signal patterns. Higher BOLD responses after acupuncture stimulation at the left ST36 compared to the right were observed mainly in the parahippocampal gyrus (BA 28), dorsolateral prefrontal cortex (DLPFC, BA 44), thalamus, culmen and claustrum. We investigated the different neural responses between rest and activation periods of placebo and acupuncture stimulation on each side of ST36. Acupuncture at the right ST36 elicited activation mainly in the insula, supplementary motor area (SMA) and anterior cingulate cortex (ACC), while acupuncture at the left ST36 elicited activation mainly in the insula, primary somatosensory cortex (SI, BA 2) and DLPFC (BA 44). Conclusions: To our knowledge, this is the first reported functional MRI study directly comparing needling at the right and left sides of ST36. These preliminary results provide evidence of acupuncture's different effects when performed on opposite sides of an acupoint.
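The voxel-wise one-sample t-test mentioned in the abstract tests, at every voxel, whether the subjects' mean activation differs from zero. The numpy sketch below illustrates the idea on synthetic data; the study itself used SPM8, and the subject count (14) is the only detail taken from the abstract.

```python
import numpy as np

def voxelwise_one_sample_t(beta_maps):
    """One-sample t statistic across subjects at every voxel, the kind
    of second-level test SPM performs (illustrative sketch, not SPM8).
    beta_maps: (n_subjects, n_voxels) per-subject activation estimates."""
    n = beta_maps.shape[0]
    mean = beta_maps.mean(axis=0)
    sd = beta_maps.std(axis=0, ddof=1)
    return mean / (sd / np.sqrt(n))

rng = np.random.default_rng(0)
n_subj, n_vox = 14, 1000                 # 14 subjects, as in the study
maps = rng.normal(0.0, 1.0, (n_subj, n_vox))
maps[:, :100] += 1.5                     # simulate 100 truly "active" voxels
tvals = voxelwise_one_sample_t(maps)
active = tvals > 4.22                    # approx. critical t for p<0.001, df=13
```

A real analysis would additionally correct for multiple comparisons across voxels, which this sketch omits.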

Impact of Dentists' Attitudes and Dental Hygienists' Services on Dental Anxiety (치과의사의 태도와 치과위생사의 서비스가 치과불안에 미치는 영향)

  • Yang, Jeong A;Lee, Su-Young;Oh, Se-Jin
    • Journal of dental hygiene science
    • /
    • v.18 no.4
    • /
    • pp.227-233
    • /
    • 2018
  • The purpose of this study was to investigate how dentists' attitudes and dental hygienists' services affect dental anxiety in adults. The subjects were 300 adults older than 20 years of age living in Seoul, Gyeonggi, Daejeon, and Daegu. Data were collected using structured questionnaires. Among the distributed questionnaires, 225 respondents were selected as subjects, excluding 74 people who did not answer and 1 person whose responses were not faithful. Data were analyzed using statistical software with a t-test, one-way ANOVA, and multiple regression. The proportion of women (54.7%) was slightly higher than that of men, and the last dental visit was less than one year earlier for 59.6% of respondents. Most of the respondents' educational level was college level or higher (79.1%), and the monthly income was less than 2 million won for 53.8% of respondents. This study showed that distrust of dentists affected dental anxiety and anxiety stimulation. Higher reliability of the dentist was correlated with less dental anxiety in patients. Dental anxiety showed statistically significant results in the dentist subcategories of patient slight and dentists' trust (p<0.01). Additionally, the factors affecting dental anxiety and anxiety stimulus were knowledge of the dental hygienist and distrust of the dentist (p<0.01). According to this study, patients' trust of dental staff shows the importance of oral health professionals' role in reducing dental anxiety. It is also suggested that efforts should be made to improve public awareness of oral health experts, and that dentists and dental hygienists need to be promoted as professionals. In addition, a variety of programs should be developed to reduce dental anxiety so that patients can receive dental treatment comfortably.
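The multiple regression step used in the abstract (predicting anxiety from several questionnaire scores) amounts to an ordinary least-squares fit with an intercept. The sketch below is generic: the predictor names, effect sizes, and simulated data are invented for illustration and are not the survey's actual items or results.

```python
import numpy as np

def multiple_regression(X, y):
    """Ordinary least-squares fit with an intercept, the kind of
    multiple regression reported above (generic sketch)."""
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef  # [intercept, b1, b2, ...]

rng = np.random.default_rng(1)
n = 225                                   # final sample size from the study
distrust = rng.normal(3.0, 1.0, n)        # hypothetical predictor scores
hyg_knowledge = rng.normal(3.5, 0.8, n)
# Hypothetical model: anxiety rises with distrust, falls with knowledge.
anxiety = 1.0 + 0.6 * distrust - 0.3 * hyg_knowledge + rng.normal(0, 0.2, n)
coef = multiple_regression(np.column_stack([distrust, hyg_knowledge]), anxiety)
```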

Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.99-118
    • /
    • 2015
  • Object-oriented programming languages have been widely selected for developing modern information systems. The use of concepts relating to object-oriented (OO, in short) programming has reduced the effort of reusing pre-existing code, and the OO concepts have proved useful in interpreting system requirements. In line with this, we have witnessed that modern conceptual modeling approaches support features of object-oriented programming. The Unified Modeling Language, or UML, has become one of the de-facto standards for information system designers, since the language provides a set of visual diagrams, comprehensive frameworks and flexible expressions. In a modeling process, UML users need to consider relationships between classes. Based on an explicit and clear representation of classes, the conceptual model from UML necessarily garners the attributes and methods for guiding software engineers. In particular, identifying an association between a class of part and a class of whole is included in the standard grammar of UML. The representation of part-whole relationships is natural in a real-world domain, since many physical objects are perceived in terms of part-whole relationships. In addition, even abstract concepts such as roles are easily identified by part-whole perception. It seems that the representation of part-whole in UML is reasonable and useful. However, it should be admitted that the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it into an aggregate or a composite association. Research efforts on developing such procedural knowledge are meaningful and timely, in that misperceptions of part-whole relationships are hard to filter out in initial conceptual modeling, resulting in deterioration of system usability. The current method of identifying and classifying part-whole relationships mainly relies on linguistic expression.
This simple approach is rooted in the idea that a phrase representing has-a constructs a part-whole perception between objects. If the relationship is strong, the association is classified as a composite association of the part-whole relationship; in other cases, it is an aggregate association. Admittedly, linguistic expressions contain clues to part-whole relationships; therefore, the approach is reasonable and cost-effective in general. Nevertheless, it does not address concerns about accuracy and theoretical legitimacy. Research efforts on developing guidelines for part-whole identification and classification have not accumulated sufficient achievements to solve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on the theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes were developed. To evaluate the performance of the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization compared to a natural language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared to traditional approaches that aim to establish criteria for evaluating the result of conceptual modeling, this study expands the scope to the process of modeling. Traditional theories on the evaluation of part-whole relationships in the context of conceptual modeling aim to rule out incomplete or wrong representations.
Such qualification is still important, but the lack of a practical alternative may reduce the appropriateness of posterior inspection for modelers who want to reduce errors or misperceptions in part-whole identification and classification. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. In addition, it is highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating alternative forms of guidelines, including plugins for integrated development environments.
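The aggregation/composition distinction the study classifies can be illustrated in code: under composition the whole creates and owns its parts, whose lifetime is bound to it, while under aggregation the whole merely references independently existing parts. The class names below are invented examples, not taken from the paper's experiment.

```python
# Composition: the whole creates and owns its parts; the parts do not
# outlive the whole (e.g., an Order and its OrderLines).
class OrderLine:
    def __init__(self, product, qty):
        self.product, self.qty = product, qty

class Order:
    def __init__(self):
        self.lines = []                  # lifetime bound to the Order
    def add_line(self, product, qty):
        self.lines.append(OrderLine(product, qty))  # created internally

# Aggregation: the whole references parts that exist independently
# (e.g., a Course aggregates Students who exist without it).
class Student:
    def __init__(self, name):
        self.name = name

class Course:
    def __init__(self):
        self.students = []               # shared, externally created parts
    def enroll(self, student):
        self.students.append(student)
```

In UML terms, `Order`-`OrderLine` would be drawn with a filled diamond (composite) and `Course`-`Student` with a hollow diamond (shared aggregation); the linguistic has-a test discussed above cannot distinguish the two cases, which is the gap the study's guidelines address.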

Product Recommender Systems using Multi-Model Ensemble Techniques (다중모형조합기법을 이용한 상품추천시스템)

  • Lee, Yeonjeong;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.39-54
    • /
    • 2013
  • The recent explosive increase of electronic commerce provides many advantageous purchase opportunities to customers. In this situation, customers who do not have enough knowledge about their purchases may accept product recommendations. Product recommender systems automatically reflect users' preferences and provide recommendation lists to users. Thus, the product recommender system in an online shopping store has become known as one of the most popular tools for one-to-one marketing. However, recommender systems that do not properly reflect users' preferences cause users' disappointment and waste of time. In this study, we propose a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting users' precise preferences. The research data were collected from a real-world online shopping store, which deals in products from famous art galleries and museums in Korea. The data initially contained 5,759 transactions; 3,167 transactions remained after deletion of null data. In this study, we transform the categorical variables into dummy variables and exclude outlier data. The proposed model consists of two steps. The first step predicts customers who have a high likelihood of purchasing products in the online shopping store. In this step, we first use logistic regression, decision trees, and artificial neural networks to predict customers who have a high likelihood of purchasing products in each product group. We perform the above data mining techniques using SAS E-Miner software. We partition the dataset into two sets, modeling and validation, for the logistic regression and decision trees, and into three sets, training, test, and validation, for the artificial neural network model. The validation dataset is the same for all experiments.
Then we composite the results of each predictor using multi-model ensemble techniques such as bagging and bumping. Bagging is the abbreviation of "Bootstrap Aggregation"; it composites outputs from several machine learning techniques to raise the performance and stability of prediction or classification, and is a special form of the averaging method. Bumping is the abbreviation of "Bootstrap Umbrella of Model Parameters"; it retains only the model which has the lowest error value. The results show that bumping outperforms bagging and the other predictors except for the "Poster" product group, for which the artificial neural network model performs better than the other models. In the second step, we use market basket analysis to extract association rules for co-purchased products. We extracted thirty-one association rules according to the values of the lift, support, and confidence measures, setting the minimum transaction frequency to support associations at 5%, the maximum number of items in an association at 4, and the minimum confidence for rule generation at 10%. This study also excludes extracted association rules with a lift value below 1. We finally obtain fifteen association rules by excluding duplicate rules. Among them, eleven rules contain associations between products in the "Office Supplies" product group, one rule includes an association between the "Office Supplies" and "Fashion" product groups, and the other three rules contain associations between the "Office Supplies" and "Home Decoration" product groups. Finally, the proposed product recommender system provides a list of recommendations to the proper customers. We test the usability of the proposed system using a prototype and real-world transaction and profile data. To this end, we construct the prototype system using ASP, JavaScript and Microsoft Access.
In addition, we survey user satisfaction with the recommended product list from the proposed system and with randomly selected product lists. The participants in the survey are 173 persons who use MSN Messenger, Daum Café, and P2P services. We evaluate user satisfaction using a five-point Likert scale. This study also performs a paired-sample t-test on the results of the survey. The results show that the proposed model outperforms the random selection model at the 1% statistical significance level, meaning that users were significantly more satisfied with the recommended product list. The results also show that the proposed system may be useful in a real-world online shopping store.
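Bumping, the less familiar of the two ensemble techniques named above, fits candidate models on bootstrap resamples and keeps the single fitted model with the lowest error on the original data. The sketch below is a generic illustration under an assumed interface (a list of `fit(X, y) -> predict` callables), not the study's SAS E-Miner implementation.

```python
import numpy as np

def bumping(models, X, y, n_boot=20, rng=None):
    """"Bootstrap umbrella of model parameters": fit each candidate on
    bootstrap resamples and keep the one fitted model with the lowest
    squared error on the ORIGINAL data (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_err = None, np.inf
    for _ in range(n_boot):
        idx = rng.integers(0, len(X), len(X))     # bootstrap sample
        for make_model in models:
            predict = make_model(X[idx], y[idx])  # fit on the resample
            err = np.mean((predict(X) - y) ** 2)  # score on full data
            if err < best_err:
                best, best_err = predict, err
    return best

# Two toy candidate models: a straight-line fit and a constant mean.
def linear_model(X, y):
    b, a = np.polyfit(X, y, 1)
    return lambda x: b * x + a

def mean_model(X, y):
    m = y.mean()
    return lambda x: np.full_like(x, m, dtype=float)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, 200)
y = 2.0 * X + rng.normal(0, 0.1, 200)
best = bumping([linear_model, mean_model], X, y)
```

Because every candidate is scored on the full dataset, bumping discards models that got lucky on their resample; bagging, by contrast, would average all the fitted predictors instead of picking one.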

Real-time CRM Strategy of Big Data and Smart Offering System: KB Kookmin Card Case (KB국민카드의 빅데이터를 활용한 실시간 CRM 전략: 스마트 오퍼링 시스템)

  • Choi, Jaewon;Sohn, Bongjin;Lim, Hyuna
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.1-23
    • /
    • 2019
  • Big data refers to data that is difficult to store, manage, and analyze with existing software. As consumers' changing lifestyles increase the size and variety of their needs, companies are investing a lot of time and money to understand those needs. Companies in various industries utilize big data to improve their products and services, analyze unstructured data, and respond in real time regarding products and services. The financial industry operates decision support systems that use financial data to develop financial products and manage customer risks. The use of big data by financial institutions can effectively create added value along the value chain and makes it possible to develop more advanced customer relationship management strategies. Financial institutions can utilize the purchase data and unstructured data generated by credit cards, making it possible to identify and satisfy customers' desires. CRM has become a granular process that can be measured in real time as it has grown together with information and knowledge systems. With the development of information services and CRM, platforms have changed, and it has become possible to meet consumer needs in various environments. Recently, as the needs of consumers have diversified, more companies are providing systematic marketing services using data mining and advanced CRM (Customer Relationship Management) techniques. KB Kookmin Card, which started as a credit card business in 1980, achieved early stabilization of processes and computer systems and actively participated in introducing new technologies and systems. In 2011, the bank and credit card businesses separated, and the company led the 'Hye-dam Card' and 'One Card' markets, which deviated from the existing concept. In 2017, the total use of domestic credit cards and check cards grew by 5.6% year-on-year to 886 trillion won. In 2018, the company received a long-term rating of AA+ as a result of its credit card evaluation.
This confirmed that its credit rating was at the top of the list thanks to effective marketing strategies and services. At present, KB Kookmin Card emphasizes strategies to meet the individual needs of customers and to maximize the lifetime value of consumers by utilizing customers' payment data. KB Kookmin Card combines internal and external big data and conducts marketing in real time or builds systems for monitoring. It has built a marketing system that detects real-time behavior, using big data such as homepage visits and purchase history derived from customer card information. The system is designed to capture customer action events in real time and execute marketing by utilizing the stores, locations, amounts, and usage patterns of card transactions. The company has created more than 280 different scenarios based on the customer life cycle and conducts marketing plans that accommodate various customer groups in real time. It operates a smart offering system, a highly efficient marketing management system that detects customers' card usage, behavior, and location information in real time and provides further refined services in combination with various apps. This study aims to trace the evolution from traditional CRM to the current CRM strategy. Finally, we examine the current CRM strategy through KB Kookmin Card's big data utilization strategy and marketing activities and propose a marketing plan for KB Kookmin Card's future CRM strategy. KB Kookmin Card should invest in securing the increasingly sophisticated ICT technology and human resources needed for the success and continuous growth of the smart offering system. It is necessary to establish a strategy for securing profit from a long-term perspective and to proceed systematically. Especially in the current situation, where privacy violations and personal information leakage are prominent issues, efforts should be made to gain customers' acceptance of marketing that uses customer information and to build a corporate image that emphasizes security.
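The real-time scenario detection described in this case can be pictured as a rule-matching loop over card-transaction events: each incoming event is checked against scenario predicates, and matching scenarios trigger offers. Everything below (event fields, scenario names, thresholds) is invented for illustration; KB Kookmin Card's actual 280+ scenarios and system are proprietary.

```python
from dataclasses import dataclass

@dataclass
class CardEvent:
    """A single card transaction event (hypothetical schema)."""
    customer_id: str
    merchant_type: str
    amount: int       # in KRW
    location: str

# Hypothetical scenario rules: each takes the new event plus the
# customer's transaction history and returns True if the rule fires.
def first_cafe_purchase(event, history):
    return (event.merchant_type == "cafe"
            and not any(e.merchant_type == "cafe" for e in history))

def high_spend(event, history):
    return event.amount >= 500_000

SCENARIOS = {
    "cafe_welcome_coupon": first_cafe_purchase,
    "vip_contact": high_spend,
}

def match_scenarios(event, history):
    """Return the ids of all offers whose rules fire for this event."""
    return [name for name, rule in SCENARIOS.items() if rule(event, history)]
```

A production system would evaluate such rules inside a streaming pipeline and join the event with location and app data, but the matching logic itself reduces to predicates of this shape.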

Observation of Methane Flux in Rice Paddies Using a Portable Gas Analyzer and an Automatic Opening/Closing Chamber (휴대용 기체분석기와 자동 개폐 챔버를 활용한 벼논에서의 메탄 플럭스 관측)

  • Sung-Won Choi;Minseok Kang;Jongho Kim;Seungwon Sohn;Sungsik Cho;Juhan Park
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.436-445
    • /
    • 2023
  • Methane (CH4) emissions from rice paddies are mainly observed using the closed chamber method or the eddy covariance method. In this study, a new observation technique combining a portable gas analyzer (Model LI-7810, LI-COR, Inc., USA) and an automatic opening/closing chamber (Model Smart Chamber, LI-COR, Inc., USA) was introduced based on the strengths and weaknesses of the existing measurement methods. A cylindrical collar was manufactured according to the maximum growth height of rice and used as an auxiliary measurement tool. All types of measured data can be monitored in real time, and CH4 flux is also calculated simultaneously during the measurement. After the measurement is completed, all the related data can be checked using the software called 'SoilFluxPro'. The biggest advantage of the new observation technique is that time-series changes in greenhouse gas concentrations can be immediately confirmed in the field. It can also be applied to small areas with various treatment conditions, and it is simpler to use and requires less effort for installation and maintenance than the eddy covariance system. However, there are also disadvantages in that the observation system is still expensive, requires specialized knowledge to operate, and requires a lot of manpower to install multiple collars in various observation areas and travel around them to take measurements. It is expected that the new observation technique can make a significant contribution to understanding the CH4 emission pathways from rice paddies and quantifying the emissions from those pathways.
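The flux calculation such a chamber system performs during measurement boils down to fitting the rate of concentration rise while the chamber is closed and converting it to a molar flux per unit soil area. The sketch below is a generic version of that calculation under stated assumptions (linear fit, ideal gas law); it is not LI-COR's actual algorithm, and the chamber dimensions are invented.

```python
import numpy as np

def chamber_flux(times_s, ch4_ppb, volume_m3, area_m2,
                 pressure_pa=101325.0, temp_k=298.15):
    """Closed-chamber CH4 flux in nmol m^-2 s^-1: fit dC/dt over the
    closure period and convert with the ideal gas law (generic sketch)."""
    slope_ppb_s = np.polyfit(times_s, ch4_ppb, 1)[0]   # ppb per second
    mol_per_m3 = pressure_pa / (8.314 * temp_k)        # molar density of air
    # ppb -> mole fraction (1e-9), then scale back to nmol (1e9).
    return slope_ppb_s * 1e-9 * mol_per_m3 * volume_m3 / area_m2 * 1e9

t = np.arange(0, 120, 2.0)          # hypothetical 2-minute chamber closure
conc = 1900.0 + 0.5 * t             # synthetic 0.5 ppb/s rise from ~1900 ppb
flux = chamber_flux(t, conc, volume_m3=0.03, area_m2=0.09)
```

In practice the early, most linear part of the rise is used (or an exponential fit, to account for the weakening gradient as the headspace concentration builds up), which is one reason the real-time display of the concentration time series noted above is so useful in the field.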

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflow is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have more troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concerns over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing. Existing studies have focused on only a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Hence, conducting a survey on the assumption that the participants have sufficient experience with or understanding of the technologies shown in the survey may not be entirely valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concerns in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concerns in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The formatting of the Delphi rounds faithfully followed the procedure for the Delphi study proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to assist them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates. Final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected specific characteristics with a higher potential to increase users' privacy concerns. Secondly, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns. The traditional questionnaire method was not selected because, at the time, users of context-aware personalized services absolutely lacked understanding of and experience with the new technology. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensor networks as the most important factors among the technological characteristics of context-aware personalized services.
In creating context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies are needed, in what sequence, to acquire which types of users' context information. Most studies, alongside the development of context-aware technology, focus on which services and systems should be provided and developed by utilizing context information. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following the sub-factor evaluation results, additional studies would be necessary on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display presenting services to users in context-aware personalized services, oriented toward the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
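The concordance analysis described in this abstract is conventionally computed as Kendall's coefficient of concordance (W); the abstract does not give the paper's exact formula, so the following is only a minimal sketch, assuming complete, untied rankings from each expert:

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for m experts ranking n items.

    rankings: list of m lists, each a permutation of ranks 1..n (one per
    expert, no ties).  W = 1 means perfect agreement; W = 0 means none.
    """
    m = len(rankings)            # number of experts
    n = len(rankings[0])         # number of items being ranked
    # Total rank received by each item across all experts.
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = m * (n + 1) / 2
    # Sum of squared deviations of the rank totals from their mean.
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three experts in perfect agreement on four items yield W = 1.0.
w = kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

A W near 1 over successive rounds would indicate the group consensus the study sought; tied ranks would require a correction term not shown here.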

Impact of Shortly Acquired IPO Firms on ICT Industry Concentration (ICT 산업분야 신생기업의 IPO 이후 인수합병과 산업 집중도에 관한 연구)

  • Chang, YoungBong;Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.51-69
    • /
    • 2020
  • Now, it is a stylized fact that a small number of technology firms such as Apple, Alphabet, Microsoft, Amazon, Facebook and a few others have become larger and dominant players in an industry. Coupled with the rise of these leading firms, we have also observed that a large number of young firms have become acquisition targets in their early IPO stages. This indeed results in a sharp decline in the number of new entries on public exchanges, although a series of policy reforms have been promulgated to foster competition through an increase in new entries. Given the observed industry trend in recent decades, a number of studies have reported increased concentration in most developed countries. However, it is less understood what caused the increase in industry concentration. In this paper, we uncover the mechanisms by which industries have become concentrated over the last decades by tracing the changes in industry concentration associated with a firm's status change in its early IPO stages. To this end, we put emphasis on the case in which firms are acquired shortly after they went public. In particular, with the transition to digital-based economies, it is imperative for incumbent firms to adapt and keep pace with new ICT and related intelligent systems. For instance, after acquiring a young firm equipped with AI-based solutions, an incumbent firm may better respond to changes in customer taste and preference by integrating the acquired AI solutions and analytics skills into multiple business processes. Accordingly, it is not unusual for young ICT firms to become attractive acquisition targets. To examine the role of M&As involving young firms in reshaping the level of industry concentration, we identify a firm's status in its early post-IPO stages over the sample period spanning 1990 to 2016 as follows: i) being delisted, ii) remaining a standalone firm, and iii) being acquired. 
According to our analysis, firms that have conducted IPOs since the 2000s have been acquired by incumbent firms relatively more quickly than those that went public in earlier periods. We also show a greater acquisition rate for IPO firms in the ICT sector compared with their counterparts in other sectors. Our results based on multinomial logit models suggest that a large number of IPO firms have been acquired early in their post-IPO lives despite their financial soundness. Specifically, we show that IPO firms are likely to be acquired, rather than delisted due to financial distress, in their early IPO stages when they are more profitable, more mature, or less leveraged. IPO firms with venture capital backing have also become acquisition targets more frequently. As a larger number of firms are acquired shortly after their IPO, our results show increased concentration. While providing limited evidence on the impact of large incumbent firms in explaining the change in industry concentration, our results show that the large firms' effect on industry concentration is pronounced in the ICT sector. This result possibly captures the current trend that a few tech giants such as Alphabet, Apple and Facebook continue to increase their market share. In addition, compared with acquisitions of non-ICT firms, the concentration impact of early-stage IPO firms becomes larger when ICT firms are the acquisition targets. Our study makes new contributions. To the best of our knowledge, this is one of the few studies that link a firm's post-IPO status to associated changes in industry concentration. Although some studies have addressed concentration issues, their primary focus was on market power or proprietary software. In contrast to earlier studies, we are able to uncover the mechanism by which industries have become concentrated by placing emphasis on M&As involving young IPO firms. 
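The multinomial logit setup above models a firm's post-IPO status (delisted, standalone, acquired) from firm covariates. As an illustration only, where the coefficient values and feature names below are hypothetical and not the paper's estimates, the predicted outcome probabilities can be sketched with a softmax over linear scores:

```python
import math

# Hypothetical coefficients, chosen only to mirror the abstract's direction
# of effects (profitability and maturity favor acquisition; leverage favors
# delisting).  These are NOT the paper's fitted estimates.
COEFS = {
    "delisted":   {"profitability": -2.0, "leverage": 1.5, "maturity": -0.3},
    "standalone": {"profitability": 0.5, "leverage": -0.2, "maturity": 0.1},
    "acquired":   {"profitability": 1.2, "leverage": -0.8, "maturity": 0.4},
}

def status_probabilities(features, coefs=COEFS):
    """Multinomial-logit style probabilities for the three post-IPO outcomes."""
    # Linear score per outcome: sum of coefficient * covariate.
    scores = {o: sum(b[k] * features[k] for k in b) for o, b in coefs.items()}
    z = max(scores.values())                 # stabilize the exponentials
    exps = {o: math.exp(s - z) for o, s in scores.items()}
    total = sum(exps.values())
    return {o: e / total for o, e in exps.items()}

# A profitable, mature, lightly leveraged firm: acquisition is the most
# likely outcome under these illustrative coefficients.
p = status_probabilities({"profitability": 0.8, "leverage": 0.3, "maturity": 2.0})
```

In an actual estimation one outcome would serve as the base category with its coefficients normalized to zero; the sketch omits that detail for brevity.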
Interestingly, the concentration impact of IPO-firm acquisitions is magnified when a large incumbent firm is involved as the acquirer. This leads us to infer the underlying reasons why industries have become more concentrated, in favor of large firms, in recent decades. Overall, our study sheds new light on the literature by providing a plausible explanation as to why industries have become concentrated.
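Industry concentration of the kind the study traces is conventionally measured with an index such as the Herfindahl-Hirschman index (HHI); the abstract does not name the paper's measure, so the shares below are purely illustrative. When a large incumbent absorbs a young IPO firm, the index rises by twice the product of their shares:

```python
def hhi(shares):
    """Herfindahl-Hirschman index: sum of squared market shares (in percent).
    Higher values indicate a more concentrated industry."""
    return sum(s ** 2 for s in shares)

# Hypothetical industry: a 30%-share incumbent acquires a young IPO firm
# holding a 5% share.  The shares are illustrative, not from the paper.
before = [30, 25, 20, 10, 10, 5]   # incumbents plus the young IPO firm
after = [35, 25, 20, 10, 10]       # the 30% incumbent absorbs the 5% firm
delta = hhi(after) - hhi(before)   # rises by 2 * 30 * 5 = 300
```

The quadratic weighting is why the same acquisition raises concentration more when the acquirer is large, consistent with the magnified effect reported above.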