• Title/Abstract/Keyword: Intelligence information technology

Search results: 1,975 (processing time: 0.033 seconds)

Intents of Acquisitions in Information Technology Industries (정보기술 산업에서의 인수 유형별 인수 의도 분석)

  • Cho, Wooje;Chang, Young Bong;Kwon, Youngok
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.123-138
    • /
    • 2016
  • This study investigates the intents of acquisitions in information technology industries. Mergers and acquisitions are corporate-level strategic decisions and have been an important tool for firm growth. Over the last few decades, many firms in IT industries have acquired startups to increase production efficiency, expand their customer base, or improve quality. For example, Google has made about 200 acquisitions since 2001, Cisco has acquired about 210 firms since 1993, Oracle has made about 125 acquisitions since 1994, and Microsoft has acquired about 200 firms since 1987. Although many papers have theoretically studied the intents or motivations of acquisitions, few papers investigate them empirically, mainly because intents of M&As are difficult to measure and quantify. This study examines acquisition intent by measuring specific intents for individual M&A transactions. Using our measures of acquisition intent, we compare intents across four acquisition types: (1) a hardware firm acquiring a hardware firm, (2) a hardware firm acquiring a software/IT service firm, (3) a software/IT service firm acquiring a hardware firm, and (4) a software/IT service firm acquiring a software/IT service firm. We presume that the reasons behind each of these four acquisition types differ. Using data on M&As in US IT industries, we identified the major intents of the M&As. The acquisition intents were identified from the press releases of M&A announcements and measured in four categories. First, an acquirer may intend to save operating costs by sharing common resources between the acquirer and the target.
Such cost savings can accrue from economies of scope and scale. Second, an acquirer may intend product enhancement or development: knowledge and skills transferred from the target may enable the acquirer to enhance product quality or expand product lines. Third, an acquirer may intend to gain an additional customer base in order to expand the market, penetrate the market, or enter a foreign market. Fourth, a firm may acquire a target intending to expand its customer channels; by complementing its existing channels to the customer, the firm can increase revenue. Our results show that acquirers have had cost-saving intents more often in acquisitions between hardware companies than in acquisitions between software companies. Hardware firms are more likely than software firms to acquire with intents of product enhancement or development. Overall, product enhancement/development is the most frequent intent across all four acquisition types, and customer base expansion is the second. We also analyze our data using a classification into production-side and customer-side intents, based on the activities of a firm's value chain. Intents of cost saving in operations and of product enhancement/development can be viewed as production-side intents, while intents of customer base expansion and of expanding customer channels can be viewed as customer-side intents. Our analysis shows that the ratio of customer-side intents to production-side intents is higher in acquisitions where a software firm is the acquirer than in acquisitions where a hardware firm is the acquirer. This study contributes to the IS literature in two ways. First, it provides insights into M&As in IT industries by answering the question of why an IT firm acquires another IT firm. Second, it provides the distribution of acquisition intents for each acquisition type.
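The customer-side versus production-side comparison described above is, at heart, a simple tally over labeled intents. A minimal sketch (with hypothetical deal records, not the study's data):

```python
from collections import Counter

# The four intent categories from the abstract, grouped by value-chain side.
PRODUCTION_SIDE = {"cost_saving", "product_enhancement"}
CUSTOMER_SIDE = {"customer_base_expansion", "channel_expansion"}

# Hypothetical (acquirer type, intent) records for illustration only.
deals = [
    ("hardware", "cost_saving"),
    ("hardware", "product_enhancement"),
    ("hardware", "customer_base_expansion"),
    ("software", "product_enhancement"),
    ("software", "customer_base_expansion"),
    ("software", "channel_expansion"),
]

def intent_ratio(deals, acquirer):
    """Customer-side / production-side intent ratio for one acquirer type."""
    counts = Counter(intent for a, intent in deals if a == acquirer)
    production = sum(counts[i] for i in PRODUCTION_SIDE)
    customer = sum(counts[i] for i in CUSTOMER_SIDE)
    return customer / production

print(intent_ratio(deals, "hardware"))  # 1 customer-side / 2 production-side = 0.5
print(intent_ratio(deals, "software"))  # 2 customer-side / 1 production-side = 2.0
```

In this toy data the software acquirer shows the higher customer-side ratio, which is the pattern the study reports.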

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and even monetary damage occurs more frequently. In this study, we propose a method for determining which sentences and documents posted on SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; in this paper we focus on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul' (loan) and 'sachae' (private loan) as seed words and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers to decide whether or not they are related to cybercriminality, particularly financial fraud. We then selected as keywords the vocabulary items related to nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing and turned into learning data. Preprocessing consists of three steps: morphological analysis, stop-word removal, and selection of valid parts of speech. In the morphological analysis step, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text.
In the part-of-speech selection step, only two kinds of tokens, nouns and symbols, are retained. Since nouns refer to things, they express the intent of a message better than other parts of speech; moreover, the more illegal a text is, the more frequently symbols are used. To turn the selected data into learning data, each item must be labeled 'legal' or 'illegal'. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning set and a test set; we set the learning data at 70% and the test data at 30%. SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in general cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), term frequency, and collective intelligence methods, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, clearly superior to that of term frequency, MLE, and the other methods. Hence, the results suggest that the proposed method is valid and practically usable. In this paper, we propose a framework for managing crises caused by abnormalities in unstructured data sources such as SNS. We hope this study will contribute to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
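As an illustration of the classification step (a sketch with a toy English corpus, not the authors' Korean pipeline), a document-term matrix and an RBF-kernel SVM with the stated parameters (gamma = 0.5, cost = 10) can be set up with scikit-learn:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

# Toy corpus standing in for the preprocessed SNS posts; the real study
# used Korean morphemes, and kept nouns and symbols as features.
docs = [
    "urgent loan !! no credit check call now $$",
    "same day private loan !! anyone approved $$",
    "bank announces new fixed rate mortgage product",
    "city council publishes quarterly budget report",
]
labels = [1, 1, 0, 0]  # 1 = illegal advertisement, 0 = legal

# Document-Term Matrix; the token pattern keeps symbol tokens like "!!".
vec = CountVectorizer(token_pattern=r"[^\s]+")
X = vec.fit_transform(docs)

# Parameters as reported in the abstract: gamma = 0.5, cost (C) = 10.
clf = SVC(kernel="rbf", gamma=0.5, C=10)
clf.fit(X, labels)

print(clf.predict(vec.transform(["no credit check loan call now !!"])))
```

On this tiny, clearly separable corpus the classifier flags the loan-spam-like message as illegal; the study's reported accuracies of course come from its full 820,000-article dataset, not from a sketch like this.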

Application of Amplitude Demodulation to Acquire High-sampling Data of Total Flux Leakage for Tendon Nondestructive Estimation (덴던 비파괴평가를 위한 Total Flux Leakage에서 높은 측정빈도의 데이터를 획득하기 위한 진폭복조의 응용)

  • Joo-Hyung Lee;Imjong Kwahk;Changbin Joh;Ji-Young Choi;Kwang-Yeun Park
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.27 no.2
    • /
    • pp.17-24
    • /
    • 2023
  • A post-processing technique for the measurement signal of a solenoid-type sensor is introduced. The solenoid-type sensor nondestructively evaluates external tendons of prestressed concrete using the total flux leakage (TFL) method. The TFL solenoid sensor consists of a primary and a secondary coil. AC electricity with the shape of a sinusoidal function is input into the primary coil, and a signal proportional to the derivative of the input is induced in the secondary coil. Because the amplitude of the induced signal is proportional to the cross-sectional area of the tendon, sectional loss caused by rupture or corrosion can be identified from the induced signal. It is therefore important to extract amplitude information from the TFL sensor's measurement signal. Previously, the amplitude was extracted using local maxima, the simplest way to obtain amplitude information. However, because amplitude extraction using local maxima dramatically decreases the sampling rate, the previous method places many restrictions on the direction of TFL sensor development, such as applying additional signal processing and/or artificial intelligence. The proposed method instead uses amplitude demodulation to obtain the signal amplitude, so the sampling rate of the amplitude information is the same as that of the raw TFL sensor data. The proposed method provides ample freedom for development by eliminating restrictions on the primary-coil input frequency of the TFL sensor and on the speed at which the sensor is applied to the external tendon. It also maintains a high measurement sampling rate, which is advantageous for applying additional signal processing or artificial intelligence. The proposed method was validated through experiments, and its advantages were verified through comparison with the previous method.
For example, in this study the amplitudes extracted by amplitude demodulation provided a sampling rate 100 times greater than that of the previous method. Results may differ depending on the situation and equipment settings; however, in most cases, extracting amplitude information using amplitude demodulation yields more satisfactory results than the previous method.
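The contrast between local-maxima extraction and amplitude demodulation can be sketched on a synthetic signal. The Hilbert-transform envelope is one common way to realize amplitude demodulation; the sampling rate, carrier frequency, and modulation below are illustrative, not the sensor's actual settings:

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

fs = 5000            # sampling rate [Hz] (illustrative)
f_carrier = 50       # primary-coil excitation frequency [Hz] (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)

# A slowly varying amplitude stands in for the tendon cross-section signal.
amplitude = 1.0 + 0.3 * np.sin(2 * np.pi * 1.0 * t)
signal = amplitude * np.sin(2 * np.pi * f_carrier * t)

# Previous method: one amplitude sample per carrier period (local maxima).
peaks, _ = find_peaks(signal)

# Demodulation: the analytic-signal envelope at the full sampling rate.
envelope = np.abs(hilbert(signal))

print(len(signal) / len(peaks))   # roughly fs / f_carrier, i.e. ~100x more samples
```

With these illustrative settings, local maxima yield one amplitude sample per carrier cycle (about 50 per second), while the envelope retains all 5,000 samples per second, mirroring the roughly 100-fold sampling-rate gain reported above.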

A Study on Recent Research Trend in Management of Technology Using Keywords Network Analysis (키워드 네트워크 분석을 통해 살펴본 기술경영의 최근 연구동향)

  • Kho, Jaechang;Cho, Kuentae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.101-123
    • /
    • 2013
  • Recently, owing to advances in science and information technology, socio-economic and business areas have been shifting from an industrial economy to a knowledge economy. Furthermore, companies need to create new value through continuous innovation, the development of core competencies and technologies, and technological convergence. Therefore, identifying major trends in technology research and making interdisciplinary, knowledge-based predictions of integrated technologies and promising techniques are required for firms to gain and sustain competitive advantage and future growth engines. The aim of this paper is to understand recent research trends in management of technology (MOT) and to foresee promising technologies with deep knowledge of both technology and business. Furthermore, this study intends to provide a clear way to find new technical value for constant innovation and to capture core technologies and technology convergence. Bibliometrics is a metrical analysis for understanding the characteristics of a body of literature. Traditional bibliometrics cannot capture the relationship between trends in technology management and the technology itself, since it focuses on quantitative indices such as citation frequency. To overcome this limitation, network-focused bibliometrics, which mainly uses co-citation and co-word analysis, has been used instead. In this study, a keyword network analysis, a form of social network analysis, is performed to analyze recent research trends in MOT. For the analysis, we collected keywords from research papers published in international journals related to MOT between 2002 and 2011, constructed a keyword network, and then analyzed it. Over the past 40 years, studies of social networks have attempted to understand social interactions through the network structure represented by connection patterns.
In other words, social network analysis has been used to explain the structures and behaviors of various social formations such as teams, organizations, and industries. In general, social network analysis uses data in matrix form. In our context, the matrix relates rows (papers) to columns (keywords), and the relations are binary: each cell is 1 if the paper includes the keyword and 0 otherwise. Even though there are no direct relations between published papers, relations between papers can be derived from the paper-keyword matrix; for example, a keyword network can be configured by connecting papers that share one or more keywords. After constructing the keyword network, we analyzed keyword frequency, the structural characteristics of the network, preferential attachment and the growth of new keywords, components, and centrality. The results of this study are as follows. First, a paper has 4.574 keywords on average. Over the past 10 years, 90% of keywords were used three times or fewer, and about 75% appeared only once. Second, the keyword network in MOT is a small-world and scale-free network in which a small number of keywords tend to monopolize connections. Third, the gap between the rich nodes (with more edges) and the poor nodes (with fewer edges) grows over time. Fourth, most newly entering keywords become poor nodes within about two to three years. Finally, the keywords with high degree, betweenness, and closeness centrality are "Innovation," "R&D," "Patent," "Forecast," "Technology transfer," "Technology," and "SME."
We hope that the results of the analysis will help MOT researchers identify major trends in technology research, and serve as useful reference information when they seek consilience with other fields of study and select new research topics.
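The binary paper-keyword matrix and the keyword network derived from it can be sketched as follows (hypothetical toy data; a real analysis would use a dedicated network library and the full centrality measures):

```python
import numpy as np

# Hypothetical paper-keyword data; keyword names echo the abstract's hubs.
papers = {
    "p1": ["Innovation", "R&D", "Patent"],
    "p2": ["Innovation", "Patent"],
    "p3": ["R&D", "Technology transfer"],
    "p4": ["Innovation", "SME"],
    "p5": ["Innovation", "Forecast"],
}
keywords = sorted({k for kws in papers.values() for k in kws})
k_index = {k: i for i, k in enumerate(keywords)}

# Binary paper-keyword matrix: 1 for including, 0 for not including.
B = np.zeros((len(papers), len(keywords)), dtype=int)
for row, kws in enumerate(papers.values()):
    for k in kws:
        B[row, k_index[k]] = 1

# Keyword co-occurrence: two keywords are linked if some paper uses both.
C = B.T @ B
np.fill_diagonal(C, 0)
degree = (C > 0).sum(axis=1)      # unnormalized degree centrality

print(keywords[int(degree.argmax())])   # the best-connected keyword
```

In this toy network "Innovation" is the hub with the highest degree, which matches its position among the high-centrality keywords reported above.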

Learning with a Robot for STEAM in Elementary School Curriculum (초등정규교육과정에서 STEAM을 위한 로봇활용교육)

  • Han, Jeong-Hye;Park, Ju-Hyun;Jo, Mi-Heon;Park, Ill-Woo;Kim, Jin-Oh
    • Journal of The Korean Association of Information Education
    • /
    • v.15 no.3
    • /
    • pp.483-492
    • /
    • 2011
  • 'Learning with a robot' is now considered one of the best candidates for STEAM education, whose importance has recently been growing. Most 'learning with a robot' programs in elementary schools are run as after-school classes, and the participating students are mostly boys who are interested in science and robots. This paper examines whether a robot can help improve students' interest in STEAM education. From an educational point of view, we divided the uses of robots into five areas: abstract-concept understanding, structure-oriented, athletics-oriented, intelligence-oriented, and value-oriented. We extracted from the elementary school curriculum all subjects and units in which robots can be used, and developed lesson plans applicable to regular classes. We also verified them by applying them in an elementary school for five months. The analysis shows that 'learning with a robot' can encourage students' interest in STEAM, and that it is more effective for girls than for boys. Finally, we discuss problems that teachers may face in using a robot in regular classes and make suggestions about the use of robots for STEAM education.


Web-enabled Healthcare System for Hypertension: Hyperlink-based Inference Approach (고혈압관리를 위한 웹 기반의 지능정보시스템: 하이퍼링크를 이용한 추론방식으로)

  • Song, Yong-Uk;Ho, Seung-Hee;Chae, Young-Moon;Cho, Kyoung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.9 no.1
    • /
    • pp.91-107
    • /
    • 2003
  • In this study, a web-enabled healthcare system for the management of hypertension was implemented through a hyperlink-based inference approach. The hyperlink-based inference platform was implemented using the hypertext capability of HTML, which ensures the accessibility, multimedia facilities, fast response, stability, ease of use and upgrade, and platform independence of expert systems. To perform hyperlink-based inference, many HTML documents, hyperlinked to each other according to expert rules, are uploaded beforehand. The HTML documents were uploaded and maintained automatically by our proprietary tool, the Web-Based Inference System (WeBIS), which provides a graphical user interface (GUI) for entering and editing decision graphs. Nevertheless, editing a decision graph with the GUI tool is a time-consuming and tedious chore when the knowledge engineer must perform it manually. Accordingly, this research implemented an automatic decision-graph generator for the management of hypertension. As a result, this research suggests a methodology for developing web-enabled healthcare systems using the hyperlink-based inference approach and, as an example, implements a web-enabled healthcare system for hypertension that performs especially well in terms of speed and stability.
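The hyperlink-based inference idea, in which each edge of the decision graph becomes a static hyperlink so that "inference" is simply page navigation, can be sketched as follows (hypothetical rules and page names, not the WeBIS tool itself):

```python
# Each node of the decision graph becomes one HTML page; each rule (edge)
# becomes a plain hyperlink, so the user performs inference by clicking.
decision_graph = {
    "start": [("Is systolic BP over 140?", "high_bp"),
              ("Is systolic BP 140 or below?", "normal_bp")],
    "high_bp": [("Do you currently take medication?", "consult")],
    "normal_bp": [],   # leaf: recommendation page, no outgoing rules
    "consult": [],     # leaf
}

def render_page(node):
    """Render one decision-graph node as a static HTML page."""
    links = "".join(
        f'<li><a href="{target}.html">{question}</a></li>'
        for question, target in decision_graph[node]
    )
    return f"<html><body><h1>{node}</h1><ul>{links}</ul></body></html>"

# An automatic generator, as in the paper, would emit all pages up front.
pages = {node: render_page(node) for node in decision_graph}
print(pages["start"])
```

Because every page is pregenerated, the server does no rule evaluation at request time, which is consistent with the speed and stability advantages reported above.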


I3A Framework of Defense Network Centric Based C2 Facilities (국방 NC 기반 C2 시설 I3A Framework)

  • Kim, Young-Dong;Lee, Tae-Gong;Park, Bum-Shik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39C no.8
    • /
    • pp.615-625
    • /
    • 2014
  • The Ministry of National Defense (MND) established the "Master Plan of Military Facility" in 2010, based on the defense reform, to prepare for future war. The plan consolidates small military facilities into battalion units, reflecting and preparing for various changes in the defense environment as well as the balanced growth of the ROK Army, Navy, and Air Force. However, to move forward with the master plan, the current design criteria for military facilities need to be revised, because the absence of sound facility criteria has led to numerous calculation errors in facility footprints. Because the future war environment will shift from a platform basis to a network-centric warfare basis, the command and control (C2) capability of C4I systems is becoming more important, and successful mission accomplishment can be secured by the convergence of facilities and military information technology (IT). MND should therefore quickly prepare operational guidance, design criteria, and policies suitable for network-centric warfare, and implement IT infrastructure and C2 facilities in conjunction with the consolidation of military facilities. In this paper, we propose a defense I3A framework to solve this problem.

Social Media Analysis Based on Keyword Related to Educational Policy Using Topic Modeling (토픽모델링을 이용한 교육정책 키워드 기반 소셜미디어 분석)

  • Chung, Jin-myeong;Park, Young-ho;Kim, Woo-ju
    • Journal of Internet Computing and Services
    • /
    • v.19 no.4
    • /
    • pp.53-63
    • /
    • 2018
  • The traditional mass-media role of conveying information and forming public opinion has rapidly given way to an environment in which information and opinions are shared through social media, whose influence continues to strengthen with the development of ICT. In other words, the influence of public opinion, produced and shared on social media, on political, social, and economic change is increasing, and this change is already being exploited in political campaigns. In addition, efforts to grasp and reflect public opinion through social media are being actively pursued not only in the political arena but also in the public sector. The purpose of this study is to explore the possibility of using social-media-based public opinion in educational policy. We collected social media data and analyzed the main topics, the occurrence probability of each topic, and topic trends. As a result, we were able to capture the public's main interests: the 'Domestic Computer Education Time' topic accounted for 43.99%, the 'Prime Project Selection' topic for 36.81%, and the 'Artificial Intelligence Program' topic for 7.94%. In addition, the findings suggest that flexible policies should be established according to the timing of the curriculum and the target of the policy, even when the policy category is the same.
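The topic-modeling step can be sketched with scikit-learn's LDA implementation (toy documents and a topic count chosen for illustration; the study's actual corpus, preprocessing, and model settings differ):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy posts standing in for the collected social-media documents.
docs = [
    "coding education hours in elementary school curriculum",
    "school curriculum adds software coding education hours",
    "university prime project selection funding announced",
    "prime project funding selection results for universities",
    "artificial intelligence program for talented students",
    "new artificial intelligence education program launched",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)

# Three topics, mirroring the abstract's three dominant topics.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(X)   # per-document topic distribution

print(doc_topics.shape)             # (6, 3); each row sums to 1
```

Averaging the per-document topic distributions over the corpus gives the kind of topic-share figures (e.g., 43.99% vs. 36.81% vs. 7.94%) reported in the abstract.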

Empirical Research on Search model of Web Service Repository (웹서비스 저장소의 검색기법에 관한 실증적 연구)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.173-193
    • /
    • 2010
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web services repositories not only be well-structured but also provide efficient tools for an environment supporting reusable software components for both service providers and consumers. As the potential of Web services for service-oriented computing is becoming widely recognized, the demand for an integrated framework that facilitates service discovery and publishing is concomitantly growing. In our research, we propose a framework that facilitates Web service discovery and publishing by combining clustering techniques and leveraging the semantics of the XML-based service specification in WSDL files. We believe that this is one of the first attempts at applying unsupervised artificial neural network-based machine-learning techniques in the Web service domain. We have developed a Web service discovery tool based on the proposed approach using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web services repositories. We believe that both service providers and consumers in a service-oriented computing environment can benefit from our Web service discovery approach.
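A minimal sketch of unsupervised competitive learning on service-description vectors is shown below. This is a winner-take-all simplification of a self-organizing map (no neighborhood function), trained on hypothetical bag-of-words vectors rather than parsed WSDL files:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical term vectors over the vocabulary
# [price, stock, quote, weather, forecast, temperature].
services = np.array([
    [1, 1, 1, 0, 0, 0],   # stock-quote service
    [1, 1, 0, 0, 0, 0],   # stock-price service
    [0, 0, 0, 1, 1, 0],   # weather-forecast service
    [0, 0, 0, 1, 0, 1],   # temperature service
], dtype=float)

n_units = 2                               # a 1x2 map: one unit per cluster
weights = rng.random((n_units, services.shape[1]))

def bmu(x, w):
    """Best matching unit: the node nearest to the input vector."""
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))

def quantization_error(data, w):
    """Mean squared distance from each vector to its winning node."""
    return float(np.mean([((w[bmu(x, w)] - x) ** 2).sum() for x in data]))

qe_before = quantization_error(services, weights)
for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)           # decaying learning rate
    for x in services:
        j = bmu(x, weights)
        weights[j] += lr * (x - weights[j])   # pull the winner toward the input
qe_after = quantization_error(services, weights)

print(qe_after < qe_before)   # training reduces quantization error
```

After training, similar service descriptions map to the same node, which is the clustering behavior a service-discovery tool can exploit; a full SOM would add a neighborhood function so that nearby nodes on the map represent similar services.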

A Study on Model for Drivable Area Segmentation based on Deep Learning (딥러닝 기반의 주행가능 영역 추출 모델에 관한 연구)

  • Jeon, Hyo-jin;Cho, Soo-sun
    • Journal of Internet Computing and Services
    • /
    • v.20 no.5
    • /
    • pp.105-111
    • /
    • 2019
  • Core technologies leading the Fourth Industrial Revolution era, such as artificial intelligence, big data, and autonomous driving, are implemented and serviced through the rapid development of computing power and hyper-connected networks based on the Internet of Things. In this paper, we implement two different models for drivable area segmentation in various environments and propose the better model by comparing their results. The models use DeepLab V3+ and Mask R-CNN, which perform well in image segmentation and are used in many studies of autonomous driving technology. For driving information in various environments, we use the BDD dataset, which provides driving videos and images in various weather conditions and at day and night. The results show that Mask R-CNN, with 68.33% IoU, outperforms DeepLab V3+, with 48.97% IoU. In addition, on visual inspection of the drivable area segmentation of driving images, the accuracy of Mask R-CNN is 83% while that of DeepLab V3+ is 69%. This indicates that Mask R-CNN is more effective than DeepLab V3+ for drivable area segmentation.
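The IoU (intersection over union) metric used in the comparison can be computed on binary drivable-area masks as follows (toy 4x4 masks for illustration):

```python
import numpy as np

def iou(pred, gt):
    """IoU between a predicted mask and a ground-truth mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0

gt = np.zeros((4, 4), dtype=int)
gt[:2, :] = 1              # ground truth: top two rows (8 pixels)
pred = np.zeros((4, 4), dtype=int)
pred[:3, :2] = 1           # prediction: a 3x2 block (6 pixels)

print(iou(pred, gt))       # intersection 4, union 10 -> 0.4
```

Averaging this per-image score over a dataset yields the kind of model-level IoU figures (68.33% vs. 48.97%) reported above.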