• Title/Summary/Keyword: number system

Search Results: 20,627

Relationships Among Employees' IT Personnel Competency, Personal Work Satisfaction, and Personal Work Performance: A Goal Orientation Perspective (조직구성원의 정보기술 인적역량과 개인 업무만족 및 업무성과 간의 관계: 목표지향성 관점)

  • Heo, Myung-Sook;Cheon, Myun-Joong
    • Asia pacific journal of information systems
    • /
    • v.21 no.4
    • /
    • pp.63-104
    • /
    • 2011
  • The study examines the relationships among employees' goal orientation, IT personnel competency, and personal effectiveness. Goal orientation includes learning goal orientation, performance approach goal orientation, and performance avoid goal orientation. Personal effectiveness consists of personal work satisfaction and personal work performance. In general, IT personnel competency refers to the skills, expertise, and knowledge that IT experts require to perform IT activities in organizations. However, with the advent of the internet and the generalization of IT, IT personnel competency has become an important competency not only of technological experts but also of employees in general. While the competency of IT itself is important, an appropriate harmony between IT personnel's business capability and technological capability enhances the value of human resources and thus provides organizations with sustainable competitive advantages. The rapid pace of organizational change places increased pressure on employees to continually update their skills and adapt their behavior to new organizational realities. This challenge raises a number of important questions concerning organizational behavior: Why do some employees display remarkable flexibility in their behavioral responses to changes in the organization, whereas others firmly resist change or experience great stress when faced with the need to alter their behavior? Why do some employees continually strive to improve themselves over their life span, whereas others are content to forge through life using the same basic knowledge and skills? Why do some employees throw themselves enthusiastically into challenging tasks, whereas others avoid them? Goal orientation, a construct proposed in organizational psychology, provides at least a partial answer to these questions. Goal orientations are stable personal characteristics fostered by "self-theories" about the nature and development of the attributes (such as intelligence, personality, abilities, and skills) that people have. Self-theories are one's beliefs, and goal orientations are the achievement motivation revealed in seeking goals in accordance with those beliefs. The goal orientations include learning goal orientation, performance approach goal orientation, and performance avoid goal orientation. Specifically, a learning goal orientation refers to a preference to develop the self by acquiring new skills, mastering new situations, and improving one's competence. A performance approach goal orientation refers to a preference to demonstrate and validate the adequacy of one's competence by seeking favorable judgments and avoiding negative judgments. A performance avoid goal orientation refers to a preference to avoid the disproving of one's competence and to avoid negative judgments about it, while focusing on performance. The study also examines the moderating role of employees' work career to investigate differences in the relationship between IT personnel competency and personal effectiveness. The study analyzes the collected data using PASW 18.0 and PLS (Partial Least Squares), and uses the PLS bootstrapping algorithm (sample size: 500) to test the research hypotheses.
The result shows that the influences of both a learning goal orientation (β = 0.301, t = 3.822, p < 0.001) and a performance approach goal orientation (β = 0.224, t = 2.710, p < 0.01) on IT personnel competency are positively significant, while the influence of a performance avoid goal orientation (β = -0.142, t = 2.398, p < 0.05) on IT personnel competency is negatively significant. This indicates that employees differ in their psychological and behavioral responses according to their goal orientation. The result also shows that the impact of IT personnel competency on both personal work satisfaction (β = 0.395, t = 4.897, p < 0.001) and personal work performance (β = 0.575, t = 12.800, p < 0.001) is positively significant, and that the impact of personal work satisfaction (β = 0.148, t = 2.432, p < 0.05) on personal work performance is positively significant. Finally, the impacts of the control variables (gender, age, type of industry, position, work career) on the relationships between IT personnel competency and personal effectiveness (personal work satisfaction and personal work performance) are partly significant. In addition, the study uses the PLS algorithm to compute the GoF (global criterion of goodness of fit) of the exploratory research model, which includes IT personnel competency as a mediating variable. The analysis shows that the GoF value of 0.45 is above GoF-large (0.36); therefore, the research model turns out to be good. The study also performs a Sobel test to assess the statistical significance of the mediating variable, IT personnel competency, which the PLS analysis had already shown to have a mediating effect in the research model. The Sobel test shows that the Z values are all statistically significant (above 1.96 or below -1.96), indicating that IT personnel competency plays a mediating role in the research model. At present, most employees are afraid of, and resistant to, organizational changes, particularly in organizations in which the acceptance and learning of a new information technology or information system is required. The problem is due to an increasing feeling of uneasiness and uncertainty about improving past practices in accordance with new organizational changes. It is not always possible for employees with positive attitudes to perform their work in line with organizational goals. Therefore, organizations need to identify what kinds of goal-oriented minds their employees have, motivate them to engage in self-directed learning, and provide them with an organizational environment that enhances the positive aspects of their work. Thus, the study provides researchers and practitioners with a matter of primary interest in goal orientation and IT personnel competency, of which they have been unaware until very recently. Academic and practical implications, limitations arising in the course of the research, and suggestions for future research directions are also discussed.
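
For illustration, the Sobel z statistic reported above can be computed directly from the path coefficients and their standard errors. The sketch below is a minimal Python version; the coefficient and standard-error values are hypothetical placeholders, not figures from the paper.

```python
import math

def sobel_z(a, b, se_a, se_b):
    """Sobel test statistic for a mediation path a*b.

    a, se_a: coefficient and standard error of the path
             independent variable -> mediator.
    b, se_b: coefficient and standard error of the path
             mediator -> dependent variable.
    """
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

# Hypothetical example values (not taken from the study):
z = sobel_z(a=0.30, b=0.40, se_a=0.08, se_b=0.09)
print(f"Sobel z = {z:.2f}")  # |z| > 1.96 suggests a significant mediating effect
```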

A Study on the Expression of CD44s and CD44v6 in Non-Small Cell Lung Carcinomas (비소세포성 폐암종의 CD44s 및 CD44v6의 발현에 대한 연구 -CD44의 발현에 대한 연구-)

  • Chang, Woon-Ha;Oh, Tae-Yun;Kim, Jung-Tae
    • Journal of Chest Surgery
    • /
    • v.39 no.1 s.258
    • /
    • pp.1-11
    • /
    • 2006
  • Background: CD44 is a glycoprotein on the cell surface that is involved in cell-to-cell and cell-to-matrix interactions. The standard form, CD44s, and multiple isoforms are determined by alternative splicing of 10 exons. Recent studies have suggested that CD44 may promote invasion and metastasis of various epithelial tumors as well as activation of lymphocytes and monocytes. The expression pattern of CD44 can differ according to tumor type. The authors studied the expression patterns of CD44s and one of its variants, CD44v6, in non-small cell lung carcinomas (NSCLC) to determine their implications for clinicopathologic aspects, including patient survival. Material and Method: A total of 89 primary NSCLCs (48 squamous cell carcinomas, 33 adenocarcinomas, and 8 undifferentiated large cell carcinomas) were retrieved from the period between 1985 and 1994. Immunohistochemistry was performed using monoclonal antibodies, and angiogenesis was evaluated by counting the number of tumor microvessels. Result: Seventy-one (79.8%) and 64 (71.9%) of the 89 NSCLCs revealed expression of CD44s and CD44v6, respectively. The expression of CD44s was well correlated with that of CD44v6 (r=0.710, p < 0.0001). The expression of CD44s and CD44v6 was associated with the histopathologic type of the NSCLCs, and squamous cell carcinoma was the type that showed the highest expression of CD44s and CD44v6 (p < 0.0001). The microvessel count was highest in adenocarcinomas (113.6±69.7 at 200-fold magnification and 54.8±41.1 at 400-fold magnification) and correlated with the tumor size of the TNM system (r=0.217, p=0.043) and CD44s expression (r=0.218, p=0.040). In adenocarcinoma, patients with higher CD44s expression survived for a shorter time than those with lower CD44s expression (p=0.0194), but this was not statistically significant on multivariate analysis (p=0.3298). Conclusion: The expression of both CD44s and CD44v6 may be associated with squamous differentiation in non-small cell lung carcinomas. The relationship of CD44s expression with the microvessel density of the tumor suggests an involvement of CD44s in tumor angiogenesis, which in turn would help tumor growth.

Comparative Analysis of Patterns of Care Study of Radiotherapy for Esophageal Cancer among Three Countries: South Korea, Japan and the United States (한국, 미국, 일본의 식도암 방사선 치료에 대한 PCS($1998{\sim}1999$) 결과의 비교 분석)

  • Hur, Won-Joo;Choi, Young-Min;Kim, Jeung-Kee;Lee, Hyung-Sik;Choi, Seok-Reyol;Kim, Il-Han
    • Radiation Oncology Journal
    • /
    • v.26 no.2
    • /
    • pp.83-90
    • /
    • 2008
  • Purpose: For the first time, a nationwide survey of the Patterns of Care Study (PCS) for the various radiotherapy treatments of esophageal cancer was carried out in South Korea. In order to compare the different parameters and to build a solid cooperative system, we compared the Korean results with those observed in the United States (US) and Japan. Materials and Methods: Two hundred forty-six esophageal cancer patients from 21 institutions were enrolled in the South Korean study. The patients received radiation therapy (RT) from 1998 to 1999. In order to compare these results with those from the United States, a published study by Suntharalingam, which included 414 patients (treated by RT) from 59 institutions between 1996 and 1999, was chosen. In order to compare the South Korean with the Japanese data, we chose two different studies. The results published by Gomi were selected as the surgery group, in which 220 esophageal cancer patients from 76 facilities were analyzed; these patients underwent surgery and received RT with or without chemotherapy between 1998 and 2001. The non-surgery group originated from a study by Murakami, in which 385 patients were treated either by RT alone or by RT with chemotherapy, but no surgery, between 1999 and 2001. Results: The median age of enrolled patients was highest in the Japanese non-surgery group (71 years old). The gender ratio was approximately 9:1 (male:female) in both the Korean and Japanese studies, whereas females made up 23.1% of the study population in the US study. Adenocarcinoma outnumbered squamous cell carcinoma in the US study, whereas squamous cell carcinoma was more prevalent in both the Korean and Japanese studies (Korea 96.3%, Japan 98%). An esophagogram, endoscopy, and chest CT scan were the main diagnostic evaluation modalities used in all three countries. The US and Japan used the abdominal CT scan more frequently than abdominal ultrasonography. Treatment with radiotherapy alone was used most rarely in the US study (9.5%), compared to the Korean (23.2%) and Japanese (39%) studies. The combination of the three modalities (surgery + RT + chemotherapy) was performed least often in Korea (11.8%) compared to the Japanese (49.5%) and US (32.8%) studies. Chemotherapy (89%) and chemotherapy with concurrent chemoradiotherapy (97%) were most frequently used in the US study. Fluorouracil (5-FU) and cisplatin were the most preferred drugs in all three countries. The median radiation dose was 50.4 Gy in the US study, as compared to 55.8 Gy in the Korean study regardless of whether an operation was performed. However, in Japan, different median doses were delivered for the surgery (48 Gy) and non-surgery (60 Gy) groups. Conclusion: Although some aspects of the evaluation of esophageal cancer and its various treatment modalities were heterogeneous among the three countries surveyed, we found no remarkable differences in the RT dose or technique, which includes the number of portals and energy beams.

Analysis of Critical Control Points through Field Assessment of Sanitation Management Practices in Foodservice Establishments (현장실사를 통한 급식유형별 위생관리실태 분석)

  • Kwak Tong-Kyung;Lee Kyung-Mi;Chang Hye-Ja;Kang Yong-Jae;Hong Wan-Soo;Moon Hye-Kyung
    • Korean journal of food and cookery science
    • /
    • v.21 no.3 s.87
    • /
    • pp.290-300
    • /
    • 2005
  • Increased sanitation management of foodservice establishments is required because most of the reported foodborne-disease outbreaks occur in the foodservice industry. The purpose of this study was to determine the important control points for good sanitation. In this study, we inspected twenty foodservice establishments in Seoul, Kyunggi, and Kyungnam with a self-developed monitoring tool. These foodservice establishments included secondary schools, universities, and industries. Six of them had been designated HACCP-certified establishments by the Korea Food and Drug Administration. The inspection was conducted from June to August 2002. The inspection tool consisted of nine dimensions and sixty-five items. The dimensions were 'personal sanitation', 'supply of raw food', 'food storage', 'handling of raw food and ready-to-eat food', 'cleaning and sterilization', 'waste control', 'pest control', and 'control of establishment and equipment'. The highest possible score on this inspection tool is 105 points. Statistical data analysis was completed using the SPSS package (11.0) for descriptive analysis and the Kruskal-Wallis test. The score for the secondary schools (83.6 points) was higher than for the others, and the number of in-compliance items was 50.9 on average; therefore, we concluded that the secondary schools' sanitation condition was good. The foodservice establishments that had acquired HACCP certification scored 89.7 points in total, which was significantly higher than the score of establishments that had not applied HACCP to their foodservices. Instituting the HACCP system in a foodservice operation is thus very effective for sanitation management. Many out-of-compliance observations were found in the dimensions of 'waste control', 'control of establishment and equipment', and 'supply of raw food'. The 'clean condition of refrigerator' item was out of compliance in 65% of establishments, the highest rate in this study. 'Notification and observance of heating/reheating temperature' was out of compliance in 45%. Items that were out of compliance in over 30% of establishments were 'sterilization of knives and chopping boards in cooking', 'education of workers', 'maintaining refrigerator temperature below 5℃', and 'countermeasures for infected workers'. Overall, most of the foodservice establishments were poorly managed in terms of temperature control and cross-contamination. The important control points revealed in this study were preventing contamination, complying with cooking temperatures, and managing raw food and refrigerators. Therefore, foodservice establishments should pay attention to education and training on these important control points. The systematic sanitation management monitoring tool developed in this study can be effectively applied to self-inspection and to improving the sanitary conditions of individual foodservice operations.
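
As an illustration of the kind of comparison reported above, a Kruskal-Wallis test on total inspection scores across establishment types might look like the following minimal Python sketch; the score lists are hypothetical placeholders, not data from the study.

```python
from scipy import stats

# Hypothetical total inspection scores (out of 105) by establishment type.
secondary_schools = [88, 81, 85, 79, 86, 84]
universities      = [72, 69, 75, 78, 70, 74]
industries        = [77, 73, 80, 71, 76, 79, 75]

# Kruskal-Wallis H-test: a non-parametric comparison of score
# distributions across the three groups.
h_stat, p_value = stats.kruskal(secondary_schools, universities, industries)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```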

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the initial GA population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid aspect of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the region to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these centers and secondary markets, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the reverse logistics network. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively), and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market that is opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiment, the proposed HGA and a conventional competing approach are compared with each other using various measures of performance. As the conventional competing approach, the GA approach by Yun (2013) is used. The GA approach does not include any local search technique such as the IHCM used in the proposed HGA approach. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performances of the HGA and GA approaches. The MIP models for the two types of RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB of RAM running Windows XP. The parameters used in the HGA and GA approaches are as follows: the total number of generations is 10,000, the population size 20, the crossover rate 0.5, the mutation rate 0.1, and the search range for the IHCM is 2.0. A total of 20 iterations are made to eliminate the randomness of the HGA and GA searches.
With performance comparisons, network representations by opening/closing decision, and convergence processes for the two types of RLNCC, the experimental results show that the HGA has significantly better performance than the GA in terms of the optimal solution, though the GA is slightly quicker than the HGA in terms of CPU time. Finally, it is shown that the proposed HGA approach is more efficient than the conventional GA approach on the two types of RLNCC, since the former has a local search process in addition to the GA search process, while the latter has a GA search process alone. In a future study, much larger RLNCCs will be tested to assess the robustness of our approach.
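
The hybrid structure described above (a GA loop with a hill-climbing local search applied to the best individual) can be sketched roughly as follows. This is a minimal illustration of the general idea on a toy binary objective, not the authors' MIP formulation; the fitness function, neighborhood definition, and parameter values are assumptions.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 20, 200
CROSSOVER_RATE, MUTATION_RATE = 0.5, 0.1

def fitness(bits):
    # Toy stand-in for the MIP objective: here we simply maximize the
    # number of 1-bits; the real model would evaluate total network cost.
    return sum(bits)

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(GENOME_LEN), 2))
    return a[:i] + b[i:j] + a[j:], b[:i] + a[i:j] + b[j:]

def mutate(bits):
    return [1 - v if random.random() < MUTATION_RATE else v for v in bits]

def hill_climb(bits, steps=10):
    # Iterative hill climbing: flip one bit at a time, keep improvements.
    best = bits[:]
    for _ in range(steps):
        cand = best[:]
        k = random.randrange(GENOME_LEN)
        cand[k] = 1 - cand[k]
        if fitness(cand) > fitness(best):
            best = cand
    return best

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    elite = population[0]                      # elitist strategy (simplified)
    offspring = [elite]
    while len(offspring) < POP_SIZE:
        p1, p2 = random.sample(population[:POP_SIZE // 2], 2)
        if random.random() < CROSSOVER_RATE:
            c1, c2 = two_point_crossover(p1, p2)
        else:
            c1, c2 = p1[:], p2[:]
        offspring += [mutate(c1), mutate(c2)]
    population = offspring[:POP_SIZE]
    population[0] = hill_climb(population[0])  # local search on best individual

print("best fitness:", fitness(max(population, key=fitness)))
```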

Derivation of Digital Music's Ranking Change Through Time Series Clustering (시계열 군집분석을 통한 디지털 음원의 순위 변화 패턴 분류)

  • Yoo, In-Jin;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.171-191
    • /
    • 2020
  • This study focused on digital music, one of the most valuable cultural assets of modern society, which occupies a particularly important position in the flow of the Korean Wave. Digital music data were collected based on the Gaon Chart, a well-established music chart in Korea. Through this, the changes in the rankings of the songs that entered the chart over 73 weeks were collected. Afterwards, patterns with similar characteristics were derived through time series cluster analysis, and a descriptive analysis was performed on the notable features of each pattern. The research process suggested by this study is as follows. First, in the data collection process, time series data were collected to track the ranking changes of digital music. Subsequently, in the data processing stage, the collected data were matched with the rankings over time, and the music titles and artist names were processed. Each analysis was then performed sequentially in two stages consisting of exploratory analysis and explanatory analysis. The data collection period was limited to the period before the 'music bulk buying phenomenon', a reliability issue related to music rankings in Korea. Specifically, the period covers 73 weeks, with December 31, 2017 to January 6, 2018 as the first week and May 19, 2019 to May 25, 2019 as the last. The analysis targets were limited to digital music released in Korea. In particular, digital music was collected based on the Gaon Chart, a well-known music chart in Korea. Unlike the private music charts operated in Korea, the Gaon Chart is approved by government agencies and has baseline reliability; therefore, it can be considered to command more public confidence than the ranking information provided by other services. The contents of the collected data are as follows: the period and ranking, the name of the song, the name of the artist, the name of the album, the Gaon index, the production company, and the distribution company were collected for the songs that entered the top 100 on the music chart within the collection period. Through data collection, 7,300 chart entries in the top 100 were identified over the 73 weeks. Because digital music frequently stays on the chart for two weeks or more, duplicate songs were removed during pre-processing: the number and positions of duplicated songs were checked through a duplicate-check function and then deleted, forming the data for analysis. Through this, a list of 742 unique songs for analysis was secured from the 7,300 chart entries. In addition, a total of 16 patterns were derived through time series cluster analysis of the ranking changes. Based on the derived patterns, two representative patterns were identified: 'Steady Seller' and 'One-Hit Wonder'. Furthermore, the two patterns were subdivided into five patterns in consideration of the survival period of the song and its ranking. The important characteristics of each pattern are as follows. First, the artist's superstar effect and the bandwagon effect were strong in the one-hit-wonder pattern; therefore, when consumers choose digital music, they are strongly influenced by the superstar effect and the bandwagon effect.
Second, through the Steady Seller pattern, we identified songs that have been chosen by consumers for a very long time, and we examined the patterns of the most-selected songs in terms of consumer needs. Contrary to popular belief, it was the steady-seller (mid-term) pattern, not the one-hit-wonder pattern, that received the most choices from consumers. Particularly noteworthy is that the 'climbing the chart' phenomenon, which runs contrary to the existing patterns, was confirmed within the steady-seller pattern. This study focuses on the change in the rankings of songs over time, a relatively neglected aspect of digital music research. In addition, a new approach to music research was attempted by subdividing the patterns of ranking change rather than predicting the success and ranking of songs.
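
The core analytical step above, clustering weekly ranking trajectories, could be sketched roughly as below. This is a minimal illustration, assuming each song's trajectory has been aligned to a fixed number of weeks (missing weeks filled with a 'dropped off the chart' value of 101); the array contents, the choice of plain k-means, and the number of clusters are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: one row per song, one column per week on the chart,
# values are chart positions 1-100, with 101 meaning "not on the chart".
rng = np.random.default_rng(0)
n_songs, n_weeks = 742, 73
trajectories = rng.integers(1, 102, size=(n_songs, n_weeks)).astype(float)

# Cluster the ranking trajectories; the study derived 16 patterns,
# so 16 clusters are used here for illustration.
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0)
labels = kmeans.fit_predict(trajectories)

# Inspect one cluster's average trajectory, e.g. to spot a
# "steady seller" shape (consistently low rank values over many weeks).
cluster_id = 0
mean_curve = trajectories[labels == cluster_id].mean(axis=0)
print("cluster sizes:", np.bincount(labels))
print("cluster 0 mean ranks (first 10 weeks):", np.round(mean_curve[:10], 1))
```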

The Framework of Research Network and Performance Evaluation on Personal Information Security: Social Network Analysis Perspective (개인정보보호 분야의 연구자 네트워크와 성과 평가 프레임워크: 소셜 네트워크 분석을 중심으로)

  • Kim, Minsu;Choi, Jaewon;Kim, Hyun Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.177-193
    • /
    • 2014
  • Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction and the risk of negative consequences from providing such information. More recently, the frequent disclosure of private information has raised concerns about privacy and its impacts. This has motivated researchers in various fields to explore information privacy issues to address these concerns. Accordingly, the need for information privacy policies and technologies for collecting and storing data has increased, as has information privacy research in various fields such as medicine, computer science, business, and statistics. The occurrence of various information security incidents has made finding experts in the information security field an important issue. Objective measures for finding such experts are required, as the process is currently rather subjective. Based on social network analysis, this paper proposes a framework for evaluating the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially collecting about 2,000 papers covering the period between 2005 and 2013. Outliers and data from irrelevant papers were dropped, leaving 784 papers to test the suggested hypotheses. The co-authorship network data for co-author relationships, publishers, affiliations, and so on were analyzed using social network measures including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported. In line with our hypothesis, degree centrality (H1) was supported, with a positive influence on the researchers' publishing performance (p<0.001). This finding indicates that as the degree of cooperation increased, the publishing performance of researchers also increased. In addition, closeness centrality (H2) was also positively associated with researchers' publishing performance (p<0.001), suggesting that as the efficiency of information acquisition increased, researchers' publishing performance also increased. This paper identified differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field. The co-authorship network for information privacy can aid in understanding the deep relationships among researchers. In addition, by extracting the characteristics of publishers and affiliations, this paper suggested an understanding of the social network measures and their potential for finding experts in the information privacy field. Social concerns about securing the objectivity of experts have increased, because experts in the information privacy field frequently participate in political consultation and in the support and evaluation of business education. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field and is useful for people who are in charge of managing research human resources. This study has some limitations, which provide opportunities and suggestions for future research.
The small sample size makes it difficult to generalize the findings on differences in information diffusion according to media and proximity. Therefore, further studies could consider an increased sample size and greater media diversity, and could explore in more detail the differences in information diffusion according to media type and information proximity. Moreover, previous network research has commonly assumed a causal relationship between the independent and dependent variables (Kadushin, 2012). In this study, degree centrality as an independent variable might have a causal relationship with performance as a dependent variable. However, in network analysis research, network indices can only be computed after the network relationships have been created. An annual analysis could help mitigate this limitation.
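
As a rough illustration of the network measures used above, the sketch below builds a small co-authorship graph and computes degree, closeness, and eigenvector centrality with NetworkX; the author lists are hypothetical placeholders, not NDSL data.

```python
import itertools
import networkx as nx

# Hypothetical author lists, one list per paper.
papers = [
    ["Kim", "Lee", "Park"],
    ["Kim", "Choi"],
    ["Lee", "Choi", "Jung"],
    ["Park", "Jung"],
]

# Build the co-authorship network: authors are nodes, and an edge links
# every pair of authors who appear on the same paper.
G = nx.Graph()
for authors in papers:
    G.add_edges_from(itertools.combinations(authors, 2))

degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)
eigenvector = nx.eigenvector_centrality(G)

for author in sorted(G.nodes):
    print(f"{author:5s}  degree={degree[author]:.2f}  "
          f"closeness={closeness[author]:.2f}  "
          f"eigenvector={eigenvector[author]:.2f}")
```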

Development of the Accident Prediction Model for Enlisted Men through an Integrated Approach to Datamining and Textmining (데이터 마이닝과 텍스트 마이닝의 통합적 접근을 통한 병사 사고예측 모델 개발)

  • Yoon, Seungjin;Kim, Suhwan;Shin, Kyungshik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.1-17
    • /
    • 2015
  • In this paper, we report what we have observed with regard to a prediction model for the military based on enlisted men's internal data (cumulative records) and external data (SNS data). This work is significant for the military's efforts to supervise enlisted men. In spite of their efforts, many commanders have failed to prevent accidents involving their subordinates. One of the important duties of officers is to take care of their subordinates and prevent unexpected accidents. However, it is hard to prevent accidents, so a proper method must be found. Our motivation for presenting this paper is to make it possible to predict accidents using enlisted men's internal and external data. The biggest issue facing the military is the occurrence of accidents by enlisted men related to maladjustment and the relaxation of military discipline. The core method of preventing accidents by soldiers is to identify problems and manage them quickly. Commanders predict accidents by interviewing their soldiers and observing their surroundings. This requires considerable time and effort, and the results differ significantly depending on the capabilities of the commanders. In this paper, we seek to predict accidents with objective data that can easily be obtained. Recently, records of enlisted men, as well as SNS communication between commanders and soldiers, have made it possible to predict and prevent accidents. This paper concerns the application of data mining to identify soldiers' interests, predict accidents, and make use of internal and external (SNS) data. We propose a combination of topic analysis and a decision tree method. The study is conducted in two steps. First, topic analysis is conducted on the SNS data of enlisted men. Second, the decision tree method is used to analyze the internal data together with the results of the first analysis. The dependent variable for these analyses is the presence of any accident. In order to analyze the SNS data, we require tools such as text mining and topic analysis. We used SAS Enterprise Miner 12.1, which provides a text miner module. Our approach for finding soldiers' interests is composed of three main phases: collecting data, topic analysis, and converting the topic analysis results into scores for use as independent variables. In the first phase, we collect enlisted men's SNS data by commander's ID. After gathering the unstructured SNS data, the topic analysis phase extracts issues from them. For simplicity, 5 topics (vacation, friends, stress, training, and sports) are extracted from 20,000 articles. In the third phase, using these 5 topics, we quantify them as personal scores. After quantifying the topics, we add these scores to the independent variables, which are composed of 15 internal data fields. Then, we build two decision trees. The first tree is composed of the internal data only. The second tree is composed of the external data (SNS) as well as the internal data. After that, we compare the misclassification results from SAS Enterprise Miner. The first model's misclassification rate is 12.1%; the second model's misclassification rate is 7.8%. This method predicts accidents with an accuracy of approximately 92%, and the gap between the two models is 4.3%. Finally, we test whether the difference between them is meaningful using the McNemar test. The difference is statistically significant (p-value: 0.0003). This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small amount of enlisted men's data.
Second, various independent variables used in the decision tree model are treated as categorical variables instead of continuous variables, so some information is lost. In spite of extensive efforts to provide prediction models for the military, commanders' predictions are accurate only when they have sufficient data about their subordinates. Our proposed methodology can support decision-making in the military. This study is expected to contribute to the prevention of accidents in the military based on scientific analysis of enlisted men and their proper management.
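
The model comparison described above (internal-only features versus internal plus SNS topic scores, compared with a McNemar test) can be sketched in Python as follows. The feature matrices and labels here are randomly generated placeholders, and scikit-learn/statsmodels stand in for the SAS Enterprise Miner workflow, so treat this only as an outline of the procedure.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(42)
n = 1000
X_internal = rng.normal(size=(n, 15))          # 15 internal record features
X_topics = rng.normal(size=(n, 5))             # 5 SNS topic scores
y = rng.integers(0, 2, size=n)                 # accident occurred or not

X_combined = np.hstack([X_internal, X_topics])
idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3,
                                        random_state=0)

# Model 1: internal data only; Model 2: internal + SNS topic scores.
tree1 = DecisionTreeClassifier(max_depth=5, random_state=0)
tree2 = DecisionTreeClassifier(max_depth=5, random_state=0)
tree1.fit(X_internal[idx_train], y[idx_train])
tree2.fit(X_combined[idx_train], y[idx_train])

correct1 = tree1.predict(X_internal[idx_test]) == y[idx_test]
correct2 = tree2.predict(X_combined[idx_test]) == y[idx_test]

# McNemar test on the 2x2 table of agreement/disagreement in correctness.
table = [[np.sum(correct1 & correct2), np.sum(correct1 & ~correct2)],
         [np.sum(~correct1 & correct2), np.sum(~correct1 & ~correct2)]]
result = mcnemar(table, exact=False, correction=True)
print(f"McNemar statistic = {result.statistic:.3f}, p-value = {result.pvalue:.4f}")
```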

A Study on Recent Research Trend in Management of Technology Using Keywords Network Analysis (키워드 네트워크 분석을 통해 살펴본 기술경영의 최근 연구동향)

  • Kho, Jaechang;Cho, Kuentae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.101-123
    • /
    • 2013
  • Recently, due to advancements in science and information technology, socio-economic and business areas are changing from an industrial economy to a knowledge economy. Furthermore, companies need to create new value through continuous innovation, the development of core competencies and technologies, and technological convergence. Therefore, the identification of major trends in technology research and the interdisciplinary, knowledge-based prediction of integrated technologies and promising techniques are required for firms to gain and sustain competitive advantage and future growth engines. The aim of this paper is to understand recent research trends in management of technology (MOT) and to foresee promising technologies with deep knowledge of both technology and business. Furthermore, this study intends to provide a clear way to find new technological value for constant innovation and to capture core technologies and technology convergence. Bibliometrics is a quantitative analysis used to understand the characteristics of a body of literature. Traditional bibliometrics is limited in that it cannot capture the relationship between trends in technology management and the technologies themselves, since it focuses on quantitative indices such as citation frequency. To overcome this issue, network-focused bibliometrics has been used instead, mainly employing co-citation and co-word analysis. In this study, a keyword network analysis, a form of social network analysis, is performed to analyze recent research trends in MOT. For the analysis, we collected keywords from research papers published in international journals related to MOT between 2002 and 2011, constructed a keyword network, and then conducted the keyword network analysis. Over the past 40 years, studies in social networks have attempted to understand social interactions through the network structure represented by connection patterns. In other words, social network analysis has been used to explain the structures and behaviors of various social formations such as teams, organizations, and industries. In general, social network analysis uses data in the form of a matrix. In our context, the matrix depicts the relations between rows as papers and columns as keywords, where the relations are represented as binary values. Even though there are no direct relations between the published papers themselves, relations between papers can be derived from the paper-keyword matrix, in which each cell has 1 if the paper includes the keyword and 0 otherwise. For example, a keyword network can be configured by connecting papers that include one or more of the same keywords. After constructing the keyword network, we analyzed keyword frequency, the structural characteristics of the keyword network, the preferential attachment and growth of new keywords, components, and centrality. The results of this study are as follows. First, a paper has 4.574 keywords on average; 90% of keywords were used three or fewer times over the past 10 years, and about 75% of keywords appeared only once. Second, the keyword network in MOT is a small-world network and a scale-free network in which a small number of keywords tend to dominate. Third, the gap between the rich (nodes with more edges) and the poor (nodes with fewer edges) in the network is getting bigger as time goes on. Fourth, most newly entering keywords become poor nodes within about 2~3 years.
Finally, the keywords with high degree centrality, betweenness centrality, and closeness centrality are "Innovation," "R&D," "Patent," "Forecast," "Technology transfer," "Technology," and "SME". We hope that the results of the analysis will help researchers of MOT identify major trends in technology research, seek new research topics, and serve as useful reference information when they pursue consilience with other fields of study.
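
A minimal sketch of the keyword-network construction described above might look like the following, assuming each paper is reduced to its keyword list; the keyword lists are hypothetical placeholders, and the centrality measures mirror those named in the abstract.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical paper-keyword lists (stand-ins for keywords harvested
# from MOT journal papers between 2002 and 2011).
paper_keywords = {
    "P1": ["Innovation", "R&D", "Patent"],
    "P2": ["Innovation", "Technology transfer"],
    "P3": ["Patent", "Forecast", "SME"],
    "P4": ["R&D", "Technology", "SME"],
    "P5": ["Innovation", "Forecast"],
}

# Bipartite paper-keyword graph (equivalent to the binary paper x keyword matrix).
B = nx.Graph()
for paper, keywords in paper_keywords.items():
    B.add_node(paper, bipartite=0)
    B.add_nodes_from(keywords, bipartite=1)
    B.add_edges_from((paper, kw) for kw in keywords)

# Project onto the keyword side: two keywords are linked (with a weight)
# when they co-occur in at least one paper.
keyword_nodes = {n for n, d in B.nodes(data=True) if d["bipartite"] == 1}
K = bipartite.weighted_projected_graph(B, keyword_nodes)

degree = nx.degree_centrality(K)
betweenness = nx.betweenness_centrality(K)
closeness = nx.closeness_centrality(K)
for kw in sorted(degree, key=degree.get, reverse=True)[:5]:
    print(f"{kw:20s} degree={degree[kw]:.2f} "
          f"betweenness={betweenness[kw]:.2f} closeness={closeness[kw]:.2f}")
```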

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.79-104
    • /
    • 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a method for solving problems in various fields. In particular, deep learning is known to have excellent performance when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling both image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to be able to process both image and text data, image captioning has established itself as one of the key fields in AI research owing to its wide applicability. In addition, much research has been conducted to improve the performance of image captioning in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these many recent efforts, it is difficult to find any research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person who encounters it. Moreover, the way of interpreting and expressing the image also differs according to the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, from the perspective of identifying the image's constituent objects and their relationships. On the contrary, domain experts tend to recognize an image by focusing on the specific elements necessary to interpret it based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate captions specialized in each domain by utilizing the expertise of experts in the corresponding domain. Specifically, after performing pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, simple adoption of transfer learning using expertise data may invoke another type of problem: simultaneous learning with captions of various characteristics may invoke the so-called 'inter-observation interference' problem, which makes it difficult to perform pure learning of each characteristic point of view. When learning from vast amounts of data, most of this interference is self-purified and has little impact on the learning results. On the contrary, in the case of fine-tuning, where learning is performed on a small amount of data, the impact of such interference on learning can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each characteristic.
In order to confirm the feasibility of the proposed methodology, we performed experiments utilizing the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 pairs of images and expertise captions were created, and these data were used for the expertise transplantation experiments. The experiments confirmed that the captions generated according to the proposed methodology reflect the perspective of the transplanted expertise, whereas the captions generated through learning on general data alone contain much content irrelevant to the expert interpretation. In this paper, we propose a novel approach to specialized image interpretation. To achieve this goal, we present a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that much research will be actively conducted to solve the problem of the lack of expertise data and to improve the performance of image captioning.
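
The general 'pre-train on a large general caption set, then fine-tune on a small expertise caption set' idea described above might be sketched roughly as follows in PyTorch. This is only an outline under assumed names: the encoder/decoder classes, the freezing choices, the checkpoint path, and the data handling are hypothetical, not the authors' architecture or their character-independent scheme.

```python
import torch
import torch.nn as nn
from torchvision import models

class CaptionDecoder(nn.Module):
    """Hypothetical LSTM decoder that turns image features into captions."""
    def __init__(self, feat_dim, vocab_size, hidden=512, embed=256):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden)
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, features, captions):
        h0 = self.init_h(features).unsqueeze(0)          # (1, B, hidden)
        c0 = torch.zeros_like(h0)
        emb = self.embed(captions)                        # (B, T, embed)
        hidden_states, _ = self.lstm(emb, (h0, c0))
        return self.out(hidden_states)                    # (B, T, vocab)

# Pre-trained CNN encoder; frozen so only the decoder is adapted.
encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
encoder.fc = nn.Identity()
for p in encoder.parameters():
    p.requires_grad = False

decoder = CaptionDecoder(feat_dim=2048, vocab_size=10000)
# Assume decoder weights pre-trained on a large general caption corpus
# (e.g. MSCOCO) were saved beforehand; the file name is a placeholder.
# decoder.load_state_dict(torch.load("decoder_pretrained_mscoco.pt"))

# Fine-tune on the small expertise caption set with a low learning rate,
# so the general captioning ability is adapted rather than overwritten.
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, captions):
    with torch.no_grad():
        features = encoder(images)                        # (B, 2048)
    logits = decoder(features, captions[:, :-1])          # predict next token
    loss = criterion(logits.reshape(-1, logits.size(-1)),
                     captions[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```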