• Title/Summary/Keyword: Scientists

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae; Moon Chang-Joo; Baik Doo-Kwon
    • Journal of KIISE: Computing Practices and Letters / v.12 no.3 / pp.192-207 / 2006
  • Today, the SSL protocol is used as a core part of various computing environments and security systems. However, the SSL protocol has several problems that stem from its operational rigidity. First, it places a considerable burden on CPU utilization, lowering the performance of the security service in encrypted transactions, because it encrypts all data transferred between a server and a client. Second, it can be vulnerable to cryptanalysis because the key is used with a fixed algorithm. Third, it is difficult to add and use new cryptographic algorithms. Finally, it is difficult for developers to learn the cryptography APIs (Application Program Interfaces) needed for the SSL protocol. Hence, we need to address these problems and, at the same time, we need a secure and convenient way to operate the SSL protocol and to handle data efficiently. In this paper, we propose an SSL component that is designed and implemented using the CBD (Component Based Development) concept to satisfy these requirements. The SSL component provides not only data encryption services, like the SSL protocol, but also convenient APIs for developers unfamiliar with security. Furthermore, because the SSL component can be reused, it can improve productivity and reduce development cost; when new algorithms are added or existing algorithms are changed, it remains compatible and easy to integrate. The SSL component performs the SSL protocol service at the application layer. We first elicit the requirements and then design and implement the SSL component along with the confidentiality and integrity components on which it depends. All of these components are implemented as EJBs, so data can be handled efficiently by encrypting/decrypting only the data the user selects, and usability is improved by letting the user choose the data and the mechanism as intended. In conclusion, our tests and evaluations show that the SSL component is more usable and efficient than the existing SSL protocol, because its processing time increases at a lower rate than that of the SSL protocol.
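
As a rough, hypothetical illustration of the selective, pluggable encryption service this abstract describes, the sketch below shows a component that encrypts only user-selected fields and allows the cipher to be swapped. It is written in Python with the `cryptography` package rather than the authors' EJB/Java implementation, and all class and field names are assumptions.

```python
# Hypothetical sketch of a selective-encryption component in the spirit of the
# CBD-based SSL component described above (the paper's version is EJB/Java).
from cryptography.fernet import Fernet  # pip install cryptography


class SelectiveCipherComponent:
    """Encrypts only chosen fields of a record with a pluggable cipher."""

    def __init__(self, cipher=None):
        # Any object exposing encrypt()/decrypt() can be plugged in; Fernet is a default.
        self._cipher = cipher or Fernet(Fernet.generate_key())

    def protect(self, record: dict, fields: set) -> dict:
        """Encrypt only the selected fields, leaving the rest in plaintext."""
        return {
            k: self._cipher.encrypt(v.encode()) if k in fields else v
            for k, v in record.items()
        }

    def unprotect(self, record: dict, fields: set) -> dict:
        """Decrypt the previously selected fields back to strings."""
        return {
            k: self._cipher.decrypt(v).decode() if k in fields else v
            for k, v in record.items()
        }


# Usage: only the card number is encrypted, reducing CPU cost versus full encryption.
component = SelectiveCipherComponent()
protected = component.protect({"name": "Kim", "card": "1234-5678"}, {"card"})
print(component.unprotect(protected, {"card"}))
```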

A Control Method for designing Object Interactions in 3D Game (3차원 게임에서 객체들의 상호 작용을 디자인하기 위한 제어 기법)

  • 김기현; 김상욱
    • Journal of KIISE: Computing Practices and Letters / v.9 no.3 / pp.322-331 / 2003
  • As the complexity of a 3D game increases with the various factors of the game scenario, controlling the interrelation of the game objects becomes a problem. Therefore, a game system needs to coordinate the responses of the game objects, and it is also necessary to control the animation behaviors of the game objects in terms of the game scenario. To produce realistic game simulations, a system has to include a structure for designing the interactions among the game objects. This paper presents a method for designing a dynamic control mechanism for the interaction of the game objects in the game scenario. For this method, we suggest a game agent system as a framework based on intelligent agents that can make decisions using specific rules. The game agent system is used to manage environment data, to simulate the game objects, to control interactions among game objects, and to support a visual authoring interface that can define various interrelations of the game objects. These techniques can handle the autonomy level of the game objects, the associated collision avoidance method, and so on. It is also possible to give the game objects a coherent decision-making ability with respect to changes in the scene. In this paper, rule-based behavior control was designed to guide the simulation of the game objects. The rules are pre-defined by the user through a visual interface for designing their interactions. The Agent State Decision Network, which is composed of visual elements, passes information and infers the current state of the game objects. All of these methods can monitor and check variations in the motion states of the game objects in real time. Finally, we present a validation of the control method together with a simple case-study example. In this paper, we design and implement supervised classification systems for high-resolution satellite images. The systems support various interfaces and statistical data on training samples so that we can select the most effective training data. In addition, new classification algorithms and satellite image formats can be added easily through the modularized systems. The classifiers take into account the characteristics of the spectral bands in the selected training data and provide various supervised classification algorithms, including Parallelepiped, Minimum distance, Mahalanobis distance, Maximum likelihood, and Fuzzy theory. We used IKONOS images as input and verified the systems for the classification of high-resolution satellite images.
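
The rule-driven state control described above might look roughly like the following hypothetical Python sketch, in which user-defined rules map an agent's perceived situation to its next behavior state. This is a toy illustration, not the paper's Agent State Decision Network, and all names are assumptions.

```python
# Hypothetical sketch of rule-based behavior control for game objects.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Rule:
    condition: Callable[[dict], bool]   # predicate over the agent's perceived state
    next_state: str                     # state to transition to when the rule fires


@dataclass
class GameAgent:
    state: str = "idle"
    rules: list = field(default_factory=list)

    def update(self, perception: dict) -> str:
        """Fire the first matching rule and transition; otherwise keep the state."""
        for rule in self.rules:
            if rule.condition(perception):
                self.state = rule.next_state
                break
        return self.state


# User-defined interaction rules, e.g. avoid collisions and chase nearby players.
agent = GameAgent(rules=[
    Rule(lambda p: p["distance_to_obstacle"] < 1.0, "avoid"),
    Rule(lambda p: p["distance_to_player"] < 5.0, "chase"),
    Rule(lambda p: True, "patrol"),
])
print(agent.update({"distance_to_obstacle": 3.0, "distance_to_player": 2.0}))  # chase
```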

A Hierarchical Grid Alignment Algorithm for Microarray Image Analysis (마이크로어레이 이미지 분석을 위한 계층적 그리드 정렬 알고리즘)

  • Chun Bong-Kyung; Jin Hee-Jeong; Lee Pyung-Jun; Cho Hwan-Gue
    • Journal of KIISE: Software and Applications / v.33 no.2 / pp.143-153 / 2006
  • Microarrays, which enable us to obtain hundreds of thousands of gene expression or genotype measurements at once, are an epoch-making technology for the comparative analysis of genes. First of all, we have to measure the intensity of each gene in a microarray image from the experiment to obtain the expression level of each gene. However, it is difficult to analyze a microarray image manually because it contains a large number of genes. Meta-gridding and various auto-gridding methods have been proposed for this, but they still have some problems. For example, meta-gridding requires manual work because of variations that occur even between experiments on the same microarray, and auto-gridding may not be carried out fully or correctly when an image is noisy or lowly expressed. In this article, we propose a Hierarchical Grid Alignment algorithm as a new methodology that combines the meta-gridding method with the auto-gridding method. In our methodology, we take a meta-grid as input and then align it with the microarray image automatically. Experimental results show that the proposed method provides more robust and reliable gridding results than the previous methods. It also allows the user to perform more reliable batch analysis.
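
To make the alignment idea concrete, here is a much-simplified Python sketch that estimates the offset of a regular spot grid by correlating a periodic template with the image's row and column intensity projections. It is only an illustration of grid-to-image alignment under assumed names and a known pitch, not the hierarchical algorithm proposed in the paper.

```python
# Simplified illustration of aligning a predefined grid to a microarray-like image
# by maximizing correlation of intensity projections (not the paper's algorithm).
import numpy as np


def align_grid_offset(image: np.ndarray, grid_pitch: int, max_shift: int = 20):
    """Estimate (row, col) offsets of a regular spot grid in a 2D intensity image."""
    def best_shift(profile: np.ndarray) -> int:
        # Template: ideal periodic spot profile with the known grid pitch.
        template = np.zeros_like(profile, dtype=float)
        template[::grid_pitch] = 1.0
        scores = [float(np.dot(np.roll(template, s), profile))
                  for s in range(max_shift)]
        return int(np.argmax(scores))

    row_profile = image.sum(axis=1)   # projection onto rows
    col_profile = image.sum(axis=0)   # projection onto columns
    return best_shift(row_profile), best_shift(col_profile)


# Synthetic example: bright spots every 10 pixels, offset by (3, 7).
img = np.zeros((100, 100))
img[3::10, 7::10] = 255.0
print(align_grid_offset(img, grid_pitch=10))  # expected: (3, 7)
```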

Automated-Database Tuning System With Knowledge-based Reasoning Engine (지식 기반 추론 엔진을 이용한 자동화된 데이터베이스 튜닝 시스템)

  • Gang, Seung-Seok; Lee, Dong-Joo; Jeong, Ok-Ran; Lee, Sang-Goo
    • Proceedings of the Korean Information Science Society Conference / 2007.06a / pp.17-18 / 2007
  • Database tuning generally refers to a set of activities that make database applications run 'faster' [1]. It is costly and time-consuming for a database administrator to grasp all the rules of thumb needed for tuning and to apply them to each situation, so complex services in which different applications are interlocked inevitably require automated database performance management and tuning. To address this, this paper proposes a system that suggests automated database tuning principles based on a knowledge domain. Each database tuning theory is used as knowledge in the knowledge domain; the factors that affect performance are organized into objects and concepts, and tuning principles are inferred through a reasoning system so that a tuning methodology suited to the current situation can be applied quickly and easily. Academic research on automated database tuning spans several areas, for example Microsoft's AutoAdmin project [2], Oracle's SQL tuning architecture [3], COLT [4], DBA Companion [5], and SQUASH [6]. Reclassified by functional methodology, these optimization techniques can be broadly divided into design tuning, logical structure tuning, sentence tuning, SQL tuning, server tuning, and system/network tuning. Among these, SQL tuning and related techniques use numerically determined, already existing information, so they are easy to express in a structured model and readily accommodate conditions that change with diverse user requirements; we therefore focused on them in addressing performance problems. The objects, attributes, and relationships that constitute the DBMS are modeled along the processing stages of the database system, which is structured into three levels (Application / Query / DBMS). In this paper, these objects, attributes, and relationships, together with the rules of thumb used in database tuning, are analyzed and converted into knowledge that includes tuning principles. A tuning principle is a kind of golden rule for solving problems that arise in a database system, and it is expressed as the facts and rules that form the basis of the knowledge domain. A Fact expresses the modeled system as a single knowledge entity of the knowledge domain, and a Rule expresses a tuning principle as knowledge based on Facts. Rules are further divided into two types, rules defined in advance through system modeling and rules used to infer tuning principles, and most rules act as branches that select different solutions depending on the input values. Users can infer tuning principles from the automatically generated Facts and Rules and apply them to the database system, and they can also add Facts and Rules suited to their situation manually through a GUI as needed. JESS, a Java-based inference engine, is used to infer tuning principles in the knowledge domain. JESS is an expert system shell that uses a scripting language [7]; it represents knowledge with declarative rules and performs inference over them. Its knowledge representation can easily express and accommodate tuning principles, and its small size and fast inference performance make it suitable for application tuning processed in real time. The main role of the knowledge-based module is to generate and store the new knowledge needed from the given model of the database system. To this end, Facts and Rules are expressed as triples, the basic unit of knowledge representation. A triple consists of three elements, Subject, Property, and Object, and most Facts and Rules consist of the basic triple form or of a Condition part and an Action part, each built from combinations of triples. By expressing the objects, attributes, and relationships of the database system model in this way, the knowledge can function as the Facts and Rules of the inference engine. To implement and test the system, we assumed a web-based server-client architecture: the server consists of a Process Controller, Parser, Rule Database, and JESS Reasoning Engine, and the client consists of a Rule Manager Interface and a Result Viewer. The usefulness of the system was judged by comparing database performance measures, such as execution times measured before and after applying the tuning principles; the experiments showed that applying the tuning principles added at most less than one second of preprocessing overhead and improved processing time by a factor of about 1.5 to about 3. The proposed system has the advantage of automatically generating tuning principles and transforming them into knowledge, thereby deriving and providing new tuning principles, and of enabling customized tuning by allowing Facts and Rules to be added directly, together with the factors that affect performance. Future work on automating processes such as tuning the queries themselves and optimizing indexes, on methods for efficiently defining and adding Rules, and on methods for effectively constructing the system model could further improve this research.
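
To illustrate the Fact/Rule representation described above, the following toy Python sketch runs a naive forward-chaining loop over (Subject, Property, Object) triples until a tuning-principle fact is derived. The actual system uses the JESS engine, and the fact and rule contents here are made up purely for illustration.

```python
# Toy forward-chaining over (Subject, Property, Object) triples; not JESS itself.
facts = {
    ("Query42", "fullTableScan", "true"),
    ("Query42", "table", "Orders"),
}

# Each rule: (condition triples that must all be present, triple to assert).
rules = [
    ([("Query42", "fullTableScan", "true")],
     ("Query42", "tuningPrinciple", "addIndexOnFilterColumn")),
]


def infer(facts, rules):
    """Apply rules repeatedly until no new fact can be derived."""
    changed = True
    while changed:
        changed = False
        for conditions, action in rules:
            if all(c in facts for c in conditions) and action not in facts:
                facts.add(action)
                changed = True
    return facts


for triple in sorted(infer(facts, rules)):
    print(triple)  # includes the inferred tuning principle
```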

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun; Kim, Min Yong; Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.23-45 / 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales sites, and SNS, and the characteristics of these datasets are also diverse. In order to secure the competitiveness of companies, it is necessary to improve decision-making capacity using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining which classification algorithm is appropriate for a given dataset has been a task that required expertise and effort, because the relationship between the characteristics of datasets (called meta-features) and the performance of classification algorithms has not been fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class data. Therefore, the purpose of this study is to empirically analyze whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, the meta-features of multi-class datasets were grouped into two factors, data structure and data complexity, and seven representative meta-features were selected. Among those, we included the Herfindahl-Hirschman Index (HHI), originally a market concentration index, in the meta-features to replace the Imbalanced Ratio (IR), and we developed a new index, the Reverse ReLU Silhouette Score, and added it to the meta-feature set. From the UCI Machine Learning Repository, six representative datasets (Balance Scale, PageBlocks, Car Evaluation, User Knowledge-Modeling, Wine Quality (red), and Contraceptive Method Choice) were selected. The classes of each dataset were classified using the classification algorithms selected in the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, we applied 10-fold cross-validation; a 10% to 100% oversampling method was applied for each fold, and the meta-features of the dataset were measured. The selected meta-features are HHI, Number of Classes, Number of Features, Entropy, Reverse ReLU Silhouette Score, Nonlinearity of Linear Classifier, and Hub Score. F1-score was selected as the dependent variable. The results showed that the six meta-features, including the Reverse ReLU Silhouette Score and the HHI proposed in this study, have a significant effect on classification performance. (1) The HHI meta-feature proposed in this study was significant for classification performance. (2) Unlike the number of classes, the number of variables has a significant effect on classification performance, and the effect is positive. (3) The number of classes has a negative effect on classification performance. (4) Entropy has a significant effect on classification performance. (5) The Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level. (6) The nonlinearity of linear classifiers has a significant negative effect on classification performance. In addition, the results of the analyses by classification algorithm were consistent; in the regression analysis by classification algorithm, the Naïve Bayes algorithm, unlike the other algorithms, showed no significant effect for the number of variables.
This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant, and (2) the effects of data characteristics on classification performance were investigated using meta-features. As for practical contributions, (1) the results can be utilized in developing a system that recommends classification algorithms according to the characteristics of a dataset, and (2) because data characteristics differ, many data scientists search for the optimal algorithm by repeatedly adjusting algorithm parameters, a process that wastes hardware, cost, time, and manpower; this study can help reduce such waste. This study is expected to be useful for machine learning and data mining researchers, practitioners, and developers of machine learning-based systems. The paper consists of an introduction, related research, the research model, experiments, and a conclusion and discussion.
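
As a minimal sketch of the kind of meta-feature and evaluation setup described above, the code below computes an HHI-style class-concentration measure (sum of squared class shares) and 10-fold cross-validated macro-F1 for two classifiers on a public multiclass dataset. The authors' exact formulations (e.g. the Reverse ReLU Silhouette Score) are not given in the abstract and are not reproduced here; the dataset and classifiers below are stand-ins.

```python
# Sketch: an HHI-style class-concentration meta-feature plus 10-fold CV evaluation.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def herfindahl_hirschman_index(y) -> float:
    """Sum of squared class shares: 1/k for k balanced classes, 1.0 for one class."""
    _, counts = np.unique(y, return_counts=True)
    shares = counts / counts.sum()
    return float(np.sum(shares ** 2))


X, y = load_wine(return_X_y=True)
print("HHI:", round(herfindahl_hirschman_index(y), 3))

for name, clf in [("LogReg", LogisticRegression(max_iter=5000)),
                  ("RandomForest", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=10, scoring="f1_macro")
    print(f"{name}: mean 10-fold macro-F1 = {scores.mean():.3f}")
```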

Implementation of Reporting Tool Supporting OLAP and Data Mining Analysis Using XMLA (XMLA를 사용한 OLAP과 데이타 마이닝 분석이 가능한 리포팅 툴의 구현)

  • Choe, Jee-Woong; Kim, Myung-Ho
    • Journal of KIISE: Computing Practices and Letters / v.15 no.3 / pp.154-166 / 2009
  • Database query and reporting tools, OLAP tools, and data mining tools are typical front-end tools in a Business Intelligence (BI) environment, which supports gathering, consolidating, and analyzing data produced by business operations and provides enterprise users with access to the results. Traditional reporting tools have the advantage of creating sophisticated dynamic reports that include SQL query result sets, look like documents produced by word processors, and can be published to the Web, but their data sources are limited to RDBMSs. On the other hand, OLAP tools and data mining tools each provide powerful information analysis functions in their own way, but their built-in visualization components for analysis results are limited to tables and some charts. Thus, this paper presents a system that integrates these three typical front-end tools so that they complement one another in a BI environment. Traditional reporting tools have only a query editor for generating SQL statements that bring data from an RDBMS, whereas the reporting tool presented in this paper can also extract data from OLAP and data mining servers, because editors for OLAP and data mining query requests have been added to the tool. Traditional systems produce all documents on the server side; this structure allows reporting tools to avoid repeatedly generating documents when many clients access the same dynamic document. However, because this system targets a small number of users generating documents for data analysis, the tool generates documents on the client side. Therefore, the tool has a processing mechanism that handles large amounts of data despite the limited memory capacity of the report viewer on the client side. The reporting tool also has a data structure for integrating data from the three kinds of data sources into one document. Finally, most traditional front-end tools for BI depend on the data source architecture of a specific vendor. To overcome this problem, the system uses XMLA, a web service-based protocol, to access OLAP and data mining services from various vendors.
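
For concreteness, the sketch below posts an XMLA Execute request (an MDX statement wrapped in a SOAP envelope) from a Python client, illustrating the vendor-neutral access path described above. The endpoint URL, catalog, and cube names are placeholders, and a real server will typically require authentication and vendor-specific properties.

```python
# Rough sketch of an XMLA Execute request sent over HTTP; placeholder endpoint/cube.
import requests

XMLA_ENDPOINT = "http://olap.example.com/xmla"  # placeholder endpoint

MDX = "SELECT {[Measures].[Sales]} ON COLUMNS FROM [SalesCube]"

SOAP_BODY = f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command><Statement>{MDX}</Statement></Command>
      <Properties>
        <PropertyList>
          <Catalog>SalesCatalog</Catalog>
          <Format>Multidimensional</Format>
        </PropertyList>
      </Properties>
    </Execute>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    XMLA_ENDPOINT,
    data=SOAP_BODY.encode("utf-8"),
    headers={"Content-Type": "text/xml",
             "SOAPAction": "urn:schemas-microsoft-com:xml-analysis:Execute"},
)
print(response.status_code)
print(response.text[:500])  # SOAP response containing the multidimensional result
```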

Evaluation on the Implementation of Girl Friendly Science Activity (여학생 친화적 과학활동 프로그램의 운영 평가)

  • Jhun, Young-Seok; Shin, Young-Joon
    • Journal of The Korean Association For Science Education / v.24 no.3 / pp.442-458 / 2004
  • This study was conducted to develop a plan for a large-scale implementation of the Girl Friendly Science Program based on an analysis and investigation of its current pilot implementation. The Girl Friendly Science Program materials, first developed in 1999 with support from the Ministry of Gender Equality, consist of 1) five theme-based units that specifically target individual students' abilities, aptitudes, and career choices, and 2) differentiated learning materials for 7th through 10th grade female students. All of the materials are available at the homepage (http://tes.or.kr/gfsp.cgi) of 'Teachers for Exciting Science' (an organization of science teachers in the Seoul area). Since the materials are well organized by topic and grade level and presented in both Korean word processor and HTML formats, anyone can easily access them for their own instructional use, and the number of visitors to the homepage has been increasing steadily since its launch. The evaluation of the current pilot implementation of the materials targeting individual students' abilities and aptitudes showed high scores for alignment with the original purpose, content, level, and effectiveness for classroom implementation, but low scores for the convenience with which teachers could guide the materials and for its organization and operation. The results also showed a significant change in students' perception of science and positive experiences of science through various interdisciplinary activities. On the other hand, the evaluation of students' experiences with the materials showed that students' assessment of an activity depended largely on whether their experience succeeded or failed; overall, evaluation scores were low for simple activities such as cutting or pasting paper. According to students' achievement test results, the difference between pre- and post-test scores was statistically significant in the Affective Domain (p<0.05) but not in the Inquiry Domain. Based on teachers' observations, numerous schools that have run this program reported that students' abilities to cooperate, discuss, observe, and reason with evidence improved. In order to implement this program on a larger scale, it is critical to secure strong support from teachers and induce them to change their teaching strategies by building a community of teachers and developing ongoing teacher professional development programs. Finally, there remains a strong need to develop more programs, to actively discover and train more domestic women scientists and engineers, and to collaborate with them to develop more educational materials for girls of all ages.

Analysis of Misconceptions on Oceanic Front and Fishing Ground in Secondary-School Science and Earth Science Textbooks (중등학교 과학 및 지구과학 교과서 조경 수역 및 어장에 관한 오개념 분석)

  • Park, Kyung-Ae; Lee, Jae Yon; Kang, Chang-Keun; Kim, Chang-Sin
    • Journal of the Korean earth science society / v.41 no.5 / pp.504-519 / 2020
  • Oceanic fronts, areas in the ocean where waters with different properties meet, play an important role in controlling weather and climate change through air-sea interactions and marine dynamics such as heat and momentum exchange and the processes by which the properties of sea water are mixed. Such oceanic fronts have long been described in secondary school textbooks with the term 'Jokyung water zone (JWC hereafter) or oceanic front', meaning an area where different currents meet, and have been related to fishing grounds in the East Sea. However, higher education materials and marine scientists have not used this term for the past few decades; therefore, the appropriateness of the term needs to be analyzed to remove any misconceptions it presents. This study analyzed 11 secondary school textbooks (5 middle school and 6 high school textbooks) based on the revised 2015 curriculum. A survey of 30 secondary school science teachers was also conducted to analyze their awareness of the problems. An analysis of the textbook contents related to the JWC and fishing grounds found several errors and misconceptions that did not correspond to scientific facts. Although the textbooks mainly use the concept of the JWC to represent the meeting of cold and warm currents, it would be reasonable to replace it with the more comprehensive term 'oceanic front', which indicates an area where waters with different properties, such as temperature, salinity, density, and velocity, interact. In the textbooks, seasonal changes in the fishing grounds are linked to seasonal changes in the North Korean Cold Current (NKCC), which is described as moving southwards in winter and northwards in summer; this is the complete opposite of established scientific knowledge, which describes it as strengthening in summer. Fishing grounds are also not limited to narrow coastal zones; they are widespread throughout the East Sea. The results of the teacher survey demonstrated that these misconceptions have persisted for decades. This study emphasized the importance of using scientific knowledge to correct misconceptions related to the JWC, fishing grounds, and the NKCC, and addressed the importance of procedures for transferring these corrections into the curriculum. The conclusions of this study are expected to play an important role in future textbook revision and teacher education.

Prediction of Changes in Habitat Distribution of the Alfalfa Weevil (Hypera postica) Using RCP Climate Change Scenarios (RCP 기후변화 시나리오 따른 알팔파바구미(Hypera postica)의 서식지 분포 변화 예측)

  • Kim, Mi-Jeong; Lee, Heejo; Ban, Yeong-Gyu; Lee, Soo-Dong; Kim, Dong Eon
    • Korean journal of applied entomology / v.57 no.3 / pp.127-135 / 2018
  • Climate change can affect variables related to the life cycle of insects, including growth, development, survival, reproduction, and distribution. Because it encourages alien insects to spread and settle rapidly, climate change is regarded as one of the direct causes of decreased biodiversity: it disturbs ecosystems and reduces the populations of native species. Hypera postica caused a great deal of damage in the southern provinces of Korea after it was first identified on Jeju Island in the 1990s. In recent years, the number of individuals moving to estivation sites has concerned scientists because of crop damage and nationwide proliferation. In this study, we examine how climate change could affect the habitat of H. postica. The MaxEnt model was applied to estimate the potential distribution of H. postica under the future climate change scenarios representative concentration pathway (RCP) 4.5 and RCP 8.5. As model variables, this study used six bioclimatic variables (bio3, bio6, bio10, bio12, bio14, and bio16), chosen in consideration of the ecological characteristics of the 66 areas where inhabitation of H. postica was confirmed from 2015 to 2017 and of the interrelations among the prediction variables. The fitness of the model averaged 0.765, a level considered potentially useful, and the warmest-quarter variable had a high contribution rate of 60-70%. The prediction results for 2050 and 2070 under RCP 4.5 and RCP 8.5 indicated that H. postica habitats are projected to expand across the Korean peninsula due to increasing temperatures.
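
The sketch below is a rough stand-in for the species-distribution workflow described above: presence points versus random background points are classified from six bioclim-like predictors and evaluated with AUC. The paper uses the MaxEnt model; plain logistic regression and synthetic data are used here purely for illustration, so none of the numbers correspond to the study.

```python
# Illustrative presence/background habitat-suitability model (logistic regression
# as a stand-in for MaxEnt; synthetic placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic bioclim-like predictors (e.g. bio3, bio6, bio10, bio12, bio14, bio16).
n_presence, n_background = 66, 500
presence = rng.normal(loc=1.0, scale=1.0, size=(n_presence, 6))
background = rng.normal(loc=0.0, scale=1.0, size=(n_background, 6))

X = np.vstack([presence, background])
y = np.concatenate([np.ones(n_presence), np.zeros(n_background)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
suitability = model.predict_proba(X_test)[:, 1]   # habitat-suitability proxy
print("AUC:", round(roc_auc_score(y_test, suitability), 3))
```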

Influence of Motivational, Social, and Environmental Factors on the Learning of Hackers (동기적, 사회적, 그리고 환경적 요인이 해커의 기술 습득에 미치는 영향)

  • Jang, Jaeyoung; Kim, Beomsoo
    • Information Systems Review / v.18 no.1 / pp.57-78 / 2016
  • Hacking has raised many critical issues in the modern world, particularly because the size and cost of the damage caused by this disruptive activity have steadily increased. Accordingly, many studies have been conducted by behavioral scientists to understand hackers and their practices. Nonetheless, only qualitative methods, such as interviews, meta-studies, and media studies, have been employed in such studies because of limitations in sampling hackers. Existing studies have determined that intrinsic motivation was the dominant factor influencing hackers and that their techniques were mainly acquired from online hacking communities, but such results have yet to be causally proven. This study attempted to identify the motivational and environmental factors that causally influence hackers to learn hacking skills. To this end, we surveyed hacker community members, using the theory of planned behavior, to identify the causal factors of their learning of hacking skills. We selected a group of students who were developing their hacking skills. The survey was conducted over a two-week period in May 2015 with a total of 227 students as respondents; after list-wise deletion, 215 of the responses were deemed usable (94.7 percent). In summary, the hackers were aware that hacking skills are considered socially unethical, and their attitudes toward learning hacking skills were affected by both intrinsic and extrinsic motivations. In addition, the characteristics of the online hacking community affected their perceived behavioral control. This study introduced new concepts by conducting a causal relationship analysis on a hacker sample, expanded the discussion on the causal direction of subjective norms in research on unethical behavior, and empirically confirmed that both intrinsic and extrinsic motivations affect the learning of hacking skills. The study also makes a practical contribution by raising educational and policy response issues regarding ethical hackers and by demonstrating the need to strengthen the punishment for hacking.