• Title/Summary/Keyword: e-learning

Search Results: 2,423

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and monetary damages occur more and more frequently. In this study, we propose a method to analyze which sentences and documents posted to SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management. We also suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps; this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing them from SNS such as Twitter. The collected data were given to two researchers, who decided whether each item was related to cybercriminality, particularly financial fraud. We then selected keywords from the vocabulary related to nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing and turned into learning data. Preprocessing consists of three steps: morphological analysis, stop-word removal, and valid part-of-speech selection. In the morphological analysis step, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only nouns and symbols are retained: since nouns refer to things, they express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. To turn the selected data into learning data, each item must be classified as legitimate or not, so every item is labeled 'legal' or 'illegal'. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set (70%) and a test data set (30%). SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in general cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), Term Frequency, and Collective Intelligence methods, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal door-to-door sales, clearly superior to Term Frequency, MLE, and the other baselines. Hence, the results suggest that the proposed method is valid and practically usable.
In this paper, we propose a framework for managing crises caused by abnormalities in unstructured data sources such as SNS. We hope this study contributes to academia by identifying what to consider when applying an SVM-like discrimination algorithm to text analysis, and to practitioners in the fields of brand management and opinion mining.
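As a rough illustration of the pipeline above, here is a minimal sketch assuming scikit-learn: build a document-term matrix, split the labeled texts 70/30, and train an RBF-kernel SVM with the reported parameters (gamma = 0.5, cost = 10). The toy sentences are hypothetical stand-ins for the Korean SNS corpus, and plain token counts replace the paper's morphological analysis and noun/symbol filtering.

```python
# Minimal sketch of the 'legal'/'illegal' text classification pipeline.
# Assumption: scikit-learn; the toy sentences below are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

texts = [
    "instant daechul !!! no credit check, call now $$$",
    "cheap sachae for anyone, 100% approval !!!",
    "fast cash loan today, no questions asked !!!",
    "city council announces a small-business loan program",
    "bank publishes quarterly report on household loans",
    "seminar on responsible private lending this week",
]
labels = ["illegal", "illegal", "illegal", "legal", "legal", "legal"]

# Document-Term Matrix (the paper retains only nouns and symbols after
# Korean morphological analysis; raw token counts are used here).
X = CountVectorizer().fit_transform(texts)

# 70% learning data / 30% test data, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, stratify=labels, random_state=42)

# RBF-kernel SVM with the parameters reported above: gamma=0.5, cost C=10.
clf = SVC(kernel="rbf", gamma=0.5, C=10).fit(X_train, y_train)
print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```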

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk;Kim, Ji-Hun;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.125-141
    • /
    • 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, it becomes more important to handle these malicious attacks and hacks appropriately, and there is significant interest in and demand for effective network security systems such as intrusion detection systems. Intrusion detection systems are network security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed using experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. However, although they perform very well under normal conditions, they cannot handle new or unknown patterns of network attacks. As a result, recent studies on intrusion detection systems use artificial intelligence techniques, which can proactively respond to unknown threats. For a long time, researchers have adopted and tested various kinds of artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect intrusions on the network. However, most studies have applied these techniques singly, even though combining them may lead to better detection. For this reason, we propose a new integrated model for intrusion detection. Our model is designed to combine the prediction results of four different binary classification models, which may be complementary to each other: logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM). As a tool for finding the optimal combining weights, genetic algorithms (GA) are used. The proposed model is built in two steps. In the first step, the integration model with the lowest prediction error (i.e., misclassification rate) is generated. In the second step, the model explores the optimal classification threshold for determining intrusions, which minimizes the total misclassification cost. To calculate the total misclassification cost of an intrusion detection system, we need to understand its asymmetric error cost scheme. Generally, there are two common forms of error in intrusion detection. The first is the False-Positive Error (FPE), in which normal activity is wrongly judged as an intrusion, possibly resulting in unnecessary remediation. The second is the False-Negative Error (FNE), in which malicious activity is misjudged as normal. Compared with FPE, FNE is more fatal; thus, the total misclassification cost is affected more by FNE than by FPE. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 10,000 samples from them by random sampling. We also compared the results of our model with those of the single techniques to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, and ANN using Neuroshell R4.0. For SVM, LIBSVM v2.90, a freeware tool for training SVM classifiers, was used. Empirical results showed that our GA-based model outperformed all the comparative models in detecting network intrusions from the accuracy perspective, and from the total misclassification cost perspective as well. Consequently, we expect our study to contribute to building cost-effective intelligent intrusion detection systems.
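The two-step design above (GA-optimized combining weights, then a cost-minimizing threshold) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a hand-rolled GA with truncation selection, uniform crossover, and Gaussian mutation stands in for the authors' GA configuration, and the predictions and cost values are synthetic assumptions.

```python
# Step 1: GA searches for weights combining four classifiers' scores.
# Step 2: grid-search a threshold minimizing an assumed asymmetric cost
# (FNE taken as 10x costlier than FPE). All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)                       # 1 = intrusion, 0 = normal
# Synthetic stand-ins for LOGIT/DT/ANN/SVM predicted probabilities.
preds = np.clip(y[:, None] * 0.6 + rng.normal(0.2, 0.25, (n, 4)), 0, 1)
COST_FP, COST_FN = 1.0, 10.0                    # assumed cost scheme

def error_rate(w):                              # step-1 fitness
    score = preds @ (w / w.sum())
    return np.mean((score >= 0.5) != y)

def total_cost(w, threshold):                   # step-2 objective
    yhat = preds @ (w / w.sum()) >= threshold
    fp = np.sum(yhat & (y == 0))                # false positives
    fn = np.sum(~yhat & (y == 1))               # false negatives
    return COST_FP * fp + COST_FN * fn

# Step 1: evolve combining weights that minimize classification error.
pop = rng.random((30, 4)) + 1e-6
for _ in range(50):
    order = np.argsort([error_rate(ind) for ind in pop])
    parents = pop[order[:10]]                   # truncation selection
    children = []
    while len(children) < 20:
        a, b = parents[rng.integers(0, 10, 2)]
        child = np.where(rng.random(4) < 0.5, a, b)   # uniform crossover
        child += rng.normal(0, 0.05, 4)               # Gaussian mutation
        children.append(np.clip(child, 1e-6, None))
    pop = np.vstack([parents, children])
best_w = min(pop, key=error_rate)

# Step 2: threshold minimizing the total misclassification cost.
grid = np.linspace(0.05, 0.95, 19)
best_t = grid[np.argmin([total_cost(best_w, t) for t in grid])]
print("weights:", np.round(best_w / best_w.sum(), 3), "threshold:", best_t)
```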

Lee Ungno (1904-1989)'s Theory of Painting and Art Informel Perception in the 1950s (이응노(1904~1989)의 회화론과 1950년대 앵포르멜 미술에 대한 인식)

  • Lee, Janghoon
    • Korean Journal of Heritage: History & Science
    • /
    • v.52 no.2
    • /
    • pp.172-195
    • /
    • 2019
  • Among the paintings of Goam Lee Ungno (1904-1989), his works of the 1960s in Paris have been evaluated as his most avant-garde, experimenting with and innovating objects as an artist. At that time, his works, such as Papier Colle and Abstract Letter, were influenced by abstract expressionism and Western Art Informel, illustrating his transformation from a traditional artist into a contemporary artist. An exhibition held prior to his departure for Paris in March 1958 has received attention because it exhibited the painting style of his early Informel art. Taking this into consideration, this study interpreted his work from two perspectives: first, that his works of 1958 were influenced by abstract expressionism and Art Informel; and second, that he expressed Xieyi (寫意) as literati painting, focusing on the fact that Lee Ungno first started his career adopting this style. In this paper, I aimed to confirm Lee Ungno's understanding of Art Informel and abstract painting, which can be called abstract expressionism. To achieve this, it was necessary to study Lee's painting theory at that time, so I first considered Hae-gang Kim Gyu-jin, under whom Lee Ungno began studying painting, and his paintings during his time in Japan. It was confirmed that, in order to escape from stereotypical paintings, deep contemplation of nature while painting was his first important principle. This principle, also known as Xieyi (寫意), lasted until the 1950s. In addition, it is highly probable that he understood abstract painting in its dictionary sense, i.e., extracting shapes from nature according to ideas, which became important to him after studying in Japan, rather than the theory of abstract painting realized in Western paintings. Lee Ungno himself also stated that the shape of nature was the basis of abstract painting. In other words, abstractive painting and abstract painting are different concepts, and based on this, it is necessary to analyze the paintings of Lee Ungno. Finally, I questioned the view that Lee Ungno's abstract paintings of the 1950s were painted as representative of the Xieyi (寫意) mind of literati painting. Linking traditional literati painting theory directly to Lee Ungno, who was active in a different time and place, may minimize his individuality and blur the distinction between traditional and contemporary paintings. Lee Ungno emphasized Xieyi (寫意) in his paintings; however, this might have been an emphasis signifying a grand proposition. This is because his works produced in the 1950s, such as Self-Portrait (1956), featured boldly distorted forms achieved by strong ink brushwork, a style which Lee Ungno defined as 'North Painting.' This is based on the view that it is necessary to distinguish between Xieyi (寫意) and 'the way of Xieyi (寫意) painting' as an important aspect of literati painting. Therefore, his paintings need a new interpretation from the viewpoint that he represented abstract paintings according to his own Xieyi (寫意) way, rather than the view that his paintings were representations of Xieyi (寫意) or a succession of traditional paintings in the literati style.

On Using Near-surface Remote Sensing Observation for Evaluating Gross Primary Productivity and Net Ecosystem CO2 Partitioning (근거리 원격탐사 기법을 이용한 총일차생산량 추정 및 순생태계 CO2 교환량 배분의 정확도 평가에 관하여)

  • Park, Juhan;Kang, Minseok;Cho, Sungsik;Sohn, Seungwon;Kim, Jongho;Kim, Su-Jin;Lim, Jong-Hwan;Kang, Mingu;Shim, Kyo-Moon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.4
    • /
    • pp.251-267
    • /
    • 2021
  • Remotely sensed vegetation indices (VIs) are empirically related to gross primary productivity (GPP) across various spatio-temporal scales, but the uncertainties in the GPP-VI relationship increase with temporal resolution. Uncertainty also exists in the eddy covariance (EC)-based estimation of GPP, arising from the partitioning of the measured net ecosystem CO2 exchange (NEE) into GPP and ecosystem respiration (RE). For two forest and two agricultural sites, we correlated the EC-derived GPP at various time scales with three different near-surface remotely sensed VIs: (1) normalized difference vegetation index (NDVI), (2) enhanced vegetation index (EVI), and (3) near-infrared reflectance of vegetation (NIRv), along with NIRvP (i.e., NIRv multiplied by photosynthetically active radiation, PAR). Among the compared VIs, NIRvP showed the highest correlation with half-hourly and monthly GPP at all sites. NIRvP was then used to test the reliability of GPP derived by two different NEE partitioning methods: (1) the original KoFlux method (GPPOri) and (2) a machine-learning-based method (GPPANN). GPPANN showed a higher correlation with NIRvP at the half-hourly time scale, but there was no difference at the daily time scale. The NIRvP-GPP correlation was lower under clear-sky conditions due to the co-limitation of GPP by other environmental conditions such as air temperature, vapor pressure deficit, and soil moisture. Under cloudy conditions, however, when photosynthesis is mainly limited by radiation, NIRvP was more promising for testing the credibility of NEE partitioning methods. Although further analyses are needed, the results suggest that NIRvP can be used as a proxy for GPP at high temporal scales. However, for VI-based GPP estimation with high temporal resolution to be meaningful, complex systems-based analysis methods (related to systems thinking and self-organization, going beyond the empirical VI-GPP relationship) should be developed.
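For reference, the indices compared above are simple functions of spectral reflectance; the sketch below computes them for hypothetical reflectance and PAR values. NDVI and NIRv follow their standard definitions (NIRv = NDVI × NIR reflectance), and the EVI coefficients are the standard MODIS ones; none of these numbers come from the paper's sites.

```python
# Sketch of the vegetation indices compared above, computed from
# near-surface spectral reflectance. All input values are hypothetical;
# the EVI coefficients are the standard MODIS ones (G=2.5, C1=6, C2=7.5, L=1).
red, nir, blue = 0.05, 0.45, 0.03   # surface reflectance (unitless)
par = 1200.0                        # photosynthetically active radiation

ndvi = (nir - red) / (nir + red)
evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
nirv = ndvi * nir                   # near-infrared reflectance of vegetation
nirvp = nirv * par                  # NIRv scaled by incoming PAR

print(f"NDVI={ndvi:.3f}  EVI={evi:.3f}  NIRv={nirv:.3f}  NIRvP={nirvp:.1f}")
```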

Exploring the contextual factors of episodic memory: dissociating distinct social, behavioral, and intentional episodic encoding from spatio-temporal contexts based on medial temporal lobe-cortical networks (일화기억을 구성하는 맥락 요소에 대한 탐구: 시공간적 맥락과 구분되는 사회적, 행동적, 의도적 맥락의 내측두엽-대뇌피질 네트워크 특징을 중심으로)

  • Park, Jonghyun;Nah, Yoonjin;Yu, Sumin;Lee, Seung-Koo;Han, Sanghoon
    • Korean Journal of Cognitive Science
    • /
    • v.33 no.2
    • /
    • pp.109-133
    • /
    • 2022
  • Episodic memory consists of a core event and its associated contexts. Although the role of the hippocampus and its neighboring regions in contextual representation during encoding has become increasingly evident, it remains unclear how these regions handle context-specific information other than spatio-temporal contexts. Using high-resolution functional MRI, we explored how the medial temporal lobe (MTL) and cortical regions are involved during the encoding of various types of contextual information (i.e., the journalism principle 5W1H): "Who did it?," "Why did it happen?," "What happened?," "When did it happen?," "Where did it happen?," and "How did it happen?" Participants answered six different contextual questions while looking at simple experimental events consisting of two faces and one object on the screen. The MTL was divided into sub-regions by hierarchical clustering of resting-state data. General linear model analyses revealed stronger activation of MTL sub-regions, the prefrontal cortex (PFC), and the inferior parietal lobule (IPL) during social (Who), behavioral (How), and intentional (Why) contextual processing than during spatio-temporal (Where/When) contextual processing. To further investigate the functional networks involved in this dissociation, a multivariate pattern analysis was conducted with features selected as the task-based connectivity links between the hippocampal subfields and PFC/IPL. Social, behavioral, and intentional contextual processing were each successfully classified against spatio-temporal contextual processing. Thus, specific contexts in episodic memory, namely social, behavioral, and intentional contexts, involve functional connectivity patterns distinct from those for spatio-temporal contextual memory.
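A bare-bones version of the multivariate pattern analysis described above might look like the following: task-based connectivity links serve as features for a cross-validated linear classifier separating one context condition from the spatio-temporal baseline. The feature matrix here is synthetic and scikit-learn is an assumption; the study's actual feature selection and classifier settings are not specified in the abstract.

```python
# Hedged MVPA sketch: connectivity links (hippocampal subfields x PFC/IPL)
# as features for a cross-validated linear classifier. Data are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_links = 120, 40                  # hypothetical sizes
X = rng.normal(size=(n_trials, n_links))     # connectivity-link features
y = rng.integers(0, 2, n_trials)             # 1 = social (Who), 0 = Where/When

clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)    # 5-fold cross-validation
print("mean CV accuracy:", scores.mean())    # ~0.5 here, since data are random
```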

The Influence of Webtoon Usage Motivation and Theory of Planned Behavior on Intentions to Use Webtoon: Comparison between movie viewing, switching to paid content, and intention for buying character products (웹툰 이용동기와 계획행동이론 변인이 웹툰 관련 행동의도에 미치는 영향: 영화관람, 유료 콘텐츠 전환시 이용, 캐릭터 상품 구매의도의 비교)

  • Lee, Jeong Ki;Lee, You Jin;Kim, Byung Gue;Kim, Bo Mi;Choi, Sun Ryul;Koo, Ja Young;Koleva, Vanya Slavche
    • Korean Journal of Communication Studies
    • /
    • v.22 no.2
    • /
    • pp.89-121
    • /
    • 2014
  • In order to suggest a strategy for the continuous growth of webtoons, this article examined webtoon usage motivations and tried to predict behavior toward culture content products and services connected with webtoons: the intention to view movies based on webtoons, the intention to keep using webtoon content after it switches to paid access, and the intention to buy webtoon character products. While the Uses and Gratifications perspective already predicts webtoon usage intentions and related sociocultural behavioral intentions, this study extended the explanatory power of predictions about webtoon-related behavioral intentions by integrating the Theory of Planned Behavior. The results identified five motivational factors for webtoon usage: 'seeking information', 'entertainment and access availability', 'webtoon genre characteristics', 'influence from a friend or acquaintance', and 'escapism and tension release'. Among them, the factors that influenced the intention to view movies based on webtoons were 'webtoon genre characteristics', 'escapism and tension release', and the three variables from the Theory of Planned Behavior. 'Seeking information', 'entertainment and access availability', 'webtoon genre characteristics', and all three Theory of Planned Behavior variables were found to influence the intention to keep using webtoon content after it switches to paid access. The intention to buy webtoon-based character products was affected by the motivational factors 'seeking information' and 'escapism and tension release' and by the behavior and subjective norm variables from the Theory of Planned Behavior. Based on these distinctive results, several suggestions were made for the continuous growth of webtoons.

Strategic Issues in Managing Complexity in NPD Projects (신제품개발 과정의 복잡성에 대한 주요 연구과제)

  • Kim, Jongbae
    • Asia Marketing Journal
    • /
    • v.7 no.3
    • /
    • pp.53-76
    • /
    • 2005
  • With rapid technological and market change, new product development (NPD) complexity is a significant issue that organizations continually face in their development projects. Numerous factors cause development projects to become increasingly costly & complex. A product is more likely to be successfully developed and marketed when the complexity inherent in NPD projects is clearly understood and carefully managed. Based upon previous studies, this study examines the nature and importance of complexity in developing new products and then identifies several issues in managing complexity. Issues considered include: the definition of complexity; the consequences of complexity; and methods for managing complexity in NPD projects. To achieve high performance in managing complexity in development projects, these issues need to be addressed, for example: A. Complexity inherent in NPD projects is multi-faceted and multidimensional. What factors need to be considered in defining and/or measuring complexity in a development project? For example, is it sufficient if complexity is defined only from a technological perspective, or is it more desirable to consider the entire array of complexity sources which NPD teams with different functions (e.g., marketing, R&D, manufacturing, etc.) face in the development process? Moreover, is it sufficient if complexity is measured only once during a development project, or is it more effective and useful to trace complexity changes over the entire development life cycle? B. Complexity inherent in a project can have negative as well as positive influences on NPD performance. Thus, which complexity impacts are usually considered negative and which are positive? Project complexity can also affect the entire organization, and any complexity is better assessed from a broader and longer-term perspective. What are some ways in which the long-term impact of complexity on an organization can be assessed and managed? C. Based upon previous studies, several approaches for managing complexity are derived. What are the weaknesses & strengths of each approach? Is there a desirable hierarchy or order among these approaches when more than one is used? Are there differences in outcomes according to industry and product type (incremental or radical)? Answers to these and other questions can help organizations effectively manage the complexity inherent in most development projects. Complexity is worthy of additional attention from researchers and practitioners alike. Large-scale empirical investigations, jointly conducted by researchers and practitioners, will help gain useful insights into understanding and managing complexity. Those organizations that can accurately identify, assess, and manage the complexity inherent in projects are likely to gain important competitive advantages.


Efficient Deep Learning Approaches for Active Fire Detection Using Himawari-8 Geostationary Satellite Images (Himawari-8 정지궤도 위성 영상을 활용한 딥러닝 기반 산불 탐지의 효율적 방안 제시)

  • Sihyun Lee;Yoojin Kang;Taejun Sung;Jungho Im
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.979-995
    • /
    • 2023
  • As wildfires are difficult to predict, real-time monitoring is crucial for a timely response. Geostationary satellite images are very useful for active fire detection because they can monitor a vast area with high temporal resolution (e.g., 2 min). Existing satellite-based active fire detection algorithms detect thermal outliers using threshold values based on statistical analysis of brightness temperature. However, the difficulty of establishing suitable thresholds hinders such methods' ability to detect low-intensity fires and to generalize. In light of these challenges, machine learning has emerged as a potential solution. Until now, relatively simple techniques such as random forest, vanilla convolutional neural networks (CNN), and U-Net have been applied to active fire detection. Therefore, this study proposed an active fire detection algorithm based on state-of-the-art (SOTA) deep learning techniques, using data from the Advanced Himawari Imager, and evaluated it over East Asia and Australia. The SOTA model was developed by applying EfficientNet and the Lion optimizer, and the results were compared with a model using a vanilla CNN structure. EfficientNet outperformed the CNN, with F1-scores of 0.88 and 0.83 in East Asia and Australia, respectively. Performance improved further after applying weighted loss, equal sampling, and image augmentation to address data imbalance, resulting in F1-scores of 0.92 in East Asia and 0.84 in Australia. It is anticipated that the timely responses facilitated by this SOTA deep learning-based approach to active fire detection will effectively mitigate the damage caused by wildfires.
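The class-imbalance handling described above can be illustrated with a short training step: an EfficientNet backbone trained with a class-weighted loss that up-weights the rare fire class. This is a hedged sketch using torchvision's efficientnet_b0 and the Adam optimizer in place of the paper's exact model and the Lion optimizer (Lion requires a third-party package); the class weight, batch, and input shape are all assumptions rather than values from the paper.

```python
# Hedged sketch: EfficientNet backbone + class-weighted loss for the rare
# 'fire' class. Adam stands in for the Lion optimizer used in the paper;
# all shapes and the weight value are assumptions.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

model = efficientnet_b0(num_classes=2)               # fire / no-fire
# Weighted cross-entropy: up-weight the rare fire class (weight assumed).
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 20.0]))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step on a random batch of image patches
# (three channels stand in for selected brightness-temperature bands).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```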

Analysis of the linkage between the three categories of content system according to the 2022 revised mathematics curriculum and the lesson titles of mathematics textbooks for the first and second-grade elementary school (2022 개정 수학과 교육과정에 따른 내용 체계의 세 범주와 초등학교 1~2학년 수학 교과서 차시명의 연계성 분석)

  • Kim, Sung Joon;Kim, Eun kyung;Kwon, Mi sun
    • Communications of Mathematical Education
    • /
    • v.38 no.2
    • /
    • pp.167-186
    • /
    • 2024
  • Since the 5th mathematics curriculum, the goals of mathematics education have been presented in three categories: cognitive, process, and affective. In the 2022 revised mathematics curriculum, the content system is likewise presented in three categories: knowledge-understanding, process-skill, and value-attitude. Therefore, in order to present lesson goals to students, it is necessary to present all three aspects of the goals of mathematics education. The lesson titles presented in mathematics textbooks are directly linked to lesson goals and are the first source of information for students during class. Accordingly, this study analyzed how the lesson titles presented in the 2015 revised first- and second-grade mathematics textbooks connect to the three categories of the content system. As a result, most lesson titles presented two of the three categories, but the reflected elements tended to focus on the knowledge-understanding and process-skill categories. Some lesson titles reflected content elements of the value-attitude category, but this varied significantly across mathematics content areas. Considering the goals of mathematics lessons, it will be necessary to explore ways of presenting lesson titles that reflect content elements of the value-attitude category, and of presenting all three categories in a balanced way. In particular, considering that students can accurately understand knowledge-understanding goals even when these are not explicitly presented, descriptions that specifically reflect the content elements of the process-skill and value-attitude categories seem necessary. Through this analysis, we attempted to suggest how to present lesson titles when developing textbooks for the 2022 revised curriculum and to help present effective lesson goals.

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.221-241
    • /
    • 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). CNN is characterized by dividing the input image into small sections to recognize partial features and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but until now their applications have been limited to image recognition and natural language processing. The use of deep learning techniques for business problems is still at an early research stage. If their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraud detection, bankruptcy prediction, and so on. It is therefore a very meaningful experiment to diagnose the possibility of solving business problems with deep learning, based on the case of online shopping companies, which have big data, whose customer behavior is relatively easy to identify, and whose data have high utilization value. In particular, the competitive environment of online shopping companies is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is increasingly important for them. In this study, we propose a 'CNN model of heterogeneous information integration' to improve the prediction of customer behavior in online shopping enterprises. The model combines structured and unstructured information and learns through a convolutional neural network on top of a multi-layer perceptron structure. To optimize performance, we design three architectural components (heterogeneous information integration, unstructured information vector conversion, and multi-layer perceptron design), evaluate the performance of each, and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churn, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual data from a specific online shopping company in Korea, consisting of transactions, customer profiles, and VOC (voice of customer) data. Data extraction criteria were defined for the 47,947 customers who registered at least one VOC in January 2011 (one month). We used these customers' profiles, 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month. The experiment is divided into two stages. In the first stage, we evaluate the three architectural components that affect the performance of the proposed model and select optimal parameters; we then evaluate the performance of the resulting model. Experimental results show that the proposed model, which combines both structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network). It is therefore significant that the use of unstructured information contributes to predicting customer behavior, and that CNN can be applied to business problems as well as image recognition and natural language processing. The experiments also confirm that CNN is effective in understanding and interpreting the meaning of context in textual VOC data, and this empirical research on actual e-commerce data shows that very meaningful information for predicting customer behavior can be extracted from VOC data written directly by customers in text form. Finally, through various experiments, the proposed model provides useful information for future research related to parameter selection and performance.
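To make the architecture concrete, the sketch below combines a 1-D convolution over embedded VOC text with structured transaction features for a single binary target (e.g., re-purchase). It is only an approximation of the paper's heterogeneous information integration design; all layer sizes, the vocabulary size, and the data are hypothetical placeholders.

```python
# Hedged sketch of a heterogeneous-information CNN: Conv1d over embedded
# VOC text, concatenated with structured customer features, feeding a small
# MLP for one binary target. All sizes and inputs are hypothetical.
import torch
import torch.nn as nn

class HeteroCNN(nn.Module):
    def __init__(self, vocab=5000, emb=64, n_structured=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, 32, kernel_size=3, padding=1)
        self.mlp = nn.Sequential(nn.Linear(32 + n_structured, 64),
                                 nn.ReLU(), nn.Linear(64, 1))

    def forward(self, tokens, structured):
        e = self.embed(tokens).transpose(1, 2)           # (batch, emb, seq)
        t = torch.relu(self.conv(e)).max(dim=2).values   # global max-pool
        return torch.sigmoid(self.mlp(torch.cat([t, structured], dim=1)))

model = HeteroCNN()
tokens = torch.randint(0, 5000, (4, 50))     # tokenized VOC texts (batch of 4)
structured = torch.randn(4, 20)              # transaction/profile features
print(model(tokens, structured).shape)       # (4, 1) predicted probabilities
```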