• Title/Abstract/Keyword: web logs

Search Results: 83

Regulatory Focus Classification for Web Shopping Consumers According to Product Type (제품유형에 따른 웹쇼핑 소비자의 조절초점성향 분류)

  • Baik, Jong-Bum;Han, Chung-Seok;Jang, Eun-Young;Kim, Yong-Bum;Choi, Ja-Young;Lee, Soo-Won
    • The KIPS Transactions: Part B / v.19B no.4 / pp.231-236 / 2012
  • According to consumer behavior theory, human propensity can be divided into two regulatory focus types: promotion and prevention. These two types strongly influence consumers' decisions in many areas. In this research, we apply regulatory focus theory to personalized recommendation in order to minimize the cold-start problem and improve the performance of recommendation algorithms. To achieve this goal, we extract consumer behavior variables and an information exploration activity index from web shopping logs and use them to classify the regulatory focus of the consumer. This research shows the possibility of systematizing consumer behavior theory as an interdisciplinary research tool bridging social science and information technology. Based on this attempt, we plan to extend the research to IT services that adapt theories from other areas.
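
The abstract only names the idea of classifying regulatory focus from shopping logs; a minimal Python sketch of that kind of pipeline is given below. The feature names, thresholds, and the two-way rule are illustrative assumptions, not the paper's actual behavior variables or classifier.

```python
from collections import defaultdict

def exploration_features(log_rows):
    """log_rows: iterable of dicts such as
    {"user": "u1", "action": "view" | "compare" | "purchase", "item": "p9"}."""
    stats = defaultdict(lambda: {"views": 0, "compares": 0, "purchases": 0, "items": set()})
    for row in log_rows:
        s = stats[row["user"]]
        s[row["action"] + "s"] += 1          # "view" -> "views", etc.
        s["items"].add(row["item"])
    features = {}
    for user, s in stats.items():
        purchases = max(s["purchases"], 1)   # avoid division by zero
        features[user] = {
            "views_per_purchase": s["views"] / purchases,        # breadth of exploration
            "compares_per_purchase": s["compares"] / purchases,  # depth of comparison
            "distinct_items": len(s["items"]),
        }
    return features

def classify_focus(feat, view_cut=10.0, compare_cut=3.0):
    """Toy rule: heavy comparison/exploration -> prevention-focused; otherwise promotion-focused."""
    if feat["compares_per_purchase"] >= compare_cut or feat["views_per_purchase"] >= view_cut:
        return "prevention"
    return "promotion"
```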

Detecting CSRF through Analysis of Web Site Structure and Web Usage Patterns (웹사이트 구조와 사용패턴 분석을 통한 CSRF 공격 탐지)

  • Choi, Jae-Yeong;Lee, Hyuk-Jun;Min, Byung-Jun
    • Convergence Security Journal / v.11 no.6 / pp.9-15 / 2011
  • It is difficult to distinguish attack requests from normal ones when the attacks are based on CSRF, which enables an attacker to transmit fabricated requests of a trusted user to a website. For protection against CSRF, there have been many research efforts, including secret tokens, custom headers, proxies, policy models, CAPTCHA, and user reauthentication. However, there remain ways to defeat these defenses, and CAPTCHA and user reauthentication incur user inconvenience. In this paper, we propose a method to detect CSRF attacks by analyzing the structure of websites and their usage patterns. Potential victim candidates are selected, and website usage patterns are analyzed according to the site structure and usage logs. CSRF attacks can then be detected as requests that deviate from the identified normal usage patterns. In addition, unlike CAPTCHA, the proposed method does not harm user convenience because it requires user intervention only when abnormal requests are detected.
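
As a rough illustration of the approach described above (learning normal usage patterns and intervening only on abnormal requests), the following sketch assumes a simple referer-to-target transition model; the paper's actual structure analysis and victim-candidate selection are not reproduced here.

```python
from collections import defaultdict

def learn_transitions(access_log):
    """access_log: iterable of (referer, target) page pairs observed during normal usage."""
    allowed = defaultdict(set)
    for referer, target in access_log:
        allowed[referer].add(target)
    return allowed

def is_suspicious(allowed, referer, target, state_changing):
    """Check only state-changing requests (the potential CSRF victims)."""
    if not state_changing:
        return False
    return target not in allowed.get(referer, set())

# If is_suspicious(...) is True, the server would ask the user to confirm the action
# instead of rejecting it outright, so normal users are rarely interrupted.
```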

Interplay of Text Mining and Data Mining for Classifying Web Contents (웹 컨텐츠의 분류를 위한 텍스트마이닝과 데이터마이닝의 통합 방법 연구)

  • 최윤정;박승수
    • Korean Journal of Cognitive Science / v.13 no.3 / pp.33-46 / 2002
  • Recently, unstructured data such as website logs, texts, and tables have been flooding the internet. Among these unstructured data are potentially very useful sources, such as the bulletin boards and e-mails used for customer services and the output of search engines. Various text mining tools have been introduced to deal with such data, but most of them lack accuracy compared to traditional data mining tools that handle structured data. Hence, ways have been sought to apply data mining techniques to these text data. In this paper, we propose a text mining system that can incorporate existing data mining methods. We use text mining as a preprocessing tool to generate formatted data to be used as input to the data mining system. The output of the data mining system is in turn used as feedback to guide further categorization by the text mining component. This feedback cycle can enhance the accuracy of the text mining. We apply this method to categorize web sites containing adult or illegal contents. The results show improved categorization performance for previously ambiguous data.

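A minimal sketch of the text-mining-to-data-mining feedback loop described above, assuming scikit-learn purely for illustration; the original system's actual tools, features, and feedback rule are not specified in the abstract, so the refinement step is left as a labeled stub.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_with_feedback(texts, labels, rounds=2):
    """Train a text classifier, then use its low-confidence outputs as feedback."""
    vectorizer = TfidfVectorizer(max_features=5000)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        X = vectorizer.fit_transform(texts)   # text mining: turn raw text into structured features
        clf.fit(X, labels)                    # data mining: learn the category model
        confidence = clf.predict_proba(X).max(axis=1)
        ambiguous = [t for t, c in zip(texts, confidence) if c < 0.6]
        # Hypothetical feedback step: re-examine the `ambiguous` documents and refine their
        # preprocessing or labels before the next round (left unimplemented in this sketch).
    return vectorizer, clf
```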

Event Log Analysis Framework Based on the ATT&CK Matrix in Cloud Environments (클라우드 환경에서의 ATT&CK 매트릭스 기반 이벤트 로그 분석 프레임워크)

  • Yeeun Kim;Junga Kim;Siyun Chae;Jiwon Hong;Seongmin Kim
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.2 / pp.263-279 / 2024
  • With the increasing trend of cloud migration, security threats in the cloud computing environment have also increased significantly, and the importance of efficient incident investigation through log data analysis is being emphasized. In cloud environments, the diversity of services and the ease of resource creation generate a large volume of log data, so it is difficult to determine which events to investigate when an incident occurs, and examining all of the extensive log data requires considerable time and effort. Therefore, a systematic approach for efficient data investigation is necessary. CloudTrail, the Amazon Web Services (AWS) logging service, collects logs of all API call events occurring in an account, but it offers no insight into which logs should be analyzed in the event of an incident. This paper proposes an automated analysis framework that integrates the ATT&CK Cloud Matrix with event information for efficient incident investigation. The framework enables simultaneous examination of user behavior log events, event frequency, and attack information. We believe the proposed framework contributes to cloud incident investigations by efficiently identifying critical events based on the ATT&CK framework.
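
As a rough sketch of the kind of processing such a framework performs, the snippet below counts CloudTrail events per user and annotates them with ATT&CK techniques. The event-to-technique mapping is a made-up illustration, not the paper's Cloud Matrix mapping.

```python
import json
from collections import Counter

# Illustrative mapping only; a real framework would maintain a much larger table.
EVENT_TO_TECHNIQUE = {
    "ConsoleLogin": "T1078 Valid Accounts",
    "CreateAccessKey": "T1098 Account Manipulation",
    "GetObject": "T1530 Data from Cloud Storage",
}

def summarize_cloudtrail(path):
    """Read one CloudTrail log file; return event frequencies plus ATT&CK-annotated events."""
    with open(path) as f:
        records = json.load(f)["Records"]
    freq = Counter()
    annotated = []
    for r in records:
        name = r.get("eventName", "")
        user = r.get("userIdentity", {}).get("arn", "unknown")
        freq[name] += 1
        technique = EVENT_TO_TECHNIQUE.get(name)
        if technique:
            annotated.append((r.get("eventTime"), user, name, technique))
    return freq, annotated
```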

A Study on Vulnerability Prevention Mechanism Due to Logout Problem Using OAuth (OAuth를 이용한 로그아웃 문제로 인한 취약점 방지 기법에 대한 연구)

  • Kim, Jinouk;Park, Jungsoo;Nguyen-Vu, Long;Jung, Souhwan
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.1 / pp.5-14 / 2017
  • Many web services that use the OAuth protocol allow users to log in with personal profile information provided by resource servers, which saves users the inconvenience of registering for new memberships. However, when a user finishes using an OAuth client web service, the resource server may remain logged in even after the user logs out of the client, which can lead to the leakage of personal information. In this paper, we propose a solution that mitigates this threat with an additional security behavior check: when a user requests to log out of the client web service, he or she is shown a confirmation notification about the state of the resource server and can decide whether or not to log out of it as well. With the proposed method, users who log in through the OAuth protocol on public PCs, such as those in department stores, libraries, and print shops, can prevent the personal information leakage that may arise from forgetting to check the other OAuth-related services. To verify our study, we implemented a client web service that uses the OAuth 2.0 protocol and integrated it with our security behavior check. The result shows that, with this additional function, users obtain better security when dealing with resource authorization in OAuth 2.0 implementations.
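
A conceptual sketch of the proposed extra check might look like the following. The function names, the confirmation prompt, and the use of an RFC 7009-style revocation endpoint as a stand-in for "logging out of the resource server" are all assumptions; real providers differ in how that session is actually ended.

```python
import requests

def logout(client_session, ask_user, revocation_url="https://provider.example/oauth/revoke"):
    """End the client session, then optionally end the provider-side session as well."""
    token = client_session.pop("access_token", None)
    client_session.clear()                      # the client web service session always ends
    if token and ask_user("Also log out of the account provider on this (possibly shared) PC?"):
        # RFC 7009-style token revocation used here only as a stand-in for ending the
        # resource-server session; real providers expose different mechanisms.
        requests.post(revocation_url, data={"token": token})
```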

Development Direction of Domestic Online Games into Social Network Online Games (국내 온라인 게임의 SNOG로의 발전 방향)

  • Kim, Tae-Yul;Kyung, Byung-Pyo;Ryu, Seuc-Ho;Lee, Wan-Bok
    • Journal of Digital Convergence / v.10 no.1 / pp.423-428 / 2012
  • With the shift to Web 2.0, users who once consumed shared information passively have changed into active participants who create and exchange their own information, using features that can be accessed simply and easily. Korean social network services such as me2day, Cyworld, and C-log, as well as foreign services such as Facebook and Twitter, have seen a surge in users, and many of those users play social network games (SNGs) such as FarmVille and Mafia Wars. Compared with overseas markets, however, the domestic SNG market is not yet as active. This paper proposes that domestic online games develop into SNOGs (Social Network Online Games), a new form of game for the domestic market that combines the strengths of online games with the currently popular SNS concept, and presents three development directions: flexible expansion, expanded accessibility, and expanded social features.

A Control Path Analysis Mechanism for Workflow Mining (워크플로우 마이닝을 위한 제어 경로 분석 메커니즘)

  • Min Jun-Ki;Kim Kwang-Hoon;Chung Jung-Su
    • Journal of Internet Computing and Services / v.7 no.1 / pp.91-99 / 2006
  • This paper proposes a control path analysis mechanism to be used in a workflow mining framework that maximizes workflow traceability and rediscoverability by analyzing the complete set of control path sequences of a workflow model and by rediscovering their runtime enactment history from workflow log information. The mechanism has two components: one generates the complete set of control path sequences from a workflow model by transforming it into a control path decision tree, and the other rediscovers the runtime enactment history of each control path from the corresponding workflow's execution logs. Together, this rediscovered knowledge and execution history make up control-path-oriented intelligence about the workflow model, which is an essential ingredient for maintaining and reengineering the quality of the workflow model. Based on this workflow intelligence, the workflow model can be gradually refined, and its quality maximized, through repeated redesign and reengineering over its whole life cycle.

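The two components described above can be illustrated on a toy workflow graph as follows; this is a simplified sketch, not the paper's control path decision tree or log format.

```python
def enumerate_control_paths(graph, start, end):
    """graph: dict mapping an activity to its successor activities (assumed acyclic).
    Returns every start-to-end control path, i.e. the 'total sequences' of the model."""
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            stack.append((nxt, path + [nxt]))
    return paths

def rediscover_history(paths, execution_logs):
    """Count how often each enumerated control path appears in the enactment logs."""
    counts = {tuple(p): 0 for p in paths}
    for trace in execution_logs:        # trace: ordered list of executed activities
        key = tuple(trace)
        if key in counts:
            counts[key] += 1
    return counts
```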

A New Latent Class Model for Analysis of Purchasing and Browsing Histories on EC Sites

  • Goto, Masayuki;Mikawa, Kenta;Hirasawa, Shigeichi;Kobayashi, Manabu;Suko, Tota;Horii, Shunsuke
    • Industrial Engineering and Management Systems / v.14 no.4 / pp.335-346 / 2015
  • The electronic commerce site (EC site) has become an important marketing channel where consumers can purchase many kinds of products, and their access logs, including purchase records and browsing histories, are saved in the EC sites' databases. These log data can be utilized for web marketing. Customers who purchase many product items are good customers, whereas customers who browse many items but do not purchase many of them are not necessarily good customers. If the attributes of good customers and those of other customers are clarified, such information is valuable input for devising a new marketing strategy. Regarding the product items, the characteristics of good items that are bought by many users are likewise valuable information, and a method is needed to analyze such characteristics efficiently. This paper proposes a new latent class model that analyzes both purchasing and browsing histories to form latent item and user clusters. An example of data analysis on an EC site using the proposed model is demonstrated. From the clusters obtained by the proposed latent class model and the classification rules obtained by a decision tree model, new findings are extracted from the purchasing and browsing history data.
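
The abstract does not spell out the model, but a generic latent class (aspect) decomposition of user-item co-occurrence data, usually estimated with EM, has the following textbook form; the paper's joint model of purchases and browsing may differ.

```latex
% Generic latent class (aspect) model: user u and item i are linked through a latent class z.
% Textbook form only; the paper's joint purchase/browsing model may differ.
P(u, i) \;=\; \sum_{z} P(z)\, P(u \mid z)\, P(i \mid z)
```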

Digital Epidemiology: Use of Digital Data Collected for Non-epidemiological Purposes in Epidemiological Studies

  • Park, Hyeoun-Ae;Jung, Hyesil;On, Jeongah;Park, Seul Ki;Kang, Hannah
    • Healthcare Informatics Research / v.24 no.4 / pp.253-262 / 2018
  • Objectives: We reviewed digital epidemiological studies to characterize how researchers are using digital data by topic domain, study purpose, data source, and analytic method. Methods: We reviewed research articles published within the last decade that used digital data to answer epidemiological research questions. Data were abstracted from these articles using a data collection tool that we developed. Finally, we summarized the characteristics of the digital epidemiological studies. Results: We identified six main topic domains: infectious diseases (58.7%), non-communicable diseases (29.4%), mental health and substance use (8.3%), general population behavior (4.6%), environmental, dietary, and lifestyle factors (4.6%), and vital status (0.9%). We identified four categories for the study purpose: description (22.9%), exploration (34.9%), explanation (27.5%), and prediction and control (14.7%). We identified eight categories for the data sources: web search queries (52.3%), social media posts (31.2%), web portal posts (11.9%), webpage access logs (7.3%), images (7.3%), mobile phone network data (1.8%), global positioning system data (1.8%), and others (2.8%). Regarding analytic methods, 50.5% of the studies used correlation analyses, 41.3% regression analyses, 25.6% machine learning, and 19.3% descriptive analyses. Conclusions: Digital data collected for non-epidemiological purposes are being used to study health phenomena in a variety of topic domains. Digital epidemiology requires access to large datasets and advanced analytics. Ensuring open access is clearly at odds with the desire to have as little personal data as possible in these large datasets to protect privacy. The establishment of data cooperatives with restricted access may be a solution to this dilemma.

Implementation of Autonomous Intrusion Analysis Agent(AIAA) and Tool for using Intruder Retrace (인터넷 해킹피해 시스템자동분석에이젼트(AIAA) 및 침입자 역추적 지원도구 구현)

  • Im, Chae-Ho;Won, Yu-Heon
    • The Transactions of the Korea Information Processing Society / v.6 no.11S / pp.3410-3419 / 1999
  • The Autonomous Intrusion Analysis Agent (AIAA) is a tool for Incident Response (IR) team staff that scans, analyzes, and reports the traces of an intrusion, and raises alerts, based on the system logs and the intruder's backdoors inside a compromised system after a security incident is reported to the IR team. AIAA is intelligent enough to determine which of the user accounts may belong to the intruder and to report the suspected candidates to the master control system in the IR team. The IR staff controlling AIAA through the master system can pick an intruder from the candidates reported by the agent and review all related summary reports and details, including the source host's name, finger information, all illegal behavior, and so on. AIAA is moved to a compromised system by the staff to investigate the signature of the intrusion along the trace of victim hosts, and it also operates in a covert mode to detect further intrusions. AIAA stays alive in all victim systems until the incident is closed, and the IR staff can control AIAA's operation and interact with the agent through a Web interface.

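Purely as a toy illustration of how an analysis agent might nominate intruder candidates from system logs (the original 1999 tool's log sources and heuristics are not described in detail here), consider the following sketch; the file paths and patterns are hypothetical.

```python
import re
from collections import Counter

# Hypothetical patterns; a real agent would also inspect backdoors and many other log types.
SUSPICIOUS = [
    re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)"),
    re.compile(r"Accepted password for (\S+) from (\S+)"),
]

def rank_candidates(logfile, known_hosts=frozenset()):
    """Score accounts by log-in activity from hosts the IR staff does not recognize."""
    scores = Counter()
    with open(logfile) as f:
        for line in f:
            for pat in SUSPICIOUS:
                m = pat.search(line)
                if m and m.group(2) not in known_hosts:
                    scores[m.group(1)] += 1
    return scores.most_common()          # candidate intruder accounts, most suspicious first
```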