• Title/Summary/Keyword: automatic login (자동로그인)


Relaying Rogue AP detection scheme using SVM (SVM을 이용한 중계 로그 AP 탐지 기법)

  • Kang, Sung-Bae;Nyang, Dae-Hun;Choi, Jin-Chun;Lee, Sok-Joon
    • Journal of the Korea Institute of Information Security & Cryptology / v.23 no.3 / pp.431-444 / 2013
  • Widespread use of smartphones and wireless LANs brings with it a threat called the rogue AP. When a user connects to a rogue AP, it can mount a man-in-the-middle attack against the user and easily acquire the user's private information. Much research has been conducted on how to detect various kinds of rogue APs, and in this paper we propose an algorithm to identify and detect a rogue AP that impersonates a regular AP by broadcasting the regular AP's SSID while relaying traffic through the regular AP. Users are easily deceived because the rogue AP's SSID looks the same as that of the regular AP. To detect this type of rogue AP, we use a machine learning algorithm called the SVM (Support Vector Machine). Our algorithm detects rogue APs with more than 90% accuracy and automatically adjusts its detection criteria. We demonstrate its performance through experiments.
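The abstract does not list the paper's exact features, so the following is a minimal sketch only: a pure-Python linear SVM trained by hinge-loss SGD on hypothetical round-trip-time features, on the assumption that a relaying rogue AP adds an extra wireless hop and therefore inflates latency.

```python
import random

def train_linear_svm(samples, labels, epochs=500, lr=0.05, lam=0.01):
    """Minimal linear SVM via hinge-loss SGD (a sketch, not the paper's setup)."""
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    data = list(zip(samples, labels))
    rng = random.Random(0)
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:  # y is +1 (rogue) or -1 (regular)
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:  # margin violated: move toward the sample
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:           # margin satisfied: regularization shrink only
                w = [wi * (1 - lr * lam) for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Hypothetical features: (mean round-trip time in ms, RTT variance).
regular = [(2.0, 0.1), (2.2, 0.2), (1.9, 0.15), (2.1, 0.1)]
rogue = [(5.5, 1.2), (6.0, 1.5), (5.8, 1.1), (6.2, 1.4)]
X = regular + rogue
y = [-1] * len(regular) + [1] * len(rogue)
w, b = train_linear_svm(X, y)
low_latency = predict(w, b, (1.5, 0.1))   # expected regular (-1)
high_latency = predict(w, b, (7.0, 1.5))  # expected rogue (+1)
```

A production detector would use a kernel SVM library and the paper's actual measurement features; this sketch only illustrates the classification step.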

Automated Emotional Tagging of Lifelog Data with Wearable Sensors (웨어러블 센서를 이용한 라이프로그 데이터 자동 감정 태깅)

  • Park, Kyung-Wha;Kim, Byoung-Hee;Kim, Eun-Sol;Jo, Hwi-Yeol;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices / v.23 no.6 / pp.386-391 / 2017
  • In this paper, we propose a system that automatically assigns experience-based emotion tags to wearable sensor data collected in everyday life. Four types of emotion tags are defined, considering both the user's own emotions and the information the user sees and hears. From data collected by multiple wearable sensors, we train a machine-learning-based tagging system that combines known auxiliary tools from existing affective computing research and assigns emotion tags. To show the usefulness of this multi-modality-based emotion tagging system, quantitative and qualitative comparisons with an existing single-modality-based emotion recognition approach are performed.
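One common way to combine multiple sensor modalities into a single tag is late fusion: each modality produces per-tag confidences, and a weighted sum picks the winner. The modality names, weights, and tags below are illustrative assumptions, not the paper's actual configuration.

```python
def fuse_modalities(modality_scores, weights):
    """Late fusion: weighted sum of per-modality tag confidences."""
    fused = {}
    for name, scores in modality_scores.items():
        w = weights.get(name, 1.0)
        for tag, score in scores.items():
            fused[tag] = fused.get(tag, 0.0) + w * score
    return max(fused, key=fused.get)  # tag with the highest fused score

# Hypothetical per-modality classifier outputs for one time window.
sample = {
    "heart_rate": {"excited": 0.7, "calm": 0.3},
    "camera":     {"excited": 0.4, "calm": 0.6},
    "audio":      {"excited": 0.6, "calm": 0.4},
}
tag = fuse_modalities(sample, {"heart_rate": 1.0, "camera": 0.8, "audio": 0.5})
```

The weights would normally be learned from labeled data rather than fixed by hand.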

Development of facial recognition application for automation logging of emotion log (감정로그 자동화 기록을 위한 표정인식 어플리케이션 개발)

  • Shin, Seong-Yoon;Kang, Sun-Kyoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.4 / pp.737-743 / 2017
  • The intelligent life-log system proposed in this paper identifies and records a myriad of everyday information about various events (when, where, with whom, what, and how), that is, a wide variety of contextual information involving person, scene, age, emotion, relation, state, location, moving route, etc., attaches a unique tag to each piece of such information, and allows users quick and easy access to it. Context awareness generates and classifies information on a per-tag basis using auto-tagging and biometric recognition technology, and builds a situational-information database. In this paper, we develop an active modeling method and an application that recognizes neutral and smiling expressions from lip lines in order to record emotion information automatically.
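The lip-line idea can be sketched geometrically: in image coordinates (y grows downward), a smile raises the mouth corners above the lip mid-line. The landmark layout below is a simplifying assumption; the paper's active model fits a full lip contour.

```python
def is_smile(lip_points):
    """Classify smile vs neutral from lip-line landmarks (x, y).

    Assumes image coordinates where y grows downward: in a smile, the
    mouth corners sit higher (smaller y) than the lip centre.
    """
    ordered = sorted(lip_points, key=lambda p: p[0])  # left to right
    left, right = ordered[0], ordered[-1]
    centre = ordered[len(ordered) // 2]
    corner_y = (left[1] + right[1]) / 2.0
    return corner_y < centre[1]

# Three-point lip lines: flat for neutral, corners raised for a smile.
neutral = [(0, 10), (5, 10), (10, 10)]
smiling = [(0, 8), (5, 12), (10, 8)]
```

A real pipeline would first locate the lips with a landmark detector; this sketch only shows the final decision rule.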

A study on Translation-, Magnification- and Rotation- Invariant automatic Inspection System Development (이동, 배율, 회전에 무관한 자동 검사 장치 개발에 관한 연구)

  • O, Chun-Seok;Im, Jong-Seol
    • The Transactions of the Korea Information Processing Society / v.6 no.4 / pp.1136-1142 / 1999
  • Visual inspection of translated, magnified, and rotated objects is difficult owing to limited recognition rates. In this paper, we define the Integral Logarithm Transform (ILT), examine its characteristics for implementing a translation-, magnification-, and rotation-invariant inspection system, and compare it with other methods in terms of inspection error rate. The magnification- and rotation-invariance properties of the ILT make it easier than other methods to extract the rotation angle. The new method employs the ILT for good/bad inspection of translated, magnified, and rotated objects, and experiments confirm translation, magnification, and rotation invariance, which other methods cannot provide simultaneously. The experimental results show that the method does not improve the recognition rate over the self-organizing map, but it demonstrates the possibility of being used as a tool for a good/bad inspection system.
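The abstract does not define the ILT itself, so as a stand-in the sketch below uses the standard log-polar mapping, which exhibits the same kind of invariance the paper relies on: scaling about the centre becomes a shift along the log-radius axis, and rotation becomes a shift along the angle axis, so the rotation angle can be read off as a translation.

```python
import math

def to_log_polar(x, y, cx=0.0, cy=0.0):
    """Map a Cartesian point to (log radius, angle) about a centre."""
    dx, dy = x - cx, y - cy
    return math.log(math.hypot(dx, dy)), math.atan2(dy, dx)

# Scale a point by 2 and rotate it by 30 degrees about the origin.
r1, a1 = to_log_polar(3.0, 4.0)
s, theta = 2.0, math.pi / 6
x2 = s * (3.0 * math.cos(theta) - 4.0 * math.sin(theta))
y2 = s * (3.0 * math.sin(theta) + 4.0 * math.cos(theta))
r2, a2 = to_log_polar(x2, y2)
# In log-polar space the combined transform is a pure translation:
# r2 - r1 == log(2) and a2 - a1 == pi/6.
```

This is an illustration of the invariance principle only, not a reconstruction of the ILT or the paper's inspection pipeline.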


A Table Parametric Method for Automatic Generation of Parametric CAD Models in a Mold Base e-Catalog System (몰드베이스 전자 카탈로그 시스템의 파라메트릭 CAD 모델 자동 생성을 위한 테이블 파라메트릭 방법)

  • Mun, Du-Hwan;Kim, Heung-Ki;Jang, Kwang-Sub;Cho, Jun-Myun;Kim, Jun-Hwan;Han, Soon-Hung
    • The Journal of Society for e-Business Studies / v.9 no.4 / pp.117-136 / 2004
  • As time-to-market becomes more important to the competitiveness of a manufacturing enterprise, it becomes important to shorten the product development cycle. Faster product development requires reuse of existing design models and e-Catalogs for components. To achieve this, an electronic catalog must provide parametric CAD models, since parametric information is indispensable for configuration design. Building a parametric library of every necessary combination in a CAD system is difficult because a product has too many combinations of components; for example, a single page of a paper catalog of a mold base lists at least 80 million combinations. To solve this problem, we propose the table parametric method for the automatic generation of parametric CAD models. Any combination of mold base components can be generated by mapping between the classification system of an electronic catalog and the design parameter sets of the table parametric method. We also propose how to select parametric models and how to construct the design parameter sets.
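The core idea, one template per part family plus a parameter table instead of millions of stored models, can be sketched as a lookup that maps a catalog classification to a design-parameter set. All family names, sizes, and dimensions below are illustrative assumptions.

```python
# Hypothetical parameter table: (family, structure, size) -> dimensions.
PARAMETER_TABLE = {
    ("mold_base", "2-plate", "S"): {"width": 200, "length": 250, "plate_thickness": 30},
    ("mold_base", "2-plate", "M"): {"width": 300, "length": 350, "plate_thickness": 40},
    ("mold_base", "3-plate", "M"): {"width": 300, "length": 400, "plate_thickness": 45},
}

def generate_parametric_model(family, structure, size):
    """Map an e-Catalog classification to a concrete parametric model.

    Instead of storing every combination as a CAD file, one template per
    family is instantiated on demand with parameters from the table.
    """
    params = PARAMETER_TABLE[(family, structure, size)]
    return {"template": f"{family}_{structure}", "parameters": dict(params)}

model = generate_parametric_model("mold_base", "2-plate", "M")
```

A real system would feed these parameters into the CAD system's parametric API; the table lookup is the part the paper's method automates.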


Building Data Warehouse System for Weblog Analysis (웹로그 분석을 위한 데이터 웨어하우스 시스템 구축)

  • Lee, Joo-Il;Baek, Kyung-Min;Shin, Joo-Hahn;Lee, Won-Suk
    • Korea Society of IT Services: Conference Proceedings / 2010.05a / pp.291-295 / 2010
  • Recent rapid advances in hardware technology and database systems have made it possible to automatically collect data from a wide variety of fields around us. Techniques for efficiently processing the continuously produced, large-volume data commonly referred to as data streams and extracting useful information from them have already been studied extensively in many application areas. The Internet is one of the major sources of such data streams. With the growth of Internet business, weblog data streams have come to be widely used in marketing, strategy formulation, customer management, and other areas, and the demand for more accurate and efficient analysis keeps growing. A data warehouse is a repository that integrates collected data by subject and loads it in time-series form, and it has long been used for analysis and decision support. Because a data warehouse summarizes, integrates, and cleanses data, it is well suited to processing large volumes of data and improves data quality, so it has also been widely used as a preprocessing stage in data mining. In this paper, we propose a system that builds a data warehouse for weblog data streams in order to obtain higher-quality, useful information efficiently.
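The load path of such a warehouse can be sketched as a tiny ETL step: cleanse raw weblog records, then aggregate them into a subject-oriented, time-series fact table. The record fields, the hour-bucket key, and the cleansing rule below are illustrative assumptions.

```python
from collections import defaultdict

def load_warehouse(weblog_stream):
    """ETL sketch: cleanse weblog records, then aggregate into a fact
    table keyed by (hour bucket, page), i.e. hits per page per hour."""
    facts = defaultdict(int)
    for record in weblog_stream:
        if record.get("status") != 200:  # cleansing: drop failed requests
            continue
        hour = record["timestamp"][:13]  # "YYYY-MM-DDTHH" time bucket
        facts[(hour, record["page"])] += 1
    return dict(facts)

stream = [
    {"timestamp": "2010-05-01T09:12", "page": "/home", "status": 200},
    {"timestamp": "2010-05-01T09:45", "page": "/home", "status": 200},
    {"timestamp": "2010-05-01T09:50", "page": "/cart", "status": 404},
    {"timestamp": "2010-05-01T10:05", "page": "/home", "status": 200},
]
facts = load_warehouse(stream)
```

In a real deployment the fact table would live in a database and be refreshed incrementally as the stream arrives.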


UX Analysis for Mobile Devices Using MapReduce on Distributed Data Processing Platform (MapReduce 분산 데이터처리 플랫폼에 기반한 모바일 디바이스 UX 분석)

  • Kim, Sungsook;Kim, Seonggyu
    • KIPS Transactions on Software and Data Engineering / v.2 no.9 / pp.589-594 / 2013
  • As the concept of web characteristics represented by openness and mind sharing grows more popular, device log data generated by both users and developers have become increasingly complicated. For this reason, a log-data processing mechanism that automatically produces meaningful data sets from a large number of log records has become necessary for mobile device UX (User eXperience) analysis. In this paper, we define the attributes of the log data to be analyzed, reflecting the characteristics of a mobile device, and collect real log data from mobile device users. Using the MapReduce programming paradigm on the Hadoop platform, we perform a mobile device UX analysis in a distributed processing environment with the collected log data. We then demonstrate the effectiveness of the proposed analysis mechanism by applying various combinations of Map and Reduce steps to produce a simple data schema from the large volume of complex log records.
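A MapReduce job of this shape can be imitated locally in a few lines: the map phase emits key-value pairs from each log record, a shuffle groups values by key, and the reduce phase aggregates each group. Hadoop distributes these same steps across a cluster; the record fields below (app name, session duration) are illustrative, not the paper's schema.

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit (key, value) pairs, here (app, session seconds)."""
    for r in records:
        yield (r["app"], r["session_seconds"])

def reduce_phase(pairs):
    """Shuffle values by key, then reduce each group with a sum."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: sum(vals) for key, vals in grouped.items()}

logs = [
    {"app": "browser", "session_seconds": 120},
    {"app": "camera", "session_seconds": 30},
    {"app": "browser", "session_seconds": 60},
]
usage = reduce_phase(map_phase(logs))  # total usage time per app
```

Swapping in different map keys (screen, time of day, gesture type) yields the "various combinations of Map and Reduce steps" the abstract mentions.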

A Study on VoiceXML Application of User-Controlled Form Dialog System (사용자 주도 폼 다이얼로그 시스템의 VoiceXML 어플리케이션에 관한 연구)

  • Kwon, Hyeong-Joon;Roh, Yong-Wan;Lee, Hyon-Gu;Hong, Hwang-Seok
    • The KIPS Transactions: Part B / v.14B no.3 s.113 / pp.183-190 / 2007
  • VoiceXML is a new markup language, based on XML, designed for navigating web resources by voice. VoiceXML applications are classified into mutual-controlled and machine-controlled form dialog structures. Such dialog structures cannot support services in which users freely navigate web resources, because the scenario is decided by the application developer. In this paper, we propose a VoiceXML application structure using a user-controlled form dialog system that decides the service scenario according to the user's intention. The proposed application automatically detects recognition candidates in the information requested by the user and uses each candidate as a voice anchor, connecting each voice anchor to a new voice node. As an example of the proposed system, we implement a news service with an IT term dictionary, confirm the detection and registration of voice anchors, and measure the hit rate of information successively offered according to the user's intention as well as the response speed. The experimental results confirm that web resources can be navigated more freely than with existing VoiceXML form dialog systems.
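The anchor-detection step can be sketched as a dictionary lookup over the document's terms: every term found in the IT term dictionary becomes a voice anchor linked to a new dialog node. The node-naming scheme and data shapes below are hypothetical; the real system would emit VoiceXML form elements for each anchor.

```python
def detect_voice_anchors(document_terms, dictionary):
    """Terms found in the dictionary become voice anchors, each linked
    to a new dialog node the user can jump to by speaking the term."""
    anchors = {}
    for term in document_terms:
        if term in dictionary:
            anchors[term] = f"node_{term}"  # hypothetical node naming
    return anchors

it_dictionary = {"XML", "SVM", "HTTP"}
article = ["VoiceXML", "uses", "XML", "over", "HTTP"]
anchors = detect_voice_anchors(article, it_dictionary)
```

The grammar of the speech recognizer would then be rebuilt from the anchor set, which is what lets the user, rather than the developer, steer the dialog.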

IoT-based Automatic Greenhouse Snow Removal System (IoT기반 비닐하우스 자동제설 시스템)

  • Jun-young Cheon;Eun-ser Lee
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.132-133 / 2023
  • Using a Raspberry Pi, we implemented a device that detects and prevents the risk of greenhouse collapse caused by heavy snowfall. A pressure sensor installed on top of the greenhouse detects accumulated snow, and when the accumulation is judged to exceed a certain amount, a snow remover driven by a servo motor is activated. Through an application, users can register, log in, and choose whether the snow remover is controlled automatically or manually.
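The control logic described, a threshold on the pressure reading plus an auto/manual switch, fits in one decision function. The threshold value and field names are illustrative assumptions; on the actual device this decision would drive the servo via the Raspberry Pi's GPIO.

```python
def snow_removal_step(pressure_reading, threshold, mode, manual_on=False):
    """One control-loop step: decide whether to run the servo-driven
    snow remover for the current sensor reading."""
    if mode == "manual":
        return manual_on          # user decides via the application
    return pressure_reading >= threshold  # auto: run when snow load is high

# Auto mode: a heavy reading triggers the remover, a light one does not.
run_heavy = snow_removal_step(pressure_reading=8.5, threshold=5.0, mode="auto")
run_light = snow_removal_step(pressure_reading=2.0, threshold=5.0, mode="auto")
```

A hysteresis band (separate on/off thresholds) would be a sensible addition to keep the motor from chattering around the threshold.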

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it continues to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data and cannot easily expand nodes by distributing stored data when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when data volume grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. 
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log-analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's analysis conditions, while the aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log-data processing system that uses only MySQL, measuring log-insert and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log-insert performance evaluation over various chunk sizes.
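The collector's routing rule, real-time targets to the relational store and bulk targets to the document store, can be sketched as a small dispatch function. The field names and the "realtime" flag are illustrative assumptions; the real collector classifies by log type.

```python
def route_log(record):
    """Sketch of the log collector's routing rule: records needing
    real-time graphs go to MySQL, bulk records go to MongoDB, where
    the Hadoop-based module later processes them in parallel."""
    if record.get("realtime"):
        return "mysql"
    return "mongodb"

logs = [
    {"type": "transaction", "realtime": True},
    {"type": "access", "realtime": False},
]
destinations = [route_log(r) for r in logs]
```

Keeping the routing decision in one place makes it easy to add new stores or reclassify a log type without touching the collection path.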