• Title/Summary/Keyword: Information processing Model


Development of Win32 API Message Authorization System for Windows based Application Provision Service (윈도우 기반 응용프로그램 제공 서비스를 위한 Win32 API 메시지 인가 시스템의 개발)

  • Kim, Young-Ho;Jung, Mi-Na;Won, Yong-Gwan
    • The KIPS Transactions: Part C / v.11C no.1 / pp.47-54 / 2004
  • The growth of computing resources and network speeds has increased demand for the use of remotely located computer systems over networks. This has driven research into application service provision based on the server-based remote computing paradigm. That paradigm has evolved into the ASP (Application Service Provision) model, which gives remote users access to application programs through an application sharing protocol. Security requirements such as confidentiality, availability, and integrity must be satisfied to provide ASP service on a centralized computing system. Existing Telnet or FTP services for remote computing systems have met security requirements with simple access control to files and/or data. However, a Windows-based centralized computing system in which many users share the same application program installed on the same computer is vulnerable with respect to confidentiality, availability, and integrity. In other words, the system needs a per-user security level, so that only authorized users or groups of users can run specific functional commands of the program. In this paper, we propose a Windows-based centralized computing system that sets per-user security policies for the use of application program instructions and performs access control on those instructions according to the policies. The system monitors all user messages generated through the graphical user interface by users connected to the system. All instructions, i.e. messages, destined for the application program are passed to an authorization process that decides whether an instruction is delivered to the application program based on the pre-defined security policies. The system can thus serve as a per-user security clearance mechanism for shared computing resources as well as shared application programs.
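
The abstract describes an authorization process that checks each GUI message against per-user policies before delivery. The following is a minimal Python sketch of that decision logic only; the policy table, user names, and command IDs are hypothetical, and the real system intercepts Win32 messages inside the application sharing protocol rather than plain function calls.

```python
# Minimal sketch of per-user message authorization (hypothetical policy table; the
# WM_COMMAND code follows the Win32 convention, the rest is illustrative).
WM_COMMAND = 0x0111   # menu/toolbar command messages

# Security policy: for each user, the set of (message, command id) pairs that are allowed.
POLICY = {
    "alice": {(WM_COMMAND, 0x0010),            # e.g. "File > Open"
              (WM_COMMAND, 0x0020)},           # e.g. "File > Save"
    "bob":   {(WM_COMMAND, 0x0010)},           # read-only user: may open but not save
}

def authorize(user: str, msg: int, wparam: int) -> bool:
    """Return True if the intercepted GUI message may be delivered to the application."""
    if msg != WM_COMMAND:
        return True                            # only command messages are policy-controlled here
    return (msg, wparam) in POLICY.get(user, set())

# An interception layer would call authorize() before forwarding each message:
if authorize("bob", WM_COMMAND, 0x0020):
    pass                                       # deliver the message to the application window
else:
    print("message blocked by security policy")
```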

Dynamic Polling Algorithm Based on Line Utilization Prediction (선로 이용률 예측 기반의 동적 폴링 기법)

  • Jo, Gang-Hong;An, Seong-Jin;Jeong, Jin-Uk
    • The KIPS Transactions: Part C / v.9C no.4 / pp.489-496 / 2002
  • This study proposes a new polling algorithm that dynamically changes the polling period based on line utilization prediction. Polling is the most important function in network monitoring, but excessive polling traffic aggravates congestion when the network is already congested. Existing adaptive polling algorithms therefore estimate network congestion or agent load from previously measured polling round-trip times or line utilization, change the polling period accordingly, and thereby control polling traffic. However, such algorithms adjust the period only from past polls and do not reflect the network conditions at the moment the next poll actually occurs. The algorithm proposed in this study predicts, from past data, whether polling traffic will exceed a line-utilization threshold on the polling path and changes the polling period according to that prediction. The utilization of each line in the network was predicted with an AR (autoregressive) model, and threshold violation was expressed as a probability. The proposed prediction-based dynamic polling algorithm was then applied to an actual network to evaluate its suitability, and a reasonable line-utilization threshold and violation probability were determined experimentally. These steps maximized the performance of the algorithm.
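
The paper's prediction step can be illustrated with a short sketch: fit an AR model to past line utilization, predict the next value, and estimate the probability that it exceeds a threshold. This is not the authors' implementation; the utilization history, AR order, threshold, and polling periods below are assumptions for illustration.

```python
# Sketch of utilization prediction with an AR model and threshold-violation probability.
import numpy as np
from scipy.stats import norm

def violation_probability(history, threshold, order=3):
    """Fit an AR(order) model by least squares and return P(next utilization > threshold)."""
    x = np.asarray(history, dtype=float)
    # Build a lagged design matrix: x[t] ~ c + a1*x[t-1] + ... + ap*x[t-p]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    X = np.column_stack([np.ones(len(X)), X])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma = resid.std(ddof=order + 1)
    pred = coef[0] + coef[1:] @ x[-1:-order - 1:-1]     # one-step-ahead prediction
    return 1.0 - norm.cdf(threshold, loc=pred, scale=sigma)

utilization = [0.42, 0.45, 0.47, 0.50, 0.55, 0.58, 0.61, 0.66]  # fraction of line capacity
p = violation_probability(utilization, threshold=0.7)
polling_period = 60 if p > 0.2 else 30      # lengthen the polling period when violation is likely
print(f"violation probability {p:.2f}, next polling period {polling_period}s")
```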

Enabling reuse driven software development : lessons learned from embedded software industry practice (재사용 기반의 소프트웨어 개발 체계 구축 : 내장형 소프트웨어 영역의 기업 사례)

  • Kim Kang-Tae
    • The KIPS Transactions: Part D / v.13D no.2 s.105 / pp.271-278 / 2006
  • This paper presents industry feedback and a case study of an improvement effort to enable reuse-driven software development, one of several activities for improving software quality and productivity at a company that develops software embedded in consumer electronic products. Several case studies related to software reuse strategies and practices are introduced to show how to establish a reuse foundation in a company, how to apply it to development teams and projects, and how to improve it through trial and error. Enabling reuse-oriented software development in a very large company requires an integrated, focused approach spanning technical, management, and environmental viewpoints. We addressed the problem in the technical area with a reuse method, in the management area with reuse metrics, and in the environment area with a reuse repository. The characteristics of our software development environment can be summarized as follows: first, the embedded software is not independent of hardware devices; second, the company is very large and develops an extremely diverse range of products through many different organizations with different domain characteristics; and third, development lead times are extremely short and many variant models stem from basic models. We expect that presenting our experience will contribute to industry practitioners struggling with similar problems and can serve as a reference model for enabling software reuse in real-world practice.

Evaluation of Multivariate Stream Data Reduction Techniques (다변량 스트림 데이터 축소 기법 평가)

  • Jung, Hung-Jo;Seo, Sung-Bo;Cheol, Kyung-Joo;Park, Jeong-Seok;Ryu, Keun-Ho
    • The KIPS Transactions: Part D / v.13D no.7 s.110 / pp.889-900 / 2006
  • Although sensor networks differ in user requests and data characteristics across application areas, existing research on the stream data transmission problem focuses on improving the performance of individual methods rather than considering the original characteristics of stream data. In this paper, we introduce a hierarchical, distributed sensor network architecture and data model, and then evaluate multivariate data reduction methods with respect to user requirements and data features so that reduction methods can be applied selectively. To assess the relative performance of the multivariate data reduction methods, we used conventional techniques such as Wavelet, HCL (Hierarchical Clustering), Sampling, and SVD (Singular Value Decomposition), together with experimental data sets including multivariate time series, synthetic data, and robot execution failure data. The experimental results show that SVD and Sampling are superior to Wavelet and HCL with respect to relative error ratio and execution time. In particular, since the relative error ratio of each data reduction method differs according to data characteristics, selecting the reduction method to match the data set yields good performance. The findings reported in this paper can serve as a useful guideline for designing and building sensor network applications that involve multivariate stream data.
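
As an illustration of one of the evaluated techniques, the sketch below reduces a multivariate stream window with a truncated SVD and computes a relative error ratio. The window size, rank, and random data are assumptions; the paper's data sets and error definition may differ.

```python
# Sketch of SVD-based reduction of a (samples x variables) stream window and its relative error.
import numpy as np

def svd_reduce(window: np.ndarray, rank: int) -> np.ndarray:
    """Approximate the window with a rank-`rank` reconstruction."""
    U, s, Vt = np.linalg.svd(window, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def relative_error(original: np.ndarray, reduced: np.ndarray) -> float:
    """Frobenius-norm relative error ratio between original and reduced data."""
    return np.linalg.norm(original - reduced) / np.linalg.norm(original)

rng = np.random.default_rng(0)
window = rng.normal(size=(128, 6))            # 128 samples of a 6-variable stream
approx = svd_reduce(window, rank=2)
print(f"relative error ratio: {relative_error(window, approx):.3f}")
```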

Vulnerability Analysis and Detection Mechanism against Denial of Sleep Attacks in Sensor Network based on IEEE 802.15.4 (IEEE 802.15.4기반 센서 네트워크에서 슬립거부 공격의 취약성 분석 및 탐지 메커니즘)

  • Kim, A-Reum;Kim, Mi-Hui;Chae, Ki-Joon
    • The KIPS Transactions: Part C / v.17C no.1 / pp.1-14 / 2010
  • IEEE 802.15.4 [1] standardizes the physical and MAC layers of LR-WPANs (Low Rate Wireless Personal Area Networks) as a technology for low-power operation in sensor networks. The standard is used in a variety of short-range wireless applications with limited output and performance, such as wireless sensors or virtual wire, but it contains vulnerabilities to various attacks owing to the lack of security research. In this paper, we analyze the vulnerabilities of the IEEE 802.15.4 MAC layer to denial of sleep attacks and propose a detection mechanism against them. We analyzed the feasibility of denial of sleep attacks through modification of the superframe, modification of the CW (Contention Window), the channel scan or PAN association process, and so on. Moreover, we found that some of these attacks can be mounted even when standardized security services such as encryption and authentication are in use. We then model denial of sleep attacks carried out with Beacon/Association Request messages and propose a detection mechanism against them. The detection mechanism uses a management table consisting of the interval and node ID of request messages and their signal strength. Simulation results show the effect of the attacks, the feasibility of detection, and the performance advantages of the proposed mechanism.
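
The abstract describes the management table only at a high level. The sketch below shows one plausible realization of such a table keyed by node ID, with request interval and signal strength checks; the node IDs, thresholds, and flooding rule are illustrative assumptions, not the paper's parameters.

```python
# Plausible sketch of a request-message management table for denial-of-sleep detection.
from collections import defaultdict

MIN_INTERVAL = 0.5      # seconds: requests arriving faster than this look like flooding
MAX_RSSI_DELTA = 10.0   # dB: a large jump in signal strength for the same node ID is suspicious

last_seen = defaultdict(lambda: {"time": None, "rssi": None})

def check_request(node_id: str, timestamp: float, rssi: float) -> bool:
    """Return True if a Beacon/Association Request from node_id looks like an attack."""
    entry = last_seen[node_id]
    suspicious = False
    if entry["time"] is not None and timestamp - entry["time"] < MIN_INTERVAL:
        suspicious = True                       # too-frequent requests keep victims awake
    if entry["rssi"] is not None and abs(rssi - entry["rssi"]) > MAX_RSSI_DELTA:
        suspicious = True                       # same ID, very different signal strength: possible spoofing
    last_seen[node_id] = {"time": timestamp, "rssi": rssi}
    return suspicious

print(check_request("node-07", 10.0, -62.0))    # False: first request from this node
print(check_request("node-07", 10.1, -40.0))    # True: too soon and an implausible RSSI jump
```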

An Analysis of Korean Dependency Relation by Homograph Disambiguation (동형이의어 분별에 의한 한국어 의존관계 분석)

  • Kim, Hong-Soon;Ock, Cheol-Young
    • KIPS Transactions on Software and Data Engineering / v.3 no.6 / pp.219-230 / 2014
  • Dependency relation analysis is the task of determining the governor and the dependent between the words in a sentence. The dependency relations of a predicate are established by the patterns and selectional restrictions of the predicate's subcategorization. This paper proposes a method for analyzing Korean dependency relations using homograph predicates disambiguated in the morphological analysis phase. Each disambiguated homograph predicate has a different pattern. In particular, by reusing the stage transition training dictionary used for POS and homograph tagging, we propose a method for fixing the dependency relation of {noun+postposition, predicate} pairs, and we analyze the accuracy and the effect of homographs on dependency relation analysis. We used the Sejong phrase-structured corpus for the experiments, transforming it into dependency structures and tagging homographs. In the experiments, the accuracy of dependency relation analysis with homograph disambiguation is 80.38%, an increase of 0.42% over the accuracy without disambiguation. The Z-value in a statistical hypothesis test at the 1% significance level is $|Z| = 4.63 \geq z_{0.01} = 2.33$, so we conclude that homographs affect dependency relation analysis, and that the stage transition training dictionary used for POS and homograph tagging contributes 7.14% to the accuracy of dependency relation analysis.
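
The reported Z-value suggests a two-proportion test on the two accuracies. The sketch below shows that kind of test; the number of test instances is a hypothetical assumption (the abstract does not state it), so the printed statistic will not exactly reproduce the paper's 4.63.

```python
# Sketch of a two-proportion z-test comparing accuracy with and without homograph disambiguation.
from math import sqrt

def two_proportion_z(p1: float, p2: float, n1: int, n2: int) -> float:
    """Z statistic for comparing two accuracies measured on n1 and n2 instances."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)               # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))        # pooled standard error
    return (p1 - p2) / se

n = 380_000                                           # hypothetical number of dependency arcs
z = two_proportion_z(0.8038, 0.7996, n, n)            # 80.38% vs 79.96% (a 0.42% gain)
print(f"|Z| = {abs(z):.2f}; reject H0 at the 1% level if |Z| >= 2.33")
```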

A Method of Detecting the Aggressive Driving of Elderly Driver (노인 운전자의 공격적인 운전 상태 검출 기법)

  • Koh, Dong-Woo;Kang, Hang-Bong
    • KIPS Transactions on Software and Data Engineering / v.6 no.11 / pp.537-542 / 2017
  • Aggressive driving is a major cause of car accidents. Previous studies have mainly analyzed young drivers' aggressive driving tendencies, but only through pure clustering or classification techniques from machine learning. Since elderly people have different driving habits owing to their fragile physical condition, a new method such as enhancing the characteristics of driving data is needed to properly analyze aggressive driving by elderly drivers. In this study, acceleration data collected from a smartphone in a driving vehicle is analyzed with a newly proposed ECA (Enhanced Clustering method for Acceleration data) technique, coupled with conventional clustering techniques (K-means clustering and the expectation-maximization algorithm). ECA selects high-intensity data from the cluster groups detected by K-means and EM across all subjects' data and models the characteristic data through scaled values. With this method, aggressive driving data could be extracted for both the young and the elderly experiment participants, unlike with the pure clustering methods. We further found that K-means clustering has higher detection efficiency than the EM method, and the K-means results show that the young drivers' driving intensity is 1.29 times higher than that of the elderly drivers. In conclusion, the proposed method can detect aggressive driving maneuvers from data of elderly drivers with low operating intensity, and it can be used to build a customized safe driving system for elderly drivers. In the future, it will be possible to detect abnormal driving conditions and use the collected data for early warnings to drivers.
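
The conventional clustering step the paper builds on can be sketched as below: cluster acceleration magnitudes with K-means and EM and take the higher-mean cluster as the aggressive candidate set. The synthetic data and the cluster-selection rule are illustrative assumptions, not the paper's ECA procedure.

```python
# Sketch of K-means vs. EM clustering on acceleration magnitudes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
calm = rng.normal(1.0, 0.3, size=(300, 1))        # low acceleration magnitude (m/s^2)
aggressive = rng.normal(4.0, 0.8, size=(40, 1))   # hard-braking / rapid-acceleration events
X = np.vstack([calm, aggressive])

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
em_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

def high_intensity_cluster(X, labels):
    """Pick the cluster with the larger mean magnitude as the 'aggressive' candidate set."""
    means = [X[labels == k].mean() for k in np.unique(labels)]
    return X[labels == int(np.argmax(means))]

print("K-means aggressive samples:", len(high_intensity_cluster(X, km_labels)))
print("EM aggressive samples:     ", len(high_intensity_cluster(X, em_labels)))
```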

A Classification Method of Delirium Patients Using Local Covering-Based Rule Acquisition Approach with Rough Lower Approximation (러프 하한 근사를 갖는 로컬 커버링 기반 규칙 획득 기법을 이용한 섬망 환자의 분류 방법)

  • Son, Chang Sik;Kang, Won Seok;Lee, Jong Ha;Moon, Kyoung Ja
    • KIPS Transactions on Software and Data Engineering / v.9 no.4 / pp.137-144 / 2020
  • Delirium is among the most common mental disorders encountered in patients with temporary cognitive impairment such as consciousness disorder, attention disorder, and poor speech, particularly among older patients. Delirium is distressing for patients and families, can interfere with the management of symptoms such as pain, and is associated with increased elderly mortality. The purpose of this paper is to generate useful clinical knowledge for distinguishing the outcomes of patients with delirium in long-term care facilities. To that end, we extracted clinical classification knowledge associated with delirium using a local covering rule acquisition approach with the rough lower approximation region. The clinical applicability of the proposed method was verified using data collected from a prospective cohort study. From the results, we found six useful pieces of clinical evidence that the duration of delirium can exceed 12 days, and we confirmed that eight factors, namely BMI, Charlson Comorbidity Index, hospitalization path, nutrition deficiency, infection, sleep disturbance, bed scores, and diaper use, are important in distinguishing the outcomes of delirium patients. The classification performance of the proposed method was verified by comparison with three benchmark models, an ANN, an SVM with RBF kernel, and Random Forest, using five-fold cross-validation. Compared with the SVM model, which had the highest classification performance of the three, the proposed method showed average improvements of 0.6% in accuracy and 2.7% in AUC.
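
The benchmark protocol (five-fold cross-validation of the comparison models on accuracy and AUC) can be sketched as below. The synthetic features merely stand in for the eight clinical factors, and the rough-set local covering method itself is not reproduced here.

```python
# Sketch of the five-fold cross-validation comparison of two of the benchmark models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=8, random_state=0)  # stand-in for 8 clinical factors

models = {
    "SVM (RBF kernel)": SVC(kernel="rbf", random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: accuracy {acc:.3f}, AUC {auc:.3f}")
```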

Surficial Sediment Classification using Backscattered Amplitude Imagery of Multibeam Echo Sounder(300 kHz) (다중빔 음향 탐사시스템(300 kHz)의 후방산란 자료를 이용한 해저면 퇴적상 분류에 관한 연구)

  • Park, Yo-Sup;Lee, Sin-Je;Seo, Won-Jin;Gong, Gee-Soo;Han, Hyuk-Soo;Park, Soo-Chul
    • Economic and Environmental Geology / v.41 no.6 / pp.747-761 / 2008
  • To experiment with acoustic remote classification of seabed sediment, we acquired ground-truth data (i.e. video and grab samples) and developed post-processing for an automatic classification procedure based on 300 kHz multibeam echo sounder (MBES) backscatter data, acquired with a Kongsberg Simrad EM3000 at Sokcho Port, East Sea of South Korea. The sonar signal and its classification performance were checked against geo-referenced video imagery with the aid of a GIS (Geographic Information System). The depth of the research site ranged from 5 m to 22.7 m, and the backscatter amplitude ranged from -36 dB to -15 dB. The mean grain sizes of sediment from equally spaced sampling sites (50 m interval) varied from $2.86\phi$ to $0.88\phi$. To obtain the main feature for seabed classification from the MBES backscatter amplitude, we evaluated the correlation between backscatter amplitude and the properties of the sediment samples. The performance of the proposed remote seabed classification was evaluated by comparing human expert segmentation with the automatic algorithm results. The cross-model perception error ratio of the automatic classification algorithm is 8.95% at rocky bottoms and 2.06% in the area with low mean grain size.
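
The correlation step between backscatter amplitude and sediment properties can be illustrated as follows; the per-site values are made up for illustration and are not the survey measurements.

```python
# Sketch of correlating MBES backscatter amplitude with mean grain size at sampling sites.
import numpy as np

backscatter_db = np.array([-34.0, -31.5, -28.0, -24.5, -21.0, -17.5])  # mean amplitude per sample site
grain_size_phi = np.array([2.80, 2.40, 2.05, 1.60, 1.20, 0.90])        # mean grain size (phi scale)

r = np.corrcoef(backscatter_db, grain_size_phi)[0, 1]
print(f"Pearson correlation between backscatter and grain size: {r:.2f}")
# A strong (negative) correlation would support using backscatter amplitude as the
# main feature for remote sediment classification.
```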

DEM_Comp Software for Effective Compression of Large DEM Data Sets (대용량 DEM 데이터의 효율적 압축을 위한 DEM_Comp 소프트웨어 개발)

  • Kang, In-Gu;Yun, Hong-Sik;Wei, Gwang-Jae;Lee, Dong-Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.2 / pp.265-271 / 2010
  • This paper discusses a new software package, DEM_Comp, developed for effectively compressing large digital elevation model (DEM) data sets based on Lempel-Ziv-Welch (LZW) compression and Huffman coding. DEM_Comp was developed in C++ and runs on Windows operating systems. It was tested on various test sites with different terrain attributes, and the results were evaluated. Recently, high-resolution DEMs have been obtained using new equipment and related technologies such as LiDAR (Light Detection and Ranging) and SAR (Synthetic Aperture Radar). DEM compression is useful because it reduces disk space and transmission bandwidth. Generally, data compression divides into two processes: i) analyzing the relationships in the data and ii) deciding on the compression and storage methods. DEM_Comp uses a three-step compression algorithm consisting of pre-processing of the regular-grid DEM, Lempel-Ziv compression, and Huffman coding. When pre-processing alone was applied to high- and low-relief terrain, the efficiency was approximately 83%; after completing all three steps of the algorithm, it increased to 97%. Compared with general commercial compression software, these results represent approximately 14% better performance. DEM_Comp as developed in this research offers a more efficient way of distributing, storing, and managing large high-resolution DEMs.
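
The same pipeline idea (pre-process the regular-grid DEM, then apply Lempel-Ziv plus Huffman coding) can be sketched in a few lines. This is not DEM_Comp itself: zlib's DEFLATE (LZ77 plus Huffman) stands in for the paper's LZW and Huffman steps, and the synthetic terrain is an assumption.

```python
# Sketch of DEM compression with delta pre-processing followed by a Lempel-Ziv/Huffman codec.
import numpy as np
import zlib

rng = np.random.default_rng(0)
dem = np.cumsum(rng.integers(-2, 3, size=(512, 512)), axis=1).astype(np.int16)  # smooth synthetic elevations

raw = dem.tobytes()
deltas = np.diff(dem, axis=1, prepend=dem[:, :1])   # pre-processing: row-wise elevation differences

plain = zlib.compress(raw, level=9)
preprocessed = zlib.compress(deltas.tobytes(), level=9)

print(f"raw size:                  {len(raw):>8} bytes")
print(f"compressed, no pre-proc:   {len(plain):>8} bytes")
print(f"compressed, deltas first:  {len(preprocessed):>8} bytes")
```

The delta step typically shrinks the symbol range dramatically on smooth terrain, which is why the paper's pre-processing alone already yields most of the compression gain.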