• Title/Summary/Keyword: model based systems


Analysis of Research Trends of the Information Security Audit Area Through Literature Review (문헌 분석을 통한 정보보안 감사 분야의 국내 및 국제 연구동향 분석)

  • So, Youngjae;Hwang, Kyung Tae
    • Informatization Policy
    • /
    • v.30 no.4
    • /
    • pp.3-39
    • /
    • 2023
  • With the growing importance of information and information systems, information security is increasingly emphasized, and the significance of information security audit as a tool for maintaining a proper security level is growing as well. The objectives of the study are to identify overall research trends and to propose future research areas by analyzing domestic and overseas research in the area. To achieve these objectives, 103 research papers were analyzed based on both general and subject-related criteria. The major research results are as follows: in terms of research approach, more empirical studies are needed; for the subject "Auditor," studies to develop a framework for related variables (e.g., capability) are needed; for "Audit Activities/Procedures," future research should focus on the process and results of detailed audit activities; future domestic research on "Audit Areas" should look to the new technology, industry, and security areas covered by foreign studies; for "Audit Objective/Impact," studies to define the variables (e.g., performance and quality) systematically and comprehensively are needed; and for "Audit Standards/Guidelines," research on models and guidelines needs to continue.

Assessment of potential carbon storage in North Korea based on forest restoration strategies (북한 산림복원 전략에 따른 탄소저장량 잠재성 평가)

  • Wonhee Cho;Inyoo Kim;Dongwook Ko
    • Korean Journal of Environmental Biology
    • /
    • v.41 no.3
    • /
    • pp.204-214
    • /
    • 2023
  • This study aimed to comprehensively assess the potential impact of deforestation and forest restoration on carbon storage in North Korea until 2050, through rigorous analysis of past land use change trends and projection of future land use change scenarios. We utilized the CA-Markov model, which can reflect spatial trends in land use changes, and verified the impact of forest restoration strategies on carbon storage by creating land use change scenarios (reforestation and non-reforestation). We employed land use maps from two distinct periods (2000 to 2010 and 2010 to 2020). To assess the overall terrestrial carbon storage in North Korea, our evaluation included estimations of carbon storage in various pools, such as above-ground, below-ground, soil, and debris (including litter), for settlement, forest, cultivated, grass, and bare areas. Our results demonstrated that effective forest restoration strategies in North Korea have the potential to increase carbon storage by 4.4% by 2050, relative to the carbon storage observed in 2020. In contrast, if deforestation continues without forest restoration efforts, we predict a concerning decrease in carbon storage of 11.5% by 2050, compared to 2020 levels. Our findings underscore the significance of prioritizing and continuing forest restoration efforts to effectively increase carbon storage in North Korea. Furthermore, the implications presented in this study are expected to be used in the formulation and implementation of long-term forest restoration strategies in North Korea, while fostering international cooperation towards this common environmental goal.
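The Markov half of a CA-Markov projection can be sketched as follows: a row-stochastic transition matrix estimated from two land-use maps is applied repeatedly to project class shares forward. The class names, matrix values, and shares below are illustrative assumptions, not data from the study (the full CA-Markov model additionally applies cellular-automata spatial filtering, which is omitted here).

```python
import numpy as np

classes = ["forest", "cultivated", "grass", "settlement", "bare"]

# P[i][j] = probability that a cell in class i at time t is in class j
# one transition period (here, 10 years) later. Values are hypothetical.
P = np.array([
    [0.90, 0.05, 0.02, 0.01, 0.02],  # forest
    [0.03, 0.90, 0.03, 0.02, 0.02],  # cultivated
    [0.10, 0.05, 0.80, 0.02, 0.03],  # grass
    [0.00, 0.01, 0.01, 0.97, 0.01],  # settlement
    [0.15, 0.05, 0.05, 0.02, 0.73],  # bare
])

shares_2020 = np.array([0.55, 0.20, 0.10, 0.05, 0.10])  # illustrative shares

def project(shares, P, steps):
    """Project land-use class shares forward `steps` transition periods."""
    for _ in range(steps):
        shares = shares @ P
    return shares

shares_2050 = project(shares_2020, P, steps=3)  # 3 x 10-year steps to 2050
```

Carbon storage per scenario would then follow by multiplying each projected class area by a per-class carbon density.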

Foreigner Tourists Acceptance of Surtitle Information Service: Focusing on Transformed TAM and Effects of Perceived Risks (외국 관광객의 공연자막 서비스 수용에 관한 연구 - 변형된 기술수용모형과 인지된 위험의 효과 검증을 중심으로 -)

  • Kim, Seoung Gon;Heo, Shik
    • Korean Association of Arts Management
    • /
    • no.50
    • /
    • pp.213-241
    • /
    • 2019
  • Recently, interest in the economic contribution of performing arts to urban tourist attraction has been increasing, and policy projects supporting surtitles for foreign tourists are expanding. The purpose of this study is therefore to explore the acceptance process of surtitle systems using the TAM (Technology Acceptance Model), in order to understand the relations among factors affecting foreign tourists' viewing of performances with surtitle service. Data for the empirical analysis were collected in a survey of foreign tourists who had experienced performance surtitles delivered on smart pads in three languages. The results of this study are as follows. First, the higher the information system quality of the performance surtitles, the higher the perceived usefulness of the surtitles. Second, for Korean performances, a decreasing level of both performance-based risk and psychological risk has a positive influence on viewing intention, but a decreasing level of financial risk has a negative influence on viewing intention. Third, a decreasing level of performance risk has a positive influence on perceived usefulness, while a decreasing level of psychological risk has a negative influence on perceived usefulness. Finally, psychological risk has a moderating effect on viewing intention, exerting a negative influence on perceived usefulness.

A Study on the Policy Direction of the Online Platform Industry: Focusing on PEST-SWOT-AHP Analysis for Scholars and Researchers (온라인 플랫폼 산업의 정책 방향성 연구: 학자 및 연구자 대상 PEST-SWOT-AHP 분석을 중심으로)

  • Sun-Ho Park
    • Journal of Industrial Convergence
    • /
    • v.22 no.5
    • /
    • pp.1-10
    • /
    • 2024
  • This study proposes a developmental policy direction for the online platform industry, moving away from the regulatory-centered discussions that have predominated thus far. To offer policy directions, the PEST-SWOT-AHP analysis model was employed. The study first categorizes the issues of the domestic online platform industry into political, economic, social, and technological aspects, which are then further categorized into 16 strengths, weaknesses, opportunities, and threats. The relative importance among these factors was measured, leading to the derivation of four final strategies. The analysis indicates that policy directions should prioritize addressing weaknesses, with 'improving regulations that hinder innovation' being the most important factor across all categories, while technological factors were consistently rated highly in importance apart from this. Accordingly, the policy direction for the domestic online platform industry suggests avoiding excessive regulation and instead emphasizing policy support centered around technological development. This study is significant in that it presents a macroscopic developmental direction for online platform policies that have not been discussed in existing academic research, and it provides professional and objective indicators through consensus among scholars and researchers. In the future, it is hoped that research will continue to propose detailed policy strategies and implementation systems based on a macroscopic perspective.
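The AHP step of such a PEST-SWOT-AHP analysis can be illustrated with a short sketch: priority weights are derived from a pairwise-comparison matrix via the geometric-mean method, and a consistency ratio checks the judgments. The four criteria and all judgment values below are hypothetical, not the study's survey data.

```python
import numpy as np

criteria = ["political", "economic", "social", "technological"]

# A[i][j] = relative importance of criterion i over j (Saaty's 1-9 scale).
A = np.array([
    [1.0, 1/2, 1/3, 1/5],
    [2.0, 1.0, 1/2, 1/3],
    [3.0, 2.0, 1.0, 1/2],
    [5.0, 3.0, 2.0, 1.0],
])

def ahp_weights(A):
    """Priority vector via the geometric-mean (row) method, normalized to 1."""
    gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return gm / gm.sum()

def consistency_ratio(A, w, ri=0.90):
    """CR = CI / RI; RI = 0.90 is Saaty's random index for n = 4."""
    n = A.shape[0]
    lam_max = ((A @ w) / w).mean()   # estimate of the principal eigenvalue
    ci = (lam_max - n) / (n - 1)
    return ci / ri

w = ahp_weights(A)
cr = consistency_ratio(A, w)  # judgments are acceptable if CR < 0.1
```

In this illustrative matrix the technological criterion receives the largest weight, mirroring the study's finding that technological factors were consistently rated as highly important.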

Performance of Passive UHF RFID System in Impulsive Noise Channel Based on Statistical Modeling (통계적 모델링 기반의 임펄스 잡음 채널에서 수동형 UHF RFID 시스템의 성능)

  • Jae-sung Roh
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.6
    • /
    • pp.835-840
    • /
    • 2023
  • RFID (Radio Frequency Identification) systems are attracting attention as a key component of Internet of Things technology because of the cost and energy efficiency of their application services. To use RFID technology in IoT application service fields, it must be possible not only to perform simple recognition between the reader and tag of the RFID system, but also to store and manage various information for long periods. And to read and write information to tags, performance improvement technology that is robust and reliable over poor wireless channels is needed. In particular, in a UHF (Ultra High Frequency) RFID system, since multiple tags communicate passively in a crowded environment, it is essential to improve the recognition rate and transmission speed of individual tags. In this paper, Middleton's Class A impulsive noise model was selected to analyze the performance of the RFID system in an impulsive noise environment, and FM0 encoding and Miller encoding were applied to the tag to analyze the error rate performance of the RFID system. The analysis of the RFID system in Middleton's Class A impulsive noise channel showed that the larger the Gaussian-to-impulsive noise power ratio and the impulsive noise index, the more similar the channel characteristics are to a Gaussian noise channel.
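A minimal sketch of drawing samples from Middleton's Class A model, as commonly formulated: each sample conditions on a Poisson number of active impulse sources. The parameter values (impulsive index A, Gaussian-to-impulsive power ratio gamma) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_a_noise(n, A=0.1, gamma=0.1, sigma2=1.0, rng=rng):
    """Draw n samples of Middleton Class A noise with total power sigma2.

    Per sample: m ~ Poisson(A) active impulse sources, then a zero-mean
    Gaussian with conditional variance sigma2 * (m/A + gamma) / (1 + gamma),
    so the unconditional power is sigma2 regardless of A and gamma.
    """
    m = rng.poisson(A, size=n)
    var_m = sigma2 * (m / A + gamma) / (1.0 + gamma)
    return rng.normal(0.0, np.sqrt(var_m), size=n)

samples = class_a_noise(200_000)
```

As A and gamma grow large, the mixture collapses toward a single Gaussian component, which matches the paper's observation that the channel then behaves like a Gaussian noise channel; for small A and gamma, the samples are markedly heavy-tailed.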

An Empirical Study on the Influencing Factors for Big Data Intented Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang;Kim, Jin-soo
    • Asia pacific journal of information systems
    • /
    • v.24 no.4
    • /
    • pp.443-472
    • /
    • 2014
  • To survive in the global competitive environment, an enterprise should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems effectively and improving competitiveness through its varied problem-solving and advanced predictive capabilities. Owing to this remarkable potential, the implementation of big data systems has increased in enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to provide competitive superiority. Big data is in the limelight because, while conventional IT technology has lagged in what it makes possible, big data goes beyond those technological limits and can be used to create new value, such as business optimization and new business creation, through analysis. However, since big data has often been introduced hastily, without considering the strategic value to be deduced and achieved through it, enterprises face difficulties in deducing strategic value and utilizing the data. According to a survey of 1,800 IT professionals from 18 countries worldwide, only 28% of corporations utilized big data well, and many respondents reported difficulties in deducing strategic value and operating through big data. To introduce big data, the strategic value should be deduced and environmental factors such as internal and external regulations and systems should be considered, but these factors were not well reflected. The cause of failure turned out to be that big data was introduced by way of IT trends and the surrounding environment, hastily and before the conditions for introduction were in place.
The strategic value obtainable through big data should be clearly comprehended, and systematic environmental analysis of applicability is very important for successful introduction; however, since corporations consider only partial achievements and technological aspects, successful introduction is often not achieved. Previous work shows that most big data research focuses on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, finding influencing factors for successful big data systems implementation, and analyzing empirical models. To do this, the elements that can affect the intention to introduce big data were deduced by reviewing information system success factors, strategic value perception factors, factors in the information system introduction environment, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to the people in charge of big data inside corporations, and statistical analysis was performed. The analysis showed that strategic value perception factors and intra-industry environmental factors positively affected the intention to introduce big data. The theoretical, practical, and policy implications deduced from the results are as follows.
The first theoretical implication is that this study proposes factors affecting the intention to introduce big data by reviewing strategic value perception, environmental factors, and preceding big data studies, and proposes variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it measures the influence of each variable on introduction intention by verifying the relationships between independent and dependent variables through a structural equation model. Second, this study defines the independent variables (strategic value perception, environment), dependent variable (introduction intention), and moderating variables (type of business and corporate size) for big data introduction intention, and lays a theoretical base for subsequent empirical study of big data by developing measurement items with established reliability and validity. Third, by verifying the significance of the strategic value perception and environmental factors proposed in preceding studies, this study can aid subsequent empirical studies of the factors affecting big data introduction. The operational implications are as follows. First, this study lays an empirical base for the big data field by investigating the cause-and-effect relationship between strategic value perception and environmental factors and introduction intention, and by proposing measurement items with established reliability and validity. Second, this study shows that strategic value perception positively affects big data introduction intention, underscoring the importance of strategic value perception.
Third, the study proposes that a corporation introducing big data should do so after precise analysis of its industry's internal environment. Fourth, this study proposes that the size and type of business of the corporation should be considered when introducing big data, by presenting how the influencing factors differ depending on corporate size and type of business. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be accessed in various ways in products, services, productivity, decision making, and other fields, and can be utilized across all business fields on that basis, but the main domestic corporations are considering only limited parts of the product and service fields. Accordingly, when introducing big data, it is necessary to review the utilization phases in detail and design the big data system in a form that maximizes the utilization rate. Second, the study points out the burden of system introduction cost, difficulty in utilizing the system, and lack of credibility of supplier corporations in the big data introduction phase. Since global IT corporations dominate the big data market, the big data introduction of domestic corporations cannot but depend on foreign corporations. Considering that Korea does not have global IT corporations even though it is a world-leading IT country, big data can be seen as a chance to foster world-class corporations. Accordingly, the government needs to foster star corporations through active policy support. Third, corporations lack internal and external professional manpower for big data introduction and operation.
In big data, how valuable insight can be deduced from the data matters more than the system construction itself. This calls for talent equipped with academic knowledge and experience in various fields such as IT, statistics, strategy, and management, and such talent should be trained through systematic education. This study lays a theoretical base for empirical studies of big data by identifying and verifying the main variables that affect big data introduction intention, and is expected to offer useful guidelines for corporations and policy makers who are considering big data implementation by empirically analyzing that theoretical base.

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.1-16
    • /
    • 2015
  • Traffic accidents have been one of the major causes of death worldwide for the last several decades. According to World Health Organization statistics, approximately 1.24 million deaths occurred on the world's roads in 2010. To reduce future traffic accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, driver training programs, and so on. Records on traffic accidents are generated and maintained for this purpose. To make these records meaningful and effective, it is necessary to analyze the relationships between traffic accidents and related factors, including vehicle design, road design, weather, driver behavior, etc. Insight derived from such analyses can be used in accident prevention. Traffic accident data mining is an activity to find useful, not-yet-well-known knowledge about such relationships that users may be interested in. Many studies on mining accident data have been reported over the past two decades. Most studies focus on predicting accident risk from accident-related factors. Supervised learning methods such as decision trees, logistic regression, k-nearest neighbors, and neural networks are used for this prediction. However, the prediction models derived by these algorithms are too complex for humans to understand, because the main purpose of these algorithms is prediction, not explanation of the data. Some studies use unsupervised clustering algorithms to divide the data into several groups, but the derived groups themselves are still not easy for humans to understand, so additional analytic work is necessary. Rule-based learning methods are adequate when we want to derive a comprehensible form of knowledge about the target domain. They derive a set of if-then rules that represent relationships between the target feature and other features.
Rules are fairly easy for humans to understand, so they can provide insight and comprehensible results. Association rule learning methods and subgroup discovery methods are representative rule-based learning methods for descriptive tasks. These two families of algorithms have been used in a wide range of areas, from transaction analysis and accident data analysis to detection of statistically significant patient risk groups and discovery of key persons in social communities. We use both association rule learning and subgroup discovery to find useful patterns in a traffic accident dataset consisting of many features, including driver profile, accident location, accident type, vehicle information, regulation violations, and so on. Association rule learning, one of the unsupervised learning methods, searches for frequent item sets in the data and translates them into rules. In contrast, subgroup discovery is a supervised learning method that discovers rules about user-specified concepts satisfying certain degrees of generality and unusualness. Depending on what aspect of the data we focus on, we may combine multiple relevant features of interest into a synthetic target feature and give it to the rule learning algorithms. After a set of rules is derived, some post-processing steps are taken to make the rule set more compact and easier to understand by removing uninteresting or redundant rules. We conducted a set of experiments mining our traffic accident data in both unsupervised and supervised modes to compare these rule-based learning algorithms. Experiments with the traffic accident data reveal that association rule learning, in its pure unsupervised mode, can discover some hidden relationships among the features.
Under a supervised learning setting with a combinatorial target feature, however, the subgroup discovery method finds good rules much more easily than the association rule learning method, which requires a lot of effort to tune its parameters.
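The unsupervised association-rule side of this comparison can be sketched in a few lines: a level-wise (Apriori-style) search for frequent item sets, translated into if-then rules filtered by confidence. The transactions, feature values, and thresholds below are made up for illustration, not the study's accident records.

```python
from itertools import combinations

# Toy "accident records" as sets of categorical feature values.
transactions = [
    {"rain", "night", "speeding", "injury"},
    {"rain", "night", "injury"},
    {"clear", "day", "speeding"},
    {"rain", "night", "speeding", "injury"},
    {"clear", "night", "speeding"},
]

def frequent_itemsets(transactions, min_support=0.4):
    """Level-wise search for item sets whose support meets the threshold."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    freq, size = {}, 1
    candidates = [frozenset([i]) for i in items]
    while candidates:
        level = {}
        for c in candidates:
            sup = sum(c <= t for t in transactions) / n
            if sup >= min_support:
                level[c] = sup
        freq.update(level)
        size += 1
        keys = sorted({i for c in level for i in c})
        candidates = [frozenset(c) for c in combinations(keys, size)]
    return freq

def rules(freq, min_conf=0.8):
    """Translate frequent item sets into if-then rules with high confidence."""
    out = []
    for itemset, sup in freq.items():
        for k in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, k)):
                conf = sup / freq[lhs]
                if conf >= min_conf:
                    out.append((set(lhs), set(itemset - lhs), conf))
    return out

found = rules(frequent_itemsets(transactions))
```

On this toy data the search recovers, among others, the rule {rain} -> {injury}; subgroup discovery differs mainly in fixing the rule head to a user-specified target and ranking rules by generality and unusualness rather than by support and confidence alone.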

Function of the Korean String Indexing System for the Subject Catalog (주제목록을 위한 한국용어열색인 시스템의 기능)

  • Yoon Kooho
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.15
    • /
    • pp.225-266
    • /
    • 1988
  • Various theories and techniques for the subject catalog have been developed since Charles Ammi Cutter first tried to formulate rules for the construction of subject headings in 1876. However, they do not seem appropriate for the Korean language, because the syntax and semantics of Korean differ from those of English and other European languages. This study therefore attempts to develop a new Korean subject indexing system, namely the Korean String Indexing System (KOSIS), in order to increase the use of subject catalogs. For this purpose, the advantages and disadvantages of the classed subject catalog and the alphabetical subject catalog, which are the typical subject catalogs in libraries, are investigated, and most of the notable subject indexing systems, in particular PRECIS, developed for the British National Bibliography, are reviewed and analyzed. KOSIS is a string indexing system based purely on the syntax and semantics of the Korean language, even though considerable principles of PRECIS are applied to it. The outlines of KOSIS are as follows: 1) KOSIS is based on the fundamentals of natural language and an ingenious conjunction of human indexing skills and computer capabilities. 2) KOSIS is a string indexing system based on the 'principle of context-dependency.' A string of terms organized according to this principle shows remarkable affinity with certain patterns of words in ordinary discourse. From that point onward, natural language rather than classificatory terms becomes the basic model for indexing schemes. 3) KOSIS uses 24 role operators. One or more operators should be allocated to the index string, which is organized manually by the indexer's intellectual work, in order to establish the most explicit syntactic relationships among index terms.
4) Traditionally, a single-line entry format is used, in which a subject heading or index entry is presented as a single sequence of words consisting of the entry terms plus, in some cases, an extra qualifying term or phrase. KOSIS instead employs a two-line entry format with three basic positions for the production of index entries: the 'lead' serves as the user's access point, the 'display' contains those terms which are themselves context-dependent on the lead, and the 'qualifier' sets the lead term into its wider context. 5) Each KOSIS entry is co-extensive with the initial subject statement prepared by the indexer, since it displays all the subject specificities. Compound terms are always presented in their natural language order, and inverted headings are not produced in KOSIS; consequently, the precision ratio of information retrieval can be increased. 6) KOSIS uses 5 relational codes for the system of references among semantically related terms. Semantically related terms are handled by a different set of routines, leading to the production of 'see' and 'see also' references. 7) KOSIS was originally developed for a classified catalog system, which requires a subject index, that is, an index which 'translates' subjects expressed in natural language into the appropriate classification numbers. However, KOSIS can also be used for a dictionary catalog system. Accordingly, KOSIS strings can be manipulated to produce either appropriate subject indexes for a classified catalog system or acceptable subject headings for a dictionary catalog system. 8) KOSIS is able to maintain the consistency of index entries and cross references by means of routine identification of the established index strings and reference system. For this purpose, an individual Subject Indicator Number and Reference Indicator Number are allocated to each new index string and each new index term, respectively.
9) KOSIS can produce all the index entries, cross references, and authority cards by either manual or mechanical methods. Thus, detailed algorithms for the machine production of various outputs are provided for institutions with computer facilities.
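The two-line entry rotation described in point 4) can be illustrated with a small sketch in the spirit of PRECIS-style "shunting": each term of a context-dependent string takes the lead in turn, with the preceding terms as qualifier and the following terms as display. The example term string and data layout are hypothetical, not KOSIS's actual notation.

```python
def make_entries(string_terms):
    """Rotate a context-dependent term string into two-line index entries.

    For each term taken as the lead, the preceding terms (its wider
    context) become the qualifier and the following terms, which are
    context-dependent on the lead, become the display.
    """
    entries = []
    for i, lead in enumerate(string_terms):
        qualifier = list(reversed(string_terms[:i]))  # nearest context first
        display = string_terms[i + 1:]
        entries.append({"lead": lead, "qualifier": qualifier, "display": display})
    return entries

# An illustrative term string ordered from wider to narrower context.
entries = make_entries(["Korea", "libraries", "catalogs", "maintenance"])
```

Each dictionary corresponds to one index entry, giving the user an access point under every term of the string while preserving the principle of context-dependency.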


Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification, with one label out of two classes; multi-class classification, with one label out of several classes; and multi-label classification, with multiple labels out of several classes. Multi-label classification in particular requires a different training method from binary and multi-class classification because of its characteristic of having multiple labels. In addition, since the number of labels to be predicted increases as the number of labels and classes increases, performance improvement is limited by the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed label, and (iii) restores the predicted label to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they cannot capture non-linear relationships between labels, and thus cannot create a latent label space that sufficiently contains the information of the original label space.
Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space; this is related to the vanishing gradient problem that occurs in the backpropagation process of learning. To solve this problem, the skip connection was devised: by adding the input of a layer to its output, it prevents gradients from vanishing during backpropagation, so efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, to form a low-dimensional latent label space that well reflects the information of the high-dimensional label space. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using this, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and to evaluate multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods.
This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across numbers of dimensions of the latent label space.
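The shape of this architecture can be sketched in plain numpy: an autoencoder whose encoder and decoder each contain a residual (skip-connected) block, so the block's input is added back to its output. All dimensions and weights below are illustrative assumptions; this shows the forward pass only, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(42)
n_labels, hidden, latent = 100, 32, 8   # high-dim labels -> low-dim latent

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """Two dense layers whose output is added back to the block input,
    so the identity path lets gradients flow unchanged during training."""
    return relu(x + relu(x @ W1) @ W2)

W_in  = rng.normal(0, 0.1, (n_labels, hidden))
W_r1  = rng.normal(0, 0.1, (hidden, hidden))
W_r2  = rng.normal(0, 0.1, (hidden, hidden))
W_lat = rng.normal(0, 0.1, (hidden, latent))
W_up  = rng.normal(0, 0.1, (latent, hidden))
W_r3  = rng.normal(0, 0.1, (hidden, hidden))
W_r4  = rng.normal(0, 0.1, (hidden, hidden))
W_out = rng.normal(0, 0.1, (hidden, n_labels))

def encode(y):
    """Compress a high-dimensional label vector into the latent label space."""
    h = relu(y @ W_in)
    return residual_block(h, W_r1, W_r2) @ W_lat

def decode(z):
    """Restore a latent vector to logits over the original label space."""
    h = relu(z @ W_up)
    return residual_block(h, W_r3, W_r4) @ W_out

y = (rng.random((4, n_labels)) < 0.05).astype(float)  # sparse multi-labels
z = encode(y)          # (4, 8) latent label vectors
y_hat = decode(z)      # (4, 100) reconstruction logits
```

In the paper's pipeline, a separate classifier would be trained to predict z from the document text, and decode would map that prediction back to keyword labels.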

A study for improvement of far-distance performance of a tunnel accident detection system by using an inverse perspective transformation (역 원근변환 기법을 이용한 터널 영상유고시스템의 원거리 감지 성능 향상에 관한 연구)

  • Lee, Kyu Beom;Shin, Hyu-Soung
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.24 no.3
    • /
    • pp.247-262
    • /
    • 2022
  • In domestic tunnels, it is mandatory to install CCTVs in tunnels longer than 200 m which are also recommended by installation of a CCTV-based automatic accident detection system. In general, the CCTVs in the tunnel are installed at a low height as well as near by the moving vehicles due to the spatial limitation of tunnel structure, so a severe perspective effect takes place in the distance of installed CCTV and moving vehicles. Because of this effect, conventional CCTV-based accident detection systems in tunnel are known in general to be very hard to achieve the performance in detection of unexpected accidents such as stop or reversely moving vehicles, person on the road and fires, especially far from 100 m. Therefore, in this study, the region of interest is set up and a new concept of inverse perspective transformation technique is introduced. Since moving vehicles in the transformed image is enlarged proportionally to the distance from CCTV, it is possible to achieve consistency in object detection and identification of actual speed of moving vehicles in distance. To show this aspect, two datasets in the same conditions are composed with the original and the transformed images of CCTV in tunnel, respectively. A comparison of variation of appearance speed and size of moving vehicles in distance are made. Then, the performances of the object detection in distance are compared with respect to the both trained deep-learning models. As a result, the model case with the transformed images are able to achieve consistent performance in object and accident detections in distance even by 200 m.