• Title/Summary/Keyword: Security Method


KANO-TOPSIS Model for AI Based New Product Development: Focusing on the Case of Developing Voice Assistant System for Vehicles (KANO-TOPSIS 모델을 이용한 지능형 신제품 개발: 차량용 음성비서 시스템 개발 사례)

  • Yang, Sungmin;Tak, Junhyuk;Kwon, Donghwan;Chung, Doohee
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.287-310
    • /
    • 2022
  • Companies' interest in developing AI-based intelligent new products is increasing. Recently, the main concern of companies has been to innovate the customer experience and create new value by developing new products that make effective use of artificial intelligence technology. However, because products based on radical technologies such as artificial intelligence differ from existing products in both their nature and their development methods, there is a clear limit to applying existing development methodologies as they are. This study proposes a new KANO-TOPSIS-based research method for the successful development of AI-based intelligent new products, using a car voice assistant as an example. The KANO model is used to select and evaluate the functions that customers consider necessary for the new product, and the TOPSIS method is used to derive development priorities from the importance customers assign to each function. For the analysis, major categories such as vehicle condition check and function control, driving-related elements, characteristics of the voice assistant itself, infotainment elements, and daily life support elements were selected, and customer demand attributes were subdivided under them. The analysis shows that high recognition accuracy should be the top priority in the development of car voice assistants. Infotainment elements that provide customized content based on the driver's biometric information and usage habits ranked lower than expected, while functions related to driver safety, such as vehicle condition notification, driving assistance, and security, were identified as functions that should be developed first. This study is meaningful in that it presents a new product development methodology suited to the characteristics of innovative AI-based intelligent new products through a model combining KANO and TOPSIS.
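
The TOPSIS ranking step the abstract describes can be illustrated in a few lines. The following is a minimal sketch assuming an invented decision matrix and hypothetical criterion weights rather than the paper's survey data; the function names and scores are purely for demonstration.

```python
import numpy as np

# Illustrative decision matrix: rows = candidate functions, columns = criteria.
# All values and weights are hypothetical, not taken from the paper's survey.
functions = ["recognition accuracy", "vehicle condition alert",
             "driving assistance", "personalized infotainment"]
X = np.array([
    [9.0, 7.0, 6.0],
    [7.0, 8.0, 5.0],
    [6.0, 7.0, 6.0],
    [5.0, 4.0, 8.0],
])
weights = np.array([0.5, 0.3, 0.2])  # assumed criterion weights

# 1. Vector-normalize each column, then apply the weights.
V = weights * X / np.linalg.norm(X, axis=0)

# 2. Ideal best and worst solutions (all criteria treated as benefits here).
best, worst = V.max(axis=0), V.min(axis=0)

# 3. Euclidean distances to both ideals and the closeness coefficient.
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
closeness = d_worst / (d_best + d_worst)

# 4. Rank functions by closeness (higher = higher development priority).
for rank, i in enumerate(np.argsort(-closeness), start=1):
    print(rank, functions[i], round(float(closeness[i]), 3))
```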

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has developed into a field of interest not only for companies and professionals but also for individuals and non-experts, who use it for marketing and for solving social problems by analyzing openly available or directly collected data. In Korea, various companies and individuals are attempting big data analysis, but they struggle from the initial stage because of limited data disclosure and collection difficulties. System improvements for big data activation and disclosure services are being pursued in Korea and abroad, mainly as public open-data services such as the Korean Government 3.0 portal (data.go.kr). Alongside these government efforts, services that share data held by corporations or individuals are also running, but useful data is hard to find because little data is actually shared. In addition, big traffic problems can occur because the entire dataset must be downloaded and examined just to grasp its attributes and basic properties. A new system for big data processing and utilization is therefore needed. First, big data pre-analysis technology is needed to solve the sharing problem. Pre-analysis, a concept proposed in this paper, means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data improves because a user searching for data can immediately grasp its properties and characteristics. Moreover, by sharing only the summary data or sample data generated through pre-analysis, the security problems that may arise when raw data is disclosed are avoided, enabling big data sharing between data providers and data users. Second, appropriate preprocessing results must be generated quickly, according to the disclosure level of the raw data and the network status, and delivered to users through distributed big data processing with Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing data requested by a user, it reduces the data to a size the current network can carry before transmission, so that no big traffic occurs. This paper presents various data sizes according to the disclosure level obtained through pre-analysis, an approach expected to generate far less traffic than the conventional practice of sharing only raw data across many systems. The paper describes how to solve the problems that occur when big data is released and used, and how to facilitate sharing and analysis. The proposed client-server model uses Spark for fast analysis and processing of user requests, and consists of a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent, required on the data provider's side, pre-analyzes the big data to generate a Data Descriptor containing information on Sample Data, Summary Data, and Raw Data; it also performs fast and efficient preprocessing through distributed processing and continuously monitors network traffic. The Client Agent, placed on the data user's side, searches big data quickly through the Data Descriptors produced by pre-analysis and requests the desired data from the server, downloading it according to its disclosure level. The model thus separates the Server Agent and Client Agent roles between the party publishing data and the party using it. In particular, the paper focuses on big data sharing, distributed big data processing, and the big traffic problem, constructs the detailed modules of the client-server model, and presents the design of each module. In a system built on the proposed model, a user who acquires data analyzes it in a desired direction or preprocesses it into new data; by publishing the newly processed data through a Server Agent, the data user takes on the role of data provider. Likewise, a data provider can obtain useful statistical information from the Data Descriptor of its own disclosed data and, using the sample data, become a data user performing new analysis. In this way raw data is processed and the processed big data is used in turn, forming a natural sharing environment in which the roles of data provider and data user are not fixed: everyone can be both a provider and a user. The client-server model thus solves the big data sharing problem, provides an environment in which big data can be disclosed securely, and offers an ideal shared service in which big data is easy to find.
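
As an illustration of the pre-analysis the Server Agent performs, the sketch below builds a Data Descriptor with PySpark. The descriptor's exact schema (row count, column list, summary statistics, shareable sample) is an assumption modeled on the roles the abstract assigns to Summary Data, Sample Data, and Raw Data information.

```python
from pyspark.sql import SparkSession

def build_data_descriptor(path, sample_fraction=0.01):
    """Pre-analyze a raw dataset and return a lightweight Data Descriptor.

    The field names below are assumptions for illustration; the paper's
    actual descriptor format is not specified in the abstract.
    """
    spark = SparkSession.builder.appName("ServerAgentPreAnalysis").getOrCreate()
    df = spark.read.csv(path, header=True, inferSchema=True)

    descriptor = {
        "row_count": df.count(),                        # size of the raw data
        "columns": df.columns,                          # schema overview
        "summary": df.describe().collect(),             # count/mean/stddev/min/max
        "sample": df.sample(fraction=sample_fraction)   # small shareable sample
                    .limit(100).collect(),
    }
    return descriptor
```

A Client Agent could then search across such descriptors to judge a dataset's usefulness before requesting any raw data, which is the traffic-saving point the paper makes.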

Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean;Rho, Kyung-Taeg;Park, Myong-Soon
    • The KIPS Transactions:PartC
    • /
    • v.10C no.5
    • /
    • pp.625-634
    • /
    • 2003
  • More than a decade after its initial proposal, deployment of IP Multicast has been limited by problems such as traffic control in multicast routing, multicast address allocation on the global Internet, and reliable multicast transport. Recently, with the growth of multicast application services such as Internet broadcasting and real-time security information services, overlay multicast has emerged as a new Internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that takes tree switching into account. To find a potential parent, the existing search algorithm descends the tree from the root one level at a time, which causes long join latency. It also tries to select the nearest node as the potential parent, but the node's degree limit can prevent the nearest node from being chosen, so the resulting tree has low efficiency. To reduce join latency and improve tree efficiency, we propose searching two levels of the tree at a time: a node forwards the join request to its own children. In the steady state this adds no tree-maintenance overhead, but when a join request arrives, the larger number of search messages shortens the join latency, and searching more nodes helps construct a more efficient tree. To evaluate the performance of the proposed fast join mechanism, we measure metrics such as search latency, the number of searched nodes, and the number of switches as functions of the number of members and the degree limit. The simulation results show that the proposed mechanism outperforms the existing one.
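
The two-level descent can be sketched as follows. The Node structure, the coordinate-based distance metric (standing in for a measured network distance such as RTT), and the degree-limit handling are assumptions for demonstration, not the paper's protocol messages.

```python
import math

class Node:
    def __init__(self, name, pos, degree_limit=3):
        self.name = name
        self.pos = pos                     # (x, y) stand-in for network position
        self.degree_limit = degree_limit   # max children this node may adopt
        self.children = []

def distance(a, b):
    return math.dist(a.pos, b.pos)         # placeholder for measured latency

def find_parent(root, newcomer):
    """Descend the tree two levels at a time to find a potential parent,
    honoring each node's degree limit (assumes spare capacity exists)."""
    current = root
    while True:
        # Candidates in view: children plus grandchildren of the current node.
        two_levels = current.children + \
                     [g for c in current.children for g in c.children]
        eligible = [n for n in [current] + two_levels
                    if len(n.children) < n.degree_limit]
        if not eligible:                   # everything in view is full:
            current = min(two_levels,      # keep descending toward the newcomer
                          key=lambda n: distance(n, newcomer))
            continue
        best = min(eligible, key=lambda n: distance(n, newcomer))
        if best is current:                # no closer eligible node below
            return best
        current = best                     # search again from the best candidate
```

Because each step inspects two levels instead of one, the number of descent rounds roughly halves, which is the source of the reduced join latency the abstract reports.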

Policies for Improving Thermal Environment Using Vulnerability Assessment - A Case Study of Daegu, Korea - (열취약성 평가를 통한 열환경 개선 정책 제시 - 대구광역시를 사례로 -)

  • KIM, Kwon;EUM, Jeong-Hee
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.2
    • /
    • pp.1-23
    • /
    • 2018
  • This study proposes a method for evaluating thermal environment vulnerability that is linked to policies for improving the thermal environment. To this end, a variety of indices for thermal vulnerability assessment and the climate change adaptation policies applied in 17 Korean cities were reviewed, and 15 indices associated with thermal environment improvement policies were finally selected. The selected indices were applied to Daegu Metropolitan City, South Korea, as a case study. As a result, 15 vulnerability maps based on the standardized indices were produced, and a comprehensive map dividing Daegu into four grades of thermal vulnerability was established. The district with the largest first-grade area (most vulnerable to heat) was Dong-gu, followed by Dalseo-gu and Buk-gu, and the neighborhood with the highest ratio of first-grade area was Ansim-1-dong in Dong-gu. Based on the standardized indices, the thermal vulnerability of Ansim-1-dong was attributed to the number of basic livelihood security recipients, the number of cardiovascular disease deaths, the heat index, and the land surface temperature. To improve the thermal vulnerability of Ansim-1-dong, active policy implementation is required, including the expansion and maintenance of heat wave shelters, the establishment of a database of residents with diseases susceptible to high temperatures, and the expansion of shaded areas. This study demonstrates the applicability of a vulnerability assessment method linked to policies and is expected to contribute to the strategic and effective establishment of thermal environment policies in urban master and district plans.
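
The standardize-then-grade workflow can be illustrated as below. The district names and index values are invented, and the min-max standardization, equal weighting, and quartile-based grading are assumptions for demonstration; the abstract does not give the paper's exact aggregation formula.

```python
import pandas as pd

# Hypothetical index values per administrative district ("dong"); the paper
# uses 15 indices, of which three are shown here for illustration.
df = pd.DataFrame({
    "heat_index":            [31.2, 29.8, 33.1, 30.5],
    "livelihood_recipients": [420, 180, 610, 250],
    "cardio_deaths":         [12, 7, 15, 9],
}, index=["dong_A", "dong_B", "dong_C", "dong_D"])

# Min-max standardization so every index runs 0..1
# (0 = least vulnerable, 1 = most vulnerable).
z = (df - df.min()) / (df.max() - df.min())

# Composite vulnerability score: unweighted mean of the standardized indices.
score = z.mean(axis=1)

# Four vulnerability grades (1 = most vulnerable), by quartile of the score.
grade = pd.qcut(score.rank(method="first"), 4, labels=[4, 3, 2, 1])
print(pd.DataFrame({"score": score.round(3), "grade": grade}))
```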

Dong-Mu Lee Je-Ma and The Rising of Choi Moon-Hwan (동무(東武) 이제마(李齊馬)와 최문환(崔文煥)의 난(亂))

  • Park, Seong-sik
    • Journal of Sasang Constitutional Medicine
    • /
    • v.9 no.2
    • /
    • pp.39-55
    • /
    • 1997
  • Purpose: Dong-Mu (東武) Lee Je-Ma (李濟馬) was designated one of the fifty 'Wise Ancestors of Korean Culture and Art' in 1984, and December 1996 was proclaimed 'The Month of Lee Je-Ma'. Although his achievements have been honored in this way, some historians have criticized him for suppressing the righteous army. This study therefore clarifies the background, motive, and course of 'The Rising of Choi Moon-Hwan', which occurred in Hamhung in February 1896, in order to appraise the event correctly; through this, the author also seeks to trace the origin of Lee Je-Ma's thought. Method: After studying the conditions at the end of the Chosun dynasty and the righteous army of 1895 (乙未義兵), the author made a comparative study of the historical materials from the government side, the Choi Moon-Hwan side, and the Lee Je-Ma side concerning the Rising. Results & Conclusion: The event in Hamhung in February 1896 was part of the righteous army rebellion of 1895, which had risen against the Startling Occurrence of 1895 (乙未事變) and the Royal Command to Cut Off People's Hair (斷髮令). Lee Je-Ma suppressed the Rising and imprisoned Choi Moon-Hwan, for which he was later criticized as having suppressed the righteous army. That period was one of conflict between conservatism and modernization, and Lee Je-Ma's act was the best available way to protect the safety of residents from an attack that might have been launched by the Japanese army in Wonsan. Judging from these events, the author finds that Lee Je-Ma's thought differed markedly from the righteous army's neo-Confucianism and conservatism. From the perspective of the history of the Korean national movement, further study of Choi Moon-Hwan, the chief of the righteous army, is needed.


Development on Early Warning System about Technology Leakage of Small and Medium Enterprises (중소기업 기술 유출에 대한 조기경보시스템 개발에 대한 연구)

  • Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.143-159
    • /
    • 2017
  • Due to the rapid development of IT in recent years, the leakage not only of personal information but also of the key technologies and information that companies hold has become an important issue. For an enterprise, its core technology is vital both to its survival and to sustaining its competitive advantage, and cases of technology infringement have recently multiplied. Technology leaks cause not only tremendous financial losses, such as falling stock prices, but also damage to corporate reputation and delays in corporate development. For SMEs, where core technology makes up a larger share of the enterprise than in large corporations, preparation against technology leakage is indispensable to the firm's existence. As the necessity and importance of Information Security Management (ISM) emerge, enterprises need to check for and prepare against the threat of technology infringement early. Nevertheless, roughly 90% of previous studies merely present policy alternatives; as research methods, literature analysis accounts for 76% and empirical or statistical analysis for a comparatively low 16%. For this reason, management and prediction models that fit the characteristics of SMEs are needed to prevent technology leakage. In this study, drawing on prior research into the factors affecting technology leakage, we distinguished technical characteristics (a technology-value perspective) from organizational factors (a technology-control perspective), and selected a total of 12 variables across the two factors for the analysis. We use three years of data from the 'Small and Medium Enterprise Technology Statistics Survey' conducted by the Small and Medium Business Administration (SMBA). The data cover 30 industries based on the 2-digit KSIC classification, with 415 companies affected by technology leakage over the three years. From these data we drew random samples within the same industry and year, preparing a 1:1 matched sample of affected firms (n = 415) and unaffected firms (n = 415) for analysis. We conduct an empirical analysis to identify the factors influencing technology leakage and propose an early warning system built with data mining. Specifically, based on the SMBA survey of SMEs, we classify the factors affecting SME technology leakage into two groups (technology characteristics and organization characteristics), and we propose a model that signals the possibility of technology infringement using the Support Vector Machine (SVM), one of several data mining techniques, applied to the factors proven significant in the statistical analysis. Unlike previous studies, this study covers cases from many industries across several years, and an artificial intelligence model was developed through it. In addition, since the factors are derived empirically from actual cases of SME technology leakage, the results can suggest to policy makers which companies should be managed from the viewpoint of technology protection. Finally, the early warning model on the possibility of technology leakage proposed in this study is expected to give both enterprises and government an opportunity to prevent technology leakage in advance.
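
A minimal sketch of an SVM-based warning model of this kind is shown below, using randomly generated stand-ins for the 12 technology/organization variables and the 1:1 matched sample; the probability threshold for raising an alert is an assumption, not a figure from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the 12 technology/organization variables,
# for a 1:1 matched sample of affected (1) and unaffected (0) firms.
X = rng.normal(size=(830, 12))
y = np.repeat([1, 0], 415)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# SVM with probability outputs, so the warning can be graded, not just binary.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)

leak_risk = model.predict_proba(X_te)[:, 1]   # estimated leakage probability
alerts = leak_risk > 0.7                      # assumed early-warning threshold
print("flagged firms:", int(alerts.sum()), "of", len(alerts))
```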

Variation of Hospital Costs and Product Heterogeneity

  • Shin, Young-Soo
    • Journal of Preventive Medicine and Public Health
    • /
    • v.11 no.1
    • /
    • pp.123-127
    • /
    • 1978
  • The major objective of this research is to identify the hospital characteristics that best explain cost variation among hospitals and to formulate linear models that can predict hospital costs. Specific emphasis is placed on hospital output, that is, the identification of diagnosis-related patient groups (DRGs) that are medically meaningful and show similar patterns of hospital resource consumption; a casemix index is developed based on the DRGs identified. Considering the common problems encountered in previous hospital cost research, the following study requirements were established: 1. Selection of hospitals with similar medical and fiscal practices. 2. Identification of a data collection mechanism from which demographic and medical characteristics of individual patients, as well as accurate and comparable cost information, can be derived. 3. Development of a patient classification system in which all patients treated in hospitals can be split into mutually exclusive categories with consistent and stable patterns of resource consumption. 4. Development of a cost-finding mechanism through which patient groups' costs can be made comparable across hospitals. A data set of Medicare patients prepared by the Social Security Administration was selected for the analysis. It contained 27,229 record abstracts of Medicare patients discharged from all but one short-term general hospital in Connecticut between January 1, 1971, and December 31, 1972; each abstract contained demographic and diagnostic information as well as charges for the specific medical services received. The AUTOGRP System was used to generate 198 DRGs splitting the entire range of Medicare patients into mutually exclusive categories, each showing a consistent and stable pattern of resource consumption. The Departmental Method was used to generate cost information for the groups of Medicare patients that would be comparable across hospitals. To fulfill the study objectives, an extensive analysis was conducted in the following areas: 1. Analysis of DRGs, in which the level of resource use of each DRG was determined, the length of stay or death rate of each DRG was characterized in relation to resource use, and underlying patterns in the relationships among DRG costs were explained. 2. Exploration of hospital resource-use profiles, in which the magnitude of the differences in resource use and death rates among the study hospitals in treating Medicare patients was explored. 3. Casemix analysis, in which four types of casemix-related indices were generated and their significance in explaining hospital costs was examined. 4. Formulation of linear models to predict the hospital costs of Medicare patients, in which nine independent variables (i.e., casemix index, hospital size, complexity of service, teaching activity, location, casemix-adjusted death rate index, occupancy rate, and casemix-adjusted length of stay index) were used as determinants of hospital costs. The results indicated that: 1. The system of 198 DRGs for Medicare patient classification was demonstrated to be a strong tool not only for determining the pattern of hospital resource utilization of Medicare patients but also for categorizing patients by severity of illness. 2.
The weighted mean total case cost (TOTC) of the study hospitals for Medicare patients during the study years was $1,127.02 with a standard deviation of $117.20. The hospital with the highest average TOTC ($1,538.15) was 2.08 times more expensive than the hospital with the lowest ($743.45). The weighted mean per diem total cost (DTOC) was $107.98 with a standard deviation of $15.18; the hospital with the highest average DTOC ($147.23) was 1.87 times more expensive than the hospital with the lowest ($78.49). 3. Linear models for each of the six types of hospital costs were formulated using the casemix index and the eight other hospital variables as determinants. These models explained 68.7 percent of the variance of total case cost (TOTC), 63.5 percent of room and board cost (RMC), 66.2 percent of total ancillary service cost (TANC), 66.3 percent of per diem total cost (DTOC), 56.9 percent of per diem room and board cost (DRMC), and 65.5 percent of per diem ancillary service cost (DTANC). The casemix index alone explained approximately half of the interhospital cost variation: 59.1 percent for TOTC and 44.3 percent for DTOC. These results demonstrate that the casemix index is the most important determinant of interhospital cost variation. Future research and policy implications of this study are envisioned in three areas: 1. Utilization of casemix-related indices in the Medicare data systems. 2. Refinement of data for hospital cost evaluation. 3. Development of a system for reimbursement and cost control in hospitals.
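
The cost-prediction models are ordinary linear regressions; the sketch below illustrates that setup with synthetic hospital-level data. The coefficients and noise are invented, so the printed R² values will not match the paper's reported 68.7 and 59.1 percent.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical hospital-level data; the paper's determinants are the casemix
# index plus eight other hospital characteristics (size, teaching, etc.).
n_hospitals = 35
casemix = rng.normal(1.0, 0.1, n_hospitals)
other = rng.normal(size=(n_hospitals, 8))      # stand-ins for the other variables
X = np.column_stack([casemix, other])
totc = 1100 * casemix + (other @ rng.normal(size=8)) * 20 \
       + rng.normal(0, 50, n_hospitals)        # synthetic total case cost

model = LinearRegression().fit(X, totc)
print("R^2 with all nine variables:", round(model.score(X, totc), 3))

# In the paper, the casemix index alone explains roughly half the variation.
solo = LinearRegression().fit(casemix.reshape(-1, 1), totc)
print("R^2 with casemix alone:",
      round(solo.score(casemix.reshape(-1, 1), totc), 3))
```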


A Study on Fast Iris Detection for Iris Recognition in Mobile Phone (휴대폰에서의 홍채인식을 위한 고속 홍채검출에 관한 연구)

  • Park Hyun-Ae;Park Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.19-29
    • /
    • 2006
  • As the security of personal information becomes more important in mobile phones, iris recognition technology is beginning to be applied to these devices. Conventional iris recognition requires magnified iris images, which used to demand large zoom and focus lens cameras; but the size and cost constraints of mobile phones make such lenses difficult to use. With rapid development and multimedia convergence trends, however, more and more manufacturers have built mega-pixel cameras into their phones, making it possible to capture a magnified iris image without a zoom and focus lens: even when a facial image is captured at a distance from the user, the iris region contains sufficient pixel information for iris recognition. In this case, though, the eye region must first be detected in the facial image for accurate iris recognition. We therefore propose a new fast iris detection method, based on corneal specular reflection, that is appropriate for mobile phones. To detect the specular reflection robustly, we present the theoretical background for estimating its size and brightness from eye, camera, and illuminator models. In addition, we use a successive on/off scheme for the illuminator to detect optical/motion blurring and sunlight effects in the input image. Experimental results show that the total processing time for detecting the iris region averages 65 ms on a Samsung SCH-S2300 mobile phone (with a 150 MHz ARM9 CPU), with a correct iris detection rate of 99% for indoor images and 98.5% for outdoor images.
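
A minimal sketch of the specular-reflection detection idea follows; the fixed brightness threshold and blob-area bounds here are stand-ins for the size and brightness bounds the paper derives from its eye, camera, and illuminator models.

```python
import cv2

def find_specular_reflection(eye_gray, min_area=4, max_area=200):
    """Locate candidate corneal specular reflections in a grayscale eye image.

    The threshold and area bounds are illustrative constants; the paper
    estimates the expected spot size and brightness from physical models
    instead of hard-coding them.
    """
    # Specular highlights nearly saturate the sensor, so a high fixed
    # threshold isolates them from the darker iris and sclera.
    _, bright = cv2.threshold(eye_gray, 240, 255, cv2.THRESH_BINARY)

    # Connected components give the area and centroid of each bright blob.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bright)
    candidates = [
        tuple(centroids[i])
        for i in range(1, n)                      # label 0 is the background
        if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area
    ]
    return candidates  # likely reflection centers, seeds for iris localization
```

Comparing the blobs found in successive illuminator-on and illuminator-off frames, as the paper's on/off scheme does, would then reject sunlight glints that persist in both frames.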

Development of Three-Dimensional Trajectory Model for Detecting Source Region of the Radioactive Materials Released into the Atmosphere (대기 누출 방사성물질 선원 위치 추적을 위한 3차원 궤적모델 개발)

  • Suh, Kyung-Suk;Park, Kihyun;Min, Byung-Il;Kim, Sora;Yang, Byung-Mo
    • Journal of Radiation Protection and Research
    • /
    • v.41 no.1
    • /
    • pp.31-39
    • /
    • 2016
  • Background: With the increase of nuclear facilities such as power and reprocessing plants in neighboring countries including China, Taiwan, North Korea, Japan, and South Korea, comprehensive countermeasures for analyzing nuclear activities must be considered. South Korea and the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) operate monitoring instruments to detect radionuclides released into the air. For the investigation and security of nuclear activities in neighboring countries, it is important not only to analyze the monitoring data but also to estimate the origin of the measured radionuclides. Materials and Methods: A three-dimensional forward/backward trajectory model has been developed to estimate the origin of radionuclides released by a covert nuclear activity. The model consists of forward and backward modules that track particle positions using a finite difference method. Results and Discussion: The model was validated against measurements from the Chernobyl accident. The calculated results agreed well when high-concentration measurements at locations near the release point were used. The trajectory model carries some uncertainty depending on the release time, the release height, and the time interval of the trajectory at each release point. An atmospheric dispersion model, the long-range accident dose assessment system (LADAS) based on the fields-of-regard (FOR) technique, was applied to reduce the uncertainties of the trajectory model and to improve the technology for estimating the emission area of radioisotopes. Conclusion: The technology developed in this study can estimate the release area and origin of covert nuclear activities from radioisotopes measured at monitoring stations, and it could serve as a critical tool for improving capability in the nuclear safety field.
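
The backward module's finite-difference stepping can be sketched as below; the analytic wind field and the step sizes are placeholders for the gridded meteorological data a real run would interpolate in space and time.

```python
import numpy as np

def wind(position, t):
    """Placeholder wind field u(x, t); a real model would interpolate
    gridded meteorological data here."""
    x, y, z = position
    return np.array([10.0, 2.0 * np.sin(t / 3600.0), 0.0])  # m/s, illustrative

def backward_trajectory(x_end, t_end, dt=600.0, steps=144):
    """Integrate a particle backward in time with the finite-difference step
    x(t - dt) = x(t) - u(x, t) * dt."""
    x, t = np.asarray(x_end, dtype=float), t_end
    path = [x.copy()]
    for _ in range(steps):
        x = x - wind(x, t) * dt   # step against the wind, back in time
        t -= dt
        path.append(x.copy())
    return np.array(path)          # the source region lies near the path's end

# Trace 24 hours backward from a detection at the origin, 100 m altitude.
track = backward_trajectory(x_end=(0.0, 0.0, 100.0), t_end=86400.0)
print("estimated release vicinity:", track[-1])
```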

Estimation of Uranium Particle Concentration in the Korean Peninsula Caused by North Korea's Uranium Enrichment Facility (북한 우라늄 농축시설로 인한 한반도에서의 공기중 우라늄 입자 농도 예측)

  • Kwak, Sung-Woo;Kang, Han-Byeol;Shin, Jung-Ki;Lee, Junghyun
    • Journal of Radiation Protection and Research
    • /
    • v.39 no.3
    • /
    • pp.127-133
    • /
    • 2014
  • North Korea's uranium enrichment facility is a matter of international concern, and it is particularly alarming to South Korea with regard to the security and safety of the country. The situation requires continuous monitoring of the DPRK and emergency preparedness on the part of the ROK. To assess the detectability of an undeclared uranium enrichment plant in North Korea, uranium concentrations in the air were estimated at both short and long distances from the enrichment facility. $UF_6$ source terms were determined using existing information on the North Korean facility and operating experience from enrichment plants in other countries. With the calculated source terms, two atmospheric dispersion models (a Gaussian plume model and HYSPLIT) and meteorological data were used to estimate the uranium particle concentrations from the Yongbyon enrichment facility. The maximum uranium concentration and its location depend on the meteorological conditions and the height of the $UF_6$ release point. This study showed that the maximum uranium concentration around the enrichment facility was about $1.0{\times}10^{-7}g{\cdot}m^{-3}$, located within about 0.4 km of the facility. Assuming a uranium sample of a few micrograms (${\mu}g$) could be collected there, such a quantity can easily be measured with current instruments. By contrast, the uranium concentration at distances of more than 100 km from the facility was estimated at about $1.0{\times}10^{-13}{\sim}1.0{\times}10^{-15}g{\cdot}m^{-3}$, which is below background level. Therefore, based on the results of this paper, an air sample taken in the vicinity of the Yongbyon enrichment facility could be used to determine whether North Korea is carrying out an undeclared nuclear program, whereas air samples taken at distances of a few hundred kilometers would make detecting clandestine nuclear activities difficult.
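
For the Gaussian plume part of the estimate, the textbook concentration formula can be sketched as follows; the release rate, wind speed, release height, and dispersion parameters are illustrative numbers, not the paper's source terms.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration (g m^-3) at crosswind offset y, height z.

    Q: release rate (g s^-1), u: wind speed (m s^-1), H: release height (m).
    sigma_y, sigma_z: dispersion parameters (m), which in practice grow with
    downwind distance according to the atmospheric stability class.
    The second vertical term reflects the plume off the ground.
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative numbers only: sigma values roughly matching a few hundred
# meters downwind, a 30 m stack, and a small assumed leak rate.
c = gaussian_plume(Q=1e-3, u=3.0, y=0.0, z=1.5,
                   H=30.0, sigma_y=30.0, sigma_z=15.0)
print(f"concentration near 0.4 km downwind: {c:.2e} g/m^3")
```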