• Title/Summary/Keyword: 정보시스템 효과성 (information systems effectiveness)


Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, detecting market timing means determining when to buy and sell in order to earn excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data must be discretized before rough set analysis because rough sets accept only categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals, and all values lying within an interval are transformed into the same value. In general, four discretization methods are used in rough set analysis: equal frequency scaling, expert knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert knowledge-based discretization determines cuts according to the knowledge of domain experts, obtained through literature review or interviews with experts. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first finds categorical values by naïve scaling of the data and then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance when rough set analysis is used. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industry, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, whereas expert knowledge-based discretization is the most profitable for the validation sample. In addition, expert knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison. The results show that rough set analysis with expert knowledge-based discretization produced more profitable rules than C4.5.
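To make the discretization step concrete, the following Python sketch shows equal frequency scaling, the first of the four methods listed in the abstract above. The indicator series, the number of intervals, and all variable names are illustrative assumptions; the paper's own cut points and code are not published here.

```python
# A minimal sketch of equal frequency scaling for rough set preprocessing.
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Return cut points so that roughly the same number of samples
    falls into each interval."""
    quantiles = np.linspace(0, 1, n_intervals + 1)[1:-1]
    return np.quantile(values, quantiles)

def discretize(values, cuts):
    """Map each numeric value to the index of the interval it falls in."""
    return np.digitize(values, cuts)

# Example: discretize a hypothetical technical-indicator series into 4 bins.
rsi = np.random.uniform(0, 100, 660)     # 660 trading days, as in the study
cuts = equal_frequency_cuts(rsi, 4)
categories = discretize(rsi, cuts)       # values 0..3, usable as categorical input
```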

Improving Usage of the Korea Meteorological Administration's Digital Forecasts in Agriculture: I. Correction for Local Temperature under the Inversion Condition (기상청 동네예보의 영농활용도 증진을 위한 방안: I. 기온역전조건의 국지기온 보정)

  • Kim, Soo-Ock;Kim, Dae-Jun;Kim, Jin-Hee;Yun, Jin I.
    • Korean Journal of Agricultural and Forest Meteorology / v.15 no.2 / pp.76-84 / 2013
  • An adequate downscaling of the official forecasts of the Korea Meteorological Administration (KMA) is a prerequisite to improving the value and utility of agrometeorological information in rural areas, where complex terrain and small farms constitute major features of the landscape. In this study, we suggest a simple correction scheme for scaling down KMA temperature forecasts from the mesoscale (5 km by 5 km) to the local scale (30 m by 30 m) across a rural catchment, especially under temperature inversion conditions. The study area is a 50 km² rural catchment with complex terrain, located on a southern slope of Jirisan National Park. Temperature forecasts for 0600 LST on 62 days with temperature inversion were selected from the fall 2011 to spring 2012 KMA data archive. A geospatial correction scheme that can simulate both cold air drainage and the so-called 'thermal belt' was used to derive the site-specific temperature deviation across the study area at 30 m by 30 m resolution from the original 5 km by 5 km forecast grids. The observed temperature data at 12 validation sites within the study area showed a substantial reduction in forecast error: from ±2°C to ±1°C in the mean error range and from 1.9°C to 1.6°C in the root mean square error. Improvement was most remarkable at low-lying locations with frequent cold pooling events. The temperature prediction error was less than 2°C for more than 80% of the observed inversion cases and less than 1°C for half of the cases. Temperature forecasts corrected by this scheme may accelerate implementation of the freeze and frost early warning service for major fruit-growing regions in Korea.
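As a small illustration of the error statistics reported above (mean error and root mean square error at the validation sites), the Python sketch below computes both for a raw 5 km forecast and a locally corrected one. The temperature arrays are made-up placeholders; the geospatial correction scheme itself is not reproduced here.

```python
# A minimal sketch of the forecast-error metrics quoted in the abstract.
import numpy as np

def mean_error(observed, predicted):
    return np.mean(predicted - observed)

def rmse(observed, predicted):
    return np.sqrt(np.mean((predicted - observed) ** 2))

# Hypothetical observed vs. forecast temperatures (degrees Celsius) at a few
# validation sites on one inversion morning.
obs       = np.array([-3.1, -2.4, -1.8, -0.9, 0.2, 1.1])
raw_fcst  = np.array([-1.0, -0.8, -0.5,  0.4, 1.5, 2.3])   # 5 km grid value
corrected = np.array([-2.6, -2.0, -1.5, -0.4, 0.7, 1.4])   # after local correction

print(mean_error(obs, raw_fcst), rmse(obs, raw_fcst))
print(mean_error(obs, corrected), rmse(obs, corrected))
```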

Estimation of the Accuracy of Genomic Breeding Value in Hanwoo (Korean Cattle) (한우의 유전체 육종가의 정확도 추정)

  • Lee, Seung Soo;Lee, Seung Hwan;Choi, Tae Jeong;Choy, Yun Ho;Cho, Kwang Hyun;Choi, You Lim;Cho, Yong Min;Kim, Nae Soo;Lee, Jung Jae
    • Journal of Animal Science and Technology / v.55 no.1 / pp.13-18 / 2013
  • This study was conducted to estimate the Genomic Estimated Breeding Value (GEBV) using the Genomic Best Linear Unbiased Prediction (GBLUP) method in a Hanwoo (Korean native cattle) population. The results are expected to support the adoption of genomic selection in the national Hanwoo evaluation system. Carcass weight (CW), eye muscle area (EMA), backfat thickness (BT), and marbling score (MS) were investigated in 552 Hanwoo progeny-tested steers at the Livestock Improvement Main Center. Animals were genotyped with the Illumina BovineHD BeadChip (777K SNPs). For the statistical analysis, a Genetic Relationship Matrix (GRM) was formulated on the basis of the genotypes, and the accuracy of GEBV was estimated by 10-fold cross-validation. The accuracies estimated by cross-validation ranged from 0.915 to 0.957. In 534 progeny-tested steers, the maximum gains in GEBV accuracy over conventional EBV for the CW, EMA, BT, and MS traits were 9.56%, 5.78%, 5.78%, and 4.18%, respectively. In 3,674 pedigree-traced bulls, the maximum gains in GEBV accuracy for CW, EMA, BT, and MS were 13.54%, 6.50%, 6.50%, and 4.31%, respectively. These results show that genomic pre-selection of candidate calves for testing on meat production traits could improve genetic gain in the Hanwoo genetic evaluation system for selecting proven bulls, by increasing accuracy and reducing the generation interval.
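The sketch below illustrates the GBLUP workflow in Python: building a genomic relationship matrix from SNP genotypes and solving for genomic breeding values. It assumes the common VanRaden-style GRM and a simple one-record-per-animal mixed model with an assumed heritability; the paper's exact model specification, variance components, and data are not reproduced.

```python
# A minimal GRM + GBLUP sketch (VanRaden-style GRM assumed).
import numpy as np

def genomic_relationship_matrix(M):
    """M: n_animals x n_snps matrix of genotypes coded 0/1/2."""
    p = M.mean(axis=0) / 2.0                       # allele frequencies
    Z = M - 2.0 * p                                # centre genotypes
    return Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

def gblup(y, G, h2=0.4):
    """Solve (I + lambda * G^{-1}) u = (y - mean) for genomic breeding values,
    with lambda = (1 - h2) / h2 and one record per animal."""
    lam = (1.0 - h2) / h2
    n = len(y)
    lhs = np.eye(n) + lam * np.linalg.inv(G + 1e-6 * np.eye(n))  # small ridge for stability
    return np.linalg.solve(lhs, y - y.mean())

# Hypothetical toy data: 20 animals, 500 SNPs, carcass-weight-like phenotypes.
rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(20, 500)).astype(float)
y = rng.normal(350.0, 30.0, size=20)
G = genomic_relationship_matrix(M)
gebv = gblup(y, G, h2=0.4)    # assumed heritability, for the sketch only
```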

Quality Dimensions Affecting the Effectiveness of a Semantic-Web Search Engine (검색 효과성에 영향을 미치는 시맨틱웹 검색시스템 품질요인에 관한 연구)

  • Han, Dong-Il;Hong, Il-Yoo
    • Asia Pacific Journal of Information Systems / v.19 no.1 / pp.1-31 / 2009
  • This paper empirically examines factors that potentially influence the success of a Web-based semantic search engine. A research model is proposed that shows the impact of quality-related factors upon the effectiveness of a semantic search engine, based on DeLone and McLean's (2003) information systems success model. An empirical study was conducted to test hypotheses formulated around the research model, and statistical methods were applied to analyze the gathered data and draw conclusions. Implications for academics and practitioners are offered based on the findings. The proposed model includes three quality dimensions of a Web-based semantic search engine, namely information quality, system quality, and service quality. Each of these dimensions has measures designed to collectively assess it. The model examines the relationship between measures of these quality dimensions and measures of two dependent constructs: individuals' net benefit and user satisfaction. Individuals' net benefit was measured by the extent to which the user's information needs were adequately met, whereas user satisfaction was measured by a combination of the perceived satisfaction with search results and the perceived satisfaction with the overall system. A total of 23 hypotheses were formulated around the model, and a questionnaire survey was conducted using a functional semantic search website created by KT and Hakia in order to collect data for validating the model. Copies of the questionnaire were handed out in person to 160 research associates and employees working in the area of designing and developing semantic search engines; 148 of them returned valid responses. The survey asked respondents to use the given website and answer questions concerning the system. The results of the empirical study indicate that, of the three quality dimensions, information quality has the strongest association with the effectiveness of a Web-based semantic search engine. This finding is consistent with the observation in the literature that aspects of information quality should serve as a basis for evaluating the search outcomes of a semantic search engine. Under the information quality dimension, the measures found to have a positive effect on informational gratification and user satisfaction were recall and currency. Under the system quality dimension, response time and interactivity were positively related to informational gratification. Under the service quality dimension, only one measure, reliability, was found to have a positive relationship with user satisfaction. These results are based on the seven hypotheses that were accepted. One may wonder why 15 of the 23 hypotheses were rejected and question the theoretical soundness of the model. However, the correlations between independent and dependent variables were fairly high. This suggests that the structural equation model yielded results differing from those of a simple correlation analysis, because it examines the relationships among the independent variables as well as those between the independent and dependent variables. The findings offer useful implications for owners of a semantic search engine, as far as the design and maintenance of the website are concerned.
First, the system should be designed to respond to the user's query as fast as possible. It should also support the search process by recommending, revising, and choosing search queries, so as to maximize users' interaction with the system. Second, the system should present search results with maximum recall and currency to effectively meet users' expectations. Third, it should be capable of providing online services in a reliable and trustworthy manner. Finally, effectively increasing user satisfaction requires improving the quality factors associated with a semantic search engine, which would in turn help increase informational gratification for users. The proposed model can serve as a useful framework for measuring the success of a Web-based semantic search engine. Applying this search engine success framework to the measurement of search engine effectiveness has the potential to provide an outline of which areas of a semantic search engine need improvement in order to better meet the information needs of users. Further research will be needed to make this idea a reality.
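The point made above, that bivariate correlations can be high while many structural paths are rejected, can be illustrated numerically. The Python sketch below uses synthetic data (not the survey data) to show how two quality measures can each correlate strongly with satisfaction while a joint model assigns almost all of the explanatory weight to one of them; ordinary least squares stands in here for the structural model.

```python
# Synthetic illustration: high pairwise correlations, unequal joint weights.
import numpy as np

rng = np.random.default_rng(1)
n = 148                                            # same size as the valid responses
info_quality = rng.normal(size=n)
sys_quality = 0.8 * info_quality + 0.2 * rng.normal(size=n)   # strongly overlapping measure
satisfaction = 1.0 * info_quality + 0.1 * rng.normal(size=n)

# Both predictors correlate highly with satisfaction on their own...
print(np.corrcoef(info_quality, satisfaction)[0, 1],
      np.corrcoef(sys_quality, satisfaction)[0, 1])

# ...but jointly, almost all of the weight lands on information quality.
X = np.column_stack([np.ones(n), info_quality, sys_quality])
beta, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print(beta)   # intercept, info_quality weight, sys_quality weight
```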

Factors Influencing Satisfaction on Home Visiting Health Care Service of the Elderly based on the degree of chronic diseases (만성질환 유병상태에 따른 노인 방문건강관리 서비스 만족도 영향요인 연구)

  • Seo, Daram;Shon, Changwoo
    • 한국노년학 / v.41 no.2 / pp.271-284 / 2021
  • This study was conducted to identify factors that affect satisfaction with home visiting health care services and to help develop effective community care models, using the results of Seoul's outreach service, which is the basis for Korean community care. The study population was the elderly aged 65 and 70 who participated in the third stage (July 2017 to June 2018) and fourth stage (July 2018 to June 2019) of Seoul's outreach community services. A sample of 2,200 people was drawn by proportional allocation, and home-visit interviews were conducted with them. Subjects were divided into subgroups based on chronic disease prevalence, and logistic regression was conducted to identify factors affecting satisfaction with home visiting health care services. The results demonstrated that the elderly without chronic diseases were more satisfied when they received health education and counseling services, and the elderly with one chronic disease were more satisfied when they received community resource-linked services. For elderly people with two or more chronic diseases, satisfaction increased when health condition assessment and community resource-linked services were provided. Regardless of chronic disease status, service delivery time was a factor that increased satisfaction with home visiting health care, and how well the explanations were understood increased satisfaction for patients with both single and multiple chronic conditions. Community-based home visiting health care services are a key component of ongoing community care. To increase the sustainability and effectiveness of community care, community-oriented health care services should be provided according to the degree of chronic disease among the elderly. To provide more effective services, however, it is necessary (1) to establish a linkage system through which health information on the subjects held by the National Health Insurance Service can be shared with local governments, and (2) to provide capacity-building education for visiting nurses to improve the quality of home visiting health care services. It is hoped that this study will be used as basic data for the successful establishment of community care.
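A minimal Python sketch of the subgroup logistic regression design described above follows. The column names, satisfaction coding, and toy records are illustrative assumptions, since the survey data are not public.

```python
# Fit a separate logistic regression per chronic-disease subgroup (0, 1, 2+),
# mirroring the subgroup design of the study; data below are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "n_chronic":        [0, 1, 2, 0, 2, 1, 3, 0],
    "health_education": [1, 0, 0, 1, 1, 0, 1, 0],
    "resource_linkage": [0, 1, 1, 0, 1, 1, 0, 0],
    "service_minutes":  [20, 35, 40, 15, 50, 30, 45, 10],
    "satisfied":        [1, 1, 1, 0, 1, 0, 0, 0],
})

features = ["health_education", "resource_linkage", "service_minutes"]

for label, group in df.groupby(df["n_chronic"].clip(upper=2)):
    model = LogisticRegression().fit(group[features], group["satisfied"])
    # Coefficients hint at which service components raise satisfaction in each subgroup.
    print(label, dict(zip(features, model.coef_[0])))
```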

Fabrication of Portable Self-Powered Wireless Data Transmitting and Receiving System for User Environment Monitoring (사용자 환경 모니터링을 위한 소형 자가발전 무선 데이터 송수신 시스템 개발)

  • Jang, Sunmin;Cho, Sumin;Joung, Yoonsu;Kim, Jaehyoung;Kim, Hyeonsu;Jang, Dayeon;Ra, Yoonsang;Lee, Donghan;La, Moonwoo;Choi, Dongwhi
    • Korean Chemical Engineering Research / v.60 no.2 / pp.249-254 / 2022
  • With the rapid advance of semiconductor and information and communication technologies, remote environment monitoring technology, which can detect and analyze surrounding environmental conditions with various types of sensors and wireless communication, is drawing attention. However, because conventional remote environmental monitoring systems require external power supplies, their use is limited in time and space. In this study, we propose the concept of a self-powered remote environmental monitoring system powered by a levitation electromagnetic generator (L-EMG) that is rationally designed to harvest biomechanical energy effectively. The proposed L-EMG uses a movable center magnet to respond effectively to external vibration, taking into account the mechanical characteristics of biomechanical energy, such as its relatively low frequency and high vibration amplitude. Hence the L-EMG, based on a delicate force equilibrium, can generate high-quality electrical energy to supply power. Additionally, the environmental sensor and wireless transmission module are built around a microcontroller unit (MCU) that applies a sleep mode to minimize the power required for operation, extending the operating time. Finally, to maximize user convenience, a mobile phone application was built to enable easy monitoring of the surrounding environment. Thus, the proposed concept not only verifies the possibility of establishing a self-powered remote environmental monitoring system using biomechanical energy but also suggests a design guideline.
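The sleep-mode strategy described above can be sketched in MicroPython-flavoured Python as below. The sampling period, sensor read, and transmit call are placeholders and not the authors' firmware; only the wake-sample-transmit-sleep duty cycle is illustrated.

```python
# A minimal MicroPython-style duty-cycle sketch (runs on a MicroPython board).
import machine

SAMPLE_PERIOD_MS = 60_000   # wake once a minute; tune to the available harvested power

def read_environment():
    # Placeholder for the actual environmental sensor read (e.g. temperature/humidity).
    return {"t": 23.5, "rh": 41.0}

def transmit(payload):
    # Placeholder for the wireless transmission module call.
    print("tx:", payload)

while True:
    sample = read_environment()
    transmit(sample)
    # Light sleep keeps RAM contents but gates clocks, minimizing draw between samples.
    machine.lightsleep(SAMPLE_PERIOD_MS)
```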

The Advancement of Underwriting Skill by Selective Risk Acceptance (보험Risk 세분화를 통한 언더라이팅 기법 선진화 방안)

  • Lee, Chan-Hee
    • The Journal of the Korean Life Insurance Medical Association / v.24 / pp.49-78 / 2005
  • Ⅰ. Research background and purpose: The household penetration rate of the Korean insurance market has reached 86%, marking entry into market maturity, and distribution is shifting from the traditional captive channel to multiple channels with the introduction of bancassurance, the emergence of online-only insurers, and the growth of telemarketing. With the successive launch of advanced health products such as long-term care (LTC), critical illness (CI), and indemnity medical insurance, underwriting preparation is urgently needed from an insurance risk management perspective. In step with changes in closely related areas such as products and marketing, the advancement of underwriting acceptance techniques is urgently required, and building advanced underwriting techniques that properly classify and evaluate risk is essential. Ultimately, this study seeks measures that can satisfy customers' diverse coverage needs and contribute to maximizing insurers' overall profit by strengthening the competitiveness of products, marketing, and underwriting.
Ⅱ. Risk segmentation cases in advanced insurance markets: 1. Premium differentiation by environmental risk. (1) Premium loading for hazardous occupations: in most advanced markets such as the US and Europe, premiums are differentiated by the insured's occupational risk at the time of application; the occupational classification and loading methods differ by benefit, with separate methods for ordinary death, accidental death, waiver of premium, and disability income (DI); the loading is applied either as a multiple of the standard rate or as a flat extra premium per unit of sum assured (for example, for miners, 300% of the standard rate is applied to accidental death cover and a flat extra of $2.95 per $1,000 to ordinary death cover). (2) Premium loading for hazardous hobbies: because hobby-related accidents occur continually, hobby activities are also recognized as risk factors and premiums are differentiated; the extra premium is charged at a fixed rate per unit of sum assured (independent of the amount insured), and for some hazardous hobbies such as new leisure sports, underwriters set the loading because of a lack of statistics (for example, for paragliding 26-50 times a year, a flat extra of $2 per $1,000 for accidental death and $8 for DI insurance); separately from the loading, exclusions are applied to hazardous hobbies, so that all benefits, including death, are excluded for claims arising from the hazardous hobby. (3) Premium loading for residence in or travel to hazardous regions: temporary or permanent residence in a particular country is assessed considering climate risk, local sanitation and medical standards, travel risk, and war and riot risk; extra premiums are charged, or cover is declined, by benefit such as ordinary death and accidental death, and the extra premium applies uniformly over the entire policy term (for example, for Russia, an extra premium of $2 per $1,000 for ordinary death, with accidental death declined). (4) Premium differentiation for other risks: aviation risk is classified into three categories (commercial transport, private flying, and military flying), and extra premiums are charged based on the application, supplementary questionnaires, medical reports, and flight history (for example, for crop-dusting pilots, an extra premium of $6 per $1,000 for ordinary death, with accidental death declined); in the US and Japan, traffic accident and violation records are used to give premium discounts to accident-free drivers (as a preferred-risk factor). 2. Premium differentiation by physical risk. (1) Premium loading for substandard risks: acceptance up to a total risk index of 500 (excess risk index 400), with loadings applied in 13 bands (25-point steps up to 300 and 50-point steps above 300); the benefit-reduction (lien) and loading methods are applied together, and because the extra premium decreases by the amount of the benefit reduction, applicants can be offered a choice, which is useful for high-risk insureds; temporary loadings are applied to histories of specific cancers, with an extra premium charged for 1-5 years after issue according to the disease propensity and the standard premium charged once the loading period has elapsed; a return-of-the-extra-premium option refunds the extra premium if the policy remains in force and the insured survives a given period. (2) Enhanced annuities for substandard lives: in the UK, enhanced annuities with increased benefits for substandard lives have been developed and sold, with the benefit differentiated from standard lives according to various physical and environmental risk factors such as smoking, occupation, and medical history. (3) Price segmentation of preferred lives: in the US market, standard lives are classified into up to eight classes according to evaluation criteria covering 8-14 medical and non-medical risk factors (medical history, blood pressure, family history, smoking, BMI, cholesterol, driving, hazardous hobbies, residence, flight history, alcohol/drug use, etc.), with differentiated discounted premiums; discount rates vary by company, class, and eligibility criteria (up to 75%), entry ages range from a minimum of 16-20 to a maximum of 65-75, and the minimum sum assured is $100,000 (the lowest amount requiring an HIV test); in the Japanese market, lives are classified into 3-4 classes according to 3-4 risk factors for preferred discounts; in Europe, non-smoker or preferred discounts are applied only in some markets such as the UK.
Ⅲ. Current status and problems of the domestic insurance market: 1. Coverage limits by environmental risk. (1) Restrictions on coverage for hazardous occupations: based on the industry-wide standard occupational risk classes, each insurer sets its own coverage limits by risk class, which entails problems such as inequity with non-hazardous occupations, limited coverage for high-risk occupations, and an unstable profit structure (for example, miners are assigned risk class 1 and limited to a maximum of KRW 100 million for death and KRW 20,000 per day of hospitalization); in July 2002 the Financial Supervisory Service approved risk indices by class as reference rates, but because they were set at about 70% for non-hazardous and 200% for hazardous occupations, they are difficult to apply in practice. (2) Restrictions on coverage for hazardous hobbies: coverage limits are set by applying the occupational risk class of the corresponding occupation, and detailed information such as licenses or club membership is not collected through supplementary questionnaires (for example, paragliding is assigned risk class 2, with death cover limited to KRW 200 million). (3) Restrictions by residence or overseas travel: each insurer restricts coverage in regions with frequent accidents (for example, accident insurance cannot be purchased in some areas of Gangwon and Chungcheong, and hospitalization benefits are limited to KRW 20,000 per day in some areas of Jeonbuk and Taebaek); for overseas stays, including travel, fixed eligibility requirements are applied, coverage limits are imposed, or accident-focused products are declined (for Russia, short-term stays are assigned risk class 1 with accident insurance unavailable, and long-term stays are declined). 2. Differentiated acceptance by physical risk. (1) Acceptance of substandard risks: for increasing and constant risks, the excess risk index is converted into the benefit-reduction method and applied to death cover (up to 5 years), leaving serious exposure to insurance risk after 5 years; premium loading is used by some companies mainly on base policies, accepting up to a total risk index of 300 (8 bands), riders cannot be attached when the base policy is loaded, and applicants with a history of cancer are mostly declined; exclusions are applied to 39 body parts and 5 diseases (on living benefits such as hospitalization and surgery). (2) Non-smoker/preferred premium discounts: since their first introduction in 1999, a single class based on 3-4 risk factors has been operated (insurer S operates two classes, non-smoking preferred and non-smoking standard); discount rates vary by company and product, up to 22% of the gross premium, and smoking status is verified with a urine-stick cotinine test; preferred-risk sales account for 2-15% of new business, depending on company policy.
Ⅳ. Measures for advancing underwriting techniques: 1. Differentiate premiums by occupational risk: in conjunction with unifying life and non-life occupational risk classes, reorganize the risk indices into three classes and apply differentiated rates based on non-hazardous occupations. 2. Apply exclusions to hazardous hobbies: introduce exclusions for claims, including death, caused by the hobby. 3. Substantially expand the acceptance range by advancing substandard underwriting: expand the total risk index from 300 to 500 through wider use of premium loading to hedge the risk and minimize declines. 4. Combine premium loading with benefit reduction: develop a loading method that incorporates a reduction period, giving customers a choice. 5. Apply temporary extra premiums: for specific cancers such as stomach and thyroid cancer, apply a level extra premium during the high-risk early policy years according to the cancer's behavior. 6. Extend premium loading to riders and combine it with exclusions: extend loading to death-related riders such as term riders, and apply exclusions to living-benefit riders. 7. Expand segmentation of standard lives: refine classes by adding risk assessment factors such as cholesterol and HDL.
Ⅴ. Expected effects: 1. Open access to insurance for workers in high-risk occupations, people with hazardous hobbies, and substandard lives. 2. Improve equity among policyholders and meet diverse customer coverage needs. 3. Increase premium income and improve mortality margins through expanded product sales and risk hedging. 4. Strengthen insurers' fundamentals in preparation for full-scale price competition. 5. Enhance corporate image, reduce resistance to medical examinations, and prevent portfolio deterioration.
Ⅵ. Conclusion: Moving away from the passive, uniform acceptance practices of the past, insurers should adopt risk assessment tools that evaluate the insured from multiple perspectives and offer appropriate premiums and reasonable acceptance terms. Advancing underwriting acceptance techniques should go hand in hand with specializing underwriting staff and building information acquisition and system infrastructure. Doing so will contribute greatly not only to managing insurers' mortality gains and losses but also to strengthening the competitiveness of Korean life insurance underwriting and globalizing underwriters in preparation for market opening and a rapidly changing insurance environment.
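The two extra-premium mechanisms cited in Section Ⅱ, a percentage loading on the standard rate and a flat extra per $1,000 of sum assured, can be expressed as a short Python sketch. The standard rate below is a placeholder rather than an actual tariff; only the quoted miner example figures (300% and $2.95 per $1,000) come from the text.

```python
# A minimal sketch of percentage loading vs. flat-extra premium calculation.
def percentage_loading(standard_rate, multiple):
    """e.g. accidental-death cover for a miner rated at 300% of the standard rate."""
    return standard_rate * multiple

def flat_extra(sum_assured, extra_per_thousand):
    """e.g. $2.95 per $1,000 of sum assured for ordinary death cover."""
    return sum_assured / 1_000 * extra_per_thousand

standard_rate = 0.0012                        # hypothetical annual rate per unit sum assured
rated_rate = percentage_loading(standard_rate, 3.0)
extra = flat_extra(100_000, 2.95)             # $100,000 policy -> $295 annual flat extra
print(rated_rate, extra)
```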

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of these data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. In particular, failures in IT facilities are irregular because of their interdependence, and the causes are difficult to determine. Previous studies predicting data center failures treated each server as a single, independent state, without assuming that devices interact. Therefore, in this study, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and the analysis focused on complex failures occurring within servers. Failures external to servers include power, cooling, and user errors; since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the causes of failures occurring inside a server are difficult to determine, and adequate prevention has not yet been achieved, partly because server failures do not occur in isolation: a failure on one server can cause failures on other servers, or be triggered by something from another server. In other words, whereas existing studies analyzed failures under the assumption that servers do not affect one another, this study assumes that failures propagate between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when a failure on one piece of equipment was followed by a failure on another piece of equipment within 5 minutes, the failures were defined as occurring simultaneously. After constructing sequences of the devices that failed at the same time, five devices that frequently failed together within the constructed sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Because the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states. In addition, the Hierarchical Attention Network deep learning model structure was used, considering that, unlike in the single-server case, each server contributes at a different level to a complex failure. This algorithm increases prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the types of failure and selecting the analysis targets.
In the first experiment, the same collected data were analyzed and compared under both a single-server and a multiple-server assumption. The second experiment improved prediction accuracy for complex failures by optimizing the threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred, whereas under the multiple-server assumption all five servers were predicted to fail. These results support the hypothesis that servers affect one another. This study thus confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's effect differs, helped improve the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. It is expected that such failures can be prevented in advance using these results.
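A minimal PyTorch sketch of the modelling idea described above follows: each server's resource time series is encoded with an LSTM, and an attention layer weights the per-server encodings by their estimated influence before predicting a complex failure. Layer sizes, the binary output, and the two-level structure are illustrative assumptions rather than the authors' exact architecture.

```python
# Per-server LSTM encoder + server-level attention, in the spirit of a
# Hierarchical Attention Network over multiple servers.
import torch
import torch.nn as nn

class ServerEncoder(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)
        return h[-1]                      # (batch, hidden) summary of one server

class HierarchicalFailurePredictor(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = ServerEncoder(n_features, hidden)
        self.attn = nn.Linear(hidden, 1)  # scores how much each server contributes
        self.head = nn.Linear(hidden, 1)  # binary complex-failure prediction

    def forward(self, x):                 # x: (batch, n_servers, time, n_features)
        b, s, t, f = x.shape
        enc = self.encoder(x.reshape(b * s, t, f)).reshape(b, s, -1)
        weights = torch.softmax(self.attn(enc), dim=1)   # (batch, n_servers, 1)
        context = (weights * enc).sum(dim=1)             # attention-weighted mix of servers
        return torch.sigmoid(self.head(context)).squeeze(-1)

# Hypothetical batch: 8 samples, 5 servers, 30 time steps, 12 resource metrics.
model = HierarchicalFailurePredictor(n_features=12)
x = torch.randn(8, 5, 30, 12)
prob_failure = model(x)                   # per-sample probability of a complex failure
```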

Measuring the Economic Impact of Item Descriptions on Sales Performance (온라인 상품 판매 성과에 영향을 미치는 상품 소개글 효과 측정 기법)

  • Lee, Dongwon;Park, Sung-Hyuk;Moon, Songchun
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.1-17 / 2012
  • Personalized smart devices such as smartphones and smart pads are widely used. Unlike traditional feature phones, these smart devices allow users to choose from a variety of functions that support not only daily life but also business operations. A huge number of applications are accessible to smart device users in online and mobile application markets, and users can choose apps that fit their own tastes and needs, which is impossible for conventional phone users. With the increase in app demand, the tastes and needs of app users are becoming more diverse. To meet these requirements, numerous apps with diverse functions are being released on the market, which leads to fierce competition. Unlike offline markets, online markets have the limitation that purchasing decisions must be made without experiencing the items. Therefore, online customers rely more on the item-related information shown on the item page, where online markets commonly provide details about each item. Customers can gain confidence in the quality of an item through this online information and decide whether to purchase it. The same is true of online app markets. To win the sales competition against other apps that perform similar functions, app developers need to focus on writing app descriptions that attract the attention of customers. If we can measure the effect of app descriptions on sales, apart from an app's price and quality, app descriptions that facilitate sales can be identified. This study provides such a quantitative result for app developers who want to promote the sales of their apps. For this purpose, we collected app details, including descriptions written in Korean, from one of the largest app markets in Korea, and then extracted keywords from the descriptions. Next, the impact of the keywords on sales performance was measured through our econometric model. Through this analysis, we were able to analyze the impact of each keyword itself, apart from that of design or quality. The keywords, composed of the attributes and evaluations of each app, were extracted by a morpheme analyzer. Our model, with the keywords as its input variables, was built to analyze their impact on sales performance. A regression analysis was conducted for each category in which apps are included; this was required because the keywords emphasized in app descriptions differ from category to category. The analysis, conducted for both free and paid apps, showed which keywords have more impact on sales performance for each type of app. In the analysis of paid apps in the education category, keywords such as 'search+easy' and 'words+abundant' showed higher effectiveness. In the same category, free apps whose keywords emphasize the quality of the app showed higher sales performance. One interesting fact is that keywords describing not only the app but also the need for the app have a significant impact. Language learning apps, whether free or paid, showed higher sales performance when they included the keywords 'foreign language study+important'. This result shows that motivation for the purchase affected sales. While item reviews are widely researched in online markets, item descriptions are not actively studied. In the mobile app markets, newly introduced apps may not have many item reviews because of the low quantity sold.
In such cases, item descriptions can be regarded as more important when customers decide whether to purchase items. This study is the first attempt to quantitatively analyze the relationship between an item description and its impact on sales performance. The results show that our research framework successfully provides a list of the most effective sales key terms together with estimates of their effectiveness. Although this study was performed for a specific type of item (i.e., mobile apps), our model can be applied to almost all items traded in online markets.
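The keyword-extraction-plus-regression pipeline described above can be sketched as follows, using KoNLPy's Okt morpheme analyzer and scikit-learn as stand-ins for the study's tools. The tiny corpus and sales figures are invented for illustration; they are not the study's data, and the actual econometric model is richer than this plain linear fit.

```python
# Extract keywords from Korean descriptions and regress sales on keyword indicators.
from konlpy.tag import Okt
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LinearRegression

okt = Okt()

descriptions = [
    "검색이 쉽고 단어가 풍부한 외국어 학습 앱",
    "외국어 공부가 중요한 분들을 위한 단어장 앱",
    "간단한 메모 앱",
]
sales = [1200, 950, 300]          # hypothetical download counts

# Tokenize each description into morphemes (nouns only here, for brevity).
vectorizer = CountVectorizer(tokenizer=okt.nouns, token_pattern=None)
X = vectorizer.fit_transform(descriptions)

# Estimate the marginal effect of each keyword on sales performance.
model = LinearRegression().fit(X.toarray(), sales)
effects = dict(zip(vectorizer.get_feature_names_out(), model.coef_))
print(sorted(effects.items(), key=lambda kv: kv[1], reverse=True))
```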

Monitoring of a Time-series of Land Subsidence in Mexico City Using Space-based Synthetic Aperture Radar Observations (인공위성 영상레이더를 이용한 멕시코시티 시계열 지반침하 관측)

  • Ju, Jeongheon;Hong, Sang-Hoon
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1657-1667 / 2021
  • Anthropogenic activities and natural processes cause land subsidence, the sudden sinking or gradual settlement of the earth's solid surface. Mexico City, the capital of Mexico, is one of the areas most severely affected by land subsidence, which results from excessive groundwater extraction; groundwater is the city's primary water resource, accounting for almost 70% of total water usage. Traditional terrestrial observations such as the Global Navigation Satellite System (GNSS) or leveling surveys have been preferred for measuring land subsidence accurately. Although GNSS observations provide highly accurate surface displacement information with very high temporal resolution, they are often limited by sparse spatial coverage and by being time-consuming and costly. Space-based synthetic aperture radar (SAR) interferometry, however, has been widely used as a powerful tool for monitoring surface displacement with high spatial resolution and millimeter-to-centimeter accuracy, regardless of daylight or weather conditions. In this paper, advanced interferometric approaches are applied to obtain a time series of land subsidence in Mexico City using twenty ALOS PALSAR L-band observations acquired over four years, from February 11, 2007 to February 22, 2011. We utilized persistent scatterer interferometry (PSI) and small baseline subset (SBAS) techniques to suppress atmospheric artifacts and topographic errors. The results show that the maximum subsidence rates from the PSI and SBAS methods were -29.5 cm/year and -27.0 cm/year, respectively. In addition, we discuss the differing subsidence rates across three districts of the study area distinguished by their geotechnical characteristics. The most significant subsidence occurred in lacustrine sediments, which are more compressible than the harder bedrock.
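The SBAS inversion step mentioned above can be sketched as a small least-squares problem: unwrapped interferogram phases from date pairs are inverted for a displacement value at each acquisition date. The dates, pairs, and phases below are toy values, and unwrapping, atmospheric filtering, and reference-point handling are omitted.

```python
# A minimal SBAS-style least-squares inversion of interferogram phases.
import numpy as np

WAVELENGTH_M = 0.236                     # ALOS PALSAR L-band wavelength (~23.6 cm)

dates = np.array([0, 46, 92, 138, 184])  # acquisition days relative to the first scene
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
phase = np.array([0.8, 0.9, 1.7, 1.0, 0.7, 1.8])   # unwrapped phase per interferogram (rad)

# Design matrix: each interferogram is the difference of two date displacements
# (the first date is held fixed at zero as the reference).
A = np.zeros((len(pairs), len(dates) - 1))
for k, (i, j) in enumerate(pairs):
    if j > 0:
        A[k, j - 1] = 1.0
    if i > 0:
        A[k, i - 1] = -1.0

# Convert phase to line-of-sight displacement (sign convention varies by processor)
# and solve the linear system in a least-squares sense.
los = -phase * WAVELENGTH_M / (4.0 * np.pi)
disp, *_ = np.linalg.lstsq(A, los, rcond=None)
timeseries = np.concatenate([[0.0], disp])          # metres, one value per acquisition date
print(timeseries)
```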