
The Crisis of AIDS and responses of South African Churches in the task of new national building (새로운 민주주의 국가건설의 과제 속에 직면한 AIDS와 이에 대한 교회의 반응과 과제: 남아프리카 공화국을 중심으로)

  • Kim, Dae-Yoong
    • Journal of the Korean Association of African Studies
    • /
    • v.29
    • /
    • pp.27-53
    • /
    • 2009
  • At the start of the new century, South Africa probably had the largest number of HIV-infected people of any country in the world. The only nation that comes close is India, with a population of one billion compared to South Africa's 57 million. The tragedy is that this did not have to happen. South Africa was aware of the dangers posed by AIDS as early as 1985. In 1991, a national survey of women attending antenatal clinics found that only 0.8% were infected. In 1994, when the new government took power, the figure was still comparatively low at 7.6%. The published figure for 2004 is 26.5%. This article tracks the epidemic globally, in the region, and in South Africa. I explain some of the basic concepts around the disease and look at what may happen with respect to numbers. The situation is bad, and the number of people falling ill, dying, and leaving families behind will rise over the next few years. This will impact South Africa in a number of important ways. This article assesses the demographic, economic, and social consequences of the epidemic. It disposes of a number of myths and presents the real facts. AIDS in South Africa is not a matter of individuals only; it warns that AIDS in Africa is becoming a community and systemic problem. The acuteness of the problem does not stem merely from the fact that communities are affected, or could even be wiped out by the end of this decade, but from the fact that AIDS will place incredible burdens and obligations upon medical services, health care, and religious communities such as churches. These facts confront the churches' mission with an important question: who is going to take care of all the patients, and where? The reality is that people dying of AIDS will have to be cared for at home by relatives and friends. A further question that arises is whether our people are prepared for this.
AIDS was considered a homo-plague, and in the light of the fatal implications of the disease the hunt was on for a scapegoat. At present we are in the strategic phase in which we all realize that it is of no avail to scare people with the ominous threat of AIDS. AIDS destroys the optimism of our achievement ethics. This exposure of the culture of optimism is also an exposure of the so-called basic human fear which accuses Christianity of a concept of sin that is a damper on man's search for liberation and his basic need to be freed from all limitation. AIDS is also a test of our ecclesiastical genuineness and the sincerity of our missionary sensibility. It poses the question: how unconditional is Christian love? Is there room for the AIDS sufferer in the community of believers, despite the fact that he is an acknowledged homosexual? The question to put to the church is whether the community of believers is an exclusive koinonia which excludes homosexuals. They may be welcome in principle, but in actual fact they are not accepted by the church community. As South Africa enters the new century, it is clear that the epidemic is not yet having a measurable impact; the impact of AIDS is gradual, subtle, and incremental. The author's proposal of what is currently most needed in South Africa is that the little things will make a difference. It is about doing lots of little things better at the grassroots level, with the emphasis on doing. There are many community, church, and NGO initiatives worth building on and intensifying. One must not underestimate the therapeutic value of working together in small groups to overcome a problem.

Evaluation of Application Possibility for Floating Marine Pollutants Detection Using Image Enhancement Techniques: A Case Study for Thin Oil Film on the Sea Surface (영상 강화 기법을 통한 부유성 해양오염물질 탐지 기술 적용 가능성 평가: 해수면의 얇은 유막을 대상으로)

  • Soyeong Jang;Yeongbin Park;Jaeyeop Kwon;Sangheon Lee;Tae-Ho Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1353-1369
    • /
    • 2023
  • In the event of a disaster accident at sea, the scale of damage varies with weather effects such as wind, currents, and tidal waves, and it is essential to minimize the damage by establishing appropriate control plans through quick on-site identification. In particular, pollutants that exist as a thin film on the sea surface are difficult to identify because of their relatively low viscosity and surface tension among the pollutants discharged into the sea. Therefore, this study aims to develop an algorithm that detects floating pollutants on the sea surface in RGB images using imaging equipment that can be easily used in the field, and to evaluate the performance of the algorithm using input data obtained from actual waters. The developed algorithm uses image enhancement techniques to improve the contrast between the intensity values of pollutants and the general sea surface; through histogram analysis, the background threshold is found, suspended solids other than pollutants are removed, and finally the pollutants are classified. In this study, a real-sea test using substitute materials was performed to evaluate the performance of the developed algorithm. Most of the floating marine pollutants were detected, although false detections occurred in places with strong waves. Nevertheless, the detection results are about three times better than those of the single-threshold detection method in the existing algorithm. The results of this R&D are expected to be useful for on-site control response activities by detecting floating marine pollutants that were difficult to identify with the naked eye in the field.
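The pipeline sketched in the abstract (contrast enhancement, histogram-based background thresholding, classification of pollutant pixels) can be illustrated roughly as below. This is a minimal sketch of the histogram-threshold idea on a synthetic scene, using a single Otsu threshold (the baseline the paper improves on), not the authors' full algorithm; the data and parameter choices are assumptions.

```python
import numpy as np

def stretch_contrast(img):
    """Linearly rescale intensities to [0, 1] to sharpen the
    contrast between pollutant and sea-surface pixels."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def otsu_threshold(img, bins=256):
    """Find a background threshold from the histogram by
    maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # background class weight
    w1 = 1.0 - w0                              # foreground class weight
    mu_cum = np.cumsum(p * centers)
    mu_t = (p * centers).sum()
    mu0 = mu_cum / np.where(w0 > 0, w0, 1)
    mu1 = (mu_t - mu_cum) / np.where(w1 > 0, w1, 1)
    sigma_b = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
    return centers[np.argmax(sigma_b)]

def detect_pollutant(img):
    """Return a boolean mask of candidate pollutant pixels."""
    s = stretch_contrast(img)
    return s > otsu_threshold(s)               # film assumed brighter than sea

# toy scene: dark sea background with a brighter hypothetical film patch
rng = np.random.default_rng(0)
sea = rng.normal(0.2, 0.02, (64, 64))
sea[20:40, 20:40] += 0.5
mask = detect_pollutant(sea)
print(mask[25:35, 25:35].all(), mask[:10, :10].any())  # → True False
```

In practice the thresholding would be preceded by the paper's enhancement steps and followed by removal of non-pollutant suspended solids, which this sketch omits.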

The Effect of Retinal and Perceived Motion Trajectory of Visual Motion Stimulus on Estimated Speed of Motion (운동자극의 망막상 운동거리와 지각된 운동거리가 운동속도 추정에 미치는 영향)

  • Park Jong-Jin;Hyng-Chul O. Li;ShinWoo Kim
    • Korean Journal of Cognitive Science
    • /
    • v.34 no.3
    • /
    • pp.181-196
    • /
    • 2023
  • Size, velocity, and time equivalence are mechanisms that allow us to perceive objects in three-dimensional space consistently, despite errors in the two-dimensional retinal image. These mechanisms work on common cues, suggesting that the perception of motion distance, motion speed, and motion time may share common processing. This leads to the hypothesis that, even though the spatial properties of visual stimuli distort temporal perception, the perception of motion speed and the perception of motion duration will tend to oppose each other, as observed for objects moving in the environment. To test this hypothesis, the present study measured perceived speed using Müller-Lyer illusion stimuli, in order to relate the time-perception effects of motion stimuli observed in previous studies to the speed perception measured here. Experiment 1 manipulated the perceived motion trajectory while controlling the retinal motion trajectory, and Experiment 2 manipulated the retinal motion trajectory while controlling the perceived motion trajectory. The result is that the speed of the inward stimulus, whose trajectory is perceived to be shorter than the actual distance traveled, is estimated to be higher than that of the outward stimulus, whose trajectory is perceived to be longer. Taken together with previous time-perception findings, namely that time perception is expanded for outward stimuli and contracted for inward stimuli, this suggests that when the perceived trajectory of a stimulus manipulated by the Müller-Lyer illusion is controlled, perceived speed decreases with increasing perceived duration and increases with decreasing duration while the perceived distance of the stimulus is constant.
This relationship suggests that the relation between time and speed perceived from spatial cues corresponds to the properties of objects moving in the environment, i.e., when distance remains the same, an increase in time decreases speed and a decrease in time increases speed.

The Correction Effect of Motion Artifacts in PET/CT Images Using a Motion Correction System (PET/CT 검사 시 움직임 보정 기법의 유용성 평가)

  • Yeong-Hak Jo;Se-Jong Yoo;Seok-Hwan Bae;Jong-Ryul Seon;Seong-Ho Kim;Won-Jeong Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.1
    • /
    • pp.45-52
    • /
    • 2024
  • In this study, an AI-based algorithm was developed to prevent image-quality deterioration and reading errors caused by patient movement in PET/CT examinations, which use radioisotopes in medical institutions to diagnose cancer and other diseases. Using the developed Motion Free software, we checked the degree to which movement due to breathing is corrected, evaluated its usefulness, and conducted a study for clinical application. The experimental method was to inject the radioisotope 18F-FDG into a vacuum vial and into spheres of different sizes in a NEMA IEC body phantom mounted on an RPM phantom, and to produce images in which the radioisotope moves like a lesion during respiration. The vacuum vials had different degrees of movement at different positions, and the spheres of the NEMA IEC body phantom produced lesions of different sizes. From the acquired images, the lesion volume, maximum SUV, and average SUV were each measured to quantitatively evaluate the degree of motion correction by Motion Free. The average SUV error rate of vacuum vial A, with a large degree of movement, was reduced by 23.36%, and that of vacuum vial B, with a small degree of movement, was reduced by 29.3%. The average SUV error rates at the 37 mm and 22 mm spheres of the NEMA IEC body phantom were reduced by 29.3% and 26.51%, respectively. The average of the four measured error rates decreased by 30.03%, indicating a more accurate average SUV value. In this study, only two-dimensional movements could be produced; if a phantom that can reproduce the actual breathing motion of the human body were used and a wider variety of movement ranges were configured, a more accurate evaluation of usefulness would be possible.
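The quantitative evaluation described above amounts to comparing SUV error rates against a reference before and after motion correction. A minimal sketch of that arithmetic, with hypothetical SUV values rather than the paper's measurements:

```python
def percent_error(measured, reference):
    """Absolute percent error of a measured SUV against a reference SUV."""
    return abs(measured - reference) / reference * 100.0

def error_reduction(uncorrected, corrected, reference):
    """Percentage-point drop in SUV error after motion correction."""
    return percent_error(uncorrected, reference) - percent_error(corrected, reference)

# hypothetical SUVs for a moving sphere (illustrative, not the paper's data):
# motion blur depresses the measured average SUV; correction restores it
ref, blurred, corrected = 10.0, 6.5, 9.4
print(round(error_reduction(blurred, corrected, ref), 1))  # → 29.0
```

The paper's reported reductions (23.36% to 29.3%) are differences of this kind, averaged over the vials and spheres.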

Study of the Actual Condition and Satisfaction of Volunteer Activity in Australian Hospital (호주 일 지역의 병원 자원봉사활동 실태와 만족도)

  • Park, Geum-Ja;Choi, Hae-Young
    • Journal of Hospice and Palliative Care
    • /
    • v.9 no.1
    • /
    • pp.17-29
    • /
    • 2006
  • Purpose: This research aimed to investigate the actual condition and satisfaction of volunteer activity in an Australian hospital. Methods: Data were collected by self-reported questionnaire from 101 volunteers and analyzed by frequency and percentage, t-test, ANOVA, Scheffé test, and Pearson's correlation coefficients using SPSS 12.0. Results: 1. Years involved in volunteer work were 5~10 years (32.7%), above 10 years (30.7%), 2~3 years (11.9%), and 3~5 years (10.9%). Types of volunteer work were physical care (32.7%), physical and emotional care (14.9%), and others (18.8%). Tasks were allocated by the volunteer coordinator (55.7%) or by volunteer preference and consent between volunteer and coordinator (20.5% each). Main reasons for volunteer work were to help sick people (61.4%) and to make good use of leisure time (22.8%). Routes into volunteer work were one's own inquiries (43.4%), hearing from other volunteers (30.7%), and mass media (13.1%). 80.2% of volunteers had received some kind of training or preparation for volunteer work. Suitability of the volunteer's skill and ability to the work was 'very well' (74.0%) and 'mostly well' (18.0%). Reimbursements or benefits received for volunteer work were a token or lunch or group outing (31.7%), and a token and lunch or group outing (19.8%). Volunteer work was evaluated occasionally (37.2%), frequently (30.9%), always (17.0%), and never (14.9%). The relationship with the volunteer work coordinator was very good (85.0%). The relationship with other volunteers was very good (81.2%). The relationship with hospital staff was very good (69.7%) and mostly good (21.2%). Family and friends' support for volunteer work was very good (83.2%). 2. The mean satisfaction score for hospital volunteer activity was 3.09±0.49 (range: 1~4).
The highest-scoring domain was 'social contact' (3.48±0.61) and the lowest was 'social exchange' (1.65±0.63). The highest-scoring item was 'I have an opportunity to help other people' (3.83±0.40), and the lowest was 'I will receive compensation for volunteer work I have done' (1.10±0.78). 3. Satisfaction with hospital volunteer activity differed significantly according to sex (t=2.038, P=0.044), marital status (F=3.806, P=0.013), years involved in volunteer work (F=3.326), main reason for doing volunteer work (F=2.707, P=0.035), receipt of training or preparation for volunteer work (t=-1.982, P=0.050), frequency of evaluation of volunteer work (F=7.877, P=0.000), suitability of the volunteer's skill and ability to the work (t=2.712, P=0.049), relationship with the volunteer work coordinator (F=-2.517, P=0.013), relationship with hospital staff (F=5.202, P=0.007), and support of family and friends (t=-3.394, P=0.001). Conclusion: Satisfaction with hospital volunteer activity was moderate and differed significantly according to the factors listed above. Therefore, it is necessary to consider these various factors to improve the satisfaction of voluntary work.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations in future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
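The forecasting framework above rests on a doubly-constrained gravity model, T_ij = P_i · A_j · F_ij, balanced so that row sums reproduce zonal productions and column sums reproduce zonal attractions, with SELINK factors computed as ground count over assigned volume. A minimal sketch under an assumed friction factor curve (not the calibrated WisDOT curves); the zone sizes and costs are hypothetical:

```python
import numpy as np

def gravity_model(productions, attractions, friction, iters=50):
    """Doubly-constrained gravity model: seed trips with
    T_ij = P_i * A_j * F_ij, then Furness-balance rows and columns
    so row sums match productions and column sums match attractions."""
    T = np.outer(productions, attractions) * friction
    for _ in range(iters):
        T *= (productions / T.sum(axis=1))[:, None]   # match row totals
        T *= (attractions / T.sum(axis=0))[None, :]   # match column totals
    return T

def selink_factor(ground_count, assigned_volume):
    """SELINK link adjustment factor: observed over assigned volume."""
    return ground_count / assigned_volume

# hypothetical 3-zone example
P = np.array([100.0, 200.0, 50.0])                    # zonal productions
A = np.array([150.0, 120.0, 80.0])                    # zonal attractions
cost = np.array([[1, 3, 5], [3, 1, 4], [5, 4, 1]], dtype=float)
F = np.exp(-0.5 * cost)                               # assumed friction factor curve
T = gravity_model(P, A, F)
print(np.allclose(T.sum(axis=0), A))                  # → True
```

In the study, the friction factor curves are what calibration adjusts (one curve per trip type), and the SELINK factors feed back into the zonal productions and attractions before the model is re-run.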

Perceptional Change of a New Product, DMB Phone

  • Kim, Ju-Young;Ko, Deok-Im
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.3
    • /
    • pp.59-88
    • /
    • 2008
  • Digital convergence means integration between industry, technology, and contents; in marketing, it usually comes with the creation of new types of products and services on the basis of digital technology as digitalization progresses in electro-communication industries, including the telecommunication, home appliance, and computer industries. One can see digital convergence not only in instruments such as PCs, AV appliances, and cellular phones, but also in the contents, networks, and services required in the production, modification, distribution, and re-production of information. Convergence in contents started around 1990. Convergence in networks and services began as broadcasting and telecommunication integrated, and DMB (digital multimedia broadcasting), born in May 2005, is the symbolic icon of this trend. There are some positive and negative expectations about DMB. The reason why two opposite expectations exist is that DMB did not come out of customer need but from technology development. Therefore, customers might have a hard time interpreting the real meaning of DMB. Time is quite critical for a high-tech product like DMB, because another product with the same function from a different technology can replace the existing product within a short period of time. If DMB does not position well in the customer's mind quickly, other products like Wibro, IPTV, or HSDPA could replace it before it even spreads out. Therefore, positioning strategy is critical for the success of the DMB product. To make a correct positioning strategy, one needs to understand how consumers interpret DMB and how consumers' interpretations can be changed via communication strategy. In this study, we investigate how consumers perceive a new product like DMB and how AD strategy changes consumers' perception. More specifically, the paper segments consumers into sub-groups based on their DMB perceptions and compares their characteristics in order to understand how they perceive DMB.
We then expose them to different printed ADs whose messages guide consumers to think of DMB in specific ways, either as a cellular phone or as a personal TV. Research Question 1: Segment consumers according to perceptions about DMB and compare the characteristics of the segments. Research Question 2: Compare perceptions about DMB after an AD that induces categorization of DMB in a given direction for each segment. If one can understand and predict the direction in which consumers perceive a new product, the firm can select target customers easily. We segment consumers according to their perception and analyze their characteristics in order to find variables that can influence perceptions, like prior experience, usage, or habit. Marketing people can then use these variables to identify target customers and predict their perceptions. If one knows how customers' perception is changed via an AD message, communication strategy can be constructed properly. In particular, information from segmented customers helps to develop an efficient AD strategy for a segment that has a prior perception. The research framework consists of two measurements and one treatment, O1 X O2. The first observation collects information about consumers' perception and their characteristics. Based on the first observation, the paper segments consumers into two groups: one group perceives DMB as similar to a cellular phone and the other group perceives DMB as similar to TV. We compare the characteristics of the two segments in order to find reasons why they perceive DMB differently. Next, we expose two kinds of AD to subjects. One AD describes DMB as a cellular phone and the other AD describes DMB as a personal TV. When the two ADs are exposed to subjects, we do not know their prior perception of DMB, in other words, whether a subject belongs to the 'similar-to-cellular-phone' segment or the 'similar-to-TV' segment. However, we analyze the AD's effect separately for each segment. In the research design, the final observation investigates the AD effect.
Perception before the AD is compared with perception after the AD. Comparisons are made for each segment and for each AD. For the segment that perceives DMB as similar to TV, the AD that describes DMB as a cellular phone could change the prior perception, while the AD that describes DMB as a personal TV could reinforce it. For data collection, subjects are selected from undergraduate students because they have basic knowledge about most digital equipment and an open attitude toward new products and media. The total number of subjects is 240. In order to measure perception about DMB, we use an indirect measurement: comparison with other similar digital products. To select similar digital products, we pre-surveyed students and finally selected PDA, Car-TV, cellular phone, MP3 player, TV, and PSP. A quasi-experiment is done in several classes with the instructor's permission. After a brief introduction, prior knowledge, awareness, and usage of DMB as well as of the other digital instruments are asked, and their similarities and perceived characteristics are measured. Then, two kinds of manipulated color-printed AD are distributed and the similarities and perceived characteristics of DMB are re-measured. Finally, purchase intention, AD attitude, a manipulation check, and demographic variables are asked. Subjects are given a small gift for participation. The stimuli are color-printed advertisements. Their actual size is A4, and they were made after several pre-tests with AD professionals and students. As a result, consumers are segmented into two subgroups based on their perceptions of DMB. The similarity measure between DMB and cellular phone and the similarity measure between DMB and TV are used to classify consumers. If a subject's first measure is less than the second, she is classified into segment A, which is characterized as perceiving DMB like TV. Otherwise, subjects are classified into segment B, which perceives DMB like a cellular phone.
Discriminant analysis of these groups on their usage and attitude characteristics shows that segment A knows much about DMB and uses many digital devices, whereas segment B, which thinks of DMB as a cellular phone, does not know DMB well and is less familiar with other digital devices. Thus, consumers with more knowledge perceive DMB as similar to TV, because the launch advertising for DMB led consumers to think of it as TV, while consumers with less interest in digital products are less aware of DMB advertising and therefore think of DMB as a cellular phone. To investigate perceptions of DMB and the other digital devices, we apply Proxscal, a multidimensional scaling (MDS) technique in the SPSS statistical package. In the first step, subjects are presented with the 21 pairs of the 7 digital devices and give similarity judgments on a 7-point scale; for each segment, the judgments are averaged into a similarity matrix. Second, Proxscal analyses of segments A and B are performed. In the third stage, similarity judgments between DMB and the other devices are collected after advertisement exposure. Lastly, the similarity judgments of groups A-1, A-2, B-1, and B-2 are labeled 'after DMB', added to the matrices built in the first stage, and Proxscal is applied to these matrices to check the positional difference between DMB and 'after DMB'. The results show that in the map of segment A, which perceives DMB as similar to TV, DMB is positioned closer to TV than to the cellular phone, as expected; in the map of segment B, which perceives DMB as similar to a cellular phone, DMB is positioned closer to the cellular phone than to TV, as expected. The stress values and R-squares are acceptable. The changes after the manipulated advertising stimuli show that the advertisement bends the perception of DMB toward the cellular phone when the cellular-phone-like advertisement is shown, and moves the DMB position toward Car-TV, a more personalized device, when the TV-like advertisement is shown. This holds consistently for both segments A and B. 
Furthermore, we apply correspondence analysis to the same data and find almost the same results. The paper answers its two main research questions: first, perception of a new product is formed mainly from prior experience; second, advertising is effective in changing and reinforcing perception. In addition, we extend perception change to purchase intention: purchase intention is high when the advertisement reinforces the original perception, and the advertisement that presents DMB as a TV produces the lowest intention. This paper has limitations and issues to be pursued in the near future. Methodologically, the current approach cannot provide a statistical test of the perceptual change, since classical MDS models such as Proxscal and correspondence analysis are not probability models; a new probabilistic MDS model for testing hypotheses about configurations needs to be developed. Next, the advertising messages need to be developed more rigorously from theoretical and managerial perspectives. The experimental procedure could also be improved for more realistic data collection, for example with web-based experiments, real product stimuli, multimedia presentation, or products displayed together in a simulated shop. In addition, demand and social-desirability threats to internal validity could influence the results; to handle them, the results of the model-intended advertising could be compared with those of "pseudo" advertising. Furthermore, one could try various levels of innovativeness to check whether they produce different results (cf. Moon 2006). Finally, if one could create a hypothetical product that is truly innovative and new, it would help to start from a blank impression and to study impression formation more rigorously.
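The MDS step above can be sketched in code. The paper uses SPSS Proxscal; as a minimal analogue, the sketch below runs scikit-learn's metric MDS on a precomputed dissimilarity matrix. The matrix values are hypothetical placeholders, not the study's data; only the product set and the pairwise-judgment structure come from the abstract.

```python
import numpy as np
from sklearn.manifold import MDS

# The 7 products compared in the study; the pair count is C(7,2) = 21.
labels = ["DMB", "PDA", "Car-TV", "Cellphone", "MP3", "TV", "PSP"]

# Hypothetical averaged dissimilarities on a 7-point scale (illustration only;
# the paper averages each segment's 21 pairwise judgments into such a matrix).
rng = np.random.default_rng(42)
upper = np.triu(rng.uniform(1.0, 7.0, size=(7, 7)), 1)
diss = upper + upper.T                 # symmetric, zero diagonal

# Metric MDS on the precomputed dissimilarities (analogue of SPSS Proxscal).
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(diss)

# Positional comparison: is DMB mapped closer to TV or to the cellular phone?
d_tv = np.linalg.norm(coords[labels.index("DMB")] - coords[labels.index("TV")])
d_phone = np.linalg.norm(coords[labels.index("DMB")] - coords[labels.index("Cellphone")])
print(f"stress={mds.stress_:.3f}, d(DMB,TV)={d_tv:.2f}, d(DMB,Phone)={d_phone:.2f}")
```

Comparing `d_tv` with `d_phone` before and after the advertisement exposure mirrors the paper's positional-difference check between 'DMB' and 'after DMB'.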

  • PDF

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, offers stability in the management of large funds, and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It scales to billions of examples in limited-memory environments and learns very fast compared with traditional boosting methods, so it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect portfolio performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. 
For the empirical test of the suggested model, we used Korean stock market price data covering 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. Using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, we accumulated predictions and produced a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return and obtained a large sample thanks to the long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and reduction of estimation errors: the total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist in a changing financial market. This study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. Among the various studies on parametric estimation methods for reducing estimation errors in portfolio optimization, we suggest a new, machine-learning-based method for the optimized asset allocation model.
So this study is meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
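The core idea described above — injecting predicted volatilities into the covariance estimate and then solving for equal risk contributions — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the historical covariance, the predicted volatilities (which the paper obtains from XGBoost), and all numbers are hypothetical stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

def adjust_covariance(hist_cov, pred_vols):
    """Keep the historical correlation structure but replace each asset's
    volatility with its predicted value (a stand-in for XGBoost output)."""
    hist_vols = np.sqrt(np.diag(hist_cov))
    corr = hist_cov / np.outer(hist_vols, hist_vols)
    return corr * np.outer(pred_vols, pred_vols)

def risk_parity_weights(cov):
    """Long-only weights equalizing each asset's risk contribution w_i * (Cov w)_i."""
    n = cov.shape[0]

    def objective(w):
        rc = w * (cov @ w)                      # risk contributions (up to scale)
        return np.sum((rc[:, None] - rc[None, :]) ** 2)

    res = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
                   options={"ftol": 1e-12, "maxiter": 500})
    return res.x

# Toy example: two uncorrelated assets with predicted vols of 20% and 10%.
hist_cov = np.array([[0.09, 0.0],
                     [0.0, 0.04]])
w = risk_parity_weights(adjust_covariance(hist_cov, np.array([0.20, 0.10])))
print(w)   # the lower-volatility asset receives the larger weight
```

In the paper's setup this optimization would be repeated over 1,000-day in-sample / 20-day out-of-sample moving windows (154 rebalancings over 2003-2019), with the predicted volatilities refreshed at each step.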

The Influence Evaluation of $^{201}Tl$ Myocardial Perfusion SPECT Image According to the Elapsed Time Difference after the Whole Body Bone Scan (전신 뼈 스캔 후 경과 시간 차이에 따른 $^{201}Tl$ 심근관류 SPECT 영상의 영향 평가)

  • Kim, Dong-Seok;Yoo, Hee-Jae;Ryu, Jae-Kwang;Yoo, Jae-Sook
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.67-72
    • /
    • 2010
  • Purpose: At Asan Medical Center we perform myocardial perfusion SPECT to evaluate the cardiac event risk of patients before non-cardiac surgery. For cancer patients, tumor metastasis is checked with a whole-body bone scan or whole-body PET scan before myocardial perfusion SPECT in order to avoid unnecessary exams. To shorten the hospitalization of short-term inpatients, we perform $^{201}Tl$ myocardial perfusion SPECT a minimum of 16 hours after the whole-body bone scan, but the effect of the crosstalk contamination caused by administering two different isotopes has not been properly evaluated. In this study we therefore evaluated the influence of crosstalk contamination on $^{201}Tl$ myocardial perfusion SPECT using an anthropomorphic torso phantom and patient data. Materials and Methods: From August to September 2009, we analyzed 87 patients who underwent $^{201}Tl$ myocardial perfusion SPECT, classified according to whether a whole-body bone scan had been performed the day before. Image data were acquired using dual energy windows during $^{201}Tl$ myocardial perfusion SPECT, and the ratio of $^{201}Tl$ to $^{99m}Tc$ counts was analyzed for each patient group. In the phantom experiment, we administered $^{201}Tl$ 14.8 MBq (0.4 mCi) to the myocardium and $^{99m}Tc$ 44.4 MBq (1.2 mCi) to the extracardiac region of an anthropomorphic torso phantom, acquired non-gated $^{201}Tl$ myocardial perfusion SPECT images, and analyzed spatial resolution using Xeleris ver. 2.0551. 
Results: In the patients who had undergone a whole-body bone scan the day before, the ratio of counts in the $^{201}Tl$ window to counts in the $^{99m}Tc$ window decreased exponentially with the time elapsed since the bone tracer injection: from 1:0.249 at 12 hours to 1:0.114 at 24 hours with Ventri (GE Healthcare, Wisconsin, USA), and from 1:0.79 to 1:0.411 with Infinia (GE Healthcare, Wisconsin, USA) (Ventri p=0.001, Infinia p=0.001). In patients without a whole-body bone scan, the ratio averaged 1:$0.067{\pm}0.6$ with Ventri and 1:$0.063{\pm}0.7$ with Infinia. In the phantom experiment, spatial resolution measurements showed no significant change in FWHM regardless of $^{99m}Tc$ administration or elapsed time (p=0.134). Conclusion: Through the experiments with the anthropomorphic torso phantom and the patient data, we confirmed that a $^{201}Tl$ myocardial perfusion SPECT image acquired 16 hours or more after the bone tracer injection is not notably influenced by $^{99m}Tc$ in terms of spatial resolution. However, this investigation addressed image quality only, so further study of patient radiation dose and of exam accuracy and precision is needed. An exact guideline on the exam interval should be established through a standardized validation test of the effect of crosstalk contamination when different isotopes are used.
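The exponential fall-off of the crosstalk with elapsed time is what physical decay alone would predict: $^{99m}Tc$ (half-life about 6.0 h) decays much faster than $^{201}Tl$ (half-life about 73 h), so the Tc-to-Tl count ratio shrinks roughly as exp(-(λ_Tc - λ_Tl)·t). The sketch below illustrates only this physical component; real patient data also involve biological clearance, which it ignores.

```python
import math

T_HALF_TC99M = 6.01   # hours, physical half-life of Tc-99m
T_HALF_TL201 = 73.1   # hours, physical half-life of Tl-201

def crosstalk_ratio(t_hours, r0=1.0):
    """Relative Tc-99m/Tl-201 count ratio after t hours, physical decay only."""
    lam_tc = math.log(2) / T_HALF_TC99M
    lam_tl = math.log(2) / T_HALF_TL201
    return r0 * math.exp(-(lam_tc - lam_tl) * t_hours)

# Physical decay alone predicts the crosstalk drops to roughly 28% of its
# 12-hour value by 24 hours, matching the direction of the reported ratios.
reduction = crosstalk_ratio(24) / crosstalk_ratio(12)
print(f"12h -> 24h reduction factor: {reduction:.3f}")
```

This is why a minimum interval of 16 hours (nearly three $^{99m}Tc$ half-lives) already removes most of the bone-tracer contamination from the $^{201}Tl$ windows.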

  • PDF

Review of 2015 Major Medical Decisions (2015년 주요 의료판결 분석)

  • Yoo, Hyun Jung;Lee, Dong Pil;Lee, Jung Sun;Jeong, Hye Seung;Park, Tae Shin
    • The Korean Society of Law and Medicine
    • /
    • v.17 no.1
    • /
    • pp.299-346
    • /
    • 2016
  • Various decisions were also made in the medical area in 2015. In a case where an inmate of a sanatorium was injured for a reason attributable to the sanatorium and the social welfare foundation operating the sanatorium requested treatment of the patient, the court set the standard for determining who is a party to the medical contract. In a case where the family of a patient declared brain dead demanded the withdrawal of meaningless life-sustaining treatment but the hospital refused and continued the treatment, the court ruled on the fees chargeable for such treatment. As to the 'eye-brightening' operation, which in February 2011 became the first procedure suspended by the Ministry of Health and Welfare because of uncertainty about its safety, the court did not accept that the operation itself was illegal, but ordered compensation for the whole damage based on violation of the duty of explanation, namely the omission of the fact that cost-effectiveness was not assured since the procedure was still at the clinical testing stage. There were numerous cases in which courts actively acknowledged malpractice: in cases of paresis syndrome after back surgery, quite a few instances of malpractice during surgery were acknowledged, and in a nosocomial infection case the hospital's negligence in causing the infection was acknowledged. One decision acknowledged malpractice by distinguishing the duty to install emergency equipment under the Emergency Medical Service Act from the duty to take emergency measures in emergency situations, and another acknowledged a hospital's negligence for failing to take appropriate measures even though the disease was very rare. 
In connection with the scope of compensation, there were decisions adhering to substantive truth: one court applied a different labor-ability loss rate after a reappraisal on appeal reduced the rate found at first instance, and another acknowledged a lower labor-ability loss rate than the physical appraisal indicated, in consideration of the patient's condition. Regarding whether fees may still be charged after damage caused by malpractice, the court rejected a hospital's claim for setoff, holding that if the hospital merely continued treatment to cure the patient or prevent aggravation of the disease, it cannot charge medical bills to the patient. The provision of the Medical Law prohibiting, and punishing, medical advertisements that had not been preliminarily reviewed was declared unconstitutional as prior censorship by an administrative agency, since deliberative bodies such as the Korean Medical Association cannot be denied the character of administrative bodies. On the question whether PRP treatment, which is commonly performed clinically, should be considered a legally determined uninsured treatment, the court made clear that such status is not decided by theoretical possibility or actual practice: a treatment must have its medical safety and effectiveness acknowledged and be included in medical care or in the legally determined uninsured treatments.
Moreover, a court acknowledged the illegality of the investigation method and process in administrative litigation over the suitability evaluation of a sanatorium, but denied the compensation liability and the restitution of unjust enrichment of the Health Insurance Review & Assessment Service and the National Health Insurance Corporation, as the evaluation agents had not committed the violation intentionally or negligently. We hope for more decisions closer to substantive truth, based on clear legal principles, with respect to the various issues that will arise in the future.

  • PDF