• Title/Summary/Keyword: LIMITATIONS


Identification of Sorption Characteristics of Cesium for the Improved Coal Mine Drainage Treated Sludge (CMDS) by the Addition of Na and S (석탄광산배수처리슬러지에 Na와 S를 첨가하여 개량한 흡착제의 세슘 흡착 특성 규명)

  • Soyoung Jeon;Danu Kim;Jeonghyeon Byeon;Daehyun Shin;Minjune Yang;Minhee Lee
    • Economic and Environmental Geology
    • /
    • v.56 no.2
    • /
    • pp.125-138
    • /
    • 2023
  • Most previous cesium (Cs) sorbents have limitations when treating large-scale water systems with low Cs concentrations and high ionic strength. In this study, a new Cs sorbent that is eco-friendly and highly efficient at removing Cs was developed by improving coal mine drainage treated sludge (hereafter 'CMDS') with the addition of Na and S. The sludge produced by treating drainage from an abandoned coal mine was used as the primary material for the new sorbent because of its high Ca and Fe contents. The CMDS was improved by adding Na and S during a heat treatment process (the developed sorbent is hereafter called 'Na-S-CMDS'). Laboratory experiments and sorption model studies were performed to evaluate the Cs sorption capacity and to understand the Cs sorption mechanisms of the Na-S-CMDS. The physicochemical and mineralogical properties of the Na-S-CMDS were also investigated through various analyses, such as XRF, XRD, SEM/EDS, and XPS. In batch sorption experiments, the Na-S-CMDS showed a fast sorption rate (reaching equilibrium within a few hours) and very high Cs removal efficiency (> 90.0%) even at low Cs concentrations in solution (< 0.5 mg/L). The experimental results fitted the Langmuir isotherm model well, suggesting mostly monolayer sorption of Cs on the Na-S-CMDS. Kinetic model studies showed that the Cs sorption tendency of the Na-S-CMDS followed the pseudo-second-order model curve, indicating that a more complicated chemical sorption process, rather than simple physical adsorption, could occur. XRF and XRD analyses of the Na-S-CMDS after Cs sorption showed that the Na content clearly decreased and that erdite (NaFeS2·2(H2O)) disappeared, suggesting that active ion exchange between Na+ and Cs+ occurred on the Na-S-CMDS during the Cs sorption process. XPS analysis revealed a strong interaction between Cs and S in the Na-S-CMDS, indicating that the high Cs sorption capacity resulted from binding between Cs and S (or S-complexes). These results support that the Na-S-CMDS has outstanding potential to remove Cs from radioactively contaminated water systems, such as seawater and groundwater, which have high ionic strength but low Cs concentrations.
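The two models named in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: it fits the Langmuir isotherm qe = qmax·KL·Ce/(1 + KL·Ce) via its common linearised form Ce/qe = Ce/qmax + 1/(KL·qmax), and all data values are hypothetical.

```python
def fit_langmuir_linearised(Ce, qe):
    """Fit the Langmuir isotherm via its linearised form
    Ce/qe = Ce/qmax + 1/(KL*qmax) using ordinary least squares."""
    x = list(Ce)
    y = [c / q for c, q in zip(Ce, qe)]        # transformed response Ce/qe
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    intercept = my - slope * mx
    qmax = 1 / slope                           # mg/g, monolayer capacity
    KL = slope / intercept                     # L/mg, Langmuir constant
    return qmax, KL

# Exact Langmuir data generated with qmax = 12 mg/g, KL = 8 L/mg
# (hypothetical values, not the paper's measurements).
Ce = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0]
qe = [12 * 8 * c / (1 + 8 * c) for c in Ce]
qmax, KL = fit_langmuir_linearised(Ce, qe)
print(round(qmax, 2), round(KL, 2))  # -> 12.0 8.0
```

With noisy laboratory data the same regression (or a nonlinear least-squares fit) yields qmax and KL, and a good fit supports the monolayer-sorption interpretation.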

Legal Issues on the Collection and Utilization of Infectious Disease Data in the Infectious Disease Crisis (감염병 위기 상황에서 감염병 데이터의 수집 및 활용에 관한 법적 쟁점 -미국 감염병 데이터 수집 및 활용 절차를 참조 사례로 하여-)

  • Kim, Jae Sun
    • The Korean Society of Law and Medicine
    • /
    • v.23 no.4
    • /
    • pp.29-74
    • /
    • 2022
  • Under the Disaster Management Act, the rapid and unexpected spread of COVID-19 in 2020 constituted a social disaster capable of damaging the people's "life, body, and property." Information collected through the inspection and reporting of infectious disease pathogens (Article 11), epidemiological investigations (Article 18), and epidemiological investigations for vaccination (Article 29), together with artificial intelligence technologies, was used as an important basis for decision-making in the infectious disease crisis, such as setting prevention policy, promoting vaccination, and understanding the current status of damage. In addition, medical policy decisions using infectious disease data contribute to quarantine policy decisions, information provision, drug development, and research technology development, and interest in the legal scope and limitations of using infectious disease data has increased worldwide. The use of infectious disease data can be classified by purpose: blocking the spread of infectious diseases, and their prevention, management, and treatment; in a crisis, such information will be used even more widely. In particular, as the serious stage under the Disaster Management Act continues, the processing of personally identifiable and sensitive information becomes an important issue. Information on "medical records, vaccination drugs, vaccination status, underlying diseases, health rankings, long-term care recognition grades, pregnancy, etc." needs to be interpreted. In the case of "prevention, management, and treatment of infectious diseases," it is difficult to clearly define the concept of medical practice. The types of actions are judged based on "legislative purposes, academic principles, expertise, and social norms," but the balancing of legal interests should rest on the need for data use in quarantine policies and on urgent judgment in public health crises.
Specifically, the speed and degree of transmission of an infectious disease in a crisis, whether the purpose can be achieved without processing sensitive information, whether the processing unfairly violates the interests of third parties or data subjects, and the effectiveness of quarantine policies introduced through such processing can serve as major evaluation factors. Meanwhile, the collection, provision, and use of infectious disease data for research purposes proceed through pseudonymization under the Personal Information Protection Act, consent under the Bioethics Act, and deliberation by the Institutional Bioethics Committee and the data provision deliberation committee. Research use is therefore recognized as long as procedural validity is secured through pseudonymization, review by the data review committee, the consent of the data subject, and institutional bioethics review. However, the burden on research managers should be reduced by clarifying the pseudonymization and anonymization procedures; the introduction and consent procedures of the comprehensive consent system and the opt-out system should be clearly prepared; and procedures for handling re-identification risks arising from technological development, and for securing data, should be clearly defined.

Structural features and Diffusion Patterns of Gartner Hype Cycle for Artificial Intelligence using Social Network analysis (인공지능 기술에 관한 가트너 하이프사이클의 네트워크 집단구조 특성 및 확산패턴에 관한 연구)

  • Shin, Sunah;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.107-129
    • /
    • 2022
  • It is important to preempt new technology because technology competition is getting much tougher. Stakeholders continuously conduct exploration activities in order to preoccupy new technologies at the right time, and Gartner's Hype Cycle has significant implications for them. The Hype Cycle is an expectation graph for new technologies that combines the technology life cycle (S-curve) with the hype level. Stakeholders such as R&D investors, CTOs (Chief Technology Officers), and technical personnel are very interested in Gartner's Hype Cycle, because high expectations for a new technology can bring opportunities to maintain investment by securing the legitimacy of R&D spending. However, contrary to the industry's high interest, preceding research has faced limitations in its empirical methods and source data (news, academic papers, search traffic, patents, etc.). This study focused on two research questions. The first was: 'Is there a difference in the characteristics of the network structure at each stage of the hype cycle?' To answer it, the structural characteristics of each stage were examined through component cohesion size. The second was: 'Is there a pattern of diffusion at each stage of the hype cycle?' This question was addressed through the centralization index and network density. The centralization index is a variance-like concept: a higher centralization index means that a small number of nodes are central in the network, which implies a star network structure. A star network is a centralized structure and shows better diffusion performance than a decentralized (circle) network, because the nodes at the center of information transfer can judge useful information and deliver it to other nodes fastest.
We therefore computed the out-degree and in-degree centralization indices for each stage. For this purpose, we examined the structural features of the community and the expectation diffusion patterns using social network service (SNS) data for the 'Gartner Hype Cycle for Artificial Intelligence, 2021'. Twitter data for 30 of the technologies listed there (excluding four) were analyzed using R (version 4.1.1) and Cyram NetMiner. From October 31, 2021 to November 9, 2021, 6,766 tweets were collected through the Twitter API, and the relationships between users' tweets (source) and retweets (target) were converted into 4,124 edge lists for analysis. As a result, we identified the structural features and diffusion patterns by analyzing component cohesion size, degree centralization, and density. We found that the number of components in each stage's group increased over time while density decreased. Also, 'Innovation Trigger', the group interested in new technologies as early adopters in innovation diffusion theory, had a high out-degree centralization index, whereas the other groups had higher in-degree than out-degree centralization. It can be inferred that the 'Innovation Trigger' group has the biggest influence and that diffusion gradually slows in the subsequent groups. Unlike preceding research, this study conducted network analysis using social network service data, which is significant in that it provides an idea for expanding the methods available for analyzing Gartner's hype cycle in the future. In addition, applying innovation diffusion theory to the stages of the Gartner hype cycle for artificial intelligence can be evaluated positively, because the hype cycle's theoretical weakness has been repeatedly discussed. 
This study is also expected to provide stakeholders with a new perspective on technology investment decision-making.
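The two diffusion measures the study relies on are simple to compute from an edge list. The sketch below is a hedged illustration (the function and data are hypothetical, not taken from the paper): it computes Freeman-style in/out-degree centralization, normalised by the star-network maximum (n-1)^2, together with directed density.

```python
def network_stats(nodes, edges):
    """Freeman-style degree centralization and density for a directed
    retweet network given (source, target) edges (hypothetical helper)."""
    out_deg = {v: 0 for v in nodes}
    in_deg = {v: 0 for v in nodes}
    for s, t in edges:
        out_deg[s] += 1
        in_deg[t] += 1
    n = len(nodes)
    density = len(edges) / (n * (n - 1))   # directed graph, no self-loops
    denom = (n - 1) ** 2                   # attained by a perfect star
    c_out = sum(max(out_deg.values()) - d for d in out_deg.values()) / denom
    c_in = sum(max(in_deg.values()) - d for d in in_deg.values()) / denom
    return c_out, c_in, density

# Hypothetical star-shaped network: every edge points at one hub node.
nodes = ["hub", "a", "b", "c", "d"]
edges = [("a", "hub"), ("b", "hub"), ("c", "hub"), ("d", "hub")]
c_out, c_in, dens = network_stats(nodes, edges)
print(c_in)  # 1.0: a perfect star is maximally centralized on in-degree
```

A stage whose retweet network approaches this star shape diffuses expectations fastest, which is the intuition behind comparing centralization across hype-cycle stages.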

Basic Research on the Possibility of Developing a Landscape Perceptual Response Prediction Model Using Artificial Intelligence - Focusing on Machine Learning Techniques - (인공지능을 활용한 경관 지각반응 예측모델 개발 가능성 기초연구 - 머신러닝 기법을 중심으로 -)

  • Kim, Jin-Pyo;Suh, Joo-Hwan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.51 no.3
    • /
    • pp.70-82
    • /
    • 2023
  • The recent surge of IT and data acquisition is shifting the paradigm in all aspects of life, and these advances are also affecting academic fields, where research topics and methods are being improved through academic exchange and connections. In particular, data-based research methods are employed in various academic fields, including landscape architecture, where continuous research is needed. Therefore, this study investigates the possibility of developing a landscape preference evaluation and prediction model using machine learning, a branch of artificial intelligence. To achieve this goal, machine learning techniques were applied to the landscaping field to build a landscape preference evaluation and prediction model and to verify its simulation accuracy. Wind power facility landscape images, recently attracting attention in connection with renewable energy, were selected as the research objects. Images of wind power facility landscapes were collected using web crawling techniques, and an analysis dataset was built. Orange version 3.33, a program from the University of Ljubljana, was used for the machine learning analysis to derive a prediction model with excellent performance. A model that integrates the evaluation criteria and a separate model structure for each criterion were used to generate models with the kNN, SVM, Random Forest, Logistic Regression, and Neural Network algorithms suitable for machine learning classification. The generated models were then evaluated to derive the most suitable prediction model. The derived model separately evaluates three criteria, namely classification by landscape type, classification by distance between the landscape and the target, and classification by preference, and then synthesizes the results into a prediction.
The resulting model achieved high accuracy: 0.986 for classification by landscape type, 0.973 for classification by distance, and 0.952 for classification by preference, and verification through evaluation of the prediction results exceeded the model's required performance values. As an experimental attempt to develop a prediction model using machine learning in landscape-related research, this study confirmed that a high-performance prediction model can be created by building a dataset through the collection and refinement of image data and then utilizing it in landscape-related research fields. Based on the results, implications, and limitations of this study, it appears possible to develop various types of landscape prediction models, including for wind power facility, natural, and cultural landscapes. Machine learning techniques can become more useful and valuable in landscape architecture by exploring and applying research methods appropriate to the topic, for example reducing data classification time with a model that classifies images by landscape type, or analyzing the importance of landscape planning factors through machine learning analysis of landscape prediction factors.
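As an illustration of the simplest algorithm in the study's comparison, the sketch below implements a minimal k-nearest-neighbours classifier; the feature vectors and labels are hypothetical stand-ins for the image features a tool like Orange would extract.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Minimal k-nearest-neighbours classifier: majority vote among the
    k training points closest to the query in Euclidean distance."""
    nearest = sorted(train, key=lambda xy: math.dist(xy[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D image-feature vectors labelled by landscape type.
train = [((0.1, 0.2), "wind_farm"), ((0.2, 0.1), "wind_farm"),
         ((0.9, 0.8), "natural"), ((0.8, 0.9), "natural")]
print(knn_predict(train, (0.15, 0.15)))  # -> wind_farm
```

The study's other algorithms (SVM, Random Forest, etc.) follow the same fit-then-classify pattern on the same dataset, which is what makes their accuracies directly comparable.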

The Process of Establishing a Japanese-style Garden and Embodying Identity in Modern Japan (일본 근대 시기 일본풍 정원의 확립과정과 정체성 구현)

  • An, Joon-Young;Jun, Da-Seul
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.41 no.3
    • /
    • pp.59-66
    • /
    • 2023
  • This study examines the process by which the Japanese-style garden was established in the modern period, through the perspectives of garden designers, spatial composition, spatial components, and the materials used in their works, and aims to use the findings as data for embodying the identity of Korean gardens. The results are as follows. First, in incorporating elements associated with Koreanness into modern garden culture, there are differences in location, presence, and subjectivity compared to Japan. This reflects Japan's relatively seamless cultural continuity, in contrast to Korea's cultural disconnection during the modern period. Second, before the modern period, Japan's garden culture spread and continued to develop throughout the country without significant interruption. During the modern period, however, the Meiji government promoted the policy of 'civilization and enlightenment (Bunmei-kaika, 文明開化)' and introduced advanced European and American civilization, leading to the popularity of Western architectural techniques. The rapid introduction of Western culture caused traditional Japanese culture to be overshadowed. In 1879, the British architect Josiah Condor guided Japanese architects, introducing the atelier system and traditional Japanese garden design. The garden style of Ogawa Jihei VII, a garden designer in Kyoto during the Meiji and Taisho periods, was embraced by influential political and business leaders who sought to preserve Japan's traditional culture, and a garden protection system was established through various laws and regulations. Third, a comprehensive analysis of Japanese modern gardens, covering garden designers, components, materials, elements, and the Japanese style, showed that Yamagata Aritomo, Ogawa Jihei VII, and Mirei Shigemori were representative garden designers who preserved the Japanese style in their gardens.
They introduced features such as the Daejicheon (大池泉) garden, which involves a large pond on spacious ground, as well as the naturalistic borrowed-scenery method and flowing water. Key components of Japanese-style gardens include the use of turf, winding garden paths, and variation in plant species. Fourth, an analysis of Japanese-style elements at the target sites revealed that flowing water appeared most often among the individual elements of spatial composition, at 47.06%; Daejicheon and naturalistic borrowed scenery also appeared. Turf and winding paths were used at 65.88% and 78.82% of sites, respectively, while the alteration of tree species, at 28.24%, was relatively less common. Fifth, it is essential to discover more gardens from the modern period and to meticulously document their creators or owners, spatial composition, spatial components, and materials; this information will be invaluable in uncovering the identity of our own gardens. This study analyzed the process of establishing the Japanese style during Japan's modern period using examples of garden designers and gardens. While it has limitations, such as the absence of more in-depth research, further case studies, and specific techniques, it sets the stage for future exploration.

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, for 1,487 trading days; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecast to increase, buy volatility today; if it is forecast to decrease, sell volatility today; if the forecast direction does not change, hold the existing buy or sell position. The IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values themselves cannot be traded, but the simulation results are still meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based models range from 51.8% to 59.7%. The MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for the SVR-based version; the MLE-based asymmetric E-GARCH shows -72% versus +245.6%; and the MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3%. The linear kernel shows higher trading returns than the radial kernel. The best SVR-based IVTS performance is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage.
The IVTS trading performance is also unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models, and further studies on other machine-learning-based GARCH models can give better information to stock market investors.
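The GARCH(1,1) variance recursion and the IVTS entry rule described above can be sketched directly; the snippet is a hypothetical illustration (toy returns, arbitrary parameters), not the paper's estimation code.

```python
def garch11_variance(returns, omega, alpha, beta):
    """GARCH(1,1) conditional variance recursion:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    started at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = [omega / (1 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r ** 2 + beta * sigma2[-1])
    return sigma2

def ivts_signal(vol_forecast, vol_today):
    """Entry rule of the IVTS described in the abstract: buy volatility
    if it is forecast to rise tomorrow, sell if forecast to fall, else hold."""
    if vol_forecast > vol_today:
        return "buy"
    if vol_forecast < vol_today:
        return "sell"
    return "hold"

# Toy daily returns (hypothetical, not KOSPI 200 data).
rets = [0.01, -0.02, 0.015, -0.03, 0.005]
s2 = garch11_variance(rets, omega=1e-6, alpha=0.08, beta=0.90)
print([round(v, 7) for v in s2])
```

In the paper's setup, MLE or SVR supplies the (omega, alpha, beta) estimates; the recursion and trading rule are the same either way.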

Perceptional Change of a New Product, DMB Phone

  • Kim, Ju-Young;Ko, Deok-Im
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.3
    • /
    • pp.59-88
    • /
    • 2008
  • Digital convergence means integration across industries, technologies, and contents; in marketing, it usually accompanies the creation of new types of products and services built on digital technology, as digitalization progresses in electro-communication industries including telecommunications, home appliances, and computers. Digital convergence appears not only in devices such as PCs, AV appliances, and cellular phones, but also in the contents, networks, and services required for the production, modification, distribution, and reproduction of information. Convergence in contents started around 1990; convergence in networks and services began as broadcasting and telecommunications integrated, and DMB (digital multimedia broadcasting), launched in May 2005, is the symbolic icon of this trend. There are both positive and negative expectations about DMB. These opposite expectations exist because DMB arose not from customer needs but from technology development, so customers may have a hard time interpreting its real meaning. Timing is critical for a high-tech product like DMB, because another product with the same function based on a different technology can replace it within a short period. If DMB is not positioned quickly in customers' minds, products like WiBro, IPTV, or HSDPA could replace it before it even spreads. A positioning strategy is therefore critical for the success of DMB. To build a correct positioning strategy, one needs to understand how consumers interpret DMB and how that interpretation can be changed via communication strategy. In this study, we investigate how consumers perceive a new product like DMB and how advertising strategy changes that perception. More specifically, the paper segments consumers into subgroups based on their DMB perceptions and compares their characteristics in order to understand how they perceive DMB.
We then expose the segments to different printed ads whose messages guide consumers to think of DMB in a specific way, either as a cellular phone or as a personal TV. Research Question 1: Segment consumers according to their perceptions of DMB and compare the segments' characteristics. Research Question 2: Compare perceptions of DMB after an ad that induces categorization of DMB in a given direction for each segment. If a firm can understand and predict the direction in which consumers will perceive a new product, it can select target customers easily. We segment consumers according to their perceptions and analyze their characteristics to find variables that can influence perceptions, such as prior experience, usage, or habit; marketing practitioners can then use these variables to identify target customers and predict their perceptions. Knowing how customer perception is changed by an ad message allows a communication strategy to be constructed properly; in particular, information from segmented customers helps develop an efficient advertising strategy for a segment with prior perceptions. The research framework consists of two measurements and one treatment: O1 X O2. The first observation collects information about consumers' perceptions and characteristics. Based on it, the paper segments consumers into two groups: one perceives DMB as similar to a cellular phone, the other as similar to a TV. We compare the characteristics of the two segments to find out why they perceive DMB differently. Next, we expose subjects to two kinds of ads: one describes DMB as a cellular phone and the other as a personal TV. When the ads are shown, we do not know each subject's prior perception of DMB, that is, whether the subject belongs to the 'similar-to-cellular-phone' or 'similar-to-TV' segment; however, we analyze the ads' effects separately for each segment. In the research design, the final observation investigates the ad effect.
Perception before the ad is compared with perception after the ad, for each segment and each ad. For the segment that perceives DMB as similar to TV, an ad describing DMB as a cellular phone could change the prior perception, while an ad describing DMB as a personal TV could reinforce it. Subjects were selected from undergraduate students because they have basic knowledge of most digital equipment and an open attitude toward new products and media; the total number of subjects is 240. To measure perception of DMB, we use an indirect measurement: comparison with other similar digital products. A pre-survey of students led to the final selection of PDA, car TV, cellular phone, MP3 player, TV, and PSP. A quasi-experiment was conducted in several classes with the instructors' permission. After a brief introduction, prior knowledge, awareness, and usage of DMB and the other digital devices were asked, and their similarities and perceived characteristics were measured. Two kinds of manipulated color-printed ads were then distributed, and the similarities and perceived characteristics of DMB were re-measured. Finally, purchase intention, ad attitude, manipulation checks, and demographic variables were collected, and subjects were given a small gift for participating. The stimuli are color-printed advertisements of A4 size, made after several pre-tests with advertising professionals and students. As a result, consumers are segmented into two subgroups based on their perceptions of DMB, using the similarity between DMB and a cellular phone and the similarity between DMB and a TV. If a subject's first measure is less than the second, the subject is classified into segment A, characterized as perceiving DMB like a TV; otherwise, the subject is classified into segment B, perceiving DMB like a cellular phone.
Discriminant analysis on these groups, using their usage and attitude characteristics, shows that segment A knows much about DMB and uses many digital devices, while segment B, which thinks of DMB as a cellular phone, does not know DMB well and is unfamiliar with other digital devices. Thus, consumers with higher knowledge perceive DMB as similar to TV, because the launch advertising for DMB led consumers to think of DMB as TV, whereas consumers with less interest in digital products do not know the DMB ads well and therefore think of DMB as a cellular phone. To investigate perceptions of DMB and the other digital devices, we apply Proxscal, a multidimensional scaling technique in the SPSS statistical package. In the first step, subjects are presented with the 21 pairs formed by the 7 digital devices and make similarity judgments on a 7-point scale; for each segment, the judgments are averaged into a similarity matrix. Second, Proxscal analyses of segments A and B are performed. Third, similarity judgments between DMB and the other devices are obtained after ad exposure. Lastly, the similarity judgments of groups A-1, A-2, B-1, and B-2 are labeled 'after DMB', added to the matrices from the first stage, and Proxscal analysis is applied to check the positional difference between DMB and 'after DMB'. The results show that the map of segment A, which perceives DMB as similar to TV, positions DMB closer to TV than to the cellular phone, as expected, and the map of segment B positions DMB closer to the cellular phone than to TV, also as expected; the stress values and R-squares are acceptable. After the manipulated advertising stimuli, DMB perception bends toward the cellular phone when the cellular-phone-like ad is shown, and DMB moves toward car TV, a more personalized device, when the TV-like ad is shown; this holds consistently for both segments A and B.
Furthermore, applying correspondence analysis to the same data yields almost the same results. The paper answers its two main research questions: first, perception of a new product is formed mainly from prior experience; second, advertising is effective in changing and reinforcing perception. We also extend perception change to purchase intention: purchase intention is high when the ad reinforces the original perception, and the ad presenting DMB as TV produces the lowest intention. This paper has limitations and issues to be pursued in the near future. Methodologically, the current approach cannot provide a statistical test of the perceptual change, since classical MDS models such as Proxscal and correspondence analysis are not probability models; a new probabilistic MDS model for testing hypotheses about configurations needs to be developed. Next, the advertising messages should be developed more rigorously from theoretical and managerial perspectives, and the experimental procedure could be improved for more realistic data collection, for example through web-based experiments, real product stimuli, multimedia presentation, or displaying products together in a simulated shop. In addition, demand and social desirability threats to internal validity could influence the results; to handle these threats, results of the model-intended advertising could be compared with other "pseudo" advertising. One could also try various levels of innovativeness to check whether they produce different results (cf. Moon 2006). Finally, creating a hypothetical product that is genuinely innovative and new would help establish a blank impression state and allow studying how impressions form in a more rigorous way.
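Classical (Torgerson) MDS, a simpler relative of the Proxscal procedure used above, shows how a dissimilarity matrix becomes a perceptual map; the 3x3 matrix here is a hypothetical stand-in for the study's 7x7 device matrix.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Torgerson's classical MDS: embed a dissimilarity matrix D so that
    Euclidean distances between the returned points approximate D."""
    D = np.asarray(D, dtype=float)
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                   # eigenvalues ascending
    idx = np.argsort(w)[::-1][:dim]            # keep the top-dim axes
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# Hypothetical averaged dissimilarities among three devices; indices
# 0 and 1 are mutually similar, both far from index 2.
D = [[0.0, 1.0, 4.0],
     [1.0, 0.0, 4.0],
     [4.0, 4.0, 0.0]]
X = classical_mds(D)
# On the recovered map, devices 0 and 1 land close together, far from 2.
print(np.linalg.norm(X[0] - X[1]) < np.linalg.norm(X[0] - X[2]))  # -> True
```

Comparing a product's coordinates before and after ad exposure on such a map is exactly the positional-shift check the study performs, though Proxscal additionally optimizes a stress criterion.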


A Study of Equipment Accuracy and Test Precision in Dual Energy X-ray Absorptiometry (골밀도검사의 올바른 질 관리에 따른 임상적용과 해석 -이중 에너지 방사선 흡수법을 중심으로-)

  • Dong, Kyung-Rae; Kim, Ho-Sung; Jung, Woon-Kwan
    • Journal of radiological science and technology, v.31 no.1, pp.17-23, 2008
  • Purpose : Because bone density results depend on the inspection environment, the equipment, and the precision and accuracy of the tester, quality management must be carried out systematically. Equipment failures caused by overload from aging machines and a growing number of patients occurred frequently, and the resulting replacement of equipment and additional purchases of new bone density equipment created a compatibility problem in tracking patients. This study examines whether clinical changes in a patient's bone density can be reflected accurately and precisely when replacement and additional equipment are used interchangeably with the existing equipment. Materials and methods : Two GE Lunar Prodigy Advance units (P1 and P2) and the HOLOGIC spine phantom (HSP) were used to measure equipment precision. Each device scanned the phantom 20 times to acquire precision data (Group 1). Tester precision was measured by scanning the same patient twice, 15 subjects on each unit, among 120 women (average age 48.78, 20-60 years old) (Group 2). In addition, tester precision and cross-calibration data were obtained by scanning the HSP 20 times on each unit, based on the data from daily morning quality control with the ASP phantom (Group 3). Finally, each of the 120 women (average age 48.78, 20-60 years old) was scanned once on each unit alternately to obtain tester precision and cross-calibration data (Group 4). Results : According to the daily QC data, the equipment was stable at 0.996 g/cm² with a change value (%CV) of 0.08. In Group 1, the mean±SD and %CV were P1: 1.064±0.002 g/cm², %CV=0.190; P2: 1.061±0.003 g/cm², %CV=0.192. 
In Group 2, the mean±SD and %CV were P1: 1.187±0.002 g/cm², %CV=0.164; P2: 1.198±0.002 g/cm², %CV=0.163. In Group 3, the average error±2SD and %CV were P1 (spine: 0.001±0.03 g/cm², %CV=0.94; femur: 0.001±0.019 g/cm², %CV=0.96) and P2 (spine: 0.002±0.018 g/cm², %CV=0.55; femur: 0.001±0.013 g/cm², %CV=0.48). In Group 4, the average error±2SD, %CV, and r value were spine: 0.006±0.024 g/cm², %CV=0.86, r=0.995; femur: 0±0.014 g/cm², %CV=0.54, r=0.998. Conclusion : Both the Lunar ASP %CV and the HOLOGIC spine phantom fall within the ±2% error range defined by the ISCD. The BMD measurements kept relatively constant values, showing excellent repeatability. The phantom is homogeneous, however, and has limitations in reflecting clinical factors such as variations in patients' body weight or body fat. Even so, quality control using the phantom is useful for detecting mis-calibration of the equipment. Comparing Groups 3 and 4 in Bland-Altman plots, the values measured twice on one unit and those cross-measured on the two units all fell within 2SD, and r values of 0.99 or higher in linear regression analysis indicated high precision and correlation. Therefore, the two interchangeable units did not affect patient follow-up. Regular testing of the equipment and of tester capability, followed by appropriate calibration, must be carried out in order to produce reliable BMD values.
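The precision figures above follow the standard definition %CV = SD/mean × 100, and the cross-calibration check in Groups 3 and 4 is a Bland-Altman comparison with ±2SD limits of agreement. A minimal sketch with hypothetical repeated phantom BMD values (the scan numbers below are invented for illustration, not the study's data):

```python
import numpy as np

# Hypothetical repeated phantom BMD scans (g/cm^2) on two units, P1 and P2.
p1 = np.array([1.064, 1.062, 1.066, 1.063, 1.065, 1.061, 1.064, 1.066])
p2 = np.array([1.061, 1.060, 1.063, 1.059, 1.062, 1.058, 1.061, 1.063])

# Precision of each unit as a coefficient of variation (%CV).
cv1 = 100 * p1.std(ddof=1) / p1.mean()
cv2 = 100 * p2.std(ddof=1) / p2.mean()

# Bland-Altman agreement: mean difference (bias) and +/-2SD limits.
diff = p1 - p2
bias = diff.mean()
loa_low = bias - 2 * diff.std(ddof=1)
loa_high = bias + 2 * diff.std(ddof=1)
frac_inside = np.mean((diff >= loa_low) & (diff <= loa_high))
```

A %CV under the ISCD ±2% tolerance and differences falling within the limits of agreement are the criteria the abstract applies when judging the two units compatible.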


A study of the plan dosimetic evaluation on the rectal cancer treatment (직장암 치료 시 치료계획에 따른 선량평가 연구)

  • Jeong, Hyun Hak; An, Beom Seok; Kim, Dae Il; Lee, Yang Hoon; Lee, Je hee
    • The Journal of Korean Society for Radiation Therapy, v.28 no.2, pp.171-178, 2016
  • Purpose : To minimize the dose to the femoral heads in radiation therapy for rectal cancer, we compare and evaluate the usefulness of 3-field 3D conformal radiation therapy (3fCRT), the conventional treatment method, 5-field 3D conformal radiation therapy (5fCRT), and volumetric modulated arc therapy (VMAT). Materials and Methods : Ten cases of rectal cancer treated on a 21EX linear accelerator were enrolled. The cases were planned with Eclipse (Ver. 10.0.42, Varian, USA), PRO3 (Progressive Resolution Optimizer 10.0.28), and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The 3fCRT and 5fCRT plans used gantry angles of 0°, 270°, 90° and 0°, 95°, 45°, 315°, 265°, respectively; the VMAT plans consisted of a single coplanar 15 MV 360° arc. The prescription delivered 54 Gy to the rectum in 30 fractions. To minimize the dose differences that appear randomly during optimization, each VMAT plan was optimized and calculated twice and normalized to the target V100%=95%. The evaluation indexes were the dose to both femoral heads and acetabular fossae, total MU, and the homogeneity index (H.I.) and conformity index (C.I.) of the PTV. All VMAT plans were verified by a gamma test with portal dosimetry using the EPID. Results : The mean dose to the right femoral head was 53.08 Gy, 50.27 Gy, and 30.92 Gy for the 3fCRT, 5fCRT, and VMAT plans, respectively; the left femoral head averaged 53.68 Gy, 51.01 Gy, and 29.23 Gy in the same order. The mean dose to the right acetabular fossa was 54.86 Gy, 52.40 Gy, and 30.37 Gy, respectively. The maximum dose to both femoral heads and acetabular fossae decreased in the order 3fCRT, 5fCRT, VMAT. The C.I. 
was lowest for the VMAT plan, averaging 1.64, 1.48, and 0.99 for the 3fCRT, 5fCRT, and VMAT plans, respectively. There was no significant difference in the H.I. of the PTV among the three plans. The VMAT plans used 124.4 MU and 299 MU more than the 3fCRT and 5fCRT plans, respectively. The IMRT verification gamma test for the VMAT plans passed at over 90.0% with a 2 mm/2% criterion. Conclusion : In rectal cancer treatment, the VMAT plan was advantageous in most of the evaluation indexes compared with the 3D plans and greatly reduced the dose to the femoral heads. However, practical limitations may make a VMAT plan difficult to select. 5fCRT reduces the femoral head dose compared with the conventional 3fCRT without additional problems. Therefore, selecting the treatment plan in accordance with each hospital's situation would improve radiation therapy efficiency and thereby extend not only survival time but also quality of life in general.
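The PTV indexes compared above can be computed directly from a dose distribution. The abstract does not state which formulas were used, so this sketch assumes the common RTOG-style conformity index and the ICRU Report 83 homogeneity index, on an invented voxel dose array and PTV mask:

```python
import numpy as np

rx = 54.0                                        # prescription dose (Gy)
rng = np.random.default_rng(7)
dose = rng.normal(54.0, 1.2, size=20000)         # hypothetical voxel doses
ptv = rng.random(dose.size) < 0.4                # hypothetical PTV mask
ptv_dose = dose[ptv]

# Conformity index (RTOG style): prescription isodose volume / PTV volume.
ci = np.count_nonzero(dose >= rx) / np.count_nonzero(ptv)

# Homogeneity index (ICRU 83 style): (D2% - D98%) / D50%.
d2 = np.percentile(ptv_dose, 98)                 # near-maximum PTV dose
d98 = np.percentile(ptv_dose, 2)                 # near-minimum PTV dose
d50 = np.percentile(ptv_dose, 50)                # median PTV dose
hi = (d2 - d98) / d50
```

A C.I. near 1 indicates that the prescription isodose volume closely matches the PTV, which is why the VMAT average of 0.99 reads as the most conformal plan; an H.I. near 0 indicates a homogeneous PTV dose.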


Study on the Effects of Shop Choice Properties on Brand Attitudes: Focus on Six Major Coffee Shop Brands (점포선택속성이 브랜드 태도에 미치는 영향에 관한 연구: 6개 메이저 브랜드 커피전문점을 중심으로)

  • Yi, Weon-Ho; Kim, Su-Ok; Lee, Sang-Youn; Youn, Myoung-Kil
    • Journal of Distribution Science, v.10 no.3, pp.51-61, 2012
  • This study seeks to understand how the choice of a coffee shop relates to customer loyalty and which shop characteristics influence this choice. It considers large coffee shop brands whose market scale has gradually grown. Users' choice of shop is determined by price, employee service, shop location, and shop atmosphere, and the study investigated the effects of these four properties on brand attitudes toward coffee shops. The effects varied depending on users' characteristics; the properties with the largest influence were shop atmosphere and shop location. The purpose of the study was therefore to examine the properties that help coffee shops gain loyal customers and the choice properties that satisfy consumers' desires. The study examined consumers' perceptions of shop properties when selecting a coffee shop, and the perceptual differences across coffee brands, in order to investigate customers' desires and needs and to suggest ways to supply products and service. The research methodology consisted of two parts, normative and empirical; in this study, a statistical analysis of the empirical research was carried out. The study theoretically confirmed the shop choice properties by reviewing previous studies and performed an empirical analysis, including cross tabulation based on secondary material. The findings were as follows. First, coffee shop choice properties varied by gender. Price advantage influenced the choice of both men and women; men preferred nearby coffee shops where they could buy coffee more easily and conveniently than women did. Shop atmosphere had the greatest influence on both men and women, and it was also the most important property in the analysis by age. In the past, customers selected coffee shops solely to drink coffee; now they select a shop according to its interior, menu variety, and atmosphere, owing to the improved quality and service of coffee shop brands. Second, prices did not differ much across brands because the coffee shops were similarly priced. Service was considered more important, and service quality had risen, so price, employee service, and the other properties did not greatly influence shop choice. However, those working in the farming, forestry, fishery, and livestock industries were more concerned with price than with shop atmosphere, and college and graduate students were also affected by low prices. Third, shop choice properties varied depending on income: shop location and shop atmosphere had the greater influence. Customers with incomes below 2 million won selected low-price coffee shops more than those earning 6 million won or more; otherwise, price advantage showed no relation to income, and the higher income group was not affected by employee service. Fourth, shop choice properties varied depending on place. Customers in Ulsan were the most affected by price, and those in Busan the least. Shop location had the greatest influence among all the properties, with Gwangju showing the least influence among the surveyed cities. The alternative use of space in a coffee shop was considered important in all the cities. Customers in Ulsan were not affected by employee service and selected coffee shops according to quality and preferred atmosphere. Lastly, the price factor rated slightly higher than the other factors when customers frequently selected brands according to shop properties. Customers in Gwangju responded to discounts more than those in other cities did and gave less priority to the quality and taste of coffee. Brand preference varied depending on coffee shop location: customers in Busan selected brands according to location, and those in Ulsan were not influenced by employee kindness or specialty. The implications of this study are that franchise coffee shop businesses should focus on customers rather than on aggressive marketing strategies that simply increase the number of shops: they should create a good atmosphere and set up shops in places with good customer access. This study has some limitations. First, the respondents were concentrated in metropolitan areas: the secondary data contained many more respondents in Seoul than in Gyeonggi-do, and many more in Gyeonggi-do than in the six major cities, so the regional sample was not representative of the population. Second, respondents' ratios were used as the measurement scale to test perceptions of shop choice properties and brand preference, which made it difficult to examine the relation between these properties and brand preference and to understand differences between groups. Future research should therefore address these shortcomings: if coffee shops expand into local areas, a questionnaire survey of consumers in small local cities should be conducted to collect primary material. In particular, the survey variables should be measured on Likert scales covering perceptions of shop choice properties, brand preference, and repurchase, so that correlation analysis, multiple regression, and ANOVA can be used to investigate consumers' attitudes and behavior in detail.
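The gender-by-property cross tabulation described above amounts to a contingency table whose independence can be checked with a Pearson chi-square test. A minimal sketch with invented counts (the study's actual frequencies are not given in the abstract), computing the test statistic from first principles:

```python
import numpy as np

# Hypothetical cross tabulation: rows = gender (M, F), columns = most
# important shop-choice property (price, service, location, atmosphere).
observed = np.array([[30, 12, 25, 33],
                     [28, 15, 18, 39]], dtype=float)

# Pearson chi-square test of independence.
row = observed.sum(axis=1, keepdims=True)        # row totals
col = observed.sum(axis=0, keepdims=True)        # column totals
expected = row @ col / observed.sum()            # counts under independence
chi2 = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
print(dof)                                       # prints 3
```

Replacing the ratio-based comparisons with Likert-scale items, as the abstract recommends, would additionally permit correlation analysis, multiple regression, and ANOVA on the same responses.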
