Oil Fluorescence Spectrum Analysis for the Design of Fluorimeter (형광 광도계 설계인자 도출을 위한 기름의 형광 스펙트럼 분석)

  • Oh, Sangwoo;Seo, Dongmin;Ann, Kiyoung;Kim, Jaewoo;Lee, Moonjin;Chun, Taebyung;Seo, Sungkyu
    • Journal of the Korean Society for Marine Environment & Energy / v.18 no.4 / pp.304-309 / 2015
  • To evaluate the degree of contamination caused by an oil spill at sea, in-situ sensors based on scientific measurement principles are needed at the site. Sensors based on fluorescence detection can provide useful data such as the oil concentration. However, such sensors are commonly composed of an ultraviolet (UV) light source such as a UV mercury lamp, multiple excitation/emission filters, and an optical sensor, typically a photomultiplier tube (PMT). As a result, the overall sensing platform is too large to handle conveniently at an oil spill site, and it is also extremely expensive. To overcome these drawbacks, we designed a compact and cost-effective fluorimeter for oil spill detection. Before the detailed design stage, we measured the excitation and emission spectra of five different crude oils and three different processed oils with a fluorescence spectrometer. We compared the resulting spectra and identified the excitation and emission regions common to all samples. In every case, the average gap between the maximum excitation and emission peak wavelengths was close to 50 nm. When the excitation wavelength was fixed at 365 nm or 405 nm, the emission intensity was weaker than at 280 nm or 325 nm, so if a light source at 365 nm or 405 nm is adopted in the fluorimeter design, the optical sensor must be sensitive enough to detect this weaker emission. From these experimental results, we derive the key design factors for selecting effective wavelengths for the light source, photodetector, and filters.
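As a rough illustration of the peak analysis described in this abstract, the sketch below locates the excitation and emission peak wavelengths of a spectrum and reports the gap between them. The spectra are synthetic placeholders generated in the script, not measurements from the study.

```python
import numpy as np

def peak_wavelength(wavelengths_nm, intensities):
    """Return the wavelength at which the measured intensity is largest."""
    return wavelengths_nm[int(np.argmax(intensities))]

# Synthetic example spectra (wavelength in nm, arbitrary intensity units);
# real values would come from a fluorescence spectrometer.
ex_wl = np.arange(250, 451, 5)
ex_int = np.exp(-((ex_wl - 325) / 30.0) ** 2)   # assumed excitation band
em_wl = np.arange(300, 551, 5)
em_int = np.exp(-((em_wl - 375) / 40.0) ** 2)   # assumed emission band

ex_peak = peak_wavelength(ex_wl, ex_int)
em_peak = peak_wavelength(em_wl, em_int)
print(f"excitation peak: {ex_peak} nm, emission peak: {em_peak} nm, "
      f"gap: {em_peak - ex_peak} nm")   # the abstract reports a gap near 50 nm
```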

Keyword Network Analysis for Technology Forecasting (기술예측을 위한 특허 키워드 네트워크 분석)

  • Choi, Jin-Ho;Kim, Hee-Su;Im, Nam-Gyu
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.227-240 / 2011
  • New concepts and ideas often result from extensive recombination of existing concepts or ideas. Both researchers and developers build on the concepts and ideas in published papers or registered patents to develop new theories and technologies, which in turn serve as a basis for further development. As the importance of patents increases, so does that of patent analysis. Patent analysis is largely divided into network-based and keyword-based approaches. The former lacks the ability to analyze the content of a technology in detail, while the latter cannot identify the relationships between technologies. To overcome the limitations of both, this study blends the two methods and proposes a keyword-network-based analysis methodology. We collected the significant technology information in each patent related to Light Emitting Diodes (LED) through text mining, built a keyword network, and performed a community network analysis on the collected data. The results are as follows. First, the patent keyword network showed very low density and an exceptionally high clustering coefficient. Density is obtained by dividing the number of ties in a network by the number of all possible ties; it ranges between 0 and 1, with higher values indicating denser networks and lower values sparser ones. In real-world networks, density varies with network size, and increasing the size of a network generally decreases its density. The clustering coefficient is a network-level measure of the tendency of nodes to cluster into densely interconnected modules; it reflects the small-world property, in which a network can be highly clustered while still having a small average distance between nodes despite its large number of nodes. The low density of the patent keyword network therefore means that its nodes are connected only sparsely, while the high clustering coefficient shows that connected nodes tend to be closely interlinked with one another. Second, the cumulative degree distribution of the patent keyword network, like other knowledge networks such as citation or collaboration networks, followed a clear power-law distribution. A well-known mechanism behind this pattern is preferential attachment, whereby a node with more links is more likely to attract new links as the network evolves. Unlike a normal distribution, a power-law distribution has no representative scale: one cannot pick a representative or average value because there is always a considerable probability of finding much larger values. Networks with power-law degree distributions are therefore often called scale-free networks. The presence of a heavy-tailed, scale-free distribution is the signature of an emergent collective behavior of the actors who contribute to forming the network. In our context, the more frequently a patent keyword is used, the more often it is selected by researchers and associated with other keywords or concepts to constitute and convey new patents or technologies. The evidence of a power-law distribution implies that preferential attachment explains the heavy-tailed distributions observed in the growing patent keyword network.
Third, we found that among keywords that flowed into a particular field, the vast majority of keywords with new links join existing keywords in the associated community when forming the concept of a new patent. This finding held for both the short-term (4-year) and long-term (10-year) analyses. Furthermore, the keyword combination information derived from the proposed methodology makes it possible to forecast which concepts will combine to form a new patent dimension and to refer to those concepts when developing a new patent.
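A minimal sketch of the network measures discussed in this abstract (density, average clustering, degree sequence), using the networkx package on a toy keyword co-occurrence network; the keyword pairs are invented for illustration and are not the study's LED patent data.

```python
import networkx as nx

# Toy keyword co-occurrence network: an edge joins keywords found in the same patent.
# The keywords are illustrative placeholders, not terms mined from LED patents.
edges = [
    ("led", "phosphor"), ("led", "substrate"), ("phosphor", "substrate"),
    ("led", "driver"), ("driver", "dimming"), ("led", "heat sink"),
    ("heat sink", "substrate"), ("led", "epitaxy"),
]
G = nx.Graph(edges)

density = nx.density(G)                 # existing ties / all possible ties, in [0, 1]
clustering = nx.average_clustering(G)   # tendency of neighbors to be interconnected
degrees = sorted((d for _, d in G.degree()), reverse=True)

print(f"density = {density:.3f}, average clustering = {clustering:.3f}")
print("degree sequence (a heavy tail here suggests a power-law shape):", degrees)
```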

Development of Agent-based Platform for Coordinated Scheduling in Global Supply Chain (글로벌 공급사슬에서 경쟁협력 스케줄링을 위한 에이전트 기반 플랫폼 구축)

  • Lee, Jung-Seung;Choi, Seong-Woo
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.213-226 / 2011
  • In a global supply chain, the scheduling problems for large products such as ships, airplanes, space shuttles, assembled constructions, and automobiles are complicated by nature. To reduce this inherent computational complexity, new scheduling systems are often built by decomposing the overall problem into small sub-problems, each handled by an independent scheduling system whose results are integrated back into the original problem. As one of the authors experienced, DAS (Daewoo Shipbuilding Scheduling System) adopted such a two-layered hierarchical architecture, in which the individual scheduling systems, a high-level dock scheduler (DAS-ERECT) and low-level assembly plant schedulers (DAS-PBS, DAS-3DS, DAS-NPS, and DAS-A7), each search for the best schedules under their own constraints. Moreover, the rapid growth of communication technology and logistics makes it possible to introduce distributed multi-nation production plants in which different parts are produced by designated plants, so vertical and lateral coordination among the decomposed scheduling systems becomes necessary. Although various scheduling systems exist in scheduling research, there is no standard mechanism for coordinating them. Previous research on coordination mechanisms has mainly focused on external conversation without a capacity model; agent research has focused heavily on agent-based coordination but has not been applied to the scheduling domain; and previous agent-based scheduling research has concentrated on internal coordination of the scheduling process, which has not been efficient. In this study, we propose a general framework for agent-based coordination of multiple scheduling systems in a global supply chain, with the goal of designing a standard coordination mechanism. First, we define an individual scheduling agent responsible for its own plant and a meta-level coordination agent that interacts with each individual scheduling agent, and we specify the variables and values describing both kinds of agents in Backus-Naur Form. Second, we propose scheduling agent communication protocols for each agent topology, classified by system architecture, the existence or nonexistence of a coordinator, and the direction of coordination: when a coordinating agent exists, an individual scheduling agent communicates with other individual agents indirectly through the coordinator; otherwise it communicates with them directly. To apply an agent communication language specifically to the scheduling coordination domain, we additionally define an inner language that expresses scheduling coordination, and we devise a scheduling agent communication language for domain-independent communication among agents. The language adopts three message layers: the ACL layer, the scheduling coordination layer, and the industry-specific layer. The ACL layer is a domain-independent outer language layer, the scheduling coordination layer contains the terms necessary for scheduling coordination, and the industry-specific layer expresses the industry specification.
Third, to improve the efficiency of communication among scheduling agents and to avoid possible infinite loops, we propose a look-ahead load balancing model that monitors participating agents and analyzes their status. Building this model requires monitoring the status of the participating agents and, above all, deciding how much information to share: if complete information is collected, the cost of updating and maintaining the shared information rises even though the frequency of communication falls, so the level of detail and the update period of the shared information should be decided contingently. With this standard coordination mechanism, the coordination processes of multiple scheduling systems in a supply chain can be modeled easily. Finally, we apply the mechanism to the shipbuilding domain and develop a prototype system consisting of a dock-scheduling agent, four assembly-plant-scheduling agents, and a meta-level coordination agent. A series of experiments using real-world data examines the mechanism empirically, and the results show that the effect of the agent-based platform on coordinated scheduling is evident in terms of the number of tardy jobs, tardiness, and makespan.
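To make the idea of a three-layer message concrete, here is a small hypothetical sketch of how a scheduling agent message might be represented; the field names and values are assumptions for illustration only and are not the BNF or protocol actually defined in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class SchedulingMessage:
    """Hypothetical three-layer message: ACL envelope, coordination terms, industry terms."""
    # ACL layer: domain-independent outer envelope
    performative: str            # e.g. "request" or "inform"
    sender: str
    receiver: str
    # Scheduling coordination layer: terms shared by all scheduling agents
    coordination: dict = field(default_factory=dict)   # e.g. job id, due date, load
    # Industry-specific layer: shipbuilding-specific content in this illustration
    industry: dict = field(default_factory=dict)       # e.g. block id, dock constraint

# Example: a meta-level coordinator asks an assembly-plant agent to reschedule a job.
msg = SchedulingMessage(
    performative="request",
    sender="meta-coordinator",
    receiver="assembly-plant-1",
    coordination={"job_id": "J-017", "due_date": "2011-10-01", "load_limit": 0.9},
    industry={"block_id": "B-204", "dock": "dock-2"},
)
print(msg.performative, msg.receiver, msg.coordination["job_id"])
```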

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.157-173 / 2011
  • As Internet use has exploded recently, malicious attacks and hacking against systems connected to the network occur frequently, and such intrusions can cause fatal damage in government agencies, public offices, and companies operating various systems. For these reasons, interest in and demand for intrusion detection systems (IDS), security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities, are growing. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well under normal situations but show poor performance when they encounter new or unknown patterns of network attacks. For this reason, several recent studies have adopted artificial intelligence techniques that can respond proactively to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy, but ANNs have intrinsic limitations such as the risk of overfitting, the requirement of large sample sizes, and the difficulty of interpreting the prediction process (the black-box problem). As a result, the most recent IDS studies have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model to improve the predictive ability of IDS, and that accounts for asymmetric error costs by optimizing the classification threshold. There are two common types of error in intrusion detection. The first is the False-Positive Error (FPE), in which normal activity is misjudged as an intrusion, possibly resulting in unnecessary countermeasures. The second is the False-Negative Error (FNE), in which malicious activity is misjudged as normal. Compared with an FPE, an FNE is more fatal, so when considering the total misclassification cost of an IDS it is reasonable to assign heavier weights to FNEs than to FPEs. We therefore designed the proposed intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. A conventional SVM cannot be applied directly for this purpose because it generates only discrete class outputs, so we used the revised SVM technique proposed by Platt (2000), which generates probability estimates. To validate the practical applicability of the model, we applied it to a real-world network intrusion detection dataset collected from the IDS sensor of an official institution in Korea from January to June 2010; from 15,000 log records we selected 1,000 samples by random sampling. The SVM model was compared with logistic regression (LOGIT), decision trees (DT), and ANN to confirm its superiority. LOGIT and DT were run with PASW Statistics v18.0, ANN with Neuroshell 4.0, and SVM with LIBSVM v2.90, a freeware library for training SVM classifiers.
Empirical results showed that the proposed SVM-based model outperformed all the comparative models in detecting network intrusions in terms of accuracy, and that it reduced the total misclassification cost compared with the ANN-based intrusion detection model. The intrusion detection model proposed in this paper is therefore expected not only to enhance the performance of IDS but also to lead to better management of FNEs.
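The sketch below illustrates the general idea of combining Platt-scaled SVM probability estimates with a classification threshold chosen to minimize an asymmetric misclassification cost. It uses scikit-learn on synthetic data rather than LIBSVM and the study's intrusion logs, and the 10:1 cost weights are arbitrary assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for intrusion logs: label 1 = intrusion, 0 = normal traffic.
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# probability=True turns on Platt scaling, giving probability estimates.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
p_intrusion = clf.predict_proba(X_te)[:, 1]

C_FNE, C_FPE = 10.0, 1.0   # assumed costs: missing an intrusion is 10x worse

def total_cost(threshold):
    pred = (p_intrusion >= threshold).astype(int)
    fne = np.sum((pred == 0) & (y_te == 1))   # intrusions judged as normal
    fpe = np.sum((pred == 1) & (y_te == 0))   # normal traffic judged as intrusion
    return C_FNE * fne + C_FPE * fpe

# In practice the threshold would be tuned on a separate validation split.
thresholds = np.linspace(0.05, 0.95, 19)
best = min(thresholds, key=total_cost)
print(f"cost-minimizing threshold: {best:.2f}, total cost: {total_cost(best):.0f}")
```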

Evaluation of Web Service Similarity Assessment Methods (웹서비스 유사성 평가 방법들의 실험적 평가)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems / v.15 no.4 / pp.1-22 / 2009
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component based software development to promote application interaction and integration both within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web service repositories not only be well-structured but also provide efficient tools for developers to find reusable Web service components that meet their needs. As the potential of Web services for service-oriented computing is being widely recognized, the demand for effective Web service discovery mechanisms is concomitantly growing. A number of techniques for Web service discovery have been proposed, but the discovery challenge has not been satisfactorily addressed. Unfortunately, most existing solutions are either too rudimentary to be useful or too domain dependent to be generalizable. In this paper, we propose a Web service organizing framework that combines clustering techniques with string matching and leverages the semantics of the XML-based service specification in WSDL documents. We believe that this is one of the first attempts at applying data mining techniques in the Web service discovery domain. Our proposed approach has several appealing features : (1) It minimizes the requirement of prior knowledge from both service consumers and publishers; (2) It avoids exploiting domain dependent ontologies; and (3) It is able to visualize the semantic relationships among Web services. We have developed a prototype system based on the proposed framework using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web service registries. We report on some preliminary results demonstrating the efficacy of the proposed approach.
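As a rough analogue of the clustering idea described above, the sketch below groups WSDL-derived service descriptions with TF-IDF vectors and k-means; the paper itself uses an unsupervised artificial neural network, and the service descriptions here are invented examples rather than entries from real registries.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical term strings extracted from WSDL operation and message names.
services = [
    "get weather forecast by city temperature",
    "get current weather conditions zipcode",
    "convert currency exchange rate usd eur",
    "currency conversion rate lookup",
    "send sms text message to phone number",
]

vectors = TfidfVectorizer().fit_transform(services)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for label, desc in sorted(zip(labels, services)):
    print(label, desc)   # services sharing vocabulary end up in the same cluster
```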

THE CURRENT STATUS OF BIOMEDICAL ENGINEERING IN THE USA

  • Webster, John G.
    • Proceedings of the KOSOMBE Conference / v.1992 no.05 / pp.27-47 / 1992
  • Engineers have developed new instruments that aid in diagnosis and therapy. Ultrasonic imaging provides a nondamaging method of imaging internal organs: a complex transducer emits ultrasonic waves at many angles and reconstructs a map of the internal anatomy as well as the velocities of blood in vessels. Fast computed tomography permits reconstruction of the three-dimensional anatomy and perfusion of the heart at 20-Hz rates. Positron emission tomography uses isotopes that produce positrons, which react with electrons to simultaneously emit two gamma rays in opposite directions; the region of origin is located by a ring of discrete scintillation detectors, each in electronic coincidence with an opposing detector. In magnetic resonance imaging, the patient is placed in a very strong magnetic field, and the precession of the hydrogen atoms is perturbed by an interrogating field to yield two-dimensional images of soft tissue with exceptional clarity. As an alternative to radiology image processing, film archiving, and retrieval, picture archiving and communication systems (PACS) are being implemented: images from computed radiography, magnetic resonance imaging (MRI), nuclear medicine, and ultrasound are digitized, transmitted, and stored in computers for retrieval at distributed workstations. In electrical impedance tomography, electrodes are placed around the thorax, a 50-kHz current is injected between two electrodes, voltages are measured on all the other electrodes, and a computer processes the data to yield an image of the resistivity of a two-dimensional slice of the thorax. During fetal monitoring, a corkscrew electrode is screwed into the fetal scalp to measure the fetal electrocardiogram, and correlation with uterine contractions yields information on the status of the fetus during delivery. To measure cardiac output by thermodilution, cold saline is injected into the right atrium, and a thermistor in the right pulmonary artery yields temperature measurements from which cardiac output can be calculated. In impedance cardiography, changes in electrical impedance are measured as the heart ejects blood into the arteries; motion artifacts are large, so signal averaging is useful during monitoring. An intraarterial blood gas monitoring system permits monitoring in real time: light is sent down optical fibers inserted into the radial artery, where it is absorbed by dyes that reemit the light at a different wavelength, and the emitted light travels back up the optical fibers to an external instrument that determines O2, CO2, and pH. Therapeutic devices include the electrosurgical unit, in which a high-frequency electric arc is drawn between the knife and the tissue; the arc cuts and the heat coagulates, preventing blood loss. Hyperthermia has demonstrated antitumor effects in patients in whom all conventional modes of therapy have failed; methods of raising tumor temperature include focused ultrasound, radio-frequency power delivered through needles, and microwaves. When the heart stops pumping, the defibrillator is used to restore normal pumping: a brief, high-current pulse through the heart synchronizes all cardiac fibers to restore normal rhythm. When the cardiac rhythm is too slow, a cardiac pacemaker is implanted, and an electrode within the heart stimulates the cardiac muscle to contract at the normal rate. When the cardiac valves are narrowed or leak, an artificial valve is implanted, with silicone rubber and Teflon used for biocompatibility. Artificial hearts powered by pneumatic hoses have been implanted in humans.
However, the quality of life gradually degrades, and death ensues. When kidney stones develop, lithotripsy is used: a spark creates a pressure wave that is focused on the stone and fragments it, and the pieces pass out normally. When the kidneys fail, the blood is cleansed during hemodialysis, in which urea passes through a porous membrane into a dialysate bath to lower its concentration in the blood. The blind are able to read by scanning the Optacon with their fingertips: a camera scans letters and converts them to an array of vibrating pins. The deaf are able to hear using a cochlear implant: a microphone detects sound and divides it into frequency bands, and 22 electrodes within the cochlea stimulate the acoustic nerve to provide sound patterns. For those who have lost muscle function in the limbs, researchers are implanting electrodes to stimulate the muscles; sensors in the legs and arms feed signals back to a computer that coordinates the stimulators to provide limb motion. For those with a high spinal cord injury, a puff-and-sip switch can control a computer and permit the disabled person to operate the computer and communicate with the outside world.
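As one concrete example of the calculations mentioned above, thermodilution cardiac output is commonly estimated with the Stewart-Hamilton relation: the injectate's temperature deficit divided by the area under the blood temperature-change curve. The numbers and the lumped correction constant below are invented for illustration and are not taken from the abstract.

```python
import numpy as np

def cardiac_output_thermodilution(v_injectate_ml, t_blood_c, t_injectate_c,
                                  delta_t_curve_c, dt_s, k=1.0):
    """Stewart-Hamilton estimate (ml/s): injectate temperature deficit divided by
    the area under the blood temperature-change curve. k lumps density, specific
    heat, and catheter correction factors (set to 1.0 here for illustration)."""
    area_c_s = float(np.sum(delta_t_curve_c) * dt_s)   # simple rectangle rule
    return k * v_injectate_ml * (t_blood_c - t_injectate_c) / area_c_s

# Invented example: 10 ml of saline at 4 degC injected into blood at 37 degC,
# producing a temperature dip sampled once per second by the thermistor.
dip_c = np.array([0.0, 0.2, 0.8, 1.2, 0.9, 0.5, 0.25, 0.1, 0.0])
co_ml_per_s = cardiac_output_thermodilution(10.0, 37.0, 4.0, dip_c, dt_s=1.0)
print(f"estimated cardiac output: {co_ml_per_s * 60 / 1000:.1f} L/min")
```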

An Analysis of the Locational Selection Factors of the Small- and Medium-sized Hospitals Using the AHP : Centered on the Spine and Joint Hospitals (AHP를 이용한 중·소 병원 입지선택요인 분석 : 척추·관절 병원중심으로)

  • Kim, Duck Ki;Shim, Gyo-Eon
    • The Journal of the Korea Contents Association / v.18 no.5 / pp.191-214 / 2018
  • This research empirically analyzed the selection factors and locational selection factors for medical service facilities, whose importance has grown with the rapid changes in socio-economic conditions surrounding the establishment of small- and medium-sized hospitals. By ranking the evaluation factors according to their importance and identifying, through a classification into real-estate locational factors and non-locational factors, which factors most influence the competitiveness of existing small- and medium-sized hospitals, the study aims to provide basic data for the opening strategies of medical suppliers preparing to establish new small- and medium-sized hospitals, taking into account the locational characteristics implied by the important selection factors. Based on the results of preceding research and case studies, 28 evaluation factors were identified covering the level of medical treatment, medical services, hospital accessibility, hospital convenience, and the physical environment. Through interviews with related experts, the 28 detailed factors were organized under five upper-level evaluation factors, medical level, medical service, hospital expertise, hospital convenience, and physical environment, with a total of 28 lower-level factors assigned to them. An AHP questionnaire survey of 200 medical experts was then carried out on the selected evaluation factors to determine the optimal locational factors. Consistent with the preceding case studies, the AHP results ranked the importance of the factors in the order of medical level, medical services, hospital accessibility, physical environment, and convenience, and factors related to hospital facilities ranked low. The results of this research can serve as a basis for decision-making on the selection of locations for small- and medium-sized hospitals in the future.
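A minimal sketch of the AHP weight computation referred to above: priorities are taken from the principal eigenvector of a pairwise comparison matrix, with a consistency-ratio check. The matrix below is a made-up example for five upper-level factors, not the judgments collected from the 200 experts.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for five factors.
A = np.array([
    [1,   2,   3,   5,   4],
    [1/2, 1,   2,   4,   3],
    [1/3, 1/2, 1,   3,   2],
    [1/5, 1/4, 1/3, 1,   1/2],
    [1/4, 1/3, 1/2, 2,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                  # AHP priority weights, summing to 1

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)           # consistency index
cr = ci / 1.12                            # Saaty's random index for n = 5 is 1.12
print("weights:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3))  # values below about 0.1 are acceptable
```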

A Study on the K-REITs of Characteristic Analysis by Investment Type (K-REITs(부동산투자회사)의 투자 유형별 특성 분석)

  • Kim, Sang-Jin;Lee, Myenog-Hun
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.11 / pp.66-79 / 2016
  • A discussion has recently emerged over the increasing number of K-REIT approvals, which raises the questions of how to raise funds for business activity, how to achieve the expected rate of return, and how to best manage invested funds. In addition, corporations need to recognize that the capital structure reflected in their decision-making processes must fit the current economic environment. This research analyzed the characteristics of K-REITs by investment type and the factors influencing their debt ratio. The data cover general management, business state, investment, and finance from 2002 to 2015 (excluding the global financial crisis period of 2007-2009). The results show high ratios of largest-shareholder types such as corporations, pension funds, mutual funds, banks, securities firms, and insurance companies, and a recently increasing share held by the largest and major shareholders. Institutional investors play a growing role in K-REIT investment and lead the development of K-REITs. An analysis of the simultaneous investment behavior of institutional investors showed that they received higher interest rates than other financial institutions and that this ran in parallel with investment attraction and compensation. The multiple regression analysis of the debt ratio produced the following results. The debt ratio is negatively (-) related to profitability, which is consistent with the pecking order theory and the trade-off theory, while investment opportunities (growth potential) also show a negative (-) relation and asset scale shows a positive (+) relation. The findings can be interpreted as follows. K-REITs rely more on private equity REITs than on publicly offered REITs, and when raising outside capital they rely on loans secured by tangible assets (mostly real estate) rather than on financing through the stock market. Furthermore, after the global financial crisis, outside capital was actively utilized in the K-REIT business, and the debt ratio was determined by the shareholding ratio and characteristics of the largest shareholder and by the investment products.
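A hypothetical sketch of the kind of multiple regression described above, using statsmodels with invented variable names and simulated data; it illustrates only the model form (debt ratio regressed on profitability, growth opportunity, and asset scale), not the study's K-REITs dataset or its estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120   # invented firm-year observations, not the K-REITs sample

df = pd.DataFrame({
    "profitability": rng.normal(0.05, 0.02, n),
    "growth": rng.normal(0.10, 0.05, n),
    "log_assets": rng.normal(12.0, 1.0, n),
})
# Simulated debt ratio whose signs mirror the reported relations (-, -, +).
df["debt_ratio"] = (0.3 - 2.0 * df["profitability"] - 0.5 * df["growth"]
                    + 0.03 * df["log_assets"] + rng.normal(0, 0.05, n))

X = sm.add_constant(df[["profitability", "growth", "log_assets"]])
model = sm.OLS(df["debt_ratio"], X).fit()
print(model.summary())   # coefficient signs correspond to the relations above
```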

The Relationship with Electronic Trust, Web Site Commitment and Service Transaction Intention in Public Shipping B2B e-marketplace (해운 B2B e-marketplace의 전자적 신뢰, 사이트몰입 및 서비스 거래의도와의 관계성)

  • Kim, Yong-Man;Kim, Seog-Yong;Lee, Jong-Hwan;Shim, Gyu-Yeol
    • Journal of Global Scholars of Marketing Science / v.17 no.4 / pp.113-139 / 2007
  • This study investigates, from a network standpoint, the shipping industry's B2B e-marketplace: the characteristics that earn electronic trust from users and the characteristics of the web site. It examines the mechanism by which electronic trust is earned and how that trust affects web-site commitment and service transaction intention. Ultimately, the study makes proposals on how such trust can lead to a cooperative trading community in the shipping industry's B2B e-marketplace. A covariance structural equation model was designed and empirically tested for the shipping industry's B2B e-marketplace, using questionnaire data collected from shipping industry employees. Of the three web-site characteristic factors, perceived site quality and operational characteristics affected the endogenous variables, while perceived security did not. Transaction fairness was found to be the most important exogenous factor for increasing electronic trust. With regard to transaction rules, if a transaction benefits only one side, no long-term transactions will take place; if the parties properly recognize that transaction fairness is crucial to electronic transactions, this will contribute enormously to the successful operation of the shipping e-marketplace. Perceived transaction efficiency also affects electronic trust: it reduces transaction costs and speeds up and simplifies the transaction process, saving more time and cost than existing off-line transactions and thereby positively affecting electronic trust. An open forum that allows participants to obtain transaction information lets them gather useful information while the web-site operator provides it, which in turn increases electronic trust in electronic transactions. Such trust influences shipping companies to continue participating in transactions, raising web-site commitment. As a result of increased trust, shipping companies will do business with each other in the future and form a foundation for continuous transactions among themselves. Consequently, the formation of trust in electronic transactions greatly influences web-site commitment and service transaction intention. The results again prove that, in order to maintain continuous business relationships with current clients, electronic trust in the virtual space in which the shipping industry's B2B e-marketplace operates is important for the interested parties.
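A compact sketch of a covariance-based structural equation model in the spirit of the one described above, written with the semopy package; the latent variable names, the nine indicator items, and the simulated responses are all hypothetical, and the specification covers only a subset of the hypothesized paths.

```python
import numpy as np
import pandas as pd
from semopy import Model

# Simulated questionnaire responses for nine hypothetical survey items (q1..q9).
rng = np.random.default_rng(0)
n = 200
latent = rng.normal(size=(n, 3))
data = pd.DataFrame({f"q{i + 1}": latent[:, i // 3] + rng.normal(0, 0.5, n)
                     for i in range(9)})

model_desc = """
# measurement model
fairness   =~ q1 + q2 + q3
etrust     =~ q4 + q5 + q6
commitment =~ q7 + q8 + q9
# structural paths: fairness -> electronic trust -> web-site commitment
etrust     ~ fairness
commitment ~ etrust
"""

sem = Model(model_desc)
sem.fit(data)
print(sem.inspect())   # loadings, path estimates, and p-values
```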

The Relationship of Anxiety, Hopelessness, and Family Support of Breast Cancer Patients Undergoing Chemotherapy (암화학요법을 받는 유방암 환자의 불안, 절망감 및 가족지지와의 관계)

  • Park Jum-Hee;Lee Hyoun-Ju;Kim Hyun-Mi;Lyu Eun-Kyung
    • Journal of Korean Academy of Fundamentals of Nursing / v.4 no.1 / pp.147-162 / 1997
  • This study was conducted to provide basic data for nursing interventions that improve the psychosocial adaptation of patients receiving chemotherapy for breast cancer, by examining the relationships among the anxiety and hopelessness they experience and their family support, in order to help them cope successfully with various psychological problems. The study was carried out with 93 breast cancer patients who were receiving chemotherapy in the injection treatment room of K University Hospital in downtown Taegu, after undergoing mastectomy at the hospital between December 1995 and August 1996. Structured questionnaires were used, containing 7 questions on general characteristics, Spielberger's trait and state anxiety scales, the hopelessness scale developed by Beck et al. (1967) as modified by Won (1987), and the family support tool developed by Tae (1985). Using the SPSS/PC program, frequencies and percentages were obtained for the general characteristics of the subjects, and means and standard deviations for the degrees of trait anxiety, state anxiety, hopelessness, and family support. The correlations between the variables were identified with Pearson correlation coefficients, and differences in trait anxiety, state anxiety, hopelessness, and family support across the general characteristics were analyzed with the t-test, ANOVA, and Duncan test. The results were as follows. Most of the subjects were 51 years old or older, had middle-class incomes, had an educational background of elementary school or below, had no job, were Buddhist, had a spouse, and were receiving chemotherapy with MTX and 5FU. The mean scores were 50.29 for trait anxiety, 49.68 for state anxiety, 51.46 for hopelessness, and 34.28 for family support. Trait anxiety and hopelessness were positively correlated: the higher the trait anxiety, the higher the hopelessness. Trait anxiety and family support were negatively correlated: the higher the trait anxiety, the lower the perceived family support. State anxiety and hopelessness were also positively correlated, and family support and hopelessness were negatively correlated: the higher the family support, the lower the perceived hopelessness. Family support and state anxiety showed a negative correlation, but it was not statistically significant. Trait anxiety differed significantly by age, job, and religion; state anxiety differed significantly by job and religion; and hopelessness differed significantly by age, educational background, and presence or absence of a spouse. In conclusion, breast cancer patients receiving chemotherapy perceive anxiety and hopelessness from several causes, such as the diagnosis itself and the side effects of chemotherapy, so it is necessary not only to develop specific nursing interventions, including family support, to alleviate anxiety and hopelessness, but also to apply such interventions in clinical practice.
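The sketch below shows the form of the Pearson correlation analysis reported above, using scipy on simulated score vectors; the generated numbers are not the study's data, and only the direction of the simulated relationships mirrors the reported findings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 93   # the study's sample size; the scores below are simulated, not real data

trait_anxiety = rng.normal(50, 10, n)
hopelessness = 0.6 * trait_anxiety + rng.normal(0, 8, n)         # positively related
family_support = 80 - 0.4 * trait_anxiety + rng.normal(0, 6, n)  # negatively related

r1, p1 = stats.pearsonr(trait_anxiety, hopelessness)
r2, p2 = stats.pearsonr(trait_anxiety, family_support)
print(f"trait anxiety vs hopelessness:   r = {r1:.2f} (p = {p1:.3g})")
print(f"trait anxiety vs family support: r = {r2:.2f} (p = {p2:.3g})")
```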
