• Title/Summary/Keyword: human-machine system

[ $^1H$ ] MR Spectroscopy of the Normal Human Brains: Comparison between Signa and Echospeed 1.5 T System (정상 뇌의 수소 자기공명분광 소견: 1.5 T Signa와 Echospeed 자기공명영상기기에서의 비교)

  • Kang Young Hye;Lee Yoon Mi;Park Sun Won;Suh Chang Hae;Lim Myung Kwan
    • Investigative Magnetic Resonance Imaging
    • /
    • v.8 no.2
    • /
    • pp.79-85
    • /
    • 2004
  • Purpose : To evaluate the usefulness and reproducibility of $^1H$ MRS on different 1.5 T MR machines with different coils, by comparing the SNR, scan time, and spectral patterns in different brain regions of normal volunteers. Materials and Methods : Localized $^1H$ MR spectroscopy ($^1H$ MRS) was performed in a total of 10 normal volunteers (age: 20-45 years) with spectral parameters adjusted by the autoprescan routine (PROBE package). In all volunteers, MRS was performed three times: on the conventional system (Signa Horizon) with a 1-channel coil, and on the upgraded system (Echospeed plus with EXCITE) with both 1-channel and 8-channel coils. Using these three machine/coil combinations, the SNRs of the spectra in both phantom and volunteers and the (pre)scan times of MRS were compared. Two regions of the brain (basal ganglia and deep white matter) were examined, and relative metabolite ratios (NAA/Cr, Cho/Cr, and mI/Cr) were measured in all volunteers. For all spectra, a STEAM localization sequence with three-pulse CHESS $H_2O$ suppression was used, with the following acquisition parameters: TR=3.0/2.0 sec, TE=30 msec, TM=13.7 msec, SW=2500 Hz, SI=2048 pts, AVG=64/128, and NEX=2/8 (Signa/Echospeed). Results : The SNR was over $30\%$ higher on the Echospeed machine, and prescan and scan times were almost the same across machines and coils. Reliable spectra were obtained on both MRS systems, and there were no significant differences in spectral patterns or relative metabolite ratios in the two brain regions (p>0.05). Conclusion : Both the conventional and the new MRI system are highly reliable and reproducible for $^1H$ MR spectroscopic examinations of the human brain, and there are no significant differences between the two MRI systems in their application to $^1H$ MRS.

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to build a highly accurate activity recognizer or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty becomes especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities using data from only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how a set of classes is split into two subsets at each node, the final tree we obtain can differ. Since some classes may be correlated, a particular tree may perform better than the others; however, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning, and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds, etc. For experiments comparing the performance of END with those of other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers.
Among these 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
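The nested-dichotomy construction described above can be sketched in Python. This is a minimal illustration under assumed data, not the authors' implementation: the base learner at each node is a nearest-centroid classifier standing in for the random forests used in the paper, and all names and the toy data are hypothetical.

```python
import random
import numpy as np

def fit_dichotomy(X, y, classes, rng):
    """Recursively build one nested-dichotomy tree: each internal node
    splits the remaining class set in two at random and trains a binary
    classifier (here a nearest-centroid stand-in for the paper's random
    forests) to separate the two subsets."""
    if len(classes) == 1:
        return {"leaf": next(iter(classes))}
    classes = list(classes)
    rng.shuffle(classes)
    cut = rng.randint(1, len(classes) - 1)
    left, right = set(classes[:cut]), set(classes[cut:])
    side = np.array([0 if c in left else 1 for c in y])
    centroids = np.array([X[side == 0].mean(axis=0), X[side == 1].mean(axis=0)])
    lmask = side == 0
    return {"centroids": centroids,
            "left": fit_dichotomy(X[lmask], [c for c in y if c in left], left, rng),
            "right": fit_dichotomy(X[~lmask], [c for c in y if c in right], right, rng)}

def predict_one(tree, x):
    """Walk one dichotomy tree from the root down to a single-class leaf."""
    while "leaf" not in tree:
        d = np.linalg.norm(tree["centroids"] - x, axis=1)
        tree = tree["left"] if d[0] < d[1] else tree["right"]
    return tree["leaf"]

def predict_ensemble(trees, x):
    """END prediction: majority vote over randomly structured trees."""
    votes = [predict_one(t, x) for t in trees]
    return max(set(votes), key=votes.count)

# Toy demo: four well-separated "activity" classes in 2-D feature space.
rng = random.Random(0)
np.random.seed(0)
means = {0: (0, 0), 1: (6, 0), 2: (0, 6), 3: (6, 6)}
X = np.vstack([np.random.randn(50, 2) * 0.3 + m for m in means.values()])
y = [c for c in means for _ in range(50)]
trees = [fit_dichotomy(X, y, list(means), rng) for _ in range(10)]
```

Because each tree's class splits are random, different trees make different mistakes, and the majority vote smooths them out in the same spirit as the committee of committees described above.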

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing personal activities of smartphone users with multimodal data have been actively studied in recent years. The research area is expanding from the recognition of an individual user's simple body movements to the recognition of low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Moreover, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep-learning-based method for detecting accompanying status using only multimodal physical sensor data, such as accelerometer, magnetic field, and gyroscope data, is proposed. Accompanying status is defined as a subset of user interaction behavior: whether the user is accompanying an acquaintance at close distance, and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method is introduced that consists of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors.
Normalization was performed on each x, y, and z axis value of the sensor data, and the sequence data were generated with the sliding window method. The sequence data then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm, and the mini-batch size was set to 128. We applied dropout to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using the data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of the majority vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will study transfer learning methods that enable trained models, tailored to the training data, to transfer to evaluation data that follows a different distribution. We expect this to yield a model with robust recognition performance against changes in the data that were not considered in the model learning stage.
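The preprocessing pipeline described above (nearest-neighbor time synchronization, per-axis normalization, and sliding-window sequence generation) can be sketched with a few numpy functions. This is an illustrative reconstruction under assumed shapes and sampling rates, not the authors' code; the stream lengths and window sizes are hypothetical.

```python
import numpy as np

def nearest_sync(t_ref, t_src, v_src):
    """Align a source sensor stream onto reference timestamps by
    nearest-neighbor interpolation (the synchronization step above)."""
    idx = np.abs(t_src[None, :] - t_ref[:, None]).argmin(axis=1)
    return v_src[idx]

def normalize_axes(v):
    """Z-normalize each axis (x, y, z, ...) of a (T, channels) stream."""
    return (v - v.mean(axis=0)) / v.std(axis=0)

def sliding_windows(v, size, step):
    """Cut a (T, channels) stream into overlapping sequence windows
    suitable as input to a sequence model such as the CNN above."""
    starts = range(0, len(v) - size + 1, step)
    return np.stack([v[s:s + size] for s in starts])

# Hypothetical streams: accelerometer at 10 Hz, gyroscope offset and slower.
np.random.seed(0)
t_acc = np.arange(0.0, 10.0, 0.1)
t_gyro = np.arange(0.03, 10.0, 0.15)
acc = np.random.randn(len(t_acc), 3)
gyro = np.random.randn(len(t_gyro), 3)

gyro_synced = nearest_sync(t_acc, t_gyro, gyro)          # (100, 3)
stream = normalize_axes(np.hstack([acc, gyro_synced]))   # (100, 6)
seqs = sliding_windows(stream, size=20, step=5)          # (17, 20, 6)
```

Each window in `seqs` is one multi-channel sequence sample; in the paper's framework such samples would feed the convolutional layers, whose feature maps then go to the LSTM layers.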

Progress of Composite Fabrication Technologies with the Use of Machinery

  • Choi, Byung-Keun;Kim, Yun-Hae;Ha, Jin-Cheol;Lee, Jin-Woo;Park, Jun-Mu;Park, Soo-Jeong;Moon, Kyung-Man;Chung, Won-Jee;Kim, Man-Soo
    • International Journal of Ocean System Engineering
    • /
    • v.2 no.3
    • /
    • pp.185-194
    • /
    • 2012
  • A macroscopic combination of two or more distinct materials is commonly referred to as a "composite material", designed to be mechanically and chemically superior in function and characteristics to its individual constituent materials. Composite materials are used not only in aerospace and military applications but also heavily in boat/ship building and general composite industries, which we are seeing increasingly often. Despite the various applications for composite materials, the industry is still limited and requires better fabrication technology and methodology in order to expand and grow. For example, the majority of nearby fabrication facilities still use an antiquated wet lay-up process in which fabrication requires manual hand labor in a 3D environment, impeding the productivity of composite product design advancement. As an expert in the advanced composites field, I have developed fabrication skills using machinery, based on my past composite experience. In autumn 2011, the Korean government confirmed funding for my project: the development of a composite sanding machine. I began developing this semi-robotic prototype in 2009. It has the potential to replace or augment the exhausting and difficult jobs performed by human hands, such as sanding, grinding, blasting, and polishing, most often in very awkward conditions; it will also boost productivity, improve surface quality, cut abrasive costs, eliminate vibration injuries, and protect workers from exposure to dust and airborne contamination. Ease of control and operation of the equipment inside or outside of the sanding room is a key benefit to end-users. It will prove to be much more economical than conventional robotics and will minimize the errors that commonly occur in factories. The key components and their technologies are a 360-degree rotational shoulder and a wrist controlled by a PLC controller with a joystick manual mode.
Development of both key modules is complete, and they are now operational. The Korean government funding boosted my development, and I expect to complete full-scale development no later than the 3rd quarter of 2012. Even with the advantages of composite materials, there is still the need to repair or maintain composite products with a higher level of technology. I have learned many composite repair skills on composite airframes, since many composite fabrication skills, including repair, require training even for non-aerospace applications. The wind energy market now requires much larger blades in order to generate more electrical energy for wind farms; a single blade is now commonly 50 meters or longer. When a wind blade is damaged by external forces, on-site repair is required on the column, even under strong wind and freezing temperatures. In order to obtain correct polymerization, the repair must be performed on the damaged area within a very limited time. The use of pre-impregnated glass fabric, a heating silicone pad, and a hot bonder providing precise heating control is certainly required.

A Hybrid Forecasting Framework based on Case-based Reasoning and Artificial Neural Network (사례기반 추론기법과 인공신경망을 이용한 서비스 수요예측 프레임워크)

  • Hwang, Yousub
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.43-57
    • /
    • 2012
  • To enhance its competitive advantage in a constantly changing business environment, enterprise management must make the right decisions in many business activities based on both internal and external information; thus, providing accurate information plays a prominent role in management's decision making. Intuitively, historical data can provide a feasible estimate through forecasting models. If the service department can estimate the service quantity for the next period, it can effectively control the inventory of service-related resources such as personnel, parts, and other facilities; in addition, the production department can make a load map for improving its product quality. Obtaining an accurate service forecast therefore appears to be critical for manufacturing companies. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average models. However, these methods are only effective for data that are seasonal or cyclical; if the data are influenced by the special characteristics of a product, they are not feasible. In our research, we propose a forecasting framework that predicts the service demand of a manufacturing organization by combining case-based reasoning (CBR) with an unsupervised artificial neural network based clustering analysis (i.e., Self-Organizing Maps, SOM). We believe this is one of the first attempts at applying unsupervised artificial neural network-based machine-learning techniques in the service forecasting domain. Our proposed approach has several appealing features: (1) we applied CBR and SOM in a new forecasting domain, service demand forecasting.
(2) We proposed a combined CBR and SOM approach in order to overcome the limitations of traditional statistical forecasting methods, and we developed a service forecasting tool based on the proposed approach using an unsupervised artificial neural network and case-based reasoning. In this research, we conducted an empirical study on a real digital TV manufacturer (Company A), empirically evaluating the proposed approach and tool using real sales and service-related data from the manufacturer. In our empirical experiments, we explored the performance of our proposed service forecasting framework compared with two other service forecasting methods: a traditional CBR-based forecasting model and the existing service forecasting model used by Company A. We ran each service forecast 144 times; each time, input data were randomly sampled for each service forecasting framework. To evaluate the accuracy of the forecasting results, we used the Mean Absolute Percentage Error (MAPE) as the primary performance measure in our experiments. We conducted a one-way ANOVA test on the 144 measurements of MAPE for the three different service forecasting approaches. The F-ratio of MAPE for the three approaches is 67.25 and the p-value is 0.000, meaning that the difference between the MAPE of the three approaches is significant at the 0.000 level. Since there is a significant difference among the forecasting approaches, we conducted Tukey's HSD post hoc test to determine exactly which means of MAPE differ significantly from which others.
In terms of MAPE, Tukey's HSD post hoc test grouped the three service forecasting approaches into three different subsets in the following order: our proposed approach > the traditional CBR-based service forecasting approach > the existing forecasting approach used by Company A. Consequently, our empirical experiments show that our proposed approach outperformed both the traditional CBR-based forecasting model and the existing service forecasting model used by Company A. The rest of this paper is organized as follows. Section 2 provides research background, including summaries of CBR and SOM. Section 3 presents a hybrid service forecasting framework based on case-based reasoning and Self-Organizing Maps, while the empirical evaluation results are summarized in Section 4. Conclusions and future research directions are discussed in Section 5.
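The evaluation procedure above can be made concrete with a short sketch of the MAPE measure and the one-way ANOVA F-ratio computed directly from group measurements. This is a generic illustration with made-up numbers, not the study's data; the Tukey HSD step is omitted for brevity.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, the primary accuracy measure above."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def one_way_anova_f(*groups):
    """F-ratio for a one-way ANOVA over k groups of error measurements:
    between-group mean square divided by within-group mean square."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n, k = sum(len(g) for g in groups), len(groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Forecasts off by 10% on every point give a MAPE of 10.
print(mape([100, 200], [110, 180]))  # 10.0

# Toy ANOVA over three hypothetical groups of MAPE measurements.
print(one_way_anova_f([1, 2, 3], [2, 3, 4], [3, 4, 5]))
```

In the study, the three groups would be the 144 MAPE measurements per forecasting approach, and a large F-ratio (67.25 here) with a small p-value licenses the post hoc comparison.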

Clinical Study of Acute and Chronic Pain by the Application of Magnetic Resonance Analyser $I_{TM}$ (자기공명분석기를 이용한 통증관리)

  • Park, Wook;Jin, Hee-Cheol;Cho, Myun-Hyun;Yoon, Suk-Jun;Lee, Jin-Seung;Lee, Jeong-Seok;Choi, Surk-Hwan;Kim, Sung-Yell
    • The Korean Journal of Pain
    • /
    • v.6 no.2
    • /
    • pp.192-198
    • /
    • 1993
  • In 1984, a magnetic resonance spectrometer (magnetic resonance analyser, MRA $I_{TM}$) was developed by Sigrid Lipsett and Ronald J. Weinstock in the USA. Biomedical applications of the spectrometer have been examined by Dr. Hoang Van Duc (pathologist, USC) and by Nakamura et al. (Japan). In their theoretical view, the biophysical functions of this machine are to analyse and synthesize a healthy tissue and organ resonance pattern, and to detect and correct an abnormal tissue and organ resonance pattern; all of these functions are said to be based on quantum physics. The healthy tissue and organ resonance patterns are predetermined as standard magnetic resonance patterns by digitizing values based on peak resonance emissions (response levels, or high-pitched echo-sounds amplified via the human body). In clinical practice, a counter or neutralizing resonance pattern calculated by the spectrometer can correct a phase-shifted resonance pattern (response levels, or low-pitched echo-sounds) of a diseased tissue and organ. By administering the counter resonance pattern at the site of pain and the trigger point, it is possible to readjust the phase-shifted resonance pattern and then to alleviate pain through regulation of the neurotransmitter function of the nervous system. To assess the clinical effectiveness of pain relief with MRA $I_{TM}$, this study was designed to estimate pain intensity by the patient's subjective verbal rating scale (VRS, graded as no pain, mild, moderate, and severe) before its application, to evaluate the amount of pain relief after applying the spectrometer by the patient's subjective pain relief scale (visual analogue scale, VAS, 0~100%), and then to observe the continuation of pain relief following its application, in the management of acute and chronic pain in 102 patients during an 8-month period beginning March 1993.
The application time of the spectrometer ranged from 15 to 30 minutes daily for each patient, at or near the site of pain and the trigger point, when the patient wanted to be treated. The subjects consisted of 54 males and 48 females, with an age distribution of 23~40 years in 29 cases, 41~60 years in 48 cases, and 61~76 years in 25 cases (Table 1). The kinds of diagnosis, the main sites of pain, the duration of pain before the application, and the frequency of application are recorded in Tables 2, 3 and 4. The distinction between acute and chronic pain was defined by pain lasting less than versus more than 3 months. The results of application of the spectrometer were as follows. In the 51 cases of acute pain, the pain intensities before application were rated mild in 10 cases, moderate in 15 cases, and severe in 26 cases. The amounts of pain relief were 30~50% in 9 cases, 51~70% in 13 cases, and 71~95% in 29 cases. The continuation of pain relief lasted 6~24 hours in two cases, 2~5 days in 10 cases, 6~14 days in 4 cases, and 15 days in one case, and 34 cases were completely relieved of pain (Tables 5~7). In the 51 cases of chronic pain, the pain intensities before application were rated mild in 12 cases, moderate in 18 cases, and severe in 21 cases. The amounts of pain relief were 0~50% in 10 cases, 51~70% in 27 cases, and 71~90% in 14 cases. The application had no effect on the continuation of pain relief in two cases; the effective duration was 6~12 hours in two cases, 2~5 days in 11 cases, 6~14 days in 14 cases, and 15~60 days in 9 cases, and 13 patients were completely relieved of pain (Tables 5~7). There were no complications in the patients except mild reddening and a tingling sensation of the skin while the spectrometer was applied.
Total amounts of pain relief in all subjects were rated as poor or fair in 19 cases (18.6%), good in 40 cases (39.2%), and excellent in 43 cases (42.2%). The clinical effectiveness of MRA $I_{TM}$ showed a variable distribution, from no improvement to complete relief of pain, by the patients' assessment. In conclusion, we suggest that MRA $I_{TM}$ may be successful in immediate and continued pain relief, but it still requires several treatments for continued relief and may be only gradually effective in pain relief when applied repeatedly.

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, in addition to affecting stakeholders such as managers, employees, creditors, and investors of the bankrupt companies. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it only focused on certain main variables such as the 'debt ratio'. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, where total collapse comes in a single moment. The key variables used in corporate default prediction vary over time: Deakin's (1972) study, compared with the analyses of Beaver (1967, 1968) and Altman (1968), shows that the major factors affecting corporate failure have changed, and Grice (2001) likewise found, through Zmijewski's (1984) and Ohlson's (1980) models, that the importance of predictive variables shifts. However, past studies use static models, and most of them do not consider the changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm reflecting dynamic change. Motivated by the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively.
In order to construct a consistent bankruptcy model under changing conditions, we first train a deep learning time series model using data from before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that includes the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the learning data and exhibits excellent predictive power. After that, each bankruptcy prediction model is retrained on the combined learning and validation data (2000~2008), applying the optimal parameters found during validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression analysis to the existing variable selection methods (multiple discriminant analysis and the logit model), we show that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data pose the difficulties of nonlinear variables, multi-collinearity among variables, and lack of data.
The logit model handles the nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and finally towards intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and has greater predictive power. Through the Fourth Industrial Revolution, the Korean government and overseas governments are working hard to integrate such systems into the everyday life of their nations and societies, yet research on deep learning time series methods for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and we hope it will serve as comparative analysis material for non-specialists who begin studies combining financial data with deep learning time series algorithms.
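To make the variable selection step concrete, here is a minimal sketch of a logit default model with an L1 (Lasso-style) penalty, fitted by proximal gradient descent on synthetic data. This is a generic illustration, not the study's model or variables: the "financial ratios" are simulated, and the deep learning time series component is beyond the scope of a short sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l1_logit(X, y, lam=0.05, lr=0.1, iters=3000):
    """Logistic regression with an L1 penalty, fitted by proximal
    gradient descent: a gradient step on the log-loss followed by
    soft-thresholding, which drives irrelevant coefficients to zero."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # prox step
    return w

# Synthetic "financial ratios": only the first one drives default.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
y = (rng.random(500) < sigmoid(2.0 * X[:, 0])).astype(float)
w = l1_logit(X, y)
```

The soft-thresholding step shrinks the coefficients of the two noise variables toward zero while keeping the informative one, which is the behavior that makes Lasso useful for selecting variable groups in the presence of multi-collinearity.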

The Empirical Exploration of the Conception on Nursing (간호개념에 대한 기초조사)

  • 백혜자
    • Journal of Korean Academy of Nursing
    • /
    • v.11 no.1
    • /
    • pp.65-87
    • /
    • 1981
  • The study aims to explore the concept of nursing held by clinical nurses. The data were collected from 225 nurses conveniently selected from the population of nurses working in Kang Won province. The findings include: 1) On Nurses' Qualifications. The respondents view specialized knowledge as a more important qualification of the nurse than warm personality: 92.9% of the respondents indicated specialized knowledge as the most important qualification, while only 43.1% indicated warm personality. 2) On the Nursing Profession. The respondents view the nursing profession as health-service oriented rather than as an independent profession. This suggests that the nursing profession is not consistent with the present health care delivery system, nor does the system support nurses working independently. 3) On Clients of Nursing Care. The respondents include patients, the family, and community residents in the category of nursing care clients. Specifically, 92.0% of the respondents view the patient as a client, while only 67.1% include the nursing student and 74.7% the nurse herself. This indicates a lack of recognition among nurses of who their clients are. 4) On the Priority of Nursing Care. Most of the respondents view the client's physical and psychological aspects as important components of nursing care, but not the spiritual ones: 96.0% of the respondents indicated the physical aspects and 93% the psychological ones, while 64.1% indicated the spiritual ones. This shows a lack of a comprehensive conception of the nursing dimension. 5) On Nursing Care. 91.6% of the respondents indicated that nursing care is the activity of decreasing pain or helping recovery from illness, while only 66.2% indicated carrying out physicians' medical orders. 6) On the Purpose of Nursing Care. 89.8% of the respondents indicated preventing illness, followed by 76.6% decreasing the pain of clients; on the other hand, maintaining health had the lowest selection, at 13.8%.
This shows a lack of recognition among nurses of maintaining health as the most important point. 7) On Knowledge Needed in Nursing Care. Most of the respondents view the knowledge required at the site of nursing care as needed: 81.3% of the respondents indicated simple curing methods, and 75.1%, 73.3%, and 71.6% respectively indicated child nursing, maternal nursing, and control of communicable disease. On the other hand, knowledge which has been neglected in the specialized courses of nursing education, that is, the way of thinking among community members, styles of coping with stress and personal relations in each home, and administration and management, had low selection rates of 48.9%, 41.8% and 41.3%. 8) On Nursing Ideas. The item with the highest degree of selection is that nurses know themselves rightly (the mean score on the measuring distribution was 4.205/5). At the lowest degrees were: devotion is the essential element of nursing (3.016/5); the religious problems that human beings cannot settle, such as fatal ones (2.860/5); and the nursing profession is worth pursuing in one's life (2.810/5). This suggests weak endorsement of the ideas essential to the professional sense of value. 9) On Nursing Services. The mean scores on the measuring distribution for the nursing services showed that inserting a mechanical airway was 2.132/5, the technique and knowledge of heart-lung resuscitation 2.892/5, and preventing air pollution 3.021/5. Notably, 41.1% of the respondents did not reply. 10) On Nurses' Qualifications. The respondents selected five items as the most important qualifications: 17.4% of the respondents indicated specialized knowledge, 15.3% the nurse's health, 10.6% satisfaction with the nursing profession, 9.8% the need for experience, and 9.2% comprehension and cooperation, while warm personality as a nursing qualification tended to be slighted.
11) Priorities of nursing care: the respondents selected three items as the most important components. Most viewed the client's physical, spiritual, and economic aspects as important components of nursing care, at 36.8%, 27.6%, and 13.8% respectively, while the educational aspects received only 1.8%. 12) Purpose of nursing care: the respondents selected four items as the most important purposes. Specifically, 29.3% of the respondents indicated curing clients' illness, 21.3% preventing illness, 17.4% decreasing pain, and 15.3% survival. 13) Analysis of important nursing care: ranging from 5 to 25 points, nurses' qualifications were concentrated at 95.1%; ranging from 3 to 25 points, the priorities of nursing care were concentrated at 96.4%; and ranging from 4 to 16 points, the purposes of nursing care were concentrated at 84.0%. 14) Analysis of general characteristics and facets of the nursing concept: the correlation between higher educational level and nursing care was significant (p < 0.0262); the correlation between lower educational level and the purpose of nursing care was significant (p < 0.002); and the correlation between nurses' working years and the degree of importance attached to the purpose of nursing care was significant (p < 0.0155), with the most affirmative answers given by nurses with two to four years of experience. 15) Nurses' qualifications and their degree of importance: the correlation between nurses' qualifications and their degree of importance was significant (r = 0.2172, p < 0.001). B. General characteristics of the subjects: the mean age of the subjects was 39, with 38.6% within the age range of 20-29; 52.6% were male; 57.9% had schizophrenia; 35.1% had graduated from high school or were high school dropouts; 56.1% had no religion; 52.6% were unmarried; 47.4% were on their first admission; and 91.2% were involuntary admission patients. C. Measurement of anxiety variables. 1.
The measurement tool for affective anxiety in this study demonstrated high reliability (.854). 2. The measurement tool for somatic anxiety in this study demonstrated high reliability (.920). D. Relationship between the anxiety variables and the general characteristics. 1. Relationship between affective anxiety and general characteristics: 1) the level of female patients was higher than that of male patients (t = 5.41, p < 0.05); 2) frequency of admission was related to affective anxiety, with the anxiety level highest at the first admission (F = 5.50, p < 0.005). 2. Relationship between somatic anxiety and general characteristics: 1) the 30-39 age range was found to have the highest level of somatic anxiety (F = 3.95, p < 0.005); 2) frequency of admission was related to somatic anxiety, with the anxiety level highest at the first admission (F = 9.12, p < 0.005). E. Analysis of significant anxiety symptoms for nursing intervention. 1. Seven items, namely dizziness, mental integration, sweating, restlessness, anxiousness, urinary frequency, and initial insomnia, accounted for 96% of the variation within the first 24 hours after admission. 2. Seven items, namely fear, paresthesias, restlessness, sweating, initial insomnia, tremors, and body aches and pains, accounted for 84% of the variation on the 10th day after admission.
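The significance levels quoted for the correlations above (e.g. r = 0.2172, p < 0.001 in finding 15 of the first study) can be sanity-checked with the standard t-approximation for a Pearson correlation. The sketch below is purely illustrative; it assumes the sample size n = 225 reported for the first study and tests the null hypothesis that the population correlation is zero.

```python
import math

def correlation_t_statistic(r, n):
    """t statistic for testing H0: rho = 0, given a Pearson
    correlation r computed from n paired observations.
    Under H0, t follows a t-distribution with n - 2 df."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Assumed values from the abstract: r = 0.2172, n = 225 nurses.
t = correlation_t_statistic(0.2172, 225)
# t is about 3.32 with 223 degrees of freedom, which exceeds the
# large-sample two-sided critical value for alpha = 0.001 (about 3.29),
# so the reported p < 0.001 is consistent.
```

In practice one would obtain the exact p-value from a t-distribution (e.g. `scipy.stats.pearsonr` does this internally), but the hand computation above is enough to confirm that the reported significance level is plausible for this sample size.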
