• Title/Summary/Keyword: Situational impairment (상황적장애)


Comparison of Weighted Needle Pinprick Sensory Thresholds and Sensory Nerve Conduction Studies in Diabetic Patients (당뇨병(糖尿病) 환자(患者)에서의 가중침자(加重針刺) 감각역치와 감각신경(感覺神經) 전도검사(傳導檢査)와의 비교(比較))

  • Yoo, Jae-Kwan; Kim, Seong-Ah; Lee, Jong-Young
    • Journal of Preventive Medicine and Public Health / v.28 no.4 s.51 / pp.899-910 / 1995
  • This study was conducted to determine the correlation between the weighted needle pinprick sensory threshold (PPT) and sensory nerve conduction tests. The subjects were 53 healthy controls, 31 diabetic patients without peripheral neuropathic symptoms (DM), and 36 diabetic patients with peripheral neuropathic symptoms (DN). PPT was measured bilaterally on the index and little fingers, as well as under the lateral malleolus. In the electrophysiologic assessment, the left and right median, ulnar, and sural nerves were studied. Mean PPT increased in the order of controls, DM, and DN. Age-adjusted PPT differed significantly among the three groups on the right little finger (p<0.05) and the left malleolus (p<0.05), but did not differ significantly between DN and DM at other sites. Sensory nerve conduction velocity and amplitude each differed significantly among the three groups (p<0.05). Correlations of PPT with sensory nerve conduction velocity and amplitude were statistically significant at each site, ranging from -0.4203 (left malleolus) to -0.5649 (right index finger) and from -0.3897 (left index finger) to -0.6200 (right index finger), respectively. When an electrophysiological study is not feasible, measurement of PPT may be helpful for assessing peripheral sensory neurological function.
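
The correlations reported above are plain Pearson coefficients between a threshold measure and a conduction measure at each site. A minimal Python sketch of that kind of computation, using hypothetical PPT and conduction-velocity values rather than data from the paper:

```python
# Hypothetical illustration only: Pearson's r between weighted needle pinprick
# sensory threshold (PPT) and sensory nerve conduction velocity at one site.
from scipy.stats import pearsonr

ppt_right_index = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 2.2, 4.4]           # PPT (arbitrary units, hypothetical)
scv_median_right = [52.0, 44.5, 55.3, 40.1, 47.8, 42.6, 51.2, 38.9]  # conduction velocity (m/s, hypothetical)

r, p = pearsonr(ppt_right_index, scv_median_right)
print(f"r = {r:.3f}, p = {p:.4f}")  # a negative r mirrors the -0.42 to -0.62 range reported above
```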


Autonomic Nervous Response of Female College Students with Type D Personality during an Acute Stress Task: Heart Rate Variability (Type D 성격 여대생의 급성 스트레스에 따른 자율신경계 반응 : 심박률 변동성을 중심으로)

  • Ko, Seon-Young; Kim, Myung-Sun
    • Korean Journal of Health Psychology / v.14 no.2 / pp.277-292 / 2009
  • This study investigated the responses of the autonomic nervous system of individuals with Type D personality during an acute stressful situation. The participants were 23 female students with Type D personality and 23 female students with non-Type D personality. The Stroop Color-Word Task was used to induce a stressful situation, and heart rate variability (HRV) was used to measure the responses of the autonomic nervous system during the baseline, acute stress, and recovery periods. Repeated-measures analysis of variance was used to compare the autonomic nervous system of the Type D group to that of the non-Type D group, and regression analysis was used to determine whether the Type D scale and stress vulnerability predicted the activity of the autonomic nervous system during the baseline period. The results demonstrated that the Type D group's normalized low frequency (LF norm) and ratio of low frequency to high frequency (LF/HF ratio) were higher than those of the non-Type D group, while its normalized high frequency (HF norm) was lower than that of the non-Type D group in all three periods. There were no statistically significant differences among the three periods in LF norm, HF norm, or the LF/HF ratio within the Type D group. The total score on the Type D scale (DS-14) and the scores for social inhibition and negative affect were independent predictors of LF norm and HF norm during the baseline period. The Type D group showed increased activation of the sympathetic nervous system and/or decreased activation of the parasympathetic nervous system. These results support the hypothesis that the Type D personality is vulnerable to stress. Moreover, the elevated sympathetic and/or reduced parasympathetic activation observed in the Type D group during the baseline period suggests that Type D individuals are susceptible to psychosomatic disorders.
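
LF norm, HF norm, and the LF/HF ratio are standard frequency-domain HRV indices. The following is a minimal sketch (not the authors' code) of how they are typically computed from RR intervals, assuming the conventional LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands and a 4 Hz resampling rate:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_frequency_indices(rr_ms, fs=4.0):
    """Return (LF norm, HF norm, LF/HF ratio) from a sequence of RR intervals in milliseconds."""
    rr_s = np.asarray(rr_ms, dtype=float) / 1000.0
    beat_times = np.cumsum(rr_s)                                 # time of each beat, in seconds
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)    # even time grid for resampling
    rr_even = interp1d(beat_times, rr_s, kind="cubic")(grid)     # evenly sampled tachogram
    freqs, psd = welch(rr_even - rr_even.mean(), fs=fs, nperseg=min(256, len(rr_even)))
    df = freqs[1] - freqs[0]
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum() * df        # low-frequency power
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum() * df        # high-frequency power
    lf_norm = 100.0 * lf / (lf + hf)
    hf_norm = 100.0 * hf / (lf + hf)
    return lf_norm, hf_norm, lf / hf
```

Applying such a function to the RR series recorded in the baseline, acute stress, and recovery periods would yield the three measures the study compares between the Type D and non-Type D groups.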

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, realizing flexible storage expansion for a massive amount of unstructured log data, and executing the considerable number of functions needed to categorize and analyze the stored unstructured log data, is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources and can flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; further, such strict schemas cannot easily expand across nodes by distributing the stored data when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented database with a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data grows rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies them according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates graphs of the log analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insert and query performance against a log data processing system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a log data insert performance evaluation of MongoDB over various chunk sizes.
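
The routing role the abstract assigns to the log collector module, with real-time logs going to the MySQL module and the remaining aggregated logs to the MongoDB module, could look roughly like the following sketch. The log types, table and collection names, and connection details are hypothetical, not taken from the paper:

```python
# Hedged sketch of a log collector that classifies each record by type and
# forwards it to the appropriate store; all names and hosts are assumptions.
import json
import pymysql
from pymongo import MongoClient

REALTIME_TYPES = {"error", "security"}  # hypothetical set of log types needing real-time analysis

mongo_logs = MongoClient("mongodb://mongos.example.local:27017")["banklogs"]["aggregated_logs"]
mysql_conn = pymysql.connect(host="mysql.example.local", user="collector",
                             password="secret", database="banklogs")

def route_log(raw_line: str) -> None:
    """Classify one JSON log line and forward it to the appropriate store."""
    record = json.loads(raw_line)
    if record.get("type") in REALTIME_TYPES:
        # Real-time path: relational store queried by the log graph generator.
        with mysql_conn.cursor() as cur:
            cur.execute(
                "INSERT INTO realtime_logs (ts, type, payload) VALUES (%s, %s, %s)",
                (record["ts"], record["type"], json.dumps(record)),
            )
        mysql_conn.commit()
    else:
        # Bulk path: document store, later processed by the Hadoop-based module.
        mongo_logs.insert_one(record)
```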
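
As a concrete illustration of the document-oriented, schema-free storage and Auto-Sharding the abstract describes, the following is a minimal pymongo sketch; the host name, the banklogs database, and the collection name are assumptions for illustration, not the authors' configuration:

```python
# Minimal sketch (assumptions, not the authors' code): heterogeneous log
# records share one schema-free collection, and a shard key enables sharding.
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://mongos.example.local:27017")  # mongos router of a sharded cluster
logs = client["banklogs"]["transaction_logs"]

# Unstructured logs with differing fields can be stored side by side.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "transfer", "branch": "A01", "amount": 150000},
    {"ts": datetime.now(timezone.utc), "type": "login", "channel": "web", "latency_ms": 82},
])
logs.create_index([("ts", ASCENDING), ("type", ASCENDING)])

# Sharding is enabled through the admin database (requires a sharded deployment).
client.admin.command("enableSharding", "banklogs")
client.admin.command("shardCollection", "banklogs.transaction_logs", key={"ts": 1})
```

The flexible schema means new log fields can appear without migrations, and distributing the collection by shard key is what lets storage grow by adding nodes as the volume of bank log data increases.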