• Title/Summary/Keyword: Computer Software

Search Results: 8,423

Permanent Preservation and Use of Historical Archives: Preservation Issues and Digitization of Historical Collections (역사기록물(Archives)의 항구적인 보존화 이용 : 보존전략과 디지털정보화)

  • Lee, Sang-min
    • The Korean Journal of Archival Studies
    • /
    • no.1
    • /
    • pp.23-76
    • /
    • 2000
  • In this paper, I examine what has been researched and determined about preservation strategy and the selection of preservation media in the western archival community. Archivists have primarily been concerned with the 'preservation' and 'use' of archival materials worthy of being preserved permanently. In the new information era, the preservation and use of archival materials face new challenges. The life expectancy of paper records has been shortened by the acidification and brittleness of modern papers. The emergence of information technology also affects the traditional ways of preserving and using archival materials. User expectations have become so technology-oriented and so complicated that archivists must act like information managers using computer technology rather than practicing traditional archival handicraft. Preservation strategy plays an important role in archival management as well as in information management. For cost-effective management of archives and archival institutions, a preservation strategy is a must. The preservation strategy encompasses all aspects of the archival preservation process and its practices: selection of archives, appraisal, inventorying, arrangement, description, conservation, microfilming or digitization, archival buildings, and access services. These archival functions should be considered in relation to each other to ensure proper preservation of archival materials. In an integrated preservation strategy, 'preservation' and 'use' should be combined and fulfilled without sacrificing either one. Preservation strategy planning is essential for determining the policies by which archives keep their holdings safe and provide people with maximum access in the most effective ways. Preservation microfilming ensures the permanent preservation of the information held in important archival materials. To this end, detailed standards have been developed to guarantee the permanence of microfilm as well as its product quality. Silver gelatin film can last up to 500 years in an optimum storage environment and is the most viable option as a permanent preservation medium. ISO and ANSI developed such standards for the quality of microfilms and microfilming technology. Preservation microfilming guidelines were also developed to ensure effective archival management and the picture quality of microfilms. It is essential to assess the need for preservation microfilming. Limited resources always place a restraint on preservation management, so appraisal (and selection) of what is to be preserved is the most important part of preservation microfilming. In addition, microfilms of standard quality can be scanned to produce quality digital images for instant use over the internet. As information technology develops, archivists have begun to utilize it to make preservation easier and more economical, and to promote the use of archival materials through computer communication networks. Digitization was introduced to provide easy and universal access to unique archives, and its large capacity for preserving archival data seems very promising. However, digitization, i.e., transferring images of records to electronic codes, still needs to be standardized. Digitized data are electronic records, and at present electronic records are very unstable and cannot be preserved permanently. Digital media, including optical disks, have not been proven reliable for permanent preservation. Because of their chemical coatings and their physical dependence on light, they are not stable and can be preserved for at most about 100 years in an optimum storage environment; most CD-Rs last only about 20 years. Furthermore, the obsolescence of hardware and software makes it hard to reproduce digital images made with earlier versions. Even when reformatting is possible, refreshing or upgrading digital images is very expensive, and the process has to be repeated at least every five to ten years. No standard for dealing with this obsolescence of hardware and software has yet come into being. In short, digital permanence is not a fact but an uncertain possibility. Archivists must consider in their preservation planning both the risk and the promise of introducing new technology. In planning the digitization of historical materials, archivists should incorporate planning for maintaining the digitized images and reformatting them for coming generations of new applications. Without such comprehensive planning, future use of the expensive digital images will become impossible; that is a loss of information and a final failure of both the 'preservation' and the 'use' of archival materials. As Peter Adelstein said, it is wise to be conservative when considerations of conservation are involved.

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun;Yang, Seong-Hun;Oh, Seung-Jin;Kang, Jinbeom
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.89-106
    • /
    • 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Because many industries lack the skilled manpower to analyze videos, machine learning and artificial intelligence are actively used to assist. In this situation, the demand for various computer vision technologies such as object detection and tracking, action detection, emotion detection, and Re-ID has also grown rapidly. However, object detection and tracking technology faces many difficulties that degrade performance, such as an object's re-appearance after leaving the video recording location, and occlusion. Accordingly, action and emotion detection models built on object detection and tracking also have difficulty extracting data for each object. In addition, deep learning architectures composed of multiple models suffer from performance degradation due to bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed model uses single-linkage hierarchical clustering based Re-ID and processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple metrics, near real-time processing performance, and prevents tracking failure due to object departure and re-emergence, occlusion, etc. By continuously linking the action and facial emotion detection results of each object to the same object, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of the object image detected by the object tracking model in each frame, and applies single-linkage hierarchical clustering against features from past frames to identify the same object when tracking has failed. Through this process, an object that has failed to be tracked can be re-tracked when it re-appears after occlusion or after leaving the video location. As a result, the action and facial emotion detection results of an object newly recognized after a tracking failure can be linked to those of the object that appeared in the past. As a way to improve processing performance, we introduce a per-object Bounding Box Queue and a Feature Queue method that reduce RAM requirements while maximizing GPU memory throughput. We also introduce the IoF (Intersection over Face) algorithm, which allows facial emotions recognized through AWS Rekognition to be linked with object tracking information. The academic significance of this study is that the two-stage re-identification model can achieve real-time performance, through the proposed processing techniques, even in the costly setting of simultaneous action and facial emotion detection, without sacrificing accuracy by falling back on simple metrics. The practical implication is that the various industrial fields that require action and facial emotion detection but struggle with object tracking failures can analyze videos effectively through the proposed model. The proposed model, with its high re-identification accuracy and processing performance, can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where the integration of tracking information and extracted metadata creates great industrial and business value. In the future, in order to measure object tracking performance more precisely, an experiment using the MOT Challenge dataset, which is used by many international conferences, is needed. We will also investigate the problems that the IoF algorithm cannot solve in order to develop a complementary algorithm, and we plan to conduct additional research to apply this model to datasets from various fields related to intelligent video analysis.
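The abstract's two key mechanisms, single-linkage matching of appearance features against past frames and the IoF overlap test that attaches a recognized face to a tracked person box, can be illustrated with a short sketch. This is a minimal illustration under assumed names and values (cosine distance, a 0.3 matching threshold, 512-dimensional features), not the authors' implementation.

```python
# Minimal sketch: single-linkage Re-ID matching plus an IoF overlap check.
# Feature vectors are assumed to come from a Re-ID backbone (e.g., Torchreid).
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity between two feature vectors."""
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def assign_identity(feature, galleries, threshold=0.3):
    """Assign a detection to an existing identity by single linkage: the distance
    to an identity is the minimum distance to any of its stored gallery features.
    A new identity is created if no identity falls within the threshold."""
    best_id, best_dist = None, float("inf")
    for track_id, feats in galleries.items():
        d = min(cosine_distance(feature, f) for f in feats)  # single linkage
        if d < best_dist:
            best_id, best_dist = track_id, d
    if best_id is not None and best_dist <= threshold:
        galleries[best_id].append(feature)          # grow the matched gallery
        return best_id
    new_id = max(galleries, default=-1) + 1         # unmatched -> new identity
    galleries[new_id] = [feature]
    return new_id

def iof(face_box, person_box):
    """Intersection over Face: overlap area divided by face-box area, used to
    attach a recognized face (emotion) to a tracked person box. Boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(face_box[0], person_box[0]), max(face_box[1], person_box[1])
    ix2, iy2 = min(face_box[2], person_box[2]), min(face_box[3], person_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    face_area = (face_box[2] - face_box[0]) * (face_box[3] - face_box[1])
    return inter / face_area if face_area > 0 else 0.0

# Usage: galleries maps track IDs to lists of past feature vectors.
galleries = {0: [np.random.rand(512)], 1: [np.random.rand(512)]}
print(assign_identity(np.random.rand(512), galleries))
print(iof((10, 10, 20, 20), (5, 5, 40, 60)))
```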

Power Conscious Disk Scheduling for Multimedia Data Retrieval (저전력 환경에서 멀티미디어 자료 재생을 위한 디스크 스케줄링 기법)

  • Choi, Jung-Wan;Won, Yoo-Jip;Jung, Won-Min
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.4
    • /
    • pp.242-255
    • /
    • 2006
  • In recent years, the popularization of mobile devices such as smartphones, PDAs, and MP3 players has rapidly increased the need for power management technology, one of the most essential factors in mobile device design. Hard disks, despite their low price, offer large capacity and high speed, and today they can be made small enough for mobile devices; they are therefore well suited to such devices, but they consume too much power to be embedded without care. Motivated by this, in this paper we suggest and evaluate methods for minimizing power consumption while playing multimedia data from disk in real time. The strict limits on the power consumption of mobile devices have a big impact on the design of both hardware and software. One difference between real-time multimedia streaming data and legacy text-based data is the requirement for continuity of data supply. This is why the disk drive must remain in the active state for the entire playback duration, which, from a power management point of view, can be a great burden. The legacy power management function of a mobile disk drive degrades the quality of multimedia playback because of excessive I/O requests issued while the disk is in the standby state. Therefore, in this paper, we analyze the power consumption profile of the disk drive in detail and develop an algorithm that plays multimedia data effectively using less power. The algorithm calculates the number of data blocks to read in each burst and the durations of the active and standby states; from these, it produces an optimal schedule that guarantees continuous playback of the data blocks stored on the mobile disk drive. We implemented our algorithms in publicly available MPEG player software. This player saves up to 60% of power consumption compared with keeping the disk drive in the active state at all times, and 38% compared with a disk drive controlled by the native power management method.
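As an illustration of the burst-and-standby idea described above (read enough blocks while active to keep the playback buffer full through the next standby period), here is a minimal sketch. The buffer size, block size, bit rates, spin-up time, and power levels are assumed placeholder values, not the paper's measured parameters or its exact scheduling formula.

```python
# Minimal sketch of power-conscious burst scheduling for multimedia playback.
def burst_schedule(buffer_bytes, block_bytes, playback_bps, transfer_bps, spinup_s):
    """Return (blocks_per_burst, active_s, standby_s) for one read/idle cycle."""
    blocks_per_burst = buffer_bytes // block_bytes
    burst_bytes = blocks_per_burst * block_bytes
    # While reading, playback also drains the buffer, so the net fill rate
    # is (transfer - playback).
    active_s = burst_bytes / (transfer_bps - playback_bps)
    # The buffered data must cover playback during standby plus the spin-up
    # delay before the next burst can begin.
    standby_s = burst_bytes / playback_bps - spinup_s
    return blocks_per_burst, active_s, max(standby_s, 0.0)

def cycle_energy(active_s, standby_s, spinup_s,
                 p_active=2.5, p_standby=0.2, p_spinup=3.0):
    """Energy (joules) of one active/standby cycle under assumed power levels (watts)."""
    return p_active * active_s + p_standby * standby_s + p_spinup * spinup_s

blocks, t_act, t_stb = burst_schedule(
    buffer_bytes=8 * 2**20, block_bytes=4096,
    playback_bps=1.5e6 / 8, transfer_bps=20 * 2**20, spinup_s=1.5)
print(blocks, round(t_act, 2), round(t_stb, 2),
      round(cycle_energy(t_act, t_stb, 1.5), 1))
```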

Computer Assisted EPID Analysis of Breast Intrafractional and Interfractional Positioning Error (유방암 방사선치료에 있어 치료도중 및 분할치료 간 위치오차에 대한 전자포탈영상의 컴퓨터를 이용한 자동 분석)

  • Sohn Jason W.;Mansur David B.;Monroe James I.;Drzymala Robert E.;Jin Ho-Sang;Suh Tae-Suk;Dempsey James F.;Klein Eric E.
    • Progress in Medical Physics
    • /
    • v.17 no.1
    • /
    • pp.24-31
    • /
    • 2006
  • Automated analysis software was developed to measure the magnitude of intrafractional and interfractional errors during breast radiation treatments. Error analysis results are important for determining suitable planning target volumes (PTV) prior to implementing breast-conserving 3-D conformal radiation treatment (CRT). The electronic portal imaging device (EPID) used for this study was a Portal Vision LC250 liquid-filled ionization detector (fast frame-averaging mode, 1.4 frames per second, 256 × 256 pixels). Twelve patients were imaged for a minimum of 7 treatment days. During each treatment day, an average of 8 to 9 images per field were acquired (dose rate of 400 MU/minute). We developed automated image analysis software to quantitatively analyze 2,931 images (encompassing 720 measurements). Standard deviations ($\sigma$) of intrafractional (breathing motion) and interfractional (setup uncertainty) errors were calculated. The PTV margin needed to include the clinical target volume (CTV) with a 95% confidence level was calculated as $2\;(1.96\;{\sigma})$. To compensate for intrafractional error (mainly due to breathing motion), the required PTV margin ranged from 2 mm to 4 mm. However, PTV margins compensating for interfractional error ranged from 7 mm to 31 mm. The total average error observed for the 12 patients was 17 mm. The interfractional setup error was 2 to 15 times larger than the intrafractional error associated with breathing motion. Prior to 3-D conformal or IMRT breast treatment, the magnitude of setup errors must be measured and properly incorporated into the PTV. To reduce large PTVs for breast IMRT or 3-D CRT, an image-guided system would be extremely valuable, if not required. EPID systems should incorporate automated analysis software as described in this report to process and take advantage of the large number of EPID images available for error analysis, which will help individual clinics arrive at an appropriate PTV for their practice. Such systems can also provide valuable patient monitoring information with minimal effort.
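For reference, the margin rule quoted above can be expressed in a couple of lines; the sigma values here are hypothetical, chosen only to fall within the ranges reported in the abstract.

```python
# Minimal sketch of the PTV margin rule: margin = 2 * (1.96 * sigma)
# for ~95% coverage of the CTV. Sigma values below are illustrative only.
def ptv_margin_mm(sigma_mm: float) -> float:
    return 2 * 1.96 * sigma_mm

intrafractional_sigma = 0.9   # breathing motion, mm (hypothetical)
interfractional_sigma = 4.5   # setup uncertainty, mm (hypothetical)
print(round(ptv_margin_mm(intrafractional_sigma), 1))   # ~3.5 mm
print(round(ptv_margin_mm(interfractional_sigma), 1))   # ~17.6 mm
```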


Diagnostic Performance of Combined Single Photon Emission Computed Tomographic Scintimammography and Ultrasonography Based on Computer-Aided Diagnosis for Breast Cancer (유방 SPECT 및 초음파 컴퓨터진단시스템 결합의 유방암 진단성능)

  • Hwang, Kyung-Hoon;Lee, Jun-Gu;Kim, Jong-Hyo;Lee, Hyung-Ji;Om, Kyong-Sik;Lee, Byeong-Il;Choi, Duck-Joo;Choe, Won-Sick
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.201-208
    • /
    • 2007
  • Purpose: We investigated whether the diagnostic performance of SPECT scintimammography (SMM) can be improved by adding computer-aided diagnosis (CAD) of ultrasonography (US). Materials and Methods: We reviewed breast SPECT SMM images and corresponding US images from 40 patients with breast masses (21 malignant and 19 benign tumors). The quantitative data of SPECT SMM were obtained as the uptake ratio of the lesion to the contralateral normal breast. The morphologic features of the breast lesions on US were extracted and quantitated using automated CAD software. The diagnostic performance of SPECT SMM and of CAD of US alone was determined using receiver operating characteristic (ROC) curve analysis. The best discriminating parameter (D-value) combining SPECT SMM and CAD of US was created. The sensitivity, specificity, and accuracy of the two diagnostic modalities combined were compared to those of each single modality. Results: Both SPECT SMM and CAD of US showed relatively good diagnostic performance (area under the curve = 0.846 and 0.831, respectively). Combining the results of SPECT SMM and CAD of US improved diagnostic performance (area under the curve = 0.860), but there was no statistical difference in sensitivity, specificity, and accuracy between the combined method and a single modality. Conclusion: Combining the results of SPECT SMM and CAD of breast US does not significantly improve the diagnostic performance for breast cancer compared with SPECT SMM alone. However, SPECT SMM and CAD of US may complement each other in the differential diagnosis of breast cancer.
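A minimal sketch of the kind of analysis described above: two per-lesion scores (an SMM uptake ratio and a US CAD score) are combined into a single discriminating value and the ROC AUCs are compared. The abstract does not specify how its D-value was constructed, so a logistic-regression combination is used here purely as one plausible choice, on synthetic data.

```python
# Minimal sketch: combine two diagnostic scores and compare ROC AUCs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_malignant, n_benign = 21, 19
y = np.r_[np.ones(n_malignant), np.zeros(n_benign)]
# Synthetic per-lesion scores (not study data).
smm_uptake = np.r_[rng.normal(2.0, 0.6, n_malignant), rng.normal(1.3, 0.4, n_benign)]
us_cad     = np.r_[rng.normal(0.7, 0.2, n_malignant), rng.normal(0.4, 0.2, n_benign)]

X = np.c_[smm_uptake, us_cad]
d_value = LogisticRegression().fit(X, y).decision_function(X)  # combined parameter

print("AUC SMM only:    ", round(roc_auc_score(y, smm_uptake), 3))
print("AUC US CAD only: ", round(roc_auc_score(y, us_cad), 3))
print("AUC combined (D):", round(roc_auc_score(y, d_value), 3))
```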

The Development of Theoretical Model for Relaxation Mechanism of Superparamagnetic Nano Particles (초상자성 나노 입자의 자기이완 특성에 관한 이론적 연구)

  • 장용민;황문정
    • Investigative Magnetic Resonance Imaging
    • /
    • v.7 no.1
    • /
    • pp.39-46
    • /
    • 2003
  • Purpose: To develop a theoretical model for the magnetic relaxation behavior of a superparamagnetic nano-particle agent that demonstrates multi-functionality such as liver and lymph node specificity, and, based on the developed model, to perform computer simulations clarifying the relationship between relaxation time and the applied magnetic field strength. Materials and Methods: The ultrasmall superparamagnetic iron oxide (USPIO) was encapsulated with a biocompatible polymer, and a relaxation model was developed based on the outer-sphere mechanism, which results from diffusion and/or electron spin fluctuation. In addition, the Brillouin function was introduced to describe the full magnetization, since the low-field approximation adopted in the paramagnetic case is no longer valid. The developed model therefore describes the T1 and T2 relaxation behavior of superparamagnetic iron oxide both at low field and at high field. Based on our model, computer simulations were performed to test the relaxation behavior of the superparamagnetic contrast agent over various magnetic fields using MathCad (MathCad, U.S.A.), a symbolic computation software. Results: For the T1 and T2 magnetic relaxation characteristics of ultrasmall superparamagnetic iron oxide, the theoretical model showed that at low field (< 1.0 MHz), $\tau_{S1}$ ($\tau_{S2}$ in the case of T2), a correlation time in the spectral density function, plays the major role. This suggests that realignment of the nano-magnetic particles is most important at low magnetic field. On the other hand, at high field, $\tau$, another correlation time in the spectral density function, plays the major role. Since $\tau$ is closely related to particle size, this suggests that the difference in R1 and R2 across particle sizes at high field results not from the realignment of the particles but from the particle size itself. Within the normal body temperature range, the temperature dependence of the T1 and T2 relaxation times showed no change at high field; in particular, T1 showed less temperature dependence than T2. Conclusion: We developed a theoretical model of the magnetic relaxation behavior of ultrasmall superparamagnetic iron oxide (USPIO), which has been reported to show clinical multi-functionality, by utilizing the physical properties of the nano-magnetic particle. In addition, based on the developed model, computer simulations were performed to investigate the relationship between the relaxation time of USPIO and the applied magnetic field strength.
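For reference, the Brillouin-function description of the full (non-linearized) magnetization referred to above is the standard expression for a system of total angular momentum quantum number $J$; the low-field, Curie-law behavior used in the paramagnetic case is its small-argument limit:

$$M = N g \mu_B J\, B_J(x), \qquad B_J(x) = \frac{2J+1}{2J}\coth\!\left(\frac{(2J+1)\,x}{2J}\right) - \frac{1}{2J}\coth\!\left(\frac{x}{2J}\right), \qquad x = \frac{g \mu_B J B}{k_B T},$$

which reduces to $M \approx N g^{2} \mu_B^{2} J(J+1) B / (3 k_B T)$ when $x \ll 1$.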


Comparative study of volumetric change in water-stored and dry-stored complete denture base (공기중과 수중에서 보관한 총의치 의치상의 체적변화에 대한 비교연구)

  • Kim, Jinseon;Lee, Younghoo;Hong, Seoung-Jin;Paek, Janghyun;Noh, Kwantae;Pae, Ahran;Kim, Hyeong-Seob;Kwon, Kung-Rock
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.59 no.1
    • /
    • pp.18-26
    • /
    • 2021
  • Purpose: Generally, patients are instructed to store their dentures in water when removed from the mouth. However, few studies have reported the advantage, in terms of volumetric change, of underwater storage over dry storage. To serve as a reference for defining the proper denture storage method, this study aims to evaluate the volumetric change and dimensional deformation under underwater and dry storage. Materials and Methods: Definitive casts were scanned with a model scanner, and denture bases were designed with computer-aided design (CAD) software. Twelve denture bases (6 upper, 6 lower) were printed with a 3D printer. The printed denture bases were invested and flasked with the heat-curing method. The 6 upper and 6 lower dentures were divided into groups A and B, with each group containing 3 upper and 3 lower dentures. Group A was stored dry at room temperature; group B was stored underwater. Group B was scanned every 24 hours for 28 days, and the scanned data were saved as stereolithography (SLA) files. These files were analyzed to measure the difference in volumetric change over the month, and the Kruskal-Wallis test was used for statistical analysis. A best-fit algorithm was used for superimposition, and a 3-dimensional color-coded map was used to observe the pattern of change of the impression surface. Results: No significant difference was found in volumetric change regardless of the storage method. In the dry-stored denture bases, significant changes were found in the palate of the upper jaw and the posterior lingual border of the lower jaw in the direction away from the underlying tissue, and in the maxillary tuberosity of the upper jaw and the retromolar pad area of the lower jaw in the direction towards the underlying tissue. Conclusion: Storing the denture underwater produces less volumetric change of the impression surface than storing it in dry air.
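A minimal sketch of the statistical step described above: per-denture volumetric changes compared across storage conditions with the Kruskal-Wallis test. The numbers are synthetic placeholders, not the study's measurements.

```python
# Minimal sketch: Kruskal-Wallis comparison of volumetric changes by storage method.
from scipy.stats import kruskal

# Hypothetical volumetric changes (mm^3) after 28 days for each denture base.
dry_stored   = [12.4, 10.8, 15.1, 11.9, 13.3, 12.0]   # group A (dry, room temperature)
water_stored = [11.7, 12.5, 10.9, 13.0, 12.2, 11.4]   # group B (underwater)

stat, p = kruskal(dry_stored, water_stored)
print(f"H = {stat:.3f}, p = {p:.3f}")   # p > 0.05 -> no significant difference
```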

An Empirical Study on the Success Factors of Korean Venture Firms: The Suggestion of the Integrated Model Utilizing Secondary Data (한국 벤처기업의 성공요인에 관한 실증적 연구: 2차 자료를 활용한 통합적 모형의 제시)

  • Koh, InKon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.13 no.2
    • /
    • pp.1-13
    • /
    • 2018
  • This study examines the relationship between organizational characteristics (industry, size, location, development stage, and company age) and the success factors of Korean venture firms using secondary data. The industries with the highest sales figures in 2016 were food / fiber / (non)metals, and the smallest category was software development. The sectors with the highest net profit were computer / semiconductor / electronic components, and the smallest was telecommunication equipment / broadcasting equipment. The industries with the largest sales growth rates were IT / broadcasting services and software development. The industries with the highest net profit margin of sales were energy / medical / precision, and the smallest was telecommunication equipment / broadcasting equipment. In terms of the number of employees, venture firms with more than 100 employees had the largest sales and net profit, while those with 1 to 9 employees had the smallest. These results are predictable, however: in general, the number of employees is highly correlated with sales and net profit. The sales growth rate and the net profit margin of sales may be more meaningful; in particular, firms with 50 to 99 employees showed high sales growth rates and net profit margins. In terms of location, Seoul / Incheon / Gyeonggi had the highest sales and Daejeon / Sejong / Chungcheong / Gangwon the lowest. Gwangju / Jeolla / Jeju and Seoul / Incheon / Gyeonggi were almost identical as the areas with the largest net profit, while Daejeon / Sejong / Chungcheong / Gangwon had the lowest. Unusually, the areas with the highest sales growth rate and the highest net profit margin of sales were Gwangju / Jeolla / Jeju, and the smallest were Busan / Jeonnam / Ulsan. In the relationship between development stage and company performance, sales were highest in the maturity and decline stages and lowest in the establishing stage; net profit was likewise highest in the maturity stage and smallest in the establishing stage. The sales growth rate shows a typical pattern in the order of establishing stage, early growth stage, high growth stage, maturity stage, and decline stage. In terms of company age, sales and net profit were highest for firms 21 years or older and smallest for those less than 3 years old. The sales growth rate was highest for firms 3 years old or younger, and the net profit margin of sales was highest for firms 4 to 10 years old. This study presents many useful implications by suggesting an integrated research model, examining the success factors of Korean venture firms, and presenting methods for applying secondary data to analyze the current status of the venture industry in Korea.

Accuracy of 5-axis precision milling for guided surgical template (가이드 수술용 템플릿을 위한 5축 정밀가공공정의 정확성에 관한 연구)

  • Park, Ji-Man;Yi, Tae-Kyoung;Jung, Je-Kyo;Kim, Yong;Park, Eun-Jin;Han, Chong-Hyun;Koak, Jai-Young;Kim, Seong-Kyun;Heo, Seong-Joo
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.48 no.4
    • /
    • pp.294-300
    • /
    • 2010
  • Purpose: Template-guided implant surgery offers several advantages over the traditional approach. The purpose of this study was to evaluate the accuracy of a coordinate synchronization procedure with a 5-axis milling machine for surgical template fabrication, assessed by reverse engineering through universal CAD software. Materials and Methods: The study was performed on ten edentulous models with embedded gutta-percha stoppings hidden under a silicone gingival form. A platform for synchronization was formed on the bottom side of the models, and the casts were imaged with cone-beam CT. The vectors of the stoppings were extracted and transferred to those of the planned implants in virtual planning software. The depth of the milling process was set to the level of one half of the stoppings, and the coordinates of the data were synchronized to the model image. Synchronization of the milling coordinates was done by a conversion process referenced to the synchronization platform on the bottom of the model. The models were fixed on the synchronization plate of the 5-axis milling machine, and drilling was done along the planned vector and depth, based on the synchronized data, with a twist drill of the same diameter as the GP stopping. For 3D rendering and image merging, the impression tray was set on the cone-beam CT, and pre- and post-drilling CT acquisitions were made with the model fixed in the impression body. The accuracy analysis was done with Solidworks (Dassault Systèmes, Concord, USA) by measuring the vectors of the stoppings' top and bottom centers in the experimental models, after merging and reverse engineering the planned and post-drilling CT images. Correlations among the parameters were tested by means of the Pearson correlation coefficient, calculated with SPSS (release 14.0, SPSS Inc., Chicago, USA) ($\alpha$ = 0.05). Results: Due to the declination, GP remnants on the upper half of the stoppings were observed for every drilled bore. The deviation between the planned image and the reverse-engineered drilled bore was 0.31 (0.15 - 0.42) mm at the entrance, 0.36 (0.24 - 0.51) mm at the apex, and the angular deviation was 1.62 (0.54 - 2.27)$^{\circ}$. There was a positive correlation between the deviation at the entrance and that at the apex (Pearson correlation coefficient = 0.904, P = .013). Conclusion: The coordinate synchronization 5-axis milling procedure has adequate accuracy for the production of guided surgical templates.
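The deviation metrics reported above (positional error at the entrance and apex of each bore, the angle between the planned and drilled axes, and the Pearson correlation between entrance and apex errors) can be computed from top/bottom center coordinates with a short sketch. The coordinates below are synthetic placeholders, not the study's measurements.

```python
# Minimal sketch: entrance/apex deviation, angular deviation, and their correlation.
import numpy as np
from scipy.stats import pearsonr

def deviations(planned_top, planned_apex, drilled_top, drilled_apex):
    """Return (entrance_mm, apex_mm, angle_deg) for one bore."""
    entrance = float(np.linalg.norm(drilled_top - planned_top))
    apex = float(np.linalg.norm(drilled_apex - planned_apex))
    v_plan, v_drill = planned_apex - planned_top, drilled_apex - drilled_top
    cos_a = np.dot(v_plan, v_drill) / (np.linalg.norm(v_plan) * np.linalg.norm(v_drill))
    angle = float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
    return entrance, apex, angle

# Three hypothetical bores: (planned top, planned apex, drilled top, drilled apex), mm.
bores = [
    (np.array([0.0, 0.0, 0.0]),   np.array([0.0, 0.0, -8.0]),
     np.array([0.25, 0.10, 0.0]), np.array([0.30, 0.20, -8.0])),
    (np.array([10.0, 0.0, 0.0]),  np.array([10.0, 0.0, -8.0]),
     np.array([10.20, 0.20, 0.0]), np.array([10.40, 0.10, -8.0])),
    (np.array([20.0, 0.0, 0.0]),  np.array([20.0, 0.0, -8.0]),
     np.array([20.15, 0.05, 0.0]), np.array([20.30, 0.25, -8.0])),
]
entrance, apex, angle = zip(*(deviations(*b) for b in bores))
print([round(a, 2) for a in angle])   # angular deviation per bore (degrees)
print(pearsonr(entrance, apex))       # correlation between entrance and apex errors
```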

Development of the Information Delivery System for the Home Nursing Service (가정간호사업 운용을 위한 정보전달체계 개발 I (가정간호 데이터베이스 구축과 뇌졸중 환자의 가정간호 전산개발))

  • Park, J.H;Kim, M.J;Hong, K.J;Han, K.J;Park, S.A;Yung, S.N;Lee, I.S;Joh, H.;Bang, K.S
    • Journal of Home Health Care Nursing
    • /
    • v.4
    • /
    • pp.5-22
    • /
    • 1997
  • The purpose of the study was to develop an information delivery system for the home nursing service and to demonstrate and evaluate its efficiency. The research was conducted from September 1996 to August 31, 1997. In the first stage, an assessment tool was developed through literature review for patients with cerebrovascular disease, who have the first priority for home nursing service (HNS) among patients with various health problems at home. Next, after identification of the patient's nursing problems by the home care nurse with the assessment tool, the patient classification system developed by Park (1988), comprising 128 nursing activities under 6 categories, was used to identify the home care nurse's activities for the patient with CVA at home. The research team held several workshops with 5 clinical nurse experts to refine it; in the end, 110 nursing activities under 11 categories were derived for patients with CVA. In the second stage, algorithms were developed to connect the 110 nursing activities with the patient nursing problems identified by the assessment tool. The computerization of the algorithms proceeded as follows. The algorithms were realized as a computer program using software engineering techniques. Development followed the prototyping method, starting from requirement analysis of the software specifications. The basic qualities of usability, compatibility, adaptability, and maintainability were taken into consideration, with particular emphasis on efficient construction of the database. To enhance database efficiency and establish structural cohesion, each data field is categorized with a weight of relevance to the particular disease; this approach permits easy adaptation when numerous diseases are added in the future. In parallel, expandability and maintainability were stressed throughout program development, which led to a modular design. Since the number of target diseases will grow as the project progresses, and since the diseases are interrelated and coupled with each other, expandability and maintainability must be given high priority; furthermore, because the system is to be integrated with other medical systems in the future, these properties are very important. The prototype developed in this project is to be evaluated through system testing. There are various evaluation metrics, such as cohesion, coupling, and adaptability, but direct measurement of these metrics is very difficult, so analytical and quantitative evaluation is almost impossible. Therefore, instead of analytical evaluation, experimental evaluation will be applied through test runs by various users. This system testing will provide analysis from the users' viewpoint, and the detailed and additional requirement specifications arising from users' real situations will be fed back into the system modeling. The degree of freedom of the input and output will also be improved, and hardware limitations will be investigated. After refinement, the prototype system will be used as a design template to develop a more extensive system: relevant modules will be developed for the various diseases and integrated through a macroscopic design process focusing on inter-modularity, generality of the database, and compatibility with other systems.
The Home Care Evaluation System comprises three main modules: (1) general information on a patient, (2) general health status of a patient, and (3) the cerebrovascular disease patient. The general health status module has five sub-modules: physical measurement, vitality, nursing, pharmaceutical description, and emotional/cognitive ability. The CVA patient module is divided into ten sub-modules, such as subjective sense, consciousness, memory, and language pattern. The typical sub-modules are described in Appendix 3.
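To make the algorithmic linkage concrete, here is a minimal sketch of how assessment-identified nursing problems might be mapped to weighted nursing-activity records of the kind described above. All codes, category names, and weights are hypothetical illustrations, not the project's actual database contents.

```python
# Minimal sketch: rule-based mapping from nursing problems to activity codes,
# with a disease-relevance weight per link (all values are hypothetical).
from dataclasses import dataclass

@dataclass
class NursingActivity:
    code: str
    category: str
    description: str

# Hypothetical subset of the 110 CVA nursing activities under 11 categories.
ACTIVITIES = {
    "A01": NursingActivity("A01", "positioning", "Reposition patient every 2 hours"),
    "B03": NursingActivity("B03", "communication", "Assess language pattern and aphasia"),
    "C02": NursingActivity("C02", "medication", "Monitor anticoagulant administration"),
}

# Hypothetical problem-to-activity table with relevance weights (0-1) for the
# CVA module; other disease modules would add their own weighted links.
PROBLEM_TO_ACTIVITIES = {
    "impaired_mobility": [("A01", 0.9)],
    "impaired_verbal_communication": [("B03", 0.8)],
    "risk_of_thrombosis": [("C02", 0.7)],
}

def plan_activities(problems, min_weight=0.5):
    """Return the activities whose relevance weight meets the threshold."""
    plan = []
    for problem in problems:
        for code, weight in PROBLEM_TO_ACTIVITIES.get(problem, []):
            if weight >= min_weight:
                plan.append(ACTIVITIES[code])
    return plan

for act in plan_activities(["impaired_mobility", "impaired_verbal_communication"]):
    print(act.code, act.category, act.description)
```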
