
Lung Clearance of Inhaled $^{99m}Tc$-DTPA by Urine Excretion Ratio (소변내 방사능배설량비를 이용한 $^{99m}Tc$-DTPA 폐청소율에 관한 연구)

  • Suh, G.Y.;Park, K.Y.;Jung, M.P.;Yoo, C.G.;Lee, D.S.;Kim, Y.W.;Han, S.K.;Jung, J.K.;Lee, M.C.;Shim, Y.S.;Kim, K.Y.;Han, Y.C.
    • Tuberculosis and Respiratory Diseases / v.40 no.4 / pp.357-366 / 1993
  • Background: Lung clearance of inhaled $^{99m}Tc$-DTPA reflects alveolar epithelial permeability and has been reported to be more sensitive than conventional pulmonary function tests in detecting lung epithelial damage. However, measuring lung clearance of inhaled $^{99m}Tc$-DTPA with a gamma camera may not always reflect alveolar epithelial permeability exactly, because the measurement is influenced by mucociliary clearance depending on the site of particle deposition. Moreover, this method demands much time and effort from the patient, who has to sit or lie still in front of the camera for a prolonged period. Most of the absorbed DTPA is excreted in urine within 24 hours, and the amount of DTPA excreted in urine during the first few hours after inhalation is influenced by the absorption rate, which correlates with alveolar epithelial permeability; this suggests that urinary excretion, especially in the first few hours, may be an alternative index of lung clearance. The purpose of this study was to evaluate the usefulness of the ratio of $^{99m}Tc$-DTPA excreted in 2-hour and 24-hour urine as an index of alveolar epithelial damage. Methods: Pulmonary function tests including diffusing capacity, lung clearance of $^{99m}Tc$-DTPA measured by gamma camera ($T_{1/2}$), and the 2hr/24hr urine excretion ratio (Ratio) of inhaled $^{99m}Tc$-DTPA were compared in 8 normal subjects and 14 patients with diffuse interstitial lung disease. Results: 1) In the normal controls, there was a significant negative correlation between $T_{1/2}$ and Ratio (r=-0.77, p<0.05). In patients with diffuse interstitial lung disease, there was also a significant negative correlation between $T_{1/2}$ and Ratio (r=-0.63, p<0.05). 2) In patients with diffuse interstitial lung disease, $T_{1/2}$ was $38.65{\pm}11.63$ min, significantly lower than that of the normal controls, $55.53{\pm}11.15$ min, and Ratio was $52.15{\pm}10.07\%$, significantly higher than that of the normal controls, $40.43{\pm}5.53\%$ (p<0.05). 3) There was no significant correlation between $T_{1/2}$ or Ratio and the diffusing capacity of the lung in either patients or controls (p>0.05). Conclusion: These results suggest that the 2hr/24hr urine excretion ratio of inhaled $^{99m}Tc$-DTPA is a useful, simple bedside test for assessing alveolar epithelial permeability and may serve as an additional follow-up test, complementing conventional pulmonary function tests, in patients with diffuse interstitial lung disease.
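
As a quick illustration of the index proposed above, here is a minimal Python sketch that computes the 2hr/24hr urinary excretion ratio and its correlation with the gamma-camera half-time $T_{1/2}$. The numeric values are placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr

def excretion_ratio(activity_2h, activity_24h):
    """2hr/24hr ratio: percent of the 24-hour urinary activity excreted by 2 hours."""
    return activity_2h / activity_24h * 100

t_half = np.array([55.0, 62.0, 48.0, 51.0])   # gamma-camera T1/2 in minutes (placeholder)
ratio = np.array([40.0, 36.5, 45.0, 43.0])    # 2hr/24hr excretion ratio in % (placeholder)
r, p = pearsonr(t_half, ratio)                # a negative r is expected, as reported
print(r, p)
```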


Pulmonary Mycoses in Immunocompromised Hosts (면역기능저하 환자에서 폐진균증에 대한 임상적 고찰)

  • Suh, Gee-Young;Park, Sang-Joon;Kang, Kyeong-Woo;Koh, Young-Min;Kim, Tae-Sung;Chung, Man-Pyo;Kim, Ho-Joong;Han, Jong-Ho;Choi, Dong-Chull;Song, Jae-Hoon;Kwon, O-Jung;Rhee, Chong-H.
    • Tuberculosis and Respiratory Diseases / v.45 no.6 / pp.1199-1213 / 1998
  • Background: The number of immunocompromised hosts has been increasing steadily, and a new pulmonary infiltrate in these patients is a potentially lethal condition that needs rapid diagnosis and treatment. In this study we sought to examine the clinical manifestations, radiologic findings, and therapeutic outcomes of pulmonary mycoses presenting as a new pulmonary infiltrate in immunocompromised hosts. Method: All cases presenting as a new pulmonary infiltrate in immunocompromised hosts and confirmed to be pulmonary mycoses by pathologic examination or by positive culture from a sterile site between October 1996 and April 1998 were included in the study, and their charts and radiologic findings were retrospectively reviewed. Results: In all, 14 cases of pulmonary mycoses from 13 patients (male:female ratio = 8:5, median age 47 yr) were found. Twelve cases were diagnosed as aspergillosis and two as mucormycosis. Major risk factors for fungal infection were chemotherapy for hematologic malignancy (10 cases) and organ transplantation (4 cases). Three cases were receiving empirical amphotericin B at the time of appearance of the new lung infiltrates. Cases in the hematologic malignancy group had more prominent symptoms: fever (9/10), cough (6/10), sputum (5/10), dyspnea (4/10), and chest pain (5/10); patients in the organ transplant group had minimal symptoms (p<0.05). On simple chest films, all of the cases presented as single or multiple nodules (6/14) or consolidations (8/14). High-resolution computed tomography showed peri-lesional ground-glass opacities (14/14), pleural effusions (5/14), and cavitary changes (7/14). Definitive diagnostic methods were as follows: 10 cases underwent minithoracotomy, 2 underwent video-assisted thoracoscopic surgery, 1 underwent percutaneous needle aspiration, and 1 case was diagnosed by culture of abscess fluid. All cases received treatment with amphotericin B, with 1 case each being switched to liposomal amphotericin B and itraconazole due to renal toxicity. Lung lesions improved in 12 of 14 cases, but 4 patients died before completing therapy. Conclusion: When a new lung infiltrate presenting either as a nodule or as consolidation develops in a neutropenic patient with hematologic malignancy or in a transplant recipient, pulmonary mycosis should always be considered in the differential diagnosis. Aggressive work-up and early treatment may improve the prognosis of these patients.


Diagnostic Efficacy of FDG-PET Imaging in Solitary Pulmonary Nodule (고립성폐결절의 진단시 FDG-PET의 임상적 유용성에 관한 연구)

  • Cheon, Eun Mee;Kim, Byung-Tae;Kwon, O. Jung;Kim, Hojoong;Chung, Man Pyo;Rhee, Chong H.;Han, Yong Chol;Lee, Kyung Soo;Shim, Young Mog;Kim, Jhingook;Han, Jungho
    • Tuberculosis and Respiratory Diseases / v.43 no.6 / pp.882-893 / 1996
  • Background: Over one-third of solitary pulmonary nodules (SPNs) are malignant, but most malignant SPNs are in the early stages at diagnosis and can be cured by surgical removal; therefore, early diagnosis of a malignant SPN is essential for saving the patient's life. The incidence of pulmonary tuberculosis in Korea is somewhat higher than in other countries, and a large number of SPNs are found to be tuberculomas. Most primary physicians tend to regard a newly detected solitary pulmonary nodule as a tuberculoma on the basis of noninvasive imaging such as CT alone, and they prefer clinical observation without further invasive procedures if the findings suggest benignancy. Many kinds of noninvasive procedures have been introduced to differentiate malignant SPNs from benign ones, but none of them has been satisfactory. FDG-PET is a unique tool for imaging and quantifying the status of glucose metabolism. On the basis that glucose metabolism is increased in malignant transformed cells compared with normal cells, FDG-PET is considered a satisfactory noninvasive procedure for differentiating malignant from benign SPNs. We therefore performed FDG-PET in patients with a solitary pulmonary nodule and evaluated its diagnostic accuracy for malignant SPNs. Method: 34 patients with a solitary pulmonary nodule less than 6 cm in diameter who visited Samsung Medical Center from September 1994 to September 1995 were evaluated prospectively. Simple chest roentgenography, chest computed tomography, and FDG-PET scans were performed for all patients. The results of FDG-PET were evaluated against the final diagnosis confirmed by sputum study, PCNA, fiberoptic bronchoscopy, or thoracotomy. Results: (1) There was no significant difference in nodule size between malignant ($3.1{\pm}1.5cm$) and benign nodules ($2.8{\pm}1.0cm$) (p>0.05). (2) The peak SUV (standardized uptake value) of malignant nodules ($6.9{\pm}3.7$) was significantly higher than that of benign nodules ($2.7{\pm}1.7$), and time-activity curves showed a continuous increase in malignant nodules. (3) Three false-negative cases were found among the eighteen malignant nodules by FDG-PET imaging, and all three were nonmucinous bronchioloalveolar carcinomas less than 2 cm in diameter. (4) FDG-PET imaging yielded 83% sensitivity, 100% specificity, 100% positive predictive value, and 84% negative predictive value. Conclusion: FDG-PET imaging is a new noninvasive diagnostic method for the solitary pulmonary nodule with high accuracy in the differential diagnosis between malignant and benign nodules. FDG-PET imaging could be used for the differential diagnosis of SPNs that are not properly diagnosed with conventional methods before thoracotomy. Considering its high accuracy, this procedure may play an important role in the decision to perform thoracotomy in difficult cases.
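
The abstract relies on the peak standardized uptake value (SUV). For reference, here is a minimal sketch of the standard SUV definition (tissue activity concentration divided by injected dose per body weight, assuming tissue density of 1 g/ml); this is the textbook formula, not code from the study, and the example numbers are placeholders.

```python
def suv(tissue_activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """SUV = tissue activity concentration / (injected dose / body weight)."""
    dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    weight_g = body_weight_kg * 1000.0      # kg -> g (1 g ~ 1 ml of tissue)
    return tissue_activity_kbq_per_ml / (dose_kbq / weight_g)

print(suv(8.0, 370.0, 70.0))  # ~1.51 for these placeholder values
```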


Term Mapping Methodology between Everyday Words and Legal Terms for Law Information Search System (법령정보 검색을 위한 생활용어와 법률용어 간의 대응관계 탐색 방법론)

  • Kim, Ji Hyun;Lee, Jong-Seo;Lee, Myungjin;Kim, Wooju;Hong, June Seok
    • Journal of Intelligence and Information Systems / v.18 no.3 / pp.137-152 / 2012
  • In the era of Web 2.0, as many users create vast amounts of web content themselves (user-created content), the World Wide Web is overflowing with information, so finding meaningful information among countless resources has become the key problem. Information retrieval is now essential in every field, and several types of search services have been developed and are widely used to retrieve the information users really want. In particular, legal information search is an indispensable service that lets people find the law relevant to their present situation. The Office of Legislation in Korea has provided the Korean Law Information portal service since 2009 for searching law information such as legislation, administrative rules, and judicial precedents, so that people can conveniently find information related to the law. However, this service is limited because current search-engine technology basically returns documents depending on whether the query terms are included in them. Despite these efforts of the Office of Legislation, it is very difficult for general users unfamiliar with legal terms to retrieve law information through simple keyword matching, because there is a huge divergence between everyday words and legal terms, which largely derive from Chinese characters. People generally try to access law information using everyday words, so they have difficulty getting the results they actually want. In this paper, we propose a term-mapping methodology between everyday words and legal terms for general users who lack background in legal terminology, and we develop a search service that provides law information search results from everyday words, enabling accurate search without knowledge of legal terminology. In other words, our research goal is a law information search system with which general users can retrieve law information using everyday words. First, this paper takes advantage of tags from internet blogs, using the concept of collective intelligence, to find the mapping relationship between everyday words and legal terms; we collect tags related to an everyday word from blog posts. When writing a post, people commonly add non-hierarchical keywords or terms, called tags, to describe, classify, and manage their posts. Second, the collected tags are clustered using the K-means cluster analysis method. Then we find a mapping relationship between an everyday word and a legal term, using our estimation measure to select the fittest legal term matching the everyday word. Selected legal terms are given a definite relationship, and the relations between everyday words and legal terms are described using SKOS, an ontology for describing knowledge such as thesauri, classification schemes, taxonomies, and subject headings. Based on the proposed mapping and searching methodologies, when users try to retrieve law information using an everyday word, our legal information search system finds the legal term mapped to the user query and retrieves law information using the matched legal term. Therefore, users can get exact results even if they have no knowledge of legal terms. We expect that general users without professional legal background will be able to retrieve legal information conveniently and efficiently using everyday words.
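
The clustering step described above can be illustrated with a minimal sketch: tags gathered from blog posts are vectorized and grouped with K-means, after which one legal term per cluster would be selected. The TF-IDF vectorization, the value of k, and the sample tags are assumptions for illustration; the paper's own estimation measure is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Each "document" is the tag list of one blog post mentioning the everyday word.
posts_tags = [
    "divorce alimony custody lawsuit",
    "divorce property-division lawsuit",
    "alimony consolation-money divorce",
    "custody parental-rights divorce",
]
X = TfidfVectorizer().fit_transform(posts_tags)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster id per post
# Within each cluster, the legal term scoring highest under the paper's
# estimation measure would be mapped to the everyday word and stored in SKOS.
```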

Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems / v.20 no.2 / pp.137-148 / 2014
  • Recommender systems have become one of the most important technologies in e-commerce these days. For many consumers, the ultimate reason to shop online is to reduce the effort of information search and purchase, and recommender systems are a key technology serving this need. Many past studies on recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful one. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. In order to generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who do not have any such information, CF cannot come up with recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty; this sparse dataset makes the computation for recommendation extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate if there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality refers to the number of direct links to and from a node. In a network of users who are connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from other users; therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two, gray sheep and others, based on the degree centrality of the users, and then apply different similarity measures and recommendation methods to the two datasets. The detailed algorithm is as follows. Step 1: Convert the initial data, a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate the nodes whose degree centrality is lower than a pre-set threshold; the threshold is determined by simulations such that the accuracy of CF for the remaining dataset is maximized. Step 3: An ordinary CF algorithm is applied to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them, so a 'popular item' method is used instead. The F-measures of the two datasets, weighted by the numbers of nodes, are summed to form the final performance metric. To test the performance improvement of this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data by the GroupLens research team: 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm using the 'best-N-neighbors' and 'cosine' similarity methods. The empirical results show that the F-measure improved about 11% on average when the proposed algorithm was used. Past studies to improve CF performance typically used information additional to users' evaluations, such as demographic data, and some applied SNA techniques as a new similarity metric; this study is novel in that it uses SNA to separate the dataset. It shows that the performance of CF can be improved, without any additional information, when SNA techniques are used as proposed. This study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, this study provides guidelines for improving the performance of CF recommender systems with a simple modification.
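
Steps 1 and 2 of the algorithm can be sketched as follows: project the two-mode user-item matrix into a one-mode user-user network, then separate users whose degree centrality falls below a threshold. The co-rating rule, the toy matrix, and the threshold value are assumptions for illustration; the paper tunes the threshold by simulation.

```python
import numpy as np
import networkx as nx

R = np.array([[1, 1, 0, 0],   # user-item rating indicator matrix (toy data)
              [1, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])  # the last user shares no items with anyone

co = R @ R.T                  # one-mode projection: counts of co-rated items
np.fill_diagonal(co, 0)
G = nx.from_numpy_array((co > 0).astype(int))  # edge = at least one common item

centrality = nx.degree_centrality(G)
threshold = 0.2               # the paper tunes this by simulation
gray_sheep = [u for u, c in centrality.items() if c < threshold]
print(gray_sheep)             # -> [3]; these users get 'popular item' recommendations
```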

A Study on the Determinants of Patent Citation Relationships among Companies: MR-QAP Analysis (기업 간 특허인용 관계 결정요인에 관한 연구 : MR-QAP분석)

  • Park, Jun Hyung;Kwahk, Kee-Young;Han, Heejun;Kim, Yunjeong
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.21-37 / 2013
  • Recently, with the advent of the knowledge-based society, more people are becoming interested in intellectual property. In particular, the ICT companies leading the high-tech industry are striving for systematic management of intellectual property. Patent information represents the intellectual capital of a company, and quantitative analysis of the continuously accumulated patent information has now become possible. Analysis at various levels also becomes possible by utilizing patent information, ranging from the patent level to the enterprise, industry, and country levels. Through patent information, we can identify the technology status and analyze its impact on performance. We can also trace the flow of knowledge through network analysis; by that, we can not only identify changes in technology but also predict the direction of future research. In the field of network analysis, there are two important analyses that utilize patent citation information: citation indicator analysis, which uses citation frequency, and network analysis based on citation relationships. Furthermore, this study analyzes whether company size has any impact on patent citation relationships. 74 S&P 500-registered companies that provide IT and communication services were selected for this study. In order to determine the patent citation relationships between the companies, the patent citations in 2009 and 2010 were collected, and sociomatrices showing the patent citation relationships between the companies were created. In addition, the companies' total assets were collected as an index of company size. The distance between companies is defined as the absolute value of the difference between their total assets, and the simple difference is taken to describe the hierarchy between companies. QAP correlation analysis and MR-QAP analysis were carried out using the distance and hierarchy between companies and the sociomatrices of patent citations in 2009 and 2010. The QAP correlation analysis shows that the 2009 and 2010 company patent citation networks have the highest correlation with each other. In addition, a positive correlation is found between the patent citation relationships and the distance between companies, because patent citation relationships increase when there is a difference in size between companies. A negative correlation is found between the patent citation relationships and the hierarchy between companies, indicating that patents of higher-tier companies are relatively highly valued and cited by lower-tier companies. MR-QAP analysis was carried out as follows: the sociomatrix generated from the 2010 patent citation relationships is used as the dependent variable, and the 2009 patent citation network and the distance and hierarchy networks between the companies are used as the independent variables. This study performed MR-QAP analysis to find the main factors influencing the patent citation relationships between the companies in 2010. The results show that all independent variables positively influence the 2010 patent citation relationships. In particular, the 2009 patent citation relationships have the most significant impact on those of 2010, which means that patent citation relationships are consecutive over time. Through the QAP correlation and MR-QAP analyses, the patent citation relationships between companies are shown to be affected by company size, but the most significant factor is the patent citation relationships of the past. Maintaining patent citation relationships between companies may therefore be strategically important for sharing intellectual property with each other and as an aid in identifying partner companies to cooperate with.
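
A minimal sketch of the QAP correlation test used above: correlate two sociomatrices over their off-diagonal cells, then build a null distribution by permuting the rows and columns of one matrix with the same random ordering. The matrices and permutation count below are placeholders, not the study's data.

```python
import numpy as np

def qap_corr(A, B, n_perm=1000, seed=0):
    """Observed correlation of two sociomatrices plus a QAP permutation p-value."""
    rng = np.random.default_rng(seed)
    mask = ~np.eye(len(A), dtype=bool)        # compare off-diagonal cells only
    observed = np.corrcoef(A[mask], B[mask])[0, 1]
    extreme = 0
    for _ in range(n_perm):
        p = rng.permutation(len(A))
        Bp = B[np.ix_(p, p)]                  # permute rows and columns together
        if abs(np.corrcoef(A[mask], Bp[mask])[0, 1]) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

cite_2009 = np.random.default_rng(1).integers(0, 2, (10, 10))  # placeholder networks
cite_2010 = np.random.default_rng(2).integers(0, 2, (10, 10))
r, p = qap_corr(cite_2009, cite_2010)
print(r, p)
```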

Image Watermarking for Copyright Protection of Images on Shopping Mall (쇼핑몰 이미지 저작권보호를 위한 영상 워터마킹)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.147-157 / 2013
  • With the advent of a digital environment that can be accessed anytime and anywhere through high-speed networks, the free distribution and use of digital content became possible. Ironically, this environment is raising a variety of copyright infringements, and product images used in online shopping malls are pirated frequently. Whether shopping mall images are creative works is controversial. According to a Supreme Court decision in 2001, advertising pictures of ham products were judged to be mere reproductions of the appearance of the objects, conveying nothing beyond the products themselves, and thus not creative expression; however, the photographer's losses from the advertising photo shoot were recognized, and damages were estimated from the typical cost of such a shoot. According to Seoul District Court precedents in 2003, if the photographer's personality and creativity are present in the selection of the subject, the composition of the set, the direction and amount of light, the camera angle, the shutter speed, the shutter chance, other shooting methods, and the developing and printing process, the work should be protected by copyright law. In order for shopping mall images to receive copyright protection under the law, they must not simply convey the appearance of the product; effort is required so that the photographer's personality and creativity can be recognized. Accordingly, the cost of producing mall images increases, and the necessity for copyright protection becomes higher. The product images of online shopping malls have a very distinctive composition, unlike general pictures such as portraits and landscape photos, and therefore general image watermarking techniques cannot satisfy their requirements. Because the background of product images commonly used in shopping malls is white, black, or a grayscale gradient, it is difficult to use that space to embed a watermark, and the area is very sensitive to even slight changes. In this paper, the characteristics of images used in shopping malls are analyzed, and a watermarking technology suitable for shopping mall images is proposed. The proposed technology divides a product image into small blocks, transforms the corresponding blocks by DCT (Discrete Cosine Transform), and then inserts the watermark information into the images by quantizing the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes finely the coefficients located at block boundaries and coarsely the coefficients located in the center area of the block. This mask improves the subjective visual quality as well as the objective quality of the images. In addition, to improve the safety of the algorithm, the blocks in which the watermark is embedded are randomly selected, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal-to-Noise Ratio) of a shopping mall image watermarked by the proposed algorithm is 40.7~48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF = 70 is 0. This means the watermarked image is of high quality and the algorithm is robust to the JPEG compression generally used at online shopping malls. For a 40% change in size and 40 degrees of rotation, the BER is also 0. In general, shopping malls use compressed images with a QF higher than 90, and because a pirated image is replicated from the original image, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds. However, future study should be carried out to enhance the robustness of the proposed algorithm, because some robustness is lost after the mask process.
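
A minimal sketch of the general block-DCT quantization idea described above: embed one watermark bit per block by quantizing a mid-frequency coefficient (quantization index modulation). The block size, coefficient position, and quantization step are assumptions; the paper's weighted mask, random block selection, and turbo coding are not reproduced here.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(block):
    return idct(idct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def embed_bit(block, bit, step=24.0, pos=(3, 2)):
    """Quantize one mid-frequency DCT coefficient: even multiples of step/2
    encode a 0 bit, odd multiples encode a 1 bit."""
    c = dct2(block.astype(float))
    q = np.round(c[pos] / step) * step          # nearest even multiple of step/2
    c[pos] = q + (step / 2.0 if bit else 0.0)   # shift to odd multiple for bit 1
    return np.clip(idct2(c), 0, 255)

def extract_bit(block, step=24.0, pos=(3, 2)):
    c = dct2(block.astype(float))
    return int(round(c[pos] / (step / 2.0))) % 2

# Example on one 8x8 block of a synthetic gray image
rng = np.random.default_rng(0)
block = rng.integers(100, 156, size=(8, 8))
marked = embed_bit(block, 1)
print(extract_bit(marked))  # -> 1
```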

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to realize a highly accurate activity recognizer or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities by using only a single sensor, the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class; this binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how a set of classes is split into two subsets at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than the others; however, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by the trees during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature-subset selection, a random forest has the advantage of more diverse ensemble members than simple bagging. Overall, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 second for 2 minutes per activity from 5 volunteers. Of the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with other similar activities, END classified all ten activities with a fairly high accuracy of 98.4%. By comparison, the accuracies achieved by a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
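
A minimal sketch of the windowed features described above: for each time point, the magnitude of the acceleration vector plus the maximum, minimum, and standard deviation of the magnitude over the preceding 2-second window. The 10 Hz sampling rate matches the 0.1-second interval in the abstract; the array layout is an assumption.

```python
import numpy as np

def window_features(acc, rate_hz=10, window_s=2):
    """acc: (N, 3) array of x, y, z accelerations sampled at rate_hz."""
    mag = np.linalg.norm(acc, axis=1)          # magnitude of the acceleration vector
    w = rate_hz * window_s                     # samples per 2-second window
    feats = []
    for t in range(w, len(mag)):
        win = mag[t - w:t]
        feats.append([mag[t], win.max(), win.min(), win.std()])
    return np.asarray(feats)                   # first w samples dropped, as in the paper

acc = np.random.default_rng(0).normal(0, 1, (1200, 3))  # 2 minutes at 10 Hz (placeholder)
print(window_features(acc).shape)              # -> (1180, 4)
```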

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems / v.22 no.4 / pp.109-122 / 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is utilized for marketing and for solving social problems by analyzing data that are currently opened or collected directly. In Korea, various companies and individuals are taking on big data analysis, but analysis is difficult from the initial stage due to limited big data disclosure and collection difficulties. Nowadays, system improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly services that open public data, such as the Korean Government 3.0 portal (data.go.kr). In addition to these government efforts, services that share data held by corporations or individuals are running, but it is difficult to find useful data because of the lack of shared data. Moreover, big data traffic problems can occur because the entire dataset must be downloaded and examined in order to grasp the attributes of and simple information about the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper: it means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data can be improved, because information that reveals the properties and characteristics of the data is available when a user searches for big data. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problem that may occur when the original data are disclosed can be avoided, enabling big data sharing between the data provider and the data user. Second, it is necessary to quickly generate appropriate preprocessing results according to the disclosure level or network status of the raw data and to provide the results to users through distributed big data processing using Spark. Third, in order to solve the big-traffic problem, the system monitors network traffic in real time; when preprocessing the data requested by a user, it preprocesses the data to a size transferable on the current network before transmitting, so that no big traffic occurs. In this paper, we present various data sizes according to the level of disclosure through pre-analysis; this method is expected to show a low traffic volume compared with the conventional method of sharing only raw data across many systems. We describe how to solve the problems that occur when big data are released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests; a Server Agent and a Client Agent are deployed on the server and client sides, respectively. The Server Agent is the agent needed by the data provider: it performs pre-analysis of big data to generate a Data Descriptor containing information on the sample data, summary data, and raw data; it performs fast and efficient big data preprocessing through distributed processing; and it continuously monitors network traffic. The Client Agent is the agent placed on the data user's side. It can search big data quickly through the Data Descriptor, the result of the pre-analysis; the desired data can then be requested from the server and downloaded according to the level of disclosure. The Server Agent and the Client Agent are separated so that data published by a provider can be used by users. In particular, we focus on big data sharing, distributed big data processing, and the big-traffic problem, construct the detailed modules of the client-server model, and present the design of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes the data in the desired direction or preprocesses them into new data; by publishing the newly processed data through the Server Agent, the data user changes role and becomes a data provider. A data provider can likewise obtain useful statistical information from the Data Descriptor of disclosed data and become a data user performing new analyses on the sample data. In this way, raw data are processed and the processed big data are utilized by users, forming a natural sharing environment in which the roles of data provider and data user are not distinguished: everyone can be both a provider and a user. The client-server model thus solves the big data sharing problem and provides a free and secure sharing environment in which big data can easily be found and disclosed.
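
A minimal sketch of the pre-analysis step performed by the Server Agent: use Spark to build a Data Descriptor holding schema information, summary statistics, and a small sample that can be shared instead of the raw data. The file path, sampling fraction, and descriptor fields are assumptions for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pre-analysis").getOrCreate()
df = spark.read.csv("hdfs:///shared/raw_data.csv", header=True, inferSchema=True)

descriptor = {
    "schema": df.schema.jsonValue(),            # column names and types
    "row_count": df.count(),
    "summary": df.describe().collect(),         # count/mean/stddev/min/max per column
    "sample": df.sample(fraction=0.001).limit(100).collect(),
}
# The descriptor, not the raw data, is what the Client Agent searches;
# the raw data is downloaded only according to the level of disclosure.
```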

A Study on the Liturgical Vestments of Catholic - With reference to the Liturgical Vestments Firms of Paderborn and Kevelaer in Germany (카톨릭교 전례복에 관한 연구-독일 Paderborn 과 kevelaer의 전례복 회사를 중심으로)

  • Yang, Ri-Na
    • The Journal of Natural Sciences / v.7 / pp.133-162 / 1995
  • Paderborn's companies, Wameling and Cassau, produce liturgical vestments of great traditional artistic merit. And Kevelaerer Fahnen + Paramenten GmbH, located in Kevelaer, a place of pilgrimage of the Virgin Mary, has been known throughout Europe, Africa, America, and the Scandinavian Peninsula as the "hidden company" among liturgical vestment makers. Paderborn and Kevelaer were centers of the religious world and of Catholic ceremony for a good few centuries. The Catholic liturgical vestments of these three companies use versatile designs, colors, shapes, and techniques. They carry not only religious symbolism but can also meet expectations in modern textile art, art clothing, and the wider field of design. They give believers an understanding of the symbolic meanings and harmony of liturgical vestments, and, as works of religious art, they influence the thinking of non-believers and may awaken religious interest. Liturgical vestments are the clothes that churchmen put on at all ceremonial functions such as mass, the sacraments, performances, and processions, according to the rules of the church. They silently represent "Holy God" and distinguish churchmen from common people; they also represent the status and dignity of churchmen and inspire majesty and respect toward them. The common clothes of early Greece and Rome developed into Christian clothes under the influence of religion; there were no special garments distinguished from those of common people until Christianity was officially recognized by the Roman Emperor Constantinus in A.D. 313. The color of liturgical vestments was originally white; special colors according to the liturgical day and season were introduced by Pope Innocentius in the 12th century, and the colors and symbolic meanings of the present day were established by Pope St. Pius (1566-1572). Wool and linen were used as materials and decorations in the beginning; special materials like silk came into use after the 4th century, and beautiful materials made of gold thread were used from the 12th century. No critical changes to liturgical vestments are expected in the future, but their development will continue slowly under the direction of the conservative church, changing toward simple and convenient forms according to culture, the trend of the times, and clothing fashion. The liturgical vestment companies develop versatile designs, embroidery techniques, and creative designs to distinguish each company's vestments and to make artistic progress. The cooperation of companies, artists, and the church will make the future of these three companies bright. We expect that our country will become a famous production center of liturgical vestments through the research and development of companies, the participation of artists in religious arts, and the concern of churches.

