• Title/Summary/Keyword: HYBRID technique


A Study on Falling Detection of Workers in the Underground Utility Tunnel using Dual Deep Learning Techniques (이중 딥러닝 기법을 활용한 지하공동구 작업자의 쓰러짐 검출 연구)

  • Jeongsoo Kim;Sangmi Park;Changhee Hong
    • Journal of the Society of Disaster Information / v.19 no.3 / pp.498-509 / 2023
  • Purpose: This paper proposes a method for detecting falls of maintenance workers in an underground utility tunnel by applying deep learning techniques to CCTV video, and evaluates the applicability of the proposed method to worker monitoring in the utility tunnel. Method: Rules for detecting a fallen worker were designed from the inference results of pre-trained YOLOv5 and OpenPose models, respectively, and the two rules were then applied in combination to detect worker falls within the tunnel. Result: Although the proposed model detected both worker presence and falls, the inference results depended on the distance between the worker and the CCTV camera and on the direction in which the worker fell. The YOLOv5-based fall detection showed superior performance because it was less sensitive to distance and fall direction than the OpenPose-based detection; consequently, the results of the integrated dual deep learning model were dominated by the YOLOv5 detection performance. Conclusion: The proposed hybrid model can detect an abnormal (fallen) worker in the utility tunnel, but its improvement over the single YOLOv5-based model was negligible because of the large difference in detection performance between the two deep learning models.
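
As a concrete illustration of how two independently trained models can be combined through rules, the following is a minimal Python sketch assuming a YOLOv5 bounding-box aspect-ratio rule and an OpenPose torso-angle rule with simple OR-fusion; the thresholds, keypoint indexing, and fusion logic are illustrative assumptions, not the authors' published rules.

```python
# Illustrative sketch only: the paper does not publish its detection rules, so the
# thresholds, rule logic, and helper functions below are assumptions.
import math
import torch  # loads the public ultralytics/yolov5 hub model

yolo = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def yolo_fall_rule(frame, aspect_ratio_thr=1.2):
    """Flag a fall when a detected person's box is much wider than it is tall."""
    results = yolo(frame)
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        if int(cls) == 0:  # class 0 = person in the COCO label set
            w, h = xyxy[2] - xyxy[0], xyxy[3] - xyxy[1]
            if w / max(h, 1e-6) > aspect_ratio_thr:
                return True
    return False

def pose_fall_rule(keypoints, angle_thr_deg=45.0):
    """Flag a fall when the neck-to-hip axis leans close to horizontal.

    keypoints: (x, y, confidence) rows from a separate OpenPose inference step,
    assumed to use BODY_25 indexing (1 = neck, 8 = mid-hip).
    """
    neck, hip = keypoints[1], keypoints[8]
    if neck[2] < 0.3 or hip[2] < 0.3:        # low-confidence keypoints: no decision
        return False
    dx, dy = hip[0] - neck[0], hip[1] - neck[1]
    torso_angle = abs(math.degrees(math.atan2(dy, dx)))  # ~90 deg when upright
    return torso_angle < angle_thr_deg

def dual_model_fall(frame, keypoints):
    # Simple OR-integration of the two rules; the paper's fusion logic may differ.
    return yolo_fall_rule(frame) or pose_fall_rule(keypoints)
```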

The Possibility of Design Creation by Convergence of Contemporary Technology and Traditional Craft (신기술과 공예의 융합을 통한 디자인 창작의 가능성)

  • Ha, Euna
    • Korea Science and Art Forum / v.25 / pp.463-475 / 2016
  • With the transition to the digital age in the late 20th century, the intrinsic value of craft, namely its human and emotional value, has drawn attention as an alternative for overcoming the adverse effects of industrial-age modernism. The purpose of this study is to introduce experimental attempts that converge contemporary technologies with elements of traditional craft, and to inspire artists and present new possibilities of creation to those who want to use the emotional qualities of craft as elements of creation in the current digital technology age. First, the meaning and value of craft in modern times, digital media, and hybrid creative environments are investigated theoretically on the basis of previous studies and the literature. Second, design cases produced by combining digital technology as a tool with craft elements are classified for a substantive understanding of such design. Third, the design characteristics that emerge from the case studies are identified and new possibilities of creation are suggested. The results of the study are as follows. The cases reject typical forms that merely emphasize functional roles and attempt free expressive transformations of form, material, texture, making process, and so on. They extract the various elements that can be applied and search for ways of combining them, because the convergence of digital technology and craft is well suited to engaging human emotion. Interaction between craft and the digital medium takes place actively: craft adopts digital forms, craft reappears as digital content, and traditional and digital methods are appropriately fused and utilized in the process depending on the situation.

Optimization of Support Vector Machines for Financial Forecasting (재무예측을 위한 Support Vector Machine의 최적화)

  • Kim, Kyoung-Jae;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.241-254 / 2011
  • Financial time-series forecasting is one of the most important issues because it is essential for the risk management of financial institutions. Researchers have therefore tried to forecast financial time series using various data mining techniques such as regression, artificial neural networks, decision trees, and k-nearest neighbor. Recently, support vector machines (SVMs) have been popularly applied to this research area because they do not require huge amounts of training data and have a low risk of overfitting. However, a user must determine several design factors heuristically in order to use an SVM; the selection of an appropriate kernel function and its parameters, and proper feature subset selection, are major design factors. Beyond these, proper selection of an instance subset may also improve the forecasting performance of an SVM by eliminating irrelevant and distorting training instances. Nonetheless, few studies have applied instance selection to SVMs, especially in the domain of stock market prediction. Instance selection chooses a proper instance subset from the original training data; it can be considered a method of knowledge refinement that maintains the instance base. This study proposes a novel instance selection algorithm for SVMs. The proposed technique uses a genetic algorithm (GA) to optimize the instance selection process and the kernel parameters simultaneously. We call this model ISVM (SVM with Instance selection). Experiments on stock market data are implemented using ISVM. The GA searches for optimal or near-optimal values of the kernel parameters and for relevant instances for the SVM, so the chromosomes contain two sets of genes: codes for the kernel parameters and codes for instance selection. For the controlling parameters of the GA search, the population size is set at 50 organisms, the crossover rate at 0.7, and the mutation rate at 0.1; 50 generations are permitted as the stopping condition. The application data consist of technical indicators and the direction of change in the daily Korea stock price index (KOSPI), with a total of 2,218 trading days. The whole data set is separated into training, test, and hold-out subsets of 1,056, 581, and 581 samples, respectively. This study compares ISVM to several comparative models including logistic regression (Logit), backpropagation neural networks (ANN), nearest neighbor (1-NN), conventional SVM (SVM), and SVM with optimized parameters (PSVM). In particular, PSVM uses kernel parameters optimized by the genetic algorithm. The experimental results show that ISVM outperforms 1-NN by 15.32%, ANN by 6.89%, Logit and SVM by 5.34%, and PSVM by 4.82% on the hold-out data. For ISVM, only 556 of the 1,056 original training instances were used to produce this result. In addition, the two-sample test for proportions is used to examine whether ISVM significantly outperforms the other comparative models. The results indicate that ISVM outperforms ANN and 1-NN at the 1% statistical significance level, and performs better than Logit, SVM, and PSVM at the 5% statistical significance level.
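
To make the chromosome design concrete (real-valued genes for the RBF kernel parameters plus a binary instance-selection mask, with population 50, crossover 0.7, mutation 0.1, and 50 generations as stated above), here is a minimal Python sketch using scikit-learn and NumPy. The fitness function, gene ranges, and GA operators are simplified assumptions, not the authors' implementation.

```python
# Minimal sketch of the ISVM idea: simultaneous instance selection and RBF-kernel
# parameter search with a simple GA. X_* and y_* are assumed to be NumPy arrays.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(chrom, X_tr, y_tr, X_val, y_val):
    # chrom = [log10(C), log10(gamma), instance-selection bits...]
    C, gamma = 10 ** chrom[0], 10 ** chrom[1]
    mask = chrom[2:].astype(bool)
    if mask.sum() < 10 or len(np.unique(y_tr[mask])) < 2:
        return 0.0                                # degenerate training subset
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    clf.fit(X_tr[mask], y_tr[mask])
    return clf.score(X_val, y_val)                # validation accuracy as fitness

def run_ga(X_tr, y_tr, X_val, y_val, pop_size=50, gens=50, cx=0.7, mut=0.1):
    n = len(X_tr)
    pop = np.hstack([rng.uniform(-1, 3, (pop_size, 2)),           # C, gamma exponents
                     rng.integers(0, 2, (pop_size, n)).astype(float)])
    for _ in range(gens):
        fit = np.array([fitness(ind, X_tr, y_tr, X_val, y_val) for ind in pop])
        probs = fit / fit.sum() if fit.sum() > 0 else np.full(pop_size, 1 / pop_size)
        parents = pop[rng.choice(pop_size, pop_size, p=probs)]    # roulette selection
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):                       # one-point crossover
            if rng.random() < cx:
                cut = rng.integers(1, n + 2)
                children[i, cut:], children[i + 1, cut:] = (
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        # bit-flip mutation on the instance genes (real genes kept fixed for brevity)
        flip = rng.random(children[:, 2:].shape) < mut
        children[:, 2:] = np.where(flip, 1 - children[:, 2:], children[:, 2:])
        pop = children
    best = pop[np.argmax([fitness(ind, X_tr, y_tr, X_val, y_val) for ind in pop])]
    return 10 ** best[0], 10 ** best[1], best[2:].astype(bool)   # C, gamma, instances
```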

A Study on Improvement of Collaborative Filtering Based on Implicit User Feedback Using RFM Multidimensional Analysis (RFM 다차원 분석 기법을 활용한 암시적 사용자 피드백 기반 협업 필터링 개선 연구)

  • Lee, Jae-Seong;Kim, Jaeyoung;Kang, Byeongwook
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.139-161 / 2019
  • Use of the e-commerce market has become part of everyday life, and knowing where and how to make reasonable purchases of good-quality products has become important for customers. This change in purchase psychology tends to make it difficult for customers to reach purchasing decisions amid vast amounts of information. In this situation, a recommendation system reduces the cost of information retrieval and improves satisfaction by analyzing the customer's purchasing behavior. Amazon and Netflix are well-known examples of sales marketing using recommendation systems: Amazon reported that 60% of recommendations resulted in purchases and achieved a 35% increase in sales, while Netflix found that 75% of movie choices were made through its recommendation service. This personalization technique is considered one of the key strategies of one-to-one marketing, which is useful in online markets where salespeople do not exist. The recommendation techniques mainly used today include collaborative filtering and content-based filtering, and hybrid techniques and association rules that combine them are also used in various fields. Of these, collaborative filtering is the most popular. Collaborative filtering recommends products preferred by neighbors with similar preferences or purchasing behavior, based on the assumption that users who have shown similar tendencies in purchasing or evaluating products in the past will show similar tendencies toward other products. However, most existing systems recommend only within the same product category, such as books or movies, because the system estimates the purchase satisfaction for a new, never-purchased item from the customer's ratings of similar items in the transaction data. In addition, the reliability of the purchase ratings used in recommendation systems is a serious problem. In particular, a 'compensated review' is a customer rating intentionally manipulated through company intervention. Amazon has cracked down on such compensated reviews since 2016 and has worked hard to reduce false information and increase credibility. A survey showed that the average rating of products with compensated reviews was higher than that of products without them, and that compensated reviews were about 12 times less likely to give the lowest rating and about 4 times less likely to leave a critical opinion. Customer purchase ratings are thus full of various kinds of noise. This problem directly affects the performance of recommendation systems, which aim to maximize profits by attracting highly satisfied customers in most e-commerce transactions. In this study, we propose new indicators that can objectively substitute for existing customer purchase ratings, using the RFM multidimensional analysis technique to solve this series of problems. RFM multidimensional analysis is the most widely used analytical method in customer relationship management (CRM) marketing and is a data analysis method for selecting customers who are likely to purchase goods.
When the proposed index was verified against actual purchase history data, the accuracy was about 55%. Because this result was obtained while recommending a total of 4,386 different kinds of products that had never been bought before, it represents relatively high accuracy and practical value. The study also suggests the possibility of a general recommendation system applicable to various offline product data; if additional data are acquired in the future, the accuracy of the proposed recommendation system can be improved.
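
The abstract does not give the exact RFM-based index, so the sketch below only illustrates the general idea in Python/pandas: score each customer's recency, frequency, and monetary value into quintiles and use the combined score as an implicit pseudo-rating in place of explicit purchase ratings. All column names and the equal-weight combination are assumptions.

```python
# Hedged sketch: derive an RFM-based pseudo-rating from purchase logs for use as the
# implicit feedback signal of a collaborative filter.
import pandas as pd

def rfm_pseudo_ratings(tx: pd.DataFrame, asof: pd.Timestamp) -> pd.DataFrame:
    """tx: one row per purchase with columns user_id, item_id, date, amount."""
    g = tx.groupby("user_id")
    rfm = pd.DataFrame({
        "recency":   (asof - g["date"].max()).dt.days,
        "frequency": g.size(),
        "monetary":  g["amount"].sum(),
    })
    # Quintile scores 1-5 (recency is inverted: more recent purchase = higher score).
    rfm["R"] = pd.qcut(-rfm["recency"], 5, labels=False, duplicates="drop") + 1
    rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 5, labels=False) + 1
    rfm["M"] = pd.qcut(rfm["monetary"].rank(method="first"), 5, labels=False) + 1
    rfm["rfm_score"] = (rfm["R"] + rfm["F"] + rfm["M"]) / 3.0
    # Attach each buyer's RFM score to the items they bought as a pseudo-rating.
    out = tx.merge(rfm[["rfm_score"]], left_on="user_id", right_index=True)
    return out[["user_id", "item_id", "rfm_score"]]
```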

Numerical modeling of secondary flow behavior in a meandering channel with submerged vanes (잠긴수제가 설치된 만곡수로에서의 이차류 거동 수치모의)

  • Lee, Jung Seop;Park, Sang Deog;Choi, Cheol Hee;Paik, Joongcheol
    • Journal of Korea Water Resources Association / v.52 no.10 / pp.743-752 / 2019
  • The flow in a meandering channel is characterized by the spiral motion of secondary currents, which typically cause erosion along the outer bank. Hydraulic structures such as spur dikes and groynes are commonly installed on the channel bottom near the outer bank to mitigate the strength of the secondary currents. This study investigates the effects of submerged vanes installed in a 90° meandering channel on the development of secondary currents through three-dimensional numerical modeling, using a hybrid RANS/LES method for turbulence and the volume of fluid method for capturing the free surface, based on the OpenFOAM open-source toolbox, at a Froude number of 0.43. Second-order-accurate finite volume methods in space and time are employed, and the numerical results are compared with experimental measurements to evaluate the predictions. The simulations reproduce the experimental measurements well in terms of the time-averaged streamwise velocity and secondary velocity vector fields in the bend with submerged vanes. The computed flow fields reveal that the streamwise velocity near the bed along the outer bank at the end section of the bend decreases dramatically, by about one third of the mean velocity, after the installation of the vanes, which supports the view that submerged vanes mitigate the strength of the primary secondary flow and help stabilize the channel along the outer bank. The flow between the top of the vanes and the free surface accelerates, and the maximum free-surface velocity near the flow impingement along the outer bank increases by about 20% due to the installation of the submerged vanes. The numerical solutions show the formation of horseshoe vortices at the front of the vanes and lee wakes behind them, which are responsible for strong local scour around the vanes. Additional study on the shapes and arrangement of the vanes is required to mitigate this local scour.
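
For reference, the quoted Froude number under the usual depth-based definition (an assumption, since the abstract does not state which definition is used) is

```latex
\mathrm{Fr} = \frac{U}{\sqrt{g\,h}} = 0.43 < 1 \quad \text{(subcritical flow)}
```

where U is the bulk velocity, g the gravitational acceleration, and h the flow depth.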

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.35-48 / 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of the industry's distinctive capital structure and debt-to-equity ratios it is more difficult to forecast their bankruptcies than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flow concentrated in the second half of a project, and it is strongly influenced by the economic cycle, so downturns tend to rapidly increase the bankruptcy rate of construction companies. High leverage, coupled with increased bankruptcy rates, places a greater burden on banks that provide loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy forecasting models based on corporate financial statements have been studied for many years in various ways, but their subjects were companies in general, so they may not be appropriate for forecasting bankruptcies of construction companies, which carry disproportionately large liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries; with this unique capital structure, the criteria used to judge the financial risk of companies in general cannot be applied effectively to construction firms. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of bankruptcy using a simple formula and classifies the result into three categories, evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy; for companies in the "moderate" category the risk is difficult to forecast, and many of the construction firms in this study fell into that category. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have applied this technology. Pattern recognition, a representative application area of machine learning, is applied to bankruptcy forecasting by analyzing patterns in a company's financial information and judging whether the pattern belongs to the bankruptcy-risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are artificial neural networks, adaptive boosting (AdaBoost), and the support vector machine (SVM), and there are also many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine learning for bankruptcy prediction focus on companies in no specific industry, so industry-specific characteristics are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies by analyzing performance by company size. We classified construction companies into three groups - large, medium, and small - based on the company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has higher predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
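
A minimal sketch of the experimental design described above (grouping construction firms by capital size and fitting AdaBoost per group), using scikit-learn. The feature columns, the lower size boundary, and the cross-validation setup are assumptions; only the 50-billion-won threshold for "large" firms comes from the abstract.

```python
# Sketch only: per-size-group AdaBoost evaluation on a firm-level financial data set.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def size_group(capital_won: float) -> str:
    # The abstract only states that "large" means capital above 50 billion won;
    # the medium/small boundary below is a placeholder.
    if capital_won >= 50e9:
        return "large"
    return "medium" if capital_won >= 5e9 else "small"

def evaluate_by_group(df: pd.DataFrame, feature_cols, label_col="bankrupt"):
    """df: one row per firm with a 'capital' column, financial-ratio features,
    and a binary bankruptcy label (column names are assumptions)."""
    df = df.assign(group=df["capital"].map(size_group))
    scores = {}
    for name, g in df.groupby("group"):
        clf = AdaBoostClassifier(n_estimators=100, random_state=0)
        scores[name] = cross_val_score(clf, g[feature_cols], g[label_col], cv=5).mean()
    return scores  # mean cross-validated accuracy per company-size group
```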

Development of Robotic Inspection System over Bridge Superstructure (교량 상판 하부 안전점검 로봇개발)

  • Nam Soon-Sung;Jang Jung-Whan;Yang Kyung-Taek
    • Proceedings of the Korean Institute of Construction Engineering and Management / autumn / pp.180-185 / 2003
  • The increase of traffic over bridges has emerged as one of the most severe problems in bridge maintenance, since the load effects caused by vehicle passage inflict long-term damage on the bridge structure, and it is nearly impossible to maintain the operational serviceability of a bridge at a level satisfactory to users if no attention is paid to maintenance after completion. Bridge maintenance should therefore include regular inspection to prevent structural malfunction or unexpected accidents, by monitoring cracks and deformations during service. A technical breakthrough in this neglected field of bridge maintenance, one that brings public recognition to a turning point, is urgently needed. This study aims to develop an automated inspection system for the lower surface of bridge superstructures to replace the conventional practice of visual inspection, in which monitoring staff board articulated or other types of maintenance vehicles. With the proposed system, it is expected that we can resolve the essential problem that inspection results vary with the subjective judgment of the monitoring staff, improve safety during inspection, and contribute to building a database by providing objective and quantitative data through image processing of the footage captured by cameras. The system is also expected to enable objective estimation of the right time for maintenance and reinforcement work, leading to a substantial decrease in maintenance cost.


Recommending Core and Connecting Keywords of Research Area Using Social Network and Data Mining Techniques (소셜 네트워크와 데이터 마이닝 기법을 활용한 학문 분야 중심 및 융합 키워드 추천 서비스)

  • Cho, In-Dong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems / v.17 no.1 / pp.127-138 / 2011
  • The core service of most research portal sites is providing research papers that match the interests of individual researchers. This kind of service is effective and easy to use only when a user can provide correct and concrete information about a paper, such as its title, authors, and keywords. Unfortunately, most users of such services are not acquainted with concrete bibliographic information, which means that they inevitably experience repeated trial-and-error attempts at keyword-based search. Retrieving a relevant paper is especially difficult when a user is a novice in the research domain and does not know appropriate keywords. In this case, the user must search iteratively: i) perform an initial search with an arbitrary keyword, ii) acquire related keywords from the retrieved papers, and iii) search again with the acquired keywords. This usage pattern implies that the service quality and user satisfaction of a portal site are strongly affected by its keyword management and search mechanism. To overcome this inefficiency, some leading research portals have adopted association rule mining-based keyword recommendation, similar to the product recommendation of online shopping malls. However, keyword recommendation based only on association analysis is limited to showing simple, direct relationships between two keywords; the association analysis by itself cannot present the complex relationships among many keywords in adjacent research areas. To overcome this limitation, we propose a hybrid approach for establishing an association network among the keywords used in research papers. The keyword association network is established in the following phases: i) the set of keywords specified in a paper is regarded as a set of co-purchased items, ii) association analysis is performed on the keywords and frequent keyword patterns satisfying predefined thresholds of confidence, support, and lift are extracted, and iii) the frequent keyword patterns are schematized as a network that shows the core keywords of each research area and the connecting keywords between two or more research areas. To assess the practical applicability of our approach, we performed a simple experiment with 600 keywords extracted from 131 research papers published in five prominent Korean journals in 2009. In the experiment, we used SAS Enterprise Miner for association analysis and the R software for social network analysis. As the final outcome, we present a network diagram and a cluster dendrogram for the keyword association network, with the results summarized in Section 4 of the paper. The main contributions of the proposed approach are as follows: i) the keyword network can provide an initial roadmap of a research area to researchers who are novices in the domain, ii) a researcher can grasp the distribution of keywords neighboring a given keyword, and iii) researchers can get ideas for converging different research areas by observing the connecting keywords in the keyword association network. Further studies should include the following. First, the current version of our approach does not implement a standard meta-dictionary; for practical use, homonym, synonym, and multilingual problems should be resolved with such a dictionary.
In addition, clearer guidelines for clustering research areas and for defining core and connecting keywords should be provided. Finally, intensive experiments should be performed not only on Korean research papers but also on international papers.
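
The paper itself used SAS Enterprise Miner and R; the following is an analogous Python sketch of the same pipeline: treat each paper's keywords as a transaction, mine association rules, and schematize them as a network in which high-degree nodes approximate core keywords and high-betweenness nodes approximate connecting keywords. The thresholds and the toy keyword lists are placeholders, not the study's values.

```python
# Analogous sketch with mlxtend (Apriori) and networkx; not the authors' SAS/R workflow.
import pandas as pd
import networkx as nx
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

papers = [
    ["data mining", "association rule", "recommendation"],
    ["social network", "association rule", "keyword analysis"],
    ["recommendation", "collaborative filtering"],
]  # each inner list = the keywords of one paper ("co-purchased items")

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(papers).transform(papers), columns=te.columns_)

frequent = apriori(onehot, min_support=0.3, use_colnames=True)       # placeholder support
rules = association_rules(frequent, metric="lift", min_threshold=1.0)

G = nx.Graph()
for _, r in rules.iterrows():
    for a in r["antecedents"]:
        for c in r["consequents"]:
            G.add_edge(a, c, lift=r["lift"])

# Degree centrality as a crude proxy for "core" keywords of a research area;
# betweenness centrality for "connecting" keywords that bridge areas.
core = sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5]
connecting = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])[:5]
print(core, connecting)
```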

A Collaborative Filtering System Combined with Users' Review Mining : Application to the Recommendation of Smartphone Apps (사용자 리뷰 마이닝을 결합한 협업 필터링 시스템: 스마트폰 앱 추천에의 응용)

  • Jeon, ByeoungKug;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.1-18 / 2015
  • The collaborative filtering (CF) algorithm has been popularly used for recommender systems in both academic and practical applications. A general CF system measures how similar users are and creates recommendations from the items favored by other people with similar tastes. It is therefore very important for CF to measure the similarities between users accurately, because the recommendation quality depends on it. In most cases, only users' explicit numeric ratings of items (i.e., quantitative information) have been used to calculate the similarities between users in CF. However, several studies have indicated that qualitative information, such as users' reviews of items, may help measure these similarities more accurately. Considering that many people now share honest opinions about the items they purchase, thanks to the advent of Web 2.0, users' reviews can be regarded as an informative source for identifying user preferences accurately. Against this background, this study proposes a new hybrid recommender system that combines CF with users' review mining. The proposed system is based on conventional memory-based CF, but it is designed to use both a user's numeric ratings and his or her text reviews of items when calculating similarities between users. Specifically, the system creates not only a user-item rating matrix but also a user-item review term matrix; it then calculates a rating similarity and a review similarity from each matrix and computes the final user-to-user similarity from these two similarities. As methods for calculating the review similarity between users, we propose two alternatives: one uses the frequency of commonly used terms, and the other uses the sum of the importance weights of the commonly used terms in users' reviews, where the importance weights are average TF-IDF (Term Frequency - Inverse Document Frequency) weights. To validate the applicability of the proposed system, we applied it to a recommender system for smartphone applications (hereafter, apps). At present, over a million apps are offered in each of the app stores operated by Google and Apple; because of this information overload, users have difficulty selecting the apps they really want, and app store operators such as Google and Apple have accumulated a huge amount of user reviews of apps. We therefore chose smartphone app stores as the application domain of our system. To collect the experimental data set, we built and operated a Web-based data collection system for about two weeks and obtained 1,246 valid responses (ratings and reviews) from 78 users. The experimental system was implemented using Microsoft Visual Basic for Applications (VBA) and SAS Text Miner, and to avoid distortion due to human intervention we did not apply any manual refinement during the review mining process. To examine the effectiveness of the proposed system, we compared its performance with that of a conventional CF system, evaluating the recommender systems by average MAE (mean absolute error). The experimental results showed that our proposed system (MAE = 0.7867-0.7881) slightly outperformed the conventional CF system (MAE = 0.7939).
They also showed that calculating the review similarity between users from TF-IDF weights (MAE = 0.7867) led to better recommendation accuracy than calculating it from the frequency of commonly used terms in reviews (MAE = 0.7881). The results of a paired-samples t-test showed that the proposed system with review similarity calculated from the frequency of commonly used terms outperformed the conventional CF system at the 10% statistical significance level. Our study sheds light on the use of users' review information to facilitate electronic commerce by recommending proper items to users.
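
A minimal sketch of the similarity combination described above: a rating similarity from the user-item rating matrix and a review similarity from TF-IDF vectors of each user's concatenated review text, blended into a final user-to-user similarity. The equal 0.5/0.5 blend and the cosine measures are assumptions; the original system was implemented in VBA and SAS Text Miner.

```python
# Hedged sketch of a hybrid user-to-user similarity (ratings + review TF-IDF).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def hybrid_user_similarity(ratings: pd.DataFrame, reviews: pd.DataFrame, alpha=0.5):
    """ratings: columns user_id, item_id, rating; reviews: columns user_id, text."""
    # 1) rating similarity: cosine over the (user x item) rating matrix
    R = ratings.pivot_table(index="user_id", columns="item_id", values="rating").fillna(0)
    sim_rating = cosine_similarity(R.values)

    # 2) review similarity: TF-IDF over each user's concatenated review text
    docs = reviews.groupby("user_id")["text"].apply(" ".join).reindex(R.index).fillna("")
    tfidf = TfidfVectorizer().fit_transform(docs.values)
    sim_review = cosine_similarity(tfidf)

    # 3) final user-to-user similarity as a weighted blend of the two (alpha assumed)
    return pd.DataFrame(alpha * sim_rating + (1 - alpha) * sim_review,
                        index=R.index, columns=R.index)
```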

Development of Assay Methods for Enterotoxin of Escherichia coli Employing the Hybridoma Technology (잡종세포종기법을 이용한 대장균의 장독소 측정법 개발)

  • Kim, Moon-Kyo;Cho, Myung-Je;Park, Kyung-Hee;Lee, Woo-Kon;Kim, Yoon-Won;Choi, Myung-Sik;Park, Joong-Soo;Cha, Chang-Yong;Chang, Woo-Hyun;Chung, Hong-Keun
    • The Journal of the Korean Society for Microbiology / v.21 no.1 / pp.151-161 / 1986
  • In order to develop sensitive and specific assay methods for E. coli heat-labile enterotoxin (LT), hybridoma cell lines secreting LT-specific monoclonal antibodies were obtained. LT was purified from a cell lysate of E. coli O15H11; the steps included disruption of the bacteria with a French press, DEAE Sephacel ion exchange chromatography, Sephadex G200 gel filtration, and a second DEAE Sephacel ion exchange chromatography, successively. Spleen cells from Balb/c mice immunized with the purified LT and HGPRT(-) plasmacytoma cells (P3×63Ag8.V653) were mixed and fused with 50% (w/v) PEG. Hybrid cells grew in 308 of 360 wells, and 13 of these wells secreted antibodies reacting to LT. Among these hybridoma cells, the 1G8-1D1 cell line was selected because it continuously produced a high-titered monoclonal antibody. Using culture supernatant and ascites from 1G8-1D1 cells, the monoclonal antibody was characterized, and an assay system for detecting enterotoxigenic E. coli was established as a double-sandwich enzyme-linked immunosorbent assay (ELISA). The following results were obtained. 1. The antibody titers of culture supernatant and ascites from 1G8-1D1 hybridoma cells were 512 and 102,400, respectively, by GM1-ELISA, and the immunoglobulin class was IgM. 2. The maximum absorption ratio of 1G8-1D1 cell culture supernatant to LT was 90% at an LT concentration of 300 μg/ml. The LT concentration giving a 50% absorption ratio was 103.45 μg, and the absorption ratio decreased with the reduction of LT concentration; this result suggests that the monoclonal antibody from the 1G8-1D1 hybridoma cells binds LT specifically. 3. The reactivities of 1G8-1D1 cell culture supernatant to LT and to V. cholerae enterotoxin (CT) were 0.886 and 0.142 (O.D. at 492 nm), respectively, as measured by GM1-ELISA, indicating that the 1G8-1D1 monoclonal antibody reacted specifically with LT but not with CT. 4. The addition of 0.1 ml of ascites to 0.6 mg and 0.12 mg of LT decreased the vascular permeability factor to 41% and 44%, respectively, but did not completely neutralize LT. 5. By double-sandwich ELISA using the monoclonal antibody, as little as 75 ng of purified LT per ml could be detected. 6. The levels of LT detected by the double-sandwich ELISA in culture supernatants of 14 wild strains of E. coli isolated from diarrhea patients were almost the same as those obtained by reverse passive latex agglutination.
