• Title/Summary/Keyword: Accuracy Rate


Computed Tomography-guided Localization with a Hook-wire Followed by Video-assisted Thoracic Surgery for Small Intrapulmonary and Ground Glass Opacity Lesions (폐실질 내에 위치한 소결질 및 간유리 병변에서 흉부컴퓨터단층촬영 유도하에 Hook Wire를 이용한 위치 선정 후 시행한 흉강경 폐절제술의 유용성)

  • Kang, Pil-Je;Kim, Yong-Hee;Park, Seung-Il;Kim, Dong-Kwan;Song, Jae-Woo;Do, Kyoung-Hyun
    • Journal of Chest Surgery, v.42 no.5, pp.624-629, 2009
  • Background: Making the histologic diagnosis of small pulmonary nodules and ground-glass opacity (GGO) lesions is difficult. CT-guided percutaneous needle biopsies often fail to provide enough specimen for diagnosis, and video-assisted thoracoscopic surgery (VATS) can be inefficient for non-palpable lesions. Preoperative localization of small intrapulmonary lesions provides a more obvious target and facilitates intraoperative resection. We evaluated the efficacy of CT-guided localization using a hook wire followed by VATS for the histologic diagnosis of small intrapulmonary nodules and GGO lesions. Material and Method: Eighteen patients (13 males) were included in this study from August 2005 to March 2008. Eighteen intrapulmonary lesions underwent preoperative localization with a CT-guided hook wire system prior to VATS resection. Clinical data such as the accuracy of localization, the conversion-to-thoracotomy rate, the operation time, postoperative complications, and the histology of the pulmonary lesion were collected retrospectively. Result: Eighteen VATS resections were performed in 18 patients. Preoperative CT-guided localization with a hook wire was successful in all patients. Dislodgement of the hook wire was observed in one case. There was no conversion to thoracotomy. The median diameter of the lesions was 8 mm (range: 3~15 mm). The median depth of the lesions from the pleural surface was 5.5 mm (range: 1~30 mm). The median interval between preoperative CT-guided localization with a hook wire and VATS was 34.5 min (range: 10~226 min). The median operative time was 43.5 min (range: 26~83 min). In two patients, clinically insignificant pneumothorax developed after CT-guided localization with a hook wire; there were no other complications. Histological examination confirmed 8 primary lung cancers, 3 metastases, 3 cases of inflammation, 2 intrapulmonary lymph nodes, and 2 other benign lesions. Conclusion: CT-guided localization with a hook wire followed by VATS for small intrapulmonary nodules and GGO lesions provided a low conversion-to-thoracotomy rate, a short operation time, and few localization-related or postoperative complications. This procedure was efficient for confirming intrapulmonary nodules and GGO lesions.

A Study on the Utilization of Two Furrow Combine (2조형(條型) Combine의 이용(利用)에 관(關)한 연구(硏究))

  • Lee, Sang Woo;Kim, Soung Rai
    • Korean Journal of Agricultural Science, v.3 no.1, pp.95-104, 1976
  • This study was conducted to test the harvesting of two rice varieties, Milyang #15 and Tong-il, with an imported two-furrow Japanese combine, and to find out its operational accuracy, its adaptability, and the feasibility of supplying this machine to rural areas in Korea. The results obtained in this study are summarized as follows. 1. The harvesting test of Milyang #15 was carried out 5 times starting from the optimum harvesting time, and the harvesting operation was good regardless of maturity. The field grain loss ratio and the rate of unthreshed paddy were both about 1 percent. 2. The field grain loss of harvested Tong-il increased from 5.13% to 10.34% as it matured, as shown in Fig. 1. Considering this, the combine mechanism needs mechanical improvement for harvesting the Tong-il variety. 3. The rate of unthreshed paddy for the short-stemmed Tong-il variety averaged 1.6 percent, because the sample combine used in this study was developed on the basis of long-stemmed varieties in Japan; owing to the uneven stem length of Tong-il rice, some ears could not reach the teeth of the threshing drum. 4. The cracking rates of brown rice, which depend mostly on the revolution speed of the threshing drum (240-350 rpm), were below 1 percent for both Tong-il and Milyang #15, with no significant difference between the two varieties. 5. Since the ears of the Tong-il variety are covered by its leaves, a lot of trash was produced, especially when threshed as raw material, and the cleaning and trash-discharge mechanisms were clogged very often; these two mechanisms therefore need improvement. 6. The sample combine, whose track pressure was 0.19 kg/cm², could travel on soft ground with sinkage of up to 25 cm, as shown in Fig. 3; considering reaping-height adjustment, however, only about 5 cm of sinkage can be tolerated when driving the combine over irregularly sinking ground without readjusting the reaping height. 7. The harvesting expense per hectare with the sample combine, whose annual coverage is 4.7 ha under the conditions of 40 workable days per year, 60% of days suitable for harvesting, 56% field efficiency, a working speed of 0.273 m/sec, and 8 workable hours per day, is competitive with conventional harvesting expenses; it is therefore reasonable to introduce this combine to rural areas in Korea if mechanical improvements are made so that it can harvest Tong-il rice. 8. In order to harvest Tong-il rice, the two-furrow combine needs several mechanical improvements: the divider should be controllable so as not to touch the ears of paddy, the space between the feeding chain and the threshing drum should be reduced, the trash-treatment apparatus should be improved, the fore-and-aft adjustment interval should be enlarged, and the track width should be widened so that it can travel on soft ground.


The Diagnostic Yield and Complications of Percutaneous Needle Aspiration Biopsy for the Intrathoracic Lesions (경피적 폐생검의 진단성적 및 합병증)

  • Jang, Seung Hun;Kim, Cheal Hyeon;Koh, Won Jung;Yoo, Chul-Gyu;Kim, Young Whan;Han, Sung Koo;Shim, Young-Soo
    • Tuberculosis and Respiratory Diseases, v.43 no.6, pp.916-924, 1996
  • Background: Percutaneous needle aspiration biopsy (PCNA) is one of the most frequently used diagnostic methods for intrathoracic lesions. Previous studies have reported a wide range of diagnostic yields, from 28 to 98%; however, the yield has increased with accumulated experience and with improvements in needles and image-guiding systems. We analysed the results of PCNA performed over one year to evaluate the diagnostic yield, the rate and severity of complications, and the factors affecting the diagnostic yield. Method: 287 PCNAs performed in 236 patients from January 1994 to December 1994 were analysed retrospectively. The intrathoracic lesions were targeted and aspirated with a 21-23 G Chiba needle under fluoroscopic guidance. Occasionally, a 19-20 G biopsy gun was used to obtain core tissue specimens. The specimens were submitted for microbiologic and cytologic examination, and for histopathologic examination when core tissue was obtained. Diagnostic yields and complication rates for benign and malignant lesions were calculated based on the patients' charts. Diagnostic yields according to the size and shape of the lesions were compared with the chi-square test (p<0.05). Results: Consolidative lesions accounted for 19.9% and nodular or mass lesions for 80.1%. The lesions were located in the right upper lobe in 26.3% of cases, the right middle lobe in 6.4%, the right lower lobe in 21.2%, the left upper lobe in 16.8%, the left lower lobe in 10.6%, and the mediastinum in 1.3%; lesions extending over two lobes accounted for 17.4% of cases. In the final diagnosis there were 74 patients with benign lesions and 142 patients with malignant lesions, and a confirmative diagnosis was not made in 22 patients despite all available diagnostic methods. Two patients had lung cancer and pulmonary tuberculosis concomitantly. In these 236 patients, PCNA diagnosed benign lesions in 62.2% (42 patients) of patients with such lesions and malignant lesions in 82.4% (117 patients). For patients in whom the first PCNA failed to make a diagnosis, the procedure was repeated, and the cumulative diagnostic yield increased to 44.6%, 60.8%, and 62.2% for benign lesions and to 73.4%, 81.7%, and 82.4% for malignant lesions over serial PCNAs. Thoracotomy was performed in 9 patients with benign lesions and in 43 patients with malignant lesions. PCNA and thoracotomy showed the same pathologic result in 44.4% (4 patients) of benign lesions and 58.1% (25 patients) of malignant lesions. Thoracotomy confirmed malignant lesions in 4 patients with a benign PCNA result and benign lesions in 2 patients with a malignant PCNA result. Across the 287 PCNAs there were hemoptysis in 1.0% (3 cases), blood-tinged sputum in 19.2% (55 cases), pneumothorax in 12.5% (36 cases), and fever in 1.0% (3 cases). Hemoptysis and blood-tinged sputum required no therapy; 8 cases of pneumothorax required insertion of a conventional chest tube or pigtail catheter, and fever subsided within 48 hours in all cases. Diagnostic yield did not differ according to the size or shape of the lesion. Conclusion: PCNA shows a relatively high diagnostic yield with mild complications, but the accuracy of histologic diagnosis needs to be improved.
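The abstract notes that diagnostic yields by lesion size and shape were compared with a chi-square test (p<0.05). A minimal sketch of that kind of comparison is shown below; the 2×2 counts are invented for illustration and are not the study's data, and SciPy is assumed to be available.

```python
# Hedged illustration of a chi-square comparison of diagnostic yield by lesion size.
# The counts are hypothetical; only the procedure mirrors the abstract's analysis.
from scipy.stats import chi2_contingency

observed = [
    [52, 18],   # smaller lesions: diagnostic vs. non-diagnostic PCNA
    [107, 59],  # larger lesions:  diagnostic vs. non-diagnostic PCNA
]
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")  # difference significant if p < 0.05
```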


APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference, 1993.06a, pp.1051-1054, 1993
  • The International Atomic Energy Agency's Statute in Article III.A.5 allows it "to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy". Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable. A containment measure is one designed to take advantage of structural characteristics, such as containers, tanks or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment; such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) provision by the State of information such as design information describing nuclear installations, accounting reports listing nuclear material inventories, receipts and shipments, documents amplifying and clarifying reports as applicable, and notification of international transfers of nuclear material; (b) collection by the IAEA of information through inspection activities such as verification of design information, examination of records and reports, measurement of nuclear material, examination of containment and surveillance measures, and follow-up activities in case of unusual findings; and (c) evaluation of the information provided by the State and of that collected by inspectors, to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies.
To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation. For the purpose of analysing implementation strategies, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of the various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal, and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulating smaller amounts that add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: restriction of inspectors' access; falsification of records, reports and other material balance data; replacement of nuclear material, e.g. use of dummy objects; falsification of measurements or of their evaluation; and interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur. All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities using material accountancy as a fundamental measure. The strength of material accountancy is that it allows any diversion to be detected independently of the diversion route taken.
Material accountancy, however, detects a diversion only after it has actually happened; it is powerless to physically prevent it and can only deter State authorities from contemplating a diversion through the risk of early detection. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would require difficult and expensive collection. Above all, a realistic appraisal of the truth needs sound human judgment.
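The abstract argues that safeguards conclusions involve factors with a degree of fuzziness requiring human judgment, but it does not spell out the fuzzy formulation. Purely as an illustration of the kind of fuzzy set machinery the title refers to, the sketch below grades a hypothetical anomaly indicator with trapezoidal membership functions; the linguistic labels and breakpoints are assumptions, not taken from the paper.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: rises on [a, b], equals 1 on [b, c], falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def anomaly_memberships(inventory_difference_pct):
    """Hypothetical fuzzy grading of a material-balance anomaly, expressed as a
    percentage of a significant quantity; the breakpoints are illustrative only."""
    return {
        "negligible": trapezoid(inventory_difference_pct, -1, 0, 5, 15),
        "suspicious": trapezoid(inventory_difference_pct, 5, 15, 40, 60),
        "serious": trapezoid(inventory_difference_pct, 40, 60, 100, 101),
    }

print(anomaly_memberships(20.0))  # e.g. fully "suspicious", not yet "serious"
```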


Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.63-83, 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical auxiliary indices. Pattern analysis, however, is difficult and has been computerized to a lesser degree than users need. In recent years there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate matter. Such studies typically find a point that matches a meaningful pattern and then measure performance after n days, assuming a purchase at that point; since this approach calculates virtual revenues, there can be large disparities with reality. Existing research tries to find patterns with price-prediction power, whereas this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Although some patterns have been reported to have price predictability, there have been no reports of performance in the actual market. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern-recognition accuracy. In this study, the 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate in each group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. Performance is measured assuming that both the buy and the sell are actually executed, which reflects real trading. We tested three ways of calculating the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and computes the vertices. In the second, the high-low-line zig-zag method, a high price that meets the n-day high-price line is taken as a peak and a low price that meets the n-day low-price line is taken as a valley. In the third, the swing wave method, a central high price that is higher than the n high prices on its left and right is taken as a peak, and a central low price that is lower than the n low prices on its left and right is taken as a valley (a minimal sketch of this rule follows the abstract). The swing wave method was superior to the other methods in our tests, which we interpret to mean that trading after the pattern is confirmed complete is more effective than trading while the pattern is still unfinished.
Because the number of cases in this simulation was far too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also performed the simulation using walk-forward analysis (WFA), which separates the test section from the application section, so that we could respond appropriately to market changes. In this study we optimize at the level of a stock portfolio, because optimizing the variables for each individual stock carries a risk of over-optimization; we selected 20 constituent stocks to gain the effect of diversified investment while avoiding over-fitting. We tested the KOSPI market divided into six categories. The small-cap portfolio was the most successful and the high-volatility portfolio was the second best, which suggests that some price volatility is needed for patterns to take shape, but that higher volatility is not always better.
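A minimal sketch of the swing wave turning-point rule described above, assuming plain Python lists of per-bar high and low prices; the function name and the choice of n are illustrative, not taken from the paper.

```python
def swing_points(highs, lows, n):
    """Swing wave rule: a bar is a peak if its high exceeds the n highs on each
    side, and a valley if its low is below the n lows on each side."""
    peaks, valleys = [], []
    for i in range(n, len(highs) - n):
        side_highs = highs[i - n:i] + highs[i + 1:i + n + 1]
        side_lows = lows[i - n:i] + lows[i + 1:i + n + 1]
        if highs[i] > max(side_highs):
            peaks.append(i)
        if lows[i] < min(side_lows):
            valleys.append(i)
    return peaks, valleys

# Example: indices of peaks and valleys with n = 2 bars on each side
highs = [10, 11, 13, 12, 11, 12, 14, 13, 12, 11]
lows  = [ 9, 10, 12, 11, 10, 11, 13, 12, 11, 10]
print(swing_points(highs, lows, 2))  # -> ([2, 6], [4])
```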

Variation on Estimated Values of Radioactivity Concentration According to the Change of the Acquisition Time of SPECT/CT (SPECT/CT의 획득시간 증감에 따른 방사능농도 추정치의 변화)

  • Kim, Ji-Hyeon;Lee, Jooyoung;Son, Hyeon-Soo;Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology, v.25 no.2, pp.15-24, 2021
  • Purpose: In the early stages of its dissemination, SPECT/CT was noted for its excellent correction methods and for qualitative functions based on fusion images, and interest in and use of its quantitative functions has been increasing with the recent introduction of companion diagnostics and therapy (theranostics). Unlike PET/CT, conditions such as the collimator type and detector rotation make image acquisition and reconstruction challenging for absolute quantification with SPECT/CT. In this study we therefore examined how increasing or decreasing the total acquisition time, through either the number of projections or the acquisition time per projection, affects the estimated radioactivity concentration. Materials and Methods: After a 9,293 mL cylindrical phantom was filled with sterile water and 91.76 MBq of 99mTc was diluted in it, a standard image was acquired with a total acquisition time of 600 sec (10 sec/frame × 120 frames, matrix size 128 × 128), and the volume sensitivity and calibration factor were verified. Based on the standard image, comparative images were obtained by increasing or decreasing the total acquisition time to 60 (-90%), 150 (-75%), 300 (-50%), 450 (-25%), 900 (+50%), and 1,200 (+100%) sec. For each condition, the acquisition time per projection was set to 1.0, 2.5, 5.0, 7.5, 15.0, and 20.0 sec (number of projections fixed at 120 frames), and the number of projections was set to 12, 30, 60, 90, 180, and 240 frames (time per projection fixed at 10 sec). Based on the counts measured in the volume of interest of each acquired image, the percentage variation of the contrast-to-noise ratio (CNR) was used as the qualitative assessment, and the percentage variation of the estimated radioactivity concentration as the quantitative assessment. The relationship between the estimated radioactivity concentration (cps/mL) and the actual radioactivity concentration (Bq/mL) was analyzed using the recovery coefficient (RC) as the indicator. Results: The results [CNR, radioactivity concentration, RC] for each change in the number of projections at total-acquisition-time changes of -90%, -75%, -50%, -25%, +50%, and +100% were: [-89.5%, +3.90%, 1.04] at -90%, [-77.9%, +2.71%, 1.03] at -75%, [-55.6%, +1.85%, 1.02] at -50%, [-33.6%, +1.37%, 1.01] at -25%, [-33.7%, +0.71%, 1.01] at +50%, and [+93.2%, +0.32%, 1.00] at +100%. The results [CNR, radioactivity concentration, RC] for each change in the acquisition time per projection at the same total-acquisition-time changes were: [-89.3%, -3.55%, 0.96] at -90%, [-73.4%, -0.17%, 1.00] at -75%, [-49.6%, -0.34%, 1.00] at -50%, [-24.9%, 0.03%, 1.00] at -25%, [+49.3%, -0.04%, 1.00] at +50%, and [+99.0%, +0.11%, 1.00] at +100%. Conclusion: In SPECT/CT, the total counts obtained and the resulting image quality (CNR) changed in proportion to the increase or decrease of the total acquisition time. In contrast, the quantitative evaluation by absolute quantification changed by less than 5% (-3.55 to +3.90%) under all experimental conditions, maintaining quantitative accuracy (RC 0.96 to 1.04).
Considering reduction rather than extension of the total acquisition time, a reduced total acquisition time is applicable to quantitative analysis without significant loss and is judged to be clinically useful. This study also shows that, for the same total scanning time, changing the acquisition time per projection causes smaller qualitative and quantitative fluctuations than changing the number of projections.
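A small sketch of the quantitative indicators used above: converting a measured count-rate concentration to an estimated activity concentration via a calibration factor, then computing the recovery coefficient and the percentage variation against the standard image. The calibration factor and count-rate values are placeholders, not the study's data; only the phantom concentration is taken from the abstract.

```python
# Hedged illustration of the quantitative metrics; numeric inputs are placeholders.

def estimated_concentration(voi_cps_per_ml, calibration_factor_bq_per_cps):
    """Convert the VOI count-rate concentration (cps/mL) to an estimated
    activity concentration (Bq/mL) using the system calibration factor."""
    return voi_cps_per_ml * calibration_factor_bq_per_cps

def recovery_coefficient(estimated_bq_per_ml, actual_bq_per_ml):
    """RC = estimated / actual; values near 1.0 indicate accurate quantification."""
    return estimated_bq_per_ml / actual_bq_per_ml

def percent_variation(comparative_value, standard_value):
    """Percentage change of a comparative-image metric relative to the standard image."""
    return (comparative_value - standard_value) / standard_value * 100.0

actual = 91.76e6 / 9293.0                               # phantom concentration, Bq/mL (from the abstract)
standard_est = estimated_concentration(1.23, 8.0e3)     # placeholder standard-image estimate
short_scan_est = estimated_concentration(1.28, 8.0e3)   # placeholder shortened-scan estimate
print(round(recovery_coefficient(short_scan_est, actual), 2),
      round(percent_variation(short_scan_est, standard_est), 1))
```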

Establishment of an Analytical Method for Prometryn Residues in Clam Using GC-MS (GC-MS를 이용한 바지락 중 prometryn 잔류분석법 확립)

  • Chae, Young-Sik;Cho, Yoon-Jae;Jang, Kyung-Joo;Kim, Jae-Young;Lee, Sang-Mok;Chang, Moon-Ik
    • Korean Journal of Food Science and Technology, v.45 no.5, pp.531-536, 2013
  • We developed a simple, sensitive, and specific analytical method for prometryn using gas chromatography-mass spectrometry (GC-MS). Prometryn is a selective herbicide used for the control of annual grasses and broadleaf weeds in cotton and celery crops. On the basis of its high specificity, sensitivity, and reproducibility, combined with simple analytical operation, we propose that our newly developed method is suitable for use as a Ministry of Food and Drug Safety (MFDS, Korea) official method in the routine analysis of individual pesticide residues, and that it is applicable to clams. The separation conditions for GC-MS were optimized using a DB-5MS capillary column (30 m × 0.25 mm, 0.25 μm) with helium as the carrier gas at a flow rate of 0.9 mL/min. We achieved high linearity over the concentration range 0.02-0.5 mg/L (correlation coefficient, r² > 0.998). Our method is specific and sensitive, with a limit of quantitation of 0.04 mg/kg. The average recovery in clams ranged from 84.0% to 98.0%, and the reproducibility of measurements, expressed as the coefficient of variation (CV%), ranged from 3.0% to 7.1%. Our analytical procedure showed high accuracy and acceptable sensitivity with regard to the analytical requirements for prometryn in fishery products. Finally, we successfully applied the method to the determination of residue levels in fishery products and found that none of the analyzed samples contained detectable residues.
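As an illustration of the validation statistics reported above (recovery % and CV%), the snippet below computes them for a set of hypothetical replicate measurements; the fortification level and replicate values are made up and are not the paper's data.

```python
import statistics

def recovery_percent(measured_mg_per_kg, fortified_mg_per_kg):
    """Recovery (%) of a fortified (spiked) sample: measured / fortified x 100."""
    return measured_mg_per_kg / fortified_mg_per_kg * 100.0

def cv_percent(values):
    """Coefficient of variation (%): sample standard deviation / mean x 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical replicate results at a 0.2 mg/kg fortification level
replicates = [0.171, 0.184, 0.178, 0.190, 0.175]
recoveries = [recovery_percent(m, 0.2) for m in replicates]
print(round(statistics.mean(recoveries), 1), round(cv_percent(replicates), 1))
```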

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems, v.22 no.3, pp.129-142, 2016
  • Customer product reviews have become an important factor in purchase decisions. Customers believe that reviews written by others who have already experienced the product offer more reliable information than that provided by sellers. However, because there are so many products and reviews, the advantage of e-commerce can be overwhelmed by rising search costs; reading all of the reviews to find the pros and cons of a certain product can be exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews, and different methods have been developed to classify and recommend useful reviews, primarily using feedback from customers about the helpfulness of reviews. Most shopping websites provide customer reviews and offer the following information: the average preference for a product, the number of customers who participated in preference voting, and the preference distribution. Most information on the helpfulness of product reviews is collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful and places the most helpful favorable and the most helpful critical reviews at the top of the list of product reviews. Some companies also predict the usefulness of a review based on attributes such as its length, author, and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, we need to build a term-document matrix: all words are extracted from the reviews, and a matrix is built from the number of occurrences of each term in each review. Because there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Researchers therefore delete some terms on the basis of sparsity, since sparse words have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting terms that are useless for review classification. We propose a neutrality index for selecting the words to be deleted. Many words appear in both classes (useful and not useful), and these words have little or even a negative effect on classification performance; we therefore define words that appear similarly in both classes as neutral terms and delete them. After deleting sparse words, we select additional words to delete on the basis of neutrality. We tested our approach with Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV program, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, with a 60% ratio of useful votes to total votes as the threshold for classifying useful and not-useful reviews. We randomly selected 1,500 useful and 1,500 not-useful reviews for each product category, then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared classification performance in terms of precision, recall, and F-measure.
Although the performance varies with the product category and data set, deleting terms by both sparsity and neutrality showed the best F-measure for the two classification algorithms. However, deleting terms by sparsity alone showed the best recall for Information Gain, and using all terms showed the best precision for SVM. Term-deletion methods and classification algorithms therefore need to be chosen carefully for each data set.
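A minimal sketch of the neutral-term idea described above, assuming whitespace-tokenized review texts. The paper's exact neutrality index is not given in the abstract, so the score below (terms whose document frequencies are similar in the useful and not-useful classes score near 1) is an illustrative stand-in, as are the function names and the threshold.

```python
from collections import Counter

def neutrality_scores(useful_docs, not_useful_docs):
    """Score each term by how similarly it appears in the two classes:
    1.0 = identical document frequency in both classes (most neutral)."""
    useful_df, not_useful_df = Counter(), Counter()
    for doc in useful_docs:
        useful_df.update(set(doc.split()))
    for doc in not_useful_docs:
        not_useful_df.update(set(doc.split()))
    scores = {}
    for term in set(useful_df) | set(not_useful_df):
        p_u = useful_df[term] / len(useful_docs)
        p_n = not_useful_df[term] / len(not_useful_docs)
        scores[term] = 1.0 - abs(p_u - p_n) / (p_u + p_n)
    return scores

def neutral_terms(scores, threshold=0.9):
    """Terms to drop from the term-document matrix before classification."""
    return {term for term, score in scores.items() if score >= threshold}
```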

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers, v.54 no.1, pp.96-110, 2017
  • Most vehicle detection studies using conventional or wide-angle lenses have blind spots when detecting vehicles to the rear, and the image is vulnerable to noise and a variety of external environments. In this paper, we propose a method for detection in harsh external environments with noise, blind spots, and similar conditions. First, using a fish-eye lens helps minimize blind spots compared with a wide-angle lens. Because nonlinear radial distortion increases as the lens angle grows, calibration was performed after initializing and optimizing the distortion constant in order to ensure accuracy. In addition, along with calibration, the original image was analyzed to remove fog and correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal generally takes a considerable amount of computation time, so to reduce the calculation time fog was removed with the widely used Dark Channel Prior algorithm. Gamma correction was used to correct brightness; to determine the gamma value needed for correction, a brightness and contrast evaluation was conducted on the image, and to reduce calculation time the evaluation used only a part of the image rather than the whole. Once the brightness and contrast values were calculated, they were used to decide the gamma value and to correct the entire image. The brightness correction and fog removal were processed in parallel, and the results were registered into a single image to minimize the total calculation time. The HOG feature extraction method was then used to detect the vehicle in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate compared with the existing vehicle detection method.
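A rough sketch of the gamma-correction step described above: estimate a gamma value from the brightness of a small region of the frame (rather than the whole image), then apply it to the entire frame with a lookup table. OpenCV and NumPy are assumed to be available, and the mapping from mean brightness to gamma is illustrative rather than the paper's formula.

```python
import cv2
import numpy as np

def estimate_gamma(frame_bgr, roi=None, target_mean=128.0):
    """Pick a gamma from the mean brightness of a region of interest.
    The log-ratio mapping below is an illustrative choice, not the paper's."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    if roi is not None:
        x, y, w, h = roi
        gray = gray[y:y + h, x:x + w]
    mean = max(float(gray.mean()), 1.0)
    gamma = np.log(mean / 255.0) / np.log(target_mean / 255.0)
    return float(np.clip(gamma, 0.3, 3.0))

def apply_gamma(frame_bgr, gamma):
    """Apply gamma correction to the whole frame with an 8-bit lookup table."""
    lut = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255.0).astype(np.uint8)
    return cv2.LUT(frame_bgr, lut)

# frame = cv2.imread("rear_view.png")   # hypothetical input frame
# corrected = apply_gamma(frame, estimate_gamma(frame, roi=(0, 0, 160, 120)))
```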

A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems, v.21 no.4, pp.93-110, 2015
  • A recommender system recommends products to customers who are likely to be interested in them. Based on automated information-filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in a number of different domains such as recommending Web pages, books, movies, music, and products. However, CF has a well-known shortcoming: it finds neighbors whose preferences are like those of the target customer and recommends the products those customers have liked most, so it works properly only when there is a sufficient number of ratings on common products. When customer ratings are scarce, the neighborhood is formed inaccurately, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on developing novel algorithms under the assumption of a single profile created from users' ratings of items, purchase transactions, or Web access logs. With the advent of big data, companies have come to collect more data and to use a greater variety of large-scale information, and many recognize that utilizing big data is important because it improves competitiveness and creates new value. In particular, the use of personal big data in recommender systems is on the rise, because personal big data enable more accurate identification of users' preferences and behaviors. The proposed recommendation methodology is as follows. First, multimodal user profiles are created from personal big data in order to grasp the preferences and behavior of users from various viewpoints; we derive five user profiles based on rating, site preference, demographic, Internet usage, and topic-in-text information. Next, the similarity between users is calculated from the profiles and neighbors are identified from the results. One of three ensemble approaches is applied to calculate the similarity: the similarity of a combined profile, the average of the per-profile similarities, or the weighted average of the per-profile similarities. Finally, the products that the neighbors prefer most are recommended to the target users. For the experiments, we used demographic data and a very large volume of Web log transactions for 5,000 panel users of a company specializing in analyzing Web site rankings. R was used to implement the proposed recommender system, and SAS E-miner was used to conduct the topic analysis with keyword search. To evaluate recommendation performance, we used 60% of the data for training and 40% for testing, and 5-fold cross-validation was conducted to enhance the reliability of the experiments. The widely used F1 metric, which gives equal weight to recall and precision, was employed for evaluation. The proposed methodology achieved a significant improvement over the single-profile CF algorithm; in particular, the ensemble approach using the weighted average similarity showed the highest performance.
Specifically, the improvement in F1 is 16.9 percent for the ensemble approach using the weighted average similarity and 8.1 percent for the ensemble approach using the average of the per-profile similarities. From these results, we conclude that the multimodal profile ensemble approach is a viable solution to the problems encountered when customer ratings are scarce. This study is significant in suggesting what kinds of information can be used to create profiles in a big data environment and how they can be combined and utilized effectively. However, our methodology should be studied further for real-world application: we need to compare differences in recommendation accuracy by applying the proposed method to different recommendation algorithms and then identify which combination shows the best performance.
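A compact sketch of the weighted-average-similarity ensemble and the neighborhood recommendation step described above, assuming each profile has already been turned into a user-user similarity matrix. NumPy is assumed; the weights, neighborhood size, and function names are illustrative, not the paper's settings.

```python
import numpy as np

def ensemble_similarity(per_profile_sims, weights=None):
    """Combine the user-user similarity matrices computed from each profile
    (e.g. rating, site preference, demographic, Internet usage, topic).
    With weights=None this is the simple average ensemble; with weights it is
    the weighted average ensemble. Weight values here are assumptions."""
    sims = np.stack(per_profile_sims)               # shape: (profiles, users, users)
    if weights is None:
        return sims.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), sims, axes=1)  # weighted average over profiles

def top_n_recommendations(combined_sim, ratings, target, n_neighbors=30, n_items=10):
    """Score unrated items by similarity-weighted neighbor ratings and return the top-N item indices."""
    order = np.argsort(combined_sim[target])[::-1]
    neighbors = [u for u in order if u != target][:n_neighbors]
    weights = combined_sim[target, neighbors]
    scores = weights @ ratings[neighbors] / (weights.sum() + 1e-9)
    scores[ratings[target] > 0] = -np.inf           # do not re-recommend already-rated items
    return np.argsort(scores)[::-1][:n_items]
```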