• Title/Summary/Keyword: fraction algorithm

An Efficient Machine Learning-based Text Summarization in the Malayalam Language

  • P Haroon, Rosna;Gafur M, Abdul;Nisha U, Barakkath
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.6
    • /
    • pp.1778-1799
    • /
    • 2022
  • Automatic text summarization is the process of condensing a large document into a shorter text that retains the significant information. Malayalam is one of the most difficult languages used in India, spoken mainly in Kerala and in Lakshadweep. Natural language processing research in Malayalam is relatively limited owing to the complexity of the language and the scarcity of available resources. In this paper, an approach to summarizing Malayalam documents is proposed, based on training a model with the Support Vector Machine (SVM) classification algorithm. Different features of the text are taken into account when training the model so that the system can extract the most important content from the input text. The classifier assigns sentences to four classes (most important, important, average, and least significant), and from this classification the system builds a summary of the input document. The user can select a compression ratio, and the system outputs a summary of the corresponding length. Model performance is measured on Malayalam documents from different genres as well as documents from a single domain, using the content evaluation measures precision, recall, F-score, and relative utility. The obtained precision and recall values show that the model is reliable and more relevant than comparable summarizers.
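
A minimal sketch of the idea described above, not the authors' code: an SVM ranks sentences by predicted importance class, and the user's compression ratio determines the fraction of sentences kept. The feature vectors (position, length, keyword overlap) and labels are toy placeholders.

```python
# Sketch: SVM-based extractive summarization with a user-chosen compression ratio.
import numpy as np
from sklearn.svm import SVC

# Toy per-sentence features: [position, length, keyword overlap] (assumed features)
X_train = np.array([[0.0, 0.9, 0.8], [0.5, 0.4, 0.2], [0.9, 0.2, 0.1], [0.1, 0.8, 0.7]])
y_train = np.array([3, 1, 0, 2])  # 0 = least significant ... 3 = most important

clf = SVC(kernel="rbf").fit(X_train, y_train)

def summarize(sentences, features, compression_ratio):
    """Keep the top `compression_ratio` fraction of sentences, ranked by class."""
    classes = clf.predict(features)
    n_keep = max(1, int(len(sentences) * compression_ratio))
    ranked = sorted(range(len(sentences)), key=lambda i: -classes[i])
    keep = sorted(ranked[:n_keep])          # restore original document order
    return [sentences[i] for i in keep]

doc = ["First sentence.", "Second sentence.", "Third sentence.", "Fourth sentence."]
X_doc = np.array([[0.0, 0.7, 0.9], [0.3, 0.5, 0.3], [0.7, 0.3, 0.1], [1.0, 0.6, 0.6]])
print(summarize(doc, X_doc, compression_ratio=0.5))
```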

Accuracy Evaluation of Supervised Classification by Using Morphological Attribute Profiles and Additional Band of Hyperspectral Imagery (초분광 영상의 Morphological Attribute Profiles와 추가 밴드를 이용한 감독분류의 정확도 평가)

  • Park, Hong Lyun;Choi, Jae Wan
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.25 no.1
    • /
    • pp.9-17
    • /
    • 2017
  • Hyperspectral imagery is used for land cover classification with principal component analysis (PCA) and minimum noise fraction (MNF) transforms to reduce data dimensionality and noise. Recently, studies on supervised classification using various features carrying spectral and spatial information have been carried out. In this study, principal component bands and the normalized difference vegetation index (NDVI) were utilized in supervised classification for land cover mapping. To exploit information in the hyperspectral imagery not captured by the principal component bands, we sought to increase the classification accuracy by adding the NDVI. In addition, extended attribute profiles (EAP) generated with morphological filters were used as input data. The random forest algorithm, one of the representative supervised classifiers, was employed, and the classification accuracy obtained with the various EAP-based feature combinations was compared. Two areas were selected for the experiments, and quantitative evaluation was performed using reference data. The proposed method achieved the highest classification accuracies, 85.72% and 91.14%, compared with the existing algorithms. Further research will need to develop a supervised classification algorithm and additional input datasets to improve the accuracy of land cover classification using hyperspectral imagery.
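
The feature-stacking setup described above can be sketched as follows; this is not the authors' implementation, and the image cube, band indices, EAP features, and labels are random placeholders standing in for real imagery.

```python
# Sketch: stack PCA bands + NDVI + EAP features per pixel, classify with random forest.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
h, w, bands = 64, 64, 100                      # toy hyperspectral cube dimensions
cube = rng.random((h * w, bands))              # pixels x spectral bands

pcs = PCA(n_components=5).fit_transform(cube)  # principal-component bands
nir, red = cube[:, 80], cube[:, 40]            # assumed NIR / red band indices
ndvi = (nir - red) / (nir + red + 1e-9)        # normalized difference vegetation index
eap = rng.random((h * w, 8))                   # placeholder for morphological EAP features

X = np.hstack([pcs, ndvi[:, None], eap])       # per-pixel feature vector
y = rng.integers(0, 4, size=h * w)             # toy land-cover labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```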

The Evaluation of the Dose Calculation Algorithm (AAA)'s Accuracy for Radiation Therapy on Inhomogeneous Tissues Using an FFF Beam (FFF빔을 사용한 불균질부 방사선치료 시 선량계산 알고리즘(AAA)의 정확성 평가)

  • Kim, In Woo;Chae, Seung Hoon;Kim, Min Jung;Kim, Bo Gyoum;Kim, Chan Yong;Park, So Yeon;Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.26 no.2
    • /
    • pp.321-327
    • /
    • 2014
  • Purpose: To verify the accuracy of Eclipse's dose calculation algorithm (AAA: Anisotropic Analytical Algorithm) for radiation treatment of inhomogeneous tissues with an FFF beam, by comparing the dose distribution from the TPS with the measured distribution. Materials and Methods: After acquiring CT images for radiation treatment, varying tumor location and size, using solid water phantoms, cork, and a chest tumor phantom made of paraffin, we established a 6 MV photon treatment plan for chest SABR with our radiation treatment planning system, Eclipse's AAA. According to the completed plan, using our TrueBeam STx (Varian Medical Systems, Palo Alto, CA), we irradiated the chest tumor phantom, into which EBT2 films were inserted, and compared the dose values of the treatment plan with those measured in the phantom over the inhomogeneous tissue. Results: The difference between the TPS and the measurement at the medial target was 1.28~2.7%; at the sides of the target, including inhomogeneous tissues, the difference was 2.02~7.40% (Ant), 4.46~14.84% (Post), 0.98~7.12% (Rt), 1.36~4.08% (Lt), 2.38~4.98% (Sup), and 0.94~3.54% (Inf). Conclusion: This study shows the possibility of dose calculation errors caused by the characteristics of the FFF beam and of inhomogeneous tissues when performing SBRT on inhomogeneous regions. SBRT, a widely used treatment technique, demands high accuracy because it delivers a high dose in a small number of fractions. Ideal treatment should therefore be possible if these errors are minimized at the planning stage, through further study of tissue characteristics such as inhomogeneity and of the FFF beam.
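
The comparison reported above boils down to a relative difference between TPS-calculated and film-measured doses at matching points. A minimal sketch of that arithmetic, with illustrative numbers rather than the paper's data:

```python
# Sketch: percent difference between TPS dose and film-measured dose.
def percent_diff(d_tps, d_film):
    """Relative deviation of the measurement from the TPS prediction, in percent."""
    return 100.0 * abs(d_tps - d_film) / d_tps

tps_doses  = [10.0, 9.6, 8.8]   # Gy, hypothetical TPS values at selected points
film_doses = [9.8, 9.1, 8.2]    # Gy, hypothetical EBT2 film readings

for point, (dt, df) in enumerate(zip(tps_doses, film_doses), start=1):
    print(f"point {point}: {percent_diff(dt, df):.2f} %")
```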

A Comparative Study of Elementary School Mathematics Textbooks between Korea and Japan - Focused on the 4th Grade - (한국과 일본의 초등학교 수학교과서 비교 연구 - 4학년을 중심으로 -)

  • Lee, Jae-Chun;Kim, Seon-Yu;Kang, Hong-Jae
    • Journal of Elementary Mathematics Education in Korea
    • /
    • v.13 no.1
    • /
    • pp.1-15
    • /
    • 2009
  • This research provides a useful reference for future textbook revision through a comparative analysis with the 4th-grade elementary school mathematics textbook of Japan. The results are as follows. First, the Korean curriculum emphasizes reasonable problem-solving ability built on mathematical knowledge and skill, while the Japanese curriculum focuses on discretion and competence for life, emphasizing individual learning and developing each student's power of self-directed learning and thinking. The proportion of instruction devoted to mathematics is high in both countries, but Japan allocates more hours than Korea. Second, the Korean textbook is composed of 8 units; each title is given as a key word, and the subsequent objectives are phrased in the 'Shall we do ~' form. In contrast, the chapter composition of the Japanese textbook differs from chapter to chapter, and chapter titles are phrased in the 'Let's do ~' form. Moreover, the Korean textbook is organized around the current topic, whereas the Japanese textbook consists of independent segments within the current topic according to the study contents. Third, the Japanese textbook has students understand decimals as an extension of the decimal system through measurement units (ℓ, km, kg) and then learn the operations by algorithm. In Korea, students learn fractions before decimals; in Japan, students learn decimals before fractions. For geometry, in Korea, constructing an angle with a vertex and sides comes after the concepts of angle, vertex, and side are explained; in Japan, sides and vertex are shown first in order to present the angle.

A Historical, Mathematical, Psychological Analysis on Ratio Concept (비 개념에 대한 역사적, 수학적, 심리적 분석)

  • 정은실
    • School Mathematics
    • /
    • v.5 no.4
    • /
    • pp.421-440
    • /
    • 2003
  • It is difficult for learners to fully understand the ratio concept, which forms the basis of proportional reasoning. Proportional reasoning is, on the one hand, the capstone of children's elementary school arithmetic and, on the other hand, the cornerstone of all that is to follow. Yet school mathematics has centered on teaching algorithms without dealing with the concept's essence and meaning. The purpose of this study is to analyze the essence of the ratio concept from a multidimensional viewpoint and to suggest directions for improving its teaching. To this end, the historical development of the ratio concept was analyzed. Most mathematicians today regard a ratio as a fraction and, in effect, identify ratios with what earlier mathematicians called the denominations of ratios; Euclid did not. In line with Euclid's theory, a ratio should not be represented in the same way as a fraction, nor a proportion as an equation, whereas in the rival theory they might be. The two theories of ratio ran alongside each other, but the differences between them were not always clearly stated. A ratio can be interpreted as a function of an ordered pair of numbers or magnitude values: a numerical expression of how much there is of one quantity in relation to another. It can thus be interpreted as a binary vector, distinguishing the absolute aspect of the vector (its size) from the comparative aspect (its slope). The analysis shows that the basic structure of the ratio concept implies 'proportionality', formalized through the transition from understanding the invariance of the internal ratio to understanding the constancy of the external ratio. In this study, fittingness (comparison) and covariation were examined as the intuitive origins of proportion and proportional reasoning; these form the basis of protoquantitative knowledge. The developmental sequence of proportional reasoning was also examined. The first attempts at quantifying such relationships usually rely on additive reasoning, which appears as a precursor to proportional reasoning. Preproportions are followed by logical proportions, which refer to understanding the logical relationships between the four terms of a proportion. Even though developmental psychologists often speak of proportional reasoning as a global ability, other psychologists insist that its evolution is characterized by a gradual increase in local competence.
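
As a small illustration, not taken from the paper, of the internal/external ratio relation the abstract alludes to: for nonzero quantities, equality of internal ratios is equivalent to constancy of the external ratio.

```latex
% Invariance of the internal ratio corresponds to constancy of the external
% (between-measure) ratio, for nonzero a, b, c, d:
\[
  \frac{a}{b} = \frac{c}{d}
  \iff ad = bc
  \iff \frac{a}{c} = \frac{b}{d}.
\]
% Example: 4/6 = 2/3 (equal internal ratios), and equivalently
% 4/2 = 6/3 = 2 (a constant external ratio).
```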

An extension of multifactor dimensionality reduction method for detecting gene-gene interactions with the survival time (생존시간과 연관된 유전자 간의 교호작용에 관한 다중차원축소방법의 확장)

  • Oh, Jin Seok;Lee, Seung Yeoun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.25 no.5
    • /
    • pp.1057-1067
    • /
    • 2014
  • Many genetic variants have been identified as associated with complex diseases such as hypertension, diabetes, and cancer through genome-wide association studies (GWAS). However, a serious missing-heritability problem remains, since the proportion of heritability explained by the variants found in GWAS is weak, less than 10~15%. Gene-gene interaction studies may help explain the missing heritability, because most complex disease mechanisms involve more than a single SNP, including multiple SNPs and gene-gene interactions. This paper focuses on gene-gene interactions with a survival phenotype, extending the multifactor dimensionality reduction (MDR) method to the accelerated failure time (AFT) model. The standardized residual from the AFT model is used as the score for classifying multi-locus genotypes into high- and low-risk groups, after which the MDR algorithm is applied. We call this method AFT-MDR and compare its power with those of Surv-MDR and Cox-MDR in simulation studies; a real dataset of Korean leukemia patients is also analyzed. The power of AFT-MDR was found to be greater than that of Surv-MDR and comparable with that of Cox-MDR, but very sensitive to the censoring fraction.
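
A simplified sketch of the AFT-MDR idea, not the authors' implementation: use standardized residuals from an AFT-type model as risk scores and label each two-SNP genotype cell high- or low-risk by its mean residual, as in MDR. Censoring and covariates are ignored here for brevity; the paper uses a proper AFT fit.

```python
# Sketch: MDR-style high/low risk labeling of genotype cells from residual scores.
import numpy as np

rng = np.random.default_rng(1)
n = 200
snp1 = rng.integers(0, 3, n)                # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, n)
log_t = rng.normal(5.0, 1.0, n)             # toy log survival times

# Standardized residuals from a covariate-free null model (a stand-in for the
# standardized AFT residuals used in the paper).
resid = (log_t - log_t.mean()) / log_t.std()

# MDR step: classify each genotype combination by the sign of its mean residual.
risk = {}
for g1 in range(3):
    for g2 in range(3):
        cell = resid[(snp1 == g1) & (snp2 == g2)]
        risk[(g1, g2)] = "high" if cell.size and cell.mean() > 0 else "low"
print(risk)
```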

Performance Characteristics of 3D GSO PET/CT Scanner (Philips GEMINI PET/CT) (3차원 GSO PET/CT 스캐너(Philips GEMINI PET/CT의 특성 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Byeong-Il;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.4
    • /
    • pp.318-324
    • /
    • 2004
  • Purpose: The Philips GEMINI is a newly introduced whole-body GSO PET/CT scanner. In this study, the performance of the scanner, including spatial resolution, sensitivity, scatter fraction, and noise equivalent count rate (NECR), was measured following the NEMA NU2-2001 standard protocol and compared with that of scanners using LSO and BGO crystals. Methods: The GEMINI combines the Philips ALLEGRO PET and MX8000 D multi-slice CT scanners. The PET scanner has 28 detector segments, each an array of 29 by 22 GSO crystals (4 × 6 × 20 mm), covering an axial FOV of 18 cm. PET data for measuring spatial resolution, sensitivity, scatter fraction, and NECR were acquired in 3D mode according to the NEMA NU2 protocols (coincidence window: 8 ns; energy window: 409~664 keV). For the spatial resolution measurement, images were reconstructed with FBP using a ramp filter and with an iterative reconstruction algorithm, 3D RAMLA. Data for the sensitivity measurement were acquired using the NEMA sensitivity phantom filled with F-18 solution and surrounded by 1~5 aluminum sleeves, after confirming that the dead time loss did not exceed 1%. To measure NECR and scatter fraction, 1110 MBq of F-18 solution was injected into a NEMA scatter phantom 70 cm in length, and a dynamic scan with 20-min frame duration was acquired over 7 half-lives. Oblique sinograms were collapsed into transaxial slices using the single-slice rebinning method, and the true-to-background (scatter + random) ratio for each slice and frame was estimated. The scatter fraction was determined by averaging the true-to-background ratios of the last 3 frames, in which the dead time loss was below 1%. Results: Transverse and axial resolutions at 1 cm radius were (1) 5.3 and 6.5 mm (FBP) and (2) 5.1 and 5.9 mm (3D RAMLA). Transverse radial, transverse tangential, and axial resolutions at 10 cm were (1) 5.7, 5.7, and 7.0 mm (FBP) and (2) 5.4, 5.4, and 6.4 mm (3D RAMLA). Attenuation-free sensitivity was 3,620 counts/sec/MBq at the center of the transaxial FOV and 4,324 counts/sec/MBq at 10 cm offset from the center. The scatter fraction was 40.6%, and the peak true count rate and NECR were 88.9 kcps @ 12.9 kBq/mL and 34.3 kcps @ 8.84 kBq/mL, respectively. These characteristics are better than those of the ECAT EXACT PET scanner with BGO crystals. Conclusion: The results of this field test demonstrate the high resolution, sensitivity, and count rate performance of this 3D PET/CT scanner with GSO crystals. The data provided here will be useful for comparative studies with other 3D PET/CT scanners using BGO or LSO crystals.
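
The two count-rate quantities reported above follow the standard NEMA definitions. A back-of-the-envelope sketch with illustrative numbers, not the paper's raw data:

```python
# Sketch: NEMA scatter fraction and noise equivalent count rate (NECR).
def scatter_fraction(trues, scatter):
    """SF = S / (T + S)."""
    return scatter / (trues + scatter)

def necr(trues, scatter, randoms):
    """NECR = T^2 / (T + S + R)."""
    return trues**2 / (trues + scatter + randoms)

T, S, R = 60.0, 41.0, 20.0   # kcps, hypothetical true/scatter/random rates
print(f"scatter fraction: {100 * scatter_fraction(T, S):.1f} %")
print(f"NECR: {necr(T, S, R):.1f} kcps")
```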

IMAGING SPECTROMETRY FOR DETECTING FECES AND INGESTA ON POULTRY CARCASSES

  • Park, Bo-Soon;Windham, William R.;Lawrence, Kurt C.;Smith, Douglas P.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.3106-3106
    • /
    • 2001
  • Imaging spectrometry, or hyperspectral imaging, is a recent development that makes quantitative and qualitative measurement possible for food quality and safety. This paper presents research results showing that a hyperspectral imaging system can be used effectively to detect fecal (duodenal, cecal, and colonic) and ingesta contamination on poultry carcasses from different feed meals (wheat, milo, and corn with soybean) for poultry safety inspection. A hyperspectral imaging system was developed and tested for identifying fecal and ingesta surface contamination on poultry carcasses. Hypercube image data spanning both spectral and spatial domains between 430 and 900 nm were acquired from carcasses with fecal and ingesta contamination. A transportable hyperspectral imaging system, including fiber-optic line lights, motorized lens control for line scans, and hypercube image data from carcasses contaminated with different feeds, is presented. A calibration method for the hyperspectral imaging system is demonstrated using different lighting sources and reflectance panels. Principal Component and Minimum Noise Fraction transformations are discussed for characterizing the hyperspectral images, and further image processing algorithms, such as the band ratio of dual-wavelength images followed by histogram stretching and thresholding, are demonstrated to identify fecal and ingesta materials on poultry carcasses. This algorithm could be further applied to real-time classification of fecal and ingesta contamination in the poultry processing line.
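
A minimal sketch of the dual-wavelength band-ratio step described above, not the authors' pipeline: ratio two spectral band images, stretch the histogram, and threshold to flag contaminated pixels. The band indices and threshold are assumed for illustration.

```python
# Sketch: band ratio of two wavelength images, then stretch and threshold.
import numpy as np

rng = np.random.default_rng(2)
cube = rng.random((100, 100, 48))           # toy hypercube: rows x cols x bands
b_num, b_den = 20, 12                       # hypothetical band indices for the ratio

ratio = cube[:, :, b_num] / (cube[:, :, b_den] + 1e-9)
stretched = (ratio - ratio.min()) / (ratio.max() - ratio.min())  # linear stretch to [0, 1]
mask = stretched > 0.8                      # threshold chosen for illustration
print("flagged pixels:", int(mask.sum()))
```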

A Numerical Study of the 2-D Cold Flow for a Quebec City Stoker Incinerator (큐벡시 스토커 소각로 2차원 비반응 유동장 수치해석)

  • 박지영;송은영;장동순
    • Journal of Energy Engineering
    • /
    • v.2 no.3
    • /
    • pp.268-275
    • /
    • 1993
  • A series of parametric investigations is performed to resolve the flow characteristics of a Quebec City stoker incinerator. The parameters considered in this study are five internal configurations (the Quebec City stoker itself and modified versions), the primary air velocity, the injection velocity and angle of the secondary air, and the reduction of the stoker exit area. A control-volume-based finite-difference method after Patankar, together with the power-law scheme, is employed for discretization. Pressure-velocity coupling is resolved using the SIMPLEC algorithm, and the standard two-equation k-ε model is incorporated for turbulence closure. The size of the recirculation region, the turbulent viscosity, the mass fraction of the secondary air, and the pressure drop are calculated to analyze the characteristics of the flow field. The results are physically acceptable and are discussed in detail. The flow field of the Quebec City stoker shows a strong recirculation zone together with high turbulence intensity over the upper part of the incinerator.
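
The power-law scheme mentioned above weights convection against diffusion through the cell Peclet number when assembling the finite-volume coefficients. A minimal sketch of Patankar's power-law profile A(|P|) = max(0, (1 - 0.1|P|)^5):

```python
# Sketch: Patankar's power-law profile used in finite-volume discretization.
def power_law(peclet):
    """A(|P|) = max(0, (1 - 0.1|P|)^5); vanishes for |P| >= 10 (pure upwinding)."""
    return max(0.0, (1.0 - 0.1 * abs(peclet)) ** 5)

for p in (0.0, 2.0, 5.0, 10.0, 20.0):
    print(f"Pe = {p:5.1f} -> A(|P|) = {power_law(p):.4f}")
```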

Development of Web-based Off-site Consequence Analysis Program and its Application for ILRT Extension (격납건물종합누설률시험 주기연장을 위한 웹기반 소외결말분석 프로그램 개발 및 적용)

  • Na, Jang-Hwan;Hwang, Seok-Won;Oh, Ji-Yong
    • Journal of the Korean Society of Safety
    • /
    • v.27 no.5
    • /
    • pp.219-223
    • /
    • 2012
  • For off-site consequence analysis at nuclear power plants, the MELCOR Accident Consequence Code System (MACCS) II code is widely used. In this study, a web-based off-site consequence analysis program (OSCAP) using the MACCS II code was developed for Integrated Leak Rate Test (ILRT) interval extension and Level 3 probabilistic safety assessment (PSA), and verification and validation (V&V) of the program were performed. The main input data for the MACCS II code are meteorological data, population distribution data, and source term information; however, generating these inputs for an off-site consequence analysis requires considerable time and effort. For example, meteorological data are collected from each nuclear power site in real time, but the formats of the raw data differ from site to site. To reduce the effort and time needed for risk assessments, the web-based OSCAP includes an automatic processing module that converts the raw data collected from each site into the MACCS II input format. The program also automatically converts the latest population data from Statistics Korea, the national statistical office, into the population distribution input format of the MACCS II code. For the source term data, the program includes the release fraction of each source term category obtained from Modular Accident Analysis Program (MAAP) code analysis and the core inventory data from ORIGEN. These analysis results for each plant in Korea are stored in a database module of the web-based OSCAP, so the user can select the default source term data of each plant without editing the source term input directly.
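
A simplified sketch of the kind of conversion module described above: parse site-specific raw meteorological records and emit rows in one unified layout. Both the raw record format and the output columns here are hypothetical placeholders, not the actual MACCS II input specification.

```python
# Sketch: normalize site-specific raw weather records into one tabular layout.
import csv, io

raw_site_a = """2012-01-01 00:00,WD=270,WS=3.4,STAB=D,RAIN=0.0
2012-01-01 01:00,WD=265,WS=2.9,STAB=E,RAIN=0.2"""

def parse_site_a(text):
    """Yield (timestamp, wind_dir, wind_speed, stability, rain) from site A's format."""
    for line in text.splitlines():
        stamp, *fields = line.split(",")
        rec = dict(f.split("=") for f in fields)
        yield stamp, float(rec["WD"]), float(rec["WS"]), rec["STAB"], float(rec["RAIN"])

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["timestamp", "wind_dir_deg", "wind_speed_mps", "stability", "rain_mm"])
for row in parse_site_a(raw_site_a):
    writer.writerow(row)
print(out.getvalue())
```

A per-site parser plus a shared output writer is one natural way to absorb the format differences the abstract mentions; each new site then only needs its own small parser.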