• Title/Summary/Keyword: Data Accuracy


Diagnostic Performance of Digital Breast Tomosynthesis with the Two-Dimensional Synthesized Mammogram for Suspicious Breast Microcalcifications Compared to Full-Field Digital Mammography in Stereotactic Breast Biopsy (정위적 유방 조직검사 시 미세석회화 의심 병변에서의 디지털 유방단층영상합성법과 전역 디지털 유방촬영술의 진단능 비교)

  • Jiwon Shin;Ok Hee Woo;Hye Seon Shin;Sung Eun Song;Kyu Ran Cho;Bo Kyoung Seo
    • Journal of the Korean Society of Radiology
    • /
    • v.83 no.5
    • /
    • pp.1090-1103
    • /
    • 2022
  • Purpose To evaluate the diagnostic performance of digital breast tomosynthesis (DBT) with the two-dimensional synthesized mammogram (2DSM), compared to full-field digital mammography (FFDM), for suspicious breast microcalcifications ahead of stereotactic biopsy, and to assess lesion visibility on the two modalities. Materials and Methods This retrospective study involved 189 patients with microcalcifications, histopathologically verified by stereotactic breast biopsy, who underwent DBT with 2DSM and FFDM between January 8, 2015, and January 20, 2020. Two radiologists, blinded to the histopathologic outcome, independently assessed all microcalcifications based on the Breast Imaging Reporting and Data System (BI-RADS) and additionally evaluated lesion visibility using a five-point scoring scale. Results Overall, the inter-observer agreement was excellent (0.9559). With category 4A treated as negative, owing to its low probability of malignancy and to avoid diluting the malignancy criteria in our study, McNemar tests confirmed no significant difference between the two modalities in detecting microcalcifications with a high potential for malignancy (4B, 4C, or 5; p = 0.1573); however, they showed a significant difference in detecting microcalcifications with a high potential for benignancy (4A; p = 0.0009). DBT with 2DSM demonstrated better lesion visibility and diagnostic performance than FFDM in dense breasts. Conclusion DBT with 2DSM is superior to FFDM in terms of overall diagnostic accuracy and lesion visibility for benign microcalcifications in dense breasts. This study suggests a promising role for DBT with 2DSM as a supporting tool for stereotactic biopsy in women with dense breasts and suspicious breast microcalcifications.
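The McNemar test used in this study compares two paired diagnostic readings through their discordant cells only. A minimal sketch follows; the discordant counts here are hypothetical for illustration, since the abstract does not report the underlying 2×2 tables:

```python
import math

def mcnemar(b, c):
    """McNemar chi-square test on paired binary readings.

    b: cases called positive by modality A only
    c: cases called positive by modality B only
    Concordant cells cancel out and do not enter the statistic.
    """
    stat = (b - c) ** 2 / (b + c)
    # For 1 degree of freedom, the chi-square survival function
    # reduces to erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical discordant counts, for illustration only
stat, p = mcnemar(12, 5)
```

With equal discordant counts the statistic is zero and p = 1; the test is symmetric in b and c, so only the imbalance between the two modalities matters.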

Assessment of Additional MRI-Detected Breast Lesions Using the Quantitative Analysis of Contrast-Enhanced Ultrasound Scans and Its Comparability with Dynamic Contrast-Enhanced MRI Findings of the Breast (유방자기공명영상에서 추가적으로 발견된 유방 병소에 대한 조영증강 초음파의 정량적 분석을 통한 진단 능력 평가와 동적 조영증강 유방 자기공명영상 결과와의 비교)

  • Sei Young Lee;Ok Hee Woo;Hye Seon Shin;Sung Eun Song;Kyu Ran Cho;Bo Kyoung Seo;Soon Young Hwang
    • Journal of the Korean Society of Radiology
    • /
    • v.82 no.4
    • /
    • pp.889-902
    • /
    • 2021
  • Purpose To assess the diagnostic performance of contrast-enhanced ultrasound (CEUS) for additional MR-detected enhancing lesions and to determine whether kinetic pattern results comparable to dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) of the breast can be obtained using the quantitative analysis of CEUS. Materials and Methods In this single-center prospective study, a total of 71 additional MR-detected breast lesions were included. CEUS examination was performed, and lesions were categorized according to the Breast Imaging Reporting and Data System (BI-RADS). The sensitivity, specificity, and diagnostic accuracy of CEUS were calculated by comparing the BI-RADS category to the final pathology results. The degree of agreement between CEUS and DCE-MRI kinetic patterns was evaluated using weighted kappa. Results On CEUS, 46 lesions were assigned BI-RADS category 4B, 4C, or 5, while 25 lesions were assigned category 3 or 4A. The diagnostic performance of CEUS for enhancing lesions on DCE-MRI was excellent, with 84.9% sensitivity, 94.4% specificity, and 97.8% positive predictive value. A total of 57/71 (80%) lesions had correlating kinetic patterns, showing good agreement (weighted kappa = 0.66) between CEUS and DCE-MRI. Benign lesions showed excellent agreement (weighted kappa = 0.84), and invasive ductal carcinoma (IDC) showed good agreement (weighted kappa = 0.69). Conclusion The diagnostic performance of CEUS for additional MR-detected breast lesions was excellent. Accurate kinetic pattern assessment, fairly comparable to DCE-MRI, can be obtained for benign and IDC lesions using CEUS.
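Weighted kappa, used in this study to compare CEUS and DCE-MRI kinetic patterns, penalizes disagreements by their distance on the ordinal scale. A self-contained sketch follows; the category labels and the choice of linear weights are illustrative assumptions, since the abstract states only "weighted kappa":

```python
from collections import Counter

def weighted_kappa(ratings_a, ratings_b, categories, scheme="linear"):
    """Weighted kappa for two raters over an ordered category list."""
    n = len(ratings_a)
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}

    def weight(i, j):
        d = abs(i - j) / (k - 1)          # normalized ordinal distance
        return d if scheme == "linear" else d * d

    # Observed disagreement across the paired ratings
    observed = sum(weight(idx[a], idx[b])
                   for a, b in zip(ratings_a, ratings_b)) / n
    # Expected disagreement under independent marginals
    pa, pb = Counter(ratings_a), Counter(ratings_b)
    expected = sum(
        weight(i, j) * (pa[categories[i]] / n) * (pb[categories[j]] / n)
        for i in range(k) for j in range(k)
    )
    return 1 - observed / expected

# Hypothetical kinetic-pattern categories, for illustration only
cats = ["persistent", "plateau", "washout"]
kappa = weighted_kappa(
    ["washout", "plateau", "persistent", "washout"],
    ["washout", "washout", "persistent", "washout"],
    cats,
)
```

Perfect agreement yields kappa = 1; a near-miss on an adjacent category (plateau vs. washout) costs less than a persistent-vs-washout disagreement, which is the point of weighting.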

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" and "micro-scale" calibrations are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the micro-scale calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These partial GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that selected link. Selected-link analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the lowest %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31%, respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This estimate is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground-count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground-count-based segment adjustment factors are satisfactory for long-range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
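The calibration described in this abstract adjusts friction factor curves until the model's trip length frequency (TLF) matches the observed OD TLF. A minimal sketch of that loop's building blocks follows, using a hypothetical three-zone network with binned costs; all zone data, costs, and friction values are illustrative, not from the study:

```python
def gravity_trips(productions, attractions, cost, friction):
    """Singly constrained gravity model:
    T_ij = P_i * A_j * F(c_ij) / sum_k A_k * F(c_ik)."""
    n = len(productions)
    trips = []
    for i in range(n):
        w = [attractions[j] * friction[cost[i][j]] for j in range(n)]
        total = sum(w)
        trips.append([productions[i] * x / total for x in w])
    return trips

def tlf(trips, cost):
    """Trip length frequency: share of trips falling in each cost bin."""
    freq, total = {}, 0.0
    for i, row in enumerate(trips):
        for j, t in enumerate(row):
            freq[cost[i][j]] = freq.get(cost[i][j], 0.0) + t
            total += t
    return {c: v / total for c, v in freq.items()}

def adjust_friction(friction, model_tlf, observed_tlf):
    """One calibration step: scale each friction factor by the ratio of
    observed to modeled TLF in its cost bin, as in micro-scale calibration."""
    return {c: f * observed_tlf.get(c, 0.0) / model_tlf[c]
            for c, f in friction.items() if model_tlf.get(c)}

# Hypothetical 3-zone example; cost values index into binned friction factors
cost = [[1, 2, 3], [2, 1, 2], [3, 2, 1]]
friction = {1: 1.0, 2: 1.0, 3: 1.0}
trips = gravity_trips([100, 200, 150], [120, 180, 150], cost, friction)
```

Row sums of the trip table reproduce the zonal productions by construction; in the study, an adjustment loop of this kind was repeated until the partial GM TLF matched the OD TLF for each of the three trip types.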


A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) by using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine the graph of past price movements and predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The image in which each graph is drawn is 40 × 40 pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices. Each image is converted into a combination of three matrices in order to express the color value on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets: 80% of the total dataset is used for training, and the remaining 20% for validation. Finally, CNN classifiers are trained using the images of the training dataset.
Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layer. In the pooling layer, a 2 × 2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation function for the convolution layer and the hidden layers was ReLU (Rectified Linear Unit), and that for the output layer was the softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (from 2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset using 80% of the total dataset (1,560 samples), and the validation dataset using the remaining 20% (390 samples). The independent variables of the experimental dataset comprised twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), CCI (commodity channel index), and so on. To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective from the perspective of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
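The layer dimensions reported for CNN-FG can be sanity-checked with standard convolution/pooling size arithmetic. The sketch below assumes unpadded ("valid") convolutions with stride 1, since the abstract does not report padding or stride:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

# A 40 x 40 RGB fluctuation-graph image through the reported layers
s = 40
s = conv_out(s, 5)            # 5x5 convolution, 6 filters
s = conv_out(s, 5)            # 5x5 convolution, 9 filters
s = conv_out(s, 2, stride=2)  # 2x2 max pooling
flattened = s * s * 9         # feature-map volume before the dense layers
```

Under these assumptions the pooled feature maps are 16 × 16 × 9 = 2,304 values, which the 900- and 32-node hidden layers then compress before the 2-node softmax output; the paper may of course have arranged the filters differently.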

A Morphologic Study of Head and Face for Sasang Constitution (사상체질별(四象體質別) 두면부(頭面部)의 형태학적(形態學的) 특징(特徵))

  • Ko, Byung-Hee;Song, Il-Byung;Cho, Yong-Jin;Choi, Chang-Seok;Kim, Jong-Weon;Hong, Suck-Chull;Lee, Eui-Ju;Lee, Sang-Yong;Seo, Jeong-Sug
    • Journal of Sasang Constitutional Medicine
    • /
    • v.8 no.1
    • /
    • pp.101-186
    • /
    • 1996
  • The clinical application of constitutional diagnosis is the most important part of Sasang constitutional medicine and has been studied in various ways; however, this is the first study to apply the morphologic characteristics of the face. For quantitative analysis of the correlation between the Sasang constitution and the shape of the face, the head and face of 170 subjects were measured by Martin's anthropometric method, and the following were analyzed by constitution: a) the height measurements and component ratios from the Gnathion to each part of the face; b) the depth measurements and component ratios from the T-projection to each part of the face; c) the breadth measurements and component ratios between the parts of the facial breadth; d) the area ratios of every part of the face; e) the characteristics of each part of the face; f) the contour line of the forehead; and g) the result of discriminant analysis of the constitutions. The authors obtained the following results: 1. The characteristics of Taeum-In: (1) The height, breadth, and T-projected measurements tended toward the maximum values in general. (2) The lower ophthalmic height and the area of the lower ophthalmic part were maximum. (3) The Pronasal and Subnasal T-projected lengths were minimum, so Taeum-In is characterized by a depressed mid-face (nasal part). (4) In the ratio of breadth to T-projected length, the T-projected value was minimum. (5) The areas of the nose, Alare, middle face, and lower face were maximum, and the area of the eye was minimum. (6) The curvature of the eyebrow was minimum. (7) The projection of the jaw (Pogonion T-projected length) was maximum. (8) The breadth of the eye was minimum. (9) The forehead tended to project to the right in general. 2.
The characteristics of Soeum-In: (1) In all cases of projected length, the measurement value was minimum. (2) The lower ophthalmic height and the area of the lower ophthalmic part were minimum. (3) Relative to the Pupillare T-projected length, the Pronasal and Subnasal T-projected lengths were minimum, so the Soeum-In face shape is flat. (4) The areas of the eye, mouth, and forehead were maximum, and the areas of the nose, Alare, middle face, and lower face were minimum. (5) The curvature of the eyebrow was maximum. (6) The projection of the mouth was minimum. (7) The jaw was flat. (8) The breadth of the eye was maximum. (9) The forehead tended to project to the left in general. 3. The characteristics of Soyang-In: (1) In most cases of height, the measurement value was minimum. (2) Relative to the Pupillare T-projected length, each ratio of projected length was maximum, so the Soyang-In face shape has many protrusions. (3) In the ratio of breadth to T-projected length, the T-projected value was maximum. (4) The area of the mouth was minimum. (5) The inclination of the forehead was minimum. (6) The projection of the mouth was maximum. (7) The breadth of the eye was minimum. (8) The forehead tended to project to the left in general. (9) The middle face was protruded. 4. Discriminant analysis of the constitutions: the overall accuracy of the discriminant was 85.58%; by constitution, Taeum-In was 90.5%, Soeum-In 70.8%, and Soyang-In 89.5%. The discriminant accuracy for the three constitutional groups exceeded the chance probability of 36.55% by 49.03 percentage points. 5. Suggestions: (1) Data collection and analysis should be continued. (2) The characteristics of each part of the face should be further subdivided by constitution. (3) The Moiré analysis method should be supplemented. (4) The morphologic characteristics of the whole body should be studied.
(5) A computer program for constitution diagnosis should be developed. (6) To increase the utility of this method, the measurement should be automated.
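The 36.55% chance probability quoted in the abstract is consistent with the proportional chance criterion, which scores a random classifier that guesses in proportion to the observed group sizes. A sketch under that assumption follows; the per-constitution counts below are hypothetical, since the abstract does not report the group split of the 170 subjects:

```python
def proportional_chance(group_counts):
    """Proportional chance criterion: sum of squared group proportions,
    i.e. the accuracy of guessing in proportion to group frequencies."""
    n = sum(group_counts)
    return sum((c / n) ** 2 for c in group_counts)

# Hypothetical split of the 170 subjects across the three constitutions
chance = proportional_chance([84, 48, 38])
```

An observed discriminant accuracy of 85.58% against a chance level in this range indicates that the discriminant functions carry substantial information beyond group frequencies alone.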


A Comparative Study of Subset Construction Methods in OSEM Algorithms using Simulated Projection Data of Compton Camera (모사된 컴프턴 카메라 투사데이터의 재구성을 위한 OSEM 알고리즘의 부분집합 구성법 비교 연구)

  • Kim, Soo-Mee;Lee, Jae-Sung;Lee, Mi-No;Lee, Ju-Hahn;Kim, Joong-Hyun;Kim, Chan-Hyeong;Lee, Chun-Sik;Lee, Dong-Soo;Lee, Soo-Jin
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.41 no.3
    • /
    • pp.234-240
    • /
    • 2007
  • Purpose: In this study we propose a block-iterative method for reconstructing Compton-scattered data. This study shows that the well-known expectation maximization (EM) approach, along with its accelerated version based on the ordered subsets principle, can be applied to the problem of image reconstruction for the Compton camera. This study also compares several methods of constructing subsets for optimal performance of our algorithms. Materials and Methods: Three reconstruction algorithms were implemented: simple backprojection (SBP), EM, and ordered subset EM (OSEM). For OSEM, the projection data were grouped into subsets in a predefined order. Three different schemes for choosing nonoverlapping subsets were considered: scatter angle-based subsets, detector position-based subsets, and both scatter angle- and detector position-based subsets. EM and OSEM with 16 subsets were performed with 64 and 4 iterations, respectively. The performance of each algorithm was evaluated in terms of computation time and normalized mean-squared error. Results: Both EM and OSEM clearly outperformed SBP in all aspects of accuracy. OSEM with 16 subsets and 4 iterations, which is equivalent to the standard EM with 64 iterations, was approximately 14 times faster in computation time than the standard EM. In OSEM, all three schemes for choosing subsets yielded similar results in computation time as well as normalized mean-squared error. Conclusion: Our results show that the OSEM algorithm, which has proven useful in emission tomography, can also be applied to the problem of image reconstruction for the Compton camera. With properly chosen subset construction methods and moderate numbers of subsets, our OSEM algorithm significantly improves computational efficiency while keeping the original quality of the standard EM reconstruction. The OSEM algorithm with both scatter angle- and detector position-based subsets is the most practical choice.
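The OSEM update described here applies the EM multiplicative correction using only one subset of the projection data per sub-iteration, which is where the reported ~14x speed-up comes from. A minimal dense-matrix sketch follows; the 4-bin, 3-voxel system is hypothetical (real Compton-camera system models are far larger and sparse):

```python
def osem(y, A, subsets, n_iter, x0=None):
    """Ordered-subsets EM for y ~ Poisson(A x), x >= 0.

    y: measured counts; A: system matrix (rows = projection bins);
    subsets: list of row-index lists processed in a fixed order.
    With a single subset containing all rows this reduces to MLEM.
    """
    n = len(A[0])
    x = [1.0] * n if x0 is None else list(x0)
    for _ in range(n_iter):
        for rows in subsets:
            # Subset sensitivity: column sums over this subset's rows
            sens = [sum(A[i][j] for i in rows) for j in range(n)]
            # Forward projection restricted to the subset
            fp = {i: sum(A[i][j] * x[j] for j in range(n)) for i in rows}
            # Multiplicative EM update from the measured/estimated ratio
            back = [sum(A[i][j] * y[i] / fp[i] for i in rows)
                    for j in range(n)]
            x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x

# Hypothetical 4-bin, 3-voxel system with consistent data y = A x_true
A = [[1.0, 0.5, 0.2],
     [0.3, 1.0, 0.4],
     [0.2, 0.6, 1.0],
     [0.5, 0.5, 0.5]]
x_true = [2.0, 1.0, 3.0]
y = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(4)]
est = osem(y, A, subsets=[[0, 2], [1, 3]], n_iter=20)
```

A useful invariant: after every full-data MLEM pass (one subset containing all rows), the total sensitivity-weighted estimate exactly matches the total measured counts; OSEM with balanced subsets preserves this per subset while converging in far fewer passes.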

Studies on the analysis of phytin by the Chelatometric method (Chelate 법(法)에 의(依)한 Phytin 분석(分析)에 관(關)한 연구(硏究))

  • Shin, Jai-Doo
    • Applied Biological Chemistry
    • /
    • v.10
    • /
    • pp.1-13
    • /
    • 1968
  • Phytin is a salt (mainly of calcium and magnesium) of phytic acid, and its purity and molecular formula can be determined by assaying the contents of phosphorus, calcium, and magnesium in phytin. In order to devise a new method for the quantitative analysis of the three elements in phytin, the following chelatometric method was developed: 1) As the pretreatment for phytin analysis, the sample was ashed at 550–600°C in the presence of concentrated nitric acid. This dry process is more accurate than the wet process. 2) Phosphorus, calcium, and magnesium were analyzed by both the conventional method and the new method described here, for the phytin sample decomposed by the dry process. The ashed phytin solution in hydrochloric acid was partitioned into cation and anion fractions by means of a cation exchange resin. A portion of the cation fraction was adjusted to pH 7.0, then readjusted to pH 10, and titrated with standard EDTA solution using the BT [Eriochrome Black T] indicator to obtain the combined value of calcium and magnesium. Another portion of the cation fraction was brought to pH 7.0, and a small volume of standard EDTA solution was added to it. The pH was adjusted to 12–13 with 8 N KOH, and it was titrated with a standard EDTA solution in the presence of N-N [2-hydroxy-1-(2-hydroxy-4-sulfo-1-naphthylazo)-3-naphthoic acid] diluted powder indicator in order to obtain the calcium content. The magnesium content was calculated from the difference between the two values. From the anion fraction, the magnesium ammonium phosphate precipitate was obtained. The precipitate was dissolved in hydrochloric acid, and a standard EDTA solution was added to it. The solution was adjusted to pH 7.0, then readjusted to pH 10.0 with a buffer solution, and titrated with a standard magnesium sulfate solution in the presence of BT indicator to obtain the phosphorus content.
The analytical data for phosphorus, calcium, and magnesium were 98.9%, 97.1%, and 99.1%, respectively, relative to the theoretical values for the formula C6H6O24P6Mg4CaNa2·5H2O. Statistical analysis indicated good agreement between the theoretical and experimental values. On the other hand, the values observed for the three elements by the conventional method were 92.4%, 86.8%, and 93.8%, respectively, revealing a remarkable difference from the theoretical values. 3) When sodium phytate was admixed with starch and subjected to the analysis of phosphorus, calcium, and magnesium by the chelatometric method, their recovery was almost 100%. 4) In order to confirm the accuracy of this method, phytic acid was reacted with calcium chloride and magnesium chloride in the molar ratio of phytic acid : calcium chloride : magnesium chloride = 1 : 5 : 20 to obtain sodium phytate containing one calcium atom and four magnesium atoms per molecule. The analytical data for phosphorus, calcium, and magnesium were coincident with those determined by the aforementioned method. The new method, employing the dry process, ion exchange resin, and chelatometric assay of phosphorus, calcium, and magnesium, is considered accurate and rapid for the determination of phytin.
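The titration scheme above yields Ca and Mg from two direct EDTA titrations (pH 10 for the sum, pH 12–13 for Ca alone, Mg by difference) and P from a back-titration of the dissolved magnesium ammonium phosphate precipitate. The mole arithmetic can be sketched as follows; all volumes and molarities are hypothetical, for illustration only:

```python
def mmol(volume_ml, molarity):
    """Millimoles delivered by a titrant volume."""
    return volume_ml * molarity

def calcium_magnesium(v_total_ml, v_ca_ml, edta_m):
    """Ca from the pH 12-13 titration (N-N indicator); Ca + Mg from the
    pH 10 titration (BT indicator); Mg by difference. EDTA binds both 1:1."""
    ca = mmol(v_ca_ml, edta_m)
    mg = mmol(v_total_ml, edta_m) - ca
    return ca, mg

def phosphorus(v_edta_ml, edta_m, v_mgso4_ml, mgso4_m):
    """P from the anion fraction: EDTA added in excess to the dissolved
    MgNH4PO4, then back-titrated with standard MgSO4; the difference is
    the Mg bound, which is 1:1 with P in the precipitate."""
    return mmol(v_edta_ml, edta_m) - mmol(v_mgso4_ml, mgso4_m)

ca, mg = calcium_magnesium(12.0, 2.0, 0.01)   # hypothetical burette readings
p = phosphorus(15.0, 0.01, 3.0, 0.01)
```

Converting these millimole values to mass percentages against the theoretical formula is then straightforward, which is how the 98.9%, 97.1%, and 99.1% figures above are expressed.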


New Insights on Mobile Location-based Services(LBS): Leading Factors to the Use of Services and Privacy Paradox (모바일 위치기반서비스(LBS) 관련한 새로운 견해: 서비스사용으로 이끄는 요인들과 사생활염려의 모순)

  • Cheon, Eunyoung;Park, Yong-Tae
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.33-56
    • /
    • 2017
  • As Internet usage is becoming more common worldwide and smartphone become necessity in daily life, technologies and applications related to mobile Internet are developing rapidly. The results of the Internet usage patterns of consumers around the world imply that there are many potential new business opportunities for mobile Internet technologies and applications. The location-based service (LBS) is a service based on the location information of the mobile device. LBS has recently gotten much attention among many mobile applications and various LBSs are rapidly developing in numerous categories. However, even with the development of LBS related technologies and services, there is still a lack of empirical research on the intention to use LBS. The application of previous researches is limited because they focused on the effect of one particular factor and had not shown the direct relationship on the intention to use LBS. Therefore, this study presents a research model of factors that affect the intention to use and actual use of LBS whose market is expected to grow rapidly, and tested it by conducting a questionnaire survey of 330 users. The results of data analysis showed that service customization, service quality, and personal innovativeness have a positive effect on the intention to use LBS and the intention to use LBS has a positive effect on the actual use of LBS. These results implies that LBS providers can enhance the user's intention to use LBS by offering service customization through the provision of various LBSs based on users' needs, improving information service qualities such as accuracy, timeliness, sensitivity, and reliability, and encouraging personal innovativeness. However, privacy concerns in the context of LBS are not significantly affected by service customization and personal innovativeness and privacy concerns do not significantly affect the intention to use LBS. 
In fact, the location information collected by LBS is less sensitive than the information used to perform financial transactions, which may explain this outcome regarding privacy concerns. In addition, for LBS users the advantages of the service outweigh the sensitivity of privacy protection more than they do for users of information systems, such as electronic commerce, that involve financial transactions; LBS should therefore be treated differently from such systems. This study contributes theoretically by proposing factors that affect the intention to use LBS from a multi-faceted perspective, empirically validating the proposed research model, offering new insights on LBS, and broadening the understanding of the intention to use and actual use of LBS. The empirical finding that customization affects users' intention to use LBS also suggests that providing customized services based on usage-data analysis, for example through artificial intelligence, can enhance that intention. From a practical point of view, the results are expected to help LBS providers develop a competitive strategy for responding to LBS users effectively and to support growth of the LBS market. We expect usage of LBSs to differ depending on factors such as the type of LBS, whether it is free of charge, privacy policies related to LBS, the reliability of the application and its technology, and the frequency of use. Comparative studies across these factors would therefore contribute to the development of LBS research. We hope this study inspires many researchers and initiates further research in the LBS field.
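The reported analysis tests whether the three factors positively predict intention to use. The abstract does not specify the estimation method, so the following is only a minimal illustrative sketch using ordinary least squares on synthetic Likert-style data (the factor names match the study; the numbers are invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 330  # same sample size as the survey

# Synthetic 5-point Likert-style factor scores (illustrative only).
customization = rng.integers(1, 6, n).astype(float)
quality = rng.integers(1, 6, n).astype(float)
innovativeness = rng.integers(1, 6, n).astype(float)

# Simulate intention as a positive function of the three factors plus noise.
intention = (0.4 * customization + 0.3 * quality
             + 0.2 * innovativeness + rng.normal(0, 0.5, n))

# Ordinary least squares: stack an intercept column and solve.
X = np.column_stack([np.ones(n), customization, quality, innovativeness])
coefs, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(coefs[1:])  # estimated effects of the three hypothesized factors
```

With data generated this way, all three estimated coefficients come out positive, mirroring the paper's reported direction of effects; a full replication would instead use structural equation modeling on the actual survey responses.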

System Development for Measuring Group Engagement in the Art Center (공연장에서 다중 몰입도 측정을 위한 시스템 개발)

  • Ryu, Joon Mo;Choi, Il Young;Choi, Lee Kwon;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.45-58
    • /
    • 2014
  • Korean cultural content has spread worldwide as the Korean Wave sweeps across the globe, and each country is working to improve its national brand and add value through its culture industry. Performance content is an important driver of arousal in the entertainment industry: raising audiences' confidence in a product and fostering a positive attitude are key goals for advertisers, and cultural content is no different. If audiences trust cultural content, they will share information with those around them and spread word of mouth. Accordingly, many researchers have studied how to measure a person's arousal through statistical surveys, physiological responses, body movement, and facial expression. First, a statistical survey cannot measure each person's arousal in real time, and reliable results are hard to obtain after the audience has finished watching the content. Second, physiological responses require sensors installed on each person's chair or around their space, and it is difficult to handle the volume of sensor data in real time. Third, body movement is easy to capture with a camera, but it is difficult to set up the experimental conditions and to interpret the meaning of the measured body language. Lastly, many researchers study facial expression, measuring expressions, eye tracking, and face pose. Most previous studies of arousal and interest are limited to the reaction of a single person and are difficult to apply to multiple audience members: they require particular conditions, such as controlled room lighting, and are restricted to one person in a special laboratory environment. We also need to measure arousal with respect to the content itself, which is difficult to define, and it is not easy to collect reactions immediately from the many audience members watching a performance in a theater.
We propose a system that measures multi-audience reactions in real time during a performance. We use a difference-image analysis method for the multi-audience setting, but it is weak in a dark field; to overcome the dark environment during recording, an IR camera captures images in dark areas. In addition, we present a Multi-Audience Engagement Index (MAEI), calculated by an algorithm whose sources are sound, audience movement, and eye-tracking values. The algorithm computes audience arousal from the mobile survey, the sound level, audience reactions, and audience eye tracking. To improve the accuracy of the MAEI, we compare it with the mobile survey, and the result is then sent to a reporting system and offered to interested parties. Mobile surveys are easy and fast, minimize visitors' discomfort, and can provide additional information. The mobile application communicates with a database, storing real-time information on visitors' attitudes toward the content; the database can provide a different survey each time based on the information collected. Example survey items include: impressive scene, satisfied, touched, interested, did not pay attention, and so on. The proposed system consists of three parts: an external device, a server, and an internal device. The external device records the multi-audience in the dark field with an IR camera and captures the sound signal; it also collects the mobile survey and sends the data to the server database. The server stores the content data, such as each scene's weight value, the group-audience weight index, the camera control program, and the algorithm, and it calculates the MAEI. The internal device presents the MAEI through a web UI, in print, and on a field monitor. Our system is test-operated by MogenceLab in the DMC display exhibition hall located in Sangam-dong, Mapo-gu, Seoul, where we continue to collect visitor data daily.
If this system can identify the factors behind audience arousal, it will be very useful for creating content.
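The abstract describes two computational pieces: difference-image analysis of IR frames to detect audience movement, and a weighted combination of cues into the MAEI. The paper does not publish its formula or weights, so the sketch below is only a plausible reading: the frame-difference threshold, the cue weights, and the function names are all assumptions for illustration.

```python
import numpy as np

def motion_score(prev_frame, cur_frame, threshold=20):
    """Difference-image analysis: fraction of pixels whose intensity
    changed by more than a threshold between consecutive IR frames."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    return float((diff > threshold).mean())

def maei(motion, sound, gaze, weights=(0.4, 0.3, 0.3)):
    """Hypothetical MAEI: weighted sum of normalized cues in [0, 1].
    The actual per-scene weights are learned/assigned by the system."""
    cues = np.clip([motion, sound, gaze], 0.0, 1.0)
    return float(np.dot(weights, cues))

# Two synthetic 8-bit IR frames; the second has a moving region.
rng = np.random.default_rng(1)
prev = rng.integers(0, 50, (120, 160)).astype(np.uint8)
cur = prev.copy()
cur[40:80, 60:100] += 100  # simulate audience movement in one block

m = motion_score(prev, cur)           # fraction of changed pixels
index = maei(m, sound=0.6, gaze=0.5)  # combined engagement index
print(round(m, 3), round(index, 3))
```

In the real system these per-frame indices would be accumulated per scene, weighted by each scene's stored weight on the server, and cross-checked against the mobile-survey responses.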

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities and spends more than 10 trillion won annually on defense improvement. Because the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry out the program smoothly; many reportedly discover relevant regulations they were unaware of only after pushing ahead with their work. In addition, statutory statements related to the Defense Acquisition Program can cause serious issues even if a single expression within a sentence is wrong. Despite this, efforts to establish a sentence-comparison system that corrects such issues in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses a Siamese network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program-related documents and sentences from related statutory provisions, to determine and classify the risk of illegality, and to make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of an "Original Sentence" (described in actual statutes) and an "Edited Sentence" (derived by editing the "Original Sentence").
Among the many Defense Acquisition Program-related statutes, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentence" set comprises the 83 provisions that actually appear in these acts and are most accessible to working-level officials in their work. For each clause, the "Edited Sentence" set comprises 30 to 50 similar sentences likely to appear, in modified form, in military reports. The edited sentences were created by modifying the original sentences according to 12 defined rules, and they were produced in proportion to the number of such rules, as was the case for the original sentences. After 1:1 sentence-similarity performance evaluation experiments, each "Edited Sentence" could be classified as legal or illegal with considerable accuracy. However, although the "Edited Sentence" dataset used to train the neural network models covers a variety of actual statutory statements characterized by the 12 rules, models trained only on the "Original Sentence" and "Edited Sentence" data could not effectively classify other sentences that appear in actual military reports; the dataset is not ample enough for the models to recognize new incoming sentences. Hence, the models' performance was reassessed after writing an additional 120 new sentences that better resemble those in actual military reports while remaining associated with the original sentences. We were then able to confirm that the models' performance surpassed a certain level even when they were trained merely on the "Original Sentence" and "Edited Sentence" data.
If sufficient model learning is achieved by improving and expanding the full training set with sentences that actually appear in reports, the models will be able to better classify other sentences from military reports as legal or illegal. Based on the experimental results, this study confirms the feasibility and value of building a real-time automated comparison system between military documents and related laws. The approach verified in this experiment can identify which specific clause, among the several that appear in the related law, is most similar to a sentence appearing in Defense Acquisition Program-related military reports, which helps determine whether the content of the report sentence is at risk of illegality when compared with the law clauses.
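The core Siamese idea in the paper is one shared encoder applied to both the report sentence and the statute sentence, with a similarity head deciding whether the report sentence has drifted into risky territory. The paper's encoders are trained Bi-LSTM/Self-Attention networks; the sketch below substitutes a deliberately simple untrained stand-in (hashed character trigrams) just to show the shared-encoder structure, and the example sentences and the 0.8 threshold are invented for illustration.

```python
import numpy as np

DIM = 256  # hashed feature dimension for the toy encoder

def encode(sentence):
    """Shared encoder: hash character trigrams into a fixed-size unit
    vector. (A stand-in for the paper's trained Bi-LSTM/Self-Attention
    encoders; both inputs pass through this same function, which is
    what makes the architecture Siamese.)"""
    vec = np.zeros(DIM)
    s = f"  {sentence}  "
    for i in range(len(s) - 2):
        vec[hash(s[i:i + 3]) % DIM] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

def similarity(a, b):
    """Cosine similarity between the two shared-encoder outputs."""
    return float(np.dot(encode(a), encode(b)))

def classify(report_sentence, statute_sentence, threshold=0.8):
    """Flag a report sentence as risky when it drifts from the statute."""
    sim = similarity(report_sentence, statute_sentence)
    return "legal" if sim >= threshold else "risky"

statute = "The project manager shall report the results to the minister."
edited = "The project manager shall report the results to the minister promptly."
unrelated = "The weather in Seoul is sunny today."

print(similarity(statute, edited) > similarity(statute, unrelated))
```

In the full system, a report sentence would be scored against every candidate statute clause and the most similar clause retrieved, so the user can see exactly which provision their sentence may conflict with.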