• Title/Summary/Keyword: form-accuracy

Search Results: 1,357

Rapid Analysis of Boric Acid in Nickel Plating Solutions (니켈도금액중의 붕산 신속정량법)

  • 염희택
    • Journal of the Korean institute of surface engineering
    • /
    • v.3 no.1
    • /
    • pp.7-12
    • /
    • 1970
  • Only mannitol or glycerine is generally used for the determination of boric acid in a nickel plating solution, in order to make its acidic property strong enough that it can be titrated with NaOH. However, these solutions give a very ambiguous color change of the indicator due to the precipitation of nickel salts. Therefore, only experienced chemists or well-trained workmen can accurately confirm the actual end point of the titration. To eliminate such interference from nickel salts and allow anyone to confirm the end point easily, the author attempted to find a solution that produces no precipitates during the titration, and also tried to furnish the reason for the ambiguity in the titration. The following results were obtained after many experiments. (1) In any titration which produces nickel salts such as Ni(OH)$_2$, the salt is formed very close to the end point, which introduces some error through the consumption of titrant (NaOH). The pink color of phenolphthalein is then absorbed by Ni(OH)$_2$, and the pH jump at the end point is diminished to less than 15% of the total phenolphthalein pH range. (2) Known methods using citrate complex salts, which do not produce precipitates of Ni(OH)$_2$, are also not very satisfactory, because the pH jump at the end point is only about 35% and the color change of phenolphthalein is from blue-green to purple-blue. (3) A new method using oxalate complex salts was attempted in these experiments. It also produced no precipitates of Ni(OH)$_2$ and was very satisfactory: the pH jump at the end point was about 65% and the color change was from blue-green to purple. In this method, the analytical cost was minimized by using smaller amounts of cheaper chemicals than the conventional citrate complex method. The indicator mixture used was composed of 37 g/L of sodium oxalate (Na$_2$C$_2$O$_4{\cdot}$5H$_2$O), 2 g/L of phenolphthalein, and 400 ml/L of glycerin. The accuracy of the analysis was within an error of 0.5%. (4) The procedure of the analysis was as follows. One ml of nickel plating solution was taken, and to it were added 20 ml of water and 20 ml of the above indicator mixture. The solution was titrated with 0.1 N NaOH. The quantity of boric acid was calculated by the following equation: Boric acid (g/L) = 6.184${\times}$F${\times}$ml.

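The final equation quoted in the abstract can be checked arithmetically: the constant 6.184 is the molar mass of boric acid (about 61.84 g/mol) divided by 10, which follows from titrating a 1 ml sample with 0.1 N NaOH. A minimal sketch, assuming F is the normality factor of the NaOH and the second argument is the titrant volume consumed in ml:

```python
# Hedged sketch of the boric acid calculation quoted in the abstract.
# Assumes a 1 ml sample, 0.1 N NaOH titrant, and 1:1 NaOH-to-H3BO3
# stoichiometry in the presence of the glycerin/oxalate indicator mixture.

H3BO3_MOLAR_MASS = 61.84  # g/mol; 61.84 / 10 matches the abstract's constant 6.184

def boric_acid_g_per_l(factor_f: float, titrant_ml: float) -> float:
    """Boric acid (g/L) = 6.184 x F x ml of 0.1 N NaOH per 1 ml sample."""
    return (H3BO3_MOLAR_MASS / 10.0) * factor_f * titrant_ml

# Example: F = 1.002 and 6.5 ml of NaOH consumed -> about 40.3 g/L boric acid
print(round(boric_acid_g_per_l(1.002, 6.5), 2))
```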

Development of a River Maintenance Management Technology Related with National River Management Data (국가하천관리자료와 연계한 하천유지관리 기술개발)

  • Jo, Myung-Hee;Kim, Kyung-Jun;Kim, Hyun-Jung
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.15 no.1
    • /
    • pp.159-171
    • /
    • 2012
  • This study developed a technology for river basin maintenance, including the management of riverbed-related data and the analysis of riverbed maintenance, based on high-resolution imagery and LiDAR (Light Detection and Ranging) data, in order to enhance the utilization of river management through RIMGIS (River Information Management GIS) and to enable more advanced river management operations. Using a detailed river topographical map built from LiDAR or high-resolution images, dynamically changing riverbed data and river bank channel information were digitized and established in the RIMGIS DB. At this stage, a monitoring technique built into the river management system associated with RIMGIS, which minimizes the impact of riverbed maintenance (fluctuations), was studied. In addition, the functions and data structure of RIMGIS, which holds the geographic and management information of the river, were supplemented to improve real-time river management. Furthermore, a technology capable of supplementing RIMGIS was developed, making it feasible to maintain the river using structural methods, including cross-sectional schemes, by providing riverbed and river cross-section information. This is expected to resolve the problems of insufficient accuracy and the lack of river information based on two-dimensional lines, making it feasible to steadily improve RIMGIS functions and to carry out management tasks. Therefore, this technology is expected to strengthen river information management as a foundation that maximizes the utilization of river management, supporting RIMGIS through national river management data.
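A hypothetical sketch of the kind of riverbed-fluctuation monitoring described above: two surveys of the same cross-section are compared and the bed-level change is reported, the sort of record a RIMGIS-linked database would hold. The station distances and elevations are illustrative placeholders, not RIMGIS data.

```python
import numpy as np

# Hypothetical riverbed-fluctuation check: compare two surveys of one cross-section.
# All numbers below are illustrative, not RIMGIS or national river management data.

station_m   = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])   # distance across the channel
bed_2010_el = np.array([12.1, 10.4, 9.2, 9.0, 10.1, 12.3])   # surveyed bed elevation (m)
bed_2012_el = np.array([12.1, 10.1, 8.7, 8.8, 10.0, 12.2])   # later LiDAR/imagery-derived survey

change = bed_2012_el - bed_2010_el        # negative values indicate scour (degradation)
print("max scour (m):  ", round(float(change.min()), 2))
print("mean change (m):", round(float(change.mean()), 3))
```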

Evaluation of Strength and Stiffness Gain of Concrete at Early-ages (조기재령에서 콘크리트의 강도 및 강성 발현 평가)

  • Hong, Geon-Ho;Park, Hong-Gun;Eum, Tae-Sun;Mihn, Joon-Soo;Kim, Yong-Nam
    • Journal of the Korea Concrete Institute
    • /
    • v.22 no.2
    • /
    • pp.237-245
    • /
    • 2010
  • Recently, deflection of slabs during the construction period has become an important issue because of the increasing use of large-span structures. Early removal of slab forms and shores to achieve rapid construction causes a loss of quality in the structures. To reduce this deterioration while still allowing rapid construction, a strength and stiffness gain model needs to be established through research on early-age concrete properties. Previous research indicated that the concrete models in existing design codes could not represent the mechanical properties of early-age concrete. In this study, concrete compressive strength tests were carried out over the curing ages of the early-age stage. The accuracy of the compressive strength and modulus of elasticity gain formulas in various existing design codes was evaluated based on these test results, and a new design model was proposed. This new model will be useful for developing new rapid construction methods and for preventing excessive deflection during construction. Material tests were performed at 1, 3, 7, 14, and 28 curing days, and a total of 159 cylinder specimens were tested. Based on an analysis of the test results, a relationship between compressive strength and modulus of elasticity at early age was proposed.
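The comparison described above can be illustrated with a minimal sketch that evaluates code-type gain formulas at the test ages used in the paper. It uses the CEB-FIP MC90 age function and the ACI 318 modulus expression purely as stand-ins; the paper's own proposed model is not reproduced here, and the 28-day strength is an assumed value.

```python
import math

# Hedged sketch: predicted early-age strength and stiffness from code-type formulas.
# The MC90-type age function and ACI 318 modulus are illustrative stand-ins only.

def strength_at_age(fc28_mpa: float, t_days: float, s: float = 0.25) -> float:
    """MC90-type strength gain: fc(t) = fc28 * exp(s * (1 - sqrt(28 / t)))."""
    return fc28_mpa * math.exp(s * (1.0 - math.sqrt(28.0 / t_days)))

def modulus_aci318(fc_mpa: float) -> float:
    """ACI 318 modulus for normal-weight concrete: Ec = 4700 * sqrt(fc), in MPa."""
    return 4700.0 * math.sqrt(fc_mpa)

# Predicted values at the test ages used in the paper (assumed fc28 = 30 MPa)
for t in (1, 3, 7, 14, 28):
    fc_t = strength_at_age(30.0, t)
    print(f"day {t:2d}: fc = {fc_t:5.1f} MPa, Ec = {modulus_aci318(fc_t) / 1000:5.1f} GPa")
```

Measured early-age values would then be compared against curves of this kind to judge which code formula tracks the data best.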

Assessment of absorption ability of air pollutant on forest in Gongju-city

  • Eom, Ji-Young;Jeong, Seok-Hee;Lee, Jae-Seok
    • Journal of Ecology and Environment
    • /
    • v.41 no.12
    • /
    • pp.328-335
    • /
    • 2017
  • Background: Some researchers have attempted to evaluate the ecological function of various additional services of Korean forests, beyond the traditional focus on timber production. However, basic data, evaluation models, and studies on the absorption of air pollutants by the major plant communities in Korea are very rare. Therefore, we evaluated the functional value of the forest ecosystem in Gongju-city. The process and method for assessing air pollutant absorption followed the plantation manual for air purification published by the Ministry of the Environment of Japan. Results: Gross primary production was calculated at about 18.2 t/ha/year on average. It was relatively low in forests mixing deciduous broad-leaved and evergreen coniferous trees compared with pure coniferous forest. Net primary production was highest in deciduous coniferous forest and lowest in forest mixing deciduous broad-leaved and evergreen broad-leaved trees. The mean sequestration amount of each air pollutant per unit area per year, assessed from gross primary production and gas concentration, was highest for $O_3$ at 75.81 kg/ha/year, and was 16.87 and 6.04 kg/ha/year for $NO_2$ and $SO_2$, respectively. In addition, the total amounts of $CO_2$ absorption and $O_2$ production were 716,045 t $CO_2$/year and 520,760 t $O_2$/year over all forest vegetation in Gongju-city. Conclusions: In this study, we evaluated the air pollutant absorption ability of the forest in the Gongju-city area in 2014. Gongju-city has a broad mountain area covering about 70.3% of its territory, and deciduous broad-leaved forest of the genus Quercus was the most extensive at 47.4%. Pg was calculated at about 18.2 t/ha/year on average. The mean sequestration amount of each air pollutant per unit area per year, assessed from Pg and $C_{gas}$, was highest for $O_3$ at 75.81 kg/ha/year and was 16.87 and 6.04 kg/ha/year for $NO_2$ and $SO_2$, respectively. Absorption rates of $O_3$, $NO_2$, and $SO_2$ were highest in evergreen coniferous forest, at about $14.87kgO_3/ha/year$, $3.30kgNO_2/ha/year$, and $1.18kgSO_2/ha/year$, and lowest in deciduous broad-leaved forest, at $5.95kgO_3/ha/year$, $1.32kgNO_2/ha/year$, and $0.47kgSO_2/ha/year$. In conclusion, the Japanese model, developed for estimating air pollutant absorption in Japan, was judged applicable to Korean vegetation. However, in Korea, the basic data needed to assess the ability of forests to absorb air pollutants are very limited. In this study, the accuracy of the calculated values is not high because basic data from trees with a similar life form were used in the evaluation.
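The structure of the assessment described above amounts to multiplying a per-hectare absorption rate for each vegetation type by its area and summing. A minimal sketch, using the $O_3$ rates quoted in the abstract but hypothetical placeholder areas rather than Gongju-city statistics:

```python
# Hedged sketch of the per-type scaling behind the totals in the abstract:
# total uptake = sum over vegetation types of (rate per ha) * (area in ha).
# The rates are the O3 values quoted above; the areas are hypothetical placeholders.

o3_rate_kg_per_ha = {
    "evergreen_coniferous": 14.87,   # kg O3/ha/year (from the abstract)
    "deciduous_broad": 5.95,         # kg O3/ha/year (from the abstract)
}

area_ha = {
    "evergreen_coniferous": 12_000,  # hypothetical area
    "deciduous_broad": 31_000,       # hypothetical area
}

total_o3_kg = sum(o3_rate_kg_per_ha[v] * area_ha[v] for v in o3_rate_kg_per_ha)
print(f"Total O3 uptake: {total_o3_kg / 1000:.1f} t/year")
```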

Spatial Locality Preservation Metric for Constructing Histogram Sequences (히스토그램 시퀀스 구성을 위한 공간 지역성 보존 척도)

  • Lee, Jeonggon;Kim, Bum-Soo;Moon, Yang-Sae;Choi, Mi-Jung
    • Journal of Information Technology and Architecture
    • /
    • v.10 no.1
    • /
    • pp.79-91
    • /
    • 2013
  • This paper proposes a systematic methodology for deciding which space-filling curve (SFC) shows the best performance when lower-dimensional transformations are applied to histogram sequences. A histogram sequence is a time-series converted from an image by a given SFC. Because of their high dimensionality, histogram sequences are very difficult to store and search in their original form. To solve this problem, we generally use lower-dimensional transformations, which produce lower bounds among high-dimensional sequences, but the tightness of those lower bounds is strongly affected by the type of SFC. In this paper, we tackle the challenging problem of evaluating which SFC performs better when lower-dimensional transformations are applied to histogram sequences. For this, we first present the concept of spatial locality, which comes from the intuition that "if two entries are adjacent in a histogram sequence, their corresponding cells should also be adjacent in the original image." We also propose a spatial locality preservation metric (slpm in short) that quantitatively evaluates spatial locality and present its formal computation method. We then evaluate five SFCs from the perspective of slpm and verify that this evaluation result concurs with the performance evaluation of lower-dimensional transformations in real image matching. Finally, we perform k-NN (k-nearest neighbors) search based on lower-dimensional transformations and validate the accuracy of the proposed slpm by showing that the Hilbert order, which has the highest slpm, also shows the best performance in k-NN search.
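The intuition behind spatial locality can be illustrated with a simplified score that is not the paper's slpm definition: the mean 2D distance between cells visited consecutively by a curve (lower is better). A sketch comparing plain row-major order with a boustrophedon ("snake") order:

```python
import math

# Hedged illustration of spatial locality for cell orderings of an n x n image.
# The score (mean distance between consecutively visited cells) is a simplified
# stand-in for the paper's slpm, used only to show the idea.

def row_major(n):
    return [(r, c) for r in range(n) for c in range(n)]

def snake(n):
    """Boustrophedon order: reverse every other row to avoid long jumps."""
    order = []
    for r in range(n):
        cols = range(n) if r % 2 == 0 else range(n - 1, -1, -1)
        order.extend((r, c) for c in cols)
    return order

def mean_step_distance(order):
    return sum(math.dist(a, b) for a, b in zip(order, order[1:])) / (len(order) - 1)

n = 8
print("row-major:", round(mean_step_distance(row_major(n)), 3))  # long jumps at row ends
print("snake    :", round(mean_step_distance(snake(n)), 3))      # unit steps only
```

A Hilbert-order traversal also keeps consecutive cells adjacent while preserving locality in both directions, which is consistent with it scoring highest under the paper's slpm.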

A Study about Internal Control Deficient Company Forecasting and Characteristics - Based on listed and unlisted companies - (내부통제 취약기업 예측과 특성에 관한 연구 - 상장기업군과 비상장기업군 중심으로 -)

  • Yoo, Kil-Hyun;Kim, Dae-Lyong
    • Journal of Digital Convergence
    • /
    • v.15 no.2
    • /
    • pp.121-133
    • /
    • 2017
  • The purpose of this study is to examine the characteristics of companies with a high likelihood of having internal control weaknesses, using a forecasting model. This study uses actual data on listed and unlisted companies from K_financial institution. The first conclusion is that a discriminant model is more valid than a logit model for predicting internal-control-weak companies. The discriminant model for predicting internal control vulnerability has high classification accuracy and a low Type II error, i.e., the error of incorrectly classifying vulnerable companies as normal. The second conclusion is that weak-internal-control companies are characterized by a low credit rating, a low asset soundness assessment, high delinquency rates, lower operating cash flow, high debt ratios, and a negative ratio of operating profit to net sales. Because this study extends the analysis beyond listed companies to unlisted companies, which were not covered in previous studies, the results, including the forecasting model, can be used by financial institutions as a predictive tool for identifying companies with a high potential for internal control weakness, in order to prevent asset losses.
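The comparison described above, discriminant analysis versus a logit model judged partly by the Type II error, can be sketched as follows. The features and data are synthetic placeholders, not the K_financial institution data set.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Hedged sketch: LDA vs. logistic regression on synthetic firm features,
# reporting accuracy and the Type II error (weak-control firm classified as normal).

rng = np.random.default_rng(0)
n = 2000
weak = rng.random(n) < 0.2                        # 1 = internal-control-weak firm
X = np.column_stack([
    rng.normal(5 - 2 * weak, 1.5),                # credit-rating-like score
    rng.normal(0.1 - 0.15 * weak, 0.1),           # operating cash flow proxy
    rng.normal(1.0 + 0.8 * weak, 0.5),            # debt-ratio proxy
])
y = weak.astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("discriminant", LinearDiscriminantAnalysis()),
                    ("logit", LogisticRegression(max_iter=1000))]:
    model.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    print(f"{name:12s} accuracy={(tp + tn) / len(y_te):.3f}  Type II error={fn / (fn + tp):.3f}")
```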

Runoff Analysis for Urban Unit Subbasin Based on its Shape (유역형상을 고려한 도시 단위 소유역의 유출 해석)

  • Hur, Sung-Chul;Park, Sang-Sik;Lee, Jong-Tae
    • Journal of Korea Water Resources Association
    • /
    • v.41 no.5
    • /
    • pp.491-501
    • /
    • 2008
  • In order to describe the runoff characteristics of an urban drainage area, the outflow from subbasins, delineated by considering topography and flow paths, is analyzed through the stormwater system. The time of concentration and the time-area curve change significantly with basin shape, and the runoff characteristics are greatly affected by these attributes. Therefore, in this development study of the FFC2Q model by MLTM, we aim to improve the accuracy of runoff analysis by adding a module that accounts for basin shape, giving the model an advantage over popular urban hydrology models such as SWMM and ILLUDAS, which cannot account for the geometric shape of a basin because they assume each unit subbasin to be a simple rectangle. For the subbasin shapes, symmetric types (rectangle, ellipse, lozenge), divergent types (triangle, trapezoid), and convergent types (inverted triangle, inverted trapezoid) were analyzed by applying their time-area curves in the surface runoff analysis. As a result, we found that the runoff characteristics can differ considerably depending on basin shape. For example, when the Gunja basin was represented by a lozenge shape, the best agreement with the observed data was achieved for both peak discharge and the overall shape of the runoff hydrograph. Additionally, when the subbasin shape was considered, the number of drainage basin divisions did not affect the peak flow magnitude, and the results remained stable and close to the observed data. However, when the subbasins were represented by the traditional rectangular approximation, the analysis results were sensitive to the number of divisions.
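The time-area approach referred to above can be sketched as follows: the basin shape determines how the contributing area accumulates with travel time, and the runoff hydrograph is the convolution of the incremental time-area histogram with the effective rainfall. The shapes and rainfall values are illustrative only, not the Gunja basin data.

```python
import numpy as np

# Hedged sketch of time-area routing for simple basin shapes. The cumulative
# contributing-area functions are idealized stand-ins, not the FFC2Q module.

def incremental_area(shape: str, n_steps: int, total_area_m2: float) -> np.ndarray:
    """Incremental contributing area per time step for an idealized basin shape."""
    t = np.linspace(0.0, 1.0, n_steps + 1)
    if shape == "rectangle":             # uniform width along the flow path
        cumulative = t
    elif shape == "triangle":            # divergent: width grows toward the outlet
        cumulative = t ** 2
    elif shape == "inverted_triangle":   # convergent: width shrinks toward the outlet
        cumulative = 1.0 - (1.0 - t) ** 2
    else:
        raise ValueError(shape)
    return np.diff(cumulative) * total_area_m2

def hydrograph(rain_mm_per_step, shape, n_steps=10, area_km2=2.0):
    """Runoff volume per step = effective rainfall convolved with the time-area histogram."""
    dA = incremental_area(shape, n_steps, area_km2 * 1e6)
    rain_m = np.asarray(rain_mm_per_step) / 1000.0
    return np.convolve(rain_m, dA)

rain = [2.0, 5.0, 3.0, 1.0]              # effective rainfall per step (mm), illustrative
for s in ("rectangle", "triangle", "inverted_triangle"):
    q = hydrograph(rain, s)
    print(f"{s:18s} peak = {q.max():9,.0f} m^3/step at step {int(q.argmax())}")
```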

Analysis of the Accuracy of Quaternion-Based Spatial Resection Based on the Layout of Control Points (기준점 배치에 따른 쿼터니언기반 공간후방교회법의 정확도 분석)

  • Kim, Eui Myoung;Choi, Han Seung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.36 no.4
    • /
    • pp.255-262
    • /
    • 2018
  • In order to determine three-dimensional positions in photogrammetry, spatial resection is a prerequisite step for determining the exterior orientation parameters. The existing spatial resection method is a non-linear equation that requires initial values of the exterior orientation parameters and has the problem that a gimbal lock phenomenon may occur. On the other hand, spatial resection using quaternions is a closed-form solution that does not require initial values of the EOP (Exterior Orientation Parameters) and can eliminate the gimbal lock problem. In this study, to analyze the stability of quaternion-based spatial resection, the exterior orientation parameters were determined for different layouts of control points and compared with the values determined using the existing non-linear equation. As a result, it can be seen that quaternion-based spatial resection is affected by the layout of the control points. Therefore, if initial values of the exterior orientation parameters cannot be obtained, it would be more effective to estimate initial exterior orientation values using quaternion-based spatial resection and then apply them to the collinearity-equation-based spatial resection method.
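One reason quaternions avoid gimbal lock is that the camera attitude is represented by a single unit quaternion rather than a sequence of rotation angles. A minimal sketch of the standard quaternion-to-rotation-matrix conversion used inside such a resection (this is only the generic conversion, not the paper's closed-form solution):

```python
import numpy as np

# Hedged sketch: rotation matrix from a unit quaternion (w, x, y, z).
# This is the standard conversion, not the paper's full resection algorithm.

def quat_to_rotation(q):
    """Return the 3x3 rotation matrix for quaternion q = (w, x, y, z), normalizing q."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

R = quat_to_rotation([0.9, 0.1, 0.2, 0.3])
print(np.round(R @ R.T, 6))               # orthonormal: R @ R.T is the identity
print(round(float(np.linalg.det(R)), 6))  # proper rotation: determinant = +1
```

The resulting rotation matrix can then be substituted into the collinearity equations, consistent with the two-step strategy suggested in the abstract.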

A Study on the Correlation with the Degree of Compaction and the Penetration Depth Using the Portable Penetration Meter at Field Test (휴대용 다짐도 측정기의 현장실험을 통한 다짐도와 관입깊이 상관성 연구)

  • Park, Geoun Hyun
    • Journal of the Korean GEO-environmental Society
    • /
    • v.19 no.11
    • /
    • pp.5-14
    • /
    • 2018
  • Worldwide, soil compaction is one of the most important activities carried out on civil engineering sites. Compaction work, particularly in road construction, is considered important because poor compaction is closely related to construction defects that appear even after construction is complete. Currently, the plate bearing test or the sand cone method (a unit-weight-of-soil test) is commonly used to measure the degree of compaction, but because these require a great deal of time, equipment, and manpower, it is difficult to secure economic efficiency. The method that measures the degree of compaction from the penetration depth of an object in free fall under gravity is the Free-Fall Penetration Test (FFPT), which uses a so-called "portable compaction measuring meter (PCMM)." In this study, the degree of compaction was measured and a penetration depth graph was developed from field tests using the portable compaction measuring meter. The coefficient of determination was 0.963 at a drop height of 10 cm, the highest level of accuracy. Both the horizontal and vertical axes of the graph were expressed in decimal form, and the allowable error range was ${\pm}1.28mm$ in terms of penetration depth. The portable compaction measuring meter makes it possible to measure the degree of compaction simply, quickly, and accurately in the field, which will ensure economic efficiency and facilitate process management.
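The calibration step described above amounts to fitting the measured degree of compaction against penetration depth and reporting the coefficient of determination. A minimal sketch with made-up placeholder readings, not the paper's field measurements (which reported R^2 = 0.963 at a 10 cm drop height):

```python
import numpy as np

# Hedged sketch: linear fit of degree of compaction vs. penetration depth and R^2.
# The data points are made-up placeholders, not the paper's field measurements.

depth_mm       = np.array([22.0, 19.5, 17.0, 14.8, 12.5, 10.3])     # penetration depth
compaction_pct = np.array([85.0, 87.5, 90.0, 92.0, 94.5, 96.8])     # degree of compaction

slope, intercept = np.polyfit(depth_mm, compaction_pct, 1)
predicted = slope * depth_mm + intercept
ss_res = np.sum((compaction_pct - predicted) ** 2)
ss_tot = np.sum((compaction_pct - compaction_pct.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"compaction ~ {slope:.2f} * depth + {intercept:.1f},  R^2 = {r_squared:.3f}")
```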

Rare Malware Classification Using Memory Augmented Neural Networks (메모리 추가 신경망을 이용한 희소 악성코드 분류)

  • Kang, Min Chul;Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.28 no.4
    • /
    • pp.847-857
    • /
    • 2018
  • As the amount of malicious code increases steeply, cyber attacks targeting corporations, public institutions, financial institutions, and hospitals are also increasing. Accordingly, academia and the security industry are conducting various studies on malicious code detection. In recent years, there have been many studies using machine learning techniques, including deep learning. Research using convolutional neural networks, ResNet, and similar architectures for malicious code classification has shown larger performance improvements than existing classification methods. However, one characteristic of targeted attacks is that they use custom malicious code designed to operate only against a specific organization, so it does not spread widely to a large number of users. Since there are not many malicious code samples of this kind, it is difficult to apply previously studied machine learning or deep learning techniques. In this paper, we propose a method to classify malicious code when the number of samples is insufficient, as with targeted malicious code. As a result of the study, we confirmed that 97% accuracy can be achieved even with a small amount of data by applying a Memory Augmented Neural Network model.
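The core idea of a memory-augmented few-shot classifier can be illustrated with a simplified memory-read step: embeddings of the few labeled support samples are written to external memory, and a query is classified by a cosine-similarity-weighted vote over the memory slots. This is a stand-in for the general technique, not the paper's model, and the random vectors are placeholders for learned features.

```python
import numpy as np

# Hedged sketch of a memory read for few-shot malware-family classification.
# Random vectors stand in for learned sample embeddings.

rng = np.random.default_rng(1)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Memory: one slot per labeled support sample (embedding, malware-family label)
memory = [(rng.normal(size=16), label)
          for label in ("family_A", "family_A", "family_B", "family_C")]

def classify(query_embedding, temperature=4.0):
    sims = np.array([cosine(query_embedding, emb) for emb, _ in memory])
    weights = np.exp(temperature * sims)
    weights /= weights.sum()                       # soft attention over memory slots
    scores = {}
    for w, (_, label) in zip(weights, memory):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# A query near the first family_A slot should come back as family_A
query = memory[0][0] + rng.normal(scale=0.1, size=16)
print(classify(query))
```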