• Title/Summary/Keyword: Input index

Search Results: 837

Tracing the Drift Ice Using the Particle Tracking Method in the Arctic Ocean (북극해에서 입자추적 방법을 이용한 유빙 추적 연구)

  • Park, GwangSeob;Kim, Hyun-Cheol;Lee, Taehee;Son, Young Baek
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_2
    • /
    • pp.1299-1310
    • /
    • 2018
  • In this study, we analyzed the distribution and movement of drift ice in the Arctic Ocean using in-situ observations and a particle tracking method. The in-situ drift-ice movement data were Ice-Tethered Profiler (ITP) records provided by NOAA (National Oceanic and Atmospheric Administration) for 2009-2018, which were analyzed for position and speed in each year. Particle tracking simulated drift-ice movement using daily current and wind data provided by HYCOM (Hybrid Coordinate Ocean Model) and ECMWF (European Centre for Medium-Range Weather Forecasts) for 2009-2017. To simulate drift-ice movement across the whole Arctic Ocean, the ITP field observations were used as input to calibrate the relationship between current and wind forcing in the Lagrangian particle tracking. Two particle tracking experiments were conducted, one considering the effect of currents alone and one considering the combined effects of currents and wind; the latter reproduced most of the in-situ observations. Drift-ice movement in the Arctic Ocean was then reproduced using the wind-imposed equation and analyzed for particular years. In 2010, a negative Arctic Ocean Index (AOI) year, particles clearly moved along the Beaufort Gyre, resulting in relatively large displacements in the Beaufort Sea. In contrast, 2017 was a positive AOI year in which most particles were not affected by the gyre, resulting in relatively low speeds and short distances; around the pole, drift-ice speed was lower in 2017 than in 2010. Seasonally, drift-ice movement in 2010 increased in winter (0.22 m/s) and decreased toward spring (0.16 m/s), whereas in 2017 it increased in summer (0.22 m/s) and decreased toward spring (0.13 m/s). These results suggest that the particle tracking method, linked with satellite data, is appropriate for understanding long-term drift-ice movement trends in place of limited field observations.
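The abstract does not give the exact wind-imposed equation, so the following is a minimal sketch of Lagrangian particle tracking under the common assumption that ice drifts with the surface current plus a small fraction of the wind (here ~2%, the classic rule of thumb for free-drifting sea ice). The function names and the forward-Euler stepping are illustrative, not the authors' implementation:

```python
import numpy as np

def track_particles(lon, lat, u_cur, v_cur, u_wind, v_wind,
                    n_days, dt=86400.0, alpha=0.02):
    """Advect drift-ice particles with daily current and wind fields.

    lon, lat       : arrays of particle positions (degrees)
    u_cur, v_cur   : callables (lon, lat, day) -> current velocity (m/s)
    u_wind, v_wind : callables (lon, lat, day) -> 10 m wind (m/s)
    alpha          : wind-drag factor (~2% rule of thumb; an assumption)
    """
    R = 6.371e6  # mean Earth radius (m)
    trajectory = [(lon.copy(), lat.copy())]
    for day in range(n_days):
        u = u_cur(lon, lat, day) + alpha * u_wind(lon, lat, day)
        v = v_cur(lon, lat, day) + alpha * v_wind(lon, lat, day)
        # forward-Euler step, converting metres to degrees of lat/lon
        lat = lat + np.degrees(v * dt / R)
        lon = lon + np.degrees(u * dt / (R * np.cos(np.radians(lat))))
        trajectory.append((lon.copy(), lat.copy()))
    return trajectory
```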

International and domestic research trends in longitudinal connectivity evaluations of aquatic ecosystems, and the applicability analysis of fish-based models (수생태계 종적 연결성 평가를 위한 국내외 연구 현황 및 어류기반 종적 연속성 평가모델 적용성 분석)

  • Kim, Ji Yoon;Kim, Jai-Gu;Bae, Dae-Yeul;Kim, Hye-Jin;Kim, Jeong-Eun;Lee, Ho-Seong;Lim, Jun-Young;An, Kwang-Guk
    • Korean Journal of Environmental Biology
    • /
    • v.38 no.4
    • /
    • pp.634-649
    • /
    • 2020
  • Recently, the longitudinal connectivity of streams has become a topic of investigation because small and medium-sized weirs and various artificial structures (fishways) frequently disconnect stream reaches and directly influence stream ecosystem health. In this study, international and domestic research trends in the longitudinal connectivity of aquatic ecosystems were reviewed, and the applicability of the fish-based longitudinal connectivity models used in developed countries was analyzed. For these purposes, we analyzed the current status of research on longitudinal connectivity and structural problems, fish monitoring methodology, monitoring approaches, longitudinal disconnection of fish movement, and biodiversity. In addition, we analyzed the current status and some technical limitations of physical habitat suitability evaluation, ecology-based water flow, eco-hydrological modeling for fish habitat connectivity, and software development for agent-based models. Numerous references, datasets, and reports were examined to identify the longitudinal stream connectivity evaluation models used worldwide in European and non-European countries. The international approaches to longitudinal connectivity evaluation were categorized into five types: 1) an approach integrating fish community and artificial structure surveys (two types of input variables), 2) field monitoring approaches, 3) a stream geomorphological approach, 4) an artificial structure-based database analytical approach, and 5) other approaches. The overall evaluation of survey methodologies and applicability for longitudinal stream connectivity suggested that the ICE model (Information sur la Continuité Écologique) and the ICF model (Index de Connectivitat Fluvial), widely used in European countries, are appropriate for longitudinal connectivity evaluations in Korean streams.

Organic Matter and Heavy Metals Pollution Assessment of Surface Sediment from a Fish Farming Area in Tongyoung-Geoje Coast of Korea (통영-거제 연안 어류 양식장 표층 퇴적물 중 유기물 및 중금속 오염 평가)

  • Hwang, Dong-Woon;Hwang, Hyunjin;Lee, Garam;Kim, Sunyoung;Park, Sohyun;Yoon, Sang-Pil
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.27 no.4
    • /
    • pp.510-520
    • /
    • 2021
  • To understand the status of organic matter and heavy metal pollution in the surface sediment of a fish farming area, we measured the concentrations of total organic carbon (TOC), total nitrogen (TN), and heavy metals (As, Cd, Cr, Cu, Fe, Hg, Mn, Pb, and Zn) in surface sediments of fish farming areas along the Tongyoung-Geoje coast. The mean concentrations of TOC and TN were 22.7 mg/g and 3.4 mg/g, respectively, much higher than those in the surface sediments of a semi-enclosed bay on the southern coast of Korea. The mean concentrations of As, Cd, Cr, Cu, Fe, Hg, Mn, Pb, and Zn were 10.5 mg/kg, 0.37 mg/kg, 82.9 mg/kg, 127 mg/kg, 4.19%, 0.041 mg/kg, 596 mg/kg, 39.5 mg/kg, and 175 mg/kg, respectively; the mean concentrations of Cd and Cu were three times higher than those in the surface sediments of shellfish farming areas on the southeastern coast of Korea. In addition, the concentrations of TOC and corrected Cu exceeded the sediment quality guideline values applied in Korea, and the pollution load index (PLI) and ecological risk index (ERI) showed that the metal concentrations in the sediments of some fish farming areas have a strongly negative ecological impact on benthic organisms, even though most metal concentrations did not exceed the sediment quality guidelines. Based on the overall assessment, the surface sediments of fish farming areas in the study region are polluted with organic matter and some heavy metals. A comprehensive management plan is therefore necessary to improve the sedimentary environment, identify primary contamination sources, and reduce the input of organic matter and heavy metal pollution loads to the sediments of fish farming areas.
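As a worked illustration of the pollution load index (PLI) mentioned above: the PLI is conventionally the geometric mean of the contamination factors CF = C_measured / C_background, with PLI > 1 indicating pollution. The sketch below uses the mean Cu, Pb, Zn, and Cd concentrations from the abstract but assumes illustrative background values, which the abstract does not report:

```python
import numpy as np

# Mean concentrations from the abstract (mg/kg) and hypothetical
# background (baseline) levels -- the study's baselines are not given.
measured   = {"Cu": 127.0, "Pb": 39.5, "Zn": 175.0, "Cd": 0.37}
background = {"Cu": 20.0,  "Pb": 25.0, "Zn": 95.0,  "Cd": 0.12}

# Contamination factor CF = C_measured / C_background for each metal
cf = np.array([measured[m] / background[m] for m in measured])

# Pollution load index: geometric mean of the contamination factors
pli = cf.prod() ** (1.0 / len(cf))
print(f"PLI = {pli:.2f} -> {'polluted' if pli > 1 else 'unpolluted'}")
```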

Analysis of Skin Color Pigments from Camera RGB Signal Using Skin Pigment Absorption Spectrum (피부색소 흡수 스펙트럼을 이용한 카메라 RGB 신호의 피부색 성분 분석)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.41-50
    • /
    • 2022
  • In this paper, a method is proposed to calculate the major components of skin color, melanin and hemoglobin, directly from a camera RGB signal. Conventionally, these components are obtained by measuring spectral reflectance with dedicated equipment and recombining the values at selected wavelengths; the resulting quantities include the melanin index and the erythema index, and they require special equipment such as a spectral reflectance measuring device or a multi-spectral camera. A direct calculation from an ordinary digital camera is difficult, so an indirect method based on independent component analysis (ICA) has been proposed: it takes a region of an RGB image as input, extracts characteristic vectors for melanin and hemoglobin, and computes their concentrations in a manner similar to principal component analysis. This method has two drawbacks: per-pixel calculation is difficult because a group of pixels from an area is required as input, and because the feature vectors are obtained by an optimization procedure, they tend to come out differently on every run. Moreover, the final output is determined by converting back to the RGB coordinate system as images of the melanin and hemoglobin components, without using the feature vectors themselves. To overcome these drawbacks, the proposed method computes the melanin and hemoglobin component values in a feature space rather than in the RGB coordinate system, estimates the spectral reflectance of the skin from an ordinary digital camera, and then uses the spectral reflectance to calculate the detailed components of skin pigment, namely melanin, oxygenated hemoglobin, deoxygenated hemoglobin, and carotenoid. The proposed method requires no special equipment such as a spectral reflectance measuring device or a multi-spectral camera and, unlike the existing method, allows direct per-pixel calculation and yields the same result on repeated runs. The standard deviation of the estimated melanin and hemoglobin densities was 15% of that of the conventional method, i.e., about six times more stable.
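For context, here is a minimal sketch of the conventional indirect approach the paper improves upon: ICA applied to the optical density (negative log) of an RGB patch to separate two pigment sources. The helper name and the use of scikit-learn's FastICA are assumptions; note that ICA does not fix the order or sign of its components, which is precisely the run-to-run instability the paper criticizes:

```python
import numpy as np
from sklearn.decomposition import FastICA

def pigment_maps(rgb):
    """Rough melanin/hemoglobin maps from an RGB skin patch via ICA.

    rgb : float array (H, W, 3) with values in (0, 1].
    Component order and sign vary between runs; fixing random_state
    only masks that instability for a given input.
    """
    h, w, _ = rgb.shape
    # Optical density (Beer-Lambert): absorbance is -log of reflectance
    od = -np.log(np.clip(rgb, 1e-4, 1.0)).reshape(-1, 3)
    ica = FastICA(n_components=2, random_state=0)
    sources = ica.fit_transform(od)  # per-pixel pigment densities
    return sources.reshape(h, w, 2)
```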

Detection of Site Environment and Estimation of Stand Yield in Mixed Forests Using National Forest Inventory (국가산림자원조사를 이용한 혼효림의 입지환경 탐색 및 임분수확량 추정)

  • Seongyeop Jeong;Jongsu Yim;Sunjung Lee;Jungeun Song;Hyokeun Park;JungBin Lee;Kyujin Yeom;Yeongmo Son
    • Journal of Korean Society of Forest Science
    • /
    • v.112 no.1
    • /
    • pp.83-92
    • /
    • 2023
  • This study investigated the site environment of mixed forests in Korea and estimated stand growth and yield using National Forest Inventory data. The growth of mixed forests was modeled by fitting the Chapman-Richards model to diameter at breast height (DBH), height, and basal area at breast height (BA), and the yield was derived by stepwise regression on factors including basal area, site index (SI), stand age, and standing tree density per hectare. Mixed forests were found to grow across diverse sites: by climate zone, more than half were distributed in the temperate central region, and by altitude, about 62% occurred at 101-400 m. The fitness indices (FI) of the growth models with stand age as the independent variable were 0.32 for the DBH estimation, 0.22 for the height estimation, and 0.18 for the basal area estimation, which is somewhat low; however, judging from the plots and the residuals between estimated and measured values, using these models is not expected to cause particular problems. The yield model, obtained by stepwise entry of basal area (BA), site index (SI), and age among several growth factors, was: Stand volume = -162.6859 + 6.3434·BA + 9.9214·SI + 0.7271·Age, with a coefficient of determination (R²) of about 96%. Using the optimal growth and yield models, a provisional stand yield table for mixed forests was constructed and used to derive the rotation age of maximum volume production.
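A brief sketch of the two models named above: a Chapman-Richards curve fitted to hypothetical DBH-age pairs (the NFI observations themselves are not in the abstract) and the yield equation with the coefficients reported in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(age, a, b, c):
    """Chapman-Richards growth form: y = a * (1 - exp(-b * age))**c."""
    return a * (1.0 - np.exp(-b * age)) ** c

# Hypothetical DBH-by-age observations (illustrative, not the NFI data)
age = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
dbh = np.array([8.0, 14.5, 19.0, 22.0, 24.0, 25.5])
(a, b, c), _ = curve_fit(chapman_richards, age, dbh, p0=(30.0, 0.05, 1.5))

def stand_volume(ba, si, stand_age):
    """Yield equation with the coefficients reported in the abstract."""
    return -162.6859 + 6.3434 * ba + 9.9214 * si + 0.7271 * stand_age

print(f"DBH at age 45: {chapman_richards(45.0, a, b, c):.1f} cm")
print(f"Volume(BA=25, SI=14, Age=45): {stand_volume(25.0, 14.0, 45.0):.1f} m3/ha")
```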

The Evaluation of SUV Variations According to the Errors of Entering Parameters in the PET-CT Examinations (PET/CT 검사에서 매개변수 입력오류에 따른 표준섭취계수 평가)

  • Kim, Jia;Hong, Gun Chul;Lee, Hyeok;Choi, Seong Wook
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.43-48
    • /
    • 2014
  • Purpose: In PET/CT images, the standardized uptake value (SUV) enables quantitative assessment of the biological changes of organs and serves as an index for distinguishing malignant from benign lesions. It is therefore important to enter correctly the parameters that affect the SUV. The purpose of this study is to establish an allowable error range for the SUV by measuring how the results change with input errors in the activity, weight, and uptake time parameters. Materials and Methods: Three inserts (hot, Teflon, and air) were placed in the 1994 NEMA phantom, which was filled with 27.3 MBq/mL of 18F-FDG; the activity ratio of the hot-spot area to the background area was set to 4:1. After scanning, images were re-reconstructed with input errors of ±5%, 10%, 15%, 30%, and 50% applied to the activity, weight, and uptake time of the original data. Regions of interest (ROIs) were set, one in each insert area and four in the background areas, and SUV_mean and percentage differences were calculated and compared for each area. Results: The SUV_mean of the hot, Teflon, air, and background (BKG) areas in the original images were 4.5, 0.02, 0.1, and 1.0, respectively. With activity errors, the minimum and maximum SUV_mean were 3.0 and 9.0 (hot), 0.01 and 0.04 (Teflon), 0.1 and 0.3 (air), and 0.6 and 2.0 (BKG), with percentage differences ranging uniformly from -33% to +100%. With weight errors, SUV_mean ranged from 2.2 to 6.7 (hot), 0.01 to 0.03 (Teflon), 0.09 to 0.28 (air), and 0.5 to 1.5 (BKG), with percentage differences from -50% to +50% in all areas except Teflon, where they ranged from -50% to +52%. With uptake time errors, SUV_mean ranged from 3.8 to 5.3 (hot), 0.01 to 0.02 (Teflon), 0.1 to 0.2 (air), and 0.8 to 1.2 (BKG); percentage differences ranged from +17% to -14% in the hot and BKG areas, from -50% to +52% in the Teflon area, and from -12% to +20% in the air area. Conclusion: If the allowable error range of the SUV is set within 5%, activity and weight must be entered within a ±5% error. The dose calibrator and the weighing scale should therefore be calibrated to within ±5%, since they directly affect the activity and weight values. Uptake time errors showed different error ranges depending on the insert type; the hot and BKG areas stayed within a 5% SUV error when the time error was within ±15%. Therefore, when more than two clocks, including the scanner's, are used during examinations, their time discrepancies must be taken into account.
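For reference, the body-weight SUV is the tissue activity concentration divided by the decay-corrected injected dose per gram of body weight. Since SUV scales linearly with the entered weight, a ±5% weight error moves the SUV by exactly ±5%, consistent with the findings above. The numbers below are illustrative, not the phantom data:

```python
import numpy as np

F18_HALF_LIFE = 109.77 * 60.0  # 18F half-life in seconds

def suv_bw(c_tissue_bq_ml, injected_bq, weight_kg, uptake_s):
    """Body-weight SUV: tissue activity over decay-corrected dose per gram."""
    dose_at_scan = injected_bq * 2.0 ** (-uptake_s / F18_HALF_LIFE)
    return c_tissue_bq_ml / (dose_at_scan / (weight_kg * 1000.0))

base = suv_bw(5000.0, 300e6, 70.0, 3600.0)  # hypothetical inputs
for err in (-0.15, -0.05, 0.05, 0.15):
    # a +x% error in the entered weight scales the SUV by the same factor
    s = suv_bw(5000.0, 300e6, 70.0 * (1 + err), 3600.0)
    print(f"weight error {err:+.0%}: SUV changes by {s / base - 1:+.1%}")
```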


Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor uses financial engineering algorithms to produce an optimal asset allocation portfolio for investors without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Because Robo-Advisor algorithms present asset allocations to investors, they apply mathematical or statistical asset allocation strategies. The mean-variance optimization model developed by Markowitz is the typical asset allocation model: a simple but quite intuitive portfolio strategy in which assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns estimated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point: implied equilibrium returns for each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, yielding new estimates of risk and expected return, which in turn produce the optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor holds no views on the asset classes, the Black-Litterman model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolios for Black-Litterman users. This paper suggests an objective investor-views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastic %K, and price-parity degree of each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as inputs to the intelligent views model. Stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probabilities are used in the Q matrix. The implied equilibrium return vector is combined with the intelligent views matrix to yield the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk-parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collected 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values; the training period is 2008-2015 and the testing period is 2016-2018. Our suggested intelligent views model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance optimization portfolio, the risk-parity portfolio, and the market portfolio: its total return over the 3-year period was 6.4%, the highest value; its maximum drawdown was -20.8%, the lowest value; and its Sharpe ratio, which measures return per unit of risk, was the highest at 0.17. Overall, our views model shows the potential for replacing subjective analysts' views with an objective views model when practitioners apply Robo-Advisor asset allocation algorithms in real trading.
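A compact sketch of the Black-Litterman posterior described above, combining implied equilibrium returns with a view pick matrix P and view returns Q. The delta and tau values, the diagonal omega construction, and all the numbers are textbook defaults or hypothetical inputs, not values from the paper:

```python
import numpy as np

def black_litterman(sigma, w_mkt, P, Q, delta=2.5, tau=0.05):
    """Black-Litterman posterior expected returns.

    sigma : (n, n) asset return covariance matrix
    w_mkt : (n,) market-capitalization weights
    P     : (k, n) view pick matrix; Q : (k,) view returns
    delta, tau : typical textbook defaults (assumptions here)
    """
    pi = delta * sigma @ w_mkt                          # implied returns
    omega = np.diag(np.diag(P @ (tau * sigma) @ P.T))   # view uncertainty
    ts_inv = np.linalg.inv(tau * sigma)
    om_inv = np.linalg.inv(omega)
    post_cov = np.linalg.inv(ts_inv + P.T @ om_inv @ P)
    return post_cov @ (ts_inv @ pi + P.T @ om_inv @ Q)

# One view: asset 0 to outperform asset 1 by 2% (hypothetical numbers)
sigma = np.array([[0.040, 0.012, 0.006],
                  [0.012, 0.030, 0.008],
                  [0.006, 0.008, 0.020]])
w_mkt = np.array([0.5, 0.3, 0.2])
P = np.array([[1.0, -1.0, 0.0]])
Q = np.array([0.02])
print(black_litterman(sigma, w_mkt, P, Q))
```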

An Iterative, Interactive and Unified Seismic Velocity Analysis (반복적 대화식 통합 탄성파 속도분석)

  • Suh Sayng-Yong;Chung Bu-Heung;Jang Seong-Hyung
    • Geophysics and Geophysical Exploration
    • /
    • v.2 no.1
    • /
    • pp.26-32
    • /
    • 1999
  • Among the various seismic data processing steps, velocity analysis is the most time-consuming and man-hour-intensive. For production seismic data processing, a good velocity analysis tool, as well as a high-performance computer, is required; the tool must give fast and accurate velocity analyses. There are two different approaches to velocity analysis: batch and interactive. In batch processing, a velocity plot is made at every analysis point; the plot generally consists of a semblance contour, a super gather, and a stack panel, and the interpreter picks the velocity function by analyzing the plot. The technique is highly dependent on the interpreter's skill and requires considerable human effort. As high-speed graphic workstations have become more popular, various interactive velocity analysis programs have been developed. Although these programs enable faster picking of velocity nodes with a mouse, their main improvement is simply the replacement of the paper plot by the graphic screen. The velocity spectrum is highly sensitive to noise, especially the coherent noise often found in the shallow region of marine seismic data; for accurate velocity analysis, this noise must be removed before the spectrum is computed. The velocity analysis must also be carried out by carefully choosing the locations of the analysis points and accurately computing the spectrum, and the analyzed velocity function must be verified by mute and stack, with the sequence usually repeated. Therefore, an iterative, interactive, and unified velocity analysis tool is highly desirable. Such an interactive velocity analysis program, xva (X-Window based Velocity Analysis), was developed. The program handles all processes required in velocity analysis, such as composing the super gather, computing the velocity spectrum, NMO correction, mute, and stack. Most parameter changes produce the final stack in a few mouse clicks, thereby enabling iterative and interactive processing. A simple trace-indexing scheme is introduced, and a program to make the index of the Geobit seismic disk file was developed; the index is used to reference the original input, i.e., the CDP sort, directly. A transformation technique for the mute function between the T-X domain and the NMO-corrected (NMOC) domain is introduced and adopted in the program. The result of the transform is similar to the remove-NMO technique in suppressing shallow noise such as the direct wave and refracted waves, but it has two improvements: no interpolation error and very fast computation. With this technique, mute times can easily be designed in the NMOC domain and applied to the super gather in the T-X domain, producing a more accurate velocity spectrum interactively. The xva program consists of 28 files, 12,029 lines, 34,990 words, and 304,073 characters. It references the Geobit utility libraries and can be installed in a Geobit-preinstalled environment. The program runs in the X-Window/Motif environment, with menus designed according to the Motif style guide. A brief usage of the program is discussed. The program allows the fast and accurate seismic velocity analysis that is necessary for computing AVO (Amplitude Versus Offset)-based DHI (Direct Hydrocarbon Indicator) and for producing high-quality seismic sections.
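As an illustration of the NMO correction step that xva performs, here is a minimal sketch using the hyperbolic moveout t(x) = sqrt(t0² + x²/v²); the function name and the linear-interpolation resampling are assumptions, not the program's actual code:

```python
import numpy as np

def nmo_correct(gather, offsets, dt, velocity):
    """Apply normal-moveout correction to a CDP gather.

    gather   : (n_samples, n_traces) trace amplitudes
    offsets  : (n_traces,) source-receiver offsets (m)
    dt       : sample interval (s)
    velocity : (n_samples,) stacking velocity per zero-offset time (m/s)
    """
    n_samp, n_trc = gather.shape
    t0 = np.arange(n_samp) * dt
    out = np.zeros_like(gather)
    for j in range(n_trc):
        # hyperbolic travel time: t(x) = sqrt(t0^2 + x^2 / v^2)
        tx = np.sqrt(t0**2 + (offsets[j] / velocity) ** 2)
        idx = tx / dt  # fractional sample index to read from
        out[:, j] = np.interp(idx, np.arange(n_samp), gather[:, j],
                              left=0.0, right=0.0)
    return out
```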


Performance Improvement on Short Volatility Strategy with Asymmetric Spillover Effect and SVM (비대칭적 전이효과와 SVM을 이용한 변동성 매도전략의 수익성 개선)

  • Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.119-133
    • /
    • 2020
  • Fama asserted that in an efficient market we cannot make a trading rule that consistently outperforms the average stock market return. This study suggests a machine learning algorithm to improve the trading performance of an intraday short volatility strategy that exploits the asymmetric volatility spillover effect, and analyzes the resulting improvement. In general, stock market volatility is negatively related to stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: upward and downward moves of US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, VIX, KOSPI 200 index, and V-KOSPI 200 from 2008 to 2018. We found a negative relation between the S&P 500 and the VIX, and between the KOSPI 200 and the V-KOSPI 200, and documented a strong volatility spillover from the VIX to the V-KOSPI 200. Interestingly, the spillover was asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall is only partially reflected at the open and its influence lasts until the Korean market close. If the stock market were efficient, there would be no reason for such an asymmetric volatility spillover to exist; it is a counterexample to the efficient market hypothesis. To exploit this anomalous spillover pattern, we analyzed an intraday short volatility strategy (SVS) that sells the Korean volatility market short in the morning after US stock market volatility closes down, and takes no position after the VIX closes up. It produced a profit in every year between 2008 and 2018, with 68% of trades profitable. The strategy showed a higher average annual return of 129% relative to the benchmark's 33%. Its maximum drawdown (MDD) of -41% was also smaller in magnitude than the benchmark's -101%. The Sharpe ratio of the SVS strategy, 0.32, is much greater than the benchmark's 0.08; since the Sharpe ratio is return divided by risk, a high value indicates better performance when comparing strategies with different risk-return structures. Real-world trading incurs trading costs, including brokerage and slippage; once these are considered, the gap between average annual returns of 76% and -10% becomes clear. To improve the performance of the suggested volatility trading strategy, we applied the well-known SVM algorithm. The input variables are the VIX close-to-close return at day t-1, the VIX open-to-close return at day t-1, and the V-KOSPI 200 (VK) open return at day t; the output is the up/down classification of the VK open-to-close return at day t. The training period is 2008-2014 and the testing period is 2015-2018; linear, radial basis, and polynomial kernel functions are used. The resulting modified short volatility strategy (m-SVS) sells the VK in the morning when the SVM output is Down and takes no position when the output is Up. Trading performance improved remarkably: over the 5-year testing period, the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy, with an annual return of 123%, higher than that of the SVS strategy, and an MDD significantly improved from -41% to -29%.
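A minimal sketch of the SVM classification step described above, using synthetic returns in place of the VIX/V-KOSPI 200 series (which are not reproduced in the abstract); the planted feature-label relationship and all numbers are illustrative:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-ins for the three features: VIX close-to-close return
# (t-1), VIX open-to-close return (t-1), and VK open return (t).
rng = np.random.default_rng(0)
X = rng.normal(0.0, 0.02, size=(500, 3))
# Synthetic up/down labels with a planted dependence on the features
y = (X @ np.array([0.5, 0.3, -0.4]) + rng.normal(0, 0.01, 500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = X[:350], X[350:], y[:350], y[350:]
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)

# Trading rule: short volatility only on predicted "down" (0) mornings
signals = model.predict(X_te)
print("hit rate:", (signals == y_te).mean())
```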

The Present State and Solutions for Archival Arrangement and Description of National Archives & Records Service of Korea (국가기록원의 기록물 정리기술의 현황과 개선방안)

  • Yoon, Ju-Bom
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.4 no.2
    • /
    • pp.118-162
    • /
    • 2004
  • Archival description plays an important role in records control and reference service. The National Archives of Korea has made efforts at archival description, but compared with advanced countries there are differences and problems in both theory and practice. The serious theoretical difference is that functional classification, maintenance of original order, and multi-level description are not reflected in practical processing: records are arranged on shelves in registration order, volume by volume, like an arrangement of books. In addition, there are problems in documenting agencies' administrative histories and in index control, which can inconvenience users. To suggest improvements, this study introduces the meaning and importance of arrangement and description, the situation and problems of arrangement and description in the National Archives, and description guidelines from other countries, followed by an example application of ISAD(G). The paper has eight chapters: chapter 1 is the introduction, chapter 2 covers the meaning and importance of arrangement and description, and chapter 8 is the conclusion; chapters 3 to 7 are as follows. Chapter 3 explains the GOVT system currently in use and its description element categories, as part of the situation and problems of arrangement and description in the National Archives. Chapter 4 covers description guidelines from the archives of the USA, England, and Australia: 1. NARA's Lifecycle Data Requirements Guide is introduced, along with the way the title element, among the description fields, is described; 2. the Public Record Office's description guideline, the National Archives Cataloguing Guidelines Introduction, is discussed, together with "PROCAT" and its seven-step description procedure; 3. the Commonwealth Record Series (CRS) of the National Archives of Australia is examined, including the registration and description procedures of the CRS system. Chapter 5 presents an example applying ISAD(G): the description of records produced by the Appeals Commission in the Ministry of Government Administration. Chapters 6 and 7 address the problems identified after applying ISAD(G), namely the naming of records at the procedure section of each institution, the lack of description field categories, the sorting or classification of kind or form, the reference or identification number, the absence of detailed description rules, functional classification, multi-level description, input formats, shelf arrangement, and authority control, and they propose plans for improvement. The best way to improve arrangement and description in the National Archives is to examine the standards, guidelines, and manuals of archives in advanced countries, and we suggest that much more research and study is needed in this academic field.