• Title/Summary/Keyword: 3D Based


Determination of shear wave velocity profiles in soil deposit from seismic piezo-cone penetration test (탄성파 피에조콘 관입 시험을 통한 국내 퇴적 지반의 전단파 속도 결정)

  • Sun Chung Guk;Jung Gyungja;Jung Jong Hong;Kim Hong-Jong;Cho Sung-Min
    • 한국지구물리탐사학회:학술대회논문집 / 2005.09a / pp.125-153 / 2005
  • It has been widely known that the seismic piezo-cone penetration test (SCPTU) is one of the most useful techniques for investigating geotechnical characteristics, including dynamic soil properties. As a practical application in Korea, SCPTU was carried out at two sites in Busan and four sites in Incheon, which are mainly composed of alluvial or marine soil deposits. From the SCPTU waveform data obtained at the testing sites, the first arrival times of shear waves and the corresponding time differences with depth were determined using the cross-over method, and the shear wave velocity (VS) profiles were derived using the refracted ray path method based on Snell's law; the VS profiles were similar in trend to the cone tip resistance (qt) profiles. In the Incheon area, the testing depths of SCPTU were greater than those of conventional down-hole seismic tests. Moreover, for the application of conventional CPTU to earthquake engineering practice, correlations between VS and CPTU data were deduced from the SCPTU results. For the empirical evaluation of VS for all soils, together with clays and sands classified unambiguously in this study by the soil behavior type classification index (IC), the authors suggest VS-CPTU data correlations expressed as a function of four parameters, qt, fs, σv0, and Bq, determined by multiple statistical regression modeling. Despite the incompatible strain levels of the down-hole seismic test during SCPTU and the conventional CPTU, it is shown that the VS-CPTU data correlations for all soils, clays, and sands suggested in this study are applicable to the preliminary estimation of VS for Korean deposits and are more reliable than previous correlations proposed by other researchers.
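As a sketch of the kind of VS-CPTU correlation described above, the regression below is fit by ordinary least squares on synthetic data; the variable ranges, the log-linear functional form, and all coefficients are illustrative assumptions, not the paper's actual data or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
qt = rng.uniform(0.5, 10.0, n)      # cone tip resistance (MPa)
fs = rng.uniform(0.01, 0.2, n)      # sleeve friction (MPa)
sv0 = rng.uniform(20.0, 300.0, n)   # total vertical stress (kPa)
bq = rng.uniform(0.05, 1.0, n)      # pore pressure ratio

# Synthetic "true" relationship, used only to generate demonstration data
log_vs = 2.0 + 0.3*np.log(qt) + 0.1*np.log(fs) + 0.2*np.log(sv0) - 0.05*bq
log_vs += rng.normal(0, 0.01, n)    # measurement noise

# Least-squares fit of the linearized four-parameter model
X = np.column_stack([np.ones(n), np.log(qt), np.log(fs), np.log(sv0), bq])
coef, *_ = np.linalg.lstsq(X, log_vs, rcond=None)
vs_pred = np.exp(X @ coef)          # predicted shear wave velocity
```

With low noise the fitted coefficients recover the generating ones, which is the basic check one would run before trusting such a correlation on field data.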


Function of the Korean String Indexing System for the Subject Catalog (주제목록을 위한 한국용어열색인 시스템의 기능)

  • Yoon Kooho
    • Journal of the Korean Society for Library and Information Science / v.15 / pp.225-266 / 1988
  • Various theories and techniques for the subject catalog have been developed since Charles Ammi Cutter first tried to formulate rules for the construction of subject headings in 1876. However, they do not seem to be appropriate to the Korean language because its syntax and semantics differ from those of English and other European languages. This study therefore attempts to develop a new Korean subject indexing system, the Korean String Indexing System (KOSIS), in order to increase the use of subject catalogs. For this purpose, the advantages and disadvantages of the classed subject catalog and the alphabetical subject catalog, which are the typical subject catalogs in libraries, are investigated, and the most notable subject indexing systems, in particular PRECIS, developed by the British National Bibliography, are reviewed and analysed. KOSIS is a string indexing system based purely on the syntax and semantics of the Korean language, even though considerable principles of PRECIS are applied to it. The outlines of KOSIS are as follows: 1) KOSIS is based on the fundamentals of natural language and an ingenious conjunction of human indexing skills and computer capabilities. 2) KOSIS is a string indexing system based on the 'principle of context-dependency.' A string of terms organized according to this principle shows remarkable affinity with certain patterns of words in ordinary discourse. From that point onward, natural language rather than classificatory terms becomes the basic model for indexing schemes. 3) KOSIS uses 24 role operators. One or more operators should be allocated to the index string, which is organized manually by the indexer's intellectual work, in order to establish the most explicit syntactic relationship of index terms.
4) Traditionally, a single-line entry format is used in which a subject heading or index entry is presented as a single sequence of words, consisting of the entry terms plus, in some cases, an extra qualifying term or phrase. KOSIS, however, employs a two-line entry format which contains three basic positions for the production of index entries. The 'lead' serves as the user's access point, the 'display' contains those terms which are themselves context-dependent on the lead, and the 'qualifier' sets the lead term into its wider context. 5) Each of the KOSIS entries is co-extensive with the initial subject statement prepared by the indexer, since it displays all the subject specificities. Compound terms are always presented in their natural language order. Inverted headings are not produced in KOSIS. Consequently, the precision ratio of information retrieval can be increased. 6) KOSIS uses 5 relational codes for the system of references among semantically related terms. Semantically related terms are handled by a different set of routines, leading to the production of 'See' and 'See also' references. 7) KOSIS was originally developed for a classified catalog system which requires a subject index, that is, an index which 'translates' subjects expressed in natural language into the appropriate classification numbers. However, KOSIS can also be used for a dictionary catalog system. Accordingly, KOSIS strings can be manipulated to produce either appropriate subject indexes for a classified catalog system or acceptable subject headings for a dictionary catalog system. 8) KOSIS is able to maintain the consistency of index entries and cross references by means of a routine identification of the established index strings and reference system. For this purpose, an individual Subject Indicator Number and Reference Indicator Number is allocated to each new index string and new index term, respectively.
9) KOSIS can produce all the index entries, cross references, and authority cards by either manual or mechanical methods. Thus, detailed algorithms for the machine production of various outputs are provided for institutions that can use computer facilities.
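The rotation of a context-dependent string into two-line entries (lead, qualifier, display) can be sketched as below. This is a hypothetical simplification of the KOSIS/PRECIS "shunting" idea for illustration only; the actual system uses 24 role operators and Korean-specific syntax rules.

```python
def kosis_entries(terms):
    """Generate (lead, qualifier, display) index entries from a
    context-dependent string of terms, broadest context first.

    Simplified sketch: each term takes a turn as the lead; terms wider
    in context form the qualifier (nearest context first), and terms
    context-dependent on the lead form the display.
    """
    entries = []
    for i, lead in enumerate(terms):
        qualifier = " - ".join(reversed(terms[:i]))  # wider context
        display = " - ".join(terms[i + 1:])          # dependent terms
        entries.append((lead, qualifier, display))
    return entries

# Hypothetical example string, broadest term first
entries = kosis_entries(["Korea", "Libraries", "Cataloguing"])
# e.g. second entry: lead 'Libraries', qualifier 'Korea', display 'Cataloguing'
```

Every term of the string becomes an access point, which is the property the abstract attributes to the two-line entry format.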


Decreased White Matter Structural Connectivity in Psychotropic Drug-Naïve Adolescent Patients with First Onset Major Depressive Disorder (정신과적 투약력이 없는 초발 주요 우울장애 청소년 환아들에서의 백질 구조적 연결성 감소)

  • Suh, Eunsoo;Kim, Jihyun;Suh, Sangil;Park, Soyoung;Lee, Jeonho;Lee, Jongha;Kim, In-Seong;Lee, Moon-Soo
    • Korean Journal of Psychosomatic Medicine / v.25 no.2 / pp.153-165 / 2017
  • Objectives : Recent neuroimaging studies focus on dysfunctions in the connectivity between cognitive and emotional circuits: the anterior cingulate cortex, which connects the dorsolateral orbitofrontal cortex and prefrontal cortex to the limbic system. Previous studies on pediatric depression using diffusion tensor imaging (DTI) have reported decreased neural connectivity in several brain regions, including the amygdala, anterior cingulate cortex, and superior longitudinal fasciculus. We compared the neural connectivity of psychotropic drug-naïve adolescent patients with a first onset major depressive episode with that of healthy controls using DTI. Methods : Adolescent psychotropic drug-naïve patients (n=26; 10 men, 16 women; age range, 13-18 years) who visited Korea University Guro Hospital and were diagnosed with first onset major depressive disorder were enrolled. Healthy controls (n=27; 5 males, 22 females; age range, 12-17 years) were recruited. Psychiatric interviews, complete psychometrics including IQ and HAM-D, and MRI including diffusion weighted image acquisition were conducted prior to antidepressant administration to the patients. Fractional anisotropy (FA) and radial, mean, and axial diffusivity were estimated using DTI. FMRIB Software Library-Tract Based Spatial Statistics was used for statistical analysis. Results : We did not observe any significant difference in the whole brain analysis. However, ROI analysis of the right superior longitudinal fasciculus revealed 3 clusters with a significant decrease of FA in the patient group. Conclusions : Patients with adolescent major depressive disorder showed a statistically significant FA decrease in the DTI-based structure compared with healthy controls. We therefore suppose DTI can be used as a bio-marker in psychotropic drug-naïve adolescent patients with first onset major depressive disorder.
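The scalar measures named in the Methods (FA and radial, mean, and axial diffusivity) are standard functions of the three diffusion tensor eigenvalues. A generic textbook sketch (not the FSL implementation used in the study):

```python
import numpy as np

def dti_metrics(evals):
    """Standard DTI scalar metrics from the diffusion tensor
    eigenvalues (lambda1 >= lambda2 >= lambda3)."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0                        # mean diffusivity
    fa = np.sqrt(0.5 * ((l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2)
                 / (l1**2 + l2**2 + l3**2))          # fractional anisotropy
    ad = l1                                          # axial diffusivity
    rd = (l2 + l3) / 2.0                             # radial diffusivity
    return fa, md, ad, rd
```

FA ranges from 0 for isotropic diffusion (all eigenvalues equal) to 1 for diffusion confined to a single axis, which is why reduced FA along a tract such as the superior longitudinal fasciculus is read as reduced structural connectivity.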

Characteristics and Seasonal Variations in the Structure of Coleoptera Communities (갑충군집(甲蟲群集)의 구조적(構造的) 특성(特性)과 계절적(季節的) 발생소장(發生消長))

  • Kim, Ho Jun
    • Journal of Korean Society of Forest Science / v.80 no.1 / pp.82-96 / 1991
  • This study was carried out to investigate the structural characteristics of Coleoptera communities inhabiting the crowns of the Korean pine (Pinus koraiensis S. et Z.). Four plantations of the Korean pine, stand A (11 years old), stand B (21 years old), stand C (31 years old), and stand D (46 years old), were selected in Sudong-myen, Namyangju-gun, Kyeonggi-do. Sampling was done by knock-down methods using an insecticide (DDVP) from April 1986 to September 1987, except for the winter season. The following major conclusions are drawn from this study: 1. The total number of Coleoptera was 107 species of 85 genera in 35 families: 83 species of 66 genera in 27 families in 1986 and 74 species of 52 genera in 30 families in 1987. 2. The abundant families, based on the number of species, were Staphylinidae (16.8%), Coccinellidae (7.5%), Chrysomelidae (6.5%), Curculionidae (6.5%), and Cerambycidae (5.6%). These five families accounted for 43.0% of the total number of species. 3. The important families, based on the number of individuals, were Cantharidae (28.2%), Catopidae (27.7%), and Coccinellidae (23.0%). These three families accounted for 78.9% of the total number of individuals. 4. The important species, based on the number of individuals, were Podabrus sp. (22.6%, Cantharidae), Catops sp. 1 (21.7%, Catopidae), and Anatis halonis (15.2%, Coccinellidae). The dominant species were Podabrus sp. (25.2%) in 1986 and Catops sp. 1 (24.9%) in 1987. 5. Generally, more species and individuals were found in older stands than in younger ones. 6. The Coleoptera communities decreased in the thinned stand (stand C); this effect was likely to last two or more years. 7. The Coleoptera communities reached their peak of abundance in May and decreased thereafter.


A Study on the Effect of Students' Problem Solving Ability and Satisfactions in Woodworking Product Making Program Using Design Thinking (목공 제품 제작 활동에서 디자인 씽킹의 활용이 학생들의 만족도와 문제해결력에 미치는 영향)

  • Kim, SeongIl
    • 대한공업교육학회지 / v.44 no.2 / pp.142-163 / 2019
  • The purpose of this study is to analyze the problem solving ability and satisfaction of university students, who are pre-service technology teachers, in a woodworking product (birdhouse) making program using design thinking. Survey responses on satisfaction, confidence in problem solving, difficulties, and the causes of difficulties, by gender and grade, were analyzed with a statistical program (SPSS ver. 20) for the 33 students who took part in an extra-curricular experience program designed to improve creativity and problem solving ability. The main conclusions of this study are as follows: First, the average of total satisfaction with the experience program is 4.39, which is somewhat high. The highest-rated items are 'feelings of accomplishment' and 'advice from the surroundings' (M = 4.46). There is no significant difference by gender or grade. Through the design thinking process, and with the help of those around them, the students took more interest in group-based making of varied birdhouses than in simply following a given procedure. The program is therefore worth recommending to other students, as it produced high self-confidence, a sense of accomplishment, and satisfaction. Second, the total average of the students' self-confidence in problem solving in the group-based making program using design thinking is 3.80. As a result of the group activities, the students gained self-confidence in their 'ability to solve problems and deal with difficult situations'. In later making programs, addressing the difficulties encountered in making could further enhance student satisfaction. Third, among the questionnaire items related to confidence in problem solving ability, 'I have the ability to solve many problems' and 'I always have the ability to cope with new and difficult business situations' show the highest correlation. Therefore, in order to improve self-confidence in problem solving ability, it is necessary to prepare teaching-learning programs that can strengthen problem solving ability. Fourth, when designing and making a new product rather than working from a given design, the most difficult step is 'the process of reworking and modifying the idea product'. The main reason the students had difficulty in the production process is a 'lack of knowledge and ability to produce'. When making various woodworking products using the design thinking process, sufficient training on woodworking and design thinking before product making is helpful. The students' satisfaction with team-based learning using design thinking, which helps improve creativity and problem solving ability, is high. Therefore, applying and analyzing design thinking in other making activity programs can also improve students' problem solving ability.

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.175-197 / 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because of the characteristic of having multiple labels. In addition, since the number of labels to be predicted grows as the number of labels and classes increases, performance improvement is difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed label, and (iii) the predicted label is restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they cannot capture non-linear relationships between labels and therefore cannot create a latent label space that sufficiently contains the information of the original labels.
Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the gradient loss problem that occurs in the backpropagation process of learning. To solve this problem, the skip connection was devised: by adding the input of a layer to its output, gradient loss during backpropagation is prevented, and efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to each of the encoder and decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and to evaluate multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods.
This suggests that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance according to domain characteristics and the number of dimensions of the latent label space.
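A minimal numpy forward-pass sketch of the idea may help: a multi-hot label vector is compressed to a low-dimensional latent vector and restored, with a skip connection adding a layer's input to its output. The dimensions, random initialization, and single skip connection are assumptions for illustration; the paper's actual model is a trained autoencoder with skip connections in both encoder and decoder.

```python
import numpy as np

rng = np.random.default_rng(42)
n_labels, hidden, latent = 64, 64, 8   # hidden == n_labels so x + f(x) is valid

def relu(x):
    return np.maximum(0.0, x)

# Untrained encoder/decoder weights (illustration only)
W1 = rng.normal(0, 0.1, (n_labels, hidden))
W_lat = rng.normal(0, 0.1, (hidden, latent))
W2 = rng.normal(0, 0.1, (latent, hidden))
W_out = rng.normal(0, 0.1, (hidden, n_labels))

def encode(y):
    h = relu(y @ W1) + y        # skip connection: add layer input to output
    return h @ W_lat            # low-dimensional latent label vector

def decode(z):
    h = relu(z @ W2)
    logits = h @ W_out
    return 1.0 / (1.0 + np.exp(-logits))   # per-label probabilities

y = (rng.random(n_labels) < 0.1).astype(float)  # multi-hot label vector
z = encode(y)                                    # compressed representation
y_hat = decode(z)                                # restored label probabilities
```

In training, the reconstruction loss between y and y_hat would drive the latent space to retain label information; the skip term keeps gradients flowing through the identity path.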

Development of the Regulatory Impact Analysis Framework for the Convergence Industry: Case Study on Regulatory Issues by Emerging Industry (융합산업 규제영향분석 프레임워크 개발: 신산업 분야별 규제이슈 사례 연구)

  • Song, Hye-Lim;Seo, Bong-Goon;Cho, Sung-Min
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.199-230 / 2021
  • Innovative new products and services are being launched through convergence between heterogeneous industries, and social interest and investment in convergence industries such as AI, big data-based future cars, and robots are continuously increasing. However, in the process of commercializing convergence products and services, there are many cases where they do not conform to the existing regulatory and legal system, which causes great difficulty for companies launching their products and services into the market. In response to these industrial changes, the current government is promoting the improvement of existing regulatory mechanisms applied to the relevant industries along with expanded investment in new industries. Amid these convergence industry trends, this study aimed to analyze the existing regulatory system that obstructs the market entry of innovative new products and services, in order to preemptively predict regulatory issues that will arise in emerging industries. In addition, it was intended to establish a regulatory impact analysis system to evaluate adequacy and prepare improvement measures. The flow of this study is divided into three parts. In the first part, previous studies on regulatory impact analysis and evaluation systems are investigated; these were used as basic data for the development direction of the regulatory impact framework, its indicators, and its items. In the second part, on the development of the regulatory impact analysis framework, indicators and items are developed based on the previously investigated data and applied to each stage of the framework. In the last part, a case study is presented that solves regulatory issues faced by actual companies by applying the developed regulatory impact analysis framework.
The case study covered the autonomous/electric vehicle industry and the Internet of Things (IoT) industry, because these are among the emerging industries in which the Korean government has recently shown the greatest interest and are judged to be most relevant to the realization of an intelligent information society. Specifically, the regulatory impact analysis framework proposed in this study consists of five steps. The first step is to identify the industrial size of the target products and services, related policies, and regulatory issues. In the second stage, regulatory issues are discovered through a review of regulatory improvement items for each stage of commercialization (planning, production, commercialization). In the next step, factors related to regulatory compliance costs are derived and the costs incurred for existing regulatory compliance are calculated. In the fourth stage, an alternative is prepared by gathering the opinions of the relevant industry and experts in the field, and the necessity, validity, and adequacy of the alternative are reviewed. Finally, in the last stage, the adopted alternatives are formulated so that they can be applied to legislation, and the alternatives are reviewed by legal experts. The implications of this study are summarized as follows. From a theoretical point of view, it is meaningful in that it clearly presents a series of procedures for regulatory impact analysis as a framework. Although previous studies mainly discussed the importance and necessity of regulatory impact analysis, this study presents a systematic framework that considers the various factors required for regulatory impact analysis suggested by prior studies. From a practical point of view, this study is significant in that it was applied to actual regulatory issues based on the proposed regulatory impact analysis framework.
The results show that proposals related to the regulatory issues were submitted to government departments and that the current law was ultimately revised, suggesting that the framework proposed in this study can be an effective way to resolve regulatory issues. The regulatory impact analysis framework proposed in this study is expected to be a meaningful guideline for technology policy researchers and policy makers in the future.

Investigating Data Preprocessing Algorithms of a Deep Learning Postprocessing Model for the Improvement of Sub-Seasonal to Seasonal Climate Predictions (계절내-계절 기후예측의 딥러닝 기반 후보정을 위한 입력자료 전처리 기법 평가)

  • Uran Chung;Jinyoung Rhee;Miae Kim;Soo-Jin Sohn
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.2 / pp.80-98 / 2023
  • This study explores the effectiveness of various data preprocessing algorithms for improving subseasonal to seasonal (S2S) climate predictions from six climate forecast models and their Multi-Model Ensemble (MME) using a deep learning-based postprocessing model. A pipeline of data transformation algorithms was constructed to convert raw S2S prediction data into training data processed with several statistical distributions, and a dimensionality reduction algorithm selected features through rankings of the correlation coefficients between the observed and the input data. The training model in the study was designed with a TimeDistributed wrapper applied to all convolutional layers of a U-Net: the TimeDistributed wrapper allows a U-Net convolutional layer to be applied directly to 5-dimensional time series data while maintaining the time axis of the data, although every input to the U-Net must be at least 3D. We found that the Robust and Standard transformation algorithms are most suitable for improving S2S predictions. The dimensionality reduction based on feature selection did not significantly improve predictions of daily precipitation for the six climate models and even worsened predictions of daily maximum and minimum temperatures. While deep learning-based postprocessing also improved MME S2S precipitation predictions, it did not have a significant effect on temperature predictions, particularly for lead times of weeks 1 and 2. Further research is needed to develop an optimal deep learning model for improving S2S temperature predictions by testing various models and parameters.
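Two of the preprocessing steps above, the Standard and Robust transformations and the correlation-based feature selection, can be sketched generically as follows (synthetic data; this is not the study's pipeline or its climate variables):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(10.0, 3.0, (100, 5))          # 100 samples, 5 candidate features
y = X[:, 2] * 0.8 + rng.normal(0, 0.1, 100)  # "observations" driven by feature 2

def standard_scale(x):
    # Standard transformation: zero mean, unit variance
    return (x - x.mean()) / x.std()

def robust_scale(x):
    # Robust transformation: center on the median, scale by the
    # interquartile range, so outliers have less influence
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return (x - med) / (q3 - q1)

# Dimensionality reduction by feature selection: rank input features by the
# absolute correlation of each with the observed series
corrs = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
ranking = np.argsort(corrs)[::-1]            # best-correlated feature first
```

Keeping only the top-ranked features is the selection step the study evaluated; as the abstract notes, it helped less than the scaling choices did.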

Estimation of Rice Canopy Height Using Terrestrial Laser Scanner (레이저 스캐너를 이용한 벼 군락 초장 추정)

  • Dongwon Kwon;Wan-Gyu Sang;Sungyul Chang;Woo-jin Im;Hyeok-jin Bak;Ji-hyeon Lee;Jung-Il Cho
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.4 / pp.387-397 / 2023
  • Plant height is a growth parameter that provides visible insight into a plant's growth status and is highly correlated with yield, so it is widely used in crop breeding and cultivation research. Growth characteristics such as plant height have generally been investigated directly by humans using a ruler, but with the recent development of sensing and image analysis technology, research is under way to digitize growth measurement so that crop growth can be investigated efficiently. In this study, the canopy height of rice grown at various nitrogen fertilization levels was measured using a laser scanner capable of precise measurement over a wide range, and the results were compared with the actual plant height. Comparing the point cloud data collected with the laser scanner and the actual plant height confirmed that the estimated plant height based on the average height of the top 1% of points showed the highest correlation with the actual plant height (R2 = 0.93, RMSE = 2.73). Based on this, a linear regression equation was derived and used to convert the canopy height measured with the laser scanner to the actual plant height. The rice growth curves drawn by combining the actual and estimated plant heights collected under various nitrogen fertilization conditions and growth periods show that laser scanner-based canopy height measurement can be effectively utilized for assessing the plant height and growth of rice. In the future, 3D images derived from laser scanners are expected to be applicable to crop biomass estimation, plant shape analysis, etc., and can serve as a technology for digitizing conventional crop growth assessment methods.
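The height-estimation step described above can be sketched as follows, with synthetic point heights and illustrative calibration numbers (the study's actual point clouds and fitted coefficients are not reproduced here): take the mean of the top 1% of point-cloud heights as the canopy height, then map it to actual plant height with a fitted linear regression.

```python
import numpy as np

rng = np.random.default_rng(7)
z = rng.normal(0.8, 0.1, 10_000)             # point heights (m) for one plot
top1 = np.sort(z)[-len(z) // 100:]           # highest 1% of points
canopy_height = top1.mean()                  # scanner-based canopy height

# Linear calibration against ruler-measured plant heights from several
# plots (numbers are made up for the sketch)
canopy = np.array([0.55, 0.70, 0.85, 1.00, 1.10])   # scanner estimates (m)
actual = np.array([0.60, 0.74, 0.90, 1.04, 1.15])   # ruler measurements (m)
slope, intercept = np.polyfit(canopy, actual, 1)

predicted_plant_height = slope * canopy_height + intercept
```

Averaging the top 1% rather than taking the single maximum makes the estimate robust to stray points above the canopy, which is presumably why it correlated best with the manual measurements.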

Comparative analysis of Glomerular Filtration Rate measurement and estimated glomerular filtration rate using 99mTc-DTPA in kidney transplant donors. (신장이식 공여자에서 99mTc-DTPA를 이용한 Glomerular Filtration Rate 측정과 추정사구체여과율의 비교분석)

  • Cheon, Jun Hong;Yoo, Nam Ho;Lee, Sun Ho
    • The Korean Journal of Nuclear Medicine Technology / v.25 no.2 / pp.35-40 / 2021
  • Purpose: Glomerular filtration rate (GFR) is an important indicator for the diagnosis, treatment, and follow-up of kidney disease, and is also used in healthy individuals for drug dosing and for evaluating kidney function in donors. The gold standard GFR test is continuous infusion of inulin, an extrinsic marker, but it takes a long time and the test method is complicated, so the serum concentration of creatinine is measured and an estimated glomerular filtration rate (eGFR) is used instead. However, creatinine is known to be affected by age, gender, muscle mass, etc. The eGFR formulas currently in use include the Cockcroft-Gault formula, the Modification of Diet in Renal Disease (MDRD) formula, and the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) formula for adults; for children, the Schwartz formula is used. Measurement of GFR using 51Cr-EDTA (ethylenediaminetetraacetic acid) or 99mTc-DTPA (diethylenetriaminepentaacetic acid) can replace inulin and is currently in use. We therefore compared the GFR measured using 99mTc-DTPA with the eGFR from the CKD-EPI formula. Materials and Methods: For 200 kidney transplant donors who visited Asan Medical Center (96 males, 104 females; 47.3 ± 12.7 years old), GFR was measured using plasma samples (two-plasma-sample method, TPSM) obtained after intravenous administration of 99mTc-DTPA (0.5 mCi, 18.5 MBq). eGFR was derived using the CKD-EPI formula based on the serum creatinine concentration. Results: The average GFR measured using 99mTc-DTPA for the 200 kidney transplant donors was 97.27±19.46 ml/min/1.73m2, the average eGFR using the CKD-EPI formula was 96.84±17.74 ml/min/1.73m2, and the serum creatinine concentration was 0.84±0.39 mg/dL. The regression of 99mTc-DTPA GFR on serum creatinine-based eGFR was Y = 0.5073X + 48.186, with a correlation coefficient of 0.698 (P<0.01). The difference (%) was 1.52±18.28.
Conclusion: The correlation between the 99mTc-DTPA GFR and the eGFR derived from the serum creatinine concentration was confirmed to be moderate. This is presumably because eGFR is affected by external factors such as age, gender, and muscle mass, and because the formulas were developed for kidney disease patients. Using 99mTc-DTPA, we can provide reliable GFR results for the diagnosis, treatment, and observation of kidney disease, and for the kidney evaluation of kidney transplant patients.
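The CKD-EPI eGFR referenced above can be sketched with the 2009 CKD-EPI creatinine equation; this is the textbook form for illustration only (the study's exact implementation and laboratory calibration are not specified, and clinical use requires a validated implementation):

```python
def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
    """2009 CKD-EPI creatinine equation, returning eGFR in
    mL/min/1.73 m^2. Sketch for illustration, not for clinical use."""
    kappa = 0.7 if female else 0.9       # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411 # sex-specific low-range exponent
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** (-1.209)
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

For the study's mean serum creatinine (0.84 mg/dL) and mean age (47.3 years), the formula yields values of the same order as the reported mean eGFR of 96.84 mL/min/1.73 m2.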