• Title/Summary/Keyword: Algorithmization (알고리즘화)


Mapping of Suitable Tree Species for a Given Site Using Digital Terrain Analysis in a Central Temperate Forest (수치지형해석(數値地形解析)에 의한 온대중부림(溫帶中部林)의 적지적수도(適地適樹圖) 작성(作成))

  • Kang, Young-Ho;Jeong, Jin-Hyun;Kim, Young-Kul;Park, Jae-Wook
    • Journal of Korean Society of Forest Science
    • /
    • v.86 no.2
    • /
    • pp.241-250
    • /
    • 1997
  • The study was conducted to produce a map for selecting suitable tree species for each site by digital terrain analysis. We assigned an algorithmic value to each tree species' characteristics through distribution-pattern analysis, and the soil types were digitized from the data indicated on the soil map. Mean altitude, slope, aspect, and micro-topography were estimated from the digital map for each block, which had been calculated by regression equations with altitude. The results of the study can be summarized as follows. 1. Using terrain analysis and a multivariate digital map on a personal computer, we developed a method for selecting suitable tree species for a given site in Muju-gun, Chonbuk province (2,500 ha), taking soil, forest condition, and topographic factors into account. 2. Brown forest soils were the major soil type in the study area, and 29 tree species occurred, with Pinus densiflora as the dominant species. Differences in site condition and soil properties resulted in site-quality differences for each tree species. 3. We assessed the accuracy of the BASIC program (DTM.BAS) written for this study by comparing the mean altitude and aspect calculated from the topographic terrain-analysis map with those from surveyed data. The differences in altitude were less than 5%, a statistically acceptable value, and the aspect values likewise showed no differences; this indicates that the program can be used efficiently in further work. 4. On the suitable-site selection map, the 2nd group (R, $B_1$) occupied the largest area with 46%, followed by the non-forest area (L) with 23%, the 5th group with 7%, and the 4th group with 5%; the remaining groups each occupied less than 6%. 5. We suggested four types of management by silvicultural tree species, considering soil type and topographic conditions.

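As an illustration of the kind of digital terrain analysis the abstract describes, the sketch below computes mean altitude, slope, and aspect from a small elevation grid by central differences. All grid values, the cell size, and the function name are invented for illustration; the paper's DTM.BAS program is not reproduced here.

```python
import math

# Toy 3x3 elevation grid (metres) with 10 m cells; all values illustrative.
dem = [
    [105.0, 110.0, 115.0],
    [100.0, 106.0, 112.0],
    [ 95.0, 101.0, 108.0],
]
CELL = 10.0

def slope_aspect(z, cell, r, c):
    """Slope (degrees) and aspect (compass bearing of steepest descent,
    degrees from north) at an interior cell, via central differences."""
    dzdx = (z[r][c + 1] - z[r][c - 1]) / (2 * cell)  # gradient toward east
    dzdy = (z[r - 1][c] - z[r + 1][c]) / (2 * cell)  # gradient toward north
    slope = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    aspect = (math.degrees(math.atan2(-dzdx, -dzdy)) + 360) % 360
    return slope, aspect

mean_alt = sum(sum(row) for row in dem) / (len(dem) * len(dem[0]))
s, a = slope_aspect(dem, CELL, 1, 1)
print(f"mean altitude {mean_alt:.1f} m, slope {s:.1f} deg, aspect {a:.0f} deg")
```

Aggregating such per-cell values over blocks is what lets the mapped means be checked against surveyed altitude and aspect, as in the paper's accuracy test.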

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.1
    • /
    • pp.96-110
    • /
    • 2017
  • Most vehicle-detection studies using a conventional or wide-angle lens suffer from blind spots when detecting to the rear, and the image is vulnerable to noise and varied external environments. In this paper, we propose a method for detection in harsh external environments with noise, blind spots, and similar conditions. First, a fish-eye lens is used to minimize blind spots compared with a wide-angle lens. Because nonlinear radial distortion increases as the lens angle grows, calibration was applied after initializing and optimizing the distortion constant in order to ensure accuracy. In addition, alongside calibration, the original image was analyzed to remove fog and to correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Fog removal generally takes considerable computation time, so the widely used Dark Channel Prior algorithm was adopted to keep the calculation manageable. Gamma correction was used to adjust brightness; to determine the gamma value needed for correction, a brightness-and-contrast evaluation was conducted on only a part of the image rather than the whole, again to reduce computation time. Once the brightness and contrast values were calculated, they were used to decide the gamma value and correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were registered into a single image to minimize total processing time. The HOG feature-extraction method was then used to detect the vehicle in the corrected image. As a result, vehicle detection with the proposed image correction took 0.064 seconds per frame and showed a 7.5% improvement in detection rate over the existing vehicle-detection method.
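The two image-correction building blocks the abstract names can be sketched compactly. The dark channel below follows He et al.'s definition (per-pixel channel minimum, then a local minimum filter); the toy image, patch size, and function names are illustrative, not the paper's implementation, which also estimates atmospheric light and a transmission map.

```python
def dark_channel(img, patch=3):
    """Dark channel of an RGB image (values in [0,1]): per-pixel minimum
    over the colour channels, then a local minimum over a patch."""
    h, w = len(img), len(img[0])
    per_px = [[min(img[r][c]) for c in range(w)] for r in range(h)]
    half = patch // 2
    return [[min(per_px[rr][cc]
                 for rr in range(max(0, r - half), min(h, r + half + 1))
                 for cc in range(max(0, c - half), min(w, c + half + 1)))
             for c in range(w)] for r in range(h)]

def gamma_correct(v, gamma):
    """Gamma correction of one intensity in [0,1]; gamma < 1 brightens."""
    return v ** gamma

# Toy 3x3 image; the dark pixel at (1,0) dominates nearby patch minima.
img = [
    [(0.9, 0.8, 0.7), (0.6, 0.5, 0.4), (0.9, 0.9, 0.9)],
    [(0.3, 0.2, 0.1), (0.8, 0.8, 0.8), (0.7, 0.6, 0.5)],
    [(0.9, 0.9, 0.9), (0.4, 0.4, 0.4), (0.2, 0.3, 0.4)],
]
dc = dark_channel(img)
```

In a haze-free scene the dark channel is near zero, so large dark-channel values signal fog; gamma values below 1 brighten underexposed frames, matching the abstract's brightness-correction step.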

Perceptions of Information Technology Competencies among Gifted and Non-gifted High School Students (영재와 평재 고등학생의 IT 역량에 대한 인식)

  • Shin, Min;Ahn, Doehee
    • Journal of Gifted/Talented Education
    • /
    • v.25 no.2
    • /
    • pp.339-358
    • /
    • 2015
  • This study examined perceptions of information technology (IT) competencies among gifted and non-gifted students (i.e., information science high school students and technical high school students). Of the 370 students surveyed from three high schools (a gifted academy, an information science high school, and a technical high school) in three metropolitan cities in Korea, 351 completed and returned the questionnaires, for a response rate of 94.86%. The students recognized IT professional competence as most important when recruiting IT employees, and considered practice-oriented education the most needed to improve their IT skills. In addition, for gifted academy students and information science high school students the most important sub-factor of IT core competencies was basic software skills, whereas technical high school students responded that main network and security capabilities were the most needed. Finally, the most appropriate training courses for enhancing IT competencies were perceived differently across the groups: gifted academy students cited an 'algorithm' course, information science high school students cited 'data structures' and 'computer architecture', and technical high school students cited a 'programming language' course. Results are discussed in relation to IT corporate and school settings.

Local Shape Analysis of the Hippocampus using Hierarchical Level-of-Detail Representations (계층적 Level-of-Detail 표현을 이용한 해마의 국부적인 형상 분석)

  • Kim Jeong-Sik;Choi Soo-Mi;Choi Yoo-Ju;Kim Myoung-Hee
    • The KIPS Transactions:PartA
    • /
    • v.11A no.7 s.91
    • /
    • pp.555-562
    • /
    • 2004
  • Both global volume reduction and local shape changes of the hippocampus within the brain indicate abnormal neurological states. Hippocampal shape analysis consists of two main steps: first, construct a hippocampal shape representation model; second, compute a shape similarity from this representation. This paper proposes a novel method for analyzing hippocampal shape using an integrated octree-based representation containing meshes, voxels, and skeletons. First of all, we create multi-level meshes by applying the Marching Cubes algorithm to the hippocampal region segmented from MR images. This model is converted to an intermediate binary voxel representation, and the 3D skeleton is extracted from these voxels using a slice-based skeletonization method. Then, to obtain a multiresolution shape representation, the meshes, voxels, and skeletons are stored hierarchically in the nodes of the octree, and sample meshes are extracted using a ray-tracing-based mesh-sampling technique. Finally, as similarity measures between shapes, we compute the $L_2$ norm and the Hausdorff distance for each sampled mesh pair by shooting rays from the extracted skeleton. Because a mouse-picking interface is used to analyze a local shape interactively, the method provides interactive, multiresolution-based analysis of local shape changes. Our experiments show that the approach is robust to rotation and scale, is especially effective at discriminating changes between local shapes of the hippocampus, and moreover increases the speed of analysis without degrading accuracy thanks to the hierarchical level-of-detail approach.
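The Hausdorff distance used above as a shape-similarity measure has a short direct definition. The sketch below computes it between two small 3-D point sets standing in for sampled mesh vertices; the point coordinates and names are invented, and the paper's ray-based sampling is not reproduced.

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two finite point sets:
    the larger of the two directed distances max_p min_q ||p - q||."""
    def directed(p, q):
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

# Two hypothetical sampled vertex sets (e.g. baseline vs follow-up mesh).
mesh_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
mesh_b = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.2), (1.0, 1.0, 0.9)]
d = hausdorff(mesh_a, mesh_b)
```

Unlike an averaged $L_2$ norm, the Hausdorff distance is driven by the single worst-matched point, which is why it is sensitive to localized shape change.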

A Review of Multivariate Analysis Studies Applied for Plant Morphology in Korea (국내 식물 형태 연구에 사용된 다변량분석 논문에 대한 재고)

  • Chang, Kae Sun;Oh, Hana;Kim, Hui;Lee, Heung Soo;Chang, Chin-Sung
    • Journal of Korean Society of Forest Science
    • /
    • v.98 no.3
    • /
    • pp.215-224
    • /
    • 2009
  • A review is given of the role of traditional morphometrics in plant morphological studies, based on 54 studies published from 1997 to 2008 in three major journals and others in Korea, including the Journal of Korean Forestry Society, Korean Journal of Plant Taxonomy, Korean Journal of Breeding, Korean Journal of Apiculture, Journal of Life Science, and Korean Journal of Plant Resources. The two most commonly used data-analysis techniques, cluster analysis (CA) and principal components analysis (PCA), are discussed together with other statistical tests. The common problem with PCA is the underlying assumptions of the method, such as random sampling and a multivariate normal distribution of the data: the procedure is intended mainly for continuous data and is not efficient for data that are not well summarized by variances or covariances. Likewise, CA is most appropriate for categorical rather than continuous data. CA also produces clusters whether or not natural groupings exist, and the results depend on both the similarity measure chosen and the clustering algorithm used. An additional problem with both PCA and CA arises when qualitative and quantitative data come with a limited number of variables and/or too few samples. Some of these problems may be avoided if a sufficient number of variables (at least 20) and sufficient samples (at least 40-50) are used in morphometric analyses, but we do not regard the methods as almighty tools for data analysts. Instead, we believe that reasonable application, combined with attention to the objectives and limitations of each procedure, would be a step forward.
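To make the review's point concrete, the sketch below runs PCA on two continuous morphological variables by eigen-decomposing the 2x2 sample covariance matrix, the quantity PCA actually summarizes. The measurements are invented toy data, not from any of the reviewed studies.

```python
import math

# Toy continuous measurements (e.g. leaf length vs width, cm); illustrative.
x = [4.0, 5.1, 6.2, 7.0, 8.1]
y = [2.1, 2.6, 3.0, 3.6, 4.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
# Sample covariance matrix [[sxx, sxy], [sxy, syy]].
sxx = sum((a - mx) ** 2 for a in x) / (n - 1)
syy = sum((b - my) ** 2 for b in y) / (n - 1)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

# Eigenvalues of a symmetric 2x2 matrix via the quadratic formula.
tr, det = sxx + syy, sxx * syy - sxy ** 2
disc = math.sqrt(tr ** 2 / 4 - det)
l1, l2 = tr / 2 + disc, tr / 2 - disc
explained = l1 / (l1 + l2)  # share of variance on the first principal axis
print(f"PC1 explains {explained:.1%} of the variance")
```

Because everything here flows through variances and covariances, data that these statistics do not summarize well (categorical traits, small unbalanced samples) undermine the result, which is exactly the review's caution.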

Study of Crustal Structure in North Korea Using 3D Velocity Tomography (3차원 속도 토모그래피를 이용한 북한지역의 지각구조 연구)

  • So Gu Kim;Jong Woo Shin
    • The Journal of Engineering Geology
    • /
    • v.13 no.3
    • /
    • pp.293-308
    • /
    • 2003
  • New results on the crustal structure down to a depth of 60 km beneath North Korea were obtained using the seismic tomography method. About 1013 P- and S-wave travel times from local earthquakes recorded by Korean stations and stations in the vicinity were used. All earthquakes were relocated on the basis of an algorithm proposed in this study. Parameterization of the velocity structure is realized with a set of nodes distributed in the study volume according to ray density; 120 nodes located at four depth levels were used to obtain the resulting P- and S-wave velocity structures. The P- and S-wave velocity anomalies of the Rangnim Massif at a depth of 8 km are found to be high and low, respectively, whereas those of the Pyongnam Basin are low down to 24 km. This indicates that the Rangnim Massif contains Archean to early Lower Proterozoic massif foldings with many faults and fractures, which may be saturated with groundwater and/or hot springs, while the Pyongyang-Sariwon area in the Pyongnam Basin is an intraplatform depression filled with sediments of Upper Proterozoic, Silurian, Upper Paleozoic, and Lower Mesozoic origin. In particular, high P- and S-wave velocity anomalies are observed at depths of 8, 16, and 24 km beneath Mt. Backdu, indicating that they may be the shallow conduits of solidified magma bodies, while the low P- and S-wave velocity anomalies at a depth of 38 km must be related to a magma chamber of low-velocity, partially melted bodies. We also found the Moho discontinuity beneath the basin area including Sariwon to be about 55 km deep, whereas that beneath Mt. Backdu is about 38 km. The high ratio of P-wave to S-wave velocity at the Moho suggests that there must be a partially melted body near the crust-mantle boundary. Consequently, Mt. Backdu may well be considered a dormant volcano holding an intermediate magma chamber near the Moho discontinuity. This study also brought the interesting and important finding that materials with very high P- and S-wave velocity anomalies exist at a depth of about 40 km near the Mt. Myohyang area at the edge of the Rangnim Massif shield.
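The Vp/Vs ratio highlighted above can be estimated from arrival times alone with a classical Wadati diagram: plotting tS - tP against tP gives a line of slope Vp/Vs - 1, independent of the unknown origin time. This is a standard seismological technique, sketched here on synthetic data; it is not the authors' relocation algorithm, and all numbers are invented.

```python
# Synthetic arrival times (s) for one event at several stations, generated
# with assumed Vp = 6.0 km/s, Vs = 3.46 km/s, and origin time t0 = 2.0 s.
VP, VS, T0 = 6.0, 3.46, 2.0
dists = [10.0, 25.0, 40.0, 60.0, 85.0]  # epicentral distances, km
tp = [T0 + d / VP for d in dists]
ts = [T0 + d / VS for d in dists]

# Wadati diagram: least-squares slope of (tS - tP) against tP.
x = tp
y = [b - a for a, b in zip(tp, ts)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
vp_vs = 1 + slope
print(f"estimated Vp/Vs = {vp_vs:.3f}")
```

A ratio well above the crustal norm of about 1.73, as the paper reports near the Moho under Mt. Backdu, is the kind of signal interpreted as partial melting.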

Development of Multimedia Annotation and Retrieval System using MPEG-7 based Semantic Metadata Model (MPEG-7 기반 의미적 메타데이터 모델을 이용한 멀티미디어 주석 및 검색 시스템의 개발)

  • An, Hyoung-Geun;Koh, Jae-Jin
    • The KIPS Transactions:PartD
    • /
    • v.14D no.6
    • /
    • pp.573-584
    • /
    • 2007
  • As multimedia information increases rapidly, various types of multimedia data retrieval are becoming issues of great importance. Efficient multimedia data processing requires semantics-based retrieval techniques that can extract the meaningful content of multimedia data. Existing retrieval methods are annotation-based, feature-based, or integrate annotation and features. These systems demand a great deal of effort and time from the annotator, and feature extraction requires complicated calculation; moreover, the created data support only static search that does not change, and user-friendly, semantic search techniques are not provided. This paper develops S-MARS (Semantic Metadata-based Multimedia Annotation and Retrieval System), which can represent and extract multimedia data efficiently using MPEG-7. The system provides a graphical user interface for annotating, searching, and browsing multimedia data. It is implemented on the basis of a semantic metadata model for representing multimedia information. The semantic metadata for multimedia data are organized with a multimedia description schema, using an XML schema that complies with the MPEG-7 standard. The proposed scheme can therefore be implemented easily on any multimedia platform supporting XML technology. It can be utilized for efficient semantic metadata sharing between systems, and it will contribute to improving retrieval correctness and user satisfaction with the embedding-based multimedia retrieval algorithm.
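The abstract's XML-based metadata idea can be sketched with the standard library: build a small MPEG-7-flavoured annotation tree, then answer a text query by walking it. The element names follow the MPEG-7 description-scheme style but this is a simplified illustration, not a schema-valid MPEG-7 document and not the S-MARS data model.

```python
import xml.etree.ElementTree as ET

# Minimal MPEG-7-style annotation for one video segment (illustrative only).
root = ET.Element('Mpeg7')
desc = ET.SubElement(root, 'Description')
content = ET.SubElement(desc, 'MultimediaContent')
segment = ET.SubElement(content, 'VideoSegment', {'id': 'seg1'})
annotation = ET.SubElement(segment, 'TextAnnotation')
ET.SubElement(annotation, 'FreeTextAnnotation').text = 'a car enters the tunnel'

xml_str = ET.tostring(root, encoding='unicode')

# A semantic query is then a simple search over the metadata tree.
hits = [seg.get('id') for seg in root.iter('VideoSegment')
        if 'car' in (seg.findtext('TextAnnotation/FreeTextAnnotation') or '')]
```

Because the metadata is plain XML, any platform with an XML parser can index, exchange, and query it, which is the portability argument the abstract makes.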

Traffic Forecasting Model Selection of Artificial Neural Network Using Akaike's Information Criterion (AIC(AKaike's Information Criterion)을 이용한 교통량 예측 모형)

  • Kang, Weon-Eui;Baik, Nam-Cheol;Yoon, Hye-Kyung
    • Journal of Korean Society of Transportation
    • /
    • v.22 no.7 s.78
    • /
    • pp.155-159
    • /
    • 2004
  • Recently there have been many attempts to apply artificial neural networks (ANNs), in various structures and with various training methods, to forecasting traffic volume. ANNs have a powerful pattern-recognition capability as flexible non-linear models. However, because of that non-linearity and their large number of parameters, ANNs are prone to overfitting. This research applies a variety of model-selection criteria to counter the overfitting problem, and in particular analyzes which selection criterion cancels overfitting and guarantees transferability over time. The results are as follows. First, the model selected in-sample does not guarantee the best out-of-sample performance; that is, the best in-sample model bears no relationship to out-of-sample capability, consistent with many existing studies. Second, regarding the stability of the model-selection criteria, AIC3, AICC, and BIC are usable, but AIC4 shows a large variation compared with the best model. In time-series analysis and forecasting, more quantitative data analysis and further time-series analysis are needed, because the uncertainty of a model can affect the correlation between in-sample and out-of-sample performance.
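The criteria compared above all trade goodness of fit against model size. For a least-squares fit they take closed forms, sketched below with a hypothetical comparison of a small and a large ANN (the RSS values, sample size, and parameter counts are invented).

```python
import math

def aic(rss, n, k):
    """Akaike's Information Criterion for a least-squares model:
    n*ln(RSS/n) + 2k, where k counts estimated parameters."""
    return n * math.log(rss / n) + 2 * k

def aicc(rss, n, k):
    """Small-sample corrected AIC: extra penalty growing with k/n."""
    return aic(rss, n, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(rss, n, k):
    """Bayesian Information Criterion: heavier penalty, k*ln(n)."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical: a small network (5 weights) vs a large one (40 weights)
# with only slightly lower in-sample RSS on n = 100 observations.
small = aic(40.0, 100, 5)
large = aic(38.0, 100, 40)
```

Here the small network wins (lower AIC) despite its higher in-sample RSS, which is precisely the overfitting guard the abstract discusses; AICC and BIC penalize the large model even more strongly.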

A Study on LRFD Reliability Based Design Criteria of RC Flexural Members (R.C. 휨부재(部材)의 L.R.F.D. 신뢰성(信賴性) 설계기준(設計基準)에 관한 연구(研究))

  • Cho, Hyo Nam
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.1 no.1
    • /
    • pp.21-32
    • /
    • 1981
  • Recent trends in design-standards development in some European countries and the U.S.A. have encouraged the use of probabilistic limit state design concepts. Reliability-based design criteria such as LSD, LRFD, and PBLSD, adopted in those countries, offer the potential to simplify the design process and place it on a consistent reliability basis across construction materials. Reliability-based design criteria for RC flexural members are proposed in this study. Lind-Hasofer's invariant second-moment reliability theory is used to derive an algorithmic reliability-analysis method and an iterative determination of load and resistance factors. In addition, Cornell's mean first-order second-moment method is employed as a practical tool for approximate reliability analysis and for deriving the design criteria. Uncertainty measures for flexural resistance and load effects are based on Ellingwood's approach to evaluating the uncertainties of loads and resistances. The relative safety levels implied by flexural members designed under the strength-design provisions of the current standard code were evaluated using the second-moment reliability-analysis method proposed in this study. Then, resistance and load factors corresponding to the target reliability index (${\beta}=4$), considered an appropriate level of reliability for our practice, were calculated using the proposed methods. These reliability-based factors were compared with those specified by the current ultimate-strength design provisions. It was found that the reliability levels of flexural members designed by the current code are not appropriate, and that the code-specified resistance and load factors differ considerably from the reliability-based factors proposed in this study.

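The mean first-order second-moment idea named in the abstract reduces, for an uncorrelated resistance R and load effect Q, to a one-line reliability index. The sketch below uses that textbook form with invented statistics; the paper's actual uncertainty measures and the Lind-Hasofer iteration are not reproduced.

```python
import math

def cornell_beta(mu_r, cov_r, mu_q, cov_q):
    """Mean first-order second-moment reliability index for the safety
    margin M = R - Q with uncorrelated R and Q:
    beta = (mu_R - mu_Q) / sqrt(sigma_R^2 + sigma_Q^2)."""
    sr, sq = mu_r * cov_r, mu_q * cov_q  # standard deviations from COVs
    return (mu_r - mu_q) / math.sqrt(sr ** 2 + sq ** 2)

# Illustrative numbers only (not from the paper): mean resistance 1.8x the
# mean load effect, with 12% and 20% coefficients of variation.
beta = cornell_beta(1.8, 0.12, 1.0, 0.20)
print(f"reliability index beta = {beta:.2f}")
```

Calibrating load and resistance factors then amounts to adjusting the design margins until beta reaches the target value (4 in this study); raising the mean resistance, for example, raises beta monotonically.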

A Study on Spatial Pattern of Impact Area of Intersection Using Digital Tachograph Data and Traffic Assignment Model (차량 운행기록정보와 통행배정 모형을 이용한 교차로 영향권의 공간적 패턴에 관한 연구)

  • PARK, Seungjun;HONG, Kiman;KIM, Taegyun;SEO, Hyeon;CHO, Joong Rae;HONG, Young Suk
    • Journal of Korean Society of Transportation
    • /
    • v.36 no.2
    • /
    • pp.155-168
    • /
    • 2018
  • In this study, prior to predicting short-term (e.g., 5- or 10-minute) directional traffic volumes at intersections on interrupted flow, we studied the directional patterns of traffic entering an intersection from its upstream links, and examined the possibility of traffic-volume prediction using a traffic assignment model. The analysis investigates the similarity of patterns by performing cluster analysis on the ratios of traffic volume by intersection direction, aggregated into 2-hour windows, using one week of taxi DTG (Digital Tachograph) data. To link the results with the traffic assignment model, the study compares the impact area within 5 or 10 minutes of the intersection center with the results of the taxi DTG analysis; for this purpose, we developed an algorithm to delineate the impact area of an intersection using the taxi DTG data and the traffic assignment model. The analysis grouped the taxis' intersection entry patterns into 12 clusters, with a Cubic Clustering Criterion (indicating clustering confidence) of 6.92. In the correlation analysis with the impact area of the traffic assignment model, the correlation coefficient for the 5-minute impact area was 0.86, a significant result. For the 10-minute impact area the coefficient dropped to 0.69, which was attributed to insufficient accuracy of the O/D (Origin/Destination) trips and network data. In the future, if the accuracy of the traffic network and of the time-of-day O/D volumes is improved, the traffic volumes calculated from the traffic assignment model are expected to be usable for controlling traffic signals at intersections.
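The feature the study clusters — the share of each exit direction per 2-hour window — can be built from raw DTG records in a few lines. The records, direction codes, and function name below are invented stand-ins; the paper's clustering step itself is not reproduced.

```python
from collections import Counter

# Toy taxi DTG records at one intersection: (hour_of_day, exit_direction),
# with 'L' left, 'T' through, 'R' right. Illustrative only.
records = [(8, 'T'), (8, 'T'), (8, 'L'), (9, 'R'), (9, 'T'),
           (14, 'L'), (14, 'L'), (15, 'T'), (15, 'L'), (15, 'R')]

def window_ratios(records, width=2):
    """Share of each exit direction within consecutive `width`-hour windows,
    mirroring the 2-hour directional ratios clustered in the study."""
    counts = {}
    for hour, d in records:
        counts.setdefault(hour // width * width, Counter())[d] += 1
    return {w: {d: c / sum(cnt.values()) for d, c in cnt.items()}
            for w, cnt in counts.items()}

ratios = window_ratios(records)
print(ratios)
```

Each window's ratio vector becomes one observation for the cluster analysis; intersections whose vectors fall in the same cluster share an entry pattern, as with the 12 groups reported above.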