• Title/Summary/Keyword: Dynamic weights


Intensity Based Stereo Matching Algorithm Including Boundary Information (경계선 영역 정보를 이용한 밝기값 기반 스테레오 정합)

  • Choi, Dong-Jun;Kim, Do-Hyun;Yang, Yeong-Yil
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.12 / pp.84-92 / 1998
  • In this paper, we propose novel cost functions for finding the disparity between the left and right images in the stereo matching problem. Cox et al. [10] solve the stereo matching problem with dynamic programming, using only the intensity of the pixels on the epipolar line as the cost function for finding corresponding pixels. We propose two new cost functions. First, the slope of the pixel is introduced as a constraint for determining the weights of intensity and direction (the historical information): pixels with a higher slope are matched mainly by intensity, and as the slope becomes lower the matching is performed mainly by direction. Second, the disparity information of the previous epipolar line is used to find the disparity of the current epipolar line. If a pixel $p_i$ on the left epipolar line and a pixel $p_j$ on the right epipolar line satisfy the following conditions, a higher matching probability is given to $p_i$ and $p_j$: i) $p_i$ and $p_j$ are pixels on edges in the left and right images, respectively; ii) the pixels $p_k$ and $p_l$ on the previous epipolar line are matched and lie on the same edges as $p_i$ and $p_j$, respectively. Compared with the original method [10], the proposed method finds better matching results for the test images.

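For the slope-dependent weighting idea in the entry above, here is a minimal Python sketch of how a matching cost might blend an intensity term and a direction (history) term; the weighting form, the constant k, and all names are illustrative assumptions, not the paper's formulation.

```python
def matching_cost(left_intensity, right_intensity, direction_cost, slope, k=1.0):
    """Blend an intensity cost and a direction (history) cost with a
    slope-dependent dynamic weight. The weighting form and the constant
    k are illustrative assumptions, not the paper's exact formulation."""
    w = slope / (slope + k)                      # grows toward 1 for steep pixels
    intensity_cost = abs(left_intensity - right_intensity)
    return w * intensity_cost + (1.0 - w) * direction_cost

# A steep (edge-like) pixel is matched mainly by intensity,
# a flat pixel mainly by the direction/history term.
print(matching_cost(120, 118, direction_cost=5.0, slope=8.0))
print(matching_cost(120, 118, direction_cost=5.0, slope=0.2))
```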

A Strategy for Refining and Prioritizing Requirements Based on the PIECES Framework (PIECES 프레임워크 중심의 요구사항 정제와 우선순위 결정 전략)

  • Jeon, Hye-Young;Byun, Jung-Won;Rhew, Sung-Yul
    • Journal of the Korea Society of Computer and Information / v.17 no.10 / pp.117-127 / 2012
  • Identifying user requirements efficiently and reflecting them in an existing system is very important in rapidly changing web and mobile environments. This study proposes strategies for refining requirements and for prioritizing the refined requirements when changing web and mobile applications, based on user requirements (e.g., mobile application comments, Q&A, and information reported as discomfort factors). To refine the user requirements, the requirements are grouped using the software-business advancement standards of the standardization forum and existing configuration-based programs. They are then mapped onto the PIECES framework to identify whether the refined requirements are reflected in the system validly and purely. To determine the priority of the refined requirements, relative weights are first given to the software structure, the requirements, and the PIECES categories; second, the points assigned to each requirement are aggregated to obtain relative partial and overall scores for a set of software-structural requirements. To verify the feasibility and prove the effectiveness of the proposed technique, a survey on the changing requirements of a mobile application serviced at S University was conducted with 15 work-related stakeholders.
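
As a rough illustration of the relative-weight prioritization step described above, the sketch below scores each refined requirement by summing assumed weights of the PIECES categories it maps to; the category weights, requirement names, and mappings are all invented for illustration.

```python
# Illustrative sketch of weighted prioritization over PIECES categories.
# Category weights and requirement mappings are made-up assumptions.
pieces_weights = {
    "Performance": 0.25, "Information": 0.20, "Economics": 0.15,
    "Control": 0.15, "Efficiency": 0.15, "Service": 0.10,
}

requirements = {
    "REQ-01 faster login": ["Performance", "Service"],
    "REQ-02 export history": ["Information"],
    "REQ-03 reduce data usage": ["Economics", "Efficiency"],
}

def priority(categories):
    # A requirement's score is the sum of the weights of the categories it touches.
    return sum(pieces_weights[c] for c in categories)

for name, cats in sorted(requirements.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{name}: {priority(cats):.2f}")
```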

Estimation of Shear Wave Velocity of Earth Dam Materials Using Artificial Blasting Vibration Test (인공발파진동실험을 이용한 흙댐 축조재료의 전단파속도 산정)

  • Ha, Ik-Soo;Kim, Nam-Ryong;Lim, Jeong-Yeul
    • KSCE Journal of Civil and Environmental Engineering Research / v.33 no.2 / pp.619-629 / 2013
  • The objective of this study is to estimate the shear wave velocity of earth dam materials using artificially generated blasting vibration and to verify the applicability of the approach. Artificial blasting and vibration monitoring were carried out at a site adjacent to Seongdeok dam, the first blasting test for an existing dam in Korea. Vibrations were induced by four different types of blasting with various blasting borehole depths and explosive charge weights. During the tests, acceleration time histories were recorded at the bedrock adjacent to the explosion and at the crest of the dam. From frequency analyses of the acceleration histories measured at the crest, the fundamental frequency of the target dam could be evaluated. Numerical analyses varying the shear moduli of the earth fill zone were carried out using the acceleration histories measured at the bedrock as input ground motions. From the comparison between the fundamental frequencies calculated by the numerical analyses and those obtained from the measured records, the shear wave velocities with depth, which are closely related to the shear moduli, could be determined. It is found that the effect of the different blasting types on the shear wave velocity estimate for the target dam materials is negligible, so the shear wave velocity can be evaluated consistently. Furthermore, comparison of the estimated shear wave velocity with previous researchers' empirical relationships verifies the applicability of the suggested method. Therefore, in cases where earthquake records are not available, the shear wave velocity of earth dam materials can be reasonably evaluated if a blasting vibration test is allowed at a site adjacent to the dam.
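
One step described above, evaluating the dam's fundamental frequency from crest acceleration records, can be sketched as a simple FFT peak pick; this is an assumed generic implementation, not the authors' code.

```python
import numpy as np

def fundamental_frequency(acc, dt):
    """Return the dominant frequency (Hz) of an acceleration time history.

    acc : 1-D array of crest accelerations, dt : sampling interval in seconds.
    A plain FFT peak pick; windowing and smoothing are omitted for brevity.
    """
    spectrum = np.abs(np.fft.rfft(acc - np.mean(acc)))
    freqs = np.fft.rfftfreq(len(acc), d=dt)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero-frequency bin

# Synthetic example: a 3 Hz response sampled at 200 Hz should be recovered.
t = np.arange(0, 10, 0.005)
acc = np.sin(2 * np.pi * 3.0 * t) + 0.1 * np.random.randn(t.size)
print(fundamental_frequency(acc, dt=0.005))
```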

Development of Portable Multi-function Sensor (Mini CPT Cone + VWC Sensor) to Improve the Efficiency of Slope Inspection (비탈면 점검 효율화를 위한 휴대형 복합센서 개발)

  • Kim, Jong-Woo;Jho, Youn-Beom
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.26 no.1 / pp.49-57 / 2022
  • To analyze the stability of a slope efficiently, the shear strength of the soil must be measured. The Standard Penetration Test (SPT) is not appropriate for slope inspection because of its cost and weight. One of the ways to measure the N-value effectively is the Dynamic Cone Penetration Test (DCPT). This study was performed to develop a miniaturized multi-function sensor that can easily estimate CPT values and volumetric water content. In the field tests, the N-value from the multi-function sensor DCPT showed an error of -2.5% to +3.9% compared with the SPT N-value (reference value). The developed multi-function sensor system was also tested for the correlation between the CPT test and the portable tester in indoor tests; the results showed R² values of 0.85 in soil, 0.83 in weathered soil, and 0.98 in mixed soil. The field test results demonstrate the excellent field applicability of the proposed sensor system. After further research, the portable multi-function sensor is expected to be useful for general slope inspection.
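
The two reported comparisons, the signed error of the sensor N-value against the SPT reference and the R² between the CPT test and the portable tester, can be computed as sketched below; the sample readings are hypothetical.

```python
import numpy as np

def relative_error_pct(measured, reference):
    # Signed percentage error of the sensor reading against the SPT reference.
    return 100.0 * (measured - reference) / reference

def r_squared(x, y):
    # Coefficient of determination of a least-squares line fit of y on x.
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical readings, only to show the computation.
spt_n = np.array([10.0, 14.0, 22.0])
sensor_n = np.array([9.8, 14.4, 22.6])
print(relative_error_pct(sensor_n, spt_n))      # per-test % error

cpt = np.array([1.2, 2.1, 3.0, 4.2])
portable = np.array([1.1, 2.3, 2.9, 4.4])
print(r_squared(cpt, portable))                 # correlation of the two testers
```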

Optimal Mesh Size in Three-Dimensional Arbitrary Lagrangian-Eulerian Method of Free-air Explosions (3차원 Arbitrary Lagrangian-Eulerian 기법을 사용한 자유 대기 중 폭발 해석의 최적 격자망 크기 산정)

  • Yena Lee;Tae Hee Lee;Dawon Park;Youngjun Choi;Jung-Wuk Hong
    • Journal of the Computational Structural Engineering Institute of Korea / v.36 no.6 / pp.355-364 / 2023
  • The arbitrary Lagrangian-Eulerian (ALE) method has been extensively researched owing to its capability to accurately predict the propagation of blast shock waves. Although the ALE method can produce unreliable dynamic analysis results depending on the finite element mesh size, few studies have explored the relationship between the mesh size of the air domain and the accuracy of the numerical analysis. In this study, we propose a procedure to calculate the optimal mesh size based on the mean squared error between the maximum blast pressure values obtained from numerical simulations and experiments. Furthermore, we analyze the relationship between the weight of the explosive material (TNT) and the optimal mesh size of the air domain. The findings of this study can contribute to estimating the optimal mesh size in blast simulations with various explosive charge weights and promote the development of advanced blast numerical analysis models.
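
The selection criterion described above can be sketched as follows: for each candidate air-domain mesh size, compute the mean squared error between simulated and measured peak blast pressures and keep the size with the smallest error; the pressures and mesh sizes below are made-up numbers.

```python
import numpy as np

def mean_squared_error(simulated, measured):
    simulated, measured = np.asarray(simulated), np.asarray(measured)
    return float(np.mean((simulated - measured) ** 2))

# Hypothetical peak overpressures (kPa) at several gauges, per candidate mesh size (mm).
measured_peaks = [310.0, 180.0, 95.0]
simulated_peaks_by_mesh = {
    20: [355.0, 210.0, 120.0],
    10: [325.0, 190.0, 101.0],
    5:  [312.0, 183.0, 97.0],
}

errors = {h: mean_squared_error(p, measured_peaks) for h, p in simulated_peaks_by_mesh.items()}
optimal_mesh = min(errors, key=errors.get)
print(errors, "-> optimal mesh size:", optimal_mesh, "mm")
```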

A New Bias Scheduling Method for Improving Both Classification Performance and Precision on the Classification and Regression Problems (분류 및 회귀문제에서의 분류 성능과 정확도를 동시에 향상시키기 위한 새로운 바이어스 스케줄링 방법)

  • Kim Eun-Mi;Park Seong-Mi;Kim Kwang-Hee;Lee Bae-Ho
    • Journal of KIISE: Software and Applications / v.32 no.11 / pp.1021-1028 / 2005
  • The general solution of classification and regression problems can be found by matching and modifying matrices with information from the real world, and these matrices are then learned in neural networks. This paper treats the primary space as the real world and the dual space as the space onto which the primary space is mapped through a kernel. In practice there are two kinds of problems: complete systems, for which an answer can be obtained using the inverse matrix, and ill-posed or singular systems, for which an answer cannot be obtained directly from the inverse of the given matrix. Moreover, problems are often of the latter kind; it is therefore necessary to find a regularization parameter that turns ill-posed or singular problems into complete systems. This paper compares the performance, on both classification and regression problems, of GCV and L-Curve, which are well-known methods for obtaining a regularization parameter, with kernel methods. Both GCV and L-Curve obtain regularization parameters very well, and their performances are similar, although they give slightly different results under different problem conditions. However, these methods are two-step solutions, because the regularization parameter must first be computed and the problem is then solved by another method. Compared with GCV and L-Curve, kernel methods are a one-step solution that learns the regularization parameter simultaneously within the learning process of the pattern weights. This paper also suggests a dynamic momentum, which is learned under a limited proportional condition between the learning epoch and the performance on the given problem, to increase the performance and precision of the regularization. Finally, experiments using the Iris data, which are regarded as standard data in classification, Gaussian data, which are typical of singular systems, and the Shaw data, a one-dimensional image restoration problem, show that the suggested solution obtains better or equivalent results compared with GCV and L-Curve.
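
As context for the two-step GCV approach mentioned above, here is a minimal generic sketch of selecting a Tikhonov regularization parameter by minimizing a GCV score on a toy ill-posed system; the score form and the toy data are assumptions, and the paper's one-step kernel method with dynamic momentum is not reproduced here.

```python
import numpy as np

def gcv_score(A, b, lam):
    """Generalized cross-validation score for Tikhonov regularization of Ax = b.

    Proportional to ||A x_lam - b||^2 / trace(I - H)^2 with H the influence
    matrix; a generic textbook form, not the paper's implementation.
    """
    n, m = A.shape
    inv = np.linalg.inv(A.T @ A + lam * np.eye(m))   # (A^T A + lam I)^{-1}
    H = A @ inv @ A.T                                # influence matrix
    x = inv @ A.T @ b                                # regularized solution
    residual = A @ x - b
    return (residual @ residual) / (n - np.trace(H)) ** 2

# Pick the regularization parameter with the smallest GCV score on a toy ill-posed system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ np.diag(np.logspace(0, -6, 20))   # nearly singular columns
b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(50)
lams = np.logspace(-8, 1, 30)
best = min(lams, key=lambda lam: gcv_score(A, b, lam))
print("GCV-selected lambda:", best)
```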

Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.101-124 / 2018
  • Recently, most technologies have been developed in various forms, through the advancement of a single technology or through interaction with other technologies. In particular, these technologies have the characteristic of convergence, arising from the interaction of two or more techniques. In addition, efforts to respond to technological change in advance, by forecasting promising convergence technologies that will emerge in the near future, are continuously increasing. Accordingly, many researchers are attempting various analyses for forecasting promising convergence technologies. A convergence technology has the characteristics of several technologies, owing to the way it is generated; forecasting promising convergence technologies is therefore much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been confirmed in attempts to forecast promising technologies using big data analysis and social network analysis. Studies of convergence technology through data analysis are actively conducted, with the themes of discovering new convergence technologies and analyzing their trends; as a result, information about new convergence technologies is provided more abundantly than in the past. However, existing methods for analyzing convergence technology have some limitations. First, most studies dealing with convergence technology analyze data through predefined technology classifications. Recently emerging technologies tend to have the characteristics of convergence and thus consist of technologies from various fields; in other words, a new convergence technology may not belong to the defined classification. The existing method therefore does not properly reflect the dynamic change of the convergence phenomenon. Second, in order to forecast promising convergence technologies, most existing analysis methods use general-purpose indicators, which do not fully exploit the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates; it can grow into an independent field or disappear rapidly, according to changes in the technologies it depends on. In existing analyses, the potential growth of a convergence technology is judged through traditional indicators designed for general purposes. However, these indicators do not reflect the principle of convergence, namely that new technologies emerge from two or more mature technologies and that grown technologies affect the creation of other technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. In studies of convergence technology, forecasting promising technologies has received relatively little attention due to the complexity of the field, so it is difficult to find a method for evaluating the accuracy of such models. In order to activate the field of forecasting promising convergence technologies, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study.
To overcome these limitations, we propose a new method for the analysis of convergence technologies. First, through topic modeling, we derive a new technology classification in terms of text content; it reflects the dynamic change of the actual technology market rather than an existing fixed classification standard. We then identify the influence relationships between technologies through the topic correspondence weights of each document and structure them into a network. Next, we devise a centrality indicator (PGC, potential growth centrality) to forecast the future growth of a technology by utilizing the centrality information of each technology; it reflects the convergence characteristics of each technology according to technology maturity and the interdependence between technologies. Along with this, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. In this paper, we conduct experiments with 13,477 patent documents dealing with technical content to evaluate the performance and practical applicability of the proposed method. As a result, we confirm that the forecast model based on the proposed centrality indicator achieves a forecast accuracy up to about 2.88 times higher than that of forecast models based on currently used network indicators.
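
As a rough illustration only, the sketch below ranks technology topics by combining a weighted in-degree (influence from mature technologies) with the topic's remaining maturity headroom; the edge weights, maturity scores, and the combining formula are invented stand-ins, not the paper's PGC definition.

```python
# Illustrative only: edge weights (topic-to-topic dependence) and maturity scores are invented,
# and the combination below is a stand-in for a growth-centrality indicator, not the actual PGC formula.
edges = {  # (source topic, dependent topic): influence weight
    ("sensor", "iot-platform"): 0.6,
    ("machine-learning", "iot-platform"): 0.8,
    ("iot-platform", "smart-factory"): 0.7,
    ("machine-learning", "smart-factory"): 0.5,
}
maturity = {"sensor": 0.9, "machine-learning": 0.8, "iot-platform": 0.5, "smart-factory": 0.3}

def potential_growth(topic):
    # Weighted in-degree (how strongly mature technologies feed this topic),
    # scaled by how much headroom the topic itself still has (1 - maturity).
    inflow = sum(w * maturity[src] for (src, dst), w in edges.items() if dst == topic)
    return inflow * (1.0 - maturity[topic])

for t in sorted(maturity, key=potential_growth, reverse=True):
    print(f"{t}: {potential_growth(t):.3f}")
```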

A Real-Time Stock Market Prediction Using Knowledge Accumulation (지식 누적을 이용한 실시간 주식시장 예측)

  • Kim, Jin-Hwa;Hong, Kwang-Hun;Min, Jin-Young
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.109-130 / 2011
  • One of the major problems in the area of data mining is the size of the data, as most data sets have huge volumes these days. Streams of data are normally accumulated into data storage or databases. Transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some data sets are just buried unused inside huge data storage because of their size; others are lost as soon as they are created because, for many reasons, they are not saved. How to use such large data and how to use data on a stream efficiently are challenging questions in the study of data mining. Stream data is a data set that is accumulated into data storage from a data source continuously, and in many cases its size becomes increasingly large over time. Mining information from this massive data takes too many resources, such as storage, money, and time. These unique characteristics of stream data make it difficult and expensive to store all the stream data accumulated over time. On the other hand, if one uses only recent or partial data to mine information or patterns, valuable and useful information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of a rule set over time. A rule set is mined from each data set in the stream, and this rule set is accumulated into a master rule set storage, which also serves as a model for real-time decision making. One of the main advantages of this method is that it takes much smaller storage space compared to the traditional method, which saves the whole data set. Another advantage is that the accumulated rule set is used as a prediction model: prompt response to user requests is possible at any time, since the rule set is always ready to be used for decisions. This makes real-time decision making possible, which is the greatest advantage of this method. Based on theories of ensemble approaches, a combination of many different models can produce a prediction model with better performance; the consolidated rule set actually covers the whole data set, while the traditional sampling approach covers only part of it. This study uses stock market data, which is heterogeneous in that the characteristics of the data vary over time. The indexes in stock market data can fluctuate in different situations whenever there is an event influencing the stock market index, so the variance of the values in each variable is large compared to that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous one, as it is more difficult to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performance with that of the method suggested in this study. The first approach induces a rule set from the recent data set to predict a new data set. The second induces a rule set from all the data accumulated from the beginning, every time a new data set has to be predicted. We found that neither of these two performs as well as the accumulated rule set method. Furthermore, the study shows experiments with different prediction models: the first builds a prediction model only with the more important rule sets, and the second uses all the rule sets by assigning weights to the rules based on their performance.
The second approach shows better performance than the first one. The experiments also show that the method suggested in this study can be an efficient approach for mining information and patterns from stream data. A limitation of this work is that its application is bounded to stock market data; more dynamic real-time stream data sets are desirable for applying this method. Another remaining problem is that, as the number of rules increases over time, special rules such as redundant or conflicting rules have to be managed efficiently.
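
The weighted-rule variant described above (using all rules, each vote scaled by the rule's past performance) can be sketched as below; the rules, weights, and record fields are invented.

```python
# Hypothetical accumulated rule set: (condition, predicted label, performance weight).
master_rules = [
    (lambda r: r["volume"] > 1.5e6 and r["momentum"] > 0, "up",   0.72),
    (lambda r: r["momentum"] < 0,                          "down", 0.64),
    (lambda r: r["volatility"] > 0.04,                     "down", 0.55),
]

def predict(record):
    # Each matching rule votes for its label with its performance weight;
    # the label with the largest total weight wins.
    votes = {}
    for condition, label, weight in master_rules:
        if condition(record):
            votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get) if votes else "unknown"

print(predict({"volume": 2.1e6, "momentum": 0.3, "volatility": 0.05}))
```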

Enhancement of Immune Activities of Peptides from Asterias amurensis Using a Nano-encapsulation Process (나노 입자 불가사리 펩타이드의 면역 활성 증진)

  • Jeong, Hyang-Suk;Oh, Sung-Ho;Kim, Seoung-Seop;Jeong, Myoung-Hoon;Choi, Woon-Yong;Seo, Yong-Chang;Choi, Geun-Pyo;Kim, Jin-Chul;Lee, Hyeon-Yong
    • Korean Journal of Food Science and Technology / v.42 no.4 / pp.424-430 / 2010
  • The immuno-modulatory activities of peptides from Asterias amurensis were investigated using a nano-encapsulation process. Peptides with molecular weights in the range of 5-7 kDa were separated using Sephadex G-75 gel filtration. Dynamic light scattering showed that 85% of the nano-particles were in the 300 nm range. The cytotoxicity of the A. amurensis nano-particles against CCD-986sk human dermal fibroblast cells was 11.64% after adding 1.0 mg/mL of the samples, which was lower than that of the control (13.28%, collagen). The secretion of NO⁻ from macrophages was estimated as 40 μM after adding 1.0 mg/mL of the gelatin nano-particles, which was higher than the others. Prostaglandin E₂ production from UV-induced human skin cells decreased greatly, to 860 pg/mL, after adding 1.0 mg/mL of the samples. Confocal microscopy revealed that the nano-particles effectively penetrated the cells within 1 hour. From these results, we consider that nano-encapsulation of the peptides from A. amurensis can improve their biological functions.